DOI: 10.1145/3523150.3523152
Research article

Deep Reinforcement Learning with Noisy Exploration for Autonomous Driving

Published: 13 April 2022

ABSTRACT

Decision-making for autonomous driving is a major challenge in complex traffic environments, and deep reinforcement learning (DRL) can contribute to more intelligent driving strategies. In autonomous driving scenarios based on DRL algorithms, sufficient exploration of the traffic environment is vital for constructing the state space, training the driving decision model, and transferring to a new environment. In this paper, three different noise modes are presented to investigate the performance of noisy exploration and its generalization in self-driving tasks. Extensive experiments indicate that noisy exploration is not necessary in easy traffic environments, that correlated noisy exploration is an effective technique for generalizing to complex traffic environments, and that uncorrelated noisy exploration may be counter-productive in an inertial autonomous driving system.
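The abstract does not spell out the three noise modes, but in DDPG-style continuous-control agents (a common choice for driving simulators such as TORCS), correlated exploration is typically implemented with a temporally correlated Ornstein-Uhlenbeck process and uncorrelated exploration with independent Gaussian noise added to the policy's actions. The sketch below illustrates that contrast only; the class names, noise parameters, and the two-dimensional [steering, throttle] action space are illustrative assumptions, not the paper's implementation.

# Minimal sketch (assumed setup, not the authors' code): uncorrelated Gaussian
# action noise vs. temporally correlated Ornstein-Uhlenbeck (OU) action noise.
import numpy as np

class GaussianNoise:
    """Uncorrelated exploration noise: each sample is drawn independently."""
    def __init__(self, action_dim, sigma=0.2):
        self.action_dim = action_dim
        self.sigma = sigma

    def sample(self):
        return np.random.normal(0.0, self.sigma, size=self.action_dim)

class OrnsteinUhlenbeckNoise:
    """Correlated exploration noise: successive samples drift smoothly,
    which suits inertial control signals such as steering and throttle."""
    def __init__(self, action_dim, mu=0.0, theta=0.15, sigma=0.2, dt=1e-2):
        self.mu = mu * np.ones(action_dim)
        self.theta, self.sigma, self.dt = theta, sigma, dt
        self.state = np.copy(self.mu)

    def reset(self):
        self.state = np.copy(self.mu)

    def sample(self):
        # dx = theta * (mu - x) * dt + sigma * sqrt(dt) * N(0, 1)
        dx = (self.theta * (self.mu - self.state) * self.dt
              + self.sigma * np.sqrt(self.dt) * np.random.normal(size=self.mu.shape))
        self.state = self.state + dx
        return self.state

if __name__ == "__main__":
    # Hypothetical usage: perturb a deterministic policy output during training.
    ou = OrnsteinUhlenbeckNoise(action_dim=2)      # e.g. [steering, throttle]
    gauss = GaussianNoise(action_dim=2)
    policy_action = np.array([0.1, 0.5])           # placeholder policy output
    noisy_corr = np.clip(policy_action + ou.sample(), -1.0, 1.0)
    noisy_uncorr = np.clip(policy_action + gauss.sample(), -1.0, 1.0)
    print(noisy_corr, noisy_uncorr)

Because OU samples evolve gradually rather than jumping independently at each step, they yield coherent perturbations of steering and throttle, which is one plausible reason correlated exploration would behave better than uncorrelated noise in an inertial driving system, as the abstract reports.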


Published in

ICMLSC '22: Proceedings of the 2022 6th International Conference on Machine Learning and Soft Computing
January 2022, 185 pages
ISBN: 9781450387477
DOI: 10.1145/3523150
Copyright © 2022 ACM


Publisher

Association for Computing Machinery, New York, NY, United States



