Robust Adversarial Deep Reinforcement Learning

ISBN13: 9798369317389|ISBN13 Softcover: 9798369345689|EISBN13: 9798369317396
DOI: 10.4018/979-8-3693-1738-9.ch005
MLA

Wang, Di. "Robust Adversarial Deep Reinforcement Learning." Deep Learning, Reinforcement Learning, and the Rise of Intelligent Systems, edited by M. Irfan Uddin and Wali Khan Mashwani, IGI Global, 2024, pp. 106-125. https://doi.org/10.4018/979-8-3693-1738-9.ch005



Abstract

Deep reinforcement learning (DRL) has shown remarkable results across a wide range of tasks. However, recent studies highlight the susceptibility of DRL policies to targeted adversarial perturbations. Furthermore, discrepancies between simulated environments and real-world deployments often make these policies difficult to transfer, particularly in safety-critical settings. Several solutions have been proposed to address these issues and enhance DRL's robustness. This chapter discusses the significance of adversarial attack and defense strategies in machine learning, emphasizing the challenges unique to adversarial DRL settings. It presents an overview of recent advances, DRL foundations, adversarial Markov decision process models, and comparisons among different attacks and defenses. The chapter then evaluates the effectiveness of several attacks and defense mechanisms on simulation data, focusing on policy success rates and average rewards. Potential limitations and directions for future research are also explored.
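To give a concrete flavor of the observation-space attacks the chapter surveys, the sketch below applies a one-step FGSM-style perturbation to the input of a policy. This is a minimal, hypothetical illustration, not the chapter's method: it assumes a linear softmax policy, and all names (`fgsm_observation_attack`, the weight matrix `W`, the budget `epsilon`) are illustrative. The attacker shifts the observation against the gradient of the log-probability of the policy's preferred action, subject to an L-infinity budget.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a 1-D logit vector."""
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def fgsm_observation_attack(obs, W, epsilon):
    """One-step FGSM-style perturbation of an observation (illustrative).

    Assumes a linear softmax policy pi(a|obs) = softmax(W @ obs)[a].
    Moves obs against the gradient of log pi(a*|obs) for the policy's
    preferred action a*, under an L-infinity budget of epsilon.
    """
    probs = softmax(W @ obs)
    a_star = int(np.argmax(probs))
    # For a linear softmax policy,
    # d/d_obs log pi(a*|obs) = W[a_star] - probs @ W.
    grad = W[a_star] - probs @ W
    # Step against the gradient to reduce the preferred action's probability.
    return obs - epsilon * np.sign(grad)

# Example: a 2-action linear policy over 3-dimensional observations.
rng = np.random.default_rng(0)
W = rng.normal(size=(2, 3))
obs = rng.normal(size=3)
adv_obs = fgsm_observation_attack(obs, W, epsilon=0.5)

p_clean = softmax(W @ obs)
p_adv = softmax(W @ adv_obs)
a_star = int(np.argmax(p_clean))
```

Because log pi(a|obs) is concave in obs for a linear softmax policy, this single signed-gradient step can only decrease (or leave unchanged) the preferred action's probability; for deep policies the same step is only a first-order heuristic, which is why iterated variants are common in practice.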
