Seminar on Trustworthy Multi-Agent Reinforcement Learning under Adversarial Perturbations
Title: Trustworthy Multi-Agent Reinforcement Learning under Adversarial Perturbations
Place: ECEC 202, NJIT, Newark, New Jersey, United States
Time: 9:00-10:00, Wed., Dec. 4, 2024
Zoom: https://njit-edu.zoom.us/j/9735966282?pwd=QlZtZFlOZ2FMZjlXemQwNmVUci9yUT09
Meeting ID: 973 596 6282
Passcode: 124095
Speaker: Ziyuan Zhou, Ph.D. Candidate
Abstract: Multi-Agent Reinforcement Learning (MARL) has been widely applied in domains such as autonomous driving and recommendation systems. However, models trained with MARL are highly sensitive to adversarial state perturbations, which undermines their trustworthiness in complex environments. This seminar addresses this challenge and aims to establish trustworthy MARL methods from two perspectives: enhancing model robustness against state perturbations and identifying weaknesses through adversarial attacks. To improve robustness, we propose a state-adversarial stochastic game and a robust learning framework that defends against adversarial state perturbations while maintaining strong performance under unperturbed conditions. From the adversarial-attack perspective, we introduce an attack framework that identifies critical agents for which even minor perturbations significantly disrupt the system; the framework outperforms existing attack methods and reveals model weaknesses.
Biography: Ziyuan Zhou received the B.S. degree from China University of Mining and Technology, Xuzhou, China, in 2020. She is currently working toward the Ph.D. degree in the Department of Computer Science, Tongji University, Shanghai, China. Since Jun. 2024, she has been a joint Ph.D. student with the Department of Electrical and Computer Engineering, New Jersey Institute of Technology. Her research focuses on reinforcement learning, multi-agent systems, and adversarial attacks.