Li, Xinhang and Yang, Yiying and Yuan, Zheng and Wang, Zhe and Wang, Qinwen and Xu, Chen and Li, Lei and He, Jianhua and Zhang, Lin (2024) Progression Cognition Reinforcement Learning with Prioritized Experience for Multi-Vehicle Pursuit. IEEE Transactions on Intelligent Transportation Systems, 25 (8). pp. 10035-10048. DOI https://doi.org/10.1109/TITS.2024.3354196
Abstract
Multi-vehicle pursuit (MVP), such as autonomous police vehicles pursuing suspects, is important but very challenging due to its mission- and safety-critical nature. While multi-agent reinforcement learning (MARL) algorithms have been proposed for MVP on structured grid-pattern roads, the existing algorithms use random training samples in centralized learning, which leads to homogeneous agents with low collaboration performance. For the more challenging problem of pursuing multiple evaders, these algorithms typically select a fixed target evader for the pursuers without considering the dynamic traffic situation, which significantly reduces the pursuit success rate. To address these problems, this paper proposes Progression Cognition Reinforcement Learning with Prioritized Experience for MVP (PEPCRL-MVP) in urban multi-intersection dynamic traffic scenes. PEPCRL-MVP uses a prioritization network to assess the transitions in the global experience replay buffer according to each MARL agent's parameters. With the personalized, prioritized experience set selected via the prioritization network, diversity is introduced into the MARL learning process, which improves collaboration and task-related performance. Furthermore, PEPCRL-MVP employs an attention module to extract critical features from dynamic urban traffic environments. These features are used to develop a progression cognition method that adaptively groups pursuing vehicles, with each group efficiently targeting one evading vehicle. Extensive experiments conducted with a simulator over unstructured roads of an urban area show that PEPCRL-MVP is superior to other state-of-the-art methods. Specifically, PEPCRL-MVP improves pursuit efficiency by 3.95% over Twin Delayed Deep Deterministic policy gradient-Decentralized Multi-Agent Pursuit, and its success rate is 34.78% higher than that of Multi-Agent Deep Deterministic Policy Gradient. The code is open-sourced.
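The core idea of personalized, prioritized experience selection can be illustrated with a minimal sketch. This is not the paper's implementation: the actual prioritization network is a learned model conditioned on each agent's parameters, whereas here it is approximated by a hypothetical scoring function (`score_transition`) that ranks a shared replay buffer differently for each agent, so that agents with different parameters train on different experience sets.

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible

def score_transition(transition, agent_weights):
    """Toy stand-in for the prioritization network: score a transition's
    relevance to one agent as a dot product between the transition's
    features and that agent's parameter vector."""
    return sum(f * w for f, w in zip(transition["features"], agent_weights))

def select_prioritized(buffer, agent_weights, k):
    """Select the k highest-scoring transitions for this agent, giving a
    personalized (non-random) experience set instead of uniform sampling."""
    return sorted(buffer,
                  key=lambda t: score_transition(t, agent_weights),
                  reverse=True)[:k]

# Shared global replay buffer with toy 2-D feature vectors.
buffer = [{"id": i, "features": (random.random(), random.random())}
          for i in range(100)]

# Two agents with different parameters receive different experience sets,
# which is the source of diversity in the centralized learning process.
agent_a = (1.0, 0.0)
agent_b = (0.0, 1.0)
batch_a = select_prioritized(buffer, agent_a, k=8)
batch_b = select_prioritized(buffer, agent_b, k=8)
```

In the actual method the scoring is learned and updated alongside the MARL agents; the point of the sketch is only that prioritization is computed per agent against a single global buffer.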
Item Type: | Article |
---|---|
Uncontrolled Keywords: | Autonomous driving; multi-agent reinforcement learning; multi-vehicle pursuit; prioritized experience |
Divisions: | Faculty of Science and Health; Faculty of Science and Health > Computer Science and Electronic Engineering, School of |
SWORD Depositor: | Unnamed user with email elements@essex.ac.uk |
Depositing User: | Unnamed user with email elements@essex.ac.uk |
Date Deposited: | 09 Feb 2024 16:57 |
Last Modified: | 30 Oct 2024 21:07 |
URI: | http://repository.essex.ac.uk/id/eprint/37558 |
Available files
Filename: Final__Progression_Cognition_Reinforcement_Learning_with_Prioritized_Experience_for_Multi_Vehicle_Pursuit.pdf
Licence: Creative Commons: Attribution 4.0