Enhanced consensus control architecture for autonomous platoon utilizing multi‐agent reinforcement learning
Abstract
Coordinating a platoon of connected and automated vehicles significantly improves traffic efficiency and safety. Current platoon control methods prioritize consistency and convergence performance but overlook the inherent interdependence between the platoon and the non-connected leading vehicle. This oversight constrains the platoon's adaptability in car-following scenarios, resulting in suboptimal performance. To address this issue, this paper proposes a platoon control framework based on multi-agent reinforcement learning that integrates cooperative optimization with platoon tracking behavior and internal coordination strategies. The strategy employs a bidirectional cooperative optimization mechanism to decouple the platoon's tracking behavior from its internal coordination control and then recouple them in a multi-objective optimized manner. It also leverages long short-term memory networks to capture and manage the platoon's dynamics over time. Simulation results demonstrate that the proposed method improves the platoon's cooperative effect and car-following adaptability: compared with the consensus control strategy, it reduces the average spacing error by 8.3% and the average platoon length by 19.1%.
Computer-Aided Civil and Infrastructure Engineering
Guo, Xin (author) / Peng, Jiankun (author) / Pi, Dawei (author) / Zhang, Hailong (author) / Wu, Changcheng (author) / Ma, Chunye (author)
19.03.2025
Journal article
Electronic resource
English
DOAJ | 2024