Modeling Motorcyclist–Pedestrian Near Misses: A Multiagent Adversarial Inverse Reinforcement Learning Approach
Several studies have used surrogate safety measures obtained from microsimulation packages, such as VISSIM, for safety assessments. However, this approach has shortcomings: (1) microsimulation models are built on rules explicitly designed to avoid collisions; and (2) existing models do not realistically capture road users' behavior and collision avoidance strategies. Moreover, most of these models rely on a single-agent modeling assumption, in which the remaining agents are treated as part of a fixed, stationary environment. This assumption is unrealistic and can limit how well the models represent the real world. This study used a Markov Game (MG) to model the concurrent behavior and evasive actions of road users in near misses. Unlike conventional game-theoretic approaches, which model interactions at a single time step, the MG framework models sequences of road user decisions. Road users are modeled as rational agents that take actions to maximize their own utility functions. These utility functions are recovered from observed conflict trajectories using a multiagent adversarial inverse reinforcement learning (MAAIRL) framework. Trajectories from conflicts between motorcyclists and pedestrians in Shanghai, China, were used. Road user policies and collision avoidance strategies in near misses were obtained with multiagent actor–critic deep reinforcement learning, and a multiagent simulation platform was implemented to emulate pedestrian and motorcyclist trajectories. The results demonstrated that the multiagent model outperformed a single-agent Gaussian process inverse reinforcement learning model in predicting road user trajectories and their evasive actions, and that the MAAIRL model predicted the interactions' post-encroachment time (PET) with high accuracy. Moreover, unlike the single-agent framework, the recovered multiagent reward function captured the equilibrium nature of road user interactions. The multiagent model provides greater insight into road user behavior in conflict interactions and captures the nonstationarity of the environment.
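For readers unfamiliar with the terms: the Markov game underlying the model is, formally, a tuple (S, A_1..A_N, P, r_1..r_N, gamma) of states, per-agent action sets, transition dynamics, per-agent reward functions, and a discount factor; and post-encroachment time (PET) is the elapsed time between the first road user leaving a conflict point and the second arriving at it, with smaller values indicating more severe near misses. As a minimal sketch only (not the authors' code), the Python fragment below estimates PET from two timestamped trajectories by taking the time gap at the paths' closest spatial approach; the function name, array layout, and tolerance threshold are assumptions for illustration, and a full implementation would define an explicit conflict area.

import numpy as np

def pet_estimate(traj_a, traj_b, tol=0.5):
    """Rough post-encroachment time (PET) from two timestamped trajectories.

    traj_a, traj_b: arrays of shape (T, 3) with columns (t, x, y),
    in seconds and meters. `tol` is an assumed spatial tolerance (m):
    if the paths never come within `tol` of each other, there is no
    encroachment and the function returns None.
    """
    # Pairwise spatial distances between all sampled positions of A and B.
    d = np.linalg.norm(traj_a[:, None, 1:] - traj_b[None, :, 1:], axis=-1)
    i, j = np.unravel_index(np.argmin(d), d.shape)  # closest approach of the paths
    if d[i, j] > tol:
        return None  # the two paths never share a conflict point
    # PET: time gap between the two road users occupying the conflict point.
    return float(abs(traj_b[j, 0] - traj_a[i, 0]))

# Hypothetical near miss: a pedestrian walking along the x-axis and a
# motorcycle crossing that path at x = 2.5 m about 1.3 s earlier.
t = np.linspace(0.0, 5.0, 251)
pedestrian = np.c_[t, t, np.zeros_like(t)]            # columns (t, x, y)
motorcycle = np.c_[t, np.full_like(t, 2.5), t - 1.2]  # crosses y = 0 at t = 1.2
print(pet_estimate(pedestrian, motorcycle))           # ~1.3 s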
J. Comput. Civ. Eng.
Lanzaro, Gabriel (author) / Sayed, Tarek (author) / Alsaleh, Rushdi (author)
2022-11-01
Article (Journal)
Electronic Resource
English
Near-misses and failure (part 1)
British Library Online Contents | 2013
Evaluation of California Motorcyclist Safety Program
British Library Online Contents | 1998