Using Deep Reinforcement Learning to Coordinate Multi-Modal Journey Planning with Limited Transportation Capacity
DOI: https://doi.org/10.52825/scp.v2i.89

Abstract
Multi-modal journey planning for large numbers of simultaneous travellers is a challenging problem, particularly in the presence of limited transportation capacity. There is a fundamental trade-off between satisfying each traveller's individual goals and preferences and optimizing the use of available capacity, and addressing it requires careful coordination of travellers' individual plans. This paper assesses the viability of Deep Reinforcement Learning (DRL) applied to simulated mobility as a means of learning such coordinated plans. Specifically, it addresses travel to large-scale events, such as concerts and sports matches, where all attendees share the goal of arriving on time. Multi-agent DRL is used to learn coordinated plans that maximize just-in-time arrival while respecting the limited capacity of the infrastructure. The generated plans account for the availability and requirements of different transportation modes (e.g., parking) as well as constraints such as attendees' vehicle ownership. The results are compared with those of a naive decision-making algorithm based on estimated travel time, and show that the learned plans make intuitive use of the available modes and improve average travel time and lateness, supporting the use of DRL in combination with a microscopic mobility simulator for journey planning.
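To make the "just-in-time arrival" objective concrete, the sketch below shows one plausible shape for a per-traveller reward signal in such a multi-agent setting. This is an illustrative assumption, not the paper's actual reward function: the function name, the asymmetric weights, and the idea of penalising lateness more heavily than earliness are all hypothetical choices, chosen only to show how "maximize just-in-time arrival" can be encoded as a scalar reward.

```python
def jit_arrival_reward(arrival_time: float,
                       event_start: float,
                       late_weight: float = 2.0,
                       early_weight: float = 1.0) -> float:
    """Hypothetical just-in-time arrival reward for one traveller (agent).

    The reward is maximal (zero) for arriving exactly at event_start and
    decreases linearly with the deviation. Lateness is penalised more
    heavily than earliness, reflecting that attendees' primary goal is
    to arrive on time; the specific weights are illustrative assumptions.
    Times are in minutes.
    """
    delta = arrival_time - event_start
    if delta > 0:
        # Late arrival: stronger penalty per minute.
        return -late_weight * delta
    # Early arrival: delta <= 0, mild penalty for waiting around.
    return early_weight * delta
```

Under this shaping, an agent arriving exactly on time receives 0, arriving two minutes late receives -4.0, and arriving two minutes early receives -2.0; capacity effects then enter indirectly, because congested modes delay arrivals and lower the reward.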
License
Copyright (c) 2022 Lara Codeca, Vinny Cahill
This work is licensed under a Creative Commons Attribution 3.0 Unported License.
Funding data
H2020 Marie Skłodowska-Curie Actions, grant number 713567