LingoTrip: Spatiotemporal context prompt driven large language model for individual trip prediction
Large language models (LLMs) have shown superior performance in many language-related tasks. It is promising to formulate individual mobility prediction as a language modeling problem and use pretrained LLMs to predict an individual's next trip information (e.g., time and location) for personalized travel recommendations. Theoretically, this approach is expected to overcome common limitations of data-driven prediction models in zero-/few-shot learning, generalization, and interpretability. The paper proposes the LingoTrip model, which predicts an individual's next trip location by designing spatiotemporal context prompts for LLMs. The designed prompting strategies enable LLMs to capture implicit land use information (trip purposes), spatiotemporal mobility patterns (choice preferences), and geographical dependencies of the stations used (choice variability). LingoTrip is validated using Hong Kong Mass Transit Railway trip data by comparing it with state-of-the-art data-driven mobility prediction models under different training data sizes. Sensitivity analyses are performed for model hyperparameters and their tuning methods to support adaptation to other datasets. The results show that LingoTrip outperforms data-driven models in terms of prediction accuracy, transferability (between individuals), zero-/few-shot learning (limited training sample sizes), and interpretability of predictions. The LingoTrip model can facilitate the effective provision of personalized information in system crowding and disruption contexts (i.e., proactively providing information to targeted individuals).
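The abstract does not reproduce the paper's actual prompt wording, so the sketch below is only a rough illustration of how a spatiotemporal context prompt for next-station prediction could be assembled from an individual's recent trips. The Trip schema, field names, example stations, and the build_prompt function are assumptions for illustration, not the authors' implementation; the call to a pretrained LLM is left out.

from dataclasses import dataclass
from typing import List

@dataclass
class Trip:
    """One historical trip record for a single traveller (hypothetical schema)."""
    weekday: str      # e.g. "Monday"
    depart_time: str  # e.g. "08:15"
    origin: str       # boarding station
    destination: str  # alighting station

def build_prompt(history: List[Trip], current_day: str, current_time: str, origin: str) -> str:
    """Assemble a spatiotemporal context prompt: recent trips plus the current
    departure context, asking the LLM to infer the most likely destination."""
    lines = [
        "You are predicting the next metro trip destination of one traveller.",
        "Recent trips (weekday, departure time, origin -> destination):",
    ]
    lines += [f"- {t.weekday} {t.depart_time}: {t.origin} -> {t.destination}" for t in history]
    lines += [
        f"Current trip: {current_day} {current_time}, boarding at {origin}.",
        "Consider likely trip purposes, the traveller's regular spatiotemporal",
        "patterns, and which stations tend to be used together.",
        "Answer with the single most likely destination station name.",
    ]
    return "\n".join(lines)

if __name__ == "__main__":
    history = [
        Trip("Monday", "08:10", "Station A", "Station B"),
        Trip("Monday", "18:05", "Station B", "Station A"),
        Trip("Tuesday", "08:20", "Station A", "Station B"),
    ]
    prompt = build_prompt(history, "Wednesday", "08:15", "Station A")
    print(prompt)  # send this prompt to a pretrained LLM of your choice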
Zhenlin Qin (author) / Pengfei Zhang (author) / Leizhen Wang (author) / Zhenliang Ma (author)
2025
Article (Journal)
Electronic Resource
Metadata by DOAJ is licensed under CC BY-SA 1.0
A Language Prompt Model for Architectural Aesthetics / Springer Verlag | 2024