Evaluation of Transformer model and Self-Attention mechanism in the Yangtze River basin runoff prediction
Study region: The Yangtze River basin, China. Study focus: We applied a recently popular deep learning (DL) algorithm, the Transformer (TSF), together with two widely used DL methods, Long Short-Term Memory (LSTM) and the Gated Recurrent Unit (GRU), to evaluate the performance of TSF in predicting runoff in the Yangtze River basin. We also added the core building block of TSF, Self-Attention (SA), to the LSTM and GRU models (LSTM-SA and GRU-SA) to investigate whether the SA mechanism improves their prediction capability. Seven climatic observations (mean temperature, maximum temperature, precipitation, etc.) served as input data, and the whole dataset was divided into training, validation and test sets. In addition, we investigated the relationship between model performance and the input time step. New hydrological insights for the region: Our experiments show that the GRU performed best with the fewest parameters, while the TSF performed worst owing to insufficient training data. The GRU and LSTM models outperform TSF for runoff prediction when training samples are limited (e.g., when the number of model parameters is roughly ten times the number of samples). Furthermore, adding the SA mechanism to the LSTM and GRU structures improves prediction accuracy. Different input time steps (5, 10, 15, 20, 25 and 30 d) were used to train the DL models with different prediction lengths, showing that an appropriate input time step can significantly improve model performance.
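The abstract describes two concrete mechanisms: grafting the Transformer's Self-Attention block onto recurrent encoders (LSTM-SA, GRU-SA) and sweeping the input time step against the prediction length. The PyTorch sketch below illustrates one plausible reading of that setup; the class name `GRUSA`, all layer sizes, the single-attention-layer placement and the `make_windows` helper are assumptions for illustration, not the authors' published code.

```python
import torch
import torch.nn as nn

class GRUSA(nn.Module):
    """Minimal sketch of a GRU with a Self-Attention layer ("GRU-SA").

    Layer sizes and the placement of the attention block are assumptions;
    the record does not include the study's exact architecture.
    """

    def __init__(self, n_features=7, hidden=64, n_heads=4, horizon=1):
        super().__init__()
        self.gru = nn.GRU(n_features, hidden, batch_first=True)
        # Self-Attention over the GRU hidden states -- the Transformer
        # building block the study grafts onto the recurrent models.
        self.attn = nn.MultiheadAttention(hidden, n_heads, batch_first=True)
        self.head = nn.Linear(hidden, horizon)

    def forward(self, x):
        # x: (batch, input_time_steps, n_features), e.g. 10 daily steps of
        # the seven climatic observations (temperature, precipitation, ...).
        h, _ = self.gru(x)             # (batch, time, hidden)
        a, _ = self.attn(h, h, h)      # self-attention with Q = K = V = h
        return self.head(a[:, -1, :])  # runoff for the next `horizon` days


def make_windows(features, runoff, in_steps=10, horizon=1):
    """Slice aligned feature/runoff tensors into (window, target) pairs.

    `in_steps` plays the role of the study's input time step (5-30 d) and
    `horizon` its prediction length; a hypothetical helper, not from the paper.
    features: (T, n_features) tensor, runoff: (T,) tensor.
    """
    xs, ys = [], []
    for t in range(len(runoff) - in_steps - horizon + 1):
        xs.append(features[t:t + in_steps])
        ys.append(runoff[t + in_steps:t + in_steps + horizon])
    return torch.stack(xs), torch.stack(ys)
```

With daily data, `make_windows(features, runoff, in_steps=10)` reproduces a 10 d input setting; sweeping `in_steps` over 5-30 d for each prediction length mirrors the sensitivity experiment the abstract reports.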
Xikun Wei (author) / Guojie Wang (author) / Britta Schmalz (author) / Daniel Fiifi Tawia Hagan (author) / Zheng Duan (author)
2023
Article (Journal)
Electronic Resource
Unknown
Runoff prediction , Transformer , LSTM , GRU , Self-Attention , Physical geography , GB3-5030 , Geology , QE1-996.5
Metadata by DOAJ is licensed under CC BY-SA 1.0
Prediction of runoff in the upper Yangtze River based on CEEMDAN-NAR model
DOAJ | 2021 | Elsevier | 2024