Detecting Learning Stages within a Sensor-Based Mixed Reality Learning Environment Using Deep Learning
Mixed reality has been envisioned as an interactive and engaging pedagogical tool for providing experiential learning and potentially enhancing the acquisition of technical competencies in construction engineering education. However, to achieve seamless learning interactions and automated learning assessments, mixed reality environments must be intelligent, proactive, and adaptive to students’ learning needs. Given the potential of artificial intelligence for promoting interactive, assistive, and self-reliant learning environments, and the demonstrated effectiveness of deep learning in other domains, this study explores an approach to developing a smart mixed reality environment for technical skills acquisition in construction engineering education. The study builds on the usability assessment of a previously developed mixed reality environment for learning sensing technologies, such as laser scanners, used in the construction industry. Long short-term memory (LSTM) models and hybrid convolutional neural network (CNN)-LSTM models were trained on augmented eye-tracking data to predict students’ learning interaction difficulties, cognitive development, and experience levels. Labels were obtained from think-aloud protocols and demographic questionnaires administered during laser scanning activities within the mixed reality learning environment. The proposed models performed well, recognizing interaction difficulties, experience levels, and cognitive development with F1 scores of 95.95%, 98.52%, and 99.49%, respectively. The hybrid CNN-LSTM models achieved accuracy at least 20% higher than the LSTM models, but at a higher inference time. The efficacy of the models for detecting the required classes, and the potential of the adopted data augmentation techniques for eye-tracking data, are further reported.
However, as model performance increased with data size, so did the computational cost. This study sets a precedent for exploring applications of artificial intelligence in mixed reality learning environments in construction engineering education.
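The abstract does not give the models' internal details, but the LSTM recurrence that such sequence models apply to per-timestep eye-tracking features can be illustrated in plain NumPy. This is a minimal sketch only; all names, dimensions, and the random initialization below are hypothetical and not taken from the study:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class LSTMCell:
    """Minimal LSTM cell: one recurrent step over a gaze-feature vector."""

    def __init__(self, input_dim, hidden_dim, seed=0):
        rng = np.random.default_rng(seed)
        scale = 1.0 / np.sqrt(input_dim + hidden_dim)
        # One stacked weight matrix covering the input, forget,
        # candidate, and output gates (4 * hidden_dim rows).
        self.W = rng.uniform(-scale, scale,
                             (4 * hidden_dim, input_dim + hidden_dim))
        self.b = np.zeros(4 * hidden_dim)
        self.hidden_dim = hidden_dim

    def step(self, x, h, c):
        """Advance the hidden state h and cell state c by one timestep."""
        z = self.W @ np.concatenate([x, h]) + self.b
        H = self.hidden_dim
        i = sigmoid(z[0:H])        # input gate
        f = sigmoid(z[H:2 * H])    # forget gate
        g = np.tanh(z[2 * H:3 * H])  # candidate cell state
        o = sigmoid(z[3 * H:4 * H])  # output gate
        c_new = f * c + i * g
        h_new = o * np.tanh(c_new)
        return h_new, c_new

def run_sequence(cell, xs):
    """Fold a sequence of feature vectors into a final hidden state,
    which a classifier head could map to a learning-stage label."""
    h = np.zeros(cell.hidden_dim)
    c = np.zeros(cell.hidden_dim)
    for x in xs:
        h, c = cell.step(x, h, c)
    return h
```

In a hybrid CNN-LSTM arrangement, 1D convolutions would typically extract local patterns from the raw eye-tracking stream before feeding this recurrence, which is one plausible reason for the accuracy gain the abstract reports.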
Detecting Learning Stages within a Sensor-Based Mixed Reality Learning Environment Using Deep Learning
J. Comput. Civ. Eng.
Ogunseiju, Omobolanle (author) / Akinniyi, Abiola (author) / Gonsalves, Nihar (author) / Khalid, Mohammad (author) / Akanmu, Abiola (author)
2023-07-01
Article (Journal)
Electronic Resource
English
Similar items:
Using Mixed Reality in Online Learning Environments (ASCE, 2022)
Using Mixed Reality in Online Learning Environments (British Library Conference Proceedings, 2021)