Multimodal motivation modelling and computing towards motivationally intelligent E-learning systems

Authors: Wang, R., Chen, L. and Ayesh, A.

Journal: CCF Transactions on Pervasive Computing and Interaction

Volume: 5

Issue: 1

Pages: 64-81

eISSN: 2524-5228

ISSN: 2524-521X

DOI: 10.1007/s42486-022-00107-4

Abstract:

Motivation to engage in learning is essential for learning performance. Learners’ motivation is traditionally assessed using self-reported data, which is time-consuming, subjective, and interruptive to their learning process. To address this issue, this paper proposes a novel framework for the multimodal assessment of learners’ motivation in e-learning environments, with the ultimate purpose of supporting intelligent e-learning systems in delivering dynamic, context-aware, and personalized services or interventions that sustain learners’ motivation for learning engagement. The applicability of the framework was evaluated in an empirical study in which we combined eye tracking and electroencephalogram (EEG) sensors to produce a multimodal dataset. The dataset was then processed and used to develop a machine learning classifier for motivation assessment by predicting the levels of a range of motivational factors, which together represented the multiple dimensions of motivation. We investigated the performance of the classifier and identified the most and least accurately predicted motivational factors, and we assessed the contribution of different EEG and eye gaze features to motivation assessment. We also proposed a novel approach to feature selection that combines data-driven and knowledge-driven methods to train the classifier; in our empirical study this approach proved effective at selecting predictors from the large number of features extracted from the EEG and eye tracking data. Our study revealed valuable insights into the roles played by brain activity and eye movements in predicting the levels of different motivational factors. Initial results using a logistic regression classifier achieved significant predictive power for all the motivational factors studied, with accuracies between 68.1% and 92.8%. The present work demonstrates the applicability of the proposed framework for multimodal motivation assessment and should inspire future research towards motivationally intelligent e-learning systems.
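
The abstract outlines the computational pipeline at a high level: extract features from EEG and eye tracking signals, select predictors by combining data-driven and knowledge-driven criteria, and train a logistic regression classifier to predict the level of a motivational factor. The sketch below illustrates that shape of pipeline using scikit-learn and synthetic stand-in data; every feature name, the whitelist, and the labels are hypothetical assumptions for illustration and do not reproduce the paper's actual features, factors, or selection criteria.

```python
# Illustrative sketch only: all feature names, the knowledge-driven
# whitelist, and the labels below are hypothetical stand-ins.
import numpy as np
import pandas as pd
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Stand-in multimodal feature table: EEG band-power and eye-gaze
# statistics per learning episode (synthetic data).
n = 200
X = pd.DataFrame({
    "eeg_theta_power": rng.normal(size=n),
    "eeg_alpha_power": rng.normal(size=n),
    "eeg_beta_power": rng.normal(size=n),
    "gaze_fixation_duration": rng.normal(size=n),
    "gaze_saccade_rate": rng.normal(size=n),
    "pupil_diameter_mean": rng.normal(size=n),
})
# Binary level (high/low) of one motivational factor.
y = rng.integers(0, 2, size=n)

# Knowledge-driven step: keep only features that prior literature
# links to the factor under study (hypothetical whitelist).
whitelist = ["eeg_theta_power", "eeg_alpha_power",
             "gaze_fixation_duration", "pupil_diameter_mean"]
X_kd = X[whitelist]

# Data-driven step: rank the whitelisted features by mutual
# information with the labels, keep the top k, then fit a
# logistic regression classifier on the selected predictors.
clf = make_pipeline(
    StandardScaler(),
    SelectKBest(mutual_info_classif, k=3),
    LogisticRegression(max_iter=1000),
)

# Accuracy estimated with 5-fold cross-validation; in the paper's
# setting one such classifier is trained per motivational factor.
acc = cross_val_score(clf, X_kd, y, cv=5, scoring="accuracy")
print(f"mean accuracy: {acc.mean():.3f}")
```

Placing the knowledge-driven filter before the data-driven ranking, as sketched here, keeps the statistical selection confined to theoretically plausible predictors; the paper's exact ordering and criteria may differ.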

https://eprints.bournemouth.ac.uk/37003/

Source: Scopus

Multimodal motivation modelling and computing towards motivationally intelligent E-learning systems

Authors: Wang, R., Chen, L. and Ayesh, A.

Journal: CCF Transactions on Pervasive Computing and Interaction

Volume: 5

Issue: 1

Pages: 64-81

eISSN: 2524-5228

ISSN: 2524-521X

DOI: 10.1007/s42486-022-00107-4

https://eprints.bournemouth.ac.uk/37003/

Source: Web of Science (Lite)

Multimodal Motivation Modelling and Computing towards Motivationally Intelligent E-Learning Systems

Authors: Wang, R., Chen, L. and Ayesh, A.

Journal: CCF Transactions on Pervasive Computing and Interaction

DOI: 10.1007/s42486-022-00107-4

Abstract:

Persistent motivation to engage with e-learning systems is essential for users’ learning performance. Learners’ motivation is traditionally assessed using subjective, self-reported data, which is time-consuming and interruptive to the learning process. To address this issue, this paper proposes a novel framework for the multimodal assessment of learners’ motivation in e-learning environments, to inform intelligent e-learning systems that can deliver dynamic, context-aware, and personalized services or interventions to maintain learners’ motivation during use. The applicability of the framework was evaluated in an empirical study in which we combined eye tracking and electroencephalogram (EEG) sensors to produce a multimodal dataset. The dataset was then processed and used to develop a machine learning classifier for motivation assessment by predicting the levels of a range of motivational factors, which together represented the multiple dimensions of motivation. We investigated the performance of the classifier and identified the most and least accurately predicted motivational factors, and we assessed the contribution of different EEG and eye gaze features to motivation assessment. Our study revealed valuable insights into the roles played by brain activity and eye movements in predicting the levels of different motivational factors. Initial results using a logistic regression classifier achieved significant predictive power for all the motivational factors studied, with accuracies between 68.1% and 92.8%. The present work demonstrates the applicability of the proposed framework for multimodal motivation assessment and should inspire future research towards motivationally intelligent e-learning systems.
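
This version of the abstract emphasizes identifying the most and least accurately predicted motivational factors. The short sketch below illustrates that kind of per-factor comparison: one classifier is trained per factor and the factors are ranked by cross-validated accuracy. The factor names and data are invented for illustration and are not the paper's actual factors.

```python
# Hypothetical per-factor comparison: train one logistic regression
# per motivational factor, then rank factors by mean CV accuracy.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 10))  # stand-in EEG + eye gaze features
factors = ["attention", "confidence", "satisfaction"]  # hypothetical
labels = {f: rng.integers(0, 2, size=200) for f in factors}  # high/low

scores = {}
for factor, y in labels.items():
    clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    scores[factor] = cross_val_score(clf, X, y, cv=5).mean()

# Rank factors from most to least accurately predicted.
for factor, acc in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{factor}: {acc:.3f}")
```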

https://eprints.bournemouth.ac.uk/37003/

Source: Manual

Multimodal Motivation Modelling and Computing towards Motivationally Intelligent E-Learning Systems

Authors: Wang, R., Chen, L. and Ayesh, A.

Journal: CCF Transactions on Pervasive Computing and Interaction

ISSN: 2524-521X

https://eprints.bournemouth.ac.uk/37003/

Source: BURO EPrints