Lian, Zequan and Xu, Tao and Yuan, Zhen and Li, Junhua and Thakor, Nitish and Wang, Hongtao (2024) Driving Fatigue Detection Based on Hybrid Electroencephalography and Eye Tracking. IEEE Journal of Biomedical and Health Informatics, 28 (11). pp. 6568-6580. DOI https://doi.org/10.1109/jbhi.2024.3446952
Abstract
EEG-based unimodal methods have demonstrated substantial success in the detection of driving fatigue. Nonetheless, data from a single modality may not be sufficient to optimize fatigue detection due to incomplete information. To address this limitation and enhance the performance of driving fatigue detection, this study proposes a novel multimodal architecture combining electroencephalography (EEG) and eye tracking data. Specifically, EEG and eye tracking data are separately input into encoders, generating two one-dimensional (1D) features. These 1D features are then fed into a cross-modal predictive alignment module to improve fusion efficiency and into two 1D attention modules to enhance feature representation. The fused features are then recognized by a linear classifier. To evaluate the effectiveness of the proposed multimodal method, comprehensive validation tasks were conducted, including intra-session, cross-session, and cross-subject evaluations. In the intra-session task, the proposed architecture achieves an exceptional average accuracy of 99.93%. In the cross-session task, our method achieves an average accuracy of 88.67%, surpassing the EEG-only approach by 8.52%, the eye tracking-only method by 5.92%, the multimodal deep canonical correlation analysis (DCCA) technique by 0.42%, and the multimodal deep generalized canonical correlation analysis (DGCCA) approach by 0.84%. Similarly, in the cross-subject task, the proposed approach achieves an average accuracy of 78.19%, outperforming the EEG-only method by 5.87%, the eye tracking-only approach by 4.21%, the DCCA method by 0.55%, and the DGCCA approach by 0.44%. The experimental results illustrate the superior effectiveness of the proposed method compared to both single-modality approaches and canonical correlation analysis-based multimodal methods. Overall, this study provides a new and effective strategy for driving fatigue detection.
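The pipeline described in the abstract (modality-specific encoders producing 1D features, per-modality 1D attention, fusion, and a linear classifier) can be sketched in numpy. This is a minimal illustrative sketch only, not the authors' implementation: the feature dimensions, the linear encoders, the softmax-based attention, and concatenation-based fusion are all assumptions, and the cross-modal predictive alignment module (a training-time objective in the paper) is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, w):
    """Toy encoder: a linear projection with tanh, standing in for the
    modality-specific encoders that produce 1D feature vectors."""
    return np.tanh(x @ w)

def attention_1d(f):
    """Toy 1D attention: softmax over feature positions yields weights
    that re-scale the feature vector (stand-in for the paper's 1D
    attention modules)."""
    a = np.exp(f - f.max())
    a /= a.sum()
    return f * a * f.size  # rescale so magnitudes stay comparable

# Hypothetical dimensions: 32-dim EEG input, 16-dim eye-tracking input,
# each projected to an 8-dim 1D feature before fusion.
eeg = rng.standard_normal(32)
eye = rng.standard_normal(16)
w_eeg = rng.standard_normal((32, 8))
w_eye = rng.standard_normal((16, 8))

f_eeg = attention_1d(encode(eeg, w_eeg))
f_eye = attention_1d(encode(eye, w_eye))

# Fuse by concatenation, then classify with an (untrained) linear layer
# over two classes: alert vs. fatigued.
fused = np.concatenate([f_eeg, f_eye])
w_cls = rng.standard_normal((16, 2))
logits = fused @ w_cls
pred = int(np.argmax(logits))
print(fused.shape, pred)
```

With trained weights, `pred` would be the fatigue-state decision; here the weights are random, so the output only demonstrates the data flow from the two modalities to a single fused decision.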
Item Type: | Article |
---|---|
Uncontrolled Keywords: | cross-modal alignment; electroencephalograph; eye tracking; fatigue detection; multi-modality |
Divisions: | Faculty of Science and Health; Faculty of Science and Health > Computer Science and Electronic Engineering, School of |
SWORD Depositor: | Unnamed user with email elements@essex.ac.uk |
Depositing User: | Unnamed user with email elements@essex.ac.uk |
Date Deposited: | 18 Sep 2024 09:21 |
Last Modified: | 29 Nov 2024 12:32 |
URI: | http://repository.essex.ac.uk/id/eprint/39109 |
Available files
Filename: Hybrid EEG and Eye Tracking.pdf