Zhao, Ruiyu and Daly, Ian and Chen, Yixin and Wu, Weijie and Liu, Lifei and Wang, Xingyu and Cichocki, Andrzej and Jin, Jing (2025) MSAttNet: Multi-scale attention convolutional neural network for motor imagery classification. Journal of Neuroscience Methods, 424. p. 110578. DOI https://doi.org/10.1016/j.jneumeth.2025.110578
Abstract
Background: Convolutional neural networks (CNNs) are widely employed for motor imagery (MI) classification. However, because data collection experiments are cumbersome and EEG signals are limited, noisy, and non-stationary, small MI datasets present considerable challenges to the design of decoding algorithms.

New method: To capture more feature information from limited data, we propose a multi-scale attention convolutional neural network (MSAttNet). Our method comprises three main components: a multi-band segmentation module, an attention spatial convolution module, and a multi-scale temporal convolution module. First, the multi-band segmentation module applies a filter bank with overlapping frequency bands to enhance features in the frequency domain. Then, the attention spatial convolution module uses an attention mechanism to adaptively adjust its convolutional kernel parameters according to the input, allowing it to capture the characteristics of different datasets. The outputs of the attention spatial convolution module are grouped for multi-scale temporal convolution. Finally, the output of the multi-scale temporal convolution module is passed through a bilinear pooling layer to extract temporal features and suppress noise, and the extracted features are then classified.

Results: We evaluate the proposed method on four datasets: BCI Competition IV Dataset IIa, BCI Competition IV Dataset IIb, the OpenBMI dataset, and the ECUST-MI dataset. MSAttNet achieves cross-session accuracies of 78.20%, 84.52%, 75.94%, and 78.60%, respectively.

Comparison with existing methods: Compared with state-of-the-art algorithms, MSAttNet improves the decoding performance of MI tasks.

Conclusion: MSAttNet effectively addresses the challenges posed by MI-EEG datasets, improving decoding performance through robust feature extraction.
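The abstract outlines a four-stage pipeline: overlapping band-pass filtering, input-conditioned (attention) spatial convolution, grouped multi-scale temporal convolution, and bilinear pooling. The PyTorch sketch below illustrates one plausible reading of that pipeline; it is not the authors' released implementation. The module names, band count, filter counts, kernel sizes, the squeeze-and-excitation-style attention gate, and the log-variance pooling (a diagonal special case of bilinear pooling) are all assumptions made for illustration.

```python
# Minimal sketch, assuming band-pass filtering is done in preprocessing so
# the network receives input of shape (batch, n_bands, n_channels, n_times).
import torch
import torch.nn as nn


class AttentionSpatialConv(nn.Module):
    """Spatial convolution with an input-conditioned gate.

    A squeeze-and-excitation-style gate (an assumption; the paper's exact
    attention mechanism may differ) re-weights the spatial filters based on
    the statistics of each input trial.
    """

    def __init__(self, n_channels: int, n_filters: int):
        super().__init__()
        # Each spatial filter spans all EEG channels: kernel (n_channels, 1).
        self.spatial = nn.Conv2d(1, n_filters, (n_channels, 1), bias=False)
        self.bn = nn.BatchNorm2d(n_filters)
        self.gate = nn.Sequential(
            nn.Linear(n_filters, n_filters // 2),
            nn.ReLU(),
            nn.Linear(n_filters // 2, n_filters),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, n_channels, n_times)
        h = self.bn(self.spatial(x))          # (batch, n_filters, 1, n_times)
        w = self.gate(h.mean(dim=(2, 3)))     # (batch, n_filters) per-trial weights
        return h * w[:, :, None, None]        # gated feature maps


class MultiScaleTemporalConv(nn.Module):
    """Splits the feature maps into groups and convolves each group with a
    different temporal kernel length (kernel sizes are illustrative)."""

    def __init__(self, n_filters: int, kernel_sizes=(15, 31, 63)):
        super().__init__()
        assert n_filters % len(kernel_sizes) == 0
        self.group = n_filters // len(kernel_sizes)
        self.branches = nn.ModuleList(
            nn.Conv2d(self.group, self.group, (1, k), padding=(0, k // 2), bias=False)
            for k in kernel_sizes
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        chunks = torch.split(x, self.group, dim=1)
        return torch.cat([b(c) for b, c in zip(self.branches, chunks)], dim=1)


class MSAttNetSketch(nn.Module):
    """Hypothetical end-to-end assembly: one attention spatial conv and one
    multi-scale temporal conv per frequency band, then log-variance pooling
    (standing in for the paper's bilinear pooling) and a linear classifier."""

    def __init__(self, n_bands=3, n_channels=22, n_filters=24, n_classes=4):
        super().__init__()
        self.spatial = nn.ModuleList(
            AttentionSpatialConv(n_channels, n_filters) for _ in range(n_bands)
        )
        self.temporal = nn.ModuleList(
            MultiScaleTemporalConv(n_filters) for _ in range(n_bands)
        )
        self.classifier = nn.Linear(n_bands * n_filters, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n_bands, n_channels, n_times), one slice per filter-bank band.
        feats = []
        for b in range(x.size(1)):
            h = self.spatial[b](x[:, b : b + 1])   # (batch, F, 1, T)
            h = self.temporal[b](h)                # (batch, F, 1, T)
            # Log-variance over time pools each feature map to one scalar.
            feats.append(torch.log(h.var(dim=-1).clamp(min=1e-6)).flatten(1))
        return self.classifier(torch.cat(feats, dim=1))  # (batch, n_classes)


# Example: 8 trials, 3 overlapping bands, 22 channels, 256 time samples.
if __name__ == "__main__":
    x = torch.randn(8, 3, 22, 256)
    print(MSAttNetSketch()(x).shape)  # torch.Size([8, 4])
```

Processing each band with its own spatial and temporal modules mirrors the filter-bank design described in the abstract; whether the paper shares parameters across bands is not stated, so the per-band modules here are a design assumption.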
| Item Type: | Article |
|---|---|
| Uncontrolled Keywords: | Attention convolution; Brain-Computer Interfaces; Convolutional neural network; Motor imagery |
| Subjects: | Z Bibliography. Library Science. Information Resources > ZR Rights Retention |
| Divisions: | Faculty of Science and Health; Faculty of Science and Health > Computer Science and Electronic Engineering, School of |
| Date Deposited: | 22 Sep 2025 13:54 |
| Last Modified: | 05 Oct 2025 22:21 |
| URI: | http://repository.essex.ac.uk/id/eprint/41585 |
Available files
Filename: JNM.pdf
Licence: Creative Commons: Attribution 4.0