Research Repository

Deep learning for EEG-based Motor Imagery classification: Accuracy-cost trade-off

León, Javier and Escobar, Juan José and Ortiz, Andrés and Ortega, Julio and González, Jesús and Martín-Smith, Pedro and Gan, John Q and Damas, Miguel (2020) 'Deep learning for EEG-based Motor Imagery classification: Accuracy-cost trade-off.' PLoS One, 15 (6). ISSN 1932-6203

PLOS_ONE_2020.pdf - Published Version
Available under License Creative Commons Attribution.

Abstract

Electroencephalography (EEG) datasets are often small and high dimensional, owing to cumbersome recording processes. In these conditions, powerful machine learning techniques are essential to deal with the large amount of information and overcome the curse of dimensionality. Artificial Neural Networks (ANNs) have achieved promising performance in EEG-based Brain-Computer Interface (BCI) applications, but they involve computationally intensive training algorithms and hyperparameter optimization methods. Thus, an awareness of the quality-cost trade-off, although usually overlooked, is highly beneficial. In this paper, we apply a hyperparameter optimization procedure based on Genetic Algorithms to Convolutional Neural Networks (CNNs), Feed-Forward Neural Networks (FFNNs), and Recurrent Neural Networks (RNNs), all of them purposely shallow. We compare their relative quality and energy-time cost, and we also analyze the variability in the structural complexity of networks of the same type with similar accuracies. The experimental results show that the optimization procedure improves accuracy in all models, and that CNN models with only one hidden convolutional layer can equal or slightly outperform a 6-layer Deep Belief Network. FFNNs and RNNs were not able to reach the same quality, although their cost was significantly lower. The results also highlight the fact that size within the same type of network is not necessarily correlated with accuracy, as smaller models can and do match, or even surpass, bigger ones in performance. In this regard, overfitting is likely a contributing factor, since deep learning approaches struggle with limited training examples.
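The Genetic Algorithm-based hyperparameter search described in the abstract can be sketched as follows. This is an illustrative toy, not the paper's actual procedure: the search space, the uniform crossover and per-gene mutation operators, the truncation selection with elitism, and the `toy_fitness` function are all assumptions introduced here. In the study, fitness would instead be the trained network's classification accuracy on held-out EEG trials.

```python
import random

# Hypothetical hyperparameter search space for a shallow CNN
# (illustrative names and values, not the paper's encoding).
SEARCH_SPACE = {
    "n_filters":   [8, 16, 32, 64],
    "kernel_size": [3, 5, 7],
    "dropout":     [0.1, 0.25, 0.5],
    "lr":          [1e-2, 1e-3, 1e-4],
}

def random_individual(rng):
    # One individual = one full hyperparameter configuration.
    return {k: rng.choice(v) for k, v in SEARCH_SPACE.items()}

def crossover(a, b, rng):
    # Uniform crossover: each gene is taken from either parent.
    return {k: (a if rng.random() < 0.5 else b)[k] for k in SEARCH_SPACE}

def mutate(ind, rng, rate=0.2):
    # Re-sample each gene from the search space with probability `rate`.
    return {k: (rng.choice(SEARCH_SPACE[k]) if rng.random() < rate else v)
            for k, v in ind.items()}

def evolve(fitness, generations=30, pop_size=16, seed=0):
    rng = random.Random(seed)
    pop = [random_individual(rng) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]          # truncation selection; elite kept intact
        children = [mutate(crossover(rng.choice(elite), rng.choice(elite), rng), rng)
                    for _ in range(pop_size - len(elite))]
        pop = elite + children
    return max(pop, key=fitness)

# Stand-in for cross-validated accuracy: peaks at n_filters=32, lr=1e-3.
# A real fitness call would train and evaluate the network on EEG data.
def toy_fitness(ind):
    return -abs(ind["n_filters"] - 32) - 100 * abs(ind["lr"] - 1e-3)

best = evolve(toy_fitness)
```

Because the elite survive each generation unchanged, the best fitness found is monotone non-decreasing; in the paper's setting, each fitness evaluation is a full network training run, which is what makes the quality-cost trade-off central.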

Item Type: Article
Divisions: Faculty of Science and Health > Computer Science and Electronic Engineering, School of
Depositing User: Elements
Date Deposited: 17 Jun 2020 13:25
Last Modified: 17 Jun 2020 14:15
URI: http://repository.essex.ac.uk/id/eprint/27890
