Shao, Xinghan and Chang, C and Gan, John Q and Wang, Haixian (2025) An interpretable contrastive learning transformer for EEG-based person identification. IEEE Transactions on Information Forensics and Security. p. 1. DOI https://doi.org/10.1109/tifs.2025.3570183
Abstract
Research on electroencephalogram (EEG)-based person identification is increasing because EEG signals must be collected from the living body, making them difficult to steal or alter. However, EEG signals are greatly influenced by subjects' states, and most studies on EEG-based person identification have overlooked this influence. In this study, we propose an interpretable contrastive learning transformer to tackle the impact of state changes on EEG-based person identification. The contrastive learning transformer constructs pairs of EEG feature samples to capture state-independent, identity-distinct features. Specifically, the power spectral density (PSD) of EEG signals from the same user in different paradigms is used as positive samples, while the PSD from other users is used as negative samples. Pairs of samples are encoded to obtain corresponding features and then projected into a contrastive space through a multi-layer perceptron. The NT-Xent loss function then minimizes the distance between positive samples within the same batch and maximizes the distance between negative samples. Finally, to eliminate bias between positive sample pairs from different paradigms, we introduce for the first time a cross-paradigm alignment loss to capture individual consistency. We evaluated our model on two datasets. Dataset 1 contains EEG signals from 109 individuals, recorded across multiple paradigms designed to elicit different states. Dataset 2 consists of EEG signals from 71 individuals, collected across two sessions, with each session including two paradigms. We evaluated the accuracy of both single-paradigm and cross-paradigm recognition. Our proposed model outperforms state-of-the-art models for EEG-based person identification. We also conducted electrode attention visualization experiments to identify the brain regions the model focuses on, and the results demonstrate that, unlike in the single-paradigm setting, models trained in the cross-paradigm setting focus on fewer electrodes and more concentrated regions.
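The abstract describes a SimCLR-style NT-Xent loss over projected PSD features, where PSD samples of the same user under two paradigms form positive pairs and all other users in the batch act as negatives. The sketch below is a minimal, hedged illustration of that loss under these assumptions; the function name, temperature value, and toy dimensions are illustrative and not taken from the authors' implementation.

```python
# Minimal sketch of an NT-Xent contrastive loss, assuming a SimCLR-style
# formulation over projected PSD features. Hypothetical names/parameters:
# nt_xent_loss, temperature=0.1, the 64-dim toy projections.
import torch
import torch.nn.functional as F


def nt_xent_loss(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.1) -> torch.Tensor:
    """NT-Xent loss over a batch of positive pairs.

    z1, z2: (N, D) projections of the same users' PSD features recorded under
    two different paradigms; row i of z1 and row i of z2 form a positive pair,
    and every other row in the concatenated batch is a negative.
    """
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2N, D), unit norm
    sim = z @ z.t() / temperature                        # scaled cosine similarities
    sim.fill_diagonal_(float("-inf"))                     # a sample is never its own negative
    # For index i in [0, N), its positive sits at i + N, and vice versa.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)


if __name__ == "__main__":
    # Toy usage: 8 users, 64-dim projected PSD features from two paradigms.
    torch.manual_seed(0)
    z_paradigm_a = torch.randn(8, 64)
    z_paradigm_b = torch.randn(8, 64)
    print(nt_xent_loss(z_paradigm_a, z_paradigm_b).item())
```

The cross-paradigm alignment loss mentioned in the abstract is an additional term specific to this paper and is not reproduced here.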
| Item Type: | Article |
|---|---|
| Uncontrolled Keywords: | EEG; biometrics; person identification; contrastive learning; transformer |
| Divisions: | Faculty of Science and Health; Faculty of Science and Health > Computer Science and Electronic Engineering, School of |
| SWORD Depositor: | Unnamed user with email elements@essex.ac.uk |
| Depositing User: | Unnamed user with email elements@essex.ac.uk |
| Date Deposited: | 21 May 2025 13:47 |
| Last Modified: | 21 May 2025 13:57 |
| URI: | http://repository.essex.ac.uk/id/eprint/40918 |
Available files
Filename: An_interpretable_contrastive_learning_transformer_for_EEG-based_person_identification.pdf