Qiao, Shuo and Tang, Chao and Hu, Huosheng and Wang, Wenjian and Tong, Anyang and Ren, Fang (2025) Cross-view identification based on gait bioinformation using a dynamic densely connected spatial-temporal feature decoupling network. Biomedical Signal Processing and Control, 104. p. 107494. DOI https://doi.org/10.1016/j.bspc.2025.107494
Abstract
Existing cross-view identification methods based on gait bioinformation often overlook the importance of feature reuse and of decoupling the spatial–temporal features in gait data. To address these challenges, we propose a novel approach named the Dynamic Densely connected Spatial–Temporal Feature Decoupling Network (DDSTFDN). First, the continuous gait sequence data are preprocessed by cropping and normalization before being fed into an initial network module that extracts shallow gait features. These shallow features are then processed by the dynamic dense spatial–temporal decoupling network, which combines densely connected spatial–temporal feature decoupling blocks with enhanced convolutional block attention modules (E-CBAM) to obtain decoupled spatial–temporal features. Finally, the resulting gait features are divided into probe features and gallery features for similarity calculation, enabling accurate classification. Our approach achieves recognition accuracies of 97.2% and 87.6% under the normal walking (NM) condition of the CASIA-B and OUMVLP datasets, respectively, as well as recognition accuracies of 93.4% and 78.1% in the more complex walking with a backpack (BG) and walking with a coat or jacket (CL) scenarios of the CASIA-B dataset. In addition, our method obtains a recognition accuracy of up to 98.6% on the CASIA-C dataset. On the CASIA-B dataset, we outperform the current baseline by 3.9 percentage points in accuracy for a batch size of 4 × 8, achieving recognition performance comparable to that of state-of-the-art (SOTA) approaches. These experimental results demonstrate that DDSTFDN can effectively improve recognition accuracy while reducing resource consumption.
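To make the pipeline described in the abstract concrete, the following is a minimal, hypothetical PyTorch sketch: shallow feature extraction from preprocessed silhouette sequences, densely connected blocks that keep spatial (per-frame) and temporal (across-frame) convolutions decoupled, a CBAM-style attention stand-in for the paper's E-CBAM, and probe–gallery matching by cosine similarity. All layer sizes, kernel shapes, block counts, and the attention design are assumptions for illustration only; the paper's actual architecture may differ.

```python
# Hypothetical sketch of a DDSTFDN-like pipeline; not the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ChannelSpatialAttention(nn.Module):
    """CBAM-style channel + spatial attention (stand-in for the paper's E-CBAM)."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels))
        self.spatial = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):                       # x: (N, C, H, W)
        avg = self.mlp(x.mean(dim=(2, 3)))      # channel attention from pooled maps
        mx = self.mlp(x.amax(dim=(2, 3)))
        x = x * torch.sigmoid(avg + mx).unsqueeze(-1).unsqueeze(-1)
        s = torch.cat([x.mean(1, keepdim=True), x.amax(1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(s))   # spatial attention


class DecoupledSTBlock(nn.Module):
    """Spatial (1x3x3) and temporal (3x1x1) convolutions kept separate; dense
    connectivity is handled by the caller concatenating earlier block outputs."""
    def __init__(self, in_ch, growth):
        super().__init__()
        self.spatial = nn.Conv3d(in_ch, growth, kernel_size=(1, 3, 3), padding=(0, 1, 1))
        self.temporal = nn.Conv3d(growth, growth, kernel_size=(3, 1, 1), padding=(1, 0, 0))
        self.attn = ChannelSpatialAttention(growth)

    def forward(self, x):                       # x: (N, C, T, H, W)
        y = F.relu(self.temporal(F.relu(self.spatial(x))))
        n, c, t, h, w = y.shape                 # apply attention frame by frame
        y = self.attn(y.transpose(1, 2).reshape(n * t, c, h, w))
        return y.reshape(n, t, c, h, w).transpose(1, 2)


class DDSTFDNSketch(nn.Module):
    def __init__(self, growth=32, num_blocks=3, embed_dim=128):
        super().__init__()
        self.stem = nn.Conv3d(1, growth, kernel_size=(1, 5, 5), padding=(0, 2, 2))  # shallow features
        self.blocks = nn.ModuleList(
            DecoupledSTBlock(growth * (i + 1), growth) for i in range(num_blocks))
        self.head = nn.Linear(growth * (num_blocks + 1), embed_dim)

    def forward(self, silhouettes):             # (N, 1, T, H, W) cropped, normalized
        feats = [F.relu(self.stem(silhouettes))]
        for block in self.blocks:
            feats.append(block(torch.cat(feats, dim=1)))   # dense connections
        x = torch.cat(feats, dim=1)
        x = x.amax(dim=2).mean(dim=(2, 3))      # temporal max, then spatial average pooling
        return F.normalize(self.head(x), dim=1) # unit-length gait embedding


def identify(probe, gallery, gallery_ids):
    """Assign each probe embedding the identity of its most similar gallery embedding."""
    sims = probe @ gallery.t()                  # cosine similarity (embeddings are normalized)
    return [gallery_ids[i] for i in sims.argmax(dim=1)]
```

In this reading, "decoupling" is realised by factoring each block into a purely spatial convolution followed by a purely temporal one, while "dynamic dense" connectivity corresponds to each block receiving the concatenation of all earlier feature maps; both are plausible interpretations of the abstract rather than confirmed details of the method.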
| Item Type: | Article |
|---|---|
| Divisions: | Faculty of Science and Health > Computer Science and Electronic Engineering, School of |
| SWORD Depositor: | Unnamed user with email elements@essex.ac.uk |
| Depositing User: | Unnamed user with email elements@essex.ac.uk |
| Date Deposited: | 21 Jan 2025 16:03 |
| Last Modified: | 22 Jan 2025 09:52 |
| URI: | http://repository.essex.ac.uk/id/eprint/40013 |
Available files
Filename: Accepted.pdf
Licence: Creative Commons: Attribution 4.0