Kiziltepe, Rukiye Savran and Gan, John Q and Escobar, Juan José (2024) Integration of Feature and Decision Fusion With Deep Learning Architectures for Video Classification. IEEE Access, 12. pp. 19432-19446. DOI https://doi.org/10.1109/access.2024.3360929
Abstract
Information fusion is frequently employed to integrate diverse inputs, such as sensory data, features, or decisions, in order to exploit complementary relationships among features and classifiers. This paper presents a novel approach to video classification using deep learning architectures, including ConvLSTM- and vision-transformer-based fusion architectures, which combines spatial and temporal features and applies decision fusion at multiple levels. The proposed vision-transformer-based method uses a 3D CNN to extract spatio-temporal information and several attention mechanisms to focus on features essential for action recognition, thereby learning spatio-temporal dependencies effectively. The proposed methods are validated through empirical evaluations on two well-known video classification datasets, UCF-101 and KTH. The experimental findings indicate that using both spatial and temporal features is essential, with the best performance obtained when temporal features serve as the primary source of features alongside two distinct types of spatial features. The proposed multi-level decision fusion approach produces results comparable to those of feature fusion methods while reducing memory requirements and computational costs. Fusing RGB, HOG, and optical-flow representations achieved the best performance among the fusion methods examined in this study. The vision-transformer-based approaches also significantly outperformed the ConvLSTM-based approaches. Furthermore, an ablation study compared vision-transformer-based feature fusion approaches for enhancing video classification performance.
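The record itself contains no code; as a rough illustration of the late (decision-level) fusion idea described in the abstract, below is a minimal PyTorch sketch that averages per-stream class probabilities from hypothetical RGB, HOG, and optical-flow streams. All module names, layer sizes, and input shapes are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class StreamClassifier(nn.Module):
    """Hypothetical small 3D-CNN stream mapping one modality
    (e.g. RGB, HOG, or optical flow) from a clip to class logits."""
    def __init__(self, in_channels: int, num_classes: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(in_channels, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),  # global spatio-temporal pooling
        )
        self.head = nn.Linear(16, num_classes)

    def forward(self, x):  # x: (batch, channels, frames, height, width)
        z = self.features(x).flatten(1)
        return self.head(z)

def decision_fusion(logit_list):
    """Decision-level (late) fusion: average per-stream class
    probabilities instead of concatenating intermediate features."""
    probs = [torch.softmax(logits, dim=1) for logits in logit_list]
    return torch.stack(probs).mean(dim=0)

# Usage: fuse RGB (3 ch), HOG (1 ch), and optical-flow (2 ch) streams.
num_classes = 101  # e.g. UCF-101
streams = [StreamClassifier(c, num_classes) for c in (3, 1, 2)]
clips = [torch.randn(4, c, 16, 112, 112) for c in (3, 1, 2)]
fused = decision_fusion([m(x) for m, x in zip(streams, clips)])
print(fused.shape)  # torch.Size([4, 101])
```

Because each stream is trained and run independently and only class probabilities are combined, this late-fusion design needs less memory than concatenating feature maps, which is consistent with the trade-off the abstract reports.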
| Item Type: | Article |
| --- | --- |
| Uncontrolled Keywords: | Computer vision; data fusion; deep neural networks; human action recognition; spatio-temporal features |
| Subjects: | Z Bibliography. Library Science. Information Resources > ZZ OA Fund (articles) |
| Divisions: | Faculty of Science and Health; Faculty of Science and Health > Computer Science and Electronic Engineering, School of |
| Date Deposited: | 18 Mar 2024 14:43 |
| Last Modified: | 30 Oct 2024 21:21 |
| URI: | http://repository.essex.ac.uk/id/eprint/37829 |
Available files
Filename: IEEE_ACCESS_PublishedVersion.pdf
Licence: Creative Commons: Attribution 4.0