Lu, Qiang and Sun, Xia and Gao, Zhizezhang and Long, Yunfei and Feng, Jun and Zhang, Hao (2024) Coordinated-joint Translation Fusion Framework with Sentiment-interactive Graph Convolutional Networks for Multimodal Sentiment Analysis. Information Processing and Management, 61 (1). p. 103538. DOI https://doi.org/10.1016/j.ipm.2023.103538
Abstract
Interactive fusion methods have been successfully applied to multimodal sentiment analysis because of their ability to achieve data complementarity via the interaction of different modalities. However, previous methods treat the information of each modality as a whole and usually weight the modalities equally, failing to distinguish the contributions that different semantic regions of the non-textual features make towards the textual features. As a result, public regions fail to be captured, and private regions are hard to predict from text alone. Meanwhile, these methods use a sentiment-independent encoder to encode textual features, which may mistakenly identify syntactically irrelevant context words as clues for judging sentiment. In this paper, we propose a coordinated-joint translation fusion framework with a sentiment-interactive graph to address these problems. Specifically, we generate a novel sentiment-interactive graph that incorporates sentiment associations between words into the syntactic adjacency matrix, so that the relationships between nodes are no longer limited to syntactic associations but fully reflect the interaction of sentiment between different words. We then design a coordinated-joint translation fusion module. This module uses a cross-modal masked attention mechanism to determine whether the text and non-text inputs are correlated, thereby identifying the public semantic features in the visual and acoustic modalities that are most relevant to the text modality. A cross-modal translation-aware mechanism then measures the difference between the visual and acoustic features translated into the text modality and the text modality itself, allowing us to reconstruct the visual and acoustic modalities towards the text modality and obtain private semantic features. In addition, we construct a multimodal fusion layer that fuses the textual features with the non-textual public and private features to improve multimodal interaction. Experimental results on the publicly available CMU-MOSI and CMU-MOSEI datasets show that our proposed model achieves best accuracies of 86.5% and 86.1% and best F1 scores of 86.4% and 86.1%, respectively. A series of further analyses also indicates that the proposed framework effectively improves sentiment identification capability.
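The abstract outlines three steps: cross-modal masked attention that selects the text-relevant ("public") regions of the visual and acoustic features, a translation-aware step that maps the non-textual modalities towards the text modality and treats the discrepancy as the "private" signal, and a fusion layer over the textual, public, and private representations. The sketch below is a minimal, hypothetical PyTorch rendering of that pipeline, not the authors' implementation; the module name, tensor shapes, similarity-threshold masking rule, and the absolute-difference discrepancy are all illustrative assumptions.

```python
# Hypothetical sketch of the coordinated-joint translation fusion idea described
# in the abstract; layer choices, shapes, and the masking rule are assumptions,
# not the authors' implementation.
import torch
import torch.nn as nn


class CoordinatedJointFusion(nn.Module):
    def __init__(self, d_text, d_nontext, d_model=128, n_heads=4, mask_threshold=0.1):
        super().__init__()
        self.proj_t = nn.Linear(d_text, d_model)      # project textual features
        self.proj_n = nn.Linear(d_nontext, d_model)   # project visual/acoustic features
        # cross-modal attention: text queries attend over non-text keys/values
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        # "translation" of the attended non-text features towards the text space
        self.translate = nn.Linear(d_model, d_model)
        self.fuse = nn.Linear(3 * d_model, d_model)   # text + public + private
        self.mask_threshold = mask_threshold

    def forward(self, text, nontext):
        # text:    (batch, len_t, d_text)
        # nontext: (batch, len_n, d_nontext)  -- one visual or acoustic sequence
        t = self.proj_t(text)
        n = self.proj_n(nontext)

        # Cross-modal masked attention: mask non-text positions whose similarity
        # to every text token falls below a threshold (assumed masking rule).
        sim = torch.softmax(t @ n.transpose(1, 2) / t.size(-1) ** 0.5, dim=-1)
        key_padding_mask = sim.max(dim=1).values < self.mask_threshold  # (batch, len_n)
        public, _ = self.cross_attn(t, n, n, key_padding_mask=key_padding_mask)

        # Translation-aware step: map the non-text features towards the text
        # modality and keep the discrepancy as the private semantic signal.
        translated = self.translate(public)
        private = torch.abs(t - translated)

        # Fuse textual, public, and private representations.
        return self.fuse(torch.cat([t, public, private], dim=-1))
```

In the full framework this block would presumably be applied once per non-textual stream (visual and acoustic) before a final sentiment classifier; that wiring is omitted here for brevity.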
| Item Type: | Article |
|---|---|
| Uncontrolled Keywords: | Cross-modal masked attention; Cross-modal translation-aware mechanism; Multimodal fusion; Multimodal sentiment analysis; Sentiment-interactive graph |
| Divisions: | Faculty of Science and Health; Faculty of Science and Health > Computer Science and Electronic Engineering, School of |
| SWORD Depositor: | Unnamed user with email elements@essex.ac.uk |
| Depositing User: | Unnamed user with email elements@essex.ac.uk |
| Date Deposited: | 01 Nov 2023 14:44 |
| Last Modified: | 30 Oct 2024 21:13 |
| URI: | http://repository.essex.ac.uk/id/eprint/36633 |
Available files
Filename: Coordinated-joint Translation Fusion Framework with Sentiment-interactive Graph Convolutional Networks for Multimodal Sentiment Analysis.pdf
Licence: Creative Commons: Attribution-Noncommercial-No Derivative Works 4.0