Anderson, Andrew James and Bruni, Elia and Lopopolo, Alessandro and Poesio, Massimo and Baroni, Marco (2015) Reading visually embodied meaning from the brain: Visually grounded computational models decode visual-object mental imagery induced by written text. NeuroImage, 120. pp. 309-322. DOI https://doi.org/10.1016/j.neuroimage.2015.06.093
Abstract
Embodiment theory predicts that mental imagery of object words recruits neural circuits involved in object perception. The degree of visual imagery present in routine thought, and how it is encoded in the brain, is largely unknown. We test whether fMRI activity patterns elicited by participants reading objects' names include embodied visual-object representations, and whether we can decode these representations using novel computational image-based semantic models. We first apply the image models, in conjunction with text-based semantic models, to test predictions about the visual specificity of semantic representations in different brain regions. Representational similarity analysis confirms that fMRI structure within ventral-temporal and lateral-occipital regions correlates most strongly with the image models; conversely, the text models correlate better with posterior-parietal, lateral-temporal, and inferior-frontal regions. We use an unsupervised decoding algorithm that exploits commonalities in representational similarity structure found within both the image-model and brain data sets to classify embodied visual representations with high accuracy (8/10), and then extend it to exploit model combinations to robustly decode different brain regions in parallel. By capturing latent visual-semantic structure, our models provide a route into analyzing neural representations derived from past perceptual experience rather than stimulus-driven brain activity. Our results also verify the benefit of combining multimodal data to model human-like semantic representations.
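The two analyses named in the abstract follow the standard representational similarity analysis (RSA) recipe: build a representational dissimilarity matrix (RDM) for the fMRI patterns and one for each semantic model, then rank-correlate the two; the unsupervised decoder likewise matches held-out items by their similarity profiles to the remaining items. The sketch below is a minimal illustration of both steps under assumed data shapes and a simple leave-two-out scoring rule; the variable names and inputs are hypothetical, not the authors' published code.

```python
# Minimal RSA + similarity-based decoding sketch (hypothetical data
# shapes and scoring rule; not the authors' published pipeline).
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rdm(patterns):
    """Representational dissimilarity matrix, as a condensed vector of
    pairwise correlation distances (1 - Pearson r) between stimuli."""
    return pdist(patterns, metric="correlation")

# Hypothetical inputs, one row per object word:
#   brain: (n_words, n_voxels) fMRI pattern per word in one region
#   model: (n_words, n_dims)   image- or text-based semantic vector
rng = np.random.default_rng(0)
brain = rng.normal(size=(10, 500))
model = rng.normal(size=(10, 300))

# RSA step: second-order similarity, i.e. rank-correlate the brain RDM
# with the model RDM.
rho, p = spearmanr(rdm(brain), rdm(model))
print(f"RSA: rho={rho:.3f}, p={p:.3f}")

def decode_leave_two_out(brain, model, i, j):
    """Unsupervised leave-two-out matching in the spirit of the
    abstract's decoder: label two held-out brain patterns by whichever
    pairing with the two held-out model vectors yields more similar
    similarity profiles to the remaining items."""
    rest = [k for k in range(len(brain)) if k not in (i, j)]

    def profile(X, idx):
        # Similarity (Pearson r) of item idx to every remaining item.
        return np.array([np.corrcoef(X[idx], X[k])[0, 1] for k in rest])

    bi, bj = profile(brain, i), profile(brain, j)
    mi, mj = profile(model, i), profile(model, j)
    correct = spearmanr(bi, mi)[0] + spearmanr(bj, mj)[0]
    swapped = spearmanr(bi, mj)[0] + spearmanr(bj, mi)[0]
    return correct > swapped  # True iff the true labeling wins

# Score all word pairs; chance level is 0.50 on random data like this.
pairs = [(i, j) for i in range(10) for j in range(i + 1, 10)]
acc = np.mean([decode_leave_two_out(brain, model, i, j) for i, j in pairs])
print(f"leave-two-out accuracy: {acc:.2f}")
```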
Item Type: | Article |
---|---|
Uncontrolled Keywords: | Concept representation; Embodiment; Mental imagery; Perceptual simulation; Language; Multimodal semantic models; Representational similarity |
Subjects: | P Language and Literature > P Philology. Linguistics; Q Science > QA Mathematics > QA75 Electronic computers. Computer science |
Divisions: | Faculty of Science and Health; Faculty of Science and Health > Computer Science and Electronic Engineering, School of |
SWORD Depositor: | Unnamed user with email elements@essex.ac.uk |
Depositing User: | Unnamed user with email elements@essex.ac.uk |
Date Deposited: | 20 Jul 2015 08:45 |
Last Modified: | 04 Dec 2024 06:25 |
URI: | http://repository.essex.ac.uk/id/eprint/14390 |
Available files
Filename: neuroimage_poesio.pdf