Chartsias, Agisilaos and Papanastasiou, Giorgos and Wang, Chengjia and Semple, Scott and Newby, David and Dharmakumar, Rohan and Tsaftaris, Sotirios (2021) Disentangle, align and fuse for multimodal and semi-supervised image segmentation. IEEE Transactions on Medical Imaging, 40 (3). pp. 781-792. DOI https://doi.org/10.1109/tmi.2020.3036584
Abstract
Magnetic resonance (MR) protocols rely on several sequences to assess pathology and organ status properly. Despite advances in image analysis, we tend to treat each sequence, here termed modality, in isolation. Taking advantage of the common information shared between modalities (an organ's anatomy) is beneficial for multi-modality processing and learning. However, we must overcome inherent anatomical misregistrations and disparities in signal intensity across the modalities to obtain this benefit. We present a method that offers improved segmentation accuracy of the modality of interest (over a single-input model) by learning to leverage information present in other modalities, even if few (semi-supervised) or no (unsupervised) annotations are available for this specific modality. Core to our method is learning a disentangled decomposition into anatomical and imaging factors. Shared anatomical factors from the different inputs are jointly processed and fused to extract more accurate segmentation masks. Image misregistrations are corrected with a Spatial Transformer Network, which non-linearly aligns the anatomical factors. The imaging factor captures signal intensity characteristics across different modality data and is used for image reconstruction, enabling semi-supervised learning. Temporal and slice pairings between inputs are learned dynamically. We demonstrate applications in Late Gadolinium Enhanced (LGE) and Blood Oxygenation Level Dependent (BOLD) cardiac segmentation, as well as in T2 abdominal segmentation.
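The pipeline the abstract outlines (encode each modality into a shared anatomical factor and a modality-specific imaging factor, align the anatomical factors with a Spatial Transformer, fuse them for segmentation, and reconstruct images to enable semi-supervised training) can be summarized in code. Below is a minimal PyTorch sketch of that flow; every module, layer size, and name is a hypothetical illustration rather than the authors' implementation, and the alignment shown is affine for brevity where the paper's transformer is non-linear.

```python
# Hypothetical sketch of disentangle -> align -> fuse; NOT the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AnatomyEncoder(nn.Module):
    """Image -> spatial anatomy factor (channel-wise softmax)."""
    def __init__(self, in_ch=1, anat_ch=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, anat_ch, 3, padding=1))
    def forward(self, x):
        return F.softmax(self.net(x), dim=1)

class ImagingEncoder(nn.Module):
    """Image -> low-dimensional imaging (signal intensity) factor."""
    def __init__(self, in_ch=1, z_dim=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, z_dim))
    def forward(self, x):
        return self.net(x)

class AffineSTN(nn.Module):
    """Predicts a warp aligning one anatomy factor to another
    (affine here; the paper uses a non-linear Spatial Transformer)."""
    def __init__(self, anat_ch=8):
        super().__init__()
        self.loc = nn.Sequential(
            nn.Conv2d(2 * anat_ch, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 6))
        # Initialize the localization head to the identity transform.
        self.loc[-1].weight.data.zero_()
        self.loc[-1].bias.data.copy_(
            torch.tensor([1, 0, 0, 0, 1, 0], dtype=torch.float))
    def forward(self, moving, fixed):
        theta = self.loc(torch.cat([moving, fixed], dim=1)).view(-1, 2, 3)
        grid = F.affine_grid(theta, moving.size(), align_corners=False)
        return F.grid_sample(moving, grid, align_corners=False)

class Segmenter(nn.Module):
    """Fused anatomy factor -> segmentation logits."""
    def __init__(self, anat_ch=8, n_classes=2):
        super().__init__()
        self.net = nn.Conv2d(anat_ch, n_classes, 1)
    def forward(self, a):
        return self.net(a)

class Decoder(nn.Module):
    """(anatomy, imaging factor) -> reconstructed image; the
    reconstruction loss is what enables semi-supervised training."""
    def __init__(self, anat_ch=8, z_dim=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(anat_ch + z_dim, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1))
    def forward(self, a, z):
        z_map = z[:, :, None, None].expand(-1, -1, a.size(2), a.size(3))
        return self.net(torch.cat([a, z_map], dim=1))

# Forward pass over a multimodal pair (e.g., LGE and BOLD slices).
anat_enc, img_enc = AnatomyEncoder(), ImagingEncoder()
stn, seg, dec = AffineSTN(), Segmenter(), Decoder()
x_target = torch.randn(2, 1, 64, 64)   # modality of interest
x_other  = torch.randn(2, 1, 64, 64)   # auxiliary modality
a_t, a_o = anat_enc(x_target), anat_enc(x_other)
a_o_aligned = stn(a_o, a_t)            # correct anatomical misregistration
fused = torch.max(a_t, a_o_aligned)    # fuse shared anatomical factors
logits = seg(fused)                    # segmentation of the target modality
recon = dec(a_t, img_enc(x_target))    # reconstruction term for unlabeled data
```

In training, a supervised segmentation loss on `logits` (where annotations exist) would be combined with a reconstruction loss on `recon` (for unannotated images); the final two lines stand in for those two terms.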
| Item Type: | Article |
| --- | --- |
| Uncontrolled Keywords: | Image segmentation; Biomedical imaging; Annotations; Training; Semantics; Decoding; Multimodal segmentation; disentanglement; magnetic resonance imaging |
| Divisions: | Faculty of Science and Health; Faculty of Science and Health > Computer Science and Electronic Engineering, School of |
| Date Deposited: | 06 Nov 2020 11:30 |
| Last Modified: | 01 Nov 2024 10:18 |
| URI: | http://repository.essex.ac.uk/id/eprint/28978 |
Available files
Filename: Chartsias, Papanstasiou, et al_IEEE TMI_acceptedversion_2020.pdf