Wang, Chengjia and Yang, Guang and Papanastasiou, Giorgos and Tsaftaris, Sotirios A and Newby, David E and Gray, Calum and Macnaught, Gillian and MacGillivray, Tom J (2021) DiCyc: Deformation Invariant Cross-Domain Information Fusion for Medical Image Synthesis. Information Fusion, 67. pp. 147-160. DOI https://doi.org/10.1016/j.inffus.2020.10.015 (In Press)
Abstract
The cycle-consistent generative adversarial network (CycleGAN) has been widely used for cross-domain medical image synthesis tasks, particularly because of its ability to deal with unpaired data. However, most CycleGAN-based synthesis methods cannot achieve good alignment between the synthesized images and data from the source domain, even with additional image alignment losses. This is because the CycleGAN generator network can encode the relative deformations and noise associated with different domains, which can be detrimental for downstream applications that rely on the synthesized images, such as generating pseudo-CT for PET-MR attenuation correction. In this paper, we present a deformation-invariant cycle-consistency model that can filter out these domain-specific deformations. The deformation is globally parameterized by a thin-plate spline (TPS) and locally learned by modified deformable convolutional layers. Robustness to domain-specific deformations is evaluated through experiments on multi-sequence brain MR data and multi-modality abdominal CT and MR data. Experimental results demonstrate that our method achieves better alignment between the source and target data while maintaining superior image quality compared with several state-of-the-art CycleGAN-based methods.
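The abstract states that local deformations are learned by modified deformable convolutional layers. As a rough, hedged illustration (not the authors' implementation), the following PyTorch sketch shows a standard deformable convolution block in which per-pixel sampling offsets are predicted from the input feature map; the class name, offset predictor, and input shapes are hypothetical.

```python
# Minimal sketch, assuming a standard deformable convolution as in
# torchvision.ops; this is not the DiCyc code, only an illustration of the
# building block the abstract refers to.
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d

class DeformableBlock(nn.Module):
    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        # Two offset values (dx, dy) per kernel sampling location,
        # predicted from the input itself.
        self.offset_pred = nn.Conv2d(in_ch, 2 * k * k, kernel_size=k, padding=k // 2)
        self.deform_conv = DeformConv2d(in_ch, out_ch, kernel_size=k, padding=k // 2)

    def forward(self, x):
        offsets = self.offset_pred(x)        # (N, 2*k*k, H, W)
        return self.deform_conv(x, offsets)  # spatially adaptive sampling

x = torch.randn(1, 1, 64, 64)                # e.g. a single-channel MR slice
y = DeformableBlock(1, 16)(x)
print(y.shape)                               # torch.Size([1, 16, 64, 64])
```

In this kind of block the learned offsets let the convolution sample off the regular grid, which is one plausible way to absorb small domain-specific geometric differences before they reach the synthesis path.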
| Item Type: | Article |
| --- | --- |
| Uncontrolled Keywords: | Information fusion; GAN; Image synthesis |
| Divisions: | Faculty of Science and Health; Faculty of Science and Health > Computer Science and Electronic Engineering, School of |
| SWORD Depositor: | Unnamed user with email elements@essex.ac.uk |
| Depositing User: | Unnamed user with email elements@essex.ac.uk |
| Date Deposited: | 08 Jan 2021 10:58 |
| Last Modified: | 30 Oct 2024 16:29 |
| URI: | http://repository.essex.ac.uk/id/eprint/28936 |
Available files
Filename: 1-s2.0-S1566253520303845-main.pdf
Licence: Creative Commons: Attribution 3.0