Please use this identifier to cite or link to this item: http://hdl.handle.net/20.500.11861/10461
DC Field | Value | Language
dc.contributor.author | Wang, Zuojun | en_US
dc.contributor.author | Nawaz, Mehmood | en_US
dc.contributor.author | Khan, Sheheryar | en_US
dc.contributor.author | Xia, Peng | en_US
dc.contributor.author | Irfan, Muhammad | en_US
dc.contributor.author | Wong, Eddie C. | en_US
dc.contributor.author | Chan, Russell | en_US
dc.contributor.author | Cao, Peng | en_US
dc.date.accessioned | 2024-09-07T06:04:15Z | -
dc.date.available | 2024-09-07T06:04:15Z | -
dc.date.issued | 2023 | -
dc.identifier.citation | Computerized Medical Imaging and Graphics, 2023, vol. 108, article no. 102272. | en_US
dc.identifier.issn | 1879-0771 | -
dc.identifier.issn | 0895-6111 | -
dc.identifier.uri | http://hdl.handle.net/20.500.11861/10461 | -
dc.description.abstract | This paper presents a cross-modality generative learning framework for transitive magnetic resonance imaging (MRI) from electrical impedance tomography (EIT). The proposed framework converts low-resolution EIT images into high-resolution wrist MRI images using a cascaded cycle generative adversarial network (CycleGAN) model. This model comprises three main components: collection of the initial EIT image from the medical device, generation of a high-resolution transitive EIT image from the corresponding MRI image for domain adaptation, and coalescence of two CycleGAN models for cross-modality generation. The initial EIT image was generated at three frequencies (70 kHz, 140 kHz, and 200 kHz) using a 16-electrode belt. Wrist T1-weighted images were acquired on a 1.5T MRI scanner. A total of 19 normal volunteers were imaged with both EIT and MRI, yielding 713 paired EIT and MRI images. The cascaded CycleGAN, end-to-end CycleGAN, and Pix2Pix models were trained and tested on the same cohort. The proposed method achieved the highest accuracy in bone detection: 0.97 for the proposed cascaded CycleGAN versus 0.68 for the end-to-end CycleGAN and 0.70 for the Pix2Pix model. Visual inspection showed that the proposed method reduced bone-related errors in the MRI-style anatomical reference compared with the end-to-end CycleGAN and Pix2Pix. Multifrequency EIT inputs reduced the testing normalized root mean squared error of the MRI-style anatomical reference from 67.9% ± 12.7% to 61.4% ± 8.8% relative to single-frequency EIT. The mean conductivity values of fat and bone from regularized EIT were 0.0435 ± 0.0379 S/m and 0.0183 ± 0.0154 S/m, respectively, when the anatomical prior was employed. These results demonstrate that the proposed framework can generate MRI-style anatomical references from EIT images with good accuracy. | en_US
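The abstract reports image quality as a normalized root mean squared error (NRMSE) between the generated MRI-style reference and the ground-truth MRI. As a minimal illustration of that metric, the sketch below computes NRMSE with NumPy; the normalization by the root mean square of the reference image is a common convention and an assumption here, since the paper's exact formula is not given in this record.

```python
import numpy as np

def nrmse(generated, reference):
    """Normalized root mean squared error between two images.

    Assumes normalization by the root mean square of the reference
    image (one common convention; the paper's choice may differ).
    """
    generated = np.asarray(generated, dtype=float)
    reference = np.asarray(reference, dtype=float)
    rmse = np.sqrt(np.mean((generated - reference) ** 2))
    return rmse / np.sqrt(np.mean(reference ** 2))

# Toy check: identical images give 0; a 2x-scaled image gives 0.5.
a = np.ones((4, 4))
print(nrmse(a, a))      # 0.0
print(nrmse(a, 2 * a))  # 0.5
```

A reported value such as 61.4% would correspond to `nrmse(...) ≈ 0.614` under this convention.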
dc.language.iso | en | en_US
dc.relation.ispartof | Computerized Medical Imaging and Graphics | en_US
dc.title | Cross modality generative learning framework for anatomical transitive Magnetic Resonance Imaging (MRI) from Electrical Impedance Tomography (EIT) image | en_US
dc.type | Peer Reviewed Journal Article | en_US
dc.identifier.doi | https://doi.org/10.1016/j.compmedimag.2023.102272 | -
item.fulltext | No Fulltext | -
crisitem.author.dept | Department of Applied Data Science | -
Appears in Collections: Applied Data Science - Publication


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.