Please use this identifier to cite or link to this item:
http://hdl.handle.net/20.500.11861/10461
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Wang, Zuojun | en_US |
dc.contributor.author | Nawaz, Mehmood | en_US |
dc.contributor.author | Khan, Sheheryar | en_US |
dc.contributor.author | Xia, Peng | en_US |
dc.contributor.author | Irfan, Muhammad | en_US |
dc.contributor.author | Wong, Eddie C. | en_US |
dc.contributor.author | Chan, Russell | en_US |
dc.contributor.author | Cao, Peng | en_US |
dc.date.accessioned | 2024-09-07T06:04:15Z | - |
dc.date.available | 2024-09-07T06:04:15Z | - |
dc.date.issued | 2023 | - |
dc.identifier.citation | Computerized Medical Imaging and Graphics, 2023, vol. 108, article no. 102272. | en_US |
dc.identifier.issn | 1879-0771 | - |
dc.identifier.issn | 0895-6111 | - |
dc.identifier.uri | http://hdl.handle.net/20.500.11861/10461 | - |
dc.description.abstract | This paper presents a cross-modality generative learning framework for transitive magnetic resonance imaging (MRI) from electrical impedance tomography (EIT). The proposed framework is aimed at converting low-resolution EIT images to high-resolution wrist MRI images using a cascaded cycle generative adversarial network (CycleGAN) model. This model comprises three main components: the collection of initial EIT images from the medical device, the generation of a high-resolution transitive EIT image from the corresponding MRI image for domain adaptation, and the coalescence of two CycleGAN models for cross-modality generation. The initial EIT image was generated at three different frequencies (70 kHz, 140 kHz, and 200 kHz) using a 16-electrode belt. Wrist T1-weighted images were acquired on a 1.5T MRI scanner. A total of 19 normal volunteers were imaged using both EIT and MRI, which resulted in 713 paired EIT and MRI images. The cascaded CycleGAN, end-to-end CycleGAN, and Pix2Pix models were trained and tested on the same cohort. The proposed method achieved the highest accuracy in bone detection: 0.97 for the proposed cascaded CycleGAN, versus 0.68 for end-to-end CycleGAN and 0.70 for the Pix2Pix model. Visual inspection showed that the proposed method reduced bone-related errors in the MRI-style anatomical reference compared with end-to-end CycleGAN and Pix2Pix. Multifrequency EIT inputs reduced the testing normalized root mean squared error (NRMSE) of the MRI-style anatomical reference from 67.9% ± 12.7% to 61.4% ± 8.8% compared with single-frequency EIT. The mean conductivity values of fat and bone from regularized EIT were 0.0435 ± 0.0379 S/m and 0.0183 ± 0.0154 S/m, respectively, when the anatomical prior was employed. These results demonstrate that the proposed framework is able to generate MRI-style anatomical references from EIT images with a good degree of accuracy. | en_US |
dc.language.iso | en | en_US |
dc.relation.ispartof | Computerized Medical Imaging and Graphics | en_US |
dc.title | Cross modality generative learning framework for anatomical transitive Magnetic Resonance Imaging (MRI) from Electrical Impedance Tomography (EIT) image | en_US |
dc.type | Peer Reviewed Journal Article | en_US |
dc.identifier.doi | https://doi.org/10.1016/j.compmedimag.2023.102272 | - |
item.fulltext | No Fulltext | - |
crisitem.author.dept | Department of Applied Data Science | - |
Appears in Collections: Applied Data Science - Publication
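
The abstract reports the testing normalized root mean squared error (NRMSE) of the generated MRI-style anatomical reference (e.g., 61.4% ± 8.8% with multifrequency EIT inputs). Below is a minimal sketch of how such a metric is commonly computed, assuming the error is normalized by the dynamic range of the reference image; the record does not state the exact normalization convention, and all function and variable names here are illustrative rather than taken from the paper's code.

```python
import numpy as np


def nrmse(reference: np.ndarray, generated: np.ndarray) -> float:
    """Normalized root mean squared error between a reference MRI slice
    and a generated MRI-style image, returned as a fraction.

    Normalization by the dynamic range (max - min) of the reference is
    assumed here; other conventions (e.g., mean or L2 norm) also exist.
    """
    ref = reference.astype(float)
    gen = generated.astype(float)
    rmse = np.sqrt(np.mean((ref - gen) ** 2))
    return rmse / (ref.max() - ref.min())


# Illustrative usage: compare a synthetic 128x128 "reference" slice
# with a slightly perturbed "generated" slice.
rng = np.random.default_rng(0)
ref_slice = rng.random((128, 128))
gen_slice = ref_slice + 0.05 * rng.standard_normal((128, 128))
print(f"NRMSE: {100 * nrmse(ref_slice, gen_slice):.1f}%")
```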