TY - GEN
T1 - Region-enhanced joint dictionary learning for cross-modality synthesis in diffusion tensor imaging
AU - Wang, Danyang
AU - Huang, Yawen
AU - Frangi, Alejandro F.
N1 - Publisher Copyright:
© 2017, Springer International Publishing AG.
PY - 2017/9/30
Y1 - 2017/9/30
N2 - Diffusion tensor imaging (DTI) has notoriously long acquisition times, and the sensitivity of the tensor computation often makes this technique vulnerable to various interferences, for example, physiological motion, limited scanning time, and patients with different medical conditions. In neuroimaging, studies usually involve multiple modalities. We consider the problem of inferring key information in DTI from other modalities. To address this problem, several cross-modality image synthesis approaches have been proposed recently, in which the content of one image modality is reproduced based on that of another. However, these methods typically focus on two modalities of the same complexity. In this work, we propose a region-enhanced joint dictionary learning method that combines region-specific information in a joint learning manner. The proposed method encodes intrinsic differences among modalities, while the jointly learned dictionaries preserve common structures among them. Experimental results show that our approach has desirable properties for cross-modality image synthesis in diffusion tensor images.
AB - Diffusion tensor imaging (DTI) has notoriously long acquisition times, and the sensitivity of the tensor computation often makes this technique vulnerable to various interferences, for example, physiological motion, limited scanning time, and patients with different medical conditions. In neuroimaging, studies usually involve multiple modalities. We consider the problem of inferring key information in DTI from other modalities. To address this problem, several cross-modality image synthesis approaches have been proposed recently, in which the content of one image modality is reproduced based on that of another. However, these methods typically focus on two modalities of the same complexity. In this work, we propose a region-enhanced joint dictionary learning method that combines region-specific information in a joint learning manner. The proposed method encodes intrinsic differences among modalities, while the jointly learned dictionaries preserve common structures among them. Experimental results show that our approach has desirable properties for cross-modality image synthesis in diffusion tensor images.
KW - Cross-modality
KW - Dictionary learning
KW - DTI
KW - Image synthesis
UR - http://www.scopus.com/inward/record.url?scp=85031430785&partnerID=8YFLogxK
U2 - 10.1007/978-3-319-68127-6_5
DO - 10.1007/978-3-319-68127-6_5
M3 - Conference contribution
AN - SCOPUS:85031430785
SN - 9783319681269
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 41
EP - 48
BT - Simulation and Synthesis in Medical Imaging
A2 - Gooya, Ali
A2 - Frangi, Alejandro F.
A2 - Tsaftaris, Sotirios A.
A2 - Prince, Jerry L.
PB - Springer Cham
CY - Cham
T2 - 2nd International Workshop on Simulation and Synthesis in Medical Imaging, SASHIMI 2017, Held in Conjunction with the 20th International Conference on Medical Image Computing and Computer-Assisted Intervention, MICCAI 2017
Y2 - 10 September 2017 through 10 September 2017
ER -