Multimodal Image Reconstruction Using Supplementary Structural Information in Total Variation Regularization

Daniil Kazantsev, William Lionheart, Philip Withers, Peter Lee

    Research output: Contribution to journal › Article › peer-review

    Abstract

    In this paper, we propose an iterative reconstruction algorithm that uses information from a dataset collected with one modality to increase the resolution and signal-to-noise ratio of a dataset collected with another. The method operates on structural information only, which broadens its suitability across applications. Its main aim is therefore to exploit available supplementary data within the regularization framework. The primary and supplementary datasets can be acquired using complementary imaging modes that capture different types of information (e.g., anatomical and functional in medical imaging). We show that by extracting structural information from the supplementary image (the direction of its level sets), one can enhance the resolution of the other image. Notably, the method enhances edges that are common to both images without suppressing features that show high contrast in the primary image alone. In our iterative algorithm, the available structural information enters through a modified total variation penalty term. Numerical experiments demonstrate the advantages and feasibility of the proposed technique in comparison with other methods.
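    The abstract describes a total variation penalty modified by the direction of the supplementary image's level sets. As a rough illustration of how such a structure-guided penalty can be formed, the NumPy sketch below projects out the component of the primary image's gradient that is parallel to the reference image's level-set normals, so edges shared with the reference are penalized only weakly. This follows the general directional-TV idea; the function name, the smoothing parameter `eta`, and the exact formulation are illustrative assumptions, not the paper's precise algorithm.

    ```python
    import numpy as np

    def grad(u):
        # Forward differences with a replicated (Neumann-like) boundary,
        # so the output arrays keep the same shape as u.
        gx = np.diff(u, axis=0, append=u[-1:, :])
        gy = np.diff(u, axis=1, append=u[:, -1:])
        return gx, gy

    def directional_tv(u, v, eta=1e-2):
        """Directional-TV-style penalty for image u guided by reference v.

        The unit field xi points along the gradient of v (normal to its
        level sets); the component of grad(u) along xi is projected out,
        so jumps of u that align with edges of v cost almost nothing.
        eta smooths the normalization where |grad v| is small.
        """
        vx, vy = grad(v)
        norm = np.sqrt(vx**2 + vy**2 + eta**2)
        xi_x, xi_y = vx / norm, vy / norm      # field along grad(v)
        ux, uy = grad(u)
        dot = ux * xi_x + uy * xi_y            # component of grad(u) along xi
        px = ux - dot * xi_x                   # projection: remove that component
        py = uy - dot * xi_y
        return np.sum(np.sqrt(px**2 + py**2))
    ```

    With a reference image sharing the same edge, the penalty on that edge nearly vanishes; with a flat (uninformative) reference, the expression reduces to ordinary isotropic TV, which is consistent with the stated behavior of not suppressing features unique to the primary image.
    
    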
    Original language: English
    Pages (from-to): 1-18
    Number of pages: 18
    Journal: Sensing and Imaging
    Volume: 15
    Issue number: 1
    DOIs
    Publication status: Published - 21 Aug 2014

    Keywords

    • Anatomical prior
    • Hybrid medical scanners
    • Hybrid modalities
    • Image fusion
    • Positron emission tomography
    • Structural prior

