Abstract
Purpose
To systematically investigate the influence of different data consistency layers and regularization networks, under variations between the training and test data domains, on sensitivity-encoded accelerated parallel MR image reconstruction.
Theory and Methods
Magnetic resonance (MR) image reconstruction is formulated as a learned unrolled optimization scheme with a down-up network as regularization and varying data consistency layers. The proposed networks are compared to other state-of-the-art approaches on the publicly available fastMRI knee and neuro datasets and tested for stability across training configurations that vary in anatomy and number of training samples.
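To make the unrolled scheme concrete, the following is a minimal NumPy sketch of one common variant: a gradient-descent data consistency step on the SENSE forward model, interleaved with a learned regularizer. The function names, the placeholder `regularizer` callable, and the step size `lam` are illustrative assumptions, not the paper's exact architecture (which uses trained down-up networks and compares several data consistency layers).

```python
import numpy as np

def forward_op(x, smaps, mask):
    # SENSE forward operator A: coil-weight the image, Fourier transform,
    # and apply the k-space undersampling mask.
    coil_imgs = smaps * x[None, ...]
    kspace = np.fft.fft2(coil_imgs, norm="ortho")
    return mask[None, ...] * kspace

def adjoint_op(y, smaps, mask):
    # Adjoint A^H: masked inverse FFT per coil, then coil combination
    # with the conjugate sensitivity maps.
    coil_imgs = np.fft.ifft2(mask[None, ...] * y, norm="ortho")
    return np.sum(np.conj(smaps) * coil_imgs, axis=0)

def unrolled_recon(y, smaps, mask, regularizer, n_iter=8, lam=1.0):
    # One gradient-descent data consistency step per unrolled iteration,
    # x <- x - lam * A^H(Ax - y) - R(x), where R is a learned network
    # (here: any callable; a zero function recovers plain gradient descent).
    x = adjoint_op(y, smaps, mask)  # zero-filled initialization
    for _ in range(n_iter):
        grad_dc = adjoint_op(forward_op(x, smaps, mask) - y, smaps, mask)
        x = x - lam * grad_dc - regularizer(x)
    return x
```

In the learned setting, `regularizer` would be a trained network (e.g., a down-up network) with its own parameters per iteration, and `lam` would typically also be learned; proximal-type data consistency layers replace the gradient step with an exact or iterative solve of the data term.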
Results
Data consistency layers and expressive regularization networks, such as the proposed down-up networks, form the cornerstone for robust MR image reconstruction. Physics-based reconstruction networks outperform post-processing methods substantially for R = 4 in all cases and for R = 8 when the training and test data are aligned. At R = 8, aligning training and test data is more important than architectural choices.
Conclusion
In this work, we study how dataset size affects single-anatomy and cross-anatomy training of neural networks for MRI reconstruction. The study provides insights into the robustness, properties, and acceleration limits of state-of-the-art networks and our proposed down-up networks. These insights are essential for successfully translating learning-based MRI reconstruction to clinical practice, where limited datasets and a variety of imaged anatomies are the norm.
| Original language | English |
| --- | --- |
| Journal | Magnetic Resonance in Medicine |
| DOIs | |
| Publication status | Published - Oct 2021 |