Abstract
Data-assisted reconstruction algorithms, incorporating trained neural networks, are a novel paradigm for solving inverse problems. One approach is to first apply a classical reconstruction method and then apply a neural network to improve its solution. Empirical evidence shows that plain two-step methods provide high-quality reconstructions, but they lack a convergence analysis as known for classical regularization methods. In this paper we formalize the use of such two-step approaches in the context of classical regularization theory. We propose data-consistent neural networks that can be combined with classical regularization methods. This yields a data-driven regularization method for which we provide a convergence analysis with respect to noise. Numerical simulations show that, compared to standard two-step deep learning methods, our approach provides better stability with respect to out-of-distribution examples in the test set, while performing similarly on test data drawn from the distribution of the training set. Our method provides a stable solution approach to inverse problems that beneficially combines the known nonlinear forward model with available information on the desired solution manifold in training data.
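To make the two-step idea concrete, the following is a minimal toy sketch (not the authors' implementation) of the pipeline the abstract describes: a classical Tikhonov reconstruction followed by a learned correction that is constrained to be data-consistent. Here the "network" is a hypothetical placeholder function, the forward operator `A` is an arbitrary underdetermined linear map, and data consistency is enforced by projecting the correction onto the null space of `A`, so the corrected reconstruction explains the measured data exactly as well as the classical one.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: underdetermined linear forward operator A
m, n = 20, 50
A = rng.standard_normal((m, n))
x_true = rng.standard_normal(n)
y = A @ x_true + 0.01 * rng.standard_normal(m)  # noisy measurements

# Step 1: classical reconstruction (Tikhonov regularization)
alpha = 0.1
x0 = np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ y)

# Step 2: learned correction, restricted to the null space of A so that
# the data fit A x is unchanged (a stand-in for a trained network)
def network(x):
    return 0.5 * x  # placeholder for a trained neural network

P_null = np.eye(n) - np.linalg.pinv(A) @ A  # projector onto ker(A)
x_refined = x0 + P_null @ network(x0)

# Data consistency: both reconstructions fit the data identically
assert np.allclose(A @ x_refined, A @ x0, atol=1e-8)
```

In this sketch the correction can only move the reconstruction within the set of solutions consistent with the data, which is one simple way to realize the data-consistency property the abstract refers to; the paper itself develops this in a general (possibly nonlinear) regularization framework.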
| Original language | English |
|---|---|
| Journal | Inverse Problems and Imaging |
| DOIs | |
| Publication status | Published - 1 Jul 2022 |