Image Reconstruction by Domain-transform Manifold Learning

Abstract

Image reconstruction is essential for imaging applications across the physical and life sciences, including optical and radar systems, magnetic resonance imaging, X-ray computed tomography, positron emission tomography, ultrasound imaging, and radio astronomy [1,2,3]. During image acquisition, the sensor encodes an intermediate representation of an object in the sensor domain, which is subsequently reconstructed into an image by an inversion of the encoding function. Image reconstruction is challenging because analytic knowledge of the exact inverse transform may not exist a priori, especially in the presence of sensor non-idealities and noise. Thus, the standard reconstruction approach involves approximating the inverse function with multiple ad-hoc stages in a signal processing chain [4,5], the composition of which depends on the details of each acquisition strategy, and often requires expert parameter tuning to optimize reconstruction performance. Here we present a unified framework for image reconstruction—automated transform by manifold approximation (AUTOMAP)—which recasts image reconstruction as a data-driven supervised learning task that allows a mapping between the sensor and the image domain to emerge from an appropriate corpus of training data. We implement AUTOMAP with a deep neural network and exhibit its flexibility in learning reconstruction transforms for various magnetic resonance imaging acquisition strategies, using the same network architecture and hyperparameters. We further demonstrate that manifold learning during training results in sparse representations of domain transforms along low-dimensional data manifolds, and observe superior immunity to noise and a reduction in reconstruction artifacts compared with conventional handcrafted reconstruction methods. In addition to improving the reconstruction performance of existing acquisition methodologies, we anticipate that AUTOMAP and other learned reconstruction approaches will accelerate the development of new acquisition strategies across imaging modalities.
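A minimal, illustrative sketch of an AUTOMAP-style network is below. I am assuming a PyTorch implementation (not the authors' framework or released code); the layer sizes (n = 64, 64 feature maps, 5x5 kernels) and the L1 sparsity penalty on the final convolutional activations follow the paper's description, but the details here are a sketch, not a faithful reimplementation.

```python
# Illustrative AUTOMAP-style network (my sketch, not the authors' code).
# Fully-connected layers approximate the sensor-to-image domain transform;
# a small convolutional stage with sparse activations refines the image.
import torch
import torch.nn as nn

class AUTOMAPSketch(nn.Module):
    def __init__(self, n: int = 64):
        super().__init__()
        self.n = n
        # Input: real and imaginary parts of the sensor-domain data, flattened (2*n*n values).
        self.fc1 = nn.Linear(2 * n * n, n * n)
        self.fc2 = nn.Linear(n * n, n * n)
        self.conv1 = nn.Conv2d(1, 64, kernel_size=5, padding=2)
        self.conv2 = nn.Conv2d(64, 64, kernel_size=5, padding=2)
        # Transposed convolution maps the sparse feature maps back to a one-channel image.
        self.deconv = nn.ConvTranspose2d(64, 1, kernel_size=7, padding=3)

    def forward(self, sensor_data: torch.Tensor):
        # sensor_data: (batch, 2*n*n) flattened measurements, e.g. MRI k-space.
        x = torch.tanh(self.fc1(sensor_data))
        x = torch.tanh(self.fc2(x))
        x = x.view(-1, 1, self.n, self.n)      # reshape onto the image-space grid
        x = torch.relu(self.conv1(x))
        c2 = torch.relu(self.conv2(x))         # sparsity penalty is applied to these activations
        img = self.deconv(c2)
        return img, c2

def automap_loss(pred, target, c2, l1_weight=1e-4):
    # Pixel-wise reconstruction error plus an L1 penalty that encourages
    # sparse convolutional activations (the weight here is a placeholder, not the paper's value).
    return nn.functional.mse_loss(pred, target) + l1_weight * c2.abs().mean()
```

The interesting design choice is the fully-connected front end: because it imposes no structural prior on the encoding, the same architecture can in principle learn Fourier, non-Cartesian, or other acquisition transforms purely from training pairs, at the cost of a parameter count that grows rapidly with image size.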

https://arxiv.org/pdf/1704.08841.pdf

Authors: Bo Zhu, Jeremiah Z. Liu, Bruce R. Rosen, Matthew S. Rosen

What I learned from this Nature paper:

  • This is the first work to use deep learning to reconstruct images directly from sensor-domain data rather than from the image domain, as conventional pipelines do (the sketch after this list contrasts the two approaches).
  • The main concern with this work is generalization: conventional reconstruction methods typically generalize to arbitrary inputs better than learned models do.
  • The first question is how we can make a deep-learning-based image reconstructor safe to deploy.
  • The second question is which other image reconstruction applications can tolerate learning-based models whose generalization improves only gradually.
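To make the sensor-domain versus image-domain distinction concrete, here is a hedged sketch contrasting the conventional handcrafted baseline for fully sampled Cartesian MRI (an inverse FFT of k-space) with a call into a learned model; the function names and the interface of `model` (returning an image plus feature maps, as in the sketch above) are my assumptions, not anything defined in the paper.

```python
# Conventional vs. learned reconstruction (illustrative only).
import numpy as np
import torch

def conventional_recon(k_space: np.ndarray) -> np.ndarray:
    """Handcrafted baseline: inverse 2D FFT of fully sampled Cartesian k-space."""
    return np.abs(np.fft.ifftshift(np.fft.ifft2(np.fft.fftshift(k_space))))

def learned_recon(model: torch.nn.Module, k_space: np.ndarray) -> np.ndarray:
    """Learned reconstruction: flatten real/imaginary parts and run them through the network."""
    flat = np.concatenate([k_space.real.ravel(), k_space.imag.ravel()])
    with torch.no_grad():
        img, _ = model(torch.tensor(flat, dtype=torch.float32).unsqueeze(0))
    return img.squeeze().numpy()
```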

I may have some ideas in RocMind. Let's keep thinking about this work.