Satoshi ITO^{1} and Kohei SATO^{1}

Image-domain learning designed for image denoising performs well when aliasing artifacts are incoherent; however, its performance degrades when the artifacts are only weakly incoherent. In this work, a novel image-domain learning CNN is proposed in which images are transformed to a scaled space to improve the incoherency of the artifacts. Simulations and experiments showed that the quality of the obtained image was considerably improved, especially at lower sampling rates, and that the quality was further improved by a cascaded network. The resultant PSNR also exceeded that of a transform-learning method.

Fresnel-transform-based multi-resolution analysis (FREBAS)^{[5,6]} is used to down-scale images. For a one-dimensional signal, the decomposed sub-image of $$$m$$$-th index, $$$\rho(m,x)$$$, in the FREBAS domain can be described equivalently as a convolution integral with a band-pass filter kernel, as given in Eq. (1), where $$$\rho (x)$$$ is the image data, $$$\Delta x$$$ is the pixel width, $$$N$$$ is the number of data points, and $$$D$$$ is a scaling parameter:

$$ \rho (m,x)= \rho (x-mDN \Delta x) \ast {\rm sinc} \left(\frac{2 \pi x}{D \Delta x} \right) \exp\left( -j \frac{2 \pi m x }{ D \Delta x } \right) \quad (1)$$

Even though Eq. (1) is written as a convolution integral, FREBAS can be computed with a few FFTs and IFFTs. Since FREBAS is a complex transform, it can be applied straightforwardly to images with varying phase. Figure 1 shows an example of the FREBAS transform.
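The band-pass-filter view of Eq. (1) can be illustrated with FFTs. The sketch below is only a simplified stand-in for FREBAS (uniform spectral partitioning is an assumption; the function name and interface are hypothetical): each sub-image is the inverse FFT of one spectral band and is complex-valued even for a real input.

```python
import numpy as np

def subband_decompose(signal, D):
    """Split a 1-D complex signal into band-limited sub-images by
    partitioning its spectrum; an illustrative FFT sketch of the
    band-pass interpretation of Eq. (1), not the exact FREBAS transform."""
    N = len(signal)
    spectrum = np.fft.fftshift(np.fft.fft(signal))
    width = int(np.ceil(N / D))            # assumed width of each sub-band
    subimages = []
    for start in range(0, N, width):
        band = np.zeros(N, dtype=complex)
        band[start:start + width] = spectrum[start:start + width]
        # inverse FFT of one spectral band -> one complex sub-image
        subimages.append(np.fft.ifft(np.fft.ifftshift(band)))
    return subimages

rng = np.random.default_rng(0)
x = rng.standard_normal(128) + 1j * rng.standard_normal(128)
bands = subband_decompose(x, D=4)
# the sub-bands tile the spectrum, so their sum recovers the signal
print(np.allclose(sum(bands), x))  # True
```

Because the bands partition the spectrum exactly, summing the sub-images inverts the decomposition; this is why such a transform loses no information before the CNN stage.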
Figure 2 shows the FREBAS transform of an under-sampled image. Figure 2(a) shows a one-dimensionally under-sampled image; Figs. 2(b) and (c) show the FREBAS transform of (a) using D=1.5 and the error image in that domain, respectively. Figure 3 shows the proposed CNN. As shown in Fig. 2, the magnitude of the aliasing artifacts is not distributed uniformly in the FREBAS space, so residual learning was performed separately for the central lower-band image and the higher-band images. Each CNN has two channels, since FREBAS is a complex transform. A deep CNN based on residual learning and batch normalization^{[7,8]} was used to learn the distribution of aliasing artifacts in the scaled domain. To further improve image quality, a cascaded 2-stage network was also examined.
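The two-channel residual-learning setup can be sketched as follows. This is only an illustration of the data layout and training target (the helper names are hypothetical, and no trained network is involved): the CNN input is the artifact-corrupted complex sub-image split into real/imaginary channels, the target is the residual (the artifacts themselves), and the clean image is recovered by subtracting the predicted residual from the input.

```python
import numpy as np

def to_two_channel(img):
    """Stack real and imaginary parts along a leading channel axis,
    since a real-valued CNN cannot ingest complex numbers directly."""
    return np.stack([img.real, img.imag], axis=0)

def from_two_channel(ch):
    """Reassemble a complex image from the two-channel representation."""
    return ch[0] + 1j * ch[1]

rng = np.random.default_rng(1)
clean = rng.standard_normal((8, 8)) + 1j * rng.standard_normal((8, 8))
artifacts = 0.1 * (rng.standard_normal((8, 8)) + 1j * rng.standard_normal((8, 8)))
corrupted = clean + artifacts

x = to_two_channel(corrupted)        # network input, shape (2, 8, 8)
target = to_two_channel(artifacts)   # residual-learning target

# a perfect residual prediction would recover the clean image exactly
recovered = from_two_channel(x - target)
print(np.allclose(recovered, clean))  # True
```

Learning the residual rather than the clean image is the DnCNN-style design choice^{[8]}: the artifact component is sparser and easier to model than the full image content.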
The depth of the CNN was set to 17, giving a receptive field of 35x35. Three types of layers were used: (1) Conv+ReLU for the first layer (64 filters of size 3x3), (2) Conv+BN+ReLU for layers 2-16 (64 filters of size 3x3x64), and (3) Conv for the last layer (a 3x3x64 filter to reconstruct the output).
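The stated receptive field follows from the layer count: each stride-1 3x3 convolution grows the receptive field by 2 pixels per side. A quick check of the arithmetic:

```python
def receptive_field(depth, kernel=3, stride=1):
    """Receptive field of a stack of identical stride-1 convolutions:
    each layer adds (kernel - 1) * stride pixels."""
    rf = 1
    for _ in range(depth):
        rf += (kernel - 1) * stride
    return rf

print(receptive_field(17))  # 35, matching the 35x35 stated above
```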

1. Kwon K, Kim D, Park H. A parallel MR imaging method using multilayer perceptron. Med Phys 2017;44:6209-6224.
2. Deep residual learning for accelerated MRI using magnitude and phase networks. arXiv:1804.00432.
3. Zhu B, Liu JZ, Cauley SF, et al. Image reconstruction by domain-transform manifold learning. Nature 2018;555:487-492.
4. Akçakaya M, Moeller S, Weingärtner S, et al. Scan-specific robust artificial-neural-networks for k-space interpolation (RAKI): database-free deep learning reconstruction for fast imaging. In: Proc Intl Soc Mag Reson Med, 0576, 2018.
5. Ito S, Yamada Y. Multiresolution image analysis using dual Fresnel transform pairs and application to medical image denoising. IEEE International Conference on Image Processing 2003, Barcelona, Spain.
6. Ito S, Yamada Y. FREBAS domain super-resolution reconstruction of MR images. ISMRM 2010, 2936, Stockholm, Sweden.
7. He K, Zhang X, Ren S, et al. Deep residual learning for image recognition. IEEE CVPR 2016:770-778.
8. Zhang K, Zuo W, Chen Y, et al. Beyond a Gaussian denoiser: residual learning of deep CNN for image denoising. IEEE Trans Image Process 2017;26:3142-3155.
9. Yang Y, Sun J, Li H, Xu Z. ADMM-Net: a deep learning approach for compressive sensing MRI. arXiv:1705.06869.

Fig.1
Example of the FREBAS transform; (a) input image, (b) FREBAS-transformed image
using D=5, (c) basis function of the FREBAS transform for each sub-band image
(only the real part is shown).

Fig.2
FREBAS transform of an artifact image; (a) zero-filled image using the under-sampled
signal, (b) down-scaled image by FREBAS transform (D=1.5), (c) distribution of
artifacts in the scaled domain. The artifacts are not distributed uniformly.

Fig.3
Application of deep residual learning CNN to scaled domain. Since the magnitude
of aliasing artifacts is not distributed uniformly in the FREBAS space,
residual learning was performed separately for central lower-band image and
higher-band images. Each CNN has two-channel, since FREBAS is a complex
transform. To improve the obtained image quality cascaded 2-stage network was
examined.

Fig.4 PSNR characteristics; (a) relation
between PSNR and the FREBAS scaling factor D, (b) PSNR comparison among the 2-stage
scaled-domain CNN, single-stage scaled-domain CNN, image-domain CNN, and
ADMM-Net.

Fig.5 Comparison of reconstructed images
using 25% of the signal; (a) fully scanned image, (b) single-stage scaled-domain CNN, (c)
2-stage scaled-domain CNN (proposed), (d) ADMM-Net, (e) zero-filled image using the
under-sampled signal, (f)~(h) error images corresponding to (b)~(d),
respectively.