Quan Dou^{1}, Xue Feng^{1}, Zhixing Wang^{1}, Daniel Weller^{2}, and Craig Meyer^{1}

Movement of the subject during MRI acquisition degrades image quality. In this study, we adopted a deep CNN to correct motion-corrupted brain images. To obtain paired training data, synthetic motion artifacts were added by simulating k-space acquisition along different sampling trajectories. Quantitative evaluation showed that the CNN significantly improved image quality. The spiral trajectory outperformed the Cartesian trajectory both before and after motion deblurring. A network trained with an L1 loss function achieved better RMSE and SSIM than one trained with an L2 loss function after convergence. Overall, deep learning yields rapid and flexible motion compensation.

Brain images were obtained from an open database^{2}, which comprises T1-weighted FLASH magnitude images of 88 subjects acquired at 1$$$\times$$$1$$$\times$$$1 mm^{3} resolution. Each subject’s image contains 160 or 176 axial slices. 4362 slices were randomly selected as training data, and the remaining 1364 slices were used as test data. Preprocessing included padding each image to 256$$$\times$$$256 and intensity normalization. To simulate motion artifacts, both the original images and their translated and rotated versions were first transformed into Cartesian k-space by a fast Fourier transform (FFT) or into spiral k-space by a nonuniform FFT (NUFFT)^{3}. Specific phase-encoding lines or spiral interleaves in the original k-space were then replaced with the corresponding lines or interleaves from the transformed images. The final motion-corrupted images were reconstructed from the combined k-space by inverse FFT or inverse NUFFT^{4}, as shown in Figure 1. The same percentage of phase-encoding lines or spiral interleaves was corrupted for each trajectory so that the motion artifacts were comparable across trajectories.
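The Cartesian case of this corruption scheme can be sketched in a few lines of NumPy/SciPy. This is a minimal illustration, not the authors' code: the function name, motion parameters, and corruption fraction are assumptions, and the spiral case would substitute a NUFFT/adjoint-NUFFT pair (e.g., from the Michigan Image Reconstruction Toolbox) for the FFT calls.

```python
import numpy as np
from scipy.ndimage import rotate, shift

def simulate_cartesian_motion(image, angle_deg=2.0, shift_px=(1, 0),
                              corrupt_frac=0.3, seed=0):
    """Corrupt a fraction of phase-encoding lines with k-space data
    from a rigidly moved copy of the image (Cartesian trajectory)."""
    # Rigid motion: rotate, then translate the magnitude image
    moved = shift(rotate(image, angle_deg, reshape=False, order=1),
                  shift_px, order=1)

    # 2D FFT of the still and moved images -> k-space
    k_still = np.fft.fftshift(np.fft.fft2(image))
    k_moved = np.fft.fftshift(np.fft.fft2(moved))

    # Replace a random subset of phase-encoding lines (rows here)
    rng = np.random.default_rng(seed)
    n_pe = image.shape[0]
    lines = rng.choice(n_pe, size=int(corrupt_frac * n_pe), replace=False)
    k_combined = k_still.copy()
    k_combined[lines, :] = k_moved[lines, :]

    # Reconstruct the motion-corrupted magnitude image
    return np.abs(np.fft.ifft2(np.fft.ifftshift(k_combined)))

# Toy example: pad to 256x256 and normalize, as in the preprocessing step
img = np.zeros((160, 160))
img[40:120, 40:120] = 1.0
img = np.pad(img, ((48, 48), (48, 48)))   # pad to 256x256
img = img / img.max()                     # intensity normalization
corrupted = simulate_cartesian_motion(img)
```

Replacing the same fraction of k-space lines for every trajectory is what keeps the artifact severity comparable between the Cartesian and spiral experiments.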

Figure 2 shows the network architecture. The deep CNN was implemented in TensorFlow, based on a model first proposed for natural image denoising^{1}. The input to the network is the magnitude-only motion-corrupted image. After several convolutional layers with batch normalization and ReLU activations, a residual image is predicted, and the output of the network is produced by subtracting the residual image from the input. We trained separate networks for the Cartesian and spiral trajectories. The parameters were optimized using the Adam^{5} optimizer with the L1 loss function $$$L=|I_{target}-I_{output}|$$$ and a learning rate of 0.001. We also implemented the L2 loss function $$$L=(I_{target}-I_{output})^2$$$ with a learning rate of 0.0001 to compare performance^{6}.
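A residual (DnCNN-style) network of this kind can be sketched with the Keras functional API as follows. The depth and filter count are assumptions for illustration, since the abstract does not specify them; only the residual subtraction, Adam optimizer, L1 loss, and learning rates come from the text.

```python
import tensorflow as tf

def build_motion_deblur_cnn(depth=8, filters=64):
    """DnCNN-style residual network: the convolutional trunk predicts
    the artifact (residual) image, which is subtracted from the input."""
    inp = tf.keras.Input(shape=(256, 256, 1))          # magnitude image
    x = tf.keras.layers.Conv2D(filters, 3, padding='same',
                               activation='relu')(inp)
    for _ in range(depth - 2):
        x = tf.keras.layers.Conv2D(filters, 3, padding='same',
                                   use_bias=False)(x)
        x = tf.keras.layers.BatchNormalization()(x)
        x = tf.keras.layers.ReLU()(x)
    residual = tf.keras.layers.Conv2D(1, 3, padding='same')(x)
    out = tf.keras.layers.Subtract()([inp, residual])  # clean estimate
    return tf.keras.Model(inp, out)

model = build_motion_deblur_cnn()
# Adam with L1 (mean absolute error) loss and lr = 0.001, as in the text;
# the L2 variant would use loss='mse' with lr = 0.0001.
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
              loss='mae')
```

Predicting the residual rather than the clean image directly simplifies the learning task: the network only has to model the artifact pattern, and the identity mapping comes for free through the subtraction.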

1. Zhang K, Zuo W, Chen Y, Meng D, Zhang L. Beyond a Gaussian denoiser: residual learning of deep CNN for image denoising. IEEE Transactions on Image Processing. 2017 Jul;26(7):3142-55.
2. Bullitt E, Zeng D, Gerig G, Aylward S, Joshi S, Smith JK, Lin W, Ewend MG. Vessel tortuosity and brain tumor malignancy: a blinded study. Academic Radiology. 2005 Oct 1;12(10):1232-40.
3. Fessler JA. Michigan Image Reconstruction Toolbox. Available at https://web.eecs.umich.edu/~fessler/code/
4. Lorch B, Vaillant G, Baumgartner C, Bai W, Rueckert D, Maier A. Automated detection of motion artefacts in MR imaging using decision forests. Journal of Medical Engineering. 2017;2017.
5. Kingma DP, Ba J. Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980. 2014 Dec 22.
6. Zhao H, Gallo O, Frosio I, Kautz J. Loss functions for image restoration with neural networks. IEEE Transactions on Computational Imaging. 2017 Mar;3(1):47-57.