Philip K. Lee^{1,2}, Makai Mann^{1}, and Brian A. Hargreaves^{1,2,3}

Deep learning has been applied to the parallel imaging problem of resolving coherent aliasing in the image domain. Convolutional neural networks have a finite receptive field of view (FOV), in which each output pixel is a function of a limited number of input pixels. For uniformly undersampled data, a simple hypothesis is that including the aliased peak in the receptive FOV improves suppression of aliasing. We show that a simple channel augmentation scheme resolves aliasing using 50x fewer parameters than a large U-Net with millions of parameters and a global receptive FOV. The method was tested on retrospectively undersampled knee volumes.
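The channel augmentation idea can be sketched as follows. For a uniform undersampling factor R along one axis of length N, aliased replicas are offset by multiples of N/R pixels, so stacking circularly shifted copies of the aliased image as input channels places every replica inside a small receptive FOV. This is a minimal sketch under our assumptions about the scheme; the function name `shift_augment` and the single-axis, magnitude-image setting are illustrative, not the authors' exact implementation.

```python
import numpy as np

def shift_augment(image, R):
    """Stack R circularly shifted copies of an aliased 2D image as
    channels. For uniform undersampling factor R along axis 0 of
    length H, replicas sit at multiples of H // R pixels, so each
    shifted channel brings one replica on top of the original pixel.
    image: (H, W) array; returns (R, H, W) channel-augmented input."""
    H = image.shape[0]
    shifts = [r * H // R for r in range(R)]
    # np.roll performs the circular (wrap-around) shift along axis 0.
    return np.stack([np.roll(image, s, axis=0) for s in shifts], axis=0)
```

The augmented array would then be fed to the network in place of the single-channel aliased image, leaving the convolutional layers themselves unchanged.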

ShiftNet slightly outperformed the large U-Net for three subjects and slightly underperformed for one subject. A possible cause for this relative improvement is that the smaller number of layers in ShiftNet eases gradient propagation and thus improves convergence. A second possibility is reduced overfitting in the small U-Net, as evidenced by the validation-loss gap between ShiftNet and the large U-Net shown in Figure 2.

Our preliminary study considered magnitude images at a small undersampling factor. Applying the channel augmentation scheme at higher undersampling factors could yield larger reconstruction differences relative to the reference while further reducing the number of parameters the network requires. The results here suggest that explicitly incorporating k-space operations that act on a global spatial scale would improve a neural network's ability to resolve random undersampling artifacts.
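The global-scale relationship that motivates the shifts can be verified numerically: zero-filling all but every R-th k-space line of a uniformly sampled acquisition yields an image equal to the average of R circularly shifted copies of the original, offset by N/R pixels. The sketch below checks this identity with NumPy's FFT conventions; the array sizes and the real-valued test image are illustrative assumptions, not data from the study.

```python
import numpy as np

# Aliasing model for uniform undersampling by factor R along axis 0:
# keeping only k-space rows 0, R, 2R, ... multiplies the spectrum by a
# comb, which in image domain averages R copies shifted by N // R.
N, R = 8, 2
rng = np.random.default_rng(0)
img = rng.standard_normal((N, N))  # hypothetical real-valued image

ksp = np.fft.fft2(img)
mask = np.zeros(N)
mask[::R] = 1.0                    # retain every R-th phase-encode line
aliased = np.fft.ifft2(ksp * mask[:, None])

# Predicted aliased image: mean of R circularly shifted copies.
model = sum(np.roll(img, r * N // R, axis=0) for r in range(R)) / R
```

Because the retained line set is conjugate-symmetric for a real input, `aliased` is real up to floating-point error and matches `model` exactly, which is the coherent-replica structure the channel augmentation exploits.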
