Zi Wang^{1}, Chen Qian^{1}, Di Guo^{2}, Hongwei Sun^{3}, Rushuai Li^{4}, and Xiaobo Qu^{1}

^{1}Department of Electronic Science, Biomedical Intelligent Cloud R&D Center, Fujian Provincial Key Laboratory of Plasma and Magnetic Resonance, National Institute for Data Science in Health and Medicine, Xiamen University, Xiamen, China, ^{2}School of Computer and Information Engineering, Xiamen University of Technology, Xiamen, China, ^{3}United Imaging Research Institute of Intelligent Imaging, Beijing, China, ^{4}Department of Nuclear Medicine, Nanjing First Hospital, Nanjing Medical University, Nanjing, China

Deep learning has shown astonishing performance in accelerated MRI. Most methods adopt convolutional neural networks and perform 2D convolution, since many MR images, or their corresponding k-space data, are 2D. In this work, we take a different approach and explore memory-friendly 1D convolution, which makes the deep network easier to train and generalize. Building on this, we propose a one-dimensional deep learning architecture (ODL) for MRI reconstruction. Results demonstrate that the proposed ODL provides better reconstructions than state-of-the-art methods and is robust to certain mismatches between the training and test data.

Furthermore, we propose a One-dimensional Deep Learning architecture (ODL). Its unrolled reconstruction incorporates both k-space and image-domain priors. As shown in Figure 2, the $$$k^{th}(k=1,...,K)$$$ network phase is designed as

$$\text{(k-space learning module)}:\ \mathbf{r}_{m}^{(k)}=\lambda^{(k)}[{{\mathcal{N}}_{1}}(\mathbf{e}_{m}^{(k-1)})],\ \tag {1}$$

$$\text{(Data consistency module)}:\ \mathbf{d}_{m}^{(k)}=\mathbf{e}_{m}^{(k-1)}-{{\gamma }^{(k)}}[{{\mathcal{U}}^{\text{*}}}(\mathcal{U}\mathbf{e}_{m}^{(k-1)}-{{\mathbf{z}}_{m}})+2\mathbf{r}_{m}^{(k)}],\ \tag {2}$$

$$\text{(Image-domain learning module)}:\ \mathbf{e}_{m}^{(k)}={{\mathcal{F}}_{PE}}{{\mathcal{N}}_{3}}[soft({{\mathcal{N}}_{2}}(\mathcal{F}_{PE}^{\text{*}}\mathbf{d}_{m}^{(k)});{{\theta }^{(k)}})],\ \tag {3}$$

where $$$\mathbf{e}_{m}$$$ is the 1D hybrid data to be reconstructed, $$$\mathcal{U}$$$ is the undersampling operator with zero-filling, $$$\mathcal{F}_{PE}$$$ is the 1D FT along the PE, and the superscript $$$*$$$ denotes the inverse (adjoint) operation. $$${\lambda }^{(k)}$$$ and $$${\gamma }^{(k)}$$$ are learnable network parameters initialized to 0.001 and 1, respectively. $$${\mathcal{N}}_{1}$$$, $$${\mathcal{N}}_{2}$$$, and $$${\mathcal{N}}_{3}$$$ are multi-layer 1D CNNs with 6, 3, and 3 layers, respectively. Each convolutional layer contains 48 1D convolution filters of size 3, followed by batch normalization[12] and ReLU. $$$soft(x;\rho )=\max \left\{ \left| x \right|-\rho ,0 \right\}\cdot {x}/{\left| x \right|}$$$ is the element-wise soft-thresholding operator, and $$${\theta }^{(k)}$$$ is a learnable threshold initialized to 0.001. When $$$k=1$$$, the network input $$$\mathbf{e}_{m}^{(0)}={{\mathcal{U}}^{\text{*}}}{{\mathbf{z}}_{m}}$$$ is the zero-filled 1D hybrid data with strong artifacts. The overall number of network phases in our implementation is 10, i.e., $$$K=10$$$. We employ the proposed 1D learning scheme to train the network by minimizing the mean square error loss.
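As a concrete illustration (a minimal NumPy sketch, not the authors' implementation), the soft-thresholding operator and the data-consistency module of Eq. (2) can be written as follows. Here the sampling operator $$$\mathcal{U}$$$ is represented by a binary mask, so that $$$\mathcal{U}^{*}\mathcal{U}$$$ reduces to element-wise masking; the function names `soft_threshold` and `data_consistency` are illustrative.

```python
import numpy as np

def soft_threshold(x, rho):
    """Element-wise soft-thresholding soft(x; rho) = max(|x| - rho, 0) * x / |x|.
    Works for complex x: the factor x/|x| preserves the phase."""
    mag = np.abs(x)
    scale = np.maximum(mag - rho, 0.0) / np.maximum(mag, np.finfo(float).eps)
    return scale * x

def data_consistency(e_prev, z, mask, r, gamma):
    """Data-consistency module of Eq. (2). With U* the zero-filling adjoint,
    U*(U e - z) keeps the residual only on sampled locations (mask == 1)."""
    residual = mask * e_prev - z          # U e - z, restricted to sampled entries
    return e_prev - gamma * (residual + 2.0 * r)
```

With `gamma = 1`, `r = 0`, and fully consistent data (`z = mask * e_prev`), the module leaves the input unchanged, as expected from Eq. (2).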

In the reconstruction stage, the 1D IFT along the FE is first performed on the undersampled k-space to obtain $$$\mathbf{Z}=\mathcal{F}_{FE}^{\text{*}}\mathbf{Y}$$$. All rows of $$$\mathbf{Z}$$$ form a batch that is reconstructed in parallel and stitched back together to yield the reconstructed hybrid data $$$\mathbf{\hat{E}}$$$. After performing the 1D IFT along the PE, we obtain the final reconstructed image $$$\mathbf{\hat{S}}=\mathcal{F}_{PE}^{\text{*}}\mathbf{\hat{E}}$$$, as shown in Figure 1.
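This hybrid-domain pipeline can be sketched in a few lines of NumPy, under the assumptions that the FE lies along the first array axis, the PE along the second, and `reconstruct_rows` stands in for the trained 1D network (any row-wise operator fits here):

```python
import numpy as np

def reconstruct_image(Y, reconstruct_rows):
    """Hybrid-domain reconstruction: k-space Y -> hybrid data Z -> image S_hat."""
    Z = np.fft.ifft(Y, axis=0, norm="ortho")          # 1D IFT along the FE
    E_hat = reconstruct_rows(Z)                       # rows restored as one batch
    S_hat = np.fft.ifft(E_hat, axis=1, norm="ortho")  # 1D IFT along the PE
    return S_hat
```

As a sanity check, with a fully sampled $$$\mathbf{Y}$$$ and an identity "network", the pipeline reduces to the inverse 2D FT and recovers the original image exactly.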

(i) Matched reconstruction: Figure 3 demonstrates that, for any number of training subjects

(ii) Mismatched reconstruction: Mismatched reconstruction refers to using a trained network to reconstruct test datasets whose acquisition specifications differ from those of the training datasets. Here, we focus on mismatches in knee plane orientation (train/reconstruct: coronal/sagittal) and brain contrast weighting (train/reconstruct: T

(iii) By virtue of the 1D CNN, the proposed ODL has few trainable parameters (664,350), about 3% of IUNET, 2% of DOTA, and 26% of HDSLR, and is therefore memory-friendly.
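The parameter saving follows directly from the filter geometry: for the same channel widths, a 1D filter of length 3 carries one third of the weights of a 3×3 2D filter. A quick check (bias and batch-normalization terms omitted for simplicity; `conv_weights` is an illustrative helper, not part of ODL):

```python
def conv_weights(in_ch, out_ch, *kernel):
    """Number of weights in a convolution layer (bias excluded)."""
    n = in_ch * out_ch
    for k in kernel:
        n *= k
    return n

w1d = conv_weights(48, 48, 3)     # one 1D layer as used in ODL
w2d = conv_weights(48, 48, 3, 3)  # the 2D counterpart with a 3x3 kernel
print(w1d, w2d, w2d // w1d)       # 6912 20736 3
```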

See more details in the full-length paper: https://csrc.xmu.edu.cn/. This work was supported in part by the National Natural Science Foundation of China under grants 62122064, 61971361, 61871341, and 61811530021, the National Key R&D Program of China under grant 2017YFC0108703, and the Xiamen University Nanqiang Outstanding Talents Program. The authors thank Xinlin Zhang and Jian Wu for assisting in data processing and helpful discussions. The authors thank Weiping He, Shaorong Fang, and Tianfu Wu from Information and Network Center of Xiamen University for the help with the GPU computing. The authors also thank Drs. Michael Lustig, Jong Chul Ye, Taejoon Eo, and Mathews Jacob for sharing their codes online.

The correspondence should be sent to Prof. Xiaobo Qu (Email: quxiaobo@xmu.edu.cn)

[1] M. G. Harisinghani, A. O'Shea, and R. Weissleder, "Advances in clinical MRI technology," Science Translational Medicine, vol. 11, no. 523, p. eaba2591, 2019.

[2] M. Lustig, D. Donoho, and J. M. Pauly, "Sparse MRI: The application of compressed sensing for rapid MR imaging," Magnetic Resonance in Medicine, vol. 58, no. 6, pp. 1182-1195, 2007.

[3] F. Knoll et al., "Deep-learning methods for parallel magnetic resonance imaging reconstruction: A survey of the current approaches, trends, and issues," IEEE Signal Processing Magazine, vol. 37, no. 1, pp. 128-140, 2020.

[4] J. C. Ye, Y. Han, and E. Cha, "Deep convolutional framelets: A general deep learning framework for inverse problems," SIAM Journal on Imaging Sciences, vol. 11, no. 2, pp. 991-1048, 2018.

[5] A. Pramanik, H. Aggarwal, and M. Jacob, "Deep generalization of structured low-rank algorithms (Deep-SLR)," IEEE Transactions on Medical Imaging, vol. 39, no. 12, pp. 4186-4197, 2020.

[6] T. Eo, H. Shin, Y. Jun, T. Kim, and D. Hwang, "Accelerating Cartesian MRI by domain-transform manifold learning in phase-encoding direction," Medical Image Analysis, vol. 63, p. 101689, 2020.

[7] T. Lu et al., "pFISTA-SENSE-ResNet for parallel MRI reconstruction," Journal of Magnetic Resonance, vol. 318, p. 106790, 2020.

[8] D. C. Noll, D. G. Nishimura, and A. Macovski, "Homodyne detection in magnetic resonance imaging," IEEE Transactions on Medical Imaging, vol. 10, no. 2, pp. 154-163, 1991.

[9] K. P. Pruessmann, M. Weiger, M. B. Scheidegger, and P. Boesiger, "SENSE: Sensitivity encoding for fast MRI," Magnetic Resonance in Medicine, vol. 42, no. 5, pp. 952-962, 1999.

[10] Y. Yang, F. Liu, Z. Jin, and S. Crozier, "Aliasing artefact suppression in compressed sensing MRI for random phase-encode undersampling," IEEE Transactions on Biomedical Engineering, vol. 62, no. 9, pp. 2215-2223, 2015.

[11] X. Zhang et al., "Accelerated MRI reconstruction with separable and enhanced low-rank Hankel regularization," arXiv: 2107.11650, 2021.

[12] S. Ioffe and C. Szegedy, "Batch normalization: Accelerating deep network training by reducing internal covariate shift," arXiv: 1502.03167, 2015.

[13] K. Hammernik et al., "Learning a variational network for reconstruction of accelerated MRI data," Magnetic Resonance in Medicine, vol. 79, no. 6, pp. 3055-3071, 2018.

[14] M. A. Griswold et al., "Generalized autocalibrating partially parallel acquisitions (GRAPPA)," Magnetic Resonance in Medicine, vol. 47, no. 6, pp. 1202-1210, 2002.

[15] W. Zhou, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, "Image quality assessment: from error visibility to structural similarity," IEEE Transactions on Image Processing, vol. 13, no. 4, pp. 600-612, 2004.