Shanshan Wang^{1}, Yanxia Chen^{1}, Leslie Ying^{2}, Cheng Li^{1}, Ziwen Ke^{1}, Taohui Xiao^{1}, Xin Liu^{1}, Dong Liang^{1}, and Hairong Zheng^{1}

Compressed sensing MRI (CS-MRI) is a popular technique for accelerating dynamic MR imaging. Nevertheless, the reconstruction is normally time-consuming and its parameters have to be hand-tuned. To address this challenge, we solve a CS-based dynamic MR imaging problem by combining the Alternating Direction Method of Multipliers (ADMM) with deep learning. Specifically, we introduce a deep network, dubbed DCTV-NET, for dynamic magnetic resonance image reconstruction from highly under-sampled k-t space data. Experimental results demonstrate that our method outperforms state-of-the-art dynamic MRI methods.

The essence of the proposed approach is to integrate the merit of model-based methods in finding theoretically optimal or sub-optimal solutions with the strength of deep learning based methods in automatically learning the weighting parameters at higher reconstruction speed. The dynamic MR imaging reconstruction problem can be described as follows:

$$\min_{x,z}\frac{1}{2}\Vert Ax-y\Vert_2^2+\sum_{l=1}^L\lambda_lg(D_lz) \ \ s.t.\ z=x$$

where $$$A=PF$$$ is the measurement matrix, with P the undersampling pattern and F the Fourier transform; x is the image to be reconstructed; y is the under-sampled k-space data; $$$\lambda_l$$$ is a regularization parameter; $$$g(\cdot)$$$ is a regularization function encoding the data prior; $$$D_l$$$ is a filtering operation; and z is an auxiliary variable in the spatial domain. The corresponding augmented Lagrangian is:
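For concreteness, the forward operator $$$A=PF$$$ and its adjoint can be sketched as below. The random Cartesian mask is a hypothetical stand-in, since the abstract specifies the pattern P only through the sampling rate.

```python
import numpy as np

# Sketch of the measurement model A = PF for a single frame. The mask
# standing in for the undersampling pattern P is a hypothetical random
# Cartesian pattern at roughly 5-fold acceleration.
rng = np.random.default_rng(0)
nx, ny = 64, 64
x = rng.standard_normal((nx, ny))        # image to be reconstructed
mask = rng.random((nx, ny)) < 0.2        # undersampling pattern P

def A(img, mask):
    """Undersampled Fourier measurement: y = P F x."""
    return mask * np.fft.fft2(img, norm="ortho")

def A_adjoint(ksp, mask):
    """Adjoint A^T = F^T P^T (zero-filled reconstruction)."""
    return np.fft.ifft2(mask * ksp, norm="ortho")

y = A(x, mask)                           # under-sampled k-space data
x_zf = A_adjoint(y, mask)                # zero-filled estimate
```

With the orthonormal FFT, `A_adjoint` is the exact adjoint of `A`, which is what the $$$P^T$$$ and $$$F^T$$$ terms in the update formulas below rely on.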

$$\mathcal L_\rho(x,z,\alpha)=\frac{1}{2}\Vert Ax-y\Vert_2^2+\sum_{l=1}^L\lambda_lg(D_lz)+\langle \alpha,x-z\rangle+\frac{\rho}{2}\Vert x-z\Vert_2^2$$

Then, it could be transformed into the following sub-problems:

$$\begin{cases}\arg\min\limits_x \frac{1}{2}\Vert Ax-y \Vert_2^2 + \langle \alpha,x-z \rangle+\frac{\rho}{2}\Vert x-z\Vert_2^2 \\ \arg\min\limits_z \sum\limits_{l=1}^L \lambda_lg(D_lz)+ \langle \alpha,x-z \rangle +\frac{\rho}{2}\Vert x-z\Vert_2^2 \\ \arg\max\limits_\alpha \langle \alpha,x-z \rangle\end{cases}$$

The sub-problems have the following solutions, where the z sub-problem is solved by a gradient-descent algorithm. $$$\beta=\alpha/\rho$$$ is the scaled Lagrange multiplier and $$$\widetilde{\eta}$$$ is an update rate.

$$\begin{cases} x^{(n)}=F^T\left(P^TP+\rho^{(n)}I\right)^{-1}\left[P^Ty+\rho^{(n)}F\left(z^{(n-1)}-\beta^{(n-1)}\right)\right]\\z^{(n,k)}= \mu_1z^{(n,k-1)}+\mu_2\left(x^{(n)}+\beta^{(n-1)}\right) - \sum\limits_{l=1}^L \widetilde{\lambda}_l D_l^T \mathcal H\left(D_l z^{(n,k-1)}\right) \\ \beta^{(n)}=\beta^{(n-1)}+\widetilde{\eta}\left(x^{(n)}-z^{(n)}\right) \end{cases}$$
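One stage of these updates can be sketched as follows for a single-coil Cartesian case, where $$$P^TP$$$ is diagonal in k-space so the x-update is elementwise. The single finite-difference filter D, the shrinkage standing in for $$$\mathcal H$$$, and the values of $$$\rho,\mu_1,\mu_2,\widetilde{\lambda},\widetilde{\eta}$$$ are hand-picked placeholders; in DCTV-NET these are learned.

```python
import numpy as np

# One ADMM stage: x-update (closed form), z-update (gradient descent),
# multiplier update. All parameters below are illustrative placeholders.
rng = np.random.default_rng(1)
nx = ny = 32
mask = rng.random((nx, ny)) < 0.3
y = mask * np.fft.fft2(rng.standard_normal((nx, ny)), norm="ortho")

rho, eta, mu1, mu2, lam = 0.5, 1.0, 0.5, 0.5, 0.01
z = np.fft.ifft2(y, norm="ortho")        # zero-filled initialization
beta = np.zeros_like(z)                  # scaled multiplier

def shrink(v, t):
    """Placeholder nonlinear transform H (complex soft shrinkage)."""
    mag = np.abs(v)
    return v / np.maximum(mag, 1e-12) * np.maximum(mag - t, 0.0)

def x_update(z, beta):
    # x = F^T (P^T P + rho I)^{-1} [P^T y + rho F(z - beta)]
    rhs = mask * y + rho * np.fft.fft2(z - beta, norm="ortho")
    return np.fft.ifft2(rhs / (mask + rho), norm="ortho")

def z_update(z, x, beta, n_inner=3):
    # Gradient steps with one circular finite-difference filter D (TV-like).
    for _ in range(n_inner):
        dz = np.roll(z, -1, axis=0) - z                # D z
        h = shrink(dz, lam)                            # H(D z)
        z = mu1 * z + mu2 * (x + beta) - lam * (np.roll(h, 1, axis=0) - h)
    return z

x = x_update(z, beta)
z = z_update(z, x, beta)
beta = beta + eta * (x - z)
```

Note that `np.roll(h, 1, axis=0) - h` applies the exact adjoint $$$D^T$$$ of the circular forward difference, matching the $$$D_l^T\mathcal H(D_lz)$$$ term.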

Our proposed network, DCTV-NET, is shown in Fig. 1. X is the reconstruction layer $$$(X^{(n)})$$$. Z is the denoising layer, decomposed into an addition layer $$$(A^{(n,k)})$$$, convolution layers $$$(C_1^{(n,k)},C_2^{(n,k)})$$$, and a nonlinear transform layer $$$(H^{(n,k)})$$$. M is the multiplier update layer $$$(M^{(n)})$$$.
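The unrolled stage structure can be sketched as below. The layer bodies are simple placeholders (a fixed finite-difference "convolution", a tanh nonlinearity, fixed weights); in the actual network each layer carries learned parameters per stage.

```python
import numpy as np

# Structural sketch of the unrolled DCTV-NET: each stage runs the
# reconstruction layer X, K denoising iterations (addition layer A,
# convolutions C1/C2, nonlinear transform H), then the multiplier layer M.

def X_layer(z, beta, y, mask, rho):
    """Reconstruction layer X^(n): closed-form x-update in k-space."""
    rhs = mask * y + rho * np.fft.fft2(z - beta, norm="ortho")
    return np.fft.ifft2(rhs / (mask + rho), norm="ortho")

def Z_layer(z, x, beta, K=1):
    """Denoising layer Z^(n): A -> C1 -> H -> C2, composed K times."""
    for _ in range(K):
        a = 0.5 * z + 0.5 * (x + beta)             # addition layer A^(n,k)
        c1 = np.roll(a, -1, axis=0) - a            # C1: difference "filter"
        h = np.tanh(c1)                            # H: pointwise nonlinearity
        z = a - 0.1 * (np.roll(h, 1, axis=0) - h)  # C2: adjoint filter + update
    return z

def M_layer(beta, x, z, eta=1.0):
    """Multiplier update layer M^(n)."""
    return beta + eta * (x - z)

def dctv_net(y, mask, n_stages=3, rho=0.5):
    z = np.fft.ifft2(y, norm="ortho")              # zero-filled start
    beta = np.zeros_like(z)
    for _ in range(n_stages):
        x = X_layer(z, beta, y, mask, rho)
        z = Z_layer(z, x, beta)
        beta = M_layer(beta, x, z)
    return x

rng = np.random.default_rng(2)
mask = rng.random((32, 32)) < 0.3
y = mask * np.fft.fft2(rng.standard_normal((32, 32)), norm="ortho")
recon = dctv_net(y, mask)
```

Three stages with one inner iteration each (`n_stages=3`, `K=1`) mirrors the configuration illustrated in Fig. 1.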


Fig. 1: An example of DCTV-NET with three stages and one iteration in each sub-stage.

Fig. 2: Reconstruction results on the cardiac data with the 5-fold acceleration setting.

Tab. 1: Performance comparisons on cardiac data with different random sampling rates.

Fig. 3: SER curves of the compared algorithms at different acceleration factors.