Pingfan Song^{1}, Yonina C. Eldar^{2}, Gal Mazor^{2}, and Miguel Rodrigues^{1}

Dictionary matching based MR Fingerprinting (MRF) reconstruction approaches suffer from inherent quantization errors, as well as time-consuming parameter mapping operations that map temporal MRF signals to quantitative tissue parameters. To alleviate these issues, we design a residual convolutional neural network to capture the mapping from temporal MRF signals to tissue parameters. The network is trained on synthetic MRF data simulated with the Bloch equations and fast imaging with steady state precession (FISP) sequences. After training, our network takes a temporal MRF signal as input and directly outputs the corresponding tissue parameters, playing the role of the dictionary and look-up table used in conventional approaches. Moreover, the network outperforms conventional approaches in terms of both inference speed and reconstruction accuracy, as validated on both synthetic data and phantom data generated from healthy subjects.

Magnetic Resonance Fingerprinting (MRF) [1–6] has emerged as a promising Quantitative Magnetic Resonance Imaging (QMRI) approach, with the capability of providing multiple intrinsic tissue spin parameters simultaneously, such as the spin-lattice magnetic relaxation time (T1) and the spin-spin magnetic relaxation time (T2). Based on the fact that the response of each tissue to a given pseudo-random pulse sequence is unique, MRF exploits pseudo-randomized acquisition parameters to create unique temporal signal signatures, analogous to a "fingerprint", for different tissues. A dictionary matching operation is then performed to map a query temporal signature to the best matching entry in a precomputed dictionary, which directly yields the tissue parameters. However, such dictionary matching based signature-to-parameter mapping exhibits some drawbacks [7, 8]: (1) storing the dictionary becomes prohibitively memory-consuming, as the dictionary size often grows exponentially with the number of tissue parameters; (2) finding the best matching dictionary entry is very time-consuming, since it requires computing the inner product between the query temporal signature and every dictionary entry, considerably limiting inference speed.
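To make the baseline concrete, the dictionary matching step described above can be sketched as a maximum normalized inner product search. This is an illustrative implementation, not the authors' code; the array shapes and the `lut` look-up table of parameters are assumptions.

```python
import numpy as np

def dictionary_match(signatures, dictionary, lut):
    """Map each query temporal signature to tissue parameters by
    maximum normalized inner product with a precomputed dictionary.

    signatures: (n_queries, T) array of temporal MRF signatures
    dictionary: (n_entries, T) array of simulated fingerprints
    lut:        (n_entries, P) look-up table of tissue parameters
                (e.g. columns for T1 and T2), aligned with the dictionary
    """
    # l2-normalize rows so the inner product acts as a correlation score
    sig = signatures / np.linalg.norm(signatures, axis=1, keepdims=True)
    dic = dictionary / np.linalg.norm(dictionary, axis=1, keepdims=True)
    # (n_queries, n_entries) inner products: this is the costly step the
    # text refers to, since cost grows linearly with dictionary size per query
    scores = sig @ dic.T
    best = np.argmax(np.abs(scores), axis=1)
    return lut[best]
```

Note that the per-query cost scales with the number of dictionary entries, which is the bottleneck the proposed network avoids.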

The proposed network plays the role of the dictionary and look-up table in dictionary matching based methods, and offers several additional advantages. Owing to its feedforward nature, signature-to-parameter mapping with the proposed network is much faster than conventional dictionary matching. Because a neural network is a compact function representation, storing a trained network requires far less memory than storing a large dictionary. Moreover, serving as a powerful function representation, the network outputs continuous-valued parameters and thus performs well on parameters that may not exist in a simulated dictionary.
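As an illustration of the kind of building block such a network uses, the following is a minimal NumPy forward pass of a 1-D residual block: two "same"-padded convolutions with a ReLU in between and an identity skip connection. This is a sketch of the general residual-CNN idea, not the paper's actual architecture; the channel counts and kernel size are placeholders.

```python
import numpy as np

def conv1d(x, w, b):
    """'Same'-padded 1-D convolution (CNN convention, i.e. correlation).
    x: (C_in, T) input; w: (C_out, C_in, K) weights; b: (C_out,) biases."""
    c_out, c_in, k = w.shape
    t = x.shape[1]
    pad = k // 2
    xp = np.pad(x, ((0, 0), (pad, pad)))  # zero-pad along time axis
    out = np.empty((c_out, t))
    for o in range(c_out):
        acc = np.zeros(t)
        for i in range(c_in):
            for j in range(k):
                acc += w[o, i, j] * xp[i, j:j + t]
        out[o] = acc + b[o]
    return out

def residual_block(x, w1, b1, w2, b2):
    """Conv -> ReLU -> Conv, plus an identity skip connection.
    The second conv must map back to x's channel count for the sum."""
    h = np.maximum(conv1d(x, w1, b1), 0.0)  # ReLU nonlinearity
    return x + conv1d(h, w2, b2)            # skip connection
```

A full network of this kind stacks several such blocks and ends with a small head that regresses the T1/T2 values; inference is a single feedforward pass per signature, independent of any dictionary size.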

- [1] Dan Ma, Vikas Gulani, Nicole Seiberlich, Kecheng Liu, Jeffrey L Sunshine, Jeffrey L Duerk, and Mark A Griswold, “Magnetic resonance fingerprinting,” Nature, vol. 495, no. 7440, p. 187, 2013.
- [2] Yun Jiang, Dan Ma, Nicole Seiberlich, Vikas Gulani, and Mark A Griswold, “MR fingerprinting using fast imaging with steady state precession (FISP) with spiral readout,” Magnetic resonance in medicine, vol. 74, no. 6, pp. 1621–1631, 2015.
- [3] Mike Davies, Gilles Puy, Pierre Vandergheynst, and Yves Wiaux, “A compressed sensing framework for magnetic resonance fingerprinting,” SIAM Journal on Imaging Sciences, vol. 7, no. 4, pp. 2623–2656, 2014.
- [4] Zhe Wang, Hongsheng Li, Qinwei Zhang, Jing Yuan, and Xiaogang Wang, “Magnetic resonance fingerprinting with compressed sensing and distance metric learning,” Neurocomputing, vol. 174, pp. 560–570, 2016.
- [5] Gal Mazor, Lior Weizman, Assaf Tal, and Yonina C Eldar, “Low rank magnetic resonance fingerprinting,” in Engineering in Medicine and Biology Society (EMBC), 2016 IEEE 38th Annual International Conference of the. IEEE, 2016, pp. 439–442.
- [6] Gal Mazor, Lior Weizman, Assaf Tal, and Yonina C Eldar, “Low-rank magnetic resonance fingerprinting,” Medical physics, vol. 45, no. 9, pp. 4066–4084, 2018.
- [7] Ouri Cohen, Bo Zhu, and Matthew S Rosen, “MR fingerprinting deep reconstruction network (DRONE),” Magnetic resonance in medicine, vol. 80, no. 3, pp. 885–894, 2018.
- [8] Elisabeth Hoppe, Gregor Körzdörfer, Tobias Würfl, Jens Wetzl, Felix Lugauer, Josef Pfeuffer, and Andreas Maier, “Deep learning for magnetic resonance fingerprinting: A new approach for predicting quantitative parameter values from time series,” Stud Health Technol Inform, vol. 243, pp. 202–206, 2017.
- [9] Yann LeCun, Yoshua Bengio, and Geoffrey Hinton, “Deep learning,” Nature, vol. 521, no. 7553, p. 436, 2015.
- [10] Ian Goodfellow, Yoshua Bengio, and Aaron Courville, Deep learning, vol. 1, MIT Press, Cambridge, 2016.
- [11] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton, “ImageNet classification with deep convolutional neural networks,” in Advances in neural information processing systems, 2012, pp. 1097–1105.
- [12] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun, “Deep residual learning for image recognition,” in Proc. IEEE Conf. Comput. Vision Pattern Recog, 2016, pp. 770–778.
- [13] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun, “Identity mappings in deep residual networks,” in Proc. Eur. Conf. Comput. Vision. Springer, 2016, pp. 630–645.
- [14] Chao Dong, Chen Change Loy, Kaiming He, and Xiaoou Tang, “Image super-resolution using deep convolutional networks,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 38, no. 2, pp. 295–307, 2016.
- [15] Jiwon Kim, Jung Kwon Lee, and Kyoung Mu Lee, “Deeply-recursive convolutional network for image super-resolution,” in Proc. IEEE Conf. Comput. Vision Pattern Recog, 2016, pp. 1637–1645.
- [16] Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, and Yann N Dauphin, “Convolutional sequence to sequence learning,” arXiv preprint arXiv:1705.03122, 2017.
- [17] Geoffrey Hinton, Li Deng, Dong Yu, George E Dahl, Abdel-rahman Mohamed, Navdeep Jaitly, Andrew Senior, Vincent Vanhoucke, Patrick Nguyen, Tara N Sainath, et al., “Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups,” IEEE Signal processing magazine, vol. 29, no. 6, pp. 82–97, 2012.

Fig. 1. Diagram of the designed 1-D residual CNN for signature-to-parameter mapping in MRF reconstruction. During the training stage, each simulated dictionary entry (a 1-D time sequence) is fed into the network as a training signature, with the corresponding parameters, such as the T1/T2 relaxation times, as the label. During the testing stage, the signature for each pixel is extracted from a stack of de-aliased/denoised imaging contrasts and mapped to the corresponding tissue parameters by the network.

Fig. 2. Parameter restoration performance of the designed network on synthetic data. Blue and red lines represent the ground truth and the estimate of the corresponding parameter. The trained network fits the parameters well over the whole range, yielding high correlation coefficients and low RMSE: for T1 / T2, R² = 0.99999986 / 0.99999963 and RMSE = 0.659 / 0.491. The most striking advantage of the network is its fast inference speed: it takes only 8.2 s to complete the mapping for eighty thousand temporal signatures, 53× faster than the 464.1 s required by the dictionary matching method.
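The R² and RMSE figures reported above follow from standard formulas; here is a small sketch, assuming the caption's R² denotes the squared Pearson correlation coefficient between ground truth and estimate (the exact definition used is not stated):

```python
import numpy as np

def restoration_metrics(truth, estimate):
    """Pearson correlation coefficient r and RMSE between ground-truth
    and estimated parameter values (e.g. T1 or T2 in ms)."""
    r = np.corrcoef(truth, estimate)[0, 1]
    rmse = np.sqrt(np.mean((truth - estimate) ** 2))
    return r, rmse
```

With `r` in hand, `r**2` gives the squared correlation under the assumption above.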

Fig. 3. Visual comparison of parameter restoration using the dictionary matching method [1] and the proposed method on a phantom without subsampling. Our approach gives competitive performance for T1 mapping and much better performance for T2 mapping, with a 7.9 dB SNR gain over the competing method [1]. This is owing to the fact that the trained network is a powerful function representation that outputs continuous-valued parameters. In addition, the network takes only 1.6 s to accomplish the mapping for a pair of T1 / T2 parameter maps of size 128 × 128, 56× faster than the dictionary matching method.

Fig. 4. Visual performance on subsampled phantom data with subsampling ratio 0.15: comparison between Ma et al.'s dictionary matching [1], FLOR [6], and the proposed method, which uses low-rank based signature de-aliasing followed by network-based parameter mapping. Our method outperforms dictionary matching by significant margins and performs competitively with the state-of-the-art FLOR, while being 73× faster than FLOR for parameter mapping. We also note that storing the network requires 20.3 megabytes, whereas storing the training dictionary of size 80,100 × 200 requires more than 100 megabytes.
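The dictionary footprint quoted above is consistent with simple back-of-envelope arithmetic, assuming each of the 80,100 × 200 values is stored as an 8-byte complex64 number (the storage precision is an assumption, not stated in the text):

```python
# Back-of-envelope dictionary footprint: 80,100 entries of length 200,
# each value assumed stored as complex64 (8 bytes).
entries, length, bytes_per_value = 80_100, 200, 8
size_mb = entries * length * bytes_per_value / 1e6  # decimal megabytes
print(f"{size_mb:.2f} MB")
```

This yields roughly 128 MB, in line with the "more than 100 megabytes" figure, versus 20.3 MB for the trained network.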