Magnetic Resonance Fingerprinting Using a Residual Convolutional Neural Network
Pingfan Song1, Yonina C. Eldar2, Gal Mazor2, and Miguel Rodrigues1

1Department of EE, University College London, London, United Kingdom, 2Department of EE, Technion - Israel Institute of Technology, Haifa, Israel


Dictionary-matching based MR Fingerprinting (MRF) reconstruction approaches suffer from inherent quantization errors, as well as time-consuming parameter mapping operations that map temporal MRF signals to quantitative tissue parameters. To alleviate these issues, we design a residual convolutional neural network to capture the mapping from temporal MRF signals to tissue parameters. The network is trained on synthetic MRF data simulated with the Bloch equations and fast imaging with steady state precession (FISP) sequences. After training, our network takes a temporal MRF signal as input and directly outputs the corresponding tissue parameters, playing the role of the dictionary and look-up table used in conventional approaches. Moreover, the network outperforms conventional approaches in terms of both inference speed and reconstruction accuracy, as validated on both synthetic data and phantom data generated from healthy subjects.


To noticeably accelerate parameter mapping for Magnetic Resonance Fingerprinting, alleviate the burden of storing a large dictionary, and overcome the quantization errors of conventional dictionary-matching based MRF approaches.


Magnetic Resonance Fingerprinting (MRF) [1–6] has emerged as a promising Quantitative Magnetic Resonance Imaging (QMRI) approach, capable of providing multiple intrinsic tissue spin parameters simultaneously, such as the spin-lattice relaxation time (T1) and the spin-spin relaxation time (T2). Based on the fact that the response of each tissue to a given pseudo-random pulse sequence is unique, MRF exploits pseudo-randomized acquisition parameters to create unique temporal signal signatures, analogous to a "fingerprint", for different tissues. A dictionary matching operation then maps a query temporal signature to the best matching entry in a precomputed dictionary, which yields multiple tissue parameters directly. However, such dictionary-matching based signature-to-parameter mapping exhibits two drawbacks [7, 8]: (1) storing the dictionary becomes prohibitively memory-consuming, as the dictionary size often grows exponentially with the number of tissue parameters; (2) finding the best matching dictionary entry is very time-consuming, since it requires computing the inner product between the query signature and every dictionary entry, which considerably limits inference speed.
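The conventional matching step described above can be sketched in a few lines of plain Python. This is a toy illustration only; the dictionary entries, query signature, and look-up values used with it would be hypothetical, and real implementations vectorize the inner products.

```python
import math

def dictionary_match(signature, dictionary, lookup_table):
    """Map a temporal MRF signature to tissue parameters via
    normalized inner-product matching (conventional MRF approach).

    signature:    list of floats, length T (the query fingerprint)
    dictionary:   list of D entries, each a list of floats of length T
    lookup_table: list of D (T1, T2) tuples, one per dictionary entry
    """
    def norm(v):
        return math.sqrt(sum(x * x for x in v)) or 1.0

    s_norm = norm(signature)
    best_idx, best_score = 0, -float("inf")
    # O(D * T): one inner product per dictionary entry -- the
    # bottleneck that limits inference speed for large dictionaries.
    for i, entry in enumerate(dictionary):
        score = sum(a * b for a, b in zip(signature, entry)) / (s_norm * norm(entry))
        if score > best_score:
            best_idx, best_score = i, score
    # Quantization error: the output can only be one of the D
    # pre-simulated parameter pairs, never a value in between.
    return lookup_table[best_idx]
```

The final line makes the quantization issue concrete: the returned (T1, T2) pair is always one of the D simulated grid points, so the accuracy is limited by the dictionary's parameter resolution.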


We alleviate these issues by leveraging deep neural networks (a.k.a. deep learning) [9, 10] to capture the mapping from temporal MRF signals to tissue parameters, replacing the memory-consuming dictionary and time-consuming dictionary matching. The rationale is that a well-designed and well-tuned deep neural network can approximate complex functions very well, leading to state-of-the-art results in tasks such as image classification, image super-resolution, and many more [11–17]. In particular, since MRF data consist of multiple frames exhibiting temporal similarity across time points, we propose a low-rank based de-aliasing method for restoring clean MRF imaging contrasts from subsampled k-space data. Each recovered signature in the contrasts is then fed into the designed network for parameter restoration. As illustrated in Figure 1, the proposed network has a 1-D residual CNN architecture with shortcuts for residual learning. It takes a temporal MRF signature as input and outputs the corresponding tissue parameters. The network starts with two convolutional layers, followed by 4 residual blocks, and ends with a global-average-pooling layer followed by a fully-connected layer. Each residual block contains a max-pooling layer with stride 2, two convolutional layers, and a shortcut that forces the block to learn residual content. The filter size in every convolutional layer is set to 7. The number of channels, i.e., feature maps, in the first two convolutional layers is 32, and it doubles in each residual block, reaching 512 in the final block. The length of each feature map halves due to the max-pooling in each block. In this way, we gradually reduce temporal resolution while extracting more features to increase content information.
The global-average-pooling layer averages each feature map, integrating the information in each channel for improved robustness to corrupted input data. It also reduces the number of parameters significantly, lessening computation cost and helping prevent over-fitting. The last fully-connected layer outputs the estimated parameters, such as T1 and T2 relaxation times; adjusting the number of outputs to accommodate more parameters is straightforward. Regarding training, the network is trained on a synthesized dictionary and look-up table to learn the signature-to-parameter mapping. Specifically, the network takes each dictionary entry, i.e., a simulated temporal MRF signature, as input and uses the corresponding tissue parameters in the look-up table as the label, in order to capture the mapping between them. Once the network is trained, it directly outputs estimated tissue parameters for any query temporal MRF signature.
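As a sanity check on the architecture described above, the channel and length bookkeeping can be traced in plain Python. This is a sketch of the shape arithmetic, not the network itself; the input length of 200 time points matches the dictionary dimensions reported with the results, but is otherwise an illustrative assumption.

```python
def resnet_shapes(input_length=200, channels=32, num_blocks=4):
    """Trace feature-map (channels, length) sizes through the 1-D
    residual CNN: two stride-1 conv layers, then num_blocks residual
    blocks (each halving the length via stride-2 max-pooling and
    doubling the channel count), then global average pooling."""
    shapes = [("conv1-2", channels, input_length)]  # stride-1 convs keep length
    length = input_length
    for b in range(num_blocks):
        length //= 2   # max-pooling with stride 2 halves temporal resolution
        channels *= 2  # channel count doubles in each residual block
        shapes.append(("res_block_%d" % (b + 1), channels, length))
    shapes.append(("global_avg_pool", channels, 1))  # one value per feature map
    return shapes
```

With the defaults, the channel count runs 32, 64, 128, 256, 512, matching the stated progression from 32 in the first two convolutional layers to 512 in the final residual block, and the global-average-pooling layer collapses each feature map to a single value before the fully-connected output layer.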


The proposed network plays the role of the dictionary and look-up table in dictionary-matching based methods, and offers several additional advantages. Owing to its feedforward nature, signature-to-parameter mapping with the proposed network is much faster than conventional dictionary matching. Because a neural network is a compact function representation, storing a trained network requires less memory than storing a large dictionary. Moreover, as a powerful function representation, a neural network can output continuous-valued parameters, and thus performs well on estimating parameters that may not exist in a simulated dictionary.


This work was supported by the Royal Society International Exchange Scheme IE160348, by the European Union's Horizon 2020 grant ERC-BNYQ, by the Israel Science Foundation grant no. 335/14, by I-CORE: the Israeli Excellence Center 'Circle of Light', by the Ministry of Science and Technology, Israel, by the UCL Overseas Research Scholarship (UCL-ORS), and by the China Scholarship Council (CSC).


  • [1] Dan Ma, Vikas Gulani, Nicole Seiberlich, Kecheng Liu, Jeffrey L Sunshine, Jeffrey L Duerk, and Mark A Griswold, “Magnetic resonance fingerprinting,” Nature, vol. 495, no. 7440, pp. 187, 2013.
  • [2] Yun Jiang, Dan Ma, Nicole Seiberlich, Vikas Gulani, and Mark A Griswold, “MR fingerprinting using fast imaging with steady state precession (FISP) with spiral readout,” Magnetic resonance in medicine, vol. 74, no. 6, pp. 1621–1631, 2015.
  • [3] Mike Davies, Gilles Puy, Pierre Vandergheynst, and Yves Wiaux, “A compressed sensing framework for magnetic resonance fingerprinting,” SIAM Journal on Imaging Sciences, vol. 7, no. 4, pp. 2623–2656, 2014.
  • [4] Zhe Wang, Hongsheng Li, Qinwei Zhang, Jing Yuan, and Xiaogang Wang, “Magnetic resonance fingerprinting with compressed sensing and distance metric learning,” Neurocomputing, vol. 174, pp. 560–570, 2016.
  • [5] Gal Mazor, Lior Weizman, Assaf Tal, and Yonina C Eldar, “Low rank magnetic resonance fingerprinting,” in Engineering in Medicine and Biology Society (EMBC), 2016 IEEE 38th Annual International Conference of the. IEEE, 2016, pp. 439–442.
  • [6] Gal Mazor, Lior Weizman, Assaf Tal, and Yonina C Eldar, “Low-rank magnetic resonance fingerprinting,” Medical physics, vol. 45, no. 9, pp. 4066–4084, 2018.
  • [7] Ouri Cohen, Bo Zhu, and Matthew S Rosen, “MR fingerprinting deep reconstruction network (DRONE),” Magnetic resonance in medicine, vol. 80, no. 3, pp. 885–894, 2018.
  • [8] Elisabeth Hoppe, Gregor Körzdörfer, Tobias Würfl, Jens Wetzl, Felix Lugauer, Josef Pfeuffer, and Andreas Maier, “Deep learning for magnetic resonance fingerprinting: A new approach for predicting quantitative parameter values from time series,” Stud Health Technol Inform, vol. 243, pp. 202–206, 2017.
  • [9] Yann LeCun, Yoshua Bengio, and Geoffrey Hinton, “Deep learning,” Nature, vol. 521, no. 7553, pp. 436, 2015.
  • [10] Ian Goodfellow, Yoshua Bengio, and Aaron Courville, Deep learning, vol. 1, MIT Press, Cambridge, 2016.
  • [11] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton, “Imagenet classification with deep convolutional neural networks,” in Advances in neural information processing systems, 2012, pp. 1097–1105.
  • [12] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun, “Deep residual learning for image recognition,” in Proc. IEEE Conf. Comput. Vision Pattern Recog, 2016, pp. 770–778.
  • [13] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun, “Identity mappings in deep residual networks,” in Proc. Eur. Conf. Comput. Vision. Springer, 2016, pp. 630–645.
  • [14] Chao Dong, Chen Change Loy, Kaiming He, and Xiaoou Tang, “Image super-resolution using deep convolutional networks,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 38, no. 2, pp. 295–307, 2016.
  • [15] Jiwon Kim, Jung Kwon Lee, and Kyoung Mu Lee, “Deeply-recursive convolutional network for image super-resolution,” in Proc. IEEE Conf. Comput. Vision Pattern Recog, 2016, pp. 1637–1645.
  • [16] Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, and Yann N Dauphin, “Convolutional sequence to sequence learning,” arXiv preprint arXiv:1705.03122, 2017.
  • [17] Geoffrey Hinton, Li Deng, Dong Yu, George E Dahl, Abdelrahman Mohamed, Navdeep Jaitly, Andrew Senior, Vincent Vanhoucke, Patrick Nguyen, Tara N Sainath, et al., “Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups,” IEEE Signal processing magazine, vol. 29, no. 6, pp. 82–97, 2012.


Fig. 1. Diagram of the designed 1-D residual CNN for signature-to-parameter mapping in MRF reconstruction. During the training stage, each simulated dictionary entry (a 1-D time sequence) is input to the network as a training signature, with the corresponding parameters, such as T1/T2 relaxation times, as the label. During the testing stage, the signature for each pixel is extracted from a stack of de-aliased/denoised imaging contrasts to serve as a testing signature, which the network maps to the corresponding tissue parameters.

Fig. 2. Parameter restoration performance of the designed network on synthetic data. Blue and red lines represent the ground truth and the estimate of the corresponding parameter. The trained network fits the parameters well over the whole range, yielding high correlation coefficients and low RMSE: for T1 / T2, R² = 0.99999986 / 0.99999963 and RMSE = 0.659 / 0.491. The most striking advantage of the network is its fast inference speed: it takes only 8.2 s to complete the mapping operation for eighty thousand temporal signatures, i.e., 53× faster than the 464.1 s required by the dictionary matching method.

Fig. 3. Visual comparison of parameter restoration using the dictionary matching method [1] and the proposed method on a phantom without subsampling. Our approach gives competitive performance for T1 mapping and much better performance for T2 mapping, achieving a 7.9 dB SNR gain over the competing method [1]. This is because the trained neural network is a powerful function representation that outputs continuous-valued parameters. In addition, the network takes only 1.6 s to produce a pair of T1 / T2 parameter maps of size 128 × 128, 56× faster than the dictionary matching method.

Fig. 4. Visual performance on subsampled phantom data with subsampling ratio 0.15: comparison between Ma et al.'s dictionary matching [1], FLOR [6], and the proposed method, which uses low-rank based signature de-aliasing followed by network based parameter mapping. Our method outperforms the dictionary matching method by significant margins and yields performance competitive with the state-of-the-art method FLOR, while being 73× faster than FLOR for parameter mapping. We also note that storing the network requires 20.3 megabytes, whereas storing the training dictionary of size 80100 × 200 requires more than 100 megabytes.

Proc. Intl. Soc. Mag. Reson. Med. 27 (2019)