Contrast Transfer Learning for Reconstruction of Undersampled Dynamic Contrast-Enhanced MRI
Li Feng1, Fang Liu2, Lihua Chen3,4, and Ricardo Otazo1,5

1Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY, United States, 2Department of Radiology, University of Wisconsin-Madison, Madison, WI, United States, 3Department of Radiology, Southwest Hospital, Chongqing, China, 4Department of Radiology, PLA 101st Hospital, Wuxi, Jiangsu, China, 5Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY, United States


The application of deep learning for reconstruction of dynamic contrast-enhanced MRI presents significant challenges caused by the rapid passage of the contrast agent, which makes it difficult to acquire fully-sampled images to train a neural network. This work proposes to use images from a delayed contrast phase, where contrast changes are in a relatively steady state, for training, and to apply the trained neural network for reconstruction of undersampled data acquired in other contrast phases. The proposed contrast transfer learning reconstruction was trained on 55 post-contrast liver cases and tested on a first-pass liver DCE-MR acquisition.


In most deep learning-based image reconstruction studies, supervised training is performed using pairs of artifact-free reference images and their corresponding undersampled counterparts, from which a neural network learns latent image features to restore artifact-corrupted images [1-6]. To ensure optimal reconstruction performance, it is often desired that the reference and undersampled image pairs have the same image contrast [7]. These requirements are very challenging to meet for dynamic contrast-enhanced MRI (DCE-MRI) due to the rapid passage of the contrast agent through the cardiovascular system and the relatively slow imaging speed of MRI. As a consequence, deep learning has not been applied to the reconstruction of undersampled DCE-MRI data. Based on the fact that images at different contrast phases share significant anatomical correlations, this work proposes to use images from a single delayed contrast phase to train a neural network to remove aliasing artifacts, and then to apply the trained network to reconstruct undersampled k-space data from other contrast phases. The hypothesis of the proposed contrast transfer learning approach is that a neural network can learn to remove aliasing artifacts in one contrast phase and that this knowledge can be transferred to other contrast phases.


In DCE-MRI, the passage of the contrast agent is usually divided into a rapid wash-in phase, a gradual wash-out phase, and a steady-state delayed phase, as shown in Figure 1. While it is challenging to directly acquire fully-sampled images during the rapid wash-in phase, fully-sampled reference images can be obtained during the delayed phase for training a neural network, since the intensity change in the delayed phase is significantly reduced. In this study, 55 post-contrast (Gd-EOB-DTPA) 3D liver MRI datasets were acquired during the late delayed phase (approximately 20 minutes after contrast injection). All data were acquired with institutional IRB approval, continuously during free breathing, using a prototype stack-of-stars golden-angle sequence on a 3T clinical scanner (TimTrio, Siemens Healthineers, Germany). Relevant imaging parameters included: matrix=256x256x35, FOV=330x330x216mm3, TR/TE=3.40/1.68ms, number of spokes=1000, TA=178s.
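In a golden-angle radial acquisition, successive spokes are rotated by the golden angle of approximately 111.25 degrees, so that any consecutive window of spokes covers k-space near-uniformly. As a minimal illustrative sketch (the function name is ours, not from the sequence implementation), the spoke angles for the 1000-spoke acquisition can be generated as:

```python
import numpy as np

# Golden angle: 180 * (sqrt(5) - 1) / 2, approximately 111.246 degrees
GOLDEN_ANGLE_DEG = 180.0 * (np.sqrt(5.0) - 1.0) / 2.0

def spoke_angles(n_spokes: int) -> np.ndarray:
    """Angles (degrees, modulo 180) of successive golden-angle radial spokes."""
    return (np.arange(n_spokes) * GOLDEN_ANGLE_DEG) % 180.0

# Angles for the full 1000-spoke acquisition used in this study
angles = spoke_angles(1000)
```

Because the golden-angle increment is irrational with respect to 180 degrees, the angular pattern never repeats, which is what makes any consecutive block of spokes a valid undersampled subset.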

Neural network training was implemented in the following steps. First, 55 artifact-free 3D liver reference datasets were reconstructed by combining all 1000 spokes in each case. Second, an undersampled image, reconstructed from 89 consecutive spokes, was used for training. As shown in Figure 2, the undersampled image was varied throughout training by randomly selecting a different set of 89 spokes for each training epoch. This was implemented to account for the variation of the sampling pattern across contrast phases in golden-angle radial acquisitions, so that the trained network learns a variety of artifact structures and thus generalizes better to new artifact features that may occur during inference.
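The epoch-varying spoke selection above can be sketched as follows; this is our illustrative reading of the procedure (function and parameter names are hypothetical), where a random consecutive window of 89 spokes (a Fibonacci number, giving near-uniform angular coverage under golden-angle ordering) is drawn out of the 1000 acquired spokes for each epoch:

```python
import numpy as np

rng = np.random.default_rng(0)  # fixed seed for reproducibility of the sketch

def sample_spoke_window(total_spokes: int = 1000, window: int = 89) -> np.ndarray:
    """Pick a random consecutive block of spoke indices for one training epoch,
    mimicking the varying sampling pattern seen across contrast phases."""
    start = rng.integers(0, total_spokes - window + 1)
    return np.arange(start, start + window)

# Per epoch: reconstruct the undersampled input from these spokes only,
# and pair it with the reference image built from all 1000 spokes.
epoch_spokes = sample_spoke_window()
```

Drawing a fresh window each epoch exposes the network to many distinct streaking-artifact patterns from a single reference dataset, which is the mechanism behind the generalization claim.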

The trained reconstruction network was evaluated on one additional liver dataset covering both the wash-in and wash-out phases, whose image contrast is completely different from that of the training datasets. The reconstruction network, introduced in a previous study [8], was trained with an objective consisting of three loss components: i) a standard end-to-end CNN mapping loss using the L1 norm, ii) a data-fidelity loss using the L2 norm (i.e., enforcing consistency between the CNN output image and the acquired k-space measurements), and iii) an adversarial loss promoting high perceptual quality of the reconstructed images. This structure was tailored from the general cycle-consistent GAN (CycleGAN) framework [9] and optimized for MRI reconstruction [8]. The network combined a U-net for the CNN mapping and a PatchGAN for the adversarial process, and was implemented on an NVIDIA GeForce GTX 1080Ti card using the TensorFlow toolbox.
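The three-term objective can be sketched in generic form as below. This is a hedged sketch, not the implementation from [8]: the weights are illustrative, and the adversarial term is written as a standard non-saturating generator log-loss on a discriminator score, which may differ from the exact GAN formulation used.

```python
import numpy as np

def composite_loss(pred_img, ref_img, pred_kspace, meas_kspace,
                   disc_score, w_l1=1.0, w_dc=1.0, w_adv=0.01):
    """Illustrative three-term objective (weights are assumptions):
    L1 CNN mapping loss + L2 data-fidelity loss on k-space
    + adversarial loss from the discriminator score in (0, 1]."""
    l1 = np.mean(np.abs(pred_img - ref_img))            # i) mapping loss
    dc = np.mean(np.abs(pred_kspace - meas_kspace)**2)  # ii) data fidelity
    adv = -np.mean(np.log(disc_score + 1e-8))           # iii) adversarial term
    return w_l1 * l1 + w_dc * dc + w_adv * adv
```

The data-fidelity term is what distinguishes this objective from a purely image-domain GAN: it ties the network output back to the acquired radial k-space measurements.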


Figure 3 shows results of applying the proposed network, trained on images acquired in the delayed phase (a), to reconstruct undersampled images (89 spokes) acquired during the wash-in and wash-out phases (b). Note that the liver images used for training have a different contrast from the images being reconstructed. Despite this variation of contrast, the results suggest that the trained network was able to recover image features in different organs, including the liver, aorta and muscle, while leaving the contrast enhancement unaffected.


The similar anatomical structure across contrast phases makes it possible to train a reconstruction network on the delayed phase and transfer this knowledge to reconstruct earlier phases. The network learns how to remove aliasing artifacts, a task that is independent of contrast, and therefore reconstruction at different phases without contrast contamination is feasible.


The golden-angle radial liver datasets used in this study were acquired at the Southwest Hospital in Chongqing, China with Institutional IRB approval. The authors thank the technicians at the hospital for their help with the imaging studies.


[1] Hammernik et al. Magn Reson Med. 2018;79(6):3055-3071.

[2] Mardani et al. IEEE Trans Med Imaging. 2018. doi:10.1109/TMI.2018.2858752. [Epub ahead of print]

[3] Wang et al. IEEE 13th International Symposium on Biomedical Imaging (ISBI), 2016:514-517.

[4] Schlemper et al. IEEE Trans Med Imaging. 2018;37(2):491-503.

[5] Zhu et al. Nature. 2018;555(7697):487-492.

[6] Han et al. Magn Reson Med. 2018;80(3):1189-1205.

[7] Knoll et al. Magn Reson Med. 2018. doi:10.1002/mrm.27355. [Epub ahead of print]

[8] Liu et al. ISMRM Machine Learning Workshop, March 2018.

[9] Zhu et al. arXiv:1703.10593, 2017.


Figure 1: Representative contrast enhancement curve showing the different contrast phases. The rapid contrast changes during the wash-in and wash-out phases make it difficult to acquire fully-sampled reference images for training. The delayed phase, however, presents relatively steady contrast, which enables the acquisition of a fully-sampled reference. The trained network can then be applied to reconstruct images acquired during other contrast phases, given that these images share highly correlated anatomical structure.

Figure 2: During the training process, undersampled-reference pairs were formed from undersampled images reconstructed with 89 consecutive spokes and reference images reconstructed with all 1000 spokes. An undersampled image from a different set of 89 spokes was used for each training epoch to account for the variation of sampling patterns across contrast phases in golden-angle radial acquisitions. This procedure generalizes the training towards reconstruction of images with arbitrary sampling patterns.

Figure 3: An example of applying a network trained on images with one contrast (a) to images with a different contrast (b). The trained network was able to recover image features in different organs, such as the liver, aorta and muscle, while retaining the physiological contrast unaffected.

Proc. Intl. Soc. Mag. Reson. Med. 27 (2019)