Virtual Imaging Using Generative Adversarial Networks for Image Translation (VIGANIT): Deep Learning based Prediction of Diffusion-Weighted Images from T2-Weighted Brain MR Images
Vidur Mahajan1, Aravind Upadhyaya2, Vasantha Kumar Venugopal1, Abhishek S Venkataram2, Mukundhan Srinivasan3, Murali Murugavel1, and Harsh Mahajan1,4

1Centre for Advanced Research in Imaging, Neuroscience and Genomics, Mahajan Imaging, New Delhi, India, 2Triocula technologies, Bangalore, India, 3Nvidia, Bangalore, India, 4Mahajan Imaging, New Delhi, India


100 whole-brain MRI scans of patients with no abnormality and 30 with acute infarcts, each comprising 25 T2-weighted (T2W) and diffusion-weighted (b=1000) images, were fed into a deep learning model with a 75-25 training-validation split. The T2W image was assigned as the input to predict DW images. A binary cross-entropy loss of 0.15 for normal and 0.11 for infarct cases was obtained, and the predicted images successfully delineated acute and chronic infarcts in all test cases.


Multi-parametric MR imaging involves altering MR acquisition parameters to acquire various types of images, used to visualize different types of anatomy and characterize different pathological conditions. Although the images are inherently linked, in that they depict the same body tissue, the specific features of each are not discernible to the human eye, which is why multiple sequences are acquired. Many attempts have been made to create "universal sequences" - one pulse sequence that can be reconstructed into multiple others, saving time and making MRI more efficient. Notable examples include SyntheticMR1 (SyMRI) and Multidimensional Diffusion MR2 (MDMRI). We propose another novel approach, using deep learning3 (DL), to convert clinical T2-weighted (T2W) images into diffusion-weighted (DW) images while preserving inherent characteristics of DW images, such as the ability to delineate acute and chronic infarcts. DW imaging is indispensable for diagnosing acute stroke, and is also of important clinical value in some hypercellular tumors such as meningioma, infectious lesions such as abscesses and tuberculomas, and autoimmune conditions such as multiple sclerosis. Unlike SyMRI or MDMRI, we use AI to establish the relationship between tissue and voxels and then use that same relationship to generate a virtual MRI sequence. Using the U-Net4 convolutional neural network as the baseline, we developed the "VIGANIT" network to generate virtual images by taking one MRI sequence (in this case T2W MRI) as the input.
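To illustrate the skip-connection idea that the U-Net baseline rests on, here is a minimal NumPy sketch of one encoder-decoder level; all function names are ours for illustration and are not part of the VIGANIT code:

```python
import numpy as np

def downsample(x):
    """Encoder step: 2x2 average pooling halves the spatial resolution."""
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample(x):
    """Decoder step: nearest-neighbour upsampling restores the resolution."""
    return x.repeat(2, axis=0).repeat(2, axis=1)

def toy_unet_pass(x):
    """One encoder-decoder level with a skip connection, as in U-Net."""
    skip = x                        # high-resolution feature saved by the encoder
    bottleneck = downsample(x)      # coarse representation
    decoded = upsample(bottleneck)  # decoder reconstruction
    return 0.5 * (decoded + skip)  # skip connection restores fine detail
```

The skip connection is what lets the decoder recover voxel-level detail lost in the bottleneck, which is essential when the output must be a full-resolution image rather than a segmentation mask.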


Training data for the DL model comprised whole-brain MRI scans obtained on a 3.0T wide-bore MRI scanner (MR750w, GE Healthcare, USA). 100 normal brain MRI scans and 30 brain MRI scans with acute infarcts were identified by a certified radiologist (8 years' post-residency MRI experience). The images were anonymized, and T2W, DW (b=1000) and Apparent Diffusion Coefficient (ADC) volumes were extracted. Each volume had 25 images covering the whole brain. A DL architecture was designed using a modified Variational Autoencoder (VAE) whose encoder combines a deep U-Net with a ResNet model5. This hybrid VAE samples 660 million features, giving an exhaustive representation, and delineates the distinctive grey- and white-matter areas during image generation. The network is asymmetric, with more layers on the encoder side than on the decoder side (Figure 1). The training data, in a 75-25 training-validation split, were fed into VIGANIT, which attempts to map a one-to-one correspondence at the voxel level between the T2W images and the diffusion-weighted images. Binary cross-entropy was used as the loss function, and training was done on an NVIDIA DGX-1 GPU system. For testing, T2W images from five independent cases (125 images in total) were employed: 7 images had T2 hyperintensities with corresponding diffusion restriction (group 1) and 16 images had T2 hyperintensities with no corresponding diffusion restriction (group 2). The true diffusion and 'virtual' diffusion images were reviewed by a radiologist.
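The 75-25 split over the 130 paired volumes and the binary cross-entropy loss described above can be sketched as follows; the function names and random seed are illustrative and not taken from the VIGANIT code:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for the 130 paired T2W/DW volumes (100 normal + 30 infarct)
n_volumes = 100 + 30
indices = rng.permutation(n_volumes)
n_train = int(0.75 * n_volumes)  # 75-25 training-validation split
train_idx, val_idx = indices[:n_train], indices[n_train:]

def binary_cross_entropy(pred, target, eps=1e-7):
    """Voxel-wise BCE between predicted and true DW intensities in [0, 1]."""
    pred = np.clip(pred, eps, 1.0 - eps)
    return float(-np.mean(target * np.log(pred)
                          + (1.0 - target) * np.log(1.0 - pred)))
```

Because BCE assumes values in [0, 1], this formulation implies the image intensities are normalized before training; the clipping guards against log(0).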


A binary cross-entropy loss of 0.15 for normal and 0.11 for infarct cases was obtained. Inference for the test images was done on an NVIDIA 1080 Ti GPU and took 750 ms per image. In the test cases, 6 out of 7 T2 hyperintensities in group 1 showed restricted diffusion on the virtual DWI, and all 16 T2 hyperintensities in group 2 were non-diffusion-restricting on the virtual DWI. The missed lesion in group 1 was 2.5 mm in size. There was reduced image sharpness on the predicted images, which did not impact clinical decision making. Examples of group 1 and group 2 images are shown in Figures 2, 3 and 4.
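For reference, the lesion-level sensitivity and specificity implied by these counts (our arithmetic; the abstract itself reports only the raw counts):

```python
# Counts reported above: group 1 (diffusion-restricting) and group 2 (non-restricting)
true_positives, group1_lesions = 6, 7
true_negatives, group2_lesions = 16, 16

sensitivity = true_positives / group1_lesions  # fraction of restricting lesions detected
specificity = true_negatives / group2_lesions  # fraction of non-restricting lesions correctly negative
```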


The field of radiomics, which deals with the extraction of quantitative data from radiological images, has been built on analyzing features that are not evidently visible to the human eye6. Machine learning, and more precisely deep learning, algorithms are being tried in new situations built on the premise of detecting and extrapolating invisible information in images. Predicting diffusion-weighted images from T2-weighted images using deep learning is one such situation, with potential applications in the imaging workflow. We propose our work as a 'proof of concept' of radiomics' ability to successfully delineate features in DICOM data that are not visible to the human eye. We are now evaluating the possibility of predicting Apparent Diffusion Coefficient values and the corresponding ADC maps from T2W images using similar deep learning approaches.


The authors would like to thank NVIDIA Corporation for providing GPU compute and Ms. Madhuri Barnwal at Mahajan Imaging for providing the required data.


  1. Drake-Pérez M, Boto J, Fitsiori A, Lovblad K, Vargas MI. Clinical applications of diffusion weighted imaging in neuroradiology. Insights Imaging. 2018;9(4):535-547.
  2. Topgaard D. Multidimensional diffusion MRI. J Magn Reson. 2017;275:98-113. doi:10.1016/j.jmr.2016.12.007.
  3. LeCun Y, Bengio Y, Hinton G. Deep learning. Nature. 2015;521:436-444.
  4. Ronneberger O, Fischer P, Brox T. U-Net: convolutional networks for biomedical image segmentation. MICCAI 2015. Lecture Notes in Computer Science, vol 9351:234-241. arXiv:1505.04597.
  5. Drozdzal M, Vorontsov E, Chartrand G, et al. The importance of skip connections in biomedical image segmentation. arXiv:1608.04117.
  6. Lambin P, Rios-Velazquez E, Leijenaar R, et al. Radiomics: extracting more information from medical images using advanced feature analysis. Eur J Cancer. 2012;48(4):441-446. doi:10.1016/j.ejca.2011.11.036.


VIGANIT high-level architecture diagram.

The hyperparameters employed were: bottleneck: 2 × 2 × 2048; number of layers: 170 (encoder), 40 (decoder); loss function: binary cross-entropy; batch size: 1 per GPU, with a total of 8 GPUs used (input batch size 8); learning rate: 1e-4; epochs: 300; iterations per epoch: 900; validation set: 25% of the training data. The resulting training loss was 0.092 and the validation loss was 0.0339.
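Gathered as a configuration dictionary for convenience (values as reported in the caption; the dictionary layout and key names are ours):

```python
# VIGANIT training hyperparameters as reported; key names are illustrative
viganit_hparams = {
    "bottleneck": (2, 2, 2048),
    "encoder_layers": 170,
    "decoder_layers": 40,
    "loss": "binary_cross_entropy",
    "batch_size_per_gpu": 1,
    "num_gpus": 8,
    "input_batch_size": 8,  # 1 per GPU across 8 GPUs
    "learning_rate": 1e-4,
    "epochs": 300,
    "iterations_per_epoch": 900,
    "validation_fraction": 0.25,
}
```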

Predicted DWI sequence in chronic ischemic changes. Axial T2W image (left) showing T2 hyperintensities in the left frontal region; the corresponding b=1000 DWI image (middle) shows no restricted diffusion in these areas. The virtual DWI (right), apart from appearing qualitatively similar, also accurately predicts the absence of restricted diffusion.

Predicted DWI sequence in acute infarct. Axial T2W image (left) showing T2 hyperintensities in the left parietal region; the corresponding b=1000 DWI image (middle) shows focal restricted diffusion. The virtual DWI (right) accurately predicts the diffusion restriction.

Predicted DWI sequence in acute infarct. Axial T2W image (left) showing a subtle T2 hyperintensity in the left caudate nucleus with focal restricted diffusion on the b=1000 DWI image (middle). The virtual DWI (right) also shows restricted diffusion in that region.

Proc. Intl. Soc. Mag. Reson. Med. 27 (2019)