Accelerating Diffusion MRI via Slice Undersampling and Deep-Learning Reconstruction
Yoonmi Hong1, Geng Chen1, Pew-Thian Yap1, and Dinggang Shen1

1Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC, United States


In this abstract, we present a proof-of-concept method for effective diffusion MRI reconstruction from slice-undersampled data. Instead of acquiring full diffusion-weighted (DW) image volumes, only a subset of equally spaced slices is acquired. We show that the complementary information from DW volumes corresponding to different diffusion wavevectors can be harnessed using graph convolutional neural networks to reconstruct the full DW volumes.


Compared with anatomical T1- or T2-weighted MRI, diffusion MRI typically requires longer acquisition times for sufficient coverage of the diffusion wavevector space. In this abstract, we demonstrate that it is possible to reconstruct full diffusion-weighted (DW) image volumes from highly slice-undersampled data using graph convolutional neural networks (GCNNs) in combination with generative adversarial networks (GANs)1,2.


Without loss of generality, we assume that each of the $$$N$$$ DW volumes, $$$\{X_i\,:\,i = 1,2,\cdots,N\}$$$, is undersampled in the $$$z$$$ direction by a factor $$$R$$$ with offset $$$s_i \in \{0,1,\cdots,R-1\}$$$, i.e., only the slices at positions $$$s_i, s_i+R, s_i+2R, \cdots$$$ are retained (see Figure 1).
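The slice-undersampling scheme above can be sketched in NumPy (a minimal illustration under assumed array names and dimensions, not the authors' implementation):

```python
import numpy as np

R = 4   # undersampling factor in the z direction
N = 6   # number of DW volumes (90 in the actual experiments)
nx, ny, nz = 16, 16, 32   # assumed volume dimensions; nz divisible by R

# Full DW volumes X_i, stacked along the first axis.
X = np.random.rand(N, nx, ny, nz)

# Volume i keeps only the slices at z = s_i, s_i + R, s_i + 2R, ...
offsets = [i % R for i in range(N)]   # cycle the offsets across volumes
Y = [X[i, :, :, s::R] for i, s in zip(range(N), offsets)]
```

Because the offsets differ across volumes, the retained slices of different DW volumes jointly cover all $$$z$$$ positions, which is what the reconstruction network exploits.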


We aim to predict the full DW volumes from the undersampled data using a learned non-linear mapping function $$$f$$$, i.e., $$$\{X_i\} = f(\{Y_i\})$$$, where $$$Y_i$$$ denotes the slice-undersampled version of $$$X_i$$$.


An overview of our proposed method is shown in Figure 1.

The mapping function can be learned using a GCNN (Figure 2) that takes into account information from both the physical space (i.e., x-space) and the diffusion wavevector space (i.e., q-space) in the form of a graph. Convolution on the graph can be defined via spectral decomposition of the graph Laplacian. Residual convolutional blocks are employed to ease training, since they mitigate the vanishing gradient problem3. The upsampling operation in the $$$z$$$-direction is learned by sub-pixel convolution4, which performs standard convolution in the low-resolution space followed by pixel shuffling. The pixel-shuffling operation remaps feature maps of size $$$n\times R$$$ to an output of size $$$Rn\times 1$$$, where $$$n$$$ is the number of input graph nodes. We employ a conditional patch-GAN as the discriminator, as it is computationally efficient with fewer parameters2. The generator and the discriminator are trained in an alternating manner.
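The two building blocks above can be illustrated with a toy NumPy sketch (the graph, filter, and shapes are assumptions for illustration, not the authors' architecture): spectral graph convolution filters a node signal in the eigenbasis of the graph Laplacian, and pixel shuffling interleaves the $$$R$$$ feature channels of each node into an $$$Rn\times 1$$$ output.

```python
import numpy as np

# --- Spectral graph convolution on a toy 4-node ring graph ---
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A      # combinatorial graph Laplacian L = D - A
lam, U = np.linalg.eigh(L)          # spectral decomposition L = U diag(lam) U^T

x = np.array([1.0, 0.0, 0.0, 0.0])  # signal on the graph nodes
g = np.exp(-lam)                    # an assumed spectral filter g(lambda)
y = U @ (g * (U.T @ x))             # convolution: transform, filter, invert

# --- Sub-pixel upsampling by pixel shuffling ---
# Convolution in low-resolution space yields R channels per node (n x R);
# pixel shuffling maps node j's channels to output positions jR, ..., jR+R-1.
n, R = 5, 4
feat = np.arange(n * R, dtype=float).reshape(n, R)
out = feat.reshape(n * R)           # flattened row-major: shape (Rn,)
```

In a trainable network the fixed filter `g` would be replaced by learned spectral coefficients (e.g., a Chebyshev polynomial parameterization), but the transform–filter–inverse pattern is the same.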


We randomly selected 5 training and 8 testing subjects from the Human Connectome Project (HCP)5 database. 90 DW images (voxel size: $$$1.25\times 1.25\times 1.25\, \text{mm}^3$$$) with $$$b = 2000\,\text{s/mm}^2$$$ were used for evaluation.


We evaluated our method by retrospective undersampling with a factor of $$$R = 4$$$. For each subject, the DW images were divided into 4 groups, each with uniformly distributed gradient directions. Training was carried out using $$$4\times 4\times 1\times 90$$$ input patches and $$$4\times 4\times 4\times 90$$$ output patches.
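Extraction of one matched training pair can be sketched as follows (a NumPy illustration with assumed grouping, patch origin, and volume sizes; the authors' exact sampling strategy is not specified):

```python
import numpy as np

R, n_dirs = 4, 90
# Assign each gradient direction to one of R groups; each group is
# retrospectively undersampled with a distinct slice offset s = 0..R-1.
groups = np.arange(n_dirs) % R

vol = np.random.rand(32, 32, 16, n_dirs)  # assumed full DW data (x, y, z, q)
x0, y0, z0 = 10, 20, 8                    # patch origin; z0 a multiple of R

# Output patch: full-resolution 4 x 4 x 4 x 90 target.
out_patch = vol[x0:x0+4, y0:y0+4, z0:z0+4, :]

# Input patch: for each direction keep only the slice matching its group
# offset, giving a 4 x 4 x 1 x 90 slice-undersampled input.
in_patch = np.stack([vol[x0:x0+4, y0:y0+4, z0 + groups[q], q]
                     for q in range(n_dirs)], axis=-1)[:, :, None, :]
```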

We compared our method with bilinear and bicubic interpolations. The root-mean-square errors in terms of spherical harmonic coefficients up to order 8 are 84.912, 88.013, and 66.957 for bilinear interpolation, bicubic interpolation, and our method, respectively. The corresponding average peak signal-to-noise ratios for generalized fractional anisotropy (GFA) are 30.328, 30.406, and 32.445, respectively. The GFA maps are provided in Figure 3 for visual comparison.
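The PSNR metric used above can be computed as follows (a generic sketch with toy data; the peak value and normalization used by the authors are assumptions):

```python
import numpy as np

def psnr(reference, estimate, peak=None):
    """Peak signal-to-noise ratio in dB; peak defaults to the reference max."""
    mse = np.mean((np.asarray(reference) - np.asarray(estimate)) ** 2)
    peak = np.max(reference) if peak is None else peak
    return 10.0 * np.log10(peak ** 2 / mse)

# Toy GFA maps in [0, 1]; real GFA is computed from the diffusion ODF.
gfa_true = np.random.rand(32, 32)
gfa_pred = np.clip(gfa_true + 0.01 * np.random.randn(32, 32), 0.0, 1.0)
score = psnr(gfa_true, gfa_pred, peak=1.0)
```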

Figure 4 shows that our method can yield fiber orientation distribution functions (ODFs) that are closer to the ground truth, with fewer partial volume effects.

We extracted the forceps major (FMajor) using ROIs drawn in the occipital cortex and corpus callosum, and the forceps minor (FMinor) using ROIs drawn in the prefrontal cortex and corpus callosum. Figure 5 shows that our method can yield richer fiber tracts that resemble the ground truth more closely.


We have demonstrated that full DW image volumes can be reconstructed effectively from slice-undersampled data using a GCNN, which jointly considers the spatio-angular information in diffusion MRI. The experimental results indicate that the DW volumes can be reconstructed with minimal information loss from the data undersampled with a factor of 4.


This work was supported in part by NIH grants (NS093842, EB022880, and EB006733).


1. Goodfellow, Ian, et al., "Generative adversarial nets." Advances in neural information processing systems, 2014.

2. Isola, Phillip, et al., "Image-to-image translation with conditional adversarial networks." Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017.

3. He, Kaiming, et al., "Deep residual learning for image recognition." Proceedings of the IEEE conference on computer vision and pattern recognition, 2016.

4. Shi, Wenzhe, et al., "Real-time single image and video super-resolution using an efficient sub-pixel convolutional neural network." Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016.

5. Sotiropoulos, Stamatios N., et al., "Advances in diffusion MRI acquisition and processing in the Human Connectome Project." Neuroimage, 80:125-143, 2013.


Figure 1: Method overview.

Figure 2: The proposed graph CNN architecture.

Figure 3: Predicted GFA maps and the corresponding error maps shown in multiple views.

Figure 4: Representative fiber ODFs.

Figure 5: Representative tractography results.

Proc. Intl. Soc. Mag. Reson. Med. 27 (2019)