Yoonmi Hong^{1}, Geng Chen^{1}, Pew-Thian Yap^{1}, and Dinggang Shen^{1}

In this abstract, we present a proof-of-concept method for effective diffusion MRI reconstruction from slice-undersampled data. Instead of acquiring full diffusion-weighted (DW) image volumes, only a subset of equally-spaced slices is acquired. We show that the complementary information from DW volumes corresponding to different diffusion wavevectors can be harnessed using graph convolutional neural networks for reconstruction of the full DW volumes.

**Purpose**

**Methods**

Without loss of generality, we assume that each of the $$$N$$$ DW volumes, $$$\{X_i\,:\,i = 1,2,\cdots,N\}$$$, is undersampled in the $$$z$$$ direction by a factor $$$R$$$ with offset $$$s_i\in\{0,1,\cdots,R-1\}$$$ (see Figure 1):

$$\tilde{X}_i(\cdot,\cdot,z)=X_i(\cdot,\cdot,Rz+s_i).$$
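As a toy illustration of the slice selection above (a hypothetical sketch; `undersample_slices` is our name, and a volume is modeled simply as a list of $$$z$$$-slices):

```python
# Hypothetical sketch of the slice undersampling X~_i(.,.,z) = X_i(.,.,Rz+s_i).
# A "volume" is modeled as a list of z-slices; each slice can be any object.

def undersample_slices(volume, R, s):
    """Keep every R-th slice starting at offset s (0 <= s < R)."""
    n_kept = (len(volume) - s + R - 1) // R  # number of slices that survive
    return [volume[R * z + s] for z in range(n_kept)]

# Example: a volume with 8 slices labeled by index, R = 4, offsets 0..3.
full = list(range(8))
print([undersample_slices(full, 4, s) for s in range(4)])
# → [[0, 4], [1, 5], [2, 6], [3, 7]]
```

Each offset $$$s_i$$$ thus retains a different interleave of slices, which is what makes the undersampled volumes mutually complementary.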

We aim to predict the full DW volumes from the undersampled data using a learned non-linear mapping function $$$f$$$ so that

$$(X_1,\cdots,X_N)=f(\tilde{X}_1,\cdots,\tilde{X}_N).$$

An overview of our proposed method is shown in Figure 1.

The mapping function can be learned using a GCNN (Figure 2) that takes into account information from both physical space (i.e., x-space) and diffusion wavevector space (i.e., q-space) in the form of a graph. Convolution on the graph can be defined via spectral decomposition of the graph Laplacian. Residual convolutional blocks are employed to ease training, since they mitigate the vanishing gradient problem^{3}. The upsampling operation in the $$$z$$$-direction is learned by sub-pixel convolution^{4}, which performs standard convolution in the low-resolution space followed by pixel shuffling. The pixel-shuffling operation remaps a feature map of size $$$n\times R$$$ to an output of size $$$Rn\times 1$$$, where $$$n$$$ is the number of input graph nodes. We employ a conditional patch-GAN as the discriminator, as it is computationally efficient with fewer parameters^{2}. The generator and the discriminator are trained in an alternating manner.
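The pixel-shuffling remap can be sketched as follows (a minimal illustration, not the network code: each graph node holds $$$R$$$ feature channels, which are rearranged into $$$R$$$ consecutive output slices per node):

```python
# Hypothetical sketch of the pixel-shuffling step: an n x R feature map
# (n graph nodes, R channels each) is rearranged into an Rn x 1 output by
# turning the R channels of each node into R consecutive output positions.

def pixel_shuffle_1d(features):
    """features: list of n nodes, each a list of R channel values.
    Returns a flat list of length n * R."""
    return [c for node in features for c in node]

feats = [[0.1, 0.2], [0.3, 0.4], [0.5, 0.6]]  # n = 3 nodes, R = 2 channels
print(pixel_shuffle_1d(feats))  # → [0.1, 0.2, 0.3, 0.4, 0.5, 0.6]
```

This is why the convolutions can stay in the cheap low-resolution space: the channel dimension carries the missing slices until the final rearrangement.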

**Materials**

**Results**

We evaluated our method by retrospective undersampling with a factor of $$$R = 4$$$. For each subject, the DW images were divided into 4 groups, each with uniformly distributed gradient directions. Training was carried out using $$$4\times 4\times 1\times 90$$$ input patches and $$$4\times 4\times 4\times 90$$$ output patches.
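One simple way to realize such a grouping (our illustrative assignment, not necessarily the one used in the abstract) is to cycle the slice offset through the wavevector index, so that every run of $$$R$$$ consecutive directions covers all interleaves:

```python
# Hypothetical round-robin assignment of slice offsets s_i in {0,...,R-1}
# to the N DW volumes, so the R offset groups are equally sized and
# interleaved across the gradient-direction ordering.

def assign_offsets(N, R):
    """Return the offset s_i for each of the N DW volumes."""
    return [i % R for i in range(N)]

print(assign_offsets(8, 4))  # → [0, 1, 2, 3, 0, 1, 2, 3]
```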

We compared our method with bilinear and bicubic interpolations. The root-mean-square errors in terms of spherical harmonic coefficients up to order 8 are 84.912, 88.013, and 66.957 for bilinear interpolation, bicubic interpolation, and our method, respectively. The corresponding average peak signal-to-noise ratios for generalized fractional anisotropy (GFA) are 30.328, 30.406, and 32.445, respectively. The GFA maps are provided in Figure 3 for visual comparison.
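The two evaluation metrics above can be computed as follows (a hedged sketch with our own function names; inputs are flat lists of values, e.g., spherical harmonic coefficients for RMSE or GFA values for PSNR):

```python
import math

# Sketch of the evaluation metrics: root-mean-square error (RMSE) and
# peak signal-to-noise ratio (PSNR) between a prediction and ground truth.

def rmse(x, y):
    """Root-mean-square error between two equal-length value lists."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)) / len(x))

def psnr(x, y, peak=None):
    """PSNR in dB; peak defaults to the maximum ground-truth value."""
    peak = max(x) if peak is None else peak
    return 20 * math.log10(peak / rmse(x, y))

gt = [0.2, 0.4, 0.6, 0.8]
pred = [0.21, 0.39, 0.62, 0.78]
print(round(rmse(gt, pred), 4))  # → 0.0158
```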

Figure 4 shows that our method can yield fiber orientation distribution functions (ODFs) that are closer to the ground truth, with fewer partial volume effects.

We extracted the forceps major (FMajor) using ROIs drawn in the occipital cortex and corpus callosum, and the forceps minor (FMinor) using ROIs drawn in the prefrontal cortex and corpus callosum. Figure 5 shows that our method yields richer fiber tracts that resemble the ground truth more closely.

**Conclusion**

1. Goodfellow, Ian, et al., "Generative adversarial nets." Advances in neural information processing systems, 2014.

2. Isola, Phillip, et al., "Image-to-image translation with conditional adversarial networks." arXiv preprint, 2017.

3. He, Kaiming, et al., "Deep residual learning for image recognition." Proceedings of the IEEE conference on computer vision and pattern recognition, 2016.

4. Shi, Wenzhe, et al., "Real-time single image and video super-resolution using an efficient sub-pixel convolutional neural network." Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016.

5. Sotiropoulos, Stamatios N., et al., "Advances in diffusion MRI acquisition and processing in the Human Connectome Project." Neuroimage, 80:125-143, 2013.