Feasibility of brain white matter segmentation on multi-echo T2-weighted images without registration: a Neural Network approach.
Jackie Yik1,2, Roger Tam3,4, Cristina Rubino5, Lara Boyd6, David K.B. Li4,7, Cornelia Laule1,2,4,8, and Hanwen Liu1,2

1Physics and Astronomy, University of British Columbia, Vancouver, BC, Canada, 2International Collaboration on Repair Discoveries, Vancouver, BC, Canada, 3School of Biomedical Engineering, University of British Columbia, Vancouver, BC, Canada, 4Radiology, University of British Columbia, Vancouver, BC, Canada, 5Rehabilitation Sciences, University of British Columbia, Vancouver, BC, Canada, 6Physical Therapy, University of British Columbia, Vancouver, BC, Canada, 7Medicine, University of British Columbia, Vancouver, BC, Canada, 8Pathology and Laboratory Medicine, University of British Columbia, Vancouver, BC, Canada


Most current methods of human brain white matter segmentation require registration to T1 image space. Artificial intelligence can reduce potential errors in, and speed up, this process by segmenting white matter from T2-weighted images directly. A neural network was pre-trained on T1-weighted images with labels generated by FSL's FAST, then further trained on T2-weighted images using transfer learning. The trained network could then segment new T2-weighted images directly. Segmentations of T1- and T2-weighted images by the neural network were comparable to those of FSL's FAST. Our work demonstrates the feasibility of brain white matter segmentation on multi-echo T2-weighted images without initial segmentation and registration of T1-weighted images.


Magnetic resonance imaging (MRI) is a powerful diagnostic and research tool capable of producing a variety of image contrasts. For example, T1-weighted images provide detailed structural information, while T2/T2*-weighted images highlight pathological conditions such as cerebral hemorrhage1. Examining a variety of scan types together can provide complementary information about brain features. Prior to analysis, multiple data sets are transformed, or registered, into a common space to normalize inter-subject variability in both positioning and anatomy. Inconsistency in scan resolution may cause information loss during registration. Furthermore, image segmentation, which is sensitive to noise2, ideally requires high contrast between tissues of interest and background. For white matter (WM), grey matter (GM), and cerebrospinal fluid (CSF) segmentation, T1-weighted images are commonly used. Data protocols for other scan types with low contrast between these regions require multi-modal registration, complicating and slowing down the analysis process. Among various techniques, machine learning methods such as the U-Net3 have been successful for segmentation tasks.

We were particularly interested in the feasibility of direct segmentation of multi-echo T2 images (32 echoes or more), typically used in myelin water imaging4. Although single-echo T2 images do not have high contrast between WM, GM, and CSF compared to T1-weighted images, we hypothesized that multi-echo T2-weighted images would provide enough information for accurate segmentation in native space, comparable to the typical approach of T1-weighted image segmentation and registration to T1 space. Our objective was to create a neural network that could segment WM directly in native T2 space, without first segmenting in T1 space, using an encoder-decoder machine learning method.


Segmentation Algorithm: An encoder-decoder convolutional neural network based on the LinkNet5 architecture was built in Keras6 with a Tensorflow7 backend. Training parameters were initialized using the Glorot uniform initializer8. Binary cross-entropy was used as the loss function, and the network was trained with the Adam optimizer9. T1 labels for pre-training were created using FSL FAST10,11 segmentation with 3 classes after brain extraction. Registering these labels onto the T2-weighted images using FSL (affine transformation, 12 degrees of freedom) produced the T2 training labels.
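For illustration, a minimal sketch of a LinkNet-style encoder-decoder in Keras with the Glorot uniform initializer, binary cross-entropy loss, and Adam optimizer named above; the layer widths, depth, and input shape are placeholders for exposition, not the network actually used in this work.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

def conv_block(x, filters):
    # 3x3 convolution with the Glorot uniform initializer, as in the abstract
    x = layers.Conv2D(filters, 3, padding="same",
                      kernel_initializer="glorot_uniform")(x)
    x = layers.BatchNormalization()(x)
    return layers.ReLU()(x)

def build_model(input_shape=(240, 240, 32)):
    """Toy LinkNet-style encoder-decoder; the 32 input channels stand in
    for the 32 echoes of a multi-echo T2 acquisition (illustrative only)."""
    inp = layers.Input(shape=input_shape)
    # Encoder: two downsampling stages
    e1 = conv_block(inp, 16)
    p1 = layers.MaxPooling2D(2)(e1)
    e2 = conv_block(p1, 32)
    p2 = layers.MaxPooling2D(2)(e2)
    b = conv_block(p2, 64)  # bottleneck
    # Decoder: LinkNet links encoder features to the decoder by addition
    d2 = layers.Conv2DTranspose(32, 3, strides=2, padding="same")(b)
    d2 = conv_block(layers.Add()([d2, e2]), 32)
    d1 = layers.Conv2DTranspose(16, 3, strides=2, padding="same")(d2)
    d1 = conv_block(layers.Add()([d1, e1]), 16)
    # Sigmoid output: per-voxel WM probability, later binarized at 0.5
    out = layers.Conv2D(1, 1, activation="sigmoid")(d1)
    model = Model(inp, out)
    model.compile(optimizer=tf.keras.optimizers.Adam(),
                  loss="binary_crossentropy")
    return model
```

The additive (rather than concatenating) skip connections are what distinguish LinkNet from the U-Net family; they keep the decoder lightweight, which matters for the fast inference times reported below.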

Data and Analysis Pipeline: 3T MRI data were collected using an 8-channel phased-array head coil (Philips Achieva). The network was pre-trained on T1-weighted brain images from 7 healthy subjects (3DT1 whole brain turbo field echo, flip angle=6°, TE/TR=3.7/7.4ms, slices=160, resolution=1x1x1mm3) and then further trained on T2-weighted brain data from 38 healthy subjects (3D gradient and spin echo (GRASE), 32 echoes, TE/TR=10/1000ms, slices=40, reconstructed resolution=1x1x2.5mm3), including the subjects from the T1 model. The network was tested on 11 new T2-weighted brains obtained using the same 32-echo GRASE sequence. The analysis pipeline is shown in Figure 1. Probabilistic output predictions were binarized at a threshold of 0.5. The Dice score, a measure of similarity, was calculated between ground truth and network predictions to quantify accuracy.
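The binarization and Dice computation described above can be sketched as follows; the convention for two empty masks is our assumption for robustness, not something stated in the abstract.

```python
import numpy as np

def dice_score(pred_prob, truth, threshold=0.5):
    """Binarize a probabilistic prediction at `threshold` and compute the
    Dice similarity coefficient against a binary ground-truth mask."""
    pred = pred_prob >= threshold          # probabilistic output -> binary mask
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0                         # both masks empty: perfect agreement
    return 2.0 * intersection / denom      # Dice = 2|A ∩ B| / (|A| + |B|)
```

A Dice score of 1 indicates identical masks and 0 indicates no overlap, so the 0.8495 reported below reflects substantial agreement between the network output and the registered FSL labels.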


Pre-training results are shown in Figure 2. T2 image segmentation is shown in Figure 3. Visual assessment shows good WM segmentation with smoother WM masks from the neural network compared to ground truth. Dice score was 0.8495 for the T2 model. FSL segmentation on T1 images took on average 270 ± 14 sec and an additional 56 ± 0.52 sec to register, while the neural network model took 95 ± 1.6 sec to segment T2-weighted images directly (CPU: 1.4 GHz Intel Core i5).


FSL and the neural network segmented the majority of white matter regions similarly in the T1-weighted images. The T1-based neural network model differed in its segmentation of the deeper grey matter regions around the thalamus: FSL included those areas as white matter, but the neural network did not. For T2-weighted images, the most notable difference between the two methods was that the neural network segmentation was smoother than the ground truth. The roughness of the ground truth could be attributed to the transformation from T1 space to T2 space. Interestingly, although the network was trained to match the ground truth, the trained neural network appears to provide better segmentation results due to this smoothing effect. This observation raises the open question of whether multi-echo (especially many-echo) images contain more anatomical information for segmentation than a single conventional T1-weighted image.


In summary, deep neural networks provide a fast segmentation method that can be easily transferred for direct segmentation of various scan types, streamlining the analysis workflow. The deep neural network demonstrated its ability to segment multi-echo and many-echo images without manual tuning of the network design.


We thank the study participants and the amazing technologists at the UBC MRI Research Centre. Funding support was provided by the Natural Sciences and Engineering Research Council Undergraduate Student Research Award.


1. Chavhan GB, Babyn PS, Thomas B, et al. Principles, Techniques, and Applications of T2*-based MR Imaging and its Special Applications. Radiographics. 2009;29(5):1433-1449.

2. Sandhya G, Kande GB, and Savithri TS. Multilevel Thresholding Method Based on Electromagnetism for Accurate Brain MRI Segmentation to Detect White Matter, Gray Matter, and CSF. BioMed Research International. 2017;vol 2017, Article ID 6783209: 17 pages.

3. Ronneberger O, Fischer P, and Brox T. U-Net: Convolutional networks for biomedical image segmentation. In Proc. MICCAI. 2015:234-241.

4. Laule C, Vavasour IM, Kolind SH, et al. Magnetic resonance imaging of myelin. Neurotherapeutics. 2007;4(3):460-484.

5. Chaurasia A, and Culurciello E. LinkNet: Exploiting encoder representations for efficient semantic segmentation. In Proc VCIP. 2017.

6. Chollet F. Keras. 2017.

7. Abadi M, Barham P, Chen J, et al. Tensorflow: a system for large-scale machine learning. In OSDI. 2016;16:265-283.

8. Glorot X, and Bengio Y. Understanding the difficulty of training deep feedforward neural networks. In Proc 13th international conference on artificial intelligence and statistics. 2010:249-256.

9. Kingma DP, and Ba J. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. 2014.

10. Zhang Y, Brady M, and Smith S. Segmentation of brain MR images through a hidden Markov random field model and the expectation-maximization algorithm. IEEE Trans Med Imag. 2001;20(1):45-57.

11. Jenkinson M, Beckmann CF, Behrens TE, et al. FSL. NeuroImage. 2012;62(2):782-790.


Figure 1: Workflow chart. The neural network is first pre-trained on T1-weighted images. The T1 labels are registered onto the T2-weighted images, and the pre-trained parameters are transferred to the T2 model, which is then further trained on T2-weighted images.

Figure 2: Segmentation comparison on T1-weighted images. Four representative axial slices are shown to compare the segmentation done by FSL FAST and the deep neural network. From left to right: the structural slice, the ground truth produced by FSL segmentation to train the deep neural network, the probabilistic output from the network, and the network output binarized at threshold 0.5. Areas of notable difference in segmentation smoothness are circled in red.

Figure 3: Segmentation comparison on T2-weighted images. Four representative axial slices are shown to compare the segmentation done by registering FSL FAST output from T1-weighted images onto T2-weighted images and the deep neural network. From left to right: the structural slice, the ground truth produced by registering the FSL segmentation of T1-weighted images onto T2 space to train the deep neural network, the probabilistic output from the network, and the network output binarized at threshold 0.5. Areas of notable difference in segmentation smoothness are circled in red.

Proc. Intl. Soc. Mag. Reson. Med. 27 (2019)