QC-Automator: Deep learning-based automated artifact detection in dMRI data
Zahra Riahi Samani1, Jacob Alappatt1, Parker Drew1, and Ragini Verma1

1Penn Patho-Connectomics Lab, Radiology, University of Pennsylvania, Philadelphia, PA, United States


We have developed a deep learning-based automated Quality Control (QC) tool, QC-Automator, for diffusion-weighted MRI data that detects different artifacts. This ensures that appropriate steps can be taken at the pre-processing stage to improve data quality, and that these artifacts do not affect the results of subsequent image analysis. Our tool, based on convolutional neural networks, achieves 94–98% accuracy in detecting various artifacts, including motion, multiband interleaving, ghosting, susceptibility, herringbone and chemical shift. It is robust and fast, and paves the way for efficient and effective artifact detection in large datasets.


DTI data is prone to several artifacts, like ghosting, motion and signal loss, that manifest differently depending on the acquisition. Detection of artifacts is the first step in medical data analysis, as it defines how the data will be processed. Currently, Quality Control (QC) is mostly undertaken manually, which is error-prone, subjective and time-consuming. This underlines the need for an automated QC protocol that detects artifacts irrespective of acquisition. Previous studies have concentrated on motion artifacts1,2. The purpose of this work is to use deep learning methods to train an automated artifact detection tool that detects various types of artifacts in data from different scanners, including motion, multiband interleaving, ghosting, susceptibility, herringbone and chemical shift.



1-Dataset Creation:

We created a dataset of ~14852 artifactual samples (motion, multiband interleaving, ghosting, susceptibility, herringbone and chemical shift) and ~100000 artifact-free samples, through manual inspection of three differently acquired DWI datasets by two experts. Each sample was labeled based on the type of artifact present. Artifacts manifest differently, with some more distinguishable on the axial view than the sagittal, and vice-versa. Axial slices were used as samples for ghosting, susceptibility, herringbone and chemical shift artifacts, and sagittal slices were used for motion and multiband interleaving artifacts. Figure 1 depicts the distribution of artifacts in our dataset. Figure 2 shows representative examples of the artifacts that can be detected.
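As a concrete illustration (not the authors' code), splitting a DWI volume into view-specific 2D samples can be sketched in NumPy, with axial slices feeding one classifier and sagittal slices the other; `extract_slices` is a hypothetical helper:

```python
import numpy as np

def extract_slices(volume, view):
    """Split a 3D DWI volume (x, y, z) into 2D slices along one view.

    view: "axial" slices along z (used here for ghosting, susceptibility,
    herringbone and chemical-shift samples); "sagittal" slices along x
    (used for motion and multiband-interleaving samples).
    """
    if view == "axial":
        return [volume[:, :, k] for k in range(volume.shape[2])]
    if view == "sagittal":
        return [volume[i, :, :] for i in range(volume.shape[0])]
    raise ValueError(f"unknown view: {view}")

# Example: a toy 96x96x60 volume yields 60 axial or 96 sagittal samples.
vol = np.zeros((96, 96, 60))
axial = extract_slices(vol, "axial")
sagittal = extract_slices(vol, "sagittal")
```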

2-Convolutional Neural Network Approach:

As humans rely on the identification of patterns in MRI data to detect artifacts, deep learning tools, especially convolutional neural networks (CNNs), are well suited to this purpose. CNNs require a large number of parameters to be optimized during the training process, which in turn requires large amounts of training data and computational power. To overcome this, we adopted a transfer learning approach3, which consists of taking a classifier trained on one task and re-training a small number of parameters with a smaller amount of data so that it performs well on another task. We used the pre-trained VGG-Net, one of the top architectures in computer vision4, as our base CNN. The top layer of the network was removed and replaced with a fully connected layer of 256 neurons, followed by a dense layer that performs the two-class classification (artifactual vs. artifact-free) using a Softmax activation. All parameters were fixed except those in the newly added layers, which vastly reduced the number of parameters to be trained. We trained two classifiers: one for artifacts manifesting in the sagittal view and one for artifacts manifesting in the axial view. We used 80% of the data for training and 20% for testing. All slices were zero-padded to make them square and replicated three times for the three input channels of the network. Each image was scaled so that its intensities lay between 0 and 1. Each classifier was trained for 20 epochs using the RMSprop optimizer with a learning rate of 2e-4 and a cross-entropy loss function. The classifiers were implemented in Keras5.
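The preprocessing described above (zero-padding to square, channel replication and intensity scaling) can be sketched as follows; this is a minimal NumPy illustration under stated assumptions, not the authors' implementation:

```python
import numpy as np

def preprocess_slice(img):
    """Prepare a 2D slice for a 3-channel VGG-based classifier:
    zero-pad to square, replicate across three channels, scale to [0, 1]."""
    h, w = img.shape
    side = max(h, w)
    # Zero-pad the slice symmetrically to a square.
    padded = np.zeros((side, side), dtype=np.float32)
    top, left = (side - h) // 2, (side - w) // 2
    padded[top:top + h, left:left + w] = img
    # Replicate the grayscale slice for the network's three input channels.
    stacked = np.stack([padded] * 3, axis=-1)
    # Min-max scale intensities into [0, 1]; guard against constant slices.
    lo, hi = stacked.min(), stacked.max()
    if hi > lo:
        stacked = (stacked - lo) / (hi - lo)
    return stacked

x = preprocess_slice(np.arange(12.0).reshape(3, 4))
```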


Figure 3 shows the performance of our CNN method across different artifact types. We obtain 98% accuracy for motion and multiband interleaving artifacts, and 94% for susceptibility, ghosting, herringbone and chemical shift artifacts. Precision and recall values are reported accordingly.
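For reference, the reported accuracy, precision and recall can be computed from binary predictions as in this brief sketch (illustrative only; `classification_metrics` is a hypothetical helper, not part of QC-Automator):

```python
import numpy as np

def classification_metrics(y_true, y_pred):
    """Accuracy, precision and recall for a binary
    artifactual (1) vs. artifact-free (0) classifier."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_pred == 1) & (y_true == 1))  # true positives
    fp = np.sum((y_pred == 1) & (y_true == 0))  # false positives
    fn = np.sum((y_pred == 0) & (y_true == 1))  # false negatives
    accuracy = float(np.mean(y_pred == y_true))
    precision = float(tp / (tp + fp)) if tp + fp else 0.0
    recall = float(tp / (tp + fn)) if tp + fn else 0.0
    return accuracy, precision, recall

acc, prec, rec = classification_metrics([1, 1, 0, 0, 1], [1, 0, 0, 0, 1])
```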


Figure 3 illustrates that our transfer learning method detects artifacts with high accuracy. Figure 4 shows examples of its output, demonstrating that our method correctly classified the majority of artifacts in different slices of the brain. We detected most cases of susceptibility, herringbone and chemical shift, with a few false detections for ghosting, motion and multiband interleaving artifacts. As the pattern of distortion is more visible in susceptibility, herringbone and chemical shift, performance on the other artifacts may be improved by adding more training data in the future.


We have presented a novel method for automated artifact detection using CNNs and transfer learning, detecting various kinds of artifacts across different diffusion MRI datasets of the human brain. We achieved 96% accuracy for detecting motion, multiband interleaving, ghosting, susceptibility, herringbone and chemical shift artifacts. In the future, we will combine the results of the two classifiers by adding a data-fusion layer, and classify every sample based on the presence of any artifact.
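As a minimal sketch of one possible fusion rule, assuming per-view artifact probabilities and a simple maximum-based combination (the authors' planned fusion layer may differ; `fuse_view_predictions` is hypothetical):

```python
import numpy as np

def fuse_view_predictions(p_axial, p_sagittal, threshold=0.5):
    """Flag a sample as artifactual (1) if either the axial-view or the
    sagittal-view classifier's artifact probability exceeds the threshold."""
    p_axial, p_sagittal = np.asarray(p_axial), np.asarray(p_sagittal)
    return (np.maximum(p_axial, p_sagittal) >= threshold).astype(int)

# Example: three samples scored by both view-specific classifiers.
fused = fuse_view_predictions([0.9, 0.2, 0.1], [0.3, 0.8, 0.1])
```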


This research was supported by the National Institutes of Health (NIH) grant 1R01HD089390 (PI: Ragini Verma).


[1] M. S. Graham, I. Drobnjak, and H. Zhang, "A supervised learning approach for diffusion MRI quality control with minimal training data," NeuroImage, 2018.

[2] C. Kelly, M. Pietsch, S. Counsell, and J.-D. Tournier, "Transfer learning and convolutional neural net fusion for motion artefact detection," in Proc. Intl. Soc. Mag. Reson. Med., 2016, pp. 1-2.

[3] S. Hoo-Chang, H. R. Roth, M. Gao, L. Lu, Z. Xu, I. Nogues, et al., "Deep convolutional neural networks for computer-aided detection: CNN architectures, dataset characteristics and transfer learning," IEEE transactions on medical imaging, vol. 35, p. 1285, 2016.

[4] K. Simonyan and A. Zisserman, "Very deep convolutional networks for large-scale image recognition," arXiv preprint arXiv:1409.1556, 2014.

[5] F. Chollet et al., "Keras," 2015. https://github.com/fchollet/keras


Figure 1: Distribution of different types of artifacts in our dataset

Figure 2: Types of artifacts that the classifier was trained on

Figure 3: Classification results

Figure 4: A set of correctly and incorrectly classified cases

Proc. Intl. Soc. Mag. Reson. Med. 27 (2019)