Adversarial Learning for MRI Reconstruction and Classification of Cognitively Impaired Individuals
Akshara Balachandra1, Xiao Zhou2, Michael Romano4, Vijaya Kolachalama3
1Department of Medicine, Stanford University, 2Computer Science, Boston University, 3Boston University, 4Radiology & Biomedical Engineering, University of California, San Francisco
Objective:
To design and train a dual-objective generative adversarial network (GAN) that (1) reconstructs higher-quality brain MRIs and (2) accurately retains disease-specific imaging features critical for predicting progression from mild cognitive impairment (MCI) to Alzheimer’s disease (AD).
Background:
Game theory-inspired deep learning using a GAN provides an environment in which neural networks interact competitively to accomplish a goal. A classical GAN pairs a generator with a discriminator: the generator takes images from one domain (e.g., low-quality brain MRIs) and creates images resembling real training data (e.g., high-quality brain MRIs), while the discriminator learns to distinguish generated images from real ones, driving the generator to improve. Most published work in medical imaging has focused on single tasks such as super-resolution and segmentation.
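As a minimal illustration of this generator–discriminator setup (a sketch only, not the architecture used in this work; layer counts and channel sizes are hypothetical), a volumetric GAN pair could be defined in PyTorch as follows:

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Maps a degraded MRI volume toward a higher-quality reconstruction."""
    def __init__(self):
        super().__init__()
        # Hypothetical 3D convolutional stack; channel counts are illustrative
        self.net = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 1, kernel_size=3, padding=1),
        )

    def forward(self, x):  # x: [batch, 1, depth, height, width]
        return self.net(x)

class Discriminator(nn.Module):
    """Outputs a real-vs-generated logit for an MRI volume."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv3d(16, 32, kernel_size=4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(), nn.Linear(32, 1),
        )

    def forward(self, x):
        return self.net(x)
```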
Design/Methods:
We obtained 3T T1-weighted brain MRIs (i.e., original scans) of participants with MCI from the Alzheimer’s Disease Neuroimaging Initiative (ADNI, N=342) and the National Alzheimer's Coordinating Center (NACC, N=190). We simulated MRIs with missing data by removing 50% of the sagittal slices from each original scan (i.e., diced scans); the diced scans served as inputs to the generator. We introduced a classifier into the GAN architecture to discriminate between stable MCI (sMCI) and progressive MCI (pMCI), encouraging the generator to encode AD-related information during reconstruction. We assessed the quality of the generated images and their utility in distinguishing pMCI from sMCI. The framework was trained and internally validated on ADNI data and externally validated on NACC data.
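A simplified sketch of the two ideas described above, under assumed conventions and with hypothetical loss terms and weights (this is not the study code): dicing simulates missing data by dropping half of the sagittal slices, and the generator objective combines adversarial, classification, and reconstruction terms so generated scans retain pMCI/sMCI-relevant information:

```python
import torch
import torch.nn.functional as F

def dice_scan(volume: torch.Tensor) -> torch.Tensor:
    """Simulate missing data: zero out every other sagittal slice (50%).

    Assumes the sagittal axis is the third-from-last dimension of a
    [batch, channel, sagittal, coronal, axial] tensor; this axis
    convention is an assumption, not stated in the abstract.
    """
    diced = volume.clone()
    diced[..., ::2, :, :] = 0.0
    return diced

def generator_loss(disc_logits, cls_logits, labels, recon, original,
                   w_adv=1.0, w_cls=1.0, w_rec=10.0):
    """Dual-objective generator loss (the L1 term and all weights are assumed)."""
    adv = F.binary_cross_entropy_with_logits(
        disc_logits, torch.ones_like(disc_logits))  # fool the discriminator
    cls = F.cross_entropy(cls_logits, labels)        # preserve pMCI/sMCI signal
    rec = F.l1_loss(recon, original)                 # stay close to the original scan
    return w_adv * adv + w_cls * cls + w_rec * rec
```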
Results:
In the independent NACC cohort, the generated scans had better image quality than the diced scans (structural similarity [SSIM]: 0.553 ± 0.116 versus 0.348 ± 0.108). Furthermore, a classifier using the generated scans distinguished pMCI from sMCI more accurately than one using the diced scans (F1-score: 0.634 ± 0.019 versus 0.573 ± 0.028).
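For reference, the two reported metrics can be computed per scan with standard library functions, e.g., scikit-image's SSIM and scikit-learn's F1-score (a sketch with placeholder variable names; the study's exact evaluation protocol is not specified here):

```python
import numpy as np
from skimage.metrics import structural_similarity
from sklearn.metrics import f1_score

def mean_ssim(reconstructed, originals):
    """Mean and SD of SSIM between reconstructed and original 3D volumes."""
    scores = [structural_similarity(r, o, data_range=o.max() - o.min())
              for r, o in zip(reconstructed, originals)]
    return float(np.mean(scores)), float(np.std(scores))

def pmci_f1(y_true, y_pred):
    """F1-score for binary pMCI (1) vs. sMCI (0) predictions."""
    return f1_score(y_true, y_pred)
```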
Conclusions:
Competitive deep learning frameworks show promise in facilitating disease-oriented image reconstruction in those at risk of developing Alzheimer's disease.
10.1212/WNL.0000000000204800