Artificial Intelligence-based Ocular Motor Biomarkers for Myasthenia Gravis Diagnosis
Preetham Bachina1, Narayani Waggle2, Goknur Kocak2, Andrea Corse1, Nuren Adatepe3, Kemar Green1
1Neurology, Johns Hopkins University School of Medicine, 2Johns Hopkins University School of Medicine, 3Cerrahpasa Medical School
Objective:
This study seeks to improve clinical care for myasthenia gravis by translating advances in eye-tracking technology into clinical practice.
Background:
Myasthenia gravis (MG) is a chronic neuromuscular disease caused by an autoimmune attack on neuromuscular junction proteins. Available diagnostic techniques are insensitive, non-specific, technically cumbersome, invasive, or labor-intensive. Eye-movement biomarkers such as optokinetic nystagmus (OKN) have been shown to differentiate patients with myasthenia from normal controls. The ability to rapidly distinguish myasthenic from non-myasthenic eye-movement disorders would decrease patient morbidity and defray the high healthcare costs associated with diagnostic workups and disease progression in patients with MG. The unique eye-movement signatures of myasthenia gravis make the disease an ideal target for automated detection with artificial intelligence.
Design/Methods:
In this study we utilized 31 binocular recordings (60 seconds each) obtained from 10 patients with myasthenia gravis and 35 binocular recordings (60 seconds each) obtained from 10 normal controls (NC). All videos were recorded with the same video-oculography device. Video-editing software was then used to convert each video into separate left- and right-eye monocular recordings (NC=70 videos; MG=62 videos). Each 60-second recording was then split into 10-second clips (MG=186; NC=210 videos), and the clips were converted to frames (n=118,000 frames). We then split the dataset into training (MG=43,500 frames; NC=50,700 frames) and test (n=12,300 frames per group) sets, a ratio of ~4:1. The raw video frames were used to develop a video-based classifier adapted from a previous method.
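The train/test partition described above can be sketched as a recording-disjoint split, which keeps all frames from a given recording on one side of the split. This is a minimal illustration only: the function name, the random seed, and the per-recording granularity are assumptions, since the abstract reports only the resulting frame counts.

```python
import random

def split_recordings(recording_ids, test_fraction=0.2, seed=0):
    """Split recording IDs into train/test sets at ~4:1, keeping every
    frame from a given recording on one side to avoid data leakage.
    (Illustrative sketch; not the study's actual splitting code.)"""
    ids = sorted(recording_ids)          # deterministic order before shuffling
    random.Random(seed).shuffle(ids)     # reproducible shuffle
    n_test = max(1, round(len(ids) * test_fraction))
    return ids[n_test:], ids[:n_test]    # (train, test)
```

For example, splitting the 62 monocular MG recordings this way would hold out 12 recordings (~20%) for testing, with the remaining 50 used for training.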
Results:
The preliminary model yields an AUC of 0.77, with a sensitivity of 0.71 and a specificity of 0.78. Our method achieves an overall accuracy of 74.1%.
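For context, sensitivity, specificity, and accuracy follow directly from confusion-matrix counts, as in the sketch below. The example counts in the usage note are illustrative only and are not the study's data.

```python
def sensitivity(tp: int, fn: int) -> float:
    """True-positive rate: fraction of MG samples correctly flagged."""
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    """True-negative rate: fraction of normal controls correctly cleared."""
    return tn / (tn + fp)

def accuracy(tp: int, tn: int, fp: int, fn: int) -> float:
    """Fraction of all predictions that are correct."""
    return (tp + tn) / (tp + tn + fp + fn)
```

For instance, with hypothetical counts of 71 true positives, 29 false negatives, 78 true negatives, and 22 false positives, sensitivity is 0.71, specificity is 0.78, and accuracy is 0.745.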
Conclusions:
The preliminary results suggest that deep learning is well-suited to developing an automated myasthenia gravis diagnostic tool from a larger dataset of OKN video recordings.
10.1212/WNL.0000000000206599