Inter- and Intra-Rater Reliability of a Virtual Concussion Assessment During the Era of the COVID-19 Pandemic
Alani Jack1, Helena Digney1, Carter Bell1, Binu Joseph1, Sara Hyman1, Steven Galetta1, Laura Balcer1, Barry Willer2, Mohammad Haider2, Ghazala Saleem2, Scott Grossman1, John Leddy3, Neil Busis1, Daniel Torres4
1Department of Neurology, NYU Grossman School of Medicine, 2State University of New York at Buffalo, 3UBMD Orthopaedics & Sports Medicine, 4Neurology, Northwell Health
Objective:

To assess inter-examiner agreement of virtual concussion telemedicine examinations and to determine agreement between virtual and in-person concussion examinations performed by the same physician.

Background:

We developed a virtual concussion telemedicine examination by adapting several in-office examination methods. The virtual examination was compared with the in-person concussion examination to assess reliability and to validate the use of telemedicine via audio-video conferencing technology.

Design/Methods:

We developed a virtual concussion examination form with instructions for performing each element. The standardized examination included 29 elements, such as orthostatic tolerance, oculomotor examination, and balance tests. We enrolled 21 participants referred for an initial concussion evaluation at the NYU Concussion Center. Two virtual concussion telemedicine examinations were conducted in the office setting following study enrollment; one was performed by the treating physician and the other by a second physician. The in-person concussion examination was then performed by the treating physician. We used Cohen's kappa to determine inter-modality and inter-examiner agreement.
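For reference, Cohen's kappa quantifies agreement between two raters beyond what chance alone would produce:

\kappa = \frac{p_o - p_e}{1 - p_e}

where $p_o$ is the observed proportion of agreement and $p_e$ is the proportion of agreement expected by chance, derived from each rater's marginal rating frequencies. A kappa of 1 indicates perfect agreement and 0 indicates agreement no better than chance.
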
Results:

We determined Cohen's kappa to assess agreement on dichotomous examination ratings for each of the 29 exam elements across the 21 participants. Kappa values for inter-examiner agreement ranged from 0.31 to 1.0, with a median kappa of 0.76; 45% of exam elements had excellent inter-examiner agreement between the two telemedicine examiners (kappa > 0.75), and 75% had at least intermediate inter-examiner agreement (kappa > 0.40). Within the same examiner, agreement between the telemedicine and in-person examinations was even higher, with kappa values ranging from 0.48 to 1.0.
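As a minimal illustrative sketch (assuming NumPy and scikit-learn; the ratings below are randomly generated placeholders, not study data), per-element Cohen's kappa for two raters could be computed as follows:

```python
# Illustrative sketch only: per-element Cohen's kappa for two raters.
# Ratings are hypothetical placeholders (1 = abnormal, 0 = normal),
# not data from the study.
import numpy as np
from sklearn.metrics import cohen_kappa_score

n_participants = 21
n_elements = 29

rng = np.random.default_rng(0)
# Hypothetical dichotomous ratings: rows = exam elements, columns = participants.
rater_a = rng.integers(0, 2, size=(n_elements, n_participants))
rater_b = rater_a.copy()
# Flip a small fraction of ratings so the two raters disagree occasionally.
flip = rng.random(rater_b.shape) < 0.15
rater_b[flip] = 1 - rater_b[flip]

# Cohen's kappa for each exam element, computed across participants.
kappas = np.array([
    cohen_kappa_score(rater_a[i], rater_b[i]) for i in range(n_elements)
])

print(f"kappa range: {kappas.min():.2f} to {kappas.max():.2f}, "
      f"median: {np.median(kappas):.2f}")
# Benchmarks used in the abstract: >0.75 excellent, >0.40 at least intermediate.
print(f"excellent agreement (kappa > 0.75): {np.mean(kappas > 0.75):.0%}")
print(f"at least intermediate (kappa > 0.40): {np.mean(kappas > 0.40):.0%}")
```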

Conclusions:

We found that the treating physician's virtual and in-person concussion examination findings largely agreed. However, there was less agreement between virtual concussion examinations performed by two different physicians. This suggests that the expertise and experience of the examining physicians contribute more to variability than does the modality or area of the examination. This study sets the stage for further investigations of reliability between in-person and teleneurology examinations.

10.1212/WNL.0000000000204010