Biased by Design: How Digital Health in Neurology Fails—and Can Serve—Marginalized Patients
Objective:
To examine how bias emerges within AI and digital health technologies across neurology and to propose an equity-centered framework to guide inclusive design and innovation.
Background:
Digital health technologies are rapidly transforming neurologic care, from wearable sensors to AI-based diagnostics. Yet many remain biased by design. Devices and algorithms that overlook phenotypic diversity or rely on exclusionary datasets can distort measurements, delay diagnoses, and widen disparities in brain health.
Design/Methods:
Representative examples from neurotechnology and clinical devices, including EEG, fNIRS, MRI, DBS, and AI-enabled cognitive tools, were analyzed to identify patterns of phenotypic, algorithmic, and structural bias. A conceptual synthesis was performed to derive actionable strategies for equitable innovation.
Results:
Biases were observed across the technology lifecycle, from data acquisition (hair- and skin-related measurement bias) to algorithm training (non-diverse datasets) and deployment (limited access in minority-serving settings). Five corrective strategies emerged: (1) diverse development teams; (2) inclusive datasets; (3) community co-design; (4) device and algorithm auditing across phenotypes; and (5) patient-centered inputs that prioritize lived experience over provider assumptions.
Conclusions:
Bias in digital health and AI is both a technical and a moral failure. Embedding equity from concept to deployment can ensure that AI and digital health technologies serve every brain—not just some.
Disclaimer: Abstracts were not reviewed by Neurology® and do not reflect the views of Neurology® editors or staff.