Can AI-generated Simulated Patients be a Tool to Empower Education in Headache Medicine: A Pilot Study
Natalia Murinova1, Ian Hakkinen2, Daniel Krashin3, Jay Dave4, Pengfei Zhang5, Mofei Lu6, Ami Cuneo7, Helen Sullivan8, Markku Hakkinen9
1University Of Washington, 2Evergreen Health, 3Seattle VA, 4Mount Sinai Union Square, 5Beth Israel Deaconess Medical Center, 6Neurology, Ophthalmology, Brigham and Women's Hospital, 7University of Washington, 8Psychology, Rider University, 9Cognitive Science, University of Jyvaskyla
Objective:
To evaluate learners' perceptions of AI and whether AI-generated simulated-patient chatbots can improve learners' ability to diagnose primary headache disorders.
Background:
Headache medicine remains underrepresented throughout medical education despite its high prevalence and significant disability burden. Headache diagnosis is systematically guided by the strict diagnostic criteria of the International Classification of Headache Disorders, 3rd edition (ICHD-3), which allows for an algorithmic teaching approach.
AI has broad implications for advancing medicine from both patient-care and medical-education standpoints. AI chatbots can simulate a standardized patient encounter, enabling learners at all levels to practice history taking and clinical reasoning and to build diagnostic confidence through interactive dialogue. This study evaluated medical learners' perceptions of AI in medical education and assessed whether AI-generated simulated patients enhance confidence in diagnosing primary headache disorders.
Design/Methods:
We conducted an educational interventional study using AI-generated simulated-patient chatbots. The chatbots were built on Claude 3.5 Sonnet with embedded case prompts written by headache experts. Participants first completed a pre-survey (10-point Likert-type scale: 1 = strongly disagree, 10 = strongly agree). They were then given access to the simulated patients with no limit on time or number of interactions. After completing the interactions, participants took a post-survey.
Results:
Nine participants completed the pre-survey, and six completed both surveys. Post-chat ratings were significantly higher than pre-chat expectations for directly comparable items (Mann-Whitney U = 47.5, p = .016, r = .62). Participants' ratings were consistently high (medians 8.5–10.0), including excitement about AI-enhanced education (8.3), belief in AI's future role in medical education (8.6), effectiveness for improving diagnostic skills in headache medicine (9.5), and realism of the simulated-patient interactions (8.5).
Conclusions:
AI simulated-patient chatbots can enhance learners' confidence in diagnosing primary headache disorders. After the intervention, participants expressed interest in AI integration and comfort with AI educational tools. This pilot study demonstrates a promising approach to improving the teaching of underrepresented topics in medical education.
Disclaimer: Abstracts were not reviewed by Neurology® and do not reflect the views of Neurology® editors or staff.