Jeffrey Ratliff1, Jason Margolesky2, Nicole Calautti1, Regina Kurapova1, Helen Hernandez Lage1, Patricia Jokl Graese3, Sara Schaefer4, William Dalrymple5, Courtney Seebadri-White1, Andres Fernandez1
1Thomas Jefferson University, 2University of Miami School of Medicine, 3University of Florida, 4Yale University, 5University of Virginia Health System
Objective:
To develop a custom GPT that simulates virtual patient encounters and enhances residents' clinical reasoning within a self-regulated learning framework.
Background:
Virtual patient simulations are an effective way for graduate medical education (GME) learners to build clinical reasoning skills within a self-regulated learning framework. Simulations facilitate practice in history taking, neurological examination (via video), and clinical formulation. ChatGPT simulates natural-language encounters, enabling asynchronous learning across digital devices. Moreover, an instruction template for building GPT simulation tools that is adaptable to multiple topics allows transfer to other learning contexts.
Design/Methods:
We instructed a custom GPT to play two roles: a virtual patient with tremor and an observing tutor. Prompts were developed through iterative optimization with ChatGPT itself, and a sketch of such a dual-role instruction appears below. The GPT's knowledge base comprised published review articles. The initial GPT was shared for testing with 5 trainees, 3 movement disorders faculty, and 2 medical education faculty; their feedback informed refinements. (ChatGPT was involved in editing portions of this abstract.)
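The authors built their tool with ChatGPT's no-code custom GPT builder, so no source code is given in the abstract. The following is a minimal sketch of how a comparable dual-role (virtual patient plus observing tutor) instruction could be expressed programmatically via the OpenAI API; the model name, instruction wording, and case details are illustrative assumptions, not the authors' actual prompt.

```python
# Minimal sketch of a dual-role (virtual patient + observing tutor) instruction,
# expressed via the OpenAI API. The authors used ChatGPT's no-code custom GPT
# builder; the model name, wording, and case details below are illustrative
# assumptions, not the authors' actual prompt.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = """\
You play two roles in a teaching simulation for neurology residents:
1. PATIENT: a person presenting with tremor. Answer in natural, first-person
   language. Reveal history details only when the learner asks a targeted
   question; never volunteer the entire history at once.
2. TUTOR: after the learner commits to a formulation, step out of character,
   give feedback on their clinical reasoning, and prompt the next step.
Base all clinical content on the supplied review articles (knowledge base).
"""

def ask_patient(history: list[dict], learner_message: str) -> str:
    """Send one learner turn to the simulated encounter and return the reply."""
    history.append({"role": "user", "content": learner_message})
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model; any chat-capable model works
        messages=[{"role": "system", "content": SYSTEM_PROMPT}] + history,
    )
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

if __name__ == "__main__":
    transcript: list[dict] = []
    print(ask_patient(transcript, "Hi, what brings you in today?"))
```

The two-role system prompt mirrors the design described above: a single model alternates between in-character patient responses and out-of-character tutor feedback, which keeps the encounter and the coaching in one conversational thread.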
Results:
The GPT simulated 5 unique tremor cases. Interactions between learners and the GPT used natural language, though testers judged some GPT responses slightly unnatural. Refinements prevented the virtual patients from summarizing their entire histories without targeted questioning (an illustrative example follows). Experts found the interactions consistent with the simulated diagnoses. The tool supported clinical reasoning through tutor-led prompting, feedback, and instruction. The GPT hyperlinked videos of examinations, enhancing simulation fidelity. Detailed GPT prompting carried a technical limitation: a high computational burden that restricted session time for users.
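The guardrail refinement described above could take the form of an added instruction such as the following; the wording is hypothetical, as the authors' actual prompt text is not published in this abstract.

```python
# Hypothetical guardrail instruction of the kind added during refinement so the
# virtual patient stops summarizing its entire history without targeted
# questioning; the authors' actual wording is not given in the abstract.
GUARDRAIL = (
    "As the PATIENT, answer only the specific question asked. For open-ended "
    "questions, state a one-sentence chief complaint and wait for targeted "
    "follow-up; never recite the full history in one turn."
)
```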
Conclusions:
ChatGPT shows high potential for simulating patient encounters that help learners develop clinical skills asynchronously within a self-regulated learning framework. We provide a methodology for developing and refining an educational custom GPT. Adopters of this technology must optimize GPT prompts for effectiveness and efficiency to improve the user experience and maximize session utility. While validation with learners is pending, this blueprint can guide GPT development across different educational goals.
Disclaimer: Abstracts were not reviewed by Neurology® and do not reflect the views of Neurology® editors or staff.