Stroke Modeling and Synthesis for Intelligent Virtual and Robotic Patients
Description
Stroke, a substantial contributor to the global disease burden, affects 15 million people each year. One of the contributors to this burden is diagnostic failure: stroke is the fourth most common misdiagnosis reported by clinicians.
Research shows that using simulated patients may reduce preventable medical errors.
However, current commercial patient simulators have static faces and lack the realistic depiction of non-verbal facial cues important for rapid diagnosis of neurological emergencies such as stroke.
The lack of facial expressions in simulated patients may adversely affect clinical learners' training and leave them unprepared to recognize a stroke in their future careers.
Our multidisciplinary research addresses the urgent need for an expressive training tool by developing simulated patients capable of realistically synthesizing non-verbal asymmetric facial cues that are important for the rapid diagnosis of neurological emergencies.
More specifically, we developed techniques to automatically model the facial characteristics of neurological impairment in real time and to use the model to transfer pathologies, including Bell's Palsy and stroke, onto simulated patients of different genders, races, and ages.
With these expressive simulated patients, clinical learners can potentially diagnose people with Bell's Palsy and stroke more accurately, interact with them more effectively, and improve their cultural competence in providing healthcare to patients from diverse backgrounds.
Moreover, this system will help enhance immersion in simulation by providing clinical learners with the opportunity to participate in a comprehensive, realistic experience.