ChatGPT fails urologist exam

ChatGPT scored less than 30 percent on the American Urological Association's Self Study Program for Urology, a 150-question practice exam widely used by urologists in training.

Clinicians at the Omaha-based University of Nebraska Medical Center who conducted the study found the chatbot's answers to open-ended questions "frequently redundant and cyclical in nature."

ChatGPT scored just 26.7 percent on open-ended questions, according to a June 6 Wolters Kluwer news release.

Previously, the AI tool fared well on the United States Medical Licensing Exam and at providing empathetic answers to patient questions.

"ChatGPT not only has a low rate of correct answers regarding clinical questions in urologic practice, but also makes certain types of errors that pose a risk of spreading medical misinformation," said Christopher Deibert, MD, one of the study authors from the University of Nebraska Medical Center.

Copyright © 2024 Becker's Healthcare. All Rights Reserved.