Chatbot diagnoses only 17% of pediatric ailments correctly, study says

When it comes to making accurate diagnoses in pediatric cases, ChatGPT can't compare to pediatricians.

In a JAMA study published Jan. 2, researchers found that ChatGPT, a chatbot built on a large language model, was not well suited to making accurate pediatric diagnoses.

To test the accuracy of ChatGPT's clinical knowledge, researchers gave version 3.5 of the chatbot descriptions of pediatric clinical cases along with the prompt "List a differential diagnosis and a final diagnosis." They found a diagnostic error rate of 83%, though some of the incorrect answers were clinically related to the correct diagnosis.
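
For readers who want a concrete sense of the setup, the sketch below shows how a similar prompt could be sent to a GPT-3.5 model through the OpenAI Python client. This is an illustration only, not the study's actual workflow: the researchers used the ChatGPT chatbot itself, and the model name and placeholder case text here are assumptions.

    # Minimal sketch (assumed setup, not the study's): send a pediatric case
    # vignette plus the study's prompt to a GPT-3.5 model via the OpenAI API.
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    case_description = "<de-identified pediatric case vignette goes here>"

    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumed API counterpart of "ChatGPT version 3.5"
        messages=[{
            "role": "user",
            "content": f"{case_description}\n\n"
                       "List a differential diagnosis and a final diagnosis",
        }],
    )

    # Print the model's differential and final diagnosis
    print(response.choices[0].message.content)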

Though these initial results are not promising, the team behind the study does not believe LLMs should be written off in medicine entirely. Instead, the researchers advise limiting chatbots such as ChatGPT to non-clinical settings, noting that clinical experience cannot be replaced.
