ChatGPT missed 8 in 10 pediatric diagnoses, study finds

As health systems explore ChatGPT’s uses, a study conducted by New York City-based Cohen Children’s Medical Center found the chatbot missed the mark in pediatrics. 

Researchers fed New England Journal of Medicine pediatric case challenges to ChatGPT version 4, the newest model from OpenAI, which costs $20 a month. The chatbot misdiagnosed 72 of 100 cases and gave a diagnosis too broad to be considered correct in another 11. 

In the study, ChatGPT also failed to identify known relationships, such as the one between autism and vitamin deficiencies. 

The researchers said this is the first known accuracy test of large language model chatbots on pediatric medical scenarios, "which require the consideration of the patient's age alongside symptoms." Clinical AI applications also require up-to-date information, which can be hard for tech companies to come by.
