AI chatbot startup Babylon Health attacks physician for '2,400 Twitter troll tests'

Babylon Health, the artificial intelligence startup behind triage chatbots such as the U.K. National Health Service's GP at Hand app, issued a press release on Feb. 24 challenging a British physician who has been a vocal critic of its AI in recent years.

Dr. David Watkins, a consultant oncologist at The Royal Marsden NHS Foundation Trust, is one of many clinicians who have repeatedly flagged patient safety concerns arising from the use of Babylon's chatbot. On Twitter, he has documented numerous cases in which, he claims, the chatbot failed to identify and triage a serious health condition. (Babylon Health says its chatbot offers "information only," not diagnoses.)

In the release, Babylon accused Dr. Watkins, whom it refers to only by his Twitter handle, @DrMurphy11, of conducting "2,400 Twitter troll tests" and having "trolled us at every turn." The company also claimed that Dr. Watkins spent "hundreds of hours" testing its AI yet raised concerns over fewer than 100 of those tests.

Babylon said in the release that a "panel of senior clinicians" had investigated each of Dr. Watkins' concerns and concluded that almost all were either misrepresentations or mistakes on his part; the 20 "genuine errors" were immediately fixed.

Those claims are "absolute nonsense," Dr. Watkins told TechCrunch. First and foremost, he said, he has likely completed closer to 800 or 900 full run-throughs of Babylon's AI service, many of them repeats made while he confirmed an issue, checked whether it had been fixed or documented it to post on Twitter. And while Babylon calculated his error-discovery rate at around 0.8 percent, he said it is closer to one-third of all tests.

"They've manipulated data to try and discredit someone raising patient safety concerns," he told TechCrunch.

He added, "I'm concerned that there will be clinicians in that company who, if they see this happening, they're going to think twice about raising concerns — because you'll just get discredited in the organization. And that's really dangerous in healthcare. You have to be able to speak up when you see concerns because otherwise patients are at risk of harm and things don't change. You have to learn from error when you see it. You can't just carry on doing the same thing again and again and again."

Read his full interview here, and access the Babylon Health release here.

