The models were developed using de-identified data from Chennai, India-based Apollo Hospitals and the National Institutes of Health. Once trained on the datasets with radiologist input, the AI was able to identify pneumothorax, nodules and masses, fractures, and airspace opacities in chest X-rays with accuracy equal to or better than that of expert radiologists examining the same images.
In a post on the Google AI Blog, the study’s authors cautioned that their findings do not imply that AI should fully replace radiologists.
“The model often identified findings that were consistently missed by radiologists, and vice versa. As such, strategies that combine the unique ‘skills’ of both the deep learning systems and human experts are likely to hold the most promise for realizing the potential of AI applications in medical image interpretation,” they wrote.