For ChatGPT to revolutionize healthcare, regulatory agencies need to establish standards and guidelines for the use of AI models in medicine and ensure the models are trained on unbiased data, according to a June 28 article published in the Georgetown Journal of International Affairs.
Three areas of concern for ChatGPT in healthcare:
- Because the technology can fabricate information, ChatGPT must be governed by policies that ensure it is used properly in medicine, according to the article.
- ChatGPT is only as good as the data it is trained on; biased or incomplete data can produce inaccurate results, according to the article. In healthcare, an inaccurate diagnosis or treatment recommendation could have severe consequences for the patient.
- ChatGPT could also exacerbate biases in healthcare. For instance, if the model is trained on data that includes a "disproportionate number of patients from one demographic group," according to the article, it could generate responses that are biased against other groups (a synthetic sketch of this effect follows the list).
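The mechanism in that last point can be made concrete with a small, synthetic example. The sketch below is entirely illustrative and not from the article: it trains a simple scikit-learn classifier on records dominated by one demographic group, then shows accuracy falling sharply on an underrepresented group whose features relate to the diagnosis differently. The group sizes, features, and coefficients are all hypothetical.

```python
# Illustrative sketch: demographic imbalance in training data skewing
# a model's per-group accuracy. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_group(n, w):
    # Synthetic patient records: the same two features predict the
    # diagnosis label, but the relationship (w) differs by group.
    X = rng.normal(size=(n, 2))
    y = (X @ w + rng.normal(0, 0.5, n) > 0).astype(int)
    return X, y

w_majority = np.array([1.0, 0.8])
w_minority = np.array([-1.0, 0.8])  # features relate differently here

# Training set dominated by one group: 9,000 vs. 300 records.
X_maj, y_maj = make_group(9000, w_majority)
X_min, y_min = make_group(300, w_minority)
model = LogisticRegression().fit(np.vstack([X_maj, X_min]),
                                 np.concatenate([y_maj, y_min]))

# Evaluate on fresh, equal-sized samples from each group.
for name, w in [("majority", w_majority), ("minority", w_minority)]:
    X_t, y_t = make_group(2000, w)
    print(f"{name} group accuracy: {accuracy_score(y_t, model.predict(X_t)):.2f}")
```

Because the underrepresented group contributes so few training records, the fitted model effectively learns only the majority group's feature-label relationship, so its accuracy on the minority group drops to near chance.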