Three areas of concern for ChatGPT in healthcare:
- According to the article, ChatGPT must be governed by policies to ensure it is used properly in medicine, because the technology can sometimes fabricate information.
- ChatGPT is only as good as the data it is trained on; according to the article, biased or incomplete training data could lead to inaccurate results. In healthcare, an inaccurate diagnosis or treatment recommendation from ChatGPT could have severe consequences for the patient.
- ChatGPT could also exacerbate existing biases in healthcare. For instance, if the model is trained on data that includes a “disproportionate number of patients from one demographic group,” according to the article, it could generate responses that are biased against other groups.