Here are six principles for ethical use of AI:
- The use of AI shouldn’t undermine provider decision-making. Healthcare professionals should remain in control of healthcare systems and medical decisions.
- AI products should not harm people. Algorithms should be required to meet defined standards for safety and accuracy.
- Algorithms should be explainable to developers, medical providers, and patients. Developers should be transparent about how their products are designed to function, and this information should be made available before a tool is launched.
- Hospitals that use AI are responsible for ensuring it is used properly and that medical staff are adequately trained.
- AI must be designed to promote inclusiveness and equity so that its benefits reach as many patients as possible.
- The performance of AI applications should be continuously assessed to ensure the tool remains effective and responsive to patient needs.