WHO: 6 principles for using AI in healthcare ethically

The World Health Organization released guidance on how to use artificial intelligence in healthcare settings, developed with input from ethics, human rights and technology experts, according to a June report.

Here are six principles for ethical use of AI:

  1. The use of AI shouldn't undermine provider decision-making. Healthcare employees should remain in control of healthcare systems and medical decisions.

  2. AI products should not harm people. Algorithms should be required to meet standards for safety and accuracy.

  3. Algorithms should be explainable to developers, medical providers and patients. Developers should be transparent about how products are designed to function, and this information should be made available before the tool is launched.

  4. Hospitals that use AI are responsible for ensuring it is used appropriately and that medical staff are properly trained.

  5. AI must be designed to encourage inclusiveness and equality so it can be shared with as many patients as possible.

  6. The performance of AI applications should be continuously assessed to make sure the tool is responsive.
