AMA issues 7 AI safety guidelines

The seemingly constant emergence of artificial intelligence tools across the healthcare landscape has led the American Medical Association to publish guidelines about the technology, detailing best practices for limiting risks to both patients and clinicians.

"The AMA recognizes the immense potential of health care AI in enhancing diagnostic accuracy, treatment outcomes, and patient care," Jesse Ehrenfeld, MD, president of the AMA stated in a news release. "However, this transformative power comes with ethical considerations and potential risks that demand a proactive and principled approach to the oversight and governance of health care AI."

The eight-page guide, published Nov. 28, highlights seven key areas for clinicians to pay attention to and advocate for within their own hospitals and health systems as use of AI tools in care increases:

  1. Oversight: The AMA emphasizes that oversight of AI tools, particularly in healthcare, should come from government. However, it notes that clinicians should also understand that many non-governmental entities will have a hand in shaping healthcare AI.

  2. Transparency: Information about the design, implementation, data and training of any AI tool should be shared appropriately and governed by overarching laws, the AMA emphasizes.

  3. Disclosure and documentation: If AI directly affects patient care, medical decision-making or access to care, the AMA recommends that adequate information about those details should be made available, communicated and documented.

  4. Generative AI: Given the rapid innovations spawned by generative AI, the AMA advises healthcare organizations to proactively craft policies about the technology's use now, before adopting any tool that utilizes it.

  5. Privacy: Health systems should implement safeguards for any AI tools they adopt to ensure patient protections and privacy, and should communicate what those mechanisms are and how they work to build trust with patients.

  6. Mitigating bias: When AI tools are trained on datasets, clinicians should advocate for guardrails that can identify and mitigate biases that may come from training and work to enhance inclusivity, according to the AMA.

  7. Liability: As an organization, the AMA notes it plans to take steps to ensure that physician liability for the use of AI technologies aligns with what is laid out in current medical liability laws.

The organization's newly inked principles on AI also extend recommendations to payers: if they use an AI-driven algorithm to determine coverage limits, claim approvals or denials, or benefits, they should ensure the tool does not "override clinical judgement" by limiting the accessibility of medical care.

Copyright © 2024 Becker's Healthcare. All Rights Reserved.
