In healthcare, ethical AI is a life-or-death issue: Q&A with AI Ethics Lab’s founder and director

For every giant leap forward that researchers, healthcare providers and technology developers make in incorporating artificial intelligence into healthcare, they seem to be just as quickly forced back to square one by persistent ethical concerns.

Chief among these issues is the fact that many AI algorithms that are built to improve outcomes were trained on data gathered from a predominantly male, predominantly white sample, as has traditionally been the case in scientific research. As a result, the systems are not nearly as effective when applied to female and non-white populations, further alienating already marginalized groups and reinforcing systemic gender and racial biases.

In response to these pervasive concerns, investors and accelerators have mandated that tech startups establish codes of ethics as they develop AI; billionaires have provided funding for facilities dedicated solely to solving these issues; and leaders from across tech, medicine and even the White House have stressed the importance of addressing ethical issues in AI development. But what, exactly, does ethical AI look like in healthcare?

According to Cansu Canca, PhD, founder and director of the AI Ethics Lab, based in Boston and Istanbul, ethical AI encompasses both a product and a process designed with ethics in mind. “By ethical product, we are talking about the value-laden decisions that are incorporated into AI systems and their design, while ethical process refers to the ethical decisions that are made during the development of AI systems,” Dr. Canca said. “Ethical AI should ideally satisfy both conditions.”

Here, Dr. Canca outlines the primary issues facing those responsible for the development and deployment of AI in healthcare and the questions that must be answered before introducing these technologies into clinical use, to ensure AI helps, rather than hurts, patients.

Editor’s note: Responses have been lightly edited for clarity and length.

Question: What does ethical AI look like?

Dr. Cansu Canca: Let’s take an AI diagnostic system for skin cancer as an example. For such a product to be ethical, it needs to be designed to work for all skin colors in society. To be adequate for mainstream use, it must be trained on a balanced dataset to “learn” from. This would ensure that the resulting product is beneficial for everyone in society, even members of minority groups. This is particularly important when such systems become the main diagnostic tool in settings with scarce resources, where disadvantaged groups have to rely on their accuracy.
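To make the idea of a balanced training dataset concrete, here is a minimal sketch in Python. It is illustrative only: the records and the skin_tone labels are hypothetical stand-ins, and downsampling to the smallest group is just one of several possible rebalancing strategies.

```python
import random
from collections import defaultdict

def balance_by_group(records, group_key, seed=0):
    """Downsample so every group contributes the same number of examples.

    `records` is a list of dicts; `group_key` names the demographic field
    (here a hypothetical 'skin_tone' label). Groups larger than the
    smallest group are randomly downsampled to match it.
    """
    rng = random.Random(seed)
    groups = defaultdict(list)
    for rec in records:
        groups[rec[group_key]].append(rec)
    n = min(len(members) for members in groups.values())  # smallest group size
    balanced = []
    for members in groups.values():
        balanced.extend(rng.sample(members, n))
    rng.shuffle(balanced)
    return balanced

# Hypothetical usage: each record pairs an image with a skin-tone label.
dataset = [{"image": f"img_{i}.png", "skin_tone": tone}
           for i, tone in enumerate(["I", "II", "V", "VI"] * 50 + ["I"] * 200)]
train_set = balance_by_group(dataset, "skin_tone")  # 50 examples per tone
```

In practice, collecting more data from underrepresented groups or upsampling them is usually preferable to discarding data; the sketch shows only the simplest version of the idea.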

In terms of ethical process, developing this tool would raise several ethical considerations: How do we collect the data for training such a tool without violating user privacy and autonomy? How do we test these tools without putting vulnerable individuals at risk? What safeguards can we put in place to ensure that such tools protect patient privacy and continue to learn from a wide range of individuals and groups within society?

Q: What is the most pressing ethical concern regarding the implementation of AI in healthcare?

CC: A critical issue for AI in healthcare is physician-computer interaction: designing systems that enable meaningful and effective collaboration between the two. Various AI systems are in development to assist physicians, which raises several concerns: How much information do these AI assistants provide? If physicians “disagree” with a system’s recommendation, does the system enable them to reason through the disagreement? And, of course, what incentives are built around these systems and physicians’ actions? Meaningful and effective interaction is critical not just for patient safety but also for improving AI systems, providing new insights to physicians and retaining physician autonomy.

Another area that deserves attention is where existing ethical problems in healthcare meet shortcomings of AI systems, as is the case with biases. In healthcare, we already have the major problem that women and minorities are not accurately or sufficiently represented in most medical knowledge. This lack of accurate data from various population groups means that AI systems that learn from existing healthcare data will have higher error rates when used for those groups. As we become more reliant on AI systems, issues like these will only deepen inequality.
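One way this kind of disparity can be made visible is a per-group error audit of a trained system. The sketch below is a hypothetical illustration, not a method Dr. Canca prescribes: the group labels, ground-truth values and predictions are made up, and a real audit would use proper clinical outcome data.

```python
from collections import defaultdict

def error_rate_by_group(examples):
    """Compute the misclassification rate separately for each demographic group.

    Each example is a (group, true_label, predicted_label) triple.
    A large gap between groups is a red flag that the training data
    under-represented some of them.
    """
    totals = defaultdict(int)
    errors = defaultdict(int)
    for group, truth, pred in examples:
        totals[group] += 1
        if truth != pred:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Made-up predictions: group_b is misclassified far more often.
results = [("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1),
           ("group_b", 1, 0), ("group_b", 0, 0), ("group_b", 1, 0)]
print(error_rate_by_group(results))  # {'group_a': 0.0, 'group_b': 0.666...}
```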

Q: How do ethical concerns about healthcare AI compare to ethical concerns in other industries?

CC: In health, the stakes are very high. Many decisions literally have life-or-death consequences. Health is critical to people’s ability to function, and therefore affects how different groups in society become better or worse off. Moreover, individuals’ health records contain intimate information that requires special care to protect their privacy. In healthcare systems where access to adequate care depends on insurers and their methods for predicting and screening out potentially costly individuals, this privacy becomes even more critical. Understanding these individual and social aspects of health makes it clear that AI systems, which are designed to be deployed at scale, could have a systemic impact not only on individuals but across entire populations.

Q: Who in the healthcare ecosystem has a responsibility to address AI ethics — tech developers? Researchers? Healthcare providers?

CC: The answer is: all of them, and more. Ethical questions arise in all shapes and forms at every stage of healthcare and every stage of the research and development of AI systems. Each party faces different ethical questions, and all have a responsibility to raise these issues and seek solutions. Tech developers are the ones who can spot ethical issues in datasets and models as they develop them. Researchers are already bound by research ethics, and healthcare providers by medical ethics; they should see ethical issues related to AI systems as an extension of those ethical duties.

And let’s not forget the ethicists who are tasked with analyzing all these different ethical issues and providing guidance to tech developers, researchers and healthcare providers in determining their ethical options.

Q: How can these and other stakeholders ensure AI is being used ethically in healthcare? What needs to change?

CC: AI has neither more nor fewer ethical issues than other areas. It requires as much attention as other crucially important fields such as public health ethics and biomedical ethics. What is different about AI is the novel ways it integrates new methods into existing practices. Those who work in the fields of ethics and health need to become literate in AI models to assess their ethical issues as these systems become more integrated into healthcare. Healthcare providers and policymakers should think very carefully about the incentives around using AI systems.

We must also keep in mind that technologies that utilize AI systems, the internet of things and big data pose significant questions in relation to individual privacy, autonomy and fairness. It is of extreme importance that systems that are designed to help people do not end up disadvantaging them.
