AI and cybersecurity: How safe are your systems?

The artificial intelligence (AI) boom is here, and it’s not showing signs of slowing down any time soon. With a current market size of about $28 billion, healthcare AI is projected to exceed $180 billion in 2030.

While new technologies open exciting possibilities for our industry, those of us in the cybersecurity field know that they also introduce vulnerabilities and risks. As AI matures, healthcare organizations need to know these critical nuances in cybersecurity in order to employ it safely.

Understanding hacking methods

First, let’s examine some of the ways malicious actors can corrupt AI systems and access data. These include:

  • Prompt injection: This is a common attack method and typically targets generative AI systems like ChatGPT. Prompt injection involves injecting code into a prompt, or what the developer has created on the back end, which then “tricks” the AI and alters how it is supposed to run. As an example, let’s say a hospital had a symptom checker chatbot on its website and a patient entered symptoms of a stroke. A bad actor could feed the system a prompt so that the chatbot tells the patient the symptoms entered are indicative of the common cold.

  • Inference: With inference attacks, threat actors use the output of a machine learning (ML) model to draw conclusions (i.e., infer) about others’ sensitive information. Using that data, the bad actor can either deduce something about the individual (known as an attribute inference attack) or if the individual’s data was used in the training data for that ML model (known as a membership inference attack).

  • Extraction: Aptly named, for this method of attack hackers extract the data used to train an ML model. Training data may contain personally identifiable information, and the more specific the data is, the more valuable it is to the threat actor.

  • Poisoning: Poisoning occurs when a cyber threat floods an AI model with bad training data to manipulate its decision-making and output. To illustrate this tactic, imagine if a hospital’s business office used an AI program to flag issues with claims before they are submitted to the payer. If a bad actor poisoned the training data, the program might not flag opportunities for corrections.

These methods underscore why healthcare is a top target for cybercriminals: With just a few data points, they can do a lot of damage.

Securing your systems

So, what can healthcare organizations leveraging AI do to maintain their security posture? At a high level, adhering to basic security principles is key because every new solution implemented at an organization represents new risks—AI or otherwise. Adopting zero-trust principles and applying least privilege access are just a few ways to ensure select users are granted access to applications only as necessary and that those users are limited to the functions relevant to their roles. From an AI perspective, be sure to conduct a dynamic review and assessment of training data. Not only do you need to ensure the training data is accurate, but also that it is secure and being accessed by the AI system securely.

Additionally, organizations should do continuous data security analysis and conduct dynamic cybersecurity risk assessments of their AI systems to understand where their data is and how it flows to better protect it. While underutilized in healthcare, offensive cybersecurity practices can also help expose opportunities for improvement. For example, by conducting penetration testing (also known as “pen testing”), organizations can evaluate the extent of damage a bad actor can inflict if they were to breach an AI system. Similarly, with “red teams,” an internal group can simulate an attack to not only test the security posture of the AI, but also the vigilance of the team members tasked with detecting and responding to threats.

Finally, healthcare organizations should remain attuned to industry standards as they evolve alongside AI technologies. The National Institute of Standards and Technology (NIST), for example, has created many cybersecurity standards and frameworks for organizations across a variety of industries, and has released an AI Risk Management Framework. Organizations should also update AI systems as patches are released to fix known vulnerabilities. Cybersecurity is like a cat-and-mouse game. As technologists develop and publicize new protective measures, malicious actors devise ways to bypass these safeguards and the cycle continues.

Staying ahead of threats

At the end of the day, AI solutions are just another part of an organization’s technology portfolio, adding new avenues for malicious actors to achieve their goals. Hackers don’t care what software or hardware you have. They just want to breach systems and access data.

As AI expands in healthcare, Altera Digital Health will continue partnering with clients to put cybersecurity best practices in place as they evolve. We’re on a mission to elevate healthcare—and that includes securing the data your patients and providers rely on every day.

Copyright © 2024 Becker's Healthcare. All Rights Reserved. Privacy Policy. Cookie Policy. Linking and Reprinting Policy.

 

Articles We Think You'll Like

 

Featured Whitepapers

Featured Webinars