Can AI overcome healthcare's cybersecurity hurdles? 3 experts weigh in

Artificial intelligence and machine learning have been used for years to help hospitals and health systems combat cybersecurity threats, but with hackers growing more sophisticated, chief information security officers say the technology's algorithms and training sets will need to improve before AI becomes a vital tool in hospitals' information security programs. 

Cybersecurity has been top of mind for healthcare executives as ransomware gangs and hackers continue to target hospitals and health systems for their data. Thus far, AI and machine learning have been used to identify phishing emails, suspicious logins and anomalous behavior such as employee insider threats, but there is room for improvement. 
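
For a sense of what this kind of anomaly detection looks like at its simplest, the sketch below applies scikit-learn's IsolationForest to a few made-up login features (hour of day, recent failed attempts, new-device flag). The features, data and model choice are illustrative assumptions, not drawn from any vendor or hospital system mentioned in this article.

```python
# Illustrative only: flag anomalous logins with an Isolation Forest.
# Features and data are hypothetical, not from any hospital system.
import numpy as np
from sklearn.ensemble import IsolationForest

# Columns: hour of login (0-23), failed attempts in the last hour, new-device flag (0/1)
normal_logins = np.array([
    [9, 0, 0], [10, 1, 0], [14, 0, 0], [8, 0, 1], [16, 2, 0],
    [11, 0, 0], [13, 1, 0], [9, 0, 0], [15, 0, 0], [10, 0, 1],
])

model = IsolationForest(contamination=0.1, random_state=42)
model.fit(normal_logins)

# A 3 a.m. login with many failed attempts from a new device
suspicious = np.array([[3, 12, 1]])
print(model.predict(suspicious))  # -1 means the model scores it as an anomaly worth reviewing
```

In practice, production tools layer far richer signals on top of this idea, but the basic pattern is the same: learn what normal activity looks like, then surface what doesn't fit.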

"An area ripe for AI development in cybersecurity is the identification of written policy violations e.g. employees not following the employee handbook and written policies," said Gary Chan, system vice president and chief information security officer of St. Louis-based SSM Health. 

The identification of policy violations requires interpretation of written words, an understanding of the business and its processes, and the resources to audit, said Mr. Chan. But this is something AI cannot handle well today. 

"Being able to demonstrate compliance with policy is very helpful, not only for compliance reasons, but also for closing gaps that all organizations undoubtedly have today and may or may not have awareness of," said Mr. Chan. 

Steven Ramirez, chief information security officer of Reno, Nev.-based Renown Health, also said AI has been used for years to prevent known attack methods, but the technology can now identify emerging attacks using patterns it has learned over time.  

"AI's behavioral analysis can identify malicious activities in real-time and instantaneously respond to threats," said Mr. Ramirez. "AI gets smarter and improves its security measures continuously with time as it continues to monitor, learn and adapt to network and system traffic."

According to Mr. Ramirez, healthcare is saddled with expansive attack surfaces and aging operating systems, but with the aid of AI, hospitals and health systems can expedite their responses to threats.  

Cons of AI in healthcare cybersecurity

But AI in healthcare cybersecurity does come with pitfalls. 

"AI is interesting in and of itself as a topic. It sparks different visions of a future that put us in peril or positions of strength," said Jack Kufahl, chief information security officer at Ann Arbor-based Michigan Medicine.

According to Mr. Kufahl, AI has given hospitals and health systems the ability to detect and sort threat and vulnerability data, but many don't have the resources to interpret that data consistently. 

In addition, AI and machine learning tools still need to be validated by humans.

"AI should be used as a tool to aid in decision-making and should not be considered the 'final answer' without verification," said Mr. Chan. "Blindly treating the output of AI algorithms as being the right answer is dangerous and can lead to unintended consequences."

AI is also expensive; smaller organizations may not be able to invest in the technology. 

"AI is expensive and takes time to acclimate to an organization's environment," said Mr. Ramirez. "This requires a lot of man hours to properly tune false positives and ensure proper data is ingested to properly configure the machine learning and behavioral analysis."

Mr. Ramirez also said hackers are starting to "flip the script" on AI, using malicious activity to trick cyber AI into shutting down or interrupting elements of technical operations, effectively creating an internal DDoS attack.
