The cost of IT ‘alarm fatigue’: How machine learning can help improve accuracy, reduce risk, and boost productivity

Jack Danahy

In the medical field, alarm fatigue can be a serious issue that directly impacts patient care.

When roughly three-quarters of clinical alarms are false, it’s easy to see how this constant overload can desensitize nurses, physicians, and other clinicians. Many become so bogged down and frustrated with responding to every false alarm that they eventually begin to ignore them altogether.

While the stakes certainly aren’t life-and-death, healthcare IT security teams face a similar dynamic. Because today’s attackers are finding stealthier ways of executing and persisting on systems, security teams are investing in tools and services that give them increasingly granular visibility into the activity on their networks. Unfortunately, attack activity has become harder to distinguish from benign activity, thanks in part to the now commonplace practice of leveraging otherwise legitimate system tools and processes for malicious purposes. As a result, many security products (particularly those incorporating machine learning) are producing more and more false positive alerts.

Not only do these false positives create barriers to adopting and deploying new protection, they also distract and disrupt security teams. Every minute IT spends chasing false alarms is a minute pulled away from other potentially crucial work. Worse, research has shown that nearly a third of IT pros admit to being so desensitized by alarm fatigue that they simply ignore alerts. How exactly did we get to this point, and what can we do differently?

Wider protection isn’t always smarter protection
Machine learning has clearly become one of the most popular and powerful tools in security, yet thanks to an abundance of false positives, many current machine learning models force organizations to make an uncomfortable choice between playing it safe and getting their jobs done.

Many current machine learning models are biased toward malware. To understand why, we need to step back and take a closer look at how machine learning has been applied to endpoint security. Models are trained to discern between good and bad software, and then block the bad. Making that decision requires data. While gathering data on malware is fairly straightforward (it’s plentiful, and the threats are common to all organizations), gathering comparable data on goodware is often limited to well-known, packaged applications. In the case of custom solutions, gathering data is much more difficult. That blind spot is significant, as new tools have made it easier than ever for organizations to create or integrate their own applications. Because the default is to block what they don’t understand, many machine learning tools lump these good programs in with the bad, producing false positives and disrupting operations.
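To make that data-imbalance problem concrete, here is a minimal, purely illustrative sketch in Python using scikit-learn. The feature values are made up and merely stand in for whatever a real engine would measure; this is not any vendor’s actual model. It simply shows how a classifier trained on plentiful malware but only a narrow slice of well-known goodware can flag an unfamiliar in-house application as malicious:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Two hypothetical per-file features, e.g. "entropy" and "count of low-level
# system calls" (stand-ins for whatever a real product would extract).
goodware = rng.normal(loc=[0.3, 0.2], scale=0.05, size=(200, 2))   # narrow cluster of packaged apps
malware  = rng.normal(loc=[0.7, 0.8], scale=0.25, size=(2000, 2))  # plentiful, varied malware

X = np.vstack([goodware, malware])
y = np.array([0] * len(goodware) + [1] * len(malware))  # 0 = benign, 1 = malicious

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# A legitimate custom in-house app that doesn't resemble the packaged goodware
# the model was trained on.
custom_app = np.array([[0.55, 0.5]])
print(model.predict(custom_app))  # likely [1]: flagged as malicious, i.e. a false positive
```

The custom application is benign, but because nothing like it appeared in the goodware training data, the model’s safest guess is to call it malicious, which is exactly the false positive pattern described above.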

The problems with false positives

1. Productivity suffers. IT teams waste time tracking down false positives, but the real productivity loss is felt by individual end users. When preventative solutions block non-malicious code, users are unable to run the applications they need to do their jobs, and productivity slows to a crawl. In fact, over 40% of companies say they’ve seen a noticeable drop in productivity as a result of false positives. This bottleneck puts IT in the very undesirable position of hindering productivity rather than supporting it.
2. Urgency wanes. Like clinicians on the patient floors, once IT teams see more false positives than valid alerts, they begin to doubt the urgency or validity of any alert. That means they’re less likely to investigate, which could allow malware to run rampant. As many as 3 out of 10 IT pros admit they’ve ignored security alerts due to high rates of false positives, dramatically increasing the risk that malware will go unaddressed and giving it ample time to proliferate.
3. The costs keep growing. Companies now waste an average of 425 hours each week responding to false positives in their IT security system, costing them $1.37 million annually. That is an exorbitant amount of time and resources to pull away from worthwhile endeavors — like mounting a suitable defense against legitimate threats or implementing solutions that support business objectives.

The solution: Develop more organization-specific models
The ability to gather insight from new data and adapt is a critical component of any machine learning solution. But in this case, the models must also be able to adapt to their environment — namely the software profile of the organization in which they’re deployed.

In the same way machine learning models can be trained to recognize new malware, they must also be trained on the newest good software in use within the organization, including updates, custom solutions, and integrations. By training against the broadest samples of malware and the most relevant samples of goodware for healthcare organizations, machine learning security solutions can deliver a high degree of accuracy while maintaining low false positive rates and lowering the cost of protection.
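Continuing the hypothetical sketch from earlier (and reusing its rng, X, y, and custom_app variables), one simplified way to make a model environment-aware is to fold samples of the organization’s own known-good software into the benign class and re-fit. This is an illustration of the idea, not a description of any specific product’s training pipeline:

```python
# Building on the sketch above: add samples of the organization's own
# known-good software (updates, custom solutions, integrations) and re-fit.
# Values remain purely hypothetical.
local_goodware = rng.normal(loc=[0.55, 0.5], scale=0.05, size=(100, 2))  # in-house apps

X_local = np.vstack([X, local_goodware])
y_local = np.concatenate([y, np.zeros(len(local_goodware), dtype=int)])  # labeled benign

adapted_model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_local, y_local)
print(adapted_model.predict(custom_app))  # now likely [0]: recognized as benign
```

With even a modest sample of the organization’s own applications represented in the training data, the same file that was previously flagged is now likely recognized as good, without weakening the model’s coverage of the malware corpus.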

About Jack Danahy
Jack Danahy is the co-founder and CTO of Barkly, the Endpoint Protection Platform that delivers the strongest protection with the fewest false positives and simplest management. A 25-year innovator in computer, network and data security, Jack was previously the founder and CEO of two successful security companies: Qiave Technologies (acquired by Watchguard Technologies in 2000) and Ounce Labs (acquired by IBM in 2009). Jack is a frequent writer and speaker on security and security issues, and has received multiple patents in a variety of security technologies. Prior to founding Barkly, he was the Director of Advanced Security for IBM, and led the delivery of security services for IBM in North America.

The views, opinions and positions expressed within these guest posts are those of the author alone and do not represent those of Becker's Hospital Review/Becker's Healthcare. The accuracy, completeness and validity of any statements made within this article are not guaranteed. We accept no liability for any errors, omissions or representations. The copyright of this content belongs to the author and any liability with regards to infringement of intellectual property rights remains with them.
