Vox Media: AI can easily be fooled

Artificial intelligence algorithms are vulnerable to "adversarial attacks," which are inexpensive and easy to carry out, according to a report from Vox.

While many major tech firms are working to defend against these attacks, AI systems remain at risk across industries. The report cites a test by Tencent's Keen Security Lab in which researchers tricked a self-driving Tesla Model S into switching lanes simply by placing three stickers in a line on the road.

In healthcare, an adversarial attack on AI could take the form of medical fraud, whether malicious or ostensibly well-meaning. Vox gives the example of a physician who could defraud payers by adding an imperceptible layer of pixels to the image of a benign mole, causing an AI system to misread the mole as malignant and flag it for removal and reimbursement. Vox also offers a more "well-meaning" example: a physician who wants to prescribe painkillers but is blocked by a prescription authorization tool could circumvent the system by adding billing codes or phrases they know it will accept.
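The pixel-layer attack Vox describes is typically carried out as a gradient-based perturbation, such as the fast gradient sign method (FGSM). A minimal sketch on a toy linear classifier, with entirely hypothetical weights and data (not anything from the report; a real attack would target a trained neural network):

```python
import numpy as np

# Toy linear "classifier": score > 0 means "malignant", else "benign".
# Weights and the "mole image" are hypothetical, random stand-ins.
rng = np.random.default_rng(0)
w = rng.normal(size=100)          # model weights
image = rng.normal(size=100)      # flattened image pixels
# Nudge the image so the model scores it as benign (score = -1 exactly).
image = image - w * (image @ w + 1.0) / (w @ w)

def predict(x):
    return "malignant" if x @ w > 0 else "benign"

# FGSM-style perturbation: step each pixel slightly in the sign of the
# gradient of the score with respect to the input (for a linear model,
# that gradient is just w). The change per pixel is tiny (0.05).
eps = 0.05
adversarial = image + eps * np.sign(w)

print(predict(image))        # benign
print(predict(adversarial))  # flips to malignant for sufficiently large eps
```

The point of the sketch is that the perturbation is bounded per pixel and visually negligible, yet it shifts the model's score enough to flip the label, which is exactly the failure mode the mole-image example exploits.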

The full story is available from Vox.

 


Copyright © 2024 Becker's Healthcare. All Rights Reserved.

 
