That time our social media health data worked for artificial intelligence

Like footprints in the snow, we leave a distinct trail every time we use a digital device, one that follows us from website to website and from app to app.

While it would seem our time spent online—for fun or work—is our own, it turns out these digital footprints don’t serve us as much as they serve the artificial intelligence constantly looking over our collective shoulders and gathering that information for later use.

Each time we tap on a smartphone screen—which we each do an average of more than 2,600 times every day—the data we access is tracked and stored. (Eighty percent of our smartphone app usage is consumed by querying Google or interacting on Facebook.) Much of the information we leave behind on the internet is widely available to anyone with the interest and (moderate) computer skills to track it down. (And the ongoing uproar over the Facebook/Cambridge Analytica debacle demonstrates just how easy it is to have personal information shared far more broadly than users expect or want.)

Social media sites, where many of us share very personal details and fill out detailed user profiles, are mined for information that is later used to serve us ads. This isn’t a concern when the ad depicts a household appliance or a new car, but should we be concerned when advertisers start selling medications and treatments based on what we’ve posted?

Do Hashtags Equal #Depression?
A recent article in The New York Times reported on a slew of new and existing companies that would like to analyze our digital footprints—especially those found on social media—to make decisions about mental health, including depression. Artificial intelligence can monitor how fast you type, the tone of your voice on the phone, the kinds of photos you post and the hashtags you use to express emotions, and then extrapolate your state of well-being from that information.
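To make that concrete, below is a minimal, hypothetical sketch in Python of one such signal: scoring a user’s recent posts against a hashtag lexicon. The lexicon, weights and Post structure are all invented for illustration; a real system would learn its features from labeled data rather than a hand-tuned word list.

```python
# A hypothetical sketch of hashtag-based affect scoring. The lexicon and
# weights below are invented for illustration, not taken from any product
# or published model.

from dataclasses import dataclass

# Invented lexicon: hashtags loosely associated with negative/positive affect.
NEGATIVE_TAGS = {"#sad": 1.0, "#alone": 1.2, "#tired": 0.6, "#anxiety": 1.5}
POSITIVE_TAGS = {"#happy": -1.0, "#grateful": -1.2, "#fun": -0.8}

@dataclass
class Post:
    caption: str
    hashtags: list[str]

def affect_score(posts: list[Post]) -> float:
    """Average a crude per-post score; higher suggests darker content."""
    if not posts:
        return 0.0
    total = 0.0
    for post in posts:
        for tag in post.hashtags:
            total += NEGATIVE_TAGS.get(tag.lower(), 0.0)
            total += POSITIVE_TAGS.get(tag.lower(), 0.0)
    return total / len(posts)

recent = [
    Post("long week", ["#tired", "#alone"]),
    Post("beach day", ["#fun"]),
]
print(f"affect score: {affect_score(recent):+.2f}")  # positive = darker
```

Even this toy version shows why such inferences are fragile: a sarcastic #tired about a gym session scores the same as a genuine cry for help.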

Research from Arizona State University and Georgia Tech explores the images and hashtags used on social media, specifically Instagram (owned by Facebook), and how “dark” images may be very informative regarding mental health status. As part of the study, the researchers fed more than 2 million images into an artificial-intelligence engine to surface signs of apparently deteriorating mental health in specific users.

In other words (no pun intended), a picture (and a handful of hashtags) is worth more than a thousand words when it comes to attempting to diagnose mental health from afar.

Today, some social media outlets have already taken on the task—and potential risk—of monitoring users themselves, which is problematic since they are not healthcare professionals: “Although Instagram and other social media platforms have put in place some intervention policies to bring help to those users who engage in mental health disclosure, at best, they can be called ‘blanket’ strategies,” according to the ASU/Georgia Tech research. “This is because the interventions are neither tailored to the individual or the context, nor do they leverage nuanced and subtle cues manifested in shared content.”

Data Mining to Serve Ads
Today, as we travel from website to website, we’re served ads based on where we’ve been and what we’ve viewed. If you’ve looked at golf clubs on one website (as I often do), they’ll follow you as you surf, appearing as in-page ads on, say, the CNN news website. This is expected and is part of the implicit contract we’ve made with the internet: unfettered, free access to information, supported by an inundation of advertising.

But what happens when you’re not shopping websites but, rather, posting about a bad day at work on social media? The post is likely public (although users are becoming more aware of privacy issues), and while you likely remain the copyright holder, the social media site “owns” your post in other ways. As more companies mine social media data to publish relevant advertising, there’s a persistent risk in how personal information is used, especially when it comes to mental or physical health. (The ASU/Georgia Tech researchers, for example, accessed user data via Instagram’s readily available application programming interface, which included user bios from which they were able to extract additional detail.)
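As a schematic illustration of how much a public bio alone can give away, the snippet below pulls self-disclosed details out of profile text. The JSON shape and the regular expressions are assumptions made for this example; they do not reflect Instagram’s actual API schema or the researchers’ method.

```python
# A schematic sketch of extracting self-disclosed health details from a
# public bio. The payload shape and patterns are invented for illustration.

import re

# Hypothetical profile payload; NOT Instagram's actual API response format.
profile = {
    "username": "example_user",
    "bio": "Coffee lover. Fighting depression since 2016. She/her. NYC",
}

# Crude, invented patterns for self-disclosed conditions and years.
DISCLOSURE = re.compile(r"\b(depression|anxiety|bipolar|ptsd)\b", re.IGNORECASE)
YEAR = re.compile(r"\b(?:19|20)\d{2}\b")

conditions = DISCLOSURE.findall(profile["bio"])
years = YEAR.findall(profile["bio"])
if conditions:
    print(f"{profile['username']}: self-disclosed {conditions}, years {years}")
```

A few lines of pattern matching are enough to turn an offhand line of bio text into a structured, targetable health attribute.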

Ethical or not, the logical next step in the data mining sphere seems to be selling culled information to companies that will use it to, for example, place ads for prescription drugs in the social media feeds of, or on the websites visited by, users whose photos or posts suggest depression. This, of course, again raises the issue that these companies are not healthcare providers and the suggested medications may be contraindicated or unnecessary.

And just last week, CNBC revealed Facebook was in talks with several healthcare organizations to acquire de-identified patient data and combine it with Facebook’s own data through a process called “hashing,” which allows two otherwise unrelated data sets to be joined so that people appearing in both can be matched. Although the matched person technically remains anonymous, Facebook or another organization would have enough information to market healthcare products to a specific person.
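Here is a minimal sketch of how that kind of hash-based record linkage works. The choice of email as the matching key and the normalization steps are assumptions for illustration; the actual fields Facebook and its partners discussed were not made public.

```python
# A minimal sketch of hash-based record linkage. Each party hashes its own
# identifiers, so only digests (not raw identities) are ever compared.

import hashlib

def hash_identifier(value: str) -> str:
    """Normalize an identifier and return its SHA-256 hex digest."""
    return hashlib.sha256(value.strip().lower().encode("utf-8")).hexdigest()

# Hypothetical "de-identified" data sets, keyed by hashed email.
hospital_records = {hash_identifier("jane@example.com"): {"condition": "migraine"}}
platform_users   = {hash_identifier("Jane@Example.com "): {"user_id": 1234}}

# Matching digests link the two data sets person by person.
for digest in hospital_records.keys() & platform_users.keys():
    print(digest[:12], hospital_records[digest], platform_users[digest])
```

The point of the example is the one the article makes: neither data set names anyone, yet the join reconstructs a person-level link between a medical condition and a platform account.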

Will Advertising Dollars Force the Issue?
So should healthcare or pharmaceutical companies serve us ads based on the photos, hashtags and comments found in our social media posts? The answer will likely land somewhere between “yes” and “no.” The issue will only grow as advertisers continue to flood the internet with money: internet ad spending is predicted to reach $119 billion in 2021. There undoubtedly will be a desire to serve these types of ads to healthcare consumers.

Very few people are prepared to go cold turkey and stop using the internet altogether, so users will need to decide how—or if—they want to interact with the artificial intelligence that keeps an eye on their online movements. How far are we prepared to go to protect our privacy?

It will take a combination of social media companies, healthcare organizations, data brokers and, most importantly, social media and internet users coming together to rein in potential misuse and formulate acceptable uses of this data. In the meantime, it remains important that internet users be circumspect when posting health information to social media, lest the data be misinterpreted by an algorithm: Artificial intelligence should be used to help us in our daily lives, not make them more complex.

Phil Walsh is Chief Marketing Officer for Healthcare and Life Sciences at Cognizant, a Fortune 500 company.
