In fact, we should probably get rid of the term “artificial intelligence” because it just causes problems.
Why does AI scare some people?
Let’s say we could define human intelligence by putting a dot on a chart and saying, “This is average human intelligence.” We’re going to build machines that are going to get closer and closer to that dot.
We could put the dot all the way out at the smartest human who has ever lived, or even the smartest human who could ever possibly live. But the machines will eventually get there and then blow past it. We will actually achieve “artificial intelligence” for a few seconds, and then we’ll be in a totally different realm.
If the only goal of AI is to hit that target, then we put ourselves at significant risk of not having thought through the ethics of what happens when the AI we’re building is not responsive to us in the ways we would like it to be.
Why is augmented intelligence a better way to think about AI?
If we shift to this notion of augmented intelligence, we instantly feel more comfortable because we’ve been using machines to augment human capabilities for a long time.
The big revolution of the steam engine was that we humans were no longer reliant on human muscle, or even horse or ox muscle, to move things. We were able to provide much more power, to move much larger loads, much faster, and much more consistently.
If we view AI as augmenting our intelligence, capability, and humanness, then it doesn’t matter how smart it gets, even if it goes beyond where we ever thought it could go.
If we design and build AI that way, then we will have much less to fear. If it doesn’t replace us but rather makes us better, then we avoid the existential issues that some people worry about.
So the history of technology has always been about how we augment humans and replace limited human capabilities. But that means we have to identify the limitations we have. And admitting our own limitations can be scary.
What limitations do humans have that AI can improve on?
As humans, we are limited in the speed with which we can perform a task and in the amount of information we can process. There’s also an enormous amount of inconsistency with humans. If you give someone the same exact task in the morning, afternoon, and evening, you’ll get different outcomes. Or if you give two different people the same exact task, you’ll get different outcomes.
One of the promises of AI is it can go orders of magnitude faster than we can, take in orders of magnitude more data than we can, and be designed in such a way that it will do exactly the same thing every time.
Furthermore, each human comes with a set of biases. But if we build AI correctly, it won’t judge a book by its cover. Rather it will skip past the cover, open the book, read every single word of every single page, and make a judgment based on the totality of knowledge about that individual. And it will do this at exabyte scale and nanosecond speed.
What limitations does AI have?
We all know that when you call a business, you’re likely to be greeted by an automated phone tree. But none of us like phone trees. We don’t feel like we’re getting better service because of them. And if you know that only a human can answer the complex question you have, a phone tree becomes enormously frustrating.
If you focus on AI as an algorithm solely for automating, you might end up with the unintended consequence of having fewer people to talk to and collaborate with. If your customers and patients are human, this is a bad thing.
Humans have fundamental needs for connection, compassion, and care, and a need to feel heard and understood. This is something that AI won’t be able to do well for a long time.
How can we use AI to augment human-centered care?
Think of AI as an application to amplify, augment, and allow humans to do their jobs better, and to remove the drudge work from what they do. The value in AI doesn’t come from reducing the number of people involved. The value comes from finding ways to help people interact better and provide better care for the individual.
Imagine, for example, an AI application that identifies a change in a patient’s disease state. The AI locates the first 30-minute block of time that the nurse, primary-care physician, and specialist can all meet, then sets up a video conference for them. In the meantime, it gathers the patient’s relevant information along with the latest medical evidence and issues a recommendation for the team to review and discuss. It packages all of this up and attaches it to the meeting invite, so it’s ready for the team to evaluate and act on.
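The scheduling step in that scenario can be sketched in plain code. Below is a minimal Python sketch, assuming each provider’s calendar is exposed as a list of free time windows; all function names and data structures here are hypothetical illustrations, not an actual Optum system:

```python
from datetime import datetime, timedelta

def find_common_slot(availabilities, duration=timedelta(minutes=30)):
    """Return the earliest start time at which every provider has a free
    window of at least `duration`, or None if no such time exists.
    Each item in `availabilities` is one provider's list of (start, end)
    free windows."""
    # Candidate start times: the start of every free window, earliest first.
    candidates = sorted(start for windows in availabilities
                        for start, _ in windows)
    for start in candidates:
        end = start + duration
        # Keep the slot only if it fits inside a free window
        # for every provider on the care team.
        if all(any(w_start <= start and end <= w_end
                   for w_start, w_end in windows)
               for windows in availabilities):
            return start
    return None

def build_invite(patient_id, slot, attendees, summary):
    """Package the meeting time, attendees, and the AI-gathered clinical
    summary into a single invite record (hypothetical structure)."""
    return {
        "patient": patient_id,
        "starts": slot,
        "attendees": attendees,
        "attachment": summary,
    }

# Hypothetical free/busy calendars for the three providers.
day = datetime(2024, 5, 6)
nurse = [(day.replace(hour=9), day.replace(hour=10))]
pcp = [(day.replace(hour=9, minute=30), day.replace(hour=11))]
specialist = [(day.replace(hour=9), day.replace(hour=12))]

slot = find_common_slot([nurse, pcp, specialist])
invite = build_invite("patient-123", slot,
                      ["nurse", "pcp", "specialist"],
                      "AI-compiled chart summary and recommendation")
print(slot)  # earliest time all three are free: 09:30 on 2024-05-06
```

The point of the sketch is only that this kind of coordination is routine, automatable work; the clinical judgment stays with the care team reviewing the attached summary.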
The providers aren’t replaced by the AI; rather, freed from mundane and costly administrative tasks, they are better able to think about the patient’s individual needs and provide the compassion and care they want to.
Is AI coming for our jobs?
There’s no doubt we will build AI that will be more efficient and more effective at certain tasks. But if we just look at AI as a way to automate or remove expensive humans, we lose out on the real value of humanity.
We shouldn’t build AI just to speed things up. We should build AI to overcome the flaws, limitations, or barriers in the current system that are difficult for us to overcome on our own. We should improve speed, volume, consistency, communication, coordination, and collaboration where we know there are already problems. This gives back more time to the providers, payers, clinicians, and call-center representatives to interact with patients and members on a human level.
We can help providers look their patients in the eyes and have a human interaction with them. In healthcare, that’s probably the most important thing we can do. Most physicians, most providers, I believe, become physicians and providers because they want to provide care. Not because they want to sit down all day and look at charts and scans and EMRs.
So no, AI is not coming for our jobs. It’s coming to release us from the drudgery of our jobs so we can do those parts of our job that we really enjoy and love doing the most.
Learn more about artificial intelligence, machine learning, and data science in health care at optum.com/cio
Marc Paradis is vice president and dean of Data Science University at Optum. Since he founded DSU in 2016, it has grown from 12 students to 1,200 global participants, with multiple programs and colleges. He has degrees from MIT and Cornell and has been leading data-science teams and projects for more than 18 years.
Optum is a leading health services and innovation company dedicated to helping make the health system work better for everyone. Optum combines technology, data and expertise to improve the delivery, quality and efficiency of healthcare. Hospitals, doctors, pharmacies, employers, health plans, government agencies and life sciences companies rely on Optum services and solutions to solve their most complex challenges.