Becker's Hospital Review

May 2019 Becker's Hospital Review

Issue link: https://beckershealthcare.uberflip.com/i/1115575


AI, ethics and healthcare: How to resolve the conflict by augmenting our intelligence

By Marc Paradis, VP and Dean of Data Science University at Optum

We're living in a world where today's science fiction will be tomorrow's science fact. But moving forward, we need to think of AI as "augmented intelligence." In fact, we should probably get rid of the term "artificial intelligence" because it just causes problems.

Why does AI scare some people?

Let's say we could define human intelligence by putting a dot on a chart and saying, "This is average human intelligence." We're going to build machines that get closer and closer to that dot. We could put the dot all the way out at the smartest human who has ever lived, or even the smartest human who could ever possibly live. But the machines will eventually get there and then blow past it. We will actually achieve "artificial intelligence" for a few seconds, and then we'll be in a totally different realm.

If the only goal of AI is to hit that target, then we put ourselves at significant risk of not having thought through the ethics of an AI that is not responsive to us in the ways we would like it to be.

Why is augmented intelligence a better way to think about AI?

If we shift to this notion of augmented intelligence, we instantly feel more comfortable, because we've been using machines to augment human capabilities for a long time. The big revolution of the steam engine was that we humans were no longer reliant on human muscle, or even horse or oxen muscle, to move things. We were able to provide much more power, to move much larger loads, much faster and much more consistently.

If we view AI as augmenting our intelligence, capability and humanness, then it doesn't matter how smart it gets, even if it goes beyond where we ever thought it could go. If we design and build AI that way, then we will have much less to fear.
If it doesn't replace us, but rather makes us better, then we avoid the existential issues that some people worry about. The history of technology has always been about how we augment humans and replace limited human capabilities. But that means we have to identify the limitations we have, and admitting our own limitations can be scary.

What limitations do humans have that AI can improve on?

As humans, we are limited in the speed with which we can perform a task and in the amount of information we can process. There's also an enormous amount of inconsistency with humans. If you give someone the same exact task in the morning, afternoon and evening, you'll get different outcomes. Or if you give two different people the same exact task, you'll get different outcomes.

"One of the promises of AI is it can go orders of magnitude faster than we can, take in orders of magnitude more data than we can, and be designed in such a way that it will do exactly the same thing every time." – Marc Paradis, VP and Dean of Data Science University at Optum
