by GRACE MOEN – Content Writer for Health 2.0 covering news, trends, and reflections.
Let’s get one thing straight right away: the words themselves. “AI” and “Machine Learning” are buzzy terms, and in the storm of tweets and scandals it’s easy to use them interchangeably, as if they were synonyms. They’re not, but if you’re guilty of using them that way, don’t worry, you’re only a little wrong. According to Gurjeet Singh, CEO of Ayasdi, quoted in a recent Healthcare Analytics News article, Machine Learning is “a set of statistical algorithms in computer science whose primary function is to learn from data,” while “the fundamental aim of AI is to mimic what people do using software.” We’ll come back to this human element later, but in the meantime remember this: all Machine Learning is AI, but not all AI is Machine Learning.
Examples of Machine Learning in everyday life abound, and for all the recent attention aimed at the behemoth that is Facebook (and its epic failure of data protection and privacy), the positive outputs may seem a little blurry right now. What I mean is that the benefits and convenience of Machine Learning generally outweigh the bad. Take Amazon’s Alexa, which will answer any question or play any music at the sound of your voice. Or Google Maps and Waze, which give us traffic predictions or, my favorite, the “cop car ahead” notification. Intelligent online shopping means that if we have to see ads, at least they’re relevant, and let’s not forget the power of the spam filter! (You may be thinking that there is a cost to all this convenience, and you’d be right. I often think of Machine Learning and AI as they relate to the age of explicit consent we’re living in. That’s a different, though highly related, topic we’ll save for another day and another article.) As it relates to healthcare, though, there is a strong value proposition for Machine Learning and Artificial Intelligence in Pharma R&D, Clinical Decision Making, Global Health, and beyond.
While Machine Learning is about machines learning from data, Artificial Intelligence is about augmenting the human experience; by definition, it cannot be achieved without the participation of a human. Enter the Human Diagnosis Project, one example of (hu)man-machine collaboration. Its aim is to gather information from 7,500 doctors and 500 medical groups in over 80 countries “to develop a system that anyone — patient, doctor, organization, device developer, or researcher — can access in order to make more informed clinical decisions” (“Your Future Doctor May Not Be Human”). Informed clinical decision making at this scale would simply not be possible by machine alone; it requires human knowledge, bridges in understanding, and real-life experience input into a system and mathematized.
Now, charging a math equation with healthcare decisions is a slippery slope. Did you see the article “What Happens When an Algorithm Cuts Your Healthcare” that I shared in last month’s What We’re Reading? It features one woman’s tale of having her in-home nursing hours cut by more than half, posing a major threat to her well-being, independence, and survival. In this case, as in many others, the machine was unable to quantify the qualitative. What was missing was the human element to account for nuances the algorithm just can’t see. Yet.
Similarly, Maneesh Juneja asks, “How do we ensure these algorithms are fair, ethical and transparent?” in his recent article for the Future Health Index. “The UK government wants to set up a ground-breaking Centre for Data Ethics and Innovation, while India is getting a new research institute, Wadhwani AI, which has a mission of harnessing AI for social good.” Facebook, meanwhile, announced earlier this year that its algorithm can identify people at risk of suicide, well before imminent self-harm occurs. It’s hard to argue with good intentions like these, especially if they prove effective. As with any innovation, there is no perfect solution. It will be up to us, the people, to secure a future in which Machine Learning is helpful, not harmful.
The opportunities for Machine Learning and Artificial Intelligence to make a positive impact are numerous. The global demand for physicians and specialists is projected to outstrip the supply by 15 million health workers by 2030, and virtual or bot doctors could fill the gap. Specialists, such as those in mental and behavioral health, can be especially hard to come by in rural or hard-to-reach corners of the globe; again, virtual care delivery can supplement this workforce. Machines can also detect disease before either the physician or the patient is aware of symptoms.
You may not love it, but you certainly can’t leave it. Machine Learning and Artificial Intelligence are here and will only become more pervasive. Hopefully, the way we love it when Alexa plays our favorite song or Netflix suggests a new binge-worthy show will be the way we love a diagnosis from a bot doctor when we can’t get to a real one.
Ready for more? You can get it at Dev4Health, Health 2.0 and HIMSS’s upcoming developer conference, April 30-May 1 in Cleveland, OH. We even have an entire session on the topic: Beyond the Hype and Into the Hope: AI and Machine Learning, moderated by Shahid Shah, everyone’s favorite Health IT commentator. Shahid will sit in conversation with Anil Jain (IBM), Leland Brewster (Healthbox), Brian Kolowitz (UPMC Enterprises), Brian Maples (Cardinal Analytics), and Anthony Chang (CHOC) to discuss practical applications, the value proposition for investments, and the future. Join us – get your ticket today!