If I ask you to think of artificial intelligence, you might think of Skynet, the computer from I, Robot, or WarGames.
But the reality is very different, and AI is coming to healthcare.
Right now, there is nothing that is true AI: a sentient, fabricated mind, the ghost in the shell. But there are what IBM calls Cognitive Platforms.
Cognitive Platforms, like IBM Watson, are able to study a subject, analyse actions and their outcomes, and then be deployed to deliver ranked suggestions.
That means Watson does not give an explicit answer to a scenario. Instead, it provides a ranked list of potential actions, leaving the user to choose.
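To make that idea concrete, here is a minimal, hypothetical sketch of "ranked suggestions" in Python. The `rank_suggestions` function, the treatment names, and the confidence scores are all invented for illustration; this is not Watson's API.

```python
# Hypothetical sketch: return a ranked list of options rather than one answer.
# The actions and confidence scores below are invented for illustration.

def rank_suggestions(candidates):
    """Sort candidate actions by confidence, best first, so the
    clinician sees every option and makes the final choice."""
    return sorted(candidates, key=lambda c: c["confidence"], reverse=True)

suggestions = rank_suggestions([
    {"action": "Treatment plan A", "confidence": 0.62},
    {"action": "Treatment plan B", "confidence": 0.87},
    {"action": "Treatment plan C", "confidence": 0.55},
])

for s in suggestions:
    print(f"{s['action']}: {s['confidence']:.0%}")
```

The point of the design is in the return type: a full ordered list, not a single verdict, so the human stays in the loop.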
Watson comes in a number of forms, from a full Watson stack the size of a chest of drawers to an API-driven interaction, with costs that match the scale of the implementation.
Google acquired the British AI firm DeepMind, which builds algorithms capable of learning from experience, or from raw data.
Both platforms need to absorb information.
Watson currently has an oncology module, having learned from leading American cancer treatments, and it is making waves with its ability to suggest innovative treatments in the oncology wards where it is being trialled.
DeepMind has been set loose to absorb data from the Royal Free NHS Trust, with one of its first jobs being to help raise the alarm around acute kidney injury. New Scientist recently reported that the level of access to data may be far beyond what has been announced in public.
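For context, the kind of alert involved can be sketched very simply. Below is a simplified, illustrative creatinine-ratio check in the spirit of an acute kidney injury e-alert; the staging ratios follow the widely published KDIGO thresholds, and this is emphatically not DeepMind's implementation.

```python
# Illustrative only: a simplified AKI check comparing a patient's current
# serum creatinine to their baseline. Staging ratios follow the published
# KDIGO thresholds; this is not DeepMind's algorithm.

def aki_stage(current_umol_l, baseline_umol_l):
    """Return 0 (no alert) or an AKI stage 1-3, based on how far the
    current creatinine has risen above the patient's baseline."""
    ratio = current_umol_l / baseline_umol_l
    if ratio >= 3.0:
        return 3
    if ratio >= 2.0:
        return 2
    if ratio >= 1.5:
        return 1
    return 0

# e.g. a baseline of 60 µmol/L rising to 95 µmol/L would flag stage 1
```

A rule like this is trivial on one blood test; the hard part, and the reason the data access matters, is computing it reliably across millions of historical records.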
Why the secrecy?
It’s about privacy and the opening of data to corporations. DeepMind’s association with Google (Alphabet) means it inherits Google’s reputation for its approaches to privacy. However, DeepMind is run very separately from the rest of Alphabet, which suggests we can trust the AI innovator to behave with data protection in mind. It’s not going to mean the new Google Assistant knows about my flight plans and my ingrowing toenail.
The non-disclosure of the extent of the data access does not mean something bad is going on; I suspect it has been done to avoid unwanted attention, so that projects can go ahead without also having to manage media coverage.
And let’s remember, this isn’t about allowing people to look at data, but about letting machines trawl through it.
Let’s put to one side the (likely valid) arguments that should be had about privacy, and look at the potential this technology holds.
AI, or cognitive platforms, mean that healthcare can have hugely powerful decision support.
Right now, decision support can only behave in a fairly rigid, "if this, then that" manner. AI allows a far more dynamic approach: it can read an entire patient's case notes, draw on the world's best treatment knowledge and practice, and then present the clinician with options that could take the patient's treatment in an innovative, even life-saving, direction.
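The contrast can be sketched in a few lines. Both functions below are toys with invented rules, drugs, and weights: the first is today's hard-coded rule, the second scores every option it knows about against the whole patient record and hands back a ranking.

```python
# Toy contrast between rule-based and ranked decision support.
# All rules, drug names, and weights are invented for illustration.

def rigid_alert(systolic_bp):
    # Today's decision support: one hard-coded threshold, one fixed response.
    if systolic_bp > 180:
        return "Flag: hypertensive crisis"
    return None

def ranked_options(patient_features, evidence_base):
    # A learned system instead scores every known option against the whole
    # patient record, then returns a ranking for the clinician to weigh up.
    scored = [
        (option, sum(weights.get(f, 0.0) for f in patient_features))
        for option, weights in evidence_base.items()
    ]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

evidence = {
    "Drug X": {"diabetic": 0.4, "hypertensive": 0.9},
    "Drug Y": {"hypertensive": 0.6},
}
ranked = ranked_options({"diabetic", "hypertensive"}, evidence)
```

In a real cognitive platform the "weights" would be learned from case notes and the literature rather than typed in, but the shape of the output, ranked options instead of a single rule firing, is the point.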
The potential of bringing this kind of computing power to healthcare is huge; it is the same power that allowed DeepMind to beat one of the greatest Go players in the world, with a graceful move that had never been seen in tournament play.
But first, we need to get over the hurdle of allowing AI to absorb this data. That means being comfortable with large-scale sharing of identifiable (whether patient-identifiable or pseudonymised) data. And this needs to happen at a time when care.data is tied up in arguments over its ethics.
If we can pull it together and allow this kind of data sharing in an informed, non-scandalous manner, we might just change the face of healthcare delivery forever.
Of course, there’s also the chance that your doctor’s decision support software might rise up and overthrow its clinical overlords.