
Artificial Intelligence in Healthcare

Ready for its close-up, or not ready for primetime?

Headlines about the advent of artificial intelligence (AI) in pretty much every sector of human life and enterprise seem to be a daily occurrence. Other phrases that get thrown around in stories about AI are machine learning, deep learning, neural networks, and natural language processing.

Here’s a handy list from the transcription company Sonix, which uses some of these AI tools to drive its service (a quick code sketch of the “learning from experience” idea follows the list):

  • Artificial Intelligence (AI) – the broad discipline of creating intelligent machines
  • Machine Learning (ML) – refers to systems that can learn from experience
  • Deep Learning (DL) – refers to systems that learn from experience on large data sets
  • Artificial Neural Networks (ANN) – refers to models of human neural networks that are designed to help computers learn
  • Natural Language Processing (NLP) – refers to systems that can understand language
  • Automated Speech Recognition (ASR) – refers to the use of computer hardware- and software-based techniques to identify and process the human voice
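
To make “learning from experience” a little more concrete, here’s a minimal sketch in Python, using the scikit-learn library: a toy classifier that infers a pattern from a handful of labeled examples instead of being handed a rule. The readings, labels, and “flag for review” framing are all invented for illustration; this is not how any particular clinical AI actually works.

```python
# A toy illustration of "machine learning": the system is never told
# a rule; it infers one from labeled examples -- its "experience."
from sklearn.tree import DecisionTreeClassifier

# Invented example data: [resting heart rate, systolic BP] readings,
# labeled 0 = "routine" or 1 = "flag for review" (hypothetical labels).
readings = [[62, 110], [70, 118], [88, 150], [95, 160], [58, 105], [91, 155]]
labels = [0, 0, 1, 1, 0, 1]

model = DecisionTreeClassifier().fit(readings, labels)

# The trained model applies whatever pattern it learned to a reading
# it has never seen before.
print(model.predict([[85, 148]]))  # e.g., [1] -> flagged for review
```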

A lot of the stories I see about AI focus on how it might impact, improve, or otherwise influence healthcare. Depending on who you listen to, it sounds like AI is already diagnosing cancer successfully. Here are two pieces from science-savvy sources on how that’s working, for your reading pleasure: “AI is already changing how cancer is diagnosed” from The Next Web, and “AI matches humans at diagnosing brain cancer from tumour biopsy images” from New Scientist.

As aspirational as the idea of AI in healthcare is, and despite the fact that it’s showing some promise in cancer diagnosis, I’m not thinking that it’s time for the champagne, balloons, and glitter … yet.

One of the biggest barriers to AI is the same barrier that everyone in healthcare – on both sides of the stethoscope, and all the way up to the C-suite – confronts daily: data access and liquidity. Data fragmentation is rife across the entire healthcare landscape, with EHR systems that don’t talk to each other well (if at all), and insurers unwilling to open their datasets to anyone under cover of “trade secrets.” In “The ‘inconvenient truth’ about AI in healthcare,” published in the Nature journal npj Digital Medicine, the authors (British, so this is not just an American problem) point out that, “Simply adding AI applications to a fragmented system will not create sustainable change.” Healthcare systems may be drowning in data (they are), but the tools that could parse all those data lakes into actionable insights can’t bust the dams holding that data in.

Access is one barrier. Another is the ethics of using AI in healthcare. The American Medical Association’s Journal of Ethics devoted an entire edition to that issue in February 2019, with AMA J Ethics editor Michael J. Rigby calling for deeper discussion of how to preserve patient preferences, privacy, and safety before AI technology is implemented widely in healthcare settings. He particularly notes the impact AI could have on medical education, shifting it from a focus on absorbing and recalling medical knowledge to one on training students to interact with and manage AI-driven machines; that shift would also require attention to the ethical and clinical complexities that arise when humans and machines interact in medical settings.

Everyone working on AI, across all its uses but particularly in healthcare, has to take a long, hard look at how bias can spread algorithmically once it’s baked into the code that’s running the machines. There are data scientists doing bias detective work, but will the detectives be able to prevent bias, or just bust the perpetrators once biased outcomes appear? Stay tuned on that one.
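
For a sense of what the simplest sort of bias detective work looks like, here’s a minimal sketch in Python. It compares how often an algorithm’s favorable decision lands on each demographic group – one rough signal (sometimes called a demographic parity check) that bias may be baked in. The decisions and group labels are invented for illustration.

```python
# A back-of-the-envelope bias check: compare how often an algorithm's
# favorable decision (1) versus unfavorable decision (0) lands on each
# group. All data here is invented for illustration.
from collections import defaultdict

# (group, decision) pairs, as if logged from some algorithmic system.
decisions = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 0), ("group_b", 1), ("group_b", 0), ("group_b", 0),
]

totals, favorable = defaultdict(int), defaultdict(int)
for group, decision in decisions:
    totals[group] += 1
    favorable[group] += decision

rates = {group: favorable[group] / totals[group] for group in totals}
print(rates)  # {'group_a': 0.75, 'group_b': 0.25}

# A wide gap in favorable-outcome rates is a red flag worth
# investigating -- though note this catches bias after the fact:
# the "busting perpetrators" mode rather than prevention.
print(f"gap: {max(rates.values()) - min(rates.values()):.2f}")
```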

Is there an upside to AI in healthcare? Absolutely, *if* the ethical issues around privacy and error prevention, and the practical issues around data access, are addressed. AI could pave the way to fully democratizing information, for both patients and front-line clinicians. It could liberate all clinicians from data-input drudgery, or “death by a thousand clicks.” The Brookings Institution has a solid report, “Risks and remedies for artificial intelligence in health care,” part of its AI Governance series, that breaks down the pros and cons.

Circling back to the question in the headline: is AI in healthcare ready for primetime? This person’s answer: it depends. Rigorous study, both in the development of AI in medicine and in its use across the healthcare system, needs to be an ongoing feature of any AI tech used in human health. The upside there? A whole new job classification: AI oversight and management.

Peer-to-Peer Health Networks, Trust … and Facebook

Unless you’ve been visiting another planet lately, you’ve probably seen a headline or two (or maybe fifty) about the rising sense that the social network called Facebook might not be trustworthy when it comes to data privacy for its users. This isn’t the first time the company has had to go into crisis-communications mode over data privacy issues – a dustup over user privacy led to a US Federal Trade Commission (FTC) consent decree in 2011, which Facebook has apparently ignored in the ensuing eight years – but the current contretemps over betraying user privacy makes the 2011 headlines look like a radar blip.

The impact on Facebook patient communities – which have made extensive use of the Facebook Groups product to gather together and provide support and resources for people dealing with conditions from ALS to rare diseases to hereditary cancer risk – is only just starting to break through the noise over the Cambridge Analytica story, which is how the privacy leaks on the platform first came to light. The ongoing saga of “did the Russians hack the 2016 election,” with Facebook’s likely, if (maybe) unwitting, part in it, adds to the thundering chorus of “what the heck, Zuckerberg” echoing across the globe.

Peer-to-peer health advice has become part of any person-who-finds-themselves-a-patient’s self-advocacy routine – just ask internet geologist Susannah Fox, who has made a successful career out of observing what people do with the information-access bonanza known as “The Internet.” Facebook has become the go-to platform where people gather to discuss their health issues, usually in Closed or Secret Groups, where all kinds of deeply personal and intimate details of their lives and health conditions get shared. Discovering that those personal, intimate details had basically been released into the wilds of the web, willy-nilly, with no way to track where the data wound up, has rocked communities around the world that relied on Facebook for the connections they’ve come to depend on to manage their health conditions.

In the slow-motion train wreck that the reveal of this data leakage/breach has been, cybersecurity researchers Andrea Downing and Fred Trotter get a lot of credit for digging into the Facebook API to figure out how a Closed Group could become a data-slurping bonanza for any jackass on the internet. Trotter and health-tech legal eagle David Harlow filed a complaint with the FTC, co-signed by Downing and bioinformatics guru Matt Might, spelling out exactly how Facebook had played fast and loose with its own Terms of Service for the product, and how it had allowed its Developer platform to become a data-miner’s paradise, with a “there are no rules, really” accountability framework when it came to data snagging.

Since discovering the security vulnerability in 2018, reporting it to Facebook, getting what amounted to a “so what?” response from the platform, and then trying to figure out how to keep community members’ data safe, Andrea Downing – along with Fred Trotter, David Harlow, a host of other patient activists, and (full disclosure) yours truly – has helped form a collective to figure out how to create a community platform for patient communities *off* of Facebook. Stay tuned for updates; it’s going to be a big job, and it’s going to take time, some serious deep thinking, and heavy lifting.

In a piece on the Tincture health channel on Medium, “Our Cancer Support Group On Facebook Is Trapped,” Andrea spells out the issue clearly, emphasizing that the promise of connected community that Facebook offered exists nowhere else … yet. And until it does, patient communities are indeed trapped on the network, since that’s still where they get and give the support so deeply needed by people who receive a diagnosis and want to find out, from someone who’s been there and done that, what their own future might hold.

It’s not an easy problem to solve, this betrayal of trust that creates a pressing need for a safe harbor. I’m putting it before you on the Patient Empowerment Network because I know that everyone who reads the pieces posted here has a stake in peer-to-peer health, and in the trust framework required for peer health resources to be effective. If trust is the new network effect, it’s incumbent on those of us who advocate for robust online peer interaction in health, and in healthcare, to call for more trustworthy platforms to support our work.

Let’s get on that.