Decoding AI in Cancer Care: Can Patients Trust It?


In Episode 3 of our DECODE podcast, Dr. Virginia Haoyu Sun of Harvard Medical School dives into one of the most critical conversations in modern cancer care: how to use artificial intelligence safely, responsibly, and ethically. She speaks to how bias enters AI systems, what trust in digital health tools really means, and how patients and care partners can thoughtfully engage with AI without replacing human judgment.

From protecting privacy to understanding algorithmic limitations, Dr. Sun equips patients and care partners with practical guidance to stay informed and empowered as AI becomes an increasingly influential part of their care journey.

Transcript

Rachel Siddall:

Hi, I’m Rachel Siddall, your host for this Patient Empowerment Network DECODE program. Throughout this series, we’ve talked about what artificial intelligence is and how it’s already being used in cancer care. In this episode, we’re turning to an equally important conversation: how to use AI safely, responsibly, and ethically. We’ll discuss bias, trust, how patients should engage with AI tools, and what the future of AI in cancer care should look like. Joining me is Dr. Virginia Sun from Massachusetts General Hospital and Harvard Medical School. Dr. Sun, it’s such a pleasure to be connecting with you on this important topic.

Dr. Virginia Haoyu Sun:

It’s my pleasure, Rachel, and thank you so much for having me.

Rachel Siddall:

So, Dr. Sun, when we talk about bias in AI, what does this mean?

Dr. Virginia Haoyu Sun:

When we talk about bias in AI, it really means the bias in the data and systems that train it. AI models learn from historical healthcare data, and if that data reflects gaps, inequities, or under-representation of certain populations, the AI can unintentionally reproduce or even amplify those patterns in its outputs.

Rachel Siddall:

So, how are researchers and clinicians working to identify and reduce bias in AI tools?

Dr. Virginia Haoyu Sun:

I think the first thing is that, as clinicians, at this point we all understand that healthcare data is not neutral. It reflects how care has historically been delivered, and when AI is trained on that data, it can inherit biases tied to access to care, diagnostic delays, or treatment availability. And ultimately, as AI is used more and more to generate new data, it can scale that unintentional bias further and further.

And those differences have led to under-representation of many groups, particularly racial and ethnic minorities, older adults, people with disabilities, and those from lower socioeconomic backgrounds.

The COVID-19 pandemic really brought this issue into sharp focus, where we saw in real time how certain communities, despite having the exact same illness, were disproportionately affected, and it wasn't because of biology alone. And we realized that our data sets didn't adequately capture those populations, which limited our ability to respond equitably.

And I think that's a really big concern moving forward: while the pandemic led to these important conversations about diversity and equity in healthcare data, there's been less emphasis these days on rigorous research focused on representation. And I worry that now, with the AI boom happening, that representation is not being delivered. So I think addressing bias in AI isn't just about better algorithms; it's about being intentional in how we collect data, validate models across diverse populations, and hold ourselves accountable for equitable outcomes.

Rachel Siddall:

Do you think that we need to go back and consistently re-examine some of the AI tools to look for bias that may be occurring?

Dr. Virginia Haoyu Sun:

I think so. I think one thing in particular: the most accessible tool we have, both for clinicians and patients, is ChatGPT. And ChatGPT, as well as all other large language models, is basically trained on information from the Internet, and we know for a fact the Internet contains a lot of biased information. I think retraining everything can be very difficult, but what we do know is that systems such as ChatGPT are really trying to take out some of the biases; companies like OpenAI, Microsoft, and Google are actively trying to remove a lot of the biases when they are training these systems. And I think similarly, when we use these systems, we just have to take the information they provide with a little grain of salt.

Rachel Siddall:

How should clinicians evaluate whether an AI tool is reliable?

Dr. Virginia Haoyu Sun:

I think there are essentially three principles that we use, not just for AI but for any new device or new drug on the market, in terms of how well it works. Basically, the three principles are accuracy, precision, and generalizability. So we obviously want our AI tools to be accurate and precise, which means they get the right answer 100 percent of the time, or as close to 100 percent of the time as possible, and get the wrong answer as rarely as possible.

And that is the gold standard. I think it's something that we strive to achieve, but humans will never be perfect, and as a result, AI systems, which are trained on human data, will never be perfect either. However, I think we're getting really close; AI is just getting more accurate with every new iteration and every new development.

But the thing it still struggles with a lot is its generalizability, and I think this is a problem in all AI, where there's a gap between the concept and the actual production, but it's especially amplified in healthcare. And that's because there are millions of data points that are just not being accessed right now by AI training systems. It's partially because of privacy, but it's also because every hospital system is so separate from the others that we aren't able to access the millions of data points being generated from lab data, pathology data, and imaging data. And when we train AI algorithms, it's just a minuscule percentage of what is actually out there.

So really, I think the biggest thing to evaluate whether an AI tool is reliable is generalizability, and that's looking at: What data was it trained on? What was it tested on? Was it trained and tested on different hospital centers? Different locations? Different populations? And I think the wider the spectrum of information fed into the AI system to create it, the more reliable a tool it is.
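
To make that external-validation idea concrete, here is a minimal Python sketch using synthetic stand-in data and a simple scikit-learn classifier (not a real clinical tool), showing how accuracy and precision might be compared between the hospital that trained a model and a different site:

# Minimal sketch: comparing a model's accuracy and precision on an "internal"
# test set vs. an "external" one from a different, synthetic hospital.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, precision_score

rng = np.random.default_rng(0)

def make_site(n, shift):
    # Synthetic stand-in for one hospital's data; `shift` mimics a population
    # or equipment difference between sites.
    X = rng.normal(loc=shift, scale=1.0, size=(n, 5))
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=n) > shift).astype(int)
    return X, y

X_train, y_train = make_site(500, shift=0.0)        # training hospital
X_internal, y_internal = make_site(200, shift=0.0)  # held-out, same site
X_external, y_external = make_site(200, shift=1.5)  # different population

model = LogisticRegression().fit(X_train, y_train)

for name, X, y in [("internal test set", X_internal, y_internal),
                   ("external test set", X_external, y_external)]:
    preds = model.predict(X)
    print(f"{name}: accuracy={accuracy_score(y, preds):.2f}, "
          f"precision={precision_score(y, preds):.2f}")
# A sharp drop on the external site is a red flag for poor generalizability.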

Rachel Siddall:

So I know that there is some hesitancy about using AI, just because understanding these complex tools is very challenging. So, can you speak to transparency in AI tools: understanding how AI arrives at its recommendations, and how we can know the limitations of these tools?

Dr. Virginia Haoyu Sun:

Yeah, I think this has been something that has been plaguing AI, especially in medicine, for a long time. Basically, a lot of people say that AI is just a black box, and that's where that fear and uncertainty comes from. You basically feed information into the AI, and then somehow it generates an outcome, and you don't really know the decision-making process behind it, the same way an individual might be able to talk you through the steps of how they came to a decision. Ultimately, it depends on what specific type of AI is being used.

Currently, AI is being used in many different forms. So, let's take large language models, which is what I was doing my research in: they basically predict the most likely next word as they generate an output. So, you ask a question, and that becomes a seed for the large language model to take and try to answer, and then each next word that it generates is based on the probability of what would most likely make sense in that context.

And so, you don't necessarily need to have a PhD in computer science to understand how AI works, but having a little general understanding of what exactly is being used to train the AI, what data was used to create it, can make you feel a little more certain, or more comfortable, using AI in your own practice.
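
As a rough picture of that next-word idea, here is a toy Python sketch in which the "model" is just a hand-written probability table; this is not how ChatGPT is actually built, only an illustration of picking each next word by likelihood:

# Toy "model": for each word, a made-up table of possible next words and how
# likely each one is. A real large language model learns these probabilities
# from enormous amounts of text instead of having them written by hand.
import random

next_word_probs = {
    "the": {"patient": 0.6, "scan": 0.3, "doctor": 0.1},
    "patient": {"was": 0.7, "has": 0.3},
    "was": {"stable": 0.8, "tired": 0.2},
}

def generate(seed_word, steps=3):
    words = [seed_word]
    for _ in range(steps):
        options = next_word_probs.get(words[-1])
        if not options:
            break
        choices, weights = zip(*options.items())
        # Pick the next word in proportion to how likely it is in this context.
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))  # e.g. "the patient was stable"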

Rachel Siddall: 

So, Dr. Sun, can you tell us why AI should remain decision support, not decision-making in cancer care?

Dr. Virginia Haoyu Sun:

I think AI should remain more as decision support, just because every person is so unique, and they all have their own priorities, their own understanding of their risk factors, and things that they want out of their life. So, if I have a patient who comes to me and he has late-stage cancer, he's thinking: should I do this aggressive therapy that might give me an extra year of my life, but I'd have to spend three-quarters of it in the hospital, or suffering from some of the side effects? Or should I transition to hospice, where I can prioritize my comfort and spending time with the people I love?

That's such a personal decision, and I don't think AI is at that stage yet. Maybe one day it will be, but it's not even a decision that I can make for my patient. It's ultimately about what my patient wants and what matters most to him.

And maybe with AI he can ask about what specific side effects he might encounter with a specific chemo regimen, but it's not going to be able to make that decision for him. Just like I can't make that decision for him either, but I can explore what he values the most and maybe come up with a recommendation that works for him.

But I think cancer care is so personal, and it relies so much on understanding a person's values, that AI is currently just not at the stage to be able to do that. It can interpret an image, but it won't necessarily be able to tell you what matters most in your life.

Rachel Siddall:

I think that's a really great example of taking a person's values and preferences into account during their care, and keeping the human in medicine as well. So we've been talking about how clinicians use AI, but we know that many patients are already using AI tools and chatbots to do things like interpret their lab results, understand the diagnoses they've been given, and prepare for their medical appointments. So, what guidance would you give patients using AI tools?

Dr. Virginia Haoyu Sun:

I think I would encourage the use of AI, not necessarily only in medical care, but also just to familiarize yourself with it. It's really starting to be integrated into every aspect of our lives these days, and so, if you're using AI to navigate yourself from point A to point B with a good GPS system, or using it to track your sleep data, why not use it in your medical toolkit as well? But like I mentioned before, any data, especially any medical data, take it with a grain of salt.

There was a well-publicized case study from the Annals of Internal Medicine, where a patient who wanted to try to eliminate chloride from his diet looked on ChatGPT and was basically told that you could replace sodium chloride with sodium bromide. And then he started taking sodium bromide supplements instead.

And that led to bromide toxicity, which is incredibly rare; especially in the 21st century, we just haven't really seen it. And it turns out the AI system was really recommending sodium bromide as a replacement in other contexts, such as a cleaning agent, but it was misinterpreted. And so I expect that as AI becomes more used in the medical setting, or more accessible to individuals, errors like this could happen at any point. So if you are going to consult AI, feel free to do so, but if you are going to make any changes with regard to medicine or lifestyle, just consult with your physician first.

Rachel Siddall:

I completely agree. That’s a very scary story. So, how can patients ask AI tools better questions, avoid misinformation from these tools, and then also bring AI-generated information into productive conversations with their care team?

Dr. Virginia Haoyu Sun: 

Yeah, so there's a field within AI, specifically within large language models, called prompt engineering, and this is basically about how you ask the right question of your large language model. This is assuming you're using a large language model such as ChatGPT or Gemini from Google. Essentially, you want to instruct your model to be as clear, concise, and accurate as possible, to give you the answer in the way you want it, and to be as unbiased as possible. And so this might even be, let's say you go onto OpenAI, instructing your model by first saying, "You are a medical decision support tool."

"And your job is to give the most accurate, up-to-date information in an unbiased manner." And keywords like this will allow you to get much more accurate output than, for example, a standard large language model that just takes in all the unfiltered content from the Internet to generate your response. So that's the prompt engineering part of it. And then if you do start getting recommendations from your large language model, print them out or send them to your doctor or care team and just say, "This is something that I saw online." This is something that I see all the time as a physician; even before the advent of large language models, patients would bring in things that they read on the Internet all the time. And I welcome that, and I think any good provider should welcome that, because I want my patients to be as educated as possible on their disease, to know their disease well, and if there are any misconceptions or miscommunications, for us to be able to have an open dialogue about it.
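
As one way to picture that role-setting idea, here is a minimal sketch using the OpenAI Python client; the model name and the wording of the instructions are only examples, and any answer it returns should still be brought back to the care team rather than acted on directly:

# Minimal sketch of prompt engineering with a role instruction.
# Requires the `openai` package and an API key in the OPENAI_API_KEY
# environment variable; the model name below is only an example.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # example model name, not a recommendation
    messages=[
        # The "system" message assigns the role, as described above.
        {"role": "system",
         "content": ("You are a medical decision support tool. Give the most "
                     "accurate, up-to-date information in an unbiased manner, "
                     "and say clearly when you are uncertain.")},
        # The actual question from the patient or care partner goes here.
        {"role": "user",
         "content": "What side effects are common with this chemotherapy regimen?"},
    ],
)
print(response.choices[0].message.content)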

Rachel Siddall:

I think it's interesting that you bring up assigning the AI a role when you're asking the question. I've also experienced that: if I tell ChatGPT "you are a medical laboratory scientist" instead of just asking the question, the information that I get is so different. So assigning it a role in medicine when you're asking these questions can give you some very different answers.

Dr. Virginia Haoyu Sun:

Yeah, and I think that's something that large language models in particular are really good at: they can adapt and be generalized to almost any setting. So I had one patient who wanted to listen to poetry, so I asked ChatGPT to be a prolific literary writer and write some original poems for him. But then, on the other hand, when I use it to synthesize some patient data, I ask it to be a medical writer, but to write something that can be easily understood by someone who may not have training in clinical medicine.

Rachel Siddall:

Very cool. So, Dr. Sun, what does responsible AI integration look like over the next 5 to 10 years, and how can we ensure AI improves equity rather than widening gaps, remains patient-centered, and supports clinicians instead of overwhelming them?

Dr. Virginia Haoyu Sun:

So, for responsible AI integration, I think it'll be a slow process. One of the unique things about healthcare is that the payer system is so different compared to any other industry, and as a result the process to integrate things is a lot slower. So while other fields are already using AI more and more, I think healthcare is a little slower to adopt things, because it hasn't identified who the key stakeholders are and who the key payers are. And so, from the integration perspective, it's thinking about what a scalable way to integrate these tools is, especially because AI systems are becoming more and more expensive. They use up a lot of resources, so we have to think about the cost-to-benefit ratio of the AI tools that we use.

And then I think the other part of it is also looking at how generalizable the AI system is and making sure that when we do integrate something, it's something that can stand the test of time and the test of changing demographics. That will still be something we struggle with, but it will hopefully improve over the next decade or so.

And with regard to ensuring AI remains patient-centered, I think this is something we really hope AI can do: instead of looking at broad trends, to be able to look at the individual as a three-dimensional person, to take their personal features, see their progression over time, and integrate all of that together.

And lastly, from a clinician support perspective, I think it's really about being able to decrease some of the decision-making fatigue that clinicians sometimes have to go through, and to decrease some of the burden of documentation. That's a big thing that hopefully AI integration will be helpful with in the near future as well.

Rachel Siddall:

I have some experience implementing AI for reading slides in the clinical laboratory, and it was a huge support tool. It reduced the amount of hands-on time that people spent reading slides, in an industry where we're already struggling to find people to staff the laboratories.

So, it was exciting to see that really take a burden off of the clinical laboratory workflow, and it’s exciting that we can look forward to that in other areas of medicine as well.

Dr. Virginia Haoyu Sun:

Yeah, and I think it's already starting, to some extent. Personally, I spend more than half of my time writing notes rather than actually seeing patients, and so the most helpful thing that I've seen AI do specifically in clinical medicine is taking over a lot of that note-writing and documentation that we have to do, allowing me to spend more time with my patients.

Rachel Siddall:

That is fantastic. So before we close, Dr. Sun, can you tell us some things that you are excited about with the future of AI in medicine?

Dr. Virginia Haoyu Sun:

There are so many things, and there are both short-term things that are happening and also the long-term goals and aspirations that I have for artificial intelligence.

I think from the short-term perspective, right now large language models, large visual models, these very generalizable models, are starting to become more easily accessible to both patients and clinicians, and that's really already started to transform how AI is being used. And I think before, there was much more of a gap between a concept for AI and its actual release and deployment; now that gap is becoming smaller, just because AI is becoming more generalizable, and the data that we're using to train AI systems consists of larger and more diverse datasets, which allows us to use AI in different circumstances. From the long-term perspective, I think it's really about moving AI away from the discrete problems, or discrete tasks, that it's currently being used to answer questions for.

So instead of just being able to tell me, at a certain point in time, whether or not this chest X-ray has an X percent chance of containing a precancerous lesion, it could follow a person's evolution over time to tell me about certain risks, to tell me what the best approach to treatment is: whether I should continue to wait and monitor, or whether I should treat someone aggressively. I think that's something we can look forward to for AI in the future.

Rachel Siddall:

Well, great, thank you so much for sharing your opinions and perspectives on that. And that brings us to the close of this DECODE podcast.

Dr. Sun, thank you for sharing your expertise and helping us navigate both the promise and responsibility of artificial intelligence in cancer care. To our listeners, thank you for joining this Patient Empowerment Network DECODE program. Our goal is always to help you and your family feel informed, supported, and empowered as you navigate your cancer journey. I’m your host, Rachel Siddall. Until next time, take care and be well.
