
How AI Is Shaping the Future of Personalized and Proactive Cancer Care


In this DECODE episode, host Lisa Hatfield speaks with Dr. Virginia Sun of Massachusetts General Hospital about how AI, particularly large language models like ChatGPT, is already changing cancer care. From identifying immune-related side effects to easing documentation burdens, they explore what’s hype, what’s real, and what patients should know as they navigate cancer in an increasingly tech-driven world.

Transcript

Lisa Hatfield:

Welcome to DECODE, a Patient Empowerment Network podcast that breaks down how emerging technologies—like artificial intelligence—are changing cancer care. I’m your host, Lisa Hatfield. Our goal is simple: to help you stay informed, confident, and empowered as you navigate your care. Let’s get started.

In our last episode, we explored how AI, particularly large language models, is being applied in clinical oncology, and also how these tools are already being used by patients.

In this episode, we’re looking ahead at what’s next. From predicting who may be at risk for certain cancers to tailoring treatment based on a patient’s individual biology and needs, AI is opening the door to more personalized and preemptive care than ever before. We’re talking about catching cancers earlier, treating them more precisely, and ultimately improving outcomes in ways that weren’t even possible a few years ago.

Joining me again is Dr. Ginny Sun of Massachusetts General Hospital, who is on the ground floor of this research, to help us unpack how these technologies are being developed, validated, and integrated into care, and what this means for patients, providers, and the future of oncology. Dr. Sun, thank you for joining us again today.

Dr. Virginia Sun:

Thanks for having me.

Lisa Hatfield:

So, looking ahead with AI and early cancer detection, are there any AI-driven studies right now aimed at identifying cancer earlier and possibly even pre-symptomatically? 

Dr. Virginia Sun:

Definitely, and artificial intelligence is being applied in so many different ways here that it’s probably its own separate topic. For example, it can help flag people who might benefit from more thorough testing with CT scans.

Lisa Hatfield:

Okay. Thank you. And what are the challenges and potentials, the potentials are kind of clear, I mean earlier diagnosis, but are there more potentials for these types of studies for using AI? 

Dr. Virginia Sun:

Yeah. I think there are so many potentials. Some of them are already being used inside our practice today. So, for routine screening colonoscopies, we use artificial intelligence to detect pre-cancerous lesions during the colonoscopy. And that can lead to basically removing the pre-cancerous lesion right then and there. I can see it in basically every single cancer you can think about, some sort of pre-screening step to maybe be able to remove something early. So skin cancers especially, anything that requires any kind of regular imaging. I think if we’re able to detect it early, we can treat it early, and that can lead to better outcomes. I think one of the challenges, though, is also measuring something called lead time bias.

Lead time bias is basically this: we detect things earlier, and because of that we see a survival benefit, but is it because these patients are actually surviving longer, or is it because we’re just detecting their cancer earlier? And so teasing out what is a true clinical benefit versus a lead time bias is actually really difficult. And as we’re thinking about how we can incorporate artificial intelligence, we just need to make sure that the way we’re doing it is smart. It’s not leading to increased anxiety among our patients. It’s not leading to too much extra testing that can put a strain on the healthcare system and ultimately not lead to a significant survival benefit.
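The arithmetic behind lead time bias can be sketched in a few lines. This is a hypothetical example with made-up numbers, not data from any study: the patient’s outcome is identical under both strategies, yet measured survival from diagnosis looks longer with earlier detection.

```python
# Illustration of lead time bias: the same patient, the same disease
# course, the same date of death -- only the moment of detection differs.
# All numbers are hypothetical.

death_age = 70            # patient dies at the same age either way

late_detection_age = 68   # cancer found after symptoms appear
early_detection_age = 64  # cancer found by an AI-assisted screen

survival_late = death_age - late_detection_age    # 2 years from diagnosis
survival_early = death_age - early_detection_age  # 6 years from diagnosis

# The "early" group appears to survive 4 extra years, yet the patient
# did not live a single day longer -- that gap is pure lead time bias.
apparent_benefit = survival_early - survival_late
print(apparent_benefit)  # 4
```

A screening study that measures survival from the date of diagnosis, rather than comparing ages at death, can report exactly this kind of illusory benefit.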

Lisa Hatfield:

Yeah. And I’m wondering too if one of the challenges, I know in some blood cancers there are precursor conditions they’re now able to identify before a blood cancer becomes basically a full-blown blood cancer. I wonder, if AI is showing these things pre-symptomatically, whether there are some people who don’t necessarily need to be treated who are being treated, and whether it can lead to that, and like you said, excessive treatment that’s not necessary and an extra expense to the healthcare system. I don’t know what these studies are showing, but it’d be interesting to see how that all falls out in the future if that becomes a challenge. So, the one benefit of AI that I’m hoping for is customizing patient care, especially for cancer patients, so they can have more customized treatments and understand their outcomes better. How might a patient’s cancer journey look with the introduction of AI into that journey?

Dr. Virginia Sun:

I think, you know, you can use artificial intelligence in so many different aspects. So, let’s say that just like starting from the very beginning, like you’re a patient, maybe you were recently diagnosed with cancer, and that can be a really frightening thing. I’ve seen patients, they all sort of, they process this diagnosis in many different ways. But I’ve noticed a lot of them, they want to know the information quickly, and they want to know it fast.

And so having artificial intelligence to help answer all of those questions using a chatbot, that’s specifically used to help counsel patients who have cancer diagnoses, I could see that being sort of like a gateway into using artificial intelligence. Then there’s also sort of thinking about, okay, now I had this diagnosis, like, what does this mean for my family? If we’re thinking about genetic testing for my family, what does that mean for them too? Does that need any sort of artificial intelligence? I’m sure it does when sort of the data for genetic testing is so huge. And then let’s say we kind of flip over to the physician’s end and sort of choosing different medications or the different types of treatment regimens. And sometimes it depends.

Sometimes there are certain cancers that have a very regimented regimen already, whereas others, there are maybe a couple of different options. So, thinking about which one maybe will work best for my patient based off of their cancer profile, based off of their other risk factors, based off of their other medical conditions, all of that can also be tailored as well with the help of artificial intelligence. And then finally, thinking about remission and the risk of relapse and everything and sort of repeat screening throughout time. I think artificial intelligence can be another way to help with screening after being treated for cancer as well.

Lisa Hatfield:

Okay. Thank you. And then how is AI being used to better match patients with effective treatments from the start? 

Dr. Virginia Sun:

So, there’s now this very big part within medicine and medical research called precision medicine. And this is really trying to look at what an individual patient’s profile is and then trying to match the right medications to that. Oftentimes it’s used for cancer, but it’s also used in nearly every branch of medicine as well. And I think ultimately the way precision medicine really works is, you take a look at your individual patient, and then you look at the other millions of patients who have the same disease. And then you see how the people who are most similar to you responded to different medications, and try to choose which specific medication works best for you based off of that. And that, looking at multiple giant patient populations and trying to figure out what works best for your individual patient, is really what artificial intelligence is all about inside medicine.
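The “find the patients most similar to you” idea Dr. Sun describes can be sketched as a simple nearest-neighbor lookup. Everything here, the feature choices, the numbers, and the treatment labels, is hypothetical for illustration; real precision-medicine models are far more sophisticated and would at least normalize the features first.

```python
import numpy as np

# Hypothetical sketch: each past patient is a feature vector
# (age, biomarker level, comorbidity score), paired with the
# treatment they responded to. Numbers and labels are made up.
past_patients = np.array([
    [55, 2.1, 1],   # responded to treatment "A"
    [58, 2.3, 1],   # responded to "A"
    [72, 8.9, 3],   # responded to "B"
    [70, 9.4, 2],   # responded to "B"
])
responses = ["A", "A", "B", "B"]

new_patient = np.array([69, 9.0, 2])

# Euclidean distance to every past patient (smaller = more similar).
# Note: features are unscaled here, so age dominates; a real system
# would standardize each feature before measuring similarity.
dists = np.linalg.norm(past_patients - new_patient, axis=1)

# Take the 2 most similar past patients and see what worked for them.
nearest = np.argsort(dists)[:2]
suggested = [responses[i] for i in nearest]
print(suggested)  # the most similar patients responded to "B"
```

The new patient’s profile sits closest to the two older, high-biomarker patients, so the lookup suggests treatment “B”, which is the intuition behind matching a patient to the subgroup most like them.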

Lisa Hatfield:

Okay. Thank you. I know that, again, talking about blood cancers, we should be talking about all cancers, but one of the blood cancers is called multiple myeloma. Patients are typically put on a standard regimen; everybody’s put on the same one initially. Oftentimes it works for people and sometimes it does not. So three months in, they have to keep testing to see, oh, is their bone marrow showing a reduction? There are some biomarkers they check in the blood too. But after three or four months, it feels like, gosh, I’ve wasted three or four months, have a ton of toxicities from this medication, and it hasn’t worked at all. So, some of the studies are actually looking at these myeloma cells, taking them out of the bone marrow, and testing them against different therapies to see which therapy might work best for that specific patient.

So, I’m wondering, if something works, if they’ve taken a cell outside of the body and tested it in the lab, if something works outside of a lab like that, as we’re going toward more precision medicine, is it just as likely to work in the body the same way? Have you seen that, or have you seen studies that indicate that it’s, obviously it seems better. Logically, it seems better. But have you seen studies looking into that? 

Dr. Virginia Sun:

It certainly does. I think that is something that we’re trying to do also within precision medicine. But I think one of the biggest challenges is doing these huge randomized controlled trials. Even with artificial intelligence, to really ensure that a medication is both effective and safe, we need a randomized controlled trial. We need to recruit people who have that specific condition. We have to get consent from all of them. We have to follow them for years sometimes.

And so that can be a really long and drawn-out process. But the hope is that with artificial intelligence, we’re able to identify these subsets a lot faster and maybe not need as many large randomized controlled trials that are so tedious to do. And maybe we can scale down the number of patients being recruited for these trials and get these ideas out to market much faster.

Lisa Hatfield:

Okay. Thank you. And then if a patient, so say a patient’s diagnosed with some type of cancer that has a very standard regimen right from the start, and they decide to use ChatGPT and they plug in all of their labs and they have a specific biomarker and see that, oh, this treatment could work better. Okay. This is just a general ChatGPT question, take it in to their provider. Do you think that’s a reasonable thing to talk to their provider about? And how might that patient approach their provider knowing their provider might not be as interested in that AI that was being used for that patient? 

Dr. Virginia Sun:

It’s a really good question. I think it’s something that I face as a challenge sometimes too: someone looks up something online, and I have to think about how much I want to take this additional data point into account. This not only happens for medications but sometimes for natural remedies that I’m just not very familiar with. And they’re all being brought up. And, you know, I always encourage these conversations. I do think that there’s no way that a physician can know everything.

And so if you read something online and it’s maybe different from what your doctor is telling you, I think it’s good to have that conversation. And I think my hope is that the physician isn’t feeling defensive about it and is really trying to listen to you. I think one thing that will add to the point a patient is raising is to also include some articles that they read. So not just taking ChatGPT’s statement point blank, but then also looking up what research studies support ChatGPT’s argument. Was there ever any randomized controlled trial that showed this? Or was there a specific patient population inside a publication that seemed to benefit from this specific thing?

And then that can be solid things that a physician can sort of actually take into consideration, because at this point, we’re not even allowed to use ChatGPT inside our own clinical decision-making. It’s just that’s not what ChatGPT was meant to do.

Lisa Hatfield:

Right. That makes sense. And I think some of us are starting to do that a little bit more frequently as we get more comfortable using ChatGPT. So, those are good recommendations for patients. So going back to the customizing care, precision medicine, when it comes to especially things like blood cancers or very complex cancers, how close do you think we are to the right treatment the first time around? 

Dr. Virginia Sun:

I think it’s hard to put like a specific number into how close we are, but I think we are getting closer. And I think there’s a lot more hope now, too. There’s so much more work being done, especially in the immunotherapy space, which really took off in the last decade or two. And I think that really opens up this whole new branch of how we’re able to target blood cancers, other cancers as well. CAR T-cell therapy, for example, is now really big in that space as well. And so we’re getting better treatments for sure.

And then sort of the next follow-up question is, how do we know what the right treatment is? And it’s going to ultimately boil down to needing a lot more trials to sort of a lot more data to compare different treatments head to head. But in terms of how to design that right treatment method, how to design that right protocol, and then also teasing out like which specific populations benefit more from one treatment versus the other, I think artificial intelligence can play a big role in that.

Lisa Hatfield:

Yeah, that makes sense. And then when it comes to treatment, so you mentioned things like immunotherapy. So, CAR T therapy can be curative for some cancers. And it would seem logical that it would be curative for a similar type of cancer too, but it’s not. Do you think that using something like the large language models that you’re using in your research may be helpful in determining why that particular type of therapy is a cure for one type of cancer and not another? And how might that happen in the future? How might AI or large language models help us determine how we can have an effective treatment similar to a treatment that’s already in existence?

Dr. Virginia Sun:

Yeah. That’s a really good question. I think we’d have to be really careful about what specific artificial intelligence tool we’re building to really make it robust enough to extrapolate the data from one cancer to another. And I think that’s one of the pitfalls that we have in artificial intelligence right now: it’s not really good at extrapolating. So, during the last podcast, I mentioned how the large language model that I built is really just a really good reading comprehension tool. And a good reading comprehension tool is not going to be smart enough to extrapolate, okay, this CAR T-cell therapy worked for this leukemia, but does it work for a different type of cancer? And so I think the short answer is that specific large language model would probably not do well. But we can maybe do other things. We can use other deep learning techniques to basically make these really powerful computers that are able to predict things and simulate the environment of cancer cells inside the human body. And maybe with that, we can actually make some progress in terms of extrapolating data from one specific cancer to another.

Lisa Hatfield:

Okay. And this is a really big question, might be a loaded question, you can choose to answer or not. But I know a lot of work is being done. It kind of goes along the lines of precision medicine. A lot of work is being done looking at whole genome sequencing for determining cancer and the types of cancer and specific biomarkers of certain cancer. There is so much data to be used with whole genome sequencing. Do you think that there will come a time when we can use data using whole genome sequencing from a population of people to predict not only cancer, but a specific type of cancer in an individual person? 

Dr. Virginia Sun:

I think it’s possible. I think one of the things about whole genome sequencing is that it sequences the whole genome of a specific cell type or like whatever tissue sample you pulled. So, if your tissue sample that you pulled was not actually from the cancer, it wouldn’t actually show if there’s any cancer. If it’s something from maybe a pre-cancerous lesion, then the whole genome sequencing would show the mutation for cancer. And then that could be treated. There is kind of like the separate thing outside of, it’s similar to whole genome sequencing, but basically called cell-free DNA. It looks at circulating DNA inside your blood and if any of that DNA could be like a sign of cancer. It’s not being used to screen for cancer right now, but it is used to help detect relapsed cancer. And I think that’s a really growing field as well.

Lisa Hatfield:

Yeah. And that’s interesting to know too, because the sooner we can detect relapse and get treatment, hopefully the better the outcome and prognosis, so. Interesting. Okay. All right. So looking forward, the future vision of AI in oncology care, Dr. Sun, what are you the most excited about as far as the direction of AI in oncology? 

Dr. Virginia Sun:

I think, and maybe this is a little bit selfish of me, but I think for me personally, I think artificial intelligence can really just change my own workflow. It can allow me to have a lot more time with my patients. I don’t have to spend as much time in front of the desk, writing notes, doing chart reviews, things like that. And I can just really focus on talking to my patients. And then I also think that it will also make me a better doctor as well. I think it can make everyone a better doctor because now instead of having to sift through thousands of articles to just try and stay up to date, we can use artificial intelligence to help streamline, like, what is the most important information that I need to know based off of the new research that’s coming out? And then from the patient’s perspective, I think it can help in so many ways as well.

So, there’s sort of like all of the research that’s being done using artificial intelligence already that is now slowly being incorporated into how patients are getting treated from anything from colonoscopy screenings to chest X-ray readings, things like that. It’s probably affecting them more than a lot of patients already know. And at the same time, I think it can hopefully help with bridging some of the gaps between the physician and the patient as well. So, whether it’s using artificial intelligence to help with some of like the things that are mysterious, the questions that are unanswered as well, hopefully that will also just help destigmatize and sort of like make cancer feel like a little less scary and a little bit more accessible and bring a lot more hope to our patients.

Lisa Hatfield:

Yeah. That makes sense. Thank you for mentioning that. You also mentioned earlier that as a doctor, one way also to free up more time for you is there’s a program that’s been written to pull all PubMed articles about a certain topic or something. I imagine that is great for providers who use that because does that program also summarize? Say you want to learn more about stage IV lung cancer or small cell lung cancer. Will it summarize maybe 10,000 articles on a specific topic if you used it? 

Dr. Virginia Sun:

I wish. I don’t think it’s at that specific point yet, but certainly what it does is maybe you ask a question, let’s say, about stage IV lung cancer, and then it’ll sort of pull the most relevant results, and usually it’s maybe three or four different research articles, and then it’ll summarize those, but then not only that, you have the option to actually look at the research articles yourself and read through them.

Lisa Hatfield:

Okay. That’s good to know, and patients might want to use that too. Again, ChatGPT, I wouldn’t use that as an only source, but I suppose they could type in a prompt. The more specific the prompt, the better information will be coming back. But I imagine that it can be typed in as a prompt, pull up all peer-reviewed scientific or medical journals regarding small cell lung cancer, stage IV small cell lung cancer, and that way the patients have more information and are more empowered bringing that information back to you. Some providers, like I said, may or may not be excited about receiving that information back, but it certainly would help in the decision-making process between patient and provider and the whole patient care team, so yeah. Well, thanks for that great information. And then regarding the future vision also, with some unknowns with artificial intelligence, what should patients and clinicians be hopeful or cautious about? 

Dr. Virginia Sun:

I think we should really be hopeful about just what this means for basically the future of patient care. It’s going to transform every aspect of our lives. It’s going to do everything from helping write our notes for us, to helping identify things earlier, to helping with patient decision-making, and also clinician decision-making as well. And so there’s just so much hope, and one thing I really hope it will do is help detect things earlier so that we can intervene faster. If we can detect cancer earlier, if we are able to come up with better treatments using artificial intelligence, I think that just brings so much hope into how we practice.

At the same time, we always should be cautious about any new technology, especially once it’s being used for patient care, because we want to make sure that whatever we’re using is accurate. Just as an example, if I asked ChatGPT to tell me why vaccines are good for you, and then I also asked it to tell me why vaccines are bad for you, it will give really convincing arguments on both sides. So, which is the truth?

And so similarly, like, if we take maybe less controversial topics outside of vaccines, but then also like any topic in cancer therapy as well, we just have to make sure it’s accurate, it’s not biased. And then I think one thing to also be cautious about is just the explainability of artificial intelligence. Oftentimes, AI, large language models in particular, are called a black box, in that we don’t really know what’s going on. We don’t know how the neural network, let’s say, or like the mind of the artificial intelligence is actually doing or thinking. And so that’s like, for example, they’ll give you an answer, but you don’t actually know how they got there. If you remember in math class, it’s like showing your answer but not showing your work. So similarly, artificial intelligence can do that. And so that can be a really big problem in medicine, where you really want to be able to understand the reasoning behind decisions and not just the output.

Lisa Hatfield:

Well, you mentioned two things, and I want to touch back on that. So understanding the output, is it possible both as a clinician and as a patient, say we’re using some type of large language model that’s been implemented at your facility, or for a patient, we’re using something like Claude or ChatGPT, is it possible to ask, what are the limitations, or tell me your sources? Can you type into the prompt, what are your sources for this information? Or what are the limitations of this information? Will the response usually come back appropriately?

Dr. Virginia Sun:

Usually, and I think it also depends on how exactly this large language model is trained, and what is it meant to do? And then also, I think large language models are a little bit better in the sense that they actually try to understand human text and generate something that actually makes sense. But then there are also other sorts of deep learning techniques that are completely like sort of shrouded in mystery. So, within large language models, there are sort of ways to ensure that what you do, or whatever questions you’re asking are grounded, the answers are grounded in truth.

And so that goes into a little bit more technical detail. It uses a specific technique called retrieval-augmented generation, oftentimes. But it’s basically like equipping your large language model with its own search engine. And the answers of the large language model have to be based off of the results from the search engine. And that’s what tools like OpenEvidence do. I think ChatGPT now tries to do that as well when you ask it a question. But like I said, ChatGPT didn’t always do that. It used to make up random resources or fake research articles to try and answer a question.

And so the newer models are better because the engineers behind it hard coded it so that it goes through sort of an extra search function and gives you the resources. But there are maybe some other free large language models out there that don’t do that yet.
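The retrieval-augmented generation setup Dr. Sun describes can be sketched in miniature. This toy version uses keyword overlap in place of a real search engine and a fill-in template in place of a real language model; the document texts and names are made up for illustration.

```python
# Minimal sketch of retrieval-augmented generation (RAG): before
# answering, the system first retrieves relevant documents, and the
# answer must be grounded in (and cite) what was retrieved.

documents = {
    "doc1": "Immune checkpoint inhibitors can cause immune-related adverse events.",
    "doc2": "Colonoscopy screening can detect pre-cancerous lesions.",
}

def retrieve(question, docs, top_k=1):
    """Toy retriever: rank documents by word overlap with the question."""
    q_words = set(question.lower().split())
    scored = sorted(
        docs.items(),
        key=lambda item: len(q_words & set(item[1].lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def answer(question, docs):
    """Answer using only the retrieved text, citing the source document.

    A real system would pass the retrieved text to an LLM as context;
    here we just return the text itself with its source attached.
    """
    doc_id, text = retrieve(question, docs)[0]
    return f"{text} [source: {doc_id}]"

print(answer("what can colonoscopy screening detect?", documents))
```

Because the answer is constrained to the retrieved text and carries a citation, a reader can check the source directly, which is exactly the grounding property that distinguishes RAG from a model answering from memory alone.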

Lisa Hatfield:

Okay. Thank you. And I also want to go back to one other comment that you made, because I know there are patients who are looking up things like, what is this? Am I relapsing? Will this treatment work for me? You’d mentioned trying to prompt for both sides of it, say an argument, I know it’s an argument, but say a patient, a cancer patient saying, well, will this treatment benefit me? Also look at what are the downsides to this treatment, or which patients won’t this treatment benefit? That might not be the best example but looking at both sides of something, you know, what are the benefits of this and what are the pitfalls of this? So when patients, if they do want to take information to their doctors, they have both sides, they can understand both sides. And again, like you’ve mentioned, not using one source for all information.

It requires multiple sources of information and especially our providers’ input and their brain working to try to figure out the diagnosis, the prognosis, the treatment, the outcomes, the adverse events. So using multiple sources, but I really liked your idea of looking at both sides. You know, maybe we need to understand on the positive side this and on the negative side this, or this might work, or this might not work. So I appreciate those comments. When patients are looking up information, it might be helpful to look at both sides of an argument. I guess it’s the best way to explain it. So yeah, I like that. Thank you.

Dr. Virginia Sun:

Yeah. You know, I actually really empower patients to do that. So, like I mentioned in the last podcast, my specific research is on immune-related adverse events. These are autoimmune conditions that result from starting immune checkpoint inhibitor therapy. It’s different from other chemotherapy side effects in that oftentimes with chemotherapy, you stop the medication, you stop the adverse event. So once you stop taking the medication, your hair will grow back, for example. But with an immune-related adverse event, you can stop the medication and still end up with an autoimmune disease that can lead to a lot of morbidity and maybe even mortality.

And so, I think if like a doctor were to start a patient on immune checkpoint inhibitor therapy and just says like, okay, this is going to help treat the cancer, but it might lead to an immune-related adverse event, and that could be like an autoimmune condition. It’s really hard for a patient to really understand what exactly that means, what that means for their quality of life if they were to develop it, and whether or not the risk of getting an immune-related adverse event is worth the benefit of potentially treating this cancer.

And so that is something that I think we’re trying to work, and we’re trying to explore whether or not artificial intelligence can help with that as well, just to help patients better understand the adverse effects that come out with cancer therapy.

Lisa Hatfield:

That’s super interesting. And like you said, too, to see if artificial intelligence down the road can predict the risk of that. So then the decisions can be more informed, and the providers and the patients can be more aware of that if symptoms do pop up. So yeah, this is so interesting to me. And just one more question about patients, I just lost my train of thought here. Oh, so if a facility is using a large language model, for example, in your research, will the patient know that their medical record is being reviewed and maybe data being retrieved from it to train a model? Is there any requirement for consent for that, or with the final product, is that not even an issue?

Dr. Virginia Sun:

So, I think there are different levels of how patient data is being used right now inside research. If there is any possibility of that information not being de-identified, or being shared outside of the hospital, then absolutely there needs to be consent from the patient. I think for the large language model that I used, I was able to get around it because I didn’t do any pre-training. I didn’t have to use any patient medical records to train it. All I did was look at prior patients retrospectively and then basically ask, does this person’s medical record show an immune-related adverse event? All of that was only shared within the hospital, among the researchers who were specifically granted access to that data. And even in the final product, no patient data was ever leaked or released. But if you’re trying to create a new artificial intelligence tool that uses patient data, then that leads to more barriers. It also makes it a lot harder for people to share artificial intelligence tools across different systems.

So, I can’t create a neural network that’s built with patient data and then share it with the hospital next door, because it has patient data inside it. And so certain things require consent. Others just require very strict monitoring from the hospital’s privacy governing board or the institutional review board. And the goal of that is to make sure that no patient data is ever leaked or even accidentally exposed to the public.

Lisa Hatfield:

Okay. Thank you. And then final words, what would your last words be to reassure patients and maybe even some providers that there are a lot of benefits to using artificial intelligence in oncology research and care, so that maybe they’re a little less fearful knowing that it might be down the road, there might be some progress down the road with AI use in oncology care? 

Dr. Virginia Sun:

Yeah. I think it’s very smart of people to be wary of artificial intelligence. I think the best artificial intelligence engineers are also wary about artificial intelligence itself. They know what the pitfalls are, they know where the biases are, and they’re actively trying to ensure that people are educated about what these pitfalls are. So that way, other people know about it. And then they’re also trying to sort of bridge some of these biases as well and get rid of some of these biases. So, we have to be smart about it. And I think it’s absolutely incredible, like it’s an intelligent reaction to be wary about this. I think that being said, though, I think to help you understand how artificial intelligence works, to understand how it can help you and where it doesn’t help you is really to just work with it yourself. So, you can start with something as simple as ChatGPT. And I think I’ve seen a lot of people who were initially skeptical say that, you know, now they’ve been able to incorporate it somewhere in their routine, maybe not in everything, but even from like drafting an email, or helping them write a letter, things like that.

I used it once to write a poem for one of my patients, because he said he missed Ireland, and he loves sort of Irish limericks. So I wrote him a limerick. And so that was how I used ChatGPT, for example. So even small things like that can really sort of like bring joy to someone’s life, maybe brings convenience to some people. And so that’s sort of like one way in which artificial intelligence can be a little bit more accessible to people. And then I think the other thing just to note is artificial intelligence is already being used in probably a lot more ways than you expect. So, from opening your phone with your face, to driving using GPS navigation, all of that uses some form of artificial intelligence. And so I think the way to move healthcare forward is also to use artificial intelligence as well.

Lisa Hatfield:

Thank you for that. And I feel more reassured just talking with you, a lot more hopeful about using this bigger umbrella of artificial intelligence in oncology research. I think, like you said before, we’re coming up on more precision medicine for patients who are diagnosed with cancer. And that is definitely a reason for us to stay hopeful. So that brings us to the end of today’s episode. A big huge thank you to Dr. Sun for sharing valuable insights. And to all of you for joining us, we’re grateful to have you as part of this Patient Empowerment Network’s DECODE program, where we’re decoding complex topics to help you stay informed and empowered in your cancer care. I’m Lisa Hatfield. And until next time, take care and be well.
