What Are Potential Impacts of Artificial Intelligence on AML Patient Care?
How might acute myeloid leukemia (AML) patient care be impacted by artificial intelligence? Expert Dr. Andrew Hantel from Dana-Farber Cancer Institute and Harvard Medical School shares his perspective on the potential risks and benefits of AI in AML patient care.
Transcript:
Lisa Hatfield:
Dr. Hantel, can you elaborate on the significance of oncologists believing that AI-based clinical decision models need to be explainable? And how might this impact AML patient care and decision-making processes?
Dr. Andrew Hantel:
Sure. So I think it's worth taking a step back and asking: what is AI, and what does explainability of AI even mean? AI, or artificial intelligence, is essentially computer algorithms that learn, in some ways like us and in other ways differently, how to process information and then make decisions, or at least recommendations, based on that information.
And to some extent, like you or me, it can't really explain itself. Why did I decide to have Cheerios this morning versus whole wheat toast? It's difficult for me to say much beyond, "I just felt like I wanted that instead." To some extent, AI does the same thing. It can arrive at a decision after digesting a lot of different data over its lifetime and say that it prefers Cheerios to whole wheat toast.
But it can't necessarily tell you why it wanted one versus the other. And in medical decisions, the same thing can happen: it can't really adequately explain why it might recommend one treatment versus another. We like to think that in medicine we're making evidence-based recommendations, that we choose treatment one or treatment two over treatment three because the evidence for one and two is better for the person in front of us.
And AI can explain things to that extent in some ways, but in other ways it might not know all of the other characteristics of the person, the ones that aren't in the computer, that make us think treatment one or two is better than three. So we need to be able to actually ask, "Is the AI making this decision appropriately, and can it explain why it came to decision one or two?"
If it can't do that, we can't actually understand whether or not it's gone wrong and whether or not we should trust what it's recommending. And so for that, we have to create artificial intelligence models that are explainable, that can say, "I'm telling you that you should choose this option versus that option because of reasons A, B, and C as they apply to this patient being taken care of." The hope is that computer scientists can find ways to get AI to that point.
But we really need to make sure that we create AI that's trustworthy in order for us to make AML patient care decisions that do better for our patients, because we know that AI is powerful and can bring in a lot of different data sources that would be difficult for any human to synthesize in any kind of scenario. But being able to do that in a way that doesn't put patients at risk, and that really improves their care and our ability to maintain and optimize people's health, is essential. And so while AI is not being used right now to make decisions in AML patient care, it's probably going to be tested in the near future to help out with that in clinical trials and controlled settings.
And so if you're a patient, or somebody who is very interested in the power of AI, I would say that once we start to hear about those things, you might be interested in participating in a trial or in learning more about it. We could come back and talk about that more. For the moment, though, I think it's more a risk we're trying to avoid: making AI that's not explainable and that potentially harms patients rather than helps them.
Lisa Hatfield:
Okay, thank you. One of the things I know in some cancer research is that they are using artificial intelligence and machine learning models to help predict outcomes based on certain therapies. And I wonder if you have any comments on this, because the data used is historical and also coming in in real time, but we know there are inherent biases based on existing healthcare disparities affecting underrepresented communities. Do you think those biases can be overcome in future models used to predict outcomes to treatment for different types of cancers?
Dr. Andrew Hantel:
Yes. So I think there are a number of different biases that can come into artificial intelligence models. And a lot of them are the same biases that we have in our current clinical trials, in that historically marginalized groups have not been well-represented, either in participating in trials or in the data that's input into these AI models. And for much the same reason, we don't really know how well the data we have from the trials, or from the AI, generalize to those populations.
We assume that because they share a lot of the same characteristics as the people who are in the trials, or in these models, we can apply those data to them. But I think the push is both to use data sets that include those communities and to encourage their participation in trials, so that we know that these drugs and these AI models are safe and effective for them.
And so there are efforts to do that in leukemia, in cancer broadly, and across healthcare even more broadly. That can be done by working together with multinational consortia of physicians and researchers to pool data that includes patient populations from around the world. The same thing is being done for trials, along with efforts to make sure that people who are underserved within our own communities are included in both of these processes.
Lisa Hatfield:
If a patient were to come to you and say, "Dr. Hantel, I looked up on ChatGPT what the best treatment is for me given these mutations or this characteristic of my disease," what might you say to them? Would you involve that in your decision-making? Would you discuss that with them a little bit more? How would you handle that?
Dr. Andrew Hantel:
I think I would just generally be curious about what the actual transcript of the conversation was like. Right now, one of the major concerns with a lot of AI is that it can hallucinate things. There are some famous examples of lawyers putting in briefs that they wanted to file and the AI coming up with court cases that never existed to justify things. And so the last thing we want in medical decisions is for people to rely on made-up facts to make treatment choices.
And so I'd be interested in its medical decision-making process and the data it was able to rely on to make the decision, more from the standpoint of curiosity and education for myself, to understand how patients are interacting with these things, as well as to make sure that the patient also understood the information that was being put out and wasn't left with any misconceptions.
I think the potential for these AI tools to help patients is vast in terms of their ability to translate a lot of the medical jargon and a lot of the information that's coming at patients through portals and everything else, which can be very scary. But I also want to make sure that we're not overloading patients with what looks like an answer but can actually come with a lot of falsehoods and harm.