Webinar

Navigating the Future: How Artificial Intelligence is Reshaping Health Care


Time & Location

Feb
07
12:00 pm - 1:00 pm ET

Event Materials

Health outcomes could improve by 40% and treatment costs could be reduced by 50% with the use of artificial intelligence (AI), according to the Harvard School of Public Health. Yet despite AI's promising ability to improve health outcomes and transform medicine, it also has the potential to disrupt care and introduce risks, resulting in harmful and inequitable outcomes. Health care stakeholders are beginning to explore the role of regulatory bodies and their approach to AI in the health care sector. Join us to gain valuable insights into navigating the evolving landscape of AI in health care.

Speakers will discuss:

  • Separating myths from tangible advancements to help us understand challenges and limitations of AI.
  • Real-world use cases where AI is transforming patient care and empowering clinicians.
  • The complex landscape of ethical considerations and legal challenges associated with integrating AI into health care.

0:04

Hello, everyone, and welcome to today's webinar, Navigating the Future: How Artificial Intelligence is Reshaping Health Care.

0:11
Before we begin, just a few tips to help you participate in today's event.
You've joined the webcast muted. Should you have any technical issues during the webcast, please use the Questions icon. You'll have the opportunity to submit text questions at any time throughout the event, and we'll have time set aside at the end for live Q&A.

0:32
I would now like to introduce Kathryn Santoro.

0:36
Thank you, and good afternoon. I'm Kathryn Santoro, Senior Director of Programming at the National Institute for Health Care Management Foundation. On behalf of NIHCM, thank you for joining us today for this discussion on the future of artificial intelligence and health care. This webinar is part of our Artificial Intelligence Webinar Series, where we are sharing research and expert perspectives on the integration of artificial intelligence in health care and its impact on patients' lives, care delivery, and health equity. Our next webinar in this series will explore the impact of AI on health equity more in depth.

1:18
NIHCM's previous work on AI includes supporting the work of Ziad Obermeyer and colleagues through our Research Grant Program. We'll hear more later about how their discovery of unintentional racial bias in a commonly used algorithm has brought about important changes in how health care needs are assessed.

1:36
Today, we're joined by a prestigious panel of experts to learn about the opportunities and challenges of using AI in health care, examine real-world use cases, and consider the ethical and legal implications of integrating AI into health care.

1:54
Before we hear from them, I'd like to thank NIHCM's President and CEO, Nancy Chockley and the NIHCM team, who helped to convene today's event.

2:03
You can find biographical information for our speakers along with today's agenda and copies of their slides on our website.

2:11
Closed captioning information for today's event can also be found on our website and in the webinar chat.

2:19
We also invite you to join the conversation on social media using the hashtag AITechNIHCM; that's N-I-H-C-M.

2:30
I am now pleased to introduce our first speaker, Dr. Michael Matheny. Michael is the Director of the Center for Improving the Public's Health through Informatics, and a Professor of Biomedical Informatics, Medicine, and Biostatistics at Vanderbilt University Medical Center.

2:49
Over the course of his career, Michael has produced academic work on the topics of natural language processing, risk prediction modeling, and signal detection for machine learning methods. We're so grateful to have him with us today to share his perspective and his research. Michael?

3:10
Thank you. I want to first thank Kathryn and NIHCM for the opportunity to come present today on a topic that's been near and dear to my heart for many years. So, let's dive right in. Next slide.

3:24
Next slide.

3:26
So, I don't have any conflicts of interest to disclose relative to any of these tools or technologies, and all of the funding I've received is from the FDA, NIH, DOD, or VA. Next slide.

3:39
To dive right in: I'm sure everyone has heard the saying that there's been unprecedented and tremendous growth in the volume and complexity of medical scientific data, and in the volume and complexity of patient data itself. But I'm a data-driven person, and, being an informatician, I went around trying to find a really good example of that for discussion.

4:01
One of the things I found really interesting was a systematic review looking at oncology clinical practice guidelines, essentially comparing the guidelines across different cancer types.

4:11
What they found was essentially exponential growth, both in the volume of documentation, data, and information required to deliver on the clinical practice guidelines, and in the amount of scientific knowledge and references required to support that care. Which is fascinating.

4:30
I thought it was a really nice, concrete example of the deluge of scientific information that clinical health care providers are responsible for, and that they increasingly have challenges addressing, even when you take something as straightforward as a clinical practice reminder. Everyone knows that all patients with coronary artery disease should be on a cholesterol medicine in the statin class, if possible.

4:56
You look across the nation and you can see a tremendous amount of practice variability in the patients actually receiving that, as shown on the right-hand side of this slide. And that holds true for many other conditions, such as ...

5:07
therapies in patients with COPD, and many others. So, when facing these challenges, we really need help in managing all of this information. Next slide.

5:20
And that's really why we're here today: artificial intelligence and machine learning technologies hold huge promise in being able to bridge some of these gaps, address some of these challenges, and help improve health care and health care delivery. And beyond the frame of reference that I deal with primarily in my career, there's work going on in ambient AI, which is attempting to draft provider notes without providers having to type anything,

5:46
using recorded information. There's basic discovery and science, drug discovery, image processing, clinical decision support, and even, in rare cases, autonomous AI applications that have been given approval by the FDA to operate without physician interpretation.

6:07
So with all of this excitement, promise, and opportunity comes a challenge of inflated expectations and too much hype. In fact, the artificial intelligence community has seen prior instances where there was excitement over a technology, everyone rushed out to use it, and then some really bad occurrences soured the public's perception of these tools, and their use declined drastically.

6:35
What I find fascinating here is that I've been watching the Gartner hype cycle represent this for the public over the years. In 2019, deep learning, machine learning, and natural language processing were all technologies of really high interest.

6:49
A lot of investment, a lot of excitement. Fast-forward just four years, and deep learning, machine learning, and natural language processing aren't even on the graph as individual subdisciplines. In fact, generative AI has taken all of the oxygen in the room for excitement and interest in AI and machine learning technologies, even though all of these older technologies are still operating and still have a lot of promise. Next slide.

7:22
So, I can't cover the waterfront. And I think each of the panelists will present their own view on things.

7:27
I'm just going to give a few examples of some of the challenges that we face when we're trying to implement AI in health care. The example that I like to use here is the pulse oximeter. There was a group during the pandemic that looked at critical care patients and evaluated pulse oximeter readings against the arterial blood gas, which is the reference standard for detecting the blood oxygen level. The way the algorithm works in these applications is that red light is shone through the finger, and the wavelengths that get through the finger are measured on the other side.

7:56
What they found among Black and African American patients is that the pulse oximeter was systematically misreading relative to the actual value.

8:05
It was reading higher than the actual oxygen levels in the blood, which gave false confidence that the patient had sufficient oxygenation when, in fact, a portion of them didn't, and that shortfall was significantly more common in that subpopulation. This was largely due to the fact that the training data used for these pulse oximeters came from a largely white or Caucasian population.
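To make the mechanism concrete, here is a minimal sketch of the "ratio of ratios" computation at the heart of pulse oximetry, with an illustrative linear calibration curve. The curve form is a commonly cited textbook approximation, and the coefficients and signal values are hypothetical; the point is that the calibration constants are fitted on a human cohort, so a skewed cohort bakes its skew into every reading.

```python
# Minimal sketch of pulse-oximetry estimation; illustrative, not a device model.
# A pulse oximeter compares pulsatile (AC) and baseline (DC) light absorption
# at red and infrared wavelengths, then maps the "ratio of ratios" R to an
# SpO2 percentage through an empirically fitted calibration curve.

def ratio_of_ratios(ac_red: float, dc_red: float, ac_ir: float, dc_ir: float) -> float:
    """Compute R from the red and infrared photoplethysmography signals."""
    return (ac_red / dc_red) / (ac_ir / dc_ir)

def spo2_estimate(r: float, a: float = 110.0, b: float = 25.0) -> float:
    """Map R to SpO2 with a linear calibration curve (textbook approximation).

    a and b are fitted on a human calibration cohort. If that cohort
    under-represents darker skin tones, systematic optical differences are
    absorbed into a and b, and the device is miscalibrated for the
    under-represented group, which is exactly the failure described above.
    """
    return a - b * r

# Hypothetical signal values, for illustration only.
r = ratio_of_ratios(ac_red=0.02, dc_red=1.0, ac_ir=0.03, dc_ir=1.0)
print(f"R = {r:.2f}, estimated SpO2 = {spo2_estimate(r):.1f}%")
```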

8:24
And it's really relevant to today's conversation, because just this week the FDA has convened an expert panel that's going through the data and information around pulse oximeters, seeking to provide guidance and recommendations on what can be done to make these devices more accurate across different skin tones.

8:42
Next slide.

8:45
Work that we've done also highlights not only challenges in training data, but also challenges in applying these applications and algorithms over time. We took a series of different machine learning algorithms and looked at changes over time in event rates for outcomes, in the associations between different features in the health care data, and in the case mix of the care delivered. We found that all of these algorithms were susceptible to drift and performance problems over time; even though deep learning and neural networks were the least affected, they were still affected, and this can lead to significant safety events in clinical practice.
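As a rough illustration of how that kind of drift can be surveilled, here is a minimal Python sketch, with synthetic data and an arbitrary alert threshold, neither drawn from the actual study: a model is trained once and frozen, then scored on successive time windows, and windows with degraded discrimination are flagged for review.

```python
# Minimal sketch of temporal drift surveillance for a frozen risk model.
# Synthetic data and an arbitrary alert threshold; illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

def make_cohort(n, drift=0.0):
    """Simulate one time window. `drift` gradually rewires the true
    feature-outcome associations, mimicking changing care patterns."""
    x = rng.normal(size=(n, 3))
    logits = (1.5 - drift) * x[:, 0] - 1.0 * x[:, 1] + drift * x[:, 2]
    y = rng.random(n) < 1 / (1 + np.exp(-logits))
    return x, y.astype(int)

# Train once on an initial window, then freeze the model.
x0, y0 = make_cohort(5000)
model = LogisticRegression().fit(x0, y0)

# Surveil subsequent windows and flag degraded discrimination.
for quarter, drift in enumerate([0.0, 0.5, 1.0, 1.5], start=1):
    x, y = make_cohort(2000, drift)
    auc = roc_auc_score(y, model.predict_proba(x)[:, 1])
    flag = "  <-- review / recalibrate" if auc < 0.80 else ""
    print(f"quarter {quarter}: AUC = {auc:.3f}{flag}")
```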

9:23
Next slide.

9:26
And I can't do a presentation without generative AI. I think it's really important to note that large language models are not immune to any of these issues. In fact, it's known that if you prompt these models and ask about something after the end of their training data, they have challenges inferring and providing recommendations. It's also true, if you look on the right-hand side of the slide, that a group looked at GPT-3.5 and GPT-4 across 8 different tasks,

9:58
basically evaluating the accuracy on those tasks compared to manual annotation, and found that across just three months of training updates to these models, the accuracy on different tasks varied widely. You can imagine, when you're trying to deploy this in clinical production systems, that ensuring safety is going to be even more challenging for health care systems, and for the FDA to regulate, in the setting of continuous updating of these models.

10:24
And lastly, I just want to highlight that these generative AI tools have sort of rushed onto the scene, but really tens of thousands of human hours were spent doing manual labeling and reinforcement learning to remove inappropriate bias and derogatory responses; that work is what has enabled the gold rush that is generative AI. Next slide.

10:46
So, work that we began back in 2018 with the National Academy, which I co-led with the others on the slide, sought to do an environmental scan of health care and artificial intelligence

10:57
and provide some general recommendations and guidance around opportunities, even though the field has moved forward at light speed since 2019, when this was published. One of my favorite parts was that a group of us got together and made recommendations around guidelines and best practices for implementing AI in health care. We drew from multiple different disciplines: learning health systems, implementation science, user-centered design, and human-computer interaction. Later work with Dr. Radwanska Mullen overlaid the lens of racial justice, health equity, and ethical AI onto the framework. I'm not going to go into too much detail, for time, other than to say that I think the most important concepts are these: it's really critical to understand what you want to change, or what needs to be fixed.

11:44
Then evaluate that in the context of the workflow, the stakeholders, the end users that are going to be affected, the patients, and the caregivers. It may be that AI is not the right tool for the job. I think one of the challenges in generative AI right now is that we're using the tool to seek use cases for the tool, when the better practice is to identify the problem you need to solve and then apply the technologies needed to make health care safer and more effective. Given that, if you do choose AI or machine learning technologies, you adapt them, integrate them into the system, deploy them into production practice, and monitor them to make sure they're safe.

12:22
And then you do ongoing monitoring to prevent performance decay, also with an eye to when the tool needs to come out of service, when the prediction is no longer relevant or the performance is degrading.

12:35
Next slide, please.

12:38
So, I'm just briefly going to go through two pieces of work that we've done in the past year, with a lens toward relevance for this panel, for AI, and for the implementation life cycle.

12:48
This was some work that a group of us at Dartmouth and Vanderbilt did looking at the VA, essentially using 20 cardiac catheterization labs in the VA

12:58
as the research sites. We randomized two interventions: intensive quality improvement coaching, and an informatics visualization and risk adjustment dashboard that surveilled the cath labs, provided machine learning based assessments of risk, and gave sites the ability to view their data.

13:17
Some sites received one, some sites received the other, and some sites received both. During the 18 months of intervention, the sites weren't required to make any changes whatsoever; they were empowered to do whatever was right for their practice and their workflow. At the end, we saw an odds ratio of 0.54 for the incidence of acute kidney injury following catheterization, a significant reduction after implementation of these tools.

13:41
And I think, from an AI perspective, one of the coolest things about this was that, a priori, Dr. Davis had developed a set of AI algorithms to watch the other AI algorithms that were conducting the risk adjustment: basically, monitor them for performance degradation and update them to maintain performance. We started this trial before the pandemic and went through the pandemic, and the only way we were able to successfully complete it was because of the algorithms watching the other algorithms and maintaining the performance of the system as the trial went on. So I was very grateful that we implemented that a priori at the beginning of the trial. Next slide.
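To give a flavor of what "algorithms watching algorithms" can mean in practice, here is a minimal sketch of one common ingredient: calibration surveillance with an automated intercept update (sometimes called recalibration-in-the-large). The data, trigger threshold, and update method are illustrative assumptions, not the trial's actual machinery.

```python
# Minimal sketch of calibration surveillance with automated intercept updates.
# Synthetic data and an arbitrary trigger threshold; illustration only.
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def observe_window(n, true_intercept):
    """Simulate one window of model scores and outcomes. The event rate
    drifts over time via `true_intercept`, mimicking changing case mix."""
    score = rng.normal(size=n)                  # frozen model output, logit scale
    y = rng.random(n) < sigmoid(score + true_intercept)
    return score, y.astype(int)

calib_intercept = -1.0                          # calibration learned at deployment
for window, true_int in enumerate([-1.0, -0.6, -0.2, 0.2], start=1):
    score, y = observe_window(4000, true_int)
    expected = sigmoid(score + calib_intercept).mean()
    print(f"window {window}: observed rate {y.mean():.3f}, expected {expected:.3f}")
    if abs(y.mean() - expected) > 0.02:         # hypothetical trigger
        # Refit only the intercept on this window, by bisection on the mean
        # residual (the mean predicted rate is monotone in the intercept).
        lo, hi = -5.0, 5.0
        for _ in range(50):
            mid = (lo + hi) / 2
            if sigmoid(score + mid).mean() < y.mean():
                lo = mid
            else:
                hi = mid
        calib_intercept = (lo + hi) / 2
        print(f"  recalibrated intercept -> {calib_intercept:.2f}")
```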

14:24
Lastly, shifting from a population-based frame to more of a provider-patient dyad frame:

14:31
I have a lot of interest in precision clinical decision support, in which context-relevant information is delivered about the patient, beyond just regular clinical reminders. This was some work that we did adapting natural language processing tools, older technologies, to basically summarize care for people with coronary artery disease, looking at the need for a statin cholesterol medicine if they weren't on that medicine. It basically went in and looked at all the prior trials of that medicine the patient had been given, any adverse events documented in the chart, and why they had coronary artery disease, and it gave the providers information about where to find all of that, along with recommendations on what to do.

15:09
We injected that directly into the electronic health record for the primary care providers. Essentially, we tried to do that, per their wishes, right before the primary care clinical encounter, in order to fit appropriately into the workflow. What we saw was that, when that was done properly, one in ten reminders resulted in a statin renewal or start.

15:31
As you can see at the bottom, it was much preferred if it was done right before the encounter. There were situations where we had to give that information in between encounters, and because that wasn't aligned with the workflow, there were challenges therein. The other thing I would say in conclusion is that I think large language models have the capacity to conduct information synthesis and summarize care in a way that will give this sort of work, which took years to put together, much more scalability and deployability. So, next slide.

16:01
So, I thank you very much for your time.

16:03
Next slide. I'm happy to answer questions at the end, and I'll pass it to the next panelist. Thank you.

16:11
Thank you so much, Michael, for sharing some of your research on the opportunities and challenges associated with deploying AI in clinical settings. Next, we will hear from Svetlana Bender. Svetlana serves as the Vice President of AI and Behavioral Science for GuideWell

16:29
and Florida Blue, where she leads generative AI strategy, vision, and responsible adoption throughout the organization and its partnerships, in order to improve the well-being and decision making of their customers. We're so honored to have her with us today to hear about their approach. Lana?

16:49
Thank you so much for having me, and thank you, everyone for joining us today.

16:54
I'll start by saying that I'm sure this is not news to you:

16:57
there are numerous headlines and research pieces about AI, and whether you love it or hate it, I'll say that probably all of you are proficient users of AI.

17:06
Even if you don't realize it, just think about it.

17:09
Whether it's unlocking your phone with face recognition, or leveraging Google Maps to find the best route to get you somewhere quickly.

17:19
Or even enjoying the predictive nature of Netflix, which recommends shows based on your preferences.

17:25
We've come a long way in the last 70 years and have become very good at reaping the benefits of AI. And although it's been around for decades, it wasn't until recently, especially, as Michael mentioned, with the latest developments in generative AI tools such as ChatGPT, that it captured the public's imagination.

17:43
As Michael said, it consumed the oxygen in the room and truly got the attention of a lot of people.

17:49
But what about AI within the healthcare space?

17:52
Let's jump to the next slide. We saw in Michael's presentation that AI has significant implications for every stakeholder within the health care ecosystem, from hospitals to physician groups to payers, pharmaceutical companies, and so on, and we've already seen excellent applications of AI in health care.

18:09
In the medical imaging space, for example, AI can assist clinicians in radiology by scanning MRIs and lab results for abnormalities, which in turn creates more capacity in their day. Or take the drug discovery space:

18:23
I mean, imagine if we could accelerate cancer drug discovery from decades to months; we already have pharmaceutical companies looking at just that.

18:32
So there's a lot of opportunity. And think about the impact this technology can have on costs: as you can see on the slide, it's estimated that we could see anywhere between $200 billion and $360 billion in savings annually.

18:48
Unfortunately, despite the enormous benefits and excitement around AI, adoption is very low, and that's a huge problem.

18:55
In fact, less than 5% of healthcare organizations are using AI as of 2022.

19:02
And that number is really lagging behind a lot of other industries. So, today, there's a few things I'd like to cover.

19:08
First, I'd like to focus on the adoption challenges that almost every company will need to address if it's looking to leverage this technology. Second, I'll provide you with techniques that you can use to overcome these challenges.

19:20
And third, I'll share some of the applications that we've been leveraging at GuideWell, with a focus on improving our member outcomes and provider experiences, and really helping us drive better health for our members.

19:33
So, hopefully, by the end of this session, you'll be able to leverage some of this content and accelerate adoption at your company, and do it safely, which is most important.

19:43
So why are we seeing such low adoption rates? Let's go to the next slide.

19:49
And I'm sure you can come up with a whole laundry list of reasons, but for many companies, a key barrier is human behavior.

19:58
And this is where behavioral science comes in. In case some of you aren't familiar with it, it's also known as behavioral economics, and it studies the cognitive, emotional, and motivational factors that influence people's decisions and behavior. If we can understand the root cause of a certain behavior, we can then design well-informed strategies to solve those challenges.

20:19
And I can tell you, as you can see on the slide, that fear drives a lot of our behavior: fear of change, fear of the unknown, and, rationally so, fear of algorithms. Each one of these hinders AI adoption, but we can fix that. So let's look at these in more detail. Let's go to the next slide.

20:38
Fear of change causes us to have a very strong preference for the status quo, or default bias. On this classic slide, I'm showing you how a group of researchers looked at organ donation rates across different countries. As you can see, the proportion of individuals who enroll to be an organ donor varies substantially between the countries on the left and the countries on the right.

21:01
Of course, for a lot of this to work, infrastructure needs to be in place. But the difference is that in the countries on the left-hand side, individuals have to opt in to be enrolled as an organ donor.

21:15
In the countries on the right, they are automatically enrolled to be an organ donor and have a choice to opt out. So sticking to the status quo is the easy option, even when it comes to decisions as important as whether or not to be an organ donor.

21:28
The law of inertia, in other words, is very much true when it comes to change, and so it is with AI adoption. Some of the solutions you can think about when addressing the status quo and people's preference to avoid change are: one, make sure you have a clear vision and strategy, and ensure that you are engaging all the relevant leaders and stakeholders from different parts of the organization.

21:52
This includes your legal team, cybersecurity, marketing, and other business functions.

21:57
Basically, you need this group to prioritize and identify high-value use cases where you can apply AI; it also enables you to coordinate and implement use cases safely across your organization.

22:13
The second strategy is to make sure you have an effective communication strategy, to ensure alignment across the company and make sure everybody understands what you're trying to achieve and why you're doing it.

22:25
Even in your communications, you can leverage some of these behavioral principles. For example, rather than saying "we must adopt," you could say "what happens if we don't adopt?" We know that people are more likely to engage when they're missing out on something or losing something. So how you frame your communication and your information is important as well.

22:47
And lastly, we know that education does not necessarily lead to action, so you need to make sure you provide tools to your employees that they can play with and experiment with. In other words, make the use of AI a default when possible.

23:04
And if we go to the next slide, let's look at the second fear, which is the fear of the unknown. Here's another example from behavioral economics.

23:12
If I were to ask you, would you rather take $500 guaranteed, or would you rather take a gamble: a 50% chance of getting a thousand dollars and a 50% chance of getting zero?

23:24
Economically speaking, they are equal, right? The expected value is the same. However, because of our preference for certainty, the majority of us will go for the certain option of $500.
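Just to spell out the arithmetic behind that equality:

$$0.5 \times \$1{,}000 + 0.5 \times \$0 = \$500,$$

so the gamble and the sure thing have the same expected value, and the widespread preference for the guaranteed $500 is risk aversion rather than a calculation error.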

23:41
A similar situation happens when it comes to AI adoption. There are a lot of uncertainties. Will it take your job? Is it the right investment? Is it too risky? It naturally causes us to freeze, and may lead a lot of individuals to say, "Hold on, I'm just going to wait and see," and miss out on all the benefits and opportunities.

23:54
So again, as an organization, it's important to make sure you understand what your employees are feeling. Potentially do a pulse survey to understand where they are and what their perceptions are.

24:04
Then you can create targeted education to address some of those barriers. For example, we have seen that fear of AI as job displacement is a big barrier. So rather than letting that fester, you can educate people that AI will augment their roles,

24:22
not necessarily replace them, and so forth. And last but not least, an important thing is to make sure you upskill employees, to ensure not just long-term success, but also to give them confidence and familiarity with this technology, so that they're not afraid of it but instead actually welcome it and want to use it.

24:44
Let's move on to the next slide and look at the last fear.

24:48
And that's the fear of algorithms, which Michael touched on previously as well. It's a legitimate concern, related to potential biases or machine-based decisions that could be wrong or discriminatory. So it's not surprising that the intuitive reaction is to perceive AI as scary.

25:06
Of course, that lack of trust hinders the adoption of AI. But in reality, if you think about it, an AI, unlike humans, does not have an innate predisposition for personal experiences, opinions, and beliefs to influence its decisions. AI is a direct product of our creation, right? So bias can come from our training data, which is human generated.

25:31
It could be due to poor algorithm building, which is, again, our own doing, or poor testing and monitoring practices, which humans again have control over.

25:43
So for AI to be without bias, we need to ensure that we're not introducing bias and creating biased algorithms. That's a super important element, and to make sure it happens, responsible AI adoption is key.

26:00
For that to happen, you need to have governance, you need to have controls in place, and you need things like transparency and clarity for stakeholders on when, where, and how AI is used. You need to make sure you introduce fairness; in other words, have processes in place to ensure AI outputs decisions that are free from bias and discrimination.

26:21
Another feature to make sure of, especially for us as a company, is human oversight: make sure you include a human in the loop at every step so that everything is done properly. You also have to figure out how to address risk management as well as privacy, and make sure you have robust AI systems that behave reliably.

26:43
Those are all very important. Let's jump to the next slide. Now I'll turn to sharing a little bit about our portfolio at GuideWell. We've actually been successfully employing AI tools for quite some time now.

26:59
And really, the intent is to improve our member and provider experiences and help drive better health outcomes. Within the predictive analytics space, as Michael shared, even though generative AI is dominating the conversation right now, there are still a lot of other AI capabilities that we can leverage. So we've been building and using machine learning models to help us better identify members who could benefit from tailored education and clinical care, and to ensure that we are proactive with our engagement efforts.

27:29
For example, we have a preventable emergency room model that helps us identify members who may benefit from education and resources on where to seek care when and if they need it.

27:41
For example, if somebody goes to the ER every time they have a skin condition, then we may find it valuable for them to know that they can resolve that via a virtual care solution, which means quicker, better, and more personalized care for our members. Another example is prior authorization approval. In 2022, Florida Blue actually became the first US payer to automate prior authorization approvals through AI-powered clinical reviews.

28:11
And in terms of results, we've been able to streamline the reviews of 75% of prior authorization requests with AI, which cuts the previously lengthy turnaround time on those decisions. This allows our members to get to the next step in their care journey much quicker.

28:31
And at the same time, our clinical staff spends less time on admin functions and more time on patient care. And last but not least, generative AI.

28:43
We obviously know, or have heard many times, that this is supposedly the most transformational technology we've seen. What we've done at GuideWell is we've already stood up an internal, ChatGPT-like capability and looked for ways to use it internally.

28:59
We are already seeing excellent benefits from using this technology. The ultimate goal for us is to leverage the latest tech to ensure we create a seamless and personalized experience for our members and enhance care and health equity outcomes, while optimizing cost and operational efficiency.

29:18
And doing it safely. So, if we move to the next slide: in summary, AI has the potential to transform industries, and if responsible AI is important for your company, here's a summary of items to think through to ensure success.

29:33
First of all, structure: as I mentioned before, it's crucial that you convene a cross-functional group of company leaders and engage the right stakeholders. Training: focusing on education, building technical capabilities, and upskilling your workforce will not only ensure long-term success but also make your employees comfortable with the technology; also provide clear guidance on how to use that technology.

30:00
Ethics: again, ensure ethical and responsible AI adoption and use, so make sure you develop guidelines and policies and address concerns such as privacy, security, and accuracy in the models.

30:13
Technology: of course, assessing your company's technical capability is going to be very important if you're leveraging the latest and greatest, and for some companies it might be worthwhile to partner with the right vendors to accelerate execution.

30:29
And if we go to the next slide, I'll end by saying that, for us as a company, by taking advantage of AI responsibly, we aim to improve access to care as well as health equity, make health care more affordable, and reduce the administrative workload for clinicians. Hopefully you can leverage some of this content to accelerate adoption in your company and, most importantly, do it safely. Thank you so much.

30:56
Thank you so much, Svetlana, for helping us understand how AI is being used to improve member and provider experiences and help drive better health outcomes. Next, we're going to move into our discussion on the ethical and legal challenges associated with integrating AI into health care, and we will hear from Glenn Cohen. Glenn is the Faculty Director of the Petrie-Flom Center for Health Law Policy, Biotechnology, and Bioethics and a Deputy Dean of Harvard Law School. Glenn is one of the world's leading experts on the intersection of bioethics and the law, as well as health law.

31:37
His current work focuses on the areas of medical AI, health information technologies, and research ethics. We're so grateful to have Glenn with us today.
31:47
Glenn: Thank you so much. Hopefully everybody can hear me. Next slide, please. I often say that my job is to bring the doom and gloom, because I'm dual-trained as a lawyer and as an ethicist. Next slide, please.

32:03
These are just some of my disclosures and conflicts. Next slide, please. I want to start by giving you an overview of how I think about the way legal and ethical issues creep into algorithm design,

32:17
particularly for machine learning. I often think about this in terms of the phases necessary for building one of these models, and the idea that there are issues that arise at each stage of the life cycle.

32:28
So, the first phase: acquiring the data. Among the issues: where does the data come from? Are you getting consent from the patients whose data you're using? How are you stripping identifiers, and is that sufficient?

32:40
How representative is the data? Is the data diverse, in racial and other ways, so as to serve the needs and contexts in which it will be deployed? What kinds of governance opportunities are you offering patients?

32:52
In the second phase, building and validating the model: how do you know when a model works well enough that it can be used on real patients? What standards and validation should be put in place?

33:02
Who's going to be doing the validating, and how do we know we should trust them? How do you balance intellectual property protection, in the form of trade secrecy, against the transparency needed to make sure these things are actually working?

33:14
In the third phase, testing the model in real-world settings:

33:17
What, if anything, will patients be told about the fact that AI is being used to partially direct their care or reimbursement? Do you need a separate informed consent for this? Can patients opt out? How do we handle liability? I'll come back to both of those in a moment. Which regulator, or combination of regulators, is best fitted to oversee this area?

33:38
Finally, the last phase: broad dissemination. It's great, you've got a model that actually works, it's helping patients. And now you have a question about equitable use. Do all the patients who contributed data to building the model get the benefit?

33:53
How do you make it commercially viable while also ensuring that there's equitable access? Next slide, please.

34:00
So I mentioned this already, in terms of liability.

34:05
I want to emphasize at the outset that there have been shockingly few liability cases involving medical AI, and most of the ones we've actually seen have been about surgical robots, where, arguably, it's not really the AI that's causing the distinctive issues.

34:19
Now, it's possible there's a large gap between what we observe in the reported cases and what's actually going on; maybe there's a lot of settlement. But in general, my own takeaway, and what I would leave you with, is that people probably overestimate the importance of liability issues in this space, given the data. Still, we should try to understand it.

34:37
So this is a paper I did with Gerke and Price in JAMA. Essentially, we work through an AI recommendation to a physician, which the physician either follows or rejects, and the idea that it either harms or does not help the patient as much as what the physician would have recommended in the absence of AI assistance. The question we're interested in is: who's liable when there's an adverse event?

35:01
Even when medical AI works well, it's not infallible; sometimes patients will be injured as a result. In general, in the US, to avoid medical malpractice liability, a physician has to provide care at the level of a competent physician within the same specialty, taking into account available resources. That's what we call the standard of care.

35:22
What happens when you introduce an AI algorithmic recommendation as part of that process? The figure we have here represents possible outcomes for a very simple toy case. We're imagining a binary setting: an AI recommends the drug dosage for a patient with ovarian cancer.

35:40
If you assume the standard of care would be to administer 15 milligrams per kilogram every three weeks of the chemotherapeutic agent, but the AI, for a particular reason and for this particular patient, recommends a larger dose, we examine in this paper what could happen in terms of liability depending on what happens next. The takeaway I really want you to have is that last column with the color coding; it's stop-sign color coding, so red, yellow, green.

36:07
Under current law, a physician faces liability only when she or he does not follow the standard of care and an injury results. Those are the red boxes, in effect. What's important about that analysis is the implication for physicians: as long as they follow the standard of care, they're going to be OK.

36:29
And that's the safest way to use medical AI from a liability perspective:

36:34
as a confirmatory tool to support existing decision-making processes, rather than as a way to improve care. "I was going to do it anyway; the AI told me to do it, and I'm very happy about that." That might be comfortable for physicians seeking to avoid liability, but it's terrible news for medicine, because it's precisely the cases where the standard of care is inappropriate, and the AI tells us so, where there's actually an advantage to using the AI and the possibility that we'll actually do better.
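Since the figure itself isn't reproduced here, a small enumeration captures the simplified current-law rule the paper works with: liability attaches only when the care actually given departs from the standard of care and an injury results. This is an illustrative reduction of the matrix, not the paper's full analysis.

```python
# Illustrative reduction of the liability matrix for the binary toy case:
# the physician either follows the AI or gives the alternative dose, and
# under the simplified rule is liable only if the care given is nonstandard
# AND an injury results.
from itertools import product

for ai_rec, follows_ai, injury in product(
        ["standard", "nonstandard"],      # what the AI recommends
        [True, False],                    # physician follows the AI?
        [True, False]):                   # does an injury result?
    if follows_ai:
        care_given = ai_rec
    else:  # binary case: rejecting one dose means giving the other
        care_given = "nonstandard" if ai_rec == "standard" else "standard"
    liable = care_given == "nonstandard" and injury
    print(f"AI={ai_rec:11s} follows={str(follows_ai):5s} "
          f"injury={str(injury):5s} -> care={care_given:11s} liable={liable}")
```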

37:04
So that's a real problem with the current design of liability. There may come a time when it actually becomes liability-risking not to use AI, because AI becomes the standard of care, but we're not there yet. I'm happy to talk more during the Q&A about hospital liability and developer liability. But now, if I can move on to the next slide, please.

37:24
Now, this question is about informed consent. Do patients have a right to know when AI is involved in their care, and under what circumstances and in what detail? Suppose we use a case study from reproductive medicine. You're a patient undergoing in vitro fertilization; you have multiple embryos, and you're deciding which to implant. Your physician recommends a particular embryo and gives you some reasons to do so.

37:46
It turns out, though, that your physician doesn't disclose that they arrived at this decision in part based on a machine learning system, which recommends this embryo based on image analysis, your personal characteristics, and other factors. Is it a problem that your physician hasn't told you that was the basis, in part, for the decision?

38:05
What if the physician actually chooses to overrule, quote unquote, an AI system's recommendation, and fails to tell you that? Have they violated your rights of legal or ethical informed consent?

38:17
Come to think of it: when you last saw your own physician, did you know whether an AI or ML system was used and involved in the care decision?

38:25
I have on this slide a paper we did just a couple of months ago on mental health chatbots for adolescents, where some players have used generative AI to help generate responses, or at least suggest them. There is actually a well-known case in Belgium where an AI chatbot encouraged a man to take his own life, and, unfortunately and tragically, he did.

38:44
Is that the kind of thing patients have a right to know: that AI is involved when they're dealing with a chatbot?

38:50
Is this like failing to tell a patient that a substitute surgeon is scrubbing in, which our case law tells us, is a violation of informed consent?

38:59
Or is it more like failing to tell your patients that the reason you recommended a particular course of action was this black box up here, your brain, which includes your memories, a case you saw in psych residency, a lecture you half remember from medical school, and the last 10 patients you saw who were somewhat similar, all influencing your thought process? What is the right way to think about how AI works here and our duty to disclose? And by the way, that cute little seal on the slide is Paro, a therapeutic robot that is sometimes used with patients with declining cognition due to age. He's cute and cuddly, and patients love him and treat him like a pet. But he's also a little spy, because he reports back all sorts of information to physicians.

39:41
How do we handle the kinds of cases involving Paro, which involve vulnerable populations that may not understand, even if we tell them, the limits of what we're doing here?

39:50
Next slide, please. So where does the data to train models come from? A lot of it is from EHRs. Is it a problem that we don't explicitly ask patients to share their data? This is not just a theoretically interesting question. The University of Chicago and Google were recently sued in a class action alleging that they released patient EHR data that was otherwise de-identified but retained time stamps, allowing Google, with its ability to geolocate patients based on other data it holds, to actually re-identify patients, in violation of the law.

40:22
There are many such lawsuits going on right now; I'm actually an expert witness in a couple of them. Next slide, please. The problem is that the US approach to health privacy is very sectoral, and HIPAA demonstrates one of the problems with that approach. HIPAA attaches its data protections to traditional health care relationships and environments, but the reality of the 21st century is that HIPAA-covered data forms a small and diminishing share of the health information stored and traded in cyberspace, as we tried to show with this figure.

40:55
Most of this stuff is below the waterline. And to put the point more forcefully:

41:00
In the future, the data that best predicts our health may not be health care data, but instead data about our social media presence, online shopping, et cetera. In one particularly fascinating illustration, a study used machine learning to try to diagnose depression from Instagram posts, and showed that Instagram posting habits, filter choice, and changes in those were actually a very good predictor of depression, much better than many people would have anticipated.

41:27
While most of the data privacy discourse thus far has focused on patient privacy, one thing that's very important to think about is the way this data reveals a lot about physicians: how well they are doing their jobs, whether we are reimbursing them appropriately, whether there are problems with their practice style. So it's not just patients who may feel surveilled, but also our physicians. Next slide, please.

41:49
So I know the Foundation is going to have an entirely different session on equity and bias, so I'll just whet your appetite for it; we can chat more during Q&A.

41:57
What I want to suggest is that the problem is actually much worse than you think. Most people think, OK, the dataset is not very representative of the population. I often joke that, as a mid-forties white guy in Boston, I'm dead center in most datasets used to train medical AI; not so for women over 60 who are African American and living in the rural South. So often people think the problem is that the app has not been trained on the right dataset. That is a major problem, but it's one that's easy to wrap your head around: hard to fix, but easy to conceptualize.

It's the problems we can't see that are much trickier. Ziad Obermeyer, who was mentioned before, is one of the recipients of the Foundation's funding. He has this great paper that I have over here on the slide, one of the best and most famous examinations of this. What they show is the following: they take an algorithm that's widely in use, meant to find the patients most in need of follow-up care at discharge, and they show it favored white patients over Black ones at the same level of sickness. Why? Not because the dataset wasn't representative.

43:00
It's because there are differences in care delivery and care-seeking behavior between these populations; white patients are, to simplify, just much more expensive than Black patients. The algorithm was told to use health costs as a proxy for health state, which is a decent ex ante design choice, but one that results in favoring white patients over similarly sick Black patients. This problem gets worse when you go beyond race, sex, and age, which we can think about and try to train and test for, to all the other sources of variation in populations.
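As a rough illustration of this label-choice problem, here is a small synthetic simulation, not the paper's data or model: two groups are equally sick by construction, but one incurs systematically lower costs, and a referral rule driven by predicted cost then under-selects that group, with only its very sickest members making the cut.

```python
# Minimal synthetic sketch of label-choice bias (cost as a proxy for need).
# Not the Obermeyer et al. data or model; illustration only.
import numpy as np

rng = np.random.default_rng(2)
n = 100_000

group = rng.integers(0, 2, n)                    # two patient groups
need = rng.gamma(shape=2.0, scale=1.0, size=n)   # true health need, identical by group

# Costs track need, but group 1 incurs ~30% lower costs at the same need
# (differences in access and care-seeking), plus noise.
cost = need * np.where(group == 1, 0.7, 1.0) * rng.lognormal(0.0, 0.2, n)

# A model trained to predict cost ranks patients by expected cost; here we
# use cost itself as the score of an ideal cost predictor.
referred = cost >= np.quantile(cost, 0.97)       # refer the top 3% for extra care

for g in (0, 1):
    in_g = group == g
    rate = referred[in_g].mean()
    sick = need[in_g & referred].mean()
    print(f"group {g}: referral rate = {rate:.2%}, "
          f"mean true need among referred = {sick:.2f}")
# Despite identical need distributions, group 1 is referred less often, and
# its referred members are sicker, i.e. they had to be sicker to qualify.
```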

43:33
But as I often say, it is true that medical AI often makes mistakes, cannot explain how it reaches conclusions, and is biased; the same is true of your physician. So the real question is, how much of an improvement can we make on these scores with medical AI, and how do the errors distribute across the populations we care about?

43:53
Next slide. So this is my last substantive slide, and I'll stop here. It's just to say that, as bad as I've made the problem sound, it's actually even worse than you expect. Why? Because of what we call, in this paper in Science and elsewhere, the update problem.

44:09
Even if, as a hospital system or a regulator, you check all of this out and you're satisfied with all the answers, that's true only for the quote unquote factory settings. We actually want algorithms to adapt and learn out in the world, on the job, in part because every context, every hospital system, every patient population is quite different. We wouldn't want one that's trained only on patients who look like X to be used in a place where patients look like Y. But then you have the following problem as a hospital system or a regulator:

44:40
When does it change so much that you need to re-review, and how often do you do that? FDA, which has really been quite innovative in this space, is currently trying to get off the ground an approach called predetermined change control plans. We can talk more about that in the Q&A; it's very interesting, but I'm not entirely sure it's going to be an adequate solution, even though it's the best one we have.

45:01
Next slide. And I'll just close by thanking some of the people who funded this. Thank you all for listening to me; I'm looking forward to the Q&A. Thank you very much.

Great, thank you so much, Glenn, for sharing your work and helping us understand the complexities involved in liability concerns and bias. We'd like to use our remaining time to engage in Q&A, so I'm going to ask all of our panelists to come back on camera and off mute.

45:33
Our audience can continue to submit questions in the console.

45:41
Glenn, I wanted to start with a question for you, and also Michael. Michael had mentioned medical devices that use AI, and I know there's a lot of discussion in the regulatory space about what the FDA's ability is to regulate how medicine touches AI. It's one of the hot topics of the day.

46:10
Maybe I'll start, and you can jump in, Michael. I have had the pleasure of working on stuff like this with the National Academies. Here's what I'll say: if you're hoping FDA will save you, put that hope away, not because it's not an expert agency, but because the ambit of what it actually reviews is a small portion of the medical AI out there.

46:28
And that's because of the way Congress acted in the 21st Century Cures Act to pare back a bunch of FDA's authority, because of FDA's own decisions and interpretations of that authority, and also because of the discretion it has set for itself in what it's chosen to review and not review and how it has structured things. So it's typically things that are built into devices that are more likely to be reviewed by FDA. But the vast majority of medical AI being run, and also all the stuff being run for administrative or insurance purposes and

46:56
the like, FDA's not looking at that. So my own view is that we actually need to have a lot more mini-FDAs at institutions, and for them to be public and transparent about the way they're thinking about this, because the agency, as good as it is, is just not set up currently to do the vast majority of this kind of review.

47:18
Yeah, thank you. I think that covers most of the waterfront. The only thing I would add is that there are some other players stepping in: Health and Human Services and the Office of the National Coordinator are stepping into a portion of the vacuum to try to give guidance around AI that's integrated specifically with electronic health records and clinical decision support.

47:40
So that's one area. You also see the consumer protection agencies stepping in on some of the entertainment, you know, the non-medical-use pieces, although it's a little bit unclear to me exactly how that is playing out. So I agree that there are gaps.

47:59
I feel like the change process the FDA is proposing is a step in the right direction, to help everyone concretely think through, when you're implementing an algorithm that's either continuously updating or updating in increments, how to make sure it's maintained properly.


48:23
Glenn, I know we got a little into generative AI already, and I know you have some more on that. We had a question about some of the administrative uses of generative AI and how they might carry lower risk, but what should be disclosed about how such models are used?

48:43
Yeah, so I think the administrative bucket is super interesting, because if you ask 10 people what falls in the bucket and what doesn't, you might get 10 different answers. One I find very interesting involves MyChart right now. As people know, because of changes Congress made, we're all getting our lab test results almost immediately, before physicians have seen them. Patients are now constantly e-mailing their physicians about them over MyChart, and there are some hospital systems that are experimenting with having AI do the first draft of the response.

49:12
That seems relatively low risk: it has a human in the loop, and it looks administrative. But I will say, I can imagine cases where a patient gets a response that the physician has only looked over lightly,

49:26
and the response causes the patient to engage in some kind of behavior that's really problematic, or to not engage in behavior they should. To me, that's an interesting test case on disclosure. I think the vast majority of patients are unaware that some of these messages are at least initially drafted by an AI, and one question is: why aren't we telling them? Should there be a disclaimer on each one that AI generated this message, the way there are discussions about having disclaimers on AI-generated images? So I think that's a very interesting use case.

50:00
Lana, we had a question for you: could you speak to some of the challenges around the data you're using for predictive analytics, and just general issues with data quality?

50:20
Yeah, absolutely, and that's a super important question. Everything that we do within AI depends on data quality, so we have very solid data governance and quality controls in place; that's a number-one priority for us as a company.

50:36
We ensure that we have the right sample groups and the right diversity in the data. Interestingly, we're even leveraging, or exploring, use cases where we can use generative AI to extract social determinants of health information from clinical notes.

50:56
As you can imagine, there's research indicating that more than 92% of social determinants of health information can be extracted from clinical notes, relative to traditional methods of just looking at the codes within the structured data,

51:13
that is, ICD-10 codes. So that's another part of it: how can we enhance our data? How can we extract as much as possible of what we know about our members?

51:22
And again, if we think about health equity, that's the sort of data you might see in the clinical notes, or in provider and member interactions, right, the old text data. Now, with this latest technology, we can do much more with it and understand our members much better.
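As a flavor of what an extraction step like that can look like, here is a minimal sketch using the OpenAI Python client. The model name, label set, and prompt are hypothetical placeholders, and this is a generic illustration, not GuideWell's actual pipeline; in a real setting, notes contain PHI and would have to stay inside approved infrastructure, as described later in this discussion.

```python
# Minimal sketch of LLM-based SDOH extraction from a clinical note.
# Hypothetical prompt, labels, and model choice; not a production pipeline.
# Assumes the `openai` package and an OPENAI_API_KEY in the environment.
# Real clinical notes contain PHI and must not be sent to external services.
import json
from openai import OpenAI

client = OpenAI()

SDOH_LABELS = ["housing_instability", "food_insecurity",
               "transportation_barrier", "social_isolation"]

def extract_sdoh(note_text: str) -> dict:
    """Ask the model to flag SDOH factors mentioned in a note, as JSON."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",              # placeholder model name
        messages=[
            {"role": "system",
             "content": ("You extract social determinants of health from "
                         f"clinical notes. For each label in {SDOH_LABELS}, "
                         "return true or false plus the supporting quote. "
                         "Reply with JSON only.")},
            {"role": "user", "content": note_text},
        ],
        response_format={"type": "json_object"},
    )
    return json.loads(response.choices[0].message.content)

note = ("Patient reports skipping meals at the end of the month and "
        "missed two appointments due to lack of bus fare.")
print(extract_sdoh(note))
```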

51:37
So it's not only that we're thinking about quality and where the data comes from; we're always thinking about how we can enhance the datasets we're using to create enhanced predictive modeling, as well as leveraging AI to enhance the other AI tools that we're using.

51:53
Excellent question.

51:55
Great.

51:57
We had another question for you: could you talk a little bit more about the employee-facing generative AI tool that you mentioned, and whether it's used more on the administrative side or the clinical side? Yeah, absolutely, another great question. So we managed to get the right skill set and get the tool in house.

52:23
In other words, we have a tool that's within our company walls, which means it's secure and the data never leaves our company walls; that was a huge priority for us, given that we're a health insurer.

52:36
We want to be very cautious about the security of our members' data. So even though a lot of companies, within marketing and communication domains for example, can easily go ahead and leverage the publicly available chat capabilities in different models, we had to be clever about creating our own infrastructure and setting up the architecture in house to make it possible for employees to go in and explore it.

53:01
And as OpenAI did with their tool, we pretty much opened it up and let individuals figure out the innovative ways they can leverage it to enhance their productivity and what they do on a daily basis. That's the approach we took as well: really training our employees, making sure we have all the guardrails in place, and conducting the required training on risk assessments and security assessments.

53:29
So we did a lot of due diligence and set up a lot of guardrails before we let everybody use it. But as you can imagine, the use cases are very diverse. We have different teams across the enterprise getting very creative in terms of what they can do: some are already quantifying efficiencies, and some are creating innovative capabilities that we haven't had as a company before.

53:53
Whether it's for somebody who codes or who writes, or even our clinical teams. For example, we're testing out whether we can create automated clinical notes that patients can take home with them, generated from the interaction between the provider and the member.

54:13
So we're really trying to automate a lot of these solutions and, as a result, not only improve members' well-being, access to health, and health equity, but also our employees' well-being, because now they can focus more on supporting our members instead of doing repetitive tasks.

54:33
OK, we had another question about workforce shortages. Michael, you spoke about how AI can be used to help providers manage changes and the immense amount of clinical guidance out there, and, Lana, you mentioned cutting down on the administrative side, or making it more seamless. How do we anticipate AI advancing to help bridge the shortages that we're having in health care providers?

55:09
Yeah, it's a really good question. I think part of this isn't going to be addressed with AI; some of it sits at the national policy level, with medical education and salary reimbursement and a lot of other facets. Where AI can really help, and something I'm personally excited about, is ambient AI: the thought that I could sit in my room with a patient

55:34
and actually maintain eye contact the whole time and have a conversation without any typing, other than maybe looking at a screen for lab data or something, and then it generates the narrative note. I look at it for a few minutes, make sure it's right, and I'm done. That is transformative. I've talked to a few providers who are in pilots with these, and it's saving them over an hour or two a day of what's sort of tongue-in-cheek called pajama rounds, where you're at home doing the rest of your clinical work. Right now, that looks to be one of the most amazing opportunities for frontline providers. I have some other examples, but I'll pass the baton.

56:15
Yeah, I definitely agree, and I would even extend that use case, because it's not just the interaction itself; it's even before the interaction.

56:24
Rather than spending a lot of time reading through the notes of what happened with the patient in the past, or the history, imagine you have a beautiful summary with all the elements you need done for you, that you can glance over, so you have all the information at your fingertips immediately. So I think it's the whole experience. And even after the interaction, like I was saying, you already have a beautiful report or something nice that you can give to the member, so they don't forget what was discussed and what they're supposed to do.

56:51
So I think that has a lot of impact, not only on the provider, but also on the outcome for the member.

Maybe I'll just jump in here and say something slightly more cloudy on this, which is that I'm somewhat ambivalent about it, for the following reasons.

57:06
On the one hand, I think what I refer to as the democratization of expertise is great: the idea that we could take AI and give a lot of people who are currently not accessing the health care system something that allows them to get an on-ramp is really positive.

57:20
On the flip side, I do worry about this idea that we have highways: there's one highway you get on if you have great health insurance and can see a physician, and there's another highway we put you on that involves a lot of AI interactions and gatekeeping, and that we exacerbate differences in who actually gets in the door to see a physician. So I think we should be very thoughtful about the way we build and roll this out, because there's a strong interest in sorting patients this way.

57:49
That's a great comment. And we had several questions about whether we can afford to use AI and some of the foundation models, but also whether we can afford not to. I'd be curious for everyone's thoughts, and there were also some questions on how we make it accessible for things like federally qualified health centers and places serving underserved populations.

58:18
I can dive in on the last comment. It's actually a huge concern right now that the computational resources and the clinical and technical expertise required to appropriately build, develop, execute, validate, and maintain these technologies set a really high bar. There is a real risk right now of effectively creating bias and inequity across the sites, individuals, and institutions that cannot deploy them. I hope that as long as we maintain high awareness that this is a risk, and continue to reach for ways to disseminate and scale these technologies so they can be deployed safely in all environments, we will mitigate it. But it's a significant issue.

59:09
Great. Well, we are out of time. This has been such a fascinating discussion, and we really appreciate our audience being with us. As we mentioned, we'll continue the conversation delving much more into health equity, so watch for that upcoming webinar announcement. You can also find on our website a publication that summarizes the research of Ziad Obermeyer that was funded by NIHCM.

59:40
I want to thank our excellent panel of speakers: thank you for taking the time to be with us today and for sharing your valuable work and perspectives. And thank you to our audience for joining and for your great questions. Your feedback is important to us, so please take a moment to complete the brief survey that will open on your screen. Thank you all again for joining us today.

Speakers

Michael E. Matheny, MD

Director, Center for Improving the Public’s Health through Informatics, Vanderbilt University Medical Center

I. Glenn Cohen, JD

Faculty Director, Petrie-Flom Center for Health Law Policy, Biotechnology & Bioethics & Deputy Dean, Harvard Law School

Svetlana Bender, PhD

Vice President, AI & Behavioral Science, GuideWell and Florida Blue

 

