Is AI part of your diversity & inclusion plan?

Claire Bonaci: You're watching the Microsoft US Health and Life Sciences Confessions of Health Geeks podcast, a show that offers industry insight from the health geeks and data freaks of the US health and life sciences industry team. I'm your host, Claire Bonaci. Today we'll talk about AI and ethics with Tom Lawry, our National Director for AI in Health and Life Sciences and author of the book AI in Health. This is episode three of our AI in health and life sciences series, and we'll cover the ethical issues and opportunities surrounding AI and how leaders can build AI models ethically and responsibly. Welcome back for another episode, Tom.

Tom Lawry: Hey Claire. Great to be back.

Claire Bonaci: So let's talk about the ethics of AI. You devoted an entire chapter of your book to this topic. But how does AI actually relate to diversity and inclusion programs? And do you mind giving a few examples?

Tom Lawry: Well, sure. Let me start by saying two things come to mind. First, a quote from Tim Cook, the CEO of Apple, who basically said we get to choose whether we're going to use AI to the benefit of humanity or to its detriment. And obviously, I know which one I want to pick. But when it comes to using AI for good, the second level below what Tim's talking about is this: while AI can produce overall good on any problem we apply it to, if it's applied unevenly, it can lead to inequitable use and impact across the populations we serve. I think that's especially true and important in healthcare.

So with that, let me give you an example. Everyone listening and watching is now officially certified as a clinical leader or hospital executive. Imagine, in those roles, we want to reduce adverse events that occur inside our hospital. An adverse event can be something like a hospital-acquired infection, or it can involve a med-surg patient: they come in for a basic procedure, they've had surgery, they're doing fine, they're about ready to be discharged, and all of a sudden their heart stops, they stop breathing, they get coded. Instead of getting discharged, they end up in the ICU. Those are adverse events. So we bring our data science and clinical leaders together and work on a plan using predictive analytics and AI. We spin up an algorithm using real-time data, and we're risk-rating every inpatient to predict an adverse event, with the hope of preventing it. And I'm proud to report that in the pilot, we're able to reduce adverse events by 60% in the hospital.

Think about that. From a quality perspective, decreasing adverse events is a huge clinical improvement. Financially, if we're reducing adverse events, meaning patients go home instead of going to the ICU, we're making much better use of our economic resources. So on balance, I think everyone would agree that's a huge win from a quality and cost-management standpoint. But what's important is to pull back and say: 60% is a statistical average. If that 60% is based on the fact that we were three times better able to predict adverse events in white males versus Hispanic females, would that be okay? It's improving quality, it's reducing costs, it's producing overall good, but it's doing so unevenly. And everything I just described is compliant with HIPAA and with all regulatory and legal requirements. Those are the kinds of questions coming up across all segments, but particularly in healthcare, when it comes to providing AI in an ethical fashion. In this case, it's an example of bias coming into AI.
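
To make the check Tom describes concrete, here is a minimal sketch of stratifying an adverse-event model's performance by demographic group. Everything in it, the data, the column names, the numbers, is hypothetical, invented purely to illustrate the idea:

```python
import pandas as pd
from sklearn.metrics import recall_score

# Hypothetical results: one row per inpatient with the true outcome,
# the model's prediction, and a demographic attribute.
df = pd.DataFrame({
    "adverse_event":   [1, 0, 1, 1, 0, 1, 1, 0, 1, 1],
    "predicted_event": [1, 0, 1, 1, 0, 0, 0, 0, 1, 0],
    "group":           ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"],
})

# Recall (sensitivity) per subgroup: of the real adverse events in each
# group, how many did the model catch in time to intervene?
by_group = df.groupby("group").apply(
    lambda g: recall_score(g["adverse_event"], g["predicted_event"])
)
print(by_group)  # group A: 1.00, group B: 0.25

# The pooled number can look like a win while one group is poorly served.
disparity = by_group.max() / by_group.min()
print(f"Best-to-worst recall gap across groups: {disparity:.1f}x")
```

On data like this, the overall reduction can still look impressive while most of group B's events are missed, which is exactly the uneven average in Tom's story.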

Claire Bonaci: Okay, so you mention bias. How is it that bias even creeps into AI?

Tom Lawry: Well, when it comes to bias being introduced into things like predictive capabilities, it can happen in any of a number of ways. Probably most often it happens because the data we're using to drive the algorithms has bias built in. Having said that, I'll pull back and say that for the most part, I've never seen or experienced a dataset in healthcare that's totally unbiased. Bias comes in many forms. Many times it comes from how well the data represents the people we are serving. Here in America, no matter what you believe, healthcare is legally not a right. So think that through: all the people who are uninsured or underinsured show up less for services, because they can't afford them or don't have access. Therefore, the data we have on the patient populations we do see is skewed toward those who are well insured or have the ability to pay. That alone drives predictive bias by underrepresenting certain populations. Bias can also come in, not purposefully, through how developers and data scientists actually program an algorithm. Finally, it can come in as we create what are known as self-learning algorithms. We're having an algorithm try to solve a problem using its own computer logic to get there, and there are times when, to find the shortest path to what we've asked it to do, it may take shortcuts, unknowingly of course, because it's an algorithm. That alone can produce bias: a self-learning algorithm reaching for a goal without having other considerations built in.
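
One simple way to surface the first kind of bias Tom mentions is to compare who appears in the training data against who is actually in the community being served. The shares below are invented for illustration:

```python
# Hypothetical shares of each coverage group in the training data versus
# the community the hospital actually serves.
training_share   = {"well_insured": 0.78, "under_insured": 0.17, "uninsured": 0.05}
population_share = {"well_insured": 0.55, "under_insured": 0.25, "uninsured": 0.20}

for group in population_share:
    ratio = training_share[group] / population_share[group]
    flag = "  <-- underrepresented" if ratio < 0.8 else ""
    print(f"{group}: {ratio:.2f}x representation in training data{flag}")
```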

Claire Bonaci: Okay, so this really does go beyond the usual security and compliance rules and regulations. That's a great point. I've heard you present in the past, and you often ask the question: is AI part of your diversity and inclusion plan? Why is that such an important question to ask, and what do you actually learn by asking it?

Tom Lawry: Well, again, if we come back to the history and parameters of healthcare in general, having access and keeping all populations healthy is a huge issue and a big goal. What I see is a lot of great work being done by healthcare organizations of all stripes that really focus on diversity and inclusion in everything else they do: how staff are trained, how patients and consumers are treated and managed, making sure there is equality in accessibility and in how we treat them. What's interesting is you can have the best programs for all of that, and yet, back to the story I just told, if people aren't paying attention to how AI is coming in and becoming pervasive in the organization, there's a whole other set of biases, things that create an unequal distribution of AI's benefits, creeping in at the very moment we're addressing equity in all those other, more familiar realms. So throwing that out as a provocative question is a way of saying that diversity and inclusion cut across everything we do to ensure the best quality and accessibility for all the populations we serve.

Claire Bonaci: So you have mentioned that AI must be designed responsibly, and that's definitely a given. Where does ethical AI actually fit into responsible, inclusive design, and who is responsible for ensuring that AI is designed and deployed responsibly?

Tom Lawry: Well, another great question. Many times technology will get out ahead of the regulators and the legislators, so sooner or later I believe there will be regulations and other guardrails that catch up with the technology. But for now, you can do something that's legally correct, regulatorily correct, and still have things like bias creep in. So when it comes to solving for that, it does come down to effective design. And effective design starts with leaders, not necessarily the technical leaders, first of all understanding and recognizing the issues. We see leaders in healthcare getting very excited about AI, hiring data science teams, putting things in the field, and they're not having the conversation about stress testing whatever they're putting in the field for things like bias, for things like transparency. So good design starts with healthcare leaders making an issue of AI principles being applied to anything they put in place.

Beyond that, when it comes to those actually developing, deploying, and managing things like predictive capabilities, a lot of it comes down to this: I can create an algorithm that has high correlational value on average, but what I really want to know is whether there is variance in my predictive capability across all the populations I'm serving. That gets down to stress testing the algorithm, not just for the overall averages I mentioned in my story, but looking at how effective the same predictive capability is by gender, by race, by age, by any of a number of factors that can be tested for. So it really is doubling down: not just saying this is an algorithm with high correlational value in general, but being able to say the variance between it producing good for a white male, a Hispanic female, anyone, is within a range that would be acceptable. Again, it's highly unlikely you'd have something that's exactly equal across all populations. But it's a matter of being mindful, stress testing, and driving toward very little variance, so it's doing good for all of those we're serving.
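
As a concrete illustration of that kind of stress test, here is a minimal sketch that checks whether a model's discriminative power (AUC) stays within an acceptable range across gender and age band. The synthetic data, in which one group's risk scores are deliberately made noisier, and the tolerance threshold are invented, not clinical standards:

```python
import numpy as np
import pandas as pd
from sklearn.metrics import roc_auc_score

# Synthetic patients: the model's signal is deliberately weaker for one
# group, to show what an uneven algorithm looks like under the test.
rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "gender": rng.choice(["F", "M"], n),
    "age_band": rng.choice(["18-44", "45-64", "65+"], n),
    "adverse_event": rng.integers(0, 2, n),
})
signal = np.where(df["gender"] == "F", 0.3, 1.0)
df["risk_score"] = df["adverse_event"] * signal + rng.normal(0, 0.5, n)

def stress_test(df, attributes, max_spread=0.05):
    """Report AUC per subgroup; flag attributes whose spread is too wide."""
    for attr in attributes:
        aucs = df.groupby(attr).apply(
            lambda g: roc_auc_score(g["adverse_event"], g["risk_score"])
        )
        spread = aucs.max() - aucs.min()
        status = "OK" if spread <= max_spread else "REVIEW"
        print(f"{attr}: AUC spread {spread:.3f} [{status}]")
        print(aucs.round(3), "\n")

stress_test(df, ["gender", "age_band"])  # gender comes back REVIEW here
```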

Claire Bonaci: Okay, so the responsibility is really twofold or threefold: it sits with the health organization, with the actual developers, with whoever is involved in creating these models. That's great.

Tom Lawry: Yeah, absolutely. And let's take it a step farther. What we're seeing is that a lot of the AI coming into healthcare organizations is being generated by the organizations themselves. But increasingly, it's all of the major vendors that serve healthcare organizations who are infusing AI into their own products. I believe there'll be a time in the not-too-distant future when the majority of our intelligence is really going to be coming from the EMRs, the lab systems, the radiology systems. And once again, it's critically important for organizations using those vendors to hold them to the same standards they're holding themselves to. That's where we're starting to talk with our clients about things like: as you're putting out an RFP or an RFI for a new system, what criteria are you baking in that ask the vendor not only for their roadmap for infusing intelligence, but for what they're doing to safeguard against things like bias? That's incredibly important as we look at moving ahead.

Claire Bonaci: That's actually one of my next questions: if there were one question you would ask leaders or vendors to make sure they're starting on the right path to ethical AI, would it be related to RFPs and that RFP process?

Tom Lawry: Typically, if you look at the purchasing process for anything, that's where the logistics come into play. But more importantly, it's conversations like this one, just making leaders aware and mindful that AI can produce great good overall, but if it's spotty, if it's uneven across the populations we serve, chances are that's not in keeping with the mission of most of the organizations I get to work with. Probably the number one thing, whether you're the CEO of a major vendor, a vice president of product development, the hospital CEO, or the chief medical officer, is just being mindful of that general theme: AI can do good, but if it's being applied unevenly, benefiting some and not all, that's a big issue in healthcare.

Claire Bonaci: Yeah, so definitely the education piece is huge. Would you share a societal issue that you're most looking forward to seeing AI tackle in the diversity and inclusion or accessibility space?

Tom Lawry: Wow, that's a big question. I think I may need another webcast for that. A lot of what we've talked about already, making sure all the people we serve have reasonably good and equal access, is key. But the one that comes to mind right now, given that COVID is on everyone's mind: a few weeks ago The Washington Post ran a really great article on how COVID was affecting Black communities and neighborhoods at a much higher rate than anyone else. There are a number of factors there, and everyone's still trying to get a handle on why certain populations, like African Americans, are being affected more than others. But this article from The Post basically covered one neighborhood in North Milwaukee and how it had been decimated by COVID, even though the African American population is a small percentage of the overall population in that state. As I read it, they were talking about a lot of socioeconomic factors, and it got me thinking. We have a great partner called Jvion that has put together a tool using predictive capabilities to look at COVID vulnerability by geography; you can look by state, by county, down to the neighborhood level. So after reading that article, I went and used this tool, zoomed in on North Milwaukee, and saw exactly what the article was describing: that neighborhood had a very high vulnerability to COVID. What was interesting is that, if you look at the mapping, neighborhoods literally right next door had very low vulnerability. It showed the uneven nature of it. And the beauty of the tool is that it explained a lot of this by surfacing the differences in social-determinant data. Therein lies a lot of information that would allow us to get ahead of the problem, to not only understand it but prevent it.

So, going forward (that was a long answer and story), imagine being able to use AI, whether for COVID or anything else, to take a social or medical issue and get it down to predicting: here are the hotspots in this community for this problem. Then imagine getting out ahead of those problems by deploying resources to prevent them, particularly in vulnerable populations, or at a minimum mitigating the impact of something that's happening to one population when, for whatever reason, it's not happening to others. Hot-spotting: using a lot of data and prediction to really even out some of these social issues in our communities.
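
To give a feel for the hot-spotting Tom closes with, here is a minimal sketch that rolls patient-level vulnerability scores up by neighborhood. It illustrates the general idea only; it is not Jvion's actual methodology, and the data and weights are invented:

```python
import pandas as pd

# Invented patient-level factors drawn from clinical and social-determinant data.
patients = pd.DataFrame({
    "neighborhood": ["North Side", "North Side", "East Side", "East Side"],
    "age_over_65": [1, 0, 0, 1],
    "chronic_conditions": [3, 2, 0, 1],
    "uninsured": [1, 1, 0, 0],
})

# A toy vulnerability score: a weighted sum of the risk factors.
weights = {"age_over_65": 2.0, "chronic_conditions": 1.0, "uninsured": 1.5}
patients["vulnerability"] = sum(patients[col] * w for col, w in weights.items())

# Rank neighborhoods by average vulnerability: hotspots rise to the top,
# showing where to deploy resources first.
hotspots = (patients.groupby("neighborhood")["vulnerability"]
            .mean().sort_values(ascending=False))
print(hotspots)
```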

Claire Bonaci: That's a great example and very relevant today. So thank you so much, Tom, and I'm looking forward to having you back next time.

Tom Lawry: Hey, thanks. Always appreciate coming in and talking with you, Claire.

Claire Bonaci: Thank you all for watching. To purchase Tom's book, visit www.CRCpress.com. We look forward to continuing the AI in health series next month. Please feel free to leave us questions or comments below, and check back soon for more content from the HLS industry team.
