How to generate counterfactuals for a model with Responsible AI (Part 8)
Published Apr 24 2023

A robust AI solution does not just make predictions; it can also provide recommendations on how to change the model’s outcome. Data scientists and AI developers train and tune models for optimal performance. However, for an AI system to be beneficial to decision-makers and end-users, it sometimes needs to provide alternative suggestions on what the user can do to change the prediction. This is useful in strategic planning, preventative actions, or corrective actions. In the case of corrective action, for instance, if an AI system rejected a customer’s loan application, the loan officer may be able to use the AI solution to give the customer recommendations such as increasing their credit score by 75 points or their salary by $10k in order to gain approval. As an example of preventative action, if a doctor diagnosed a patient as pre-diabetic with the assistance of an AI system, the system may also be able to provide the doctor with recommendations such as reducing the patient’s BMI or keeping their blood glucose level within a given range. All of these examples are ways that an AI solution can assist decision-makers in asking What-If questions about a model prediction and taking better alternative actions. Being able to provide users with alternatives to a model prediction helps promote fairness, transparency, and accountability.

 

The Responsible AI (RAI) dashboard provides the Counterfactual component, which enables data scientists and decision-makers to observe how the model’s predictions change when the original data is altered. The component offers suggestions on which feature values to change to obtain the opposite, or another desired, model prediction. In addition, its interactive user interface enables users to make custom what-if adjustments to individual data points to understand how the model reacts to feature changes.

 

In this tutorial, we will explore how to use the Counterfactual section of the RAI dashboard. This is a continuation of the Diabetes Hospital Readmission use case we’ve been working with throughout this tutorial series. In the prior tutorial, we discovered the key features driving our model’s predictions, using the Feature Importance capability of the RAI dashboard to understand why the model made its predictions, both overall and for an individual patient. Now, let us explore how to generate the opposite model prediction by making small changes to a datapoint’s features.

 

Prerequisites

 

 

Counterfactual/What-If analysis

 

The Counterfactuals component of the dashboard helps users make interactive, custom what-if adjustments to individual data points to understand how the model reacts to feature changes. The component displays the individual datapoints in the dataset as a scatterplot for a given cohort; in our case, each data point represents a diabetic patient. The view can be narrowed down to a specific set of data by setting the desired x-axis and y-axis features. From this filtered view, the user can click on a specific data point to explore how perturbing a few features can change the model’s prediction.
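If you have been following along with this series, the counterfactual component was added when the RAI dashboard was constructed (see tutorial 3). As a refresher, here is a minimal sketch using the responsibleai SDK; the variable names (model, train_df, test_df) and the choice of 10 counterfactuals per datapoint are assumptions for illustration:

```python
from raiwidgets import ResponsibleAIDashboard
from responsibleai import RAIInsights

# Wrap the trained model and data ("model", "train_df", and "test_df"
# are assumed names; "readmit_status" is this series' target column).
rai_insights = RAIInsights(
    model=model,
    train=train_df,
    test=test_df,
    target_column="readmit_status",
    task_type="classification",
)

# Request 10 counterfactual examples per datapoint, each flipping the
# model's prediction to the opposite class.
rai_insights.counterfactual.add(total_CFs=10, desired_class="opposite")
rai_insights.compute()

# Launch the dashboard with the computed insights.
ResponsibleAIDashboard(rai_insights)
```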

 

Since “Prior_Inpatient” was identified as a problematic feature in the Error Analysis tutorial, we’ll set it as the y-axis and “Readmitted” as the x-axis. Here, we’ll evaluate which features we need to change in order for a patient not to be readmitted to the hospital.

 

[Figure: scatterplot of individual datapoints with Prior_Inpatient on the y-axis and Readmitted on the x-axis]

 

To see how we can reduce the possibility of a patient being readmitted to the hospital within 30 days of discharge, we’ll select a patient with 6 prior inpatient hospitalizations (on the Prior_Inpatient = 6 line of the scatterplot). The selected datapoint (index #489), on the far right-hand side of the graph, has a high predicted likelihood of being readmitted. When we double-click on the data point, the dashboard displays the top-ranked features that can be adjusted to achieve a desired outcome from the model. In our case, we’re asking: “What if” we changed any of these features, would the patient not be readmitted within 30 days?

 

[Figure: top-ranked candidate features to perturb for the selected datapoint]

 

As you can see from the chart, there are 11 features that influence our selected diabetic patient’s readmission to the hospital. However, it is not realistic to expect the patient to change all of these features in order to not be readmitted. The goal of counterfactuals is to make the minimum change to one or more features that achieves the desired outcome. The next challenge is figuring out which features to change, and by how much. The dashboard addresses this dilemma by generating an interactive view that lists alternative feature values yielding the opposite outcome for the selected data point, showing the exact values that can be changed to get the desired prediction.
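Under the hood, the RAI dashboard uses the DiCE (Diverse Counterfactual Explanations) library to search for these minimal perturbations. For readers who want to reproduce the search programmatically, here is a minimal sketch using dice-ml directly; the DataFrame variables, the continuous feature list, and the model variable `clf` are assumptions for illustration:

```python
import dice_ml

# Describe the dataset: which features are continuous and which column
# holds the outcome (column names assumed for illustration).
data = dice_ml.Data(
    dataframe=train_df,
    continuous_features=["Prior_Inpatient", "Prior_Emergency", "Num_Medications"],
    outcome_name="readmit_status",
)
model = dice_ml.Model(model=clf, backend="sklearn")

# "random" sampling is one of several search strategies DiCE supports.
explainer = dice_ml.Dice(data, model, method="random")

# Generate 10 counterfactuals for the selected patient (index 489) that
# flip the model's prediction to the opposite class.
query = test_df.drop(columns=["readmit_status"]).iloc[[489]]
cfs = explainer.generate_counterfactuals(
    query, total_CFs=10, desired_class="opposite"
)

# Show only the feature values that changed in each counterfactual.
cfs.visualize_as_dataframe(show_only_changes=True)
```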

 

Let’s explore the different options to see whether changing certain features would yield an outcome of our selected patient (index 489) not being readmitted. To do this, we’ll click on the “Counterfactual” button to generate the opposite outcomes for our selected patient.

 

 

[Figure: generating What-If counterfactuals for the selected patient]

 

The first record in the generated “What-If” counterfactuals is the reference data point: the patient record we selected. As you can see, its predicted value is 1, meaning the patient is predicted to be “Readmitted”. The remaining 10 records are the counterfactuals with the opposite prediction (“Not Readmitted”). In each counterfactual record, the values in bold are the “What-If” feature values: the one or two features that, if changed from the patient’s current data, would yield a “Not Readmitted” prediction from the model.

 

[Figure: list of generated What-If counterfactual examples]

 

Each of the 10 suggested counterfactuals has a “Set Value” link. Clicking it copies that record’s changed values into the corresponding columns of the reference patient record at the bottom of the page. From there you can evaluate the new probability of “Readmitted” vs. “Not Readmitted” with the changes applied, along with the delta from the original prediction. These deltas can help a decision-maker weigh which changes are more feasible.

 

To illustrate this, let’s click on the “Set Value” link for “Counterfactual Ex 3”. When we do, the “Create your own counterfactual” section at the bottom of the page updates the copy of the reference patient datapoint: the “Prior_Inpatient” box changes from 6 to 0 and the “Prior_Emergency” box from 4 to 0. With this update, the “Predictive value (readmit_status)” changes from 1 to 0. To see the change from the original datapoint, hover over “See prediction deltas” on the bottom left-hand side of the page. This reveals that, if the change were made, the probability of the selected patient not being readmitted to a hospital within 30 days would be 0.798. By comparing these insights, decision-makers can weigh the different alternate suggestions and make better-informed decisions.
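You can sanity-check this delta outside the dashboard by scoring the original and modified records directly. A minimal sketch, assuming a scikit-learn-style model `clf` and the feature names used in this series:

```python
# Reference patient (index 489) and a copy with the counterfactual
# changes from "Counterfactual Ex 3" applied.
original = test_df.drop(columns=["readmit_status"]).iloc[[489]]
modified = original.copy()
modified["Prior_Inpatient"] = 0
modified["Prior_Emergency"] = 0

# Column 0 of predict_proba is assumed to be class 0 ("Not Readmitted");
# check clf.classes_ to confirm the ordering for your model.
p_before = clf.predict_proba(original)[0][0]
p_after = clf.predict_proba(modified)[0][0]
print(f"Not Readmitted probability: {p_before:.3f} -> {p_after:.3f} "
      f"(delta {p_after - p_before:+.3f})")
```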

 

[Figure: counterfactual selection with prediction deltas]

 

 

The generated counterfactuals are also useful for data scientists and AI developers to spot potential issues in the alternate recommendations. For instance, “Counterfactual Ex 7” suggests that to prevent our selected diabetic patient from being readmitted, their race would need to change from “African-American” to “Caucasian” and Prior_Inpatient would be set to 0. It is obviously not realistic to recommend that a patient change their race to get a desired outcome. To avoid recommending features, sensitive or otherwise, that are not good candidates to be perturbed, it is important to consider which features should be allowed to change. Before using the component, developers should decide which features must remain fixed and exclude them using the features_to_vary setting (see tutorial 3). Running the tool then produces more meaningful counterfactuals.
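For example, continuing the earlier setup sketch, here is one way to restrict the search to actionable features when adding the counterfactual component; the excluded column names are assumptions based on this use case:

```python
# Only allow actionable features to be perturbed; immutable or sensitive
# attributes stay fixed (excluded column names assumed for illustration).
actionable_features = [
    col for col in train_df.columns
    if col not in ["Race", "Gender", "Age", "readmit_status"]
]

rai_insights.counterfactual.add(
    total_CFs=10,
    desired_class="opposite",
    features_to_vary=actionable_features,
)
rai_insights.compute()
```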

 

Once the user determines the most suitable changes from the list of What-If counterfactuals, the component provides the capability to save a copy of the new datapoint for the patient.

 

[Figure: saving a copy of the modified datapoint]

 

In this tutorial, we learned how the RAI dashboard helps decision-makers ask What-If questions about how data can be minimally changed to get a desired outcome from a model. The ability to explore counterfactuals gives stakeholders and end-users transparent, data-driven information on how to take corrective, preventive, or strategic actions based on a model’s prediction. This transparency also builds end-user trust in the recommendations the model provides. Lastly, the component offers safeguards for developers to verify that the features being suggested, whether sensitive (e.g., age, gender, race) or not, are acceptable and do not cause fairness issues, depending on the use case.

 

Now that you’ve learned how to use What-If counterfactuals, share in the comments how you used them on your model and what interesting insights you gained.

 

DISCLAIMER:  Microsoft products and services (1) are not designed, intended or made available as a medical device, and (2) are not designed or intended to be a substitute for professional medical advice, diagnosis, treatment, or judgment and should not be used to replace or as a substitute for professional medical advice, diagnosis, treatment, or judgment. Customers/partners are responsible for ensuring their solutions comply with applicable laws and regulations. Customers/partners also will need to thoroughly test and evaluate whether an AI tool is fit for purpose and identify and mitigate any risks or harms to end users associated with its use. 