This post is co-authored by Mehrnoosh Sameki, Program Manager, Azure Machine Learning.
Model interpretability and fairness are part of the ‘Understand’ pillar of Azure Machine Learning’s Responsible ML offerings. As machine learning becomes ubiquitous in decision-making, from end users of AI-powered applications to business stakeholders relying on models for data-driven decisions, it is essential to provide tools at scale for model transparency and fairness.
Explaining a machine learning model and assessing its fairness are important for everyone involved in the model lifecycle, from the data scientists who build and validate models to the stakeholders who rely on their predictions.
Customers like Scandinavian Airlines (SAS) and Ernst & Young (EY) have put the interpretability and fairness packages to the test so they can deploy models more confidently.
We are releasing enhanced experiences and feature additions for the interpretability and fairness toolkits in Azure Machine Learning, to empower more ML practitioners and teams to build trust with AI systems.
These two toolkits can be used together to understand model predictions and mitigate unfairness. For this demonstration, we’ll look at a loan allocation scenario: the label indicates whether each individual repaid a loan in the past, and we use the data to train a predictor that estimates whether previously unseen individuals will repay a loan. The assumption is that the model’s predictions are used to decide whether an individual should be offered a loan.
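To make the walkthrough concrete, here is a minimal sketch of that setup in Python, using the adult census dataset bundled with Fairlearn as a stand-in for loan-repayment data (the exact dataset, features, and model used in the original demo may differ):

```python
# Minimal sketch of the loan-style scenario; the adult census dataset
# bundled with Fairlearn serves as a stand-in for loan-repayment data.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from fairlearn.datasets import fetch_adult

data = fetch_adult(as_frame=True)
X = pd.get_dummies(data.data)              # one-hot encode categorical features
y = (data.target == ">50K").astype(int)    # proxy label for "repaid the loan"
sex = data.data["sex"]                     # sensitive feature used later

X_train, X_test, y_train, y_test, sex_train, sex_test = train_test_split(
    X, y, sex, test_size=0.3, random_state=0, stratify=y
)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)
```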
Our revamped fairness dashboard can help uncover allocation harms, in which a model unfairly allocates loans among different demographic groups. The dashboard can additionally uncover quality-of-service harms, in which a model fails to provide the same quality of service to some people as it does to others. Using the fairness dashboard, you can identify whether the model treats loan applicants of different sexes unfairly.
When you first load the fairness dashboard, you need to configure it with the desired settings, including the sensitive feature to examine (here, sex) and the performance metric used to evaluate the model (for example, accuracy).
After setting the configurations, you will land on a model assessment view where you can see how the model is treating different demographic groups.
Our fairness assessment shows an 18.3% disparity in selection rate (the demographic parity difference): the model qualifies 18.3% more males than females for loan acceptance. Now that you’ve seen unfairness indicators in your model, you can next use our interpretability toolkit to understand why the model makes such predictions.
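Behind the dashboard, the same numbers can be computed programmatically with Fairlearn’s metrics API. Here is a sketch reusing the names from the setup above; your exact figures will vary with the data split and model:

```python
from fairlearn.metrics import (
    MetricFrame,
    selection_rate,
    demographic_parity_difference,
)

y_pred = model.predict(X_test)

# Selection rate (fraction of positive predictions) broken down by sex.
sr_by_sex = MetricFrame(
    metrics=selection_rate,
    y_true=y_test,
    y_pred=y_pred,
    sensitive_features=sex_test,
)
print(sr_by_sex.by_group)          # per-group selection rates
print(sr_by_sex.difference())      # gap between the groups

# Equivalent one-liner for the demographic parity difference.
print(demographic_parity_difference(
    y_test, y_pred, sensitive_features=sex_test
))
```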
The revamped interpretability dashboard greatly improves the user experience of the previous dashboard. In the loan allocation scenario, you can use the interpretability toolkit to understand how the model treats female loan applicants differently from male loan applicants.
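As a rough sketch of what powers this view: the open-source interpret-community package generates the explanations, and the dashboard itself ships in the raiwidgets package (exact imports can vary by version):

```python
from interpret_community import TabularExplainer
from raiwidgets import ExplanationDashboard

# Wrap the trained model; the explainer picks a suitable technique
# (e.g., SHAP-based) depending on the model type.
explainer = TabularExplainer(model, X_train)

# Global explanation: aggregate feature importances over the test set.
global_explanation = explainer.explain_global(X_test)
print(global_explanation.get_feature_importance_dict())

# Launch the interactive dashboard to slice explanations by cohort,
# e.g., female vs. male applicants.
ExplanationDashboard(global_explanation, model,
                     dataset=X_test, true_y=y_test)
```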
Once potential fairness issues have been observed and diagnosed, you can move on to mitigating them.
The unfairness mitigation part is powered by the Fairlearn open-source package, which includes two types of mitigation algorithms: postprocessing algorithms (ThresholdOptimizer) and reduction algorithms (GridSearch, ExponentiatedGradient). Both operate as “wrappers” around any standard classification or regression algorithm. GridSearch, for instance, treats any standard classification or regression algorithm as a black box and iteratively (a) re-weights the data points and (b) retrains the model after each re-weighting. After 10 to 20 iterations, this process yields a model that satisfies the constraints implied by the selected fairness metric while maximizing model performance. ThresholdOptimizer, on the other hand, takes as input the scoring function that underlies an existing classifier and identifies a separate threshold for each group, optimizing the performance metric while simultaneously satisfying the constraints implied by the selected fairness metric.
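Both families are available directly from the Fairlearn package. Below is a hedged sketch of each approach, continuing with the variable names from the earlier snippets:

```python
from sklearn.linear_model import LogisticRegression
from fairlearn.reductions import GridSearch, DemographicParity
from fairlearn.postprocessing import ThresholdOptimizer

# Reduction approach: sweep a grid of trade-offs between accuracy and
# demographic parity, retraining the base estimator at each grid point.
sweep = GridSearch(
    LogisticRegression(max_iter=1000),
    constraints=DemographicParity(),
    grid_size=20,
)
sweep.fit(X_train, y_train, sensitive_features=sex_train)
mitigated_models = sweep.predictors_   # one candidate model per grid point

# Post-processing approach: learn a separate decision threshold per group
# on top of the existing (unmitigated) model.
postprocessed = ThresholdOptimizer(
    estimator=model,
    constraints="demographic_parity",
    prefit=True,
)
postprocessed.fit(X_train, y_train, sensitive_features=sex_train)
y_pred_mitigated = postprocessed.predict(
    X_test, sensitive_features=sex_test
)
```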
The fairness dashboard also enables the comparison of multiple models, such as models produced by different learning algorithms or different mitigation approaches. Passing the models produced by GridSearch, for instance, you can see the unmitigated model in the upper right (with the highest accuracy and the highest demographic parity difference) and can click on any of the mitigated models to examine it further. This allows you to inspect the trade-offs between performance and fairness.
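One way to drive this comparison view, under the same assumptions as the previous sketches, is to pass a dictionary of predictions from several models to the fairness dashboard:

```python
from raiwidgets import FairnessDashboard

# Compare the unmitigated model against the mitigated candidates; the
# dashboard plots performance vs. disparity for each entry in the dict.
FairnessDashboard(
    sensitive_features=sex_test,
    y_true=y_test,
    y_pred={
        "unmitigated": model.predict(X_test),
        "threshold_optimizer": y_pred_mitigated,
        **{f"grid_{i}": m.predict(X_test)
           for i, m in enumerate(mitigated_models)},
    },
)
```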
After applying the unfairness mitigation, we go back to the interpretability dashboard and compare the unmitigated model with the mitigated model. In the figure below, we see a more even probability distribution for the female cohort for the mitigated model on the right:
Revisiting the fairness assessment dashboard, we also see a drastic decrease in the demographic parity difference, from 18.3% (unmitigated model) to 0.412% (mitigated model):
Azure Machine Learning’s (AzureML) interpretability and fairness toolkits can be run both locally and remotely. If run locally, the libraries will not contact any Azure services. Alternatively, you can run the algorithms remotely on AzureML compute and log all the explainability and fairness information into AzureML’s run history via the AzureML SDK, so you can save the results and share them with other team members or stakeholders in AzureML studio.
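For the remote path, here is a minimal sketch of uploading a model explanation to a run with the azureml-interpret package, assuming the code executes inside an AzureML training script (uploading fairness metrics works similarly via the separate azureml-contrib-fairness package):

```python
from azureml.core import Run
from azureml.interpret import ExplanationClient

# Inside an AzureML training script: attach the explanation to the
# current run so it appears in AzureML studio and can be shared.
run = Run.get_context()
client = ExplanationClient.from_run(run)
client.upload_model_explanation(
    global_explanation, comment="global explanation: loan model"
)
```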
Azure ML’s Automated ML supports explainability for its best model, as well as on-demand explainability for any other model generated by Automated ML.
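As a sketch, assuming `automl_run` is a completed Automated ML run object, the best model’s explanation can be retrieved with the same ExplanationClient:

```python
from azureml.interpret import ExplanationClient

# `automl_run` is assumed to be a completed AutoMLRun object.
best_run, fitted_model = automl_run.get_output()

# Download the explanation computed for the best model.
client = ExplanationClient.from_run(best_run)
explanation = client.download_model_explanation()
print(explanation.get_feature_importance_dict())
```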
Explore this scenario and other sample notebooks in the Azure Machine Learning sample notebooks GitHub.
Learn more about the Azure Machine Learning service.
Learn more about Responsible ML offerings in Azure Machine Learning.
Get started with a free trial of the Azure Machine Learning service.
This dataset is from the 1994 US Census Bureau database, where “sex” in the data was limited to binary categorizations.