April is AI month for the developer community. Whenever Azure Machine Learning is discussed, you’re going to notice that the phrase responsible AI crops up. This all goes back to a simple principle Uncle Ben taught Spider-Man: “With great power comes great responsibility.” Being able to work with machine learning is one of your superpowers, so it is important to always use it with an understanding of how it may be used by others and what the unintended consequences might be.
To underscore the point, let’s look at some well-known examples of irresponsible AI:
- A 2018 study demonstrated that commercial facial-analysis systems had higher gender-classification error rates for women, especially women of color.
- AI-driven risk-assessment software used by law enforcement has ranked minorities at higher risk of recidivism, potentially reinforcing existing bias rather than correcting for it.
- Tay, an AI-driven chatbot, learned racist language from the Twitterverse within hours of launch and had to be taken down.
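The first example above boils down to arithmetic anyone can run: compare error rates across demographic groups. Here is a minimal sketch; the group names and data are made up for illustration, not taken from the study.

```python
# Per-group error rate check (hypothetical data, for illustration only).
predictions = {"group_a": [1, 1, 0, 1, 1, 1, 0, 1],
               "group_b": [1, 0, 0, 1, 0, 0, 1, 0]}
actuals     = {"group_a": [1, 1, 0, 1, 1, 1, 0, 1],
               "group_b": [1, 1, 1, 1, 1, 0, 1, 1]}

def error_rate(pred, true):
    """Fraction of examples the model got wrong."""
    return sum(p != t for p, t in zip(pred, true)) / len(true)

for group in predictions:
    rate = error_rate(predictions[group], actuals[group])
    print(f"{group}: {rate:.0%} error rate")
# group_a: 0% error rate
# group_b: 50% error rate
```

A model with a respectable overall accuracy can still fail badly for one group, which is exactly why aggregate metrics alone don’t tell the fairness story.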
In each of these cases, the good intent of the programmers was eventually undermined by circumstances. Even more concerning is AI being used to do harm intentionally. We see this with unfortunate applications like Russian bot networks on Facebook and Twitter, and the persistent fear that deepfake video technology will be used to sway democratic elections with misleading footage.
The best tonic for these worries is the recognition that AI is also being used for good:
- AI for Earth is a grant program that uses AI to change the way people and organizations monitor, model, and manage Earth’s natural systems.
- AI for Sustainability is a global environmental network that empowers organizations and individuals working to advance sustainability.
- AI for Accessibility is a Microsoft grant program that harnesses the power of AI to amplify human capability for the more than one billion people around the world with a disability.
- Deepfake Detection Challenge is an industry-driven contest to develop AI techniques that identify manipulated images and videos created by other AI algorithms.
Responsible AI at Microsoft can be summed up by these six principles:
- Fairness—AI systems should treat all people fairly.
- Reliability & Safety—AI systems should perform reliably and safely.
- Privacy & Security—AI systems should be secure and respect privacy.
- Inclusiveness—AI systems should empower everyone and engage people.
- Transparency—AI systems should be understandable.
- Accountability—People who design and deploy AI systems should be accountable for how those systems operate.
While it’s important to internalize all of these principles when we work in AI, the two most important are transparency and accountability. The biggest difficulty in coming to terms with AI is that machine learning, especially deep learning, works like a black box. Engineers put training data in and get models out, but don’t always understand how a model arrives at its predictions. This makes it difficult to assess, in many cases, whether an AI system is meeting the other principles. Transparency about how the AI was trained is a good start toward alleviating some of these issues.
A side effect of the black-box nature of most AI is that it is all too easy to blame any unintended consequences of a poorly trained AI on the lack of transparency itself. Those in charge can say that they don’t understand how the AI model works. This is why it is also important to set up accountability structures ahead of time. This can be done with mechanisms like an accountability committee responsible for maintaining standards of responsible AI in a company, and can extend to AI audits and AI testing that would run alongside other automated-testing processes. For example, at Microsoft, this is accomplished through Microsoft's AI, Ethics, and Effects in Engineering and Research (Aether) Committee, and our Office of Responsible AI.
It’s not an overstatement to say that machine learning is a major component of the future of technology. It is also perhaps the first time that mastering an important technology means learning its ethics along with the science and math behind it.
After all, with great AI power comes greater AI responsibility.
Want to learn more about AI for developers? Check out the latest AI content created by our Cloud Advocacy team and check out the #AIApril hashtag on Twitter to join in the conversation!