April is AI month for the developer community. You’re going to notice that whenever Azure Machine Learning is discussed, the phrase responsible AI is going to crop up. This all goes back to a simple principle Uncle Ben taught Spider-Man: “With great power comes great responsibility.” Because being able to work with machine learning is one of your superpowers, it is important to always use it with an understanding of how it may be used by others and what the unintended consequences might be.
To underscore the point, let’s look at some well-known examples of irresponsible AI:
In each of these cases, the good intentions of the programmers were eventually undermined by circumstances. Even more concerning is AI being used to do harm intentionally. We see this with applications like Russian bot networks on Facebook and Twitter, and the persistent fear that video-manipulation technology will be used to sway democratic elections with misleading footage.
The best tonic for these worries is the recognition that AI is also being used for good:
Responsible AI at Microsoft can be summed up by these six principles:

- Fairness
- Reliability and safety
- Privacy and security
- Inclusiveness
- Transparency
- Accountability
While it’s important to internalize all of these principles when we work in AI, the two most important are transparency and accountability. The biggest difficulty in coming to terms with AI is that machine learning, especially deep learning, works like a black box. Engineers put training data in and get models out, but don’t always understand why a model makes the decisions it does. In many cases, this makes it difficult to assess whether an AI system is meeting the other principles. Transparency about how the AI was trained is a good start toward alleviating some of these issues.
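To make that concrete, here is a minimal sketch of one transparency technique: permutation importance, which probes a trained model from the outside by shuffling each feature and measuring how much the model’s score drops. The scikit-learn classifier and synthetic dataset below are stand-ins for whatever model and held-out data you are actually auditing.

```python
# A peek inside a "black box": permutation importance measures how much
# the model's score drops when each feature is shuffled, without needing
# any access to the model's internals.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Stand-in data: any trained classifier and held-out set would work here.
X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature 10 times and report the average drop in accuracy.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for i, (mean, std) in enumerate(zip(result.importances_mean,
                                    result.importances_std)):
    print(f"feature {i}: importance {mean:.3f} +/- {std:.3f}")
```

Techniques like this don’t fully explain a model, but they give reviewers something concrete to inspect when assessing it against the principles above.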
A side effect of the black-box nature of most AI is that it is all too easy to blame any unintended consequences of a poorly trained model on the lack of transparency itself: those in charge can simply say they don’t understand how the model works. This is why it is also important to set up accountability structures ahead of time. That can mean an accountability committee responsible for maintaining a company’s standards of responsible AI, and it can extend to AI audits and AI tests that run alongside other automated-testing processes. At Microsoft, for example, this is the role of the AI, Ethics, and Effects in Engineering and Research (Aether) Committee and the Office of Responsible AI.
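As an illustration of what such an audit might look like in code, here is a minimal sketch of a fairness check written as an ordinary unit test, so it can run alongside the rest of an automated test suite. The demographic-parity metric, the group labels, and the 0.1 threshold are all hypothetical choices for this example; a real audit would use the metrics and limits your accountability process has agreed on.

```python
# A fairness check that could run alongside ordinary automated tests
# (e.g., via pytest). The data, group labels, and 0.1 threshold below
# are hypothetical stand-ins for a real audited model's outputs.
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Absolute gap in positive-prediction rate between two groups."""
    rate_a = y_pred[group == "A"].mean()
    rate_b = y_pred[group == "B"].mean()
    return abs(rate_a - rate_b)

def test_model_meets_parity_threshold():
    # In a real pipeline these would come from the model under audit.
    y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 1, 1, 0])
    group = np.array(["A", "A", "A", "A", "A",
                      "B", "B", "B", "B", "B"])
    assert demographic_parity_difference(y_pred, group) < 0.1
```

Wiring a check like this into continuous integration means a fairness regression fails the build the same way a broken feature would, which is exactly the kind of ahead-of-time accountability structure described above.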
It’s not an overstatement to say that machine learning is a major component of the future of technology. It is also perhaps the first time that mastering an important technology means learning its ethics along with the science and math behind it.
After all, with great AI power comes greater AI responsibility.
Want to learn more about AI for developers? Check out the latest AI content created by our Cloud Advocacy team and follow the #AIApril hashtag on Twitter to join in the conversation!