Five best practices from nonprofit AI pioneers
Published Oct 13, 2023

Written by Erik Arnold, CTO at Microsoft Philanthropies

 

I’m lucky enough to spend time with nonprofit organizations every day. Learning how they use technology to advance mission outcomes is one of the most rewarding aspects of my role as CTO at Microsoft Philanthropies. It’s probably no surprise that lately most of these conversations have centered on AI’s potential impact, especially the challenges and risks of using AI in day-to-day operations. There is a lot of hope, and a lot of justified concern. Let’s break through some of the hype and doomsday coverage around AI and share the learnings we’ve captured from recent conversations with nonprofits getting started with AI. While this is not a representative survey of the sector, I do believe we can learn from these lessons and start exploring ways for your organization to get going with AI. 

 

You might wonder: why bother? Why spend time away from the things you know you have to get done to explore emerging technologies that may or may not help you accelerate your mission outcomes? It’s a valid question. There’s a lot of “noise” out there, including claims of miraculous time savings or, on the flip side, horror stories of “AI gone bad”. What I’d like to propose is a simple journey of learning, experimentation, and discovery. Get in the arena! Try things. See what works. Recruit help if it’s needed and available. Most importantly, be engaged and know how to engage others, both internally and externally, on this journey of discovery. Don’t be afraid of AI taking over your job. It’s the people who know how to take advantage of this emerging technology who will thrive. 

 

We hear about many different AI use cases from the nonprofit and humanitarian organizations we work with, and clear patterns emerge that highlight best practices for approaching your own AI learning journey:  

 

  1. Audit and monitor the existing use of public AI tools. Set guidelines for safe and responsible use. There is a quote from The Sun Also Rises by Ernest Hemingway. When the character Mike Campbell is asked, “How did you go bankrupt?” he replies, “Two ways. Gradually and then suddenly.” I feel like that’s how the world just experienced AI. It has been around for years, but previously required heavy investment in data science tooling, data scientists, and lots of internal data. We saw gradual adoption. Then suddenly, over the last ten months, we’ve seen ChatGPT from OpenAI achieve the fastest adoption to 100 million users of any technology ever invented. It completely democratized access to AI, is already embedded in many tools in common use, and, unsurprisingly, many nonprofits already have employees using generative AI tools at work. However, exposing sensitive internal data to these public tools can be risky. While trying to halt the use of AI-enabled tools in the workplace is like trying to stop the tide, it pays to be aware of how these tools handle the data used in prompts. Some tools provide protections and administrative controls, while others have fewer restrictions on how they use data. We recommend understanding what’s already in use in your organization, providing appropriate options that meet your organization’s needs, and educating users on guidelines for safe and responsible use.  

  2. Set up AI working groups. Get on the same page. When exploring how AI can benefit the organization, it’s best practice to involve a diverse group of stakeholders, not just technical staffers (if you’re a nonprofit lucky enough to have technical staff). Bring different perspectives to the table. Understand the different risks and desires. Develop a common taxonomy and aligned objectives for the use cases AI could address, then iterate, measure, and iterate again. It’s not about having the perfect framework on day one, but about creating a learning culture that can evolve to meet your goals.  

  3. Create a sandbox environment to learn quickly and safely. In software engineering, a sandbox is a non-production environment meant for experimentation: a place to test new ideas and try out new technology. A sandbox lets you explore data with AI models safely, disconnected from mission-critical operations (see the sandbox sketch after this list).  

  4. Enable access to centralized data and develop guidelines for sharing data internally and externally. In computer science, the adage “garbage in, garbage out” captures the idea that the quality of a system’s output is only as good as the quality of its input. It should be no surprise, then, that the value of AI models is closely tied to the quality of, and access to, the data exposed to them. One key challenge for many organizations is a fragmented data environment, where critical information is dispersed across different tools and data silos. Think hard about both the quality of and the access to the data exposed to AI in your ecosystem. Consider starting with high-quality data to speed innovation and ideation, and make that data appropriately accessible in the cloud to improve security and performance (see the data-quality sketch after this list).  

  5. Develop AI skills via training, partnerships, and vendor resources. Remember, AI is just a tool and it’s here to assist you, the human. To most effectively use a tool, train on it. Every organization will come under increasing pressure to train staff on the latest AI capabilities, the appropriate considerations, and the risks. Fortunately, more and more training resources are coming online every day. 
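
To make the sandbox idea in point 3 concrete, here is a minimal sketch of a sandbox-style experiment. It assumes the openai Python package (v1+) and an API key in the OPENAI_API_KEY environment variable; the model name and the synthetic donor notes are placeholders I made up for illustration, which is exactly the point: no real constituent data enters the sandbox.

```python
# A sandbox-style experiment: summarize SYNTHETIC donor notes with a
# generative model, keeping real constituent data out of the loop.
# Assumes the `openai` package (v1+) and OPENAI_API_KEY set in the
# environment; the model name below is a placeholder.
from openai import OpenAI

# Made-up records for experimentation -- never paste real donor data here.
synthetic_notes = [
    "Donor A gave $50 in March and asked about volunteer openings.",
    "Donor B attended the spring gala and pledged a recurring monthly gift.",
]

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; use whatever your sandbox provides
    messages=[
        {"role": "system", "content": "You summarize donor notes for a nonprofit."},
        {"role": "user", "content": "Summarize these notes:\n" + "\n".join(synthetic_notes)},
    ],
)
print(response.choices[0].message.content)
```

Because the environment is disconnected from production systems and fed only synthetic data, a failed experiment costs nothing but a little time.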
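
And to make the “garbage in, garbage out” point concrete, here is a small, hypothetical data-quality gate using pandas. The donations.csv file name stands in for whatever export your CRM produces, and the checks are deliberately basic; the idea is simply to profile data before handing it to an AI model.

```python
# A basic "garbage in, garbage out" guardrail: profile a dataset for
# duplicates and missing values before exposing it to an AI model.
# Assumes pandas; donations.csv is a hypothetical CRM export.
import pandas as pd

df = pd.read_csv("donations.csv")

report = {
    "rows": len(df),
    "duplicate_rows": int(df.duplicated().sum()),
    "missing_by_column": df.isna().sum().to_dict(),
}
print(report)

# Only hand rows that pass the basic quality gate to downstream AI tooling.
clean = df.drop_duplicates().dropna()
print(f"{len(clean)} of {len(df)} rows pass the basic quality gate")
```

Even a gate this simple surfaces fragmentation problems early, before they become confusing model outputs.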

 

Now is the time to start learning, experimenting, and using AI. Check out these resources to help you get going: 
