Microsoft 365 Copilot
3 MIN READ

Now Available: the Copilot for Microsoft 365 Risk Assessment QuickStart Guide

tannerbriggs
Microsoft
Aug 12, 2024

Copilot for Microsoft 365 is an intelligent assistant designed to enhance user productivity by leveraging relevant information and insights from various sources such as SharePoint, OneDrive, Outlook, Teams, Bing, and third-party solutions via connectors and extensions. Using natural language processing and machine learning, Copilot understands user queries and delivers personalized results, generating summaries, insights, and recommendations. 

 

This QuickStart guide aims to assist organizations in performing a comprehensive risk assessment of Copilot for Microsoft 365. The document serves as an initial reference for risk identification, mitigation exploration, and stakeholder discussions. It is structured to cover:

 

  1. AI Risks and Mitigations Framework: Outlining the primary categories of AI risks and how Microsoft addresses them at both company and service levels. 
  2. Sample Risk Assessment: Presenting a set of real customer-derived questions and answers to assess the service and its risk posture. 
  3. Additional Resources: Providing links to further materials on Copilot for Microsoft 365 and AI risk management. 

 

Copilot for Microsoft 365 Risks and Mitigations 

 

Bias 

AI technologies can unintentionally perpetuate societal biases. Copilot for Microsoft 365 uses foundation models from OpenAI, which incorporate bias mitigation strategies during their training phases. Microsoft builds upon these mitigations by designing AI systems to provide equitable service quality across demographic groups, implementing measures to minimize disparities in outcomes for marginalized groups, and developing AI systems that avoid stereotyping or demeaning any cultural or societal group. 

 

Disinformation 

Disinformation is false information spread to deceive. The QuickStart guide covers Copilot for Microsoft 365 mitigations, which include grounding responses in customer and web data and requiring explicit user instruction before any action is taken.

 

Overreliance and Automation Bias 

Automation bias occurs when users over-rely on AI-generated information, accepting it without verification. The QuickStart guide discusses methods of mitigating automation bias, such as informing users that they are interacting with AI, disclaimers about the fallibility of AI-generated content, and more.
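As a purely illustrative sketch of the disclosure-and-disclaimer pattern the guide describes, an application surfacing AI output might wrap every response in explicit notices. The names and strings below are hypothetical, not a Microsoft API:

```python
# Hypothetical illustration of AI-disclosure mitigations for automation
# bias: label output as AI-generated and append a fallibility disclaimer.

AI_DISCLOSURE = "This response was generated by AI."
FALLIBILITY_NOTE = "AI-generated content may be incorrect. Verify important information."

def wrap_ai_response(response_text: str) -> str:
    """Prepend an AI-disclosure line and append a fallibility disclaimer."""
    return f"{AI_DISCLOSURE}\n\n{response_text}\n\n{FALLIBILITY_NOTE}"

print(wrap_ai_response("Q3 revenue grew 12% quarter over quarter."))
```

Keeping both notices adjacent to the content (rather than in a separate help page) is what makes the mitigation effective at the moment of reading.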

 

Ungroundedness (Hallucination) 

AI models sometimes generate information not based on input data or grounding data. The QuickStart guide explores various mitigations for ungroundedness, including performance and effectiveness measures, metaprompt engineering, harms monitoring, and more. 
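To make the idea of groundedness measurement concrete, here is a toy lexical-overlap check for whether a generated sentence is supported by grounding documents. Real groundedness evaluation uses far more robust methods; nothing here reflects Microsoft's actual implementation, and all names are invented for illustration:

```python
# Toy groundedness check: a sentence counts as "grounded" if enough of
# its substantive words appear in at least one grounding document.

def is_grounded(sentence: str, grounding_docs: list[str], threshold: float = 0.5) -> bool:
    """Return True if enough of the sentence's content words appear in any grounding doc."""
    words = {w.lower().strip(".,") for w in sentence.split() if len(w) > 3}
    if not words:
        return True  # nothing substantive to check
    for doc in grounding_docs:
        doc_words = {w.lower().strip(".,") for w in doc.split()}
        overlap = len(words & doc_words) / len(words)
        if overlap >= threshold:
            return True
    return False

docs = ["The project deadline was moved to March 15 after the review."]
print(is_grounded("The deadline moved to March 15.", docs))              # True
print(is_grounded("The budget doubled to four million dollars.", docs))  # False
```

Production systems replace word overlap with entailment models or claim-level verification, but the shape of the check, comparing generated claims against grounding data, is the same.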

 

Privacy 

Data is a critical element for the functionality of an AI system, and without proper safeguards this data may be exposed to risk. The QuickStart guide describes how Microsoft ensures customer data remains private and is governed by stringent privacy commitments. Access controls and data usage parameters are also discussed.

 

Resiliency 

Service disruptions can impact organizations. The QuickStart guide discusses mitigations such as redundancy, data integrity checking, uptime SLAs, and more. 

 

Data Leakage 

The QuickStart guide explores data loss prevention (DLP) measures, including zero trust, logical isolation, and rigorous encryption.

 

Security Vulnerabilities 

Security is integral to AI development. Microsoft follows Security Development Lifecycle (SDL) practices, which include training, threat modeling, static and dynamic security testing, incident response, and more.

 

Sample Risk Assessment: Questions & Answers 

This section contains a comprehensive set of questions and answers based on real customer inquiries. These cover privacy, security, supplier relationships, and model development concerns. The responses are informed by various Microsoft teams and direct attestations from OpenAI. Some key questions include: 

 

  • Privacy: How personal data is anonymized before model training. 
  • Security: Measures in place to prevent AI model compromise. 
  • Supplier Relationships: Due diligence resources on OpenAI, a Microsoft strategic partner. 
  • Model Development: Controls for data integrity, access management, and threat modeling. 

 

By using this guide, organizations can efficiently build the understanding of the AI risk landscape needed to evaluate Copilot for Microsoft 365 and enable enterprise deployment. It serves as a foundational tool for risk assessment and frames further dialogue with Microsoft to address specific concerns or requirements.

 

Additional Resources 

In addition to the framework and the sample assessment, the QuickStart guide links to a host of resources and materials offering more detailed insights into Copilot for Microsoft 365 and AI risk management.

Updated Aug 13, 2024
Version 2.0
  • Why is the document on the service trust portal? This requires everyone who downloads the file to enter into an NDA with Microsoft - which not everyone will be authorized to do.
    tannerbriggs 

  • Laxman V

    Excellent overview, many organizations are looking for these details. Thanks for sharing. 

  • Michel-Ehlert

    DanielGlennDean_Gross, while I do get your point, I see it differently.

    Why would a commercial company share deeper insights into the products it sells, and the (safe) applicability and usage of those products, just like that with everyone (including competitors)? It is not an open-source product, free for anyone to use, right?


    So I understand why the document is shared under NDA: existing customers can access it and use it internally to make sure they use AI safely and responsibly, making this a good document for their customers.

    Customers still considering a purchase will have plenty of opportunity to engage in conversation on the same.