Microsoft 365 Copilot Blog

Microsoft leads the way earning CSA STAR for AI 42001

Om_Vaiti
Microsoft
Nov 20, 2025

We are proud to announce a major milestone in responsible AI: recognition as one of the first organizations to achieve STAR for AI Level 2 certification under the Cloud Security Alliance's (CSA) new STAR for AI 42001 program. This achievement pairs ISO/IEC 42001 certification with CSA's AI-specific transparency artifacts, setting a new benchmark for AI governance.

CSA has been at the forefront of building industry standards for transparency, first with the Cloud Controls Matrix and now with the AI Controls Matrix. Its Security, Trust, Assurance, and Risk (STAR) Registry is a publicly accessible registry that documents the security, privacy, and now AI controls of cloud computing offerings.

Microsoft 365’s STAR for AI 42001 Level 2 certification includes:

  • Transparency: Publication of the Consensus Assessments Initiative Questionnaires (CAIQs) we submitted to CSA’s STAR Registry — the AI CAIQ, demonstrating compliance with the AI Controls Matrix, and the CAIQ, demonstrating compliance with the Cloud Controls Matrix.
  • CSA quality validation: Recognition that the submitted CAIQs meet CSA’s established benchmarks for completeness, clarity, and governance maturity.

Building Trust with Transparency: The Microsoft Way 

At Microsoft, "Trust with Transparency" is a journey—anchored in our Responsible AI Standard, proven through rigorous implementation, validated by independent certification, and reinforced by public reporting.

1. Setting the Foundation: Microsoft Responsible AI Standard

Our journey begins with the Microsoft Responsible AI Standard, a comprehensive policy framework guiding every AI system we build or deploy. Crafted by Microsoft’s Office of Responsible AI (ORA), this standard covers six domains: Accountability, Transparency, Fairness, Reliability & Safety, Privacy & Security, and Inclusiveness. Each domain is backed by concrete requirements and operationalized across engineering, policy, and governance teams. The Standard evolves with new research, regulations, and stakeholder feedback.

2. Implementation in Practice: ISO/IEC 42001 Alignment for Microsoft 365 Copilot

We put our Responsible AI Standard into action through operational processes and purpose-built tooling. For Microsoft 365 Copilot, we mapped every ISO/IEC 42001 Annex A control to our existing Responsible AI, privacy, and security practices. This alignment is documented in the control artifact Microsoft 365 Copilot ISO 42001 Alignment, which provides:

  • The control requirements from ISO/IEC 42001
  • A narrative describing how Microsoft’s practices meet each requirement

This control artifact demonstrates how our teams implement responsible AI, from policy to engineering, and how we maintain accountability, transparency, and continual improvement. 

3. Deep Transparency: Responsible AI Transparency Report 

Finally, we believe that trust is earned through deep transparency. Our Annual Responsible AI Transparency Report details our progress, challenges, and learnings in Responsible AI. It covers how we operationalize our standards, the outcomes of our impact assessments, and the metrics we use to monitor and improve our systems. This report is publicly available for customers, partners, and regulators to review. 

Please refer to the additional Microsoft AI artifacts on the Service Trust Portal. We look forward to continuing to earn your trust with deep transparency.

Updated Nov 20, 2025
Version 1.0