Excellent framing, Herain. I particularly like the point that securing AI requires a coordinated effort across IT, developers, and security leaders, not just security teams.
The stat that 55% of leaders lack clarity on AI regulatory requirements resonates strongly with what we're seeing in practice.
Microsoft is building the right technical primitives here: Entra Agent ID for identity, Purview for data governance, Defender for threat protection. These are essential infrastructure.
But there's a complementary challenge that sits above the tooling layer: organisational governance readiness. Do organisations have the policies, accountability structures, risk frameworks, and oversight mechanisms to use these controls effectively? In our experience building AI governance assessments on Azure, the gap usually isn't technical; it's that leadership teams lack a structured, scored view of where they stand against frameworks like the EU AI Act, ISO 42001, or the UK DSIT principles.
The Compliance Manager enhancements with AI-powered regulatory templates are a significant step toward closing this gap. I'm curious whether Microsoft sees a role for partner solutions that provide the organisational assessment layer, helping customers understand their governance maturity before they configure the technical controls.
Marcus Hall, Founder — Revue-ai (Azure-hosted AI intelligence platform)