Authorization and Governance for AI Agents: Runtime Authorization Beyond Identity at Scale
Designing Authorization‑Aware AI Agents at Scale: Enforcing Runtime RBAC + ABAC with Approval Injection (JIT)

Microsoft Entra Agent Identity enables organizations to govern and manage AI agent identities in Copilot Studio, improving visibility and identity-level control. However, as enterprises deploy multiple autonomous AI agents, identity and OAuth permissions alone cannot answer a more critical question: “Should this action be executed now, by this agent, for this user, under the current business and regulatory context?”

This post introduces a reusable Authorization Fabric, combining a Policy Enforcement Point (PEP) and a Policy Decision Point (PDP), implemented as a Microsoft Entra‑protected endpoint using Azure Functions/App Service authentication. Every AI agent (Copilot Studio or AI Foundry/Semantic Kernel) calls this fabric before tool execution and receives a deterministic runtime decision: ALLOW / DENY / REQUIRE_APPROVAL / MASK.

Who this is for
- Anyone building AI agents (Copilot Studio, AI Foundry/Semantic Kernel) that call tools, workflows, or APIs
- Organizations scaling to multiple agents that need consistent runtime controls
- Teams operating in regulated or security‑sensitive environments, where decisions must be deterministic and auditable

Why a V2? Identity is necessary; runtime authorization is missing
Entra Agent Identity (preview) integrates Copilot Studio agents with Microsoft Entra so that newly created agents automatically get an Entra agent identity, manageable in the Entra admin center, with identity activity logged in Entra. That solves who the agent is and improves identity governance visibility. But multi-agent deployments introduce a new risk class: autonomous execution sprawl, meaning many agents, operating with delegated privileges, invoking the same backends independently.

OAuth and API permissions answer “can the agent call this API?” They do not answer “should the agent execute this action under business policy, compliance constraints, data boundaries, and approval thresholds?” This is where a runtime authorization decision plane becomes essential.

The pattern: Microsoft Entra‑Protected Authorization Fabric (PEP + PDP)
Instead of embedding RBAC logic independently inside every agent, use a shared fabric:
- PEP (Policy Enforcement Point): gatekeeper invoked before any tool/action
- PDP (Policy Decision Point): evaluates RBAC + ABAC + approval policies
- Decision output: ALLOW / DENY / REQUIRE_APPROVAL / MASK

This Authorization Fabric functions as a shared enterprise control plane, decoupling authorization logic from individual agents and enforcing policies consistently across all autonomous execution paths.

Architecture (POC reference architecture)
Use a single runtime decision plane that sits between agents and tools.

What’s important here
- Every agent (Copilot Studio or AI Foundry/SK) calls the Authorization Fabric API first
- The fabric is a protected endpoint (a Microsoft Entra‑protected endpoint is required)
- Tools (Graph/ERP/CRM/custom APIs) are invoked only after an ALLOW decision (or approval)

Trust boundaries enforced by this architecture
- Agents never call business tools directly without a prior authorization decision
- The Authorization Fabric validates caller identity via Microsoft Entra
- Authorization decisions are centralized, consistent, and auditable
- Approval workflows act as a runtime “break-glass” control for high-impact actions

This ensures identity, intent, and execution are independently enforced, rather than implicitly trusted.
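Before walking through the runtime flow, here is a minimal agent-side sketch of that gate in Python. It is illustrative only: the fabric URL, the decision payload fields (decision, maskFields, approvalId, reason), and the way masking is applied are assumptions for this POC, not a fixed contract.

```python
# Minimal agent-side PEP gate (sketch). Assumptions: fabric URL, token acquisition,
# and response fields (decision, maskFields, approvalId, reason) are illustrative.
import requests

FABRIC_URL = "https://contoso-auth-fabric.azurewebsites.net/api/authorize"  # hypothetical endpoint

def authorized_tool_call(access_token: str, decision_request: dict, tool_fn, *args, **kwargs):
    """Ask the Authorization Fabric first; run the tool only on ALLOW (or MASK with masking applied)."""
    resp = requests.post(
        FABRIC_URL,
        json=decision_request,
        headers={"Authorization": f"Bearer {access_token}"},
        timeout=10,
    )
    resp.raise_for_status()
    decision = resp.json()

    if decision["decision"] in ("ALLOW", "MASK"):
        result = tool_fn(*args, **kwargs)
        if decision["decision"] == "MASK":
            # Apply masking rules returned by the PDP (assumed field-list shape).
            for field in decision.get("maskFields", []):
                if isinstance(result, dict) and field in result:
                    result[field] = "***"
        return result

    if decision["decision"] == "REQUIRE_APPROVAL":
        # Hand off to an approval workflow; never execute inline.
        return {"status": "pending_approval", "approvalId": decision.get("approvalId")}

    # DENY, or anything unrecognized, blocks execution: fail closed.
    return {"status": "denied", "reason": decision.get("reason", "policy")}
```

The design property that matters is fail-closed behavior: anything other than an explicit ALLOW (or a satisfied approval) blocks the tool call.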
Runtime flow (Decision → Approval → Execution)

Here is the runtime sequence as a flow diagram:

```mermaid
flowchart TD
    START(["START"]) --> S1["[1] User Request"]
    S1 --> S2["[2] Agent Extracts Intent\n(action, resource, attributes)"]
    S2 --> S3["[3] Call /authorize\n(Entra protected)"]
    S3 --> S4
    subgraph S4["[4] PDP Evaluation"]
        ABAC["ABAC: Tenant · Region · Data Sensitivity"]
        RBAC["RBAC: Entitlement Check"]
        Threshold["Approval Threshold"]
        ABAC --> RBAC --> Threshold
    end
    S4 --> Decision{"[5] Decision?"}
    Decision -->|"ALLOW"| Exec["Execute Tool / API"]
    Decision -->|"MASK"| Masked["Execute with Masked Data"]
    Decision -->|"DENY"| Block["Block Request"]
    Decision -->|"REQUIRE_APPROVAL"| Approve{"[6] Approval Flow"}
    Approve -->|"Approved"| Exec
    Approve -->|"Rejected"| Block
    Exec --> Audit["[7] Audit & Telemetry"]
    Masked --> Audit
    Block --> Audit
    Audit --> ENDNODE(["END"])

    style START fill:#4A90D9,stroke:#333,color:#fff
    style ENDNODE fill:#4A90D9,stroke:#333,color:#fff
    style S1 fill:#5B5FC7,stroke:#333,color:#fff
    style S2 fill:#5B5FC7,stroke:#333,color:#fff
    style S3 fill:#E8A838,stroke:#333,color:#fff
    style S4 fill:#FFF3E0,stroke:#E8A838,stroke-width:2px
    style ABAC fill:#FCE4B2,stroke:#999
    style RBAC fill:#FCE4B2,stroke:#999
    style Threshold fill:#FCE4B2,stroke:#999
    style Decision fill:#fff,stroke:#333
    style Exec fill:#2ECC71,stroke:#333,color:#fff
    style Masked fill:#27AE60,stroke:#333,color:#fff
    style Block fill:#C0392B,stroke:#333,color:#fff
    style Approve fill:#F39C12,stroke:#333,color:#fff
    style Audit fill:#3498DB,stroke:#333,color:#fff
```

Design principle: No tool execution occurs until the Authorization Fabric returns ALLOW, or a REQUIRE_APPROVAL decision is satisfied via an approval workflow.

Where Power Automate fits (important for readers)
In most Copilot Studio implementations, the agent calls Power Automate (agent flows), which is the practical integration layer for calling enterprise services and APIs. Copilot Studio supports “agent flows” as a way to extend agent capabilities with low-code workflows. For this pattern, Power Automate typically:
- acquires/uses the right identity context for the call (depending on your tenant setup),
- calls the /authorize endpoint of the Authorization Fabric, and
- returns the decision payload to the agent for branching.

Copilot Studio also supports calling REST endpoints directly using the HTTP Request node, including passing headers such as Authorization: Bearer <token>.

Protected endpoint only: Securing the Authorization Fabric with Microsoft Entra
For this V2 pattern, the Authorization Fabric must be protected as a Microsoft Entra‑protected endpoint on Azure Functions/App Service (built‑in authentication). Microsoft Learn provides the configuration guidance for enabling Microsoft Entra as the authentication provider for Azure App Service / Azure Functions.

Step 1 — Create the Authorization Fabric API (Azure Function)
Expose an HTTP endpoint for authorization decisions (the /authorize endpoint referenced throughout this post).

Step 2 — Enable the Microsoft Entra‑protected endpoint on the Function App
In the Azure portal:
- Function App → Authentication
- Add identity provider → Microsoft
- Choose Workforce configuration (enterprise tenant)
- Set Require authentication for all requests

This ensures the Authorization Fabric is not callable without a valid Entra token.

Step 3 — Optional hardening (recommended)
Depending on enterprise posture, layer in:
- IP restrictions / private endpoints
- APIM in front of the Function for rate limiting, request normalization, and centralized logging

(For a POC, keep it minimal and add hardening incrementally.)
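Step 1 above only names the endpoint. As a minimal sketch, assuming the Python v2 programming model for Azure Functions, the /authorize Function could look like the following; evaluate_policies here is a placeholder for the PDP logic (a fuller evaluation sketch appears later in this post).

```python
import json

import azure.functions as func

app = func.FunctionApp()

@app.route(route="authorize", methods=["POST"], auth_level=func.AuthLevel.ANONYMOUS)
def authorize(req: func.HttpRequest) -> func.HttpResponse:
    # AuthLevel.ANONYMOUS is deliberate: platform-level Microsoft Entra authentication
    # (App Service built-in auth, Step 2) rejects unauthenticated callers before the
    # function runs, so no function key is used here.
    try:
        payload = req.get_json()
    except ValueError:
        return func.HttpResponse("Invalid JSON", status_code=400)

    decision = evaluate_policies(payload)  # placeholder PDP call
    return func.HttpResponse(json.dumps(decision), mimetype="application/json")

def evaluate_policies(payload: dict) -> dict:
    # Placeholder: a real PDP loads policy packs from a central store (e.g., Cosmos DB)
    # and evaluates them in the deterministic order described below. Fail closed.
    return {"decision": "DENY", "reason": "no matching policy", "policyIds": []}
```

In a hardened setup, the function would also validate the caller claims that App Service authentication forwards to it before evaluating policy.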
Externalizing policy (so governance scales)
To make this pattern reusable across multiple agents, policies should not be hardcoded inside each agent. Instead, store policy definitions in a central policy store such as Cosmos DB (or an equivalent configuration store), and have the PDP load and evaluate policies at runtime.

Why this matters:
- Policy changes apply across all agents instantly (no agent republish)
- Central governance, versioning, and rollback become possible
- Audit and reporting become consistent across environments

(For the POC, a single JSON document per policy pack in Cosmos DB is sufficient. For production, add versioning and staged rollout.) Store one PolicyPack JSON document per environment (dev/test/prod). Include version, effectiveFrom, and priority for safe rollout/rollback.

Minimal decision contract (standard request / response)
To keep the fabric reusable across agents, standardize the request payload and the decision response:
- Request payload (example)
- Decision response (deterministic)
(A concrete sketch of both shapes appears in the policy evaluation example below.)

Example scenario (1 minute to understand)
Scenario: A user asks a Finance agent to create a Purchase Order for 70,000. Even if the user has API permission and the agent can technically call the ERP API, runtime policy should:
- return REQUIRE_APPROVAL (threshold exceeded),
- trigger an approval workflow, and
- execute only after approval is granted.

This is the difference between API access and authorized business execution.

Sample Policy Model (RBAC + ABAC + Approval)
This POC policy model intentionally stays simple while demonstrating both coarse- and fine-grained governance.

1) Coarse‑grained RBAC (roles → actions)
- FinanceAnalyst: CreatePO up to 50,000; ViewVendor
- FinanceManager: CreatePO up to 100,000 and/or approve higher spend

2) Fine‑grained ABAC (conditions at runtime)
ABAC evaluates context such as region, classification, tenant boundary, and risk.

3) Approval injection (agent‑level JIT execution)
For higher-risk/high-impact actions, the fabric returns REQUIRE_APPROVAL rather than a hard deny (when appropriate).

How policies should be evaluated (deterministic order)
To ensure predictable and auditable behavior, evaluate in a deterministic order:
1. Tenant isolation & residency (ABAC hard deny first)
2. Classification rules (deny or mask)
3. RBAC entitlement validation
4. Threshold/risk evaluation
5. Approval injection (JIT step-up)

This prevents approval workflows from bypassing foundational security boundaries such as tenant isolation or data sovereignty.

Copilot Studio integration (enforcing runtime authorization)
Copilot Studio can call external REST APIs using the HTTP Request node, including passing headers such as Authorization: Bearer <token> and binding a response schema for branching logic. Copilot Studio also supports using flows with agents (“agent flows”) to extend capabilities and orchestrate actions.

Option A (Recommended): Copilot Studio → Agent Flow (Power Automate) → Authorization Fabric
Why: Flows are a practical place to handle token acquisition patterns, approval orchestration, and standardized logging.

Topic flow:
1. Extract user intent + parameters.
2. Call an agent flow that calls /authorize and returns the decision payload.
3. Branch in the topic:
   - If ALLOW → proceed to the tool call
   - If REQUIRE_APPROVAL → trigger the approval flow; proceed only if approved
   - If DENY → stop and explain the policy reason

Important: Tool execution must never be reachable through an alternate topic path that bypasses the authorization check.
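To make the decision contract and the evaluation order above concrete, here is a minimal PDP sketch in Python. The policy values (roles, limits, regions) and field names are illustrative assumptions for this POC; a real implementation would load a versioned PolicyPack from the central policy store and return richer metadata (policyIds, approvalId, correlationId).

```python
# Illustrative policy pack; in practice this is loaded from the policy store (e.g., Cosmos DB).
POLICY_PACK = {
    "allowedTenants": {"contoso.onmicrosoft.com"},          # hypothetical tenant
    "allowedRegions": {"EU", "US"},
    "maskedClassifications": {"Confidential"},
    "rbac": {  # role -> action -> spend limit (None = no monetary limit)
        "FinanceAnalyst": {"CreatePO": 50_000, "ViewVendor": None},
        "FinanceManager": {"CreatePO": 100_000, "ApprovePO": None},
    },
}

def evaluate_policies(req: dict) -> dict:
    """Deterministic order: tenant/residency -> classification -> RBAC -> threshold -> approval."""
    # 1) Tenant isolation & residency: ABAC hard deny first.
    if req["tenant"] not in POLICY_PACK["allowedTenants"] or req["region"] not in POLICY_PACK["allowedRegions"]:
        return {"decision": "DENY", "reason": "tenant or residency boundary", "policyIds": ["abac-tenant"]}

    # 2) Classification rules: deny or mask.
    if req.get("dataClassification") in POLICY_PACK["maskedClassifications"]:
        return {"decision": "MASK", "reason": "sensitive classification", "policyIds": ["abac-classification"]}

    # 3) RBAC entitlement validation.
    entitlements = POLICY_PACK["rbac"].get(req["role"], {})
    if req["action"] not in entitlements:
        return {"decision": "DENY", "reason": "no entitlement for action", "policyIds": ["rbac"]}

    # 4) Threshold/risk evaluation and 5) approval injection instead of a hard deny.
    limit = entitlements[req["action"]]
    if limit is not None and req.get("amount", 0) > limit:
        return {"decision": "REQUIRE_APPROVAL", "reason": "threshold exceeded", "policyIds": ["rbac", "approval"]}

    return {"decision": "ALLOW", "reason": "all checks passed", "policyIds": ["rbac"]}

# The example scenario: a 70,000 purchase order requested under a FinanceAnalyst entitlement.
request_payload = {
    "tenant": "contoso.onmicrosoft.com", "region": "EU", "role": "FinanceAnalyst",
    "action": "CreatePO", "amount": 70_000, "dataClassification": "General",
}
print(evaluate_policies(request_payload))  # -> REQUIRE_APPROVAL (threshold exceeded)
```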
Option B: Direct HTTP Request node to the Authorization Fabric
Use the Send HTTP request node to call the authorization endpoint and branch using the response schema. This approach is clean, but token acquisition and secure, secretless authentication are often simpler when handled via a managed integration layer (flow + connector).

AI Foundry / Semantic Kernel integration (tool invocation gate)
For Foundry/SK agents, the integration point is before tool execution. Semantic Kernel supports Azure AI agent patterns and tool integration, making it a natural place to enforce a pre-tool authorization check.

Pseudo-pattern:
1. Agent extracts intent + context
2. Calls the Authorization Fabric
3. Enforces the decision
4. Executes the tool only when allowed (or after approval)

Telemetry & audit (what Security Architects will ask for)
Even the best policy engine is incomplete without audit trails. At minimum, log:
- agentId, userUPN, action, resource
- decision + reason + policyIds
- approval outcome (if any)
- correlationId for downstream tool execution

Why it matters: you now have a defensible answer to “Why did an autonomous agent execute this action?”

Security signal bonus: Denials, unusual approval rates, and repeated policy mismatches can also indicate prompt injection attempts, mis-scoped agents, or governance drift.

What this enables (and why it scales)
With a shared Authorization Fabric:
- Avoid duplicating authorization logic across agents
- Standardize decisions across Copilot Studio + Foundry agents
- Update governance once (policy change) and apply everywhere
- Make autonomy safer without blocking productivity

Closing: Identity gets you who. Runtime authorization gets you whether/when/how.
Copilot Studio can automatically create Entra agent identities (preview), improving identity governance and visibility for agents. But safe autonomy requires a runtime decision plane. Securing that plane as an Entra-protected endpoint is foundational for enterprise deployments. In enterprise environments, autonomous execution without runtime authorization is equivalent to privileged access without PIM: powerful, fast, and operationally risky.

Optimizing Cybersecurity Costs with FinOps
This blog highlights the integration of two essential practices: cybersecurity best practices and effective budget management across tools and services.

Let’s understand FinOps
FinOps is a cultural practice for cloud cost management. It enables teams to take ownership of cloud usage, and it helps organizations maximize value by fostering collaboration among technology, finance, and business teams on data-driven spending decisions.

FinOps Framework
The FinOps Framework works across the following areas:

Principles
- Collaborate as a team.
- Take responsibility for cloud resources.
- Ensure timely access to reports.

Phases
- Inform: visibility and allocation
- Optimize: utilization
- Operate: continuous improvement and operations

Maturity
- Crawl, Walk, Run

Key Components of Cybersecurity Budgets

Preventive Measures
Preventive measures serve as the initial line of defense in cybersecurity. These measures encompass firewalls, antivirus software, and encryption tools. Their primary objective is to avert cybersecurity incidents before they occur. They constitute a critical component of any comprehensive cybersecurity strategy and often account for a substantial portion of the budget.

Detection & Monitoring
Tools like Azure Firewall and Azure Monitor are essential for identifying potential security threats and alerting teams early to minimize impact.

Incident Response
Incident response comprises the measures taken to mitigate the impact of a security breach after it occurs. This process includes isolating compromised systems, eliminating malicious software, and restoring affected systems to normal functionality.

Training & Awareness
Training and awareness are crucial for cybersecurity: educate employees about threats, teach them how to avoid risks, and inform them of company security policies. Investing in training can prevent security incidents.

FinOps approach to managing the cost of security

Security Cost-Optimization
Security is crucial as threats and cyber-attacks evolve. Azure FinOps helps identify and remove cloud spending inefficiencies, allowing resources to be reallocated to advanced threat detection, robust controls such as MFA and ZTNA, and continuous monitoring tools. Azure FinOps provides visibility into cloud costs, identifying underutilized or redundant resources and over-provisioned budgets that can be redirected to cybersecurity. Continuous real-time monitoring helps spot trends, anomalies, and inefficiencies, aligning resources with strategic goals. Regular audits may reveal overlapping subscriptions or unused security features, while ongoing monitoring prevents these issues from recurring. The efficiency gained can fund advanced threat detection, new protection measures, or security training. FinOps ensures every dollar spent on cloud services adds value, transforming waste into a secure, efficient cloud environment.

Risk Mitigation
FinOps boosts visibility and transparency, helping teams find weaknesses and risks in licenses, identities, devices, and access points. This is crucial for improving IAM, configuring access controls correctly, and using MFA to protect systems and data. It also involves continuous monitoring to spot security gaps early and align measures with organizational goals. FinOps helps manage financial risk by estimating breach costs and allocating resources efficiently. Regular risk assessments and budget adjustments ensure effective security investments that balance defense and business objectives.
Improved Compliance and Governance
Complying with standards like GDPR, HIPAA, or PCI-DSS is essential for strong cyber defenses. A FinOps approach helps by automating compliance reporting, allowing organizations to use cost-effective tools such as the Azure FinOps toolkit to meet regulations.

Conclusion
Azure FinOps is a useful tool for managing cybersecurity costs. It enhances cost visibility and accountability, enables budget optimization, and assists with compliance audits and reporting. It also helps businesses invest their resources effectively and efficiently.

Microsoft Purview Best Practices
Microsoft Purview is a solution that helps organizations manage data and compliance. It also uses AI to classify data, monitor compliance, and identify risks. Key features include data discovery, classification, governance, retention, compliance management, encryption, and access controls. Purview ensures data security, prevents insider threats, and helps implement data loss prevention policies to meet compliance requirements.

Hello everyone - this is just a short introduction. I am Dogan Colak. I have been working as an M365 Consultant for about 5 years, holding certifications such as MCT, SC-100, SC-200, SC-300, and MS-102, with a focus on Security & Compliance. This year, I am excited to share what I have learned with the Microsoft Technology Community. In the coming days, I will be publishing videos and articles based on the training agenda I have created. I will also share these articles on LinkedIn, so feel free to follow me there. I am always open to feedback and suggestions. See you soon!

Streamlining AI Compliance: Introducing the Premium Template for Indonesia's PDP Law in Purview
In today’s evolving regulatory environment, businesses must navigate complex data privacy laws while fostering customer trust, especially as AI transforms industries. To support organizations in meeting compliance requirements, we’re introducing the Premium Assessment Template for Indonesia's Personal Data Protection (PDP) Law within Microsoft Purview Compliance Manager. This powerful tool automates critical compliance tasks, simplifies assessments, and integrates seamlessly with Microsoft’s E5 security and Purview solutions, helping businesses reduce manual effort and ensure compliance more efficiently. Discover how this template can streamline your compliance efforts and build trust in an AI-driven world.

New Blog | Architecting secure Gen AI applications: Preventing Indirect Prompt Injection Attacks
By Roee Oz

As developers, we must be vigilant about how attackers could misuse our applications. While maximizing the capabilities of Generative AI (Gen-AI) is desirable, it's essential to balance this with security measures to prevent abuse. In a recent blog post, we discussed how a Gen AI application should use user identities for accessing sensitive data and performing sensitive operations. This practice reduces the risk of jailbreaks and prompt injections, preventing malicious users from gaining access to resources they don’t have permissions to.

However, what if an attacker manages to run a prompt under the identity of a valid user? An attacker can hide a prompt in an incoming document or email, and if a non-suspecting user uses a Gen-AI large language model (LLM) application to summarize the document or reply to the email, the attacker’s prompt may be executed on behalf of the end user. This is called indirect prompt injection.

Let's start with some definitions:

Prompt injection vulnerability occurs when an attacker manipulates a large language model (LLM) through crafted inputs, causing the LLM to unknowingly execute the attacker's intentions. This can be done directly by "jailbreaking" the system prompt or indirectly through manipulated external inputs, potentially leading to data exfiltration, social engineering, and other issues.

Direct prompt injections, also known as "jailbreaking," occur when a malicious user overwrites or reveals the underlying system prompt. This allows attackers to exploit backend systems by interacting with insecure functions and data stores accessible through the LLM.

Indirect prompt injections occur when an LLM accepts input from external sources that can be controlled by an attacker, such as websites or files. The attacker may embed a prompt injection in the external content, hijacking the conversation context. This can lead to unstable LLM output, allowing the attacker to manipulate the LLM or additional systems that the LLM can access. Indirect prompt injections also do not need to be human-visible or readable, as long as the text is parsed by the LLM.

Read the full post here: Architecting secure Gen AI applications: Preventing Indirect Prompt Injection Attacks

Information Barriers - Student/Staff Content
We are a career center school and we use Teams, SharePoint, and OneDrive with our students and staff. I believe Information Barriers is the correct tool to resolve the concerns that we have, but I am struggling to find a definitive answer on one question that has come up. In our scenario, we are looking to restrict students from seeing documents and information that is only meant for staff. We are a single tenant, so this is difficult to accomplish with the other tools available. My question has to do with content that is staff created but meant to be shared with students. So in our scenario we would not want an internal document on student discipline procedures to be searchable or available to students, but we do want all staff to have access. However, if an instructor creates a document for her class, we do want her to be able to share that with her class. Is it possible to use IBs and still allow teachers the ability to share with students through Teams classrooms?

Windows and Office license discounts
Hello dear community! I am a government employee assigned to contact Microsoft to inquire about your support of education in developing countries. What offers and special programmes do you have to aid educational institutions, including public schools? Best regards, Fatima, ICT Center Dushanbe.

Beginner to Security Analyst
Hi Community, I have searched across the internet but cannot find a real-life example or path. I'm a complete beginner in the security field (passed my SC-900, AI-900, and AZ-900). I want to become a security analyst (SC-200). That's just the exam, but I know I need to dive into networking, KQL, PowerShell (I have experience in this), Bash, and eventually do some Go & Python programming. Is it wise for me to jump into SC-200 by October and then learn the rest as I go? The aim is to switch careers in 5 months. Or should I do SC-400, then SC-300, then SC-200? I have some AD experience and a bit of helpdesk experience in triage. (If these questions have been asked before, please point me in the right direction.) Many thanks!