Building Secure, Enterprise-Ready AI Agents with Purview SDK and Agent Framework
At Microsoft Ignite, we announced the public preview of Purview integration with the Agent Framework SDK—making it easier to build AI agents that are secure, compliant, and enterprise‑ready from day one.

AI agents are quickly moving from demos to production. They reason over enterprise data, collaborate with other agents, and take real actions. As that happens, one thing becomes non‑negotiable: governance has to be built in. That's where the Purview SDK comes in.

Agentic AI Changes the Security Model

Traditional apps expose risks at the UI or API layer. AI agents are different. Agents can:
- Process sensitive enterprise data in prompts and responses
- Collaborate with other agents across workflows
- Act autonomously on behalf of users

Without built‑in controls, even a well‑designed agent can create compliance gaps. The Purview SDK brings Microsoft's enterprise data security and compliance directly into the agent runtime, so governance travels with the agent—not after it.

What You Get with Purview SDK + Agent Framework

This integration delivers the things developers and enterprises care about most:
- Inline data protection: evaluate prompts and responses against Data Loss Prevention (DLP) policies in real time. Content can be allowed or blocked automatically.
- Built‑in governance: send AI interactions to Purview for audit, eDiscovery, communication compliance, and lifecycle management—without custom plumbing.
- Enterprise‑ready by design: ship agents that meet enterprise security expectations from the start, not as a follow‑up project.

All of this is done natively through Agent Framework middleware, so governance feels like part of the platform—not an add‑on.
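The inline enforcement model above (evaluate content, then allow, redact, or block) can be illustrated with a small, self-contained Python sketch. To be clear, this is not the Purview SDK or Agent Framework API: the policy rule, function names, and the regex-based detector are hypothetical stand-ins that only model the middleware pattern of evaluating both prompts and responses.

```python
# Illustrative sketch only: in the real integration, Agent Framework middleware
# calls Purview to evaluate content against tenant DLP policies. Here a toy
# regex rule stands in for that evaluation.
import re
from dataclasses import dataclass
from typing import Callable

@dataclass
class PolicyDecision:
    action: str   # "allow", "redact", or "block"
    content: str  # content after policy enforcement (possibly redacted)

# Hypothetical stand-in DLP rule: credit-card-like numbers.
CARD_PATTERN = re.compile(r"\b\d{4}[- ]?\d{4}[- ]?\d{4}[- ]?\d{4}\b")

def evaluate_dlp(text: str) -> PolicyDecision:
    """Evaluate content against a toy DLP policy."""
    if CARD_PATTERN.search(text):
        # A real tenant policy might block outright; this sketch redacts.
        return PolicyDecision("redact", CARD_PATTERN.sub("[REDACTED]", text))
    return PolicyDecision("allow", text)

def governed_agent(prompt: str, agent: Callable[[str], str]) -> str:
    """Middleware-style wrapper: check the prompt, run the agent, check the response."""
    inbound = evaluate_dlp(prompt)
    if inbound.action == "block":
        return "Request blocked by DLP policy."
    response = agent(inbound.content)
    outbound = evaluate_dlp(response)
    return outbound.content

# Toy "agent" that echoes its input; a real agent would call a model.
result = governed_agent("Charge card 4111 1111 1111 1111 please", lambda p: p)
```

In the actual integration, the middleware would also emit the governance signals described above (audit, eDiscovery, lifecycle) rather than just returning the filtered content.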
How Enforcement Works (Quickly)

When an agent runs:
- Prompts and responses flow through the Agent Framework pipeline
- The Purview SDK evaluates content against configured policies
- A decision is returned: allow, redact, or block
- Governance signals are logged for audit and compliance

This same model works for user‑to‑agent interactions, agent‑to‑agent communication, and multi‑agent workflows.

Try It: Add Purview SDK in Minutes

Adding the Purview middleware to an Agent Framework app takes only a few lines of Python. From that point on:
- Prompts and responses are evaluated against the Purview policies set up within the enterprise tenant
- Sensitive data can be automatically blocked
- Interactions are logged for governance and audit

Designed for Real Agent Systems

Most production AI apps aren't single‑agent systems. The Purview SDK supports:
- Agent‑level enforcement for fine‑grained control
- Workflow‑level enforcement across orchestration steps
- Agent‑to‑agent governance to protect data as agents collaborate

This makes it a natural fit for enterprise‑scale, multi‑agent architectures.

Get Started Today

You can start experimenting right away:
- Try the Purview SDK with Agent Framework: follow the Microsoft Learn docs to configure the Purview SDK with Agent Framework.
- Explore the GitHub samples: see examples of policy‑enforced agents in Python and .NET.

Secure AI, Without Slowing It Down

AI agents are quickly becoming production systems, not experiments. By integrating the Purview SDK directly into the Agent Framework, Microsoft is making governance a default capability, not a deployment blocker. Build intelligent agents. Protect sensitive data. Scale with confidence.

Strengthening your Security Posture with Microsoft Security Store Innovations at RSAC 2026
Security teams are facing more threats, more complexity, and more pressure to act quickly - without increasing risk or operational overhead. What matters is being able to find the right capability, deploy it safely, and use it where security work already happens. Microsoft Security Store was built with that goal in mind. It provides a single, trusted place to discover, purchase, and deploy Microsoft and partner-built security agents and solutions that extend Microsoft Security - helping you improve protection across SOC, identity, and data protection workflows. Today, Security Store includes 75+ security agents and 115+ solutions from Microsoft and trusted partners - each designed to integrate directly into Microsoft Security experiences and meet enterprise security requirements.

At RSAC 2026, we're announcing capabilities that make it easier to turn security intent into action - by improving how you discover agents, how quickly you can put them to use, and how effectively you can apply them across workflows to achieve your security outcomes.

Meet the Next Generation of Security Agents

Security agents are becoming part of day-to-day operations for many teams - helping automate investigations, enrich signals, and reduce manual effort across common security tasks. Since Security Store became generally available, Microsoft and our partners have continued to expand the set of agents that integrate directly with Microsoft Defender, Sentinel, Entra, Purview, Intune, and Security Copilot. Some of the notable partner-built agents available through Security Store include:

XBOW Continuous Penetration Testing Agent

XBOW's penetration testing agents perform pen-tests, analyze findings, and correlate those findings with a customer's Microsoft Defender detections.
XBOW integrates offensive security directly into Microsoft Security workflows by streaming validated, exploitable AppSec findings into Microsoft Sentinel and enabling investigation through XBOW's Copilot agents in Microsoft Defender. With XBOW's pen-testing agents, offensive security can run continuously to identify which vulnerabilities are actually exploitable, and how to improve posture and detections.

Tanium Incident Scoping Agent

The Tanium Incident Scoping Agent (in preview) brings real-time endpoint intelligence directly into Microsoft Defender and Microsoft Security Copilot workflows. The agent automatically scopes incidents, identifies impacted devices, and surfaces actionable context in minutes - helping teams move faster from detection to containment. By combining Tanium's real-time intelligence with Microsoft Security investigations, you can reduce manual effort, accelerate response, and maintain enterprise-grade governance and control.

Zscaler

In Microsoft Sentinel, the Zscaler ZIA–ZPA Correlation Agent correlates ZIA and ZPA activity for a given user to speed malsite/malware investigations. It highlights suspicious patterns and recommends ZIA/ZPA policy changes to reduce repeat exposure.

These agents build on a growing ecosystem of Microsoft and partner capabilities designed to work together, allowing you to extend Microsoft Security with specialized expertise where it has the most impact.

Discover and Deploy Agents and Solutions in the Flow of Security Work

Security teams work best when they don't have to switch tools to make decisions. That's why Security Store is embedded directly into Microsoft Security experiences - so you can discover and evaluate trusted agents and solutions in context, while working in the tools you already use.
When Security Store became generally available, we embedded it into Microsoft Defender, allowing SOC teams to discover and deploy trusted Microsoft and partner‑built agents and solutions in the middle of active investigations. Analysts can now automate response, enrich investigations, and resolve threats all within the Defender portal. At RSAC, we're expanding this approach across identity and data security.

Strengthening Identity Security with Security Store in Microsoft Entra

Identity has become a primary attack surface - from fraud and automated abuse to privileged access misuse and posture gaps. Security Store is now embedded in Microsoft Entra, allowing identity and security teams to discover and deploy partner solutions and agents directly within identity workflows.

For external and verified identity scenarios, Security Store includes partner solutions that integrate with Entra External ID and Entra Verified ID to help protect against fraud, DDoS attacks, and intelligent bot abuse. These solutions, built by partners such as IDEMIA, AU10TIX, TrueCredential, HUMAN Security, Akamai, and Arkose Labs, help strengthen trust while preserving seamless user experiences.

For enterprise identity security, more than 15 agents available through the Entra Security Store provide visibility into privileged activity and identity risk, posture health and trends, and actionable recommendations to improve identity security and overall security score. These agents are built by partners such as glueckkanja, adaQuest, Ontinue, BlueVoyant, Invoke, and Performanta. This allows you to extend Entra with specialized identity security capabilities without leaving the identity control plane.

Extending Data Protection with Security Store in Microsoft Purview

Protecting sensitive data requires consistent controls across where data lives and how it moves.
Security Store is now embedded in Microsoft Purview, enabling teams responsible for data protection and compliance to discover partner solutions directly within Purview DLP workflows. Through this experience, you can extend Microsoft Purview DLP with partner data security solutions that help protect sensitive data across cloud applications, enterprise browsers, and networks. These include solutions from Microsoft Entra Global Secure Access and partners such as Netskope, Island, iboss, and Palo Alto Networks. This experience will be available to customers later this month, as reflected on the M365 roadmap. By discovering solutions in context, teams can strengthen data protection without disrupting established compliance workflows.

Across Defender, Entra, and Purview, purchases continue to be completed through the Security Store website, ensuring a consistent, secure, and governed transaction experience - while discovery and evaluation happen exactly where teams already work.

Outcome-Driven Discovery with Security Store Advisor

As the number of agents and solutions in the Store grows, quickly finding the right fit for your security scenario becomes more important. That's why we're introducing the AI‑guided Security Store Advisor, now generally available. You can describe your goal in natural language - such as "investigate suspicious network activity" - and receive recommendations aligned to that outcome. Advisor also includes side-by-side comparison views for agents and solutions, helping you review capabilities, integrated services, and deployment requirements more quickly and reduce evaluation time.

Security Store Advisor is designed with Responsible AI principles in mind, including transparency and explainability. You can learn more about how Responsible AI is applied in this experience in the Security Store Advisor Responsible AI FAQ.
Overall, this outcome‑driven approach reduces time to value, improves solution fit, and helps your team move faster from intent to action.

Learning from the Security Community with Ratings and Reviews

Security decisions are strongest when informed by real-world use cases. That is why we are introducing Security Store ratings and reviews from security professionals who have deployed and used agents and solutions in production environments. These reviews focus on practical considerations such as integration quality, operational impact, and ease of use, helping you learn from peers facing similar security challenges. By sharing feedback, the security community helps raise the bar for quality and enables faster, more informed decisions, so teams can adopt agents and solutions with greater confidence and reduce time to value.

Making Agents Easier to Use Post-Deployment

Once you've deployed your agents, several new capabilities make it easier to work with them in your daily workflows. These updates help you operationalize agents faster and apply automation where it delivers real value.

Interactive chat with agents in Microsoft Defender lets SOC analysts ask agents with specialized expertise questions - such as which devices are impacted or which vulnerabilities to prioritize - directly in the Defender portal. By bringing a conversational experience with agents into the place where analysts do most of their investigation work, analysts can seamlessly collaborate with agents to improve security.

Logic App triggers for agents enable security teams to include security agents in their automated, repeatable workflows. With this update, organizations can apply agentic automation to a wider variety of security tasks while integrating with their existing tools and workflows to perform tasks like incident triage and access reviews.
Product combinations in Security Store make it easier to deploy complete security solutions from a single streamlined flow - whether that includes connectors, SaaS tools, or multiple agents that need to work together. Increasingly, partners are building agents that are adept at using your SaaS security tools and security data to provide intelligent recommendations - this feature helps you deploy them faster and with less friction.

A Growing Ecosystem Focused on Security Outcomes

As the Security Store ecosystem continues to expand, you gain access to a broader set of specialized agents and solutions that work together to help defend your environment - extending Microsoft Security with partner innovation in a governed and integrated way. At the same time, Security Store provides partners a clear path to deliver differentiated capabilities directly into Microsoft Security workflows, aligned to how customers evaluate, adopt, and use security solutions.

Get Started

Visit https://securitystore.microsoft.com/ to discover security agents and solutions that meet your needs and extend your Microsoft Security investments. If you're a partner, visit https://securitystore.microsoft.com/partners to learn how to list your solution or agent and reach customers where security decisions are made.

Where to Find Us at RSAC 2026

Security Reborn in the Era of AI workshop: get hands‑on guidance on building and deploying Security Copilot agents and publishing them to the Security Store. March 23 | 8:00 AM | The Palace Hotel. Register: Security Reborn in the Era of AI | Microsoft Corporate

Microsoft Security Store: An Inside Look: join us for a live theater session exploring what's coming next for Security Store. March 26 | 1:00 PM | Microsoft Security Booth #5744 | North Expo Hall

Visit us at the booth: experience Security Store firsthand - test the experience and connect with experts. Microsoft Booth #1843

Security Dashboard for AI - Now Generally Available
AI proliferation in the enterprise, combined with the emergence of AI governance committees and evolving AI regulations, leaves CISOs and AI risk leaders needing a clear view of their AI risks - such as data leaks, model vulnerabilities, misconfigurations, and unethical agent actions - across their entire AI estate, spanning AI platforms, apps, and agents. 53% of security professionals say their current AI risk management needs improvement, presenting an opportunity to better identify, assess, and manage risk effectively.¹ At the same time, 86% of leaders prefer integrated platforms over fragmented tools, citing better visibility, fewer alerts, and improved efficiency.²

To address these needs, we are excited to announce that the Security Dashboard for AI, previously announced at Microsoft Ignite, is now generally available. This unified dashboard aggregates posture and real-time risk signals from Microsoft Defender, Microsoft Entra, and Microsoft Purview - enabling users to see left-to-right across purpose-built security tools from within a single pane of glass. The dashboard equips CISOs and AI risk leaders with a governance tool to discover agents and AI apps, track AI posture and drift, and correlate risk signals to investigate and act across their entire AI ecosystem. Security teams can continue using the tools they trust, while security leaders are empowered to govern and collaborate effectively.

Gain Unified AI Risk Visibility

Consolidating risk signals from across purpose-built tools can simplify AI asset visibility and oversight, increase security teams' efficiency, and reduce the opportunity for human error. The Security Dashboard for AI provides leaders with unified AI risk visibility by aggregating security, identity, and data risk across Defender, Entra, and Purview into a single interactive dashboard experience.
The Overview tab of the dashboard provides users with an AI risk scorecard, giving immediate visibility into where there may be risks for security teams to address. It also assesses an organization's implementation of Microsoft security for AI capabilities and provides recommendations for improving AI security posture. The dashboard also features an AI inventory with comprehensive views to support AI asset discovery, risk assessments, and remediation actions for broad coverage of AI agents, models, MCP servers, and applications.

The dashboard provides coverage for all Microsoft AI solutions supported by Entra, Defender, and Purview—including Microsoft 365 Copilot, Microsoft Copilot Studio agents, and Microsoft Foundry applications and agents—as well as third-party AI models, applications, and agents, such as Google Gemini, OpenAI ChatGPT, and MCP servers. This supports comprehensive visibility and control, regardless of where applications and agents are built.

Prioritize Critical Risk with Security Copilot's AI-Powered Insights

Risk leaders must do more than just recognize existing risks—they also need to determine which ones pose the greatest threat to their business. The dashboard provides a consolidated view of AI-related security risks and leverages Security Copilot's AI-powered insights to help find the most critical risks within an environment. For example, Security Copilot's natural language interaction improves agent discovery and categorization, helping leaders identify unmanaged and shadow AI agents to enhance security posture. Furthermore, Security Copilot allows leaders to investigate AI risks and agent activities through prompt-based exploration, putting them in the driver's seat for additional risk investigation.

Drive Risk Mitigation

By streamlining risk mitigation recommendations and automating task delegation, organizations can significantly improve the efficiency of their AI risk management processes.
This approach can reduce hidden AI risk and accelerate compliance efforts, helping to ensure that risk mitigation is timely and accurate. To address this, the Security Dashboard for AI evaluates how organizations put Microsoft's AI security features into practice and offers tailored suggestions to strengthen AI security posture. It leverages Microsoft's productivity tools for immediate action within the practitioner portal, making it easy for administrators to delegate recommendation tasks to designated users.

With the Security Dashboard for AI, CISOs and risk leaders gain a clear, consolidated view of AI risks across agents, apps, and platforms—eliminating fragmented visibility, disconnected posture insights, and governance gaps as AI adoption scales. Best of all, the Security Dashboard for AI is included with eligible Microsoft security products customers already use. If an organization is already using Microsoft security products to secure AI, it is already a Security Dashboard for AI customer.

Getting Started

Existing Microsoft Security customers can start using the Security Dashboard for AI today. It is included when a customer has the Microsoft Security products—Defender, Entra, and Purview—with no additional licensing required. To begin, visit http://ai.security.microsoft.com or access the dashboard from the Defender, Entra, or Purview portals. Learn more about the Security Dashboard for AI on Microsoft Learn.

¹ AuditBoard & Ascend2 Research. The Connected Risk Report: Uniting Teams and Insights to Drive Organizational Resilience. AuditBoard, October 2024.
² Microsoft. 2026 Data Security Index: Unifying Data Protection and AI Innovation. Microsoft Security, 2026.

New Microsoft Purview innovations for Fabric to safely accelerate your AI transformation
As organizations adopt AI, security and governance remain core primitives for safe AI transformation and acceleration. After all, data leaders know that your AI is only as good as your data. Organizations are skeptical about AI transformation due to concerns about sensitive data oversharing and poor data quality. In fact, 86% of organizations lack visibility into AI data flows, operating in the dark about what information employees share with AI systems [1]. Compounding this challenge, about 67% of executives are uncomfortable using data for AI due to quality concerns [2]. Organizations need to solve data oversharing and data quality seamlessly to use AI safely.

Microsoft Purview offers a modern, unified approach to help organizations secure and govern data across their entire data estate - in particular, best-in-class integrations with M365, Microsoft Fabric, and Azure data estates - streamlining oversight and reducing complexity across the estate. At FabCon Atlanta, we're announcing new Microsoft Purview innovations for Fabric to help seamlessly secure and confidently activate your data for AI transformation. These updates span data security and data governance, enabling Fabric users to both:

1. Discover risks and prevent data oversharing in Fabric
2. Improve governance processes and data quality across their data estate

1. Discover risks and prevent data oversharing in Fabric

As data volume increases with AI usage, Microsoft Purview secures your data with capabilities such as Information Protection, Data Loss Prevention (DLP), Insider Risk Management (IRM), and Data Security Posture Management (DSPM). These capabilities work together to secure data throughout its lifecycle - and now specifically for your Fabric data estate.
Here are a few new Purview innovations for your Fabric estate:

Microsoft Purview DLP policies to prevent data leakage for Fabric Warehouse and KQL/SQL DBs

Now generally available, Microsoft Purview DLP policies allow Fabric admins to prevent data oversharing in Fabric by triggering policy tips when sensitive data is detected in assets uploaded to Warehouses. Additionally, in preview, Purview DLP enables Fabric admins to restrict access to assets with sensitive data in KQL/SQL DBs and Fabric Warehouses to prevent data oversharing. This helps admins limit access to sensitive data detected in these data sources and data stores to just asset owners and allowed collaborators. These DLP innovations expand the depth and breadth of existing DLP policies to ensure sensitive data in Fabric is protected.

Figure 1. DLP restrict access preventing data oversharing of customer information stored in a KQL database.

Microsoft Purview Insider Risk Management (IRM) indicators for Lakehouse, IRM data theft quick policy for Fabric, and IRM pay-as-you-go usage report for Fabric

Microsoft Purview Insider Risk Management is now generally available for Microsoft Fabric, extending its risk-detection capabilities to Microsoft Fabric lakehouses (in addition to Power BI, which is supported today) by offering ready-to-use risk indicators based on risky user activities in Fabric lakehouses, such as sharing data from a Fabric lakehouse with people outside the organization. Additionally, the IRM data theft policy is now generally available, letting security admins create a data theft policy to detect Fabric data exfiltration, such as exporting Power BI reports. Also, organizations now have visibility into how much they are billed with the IRM pay-as-you-go usage report for Fabric, providing an easy-to-use dashboard to track consumption and make costs predictable.

Figure 2. IRM identifying risky user behavior when handling data in a Fabric lakehouse.

Figure 3.
Security admins can create a data theft policy to detect Fabric data exfiltration.

Figure 4. Security admins can check pay-as-you-go usage (processing units) across different workloads and activities, such as the downgrading of sensitivity labels on a lakehouse, through the usage report.

Microsoft Purview for all Fabric Copilots and Agents

Microsoft Purview currently provides capabilities in preview for all Copilots and Agents in Fabric. Organizations can:
- Discover data risks, such as sensitive data in user prompts and responses, and receive recommended actions to reduce these risks.
- Detect and remediate oversharing risks with Data Risk Assessments in DSPM, which identify potentially overshared, unprotected, or sensitive Fabric assets - giving teams clear visibility into where data exposure exists and enabling targeted actions, like applying labels or policies, to reduce risk and ensure Fabric data is AI‑ready and governed by design.
- Identify risky AI usage with Microsoft Purview Insider Risk Management, investigating cases such as an inadvertent user who has neglected security best practices and shared sensitive data with AI.
- Govern AI usage with Microsoft Purview Audit, Microsoft Purview eDiscovery, retention policies, and non-compliant usage detection.

Figure 5. Purview DSPM provides admins with the ability to discover data risks, such as a user's attempt to obtain historical data within a data agent in the Data Science workload in Fabric. DSPM subsequently provides actions to resolve this risk.

Now that we've covered how Purview helps secure Fabric data and AI, the next focus is ensuring Fabric users can use that data responsibly.

2. Improve governance processes and data quality across their data estate

Once an organization's data is secured for AI, the next challenge is ensuring consumers can easily find and trust the data needed for AI. This is where the Purview Unified Catalog comes in, serving as the foundation for enterprise data governance.
Estate-wide data discovery provides a holistic view of the data landscape, helping prevent valuable data from being underutilized. Built-in data quality tools enable teams to measure, monitor, and remediate issues such as incomplete records, inconsistencies, and redundancies, ensuring decisions and AI outcomes are based on trusted, reliable data. Purview provides additional governance capabilities for all data consumers and governance teams, and supplements the Fabric OneLake catalog for those who use it. Here are a few new innovations within the Purview Unified Catalog:

Publication workflows for data products and glossary terms

Now generally available, data owners can leverage Workflows in the Purview Unified Catalog to manage how data products and glossary terms are published. Customizable workflows enable governance teams to work faster to create a well-curated catalog, specifically by ensuring that data products and glossary terms are published and governed responsibly. Data consumers can request access to data products and be reassured that the data is held to a governance standard by governance teams.

Figure 6. Customizing a Workflow for publishing a glossary term in your catalog.

Data quality for ungoverned assets in the Unified Catalog, including Fabric data

In the Unified Catalog, data quality for ungoverned data assets allows organizations to run data quality checks on data assets, including Fabric assets, without linking them to data products. This approach enables data quality stewards to run data quality checks faster and at greater scale, ensuring that their organizations can democratize high-quality data for AI use cases.

Figure 7. Running data quality on data assets without them being associated with a data product.

Looking Forward

As organizations accelerate their AI ambitions, data security and governance become essential.
Microsoft Purview and Microsoft Fabric deliver an integrated and unified foundation that enables organizations to innovate with confidence, ensuring data is protected, governed, and trusted for responsible AI activation. We're committed to helping you stay ahead of evolving challenges and opportunities as you unlock more value from your data. Explore these new capabilities and join us on the journey toward a more secure, governed, and AI‑ready data future.

[1] 2025 AI Security Gap: 83% of Organizations Flying Blind
[2] The Importance Of Data Quality: Metrics That Drive Business Success

Bulk enable Azure Arc Connected Machine Agent Automatic Upgrade (Tag Scoped) with Azure Cloud Shell
Overview

Keeping the Azure Arc Connected Machine agent current is a foundational hygiene task for any hybrid server estate, especially when you're operating at scale and onboarding hundreds (or thousands) of machines into Arc. The good news: Azure Arc supports an automatic agent upgrade (preview) capability that can be enabled per Arc machine by setting the agentUpgrade.enableAutomaticUpgrade property via Azure Resource Manager (ARM). Microsoft's public guidance shows enabling this using a PATCH call (via Invoke-AzRestMethod) against the Arc machine resource with the 2024-05-20-preview API version. See: Manage and maintain the Azure Connected Machine agent - Azure Arc | Microsoft Learn

In real environments, you rarely want to enable this across every Arc-enabled server in one shot. Instead, you typically start with a pilot ring (e.g., Dev/Test or low‑risk servers), validate results, and then expand coverage gradually.

The Script: Abhishek-Sharan/ExtensionManagement: Install & Manage Extensions

The script implements exactly that approach by:
- prompting an explicit disclaimer acknowledgement (safety gate),
- selecting Arc machines by tag (a controlled blast-radius technique),
- enabling automatic upgrade using an ARM PATCH through Invoke-AzRestMethod,
- producing a final summary report of success/failure per machine.

This post walks through what the script does, why each section exists, and what to consider before using it in production.

Why Tag‑Scoped Enablement?

In many enterprise deployments, tags are the simplest way to define a "ring" of servers: Ring=Pilot, Environment=NonProd, Workload=LowRisk. This script discovers resources of type Microsoft.HybridCompute/machines in a given resource group and filters them by a tag/value pair. That makes it easy to onboard machines first, apply tags as part of provisioning, and then flip on agent auto-upgrade only for the right cohort.
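The tag-scoped flow can be sketched in PowerShell as follows. This is a simplified illustration rather than the published script: the subscription, resource group, and tag values are placeholders, the disclaimer gate is omitted, and error handling is minimal. It uses the same cmdlets, API version, and PATCH payload the walkthrough describes, and requires an authenticated Az PowerShell session.

```powershell
# Placeholder scope values - adjust before running.
Set-AzContext -Subscription "YOUR SUBSCRIPTION"
$resourceGroup = "rg-arc-pilot"   # placeholder resource group
$tagName       = "Ring"           # placeholder tag defining the pilot ring
$tagValue      = "Pilot"

# Discover Arc-enabled servers in scope and filter by tag.
$machines = Get-AzResource -ResourceGroupName $resourceGroup `
    -ResourceType "Microsoft.HybridCompute/machines" |
    Where-Object { $_.Tags -and $_.Tags[$tagName] -eq $tagValue }

if (-not $machines) {
    Write-Host "No Arc machines matched tag $tagName=$tagValue - exiting."
    return
}

# Enable automatic upgrade per machine via ARM PATCH, recording results.
$results = foreach ($m in $machines) {
    $resp = Invoke-AzRestMethod `
        -ResourceGroupName $resourceGroup `
        -ResourceProviderName "Microsoft.HybridCompute" `
        -ResourceType "machines" `
        -Name $m.Name `
        -ApiVersion "2024-05-20-preview" `
        -Method PATCH `
        -Payload '{"properties":{"agentUpgrade":{"enableAutomaticUpgrade":true}}}'

    [PSCustomObject]@{
        MachineName            = $m.Name
        EnableAutomaticUpgrade = $true
        Result                 = if ($resp.StatusCode -ge 200 -and $resp.StatusCode -lt 300) { "Success" } else { "Failed" }
    }
}

# Final summary table for change records.
$results | Format-Table -AutoSize
```

The per-machine PSCustomObject mirrors the script's summary-report approach, making the output easy to attach to change tickets.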
Script Details (Walkthrough)

1) Safety Gate: Disclaimer + Explicit User Consent

The script prints a disclaimer block and requires the operator to type Y to proceed. If the user types anything else, the script exits. Why it matters: it prevents accidental execution (especially in shared shells or jump boxes), and it reinforces that this is a potentially impactful change across multiple machines.

2) Configuration: Subscription, Resource Group, and Tag Scope

The script sets the active Azure context:

Set-AzContext -Subscription "YOUR SUBSCRIPTION"

Then it defines $resourceGroup, $tagName, and $tagValue.

3) Discovery: Find Azure Arc Machines with a Target Tag

Discovery uses Get-AzResource -ResourceType "Microsoft.HybridCompute/machines" and filters by a tag match on the returned resource objects. This ensures you are only targeting Arc-enabled servers represented as Microsoft.HybridCompute/machines resources. If no machines are found, the script exits cleanly, which avoids the "silent no-op" problem and helps operators quickly validate that scope selection is correct.

4) Update: Enable Automatic Upgrade via ARM PATCH

For each machine, the script uses Invoke-AzRestMethod with:
- ResourceProviderName = "Microsoft.HybridCompute"
- ResourceType = "Machines"
- ApiVersion = "2024-05-20-preview"
- Method = "PATCH"
- payload: {"properties":{"agentUpgrade":{"enableAutomaticUpgrade":true}}}

5) Output: Per‑Machine Result + Final Summary Table

The script records results into an array of PSCustomObject entries with MachineName, EnableAutomaticUpgrade, and Result (Success/Failed), then prints a formatted table. This is useful for quick operator confirmation, change records, and attaching output to internal work items or change tickets.

Conclusion

This script is a solid operational accelerant for teams managing Arc-enabled servers at scale.
It combines:
- safety (explicit disclaimer + opt-in),
- control (tag-based targeting),
- automation (bulk enabling via ARM PATCH),
- observability (clear per-server results and a final summary).

If you're trying to standardize operational hygiene across hundreds of Arc machines, tag-scoped enablement like this is one of the cleanest ways to start small, learn safely, and then scale.

Microsoft Fabric Lakehouse sub-item metadata in Microsoft Purview
Working at the intersection of data security, engineering, and governance, the Microsoft Purview product team continually explores capabilities that reshape how organizations understand and manage their data estate. One such capability, the ability to scan and extract metadata from Microsoft Fabric Lakehouse, has generated genuine excitement and strong customer demand. We are pleased to announce the general availability (GA) of Microsoft Fabric Lakehouse sub-item metadata in Microsoft Purview.

The Problem It Solves

Anyone who has managed a growing data estate knows the pain: data sources and workspaces multiply, Lakehouses accumulate tables and files, and before long nobody has a clear, centralized picture of what data lives where, what it looks like, or how it flows. Data governance becomes a spreadsheet exercise. Audits become stressful. Trust in data erodes.

Microsoft Purview directly addresses this by automatically scanning your Fabric tenant and bringing metadata into the Unified Catalog, without requiring your data teams to manually document anything.

What Purview Actually Extracts

Here is where it gets interesting from a product perspective. The integration distinguishes between two levels of metadata:

Item-level metadata covers the top-level workspace artifacts: Lakehouses, Warehouses, and so on. Each of these is treated as a single entity in Purview, inventoried automatically after a scan completes.

Sub-item-level metadata (and this is the exciting part) now extends into the Lakehouse itself. Purview can now scan tables (Delta format) and files within a Lakehouse, surfacing column-level detail, data types, and structural information directly in the Unified Catalog.

For a data steward or data consumer, this is the difference between knowing "a Lakehouse called Sales Gold exists" and knowing "that Lakehouse contains a Delta table called fact orders with 14 columns including order date (date) and revenue (decimal)."
That distinction matters enormously for data discoverability, data contracts, and onboarding new consumers onto your data products.

Setting It Up: Simpler Than You Think

Connecting Purview to your Fabric tenant in the same Microsoft Entra tenancy is refreshingly straightforward. At a high level, the steps are:

1. Register your Fabric tenant as a data source in the Purview Data Map.
2. Create a security group in Microsoft Entra ID, add your Purview managed identity (MSI) or service principal to it, and grant that group read-only Admin API access in the Fabric tenant admin portal.
3. Enable the "Enhance admin APIs responses with detailed metadata" setting in the Fabric Admin portal. This is easy to miss but critical: without it, sub-item scanning won't function correctly.
4. Configure and schedule your scan, scoping it to all workspaces or a targeted subset.

Support for managed identity authentication is now available, which simplifies credential management for teams already invested in Azure's identity infrastructure.

One practical note: if you run multiple Fabric or Power BI scans simultaneously, you may encounter rate limiting. The recommended approach is to stagger scans across different time windows rather than running them in parallel.

What You Can Do With It

Once scanned, the metadata surfaces in Purview's Unified Catalog, where your teams can browse by source type, workspace, or Fabric experience, and search for specific assets by name, description, or other attributes. This makes it genuinely easy for data consumers to find and evaluate data before requesting access!

From a governance standpoint, this unlocks several capabilities that matter to modern data teams:

Data discoverability: analysts and data scientists can find Lakehouse tables in the catalog without relying on tribal knowledge or chasing down the engineer who built the pipeline six months ago.

Are you ready to set up a Microsoft Fabric scan in Microsoft Purview?
Head over to the Microsoft Purview Portal and select Data Map. Learn more in the Register Microsoft Fabric in Microsoft Purview documentation.

Purview Community: Call for Sessions!
Microsoft Purview Community Lightning Talks is a virtual event series featuring real-world demos, tips, and solutions directly from you, the Purview Community! The goal of these short, fluff-free lightning sessions is to foster authentic knowledge sharing and community connection, and to highlight real experiences that empower others who use Microsoft Purview.

The virtual events will be held in a simu-live format across multiple time zones and will include subject-matter experts and the speakers in a live-chat Q&A for community participation.

For more information and to apply to speak, please visit the Purview Lightning Talks Call for Speakers Submission Form.

SUBMISSIONS DUE MARCH 16, 2026

So, to our Purview customers, partners, and developers: gather your session title and description, along with a quick 1-2 sentence bio, and submit today! Event details and registration will be shared here and at aka.ms/securitycommunity as soon as they are scheduled. Wish to be notified? Subscribe to our email list.

Azure Update Manager to support CIS hardened images among other images
What's coming by the first week of August: Azure Update Manager will add support for 35 CIS hardened images. This is the first time an Update Management product in Azure has supported CIS hardened images. Apart from the CIS hardened images, Azure Update Manager will also add support for 59 other images to unblock Automation Update Management migrations to Azure Update Manager.

What's coming in September: After this release, support will be added for another batch of 30 images. Please refer to the tables below for details of which images will be supported.

The 35 CIS images below will be supported by Azure Update Manager by the first week of August. Please note the Publisher for all these images is center-for-internet-security-inc.

Offer | Plan
cis-windows-server | cis-windows-server2016-l1-gen1
cis-windows-server | cis-windows-server2019-l1-gen1
cis-windows-server | cis-windows-server2019-l1-gen2
cis-windows-server | cis-windows-server2019-l2-gen1
cis-windows-server | cis-windows-server2022-l1-gen2
cis-windows-server | cis-windows-server2022-l2-gen2
cis-windows-server | cis-windows-server2022-l1-gen1
cis-windows-server-2022-l1 | cis-windows-server-2022-l1
cis-windows-server-2022-l1 | cis-windows-server-2022-l1-gen2
cis-windows-server-2022-l2 | cis-windows-server-2022-l2
cis-windows-server-2022-l2 | cis-windows-server-2022-l2-gen2
cis-windows-server-2019-v1-0-0-l1 | cis-ws2019-l1
cis-windows-server-2019-v1-0-0-l2 | cis-ws2019-l2
cis-windows-server-2016-v1-0-0-l1 | cis--l1
cis-windows-server-2016-v1-0-0-l2 | cis-ws2016-l2
cis-windows-server-2012-r2-v2-2-1-l2 | cis-ws2012-r2-l2
cis-rhel9-l1 | cis-rhel9-l1
cis-rhel9-l1 | cis-rhel9-l1-gen2
cis-rhel-8-l1 |
cis-rhel-8-l2 | cis-rhel8-l2
cis-rhel-7-l2 | cis-rhel7-l2
cis-rhel | cis-redhat7-l1-gen1
cis-rhel | cis-redhat8-l1-gen1
cis-rhel | cis-redhat8-l2-gen1
cis-rhel | cis-redhat9-l1-gen1
cis-rhel | cis-redhat9-l1-gen2
cis-ubuntu-linux-2204-l1 | cis-ubuntu-linux-2204-l1
cis-ubuntu-linux-2204-l1 | cis-ubuntu-linux-2204-l1-gen2
cis-ubuntu-linux-2004-l1 | cis-ubuntu2004-l1
cis-ubuntu-linux-1804-l1 | cis-ubuntu1804-l1
cis-ubuntu | cis-ubuntu1804-l1
cis-ubuntu | cis-ubuntulinux2004-l1-gen1
cis-ubuntu | cis-ubuntulinux2204-l1-gen1
cis-ubuntu | cis-ubuntulinux2204-l1-gen2
cis-oracle-linux-8-l1 | cis-oracle8-l1

Apart from CIS hardened images, below are the other 59 images which
will be supported by Azure Update Manager by the first week of August:

Publisher | Offer | Plan
almalinux | almalinux-x86_64 | 8_7-gen2
belindaczsro1588885355210 | belvmsrv01 | belvmsrv003
cloudera | cloudera-centos-os | 7_5
cloud-infrastructure-services | rds-farm-2019 | rds-farm-2019
cloud-infrastructure-services | ad-dc-2019 | ad-dc-2019
cloud-infrastructure-services | sftp-2016 | sftp-2016
cloud-infrastructure-services | ad-dc-2016 | ad-dc-2016
cloud-infrastructure-services | hpc2019-windows-server-2019 | hpc2019-windows-server-2019
cloud-infrastructure-services | dns-ubuntu-2004 | dns-ubuntu-2004
cloud-infrastructure-services | servercore-2019 | servercore-2019
cloud-infrastructure-services | ad-dc-2022 | ad-dc-2022
cloud-infrastructure-services | squid-ubuntu-2004 | squid-ubuntu-2004
cognosys | sql-server-2016-sp2-std-win2016-debug-utilities | sql-server-2016-sp2-std-win2016-debug-utilities
esri | arcgis-enterprise | byol-108
esri | arcgis-enterprise | byol-109
esri | arcgis-enterprise | byol-111
esri | arcgis-enterprise | byol-1081
esri | arcgis-enterprise | byol-1091
esri | arcgis-enterprise-106 | byol-1061
esri | arcgis-enterprise-107 | byol-1071
esri | pro-byol | pro-byol-29
filemagellc | filemage-gateway-vm-win | filemage-gateway-vm-win-001
filemagellc | filemage-gateway-vm-win | filemage-gateway-vm-win-002
github | github-enterprise | github-enterprise
matillion | matillion | matillion-etl-for-snowflake
microsoft-ads | windows-data-science-vm | windows2016
microsoft-ads | windows-data-science-vm | windows2016byol
microsoft-dsvm | ubuntu-1804 | 1804-gen2
netapp | netapp-oncommand-cloud-manager | occm-byol
nginxinc | nginx-plus-ent-v1 | nginx-plus-ent-centos7
ntegralinc1586961136942 | ntg_oracle_8_7 | ntg_oracle_8_7
procomputers | almalinux-8-7 | almalinux-8-7
procomputers | rhel-8-2 | rhel-8-2
RedHat | rhel | 8_9
redhat | rhel-byos | rhel-lvm79
redhat | rhel-byos | rhel-lvm79-gen2
redhat | rhel-byos | rhel-lvm8
redhat | rhel-byos | rhel-lvm82-gen2
redhat | rhel-byos | rhel-lvm83
redhat | rhel-byos | rhel-lvm84
redhat | rhel-byos | rhel-lvm84-gen2
redhat | rhel-byos | rhel-lvm85-gen2
redhat | rhel-byos | rhel-lvm86
redhat | rhel-byos | rhel-lvm86-gen2
redhat | rhel-byos | rhel-lvm87-gen2
redhat | rhel-byos | rhel-raw76
redhat | rhel | 8.1
redhat | rhel-sap | 7.4
redhat | rhel-sap | 7.7
redhat | rhel | 89-gen2
southrivertech1586314123192 | tn-ent-payg | Tnentpayg
southrivertech1586314123192 | tn-sftp-payg | Tnsftppayg
suse | sles-sap-15-sp2-byos | gen2
suse | sles-15-sp5 | gen2
talend | talend_re_image | tlnd_re
thorntechnologiesllc | sftpgateway | Sftpgateway
veeam | office365backup | veeamoffice365backup
veeam | veeam-backup-replication | veeam-backup-replication-v11
zscaler | zscaler-private-access | zpa-con-azure

The images below will be supported in September:

Publisher | Offer | Plan
aod | win2019azpolicy | win2019azpolicy
belindaczsro1588885355210 | belvmsrv03 | belvmsrv001
center-for-internet-security-inc | cis-rhel-7-v2-2-0-l1 | cis-rhel7-l1
center-for-internet-security-inc | cis-rhel-7-stig | cis-rhel-7-stig
center-for-internet-security-inc | cis-win-2016-stig | cis-win-2016-stig
center-for-internet-security-inc | cis-windows-server-2012-r2-v2-2-1-l1 | cis-ws2012-r2-l1
cloudrichness | rockey_linux_image | rockylinux86
Credativ | Debian | 8
microsoftdynamicsnav | dynamicsnav | 2017
microsoftwindowsserver | windowsserver-hub | 2012-r2-datacenter-hub
microsoftwindowsserver | windowsserver-hub | 2016-datacenter-hub
MicrosoftWindowsServer | WindowsServer-HUB | 2016-Datacenter-HUB
ntegralinc1586961136942 | ntg_cbl_mariner_2 | ntg_cbl_mariner_2_gen2
openvpn | openvpnas | access_server_byol
rapid7 | nexpose-scan-engine | nexpose-scan-engine
rapid7 | rapid7-vm-console | rapid7-vm-console
suse | sles | 12-sp3
suse | sles-15-sp1-basic | gen1
suse | sles-15-sp2-basic | gen1
suse | sles-15-sp3-basic | gen1
suse | sles-15-sp3-basic | gen2
suse | sles-15-sp4-basic | gen2
suse | sles-sap | 12-sp3
suse | sles-sap | 15
suse | sles-sap | gen2-15
suse | sles-sap-byos | 15
suse | SLES-SAP-BYOS | 15
suse | sles-sap-15-sp1-byos | gen1
Tenable | tenablecorenessus | tenablecorenessusbyol

Authorization and Identity Governance Inside AI Agents
Designing Authorization-Aware AI Agents: Enforcing Microsoft Entra ID RBAC in Copilot Studio

As AI agents move from experimentation to enterprise execution, authorization becomes the defining line between innovation and risk.

AI agents are rapidly evolving from experimental assistants into enterprise operators: retrieving user data, triggering workflows, and invoking protected APIs. While many early implementations rely on prompt-level instructions to control access, regulated enterprise environments require authorization to be enforced by identity systems, not language models.

This article presents a production-ready, identity-first architecture for building authorization-aware AI agents using Copilot Studio, Power Automate, Microsoft Entra ID, and Microsoft Graph, ensuring every agent action executes strictly within the requesting user's permissions.

Why Prompt-Level Security Is Not Enough

Large Language Models interpret intent; they do not enforce policy. Even the most carefully written prompts cannot:
- Validate Microsoft Entra ID group or role membership
- Reliably distinguish delegated user identity from application identity
- Enforce deterministic access decisions
- Produce auditable authorization outcomes

Relying on prompts for authorization introduces silent security failures, over-privileged access, and compliance gaps, particularly in Financial Services, Healthcare, and other regulated industries.

Authorization is not a reasoning problem. It is an identity enforcement problem.
Common Authorization Anti-Patterns in AI Agents

The following patterns frequently appear in early AI agent implementations and should be avoided in enterprise environments:
- Hard-coded role or group checks embedded in prompts
- Trusting group names passed as plain-text parameters
- Using application permissions for user-initiated actions
- Skipping verification of the user's Entra ID identity
- Lacking an auditable authorization decision point

These approaches may work in demos, but they do not survive security reviews, compliance audits, or real-world misuse scenarios.

Authorization-Aware Agent Architecture

In an authorization-aware design, the agent never decides access. Authorization is enforced externally, by identity-aware workflows that sit outside the language model's reasoning boundary.

High-Level Flow
1. The Copilot Studio agent receives a user request
2. The agent passes the User Principal Name (UPN) and intended action
3. A Power Automate flow validates permissions using Microsoft Entra ID via Microsoft Graph
4. Only authorized requests are allowed to proceed
5. Unauthorized requests fail fast with a deterministic outcome

An authorization-aware Copilot Studio architecture enforces Entra ID RBAC before executing any business action. The agent orchestrates intent; identity systems enforce access.

Enforcing Entra ID RBAC with Microsoft Graph

Power Automate acts as the authorization enforcement layer:
1. Resolve the user identity from the supplied UPN
2. Retrieve group or role memberships using Microsoft Graph
3. Normalize and compare memberships against approved RBAC groups
4. Explicitly deny execution when authorization fails

This keeps authorization logic centralized, deterministic, auditable, and independent of the AI model.

Reference Implementation: Power Automate RBAC Enforcement Flow

The following import-ready Power Automate cloud flow demonstrates a secure RBAC enforcement pattern for Copilot Studio agents. It validates Microsoft Entra ID group membership before allowing any business action.
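The enforcement steps above can also be sketched in plain code. The following illustrative Python fragment isolates the decision logic (normalize group names, compare against an approved RBAC group); the Microsoft Graph call is abstracted behind an injected lookup function, and the directory contents and group name are placeholder examples. In a real service the lookup would call GET https://graph.microsoft.com/v1.0/users/{upn}/memberOf with a delegated token.

```python
# Illustrative decision logic only; Graph retrieval is stubbed via `fetch_group_names`.
from typing import Callable, Iterable

AUTHORIZED_GROUP = "ai-authorized-users"  # example RBAC group name

def is_authorized(user_upn: str,
                  fetch_group_names: Callable[[str], Iterable[str]],
                  required_group: str = AUTHORIZED_GROUP) -> bool:
    """Deterministic allow/deny: normalize membership names, then compare."""
    groups = {g.strip().lower() for g in fetch_group_names(user_upn)}
    return required_group.lower() in groups

# Stubbed directory standing in for the Graph memberOf response:
directory = {
    "alice@contoso.com": ["AI-Authorized-Users", "Finance-Readers"],
    "bob@contoso.com": ["Finance-Readers"],
}
lookup = lambda upn: directory.get(upn, [])

print(is_authorized("alice@contoso.com", lookup))  # True  -> proceed with business action
print(is_authorized("bob@contoso.com", lookup))    # False -> terminate with access denied
```

Note that the decision is a pure function of identity data: no prompt text is consulted, which is exactly the boundary the architecture requires.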
Scenario
- Trigger: User-initiated agent action
- Identity model: Delegated user identity
- Input: userUPN, requestedAction
- Outcome: Authorized or denied based on Entra ID RBAC

{
  "$schema": "https://schema.management.azure.com/providers/Microsoft.Logic/schemas/2016-06-01/workflowdefinition.json#",
  "contentVersion": "1.0.0.0",
  "triggers": {
    "Copilot_Request": {
      "type": "Request",
      "kind": "Http",
      "inputs": {
        "schema": {
          "type": "object",
          "properties": {
            "userUPN": { "type": "string" },
            "requestedAction": { "type": "string" }
          },
          "required": [ "userUPN" ]
        }
      }
    }
  },
  "actions": {
    "Get_User_Groups": {
      "type": "Http",
      "inputs": {
        "method": "GET",
        "uri": "https://graph.microsoft.com/v1.0/users/@{triggerBody()?['userUPN']}/memberOf?$select=displayName",
        "authentication": { "type": "ManagedServiceIdentity" }
      }
    },
    "Normalize_Group_Names": {
      "type": "Select",
      "inputs": {
        "from": "@body('Get_User_Groups')?['value']",
        "select": "@toLower(item()?['displayName'])"
      },
      "runAfter": { "Get_User_Groups": [ "Succeeded" ] }
    },
    "Check_Authorization": {
      "type": "Condition",
      "expression": "@contains(body('Normalize_Group_Names'), 'ai-authorized-users')",
      "runAfter": { "Normalize_Group_Names": [ "Succeeded" ] },
      "actions": {
        "Authorized_Action": {
          "type": "Compose",
          "inputs": "User authorized via Entra ID RBAC"
        }
      },
      "else": {
        "actions": {
          "Access_Denied": {
            "type": "Terminate",
            "inputs": {
              "status": "Failed",
              "message": "Access denied. User not authorized via Entra ID RBAC."
            }
          }
        }
      }
    }
  }
}

Note that the Select action maps each membership entry to a plain lowercase string, so the subsequent contains() check matches string values directly rather than wrapped objects.

This pattern enforces authorization outside the agent, aligns with Zero Trust principles, and creates a clear audit boundary suitable for enterprise and regulated environments.
Flow Diagram: Agent Integrated with RBAC Authorization Flow and Sample Prompt

Execution: Delegated vs Application Permissions

Scenario | Recommended Permission Model
User-initiated agent actions | Delegated permissions
Background or system automation | Application permissions

Using delegated permissions ensures agent execution remains strictly within the requesting user's identity boundary.

Auditing and Compliance Benefits
- Deterministic and explainable authorization decisions
- Centralized enforcement aligned with identity governance
- Clear audit trails for security and compliance reviews
- Readiness for SOC, ISO, PCI, and FSI assessments

Enterprise Security Takeaways
- Authorization belongs in Microsoft Entra ID, not prompts
- AI agents must respect enterprise identity boundaries
- Copilot Studio + Power Automate + Microsoft Graph enable secure-by-design AI agents

By treating AI agents as first-class enterprise actors and enforcing authorization at the identity layer, organizations can scale AI adoption with confidence, trust, and compliance.

Improper AVD Host Decommissioning – A Practical Governance Framework
Hi everyone,

After working with multiple production Azure Virtual Desktop environments, I noticed a recurring issue that rarely gets documented properly: improper host decommissioning. Scaling out AVD is easy. Scaling down safely is where environments silently drift.

Common issues I've seen in the field:
- Session hosts deleted before drain completion
- Orphaned Entra ID device objects
- Intune-managed device records left behind
- Stale registration tokens
- FSLogix containers remaining locked
- Defender onboarding objects not cleaned up
- Host pool inconsistencies over time

The problem is not technical complexity. It's lifecycle governance.

So I built a structured approach to host decommissioning focused on:
- Drain validation
- Active session verification
- Controlled removal from the host pool
- VM deletion sequencing
- Identity cleanup validation
- Registration token rotation
- Logging and execution safety

I've published a practical framework here, fully documented and including validation logic and logging:
https://github.com/modernendpoint/AVD-Host-Decommission-Framework

The goal is simple: not just removing a VM, but preserving platform integrity.

I'm curious: how are you handling host lifecycle management in your AVD environments? Fully automated? Manual? Integrated with scaling plans? Identity cleanup included? Would love to hear how others approach this.

Menahem Suissa
AVD | Intune | Identity-Driven Architecture
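To make the sequencing concrete, here is a simplified, hypothetical sketch of the gating logic the framework describes. This is not the actual framework code: the step names, class, and helpers are illustrative, and real implementations would call Azure/Graph APIs instead of appending to a log. It models the core rule that a host may only be removed once drain mode is on and no active sessions remain, with every cleanup step recorded so nothing is silently skipped.

```python
# Hypothetical sketch of drain-gated decommission ordering (not the framework code).
from dataclasses import dataclass, field

@dataclass
class SessionHost:
    name: str
    drain_mode: bool
    active_sessions: int
    cleanup_log: list = field(default_factory=list)

def can_decommission(host: SessionHost) -> bool:
    """Gate: never delete a host that is still accepting or serving sessions."""
    return host.drain_mode and host.active_sessions == 0

def decommission(host: SessionHost) -> list:
    """Run the cleanup sequence in order, logging each completed step."""
    if not can_decommission(host):
        raise RuntimeError(f"{host.name}: drain incomplete or sessions still active")
    for step in ("remove-from-host-pool", "delete-vm",
                 "cleanup-entra-device", "cleanup-intune-record",
                 "rotate-registration-token"):
        host.cleanup_log.append(step)  # a real tool would invoke Azure/Graph APIs here
    return host.cleanup_log

busy = SessionHost("avd-host-01", drain_mode=True, active_sessions=2)
idle = SessionHost("avd-host-02", drain_mode=True, active_sessions=0)
print(can_decommission(busy))   # False -> blocked until sessions drain
print(decommission(idle)[0])    # first recorded step: remove-from-host-pool
```

The ordering itself (pool removal before VM deletion, identity cleanup after, token rotation last) is the governance point: each step leaves an auditable record rather than an orphaned object.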