threat protection
77 TopicsGenAI vs Cyber Threats: Why GenAI Powered Unified SecOps Wins
Cybersecurity is evolving faster than ever. Attackers are leveraging automation and AI to scale their operations, so how can defenders keep up? The answer lies in Microsoft Unified Security Operations powered by Generative AI (GenAI). This opens the Cybersecurity Paradox: Attackers only need one successful attempt, but defenders must always be vigilant, otherwise the impact can be huge. Traditional Security Operation Centers (SOCs) are hampered by siloed tools and fragmented data, which slows response and creates vulnerabilities. On average, attackers gain unauthorized access to organizational data in 72 minutes, while traditional defense tools often take on average 258 days to identify and remediate. This is over eight months to detect and resolve breaches, a significant and unsustainable gap. Notably, Microsoft Unified Security Operations, including GenAI-powered capabilities, is also available and supported in Microsoft Government Community Cloud (GCC) and GCC High/DoD environments, ensuring that organizations with the highest compliance and security requirements can benefit from these advanced protections. The Case for Unified Security Operations Unified security operations in Microsoft Defender XDR consolidates SIEM, XDR, Exposure management, and Enterprise Security Posture into a single, integrated experience. This approach allows the following: Breaks down silos by centralizing telemetry across identities, endpoints, SaaS apps, and multi-cloud environments. Infuses AI natively into workflows, enabling faster detection, investigation, and response. Microsoft Sentinel exemplifies this shift with its Data Lake architecture (see my previous post on Microsoft Sentinel’s New Data Lake: Cut Costs & Boost Threat Detection), offering schema-on-read flexibility for petabyte-scale analytics without costly data rehydration. This means defenders can query massive datasets in real time, accelerating threat hunting and forensic analysis. GenAI: A Force Multiplier for Cyber Defense Generative AI transforms security operations from reactive to proactive. Here’s how: Threat Hunting & Incident Response GenAI enables predictive analytics and anomaly detection across hybrid identities, endpoints, and workloads. It doesn’t just find threats—it anticipates them. Behavioral Analytics with UEBA Advanced User and Entity Behavior Analytics (UEBA) powered by AI correlates signals from multi-cloud environments and identity providers like Okta, delivering actionable insights for insider risk and compromised accounts. [13 -Micros...s new UEBA | Word] Automation at Scale AI-driven playbooks streamline repetitive tasks, reducing manual workload and accelerating remediation. This frees analysts to focus on strategic threat hunting. Microsoft Innovations Driving This Shift For SOC teams and cybersecurity practitioners, these innovations mean you spend less time on manual investigations and more time leveraging actionable insights, ultimately boosting productivity and allowing you to focus on higher-value security work that matters most to your organization. Plus, by making threat detection and response faster and more accurate, you can reduce stress, minimize risk, and demonstrate greater value to your stakeholders. Sentinel Data Lake: Unlocks real-time analytics at scale, enabling AI-driven threat detection without rehydration costs. Microsoft Sentinel data lake overview UEBA Enhancements: Multi-cloud and identity integrations for unified risk visibility. Sentinel UEBA’s Superpower: Actionable Insights You Can Use! Now with Okta and Multi-Cloud Logs! Security Copilot & Agentic AI: Harnesses AI and global threat intelligence to automate detection, response, and compliance across the security stack, enabling teams to scale operations and strengthen Zero Trust defenses defenders. Security Copilot Agents: The New Era of AI, Driven Cyber Defense Sector-Specific Impact All sectors are different, but I would like to focus a bit on the public sector at this time. This sector and critical infrastructure organizations face unique challenges: talent shortages, operational complexity, and nation-state threats. GenAI-centric platforms help these sectors shift from reactive defense to predictive resilience, ensuring mission-critical systems remain secure. By leveraging advanced AI-driven analytics and automation, public sector organizations can streamline incident detection, accelerate response times, and proactively uncover hidden risks before they escalate. With unified platforms that bridge data silos and integrate identity, endpoint, and cloud telemetry, these entities gain a holistic security posture that supports compliance and operational continuity. Ultimately, embracing generative AI not only helps defend against sophisticated cyber adversaries but also empowers public sector teams to confidently protect the services and infrastructure their communities rely on every day. Call to Action Artificial intelligence is driving unified cybersecurity. Solutions like Microsoft Defender XDR and Sentinel now integrate into a single dashboard, consolidating alerts, incidents, and data from multiple sources. AI swiftly correlates information, prioritizes threats, and automates investigations, helping security teams respond quickly with less manual work. This shift enables organizations to proactively manage cyber risks and strengthen their resilience against evolving challenges. Picture a single pane of glass where all your XDRs and Defenders converge, AI instantly shifts through the noise, highlighting what matters most so teams can act with clarity and speed. That may include: Assess your SOC maturity and identify silos. Use the Security Operations Self-Assessment Tool to determine your SOC’s maturity level and provide actionable recommendations for improving processes and tooling. Also see Security Maturity Model from the Well-Architected Framework Explore Microsoft Sentinel, Defender XDR, and Security Copilot for AI-powered security. Explains progressive security maturity levels and strategies for strengthening your security posture. What is Microsoft Defender XDR? - Microsoft Defender XDR and What is Microsoft Security Copilot? Design Security in Solutions from Day One! Drive embedding security from the start of solution design through secure-by-default configurations and proactive operations, aligning with Zero Trust and MCRA principles to build resilient, compliant, and scalable systems. Design Security in Solutions from Day One! Innovate boldly, Deploy Safely, and Never Regret it! Upskill your teams on GenAI tools and responsible AI practices. Guidance for securing AI apps and data, aligned with Zero Trust principles Build a strong security posture for AI About the Author: Hello Jacques "Jack” here! I am a Microsoft Technical Trainer focused on helping organizations use advanced security and AI solutions. I create and deliver training programs that combine technical expertise with practical use, enabling teams to adopt innovations like Microsoft Sentinel, Defender XDR, and Security Copilot for stronger cyber resilience. #SkilledByMTT #MicrosoftLearnBlog Series: Securing the Future: Protecting AI Workloads in the Enterprise
Post 1: The Hidden Threats in the AI Supply Chain Your AI Supply Chain Is Under Attack — And You Might Not Even Know It Imagine deploying a cutting-edge AI model that delivers flawless predictions in testing. The system performs beautifully, adoption soars, and your data science team celebrates. Then, a few weeks later, you discover the model has been quietly exfiltrating sensitive data — or worse, that a single poisoned dataset altered its decision-making from the start. This isn’t a grim sci-fi scenario. It’s a growing reality. AI systems today rely on a complex and largely opaque supply chain — one built on shared models, open-source frameworks, third-party datasets, and cloud-based APIs. Each link in that chain represents both innovation and vulnerability. And unless organizations start treating the AI supply chain as a security-critical system, they risk building intelligence on a foundation they can’t fully trust. Understanding the AI Supply Chain Much like traditional software, modern AI models rarely start from scratch. Developers and data scientists leverage a mix of external assets to accelerate innovation — pretrained models from public repositories like Hugging Face (https://huggingface.co/), data from external vendors, third-party labeling services, and open-source ML libraries. Each of these layers forms part of your AI supply chain — the ecosystem of components that power your model’s lifecycle, from data ingestion to deployment. In many cases, organizations don’t fully know: Where their datasets originated. Whether the pretrained model they fine-tuned was modified or backdoored. If the frameworks powering their pipeline contain known vulnerabilities. AI’s strength — its openness and speed of adoption — is also its greatest weakness. You can’t secure what you don’t see, and most teams have very little visibility into the origins of their AI assets. The New Threat Landscape Attackers have taken notice. As enterprises race to operationalize AI, threat actors are shifting their attention from traditional IT systems to the AI layer itself — particularly the model and data supply chain. Common attack vectors now include: Data poisoning: Injecting subtle malicious samples into training data to bias or manipulate model behavior. Model backdoors: Embedding hidden triggers in pretrained models that can be activated later. Dependency exploits: Compromising widely used ML libraries or open-source repositories. Model theft and leakage: Extracting proprietary weights or exploiting exposed inference APIs. These attacks are often invisible until after deployment, when the damage has already been done. In 2024, several research teams demonstrated how tampered open-source LLMs could leak sensitive data or respond with biased or unsafe outputs — all due to poisoned dependencies within the model’s lineage. The pattern is clear: adversaries are no longer only targeting applications; they’re targeting the intelligence that drives them. Why Traditional Security Approaches Fall Short Most organizations already have strong DevSecOps practices for traditional software — automated scanning, dependency tracking, and secure Continuous Integration/Continuous Deployment (CI/CD) pipelines. But those frameworks were never designed for the unique properties of AI systems. Here’s why: Opacity: AI models are often black boxes. Their behavior can change dramatically from minor data shifts, making tampering hard to detect. Lack of origin: Few organizations maintain a verifiable “family tree” of their models and datasets. Limited tooling: Security tools that detect code vulnerabilities don’t yet understand model weights, embeddings, or training lineage. In other words: You can’t patch what you can’t trace. The absence of traceability leaves organizations flying blind — relying on trust where verification should exist. Securing the AI Supply Chain: Emerging Best Practices The good news is that a new generation of frameworks and controls is emerging to bring security discipline to AI development. The following strategies are quickly becoming best practices in leading enterprises: Establish Model Origin and Integrity Maintain a record of where each model originated, who contributed to it, and how it’s been modified. Implement cryptographic signing for model artifacts. Use integrity checks (e.g., hash validation) before deploying any model. Incorporate continuous verification into your MLOps pipeline. This ensures that only trusted, validated models make it to production. Create a Model Bill of Materials (MBOM) Borrowing from software security, an MBOM documents every dataset, dependency, and component that went into building a model — similar to an SBOM for code. Helps identify which datasets and third-party assets were used. Enables rapid response when vulnerabilities are discovered upstream. Organizations like NIST, MITRE, and the Cloud Security Alliance are developing frameworks to make MBOMs a standard part of AI risk management. NIST AI Risk Management Framework (AI RMF) https://www.nist.gov/itl/ai-risk-management-framework NIST AI RMF Playbook https://www.nist.gov/itl/ai-risk-management-framework/nist-ai-rmf-playbook MITRE AI Risk Database (with Robust Intelligence) https://www.mitre.org/news-insights/news-release/mitre-and-robust-intelligence-tackle-ai-supply-chain-risks MITRE’s SAFE-AI Framework https://atlas.mitre.org/pdf-files/SAFEAI_Full_Report.pdf Cloud Security Alliance – AI Model Risk Management Framework https://cloudsecurityalliance.org/artifacts/ai-model-risk-management-framework Secure Your Data Supply Chain The quality and integrity of training data directly shape model behavior. Validate datasets for anomalies, duplicates, or bias. Use data versioning and lineage tracking for full transparency. Where possible, apply differential privacy or watermarking to protect sensitive data. Remember: even small amounts of corrupted data can lead to large downstream risks. Evaluate Third-Party and Open-Source Dependencies Open-source AI tools are powerful — but not always trustworthy. Regularly scan models and libraries for known vulnerabilities. Vet external model vendors and require transparency about their security practices. Treat external ML assets as untrusted code until verified. A simple rule of thumb: if you wouldn’t deploy a third-party software package without security review, don’t deploy a third-party model that way either. The Path Forward: Traceability as the Foundation of AI Trust AI’s transformative potential depends on trust — and trust depends on visibility. Securing your AI supply chain isn’t just about compliance or risk mitigation; it’s about protecting the integrity of the intelligence that drives business decisions, customer interactions, and even national infrastructure. As AI becomes the engine of enterprise innovation, we must bring the same rigor to securing its foundations that we once brought to software itself. Every model has a lineage. Every lineage is a potential attack path. In the next post, we’ll explore how to apply DevSecOps principles to MLOps pipelines — securing the entire AI lifecycle from data collection to deployment. Key Takeaway The AI supply chain is your new attack surface. The only way to defend it is through visibility, origin, and continuous validation — before, during, and after deployment. Contributors Juan José Guirola Sr. (Security GBB for Advanced Identity - Microsoft) References Hugging Face - https://huggingface.co/ Research Gate - https://www.researchgate.net/publication/381514112_Exploiting_Privacy_Vulnerabilities_in_Open_Source_LLMs_Using_Maliciously_Crafted_Prompts/fulltext/66722cb1de777205a338bbba/Exploiting-Privacy-Vulnerabilities-in-Open-Source-LLMs-Using-Maliciously-Crafted-Prompts.pdf NIST AI Risk Management Framework (AI RMF) https://www.nist.gov/itl/ai-risk-management-framework NIST AI RMF Playbook https://www.nist.gov/itl/ai-risk-management-framework/nist-ai-rmf-playbook MITRE AI Risk Database (with Robust Intelligence) https://www.mitre.org/news-insights/news-release/mitre-and-robust-intelligence-tackle-ai-supply-chain-risks MITRE’s SAFE-AI Framework https://atlas.mitre.org/pdf-files/SAFEAI_Full_Report.pdf Cloud Security Alliance – AI Model Risk Management Framework https://cloudsecurityalliance.org/artifacts/ai-model-risk-management-frameworkMicrosoft Sentinel data lake is now generally available
Security is being reengineered for the AI era, shifting from static controls to fast, platform-driven defense. Traditional tools, scattered data, and outdated systems struggle against modern threats. An AI-ready, data-first foundation is needed to unify telemetry, standardize agent access, and enable autonomous responses while ensuring humans are in command of strategy and high-impact investigations. Security teams already anchor their operations around SIEMs for comprehensive visibility. We're building on that foundation by evolving Microsoft Sentinel into both the SIEM and the platform for agentic defense—connecting analytics and context across ecosystems. Today, we’re introducing new platform capabilities that build on Sentinel data lake: Sentinel graph for deeper insight and context; an MCP server and tools to make data agent ready; new developer capabilities; and a Security Store for effortless discovery and deployment—so protection accelerates to machine speed while analysts do their best work. We’ve reached a major milestone in our journey to modernize security operations — Microsoft Sentinel data lake is now generally available. This fully managed, cloud-native data lake is redefining how security teams manage, analyze, and act on their data cost-effectively. Since its introduction, organizations across sectors are embracing Sentinel data lake for its transformative impact on security operations. Customers consistently highlight its ability to unify security data from diverse sources, enabling enhanced threat detection and investigation. Many cite cost efficiency as a key benefit, with tiered storage and flexible retention, helping reduce costs. With petabytes of data already ingested, users are gaining real-time and historical insights at scale. "With Microsoft Sentinel data lake integration, we now have a scalable and cost-efficient solution for retaining Microsoft Sentinel data for long-term retention. This empowers our security and compliance teams with seamless access to historical telemetry data right within the data lake explorer and Jupyter notebooks - enabling advanced threat hunting, forensic analysis, and AI-powered insights at scale" Farhan Nadeem, Senior Security Engineer Government of Nunavut Industry partners also commend its role in modernizing SOC workflows and accelerating AI-driven analytics. “Microsoft Sentinel data lake amplifies BlueVoyant’s ability to transform security operations into a mature, intelligence-driven discipline. It preserves institutional memory across years of telemetry, which empowers advanced threat hunting strategies that evolve with time. Security teams can validate which data sources yield actionable insights, uncover persistent attack patterns, and retain high-value indicators that support long-term strategic advantage.” Milan Patel, CRO BlueVoyant Microsoft Sentinel data lake use-cases There are many powerful ways customers are unlocking value with Sentinel data lake—here are just a few impactful examples.: Threat investigations over extended timelines: Security analysts query data older than 90 days to uncover slow-moving attacks—like brute-force and password spray campaigns—that span accounts and geographies. Behavioral baselining for deeper insights: SOC engineers build time-series models using months of sign-in logs to establish a standard of normal behavior and identify unusual patterns, such as credential abuse or lateral movement. Alert enrichment: SOC teams correlate alerts with Firewall and Netflow data, often stored only in the data lake, reducing false positives and increasing alert accuracy. Retrospective threat hunting with new indicators of compromise (IOCs): Threat intelligence teams react to emerging IOCs by running historical queries across the data lake, enabling rapid and informed response. ML-Powered insights: SOC engineers use Spark Notebooks to build and operationalize custom machine learning models for anomaly detection, alert enrichment, and predictive analytics. The Sentinel data lake is more than a storage solution—it’s the foundation for modern, AI-powered security operations. Whether you're scaling your SOC, building deeper analytics, or preparing for future threats, the Sentinel data lake is ready to support your journey. What’s new Regional expansion In light of strong customer demand in public preview, at GA we are expanding Sentinel data lake availability to additional regions. These new regions will roll out progressively over the coming weeks. For more information, see documentation. Flexible data ingestion and management With over 350 native connectors, SOC teams can seamlessly ingest both structured and semi-structured data at scale. Data is automatically mirrored from the analytics tier to the data lake tier, at no additional cost, ensuring a single, unified copy is available for diverse use cases across security operations. Since the public preview of Microsoft Sentinel data lake, we've launched 45 new connectors built on the scalable and performant Codeless Connector Framework (CCF), including connectors for: GCP: SQL, DNS, VPC Flow, Resource Manager, IAM, Apigee AWS: Security Hub findings, Route53 DNS Others: Alibaba Cloud, Oracle, Salesforce, Snowflake, Cisco Sentinel’s connector ecosystem is designed to help security teams seamlessly unify signals across hybrid environments, without the need for heavy engineering effort. Explore the full list of connectors in our documentation here. App Assure Microsoft Sentinel data lake promise As part of our commitment to customer success, we are expanding the App Assure Microsoft Sentinel promise to Sentinel data lake. This means customers can confidently onboard their data, knowing that App Assure stands ready to help resolve connector issues such as replacing deprecated APIs with updated ones, and accelerating new integrations. Whether you're working with existing Independent Software Vendor (ISV) solutions or building new ones, App Assure will collaborate directly with ISVs to ensure seamless data ingestion into the lake. This promise reinforces our dedication to delivering reliable, scalable, and secure security operations, backed by engineering support and a thriving partner ecosystem. Cost management and billing We are introducing new cost management features in public preview to help customers with cost predictability, billing transparency, and operational efficiency. Customers can set usage-based alerts on specific meters to monitor and control costs. For example, you can receive alerts when query or notebook usage passes set limits, helping avoid unexpected expenses and manage budgets. In-product reports provide customers with insights into usage trends over time, enabling them to identify cost drivers and optimize data retention and processing strategies. To support the ingestion and standardization of diverse data sources, we are introducing a new Data Processing feature that applies a $0.10 per GB charge for all data as it is ingested into the data lake. This feature enables a broad array of transformations like redaction, splitting, filtering and normalizing data. This feature was not billed during public preview but will be chargeable at GA starting October 1,2025. Data lake ingestion charges of $0.05 per GB will apply to Entra asset data; starting October 1, 2025. This was previously not billed during public preview. Retaining security data to perform deep analytics and investigations is crucial for defending against threats. To help enable customers to retain all their security data for extended periods cost effectively, data lake storage, including asset data storage, is now billed with a simple and uniform data compression rate of 6:1 across all data sources. Please refer to Plan costs and understand Microsoft Sentinel pricing and billing article for more information. For detailed prerequisites and instructions on configuring and managing asset connectors, refer to the official documentation: Asset data in Microsoft Sentinel data lake. KQL and Notebook enhancements We are introducing several enhancements to our data lake analytics capabilities with an upgraded KQL and notebook experience. Security teams can now run multi-workspace KQL queries for broader threat correlation and schedule KQL jobs more frequently. Frequent KQL jobs enable SOC teams to automate historical threat intelligence matching, summarize alert trends, and aggregate signals across workspaces. For example, schedule recurring jobs to scan for matches against newly ingested IOCs, helping uncover threats that were previously undetected and strengthening threat hunting and investigation workflows. The enhanced Jobs page offers operational clarity for SOC teams with a comprehensive view into job health and activity. At the top, a summary dashboard provides instant visibility into key metrics, total jobs, completions, and failures, helping teams quickly assess job health. A filterable list view displays essential details such as job names, status, frequency, and last run information, enabling quick prioritization and triage. For more detailed diagnostics, users can view individual jobs to access job runs telemetry such as job run duration, row count, and additional historical execution trends, providing additional visibility. Notebooks are receiving a significant upgrade, offering streamlined user experience for querying the data lake. Users now benefit from IntelliSense support for syntax and table names, making query authoring faster and more intuitive. They can also configure custom compute session timeouts and warning windows to better manage resources. Scheduling notebooks as jobs is now simpler, and users can leverage GitHub Copilot for intelligent assistance throughout the process. Together, these KQL and notebook improvements deliver deeper, more customizable analytics, helping customers unlock richer insights, accelerate threat response, and scale securely across diverse environments. Powering agentic defense Data centralization powers AI agents and automation to access comprehensive, historical, and real-time data for advanced analytics, anomaly detection, and autonomous threat response. Support for tools like KQL queries, Spark notebooks, and machine learning models in the data lake, allows agentic systems to continuously learn, adapt, and act on emerging threats. Integration with Security Copilot and MCP Server further enhances agentic defense, enabling smarter, faster, and context-rich security operations—all built on the foundation of Sentinel’s unified data lake. Microsoft Sentinel 50 GB commitment tier promotional pricing To make Microsoft Sentinel more accessible to small and mid-sized customers, we are introducing a new 50 GB commitment tier in public preview, with promotional pricing offered from October 1, 2025, to March 31, 2026. Customers who choose the 50 GB commitment tier during this period will maintain their promotional rate until March 31, 2027. This offer is available in all regions where Microsoft Sentinel is sold, with regional variations in promotional pricing. It is accessible through EA, CSP, and Direct channels. The new 50 GB commitment tier details will be available starting October 1, 2025, on the Microsoft Sentinel pricing page. Thank you to our customers and partners We’re incredibly grateful for the continued partnership and collaboration from our customers and partners throughout this journey. Your feedback and trust have been instrumental in shaping Microsoft Sentinel data lake into what it is today. Thank you for being a part of this critical milestone—we’re excited to keep building together. Get started today By centralizing data, optimizing costs, expanding coverage, and enabling deep analytics, Microsoft Sentinel empowers security teams to operate smarter, faster, and more effectively. Get started with Microsoft Sentinel data lake today in the Microsoft Defender experience. To learn more, see: Microsoft Sentinel—AI-Powered Cloud SIEM & Platform Pricing: Pricing page, Plan costs and understand Microsoft Sentinel pricing and billing Documentation: Connect Sentinel to Defender, Jupyter notebooks in Microsoft Sentinel data lake, KQL and the Microsoft Sentinel data lake, Permissions for Microsoft Sentinel data lake, Manage data tiers and retention in Microsoft Defender experience Blogs: Sentinel data lake FAQ blog, Empowering defenders in the era of AI, Microsoft Sentinel graph announcement, App Assure Microsoft Sentinel data lake promiseAnnouncing Microsoft Sentinel Model Context Protocol (MCP) server – Public Preview
Security is being reengineered for the AI era—moving beyond static, rulebound controls and after-the-fact response toward platform-led, machine-speed defense. The challenge is clear: fragmented tools, sprawling signals, and legacy architectures that can’t match the velocity and scale of modern attacks. What’s needed is an AI-ready, data-first foundation—one that turns telemetry into a security graph, standardizes access for agents, and coordinates autonomous actions while keeping humans in command of strategy and high-impact investigations. Security teams already center operations on their SIEM for end-to-end visibility, and we’re advancing that foundation by evolving Microsoft Sentinel into both the SIEM and the platform for agentic defense—connecting analytics and context across ecosystems. And now, we’re introducing new platform capabilities that build on Sentinel data lake: Sentinel graph for deeper insight and context; an MCP server and tools to make data agent ready; new developer capabilities; and Security Store for effortless discovery and deployment—so protection accelerates to machine speed while analysts do their best work. Introducing Sentinel MCP server We’re excited to announce the public preview of Microsoft Sentinel MCP (Model Context Protocol) server, a fully managed cloud service built on an open standard that lets AI agents seamlessly access the rich security context in your Sentinel data lake. Recent advances in large language models have enabled AI agents to perform reasoning—breaking down complex tasks, inferring patterns, and planning multistep actions, making them capable of autonomously performing business processes. To unlock this potential in cybersecurity, agents must operate with your organization’s real security context, not just public training data. Sentinel MCP server solves that by providing standardized, secure access to that context—across graph relationships, tabular telemetry, and vector embeddings—via reusable, natural language tools, enabling security teams to unlock the full potential of AI-driven automation and focus on what matters most. Why Model Context Protocol (MCP)? Model Context Protocol (MCP) is a rapidly growing open standard that allows AI models to securely communicate with external applications, services, and data sources through a well-structured interface. Think of MCP as a bridge that lets an AI agents understand and invoke an application’s capabilities. These capabilities are exposed as discrete “tools” with natural language inputs and outputs. The AI agent can autonomously choose the right tool (or combination of tools) for the task it needs to accomplish. In simpler terms, MCP standardizes how an AI talks to systems. Instead of developers writing custom connectors for each application, the MCP server presents a menu of available actions to the AI in a language it understands. This means an AI agent can discover what it can do (search data, run queries, trigger actions, etc.) and then execute those actions safely and intelligently. By adopting an open standard like MCP, Microsoft is ensuring that our AI integrations are interoperable and future-proof. Any AI application that speaks MCP can connect. Security Copilot offers built-in integration, while other MCP-compatible platforms can leverage your Sentinel data and services can quickly connect by simply adding a new MCP server and typing Sentinel’s MCP server URL. How to Get Started Sentinel MCP server is a fully managed service now available to all Sentinel data lake customers. If you are already onboarded to Sentinel data lake, you are ready to begin using MCP. Not using Sentinel data lake yet? Learn more here. Currently, you can connect to the Sentinel MCP server using Visual Studio Code (VS Code) with the GitHub Copilot add-on. Here’s a step-by-step guide: Open VS Code and authenticate with an account that has at least Security Reader role access (required to query the data lake via the Sentinel MCP server) Open the Command Palette in VS Code (Ctrl + Shift + P) Type or select “MCP: Add Server…” Choose “HTTP” (HTTP or Server-Sent Event) Enter the Sentinel MCP server URL: “https://sentinel.microsoft.com/mcp/data-exploration" When prompted, allow authentication with the Sentinel MCP server by clicking “Allow” Once connected, GitHub Copilot will be linked to Sentinel MCP server. Open the agent pane, set it to Agent mode (Ctrl + Shift + I), and you are ready to go. GitHub Copilot will autonomously identify and utilize Sentinel MCP tools as necessary. You can now experience how AI agents powered by the Sentinel MCP server access and utilize your security context using natural language, without knowing KQL, which tables to query and wrangle complex schemas. Try prompts like: “Find the top 3 users that are at risk and explain why they are at risk.” “Find sign-in failures in the last 24 hours and give me a brief summary of key findings.” “Identify devices that showed an outstanding amount of outgoing network connections.” To learn more about the existing capabilities of Sentinel MCP tools, refer to our documentation. Security Copilot will also feature native integration with Sentinel MCP server; this seamless connectivity will enhance autonomous security agents and the open prompting experience. Check out the Security Copilot blog for additional details. What’s coming next? The public preview of Sentinel MCP server marks the beginning of a new era in cybersecurity—one where AI agents operate with full context, precision, and autonomy. In the coming months, the MCP toolset will expand to support natural language access across tabular, graph, and embedding-based data, enabling agents to reason over both structured and unstructured signals. This evolution will dramatically boost agentic performance, allowing them to handle more complex tasks, surface deeper insights, and accelerate protection to machine speed. As we continue to build out this ecosystem, your feedback will be essential in shaping its future. Be sure to check out the Microsoft Secure for a deeper dive and live demo of Sentinel MCP server in action.5.6KViews2likes0CommentsPlanning your move to Microsoft Defender portal for all Microsoft Sentinel customers
In November 2023, Microsoft announced our strategy to unify security operations by bringing the best of XDR and SIEM together. Our first step was bringing Microsoft Sentinel into the Microsoft Defender portal, giving teams a single, comprehensive view of incidents, reducing queue management, enriching threat intel, streamlining response and enabling SOC teams to take advantage of Gen AI in their day-to-day workflow. Since then, considerable progress has been made with thousands of customers using this new unified experience; to enhance the value customers gain when using Sentinel in the Defender portal, multi-tenancy and multi-workspace support was added to help customers with more sophisticated deployments. Our mission is to unify security operations by bringing all your data, workflows, and people together to unlock new capabilities and drive better security outcomes. As a strong example of this, last year we added extended posture management, delivering powerful posture insights to the SOC team. This integration helps build a closed-loop feedback system between your pre- and post-breach efforts. Exposure Management is just one example. By bringing everything together, we can take full advantage of AI and automation to shift from a reactive to predictive SOC that anticipates threats and proactively takes action to defend against them. Beyond Exposure Management, Microsoft has been constantly innovating in the Defender experience, adding not just SIEM but also Security Copilot. The Sentinel experience within the Defender portal is the focus of our innovation energy and where we will continue to add advanced Sentinel capabilities going forward. Onboarding to the new unified experience is easy and doesn’t require a typical migration. Just a few clicks and permissions. Customers can continue to use Sentinel in the Azure portal while it is available even after choosing to transition. Today, we’re announcing that we are moving to the next phase of the transition with a target to retire the Azure portal for Microsoft Sentinel by July 1, 2026. Customers not yet using the Defender portal should plan their transition accordingly. “Really amazing to see that coming, because cross querying with tables in one UI is really cool! Amazing, big step forward to the unified [Defender] portal.” Glueckkanja AG “The biggest benefit of a unified security operations solution (Microsoft Sentinel + Microsoft Defender XDR) has been the ability to combine data in Defender XDR with logs from third party security tools. Another advantage developed has been to eliminate the need to switch between Defender XDR and Microsoft Sentinel portals, now having a single pane of glass, which the team has been wanting for some years.” Robel Kidane, Group Information Security Manager, Renishaw PLC Delivering the SOC of the future Unifying threat protection, exposure management and security analytics capabilities in one pane of glass not only streamlines the user experience, but also enables Sentinel customers to realize security outcomes more efficiently: Analyst efficiency: A single portal reduces context switching, simplifies workflows, reduces training overhead, and improves team agility. Integrated insights: SOC-focused case management, threat intelligence, incident correlation, advanced hunting, exposure management, and a prioritized incident queue enriched with business and sensitivity context—enabling faster, more informed detection and response across all products. SOC optimization: Security controls that can be adjusted as threats and business priorities change to control costs and provide better coverage and utilization of data, thus maximizing ROI from the SIEM. Accelerated response: AI-driven detection and response which reduces mean time to respond (MTTR) by 30%, increases security response efficiency by 60%, and enables embedded Gen AI and agentic workflows. What’s next: Preparing for the retirement of the Sentinel Experience in the Azure Portal Microsoft is committed to supporting every single customer in making that transition over the next 12 months. Beginning July 1, 2026, Sentinel users will be automatically redirected to the Defender portal. After helping thousands of customers smoothly make the transition, we recommend that security teams begin planning their migration and change management now to ensure continuity and avoid disruption. While the technical process is very straightforward, we have found that early preparation allows time for workflow validation, training, and process alignment to take full advantage of the new capabilities and experience. Tips for a Successful Migration to Microsoft Defender 1. Leverage Microsoft’s help: Leverage Microsoft documentation, instructional videos, guidance, and in-product support to help you be successful. A good starting point is the documentation on Microsoft Learn. 2. Plan early: Engage stakeholders early including SOC and IT Security leads, MSSPs, and compliance teams to align on timing, training and organizational needs. Make sure you have an actionable timeline and agreement in the organization around when you can prioritize this transition to ensure access to the full potential of the new experience. 3. Prepare your environment: Plan and design your environment thoroughly. This includes understanding the prerequisites for onboarding Microsoft Sentinel workspaces, reviewing and deciding on access controls, and planning the architecture of your tenant and workspace. Proper planning will ensure a smooth transition and help avoid any disruptions to your security operations. 4. Leverage Advanced Threat Detection: The Defender portal offers enhanced threat detection capabilities with advanced AI and machine learning for Microsoft Sentinel. Make sure to leverage these features for faster and more accurate threat detection and response. This will help you identify and address critical threats promptly, improving your overall security posture. 5. Utilize Unified Hunting and Incident Management: Take advantage of the enhanced hunting, incident, and investigation capabilities in Microsoft Defender. This provides a comprehensive view for more efficient threat detection and response. By consolidating all security incidents, alerts, and investigations into a single unified interface, you can streamline your operations and improve efficiency. 6. Optimize Cost and Data Management The Defender portal offers cost and data optimization features, such as SOC Optimization and Summary Rules. Make sure to utilize these features to optimize your data management, reduce costs, and increase coverage and SIEM ROI. This will help you manage your security operations more effectively and efficiently. Unleash the full potential of your Security team The unified SecOps experience available in the Defender portal is designed to support the evolving needs of modern SOCs. The Defender portal is not just a new home for Microsoft Sentinel - it’s a foundation for integrated, AI-driven security operations. We’re committed to helping you make this transition smoothly and confidently. If you haven’t already joined the thousands of security organizations that have done so, now is the time to begin. Resources AI-Powered Security Operations Platform | Microsoft Security Microsoft Sentinel in the Microsoft Defender portal | Microsoft Learn Shifting your Microsoft Sentinel Environment to the Defender Portal | Microsoft Learn Microsoft Sentinel is now in Defender | YouTube36KViews9likes21CommentsMicrosoft Sentinel’s New Data Lake: Cut Costs & Boost Threat Detection
Microsoft Sentinel is leveling up! Already a trusted cloud-native Security Information and Event Management (SIEM) and Security Orchestration, Automation and Response (SOAR) solution, it empowers security teams to detect, investigate, and respond to threats with speed and precision. Now, with the introduction of its new Data Lake architecture, Sentinel is transforming how security data is stored, accessed, and analyzed, bringing unmatched flexibility and scale to threat investigation. Unlike Microsoft Fabric OneLake, which supports analytics across the organization, Sentinel’s Data Lake is purpose-built for security. It centralizes raw structured, semi-structured, and unstructured data in its original format, enabling advanced analytics without rigid schemas. This article is written by someone who’s spent years helping security teams navigate Microsoft’s evolving ecosystem, translating complex capabilities into practical strategies. What follows is a hands-on look at the key features, benefits, and challenges of Sentinel’s Data Lake, designed to help you make the most of this powerful new architecture. Current Sentinel Features To tackle the challenges security teams, face today—like explosive data growth, integration of varied sources, and tight compliance requirements—organizations need scalable, efficient architectures. Legacy SIEMs often become costly and slow when analyzing multi-year data or correlating diverse events. Security data lakes address these issues by enabling seamless ingestion of logs from any source, schema-on-read flexibility, and parallelized queries over massive datasets. This schema-on-read allows SOC analysts to define how data is interpreted at the time of analysis, rather than when it is stored. This means analysts can flexibly adapt queries and threat detection logic to evolving threats, without reformatting historical data making investigations more agile and responsive to change. This empowers security operations to conduct deep historical analysis, automate enrichment, and apply advanced analytics, such as machine learning, while retaining strict control over data access and residency. Ultimately, decoupling storage and compute allows teams to boost detection and response speed, maintain compliance, and adapt their Security Operation Center (SOC) to future security demands. As organizations manage increasing data and limited budgets, many are moving from legacy SIEMs to advanced cloud-native options. Microsoft Sentinel’s Data Lake separates storage from computing, offering scalable and cost-effective analytics and compliance. For instance, storing 500 TB of logs in Sentinel Data Lake can cut costs by 60–80% compared to Log Analytics, due to lower storage costs and flexible retention. Integration with modern tools and open formats enables efficient threat response and regulatory compliance. Microsoft Sentinel data lake pricing (preview) Sentinel Data Lake Use Cases Log Retention: Long-term retention of security logs for compliance and forensic investigations Hunting: Advanced threat hunting using historical data Interoperability: Integration with Microsoft Fabric and other analytics platforms Cost: Efficient storage prices for high-volume data sources How Microsoft Sentinel Data Lake Helps Microsoft Sentinel’s Data Lake introduces a powerful paradigm shift for security operations by architecting the separation of storage and compute, enabling organizations to achieve petabyte-scale data retention without the traditional overhead and cost penalties of legacy SIEM solutions. Built atop highly scalable, cloud-native infrastructure, Sentinel Data Lake empowers SOCs to ingest telemetry from virtually unlimited sources ranging from on-premises firewalls, proxies, and endpoint logs to SaaS, IaaS, and PaaS environments—while leveraging schema-on-read, a method that allows analysts to define how data is interpreted at query time rather than when it is stored, offering greater flexibility in analytics. For example, a security analyst can adapt to the way historical data is examined as new threats emerge, without needing to reformat or restructure the data stored in the Data Lake. From Microsoft Learn – Retention and data tiering Storing raw security logs in open formats like Parquet (this is a columnar storage file format optimized for efficient data compression and retrieval, commonly used in big data processing frameworks like Apache Spark and Hadoop) enables easy integration with analytics tools and Microsoft Fabric, letting analysts efficiently query historical data using KQL, SQL, or Spark. This approach eliminates the need for complex ETL and archived data rehydration, making incident response faster; for instance, a SOC analyst can quickly search for years of firewall logs for threat detection. From Microsoft Learn – Flexible querying with Kusto Query Language Granular data governance and access controls allow organizations to manage sensitive information and meet legal requirements. Storing raw security logs in open formats enables fast investigations of long-term data incidents, while automated lifecycle management reduces costs and ensures compliance. Data Lakes integrate with Microsoft platforms and other tools for unified analytics and security. Machine learning helps detect unusual login activity across years, overcoming previous storage issues. From Microsoft Learn – Powerful analytics using Jupyter notebooks Pros and Cons The following table highlights the advantages and potential opportunities that Microsoft Sentinel Data Lake offers. This follows the same Pay-As-You-Go pricing model as currently available with Sentinel. Pros Cons License Needed Scalable, cost-effective long-term retention of security data Requires adaptation to new architecture Pay-As-You-Go model Seamless integration with Microsoft Fabric and open data formats Initial setup and integration may involve a learning curve Pay-As-You-Go model Efficient processing of petabyte-scale datasets Transitioning existing workflows may require planning Pay-As-You-Go model Advanced analytics, threat hunting, and AI/ML across historical data Some features may depend on integration with other services Pay-As-You-Go model Supports compliance use cases with robust data governance and audit trails Complexity in new data governance features Pay-As-You-Go model Microsoft Sentinel Data Lake solution advances cloud-native security by overcoming traditional SIEM limitations, allowing organizations to better retain, analyze, and respond to security data. As cyber threats grow, Sentinel Data Lake offers flexible, cost-efficient storage for long-term retention, supporting detection, compliance, and audits without significant expense or complexity. Quick Guide: Deploy Microsoft Sentinel Data Lake Assess Needs: Identify your security data volume, retention, and compliance requirements - Sentinel Data Lake Overview. Prepare Environment: Ensure Azure permissions and workspace readiness - Onboarding Guide. Enable Data Lake: Use Azure CLI or Defender portal to activate - Setup Instructions. Ingest & Import Data: Connect sources and migrate historical logs - Microsoft Sentinel Data Connectors. Integrate Analytics: Use KQL, notebooks, and Microsoft Fabric for scalable analysis - Fabric Overview Train & Optimize: Educate your team and monitor performance - Best Practices. About the Author: Hi! Jacques “Jack” here, I’m a Microsoft Technical Trainer at Microsoft. I wanted to share this as it’s something I often asked during my Security Trainings. This improves the already impressive Microsoft Sentinel feature stack helping the Defender Community to secure their environment in this ever-growing hacked world. I’ve been working with Microsoft Sentinel since September 2019, and I have been teaching learners about this SIEM since March 2020. I have experience using Security Copilot and Security AI Agents, which have been effective in improving my incident response and compromise recovery times.Hacking Made Easy, Patching Made Optional: A Modern Cyber Tragedy
In today’s cyber threat landscape, the tools and techniques required to compromise enterprise environments are no longer confined to highly skilled adversaries or state-sponsored actors. While artificial intelligence is increasingly being used to enhance the sophistication of attacks, the majority of breaches still rely on simple, publicly accessible tools and well-established social engineering tactics. Another major issue is the persistent failure of enterprises to patch common vulnerabilities in a timely manner—despite the availability of fixes and public warnings. This negligence continues to be a key enabler of large-scale breaches, as demonstrated in several recent incidents. The Rise of AI-Enhanced Attacks Attackers are now leveraging AI to increase the credibility and effectiveness of their campaigns. One notable example is the use of deepfake technology—synthetic media generated using AI—to impersonate individuals in video or voice calls. North Korean threat actors, for instance, have been observed using deepfake videos and AI-generated personas to conduct fraudulent job interviews with HR departments at Western technology companies. These scams are designed to gain insider access to corporate systems or to exfiltrate sensitive intellectual property under the guise of legitimate employment. Social Engineering: Still the Most Effective Entry Point And yet, many recent breaches have begun with classic social engineering techniques. In the cases of Coinbase and Marks & Spencer, attackers impersonated employees through phishing or fraudulent communications. Once they had gathered sufficient personal information, they contacted support desks or mobile carriers, convincingly posing as the victims to request password resets or SIM swaps. This impersonation enabled attackers to bypass authentication controls and gain initial access to sensitive systems, which they then leveraged to escalate privileges and move laterally within the network. Threat groups such as Scattered Spider have demonstrated mastery of these techniques, often combining phishing with SIM swap attacks and MFA bypass to infiltrate telecom and cloud infrastructure. Similarly, Solt Thypoon (formerly DEV-0343), linked to North Korean operations, has used AI-generated personas and deepfake content to conduct fraudulent job interviews—gaining insider access under the guise of legitimate employment. These examples underscore the evolving sophistication of social engineering and the need for robust identity verification protocols. Built for Defense, Used for Breach Despite the emergence of AI-driven threats, many of the most successful attacks continue to rely on simple, freely available tools that require minimal technical expertise. These tools are widely used by security professionals for legitimate purposes such as penetration testing, red teaming, and vulnerability assessments. However, they are also routinely abused by attackers to compromise systems Case studies for tools like Nmap, Metasploit, Mimikatz, BloodHound, Cobalt Strike, etc. The dual-use nature of these tools underscores the importance of not only detecting their presence but also understanding the context in which they are being used. From CVE to Compromise While social engineering remains a common entry point, many breaches are ultimately enabled by known vulnerabilities that remain unpatched for extended periods. For example, the MOVEit Transfer vulnerability (CVE-2023-34362) was exploited by the Cl0p ransomware group to compromise hundreds of organizations, despite a patch being available. Similarly, the OpenMetadata vulnerability (CVE-2024-28255, CVE-2024-28847) allowed attackers to gain access to Kubernetes workloads and leverage them for cryptomining activity days after a fix had been issued. Advanced persistent threat groups such as APT29 (also known as Cozy Bear) have historically exploited unpatched systems to maintain long-term access and conduct stealthy operations. Their use of credential harvesting tools like Mimikatz and lateral movement frameworks such as Cobalt Strike highlights the critical importance of timely patch management—not just for ransomware defense, but also for countering nation-state actors. Recommendations To reduce the risk of enterprise breaches stemming from tool misuse, social engineering, and unpatched vulnerabilities, organizations should adopt the following practices: 1. Patch Promptly and Systematically Ensure that software updates and security patches are applied in a timely and consistent manner. This involves automating patch management processes to reduce human error and delay, while prioritizing vulnerabilities based on their exploitability and exposure. Microsoft Intune can be used to enforce update policies across devices, while Windows Autopatch simplifies the deployment of updates for Windows and Microsoft 365 applications. To identify and rank vulnerabilities, Microsoft Defender Vulnerability Management offers risk-based insights that help focus remediation efforts where they matter most. 2. Implement Multi-Factor Authentication (MFA) To mitigate credential-based attacks, MFA should be enforced across all user accounts. Conditional access policies should be configured to adapt authentication requirements based on contextual risk factors such as user behavior, device health, and location. Microsoft Entra Conditional Access allows for dynamic policy enforcement, while Microsoft Entra ID Protection identifies and responds to risky sign-ins. Organizations should also adopt phishing-resistant MFA methods, including FIDO2 security keys and certificate-based authentication, to further reduce exposure. 3. Identity Protection Access Reviews and Least Privilege Enforcement Conducting regular access reviews ensures that users retain only the permissions necessary for their roles. Applying least privilege principles and adopting Microsoft Zero Trust Architecture limits the potential for lateral movement in the event of a compromise. Microsoft Entra Access Reviews automates these processes, while Privileged Identity Management (PIM) provides just-in-time access and approval workflows for elevated roles. Just-in-Time Access and Risk-Based Controls Standing privileges should be minimized to reduce the attack surface. Risk-based conditional access policies can block high-risk sign-ins and enforce additional verification steps. Microsoft Entra ID Protection identifies risky behaviors and applies automated controls, while Conditional Access ensures access decisions are based on real-time risk assessments to block or challenge high-risk authentication attempts. Password Hygiene and Secure Authentication Promoting strong password practices and transitioning to passwordless authentication enhances security and user experience. Microsoft Authenticator supports multi-factor and passwordless sign-ins, while Windows Hello for Business enables biometric authentication using secure hardware-backed credentials. 4. Deploy SIEM and XDR for Detection and Response A robust detection and response capability is vital for identifying and mitigating threats across endpoints, identities, and cloud environments. Microsoft Sentinel serves as a cloud-native SIEM that aggregates and analyses security data, while Microsoft Defender XDR integrates signals from multiple sources to provide a unified view of threats and automate response actions. 5. Map and Harden Attack Paths Organizations should regularly assess their environments for attack paths such as privilege escalation and lateral movement. Tools like Microsoft Defender for Identity help uncover Lateral Movement Paths, while Microsoft Identity Threat Detection and Response (ITDR) integrates identity signals with threat intelligence to automate response. These capabilities are accessible via the Microsoft Defender portal, which includes an attack path analysis feature for prioritizing multicloud risks. 6. Stay Current with Threat Actor TTPs Monitor the evolving tactics, techniques, and procedures (TTPs) employed by sophisticated threat actors. Understanding these behaviours enables organizations to anticipate attacks and strengthen defenses proactively. Microsoft Defender Threat Intelligence provides detailed profiles of threat actors and maps their activities to the MITRE ATT&CK framework. Complementing this, Microsoft Sentinel allows security teams to hunt for these TTPs across enterprise telemetry and correlate signals to detect emerging threats. 7. Build Organizational Awareness Organizations should train staff to identify phishing, impersonation, and deepfake threats. Simulated attacks help improve response readiness and reduce human error. Use Attack Simulation Training, in Microsoft Defender for Office 365 to run realistic phishing scenarios and assess user vulnerability. Additionally, educate users about consent phishing, where attackers trick individuals into granting access to malicious apps. Conclusion The democratization of offensive security tooling, combined with the persistent failure to patch known vulnerabilities, has significantly lowered the barrier to entry for cyber attackers. Organizations must recognize that the tools used against them are often the same ones available to their own security teams. The key to resilience lies not in avoiding these tools, but in mastering them—using them to simulate attacks, identify weaknesses, and build a proactive defense. Cybersecurity is no longer a matter of if, but when. The question is: will you detect the attacker before they achieve their objective? Will you be able to stop them before reaching your most sensitive data? Additional read: Gartner Predicts 30% of Enterprises Will Consider Identity Verification and Authentication Solutions Unreliable in Isolation Due to AI-Generated Deepfakes by 2026 Cyber security breaches survey 2025 - GOV.UK Jasper Sleet: North Korean remote IT workers’ evolving tactics to infiltrate organizations | Microsoft Security Blog MOVEit Transfer vulnerability Solt Thypoon Scattered Spider SIM swaps Attackers exploiting new critical OpenMetadata vulnerabilities on Kubernetes clusters | Microsoft Security Blog Microsoft Defender Vulnerability Management - Microsoft Defender Vulnerability Management | Microsoft Learn Zero Trust Architecture | NIST tactics, techniques, and procedures (TTP) - Glossary | CSRC https://learn.microsoft.com/en-us/security/zero-trust/deploy/overviewEnterprise-grade controls for AI apps and agents built with Azure AI Foundry and Copilot Studio
AI innovation is moving faster than ever, and more AI projects are moving beyond experimentation into deployment, to drive tangible business impact. As organizations accelerate innovation with custom AI applications and agents, new risks emerge across the software development lifecycle and AI stack related to data oversharing and leaks, new vulnerabilities and threats, and non-compliance with stringent regulatory requirements Through 2025, poisoning of software supply chains and infrastructure technology stacks will constitute more than 70% of malicious attacks against AI used in the enterprise 1 , highlighting potential threats that originate early in development. Today, the average cost of a data breach is $4.88 million, but when security issues are caught early in the development process, that number drops dramatically to just $80 per incident 2 . The message is very clear; security can’t be an afterthought anymore. It must be a team sport across the organization, embedded from the start and throughout the development lifecycle. That's why developers and security teams should align on processes and tools that bring security into every stage of the AI development lifecycle and give security practitioners visibility into and the ability to mitigate risks. To address these growing challenges and help customers secure and govern their AI workloads across development and security teams, we are: Enabling Azure AI Foundry and Microsoft Copilot Studio to provide best-in-class foundational capabilities to secure and govern AI workloads Deeply integrating and embedding industry-leading capabilities from Microsoft Purview, Microsoft Defender, and Microsoft Entra into Azure AI Foundry and Microsoft Copilot Studio This week, 3,000 developers are gathering in Seattle for the annual Microsoft Build conference, with many more tuning in online, to learn practical skills for accelerating their AI apps and agents' innovation. To support their AI innovation journey, today we are excited to announce several new capabilities to help developers and organizations secure and govern AI apps and agents. New Azure AI Foundry foundational capabilities to secure and govern AI workloads Azure AI Foundry enhancements for AI security and safety With 70,000 customers, 100 trillion tokens processed this quarter, and 2 billion enterprise search queries each day, Azure AI Foundry has grown beyond just an application layer—it's now a comprehensive platform for building agents that can plan, take action, and continuously learn to drive real business outcomes. To help organizations build and deploy AI with confidence, we’re introducing new security and safety capabilities and insights for developers in Azure AI Foundry Introducing Spotlighting to detect and block prompt injection attacks in real time As AI systems increasingly rely on external data sources, a new class of threats has emerged. Indirect prompt injection attacks embed hidden instructions in documents, emails, and web content, tricking models into taking unauthorized actions without any direct user input. These attacks are difficult to detect and hard to prevent using traditional filters alone. To address this, Azure AI Content Safety is introducing Spotlighting, now available in preview. Spotlighting strengthens the Prompt Shields guardrail by improving its ability to detect and handle potential indirect prompt injections, where hidden adversarial instructions are embedded in external content. This new capability helps prevent the model from inadvertently acting on malicious prompts that are not directly visible to the user. Enable Spotlighting in Azure AI Content Safety to detect potential indirect prompt injection attacks New capabilities for task adherence evaluation and task adherence mitigation to ensure agents remain within scope As developers build more capable agents, organizations face growing pressure to help confirm those agents act within defined instructions and policy boundaries. Even small deviations can lead to tool misuse, broken workflows, or risks like unintended exposure of sensitive data. To solve this, Azure AI Foundry now includes task adherence for agents, now in preview and powered by two components: a real-time evaluation and a new control within Azure AI Content Safety. At the core is a real-time task adherence evaluation API, part of Azure AI Content Safety. This API assesses whether an agent’s behavior is aligned with its assigned task by analyzing the user’s query, system instructions, planned tool calls, and the agent’s response. The evaluation framework is built on Microsoft’s Agent Evaluators, which measure intent resolution, tool selection accuracy, completeness of response, and overall alignment to the original request. Developers can run this scoring logic locally using the Task Adherence Evaluator in the Azure AI Evaluation SDK, with a five-point scale that ranges from fully nonadherent to fully adherent. This gives teams a flexible and transparent way to inspect task-level behavior before it causes downstream issues. Task adherence is enforced through a new control in Azure AI Content Safety. If an agent goes off-task, the control can block tool use, pause execution, or trigger human review. In Azure AI Agent Service, it is available as an opt-in feature and runs automatically. Combined with real-time evaluation, this control helps to ensure that agents stay on task, follow instructions, and operate according to enterprise policies. Learn more about Prompt Shields in Azure AI Content Safety. Azure AI Foundry continuous evaluation and monitoring of agentic systems Maintaining high performance and compliance for AI agents after deployment is a growing challenge. Without ongoing oversight, issues like performance degradation, safety risks, or unintentional misuse of resources can slip through unnoticed. To address this, Azure AI Foundry introduces continuous evaluation and monitoring of agentic systems, now in preview, provides a single pane of glass dashboard to track key metrics such as performance, quality, safety, and resource usage in real time. Continuous evaluation runs quality and safety evaluations at a sampled rate of production usage with results made available in the Azure AI Foundry Monitoring dashboard and published to Application Insights. Developers can set alerts to detect drift or regressions and use Azure Monitor to gain full-stack visibility into their AI systems. For example, an organization using an AI agent to assist with customer-facing tasks can monitor groundedness and detect a decline in quality when the agent begins referencing irrelevant information, helping teams to act before it potentially negatively affects trust of users. Azure AI Foundry evaluation integrations with Microsoft Purview Compliance Manager, Credo AI, and Saidot for streamlined compliance AI regulations and standards introduce new requirements for transparency, documentation, and risk management for high-risk AI systems. As developers build AI applications and agents, they may need guidance and tools to help them evaluate risks based on these requirements and seamlessly share control and evaluation insights with compliance and risk teams. Today, we are announcing previews for Azure AI Foundry evaluation tool’s integration with a compliance management solution, Microsoft Purview Compliance Manager, and AI governance solutions, Credo AI and Saidot. These integrations help define risk parameters, run suggested compliance evaluations, and collect evidence for control testing and auditing. For example, for a developer who’s building an AI agent in Europe may be required by their compliance team to complete a Data Protection Impact Assets (DPIA) and Algorithmic Impact Assessment (AIA) to meet internal risk management and technical documentation requirements aligned with emerging AI governance standards and best practices. Based on Purview Compliance Manager’s step-by-step guidance on controls implementation and testing, the compliance teams can evaluate risks such as potential bias, cybersecurity vulnerabilities, or lack of transparency in model behavior. Once the evaluation is conducted in Azure AI Foundry, the developer can obtain a report with documented risk, mitigation, and residual risk for compliance teams to upload to Compliance Manager to support audits and provide evidence to regulators or external stakeholders. Assess controls for Azure AI Foundry against emerging AI governance standards Learn more about Purview Compliance Manager. Learn more about the integration with Credo AI and Saidot in this blogpost. Leading Microsoft Entra, Defender and Purview value extended to Azure AI Foundry and Microsoft Copilot Studio Introducing Microsoft Entra Agent ID to help address agent sprawl and manage agent identity Organizations are rapidly building their own AI agents, leading to agent sprawl and a lack of centralized visibility and management. Security teams often struggle to keep up, unable to see which agents exist and whether they introduce security or compliance risks. Without proper oversight, agent sprawl increases the attack surface and makes it harder to manage these non-human identities. To address this challenge, we’re announcing the public preview of Microsoft Entra Agent ID, a new capability in the Microsoft Entra admin center that gives security admins visibility and control over AI agents built with Copilot Studio and Azure AI Foundry. With Microsoft Entra Agent ID, an agent created through Copilot Studio or Azure AI Foundry is automatically assigned an identity with no additional work required from the developers building them. This is the first step in a broader initiative to manage and protect non-human identities as organizations continue to build AI agents. : Security and identity admins can gain visibility into AI agents built in Copilot Studio and Azure AI Foundry in the Microsoft Entra Admin Center This new capability lays the foundation for more advanced capabilities coming soon to Microsoft Entra. We also know that no one can do it alone. Security has always been a team sport, and that’s especially true as we enter this new era of protecting AI agents and their identities. We’re energized by the momentum across the industry; two weeks ago, we announced support for the Agent-to-Agent (A2A) protocol and began collaborating with partners to shape the future of AI identity workflows. Today, we’re also excited to announce new partnerships with ServiceNow and Workday. As part of this, we’ll integrate Microsoft Entra Agent ID with the ServiceNow AI Platform and the Workday Agent System of Record. This will allow for automated provisioning of identities for future digital employees. Learn more about Microsoft Entra Agent ID. Microsoft Defender security alerts and recommendations now available in Azure AI Foundry As more AI applications are deployed to production, organizations need to predict and prevent potential AI threats with natively integrated security controls backed by industry-leading Gen AI and threat intelligence for AI deployments. Developers need critical signals from security teams to effectively mitigate security risks related to their AI deployments. When these critical signals live in separate systems outside the developer experience, this can create delays in mitigation, leaving opportunities for AI apps and agents to become liabilities and exposing organizations to various threats and compliance violations. Now in preview, Microsoft Defender for Cloud integrates AI security posture management recommendations and runtime threat protection alerts directly into the Azure AI Foundry portal. These capabilities, previously announced as part of the broader Microsoft Defender for Cloud solution, are extended natively into Azure AI Foundry enabling developers to access alerts and recommendations without leaving their workflows. This provides real-time visibility into security risks, misconfigurations, and active threats targeting their AI applications on specific Azure AI projects, without needing to switch tools or wait on security teams to provide details. Security insights from Microsoft Defender for Cloud help developers identify and respond to threats like jailbreak attacks, sensitive data leakage, and misuse of system resources. These insights include: AI security posture recommendations that identify misconfigurations and vulnerabilities in AI services and provide best practices to reduce risk Threat protection alerts for AI services that notify developers of active threats and provide guidance for mitigation, across more than 15 detection types For example, a developer building an AI-powered agent can receive security recommendations suggesting the use of Azure Private Link for Azure AI Services resources. This reduces the risk of data leakage by handling the connectivity between consumers and services over the Azure backbone network. Each recommendation includes actionable remediation steps, helping teams identify and mitigate risks in both pre- and post-deployment phases. This helps to reduce risks without slowing down innovation. : Developers can view security alerts on the Risks + alerts page in Azure AI Foundry : Developers can view recommendations on the Guardrails + controls page in Azure AI Foundry This integration is currently in preview and will be generally available in June 2025 in Azure AI Foundry. Learn more about protecting AI services with Microsoft Defender for Cloud. Microsoft Purview capabilities extended to secure and govern data in custom-built AI apps and agents Data oversharing and leakage are among the top concerns for AI adoption, and central to many regulatory requirements. For organizations to confidently deploy AI applications and agents, both low code and pro code developers need a seamless way to embed security and compliance controls into their AI creations. Without simple, developer-friendly solutions, security gaps can quickly become blockers, delaying deployment and increasing risks as applications move from development to production. Today, Purview is extending its enterprise-grade data security and compliance capabilities, making it easier for both low code and pro code developers to integrate data security and compliance into their AI applications and agents, regardless of which tools or platforms they use. For example, with this update, Microsoft Purview DSPM for AI becomes the one place data security teams can see all the data risk insights across Microsoft Copilots, agents built in Agent Builder and Copilot Studio, and custom AI apps and agents built in Azure AI Foundry and other platforms. Admins can easily drill into security and compliance insights for specific AI apps or agents, making it easier to investigate and take action on potential risks. : Data security admins can now find data security and compliance insights across Microsoft Copilots, agents built with Agent Builder and Copilot Studio, and custom AI apps and agents in Microsoft Purview DSPM for AI In the following sections, we will provide more details about the updates to Purview capabilities in various AI workloads. 1. Microsoft Purview data security and compliance controls can be extended to any custom-built AI application and agent via the new Purview SDK or the native Purview integration with Azure AI Foundry. The new capabilities make it easy and effortless for security teams to bring the same enterprise-grade data security compliance controls available today for Microsoft 365 Copilot to custom AI applications and agents, so organizations can: Discover data security risks, such as sensitive data in user prompts, and data compliance risks, such as harmful content, and get recommended actions to mitigate risks proactively in Microsoft Purview Data Security Posture Management (DSPM) for AI. Protect sensitive data against data leakage and insider risks with Microsoft Purview data security policies. Govern AI interactions with Audit, Data Lifecycle Management, eDiscovery, and Communication Compliance. Microsoft Purview SDK Microsoft Purview now offers Purview SDK, a set of REST APIs, documentation, and code samples, currently in preview, enabling developers to integrate Purview's data security and compliance capabilities into AI applications or agents within any integrated development environment (IDE). : By embedding Purview APIs into the IDE, developers help enable their AI apps to be secured and governed at runtime For example, a developer building an AI agent using an AWS model can use the Purview SDK to enable their AI app to automatically identify and block sensitive data entered by users before it’s exposed to the model, while also providing security teams with valuable signals that support compliance. With Purview SDK, startups, ISVs, and partners can now embed Purview industry-leading capabilities directly into their AI software solutions, making these solutions Purview aware and easier for their customers to secure and govern data in their AI solutions. For example, Infosys Vice President and Delivery Head of Cyber Security Practice, Ashish Adhvaryu indicates, “Infosys Cyber Next platform integrates Microsoft Purview to provide enhanced AI security capabilities. Our solution, the Cyber Next AI assistant (Cyber Advisor) for the SOC analyst, leverages Purview SDK to drive proactive threat mitigation with real-time monitoring and auditing capabilities. This integration provides holistic AI-assisted protection, enhancing cybersecurity posture." Microsoft partner EY (previously known as Ernst and Young) has also leveraged the new Purview SDK to embed Purview value into their GenAI initiatives. “We’re not just building AI tools, we are creating Agentic solutions where trust, security, and transparency are present from the start, supported by the policy controls provided through the Purview SDK. We’re seeing 25 to 30 percent time savings when we build secure features using the Purview SDK,” noted Sumanta Kar, Partner, Innovation and Emerging Tech at EY. Learn more about the Purview SDK. Microsoft Purview integrates natively with Azure AI Foundry Organizations are developing an average of 14 custom AI applications. The rapid pace of AI innovation may leave security teams unaware of potential data security and compliance risks within their environments. With the update announced today, Azure AI Foundry signals are now directly integrated with Purview Data Security Posture Management for AI, Insider Risk Management, and data compliance controls, minimizing the need for additional development work. For example, for AI applications and agents built with Azure AI Foundry models, data security teams can gain visibility into AI usage and data risks in Purview DSPM for AI, with no additional work from developers. Data security teams can also detect, investigate, and respond to both malicious and inadvertent user activities, such as a departing employee leveraging an AI agent to retrieve an anomalous amount of sensitive data, with Microsoft Purview Insider Risk Management (IRM) policies. Lastly, user prompts and AI responses in Azure AI apps and agents can now be ingested into Purview compliance tools as mentioned above. Learn more about Microsoft Purview for Azure AI Foundry. 2. Purview data protections extended to Copilot Studio agents grounded in Microsoft Dataverse data Coming to preview in June, Purview Information Protection extends auto-labeling and label inheritance coverage to Dataverse to help prevent oversharing and data leaks. Information Protection makes it easier for organizations to automatically classify and protect sensitive data at scale. A common challenge is that sensitive data often lands in Dataverse from various sources without consistent labeling or protection. The rapid adoption of agents built using Copilot Studio and grounding data from Dataverse increases the risk of data oversharing and leakage if data is not properly protected. With auto-labeling, data stored in Dataverse tables can be automatically labeled based on policies set in Microsoft Purview, regardless of its source. This reduces the need for manual labeling effort and protects sensitive information from the moment it enters Dataverse. With label inheritance, AI agent responses grounded in Dataverse data will automatically carry and honor the source data’s sensitivity label. If a response pulls from multiple tables with different labels, the most restrictive label is applied to ensure consistent protection. For example, a financial advisor building an agent in Copilot Studio might connect multiple Dataverse tables, some labeled as “General” and others as “Highly Confidential.” If a response pulls from both, it will inherit the most restrictive label, in this case, "Highly Confidential,” to prevent unauthorized access and ensure appropriate protections are applied across both maker and users of the agent. Together, auto-labeling and label inheritance in Dataverse support a more secure, automated foundation for AI. : Sensitivity labels will be automatically applied to data in Dataverse : AI-generated responses will inherit and honor the source data’s sensitivity labels Learn more about protecting Dataverse data with Microsoft Purview. 3. Purview DSPM for AI can now provide visibility into unauthenticated interactions with Copilot Studio agents As organizations increasingly use Microsoft Copilot Studio to deploy AI agents for frontline customer interactions, gaining visibility into unauthenticated user interactions and proactively mitigating risks becomes increasingly critical. Building on existing Purview and Copilot Studio integrations, we’ve extended DSPM for AI and Audit in Copilot Studio to provide visibility into unauthenticated interactions, now in preview. This gives organizations a more comprehensive view of AI-related data security risks across authenticated and unauthenticated users. For example, a healthcare provider hosting an external, customer-facing agent assistant must be able to detect and respond to attempts by unauthenticated users to access sensitive patient data. With these new capabilities in DSPM for AI, data security teams can now identify these interactions, assess potential exposure of sensitive data, and act accordingly. Additionally, integration with Purview Audit provides teams with seamless access to information needed for audit requirements. : Gain visibility into all AI interactions, including those from unauthenticated users Learn more about Purview for Copilot Studio. 4. Purview Data Loss Prevention extended to more Microsoft 365 agent scenarios To help organizations prevent data oversharing through AI, at Ignite 2024, we announced that data security admins could prevent Microsoft 365 Copilot from using certain labeled documents as grounding data to generate summaries or responses. Now in preview, this control also extends to agents published in Microsoft 365 Copilot that are grounded by Microsoft 365 data, including pre-built Microsoft 365 agents, agents built with the Agent Builder, and agents built with Copilot Studio. This helps ensure that files containing sensitive content are used appropriately by AI agents. For example, confidential legal documents with highly specific language that could lead to improper guidance if summarized by an AI agent, or "Internal only” documents that shouldn’t be used to generate content that can be shared outside of the organization. : Extend data loss prevention (DLP) policies to Microsoft 365 Copilot agents to protect sensitive data Learn more about Data Loss Prevention for Microsoft 365 Copilot and agents. The data protection capabilities we are extending to agents in Agent Builder and Copilot Studio demonstrate our continued investment in strengthening the Security and Governance pillar of the Copilot Control System (CSS). CCS provides integrated controls to help IT and security teams secure, manage, and monitor Copilot and agents across Microsoft 365, spanning governance, management, and reporting. Learn more here. Explore additional resources As developers and security teams continue to secure AI throughout its lifecycle, it’s important to stay ahead of emerging risks and ensure protection. Microsoft Security provides a range of tools and resources to help you proactively secure AI models, apps, and agents from code to runtime. Explore the following resources to deepen your understanding and strengthen your approach to AI security: Learn more about Security for AI solutions on our webpage Learn more about Microsoft Purview SDK Get started with Azure AI Foundry Get started with Microsoft Entra Get started with Microsoft Purview Get started with Microsoft Defender for Cloud Get started with Microsoft 365 Copilot Get started with Copilot Studio Sign up for a free Microsoft 365 E5 Security Trial and Microsoft Purview Trial 1 Predicts 2025: Navigating Imminent AI Turbulence for Cybersecurity, Jeremy D'Hoinne, Akif Khan, Manuel Acosta, Avivah Litan, Deepak Seth, Bart Willemsen, 10 February 2025 2 IBM. "Cost of a Data Breach 2024: Financial Industry." IBM Think, 13 Aug. 2024, https://www.ibm.com/think/insights/cost-of-a-data-breach-2024-financial-industry; Cser, Tamas. "The Cost of Finding Bugs Later in the SDLC." Functionize, 5 Jan. 2023, https://www.functionize.com/blog/the-cost-of-finding-bugs-later-in-the-sdlcUnderstanding and mitigating security risks in MCP implementations
Introducing any new technology can introduce new security challenges or exacerbate existing security risks. In this blog post, we’re going to look at some of the security risks that could be introduced to your environment when using Model Context Protocol (MCP), and what controls you can put in place to mitigate them. MCP is a framework that enables seamless integration between LLM applications and various tools and data sources. MCP defines: A standardized way for AI models to request external actions through a consistent API Structured formats for how data should be passed to and from AI systems Protocols for how AI requests are processed, executed, and returned MCP allows different AI systems to use a common set of tools and patterns, ensuring consistent behavior when AI models interact with external systems. MCP architecture MCP follows a client-server architecture that allows AI models to interact with external tools efficiently. Here’s how it works: MCP Host – The AI model (e.g., Azure OpenAI GPT) requesting data or actions. MCP Client – An intermediary service that forwards the AI model's requests to MCP servers. MCP Server – Lightweight applications that expose specific capabilities (APIs, databases, files, etc.). Data Sources – Various backend systems, including local storage, cloud databases, and external APIs. MCP security controls Any system which has access to important resources has implied security challenges. Security challenges can generally be addressed through correct application of fundamental security controls and concepts. As MCP is only newly defined, the specification is changing very rapidly and as the protocol evolves. Eventually the security controls within it will mature, enabling a better integration with enterprise and established security architectures and best practices. Research published in the Microsoft Digital Defense Report states that 98% of reported breaches would be prevented by robust security hygiene and the best protection against any kind of breach is to get your baseline security hygiene, secure coding best practices and supply chain security right – those tried and tested practices that we already know about still make the most impact in reducing security risk. Let's look at some of the ways that you can start to address security risks when adopting MCP. MCP server authentication (if your MCP implementation was before 26th April 2025) Problem statement: The original MCP specification assumed that developers would write their own authentication server. This requires knowledge of OAuth and related security constraints. MCP servers acted as OAuth 2.0 Authorization Servers, managing the required user authentication directly rather than delegating it to an external service such as Microsoft Entra ID. As of 26 April 2025, an update to the MCP specification allows for MCP servers to delegate user authentication to an external service. Risks: Misconfigured authorization logic in the MCP server can lead to sensitive data exposure and incorrectly applied access controls. OAuth token theft on the local MCP server. If stolen, the token can then be used to impersonate the MCP server and access resources and data from the service that the OAuth token is for. Mitigating controls: Thoroughly review your MCP server authorization logic, here some posts discussing this in more detail - Azure API Management Your Auth Gateway For MCP Servers | Microsoft Community Hub and Using Microsoft Entra ID To Authenticate With MCP Servers Via Sessions · Den Delimarsky Implement best practices for token validation and lifetime Use secure token storage and encrypt tokens Excessive permissions for MCP servers Problem statement: MCP servers may have been granted excessive permissions to the service/resource they are accessing. For example, an MCP server that is part of an AI sales application connecting to an enterprise data store should have access scoped to the sales data and not allowed to access all the files in the store. Referencing back to the principle of least privilege (one of the oldest security principles), no resource should have permissions in excess of what is required for it to execute the tasks it was intended for. AI presents an increased challenge in this space because to enable it to be flexible, it can be challenging to define the exact permissions required. Risks: Granting excessive permissions can allow for exfiltration or amending data that the MCP server was not intended to be able to access. This could also be a privacy issue if the data is personally identifiable information (PII). Mitigating controls: Clearly define the permissions that the MCP server has to access the resource/service it connects to. These permissions should be the minimum required for the MCP server to access the tool or data it is connecting to. Indirect prompt injection attacks Problem statement: Researchers have shown that the Model Context Protocol (MCP) is vulnerable to a subset of Indirect Prompt Injection attacks known as Tool Poisoning Attacks. Tool poisoning is a scenario where an attacker embeds malicious instructions within the descriptions of MCP tools. These instructions are invisible to users but can be interpreted by the AI model and its underlying systems, leading to unintended actions that could ultimately lead to harmful outcomes. Risks: Unintended AI actions present a variety of security risks that include data exfiltration and privacy breaches. Mitigating controls: Implement AI prompt shields: in Azure AI Foundry, you can follow these steps to implement AI prompt shields. Implement robust supply chain security: you can read more about how Microsoft implements supply chain security internally here. Established security best practices that will uplift your MCP implementation’s security posture Any MCP implementation inherits the existing security posture of your organization's environment that it is built upon, so when considering the security of MCP as a component of your overall AI systems it is recommended that you look at uplifting your overall existing security posture. The following established security controls are especially pertinent: Secure coding best practices in your AI application - protect against the OWASP Top 10, the OWASP Top 10 for LLMs, use of secure vaults for secrets and tokens, implementing end-to-end secure communications between all application components, etc. Server hardening – use MFA where possible, keep patching up to date, integrate the server with a third party identity provider for access, etc. Keep devices, infrastructure and applications up to date with patches Security monitoring – implementing logging and monitoring of an AI application (including the MCP client/servers) and sending those logs to a central SIEM for detection of anomalous activities Zero trust architecture – isolating components via network and identity controls in a logical manner to minimize lateral movement if an AI application were compromised. Conclusion MCP is a promising development in the AI space that enables rich data and context access. As developers embrace this new approach to integrating their organization's APIs and connectors into LLMs, they need to be aware of security risks and how to implement controls to reduce those risks. There are mitigating security controls that can be put in place to reduce the risks inherent in the current specification, but as the protocol develops expect that some of the risks will reduce or disappear entirely. We encourage you to contribute to and suggest security related MCP RFCs to make this protocol even better! With thanks to OrinThomas, dasithwijes, dendeli and Peter Marcu for their inputs and collaboration on this post.