Microsoft Defender for Cloud Apps

Protect Copilot Studio AI Agents in Real Time with Microsoft Defender
Building AI agents has never been easier. Platforms like Microsoft Copilot Studio democratize the creation of AI agents and empower non-technical users to build intelligent agents that automate tasks and streamline business processes. These agents can answer questions, orchestrate complex tasks, and integrate with enterprise systems to boost productivity and creativity. Organizations are embracing a future where every team has AI agents working alongside them to increase efficiency and responsiveness.

While AI agents unlock exciting new possibilities, they also introduce new security risks, most notably prompt injection attacks and a broader attack surface. Attackers are already testing ways to exploit them, such as abusing tool permissions, sneaking in malicious instructions, or tricking agents into sharing sensitive data. Prompt injection is especially concerning because it happens when an attacker feeds an agent malicious inputs to override the agent’s intended behavior. These risks aren’t due to flaws in Copilot Studio or any single platform; they’re a natural challenge that comes with democratizing AI development. As more people build and deploy agents, strong, real-time protection will be critical to keeping them secure.

To help organizations safely unlock the potential of generative AI, Microsoft Defender has introduced innovations ranging from shadow AI discovery to out-of-the-box threat protection for both pre-built and custom-built generative AI apps. Today, we’re excited to take the next step in securing AI agents: Microsoft Defender now delivers real-time protection during agent runtime for AI agents built with Copilot Studio. It automatically stops agents from executing unsafe actions at runtime if suspicious behavior, such as a prompt injection attempt, is detected, and notifies security teams with a detailed alert in the Defender portal.
Defender’s AI agent runtime protection is part of our broader approach to securing Copilot Studio AI agents, as outlined in this blog post.

Monitor AI agent runtime activities and detect prompt injection attacks

Prompt injections are particularly dangerous because they exploit the very AI logic that powers these agents. A well-crafted input can trick an agent’s underlying language model into ignoring its safety guardrails or revealing secrets it was supposed to keep. With thousands of agents operating and interacting with external inputs, the risk of prompt injection is not theoretical; it’s a pressing concern that grows with every new agent deployed.

The new real-time protection for AI agents built with Copilot Studio adds a safety net at the most critical point: when the agent is running and acting. It helps safeguard AI agents during their operation, reducing the chance that malicious inputs can exploit them during runtime. Microsoft Defender now monitors agent tool invocation calls in real time. If a suspicious or high-risk action is detected, such as a known prompt injection pattern, the action is blocked before it is executed. The agent halts processing and informs the user that their request was blocked due to a security risk. For example, if an HR chatbot agent is tricked by a hidden prompt to send out confidential salary information, Defender will detect this unauthorized action and block it before any tool is invoked.

Investigate suspicious agent behaviors in a unified experience

See the full attack story, not just the alerts. Today’s attacks are targeted and multi-stage. When Defender stops risky Copilot Studio AI agent activity at runtime, it raises an alert and immediately begins correlating related signals across email, endpoints, identities, apps, and cloud into a single incident. That builds the complete attack narrative, often before anyone even opens the queue, so the SOC can see how they’re being targeted and what to do next.
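Conceptually, a runtime guardrail like this sits between the agent and its tools and vets each pending tool invocation before it executes. The sketch below is a heavily simplified illustration only: the pattern list, function name, and return shape are invented for this example, and Defender's actual detection relies on far richer signals than static regexes.

```python
import re

# Illustrative prompt-injection indicators. A production detector uses ML
# classifiers and threat intelligence, not a short static list like this.
INJECTION_PATTERNS = [
    r"ignore (all )?previous instructions",
    r"disregard (the|your) (system )?prompt",
    r"reveal (the|your) (system prompt|hidden instructions)",
]

def guard_tool_call(tool_name: str, tool_input: str) -> dict:
    """Inspect a pending tool invocation; block it if the input looks injected."""
    for pattern in INJECTION_PATTERNS:
        if re.search(pattern, tool_input, re.IGNORECASE):
            return {
                "allowed": False,
                # Mirrors the behavior described above: the agent halts and
                # tells the user the request was blocked.
                "user_message": "Your request was blocked due to a security risk.",
                "alert": {"tool": tool_name, "matched": pattern},
            }
    return {"allowed": True}
```

In the HR-chatbot example above, a hidden instruction smuggled into the agent's input would be caught by the guard before the email tool ever runs, while ordinary requests pass through untouched.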
In the Microsoft Defender portal, incidents arrive enriched with timelines, entity relationships, relevant TTPs, and threat intelligence. Automated investigation and response gathers evidence, determines scope, and recommends or executes remediation to cut triage time. With Security Copilot embedded, analysts get instant incident summaries, guided response, natural-language hunting, and contextualized threat intelligence to accelerate deeper analysis and stay ahead of threats.

If you use Microsoft Sentinel, the unified SOC experience brings Defender XDR incidents together with third-party data. And with the new Microsoft Sentinel data lake (preview), teams can retain and analyze years of security data in one place, then hunt across that history using natural-language prompts that Copilot translates to KQL.

Because runtime protection already stops the unsafe actions of Copilot Studio AI agents, most single alerts don’t require immediate intervention. But the SOC still needs to know when it is being persistently targeted. Defender automatically flags emerging patterns, such as sustained activity from the same actor or technique, and, when warranted by a supporting scenario like ransomware, can trigger automatic attack disruption to contain active threats while analysts review.

For Copilot Studio builders, Defender extends the same protection to AI agents: real-time runtime protection helps prevent unsafe actions and prompt-injection attempts, and detections are automatically correlated and investigated, without moving data outside a trusted, industry-leading XDR. By embedding security into the runtime of AI agents, Microsoft Defender helps organizations embrace the full potential of Copilot Studio while maintaining the trust and control they need. Real-time protection during agent runtime is a foundational step in Microsoft’s journey to secure the future of AI agents, laying the groundwork for more advanced capabilities coming soon to Microsoft Defender.
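The correlation step described above can be pictured as building connected components over alerts that share entities (users, devices, IPs, apps): alerts linked by a common entity land in the same incident. This is a toy sketch with invented alert shapes to illustrate the idea, not Defender's actual correlation engine.

```python
from collections import defaultdict

def correlate(alerts):
    """Group alerts into incidents: alerts that share any entity end up in
    the same incident (connected components via union-find)."""
    parent = list(range(len(alerts)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    def union(i, j):
        parent[find(i)] = find(j)

    seen = {}  # entity -> first alert index that mentioned it
    for idx, alert in enumerate(alerts):
        for entity in alert["entities"]:
            if entity in seen:
                union(idx, seen[entity])
            else:
                seen[entity] = idx

    incidents = defaultdict(list)
    for idx in range(len(alerts)):
        incidents[find(idx)].append(alerts[idx]["id"])
    return sorted(incidents.values())
```

With this shape, an email alert and an endpoint alert that both involve the same user collapse into one incident, while an unrelated app alert stays separate.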
It reflects our belief that innovation and security go hand in hand. With this new capability, organizations can feel more confident using AI agents, knowing that Microsoft Defender is monitoring in real time to keep their environments protected.

Learn more:
- Read the blog to learn more about securing Copilot Studio agents
- Check the documentation to learn how Defender blocks agent tool invocation in real time
- Explore how to build and customize agents with Copilot Studio Agent Builder

From Traditional Security to AI-Driven Cyber Resilience: Microsoft’s Approach to Securing AI
By Chirag Mehta, Vice President and Principal Analyst, Constellation Research

AI is changing the way organizations work. It helps teams write code, detect fraud, automate workflows, and make complex decisions faster than ever before. But as AI adoption increases, so do the risks, many of which traditional security tools were not designed to address. Cybersecurity leaders are starting to see that AI security is not just another layer of defense. It is becoming essential to building trust, ensuring resilience, and maintaining business continuity.

Earlier this year, after many conversations with CISOs and CIOs, I saw a clear need to bring more attention to this topic. That led to my report on AI Security, which explores how AI-specific vulnerabilities differ from traditional cybersecurity risks and why securing AI systems calls for a more intentional approach.

Why AI Changes the Security Landscape

AI systems do not behave like traditional software. They learn from data instead of following pre-defined logic. This makes them powerful, but also vulnerable. For example, an AI model can:

- Misinterpret input in ways that humans cannot easily detect
- Be tricked into producing harmful or unintended responses through crafted prompts
- Leak sensitive training data in its outputs
- Take actions that go against business policies or legal requirements

These are not coding flaws. They are risks that originate from how AI systems process information and act on it. These risks become more serious with agentic AI. These systems act on behalf of humans, interact with other software, and sometimes with other AI agents. They can make decisions, initiate actions, and change configurations. If one is compromised, the consequences can spread quickly.

A key challenge is that many organizations still rely on traditional defenses to secure AI systems. While those tools remain necessary, they are no longer enough.
AI introduces new risks across every layer of the stack, including data, networks, endpoints, applications, and cloud infrastructure. As I explained in my report, the security focus must shift from defending the perimeter to governing the behavior of AI systems, the data they use, and the decisions they make.

The Shift Toward AI-Aware Cyber Resilience

Cyber resilience is the ability to withstand, adapt to, and recover from attacks. Meeting that standard today requires understanding how AI is developed, deployed, and used by employees, customers, and partners. To get there, organizations must answer questions such as:

- Where is our sensitive data going, and is it being used safely to train models?
- What non-human identities, such as AI agents, are accessing systems and data?
- Can we detect when an AI system is being misused or manipulated?
- Are we in compliance with new AI regulations and data usage rules?

Let’s look at how Microsoft has evolved its mature security portfolio to help protect AI workloads and support this shift toward resilience.

Microsoft’s Approach to Secure AI

Microsoft has taken a holistic and integrated approach to AI security. Rather than creating entirely new tools, it is extending existing products already used by millions to support AI workloads. These features span identity, data, endpoint, and cloud protection.

1. Microsoft Defender: Treating AI Workloads as Endpoints

AI models and applications are emerging as a new class of infrastructure that needs visibility and protection. Defender for Cloud secures AI workloads across Azure and other cloud platforms such as AWS and GCP by monitoring model deployments and detecting vulnerabilities.
- Defender for Cloud Apps extends protection to AI-enabled apps running at the edge
- Defender for APIs supports AI systems that use APIs, which are often exposed to risks such as prompt injection or model manipulation

Additionally, Microsoft has launched tools to support AI red-teaming, content safety, and continuous evaluation capabilities to ensure agents operate safely and as intended. This allows teams to identify and remediate risks such as jailbreaks or prompt injection before models are deployed.

2. Microsoft Entra: Managing Non-Human Identities

As organizations roll out more AI agents and copilots, non-human identities are becoming more common. These digital identities need strong oversight.

- Microsoft Entra helps create and manage identities for AI agents
- Conditional Access ensures AI agents only access the resources they need, based on real-time signals and context
- Privileged Identity Management manages, controls, and monitors AI agents' access to important resources within an organization

3. Microsoft Purview: Securing Data Used in AI

Purview plays an important role in securing both the data that powers AI apps and agents, and the data they generate through interactions.

- Data discovery and classification helps label sensitive information and track its use
- Data Loss Prevention policies help prevent leaks or misuse of data in tools such as Copilot or agents built in Azure AI Foundry
- Insider Risk Management alerts security teams when employees feed sensitive data into AI systems without approval

Purview also helps organizations meet transparency and compliance requirements as regulations like the EU AI Act take effect, extending the same policies they already use today to AI workloads without requiring separate configurations.

Here's a video that explains the above Microsoft security products:

Securing AI Is Now a Strategic Priority

AI is evolving quickly, and the risks are evolving with it.
Traditional tools still matter, but they were not built for systems that learn, adapt, and act independently. They also weren’t designed for the pace and development approaches AI requires, where securing from the first line of code is critical to staying protected at scale. Microsoft is adapting its security portfolio to meet this shift. By strengthening identity, data, and endpoint protections, it is helping customers build a more resilient foundation.

Whether you are launching your first AI-powered tool or managing dozens of agents across your organization, the priority is clear: secure your AI systems before they become a point of weakness. You can read more in my AI Security report and learn how Microsoft is supporting these efforts across its security portfolio.

Monthly news - August 2025
Microsoft Defender XDR Monthly news - August 2025 Edition

This is our monthly "What's new" blog post, summarizing product updates and various new assets we released over the past month across our Defender products. In this edition, we are looking at all the goodness from July 2025. Defender for Cloud has its own Monthly News post; have a look at their blog space.

Microsoft Defender

Microsoft Sentinel is moving to the Microsoft Defender portal to deliver a unified, AI-powered security operations experience. Many customers have already made the move. Learn how to plan your transition and take advantage of new capabilities in this blog post.

Introducing Microsoft Sentinel data lake. We announced a significant expansion of Microsoft Sentinel’s capabilities through the introduction of Sentinel data lake, now rolling out in public preview. Read this blog post for a look at some of Sentinel data lake’s core features.

(Public Preview) The GraphApiAuditEvents table in advanced hunting is now available for preview. This table contains information about Microsoft Entra ID API requests made to Microsoft Graph API for resources in the tenant.

(Public Preview) The DisruptionAndResponseEvents table, now available in advanced hunting, contains information about automatic attack disruption events in Microsoft Defender XDR. These events include both block and policy application events related to triggered attack disruption policies, and automatic actions that were taken across related workloads. Increase your visibility and awareness of active, complex attacks disrupted by attack disruption to understand the attacks' scope, context, impact, and actions taken.

Introducing Summary Rules Templates: Streamlining Data Aggregation in Microsoft Sentinel. Microsoft Sentinel’s new Summary Rules Templates offer a structured and efficient approach to aggregating verbose data, enabling security teams to extract meaningful insights while optimizing resource usage.
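As a hedged example of putting the new tables to work, a KQL query over DisruptionAndResponseEvents could be submitted programmatically through the Microsoft Graph advanced hunting endpoint (POST /v1.0/security/runHuntingQuery, which requires the ThreatHunting.Read.All permission). The Timestamp and ActionType columns below are assumed from the standard advanced hunting schema; verify them against the published table schema before relying on this.

```python
import json

# KQL over the DisruptionAndResponseEvents table: count recent attack
# disruption events by action type. Column names are assumed, not verified.
HUNTING_QUERY = """
DisruptionAndResponseEvents
| where Timestamp > ago(7d)
| summarize Events = count() by ActionType
| order by Events desc
"""

def build_hunting_request(query: str) -> str:
    """Serialize the JSON body for POST /v1.0/security/runHuntingQuery.
    The caller still needs an Authorization header with a Graph token."""
    return json.dumps({"Query": query.strip()})
```

Sending `build_hunting_request(HUNTING_QUERY)` with an authenticated HTTP client returns the aggregated rows, which you could chart or alert on to spot sustained disruption activity.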
Automating Microsoft Sentinel: Playbook Fundamentals. This is the third entry of the blog series on automating Microsoft Sentinel. In this post, we start talking about playbooks, which can be used for automating just about anything.

Customer success story: Kuwait Credit Bank boosts threat detection and response with Microsoft Defender. To modernize its security posture, the bank unified its security operations under Microsoft Defender XDR, integrating Microsoft Sentinel and Microsoft Purview.

Microsoft Defender for Cloud Apps

App Governance is now also available in Brazil, Sweden, Norway, Switzerland, South Africa, South Korea, the United Arab Emirates, and Asia Pacific. For more details, see our documentation.

Updated network requirements for GCC and Gov customers. To support ongoing security enhancements and maintain service availability, Defender for Cloud Apps now requires updated firewall configurations for customers in GCC and Gov environments. To avoid service disruption, take action by August 25, 2025, and update your firewall configuration as described here.

Discover and govern ChatGPT and other AI apps accessing Microsoft 365 with Defender for Cloud Apps. In this blog post, we’ll explore how Defender for Cloud Apps helps security teams gain enhanced visibility into the permissions granted to AI applications like ChatGPT as they access Microsoft 365 data. We’ll also share best practices for app governance to help security teams make informed decisions and take proactive steps to enable secure usage of AI apps accessing Microsoft 365 data.

Microsoft Defender for Endpoint

(General Availability) Microsoft Defender Core service is now generally available on Windows Server 2019 or later, which helps with the stability and performance of Microsoft Defender Antivirus.

Microsoft Defender for Identity

Expanded coverage in ITDR deployment health widget.
With this update, the widget also includes deployment status for ADFS, ADCS, and Entra Connect servers, making it easier to track and ensure full sensor coverage across all supported identity infrastructure.

Time limit added to Recommended test mode. The Recommended test mode configuration on the Adjust alert thresholds page now requires you to set an expiration time (up to 60 days) when enabling it. The end time is shown next to the toggle while test mode is active. For customers who already had Recommended test mode enabled, a 60-day expiration was automatically applied.

Identity scoping is now available in Governance environments. Organizations can now define and refine the scope of Defender for Identity monitoring and gain granular control over which entities and resources are included in security analysis. For more information, see Configure scoped access for Microsoft Defender for Identity.

New security posture assessments for unmonitored identity servers. Defender for Identity has three new security posture assessments that detect when Microsoft Entra Connect, Active Directory Federation Services (ADFS), or Active Directory Certificate Services (ADCS) servers are present in your environment but aren't monitored. Learn more in our documentation.

Microsoft Defender for Office 365

Protection against multi-modal attacks with Microsoft Defender. This blog post showcases how Microsoft Defender can detect and correlate certain hybrid, multi-modal attacks that span across email, Teams, identity, and endpoint vectors, and how these insights surface in the Microsoft Defender portal.

Users can report external and intra-org Microsoft Teams messages from chats, standard and private channels, and meeting conversations to Microsoft, the specified reporting mailbox, or both via user reported settings.

Microsoft Security Blogs

Frozen in transit: Secret Blizzard’s AiTM campaign against diplomats.
Microsoft Threat Intelligence has uncovered a cyberespionage campaign by the Russian state actor we track as Secret Blizzard that has been ongoing since at least 2024, targeting embassies in Moscow using an adversary-in-the-middle (AiTM) position to deploy their custom ApolloShadow malware.

Sploitlight: Analyzing a Spotlight-based macOS TCC vulnerability. Microsoft Threat Intelligence has discovered a macOS vulnerability, tracked as CVE-2025-31199, that could allow attackers to steal private data of files normally protected by Transparency, Consent, and Control (TCC), including the ability to extract and leak sensitive information cached by Apple Intelligence.

Disrupting active exploitation of on-premises SharePoint vulnerabilities. Microsoft has observed two named Chinese nation-state actors, Linen Typhoon and Violet Typhoon, exploiting vulnerabilities targeting internet-facing SharePoint servers.

Discover risks in AI model providers and MCP servers with Microsoft Defender
AI model providers and Model Context Protocol (MCP) servers are being adopted at an unprecedented pace. As these new AI tools become deeply integrated into business operations and bring endless opportunities for productivity, security must not be ignored. MCP and AI model providers enable seamless communication between AI agents, tools, and models, but this convenience comes with significant security risks. MCP can expose sensitive information to unverified context providers, opening the door to data leaks, malicious agent chaining, and supply chain attacks, all without consistent logging or enforcement.

Microsoft Defender is expanding its capabilities to protect enterprise use of AI models and MCP. Building on recent enhancements in Microsoft Defender for Cloud, which now provides visibility into containers running MCP across AWS, GCP, and Azure, we're now adding support in Microsoft Defender for Cloud Apps to help security teams discover, manage, and protect not only generative AI apps, but also AI model providers and MCP servers. As AI tools spread, so does shadow AI: unauthorized or unmanaged use of AI tools that bypasses IT and security controls.

MCP servers and AI model providers explained

MCP servers take productivity a step further by enabling AI to operate in real-time context. As intelligent intermediaries, MCP servers connect models to live enterprise data and applications, standardizing interactions and removing silos between tools, systems, and information. This unlocks the full potential of AI: not just generating insights, but acting autonomously, adapting to business conditions, and streamlining operations without a human in the loop.

SaaS-based AI model providers are services that deliver sophisticated AI capabilities through simple APIs, allowing companies to integrate intelligence without massive infrastructure investments or specialized machine learning expertise.
However, as productivity evolves, unauthorized or unmanaged use of AI tools that bypasses IT and security controls evolves too. Software engineering teams can easily point the AI model provider or MCP server setting in their AI code assistant at an unsanctioned one. These integrations can inadvertently expose sensitive information, violate compliance policies, or introduce threats like tool shadowing and prompt injections. As AI becomes deeply embedded across the enterprise, visibility and governance must keep pace to ensure security is never compromised.

How Microsoft Defender secures the use of AI model providers and MCP servers

Based on an extensive customer survey with large enterprises, we understand the first step to securing AI use is gaining visibility into which of these services are in use across the organization. The cloud app catalog already provides a comprehensive list of over 35,000 discoverable cloud apps. It helps analyze traffic logs, gain visibility into cloud use, and assess the risk posture of these apps to manage security and compliance effectively. The catalog includes detailed risk parameters, such as data handling practices, authentication methods, and integration scopes. Starting today, the catalog has expanded to include AI model providers and MCP servers, enabling security teams to assess usage patterns and understand the risk posture of each. In addition to this expanded catalog, and based on customer feedback, we aim to provide more security insights into AI model providers and MCP servers to help reduce the risks they introduce.

Furthermore, if you're hosting your own MCP server, AI Security Posture Management in Defender for Cloud can help you discover all MCP servers hosted across multi-cloud environments, identify misconfigurations and vulnerabilities in the AI applications and agents using them, and prioritize remediation based on attack path analysis.
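The discovery workflow described above, matching traffic logs against a catalog of known AI services and their risk posture, can be sketched in miniature. The catalog entries, domains, and risk scores below are entirely made up for illustration; Defender's real cloud app catalog covers 35,000+ apps with far richer risk attributes.

```python
# Illustrative mini-catalog: destination domain -> (category, risk score 0-10).
# All entries here are hypothetical examples, not real catalog data.
AI_CATALOG = {
    "api.openai.com": ("AI model provider", 4),
    "api.anthropic.com": ("AI model provider", 4),
    "mcp.example-tools.dev": ("MCP server", 8),  # hypothetical unsanctioned server
}

def discover_ai_usage(traffic_log, risk_threshold=6):
    """Scan firewall/proxy log entries for known AI endpoints and flag
    those whose catalog risk score meets the threshold."""
    findings = []
    for entry in traffic_log:
        match = AI_CATALOG.get(entry["destination"])
        if match:
            category, risk = match
            findings.append({
                "user": entry["user"],
                "destination": entry["destination"],
                "category": category,
                "risk": risk,
                "flagged": risk >= risk_threshold,
            })
    return findings
```

The same shape scales up naturally: feed in parsed proxy logs, look each destination up in the expanded catalog, and route flagged findings into governance policies.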
Microsoft Defender now helps you discover, monitor, and govern the use of AI model providers and MCP servers, giving you visibility, control, and protection against shadow AI risks with automated policies and real-time enforcement.

Learn more:
- How to use Microsoft Defender for Cloud Apps to stay safe in the Gen AI era
- Check out our website to learn more about Defender for Cloud Apps
- Not a customer yet? Start a free trial today
- Visit our Cloud App Catalog documentation
- Learn more about AI Security Posture Management (AI-SPM)

Monthly news - July 2025
Microsoft Defender XDR Monthly news - July 2025 Edition

This is our monthly "What's new" blog post, summarizing product updates and various new assets we released over the past month across our Defender products. In this edition, we are looking at all the goodness from May 2025. Defender for Cloud has its own Monthly News post; have a look at their blog space.

Microsoft Defender

(General Availability) In advanced hunting, Microsoft Defender portal users can now use the adx() operator to query tables stored in Azure Data Explorer. You no longer need to go to log analytics in Microsoft Sentinel to use this operator if you're already in Microsoft Defender. Learn more on our docs.

Introducing TITAN powered recommendations in Security Copilot guided response. This blog post explains the power of Guided Response with Security Copilot and the integration of Threat Intelligence Tracking via Adaptive Networks (TITAN).

(General Availability) Case management now supports multiple tenants in the Microsoft Defender experience. We’re excited to share that multi-tenant support is now generally available in our case management experience. This new capability empowers security teams to view and manage incidents across all their tenants from a single, unified interface, directly within the Microsoft Defender Multi-Tenant (MTO) portal. You can read this blog for more information.

Microsoft Defender for Cloud Apps

(General Availability) The Behaviors data type significantly enhances overall threat detection accuracy by reducing alerts on generic anomalies and surfacing alerts only when observed patterns align with real security scenarios. This data type is now generally available. Learn more on how to use Behaviors and new detections in this blog post.

New Dynamic Threat Detection model. Defender for Cloud Apps' new dynamic threat detection model continuously adapts to the ever-changing SaaS apps threat landscape.
This approach ensures your organization remains protected with up-to-date detection logic without the need for manual policy updates or reconfiguration.

Microsoft Defender for Endpoint

(General Availability) Global exclusions on Linux are now generally available. We just published a new blog post that discusses how you can manage global exclusion policies for Linux across both AV and EDR.

(General Availability) Support for Alma Linux and Rocky Linux is now generally available in Defender for Endpoint on Linux.

(General Availability) Behavior monitoring on macOS is now generally available. Read this blog post to learn more about it and how it improves the early detection and prevention of suspicious and malicious activities targeting macOS users.

(Public Preview) Selective Isolation allows you to exclude specific devices, processes, IP addresses, or services from isolation actions. More details in the blog post "Maintain connectivity for essential services with selective network isolation".

Microsoft Defender for Identity

(Public Preview) Domain-based scoping for Active Directory is now available in public preview. This new capability enables SOC analysts to define and refine the scope of Defender for Identity monitoring, providing more granular control over which entities and resources are included in security analysis. Read this announcement blog for more details.

(Public Preview) Defender for Identity is extending its identity protection to Okta identities, in addition to the already robust protection for on-premises Active Directory and Entra ID identities. For more details, have a look at this announcement blog post.

Microsoft Defender for Office 365

Introducing the Defender for Office 365 ICES Vendor Ecosystem, a unified framework that enables seamless integration with trusted third-party vendors. Learn more about this exciting announcement in this blog post.
(General Availability) Auto-Remediation of malicious messages in Automated Investigation and Response is now generally available. Have a look at this detailed blog post on how it works.

Mail bombing is now an available Detection technology value in Threat Explorer, the Email entity page, and the Email summary panel. Mail bombing is also an available DetectionMethods value in Advanced Hunting. For more information, see MC1096885.

AI-powered Submissions Response introduces generative AI explanations for admin email submissions to Microsoft. For more information, see Submission result definitions.

Microsoft Security Exposure Management

(Public Preview) Enhanced External Attack Surface Management integration with Exposure Management. This new integration allows you to incorporate detailed external attack surface data from Defender External Attack Surface Management into Exposure Management. Learn more on our docs.

Microsoft Security Blogs

Unveiling RIFT: Enhancing Rust malware analysis through pattern matching. As threat actors adopt Rust for malware development, RIFT, an open-source tool, helps reverse engineers analyze Rust malware, solving challenges in the security industry.

Jasper Sleet: North Korean remote IT workers’ evolving tactics to infiltrate organizations. Since 2024, Microsoft Threat Intelligence has observed remote IT workers deployed by North Korea leveraging AI to improve the scale and sophistication of their operations, steal data, and generate revenue for the North Korean government.

Threat Analytics (access to the Defender portal needed)

Tool Profile: Qilin ransomware. Qilin (also called Agenda) is a ransomware as a service (RaaS) offering that was first observed in 2022. It has been used by multiple cybercriminal groups, including Pistachio Tempest, Octo Tempest, and most recently Moonstone Sleet. While the ransom attacks appear to be opportunistic rather than targeted, they have had notable impacts against healthcare and media companies.
Activity Profile: Emerald Sleet using QR codes for credential harvesting. In May 2025, Microsoft Threat Intelligence observed the North Korean threat actor that Microsoft tracks as Emerald Sleet using QR (quick response) codes designed to lure recipients to credential-harvesting sites in phishing emails.

Vulnerability Profile: CVE-2025-34028 – Commvault Command Center Innovation Release. According to the National Institute of Standards and Technology (NIST), “the Commvault Command Center Innovation Release allows an unauthenticated actor to upload ZIP files that represent install packages that, when expanded by the target server, are vulnerable to path traversal vulnerability that can result in Remote Code Execution via malicious JSP. This issue affects Command Center Innovation Release: 11.38.0 to 11.38.20. The vulnerability is fixed in 11.38.20 with SP38-CU20-433 and SP38-CU20-436 and also fixed in 11.38.25 with SP38-CU25-434 and SP38-CU25-438.”

Activity Profile: Forest Blizzard trojanizes Ukraine software to deliver new variant of BlipSlide downloader. Since March, Microsoft Threat Intelligence has observed the Russian military intelligence threat actor Forest Blizzard infect devices in Ukraine with a new variant of BlipSlide malware, a downloader that the threat actor uses for command and control (C2).

Actor Profile: Storm-2416. The threat actor that Microsoft tracks as Storm-2416 is a nation-state activity group based out of China. Storm-2416 is known to primarily target information technology (IT), government, and other business entities in Europe, Asia, Oceania, and South and North America.

Activity Profile: Suspicious OAuth applications used to retrieve and send emails. In late February 2025, Microsoft discovered a set of malicious Open Authorization (OAuth) applications, including one that impersonated Outlook, that can retrieve and send emails.

Actor Profile: Storm-0126.
The threat actor that Microsoft tracks as Storm-0126 is a nation-state activity group based out of China. Storm-0126 is known to primarily target defense industry enterprises, public institutions, research institutes, and military-industrial organizations worldwide.

Actor Profile: Storm-2001. Microsoft assesses with high confidence that the threat actor Microsoft tracks as Storm-2001 is a Russian state-sponsored actor. It is known to primarily target defense organizations in the North Atlantic Treaty Organization (NATO) alliance, specifically member states that form NATO’s Enhanced Forward Presence (EFP) program, recent NATO members, and other related organizations that engage in NATO-related communications and planning.

Activity Profile: Storm-2561 distributes trojanized SonicWall NetExtender SilentRoute. In late May 2025, Storm-2561 began distributing malware that Microsoft detects as SilentRoute. The malware is a trojanized version of SonicWall’s SSL VPN NetExtender application that transmits the user’s VPN configuration data to a hardcoded IP address.

Bring AI out of the shadows with agents for Microsoft 365 Copilot Chat
For IT admins and Microsoft 365 admins

Overview

Shadow AI is almost certainly happening across your organization—whether you can see it or not. Employees are using tools like ChatGPT and Notion AI to get work done, even without organizational knowledge or approval. This creates real risks: data leakage, compliance violations, and a lack of visibility into how employees are using artificial intelligence. Fortunately, IT admins are in a unique position to fix the problem at its core. This article is a practical playbook to help IT admins lead the charge toward responsible AI use in their organizations by empowering secure, compliant, and easy-to-manage agents for Microsoft 365 Copilot Chat.

What is shadow AI?

Like shadow IT, the term ‘shadow AI’ exists for a reason: it refers to unsanctioned, often hidden, use of AI tools. In the shadows, artificial intelligence can be hard to detect and even harder to govern. Tools can be browser-based, embedded in SaaS apps, or used on personal devices. Controls that mitigate shadow IT—like app blocking or firewall rules—don’t necessarily translate to AI use. Both shadow IT and shadow AI involve technical and behavioral elements; however, unauthorized AI use presents deeper behavioral challenges than unauthorized tools alone: users make decisions and bypass governance in ways that are harder to detect and control. While employees may not want to go rogue or bypass IT—and they generally don’t want to put the organization at risk—they do want to get their work done efficiently. They turn to public AI tools when they can’t find the capabilities they need inside the tools they have permission to use. Agents for Microsoft 365 Copilot Chat give you a way to lead AI use into the light and meet your users’ needs with modern AI business tools.
When you build and deploy task-specific, data-grounded chat experiences that live inside Microsoft 365, users get the fast, relevant answers they’re looking for without having to step into the shadows and leave the secure environment you manage. These agents are part of the broader Microsoft 365 Copilot ecosystem and are designed to automate and execute business processes directly within Copilot Chat.

Should you ignore or even allow shadow AI?

When employees use public AI tools without oversight, they create risks that are harder to detect, harder to govern, and harder to reverse. For IT admins, the stakes are high across operational, security, and technical risks:

Loss of visibility and control
You can’t protect what you can’t see. Shadow AI obscures oversight; it’s harder to track usage or enforce policies for tools used outside your environment.
No centralized monitoring = no control. Without a unified view, you can’t troubleshoot issues, optimize usage, or step in when something goes wrong.
Shadow data silos emerge. Generative AI content created outside your tenant isn’t retained or governed, which complicates lifecycle management, legal holds, and compliance requests.

Security and compliance risks
Enterprise-grade protections are lacking. Most public AI tools don’t support conditional access, audit logs, or data loss prevention (DLP) policies, leaving you with blind spots and increased risk of data leaks.
Sensitive data exposure. Employees may unknowingly input proprietary or regulated data into public models, risking violations of GDPR, HIPAA, or internal policies.
Compliance gaps. If tools aren’t tracked or documented, they increase the burden of proving compliance and can become major liabilities during audits or regulatory reviews.

IT and governance challenges
IT is out of the loop. Adoption of unauthorized AI tools sidelines IT, preventing teams from recommending secure, supported alternatives or aligning tools with organizational standards.
Tool sprawl = more support tickets. Unapproved tools often lack integration with existing systems, creating support burdens and increasing the risk of misconfigurations.

Bottom line: Allowing or ignoring shadow AI will make it much harder to manage later. That’s why Copilot Chat agents, combined with strong governance and user education, are such a powerful response: they give you a way to meet end user demand without losing control.

What IT admins are up against

When it comes to eradicating rogue AI, admins have their work cut out for them. Here’s a summary of how activating Copilot Chat agents at your organization can help stem the tide:

Loss of visibility and control (employees use unsanctioned AI tools) → Reframe shadow AI as a signal: offer sanctioned tools that meet user needs and bring AI usage into the light.
Data governance gaps (unapproved tools bypass DLP and compliance policies) → Keep data in your tenant: Copilot agents respect Microsoft 365 compliance, identity, and data boundaries.
Inconsistent AI use across teams (different tools create fragmented workflows) → Centralize AI access: deploy agents across Teams and Microsoft 365 to unify usage.
Security and compliance risks (shadow tools may not meet regulatory standards) → Use enterprise-grade protection: Copilot agents are authenticated with Azure AD and governed by Microsoft Purview.
Lack of deployment clarity (admins may not know where to start) → Follow a clear blueprint: this blog outlines steps for setup, governance, and scaling.
Missed innovation opportunities (IT is seen as a blocker, not a partner) → Support safe innovation: let business units build AI chat agents with IT guardrails in place.
Copilot Chat agents remove the roadblocks to getting value from AI

Microsoft’s chat agents aren’t just another AI tool—they’re designed to work the way IT works.
Secure by design: Agents run inside your Microsoft 365 tenant and authenticate through Azure AD.
Compliant by default: They respect DLP, audit, and retention policies through Microsoft Purview.
Customizable and governable: You can define access, data sources, and usage policies.
Easy to deploy: Agents live inside Teams and Microsoft apps, so users don’t need to install anything new.

Copilot Chat agents strengthen governance

While Copilot for Microsoft 365 helps users work more efficiently inside apps like Word, Excel, and Teams, Copilot’s AI agents go a step further. They give IT the ability to create task-specific, role-based, and data-grounded AI experiences that directly replace the kinds of tools employees might otherwise seek out on their own.

Key deployment benefits for IT admins
Visibility: Know who’s using AI, how, and with what data.
Control: Define and enforce usage policies.
Compliance: Align AI use with regulatory standards.
Efficiency: Reduce support tickets with self-service agents.
Innovation: Empower business units without losing oversight.

Take the next step

As with shadow IT, you may not get rid of shadow AI completely or overnight. But you can meet it head-on with tools that work for your users and comply with your policies. Start by deploying a few AI chat agents in high-impact areas. Use the resources in this article to guide your rollout. With Copilot Chat agents, you’re not just solving a technical problem. You’re leading your organization toward safer, smarter AI adoption.

Tools that make it easier

When it comes to Microsoft 365 deployments, you’re never alone.
FastTrack for Microsoft 365 offers a full set of resources to help you learn about, build, manage, and instruct end users on Copilot Chat agents:

Credentialed access, sign-in required:
Microsoft 365 advanced deployment guides and assistance
Microsoft 365 Copilot onboarding hub
Microsoft 365 Copilot: Quickstart, Copilot Chat licensing

Open access, no sign-in required:
Get started with Microsoft 365 Copilot extensibility
Microsoft 365 Copilot ADG: Streamlining your Copilot journey (video)
Copilot Chat Success Kit – Microsoft Adoption
Microsoft Copilot AI setup and usage guides
AI in business: Artificial intelligence tools & solutions (blog)
Request assistance from FastTrack

Deployment blueprint: Get started today

Remember: You don’t need to roll out everything at once. Start small, build momentum, and scale responsibly. Here’s a blueprint that will get you to the finish line:

Copilot Chat agent deployment checklist

Step 1: Prepare your environment
☐ Set up Copilot Studio and review licensing.
☐ Create Power Platform environments that reflect your data boundaries and governance needs.
☐ Identify early declarative agent use cases (e.g., HR FAQs, IT help desk).
Note: Only declarative agents are currently supported in Copilot Chat. Agents that access tenant data (e.g., SharePoint, Graph) require pay-as-you-go billing.

Step 2: Define governance policies
☐ Use role-based access control (RBAC) to manage who can create, publish, and use agents.
☐ Apply naming conventions, approval workflows, and publishing guidelines.
☐ Set up guardrails for data access, agent behavior, and knowledge sources.
☐ Assign maker permissions via Microsoft Entra groups or Copilot Studio user licenses.

Step 3: Deploy and monitor
☐ Use the Microsoft admin center and Power Platform admin center to manage billing and access.
☐ Monitor usage with audit logs, analytics, and the Copilot Control System.
☐ Identify which teams are still using unauthorized AI tools and guide them toward approved Copilot agents.

Step 4: Support and scale
☐ Offer training, templates, and office hours to support agent creators and users.
☐ Establish a Center of Excellence (CoE) to share best practices and governance.
☐ Highlight successful use cases to drive adoption and build momentum.
☐ Encourage feedback loops to refine agent behavior and expand scenarios.

Shadow AI prevention checklist

What else should you do to discourage shadow AI? Here's a handy checklist of actions to take:

Data protection
☐ Apply Microsoft Purview DLP policies to monitor and restrict sensitive data.
☐ Use sensitivity labels and encryption to protect data at rest and in transit.
☐ Set up conditional access policies to limit AI tool usage by role, device, or location.

Acceptable use
☐ Publish clear guidance on approved AI tools and data usage.
☐ Include AI-specific clauses in acceptable use and security policies.
☐ Reinforce policies through onboarding, training, and regular reminders.

Monitoring and detection
☐ Use Microsoft Defender for Cloud Apps to detect unsanctioned AI usage.
☐ Analyze browser traffic and app usage patterns for high-risk behavior.
☐ Set up alerts for uploads to known AI endpoints (e.g., ChatGPT, Claude).

Education and empowerment
☐ Run awareness campaigns about shadow AI risks and approved alternatives.
☐ Offer training on how to use Copilot and Copilot Chat agents effectively.
☐ Create a feedback loop for users to request new AI capabilities.

Internal partnerships
☐ Collaborate with HR, legal, and other teams to understand AI needs.
☐ Support business units in building Copilot Chat agents with IT oversight.
☐ Use shadow AI behavior as a signal for unmet needs and prioritize accordingly.

Governance alignment
☐ Align Copilot deployment with your organization’s responsible AI principles.
☐ Document how Copilot Chat agents support ethical and regulatory standards.
☐ Use audit logs and analytics to support transparency and accountability.

Block Access to Unsanctioned Apps with Microsoft Defender ATP & Cloud App Security
The Microsoft Cloud App Security and Microsoft Defender ATP teams have partnered to build a Microsoft Shadow IT visibility and control solution. After Shadow IT Discovery for endpoint users was officially announced earlier this year, we are now ready to move forward to the next phase of this integration and announce that the preview of the functionality to block access to unsanctioned apps, leveraging the Microsoft Defender network protection capability, is now publicly available.
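Under the hood, this integration pushes the domains of unsanctioned apps to Defender for Endpoint as custom network indicators, which network protection then enforces on onboarded devices. The sketch below shows roughly what such an indicator looks like, following the field names of the public Defender for Endpoint Indicators API; the helper function and example domain are illustrative and not part of any Microsoft SDK.

```python
# Sketch of the payload submitted to the Defender for Endpoint Indicators API
# (POST https://api.securitycenter.microsoft.com/api/indicators) when a domain
# should be blocked by network protection. Field names follow the public API
# schema; the helper function and example domain are illustrative only.

def build_block_indicator(domain: str, title: str, description: str) -> dict:
    """Build a custom 'DomainName' indicator with a Block action."""
    return {
        "indicatorValue": domain,       # the unsanctioned app's domain
        "indicatorType": "DomainName",  # network protection enforces domain indicators
        "action": "Block",              # alternatives include "Audit" and "AlertAndBlock"
        "title": title,
        "description": description,
        "severity": "Medium",
    }

payload = build_block_indicator(
    "unsanctioned-app.example.com",
    "Unsanctioned cloud app",
    "Marked unsanctioned in Microsoft Cloud App Security",
)
print(payload["indicatorType"], payload["action"])  # DomainName Block
```

In practice you don't build these payloads yourself: marking an app as unsanctioned in the Cloud App Security portal generates equivalent indicators automatically, and any direct API call would additionally need an OAuth bearer token for the Defender for Endpoint API.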
Microsoft Defender XDR Monthly news - June 2025 Edition

This is our monthly "What's new" blog post, summarizing product updates and various new assets we released over the past month across our Defender products. In this edition, we are looking at all the goodness from May 2025. Defender for Cloud has its own Monthly News post; have a look at their blog space.

Unified Security Operations Platform: Microsoft Defender XDR & Microsoft Sentinel

From on-premises to cloud: Graph-powered detection of hybrid attacks with the Microsoft exposure graph. In this blog, we explain how the exposure graph, an integral part of our pre-breach security exposure solution, supercharges our post-breach threat protection capabilities to detect and respond to such multi-faceted threats.
(Public Preview) A unified detection rules list that includes both analytics rules and custom detections is in public preview. Learn more in our docs.
The Best of Microsoft Sentinel — Now in Microsoft Defender. We are proud to share that the most advanced and integrated SIEM experience from Microsoft Sentinel is now fully available within the Microsoft Defender portal as one unified experience.
(Generally Available) Multi-workspace support for single and multiple tenants is now generally available.
(Public Preview) Case management is now available for the Defender multitenant portal. For more information, see View and manage cases across multiple tenants in the Microsoft Defender multitenant portal.
(Public Preview) You can now highlight your security operations achievements and the impact of Microsoft Defender using the unified security summary. For more information, see Visualize security impact with the unified security summary.
(Public Preview) New Microsoft Teams table: The MessageEvents table contains details about messages sent and received within your organization at the time of delivery.
(Public Preview) New Microsoft Teams table: The MessagePostDeliveryEvents table contains information about security events that occurred after the delivery of a Microsoft Teams message in your organization.
(Public Preview) New Microsoft Teams table: The MessageUrlInfo table contains information about URLs sent through Microsoft Teams messages in your organization.
The unified IdentityInfo table in advanced hunting now includes the largest possible set of fields common to both the Defender and Azure portals.

Microsoft Defender for Endpoint

(Webinar - YouTube Link) Secure Your Servers with Microsoft's Server Protection Solution. This webinar offers an in-depth exploration of Microsoft Defender for Endpoint on Linux.
Defender for Endpoint successfully passes the AV-Comparatives 2025 Anti-Tampering Test.
Discover how automatic attack disruption protects critical assets while ensuring business continuity.

Microsoft Defender for Office 365

Part 2: Build custom email security reports and dashboards with workbooks in Microsoft Sentinel
New deployment guide: Quickly configure Microsoft Teams protection in Defender for Office 365 Plan 2
New SecOps guide: Security Operations Guide for Teams protection in Defender for Office 365
Video - Ninja Show: Advanced Threat Detection with Defender XDR Community Queries
Video - Mastering Microsoft Defender for Office 365: Configuration Best Practices
Video - Ninja Show: Protecting Microsoft Teams with Defender for Office 365
This blog discusses the new Defender for Office 365 Language AI for Phish Model.
SafeLinks Protection for Links Generated by M365 Copilot Chat and Office Apps

Microsoft Defender for Cloud Apps

New Applications inventory page now available in Defender XDR.
The new Applications page in Microsoft Defender XDR provides a unified inventory of all SaaS and connected OAuth applications across your environment. For more information, see Application inventory overview.
The Cloud app catalog page has been revamped to meet security standards. The new design includes improved navigation, making it easier for you to discover and manage your cloud applications.
Note: As part of our ongoing convergence process across Defender workloads, Defender for Cloud Apps SIEM agents will be deprecated starting November 2025. Learn more.

Microsoft Defender for Identity

(Public Preview) Expanded new sensor deployment support for domain controllers. Learn more.
Active Directory Service Accounts Discovery Dashboard. Learn more.
Improved visibility into new sensor eligibility on the Activation page. The Activation page now displays all servers from your device inventory, including those not currently eligible for the new Defender for Identity sensor.
Note: The local administrators collection feature (using SAM-R queries) will be disabled.
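The new advanced hunting tables called out above (such as MessageUrlInfo) can also be queried programmatically through the Microsoft 365 Defender advanced hunting API. A minimal sketch of how such a request body is built follows; the KQL assumes the table's Timestamp and Url columns, so verify exact column names against the schema reference in the Defender portal before relying on it.

```python
# Illustrative sketch: building a request body for the Microsoft 365 Defender
# advanced hunting API (POST https://api.security.microsoft.com/api/advancedhunting/run).
# The KQL below assumes the MessageUrlInfo table exposes Timestamp and Url
# columns; confirm the schema in the Defender portal before use.
import json

KQL = """
MessageUrlInfo
| where Timestamp > ago(1d)
| project Timestamp, Url
| take 100
""".strip()

def build_hunting_request(query: str) -> str:
    """Serialize the KQL query into the JSON body the 'run' endpoint expects."""
    return json.dumps({"Query": query})

body = build_hunting_request(KQL)
print(body[:30])
```

Sending the body requires an app registration with advanced hunting permissions and a bearer token; the same query can be run interactively in the portal's advanced hunting page without any code.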
Microsoft Security Blogs

Analyzing CVE-2025-31191: A macOS security-scoped bookmarks-based sandbox escape
Marbled Dust leverages zero-day in Output Messenger for regional espionage
Lumma Stealer: Breaking down the delivery techniques and capabilities of a prolific infostealer
New Russia-affiliated actor Void Blizzard targets critical sectors for espionage
Defending against evolving identity attack techniques

Threat Analytics (Access to the Defender Portal needed)

Activity profile - AITM campaign with brand impersonated OAUTH applications
Threat overview: SharePoint Server and Exchange Server threats
Vulnerability profile: CVE-2025-24813 – Apache Tomcat Path Equivalence Vulnerability
Actor profile: Storm-0593 [TA update]
Actor profile: Storm-0287
Activity Profile: Marbled Dust leverages zero-day to conduct regional espionage [TA update]
Technique profile: ClickFix technique leverages clipboard to run malicious commands
Technique profile: LNK file UI feature abuse
Technique profile: Azure Blob Storage threats
Activity profile: Lumma Stealer: Breaking down the delivery techniques and capabilities of a prolific infostealer
Vulnerability profile - CVE-2025-30397
Activity profile: Recent OSINT trends in information stealers