Defender Experts for XDR
Charting the Future of SOC: Human and AI Collaboration for Better Security
Co-authors: Sylvie Liu, Principal Product Manager; Rajiv Bharadwaja, Principal Software Engineering Manager; Abhishek Kumar, Principal Group Manager, Security Research & Operations

Security operations centers are under pressure from unprecedented scale and complexity. Speed, precision, and consistency matter more than ever, and AI is everywhere, but hype alone doesn’t solve the challenge. This blog shares our journey and insights from building autonomous AI agents for MDR operations and explores how the shift to a GenAI-powered SOC redefines collaboration between humans and AI. Beyond our managed services, Microsoft Defender Experts strives to be a trusted partner in SOC evolution, helping customers across the broader security ecosystem anticipate process changes, plan for upskilling, and adopt agentic workflows with confidence.

From Vision to Reality: Building the SOC of the Future

Attackers are evolving at unprecedented speed, using AI to outpace defenses at scale. Defender Experts is pioneering the transformation to build the SOC of the future by integrating advanced AI capabilities into our SOC workflows, which is critical for today’s threat landscape. We’ve seen AI deliver real results: in an earlier blog, we shared how Defender Experts applies AI to cut through noise without compromising on detecting real threats, enabling 50% of noise to be triaged automatically with high precision.

Autonomous AI agents are foundational to the SOC of the future. Our vision is a predictive, adaptive model where agentic AI and automation remove manual toil, accelerate contextual insight, and execute both single tasks and complex workflows. Analysts are elevated, acting as orchestrators of governed action, driving high-impact decisions, and continuously tuning the system for transparency and trust. Agents handle repetitive, time-intensive tasks, while humans remain the final authority for strategic outcomes.
Together, this creates a SOC that moves from reactive alert handling to proactive, explainable defense that is always auditable and under human governance.

How Microsoft Defender Experts is Pioneering This Shift

Defender Experts builds autonomous AI agents with expert knowledge, expert-defined guardrails, and human-in-the-loop validation to deliver structured, trustworthy outputs that accelerate investigations without compromising quality. These AI agents are designed to drive efficiency and consistency across our MDR operations, helping us respond to threats faster and with confidence. As we advance this model, we’re not only improving speed and precision; we’re redefining our security operations. That means rethinking SOC analyst roles, skill composition, workflow design, tooling support, the accompanying automation, and the evaluation and monitoring systems needed to maintain trust.

Abhishek Kumar, lead of the Defender Experts security operations team, is deeply engaged in this transformation as we build the GenAI-powered SOC. From Abhishek’s perspective: “This is an exciting era for anyone in security research and operations. We are seeing a monumental shift where security analysts and threat hunters are elevating their role from handling routine tasks to delivering high-value insights. AI agents are rapidly reducing analyst fatigue and freeing up essential time, allowing experts to focus on critical thinking and contextual analysis of incidents.”

Agents are not just a productivity leap; they’re enabling analysts and hunters to better investigate emerging and hidden threats, develop more hypotheses, and connect clues to unravel complex campaigns. Time once spent on repetitive work is now devoted to advanced tasks like posture data analysis, traversing security graphs, and using cross-product intelligence to uncover novel threats and threat actor infrastructure.
Autonomous AI agents also help by reducing cognitive load, letting humans interact with agents to achieve specific outcomes. For example, among hundreds of login attempts from unfamiliar locations, only one or two may be worth deeper investigation because they carry additional context, and the agent can surface those quickly. Similarly, an endpoint process tree that would take a human significant effort to analyze can be examined much faster by an agent to spot suspicious anomalies. To maximize the impact, one important skill for SOC analysts is crafting and fine-tuning prompts to get the right insights from GenAI.

Inside the Technology: How We Bring Autonomous Agents to Life

Behind the scenes, delivering trustworthy GenAI-based solutions at scale requires rigorous engineering and continuous collaboration with the security operations teams. We’ve built AI agents on a foundation of expert-defined guardrails, curated test sets, and deployment-time checks to ensure reliability. Engineers, security analysts, and researchers collaborate to refine workflows, enhance precision, and broaden coverage as the agents adapt to real-world threats. Each workflow begins under human oversight, reinforced by efficient engineering and analyst feedback loops that accelerate development while upholding security, privacy, and compliance standards.

This transformation also demanded deep integration into Defender Experts’ core systems, from case management to remediation services, requiring ground-up engineering to accommodate long-running GenAI-based workflows alongside asynchronous backend processes. It also requires an orchestration engine that coordinates multi-layer automation, enabling rule-based logic, GenAI-powered features, and traditional AI models to work seamlessly with the autonomous AI agents to maximize quality, efficiency, and cost-effectiveness.
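To make the multi-layer orchestration idea concrete, here is a minimal Python sketch of a triage dispatcher: deterministic rules run first (cheap and auditable), an agent layer handles incident categories it has been approved for, and everything else falls through to the human analyst queue. This is purely illustrative; the names (`Incident`, `triage`) are invented for this example and do not reflect the actual Defender Experts implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Incident:
    title: str
    category: str
    notes: list = field(default_factory=list)

def triage(incident, rules, agent_categories):
    """Dispatch an incident through rule, agent, and human layers in order."""
    for rule in rules:                      # layer 1: deterministic rules
        if rule(incident):
            incident.notes.append("closed-by-rule")
            return "auto-resolved"
    if incident.category in agent_categories:  # layer 2: approved AI agents
        incident.notes.append("agent-investigated")
        return "agent-queue"                # agent output is still human-reviewed
    return "analyst-queue"                  # layer 3: human analyst fallback

# A toy rule: known false positives from a lab environment.
known_fp = lambda i: "test alert" in i.title.lower()
print(triage(Incident("Test alert from lab", "phishing"), [known_fp], {"phishing"}))
# prints: auto-resolved
```

The design point is that each layer is cheaper and more auditable than the next, and the human queue is always the default.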
The impact is clear: AI agents now run on 75% of the phishing and malware incidents landing in the Defender Experts analyst queue. The agents autonomously produce the verdict determination, a justification with data-backed summaries, customer-side queries for verification, and actionable remediation steps. With this combined human and AI agent approach, we resolve incidents nearly 72% faster while maintaining quality and transparency.

To achieve this, we follow a deliberate development and release journey. We start with internal evaluation on historic cases under strict privacy and compliance controls, establishing baselines for precision, recall, and quality. Next, we deploy the agents in “dark mode,” where agents investigate side by side with human analysts, enabling close monitoring and iterative improvement. From there, we move into pilots with customer design partners to validate methods and gather feedback, before expanding to broader adoption, all with a human backstop for review and validation. This disciplined development approach ensures that every step balances autonomy with oversight, giving customers confidence that advanced AI capabilities are grounded in proven outcomes and designed to strengthen resilience at scale.

Preparing for the Future

Our experience developing autonomous AI agents and deploying them in real MDR operations has reinforced our vision for the SOC of the future: a collaborative model where humans remain in the driver’s seat to teach and lead, working alongside AI agents rather than being replaced by them. Together, they create faster, smarter, and more resilient security operations. As SOC teams embrace the shift to GenAI-powered operations, these insights reflect the journey we’ve taken and offer practical guidance to help navigate the transformation with confidence:

- Anticipate Process Changes: SOC teams will not follow the same workflows as before.
Prepare for evolving processes and establish a lifecycle for AI and agent adoption with confidence.
- Foster Mindset Shift: Analysts used to traditional approaches often find it challenging to adopt new methods (e.g., running Kusto queries vs. writing prompts, running a full end-to-end investigation vs. leveraging agent output). Plan for change management and provide training to ease this transition.
- Evolve SOC Skills: Analyst roles are shifting in a GenAI-powered SOC. Analysts need to build expertise in prompt engineering, moving beyond manual case investigations to focus on advanced tasks such as posture data analysis and leveraging cross-product intelligence to uncover novel threats and map threat actor infrastructure. These evolving skills position analysts as strategic decision-makers, building collaboration between humans and AI to maximize effectiveness.
- Build Trust and Confidence: As security operations adopt AI agents, maintain a strong human–AI feedback loop. Guardrails and human oversight are essential for trustworthy automation.
- Plan for Multi-layer AI and Automation: Automation continues to play a critical role in security operations. Explore how to orchestrate traditional automation and AI together to achieve efficiency, cost-effectiveness, and consistent quality.

As we evolve toward the SOC of the future, we’re learning what it takes to make human and AI collaboration successful, and we’ll continue sharing those insights as we reimagine security operations together.

Cloud forensics: Why enabling Microsoft Azure Key Vault logs matters
Co-authors: Christoph Dreymann, Abul Azed, Shiva P.

Introduction

As organizations increase their cloud adoption to accelerate AI readiness, Microsoft Incident Response has observed a rise in cloud-based threats as attackers seek access to sensitive data and exploit vulnerabilities stemming from misconfigurations, often caused by rapid deployments. In this blog series, Cloud Forensics, we share insights from our frontline investigations to help organizations better understand the evolving threat landscape and implement effective strategies to protect their cloud environments.

This blog post explores the importance of enabling and analyzing Microsoft Azure Key Vault logs in the context of security investigations. Microsoft Incident Response has observed cases where threat actors specifically targeted Key Vault instances. In the absence of proper logging, conducting thorough investigations becomes significantly more difficult. Given the highly sensitive nature of the data stored in Key Vault, it is a common target for malicious activity. Moreover, attacks against this service often leave minimal forensic evidence when verbose logging is not properly configured during deployment. We will walk through realistic attack scenarios, illustrating how these threats manifest in log data and highlighting the value of enabling comprehensive logging for detection.

Key Vault

Key Vault is a cloud service designed for the secure storage and retrieval of critical secrets such as passwords or database connection strings. In addition to secrets, it can store other material such as certificates and cryptographic keys. To ensure effective monitoring of activities performed on a specific Key Vault instance, it is essential to enable logging. Without comprehensive audit logs, it is often difficult after a security breach to ascertain which secrets were accessed.
Given the importance of the assets protected by Key Vault, it is imperative to enable logging during the deployment phase.

How to enable logging

Logging must be enabled separately for each Key Vault instance, either in the Microsoft Azure portal, the Azure command-line interface (CLI), or Azure PowerShell. How to enable logging can be found here. Alternatively, it can be configured against the default Log Analytics workspace via an Azure Policy; how to use this method can be found here. By directing these logs to a Log Analytics workspace, storage account, or event hub for security information and event management (SIEM) ingestion, they can be used for threat detection and, more importantly, to ascertain when an identity was compromised and which sensitive information was accessed through that compromised identity. Without this logging, it is difficult to confirm whether any material has been accessed, and it may therefore need to be treated as compromised.

NOTE: There are no license requirements to enable logging within Key Vault, but Log Analytics charges based on ingestion and retention for usage of that service (Pricing - Azure Monitor | Microsoft Azure).

Next, we will review the structure of the audit logs originating from a Key Vault instance. These logs are located in the AzureDiagnostics table.

Interesting fields

Below is a good starting query to begin investigating activity performed against a Key Vault instance:

```kql
AzureDiagnostics
| where ResourceType == 'VAULTS'
```

The "operationName" field is of particular significance, as it indicates the type of operation that took place. An overview of Key Vault operations can be found here. The "Identity" field includes details about the identity responsible for an activity, such as the object identifier and UPN. Lastly, "callerIpAddress" shows which IP address the requests originated from. The list below describes the fields highlighted and used in this article.

- time: Date and time in UTC.
- resourceId: The Key Vault resource ID, which uniquely identifies a Key Vault in Azure and is used for various operations and configurations.
- callerIpAddress: IP address of the client that made the request.
- Identity: The identity structure, which can describe a "user," a "service principal," or a combination such as "user+appId" when the request originates from an Azure PowerShell cmdlet. Different fields are available accordingly; the most important are identity_claim_upn_s (the UPN of a user identity), identity_claim_appid_g (the app ID), and identity_claim_idtyp_s (the type of identity used).
- OperationName: The name of the operation, for instance SecretGet.
- Resource: The Key Vault name.
- ResourceType: Always "VAULTS".
- requestUri_s: The requested Key Vault API call, which contains valuable information. Each API call has its own structure; for example, the SecretGet request URI is {vaultBaseUrl}/secrets/{secret-name}/{secret-version}?api-version=7.4. For more information, see https://learn.microsoft.com/en-us/rest/api/keyvault/?view=rest-keyvault-keys-7.4
- httpStatusCode_d: Indicates whether an API call was successful.

A complete list of fields can be found here. To analyze further, we need to understand how a threat actor can access a Key Vault, by examining the Access Policy and Azure role-based access control (RBAC) permission models it can use.

Access Policy permission model vs Azure RBAC

The Access Policy permission model operates solely on the data plane, specifically for Azure Key Vault. The data plane is the access pathway for creating, reading, updating, and deleting assets stored within a Key Vault instance. Via a Key Vault access policy, anyone with the appropriate control-plane privileges can assign individual permissions at the Key Vault scope, granting data-plane access to security principals such as users, groups, service principals, and managed identities.
This model provides flexibility by granting access to keys, secrets, and certificates through specific permissions. However, it is considered a legacy authorization system native to Key Vault.

Note: The Access Policy permission model has privilege escalation risks and lacks Privileged Identity Management support. It is not recommended for critical data and workloads.

Azure RBAC, on the other hand, operates on both Azure's control and data planes. It is built on Azure Resource Manager, allowing for centralized access management of Azure resources. Azure RBAC controls access through role assignments, each consisting of a security principal, a role definition (a predefined set of permissions), and a scope (a group of resources or an individual resource). RBAC offers several advantages, including a unified access control model for Azure resources and integration with Privileged Identity Management. More information regarding Azure RBAC can be found here. Now, let’s dive into how threat actors can gain access to a Key Vault.

How a threat actor can access a Key Vault

When a Key Vault is configured with the Access Policy permission model, privileges can be escalated under certain circumstances. If a threat actor gains access to an identity that has been assigned the Key Vault Contributor role, the Contributor role, or any Azure RBAC role that includes the 'Microsoft.KeyVault/vaults/write' permission, they can escalate their privileges by setting a Key Vault access policy that grants themselves data-plane access, which in turn allows them to read and modify the contents of the Key Vault. Changing the permission model itself requires the 'Microsoft.Authorization/roleAssignments/write' permission, which is included in the Owner and User Access Administrator roles; a threat actor cannot change the permission model without one of these roles.
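To illustrate why these specific action strings matter, the Python sketch below checks whether a role's action patterns cover the two escalation-relevant permissions. It is a deliberately simplified model: Azure action strings support '*' wildcards, approximated here with fnmatch, and real Azure RBAC evaluation also considers NotActions, data actions, and assignment scope, all of which are ignored in this toy example.

```python
import fnmatch

# The two control-plane permissions discussed above.
ESCALATION_ACTIONS = [
    "Microsoft.KeyVault/vaults/write",                # lets you add an access policy
    "Microsoft.Authorization/roleAssignments/write",  # lets you change role assignments
]

def grants(role_actions, needed):
    """True if any action pattern in the role covers the needed action.
    Azure action strings support '*' wildcards, approximated with fnmatch."""
    return any(fnmatch.fnmatch(needed, pattern) for pattern in role_actions)

def escalation_paths(role_actions):
    """Return which escalation-relevant actions a role's patterns would allow."""
    return [a for a in ESCALATION_ACTIONS if grants(role_actions, a)]

# A crude stand-in for a role that is broad within Key Vault but has no
# Authorization actions: it can set access policies but not change roles.
contributor_like = ["Microsoft.KeyVault/*"]
print(escalation_paths(contributor_like))
# prints: ['Microsoft.KeyVault/vaults/write']
```

The takeaway mirrors the prose: a wildcard over Microsoft.KeyVault is enough to self-grant data-plane access, but changing the permission model still requires an Authorization write action.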
Any change to the authorization mode will be logged in the Activity Logs of the subscription, as shown in the figure below. If a new access policy is added, it will generate the following entry within the Azure Activity Log:

When Azure RBAC is the permission model for a Key Vault, a threat actor must identify an identity within the Entra ID tenant that has access to sensitive information, or one capable of assigning such permissions. Information about Azure RBAC roles for Key Vault access, specifically those that can access secrets, can be found here. A threat actor who has compromised an identity with the Owner role is authorized to manage all operations, including resources, access policies, and roles within the Key Vault. In contrast, a threat actor with the Contributor role can perform management operations but does not have access to keys, secrets, or certificates; this restriction applies when the RBAC model is used within a Key Vault. The following section examines the typical actions a threat actor performs after gathering permissions.

Attack scenario

Let’s review the common steps threat actors take after gaining initial access to Microsoft Azure. We will focus on the Azure Resource Manager layer (responsible for deploying and managing resources), as its Azure RBAC or Access Policy permissions determine what a threat actor can view or access within Key Vaults.

Enumeration

Initially, threat actors aim to understand the organization’s existing attack surface, so they enumerate all Azure resources. The scope of this enumeration is determined by the access rights held by the compromised identity. If the compromised identity possesses access comparable to that of a Reader or Key Vault Reader at the subscription level (reader permission is included in a variety of Azure RBAC roles), it can read numerous resource groups. Conversely, if the identity’s access is restricted, it may only view a specific subset of resources, such as Key Vaults.
Consequently, a threat actor can only interact with those Key Vaults that are visible to them. Once a Key Vault name is identified, the threat actor can interact with that Key Vault, and these interactions will be logged in the AzureDiagnostics table.

List secrets / list certificates operations

With the Key Vault name, a threat actor with the appropriate rights could list secrets or certificates (operations SecretList and CertificateList). While this listing does not return the secret values themselves, it reveals the names under which secrets or certificates can be retrieved. Without those rights, access attempts appear as unsuccessful operations in the httpStatusCode_d field, which aids detection: a high number of unauthorized operations across different Key Vaults can indicate suspicious activity, as shown in the figure below. The following query assists in detecting potential unauthorized access patterns:

```kql
AzureDiagnostics
| where ResourceType == 'VAULTS' and OperationName != "Authentication"
| summarize MinTime = min(TimeGenerated), MaxTime = max(TimeGenerated),
    OperationCount = count(),
    UnauthorizedAccess = countif(httpStatusCode_d >= 400),
    OperationNames = make_set(OperationName),
    make_set_if(httpStatusCode_d, httpStatusCode_d >= 400),
    VaultName = make_set(Resource)
    by CallerIPAddress
| where OperationNames has_any ("SecretList", "CertificateList") and UnauthorizedAccess > 0
```

When a threat actor uses a browser for interaction, the VaultGet operation is usually the first action when accessing a Key Vault. This operation can also be performed via direct API calls and is not limited to browser use.

High-privileged account store

Next, we assume a successful attempt to access a global admin password for Entra ID.

Analyzing secret retrieval

When an individual has the identifier of a Key Vault and holds SecretList and SecretGet access rights, they can list all the secrets stored within the Key Vault (operation SecretList).
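As an offline complement to the KQL above, the same aggregation can be prototyped in Python against exported AzureDiagnostics rows, for example during an engagement where logs have been pulled out of the workspace. This is an illustrative sketch, not an official tool; the field names match the table described earlier.

```python
from collections import defaultdict

def flag_suspicious(rows, min_failures=1):
    """Group exported AzureDiagnostics rows by caller IP and flag IPs with
    failed (HTTP >= 400) list operations, mirroring the KQL above."""
    stats = defaultdict(lambda: {"ops": 0, "denied": 0, "vaults": set(), "names": set()})
    for r in rows:
        s = stats[r["callerIpAddress"]]
        s["ops"] += 1
        s["vaults"].add(r["Resource"])
        s["names"].add(r["OperationName"])
        if r["httpStatusCode_d"] >= 400:
            s["denied"] += 1
    return {
        ip: s for ip, s in stats.items()
        if s["denied"] >= min_failures
        and s["names"] & {"SecretList", "CertificateList"}
    }

# Toy sample: one IP probing two vaults unsuccessfully, one legitimate caller.
rows = [
    {"callerIpAddress": "203.0.113.7", "Resource": "kv-prod", "OperationName": "SecretList", "httpStatusCode_d": 403},
    {"callerIpAddress": "203.0.113.7", "Resource": "kv-dev", "OperationName": "SecretList", "httpStatusCode_d": 403},
    {"callerIpAddress": "10.0.0.5", "Resource": "kv-prod", "OperationName": "SecretGet", "httpStatusCode_d": 200},
]
print(list(flag_suspicious(rows)))
# prints: ['203.0.113.7']
```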
In this instance, the secret includes a password. Upon identifying the secret name, the secret value can be retrieved (operation SecretGet). The image below illustrates what appears in the AzureDiagnostics table; the HTTP status code indicates that these actions were successful. The requestUri contains the name of the secret, such as "BreakGlassAccountTenant" for the SecretGet operation, so with this information one can ascertain which secret was accessed. The requestUri_s format for the SecretGet operation is:

{vaultBaseUrl}/secrets/{secret-name}/{secret-version}?api-version=7.4

When the Key Vault service is accessed through the Azure portal in a browser, additional API calls are often involved because of the various views within the Key Vault service; the figure below illustrates this process. When someone accesses a specific Key Vault via a browser, the VaultGet operation is followed by SecretList. To further distinguish actions, SecretListVersion will also appear, as the Key Vault view shows the different versions of a secret, which may indicate direct browser usage. The final SecretGet operation retrieves the actual secret.

In normal Key Vault usage, SecretList operations can be accompanied by SecretGet operations. This is less common for emergency accounts, since those accounts are infrequently used; setting up alerts for when certain secrets are retrieved can assist in identifying unusual activity.

Entra ID application certificate store

In addition to secrets, certificates that provide access to Entra ID applications can also be managed within a Key Vault. When creating an Entra ID application with a certificate for authentication, you can automatically store that certificate within a Key Vault of your choice. Access to such certificates could allow a threat actor to leverage the access rights of the associated Entra ID application and gain access to Entra ID.
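Returning to the SecretGet requestUri_s format shown above: during log review it is often useful to recover the secret name and version programmatically. The small Python sketch below is a hypothetical log post-processing helper, not part of any Microsoft tooling, and assumes the documented {vaultBaseUrl}/secrets/{secret-name}/{secret-version} shape.

```python
from urllib.parse import urlparse

def parse_secret_uri(request_uri):
    """Extract (secret_name, secret_version) from a SecretGet requestUri_s.
    Expected shape: {vaultBaseUrl}/secrets/{name}/{version}?api-version=..."""
    parts = urlparse(request_uri).path.strip("/").split("/")
    if len(parts) >= 2 and parts[0] == "secrets":
        name = parts[1]
        version = parts[2] if len(parts) > 2 else None
        return name, version
    return None, None

uri = "https://contoso-kv.vault.azure.net/secrets/BreakGlassAccountTenant/abc123?api-version=7.4"
print(parse_secret_uri(uri))
# prints: ('BreakGlassAccountTenant', 'abc123')
```

Applied across an exported log set, this turns raw request URIs into a simple inventory of which secrets a compromised identity actually touched.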
For instance, if the Entra ID application possesses significant permissions, the extracted certificate could be used to exercise those permissions. Various Entra ID roles can be leveraged to elevate privileges; for this scenario, we assume the targeted application holds the "RoleManagement.ReadWrite.Directory" permission. Consequently, the Entra ID application would have the capability to assign the Global Admin role to a user account controlled by the threat actor. We have also described this scenario here.

Analyzing certificate retrieval

The figure below outlines the procedure a threat actor follows to download a certificate and its private key using the Key Vault API. First, the CertificateList operation displays all certificates within a Key Vault. Next, the SecretGet operation retrieves a specific certificate along with its private key (the SecretGet operation is required to obtain both the certificate and its private key).

When a threat actor works through the Azure portal in a browser, the sequence of actions should resemble those in the figure below. When a certificate object is selected within a specific Key Vault view, all certificates are displayed (operation CertificateList). Upon selecting a particular certificate, the operations CertificateGet and CertificateListVersions are executed. Subsequently, when a specific version is selected, CertificateGet is invoked again. When "Download in PFX/PEM format" is selected, the SecretGet operation downloads the certificate and private key within the browser. With the downloaded certificate, the threat actor can sign in as the Entra application and use its assigned permissions.

Key Vault summary

Detecting misuse of a Key Vault instance can be challenging, as operations like SecretGet can be legitimate. A threat actor might easily masquerade their activities among those of legitimate users.
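The certificate-download pattern described above (a CertificateList soon followed by a SecretGet from the same caller) can be expressed as a simple sequence heuristic. The sketch below is illustrative only: it assumes pre-sorted, simplified event tuples rather than real AzureDiagnostics rows, and the 30-minute window is an arbitrary example threshold.

```python
def download_sequences(events, window_minutes=30):
    """Flag callers whose ordered operations contain CertificateList followed
    by SecretGet within the window, the certificate-download pattern above.
    events: list of (caller, minutes_since_start, operation), sorted by time."""
    flagged = set()
    last_list = {}                      # caller -> time of last CertificateList
    for caller, t, op in events:
        if op == "CertificateList":
            last_list[caller] = t
        elif op == "SecretGet" and caller in last_list:
            if t - last_list[caller] <= window_minutes:
                flagged.add(caller)
    return flagged

events = [
    ("attacker@contoso.com", 0, "CertificateList"),
    ("attacker@contoso.com", 2, "CertificateGet"),
    ("attacker@contoso.com", 3, "SecretGet"),
    ("svc-backup", 0, "SecretGet"),     # no preceding listing: not flagged
]
print(download_sequences(events))
# prints: {'attacker@contoso.com'}
```

A heuristic like this will flag some legitimate administrators too; it is a triage aid, not a verdict, which is exactly the point of the summary that follows.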
Nevertheless, unusual attributes, such as IP addresses or peculiar access patterns, can serve as indicators. If an identity is known to be compromised and has used Key Vaults, the Key Vault logs must be checked to determine what was accessed so you can respond appropriately.

Coming up next

Stay tuned for the next blog in the Cloud Forensics series. If you haven’t already, please read our previous blog about hunting with Microsoft Graph activity logs.

Sploitlight: Hunting Beyond the Patch
Many people aren’t aware that Microsoft security isn’t just about Microsoft; it’s also about the platforms supporting the products we build. This means our reach extends across all operating systems: iOS, Android, Linux, and macOS! In early 2025, Microsoft disclosed CVE-2025-31199, a macOS vulnerability that abused Spotlight, macOS’s metadata importer framework, to bypass Transparency, Consent, and Control (TCC). After the Defender team reported this to Apple, a patch was released that closed the hole. But the underlying behavior behind the threat still matters to Microsoft! Once attackers learn that trusted macOS services can be redirected, they will reuse the method for nefarious purposes, so it is important to track such attempts down. The next variant won’t look the same, and Spotlight is a commonly targeted service. [1] So, in this article, we teach you how to hunt beyond the patch!

Why Hunt for Sploitlight

Spotlight importers (.mdimporter) extend macOS indexing. They normally process metadata for search visibility, but attackers can twist that design to index protected files, extract sensitive data, or trigger code execution, perhaps with elevated system trust and privileges. Even with the patch in place, the same logic paths remain valuable targets for attackers. We recommend hunting for patterns around importers, indexing behavior, and TCC-privileged binaries to help detect attempts to rebuild this chain of abuse.

Advanced Hunting Queries (AHQs)

1.
Detect Unusual Spotlight Importer Activity

Looking for manual invocations of mdimport may tip you off to attacker activity.

```kql
// Wide-sweeping: any process command line mentioning mdimport
DeviceProcessEvents
| where ProcessCommandLine contains "mdimport"
```

```kql
// More granular: mdimport invoked with common importer-testing flags
DeviceProcessEvents
| where ProcessCommandLine contains "mdimport"
| where isempty(extract(@"-(\w+)", 1, ProcessCommandLine)) == false
| extend mdimportFlag = extract(@"-(\w+)", 1, ProcessCommandLine)
| where mdimportFlag in~ ("r", "i", "t", "L")
```

Why it’s important: A Spotlight plugin being developed or tested will be called from the command line using the mdimport utility. For a wide-sweeping query, just search for mdimport alone; to get more granular, search for it with common parameters such as "r", "i", "t", or "L".

2. Investigate Anomalous Spotlight Activity

Use this query to monitor Spotlight activity in the background.

```kql
DeviceProcessEvents
| where FileName in~ ("mdworker", "mdworker_shared")
```

Why it’s important: The Advanced Hunting portal creates timelines for you to quickly zoom in on abnormal behavior, and peaks can show when new Spotlight plugins are invoked.

Defender Recommendations

- Establish a baseline of normal Spotlight activity before setting detection thresholds.
- Tag importer activity by TCC domain to surface unexpected access.
- Correlate unsigned importer drops with system events such as privilege escalation or installer execution.
- Deploy these AHQs in Microsoft Defender XDR or Sentinel for continuous telemetry review.

The Bigger Picture

The point isn’t to memorize CVEs; it’s to understand the logic that made them possible and look for it everywhere else. Threat actors don’t repeat exploits; they repeat success patterns. Visibility is the only real control. If a process touches data, moves it, or indexes it, it’s part of your attack surface. Treat it that way.

👉 Join the Defender Experts S.T.A.R.
Forum to see Sploitlight detection strategies and live hunting demonstrations: Defender Experts Webinar Series

[1] References:
https://theevilbit.github.io/posts/macos_persistence_spotlight_importers/
https://www.blackhat.com/docs/us-15/materials/us-15-Wardle-Writing-Bad-A-Malware-For-OS-X.pdf
https://newosxbook.com/home.html
https://www.microsoft.com/en-us/security/blog/2025/07/28/sploitlight-analyzing-a-spotlight-based-macos-tcc-vulnerability/

The invisible attack surface: hunting AI threats in Defender XDR
As organizations embed AI across their business, the same technology that drives productivity also introduces a new class of risk: prompts that can be manipulated, data that can be leaked, and AI systems that can be tricked into doing things they shouldn’t. Attackers are already testing these boundaries, and defenders need visibility into how AI is being used, not just where it’s deployed.

Microsoft Defender for Cloud now brings that visibility into the hunt. Its AI threat protection detects prompt injection, sensitive data exposure, and misuse of credentials in real time, correlating those signals with endpoint, identity, and cloud telemetry through Microsoft Defender XDR. The result is a single, searchable surface for investigating how both people and AI-driven systems behave under pressure. As of 2025, Defender for AI is fully integrated into Microsoft Defender for Cloud, extending protection to AI models, prompts, and datasets across Azure AI workloads. This makes Defender for Cloud the central platform for securing enterprise AI environments. Meanwhile, Microsoft Defender Experts continues expanding across Defender XDR, offering 24/7 human-led monitoring and investigation, with full active coverage for servers within Defender for Cloud today.

For threat hunters, this evolution isn’t theoretical; it’s tactical. The same curiosity and precision that uncover lateral movement or data exfiltration now apply to AI misuse. In this post, we’ll walk through practical KQL hunts to surface suspicious AI activity, from abnormal model usage patterns to subtle signs of data exfiltration that traditional detections might miss.

The AI attack surface: old playbook, new players

Attackers aren’t reinventing the wheel; they’re repurposing it.
The top risks map neatly to the OWASP Top 10 for LLM Applications:

- Prompt injection (LLM01) – Manipulating model logic through crafted inputs or poisoned context
- Sensitive data disclosure (LLM06) – AI returning confidential data due to mis-scoped access
- Shadow AI usage – Employees using external copilots with corporate data
- Wallet abuse – API tokens or service principals driving massive, unintended consumption

It's not about new telemetry; correlation is what matters. Defender surfaces these risks by tying AI alerts from Defender for Cloud to real user behavior across your XDR environment.

Threat hunting: from AI alerts to insight

Forget slide decks. These are practical, production-ready hunting patterns using real Defender data tables.

1. Shadow AI exfiltration detection

Office apps sending data to external AI endpoints (the #1 exfil path today).

(
DeviceNetworkEvents
| where RemoteUrl has_any (dynamic(["openai.com","anthropic.com","claude.ai","cohere.ai","chatgpt.com","gemini.google.com","huggingface.co","perplexity.ai"]))
| where InitiatingProcessFileName in~ (dynamic(["EXCEL.EXE","WINWORD.EXE","OUTLOOK.EXE","POWERPNT.EXE","ONENOTE.EXE"]))
    or InitiatingProcessFileName in~ (dynamic(["chrome.exe","msedge.exe","firefox.exe","brave.exe"]))
| extend Device = toupper(split(DeviceName, ".")[0]),
         IsOffice = InitiatingProcessFileName in~ (dynamic(["EXCEL.EXE","WINWORD.EXE","OUTLOOK.EXE","POWERPNT.EXE","ONENOTE.EXE"]))
| summarize Connections = count(), IsOffice = max(IsOffice), AITime = max(Timestamp)
    by Device, User = InitiatingProcessAccountName
)
| join kind=inner (
    DeviceFileEvents
    | where ActionType in~ ("FileCopied","FileCreated","FileModified","FileRenamed")
    | extend Device = toupper(split(DeviceName, ".")[0]),
             Lower = tolower(strcat(FolderPath, FileName))
    | extend HeuristicFlag = case(
          Lower has_any ("password","credential","secret","api_key") or Lower endswith ".key" or Lower endswith ".pem", "Credential",
          Lower has_any ("confidential","restricted","classified","sensitive"), "Classified",
          Lower has_any ("ssn","salary","payroll"), "PII",
          Lower has_any ("finance","hr","legal","executive"), "OrgSensitive",
          "Other"
      ),
      LabelFlag = case(
          SensitivityLabel has "Highly Confidential", "Classified",
          SensitivityLabel has "Confidential", "Sensitive",
          SensitivityLabel has "Internal", "Internal",
          isnotempty(SensitivityLabel), "Labeled",
          "Unlabeled"
      )
    | where HeuristicFlag != "Other" or LabelFlag in ("Classified","Sensitive","Internal","Labeled")
    | summarize Files = count(),
                HeuristicCount = countif(HeuristicFlag != "Other"),
                DLPCount = countif(isnotempty(SensitivityLabel)),
                Types = make_set_if(HeuristicFlag, HeuristicFlag != "Other"),
                Labels = make_set_if(SensitivityLabel, isnotempty(SensitivityLabel)),
                FileTime = max(Timestamp)
        by Device, User = InitiatingProcessAccountName
) on Device, User
| extend Delta = datetime_diff('minute', AITime, FileTime)
| where abs(Delta) <= 240
| extend Priority = case(
    IsOffice == 1, "Critical",
    Labels has_any ("Highly Confidential","Confidential") or Types has "Credential" or Types has "Classified", "High",
    Files >= 20, "High",
    "Medium"
)
| project Priority, Device, User, Connections, Files, HeuristicCount, DLPCount, Types, Labels, Delta
| order by Priority desc, Files desc

Why it works: Correlates outbound AI traffic with sensitive file access.
Action: Block the key, review DLP coverage, fix workflow gaps.

2. Anomalous consumption patterns

Off-hours Azure OpenAI activity isn't necessarily productivity; it might be unsanctioned automation or exfiltration.

// Azure OpenAI & LLM Off-Hours Detection - PER USER TIMEZONE
// DISCLAIMER: Time zone detection is approximate, based on behavioral inference.
// Validate per user/device when high-risk anomalies are flagged.
// If authoritative time zone data (e.g., Entra sign-in or mailbox settings) is available, prefer that source.
let MinRequestsThreshold = 500;
let MinTokensThreshold = 20000;
let OffHoursStart = 21;
let OffHoursEnd = 5;
let UserTimezones = CloudAppEvents
    | where Timestamp > ago(60d)
    | where Application has_any ("OpenAI", "Azure OpenAI", "ChatGPT", "Claude", "Gemini", "Anthropic", "Perplexity", "Microsoft 365 Copilot")
    | extend HourUTC = datetime_part("Hour", Timestamp)
    | summarize ActivityByHour = count() by AccountDisplayName, HourUTC
    | summarize arg_max(ActivityByHour, HourUTC) by AccountDisplayName
    // Assume peak activity around 14:00 local time; derive the UTC offset from each user's busiest hour
    | extend TimezoneOffset = iff((HourUTC - 14 + 24) % 24 > 12, (HourUTC - 14 + 24) % 24 - 24, (HourUTC - 14 + 24) % 24)
    | project AccountDisplayName, TimezoneOffset;
CloudAppEvents
| where Timestamp > ago(30d)
| where Application has_any ("OpenAI", "Azure OpenAI", "ChatGPT", "Claude", "Gemini", "Anthropic", "Perplexity", "Microsoft 365 Copilot")
| extend HourUTC = datetime_part("Hour", Timestamp),
         DayUTC = toint(dayofweek(Timestamp) / 1d),
         Tokens = toint(RawEventData.totalTokens)
| join kind=leftouter (UserTimezones) on AccountDisplayName
| extend TZ = coalesce(TimezoneOffset, 0)
| extend HourLocal = (HourUTC + TZ + 24) % 24
| extend DayLocal = (DayUTC + iff(HourUTC + TZ >= 24, 1, iff(HourUTC + TZ < 0, -1, 0)) + 7) % 7
| extend IsAnomalous = (DayLocal in (0, 6)) or (HourLocal >= OffHoursStart or HourLocal < OffHoursEnd)
| where IsAnomalous
| extend IsWeekend = DayLocal in (0, 6),
         IsOffHours = HourLocal >= OffHoursStart or HourLocal < OffHoursEnd
| summarize Requests = count(),
            TokensUsed = sum(Tokens),
            WeekendRequests = countif(IsWeekend),
            LateNightRequests = countif(IsOffHours and not(IsWeekend)),
            LocalHours = make_set(HourLocal),
            LocalDays = make_set(DayLocal),
            Applications = make_set(Application),
            ActionTypes = make_set(ActionType),
            FirstSeen = min(Timestamp),
            LastSeen = max(Timestamp),
            DetectedTZ = any(TZ)
    by AccountDisplayName, IPAddress
| where Requests >= MinRequestsThreshold or TokensUsed >= MinTokensThreshold
| extend UserTimezone = case(
    DetectedTZ == 0, "UTC/GMT",
    DetectedTZ == -5, "EST (UTC-5)",
    DetectedTZ == -4, "EDT (UTC-4)",
    DetectedTZ == -6, "CST (UTC-6)",
    DetectedTZ == -7, "MST (UTC-7)",
    DetectedTZ == -8, "PST (UTC-8)",
    DetectedTZ == 1, "CET (UTC+1)",
    DetectedTZ == 8, "CST China (UTC+8)",
    DetectedTZ == 9, "JST Japan (UTC+9)",
    DetectedTZ > 0, strcat("UTC+", DetectedTZ),
    strcat("UTC", DetectedTZ)
)
| extend ThreatPattern = case(
    array_length(Applications) > 1, "Multiple LLM Services",
    WeekendRequests > LateNightRequests * 2, "Weekend Automation",
    LateNightRequests > WeekendRequests * 2, "Late-Night Automation",
    Requests > 500, "High-Volume Script",
    "Unusual Off-Hours Activity"
)
| extend RiskScore = case(
    Requests > 1000 and TokensUsed > 100000, 100,
    Requests > 500 and WeekendRequests > 100, 95,
    TokensUsed > 50000 or Requests > 200, 85,
    WeekendRequests > 100, 80,
    Requests > 100 or TokensUsed > 20000, 70,
    60
)
| extend RiskLevel = case(
    RiskScore >= 90, "Critical",
    RiskScore >= 75, "High",
    RiskScore >= 60, "Medium",
    "Low"
)
| project AccountDisplayName, IPAddress, RiskLevel, RiskScore, ThreatPattern, Requests, TokensUsed, WeekendRequests, LateNightRequests, Applications, UserTimezone, LocalHours, LocalDays, ActionTypes, FirstSeen, LastSeen
| sort by RiskScore desc, Requests desc

Why it works: Humans sleep. Scripts don't. Temporal anomalies expose automation faster than anomaly models.
Action: Check grounding sources, confirm the IP, disable keys or service principals.

3. Bot-like behavior hunt

Separates sanctioned automation from compromise and enables early detection.
// ---- Tunables (adjust if needed) ----
let LookbackDays = 7d;
let MinEvents = 3;             // ignore trivial users
let RPM_AutoThresh = 50.0;     // requests/hour threshold that smells like a bot
let MaxIPs_Auto = 1;           // single IP suggests fixed worker
let MaxApps_Auto = 1;          // single app suggests fixed worker
let MaxUAs_Auto = 2;           // very few UAs over lookback
let MaxHighTokPct = 5.0;       // % of requests over 4k tokens still considered benign
CloudAppEvents
| where Timestamp > ago(LookbackDays)
| where Application has_any ("OpenAI", "Azure OpenAI", "Microsoft 365 Copilot Chat")
| extend User = tolower(AccountDisplayName)
| extend raw = todynamic(RawEventData)
| extend Tokens = toint(coalesce(raw.totalTokens, raw.total_tokens, raw.usage_total_tokens))
| summarize TotalRequests = count(),
            HighTokenRequests = countif(Tokens > 4000),
            AvgTokens = avg(Tokens),
            MaxTokens = max(Tokens),
            UniqueIPs = dcount(IPAddress),
            IPs = make_set(IPAddress, 50),
            UniqueApps = dcount(Application),
            Apps = make_set(Application, 20),
            UniqueUAs = dcount(UserAgent),
            FirstRequest = min(Timestamp),
            LastRequest = max(Timestamp)
    by User
| where TotalRequests >= MinEvents
| extend _dur = toreal(datetime_diff('hour', LastRequest, FirstRequest))
| extend DurationHours = iif(_dur <= 0, 1.0, _dur)
| extend RequestsPerHour = TotalRequests / DurationHours
| extend HighTokenRatio = (HighTokenRequests * 100.0) / TotalRequests
// ---- Heuristic: derive likely automation (no lists/regex) ----
| extend IsLikelyAutomation = (UniqueIPs <= MaxIPs_Auto and UniqueApps <= MaxApps_Auto
    and UniqueUAs <= MaxUAs_Auto and RequestsPerHour >= RPM_AutoThresh and HighTokenRatio <= MaxHighTokPct)
// ---- Techniques & risk ----
| extend IsRapidFire = RequestsPerHour > 20,
         IsHighVolume = TotalRequests > 50,
         IsTokenAbuse = HighTokenRatio > 30,
         IsMultiService = UniqueApps > 1,
         IsMultiIP = UniqueIPs > 2,
         IsEscalating = DurationHours < 24 and TotalRequests > 10
| where IsRapidFire or IsHighVolume or IsTokenAbuse or IsMultiService or IsMultiIP or IsEscalating
| extend TechniqueCount = toint(IsRapidFire) + toint(IsHighVolume) + toint(IsTokenAbuse) + toint(IsMultiService) + toint(IsMultiIP) + toint(IsEscalating)
| extend Risk = case(
    IsLikelyAutomation and UniqueIPs == 1 and UniqueApps == 1 and not(IsTokenAbuse), "Low - Likely Automation",
    TechniqueCount >= 4, "Critical - Multi-Vector Behavior",
    TechniqueCount >= 3, "High - Attack Pattern",
    TechniqueCount >= 2, "Medium - Anomalous Behavior",
    "Low"
)
// Custom sort: Critical > High > Medium > Low - Likely Automation > Low
| extend RiskOrder = case(
    Risk startswith "Critical", 1,
    Risk startswith "High", 2,
    Risk startswith "Medium", 3,
    Risk == "Low - Likely Automation", 4,
    5
)
| project Risk, User, TotalRequests, RequestsPerHour, TechniqueCount, IsLikelyAutomation, IsRapidFire, IsHighVolume, IsTokenAbuse, IsMultiIP, IsMultiService, IsEscalating, UniqueIPs, IPs, UniqueApps, UniqueUAs, HighTokenRatio, DurationHours, FirstRequest, LastRequest, RiskOrder
| sort by RiskOrder asc, TotalRequests desc

Why it works: Hunts for automation-like patterns that could indicate either sanctioned scripts or early-stage compromise, enabling proactive detection before alerts fire.
Action: Investigate flagged accounts immediately to confirm intent and mitigate potential AI misuse.

Operational lessons that scale beyond the lab

- Custom detections > ad hoc hunts – Turn query #1 into a scheduled detection. Shadow AI isn't a one-off behavior.
- Security Copilot ≠ search bar – Use it for triage context, not hunting logic.
- Set quotas, treat them like controls – Token budgets and rate limits are as critical as firewalls for AI workloads.
- Defender for Cloud Apps – Block risky generative AI apps while letting sanctioned copilots run.

Getting started with threat hunting for AI workloads

Before you run these hunts at scale, make sure your environment is instrumented for cognitive visibility.
That means insight into how your AI models are being used and what data they reason over, not just how much compute they consume. Traditional telemetry shows process, network, and authentication events. Cognitive visibility adds prompts, model responses, grounding sources, and token behavior, giving analysts the context that explains why an AI acted the way it did. Defender for AI Services integrates with Defender for Cloud to provide that visibility layer, but the right configuration turns data collection into situational awareness.

- Enable the AI services plan – Make sure Defender for AI Services is enabled at the subscription level. This activates continuous monitoring for Azure OpenAI, AI Foundry, and other managed AI workloads. Microsoft Learn →
- Enable user prompt evidence – Turn on prompt capture for Defender for AI alerts. Seeing the exact input and model response during an attack is the difference between speculation and evidence. Microsoft Learn →
- Validate your schema – Always test KQL queries in your workspace. Field names and event structures can differ across tenants and tiers, especially in CloudAuditEvents and AlertEvidence.
- Use Security Copilot for acceleration – Let Copilot translate natural-language hypotheses into KQL, then fine-tune the logic yourself. It is the fastest way to scale your hunts without losing precision. Microsoft Learn →
- Monitor both sides of the equation – Hunt for both AI-specific risks such as prompt injection, model abuse, or token sprawl, and traditional threats that target AI systems such as compromised credentials, exposed storage, or lateral movement through service principals.

Visibility is only as strong as the context you capture. The sooner you enable these settings, the sooner your SOC can understand why your models behave the way they do, not just what they did.

Final thoughts: from prompts to protections

As AI becomes part of core infrastructure, its telemetry must become part of your SOC's muscle memory.
The same principles that power endpoint or identity defense (visibility, correlation, anomaly detection) now apply to model inference, token usage, and data grounding. Defender for Cloud and Defender XDR give you that continuity: alerts flow where your analysts already work, and your hunting logic evolves without a separate stack. Protecting AI isn't about chasing every model. It's about extending proven security discipline to the systems that now think alongside you.

Further Reading

- Defender for Cloud AI Threat Protection
- Advanced Hunting in Microsoft Defender XDR
- OWASP Top 10 for LLM Applications

Found a better pattern? Post it. The threat surface is new, but the hunt discipline isn't.

Delivering more threat hunting insights with Microsoft Defender Experts' newest capabilities
The cybersecurity threat landscape continues to evolve with novel attacks and techniques emerging each day. Microsoft Defender Experts for Hunting, included with Microsoft Defender Experts for XDR, helps security teams stay ahead of evolving attacks by providing proactive threat hunting, powered by Microsoft's vast threat intelligence with 100 trillion daily signals processed by over 10,000 experts.

To date, our managed threat hunting reports have provided details about the hunts we conduct after observing suspicious activity, with full attack summary details provided for verified threats (also known as Defender Experts Notifications). Today, we are excited to announce the general availability of new capabilities that deliver deeper hunting context to our customers. More specifically, we will provide greater insight into each hunt we carry out, not just the ones that result in verified threats. We'll also give our customers visibility into the hypothesis-based hunts we conduct on their behalf.

Introducing investigation summaries for the hunts we conduct

Each hunt we conduct tells a story, even when no active threat is found. To keep you informed, you will now receive an investigation summary for nearly every hunt we conduct in your environment, regardless of whether a confirmed threat was found. This summary will detail what we hunted for, why we hunted for it, and how we reached our final determination. Beyond transparency, these summaries provide assurance that we thoroughly hunted down the threat and that your defenses remain intact. They help validate your security posture and, when applicable, highlight any threats uncovered during the process. Even in cases where no threat is detected, you can review our hunt summaries for tangible assurance that we are continuously hunting on your behalf, keeping you informed, prepared, and ahead of new risks.
New Emerging threats section of the Defender Experts for Hunting report

Our threat hunters constantly analyze substantial amounts of threat intelligence to hunt for new and emerging techniques. To share this information with you, we are unveiling a new section of our report titled "Emerging threats," which details the proactive, hypothesis-based hunts we've conducted in your environment. These hunts focus on tactics that adversaries are just beginning to adopt, meaning they might bypass traditional detection mechanisms.

This section will provide a title briefly describing each emerging threat, the severity we've ascribed to it, its relevant threat category, and, most importantly, whether we've identified any evidence of impact in your environment. Additionally, by clicking into the hunt, you'll see when we started and ended our hunt for the threat, along with a full investigation summary detailing our hunt. By surfacing these emerging threat hunts, we give you visibility into how we're anticipating attacker behavior, validating your defenses against cutting-edge techniques, and identifying relevant suspicious activity before significant exploitation.

Conclusion

With these new capabilities, Microsoft Defender Experts for Hunting goes beyond detection to deliver transparency, assurance, and proactive defense. By surfacing investigation summaries and emerging threat insights, we help security teams validate their defenses, anticipate attacker tactics, and stay ahead of evolving risks. You can access these new capabilities by visiting your Hunting report, located in the Defender portal.

To learn more about our hunting service, visit our Microsoft Defender Experts for Hunting page, read our hunting documentation, or watch our explainer video. To learn more about our managed XDR service, visit our Microsoft Defender Experts for XDR page, or read our XDR documentation.
You can also visit our Tech Community discussion space to ask questions, engage in conversations, and share your expertise and feedback.

What's next?

Join us at Microsoft Ignite in San Francisco on November 17–21, or online, November 18–20, for deep dives and practical labs to help you maximize your Microsoft Defender investments and to get more from the Microsoft capabilities you already use. Security is a core focus at Ignite this year, with the Security Forum on November 17, deep-dive technical sessions, theater talks, and hands-on labs designed for security leaders and practitioners.

Featured sessions

- BRK237 – Identity Under Siege: Modern ITDR from Microsoft. Join experts in Identity and Security to hear how Microsoft is streamlining collaboration across teams and helping customers better protect, detect, and respond to threats targeting your identity fabric.
- BRK240 – Endpoint security in the AI era: What's new in Defender. Discover how Microsoft Defender's AI-powered endpoint security empowers you to do more, better, faster.
- BRK236 – Your SOC's ally against cyber threats, Microsoft Defender Experts. See how Defender Experts detect, halt, and manage threats for you, with real-world outcomes and demos.
- LAB541 – Defend against threats with Microsoft Defender. Get hands-on with Defender for Office 365 and Defender for Endpoint, from onboarding devices to advanced attack mitigation.

Explore and filter the full security catalog by topic, format, and role: aka.ms/SessionCatalogSecurity.

Why attend?

Ignite is the place to learn about the latest Defender capabilities, including new agentic AI integrations and unified threat protection. We will also share future-facing innovations in Defender, as part of our ongoing commitment to autonomous defense.

Security Forum—Make day 0 count (November 17)

Kick off with an immersive, in-person pre-day focused on strategic security discussions and real-world guidance from Microsoft leaders and industry experts.
Select Security Forum during registration. Register for Microsoft Ignite >

Cloud shadows: How attackers exploit Azure's elasticity for stealth and scale
Threats like password spray or adversary-in-the-middle (AiTM) are routine and too easily overlooked in an endless stream of security alerts. But what if these routine threats are only a small part of a much deeper, more sophisticated attack?

Since June 2025, Microsoft Defender Experts has been closely monitoring a sophisticated and continuously evolving attack campaign targeting poorly managed Azure cloud environments. What sets these threats apart is their use of Azure's elasticity and interconnected structure, which allows users and attackers alike to move more easily through multi-tenant environments and avoid basic detection. By specifically targeting student and Pay-As-You-Go accounts that are improperly secured and poorly monitored, adversaries can rapidly move across tenants, weaponize ephemeral resources, and manipulate quotas, constructing a resilient and dynamic ecosystem. Their methods blend so seamlessly with legitimate cloud activity that they frequently evade basic threat detection methods, taking full advantage of trusted cloud features to ensure persistence and scale.

The campaigns demonstrate how today's adversaries can transform even a single compromised credential into a sprawling and complex attack across multiple tenants. Attackers no longer simply establish static footholds; instead, every compromised account becomes a possible springboard, every tenant a new beachhead. Their arsenal is thoroughly cloud-native: rapidly deploying short-lived virtual machines, registering OAuth applications for ongoing access, manipulating service quotas to expand their attack infrastructure, and abusing machine learning workspaces for covert activity. The result is an attack ecosystem that's agile, elusive, and built to endure in the fast-moving world of the cloud.

Why are these attacks worth watching?
These attacks represent a strategic evolution in threat actor behavior: blending into legitimate cloud activity, evading traditional detection, and exploiting the very features that drive business agility. The scale, adaptability, and persistence demonstrated in this campaign are a wake-up call: defenders must look beyond the surface, understand the full lifecycle of cloud-native attacks, and be prepared to counter adversaries who are already mastering the art of stealth and scale.

This blog doesn't just recount what happened; it breaks down the anatomy of a cloud-scale attack. Whether you're a security analyst, cloud architect, or threat hunter, the goal is to help you recognize the signs, understand the methods, and prepare your defenses. With the cloud, organizations benefit from scale, global access, and agility. But if not properly secured, those attributes also benefit threat actors.

Resource development: exploiting the weakest links

Microsoft Defender Experts has observed ongoing, large-scale campaigns against Azure environments. Student and Pay-As-You-Go (PAYG) accounts were exploited due to poor security hygiene. These accounts often lacked essential protections: weak or default passwords, no multi-factor authentication (MFA), and no active security monitoring or Defender for Cloud subscription. Initial access was achieved via adversary-in-the-middle (AiTM) attacks or password sprays against Azure User Profile Application (UPA) accounts, commonly using infrastructure hosted by M247 Europe SRL & LTD (New York) and Latitude.

Weaponizing ephemeral infrastructure

Once access was established using a compromised account, the attacker created new Resource Groups and deployed short-lived Virtual Machines (VMs). These VMs ran for as little as 3–4 hours and up to 1–2 days before being deleted. This approach enabled rapid rotation of attack infrastructure, a minimal forensic footprint, and evasion of long-term detection.
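The short VM lifetimes described above lend themselves to a simple detection heuristic: pair each VM's create and delete events and flag lifetimes under a few hours. A minimal Python sketch of that idea, using hypothetical activity-log tuples rather than an actual Azure SDK query (the event names and threshold are illustrative assumptions):

```python
from datetime import datetime, timedelta

# Hypothetical, simplified activity-log records: (vm_name, action, timestamp).
# In practice these would come from AzureActivity / ARM audit logs.
events = [
    ("vm-spray-01", "create", datetime(2025, 6, 1, 2, 0)),
    ("vm-spray-01", "delete", datetime(2025, 6, 1, 5, 30)),  # ~3.5 h lifetime
    ("vm-build-02", "create", datetime(2025, 6, 1, 9, 0)),
    ("vm-build-02", "delete", datetime(2025, 6, 4, 9, 0)),   # 3 days: likely benign
]

def short_lived_vms(events, max_lifetime=timedelta(hours=6)):
    """Pair create/delete events per VM and flag lifetimes under the threshold."""
    created = {}
    flagged = []
    for name, action, ts in events:
        if action == "create":
            created[name] = ts
        elif action == "delete" and name in created:
            if ts - created[name] <= max_lifetime:
                flagged.append(name)
    return flagged

print(short_lived_vms(events))  # → ['vm-spray-01']
```

A real detection would also weight signals such as the creating account's sign-in risk and whether the resource group itself was newly created, as in the campaign above.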
From these ephemeral VMs, large-scale password spray attacks were launched (predominantly using the user agents BAV2ROPC, python-requests/2.32.3, and python-requests/2.32.4) against thousands of accounts across multiple Azure tenants. Operating within Azure's ecosystem helped the campaign stay below conventional alerting thresholds. Alerts that did occur were often dismissed as false positives or benign because they originated from legitimate Azure-associated IP addresses.

Scaling through multi-hop and multitenant techniques

The sophistication of this campaign lies in its multi-hop and multitenant architecture:

- Multi-hop: The attacker used compromised Azure VMs to pivot and launch attacks on other accounts, masking their origin and complicating attribution.
- Multitenant: By controlling multiple Azure tenants, attackers distribute their operations, scale attacks, and maintain resilience against takedowns.

This cross-tenant movement within the Azure environment allows attackers to expand their footprint more easily, making detection more challenging.

Impact: spam, financial fraud, phishing, and sextortion campaigns

Following each successful password spray attack, the campaign expanded across compromised Azure tenants. Using access gained from earlier stages, the attacker repurposed virtual machines within these tenants to send large volumes of phishing and scam emails. These phishing campaigns were carefully crafted to deceive users in compromised tenants, often leading to financial fraud involving URL shorteners like rebrand.ly, which redirected victims to fraudulent, non-work-related websites such as those with personal interest, entertainment, or leisure activity content.
On those fake sites, users were prompted to:

- Complete surveys or questionnaires
- Provide personal information
- Download malicious Android APKs such as FM WhatsApp or Yo WhatsApp

Note: The APK is a re-signed WhatsApp clone trojan that exploits elevated WhatsApp permissions to harvest private data (contacts, files) while mimicking legitimate registration by communicating with official servers to evade detection. Its malicious actions are triggered via commands hosted in a compromised GitHub repo (xiaoqaingkeke/Stat), indicating a GitHub-based C2.

In some cases, victims were lured to enter their mobile numbers for chat services or install additional video calling apps, further expanding the attacker's reach and enabling data harvesting and even extortion.

Persistence and expansion

Privileged access and the infrastructure the attacker compromised, built, and used in this campaign are worthless if the attacker cannot maintain control. To maintain and strengthen their foothold, the adversary deployed multiple persistence mechanisms. Below is a summary of the persistence techniques used by the attacker, as observed by Microsoft Defender Experts across compromised tenants during the investigation.

Abuse of OAuth applications

Once access to an Azure tenant was obtained, the campaign escalated by registering OAuth applications. Two distinct types of applications were observed:

- Azure CLI–themed apps (named like "Azure-CLI-2025-06-DD-HH-MM-SS" and "Azure CLI") were registered with the compromised tenant as owner. The attacker added password credentials and created service principals for these apps to enable persistent backdoors (and even attempted to re-enable a disabled subscription). In one instance, two custom Azure CLI apps were used to authenticate to Azure Databricks so access would survive account disables.
- A malicious custom application named MyNewApp, which was used to send large volumes of phishing emails. The campaign was successfully traced by analyzing Microsoft Graph API calls, which revealed delivery and engagement patterns.

Quota manipulation

To amplify the campaign's infrastructure, the attacker exploited compromised credentials to submit service tickets requesting quota increases for Azure VM families:

- A request was made to raise the quota for the DaV4 VM family to 960 cores across multiple regions.
- A guest account, added during the attack, submitted a similar request for the EIADSv5 VM family.

These actions reflect a deliberate effort to scale up the virtual machine farm, enabling broader password spray operations and phishing campaigns. Notably, the VM farm created by the compromised user was dismantled within three hours, while the farm initiated by the guest account remained active for a full day. This highlights the risk of guest access persistence, which often remains unless explicitly revoked.

Advanced abuse in Azure: ML workspaces, Key Vaults, and beyond

The recent campaign against a poorly managed, monitored, and configured Azure environment was marked by a sophisticated, multi-stage attack that leveraged the elasticity and trusted features of cloud-native infrastructure for stealth and scale. The attacker's operations were not limited to simple credential theft or cross-tenant movement; they demonstrated advanced abuse of Azure's Machine Learning (ML) services, notebook proxies, Key Vaults, and blob storage to automate, persist, and exfiltrate at scale.

ML workspaces and notebook proxies: a stealthy execution layer

The attacker repeatedly created Machine Learning workspaces (Microsoft.MachineLearningServices/workspaces/write) and deployed notebook proxies (Microsoft.Notebooks/NotebookProxies/write) using both compromised user accounts and invited guest identities.
Attackers can abuse Azure ML to run cryptominers or malicious jobs disguised as training, poison or deploy compromised models, use workspaces and notebooks for persistent remote code execution, and exfiltrate data via linked storage. They scale with automated pipelines and quota requests, all while blending into normal AI workflows to evade detection.

Blob storage exploitation: payload staging and data exfiltration

Simultaneously, the attacker provisioned blob storage containers (Microsoft.Storage/storageAccounts/blobServices/containers/write) to stage payloads, host malicious scripts, and store sensitive datasets. The global accessibility and high availability of blob storage made it an ideal channel for covert data exfiltration and operational agility, minimizing the likelihood of detection.

Key Vault manipulation: securing persistence

The creation and modification of Key Vaults (Microsoft.KeyVault/vaults/write) suggests a deliberate effort to store secrets, credentials, and access tokens. That allowed the attacker to automate interactions with other Azure services and maintain long-term persistence. By embedding themselves into the fabric of cloud identity and access management, they ensured continued access even if initial entry points were remediated.

Damage statistics from the campaign controlled by a single attacker machine

The impact? Staggering. In a matter of days, a single attacker machine was able to:

- Target nearly 1.9 million global users and compromise over 51,000 accounts.
- Infiltrate 35 Azure tenants and abuse 36 subscriptions.
- Spin up 154 virtual machines, with 86 used specifically for password spray attacks.
- Raise over 800,000 Defender alerts, flooding security teams and masking true malicious activity.
- Send 2.6 million spam emails.
- Abuse Azure's machine learning services, register malicious OAuth apps, and manipulate quotas to scale up attacks, all while maintaining persistence and evading detection.
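The abuse pattern in this section is recognizable from the combination of resource writes a single identity performs. A minimal Python sketch of that triage idea, using the ARM operation names quoted above (the record shape, threshold, and helper function are illustrative assumptions, not a Microsoft detection):

```python
# Map each abuse surface described in this section to its ARM write operation.
ABUSE_SURFACES = {
    "ml_workspace":   "Microsoft.MachineLearningServices/workspaces/write",
    "notebook_proxy": "Microsoft.Notebooks/NotebookProxies/write",
    "blob_container": "Microsoft.Storage/storageAccounts/blobServices/containers/write",
    "key_vault":      "Microsoft.KeyVault/vaults/write",
}

def touched_surfaces(operations):
    """Return which abuse surfaces appear in one identity's operation list."""
    ops = set(operations)
    return sorted(k for k, v in ABUSE_SURFACES.items() if v in ops)

# Hypothetical operation list for a single caller, e.g. from activity-log export.
caller_ops = [
    "Microsoft.MachineLearningServices/workspaces/write",
    "Microsoft.KeyVault/vaults/write",
    "Microsoft.Storage/storageAccounts/blobServices/containers/write",
]
hits = touched_surfaces(caller_ops)
# An identity touching two or more of these surfaces in a short window warrants review.
print(hits, "-> review" if len(hits) >= 2 else "-> ok")
```

In production this logic belongs in KQL over AzureActivity (as in the mini playbook below), joined against risky sign-ins; the Python form just makes the correlation explicit.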
Recommendations

- Harden identity to prevent attackers from exploiting low-hanging student subscriptions. Enforce MFA and password protection, as many users never enroll in MFA. Investigate and auto-remediate risky users/sign-ins; enable token protection (where available) to reduce the blast radius of stolen cookies. Microsoft's public AiTM guidance consolidates these defenses, and XDR's AiTM disruption revokes cookies and disables users during active compromise.
- Constrain abuse pathways in Azure. Apply least-privilege RBAC, review guest invitations, and monitor for role promotions both on a schedule and via near-real-time analytics, as outlined in Microsoft's subscription compromise post. Watch for subscription directory/transfer changes and couple them with approval-style processes; remember that a transfer can move management (and thus logs) while billing may not change by default.
- Treat quota as a credit limit and instrument alerts for large, fast, or multi-region quota consumption to spot bursts (legitimate or not). Microsoft's ML quota docs explain defaults, VM family splits (e.g., N-series GPUs default to zero), and how to request increases.

If you suspect your subscription is being misused

- Start an investigation using Microsoft's playbooks (password spray) and the hunting queries below; prioritize containment of accounts with risky sign-ins and recent ARM writes.
- If you're a CSP partner, subscribe to Unauthorized Party Abuse (UPA) alerts and follow the documented response steps for compromised Azure subscriptions. These alerts help surface anomalous consumption and abuse earlier.
- Clean up tenants/subscriptions you don't need and understand transfer/cancellation mechanics ("Protect tenants and subscriptions from abuse and fraud attacks"). This both reduces your attack surface and simplifies response.
- Report abuse (e.g., spam, DoS, brute force, malware) observed from Azure IPs or URLs via the MSRC reporting portal; this ensures the platform teams can act on infrastructure being used against others.

A practical hunting mini playbook

1) Azure resource writes, role assignments, etc. (last 24h) from high-risk sign-in accounts

let RiskySignin = SigninLogs
    | where TimeGenerated > ago(24h)
    | where RiskLevelAggregated == "high"
    | project RiskTime = TimeGenerated, UserPrincipalName, IPAddress;
AzureActivity
| where TimeGenerated > ago(24h)
| where OperationNameValue has_any (
    "Microsoft.MachineLearningServices/workspaces/write",
    "Microsoft.MachineLearningServices/workspaces/computes/write",
    "Microsoft.Compute/virtualMachines/extensions/write",
    "Microsoft.Authorization/roleAssignments/write",
    "Microsoft.Resources/subscriptions/resourceGroups/write",
    // Optional: include the VM create/update itself (not just extensions)
    "Microsoft.Compute/virtualMachines/write"
)
or (ActivityStatusValue == "Success" and OperationNameValue == "Microsoft.Subscription/aliases/write")
| extend CallerIP = coalesce(CallerIpAddress, tostring(parse_json(Properties).callerIpAddress))
| join kind=inner (RiskySignin) on $left.Caller == $right.UserPrincipalName
| where TimeGenerated between (RiskTime .. RiskTime + 2h)
| summarize Ops = count(), DistinctOps = dcount(OperationNameValue) by Caller, CallerIP, bin(TimeGenerated, 30m)
| order by Ops desc

2) Azure Activity (Sentinel): support ticket creation before ML service deployment, for quota abuse

// The query below shows risky users writing support tickets that involve quota increases
let RiskySignin = SigninLogs
    | where TimeGenerated > ago(24h)
    | where RiskLevelAggregated == "high"
    | project RiskTime = TimeGenerated, UserPrincipalName, IPAddress;
AzureActivity
| where TimeGenerated > ago(24h)
| where OperationNameValue has_any ("supportTickets/write","usages/write")
| project QuotaTime = TimeGenerated, Caller, CallerIpAddress = tostring(parse_json(Properties).callerIpAddress)
| join kind=inner (RiskySignin) on $left.Caller == $right.UserPrincipalName
| where QuotaTime between (RiskTime .. RiskTime + 2h)

In conclusion

The cloud offers organizations many important benefits. Unfortunately, threat actors are leveraging cloud attributes such as elasticity, scale, and interconnectedness to orchestrate persistent, multitenant attacks that evade traditional defenses. As demonstrated, even a single compromised account can rapidly escalate into a widespread attack, affecting thousands of users and tenants. To counter these evolving threats, defenders must adopt proactive measures: enforce strong identity controls, monitor for suspicious activity, limit privileges, and regularly audit cloud resources. Ultimately, maintaining vigilance and adapting security practices to the dynamic nature of cloud environments such as Azure is essential to protect against increasingly stealthy and scalable adversaries and to make your cloud more secure.
Security is a core focus at Ignite this year, with the Security Forum on November 17, deep-dive technical sessions, theater talks, and hands-on labs designed for security leaders and practitioners.

Featured sessions

BRK237 – Identity Under Siege: Modern ITDR from Microsoft. Join experts in Identity and Security to hear how Microsoft is streamlining collaboration across teams and helping customers better protect, detect, and respond to threats targeting your identity fabric.

BRK240 – Endpoint security in the AI era: What's new in Defender. Discover how Microsoft Defender’s AI-powered endpoint security empowers you to do more, better, faster.

BRK236 – Your SOC’s ally against cyber threats, Microsoft Defender Experts. See how Defender Experts detect, halt, and manage threats for you, with real-world outcomes and demos.

LAB541 – Defend against threats with Microsoft Defender. Get hands-on with Defender for Office 365 and Defender for Endpoint, from onboarding devices to advanced attack mitigation.

Explore and filter the full security catalog by topic, format, and role: aka.ms/SessionCatalogSecurity.

Why attend?

Ignite is the place to learn about the latest Defender capabilities, including new agentic AI integrations and unified threat protection. We will also share future-facing innovations in Defender, as part of our ongoing commitment to autonomous defense.

Security Forum—Make day 0 count (November 17)

Kick off with an immersive, in-person pre-day focused on strategic security discussions and real-world guidance from Microsoft leaders and industry experts. Select Security Forum during registration. Register for Microsoft Ignite >

How Microsoft Defender Experts and partners like Quorum Cyber are redefining cybersecurity teamwork
In today’s rapidly evolving threat landscape, cybersecurity demands more than just great technology—it requires great teamwork. That’s the story behind the collaboration between Microsoft Defender Experts and MXDR partner Quorum Cyber, joining forces to deliver end-to-end threat protection for organizations worldwide.

Microsoft-verified MXDR partner

Microsoft Defender Experts recognized the need for partner-led managed services to complement their first-party MDR (Managed Detection and Response) service. Quorum Cyber is a trusted Microsoft solutions partner and MSSP of the Year. They are also a Microsoft-verified MDR partner, which means they passed Microsoft’s validation process to deliver services using Microsoft’s security technologies. Quorum Cyber complements Microsoft Defender Experts’ MDR services with additional security operations center (SOC) capabilities, extended coverage, non-Microsoft telemetry, and third-party domain expertise.

“Quorum Cyber’s reputation for customer focus and security expertise made them the ideal Microsoft-verified MDR partner.” – Vivek Kumar, Microsoft

“We saw Defender Experts as a way to extend our reach and deliver even more value to customers. It wasn’t about replacing—it was about enhancing.” – Ricky Simpson, Quorum Cyber

Why teamwork matters

The Microsoft-verified MDR partner program was born out of a shared mission: to provide holistic, customer-led security solutions that address the growing security needs of organizations worldwide. Today, cybersecurity needs to be a team sport. Organizations that provide security services, like Microsoft’s Defender Experts and Quorum Cyber, need to join together with customers to defend an ever-expanding attack surface against today’s sophisticated threats.

Facing the modern threat landscape together

From skill shortages to complex attacks, organizations need security providers who can adapt and collaborate.
“Hackers only need to get it right once, while SecOps needs to get it right every time. Customers need an end-to-end security solution to eliminate gaps and strengthen vulnerabilities. No single provider can address the needs of every organization—everywhere. Only teamwork can get the job done.” – Vivek Kumar, Microsoft

Why MDR providers working together matters for CISOs and other security leaders

Meeting real-world challenges

Modern SecOps must navigate an increasingly complex and multifaceted threat landscape. One of the most pressing challenges is the global shortage of cybersecurity professionals. Although the security workforce grew by 9% last year, the gap widened even further, with nearly 4.8 million additional professionals needed to adequately protect organizations.¹ Meanwhile, adversaries are becoming more sophisticated and agile. They work in groups, drawing on many individuals who possess deep domain expertise in executing various attack techniques and tactics. In May 2024 alone, Microsoft Defender XDR detected over 176,000 incidents involving tampering with security settings, impacting more than 5,600 organizations.² That surge in threat activity coincides with a pivotal moment in technological evolution as organizations rapidly scale cloud operations and explore the transformative potential of generative AI. These innovations, while powerful, also expand the attack surface and the likelihood of gaps and vulnerabilities.

Comprehensive coverage across security domains

Microsoft Defender Experts brings deep integration across Microsoft’s ecosystem and manages incidents across Microsoft Defender products (Endpoint, Office 365, Identity, Cloud Apps, and Defender for Cloud/Servers). Quorum Cyber, a Microsoft-verified partner, offers flexibility and specialized coverage that extends beyond Microsoft Defender Experts.
“What is so exciting about this approach is that together, we created a layered defense strategy that’s greater than the sum of its parts and provides coverage for nearly all of the customers’ environment. Microsoft SDM/SecDeliveryExperts worked together with Quorum Cyber to create a nearly seamless, unified defense strategy. They not only help to eliminate the skills gap but are designed to scale easily to address nearly any volume of sophisticated threats.” – Sebastien Molendijk, Microsoft

With shared tooling, real-time communication, and complementary expertise, this teamwork eliminates blind spots and delivers coverage across an environment that includes technology not supported by Defender Experts, such as third-party and legacy systems, custom applications, IoT, firewalls, network gear, and more. Additionally, the combined telemetry from all covered systems, Defender Experts and Quorum Cyber, enriches incident context and improves detection accuracy and hunting.

Real-world impact – Customer success stories

Proactive threat hunting is a core component of Defender Experts. Experts are not just cross-checking Indicators of Compromise (IOCs) against the environment or only validating them against known tactics, techniques, and procedures (TTPs). The hunting approach is differentiated by the 78T signals and hundreds of tracked threat actors. The intelligence informing Microsoft hunts spans nation-state activity, criminal activity, evolving vulnerabilities, and newly observed behaviors. That is something Defender Experts can uniquely provide customers. One of many customer examples of this teamwork involved an organization already engaged with Quorum Cyber MDR for Microsoft E5 services. When Defender Experts engaged with the customer, the two teams co-created a solution tailored to meet the CISO’s needs by combining Quorum Cyber’s analytics and monitoring with Defender Experts’ proactive threat hunting.
That not only expanded coverage but provided the customer with both proactive and reactive services across nearly their entire environment. Another example involves adversary-in-the-middle alerts: Defender Experts performs the investigation of malicious QR codes and then escalates to Quorum Cyber if malicious activity is observed. Quorum Cyber then takes delegated authority to reset the user's password, revoke their sessions, and take other actions as needed.

Unique services

Collaboration is more than Quorum Cyber and Microsoft working as one. Quorum Cyber develops unique services, including their data security service, Clarity Data. This service handles incidents generated via Microsoft Purview DLP and IRM. It includes Quorum Cyber’s 24/7/365 SOC services to address those incidents without interfering with security signals being addressed by other analysts.

Operational flexibility

Customers have the option to divide responsibilities. For example, Microsoft manages Defender-specific alerts while Quorum Cyber manages alerts from all the other tools. Guided response playbooks allow Microsoft Defender Experts and Quorum Cyber teams to work as one to perform containment and remediation across workstreams.

“We built solutions from scratch, keeping customer outcomes at the center. The results are frictionless, powerful security models that address unique customer needs.” – Ricky Simpson, Quorum Cyber

Overcoming challenges, building trust, working as one

Like building any team, there were hurdles. From workflow alignment to incident handoffs, mutual respect and a shared commitment to customer satisfaction paved the way to building frictionless workstreams. Teamwork thrived on technical integration. Because Defender Experts is built atop the Microsoft Defender portal and Microsoft Graph, the service is inherently designed for seamless collaboration.
When Defender Experts assigns incidents, initiates proactive threat hunts, publishes investigation notes, or executes one-click remediation actions, those activities are fully integrated into both the Defender user experience and the Graph API. That integration enables Quorum Cyber to synchronize directly with those workflows, allowing their teams to operate within their existing toolsets while customers receive real-time updates and final resolutions through platforms such as Microsoft Defender, Sentinel, or their ITSM systems. A notable example is the ‘real-time chat’ feature within Defender Experts, which is architected to support joint participation from both customers and partners like Quorum Cyber—ensuring transparency and responsiveness throughout the incident lifecycle. That level of tooling integration is essential to delivering a unified experience. Customers benefit from the deep expertise of Defender Experts, the broad coverage of a trusted partner like Quorum Cyber, and the operational efficiency of a tightly connected security services ecosystem. It truly represents the best of both worlds. “Defender Experts’ use of Microsoft Graph and Defender Portal enabled seamless incident sharing, real-time chat, and synchronized updates across platforms. Live dashboards from Defender Experts offer a clear, prioritized view of incidents. That allowed Defender Experts and Quorum Cyber to work as one team to keep customers secure and do that quickly and efficiently.” – Ricky Simpson, Quorum Cyber The bigger picture – innovation and growth This partnership isn’t just about solving today’s problems—it’s about shaping the future. It has opened doors for Quorum Cyber to expand into new service areas, like managed data security, while reinforcing Microsoft’s commitment to flexible, scalable security solutions. Customers don’t have to choose between Microsoft and their trusted MDR provider like Quorum Cyber—they can have both. 
By combining Microsoft Defender Experts with MDR providers like Quorum Cyber, organizations gain a flexible, scalable, and deeply integrated security strategy that adapts to their unique needs and can grow as they grow. Whether you're augmenting your SOC, expanding global coverage, or navigating a transition, this “better together” model ensures your security operations are resilient, responsive, and ready for what’s next.

“We’ve proven, and our customers agree, that first-party and partner-led services can coexist and thrive together.” – Ricky Simpson, Quorum Cyber

“Customers get the best of both worlds—expertise from Defender Experts and coverage from Quorum Cyber, all delivered as it should be—in a timely and seamless way.” – Vivek Kumar, Microsoft

In summary – Microsoft Defender Experts and Quorum Cyber – the benefits are clear

End-to-End Threat Protection – Combines Microsoft Defender capabilities with Quorum Cyber’s extended SOC services and third-party telemetry.

Comprehensive Coverage – Protects both Microsoft and non-Microsoft environments, including legacy systems, IoT, and custom applications.

Proactive and Reactive Security – Integrates threat hunting with incident response for full-spectrum defense.

Operational Flexibility – Allows tailored division of responsibilities and coordinated remediation through guided playbooks.

Real-Time Collaboration – Enables seamless communication and incident management via shared tooling, dashboards, and chat features.

Advanced Threat Intelligence – Leverages Microsoft’s 78T signals and threat actor tracking, with partner TI, to enrich incident context and improve detection.

Complementary Services – For example, Quorum Cyber’s Clarity Data service handles Microsoft Purview incidents without disrupting other security workflows.

Unified Customer Experience – Delivers frictionless, scalable, and resilient security operations through deep integration and mutual trust.
Learn more

If you like this blog and would like to learn more, see this insightful webinar: The Better Together Story of Defender Experts and Quorum Cyber - Quorum Cyber. And listen to what these experts from Quorum Cyber and Microsoft have to say about the benefits of ‘Better Together’:

Ricky Simpson | LinkedIn
Paul Caiazzo | LinkedIn
Scott McManus | LinkedIn
Raae Wolfram | LinkedIn
Sebastien Molendijk | LinkedIn
Henry Yan | LinkedIn
Vivek Kumar | LinkedIn

Next Steps

For organizations considering a multi-provider strategy, the message is clear: collaboration works. Microsoft Defender Experts and Quorum Cyber show that when service providers align around customer needs, the results are transformative.

“Microsoft Security has got you covered—whether through Defender Experts, partners like Quorum Cyber, or both.” – Vivek Kumar, Microsoft

Ready to strengthen your cyber resilience?

Join the conversation through Microsoft’s public webinar series
Explore the CTI community
Reach out to learn more about how this partnership can support your organization.

Sources
¹ ISC2-2024-Cybersecurity-Workforce-Study
² Microsoft Digital Defense Report 2024

Cloud forensics: Why enabling Microsoft Azure Storage Account logs matters
Co-authors: Christoph Dreymann, Shiva P

Introduction

Azure Storage Accounts are frequently targeted by threat actors. Their goal is to exfiltrate sensitive data to an external infrastructure under their control. Because diagnostic logging is not always fully enabled by default, valuable evidence of their malicious actions may be lost. In this blog, we will explore realistic attack scenarios and demonstrate the types of artifacts those activities generate. By properly enabling Microsoft Azure Storage Account logs, investigators gain a better understanding of the scope of an incident. The information can also provide guidance for remediating the environment and preventing data theft from occurring again.

Storage Account

A Storage Account provides scalable, secure, and highly available storage for storing and managing data objects. Due to the variety of sensitive data that can be stored, it is another highly valued target for threat actors. Threat actors exploit misconfigurations, weak access controls, and leaked credentials to gain unauthorized access. Key risks include Shared Access Signature (SAS) token misuse, which allows threat actors to access or modify exposed blob storage, and Storage Account key exposure, which could grant privileged access to the data plane. Investigating storage-related security incidents requires familiarity with Azure activity logs and diagnostic logs. Diagnostic log types for Storage Accounts are StorageBlob, StorageFile, StorageQueue, and StorageTable. These logs can help identify unusual access patterns, role changes, and unauthorized SAS token generation. This blog is centered around StorageBlob activity logs.

Storage Account logging

The logs for a Storage Account aren’t enabled by default. These logs capture operations, requests, and usage such as read, write, and delete actions/requests on storage objects like blobs, queues, files, or tables.
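Once diagnostic settings are configured, it is worth verifying that these logs are actually arriving before an incident ever occurs. A minimal hedged sanity check (using the StorageBlobLogs table and fields discussed throughout this post; adjust the time window as needed) summarizes recent blob operations:

```kusto
// Sanity check: confirm blob diagnostic logs are flowing and view the
// mix of operations and authentication types per storage account
StorageBlobLogs
| where TimeGenerated > ago(7d)
| summarize Requests = count(), LastSeen = max(TimeGenerated)
    by AccountName, OperationName, AuthenticationType
| order by Requests desc
```

An empty result here usually means the diagnostic setting is missing or not routed to the expected Log Analytics workspace.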
NOTE: There are no license requirements to enable Storage Account logging, but Log Analytics charges based on ingestion and retention (Pricing - Azure Monitor | Microsoft Azure). More information on enabling logging for a Storage Account can be found here.

Notable fields

The log entries contain various fields which are of use not only during or after an incident, but for general monitoring of a storage account during normal operations (for a full list, see what data is available in the Storage Logs). Once storage logging is enabled, one of the key tables within Log Analytics is StorageBlobLogs, which provides details about blob storage operations, including read, write, and delete actions. Key columns such as OperationName, AuthenticationType, StatusText, and UserAgentHeader capture essential information about these activities. The OperationName field identifies operations on a storage account, such as “PutBlob” for uploads or “DeleteBlob” and “DeleteFile” for deletions. The UserAgentHeader field offers valuable insight into the tools used to access blob storage. Accessing blob storage through the Azure portal is typically logged with a generic user agent, which indicates the application used to perform the access, such as a web browser like Mozilla Firefox. In contrast, tools like AzCopy or Microsoft Azure Storage Explorer are explicitly identified in the logs. Analyzing the UserAgentHeader provides crucial details about the access method, helping determine how the blob storage was accessed.

The following table includes additional investigation fields:

Field name – Description
TimeGenerated [UTC] – The date and time of the operation request.
AccountName – Name of the Storage account.
OperationName – Name of the operation. A detailed list of StorageBlob operations can be found here.
AuthenticationType – The type of authentication that was used to make this request.
StatusCode – The HTTP status code for the request. If the request is interrupted, this value might be set to Unknown.
StatusText – The status of the requested operation.
Uri – Uniform resource identifier that is requested.
CallerIpAddress – The IP address of the requester, including the port number.
UserAgentHeader – The User-Agent header value.
ObjectKey – Provides the path of the object requested.
RequesterUpn – User Principal Name of the requester.
AuthenticationHash – Hash of the authentication token used during a request. A request authenticated with a SAS token includes a SAS signature specifying the hash derived from the signature part of the SAS token.

For a full list, see what data is available in the Storage Logs.

How a threat actor can access a Storage Account

Threat actors can access a Storage Account through Azure-assigned RBAC, a SAS token (including a user delegation SAS token), Azure Storage Account keys, and anonymous access (if configured).

Storage Account Access Keys

Azure Storage Account access keys are shared secrets that enable full access to Azure storage resources. When creating a storage account, Azure generates two access keys; both can be used for authentication with the storage account. These keys are permanent and do not have an expiration date. Both Storage Account Owners and roles such as Contributor, or any other role with the assigned action Microsoft.Storage/storageAccounts/listKeys/action, can retrieve and use these credentials to access the storage account. Account access keys can be rotated/regenerated, but if done unintentionally, this could disrupt applications or services that depend on the key for authentication. Additionally, this action invalidates any SAS tokens derived from that key, potentially revoking access to dependent workflows. Monitoring key rotations can help detect unexpected changes and mitigate disruptions.
Query: This query can help identify instances of account key rotations in the logs.

AzureActivity
| where OperationNameValue has "MICROSOFT.STORAGE/STORAGEACCOUNTS/REGENERATEKEY/ACTION"
| where ActivityStatusValue has "Start"
| extend resource = parse_json(todynamic(Properties).resource)
| extend requestBody = parse_json(todynamic(Properties).requestbody)
| project TimeGenerated, OperationNameValue, resource, requestBody, Caller, CallerIpAddress

Shared Access Signature

SAS tokens offer a granular method for controlling access to Azure storage resources. SAS tokens define the specific permitted actions on a resource and their duration. They can be generated for blobs, queues, tables, and file shares within a storage account, providing precise control over data access. A SAS token allows access via a signed URL. A Storage Account Owner can generate SAS tokens and connection strings for various resources within the storage account (e.g., blobs, containers, tables) without restrictions. Additionally, roles with Microsoft.Storage/storageAccounts/listKeys/action rights can also generate SAS tokens. SAS tokens enable access to storage resources using tools such as Azure Storage Explorer, Azure CLI, or PowerShell. It is important to note that the logs do not indicate when a SAS token was generated [How a shared access signature works]. However, their usage can be inferred by tracking configuration changes that enable the storage account key access option, which is disabled by default.
Figure 1: Configuration setting to enable account key access

Query: This query is designed to detect configuration changes made to enable access via storage account keys.

AzureActivity
| where OperationNameValue has "MICROSOFT.STORAGE/STORAGEACCOUNTS/WRITE"
| where ActivityStatusValue has "Success"
| extend allowSharedKeyAccess = parse_json(tostring(parse_json(tostring(parse_json(Properties).responseBody)).properties)).allowSharedKeyAccess
| where allowSharedKeyAccess == "true"

User delegated Shared Access Signature

A User Delegation SAS is a type of SAS token that is secured using Microsoft Entra ID credentials rather than the storage account key. For more details, see Authorize a user delegation SAS. To request a SAS token using the user delegation key, the identity must possess the Microsoft.Storage/storageAccounts/blobServices/generateUserDelegationKey action (see Assign permissions with RBAC).

Azure Role-Based Access Control

A threat actor must identify a target (an identity) that can assign roles or already holds specific RBAC roles. To assign Azure RBAC roles, an identity must have Microsoft.Authorization/roleAssignments/write, which allows the assignment of roles necessary for accessing storage accounts. Some examples of roles that provide permissions to access data within a Storage Account (see Azure built-in roles for blob):

Storage Account Contributor (Read, Write, Manage Access)
Storage Blob Data Contributor (Read, Write)
Storage Blob Data Owner (Read, Write, Manage Access)
Storage Blob Data Reader (Read Only)

Additionally, to access blob data in the Azure portal, a user must also be assigned the Reader role (see Assign an Azure role). More information about Azure built-in roles for a Storage Account can be found here: Azure built-in roles for Storage.
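To complement the role review above, OAuth-authenticated blob reads in StorageBlobLogs show data-plane access that was authorized through these RBAC roles. A hedged sketch, using the table and field names described earlier in this post:

```kusto
// Sketch: blob downloads/reads authenticated via Entra ID (OAuth),
// i.e., authorized through assigned Azure RBAC roles
StorageBlobLogs
| where OperationName has "GetBlob"
| where AuthenticationType has "OAuth"
| project TimeGenerated, AccountName, RequesterUpn, OperationName, Uri, ObjectKey, CallerIpAddress, UserAgentHeader
```

Because OAuth access is tied to an identity, the RequesterUpn field here can be cross-referenced against Entra ID sign-in logs for risk signals.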
Anonymous Access

If the storage account configuration 'Allow Blob anonymous access' is enabled and a container is created with anonymous read access, a threat actor could access the storage contents from the internet without any authorization.

Figure 2: Configuration settings for Blob anonymous access and container-level anonymous access.

Query: This query helps identify successful configuration changes to enable anonymous access.

AzureActivity
| join kind=rightouter (
    AzureActivity
    | where TimeGenerated > ago(30d)
    | where OperationNameValue has "MICROSOFT.STORAGE/STORAGEACCOUNTS/WRITE"
    | where Properties has "allowBlobPublicAccess"
    | extend ProperTies = parse_json(Properties)
    | evaluate bag_unpack(ProperTies)
    | extend allowBlobPublicAccess = todynamic(requestbody).properties["allowBlobPublicAccess"]
    | where allowBlobPublicAccess has "true"
    | summarize by CorrelationId
  ) on CorrelationId
| extend ProperTies = parse_json(Properties)
| evaluate bag_unpack(ProperTies)
| extend allowBlobPublicAccess_req = todynamic(requestbody).properties["allowBlobPublicAccess"]
| extend allowBlobPublicAccess_res = todynamic(responseBody).properties["allowBlobPublicAccess"]
| extend allowBlobPublicAccess = case(allowBlobPublicAccess_req != "", allowBlobPublicAccess_req, allowBlobPublicAccess_res != "", allowBlobPublicAccess_res, "")
| project OperationNameValue, ActivityStatusValue, ResourceGroup, allowBlobPublicAccess, Caller, CallerIpAddress, ResourceProviderValue

Key notes regarding the authentication methods

When a user accesses Azure Blob Storage via the Azure portal, the interaction is authenticated using OAuth and authorized by the Azure RBAC role configuration for the given user. In contrast, authentication using Azure Storage Explorer and AzCopy depends on the method used: if a user interactively signs in via the Azure portal or utilizes the device code flow, authentication appears as OAuth-based.
When using a SAS token, authentication is recorded as SAS-based for both tools. Access via Azure RBAC is logged in Entra ID sign-in logs; however, activity related to SAS token usage does not appear in the sign-in logs, as a SAS provides pre-authorized access. Log analysis should consider all operations, since initial actions can reveal the true authentication method: even OAuth-based access may show as SAS in later log entries. The screenshot below illustrates three distinct cases, each showcasing a different pattern of authentication types used when accessing storage resources. In the first example, a SAS token is consistently used across various operations, making the SAS token the primary access method. The example highlighted as ‘2’ demonstrates a similar pattern, with OAuth (using an assigned Azure RBAC role) serving as the primary authentication method for all listed operations. Lastly, in example ‘3’, operations start with OAuth authentication (using an assigned Azure RBAC role for authorization) and then switch to a SAS token, indicating mixed authentication types.

Figure 3: Different patterns of authentication types

Additionally, when using certain applications such as Azure Storage Explorer with Account Access Keys authentication, the initial operations such as ListContainers and ListBlobs are logged with the authentication type reported as “AccountKey”. However, for subsequent actions like file uploads or downloads, the authentication type switches to SAS in the logs. To accurately determine whether an Account Access Key or SAS was used, it's important to correlate these actions with the earlier enumeration or sync activity within the logs. With this understanding, let’s proceed to analyze specific attack scenarios by utilizing Log Analytics tables such as StorageBlobLogs.

Attack scenario

This section will examine the typical steps that a threat actor might take when targeting a Storage Account.
We will specifically focus on the Azure Resource Manager layer, where Azure RBAC initially dictates what a threat actor can discover.

Enumeration

During enumeration, a threat actor’s goal is to map out the available storage accounts. The range of this discovery is determined by the access privileges of the compromised identity. If that identity holds at least a minimum level of access (similar to a Reader) at the subscription level, it can view storage account resources without making any modifications. Importantly, this permission level does not grant access to the actual data stored within Azure Storage itself. Hence, a threat actor is limited to interacting only with those storage accounts that are visible to them. To access and download files from Blob Storage, a threat actor must know the names of containers (Operation: ListContainers) and the files within those containers (Operation: ListBlobs). All interactions with these storage elements are recorded in the StorageBlobLogs table. Containers, or blobs within a container, can be listed by a threat actor with the appropriate access rights. If access is not authorized, attempts to do so will result in error codes shown in the StatusCode field. A high number of unauthorized attempts resulting in errors is a key indicator of suspicious activity or misconfiguration.
Figure 4: Failed attempts to list blobs/containers

Query: This query serves as a starting point for detecting a spike in unauthorized attempts to enumerate containers, blobs, files, or queues.

union Storage*
| extend StatusCodeLong = tolong(StatusCode)
| where OperationName has_any ("ListBlobs", "ListContainers", "ListFiles", "ListQueues")
| summarize MinTime = min(TimeGenerated),
            MaxTime = max(TimeGenerated),
            OperationCount = count(),
            UnauthorizedAccess = countif(StatusCodeLong >= 400),
            OperationNames = make_set(OperationName),
            ErrorStatusCodes = make_set_if(StatusCode, StatusCodeLong >= 400),
            StorageAccountName = make_set(AccountName) by CallerIpAddress
| where UnauthorizedAccess > 0

Note: The UnauthorizedAccess filter threshold must be adjusted based on your environment.

Data exfiltration

Let’s use the StorageBlobLogs table to analyze two different attack scenarios.

Scenario 1: Compromised user has access to a storage account

In this scenario, the threat actor either compromises a user account with access to one or more storage accounts or obtains a leaked Account Access Key or SAS token. With a compromised identity, the threat actor can either enumerate all storage accounts the user has permissions to (as covered in enumeration) or directly access a specific blob or container if the leaked key grants scoped access.

Account Access Keys (AccountKey)/SAS tokens

The threat actor might use the storage account’s access keys or a SAS token retrieved through the compromised user account (provided it has the appropriate permissions), or the leaked credential itself may already be an Account Access Key or SAS token. Access keys grant complete control, while a SAS token grants time-bound access, authorizing data transfers that enable the threat actor to view, upload, or download data at will.
Figure 5: Account key used to download/view data

Figure 6: SAS token used to download/view data

Query: This query helps identify cases where an AccountKey/SAS was used to download/view data from a storage account.

StorageBlobLogs
| where OperationName has "GetBlob"
| where AuthenticationType in~ ("AccountKey", "SAS")
| where StatusText in~ ("Success", "AnonymousSuccess", "SASSuccess")
| project TimeGenerated, AccountName, OperationName, RequesterUpn, AuthenticationType, Uri, ObjectKey, StatusText, UserAgentHeader, CallerIpAddress, AuthenticationHash

User Delegation SAS

Available for Blob Storage only, a User Delegation SAS functions similarly to a SAS token but is secured with Microsoft Entra ID credentials rather than the storage account key. The creation of a User Delegation SAS is tracked as a corresponding "GetUserDelegationKey" entry in the StorageBlobLogs table.

Figure 7: User-Delegation Key created

Query: This query helps identify creation of a User-Delegation Key. The RequesterUpn field provides the identity of the user account creating this key.

StorageBlobLogs
| where OperationName has "GetUserDelegationKey"
| where StatusText in~ ("Success", "AnonymousSuccess", "SASSuccess")
| project TimeGenerated, AccountName, OperationName, RequesterUpn, Uri, CallerIpAddress, ObjectKey, AuthenticationType, StatusCode, StatusText

Figure 8: User-Delegation activity to download/read

Query: This query helps identify cases where a download/read action was initiated while authenticated via a User Delegation SAS.

StorageBlobLogs
| where AuthenticationType has "DelegationSas"
| where OperationName has "GetBlob"
| where StatusText in~ ("Success", "AnonymousSuccess", "SASSuccess")
| project Type, TimeGenerated, OperationName, AccountName, UserAgentHeader, ObjectKey, AuthenticationType, StatusCode, CallerIpAddress, Uri

The operation "GetUserDelegationKey" within StorageBlobLogs captures the identity responsible for generating a User Delegation SAS token.
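The SAS signature embedded in the AuthenticationHash field (used in the correlation query covered next) can also be pulled out offline with the same regular expression. A minimal Python sketch; the sample field value is hypothetical, as real AuthenticationHash contents vary:

```python
import re

def extract_sas_signature(auth_hash):
    """Pull the SasSignature(...) value out of an AuthenticationHash-style field."""
    match = re.search(r"SasSignature\((.*?)\)", auth_hash)
    return match.group(1) if match else None

# Hypothetical field value for illustration only.
sample = "UserDelegationKey(abc123),SasSignature(9f8e7d6c)"
print(extract_sas_signature(sample))  # 9f8e7d6c
```

Grouping exported log rows by this signature lets you tie together every operation performed with one token, even across different source IPs.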
The AuthenticationHash field shows the key used to sign the SAS token. When the SAS token is used, any operations will include the same SAS signature hash, enabling you to correlate the various actions performed using this token even if the originating IP addresses differ.

Query: The following query extracts the SAS signature hash from the AuthenticationHash field. This helps track the token's usage, providing an audit trail to identify potentially malicious activity.

StorageBlobLogs
| where AuthenticationType has "DelegationSas"
| extend SasSHASignature = extract(@"SasSignature\((.*?)\)", 1, AuthenticationHash)
| project Type, TimeGenerated, OperationName, AccountName, UserAgentHeader, ObjectKey, AuthenticationType, StatusCode, CallerIpAddress, SasSHASignature

In the next scenario, we examine how a threat actor already in control of a compromised identity uses Azure RBAC to assign permissions. With administrative privileges over a storage account, the threat actor can grant access to additional accounts and establish long-term access to the storage accounts.

Scenario 2: A user account controlled by the threat actor has elevated access to the Storage Account

An identity named Bob was identified as compromised due to a login from an unauthorized IP. The investigation is triggered when Azure sign-in logs reveal logins originating from an unexpected location. This account has Owner permissions on a resource group, allowing full access and role assignments in Azure RBAC. The threat actor grants access to another account they control, as shown in the AzureActivity logs.
The AzureActivity logs in the figure below show that the "Reader and Data Access" and "Storage Account Contributor" roles were assigned to Hacker2 for a Storage Account within Azure:

Figure 9: Assigning a role to a user

Query: This query helps identify if a role has been assigned to a user.

AzureActivity
| where Caller has "Bob"
| where OperationNameValue has "MICROSOFT.AUTHORIZATION/ROLEASSIGNMENTS/WRITE"
| extend RoleDefinitionIDProperties = parse_json(Properties)
| evaluate bag_unpack(RoleDefinitionIDProperties)
| extend RoleDefinitionIdExtracted = tostring(todynamic(requestbody).Properties.RoleDefinitionId)
| extend RoleDefinitionIdExtracted = extract(@"roleDefinitions/([a-f0-9-]+)", 1, RoleDefinitionIdExtracted)
| extend RequestedRole = case(
    RoleDefinitionIdExtracted == "ba92f5b4-2d11-453d-a403-e96b0029c9fe", "Storage Blob Data Contributor",
    RoleDefinitionIdExtracted == "b7e6dc6d-f1e8-4753-8033-0f276bb0955b", "Storage Blob Data Owner",
    RoleDefinitionIdExtracted == "2a2b9908-6ea1-4ae2-8e65-a410df84e7d1", "Storage Blob Data Reader",
    RoleDefinitionIdExtracted == "db58b8e5-c6ad-4a2a-8342-4190687cbf4a", "Storage Blob Delegator",
    RoleDefinitionIdExtracted == "c12c1c16-33a1-487b-954d-41c89c60f349", "Reader and Data Access",
    RoleDefinitionIdExtracted == "17d1049b-9a84-46fb-8f53-869881c3d3ab", "Storage Account Contributor",
    "")
| extend roleAssignmentScope = tostring(todynamic(Authorization_d).evidence.roleAssignmentScope)
| extend AuthorizedFor = tostring(todynamic(requestbody).Properties.PrincipalId)
| extend AuthorizedType = tostring(todynamic(requestbody).Properties.PrincipalType)
| project TimeGenerated, RequestedRole, roleAssignmentScope, ActivityStatusValue, Caller, CallerIpAddress, CategoryValue, ResourceProviderValue, AuthorizedFor, AuthorizedType

Note: Refer to this resource for additional Azure built-in role IDs that can be used in this query.

The sign-in logs indicate that Hacker2 successfully accessed Azure from the same malicious IP address.
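The GUID-to-name mapping inside the query's case() statement is just a lookup table over Azure's built-in role definition IDs, and can be reused when reviewing exported AzureActivity records. A Python sketch (the subscription GUID in the sample path is a placeholder):

```python
import re

# Built-in Azure role definition IDs, as used in the KQL case() statement above.
STORAGE_ROLE_IDS = {
    "ba92f5b4-2d11-453d-a403-e96b0029c9fe": "Storage Blob Data Contributor",
    "b7e6dc6d-f1e8-4753-8033-0f276bb0955b": "Storage Blob Data Owner",
    "2a2b9908-6ea1-4ae2-8e65-a410df84e7d1": "Storage Blob Data Reader",
    "db58b8e5-c6ad-4a2a-8342-4190687cbf4a": "Storage Blob Delegator",
    "c12c1c16-33a1-487b-954d-41c89c60f349": "Reader and Data Access",
    "17d1049b-9a84-46fb-8f53-869881c3d3ab": "Storage Account Contributor",
}

def resolve_role(role_definition_id):
    """Map a roleDefinitions path to a friendly role name, mirroring the KQL extract() + case()."""
    match = re.search(r"roleDefinitions/([a-f0-9-]+)", role_definition_id)
    return STORAGE_ROLE_IDS.get(match.group(1), "") if match else ""

path = ("/subscriptions/00000000-0000-0000-0000-000000000000/providers/"
        "Microsoft.Authorization/roleDefinitions/17d1049b-9a84-46fb-8f53-869881c3d3ab")
print(resolve_role(path))  # Storage Account Contributor
```

Unknown role IDs resolve to an empty string, matching the fall-through branch of the KQL case() expression.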
We can examine StorageBlobLogs to determine whether the user accessed data in blob storage after the Storage Account roles were assigned to them. The activities within blob storage show several entries attributed to the Hacker2 user, as shown in the figure below.

Figure 10: User access to blob storage

Query: This query helps identify access to blob storage from a malicious IP.

StorageBlobLogs
| where TimeGenerated > ago(30d)
| where CallerIpAddress has {{IPv4}}
| extend ObjectName = ObjectKey
| project TimeGenerated, AccountName, OperationName, AuthenticationType, StatusCode, StatusText, RequesterUpn, CallerIpAddress, UserAgentHeader, ObjectName, Category

An analysis of the StorageBlobLogs, as shown in the figure below, reveals that Hacker2 performed a "StorageRead" operation on three files. This indicates that data was accessed or downloaded from blob storage.

Figure 11: Blob Storage Read/Download activities

The UserAgentHeader suggests that the storage account was accessed through the Azure portal. Consequently, the SignInLogs can offer further detailed information.
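This inference of access method from the UserAgentHeader can be expressed as a small classifier. Below is a hedged Python sketch mirroring the classification used in the query that follows; the user-agent substrings are the same ones the KQL checks, and the fallback to "Azure portal" is the same assumption the query makes:

```python
def classify_blob_access(operation, category, user_agent):
    """Infer the tool behind a blob operation from its UserAgentHeader."""
    actions = {
        ("GetBlob", "StorageRead"): "Read/Download",
        ("PutBlob", "StorageWrite"): "written",
        ("DeleteBlob", "StorageDelete"): "deleted",
    }
    action = actions.get((operation, category))
    if action is None:
        return ""
    if "Microsoft Azure Storage Explorer" in user_agent:
        tool = "Azure Storage Explorer"
    elif "AzCopy" in user_agent:
        tool = "AzCopy Command"
    else:
        tool = "Azure portal"  # assumption: anything else came via the portal
    return f"Blob was {action} through {tool}"

print(classify_blob_access("GetBlob", "StorageRead", "AzCopy/10.x"))
# Blob was Read/Download through AzCopy Command
```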
Query: This query checks for read, write, or delete operations in blob storage and identifies their access methods.

StorageBlobLogs
| where TimeGenerated > ago(30d)
| where CallerIpAddress has {{IPv4}}
| where OperationName has_any ("PutBlob", "GetBlob", "DeleteBlob") and StatusText == "Success"
| extend Notes = case(
    OperationName == "PutBlob" and Category == "StorageWrite" and UserAgentHeader has "Microsoft Azure Storage Explorer", "Blob was written through Azure Storage Explorer",
    OperationName == "PutBlob" and Category == "StorageWrite" and UserAgentHeader has "AzCopy", "Blob was written through AzCopy Command",
    OperationName == "PutBlob" and Category == "StorageWrite" and not(UserAgentHeader has_any("AzCopy", "Microsoft Azure Storage Explorer")), "Blob was written through Azure portal",
    OperationName == "GetBlob" and Category == "StorageRead" and UserAgentHeader has "Microsoft Azure Storage Explorer", "Blob was Read/Download through Azure Storage Explorer",
    OperationName == "GetBlob" and Category == "StorageRead" and UserAgentHeader has "AzCopy", "Blob was Read/Download through AzCopy Command",
    OperationName == "GetBlob" and Category == "StorageRead" and not(UserAgentHeader has_any("AzCopy", "Microsoft Azure Storage Explorer")), "Blob was Read/Download through Azure portal",
    OperationName == "DeleteBlob" and Category == "StorageDelete" and UserAgentHeader has "Microsoft Azure Storage Explorer", "Blob was deleted through Azure Storage Explorer",
    OperationName == "DeleteBlob" and Category == "StorageDelete" and UserAgentHeader has "AzCopy", "Blob was deleted through AzCopy Command",
    OperationName == "DeleteBlob" and Category == "StorageDelete" and not(UserAgentHeader has_any("AzCopy", "Microsoft Azure Storage Explorer")), "Blob was deleted through Azure portal",
    "")
| project TimeGenerated, AccountName, OperationName, AuthenticationType, StatusCode, CallerIpAddress, ObjectName = ObjectKey, Category, RequesterUpn, Notes

The log analysis confirms that the threat actor successfully extracted data from a storage account.

Storage Account summary

Detecting misuse within a Storage Account can be challenging, as routine operations may hide malicious activities. However, enabling logging is essential for investigations, helping to track access, especially when compromised identities or misused SAS tokens or keys are involved. Unusual changes in user permissions and irregularities in role assignments, which are documented in the Azure Activity logs, can signal unauthorized access, while Microsoft Entra ID sign-in logs can help identify compromised UPNs and suspicious IP addresses that tie into OAuth-based storage account access. By thoroughly analyzing Storage Account logs, which detail operation types and access methods, investigators can identify abuse and determine the scope of compromise. That not only helps when remediating the environment but can also provide guidance on preventing unauthorized data theft from occurring again.

Post-breach browser abuse: a new frontier for threat actors
Co-authors: Raae Wolfram | Sam Gardener

Once an attacker has gained access to a system, the browser becomes a rich source of credentials, a platform for persistence, and a stealthy channel for data exfiltration. This blog outlines key abuse techniques and provides actionable detection strategies using Microsoft Defender for Endpoint and Microsoft Defender XDR.

Why browsers matter after the breach

Post-compromise, browsers offer attackers:

- Access to credentials (cookies, tokens, autofill data)
- Control over peripherals (camera, microphone, location)
- A trusted execution environment for evasion
- A platform for persistence via extensions or debugging interfaces

These capabilities make browsers a high-value target even after initial access has been achieved.

Key abuse techniques and detection strategies

1. Credential theft via memory scraping

Attackers can extract sensitive data directly from browser memory using tools like Mimikittenz. Security team members can proactively hunt for threats with advanced hunting in Microsoft Defender.

Advanced hunting detection query:

let PROCESS_VM_READ = 0x0010;
DeviceEvents
| where ActionType == "OpenProcessApiCall" and FileName in~ ("chrome.exe", "msedge.exe", "firefox.exe", "brave.exe", "opera.exe")
| project FileName, InitiatingProcessFileName, DesiredAccess = tolong(parse_json(AdditionalFields).DesiredAccess)
| where binary_and(DesiredAccess, PROCESS_VM_READ) != 0

Learn more about hunting queries: Overview - Advanced hunting - Microsoft Defender XDR | Microsoft Learn

2. TLS key logging for passive credential capture

Setting the SSLKEYLOGFILE environment variable allows attackers to dump TLS pre-master secrets, enabling decryption of HTTPS traffic.

Detection query:

DeviceRegistryEvents
| where RegistryKey =~ @"SYSTEM\CurrentControlSet\Control\Session Manager\Environment" and RegistryValueName =~ "SSLKEYLOGFILE"

3. Remote debugging port abuse

Chromium-based browsers support remote debugging via WebSocket.
Attackers can launch browsers with flags like --remote-debugging-port and control them programmatically.

Detection queries:

DeviceProcessEvents
| where FileName in~ ("chrome.exe", "msedge.exe", "brave.exe", "opera.exe") and ProcessCommandLine contains "--remote"

DeviceNetworkEvents
| where RemotePort in (9222, 9223, 9229)
| where RemoteIP == "127.0.0.1"
| where InitiatingProcessFileName !in~ ("chrome.exe", "msedge.exe", "brave.exe", "opera.exe")

DeviceProcessEvents
| where FileName has_any ("chrome", "msedge", "brave", "opera") and ProcessCommandLine contains "--remote"

4. Persistence via malicious extensions

Attackers can sideload or auto-update malicious extensions using enterprise policies or developer mode.

Detection queries:

DeviceProcessEvents
| where ProcessCommandLine has "--load-extension"
| where FileName in~ ("chrome.exe", "msedge.exe")

DeviceRegistryEvents
| where RegistryKey has "ExtensionInstallForcelist"
| where RegistryValueData has_any ("http", "crx")

5. Anomalous child process spawning

Unexpected child processes from browsers may indicate injection, persistence, or evasion.

Detection query:

DeviceProcessEvents
| where InitiatingProcessFileName in~ ("chrome.exe", "msedge.exe", "firefox.exe", "brave.exe", "opera.exe")
| where FileName !in~ ("chrome.exe", "msedge.exe", "firefox.exe")

Recommendations for defenders:

- Monitor for debugging flags in browser launch commands.
- Alert on unexpected registry or file modifications related to extensions.
- Track environment variable usage that affects browser behavior.
- Investigate RWX memory pages in browser processes.
- Use Defender for Endpoint to correlate these signals with broader attack chains.

Conclusion

Post-breach browser abuse is a growing concern that blends stealth, persistence, and credential access into a single threat vector. By understanding these techniques and implementing the detection strategies outlined above, defenders can close a critical visibility gap and better protect their environments.
See what our experts have to say. Watch the recorded webinar, download the presentation, and learn more: Post-Breach Browsers: The Hidden Threat You're Overlooking.

Elevate your protection with expanded Microsoft Defender Experts coverage
Co-authors: Henry Yan, Sr. Product Marketing Manager, and Sylvie Liu, Principal Product Manager

Security Operations Centers (SOCs) are under extreme pressure due to a rapidly evolving threat landscape, an increase in the volume and frequency of attacks driven by AI, and a widening skills gap. To address these challenges, organizations across industries are relying on Microsoft Defender Experts for XDR and Microsoft Defender Experts for Hunting to bolster their SOC and stay ahead of emerging threats. We are committed to continuously enhancing Microsoft Defender Experts services to help our customers safeguard their organizations and focus on what matters most.

We are excited to announce the general availability of expanded Defender Experts coverage. With this update, Defender Experts for XDR and Defender Experts for Hunting now deliver around-the-clock protection and proactive threat hunting for your cloud workloads, starting with hybrid and multicloud servers in Microsoft Defender for Cloud. Additionally, third-party network signals from Palo Alto Networks, Zscaler, and Fortinet can now be used for incident enrichment in Defender Experts for XDR, enabling faster and more accurate detection and response.

Extend 24/7, expert-led defense and threat hunting to your hybrid and multicloud servers

As cloud adoption accelerates, the sophistication and frequency of cloud attacks are on the rise. According to IDC, in 2024 organizations experienced an average of more than nine cloud security incidents, with 89% reporting an increase year over year. Furthermore, cloud security is the leading skills gap, with almost 40% of respondents in the O'Reilly 2024 State of Security Survey identifying it as the top area in need of skilled professionals.

Virtual machines (VMs) are the backbone of cloud infrastructure, used to run critical applications with sensitive data while offering flexibility, efficiency, and scalability.
This makes them attractive targets for attackers, as compromised VMs can potentially be used to carry out malicious activities such as data exfiltration, lateral movement, and resource exploitation.

Defender Experts for XDR now delivers 24/7, expert-led managed extended detection and response (MXDR) for your hybrid and multicloud servers in Defender for Cloud. Our security analysts will investigate, triage, and respond to alerts on your on-premises and cloud VMs across Microsoft Azure, Amazon Web Services, and Google Cloud Platform. With Defender Experts for Hunting, which is included in Defender Experts for XDR and also available as a standalone service, our expert threat hunters will now be able to hunt across hybrid and multicloud servers in addition to endpoints, identities, emails, and cloud apps, reducing blind spots and uncovering emerging cloud threats.

Figure 1: Incidents from servers in Defender for Cloud investigated by Defender Experts

Incident enrichment for improved detection accuracy and faster response

By enriching Defender incidents with third-party network signals from Palo Alto Networks (PAN-OS Firewall), Zscaler (Zscaler Internet Access and Zscaler Private Access), and Fortinet (FortiGate Next-Generation Firewall), our security analysts gain deeper insights into attack paths. The additional context helps Defender Experts for XDR identify patterns and connections across domains, enabling more accurate detection and faster response to threats.

Figure 2: Third-party enrichment data in Defender Experts for XDR report

In this hypothetical scenario, we explore how incident enrichment with third-party network signals helped Defender Experts for XDR uncover lateral movement and potential data exfiltration attempts.
Detection: Microsoft Defender for Identity flagged an "Atypical Travel" alert for User A, showing sign-ins from India and Germany within a short timeframe using different devices and IPs, suggesting possible credential compromise or session hijacking. However, initial identity and cloud reviews showed no signs of malicious activity.

Correlation: Through incident enrichment with third-party network signals, Palo Alto firewall logs revealed attempts to access unauthorized remote tools, while Zscaler proxy data showed encrypted traffic to an unprotected legacy SharePoint server.

Investigation: Our security analysts uncovered that the attacker authenticated from a managed mobile device in Germany. Due to token reuse and a misconfigured Mobile Device Management profile, the device passed posture checks and bypassed Conditional Access, enabling access to internal SharePoint. Insights from third-party network signals helped Defender Experts for XDR confirm lateral movement and potential data exfiltration.

Response: Once malicious access was confirmed, Defender Experts for XDR initiated a coordinated response, revoking active tokens, isolating affected devices, and hardening mobile policies to enforce Conditional Access.

Flexible, cost-effective pricing

Defender Experts coverage of servers in Defender for Cloud is priced per server per month, with charges based on the total number of server hours each month. You have the flexibility to scale your servers as needed while keeping costs in check, because you only pay for Defender Experts coverage based on the resources you use. For example, if you have a total of 4,000 hours across all servers protected by Defender for Cloud in June (June has a total of 720 hours), you will be charged for 5.56 servers that month (4000/720 = 5.56). There is no additional charge for third-party network signal enrichment beyond the data ingestion charge through Microsoft Sentinel.
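The pricing example above is a simple proration of aggregate server hours over the hours in the month. A minimal sketch; the rounding to two decimals matches the 5.56 figure in the example but is an assumption about how fractional servers are reported:

```python
def billed_servers(total_server_hours, hours_in_month):
    """Convert aggregate server hours into a billed server count, rounded to two decimals."""
    return round(total_server_hours / hours_in_month, 2)

# June: 30 days * 24 hours = 720 hours in the month.
print(billed_servers(4000, 720))  # 5.56
```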
Please contact your Microsoft account representative for more information on pricing.

Get started today

Defender Experts coverage of servers in Defender for Cloud will be available as an add-on to Defender Experts for XDR and Defender Experts for Hunting. To enable coverage, you must have the following:

- A Defender Experts for XDR or Defender Experts for Hunting license
- Defender for Servers Plan 1 or Plan 2 in Defender for Cloud

You only need a minimum of one Defender Experts for XDR or Defender Experts for Hunting license to enable coverage of all your servers in Defender for Cloud. If you are interested in purchasing Defender Experts for XDR or the add-on for Defender Experts coverage of servers in Defender for Cloud, please complete this interest form.

Third-party network signals for enrichment are available only for Defender Experts for XDR customers. To enable third-party network signals for enrichment, you must have the following:

- A Microsoft Sentinel instance deployed
- Microsoft Sentinel onboarded to the Microsoft Defender portal
- At least one of the supported network signals ingested through Sentinel built-in connectors:
  - Palo Alto Networks (PAN-OS Firewall)
  - Zscaler (Zscaler Internet Access and Zscaler Private Access)
  - Fortinet (FortiGate Next-Generation Firewall)

If you are an existing Defender Experts for XDR customer and are interested in enabling third-party network signals for enrichment, please reach out to your Service Delivery Manager.

Learn more

- Technical Documentation
- Microsoft Defender Experts for XDR
- Microsoft Defender Experts for Hunting
- Third-party network signals for enrichment
- Plan Defender for Servers deployment
- Defender Experts Ninja Training