Best practices
Table Talk: Sentinel’s New ThreatIntel Tables Explained
Key updates

On April 3, 2025, we publicly previewed two new tables to support STIX (Structured Threat Information eXpression) indicator and object schemas: ThreatIntelIndicators and ThreatIntelObjects. To summarize the important dates:

- 31 August 2025: We previously announced that data ingestion into the legacy ThreatIntelligenceIndicator table would cease on 31 July 2025. That timeline was extended, and the transition to the new ThreatIntelIndicators and ThreatIntelObjects tables proceeded gradually until the 31st of August 2025. The legacy ThreatIntelligenceIndicator table (and its data) remains accessible, but no new data is ingested into it. Therefore, any custom content, such as workbooks, queries, or analytic rules, must be updated to reference the new tables to remain effective. Customers who required additional time to complete the transition could opt into dual ingestion, available until the official retirement on the 31st of May 2026, by submitting a service request. Update: the opt-in to dual ingestion ended on the 31st of August and is no longer available.
- 31 May 2026: ThreatIntelligenceIndicator table support will officially retire, along with ingestion for those who opted into dual ingestion beyond the 31st of August 2025.

What’s changing: ThreatIntelligenceIndicator vs. ThreatIntelIndicators and ThreatIntelObjects

Let’s summarise some of the differences.

| | ThreatIntelligenceIndicator | ThreatIntelIndicators | ThreatIntelObjects |
|---|---|---|---|
| Status | Extended data ingestion until the 31st of August 2025, with opt-in for additional transition time. Deprecating on the 31st of May 2026; no new data will be ingested after this date. | Active and recommended for use. | Active and complementary to ThreatIntelIndicators. |
| Purpose | Originally used to store threat indicators such as IPs, domains, and file hashes. | Stores individual threat indicators (e.g. IPs, URLs, file hashes). | Stores STIX objects that provide contextual information about indicators, such as threat actors, malware families, campaigns, and attack patterns. |
| Characteristics | Limitations: less flexible schema; limited support for STIX (Structured Threat Information eXpression) objects; fewer contextual fields for advanced threat hunting. | Enhancements: supports the STIX indicator schema; includes a Data column with the full STIX object for advanced hunting; more metadata fields (e.g. LastUpdateMethod, IsDeleted, ExpirationDateTime); optimized ingestion that excludes empty key-value pairs and truncates fields over 1,000 characters. | Enhancements: enables richer threat modelling and correlation; includes fields like StixType, Data.name, and Data.id. |
| Use cases | Legacy structure for storing threat indicators. Migration note: all custom queries, workbooks, and analytics rules referencing this table must be updated to use the new tables. | Ideal for identifying and correlating specific threat indicators. Threat hunting: enables hunting for specific Indicators of Compromise (IOCs) such as IP addresses, domains, URLs, and file hashes. Alerting and detection rules: can be used in KQL queries to match against telemetry from other tables (e.g. Heartbeat, SecurityEvent, Syslog). Example: identify threat actors associated with specific threat indicators (see the query sketch below the table). | Useful for understanding relationships between indicators and broader threat entities (e.g. linking an IP to a known threat actor). Threat hunting: adds context by linking indicators to threat actors, malware families, campaigns, and attack patterns. Alerting and detection rules: enrich alerts with context such as threat actor names or malware types. Example: list TI objects related to the threat actor “Sangria Tempest” (see the query sketch below the table). |
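The table references two example queries; both follow the same correlation pattern. The following is a minimal sketch, assuming the documented schema in which relationships between STIX objects are themselves rows in ThreatIntelObjects (StixType == "relationship") whose Data column carries source_ref and target_ref, and in which Id holds each object's STIX id. Column names such as ObservableKey, ObservableValue, and Confidence follow the documented schema; validate them against your workspace before building content on this.

```kusto
// Sketch: list indicators related to the threat actor "Sangria Tempest".
// If Data is stored as a string in your workspace, wrap it with parse_json() first.
let actorIds = ThreatIntelObjects
    | where StixType == "threat-actor"
    | where tostring(Data.name) =~ "Sangria Tempest"
    | distinct Id;
let rels = ThreatIntelObjects
    | where StixType == "relationship"
    | extend SourceRef = tostring(Data.source_ref), TargetRef = tostring(Data.target_ref);
// Relationships can point either from or to the actor, so take both directions.
let relatedIds = union
    (rels | where SourceRef in (actorIds) | project RelatedId = TargetRef),
    (rels | where TargetRef in (actorIds) | project RelatedId = SourceRef);
ThreatIntelIndicators
| where Id in (relatedIds)
| summarize arg_max(TimeGenerated, *) by Id   // keep only the latest copy of each indicator
| project TimeGenerated, Id, ObservableKey, ObservableValue, Confidence
```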
Benefits of the new ThreatIntelIndicators and ThreatIntelObjects tables

In addition to what’s mentioned in the table above, the main benefits of the new tables include:

Enhanced threat visibility:
- More granular and complete representation of threat intelligence.
- Support for advanced hunting scenarios and complex queries.
- Attribution to threat actors and relationships.

Improved hunting capabilities:
- Generic parsing of STIX patterns.
- Support for all valid STIX IoCs, threat actors, identities, and relationships.

Important considerations with the new TI tables

Higher volume of data being ingested:
- In the legacy ThreatIntelligenceIndicator table, only IoCs from the Domain, File, URL, Email, and Network sources were ingested.
- The new tables support a richer schema and more detailed data, which naturally increases ingestion volume. The Data column in both tables stores full STIX objects, which are often large and complex.
- Additional metadata fields (e.g. LastUpdateMethod, StixType, ObservableKey) increase the size of each record.
- Some fields, such as description and pattern, are truncated if they exceed 1,000 characters, indicating the potential for large payloads.

More frequent republishing:
- Previously, threat intelligence data was republished over a 12-day cycle. Now, all data is republished every 7-10 days (depending on the volume), increasing ingestion frequency and volume.
- This change ensures fresher data but also leads to more frequent ingestion events.
- Republishing is identifiable by LastUpdateMethod = "LogARepublisher" in the tables.

Optimising data ingestion

There are two mechanisms to optimise threat intelligence data ingestion and control costs.

Ingestion Rules

See ingestion rules in action: Introducing Threat Intelligence Ingestion Rules | Microsoft Community Hub

Sentinel supports ingestion rules that allow organizations to curate data before it enters the system. They enable:
- Bulk tagging, expiration extensions, and confidence-based filtering, which may increase ingestion if more indicators are retained or extended.
- Custom workflows that may result in additional ingestion events (e.g. tagging or relationship creation).
- Noise reduction by filtering out irrelevant TI objects, such as low-confidence indicators (e.g. dropping IoCs with a confidence score of 0) and suppressing known false positives from specific feeds.

These rules act on TI objects before they are ingested into Sentinel, giving you control over what gets stored and analysed.

Data Collection Rules / data transformation

As mentioned above, the ThreatIntelIndicators and ThreatIntelObjects tables include a Data column which contains the full original STIX object and may or may not be relevant for your use cases. If it isn’t, you can use a workspace transformation DCR (data collection rule) to filter it out with a KQL query.
An example of this KQL query is shown below; for more examples of using workspace transformations and data collection rules, see Data collection rules in Azure Monitor - Azure Monitor | Microsoft Learn.

```kusto
source
| project-away Data
```

A few things to note:
- Your threat intelligence feeds will send the additional STIX object data and IoCs. If you prefer not to receive this additional TI data, you can filter it out according to your use cases, as described above. More examples are given here: Work with STIX objects and indicators to enhance threat intelligence and threat hunting in Microsoft Sentinel (Preview) - Microsoft Sentinel | Microsoft Learn
- If you are using a data collection rule to make schema changes, such as dropping fields, make sure to modify the relevant Sentinel content (e.g. detection rules, workbooks, hunting queries) that uses those tables.
- There can be additional cost when using Azure Monitor data transformations (such as when adding extra columns or enrichments to incoming data); however, if Sentinel is enabled on the Log Analytics workspace, there is no filtering ingestion charge regardless of how much data the transformation filters.

New Threat Intelligence solution pack available

A new Threat Intelligence solution is now available in the Content Hub, providing out-of-the-box content referencing the new TI tables: 51 detection rules, 5 hunting queries, 1 workbook, 5 data connectors, and 1 parser for ThreatIntelIndicators. Please note, the previous Threat Intelligence solution pack will be deprecated and removed after the transition phase. We recommend downloading the new solution from the Content Hub as shown below:

Conclusion

The transition to the new ThreatIntelIndicators and ThreatIntelObjects tables provides enhanced support for STIX schemas, improved hunting and alerting features, and greater control over data ingestion, giving organizations deeper visibility and more effective threat detection. To ensure continuity and maximize value, it’s essential to update existing content and adopt the new Threat Intelligence solution pack available in the Content Hub.

Related content and references:
- Work with STIX objects and indicators to enhance threat intelligence and threat hunting in Microsoft Sentinel
- Curate Threat Intelligence using Ingestion Rules
- Announcing Public Preview: New STIX Objects in Microsoft Sentinel

General Availability of on-demand scanning in Defender for Storage
When malware protection was initially introduced in Microsoft Defender for Storage, security administrators gained the ability to safeguard their storage accounts against malicious attacks during blob uploads. This means that any time a blob is uploaded—whether from a web application, server, or user—into an Azure Blob storage account, malware scanning powered by Microsoft Defender Antivirus examines the content for any malicious elements within the blob, including images, documents, zip files, and more.

🎉 In addition to on-upload malware protection, on-demand malware protection is now generally available in Defender for Storage. This article will focus on the recent general availability release of on-demand scanning, its benefits, and how security administrators can begin utilizing this feature today.

🐞 What is on-demand scanning?

Unlike on-upload scanning, which automatically scans blobs for malware when they are uploaded or modified in cloud storage environments, on-demand scanning enables security administrators to manually initiate scans of entire storage accounts for malware. This scanning method is particularly beneficial for targeted security inspections, incident response, creating security baselines for specific storage accounts, and compliance with regulatory requirements. Scanning all existing blobs in a storage account can be performed via the API and the Azure portal user interface.

Let’s explore some use case scenarios and reasons why an organization might need on-demand scanning:

- Contoso IT Department has received a budget to enhance the security of their organization following the acquisition of Company Z. Company Z possesses numerous storage accounts containing dormant data that have not undergone malware scanning. To integrate these data blobs into the parent organization, it is essential that they first be scanned for malware.
- Contoso Health Department is mandated by state law to conduct a scheduled quarterly audit of the storage accounts. This audit ensures data integrity and provides documented assurance of security controls for compliance. It involves verifying that important cloud-hosted documents are secure and free from malware.
- Contoso Legal Corporation experienced a recent breach where the attacker accessed several storage accounts. Post-breach, Contoso Legal Corporation must assure their stakeholders that the storage accounts are free of malware.

💪 Benefits of on-demand scanning

On-demand scanning offers numerous advantages that security administrators can leverage to safeguard their cloud storage. This section details some of the primary benefits:

- Native scan experience: Malware scanning within Defender for Storage is an agentless solution that requires no additional infrastructure. Security administrators can enable malware protection easily and observe its benefits immediately.
- Respond to security events: Immediately scan storage accounts when security alerts or suspicious activities are detected.
- Security audits and maintenance: Performing on-demand scans is crucial during security audits or routine system maintenance to ensure that all potential issues are identified and addressed.
- Latest malware signatures: On-demand scanning ensures that the most recent malware signatures are utilized. Blobs that evaded detection by previous malware scans can be identified during a manual scan.
🫰 On-demand scanning cost estimation

Organizations frequently possess extensive amounts of data and require scanning for malware due to various security considerations. A lack of understanding regarding the precise cost of this operation can hinder security leaders from effectively safeguarding their organization. To address this, Defender for Storage offers an integrated cost estimation tool for on-demand scanning within the Azure portal user interface. This UI displays the size of the blob storage and provides estimated scan costs based on the volume of data. Access to this information facilitates budgeting.

🤔 On-upload or on-demand scanning

In the current configuration of malware protection within Defender for Storage, on-upload malware scanning must be enabled to use the on-demand functionality; on-demand scanning is offered as an additional option. On-upload scanning ensures that incoming blobs are free from malware, while on-demand scanning provides malware baselines and verifies blob health using the latest malware signatures. The two have distinct triggers: on-upload scanning is performed automatically when new blobs are uploaded to a blob-based storage account, whereas on-demand scanning is triggered manually by a user or an API call. On-demand scanning can also be initiated by workflow automation, such as using a logic app within Azure for scheduled scans.

👟 Start scanning your blobs with on-demand scanning

Prerequisites: Malware protection in Defender for Storage is exclusively available in the per-storage-account plan. If your organization is still using the classic Defender for Storage plan, we highly recommend upgrading to take advantage of the full range of security benefits and the latest features. To get started with this agentless solution, please review the prerequisites in our public documentation here.

Test on-demand malware scanning: Within the Microsoft Defender for Cloud Ninja Training available on GitHub, security administrators can utilize Exercise 12: Test On-demand Malware Scanning in Module 19. The exercise includes detailed instructions and screenshots for testing on-demand malware scanning. This test can be performed using the Azure portal user interface or the API.

Best practices: To maximize the effectiveness of on-demand malware scanning in Microsoft Defender for Storage, please review the best practices outlined in our public documentation here.

📖 Conclusion

In this article we explored the newly available on-demand scanning feature in Defender for Storage, which complements existing on-upload scanning capabilities by allowing security administrators to manually initiate malware scans of storage accounts. This feature is particularly useful for targeted security checks, incident response, creating security baselines for storage accounts, and compliance audits. Additionally, Defender for Storage includes a built-in cost estimation tool to help organizations budget for on-demand scanning based on their data volume.

⚙️ Additional resources
- Defender for Storage Malware Protection Overview
- On-demand malware protection in Defender for Storage
- On-upload malware protection in Defender for Storage

We want to hear from you! Please take a moment to fill out this survey to provide direct feedback to the Defender for Storage engineering team.

Case Management: Incidents, Cases, and When to Use Them
In March, Case Management reached GA status within the unified portal. This introduced new functionality and experiences, such as:
- A new case queue
- Custom statuses
- A new case task experience
- Linking incidents to cases

This can be a little confusing for existing users who are familiar with incidents and the incident experience in either Microsoft Defender or Sentinel. Let’s break this down in more detail.

What are incidents? Incidents are artifacts that act as containers for alerts, signaling that a noteworthy event took place involving one or more malicious activities. They serve as a single landing page for alerts, activities, entities, and more.

When to use incidents? Incidents are the default experience for analysts as they perform investigations and response. Incidents are where they will find all details available for alerts and entities while performing the basic tasks of a SOC analyst. Incidents should be used when investigating and responding to malicious activity within the environment. The current incident experience provides features such as:
- Alert timeline
- Entity mapping and tracking
- Entity investigation graph
- Copilot for Security
- Pre-performed investigations and responses

What are cases? Cases are artifacts that represent an actionable or trackable item, such as an incident investigation, validating a threat-hunting hypothesis, reviewing threat intelligence, managing endpoint vulnerabilities, and more. They can exist without alerts or incidents.

When to use cases vs. incidents? This section is not meant to put one over the other, but to clear up some confusion. Cases serve as items that can be created to track important activities within the SOC; they don’t have to be only for incident response. A case can be created for any notable activity that the SOC performs, as mentioned above. Cases can be used as a collaboration tool within your SOC team. While cases may seem redundant to incidents, that is not true one bit. Here are a few distinguishing points:
- As incidents are a container for alerts, cases can be a container for incidents, allowing multiple incidents to be worked on at once if they are related by threat actor, impacted entities, and more.
- Cases offer a native task experience, similar to the experience within Microsoft Sentinel in Azure.
- Cases offer attachment support, giving analysts a more traditional case management experience that incidents do not have.
- Cases allow for more customization, such as custom statuses. Incidents do not offer custom statuses.

Let’s look at two example scenarios:

Cases with incidents. I am a SOC analyst reviewing the incident queue. I find an incident that involves multiple threat types and scripts. I would like to work on this incident with my colleagues while tracking notable artifacts that we find in our investigation. For example:
- I visit the unified incident queue and see that I have a multi-stage incident, involving multiple alerts for multiple assets.
- I perform my initial triage and confirm that this is a true positive that should be addressed. I then cut a case and attach this incident to it for collaboration.
- Within the case, I can add a code block to list any query that I have performed within Advanced Hunting, as well as paste results from my queries directly in the case for tracking.
- If using Copilot for Security, I can copy and paste the Copilot incident summary into the case so that my colleagues can get an incident summary without having to leave the case.
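For instance, the kind of Advanced Hunting query an analyst might paste into such a case as a code block could look like the sketch below. DeviceProcessEvents and its columns are standard Advanced Hunting schema; the device name and time window are hypothetical placeholders.

```kusto
// Sketch: review script-engine activity on a device tied to the incident.
// "contoso-ws01" and the 7-day window are placeholder values.
DeviceProcessEvents
| where Timestamp > ago(7d)
| where DeviceName =~ "contoso-ws01"
| where FileName in~ ("powershell.exe", "pwsh.exe", "wscript.exe", "cscript.exe")
| project Timestamp, DeviceName, InitiatingProcessFileName, FileName, ProcessCommandLine
| order by Timestamp desc
```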
Cases without incidents. I am a SOC analyst responsible for remediating device vulnerabilities. For example:
- I check our current CVEs within Exposure Management and see that I have several devices that are currently vulnerable to CVE-2025-5419, a Microsoft Edge (Chromium) vulnerability.
- I save my list of devices to a CSV file so that I can attach it to my case. I also copy the description of the CVE into the case notes to make it more convenient for my colleagues to join the case and not need to leave it.
- I then pivot to Advanced Hunting to review activities by any of these vulnerable devices. I have a match and would like to connect that result to my case, so I use Export > Copy to Clipboard so that I can paste it in the case.
- Back within the case, I upload the CSV of exposed devices as evidence, leave a message that is formatted to draw attention to the findings, and paste in the results of my query.
- Based on my findings, I generate new tasks for each device owner and paste in the instructions for remediating the CVE.

These are just some examples of the many uses for cases within the Defender portal. Hopefully this highlights the versatility of case management today and how it can operate both with and without an incident involved. Keep an eye out for more improvements as Case Management matures. If you are looking to learn more about case management, please check out the resources below:
- Public documentation: Manage security operations cases natively in the Microsoft Defender portal - Unified security operations | Microsoft Learn
- Video-based learning: https://www.youtube.com/watch?v=G-vfMJSL11g
- Demo: Case Management in Microsoft Defender

Protecting Your Azure Key Vault: Why Azure RBAC Is Critical for Security
Introduction

In today’s cloud-centric landscape, misconfigured access controls remain one of the most critical weaknesses in the cyber kill chain. When access policies are overly permissive, they create opportunities for adversaries to gain unauthorized access to sensitive secrets, keys, and certificates. These credentials can be leveraged for lateral movement, privilege escalation, and establishing persistent footholds across cloud environments. A compromised Azure Key Vault doesn’t just expose isolated assets; it can act as a pivot point to breach broader Azure resources, potentially leading to widespread security incidents, data exfiltration, and regulatory compliance failures. Without granular permissioning and centralized access governance, organizations face elevated risks of supply chain compromise, ransomware propagation, and significant operational disruption.

The Role of Azure Key Vault in Security

Azure Key Vault plays a crucial role in securely storing and managing sensitive information, making it a prime target for attackers. Effective access control is essential to prevent unauthorized access, maintain compliance, and ensure operational efficiency. Historically, Azure Key Vault used access policies for managing permissions. However, Azure role-based access control (RBAC) has emerged as the recommended and more secure approach. RBAC provides granular permissions, centralized management, and improved security, significantly reducing the risks associated with misconfigurations and privilege misuse. In this blog, we’ll highlight the security risks of a misconfigured Key Vault, explain why RBAC is superior to legacy access policies, provide RBAC best practices, and show how to migrate from access policies to RBAC.

Security Risks of Misconfigured Azure Key Vault Access

Overexposed Key Vaults create significant security vulnerabilities, including:
- Unauthorized access to API tokens, database credentials, and encryption keys.
- Compromise of dependent Azure services such as Virtual Machines, App Services, Storage Accounts, and Azure SQL databases.
- Privilege escalation via managed identity tokens, enabling further attacks within your environment.
- Indirect permission inheritance through Azure AD (AAD) group memberships, making it harder to track and control access.
- Nested AAD group access, which increases the risk of unintended privilege propagation and complicates auditing and governance.

Consider this real-world example of the risks posed by overly permissive access policies: a global fintech company suffered a severe breach due to an overly permissive Key Vault configuration, including public network access and excessive permissions via legacy access policies. Attackers accessed sensitive Azure SQL databases, achieved lateral movement across resources, and escalated privileges using embedded tokens. The critical lesson: protect Key Vaults using strict RBAC permissions, network restrictions, and continuous security monitoring.

Why Azure RBAC is Superior to Legacy Access Policies

Azure RBAC enables centralized, scalable, and auditable access management. It integrates with Microsoft Entra, supports hierarchical role assignments, and works seamlessly with advanced security controls like Conditional Access and Defender for Cloud. Access policies, on the other hand, were designed for simpler, resource-specific use cases and lack the flexibility and control required for modern cloud environments. For a deeper comparison, see Azure RBAC vs. access policies.
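Before adopting the practices below, it helps to know which vaults still use the legacy access policy model. The following is a minimal Azure Resource Graph (KQL) sketch for that inventory; it assumes the vault's enableRbacAuthorization ARM property reflects the permission model, so verify the results against your environment.

```kusto
// Sketch: find Key Vaults that have not yet moved to the RBAC permission model.
// Run in Azure Resource Graph Explorer. enableRbacAuthorization is the ARM
// property behind the RBAC setting; vaults where it is missing or false are
// still governed by legacy access policies.
resources
| where type == "microsoft.keyvault/vaults"
| extend usesRbac = tobool(properties.enableRbacAuthorization)
| where isnull(usesRbac) or usesRbac == false
| project name, resourceGroup, subscriptionId, location
```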
Best Practices for Implementing Azure RBAC with Azure Key Vault

To effectively secure your Key Vault, follow these RBAC best practices:
- Use managed identities: Eliminate secrets by authenticating applications through Microsoft Entra.
- Enforce least privilege: Precisely control permissions, granting each user or application only the minimal required access.
- Centralize and scale role management: Assign roles at subscription or resource group levels to reduce complexity and improve manageability.
- Leverage Privileged Identity Management (PIM): Implement just-in-time, temporary access for high-privilege roles.
- Regularly audit permissions: Periodically review and prune RBAC role assignments. Detailed Microsoft Entra logging enhances auditability and simplifies compliance reporting.
- Integrate security controls: Strengthen RBAC by integrating with Microsoft Entra Conditional Access, Defender for Cloud, and Azure Policy.

For more on the Azure RBAC features specific to AKV, see the Azure Key Vault RBAC Guide. For a comprehensive security checklist, see Secure your Azure Key Vault.

Migrating from Access Policies to RBAC

To transition your Key Vault from legacy access policies to RBAC, follow these steps:
1. Prepare: Confirm you have the necessary administrative permissions and gather an inventory of applications and users accessing the vault.
2. Conduct inventory: Document all current access policies, including the specific permissions granted to each identity.
3. Assign RBAC roles: Map each identity to an appropriate RBAC role (e.g., Reader, Contributor, Administrator) based on the principle of least privilege.
4. Enable RBAC: Switch the Key Vault to the RBAC authorization model.
5. Validate: Test all application and user access paths to ensure nothing is inadvertently broken.
6. Monitor: Implement monitoring and alerting to detect and respond to access issues or misconfigurations.

For detailed, step-by-step instructions—including examples in CLI and PowerShell—see Migrate from access policies to RBAC.

Conclusion

Now is the time to modernize access control strategies. Adopting role-based access control (RBAC) not only eliminates configuration drift and overly broad permissions but also enhances operational efficiency and strengthens your defense against evolving threat landscapes. Transitioning to RBAC is a proactive step toward building a resilient and future-ready security framework for your Azure environment. Overexposed Azure Key Vaults aren’t just isolated risks — they act as breach multipliers. Treat them as Tier-0 assets, on par with domain controllers and enterprise credential stores. Protecting them requires the same level of rigor and strategic prioritization. By enforcing network segmentation, applying least-privilege access through RBAC, and integrating continuous monitoring, organizations can dramatically reduce the blast radius of a potential compromise and ensure stronger containment in the face of advanced threats.

Want to learn more? Explore Microsoft's RBAC Documentation for additional details.

Agentless code scanning for GitHub and Azure DevOps (preview)
🚀 Start free preview ▶️ Watch a video on agentless code scanning

Most security teams want to shift left. But for many developers, "shift left" sounds like "shift pain":
- 🪛 Pipeline friction: YAML edits with extra steps
- ⏱️ Build slowdowns: more friction, less speed
- 🧩 Complex coordination: too many moving parts

That's the tension we wanted to solve. With agentless code scanning in Defender for Cloud, you get broad visibility into code and infrastructure risks across GitHub and Azure DevOps - without touching your CI/CD pipelines or installing anything. ✨ Just connect your environment. We handle the rest.

Already in preview, here's what's new

Agentless code scanning was released in November 2024, and we're expanding the preview with capabilities that make it more actionable, customizable, and scalable:
- ✅ GitHub & Azure DevOps: Connect your GitHub org and scan every repository automatically
- 🎯 Scoping controls: Choose exactly which orgs, projects, and repos to scan
- 🔍 Scanner selection: Enable code scanning, IaC scanning, or both
- 🧰 UI and REST API: Manage at scale, programmatically or in the portal
- 🎁 Available for free during the preview under Defender CSPM

How agentless code scanning works

Agentless code scanning runs entirely outside your pipelines. Once a connector has been created, Defender for Cloud automatically discovers your repositories, pulls the latest code, scans for security issues, and publishes findings as security recommendations - every day. Here's the flow:
1. Discover: Repositories in GitHub or Azure DevOps are discovered using a built-in connector.
2. Retrieve: The latest commit from the default branch is pulled immediately, then re-scanned daily.
3. Analyze: Built-in scanners run in our environment. Code scanning looks for insecure patterns, bad crypto, and unsafe functions (e.g., `pickle.loads`, `eval()`) using Bandit and ESLint. Infrastructure as code (IaC) scanning detects misconfigurations in Terraform, Bicep, ARM templates, CloudFormation, Kubernetes manifests, Dockerfiles, and more using Checkov and Template Analyzer.
4. Publish: Findings appear as security recommendations in Defender for Cloud, with full context: file path, line number, rule ID, and guidance to fix.

Get started in under a minute
1. In Defender for Cloud, go to Environment settings → DevOps Security
2. Add a connector: Azure DevOps requires Azure Security Admin and ADO Project Collection Admin; GitHub requires Azure Security Admin and GitHub Org Owner to install the Microsoft Security DevOps app
3. Choose your scanning scope and scanners
4. Click Save - and we'll run the first scan immediately

Less than a minute. No pipeline configuration. No agent installed. No developer effort.
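Once findings are published, you can also pull them at scale with Azure Resource Graph. A rough KQL sketch follows; the display-name filter is an assumption, so check the exact recommendation names in your tenant before wiring this into reports.

```kusto
// Sketch: list unhealthy Defender for Cloud assessments related to code scanning.
// Run in Azure Resource Graph Explorer.
securityresources
| where type == "microsoft.security/assessments"
| extend displayName = tostring(properties.displayName),
         statusCode  = tostring(properties.status.code)
| where displayName has "code" and displayName has "scanning"   // assumed name filter
| where statusCode == "Unhealthy"
| project displayName, affectedResource = tostring(properties.resourceDetails.Id)
```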
Do I still need in-pipeline scanning?

Short answer: yes - if you want depth and speed in the development workflow. Agentless scanning gives you fast, wide coverage. But Defender for Cloud also supports in-pipeline scanning using the Microsoft Security DevOps (MSDO) command-line application for Azure DevOps or the GitHub action. Each method has its own strengths. Here's how to think about when to use which - and why many teams choose both:

| | ☁️ Agentless scanning | 🏗️ In-pipeline scanning |
|---|---|---|
| Visibility | Quickly assess all repos at org level | Scan and enforce every PR and commit |
| Setup | Requires only a connector | Requires pipeline (YAML) edits |
| Dev experience | No impact on build time | Inline feedback inside PRs and builds |
| Granularity | Repo-level control with code and IaC scanners | Fine-tuned control per tool or branch |
| Depth | Default-branch scans, no build context | Full build artifact, container, and dependency scanning |

💡 Best practice: start broad with agentless. Go deeper with in-pipeline scans where "break the build" makes sense.

Already using GitHub Advanced Security (GHAS)?

GitHub Advanced Security (GHAS) includes built-in scanning for secrets, CodeQL, and open-source dependencies - directly in GitHub and Azure DevOps. You don't need to choose. Defender for Cloud complements GHAS:
- Surfaces GHAS findings inside Defender for Cloud's security recommendations
- Adds broader context across code, infrastructure, and identity
- Requires no extra setup - findings flow in through the connector

You get centralized visibility, even if your teams are split across tools. One console. Full picture.

Core scenarios you can tackle today

🛡️ Catch IaC misconfigurations early: Scan for critical misconfigurations in Terraform, ARM, Bicep, Dockerfiles, and Kubernetes manifests. Flag issues like public storage access or open network rules before they're deployed.

🎯 Bring code risk into context: All findings appear in the same portal you use for VM and container security. No more jumping between tools - triage issues by risk, drill into the affected repository and file, and route them to the right owner.

🔍 Focus on what matters: Customize which scanners run and where. Continuously scan production repositories. Skip forks. Run scoped PoCs. Keep pace as repositories grow - new ones are auto-discovered.

What you'll see - and where

All detected security issues show up as security recommendations in the Recommendations and DevOps Security blades in Defender for Cloud. Every recommendation includes:
- ✅ Affected repository, branch, file path, and line number
- 🛠️ The scanner that found it
- 💡 Clear guidance to fix

What's next

We're not stopping here. These are already in development:
- 🔐 Secret scanning: identify leaked credentials alongside code and IaC findings
- 📦 Dependency scanning: open-source dependency scanning (SCA)
- 🌿 Multi-branch support: scan protected and non-default branches

Follow updates in our Tech Community and release notes.

Try it now - and help us shape what comes next
1. Connect GitHub or Azure DevOps to Defender for Cloud (free during preview) and enable agentless code scanning
2. View your discovered DevOps resources in the Inventory or DevOps Security blades
3. Enable scanning and review recommendations (Microsoft Defender for Cloud → Recommendations)

Shift left without slowing down. Start scanning smarter with agentless code scanning today.

Helpful resources to learn more
- Learn more in the Defender for Cloud in the Field episode on agentless code scanning
- Overview of Microsoft Defender for Cloud DevOps security
- Agentless code scanning - configuration, capabilities, and limitations
- Set up in-pipeline scanning in: Azure DevOps, GitHub action, other CI/CD pipeline tools (Jenkins, BitBucket Pipelines, Google Cloud Build, Bamboo, CircleCI, and more)

Microsoft Defender for Cloud Adds Four New Regulatory Frameworks
As organizations accelerate their digital transformation and embrace artificial intelligence (AI) across industries, the regulatory landscape is evolving just as rapidly. From financial resilience to responsible AI governance, enterprises are under increasing pressure to demonstrate compliance with a growing number of global standards across multiple cloud platforms. At Microsoft, we are committed to helping customers meet these challenges with integrated, scalable, and intelligent security solutions. Today, we’re excited to announce the public preview of four new regulatory frameworks in Microsoft Defender for Cloud. These frameworks are now available across Microsoft Azure, Amazon Web Services (AWS), and Google Cloud Platform (GCP), further expanding our multicloud compliance capabilities.

What’s New in Public Preview

The following regulatory frameworks are now supported in Microsoft Defender for Cloud:
- Digital Operational Resilience Act (DORA)
- European Union Artificial Intelligence Act (EU AI Act)
- Korean Information Security Management System for Public Cloud (k-ISMS-P)
- Center for Internet Security (CIS) Microsoft Azure Foundations Benchmark v3.0

Each of these frameworks addresses a critical area of modern cloud security and compliance. Let’s explore what they are, why they matter, and how Defender for Cloud helps you stay ahead.

Digital Operational Resilience Act (DORA)

The Digital Operational Resilience Act is a groundbreaking regulation from the European Union aimed at strengthening the digital resilience of financial institutions. DORA applies to a wide range of financial entities, including banks, insurance companies, investment firms, and third-party ICT providers, and mandates that these organizations can withstand, respond to, and recover from all types of ICT-related disruptions and threats.

Why DORA Matters

In today’s interconnected financial ecosystem, operational disruptions can have cascading effects across markets and geographies. DORA introduces a unified regulatory framework that emphasizes:
- Rigorous ICT risk management
- Incident reporting and response
- Digital operational resilience testing
- Oversight of third-party ICT service providers

With Defender for Cloud, organizations can now assess their compliance posture against DORA requirements, identify gaps, and implement recommended controls across Azure, AWS, and GCP. This helps financial institutions not only meet regulatory obligations but also build a more resilient digital infrastructure.

European Union Artificial Intelligence Act (EU AI Act)

The EU AI Act is the world’s first comprehensive legal framework for artificial intelligence. It introduces a risk-based classification system for AI systems, ranging from minimal to unacceptable risk, and imposes strict obligations on providers and users of high-risk AI applications.

Why the EU AI Act Matters

As AI becomes embedded in critical decision-making processes—from healthcare diagnostics to financial services—governments and regulators are stepping in to ensure these systems are safe, transparent, and accountable. The EU AI Act focuses on:
- Risk classification and governance
- Data quality and transparency
- Human oversight and accountability
- Robust documentation and monitoring

Defender for Cloud now enables organizations to monitor AI workloads and evaluate their compliance posture under the EU AI Act. This includes mapping security controls to regulatory requirements and surfacing actionable recommendations to reduce risk.
By integrating AI governance into your cloud security strategy, you can innovate responsibly and build trust with customers and regulators alike.

Korean Information Security Management System for Public Cloud (k-ISMS-P)

The k-ISMS-P is a South Korean regulatory standard that integrates personal information protection and information security management for public cloud services. It is a mandatory certification for cloud service providers and enterprises handling sensitive data in South Korea.

Why k-ISMS-P Matters

As cloud adoption grows in South Korea, so does the need for robust compliance frameworks that protect personal and organizational data. The k-ISMS-P standard covers:
- Organizational and technical security controls
- Personal data lifecycle management
- Incident response and audit readiness

Defender for Cloud now supports k-ISMS-P, enabling organizations to assess their compliance posture and prepare for audits with confidence. This is especially valuable for multinational companies operating in or partnering with South Korean entities.

CIS Microsoft Azure Foundations Benchmark v3.0

The Center for Internet Security (CIS) Azure Foundations Benchmark is a widely adopted set of best practices for securing Microsoft Azure environments. Version 3.0 introduces updated recommendations that reflect the latest cloud security trends and technologies.

Why CIS v3.0 Matters

Security benchmarks like CIS provide a foundational layer of protection that helps organizations reduce risk and improve their security posture. Key updates in version 3.0 include:
- Enhanced identity and access management controls
- Improved logging and monitoring configurations
- Updated recommendations for storage, networking, and compute

Defender for Cloud now supports CIS Azure Foundations Benchmark v3.0, offering automated assessments and remediation guidance. This helps security teams stay aligned with industry standards and continuously improve their cloud security hygiene.

Unified Compliance Across Multicloud Environments

With the addition of these four frameworks, Microsoft Defender for Cloud now supports an extensive library of regulatory standards and benchmarks across Azure, AWS, and GCP. This multicloud support is critical for organizations operating in hybrid environments or managing complex supply chains. The Regulatory Compliance dashboard in Defender for Cloud provides a centralized view of your compliance posture, complete with:
- Framework-specific control mapping
- Assessments and scoring
- Actionable recommendations and remediation steps
- Integration with Microsoft Purview and Microsoft Entra for unified governance

Get Started Today

These new frameworks are available in public preview and can be enabled directly from the Microsoft Defender for Cloud portal. To get started:
1. Navigate to the Regulatory Compliance blade.
2. Select Manage compliance standards.
3. Select an account or management account (Azure subscription or management group, AWS account or management account, GCP project or organization) to assign the security standard.
4. Select Security policies.
5. Locate the standard you want to enable and toggle the status to On.
6. Review your compliance posture and implement recommended actions.

For more information, visit our documentation. By expanding our regulatory coverage, we’re helping customers stay ahead of compliance requirements, reduce risk, and build trust in a rapidly evolving digital world.
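Beyond the dashboard, assigned standards can also be tracked programmatically through Azure Resource Graph. The following is a minimal sketch, assuming the regulatory compliance resource type that Defender for Cloud exposes in Resource Graph; verify field names in your tenant before relying on it for reporting.

```kusto
// Sketch: summarize passed/failed controls per assigned compliance standard.
// Run in Azure Resource Graph Explorer.
securityresources
| where type == "microsoft.security/regulatorycompliancestandards"
| project subscriptionId,
          standard = name,
          state  = tostring(properties.state),
          passed = toint(properties.passedControls),
          failed = toint(properties.failedControls)
| order by failed desc
```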
Whether you’re navigating AI governance, financial resilience, or regional data protection laws, Microsoft Defender for Cloud is here to support your journey.

Protecting Cloud Storage in the Age of AI
Introduction

In the age of AI, cloud storage isn’t just infrastructure, it’s the foundation of innovation. Generative AI models rely on massive datasets for grounding, model training, and fine-tuning, many containing sensitive or proprietary data. If compromised, the damage can be severe: IP theft, privacy violations, or even model poisoning. With that importance comes a real risk of compromise:
- 70% of organizations found hidden sensitive data during audits.
- 78% struggle with compliance, especially with growing AI and data regulations.
- 47% have faced malware in storage, costing $2.3M on average per breach.

In this blog, we’ll explore how Defender for Cloud helps safeguard customers’ most valuable data by helping them start secure and stay secure.

The museum metaphor

Imagine your cloud storage as a high-tech museum, housing priceless artifacts: your sensitive data, customer records, and AI training sets. Like any museum, protecting what’s inside requires strong defenses from day one and ongoing vigilance. To protect your important artifacts, you should:
- Start secure by preventing risks before the doors open. You’ll need to lock every entry point, position security cameras, and test alarms. Fix misconfigurations, close access gaps, and identify exposed data early—before attackers can.
- Stay secure with continuous monitoring. Consider how museums never stop watching: security systems run 24/7, and staff respond to suspicious activity. In the same way, you need to detect threats in real time, enforce policies, and block malicious actions and malware—like someone trying to upload poisonous data into your AI pipeline.

Whether you’re storing business-critical data or fueling innovation with AI, you need to protect your data like it belongs in a vault. In the same way, Microsoft Defender for Cloud storage security helps Azure Storage customers start secure and stay secure when it comes to protecting their cloud storage.

Start secure – proactively reduce storage risks

The first step of “start secure” is enabling security. It’s important to have native integrations with existing storage infrastructure for effective security. Defender for Cloud provides seamless integration with Azure Storage, allowing one-click enablement and reducing operational overhead. After enabling security, it’s important to identify and address risks. Defender for Cloud offers prioritized recommendations to detect and fix storage posture issues by integrating with various cloud providers. It identifies misconfigurations like shadow data, network weaknesses, and excessive access, providing clear remediation steps and guidance for administrators.

However, it is not enough to understand where the risks are; without risk prioritization, security admins can get overwhelmed by the number of recommendations. Defender for Cloud’s Attack Path Analysis feature offers a comprehensive understanding of the attack surface by simulating potential attack paths. This helps organizations identify and prioritize potential vulnerabilities and misconfigurations in their cloud environment that could be exploited by attackers. By proactively addressing these weaknesses, organizations can significantly reduce their attack surface and minimize the risk of breaches. For example, Defender for Cloud can identify an internet-exposed VM with a high-severity vulnerability that has access to a storage account containing sensitive data.
Without proper remediation, attackers can exploit this chain of posture issues to infiltrate the sensitive data.

Stay secure – detect and respond to storage threats

Beyond helping storage accounts start secure by managing security posture and reducing risks, keeping them secure requires continuous monitoring for threats and preventing malware in cloud storage. This is where we need to introduce the control plane and data plane of cloud storage. The control plane governs management operations like creating or deleting storage accounts, setting access policies, and configuring diagnostics—typically via ARM endpoints. The data plane, on the other hand, handles the actual read/write operations on blobs, files, and queues—often using SAS tokens or access keys. This is where the majority of Azure Storage traffic flows, and it’s also where many traditional security tools fall short.

While most storage security solutions in the market focus on control plane activities like blob creation or deletion, the data plane—where over 67% of Azure Storage traffic happens—handles most operations and often goes unmonitored. Attackers can access the data plane directly with keys or tokens, which many security teams overlook. Defender for Cloud addresses this by analyzing data plane logs and alerting on suspicious activity, such as token leaks, lateral movement, or insider threats. Additionally, Defender for Cloud offers ongoing monitoring and sensitive data discovery to detect and prevent breaches involving unauthorized access, exfiltration, or corruption of information in Azure Blob Storage. All of these threat insights are directly available for investigation in the Defender XDR portal.

Keeping storage accounts malware free

As discussed above, “stay secure” has two aspects: threat detection and response, and malware protection. Malware scanning allows organizations to detect and prevent polymorphic and metamorphic malware distribution events with content scanning upon upload or on-demand, using Microsoft Defender Antivirus technologies. If a malicious file is found, access to the file can be blocked, and the scan result will automatically trigger a security alert in Defender for Cloud.
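If you stream Defender for Cloud alerts to a Log Analytics workspace (for example, via continuous export), these storage detections can also be hunted with KQL. A minimal sketch follows, assuming Defender for Storage alert types keep the "Storage." prefix used in the alert reference documentation.

```kusto
// Sketch: recent Defender for Storage alerts, newest first.
// Assumes alerts are exported to the SecurityAlert table and that storage
// alert types are prefixed "Storage." as in the alerts reference docs.
SecurityAlert
| where TimeGenerated > ago(7d)
| where AlertType startswith "Storage."
| project TimeGenerated, AlertName, AlertSeverity, ResourceId, Description
| sort by TimeGenerated desc
```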
Common use cases for storage security

Based on the features above, let’s look at common industry use cases for storage security.

1. Protect sensitive data in AI applications
- Industries: generative AI platforms, customer service providers
- Personas: AI architects, infrastructure admins
- Pain points: growing threat landscape targeting sensitive data; over-permissive access configurations; difficulty identifying high-priority assets to monitor
- Solution: Defender for Cloud helps organizations secure storage accounts holding sensitive data by providing robust posture management. It continuously assesses configurations, highlights risks, and enables teams to prioritize critical storage resources. When integrated with Microsoft Defender XDR, it extends protection with threat detection and response capabilities—alerting security operations teams to malware presence and enabling rapid investigation and remediation.

2. Prevent malware from spreading through file uploads
- Industries: customer service, healthcare, data-driven applications with file upload pipelines
- Personas: SOC analysts, infrastructure admins, security admins
- Pain points: risk of malware in customer-uploaded files; compliance pressure and industry mandates for data hygiene; slow or manual malware detection and response processes
- Solution: Defender for Cloud’s malware scanning proactively detects malicious content in uploaded files before it can spread across systems. Using fast, sampling-based scanning, security teams receive results quickly—helping them reduce time to remediation and automate responses. This improves compliance readiness and strengthens overall data hygiene for customer-facing environments.

Learn more about Defender for Cloud storage security:
- Microsoft Defender for Cloud | Microsoft Security
- Start a free Azure trial.
- Read more about Microsoft Defender for Cloud Storage Security here.

Optimizing Resource Allocation with Microsoft Defender CSPM
This article is part of our series on “Strategy to Execution: Operationalizing Microsoft Defender CSPM.” If you’re new to the series, or want broader strategic context, begin with our main overview article, then explore Article 1, Article 2, and Article 3 for details on risk identification, compliance, and DevSecOps workflows.

Introduction

Organizations today face an array of challenges in their cloud security efforts: ever-growing multicloud infrastructures, finite budgets, and evolving threat landscapes. Effectively allocating limited resources is critical: security teams must prioritize the vulnerabilities posing the highest risk while avoiding spending precious time and money on lower-priority issues. Defender CSPM (Cloud Security Posture Management) provides a data-driven approach to this problem. By continuously analyzing the security posture across Azure, AWS, and GCP, Defender CSPM calculates risk scores based on factors such as business impact, exposure, and potential exploitability. Armed with these insights, security teams can make informed decisions about where to focus resources, maximizing impact and reducing their overall risk. In this fourth and last article of our series, we’ll examine how to operationalize resource allocation with Defender CSPM. We’ll discuss the common allocation challenges, explain how CSPM’s risk-based prioritization helps address them, and provide practical steps to implement an effective allocation strategy.

Why Resource Allocation Matters in Multicloud Security

Resource allocation is critical in multicloud security because securing environments that span multiple cloud providers introduces unique challenges that require careful planning. Before you can decide where to invest your time, budget, and headcount, you need to understand the hurdles that make multicloud allocation especially tough:

- Overwhelming volume of vulnerabilities: Modern cloud environments surface a constant stream of potential vulnerabilities, and multicloud setups compound this challenge by introducing platform-specific risks. Without a clear prioritization method, teams risk tackling too many issues at once, often leaving truly critical threats under-addressed.
- Competing priorities across teams: Security, DevOps, and IT teams frequently have diverging goals. Security may emphasize high-risk vulnerabilities, while DevOps focuses on uptime and rapid releases. Aligning everyone on which vulnerabilities matter most ensures strategic clarity and reduces internal friction.
- Limited budgets and skilled personnel: Constrained cybersecurity budgets and headcount force tough decisions about which fixes or upgrades to fund. By focusing on vulnerabilities that present the highest risk to the business, organizations can make the most of available resources.
- Lack of centralized visibility: Monitoring and correlating vulnerabilities across multiple cloud providers can be time-intensive and fragmented. Without a unified view, it’s easy to miss critical issues or duplicate remediation efforts, both of which squander limited resources.

How Defender CSPM Enables Risk-Based Resource Allocation

To address the complex task of resource allocation in sprawling, multicloud estates, security teams need more than raw vulnerability data; they need a system that continually filters, enriches, and ranks findings by real-world impact. Microsoft Defender CSPM equips security teams with automated, prioritized insights and unified visibility.
It brings together telemetry from Azure, AWS, and GCP, applies advanced analytics to assess which weaknesses pose the greatest danger, and then packages those insights into clear, actionable priorities. The following capabilities form the backbone of a risk-based allocation strategy:

Risk scoring and prioritization. Defender CSPM continuously evaluates vulnerabilities and security weaknesses, assigning each one a risk score informed by:
- Business impact – how vital a resource or application is to daily operations.
- Exposure – whether a resource is publicly accessible or holds sensitive data.
- Exploitability – contextual factors (configuration, known exploits, network paths) that heighten or lower a vulnerability’s real-world risk.

This approach ensures that resources, time, budget, and staff are channeled toward the issues that most endanger the organization.

Centralized visibility across clouds. Multicloud support means you can view vulnerabilities across Azure, AWS, and GCP in a single pane of glass. This unified perspective helps teams avoid duplicative efforts and ensures each high-risk finding is appropriately addressed, no matter the platform.

Automated, context-aware insights. Manual vulnerability evaluations are time-consuming and prone to oversight. Defender CSPM automates the risk-scoring process, updating risk levels as new vulnerabilities arise or resources change, so teams can act promptly on the most critical gaps.

Tailored remediation guidance. In addition to highlighting high-risk issues, Defender CSPM provides recommended steps to fix them, such as applying patches, adjusting access controls, or reconfiguring cloud resources. Guided instructions accelerate remediation efforts and reduce the potential for human error.

Step-by-Step: Operationalizing Resource Allocation with Defender CSPM

Below is a practical workflow integrating both the strategic and operational aspects of allocating resources effectively.

Step 1: Build a Risk Assessment Framework
- Identify business-critical assets: Collaborate with business leaders, application owners, and architects to label high-priority workloads (e.g., production apps, data stores with customer information). Use resource tagging (Azure tags, AWS tags, GCP labels) to systematically mark essential resources.
- Align Defender CSPM’s risk scoring with business impact: Customize Defender CSPM’s scoring model to reflect your organization’s unique risk tolerance. Set up periodic risk-scoring workshops with security, compliance, and business stakeholders to keep definitions current.
- Categorize vulnerabilities: Group vulnerabilities into critical, high, medium, or low, based on the assigned risk score. Establish remediation SLAs for each severity level (e.g., 24-48 hours for critical; 7-14 days for medium).

Step 2: Allocate Budgets and Personnel Based on Risk
- Prioritize funding for high-risk issues: Work with finance or procurement to ensure the biggest threats receive adequate budget. This may cover additional tooling, specialized consulting, or staff training. If a public-facing resource with sensitive data is flagged, you might immediately allocate budget for patching or an additional third-party security review.
- Track resource utilization: Monitor how much time and money go into specific vulnerabilities. Overinvesting in less severe issues can starve critical areas of necessary attention. Use dashboards in Power BI or similar tools to visualize resource allocation versus risk impact.
- Define clear SLAs: Set more aggressive SLAs for higher-risk items.
For instance, fix critical vulnerabilities within 24-48 hours to minimize dwell time. Align your ticketing system (e.g., ServiceNow, Jira) with Defender CSPM so each newly discovered high-risk vulnerability automatically flags an urgent ticket.

Step 3: Continuously Track Metrics and Improve
- Mean time to remediate (MTTR): Monitor how long it takes to fix vulnerabilities after they’re identified. Strive for a shorter MTTR on top-priority issues.
- Reduction in risk exposure: Track how many high-priority vulnerabilities are resolved over time; a downward trend indicates effective remediation. Re-assess risk after major remediation efforts; scores should reflect the newly reduced exposure.
- Resource utilization efficiency: Compare security spending or labor hours to actual risk-reduction outcomes. If you’re using valuable resources on low-impact tasks, reallocate them. Evaluate whether your investments, tools, staff, or specialized training, are paying off in measurable risk reduction.
- Compliance improvement: For organizations under regulations like HIPAA or PCI-DSS, measure compliance posture. Defender CSPM can highlight policy violations and track improvement over time.
- Benchmark against industry standards: Compare your results (MTTR, risk exposure, compliance posture) against sector-specific benchmarks. Adjust resource allocation strategies if you’re lagging behind peers.
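To feed these metrics, posture data can be pulled from Azure Resource Graph. The sketch below counts unhealthy Defender for Cloud assessments by severity, a rough proxy for risk-exposure trending; the field names follow the documented assessments schema, but verify them in your tenant.

```kusto
// Sketch: count unhealthy assessments by severity across onboarded clouds.
// Run in Azure Resource Graph Explorer.
securityresources
| where type == "microsoft.security/assessments"
| extend severity = tostring(properties.metadata.severity),
         status   = tostring(properties.status.code)
| where status == "Unhealthy"
| summarize unhealthyFindings = count() by severity
| order by unhealthyFindings desc
```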
Strategic Benefits of a Risk-Based Approach
- Maximized ROI: By focusing on truly critical issues, you’ll see faster, more tangible reductions in risk for each security dollar spent.
- Faster remediation of high-risk vulnerabilities: With Defender CSPM’s clear rankings, teams know which issues to fix first, minimizing exposure windows for the worst threats.
- Improved collaboration: Providing a transparent, data-driven explanation for why certain vulnerabilities get priority eases friction between security, DevOps, and operations teams.
- Scalable for growth: As you add cloud workloads, CSPM’s automated scoring scales with you. You’ll always have an updated queue of the most urgent vulnerabilities to tackle.
- Stronger risk management posture: Continuously focusing on top risks aligns security investments with business goals and helps maintain compliance with evolving standards and regulations.

Conclusion

Resource allocation is a central concern for any organization striving to maintain robust cloud security. Microsoft Defender for Cloud’s CSPM makes these decisions more straightforward by automatically scoring vulnerabilities according to impact, exposure, and other contextual factors. Security teams can thus prioritize their limited budgets, personnel, and time for maximum effect, reducing the window of exposure and minimizing the likelihood of critical breaches. By following the steps outlined here, building a risk assessment framework, allocating resources proportionally to risk severity, and monitoring metrics to drive continuous improvement, you can ensure your security program remains agile and cost-effective. In doing so, you’ll align cybersecurity investments with broader business objectives, ultimately delivering measurable risk reduction in today’s dynamic, multicloud environment.

Microsoft Defender for Cloud - Additional Resources
- Strategy to Execution: Operationalizing Microsoft Defender CSPM
- Considerations for risk identification and prioritization in Defender for Cloud
- Strengthening Cloud Compliance and Governance with Microsoft Defender CSPM
- Integrating Security into DevOps Workflows with Microsoft Defender CSPM
- Download the new Microsoft CNAPP eBook at aka.ms/MSCNAPP
- Become a Defender for Cloud Ninja by taking the assessment at aka.ms/MDCNinja

Reviewers: Yuri Diogenes, Principal PM Manager, CxE Defender for Cloud