Protecting Your Azure Key Vault: Why Azure RBAC Is Critical for Security
Introduction

In today's cloud-centric landscape, misconfigured access controls remain one of the most critical weaknesses in the cyber kill chain. When access policies are overly permissive, they create opportunities for adversaries to gain unauthorized access to sensitive secrets, keys, and certificates. These credentials can be leveraged for lateral movement, privilege escalation, and establishing persistent footholds across cloud environments. A compromised Azure Key Vault doesn't just expose isolated assets; it can act as a pivot point to breach broader Azure resources, potentially leading to widespread security incidents, data exfiltration, and regulatory compliance failures. Without granular permissioning and centralized access governance, organizations face elevated risks of supply chain compromise, ransomware propagation, and significant operational disruption.

The Role of Azure Key Vault in Security

Azure Key Vault plays a crucial role in securely storing and managing sensitive information, making it a prime target for attackers. Effective access control is essential to prevent unauthorized access, maintain compliance, and ensure operational efficiency. Historically, Azure Key Vault used Access Policies for managing permissions. However, Azure role-based access control (RBAC) has emerged as the recommended and more secure approach. RBAC provides granular permissions, centralized management, and improved security, significantly reducing the risks associated with misconfigurations and privilege misuse. In this blog, we'll highlight the security risks of a misconfigured Key Vault, explain why RBAC is superior to legacy Access Policies, provide RBAC best practices, and show how to migrate from access policies to RBAC.

Security Risks of Misconfigured Azure Key Vault Access

Overexposed Key Vaults create significant security vulnerabilities, including:

- Unauthorized access to API tokens, database credentials, and encryption keys.
- Compromise of dependent Azure services such as Virtual Machines, App Services, Storage Accounts, and Azure SQL databases.
- Privilege escalation via managed identity tokens, enabling further attacks within your environment.
- Indirect permission inheritance through Azure AD (AAD) group memberships, making it harder to track and control access.
- Nested AAD group access, which increases the risk of unintended privilege propagation and complicates auditing and governance.

Consider this real-world example of the risks posed by overly permissive access policies: a global fintech company suffered a severe breach due to an overly permissive Key Vault configuration, including public network access and excessive permissions granted via legacy access policies. Attackers accessed sensitive Azure SQL databases, achieved lateral movement across resources, and escalated privileges using embedded tokens. The critical lesson: protect Key Vaults using strict RBAC permissions, network restrictions, and continuous security monitoring.

Why Azure RBAC is Superior to Legacy Access Policies

Azure RBAC enables centralized, scalable, and auditable access management. It integrates with Microsoft Entra, supports hierarchical role assignments, and works seamlessly with advanced security controls like Conditional Access and Defender for Cloud. Access Policies, on the other hand, were designed for simpler, resource-specific use cases and lack the flexibility and control required for modern cloud environments. For a deeper comparison, see Azure RBAC vs. access policies.
Best Practices for Implementing Azure RBAC with Azure Key Vault

To effectively secure your Key Vault, follow these RBAC best practices:

- Use managed identities: Eliminate stored secrets by authenticating applications through Microsoft Entra.
- Enforce least privilege: Precisely control permissions, granting each user or application only the minimal required access.
- Centralize and scale role management: Assign roles at subscription or resource group levels to reduce complexity and improve manageability.
- Leverage Privileged Identity Management (PIM): Implement just-in-time, temporary access for high-privilege roles.
- Regularly audit permissions: Periodically review and prune RBAC role assignments. Detailed Microsoft Entra logging enhances auditability and simplifies compliance reporting.
- Integrate security controls: Strengthen RBAC by integrating with Microsoft Entra Conditional Access, Defender for Cloud, and Azure Policy.

For more on the Azure RBAC features specific to Azure Key Vault, see the Azure Key Vault RBAC Guide. For a comprehensive security checklist, see Secure your Azure Key Vault.

Migrating from Access Policies to RBAC

To transition your Key Vault from legacy access policies to RBAC, follow these steps:

1. Prepare: Confirm you have the necessary administrative permissions and gather an inventory of the applications and users accessing the vault.
2. Conduct inventory: Document all current access policies, including the specific permissions granted to each identity.
3. Assign RBAC roles: Map each identity to an appropriate RBAC role (e.g., Reader, Contributor, Administrator) based on the principle of least privilege.
4. Enable RBAC: Switch the Key Vault to the RBAC authorization model.
5. Validate: Test all application and user access paths to ensure nothing is inadvertently broken (a hedged validation sketch appears at the end of this post).
6. Monitor: Implement monitoring and alerting to detect and respond to access issues or misconfigurations.

For detailed, step-by-step instructions, including examples in CLI and PowerShell, see Migrate from access policies to RBAC.

Conclusion

Now is the time to modernize access control strategies. Adopting role-based access control (RBAC) not only eliminates configuration drift and overly broad permissions but also enhances operational efficiency and strengthens your defense against evolving threats. Transitioning to RBAC is a proactive step toward building a resilient and future-ready security framework for your Azure environment. Overexposed Azure Key Vaults aren't just isolated risks; they act as breach multipliers. Treat them as Tier-0 assets, on par with domain controllers and enterprise credential stores, and protect them with the same level of rigor and strategic prioritization. By enforcing network segmentation, applying least-privilege access through RBAC, and integrating continuous monitoring, organizations can dramatically reduce the blast radius of a potential compromise and ensure stronger containment in the face of advanced threats. Want to learn more? Explore Microsoft's RBAC Documentation for additional details.
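To make the validation step concrete, here is a minimal smoke test of data-plane access after a vault has been switched to RBAC. This sketch is not from the original post: it assumes the azure-identity and azure-keyvault-secrets Python packages, and the vault URL and secret name are hypothetical placeholders.

```python
# Minimal post-migration smoke test (illustrative; assumes azure-identity and
# azure-keyvault-secrets are installed, and that the placeholder values below
# are replaced with your own).
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

VAULT_URL = "https://<your-vault-name>.vault.azure.net"  # placeholder

# DefaultAzureCredential resolves to a managed identity when running in Azure,
# or to developer credentials locally -- no secrets embedded in the app.
credential = DefaultAzureCredential()
client = SecretClient(vault_url=VAULT_URL, credential=credential)

try:
    # Succeeds only if the calling identity holds a data-plane RBAC role such
    # as "Key Vault Secrets User" on the vault or a parent scope.
    secret = client.get_secret("example-secret")  # hypothetical secret name
    print(f"RBAC access confirmed for secret: {secret.name}")
except Exception as exc:
    # A 403 here typically means the role assignment is missing or has not
    # yet propagated after the authorization model switch.
    print(f"Access check failed - review RBAC role assignments: {exc}")
```

Running a check like this for each application identity before and after flipping the authorization model helps catch missing role assignments early.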
Exposing hidden threats across the AI development lifecycle in the cloud

Introduction: The AI Lifecycle in the Cloud and Its Risks

As organizations increasingly adopt AI to drive innovation, the development and deployment of AI models, applications, and agents are now taking place in the cloud more than ever before. Leading cloud platforms make it easier than ever to build, train, and deploy AI systems at scale, offering powerful compute, seamless integrations, and collaborative tools. However, this shift also introduces new security challenges at every stage of the development lifecycle. Whether you're training an AI model or deploying an AI application or agent, the AI development lifecycle in the cloud spans multiple stages: data collection, model training, fine-tuning pipelines, and the deployment of AI applications and agents. If attackers compromise even one part of this lifecycle, it can put the entire AI system, and the business operations it supports, at risk.

What adds to the complexity of this landscape is the rapid evolution of cloud-based AI platforms. New features are released at a fast pace, often outpacing the maturity of existing security controls, leaving gaps that attackers can exploit. This blog will examine the risks associated with each phase of the AI development lifecycle in the cloud, whether it involves models, applications, or agents. We'll explore how attackers can abuse them, and how Microsoft Defender for Cloud helps organizations reduce AI posture risks with AI posture management across their multicloud environment.

Understanding the Threat Landscape Across the AI Lifecycle

Whether it's poisoning training data, stealing proprietary models, or hijacking deployed AI systems to manipulate outputs, securing the cloud-based AI development lifecycle requires a comprehensive understanding of the risks associated with every phase. Let's explore how attackers can target various stages of the AI development lifecycle and the specific consequences of those compromises.

Data and training

It all begins with data, which is often the most valuable and the most vulnerable asset. Whether it's customer records, transaction logs, emails, or images, this data is used to train models that will eventually make decisions on behalf of the organization. In cloud AI environments, such data is typically stored in cloud storage. If attackers gain access to a storage account holding training data, whether through misconfigured storage or overly permissive cloud account permissions, the consequences can be severe. For instance, they might inject poisoned or manipulated data into the training set, subtly altering the behavior of the model. In one scenario, they could bias a credit scoring model to approve fraudulent applications. In another, they could insert a hidden backdoor, causing the model to behave normally most of the time but output incorrect or malicious predictions when triggered by a specific input.

Once the data is prepared, it flows into the training pipeline: a critical but often overlooked attack surface. This pipeline automates the full training workflow: ingesting data, executing transformation scripts, spinning up GPU-powered training jobs, and saving the resulting model. If attackers infiltrate this pipeline, they can gain persistent control over the AI system. For example, they could modify preprocessing scripts to inject subtle distortions into the data, or they might replace a model artifact with a manipulated one that appears legitimate but behaves maliciously under specific conditions.
Since pipelines often run with elevated permissions and can access cloud storage, compute resources, and secrets, they also become convenient pivot points for lateral movement across cloud infrastructure.

Model Artifacts & Registries

Once trained, models in the cloud are typically stored in model registries or artifact repositories. These are often considered secure because they're not directly exposed to users. However, they represent a high-value target. Attackers who gain access to stored models can steal intellectual property, especially if the model architecture or parameters represent years of R&D. In addition to theft, an attacker might attempt to delete critical models to disrupt business and operations. Even more concerning, they could upload a malicious model in place of a legitimate one. Such a model could be designed to behave subtly but incorrectly, introduce biases, leak data during inference, or provide manipulated outputs that mislead downstream systems and users. This type of tampering not only undermines trust in AI systems but can also have serious operational and security consequences.

Model Fine-tuning

In addition to full model training, many organizations rely on fine-tuning: a process where a pre-trained foundation model is adapted using domain-specific data. Fine-tuning offers a faster and more cost-effective path to building specialized models, but it also introduces new attack vectors. Fine-tuning inherits all the risks of traditional training, plus a few more. For instance, attackers can target fine-tuning jobs or the associated fine-tuning files (e.g., in storage buckets) to manipulate the behavior of a pre-trained model without raising suspicion. By injecting poisoned fine-tuning data, they can create task-specific vulnerabilities, such as altering outputs related to a particular customer or product. The risk is especially high because fine-tuned models are often deployed directly into production environments. This means attackers don't need to compromise the full model training workflow to achieve impact; they can introduce malicious behavior just by manipulating a smaller, faster process with fewer controls. Given this, securing fine-tuning pipelines and datasets is just as critical as protecting full-scale training jobs.

Model Inference & Endpoints

After deployment, models are exposed to the outside world through inference endpoints, typically REST APIs that receive input data and return predictions, decisions, text, or other outputs. The main risk at this stage is unbounded consumption. This occurs when attackers, or even legitimate users, are able to perform excessive, uncontrolled requests, especially against resource-intensive models like large language models (LLMs). Such abuse can lead to denial of service (DoS), inflated operational costs, and overall service degradation. In cloud environments, where resource usage drives cost and performance, this kind of exploitation can have serious financial and operational impacts. In addition to consumption-based abuse, attackers with access to a poorly secured endpoint may attempt destructive actions, such as deleting the endpoint to disrupt availability and business operations, or deploying a different model to the endpoint, potentially replacing trusted outputs with manipulated or malicious ones. Securing inference endpoints is critical to maintaining the integrity, availability, and cost-effectiveness of AI services in the cloud.
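Because unbounded consumption is fundamentally a throttling problem, a common first mitigation is per-client rate limiting in front of the endpoint. The sketch below is a generic token-bucket illustration in Python, not a Microsoft product feature; in practice this logic would usually live in a gateway such as Azure API Management rather than in application code, and the client ID and rates shown are hypothetical.

```python
# Illustrative per-client token-bucket throttle for an inference endpoint.
# Standalone sketch: all names and limits are hypothetical.
import time
from collections import defaultdict

class TokenBucket:
    def __init__(self, rate_per_sec: float, burst: int):
        self.rate = rate_per_sec                       # tokens refilled per second
        self.burst = burst                             # maximum bucket size
        self.tokens = defaultdict(lambda: float(burst))
        self.last = defaultdict(time.monotonic)        # last request time per client

    def allow(self, client_id: str) -> bool:
        now = time.monotonic()
        # Refill tokens accrued since this client's previous request.
        elapsed = now - self.last[client_id]
        self.last[client_id] = now
        self.tokens[client_id] = min(
            self.burst, self.tokens[client_id] + elapsed * self.rate
        )
        if self.tokens[client_id] >= 1.0:
            self.tokens[client_id] -= 1.0
            return True
        return False  # reject: client exceeded its inference budget

bucket = TokenBucket(rate_per_sec=2.0, burst=10)
if not bucket.allow("client-42"):
    print("429 Too Many Requests")  # deny before invoking the costly model
```

Capping each caller's request budget this way bounds both the DoS and the cost-inflation variants of the consumption attack described above.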
The rise of AI agents and apps

AI agents, autonomous LLM-driven systems that can search, retrieve, write code, execute workflows, and make decisions, are rapidly becoming a central component in modern AI systems. Unlike traditional models that simply return predictions or text, agents are designed to perform complex, goal-oriented tasks by autonomously chaining multiple actions, tools, and reasoning steps. They can interact with external systems, call APIs, query databases, invoke tools like code execution environments or vector stores, and even communicate with other agents. This growing autonomy and connectivity unlocks powerful capabilities, but it also introduces a new and expanding attack surface.

One of the biggest concerns with AI agents is the amplification of existing risks. Vulnerabilities like prompt injection, which might have limited impact in a basic chatbot, can become far more dangerous when exploited in an agent that has access to tools and can take real actions. A single malicious input could cause an agent to leak sensitive information, perform unintended operations, or invoke tools in harmful ways. In addition, attackers with access to the agent itself, whether through compromised cloud account permissions or leaked API keys, can access the agent's tools, change the agent's behavior by manipulating its instructions, or delete it to disrupt business. As the adoption of AI agents grows, it's critical for organizations to integrate security thinking into their design and deployment. This includes implementing strict controls on agent permissions, monitoring and logging agent behavior, hardening agent tools and APIs, and applying layered protections against manipulation and misuse.

Model and agent dependencies

Cloud-based AI systems increasingly rely on external data sources and tools to perform complex tasks accurately. For example, retrieval-augmented generation (RAG) models depend on grounding data from document stores or vector databases to generate up-to-date, context-aware responses. Similarly, AI agents may be configured to interact with APIs, databases, cloud functions, or internal systems as part of their reasoning or execution loop. These dependencies act as the AI system's supply chain, where a breach in one part can undermine the integrity of the entire system. If attackers tamper with the grounding data, a model's output can be intentionally skewed or poisoned. Likewise, if the tools an agent depends on, such as a cloud automation function, are compromised or misconfigured, the agent could execute malicious actions or leak sensitive information. Securing these dependencies is essential, as attackers may exploit trust in the AI supply chain to manipulate behavior, exfiltrate data, or pivot deeper into the cloud infrastructure.

Across all these components, one theme is clear: the interconnected nature of AI in the cloud means that a single weak link can compromise the entire lifecycle. Data corruption can lead to model failure. Pipeline compromise can lead to infrastructure access. Endpoint manipulation can lead to silent data leaks. This is why AI security posture must be end-to-end, from data to deployment.

Securing AI in the cloud – it all starts with visibility

AI Security Posture Management (AI-SPM), part of Microsoft Defender for Cloud's CNAPP solution, provides security from code to deployed AI models, applications, and agents. It offers comprehensive visibility into AI assets, including data assets, models, endpoints, and agents.
By identifying vulnerabilities and misconfigurations, AI-SPM enables organizations to reduce risks and to detect and respond to threats against AI applications.

Reduce AI application risks with Defender for Cloud

By leveraging its agentless detection capabilities, Defender for Cloud uncovers misconfigurations and attack paths that could be exploited to compromise AI components at every stage of the lifecycle outlined above. These insights empower security teams to focus on critical risks and address them effectively, minimizing the overall risk. For example, as illustrated in Figure 1, an attack path can demonstrate how an attacker might use a virtual machine with a high-severity vulnerability to gain access to an organization's AI platform. This visualization helps security admin teams take preventive actions, safeguarding the AI environment from potential breaches.

The AI-SPM capabilities in Defender for Cloud also support multicloud resources. In another example, shown in Figure 2, the attack path illustrates how an attacker can exploit a vulnerable GCP compute instance to gain access to a custom model deployment in Vertex AI. This scenario underscores the importance of securing every layer of the AI environment, including cloud infrastructure and compute resources, to prevent unauthorized access to sensitive AI components.

In yet another scenario, depicted in Figure 3, an attacker might exploit a vulnerable GCP compute instance not only to access the model itself, but also to target the data used to train the AI model. This type of data poisoning attack could lead to altered model and application behavior, potentially skewing outputs, introducing bias, or corrupting downstream processes. Such attacks emphasize the critical need to secure data integrity across all stages of the AI lifecycle, from ingestion and training pipelines to active deployment. Safeguarding the data layer is as vital as securing the underlying infrastructure to ensure that AI applications remain trustworthy and resilient against threats.

Summary: Build AI Security from the Ground Up

To address these challenges across the whole cloud AI development lifecycle, Microsoft Defender for Cloud provides a suite of security tools tailored for AI workloads. By enabling AI Security Posture Management (AI-SPM) within the Defender CSPM plan, organizations gain comprehensive multicloud posture visibility and risk prioritization across platforms such as Azure AI Foundry, OpenAI services, AWS Bedrock, and GCP Vertex AI. This multicloud approach ensures critical vulnerabilities and potential attack paths are effectively identified and mitigated, creating a unified and secure AI ecosystem. Additionally, Defender for AI Services introduces a runtime protection plan specifically designed for custom-built AI applications. This plan extends security coverage to AI models deployed on Azure AI Foundry and OpenAI services, safeguarding the entire lifecycle, from code to runtime. Together, these integrated solutions empower enterprises to build, deploy, and operate AI technologies securely, even within a diverse and evolving threat landscape. To learn more about Security for AI with Defender for Cloud, visit our website and documentation.
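Visibility starts with knowing which AI resources exist. As a hypothetical illustration of that first step (not how Defender for Cloud's agentless discovery works internally), the following sketch enumerates Azure AI services accounts in a subscription using the azure-mgmt-cognitiveservices package and flags public network exposure; the subscription ID is a placeholder.

```python
# Hedged sketch: build a simple inventory of Azure AI / OpenAI accounts as a
# first visibility step. Illustrative only -- Defender for Cloud's own asset
# discovery is agentless and richer than this.
from azure.identity import DefaultAzureCredential
from azure.mgmt.cognitiveservices import CognitiveServicesManagementClient

SUBSCRIPTION_ID = "<subscription-id>"  # placeholder

client = CognitiveServicesManagementClient(
    credential=DefaultAzureCredential(), subscription_id=SUBSCRIPTION_ID
)

# Enumerate AI services accounts so each can be reviewed for misconfigurations
# such as unrestricted public network access.
for account in client.accounts.list():
    props = account.properties
    public = getattr(props, "public_network_access", None) if props else None
    print(f"{account.name} ({account.kind}): public_network_access={public}")
```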
Introducing the new File Integrity Monitoring with Defender for Endpoint integration

The release of File Integrity Monitoring (FIM) powered by Defender for Endpoint, the final and most complex piece of this puzzle, marks a significant milestone in the Defender for Servers simplification journey. The new FIM solution based on Defender for Endpoint offers real-time monitoring of critical file paths and system files, ensuring that any changes indicating a potential attack are detected immediately. In addition, FIM offers built-in support for relevant security regulatory compliance standards, such as PCI-DSS, CIS, NIST, and others, allowing you to maintain compliance.
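To ground the concept, here is a deliberately simplified illustration of the baseline-and-compare idea behind file integrity monitoring. It is a standalone Python sketch with example paths, not how the Defender for Endpoint-based FIM solution is implemented (which detects changes in real time rather than by re-hashing on demand).

```python
# Conceptual sketch of file integrity monitoring: record trusted hashes, then
# detect drift. Hypothetical, standalone illustration only.
import hashlib
from pathlib import Path

WATCHED = [Path("/etc/passwd"), Path("/etc/ssh/sshd_config")]  # example paths

def sha256_of(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def build_baseline(paths):
    """Record a trusted hash for every monitored file that exists."""
    return {p: sha256_of(p) for p in paths if p.exists()}

def detect_changes(baseline):
    """Re-hash each file and report any drift from the trusted baseline."""
    for path, known_hash in baseline.items():
        if not path.exists():
            yield path, "deleted"
        elif sha256_of(path) != known_hash:
            yield path, "modified"

baseline = build_baseline(WATCHED)
# ... later, on a schedule or in response to a file-system event:
for path, change in detect_changes(baseline):
    print(f"ALERT: {path} was {change}")  # surface for investigation
```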
Hacking Made Easy, Patching Made Optional: A Modern Cyber Tragedy

In today's cyber threat landscape, the tools and techniques required to compromise enterprise environments are no longer confined to highly skilled adversaries or state-sponsored actors. While artificial intelligence is increasingly being used to enhance the sophistication of attacks, the majority of breaches still rely on simple, publicly accessible tools and well-established social engineering tactics. Another major issue is the persistent failure of enterprises to patch common vulnerabilities in a timely manner, despite the availability of fixes and public warnings. This negligence continues to be a key enabler of large-scale breaches, as demonstrated in several recent incidents.

The Rise of AI-Enhanced Attacks

Attackers are now leveraging AI to increase the credibility and effectiveness of their campaigns. One notable example is the use of deepfake technology, synthetic media generated using AI, to impersonate individuals in video or voice calls. North Korean threat actors, for instance, have been observed using deepfake videos and AI-generated personas to conduct fraudulent job interviews with HR departments at Western technology companies. These scams are designed to gain insider access to corporate systems or to exfiltrate sensitive intellectual property under the guise of legitimate employment.

Social Engineering: Still the Most Effective Entry Point

Yet many recent breaches have begun with classic social engineering techniques. In the cases of Coinbase and Marks & Spencer, attackers impersonated employees through phishing or fraudulent communications. Once they had gathered sufficient personal information, they contacted support desks or mobile carriers, convincingly posing as the victims to request password resets or SIM swaps. This impersonation enabled attackers to bypass authentication controls and gain initial access to sensitive systems, which they then leveraged to escalate privileges and move laterally within the network. Threat groups such as Scattered Spider have demonstrated mastery of these techniques, often combining phishing with SIM swap attacks and MFA bypass to infiltrate telecom and cloud infrastructure. Similarly, Solt Thypoon (formerly DEV-0343), linked to North Korean operations, has used AI-generated personas and deepfake content to conduct fraudulent job interviews, gaining insider access under the guise of legitimate employment. These examples underscore the evolving sophistication of social engineering and the need for robust identity verification protocols.

Built for Defense, Used for Breach

Despite the emergence of AI-driven threats, many of the most successful attacks continue to rely on simple, freely available tools that require minimal technical expertise. These tools are widely used by security professionals for legitimate purposes such as penetration testing, red teaming, and vulnerability assessments. However, they are also routinely abused by attackers to compromise systems; case studies abound for tools like Nmap, Metasploit, Mimikatz, BloodHound, and Cobalt Strike. The dual-use nature of these tools underscores the importance of not only detecting their presence but also understanding the context in which they are being used.

From CVE to Compromise

While social engineering remains a common entry point, many breaches are ultimately enabled by known vulnerabilities that remain unpatched for extended periods.
For example, the MOVEit Transfer vulnerability (CVE-2023-34362) was exploited by the Cl0p ransomware group to compromise hundreds of organizations, despite a patch being available. Similarly, the OpenMetadata vulnerabilities (CVE-2024-28255, CVE-2024-28847) allowed attackers to gain access to Kubernetes workloads and leverage them for cryptomining activity days after a fix had been issued. Advanced persistent threat groups such as APT29 (also known as Cozy Bear) have historically exploited unpatched systems to maintain long-term access and conduct stealthy operations. Their use of credential harvesting tools like Mimikatz and lateral movement frameworks such as Cobalt Strike highlights the critical importance of timely patch management, not just for ransomware defense, but also for countering nation-state actors.

Recommendations

To reduce the risk of enterprise breaches stemming from tool misuse, social engineering, and unpatched vulnerabilities, organizations should adopt the following practices:

1. Patch Promptly and Systematically

Ensure that software updates and security patches are applied in a timely and consistent manner. This involves automating patch management processes to reduce human error and delay, while prioritizing vulnerabilities based on their exploitability and exposure (an illustrative prioritization sketch appears at the end of this post). Microsoft Intune can be used to enforce update policies across devices, while Windows Autopatch simplifies the deployment of updates for Windows and Microsoft 365 applications. To identify and rank vulnerabilities, Microsoft Defender Vulnerability Management offers risk-based insights that help focus remediation efforts where they matter most.

2. Implement Multi-Factor Authentication (MFA)

To mitigate credential-based attacks, MFA should be enforced across all user accounts. Conditional access policies should be configured to adapt authentication requirements based on contextual risk factors such as user behavior, device health, and location. Microsoft Entra Conditional Access allows for dynamic policy enforcement, while Microsoft Entra ID Protection identifies and responds to risky sign-ins. Organizations should also adopt phishing-resistant MFA methods, including FIDO2 security keys and certificate-based authentication, to further reduce exposure.

3. Identity Protection

Access Reviews and Least Privilege Enforcement: Conducting regular access reviews ensures that users retain only the permissions necessary for their roles. Applying least privilege principles and adopting Microsoft Zero Trust Architecture limits the potential for lateral movement in the event of a compromise. Microsoft Entra Access Reviews automates these processes, while Privileged Identity Management (PIM) provides just-in-time access and approval workflows for elevated roles.

Just-in-Time Access and Risk-Based Controls: Standing privileges should be minimized to reduce the attack surface. Risk-based conditional access policies can block high-risk sign-ins and enforce additional verification steps. Microsoft Entra ID Protection identifies risky behaviors and applies automated controls, while Conditional Access ensures access decisions are based on real-time risk assessments to block or challenge high-risk authentication attempts.

Password Hygiene and Secure Authentication: Promoting strong password practices and transitioning to passwordless authentication enhances security and user experience.
Microsoft Authenticator supports multi-factor and passwordless sign-ins, while Windows Hello for Business enables biometric authentication using secure hardware-backed credentials.

4. Deploy SIEM and XDR for Detection and Response

A robust detection and response capability is vital for identifying and mitigating threats across endpoints, identities, and cloud environments. Microsoft Sentinel serves as a cloud-native SIEM that aggregates and analyses security data, while Microsoft Defender XDR integrates signals from multiple sources to provide a unified view of threats and automate response actions.

5. Map and Harden Attack Paths

Organizations should regularly assess their environments for attack paths such as privilege escalation and lateral movement. Tools like Microsoft Defender for Identity help uncover lateral movement paths, while Microsoft Identity Threat Detection and Response (ITDR) integrates identity signals with threat intelligence to automate response. These capabilities are accessible via the Microsoft Defender portal, which includes an attack path analysis feature for prioritizing multicloud risks.

6. Stay Current with Threat Actor TTPs

Monitor the evolving tactics, techniques, and procedures (TTPs) employed by sophisticated threat actors. Understanding these behaviours enables organizations to anticipate attacks and strengthen defenses proactively. Microsoft Defender Threat Intelligence provides detailed profiles of threat actors and maps their activities to the MITRE ATT&CK framework. Complementing this, Microsoft Sentinel allows security teams to hunt for these TTPs across enterprise telemetry and correlate signals to detect emerging threats.

7. Build Organizational Awareness

Organizations should train staff to identify phishing, impersonation, and deepfake threats. Simulated attacks help improve response readiness and reduce human error. Use Attack Simulation Training in Microsoft Defender for Office 365 to run realistic phishing scenarios and assess user vulnerability. Additionally, educate users about consent phishing, where attackers trick individuals into granting access to malicious apps.

Conclusion

The democratization of offensive security tooling, combined with the persistent failure to patch known vulnerabilities, has significantly lowered the barrier to entry for cyber attackers. Organizations must recognize that the tools used against them are often the same ones available to their own security teams. The key to resilience lies not in avoiding these tools, but in mastering them: using them to simulate attacks, identify weaknesses, and build a proactive defense. Cybersecurity is no longer a matter of if, but when. The question is: will you detect the attacker before they achieve their objective? Will you be able to stop them before they reach your most sensitive data?
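As a companion to recommendation 1, the toy sketch below illustrates risk-based prioritization: ranking a patch backlog by severity, known exploitation, and exposure. The scoring weights and the third CVE entry are made up for illustration, and the severity values are indicative only; Microsoft Defender Vulnerability Management uses its own, far richer risk model.

```python
# Toy illustration of risk-based vulnerability prioritization (hypothetical
# scoring -- not any product's actual model). Severity values are indicative.
from dataclasses import dataclass

@dataclass
class Vuln:
    cve: str
    cvss: float              # base severity, 0-10
    exploited_in_wild: bool  # e.g. listed in a known-exploited catalog
    internet_exposed: bool   # asset reachable from the internet

def risk_score(v: Vuln) -> float:
    score = v.cvss
    if v.exploited_in_wild:
        score += 5.0   # active exploitation dominates the ranking
    if v.internet_exposed:
        score += 3.0   # exposure widens the attack surface
    return score

backlog = [
    Vuln("CVE-2023-34362", 9.8, True, True),   # MOVEit Transfer
    Vuln("CVE-2024-28255", 9.8, True, False),  # OpenMetadata
    Vuln("CVE-XXXX-0001", 5.4, False, False),  # hypothetical low-risk item
]
for v in sorted(backlog, key=risk_score, reverse=True):
    print(f"{v.cve}: priority {risk_score(v):.1f}")
```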
Additional read:

- Gartner Predicts 30% of Enterprises Will Consider Identity Verification and Authentication Solutions Unreliable in Isolation Due to AI-Generated Deepfakes by 2026
- Cyber security breaches survey 2025 - GOV.UK
- Jasper Sleet: North Korean remote IT workers' evolving tactics to infiltrate organizations | Microsoft Security Blog
- MOVEit Transfer vulnerability
- Solt Thypoon
- Scattered Spider
- SIM swaps
- Attackers exploiting new critical OpenMetadata vulnerabilities on Kubernetes clusters | Microsoft Security Blog
- Microsoft Defender Vulnerability Management - Microsoft Defender Vulnerability Management | Microsoft Learn
- Zero Trust Architecture | NIST
- tactics, techniques, and procedures (TTP) - Glossary | CSRC
- https://learn.microsoft.com/en-us/security/zero-trust/deploy/overview
Microsoft Defender for Cloud expands U.S. Gov Cloud support for CSPM and server security

U.S. government organizations face unique security and compliance challenges as they migrate essential workloads to the cloud. To help meet these needs, Microsoft Defender for Cloud has expanded support in the Government Cloud with Defender cloud security posture management (CSPM) and Defender for Servers Plan 2. This expansion helps strengthen security posture with advanced threat protection, vulnerability management, and contextual risk insights across hybrid and multicloud environments.

Defender CSPM and Defender for Servers are available in the following Microsoft Government Clouds:

- Microsoft Azure Government (MAG) – FedRAMP High, DISA IL4, DISA IL5
- Government Community Cloud High (GCCH) – FedRAMP High, DISA IL4

Defender for Cloud offers support for CSPM in U.S. Government Cloud

First, Defender CSPM is generally available for U.S. Government cloud customers. This expansion brings advanced cloud security posture management capabilities to U.S. federal and government agencies, including the Department of Defense (DoD) and civilian agencies, helping them strengthen their security posture and compliance in the cloud. Defender CSPM empowers agencies to continuously discover, assess, monitor, and improve their cloud security posture, including the ability to monitor and correct configuration drift, ensuring they meet regulatory requirements and proactively manage risk in highly regulated environments.

Additional benefits for government agencies:

- Continuous compliance assurance: Unlike static audits, Defender CSPM provides real-time visibility into the security posture of cloud environments. This enables agencies to demonstrate ongoing compliance with federal standards at any time, not just during audit windows.
- Risk-based prioritization: Defender CSPM uses contextual insights and attack path analysis to help security teams focus on the most critical risks first, maximizing impact while optimizing limited resources.
- Agentless monitoring: With agentless scanning, agencies can assess workloads without deploying additional software, which is ideal for sensitive or legacy systems.

[Figure: Security recommendations in Defender CSPM]

To learn more about Defender CSPM, visit our technical documentation.

Defender for Cloud now offers full feature parity for server security in U.S. Government Cloud

In addition to Defender CSPM, we're also expanding our support for server security in the U.S. GovCloud. Government agencies face mounting challenges in securing the servers that support their critical operations and sensitive data. As server environments expand across on-premises, hybrid, and multicloud platforms, maintaining consistent security controls and compliance with federal standards like FedRAMP and NIST SP 800-53 becomes increasingly difficult. Manual processes and periodic audits can't keep up with configuration drift, unpatched vulnerabilities, and evolving threats, leaving agencies exposed to breaches and compliance risks. Defender for Servers provides continuous, automated threat protection, vulnerability management, and compliance monitoring across all server environments, enabling agencies to safeguard their infrastructure and maintain a strong security posture. We are excited to share that all capabilities in Defender for Servers Plan 2 are now available in U.S.
GovCloud, including these newly added capabilities:

- Agent-based and agentless vulnerability assessment recommendations
- Secrets detection recommendations
- EDR detection recommendations
- Agentless malware detection
- File integrity monitoring
- Baseline recommendations

Customers can start using all capabilities of Defender for Servers Plan 2 in U.S. Government Cloud starting today. To learn more about Defender for Servers, visit our technical documentation.

Get started today!

To gain access to the robust capabilities provided by Defender CSPM and Defender for Servers, you need to enable the plans on your subscription:

1. Sign in to the Azure portal.
2. Search for and select Microsoft Defender for Cloud.
3. In the Defender for Cloud menu, select Environment settings.
4. Select the relevant Azure subscription.
5. On the Defender plans page, toggle the Defender CSPM plan and/or Defender for Servers to On.
6. Select Save.
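Teams that automate enablement at scale can also set a plan's pricing tier programmatically instead of clicking through the portal. The sketch below does this through the Microsoft.Security pricings REST API from Python; the api-version and subPlan values shown are assumptions to verify against the current pricings API documentation before use.

```python
# Hedged sketch: enable a Defender plan by setting its pricing tier via the
# Azure REST API. The api-version and subPlan below are assumptions -- check
# the Microsoft.Security/pricings documentation for current values.
import requests
from azure.identity import DefaultAzureCredential

SUBSCRIPTION_ID = "<subscription-id>"   # placeholder
PLAN_NAME = "VirtualMachines"           # Defender for Servers plan name
API_VERSION = "2024-01-01"              # assumed; verify against current docs

token = DefaultAzureCredential().get_token("https://management.azure.com/.default")
url = (
    f"https://management.azure.com/subscriptions/{SUBSCRIPTION_ID}"
    f"/providers/Microsoft.Security/pricings/{PLAN_NAME}"
)
resp = requests.put(
    url,
    params={"api-version": API_VERSION},
    headers={"Authorization": f"Bearer {token.token}"},
    # "Standard" turns the plan on; subPlan "P2" selects Defender for Servers Plan 2.
    json={"properties": {"pricingTier": "Standard", "subPlan": "P2"}},
)
resp.raise_for_status()
print("Plan pricing tier:", resp.json()["properties"]["pricingTier"])
```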
Announcing a New Microsoft Security Virtual Training Day

We're thrilled to announce a brand-new opportunity for learning and growth: Microsoft Virtual Training Day: Strengthen Cloud Security with Microsoft Defender for Cloud! This free, online event is designed to empower professionals with the skills and knowledge needed to thrive in today's digital landscape. During this training, you'll be able to:

- Learn how to increase cloud security using Microsoft Defender for Cloud and how to deploy security across your DevOps workflows.
- Discover how to detect risks, maintain compliance, and protect hybrid and multicloud environments.
- Find out how to defend servers, containers, storage, and databases using built-in security.
- Chat with Microsoft experts: ask questions and get answers on real-world security challenges.

Here's what you can expect:

Part 1:
- Introduction
- What a comprehensive cloud-native application protection platform looks like
- Break: 10 minutes
- Starting with proactive security
- Break: 10 minutes
- Operationalizing Posture Management
- Closing question and answer

Part 2:
- Introduction
- Comprehensive workload protection (part 1)
- Break: 10 minutes
- Comprehensive workload protection (part 2)
- Automating responses
- Closing question and answer

Why Attend this Virtual Training Day?

Microsoft Virtual Training Days offer a host of benefits:

- Flexible learning: Attend from anywhere, at your own pace.
- Expert instruction: Gain insights from industry leaders and certified professionals.
- Certification opportunities: Many sessions prepare you for Microsoft certifications.
- Networking: Connect with peers and professionals across industries.
- Free resources: Access downloadable materials and follow-up learning paths.
- Earn a voucher: Upon completion of the event, the exam is offered at a 50% discount off the exam rate.

Don't miss out on this opportunity. Go and register today! For more information on all things security, please visit our Security Hub.
Introducing Microsoft Sentinel data lake

Today, we announced a significant expansion of Microsoft Sentinel's capabilities through the introduction of Sentinel data lake, now rolling out in public preview. Security teams cannot defend what they cannot see and analyze. With exploding volumes of security data, organizations are struggling to manage costs while maintaining effective threat coverage. Do-it-yourself security data architectures have perpetuated data silos, which in turn have reduced the effectiveness of AI solutions in security operations. With Sentinel data lake, we are taking a major step to address these challenges.

Microsoft Sentinel data lake enables a fully managed, cloud-native data lake that is purposefully designed for security, right inside Sentinel. Built on a modern lake architecture and powered by Azure, Sentinel data lake simplifies security data management, eliminates security data silos, and enables cost-effective long-term security data retention, with the ability to run multiple forms of analytics on a single copy of that data. Security teams can now store and manage all of their security data. This takes the market-leading capabilities of Sentinel SIEM and supercharges them even further. Customers can leverage the data lake for retroactive TI matching and hunting over a longer time horizon, track low-and-slow attacks, conduct forensics analysis, build anomaly insights, and meet reporting and compliance needs. By unifying security data, Sentinel data lake provides the AI-ready data foundation for AI solutions.

Let's look at some of Sentinel data lake's core features.

Simplified onboarding and enablement inside the Defender portal: Customers can easily discover and enable the new data lake from within the Defender portal, either from the banner on the home page or from settings. Setting up a modern data lake is now just a click away, empowering security teams to get started quickly without a complex setup.

Simplified security data management: Sentinel data lake works seamlessly with existing Sentinel connectors. It brings together security logs from Microsoft services across M365, Defender, Azure, Entra, Purview, and Intune, plus third-party sources like AWS and GCP, and network and firewall data from 350+ connectors and solutions. The data lake supports Sentinel's existing table schemas, while customers can also create custom connectors to bring raw data into the data lake or transform it during ingestion. In the future, we will enable additional industry-standard schemas. The data lake expands beyond just activity logs by including a native asset store. Critical asset information is added to the data lake using new Sentinel data connectors for Microsoft 365, Entra, and Azure, enabling a single place to analyze activity and asset data enriched with threat intelligence. A new table management experience makes it easy for customers to choose where to send and store data, as well as set related retention policies to optimize their security data estate. Customers can easily send critical, high-fidelity security data to the analytics tier or choose to send high-volume, low-fidelity logs to the new data lake tier. Any data brought into the analytics tier is automatically mirrored into the data lake at no additional charge, making the data lake the central location for all security data.

Advanced data analysis capabilities over data in the data lake: Sentinel data lake stores all security data in an open format to enable analysts to do multi-modal security analytics on a single copy of data.
Through the new data lake exploration experience in the Defender portal, customers can leverage Kusto Query Language to analyze historical data using the full power of Kusto. Since the data lake supports the Sentinel table schema, advanced hunting queries can be run directly on the data lake. Customers can also schedule long-running jobs, either once or on a recurring schedule, that perform complex analysis on historical data for in-depth security insights. Insights generated from the data lake can be easily elevated to the analytics tier and leveraged in Sentinel for threat investigation and response.

Additionally, as part of the public preview, we are also releasing a new Sentinel Visual Studio Code extension that enables security teams to easily connect to the same data lake data and use Python notebooks, as well as Spark and ML libraries, to deeply analyze lake data for anomalies. Since the environment is fully managed, there is no compute infrastructure to set up. Customers can just install the Visual Studio Code extension and use AI coding agents like GitHub Copilot to build a notebook and execute it in the managed environment. These notebooks can also be scheduled as jobs, and the resulting insights can be elevated to the analytics tier and leveraged in Sentinel for threat investigation and response.

Flexible business model: Sentinel data lake enables customers to separate their data ingestion and retention needs from their security analytics needs, allowing them to ingest and store data cost-effectively and then pay separately when analyzing data for their specific needs.

Let's put this all together and show an example of how a customer can operationalize and derive value from the data lake for retrospective threat intelligence matching in Microsoft Sentinel. Network logs are typically high-volume, but they often contain key insights for detecting the initial entry point of an attack, a command-and-control connection, lateral movement, or an exfiltration attempt. Customers can now send these high-volume logs to the data lake tier. Next, they can create a Python notebook that joins the latest threat intelligence from Microsoft Defender Threat Intelligence against the network logs to scan for any connections to or from a suspicious IP or domain. They can schedule this notebook to run as a job, and any insights can then be promoted to the analytics tier and leveraged to enrich ongoing investigations, hunts, response, or forensics analysis. All this is possible cost-effectively without having to set up any complex infrastructure, enabling security teams to achieve deeper insights (an illustrative notebook sketch follows at the end of this post).

This preview is now rolling out for customers in the Defender portal in our supported regions. To learn more, check out our Mechanics video and our documentation, or talk to your account teams.

Get started today

Join us as we redefine what's possible in security operations:

- Onboard Sentinel data lake: https://aka.ms/sentineldatalakedocs
- Explore our pricing: https://aka.ms/sentinel/pricingblog
- For the supported regions, please refer to https://aka.ms/sentinel/datalake/geos
- Learn more about our MDTI news: http://aka.ms/mdti-convergence
- General Availability of Auxiliary Logs and Reduced Pricing
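To make the retrospective TI-matching scenario above concrete, here is an illustrative Python sketch of the kind of join such a notebook might perform. It uses pandas with made-up table and column names; the actual Sentinel data lake notebook APIs and schemas are not shown in this post, so treat everything here as an assumption.

```python
# Illustrative notebook cell for retrospective TI matching. Hypothetical:
# table names, columns, and data loading are placeholders, not the actual
# Sentinel data lake API.
import pandas as pd

# Assume these frames were loaded from the data lake: months of network logs
# and a current threat-intelligence indicator feed.
network_logs = pd.DataFrame({
    "TimeGenerated": ["2025-01-03T10:02:11Z", "2025-02-14T22:41:05Z"],
    "SourceIp": ["10.0.0.4", "10.0.0.9"],
    "DestinationIp": ["203.0.113.50", "198.51.100.7"],
})
ti_indicators = pd.DataFrame({
    "Indicator": ["203.0.113.50"],
    "ThreatType": ["C2"],  # command-and-control infrastructure
})

# Join historical connections against the latest indicators: any match is a
# candidate insight to promote to the analytics tier for investigation.
hits = network_logs.merge(
    ti_indicators, left_on="DestinationIp", right_on="Indicator", how="inner"
)
print(hits[["TimeGenerated", "SourceIp", "DestinationIp", "ThreatType"]])
```

Scheduled as a recurring job over the data lake tier, a join like this turns cheaply retained network logs into actionable, promotable detections.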
New innovations to protect custom AI applications with Defender for Cloud

Today's blog post introduced new capabilities to enhance AI security and governance across multi-model and multicloud environments. This follow-on blog post dives deeper into how Microsoft Defender for Cloud can help organizations protect their custom-built AI applications.

The AI revolution has been transformative for organizations, driving them to integrate sophisticated AI features and products into their existing systems to maintain a competitive edge. However, this rapid development often outpaces their ability to establish adequate security measures for these advanced applications. Moreover, traditional security teams frequently lack the visibility and actionable insights they need, leaving organizations vulnerable to increasingly sophisticated attacks and struggling to protect their AI resources.

To address these challenges, we are excited to announce the general availability (GA) of threat protection for AI services, a capability that enhances threat protection in Microsoft Defender for Cloud. Starting May 1, 2025, the new Defender for AI Services plan will support models in Azure AI and Azure OpenAI Services.

Note: Effective August 1, 2025, the price for Defender for AI Services was updated to $0.0008 per 1,000 tokens per month (USD – list price).

"Security is paramount at Icertis. That's why we've partnered with Microsoft to host our Contract Intelligence platform on Azure, fortified by Microsoft Defender for Cloud. As large language models (LLMs) became mainstream, our Icertis ExploreAI Service leveraged generative AI and proprietary models to transform contract management and create value for our customers. Microsoft Defender for Cloud emerged as our natural choice for the first line of defense against AI-related threats. It meticulously evaluates the security of our Azure OpenAI deployments, monitors usage patterns, and promptly alerts us to potential threats. These capabilities empower our Security Operations Center (SOC) teams to make more informed decisions based on AI detections, ensuring that our AI-driven contract management remains secure, reliable, and ahead of emerging threats." – Subodh Patil, Principal Cyber Security Architect at Icertis

With these new threat protection capabilities, security teams can:

- Monitor suspicious activity in Azure AI resources, guided by security frameworks like the OWASP Top 10 for LLM applications, to defend against attacks on AI applications such as direct and indirect prompt injections, wallet abuse, suspicious access to AI resources, and more.
- Triage and act on detections using contextual and insightful evidence, including prompt and response evidence, application and user context, grounding data origin breadcrumbs, and Microsoft Threat Intelligence details.
- Gain visibility from cloud to code (right to left) for better posture discovery and remediation by translating runtime findings into posture insights, like smart discovery of grounding data sources. (Requires the Defender CSPM posture plan to be fully utilized.)
- Leverage frictionless onboarding with one-click, agentless enablement on Azure resources, including native integration with Defender XDR that enables advanced hunting and incident correlation capabilities.

Detect and protect against AI threats

Defender for Cloud helps organizations secure their AI applications from the latest threats. It identifies vulnerabilities and protects against sophisticated attacks, such as jailbreaks, invisible encodings, malicious URLs, and sensitive data exposure.
It also protects against novel threats like ASCII smuggling, which could otherwise compromise the integrity of AI applications. Defender for Cloud helps ensure the safety and reliability of critical AI resources by leveraging signals from prompt shields, AI analysis, and Microsoft Threat Intelligence. This provides comprehensive visibility and context, enabling security teams to quickly detect and respond to suspicious activities.

Prompt analysis-based detections aren't the full story. Detections are also designed to analyze application and user behavior to detect anomalies and suspicious behavior patterns. Analysts can leverage insights into user context, application context, and access patterns, and use Microsoft Threat Intelligence tools to uncover complex attacks or threats that escape prompt-based content filtering detectors. For example, wallet attacks are a common threat where attackers aim to cause financial damage by abusing resource capacity. These attacks often appear innocent because the prompts' content looks harmless, but the attacker's intention is to exploit the resource capacity when it is left unconstrained. While such prompts might go unnoticed because they don't contain suspicious content, examining the application's historical behavior patterns can reveal anomalies and lead to detection.

Respond and act on AI detections effectively

The lack of visibility into AI applications is a real struggle for security teams, and these detections contain evidence that would otherwise be hard or impossible for most SOC analysts to access. For example, in the credential exposure detection below, the user was able to solicit secrets from the organizational data connected to the Contoso Outdoors chatbot app. How would the analyst go about understanding this detection?

The detection evidence shows the user prompt and the model response (secrets are redacted), and it explicitly calls out what kind of secret was exposed. The prompt evidence of this suspicious interaction is rarely stored, logged, or accessible anywhere outside the detection. The prompt analysis engine also ties the user request to the model response, making sense of the interaction.

What is most helpful in this specific detection is the application and user context. The application name instantly assists the SOC in determining whether this is a valid scenario for this application. The Contoso Outdoors chatbot is not supposed to access organizational secrets, so this is worrisome. Next, the user context reveals who was exposed to the data, through what IP (internal or external), and their supposed intention. Most AI applications are built behind AI gateways, proxies, or Azure API Management (APIM) instances, making it challenging for SOC analysts to obtain these details through conventional logging methods or network solutions. Defender for Cloud addresses this issue with a straightforward approach that fetches these details directly from the application's API request to Azure AI. Now the analyst can reach out to the user (internal) or block (external) the identity or the IP.

Finally, to resolve this incident, the SOC analyst intends to remove and decommission the secret to mitigate the impact of the exposure. The final piece of evidence presented reveals the origin of the exposed data. This evidence substantiates that the leak is genuine and originates from internal organizational data.
It also provides the analyst with a critical breadcrumb trail to successfully remove the secret from the data store and communicate with the owner on next steps.

Trace the invisible lines between your AI application and the grounding sources

Defender for Cloud excels in continuous feedback throughout the application lifecycle. While posture capabilities help triage detections, runtime protection provides crucial insights from traffic analysis, such as discovering the data stores used for grounding AI applications. The AI application's connection to these stores is often hidden from current control-plane and data-plane tools. The credential leak example provided a real-world connection that was then integrated into our resource graph, uncovering previously overlooked data stores. Tagging these stores improves attack path and risk factor identification during posture scanning, ensuring safe configuration. This approach reinforces the feedback loop between runtime protection and posture assessment, maximizing cloud-native application protection platform (CNAPP) effectiveness.

Align with AI security frameworks

Our guiding principles align with the widely recognized OWASP Top 10 for LLMs. By combining our posture capabilities with runtime monitoring, we can comprehensively address a wide range of threats, enabling us to proactively prepare for and detect AI-specific breaches with Defender for Cloud. As the industry evolves and new regulations emerge, frameworks such as OWASP, the EU AI Act, and NIST 600-1 are shaping security expectations. Our detections are aligned with these frameworks as well as the MITRE ATLAS framework, ensuring that organizations stay compliant and are prepared for future regulations and standards.

Get started with threat protection for AI services

Getting started with threat protection capabilities in Defender for Cloud takes a single click to enable on your relevant subscription in Azure. The integration is agentless and requires zero intervention in the application development lifecycle. More importantly, the native integration directly inside the Azure AI pipeline does not entail scale or performance degradation in the application runtime. Detections appear in Defender for Cloud's portal and are also seamlessly connected to Defender XDR and Sentinel through the existing connectors, so SOC analysts can leverage the correlation and analysis capabilities of Defender XDR from day one.

Explore these capabilities today with a free 30-day trial*. You can leverage your existing AI application and simply enable the "AI workloads" plan on your chosen subscription to start detecting and responding to AI threats.

*The trial free period is limited to up to 75B tokens scanned.

Learn more about the innovations designed to help your organization protect data, defend against cyber threats, and stay compliant. Join Microsoft leaders online at Microsoft Secure on April 9.

Explore additional resources

- Learn more about Runtime protection
- Learn more about Posture capabilities
- Watch the Defender for Cloud in the Field episode on securing AI applications
- Get started with Defender for Cloud