microsoft defender for cloud
171 TopicsEnhance AI security and governance across multi-model and multi-cloud environments
Generative AI adoption is accelerating, with AI transformation happening in real-time across various industries. This rapid adoption is reshaping how organizations operate and innovate, but it also introduces new challenges that require careful attention. At Ignite last fall, we announced several new capabilities to help organizations secure their AI transformation. These capabilities were designed to address top customer priorities such as preventing data oversharing, safeguarding custom AI, and preparing for emerging AI regulations. Organizations like Cummins, KPMG, and Mia Labs have leveraged these capabilities to confidently strengthen their AI security and governance efforts. However, despite these advancements, challenges persist. One major concern is the rise of shadow AI—applications used without IT or security oversight. In fact, 78% of AI users report bringing their own AI tools, such as ChatGPT and DeepSeek, into the workplace 1 . Additionally, new threats, like indirect prompt injection attacks, are emerging, with 77% of organizations expressing concerns and 11% of organizations identifying them as a critical risk 2 . To address these challenges, we are excited to announce new features and capabilities that help customers do the following: Prevent risky access and data leakage in shadow AI with granular access controls and inline data security capabilities Manage AI security posture across multi-cloud and multi-model environments Detect and respond to new AI threats, such as indirect prompt injections and wallet abuse Secure and govern data in Microsoft 365 Copilot and beyond In this blog, we’ll explore these announcements and demonstrate how they help organizations navigate AI adoption with confidence, mitigating risks, and unlocking AI’s full potential on their transformation journey. Prevent risky access and data leakage in shadow AI With the rapid rise of generative AI, organizations are increasingly encountering unauthorized employee use of AI applications without IT or security team approval. This unsanctioned and unprotected usage has given rise to “shadow AI,” significantly heightening the risk of sensitive data exposure. Today, we are introducing a set of access and data security controls designed to support a defense-in-depth strategy, helping you mitigate risks and prevent data leakage in third-party AI applications. Real-time access controls to shadow AI The first line of defense against security risks in AI applications is controlling access. While security teams can use endpoint controls to block access for all users across the organization, this approach is often too restrictive and impractical. Instead, they need more granular controls at the user level to manage access to SaaS-based AI applications. Today we are announcing the general availability of the AI web category filter in Microsoft Entra Internet Access to help enforce access controls that govern which users and groups have access to different AI applications. Internet Access deep integration with Microsoft Entra ID extends Conditional Access to any AI application, enabling organizations to apply AI access policies with granularity. By using Conditional Access as the policy control engine, organizations can enforce policies based on user roles, locations, device compliance, user risk levels, and other conditions, ensuring secure and adaptive access to AI applications. For example, with Internet Access, organizations can allow your strategy team to experiment with all or most consumer AI apps while blocking those apps for highly privileged roles, such as accounts payable or IT infrastructure admins. For even greater security, organizations can further restrict access to all AI applications if Microsoft Entra detects elevated identity risk. Inline discovery and protection of sensitive data Once users gain access to sanctioned AI applications, security teams still need to ensure that sensitive data isn’t shared with those applications. Microsoft Purview provides data security capabilities to prevent users from sending sensitive data to AI applications. Today, we are announcing enhanced Purview data security capabilities for the browser available in preview in the coming weeks. The new inline discovery & protection controls within Microsoft Edge for Business detect and block sensitive data from being sent to AI apps in real-time, even if typed directly. This prevents sensitive data leaks as users interact with consumer AI applications, starting with ChatGPT, Google Gemini, and DeepSeek. For example, if an employee attempts to type sensitive details about an upcoming merger or acquisition into Google Gemini to generate a written summary, the new inline protection controls in Microsoft Purview will block the prompt from being submitted, effectively blocking the potential leaks of confidential data to an unsanctioned AI app. This augments existing DLP controls for Edge for Business, including protections that prevent file uploads and the pasting of sensitive content into AI applications. Since inline protection is built natively into Edge for Business, newly deployed policies automatically take effect in the browser even if endpoint DLP is not deployed to the device. : Inline DLP in Edge for Business prevents sensitive data from being submitted to consumer AI applications like Google Gemini by blocking the action. The new inline protection controls are integrated with Adaptive Protection to dynamically enforce different levels of DLP policies based on the risk level of the user interacting with the AI application. For example, admins can block low-risk users from submitting prompts containing the highest-sensitivity classifiers for their organization, such as M&A-related data or intellectual property, while blocking prompts containing any sensitive information type (SIT) for elevated-risk users. Learn more about inline discovery & protection in the Edge for Business browser in this blog. In addition to the new capabilities within Edge for Business, today we are also introducing Purview data security capabilities for the network layer available in preview starting in early May. Enabled through integrations with Netskope and iboss to start, organizations will be able to extend inline discovery of sensitive data to interactions between managed devices and untrusted AI sites. By integrating Purview DLP with their SASE solution (e.g. Netskope and iBoss), data security admins can gain visibility into the use of sensitive data on the network as users interact with AI applications. These interactions can originate from desktop applications such as the ChatGPT desktop app or Microsoft Word with a ChatGPT plugin installed, or non-Microsoft browsers such as Opera and Brave that are accessing AI sites. Using Purview Data Security Posture Management (DSPM) for AI, admins will also have visibility into how these interactions contribute to organizational risk and can take action through DSPM for AI policy recommendations. For example, if there is a high volume of prompts containing sensitive data sent to ChatGPT, DSPM for AI will detect and recommend a new DLP policy to help mitigate this risk. Learn more about inline discovery for the network, including Purview integrations with Netskope and iBoss, in this blog. Manage AI security posture across multi-cloud and multi-model environments In today’s rapidly evolving AI landscape, developers frequently leverage multiple cloud providers to optimize cost, performance, and availability. Different AI models excel at various tasks, leading developers to deploy models from various providers for different use cases. Consequently, managing security posture across multi-cloud and multi-model environments has become essential. Today, Microsoft Defender for Cloud supports deployed AI workloads across Azure OpenAI Service, Azure Machine Learning, and Amazon Bedrock. To further enhance our security coverage, we are expanding AI Security Posture Management (AI-SPM) in Defender for Cloud to improve compatibility with additional cloud service providers and models. This includes: Support for Google Vertex AI models Enhanced support for Azure AI Foundry model catalog and custom models With this expansion, AI-SPM in Defender for Cloud will now offer the discovery of the AI inventory and vulnerabilities, attack path analysis, and recommended actions to address risks in Google VertexAI workloads. Additionally, it will support all models in Azure AI Foundry model catalog, including Meta Llama, Mistral, DeepSeek, as well as custom models. This expansion ensures a consistent and unified approach to managing AI security risks across multi-model and multi-cloud environments. Support for Google Vertex AI models will be available in public preview starting May 1, while support for Azure AI Foundry model catalog and custom models is generally available today. Learn More. 2: Microsoft Defender for Cloud detects an attack path to a DeepSeek R1 workload. In addition, Defender for Cloud will also offer a new data and AI security dashboard. Security teams will have access to an intuitive overview of their datastores and AI services across their multi-cloud environment, top recommendations, and critical attack paths to prioritize and accelerate remediation. The dashboard will be generally available on May 1. The new data & AI security dashboard in Microsoft Defender for Cloud provides a comprehensive overview of your data and AI security posture. These new capabilities reflect Microsoft’s commitment to helping organizations address the most critical security challenges in managing AI security posture in their heterogeneous environments. Detect and respond to new AI threats Organizations are integrating generative AI into their workflows and facing new security risks unique to AI. Detecting and responding to these evolving threats is critical to maintaining a secure AI environment. The Open Web Application Security Project (OWASP) provides a trusted framework for identifying and mitigating such vulnerabilities, such as prompt injection and sensitive information disclosure. Today, we are announcing Threat protection for AI services, a new capability that enhances threat protection in Defender for Cloud, enabling organizations to secure custom AI applications by detecting and responding to emerging AI threats more effectively. Building on the OWASP Top 10 risks for LLM applications, this capability addresses those critical vulnerabilities highlighted on the top 10 list, such as prompt injections and sensitive information disclosure. Threat protection for AI services helps organizations identify and mitigate threats to their custom AI applications using anomaly detection and AI-powered insights. With this announcement, Defender for Cloud will now extend its threat protection for AI workloads, providing a rich suite of new and enriched detections for Azure OpenAI Service and models in the Azure AI Foundry model catalog. New detections include direct and indirect prompt injections, novel attack techniques like ASCII smuggling, malicious URL in user prompts and AI responses, wallet abuse, suspicious access to AI resources, and more. Security teams can leverage evidence-based security alerts to enhance investigation and response actions through integration with Microsoft Defender XDR. For example, in Microsoft Defender XDR, a SOC analyst can detect and respond to a wallet abuse attack, where an attacker exploits an AI system to overload resources and increase costs. The analyst gains detailed visibility into the attack, including the affected application, user-entered prompts, IP address, and other suspicious activities performed by the bad actor. With this information, the SOC analyst can take action and block the attacker from accessing the AI application, preventing further risks. This capability will be generally available on May 1. Learn More. : Security teams can investigate new detections of AI threats in Defender XDR. Secure and govern data in Microsoft 365 Copilot and beyond Data oversharing and non-compliant AI use are significant concerns when it comes to securing and governing data in Microsoft Copilots. Today, we are announcing new data security and compliance capabilities. New data oversharing insights for unclassified data available in Microsoft Purview DSPM for AI: Today, we are announcing the public preview of on-demand classification for SharePoint and OneDrive. This new capability gives data security admins visibility into unclassified data stored in SharePoint and OneDrive and enables them to classify that data on demand. This helps ensure that Microsoft 365 Copilot is indexing and referencing files in its responses that have been properly classified. Previously, unclassified and unscanned files did not appear in DSPM for AI oversharing assessments. Now admins can initiate an on-demand data classification scan, directly from the oversharing assessment, ensuring that older or previously unscanned files are identified, classified, and incorporated into the reports. This allows organizations to detect and address potential risks more comprehensively. For example, an admin can initiate a scan of legacy customer contracts stored in a specified SharePoint library to detect and classify sensitive information such as account numbers or contact information. If these newly classified documents match the classifiers included in any existing auto-labeling policies, they will be automatically labeled. This helps ensure that documents containing sensitive information remain protected when they are referenced in Microsoft 365 Copilot interactions. Learn More. Security teams can trigger on-demand classification scan results in the oversharing assessment in Purview DSPM for AI. Secure and govern data in Security Copilot and Copilot in Fabric: We are excited to announce the public preview of Purview for Security Copilot and Copilot in Fabric, starting with Copilot in Power BI, offering DSPM for AI, Insider Risk Management, and data compliance controls, including eDiscovery, Audit, Data Lifecycle Management, and Communication Compliance. These capabilities will help organizations enhance data security posture, manage compliance, and mitigate risks more effectively. For example, admins can now use DSPM for AI to discover sensitive data in user prompts and responses and detect unethical or risky AI usage. Purview’s DSPM for AI provides admins with comprehensive reports on user activities and data interactions in Copilot for Power BI, as part of the Copilot in Fabric experience, and Security Copilot. DSPM Discoverability for Communication Compliance: This new feature in Communication Compliance, which will be available in public preview starting May 1, enables organizations to quickly create policies that detect inappropriate messages that could lead to data compliance risks. The new recommendation card on the DSPM for AI page offers a one-click policy creation in Microsoft Purview Communication Compliance, simplifying the detection and mitigation of potential threats, such as regulatory violations or improperly shared sensitive information. With these enhanced capabilities for securing and governing data in Microsoft 365 Copilot and beyond, organizations can confidently embrace AI innovation while maintaining strict security and compliance standards. Explore additional resources As organizations embrace AI, securing and governing its use is more important than ever. Staying informed and equipped with the right tools is key to navigating its challenges. Explore these resources to see how Microsoft Security can help you confidently adopt AI in your organization. Learn more about Security for AI solutions on our webpage Get started with Microsoft Purview Get started with Microsoft Defender for Cloud Sign up for a free Microsoft 365 E5 Security Trial and Microsoft Purview Trial Learn more about the innovations designed to help your organization protect data, defend against cyber threats, and stay compliant. Join Microsoft leaders online at Microsoft Secure on April 9. [1] 2024 Work Trend Index Annual Report, Microsoft and LinkedIn, May 2024, N=31,000. [2] Gartner®, Gartner Peer Community Poll – If your org’s using any virtual assistants with AI capabilities, are you concerned about indirect prompt injection attacks? GARTNER is a registered trademark and service mark of Gartner, Inc. and/or its affiliates in the U.S. and internationally and is used herein with permission. All rights reserved.3KViews1like0CommentsThe security benefits of structuring your Azure OpenAI calls – The System Role
In the rapidly evolving landscape of GenAI usage by companies, ensuring the security and integrity of interactions is paramount. A key aspect is managing the different conversational roles—namely system, user, and assistant. By clearly defining and separating these roles, you can maintain clarity and context while enhancing security. In this blog post, we explore the benefits of structuring your Azure OpenAI calls properly, focusing especially on the system prompt. A misconfigured system prompt can create a potential security risk for your application, and we’ll explain why and how to avoid it. The Different Roles in an AI-Based Chat Application Any AI chat application, regardless of the domain, is based on the interaction between two primary players, the user and the assistant. The user provides input or queries. The assistant generates contextually appropriate and coherent responses. Another important but sometimes overlooked player is the designer or developer of the application. This individual determines the purpose, flow, and tone of the application. Usually, this player is referred to as the system. The system provides the initial instructions and behavioral guidelines for the model. Microsoft Defender for Cloud’s researchers identified emerging anti-pattern Microsoft Defender for Cloud (MDC) offers security posture management and threat detection capabilities across clouds and has recently released a new set of features to help organizations build secure enterprise-ready gen-AI apps in the cloud, helping them build securely and stay secure. MDC’s research experts continuously track the development patterns to enhance the offering but also to promote secure practices to their customers and the wider tech community. They are also primary contributors to the OWASP Top 10 threats for LLM (Idan Hen, research team manager). Recently, MDC's research experts identified a common anti-pattern in AI application development is emerging – appending the system to the user prompt. Mixing these sections is easy and tempting – developers often use it because it’s slightly faster while building and also allows them to maintain context through long conversations. But this practice is harmful – it introduces detrimental security risks that could easily result in 'game over' – exposing sensitive data, getting your computer abused, or making your system vulnerable to Jailbreak attacks. Diving deeper: How system prompts evaluation keeps your application secure Separate system, user and assistant prompts with Azure OpenAI ChatCompletion API Azure OpenAI Service's Chat Completion API is a powerful tool designed to facilitate rich and interactive conversational experiences. Leveraging the capabilities of advanced language models, this API enables developers to create human-like chat interactions within their applications. By structuring conversations with distinct roles—system, user, and assistant—the API ensures clarity and context throughout the dialogue: [{"role": "system", "content": [Developer’s instructions]}, {"role": "user", "content”: [User’s request]}, {"role": "assistant", "content": [Model’s response] } ] This structured interaction model allows for enhanced user engagement across various use cases such as customer support, virtual assistants, and interactive storytelling. By understanding and predicting the flow of conversation, the Chat Completion API helps create not only natural and engaging user experiences but securer applications, driving innovation in communication technology. Anti-pattern explained When developers append their instructions to the user prompt. The model receives single input composed by two different sources: developer and user: {"role": "user", "content”: [Developer’s instructions] + [User’s request]} {"role": "assistant", "content": [Model’s response] } When developer instructions are mingled with user input, detection and content filtering systems often struggle to distinguish between the two. Anti-pattern resulting in less secured application This blurring of input roles can facilitate easier manipulation through both direct and indirect prompt injections, thereby increasing the risk of misuse and harmful content not being detected properly by security and safety systems. Developer instructions frequently contain security-related content, such as forbidden requests and responses, as well as lists of do's and don'ts. If these instructions are not conveyed using the system role, this important method for restricting model usage becomes less effective. Additionally, customers have reported that protection systems may misinterpret these instructions as malicious behavior, leading to a high rate of false positive alerts and the unwarranted blocking of benign content. In one case, a customer described forbidden behavior and appended it to the user role. The threat detection system then flagged it as malicious user activity. Moreover, developer instructions may contain private content and information related to the application's inner workings, such as available data sources and tools, their descriptions, and legitimate and illegitimate operations. Although it is not recommended, these instructions may also include information about the logged-in user, connected data sources and information related to the application's operation. Content within the system role enjoys higher privacy; a model can be instructed not to reveal it to the user, and a system prompt leak is considered a security vulnerability. When developer instructions are inserted together with user instructions, the probability of a system prompt leak is much higher, thereby putting our application at risk. 1: Good Protection vs Poor Protection Why do developers mingle their instructions with user input? In many cases, recurring instructions improve the overall user experience. During lengthy interactions, the model tends to forget earlier conversations, including the developer instructions provided in the system role. For example, a model instructed to role-play in an English teaching application or act as a medical assistant in a hospital support application may forget its assigned role by the end of the conversation. This can lead to poor user experience and potential confusion. To mitigate this issue, it is crucial to find methods to remind the model of its role and instructions throughout the interaction. One incorrect approach is to append the developer's instructions to user input by adding them to the User role. Although it keeps developers’ instructions fresh in the model's 'memory,' this practice can significantly impact security, as we saw earlier. Enjoy both user experience and secured application To enjoy both quality detection and filtering capabilities along with a maximal user experience throughout the entire conversation, one option is to refeed developer instructions using the system role several times as the conversation continues: {"role": "system", "content": [Developer’s instructions]}, {"role": "user", "content”: [User’s request 1]} {"role": "assistant", "content": [Model’s response 1] } {"role": "system", "content": [Developer’s instructions]}, {"role": "user", "content”: [User’s request 2]} {"role": "assistant", "content": [Model’s response 2] } By doing so, we achieve the best of both worlds: maintaining the best practice of separating developer instructions from user requests using the Chat Completion API, while keeping the instructions fresh in the model's memory. This approach ensures that detection and filtering systems function effectively, our instructions get the model's full attention, and our system prompt remains secure, all without compromising the user experience. To further enhance the protection of your AI applications and maximize detection and filtering capabilities, it is recommended to provide contextual information regarding the end user and the relevant application. Additionally, it is crucial to identify and mark various input sources and involved entities, such as grounding data, tools, and plugins. By doing so, our system can achieve a higher level of accuracy and efficacy in safeguarding your AI application. In our upcoming blog post, we will delve deeper into these critical aspects, offering detailed insights and strategies to further optimize the protection of your AI applications. Start secure and stay secure when building Gen-AI apps with Microsoft Defender for Cloud Structuring your prompts securely is the best-practice when designing chatbots. There are other lines of defense that must be put in place to fully secure your environment. Sign up and Enable the new Defender for cloud threat protection for AI for active threat detection (preview). Enable posture management to cover all your cloud security risks, including new AI posture features. Further Reading Microsoft Defender for Cloud (MDC). AI protection using MDC. Chat Completion API. Security challenges related to GenAI. How to craft effective System Prompt. The role of System Prompt in Chat Completion API. Responsible AI practices for Azure OpenAI models. Asaf Harari, Data Scientist, Microsoft Threat Protection Research. Shiran Horev, Principal Product Manager, Microsoft Defender for Cloud. Slava Reznitsky, Principal Architect, Microsoft Defender for Cloud.674Views2likes0CommentsMicrosoft Security in Action: Zero Trust Deployment Essentials for Digital Security
The Zero Trust framework is widely regarded as a key security model and a commonly referenced standard in modern cybersecurity. Unlike legacy perimeter-based models, Zero Trust assumes that adversaries will sometimes get access to some assets in the organization, and you must build your security strategy, architecture, processes, and skills accordingly. Implementing this framework requires a deliberate approach to deployment, configuration, and integration of tools. What is Zero Trust? At its core, Zero Trust operates on three guiding principles: Assume Breach (Assume Compromise): Assume attackers can and will successfully attack anything (identity, network, device, app, infrastructure, etc.) and plan accordingly. Verify Explicitly: Protect assets against attacker control by explicitly validating that all trust and security decisions use all relevant available information and telemetry. Use Least Privileged Access: Limit access of a potentially compromised asset, typically with just-in-time and just-enough-access (JIT/JEA) and risk-based policies like adaptive access control. Implementing a Zero Trust architecture is essential for organizations to enhance security and mitigate risks. Microsoft's Zero Trust framework essentially focuses on six key technological pillars: Identity, Endpoints, Data, Applications, Infrastructure, & Networks. This blog provides a structured approach to deploying each pillar. 1. Identity: Secure Access Starts Here Ensure secure and authenticated access to resources by verifying and enforcing policies on all user and service identities. Here are some key deployment steps to get started: Implement Strong Authentication: Enforce Multi-Factor Authentication (MFA) for all users to add an extra layer of security. Adopt phishing-resistant methods, such as password less authentication with biometrics or hardware tokens, to reduce reliance on traditional passwords. Leverage Conditional Access Policies: Define policies that grant or deny access based on real-time risk assessments, user roles, and compliance requirements. Restrict access from non-compliant or unmanaged devices to protect sensitive resources. Monitor and Protect Identities: Use tools like Microsoft Entra ID Protection to detect and respond to identity-based threats. Regularly review and audit user access rights to ensure adherence to the principle of least privilege. Integrate threat signals from diverse security solutions to enhance detection and response capabilities. 2. Endpoints: Protect the Frontlines Endpoints are frequent attack targets. A robust endpoint strategy ensures secure, compliant devices across your ecosystem. Here are some key deployment steps to get started: Implement Device Enrollment: Deploy Microsoft Intune for comprehensive device management, including policy enforcement and compliance monitoring. Enable self-service registration for BYOD to maintain visibility. Enforce Device Compliance Policies: Set and enforce policies requiring devices to meet security standards, such as up-to-date antivirus software and OS patches. Block access from devices that do not comply with established security policies. Utilize and Integrate Endpoint Detection and Response (EDR): Deploy Microsoft Defender for Endpoint to detect, investigate, and respond to advanced threats on endpoints and integrate with Conditional Access. Enable automated remediation to quickly address identified issues. Apply Data Loss Prevention (DLP): Leverage DLP policies alongside Insider Risk Management (IRM) to restrict sensitive data movement, such as copying corporate data to external drives, and address potential insider threats with adaptive protection. 3. Data: Classify, Protect, and Govern Data security spans classification, access control, and lifecycle management. Here are some key deployment steps to get started: Classify and Label Data: Use Microsoft Purview Information Protection to discover and classify sensitive information based on predefined or custom policies. Apply sensitivity labels to data to dictate handling and protection requirements. Implement Data Loss Prevention (DLP): Configure DLP policies to prevent unauthorized sharing or transfer of sensitive data. Monitor and control data movement across endpoints, applications, and cloud services. Encrypt Data at Rest and in Transit: Ensure sensitive data is encrypted both when stored and during transmission. Use Microsoft Purview Information Protection for data security. 4. Applications: Manage and Secure Application Access Securing access to applications ensures that only authenticated and authorized users interact with enterprise resources. Here are some key deployment steps to get started: Implement Application Access Controls: Use Microsoft Entra ID to manage and secure access to applications, enforcing Conditional Access policies. Integrate SaaS and on-premises applications with Microsoft Entra ID for seamless authentication. Monitor Application Usage: Deploy Microsoft Defender for Cloud Apps to gain visibility into application usage and detect risky behaviors. Set up alerts for anomalous activities, such as unusual download patterns or access from unfamiliar locations. Ensure Application Compliance: Regularly assess applications for compliance with security policies and regulatory requirements. Implement measures such as Single Sign-On (SSO) and MFA for application access. 5. Infrastructure: Securing the Foundation It’s vital to protect the assets you have today providing business critical services your organization is creating each day. Cloud and on-premises infrastructure hosts crucial assets that are frequently targeted by attackers. Here are some key deployment steps to get started: Implement Security Baselines: Apply secure configurations to VMs, containers, and Azure services using Microsoft Defender for Cloud. Monitor and Protect Infrastructure: Deploy Microsoft Defender for Cloud to monitor infrastructure for vulnerabilities and threats. Segment workloads using Network Security Groups (NSGs). Enforce Least Privilege Access: Implement Just-In-Time (JIT) access and Privileged Identity Management (PIM). Just-in-time (JIT) mechanisms grant privileges on-demand when required. This technique helps by reducing the time exposure of privileges that are required for people, but are only rarely used. Regularly review access rights to align with current roles and responsibilities. 6. Networks: Safeguard Communication and Limit Lateral Movement Network segmentation and monitoring are critical to Zero Trust implementation. Here are some key deployment steps to get started: Implement Network Segmentation: Use Virtual Networks (VNets) and Network Security Groups (NSGs) to segment and control traffic flow. Secure Remote Access: Deploy Azure Virtual Network Gateway and Azure Bastion for secure remote access. Require device and user health verification for VPN access. Monitor Network Traffic: Use Microsoft Defender for Endpoint to analyze traffic and detect anomalies. Taking the First Step Toward Zero Trust Zero Trust isn’t just a security model—it’s a cultural shift. By implementing the six pillars comprehensively, organizations can potentially enhance their security posture while enabling seamless, secure access for users. Implementing Zero Trust can be complex and may require additional deployment approaches beyond those outlined here. Cybersecurity needs vary widely across organizations and deployment isn’t one-size-fits all, so these steps might not fully address your organization’s specific requirements. However, this guide is intended to provide a helpful starting point or checklist for planning your Zero Trust deployment. For a more detailed walkthrough and additional resources, visit Microsoft Zero Trust Implementation Guidance. The Microsoft Security in Action blog series is an evolving collection of posts that explores practical deployment strategies, real-world implementations, and best practices to help organizations secure their digital estate with Microsoft Security solutions. Stay tuned for our next blog on deploying and maximizing your investments in Microsoft Threat Protection solutions.2.2KViews1like0CommentsExtremely Slow Performance Since Defender Was Pushed on Us
Compliance, Security, Protection, and Defender are all extremely slow, with responses from screen to screen ranging from 30 seconds to multiple minutes between clicking items and waiting for Microsoft cloud to return results. I have a GB link and speed test well over 600 Mbps so it's not on my end. It appears the cutover in late January to this new "Defender" platform has been extremely detrimental to the Office portal response times in these portals. What is being done to resolve this?20KViews1like12CommentsAMA: CNAPP Demystified
Bring your questions and Ask Microsoft Anything (AMA) as we explore the risks and threats for cloud native applications and how misconfigurations can lead to compromise. This session is part of the Microsoft Security Tech Accelerator. RSVP for event reminders, add it to your calendar, and post your questions and comments below! This session will also be recorded and available on demand shortly after conclusion of the live event.1.4KViews3likes12CommentsAMA: Microsoft Defender for Cloud
Have questions on how to strengthen your data security posture? Ask Microsoft Anything (AMA)! We'll answer those and more as we explain how a Microsoft Defender cloud security posture management (CSPM) plan enables you to proactively identify and prioritize critical risks to sensitive data. This session is part of the Microsoft Security Tech Accelerator. RSVP for event reminders, add it to your calendar, and post your questions and comments below! This session will also be recorded and available on demand shortly after conclusion of the live event.3.1KViews5likes35CommentsImplementing Defender for Cloud, Microsoft’s CNAPP to embed security from code to cloud
Explore key Cloud Native Application Protection Platform (CNAPP) implementation strategies for protecting multicloud and hybrid environments with comprehensive security across the full lifecycle, from development to runtime and breaking the silos between developers, security admins, and SOC analyst teams with Microsoft Defender for Cloud. This session is part of the Microsoft Secure Tech Accelerator. RSVP for event reminders, add it to your calendar, and post your questions and comments below! This session will also be recorded and available on demand shortly after conclusion of the live event.3.2KViews4likes16CommentsAccelerate AI adoption with next-gen security and governance capabilities
Generative AI adoption is accelerating across industries, and organizations are looking for secure ways to harness its potential. Today, we are excited to introduce new capabilities designed to drive AI transformation with strong security and governance tools.9.8KViews2likes0CommentsMicrosoft Security Exposure Management Graph: Prioritization is the king
Unlock the full potential of Microsoft’s Security Exposure Management raph in Advanced Hunting. This blog post delves into essential concepts like Blast Radius and Asset Exposure, equipping you with powerful queries to enhance your security posture.5KViews1like0Comments