securing ai
19 TopicsMicrosoft Defender for Cloud Apps - Ninja Training
Welcome to our Ninja Training for Microsoft Defender for Cloud Apps! Are you trying to protect your SaaS applications? Are you concerned about the posture of the apps you are using? Is shadow IT or AI a concern of yours? Then you are in the right place. The training below will aggregate all the relevant resources in one convenient location for you to learn from. Let’s start here with a quick overview of Microsoft Defender for Cloud Apps’ capabilities. Microsoft Defender for Cloud Apps | Microsoft Security Overview of Microsoft Defender for Cloud Apps and the capability of a SaaS Security solution. Overview - Microsoft Defender for Cloud Apps | Microsoft Learn Understand what Microsoft Defender for Cloud Apps is and read about its main capabilities. Quick Start The basic features of Defender for Cloud Apps require almost no effort to deploy. The recommended steps are to: Connect your apps Enable App Discovery Enable App Governance After enabling these features, all default detections and alerts will start triggering in the Microsoft Defender XDR console, and give you tremendous value with minimal configuration. Simplified SaaS Security Deployment with Microsoft Defender for Cloud Apps | Virtual Ninja Training Step-by-step video on how to quickly deploy Defender for Cloud Apps Get started - Microsoft Defender for Cloud Apps This quickstart describes how to start working with Microsoft Defender for Cloud Apps on the Microsoft Defender Portal. Review this if you prefer text to video Basic setup - Microsoft Defender for Cloud Apps The following procedure gives you instructions for customizing your Microsoft Defender for Cloud Apps environment. Connect apps to get visibility and control - Microsoft Defender for Cloud Apps App connectors use the APIs of app providers to enable greater visibility and control by Microsoft Defender for Cloud Apps over the apps you connect to. Make sure to connect all your available apps as you start your deployment Turn on app governance in Microsoft Defender for Cloud Apps App governance in Defender for Cloud Apps is a set of security and policy management capabilities designed for OAuth-enabled apps registered on Microsoft Entra ID, Google, and Salesforce. App governance delivers visibility, remediation, and governance into how these apps and their users access, use, and share sensitive data in Microsoft 365 and other cloud platforms through actionable insights out-of-the box threat detections, OAuth apps attack disruption, automated policy alerts and actions. It only takes a few minutes to enable and provide full visibility on your users’ Oauth app consents Shadow IT Discovery - Integrate with Microsoft Defender for Endpoint This article describes the out-of-the-box integration available between Microsoft Defender for Cloud Apps and Microsoft Defender for Endpoint, which simplifies cloud discovery and enabling device-based investigation. Control cloud apps with policies Policies in Microsoft Defender for Cloud Apps help define user behavior in the cloud, detect risky activities, and enable remediation workflows. There are various types of policies, such as Activity, Anomaly Detection, OAuth App, Malware Detection, File, Access, Session, and App Discovery policies. These policies help mitigate risks like access control, compliance, data loss prevention, and threat detection. Detect Threats and malicious behavior After connecting your cloud apps in Defender for Cloud Apps, you will start seeing alerts in your XDR portal. Here are resources to learn more about these alerts and how to investigate them. Note that we are constantly adding new built-in detections, and they are not necessarily part of our public documentation. How to manage incidents - Microsoft Defender XDR Learn how to manage incidents, from various sources, using Microsoft Defender XDR. How to investigate anomaly detection alerts Microsoft Defender for Cloud Apps provides detections for malicious activities. This guide provides you with general and practical information on each alert, to help with your investigation and remediation tasks. Note that detections are added on a regular basis, and not all of them will have entries in this guide. Configure automatic attack disruption in Microsoft Defender XDR - Microsoft Defender XDR | Microsoft Learn Learn how to take advantage of XDR capabilities to automatically disrupt high confidence attacks before damage is done. OAuth apps are natively integrated as part of Microsoft XDR. Create activity policies - Microsoft Defender for Cloud Apps | Microsoft Learn In addition to all the built-in detections as part of Microsoft Defender for Cloud Apps, you can also create your own policies, including Governance actions, based on the Activity log captured by Defender for Cloud Apps. Create and manage custom detection rules in Microsoft Defender XDR - Microsoft Defender XDR | Microsoft Learn Learn how to leverage XDR custom detection rules based on hunting data in the platform. CloudAppEvents table in the advanced hunting schema - Microsoft Defender XDR | Microsoft Learn Learn about the CloudAppEvents table which contains events from all connected applications with data enriched by Defender for Cloud Apps in a common schema. This data can be hunted across all connected apps and your separate XDR workloads. Investigate behaviors with advanced hunting - Microsoft Defender for Cloud Apps | Microsoft Learn Learn about behaviors and how they can help with security investigations. Investigate activities - Microsoft Defender for Cloud Apps | Microsoft Learn Learn how to search the activity log and investigate activities with a simple UI without the need for KQL App Governance – Protect from App-to-App attack scenario App governance in Microsoft Defender for Cloud Apps is crucial for several reasons. It enhances security by identifying and mitigating risks associated with OAuth-enabled apps, which can be exploited for privilege escalation, lateral movement, and data exfiltration. Organizations gain clear visibility into app compliance, allowing them to monitor how apps access, use, and share sensitive data. It provides alerts for anomalous behaviors, enabling quick responses to potential threats. Automated policy alerts and remediation actions help enforce compliance and protect against noncompliant or malicious apps. By governing app access, organizations can better safeguard their data across various cloud platforms. These features collectively ensure a robust security posture, protecting both data and users from potential threats. Get started with App governance - Microsoft Defender for Cloud Apps Learn how app governance enhances the security of SaaS ecosystems like Microsoft 365, Google Workspace, and Salesforce. This video details how app governance identifies integrated OAuth apps, detects and prevents suspicious activity, and provides in-depth monitoring and visibility into app metadata and behaviors to help strengthen your overall security posture. App governance in Microsoft Defender for Cloud Apps and Microsoft Defender XDR - Microsoft Defender for Cloud Apps | Microsoft Learn Defender for Cloud Apps App governance overview Create app governance policies - Microsoft Defender for Cloud Apps | Microsoft Learn Many third-party productivity apps request access to user data and sign in on behalf of users for other cloud apps like Microsoft 365, Google Workspace, and Salesforce. Users often accept these permissions without reviewing the details, posing security risks. IT departments may lack insight into balancing an app's security risk with its productivity benefits. Monitoring app permissions provides visibility and control to protect your users and applications. App governance visibility and insights - Microsoft Defender for Cloud Apps | Microsoft Learn Managing your applications requires robust visibility and insight. Microsoft Defender for Cloud Apps offers control through in-depth insights into user activities, data flows, and threats, enabling effective monitoring, anomaly detection, and compliance Reduce overprivileged permissions and apps Recommendations for reducing overprivileged permissions App Governance plays a critical role in governing applications in Entra ID. By integrating with Entra ID, App Governance provides deeper insights into application permissions and usage within your identity infrastructure. This correlation enables administrators to enforce stringent access controls and monitor applications more effectively, ensuring compliance and reducing potential security vulnerabilities. This page offers guidelines for reducing unnecessary permissions, focusing on the principle of least privilege to minimize security risks and mitigate the impact of breaches. Investigate app governance threat detection alerts List of app governance threat detection alerts classified according to MITRE ATT&CK and investigation guidance Manage app governance alerts Learn how to govern applications and respond to threat and risky applications directly from app governance or through policies. Hunt for threats in app activities Learn how to hunt for app activities directly form the XDR console (Microsoft 365 Connector required as discussed in quick start section). How to Protect Oauth Apps with App Governance in Microsoft Defender for Cloud Apps Webinar | How to Protect Oauth Apps with App Governance in Microsoft Defender for Cloud Apps. Learn how to protect Oauth applications in your environment, how to efficiently use App governance within Microsoft Defender for Cloud Apps to protect your connected apps and raise your security posture. App Governance is a Key Part of a Customers' Zero Trust Journey Webinar| learn about how the app governance add-on to Microsoft Defender for Cloud Apps is a key component of customers' Zero Trust journey. We will examine how app governance supports managing to least privilege (including identifying unused permissions), provides threat detections that are able and have already protected customers, and gives insights on risky app behaviors even for trusted apps. App Governance Inclusion in Defender for Cloud Apps Overview Webinar| App governance overview and licensing requirements. Frequently asked questions about app governance App governance FAQ Manage the security Posture of your SaaS (SSPM) One of the key components of Microsoft Defender for Cloud Apps is the ability to gain key information about the Security posture of your applications in the cloud (AKA: SaaS). This can give you a proactive approach to help avoid breaches before they happen. SaaS Security posture Management (or SSPM) is part the greater Exposure Management offering, and allows you to review the security configuration of your key apps. More details in the links below: Transform your defense: Microsoft Security Exposure Management | Microsoft Secure Tech Accelerator Overview of Microsoft Exposure Management and it’s capabilities, including how MDA & SSPM feed into this. SaaS Security Posture Management (SSPM) - Overview - Microsoft Defender for Cloud Apps | Microsoft Learn Understand simply how SSPM can help you increase the safety of your environment Turn on and manage SaaS security posture management (SSPM) - Microsoft Defender for Cloud Apps | Microsoft Learn Enabling SSPM in Defender for Cloud Apps requires almost no additional configuration (as long as your apps are already connected), and no extra license. We strongly recommend turning it on, and monitoring its results, as the cost of operation is very low. SaaS Security Initiative - Microsoft Defender for Cloud Apps | Microsoft Learn The SaaS Security Initiative provides a centralized place for software as a service (SaaS) security best practices, so that organizations can manage and prioritize security recommendations effectively. By focusing on the most impactful metrics, organizations can enhance their SaaS security posture. Secure your usage of AI applications AI is Information technologies’ newest tool and strongest innovation area. As we know it also brings its fair share of challenges. Defender for Cloud Apps can help you face these from two different angles: - First, our App Discovery capabilities give you a complete vision of all the Generative AI applications in use in an environment - Second, we provide threat detection capabilities to identify and alert from suspicious usage of Copilot for Microsoft 365, along with the ability to create custom detection using KQL queries. Secure AI applications using Microsoft Defender for Cloud Apps Overview of Microsoft Defender for Cloud Apps capabilities to secure your usage of Generative AI apps Step-by-Step: Discover Which Generative AI Apps Are Used in Your Environment Using Defender for Cloud Apps Detailed video-guide to deploy Discovery of Gen AI apps in your environment in a few minutes Step-by-Step: Protect Your Usage of Copilot for M365 Using Microsoft Defender for Cloud Apps Instructions and examples on how to leverage threat protection and advanced hunting capabilities to detect any risky or suspicious usage of Copilot for Microsoft 365 Get visibility into DeepSeek with Microsoft Defender for Cloud Apps Understand how fast the Microsoft Defender for Cloud Apps team can react when new apps or new threats come in the market. Discover Shadow IT applications Shadow IT and Shadow AI are two big challenges that organizations face today. Defender for Cloud Apps can help give you visibility you need, this will allow you to evaluate the risks, assess for compliance and apply controls over what can be used. Getting started The first step is to ensure the relevant data sources are connected to Defender for Cloud Apps to provide you the required visibility: Integrate Microsoft Defender for Endpoint - Microsoft Defender for Cloud Apps | Microsoft Learn The quickest and most seamless method to get visibility of cloud app usage is to integrate MDA with MDE (MDE license required). Create snapshot cloud discovery reports - Microsoft Defender for Cloud Apps | Microsoft Learn A sample set of logs can be ingested to generate a Snapshot. This lets you view the quality of the data before long term ingestion and also be used for investigations. Configure automatic log upload for continuous reports - Microsoft Defender for Cloud Apps | Microsoft Learn A log collector can be deployed to facilitate the collection of logs from your network appliances, such as firewalls or proxies. Defender for Cloud Apps cloud discovery API - Microsoft Defender for Cloud Apps | Microsoft Learn MDA also offers a Cloud Discovery API which can be used to directly ingest log information and mitigate the need for a log collector. Evaluate Discovered Apps Once Cloud Discovery logs are being populated into Defender for Cloud Apps, you can start the process of evaluating the discovered apps. This includes reviewing their usage, user count, risk scores and compliance factors. View discovered apps on the Cloud discovery dashboard - Microsoft Defender for Cloud Apps | Microsoft Learn View & evaluate the discovered apps within Cloud Discovery and Generate Cloud Discovery Executive Reports Working with the app page - Microsoft Defender for Cloud Apps | Microsoft Learn Investigate app usage and evaluate their compliance and risk factors Discovered app filters and queries - Microsoft Defender for Cloud Apps | Microsoft Learn Apply granular filtering and app tagging to focus on apps that are important to you Work with discovered apps via Graph API - Microsoft Defender for Cloud Apps | Microsoft Learn Investigate discovered apps via the Microsoft Graph API Add custom apps to cloud discovery - Microsoft Defender for Cloud Apps | Microsoft Learn You can add custom apps to the catalog which can then be matched against log data. This is useful for LOB applications. Govern Discovered Apps Having evaluated your discovered apps, you can then take some decisions on what level of governance and control each of the applications require and whether you want custom policies to help govern future applications: Govern discovered apps using Microsoft Defender for Endpoint - Microsoft Defender for Cloud Apps | Microsoft Learn Setup governance enforcement actions when using Microsoft Defender for Endpoint Govern discovered apps - Microsoft Defender for Cloud Apps | Microsoft Learn Apply governance actions to discovered apps from within the Cloud Discovery area Create cloud discovery policies - Microsoft Defender for Cloud Apps | Microsoft Learn Create custom Cloud Discovery policies to identify usage, alert and apply controls Operations and investigations - Sample AH queries - Tips on investigation - (section for SOC) Advanced Hunting Compromised and malicious applications investigation | Microsoft Learn Investigate anomalous app configuration changes Impersonation and EWS in Exchange | Microsoft Learn Audits impersonate privileges in Exchange Online Advanced Hunting Queries Azure-Sentinel/Solutions/Microsoft Entra ID/Analytic Rules/ExchangeFullAccessGrantedToApp.yaml at master · Azure/Azure-Sentinel · GitHub This detection looks for the full_access_as_app permission being granted to an OAuth application with Admin Consent. This permission provide access to all Exchange mailboxes via the EWS API can could be exploited to access sensitive data by being added to a compromised application. The application granted this permission should be reviewed to ensure that it is absolutely necessary for the applications function Azure-Sentinel/Solutions/Microsoft Entra ID/Analytic Rules/AdminPromoAfterRoleMgmtAppPermissionGrant.yaml at master · Azure/Azure-Sentinel · GitHub This rule looks for a service principal being granted permissions that could be used to add a Microsoft Entra ID object or user account to an Admin directory role. Azure-Sentinel/Solutions/Microsoft Entra ID/Analytic Rules/SuspiciousOAuthApp_OfflineAccess.yaml at master · Azure/Azure-Sentinel · GitHub Offline access will provide the Azure App with access to the listed resources without requiring two-factor authentication. Consent to applications with offline access and read capabilities should be rare, especially as the known Applications list is expanded Best Practice recommendations Common threat protection policies - Microsoft Defender for Cloud Apps | Microsoft Learn Common Defender for Cloud Apps Threat Protection policies Recommended Microsoft Defender for Cloud Apps policies for SaaS apps | Microsoft Learn Recommended Microsoft Defender for Cloud Apps policies for SaaS apps Best practices for protecting your organization - Microsoft Defender for Cloud Apps | Microsoft Learn Best practices for protecting your organization with Defender for Cloud Apps Completion certificate! Click here to get your shareable completion certificate!! Advanced configuration Training Title Description Importing user groups from connect apps This article outlines the steps on how to import user groups from connected apps Manage Admin Access This article describes how to manage admin access in Microsoft Defender for Cloud Apps. Configure MSSP Access In this video, we walk through the steps on adding Managed Security Service Provider (MSSP) access to Microsoft Defender for Cloud Apps. Provide managed security service provider (MSSP) access - Microsoft Defender XDR | Microsoft Learn Provide managed security service provider (MSSP) access Integrate with Secure Web Gateways Microsoft Defender for Cloud Apps integrates with several secure web gateways available in the market. Here are the links to configure this integration. Integrate with Zscaler Integrate with iboss Integrate with Corrata Integrate with Menlo Additional resources Microsoft Defender for Cloud Apps Tech Community This is a Microsoft Defender for Cloud Apps Community space that allows users to connect and discuss the latest news, upgrades, and best practices with Microsoft professionals and peers.568Views4likes1CommentUnlocking the Power of Microsoft Purview for ChatGPT Enterprise
In today's rapidly evolving technology landscape, data security and compliance are key. Microsoft Purview offers a robust solution for managing and securing interactions with AI based solutions. This integration not only enhances data governance but also ensures that sensitive information is handled with the appropriate controls. Let's dive into the benefits of this integration and outline the steps to integrate with ChatGPT Enterprise in specific. The integration works for Entra connected users on the ChatGPT workspace, if you have needs that goes beyond this, please tell us why and how it impacts you. Benefits of Integrating ChatGPT Enterprise with Microsoft Purview Enhanced Data Security: By integrating ChatGPT Enterprise with Microsoft Purview, organizations can ensure that interactions are securely captured and stored within their Microsoft 365 tenant. This includes user text prompts and AI app text responses, providing a comprehensive record of communications. Compliance and Governance: Microsoft Purview offers a range of compliance solutions, including Insider Risk Management, eDiscovery, Communication Compliance, and Data Lifecycle & Records Management. These tools help organizations meet regulatory requirements and manage data effectively. Customizable Detection: The integration allows for the detection of built in can custom classifiers for sensitive information, which can be customized to meet the specific needs of the organization. To help ensures that sensitive data is identified and protected. The audit data streams into Advanced Hunting and the Unified Audit events that can generate visualisations of trends and other insights. Seamless Integration: The ChatGPT Enterprise integration uses the Purview API to push data into Compliant Storage, ensuring that external data sources cannot access and push data directly. This provides an additional layer of security and control. Step-by-Step Guide to Setting Up the Integration 1. Get Object ID for the Purview account in Your Tenant: Go to portal.azure.com and search for "Microsoft Purview" in the search bar. Click on "Microsoft Purview accounts" from the search results. Select the Purview account you are using and copy the account name. Go to portal.azure.com and search for “Enterprise" in the search bar. Click on Enterprise applications. Remove the filter for Enterprise Applications Select All applications under manage, search for the name and copy the Object ID. 2. Assign Graph API Roles to Your Managed Identity Application: Assign Purview API roles to your managed identity application by connecting to MS Graph utilizing Cloud Shell in the Azure portal. Open a PowerShell window in portal.azure.com and run the command Connect-MgGraph. Authenticate and sign in to your account. Run the following cmdlet to get the ServicePrincipal ID for your organization for the Purview API app. (Get-MgServicePrincipal -Filter "AppId eq '9ec59623-ce40-4dc8-a635-ed0275b5d58a'").id This command provides the permission of Purview.ProcessConversationMessages.All to the Microsoft Purview Account allowing classification processing. Update the ObjectId to the one retrieved in step 1 for command and body parameter. Update the ResourceId to the ServicePrincipal ID retrieved in the last step. $bodyParam= @{ "PrincipalId"= "{ObjectID}" "ResourceId" = "{ResourceId}" "AppRoleId" = "{a4543e1f-6e5d-4ec9-a54a-f3b8c156163f}" } New-MgServicePrincipalAppRoleAssignment -ServicePrincipalId '{ObjectId}' -BodyParameter $bodyParam It will look something like this from the command line We also need to add the permission for the application to read the user accounts to correctly map the ChatGPT Enterprise user with Entra accounts. First run the following command to get the ServicePrincipal ID for your organization for the GRAPH app. (Get-MgServicePrincipal -Filter "AppId eq '00000003-0000-0000-c000-000000000000'").id The following step adds the permission User.Read.All to the Purview application. Update the ObjectId with the one retrieved in step 1. Update the ResourceId with the ServicePrincipal ID retrieved in the last step. $bodyParam= @{ "PrincipalId"= "{ObjectID}" "ResourceId" = "{ResourceId}" "AppRoleId" = "{df021288-bdef-4463-88db-98f22de89214}" } New-MgServicePrincipalAppRoleAssignment -ServicePrincipalId '{ObjectId}' -BodyParameter $bodyParam 3. Store the ChatGPT Enterprise API Key in Key Vault The steps for setting up Key vault integration for Data Map can be found here Create and manage credentials for scans in the Microsoft Purview Data Map | Microsoft Learn When setup you will see something like this in Key vault. 4. Integrate ChatGPT Enterprise Workspace to Purview: Create a new data source in Purview Data Map that connects to the ChatGPT Enterprise workspace. Go to purview.microsoft.com and select Data Map, search if you do not see it on the first screen. Select Data sources Select Register Search for ChatGPT Enterprise and select Provide your ChatGPT Enterprise ID Create the first scan by selecting Table view and filter on ChatGPT Add your key vault credentials to the scan Test the connection and once complete click continue When you click continue the following screen will show up, if everything is ok click Save and run. Validate the progress by clicking on the name, completion of the first full scan may take an extended period of time. Depending on size it may take more than 24h to complete. If you click on the scan name you expand to all the runs for that scan. When the scan completes you can start to make use of the DSPM for AI experience to review interactions with ChatGPT Enterprise. The mapping to the users is based on the ChatGPT Enterprise connection to Entra, with prompts and responses stored in the user's mailbox. 5. Review and Monitor Data: Please see this article for required permissions and guidance around Microsoft Purview Data Security Posture Management (DSPM) for AI, Microsoft Purview data security and compliance protections for Microsoft 365 Copilot and other generative AI apps | Microsoft Learn Use Purview DSPM for AI analytics and Activity Explorer to review interactions and classifications. You can expand on prompts and responses in ChatGPT Enterprise 6. Microsoft Purview Communication Compliance Communication Compliance (here after CC) is a feature of Microsoft Purview that allows you to monitor and detect inappropriate or risky interactions with ChatGPT Enterprise. You can monitor and detect requests and responses that are inappropriate based on ML models, regular Sensitive Information Types, and other classifiers in Purview. This can help you identify Jailbreak and Prompt injection attacks and flag them to IRM and for case management. Detailed steps to configure CC policies and supported configurations can be found here. 7. Microsoft Purview Insider Risk Management We believe that Microsoft Purview Insider Risk Management (here after IRM) can serve a key role in protecting your AI workloads long term. With its adaptive protection capabilities, IRM dynamically adjusts user access based on evolving risk levels. In the event of heightened risk, IRM can enforce Data Loss Prevention (DLP) policies on sensitive content, apply tailored Entra Conditional Access policies, and initiate other necessary actions to effectively mitigate potential risks. This strategic approach will help you to apply more stringent policies where it matters avoiding a boil the ocean approach to allow your team to get started using AI. To get started use the signals that are available to you including CC signals to raise IRM tickets and enforce adaptive protection. You should create your own custom IRM policy for this. Do include Defender signals as well. Based on elevated risk you may select to block users from accessing certain assets such as ChatGPT Enterprise. Please see this article for more detail Block access for users with elevated insider risk - Microsoft Entra ID | Microsoft Learn. 8. eDiscovery eDiscovery of AI interactions is crucial for legal compliance, transparency, accountability, risk management, and data privacy protection. Many industries must preserve and discover electronic communications and interactions to meet regulatory requirements. Including AI interactions in eDiscovery ensures organizations comply with these obligations and preserves relevant evidence for litigation. This process also helps maintain trust by enabling the review of AI decisions and actions, demonstrating due diligence to regulators. Microsoft Purview eDiscovery solutions | Microsoft Learn 9. Data Lifecycle Management Microsoft Purview offers robust solutions to manage AI data from creation to deletion, including classification, retention, and secure disposal. This ensures that AI interactions are preserved and retrievable for audits, litigation, and compliance purposes. Please see this article for more information Automatically retain or delete content by using retention policies | Microsoft Learn. Closing By following these steps, organizations can leverage the full potential of Microsoft Purview to enhance the security and compliance of their ChatGPT Enterprise interactions. This integration not only provides peace of mind but also empowers organizations to manage their data more effectively. We are still in preview some of the features listed are not fully integrated, please reach out to us if you have any questions or if you have additional requirements.1.8KViews1like2CommentsShowcase your skills with this new Security Certification
Introducing the Microsoft Certified: Information Security Administrator Certification Designed specifically for data security and information protection professionals, our new Microsoft Certified: Information Security Administrator Certification validates the skills needed to plan and implement information security for sensitive data by using Microsoft Purview and related services. It also validates the skills needed to mitigate risks from internal and external threats by protecting data inside collaboration environments that are managed by Microsoft 365. Plus, it verifies subject matter expertise needed to participate in information security incident responses. The Microsoft Certified: Information Security Administrator Certification is currently in Beta and will become available in April 2025, and you can earn the Certification by passing Exam SC-401: Administering Information Security in Microsoft 365. While this new Certification’s study material includes learning modules from SC-400, it also includes new modules tailored to data security and information protection skillsets. Understand Microsoft Purview Insider Risk Management Microsoft Purview Insider Risk Management is a compliance solution designed to minimize internal risks by detecting, investigating, and acting on malicious and inadvertent activities within your organization. This training module provides an in-depth understanding of how to identify potential risks using analytics and create policies to manage security and compliance. By the end of this module, you'll be equipped with the knowledge to implement insider risk management effectively, ensuring user-level privacy through pseudonymization and role-based access controls. Prepare for Microsoft Purview Insider Risk Management Preparation is key to successfully implementing any security solution. The "Prepare for Microsoft Purview Insider Risk Management" training module guides you through the strategies for planning and configuring the solution to meet your organizational needs. You'll learn how to collaborate with stakeholders, understand the prerequisites for implementation, and configure settings to align with compliance and privacy requirements. This module is essential for administrators and risk practitioners looking to protect their organization's data and privacy. Create and Manage Insider Risk Management Policies Creating and managing effective policies is crucial for mitigating insider risks. This training module covers the process of developing and implementing insider risk management policies using Microsoft Purview. You'll learn how to define the types of risks to identify, configure risk indicators, and customize event thresholds for policy indicators. The module also provides insights into using templates for quick policy creation and configuring anomaly detections to identify unusual user activities. By mastering these skills, you can ensure that your organization is well-protected against potential internal threats. Identify and Mitigate AI Data Security Risks As artificial intelligence (AI) becomes increasingly integrated into business operations, understanding and mitigating AI-related data security risks is vital. The "Identify and Mitigate AI Data Security Risks" training module offers a comprehensive overview of AI security fundamentals. You'll learn about the types of security controls applicable to AI systems and the security testing procedures that can enhance the security posture of AI environments. This module is perfect for developers, administrators, and security engineers looking to safeguard their AI-driven systems. Retiring the Information Protection and Compliance Administrator Associate Certification We’re retiring the Microsoft Certified: Information Protection and Compliance Administrator Associate Certification and its related Exam SC-400: Administering Information Protection and Compliance in Microsoft 365. The Certification, related exam, and renewal assessments will all be retired on May 31, 2025. For data security and information protection professionals: We’re introducing a new Certification – more on that in the section below! For compliance professionals: We don’t have plans to create a new Certification for compliance-related roles, however we do offer Microsoft Applied Skills that can validate these skills. You can find more details in this blog. The following questions and answers can help you determine how these retirements could impact your learning goals: Q: What if I’m studying for Exam SC-400? A: If you’re currently preparing for Exam SC-400, you should take and pass the exam before May 31, 2025. If you’re just starting your preparation process, we recommend that you explore the new Information Security Administrator Certification and its related Exam SC-401: Administering Information Security in Microsoft 365. Q: I’ve already earned the Information Protection and Compliance Administrator Associate Certification. What happens now? A: If you’ve already earned the Information Protection and Compliance Administrator Associate Certification, it will stay on the transcript in your profile on Microsoft Learn. If you’re eligible to renew your Certification before May 31, 2025, we recommend that you consider doing so, because it won’t be possible to renew the Certification after this date. Find the right resources to support your security journey Whether you are looking to build on your existing expertise, need specific product documentation, or want to connect with like-minded communities, partners, and thought leaders, you can find the latest security skill-building content on our Security hub on MS Learn.2.2KViews0likes0CommentsThe security benefits of structuring your Azure OpenAI calls – The System Role
In the rapidly evolving landscape of GenAI usage by companies, ensuring the security and integrity of interactions is paramount. A key aspect is managing the different conversational roles—namely system, user, and assistant. By clearly defining and separating these roles, you can maintain clarity and context while enhancing security. In this blog post, we explore the benefits of structuring your Azure OpenAI calls properly, focusing especially on the system prompt. A misconfigured system prompt can create a potential security risk for your application, and we’ll explain why and how to avoid it. The Different Roles in an AI-Based Chat Application Any AI chat application, regardless of the domain, is based on the interaction between two primary players, the user and the assistant. The user provides input or queries. The assistant generates contextually appropriate and coherent responses. Another important but sometimes overlooked player is the designer or developer of the application. This individual determines the purpose, flow, and tone of the application. Usually, this player is referred to as the system. The system provides the initial instructions and behavioral guidelines for the model. Microsoft Defender for Cloud’s researchers identified emerging anti-pattern Microsoft Defender for Cloud (MDC) offers security posture management and threat detection capabilities across clouds and has recently released a new set of features to help organizations build secure enterprise-ready gen-AI apps in the cloud, helping them build securely and stay secure. MDC’s research experts continuously track the development patterns to enhance the offering but also to promote secure practices to their customers and the wider tech community. They are also primary contributors to the OWASP Top 10 threats for LLM (Idan Hen, research team manager). Recently, MDC's research experts identified a common anti-pattern in AI application development is emerging – appending the system to the user prompt. Mixing these sections is easy and tempting – developers often use it because it’s slightly faster while building and also allows them to maintain context through long conversations. But this practice is harmful – it introduces detrimental security risks that could easily result in 'game over' – exposing sensitive data, getting your computer abused, or making your system vulnerable to Jailbreak attacks. Diving deeper: How system prompts evaluation keeps your application secure Separate system, user and assistant prompts with Azure OpenAI ChatCompletion API Azure OpenAI Service's Chat Completion API is a powerful tool designed to facilitate rich and interactive conversational experiences. Leveraging the capabilities of advanced language models, this API enables developers to create human-like chat interactions within their applications. By structuring conversations with distinct roles—system, user, and assistant—the API ensures clarity and context throughout the dialogue: [{"role": "system", "content": [Developer’s instructions]}, {"role": "user", "content”: [User’s request]}, {"role": "assistant", "content": [Model’s response] } ] This structured interaction model allows for enhanced user engagement across various use cases such as customer support, virtual assistants, and interactive storytelling. By understanding and predicting the flow of conversation, the Chat Completion API helps create not only natural and engaging user experiences but securer applications, driving innovation in communication technology. Anti-pattern explained When developers append their instructions to the user prompt. The model receives single input composed by two different sources: developer and user: {"role": "user", "content”: [Developer’s instructions] + [User’s request]} {"role": "assistant", "content": [Model’s response] } When developer instructions are mingled with user input, detection and content filtering systems often struggle to distinguish between the two. Anti-pattern resulting in less secured application This blurring of input roles can facilitate easier manipulation through both direct and indirect prompt injections, thereby increasing the risk of misuse and harmful content not being detected properly by security and safety systems. Developer instructions frequently contain security-related content, such as forbidden requests and responses, as well as lists of do's and don'ts. If these instructions are not conveyed using the system role, this important method for restricting model usage becomes less effective. Additionally, customers have reported that protection systems may misinterpret these instructions as malicious behavior, leading to a high rate of false positive alerts and the unwarranted blocking of benign content. In one case, a customer described forbidden behavior and appended it to the user role. The threat detection system then flagged it as malicious user activity. Moreover, developer instructions may contain private content and information related to the application's inner workings, such as available data sources and tools, their descriptions, and legitimate and illegitimate operations. Although it is not recommended, these instructions may also include information about the logged-in user, connected data sources and information related to the application's operation. Content within the system role enjoys higher privacy; a model can be instructed not to reveal it to the user, and a system prompt leak is considered a security vulnerability. When developer instructions are inserted together with user instructions, the probability of a system prompt leak is much higher, thereby putting our application at risk. 1: Good Protection vs Poor Protection Why do developers mingle their instructions with user input? In many cases, recurring instructions improve the overall user experience. During lengthy interactions, the model tends to forget earlier conversations, including the developer instructions provided in the system role. For example, a model instructed to role-play in an English teaching application or act as a medical assistant in a hospital support application may forget its assigned role by the end of the conversation. This can lead to poor user experience and potential confusion. To mitigate this issue, it is crucial to find methods to remind the model of its role and instructions throughout the interaction. One incorrect approach is to append the developer's instructions to user input by adding them to the User role. Although it keeps developers’ instructions fresh in the model's 'memory,' this practice can significantly impact security, as we saw earlier. Enjoy both user experience and secured application To enjoy both quality detection and filtering capabilities along with a maximal user experience throughout the entire conversation, one option is to refeed developer instructions using the system role several times as the conversation continues: {"role": "system", "content": [Developer’s instructions]}, {"role": "user", "content”: [User’s request 1]} {"role": "assistant", "content": [Model’s response 1] } {"role": "system", "content": [Developer’s instructions]}, {"role": "user", "content”: [User’s request 2]} {"role": "assistant", "content": [Model’s response 2] } By doing so, we achieve the best of both worlds: maintaining the best practice of separating developer instructions from user requests using the Chat Completion API, while keeping the instructions fresh in the model's memory. This approach ensures that detection and filtering systems function effectively, our instructions get the model's full attention, and our system prompt remains secure, all without compromising the user experience. To further enhance the protection of your AI applications and maximize detection and filtering capabilities, it is recommended to provide contextual information regarding the end user and the relevant application. Additionally, it is crucial to identify and mark various input sources and involved entities, such as grounding data, tools, and plugins. By doing so, our system can achieve a higher level of accuracy and efficacy in safeguarding your AI application. In our upcoming blog post, we will delve deeper into these critical aspects, offering detailed insights and strategies to further optimize the protection of your AI applications. Start secure and stay secure when building Gen-AI apps with Microsoft Defender for Cloud Structuring your prompts securely is the best-practice when designing chatbots. There are other lines of defense that must be put in place to fully secure your environment. Sign up and Enable the new Defender for cloud threat protection for AI for active threat detection (preview). Enable posture management to cover all your cloud security risks, including new AI posture features. Further Reading Microsoft Defender for Cloud (MDC). AI protection using MDC. Chat Completion API. Security challenges related to GenAI. How to craft effective System Prompt. The role of System Prompt in Chat Completion API. Responsible AI practices for Azure OpenAI models. Asaf Harari, Data Scientist, Microsoft Threat Protection Research. Shiran Horev, Principal Product Manager, Microsoft Defender for Cloud. Slava Reznitsky, Principal Architect, Microsoft Defender for Cloud.585Views2likes0CommentsAccelerate AI adoption with next-gen security and governance capabilities
Generative AI adoption is accelerating across industries, and organizations are looking for secure ways to harness its potential. Today, we are excited to introduce new capabilities designed to drive AI transformation with strong security and governance tools.9.2KViews2likes0CommentsInsider Risk Management empowering risky AI usage visibility and security investigations
Discover how Microsoft Purview Insider Risk Management helps you safeguard your data in the AI era and empowers security operations centers to enhance incident investigations with comprehensive data security context.4.7KViews0likes1CommentSafely activate your data estate with Microsoft Purview
60% of CDOs cite data integration challenges as a top pain-point due to lack of knowledge of where relevant data resides [1]. Companies operate on multi-platform, multi-cloud data estates making it harder than ever to seamlessly discover, secure, govern and activate data. This increases the overall complexity when enabling users to responsibly derive insights and drive business value from data. In the era of AI, data governance is no longer an afterthought, data security and data governance are now both table stakes. Data Governance is not a new concept but with the proliferation of AI and evolving regulatory landscape, data governance is critical for safeguarding data related to AI-driven business innovation. With 95% of organizations implementing or developing an AI strategy [2], customers are facing emerging governance challenges, such as: False signals: The lack of clean accurate data can cause false signals in AI which can trigger consequential business outcomes or lead to incorrect reported forecasting and regulatory fines. Time to insight: Data scientists and analysts spend 60-80% of their time on data access and preparation to feed AI initiatives which leads to staff frustration, increased OPEX, and delays in critical AI innovation priorities. Shadow innovation: Data innovation outside governance can increase business risks around data leakage, oversharing, or inaccurate outcomes. This is why federated governance has surfaced as a top priority across security and data leaders because it unlocks data innovation while maintaining appropriate data oversight to help minimize risks. Customers are seeking more unified solutions that enable data security and governance seamlessly across their complex data estate. To help customers better respond to these needs, Microsoft Purview unifies data security, data governance, and data compliance solutions across the heterogeneous data estate for the era of AI. Microsoft Purview also works closely with Microsoft Fabric to integrate capabilities that help seamlessly secure and govern data to help reduce risks associated with data activation across the Microsoft Intelligent Data Platform and across the Microsoft Cloud portfolio. Microsoft Fabric delivers a pre-integrated and optimized SaaS environment for data teams to work faster together over secure and governed data within the Fabric environment. Combining the strengths of Microsoft Purview and Microsoft Fabric enables organizations to more confidently leverage Fabric to unlock data innovation across data engineers, analysts, data scientists, and developers whilst Purview enables data security teams to extend Purview advanced data security value and enables the central data office to extend Purview advanced data governance value across Fabric, Azure, M365, and the heterogenous data estate. Furthering this vision, today Microsoft is announcing 1. a new name for the Purview Data Governance solution, Purview Unified Catalog, to better reflect its growing catalog capabilities, 2. integration with new OneLake catalog, 3. a new data quality scan engine, 4. Purview Analytics in OneLake, and 5. expanded Data Loss Prevention (DLP) capabilities for Fabric lakehouse and semantic models. Introducing Unified Catalog: a new name for the visionary solution The Microsoft Purview data governance solution, made generally available in September, delivers comprehensive visibility, data confidence, and responsible innovation—for greater business value in the era of AI. The solution streamlines metadata from disparate catalogs and sources, like OneLake, Databricks Unity, and Snowflake Polaris, into a unified experience. To better reflect these comprehensive customer benefits, Microsoft Purview Data Catalog is being renamed to Microsoft Purview Unified Catalog to exemplify the growing catalog capabilities such as deeper data quality support for more cloud sources, and Purview Analytics in OneLake. A data catalog serves as a comprehensive inventory of an organization's data assets. As the Microsoft Purview Unified Catalog continues to add on capabilities within curation, data quality, and third-party platform integration, the new Unified Catalog name reflects the current cross-cloud capability. This cross-cloud capability is illustrated in the figure below. This data product contains data assets from multiple different sources, including a Fabric lakehouse table, Snowflake Table and Azure Databricks Table. With the proper curation of analytics into data products, data users can govern data assets easier than ever. Figure 1: Curation of a data product from disparate data sources within Purview’s Unified Catalog Introducing OneLake catalog (Preview) As announced in the Microsoft Fabric blog earlier today, the OneLake catalog is a solution purpose-built for data engineers, data scientists, developers, analysts, and data consumers to explore, manage, and govern data in Fabric. The new OneLake catalog works with Purview by seamlessly connecting data assets governed by OneLake catalog into Purview Unified Catalog, enabling the central data office to centrally govern and manage data assets. The Purview Unified Catalog offers data stewards and data owners advanced capabilities for data curation, advanced data quality, end-to-end data lineage, and an intuitive global catalog that spans the data estate. For data leaders, Unified Catalog offers built-in reports for actionable insights into data health and risks and the ability to confidently govern data across the heterogeneous data estate. In figure 2, you can see how Fabric data is seamlessly curated into the Corporate Emissions Created by AI for CY2024 Data Product, built with data assets from OneLake. Figure 2: Data product curated with Fabric assets Introducing a new data quality scan engine for deeper data quality (Preview) Purview offers deeper data quality support, through a new data quality scan engine for big data platforms, including: Microsoft Fabric, Databricks Unity Catalog, Snowflake, Google Big Query, and Amazon S3, supporting open standard file and table formats. In short, this new scan engine allows businesses to centrally perform rich data quality management from within the Purview Unified Catalog. In Figure 3, you can see how users can run different data quality rules on a particular asset, in this case, a table hosted in OneLake, and when users click on “run quality scan”, the scanner runs a deep scan on the data itself, running the data quality rules in real time, and updating the quality score for that particular asset. Figure 3: Running a data quality scan on an asset living in OneLake Introducing Purview Analytics in OneLake (Preview) To further an organization’s data quality management practice, data stewards can now leverage a new Purview Analytics in OneLake capability, in preview, to extract tenant-specific metadata from the Purview Unified Catalog and publish to OneLake. This new capability enables deeper data quality and lineage investigation using the rich capabilities in Power BI within Microsoft Fabric. Figure 4: In Unified Catalog settings, a user can add self-serve analytics to Microsoft Fabric Figure 5: Curated metadata from Purview within Fabric Expanded Data Loss Prevention (DLP) capabilities for Fabric lakehouse and semantic models To broaden Purview data security features for Fabric, today we are announcing that the restrict access action in Purview DLP policies now extends to Fabric semantic models. With the restrict access action, DLP admins can configure policies to detect sensitive information in semantic models and limit access to only internal users or data owners. This control is valuable for when a Fabric tenant includes guest users and you want to limit unnecessary access to internal proprietary data. The addition of the restrict access action for Fabric semantic models augments the existing ability to detect upload of sensitive data to Fabric lakehouses announced earlier this year. Learn more about the new Purview DLP capabilities for Fabric lakehouses and semantic models in the DLP blog. Figure 6: Example of restricted access to a Fabric semantic model enforced through a Purview DLP policy. Summary With these investments in security and governance, Microsoft Purview is delivering on its vision to extend data protection customer value and innovation across your heterogenous data estate for reduced complexities and improved risk mitigation. Together Purview and Fabric set the foundations for a modern intelligent data platform with seamless security and governance to drive AI innovation you can trust. Learn more As we continue to innovate our products to expand the security and governance capabilities, check out these resources to stay informed. https://aka.ms/Try-Purview-Governance https://www.microsoft.com/en-us/security/business/microsoft-purview https://aka.ms/try-fabric [1] Top 7 Challenges in Data Integration and How to Solve Them | by Codvo Marketing | Medium [2] Microsoft internal research May 2023, N=6382.9KViews1like0CommentsStreamlining AI Compliance: Introducing the Premium Template for Indonesia's PDP Law in Purview
In today’s evolving regulatory environment, businesses must navigate complex data privacy laws while fostering customer trust, especially as AI transforms industries. To support organizations in meeting compliance requirements, we’re introducing the Premium Assessment Template for Indonesia's Personal Data Protection (PDP) Law within Microsoft Purview Compliance Manager. This powerful tool automates critical compliance tasks, simplifies assessments, and integrates seamlessly with Microsoft’s E5 security and Purview solutions, helping businesses reduce manual effort and ensure compliance more efficiently. Discover how this template can streamline your compliance efforts and build trust in an AI-driven world.4.1KViews0likes0CommentsArchitecting secure Gen AI applications: Preventing Indirect Prompt Injection Attacks
Indirect Prompt Injection is an emerging attack vector specifically designed to target and exploit Generative AI applications powered by large language models (LLMs). But what exactly is indirect prompt injection, and how can you build applications that are more resilient and less susceptible to abuse by attackers?9.1KViews1like0CommentsBest practices to architect secure generative AI applications
This blog post delves into the best practices to securely architect Gen AI applications, ensuring they operate within the bounds of authorized access and maintain the integrity and confidentiality of sensitive data.13KViews2likes1Comment