Insider Risk Management

Become an Insider Risk Management Ninja
We are excited to announce this installment of the Ninja Training Series. Many videos and resources exist on this topic; the purpose of the Insider Risk Management Ninja training is to point you to the most relevant resources so you can get started and become more proficient in this area.
Announcing the public preview of Insider Risk Management
At Ignite last November, we announced Microsoft 365 Insider Risk Management and began a private preview. Since then, we've been hard at work scaling and delivering a solution that leverages AI and automation to quickly identify and investigate insider risks. Today, we move to the next phase and announce the public preview of Insider Risk Management: all multi-tenant customers worldwide can now access this solution. Insider Risk Management helps your organization intelligently identify and take action on insider risks by:
- Ensuring employee privacy is built in, with anonymity controls
- Correlating native and third-party signals within tailored policy templates to intelligently identify insider threats
- Enabling end-to-end investigations with integrated workflows across IT, HR, and legal teams
As we start public preview, we'll continue listening to and acting on your feedback to ensure we're meeting your needs as we head toward general availability in the coming months. At general availability, the solution will also be available in go-local environments, with government environments following shortly thereafter. Insider Risk Management is part of the Microsoft 365 E5 suite; you can sign up for a trial or navigate to the Microsoft 365 Compliance Center to get started today. Ready to get started? Check out our getting started guide to learn how.
Leveraging AI and automation to quickly identify and investigate insider risks
I spent several years in Microsoft's internal digital security and risk organization, helping to develop programs to identify insider risks, threats, and code-of-conduct policy violations in collaboration with our human resources (HR) and legal teams. The ability to identify these risks and policy violations, and then take action to minimize their negative impact, is a priority for organizations worldwide. Modern workplaces offer innovative technology that employees love, empowering them to communicate, collaborate, and produce with agility. Trusting your employees is the key to creating a dynamic, inclusive workplace and increasing productivity. But with trust also comes risk. In fact, a survey by Crowd Research Partners indicated that 90% of organizations feel vulnerable to insider risks, and 53% confirmed insider risks against their organization in the previous 12 months.

We know from our own experience that it's hard to maintain trust without the right visibility, processes, and controls, and the effort required to identify these risks and violations is not trivial. Think about the number of people accessing resources and communicating with each other, as well as the natural cycle of people entering and leaving the company. How do you quickly determine what is an intentional risk versus an unintentional one, at scale? And how do you achieve this level of visibility while aligning to the cultural, legal, and privacy requirements in which you operate? Truly malicious insiders do things such as intentionally stealing your intellectual property, turning off security controls, or harassing others at work. But there are many more situations in which insiders might not even know they are causing a risk to the organization or violating your policies, such as when they're excited about something new they're working on and send files or photos to tell others about it. Ultimately, it's important to see the activities and communications that occurred in the context of intent in order to take the right course of action. The only way to do this efficiently and at scale is by leveraging intelligence and machine learning, as human-driven processes can't keep up and aren't always accurate. Furthermore, a holistic solution to this problem requires effective collaboration across security, HR, and legal, as well as a balanced approach across privacy and risk management. Today I am excited to announce two new Microsoft 365 solutions, Insider Risk Management and Communication Compliance. These solutions can help you and your organization leverage intelligence to identify and remediate insider risks and code-of-conduct policy violations, while meeting regulatory requirements.

Insider Risk Management
[Video] Hear from Microsoft CVP and CISO Bret Arsenault and his team about how they think about insider risk management.
Insider Risk Management leverages the Microsoft Graph, security services, and connectors to human resources (HR) systems like SAP to obtain real-time native signals such as file activity, communications sentiment, abnormal user behaviors, and resignation date. A set of configurable playbooks, tailored specifically for risks such as digital IP theft, confidentiality breaches, and HR violations, uses machine learning and intelligence to correlate these signals and identify hidden patterns and risks that traditional or manual methods might miss. Using intelligence allows the solution to focus on actual suspicious activities, so you don't get overloaded with alerts.
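To make the HR connector mentioned above concrete, the sketch below shows, in Python, how an HR system export might be pushed to the connector's ingestion endpoint. This is a minimal sketch, not a definitive implementation: the endpoint URL, the "jobid" parameter, and the CSV columns are assumptions modeled loosely on the documented connector sample script, so consult the HR connector documentation for the authoritative contract.

```python
import requests

# NOTE: all values below are placeholders; the endpoint, "jobid" parameter,
# and CSV columns are assumptions modeled on the documented HR connector
# sample script, not an authoritative contract.
INGESTION_URL = "https://webhook.ingestion.office.com/api/signals"  # assumed endpoint
JOB_ID = "<HR_CONNECTOR_JOB_ID>"  # shown in the portal when you create the connector
ACCESS_TOKEN = "<AAD_APP_TOKEN>"  # Azure AD app token authorized for the connector

# A minimal CSV payload carrying one resignation record.
csv_payload = (
    "EmailAddress,ResignationDate,LastWorkingDate\n"
    "adele.vance@contoso.com,2020-04-01T00:00:00Z,2020-04-15T00:00:00Z\n"
)

resp = requests.post(
    INGESTION_URL,
    params={"jobid": JOB_ID},
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}", "Content-Type": "text/csv"},
    data=csv_payload,
    timeout=30,
)
resp.raise_for_status()
print("HR records submitted:", resp.status_code)
```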
Display names for risky users can also be pseudonymized by default to maintain privacy and prevent bias. A comprehensive 360-degree view provides a curated, easy-to-understand visual summary of individual risks within your organization. This view includes a historical timeline of relevant in-scope activities and trends associated with each identified user. For example, you could see that a user submitted their resignation, downloaded some files, and copied some of them to a USB device. The system also evaluates whether any of those files had classification labels on them and whether they contained sensitive information. With the right permissions, files accessed from Microsoft cloud resources like SharePoint Online can also be made available for the investigator to view, which further helps with the risk determination. Having all this information at your fingertips allows you to quickly decide whether a risk warrants further investigation, saving you considerable time. Finally, end-to-end integrated workflows ensure that the right people across security, HR, legal, and compliance are involved to quickly investigate and take action once a risk has been identified. For example, if the risk was determined to be unintentional, you could send an email noting that the activity violates company policy, with a link to training or the policy handbook. If the risk was determined to be malicious, you could open an investigation that collates and preserves all the evidence collected, including the documents, and creates a case for legal and HR to take appropriate action. Insider Risk Management is available as part of the Microsoft 365 E5 suite and is currently in limited private preview. You can sign up for an opportunity to participate here.

Communication Compliance
Communication Compliance is a brand-new solution that helps all organizations address code-of-conduct policy violations in company communications, while also helping organizations in regulated industries meet specific supervisory compliance requirements. Communication Compliance supports a number of company communication channels, including Exchange email, Teams, Skype for Business Online, Twitter, Facebook, and Bloomberg instant messages. Organizations need the ability to efficiently investigate potential violations and take adequate remediation action based on local regulations. In addition to granular identification of specific words and phrases, we provide three out-of-the-box machine learning models that identify physical violence, harassment, and profanity. You can also build your own trainable classifiers that understand meaning and context unique to your organization's needs, such as insider trading or unethical practices, freeing you from a sea of false positives. Once a violation has been flagged and the designated supervisor is alerted, it is important that the review process enables them to act on violations efficiently. Communication Compliance includes features such as historical user context on past violations, conversation threading, and keyword highlighting, which together allow the supervisor to quickly triage a violation and take the appropriate remediation actions. The interactive dashboard provides an effective way to manage the growing volume of communication risks and ensure violations aren't missed. Proactive, intelligent alerts on policy violations requiring immediate attention allow the supervisor to prioritize and focus on the most critical violations first.
In addition, views of violations, actions, and trends by policy provide a quick read on the effectiveness of your program. The Financial Industry Regulatory Authority (FINRA) Rule 3110 is a good example of a requirement for regulated organizations to have solutions in place to detect violations in communications. For example, safeguarding against potential money-laundering, insider trading, collusion, or bribery activities between broker-dealers is a critical priority. For organizations in regulated industries, Communication Compliance provides a full audit of review activities and tracking of policy implementation to help you meet the regulatory requirements you may be subject to. Communication Compliance is available today as part of the Microsoft 365 E5 suite, and you can sign up for a trial or navigate to the Microsoft 365 Compliance Center to get started today. We encourage customers who are currently using Supervision in Office 365 to move to the new Communication Compliance solution to address your regulatory requirements with a much richer set of intelligent capabilities. Update: check out our session at Ignite 2019 that covers Insider Risk Management and Communication Compliance. Thank you, Talhah Mir, Principal Program Manager, Microsoft 365 Security and Compliance Engineering
Secure and govern AI apps and agents with Microsoft Purview
The Microsoft Purview family is here to help you secure and govern data across third-party IaaS and SaaS in a multi-platform data environment, while helping you meet any compliance requirements you may be subject to. Purview brings simplicity with a comprehensive set of solutions built on a platform of shared capabilities that helps keep your most important asset, your data, safe. With the introduction of AI technology, Purview has also expanded its data coverage to include discovering, protecting, and governing the interactions of AI apps and agents: Microsoft Copilot experiences such as Microsoft 365 Copilot and Security Copilot, enterprise-built AI apps such as ChatGPT Enterprise, and consumer AI apps such as DeepSeek accessed through the browser. To help you view and investigate interactions with all of these AI apps, and to create and manage the policies that secure and govern them in one centralized place, we have launched Purview Data Security Posture Management (DSPM) for AI. You can learn more about DSPM for AI, with short video walkthroughs, here: Learn how Microsoft Purview Data Security Posture Management (DSPM) for AI provides data security and compliance protections for Copilots and other generative AI apps | Microsoft Learn

Purview capabilities for AI apps and agents
To understand the current set of capabilities within Purview to discover, protect, and govern various AI apps and agents, please refer to our Learn doc: Microsoft Purview data security and compliance protections for Microsoft 365 Copilot and other generative AI apps | Microsoft Learn. Note that DLP for Copilot and sensitivity label enforcement are currently designed to protect content in Microsoft 365. Thus, Security Copilot and Copilot in Fabric, along with Copilot Studio custom agents that do not use Microsoft 365 as a content source, do not have these features available. Please see the list of AI sites supported by Microsoft Purview DSPM for AI here.

Conclusion
Microsoft Purview can help you discover, protect, and govern the prompts and responses of AI applications across Microsoft Copilot experiences, enterprise AI apps, and other AI apps through its data security and data compliance solutions, while allowing you to view, investigate, and manage interactions in one centralized place in DSPM for AI.
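For teams that want to pull these interactions programmatically rather than only through the portal, the Copilot audit records referenced in the follow-up reading below can be queried with the Microsoft Graph Audit Log Query API. Here is a minimal Python sketch, assuming an app registration with the AuditLogsQuery.Read.All permission and a pre-acquired token; the copilotInteraction record-type value and the polling interval are assumptions to verify against current Graph documentation.

```python
import time
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
ACCESS_TOKEN = "<ACCESS_TOKEN>"  # app token with the AuditLogsQuery.Read.All permission
headers = {"Authorization": f"Bearer {ACCESS_TOKEN}", "Content-Type": "application/json"}

# Submit an asynchronous audit-log query scoped to Copilot interaction records.
query = requests.post(
    f"{GRAPH}/security/auditLog/queries",
    headers=headers,
    json={
        "displayName": "Copilot interactions - first week of May",
        "filterStartDateTime": "2025-05-01T00:00:00Z",
        "filterEndDateTime": "2025-05-08T00:00:00Z",
        "recordTypeFilters": ["copilotInteraction"],  # assumed enum value; verify in Graph docs
    },
    timeout=30,
)
query.raise_for_status()
query_id = query.json()["id"]

# Poll until the query completes (the 30-second interval is an arbitrary choice).
while True:
    status = requests.get(
        f"{GRAPH}/security/auditLog/queries/{query_id}", headers=headers, timeout=30
    ).json()["status"]
    if status in ("succeeded", "failed", "cancelled"):
        break
    time.sleep(30)

# Read the resulting records (first page shown for brevity).
if status == "succeeded":
    records = requests.get(
        f"{GRAPH}/security/auditLog/queries/{query_id}/records", headers=headers, timeout=30
    ).json()
    for rec in records.get("value", []):
        print(rec.get("createdDateTime"), rec.get("userPrincipalName"), rec.get("operation"))
```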
Follow-up reading
Deployment guides for DSPM for AI:
- How to deploy DSPM for AI - https://aka.ms/DSPMforAI/deploy
- How to use DSPM for AI data risk assessments to address oversharing - https://aka.ms/dspmforai/oversharing
- Address oversharing concerns with the Microsoft 365 blueprint - aka.ms/Copilot/Oversharing
Explore the Purview SDK:
- Microsoft Purview SDK Public Preview | Microsoft Community Hub (blog)
- Microsoft Purview documentation - purview-sdk | Microsoft Learn
- Build secure and compliant AI applications with Microsoft Purview (video)
References for DSPM for AI:
- Microsoft Purview data security and compliance protections for Microsoft 365 Copilot and other generative AI apps | Microsoft Learn
- Considerations for deploying Microsoft Purview AI Hub and data security and compliance protections for Microsoft 365 Copilot and Microsoft Copilot | Microsoft Learn
- Block users from sharing sensitive information to unmanaged AI apps via Edge on managed devices (preview) | Microsoft Learn, as part of Scenario 7 of Create and deploy a data loss prevention policy | Microsoft Learn
- Commonly used properties in Copilot audit logs - Audit logs for Copilot and AI activities | Microsoft Learn
- Supported AI sites by Microsoft Purview for data security and compliance protections | Microsoft Learn
- Where Copilot usage data is stored and how you can audit it - Microsoft 365 Copilot data protection and auditing architecture | Microsoft Learn
- Downloadable whitepaper: Data Security for AI Adoption | Microsoft
Explore the roadmap for DSPM for AI:
- Public roadmap for DSPM for AI - Microsoft 365 Roadmap | Microsoft 365
How to deploy Microsoft Purview DSPM for AI to secure your AI apps
Microsoft Purview Data Security Posture Management (DSPM) for AI is designed to enhance data security for the following AI applications:
- Microsoft Copilot experiences, including Microsoft 365 Copilot
- Enterprise AI apps, including the ChatGPT Enterprise integration
- Other AI apps, meaning all other AI applications, such as ChatGPT consumer, Microsoft Copilot, DeepSeek, and Google Gemini, accessed through the browser
In this blog, we will dive into the different policies and reporting available to discover, protect, and govern these three types of AI applications.

Prerequisites
Please refer to the prerequisites for DSPM for AI in the Microsoft Learn docs.

Log in to the Purview portal
To begin, log into the Microsoft Purview portal with your admin credentials. In the portal, go to the Home page and find DSPM for AI under Solutions.

1. Securing Microsoft 365 Copilot
Be sure to check out our blog on how to use the DSPM for AI data assessment report to help you address oversharing concerns when you deploy Microsoft 365 Copilot.

Discover potential data security risks in Microsoft 365 Copilot interactions
In the Overview tab of DSPM for AI, start with the tasks in "Get Started" and activate Purview Audit if you have not yet activated it in your tenant, to get insights into user interactions with Microsoft Copilot experiences. In the Recommendations tab, review the recommendations that are under "Not Started" and create the following data discovery policy by clicking into it:
- Detect risky interactions in AI apps - this public preview Purview Insider Risk Management policy helps calculate user risk by detecting risky prompts and responses in Microsoft 365 Copilot experiences. Click here to learn more about the risky AI usage policy.
With the policies to discover sensitive information in Microsoft Copilot experiences in place, head back to the Reports tab of DSPM for AI to discover any AI interactions that may be risky, with the option to filter to Microsoft Copilot experiences, and review the following reports:
- Total interactions over time (Microsoft Copilot)
- Sensitive interactions per AI app
- Top unethical AI interactions
- Top sensitivity labels referenced in Microsoft 365 Copilot
- Insider risk severity
- Insider risk severity per AI app
- Potential risky AI usage

Protect sensitive data in Microsoft 365 Copilot interactions
From the Reports tab, click "View details" on each report graph to see detailed activities in the Activity Explorer. Using the available filters, narrow the results to activities from Microsoft Copilot experiences by Activity type, AI app category, App type, Scope (which supports administrative units for DSPM for AI), and more. Then drill down into each activity to view details, including the ability to view prompts and responses with the right permissions. To protect the sensitive data in Microsoft 365 Copilot interactions, review the Not Started policies in the Recommendations tab and create these policies:
- Information Protection Policy for Sensitivity Labels - this option creates default sensitivity labels and sensitivity label policies. If you've already configured sensitivity labels and their policies, this configuration is skipped.
- Protect sensitive data referenced in Microsoft 365 Copilot - this guides you through creating a Purview Data Loss Prevention (DLP) policy to restrict the processing of content with specific sensitivity labels in Copilot interactions. Click here to learn more about Data Loss Prevention for Microsoft 365 Copilot.
- Protect sensitive data referenced in Copilot responses - sensitivity labels help protect files by controlling user access to data. Microsoft 365 Copilot honors sensitivity labels on files and only shows users files they already have access to in prompts and responses. Use Data assessments to identify potential oversharing risks, including unlabeled files. Stay tuned for an upcoming blog post on using DSPM for AI data assessments!
- Use Copilot to improve your data security posture - Data Security Posture Management combines deep insights with Security Copilot capabilities to help you identify and address security risks in your org.
Once you have created policies from the Recommendations tab, you can go to the Policies tab to review and manage all the policies you have created across your organization, to discover and safeguard AI activity in one centralized place, and to edit the policies or investigate alerts associated with those policies in the solution. Note that additional policies not created from the Recommendations tab will also appear in the Policies tab when DSPM for AI identifies them as policies that secure and govern AI apps.

Govern the prompts and responses in Microsoft 365 Copilot interactions
Understand and comply with AI regulations by selecting "Guided assistance to AI regulations" in the Recommendations tab and walking through the "Actions to take". From the Recommendations tab, create a Control unethical behavior in AI Purview Communication Compliance policy to detect sensitive information in prompts and responses and address potentially unethical behavior in Microsoft Copilot experiences and ChatGPT Enterprise. This policy covers all users and groups in your organization. To retain and/or delete Microsoft 365 Copilot prompts and responses, set up a Data Lifecycle policy by navigating to Microsoft Purview Data Lifecycle Management and finding Retention Policies under the Policies header. You can also preserve, collect, analyze, review, and export Microsoft 365 Copilot interactions by creating an eDiscovery case; a programmatic sketch follows.
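The sketch below shows that last step done programmatically through the Microsoft Graph eDiscovery API. It is a minimal Python example, assuming an app registration granted eDiscovery.ReadWrite.All and a token acquired elsewhere; the case name and description are placeholders, not values prescribed by DSPM for AI.

```python
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
ACCESS_TOKEN = "<ACCESS_TOKEN>"  # token for an app granted eDiscovery.ReadWrite.All

def create_ediscovery_case(display_name: str, description: str = "") -> dict:
    """Create a Microsoft Purview eDiscovery case via Microsoft Graph."""
    resp = requests.post(
        f"{GRAPH}/security/cases/ediscoveryCases",
        headers={
            "Authorization": f"Bearer {ACCESS_TOKEN}",
            "Content-Type": "application/json",
        },
        json={"displayName": display_name, "description": description},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    # Hypothetical case name and description for a Copilot interactions review.
    case = create_ediscovery_case(
        "Copilot interactions review",
        "Preserve and review Microsoft 365 Copilot prompts and responses.",
    )
    print(case["id"], case["displayName"])
```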
2. Securing Enterprise AI apps
Please refer to the blog Unlocking the Power of Microsoft Purview for ChatGPT Enterprise | Microsoft Community Hub for detailed information on how to integrate with ChatGPT Enterprise and on the Purview solutions it currently supports: Communication Compliance, Insider Risk Management, eDiscovery, and Data Lifecycle Management. Learn more about the feature through our public documentation.

3. Securing other AI
Microsoft Purview DSPM for AI currently supports the following list of AI sites. Be sure to also check out our blog on the new Microsoft Purview data security controls for the browser and network to secure other AI apps.

Discover potential data security risks in prompts sent to other AI apps
In the Overview tab of DSPM for AI, go through these three steps in "Get Started" to discover potential data security risks in other AI interactions:
Install the Microsoft Purview browser extension.
- For Windows users: the Purview extension is not necessary to enforce data loss prevention in the Edge browser, but it is required for Chrome to detect sensitive info pasted or uploaded to AI sites. The extension is also required to detect browsing to other AI sites through an Insider Risk Management policy in both Edge and Chrome. Therefore, the Purview browser extension is required for both Edge and Chrome on Windows.
- For macOS users: the Purview extension is not necessary to enforce data loss prevention on macOS devices, and browsing to other AI sites through Purview Insider Risk Management is currently not supported on macOS; therefore, no Purview browser extension is required on macOS.
Extend your insights for data discovery - this one-click collection policy sets up three separate Purview detection policies for other AI apps:
- Detect sensitive info shared in AI prompts in Edge - a Purview collection policy that detects prompts sent to ChatGPT consumer, Microsoft Copilot, DeepSeek, and Google Gemini in Microsoft Edge and discovers sensitive information shared in prompt contents. This policy covers all users and groups in your organization in audit mode only.
- Detect when users visit AI sites - a Purview Insider Risk Management policy that detects when users use a browser to visit AI sites.
- Detect sensitive info pasted or uploaded to AI sites - a Purview Endpoint Data Loss Prevention (eDLP) policy that discovers sensitive content pasted or uploaded in Microsoft Edge, Chrome, and Firefox to AI sites. This policy covers all users and groups in your org in audit mode only.
With the policies to discover sensitive information in other AI apps in place, head back to the Reports tab of DSPM for AI to discover any AI interactions that may be risky, with the option to filter by Other AI apps, and review the following reports:
- Total interactions over time (other AI apps)
- Total visits (other AI apps)
- Sensitive interactions per AI app
- Insider risk severity
- Insider risk severity per AI app

Protect sensitive info shared with other AI apps
From the Reports tab, click "View details" on each report graph to see detailed activities in the Activity Explorer. Using the available filters, narrow the results by Activity type, AI app category, App type, Scope (which supports administrative units for DSPM for AI), and more. To protect the sensitive data in interactions with other AI apps, review the Not Started policies in the Recommendations tab and create these policies:
Fortify your data security - this will create three policies to manage your data security risks with other AI apps:
1) Block elevated risk users from pasting or uploading sensitive info on AI sites - this creates a Microsoft Purview endpoint data loss prevention (eDLP) policy that uses adaptive protection to give a warn-with-override to elevated-risk users attempting to paste or upload sensitive information to other AI apps in Edge, Chrome, and Firefox. This policy covers all users and groups in your org in test mode. Learn more about adaptive protection in Data Loss Prevention.
2) Block elevated risk users from submitting prompts to AI apps in Microsoft Edge - this creates a Microsoft Purview browser data loss prevention (DLP) policy that uses adaptive protection to block elevated-, moderate-, and minor-risk users attempting to put information into other AI apps using Microsoft Edge. This integration is built into Microsoft Edge. Learn more about adaptive protection in Data Loss Prevention.
3) Block sensitive info from being sent to AI apps in Microsoft Edge - this creates a Microsoft Purview browser data loss prevention (DLP) policy that detects, inline, a selection of common sensitive information types and blocks prompts from being sent to AI apps while using Microsoft Edge. This integration is built into Microsoft Edge.
Once you have created policies from the Recommendations tab, you can go to the Policies tab to review and manage all the policies you have created across your organization in one centralized place, and to edit the policies or investigate alerts associated with them in the solution. Note that additional policies not created from the Recommendations tab will also appear in the Policies tab when DSPM for AI identifies them as policies that secure and govern AI apps.

Conclusion
Microsoft Purview DSPM for AI can help you discover, protect, and govern the interactions of AI applications across Microsoft Copilot experiences, enterprise AI apps, and other AI apps. We recommend reviewing the Reports in DSPM for AI routinely to discover any new interactions that may be of concern, and creating policies to secure and govern those interactions as necessary. We also recommend utilizing the Activity Explorer in DSPM for AI to review events as users interact with AI, including the ability to view prompts and responses with the right permissions. We will continue to update this blog with new features as they become available in DSPM for AI, so be sure to bookmark this page!

Follow-up reading
- Check out this blog on the details of each recommended policy in DSPM for AI: Microsoft Purview – Data Security Posture Management (DSPM) for AI | Microsoft Community Hub
- Address oversharing concerns with the Microsoft 365 blueprint - aka.ms/Copilot/Oversharing
- Microsoft Purview data security and compliance protections for Microsoft 365 Copilot and other generative AI apps | Microsoft Learn
- Considerations for deploying Microsoft Purview AI Hub and data security and compliance protections for Microsoft 365 Copilot and Microsoft Copilot | Microsoft Learn
- Commonly used properties in Copilot audit logs - Audit logs for Copilot and AI activities | Microsoft Learn
- Supported AI sites by Microsoft Purview for data security and compliance protections | Microsoft Learn
- Where Copilot usage data is stored and how you can audit it - Microsoft 365 Copilot data protection and auditing architecture | Microsoft Learn
- Downloadable whitepaper: Data Security for AI Adoption | Microsoft
- Public roadmap for DSPM for AI - Microsoft 365 Roadmap | Microsoft 365
Use Supervision to monitor email, Microsoft Teams, manage risk, meet regulatory requirements, and more
Monitoring digital communications is critical to mitigating conduct, reputational, and financial risks. With the Microsoft 365 Supervision solution, you can monitor Exchange email and Microsoft Teams chats and channels, leverage sensitive information types, use advanced message filters, and utilize an integrated review process.
Introducing the Azure Threat Research Matrix
When performing a security assessment, it's common for the assessment team to attribute their actions to the MITRE ATT&CK knowledge base so that high-level stakeholders can visually see which techniques were successful and defenders can understand which techniques were performed. However, the commonly used MITRE knowledge base lacks formal documentation of Azure- or Azure AD-related tactics, techniques, and procedures (TTPs) that assessment teams can attribute to. Over the past year, Microsoft has worked with some of the top Azure security researchers to create the Azure Threat Research Matrix (ATRM), a matrix that provides details about the tactics and techniques a potential adversary may use to compromise an Azure resource or Azure Active Directory.
Protecting against insider risks in an uncertain environment
Countless security and compliance officers around the world are now asking themselves, "Is my organization effectively prepared to identify and take action on ever-increasing insider risks?" The primary reason for this question is COVID-19 and the rapid digital transformation it has forced organizations to undertake. According to a recent survey, the lives of up to 300 million information workers worldwide have been upended, and many are now working remotely with limited resources, insecure Wi-Fi networks, and increased stress. In addition, due to the rapid progression of the pandemic, many information workers are expected to use their home PCs or other shared devices. Remote work, while protecting employees from exposure to the virus, increases the distractions they are likely to face, such as shared home workspaces and remote learning for children. According to the SEI CERT Institute, user distractions are the cause of many accidental and non-malicious insider risks. Stressors such as potential job loss or safety concerns are also now heightened, which may lead some employees to participate in malicious activities, such as stealing intellectual property.

In February this year, we introduced Insider Risk Management from Microsoft 365, helping organizations worldwide leverage the power of cloud scale combined with machine learning to identify insider risks and quickly take action with integrated collaboration workflows. Today we are pleased to announce the public preview of several new features that further enhance the rich set of detection and remediation capabilities already offered in the solution.

Increased visibility with a focus on signal quality
While broad visibility into end-user activities, actions, and communications is important, when it comes to effectively identifying risks, the quality of insights matters most. In this release, we are significantly expanding the quality of insights that Insider Risk Management delivers to intelligently flag potentially risky behavior. We are further enhancing our already rich native integration with Microsoft 365 to surface additional insights across Microsoft Teams, SharePoint, and Exchange, including:
- Sharing files/folders/sites from SharePoint Online to domains marked "unallowed"
- Downloading content from Teams
- Emailing outside the organization to domains marked "unallowed"
On devices, we continue to leverage the agentless capture of signals from Windows 10 endpoints to deliver new insights related to the obfuscation, exfiltration, or infiltration of sensitive information, including:
- Using Edge to copy files to personal cloud storage
- Copying files to USB
- Printing documents
- Transferring files to a network share
- Using Edge to download content from an unallowed domain
- Using Edge to download content from a third-party site
- Renaming files on device
For those using Microsoft Defender Advanced Threat Protection (MDATP), we can now provide insights into whether someone is trying to evade security controls by disabling multi-factor authentication or installing unwanted software, which may indicate potentially malicious behavior. Finally, one of the key early indicators of whether someone may choose to participate in malicious activities is disgruntlement. In this release, we are further enhancing our native HR connector to let organizations choose whether to use additional HR insights that might indicate disgruntlement to initiate a policy.
More detail on the breadth of new signals being captured can be found on our documentation site.

Quickly getting started without complex configurations or agent deployments
Customers have told us that one of the features they really appreciate in Insider Risk Management is the ability to leverage the built-in policy templates to quickly get started identifying risks. Before switching to Insider Risk Management, many of the Microsoft 365 customers we spoke to were using a fractured and expensive approach to identify insider risks: they captured signals using a User Activity Monitoring (UAM) solution and fed those signals into a separate User and Entity Behavior Analytics (UEBA) solution, in the hope of finding the insider-risk needle in the haystack. We know from our own experience attempting to deploy these complex solutions at Microsoft that this approach is not scalable, and that it often produces a lot of 'noise' because it lacks enrichment, such as visibility into the sensitivity of the data, and broader context. In addition, deploying UAM and UEBA solutions takes considerable engineering resources to configure and maintain signal ingestion scripts, identify rules, and manage endpoint agents.

Insider Risk Management from Microsoft 365 was developed in close collaboration with our internal digital security and risk engineering organization, and we leveraged our extensive learnings and research in this space to design a solution that is easy to get started with. With Insider Risk Management, there is no requirement to deploy and manage endpoint agents on Windows 10 devices or to configure and maintain complex scripting to ingest signals. You simply choose the policy template most appropriate for the risk you are concerned about and add the users you want to look at. In the backend, our cloud-based machine learning and AI engine reasons over billions of signals to identify the risks most relevant for you to act on. In this release, to help organizations identify an even broader variety of risks, we are introducing new policy templates, including:
- Data leaks by priority users
- Data leaks by disgruntled users
- General security policy violations
- Security policy violations by departing users
- Security policy violations by priority users
- Security policy violations by disgruntled users
We are also introducing the powerful ability to customize policy templates. With policy customization, you can change the thresholds of the various indicators each policy reasons over to meet the unique needs of your organization. More detail on these new templates and policy customization can be found on our documentation page.

Expanding extensibility into existing organizational systems and processes
Many organizations already leverage Security Orchestration, Automation and Response (SOAR) systems to log and classify incidents by impact and urgency and to prioritize actions for those assigned to them. With this release, we are integrating with ServiceNow APIs, giving Insider Risk Management case managers the ability to create ServiceNow tickets for incident managers directly. These tickets can be customized with a description of the incident and will also contain a link back to the case in Insider Risk Management for more detailed insights; a sketch of creating such a ticket follows.
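The sketch below illustrates the ServiceNow side of such an integration using ServiceNow's standard Table API. It is a minimal Python example, assuming basic authentication and a hypothetical instance name; the field values beyond short_description and description depend on your ServiceNow configuration.

```python
import requests

INSTANCE = "contoso"  # hypothetical ServiceNow instance name
URL = f"https://{INSTANCE}.service-now.com/api/now/table/incident"

resp = requests.post(
    URL,
    auth=("svc_insider_risk", "<PASSWORD>"),  # placeholder basic-auth credentials
    headers={"Content-Type": "application/json", "Accept": "application/json"},
    json={
        "short_description": "Insider Risk Management alert: potential data leak",
        "description": (
            "Case opened in Microsoft 365 Insider Risk Management.\n"
            "Link back to the case: <case URL from Insider Risk Management>"
        ),
        "urgency": "2",  # field values depend on your ServiceNow configuration
    },
    timeout=30,
)
resp.raise_for_status()
print("Created incident:", resp.json()["result"]["number"])
```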
In addition, we are also pushing Insider Risk Management alerts to the Office 365 Management Activity API. These alerts contain information such as alert severity and status (active, investigating, resolved, dismissed), and can be consumed by Security Information and Event Management (SIEM) systems like Azure Sentinel to take further actions, such as disabling user access, or to link back to Insider Risk Management for further investigation.
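Here is a minimal consumption sketch against the Office 365 Management Activity API in Python, assuming an Azure AD app granted the ActivityFeed.Read permission and a client-credentials token; the alerts surface through the Audit.General content type, and the record filter at the end is a placeholder assumption rather than the documented record schema.

```python
import msal
import requests

TENANT_ID = "<TENANT_ID>"
CLIENT_ID = "<CLIENT_ID>"
CLIENT_SECRET = "<CLIENT_SECRET>"
BASE = f"https://manage.office.com/api/v1.0/{TENANT_ID}/activity/feed"

# Client-credentials token for the Office 365 Management Activity API.
app = msal.ConfidentialClientApplication(
    CLIENT_ID,
    authority=f"https://login.microsoftonline.com/{TENANT_ID}",
    client_credential=CLIENT_SECRET,
)
token = app.acquire_token_for_client(scopes=["https://manage.office.com/.default"])
headers = {"Authorization": f"Bearer {token['access_token']}"}

# Start (or confirm) a subscription to the Audit.General content type,
# which carries Insider Risk Management alerts among other records.
requests.post(
    f"{BASE}/subscriptions/start?contentType=Audit.General", headers=headers, timeout=30
)

# List the available content blobs, fetch each one, and pick out records
# that look like Insider Risk Management alerts. The substring filter is a
# placeholder assumption; match on the documented record type in production.
blobs = requests.get(
    f"{BASE}/subscriptions/content?contentType=Audit.General", headers=headers, timeout=30
).json()
for blob in blobs:
    for record in requests.get(blob["contentUri"], headers=headers, timeout=30).json():
        if "insiderrisk" in str(record).lower():
            print(record.get("CreationTime"), record.get("Operation"))
```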
Get started today
The new features in Insider Risk Management will start rolling out to customers' tenants in the coming days and weeks. Insider Risk Management is one of several products in Microsoft 365 E5, including Communication Compliance, Information Barriers, and Privileged Access Management, that help organizations mitigate insider risks and policy violations. You can sign up for a trial of Microsoft 365 E5 or navigate to the Microsoft 365 compliance center to get started. Learn more about what's new with Insider Risk Management and how to get started and configure policies in your tenant in this supporting documentation. We look forward to hearing your feedback. Thank you, Talhah Mir, Principal Program Manager, Microsoft 365 Security and Compliance Engineering

Join the "Working remotely during challenging times" webinar!
In this difficult time, remote work is becoming the new normal for many companies around the world. Part of this new normal is an increased focus on implementing stricter security controls and data loss prevention policies within the solutions that already exist in your environment.