Security and AI Essentials
Protect your organization with AI-powered, end-to-end security.
Defend Against Threats
Get ahead of threat actors with integrated solutions.
Secure All Your Clouds
Protection from code to runtime.
Secure All Access
Secure access for any identity, anywhere, to any resource.
Protect Your Data
Comprehensive data security across your entire estate.
Recent Blogs
Unmanaged tenants create security blind spots. Learn how Microsoft Entra Tenant Governance helps you gain visibility and control.
Mar 25, 2026 · 506 Views
2 likes
0 Comments
7 MIN READ
Recover confidently from misconfigurations, security compromises, and operational errors with Microsoft Entra Backup and Recovery.
Mar 24, 2026 · 1.1K Views
1 like
3 Comments
External MFA in Microsoft Entra ID is GA, enabling integration with third-party MFA while maintaining Conditional Access and risk-based policies.
Mar 24, 2026 · 10K Views
2 likes
0 Comments
This article focuses on what exactly is changing for RC4 starting in January, why it matters, and how to prepare.
Mar 23, 2026 · 847 Views
1 like
0 Comments
Recent Discussions
Integrate MS Purview with ServiceNow for Data Governance
Hi team, we are planning to leverage Microsoft Purview for core Data Governance (DG) capabilities and build the remaining DG functions on ServiceNow. We have two key questions as we design the target-state architecture:
1. What is the recommended split of DG capabilities between Microsoft Purview and ServiceNow?
2. How should data be shared and synchronized between Purview and ServiceNow to keep governance processes aligned and up to date?
Thanks!
Solved · 143 Views · 0 likes · 3 Comments
The Sentinel migration mental model question: what's actually retiring vs what isn't?
Something I keep seeing come up in conversations with other Sentinel operators lately, and I think it's worth surfacing here as a proper discussion. There's a consistent gap in how the migration to the Defender portal is being understood, and I think it's causing some teams to either over-scope their effort or under-prepare.

The gap is this: the Microsoft comms have consistently told us *what* is happening (the Azure portal experience retires March 31, 2027), but the question that actually drives migration planning, what is architecturally changing versus what is just moving to a different screen, doesn't have a clean answer anywhere in the community right now. The framing I've been working with, which I'd genuinely like other practitioners to poke holes in:

What's retiring: the Azure portal UI experience for Sentinel operations. Incident management, analytics rule configuration, hunting, automation management: all of that moves to the Defender portal.

What isn't changing: the Log Analytics workspace, all ingested data, your KQL rules, connectors, retention config, billing. None of that moves. The Defender XDR data lake is a separate Microsoft-managed layer, not a replacement for your workspace.

Where it gets genuinely complex: MSSP/multi-tenant setups, teams with meaningful SOAR investments, and anyone who's built tooling against the SecurityInsights API for incident management (which now needs to shift to Microsoft Graph for unified incidents).

The deadline extension from July 2026 to March 2027 tells its own story: Microsoft acknowledged that scale operators needed more time and capabilities. If you're in that camp, that extra runway is for proper planning, not deferral.

A few questions I'd genuinely love to hear about from people who've started the migration or are actively scoping it:

For those who've done the onboarding already: what was the thing that caught you most off guard that isn't well-documented?
For anyone running Sentinel across multiple tenants: how are you approaching the GDAP gap while Microsoft completes that capability? Are you using B2B authentication as the interim path, or Azure Lighthouse for cross-workspace querying?

I've been writing up a more detailed breakdown of this, covering the RBAC transition, automation review, and the MSSP-specific path, and the community discussion here is genuinely useful for making sure the practitioner perspective covers the right edge cases. Happy to share more context on anything above if useful.
Solved
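On the SecurityInsights-to-Graph point in the Sentinel migration discussion above, here is a minimal sketch of what listing unified incidents through Microsoft Graph can look like. The `/security/incidents` endpoint and the `status` property come from the Microsoft Graph security API; token acquisition (e.g. via MSAL) is assumed to happen elsewhere, and `session` is assumed to be any `requests.Session` already carrying a bearer token.

```python
GRAPH_BASE = "https://graph.microsoft.com/v1.0"

def incidents_url(top=50, status=None):
    """Build a Microsoft Graph unified-incidents query URL."""
    url = f"{GRAPH_BASE}/security/incidents?$top={top}"
    if status:
        # e.g. status="active" -> $filter=status eq 'active'
        url += f"&$filter=status eq '{status}'"
    return url

def fetch_incidents(session, top=50, status=None):
    """session: a requests.Session with an Authorization: Bearer <token> header."""
    resp = session.get(incidents_url(top, status))
    resp.raise_for_status()
    # Graph collection responses wrap results in a "value" array
    return resp.json().get("value", [])
```

Tooling built against the Azure SecurityInsights incident endpoints would swap its base URL and response handling for the shape above; field names on the returned incidents differ between the two APIs, so the mapping layer is typically where most of the migration effort lands.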
Cloud Kerberos Trust with 1 AD and 6 M365 Tenants?
Hi, we would like to enable Cloud Kerberos Trust on hybrid-joined devices (via Entra Connect sync). In our local AD we have 6 OUs, and the users and devices from each OU have a separate SCP pointing to a different M365 tenant. I found this article on configuring Cloud Kerberos Trust with Set-AzureADKerberosServer.

The Set-AzureADKerberosServer PowerShell cmdlet is used to configure a Microsoft Entra (formerly Azure AD) Kerberos server object. This enables seamless single sign-on (SSO) to on-premises resources using modern authentication methods like FIDO2 security keys or Windows Hello for Business.

Steps to configure the Kerberos server:

1. Prerequisites. Ensure your environment meets the following: devices must run Windows 10 version 2004 or later, and domain controllers must run Windows Server 2016 or later. Install the AzureADHybridAuthenticationManagement module:

[Net.ServicePointManager]::SecurityProtocol = [Net.ServicePointManager]::SecurityProtocol -bor [Net.SecurityProtocolType]::Tls12
Install-Module -Name AzureADHybridAuthenticationManagement -AllowClobber

2. Create the Kerberos server object. Run the following PowerShell commands to create and publish the Kerberos server object, prompting for all credentials:

$domain = $env:USERDNSDOMAIN
$cloudCred = Get-Credential -Message 'Enter Azure AD Hybrid Identity Administrator credentials'
$domainCred = Get-Credential -Message 'Enter Domain Admin credentials'
Set-AzureADKerberosServer -Domain $domain -CloudCredential $cloudCred -DomainCredential $domainCred

As I understand the process, an object is created in local AD when running Set-AzureADKerberosServer. What happens if I run the command multiple times, once for each OU/tenant? Does this override the object, or does it create new objects?
Solved
Microsoft purview endpoint DLP Printing
Hello All, we can monitor print activities in Microsoft Purview endpoint DLP: if someone prints sensitive data, it takes action on the print job based on the conditions defined in the DLP policy. I want to know how Purview endpoint DLP intercepts printing to prevent data exfiltration. Does it stop the job before it reaches the spooler? Please provide technical insights on this. Thank you.
Solved · 190 Views · 0 likes · 4 Comments
Auto-labelling does not support content marking
We’ve hit a limitation with service-side auto-labeling in Purview: when a sensitivity label is applied by an auto-labeling policy, any configured visual markings (headers, footers, watermarks) are not written into the document. A further complication is a requirement that includes a custom script applying sensitivity labels at the folder level and relying on the service-side engine to cascade those labels down to the folder's contents. This means automation isn't just a 'nice to have' for scale; it is a core dependency of our labeling architecture. The inability to also apply visual markings through this same automated path creates a direct gap in our compliance posture. For environments where visible classification is mandated by regulation, this effectively means we can’t rely on service-side auto-labeling alone, which is a big constraint. I’d really appreciate: any confirmed best practices or workarounds others are using, and input from the product team on whether server-side visual markings tied to auto-labeling are being considered, or what alternatives to consider for meeting this requirement.
Solved
Priority between CIDR and FQDN rules in Microsoft Entra Private Access (GSA)
Hello everyone, I have a question about how rules are prioritized in Microsoft Entra Private Access (Global Secure Access). In my environment, I configured the following: I created an Enterprise Application using a broad CIDR range (10.10.0.0/16) to represent the entire data center. Within the same environment, I created other Enterprise Applications using specific FQDNs (app01.company.local, app02.company.local) with specific ports. All rules are in the same Forwarding Profile. I noticed that in the GSA client rules tab there is a "Priority" field, and apparently the rules are evaluated from top to bottom. My questions: when there is an overlap between a broad CIDR rule and a more specific FQDN-based rule, which one takes precedence? Is there some internal technical criterion (DNS resolution first, longest prefix match), or is the evaluation purely based on the order displayed? Is there a risk that the CIDR rule will capture traffic before the FQDN rule and impact granular access control? I want to make sure my architecture is correct before expanding its use to production. Could someone clarify the actual technical behavior of this prioritization?
Solved · 131 Views · 0 likes · 3 Comments
Clarification on UEBA Behaviors Layer Support for Zscaler and Fortinet Logs
I would like to confirm whether the new UEBA Behaviors Layer in Microsoft Sentinel currently supports generating behavior insights for Zscaler and Fortinet log sources. Based on the documentation, the preview version of the Behaviors Layer only supports specific vendors under CommonSecurityLog (CyberArk Vault and Palo Alto Threats), AWS CloudTrail services, and GCP Audit Logs. Since Zscaler and Fortinet are not listed among the supported vendors, I want to verify: does the UEBA Behaviors Layer generate behavior records for Zscaler and Fortinet logs, or are these vendors currently unsupported for behavior generation? Logs from Zscaler and Fortinet are also ingested into the CommonSecurityLog table.
Solved · 101 Views · 0 likes · 1 Comment
Classification on DataBricks
Hello everyone, I would like to request an updated confirmation regarding the correct functioning of custom classification for Databricks Unity Catalog data sources. Here is my current setup: the data source is active and source scanning is working correctly. I created the custom classification in "Annotation management / Classifications", created and successfully tested the regular expression under "Annotation management / Classification Rules", and generated the custom scan rule set in "Source management / Scan Rule Sets", associated with Databricks and selecting the custom rule. However, when running the scan on Databricks, I do not find any option to select my scan rule set (for another source, like Teradata, this option is visible), and no classification findings are generated based on my custom rule. Other tests do produce findings (system-generated). Does anyone have insights on what I should verify? Or is this custom classification functionality not supported for Databricks?
Solved
URL Hyperlinking phishing training
I'm using the Defender phishing simulations to perform testing. When creating a positive reinforcement email that goes to the recipient, you have the option to use default text or put in your own. When I put in my own text I have line breaks in it, but when the email renders the line breaks are not displayed, so it looks like a bunch of text crammed together. Any idea how to get these line breaks to display?
Solved
Issue with downgrading a label
Hello, we are experiencing an issue with sensitivity labels configured for SharePoint using Confidential – Encrypted. When User A uploads a file with this label applied automatically from the SharePoint library, User B is unable to downgrade the label to a different one and receives an error message. We have confirmed that both User A and User B have the same permissions (co-author access) to the file and location. Could you please advise what might be causing this, or what additional permissions or configuration may be required? Any help would be much appreciated.
Solved · 95 Views · 0 likes · 2 Comments
URL rewriting does not apply during Attack Simulation (Credential Harvesting)
I’m running a credential-harvesting attack simulation in Microsoft Defender for Office 365, but URL rewriting does not work as expected. In the final confirmation screen, the phishing link is shown as rewritten to something like: https://security.microsoft.com/attacksimulator/redirect?... However, during the actual simulation, the link is NOT rewritten. It stays as the original domain (e.g., www.officentry.com), which causes the simulation to fail with an error. I’m not sure whether this behavior is related to Safe Links or something else within Defender. Why is the URL not rewritten at runtime, and how can I ensure that the redirect link is applied correctly in the actual simulation?
Solved · 196 Views · 0 likes · 1 Comment
Very High Increase in CPU Activity after Updating the Microsoft Defender for Identity Sensor
All our servers that are running this sensor (DCs, certificate servers, AD Connect servers) showed a massive increase in average CPU utilization virtually straight after the sensor was automatically updated to version 2.254.19112.470 (late night UK time). Two of our DCs are sitting at 100% CPU today and we can't find anything to resolve it. Has anyone else seen this since running this version, and if so, what actions did you take? How would we roll back to the previous version, when it appears it will just be automatically updated again soon after? This is our monitoring of CPU utilization from one of the majorly affected DCs, but every server with the sensor had the exact same graph showing a major increase in CPU at the same date and time, i.e. just after the sensor was updated.
Solved
Extract telephoneNumber/businessPhones in Graph via PowerShell
Hi all, I am trying to extract the telephoneNumber from the businessPhones attribute in Entra via a PowerShell script. I call Get-MgUser and list the properties including businessPhones. No matter what I try, I either get a System.String[] or a blank. I can extract all the extensionAttribute values using the dot operator, but no luck with telephoneNumber. After much searching and reading of the Learn documentation, I am rather stumped. Any guidance will be appreciated. Bruce
Solved · 152 Views · 0 likes · 2 Comments
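A likely explanation for the businessPhones question above: in the Microsoft Graph user resource, businessPhones is a collection (an array of strings), not a scalar, so stringifying it directly yields System.String[]. In PowerShell, something like ($user.BusinessPhones -join '; ') flattens it. As a minimal sketch of the same flattening against the raw Graph JSON (the sample payload is hypothetical, but the businessPhones shape is the documented one):

```python
def business_phone(user: dict) -> str:
    """Flatten the Graph businessPhones collection into one display string."""
    # businessPhones is an array of strings in the Graph user resource;
    # it may be missing or empty, so default defensively.
    phones = user.get("businessPhones") or []
    return "; ".join(phones)

# Hypothetical Graph user payload, for illustration only:
sample = {"displayName": "Bruce", "businessPhones": ["+44 20 7946 0958"]}
print(business_phone(sample))  # -> +44 20 7946 0958
```

The same join-or-index step is needed in any language consuming the Graph response, because even a single phone number arrives wrapped in an array.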
Force user to reset password in hybrid
Hi, we work in a hybrid environment at the moment, and it has been discovered that if you use classic AD to reset a user's password and leave the "user must change password at next logon" tick-box unselected, the password reset works. But if you select the tick-box, with the intention of making the user change their password, the password does not get reset and the user never gets asked to reset it. Also, if you try to reset the user's password in AAD, you get the following error message: Because we cannot force the user to reset their password via AD or AAD, we have to tell the user to do it themselves via the classic Ctrl-Alt-Del method, or set their personal password for them over the phone. So my question is: why can I not force the user to change their password from either AD or AAD?
Solved
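For reference on the cloud side of the hybrid question above: Microsoft Graph exposes a forceChangePasswordNextSignIn flag on the user's passwordProfile, which can be PATCHed onto a user. This is a hedged sketch, not a fix for the hybrid scenario described; for synced users, writing passwordProfile from the cloud generally depends on the tenant having password writeback enabled, so treat the behavior as something to validate in your own environment. The helper names are mine, the endpoint and property names are from the Graph users API.

```python
GRAPH_BASE = "https://graph.microsoft.com/v1.0"

def force_reset_payload(temp_password: str) -> dict:
    """Body for PATCH /users/{id}: set a temporary password and force
    a change at next sign-in (property names per the Graph users API)."""
    return {
        "passwordProfile": {
            "password": temp_password,
            "forceChangePasswordNextSignIn": True,
        }
    }

def user_url(user_id: str) -> str:
    """URL of the Graph user resource; user_id is an object ID or UPN."""
    return f"{GRAPH_BASE}/users/{user_id}"
```

A caller with an authenticated HTTP session would PATCH user_url(id) with force_reset_payload(...) as the JSON body; the User.ReadWrite.All (or stronger) permission is required for this operation.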
Microsoft purview auto labeling contextual summary
Hello All, I am not able to see the contextual summary in service-side auto-labeling of Microsoft Purview Information Protection. I do have the "data classification content viewer" role on my ID. Please let me know if I am missing anything needed to see the contextual summary.
Solved · 89 Views · 0 likes · 2 Comments
Microsoft Purview Data Map Approach to scan
I plan to scan Purview data assets owner by owner, rather than scanning entire databases in one go, because this approach aligns with data governance and RBAC (role-based access control) principles. By segmenting scans by asset ownership, we ensure that only the designated data asset owners have the ability to edit or update metadata for their respective assets in Purview. This prevents broad, unrestricted access and maintains accountability, as each owner manages the metadata for the tables and datasets they are responsible for. Scanning everything at once would make it harder to enforce these permissions and could lead to unnecessary exposure of metadata management rights. This owner-based scanning strategy keeps governance tight, supports compliance, and ensures that metadata stewardship remains with the right people.

This approach also aligns with Microsoft Purview best practices and the RBAC model:

Microsoft recommends scoping scans to specific collections or assets rather than ingesting everything at once, allowing different teams or owners to manage their own domains securely and efficiently.

Purview supports metadata curation via roles such as Data Owner and Data Curator, ensuring that only users assigned as owners (those with write or owner permissions on specific assets) can edit metadata like descriptions, contacts, or column details.

The system adheres to the principle of least privilege, where users with owner/write permissions can manage metadata for their assets, while broader curation roles apply only where explicitly granted.

Therefore, scanning owner by owner not only enforces governance boundaries but also ensures each data asset owner retains exclusive editing rights over their metadata, supporting accountability, security, and compliance. After scanning by ownership, we can aggregate those assets into a logical data product representing the full database without breaking governance boundaries.
Is this considered best practice for managing metadata in Microsoft Purview, and does it confirm that my approach is correct?
Solved · 183 Views · 0 likes · 2 Comments
DLP Policy not Working with OCR
Hello Community, I activated OCR in Microsoft Purview, and scanning works fine: Purview does find images that contain sensitive data. I have created a DLP policy that does not permit printing, or moving into an RDP session, files that contain "Italy Confidential Data" (passport number, driver's license, etc.). The policy works for xlsx or Word files that contain the data as text, but if a Word file contains an image with this data, the DLP rule is not applied: I am able to print that file or move it into an RDP session, and the same happens with a standalone JPEG. The policy matches correctly; I can see it in Activity Explorer. Is this behavior correct? Regards, Guido
Solved
Microsoft Defender for Endpoint for Vulnerability Management and Reporting
Hi All, we’re currently using Rapid7 for vulnerability management and reporting, but we’re actively evaluating a move to Microsoft Defender for Endpoint. We’d like to better understand how to properly leverage Defender for Endpoint for vulnerability management and reporting. If this means custom reports, such as building dashboards in Power BI, we’re definitely open to that approach. At a high level, we’re looking for guidance on best practices and the right direction to meet the following requirements:
Ongoing vulnerability tracking and remediation
Clearer reporting on vulnerability trends and areas needing improvement
Breakdown of vulnerabilities by severity (Critical, High, Medium, Low), grouped by aging buckets (e.g., 30, 60, 90 days)
Defender Secure Score reporting over time (30-, 60-, and 90-day views)
Visibility into non-compliant devices in Intune, including devices in a grace period and PCs that have checked in within the last 14 days
Any recommendations, examples, or pointers to documentation or reporting approaches would be greatly appreciated. Thanks in advance, Dilan
Solved · 216 Views · 1 like · 3 Comments
Clarification related to JIT for EDLP
Can someone help clarify how JIT (just-in-time protection) actually works and in which scenarios we should enable it? The Microsoft documentation differs considerably from what I’m observing during hands-on testing. I enabled JIT for a specific user (only one user). For that user, no JIT toast notifications appear for stale files when performing endpoint DLP activities such as copying to a network share. However, for all other users, even though JIT is not enabled for them, their events are still being captured in Activity Explorer. See the screenshot below.
Solved
How to remove SSL Certificate on CLI
How can an SSL certificate be removed on the backend through the CLI? When I delete the cert in the GUI, it doesn't seem to actually get removed from the backend: the cert no longer shows in the GUI, but it is still recognized in the browser, so it appears Apache is still serving it. There's a cert folder at /var/cyberx/keys/certificates and a properties folder at /var/cyberx/properties. Do I just remove the folder and restart Apache? Are there any .properties files that need to be modified?
Solved
Events
Take your AI security to the next level. Explore advanced techniques and best practices for safeguarding AI-powered applications against evolving threats in the cloud.
What you’ll learn:
Enable...
Wednesday, Apr 08, 2026, 12:00 PM PDT · Online
0 likes
1 Attendee
0 Comments