Security and AI Essentials
Protect your organization with AI-powered, end-to-end security.
Defend Against Threats
Get ahead of threat actors with integrated solutions.
Secure All Your Clouds
Protection from code to runtime.
Secure All Access
Secure access for any identity, anywhere, to any resource.
Protect Your Data
Comprehensive data security across your entire estate.
Recent Blogs
Every security operations center is being told to adopt AI. Vendors promise autonomous threat detection, instant incident response, and the end of alert fatigue. The reality is messier. Most SOC team...
Mar 27, 2026 · 83 Views · 0 likes · 0 Comments
Publisher screening is a software supply-chain reality: if a publisher account is compromised, a single update can reach thousands of machines—and recovery is costly. Microsoft Trust & Security Servi...
Mar 26, 2026 · 194 Views · 6 likes · 0 Comments
AI introduces new risks—like prompt injection, data leakage, and model misuse—which means security teams need visibility and guardrails that extend beyond traditional cloud controls. In our next Azur...
Mar 26, 2026 · 66 Views · 0 likes · 0 Comments
Organizations are expanding Zero Trust across more users, applications, and now a growing population of AI agent identities, making it even more challenging to maintain visibility and control at scal...
Mar 26, 2026 · 1.5K Views · 0 likes · 0 Comments
Recent Discussions
Purview EXPORTS unreliable and missing "Top-of-information-store" folder
Has anyone noticed an issue where the exported PST files are either empty or missing folders? I don't normally check every PST file that I export, but after hearing from customers that there are either no emails or missing folders, I started to check after each export. I am noticing that the Search and Export process seems to be fine and the downloaded PST file shows the correct size, but when I open the PST files, I'm seeing that they contain no emails OR they are missing folders - including the "Top-of-information-store" folder. When I look at the Properties > Folder Size settings, I can see that the PST file thinks that all the folders are there. This is incredibly tough to work with since I am now checking each PST file and then having to rerun the search/export/download all over again. It's been like this for about 3 weeks.
90 Views · 0 likes · 3 Comments
Device Inventory and discovery - private vs corporate network
Trying to sanity‑check something in Defender, and hoping this is the right place given how many Defender products exist now. Goal: get an accurate device inventory of everything connected to the network. I’ve gone through the configuration so it should only be showing devices on our corporate network. We’re a mixed environment with on‑prem users, remote/VPN users, and external endpoints. What I’m unsure about: devices showing 10.x.x.x make sense — that’s our internal corporate network. But I’m also seeing devices with 192.168.x.x addresses. In a Defender device inventory, what would typically cause 192.168.x.x devices to appear? Are these likely remote/VPN clients, home routers, or something misconfigured? Posting a screen snip of some findings.
39 Views · 0 likes · 2 Comments
AIP scanner not discovering sensitivity content
I am deploying the Purview Information Protection (AIP) scanner to scan some on‑premises Windows file shares and network file shares that are in scope for compliance and data protection. However, the scanner is not discovering sensitive content within files stored on the shares for a custom configured SIT. The custom SIT is tested and works properly, but the scanner reports no matches / no sensitive content found, so files that should be eligible for a sensitivity label are not being discovered. This issue is observed across one or more mapped repository paths and may be inconsistent by folder, file type, or file size. The scanner appears “healthy”: the service is running, repositories are configured, and schedules are enabled.
7 Views · 0 likes · 0 Comments
Authentication Context (Entra ID) Use case
Microsoft Entra ID has evolved rapidly over the last few years, with Microsoft continuously introducing new identity, access, and security capabilities as part of the broader Zero Trust strategy. While many organizations hold the necessary Entra ID and Microsoft 365 licenses (often through E3 or E5 bundles), a number of these advanced features remain under‑utilised or entirely unused. This is frequently due to limited awareness, overlapping capabilities, or uncertainty about where and how these features provide real architectural value.

One such under‑used capability is Authentication Context. Although this feature has been available for quite some time, it is often misunderstood or overlooked because it does not behave like traditional Conditional Access controls. Consider Authentication Context as a portable “assurance tag” that connects a resource (or a particular access route to that resource) to one or several Conditional Access (CA) policies, allowing security measures to be enforced with resource-specific accuracy instead of broad, application-wide controls. Put simply, it permits step-up authentication only when users access sensitive information or perform critical actions, while maintaining a smooth experience for the “regular path.” When used intentionally, it enables resource‑level and scenario‑driven access control, allowing organizations to apply stronger authentication only where it is actually needed without increasing friction across the entire user experience.

Not expensive
Most importantly, the minimum licensing requirement for Authentication Context is Microsoft Entra ID Premium P1, which most customers already have, so you do not need to push for a higher license to utilize this feature.
Do note, however, that Entra ID Premium P2 is needed if your Conditional Access policy uses advanced signals, such as:
- User or sign‑in risk (Identity Protection)
- Privileged Identity Management (PIM) protected roles
- Risk‑based Conditional Access policies

The Workflow
Architecturally, Authentication Context works when a claims request is made as part of token issuance, commonly expressed via the acrs claim. When the request includes a specific context (for example c1), Entra evaluates CA policies that target that context and enforces the required controls (MFA, device compliance, trusted location, etc.). The important constraint: the context must be requested/triggered by a supported workload (e.g., SharePoint) or by an application designed to request the claim; it is not an automatic “detect any action inside any app” feature.

Let's look at a few high‑level architecture references:
1. Define “assurance tiers” as contexts. Create a small set of contexts (e.g., c1: Confidential Access, c2: Privileged Operations) and publish them for use by supported apps/services.
2. Bind contexts to resources. Assign the context to the resource boundary you want to protect—most commonly SharePoint sites (directly or via sensitivity labels), so only those sites trigger the context (e.g., specific SharePoint sites for financials, agreements, etc.).
3. Attach Conditional Access policies to the context. Create CA policies that target the context and define enforcement requirements (additional MFA strength, mandating device compliance, or location constraints through named locations, etc.). The context is the “switch” that activates those policies at the right moment.
4. Validate runtime behavior and app compatibility. Because authentication context can impact some client apps and flows, validate supported clients and known limitations (especially for SharePoint/OneDrive/Teams integrations).
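The claims request at the heart of this workflow can be sketched in a few lines. The following is a minimal, stdlib-only illustration (not a full OIDC flow): the claims JSON follows the Microsoft identity platform claims-request format for the acrs claim, while the client ID and the context ID c1 are placeholder assumptions for a hypothetical app.

```python
import json
from urllib.parse import urlencode

def build_acrs_claims(context_id: str) -> str:
    """Build the OAuth2 'claims' parameter asking Entra ID to enforce
    the given authentication context on the issued access token."""
    return json.dumps({
        "access_token": {
            "acrs": {"essential": True, "value": context_id}
        }
    })

# Sketch of the authorize-request query string an app might send when the
# user reaches a sensitive path; client_id and context "c1" are placeholders.
params = urlencode({
    "client_id": "00000000-0000-0000-0000-000000000000",
    "response_type": "code",
    "scope": "openid profile",
    "claims": build_acrs_claims("c1"),
})
print(params)
```

If the resulting token does not yet carry the requested acrs value, Entra evaluates the CA policies bound to that context and forces the step-up before re-issuing the token.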
Some Practical Business Scenarios

Scenario A — Confidential SharePoint Sites (M&A / Legal / HR)
Problem: You want stronger controls for a subset of SharePoint sites without forcing those controls for all SharePoint access.
Architect pattern: Tag the confidential site(s) with an Authentication Context and apply a CA policy requiring stronger auth (e.g., compliant device + MFA) for that context.
Pre-reqs: SharePoint Online support for authentication context; appropriate licensing and admin permissions; CA policies targeted to the context.

Scenario B — “Step-up” Inside a Custom Line-of-Business App
Problem: Users can access the app normally, but certain operations (approval, export, privileged view) need elevated assurance.
Architect pattern: Build the app on OpenID Connect/OAuth2 and explicitly request the authentication context (via acrs) when the user reaches the sensitive path; CA then enforces step-up.
Pre-reqs: App integrated with the Microsoft identity platform using OIDC/OAuth2; the app can trigger claims requests and handle claims challenges where applicable; CA policies defined for the context.

Scenario C — Granular “Resource-based” Zero Trust Without Blanket MFA
Problem: Security wants strong controls on crown jewels, but the business wants minimal prompts for routine work.
Architect pattern: Use authentication context to enforce higher assurance only for protected resources (e.g., sensitive SharePoint sites). This provides least privilege at the resource boundary while reducing global friction.
Pre-reqs: Clearly defined resource classification; authentication context configured and published; CA policies and monitoring.

In a nutshell, Authentication Context allows organizations to move beyond broad, one‑size‑fits‑all Conditional Access policies and adopt a more precise, resource‑driven security model.
By using it to link sensitive resources or protected access paths to stronger authentication requirements, organizations can improve security outcomes while minimizing unnecessary user friction. When applied deliberately and aligned to business‑critical assets, Authentication Context helps close the gap between licensing capability and real‑world value—turning underused Entra ID features into practical, scalable Zero Trust controls. If you find this useful, please do not forget to like and add your thoughts 🙂
Issues blocking DeepSeek
Hi all, I am investigating DeepSeek usage in our Microsoft security environment and have found inconsistent behaviour between Defender for Cloud Apps, Defender for Endpoint, and IOC controls. I am hoping to understand if others have seen the same.

Environment: full Microsoft security and management suite.

What we are seeing:
Defender for Cloud Apps: DeepSeek is classified as an Unsanctioned app; Cloud Discovery shows ongoing traffic and active usage; multiple successful sessions and data activity visible.
Defender for Endpoint Indicators: DeepSeek domains and URIs have been added as Indicators with Block action; Indicators show as successfully applied.
Advanced Hunting and Device Timeline: multiple executable processes are initiating connections to DeepSeek domains. Examples include Edge, Chrome, and other executables making outbound HTTPS connections. Connection status is a mix of Successful and Unsuccessful. No block events recorded.
Settings: Network Protection enabled in block mode; Web Content Filtering enabled; SmartScreen enabled; File Hash Computation enabled; Network Protection Reputation mode set to 1.

Has anyone else had similar issues when trying to block DeepSeek or other apps via the Microsoft security suite? I am currently working with Microsoft support on this but wanted to ask here as well.
Guidance: Sensitivity Labels during Mergers & Acquisitions (separate tenants, non-M365, etc.)
We’re building an internal playbook for how to handle Microsoft Purview sensitivity labels during mergers and acquisitions, and I’d really appreciate any lessons learned or best practices. Specifically, I’m interested in how others have handled:

Acquired organizations on a separate Microsoft 365/O365 tenant for an extended period (pre- and post-close): How did you handle “Internal Only” content when the two tenants couldn’t fully trust each other yet? Any tips to reduce friction for collaboration between tenants during the transition?

Existing label structures: We use labels like “All Internal Only” and labels with user-defined permissions — has anyone found good patterns for mapping or reconciling these with another company’s labels? What if the acquired company is already using sensitivity labels with a different taxonomy? How did you rationalize or migrate them?

Acquisitions where the target does not use Microsoft 365 (for example, Google Workspace, on-prem, or other platforms): Any strategies for protecting imported content with labels during or after migration? Gotchas around legacy permissions versus label-based protections?

General pitfalls or watch-outs between deal close and full migration: Anything you wish you had known before your first M&A with Purview labels in play? Policies or configurations you’d recommend setting (or avoiding) during the interim period?

Any examples, war stories, or template approaches you’re willing to share would be incredibly helpful as we shape our playbook. Thanks in advance for any insights!
Cloud Kerberos Trust with 1 AD and 6 M365 Tenants?
Hi, we would like to enable Cloud Kerberos Trust on hybrid joined devices (via Entra Connect sync). In our local AD we have 6 OUs, and users and devices from each OU have a separate SCP pointing to a different M365 tenant. I found this article to configure Cloud Kerberos Trust.

Set-AzureADKerberosServer
The Set-AzureADKerberosServer PowerShell cmdlet is used to configure a Microsoft Entra (formerly Azure AD) Kerberos server object. This enables seamless Single Sign-On (SSO) for on-premises resources using modern authentication methods like FIDO2 security keys or Windows Hello for Business.

Steps to Configure the Kerberos Server
1. Prerequisites. Ensure your environment meets the following: devices must run Windows 10 version 2004 or later; Domain Controllers must run Windows Server 2016 or later. Install the AzureADHybridAuthenticationManagement module:

[Net.ServicePointManager]::SecurityProtocol = [Net.ServicePointManager]::SecurityProtocol -bor [Net.SecurityProtocolType]::Tls12
Install-Module -Name AzureADHybridAuthenticationManagement -AllowClobber

2. Create the Kerberos Server Object. Run the following PowerShell commands to create and publish the Kerberos server object, prompting for all credentials:

$domain = $env:USERDNSDOMAIN
$cloudCred = Get-Credential -Message 'Enter Azure AD Hybrid Identity Administrator credentials'
$domainCred = Get-Credential -Message 'Enter Domain Admin credentials'
Set-AzureADKerberosServer -Domain $domain -CloudCredential $cloudCred -DomainCredential $domainCred

As I understand the process, an object is created in local AD when running Set-AzureADKerberosServer. What happens if I run the command multiple times, once for each OU/tenant? Does this override the object, or does it create new objects?
Solved
Entra ID Private Access - data flow
Hello, I am successfully testing Entra Private Access. From outside, I can easily access my shared permissions. However, I have one more question. What happens if my device is on the internal network? If I access the shares directly, I get about 1 GB/s. What happens if the "Global Secure Access" client is active? Does all the data go through the Entra portal, or just the authentication? If all the data goes through the Entra portal, there could be challenges with the internet connection (all data in and out). Thank you for your support. Stefan
36 Views · 0 likes · 2 Comments
Microsoft Purview - Endpoint Data Discovery
Hi all, I wanted to understand Microsoft Purview’s capabilities around data discovery on Windows endpoints, specifically in a legacy data scenario.

Use case: We have data residing on Windows machines/endpoints that is legacy in nature, not being actively moved, migrated, or modified, and sitting at rest on local disks (user endpoints).

Questions:
Can Microsoft Purview perform data discovery or classification on such endpoint‑resident data?
Does Purview support scanning or discovering data on Windows endpoints at rest, without requiring the data to be uploaded, migrated, or modified?
If not directly, are there any supported approaches or workarounds (e.g., via integrations with Microsoft Defender for Endpoint, Information Protection scanners, or other Purview components) to achieve this?
What are the current limitations of Purview when it comes to endpoint-based data discovery?
96 Views · 0 likes · 2 Comments
PIM
Hello, everyone. I need some help. We already use PIM for Just-in-Time activation of administrative roles in Entra ID, but we would like something more granular. For example, we want certain administrative actions in Microsoft 365, such as accessing sensitive data or performing critical tasks, to only be possible upon specific request and approval, even if the user has already activated the role in PIM. Is this only possible with PIM, or is there another feature in Microsoft 365 for this type of control?
108 Views · 1 like · 3 Comments
Using MDE (Passive Mode) with Palo Alto Cortex XDR to enable Defender for IoT (Enterprise IoT)
Hi everyone! I’m working with a customer that uses Palo Alto Cortex XDR as their primary EDR. We want to leverage Microsoft Defender for IoT specifically for Enterprise IoT (not OT/ICS). I have a few questions:

MDE in Passive Mode as a sensor: Can Microsoft Defender for Endpoint (MDE) running in Passive mode act as a sensor to enable Enterprise IoT discovery/monitoring for Defender for IoT? Are there any feature limitations when MDE is not the primary EDR?

Appliance sensor in enterprise IT: If we cannot use the MDE agent, is it supported to deploy the Defender for IoT appliance sensor in an enterprise IT network (e.g., offices/campuses) to cover Enterprise IoT use cases?

Coexistence / complementary sensors: Is it possible (and recommended) to run the appliance sensor alongside MDE (sensor) to complement coverage/features? Any guidance on architecture, data overlap/deduplication, or licensing implications?
I have absolutely no idea what Microsoft Defender 365 wants me to do here
The process starts with an email. There's more below on the email - an offer for credit monitoring, an option to add another device, an option to download the mobile app - but I don't want to do any of them, so I click on the "Open Defender" button, which results in this: OK, so my laptop is the bad boy here, with a Status of "Action recommended", no actual recommendations, and the only live link here is "Add device", something I don't need to do. The only potential "problem" I can even guess at here is that Microsoft is telling me that the laptop needs updating. Since I seldom use the laptop, only when traveling, I'd guess the next time I fire it up the update will occur, but of course I really don't know that's the recommended action it's warning me about, do I? You'd expect that if something is warning you "ACTION NEEDED!!!" it'd be a little more explicit, wouldn't you?
Tenant Forwarding - Trusted ARC Sealer
As part of a tenant-to-tenant migration we often need to forward mail from one tenant to another. This can cause some issues with email authentication verdicts on the destination tenant. Is it possible, or best practice, to configure another tenant as a Trusted ARC sealer to help with forwarded email deliverability?
What caught you off guard when onboarding Sentinel to the Defender portal?
Following on from a previous discussion around what actually changes versus what doesn't in the Sentinel to Defender portal migration, I wanted to open a more specific conversation around the onboarding moment itself. One thing I have been writing about is how much happens automatically the moment you connect your workspace. The Defender XDR connector enables on its own, a bi-directional sync starts immediately, and if your Microsoft incident creation rules are still active across Defender for Endpoint, Identity, Office 365, Cloud Apps, and Entra ID Protection, you are going to see duplicate incidents before you have had a chance to do anything about it.

That is one of the reasons I keep coming back to the inventory phase as the most underestimated part of this migration. Most of the painful post-migration experiences I hear about trace back to things that could have been caught in a pre-migration audit: analytics rules with incident title dependencies, automation conditions that assumed stable incident naming, RBAC gaps that only become visible when someone tries to access the data lake for the first time.

A few things I would genuinely love to hear from practitioners who have been through this:
- When you onboarded, what was the first thing that behaved unexpectedly that you had not anticipated from the documentation?
- For those who have reviewed automation rules post-onboarding: did you find conditions relying on incident title matching that broke, and how did you remediate them?
- For anyone managing access across multiple tenants: how are you currently handling the GDAP gap while Microsoft completes that capability?

I am writing up a detailed pre-migration inventory framework covering all four areas, and the community experience here is genuinely useful for making sure the practitioner angle covers the right ground. Happy to discuss anything above in more detail.
Your Sentinel AMA Logs & Queries Are Public by Default — AMPLS Architectures to Fix That
When you deploy Microsoft Sentinel, security log ingestion travels over public Azure Data Collection Endpoints by default. The connection is encrypted, and the data arrives correctly — but the endpoint is publicly reachable, and so is the workspace itself, queryable from any browser on any network. For many organisations, that trade-off is fine. For others — regulated industries, healthcare, financial services, critical infrastructure — it is the exact problem they need to solve. Azure Monitor Private Link Scope (AMPLS) is how you solve it.

What AMPLS Actually Does
AMPLS is a single Azure resource that wraps your monitoring pipeline and controls two settings:
- Where logs are allowed to go (ingestion mode: Open or PrivateOnly)
- Where analysts are allowed to query from (query mode: Open or PrivateOnly)
Change those two settings and you fundamentally change the security posture — not as a policy recommendation, but as a hard platform enforcement. Set ingestion to PrivateOnly and the public endpoint stops working. It does not fall back gracefully. It returns an error. That is the point. It is not a firewall rule someone can bypass or a policy someone can override. Control is baked in at the infrastructure level.

Three Patterns — One Spectrum
There is no universally correct answer. The right architecture depends on your organisation's risk appetite, existing network infrastructure, and how much operational complexity your team can realistically manage. These three patterns cover the full range:

Architecture 1 — Open / Public (Basic)
No AMPLS. Logs travel to public Data Collection Endpoints over the internet. The workspace is open to queries from anywhere. This is the default — operational in minutes with zero network setup. Cloud service connectors (Microsoft 365, Defender, third-party) work immediately because they are server-side/API/Graph pulls and are unaffected by AMPLS.
Azure Monitor Agents and Azure Arc agents handle ingestion from cloud or on-prem machines via the public network.
Simplicity: 9/10 | Security: 6/10
Good for: Dev environments, teams getting started, low-sensitivity workloads

Architecture 2 — Hybrid: Private Ingestion, Open Queries (Recommended for most)
AMPLS is in place. Ingestion is locked to PrivateOnly — logs from virtual machines travel through a Private Endpoint inside your own network, never touching a public route. On-premises or hybrid machines connect through Azure Arc over VPN or a dedicated circuit and feed into the same private pipeline. Query access stays open, so analysts can work from anywhere without needing a VPN/jumpbox to reach the Sentinel portal — the investigation workflow stays flexible, but the log ingestion path is fully ring-fenced. You can also split ingestion mode per DCE if you need some sources public and some private. This is the architecture most organisations land on as their steady state.
Simplicity: 6/10 | Security: 8/10
Good for: Organisations with mixed cloud and on-premises estates that need private ingestion without restricting analyst access

Architecture 3 — Fully Private (Maximum Control)
Infrastructure is essentially identical to Architecture 2 — AMPLS, Private Endpoints, Private DNS zones, VPN or dedicated circuit, Azure Arc for on-premises machines. The single difference: query mode is also set to PrivateOnly. Analysts can only reach Sentinel from inside the private network. A VPN or jumpbox is required to access the portal. Both the pipe that carries logs in and the channel analysts use to read them are fully contained within the defined boundary. This is the right choice when your organisation needs to demonstrate — not just claim — that security data never moves outside a defined network perimeter.
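The two control settings described above can be driven from the Azure CLI. Treat this as an illustrative configuration sketch rather than a tested runbook: resource, workspace, and link names are placeholders, and the access-mode flags should be verified against the current az monitor private-link-scope documentation for your CLI version.

```shell
# Create the AMPLS resource with the Architecture 2 posture:
# private ingestion, open queries (all names are placeholders).
az monitor private-link-scope create \
  --name contoso-ampls \
  --resource-group rg-monitoring \
  --ingestion-access-mode PrivateOnly \
  --query-access-mode Open

# Link the Log Analytics workspace that backs Sentinel into the scope.
az monitor private-link-scope scoped-resource create \
  --resource-group rg-monitoring \
  --scope-name contoso-ampls \
  --name sentinel-workspace-link \
  --linked-resource "/subscriptions/<sub-id>/resourceGroups/rg-monitoring/providers/Microsoft.OperationalInsights/workspaces/<workspace-name>"

# Moving to Architecture 3 later is a single change:
# lock query access down as well.
az monitor private-link-scope update \
  --name contoso-ampls \
  --resource-group rg-monitoring \
  --query-access-mode PrivateOnly
```

A Private Endpoint into your VNet and the matching Private DNS zones are still required for the private path to resolve; the scope object only defines what the platform will accept.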
Simplicity: 2/10 | Security: 10/10
Good for: Organisations with strict data boundary requirements (regulated industries, audit, compliance mandates)

Quick Reference — Which Pattern Fits?
Getting started / low-sensitivity workloads → Arch 1: no network setup, public endpoints accepted
Private log ingestion, analysts work anywhere → Arch 2: AMPLS PrivateOnly ingestion, query mode open
Both ingestion and queries must be fully private → Arch 3: same as Arch 2, plus query mode set to PrivateOnly

One thing all three share: Microsoft 365, Entra ID, and Defender connectors work in every pattern — they are server-side pulls by Sentinel and are not affected by your network posture. Please feel free to reach out if you have any questions regarding the information provided.
Rescheduled Webinar: Copilot Skilling Series
Hello everyone! The Copilot Skilling Series webinar on Security Copilot Agents, DSPM AI Observability, and IRM for Agents, originally scheduled for April 16th, has been rescheduled for April 28th at 8:00 AM Pacific Time. We are sorry for the inconvenience and hope to see you there on the 28th. Please register for the updated time at http://aka.ms/securitycommunity All the best! The Security Community Team
19 Views · 0 likes · 0 Comments
Sentinel datalake: private link/private endpoint
Has anyone already configured Sentinel Datalake with a private link/private endpoint setup? I can't find any instructions for this specific case. Can I use the wizard in the Defender XDR portal, or does it require specific configuration steps? Or does it require configuring a private link/private endpoint setup on the Datalake component after activation via the wizard?
How to remove/modify a sensitivity label for many SharePoint documents?
We would like to implement Purview sensitivity labels for our SharePoint sites, using auto-labeling. Before we start the implementation, we would like to test a rollback scenario: how can we remove or modify a sensitivity label across many SharePoint documents?
170 Views · 0 likes · 3 Comments
Unified Catalog activity / usage reporting
Hi, we are building out our data products and glossary terms in Purview's Data Governance Unified Catalog. I'd like to get some metrics on data consumer activity/usage: metrics like how many users are running searches for terms or data products, how many people are navigating into data assets to discover data, etc. I haven't found any reports like this. Are there any audit logs specific to data governance that I can query? I'm trying to gauge user adoption. Thanks
29 Views · 0 likes · 1 Comment
Sensitivity Label Permissions
Hello, I have set up sensitivity labels within my company. I have Public, Standard, Confidential, and Highly Confidential. When testing with my external email accounts (e.g., Gmail and Yahoo), I am prompted to enter a one-time passcode when opening an email from my test account. But then I tested with an external user who has an Outlook email, and he was not prompted to enter the one-time passcode. "Authenticated Users" is included in the Standard, Confidential, and Highly Confidential permission controls when setting up the labels. Is it normal behavior for the one-time passcode to be prompted only for non-Microsoft emails? Can the one-time passcode be required for Microsoft (Outlook) domains as well? Also, how can I have multi-factor authentication apply to my labels for external clients/users? Any help would be much appreciated. Thank you!
Events
Take your AI security to the next level. Explore advanced techniques and best practices for safeguarding AI-powered applications against evolving threats in the cloud.
What you’ll learn:
Enable...
Wednesday, Apr 08, 2026, 12:00 PM PDT · Online
0 likes · 1 Attendee · 0 Comments