Forum Discussion

AnthonyPorter
Brass Contributor
Mar 24, 2026
Solved

What caught you off guard when onboarding Sentinel to the Defender portal?

Following on from a previous discussion about what actually changes versus what doesn't in the Sentinel-to-Defender portal migration, I wanted to open a more specific conversation about the onboarding moment itself.

One thing I have been writing about is how much happens automatically the moment you connect your workspace. The Defender XDR connector is enabled on its own, bi-directional incident sync starts immediately, and if your Microsoft incident creation rules are still active across Defender for Endpoint, Defender for Identity, Defender for Office 365, Defender for Cloud Apps, and Entra ID Protection, you will see duplicate incidents before you have had a chance to do anything about it.
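
If you want to check before you connect, here is a rough sketch of how you might enumerate those incident creation rules so they can be paused first. It assumes the azure-identity and azure-mgmt-securityinsight Python packages, and the subscription, resource group, and workspace names are placeholders, so treat it as a starting point rather than a drop-in script:

    # Sketch: list the Microsoft incident creation rules in a workspace so
    # they can be paused before the Defender XDR connector starts syncing.
    from azure.identity import DefaultAzureCredential
    from azure.mgmt.securityinsight import SecurityInsights

    client = SecurityInsights(DefaultAzureCredential(), "<subscription-id>")

    for rule in client.alert_rules.list("<resource-group>", "<workspace-name>"):
        # These rules mirror Defender product alerts into Sentinel incidents
        # and are the source of the duplicates once bi-directional sync is on.
        if rule.kind == "MicrosoftSecurityIncidentCreation":
            print(f"{rule.name}: product={getattr(rule, 'product_filter', '?')}, "
                  f"enabled={getattr(rule, 'enabled', '?')}")

From there, disabling each hit before you connect is a small follow-up (the same SDK exposes create_or_update on alert rules), which is far less painful than cleaning up duplicates afterwards.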

That is one of the reasons I keep coming back to the inventory phase as the most underestimated part of this migration. Most of the painful post-migration experiences I hear about trace back to things that could have been caught in a pre-migration audit: analytics rules with incident title dependencies, automation conditions that assumed stable incident naming, RBAC gaps that only become visible when someone tries to access the data lake for the first time.

A few things I would genuinely love to hear from practitioners who have been through this:

- When you onboarded, what was the first thing that behaved unexpectedly that you had not anticipated from the documentation?

- For those who have reviewed automation rules post-onboarding: did you find conditions relying on incident title matching that broke, and how did you remediate them?

- For anyone managing access across multiple tenants: how are you currently handling the GDAP gap while Microsoft completes that capability?

I am writing up a detailed pre-migration inventory framework covering all four areas, and the community experience here is genuinely useful for making sure the practitioner angle covers the right ground.

Happy to discuss anything above in more detail.

3 Replies

  • AnthonyPorter
    Brass Contributor

    I went ahead and wrote up the full breakdown based on the framing above. If it's useful for anyone working through this: https://securingm365.com/defenderxdr/sentinel/sentineldefender-part2/

    Covers the four audit aspects I believe need to be considered before commencing the migration in any production environment.

    Would genuinely welcome any pushback or edge cases people have hit that aren't in there.

  • The incident creation rules being active during onboarding is the one that would catch most teams off guard. Those rules tend to be set-and-forget from early Sentinel deployments - nobody remembers they exist until the XDR connector spins up and suddenly the incident queue doubles overnight. The onboarding wizard offers to disable them, but by that point you're already cleaning up.

    What I'd add to the inventory angle: check your automation rules for title-based conditions. Things like "if incident title contains 'Suspicious sign-in', then enrich and notify". After onboarding, XDR groups and renames incidents differently - those conditions don't throw errors, they just quietly stop matching. Found that pattern in more environments than I'd like to admit; a rough sketch of a scan for it is at the end of this reply.

    Interested in how you're structuring the pre-migration inventory - one-time audit before onboarding, or something you keep running post-migration to catch drift?
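
    As promised above, a rough sketch of that title-condition scan. It assumes the azure-mgmt-securityinsight Python SDK with placeholder names; the condition model attribute names can shift between SDK versions, hence the defensive getattr calls:

        # Sketch: flag automation rule conditions that key on incident title,
        # since XDR regrouping/renaming makes them quietly stop matching.
        from azure.identity import DefaultAzureCredential
        from azure.mgmt.securityinsight import SecurityInsights

        client = SecurityInsights(DefaultAzureCredential(), "<subscription-id>")

        for rule in client.automation_rules.list("<resource-group>", "<workspace-name>"):
            for cond in rule.triggering_logic.conditions or []:
                # Property-based conditions expose the field they match on.
                props = getattr(cond, "condition_properties", None)
                if props and getattr(props, "property_name", "") == "IncidentTitle":
                    print(f"{rule.display_name}: title {props.operator} {props.property_values}")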

    • AnthonyPorter
      Brass Contributor

      I totally agree, that's exactly the pattern I see in the wild. The XDR connector flipping on while incident creation rules are still active is the fastest way to double your incident queue overnight, so I'd always pause Defender incident creation for each product before connecting (but I echo your "too late" comment). I also recommend doing the initial connect into a staging workspace and running a 48-72 hour smoke test so you can validate behavior without impacting production (assuming the environment runs a development workspace).

      But I honestly believe we need to treat the inventory as living: run a one-time pre-migration audit of connectors, incident rules, analytics, automations, and RBAC, then keep that inventory live with scheduled drift checks (at least until the dust settles). A rough sketch of what I mean by a drift check is below.
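
      To make "living" concrete, here is a minimal sketch of what a scheduled drift check could look like: snapshot the rule inventory to JSON and diff it on each run. The snapshot path and the fields compared are illustrative only, and the same pattern extends to automation rules and role assignments:

          # Sketch: naive drift check - snapshot analytics rule state to JSON
          # and report anything that changed since the previous run.
          import json
          from pathlib import Path

          from azure.identity import DefaultAzureCredential
          from azure.mgmt.securityinsight import SecurityInsights

          SNAPSHOT = Path("sentinel_inventory.json")  # illustrative location

          client = SecurityInsights(DefaultAzureCredential(), "<subscription-id>")
          current = {
              rule.name: {"kind": rule.kind, "enabled": getattr(rule, "enabled", None)}
              for rule in client.alert_rules.list("<resource-group>", "<workspace-name>")
          }

          previous = json.loads(SNAPSHOT.read_text()) if SNAPSHOT.exists() else {}
          for name in sorted(current.keys() | previous.keys()):
              if current.get(name) != previous.get(name):
                  print(f"drift in {name}: {previous.get(name)} -> {current.get(name)}")

          SNAPSHOT.write_text(json.dumps(current, indent=2))

      Run it on whatever scheduler you already have, and the set-and-forget rules stop being invisible.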