Brownfield Sentinel implementation


I wonder how other enterprises go about this.

We have a huge Log Analytics Workspace that stems from the OMS days, so it holds a mixture of security and non-security data.

Sentinel implementation guidance is either based on creating a new LAWS or using a single workspace whenever possible.

So this would be our single workspace, but too much data is flowing in that we don't need in Sentinel, which would inflate both our ingestion and retention costs.

Also, way too many people have access to this data, which will only get worse as we add more sources in the near future.

We don’t want multi-homing, but we do want to separate security and non-security data, and to reduce ingestion of unneeded data into Sentinel wherever possible.

We know we could do a lot with table-level RBAC or resource-based RBAC to limit access, and we can leverage table-level retention to limit the amount of data we need to keep. But that does not address the data ingestion itself.
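For the access side, what we had in mind is a custom role along these lines - an untested sketch where the role name, table list and subscription ID are all placeholders:

```
# Untested sketch - subscription ID, role name and the table list are placeholders.
# Custom role that lets the ops/monitoring team query only the listed
# non-security tables in the shared workspace (table-level RBAC).
cat > ops-table-reader.json <<'EOF'
{
  "Name": "Ops Table Reader (custom)",
  "Description": "Query only Perf and Heartbeat in the shared workspace",
  "Actions": [
    "Microsoft.OperationalInsights/workspaces/read",
    "Microsoft.OperationalInsights/workspaces/query/read",
    "Microsoft.OperationalInsights/workspaces/query/Perf/read",
    "Microsoft.OperationalInsights/workspaces/query/Heartbeat/read"
  ],
  "NotActions": [],
  "AssignableScopes": [ "/subscriptions/<sub-id>" ]
}
EOF
az role definition create --role-definition @ops-table-reader.json
```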

 

So, how do you go about this? We are still thinking about a dedicated Sentinel LAWS, but we do not yet have a clear view of the data flows. Needless to say, we need the data in the existing OMS workspace for other purposes as well.

 

4 Replies
"Needless to say we need the data in the existing OMI workspace for other purposes also" If you can separate the security and non security data ingestion why do you need security data on your OMI workspace ? Its a duplication of data spread across workspaces.
Ingestion into Sentinel enabled workspace is for security analysis.. any other data ingestion not only adds higher cost and retention but will pose lot of challenges like Noise, false positives, query performance issues, alerting etc.. mainly impacts mean time to triage & incident closure.

It is better to ingest only what is needed into a new Sentinel-enabled workspace and make use of Sentinel-specific RBAC.
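For instance, you could scope the built-in Sentinel roles to the new workspace only. Rough sketch - the group object ID, subscription ID and resource names are placeholders:

```
# Rough sketch - the group object ID, subscription ID and names are placeholders.
# Give the SOC group "Microsoft Sentinel Responder" on the dedicated Sentinel
# workspace only, so nothing is granted on the old shared workspace.
az role assignment create \
  --assignee "<soc-group-object-id>" \
  --role "Microsoft Sentinel Responder" \
  --scope "/subscriptions/<sub-id>/resourceGroups/rg-sentinel/providers/Microsoft.OperationalInsights/workspaces/law-sentinel"
```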

@PrashTechTalk You write “If you can separate the security and non-security data” - if I could, I certainly would do that - it is just that I don’t think we can.

On the other hand, most, if not all, resources sending data to the workspace are also onboarded into Defender. So if we connect Defender (i.e. the many Defenders 😜) to Sentinel, we should be covered?

 

I guess we need to do a thorough analysis of what is actually going into the OMS workspace first.

Could you stream the security data into a new dedicated instance for Sentinel and then give users access to query particular tables across workspaces (see the sketch at the end of this reply)? https://docs.microsoft.com/en-us/azure/azure-monitor/logs/cross-workspace-query

Or is that what you mean by avoid multi-homing?
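Something roughly like this, run against the old workspace - untested, and the workspace GUID and the Sentinel workspace name are placeholders:

```
# Rough sketch - the workspace GUID and the Sentinel workspace name are placeholders.
# Query a table that now lives in the dedicated Sentinel workspace from the
# old workspace, using the KQL workspace() function.
az monitor log-analytics query \
  --workspace "<old-workspace-customer-id>" \
  --analytics-query 'workspace("law-sentinel").SecurityEvent | where TimeGenerated > ago(1h) | take 10'
```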

@m_zorich Yes, I think that is what I mean by avoiding multi-homing.

Ideally we would have one workspace for everything (with proper resource-based or table-level access), but that would incur extra ingestion costs in Sentinel. Also, not all data has the same retention requirements. That would just extend our current situation if we were to deploy Sentinel on top of the existing OMS workspace.
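On the retention point, per-table retention on the existing workspace would look roughly like this - untested, and the resource group, workspace, table name and retention value are placeholders:

```
# Untested sketch - names and values are placeholders.
# Shorten interactive retention for a noisy non-security table instead of
# paying the workspace-wide default retention for everything.
az monitor log-analytics workspace table update \
  --resource-group rg-oms-logs \
  --workspace-name law-oms-legacy \
  --name AzureDiagnostics \
  --retention-time 30
```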

On the other hand, we are trying to find a good way to get only the necessary data into Sentinel without breaking any of the current reporting and monitoring.

Preferably not by multi-homing clients (and not all PaaS and SaaS services can even do that), but in some other way - maybe by selecting data from the existing workspace into a new one, using the Log Analytics workspace data export functionality, or some other means.
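As far as I know, data export targets a storage account or event hub rather than another workspace directly, so something like this would only be the first hop. Untested sketch - all names and the destination resource ID are placeholders:

```
# Untested sketch - resource group, workspace, export name, tables and the
# storage account resource ID are all placeholders.
# Continuously export the selected tables out of the existing workspace;
# from storage (or an event hub) the data could then be archived or re-ingested.
az monitor log-analytics workspace data-export create \
  --resource-group rg-oms-logs \
  --workspace-name law-oms-legacy \
  --name export-security-tables \
  --tables SecurityEvent Syslog \
  --destination "/subscriptions/<sub-id>/resourceGroups/rg-archive/providers/Microsoft.Storage/storageAccounts/stlogarchive" \
  --enable true
```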

And, as I mentioned in another reply, maybe we should first thoroughly analyse what is going into the OMS workspace right now; maybe we can separate at the source in other ways. For example, when a VM is onboarded into Defender for Endpoint, should it still log to the OMS workspace, or is it sufficient to just connect DfE to Sentinel?