Forum Discussion
Possible Deficiencies or am I Glossing Over Something?
illuzian I cannot address all your concerns, but I have an answer for some of them.
First concern: You can use the "search" command to search across all logs. This article uses it for just the type of scenario you describe: https://techcommunity.microsoft.com/t5/Azure-Sentinel/Security-Investigation-with-Azure-Sentinel-and-Jupyter-Notebooks/ba-p/432921
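For example, a cross-log search for a single indicator could look something like this (the IP value is just a placeholder and the 7-day window is arbitrary):

```
// Search every table in the workspace for an indicator of interest (placeholder value)
search "203.0.113.45"
| where TimeGenerated > ago(7d)
| summarize Hits = count() by $table
| order by Hits desc
```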
Second concern: While there is not one out of the box, you can create a workbook that provides the basic data you need, using a text parameter for the input value.
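As a rough sketch, the query behind such a workbook could be as simple as the following, assuming a text parameter named SearchTerm (the parameter name is only an example):

```
// Workbook query wired to a hypothetical {SearchTerm} text parameter
search "{SearchTerm}"
| where TimeGenerated > ago(24h)
| project TimeGenerated, $table
| take 100
```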
Third concern: On the overview page there is a description of how ML will be used in Sentinel and a link that takes you to this page, which provides a good overview: https://azure.microsoft.com/en-us/blog/reducing-security-alert-fatigue-using-machine-learning-in-azure-sentinel/
Fourth concern: I'm not sure why that is a concern. It is simply a different way of working with alerts and incidents. And, as you may have noticed, there are times when an incident is comprised of multiple alerts, so it does sort of work the way you described.
Last concern: If you look at the incident's graphical investigation, you will see that you can do exactly what you stated in an easy-to-use graphical view. And if that does not work, you can go directly to the logs from that page, or use threat hunting with Jupyter notebooks to get even more advanced analysis.
While string searches certainly provide some of the capability, I would consider them less than ideal due to their potential inability to cover certain scenarios, such as source IP -> destination IP correlation. You would need to search across logs and interpret the results, or rely on targeting data sources that explicitly define these fields, or strings where you can regex the two out. It definitely provides some capability; it's just not ideal and requires extra overhead.
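To illustrate the overhead I mean, correlating a source/destination pair from a data source without normalised fields ends up looking something like this, where the table and the regexes are purely illustrative:

```
// Without normalised fields: regex the src/dst IPs out of free text (illustrative only)
Syslog
| where TimeGenerated > ago(1d)
| extend SrcIP = extract(@"src[=\s]+(\d{1,3}(?:\.\d{1,3}){3})", 1, SyslogMessage),
         DstIP = extract(@"dst[=\s]+(\d{1,3}(?:\.\d{1,3}){3})", 1, SyslogMessage)
| where isnotempty(SrcIP) and isnotempty(DstIP)
| summarize Events = count() by SrcIP, DstIP
// Compare with a source that already carries the fields, e.g.
// CommonSecurityLog | summarize count() by SourceIP, DestinationIP
```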
I guess for the second point it would be more valuable, but again, it wouldn't be as intuitive as normalised fields.
For the ML, I guess it will be an adjustment, as you're looking back over a time period and doing your statistical analysis in the query, whereas other SIEMs hold this type of data persistently and update it as required. As long as the solution is able to process these more complicated queries efficiently, it wouldn't really be a problem from an alerting perspective, but it's certainly more overhead for rule design (imo).
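As an example of the kind of in-query baselining I'm thinking of, assuming SigninLogs and purely arbitrary thresholds:

```
// Baseline computed at query time instead of being maintained persistently (illustrative)
let lookback = 14d;
let recent = 1d;
SigninLogs
| where TimeGenerated between (ago(lookback) .. ago(recent))
| where ResultType != "0"                              // failed sign-ins
| summarize BaselineDailyFailures = count() / 13.0 by UserPrincipalName
| join kind=inner (
    SigninLogs
    | where TimeGenerated > ago(recent)
    | where ResultType != "0"
    | summarize RecentFailures = count() by UserPrincipalName
  ) on UserPrincipalName
| where RecentFailures > 3 * BaselineDailyFailures
```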
For the fourth concern, I understand the concept and I can imagine being able to achieve these types of scenarios. My only issue is that it would likely mean using complex queries with unions or joins etc. that may also create overhead.
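For instance, just getting one entity view across two tables already involves a union along these lines (the tables, columns, and indicator are placeholders):

```
// Pulling the same entity out of multiple tables with a union (illustrative)
union
  (SecurityEvent     | project TimeGenerated, Entity = IpAddress, Source = "SecurityEvent"),
  (CommonSecurityLog | project TimeGenerated, Entity = SourceIP,  Source = "CommonSecurityLog")
| where Entity == "203.0.113.45"                       // placeholder indicator
| summarize Events = count() by Source, bin(TimeGenerated, 1h)
```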
For the final point, there's certainly some of that capability included, but not to the same extent as traditional SIEMs. I personally love the idea of using Jupyter, especially being an avid Pythonista, but in a diverse security team of varying skill levels it's simply not ideal to expect all of the team to be able to build more complex queries beyond Kusto, or even from a GUI.
Thanks so much for replying though; it's definitely going to assist in completing our PoC of the platform. I'll have to build some examples out from what I've found valuable in the past and what you've suggested, and see how useful it turns out.
I'd be really interested in hearing from prior or current users of traditional SIEMs who don't experience alert fatigue or high administration overhead, and how they are finding Sentinel: some examples of what they can do more easily, what they couldn't do before and can now, and what they've found they can't do.