May 06 2020 03:31 PM
Is there any sort of documentation that states a Syslog server of X size can handle Y amount of traffic? I thought I saw something somewhere but I cannot seem to locate the document again.
May 10 2020 04:52 PM
That's a good question, and I suspect it's not just about the syslog server itself but also about its ability to upload the logging data to Sentinel. I think that will be the bottleneck when large volumes of logs are involved, since a Linux syslog server can be tuned to support a very high number of events per second. As far as I know, the Log Analytics API only accepts chunks of up to 30 MB per POST, so depending on the available bandwidth one can do some math on how many collectors would be needed for a specific volume of raw logs.
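To illustrate the "do some math" part, here is a rough back-of-envelope sketch. The daily volume and per-collector bandwidth figures are made-up assumptions for the example; only the 30 MB per-POST cap comes from the discussion above.

```python
# Back-of-envelope collector sizing. All inputs are assumptions except the
# 30 MB per-POST limit mentioned above; plug in your own numbers.
daily_volume_gb = 100          # raw log volume to forward per day (assumption)
chunk_limit_mb = 30            # per-POST payload cap
per_collector_mbps = 50        # usable upload bandwidth per collector (assumption)

daily_volume_mb = daily_volume_gb * 1024
posts_per_day = -(-daily_volume_mb // chunk_limit_mb)  # ceiling division

# Sustained throughput needed to ship a day's logs in a day, in megabits/s
required_mbps = daily_volume_mb * 8 / 86_400
collectors_needed = max(1, -(-int(required_mbps) // per_collector_mbps))

print(f"{posts_per_day} POSTs/day, ~{required_mbps:.1f} Mbit/s sustained, "
      f"{collectors_needed} collector(s) at {per_collector_mbps} Mbit/s each")
```

With those example inputs, 100 GB/day works out to roughly 3,400 POSTs per day and under 10 Mbit/s sustained, so bandwidth alone rarely forces multiple collectors; event-rate and CPU limits usually bite first.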
We typically start with a 2 x vCPU, 8 GB RAM, minimal CentOS 7.7 VM, and it barely breaks a sweat at 15-20 GB/day (using standard UDP-based syslog traffic). For more stringent requirements, such as high availability, one can introduce load balancers, go full DevOps by spinning up syslog containers through Kubernetes, or perhaps use Kafka to manage the log stream.
I guess that, empirically, one could set up a log generator and flood a "standard" syslog server to see where it starts to fall apart. I will probably add this to my to-do list.
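A minimal version of such a log generator can be sketched in a few lines. The target host, port, and message template below are assumptions; point it only at a test collector, and note that binding to the default syslog port 514 on the receiving side typically requires root.

```python
# Minimal UDP syslog flood generator: a sketch for empirically finding the
# point where a collector starts dropping events. Host, port, and message
# format are assumptions; use against a test collector only.
import socket
import time

def flood(host="127.0.0.1", port=514, count=10_000):
    """Send `count` RFC 3164-style messages as fast as possible; return EPS."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    msg = b"<134>May  6 03:31:00 loadgen app[123]: synthetic event %d"
    start = time.monotonic()
    for i in range(count):
        sock.sendto(msg % i, (host, port))
    elapsed = time.monotonic() - start
    sock.close()
    return count / elapsed
```

Comparing the sender-side events-per-second figure with what the collector actually wrote to disk gives a rough drop rate, which is where the server "starts to fall apart".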
Adrian Grigorof
Jul 30 2021 03:56 AM
@crystan The URL you posted only re-opens this conversation :)