ATP sensor Consume most server CPU (60%)


After installing the ATP sensor on a domain controller, we observed that the microsoft.tri.sensor.exe process consumes more than 60% of the server's 16-core CPU.


The sensor (if installed on a DC) will make sure at least 15% of RAM and CPU are free at all times.

Otherwise, it will try to utilize any free resources to reduce data latency.
If the machine gets busier, the limits are adjusted and the sensor utilizes fewer resources.

(It auto-adjusts within ~10 seconds.)

 

Plus, if it's a new deployment and this instance is a synchronizer candidate, it's expected to work harder during the first few hours, until the initial AD sync completes.

 

A standalone sensor, by contrast, assumes it can use all of the machine's resources, without limits.
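The throttling behavior described above can be sketched as follows. This is a hypothetical illustration, not the actual sensor implementation; the 15% reserve is the figure quoted in this thread, and `sensor_cpu_budget` is an invented name:

```python
# Illustrative sketch (NOT the real sensor code) of the behavior described
# above: on a DC, the sensor caps its own CPU usage so that at least 15%
# of the machine stays free, re-evaluating the cap periodically (~10 s
# per the description above).

RESERVED_FREE_PCT = 15  # the sensor keeps at least this much CPU free

def sensor_cpu_budget(other_load_pct: float) -> float:
    """Max CPU % the sensor may use, given the load (in %) generated
    by everything else on the machine."""
    budget = 100 - RESERVED_FREE_PCT - other_load_pct
    return max(budget, 0.0)

# As the DC gets busier, the sensor's budget shrinks:
print(sensor_cpu_budget(10))  # quiet DC -> sensor may use up to 75%
print(sensor_cpu_budget(70))  # busy DC  -> budget drops to 15%
print(sensor_cpu_budget(90))  # saturated -> sensor backs off entirely
```

Under this model, 60% sensor CPU on an otherwise idle DC is within the allowed envelope, which matches the replies below that treat it as expected behavior rather than a fault.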

What Eli described is documented here, under the "Resource limitations" bullet item. 

Dear All,

Thanks for your support. After updating the sensor to version 2.60.6070.18946, the CPU issue has been resolved.

@Eli Ofek We are facing very high CPU usage on one of our DCs. The sensor process seems to occupy most of the resources on this DC. Could you please share some insight on remediating this? Also, we are seeing a large number of connections to 8.8.8.8 on this server; not sure if this is linked! Any leads would be appreciated, as always!

@mesaqee, it seems that the total CPU on the machine is 74%, so technically speaking there is no issue, and the sensor is not even throttling at this point.

Such consumption might be expected for high traffic scenarios.

What did the sizing tool have to say about this machine?
What is the hardware spec? What are the busy packets/sec and max packets/sec?

 

The sensor itself won't initiate connections specifically to 8.8.8.8, but if you are running a DNS service on the machine that accepts connections from 8.8.8.8, then it is expected that the sensor will try to connect back to this endpoint to try and resolve it. Most likely it's not related to the CPU usage.
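The selection logic implied above (resolve external endpoints seen in traffic, not internal ones) could be sketched like this. The decision rule is my own assumption for illustration; the sensor's actual resolution logic is not documented here, and `should_reverse_resolve` is an invented name. No network I/O is performed:

```python
# Hypothetical illustration of the behavior described above: when the
# sensor sees traffic from an external endpoint such as 8.8.8.8, it may
# attempt to resolve that endpoint. The public/private split below is an
# assumed heuristic, not the sensor's documented logic.
import ipaddress

def should_reverse_resolve(ip: str) -> bool:
    """Only public, routable addresses are worth resolving externally."""
    addr = ipaddress.ip_address(ip)
    return not (addr.is_private or addr.is_loopback or addr.is_multicast)

print(should_reverse_resolve("8.8.8.8"))   # True  -> external endpoint
print(should_reverse_resolve("10.0.0.5"))  # False -> internal range
```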

Dear @Eli Ofek , 

 

The server runs on a VM; the complete hardware specs are included below. Busy packets/sec = 511 and max packets/sec = 32,295.

 

Please see below the complete sizing tool output:

 

DC: XXXXX
Sensor Supported: Yes, but additional resources required: +1GB; +1 core
Failed Samples: 8
Max Packets/sec: 32,295
Avg Packets/sec: 105
Busy Packets/sec: 511
Busy Packets/sec Start Time: 19:51:52
Busy Packets/sec End Time: 20:21:50
Min Avail MB: 2,366
Avg Avail MB: 4,349
Busy Avail MB: 3,323
Busy RAM Start Time: 17:12:16
Busy RAM End Time: 17:42:14
Total MB: 8,191
Max % CPU Time: 100
Avg % CPU Time: 50
Busy % CPU Time: 98
Busy CPU Start Time: 2:17:12
Busy CPU End Time: 2:47:30
Logical processors: 2
Processor Groups: 1
Core Count: 2
VM Indicator: VMWare
AD Site: XXXXX
Time Zone Name: (UTC+08:00) Beijing, Chongqing, Hong Kong, Urumqi
Is DST: (blank)
OS Caption: Microsoft Windows Server 2019 Standard
OS Build Number: 17763
OS Installation Type: Server
OS Server Levels: ServerCore; ServerCoreExtended; Server-Gui-Mgmt; Server-Gui-Shell
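One thing the numbers above make easy to miss: Busy Packets/sec is low (511) while Max Packets/sec is very high (32,295), i.e. traffic arrives in sharp bursts rather than as a steady load. A quick sketch of that ratio (my own rough heuristic, not an official sizing rule; the tool's own verdict of "+1GB; +1 core" is what should drive the capacity change):

```python
# Rough illustration using the sizing-tool figures quoted above:
# a large gap between Max and Busy packets/sec indicates bursty traffic.

def burst_ratio(max_pkts: int, busy_pkts: int) -> float:
    """How many times higher the peak is than the sustained 'busy' rate."""
    return max_pkts / busy_pkts

ratio = burst_ratio(32295, 511)
print(f"peak is ~{ratio:.0f}x the busy rate")  # ~63x
```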
Was one core added as suggested?
While the busy packets/sec is low, the max is pretty high...
Is the high CPU you noticed constant, or does it spike at certain hours?
No, the core hasn't been added, as this issue only started coming up last week. The spike has been there almost constantly. We are still monitoring to evaluate whether this is intermittent or a consistent issue. Do you have any other suggestions apart from adding the core?
Check the packets/sec on all the NICs, or re-run the sizing tool; maybe there was an increase in traffic load on this machine. But nothing seems wrong here, especially if the sizing tool asked for another core and it wasn't deployed.

Dear @Eli Ofek,

 

The issue is still there even after increasing the server capacity.

We re-ran the sizing tool, and here are the results:

 

 

DC: XXXXX.local
Sensor Supported: Yes
Failed Samples: 0
Max Packets/sec: 2,264
Avg Packets/sec: 139
Busy Packets/sec: 181
Busy Packets/sec Start Time: 09:51:47
Busy Packets/sec End Time: 10:06:45
Min Avail MB: 4,623
Avg Avail MB: 5,193
Busy Avail MB: 4,725
Busy RAM Start Time: 04:03:52
Busy RAM End Time: 04:18:49
Total MB: 10,239
Max % CPU Time: 100
Avg % CPU Time: 11
Busy % CPU Time: 69
Busy CPU Start Time: 12:58:17
Busy CPU End Time: 13:13:20
Logical processors: 4
Processor Groups: 1
Core Count: 4
VM Indicator: VMWare
AD Site: HXXXX
Time Zone Name: (UTC+08:00) Beijing, Chongqing, Hong Kong, Urumqi
Is DST: (blank)
OS Caption: Microsoft Windows Server 2019 Standard
OS Build Number: 17763
OS Installation Type: Server
OS Server Levels: Remote Registry Query Failed
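Putting the two sizing-tool runs in this thread side by side makes the trend easier to read. The values below are copied directly from the two tables (before and after the capacity upgrade); the comparison itself is just my own reformatting, not part of the tool's output:

```python
# Side-by-side view of the two sizing-tool runs quoted in this thread.
# Values are taken verbatim from the tables above.

before = {"cores": 2, "total_mb": 8191, "avg_cpu_pct": 50,
          "busy_cpu_pct": 98, "max_pkts": 32295, "busy_pkts": 511}
after  = {"cores": 4, "total_mb": 10239, "avg_cpu_pct": 11,
          "busy_cpu_pct": 69, "max_pkts": 2264, "busy_pkts": 181}

for key in before:
    print(f"{key:>12}: {before[key]:>7} -> {after[key]:>7}")
```

Read this way, the second run actually looks much healthier: average CPU dropped from 50% to 11% after doubling the cores, and the verdict changed to a plain "Yes".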

 

I'd appreciate it if you could provide some more insight on this.

 

 

Thanks,

Saqib

What is the total CPU usage pattern after the additional core was added?

@Eli Ofek Well, after upgrading the capacity the usage was pretty high, and we were even getting the "Some network traffic could not be analyzed" alert. However, we are now unable to see the trend, as the sensor service is not starting. On trying to start it manually, we get the below error:

 

[Screenshot of the service start error: mesaqee_0-1616578491368.png]

 

We have tried rebooting the server and re-installing the sensor, but the service is still not running. Shall we contact support to have a detailed look, or do you have any further suggestions?

 

Thanks,

Saqib

 

Check the local logs of the sensor and the updater to see if there is any clue as to why it fails to start.
Also make sure the WmiApSrv service is starting correctly.
If there are no clues, you'll need a support ticket to go forward...