First published on TECHNET on Sep 20, 2017
Author: DJ Ball, Senior Escalation Engineer, Skype for Business

Recently I worked on a couple of cases where administrators were reporting higher-than-average CPU consumption on their Director pool servers. They reported sustained 80 to 90% CPU consumption during peak business hours, most noticeable around the top of each hour. Then, a few hours before the end of their day, the CPU would begin to fall back to their normal 20 to 30% average (normal for these customers; every customer should have their own baseline!).

As we began to troubleshoot the issue over several days, we noticed that only two or three servers in the pool would have high CPU consumption on a given day. We were able to confirm that every server in the pool had high CPU consumption at some point, so this problem was definitely affecting all members of the pool (just not all at the same time).

Watching Task Manager was enough to figure out that RTCHost.exe was the top consumer of CPU time. Now we needed to determine what was causing the problem. Was load not well balanced among servers in the pool? Was something different on the problem servers (or problem servers on problem days)? Was there any increase in users or devices on problem days?

A custom perfmon counter log was needed to dig deeper and understand why this service was consuming more CPU. Here is the Logman command line that allowed the customer to easily create the counter log on each server. I have provided the Performance Counter text file that contains all the counters that we used.

PerformanceCounters1
Create command:

logman -create counter SFBPERF -f bin -v mmddhhmm -cf PerformanceCounters.txt -o %systemdrive%\Perflog\%COMPUTERNAME%.LOG -y -cnf 24:00:00

Start command:

logman start SFBPERF

Stop command:

logman stop SFBPERF
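The PerformanceCounters.txt file itself was attached to the original post. As a stand-in, here is a hypothetical reconstruction assembled from the counters discussed throughout this article; the exact counter paths and instance names are assumptions, so verify them against your own perfmon before use:

```python
# Hypothetical reconstruction of PerformanceCounters.txt from the counters
# discussed in this post; counter/instance names are assumptions.
counters = [
    r"\Processor(_Total)\% Processor Time",
    r"\Process(RTCHost)\% Processor Time",
    r"\Process(RTCHost)\Private Bytes",
    r"\Memory\Available MBytes",
    r"\.NET CLR Memory(RTCHost)\% Time in GC",
    r"\LS:SIP protocol\SIP - Incoming Messages /Sec",
    r"\LS:SIP - Load Management\SIP - Average Holding Time For Incoming Messages",
]
with open("PerformanceCounters.txt", "w") as f:
    f.write("\n".join(counters) + "\n")
```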


I had the customer run these perfmon logs on each server on both issue and non-issue days (so we could compare problematic vs. non-problematic behavior). Once I had this data, picking it apart was a time-consuming task.

In reviewing the perfmon logs, I started by adding these two counters. They showed that the RTCHost.exe process trended up exactly with the total CPU usage; RTCHost was using ~20% of the total processor time.
Process\% Processor Time\RTCHost

Processor\% Processor Time\_Total

[Chart DJBlog1: RTCHost.exe % Processor Time trending together with total CPU usage]



Then I overlaid these additional counters to look at user load:
LS:SIP protocol\SIP - Incoming Messages /Sec

LS:SIP - Load Management\SIP - Average Holding Time For Incoming Messages



It was very clear that SIP - Incoming Messages /Sec jumped from an average of 3080 to 4380. That is about a 40% jump in traffic over the course of ~3 minutes. SIP - Average Holding Time For Incoming Messages also rose from basically 0 to 13.9 at the same time. But when I compared these peaks against other servers in the pool, they were no higher than on servers that were not having high CPU. I had established that the 10:00 AM hour was a peak time for users joining meetings.
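As a quick arithmetic check of the figures above (the message rates come straight from the capture described in this post):

```python
# Sanity-check the traffic jump: 3080 -> 4380 incoming SIP messages/sec.
baseline, peak = 3080, 4380
jump = (peak - baseline) / baseline
print(f"{jump:.0%}")  # prints 42%, i.e. roughly the 40% jump described
```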



[Chart DJBlog2: SIP incoming message rate and average holding time overlaid on CPU usage]



What is RTCHost doing when it is consuming so much CPU? The next step was to add these counters to the view:
Process\Private Bytes\RtcHost

Memory\Available Mbytes

.Net CLR Memory\% Time in GC

The Private Bytes counter showed that the RtcHost process grew from consuming about 1 GB of memory to a peak of just over 13 GB in the span of 9 minutes. The Available Mbytes counter showed that free system memory went from averaging ~14 GB to 3.6 GB over that same period. % Time In GC shows how much of the process's time is spent in .Net garbage collection. The jump in user load caused the process to consume much more memory, which caused GC to kick into overdrive, which drove up the CPU usage.



[Chart DJBlog3: RtcHost Private Bytes, Available Mbytes, and % Time in GC]
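To put that growth in perspective, a back-of-the-envelope calculation using the numbers above:

```python
# RtcHost private bytes grew from ~1 GB to ~13 GB in about 9 minutes.
start_gb, peak_gb, minutes = 1.0, 13.0, 9.0
growth_rate = (peak_gb - start_gb) / minutes
print(f"~{growth_rate:.1f} GB/min")  # ~1.3 GB/min of sustained growth
```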



Now that we knew GC was our bottleneck, I discovered the customer was still running the old .Net Framework 4.0. The .Net 4.6.2 release has improved memory management performance, and Skype for Business Server has supported .Net 4.6.2 since the February 2017 update. We do not support .Net 4.7, as it has not been fully tested. The 4.6.2 version can be found here.

The .Net Garbage Collector serves as the automatic memory manager for applications written in .Net. While GC is running, the other worker threads are blocked until GC finishes. The more often GC is running, the less often other work can be done. As a process becomes busier, GC will run more often and for longer periods of time.

Garbage collection has two modes, Server and Workstation. The RtcHost process is configured to use workstation mode by default. Workstation mode has one thread to perform GC and one memory heap, whereas server mode has one heap and one GC thread per logical CPU core. These differences can cause a process to consume as much as 2.5 times as much memory. You need to check the Memory\Available Mbytes counter closely to ensure you have enough system memory to handle this change. For a deep dive on GC, Fundamentals of Garbage Collection is a great resource, and the Exchange Team Blog has an excellent post.
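The sizing rule described above can be sketched directly; this is only an illustration of the per-core figures stated in this post, not a measurement of the CLR:

```python
import os

# One GC heap and one GC thread in workstation mode, versus one of each
# per logical core in server mode, as described above.
logical_cores = os.cpu_count() or 1
workstation = {"gc_heaps": 1, "gc_threads": 1}
server = {"gc_heaps": logical_cores, "gc_threads": logical_cores}
print(workstation, server)
```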

Once the servers were updated to .Net 4.6.2, I had the customer enable server-mode GC with concurrency in the RtcHost config file as shown below. You should make a backup of this file before adding the two lines to the <runtime> section. This change requires a reboot to take effect.



Default path - "C:\Program Files\Skype for Business Server 2015\Server\Core\RtcHost.Exe.config"
<?xml version="1.0" encoding="utf-8" ?>
<configuration>
  <runtime>
    <generatePublisherEvidence enabled="false"/>
    <gcServer enabled="true"/>
  </runtime>
  <system.serviceModel>
    <services>
    </services>
  </system.serviceModel>
</configuration>



If you think this change may help your environment, consider the following caveats:

  1. Per Server requirements for Skype for Business Server 2015, Director role servers are recommended to have 16 GB of memory. You need to closely monitor the Memory\Available Mbytes counter before and after making this change. You should have at least 1.5 GB free during peak times.

  2. Future cumulative updates may overwrite your custom RtcHost.Exe.config, so you will need to check this setting after each update. This is a custom configuration that must be set for each environment.
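For caveat 2, a small script along these lines can confirm the setting survived an update. This is a hedged sketch: the path is the default quoted in this post, and the check assumes the element layout shown in the sample config above.

```python
import xml.etree.ElementTree as ET

def gc_server_enabled(config_path):
    """Return True if <gcServer enabled="true"/> is present under <runtime>."""
    root = ET.parse(config_path).getroot()
    node = root.find("./runtime/gcServer")
    return node is not None and node.get("enabled", "").lower() == "true"

# Default path from this post:
# gc_server_enabled(r"C:\Program Files\Skype for Business Server 2015"
#                   r"\Server\Core\RtcHost.Exe.config")
```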




Thanks for reading!

DJ.