The Compound Case of the Outlook Hangs
First published on TechNet on Aug 21, 2010

This case was shared with me by a friend of mine, Andrew Richards, a Microsoft Exchange Server Escalation Engineer. It’s a really interesting case because it highlights the use of a Sysinternals tool I specifically wrote for use by Microsoft support services and it’s actually two cases in one.


The case unfolds with a systems administrator at a corporation contacting Microsoft support to report that users across their network were complaining of Outlook hangs lasting up to 15 minutes. The fact that multiple users were experiencing the problem pointed at an Exchange issue, so the call was routed to Exchange Server support services.


The Exchange team has developed a Performance Monitor data collector set that includes several hundred counters that have proven useful for troubleshooting Exchange issues, including LDAP, RPC and SMTP message activity, Exchange connection counts, memory usage and processor usage. Exchange support had the administrator collect a log of the server’s activity in 12-hour cycles, the first from 9pm until 9am the next morning. When Exchange support engineers viewed the log, two patterns were clear despite the heavy density of the plots: first, as expected, the Exchange server’s load increased during the morning when users came into work and started using Outlook; and second, the counter graphs showed a difference in behavior between about 8:05 and 8:20am, an interval that corresponded exactly to the long delays users were reporting:


[Figure: Performance Monitor graph of the 12-hour Exchange counter capture]


The support engineers zoomed in and puzzled over the counters in the timeframe and could see Exchange’s CPU usage drop, the active connection count go down, and outbound response latency drastically increase, but they were unable to identify a cause:


[Figure: zoomed Performance Monitor view of the 8:05-8:20am window]


They escalated the case to the next level of support and it was assigned to Andrew. Andrew studied the logs and concluded that he needed additional information about what Exchange was doing during an outage. Specifically, he wanted a process memory dump of Exchange when it was in the unresponsive state. This would contain the contents of the process address space, including its data and code, as well as the register state of the process’s threads. Dump files of the Exchange process would allow Andrew to look at Exchange’s threads to see what was causing them to stall.


One way to obtain a dump is to “attach” to the process with a debugger like WinDbg from the Debugging Tools for Windows package (included with the Windows Software Development Kit) and execute the .dump command, but downloading and installing the tools, launching the debugger, attaching to the right process, and saving dumps is an involved procedure. Instead, Andrew directed the administrator to download the Sysinternals Procdump utility (a single executable that you can run without installing any software on the server). Procdump makes it easy to obtain dumps of a process and includes options that create multiple dumps at a specified interval. Andrew asked the administrator to run Procdump the next time the server’s CPU usage dropped so that it would generate five dumps of the Exchange Server engine process, Store.exe, spaced three seconds apart:



procdump -n 5 -s 3 store.exe c:\dumps\store_mini.dmp



The next day the problem reproduced and the administrator sent Andrew the dump files Procdump had generated. When a process temporarily hangs it’s often because one thread in the process acquires a lock protecting data that other threads need to access, and holds the lock while performing some long-running operation. Andrew’s first step was therefore to check for held locks. The most commonly used intra-process synchronization lock is a critical section and the !locks debugger command lists the critical sections in a dump that are locked, the thread ID of the thread owning the lock, and the number of threads waiting to acquire it. Andrew used a similar command, !critlist from the Sieext.dll debugger extension (the public version of which, Sieextpub.dll, is downloadable from here). The output showed that multiple threads were piled up waiting for thread 223 to release a critical section:


[Figure: !critlist output showing threads waiting on thread 223's critical section]
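The failure pattern Andrew was checking for is easy to reproduce. Here is a minimal Python sketch, purely illustrative and unrelated to the actual Store.exe code, showing workers piling up behind one thread that holds a lock through a long-running operation:

```python
import threading
import time

lock = threading.Lock()  # stands in for the contended critical section

def long_running_holder():
    # One thread grabs the lock and keeps it while doing slow work --
    # the pattern that makes every other thread appear hung.
    with lock:
        time.sleep(0.5)  # long-running operation while holding the lock

def blocked_worker(waits):
    start = time.monotonic()
    with lock:           # piles up behind the holder
        pass
    waits.append(time.monotonic() - start)

waits = []
holder = threading.Thread(target=long_running_holder)
holder.start()
time.sleep(0.05)         # let the holder acquire the lock first
workers = [threading.Thread(target=blocked_worker, args=(waits,)) for _ in range(4)]
for w in workers:
    w.start()
for w in workers + [holder]:
    w.join()
print(all(t > 0.3 for t in waits))  # every worker stalled behind the holder
```

Each worker's wait time is roughly the remainder of the holder's operation, which is exactly why the length of the Outlook delays tracked the length of whatever the lock owner was doing.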


His next step was to see what the owning thread was doing, which might point at the code responsible for the long delays. He switched to the owning thread’s register context using the ~ command and then dumped the thread’s stack with the k command:


[Figure: k command output showing the owning thread's stack]


As we’ve seen in previous Case of the Unexplained cases, the debugger was unsure how to interpret the stack when it came across a stack frame pointing into Savfmsevsapi, an image for which it couldn’t obtain symbols. Most Windows images have their symbols posted on the Microsoft symbol server so this was likely a third-party DLL loaded into the Store.exe process and was therefore a suspect in the hangs. The list modules (“lm”) command dumps version information for loaded images and the path of the image made it obvious that Savfmsevsapi was part of Symantec’s mail security product:


[Figure: lm output showing Savfmsevsapi's version information and path]


Andrew checked the other dumps and they all had similar stack traces. With the anecdotal evidence seeming to point at a Symantec issue, Andrew forwarded the dumps and his analysis, with the administrator’s permission, to Symantec technical support. Several hours later they reported that the dumps indeed revealed a problem with the mail application’s latest antivirus signature distribution and forwarded a patch to the administrator that would fix the bug. He applied it and continued to monitor the server to verify the fix. Sure enough, the server’s performance established fairly regular activity levels and the long delays disappeared.


However, over the subsequent days the administrator started to receive, albeit at a lower rate, complaints from several users that Outlook was sporadically hanging for up to a minute. Andrew asked the administrator to send a correlating 12-hour Performance Monitor capture with the Exchange data collection set, but this time there was no obvious anomaly:


[Figure: 12-hour Performance Monitor capture with no obvious anomaly]


Wondering if the hangs would be visible in the CPU usage history of Store.exe, he removed all the counters except for Store’s processor usage counter. When he zoomed in on the morning hours when users began to log in and load on the server increased, he noticed three spikes around 8:30am:


[Figure: Store.exe processor usage showing three spikes around 8:30am]


Because the server had eight cores, the processor usage counter for an individual process could range from 0 to 800, so the spikes were far from taxing the system, but they were definitely higher than Exchange’s typical range on that system. Zooming in further and setting the graph’s vertical scale to make the spikes more distinct, he observed that average CPU usage was always below about 75% of a single core and the spikes were 15-30 seconds long:


[Figure: zoomed view of the 15-30 second CPU spikes]
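The counter arithmetic above can be made concrete. This sketch uses made-up sample values, not figures from the actual log:

```python
# "% Processor Time" for a process sums across cores, so its ceiling is
# 100 x core count. The spike value here is hypothetical.
cores = 8
counter_max = 100 * cores          # 800 on this eight-core server
spike = 75                         # observed peak, in the same units

share_of_system = spike / counter_max   # fraction of total machine capacity
share_of_one_core = spike / 100         # fraction of a single core

print(counter_max)        # 800
print(share_of_system)    # 0.09375 -> under 10% of the whole machine
print(share_of_one_core)  # 0.75 -> about three quarters of one core
```

That is why a spike that looks dramatic on a rescaled graph was "far from taxing the system": it never consumed even one full core out of eight.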


What was Exchange doing during the spikes? They were too short-lived and random for the administrator to run Procdump like he had before and reliably capture dumps when they occurred. Fortunately, I designed Procdump with this precise scenario in mind. It supports several trigger conditions that, when met, cause it to generate a dump. For example, you can configure Procdump to generate a dump of a process when the process terminates, when its private memory usage exceeds a certain value, or even based on the value of a performance counter you specify. Its most basic trigger, though, is the CPU usage of the process exceeding a specified threshold for a specified length of time.
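The CPU trigger amounts to a simple sustained-threshold check. This Python sketch of the idea is illustrative only; the function and the sample stream are hypothetical, and the real Procdump polls the process's CPU usage from the operating system:

```python
def cpu_trigger(samples, threshold=75, window=10, interval=1):
    """Return the sample index at which a dump would fire, or None.

    samples   -- per-interval CPU readings for the process
    threshold -- CPU level that must be exceeded
    window    -- seconds the level must be sustained before triggering
    interval  -- seconds between samples
    """
    above = 0
    for i, cpu in enumerate(samples):
        # Reset the timer the moment usage dips below the threshold.
        above = above + interval if cpu > threshold else 0
        if above >= window:
            return i
    return None

# Eight seconds above threshold isn't enough; a later sustained run fires.
samples = [80] * 8 + [10] + [90] * 12
print(cpu_trigger(samples))  # 18 -> fires on the 10th consecutive hot sample
```

The sustained-window requirement is what filters out momentary blips and catches only spikes like the 15-30 second ones in the log.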


The Performance Monitor log gave Andrew the information he needed to craft a Procdump command line that would capture dumps for future CPU spikes:



procdump.exe -n 20 -s 10 -c 75 -u store.exe c:\dumps\store_75pc_10sec.dmp



The arguments configure Procdump to generate a dump of the Store.exe process when Store’s CPU usage exceeds 75% (-c 75) relative to a single core (-u) for 10 seconds (-s 10), to generate up to 20 dumps (-n 20) and then exit, and to save the dumps in the C:\Dumps directory with names that begin with “store_75pc_10sec”. The administrator executed the command before leaving work and when he checked on its progress the next morning it had finished creating 20 dump files. He emailed them to Andrew, who proceeded to study them in the Windbg debugger one by one.


When Procdump generates a dump because the CPU usage trigger is met, it sets the thread context in the dump file to the thread that was consuming the most CPU at the time of the dump. Since the debugger’s stack-dumping commands are relative to the current thread context, simply entering the stack-dumping command shows the stack of the thread most likely to have caused a CPU spike. Over half the dumps were inconclusive, apparently captured after the spike that triggered the dump had already ended, or with threads that were executing code that obviously wasn’t directly related to a spike. However, several of the dumps had stack traces similar to this one:


[Figure: stack trace from one of the triggered dumps]
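The thread-selection behavior described above boils down to picking the thread with the most accumulated CPU time. A tiny sketch with made-up thread IDs and timings:

```python
# Hypothetical per-thread CPU seconds at dump time; Procdump makes the
# hottest thread the dump's active context, so the debugger's stack
# command lands on it by default.
threads = {"0x1a4": 0.12, "0x2b0": 9.87, "0x3c8": 0.40}
hottest = max(threads, key=threads.get)
print(hottest)  # 0x2b0
```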


The stack frame that stuck out listed Store’s EcFindRow function, which implied that the spikes were caused by lengthy database queries, the kind that execute when Outlook accesses a mailbox folder with thousands of entries. With this clue in hand, Andrew suggested the administrator create an inventory of large mailboxes and pointed him at an article the Exchange support team had written that describes how to do this for each version of Exchange:


[Figure: excerpt from the Exchange team's article on inventorying large mailboxes]


Sure enough, the script identified several users with folders containing tens of thousands of items. The administrator asked the users to reduce their item count to well below 5,000 (the Exchange 2003 recommendation; the limit has been raised in each subsequent version, to 100,000 in Exchange 2010) by archiving the items, deleting them, or organizing them into subfolders. Within a couple of days they had reorganized the problematic folders and user complaints ceased entirely. Ongoing monitoring of the Exchange server over the following week confirmed that the problem was gone.


With the help of Procdump, the compound case of the Outlook hangs was successfully closed.