First published on TechNet on Aug 26, 2007

A few weeks ago a poster with the handle dloneranger reported in the 2CPU forums that he experienced reduced network throughput on his Vista system when he played audio or video. Other posters chimed in with similar results, and in the last week other sites have drawn attention to the behavior, including Slashdot and ZDNet blogger Adrian Kingsley-Hughes.


Many people have correctly surmised that the degradation in network performance during multimedia playback is directly connected with mechanisms employed by the Multimedia Class Scheduler Service (MMCSS), a feature new to Windows Vista that I covered in my three-part TechNet Magazine article series on Windows Vista kernel changes. Multimedia playback requires a constant rate of media streaming, and playback will glitch or sputter if its requirements aren’t met. The MMCSS service runs in the generic service hosting process Svchost.exe, where it automatically prioritizes the playback of video and audio in order to prevent other tasks from interfering with the CPU usage of the playback software:




When a multimedia application begins playback, the multimedia APIs it uses call the MMCSS service to boost the priority of the playback thread into the realtime range, which covers priorities 16-31, for up to 8ms of every 10ms interval, depending on how much CPU the playback thread requires. Because other threads run at priorities in the dynamic priority range (15 and below), even very CPU-intensive applications won’t interfere with playback.
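The reservation described above amounts to a fixed CPU budget per scheduling period. A quick sketch of the arithmetic (illustrative only, not actual MMCSS code):

```python
# The playback thread may run boosted for up to 8ms of every 10ms interval.
BOOST_MS = 8
PERIOD_MS = 10

def max_playback_cpu_share(boost_ms=BOOST_MS, period_ms=PERIOD_MS):
    """Fraction of one CPU the boosted playback thread can reserve."""
    return boost_ms / period_ms

def min_share_for_other_threads():
    """Fraction of one CPU guaranteed to remain for lower-priority threads."""
    return 1 - max_playback_cpu_share()

print(max_playback_cpu_share())        # 0.8 -> up to 80% of a CPU
print(min_share_for_other_threads())   # about 0.2 left for everything else
```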


You can see the boost by playing an audio or video clip in Windows Media Player (WMP), running the Reliability and Performance Monitor (Start->Run->Perfmon), selecting the Performance Monitor item, and adding the Priority Current value for all the Wmplayer threads in the Thread object. Set the graph scale to 31 (the highest priority value on Windows) and you’ll easily spot the boosted thread, shown here running at priority 21:




Besides activity by other threads, media playback can also be affected by network activity. When a network packet arrives at a system, it triggers a CPU interrupt, which causes the device driver for the device at which the packet arrived to execute an Interrupt Service Routine (ISR). Other device interrupts are blocked while ISRs run, so ISRs typically do some device book-keeping and then perform the lengthier transfer of data to or from their device in a Deferred Procedure Call (DPC), which runs with device interrupts enabled. Although DPCs execute with interrupts enabled, they take precedence over all thread execution, regardless of priority, on the processor on which they run, and can therefore impede media playback threads.
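A toy model of the point above: DPC time on a processor is consumed before any thread runs, so it comes straight out of the budget available to even the highest-priority thread. The numbers below are illustrative, not Windows internals:

```python
# DPC time is charged before any thread runs on that processor,
# regardless of thread priority.

def thread_time_left_ms(interval_ms, dpc_ms):
    """CPU time remaining for threads of ANY priority after DPCs run."""
    return max(0, interval_ms - dpc_ms)

# In a 10ms interval, 4ms spent in network DPCs leaves only 6ms for
# threads, even a playback thread boosted into the realtime range.
print(thread_time_left_ms(10, 4))   # 6
```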


Network DPC receive processing is among the most expensive, because it includes handing packets to the TCP/IP driver, which can result in lengthy computation. The TCP/IP driver verifies each packet, determines the packet’s protocol, updates the connection state, finds the receiving application, and copies the received data into the application’s buffers. This Process Explorer screenshot shows how CPU usage for DPCs rose dramatically when I copied a large file from another system:




Tests of MMCSS during Vista development showed that, even with thread-priority boosting, heavy network traffic can cause enough long-running DPCs to prevent playback threads from keeping up with their media streaming requirements, resulting in glitching. MMCSS’s glitch-resistant mechanisms were therefore extended to include throttling of network activity. MMCSS does so by issuing a command to the NDIS driver (the driver that hands packets received by network adapter drivers to the TCP/IP driver) that causes NDIS to “indicate”, or pass along, at most 10 packets per millisecond (10,000 packets per second).



Because the standard Ethernet frame size is about 1500 bytes, a limit of 10,000 packets per second equals a maximum throughput of roughly 15MB/s. 100Mb networks can carry at most about 12MB/s, so if your system is on a 100Mb network, you typically won’t see any slowdown. However, if you have a 1Gb network infrastructure and both the sending system and your Vista receiving system have 1Gb network adapters, you’ll see network utilization drop to roughly 15%.
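The figures above can be checked with a short back-of-the-envelope script. It assumes full 1500-byte frames and ignores protocol headers and inter-frame gaps, as the rough numbers in the text do:

```python
# Back-of-the-envelope check of the throttling math.
FRAME_BYTES = 1500

def throttled_throughput_mb_s(packets_per_second):
    """Maximum throughput in MB/s at a given NDIS indication rate."""
    return packets_per_second * FRAME_BYTES / 1_000_000

def link_capacity_mb_s(link_mbits):
    """Rough link capacity in MB/s (bits to bytes, no overhead)."""
    return link_mbits / 8

print(throttled_throughput_mb_s(10_000))   # 15.0 MB/s cap at 10,000 pps
print(link_capacity_mb_s(100))             # 12.5 MB/s: the 100Mb link itself is the bottleneck
print(link_capacity_mb_s(1000))            # 125.0 MB/s: a gigabit link far exceeds the cap
```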


Further, there’s an unfortunate bug in the NDIS throttling code that magnifies throttling if you have multiple NICs. If you have a system with both wireless and wired adapters, for instance, NDIS will process at most 8,000 packets per second, and with three adapters it will process a maximum of 6,000 packets per second. 6,000 packets per second equals 9MB/s, a limit that’s visible even on 100Mb networks.
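The figures above (10,000 pps with one adapter, 8,000 with two, 6,000 with three) suggest that each additional adapter subtracts 2,000 packets per second from the limit. The linear model below is inferred from those data points, not documented NDIS behavior:

```python
# Inferred linear model of the multi-adapter throttling bug; the per-NIC
# penalty is an assumption fitted to the figures quoted in the post.
BASE_PPS = 10_000
PENALTY_PER_EXTRA_NIC = 2_000
FRAME_BYTES = 1500

def throttled_pps(num_adapters):
    """Apparent NDIS packet limit as a function of installed adapters."""
    return BASE_PPS - PENALTY_PER_EXTRA_NIC * (num_adapters - 1)

def throttled_mb_s(num_adapters):
    """Corresponding throughput ceiling, assuming full 1500-byte frames."""
    return throttled_pps(num_adapters) * FRAME_BYTES / 1_000_000

for nics in (1, 2, 3):
    print(nics, throttled_pps(nics), throttled_mb_s(nics))
# With three adapters: 6,000 pps -> 9.0 MB/s, visible even on 100Mb links
```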


I caused throttling to be visible on my laptop, which has three adapters, by copying a large file to it from another system and then starting WMP and playing a song. The Task Manager screenshot below shows how the copy achieves a network utilization of about 20% on my 1Gb network, but drops to around 6% after I start playing a song:




You can monitor the number of receive packets NDIS processes by adding the “packets received per second” counter in the Network object to the Performance Monitor view. Below, you can see the packet receive rate change as I ran the experiment. The number of packets NDIS processed didn’t reach the theoretical throttling maximum of 6,000, probably due to handshaking with the remote system.




Even with this level of throttling, Internet traffic, even on the best broadband connection, won’t be affected. That’s because the many intermediate hops between your system and another one on the Internet fragment packets and slow down packet travel, and therefore reduce the rate at which systems transfer data.


The throttling rate Vista uses was derived from experiments that reliably achieved glitch-resistant playback on single-CPU systems on 100Mb networks with high packet receive rates. The hard-coded limit was short-sighted with respect to today’s systems, which have faster CPUs, multiple cores, and Gigabit networks. In addition to fixing the bug that affects throttling on multi-adapter systems, the networking team is actively working with the MMCSS team on a fix that penalizes network traffic less dramatically while still delivering a glitch-resistant experience.


Stay tuned to my blog for more information.