%3CLINGO-SUB%20id%3D%22lingo-sub-1398310%22%20slang%3D%22en-US%22%3EONNX%20Runtime%20Training%20Technical%20Deep%20Dive%3C%2FLINGO-SUB%3E%3CLINGO-BODY%20id%3D%22lingo-body-1398310%22%20slang%3D%22en-US%22%3E%3CP%3E%3CEM%3EAuthor%3A%20Sherlock%20Huang%2C%20AI%20Frameworks%2C%20Microsoft%3C%2FEM%3E%3C%2FP%3E%0A%3CP%3E%3CEM%20style%3D%22font-family%3A%20inherit%3B%22%3EThis%20post%20is%20co-authored%20by%20Cheng%20Tang%2C%26nbsp%3BJesse%20Benson%2C%26nbsp%3B%3CSPAN%3EKaarthik%20Sivashanmugam%20and%20Alexey%20Svyatkovskiy%3C%2FSPAN%3E%3C%2FEM%3E%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%20class%3D%22lia-align-justify%22%3E%3CSPAN%3EToday%20we%20%3C%2FSPAN%3E%3CSPAN%3E%3CA%20href%3D%22https%3A%2F%2Faka.ms%2Fort-build2020%22%20target%3D%22_blank%22%20rel%3D%22noopener%20noopener%20noreferrer%20noopener%20noreferrer%22%3Eannounced%3C%2FA%3E%3C%2FSPAN%3E%20%3CSPAN%3Ethe%20preview%20for%20new%20training%3C%2FSPAN%3E%3CSPAN%3E%20feature%20in%20ONNX%20Runtime%3C%2FSPAN%3E%20(ORT).%20This%20blog%20explains%20how%20we%20have%20been%20using%20it%3CSPAN%3E%20to%20accel%3C%2FSPAN%3E%3CSPAN%3Eerate%20training%20for%20large%20transformer%20models.%20%3C%2FSPAN%3E%3CSPAN%3EONNX%20Runtime%20Training%20%3C%2FSPAN%3Eis%3CSPAN%3E%20inte%3C%2FSPAN%3E%3CSPAN%3Egrated%3C%2FSPAN%3E%20with%20PyTorch%20so%20that%20%3CSPAN%3Eexisting%20train%3C%2FSPAN%3Eing%3CSPAN%3E%20code%20%3C%2FSPAN%3Ecan%20be%20directly%20%3CSPAN%3Eaccelerate%3C%2FSPAN%3Ed%3CSPAN%3E%20for%20%3C%2FSPAN%3E%3CSPAN%3Etraining.%3C%2FSPAN%3E%3C%2FP%3E%0A%3CP%20class%3D%22lia-align-justify%22%3EIn%20this%20paper%2C%20we%20will%20describe%20some%20of%20the%20key%20aspects%20of%20ORT%20design%20and%20%3CSPAN%3Eimplementation%3C%2FSPAN%3E%20that%20enable%20us%3CSPAN%3E%20to%20achieve%3C%2FSPAN%3E%20%3CSPAN%3Ethe%20distributed%20training%20%3C%2FSPAN%3E%3CSPAN%3Eperformance%20improvements%3C%2FSPAN%3E.%20We%20will%20also%20use%20%3CA%20href%3D%22https%3A%2F%2Farxiv.org%2Fabs%2F1810.04805%22%20target%3D%22_blank%22%20rel%3D%22noopener%20nofollow%20noopener%20noreferrer%20noopener%20noreferrer%22%3EBERT-L%3C%2FA%3E%20pre-training%20as%20the%20benchmark%20to%20illustrate%20the%20performance%20of%20ORT%20training.%20Finally%2C%20we%20will%20present%20a%20case%20study%20of%20training%20%3CA%20href%3D%22https%3A%2F%2Fopenai.com%2Fblog%2Fbetter-language-models%2F%22%20target%3D%22_self%22%20rel%3D%22nofollow%20noopener%20noreferrer%20noopener%20noreferrer%22%3EGPT-2%3C%2FA%3E%26nbsp%3Bmodel%20for%20code%20autocompletion%20feature%20in%20Visual%20Studio%26nbsp%3B%3CSPAN%20style%3D%22font-style%3A%20normal%20!msorm%3B%22%3E%3CEM%3E%3CA%20href%3D%22https%3A%2F%2Fvisualstudio.microsoft.com%2Fservices%2Fintellicode%2F%22%20target%3D%22_blank%22%20rel%3D%22noopener%20noopener%20noreferrer%20noopener%20noreferrer%22%3E%3CI%3E%3CSPAN%20style%3D%22font-weight%3A%20normal%20!msorm%3B%22%3EIntelli%3C%2FSPAN%3E%3C%2FI%3E%3CI%3E%3CSPAN%20style%3D%22font-weight%3A%20normal%20!msorm%3B%22%3EC%3C%2FSPAN%3E%3C%2FI%3E%3CI%3E%3CSPAN%20style%3D%22font-weight%3A%20normal%20!msorm%3B%22%3Eode%3C%2FSPAN%3E%3C%2FI%3E%3C%2FA%3E.%26nbsp%3B%3C%2FEM%3E%3C%2FSPAN%3E%3C%2FP%3E%0A%3CH2%20class%3D%22lia-align-justify%22%20id%3D%22toc-hId--1350619403%22%20id%3D%22toc-hId--1350619403%22%3E%26nbsp%3B%3C%2FH2%3E%0A%3CH2%20class%3D%22lia-align-justify%22%20id%3D%22toc-hId-1136893430%22%20id%3D%22toc-hId-1136893430%22%3E%3CSPAN%3EDesign%20and%20Implementation%3C%2FSPAN%3E%3C%2FH2%3E%0A%3CP%20class%3D%22lia-align-justify%22%3E%3CSPAN%3EONNX%20Runtime%20Training%20is%20built%20on%20the%20same%20%3C%2FSPAN%3E%3CSPAN%3E%3CA%20href%3D%22https%3A%2F%2Fgithub.com%2Fmicrosoft%2Fonnxruntime%22%20target%3D%22_blank%22%20rel%3D%22noopener%20noopener%20noreferrer%20noopener%20noreferrer%22%3Eopen%20sourced%20code%3C%2FA%3E%3C%2FSPAN%3E%20%3CSPAN%3Eas%20%3C%2FSPAN%3E%3CSPAN%3Ethe%20popular%20inference%20engine%3C%2FSPAN%3E%3CSPAN%3E%20for%20ONNX%20models%3C%2FSPAN%3E%3CSPAN%3E.%3C%2FSPAN%3E%20%3CSPAN%3EFigure%201%20shows%20the%20%3C%2FSPAN%3E%3CSPAN%3Ehig%3C%2FSPAN%3E%3CSPAN%3Eh-%3C%2FSPAN%3E%3CSPAN%3Elevel%20%3C%2FSPAN%3E%3CSPAN%3Earchitecture%20%3C%2FSPAN%3E%3CSPAN%3Efor%20%3C%2FSPAN%3E%3CSPAN%3EONNX%20Runtime%E2%80%99s%20ecosystem.%3C%2FSPAN%3E%20%3CSPAN%3EORT%20%3C%2FSPAN%3E%3CSPAN%3Eis%20a%20common%20runtime%3C%2FSPAN%3E%20backend%3CSPAN%3E%20that%20supports%20multiple%20framework%20frontends%2C%20such%20as%20PyTorch%20and%20Tensorflow%3C%2FSPAN%3E%3CSPAN%3E%2FKeras%3C%2FSPAN%3E%3CSPAN%3E.%20%3C%2FSPAN%3E%3CSPAN%3EIt%20makes%20use%20of%20the%20Execution%20Provider%20interface%20to%20%3C%2FSPAN%3E%3CSPAN%3Eperform%20computation%20on%20different%20hardware%3C%2FSPAN%3E%3CSPAN%3E.%26nbsp%3B%3C%2FSPAN%3E%3CSPAN%3EThis%20enables%20us%20to%20build%20hardware%3C%2FSPAN%3E-%3CSPAN%3Eagnostic%3C%2FSPAN%3E%2C%3CSPAN%3E%20graph%3C%2FSPAN%3E-%3CSPAN%3Elevel%20optimizations%20%3C%2FSPAN%3E%3CSPAN%3Ethat%20are%20extensible%20across%20different%20platforms%3C%2FSPAN%3E%2C%20as%20well%20as%20hardware%20specific%20optimization%20targeting%20platforms%20like%20NVIDIA%20GPU%3CSPAN%3E.%26nbsp%3B%3C%2FSPAN%3EWe%20have%20also%20implemented%20additional%20optimizations%2C%20outlined%20below%2C%20to%20expedite%20training%20for%20large%20transformer%20models.%26nbsp%3B%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CTABLE%20style%3D%22margin-left%3A%20auto%3B%20margin-right%3A%20auto%3B%20border-style%3A%20hidden%3B%22%20border%3D%221%22%3E%0A%3CTBODY%3E%0A%3CTR%3E%0A%3CTD%20width%3D%22100%25%22%3E%3CSPAN%20class%3D%22lia-inline-image-display-wrapper%20lia-image-align-center%22%20image-alt%3D%22SherlockNoMad_0-1589781650044.png%22%20style%3D%22width%3A%20400px%3B%22%3E%3CIMG%20src%3D%22https%3A%2F%2Fgxcuf89792.i.lithium.com%2Ft5%2Fimage%2Fserverpage%2Fimage-id%2F192470iAE331AFD83BA4079%2Fimage-size%2Fmedium%3Fv%3D1.0%26amp%3Bpx%3D400%22%20title%3D%22SherlockNoMad_0-1589781650044.png%22%20alt%3D%22Figure%201.%20ONNX%20Runtime%20High%20Level%20Architecture%22%20%2F%3E%3CSPAN%20class%3D%22lia-inline-image-caption%22%20onclick%3D%22event.preventDefault()%3B%22%3EFigure%201.%20ONNX%20Runtime%20High%20Level%20Architecture%3C%2FSPAN%3E%3C%2FSPAN%3E%3C%2FTD%3E%0A%3C%2FTR%3E%0A%3C%2FTBODY%3E%0A%3C%2FTABLE%3E%0A%3CH3%20class%3D%22lia-align-justify%22%20id%3D%22toc-hId-1827454904%22%20id%3D%22toc-hId-1827454904%22%3EStatic%20Graph%20Optimizations%3C%2FH3%3E%0A%3CP%20class%3D%22lia-align-justify%22%3EMachine%20learning%20models%20are%20commonly%20abstracted%20as%20computation%20graphs.%20The%20computation%20graph%20used%20by%20deep%20learning%20frameworks%20could%20be%20either%20static%20or%20dynamic.%20In%20the%20current%20implementation%2C%20ORT%20has%20a%20view%20of%20the%20entire%20static%20computation%20graph.%20This%20makes%20it%20possible%20to%20enable%20many%20common%20graph%20optimization%20techniques%2C%20such%20as%20constant%20folding%2C%20redundant%20operation%20elimination%2C%20and%20operator%20fusion.%20They%20are%20first%20applied%20on%20the%20forward%20computation%20graph%20before%20auto%20differentiation%20engine%20builds%20the%20backward%20graph.%20As%20ORT%20has%20the%20global%20knowledge%20of%20data%20dependencies%2C%20it%20only%20builds%20the%20minimal%20gradient%20graph%20that%20is%20needed%20for%20targeted%20weights.%20%3CSPAN%3EConsequently%3C%2FSPAN%3E%2C%20activation%20tensors%20that%20are%20not%20needed%20for%20backward%20computation%20are%20automatically%20dropped%20after%20use.%20With%20a%20minimal%20training%20graph%2C%20it%20ensures%20that%20only%20essential%20computation%20is%20performed%20and%20memory%20consumption%20is%20minimized.%3C%2FP%3E%0A%3CP%20class%3D%22lia-align-justify%22%3E%26nbsp%3B%3C%2FP%3E%0A%3CH3%20class%3D%22lia-align-justify%22%20id%3D%22toc-hId-20000441%22%20id%3D%22toc-hId-20000441%22%3EMemory%20Usage%20Optimizations%3C%2FH3%3E%0A%3CP%20class%3D%22lia-align-justify%22%3EOver%20the%20last%20few%20years%2C%20the%20size%20of%20deep%20learning%20models%20has%20been%20growing%20rapidly.%20GPU%20memory%20consumption%20has%20become%20a%20limiting%20factor%20for%20large%20model%20training.%20ORT%20has%20made%20conscious%20efforts%20to%20preserve%20and%20reuse%20memory%20whenever%20possible.%20For%20example%2C%20ORT%20reuses%20the%20same%20buffer%20segments%20throughout%20a%20series%20of%20operations%2C%20including%20gradient%20accumulation%2C%20gradient%20scaling%20adjustment%2C%20allreduce%20communication%20and%20weight%20update%20computation%20(if%20the%20optimizer%20allows).%20ORT%20also%20tries%20to%20perform%20in-place%20operations%20if%20the%20source%20tensor%20is%20no%20longer%20consumed%20elsewhere%20in%20the%20computation%20graph.%20ORT%E2%80%99s%20kernel%20implementation%20also%20tries%20to%20minimize%20the%20use%20of%20scratch%20buffers%2C%20such%20as%20avoid%20using%20some%20memory%20intensive%20cuDNN%20functions%2C%20and%20reusing%20output%20buffer%20as%20scratch%20buffer%20if%20possible.%20As%20a%20result%2C%20ORT%20can%20train%20BERT%20with%202x%20the%20batch%20size%20as%20PyTorch.%20This%20enables%20us%20to%20utilize%20the%20GPU%20resources%20more%20efficiently%2C%20resulting%20in%20better%20performance%20on%20the%20same%20model%20and%20the%20ability%20to%20train%20larger%20models.%3C%2FP%3E%0A%3CP%20class%3D%22lia-align-justify%22%3E%26nbsp%3B%3C%2FP%3E%0A%3CH3%20class%3D%22lia-align-justify%22%20id%3D%22toc-hId--1787454022%22%20id%3D%22toc-hId--1787454022%22%3EZeRO%20Stage%201%20Integration%3C%2FH3%3E%0A%3CP%20class%3D%22lia-align-justify%22%3E%3CSPAN%3E%3CA%20href%3D%22https%3A%2F%2Fwww.microsoft.com%2Fen-us%2Fresearch%2Fblog%2Fzero-deepspeed-new-system-optimizations-enable-training-models-with-over-100-billion-parameters%2F%22%20target%3D%22_blank%22%20rel%3D%22noopener%20noopener%20noreferrer%20noopener%20noreferrer%22%3EZero%20Redundancy%20Optimizer%20(ZeRO)%3C%2FA%3E%3C%2FSPAN%3E%20%3CSPAN%3Eis%20a%20memory%20optimization%20technique%3C%2FSPAN%3E%20from%20Microsoft%20Research.%20%3CSPAN%3EZeRO%20is%20used%20to%3C%2FSPAN%3E%20save%20%3CSPAN%3EGPU%20%3C%2FSPAN%3Ememory%20consumption%20by%20eliminating%20duplicated%20states%20across%20workers%20during%20distributed%20training.%20ZeRO%20has%20three%20main%20optimization%20stages.%20%26nbsp%3BCurrently%2C%20%3CSPAN%3EONNX%20Runtime%3C%2FSPAN%3E%20implemented%20Stage%201%20of%20ZeRO.%20ZeRO%20Stage%201%2C%20known%20as%20the%20optimizer%20state%20partitioning%2C%20allows%20ORT%20to%20shard%20the%20optimizer%20states%2C%20including%201%3CSUP%3Est%3C%2FSUP%3E%20and%202%3CSUP%3End%3C%2FSUP%3E%20order%20moments%20(and%20fp32%20copy%20of%20weights%20in%20mixed%20precision%20mode)%2C%20across%20multiple%20workers%20with%20no%20extra%20communication%20overhead.%20With%20ZeRO%2C%20ORT%20can%20further%20boost%20batch%20size%20or%20train%20a%20larger%20model.%20In%20BERT-L%20pre-training%2C%20ZeRO%20allows%20batch%20size%20to%20further%20grow%20from%20148%20to%20168%20for%20phase%201%20and%20from%2023%20to%2027%20for%20phase%202%20in%20a%2032GB%20V100.%20Distributed%20checkpointing%20is%20also%20introduced%2C%20as%20model%20persistent%20state%20is%20distributed%20across%20multiple%20workers.%20ZeRO%20can%20be%20enabled%20with%20a%20config%20flag.%3C%2FP%3E%0A%3CP%20class%3D%22lia-align-justify%22%3E%26nbsp%3B%3C%2FP%3E%0A%3CH3%20class%3D%22lia-align-justify%22%20id%3D%22toc-hId-700058811%22%20id%3D%22toc-hId-700058811%22%3ENative%20Mixed%20Precision%20Training%20Support%26nbsp%3B%20%26nbsp%3B%20%26nbsp%3B%3C%2FH3%3E%0A%3CP%20class%3D%22lia-align-justify%22%3E%3CSPAN%3EUnlike%20PyTorch%E2%80%99s%20dependency%20on%3C%2FSPAN%3E%3CSPAN%3E%20-ERR%3AREF-NOT-FOUND-NVIDIA%20Apex%3C%2FSPAN%3E%3CSPAN%3E%20extension%3C%2FSPAN%3E%3CSPAN%3E%2C%20%3C%2FSPAN%3E%3CSPAN%3EORT%20has%20implemented%20its%20own%20support%20for%20mixed%20precision%20%3C%2FSPAN%3E%3CSPAN%3Etraining%3C%2FSPAN%3E%3CSPAN%3E.%3C%2FSPAN%3E%20Mixed%20precision%20training%20can%20be%20enabled%20with%20a%20config%20flag%20%E2%80%93%20no%20other%20code%20change%20needed.%20Under%20the%20hood%2C%20ORT%20converts%20the%20static%20computation%20graph%20into%20mixed%20precision%20mode%20through%20a%20series%20of%20graph%20transformations%2C%20i.e.%20running%20most%20of%20the%20computations%20in%20fp16%20while%20keeping%20some%20numerically%20sensitive%20computation%20in%20fp32.%20ORT%20supports%20dynamic%20loss%20scaling%20by%20automatically%20inserting%20the%20computation%20nodes%20for%20loss%20scaling%20into%20the%20graph.%3C%2FP%3E%0A%3CP%20class%3D%22lia-align-justify%22%3E%26nbsp%3B%3C%2FP%3E%0A%3CH3%20class%3D%22lia-align-justify%22%20id%3D%22toc-hId--1107395652%22%20id%3D%22toc-hId--1107395652%22%3EHighly%20Scaleable%20Distributed%20Training%3C%2FH3%3E%0A%3CP%20class%3D%22lia-align-justify%22%3EORT%20seeks%20to%20build%20a%20unified%20highly%20scaleable%20distributed%20training%20framework%20for%20hybrid%20parallelism%2C%20including%20a%20mixed%20of%20data%20and%20model%20parallelisms.%20ORT%20supports%20data%20parallelism%2C%20which%20is%20the%20most%20popular%20distributed%20training%20mode%20adopted%20by%20many%20internal%20teams.%3CSPAN%3E%20We%20are%20enhancing%3C%2FSPAN%3E%20ORT%20to%20%3CSPAN%3Efully%20%3C%2FSPAN%3Esupport%20training%20extremely%20large%20models%20(%26gt%3B100%20billion%20parameters).%20It%20has%20an%20experimental%20implementation%20of%20-ERR%3AREF-NOT-FOUND-Megatron-style%20horizontal%20parallelism%20and%20%3CSPAN%3Ewe%20are%20%3C%2FSPAN%3Eactively%20developing%20to%20support%20pipeline%20parallelism%2C%20such%20as%20-ERR%3AREF-NOT-FOUND-PipeDream.%3C%2FP%3E%0A%3CP%20class%3D%22lia-align-justify%22%3E%26nbsp%3B%3C%2FP%3E%0A%3CH3%20class%3D%22lia-align-justify%22%20id%3D%22toc-hId-1380117181%22%20id%3D%22toc-hId-1380117181%22%3ECUDA%20Kernel%20Optimizations%3C%2FH3%3E%0A%3CP%20class%3D%22lia-align-justify%22%3EORT%20has%20introduced%20highly%20optimized%20CUDA%20kernels%20for%20some%20key%20operations%20including%20Reductions%2C%20Dropout%20and%20Softmax.%20In%20addition%2C%20we%20have%20also%20introduced%20a%20few%20key%20operator%20fusions%20with%20fused%20kernels%20for%20LayerNormalization%2C%20Gelu%20and%20their%20gradients%2C%20as%20well%20as%20Lamb%20Optimizer.%3C%2FP%3E%0A%3CP%20class%3D%22lia-align-justify%22%3E%26nbsp%3B%3C%2FP%3E%0A%3CH3%20class%3D%22lia-align-justify%22%20id%3D%22toc-hId--427337282%22%20id%3D%22toc-hId--427337282%22%3E%3CSPAN%3EUs%3C%2FSPAN%3E%3CSPAN%3Eing%20ORT%3C%2FSPAN%3E%20%3CSPAN%3Ewith%20%3C%2FSPAN%3EPy%3CSPAN%3ET%3C%2FSPAN%3Eorch%20T%3CSPAN%3Eraining%20%3C%2FSPAN%3EC%3CSPAN%3Eode%3C%2FSPAN%3E%3C%2FH3%3E%0A%3CP%20class%3D%22lia-align-justify%22%3EONNX%20Runtime%20has%20the%20capability%20to%20train%20existing%20PyTorch%20models%20through%20its%20optimized%20backend.%20For%20this%2C%20we%20have%20introduced%20a%20python%20API%20%3CSPAN%3Efor%20PyTorch%2C%20%3C%2FSPAN%3Ecalled%20ORTTrainer%2C%20which%20can%20be%20used%20to%20switch%20the%20training%20backend%20for%20PyTorch%20models%20(instance%20of%20torch.nn.Module)%20to%20ORT.%20This%20requires%20some%20changes%20from%20the%20user%2C%20such%20as%20replacing%20the%20PyTorch%20optimizer%2C%20and%20optionally%2C%20setting%20flags%20to%20enable%20additional%20features%20such%20as%20mixed-precision%20training.%26nbsp%3BUnder%20the%20hood%2C%20as%20shown%20in%20Figure%202%2C%20ORTTrainer%20first%20converts%20the%20PyTorch%20model%20to%20ONNX%20format%20through%20the%20PyTorch-ONNX%20exporter.%20Next%2C%20ORT%20backend%20takes%20over%20and%20applies%20graph%20optimizations%2C%20builds%20a%20training%20graph%2C%20performs%20transformations%20on%20it%20as%20needed%20(e.g.%20mixed-precision%20transformation)%2C%20and%20sets%20up%20the%20graph%20elements%20needed%20for%20distributed%20training.%20In%20this%20design%2C%20while%20all%20the%20computation-intensive%20workload%20is%20offloaded%20onto%20the%20ORT%20backend%2C%20users%20can%20still%20enjoy%20the%20rich%20PyTorch%20frontend%20utilities%2C%20such%20as%20data%20loading%2C%20checkpointing%20%2C%20and%20easy%20specification%20of%20loss%20functions.%26nbsp%3B%3C%2FP%3E%0A%3CP%20class%3D%22lia-align-justify%22%3E%26nbsp%3B%3C%2FP%3E%0A%3CTABLE%20style%3D%22margin-left%3A%20auto%3B%20margin-right%3A%20auto%3B%20border-style%3A%20hidden%3B%22%20border%3D%221%22%3E%0A%3CTBODY%3E%0A%3CTR%3E%0A%3CTD%20width%3D%22100%25%22%3E%3CSPAN%20class%3D%22lia-inline-image-display-wrapper%20lia-image-align-center%22%20image-alt%3D%22SherlockNoMad_1-1589781650047.png%22%20style%3D%22width%3A%20977px%3B%22%3E%3CIMG%20src%3D%22https%3A%2F%2Fgxcuf89792.i.lithium.com%2Ft5%2Fimage%2Fserverpage%2Fimage-id%2F192468i1F4E131FC0494015%2Fimage-size%2Flarge%3Fv%3D1.0%26amp%3Bpx%3D999%22%20title%3D%22SherlockNoMad_1-1589781650047.png%22%20alt%3D%22Figure%202.%20Workflow%20for%20converting%20an%20PyTorch%20model%20into%20an%20ORT%20training%20graph%22%20%2F%3E%3CSPAN%20class%3D%22lia-inline-image-caption%22%20onclick%3D%22event.preventDefault()%3B%22%3EFigure%202.%20Workflow%20for%20converting%20an%20PyTorch%20model%20into%20an%20ORT%20training%20graph%3C%2FSPAN%3E%3C%2FSPAN%3E%3C%2FTD%3E%0A%3C%2FTR%3E%0A%3C%2FTBODY%3E%0A%3C%2FTABLE%3E%0A%3CP%20class%3D%22lia-align-justify%22%3EIt%20is%20important%20to%20note%20that%20the%20current%20API%20is%20experimental%20and%20expected%20to%20see%20significant%20changes%20in%20the%20near%20future.%20A%20new%20version%20of%20the%20API%20is%20under%20active%20development.%20Our%20goal%20is%20to%20improve%20the%20interface%20to%20provide%20more%20seamless%20integration%20with%20PyTorch%20training%20that%20requires%20minimal%20changes%20in%20users%E2%80%99%20training%20code%2C%20introduce%20new%20features%2C%20and%20present%20a%20more%20flexible%20API%20to%20cover%20advanced%20scenarios.%20Please%20refer%20to%20the%20-ERR%3AREF-NOT-FOUND-training%20examples%20for%20more%20details.%3C%2FP%3E%0A%3CP%20class%3D%22lia-align-justify%22%3E%26nbsp%3B%3C%2FP%3E%0A%3CH2%20class%3D%22lia-align-justify%22%20id%3D%22toc-hId--437840386%22%20id%3D%22toc-hId--437840386%22%3E%3CSPAN%3EBenchmarking%26nbsp%3B%3C%2FSPAN%3ETraining%20Acceleration%20with%20ONNX%20Runtime%3C%2FH2%3E%0A%3CP%20class%3D%22lia-align-justify%22%3EWe%20now%20present%20the%20performance%20evaluation%20of%20BERT-L%20pre-training%20with%20ONNX%20Runtime%20in%20a%204-node%20DGX-2%20cluster.%20In%20AzureML%2C%20we%20also%20reproduced%20the%20pre-training%20convergence%20for%20BERT-Large%20using%20sample%20from%20%3CA%20href%3D%22https%3A%2F%2Fgithub.com%2FNVIDIA%2FDeepLearningExamples%22%20target%3D%22_blank%22%20rel%3D%22noopener%20noopener%20noreferrer%20noopener%20noreferrer%22%3ENVIDIA%E2%80%99s%20DeepLearningExamplesle%E2%80%99s%20repo%3C%2FA%3E.%20We%20also%20validated%20fine%20tuning%20accuracy%20with%20SQuAD%20benchmarks.%3C%2FP%3E%0A%3CP%20class%3D%22lia-align-justify%22%3E%26nbsp%3B%3C%2FP%3E%0A%3CH3%20class%3D%22lia-align-justify%22%20id%3D%22toc-hId--442442391%22%20id%3D%22toc-hId--442442391%22%3EBenchmarking%20on%20DGX-2%3C%2FH3%3E%0A%3CP%20class%3D%22lia-align-justify%22%3EWe%20compared%20PyTorch%20and%20ORT%E2%80%99s%20BERT-L%20training%20performance%20on%204%20NVIDIA%20DGX-2%20machines%20(each%20with%2016x%2032GB%20V100)%20interconnected%20with%20InfiniBand.%20PyTorch%E2%80%99s%20result%20was%20obtained%20with%20NGC%2020.03-py3%20docker%20image%20following%20%3CA%20href%3D%22https%3A%2F%2Fgithub.com%2FNVIDIA%2FDeepLearningExamples%2Ftree%2Fmaster%2FPyTorch%2FLanguageModeling%2FBERT%23pre-training%22%20target%3D%22_self%22%20rel%3D%22noopener%20noreferrer%20noopener%20noreferrer%22%3ENvidia%E2%80%99s%20recipe%3C%2FA%3E.%20ORT%E2%80%99s%20result%20was%20obtained%20following%20the%20same%20recipe%2C%20except%20that%20ORT%20used%20bigger%20local%20batch%20sizes.%20As%20described%20above%2C%20ORT%20is%20able%20to%20run%20at%20a%202x%20batch%20size%20of%20PyTorch%E2%80%99s.%20ORT%20ran%20at%20a%20local%20batch%20size%20of%20128%20and%2016%20for%20phase%201%20and%202%20respectively%2C%20whereas%20PyTorch%20ran%20at%20batch%20size%20of%2064%20and%208.%20The%20effective%20global%20batch%20size%20remained%20unchanged%20in%20both%20cases.%20Overall%2C%20ORT%20achieved%20throughput%20improvement%20of%2011.32%25%20and%2014.61%25%20for%20phase%201%20and%202.%20The%20total%20time%20to%20train%20was%20reduces%20by%2011.16%25%2C%20from%2017.74%20hours%20to%2015.76%20hours.%3C%2FP%3E%0A%3CTABLE%20class%3D%22lia-align-justify%20lia-align-left%22%20style%3D%22width%3A%20800px%3B%20margin-left%3A%20auto%3B%20margin-right%3A%20auto%3B%22%3E%3CCAPTION%3E%3CSPAN%20style%3D%22font-weight%3A%20normal%20!msorm%3B%22%3ETab%3C%2FSPAN%3E%3CSPAN%20style%3D%22font-weight%3A%20normal%20!msorm%3B%22%3Ele%20%3C%2FSPAN%3E1.%20%3CSPAN%20style%3D%22font-weight%3A%20normal%20!msorm%3B%22%3ETime%20to%20train%20on%204%20%3C%2FSPAN%3E%3CSPAN%20style%3D%22font-weight%3A%20normal%20!msorm%3B%22%3ENVIDIA%20DGX-2%3C%2FSPAN%3E%20machines%3C%2FCAPTION%3E%0A%3CTBODY%3E%0A%3CTR%3E%0A%3CTD%20width%3D%22227.5px%22%20height%3D%2257px%22%3E%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3C%2FTD%3E%0A%3CTD%20width%3D%22136px%22%20height%3D%2257px%22%20class%3D%22lia-align-left%22%20style%3D%22width%3A%20136px%3B%20height%3A%2057px%3B%22%3E%3CP%3E%3CSTRONG%3EPyTorch%201.5%20with%20%3C%2FSTRONG%3E%3C%2FP%3E%0A%3CP%3E%3CSTRONG%3ENGC%2020.03-py3%3C%2FSTRONG%3E%3C%2FP%3E%0A%3C%2FTD%3E%0A%3CTD%20width%3D%22136px%22%20height%3D%2257px%22%3E%3CP%3E%3CSTRONG%3EPyTorch%201.5%20with%20%3C%2FSTRONG%3E%3C%2FP%3E%0A%3CP%3E%3CSTRONG%3EONNX%20Runtime%3C%2FSTRONG%3E%3C%2FP%3E%0A%3C%2FTD%3E%0A%3CTD%20width%3D%22127.5px%22%20height%3D%2257px%22%20class%3D%22lia-align-left%22%20style%3D%22width%3A%20127.5px%3B%20height%3A%2057px%3B%22%3E%3CP%3E%3CSTRONG%3E%25%20Gain%20with%20%3C%2FSTRONG%3E%3C%2FP%3E%0A%3CP%3E%3CSTRONG%3EONNX%20Runtime%3C%2FSTRONG%3E%3C%2FP%3E%0A%3C%2FTD%3E%0A%3C%2FTR%3E%0A%3CTR%3E%0A%3CTD%20width%3D%22227.5px%22%20height%3D%2230px%22%3E%3CP%3EPhase%201%20Throughput%20(ex%2Fsec)%3C%2FP%3E%0A%3C%2FTD%3E%0A%3CTD%20width%3D%22136px%22%20height%3D%2230px%22%3E%3CP%3E11522.1%3C%2FP%3E%0A%3C%2FTD%3E%0A%3CTD%20width%3D%22136px%22%20height%3D%2230px%22%3E%3CP%3E12826.2%3C%2FP%3E%0A%3C%2FTD%3E%0A%3CTD%20width%3D%22127.5px%22%20height%3D%2230px%22%3E%3CP%3E11.32%25%3C%2FP%3E%0A%3C%2FTD%3E%0A%3C%2FTR%3E%0A%3CTR%3E%0A%3CTD%20width%3D%22227.5px%22%20height%3D%2230px%22%3E%3CP%3EPhase%202%20Throughput%20(ex%2Fsec)%3C%2FP%3E%0A%3C%2FTD%3E%0A%3CTD%20width%3D%22136px%22%20height%3D%2230px%22%3E%3CP%3E2150.0%3C%2FP%3E%0A%3C%2FTD%3E%0A%3CTD%20width%3D%22136px%22%20height%3D%2230px%22%3E%3CP%3E2464.1%3C%2FP%3E%0A%3C%2FTD%3E%0A%3CTD%20width%3D%22127.5px%22%20height%3D%2230px%22%3E%3CP%3E14.61%25%3C%2FP%3E%0A%3C%2FTD%3E%0A%3C%2FTR%3E%0A%3CTR%3E%0A%3CTD%20width%3D%22227.5px%22%20height%3D%2230px%22%3E%3CP%3EPhase%201%20time%20(hours)%3C%2FP%3E%0A%3C%2FTD%3E%0A%3CTD%20width%3D%22136px%22%20height%3D%2230px%22%3E%3CP%3E11.12%3C%2FP%3E%0A%3C%2FTD%3E%0A%3CTD%20width%3D%22136px%22%20height%3D%2230px%22%3E%3CP%3E9.99%3C%2FP%3E%0A%3C%2FTD%3E%0A%3CTD%20width%3D%22127.5px%22%20height%3D%2230px%22%3E%3CP%3E10.16%25%3C%2FP%3E%0A%3C%2FTD%3E%0A%3C%2FTR%3E%0A%3CTR%3E%0A%3CTD%20width%3D%22227.5px%22%20height%3D%2230px%22%3E%3CP%3EPhase%202%20time%20(hours)%3C%2FP%3E%0A%3C%2FTD%3E%0A%3CTD%20width%3D%22136px%22%20height%3D%2230px%22%3E%3CP%3E6.62%3C%2FP%3E%0A%3C%2FTD%3E%0A%3CTD%20width%3D%22136px%22%20height%3D%2230px%22%3E%3CP%3E5.77%3C%2FP%3E%0A%3C%2FTD%3E%0A%3CTD%20width%3D%22127.5px%22%20height%3D%2230px%22%3E%3CP%3E12.84%25%3C%2FP%3E%0A%3C%2FTD%3E%0A%3C%2FTR%3E%0A%3CTR%3E%0A%3CTD%20width%3D%22227.5px%22%20height%3D%2230px%22%3E%3CP%3ETotal%20time%20(hours)%3C%2FP%3E%0A%3C%2FTD%3E%0A%3CTD%20width%3D%22136px%22%20height%3D%2230px%22%3E%3CP%3E17.74%3C%2FP%3E%0A%3C%2FTD%3E%0A%3CTD%20width%3D%22136px%22%20height%3D%2230px%22%3E%3CP%3E15.76%3C%2FP%3E%0A%3C%2FTD%3E%0A%3CTD%20width%3D%22127.5px%22%20height%3D%2230px%22%3E%3CP%3E11.16%25%3C%2FP%3E%0A%3C%2FTD%3E%0A%3C%2FTR%3E%0A%3C%2FTBODY%3E%0A%3C%2FTABLE%3E%0A%3CH3%20class%3D%22lia-align-justify%22%20id%3D%22toc-hId-2045070442%22%20id%3D%22toc-hId-2045070442%22%3E%26nbsp%3B%3C%2FH3%3E%0A%3CH3%20class%3D%22lia-align-justify%22%20id%3D%22toc-hId-237615979%22%20id%3D%22toc-hId-237615979%22%3EBERT-L%20Pre-training%20on%20AzureML%3C%2FH3%3E%0A%3CP%20class%3D%22lia-align-justify%22%3EWe%20performed%20BERT-L%20pre-training%20on%208x%20ND40rs_v2%20cluster%20(each%20with%208x%2032GB%20V100)%20interconnected%20with%20InfiniBand%20in%20AzureML.%20We%20used%20the%20same%20%3CA%20href%3D%22https%3A%2F%2Fgithub.com%2FNVIDIA%2FDeepLearningExamples%2Ftree%2Fmaster%2FPyTorch%2FLanguageModeling%2FBERT%23pre-training%22%20target%3D%22_self%22%20rel%3D%22noopener%20noreferrer%20noopener%20noreferrer%22%3ENvidia%E2%80%99s%20recipe%3C%2FA%3E%2C%20expect%20that%20we%20doubled%20the%20local%20batch%20size%20in%20the%20same%20way%20we%20mentioned%20above.%20Mixed%20precision%20mode%20and%20LAMB%20optimizer%20was%20used%20throughout%20the%20training.%20As%20the%20end%20of%20phase%202%2C%20we%20achieved%20the%20training%20loss%20of%201.31.%20The%20end-to-end%20training%20time%20was%2018.32%20hours.%3C%2FP%3E%0A%3CTABLE%20class%3D%22lia-align-justify%20lia-align-left%22%20style%3D%22height%3A%20207px%3B%20width%3A%20422px%3B%20margin-left%3A%20auto%3B%20margin-right%3A%20auto%3B%22%20width%3D%22422%22%3E%3CCAPTION%3E%3CSPAN%20style%3D%22font-weight%3A%20normal%20!msorm%3B%22%3ETable%20%3C%2FSPAN%3E2.%20%3CSPAN%20style%3D%22font-weight%3A%20normal%20!msorm%3B%22%3ETime%20to%20train%20on%20Azure%20ML%20with%20%3C%2FSPAN%3E8x%20%3CSPAN%20style%3D%22font-weight%3A%20normal%20!msorm%3B%22%3END40rs_v2%3C%2FSPAN%3E%3C%2FCAPTION%3E%0A%3CTBODY%3E%0A%3CTR%3E%0A%3CTD%20width%3D%22240.5px%22%20height%3D%2257px%22%20class%3D%22lia-align-center%22%3E%3CP%3E%3CSTRONG%3E%26nbsp%3B%3C%2FSTRONG%3E%3C%2FP%3E%0A%3C%2FTD%3E%0A%3CTD%20width%3D%22180.5px%22%20height%3D%2257px%22%20class%3D%22lia-align-left%22%20style%3D%22width%3A%20180.5px%3B%20height%3A%2057px%3B%22%3E%3CP%3E%3CSTRONG%3EPyTorch%201.5%20with%20ONNX%20Runtime%3C%2FSTRONG%3E%3C%2FP%3E%0A%3C%2FTD%3E%0A%3C%2FTR%3E%0A%3CTR%3E%0A%3CTD%20width%3D%22240.5px%22%20height%3D%2230px%22%20class%3D%22lia-align-left%22%3E%3CP%3EPhase%201%20Throughput%20(ex%2Fsec)%3C%2FP%3E%0A%3C%2FTD%3E%0A%3CTD%20width%3D%22180.5px%22%20height%3D%2230px%22%3E%3CP%3E10751.4%3C%2FP%3E%0A%3C%2FTD%3E%0A%3C%2FTR%3E%0A%3CTR%3E%0A%3CTD%20width%3D%22240.5px%22%20height%3D%2230px%22%20class%3D%22lia-align-left%22%3E%3CP%3EPhase%202%20Throughput%20(ex%2Fsec)%3C%2FP%3E%0A%3C%2FTD%3E%0A%3CTD%20width%3D%22180.5px%22%20height%3D%2230px%22%3E%3CP%3E2223.7%3C%2FP%3E%0A%3C%2FTD%3E%0A%3C%2FTR%3E%0A%3CTR%3E%0A%3CTD%20width%3D%22240.5px%22%20height%3D%2230px%22%20class%3D%22lia-align-left%22%3E%3CP%3EPhase%201%20Time%20(hours)%3C%2FP%3E%0A%3C%2FTD%3E%0A%3CTD%20width%3D%22180.5px%22%20height%3D%2230px%22%3E%3CP%3E11.92%3C%2FP%3E%0A%3C%2FTD%3E%0A%3C%2FTR%3E%0A%3CTR%3E%0A%3CTD%20width%3D%22240.5px%22%20height%3D%2230px%22%20class%3D%22lia-align-left%22%3E%3CP%3EPhase%202%20Time%20(hours)%3C%2FP%3E%0A%3C%2FTD%3E%0A%3CTD%20width%3D%22180.5px%22%20height%3D%2230px%22%3E%3CP%3E6.40%3C%2FP%3E%0A%3C%2FTD%3E%0A%3C%2FTR%3E%0A%3CTR%3E%0A%3CTD%20width%3D%22240.5px%22%20height%3D%2230px%22%20class%3D%22lia-align-left%22%3E%3CP%3ETotal%20Time%20(hours)%3C%2FP%3E%0A%3C%2FTD%3E%0A%3CTD%20width%3D%22180.5px%22%20height%3D%2230px%22%3E%3CP%3E18.32%3C%2FP%3E%0A%3C%2FTD%3E%0A%3C%2FTR%3E%0A%3C%2FTBODY%3E%0A%3C%2FTABLE%3E%0A%3CP%20class%3D%22lia-align-justify%22%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%20class%3D%22lia-align-justify%22%3E%3CSPAN%3EFigure%20%3C%2FSPAN%3E3%3CSPAN%3E%20shows%20a%3C%2FSPAN%3E%20loss%20curve%20produced%20in%20a%20typical%20pre-training%20run.%20Phase%201%20ends%20with%20a%20loss%20value%20around%201.4%20after%207038%20steps.%20Phase%202%20continues%20with%20a%20jump%20of%20loss%20due%20to%20switch%20of%20sequence%20length%2C%20and%20%3CSPAN%3Eit%20%3C%2FSPAN%3Efinally%20decrease%20to%20%3CSPAN%3Ea%20%3C%2FSPAN%3Eloss%20value%20around%201.3.%3C%2FP%3E%0A%3CP%20class%3D%22lia-align-justify%22%3E%C2%AD%3C%2FP%3E%0A%3CTABLE%20style%3D%22margin-left%3A%20auto%3B%20margin-right%3A%20auto%3B%20border-style%3A%20hidden%3B%22%20border%3D%221%22%3E%0A%3CTBODY%3E%0A%3CTR%3E%0A%3CTD%20width%3D%22100%25%22%3E%3CSPAN%20class%3D%22lia-inline-image-display-wrapper%20lia-image-align-center%22%20image-alt%3D%22SherlockNoMad_0-1589783446513.png%22%20style%3D%22width%3A%20999px%3B%22%3E%3CIMG%20src%3D%22https%3A%2F%2Fgxcuf89792.i.lithium.com%2Ft5%2Fimage%2Fserverpage%2Fimage-id%2F192473i5C089F3ED60E1FDC%2Fimage-size%2Flarge%3Fv%3D1.0%26amp%3Bpx%3D999%22%20title%3D%22SherlockNoMad_0-1589783446513.png%22%20alt%3D%22Figure%203.%20ORT%20BERT-L%20pre-training%20loss%20curves%22%20%2F%3E%3CSPAN%20class%3D%22lia-inline-image-caption%22%20onclick%3D%22event.preventDefault()%3B%22%3EFigure%203.%20ORT%20BERT-L%20pre-training%20loss%20curves%3C%2FSPAN%3E%3C%2FSPAN%3E%3C%2FTD%3E%0A%3C%2FTR%3E%0A%3C%2FTBODY%3E%0A%3C%2FTABLE%3E%0A%3CP%20class%3D%22lia-align-justify%22%3EThe%20pretrained%20model%20is%20then%20further%20finetuned%20on%20SQuAD%20dataset.%20Both%20full%20precision%20or%20mixed%20precision%20finetuning%20result%20in%20satisfactory%20Exact%20Match%20and%20F1%20scores.%3C%2FP%3E%0A%3CTABLE%20class%3D%22%20lia-align-justify%22%20style%3D%22width%3A%20401px%3B%20margin-left%3A%20auto%3B%20margin-right%3A%20auto%3B%22%20width%3D%22401%22%3E%3CCAPTION%3ETable%203.%20BERT-L%20fine-tuning%20result%20on%20SQuAD%20Dataset%3C%2FCAPTION%3E%0A%3CTBODY%3E%0A%3CTR%3E%0A%3CTD%20colspan%3D%222%22%20width%3D%22125px%22%20class%3D%22lia-align-left%22%3E%3CP%3E%3CSTRONG%3EAccuracy%20Metrics%3C%2FSTRONG%3E%3C%2FP%3E%0A%3C%2FTD%3E%0A%3CTD%20width%3D%22119px%22%20class%3D%22lia-align-left%22%3E%3CP%3E%3CSTRONG%3EFinetuning%20-%20FP32%3C%2FSTRONG%3E%3C%2FP%3E%0A%3C%2FTD%3E%0A%3CTD%20width%3D%22156px%22%20class%3D%22lia-align-left%22%3E%3CP%3E%3CSTRONG%3EFinetuning%20-%3C%2FSTRONG%3E%3C%2FP%3E%0A%3CP%3E%3CSTRONG%3Emixed%20precision%3C%2FSTRONG%3E%3C%2FP%3E%0A%3C%2FTD%3E%0A%3C%2FTR%3E%0A%3CTR%3E%0A%3CTD%20colspan%3D%222%22%20width%3D%22125px%22%3E%3CP%3EExact%20Match%20%25%3C%2FP%3E%0A%3C%2FTD%3E%0A%3CTD%20width%3D%22119px%22%3E%3CP%3E84.63%3C%2FP%3E%0A%3C%2FTD%3E%0A%3CTD%20width%3D%22156px%22%3E%3CP%3E84.81%3C%2FP%3E%0A%3C%2FTD%3E%0A%3C%2FTR%3E%0A%3CTR%3E%0A%3CTD%20width%3D%22124px%22%3E%3CP%3EF1%20score%20%25%3C%2FP%3E%0A%3C%2FTD%3E%0A%3CTD%20colspan%3D%222%22%20width%3D%22120px%22%3E%3CP%3E91.15%3C%2FP%3E%0A%3C%2FTD%3E%0A%3CTD%20width%3D%22156px%22%3E%3CP%3E91.32%3C%2FP%3E%0A%3C%2FTD%3E%0A%3C%2FTR%3E%0A%3C%2FTBODY%3E%0A%3C%2FTABLE%3E%0A%3CP%20class%3D%22lia-align-justify%22%3E%26nbsp%3B%3C%2FP%3E%0A%3CH2%20class%3D%22lia-align-justify%22%20id%3D%22toc-hId--1698921203%22%20id%3D%22toc-hId--1698921203%22%3E%3CFONT%20size%3D%224%22%3EA%20Case%20Study%20with%20Visual%20Studio%20using%20GPT-2%20Medium%3C%2FFONT%3E%3C%2FH2%3E%0A%3CP%20class%3D%22lia-align-justify%22%3EMicrosoft%20Visual%20Studio%20uses%20ONNX%20Runtime%20to%20accelerate%20pre-training%20a%2024-layer%20%3CA%20href%3D%22https%3A%2F%2Fopenai.com%2Fblog%2Fbetter-language-models%2F%22%20target%3D%22_blank%22%20rel%3D%22noopener%20nofollow%20noopener%20noreferrer%20noopener%20noreferrer%22%3EGPT-2%3C%2FA%3E%20Medium%20model%20to%20power%20code%20autocompletion%20in%20the%20%3CSPAN%20style%3D%22font-style%3A%20normal%20!msorm%3B%22%3E%3CEM%3E%3CA%20href%3D%22https%3A%2F%2Fvisualstudio.microsoft.com%2Fservices%2Fintellicode%2F%22%20target%3D%22_blank%22%20rel%3D%22noopener%20noopener%20noreferrer%20noopener%20noreferrer%22%3E%3CI%3E%3CSPAN%20style%3D%22font-weight%3A%20normal%20!msorm%3B%22%3EIntelli%3C%2FSPAN%3E%3C%2FI%3E%3CI%3E%3CSPAN%20style%3D%22font-weight%3A%20normal%20!msorm%3B%22%3EC%3C%2FSPAN%3E%3C%2FI%3E%3CI%3E%3CSPAN%20style%3D%22font-weight%3A%20normal%20!msorm%3B%22%3Eode%3C%2FSPAN%3E%3C%2FI%3E%3C%2FA%3E%3C%2FEM%3E%3C%2FSPAN%3E%26nbsp%3Bof%20Visual%20Studio.%20Intellicode%20serves%20as%20a%20universal%20programming%20language%20compiler%2C%20effectively%20generating%20syntactically%20correct%20code%20in%20multiple%20programming%20languages%2C%20capable%20of%20completing%20an%20entire%20line%20of%20code%20in%20a%20couple%20of%20keystrokes.%20The%20training%20dataset%20for%20this%20task%20comprises%20over%201.2%20billion%20lines%20of%20source%20code%20in%20Python%2C%20C%23%2C%20JavaScript%20and%20TypeScript%20programming%20language%20from%2052000%20top-starred%20projects%20in%20GitHub.%26nbsp%3BWe%20treat%20the%20source%20code%20data%20as%20a%20sequence%20of%20tokens%20corresponding%20to%20the%20output%20of%20a%20lexical%20analyzer.%3C%2FP%3E%0A%3CP%20class%3D%22lia-align-justify%22%3EThe%20training%20was%20performed%20in%20a%20DGX-2%20cluster.%20As%20we%20use%20a%20large%20sequence%20length%20of%201024%2C%20the%20memory%20usage%20is%20very%20intensive%20and%20PyTorch%20is%20only%20able%20to%20fit%20a%20batch%20size%20of%202%20on%20the%2032GB%20V100.%20ORT%20achieved%2015.8%25%20higher%20throughput%20under%20the%20identical%20local%20batch.%20As%20ORT%20is%20more%20memory%20efficient%20and%20able%20to%20run%20at%20a%20bigger%20batch%20size%20of%203%2C%20it%20delivered%20an%20overall%2020.5%25%20of%20the%20throughput%20improvement.%20As%20a%20result%2C%20the%20overall%20training%20time%20is%20reduced%20from%20202%20hours%20to%20168%20hours%20(with%201.2%20x%20higher%20throughput).%20The%20final%20evaluation%20metric%20also%20achieved%20the%20same%20production%20shipping%20bar.%20%26nbsp%3B%3C%2FP%3E%0A%3CTABLE%20class%3D%22%20lia-align-left%22%20style%3D%22height%3A%20147px%3B%20width%3A%20700px%3B%20margin-left%3A%20auto%3B%20margin-right%3A%20auto%3B%22%3E%3CCAPTION%3ETable%204.%20GPT-2%20medium%20pre-training%20performance.%3C%2FCAPTION%3E%0A%3CTBODY%3E%0A%3CTR%20style%3D%22mso-yfti-irow%3A%20-1%3B%20mso-yfti-firstrow%3A%20yes%3B%20mso-yfti-lastfirstrow%3A%20yes%3B%20mso-prop-change%3A%20'Sherlock%20Huang'%2020200517T1612%3B%22%3E%0A%3CTD%20width%3D%22117.5px%22%20height%3D%2257px%22%3E%3CP%3E%3CSTRONG%3E%26nbsp%3B%3C%2FSTRONG%3E%3C%2FP%3E%0A%3C%2FTD%3E%0A%3CTD%20width%3D%22134.5px%22%20height%3D%2257px%22%3E%3CP%3E%3CSTRONG%3EBatch%20size%20%2F%20GPU%3C%2FSTRONG%3E%3C%2FP%3E%0A%3C%2FTD%3E%0A%3CTD%20width%3D%22165.5px%22%20height%3D%2257px%22%3E%3CP%20class%3D%22lia-align-left%22%3E%3CSTRONG%3EThroughput%20(ex%2Fsec)%3C%2FSTRONG%3E%3C%2FP%3E%0A%3C%2FTD%3E%0A%3CTD%20width%3D%22148px%22%20height%3D%2257px%22%3E%3CP%20class%3D%22lia-align-left%22%3E%3CSTRONG%3ETime%20to%20train%20(hours)%3C%2FSTRONG%3E%3C%2FP%3E%0A%3C%2FTD%3E%0A%3C%2FTR%3E%0A%3CTR%3E%0A%3CTD%20width%3D%22117.5px%22%20height%3D%2230px%22%3E%3CP%3EPyTorch%3C%2FP%3E%0A%3C%2FTD%3E%0A%3CTD%20width%3D%22134.5px%22%20height%3D%2230px%22%3E%3CP%3E2%3C%2FP%3E%0A%3C%2FTD%3E%0A%3CTD%20width%3D%22165.5px%22%20height%3D%2230px%22%3E%3CP%3E48.7%3C%2FP%3E%0A%3C%2FTD%3E%0A%3CTD%20width%3D%22148px%22%20height%3D%2230px%22%3E%3CP%3E202%3C%2FP%3E%0A%3C%2FTD%3E%0A%3C%2FTR%3E%0A%3CTR%20style%3D%22mso-yfti-irow%3A%201%3B%20mso-prop-change%3A%20'Sherlock%20Huang'%2020200517T1612%3B%22%3E%0A%3CTD%20width%3D%22117.5px%22%20height%3D%2230px%22%3E%3CP%3EPyTorch%20%2B%20ORT%3C%2FP%3E%0A%3C%2FTD%3E%0A%3CTD%20width%3D%22134.5px%22%20height%3D%2230px%22%3E%3CP%3E2%3C%2FP%3E%0A%3C%2FTD%3E%0A%3CTD%20width%3D%22165.5px%22%20height%3D%2230px%22%3E%3CP%3E56.4%3C%2FP%3E%0A%3C%2FTD%3E%0A%3CTD%20width%3D%22148px%22%20height%3D%2230px%22%3E%3CP%3E174%3C%2FP%3E%0A%3C%2FTD%3E%0A%3C%2FTR%3E%0A%3CTR%20style%3D%22mso-yfti-irow%3A%202%3B%20mso-yfti-lastrow%3A%20yes%3B%20mso-prop-change%3A%20'Sherlock%20Huang'%2020200517T1612%3B%22%3E%0A%3CTD%20width%3D%22117.5px%22%20height%3D%2230px%22%3E%3CP%3EPytorch%20%2B%20ORT%3C%2FP%3E%0A%3C%2FTD%3E%0A%3CTD%20width%3D%22134.5px%22%20height%3D%2230px%22%3E%3CP%3E3%3C%2FP%3E%0A%3C%2FTD%3E%0A%3CTD%20width%3D%22165.5px%22%20height%3D%2230px%22%3E%3CP%3E58.7%3C%2FP%3E%0A%3C%2FTD%3E%0A%3CTD%20width%3D%22148px%22%20height%3D%2230px%22%3E%3CP%3E168%3C%2FP%3E%0A%3C%2FTD%3E%0A%3C%2FTR%3E%0A%3C%2FTBODY%3E%0A%3C%2FTABLE%3E%0A%3CP%20class%3D%22lia-align-justify%22%3E%3CSTRONG%3E%26nbsp%3B%3C%2FSTRONG%3E%3C%2FP%3E%0A%3CH2%20class%3D%22lia-align-justify%22%20id%3D%22toc-hId-788591630%22%20id%3D%22toc-hId-788591630%22%3EConclusion%3C%2FH2%3E%0A%3CP%20class%3D%22lia-align-justify%22%3E%3CSPAN%3EToday%2C%20w%3C%2FSPAN%3E%3CSPAN%3Ee%3C%2FSPAN%3E%3CSPAN%3E%20announced%20%3C%2FSPAN%3E%3CSPAN%3Ethe%20preview%20of%20training%20support%20in%20%3C%2FSPAN%3E%3CSPAN%3EO%3C%2FSPAN%3E%3CSPAN%3ENNX%20Runtime%3C%2FSPAN%3E%3CSPAN%3E%20with%20%3C%2FSPAN%3E%3CSPAN%3Ea%3C%2FSPAN%3E%20%3CSPAN%3Efocus%20on%3C%2FSPAN%3E%20%3CSPAN%3Elarge%20sc%3C%2FSPAN%3E%3CSPAN%3Eale%20%3C%2FSPAN%3E%3CSPAN%3Ecomputation%20intensive%3C%2FSPAN%3E%3CSPAN%3E%20transformer%20%3C%2FSPAN%3E%3CSPAN%3Emodels%3C%2FSPAN%3E%3CSPAN%3E.%3C%2FSPAN%3E%20We%20have%20demonstrated%20that%2C%20on%20a%204%20DGX-2%20cluster%2C%20ONNX%20Runtime%20can%20achieve%20a%20throughput%20gain%20of%2011.32%25%20and%2014.61%25%20for%20BERT-L%20phase%201%20and%202%20pre-training%20over%20PyTorch.%20The%20total%20training%20time%20was%20reduced%20by%2011.16%25%2C%20from%2017.74%20hours%20to%2015.76%20hours.%20ONNX%20Runtime%20is%20able%20to%20train%20BERT-L%20at%20a%202x%20batch%20size%20as%20PyTorch.%20We%20have%20shown%20a%20similar%2020.5%25%20speedup%20on%20a%20GPT-2%20model%2C%20saving%2034%20hours%20in%20total%20training%20time.%20ONNX%20Runtime%20Training%20is%20integrated%20with%20PyTorch%20so%20that%20existing%20PyTorch%20training%20code%20can%20be%20directly%20accelerated%20for%20%3CSPAN%3Etransformer%20%3C%2FSPAN%3E%3CSPAN%3Emodels%20training.%26nbsp%3B%3C%2FSPAN%3E%3C%2FP%3E%0A%3CP%20class%3D%22lia-align-justify%22%3E%26nbsp%3B%3C%2FP%3E%0A%3CH2%20class%3D%22lia-align-justify%22%20id%3D%22toc-hId--1018862833%22%20id%3D%22toc-hId--1018862833%22%3EGet%20Started%3C%2FH2%3E%0A%3CP%20class%3D%22lia-align-justify%22%20data-unlink%3D%22true%22%3EAs%20a%20part%20of%20the%20announcement%20on%20using%20ONNX%20Runtime%20for%20training%2C%20we%20have%20released%20a%20Docker%20image%20with%20ORT%20and%20made%20available%20a%20repo%20at%20%3CA%20href%3D%22https%3A%2F%2Fgithub.com%2Fmicrosoft%2Fonnxruntime-training-examples%22%20target%3D%22_blank%22%20rel%3D%22noopener%20noopener%20noreferrer%20noopener%20noreferrer%22%3Ehttps%3A%2F%2Fgithub.com%2Fmicrosoft%2Fonnxruntime-training-examples%3C%2FA%3E%20that%20will%20host%20examples%20for%20ORT%20training.%20The%20first%20recipe%20available%20in%20this%20repo%20will%20help%20you%20get%20started%20with%20ORT%20for%20BERT%20pretraining%20in%20%3CA%20href%3D%22https%3A%2F%2Fazure.microsoft.com%2Fen-us%2Fservices%2Fmachine-learning%2F%22%20target%3D%22_blank%22%20rel%3D%22noopener%20noopener%20noreferrer%20noopener%20noreferrer%22%3EAzure%20Machine%20Learning%20service%3C%2FA%3E%20or%20%3CA%20href%3D%22https%3A%2F%2Fwww.nvidia.com%2Fen-us%2Fdata-center%2Fdgx-2%22%20target%3D%22_blank%22%20rel%3D%22noopener%20nofollow%20noopener%20noreferrer%20noopener%20noreferrer%22%3ENVIDIA%20DGX-2%3C%2FA%3E%20and%20see%20the%20speedup%20in%20action.%20This%20recipe%20shows%20how%20to%20use%20ONNX%20Runtime%20training%20with%20BERT%20pretraining%20implementation%20in%20PyTorch.%20You%20can%20use%20this%20example%20either%20with%20the%20two%20datasets%20used%20in%20the%20original%20implementation%20or%20with%20your%20custom%20dataset%20to%20pretrain%20a%20BERT%20model%20and%20get%20the%20performance%20improvements%20with%20ORT%20reported%20in%20this%20blog.%20We%20are%20planning%20to%20add%20more%20examples%20for%20transformer%20models%20and%20other%20models.%20We%20also%20welcome%20your%20contribution%20to%20this%20repo%26nbsp%3Band%20feedback%20to%20improve%20ORT%20training%20capabilities%20and%20experience.%3C%2FP%3E%3C%2FLINGO-BODY%3E%3CLINGO-TEASER%20id%3D%22lingo-teaser-1398310%22%20slang%3D%22en-US%22%3E%3CDIV%20class%3D%22title-wrapper%22%3E%0A%3CP%20class%3D%22%22%3EToday%20we%20are%20introducing%20the%20preview%20of%20the%20new%20%3CSPAN%3Etraining%3C%2FSPAN%3E%3CSPAN%3E%20feature%20in%20ONNX%20Runtime%3C%2FSPAN%3E.%20This%20allows%20training%20a%20Pytorch%20transformer%20model%20up%20to%20...%2045%25%20faster%20with%202x%20batch%20size.%26nbsp%3B%3C%2FP%3E%0A%3C%2FDIV%3E%3C%2FLINGO-TEASER%3E%3CLINGO-LABS%20id%3D%22lingo-labs-1398310%22%20slang%3D%22en-US%22%3E%3CLINGO-LABEL%3EAzure%20Machine%20Learning%3C%2FLINGO-LABEL%3E%3CLINGO-LABEL%3EMachine%20Learning%3C%2FLINGO-LABEL%3E%3C%2FLINGO-LABS%3E
Microsoft

Author: Sherlock Huang, AI Frameworks, Microsoft

This post is co-authored by Cheng Tang, Jesse Benson, Kaarthik Sivashanmugam and Alexey Svyatkovskiy

 

Today we announced the preview for new training feature in ONNX Runtime (ORT). This blog explains how we have been using it to accelerate training for large transformer models. ONNX Runtime Training is integrated with PyTorch so that existing training code can be directly accelerated for training.

In this paper, we will describe some of the key aspects of ORT design and implementation that enable us to achieve the distributed training performance improvements. We will also use BERT-L pre-training as the benchmark to illustrate the performance of ORT training. Finally, we will present a case study of training GPT-2 model for code autocompletion feature in Visual Studio IntelliCode

 

Design and Implementation

ONNX Runtime Training is built on the same open sourced code as the popular inference engine for ONNX models. Figure 1 shows the high-level architecture for ONNX Runtime’s ecosystem. ORT is a common runtime backend that supports multiple framework frontends, such as PyTorch and Tensorflow/Keras. It makes use of the Execution Provider interface to perform computation on different hardwareThis enables us to build hardware-agnostic, graph-level optimizations that are extensible across different platforms, as well as hardware specific optimization targeting platforms like NVIDIA GPUWe have also implemented additional optimizations, outlined below, to expedite training for large transformer models. 

 

Figure 1. ONNX Runtime High Level ArchitectureFigure 1. ONNX Runtime High Level Architecture

Static Graph Optimizations

Machine learning models are commonly abstracted as computation graphs. The computation graph used by deep learning frameworks could be either static or dynamic. In the current implementation, ORT has a view of the entire static computation graph. This makes it possible to enable many common graph optimization techniques, such as constant folding, redundant operation elimination, and operator fusion. They are first applied on the forward computation graph before auto differentiation engine builds the backward graph. As ORT has the global knowledge of data dependencies, it only builds the minimal gradient graph that is needed for targeted weights. Consequently, activation tensors that are not needed for backward computation are automatically dropped after use. With a minimal training graph, it ensures that only essential computation is performed and memory consumption is minimized.

 

Memory Usage Optimizations

Over the last few years, the size of deep learning models has been growing rapidly. GPU memory consumption has become a limiting factor for large model training. ORT has made conscious efforts to preserve and reuse memory whenever possible. For example, ORT reuses the same buffer segments throughout a series of operations, including gradient accumulation, gradient scaling adjustment, allreduce communication and weight update computation (if the optimizer allows). ORT also tries to perform in-place operations if the source tensor is no longer consumed elsewhere in the computation graph. ORT’s kernel implementation also tries to minimize the use of scratch buffers, such as avoid using some memory intensive cuDNN functions, and reusing output buffer as scratch buffer if possible. As a result, ORT can train BERT with 2x the batch size as PyTorch. This enables us to utilize the GPU resources more efficiently, resulting in better performance on the same model and the ability to train larger models.

 

ZeRO Stage 1 Integration

Zero Redundancy Optimizer (ZeRO) is a memory optimization technique from Microsoft Research. ZeRO is used to save GPU memory consumption by eliminating duplicated states across workers during distributed training. ZeRO has three main optimization stages.  Currently, ONNX Runtime implemented Stage 1 of ZeRO. ZeRO Stage 1, known as the optimizer state partitioning, allows ORT to shard the optimizer states, including 1st and 2nd order moments (and fp32 copy of weights in mixed precision mode), across multiple workers with no extra communication overhead. With ZeRO, ORT can further boost batch size or train a larger model. In BERT-L pre-training, ZeRO allows batch size to further grow from 148 to 168 for phase 1 and from 23 to 27 for phase 2 in a 32GB V100. Distributed checkpointing is also introduced, as model persistent state is distributed across multiple workers. ZeRO can be enabled with a config flag.

 

Native Mixed Precision Training Support     

Unlike PyTorch’s dependency on NVIDIA Apex extension, ORT has implemented its own support for mixed precision training. Mixed precision training can be enabled with a config flag – no other code change needed. Under the hood, ORT converts the static computation graph into mixed precision mode through a series of graph transformations, i.e. running most of the computations in fp16 while keeping some numerically sensitive computation in fp32. ORT supports dynamic loss scaling by automatically inserting the computation nodes for loss scaling into the graph.

 

Highly Scaleable Distributed Training

ORT seeks to build a unified highly scaleable distributed training framework for hybrid parallelism, including a mixed of data and model parallelisms. ORT supports data parallelism, which is the most popular distributed training mode adopted by many internal teams. We are enhancing ORT to fully support training extremely large models (>100 billion parameters). It has an experimental implementation of Megatron-style horizontal parallelism and we are actively developing to support pipeline parallelism, such as PipeDream.

 

CUDA Kernel Optimizations

ORT has introduced highly optimized CUDA kernels for some key operations including Reductions, Dropout and Softmax. In addition, we have also introduced a few key operator fusions with fused kernels for LayerNormalization, Gelu and their gradients, as well as Lamb Optimizer.

 

Using ORT with PyTorch Training Code

ONNX Runtime has the capability to train existing PyTorch models through its optimized backend. For this, we have introduced a python API for PyTorch, called ORTTrainer, which can be used to switch the training backend for PyTorch models (instance of torch.nn.Module) to ORT. This requires some changes from the user, such as replacing the PyTorch optimizer, and optionally, setting flags to enable additional features such as mixed-precision training. Under the hood, as shown in Figure 2, ORTTrainer first converts the PyTorch model to ONNX format through the PyTorch-ONNX exporter. Next, ORT backend takes over and applies graph optimizations, builds a training graph, performs transformations on it as needed (e.g. mixed-precision transformation), and sets up the graph elements needed for distributed training. In this design, while all the computation-intensive workload is offloaded onto the ORT backend, users can still enjoy the rich PyTorch frontend utilities, such as data loading, checkpointing , and easy specification of loss functions. 

 

Figure 2. Workflow for converting an PyTorch model into an ORT training graphFigure 2. Workflow for converting an PyTorch model into an ORT training graph

It is important to note that the current API is experimental and expected to see significant changes in the near future. A new version of the API is under active development. Our goal is to improve the interface to provide more seamless integration with PyTorch training that requires minimal changes in users’ training code, introduce new features, and present a more flexible API to cover advanced scenarios. Please refer to the training examples for more details.

 

Benchmarking Training Acceleration with ONNX Runtime

We now present the performance evaluation of BERT-L pre-training with ONNX Runtime in a 4-node DGX-2 cluster. In AzureML, we also reproduced the pre-training convergence for BERT-Large using sample from NVIDIA’s DeepLearningExamplesle’s repo. We also validated fine tuning accuracy with SQuAD benchmarks.

 

Benchmarking on DGX-2

We compared PyTorch and ORT’s BERT-L training performance on 4 NVIDIA DGX-2 machines (each with 16x 32GB V100) interconnected with InfiniBand. PyTorch’s result was obtained with NGC 20.03-py3 docker image following Nvidia’s recipe. ORT’s result was obtained following the same recipe, except that ORT used bigger local batch sizes. As described above, ORT is able to run at a 2x batch size of PyTorch’s. ORT ran at a local batch size of 128 and 16 for phase 1 and 2 respectively, whereas PyTorch ran at batch size of 64 and 8. The effective global batch size remained unchanged in both cases. Overall, ORT achieved throughput improvement of 11.32% and 14.61% for phase 1 and 2. The total time to train was reduces by 11.16%, from 17.74 hours to 15.76 hours.

Table 1. Time to train on 4 NVIDIA DGX-2 machines

 

PyTorch 1.5 with

NGC 20.03-py3

PyTorch 1.5 with

ONNX Runtime

% Gain with

ONNX Runtime

Phase 1 Throughput (ex/sec)

11522.1

12826.2

11.32%

Phase 2 Throughput (ex/sec)

2150.0

2464.1

14.61%

Phase 1 time (hours)

11.12

9.99

10.16%

Phase 2 time (hours)

6.62

5.77

12.84%

Total time (hours)

17.74

15.76

11.16%

 

BERT-L Pre-training on AzureML

We performed BERT-L pre-training on 8x ND40rs_v2 cluster (each with 8x 32GB V100) interconnected with InfiniBand in AzureML. We used the same Nvidia’s recipe, expect that we doubled the local batch size in the same way we mentioned above. Mixed precision mode and LAMB optimizer was used throughout the training. As the end of phase 2, we achieved the training loss of 1.31. The end-to-end training time was 18.32 hours.

Table 2. Time to train on Azure ML with 8x ND40rs_v2

 

PyTorch 1.5 with ONNX Runtime

Phase 1 Throughput (ex/sec)

10751.4

Phase 2 Throughput (ex/sec)

2223.7

Phase 1 Time (hours)

11.92

Phase 2 Time (hours)

6.40

Total Time (hours)

18.32

 

Figure 3 shows a loss curve produced in a typical pre-training run. Phase 1 ends with a loss value around 1.4 after 7038 steps. Phase 2 continues with a jump of loss due to switch of sequence length, and it finally decrease to a loss value around 1.3.

­

Figure 3. ORT BERT-L pre-training loss curvesFigure 3. ORT BERT-L pre-training loss curves

The pretrained model is then further finetuned on SQuAD dataset. Both full precision or mixed precision finetuning result in satisfactory Exact Match and F1 scores.

Table 3. BERT-L fine-tuning result on SQuAD Dataset

Accuracy Metrics

Finetuning - FP32

Finetuning -

mixed precision

Exact Match %

84.63

84.81

F1 score %

91.15

91.32

 

A Case Study with Visual Studio using GPT-2 Medium

Microsoft Visual Studio uses ONNX Runtime to accelerate pre-training a 24-layer GPT-2 Medium model to power code autocompletion in the IntelliCode of Visual Studio. Intellicode serves as a universal programming language compiler, effectively generating syntactically correct code in multiple programming languages, capable of completing an entire line of code in a couple of keystrokes. The training dataset for this task comprises over 1.2 billion lines of source code in Python, C#, JavaScript and TypeScript programming language from 52000 top-starred projects in GitHub. We treat the source code data as a sequence of tokens corresponding to the output of a lexical analyzer.

The training was performed in a DGX-2 cluster. As we use a large sequence length of 1024, the memory usage is very intensive and PyTorch is only able to fit a batch size of 2 on the 32GB V100. ORT achieved 15.8% higher throughput under the identical local batch. As ORT is more memory efficient and able to run at a bigger batch size of 3, it delivered an overall 20.5% of the throughput improvement. As a result, the overall training time is reduced from 202 hours to 168 hours (with 1.2 x higher throughput). The final evaluation metric also achieved the same production shipping bar.  

Table 4. GPT-2 medium pre-training performance.

 

Batch size / GPU

Throughput (ex/sec)

Time to train (hours)

PyTorch

2

48.7

202

PyTorch + ORT

2

56.4

174

Pytorch + ORT

3

58.7

160

 

Conclusion

Today, we announced the preview of training support in ONNX Runtime with a focus on large scale computation intensive transformer models. We have demonstrated that, on a 4 DGX-2 cluster, ONNX Runtime can achieve a throughput gain of 11.32% and 14.61% for BERT-L phase 1 and 2 pre-training over PyTorch. The total training time was reduced by 11.16%, from 17.74 hours to 15.76 hours. ONNX Runtime is able to train BERT-L at a 2x batch size as PyTorch. We have shown a similar 20.5% speedup on a GPT-2 model, saving 34 hours in total training time. ONNX Runtime Training is integrated with PyTorch so that existing PyTorch training code can be directly accelerated for transformer models training. 

 

Get Started

As a part of the announcement on using ONNX Runtime for training, we have released a Docker image with ORT and made available a repo at https://github.com/microsoft/onnxruntime-training-examples that will host examples for ORT training. The first recipe available in this repo will help you get started with ORT for BERT pretraining in Azure Machine Learning service or NVIDIA DGX-2 and see the speedup in action. This recipe shows how to use ONNX Runtime training with BERT pretraining implementation in PyTorch. You can use this example either with the two datasets used in the original implementation or with your custom dataset to pretrain a BERT model and get the performance improvements with ORT reported in this blog. We are planning to add more examples for transformer models and other models. We also welcome your contribution to this repo and feedback to improve ORT training capabilities and experience.