ONNX Runtime 0.5 releases with support for hardware optimized inferencing

Microsoft

If you are creating an Intelligent Edge project that uses vision machine learning models in the ONNX (Open Neural Network eXchange) format, the recent ONNX Runtime 0.5 release provides support and tutorials for using the NVIDIA Jetson Nano and Intel's OpenVINO Toolkit for hardware-based optimization.

 

ONNX Runtime is a performance-focused, complete scoring engine for Open Neural Network Exchange (ONNX) models, with an open, extensible architecture to continually address the latest developments in AI and deep learning. ONNX Runtime stays up to date with the ONNX standard, with a complete implementation of all ONNX operators, and supports all ONNX releases (1.2+) with both forward and backward compatibility. Please refer to the Versioning documentation (https://github.com/microsoft/onnxruntime/blob/master/docs/Versioning.md) for ONNX opset compatibility details.

 

ONNX is an interoperable format for machine learning models supported by various ML and DNN frameworks and tools. The universal format makes it easier to interoperate between frameworks and maximize the reach of hardware optimization investments.

 

Read the complete blog announcement on the Microsoft Open Source Blog: Now available: ONNX Runtime 0.5 with support for edge hardware acceleration (https://cloudblogs.microsoft.com/opensource/2019/08/26/announcing-onnx-runtime-0-5-edge-hardware-acceleration-support/)

 

Additional Resources:

- Microsoft Open Source Blog: https://cloudblogs.microsoft.com/opensource/
- ONNX Runtime GitHub repo: https://github.com/microsoft/onnxruntime
- Intel Distribution of OpenVINO Toolkit: https://software.intel.com/en-us/openvino-toolkit
- NVIDIA Jetson Nano: https://developer.nvidia.com/embedded/jetson-nano-developer-kit
- ONNX: https://onnx.ai/

Find more resources for Intelligent Edge device builders at the Intelligent Edge Device Builder Resource Center (https://aka.ms/IntelligentEdgeResourceCenter).
