In today’s complex cloud environments, it is common for companies to use technologies from many sources. Cloud computing platforms often form the foundation of an enterprise IT landscape, supplemented by independent software vendors and by products that meet business and industry-vertical needs. Through the integration between H2O.ai and Azure Machine Learning, we enable customers to democratize model creation with SaaS-like tools and to pick the deployment technologies that align with their corporate requirements.
Using H2O.ai’s AzureML integration, models built in H2O.ai now appear as deployed models within an AzureML workspace. This means that any user or product can invoke an inferencing endpoint for the H2O.ai models hosted in AzureML with a simple API call. This makes organizational model adoption easier, allowing users to access models across their enterprise IT landscape without changing existing deployment strategies.
“H2O's integration with Azure provides an innovative approach to deliver models as an Azure service, resulting in higher model adoption across Momentum Financial Services Group. As an example, 80% of credit checks are now reviewed using H2O models and more than 6K requests a day”.
- Nima Norouzi, Director - Advanced Analytics
At a high level, many organizations use databases, services, and repositories on Azure. Those assets can be used to build a machine learning model with H2O.ai products such as Driverless AI or H2O-3. As you may expect, Driverless AI has some unique capabilities to create new features as part of the Automated Machine Learning workflow and to generate a model that is highly accurate and explainable.
The resulting artifact from the automated training process is the model, which is used in the scoring (inference) stage of the workflow. This is where our integration takes root: organizations can now host that model for inference within Azure Machine Learning.
A high-level diagram of the integration between H2O.ai and Azure Machine Learning
Azure Machine Learning can now invoke the models directly as services. This means the models generated within H2O.ai can be quickly and easily used within current pipelines, as well as in new ways. For example, the new AzureML Managed Online Endpoint feature makes an H2O-3 or Driverless AI model accessible directly as an HTTP call.
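To make the HTTP-call pattern concrete, here is a minimal sketch of scoring against a Managed Online Endpoint from Python. The endpoint URI, key, and the exact JSON payload schema are assumptions for illustration — substitute the values and request format shown on your endpoint's Consume tab in Azure Machine Learning studio.

```python
import json
import urllib.request

# Hypothetical values -- replace with your endpoint's scoring URI and key,
# found on the endpoint's "Consume" tab in Azure Machine Learning studio.
SCORING_URI = "https://my-h2o-endpoint.eastus.inference.ml.azure.com/score"
API_KEY = "<endpoint-key>"


def build_scoring_request(rows):
    """Package a batch of feature rows as an HTTP request for the endpoint.

    The JSON schema depends on the scoring script deployed with the model;
    a simple list-of-rows payload is assumed here for illustration.
    """
    body = json.dumps({"data": rows}).encode("utf-8")
    headers = {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {API_KEY}",
    }
    return urllib.request.Request(SCORING_URI, data=body, headers=headers)


def score(rows):
    """Send the rows to the endpoint and return the parsed predictions."""
    with urllib.request.urlopen(build_scoring_request(rows)) as resp:
        return json.loads(resp.read())


# Example (requires a live endpoint):
# predictions = score([[5.1, 3.5, 1.4, 0.2]])
```

Because the endpoint is just an authenticated HTTPS service, the same call works from any language or product in the enterprise landscape, not only from Python.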
The integration is available as part of H2O.ai’s REST Server. The REST Server also generates an Azure Machine Learning deployment template, along with other artifacts, to enable a seamless user experience when consuming models within Azure Machine Learning.
Additionally, H2O’s Wave Application enables users to deploy the models into Azure using the Wave UI.
A screenshot of the H2O.ai Wave UI connecting to Azure Machine Learning
The benefit to customers is simple: deploying and reusing models generated within H2O.ai, with hosted inferencing on Azure Machine Learning, is a natural extension of our combined capabilities. It empowers customers to build models where they like, while enabling enterprise data science teams to keep model governance and management in Azure Machine Learning.
Updated Jun 21, 2022
Version 2.0
cgero-msft
Microsoft
AI - Machine Learning Blog