MLflow: A way to do more on Azure Machine Learning
Published Jun 24 2022
Microsoft

This post is co-authored by Abe Omorogbe, Product Manager for Azure Machine Learning


 

The open-source movement is one of the reasons technology has developed at such a breakneck pace over the past few decades. By encouraging innovation through collaboration, it underpins how the world communicates and innovates with software. Given how diverse the technology landscape for Machine Learning is, such ideas are not just good but necessary.


With that in mind, MLflow is arguably one of the open-source projects that has made its way furthest into the ML space. We announced the integration between MLflow and Azure Machine Learning some time ago, supporting tracking and model management capabilities in MLflow when connected to Azure Machine Learning.

 

We are expanding our commitment by bringing to life a more mature integration, including:

  • Support for a broader set of APIs
  • No-code deployment of MLflow models to real-time and batch managed inference
  • Endpoints for MLflow model deployments
  • Curated environments with MLflow
  • MLflow model registration and deployment from Azure Machine Learning Studio
  • Use MLflow models as job inputs and pipeline inputs
  • Integrations with our CLI v2 

We are also contributing back to the standard to make our expertise available to everyone. Endpoints, which Azure Machine Learning users have been using in our Managed Inference service to deploy models faster, are a great example of a contribution that landed in MLflow in v1.25.0 and is now available to everyone. Check our new documentation or sample notebooks and start taking advantage of MLflow today.

 

What’s MLflow?

 

MLflow is an open-source framework designed to manage the complete machine learning lifecycle. Its ability to train and serve models on different platforms allows users to avoid vendor lock-in and to move freely from one platform to another.

 

MLflow provides:

  • Tracking, allowing experiments to record and compare parameters, metrics, and results.
  • Model Registry, supplying a central model store to collaboratively manage the full lifecycle of a model.
  • Projects, allowing you to package ML code in a reusable, reproducible form to share with other data scientists or transfer to production (in preview on Azure Machine Learning).

 

Not just an open-source standard, but a way to experiment faster

 

MLflow not only represents an open-source standard to track ML experiments, but it is also equipped with a set of functionalities created to improve ML practitioners' productivity. To call out a few:

 

Autologging

Autologging is a feature in MLflow that allows automatic tracking of hyper-parameters, metrics, artifacts, and even models in a wide variety of machine learning frameworks. Autologging saves machine learning practitioners time by automatically tracking interesting parameters and metrics according to the model they are building.
Currently, MLflow supports logging models created in FastAI, MXNet Gluon, PyTorch, TensorFlow, XGBoost, CatBoost, h2o, Keras, LightGBM, MLeap, ONNX, Prophet, spaCy, Spark MLLib, Scikit-Learn, and Statsmodels.

 

By simply calling:

 

mlflow.autolog()

 

MLflow starts tracking your experiment and logging metrics specific to the framework you are working with. This resonates with many practitioners: the owner of a machine learning framework is probably the best one to tell (and log) the relevant parameters for a given model architecture, saving ML practitioners from having to manually log every parameter they use.

 

 


 

 

In this example, we are training an XGBoost classifier with the autolog functionality of MLflow turned on. All of those metrics and parameters are captured automatically by MLflow for XGBoost models.

 

Saving models in an open-source specification

MLflow proposes saving models using the MLmodel specification. This format is not only open source but also ML-framework agnostic, which means there is no model conversion process in the middle: each framework decides how to persist its own assets and indicates to MLflow how to load and run the model.

 

For instance, a TensorFlow model uses the SavedModel format under the hood, while a FastAI model stores its assets using a fastai.learner format. All of this complexity is abstracted away from the ML practitioner, who only needs to learn how to save and load models using the MLflow MLmodel specification. The specifics of saving the model and loading it back in each ML framework are handled for you.

 


In this sample, we load the latest version of a model named “cats_vs_dogs”, created with FastAI. Then, without even wondering how FastAI saves and loads a model, we can run the model through MLflow and perform inference using the predict function.

 

 

Pick up where you left off, more easily

Whether your models are being saved inside of your experimentation runs or in a model registry, sometimes you need to continue the training of an existing model to tune it further. You may want to train the model for a different downstream task, fine-tune it with specific data, or pick it up from a checkpoint. This can be easily orchestrated with MLflow by loading the model back into a new experiment and then continuing the training with new data.

 


In this example, we load the latest version of the model named “cats_vs_dogs”, a model trained with FastAI. Then we associate a new data loader with the model, which is used to feed the learner with new instances of images. Finally, the fine_tune method (specific to FastAI) is called to continue the training and fine-tune the original model on the new dataset.

 

Move from your laptop to the cloud, without changing a line of code

 

MLflow allows you to remove any dependencies your models may have on the tracking solution you are using, creating a truly portable training routine. ML practitioners can train and experiment with models locally, or grab existing training routines they used on another platform and track them with Azure Machine Learning, without changing a single line of code and without installing an Azure Machine Learning SDK (software development kit).


As long as you are relying on MLflow for tracking metrics, parameters, or models, that information can flow directly to Azure Machine Learning. When using Azure Notebooks or any Azure Machine Learning Compute, this will happen automatically.
If you would rather continue training on your existing platform (let’s say your beloved laptop, Azure Databricks, you name it), you only have to instruct MLflow to point to your workspace:

 

 

mlflow.set_tracking_uri('azureml://...')

 

Lift models quicker to production with one-click deployment of MLflow model to Managed Inference

 

Azure Machine Learning supports no-code deployment of models stored in the MLflow MLmodel format. That means any model logged using MLflow can be deployed in Azure Machine Learning to a container instance, a Kubernetes cluster, or our flagship managed inference endpoints, both real-time and batch, with a couple of clicks. We take care of all the deployment details for you, including dependency installation, scoring script generation, and monitoring.
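As a sketch of what no-code deployment looks like with the CLI v2, a managed online deployment of an MLflow model needs only a small YAML spec; because the model is in the MLmodel format, no environment or scoring script has to be listed. The endpoint, deployment, and model names below are hypothetical:

```yaml
# deployment.yml - hypothetical names, for illustration only
$schema: https://azuremlschemas.azureedge.net/latest/managedOnlineDeployment.schema.json
name: blue
endpoint_name: book-reviews-endpoint
model: azureml:cats_vs_dogs@latest
instance_type: Standard_DS3_v2
instance_count: 1
```

A spec like this is then submitted with something like `az ml online-deployment create -f deployment.yml`.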

 


 

 

Enjoy features like endpoint testing to start interacting with your deployed model, all without worrying about the deployment itself.

 

 


An NLP (natural language processing) model trained to classify book reviews, deployed to an Azure Machine Learning managed real-time endpoint.

 

A transparent way to move across different platforms

 

We believe Azure Machine Learning is the best place to build Machine Learning solutions, but we recognize that the ML space is diverse, and there may be times when workloads need to move from one vendor’s platform to another. By supporting MLflow, we enable any ML practitioner currently using the standard to lift and shift their experiments to Azure Machine Learning without changing a single line of code, and to enjoy the productivity and scalability benefits of Azure Machine Learning without having to adapt their training routines.

 

 


 

 

This benefit works in both directions: just as you can move into our platform, you can move out too. The MLflow standard proposes a way to avoid vendor lock-in and provides a transparent way to take your experiments and models out of Azure Machine Learning if needed. Experiments, parameters, metrics, artifacts, and models can all be accessed seamlessly through the MLflow SDK, as if you were using vendor-specific SDKs.


MLflow even supports packaging models in containers that hold not a single reference to the Azure Machine Learning platform where they were originally created. This truly enables anyone to deploy anywhere.

 

An opportunity for partners and ISVs (Independent Software Vendors)

 

MLflow represents both an open-source standard for storing any kind of model and a transparent API (Application Programming Interface) for accessing and executing those models. This creates a tremendous opportunity for partners and ISVs to build their own ML solutions on top of Azure Machine Learning. By adopting Azure as a platform, they can take advantage of its capabilities, including training infrastructure, model registries, managed inference, and all the pieces they need to deliver a world-class ML platform for their own customers, without having to implement those building blocks themselves, and without creating a dependency between their customers' code and the fact that they run on Azure Machine Learning. As long as customers interface with Azure Machine Learning using MLflow, they can talk to the underlying platform seamlessly, without even noticing. No Azure Machine Learning-specific code is needed.

 

 


 

For instance, a user can request the latest version of the model named “customer-churn” just by using a syntax like:

 

mlflow.pyfunc.load_model('models:/customer-churn/latest')

 

Our integration with MLflow automatically resolves this to the latest version of the model stored in the Azure Machine Learning model registry.

 

Getting started with MLflow in Azure Machine Learning

 

We encourage you to start using the MLflow standard in all your experiments. MLflow is already available in all the curated environments used on Azure Machine Learning, so you are ready to get started. Check our new documentation or sample notebooks and start taking advantage of MLflow today. All you need is autolog().

Version history
Last update: Nov 09 2023 11:10 AM