Azure Databricks announced today the general availability of Model Serving. Azure Databricks Model Serving deploys machine learning models as REST APIs, letting you build real-time ML applications such as personalized recommendations, customer service chatbots, fraud detection, and more - all without the hassle of managing serving infrastructure. This is the first real-time inference feature to launch under this service.
To learn about the full announcement, you can read the original post here or join our webinar.
"With Databricks Model Serving, we can now train, deploy, monitor, and retrain machine learning models, all on the same platform. By bringing model serving (and monitoring) together with the feature store, we can ensure deployed models are always up-to-date and deliver accurate results. This streamlined approach allows us to focus on maximizing the business impact of AI without worrying about availability and operational concerns." - Don Scott, VP of Product Development, Hitachi Solutions
Azure Databricks Model Serving accelerates data science teams’ path to production by simplifying deployments and reducing mistakes through integrated tools. With the new model serving service, you can do the following:
- Deploy a model as an API with one click in a serverless environment.
- Serve models with high availability and low latency using endpoints that can automatically scale up and down based on incoming workload.
- Safely deploy models using flexible deployment patterns such as progressive rollout, or run online experiments using A/B testing.
- Seamlessly integrate model serving with the online feature store (hosted on Azure Cosmos DB), MLflow Model Registry, and monitoring, allowing for faster, error-free deployments.
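Once an endpoint is deployed, it can be queried over REST. The sketch below shows the general shape of such a request in Python; the workspace URL, endpoint name, and token are placeholders you would replace with your own values, and the `dataframe_records` payload format follows the MLflow scoring convention used by Databricks serving endpoints.

```python
import json

# Placeholder connection details - substitute your own workspace URL,
# endpoint name, and Databricks personal access token.
WORKSPACE_URL = "https://adb-1234567890123456.7.azuredatabricks.net"
ENDPOINT_NAME = "my-recommender"
API_TOKEN = "dapi..."

def build_payload(records):
    """Wrap input rows in the dataframe_records format that the
    endpoint's /invocations route expects."""
    return json.dumps({"dataframe_records": records})

def score(records):
    """POST the payload to the serving endpoint and return predictions.
    (Requires the `requests` package and network access to the workspace.)"""
    import requests
    url = f"{WORKSPACE_URL}/serving-endpoints/{ENDPOINT_NAME}/invocations"
    headers = {
        "Authorization": f"Bearer {API_TOKEN}",
        "Content-Type": "application/json",
    }
    response = requests.post(url, headers=headers, data=build_payload(records))
    response.raise_for_status()
    return response.json()
```

Because the endpoint is a plain HTTPS API, any language or tool that can issue a JSON POST request can consume the model the same way.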
Ready to get started or try it out for yourself? You can read more about Azure Databricks Model Serving and how to use it in our documentation here.