Azure Databricks announced today the general availability of Model Serving. Azure Databricks Model Serving deploys machine learning models as REST APIs, allowing you to build real-time ML applications like personalized recommendations, customer service chatbots, and fraud detection, all without the hassle of managing serving infrastructure. This is the first real-time inference feature to launch under this service.
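To make the REST API idea concrete, here is a minimal sketch of how a client might prepare a scoring request against a served model. The workspace URL, endpoint name, and input fields below are placeholders, and the request shape follows Databricks' documented `/invocations` pattern; check the official docs for your endpoint's exact schema.

```python
import json

def build_invocation_request(workspace_url, endpoint_name, token, records):
    """Assemble the URL, headers, and JSON body for a Model Serving call.

    All arguments are placeholders for illustration; real values come from
    your Databricks workspace and the endpoint you created.
    """
    url = f"{workspace_url}/serving-endpoints/{endpoint_name}/invocations"
    headers = {
        "Authorization": f"Bearer {token}",   # Databricks personal access token
        "Content-Type": "application/json",
    }
    # One common input format is a list of records, one dict per row
    body = json.dumps({"dataframe_records": records})
    return url, headers, body

url, headers, body = build_invocation_request(
    "https://adb-1234567890.0.azuredatabricks.net",  # hypothetical workspace URL
    "recommender",                                    # hypothetical endpoint name
    "<personal-access-token>",
    [{"user_id": 42, "item_id": 7}],                  # example feature row
)
# The request would then be sent with any HTTP client, e.g.:
# requests.post(url, headers=headers, data=body)
```

Because the endpoint is plain HTTPS plus JSON, any language or service that can make an HTTP request can consume the model.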
For the full announcement, read the original post here or join our webinar.
"With Databricks Model Serving, we can now train, deploy, monitor, and retrain machine learning models, all on the same platform. By bringing model serving (and monitoring) together with the feature store, we can ensure deployed models are always up-to-date and deliver accurate results. This streamlined approach allows us to focus on maximizing the business impact of AI without worrying about availability and operational concerns." - Don Scott, VP of Product Development, Hitachi Solutions
Azure Databricks Model Serving accelerates data science teams' path to production by simplifying deployments and reducing mistakes through integrated tools, letting you serve models as REST APIs without standing up or managing your own serving infrastructure.
Ready to get started or try it out for yourself? You can read more about Azure Databricks Model Serving and how to use it in our documentation here.