We are excited to roll out the public preview of the Azure Functions durable task scheduler. This new Azure-managed backend is designed to provide high performance, improve reliability, reduce operational overhead, and simplify the monitoring of your stateful orchestrations. If you missed the initial announcement of the private preview, see this blog post.
Durable Task Scheduler
Durable functions simplifies the development of complex, stateful, and long-running apps in a serverless environment. It lets developers orchestrate multiple function calls without having to implement fault tolerance themselves. It's great for scenarios like orchestrating multiple agents, distributed transactions, big data processing, batch processing like ETL (extract, transform, load), asynchronous APIs, and essentially any scenario that requires chaining function calls with state persistence.
The durable task scheduler is a new storage provider for durable functions, designed to address the challenges and gaps our durable functions customers identified with the existing bring-your-own-storage options. Over the past few months, since the initial limited early access launch of the durable task scheduler, we’ve been working closely with customers to understand their requirements and make sure they can adopt the durable task scheduler successfully. We’ve also dedicated significant effort to strengthening the fundamentals – expanding regional availability, solidifying APIs, and ensuring the durable task scheduler is reliable, secure, scalable, and usable from any of the supported durable functions programming languages. Now, we’re excited to open the gates and make the durable task scheduler available to the public. Some notable capabilities and enhancements over the existing “bring your own storage” options include:
Azure Managed
Unlike the other existing storage providers for durable functions, the durable task scheduler offers dedicated resources that are fully managed by Azure. You no longer need to bring your own storage account for storing orchestration and entity state, as it is completely built in.
Looking ahead, the roadmap includes additional operational capabilities, such as auto-purging old execution history, handling failover, and other Business Continuity and Disaster Recovery (BCDR) capabilities.
Superior Performance and Scalability
Enhanced throughput for processing orchestrations and entities, ideal for demanding and high-scale applications. Efficiently manages sudden bursts of events, ensuring reliable and quick processing of your orchestrations across your function app instances.
The table below compares the throughput of the durable task scheduler provider and the Azure Storage provider.
- The function app used for this test runs on one to four Elastic Premium EP2 instances.
- The orchestration code was written in C# using the .NET Isolated worker model on .NET 8.
- The same app was used for all storage providers, and the only change was the backend storage provider configuration.
- The test is triggered using an HTTP trigger which starts 5,000 orchestrations concurrently.
The benchmark used a standard orchestrator function calling five activity functions sequentially, each returning a "Hello, {cityName}!" string. This specific benchmark showed that the durable task scheduler is roughly five times faster than the Azure Storage provider.
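For reference, the benchmark orchestration follows the standard function-chaining pattern. Below is a minimal sketch of what such an orchestration looks like in the C# isolated worker model; the function names and city values are illustrative, not the exact benchmark code:

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.Azure.Functions.Worker;
using Microsoft.DurableTask;

public static class HelloCities
{
    [Function(nameof(HelloCitiesOrchestration))]
    public static async Task<List<string>> HelloCitiesOrchestration(
        [OrchestrationTrigger] TaskOrchestrationContext context)
    {
        // Each activity call is durably checkpointed, so the orchestration
        // can resume from its last completed step after a failure or restart.
        var outputs = new List<string>();
        foreach (var city in new[] { "Tokyo", "Seattle", "London", "Amsterdam", "Cairo" })
        {
            outputs.Add(await context.CallActivityAsync<string>(nameof(SayHello), city));
        }
        return outputs;
    }

    [Function(nameof(SayHello))]
    public static string SayHello([ActivityTrigger] string cityName) => $"Hello, {cityName}!";
}
```

Because only the backend storage provider configuration changes between runs, the same orchestration code exercises each provider identically.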
Orchestration Debugging and Management Dashboard
Simplify the monitoring and management of orchestrations with an intuitive out-of-the-box UI. It offers clear visibility into orchestration errors and lifecycle events through detailed visual diagrams, providing essential information on exceptions and processing times. It also enables interactive orchestration management, allowing you to perform ad hoc actions such as suspending, resuming, raising events, and terminating orchestrations.
View orchestration history and monitor the runtime status. Explore detailed orchestration instance information, view activity timelines, and interact with orchestrations through management controls. Monitor the inputs and outputs passed between orchestrations and activities. Exceptions are surfaced, making it easy to identify where and why an orchestration failed.
Security Best Practices
Uses identity-based authentication with Role-Based Access Control (RBAC) for enterprise-grade authorization, eliminating the need for SAS tokens or access keys.
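For example, granting a function app's managed identity data-plane access to a scheduler is a single role assignment. The snippet below is a sketch; the built-in role name shown and the placeholder values should be verified against the documentation for your scenario:

```bash
# Grant the app's managed identity data-plane access to the scheduler.
# The role name and placeholder values are illustrative - confirm the exact
# built-in role and resource ID format in the documentation.
az role assignment create \
  --assignee "<principal-id-of-your-managed-identity>" \
  --role "Durable Task Data Contributor" \
  --scope "<resource-id-of-your-scheduler>"
```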
Local Emulator
To simplify the development experience, we are also launching a durable task scheduler emulator that can be run as a container on your development machine. The emulator supports the same durable task scheduler runtime APIs and stores data in local memory, enabling a completely offline debugging experience. The emulator also allows you to run the durable task scheduler management dashboard locally.
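For example, assuming Docker is installed, starting the emulator is a single command. The image name and port mappings below are illustrative; check the getting started documentation for the current values:

```bash
# Run the durable task scheduler emulator as a local container.
# Image name and exposed ports are illustrative - see the docs for current values.
docker run -d --name dts-emulator \
  -p 8080:8080 \
  -p 8082:8082 \
  mcr.microsoft.com/dts/dts-emulator:latest
```

In this sketch, one port serves the scheduler's gRPC endpoint that your app connects to and the other serves the local management dashboard.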
Pricing Plan
We’re excited to announce the initial launch of the durable task scheduler with a Dedicated, fixed pricing plan. One of the key pieces of feedback we’ve consistently received from customers is the desire for more upfront billing transparency. To address this, we’ve introduced a fixed pricing model with the option to purchase a specific amount of performance and storage through an abstraction called a Capacity Unit (CU). A single CU provides:
- Single tenancy with dedicated resources for predictable performance
- Up to 2,000 work items* dispatched per second
- 50GB of orchestration data storage
A Capacity Unit (CU) is a measure of the resources allocated to your durable task scheduler. Each CU represents a pre-allocated amount of CPU, memory, and storage resources. A single CU guarantees the dispatch of a certain number of work items and provides a defined amount of storage. If additional performance and/or storage are needed, more CUs can be purchased*.
A Work Item is a message dispatched by the durable task scheduler to your application, triggering the execution of orchestrator, activity, or entity functions. The number of work items that can be dispatched per second is determined by the Capacity Units allocated to the durable task scheduler. For detailed instructions on determining the number of work items your application needs and the number of CUs you should purchase, please refer to the guidance provided here.
*At the beginning of the public preview phase, schedulers will be temporarily limited to a single CU.
*Billing for the durable task scheduler will begin on May 1st, 2025.
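To make the sizing model concrete, here is a rough, illustrative estimate (the exact work item accounting is covered in the sizing guidance linked above): a chaining orchestration that calls five activities generates roughly ten work items – one for each activity invocation plus the orchestrator invocations that schedule them and process their results. At up to 2,000 work items dispatched per second, a single CU could therefore sustain on the order of 200 such orchestration starts per second, before accounting for entities, timers, or retries.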
Under the Hood
The durable functions team has been continuously evolving the architecture of the backends that persist the state of orchestrations and entities. The durable task scheduler is the latest installment in this series, combining the most successful characteristics of its predecessors with some significant improvements of its own. In the next paragraph, we shed some light on what is new. Of course, it is not necessary to understand these internal implementation details, and they are subject to change as we continue to improve and optimize the design.
Like the MSSQL provider, the durable task scheduler uses a SQL database as its storage foundation, providing robustness and versatility. Like the Netherite provider, it uses a partitioned design to achieve scale-out and a pipelining optimization to boost partition persistence. Unlike the previous backends, however, the durable task scheduler runs as a service on its own compute nodes, to which workers connect over gRPC. This significantly improves latency and load balancing, and it strongly isolates the workflow management logic from the user application, allowing the two to be scaled independently.
What can we expect next for the durable task scheduler?
One of the most exciting developments is the significant interest we’ve received in leveraging the durable task scheduler across other Azure compute offerings beyond Azure Functions, such as Azure Container Apps (ACA) and Azure Kubernetes Service (AKS). As we continue to enhance the integration with durable functions, we have also updated the durable task SDKs – the underlying technology behind the durable task framework and durable functions – to support the durable task scheduler directly. We refer to these durable task SDKs as the “portable SDKs” because they are client-only SDKs that connect directly to the durable task scheduler, where the managed orchestration engine resides, eliminating any dependency on the underlying compute platform – hence the name “portable”. By using the portable SDKs to author your orchestrations as code, you can deploy your orchestrations to any Azure compute offering while leveraging the durable task scheduler as the backend and benefiting from its full set of capabilities.
If you would like to discuss this further with our team or are interested in trying out the portable SDK yourself, please feel free to reach out to us at DurableTaskScheduler@microsoft.com. We welcome your questions and feedback.
We've also received feedback from customers requesting a versioning mechanism to facilitate zero-downtime deployments. This feature would enable you to manage breaking workflow changes by allowing all in-flight orchestrations using the older version to complete, while routing new orchestrations to the updated version. This is already in development and will be available in the near future.
Lastly, we are in the process of introducing critical enterprise features under the category of Business Continuity and Disaster Recovery (BCDR). We understand the importance of these capabilities as our customers rely on the durable task scheduler for critical production scenarios.
Get started with the durable task scheduler
Migrating an existing durable functions application to the durable task scheduler is a quick process. The transition requires only configuration changes, meaning your existing orchestrations and business logic remain unchanged.
The durable task scheduler is provided through a new Azure resource known as a scheduler. Each scheduler can contain one or more task hubs, which are sub-resources. A task hub, an established concept within durable functions, is responsible for managing the state and execution of orchestrations and activities. Think of a task hub as a logical way to separate the applications that require orchestration execution.
A Durable Task scheduler resource in the Azure Portal includes a task hub named dts-github-agent
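If you prefer scripting the setup, creating a scheduler and a task hub from the command line looks roughly like this. This is a sketch that assumes the durabletask Azure CLI extension; the exact command names, parameters, and available regions are listed on the getting started page:

```bash
# Install the CLI extension for the durable task scheduler
# (extension, command, and parameter names are illustrative - verify against the docs).
az extension add --name durabletask

# Create a scheduler resource, then a task hub inside it.
az durabletask scheduler create \
  --resource-group my-resource-group \
  --name my-scheduler \
  --location northcentralus

az durabletask taskhub create \
  --resource-group my-resource-group \
  --scheduler-name my-scheduler \
  --name my-taskhub
```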
Once you have created a scheduler and task hub(s), simply add the library package to your project and update your host.json to point your function app to the durable task scheduler endpoint and task hub. That’s all there is to it.
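For illustration, the host.json change looks roughly like the following; the storage provider type and connection string setting name shown here reflect the current getting started guidance and may evolve during the preview:

```json
{
  "version": "2.0",
  "extensions": {
    "durableTask": {
      "hubName": "%TASKHUB_NAME%",
      "storageProvider": {
        "type": "azureManaged",
        "connectionStringName": "DURABLE_TASK_SCHEDULER_CONNECTION_STRING"
      }
    }
  }
}
```

The corresponding app setting holds the scheduler endpoint and the authentication mode (for example, managed identity); the exact connection string format is described on the getting started page.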
With the correct authentication configuration applied, your applications can fully leverage the capabilities of the durable task scheduler. For more detailed information on how to migrate or start using the durable task scheduler, visit our getting started page here.
Get in touch to learn more
We are always interested in engaging with both existing and potential new customers. If any of the above interests you, if you have any questions, or if you simply want to discuss your scenarios and explore how you can leverage the durable task scheduler, feel free to reach out to us anytime. Our line is always open - DurableTaskScheduler@microsoft.com.