Messaging on Azure Blog

Message brokers as the cornerstone of the next generation of agentic AI backends

EldertGrootenboer
Dec 15, 2025

We are seeing changes in the way agentic AI behaves. Instead of one-off model calls, we are starting to see networks of agents and MCP services working together. These networks will bring powerful integrations built from a variety of distributed components. Work arrives in unpredictable bursts. Some services end up overloaded while others sit idle. And because every call burns tokens and compute, wasted effort translates directly into real money.

Direct calls are no longer enough. We are going to need orchestration: a broker in the middle that absorbs spikes, queues up work until capacity becomes available, and handles retries. This approach keeps costs predictable by pacing work to match budgets and downstream capacity.

Message brokers, such as Azure Service Bus, provide exactly the capabilities this future needs. Queues and topics ensure that messages stay available. Sessions maintain order across related work. Dead-letter queues isolate failures without impacting the rest of the workload. Scheduled delivery and deferral allow retries and resequencing without custom logic. Message TTL ensures stale work is removed before it wastes resources. Duplicate detection enforces idempotency. These capabilities are not optional; they are essential for building systems at the scale we are going to need.

Why agentic AI backends need enterprise messaging

Agentic systems are evolving into ecosystems of cooperating components. Agents fan out hundreds or thousands of tasks, aggregate results that arrive at different times, and go through multiple refinements before reaching their final answers. Backends are not all available at once. Some become unresponsive. Others throttle. Yet the system still needs to make progress.

For example, imagine a travel booking agent. A user tells the agent where they want to go, how they want to travel, and what type of property they want to stay in. The agent then sends out a variety of tasks to various backend services to gather this information. Some services might provide information about flight options, others about hotels or alternative stays, and so on.

The agent gathers all the information and follows up with more tasks as needed, for example to confirm availability or to clarify specific requirements from the customer’s input. Services may respond out of order, as some are slower than others, or may come back with lower-quality responses.

Enterprise messaging provides the backbone that makes this possible. Queues and topics absorb bursts, preserve intent when services are offline, and regulate how fast work reaches downstream components. Routing decisions are based on workflow state, not on connectivity. Workers process at the rate they can sustain.

Scale matters, but so does cost. Unnecessary retries and unneeded calls quickly add up. Messaging reduces this waste by enabling scheduled retries, deferred steps, batching, and prioritization. The result is a more predictable system and fewer wasted tokens.
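As a rough sketch of what this looks like in practice, the snippet below uses the azure-servicebus Python SDK to batch small agent tasks into a single send and to schedule a retry for later rather than retrying immediately. The connection string, queue name, and payloads are placeholders, not part of any real workload.

```python
import os
from datetime import datetime, timedelta, timezone

from azure.servicebus import ServiceBusClient, ServiceBusMessage

# Placeholder connection details; adjust for your own namespace and queue.
CONN_STR = os.environ["SERVICEBUS_CONNECTION_STRING"]
QUEUE = "agent-tasks"

with ServiceBusClient.from_connection_string(CONN_STR) as client:
    with client.get_queue_sender(queue_name=QUEUE) as sender:
        # Batch many small agent tasks into one send to cut per-call overhead.
        batch = sender.create_message_batch()
        for i in range(50):
            batch.add_message(ServiceBusMessage(f'{{"task": {i}}}'))
        sender.send_messages(batch)

        # Schedule a retry for later instead of hammering a throttled backend now.
        retry = ServiceBusMessage('{"task": "retry-flight-search"}')
        sender.schedule_messages(retry, datetime.now(timezone.utc) + timedelta(minutes=5))
```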

A callback to the past

We have seen this pattern before. When applications needed to integrate multiple systems, enterprise messaging and service-oriented architecture helped manage complexity and orchestrate processes. The principle remains the same: decoupling and reliable communication are how we keep complex systems from breaking under their own weight. The difference now is that agentic AI workloads are more dynamic, more granular, and more expensive when they go wrong.

Why Azure Service Bus stands out

Not every messaging option meets these demands. Streaming brokers excel at event ingestion and analytics. Basic queues handle simple point-to-point flows. However, neither delivers the enterprise messaging features that agentic systems require: ordered delivery, correlation, controlled retries, and clear failure isolation.

After all, agentic systems are unpredictable by design. Steps complete out of order. Latency varies. Results arrive when they can. Azure Service Bus provides capabilities that are uniquely suited to turning this kind of disorder into a manageable workflow.

  • Sessions for correlation and ordered processing.
  • Dead-letter queues for isolating failures.
  • Scheduled delivery and deferral for controlled retries.
  • TTL for time-sensitive operations.
  • Duplicate detection for idempotency.

These are the foundational building blocks needed for reliable agentic backends at massive scale.
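To make these building blocks concrete, here is a minimal sketch with the azure-servicebus Python SDK that stamps a single task with a session ID, a message ID for duplicate detection, and a TTL. The queue name and payload are hypothetical, and duplicate detection has to be enabled on the queue itself for the message ID to matter.

```python
from datetime import timedelta

from azure.servicebus import ServiceBusClient, ServiceBusMessage

CONN_STR = "<your-connection-string>"  # placeholder
QUEUE = "agent-work"                   # hypothetical queue with duplicate detection enabled

msg = ServiceBusMessage(
    '{"step": "search-hotels", "trip": "trip-42"}',
    session_id="trip-42",                # sessions: correlate and order all work for this trip
    message_id="trip-42-search-hotels",  # duplicate detection keys off the message ID
    time_to_live=timedelta(minutes=10),  # TTL: stale work expires instead of lingering
)

with ServiceBusClient.from_connection_string(CONN_STR) as client:
    with client.get_queue_sender(queue_name=QUEUE) as sender:
        sender.send_messages(msg)
```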

Patterns for the future

As these systems grow, a few patterns are going to become critical.

Scatter / Gather

Agents will distribute work across many backend workers and then combine the results. Topics fan out these tasks. Sessions make sure that related messages are kept together. Additionally, dead-letter queues can isolate failures without blocking progress for the rest of the workload.
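A minimal sketch of this pattern with the azure-servicebus Python SDK might look like the following. The topic and queue names are illustrative, and the workers that process the fanned-out tasks are assumed to publish their results to a session-enabled results queue keyed by the request ID.

```python
import uuid

from azure.servicebus import ServiceBusClient, ServiceBusMessage

CONN_STR = "<your-connection-string>"  # placeholder
request_id = str(uuid.uuid4())

with ServiceBusClient.from_connection_string(CONN_STR) as client:
    # Scatter: publish one task per backend to a topic; each worker listens on its own subscription.
    with client.get_topic_sender(topic_name="travel-tasks") as sender:
        sender.send_messages([
            ServiceBusMessage('{"kind": "flights"}', reply_to_session_id=request_id),
            ServiceBusMessage('{"kind": "hotels"}', reply_to_session_id=request_id),
        ])

    # Gather: workers reply to a session-enabled results queue, using the request id as the
    # session id, so every reply for this request is read together and in order.
    with client.get_queue_receiver(queue_name="travel-results", session_id=request_id) as receiver:
        for result in receiver.receive_messages(max_message_count=10, max_wait_time=30):
            print(str(result))
            receiver.complete_message(result)
```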

Request / Proposal / Refinement

Agentic AI does its work through iteration. An agent proposes an action, receives partial responses, and refines until the result meets a threshold. Deferral and scheduled delivery control the timing of the corresponding messages. TTL makes sure messages for stale proposals are removed when they are no longer needed. Finally, duplicate detection keeps retries safe by ensuring repeated messages are not delivered to the backend systems again.
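The deferral half of this pattern could be sketched as below, assuming a hypothetical proposals queue and a placeholder proposal_meets_threshold check. Proposals that are not yet good enough are parked with deferral and pulled back by sequence number once refinement can continue.

```python
from azure.servicebus import ServiceBusClient

CONN_STR = "<your-connection-string>"  # placeholder
QUEUE = "proposals"                    # hypothetical queue of proposal messages


def proposal_meets_threshold(msg) -> bool:
    """Hypothetical scoring check; a real agent would evaluate the proposal body."""
    return "final" in str(msg)


with ServiceBusClient.from_connection_string(CONN_STR) as client:
    with client.get_queue_receiver(queue_name=QUEUE, max_wait_time=30) as receiver:
        deferred = []
        for msg in receiver.receive_messages(max_message_count=20):
            if proposal_meets_threshold(msg):
                receiver.complete_message(msg)
            else:
                receiver.defer_message(msg)        # park it until refinement can continue
                deferred.append(msg.sequence_number)

        # Later, once better responses have arrived, pull the parked proposals back.
        if deferred:
            for msg in receiver.receive_deferred_messages(deferred):
                receiver.complete_message(msg)
```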

Saga-like coordination

Multi-step workflows require ordered execution and detailed progress tracking. Sessions enforce sequential processing. Session state can be used to record what is done and what remains. Furthermore, dead-letter queues capture failures for targeted repair while other workflows continue.
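Here is one way this might be sketched with session state in the azure-servicebus Python SDK. The queue name, session ID, and handle_step helper are assumptions for illustration, and the queue would need sessions enabled.

```python
import json

from azure.servicebus import ServiceBusClient

CONN_STR = "<your-connection-string>"  # placeholder
QUEUE = "booking-saga"                 # hypothetical session-enabled queue


def handle_step(msg) -> None:
    """Hypothetical step handler; a real saga would call the relevant backend here."""
    print(f"processing step {msg.subject}")


with ServiceBusClient.from_connection_string(CONN_STR) as client:
    with client.get_queue_receiver(queue_name=QUEUE, session_id="trip-42") as receiver:
        # Session state travels with the session and records saga progress.
        raw = receiver.session.get_state()
        progress = json.loads(raw) if raw else {"completed_steps": []}

        for msg in receiver.receive_messages(max_message_count=5, max_wait_time=30):
            try:
                handle_step(msg)
                progress["completed_steps"].append(str(msg.subject))
                receiver.complete_message(msg)
            except Exception as exc:
                # Capture the failing step for targeted repair; other sessions keep flowing.
                receiver.dead_letter_message(msg, reason="step-failed", error_description=str(exc))

        receiver.session.set_state(json.dumps(progress))
```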

Backpressure and load shaping

Loads can spike, especially with agentic AI, and components can fall behind. This is where queues come in, buffering the work. Scheduled delivery and concurrency control smooth arrivals at the backend workers. Lock renewal protects long-running tasks. The goal is to ensure steady latency and prevent cascading failures.
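A rough sketch of load shaping on the consumer side, assuming a hypothetical agent-tasks queue and run_agent_task handler: the worker pulls only a small batch at a time so the queue absorbs the rest, and AutoLockRenewer keeps message locks alive while long-running model calls complete.

```python
from azure.servicebus import AutoLockRenewer, ServiceBusClient

CONN_STR = "<your-connection-string>"  # placeholder
QUEUE = "agent-tasks"                  # hypothetical queue


def run_agent_task(msg) -> None:
    """Hypothetical long-running handler, e.g. a model call or a tool invocation."""
    print(f"working on {msg.message_id}")


renewer = AutoLockRenewer(max_lock_renewal_duration=600)  # protect long-running work

with ServiceBusClient.from_connection_string(CONN_STR) as client:
    with client.get_queue_receiver(queue_name=QUEUE, max_wait_time=30) as receiver:
        while True:
            # Pull only what this worker can actually handle; the queue buffers the rest.
            batch = receiver.receive_messages(max_message_count=5)
            if not batch:
                break
            for msg in batch:
                renewer.register(receiver, msg)  # keep the lock alive while the task runs
                run_agent_task(msg)
                receiver.complete_message(msg)

renewer.close()
```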

Closing thoughts

Agentic AI does not behave uniformly. Workloads spike. Steps finish at different times. Availability depends on demand. Designing for this reality is essential if we want systems that scale and deliver consistent results.

Messaging provides the stability these architectures need. Azure Service Bus brings the capabilities that make orchestration practical and repeatable at the scale that is going to be needed. With the right patterns in place, irregular and asynchronous interactions become workflows that can be managed and controlled.

Messaging is not just a transport decision; it is a design principle for the next generation of agentic AI backends!
