Messaging and eventing systems exist to move transient information, such as events and commands, between the parts of a system. Larger information items, such as rich documents or very large inventory lists, are commonly handled in file systems instead, not least because they typically have more permanence and are reused.
A file system or web-based file store has protocol facilities that address the particular challenges of transferring large files, including recovering from dropped connections. Generally, the most efficient way to move large files between applications or services is through a shared file system or shared web store that both the publisher and the consumers of the file can access, and to announce each upload to those consumers via an event delivered through a messaging/eventing system. This technique is commonly referred to as the "claim-check" pattern.
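As a rough illustration of the claim-check pattern, the sketch below uses a local directory as a stand-in for the shared store and a small JSON document as the claim-check event. In a real deployment the store would be, for instance, Azure Blob Storage, and the event would travel through Service Bus, Event Grid, or Event Hubs; all names here are illustrative.

```python
import json
import pathlib
import tempfile
import uuid

# Stand-in for a shared blob/file store that publisher and consumers can reach.
shared_store = pathlib.Path(tempfile.mkdtemp())

def publish(payload: bytes) -> str:
    """Upload the large payload to the shared store and return the small
    claim-check event that would be sent through the messaging system."""
    blob = shared_store / f"{uuid.uuid4()}.bin"
    blob.write_bytes(payload)
    return json.dumps({"event": "file-uploaded", "ref": str(blob)})

def consume(event: str) -> bytes:
    """Resolve the claim check back to the full payload."""
    ref = json.loads(event)["ref"]
    return pathlib.Path(ref).read_bytes()
```

The point of the pattern is visible in the sizes: the payload can be arbitrarily large, while the event that actually traverses the messaging system stays tiny.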
While this is and remains our best-practice guidance for all Azure Messaging services, there are circumstances in which moving large documents directly through a messaging system is a good alternative. A leading example is migrating existing workloads that were built around message size limits larger than Service Bus previously allowed.

That migration scenario was the primary motivation for introducing the "Large Messages" feature in Azure Service Bus Premium. The feature is a new, per-entity maximum message size setting that can range from 1 MB to 100 MB and defaults to 1 MB.
The configured limit protects your application from messages larger than its existing buffers and memory configuration anticipate. It can be set to the exact value you need, and Service Bus enforces that limit for the entity.
The introduction of full JMS 2.0 compliance for Service Bus Premium at the beginning of 2021 is convincing more and more customers to drop their existing, expensive JMS message broker products, in some cases even on-premises clusters, and replace them with Azure Service Bus for existing workloads. Those workloads were often built on the assumption of message size limits exceeding the prior Azure Service Bus limit of 1 MB per message, which was a blocker for adoption. With the new maximum message size of 100 MB, the vast majority of those customers are now unblocked for their migrations.
The new message size limit can be used with all Service Bus entities and features and requires no new APIs or special handling beyond the configuration setting.
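As a sketch of what that configuration can look like in code, the snippet below uses the administration client from the Python azure-servicebus SDK. The helper function and the `max_message_size_in_kilobytes` parameter name are assumptions to verify against your SDK version; the 1 MB to 100 MB bounds come from the feature itself.

```python
def max_message_size_kb(size_mb: int) -> int:
    """Service Bus Premium accepts per-entity limits from 1 MB to 100 MB;
    the management API expresses the limit in kilobytes."""
    if not 1 <= size_mb <= 100:
        raise ValueError("Premium large-message limits range from 1 MB to 100 MB")
    return size_mb * 1024

def create_large_message_queue(conn_str: str, queue_name: str, size_mb: int = 100) -> None:
    """Create a queue with an enlarged per-entity message size limit.
    Sketch assuming the azure-servicebus SDK; verify parameter names
    against the SDK version you use."""
    from azure.servicebus.management import ServiceBusAdministrationClient
    client = ServiceBusAdministrationClient.from_connection_string(conn_str)
    client.create_queue(
        queue_name,
        max_message_size_in_kilobytes=max_message_size_kb(size_mb),
    )
```

The same setting can equally be applied in the Azure portal or via ARM templates; nothing in the send/receive path changes.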
Throughput performance matters a lot for large messages, of course. A greater Messaging Units (MU) allocation for the Service Bus namespace allows for larger buffer sizes. At 8+ MU, a single queue can move large messages at over 50 MB/sec. When using pub/sub distribution with Service Bus topics, large messages do not suffer the same adverse performance impact as with brokers that maintain a distinct queue per subscription, because the underlying log for message contents is shared among all subscriptions, so the transfer into the log occurs only once. At 8+ MU, a topic with 5 concurrently used subscriptions can move messages at over 22 MB/sec each, for a total of 110 MB/sec of output throughput. All this, of course, with the safety net of triple-replicated, flushed-to-disk persistence spanning Azure availability zones.
If you use Event Grid and/or Event Hubs and believe you need transfers of this magnitude, consider implementing that particular communication path with a Service Bus queue or topic while leaving your other communication paths unchanged. Many of our customer conversations show that such transfers are made with the expectation that the documents are handled exactly once by one or many consumers, and Service Bus has the right settlement features for tracking when such a job has been completed; it will also move a document into a dead-letter queue for inspection when it repeatedly causes processing failures.
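The settlement behavior described above can be sketched as a receive loop, here assuming the Python azure-servicebus SDK (names and error handling simplified; a sketch, not a complete consumer). Completing a message records that the job is done; abandoning it makes the broker redeliver it and, once the entity's maximum delivery count is exhausted, move it to the dead-letter queue.

```python
def process_with_settlement(conn_str: str, queue_name: str, handle) -> None:
    """Receive (possibly large) messages and settle each one explicitly.
    Sketch assuming the azure-servicebus SDK."""
    from azure.servicebus import ServiceBusClient
    with ServiceBusClient.from_connection_string(conn_str) as client:
        with client.get_queue_receiver(queue_name) as receiver:
            for msg in receiver:
                try:
                    handle(msg)                     # process the document
                    receiver.complete_message(msg)  # settle: work recorded as done
                except Exception:
                    # Abandoned messages are redelivered; after the entity's
                    # MaxDeliveryCount is exceeded, the broker dead-letters
                    # the message for offline inspection.
                    receiver.abandon_message(msg)
```
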
The Large Messages feature is now generally available (GA) and is a feature of Service Bus Premium. There is no extra cost associated with using the feature.