We are excited to announce the general availability of Log Analytics data export, a capability that lets you continuously export ingested data from selected tables in your Log Analytics workspace and send it to an Azure Storage account or Azure Event Hubs.
During the public preview, we implemented many improvements to accommodate the high ingestion rates that many of our large customers have. We also collected valuable feedback and introduced more flexibility in rules for Event Hubs, for better utilization of the various Event Hubs tiers. Export configuration is defined in the Log Analytics workspace and can include up to 10 enabled rules, each with one or more tables.
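As a sketch of what a rule looks like, the Azure CLI can create one with `az monitor log-analytics workspace data-export create`; the resource group, workspace, rule name, and destination resource ID below are placeholders you would replace with your own:

```shell
# Create an export rule sending two tables to a Storage account.
# All names and the <subscription-id> segment are placeholders.
az monitor log-analytics workspace data-export create \
  --resource-group myResourceGroup \
  --workspace-name myWorkspace \
  --name ruleStorage \
  --tables SecurityEvent Heartbeat \
  --destination "/subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.Storage/storageAccounts/mystorageaccount"
```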
Common data export scenarios
Data in Log Analytics is available for the retention period in your workspace and is used in various Azure experiences such as insights, Sentinel, interactive queries and more. The new Archived Logs (Preview) lets you retain data in your workspace for up to seven years at a reduced cost, with some limitations on usage. Even so, no single solution fits every case, and you might need your data outside of Azure Monitor for scenarios such as:
Security and audit conformance - export to a Storage account to keep data for a very long time at low cost.
Integration with other tools - export to Event Hubs for use with your own SIEM solution, or to Azure Data Lake Storage Gen2 for integration with Synapse and correlation with data coming from other sources.
Tamper-protected store - although data can't be altered in Log Analytics once ingested, it can be purged. Exporting to a Storage account with immutability policies keeps data tamper protected.
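A time-based immutability (WORM) policy can be applied to the container holding exported data. The command below is a hedged sketch; the account and container names are placeholders (export writes to containers whose names are derived from the table name), and the period is expressed in days:

```shell
# Apply a ~7-year (2555-day) immutability policy to an export container.
# Account and container names are placeholders; the container must already exist.
az storage container immutability-policy create \
  --account-name mystorageaccount \
  --container-name am-securityevent \
  --period 2555
```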
How it works
Data export is designed for scale and can support terabytes of logs per day for each of your workspaces. The data export flow relies on destination resources that you own and manage, and you must ensure sufficient ingress capacity for proper export operation and to prevent failures. See 'Scale considerations' for details.
When you configure an export rule, new data ingested to the specified tables in your workspace is also sent to your destination from the Azure Monitor pipeline as it arrives, without any filtering of the data.
Not all tables can be exported yet. The reason is that not all streams flow through the new export infrastructure yet, and we add support for more tables continuously; a list of supported tables is available and updated regularly. You can include unsupported tables in export rules, and data for these tables will start exporting automatically once the tables become supported. The current custom logs (also known as CLv1) aren't supported in export, but the revamped custom logs (CLv2), which started their preview in February 2022, can be exported. See more information on custom logs (CLv2) in the Azure Monitor announcement.
Billing for the Log Analytics data export feature isn't enabled yet and will be announced once available. If you choose to keep using data export after billing starts, you will be billed accordingly. View more details on the pricing page.
Historical data in a Log Analytics workspace can be exported using alternative methods that rely on the Log Analytics query API; these are subject to the API's limits and aren't meant for large data sets.
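For small historical slices, one such alternative is querying a bounded time range and saving the result. This is a hedged sketch using the Azure CLI's Log Analytics query command; the workspace GUID, table, and time range are placeholders, and output size is constrained by the query API limits:

```shell
# Pull one day of SecurityEvent history to a local file (query API limits apply).
# The workspace GUID is a placeholder.
az monitor log-analytics query \
  --workspace "00000000-0000-0000-0000-000000000000" \
  --analytics-query "SecurityEvent | where TimeGenerated between (datetime(2022-01-01) .. datetime(2022-01-02))" \
  --output tsv > SecurityEvent_2022-01-01.tsv
```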
Scale considerations
While there isn't a limit on the data volume that Azure Monitor can ingest and export, successful export relies on your destination resources. Storage accounts and Event Hubs have limits that export is bounded by. By monitoring your destinations and responding appropriately to data volume increases or failures, data export can deliver terabytes of logs per day.
To support high-scale scenarios, we provide flexibility in rule configuration, giving you control over data separation and isolation: tables can be split across different Storage accounts and Event Hubs. For example, if SecurityEvent has a "massive" ingestion rate, create an export rule that includes SecurityEvent alone and sends it to a separate Storage account for proper ingress bandwidth, or verify sufficient throughput units when exporting it to Event Hubs.
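The isolation pattern above can be sketched as two rules, one per destination; all resource names and the <subscription-id> segments are placeholders:

```shell
# Rule 1: isolate high-volume SecurityEvent to its own Storage account.
az monitor log-analytics workspace data-export create \
  -g myResourceGroup --workspace-name myWorkspace -n ruleSecurityEvent \
  --tables SecurityEvent \
  --destination "/subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.Storage/storageAccounts/securitystore"

# Rule 2: send the remaining tables to an Event Hubs namespace.
az monitor log-analytics workspace data-export create \
  -g myResourceGroup --workspace-name myWorkspace -n ruleOtherTables \
  --tables Heartbeat Perf \
  --destination "/subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.EventHub/namespaces/myEventHubNamespace"
```

Keeping the heaviest table in its own rule lets you size each destination's ingress capacity independently.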
For more information, see the Log Analytics data export documentation.