Microsoft Sentinel Blog

Run agentless SAP connector cost-efficiently

MartinPankraz
Oct 28, 2025

The SAP agentless connector uses SAP Integration Suite (Cloud Integration/CPI) to fetch SAP audit log data and forward it to Microsoft Sentinel. Because SAP CPI billing typically reflects message counts and data volume, you can tune the connector to control costs—while preserving reliability and timeliness.

Cost reductions primarily come from sending fewer CPI messages by increasing the polling interval. The max-rows parameter is a stability safeguard that caps events per run to protect CPI resources; it is not a direct cost-optimization lever. After any change, monitor CPI execution time and resource usage.

☝️Note: It may not be feasible to increase the polling interval on busy systems processing large data volumes. Larger intervals can lengthen CPI execution time and cause truncation when event spikes exceed max-rows. Cost optimization via longer intervals generally works best on lower-utilization environments (for example, dev and test) where event volume is modest and predictable.

Tunable parameters

Polling interval (default: 1 minute)
  • Purpose: How often the connector queries SAP and triggers a CPI message.
  • Cost impact: Lower message count at longer intervals → potential cost reduction.
  • Risk / trade-off: Larger batches per run can extend CPI execution time; spikes may approach max-rows, after which message processing for that interval is truncated.

max-rows (default: 150,000)
  • Purpose: Upper bound on events packaged per run to protect CPI stability.
  • Cost impact: None (safeguard); it does not reduce message count on its own.
  • Risk / trade-off: If too low, frequent truncation; if too high, runs may near CPI resource limits. Adjust cautiously and observe.

☝️Note: When event volume within one interval exceeds max-rows, the batch is truncated by design. Remaining events are collected on subsequent runs.
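The truncation-and-catch-up behavior described in the note can be sketched as a small simulation. This is illustrative only, not connector code; the function name and the event counts are hypothetical:

```python
MAX_ROWS = 150_000  # connector default cap per run

def run_batches(events_per_interval, max_rows=MAX_ROWS):
    """Return the batch size sent on each run, carrying any truncated
    remainder forward into the next run's backlog."""
    backlog = 0
    sent = []
    for new_events in events_per_interval:
        backlog += new_events
        batch = min(backlog, max_rows)  # cap enforced per run
        sent.append(batch)
        backlog -= batch
    return sent, backlog

# A 400k-event spike in one interval is drained over the following runs.
sent, remaining = run_batches([50_000, 400_000, 20_000, 0])
print(sent)       # [50000, 150000, 150000, 120000]
print(remaining)  # 0
```

The point of the sketch: nothing is lost on truncation, but delivery latency for the spilled events grows by one polling interval per catch-up run.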

Recommended approach

  1. Start with defaults. Use a 1-minute polling interval and max-rows = 150,000.
  2. Measure your baseline. Understand average and peak ingestion per minute (see KQL below).
  3. Optimize the polling interval first to reduce message count when costs are a concern.
  4. Treat max-rows as a guardrail. Change only if you consistently hit the cap; increase in small steps.
  5. Monitor after each change. Track CPI run duration, CPU/memory, retries/timeouts, and connector health in both SAP CPI and Sentinel.

💡Aim for the lowest interval that keeps CPI runs comfortably within execution-time and resource limits. Change one variable at a time and observe for at least a full business cycle.

🧐Consider the Azure Monitor Log Ingestion API limits as well, to close the loop on your sizing.

Analyze ingestion profile (KQL)

ABAPAuditLog
| where TimeGenerated >= ago(90d)
| summarize IngestedEvents = count() by bin(UpdatedOn, 1m)
| summarize MaxEvents = max(IngestedEvents),
    AverageEvents = toint(avg(IngestedEvents)),
    P95_EventsPerMin = percentile(IngestedEvents, 95)

How to use these metrics

  • AverageEvents → indicates typical per-minute volume.
  • P95_EventsPerMin → use this to size for spikes: choose a polling interval such that P95 × interval (minutes) stays comfortably below max-rows.
  • If MaxEvents × interval approaches max-rows, expect truncation and catch-up behavior—either shorten the interval or, if safe, modestly raise max-rows.
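The sizing rule above can be expressed as a few lines of code. This is a hypothetical helper, not part of the connector; the 0.8 headroom factor is an assumption, not a product recommendation:

```python
def max_safe_interval(p95_per_min, max_rows=150_000, headroom=0.8):
    """Longest polling interval (whole minutes) whose P95-sized batch
    stays below max_rows with the given headroom factor."""
    if p95_per_min <= 0:
        raise ValueError("p95_per_min must be positive")
    # Never go below the 1-minute default.
    return max(1, int((max_rows * headroom) // p95_per_min))

# Example: at a P95 of 12,000 events/min, 150,000 × 0.8 / 12,000 = 10,
# so a 10-minute interval keeps P95-sized batches at roughly 80% of the cap.
print(max_safe_interval(12_000))  # 10
```

Cross-check the result against MaxEvents as well: if MaxEvents at the chosen interval would exceed max-rows, expect occasional truncation and catch-up runs during spikes.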

Operational guidance

⚠️❗Large jumps (for example, moving from a 1-minute interval to 5 minutes and raising max-rows simultaneously) can cause CPI runs to exceed memory/time limits. Adjust gradually and validate under peak workloads (e.g., period close, audit windows).

  • Document changes (interval, max-rows, timestamp, rationale).
  • Alert on CPI anomalies (timeouts, retries, memory warnings).
  • Re-evaluate regularly in higher-risk periods when SAP event volume increases.

Balancing Audit Log Tuning and Compliance in SAP NetWeaver: Risks of Excluding Users and Message Classes

When tuning SAP NetWeaver audit logging via transaction SM19 (older releases) or RSAU_CONFIG (newer releases), administrators can filter by user or message class to reduce log volume, for example by excluding high-volume batch job users or specific event types. These exclusions carry compliance risks: omitting audit coverage for certain users or classes can undermine traceability, violate regulatory requirements, or mask unauthorized activity, especially when privileged or technical users are involved.

Furthermore, the missing events leave blind spots that weaken threat hunting in Microsoft Sentinel for SAP.

Best practice is to start with comprehensive logging, apply exclusions only after a documented risk assessment, and review settings regularly to ensure that all critical actions remain auditable and compliant with internal and external requirements.

Cost-Efficient Long-Term Storage for Compliance

Microsoft Sentinel Data Lake enables organizations to retain security logs - including SAP audit data - for up to 12 years at a fraction of traditional SIEM storage costs, supporting compliance with regulations such as NIS2, DORA and more. By decoupling storage from compute, Sentinel Data Lake allows massive volumes of security data to be stored cost-effectively in a unified, cloud-native platform, while maintaining full query and analytics capabilities for forensic investigations and regulatory reporting. This approach ensures that organizations can meet strict data retention and auditability requirements without compromising on cost or operational efficiency.

Summary

  • Use the polling interval to reduce message count (primary cost lever).
  • Keep max-rows as a safety cap to protect CPI stability.
  • Measure → adjust → monitor to achieve a stable, lower-cost configuration tailored to your SAP workload.
  • Use built-in mirroring to the Sentinel Data Lake to store SAP audit logs cost-efficiently for years.

Next Steps

Updated Oct 29, 2025
Version 2.0