SAP CDC Connector and SLT - Part 1 - Overview and architecture
Published Mar 23 2023

The SAP CDC connector, which is part of Azure Data Factory and Azure Synapse Analytics, provides a simple and efficient way to access the information that is most valuable to your organization. With its ability to extract data from a wide range of source objects, the SAP CDC connector makes it easy for businesses to extract and process SAP data, turning it into a valuable source of information for analytics.

 

In many cases, customers want to access data stored directly in ABAP tables. Traditional data extraction methods, such as extractors or CDS views, may not always provide the level of specificity required: they add a transformation layer that exposes data in a multidimensional architecture with a set of facts and dimensions. To extract data directly from tables, you can use a specialized add-on, the SAP Landscape Transformation Replication Server (SAP SLT). It uses database triggers to continuously monitor and record all changes made to the data stored in ABAP tables. It natively integrates with the SAP Operational Data Provisioning framework, which means you can use the SAP CDC connector to fetch the desired information and place it in your target storage, ensuring that you always have the most up-to-date and accurate data at your fingertips.

[Image: SAPDataModel.png]

Throughout this blog series, I will be highlighting the use of the SAP CDC connector and SAP SLT for data extraction. While this first episode covers the high-level architecture, there is much more to come. If you're new to the SAP CDC connector, I highly suggest checking out my previous post, where I introduce the basics of data extraction from SAP systems.

 

OVERVIEW AND ARCHITECTURE

SAP SLT is a commonly used solution for data replication between SAP systems - for example, extracting data from SAP ECC or SAP S/4HANA to Central Finance. In sidecar scenarios, you use the replication engine to load data to external databases to run analytics without impacting the performance of the transactional system. As SLT is also one of the possible data sources for the SAP Operational Data Provisioning framework, you can use the SAP CDC connector to extract the data it captures.
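To make this concrete, here is a minimal sketch of what the Azure Data Factory artifacts for such an extraction could look like, written as Python dicts that mirror the JSON payloads. The property names follow my reading of the public documentation for the SAP CDC (SapOdp) connector and may differ in your version; server names, credentials, and the queue alias are placeholders you would replace with your own values.

```python
# Illustrative sketch only: ADF linked service and dataset payloads
# (shown as Python dicts) for reading from SLT through the ODP framework.
# Verify property names against the current SAP CDC connector docs.

linked_service = {
    "name": "SapCdcLinkedService",
    "properties": {
        "type": "SapOdp",
        "typeProperties": {
            "server": "<SAP application server>",  # placeholder
            "systemNumber": "00",
            "clientId": "100",
            "userName": "<user>",
            "password": {"type": "SecureString", "value": "<password>"},
            # Name under which this extraction subscribes in the ODP queue.
            "subscriberName": "<subscriber name>",
        },
        # Self-hosted integration runtime that can reach the SAP system.
        "connectVia": {
            "referenceName": "<SHIR name>",
            "type": "IntegrationRuntimeReference",
        },
    },
}

dataset = {
    "name": "VbakFromSlt",
    "properties": {
        "type": "SapOdpResource",
        "linkedServiceName": {
            "referenceName": "SapCdcLinkedService",
            "type": "LinkedServiceReference",
        },
        "typeProperties": {
            # With SLT as the provider, the ODP context points at the
            # SLT queue alias, e.g. "SLT~<queue alias>".
            "context": "SLT~<queue alias>",
            "objectName": "VBAK",
        },
    },
}
```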

 

There are two deployment patterns available:

  • Embedded - where the SLT engine is part of the source system
  • Standalone - where the SLT runs on a separate system

In the embedded deployment, the SLT engine is part of your source system, like SAP ERP or S/4HANA. This minimizes the maintenance effort, as you don't have to install and run a separate SAP NetWeaver system. But as SLT is an additional workload, you need to ensure you have extra room to run this component. In the standalone deployment, you use a separate system, called the replication server, to run SLT, which limits the performance hit on the source but requires you to maintain extra infrastructure.

 

In real life, customers with a small SAP landscape that replicate data from a single system usually follow the embedded architecture. But if you plan to extract data from multiple sources, if your SAP system is difficult to scale, or if you replicate data from a non-ABAP system, consider using an additional system.

[Image: SLTArchitecturePattern.png]

The SAP SLT framework is part of the DMIS (Data Migration Server) component. In the standalone deployment, you must install it on both systems: the source (transactional) system as well as the one dedicated to running the SLT engine. In S/4HANA 2020 and higher, the add-on is already built into the S4CORE component.

 

There are three versions of the DMIS component. You should choose the one that suits your deployment model and the SAP NetWeaver release:

  • DMIS 2011 – SAP NetWeaver up to 7.52 – SAP Note 1577441
  • DMIS 2018 – SAP NetWeaver 7.52 – SAP Note 2669326
  • DMIS 2020 – SAP S/4HANA Foundation 2020 and higher – SAP Note 2954022

 

In standalone deployments, the version of the DMIS component doesn't have to perfectly match the one in the source system. For every release and support package stack, SAP provides a compatibility matrix in the release information note. Refer to SAP Note 2675613 to find the release information for your component version.

[Image: SLTReleaseInformation.png]

 

During the replication process, database triggers track all changes to the data stored in the source tables. Every operation is registered in a logging table, and a function module transfers the data to the specified target. SLT automatically creates all the required objects when you initiate the data extraction. Keeping changes in the logging tables provides a level of fault tolerance that prevents data loss when the system that manages the replication is temporarily unavailable. In such a case, once the replication process is re-established, SLT can easily identify all changes that haven't been replicated yet and continue the operation.

[Image: SLTLoggingMechanism.png]
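To make the mechanism concrete, here is a minimal, self-contained Python sketch of the idea behind trigger-based change recording. In SLT, database triggers do this work inside the source database and all objects are generated in the ABAP stack; every name here is invented for illustration.

```python
from datetime import datetime, timezone

source_table = {}   # primary key -> row; stands in for an ABAP table
logging_table = []  # stands in for a generated /1CADMC/* logging table

def trigger(operation, primary_key):
    """Record the operation and the key of the changed row - not the data."""
    logging_table.append({
        "primary_key": primary_key,
        "operation": operation,  # "I"nsert, "U"pdate, "D"elete
        "changed_at": datetime.now(timezone.utc),
    })

def insert_row(key, row):
    source_table[key] = row
    trigger("I", key)

def update_row(key, row):
    source_table[key] = row
    trigger("U", key)

def delete_row(key):
    del source_table[key]
    trigger("D", key)

insert_row(("100", "0000000001"), {"ERDAT": "20230323", "VKORG": "1000"})
update_row(("100", "0000000001"), {"ERDAT": "20230323", "VKORG": "2000"})
print(logging_table)  # two entries, each holding only the key and operation
```

Because the logging entries persist in the source database, they survive an outage of the replication server: whatever accumulated while it was down is simply processed once it reconnects.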

 

You can list the names of the created objects by choosing the table name in the SLT cockpit (transaction code LTRC) and clicking the magnifier icon:

[Image: DisplayDetails.png]

 

Logging tables don't store the whole data record. Instead, they use the primary key to identify changed rows. Then, during the extraction, the function module responsible for the data transfer looks up the latest values and passes them to the target. The process differs slightly when a record is deleted: in that case, only the primary key is passed to the target, as the row no longer exists in the source table.
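Continuing the sketch from above, the transfer step could look like the following: inserts and updates are resolved against the source table at transfer time, while deletes carry only the primary key. Again, this is purely illustrative; the real function module is generated ABAP code.

```python
# Illustrative: resolve logging entries into full records for the target.

source_table = {
    ("100", "0000000002"): {"ERDAT": "20230323", "VKORG": "1000"},
}
logging_table = [
    {"primary_key": ("100", "0000000002"), "operation": "U"},
    {"primary_key": ("100", "0000000001"), "operation": "D"},
]

def transfer(logging_table, source_table):
    """Turn logging entries into records for the target, then clear them."""
    batch = []
    for entry in logging_table:
        key, operation = entry["primary_key"], entry["operation"]
        if operation == "D":
            # The row is gone: only the key can be sent to the target.
            batch.append({"key": key, "operation": "D", "row": None})
        else:
            # Look up the latest values in the source table at transfer time.
            batch.append({"key": key, "operation": operation,
                          "row": source_table[key]})
    logging_table.clear()  # entries are removed once transferred
    return batch

for record in transfer(logging_table, source_table):
    print(record)
```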

 

In the example below, /1CADMC/00000513 is the logging table created for replicating data from the VBAK table. It contains an entry identifying a changed record, based on the primary key of the source table – MANDT and VBELN. You can also see additional fields that keep the status of the change and the associated operation.

[Image: LoggingTable.png]

The additional workload can impact the overall system performance. Before starting the replication process, please ensure you have enough resources available. To get a rough estimate of the required resources, I recommend starting with the T-shirt sizing available in the SAP sizing guide.

 

That wraps up the first part of this blog series on using the SAP CDC connector with SLT for data extraction. We've covered a lot of ground, from deployment patterns to the inner workings of data replication. In the second part of this series, we'll dive into the basics of configuration, so stay tuned! We'll be exploring how to set up your environment and connect your SAP system to Azure Data Factory or Azure Synapse, enabling you to extract data efficiently and effectively in real time. Don't miss out on the next episode!
