SAP CDC Connector and SLT - Part 2 - Initial Configuration
Published Mar 29 2023

Welcome back to the blog series on SAP CDC Connector and SLT! In the previous post, we discussed the architecture and deployment options. Now that you have a solid understanding of how the solution works, it's time to dive into the initial configuration process. In this post, I'll guide you through the essential first steps of setting up the SLT to work with Azure. 

 

INITIAL SLT CONFIGURATION
To start the replication process, you need to set up SAP SLT. You do this by creating configurations that control the replication process. You manage configurations in the SLT Cockpit (t-code: LTRC). In the Standalone deployment, start the cockpit on the Replication Server, not on the source system.

 

image011.png

 

You can define multiple configurations in a single SLT. Each configuration is a combination of the source and target system with specific replication settings.

 

The user interface of the SLT Cockpit varies slightly between releases, with visible changes even between support package levels. In this guide, I'm using an SAP S/4HANA 2022 system with the SLT being part of the S4CORE component, but if you’re using a different release, you can still follow this post as the basic principles remain the same.

 

The initial screen of the SLT Cockpit lists all previously created configurations. Each configuration has a unique identifier called Mass Transfer ID (MTID). Click the Create Configuration button to define a new one. It opens a wizard that guides you through basic settings.

You start by specifying a user-friendly configuration name and an optional description. For replication to the Operational Data Provisioning framework, you don't have to maintain any other fields on this screen.

 

image013.png

 

In the next step, provide the RFC destination to the source system. By default, the Read from Single Client checkbox is not selected, which means that SLT replicates changes regardless of the client you use. Usually, that is not the desired approach. It’s also good practice to ensure Allow Multiple Usage is selected, as it allows you to use the same source system in multiple configurations. Once the configuration is created, you can no longer change this setting. You can leave the other settings unchanged.

 

image015.png

 

You specify the target for replicated data in the third step. Choose Other and select Operational Data Provisioning. Provide the Queue Name - you will use it later in Azure as a replication Context to identify the SLT configuration.

 

image017.png

The last configuration step is all about performance. I suggest changing the Initial Load Mode to Resource Optimized; I describe the difference between the Initial Load Modes in the Advanced Configuration section of this guide. The number of Data Transfer / Initial Load / Calculation Jobs should come from the sizing you perform. For a small configuration with a limited number of tables in scope, SAP recommends using no fewer than two data transfer jobs. There is a 1-to-1 correlation between the specified data transfer jobs and the dialog work processes in the source system, so ensure that your source system has enough work processes to accommodate this additional workload. To optimize the performance of the initial load, you can reserve some data transfer jobs just for this purpose in the Initial Load Jobs field. The number of Initial Load Jobs must be lower than the total number of data transfer jobs; otherwise, after the initial load, the system won’t replicate changes.

 

When you start the replication, Calculation Jobs run an initial assessment of the data stored in the source table and chunk the data into smaller pieces. You can find more information about calculation jobs in the Advanced Configuration chapter. SAP recommends using at least two calculation jobs.
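To make these rules of thumb easier to apply, here is a minimal sketch in Python that encodes the constraints described above. The job counts in the example are purely hypothetical; derive the real numbers from your own sizing exercise.

```python
# Minimal sketch of the SLT job sizing rules of thumb described above.
# The numbers used here are hypothetical - derive yours from a proper sizing.

def check_slt_sizing(data_transfer_jobs: int,
                     initial_load_jobs: int,
                     calculation_jobs: int,
                     free_dialog_work_processes: int) -> list[str]:
    """Return a list of warnings for an SLT performance configuration."""
    warnings = []
    if data_transfer_jobs < 2:
        warnings.append("SAP recommends at least 2 data transfer jobs.")
    if calculation_jobs < 2:
        warnings.append("SAP recommends at least 2 calculation jobs.")
    if initial_load_jobs >= data_transfer_jobs:
        warnings.append("Initial Load Jobs must be lower than the total number of "
                        "data transfer jobs, otherwise delta replication stalls "
                        "after the initial load.")
    # Each data transfer job occupies one dialog work process in the source system.
    if data_transfer_jobs > free_dialog_work_processes:
        warnings.append("Not enough free dialog work processes in the source system.")
    return warnings


# Hypothetical example: 4 data transfer jobs, 2 of them reserved for the initial
# load, 2 calculation jobs, and 6 spare dialog work processes on the source.
print(check_slt_sizing(4, 2, 2, 6))   # -> [] (no warnings)
```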

 

image019.png

 

Review and confirm your settings. Your configuration is now active!

 

The final configuration step is enabling a dedicated BAdI. Open the created configuration and navigate to the Expert Functions tab. Choose Activate / Deactivate BAdI Implementation. If you skip this step, the SLT queue will not appear in the list of available Contexts in the SAP CDC connector. You only do this once per system - if you decide to delete and recreate the configuration, you don't have to enable the BAdI again.

 

image021.png

 

AZURE CONFIGURATION

To extract tables using the SLT, head over to Azure Data Factory or Synapse; you don’t have to maintain any additional settings in the SLT Cockpit. If you haven't yet created a pipeline with a mapping data flow, please refer to my older post, where I describe the full process end to end. The only difference when using SLT as the source is choosing the right Context.

imgx01.png

Open the "Mapping Data Flow" and select the "Source" action. Switch to the "Source Options" tab. Select the created SLT configuration in the "ODP Context" field using the chosen Queue Alias name. Provide the table you want to extract in the "ODP Name". The "Run mode" lets you choose the extraction type. You have two options: run a one-time full extraction of your dataset (Full on every run) or a full extraction followed by incremental delta changes in the subsequent runs (Full on the first run, then incremental). Finally, provide the Key Columns to ensure your data can be merged into a consistent datastore.

 

If you can't find the SLT queue name in the Context list, you most likely forgot to enable the BAdI! Scroll back up to the BAdI activation step, where I describe how to do it. Don't worry, that's a common mistake.

 

image023.png

That's it! Both SAP and Azure are now ready to replicate data - but that's something for next week! I'll show you how to monitor the initial and delta extraction. We'll also go through all the relevant tools that let you understand what's going on under the hood.
