Deploying Microsoft Sentinel Threat Monitoring for SAP agent into an AKS/Kubernetes cluster
Published Jul 05 2022 12:00 PM

**UPDATED 2022/12/02**

This article was updated to fix several technical issues that were preventing the successful creation of the AKS deployment.

 

**UPDATED 2022/11/01**

This article was updated because one of the components (pod-managed identities in Azure Kubernetes Service) has been retired in favor of workload identity for Azure Kubernetes Service. Many thanks to Richard Timmering (@azwildfire) for pointing this out and providing a workaround.

 

Quick Intro

Effectively monitoring SAP environments has traditionally been very difficult to achieve.

Microsoft recently released the Microsoft Sentinel Threat Monitoring for SAP solution, which helps protect your SAP environments.

 

To see how you can deploy our solution/scenarios, check out https://aka.ms/sentinel4sapintro

One of the common questions the Sentinel team gets asked is “How do we make the Microsoft Sentinel Threat Monitoring for SAP solution highly available?”

 

The original deployment scenarios (available at https://aka.ms/sentinel4sapdocs) outline deploying the data connector agent to a VM running Docker. If this VM runs in Azure, we get a 99.9% SLA, provided certain criteria are met (https://azure.microsoft.com/en-us/support/legal/sla/virtual-machine).

Well, what if you want an even better SLA, or better manageability than a single Docker instance?

The answer is "run this container in a Kubernetes cluster". Azure Kubernetes Service (AKS) is available in Azure, so in this article we’ll review how to get the agent running there.

 

Technology in use

Before we can get started, let's look at which technologies, apart from AKS, we'll be using to achieve the goal.

 

Firstly, we need to remember that the Microsoft Sentinel Threat Monitoring for SAP data collector uses the SAP NetWeaver SDK (download at https://aka.ms/sentinel4sapsdk, SAP account required), which needs to be presented to the container.

Secondly, we need a location to store the configuration file, which must also be mounted into the container.

We’ll achieve both through Azure file shares.

Next, we'll store secrets in Azure Key Vault, and we'll need an Azure Active Directory user-assigned managed identity associated with the workload to connect to it.

 

We will be performing all our actions from the Bash shell in Azure Cloud Shell.

 

Enabling necessary features

Since we'll be using some preview features, we first need to activate them and get the latest version of the az command (commands borrowed from the Deploy and configure workload identity (preview) on an Azure Kubernetes Service (AKS) cluster guide).

 

# Enable Workload Identity feature
az feature register --namespace "Microsoft.ContainerService" --name "EnableWorkloadIdentityPreview"

# Run the following command to get the registration state of the feature
az feature list -o table --query "[?contains(name, 'Microsoft.ContainerService/EnableWorkloadIdentityPreview')].{Name:name,State:properties.state}"

# Wait for the following command to return "Registered" as in the following example
#Name                                                      State
#--------------------------------------------------------  ----------
#Microsoft.ContainerService/EnableWorkloadIdentityPreview  Registered

# Register the Microsoft.ContainerService resource provider to propagate the change
az provider register -n Microsoft.ContainerService

# The following commands add the aks-preview features to az command and update the aks-preview extension to the latest version
az extension add --name aks-preview
az extension update --name aks-preview

 

 

Next, we need to deploy an AKS cluster that has the workload identity feature activated. Also, since our AKS cluster will be talking to an SAP system, we'll set it up in an Azure virtual network that you can later peer with the network that hosts your SAP deployment (or that is connected to on-premises, in case your SAP resides there).

 

Creating an AKS cluster

We’ll carry out the steps outlined in Quickstart: Deploy an Azure Kubernetes Service cluster using the Azure CLI, with a minor change.

For this demo we'll create everything in the East US Azure region, so make sure to change the location parameter if deploying in a different region. Also note that we're defining some variables that contain resource names; make sure you change them so they are unique to you.

 

# Create a resource group
RGNAME=sentinelforsapRG
AKSNAME=sentinelforsapaks
az group create --name $RGNAME --location eastus

# Create AKS cluster with workload identity feature
az aks create --resource-group $RGNAME --name $AKSNAME --node-count 1 --node-vm-size Standard_D2_v5 --enable-oidc-issuer --enable-workload-identity --generate-ssh-keys
#Connect to cluster
az aks get-credentials --resource-group $RGNAME --name $AKSNAME

# Verify connection to the AKS cluster
kubectl get nodes
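
Before moving on, it's worth confirming the OIDC issuer was enabled on the cluster; we'll need its URL later when creating the federated credential. A quick check:

# Should print the cluster's OIDC issuer URL; an empty result means the feature isn't active
az aks show --name $AKSNAME --resource-group $RGNAME --query "oidcIssuerProfile.issuerUrl" -o tsv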

 

 

Creating storage account, file shares and granting access

The next step is to create a storage account and two file shares.

We’ll borrow some of the steps from the Manually create and use a volume with Azure Files share in Azure Kubernetes Service (AKS) guide.

The following sample can be used to create a storage account and necessary file shares.

 

# Change these parameters as needed for your own environment
AKS_PERS_STORAGE_ACCOUNT_NAME=sentinelforsap$RANDOM
AKS_PERS_SHARE_NAME=nwrfcsdk
AKS_PERS_SHARE_NAME2=work

# Create a storage account
az storage account create -n $AKS_PERS_STORAGE_ACCOUNT_NAME -g $RGNAME --sku Standard_LRS

# Export the connection string as an environment variable, this is used when creating the Azure file share
export AZURE_STORAGE_CONNECTION_STRING=$(az storage account show-connection-string -n $AKS_PERS_STORAGE_ACCOUNT_NAME -g $RGNAME -o tsv)

# Create the file shares
az storage share create -n $AKS_PERS_SHARE_NAME --connection-string $AZURE_STORAGE_CONNECTION_STRING
az storage share create -n $AKS_PERS_SHARE_NAME2 --connection-string $AZURE_STORAGE_CONNECTION_STRING

# Get storage account key
STORAGE_KEY=$(az storage account keys list --resource-group $RGNAME --account-name $AKS_PERS_STORAGE_ACCOUNT_NAME --query "[0].value" -o tsv)

# Echo storage account name and key
echo Storage account name: $AKS_PERS_STORAGE_ACCOUNT_NAME
echo Storage account key: $STORAGE_KEY

 

 

That should create a storage account and two file shares – `nwrfcsdk` and `work` – and output the storage account name and key (we'll need them in the next steps).

 

We'll be running our workload in a separate namespace, so let's go ahead and create that namespace.

 

NAMESPACE=sentinel4sapnamespace
kubectl create namespace $NAMESPACE

 

 

The next task is to allow Kubernetes to access the file shares.

Borrowing steps from the Manually create and use a volume with Azure Files share guide:

 

kubectl create secret generic sentinel4sap-fileshare-secret --from-literal=azurestorageaccountname=$AKS_PERS_STORAGE_ACCOUNT_NAME --from-literal=azurestorageaccountkey=$STORAGE_KEY -n $NAMESPACE
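
To confirm the secret landed in the right namespace, a quick check:

kubectl get secret sentinel4sap-fileshare-secret -n $NAMESPACE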

 

 

Next, upload the NetWeaver SDK zip file to the nwrfcsdk share, so that the result looks like this:

[Screenshot: the NetWeaver SDK zip file uploaded to the nwrfcsdk file share]
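
If you'd rather use the CLI than the portal for the upload, here's a minimal sketch (the zip filename below is only an example; use the actual SDK archive you downloaded from SAP):

az storage file upload --share-name $AKS_PERS_SHARE_NAME --source ./nwrfc750P_8-70002752.zip --connection-string $AZURE_STORAGE_CONNECTION_STRING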

 

Creating systemconfig.ini

Create a systemconfig.ini file (see the Microsoft Sentinel Continuous Threat Monitoring for SAP container configuration file reference).

The sample systemconfig.ini file below uses Azure Key Vault for storing secrets. You *can* (but shouldn't) configure it to store secrets right in the systemconfig.ini file in plain text; however, that would be a disaster for security, so let's take the longer, secure way.

[Secrets Source]
secrets = AZURE_KEY_VAULT
# Uncomment and replace with your keyvault name
# keyvault = demokeyvault

# Uncomment and replace with your SID
# intprefix = A4H

[ABAP Central Instance]
# Uncomment and replace with your own System ID, for example A4H
# sysid = A4H

# Uncomment and replace with your own Client ID, for example 001
# client = 001

# Uncomment and replace with your own System Number, for example 00
# sysnr = 00

# Uncomment and replace with your own ABAP server IP address, for example 192.168.1.1
# ashost = 192.168.1.1

[Azure Credentials]

[File Extraction ABAP]

[File Extraction JAVA]

[Logs Activation Status]
ABAPAuditLog = True
ABAPJobLog = True
ABAPSpoolLog = True
ABAPSpoolOutputLog = True
ABAPChangeDocsLog = True
ABAPAppLog = True
ABAPWorkflowLog = True
ABAPCRLog = True
ABAPTableDataLog = False

[Connector Configuration]
extractuseremail = True
apiretry = True
auditlogforcexal = False
auditlogforcelegacyfiles = False
timechunk = 60

[ABAP Table Selector]
AGR_TCODES_FULL = True
USR01_FULL = True
USR02_FULL = True
USR02_INCREMENTAL = True
AGR_1251_FULL = True
AGR_USERS_FULL = True
AGR_USERS_INCREMENTAL = True
AGR_PROF_FULL = True
UST04_FULL = True
USR21_FULL = True
ADR6_FULL = True
ADCP_FULL = True
USR05_FULL = True
USGRP_USER_FULL = True
USER_ADDR_FULL = True
DEVACCESS_FULL = True
AGR_DEFINE_FULL = True
AGR_DEFINE_INCREMENTAL = True
PAHI_FULL = True
AGR_AGRS_FULL = True
USRSTAMP_FULL = True
USRSTAMP_INCREMENTAL = True

Upload the systemconfig.ini file to the work share, so the result looks like this:

 

[Screenshot: the systemconfig.ini file uploaded to the work file share]
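
As with the SDK, the upload can be done from the CLI instead of the portal; a minimal sketch:

az storage file upload --share-name $AKS_PERS_SHARE_NAME2 --source ./systemconfig.ini --connection-string $AZURE_STORAGE_CONNECTION_STRING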

Creating an identity and assigning it to the AKS cluster

The next steps are again borrowed from the Deploy and configure workload identity (preview) on an Azure Kubernetes Service (AKS) cluster guide.

Create an identity and assign it to a namespace in the AKS cluster:

 

IDENTITYNAME="sentinel4sapidentity"
FEDERATEDIDENTITY="sentinel4sapfederatedidentity"

az identity create --name $IDENTITYNAME --resource-group $RGNAME

export USER_ASSIGNED_CLIENT_ID="$(az identity show --resource-group $RGNAME --name $IDENTITYNAME --query 'clientId' -otsv)"
export AKS_OIDC_ISSUER="$(az aks show -n $AKSNAME -g $RGNAME --query "oidcIssuerProfile.issuerUrl" -otsv)"

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ServiceAccount
metadata:
  annotations:
    azure.workload.identity/client-id: ${USER_ASSIGNED_CLIENT_ID}
  labels:
    azure.workload.identity/use: "true"
  name: $IDENTITYNAME
  namespace: $NAMESPACE
EOF

az identity federated-credential create --name $FEDERATEDIDENTITY --identity-name $IDENTITYNAME --resource-group $RGNAME --issuer ${AKS_OIDC_ISSUER} --subject system:serviceaccount:$NAMESPACE:$IDENTITYNAME
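
To sanity-check the identity wiring, verify that the service account carries the client-id annotation and that the federated credential exists:

kubectl get serviceaccount $IDENTITYNAME -n $NAMESPACE -o yaml
az identity federated-credential list --identity-name $IDENTITYNAME --resource-group $RGNAME -o table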

 

 

Create and configure the key vault

These steps are more or less from our own deployment guide.

 

KVNAME=sentinelforsapkv
az keyvault create --name $KVNAME --resource-group $RGNAME

az keyvault set-policy --name $KVNAME --secret-permissions get list --spn "${USER_ASSIGNED_CLIENT_ID}"

 

 

Now for the easy part: let's populate the key vault with the secrets.

 

#Define the SID
SID=A4H

# Replace values below to match your SAP and Log Analytics setup
USERNAME="SENTINELUSER"
PASSWORD="P@ssw0rd1"
LOGWSID="8a7e2369-7a53-442f-a264-b1f98e1b1baa"
LOGWSPUBLICKEY="Q29uZ3JhdHosIHlvdSBmb3VuZCB0aGUgZWFzdGVyIGVnZyA6KSBOb3cgZ28gYW5kIGZpbmlzaCB0aGUgc2V0dXA="

az keyvault secret set --name "$SID"-ABAPUSER --value "$USERNAME" --vault-name "$KVNAME"
az keyvault secret set --name "$SID"-ABAPPASS --value "$PASSWORD" --vault-name "$KVNAME"
az keyvault secret set --name "$SID"-LOGWSID --value "$LOGWSID" --vault-name "$KVNAME"
az keyvault secret set --name "$SID"-LOGWSPUBLICKEY --value "$LOGWSPUBLICKEY" --vault-name "$KVNAME"
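
A quick way to verify all four secrets made it into the vault:

az keyvault secret list --vault-name $KVNAME --query "[].name" -o tsv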

 

 

Constructing the YAML file

Finally, let’s create the YAML file which will be used to build our pod (borrowed, with some editing, from Manually create and use a volume with Azure Files share).

A couple of things to point out:

The serviceAccountName in the pod spec must match the name of the service account we created in the previous step. It is also crucial to have the nobrl option on the work file share; otherwise the metadata.db file, which the data collector generates to track its progress, will fail to initialize.

---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: Sentinel4SAP
  name: sentinel4sap-agent
  namespace: "sentinel4sapnamespace"
spec:
  replicas: 1
  selector:
    matchLabels:
      app: Sentinel4SAP
  template:
    metadata:
      labels:
        app: Sentinel4SAP
      name: sentinel4sapdeployment
    spec:
      nodeSelector:
        "kubernetes.io/os": linux
      serviceAccountName: "sentinel4sapidentity"
      containers:
        - name: sentinel4sap-agent
          image: mcr.microsoft.com/azure-sentinel/solutions/sapcon:latest
          volumeMounts:
            - name: sentinel4sap-sdk
              mountPath: "/sapcon-app/inst"
              readOnly: false
            - name: sentinel4sap-work
              mountPath: "/sapcon-app/sapcon/config/system"
              readOnly: false
          resources:
            limits:
              memory: "2048Mi"
              cpu: "500m"
      volumes:
      - name: sentinel4sap-work
        csi:
            driver: file.csi.azure.com
            volumeAttributes:
                secretName: sentinel4sap-fileshare-secret
                shareName: work
                mountOptions: "dir_mode=0777,file_mode=0777,uid=0,gid=0,mfsymlinks,cache=strict,nosharesock,nobrl"
      - name: sentinel4sap-sdk
        csi:
            driver: file.csi.azure.com
            volumeAttributes:
                secretName: sentinel4sap-fileshare-secret
                shareName: nwrfcsdk
                mountOptions: "dir_mode=0777,file_mode=0777,uid=0,gid=0,mfsymlinks,cache=strict,nosharesock,nobrl"

Upload this YAML file to Azure Cloud Shell.
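
Optionally, validate the manifest client-side before actually deploying it:

kubectl apply -f aks-deploy.yml --dry-run=client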

 

Creating Network Peering

One last thing to do before we deploy the actual container is to peer the VNet that was created for AKS with the VNet where SAP resides (unless you're accessing your SAP system through a public IP address, which would be very odd). You can navigate through the portal and create a new peering between the two networks:

[Screenshot: creating a virtual network peering in the Azure portal]
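
If you'd rather script the peering, here's a rough CLI sketch. It assumes the default AKS networking, where the cluster creates its own VNet in the managed node resource group; the SAP resource group and VNet names are placeholders you need to replace:

# Find the VNet AKS created in its managed node resource group
AKS_NODE_RG=$(az aks show --name $AKSNAME --resource-group $RGNAME --query nodeResourceGroup -o tsv)
AKS_VNET=$(az network vnet list --resource-group $AKS_NODE_RG --query "[0].name" -o tsv)
AKS_VNET_ID=$(az network vnet show --resource-group $AKS_NODE_RG --name $AKS_VNET --query id -o tsv)

# Placeholders: replace with the resource group and VNet of your SAP environment
SAP_RG=my-sap-rg
SAP_VNET=my-sap-vnet
SAP_VNET_ID=$(az network vnet show --resource-group $SAP_RG --name $SAP_VNET --query id -o tsv)

# Peering must be created in both directions
az network vnet peering create --name aks-to-sap --resource-group $AKS_NODE_RG --vnet-name $AKS_VNET --remote-vnet $SAP_VNET_ID --allow-vnet-access
az network vnet peering create --name sap-to-aks --resource-group $SAP_RG --vnet-name $SAP_VNET --remote-vnet $AKS_VNET_ID --allow-vnet-access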

Make magic happen

Run the command below and we're done (assuming you saved the YAML as aks-deploy.yml) :)

 

kubectl apply -f aks-deploy.yml

 

 

That should be it. Verify the container is running by reviewing:

 

kubectl get pods -n $NAMESPACE
NAME                                 READY   STATUS    RESTARTS   AGE
sentinel4sap-agent-cdf5fd8fd-w2nrv   1/1     Running   0          4m

kubectl logs sentinel4sap-agent-cdf5fd8fd-w2nrv -n $NAMESPACE
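
If the pod doesn't reach the Running state, the usual suspects are the file share mounts or the image pull; these commands help troubleshoot:

kubectl describe pod -n $NAMESPACE -l app=Sentinel4SAP
kubectl get events -n $NAMESPACE --sort-by=.lastTimestamp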

 

 

That's it, we've now deployed the data collector agent onto a Kubernetes cluster. Simple, right? :)

 

For more information on Microsoft Sentinel Threat Monitoring for SAP, be sure to check the product documentation page: https://aka.ms/sentinel4sapdocs

 

P.S. Did you find the easter egg in this post?
