
SQL Server Blog

AD (Active Directory) authentication for SQL Containers on Azure Kubernetes Service (AKS)

amvin87
Sep 13, 2021

In this blog, let's configure AD (Active Directory) authentication for SQL Server containers running on Azure Kubernetes Service (AKS). The same steps can be followed for SQL Server containers deployed in other Kubernetes environments as well.

 

Pre-requisites:

  1. Create an Azure VNet and add a virtual machine to the network as a domain controller.
  2. We will use another machine (in this case, an Ubuntu machine) that is added to the above-mentioned network and joined to the domain to create the required keytab file for the SQL Server container.
  3. Then, create an Azure Kubernetes Service cluster with the network plugin set to Azure CNI, connected to the Azure VNet created in the preceding steps.

Current Environment details & Setup:

In this environment:

  1. I've created an Azure VNet (aksvnet2) and a VM (kadvm.contoso.com) as the domain controller for the domain CONTOSO.COM.
  2. I then created an Azure Kubernetes Service cluster (akscluster2) with the network plugin set to Azure CNI, resulting in the cluster being part of the Azure VNet (aksvnet2). To create an Azure Kubernetes Service cluster using the Azure CNI plugin, refer to this document; a sketch of the az CLI command follows this list. Once the cluster is created, the network plugin shows as Azure CNI. Please note that the cluster is not part of the domain, but only connected to the network.
  3. I also added another Azure VM, an Ubuntu 20.04 machine (Ubu11.contoso.com), to the network and joined it to the domain. This machine will be used to generate the keytabs.
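
As a minimal sketch of step 2, assuming the az CLI is installed and the vnet already exists (the resource group and subnet ID below are placeholders; the document linked above covers the full set of options):

# Create an AKS cluster attached to the existing vnet subnet via the Azure CNI network plugin
az aks create \
  --resource-group <resource-group> \
  --name akscluster2 \
  --network-plugin azure \
  --vnet-subnet-id <resource ID of the aksvnet2 subnet> \
  --generate-ssh-keys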

Now, we log in to our Azure VM (Ubu11.contoso.com), which was previously created and added to the domain. If you need assistance creating the VM and adding it to the domain, refer to this article. We'll use this machine to generate the service keytab, which we'll then copy to the container, as required for the AD authentication setup.
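
If adutil is not yet installed on this machine, that is a one-time step. As a rough sketch for Ubuntu 20.04 (the repository registration below is an assumption based on the standard Microsoft package feed; check the adutil installation documentation for your distribution):

# Register the Microsoft package repository and install adutil (assumed repo layout for Ubuntu 20.04)
curl https://packages.microsoft.com/keys/microsoft.asc | sudo apt-key add -
curl https://packages.microsoft.com/config/ubuntu/20.04/prod.list | sudo tee /etc/apt/sources.list.d/msprod.list
sudo apt-get update
sudo apt-get install -y adutil

# adutil talks to AD over Kerberos, so obtain a ticket as a privileged AD user first
# (privilegeduser is a placeholder for an account that can create users and SPNs in the domain)
kinit privilegeduser@CONTOSO.COM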

 

Preparing for the deployment:

Before we can create the keytab, we must first plan the container hostname, add the DNS host entry for the container hostname, and point that DNS host entry at the internal load balancer IP address. The following are the configuration details for this blog:

  1. The container hostname will be: sqlinst1-0 (I am going to deploy the SQL container as a statefulset).
  2. For the internal load balancer, we will use an IP address that is available on the network; in this case, the IP address from the VNet that I will be using is: 10.1.0.230.
  3. Before deploying the SQL Server container, I added a DNS host entry on the domain controller mapping sqlinst1-0.contoso.com to 10.1.0.230 (you can verify that it resolves, as shown after this list).
  4. The AD account that I will use in the keytab is: sqluser.
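
A quick check from the domain-joined Ubuntu machine that the DNS host entry resolves (this assumes the machine uses the domain controller for DNS):

# Should answer with 10.1.0.230 once the A record for the container hostname is in place
nslookup sqlinst1-0.contoso.com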

With the above information, we are now ready to create the keytab on the Ubuntu machine. I download and install the adutil client on the Ubu11.contoso.com server (as sketched above) and then run the following commands:

  1. We first proceed to create the user "sqluser" with the command:
    adutil user create --name sqluser --distname CN=sqluser,CN=Users,DC=CONTOSO,DC=COM --password 'P@ssw0rd!'
  2. I also enable the AES protocol for the sqluser, which is needed when you don't want to use older protocols like RC4.
  3. Run the following command to create the SPNs:
    adutil spn addauto -n sqluser -s MSSQLSvc -H sqlinst1-0.contoso.com -p 1433
  4. We are now ready to create the keytab using the commands listed below:
    adutil keytab createauto -k ~/container/mssql.keytab -p 1433 -H sqlinst1-0.contoso.com --password 'P@ssw0rd!' -s MSSQLSvc
    adutil keytab create -k ~/container/mssql.keytab -p sqluser --password 'P@ssw0rd!'

With this, the keytab is ready. Note that for this example I used the RC4 protocol, but for a production environment you should not use RC4. You can inspect the finished keytab as shown below.
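
To double-check what ended up in the keytab, you can list its entries with klist (provided by the krb5-user package on Ubuntu); you should see the MSSQLSvc SPN entries and the sqluser entries along with their encryption types:

# -k: list keytab entries, -t: show timestamps, -e: show encryption types
klist -kte ~/container/mssql.keytab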

 

Time to deploy the SQL Server container:

With all of the preliminary work completed, we are now ready to deploy the SQL Server container. I've included all of the deployment files below, along with explanations of the various parameters used:

  1. First, we create the krb5.conf configmap that will be mounted on the container; it contains the relevant realm details so that the container can communicate with the domain. In my case, the domain is CONTOSO.COM, but the details will vary depending on your environment. The file is saved as krb5conf.yaml; its contents, and the command I run to deploy the configmap, are shown below:
    kind: ConfigMap
    apiVersion: v1
    metadata:
      name: krb5-conf
    data:
      krb5.conf: |
        [libdefaults]
        default_realm = CONTOSO.COM
    
        [realms]
        CONTOSO.COM = {
          kdc = kadvm.contoso.com
          admin_server = kadvm.contoso.com
          default_domain = CONTOSO.COM
        }
    
        [domain_realm]
        contoso.com = CONTOSO.COM
        .contoso.com = CONTOSO.COM
    kubectl apply -f "<path to file>/krb5conf.yaml"
  2. We then create the mssql configmap, and as the name suggests, this has the mssql.conf details that need to be mounted. The file is saved as mssql.yaml; its contents, and the command I run to create the configmap, are shown below:
    kind: ConfigMap
    apiVersion: v1
    metadata:
      name: mssql
    data:
      mssql.conf: |
        [EULA]
        accepteula = Y
        accepteulaml = Y
    
        [coredump]
        captureminiandfull = true
        coredumptype = full
    
        [language]
        lcid = 1033
    
        [filelocation]
        defaultdatadir = /var/opt/mssql/userdata
        defaultlogdir = /var/opt/mssql/userlog
    
        [network]
        kerberoskeytabfile = /var/opt/mssql/secrets/mssql.keytab
        privilegedadaccount = sqluser
    kubectl apply -f "<path to file>/mssql.yaml"

Please note: in the [network] section I am adding the network-related configuration details, such as the keytab location in the container and the privilegedadaccount to be used for SQL Server.
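
At this point, both configmaps should exist in the cluster; a quick sanity check:

# Both configmaps should be listed
kubectl get configmap krb5-conf mssql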

  3. Let's create the secret to store the SA password. The command I use is shown below; please ensure that you change this to the SA password that you want to use:
     kubectl create secret generic mssql --from-literal=SA_PASSWORD="MyC0m9l&xP@ssw0rd"
  4. Using the yaml file below, we can now deploy the SQL Server container:
    kind: StorageClass
    apiVersion: storage.k8s.io/v1
    metadata:
      name: azure-disk
    provisioner: kubernetes.io/azure-disk
    parameters:
      storageaccounttype: Standard_LRS
      kind: Managed
    ---
    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: sqlinst1
      labels:
        app: sqlinst1
    spec:
      serviceName: "sqlinst1"
      replicas: 1
      selector:
        matchLabels:
          app: sqlinst1
      template:
        metadata:
          labels:
            app: sqlinst1
        spec:
          securityContext:
            fsGroup: 10001
          containers:
          - name: sqlinst1
            command:
            - /bin/bash
            - -c
            - cp /var/opt/config/mssql.conf /var/opt/mssql/mssql.conf && /opt/mssql/bin/sqlservr
            image: mcr.microsoft.com/mssql/server:2019-latest
            env:
            - name: ACCEPT_EULA
              value: "Y"
            - name: MSSQL_ENABLE_HADR
              value: "1"
            - name: SA_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mssql
                  key: SA_PASSWORD
            volumeMounts:
            - name: mssql
              mountPath: "/var/opt/mssql"
            - name: tempdb
              mountPath: "/var/opt/mssql/tempdb"
            - name: data
              mountPath: "/var/opt/mssql/userdata"
            - name: log
              mountPath: "/var/opt/mssql/userlog"
            - name: mssql-config-volume
              mountPath: /var/opt/config
            - name: krb5-conf
              mountPath: /etc/krb5.conf
              subPath: krb5.conf
          hostAliases:
          - ip: "10.1.0.226"
            hostnames:
            - "kadvm.contoso.com"
            - "contoso.com"
            - "contoso"
          dnsPolicy: "None"
          dnsConfig:
            nameservers:
            - 10.1.0.226
            searches:
            - "contoso.com"
            - "com"
          volumes:
          - name: mssql-config-volume
            configMap:
              name: mssql
          - name: krb5-conf
            configMap:
              name: krb5-conf
      volumeClaimTemplates:
      - metadata:
          name: mssql
        spec:
          # use the azure-disk storage class defined above
          storageClassName: azure-disk
          accessModes:
          - ReadWriteOnce
          resources:
            requests:
              storage: 8Gi
      - metadata:
          name: tempdb
        spec:
          storageClassName: azure-disk
          accessModes:
          - ReadWriteOnce
          resources:
            requests:
              storage: 8Gi
      - metadata:
          name: data
        spec:
          storageClassName: azure-disk
          accessModes:
          - ReadWriteOnce
          resources:
            requests:
              storage: 8Gi
      - metadata:
          name: log
        spec:
          storageClassName: azure-disk
          accessModes:
          - ReadWriteOnce
          resources:
            requests:
              storage: 8Gi
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: sqlinst1-0
      annotations:
        service.beta.kubernetes.io/azure-load-balancer-internal: "true"
    spec:
      type: LoadBalancer
      loadBalancerIP: 10.1.0.230
      ports:
      - name: sql
        protocol: TCP
        port: 1433
        targetPort: 1433
      selector:
        statefulset.kubernetes.io/pod-name: sqlinst1-0

We are deploying the following Kubernetes objects in this yaml file:

  1. Creating a statefulset deployment called sqlinst1
  2. Deploying the Azure disk storage class for persistent data storage, used by the SQL Server container's volume claims.
  3. After that, creating an internal load balancer service with a static IP (so the IP remains the same even after a restart) called sqlinst1-0, pointing to the sqlinst1-0 pod.
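
To deploy all of the above, save the yaml to a file and apply it (sqldeployment.yaml is a placeholder name of my choosing), then confirm that the pod comes up and that the internal load balancer picked up the static IP:

kubectl apply -f "<path to file>/sqldeployment.yaml"

# The pod should reach Running, and the sqlinst1-0 service should show 10.1.0.230 as its EXTERNAL-IP
kubectl get pods
kubectl get service sqlinst1-0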

Because my Kubernetes service host nodes are not joined to the domain, I must provide domain-related information to the containers during deployment, which is why you see me using parameters like hostAliases, dnsPolicy, and dnsConfig. The IP address used in hostAliases.ip is the domain controller's IP address, while the hostnames are the domain controller's names; this adds the necessary entries to the container's /etc/hosts file. Similarly, the dnsConfig settings add the DNS search and DNS nameserver entries of the domain and the domain controller, respectively, to the container's /etc/resolv.conf file.
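
A quick, optional way to confirm that these settings landed inside the container:

# hostAliases entries are written to the pod's /etc/hosts
kubectl exec sqlinst1-0 -- cat /etc/hosts

# dnsConfig entries are written to /etc/resolv.conf (nameserver 10.1.0.226, search contoso.com)
kubectl exec sqlinst1-0 -- cat /etc/resolv.conf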

 

During the container deployment, I load the mssql.conf and krb5.conf configuration files as configmaps. With this in place, we can now copy the keytab file from the Ubuntu server to the local client from which we are running kubectl, and then copy it from the local client to the container and restart the SQL Server container. To copy the mssql.keytab file from the client to the container, use the following command:

kubectl cp "certificate\mssql.keytab" sqlinst1-0:/var/opt/mssql/secrets/mssql.keytab

Where certificate is the name of a folder in my current working directory.
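
If you're wondering about the first hop, from the Ubuntu server to the local client, any file copy mechanism works; as a sketch using scp (the user name is a placeholder, and the certificate folder matches the path used above):

# Copy the keytab generated earlier from the Ubuntu machine into the local certificate folder
scp <user>@Ubu11.contoso.com:~/container/mssql.keytab certificate/mssql.keytab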

Note: To troubleshoot AD authentication issues, you will also need logger.ini, which can be copied to the container in the same way you copied mssql.keytab. The command to copy the file is: 

kubectl cp "certificate\logger.ini" sqlinst1-0:/var/opt/mssql/logger.ini

The logger.ini file contains settings that turn on verbose logging for SQL Server's security components, which is what surfaces Kerberos and LDAP failures during troubleshooting.
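
As a rough sketch of its general shape (the section and key names below are assumptions drawn from the SQL Server AD authentication troubleshooting documentation, so verify them there before use), a logger.ini enabling debug logging for the security components could look something like:

[Output:security]
type = file
filename = /var/opt/mssql/log/security.log
purge = 7

[Logger]
level = silent

[Logger:security.kerberos]
level = debug
output = security

[Logger:security.ldap]
level = debug
output = security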

To restart the container after the copy, you can run the command: kubectl delete pod sqlinst1-0. This deletes the pod, and the statefulset automatically recreates the SQL Server container.

 

AD authentication in action:

After restarting the SQL Server container, you can now connect to it using the FQDN of the container, which is the DNS host entry added earlier: sqlinst1-0.contoso.com. After connecting, open a query window and run the following command to add a domain user:

create login [contoso\amvin] from windows;

Now log in with this AD user, and AD authentication should work. One way to test it is shown below.
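
As one way to test from a domain-joined Linux client, assuming the sqlcmd tools and Kerberos client are installed (contoso\amvin is the login created above):

# Obtain a Kerberos ticket for the AD user, then connect using integrated authentication (-E)
kinit amvin@CONTOSO.COM
sqlcmd -S sqlinst1-0.contoso.com -E -Q "SELECT SUSER_SNAME();"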

I hope this helps, and please let me know if you have any further questions! Happy Monday!

 

Updated Sep 30, 2022
Version 3.0
  • Thanks Tom. Looking forward to your feedback.

     

    SQL Server is a stateful application, as it is a database workload, and thus in this blog I went with a statefulset type workload. This is in accordance with the guidance here: StatefulSets | Kubernetes. We don't need every capability that comes with statefulset deployments as listed in the doc, but stable and persistent storage is definitely one of them. You can use the deployment workload as well, provided you ensure stable and persistent storage for the containers. And if your application requirement is such that storage persistence is not required, that is also fine, and you can go ahead with a deployment.

     

    The keytab-based authentication process for SQL Server is explained here: Understanding Active Directory authentication for SQL Server on Linux and containers - SQL Server | Microsoft Learn. This should also answer your question on why the createauto command (the command that adds the SPN entries to the keytab) is required for SQL Server. The same document also explains why we added the user account to the keytab using the 2nd command.

     

    Thanks

    Amit

     

  • TomPester

    Hi Amit,

     

    Thanks for the response. We got it working with the help of an extra ops person.

     

    Can you tell us what the reason was to use a statefulset and not a deployment? We don't see the need for it to be stateful. Thank you.

     

    We will give some more feedback on this subject if appropriate. For example, I think that the first of the 2 commands above, adutil keytab createauto, is not necessary to get it working.

    We will go over our solution and script it out tomorrow so I will let you know if it was needed.

     

    Kind Regards, Tom

  • Hi Tom,

     

    Yes, that was a typo; the 2nd command should have been different. Thank you for pointing this out. I have updated the blog with the right commands, which are:

    adutil keytab createauto -k ~/container/mssql.keytab -p 1433 -H sqlinst1-0.contoso.com --password 'P@ssw0rd!' -s MSSQLSvc
    adutil keytab create -k ~/container/mssql.keytab -p sqluser --password 'P@ssw0rd!'

     

    Also, you can follow the documentation here as another reference: Configure Active Directory authentication with SQL Server on Linux-based containers using adutil - SQL Server | Microsoft Learn to configure AD authentication for SQL containers. 

     

    For the error message that you pasted, normally there should be something more above it to help identify the issue. Could you please paste a few more lines above and below the error messages, maybe all the lines for a specific repro window, so I can see the details?

  • TomPester

    Hi,

    Is this a typo in the blog?

     

    "
    We are now ready to create the keytab using the commands listed below:

    adutil keytab createauto -k ~/container/mssql.keytab -p 1433 -H sqlinst1-0.contoso.com --password 'P@ssw0rd!' -s MSSQLSvc
    adutil keytab createauto -k ~/container/mssql.keytab -p 1433 -H sqlinst1-0.contoso.com --password 'P@ssw0rd!' -s MSSQLSvc
    "

     

    The 2 commands are *exactly* the same, right?
    Was it the intention that the 2nd command was different?

    Thank you

     

  • TomPester

    Hello amvin87 
    We are getting closer after 2 days 🙂

    We now get the following error and are stuck; it's not mentioned in
    https://learn.microsoft.com/en-us/sql/linux/sql-server-linux-ad-auth-troubleshooting?view=sql-server-linux-ver15

     

    09/28/2022 14:42:28.814184419 Error [security.ldap] <0000000041/0x00000074> Initializing credentials for use in new cache failed: Read-only file system

     

    We have no clue, as all mounts in the statefulset are read/write.

     

  • TomPester

    Trying to get this to work as we speak. I also saw the "do it under 5 minutes" video, which was helpful. Still, this is error-prone and typically requires coordination between a dev and an operations role. Thanks for writing this up!