Implementing Azure NetApp Files with Kerberos
Published Feb 09 2022 06:02 PM


PoC and Validation

Kerberos with ANF for SAP HANA

Encryption is a very big topic when it comes to data security, especially in public clouds.

Azure NetApp Files (ANF) supports DES, Kerberos AES 128, and Kerberos AES 256 encryption types (from the least secure to the most secure). If you enable AES encryption, the user credentials used to join Active Directory must have the highest corresponding account option enabled that matches the capabilities enabled for your Active Directory.

The question that has to be answered is whether Kerberos adds value to the overall system security, and what it costs in system performance. Encryption always costs CPU cycles and also increases storage latency. With SAP HANA you can enable LSS encryption, which encrypts the data before it is written to storage. At the storage layer, the data is encrypted at rest a second time by default. Enabling Kerberos would therefore encrypt the data a third time, which obviously has the biggest performance impact since this encryption is in the data path.


Nevertheless, requests to enable Kerberos are becoming more and more frequent.

This document describes the configuration and shows the impact of enabling Kerberos.


To start with, I will show the starting point without Kerberos and LSS enabled. The numbers here are the so-called "default".

Before you begin read:

Compare Active Directory-based services in Azure | Microsoft Docs

We will use Azure Active Directory Domain Services (Azure AD DS) in this documentation.

Overview of Azure Active Directory Domain Services | Microsoft Docs

Kerberos Authentication

Kerberos Authentication Overview | Microsoft Docs

ANF – Kerberos configuration

Create and manage Active Directory connections for Azure NetApp Files | Microsoft Docs

Configure NFSv4.1 Kerberos encryption for Azure NetApp Files | Microsoft Docs

Performance impact of Kerberos on Azure NetApp Files NFSv4.1 volumes | Microsoft Docs


NetApp TR-4616 is also a very good source of information on how to configure Kerberos, and it describes several Kerberos terms in detail.

TR-4616: NFS Kerberos in ONTAP


Some facts to know:

Performance impact of Kerberos on Azure NetApp Files NFSv4.1 volumes | Microsoft Docs

The security options currently available for NFSv4.1 volumes are as follows:

  • sec=sys uses local UNIX UIDs and GIDs by using AUTH_SYS to authenticate NFS operations.
  • sec=krb5 uses Kerberos V5 instead of local UNIX UIDs and GIDs to authenticate users.
  • sec=krb5i uses Kerberos V5 for user authentication and performs integrity checking of NFS operations using secure checksums to prevent data tampering.
  • sec=krb5p uses Kerberos V5 for user authentication and integrity checking. It encrypts NFS traffic to prevent traffic sniffing. This option is the most secure setting, but it also involves the most performance overhead.
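On the client, you can verify which security flavor a mounted NFSv4.1 volume actually negotiated by inspecting the mount options. A minimal sketch; the sample mount line (including the `srv:/vol` export path) is an illustrative placeholder, not output from this setup:

```shell
#!/bin/sh
# Extract the negotiated security flavor (sec=...) from a mount line, as it
# appears in `mount` or /proc/mounts output. The sample line below is
# illustrative; on a live client you would grep the real output, e.g.:
#   mount | grep nfs4
sample="srv:/vol on /mnt type nfs4 (rw,relatime,vers=4.1,sec=krb5i,hard)"

sec=$(echo "$sample" | grep -o 'sec=[a-z0-9]*' | cut -d= -f2)
echo "security flavor: $sec"   # -> krb5i for this sample
```

On a live system, replacing the sample with `grep nfs4 /proc/mounts` shows the flavor per mount point.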

Throughput, where the baseline is 128MB (max):


Reference (German): "NFS mit Kerberos sichern" (Kerberos wiki article)


This is the test setup



SUSE documentation

Network Authentication with Kerberos | Security and Hardening Guide | SUSE Linux Enterprise Server 1...


First, some performance measurements with and without Kerberos:


Data and Log volumes:

I used a 12TiB Ultra volume for the tests. Both tests (data and log) point to the same volume.


HCMT native no Kerberos:



HCMT with Kerberos krb5



HCMT with Kerberos krb5i



We also tested Kerberos 5p. However, based on the throughput we could achieve (not shown here), using Kerberos 5p is not recommended. The impact on throughput and latency was very significant; the performance penalty is by far too high to meet any SAP HANA or other DBMS KPI. We even got dumps during the random 1M data file read, which caused HCMT to break.

The available and recommended Kerberos flavors are therefore krb5 and krb5i, but NOT krb5p.


HANA Stress tool

This tool (from GitHub) creates 10,000 tables and inserts 20,000 rows into each table.

I ran the tool three times to verify that there are no significant differences between the runs.


For the tests I am using two Ultra volumes: data 4TB and log 3TB




anaadm@ralfvm01:/opt/hanastress> time ./ -v --host localhost -i 00 -u HANASTRESS -p HANAStress02 -g <Group> --tables 10000 --rows 20000 
--threads 10
[info] Starting Generation...


real    12m.921s

user    0m1.084s

sys     0m0.559s


real    13m24.9s

user    0m1.002s

sys     0m0.592s


real    13m83.918s

user    0m1.005s

sys     0m0.575s


Kerberos 5

real    14m16.617s

user    0m10.739s

sys     0m6.127s


real    14m54.530s

user    0m10.764s

sys     0m6.055s


real    15m41.758s

user    0m10.798s

sys     0m6.294s



Kerberos 5i

real    16m20.946s

user    0m11.175s

sys     0m6.018s


real    16m52.497s

user    0m11.094s

sys     0m6.181s


real    17m36.939s

user    0m11.190s

sys     0m6.055s



This is the graphical overview. !!! Lower is better !!!
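To quantify the overhead, the `real` times of the krb5 and krb5i runs above can be converted to seconds and averaged with a small script. A sketch using the values from those runs:

```shell
#!/bin/sh
# Convert "XmY.Zs" real-time values from `time` into seconds and average them.
to_sec() { echo "$1" | awk -Fm '{ sub(/s$/, "", $2); print $1 * 60 + $2 }'; }
avg() { awk '{ s += $1; n++ } END { printf "%.1f\n", s / n }'; }

# Values taken from the krb5 and krb5i stress-tool runs above.
krb5=$(for t in 14m16.617s 14m54.530s 15m41.758s; do to_sec "$t"; done | avg)
krb5i=$(for t in 16m20.946s 16m52.497s 17m36.939s; do to_sec "$t"; done | avg)

echo "krb5 mean:  ${krb5}s"
echo "krb5i mean: ${krb5i}s"
awk -v a="$krb5" -v b="$krb5i" \
    'BEGIN { printf "krb5i is %.1f%% slower than krb5\n", (b / a - 1) * 100 }'
```

For these runs, krb5i averages roughly 13% slower than krb5.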



Setup of our test scenario

Azure AD DS

First, create the Azure Active Directory Domain Services instance


Select the:


The user who is trying to create the AD DS must have the Global Administrator role for the Directory.

Select the:


Click on Create








Use the same vNET but let the service create a new subnet.







Click on Create after the validation was successful.

Kerberos RC4 Encryption

Enable or disable Kerberos RC4 encryption for your managed domain. When Kerberos RC4 encryption is disabled, all Kerberos requests that use RC4 encryption will fail.


Kerberos Armoring

Enable or disable Kerberos Armoring for your managed domain. This will provide a protected channel between the Kerberos client and the KDC.


Helpful Links

Harden an Azure Active Directory Domain Services managed domain







It will take several minutes to complete…



As a result, Azure will create the Azure AD DS with two DNS IP addresses

Configure the vNET DNS config

After the Azure AD DS is deployed, we need to change the default DNS entries in the vNET settings.






If you do not sync all users, only the Domain Admins will be synchronized from the Azure AD to the Azure AD DS.


The synchronization will take some time.

After the synchronization is done you should see all users in the administrative users tool. Be aware that you cannot change or add users in this tool (the Azure AD DS is read-only from this point).



Be aware that if you only have the Azure AD DS service as your domain controller and AD, you must reset the passwords if you want to authenticate against the Azure AD DS service.

Passwords are not synced from the Azure AD.



Then log off from the Azure portal and log on again. You now need to change the password. After that, the password hash is also present in the Azure AD DS.


Run ipconfig /renew after the reboot of the VM to switch from the Azure default DNS to the newly created Azure AD DS DNS servers


ipconfig /all

  DNS-Server . . . . . . . . . . . :



ipconfig /renew
ipconfig /all

   DNS-Server  . . . . . . . . . . . :


Now you can join the domain….

Enable synchronization of password hashes from on-prem AD (if required)

If you select the Azure AD DS resource, you see this picture on the right side. Now click Instructions for synced user accounts

Enable password hash sync for Azure AD Domain Services | Microsoft Docs


Check the AD settings from the JumpBox



Install the required DNS tools if you would like to manage the DNS as well.


When starting the DNS Editor you only need to specify the domain name.



If you want to add the Linux host to the domain, simply specify the client here as a new host.



You need to restart the nscd daemon on the client so that the client can ping the newly defined entry.

ping: Name or service not known


systemctl restart nscd
64 bytes from ( icmp_seq=1 ttl=64 time=0.020 ms
64 bytes from ( icmp_seq=2 ttl=64 time=0.044 ms

To understand the LDAP structure, it is important to start ADSI Edit to view and understand
how the LDAP structure of the Azure AD DS looks.


For the ANF SMB and Kerberos configuration the AADDS structure must be used.

This is the OU which must be configured in ANF for the AD join.



The hostname of the DNS Server for the ANF AD join can also be retrieved from the MMC

Start MMC on the JumpBox


Note the DNS hostname for the Kerberos Realm ANF config.

Azure AD DS User workaround

Because you cannot modify the gidNumber and uidNumber under OU=AADDC Users, you need to create a new OU for the SAP LDAP users.

First create a new OU


Specify a name for the OU. It can be anything; here I used SAP.



Open the properties by right click on the SAP OU.



Note down the full OU. This is required for the ANF AD connection.

Here: OU=SAP,DC=sapcontoso,DC=com

LDAP User creation

Select the new OU (Organizational Unit) with a single click and use the add user button.



Specify the SIDadm user, here anaadm



Specify the password for the user and click Next then finish.



Double-click the newly created user and go to the Attribute Editor.


Change the uid, uidNumber, and gidNumber to the values from the Linux user.


     uid =           anaadm
     uidNumber=      1001
     gidNumber=      79
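On the Linux client, the values set above should line up with what `id` reports for the user. A small sketch that parses a sample `id` output line (the sample line is illustrative; on the client you would run `id anaadm` directly):

```shell
#!/bin/sh
# Parse uid and gid numbers from an `id`-style output line and compare them
# to the directory attributes. The sample line is illustrative.
sample="uid=1001(anaadm) gid=79(sapsys) groups=79(sapsys)"

uid=$(echo "$sample" | sed 's/uid=\([0-9]*\).*/\1/')
gid=$(echo "$sample" | sed 's/.*gid=\([0-9]*\).*/\1/')
echo "uid=$uid gid=$gid"
```

If uid is not 1001 or gid is not 79 on the client, the directory attributes and the local Linux user are out of sync.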



Then create the Group sapsys



Open the Attribute Editor again and change the gidNumber to 79



Now we have created the SIDadm user and the sapsys group.



Now create the NFS volume with Kerberos enabled

Before you create a volume for SAP workloads you must enable the UNIX permissions feature of ANF.

These features are in public preview at the moment.

Configure Unix permissions and change ownership mode for Azure NetApp Files NFS and dual-protocol vo...

Create an NFS volume for Azure NetApp Files | Microsoft Docs

az feature register --namespace Microsoft.NetApp --name ANFUnixPermissions
az feature register --namespace Microsoft.NetApp --name ANFChownMode
az feature register --namespace Microsoft.NetApp --name ANFLdapExtendedGroups

az feature list --namespace Microsoft.NetApp

The activation can take up to 60 minutes...
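To check progress without scanning the full `az feature list` output, you can parse the state field directly. The JSON below is a hand-written sample of the shape the CLI returns, not live output from this setup:

```shell
#!/bin/sh
# Parse the registration state from an `az feature show`-style JSON response.
# The sample JSON is a hand-written illustration; on a real system you would
# run, e.g.:
#   az feature show --namespace Microsoft.NetApp --name ANFUnixPermissions
sample='{ "name": "Microsoft.NetApp/ANFUnixPermissions", "properties": { "state": "Registered" } }'

state=$(echo "$sample" | grep -o '"state": "[A-Za-z]*"' | cut -d'"' -f4)
echo "registration state: $state"
```

Repeat the check until the state shows Registered before creating the volume.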

After the feature registration there is a new field in the volume creation workflow under protocol. After the volume is created you can change the volume access from restricted to unrestricted.

Join ANF to the Domain

After we have created the Azure AD DS, we join the NetApp account to this AD.



Click join

Configure the AD settings






As a result, you will see the config in the portal


Go to your ANF account and create a new volume



Select the Kerberos protocol that suits your requirements. If you select all, all Kerberos modes will be possible.



Kerberos 5p is not supported because of performance and functional reasons.

Now we create the data volume


After the volume is created you find additional entries in the LDAP


Configure Active Directory connection

Also see this documentation:

Configure NFSv4.1 Kerberos encryption for Azure NetApp Files | Microsoft Docs

Configuration of NFSv4.1 Kerberos creates two computer accounts in Active Directory:

  • A computer account for SMB shares
  • A computer account for NFSv4.1. You can identify this account by the prefix NFS-.

After creating the first NFSv4.1 Kerberos volume, set the encryption type for the computer account by using the following PowerShell command:

Set-ADComputer $NFSCOMPUTERACCOUNT -KerberosEncryptionType AES256


You can find the correct command line in the Portal under Mount Instructions


Set-ADComputer NFS-ANFSMB-8859 -KerberosEncryptionType AES256


The AD is now configured.


If you want to add an Azure AD user as a Windows logon user, you must add the user to the Remote Desktop Users group

PS C:\Windows\system32> net localgroup "Remote Desktop Users" /add ""








Configuration of the client SLES15SP2

Configure an NFS client for Azure NetApp Files | Microsoft Docs


Install the required SUSE packages for Kerberos

zypper in krb5 krb5-client realmd samba-common chrony nfs-utils sssd-ad sssd-ipa sssd-krb5 sssd-ldap sssd-proxy realmd-lang 
zypper in sssd-tools sssd adcli

Configure the chrony (NTP) service



vi /etc/chrony.conf
server iburst


Start the chrony service

systemctl enable chronyd.service
systemctl start chronyd.service


Check the chrony status

chronyc sources -v
210 Number of sources = 5

MS Name/IP address         Stratum Poll Reach LastRx Last sample
#* PHC0                    0   3   377    11    -15us[  -86us] +/- 2724ns
^-    2   9   377   249  +2054us[+1878us] +/-   70ms
^-           4  10   377   899    -13ms[  -13ms] +/-  880ms
^-             2   8   377   108   +743us[ +456us] +/-  164ms
^- 50-205-244-109-static.hf>   9   377   501  -2665us[-2603us] +/-   57ms
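Accurate time matters here because Kerberos rejects requests when the client and KDC clocks drift apart by more than the allowed skew (5 minutes by default). A small sketch that checks the system-time offset; the `chronyc tracking` line below is an illustrative sample, and on a live host you would pipe the real command output instead:

```shell
#!/bin/sh
# Kerberos tolerates at most 5 minutes (300 s) of clock skew by default.
# Parse the system-time offset from a sample `chronyc tracking` line and
# flag anything outside that window. The sample line is illustrative.
sample="System time     : 0.000015310 seconds fast of NTP time"

offset=$(echo "$sample" | awk -F: '{ print $2 }' | awk '{ print $1 }')
status=$(awk -v o="$offset" \
    'BEGIN { print (o < 300 && o > -300) ? "OK" : "SKEW TOO LARGE" }')
echo "offset=${offset}s clock-skew check: $status"
```

If the check reports excessive skew, fix chrony before attempting kinit or mounting Kerberized volumes.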

Search Domain

If you want to add your own search domain in /etc/resolv.conf, you have to change the network config. Manual changes in /etc/resolv.conf will be overwritten by the wicked daemon after some time.

cd /etc/sysconfig/network 
vi config


add the search domains here:

## Type:        string
## Default:     ""
# List of DNS domain names used for host-name lookup.
# It is written as search list into the /etc/resolv.conf file.


Apply the network configuration

netconfig update


Now the change is persistent in /etc/resolv.conf


cat /etc/resolv.conf
### /etc/resolv.conf is a symlink to /var/run/netconfig/resolv.conf
### autogenerated by netconfig!
# See also the netconfig(8) manual page and other documentation.
### Call "netconfig update -f" to force adjusting of /etc/resolv.conf.


Join the Active Directory domain

To join the AD domain, issue the following command (as root)



realm join SAPCONTOSO.COM -U ralf.klahr --computer-ou="OU=AADDC Computers"
Password for ralf.klahr:*********
ralfwestvm01:~ #


To validate the success, you can check the AD again and run the realm list command

realm list
  type: kerberos
  realm-name: SAPCONTOSO.COM
  configured: kerberos-member
  server-software: active-directory
  client-software: sssd
  required-package: sssd-tools
  required-package: sssd
  required-package: adcli
  required-package: samba-client
  login-policy: allow-realm-logins

The client is now also visible in the AD



Ensure that default_realm is set to the provided realm in /etc/krb5.conf. If not, add it under the [libdefaults] section in the file as shown in the following example:


Back up the existing default Kerberos config

cp /etc/krb5.conf /etc/krb5.back


As an example:

vi /etc/krb5.conf
includedir  /etc/krb5.conf.d

[libdefaults]
    default_realm = SAPCONTOSO.COM
    default_tkt_enctypes = aes256-cts-hmac-sha1-96
    default_tgs_enctypes = aes256-cts-hmac-sha1-96
    permitted_enctypes = aes256-cts-hmac-sha1-96

[realms]
    SAPCONTOSO.COM = {
        kdc =
        admin_server =
        master_kdc =
        default_domain = SAPCONTOSO.COM
    }

[logging]
    kdc = SYSLOG:INFO
    admin_server = FILE=/var/kadm5.log
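A quick sanity check that default_realm is set as expected can be scripted. This sketch writes a minimal sample config to a temp file for illustration; on the client you would point it at /etc/krb5.conf instead:

```shell
#!/bin/sh
# Verify that default_realm in a krb5.conf-style file matches the expected
# realm. Uses a minimal sample config written to a temp file; on the client
# you would read /etc/krb5.conf directly.
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
[libdefaults]
    default_realm = SAPCONTOSO.COM
EOF

realm=$(awk -F'= *' '/default_realm/ { print $2 }' "$tmp")
echo "default_realm: $realm"
rm -f "$tmp"
```

A mismatch here is a common reason kinit fails with "Cannot find KDC for realm".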

Run the kinit command with the user account to get tickets:

For example:

kinit ralf.klahr@SAPCONTOSO.COM
Password for ralf.klahr@SAPCONTOSO.COM: *********

Or for the SIDadm user...

kinit anaadm@SAPCONTOSO.COM


Restart all NFS services:


systemctl restart nfs-*
systemctl restart rpc-gssd.service


Change the idmapd config

vi /etc/idmapd.conf
[General]
Verbosity = 0
Pipefs-Directory = /var/lib/nfs/rpc_pipefs
Domain =

[Mapping]
Nobody-User = nobody
Nobody-Group = nobody

Finally, try to mount the volume

mount -t nfs -o sec=krb5i,rw,hard,rsize=262144,wsize=262144,vers=4.1,tcp /mnt


df -h
Filesystem                                Size  Used Avail Use% Mounted on
..                                        100G     0  100G   0% /mnt

mount
 on /mnt type nfs4 (rw,relatime,vers=4.1,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=krb5i,clientaddr=,local_lock=none,addr=


If you plan to install HANA on a default SLES image, you also need to install:

zypper in libatomic1 insserv sapconf libltdl7



For the HCMT test I made sure that both volumes are on the same storage endpoint

ralfwestvm01:~ # ping
PING ( 56(84) bytes of data.
64 bytes from icmp_seq=1 ttl=63 time=0.551 ms
64 bytes from icmp_seq=2 ttl=63 time=0.468 ms
64 bytes from icmp_seq=3 ttl=63 time=0.550 ms

ralfwestvm01:~ # ping
PING ( 56(84) bytes of data.
64 bytes from ( icmp_seq=1 ttl=63 time=0.380 ms
64 bytes from ( icmp_seq=2 ttl=63 time=0.483 ms
64 bytes from ( icmp_seq=3 ttl=63 time=0.467 ms


Configure the /etc/hosts entries

# IP-Address  Full-Qualified-Hostname  Short-Hostname
#       localhost ralfwest02

Configure the /etc/fstab

vi /etc/fstab
# Kerberos Volume
   /hana/data/ANA/mnt00002  nfs   sec=krb5i,rw,hard,rsize=262144,wsize=262144,vers=4.1,tcp  0  0
   /hana/log/ANA/mnt00002   nfs   sec=krb5,rw,hard,rsize=262144,wsize=262144,vers=4.1,tcp   0  0
# normal ANF Volume
   /hana/data/ANA/mnt00001  nfs   rw,hard,rsize=262144,wsize=262144,sec=sys,vers=4.1,tcp  0  0
   /hana/log/ANA/mnt00001   nfs   rw,hard,rsize=262144,wsize=262144,sec=sys,vers=4.1,tcp  0  0
   /hana/shared/ANA         nfs   rw,hard,rsize=262144,wsize=262144,sec=sys,vers=4.1,tcp  0  0
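As a quick guard against accidentally configuring krb5p, the sec= option of each entry can be checked with a small helper. A sketch; the option strings passed in are shortened examples, not complete fstab lines:

```shell
#!/bin/sh
# Classify the Kerberos flavor in an NFS mount-option string: krb5 and krb5i
# are fine for HANA, krb5p is not recommended. Order matters in the case
# statement because "krb5i" and "krb5p" both contain "krb5".
check_sec() {
    case "$1" in
        *sec=krb5p*) echo "NOT recommended (krb5p)" ;;
        *sec=krb5i*) echo "ok (krb5i)" ;;
        *sec=krb5*)  echo "ok (krb5)" ;;
        *)           echo "no Kerberos (sec=sys or default)" ;;
    esac
}

check_sec "sec=krb5i,rw,hard,rsize=262144,wsize=262144,vers=4.1,tcp"
check_sec "sec=krb5,rw,hard,rsize=262144,wsize=262144,vers=4.1,tcp"
check_sec "sec=krb5p,rw,hard,vers=4.1,tcp"
```

Running this against the options column of your fstab entries flags any krb5p mount before it hits the data path.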


First we start HCMT on the "normal" NFSv4.1 non-Kerberos volume:

df -h
Filesystem                               Size  Used Avail Use% Mounted on
                                          12T     0   12T   0% /hana/data/ANA/mnt00001
                                         100G     0  100G   0% /hana/data/ANA/mnt00002


Start HCMT in a "screen" session to avoid any connection issues

cd /hana/shared/HCMT
ralfwestvm01:/hana/shared/HCMT # ./hcmt -v -p config/storage.json

Press CTRL+A and D to leave the screen


This test was done on native, krb5, and krb5i.

The HCMT run with krb5p was never successful.


For the HANASpeed tests I copied the data and log area over from the native to the Kerberos volumes.

su - anaadm
anaadm@ralfwest02:/usr/sap/ANA/HDB00> kinit anaadm@SAPCONTOSO.COM
Password for anaadm@SAPCONTOSO.COM:*****
anaadm@ralfwest02:/usr/sap/ANA/HDB00> cp -r /hana/data/ANA/mnt00001/* /hana/data/ANA/mnt00002/
anaadm@ralfwest02:/usr/sap/ANA/HDB00> cp -r /hana/log/ANA/mnt00001/* /hana/log/ANA/mnt00002/


Then I remounted the Kerberos volumes under the mnt00001 path and restarted HANA and the tests.
