AppArmor is a Linux kernel security module. It confines what users, groups, and processes can access on a Linux system. This operates at the node level, and AppArmor is already installed on your AKS worker nodes. The way AppArmor works is that you create a profile, and that profile defines whether the entity bound to it is allowed to perform certain activities, such as network access or file read/write/execute. A profile can either “enforce”, which blocks access to resources, or “complain”, which only reports violations. Read more on AppArmor and its applicability from the perspective of Kubernetes here.
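If you want to check on a node which profiles are loaded and whether each one is enforcing or complaining, the snippet below is a minimal sketch. It assumes the AppArmor userspace tools are present on the node (they may not be on every image), so treat it as optional exploration rather than a required step:
# Check whether AppArmor is enabled in the kernel ("Y" means yes)
cat /sys/module/apparmor/parameters/enabled
# List loaded profiles grouped by enforce/complain mode (needs the apparmor package)
sudo aa-status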
Let’s start with AppArmor. First, connect to your AKS cluster using Azure CLI (see Part 1 of this series) and run “kubectl get nodes -o wide” to get the internal IPs of your worker nodes. For me these are as below:
10.0.0.4
10.0.0.35
10.0.0.66
Step 1
You need to establish an SSH connection to the worker nodes (please check Steps 1-6 in Part 1 of this series). Now, exec into the aks-ssh pod provisioned specifically to facilitate SSH, and then run the following commands to check whether AppArmor is configured or not:
kubectl exec -it aks-ssh -- bash
ssh -i id_rsa azureuser@10.0.0.4
sudo snap install docker
sudo docker info | grep -C 5 apparmor
You can see both AppArmor and seccomp are pre-installed.
Now, uninstall Docker and exit from the node:
sudo snap remove docker
exit
If needed, you can check other nodes as well.
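If you would rather not install Docker just for this check, you can also query the kernel directly over SSH. This is an optional sketch based on standard Linux interfaces, not part of the original steps:
# "Y" means the AppArmor kernel module is enabled
cat /sys/module/apparmor/parameters/enabled
# Non-empty output means seccomp support is compiled into the kernel
grep -i seccomp /boot/config-$(uname -r)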
Step 2
How to create an AppArmor profile is explained in detail here and here. We will follow something similar, adapted to what suits us best. First, exec back into the aks-ssh pod (kubectl exec -it aks-ssh -- bash) if you already exited from it, and then run the following command to load the profile on all 3 nodes at once. This profile denies all write access:
NODES=('10.0.0.4' '10.0.0.35' '10.0.0.66')
for NODE in ${NODES[*]}; do ssh -i id_rsa azureuser@$NODE 'sudo apparmor_parser -q <<EOF
#include <tunables/global>

profile k8s-apparmor-example-deny-write flags=(attach_disconnected) {
  #include <abstractions/base>

  file,

  # Deny all file writes.
  deny /** w,
}
EOF'
done
This loads the AppArmor profile on all 3 nodes we have.
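To confirm the profile is actually loaded into the kernel on a node, an optional check (borrowed from the Kubernetes AppArmor tutorial) is to list the loaded profiles; you should see k8s-apparmor-example-deny-write in enforce mode:
ssh -i id_rsa azureuser@10.0.0.4 'sudo cat /sys/kernel/security/apparmor/profiles | grep k8s-apparmor'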
Step 3
Exit from the aks-ssh pod and then create a new pod that uses the profile. At the time of writing, AppArmor profiles are applied through a pod annotation of the form container.apparmor.security.beta.kubernetes.io/<container-name>: localhost/<profile-name>; the manifest below mirrors the hello-apparmor example from the Kubernetes documentation:
exit
cat > hello-apparmor.yaml <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: hello-apparmor
  annotations:
    container.apparmor.security.beta.kubernetes.io/hello: localhost/k8s-apparmor-example-deny-write
spec:
  containers:
  - name: hello
    image: busybox
    command: [ "sh", "-c", "echo 'Hello AppArmor!' && sleep 1h" ]
EOF
kubectl apply -f hello-apparmor.yaml
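Optionally, you can verify that the profile is being enforced inside the pod by reading the process's AppArmor attribute (again borrowed from the Kubernetes documentation); it should report k8s-apparmor-example-deny-write (enforce):
kubectl get pod hello-apparmor
kubectl exec hello-apparmor -- cat /proc/1/attr/current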
Step 4
Exec into the hello-apparmor pod and try to write something:
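For example (a minimal check; the exact error text may differ):
kubectl exec hello-apparmor -- touch /tmp/test
# expected: touch: /tmp/test: Permission denied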
As you can see, the deny-write profile is in action. This is how you can use AppArmor to protect assets within your AKS cluster by restricting or limiting access. Let’s now talk about seccomp.
Seccomp stands for secure computing mode, and it’s a security module of the Linux kernel just like AppArmor. With seccomp you can limit the system calls a process is allowed to make, which is a bit different from AppArmor. With Kubernetes you can apply seccomp profiles (available on your nodes) to your pods to ensure they do not invoke sensitive system calls. You can read more on seccomp and its applicability to Kubernetes from here and to AKS from here. As we already witnessed, seccomp is available by default on your AKS cluster nodes, so you can start implementing a seccomp profile on these nodes:
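As an aside, if you simply want a sensible baseline without writing a custom profile, Kubernetes can also apply the container runtime's default seccomp profile. This snippet is only an illustration of that simpler option, not part of the steps below:
securityContext:
  seccompProfile:
    type: RuntimeDefault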
Step 5
kubectl exec -it aks-ssh -- bash
The trick to implementing a seccomp profile is to copy the profile to the default path for seccomp profiles, i.e., /var/lib/kubelet/seccomp. You may find it’s not available by default, hence we create it before setting the profile on each node:
NODES=('10.0.0.4' '10.0.0.35' '10.0.0.66')
for NODE in ${NODES[*]}; do ssh -i id_rsa azureuser@$NODE 'sudo mkdir -p /var/lib/kubelet/seccomp'
done
for NODE in ${NODES[*]}; do ssh -i id_rsa azureuser@$NODE 'sudo chmod 777 /var/lib/kubelet/seccomp'
done
for NODE in ${NODES[*]}; do ssh -i id_rsa azureuser@$NODE 'sudo cat > /var/lib/kubelet/seccomp/seccomp-prevent-chmod.json <<EOF
{
  "defaultAction": "SCMP_ACT_ALLOW",
  "syscalls": [
    {
      "names": ["chmod", "fchmod", "fchmodat"],
      "action": "SCMP_ACT_ERRNO"
    }
  ]
}
EOF'
done
for NODE in ${NODES[*]}; do ssh -i id_rsa azureuser@$NODE 'sudo chmod 644 /var/lib/kubelet/seccomp/seccomp-prevent-chmod.json'
done
for NODE in ${NODES[*]}; do ssh -i id_rsa azureuser@$NODE 'sudo chmod 755 /var/lib/kubelet/seccomp'
done
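To confirm the profile file landed on every node, an optional check (not part of the original steps) is:
for NODE in ${NODES[*]}; do ssh -i id_rsa azureuser@$NODE 'ls -l /var/lib/kubelet/seccomp/'
done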
You can run these commands individually or all together. You may have noticed that we are prohibiting the chmod family of system calls through this profile. Now it’s time to test it by creating a pod that uses the profile. Exit from the aks-ssh pod and run this command:
exit
cat > chmod-prevented.yaml <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: chmod-prevented
spec:
  securityContext:
    seccompProfile:
      type: Localhost
      localhostProfile: seccomp-prevent-chmod.json
  containers:
  - name: chmod
    image: nginx
  restartPolicy: Never
EOF
kubectl apply -f chmod-prevented.yaml
Now, exec into the chmod-prevented pod and try a chmod command:
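For instance, a quick test of the profile (the error message wording may vary):
kubectl exec chmod-prevented -- chmod 777 /tmp
# expected: chmod: changing permissions of '/tmp': Operation not permitted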
As you can see, chmod is prohibited as expected. This is how you can apply features of the Linux kernel (AppArmor and seccomp) to secure your Kubernetes/AKS workloads. I also recommend a tool called DockerSlim, which helps minify/optimize your Docker container images by removing unnecessary components and secures them by automatically generating AppArmor and seccomp profiles. That’s pretty much it. In the next part of this series, we will talk about Audit Logs.
Other parts of this series: Part1 | Part3 | Part4 | Part5