Azure Kubernetes Service Security Deep Dive – Part 4 (Network Policies)

By nature, and by default, every pod can talk to every other pod within a Kubernetes cluster. This behaviour is not always desirable: some application owners may not want their applications reachable from other applications, or may want to restrict traffic in or out of the backend pods. Network Policy comes to the rescue in those cases.


Network policies (or network security policies) are, in effect, firewall rules for a Kubernetes cluster. They are implemented by the network plugin, via the Container Network Interface (CNI), and control traffic flow at the IP address or port level (OSI layer 3 or 4). Policies are created at the namespace level and can restrict ingress and egress to and from a pod or a group of pods, depending on how they are configured. Please read more about network policies in the official Kubernetes documentation.
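As a flavour of what a policy looks like, here is the standard "default deny all ingress" example from the Kubernetes documentation: it selects every pod in the namespace it is created in and allows no inbound traffic at all.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
spec:
  podSelector: {}
  policyTypes:
  - Ingress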


Please read this article that describes how to set up network policy on your AKS cluster; it also gives detailed information on how to configure and test network policies on pods. Assuming you have read through it, let’s take a simple but practical example and explore how network policies can help us.

[Diagram: a virtual network with an AKS subnet (cluster with backend and frontend namespaces) and a VM subnet (test VM)]

The diagram above describes a very common requirement. We have two subnets within our virtual network. The AKS cluster in the AKS subnet has two namespaces, Backend and Frontend. The Backend namespace has one pod, DB, and the Frontend namespace has two pods, API and App. The VM subnet has a single VM called Test VM. The ingress and egress criteria are as follows:

1. The DB pod can receive ingress requests only from the API pod and nothing else.
2. Egress from the DB pod is not allowed.
3. The App pod and the API pod can receive ingress requests from anywhere within the VNet.
4. Egress is permitted only from the App pod to the API pod and from the API pod to the Backend namespace.


Step 1
Open Azure CLI and provision a new resource group, virtual network, and subnets:

az group create --name rgakstestnetworkpolicy --location northeurope
az network vnet create -g rgakstestnetworkpolicy -n aksvnet --address-prefix 10.0.0.0/16 --subnet-name vmsubnet --subnet-prefix 10.0.0.0/24
az network vnet subnet create --name akssubnet --vnet-name aksvnet --resource-group rgakstestnetworkpolicy --address-prefixes 10.0.1.0/24
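If you want to double-check that both subnets exist before moving on, you can list them:

az network vnet subnet list --resource-group rgakstestnetworkpolicy --vnet-name aksvnet --output table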

Step 2
Provision a VM within the VM subnet and make sure you can SSH to it:

az vm create \
--resource-group rgakstestnetworkpolicy \
--name testvm \
--image UbuntuLTS \
--admin-username azureuser \
--ssh-key-value ~/.ssh/id_rsa.pub \
--subnet vmsubnet \
--vnet-name aksvnet
ssh -i ~/.ssh/id_rsa azureuser@<public IP of the VM>
exit
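If you need to look up the VM's public IP for the ssh command above, one way is to query it with the Azure CLI (the --show-details flag resolves the public IP):

az vm show --show-details --resource-group rgakstestnetworkpolicy --name testvm --query publicIps --output tsv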

Step 3
Now create your AKS cluster. Remember to add the --network-policy flag.

az aks create \
--resource-group rgakstestnetworkpolicy \
--enable-managed-identity \
--name akstestnetworkpolicy \
--zones 1 2 3 \
--node-count 3 \
--kubernetes-version 1.21.2 \
--network-plugin azure \
--network-policy azure \
--vnet-subnet-id "/subscriptions/<your subscriptionid>/resourceGroups/rgakstestnetworkpolicy/providers/Microsoft.Network/virtualNetworks/aksvnet/subnets/akssubnet" \
--service-cidr 10.1.0.0/24 \
--dns-service-ip 10.1.0.10 \
--docker-bridge-address 172.17.0.1/24 \
--enable-addons monitoring \
--generate-ssh-keys
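Once the cluster is up, you can confirm the network policy engine is enabled by querying the cluster's network profile; for the cluster created above this should return azure:

az aks show --resource-group rgakstestnetworkpolicy --name akstestnetworkpolicy --query networkProfile.networkPolicy --output tsv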

Step 4
Connect to your AKS cluster and create the namespaces and pods. For demo purposes, all pods use the plain nginx image.

az account set --subscription <your subscriptionid>
az aks get-credentials --resource-group rgakstestnetworkpolicy --name akstestnetworkpolicy
kubectl create namespace backend
kubectl create namespace frontend
kubectl run dbpod --image=nginx --restart=Never -n backend
kubectl run apppod --image=nginx --restart=Never -n frontend
kubectl run apipod --image=nginx --restart=Never -n frontend

Step 5
Get the IP addresses of your pods:

[Screenshot: the IP addresses of dbpod, apipod, and apppod]
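One way to list the pod IPs is the wide output of kubectl get pods, run against both namespaces:

kubectl get pods -n backend -o wide
kubectl get pods -n frontend -o wide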

Now try accessing dbpod with curl from apppod or from the VM we created earlier. You will see that it is reachable and the default nginx welcome page comes back.
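For example, from apppod (substitute the dbpod IP you noted above for the placeholder):

kubectl exec -n frontend apppod -- curl http://<dbpod IP>

From the test VM, run the same curl against the dbpod IP after you SSH in.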

[Screenshot: curl to dbpod returns the default nginx welcome page]

You can check the other pods as well; you will find that every pod is reachable from the others and from the VM.


Step 6
We will now set up network policies to restrict ingress and egress based on the criteria given above. First create a policy for dbpod:

cat > dbpod-policy.yaml <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: dbpod-policy
  namespace: backend
spec:
  podSelector:
    matchLabels:
      run: dbpod
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: frontend
      podSelector:
        matchLabels:
          run: apipod
    ports:
    - protocol: TCP
      port: 80
EOF
kubectl apply -f dbpod-policy.yaml
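To see how the cluster interpreted the rules, you can describe the policy:

kubectl describe networkpolicy dbpod-policy -n backend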

This network policy specifies that no egress is allowed from dbpod (Egress is listed under policyTypes but no egress rules are defined), and that ingress is allowed only from apipod in the frontend namespace, and only on port 80. Note that the single item under the ingress "from" section combines the namespaceSelector and the podSelector, so both conditions must match. If you use the following instead:

- from:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: frontend
    - podSelector:
        matchLabels:
          run: apipod

this means traffic is allowed from any pod in the frontend namespace, or from apipod in the current namespace (backend), because the selectors are now two separate entries in the "from" list. So a single dash (-) can change the whole meaning.


Step 7

Now it’s time to check the access. First, try from apipod:

[Screenshot: curl from apipod to dbpod succeeds]

Then, from apppod

[Screenshot: curl from apppod to dbpod fails]

Next, from testvm

[Screenshot: curl from testvm to dbpod fails]
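For reference, the three checks above can be reproduced along these lines (use the dbpod IP from Step 5; --max-time stops curl from hanging indefinitely when the policy drops the traffic):

kubectl exec -n frontend apipod -- curl --max-time 5 http://<dbpod IP>
kubectl exec -n frontend apppod -- curl --max-time 5 http://<dbpod IP>
curl --max-time 5 http://<dbpod IP>   # run this one from an SSH session on testvm

Only the first call should return the nginx page; the other two should fail.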

So, as you can see, criteria 1 and 2 are both covered and successfully tested.

Step 8
We will create two more policies, for apipod and apppod, but we are not going to test them explicitly.

cat > apipod-policy.yaml <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: apipod-policy
  namespace: frontend
spec:
  podSelector:
    matchLabels:
      run: apipod
  policyTypes:
  - Ingress
  - Egress
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: backend
  ingress:
  - from:
    - ipBlock:
        cidr: 10.0.0.0/16
    ports:
    - protocol: TCP
      port: 80
EOF
kubectl apply -f apipod-policy.yaml

cat > apppod-policy.yaml <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: apppod-policy
  namespace: frontend
spec:
  podSelector:
    matchLabels:
      run: apppod
  policyTypes:
  - Ingress
  - Egress
  egress:
  - to:
    - podSelector:
        matchLabels:
          run: apipod
  ingress:
  - from:
    - ipBlock:
        cidr: 10.0.0.0/16
    ports:
    - protocol: TCP
      port: 80
EOF
kubectl apply -f apppod-policy.yaml
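A quick way to confirm all three policies are in place:

kubectl get networkpolicy --all-namespaces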

With this we have covered criteria 3 and 4 as well. We have seen how network policies can control traffic flow (ingress and egress) within your Kubernetes cluster, and in some cases outside the cluster as well. One question remains: what happens when we run multiple instances of similar pods, as in a ReplicaSet or a Deployment? The answer is that nothing changes. You restrict ingress or egress by pod labels, namespace labels, or IP ranges, and in a ReplicaSet the pod labels remain the same for every pod it manages. If you have exposed the pod or deployment through a Service, you can test against the DNS name instead of a pod IP address. Additionally, an ingress controller and user-defined routes (UDR) are used to control traffic towards and from the cluster, respectively, in relation to the network outside.
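For illustration only, here is a hypothetical Deployment whose pod template carries the same run: apipod label; every replica it creates would be matched by the apipod-policy above, with no change to the policy itself:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: apipod
  namespace: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      run: apipod
  template:
    metadata:
      labels:
        run: apipod
    spec:
      containers:
      - name: nginx
        image: nginx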

That’s pretty much it. We will talk about the security of ingress and egress in the context of networking in the next part of this series.

Other parts of this series: Part1 | Part2 | Part3 | Part5
