AKS Review - 3.1: Security - Prevention
Published Jan 24 2023 06:34 AM
Microsoft

The next topic is runtime security, where you need to make sure that your runtime context is not vulnerable. One thing to keep in mind is that a plain Kubernetes cluster is, by default, very open in terms of the flexibility it gives developers to deploy and do things in the cluster.

 

For that reason, once you have identity and access control properly configured, you then want to make sure that your runtime environment is also secure. It is important to establish a security baseline for your cluster, so your developers can only do the things that you want to allow them to do. This was traditionally a difficult task in a regular Kubernetes cluster, but AKS has a couple of built-in capabilities you can leverage.

 

First, the things running in your cluster need to follow security best practices. By default, Kubernetes is very flexible in the way you can deploy things. If you have permissions inside a namespace, you can potentially deploy any kind of pod with a YAML file, with any configuration you want.

 

For example, as a developer you can deploy a pod that downloads an image from an external public repository (e.g., Docker Hub) – although you could block something like that at the networking level with firewall rules. You could also run that image as a privileged container on the host, which means it will have access to the host network and the host file system, be able to mount host paths, and so on.

 

This is possible because any container in a pod can enable privileged mode by setting the privileged flag in the securityContext property of the container spec to true. The container will then run with all kinds of permissions, which is obviously not a good thing. Some containers, like system containers (e.g., kube-proxy), have legitimate reasons to do this, but ideally none of your workload pods should.
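A minimal sketch of such a (discouraged) pod spec; the pod and image names here are hypothetical:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: risky-pod                           # hypothetical name
spec:
  containers:
    - name: app
      image: docker.io/library/nginx:latest # pulled from a public registry
      securityContext:
        privileged: true                    # grants near-full access to the host - avoid this
```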

 

There are other risky defaults as well. For example, by default a container runs as the root user, which is still a highly privileged user that can do things non-root users cannot. This is also not a recommended practice.
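By contrast, a hardened pod spec can opt out of these defaults explicitly; a minimal sketch (names and image are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hardened-pod                         # hypothetical name
spec:
  containers:
    - name: app
      image: myregistry.azurecr.io/app:1.0   # hypothetical private image
      securityContext:
        runAsNonRoot: true                   # refuse to start if the image runs as root
        runAsUser: 1000                      # run as an unprivileged UID
        allowPrivilegeEscalation: false
```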

 

Admission Controllers

 

In the past, to overcome this you had to do something like the following. Generally, the way Kubernetes works is that you have the API server, and a developer can deploy YAML files after connecting and authenticating to the API server endpoint. This is very similar to what you do in Azure, where to deploy a resource you connect to the Azure management endpoint (https://management.azure.com/) and deploy an ARM template.

 

At that point, Kubernetes has a mechanism called an “admission controller”. Admission controllers are small functions, deployed to your cluster, that execute every time a YAML file is submitted to the API server. They can validate the YAML file and block the deployment if it is not valid. There are also admission controllers that can change the YAML file itself; these are called mutating admission controllers.

 

This is the mechanism traditionally used in Kubernetes to enforce corporate governance and security policies. Regardless of the permissions a user has, you can use those deployment hooks to validate YAMLs and block any YAML that does not conform to your standards.

 

In Azure, Microsoft has something similar called Azure Policy, which does for Azure resources what admission controllers do for Kubernetes. For example, if you are a contributor or an owner in a resource group, you can create any kind of resource and apply any configuration you want: you can send an ARM template that creates a VM with whichever SKU you want, assign a public IP to the VM, and so on. If you want to make sure that someone can only deploy VMs that conform to corporate standards, you leverage Azure Policy.

 

Azure Policy and admission controllers in Kubernetes are very similar concepts. The important thing to remember is that you want to enforce some sort of security baseline on the way your users can deploy pods, and any exception needs to be explicitly allowed.

 

Enforce a baseline security to the way users can deploy pods with Pod Security Admission

 

This has been a difficult thing to do in the past with Kubernetes, but a feature called “pod security admission” now makes it very easy.

 

You need to be on Kubernetes v1.23 or later to use this feature. Kubernetes defines three different security levels:

 

  • Privileged
  • Baseline
  • Restricted

 

These are called “Pod Security Standards”. They use the same admission controller technology and are applied at the namespace level. If you enable pod security admission for a namespace, a built-in admission controller inside the Kubernetes cluster is enabled for that namespace, and every YAML deployed to the namespace is checked against the rules described in the pod security standard.

 

The Pod Security Standards define three different policies to broadly cover the security spectrum. These policies are cumulative and range from highly permissive to highly restrictive.

 

[Image: the three Pod Security Standards profiles – Privileged, Baseline, Restricted]

 

The Privileged profile allows almost anything, which is why it is not really recommended. Normally, pods would be deployed under the Baseline profile; if you are in something like a PCI-compliant environment (e.g., dealing with sensitive customer data), that would be the place to consider the Restricted option.
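Enabling one of these profiles for a namespace is just a matter of labels; a minimal sketch, assuming a hypothetical namespace name:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: team-a                               # hypothetical namespace
  labels:
    # Reject any pod that violates the Baseline profile
    pod-security.kubernetes.io/enforce: baseline
```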

 

For instance, with the Baseline security policy, which is the one usually recommended, these are some of the things you get:

 

  • HostProcess containers:
    • Privileged access to the host is disallowed.
    • Can only be set to false, which is the default.
    • Enabling it would be a big security risk in Kubernetes.

 

[Image: HostProcess restrictions in the Baseline profile]

 

  • Privileged containers:
    • Privileged pods disable most security mechanisms.
    • Can only be set to false, which is the default.
    • No one can set the privileged property to true in the YAML file. This too is a big security risk in a Kubernetes cluster, because if such a container were compromised, it would have unlimited access to the host node with root privileges.

 

[Image: privileged-container restrictions in the Baseline profile]

 

  • Linux Capabilities:
    • Below are the ones that are allowed by the baseline profile.
    • Others that are highly sensitive are not on the list and not allowed.

 

[Image: Linux capabilities allowed by the Baseline profile]

 

Another good thing is that when you enable a profile, you can select the mode in which it is applied. You have the following options:

 

  • enforce: policy violations cause the pod to be rejected.
  • audit: violations are allowed, but recorded in the audit log.
  • warn: violations are allowed, but trigger a user-facing warning.

 

You are also allowed to apply multiple pod security standards with different modes in a single namespace. For example, you could apply the Baseline profile in enforce mode together with the Restricted profile in audit mode: Baseline is enforced on your pod deployments, while the Restricted profile in audit mode notifies you whenever something that does not conform to it is deployed in your cluster.
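That combination can be expressed with namespace labels; a sketch with a hypothetical namespace name:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: team-a                                   # hypothetical namespace
  labels:
    pod-security.kubernetes.io/enforce: baseline # violations are rejected
    pod-security.kubernetes.io/audit: restricted # violations are only recorded
```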

 

One thing to keep in mind is that you will still need things like privileged containers, for example system components, third-party components, or networking components. The important thing is to always deploy those into specific system namespaces to which regular users have no access. Then you can use the Kubernetes RBAC model to make sure that only cluster admin users can manipulate those containers.

 

The threat in runtime security is that if malicious code runs in a compromised container (e.g., one running in privileged mode), that code can break out onto the host and run as root there, because containers in pods are not VMs: they are not Hyper-V-isolated environments with their own kernel.

 

Back to the pod security admission feature: once it is enabled, you can define the admission control mode you want to use for pod security in each namespace. Kubernetes defines a set of labels you can set to select which of the predefined Pod Security Standard levels to use for a given namespace. The label you select defines what action the control plane takes if a potential violation is detected. A namespace can configure any or all modes, or even set a different level for different modes. For each mode, there are two labels that determine the policy used:

 

  • pod-security.kubernetes.io/<MODE>: <LEVEL> – selects the profile (privileged, baseline, or restricted) for that mode.
  • pod-security.kubernetes.io/<MODE>-version: <VERSION> – optional; pins the policy check to a specific Kubernetes minor version (e.g., v1.25).

 

Open Policy Agent, Gatekeeper & Kyverno

 

Before the "pod security admission" feature, people had a couple of options to achieve something similar: you could implement what pod security admission does yourself, that is, admission controllers that validate YAML files.

 

To do that, people used a couple of open-source components. The first is the Open Policy Agent (OPA), an agent that validates YAML files against rules written in a declarative policy language called Rego. OPA is not specific to Kubernetes, but there is a project called Gatekeeper that hosts the OPA engine inside an admission controller.

 

People either deployed the Gatekeeper & OPA combination into their cluster, or they used Kyverno, a Kubernetes-native policy engine. Those were the two most widely used options.
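As an illustration, this is roughly what a Gatekeeper constraint blocking privileged containers looks like, assuming the K8sPSPPrivilegedContainer ConstraintTemplate from the open-source gatekeeper-library project has already been installed in the cluster:

```yaml
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sPSPPrivilegedContainer    # template from the gatekeeper-library project
metadata:
  name: deny-privileged-containers # hypothetical constraint name
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Pod"]
```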

 

Enabling Azure Policy to Enforce Organizational Standards

 

To use Gatekeeper and OPA you had to deploy all those components into your cluster, configure them, write your own Rego policies, and, in general, it was complicated. What Microsoft did for you was to automate the configuration of all those components via the Azure Policy add-on, and to integrate the way you define policies in Azure (i.e., you go to the Azure Policy portal, create a new policy, and apply it at an Azure scope, like a resource group, subscription, or management group) with the Gatekeeper and OPA implementation.

 

For example, when you enable the Azure Policy add-on on an AKS cluster, a new namespace named gatekeeper-system is created; if you run kubectl get pods -n gatekeeper-system, you will get back several admission controller pods deployed in that namespace.
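The add-on can be enabled from the CLI as well; a sketch with hypothetical resource names (requires the Azure CLI and an existing AKS cluster):

```shell
# Enable the Azure Policy add-on on a (hypothetical) AKS cluster
az aks enable-addons \
  --addons azure-policy \
  --name myAKSCluster \
  --resource-group myResourceGroup

# Verify the Gatekeeper components the add-on deploys
kubectl get pods -n gatekeeper-system
```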

 

With that in place, you can use Azure Policy. Azure Policy is what you use in Azure to enforce corporate governance and security policies for Azure resources across the board, regardless of the permission model your users have on those resources.

 

One example of an Azure Policy: suppose you do not want anybody to create App Services with public endpoints in any of your organization’s subscriptions, because you want all incoming traffic to go through your publicly deployed firewall/ingress device. Although a user might have permission to create an App Service in a subscription, with an Azure policy that prevents public endpoint creation for App Services, even if the user tries to deploy one, the deployment will fail.

 

Azure Policy extends Gatekeeper version 3 and makes it possible to manage and report on the compliance state of all your Kubernetes clusters. The add-on requires three Gatekeeper components to run: one audit pod and two webhook pod replicas. These components consume resources on your cluster, so when you enable the add-on, keep in mind that some of the resources on your nodes will be used by these pods.

 

Once you have enabled the Azure Policy add-on on the cluster, you can assign Kubernetes policies to it. Once a policy is assigned, the add-on downloads the policy assignment, installs the corresponding constraints on the cluster, and maps them to your Azure Policy definition and assignment. It monitors your cluster, audits the logs, gathers compliance results, and, because it has those mapping details, sends the compliance details back to Azure Policy, which becomes the common dashboard from which you can monitor all your clusters.

 

Almost every Azure service has its own set of policies defined inside the Azure portal, and there are built-in Azure Policy definitions for Kubernetes. An example of such a policy is:

 

  • Kubernetes cluster should not allow privileged containers:
    • If a user tries to create a pod that runs on privileged mode, then the deployment will fail.

 

Policies can be applied in different modes, depending on whether you just want to audit your clusters or append something if it is missing. If you want your environment to be secure from the start, deny is the right option, so that no one can even try to deploy insecure resources. If you just started using Azure Policy in your cluster and want to see how it works, assign the policy in audit mode first and review the non-compliance report it produces.

 

Moreover, Microsoft has created the concept of Policy Initiatives. Initiatives are, basically, groups of related policies. If you want to group a bunch of related policies into a higher-level entity that you can assign to multiple Azure services, then you can define a Policy Initiative.

 

In AKS, there are two built-in initiatives:

 

  • Kubernetes cluster pod security restricted standards for Linux-based workloads:
    • This has 8 policies.
  • Kubernetes cluster pod security baseline standards for Linux-based workloads:
    • This has 5 policies.
    • If you click on this initiative in the portal, you can see that the following policies will be applied to your cluster if you assign it:
      • Kubernetes cluster should not allow privileged containers.
      • Kubernetes cluster pods should only use approved host network and port range.
      • Kubernetes cluster containers should not share host process ID or host IPC.
      • Kubernetes cluster containers should only use allowed capabilities.
      • Kubernetes cluster pod hostPath volumes should only use allowed host paths.

 

But isn’t this the same as the pod security admission feature? To a large extent, yes. There is an overlap between what Azure Policy provides and what AKS pod security admission provides; in many cases they perform the same checks.

 

The recommendation here would be to stick with the built-in pod security admission where possible, since it is the newer feature and is built and maintained by the community.

 

However, Azure Policy can give you a couple of additional things. Pod security admission policies are, by nature, limited to security, while Azure Policy lets you apply broader governance policies as well, for example:

 

  • Disallow creating node pools of certain SKUs.
  • Disallow using public IPs with the load balancer when you create the cluster.
  • Allow-list the container registries your users can pull pod images from:
    • You want to make sure that all images running inside the cluster come from your trusted (probably private) Azure Container Registry (ACR) and that users cannot pull images from public registries like Docker Hub.

 

These are governance policies; they are not controlled by pod security admission, and you cannot achieve anything similar with it. Pod security admission focuses on applying pod runtime security policies: a pod definition in a Kubernetes YAML file has a section called securityContext, and pod security admission policies focus on that property, i.e., the pod’s runtime security. Azure Policy lets you go beyond pod runtime security and implement other governance policies as well.

 

To summarize, the recommendation is to at least use pod security admission policies (with the baseline and restricted profiles) and to consider Azure Policy if you want to implement your own custom governance policies, or if there are other built-in governance policies that make sense for your scenarios and context. Keep in mind that Azure Policy also includes some very security-related policies (e.g., preventing privilege escalation, allowing only certain volume types or host paths). Those are already covered by pod security admission policies, and you do not need to implement and apply them twice.

 

AKS Design Review Series - Contents
