The cloud-native movement has driven container adoption, with an emphasis on designing applications specifically for cloud environments. Kubernetes has emerged as the de facto standard for container orchestration, and Microsoft’s Azure Kubernetes Service (AKS) continues to be a leading choice for customers deploying container-based applications in the cloud.
Confidential computing foundationally increases the security posture of cloud-native workloads in the public cloud. Customers are increasingly looking for cloud-based (hybrid and multi-cloud) solutions that protect applications from hostile multi-tenancy with strong security guarantees.
We are proud to announce the preview of confidential containers on AKS, which brings confidential computing capabilities to containerized workloads on AKS. This offering provides strong pod-level isolation, memory encryption, and AMD Infinity Guard with SEV-SNP hardware-based attestation for containerized application code and data while in use, building on the existing security, scalability, and resiliency benefits offered by AKS.
We have partnered with AMD to bring these capabilities to AKS through hardware innovations and collaboration in the CNCF community. Hear more from AMD about how their advanced security features help offer customers confidential computing capabilities: “Microsoft Azure blazed the trail with confidential VMs based on 3rd Gen AMD EPYC™ processors with Secure Encrypted Virtualization-Secure Nested Paging (SEV-SNP), part of AMD Infinity Guard which offers advanced capabilities to help defend against internal and external threats and keep your data safe,” said Raghu Nambiar, corporate vice president, Data Center Ecosystems and Solutions at AMD. “Working with ecosystem partners and seeing the need for confidential container solutions, we are building again on top of our advanced security foundation and enabling customers with even the most security- and privacy-constrained workloads to move to the cloud with confidence.”
What makes containers ‘confidential’
Containers in the cloud typically have cluster-level boundaries, where existing security measures must account for other containers in the same pod, Kubernetes admins, hypervisor admins, and cloud admins.
Confidential computing now brings the ability to isolate the container and provide confidentiality based on the founding principles of:
Hardware root of trust
Memory Isolation and Encryption
Secure Key Management
Container-level isolation in AKS
In default AKS, the cluster is the security boundary: all workloads share the same cluster admin and run on nodes with a shared kernel. The preview of Pod Sandboxing on AKS raised isolation a step further by providing kernel isolation for workloads on the same AKS node.
Confidential containers are the next step in this isolation, leveraging the memory encryption capabilities of the underlying AMD SEV-SNP hardware. Confidential containers on AKS are built on Microsoft technologies, such as an Azure Linux-based virtual host integrated with the Microsoft hypervisor, and OSS components, such as the Cloud Hypervisor VMM and the Kata/Kata CoCo implementation in user space.
As depicted in the image above, in the classic case of container deployment on AKS (the left side of the stack), pods deployed on the same node share kernel resources, and as pods are scheduled they may land on other nodes that also run a shared kernel.
With confidential containers implementing Kata/Kata CoCo with AMD SEV-SNP enlightenment, pods from different container runtimes can be deployed on the same node, isolated by separate utility VM boundaries. This gives applications kernel-level isolation and helps build solutions that require multi-tenancy support. With Kata confidential containers (the right side of the picture), you can deploy system pods that are runc containers, vanilla Kata pods with kernel isolation, and Kata CoCo pods with confidential support (which can be measured and attested) on the same worker node.
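Selecting the confidential runtime for a pod comes down to setting its Kubernetes runtime class. The sketch below is a minimal example; the runtime class name `kata-cc-isolation` and the `nginx` image are assumptions for illustration, so verify the class name registered on your cluster with `kubectl get runtimeclass` first.

```shell
# Sketch: run a pod inside a confidential utility VM by selecting the
# Kata CoCo runtime class. "kata-cc-isolation" is an assumed class name;
# check "kubectl get runtimeclass" on your cluster for the real one.
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: cc-demo
spec:
  runtimeClassName: kata-cc-isolation   # schedules onto a confidential UVM
  containers:
  - name: app
    image: nginx:1.25                   # placeholder workload image
EOF
```

Pods without a `runtimeClassName` (or with the vanilla Kata class) continue to run side by side on the same node, which is how the mixed runc/Kata/Kata CoCo layout described above is achieved.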
Note: This service is offered with the DCa_cc and ECa_cc virtual machine sizes, which can surface the hardware’s root of trust to the pods deployed on them. These SKUs are required because they support child VMs and unique encryption keys for each pod. If you’d like to learn more about the AMD SEV-SNP hardware, please check out the documentation here.
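Provisioning one of these sizes into an existing cluster is a single node-pool operation. The sketch below assumes the `--workload-runtime KataCcIsolation` flag and the `Standard_DC4as_cc_v5` size from the preview documentation; resource-group and cluster names are placeholders, and availability varies by region.

```shell
# Sketch, under the assumptions above: add a confidential-capable node pool
# to an existing AKS cluster. Names are placeholders; confirm the VM size
# is offered in your region before running.
az aks nodepool add \
  --resource-group myResourceGroup \
  --cluster-name myAKSCluster \
  --name ccpool \
  --node-count 1 \
  --os-sku AzureLinux \
  --node-vm-size Standard_DC4as_cc_v5 \
  --workload-runtime KataCcIsolation
```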
Community at our core
Confidential computing also provides transparency into the TCB where your apps run. Aligning with Microsoft’s commitment to the open-source community, the underlying stack for confidential containers is designed around key OSS contributions. Microsoft has leveraged and contributed to the Kata Containers and Kata CoCo code for the core functionality of confidential containers on AKS, powering containers that run inside a confidential utility VM. We contributed core components for security policy and dm-verity integrity protection of container images to the Kata CoCo community. Microsoft will continue playing an active role in stewarding the project by supporting and contributing to the community’s efforts.
The Azure Linux AKS Container Host is the base platform that supports the deployment of Kata and CoCo containers for isolation and confidentiality. The Azure Linux container host uses an enlightened Azure Linux host kernel to create nested UVMs using the Microsoft hypervisor that powers Azure. This, together with the Cloud Hypervisor Virtual Machine Monitor (VMM), forms the base platform.
Confidential Computing Enforcement (CCE) policy
We have created an open-source Kata policy generation tool, delivered through confcom, that can be used to generate a default security policy. Confcom is an open-source Azure CLI extension that we use for generating security policies and other utilities for our confidential computing products. The tool adds the default policy to the workloads, and you may modify it.
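In practice, generating a default policy looks like the sketch below. The `az confcom katapolicygen` command is taken from the confcom extension; `pod.yaml` is a placeholder for your own workload manifest, and you should check `az confcom katapolicygen --help` for the flags available in your extension version.

```shell
# Sketch: generate a default CCE policy for a Kata CoCo workload.
# Install the confcom Azure CLI extension (one-time step).
az extension add --name confcom

# Generate a security policy from the pod manifest; the tool injects the
# policy into the manifest as an annotation, which you may then review
# and modify. "pod.yaml" is a placeholder for your workload spec.
az confcom katapolicygen -y pod.yaml
```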
Policy is critical for enforcing isolation and protection. Before creating the pod’s confidential VM, the Kata shim computes the SHA-256 hash of the policy document and attaches that hash value to the Trusted Execution Environment (TEE). That action creates a strong binding between the contents of the policy and the VM. This TEE field cannot be modified later by software running either inside or outside the VM.
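The binding step can be illustrated with a short sketch. The policy text and the `host_data` field name here are illustrative (on SEV-SNP the immutable TEE field is HOST_DATA); the real shim operates on the full generated policy document and real hardware registers.

```python
import hashlib

# Illustrative policy document; in reality this is the (much larger)
# document generated by the confcom tooling.
policy = b'{"containers": [{"image": "nginx:1.25"}]}'

# The shim computes the SHA-256 digest of the policy...
policy_hash = hashlib.sha256(policy).hexdigest()

# ...and attaches it to an immutable TEE field (HOST_DATA on SEV-SNP),
# so the attestation report later proves which policy governed the pod.
tee_fields = {"host_data": policy_hash}

print(tee_fields["host_data"])
```

Because the hash is fixed at VM creation, any attempt to swap in a different policy afterwards produces a mismatching digest in the attestation evidence.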
Integrate in your existing AKS deployments
Workloads run on confidential containers on AKS without rebuilding your clusters or refactoring your code. You can bring your unmodified Linux containers and run them confidentially. Additionally, you can add confidential containers to your existing AKS deployments, even those running other container runtimes such as standard runc or Kata containers. This lets you retain your workload’s infrastructure while running sensitive containers confidentially.
If you’d like a summary of these benefits and a quick demo of an encrypted messaging system in Kafka, check out our video here -
Here is a step-by-step guide to get started. This takes you through the installation of the Kata runtime on your new or existing cluster, adding a node pool and deploying a container confidentially.
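Once you have worked through the guide, a quick sanity check confirms the confidential runtime is in place. The runtime class name `kata-cc-isolation` below is an assumption taken from the preview documentation, and `<pod-name>` is a placeholder for your own workload.

```shell
# Sketch: verify the confidential runtime class is registered on the
# cluster ("kata-cc-isolation" is an assumed name; adjust as needed).
kubectl get runtimeclass kata-cc-isolation

# Confirm a given pod is actually running under that runtime class.
kubectl get pod <pod-name> -o jsonpath='{.spec.runtimeClassName}'
```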
To learn more about how to use remote attestation evidence to verify the integrity of the policy, please refer to our documentation here. You can also learn more about how to perform attestation and secure key release using the open-source sidecar containers. For more information on the underlying stack, please check out the documentation here.