Apps on Azure Blog

Azure Kubernetes Service Microsoft Ignite announcements

Justin Davies
Oct 12, 2022

Operate at scale

Most customers tell us they prefer to run a larger number of smaller clusters, but some very large workloads run much more efficiently on bigger clusters. To support this, we have increased the number of nodes you can have in a single Azure Kubernetes Service (AKS) cluster from 1,000 to 5,000. This allows you to reduce the number of clusters needed to run your workloads, which in turn lowers the operational overhead of supporting application teams.
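
As a rough sketch of what this looks like day to day, the snippet below drives the Azure CLI from Python to scale an existing node pool and add another one. The resource group, cluster, pool names, node counts, and VM size are all placeholders, and the totals you can actually reach depend on your quota.

```python
import subprocess

# Placeholder resource group and cluster names.
RG, CLUSTER = "my-rg", "my-aks-cluster"

def az(*args: str) -> None:
    """Run an Azure CLI command and raise if it fails."""
    subprocess.run(["az", *args], check=True)

# Scale an existing user node pool up. The cluster-wide ceiling is now 5,000
# nodes spread across all node pools, subject to your quota.
az("aks", "nodepool", "scale",
   "--resource-group", RG, "--cluster-name", CLUSTER,
   "--name", "userpool1", "--node-count", "800")

# Or spread the capacity by adding another large pool instead of growing one pool.
az("aks", "nodepool", "add",
   "--resource-group", RG, "--cluster-name", CLUSTER,
   "--name", "userpool2", "--node-count", "500",
   "--node-vm-size", "Standard_D8ds_v5")
```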


When using the uptime SLA, you can request a quota increase across all node pools in the cluster by raising a support request.


Networking at scale

One of the things we hear about most from our customers is IP address exhaustion. The release of Dynamic IP allocation reduced the need to pre-allocate IP addresses from your virtual network by assigning them at workload creation time, and we have now gone a step further and announced the availability of Azure CNI Overlay networking. CNI Overlay allocates pod IP addresses from a private CIDR and translates them to your VNet address space, dramatically reducing the IP burden of running your workloads while keeping you in control of how you set up your networking topology.
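
If you want to try CNI Overlay, a minimal sketch is below, again driving the Azure CLI from Python. The names, region, and pod CIDR are placeholders, and while the feature rolls out the command may require the aks-preview CLI extension and only be available in a subset of regions.

```python
import subprocess

# Placeholder names, region, and pod CIDR.
RG, CLUSTER, LOCATION = "my-rg", "overlay-aks", "westcentralus"

# Create a cluster with Azure CNI Overlay: pods get addresses from a private
# CIDR instead of consuming your VNet address space.
subprocess.run([
    "az", "aks", "create",
    "--resource-group", RG, "--name", CLUSTER, "--location", LOCATION,
    "--network-plugin", "azure",
    "--network-plugin-mode", "overlay",  # opt in to CNI Overlay
    "--pod-cidr", "192.168.0.0/16",      # private range used for pod IPs
], check=True)
```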


As your workloads scale on AKS, your egress networking needs change as well. You can move from Load Balancer egress to Virtual Network NAT, which increases the outbound flows per IP address to 64,000. You can add up to 16 IP addresses, giving you a total of over a million egress flows.
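
The sketch below shows what this looks like at cluster creation time; the resource group and cluster names are placeholders, and the outbound IP count of 16 matches the maximum mentioned above.

```python
import subprocess

# Placeholder resource group and cluster names.
RG, CLUSTER = "my-rg", "my-aks-cluster"

# Create a cluster whose egress goes through a managed NAT Gateway rather than
# the Standard Load Balancer. 16 outbound IPs * 64,000 flows per IP gives
# 1,024,000 possible egress flows.
subprocess.run([
    "az", "aks", "create",
    "--resource-group", RG, "--name", CLUSTER,
    "--outbound-type", "managedNATGateway",
    "--nat-gateway-managed-outbound-ip-count", "16",
    "--nat-gateway-idle-timeout", "4",  # minutes
], check=True)
```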


Maintenance Windows and Event Grid notifications

As you increase the scale of your cloud native estate, the ability to manage updates and maintenance becomes even more important. We previously announced planned maintenance windows, allowing you to control the timing of cluster upgrades. We have now integrated that functionality with Azure Maintenance configurations, giving you a single, consistent place to set up maintenance across your entire Azure infrastructure.
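
As an illustration, the sketch below defines a weekly window using the Azure CLI from Python; the window name, day, and start hour are placeholder choices.

```python
import subprocess

# Placeholder resource group and cluster names.
RG, CLUSTER = "my-rg", "my-aks-cluster"

# Define a planned maintenance window called "default": allow maintenance on
# Saturdays starting at 01:00 UTC, keeping upgrades out of business hours.
subprocess.run([
    "az", "aks", "maintenanceconfiguration", "add",
    "--resource-group", RG, "--cluster-name", CLUSTER,
    "--name", "default",
    "--weekday", "Saturday",
    "--start-hour", "1",
], check=True)
```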


We have supported AKS as an Event Grid source for over a year now, and based on your feedback we are adding more events, starting with notifications when new Kubernetes versions become available. This is the start of giving customers a mechanism to automate their AKS infrastructure through events, and we will continue to onboard further events over time.
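
As a sketch of how you might consume these events, the Azure Functions handler below logs the new version when the event arrives. The event type string and data field name reflect the AKS Event Grid schema as we understand it and are worth verifying against the current documentation before you rely on them.

```python
import logging

import azure.functions as func

def main(event: func.EventGridEvent) -> None:
    # Only react to the "new Kubernetes version available" event from AKS.
    if event.event_type != "Microsoft.ContainerService.NewKubernetesVersionAvailable":
        return

    data = event.get_json()
    logging.info(
        "New Kubernetes version available for %s: latest supported %s",
        event.subject,
        data.get("latestSupportedKubernetesVersion"),
    )
    # From here you could notify a Teams channel, open a ticket, or trigger an
    # upgrade pipeline that runs inside your maintenance window.
```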


General Availability for Arm-based nodes for improved price performance

Support for the Arm64 architecture is now generally available, so you can run your cloud native workloads on Arm64 in production with confidence. The power profile and cost savings of migrating to the Arm architecture give you the power to choose how and where you run your applications. Making things as simple as possible for our customers is one of the driving forces behind how we ship features: simply choose a Dp- or Ep-series virtual machine for your node pool and AKS will also deploy the Arm64 version of Ubuntu, getting you up and running sooner.
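
For example, the sketch below adds an Arm64 node pool via the Azure CLI from Python; the pool name is a placeholder and the VM size is just one placeholder choice from the Arm-based families.

```python
import subprocess

# Placeholder resource group and cluster names.
RG, CLUSTER = "my-rg", "my-aks-cluster"

# Add an Arm64 node pool by choosing an Arm-based VM size; AKS picks the
# matching Arm64 Ubuntu node image automatically.
subprocess.run([
    "az", "aks", "nodepool", "add",
    "--resource-group", RG, "--cluster-name", CLUSTER,
    "--name", "armpool",
    "--node-count", "3",
    "--node-vm-size", "Standard_D4pds_v5",  # Arm64 VM size (placeholder choice)
], check=True)
```

Multi-architecture container images, or a nodeSelector on the kubernetes.io/arch=arm64 label, make sure the right workloads land on the new pool.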


Security in the Open

Whilst customers can currently use pod identity with their Kubernetes workloads, the setup process and dependencies have made it difficult to implement. We have released AAD workload identity to address this and make identity much simpler to use, by combining managed identity with federated identity for your workloads. This removes the need for Custom Resource Definitions and improves the scalability of managed identities. Using the Azure SDKs or MSAL in your workloads is now seamless, a simplification that lets application developers concentrate on delivering value through software rather than scaffolding authentication themselves.
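
To illustrate what this looks like from the application side, here is a minimal sketch assuming workload identity is already configured on the cluster (OIDC issuer enabled and a managed identity federated with the pod's Kubernetes service account). The storage account URL and container name are placeholders, and older azure-identity versions may need WorkloadIdentityCredential explicitly instead of DefaultAzureCredential.

```python
from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobServiceClient

# With workload identity in place, the credential picks up the federated token
# injected into the pod -- no client secrets and no CRDs to manage.
credential = DefaultAzureCredential()

# Placeholder storage account and container names.
client = BlobServiceClient(
    "https://mystorageaccount.blob.core.windows.net", credential=credential
)
for blob in client.get_container_client("invoices").list_blobs():
    print(blob.name)
```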


We also announced the General Availability of AMD SEV-SNP confidential VM nodes on AKS. Microsoft is the first cloud provider to support AMD SEV-SNP in Kubernetes, and this capability will help organizations meet their data security goals and add defense in depth without extra developer or operational overhead.


Confidential VM nodes enable the full lift-and-shift of Linux workloads on Kubernetes-managed infrastructure to Azure, with minimal performance degradation and full AKS feature parity. Customers can also verify the trustworthiness of a node by establishing a hardware root of trust through remote attestation (reference example here). With heterogeneous node pool support, a single AKS cluster can have both confidential and non-confidential node pools, reducing cluster operational overhead. This also lets you split your workload so that sensitive data is processed on a confidential VM node, where memory encryption keys are generated by the chipset itself.


You can get started today! Here is a quick-start guide to add a confidential VM node pool to your existing cluster, and you can learn more about our announcement here.
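
As a sketch of that quick-start flow, the snippet below adds a confidential node pool to an existing cluster using the Azure CLI from Python; the pool name is a placeholder and the VM size is one of the AMD SEV-SNP confidential VM families (DCas_v5 / ECas_v5).

```python
import subprocess

# Placeholder resource group and cluster names.
RG, CLUSTER = "my-rg", "my-aks-cluster"

# Add a confidential VM node pool alongside your existing, non-confidential pools.
subprocess.run([
    "az", "aks", "nodepool", "add",
    "--resource-group", RG, "--cluster-name", CLUSTER,
    "--name", "cvmpool",
    "--node-count", "3",
    "--node-vm-size", "Standard_DC4as_v5",  # AMD SEV-SNP confidential VM size
], check=True)
```

Sensitive workloads can then be pinned to the confidential pool with a nodeSelector or node affinity, while everything else keeps running on the regular pools.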


What else?

Azure Kubernetes Fleet Manager is being introduced to address multi-cluster and at-scale scenarios such as Kubernetes resource propagation and multi-cluster load balancing across multiple AKS clusters.
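
A rough sketch of the getting-started flow is below. It assumes the fleet CLI extension is installed; the resource names, region, and subscription ID are placeholders, and the commands may evolve while the service matures.

```python
import subprocess

# Placeholder names and region.
RG, FLEET = "my-rg", "my-fleet"

def az(*args: str) -> None:
    subprocess.run(["az", *args], check=True)

# Create the fleet resource, then join an existing AKS cluster as a member.
az("fleet", "create", "--resource-group", RG, "--name", FLEET, "--location", "eastus")

cluster_id = (
    "/subscriptions/<subscription-id>/resourceGroups/my-rg"
    "/providers/Microsoft.ContainerService/managedClusters/my-aks-cluster"
)
az("fleet", "member", "create",
   "--resource-group", RG, "--fleet-name", FLEET,
   "--name", "member-1", "--member-cluster-id", cluster_id)
```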


Kubernetes apps is a new Marketplace offer that allows partners to create, publish, and manage commercial Kubernetes solutions in the Azure Marketplace, with click-through deployments to Azure Kubernetes Service and flexible billing models.


Hot on the heels of Ignite is KubeCon at the end of October, where we will be announcing more AKS features, so stay tuned for those!
