
Startups at Microsoft

AKS networking made easy: Your comprehensive guide

rmmartins, Microsoft
Mar 31, 2025

Azure Kubernetes Service (AKS) is not just about deploying containerized applications—it’s also about architecting robust, secure, and efficient network connectivity for your clusters. In this blog post, we’ll explore the intricacies of AKS networking, clarify the different models and options available, and discuss best practices through real-world scenarios. Whether you’re just starting out or looking to fine-tune an existing deployment, this guide will help you master AKS networking.

1. AKS network topologies and connectivity

Understanding the network topology is the foundation of effective AKS networking. The Cloud Adoption Framework’s AKS network topology and connectivity guide provides a structured look at how AKS clusters integrate into an organization’s network fabric.

Key concepts:

  • Cluster connectivity: How pods, services, and external resources communicate.
  • Topology options: From simple flat networks to more segmented designs.

Real-world scenario: Imagine a multi-tier application where frontend pods need to securely talk to backend services and databases. A clear network topology ensures that the traffic flow respects both performance and security requirements.

This diagram illustrates a simplified view of how traffic flows from external users through an ingress controller to both frontend and backend pods.

2. Comparing AKS network models

One of the most important decisions when deploying AKS is choosing between the different networking models.

Kubenet was one of the original networking drivers in Kubernetes, and it still “just works” out of the box in most on‑prem or DIY clusters. But as we’ve moved toward managed, cloud‑hosted Kubernetes, vendor‑built CNIs have become the norm—solving Kubenet’s limitations around IP‑address management, scalability and lack of overlay networking.

That’s why AKS now offers a full spectrum of Azure‑native CNIs—Standard (Node Subnet), Overlay, dynamic IP allocation and even Cilium‑powered variants—each built to fill those gaps. Standard mode injects pod IPs straight into your VNet, Overlay preserves your address space, dynamic IP mode auto‑manages huge clusters, and Cilium brings eBPF‑driven performance and observability.

The AKS concepts on network models outline the primary options:

Kubenet vs. Azure CNI (Standard)

  • Kubenet:
    Kubenet in action: Pods receive overlay network IPs, use NAT for external communication, and preserve VNET addresses.
    • Simplicity and flexibility: Pods receive an IP from an overlay network.
    • Use case: Historically, kubenet was favored for smaller clusters or scenarios where conserving IP addresses was important.
    • Important notice: On 31 March 2028, kubenet networking for Azure Kubernetes Service (AKS) will be retired. To avoid service disruptions, you will need to upgrade workloads running on kubenet to Azure Container Networking Interface (CNI) Overlay before that date; a minimal migration sketch follows this comparison. More details can be found in the official Microsoft documentation.
  • Azure CNI (Standard Mode):
    CNI Standard in action: Pods obtain IPs directly from the VNET, ensuring seamless integration but requiring careful IP planning.
    • Full integration: Pods get IP addresses directly from the virtual network (VNET), providing seamless integration with other Azure resources.
    • Scalability and integration: Ideal for large clusters and scenarios that require tight integration with Azure networking.
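If you are still running kubenet clusters, Microsoft documents an in-place upgrade path to Azure CNI Overlay. The command below is a minimal, hedged sketch of that path: the cluster and resource group names and the pod CIDR are placeholders, and the exact flags and prerequisites should be verified against the current upgrade documentation before applying this to a real cluster.

# Hypothetical names; verify prerequisites before running against a production cluster
az aks update \
  --resource-group MyResourceGroup \
  --name MyKubenetCluster \
  --network-plugin azure \
  --network-plugin-mode overlay \
  --pod-cidr 192.168.0.0/16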

Azure CNI Standard vs. Azure CNI Overlay

When choosing a networking approach for your AKS cluster, it's important to understand the trade-offs between the two main Azure CNI variants. Azure CNI Standard assigns pod IPs directly from your Azure VNET, offering tight integration with your network infrastructure. In contrast, Azure CNI Overlay decouples pod IP assignment from the VNET through encapsulation (e.g., VXLAN), which can be advantageous for large-scale deployments with limited IP space. Below is an overview of the differences between these two approaches:

  • Azure CNI Standard:
CNI Standard: Pods directly receive IPs from the VNET, allowing seamless integration but requiring careful IP planning.
    • Direct IP assignment: Each pod is assigned a unique IP address from your Azure VNET.
    • Full VNET integration: Enables use of VNET-level controls (like NSGs) and ensures pods are routable within your VNET.
    • IP consumption: Requires careful IP planning, as each pod consumes a VNET IP.
    • Learn more: Azure CNI networking
  • Azure CNI Overlay:
CNI Overlay: Pods receive IPs from an overlay network, decoupled from VNET IP space for efficient IP usage but with slight encapsulation overhead.
    • Overlay network: Pods receive IP addresses from an overlay network using encapsulation (such as VXLAN).
    • Efficient IP utilization: Decouples pod IP assignment from the VNET's IP range, which is beneficial for large-scale deployments with limited VNET address space.
    • Performance consideration: There is a slight overhead due to encapsulation/decapsulation processes.
    • Learn more: Azure CNI overlay

Additional Azure CNI variants

Beyond the standard modes, Microsoft offers other variants to address different workload needs; minimal creation sketches for both follow this list:

  • Azure CNI with dynamic IP allocation: Allocates pod IP addresses dynamically, reducing the need for pre-allocation and easing IP management in highly dynamic environments.
    • Benefits: 
      • Reduces IP waste when pods are ephemeral and can be scaled up or down frequently.
      • Simplifies IP address management by allocating IPs on-demand.
    • When to use: Ideal for environments with rapid scaling or high pod churn, where managing a static pool of IPs can be cumbersome.
    • Learn more: Azure CNI with dynamic IP allocation

  • Azure CNI Powered by Cilium: Leverages Cilium and eBPF to provide advanced networking capabilities, enhanced security policies, and improved observability.
    • Benefits:
      • Provides granular security and networking policies with high performance, thanks to eBPF.
      • Enables advanced features like transparent encryption, load balancing, and deep visibility into network flows.
    • When to use: Suitable for organizations looking for cutting-edge network security, observability, and performance improvements.
    • Learn more: Azure CNI Powered by Cilium
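Both variants can be selected at cluster creation time with az aks create. The commands below are hedged sketches only: all resource names are placeholders, the --pod-subnet-id path assumes you have already created a dedicated pod subnet in your VNET, and you should confirm the current flag names against the Azure CLI documentation before use.

# Azure CNI with dynamic IP allocation: nodes and pods draw IPs from separate subnets
az aks create \
  --resource-group MyResourceGroup \
  --name MyDynamicIPCluster \
  --network-plugin azure \
  --vnet-subnet-id "<node-subnet-resource-id>" \
  --pod-subnet-id "<pod-subnet-resource-id>"

# Azure CNI Powered by Cilium: eBPF dataplane on top of overlay networking
az aks create \
  --resource-group MyResourceGroup \
  --name MyCiliumCluster \
  --network-plugin azure \
  --network-plugin-mode overlay \
  --network-dataplane cilium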

Example lab: Deploying an AKS cluster with Azure CNI (Standard)

1. Plan IP addressing: Use the Azure CNI configuration guide to determine your IP range.

2. Create the AKS cluster:

# Variables
resourceGroup="MyResourceGroup"
location="centralus"
vnetName="MyVnet"
vnetAddressPrefix="10.0.0.0/16"
subnetName="MySubnet"
subnetAddressPrefix="10.0.1.0/24"
aksName="MyCNIAKSCluster"
serviceCidr="10.200.0.0/16"
dnsServiceIp="10.200.0.10"

# Create resource group
az group create --name "$resourceGroup" --location "$location"

# Create virtual network
az network vnet create \
  --resource-group "$resourceGroup" \
  --name "$vnetName" \
  --address-prefix "$vnetAddressPrefix"

# Create subnet within the VNET
az network vnet subnet create \
  --resource-group "$resourceGroup" \
  --vnet-name "$vnetName" \
  --name "$subnetName" \
  --address-prefix "$subnetAddressPrefix"

# Retrieve current subscription ID and build the subnet ID dynamically
subId=$(az account show --query id -o tsv)
subnetId="/subscriptions/${subId}/resourceGroups/${resourceGroup}/providers/Microsoft.Network/virtualNetworks/${vnetName}/subnets/${subnetName}"

# Create the AKS cluster using the dynamic subnet ID
az aks create \
  --resource-group "$resourceGroup" \
  --name "$aksName" \
  --location "$location" \
  --network-plugin azure \
  --vnet-subnet-id "$subnetId" \
  --service-cidr "$serviceCidr" \
  --dns-service-ip "$dnsServiceIp" \
  --enable-managed-identity

AKS CNI Standard mode – Key networking parameters

When deploying an AKS cluster using Azure CNI in standard mode, it’s important to understand the key parameters that control network configuration:

  • --service-cidr:
    • Purpose: This parameter defines the CIDR block from which Kubernetes service IPs are allocated.
    • Usage: The service CIDR must be a range that does not conflict with your virtual network (VNET) or pod IP ranges.
    • Example: If you specify --service-cidr 10.200.0.0/16, all cluster services (such as those created via kubectl expose) will receive IPs from this range. It’s critical to plan this CIDR carefully to ensure there are no overlaps with any other network segments in your environment.

  • --dns-service-ip:
    • Purpose: This parameter designates the IP address within the service CIDR that is used for the cluster’s DNS service (typically CoreDNS).
    • Usage: This IP must fall within the range defined by the service CIDR and must not be in use by any other service.
    • Example: For a service CIDR of 10.200.0.0/16, you might set --dns-service-ip 10.200.0.10. This reserved IP is then used by the DNS service to resolve names for services and pods within the cluster.

Why these settings are critical:
Using separate CIDR blocks for the VNET, pods, and services ensures there is no overlap, which is essential for proper routing and network isolation. While Azure CNI (standard mode) assigns pod IPs directly from the VNET, the service CIDR is distinct and is only used for service IP allocation. This separation allows you to have more control over your network design and helps prevent conflicts with external networks.
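To see these two settings in action on a running cluster, you can expose a workload and check the allocated service IPs. The sketch below assumes a deployment named web already exists (a hypothetical name); with the values used above, service IPs should land in 10.200.0.0/16 and CoreDNS should sit at 10.200.0.10.

# Expose a (hypothetical) deployment and inspect its ClusterIP
kubectl expose deployment web --port=80 --name=web-svc
kubectl get svc web-svc

# CoreDNS is exposed as the kube-dns service at the --dns-service-ip address
kubectl get svc kube-dns -n kube-system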

3. Validate networking:

Check that nodes and pods are receiving IPs from the specified VNET. This command displays each node's internal IP address, helping you verify that nodes are attached to the correct network.

kubectl get nodes -o wide

Look for the INTERNAL-IP column to confirm that each node's IP falls within the expected VNET address space.

To check that pods are receiving IPs correctly, list pods across all namespaces:

kubectl get pods --all-namespaces -o wide

The IP column should show addresses allocated from the VNET's defined range (for Azure CNI).

For additional details on a specific node’s networking, including labels and annotations related to IP assignment, you can describe the node:

kubectl describe node <node-name>

Replace <node-name> with one of the node names from the previous command. This output can help confirm that the node is correctly integrated with the VNET.

These commands together help validate that both the node and pod IP assignments are in line with your planned IP ranges, ensuring that your network planning and model selection are correctly implemented.

3. Private clusters and DNS configurations

For organizations with strict security requirements, AKS offers the ability to create private clusters. Private clusters ensure that the API server is not exposed to the public internet, enhancing security.

Key topics:

  • Private API server: The control plane endpoint is reachable only from inside your virtual network (or peered and connected networks).
  • Private DNS: A private DNS zone resolves the API server's private endpoint within the virtual network.

Example lab: Creating a private AKS cluster with CNI (Standard)

1. Deploy a private AKS cluster:

az aks create \
  --resource-group MyResourceGroup \
  --name MyPrivateAKSCluster \
  --enable-private-cluster \
  --network-plugin azure

Why isn't the VNET or service CIDR specified?

In this example, advanced networking parameters like the virtual network, subnet, service CIDR, and DNS service IP are not explicitly defined. This is because:

  • Default networking configuration: When these parameters are omitted, AKS automatically provisions a default virtual network and assigns IP ranges for the cluster. With --network-plugin azure, the cluster is created using Azure CNI. This managed configuration is sufficient for many scenarios, reducing complexity during initial deployments.
  • Focus on enabling privacy: The primary goal in this scenario is to enable the private connectivity feature. By focusing on --enable-private-cluster, the example emphasizes that the API server will be accessible only within the internal network. Customizing networking settings (like specifying a particular VNET or IP ranges) is optional and can be added if you have specific integration or policy requirements.
  • Flexibility and customization: If your deployment requires integration with an existing virtual network or adherence to particular IP address planning, you can extend the command to include those parameters, similar to the public cluster examples. The minimal command is provided as a baseline for simplicity and ease of deployment.

2. Configure a private DNS zone:

The Azure Private DNS overview explains how to leverage private DNS zones for name resolution within your virtual network. For private clusters, configuring a private DNS zone ensures that your cluster’s API server and internal endpoints are accessible using friendly domain names. The configuration guide provides step-by-step instructions for this setup.
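If you want to control the private DNS zone yourself rather than let AKS create one, the cluster can be pointed at a pre-created zone at deployment time. The following is a hedged sketch only: the subscription ID, zone name, and identity wiring are placeholders, a custom zone must follow the privatelink.<region>.azmk8s.io naming convention, and the cluster identity needs permissions on the zone, so check the configuration guide for the exact prerequisites.

az aks create \
  --resource-group MyResourceGroup \
  --name MyPrivateAKSCluster \
  --enable-private-cluster \
  --network-plugin azure \
  --enable-managed-identity \
  --private-dns-zone "/subscriptions/<subscription-id>/resourceGroups/MyResourceGroup/providers/Microsoft.Network/privateDnsZones/privatelink.centralus.azmk8s.io"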

Real-world example: Consider a financial services company that must comply with strict data residency and security guidelines. Deploying AKS as a private cluster—with a dedicated private DNS zone—ensures that all control-plane communications and sensitive endpoints remain isolated within the company’s secure virtual network. If advanced network customization is needed, additional parameters (like a pre-created VNET, custom service CIDR, etc.) can be integrated into the deployment command.

4. Ingress, application routing, and traffic management

Managing incoming traffic is critical for any production-grade application. AKS offers several options for routing traffic:

Application Gateway for Containers

  • Azure’s latest ingress offering, Application Gateway for Containers, is the successor to the Application Gateway Ingress Controller, bringing numerous performance, resiliency, and layer 7 load-balancing improvements. In addition, it adopts the Kubernetes Gateway API, enabling administrators and developers to easily declare their load-balancing intent.

Application Gateway Ingress Controller (AGIC)

  • AGIC runs inside the cluster and programs an existing Azure Application Gateway as the ingress for your services. It remains supported, but Application Gateway for Containers is its successor for new deployments, as noted above.

Application routing

  • HTTP Application Routing: Historically, HTTP Application Routing was a popular option for simplifying DNS management for your applications. Note: Microsoft has announced that HTTP Application Routing will be retired on 03 March 2025. It is recommended that you migrate to the Application Routing add-on by that date to ensure continued support and enhanced functionality. For further details on migration, refer to the App routing migration guide.
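For clusters that still rely on the retired add-on, the replacement application routing add-on can typically be enabled with a single CLI call. The command below is a sketch with placeholder names; confirm the exact syntax and migration steps in the App routing migration guide.

az aks approuting enable \
  --resource-group MyResourceGroup \
  --name MyAKSCluster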

Traffic management overview

In addition to ingress and application routing, effective traffic management involves strategies that optimize how traffic is handled within your environment. While a deep dive into these advanced topics is beyond the scope of this article, here is a brief overview:

  • Traffic splitting & canary deployments: Techniques that enable gradual rollout of new application versions by directing a portion of the traffic to new deployments while the majority remains on the current version. This reduces risk during updates and allows for real-time testing under live conditions.
  • A/B testing & blue/green deployments: Strategies that allow you to serve different versions of your application to different user groups. This can help in testing features or UI changes before a full rollout, ensuring smoother transitions and minimizing disruption.
  • Geo-based routing: Directing user requests to the nearest available service endpoint based on geographic location. This not only improves response times but also enhances the overall user experience by reducing latency.
  • Service mesh integration: Tools like Istio can be deployed alongside AKS to provide fine-grained control over traffic routing, observability, and secure communication between services. These tools add another layer of management for scenarios that require dynamic traffic policies and granular control.

Note: For a comprehensive exploration of these advanced traffic management strategies, a dedicated article would be ideal. This overview aims to provide context on how these techniques integrate with basic ingress and application routing to form a complete traffic management strategy.

Ingress with the Gateway API: an example and how to use it

With the new capabilities of Application Gateway for Containers, you can now leverage the Gateway API for more advanced ingress scenarios—such as hosting multiple sites and aligning with Kubernetes’ evolving standards. Unlike the traditional ingress resource, the Gateway API provides a more flexible and standardized way to manage external traffic.

Step 1: Prepare Your Backend Service

Ensure you have a backend service deployed (for example, a service named my-service that listens on port 80). For instance:
Service Configuration (my-service.yaml)

apiVersion: v1
kind: Service
metadata:
  name: my-service
  namespace: default
spec:
  selector:
    app: myapp
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80

Deploy the service using:

kubectl apply -f my-service.yaml

Step 2: Create a Gateway API Configuration

Below is an example of how to configure the Gateway API to work with AGC. This example demonstrates creating a Gateway and an associated HTTPRoute to host traffic for the hostname example.yourdomain.com.

Gateway Configuration (gateway.yaml):

apiVersion: gateway.networking.k8s.io/v1beta1
kind: Gateway
metadata:
  name: my-gateway
  namespace: default
spec:
  # The gateway class name depends on your AGC / ALB Controller installation;
  # adjust it to match the class registered in your cluster.
  gatewayClassName: azure-agc
  listeners:
    - name: https
      protocol: HTTPS
      port: 443
      tls:
        mode: Terminate
        certificateRefs:
          # Hypothetical secret holding the TLS certificate for example.yourdomain.com
          - kind: Secret
            name: example-tls-cert
      allowedRoutes:
        namespaces:
          from: All

HTTPRoute Configuration (httproute.yaml):

apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: my-httproute
  namespace: default
spec:
  parentRefs:
    - name: my-gateway
  hostnames:
    - "example.yourdomain.com"
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /
      backendRefs:
        - name: my-service
          port: 80

Step 3: Deploy the Gateway and HTTPRoute

Apply the configurations:

kubectl apply -f gateway.yaml
kubectl apply -f httproute.yaml

Step 4: Validate the Deployment

  1. DNS resolution: Ensure that example.yourdomain.com points to the public IP of your Application Gateway for Containers.
  2. Testing connectivity: From an external client, send an HTTPS request to https://example.yourdomain.com and verify that the traffic is routed to your backend service.
  3. Monitoring and troubleshooting: Use the following commands to inspect the status and events of your Gateway and HTTPRoute:
kubectl describe gateway my-gateway -n default
kubectl describe httproute my-httproute -n default
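For steps 1 and 2, ordinary client-side tools are usually sufficient. The hostname below is the same placeholder used in the manifests; -k skips certificate validation, which is handy while you are still testing with a self-signed or not-yet-trusted certificate.

nslookup example.yourdomain.com
curl -vk https://example.yourdomain.com/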

Key advantages of Gateway API with AGC:

  • Advanced routing capabilities: The Gateway API allows you to define multiple routes, enabling scenarios like multiple site hosting, path-based routing, and more.
  • Future-proof alignment: With Ingress API development in a freeze state, adopting the Gateway API aligns your deployments with the evolving direction of Kubernetes networking.
  • Unified management: By using AGC with Gateway API, you benefit from the advanced features of Application Gateway for Containers, including robust load balancing and enhanced security features.

This updated approach not only modernizes your ingress setup but also provides a more scalable and flexible way to manage external traffic into your AKS clusters.

For additional details and the latest examples, see the multi-site hosting guide for Application Gateway for Containers.

Another great piece of content about AGC, written by Jose Moreno, is available here: Application Gateway for Containers: a not-so-gentle intro

Diagram: AGC traffic flow

An example architecture illustrating how Application Gateway for Containers (AGC) uses the Gateway API to route HTTPS traffic from a client to different services within an AKS cluster.

Scenario: A global e-commerce platform leverages Application Gateway for Containers (AGC) integrated with the Gateway API to route traffic based on hostnames, paths, or other advanced routing rules. This approach allows each microservice (e.g., checkout, product catalog, user management) to be served through its own route configuration, simplifying scaling and updates. As the platform grows, the Gateway API’s extensible model ensures a future-proof solution—one that supports multiple site hosting and advanced traffic management without relying on the older Ingress API.

5. Virtual networks, service endpoints, and private link

Integrating your AKS clusters with Azure Virtual Networks (VNETs) is crucial for secure communication with other Azure services.

Service endpoints and private link:

  • Service endpoints: The Virtual network service endpoints overview explains how endpoints extend VNET private address space to Azure services.
  • Private link: For even tighter integration, private link allows you to access Azure PaaS services over a private endpoint in your VNET. 

Diagram: VNET Integration:

An AKS cluster integrates with an Azure Virtual Network, enabling secure access to Azure SQL Database, Storage Accounts, and other PaaS services

Example Use Case: A healthcare application that needs to access an Azure SQL Database can use service endpoints or Private Link. This ensures that traffic between the AKS cluster and the database does not traverse the public internet, thereby meeting regulatory compliance and security requirements.
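As a concrete illustration of the service endpoint path, the subnet used by the cluster can be enabled for the Microsoft.Sql service. The sketch below reuses the VNET and subnet names from the earlier lab as placeholders; the database side also needs a corresponding virtual network rule (or, for Private Link, a private endpoint) before traffic is actually restricted to the VNET.

az network vnet subnet update \
  --resource-group MyResourceGroup \
  --vnet-name MyVnet \
  --name MySubnet \
  --service-endpoints Microsoft.Sql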

6. Planning IP addressing with Azure CNI

A critical aspect of designing your AKS network is planning the IP address space. The Azure CNI configuration guide helps you to:

  • Determine IP range requirements: For nodes and pods.
  • Avoid address overlap: With existing VNETs or on-premises networks.
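As a rough worked example (assumed numbers, not a prescription): with Azure CNI Standard, every node reserves one IP for itself plus one per pod it can host, so the subnet must cover the maximum node count times (max pods per node + 1), and Azure also reserves five addresses in every subnet.

10 nodes × (30 max pods + 1 node IP) = 310 IPs required
/24 subnet: 256 − 5 reserved = 251 usable → too small
/23 subnet: 512 − 5 reserved = 507 usable → fits, with headroom for upgrades and scaling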

7. Egress traffic management and security controls

Outbound traffic from your AKS clusters must be managed to ensure security and compliance. There are several approaches:

Egress options:

  • Load balancer (default): Outbound traffic leaves the cluster through the Standard Load Balancer's public IP.
  • NAT gateway: A managed or user-assigned NAT gateway provides scalable outbound SNAT for the cluster subnet.
  • User-defined routing (UDR): Outbound traffic is forced through a network virtual appliance such as Azure Firewall, giving you full control and inspection of egress flows.

Security layers:

  • Network security groups (NSGs): NSGs in virtual networks provide an extra layer of security by filtering traffic at the subnet or NIC level.
  • Network policies: For pod-level security, the use network policies guide explains how to restrict communication between pods. 

Network policies are a key tool for enforcing security at the pod level. They allow you to restrict both ingress and egress traffic between pods. In this example, we focus on an ingress policy that permits only pods with the label app: frontend to communicate with pods labeled app: backend.
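A minimal manifest matching that description might look like the sketch below (the policy name and namespace are illustrative, and a network policy engine such as Azure, Calico, or Cilium must be enabled on the cluster for it to take effect):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: backend      # the policy applies to backend pods
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend   # only frontend pods may connect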

Practical scenario: In a scenario where a cluster hosts a mix of public-facing and internal services, configuring UDRs with Azure Firewall and applying NSGs and network policies ensures that public endpoints are hardened while internal communications remain efficient and secure.

8. Advanced networking: CNI overlay and operator best practices

For those looking to push the envelope in AKS networking, advanced configurations can offer improved performance and flexibility. One such configuration is using Azure CNI Overlay, which helps in scenarios where you need to conserve VNET IP addresses for large-scale deployments.

What is Azure CNI overlay?

  • Overlay network: Instead of assigning each pod an IP directly from your VNET (as in standard Azure CNI), pods receive IP addresses from an overlay network. This overlay is built using encapsulation methods (such as VXLAN), allowing you to decouple pod IP assignment from your VNET’s IP range.
  • Efficient IP utilization: This approach is particularly beneficial in environments with limited VNET address space or when deploying clusters with high pod density.
  • Trade-off: While the overlay approach introduces slight encapsulation overhead, it greatly enhances scalability.

Advanced concepts:

  • Azure CNI Overlay: Decouples pod IP assignment from the VNET by allocating pod addresses from a dedicated overlay CIDR.
  • Operator best practices for networking: Guidance on choosing a network model, planning address space, and securing ingress and egress for production clusters.

Example lab: Implementing CNI overlay

1. Deploy a cluster with CNI overlay:

# Variables
resourceGroup="MyResourceGroup"
location="centralus"
aksName="MyOverlayAKSCluster"
podCidr="192.168.0.0/16"
nodeCount=3

# Create resource group (if it doesn't already exist)
az group create --name "$resourceGroup" --location "$location"

# Create AKS cluster with CNI Overlay
az aks create \
  --resource-group "$resourceGroup" \
  --name "$aksName" \
  --location "$location" \
  --network-plugin azure \
  --network-plugin-mode overlay \
  --pod-cidr "$podCidr" \
  --enable-addons monitoring \
  --node-count "$nodeCount"

Note on VNET and pod CIDR with CNI overlay

When deploying an AKS cluster using Azure CNI Overlay, it's important to understand how networking is handled:

  • Overlay pod CIDR: The pod CIDR you specify (e.g., 192.168.0.0/16) is used exclusively for assigning IP addresses to pods. This overlay CIDR is completely independent of the address space used by the underlying virtual network (VNET).
  • Default VNET provisioning: In this minimal example no virtual network is supplied, so AKS automatically provisions a default VNET in a system-managed resource group. This VNET supports the cluster's control plane and node infrastructure, and its IP range is independent of the overlay pod CIDR. If you need the nodes to live in an existing VNET, you can still supply a node subnet (for example via --vnet-subnet-id); the pod CIDR remains decoupled from it either way.
  • Decoupled pod networking: Because pod IP addresses are allocated from the overlay CIDR rather than the VNET, even if the underlying VNET uses a different range (e.g., 10.0.0.0/16), there is no conflict with pod IPs from the overlay CIDR (e.g., 192.168.0.0/16). This decoupling simplifies IP management and allows for greater scalability, especially in environments where VNET IP space is limited.
  • When to use Azure CNI (Standard): If you need pods themselves to be routable members of your VNET—so that VNET-level controls and address planning apply directly to pod IPs—use Azure CNI (Standard) mode, where pods draw their addresses from the subnet you supply during cluster creation.

In summary, Azure CNI Overlay decouples pod networking from the underlying VNET: whether AKS provisions a default VNET or you bring your own node subnet, pod IPs always come from the overlay pod CIDR, which keeps IP management simple and scalable.

2. Test connectivity:

Deploy a sample application and verify pod-to-pod connectivity using overlay networking tools and commands.

Step 1: Deploy two test pods:

Create two pods (named test-pod-1 and test-pod-2) using the BusyBox image, which provides basic networking utilities:

kubectl run test-pod-1 --image=busybox --restart=Never -- /bin/sh -c "sleep 3600"
kubectl run test-pod-2 --image=busybox --restart=Never -- /bin/sh -c "sleep 3600"

Step 2: Verify pods are running

Check that both pods are in the running state:

kubectl get pods

You should see output similar to:

NAME        READY   STATUS   RESTARTS   AGE
test-pod-1  1/1     Running  0          1m
test-pod-2  1/1     Running  0          1m

Step 3: Retrieve the IP address of one pod

Get the IP address of test-pod-2:

POD2_IP=$(kubectl get pod test-pod-2 -o jsonpath='{.status.podIP}')
echo "test-pod-2 IP: $POD2_IP"

Step 4: Test connectivity from the other pod

Exec into test-pod-1 and ping test-pod-2 using the retrieved IP address:

kubectl exec test-pod-1 -- ping -c 4 $POD2_IP

You should see output confirming that test-pod-1 can reach test-pod-2, such as:

PING 192.168.2.117 (192.168.2.117): 56 data bytes
64 bytes from 192.168.2.117: seq=0 ttl=64 time=0.123 ms
64 bytes from 192.168.2.117: seq=1 ttl=64 time=0.098 ms
...
--- 192.168.2.117 ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss

Optional: Clean up

After testing, remove the test pods:

kubectl delete pod test-pod-1 test-pod-2

9. Managing resource groups and FAQs

Understanding how AKS organizes its resources is critical for efficient management. When you deploy an AKS cluster, two resource groups are created by design: one for the cluster's user-managed resources and a secondary, system-managed resource group that contains supporting components. Here’s what you need to know:

  • Primary vs. secondary resource group: The primary resource group hosts the cluster’s core components, while the secondary resource group holds system-managed resources like load balancers, managed identities, and network components. It’s important to avoid manual modifications in the secondary group since it is maintained by AKS.
  • Lifecycle management best practices: To safeguard your resources:
    • Apply resource locks or policies to prevent accidental deletion or modification.
    • Use consistent naming conventions and tagging across both resource groups. This aids in tracking, cost management, and operational monitoring.
  • Role-based access control (RBAC): Implement RBAC not only within your AKS cluster but also across both resource groups. Proper RBAC configuration ensures that access is granted based on roles and responsibilities, enhancing overall security and operational efficiency.
  • Monitoring and auditing: Regular monitoring using Azure Monitor or other auditing tools is essential. Keeping a close watch on both resource groups can help detect unauthorized changes or unexpected costs early on, ensuring the operational health and security of your deployment.
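Two small commands illustrate these points in practice; the names below are the placeholders used earlier in this post.

# Find the system-managed (node) resource group for an existing cluster
az aks show --resource-group MyResourceGroup --name MyCNIAKSCluster \
  --query nodeResourceGroup -o tsv

# Protect the primary (user-managed) resource group against accidental deletion
az group lock create \
  --resource-group MyResourceGroup \
  --name do-not-delete \
  --lock-type CanNotDelete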

By integrating these practices into your management strategy, you can efficiently control the lifecycle, security, and performance of your AKS resources, leading to a more stable and secure production environment.

For further details, refer to the AKS FAQ on resource groups.

Conclusion

AKS networking is multifaceted, covering everything from basic connectivity and IP planning to advanced security and routing scenarios. By understanding:

  • the available network models (kubenet, Azure CNI Standard, Overlay, dynamic IP allocation, and Cilium),
  • IP address planning for nodes, pods, and services,
  • private clusters and private DNS configuration,
  • ingress, the Gateway API, and traffic management, and
  • egress controls, NSGs, and network policies,

you can design and operate AKS clusters that are both high-performing and secure. Real-world scenarios, like segregating public-facing and internal services or ensuring regulatory compliance via private networking, illustrate how these concepts are applied in production environments.

Next Steps: Hands-On Labs

  • Lab 1: Deploy an AKS Cluster with Azure CNI Standard and validate IP addressing
    Follow the steps in Section 2 to create your cluster and verify pod IPs.
  • Lab 2: Implement a Private Cluster and Configure Private DNS
    Use Section 3’s instructions to deploy a private cluster and set up a private DNS zone.
  • Lab 3: Deploy an AKS Cluster with Azure CNI Overlay
Follow the steps in Section 8 to create your cluster and test pod connectivity.

These labs will give you hands-on experience with the core aspects of AKS networking, solidifying your understanding of both the concepts and their practical applications.

References

  1. Networking Best Practices - AKS
  2. AKS Network Topology and Connectivity
  3. Compare Network Models
  4. Private Clusters
  5. Azure Private DNS Overview
  6. AKS FAQ – Resource Groups
  7. No Private DNS Zone Prerequisites
  8. Application Gateway Ingress Controller Overview
  9. HTTP Application Routing
  10. App Routing Migration
  11. Virtual Network Service Endpoints Overview
  12. Azure Private Link Overview
  13. Configure Private DNS Zone
  14. Plan IP Addressing for Your Cluster
  15. Deploy a Cluster with Outbound Type of UDR and Azure Firewall
  16. AKS CNI Overview
  17. Egress Outbound Type
  18. Limit Egress Traffic
  19. Network Security Groups Overview
  20. Use Network Policies
  21. Azure CNI Overlay
  22. Operator Best Practices – Networking