Controlling AKS egress using an HTTP Proxy
Published Apr 22 2024 09:00 AM
Azure Kubernetes Service (AKS) clusters, whether deployed into a managed or custom virtual network, have certain outbound dependencies necessary to function properly. Previously, in environments requiring internet access to be routed through HTTP proxies, this was a problem. Nodes had no way of bootstrapping the configuration, environment variables, and certificates necessary to access internet services.
This feature adds HTTP proxy support to AKS clusters, exposing a straightforward interface that cluster operators can use to secure AKS-required network traffic in proxy-dependent environments.
Both AKS nodes and Pods will be configured to use the HTTP proxy. Here is an architecture diagram showing the different components.

 

architecture.png

 

If you prefer video content, I have created a walkthrough for you. It is available on YouTube.

 


 

 

Deploy demo using Terraform

 

You will create an environment where AKS egress traffic goes through an HTTP proxy server.
You will use MITM-Proxy as the HTTP proxy server for AKS. Note that you can use other proxy servers, such as `Squid` or `Zscaler`.

The source code and templates are available in this Github repository: https://github.com/HoussemDellai/docker-kubernetes-course/tree/main/67_egress_proxy

 

Generate Certificate for MITM-Proxy server

 

By default, MITM-Proxy generates a certificate when it starts. You can get this certificate from the ~/.mitmproxy/ folder and use it with AKS.
However, for an enterprise use case, organizations will typically create their own certificate and then import it into MITM-Proxy. That is what you will do here.
Refer to the script `generate-cert.sh` to generate a certificate for MITM-Proxy and print it as base64 encoded.
openssl genrsa -out cert.key 2048

# (Specify the mitm domain as Common Name, e.g. *.google.com, or * to match all domains)

openssl req -new -x509 -key cert.key -out mitmproxy-ca-cert.pem

cat cert.key mitmproxy-ca-cert.pem > mitmproxy-ca.pem

openssl pkcs12 -export -inkey cert.key -in mitmproxy-ca-cert.pem -out mitmproxy-ca-cert.p12

cat mitmproxy-ca-cert.pem | base64 -w0
# sample output
# LS0tLS1CRUdJTiB........0VSVElGSUNBVEUtLS0tLQo=
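The same steps can be run non-interactively, which is handy for automation. Here is a minimal sketch: the `-subj` value mirrors the "for all domains" option mentioned above, and the file names match the script; the `-days` validity period is an assumption, not from the original script.

```shell
# Generate a CA key and a self-signed certificate without interactive prompts.
openssl genrsa -out cert.key 2048
openssl req -new -x509 -key cert.key -subj "/CN=*" -days 365 -out mitmproxy-ca-cert.pem

# Bundle key + certificate into the single PEM file MITM-Proxy expects.
cat cert.key mitmproxy-ca-cert.pem > mitmproxy-ca.pem

# Base64-encode the certificate on a single line for the AKS trustedCA field.
base64 -w0 < mitmproxy-ca-cert.pem > cert.b64

# Sanity check: decoding the base64 must reproduce the original certificate.
base64 -d < cert.b64 | diff - mitmproxy-ca-cert.pem && echo "round-trip OK"
```

The round-trip check matters because a corrupted or line-wrapped base64 string is a common cause of `trustedCA` failures at cluster creation time.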
An Azure VM, defined in `vm-linux-proxy-mitm.tf`, will be used to install MITM-Proxy and import the generated certificate. Note how it runs the script `install-mitmproxy.sh` as custom data. For simplicity, the certificate will be imported from a Git repository.

 

#!/bin/bash

# 1. install MITM proxy from official package

wget https://downloads.mitmproxy.org/10.2.4/mitmproxy-10.2.4-linux-x86_64.tar.gz

tar -xvf mitmproxy-10.2.4-linux-x86_64.tar.gz

# [Other option] install MITM proxy using Python pip

# sudo apt install python3-pip -y
# pip3 install mitmproxy
# sudo apt install wget -y # install if not installed

# MITM proxy can create a certificate for us on starting, but we will use our own certificate
# 2. download the certificate files

wget 'https://raw.githubusercontent.com/HoussemDellai/docker-kubernetes-course/main/_egress_proxy/certificate/mitmproxy-ca-cert.pem'
wget 'https://raw.githubusercontent.com/HoussemDellai/docker-kubernetes-course/main/_egress_proxy/certificate/mitmproxy-ca.pem'
wget 'https://raw.githubusercontent.com/HoussemDellai/docker-kubernetes-course/main/_egress_proxy/certificate/mitmproxy-ca-cert.p12'

# 3. start MITM proxy with the certificate and expose the web interface

./mitmweb --listen-port 8080 --web-host 0.0.0.0 --web-port 8081 --set block_global=false --certs '*=./mitmproxy-ca.pem' --set confdir=./

 

To configure AKS with an HTTP Proxy, you should use the following configuration sample.
{
    "httpProxy": "http://20.73.245.90:8080/",
    "httpsProxy": "https://20.73.245.90:8080/",
    "noProxy": [ "localhost", "127.0.0.1", "docker.io", "docker.com" ],
    "trustedCA": "LS0tLS1CRUdJTiBD..........Q0VSVElGSUNBVEUtLS0tLQo="
}
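As a sketch, you can assemble this file from variables and sanity-check it locally before passing it to `az aks create --http-proxy-config` or `az aks update --http-proxy-config`. The IP address and the base64 value below are placeholders, not values from the demo environment:

```shell
# Build the proxy-config file from variables (placeholder values).
PROXY_IP="20.73.245.90"
TRUSTED_CA=$(printf 'dummy-cert' | base64 -w0)   # placeholder; use your real base64 cert here

cat > aks-proxy-config.json <<EOF
{
    "httpProxy": "http://${PROXY_IP}:8080/",
    "httpsProxy": "https://${PROXY_IP}:8080/",
    "noProxy": [ "localhost", "127.0.0.1", "docker.io", "docker.com" ],
    "trustedCA": "${TRUSTED_CA}"
}
EOF

# Sanity checks: the file must be valid JSON and trustedCA must decode cleanly.
python3 -m json.tool aks-proxy-config.json > /dev/null && echo "valid JSON"
python3 -c "import json,base64; base64.b64decode(json.load(open('aks-proxy-config.json'))['trustedCA'])" && echo "trustedCA decodes"
```

Catching a malformed JSON file or a broken base64 string locally is much faster than waiting for an AKS deployment to fail.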
Now you can deploy the Terraform template using the following commands.
terraform init
terraform plan -out tfplan
terraform apply tfplan
Check the created resources
 
resources.png

 

Testing the HTTP Proxy

 

Deploy a sample Nginx pod and check the injected environment variables for the Proxy server.
Note the injected variables http_proxy, https_proxy, and no_proxy (and their uppercase equivalents); the trusted CA certificate is installed on the nodes.
kubectl run nginx --image=nginx
kubectl exec -it nginx -- env
# http_proxy=http://10.0.0.4:8080/
# HTTP_PROXY=http://10.0.0.4:8080/
# https_proxy=https://10.0.0.4:8080/
# HTTPS_PROXY=https://10.0.0.4:8080/
# no_proxy=localhost,aks-8v0n0swv.hcp.westeurope.azmk8s.io,10.10.0.0/24,10.0.0.0/16,169.254.169.254,docker.com,127.0.0.1,docker.io,konnectivity,10.10.0.0/16,168.63.129.16
# NO_PROXY=localhost,aks-8v0n0swv.hcp.westeurope.azmk8s.io,10.10.0.0/24,10.0.0.0/16,169.254.169.254,docker.com,127.0.0.1,docker.io,konnectivity,10.10.0.0/16,168.63.129.16

kubectl exec -it nginx -- curl ifconfig.me
# 20.134.24.9 # this is the proxy VM's public IP

 

Note how AKS injected additional CIDR ranges and domain names into the NO_PROXY environment variable. These are needed by the platform.
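To build intuition for how that NO_PROXY list is consulted, here is a simplified sketch of the suffix-matching logic many HTTP clients apply. The helper `should_bypass` is hypothetical, and real client behavior varies, notably for CIDR entries such as 10.0.0.0/16, which not all tools handle:

```shell
# Simplified NO_PROXY matcher: a host bypasses the proxy if it equals an
# entry exactly or ends with ".<entry>".
no_proxy_list="localhost,docker.com,127.0.0.1,docker.io"

should_bypass() {
    host="$1"
    # Split the comma-separated list into words.
    for entry in $(printf '%s' "$no_proxy_list" | tr ',' ' '); do
        [ "$host" = "$entry" ] && return 0
        case "$host" in *".$entry") return 0 ;; esac
    done
    return 1
}

should_bypass registry.docker.io && echo "registry.docker.io: bypass proxy"
should_bypass example.com        || echo "example.com: via proxy"
```

This is why AKS must add its own endpoints (the API server FQDN, 168.63.129.16, service CIDRs, and so on) to NO_PROXY: otherwise platform traffic would be forced through the proxy as well.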
Check that these environment variables are also injected into the cluster nodes.
kubectl get nodes
# NAME                                 STATUS   ROLES    AGE   VERSION
# aks-systempool-48300357-vmss000000   Ready    <none>   11m   v1.29.0
# aks-systempool-48300357-vmss000001   Ready    <none>   11m   v1.29.0
# aks-systempool-48300357-vmss000002   Ready    <none>   11m   v1.29.0

kubectl debug node/aks-systempool-48300357-vmss000000 -it --image=ubuntu

root@aks-systempool-48300357-vmss000000:/# chroot /host

env
# http_proxy=http://10.0.0.4:8080/
# HTTP_PROXY=http://10.0.0.4:8080/
# https_proxy=https://10.0.0.4:8080/
# HTTPS_PROXY=https://10.0.0.4:8080/
# no_proxy=localhost,aks-8v0n0swv.hcp.westeurope.azmk8s.io,10.10.0.0/24,10.0.0.0/16,169.254.169.254,docker.com,127.0.0.1,docker.io,konnectivity,10.10.0.0/16,168.63.129.16
# NO_PROXY=localhost,aks-8v0n0swv.hcp.westeurope.azmk8s.io,10.10.0.0/24,10.0.0.0/16,169.254.169.254,docker.com,127.0.0.1,docker.io,konnectivity,10.10.0.0/16,168.63.129.16
# ... removed for brevity
Bypass HTTP Proxy

 

By default, all egress traffic from nodes and pods goes through the proxy, because the proxy environment variables are injected into all pods and nodes.
However, if you need to bypass the proxy, you simply need to avoid injecting these environment variables.
In Kubernetes, this can be done declaratively using the annotation "kubernetes.azure.com/no-http-proxy-vars": "true".
apiVersion: v1
kind: Pod
metadata:
  name: nginx-noproxy
  annotations:
    "kubernetes.azure.com/no-http-proxy-vars": "true"
spec:
  containers:
  - image: nginx
    name: nginx

 

kubectl apply -f noproxy-pod.yaml

 

Now if you check the environment variables for this pod, you will notice that the proxy environment variables were not injected.
And if it connects to the internet, the egress traffic is carried through the cluster Load Balancer and its public IP address.

 

kubectl exec -it nginx-noproxy -- env
# PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
# HOSTNAME=nginx-noproxy
# NGINX_VERSION=1.25.4
# NJS_VERSION=0.8.3
# PKG_RELEASE=1~bookworm
# KUBERNETES_PORT=tcp://10.0.0.1:443
# KUBERNETES_PORT_443_TCP=tcp://10.0.0.1:443
# KUBERNETES_PORT_443_TCP_PROTO=tcp
# KUBERNETES_PORT_443_TCP_PORT=443
# KUBERNETES_PORT_443_TCP_ADDR=10.0.0.1
# KUBERNETES_SERVICE_HOST=10.0.0.1
# KUBERNETES_SERVICE_PORT=443
# KUBERNETES_SERVICE_PORT_HTTPS=443
# TERM=xterm
# HOME=/root

kubectl exec -it nginx-noproxy -- curl ifconfig.me
# 4.245.123.106 # this is the cluster Load Balancer public IP

 

Updating Proxy configuration

 

You can update the proxy settings on a cluster that already has an HTTP proxy configured, but you cannot enable an HTTP proxy on an existing cluster that was created without one.
az aks update -n aks -g rg-aks --http-proxy-config aks-proxy-config.json
An AKS update of the httpProxy, httpsProxy, and/or noProxy values will automatically inject the new values as environment variables into newly created pods.
Existing pods must be rotated for the apps to pick up the change.

 

For components below Kubernetes, like containerd and the node itself, the change won't take effect until a node image upgrade is performed.
 
Disclaimer
The sample scripts are not supported under any Microsoft standard support program or service. The sample scripts are provided AS IS without warranty of any kind. Microsoft further disclaims all implied warranties including, without limitation, any implied warranties of merchantability or of fitness for a particular purpose. The entire risk arising out of the use or performance of the sample scripts and documentation remains with you. In no event shall Microsoft, its authors, or anyone else involved in the creation, production, or delivery of the scripts be liable for any damages whatsoever (including, without limitation, damages for loss of business profits, business interruption, loss of business information, or other pecuniary loss) arising out of the use of or inability to use the sample scripts or documentation, even if Microsoft has been advised of the possibility of such damages.
Last update: Apr 20 2024 10:35 PM