Containerd is the default container runtime on AKS clusters running Kubernetes version 1.19 and later. With containerd-based nodes and node pools, the kubelet talks directly to containerd through the CRI (container runtime interface) plugin instead of going through the dockershim, removing extra hops compared to the Docker CRI implementation. As a result, you'll see better pod startup latency and lower resource (CPU and memory) usage.
This change prevents containers from accessing the Docker engine or /var/run/docker.sock, and from using Docker-in-Docker (DinD).
Docker-in-Docker is a common technique for building Docker images with Azure DevOps pipelines running on self-hosted agents. On containerd, pipelines that build images this way no longer work, so other techniques must be considered. This article outlines the steps to modify such pipelines to perform image builds on containerd-enabled Kubernetes clusters.
Azure VM scale set agents are one option for scaling self-hosted agents outside Kubernetes. To keep running the agents on Kubernetes, we will look at two options: one that performs image builds outside the cluster using ACR Tasks, and another that uses the Kaniko executor image, which builds an image from a Dockerfile and pushes it to a registry.
ACR Tasks is an Azure Container Registry feature that offloads container image builds to Azure.
Modify an existing pipeline, or create a new one, to add an Azure CLI task that runs the following command:
az acr build --registry <<registryName>> --image <<imageName:tagName>> .
The command will package the local source as the build context, upload it to Azure Container Registry, run the build in ACR Tasks, and push the resulting image to the registry on success.
The pipeline should look as illustrated below:
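A minimal sketch of such a pipeline, assuming a self-hosted agent pool and an Azure service connection (the pool name, service connection name, registry, and image names below are placeholders):

```yaml
trigger:
- main

pool: SelfHostedAKSPool   # placeholder: the self-hosted agent pool

steps:
- task: AzureCLI@2
  displayName: Build image with ACR Tasks
  inputs:
    azureSubscription: my-azure-service-connection   # placeholder
    scriptType: bash
    scriptLocation: inlineScript
    inlineScript: |
      az acr build --registry <<registryName>> --image <<imageName:tagName>> .
```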
Though this approach is simple, it has a dependency on ACR. The next option performs in-cluster builds and does not require ACR.
To build images with Kaniko, you need a build context and an executor instance that performs the build and pushes the image to the registry. Unlike the Docker-in-Docker scenario, Kaniko builds run in a separate pod. We will use Azure Storage to exchange the context (the source code to build) between the agent and the Kaniko executor. The pipeline steps are outlined below.
The script to perform the build is as below:
# package the source code
tar -czvf /azp/agent/_work/$(Build.BuildId).tar.gz .
# Upload the tar file to Azure Storage
az storage blob upload --account-name codelesslab --account-key $SKEY --container-name kaniko --file /azp/agent/_work/$(Build.BuildId).tar.gz --name $(Build.BuildId).tar.gz
# Create a deployment YAML for the Kaniko pod
cat > deploy.yaml <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: kaniko-$(Build.BuildId)
  namespace: kaniko
spec:
  containers:
  - name: kaniko
    image: gcr.io/kaniko-project/executor:latest
    args:
    - "--dockerfile=Dockerfile"
    - "--context=https://<<storageAccountName>>.blob.core.windows.net/<<blobContainerName>>/$(Build.BuildId).tar.gz"
    - "--destination=<<registryName>>/<<imageName>>:k$(Build.BuildId)"
    volumeMounts:
    - name: docker-config
      mountPath: /kaniko/.docker/
    env:
    - name: AZURE_STORAGE_ACCESS_KEY
      value: $SKEY
  restartPolicy: Never
  volumes:
  - name: docker-config
    configMap:
      name: docker-config
EOF
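The docker-config volume referenced in the manifest supplies the registry credentials Kaniko uses to push the image. A one-time setup sketch for the storage container and the ConfigMap (all <<...>> values are placeholders):

```shell
# One-time setup (all <<...>> values are placeholders).
# Storage container that will hold the build context tarballs:
az storage container create --account-name <<storageAccountName>> --name kaniko

# Namespace for the executor pods:
kubectl create namespace kaniko

# Docker config.json with base64-encoded registry credentials:
cat > config.json <<EOF
{ "auths": { "<<registryName>>": { "auth": "$(echo -n '<<user>>:<<password>>' | base64)" } } }
EOF
kubectl create configmap docker-config -n kaniko --from-file=config.json
```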
The storage access key can be added as a secret (encrypted) pipeline variable. Since secret variables are not passed on to tasks automatically, they must be mapped to an environment variable explicitly.
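For example, if the secret pipeline variable is named storageAccountKey, the mapping in the task definition might look like this (a sketch; the variable and script names are placeholders):

```yaml
- task: Bash@3
  displayName: Package Context and Prepare YAML
  inputs:
    targetType: filePath
    filePath: build-context.sh   # placeholder: the packaging script above
  env:
    SKEY: $(storageAccountKey)   # maps the secret pipeline variable to $SKEY
```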
As the build is executed outside the pipeline agent, the pipeline needs to monitor the status of the pod to decide on its next steps. Below is a sample bash script to monitor the pod:
# Monitor the pod for success or failure
while true; do
  PHASE=$(kubectl get pod kaniko-$(Build.BuildId) -n kaniko -o jsonpath='{.status.phase}')
  if [[ "$PHASE" == "Succeeded" || "$PHASE" == "Failed" ]]; then break; fi
  echo "waiting for pod"
  sleep 1
done
# Exit the script with an error if the build failed
if [[ "$PHASE" == "Failed" ]]; then
  exit 1
fi
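To surface the build output in the pipeline log, the Kaniko pod's logs can also be streamed once the pod is running (a sketch):

```shell
# Stream the Kaniko build output into the pipeline log
kubectl logs -f kaniko-$(Build.BuildId) -n kaniko
```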
The complete pipeline should look similar to below:
Task 1: [Optional] Get the KubeConfig (if not supplied through secrets)
Task 2: [Optional] Install the latest kubectl (if not included in the agent image)
Task 3: Package Context and Prepare YAML
Note how the pipeline variable is mapped to the Task Environment variable
Task 4: Create the Executor Pod
Note: Alternatively, kubectl apply -f deploy.yaml can be included in the previous script.
Task 5: Monitor for Status
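Tasks 4 and 5 can be sketched in pipeline YAML as follows (the script file names are placeholders for the scripts shown earlier):

```yaml
- script: kubectl apply -f deploy.yaml
  displayName: Create the Executor Pod
- task: Bash@3
  displayName: Monitor for Status
  inputs:
    targetType: filePath
    filePath: monitor-kaniko.sh   # placeholder: the monitoring script above
```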
These build techniques are more secure than the Docker-in-Docker scenario, as no special permissions, privileges, or mounts are required to perform a container image build.