
Overview

Containerd is the default container runtime for AKS clusters on Kubernetes version 1.19 onwards. With containerd-based nodes and node pools, the kubelet talks directly to containerd via the CRI (container runtime interface) plugin instead of going through the dockershim, removing extra hops in the flow compared to the Docker CRI implementation. As a result, you'll see better pod startup latency and lower resource (CPU and memory) usage.
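
To confirm which runtime a node pool is using, you can inspect the nodes; on containerd-based pools the CONTAINER-RUNTIME column reports containerd:// instead of docker://:

# The CONTAINER-RUNTIME column shows the runtime each node is running
kubectl get nodes -o wide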


 


This change prevents containers from accessing the Docker engine, /var/run/docker.sock, or using Docker-in-Docker (DinD).
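
For context, a DinD-style agent pod typically mounted the host's Docker socket, along the lines of the sketch below (illustrative only; the pod name and image are placeholders). On containerd-based nodes there is no Docker daemon, so this mount no longer works:

apiVersion: v1
kind: Pod
metadata:
  name: dind-agent                 # hypothetical self-hosted agent pod
spec:
  containers:
  - name: agent
    image: <<agentImage>>          # placeholder for the agent image
    volumeMounts:
    - name: docker-sock
      mountPath: /var/run/docker.sock
  volumes:
  - name: docker-sock
    hostPath:
      path: /var/run/docker.sock   # absent on containerd nodes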


 


Docker-in-Docker is a common technique for building Docker images with Azure DevOps pipelines running on self-hosted agents. With containerd, pipelines that build images this way no longer work, so other techniques must be considered. This article outlines the steps to modify such pipelines to perform image builds on containerd-enabled Kubernetes clusters.


 


Azure VM scale set agents are one option for scaling self-hosted agents outside Kubernetes. To continue running the agents on Kubernetes, we will look at two options: performing image builds outside the cluster using ACR Tasks, and using the kaniko executor image, which builds an image from a Dockerfile and pushes it to a registry.


 


Building images using ACR Tasks


 


ACR Tasks is a feature of Azure Container Registry that performs container image builds in Azure, without requiring a local Docker daemon.


 


Modify the existing pipelines, or create a new pipeline, to add an Azure CLI task running the command below.


 

az acr build --registry <<registryName>> --image <<imageName:tagName>> .

 


The command will:



  • Run in the current workspace

  • Package the code and upload it to a temporary volume attached to ACR Tasks

  • Build the container image

  • Push the container image to the registry
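
If the pipeline is defined in YAML rather than the classic editor, the Azure CLI task might look like the sketch below; the service connection name and the double-angle-bracket values are placeholders:

- task: AzureCLI@2
  inputs:
    azureSubscription: my-acr-service-connection   # assumed service connection name
    scriptType: bash
    scriptLocation: inlineScript
    workingDirectory: $(Build.SourcesDirectory)
    inlineScript: |
      az acr build --registry <<registryName>> --image <<imageName:tagName>> .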


 


The pipeline should look as illustrated below:


 


[Screenshot: srinipadala_5-1613039730259.png]


 


Though this approach is simple, it depends on ACR. The next option performs the builds in-cluster and does not require ACR Tasks.


 


Building images using Kaniko


 


Kaniko needs a build context and an executor instance to perform the build and push the image to the registry. Unlike the Docker-in-Docker scenario, kaniko builds run in a separate pod. We will use Azure Storage to exchange the context (the source code to build) between the agent and the kaniko executor. The pipeline performs the steps below.


 



  • Package the build context as a tar file

  • Upload the tar file to Azure Storage

  • Create a kaniko pod to execute the build

  • Wait for the pod to complete before continuing


The script to perform these steps is below:


 

# Package the source code from the current working directory
tar -czvf /azp/agent/_work/$(Build.BuildId).tar.gz .

# Upload the tar file to Azure Storage (replace 'codelesslab' with your storage account name)
az storage blob upload --account-name codelesslab --account-key $SKEY --container-name kaniko --file /azp/agent/_work/$(Build.BuildId).tar.gz --name $(Build.BuildId).tar.gz

# Create a manifest for the kaniko executor pod
cat > deploy.yaml <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: kaniko-$(Build.BuildId)
  namespace: kaniko
spec:
  containers:
  - name: kaniko
    image: gcr.io/kaniko-project/executor:latest
    args:
    - "--dockerfile=Dockerfile"
    - "--context=https://<<storageAccountName>>.blob.core.windows.net/<<blobContainerName>>/$(Build.BuildId).tar.gz"
    - "--destination=<<registryName>>/<<imageName>>:k$(Build.BuildId)"
    volumeMounts:
    - name: docker-config
      mountPath: /kaniko/.docker/
    env:
    # kaniko uses this access key to download the build context from Azure Storage
    - name: AZURE_STORAGE_ACCESS_KEY
      value: $SKEY
  restartPolicy: Never
  volumes:
  - name: docker-config
    configMap:
      name: docker-config
EOF
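
The pod above mounts a ConfigMap named docker-config into /kaniko/.docker/, which is where kaniko looks for registry credentials when pushing the image. The article does not show how it is created; a minimal sketch, assuming ACR admin credentials and placeholder values, could be as follows (for real credentials, a Kubernetes Secret is the better fit):

# Docker client config with the push credentials kaniko reads from /kaniko/.docker/config.json
cat > config.json <<EOF
{
  "auths": {
    "<<registryName>>": {
      "username": "<<registryUsername>>",
      "password": "<<registryPassword>>"
    }
  }
}
EOF

# Publish it as the ConfigMap referenced by the pod spec
kubectl create configmap docker-config -n kaniko --from-file=config.json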

 


The storage access key can be added as an encrypted pipeline variable. Since encrypted variables are not passed to tasks automatically, they must be mapped to an environment variable (SKEY in the script above).


 


As the build executes outside the pipeline agent, the pod's status must be monitored to decide on the next steps within the pipeline. Below is a sample bash script to monitor the pod:


 

# Monitor for success or failure, polling the pod phase once per iteration
while true; do
    PHASE=$(kubectl get pods kaniko-$(Build.BuildId) -n kaniko -o jsonpath='{..status.phase}')
    if [ "$PHASE" == "Succeeded" ] || [ "$PHASE" == "Failed" ]; then break; fi
    echo "waiting for pod" && sleep 1
done

# Exit the script with an error if the build failed
if [ "$PHASE" == "Failed" ]; then
    exit 1
fi
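
Optionally (not part of the original script), the executor's build log can be surfaced in the pipeline output and the finished pod cleaned up; run the log command before the exit on failure if the log is wanted there:

# Surface kaniko's build log in the pipeline output
kubectl logs kaniko-$(Build.BuildId) -n kaniko

# Remove the completed pod so builds do not accumulate in the namespace
kubectl delete pod kaniko-$(Build.BuildId) -n kaniko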

 


The complete pipeline should look similar to the following:


 


Task 1: [Optional] Get the kubeconfig (if not supplied through secrets)


 


[Screenshot: srinipadala_0-1613039639100.png]
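
In a YAML pipeline, this step might be an Azure CLI task fetching the cluster credentials; a sketch with placeholder names:

- task: AzureCLI@2
  inputs:
    azureSubscription: my-aks-service-connection   # assumed service connection name
    scriptType: bash
    scriptLocation: inlineScript
    inlineScript: |
      # Writes the cluster credentials into the agent's kubeconfig
      az aks get-credentials --name <<clusterName>> --resource-group <<resourceGroupName>>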


 


Task 2: [Optional] Install the latest kubectl (if not installed with the agent image)


 


[Screenshot: srinipadala_1-1613039665374.png]
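
The built-in Kubectl tool installer task covers this; in YAML it would be along these lines:

- task: KubectlInstaller@0
  inputs:
    kubectlVersion: latest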


Task 3: Package Context and Prepare YAML


Note how the encrypted pipeline variable is mapped to the task's environment variable.


 


[Screenshot: srinipadala_2-1613039673425.png]
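
In YAML, the mapping is done through the task's env section; a sketch, where storageAccessKey is an assumed name for the encrypted pipeline variable:

- task: Bash@3
  inputs:
    targetType: inline
    script: |
      # ... the packaging and deploy.yaml script shown earlier ...
  env:
    SKEY: $(storageAccessKey)   # maps the secret variable to the SKEY environment variable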


 


Task 4: Create the Executor Pod


Note: Alternatively, this task can be replaced by running kubectl apply -f deploy.yaml at the end of the previous script.


 


[Screenshot: srinipadala_3-1613039693910.png]


 


Task 5: Monitor for Status


 


[Screenshot: srinipadala_4-1613039703127.png]


 


Summary


 


These build techniques are more secure than the Docker-in-Docker approach, as no special permissions, privileges, or mounts are required to perform a container image build.


 


 


 


 


 


 


 
