

Introduction


Today there is a lot of interest in generative AI, specifically in training and inferencing large language models (e.g. OpenAI GPT-4, DALL·E 2, GitHub Copilot, and the Azure OpenAI Service). Training these large language models requires enormous floating-point performance and high interconnect network bandwidth. The Azure NDm_v4 virtual machine is an ideal choice for these demanding jobs because it has 8 A100 GPUs, each with 200 Gbps of HDR InfiniBand. Kubernetes is a popular choice for deploying and managing containerized workloads on CPU/GPU resources, and the Azure Kubernetes Service (AKS) simplifies Kubernetes cluster deployment. We show how to deploy an optimal NDm_v4 (A100) AKS cluster, making sure that all 8 GPUs and 8 InfiniBand devices on each virtual machine come up correctly and are available to deliver optimal performance. A multi-node NCCL allreduce benchmark is then run on the NDm_v4 AKS cluster to verify that it is deployed and configured correctly.


 


Procedure to deploy an NDmv4 (A100) AKS Cluster


We will deploy the AKS cluster from the Azure cloud shell using the Azure command line interface (azcli). The Azure cloud shell has azcli preinstalled; if you prefer to work from your local workstation, installation instructions for azcli can be found in the Azure CLI documentation.


 


Note: There are many other ways to deploy an AKS cluster; the Azure Portal, ARM templates, Bicep, and Terraform are also popular choices.


 


First, we need to install the aks-preview azcli extension to be able to deploy and control AKS via azcli.


az extension add --name aks-preview
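

If the extension is already installed, you can bring it up to date instead with a standard azcli command:


az extension update --name aks-preview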

 


It is also necessary to register InfiniBand support to make sure all nodes in your pool can communicate over the same InfiniBand network.


az feature register --name AKSInfinibandSupport --namespace Microsoft.ContainerService
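

Feature registration can take several minutes. You can poll the registration state and, once it shows Registered, refresh the Microsoft.ContainerService provider (a minimal sketch using standard azcli commands):


az feature show --name AKSInfinibandSupport --namespace Microsoft.ContainerService --query properties.state

az provider register --namespace Microsoft.ContainerService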

 


Create a resource group for the AKS cluster.


az group create --resource-group <resource-group-name> --location <location>

For simplicity we will use the default kubenet networking (you could also deploy AKS using Azure CNI and choose your own VNet); in the kubenet case, AKS will deploy the VNet and subnet. A system-assigned managed identity will be used for authentication, and Ubuntu is chosen for the host OS (the default AKS version deployed was 1.25.6 and the default Ubuntu host OS was Ubuntu 22.04).


az aks create -g <resource-group-name> --node-resource-group <node-resource-group-name> -n <aks-cluster-name> --enable-managed-identity --node-count 2 --generate-ssh-keys -l <location> --node-vm-size Standard_D2s_v3 --nodepool-name <system-pool-name> --os-sku Ubuntu --attach-acr <acr-name>

 


Then deploy the NDmv4 AKS pool, initially with only one NDmv4 VM; we will scale up the AKS cluster later.


 


Note: Make sure you have sufficient NDmv4 quota in your subscription/location.
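

Current usage and quota for the ND-family VM sizes can be checked with azcli; the grep filter below is only an illustration, match it against the family name shown in your output:


az vm list-usage --location <location> -o table | grep -i "ND"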


 


A specific tag (SkipGPUDriverInstall=true) needs to be set to prevent the GPU driver from being installed automatically (we will use the NVIDIA GPU and network operators to install the GPU and InfiniBand drivers instead). Some container images can be quite large, so we use a larger OS disk size (128 GB).


 


 


az aks nodepool add --resource-group <resource-group-name> --cluster-name <aks-cluster-name> --name <ndmv4-pool-name> --node-count 1 --node-vm-size Standard_ND96amsr_A100_v4 --node-osdisk-size 128 --os-sku Ubuntu --tags SkipGPUDriverInstall=true

 


Get credentials to connect and interact with the AKS Cluster.


az aks get-credentials --overwrite-existing --resource-group <resource-group-name> --name <aks-cluster-name>

 


Check that the AKS pools are ready.


kubectl get nodes


 



 


Install the NVIDIA network and GPU operators; they will be used to install specific InfiniBand and GPU drivers (in this case, OFED 5.8-1.0.1.1.2 and GPU driver 525.60.13).


 


 

#!/bin/bash

# Apply required manifests
kubectl get namespace nvidia-operator 2>/dev/null || kubectl create namespace nvidia-operator

# Install node feature discovery
helm upgrade -i --wait \
  -n nvidia-operator node-feature-discovery node-feature-discovery \
  --repo https://kubernetes-sigs.github.io/node-feature-discovery/charts \
  --set-json master.nodeSelector='{"kubernetes.azure.com/mode": "system"}' \
  --set-json worker.nodeSelector='{"kubernetes.azure.com/accelerator": "nvidia"}' \
  --set-json worker.config.sources.pci.deviceClassWhitelist='["02","03","0200","0207"]' \
  --set-json worker.config.sources.pci.deviceLabelFields='["vendor"]'

# Install the network-operator
helm upgrade -i --wait \
  -n nvidia-operator network-operator network-operator \
  --repo https://mellanox.github.io/network-operator \
  --set deployCR=true \
  --set nfd.enabled=false \
  --set ofedDriver.deploy=true \
  --set ofedDriver.version="5.8-1.0.1.1.2" \
  --set secondaryNetwork.deploy=false \
  --set sriovDevicePlugin.deploy=true \
  --set-json sriovDevicePlugin.resources='[{"name": "infiniband", "vendors": ["15b3"], "devices": ["101c"]}]' \
  --set sriovNetworkOperator.enabled=false
# If you want to enable IPoIB, change secondaryNetwork.deploy to true and add the following flags:
# --set secondaryNetwork.multus.deploy=true
# --set secondaryNetwork.cniPlugins.deploy=true
# --set secondaryNetwork.ipamPlugin.deploy=true

# Install the gpu-operator
helm upgrade -i --wait \
  -n nvidia-operator gpu-operator gpu-operator \
  --repo https://helm.ngc.nvidia.com/nvidia \
  --set nfd.enabled=false \
  --set driver.enabled=true \
  --set driver.version="525.60.13" \
  --set driver.rdma.enabled=true \
  --set toolkit.enabled=true
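

The OFED and GPU driver builds can take several minutes. Before proceeding, you can confirm that the operator pods come up healthy (a minimal sketch; the timeout value is an assumption):


kubectl -n nvidia-operator get pods

kubectl -n nvidia-operator wait --for=condition=Ready pods --all --timeout=1200s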

 


 


Verify that the InfiniBand and GPU drivers have been installed. You should see 8 InfiniBand devices and 8 GPUs per NDm_v4 VM.


kubectl describe node <ndmv4-node-name> | grep -e "nvidia.com/infiniband" -e "nvidia.com/gpu"
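

Alternatively, the allocatable GPU and InfiniBand resources can be listed for all nodes in one command (a minimal sketch; the dots in the resource names must be escaped for kubectl custom-columns):


kubectl get nodes -o custom-columns='NODE:.metadata.name,GPU:.status.allocatable.nvidia\.com/gpu,IB:.status.allocatable.nvidia\.com/infiniband'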

 


Install the Volcano Kubernetes scheduler to make it easier to submit tightly-coupled HPC/AI jobs.


kubectl apply -f https://raw.githubusercontent.com/volcano-sh/volcano/release-1.7/installer/volcano-development.yaml

 


Check that the Volcano Kubernetes scheduler was installed correctly.


kubectl get all -n volcano-system
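

You can also block until every pod in the volcano-system namespace is ready (a minimal sketch; the timeout is an assumption):


kubectl -n volcano-system wait --for=condition=Ready pods --all --timeout=300s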

 


Create NCCL collective test container


Here is the Dockerfile that was used to create the NCCL collective test container; the NVIDIA NGC PyTorch image (23.03) was used as the base container.


 


The nccl-tests.sh script builds the NCCL collective tests:


 


 

#!/bin/bash

git clone https://github.com/NVIDIA/nccl-tests.git
cd nccl-tests
make MPI=1 MPI_HOME=/usr/local/mpi

 


 


Dockerfile


ARG FROM_IMAGE_NAME=nvcr.io/nvidia/pytorch:23.03-py3

FROM ${FROM_IMAGE_NAME}

# Install build tools, InfiniBand diagnostics, SSH, and kmod in a single layer
RUN apt-get update && apt-get install -y \
    build-essential \
    infiniband-diags \
    openssh-server \
    kmod

# Build the NCCL collective tests and add the NDv4 NCCL topology file
COPY nccl-tests.sh .
RUN ./nccl-tests.sh
COPY ndv4-topo.xml .

 


Log in to your Azure container registry, where your custom container will be stored.


az acr login -n <acr-name>

 


Build your container locally on an NDmv4 VM. First, change to the directory containing your Dockerfile.


docker build -t <acr-name>.azurecr.io/<container-name>:<tag> .

 


Push your local container to your Azure container registry.


docker push <acr-name>.azurecr.io/<container-name>:<tag>
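

Alternatively, if you prefer not to build locally, ACR Tasks can build and push the image in Azure in one step (a minimal sketch; run it from the directory containing the Dockerfile):


az acr build --registry <acr-name> --image <container-name>:<tag> .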

 


Run NCCL allreduce benchmark on NDmv4 AKS Cluster


The NVIDIA NCCL collective communication tests are ideal for verifying that the NDmv4 AKS cluster is set up correctly for optimal performance. On 2 NDmv4 nodes (16 A100 GPUs), NCCL allreduce should reach ~186 GB/s.


We will use the Docker container we created in the previous section and submit the NCCL allreduce benchmark using the Volcano scheduler.


 


Scale up the NDmv4 AKS cluster to 2 NDmv4 VMs (16 A100 GPUs).


az aks nodepool scale --resource-group <resource-group-name> --cluster-name <aks-cluster-name> --name <ndmv4-pool-name> --node-count 2
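

Scaling can take a few minutes; you can watch for the second NDmv4 node to reach the Ready state (a minimal sketch, assuming the standard AKS agentpool node label):


kubectl get nodes -l agentpool=<ndmv4-pool-name> -w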

Here is the NCCL allreduce benchmark YAML script.


 


 

apiVersion: batch.volcano.sh/v1alpha1
kind: Job
metadata:
  name: nccl-allreduce-job1
spec:
  minAvailable: 3
  schedulerName: volcano
  plugins:
    ssh: []
    svc: []
  tasks:
    - replicas: 1
      name: mpimaster
      policies:
        - event: TaskCompleted
          action: CompleteJob
      template:
        spec:
          containers:
            - command:
                - /bin/bash
                - -c
                - |
                  MPI_HOST=$(cat /etc/volcano/mpiworker.host | tr "\n" ",")
                  mkdir -p /var/run/sshd; /usr/sbin/sshd
                  echo "HOSTS: $MPI_HOST"
                  mpirun --allow-run-as-root -np 16 -npernode 8 --bind-to numa --map-by ppr:8:node -hostfile /etc/volcano/mpiworker.host -x NCCL_DEBUG=info -x UCX_TLS=tcp -x NCCL_TOPO_FILE=/workspace/ndv4-topo.xml -x UCX_NET_DEVICES=eth0 -x CUDA_DEVICE_ORDER=PCI_BUS_ID -x NCCL_SOCKET_IFNAME=eth0 -mca coll_hcoll_enable 0 /workspace/nccl-tests/build/all_reduce_perf -b 8 -f 2 -g 1 -e 8G -c 1 | tee /home/re
              image: cgacr2.azurecr.io/pytorch_nccl_tests_2303:latest
              securityContext:
                capabilities:
                  add: ["IPC_LOCK"]
                privileged: true
              name: mpimaster
              ports:
                - containerPort: 22
                  name: mpijob-port
              workingDir: /workspace
              resources:
                requests:
                  cpu: 1
          restartPolicy: OnFailure
    - replicas: 2
      name: mpiworker
      template:
        spec:
          containers:
            - command:
                - /bin/bash
                - -c
                - |
                  mkdir -p /var/run/sshd; /usr/sbin/sshd -D;
              image: cgacr2.azurecr.io/pytorch_nccl_tests_2303:latest
              securityContext:
                capabilities:
                  add: ["IPC_LOCK"]
                privileged: true
              name: mpiworker
              ports:
                - containerPort: 22
                  name: mpijob-port
              workingDir: /workspace
              resources:
                requests:
                  nvidia.com/gpu: 8
                  nvidia.com/infiniband: 8
                limits:
                  nvidia.com/gpu: 8
                  nvidia.com/infiniband: 8
              volumeMounts:
              - mountPath: /dev/shm
                name: shm
          restartPolicy: OnFailure
          terminationGracePeriodSeconds: 0
          volumes:
          - name: shm
            emptyDir:
              medium: Memory
              sizeLimit: 8Gi
---

 


 


Note: Modify the ACR (cgacr2) and the container name (pytorch_nccl_tests_2303:latest) in the above script.
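

Save the manifest (e.g. as nccl-allreduce-job1.yaml, a filename assumption) and submit it to the cluster:


kubectl apply -f nccl-allreduce-job1.yaml

kubectl get pods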


 


Check the output of the mpimaster pod.


kubectl logs <mpimaster-pod-name>
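

The mpimaster pod name can be found by listing the job's pods (a hedged sketch, assuming Volcano's volcano.sh/job-name pod label):


kubectl get pods -l volcano.sh/job-name=nccl-allreduce-job1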

 


You should see ~186 GB/s for large message sizes.


 


           8             2     float     sum      -1    38.15    0.00    0.00      0    31.44    0.00    0.00      0
          16             4     float     sum      -1    33.06    0.00    0.00      0    31.67    0.00    0.00      0
          32             8     float     sum      -1    31.27    0.00    0.00      0    31.14    0.00    0.00      0
          64            16     float     sum      -1    31.91    0.00    0.00      0    31.42    0.00    0.00      0
         128            32     float     sum      -1    32.12    0.00    0.01      0    31.64    0.00    0.01      0
         256            64     float     sum      -1    33.79    0.01    0.01      0    33.14    0.01    0.01      0
         512           128     float     sum      -1    35.12    0.01    0.03      0    34.55    0.01    0.03      0
        1024           256     float     sum      -1    35.38    0.03    0.05      0    34.99    0.03    0.05      0
        2048           512     float     sum      -1    38.72    0.05    0.10      0    37.35    0.05    0.10      0
        4096          1024     float     sum      -1    39.20    0.10    0.20      0    38.94    0.11    0.20      0
        8192          2048     float     sum      -1    46.89    0.17    0.33      0    43.53    0.19    0.35      0
       16384          4096     float     sum      -1    50.02    0.33    0.61      0    49.28    0.33    0.62      0
       32768          8192     float     sum      -1    59.52    0.55    1.03      0    54.29    0.60    1.13      0
       65536         16384     float     sum      -1    71.60    0.92    1.72      0    68.39    0.96    1.80      0
      131072         32768     float     sum      -1    79.46    1.65    3.09      0    76.06    1.72    3.23      0
      262144         65536     float     sum      -1    80.70    3.25    6.09      0    79.49    3.30    6.18      0
      524288        131072     float     sum      -1    89.90    5.83   10.94      0    90.97    5.76   10.81      0
     1048576        262144     float     sum      -1    104.8   10.00   18.75      0    105.6    9.93   18.62      0
     2097152        524288     float     sum      -1    140.0   14.98   28.08      0    133.6   15.70   29.44      0
     4194304       1048576     float     sum      -1    150.6   27.84   52.21      0    151.4   27.70   51.93      0
     8388608       2097152     float     sum      -1    206.6   40.61   76.14      0    204.0   41.11   77.09      0
    16777216       4194304     float     sum      -1    389.0   43.13   80.86      0    386.2   43.45   81.46      0
    33554432       8388608     float     sum      -1    617.4   54.35  101.90      0    608.5   55.14  103.39      0
    67108864      16777216     float     sum      -1    949.0   70.71  132.59      0    939.4   71.44  133.95      0
   134217728      33554432     float     sum      -1   1687.9   79.52  149.09      0   1647.8   81.45  152.72      0
   268435456      67108864     float     sum      -1   3019.6   88.90  166.68      0   3026.4   88.70  166.31      0
   536870912     134217728     float     sum      -1   5701.8   94.16  176.55      0   5745.8   93.44  175.20      0
  1073741824     268435456     float     sum      -1    11029   97.36  182.54      0    11006   97.56  182.92      0
  2147483648     536870912     float     sum      -1    21588   99.48  186.52      0    21668   99.11  185.83      0
  4294967296    1073741824     float     sum      -1    42935  100.03  187.56      0    42949  100.00  187.50      0
  8589934592    2147483648     float     sum      -1    85442  100.54  188.50      0    85507  100.46  188.36      0
# Out of bounds values : 0 OK
# Avg bus bandwidth    : 56.6365 

 


Conclusion


Correct deployment of NDmv4 Kubernetes pools using the Azure Kubernetes Service is critical to getting the expected performance. NCCL collective tests (e.g. allreduce) are excellent benchmarks to verify that the cluster is set up correctly and achieving the expected high performance of NDmv4 VMs.


 
