This article is contributed. See the original author and article here.

When we deploy SQL Server on AKS, we may sometimes find that SQL Server high availability (HA) does not work as expected.


For example, when we deploy AKS using our default sample with 2 nodes:



az aks create \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --node-count 2 \
    --attach-acr <acrName>



There should be 2 instances deployed in the AKS virtual machine scale set:
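As a quick check (a hypothetical command, assuming access to the cluster created above), we can confirm that both worker nodes are registered:

```shell
# List the worker nodes backing the cluster; with --node-count 2 we
# expect two entries from the same virtual machine scale set,
# e.g. aks-nodepool1-...-vmss000000 and aks-nodepool1-...-vmss000001.
kubectl get nodes -o wide
```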

[Screenshot: the two VM instances in the AKS virtual machine scale set]


According to the SQL Server documentation:


In the following diagram, the node hosting the mssql-server container has failed. The orchestrator starts the new pod on a different node, and mssql-server reconnects to the same persistent storage. The service connects to the re-created mssql-server.


[Diagram: the orchestrator re-creates the mssql-server pod on a different node, and the pod reconnects to the same persistent storage]
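The scenario above assumes a single-replica deployment backed by a persistent volume claim, along the lines of the SQL Server on AKS tutorial. The following is a trimmed sketch of such a manifest, not the exact manifest used here; the names `mssql-data`, `mssqldb`, and the `mssql` secret match what appears in the pod description later, while the image tag is an assumption:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mssql-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mssql
  template:
    metadata:
      labels:
        app: mssql
    spec:
      containers:
      - name: mssql
        image: mcr.microsoft.com/mssql/server:2019-latest  # assumed tag
        ports:
        - containerPort: 1433
        env:
        - name: MSSQL_PID
          value: "Developer"
        - name: SA_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mssql
              key: SA_PASSWORD
        volumeMounts:
        # The SQL Server data directory lives on the persistent volume,
        # so a re-created pod can reattach the same databases.
        - name: mssqldb
          mountPath: /var/opt/mssql
      volumes:
      - name: mssqldb
        persistentVolumeClaim:
          claimName: mssql-data
```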


However, this does not always hold true when we manually stop an AKS node instance from the Azure portal.


Before we stop any node, the status of the pod is Running.


[Screenshot: kubectl output showing the mssql pod in Running status]


If we stop node 0, nothing happens, as the SQL Server pod resides on node 1.


[Screenshot: node instance 0 stopped in the Azure portal]


The status of the SQL Server pod remains Running.

[Screenshot: the SQL Server pod still in Running status]


However, if we stop node 1 instead of node 0, the issue appears.

[Screenshot: node instance 1 stopped in the Azure portal]

We may see that the original SQL Server pod remains in the Terminating state, while the new pod is stuck in ContainerCreating.


$ kubectl describe pod mssql-deployment-569f96888d-bkgvf
Name:           mssql-deployment-569f96888d-bkgvf
Namespace:      default
Priority:       0
Node:           aks-nodepool1-26283775-vmss000000/
Start Time:     Thu, 17 Dec 2020 16:29:10 +0800
Labels:         app=mssql
Annotations:    <none>
Status:         Pending
IPs:            <none>
Controlled By:  ReplicaSet/mssql-deployment-569f96888d
Containers:
  mssql:
    Container ID:
    Image ID:
    Port:           1433/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Environment:
      MSSQL_PID:    Developer
      SA_PASSWORD:  <set to the key 'SA_PASSWORD' in secret 'mssql'>  Optional: false
    Mounts:
      /var/opt/mssql from mssqldb (rw)
      /var/run/secrets/ from default-token-jh9rf (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  mssqldb:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  mssql-data
    ReadOnly:   false
  default-token-jh9rf:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-jh9rf
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason              Age                  From                                        Message
  ----     ------              ----                 ----                                        -------
  Normal   Scheduled           <unknown>            default-scheduler                           Successfully assigned default/mssql-deployment-569f96888d-bkgvf to aks-nodepool1-26283775-vmss000000
  Warning  FailedAttachVolume  18m                  attachdetach-controller                     Multi-Attach error for volume "pvc-6e3d4aac-6449-4c9d-86d0-c2488583ec5c" Volume is already used by pod(s) mssql-deployment-569f96888d-d8kz7
  Warning  FailedMount         3m16s (x4 over 14m)  kubelet, aks-nodepool1-26283775-vmss000000  Unable to attach or mount volumes: unmounted volumes=[mssqldb], unattached volumes=[mssqldb default-token-jh9rf]: timed out waiting for the condition
  Warning  FailedMount         62s (x4 over 16m)    kubelet, aks-nodepool1-26283775-vmss000000  Unable to attach or mount volumes: unmounted volumes=[mssqldb], unattached volumes=[default-token-jh9rf mssqldb]: timed out waiting for the condition


This issue, caused by a Multi-Attach error, is expected behavior given the current AKS internal design: the persistent volume is backed by an Azure managed disk, which uses the ReadWriteOnce access mode and can only be attached to one node at a time. While the stopped node still holds the disk attachment, the replacement pod on the other node cannot mount the volume.
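To see that the claim is indeed single-attach, we can inspect its access modes (a hypothetical command against this cluster; Azure disk-backed claims typically report ReadWriteOnce):

```shell
# Print the access modes of the claim used by the SQL Server pod.
# An Azure disk-backed PVC is expected to show ["ReadWriteOnce"],
# meaning the underlying disk attaches to only one node at a time.
kubectl get pvc mssql-data -o jsonpath='{.spec.accessModes}'
```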


If you restart the node instance that was shut down, the disk attachment can be released, and the issue resolves.
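Restarting the stopped instance can be done from the portal or with the Azure CLI. The following is a sketch; the node resource group name (the AKS-managed `MC_*` group), the scale-set name, and the instance ID are placeholders that must match your own node pool:

```shell
# Start the stopped VMSS instance backing node 1.
# <region> is a placeholder for your cluster's region.
az vmss start \
    --resource-group MC_myResourceGroup_myAKSCluster_<region> \
    --name aks-nodepool1-26283775-vmss \
    --instance-ids 1
```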

Brought to you by Dr. Ware, Microsoft Office 365 Silver Partner, Charleston SC.
