Recommendations and Best Practices When Deploying SQL Server AlwaysOn Availability Groups in Microsoft Azure (IaaS)
First published on MSDN on Aug 29, 2014. Microsoft Azure virtual machines (VMs) with SQL Server can help lower the cost of a high availability and disaster recovery (HADR) database solution.

Accessing SSRS URL hosted on Azure VM over the Internet
First published on MSDN on Apr 10, 2017. There might be a requirement where you would like the SQL Server Reporting Services URL hosted in your on-premises environment to be accessed over the Internet (outside of your domain).

Unable to connect to SQL Server on Azure VM due to an extra NSG applied to subnet
First published on MSDN on Sep 18, 2016. If you need to open up your SQL Server on an Azure VM to public internet access, look no further than the document Connect to a SQL Server Virtual Machine on Azure (Resource Manager).

Troubleshooting Internal Load Balancer Listener Connectivity in Azure
First published on MSDN on Feb 22, 2017. Problem: unable to connect to an Azure availability group listener. Creating an availability group to make your application highly available when running on Azure virtual machines (IaaS) is very popular.

Automating Azure Analysis Services Processing using an Azure Automation Account
First published on MSDN on Sep 01, 2017. Analysis Services has been progressing day by day with exciting new features, and users have asked for a way to automate Azure Analysis Services processing.

SQL pod may get stuck in "ContainerCreating" status when you stop the node instance on AKS
When we deploy SQL Server on AKS, we may sometimes find that SQL HA does not work as expected. For example, suppose we deploy AKS with two nodes using the default sample from https://docs.microsoft.com/en-us/azure/aks/tutorial-kubernetes-deploy-cluster#create-a-kubernetes-cluster:

```
az aks create \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --node-count 2 \
    --generate-ssh-keys \
    --attach-acr <acrName>
```

There should be two instances deployed in the AKS virtual machine scale set.

According to the SQL Server documentation: "In the following diagram, the node hosting the mssql-server container has failed. The orchestrator starts the new pod on a different node, and mssql-server reconnects to the same persistent storage. The service connects to the re-created mssql-server."

However, this does not always hold when we manually stop an AKS node instance from the portal. Before we stop either node, the pod status is Running. If we stop node 0, nothing happens, because SQL resides on node 1, and the SQL pod status remains Running. If we instead stop node 1, the issue appears: the original SQL pod stays stuck in Terminating, while the new SQL pod gets stuck in ContainerCreating.

```
$ kubectl describe pod mssql-deployment-569f96888d-bkgvf
Name:           mssql-deployment-569f96888d-bkgvf
Namespace:      default
Priority:       0
Node:           aks-nodepool1-26283775-vmss000000/10.240.0.4
Start Time:     Thu, 17 Dec 2020 16:29:10 +0800
Labels:         app=mssql
                pod-template-hash=569f96888d
Annotations:    <none>
Status:         Pending
IP:
IPs:            <none>
Controlled By:  ReplicaSet/mssql-deployment-569f96888d
Containers:
  mssql:
    Container ID:
    Image:          mcr.microsoft.com/mssql/server:2017-latest
    Image ID:
    Port:           1433/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Environment:
      MSSQL_PID:    Developer
      ACCEPT_EULA:  Y
      SA_PASSWORD:  <set to the key 'SA_PASSWORD' in secret 'mssql'>  Optional: false
    Mounts:
      /var/opt/mssql from mssqldb (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-jh9rf (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  mssqldb:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  mssql-data
    ReadOnly:   false
  default-token-jh9rf:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-jh9rf
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason              Age                  From                     Message
  ----     ------              ----                 ----                     -------
  Normal   Scheduled           <unknown>            default-scheduler        Successfully assigned default/mssql-deployment-569f96888d-bkgvf to aks-nodepool1-26283775-vmss000000
  Warning  FailedAttachVolume  18m                  attachdetach-controller  Multi-Attach error for volume "pvc-6e3d4aac-6449-4c9d-86d0-c2488583ec5c" Volume is already used by pod(s) mssql-deployment-569f96888d-d8kz7
  Warning  FailedMount         3m16s (x4 over 14m)  kubelet, aks-nodepool1-26283775-vmss000000  Unable to attach or mount volumes: unmounted volumes=[mssqldb], unattached volumes=[mssqldb default-token-jh9rf]: timed out waiting for the condition
  Warning  FailedMount         62s (x4 over 16m)    kubelet, aks-nodepool1-26283775-vmss000000  Unable to attach or mount volumes: unmounted volumes=[mssqldb], unattached volumes=[default-token-jh9rf mssqldb]: timed out waiting for the condition
```

This multi-attach error is expected with the current AKS internal design: because the node was stopped rather than deleted, the Azure disk backing the persistent volume remains attached to it, so it cannot be attached to the node where the new pod is scheduled.
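To see for yourself that the disk is still held by the stopped node, you can inspect the cluster's VolumeAttachment objects. A minimal sketch, assuming the sample deployment above (PVC named mssql-data, pods labeled app=mssql):

```
# List volume attachments; the PV backing mssql-data should still show
# the stopped node (e.g. aks-nodepool1-26283775-vmss000001) with ATTACHED=true.
kubectl get volumeattachment

# Map the claim to its persistent volume name to match it against the list above.
kubectl get pvc mssql-data -o jsonpath='{.spec.volumeName}'

# The new pod's events will show the Multi-Attach warning seen earlier.
kubectl describe pod -l app=mssql
```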
If you restart the node instance that was shut down, the disk can be detached and reattached, and the issue will be resolved.
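As a sketch, the restart can also be done from the Azure CLI instead of the portal. The resource group, cluster name, scale-set name, and instance ID below follow the sample above and are illustrative only:

```
# Find the node resource group that holds the AKS scale set.
NODE_RG=$(az aks show --resource-group myResourceGroup --name myAKSCluster \
    --query nodeResourceGroup -o tsv)

# Start the stopped instance (instance ID 1 here corresponds to node 1).
az vmss start --resource-group "$NODE_RG" \
    --name aks-nodepool1-26283775-vmss --instance-ids 1

# Watch the new SQL pod finish creating once the disk reattaches.
kubectl get pods -w
```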