Automating Azure Analysis Service Processing using Azure Automation Account
First published on MSDN on Sep 01, 2017. Analysis Services has been progressing day by day with exciting new features, and users have asked for a way to automate Azure Analysis Services processing.

Accessing SSRS URL hosted on Azure VM over the Internet
First published on MSDN on Apr 10, 2017. You may have a requirement to make the SQL Server Reporting Services URL hosted in your on-premises environment accessible over the Internet (outside of your domain).

Tips & Tricks on ‘cloning’ Azure SQL virtual machines from captured images
First published on MSDN on Jul 06, 2016. While we have documentation on how to create a VM from a captured image under “How to capture a Windows virtual machine in the Resource Manager deployment model”, it doesn’t address some specifics of SQL Server.

Using SQL Server in Microsoft Azure Virtual Machine? Then you need to read this…
First published on MSDN on Jun 12, 2014. Over the past few months we noticed some of our customers struggling to optimize performance when running SQL Server in a Microsoft Azure Virtual Machine, specifically around I/O performance.

SQL pod may get stuck in "ContainerCreating" status when you stop the node instance on AKS
When we deploy SQL Server on AKS, we may sometimes find that SQL Server high availability does not work as expected. For example, suppose we deploy AKS using our default sample with 2 nodes (https://docs.microsoft.com/en-us/azure/aks/tutorial-kubernetes-deploy-cluster#create-a-kubernetes-cluster):

```shell
az aks create \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --node-count 2 \
  --generate-ssh-keys \
  --attach-acr <acrName>
```

There should be 2 instances deployed in the AKS virtual machine scale set.

According to the SQL Server documentation: "In the following diagram, the node hosting the mssql-server container has failed. The orchestrator starts the new pod on a different node, and mssql-server reconnects to the same persistent storage. The service connects to the re-created mssql-server."

However, this is not always true when we manually stop an AKS node instance from the portal. Before we stop any node, the status of the pod is Running. If we stop node 0, nothing happens, because SQL Server resides on node 1; the status of the SQL pod remains Running. However, if we stop node 1 instead of node 0, the issue appears: the original SQL pod remains in Terminating status, while the new SQL pod gets stuck in ContainerCreating.
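Before stopping either node, it can help to confirm which instance is actually hosting the SQL pod. A quick check, assuming the sample deployment's `app=mssql` label (an assumption from the SQL-on-AKS tutorial; adjust the label to match your manifest):

```shell
# Show which node each SQL pod is scheduled on (-o wide adds the NODE column)
kubectl get pods -l app=mssql -o wide

# Inspect the pod that is stuck to see why the container is not starting
kubectl describe pod <stuck-pod-name>
```

The `describe` output below is from exactly such a stuck pod.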
```
$ kubectl describe pod mssql-deployment-569f96888d-bkgvf
Name:           mssql-deployment-569f96888d-bkgvf
Namespace:      default
Priority:       0
Node:           aks-nodepool1-26283775-vmss000000/10.240.0.4
Start Time:     Thu, 17 Dec 2020 16:29:10 +0800
Labels:         app=mssql
                pod-template-hash=569f96888d
Annotations:    <none>
Status:         Pending
IP:
IPs:            <none>
Controlled By:  ReplicaSet/mssql-deployment-569f96888d
Containers:
  mssql:
    Container ID:
    Image:          mcr.microsoft.com/mssql/server:2017-latest
    Image ID:
    Port:           1433/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Environment:
      MSSQL_PID:    Developer
      ACCEPT_EULA:  Y
      SA_PASSWORD:  <set to the key 'SA_PASSWORD' in secret 'mssql'>  Optional: false
    Mounts:
      /var/opt/mssql from mssqldb (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-jh9rf (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  mssqldb:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  mssql-data
    ReadOnly:   false
  default-token-jh9rf:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-jh9rf
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason              Age                  From                     Message
  ----     ------              ----                 ----                     -------
  Normal   Scheduled           <unknown>            default-scheduler        Successfully assigned default/mssql-deployment-569f96888d-bkgvf to aks-nodepool1-26283775-vmss000000
  Warning  FailedAttachVolume  18m                  attachdetach-controller  Multi-Attach error for volume "pvc-6e3d4aac-6449-4c9d-86d0-c2488583ec5c" Volume is already used by pod(s) mssql-deployment-569f96888d-d8kz7
  Warning  FailedMount         3m16s (x4 over 14m)  kubelet, aks-nodepool1-26283775-vmss000000  Unable to attach or mount volumes: unmounted volumes=[mssqldb], unattached volumes=[mssqldb default-token-jh9rf]: timed out waiting for the condition
  Warning  FailedMount         62s (x4 over 16m)    kubelet, aks-nodepool1-26283775-vmss000000  Unable to attach or mount volumes: unmounted volumes=[mssqldb], unattached volumes=[default-token-jh9rf mssqldb]: timed out waiting for the condition
```

This issue, caused by a Multi-Attach error, is expected behavior under the current AKS internal design: the persistent volume is still attached to the stopped node, so it cannot be attached to the new one. If you restart the node instance that was shut down, the issue will be resolved.
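The recovery can be sketched as follows. This is a minimal sketch, assuming the node-resource-group and scale-set names from the example above; substitute the values for your own cluster, which you can find in the portal or with `az vmss list`:

```shell
# Restart the scale-set instance that was stopped; once the node is back,
# the volume can be detached cleanly and the new pod will start.
az vmss start \
  --resource-group MC_myResourceGroup_myAKSCluster_eastus \
  --name aks-nodepool1-26283775-vmss \
  --instance-ids 1

# Watch the SQL pods until the replacement leaves ContainerCreating
kubectl get pods -l app=mssql -w
```

If the stopped node cannot be brought back, another option is to force-delete the pod stuck in Terminating (`kubectl delete pod <name> --grace-period=0 --force`) so that the attach/detach controller can release the disk, at the usual risk a forced deletion carries.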