SQLServerAlwaysOn
9 Topics

HPE SGLX - The new Azure VM extension for SQL Server on Linux
Overview

Mission-critical SQL Server instances need high availability and disaster recovery features to ensure business continuity. SQL Server on Linux supports HPE Serviceguard as one of its supported clustering solutions; to learn more, please refer to HPE Serviceguard for Linux. We have official documentation that provides detailed steps on how you can configure SQL Server Always On availability groups with HPE Serviceguard for Linux.

The purpose of this blog is to introduce the HPE Serviceguard extension available in the Azure Marketplace. This extension allows users to easily create Azure Linux-based VMs with SQL Server and HPE Serviceguard pre-installed, so customers can quickly configure SQL Server high availability (HA) solutions. The Serviceguard Azure VM marketplace extension is available in all Azure regions. Let's get started!

Create an Azure virtual machine using the Azure portal:

1. Log in to the Azure portal – portal.azure.com.
2. Create a resource group under your subscription, or choose an existing resource group.
3. Choose a SQL Server on Linux marketplace image (RHEL or SUSE) and configure the VM's disks, virtual network, and so on.
4. On the Advanced tab of VM creation, click "Select an extension to install."
5. Search for HPE Serviceguard for Linux, click "Load more," and select the extension.
6. Under "Select Serviceguard Add-on to install," select "Microsoft SQL Server on Linux" if the machine is a non-quorum server node, or select "Centralized Serviceguard Management and Arbitration" if you need the Serviceguard quorum server and SGMGR+ (UI) to be configured.
7. Enter the Serviceguard administrative user "sgmgr" password and confirm it.
8. Proceed to create the VM after entering the SGLX extension details.

After you have configured the other two VMs using the steps outlined above, you can follow this documentation, starting from the "Create HPE Serviceguard cluster" section, to configure SQL Server Always On availability groups for SQL Server on Linux.
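If you prefer scripting VM setup over the portal, the same extension can in principle be located and installed from the Azure CLI. The snippet below is a minimal sketch of my own, not part of the original walkthrough: the publisher filter and the placeholder publisher/extension names are assumptions, so verify the actual values returned by the list command before installing.

# List marketplace VM extension images in a region, filtering by publisher
# (the publisher string "hpe" is an assumption -- check the real output)
az vm extension image list --location eastus --publisher hpe --output table

# Install the extension on an existing VM once the real publisher/name are
# confirmed from the listing above (the values here are placeholders)
az vm extension set \
  --resource-group myResourceGroup \
  --vm-name mySqlLinuxVm \
  --publisher <publisher-from-listing> \
  --name <extension-name-from-listing>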
Announcing General Availability of Azure Portal Experience to Deploy Multi-Subnet Availability Group

Today, we are excited to announce general availability (GA) of the end-to-end experience to deploy a multi-subnet Availability Group (AG) for SQL Server on Azure Virtual Machines.
Lift and Shift Always On SQL Server Failover Cluster Instance (SQL FCI) to Azure VMs

Today, we are announcing two new features enabling lift and shift of Always On SQL Server Failover Cluster Instances (SQL FCI) from on-premises to Azure Virtual Machines: Distributed Network Name (DNN) for SQL FCI and Azure Shared Disks for SQL FCI.
Configure SQL Server AG (Read-Scale) for SQL Containers deployed on Kubernetes using Helm

If you are trying to set up an Always On availability group between SQL Server instances deployed as containers on a Kubernetes platform, I hope this blog provides the reference you need to successfully set up the environment.

Target: By the end of this blog, we will have three SQL Server instances deployed on a Kubernetes (aka k8s) cluster, with an Always On availability group configured among the three instances in read-scale mode. We will also have the READ_WRITE_ROUTING_URL set up to provide read/write connection redirection.

References: Refer to Use read-scale with availability groups - SQL Server Always On | Microsoft Docs to read more about read-scale mode. To prepare your machine to run Helm charts, please refer to this blog, where I describe how to set up your environment, including AKS and preparing your Windows client machine with Helm and other tools, to deploy SQL Server instances on AKS (Azure Kubernetes Service).

Environment layout:

1) To set this environment up, I am using Azure Kubernetes Service (AKS) as my Kubernetes platform.
2) I will deploy three SQL Server container instances using a Helm chart as a StatefulSet; you can also deploy them as a Deployment instead.
3) I will use T-SQL scripts to set up and configure the Always On availability group.

Let's get the engine started!

Step 1: Using Helm, deploy three instances of SQL Server on AKS with Always On enabled, and create external services of type LoadBalancer to access the deployed SQL Servers.

Download the helm chart and all its files to your Windows client, switch to the directory where you downloaded it, and, after modifying the chart to match your requirements and customization, deploy SQL Server using the command shown below. You can change the release name ("mssql") to anything you like; note the "." after the release name, which tells Helm to install the chart from the current directory.

helm install mssql . --set ACCEPT_EULA.value=Y --set MSSQL_PID.value=Developer

Within a few minutes, you should see the pods coming up. The number of pods started depends on the "replicas" value set in the values.yaml file; if you use it as is, three pods should start, since the replicas value is set to three. You now have three SQL Server instances, each using its own PVC, up and running.

We also need a way to access these SQL Server instances from outside the Kubernetes cluster. Since I am using AKS as my Kubernetes cluster, we create three services, one for each SQL Server pod. The YAML file for the services is shared with the helm chart under the folder "external services", and the file name is "ex_service.yaml". If you are using the sample helm chart, you can create the services using the command shown below:

kubectl apply -f "D:\helm-charts\sql-statefull-deploy\external services\ex_service.yaml"

Apart from the three external services, the pods also need to be able to talk to each other on port 5022 (the default port used by the AG endpoints on all replicas), so we create one ClusterIP service for each pod. The YAML file for this is also available in the sample helm chart under the folder "external services", and the file name is "ag_endpoint.yaml".
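Before moving on, it helps to confirm that everything Step 1 created is actually up. This quick check is my own addition, not part of the original walkthrough; the names assume the sample chart's defaults (release "mssql", chart "sql-statefull-deploy"), so adjust them if you renamed anything.

# Confirm the StatefulSet reports 3/3 ready replicas (name is an assumption
# based on the sample chart's release and chart names)
kubectl get statefulset mssql-sql-statefull-deploy

# The three pods, their PVCs, and the LoadBalancer/ClusterIP services
kubectl get pods
kubectl get pvc
kubectl get svc

# Wait until each external service shows an EXTERNAL-IP instead of <pending>
kubectl get svc --watch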
If you have not made any changes, you can create the services using the command:

kubectl apply -f "D:\helm-charts\sql-statefull-deploy\external services\ag_endpoint.yaml"

If all the above steps are followed, you should have the corresponding pods, PVCs, and services in the Kubernetes cluster.

Note: On our cluster, we already have a secret created to store the sa password, using the command below; the same sa password is used by all three SQL Server instances. It is always recommended to change the sa password after the SQL container deployment, so that the same sa password is not shared across all three instances.

kubectl create secret generic mssql --from-literal=SA_PASSWORD="MyC0m9l&xP@ssw0rd"

Step 2: Create certificates on the primary and secondary replicas, followed by the creation of endpoints on all replicas.

Now it's time to create the certificate and endpoints on all the replicas. Use the external IP address to connect to the primary SQL Server instance and run the T-SQL below to create the certificate and endpoint:

-- In the context of the master database, create a master key
USE master
GO
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<MyComplexPassword>';

-- Still in the master context, create a certificate that the endpoint will
-- use for authentication, then back up the certificate so it can be copied
-- to all the other replicas
CREATE CERTIFICATE dbm_certificate WITH SUBJECT = 'dbm';
BACKUP CERTIFICATE dbm_certificate
TO FILE = '/var/opt/mssql/data/dbm_certificate.cer'
WITH PRIVATE KEY (
    FILE = '/var/opt/mssql/data/dbm_certificate.pvk',
    ENCRYPTION BY PASSWORD = '<MyComplexPassword>'
);

-- Now create the endpoint, authenticating with the certificate created above
CREATE ENDPOINT [Hadr_endpoint]
    AS TCP (LISTENER_PORT = 5022)
    FOR DATABASE_MIRRORING (
        ROLE = ALL,
        AUTHENTICATION = CERTIFICATE dbm_certificate,
        ENCRYPTION = REQUIRED ALGORITHM AES
    );
ALTER ENDPOINT [Hadr_endpoint] STATE = STARTED;

On the primary SQL Server instance pod, you should now have the dbm_certificate.pvk and dbm_certificate.cer files at /var/opt/mssql/data. We need to copy these files to the other pods; you can use kubectl cp to copy them from the primary pod to your local client, and then from the local client to the secondary pods.
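If you are scripting this rather than using a GUI client, the primary's setup script can be run with sqlcmd from your workstation. This is my own hedged sketch: the external IP is the example primary IP used later in this post, the sa password comes from the Kubernetes secret above, and the cert-setup.sql file name is hypothetical.

# Save the T-SQL above as cert-setup.sql (hypothetical name), then run it
# against the primary replica through its LoadBalancer external IP
sqlcmd -S 104.211.231.206 -U sa -P 'MyC0m9l&xP@ssw0rd' -i cert-setup.sql

# Quick check that the certificate and endpoint exist and the endpoint is started
sqlcmd -S 104.211.231.206 -U sa -P 'MyC0m9l&xP@ssw0rd' \
  -Q "SELECT name FROM sys.certificates WHERE name = 'dbm_certificate'; SELECT name, state_desc, port FROM sys.tcp_endpoints WHERE name = 'Hadr_endpoint';"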
Sample commands are shown below.

# On the local machine, create a "certificate" folder first, then run the
# commands below to copy the files from the primary pod to the local machine
kubectl cp mssql-sql-statefull-deploy-0:/var/opt/mssql/data/dbm_certificate.pvk "certificate\dbm_certificate.pvk"
kubectl cp mssql-sql-statefull-deploy-0:/var/opt/mssql/data/dbm_certificate.cer "certificate\dbm_certificate.cer"

# Now copy the files from the local machine to the secondary pods
kubectl cp "certificate\dbm_certificate.cer" mssql-sql-statefull-deploy-1:/var/opt/mssql/data/dbm_certificate.cer
kubectl cp "certificate\dbm_certificate.pvk" mssql-sql-statefull-deploy-1:/var/opt/mssql/data/dbm_certificate.pvk
kubectl cp "certificate\dbm_certificate.cer" mssql-sql-statefull-deploy-2:/var/opt/mssql/data/dbm_certificate.cer
kubectl cp "certificate\dbm_certificate.pvk" mssql-sql-statefull-deploy-2:/var/opt/mssql/data/dbm_certificate.pvk

After this, the files should be available on every pod.

Now create the certificates and endpoints on the secondary replica pods, by connecting to each secondary replica and running the T-SQL below:

-- Run the commands below on secondaries 1 and 2:
-- mssql-sql-statefull-deploy-1 and mssql-sql-statefull-deploy-2
-- Once the .cer and .pvk files are copied, create the certificate from them
-- on each secondary and create the endpoint
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<MyComplexPassword>';
CREATE CERTIFICATE dbm_certificate
    FROM FILE = '/var/opt/mssql/data/dbm_certificate.cer'
    WITH PRIVATE KEY (
        FILE = '/var/opt/mssql/data/dbm_certificate.pvk',
        -- must match the password used in the BACKUP CERTIFICATE step
        DECRYPTION BY PASSWORD = '<MyComplexPassword>'
    );
CREATE ENDPOINT [Hadr_endpoint]
    AS TCP (LISTENER_PORT = 5022)
    FOR DATABASE_MIRRORING (
        ROLE = ALL,
        AUTHENTICATION = CERTIFICATE dbm_certificate,
        ENCRYPTION = REQUIRED ALGORITHM AES
    );
ALTER ENDPOINT [Hadr_endpoint] STATE = STARTED;

Step 3: Create the AG on the primary replica, then join the secondary replicas, using T-SQL.

On the primary replica, run the command below to create the AG. It has READ_ONLY_ROUTING_LIST configured, and also has READ_WRITE_ROUTING_URL configured to redirect connections to the primary regardless of which instance you connect to, provided you pass the name of the database you want to connect to.
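As a sanity check before creating the AG, you can confirm that the mirroring endpoint is started on every replica. The loop below is my own sketch, reusing the example external IPs and sa password from this walkthrough; swap in your own values.

# Hedged sketch: verify Hadr_endpoint reports STARTED on all three replicas
# (the IPs are the example external IPs used throughout this post)
for ip in 104.211.231.206 104.211.203.78 104.211.203.159; do
  echo "== $ip =="
  sqlcmd -S "$ip" -U sa -P 'MyC0m9l&xP@ssw0rd' \
    -Q "SELECT name, state_desc FROM sys.database_mirroring_endpoints;"
done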
-- Run the T-SQL below on the primary SQL Server pod
CREATE AVAILABILITY GROUP MyAg
WITH ( CLUSTER_TYPE = NONE )
FOR DATABASE test
REPLICA ON
N'mssql-sql-statefull-deploy-0' WITH (
    ENDPOINT_URL = 'TCP://mssql-mirror-0:5022',
    AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
    FAILOVER_MODE = MANUAL,
    SEEDING_MODE = AUTOMATIC,
    SECONDARY_ROLE (ALLOW_CONNECTIONS = ALL,
        READ_ONLY_ROUTING_URL = 'TCP://104.211.231.206:1433' ),
    PRIMARY_ROLE (ALLOW_CONNECTIONS = READ_WRITE,
        READ_ONLY_ROUTING_LIST = ('mssql-sql-statefull-deploy-1','mssql-sql-statefull-deploy-2'),
        READ_WRITE_ROUTING_URL = 'TCP://104.211.231.206:1433' ),
    SESSION_TIMEOUT = 10
),
N'mssql-sql-statefull-deploy-1' WITH (
    ENDPOINT_URL = 'TCP://mssql-mirror-1:5022',
    AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
    FAILOVER_MODE = MANUAL,
    SEEDING_MODE = AUTOMATIC,
    SECONDARY_ROLE (ALLOW_CONNECTIONS = ALL,
        READ_ONLY_ROUTING_URL = 'TCP://104.211.203.78:1433' ),
    PRIMARY_ROLE (ALLOW_CONNECTIONS = READ_WRITE,
        READ_ONLY_ROUTING_LIST = ('mssql-sql-statefull-deploy-0','mssql-sql-statefull-deploy-2'),
        READ_WRITE_ROUTING_URL = 'TCP://104.211.203.78:1433' ),
    SESSION_TIMEOUT = 10
),
N'mssql-sql-statefull-deploy-2' WITH (
    ENDPOINT_URL = 'TCP://mssql-mirror-2:5022',
    AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
    FAILOVER_MODE = MANUAL,
    SEEDING_MODE = AUTOMATIC,
    SECONDARY_ROLE (ALLOW_CONNECTIONS = ALL,
        READ_ONLY_ROUTING_URL = 'TCP://104.211.203.159:1433' ),
    PRIMARY_ROLE (ALLOW_CONNECTIONS = READ_WRITE,
        READ_ONLY_ROUTING_LIST = ('mssql-sql-statefull-deploy-0','mssql-sql-statefull-deploy-1'),
        READ_WRITE_ROUTING_URL = 'TCP://104.211.203.159:1433' ),
    SESSION_TIMEOUT = 10
);
GO
ALTER AVAILABILITY GROUP [MyAg] GRANT CREATE ANY DATABASE;

Note: In the command above, make sure you pass the ClusterIP service names you created in Step 1 (mssql-mirror-0, -1, and -2 in this example) for ENDPOINT_URL, and the external IP addresses of the SQL Server pods when configuring the READ_WRITE_ROUTING_URL option. Any error here can result in the secondaries not being able to join the AG.

Now, on the secondary replicas, run the T-SQL command to join the AG, as shown below:

-- On both secondaries, run the T-SQL commands below
ALTER AVAILABILITY GROUP [MyAg] JOIN WITH (CLUSTER_TYPE = NONE);
ALTER AVAILABILITY GROUP [MyAg] GRANT CREATE ANY DATABASE;

The AG should now be configured, and the AG dashboard should show all three replicas healthy.

Step 4: READ_WRITE_ROUTING_URL in action

You can now try connecting to any of the secondary replicas, providing the AG database as the database context, and you will automatically be routed to the current primary, even without a listener. For example, connecting to 104.211.203.78, the external IP address for pod mssql-sql-statefull-deploy-1 (a secondary), the connection is re-routed to the current primary, mssql-sql-statefull-deploy-0 at 104.211.231.206 (a scripted version of this check is sketched after this post).

To try a manual failover, please follow the steps documented here. Hope this helps!
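For reference, here is a minimal, hedged sqlcmd sketch of the Step 4 routing check. It assumes the environment from this post (the example external IPs, the AG database named test, and the sa password from the Kubernetes secret); note that the client must specify the target database for the server-side redirect to kick in.

# Connect to a secondary's external IP with the AG database as the context;
# @@SERVERNAME should come back as the current primary
# (mssql-sql-statefull-deploy-0 in this walkthrough)
sqlcmd -S 104.211.203.78 -U sa -P 'MyC0m9l&xP@ssw0rd' -d test \
  -Q "SELECT @@SERVERNAME AS connected_replica;"

# Without the database context, no redirection happens and the secondary
# itself answers the query
sqlcmd -S 104.211.203.78 -U sa -P 'MyC0m9l&xP@ssw0rd' \
  -Q "SELECT @@SERVERNAME AS connected_replica;"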
Released: Support for Distributed Network Names (DNN) Listeners for Always On Availability Groups

We are pleased to announce support for Distributed Network Name based listeners in Availability Groups. As of SQL Server 2019 CU8, we now support the use of Availability Group listeners based on Distributed Network Names (DNN listeners).
SQL Server Always On Availability Group Scalability Experiment and Result on Azure VM

Today, to meet the demands of fast business growth, many Tier 1 customers need to configure their Always On Availability Groups (AGs) with several hundred or more databases, and in recent years, with massive improvements in hardware technology, they are able to do so. Customers often ask how many databases they can place in an Always On Availability Group. This article shares information and learnings from our experiment placing a large number of databases within an AG, based on scalability tests with high-end Azure virtual machines (E64s and M128ms) running a TPC-E workload that simulates customer scenarios.