Configure multiple-subnet AlwaysOn Availability Group by modifying CIB
In the Windows world, a Windows Server Failover Cluster (WSFC) natively supports multiple subnets and handles multiple IP addresses via an OR dependency on the IP address resource. On Linux there is no OR dependency, but you can still achieve a proper multi-subnet configuration natively with Pacemaker, as shown below. You cannot do this simply by using the normal Pacemaker command line to modify a resource; you need to modify the cluster information base (CIB), which is an XML file holding the Pacemaker configuration. Here is an example that creates a SQL Server on Linux availability group across 4 nodes in 3 subnets on RHEL 7.6. If you are already familiar with the AG setup process, jump straight to step 16.
1.Register your subscription on all servers (red1, red2, red3 and red4 in this case).
subscription-manager register
2.List all available subscriptions, pick the one with High Availability, and note down the pool id.
subscription-manager list --available --all
3.Attach the subscription on all nodes (red1, red2, red3 and red4 in this case).
sudo subscription-manager attach --pool=xxxxx
4.Enable the repository on all nodes (red1, red2, red3 and red4 in this case).
sudo subscription-manager repos --enable=rhel-ha-for-rhel-7-server-rpms
5.Install the Pacemaker packages on all nodes (red1, red2, red3 and red4 in this case).
sudo yum install pacemaker pcs fence-agents-all resource-agents
6.Install the SQL Server resource agent on all nodes (red1, red2, red3 and red4 in this case).
sudo yum install mssql-server-ha
7.Set the password for the default user that is created when installing the Pacemaker and Corosync packages. The password should be exactly the same on all nodes (red1, red2, red3 and red4 in this case).
sudo passwd hacluster
8.Update the /etc/hosts file on all servers, adding the IP addresses and node names. All the servers should have the same entries.
192.168.2.103 red1
192.168.2.104 red2
192.168.4.100 red3
192.168.5.101 red4
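To confirm that every node resolves the others before building the cluster, a quick check (not part of the original steps; the host names follow this example):
# Run on each node; every name should resolve to the address from /etc/hosts
for h in red1 red2 red3 red4; do
    getent hosts "$h"
done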
9.Enable and start the pcsd service and Pacemaker on all nodes (red1, red2, red3 and red4 in this case).
sudo systemctl enable pcsd
sudo systemctl start pcsd
sudo systemctl enable pacemaker
10.Create the cluster from the primary replica node (red1 in this case).
sudo pcs cluster auth red1 red2 red3 red4 -u hacluster -p YourPasswordUsedInStep7
sudo pcs cluster setup --name sqlcluster1 red1 red2 red3 red4
sudo pcs cluster start --all
sudo pcs cluster enable --all
11.Enable the Always On availability groups feature on all nodes (red1, red2, red3 and red4 in this case).
sudo /opt/mssql/bin/mssql-conf set hadr.hadrenabled 1
sudo systemctl restart mssql-server
Create AG and Listener
1.Run the following queries in red1 to create a certificate.
use master
go
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '**<Master_Key_Password>**';
go
CREATE CERTIFICATE dbm_certificate WITH SUBJECT = 'dbm';
go
BACKUP CERTIFICATE dbm_certificate TO FILE = '/var/opt/mssql/data/dbm_certificate.cer'
WITH PRIVATE KEY (
FILE = '/var/opt/mssql/data/dbm_certificate.pvk',
ENCRYPTION BY PASSWORD = '**<Private_Key_Password>**'
);
2.Run the following commands in red1 to copy the certificate to the rest of the servers (red2, red3 and red4 in this case).
cd /var/opt/mssql/data
scp dbm_certificate.* root@red2:/var/opt/mssql/data/
scp dbm_certificate.* root@red3:/var/opt/mssql/data/
scp dbm_certificate.* root@red4:/var/opt/mssql/data/
3.Give the mssql user permission to access the certificate files on the rest of the servers (red2, red3 and red4 in this case).
cd /var/opt/mssql/data
chown mssql:mssql dbm_certificate.*
4.Run the following T-SQL queries to create the certificate on the rest of the nodes by restoring the certificate backup file (red2, red3 and red4 in this case).
use master
go
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '**<Master_Key_Password>**'
go
CREATE CERTIFICATE dbm_certificate
FROM FILE = '/var/opt/mssql/data/dbm_certificate.cer'
WITH PRIVATE KEY (
FILE = '/var/opt/mssql/data/dbm_certificate.pvk',
DECRYPTION BY PASSWORD = '**<Private_Key_Password>**'
)
5.Create the endpoint on all servers (red1, red2, red3 and red4).
CREATE ENDPOINT [Hadr_endpoint]
AS TCP (LISTENER_PORT = 5022)
FOR DATABASE_MIRRORING (
ROLE = ALL,
AUTHENTICATION = CERTIFICATE dbm_certificate,
ENCRYPTION = REQUIRED ALGORITHM AES
);
ALTER ENDPOINT [Hadr_endpoint] STATE = STARTED;
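Before creating the availability group, you can verify that the mirroring endpoint is actually listening on each node (a quick sanity check, not part of the original steps):
# Run on each node; expect a LISTEN entry for port 5022 owned by sqlservr
sudo ss -tlnp | grep 5022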
6.Run the following query on the primary replica (red1) to create the availability group. Please note, this works for SQL Server 2019; if you are using SQL Server 2017, you need to change the AVAILABILITY_MODE of at least one of the replicas to ASYNCHRONOUS_COMMIT (SQL Server 2017 supports at most three synchronous replicas, while SQL Server 2019 supports five).
CREATE AVAILABILITY GROUP [ag1]
WITH (DB_FAILOVER = ON, CLUSTER_TYPE = EXTERNAL)
FOR REPLICA ON
N'red1' WITH (
ENDPOINT_URL = N'tcp://red1:5022',
AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
FAILOVER_MODE = EXTERNAL,
SEEDING_MODE = AUTOMATIC),
N'red2' WITH (
ENDPOINT_URL = N'tcp://red2:5022',
AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
FAILOVER_MODE = EXTERNAL,
SEEDING_MODE = AUTOMATIC),
N'red3' WITH (
ENDPOINT_URL = N'tcp://red3:5022',
AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
FAILOVER_MODE = EXTERNAL,
SEEDING_MODE = AUTOMATIC),
N'red4' WITH (
ENDPOINT_URL = N'tcp://red4:5022',
AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
FAILOVER_MODE = EXTERNAL,
SEEDING_MODE = AUTOMATIC)
ALTER AVAILABILITY GROUP [ag1] GRANT CREATE ANY DATABASE; --grant create any database permission
7.Join the AG: run the following T-SQL queries on all the secondary servers (red2, red3 and red4 in this case).
ALTER AVAILABILITY GROUP [ag1] JOIN WITH (CLUSTER_TYPE = EXTERNAL);
ALTER AVAILABILITY GROUP [ag1] GRANT CREATE ANY DATABASE
8.Run the following T-SQL queries on the primary replica (red1 in this case) to create a database and add it to the AG.
CREATE DATABASE [db1];
ALTER DATABASE [db1] SET RECOVERY FULL;
BACKUP DATABASE [db1] TO DISK = N'/var/opt/mssql/data/db1.bak';
BACKUP log [db1] TO DISK = N'/var/opt/mssql/data/db1.trn';
GO
ALTER AVAILABILITY GROUP [ag1] ADD DATABASE [db1];
9.Create the SQL login for Pacemaker on all servers (red1, red2, red3 and red4 in this case).
CREATE LOGIN [pacemakerLogin] with PASSWORD= N'ComplexP@$$w0rd!'
GO
ALTER SERVER ROLE [sysadmin] ADD MEMBER [pacemakerLogin]
10.Run the following bash command in red1.
sudo pcs property set stonith-enabled=false
11.On all SQL Server on Linux servers (red1, red2, red3 and red4), run the following bash commands to save the credentials for the SQL Server login (the password is the same as the one used in step 9).
echo 'pacemakerLogin' >> ~/pacemaker-passwd
echo 'ComplexP@$$w0rd!' >> ~/pacemaker-passwd
sudo mv ~/pacemaker-passwd /var/opt/mssql/secrets/passwd
sudo chown root:root /var/opt/mssql/secrets/passwd
sudo chmod 400 /var/opt/mssql/secrets/passwd # Only readable by root
12.Create the availability group resource at the cluster level. Run the following command on any one of the nodes (just one server, and just one time).
sudo pcs resource create ag_cluster1 ocf:mssql:ag ag_name=ag1 meta failure-timeout=60s master notify=true
13.Run the following bash command on the primary replica red1 to create one virtual IP resource. The resource name is 'vip1', and the IP address is 192.168.2.111. To check the status of both resources, see the sketch below.
sudo pcs resource create vip1 ocf:heartbeat:IPaddr2 ip=192.168.2.111
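A minimal way to check the status mentioned above (the resource names match steps 12 and 13):
sudo pcs status               # ag_cluster1 should show red1 as Master
sudo pcs resource show vip1   # vip1 should be Started and carry 192.168.2.111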
14.Create the availability group listener for availability group ag1. Run the following T-SQL query on the primary replica (red1 in this case).
ALTER AVAILABILITY GROUP [ag1] ADD LISTENER 'aglistener' (WITH IP (
('192.168.2.111','255.255.255.0'),
('192.168.4.111','255.255.255.0'),
('192.168.5.111','255.255.255.0')
),PORT = 1433);
15.Run the following bash commands to create the constraints:
sudo pcs constraint colocation add vip1 ag_cluster1-master INFINITY with-rsc-role=Master
sudo pcs constraint order promote ag_cluster1-master then start vip1
16.Run the following bash command to export the CIB (you can run the command on any node).
sudo pcs cluster cib <filename>
17.You will find entries similar to the following:
<primitive class="ocf" id="vip1" provider="heartbeat" type="IPaddr2">
  <instance_attributes id="vip1-instance_attributes">
    <nvpair id="vip1-instance_attributes-ip" name="ip" value="192.168.2.111"/>
  </instance_attributes>
  <operations>
    <op id="vip1-monitor-interval-10s" interval="10s" name="monitor" timeout="20s"/>
    <op id="vip1-start-interval-0s" interval="0s" name="start" timeout="20s"/>
    <op id="vip1-stop-interval-0s" interval="0s" name="stop" timeout="20s"/>
  </operations>
</primitive>
18.Here is the modified version:
<primitive class="ocf" id="vip1" provider="heartbeat" type="IPaddr2">
  <instance_attributes id="vip1-instance_attributes">
    <rule id="Subnet1-IP" score="INFINITY" boolean-op="or">
      <expression id="Subnet1-Node1" attribute="#uname" operation="eq" value="red1"/>
      <expression id="Subnet1-Node2" attribute="#uname" operation="eq" value="red2"/>
    </rule>
    <nvpair id="vip1-instance_attributes-ip" name="ip" value="192.168.2.111"/>
  </instance_attributes>
  <instance_attributes id="vip1-instance_attributes2">
    <rule id="Subnet2-IP" score="INFINITY">
      <expression id="Subnet2-Node1" attribute="#uname" operation="eq" value="red3"/>
    </rule>
    <nvpair id="vip1-instance_attributes-ip2" name="ip" value="192.168.4.111"/>
  </instance_attributes>
  <instance_attributes id="vip1-instance_attributes3">
    <rule id="Subnet3-IP" score="INFINITY">
      <expression id="Subnet3-Node1" attribute="#uname" operation="eq" value="red4"/>
    </rule>
    <nvpair id="vip1-instance_attributes-ip3" name="ip" value="192.168.5.111"/>
  </instance_attributes>
  <operations>
    <op id="vip1-monitor-interval-10s" interval="10s" name="monitor" timeout="20s"/>
    <op id="vip1-start-interval-0s" interval="0s" name="start" timeout="20s"/>
    <op id="vip1-stop-interval-0s" interval="0s" name="stop" timeout="20s"/>
  </operations>
</primitive>
19.Run the following command to import the modified CIB and reconfigure Pacemaker.
sudo pcs cluster cib-push <filename>
Here are the takeaway points:
1)All nodes in the same subnet should be in the same <instance_attributes> element.
2)If there is more than one server in the subnet, the attribute boolean-op="or" is required.
3)The IP address of the Always On listener is specified in the <nvpair> element.
4)The value of the id attribute does not matter; you can specify any value as long as it is unique.
Optionally, you can create three entries for the three IP addresses on the DNS server. Here is a screenshot of using SQLCMD to connect to the AG listener.
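After pushing the CIB, a quick way to confirm the rule-based addressing took effect (commands only; the expected addresses follow this example's topology):
sudo pcs status                                  # vip1 should be Started on the node hosting the primary
sudo crm_resource --resource vip1 --query-xml    # shows the three instance_attributes blocks
ip addr show | grep -E '192.168.(2|4|5).111'     # on the current primary, only its own subnet's VIP appears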
Getting Illegal character from SQL DB
We are getting an illegal character (?) while retrieving an nvarchar field (TEMP-WRK-INFO) from a table in a COBOL module. Below are the environment details:
Linux version - RHEL 8.3
MSSQL version - 2019
Initially, we got an illegal character while converting the hex value to its ASCII value. For example, when the amount 894.55 is moved to TEMP-WRK-INFO, the hex value (89 45 5C) is converted to ASCII as (?E\). This issue occurred because extended ASCII values are not converted correctly. We fixed this by changing LANG to en_US.CP1252 in the environment settings; it then worked for all extended ASCII values except 81 and 90. When the hex value contains 81 or 90, we still get illegal characters. The same logic works fine in the Windows environment.
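For what it's worth, the decoding behavior can typically be reproduced outside COBOL with iconv (a sketch; on glibc systems the CP1252 converter usually rejects the code points that Windows-1252 leaves unassigned, which matches the symptom):
printf '\x89\x45\x5C' | iconv -f CP1252 -t UTF-8   # prints the CP1252 rendering "‰E\"
printf '\x81' | iconv -f CP1252 -t UTF-8           # fails: 0x81 (like 0x90) is unassigned in CP1252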
SQL pod may get stuck in "ContainerCreating" status when you stop the node instance on AKS
When we deploy SQL Server on AKS, sometimes we may find SQL HA does not work as expected. For example, suppose we deploy AKS using our default sample with 2 nodes (https://docs.microsoft.com/en-us/azure/aks/tutorial-kubernetes-deploy-cluster#create-a-kubernetes-cluster):
az aks create \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --node-count 2 \
  --generate-ssh-keys \
  --attach-acr <acrName>
There should be 2 instances deployed in the AKS virtual machine scale set. According to the SQL Server documentation: "In the following diagram, the node hosting the mssql-server container has failed. The orchestrator starts the new pod on a different node, and mssql-server reconnects to the same persistent storage. The service connects to the re-created mssql-server."
However, this does not always seem to be true when we manually stop the AKS node instance from the portal. Before we stop any nodes, the status of the pod is Running. If we stop node 0, nothing happens, as SQL Server resides on node 1; the status of the SQL pod remains Running. However, if we stop node 1 instead of node 0, the issue appears: the original SQL pod remains in the Terminating status while the new SQL pod gets stuck in the ContainerCreating status.
$ kubectl describe pod mssql-deployment-569f96888d-bkgvf
Name:           mssql-deployment-569f96888d-bkgvf
Namespace:      default
Priority:       0
Node:           aks-nodepool1-26283775-vmss000000/10.240.0.4
Start Time:     Thu, 17 Dec 2020 16:29:10 +0800
Labels:         app=mssql
                pod-template-hash=569f96888d
Annotations:    <none>
Status:         Pending
IP:
IPs:            <none>
Controlled By:  ReplicaSet/mssql-deployment-569f96888d
Containers:
  mssql:
    Container ID:
    Image:          mcr.microsoft.com/mssql/server:2017-latest
    Image ID:
    Port:           1433/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Environment:
      MSSQL_PID:    Developer
      ACCEPT_EULA:  Y
      SA_PASSWORD:  <set to the key 'SA_PASSWORD' in secret 'mssql'>  Optional: false
    Mounts:
      /var/opt/mssql from mssqldb (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-jh9rf (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  mssqldb:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  mssql-data
    ReadOnly:   false
  default-token-jh9rf:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-jh9rf
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason              Age                  From                                        Message
  ----     ------              ----                 ----                                        -------
  Normal   Scheduled           <unknown>            default-scheduler                           Successfully assigned default/mssql-deployment-569f96888d-bkgvf to aks-nodepool1-26283775-vmss000000
  Warning  FailedAttachVolume  18m                  attachdetach-controller                     Multi-Attach error for volume "pvc-6e3d4aac-6449-4c9d-86d0-c2488583ec5c" Volume is already used by pod(s) mssql-deployment-569f96888d-d8kz7
  Warning  FailedMount         3m16s (x4 over 14m)  kubelet, aks-nodepool1-26283775-vmss000000  Unable to attach or mount volumes: unmounted volumes=[mssqldb], unattached volumes=[mssqldb default-token-jh9rf]: timed out waiting for the condition
  Warning  FailedMount         62s (x4 over 16m)    kubelet, aks-nodepool1-26283775-vmss000000  Unable to attach or mount volumes: unmounted volumes=[mssqldb], unattached volumes=[default-token-jh9rf mssqldb]: timed out waiting for the condition
This issue, caused by a multi-attach error, is expected due to the current AKS internal design.
If you restart the node instance that was shut down, the issue will be resolved.
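To restart the stopped instance from the command line rather than the portal, something like the following should work (a sketch; the node resource group and scale set name are placeholders you would look up for your own cluster):
# Find the node resource group and the scale set that backs the node pool
az aks show --resource-group myResourceGroup --name myAKSCluster --query nodeResourceGroup -o tsv
az vmss list --resource-group <nodeResourceGroup> -o table
# Start the stopped instance (instance id 1 maps to vmss000001 here)
az vmss start --resource-group <nodeResourceGroup> --name aks-nodepool1-26283775-vmss --instance-ids 1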
Error 15404 ‘Could not obtain information about Windows NT group/user '%ls', error code 0x80090304'
Your SQL Server on Linux instance has been joined to the domain and you can connect to it using Windows Authentication. The connection itself is fine, but when you run some high-privilege T-SQL statements like 'Create Login' or 'sp_addsrvrolemember', you may run into the error 'Error 15404: Could not obtain information about Windows NT group/user '%ls', error code 0x80090304'. You will find messages like the following in the PAL log:
11/22/2019 13:56:26.448761588 Debug [security.kerberos] <0000040947/0x00000200> Processing SSPI operation 0x0000000F
11/22/2019 13:56:26.449366379 Error [security.ldap] <0000040947/0x00000200> Initializing credentials for use in new cache failed: Keytab contains no suitable keys for red4$@SQLREPRO.EDU
11/22/2019 13:56:26.449613575 Debug [security.kerberos] <0000040947/0x00000200> Import name [ADMINISTRATOR@SQLREPRO.EDU] returned [ADMINISTRATOR@SQLREPRO.EDU]
11/22/2019 13:56:26.449633375 Debug [security.kerberos] <0000040947/0x00000200> Import name [red4$] returned [red4$]
11/22/2019 13:56:26.449753473 Debug [security.kerberos] <0000040947/0x00000200> Import name [RED4$] returned [RED4$]
11/22/2019 13:56:26.449905471 Debug [security.kerberos] <0000040947/0x00000200> Import name [red4$] returned [red4$]
11/22/2019 13:56:26.450014469 Error [security.kerberos] <0000040947/0x00000200> GSS MAJOR: 851968 GSS MINOR: 39756033 Error acquiring credentials in AcquireCredCaseInsensitive
11/22/2019 13:56:26.450029069 Error [security.kerberos] <0000040947/0x00000200> Unspecified GSS failure. Minor code may provide more information
11/22/2019 13:56:26.450039869 Error [security.kerberos] <0000040947/0x00000200> No key table entry found for red4$@SQLREPRO.EDU
11/22/2019 13:56:26.450053069 Debug [security.kerberos] <0000040947/0x00000200> SSPI operation 0x0000000F returned status: KerberosStream.cpp:2021 Operation unsuccessful
11/22/2019 13:56:26.450119868 Debug [security.kerberos.libos] <0000040961/0x0000020c> GetSecContextByUserABI() return value: 0x80090304
11/22/2019 13:56:26.488617991 Debug [security.kerberos.libos] <0000040961/0x0000020c> QueryContextAttributes() return value: 0x00000000
11/22/2019 13:56:26.488748289 Debug [security.kerberos.libos] <0000040961/0x0000020c> QueryContextAttributes() return value: 0x00000000
11/22/2019 13:56:26.489370580 Debug [security.kerberos.libos] <0000040961/0x0000020c> LookupAccountSid() return value: 0x00000001
Why?
When you run queries like 'create login', permissions are checked. The first time you do it, the current permission cache is invalidated; when you do it again, the permissions are rechecked. During the permission check, SQL Server has to go through mssql.keytab to pick up the machine account entry key or the MSA key. If SQL Server does not find the entries, or finds invalid entries, it raises the error in question. Here are three common scenarios that cause the issue.
1.The machine account entries are not added (mssql.keytab is configured with a machine account).
If you skip the step 'Option 1: Using UPN to configure the keytab' in the article https://docs.microsoft.com/en-us/sql/linux/sql-server-linux-active-directory-authentication?view=sql-server-2017, you will run into this issue. For example, here is a screenshot of a valid mssql.keytab. If you don't have the highlighted entries, error 15404 happens.
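Since the screenshots do not reproduce here, you can inspect the keytab entries from the shell instead (a quick check; the path matches the default mssql-conf setting):
# List all entries with key version numbers and encryption types;
# expect machine-account (hostname$@REALM) or MSA entries alongside the MSSQLSvc SPN entries
sudo klist -kte /var/opt/mssql/secrets/mssql.keytab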
2.The credentials of the MSA are not added (mssql.keytab is configured with a Managed Service Account).
This is similar to scenario 1. For example, here is a screenshot of a valid mssql.keytab. If you don't have the highlighted entries, error 15404 happens.
3.The machine account entries are expired (mssql.keytab is configured with a machine account).
To my knowledge, updating the machine account password causes the KVNO to increase, and the machine account entries go stale. If someone on the SQL Server Linux host runs the following command, it will update the machine account password and cause the KVNO to increase:
adcli update --computer-password-lifetime=0
Please note, the Windows domain controller does not control the machine account password update; it is the client's decision whether to update the password. Even with the 'Domain member: Maximum machine account password age' policy enabled, the domain controller does not force clients to update the machine account password.
Solution
===
1.For scenarios 1 and 2, just add the entries back.
2.For scenario 3, you need to recreate the mssql.keytab. The 'adcli update' command updates the machine account password and refreshes /etc/krb5.keytab. You then need to follow the step 'Option 1: Using UPN to configure the keytab', but with some changes: delete all the entries (by their slot number) that are not the UPN, and delete all stale UPN entries, so that only the UPN entries with the latest KVNO are left. For example, in the /etc/krb5.keytab shown here, you would delete all the entries except the highlighted ones.
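A sketch of the entry cleanup with ktutil (the slot number is an example; check the output of list for the real ones, and note that slots renumber after each delete):
sudo ktutil
ktutil:  rkt /var/opt/mssql/secrets/mssql.keytab   # read the keytab
ktutil:  list                                      # show slots, KVNOs and principals
ktutil:  delent 1                                  # delete a stale entry by slot; repeat as needed
ktutil:  wkt /var/opt/mssql/secrets/mssql.keytab.new
ktutil:  quit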
Tutorial: Create SQL Cluster(FCI) on RHEL
In the Windows world, SQL Server integrates into Windows Server Failover Cluster (WSFC) natively, and there is a dedicated installation path for a SQL Server cluster. On Linux, however, you install a standalone SQL Server instance on each node first, and then configure the instances as a SQL Server failover cluster instance. I use SQL Server 2019 with RHEL 8.x in this tutorial, but it is also possible to use SQL Server 2017 on RHEL 7.x or RHEL 8 to configure an FCI. Here is the step-by-step video.
Topology
1.ISCSI target server configuration
The two highlighted disks will be used as shared storage:
sdc is for the database files in /var/opt/mssql/data
sdd is for the user database files. If all your databases are stored in /var/opt/mssql/data, feel free to ignore all the steps linked to device sdd.
2.Run fdisk to create the partitions.
fdisk /dev/sdc
fdisk /dev/sdd
Run lsblk again.
3.Install the targetcli package.
yum -y install targetcli
4.Run targetcli to create the ISCSI target server. I created two ACL entries for node1 and node2:
iqn.2020-08.com.contoso:node1
iqn.2020-08.com.contoso:node2
I'm not going to dig into the targetcli command; please review this article for detail. I ended up with the following iscsi settings.
5.Run the following bash commands to expose tcp port 3260.
firewall-cmd --add-port=3260/tcp --permanent
firewall-cmd --reload
Enable and restart the target service.
systemctl enable target.service
systemctl restart target.service
2.ISCSI initiator configuration
Choose one of the servers that will participate in the FCI configuration. It does not matter which one; I use node1 in this tutorial. Please note, all the steps in this section are performed on node1, unless stated otherwise.
1.Install iscsi-initiator-utils on all nodes.
sudo yum install iscsi-initiator-utils -y
2.Edit /etc/iscsi/initiatorname.iscsi and replace the existing value with the initiator name used in step 4 of the [ISCSI target server configuration] section.
InitiatorName=iqn.2020-08.com.contoso:node1
3.Discover the iSCSI disks of the target.
iscsiadm -m discovery -t st -p <ip of iscsi target server>
Here is the command in this tutorial.
iscsiadm -m discovery -t st -p 10.1.0.8
(Optional step) After the discovery, the database below is updated.
ls -l /var/lib/iscsi/nodes
6.Make the connection to the iscsi target.
iscsiadm --mode node --targetname iqn.2020-08.com.contoso:servers --login
(Optional step) After logging in, a session with the iSCSI target is established.
iscsiadm --mode node -P 1
(Optional step) If you review the messages file, you will see the following keywords.
sudo grep "Attached SCSI" /var/log/messages
Aug 6 01:38:21 localhost kernel: sd 3:0:1:0: [sdb] Attached SCSI disk
Aug 6 01:38:21 localhost kernel: sd 2:0:0:0: [sda] Attached SCSI disk
Aug 6 04:26:01 localhost kernel: sd 6:0:0:0: [sdc] Attached SCSI disk
Aug 6 04:26:01 localhost kernel: sd 6:0:0:1: [sdd] Attached SCSI disk
9.Create physical volumes on the iSCSI disks.
sudo pvcreate /dev/sdc
sudo pvcreate /dev/sdd
10.Create volume groups 'FCIDataVG1' and 'FCIDataVG2' on the iSCSI disks.
sudo vgcreate FCIDataVG1 /dev/sdc
sudo vgcreate FCIDataVG2 /dev/sdd
11.Create logical volumes in the two groups. Run the following commands on node1.
sudo lvcreate -L599G -n FCIDataLV1 FCIDataVG1
sudo lvcreate -L499G -n FCIDataLV2 FCIDataVG2
Check the devices.
12.Format the logical volumes with a supported filesystem.
sudo mkfs.xfs /dev/FCIDataVG1/FCIDataLV1
sudo mkfs.xfs /dev/FCIDataVG2/FCIDataLV2
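To confirm the volumes look right before moving on (not part of the original steps; the names follow this tutorial):
# Both logical volumes should report an xfs filesystem
sudo lsblk -f /dev/sdc /dev/sdd
sudo blkid /dev/FCIDataVG1/FCIDataLV1 /dev/FCIDataVG2/FCIDataLV2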
13.Repeat steps 1~6 on the rest of the nodes.
Please note,
1)Do not do step 6 on the rest of the nodes before step 12 is completed on node1, or you may not be able to fail over.
2)All the steps but step 2 are exactly the same. Here is the value for node2:
InitiatorName=iqn.2020-08.com.contoso:node2
3)After step 6 is executed on the rest of the nodes, you will see the same devices as on node1. Here is a screenshot of node2 after step 6 is executed.
3.SQL Server configuration
Please note, all the steps in this section are performed on node1, unless stated otherwise.
1.Run the following queries to create the login used by Pacemaker.
CREATE LOGIN [sqlpackmaker] with PASSWORD= N'YourStrongP@ssword1'
ALTER SERVER ROLE [sysadmin] ADD MEMBER [sqlpackmaker]
2.Drop the default server name and create a new server name. The new server name is the SQL virtual name.
exec sp_dropserver node1
go
exec sp_addserver 'sqlvirtualname1','local'
3.Restart SQL Server for the change to take effect.
sudo systemctl stop mssql-server
sudo systemctl restart mssql-server
4.Run the following query to check the change.
select @@servername, SERVERPROPERTY('ComputernamephysicalNetBIOS')
5.Stop SQL Server on all nodes (node1, node2).
sudo systemctl stop mssql-server
6.Copy the /var/opt/mssql/secrets/machine-key of node1 to node2.
7.Create temporary directories to store the SQL Server data and log files.
mkdir /var/opt/mssql/tempdir
mkdir /var/opt/mssql/tempuserdata
8.Copy the SQL Server data and log files to the temporary directories.
cp /var/opt/mssql/data/* /var/opt/mssql/tempdir/
cp /var/opt/mssql/userdata/* /var/opt/mssql/tempuserdata/
9.Delete the files from the existing SQL Server data directories on node1.
rm -f /var/opt/mssql/data/*
rm -f /var/opt/mssql/userdata/*
10.Mount the iSCSI logical volumes in the SQL Server data folders.
mount /dev/<VolumeGroupName>/<LogicalVolumeName> <FolderName>
Here are the commands in this tutorial.
mount /dev/FCIDataVG1/FCIDataLV1 /var/opt/mssql/data
mount /dev/FCIDataVG2/FCIDataLV2 /var/opt/mssql/userdata
11.Change the owner of the mounts to mssql.
chown mssql:mssql /var/opt/mssql/data
chown mssql:mssql /var/opt/mssql/userdata
12.Change the group ownership of the mounts to mssql.
chgrp mssql /var/opt/mssql/data
chgrp mssql /var/opt/mssql/userdata
13.Copy the files from the temp folders back to /var/opt/mssql/data and /var/opt/mssql/userdata.
cp /var/opt/mssql/tempdir/* /var/opt/mssql/data
cp /var/opt/mssql/tempuserdata/* /var/opt/mssql/userdata
14.Check the two temp folders and make sure the files are copied.
15.Change the ownership of the files to mssql.
chown mssql:mssql /var/opt/mssql/data/*
chown mssql:mssql /var/opt/mssql/userdata/*
16.Configure auto mount to make the OS mount the devices automatically.
1)Get the UUIDs. Please note down the UUID, TYPE and directory of each device.
blkid /dev/FCIDataVG1/FCIDataLV1
blkid /dev/FCIDataVG2/FCIDataLV2
2)Edit /etc/fstab to configure auto mount on node1 and node2. Here is a screenshot from node1. Please review this article for more detail.
4.Cluster configuration
1.Edit /etc/hosts to specify the node names and IPs. Do the same thing on node1 and node2.
2.Create a file to store the SQL Server username and password for the Pacemaker login. Run the following commands on node1 and node2 (the same login name and password specified in step 1 of the [SQL Server configuration] section).
sudo touch /var/opt/mssql/secrets/passwd
sudo echo 'sqlpackmaker' >> /var/opt/mssql/secrets/passwd
sudo echo 'YourStrongP@ssword1' >> /var/opt/mssql/secrets/passwd
sudo chown root:root /var/opt/mssql/secrets/passwd
sudo chmod 600 /var/opt/mssql/secrets/passwd
3.On both cluster nodes, open the Pacemaker firewall ports.
To open these ports with firewalld, run the following commands on node1 and node2.
sudo firewall-cmd --permanent --add-service=high-availability
sudo firewall-cmd --reload
4.Install the Pacemaker packages on node1 and node2.
sudo yum install pacemaker pcs fence-agents-all resource-agents
5.Set the password for the default user that is created when installing the Pacemaker and Corosync packages. Use the same password on node1 and node2.
sudo passwd hacluster
6.To allow nodes to rejoin the cluster after a reboot, enable and start the pcsd service and Pacemaker. Run the following commands on node1 and node2.
sudo systemctl enable pcsd
sudo systemctl start pcsd
sudo systemctl enable pacemaker
7.Create the cluster. Run the following commands on node1. The password should be the same as the one in step 5.
sudo pcs host auth node1 node2
sudo pcs cluster setup sqlcluster node1 node2
sudo pcs cluster start --all
sudo pcs cluster enable --all
8.Disable stonith for test purposes. Run the following command on node1.
sudo pcs property set stonith-enabled=false
9.Install the FCI resource agent for SQL Server. Run the following command on node1 and node2.
sudo yum install mssql-server-ha
10.Create the disk resources; they belong to a resource group (fci in this demo). Run the following commands on node1.
sudo pcs resource create iSCSIDisk1 Filesystem device="/dev/FCIDataVG1/FCIDataLV1" directory="/var/opt/mssql/data" fstype="xfs" --group fci
sudo pcs resource create iSCSIDisk2 Filesystem device="/dev/FCIDataVG2/FCIDataLV2" directory="/var/opt/mssql/userdata" fstype="xfs" --group fci
11.Create the IP resource that will be used by the FCI; this resource belongs to the same resource group created in the previous step.
sudo pcs resource create vip2 ocf:heartbeat:IPaddr2 ip=10.1.0.111 nic=eth0 cidr_netmask=24 --group fci
12.Create the FCI resource. The resource name should be exactly the same as the SQL virtual name created in step 2 of the [SQL Server configuration] section.
sudo pcs resource create sqlvirtualname1 ocf:mssql:fci --group fci
Failover
===
sudo pcs resource move sqlvirtualname1 <NodeName>
To verify the failover, see the sketch below.
More: Add a node to existing cluster
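A minimal verification pass after the move (a sketch; the sa credentials and the VIP follow this tutorial's values):
sudo pcs status                                       # the fci group should now be running on the target node
sqlcmd -S 10.1.0.111 -U sa -Q "select @@servername"   # still answers as 'sqlvirtualname1' via the VIP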
Create a hybrid Distributed Availability Group Between SQL Windows and SQL Linux
This article describes how to create a hybrid SQL Server distributed availability group between SQL Server on Linux and SQL Server on Windows. Here is the topology of the DAG environment:
Data Center 1: Red1 192.168.3.102 (Primary); Red2 192.168.3.103; availability group AGLinux; listener AGLinux-Lis, IP 192.168.3.111; SQL Server 2019 CU2 on RHEL 8.
Data Center 2: Node1 192.168.2.101 (Forwarder); Node2 192.168.2.102; availability group AGWindows; listener AGWindows-Lis, IP 192.168.2.111; SQL Server 2019 CU2 on Windows 2016.
Distributed availability group: DAGHybrid
Please make sure the following ports are opened on all the servers in both datacenters:
TCP: 5022, 2224, 3121, 21064
UDP: 5405
For RHEL, you need to have the 'high availability subscription' enabled. Please refer to https://docs.microsoft.com/en-us/sql/linux/sql-server-linux-availability-group-cluster-rhel?view=sql... for detail.
Section 1: Create the availability group in datacenter1
1.Update /etc/hosts on all the nodes in datacenter1; add the IPs, replica names and AG listeners. Here is a screenshot. All servers in datacenter1 should have the same settings. (Although the IP resources of the listeners will be created in later steps, it's fine to add them at the very beginning.)
2.Run the following bash commands on all servers in datacenter1 to enable the Always On feature.
sudo /opt/mssql/bin/mssql-conf set hadr.hadrenabled 1
sudo systemctl restart mssql-server
3.Run the following T-SQL to enable the Always On health check event session on all the servers in datacenter1.
ALTER EVENT SESSION AlwaysOn_health ON SERVER WITH (STARTUP_STATE=ON);
4.Create a certificate. Run this query on the primary replica in datacenter1 (red1 in this case).
use master
go
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '**<Master_Key_Password>**';
go
CREATE CERTIFICATE dbm_certificate WITH SUBJECT = 'dbm';
go
BACKUP CERTIFICATE dbm_certificate TO FILE = '/var/opt/mssql/data/dbm_certificate.cer'
WITH PRIVATE KEY (
FILE = '/var/opt/mssql/data/dbm_certificate.pvk',
ENCRYPTION BY PASSWORD = '**<Private_Key_Password>**'
);
5.Copy the certificate to the rest of the servers in datacenter1 (red2 in this case).
cd /var/opt/mssql/data
scp dbm_certificate.* root@**<nodeName>**:/var/opt/mssql/data/
6.Give the mssql user permission to access the certificate files on the rest of the servers (red2 in this case). Run the following bash commands in red2.
cd /var/opt/mssql/data
chown mssql:mssql dbm_certificate.*
7.Create the certificate on the rest of the servers by restoring the certificate backup (red2 in this case).
use master
go
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '**<Master_Key_Password>**'
go
CREATE CERTIFICATE dbm_certificate
FROM FILE = '/var/opt/mssql/data/dbm_certificate.cer'
WITH PRIVATE KEY (
FILE = '/var/opt/mssql/data/dbm_certificate.pvk',
DECRYPTION BY PASSWORD = '**<Private_Key_Password>**'
)
8.Create the endpoint on all servers in datacenter1. Port 5022 is used.
CREATE ENDPOINT [Hadr_endpoint]
AS TCP (LISTENER_PORT = 5022)
FOR DATABASE_MIRRORING (
ROLE = ALL,
AUTHENTICATION = CERTIFICATE dbm_certificate,
ENCRYPTION = REQUIRED ALGORITHM AES
);
ALTER ENDPOINT [Hadr_endpoint] STATE = STARTED;
9.Create the AG with two synchronous replicas in datacenter1. Run the following query on the primary replica (red1 in this case).
CREATE AVAILABILITY GROUP [AGLinux]
WITH (DB_FAILOVER = ON, CLUSTER_TYPE = EXTERNAL)
FOR REPLICA ON
N'red1' WITH (
ENDPOINT_URL = N'tcp://red1:5022',
AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
FAILOVER_MODE = EXTERNAL,
SEEDING_MODE = AUTOMATIC),
N'red2' WITH (
ENDPOINT_URL = N'tcp://red2:5022',
AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
FAILOVER_MODE = EXTERNAL,
SEEDING_MODE = AUTOMATIC)
go
ALTER AVAILABILITY GROUP [AGLinux] GRANT CREATE ANY DATABASE; --grant create any database permission
10.Join the AG: run the following T-SQL queries on all the secondary servers in datacenter1 (red2 in this case).
ALTER AVAILABILITY GROUP [AGLinux] JOIN WITH (CLUSTER_TYPE = EXTERNAL);
ALTER AVAILABILITY GROUP [AGLinux] GRANT CREATE ANY DATABASE
11.Create a database and add it to the AG on the primary replica (red1 in this case).
CREATE DATABASE [db2];
ALTER DATABASE [db2] SET RECOVERY FULL;
BACKUP DATABASE [db2] TO DISK = N'db2.bak';
BACKUP log [db2] TO DISK = N'db2.trn';
GO
ALTER AVAILABILITY GROUP [AGLinux] ADD DATABASE [db2];
12.Create the SQL login for Pacemaker on all servers in datacenter1.
CREATE LOGIN [pacemakerLogin] with PASSWORD= N'ComplexP@$$w0rd!'
GO
ALTER SERVER ROLE [sysadmin] ADD MEMBER [pacemakerLogin]
GO
13.Install Pacemaker. Run the following bash command on all servers in datacenter1.
sudo yum install pacemaker pcs fence-agents-all resource-agents
14.Install the SQL Server resource agent. Run the following bash command on all servers in datacenter1.
sudo yum install mssql-server-ha
15.Set the password for the default user that is created when installing the Pacemaker and Corosync packages. Run the following command on all servers in datacenter1 (this command asks for a password; the password should be the same on all servers in the same datacenter).
sudo passwd hacluster
16.To allow nodes to rejoin the cluster after a reboot, enable and start the pcsd service and Pacemaker. Run the following commands on all servers in datacenter1.
sudo systemctl enable pcsd
sudo systemctl start pcsd
sudo systemctl enable pacemaker
17.Create a cluster in datacenter1. Run the following commands on any one of the nodes in datacenter1 (just one server, and just one time). You will be asked to provide the username and password you specified in step 15.
sudo pcs host auth red1 red2
sudo pcs cluster setup cluster1 red1 red2
sudo pcs cluster start --all
sudo pcs cluster enable --all
18.Because the node-level fencing configuration depends heavily on your environment, disable it for this tutorial (it can be configured later; in my test, if this value is not set to false, you can't fail over, and the IP resource may fail). The following script disables node-level fencing (run it on any node in datacenter1).
sudo pcs property set stonith-enabled=false
19.On all SQL Server Linux servers in datacenter1, save the credentials for the SQL Server login (use the same user name and password created in step 12).
echo 'pacemakerLogin' >> ~/pacemaker-passwd
echo 'ComplexP@$$w0rd!' >> ~/pacemaker-passwd
sudo mv ~/pacemaker-passwd /var/opt/mssql/secrets/passwd
sudo chown root:root /var/opt/mssql/secrets/passwd
sudo chmod 400 /var/opt/mssql/secrets/passwd # Only readable by root
20.Create the availability group resource at the OS level in datacenter1. Run the following command on any one of the nodes in datacenter1 (just on one node, and just one time).
sudo pcs resource create ag_cluster ocf:mssql:ag ag_name=AGLinux meta failure-timeout=60s promotable notify=true
21.Create the virtual IP resource. The resource name is 'virtualip1', and the IP address is 192.168.3.111 in this demonstration. This is the IP address for the AG listener.
sudo pcs resource create virtualip1 ocf:heartbeat:IPaddr2 ip=192.168.3.111
22.Create the availability group listener for availability group AGLinux. Run the following T-SQL query on the primary replica (red1 in this case). The IP address should be the same as the IP in step 21.
ALTER AVAILABILITY GROUP [AGLinux] ADD LISTENER 'AGLinux-Lis' (WITH IP(('192.168.3.111','255.255.255.0')),PORT = 1433);
a)It's fine if you run step 22 before step 21; however, you can't use the IP address to connect to SQL Server until the resource in Pacemaker is created.
b)There is a bug in the DMV sys.availability_group_listener_ip_addresses showing the state as offline.
23.Add the colocation constraint. Please note, without this constraint the IP resource does not fail over when the AG resource fails over, so we need it to guarantee that the IP fails over with the cluster resource. Run the following command on any server in datacenter1 (just on one server, and just one time).
sudo pcs constraint colocation add virtualip1 with master ag_cluster-clone INFINITY with-rsc-role=Master
24.Add the ordering constraint.
sudo pcs constraint order promote ag_cluster-clone then start virtualip1
25.Update the cluster recheck interval to 2 minutes. Run the following bash commands on any node (just one time).
sudo pcs property set cluster-recheck-interval=2min
sudo pcs property set start-failure-is-fatal=true
pcs resource update ag_cluster meta failure-timeout=60s
Section 2: Create the availability group in datacenter2 (actually, you can run this section first, as long as the certificate is the same as the one used in Section 1)
The steps in Section 2 are exactly the same as for a regular SQL Server AG on Windows; the only thing I'd like to highlight is that you have to create the endpoint authenticated by a certificate, and it must be the same certificate you used on SQL Linux.
1.Copy the certificate backup from SQL Linux to the two Windows servers.
2.Run the following queries to create the master key, certificate and endpoint.
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '**<Master_Key_Password>**'
go
CREATE CERTIFICATE dbm_certificate
FROM FILE = 'C:\Backup\dbm_certificate.cer'
WITH PRIVATE KEY (
FILE = 'C:\Backup\dbm_certificate.pvk',
DECRYPTION BY PASSWORD = '**<Private_Key_Password>**'
)
go
create ENDPOINT [Hadr_endpoint]
AS TCP (LISTENER_PORT = 5022)
FOR DATABASE_MIRRORING (
ROLE = ALL,
AUTHENTICATION = CERTIFICATE dbm_certificate,
ENCRYPTION = REQUIRED ALGORITHM AES
);
ALTER ENDPOINT [Hadr_endpoint] STATE = STARTED;
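A quick way to confirm the endpoint is up on each Windows replica before continuing (an extra check, not part of the original steps):
sqlcmd -S node1 -E -Q "SELECT name, state_desc, port FROM sys.tcp_endpoints WHERE name = 'Hadr_endpoint'"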
3.Create the login and grant permission to access the endpoint.
CREATE LOGIN InstanceA_Login WITH PASSWORD = 'Strong Passworsdfg1#d';
CREATE USER InstanceA_User FOR LOGIN InstanceA_Login;
go
alter authorization on CERTIFICATE::dbm_certificate to InstanceA_User
GO
GRANT CONNECT ON ENDPOINT::[Hadr_endpoint] TO InstanceA_Login;
4.Create the availability group.
CREATE AVAILABILITY GROUP [AGWindows]
FOR REPLICA ON
N'node1' WITH (
ENDPOINT_URL = N'TCP://node1:5022',
FAILOVER_MODE = AUTOMATIC,
AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
SEEDING_MODE = AUTOMATIC
),
N'node2' WITH (
ENDPOINT_URL = N'TCP://node2:5022',
FAILOVER_MODE = AUTOMATIC,
AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
SEEDING_MODE = AUTOMATIC
)
5.Create the listener.
ALTER AVAILABILITY GROUP [AGWindows] ADD LISTENER 'AGWindows-Lis' (WITH IP(('192.168.2.111','255.255.255.0')),PORT = 1433);
6.Run the following T-SQL on all secondary replicas (node2 in this case) to join the availability group.
ALTER AVAILABILITY GROUP [AGWindows] JOIN
GO
ALTER AVAILABILITY GROUP [AGWindows] GRANT CREATE ANY DATABASE;
7.Edit the hosts file on the two Windows servers and make sure they are able to resolve the listener AGLinux-Lis.
Section 3: Create the hybrid distributed availability group
1.Create the distributed availability group in datacenter1. Run the following query on the primary replica in datacenter1 (red1 in this case).
CREATE AVAILABILITY GROUP [DAGHybrid]
WITH (DISTRIBUTED)
AVAILABILITY GROUP ON
'AGLinux' WITH (
LISTENER_URL = 'tcp://AGLinux-lis:5022',
AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
FAILOVER_MODE = MANUAL,
SEEDING_MODE = AUTOMATIC
),
'AGWindows' WITH (
LISTENER_URL = 'tcp://AGWindows-Lis:5022',
AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
FAILOVER_MODE = MANUAL,
SEEDING_MODE = AUTOMATIC
);
2.Run the following query on the forwarder to join the distributed AG.
ALTER AVAILABILITY GROUP [DAGHybrid]
JOIN
AVAILABILITY GROUP ON
'AGLinux' WITH (
LISTENER_URL = 'tcp://AGLinux-lis:5022',
AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
FAILOVER_MODE = MANUAL,
SEEDING_MODE = AUTOMATIC
),
'AGWindows' WITH (
LISTENER_URL = 'tcp://AGWindows-Lis:5022',
AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
FAILOVER_MODE = MANUAL,
SEEDING_MODE = AUTOMATIC
);
The article for RHEL 7.6 is here: https://sqlserver.code.blog/2020/02/14/create-a-hybrid-distributed-availability-group-between-sql-windows-and-sql-linux/
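Once both sides are joined, a couple of quick checks (a sketch; run from any machine that can reach the listeners, and the credentials are placeholders):
# The distributed AG should appear with is_distributed = 1 on the global primary
sqlcmd -S AGLinux-Lis -U sa -Q "SELECT name, is_distributed FROM sys.availability_groups"
# After automatic seeding completes, db2 should exist on the forwarder side
sqlcmd -S AGWindows-Lis -U sa -Q "SELECT name FROM sys.databases WHERE name = 'db2'"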
Use KTPASS instead of addent to configure mssql.keytab
There is a known issue with the krb5 library that is shipped in RHEL (7.3, 7.4, 7.5, or 7.6) and Ubuntu (16.04) where AES keys are incorrectly salted for Windows AD systems. There has been a patch in the krb5 library, but that patched version hasn't shipped in the versions Microsoft supports. The incorrectly salted value (created by the addent command) may cause Windows authentication to fail. Here is a recommended way to create mssql.keytab in case you run into the issue.
Prerequisites
===
The SQL Server Linux server has joined the domain.
1.Create an AD user (or MSA) for SQL Server and set the SPN.
1)AD user. On your domain controller, run the New-ADUser PowerShell command to create a new AD user with a password that never expires. The following example names the account mssql, but the account name can be anything you like. You will be prompted to enter a new password for the account.
Import-Module ActiveDirectory
New-ADUser mssql -AccountPassword (Read-Host -AsSecureString "Enter Password") -PasswordNeverExpires $true -Enabled $true
2)Managed Service Account. On your domain controller, run the New-ADServiceAccount PowerShell command to create a Managed Service Account with a password that never expires. The following example names the account mssql, but the account name can be anything you like. You will be prompted to enter a new password for the account.
Import-Module ActiveDirectory
New-ADServiceAccount -Name mssql -Enabled $true -AccountPassword (Read-Host -AsSecureString "Enter Password") -RestrictToSingleComputer
2.Get the KVNO of the account.
1)Run the following PowerShell commands for an AD user.
Import-Module ActiveDirectory
get-aduser userName -property msDs-KeyVersionNumber
2)Run the following PowerShell commands for an MSA account.
Import-Module ActiveDirectory
get-ADServiceAccount -Identity MsaName -property msDS-KeyVersionNumber
3.Run the following commands on a Windows server in the domain as a domain administrator to create mssql.keytab (this step is applicable to both an AD user and an MSA). KVNO should be replaced with the value from step 2, and <StrongPassword> should be replaced with the password specified in step 1.
ktpass /princ MSSQLSvc/**<fully qualified domain name of host machine>**:**<tcp port>**@**<Full qualified domain name>** /ptype KRB5_NT_PRINCIPAL /crypto aes256-sha1 /mapuser domainName\accountName /out mssql.keytab -setpass -setupn /kvno **<kvno from above>** /pass <StrongPassword>
ktpass /princ MSSQLSvc/**<fully qualified domain name of host machine>**:**<tcp port>**@**<Full qualified domain name>** /ptype KRB5_NT_PRINCIPAL /crypto rc4-hmac-nt /mapuser domainName\accountName /in mssql.keytab /out mssql.keytab -setpass -setupn /kvno **<kvno from above>** /pass <StrongPassword>
ktpass /princ MSSQLSvc/**<netbios name of the host machine>**:**<tcp port>**@**<Full qualified domain name>** /ptype KRB5_NT_PRINCIPAL /crypto aes256-sha1 /mapuser domainName\accountName /in mssql.keytab /out mssql.keytab -setpass -setupn /kvno **<kvno from above>** /pass <StrongPassword>
ktpass /princ MSSQLSvc/**<netbios name of the host machine>**:**<tcp port>**@**<Full qualified domain name>** /ptype KRB5_NT_PRINCIPAL /crypto rc4-hmac-nt /mapuser domainName\accountName /in mssql.keytab /out mssql.keytab -setpass -setupn /kvno **<kvno from above>** /pass <StrongPassword>
ktpass /princ accountName@**<Full qualified domain name>** /ptype KRB5_NT_PRINCIPAL /crypto aes256-sha1 /mapuser domainName\accountName /in mssql.keytab /out mssql.keytab -setpass -setupn /kvno **<kvno from above>** /pass <StrongPassword>
ktpass /princ accountName@**<Full qualified domain name>** /ptype KRB5_NT_PRINCIPAL /crypto rc4-hmac-nt /mapuser domainName\accountName /in mssql.keytab /out mssql.keytab -setpass -setupn /kvno **<kvno from above>** /pass <StrongPassword>
4.Copy the file mssql.keytab to /var/opt/mssql/secrets on the SQL Server Linux server.
5.Use the mssql-conf tool to specify the account to be used while accessing the keytab file.
sudo /opt/mssql/bin/mssql-conf set network.privilegedadaccount accountName
Note: only include the account name, not the domain\account name; the domain name is not necessary.
6.Grant the permissions and register the keytab file.
sudo chown mssql:mssql /var/opt/mssql/secrets/mssql.keytab
sudo chmod 400 /var/opt/mssql/secrets/mssql.keytab
sudo mssql-conf set network.kerberoskeytabfile /var/opt/mssql/secrets/mssql.keytab
sudo systemctl restart mssql-server
Here is an example of ktpass for an account.
Env
===
SQL Linux box: red1
port: 1433
FQDN: red1.CONTOSO.COM
Domain: CONTOSO.COM
Account Name: contoso\user1
Password: Strong@P@ssword1
KVNO: 2
ktpass /princ MSSQLSvc/red1.CONTOSO.COM:1433@CONTOSO.COM /ptype KRB5_NT_PRINCIPAL /crypto aes256-sha1 /mapuser contoso\user1 /out mssql.keytab -setpass -setupn /kvno 2 /pass Strong@P@ssword1
ktpass /princ MSSQLSvc/red1.CONTOSO.COM:1433@CONTOSO.COM /ptype KRB5_NT_PRINCIPAL /crypto rc4-hmac-nt /mapuser contoso\user1 /in mssql.keytab /out mssql.keytab -setpass -setupn /kvno 2 /pass Strong@P@ssword1
ktpass /princ MSSQLSvc/red1:1433@CONTOSO.COM /ptype KRB5_NT_PRINCIPAL /crypto aes256-sha1 /mapuser contoso\user1 /in mssql.keytab /out mssql.keytab -setpass -setupn /kvno 2 /pass Strong@P@ssword1
ktpass /princ MSSQLSvc/red1:1433@CONTOSO.COM /ptype KRB5_NT_PRINCIPAL /crypto rc4-hmac-nt /mapuser contoso\user1 /in mssql.keytab /out mssql.keytab -setpass -setupn /kvno 2 /pass Strong@P@ssword1
ktpass /princ user1@CONTOSO.COM /ptype KRB5_NT_PRINCIPAL /crypto aes256-sha1 /mapuser contoso\user1 /in mssql.keytab /out mssql.keytab -setpass -setupn /kvno 2 /pass Strong@P@ssword1
ktpass /princ user1@CONTOSO.COM /ptype KRB5_NT_PRINCIPAL /crypto rc4-hmac-nt /mapuser contoso\user1 /in mssql.keytab /out mssql.keytab -setpass -setupn /kvno 2 /pass Strong@P@ssword1
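After restarting SQL Server, you can sanity-check the whole chain from a domain-joined client (a sketch; the names follow the example above):
kinit user1@CONTOSO.COM                                  # obtain a Kerberos ticket for a domain user
sqlcmd -S red1.CONTOSO.COM -E -Q "SELECT SUSER_NAME()"   # -E uses integrated (Kerberos) authentication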
Applying SQL Server 2017 CU18 on secondary replica running on Linux OS may fail with script failure
If SQL Server 2017 is running on the Linux operating system and you have databases configured with the Always On feature, then after applying CU18 the script upgrade may fail with an error on a secondary replica.
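General triage for script-upgrade failures of this kind (a hedged sketch of the usual pattern, not specific guidance for this CU; confirm the exact error in the errorlog first, and revert the trace flag once the root cause is addressed):
# Look for the script-upgrade failure in the errorlog
sudo grep -i "script level upgrade" /var/opt/mssql/log/errorlog
# Trace flag 902 bypasses the upgrade scripts so the instance can start for investigation
sudo /opt/mssql/bin/mssql-conf traceflag 902 on
sudo systemctl restart mssql-server
# ...investigate and fix, then remove the flag so the upgrade scripts run again
sudo /opt/mssql/bin/mssql-conf traceflag 902 off
sudo systemctl restart mssql-server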