<h1 id="toc-hId-360259932">Availability Groups in Big Data Clusters. Two or more replicas in the same node?</h1>
<p>Big Data Clusters supports high availability for all of its components; most notably, for the SQL Server master instance it provides an Always On Availability Group out of the box. You can deploy your big data cluster with the HA configuration as explained <a href="https://docs.microsoft.com/en-us/sql/big-data-cluster/deployment-high-availability?view=sql-server-ver15" target="_blank" rel="noopener noreferrer">here</a>.</p>
<p>Once your BDC cluster is deployed, you can check the status of your replicas using kubectl in PowerShell:</p>
<pre>kubectl get pods -n mssql-cluster `
-l MSSQL_CLUSTER=mssql-cluster,app=master,plane=data,role=master-pool,type=sqlservr -o wide</pre>
<p>Recently we have observed a couple of occurrences where two or more AG replicas were running on the same node. In that scenario the command above returns output analogous to this:</p>
<pre>PS C:\Users&gt; kubectl get pods -n mssql-cluster `
-l MSSQL_CLUSTER=mssql-cluster,app=master,plane=data,role=master-pool,type=sqlservr -o wide

master-0   4/4   Running   0   3h29m   xx.xxx.x.xx   aks-agentpool-15065227-vmss0000<strong>03</strong>
master-1   4/4   Running   0   3h28m   xx.xxx.x.xx   aks-agentpool-15065227-vmss0000<strong>03</strong></pre>
<p>In the output above you can see that both master-0 and master-1 are hosted on node aks-agentpool-15065227-vmss000003. The problem is already apparent: if the node hosting both replicas goes down, no failover can occur, and the containers have to be recreated and started on different nodes. This defeats the purpose of a high availability solution.</p>
<p>In this post we will discuss this scenario, its cause, and how to avoid it.</p>
<h2 id="toc-hId-1050821406">How is HA achieved in BDC?</h2>
<p>Big Data Clusters uses Kubernetes itself as the high availability solution. The master instance AG is implemented using a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/" target="_blank" rel="noopener noreferrer">StatefulSet</a> object named <strong>master</strong>. A StatefulSet is very similar to a ReplicaSet, but it additionally enforces a unique, stable identity for each of its pods. Kubernetes will do its best to maintain the desired state of the set at all times; that is, to keep the number of replicas defined for the set running.</p>
<p>If you manually delete one of the pods, you will see that a new one is immediately rescheduled.</p>
<p><img src="https://gxcuf89792.i.lithium.com/t5/image/serverpage/image-id/163951i66A699227FF30D85/image-size/large?v=1.0&amp;px=999" alt="clipboard_image_0.png" /></p>
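The reconciliation just described can be watched directly with kubectl. The following is a sketch, assuming the default mssql-cluster namespace and the app=master label used in the commands above; deleting a replica is disruptive, so try this only on a test cluster.

```shell
# Desired vs. ready replica counts of the master StatefulSet.
kubectl get statefulset master -n mssql-cluster

# Delete one replica, then watch the StatefulSet controller
# immediately recreate it (Ctrl+C to stop watching).
kubectl delete pod master-1 -n mssql-cluster
kubectl get pods -n mssql-cluster -l app=master --watch
```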
<p>Let’s quickly review how a pod is recovered after an event such as deletion:</p>
<ul>
<li>Suppose we have a 4-node cluster and a 3-replica master StatefulSet. Each pod is running on a different node.</li>
</ul>
<p><img src="https://gxcuf89792.i.lithium.com/t5/image/serverpage/image-id/163954i8F85410EDDD2BAEB/image-size/large?v=1.0&amp;px=999" alt="clipboard_image_7.png" /></p>
<ul>
<li>An event occurs that causes one of the pods to terminate.</li>
</ul>
<p><img src="https://gxcuf89792.i.lithium.com/t5/image/serverpage/image-id/163956iCE0F2877D89F6CA1/image-size/large?v=1.0&amp;px=999" alt="clipboard_image_8.png" /></p>
<ul>
<li>The kube-controller-manager process constantly checks the state of controller objects against the API server. For the master StatefulSet in our example, if the number of running pods drops below 3, it requests the creation of additional pods to satisfy the desired state.</li>
<li>The <a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kube-apiserver/" target="_blank" rel="noopener noreferrer">Kubernetes API server</a> recreates the pod in its internal database. Saving the metadata does not mean that the pod is immediately scheduled onto a node; while only its metadata exists in the internal database, the pod remains in the “Pending” state.</li>
</ul>
<p><img src="https://gxcuf89792.i.lithium.com/t5/image/serverpage/image-id/163957iA0B0EC5D81D9892C/image-size/large?v=1.0&amp;px=999" alt="clipboard_image_9.png" /></p>
<ul>
<li>The <a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kube-scheduler/" target="_blank" rel="noopener noreferrer">scheduler process</a> watches for pods in the Pending state and decides where each one should be placed. Once it has made its decision, it asks the API server to bind the pod to the chosen node. The kubelet process running on that node is then notified and starts the pod.</li>
</ul>
<p><img src="https://gxcuf89792.i.lithium.com/t5/image/serverpage/image-id/163963iC9E99F7495E2A954/image-size/large?v=1.0&amp;px=999" alt="clipboard_image_10.png" /></p>
<p>Now, coming back to our Availability Group discussion, two events in the list above interest us the most:</p>
<ul>
<li><strong>Pod termination</strong>: why was the pod removed from its node in the first place?</li>
<li><strong>Pod scheduling</strong>: how does the scheduler decide where to place the pod? Why can two pods from the same AG end up on the same node?</li>
</ul>
<p>Regarding pod termination, the possibilities are manual deletion, container process termination, pod eviction, and node failure. A manual deletion should be unlikely in a production environment. If the SQL Server process unexpectedly terminates, or even if a shutdown is issued, the pod terminates. The kubelet process proactively monitors resource usage on each node and can <a href="https://kubernetes.io/docs/tasks/administer-cluster/out-of-resource/#eviction-policy" target="_blank" rel="noopener noreferrer">terminate pods to reclaim resources</a> when the node is starved. Finally, a node shutdown obviously causes all of its workload to fail.</p>
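Each of these termination causes leaves evidence behind that kubectl can surface. A hedged sketch of where to look, reusing the pod and node names from the examples in this post:

```shell
# Why did the container stop? "Last State" shows reasons such as
# OOMKilled or Error, and the Events section records evictions and restarts.
kubectl describe pod master-2 -n mssql-cluster

# Is the node itself starved? Conditions such as MemoryPressure and
# DiskPressure indicate that the kubelet may start evicting pods.
kubectl describe node aks-agentpool-15065227-vmss000001
```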
<p>Scheduling is performed by default by the <a href="https://kubernetes.io/docs/concepts/scheduling/kube-scheduler/#kube-scheduler" target="_blank" rel="noopener noreferrer">kube-scheduler</a> program (although a custom scheduler can be implemented). For every pending pod, the scheduler finds the “best” available node and schedules the pod onto it. Which node is best is determined in two phases:</p>
<ul>
<li><a href="https://kubernetes.io/docs/concepts/scheduling/kube-scheduler/#filtering" target="_blank" rel="noopener noreferrer">Filtering</a>: the scheduler filters out nodes that are not feasible for the pod. This includes checking that the node is in the Ready state, has enough resources to fit the pod, is not under resource pressure, and satisfies node selectors and taints, among other conditions. See the complete list <a href="https://kubernetes.io/docs/concepts/scheduling/kube-scheduler/#filtering" target="_blank" rel="noopener noreferrer">here</a>.</li>
<li><a href="https://kubernetes.io/docs/concepts/scheduling/kube-scheduler/#scoring" target="_blank" rel="noopener noreferrer">Scoring</a>: the scheduler ranks the feasible nodes based on a set of policies listed <a href="https://kubernetes.io/docs/concepts/scheduling/kube-scheduler/#scoring" target="_blank" rel="noopener noreferrer">here</a> and chooses the highest-ranked one. One policy that is very relevant for an AG is “SelectorSpreadPriority”, which attempts to spread pods from the same StatefulSet across hosts.</li>
</ul>
<p>Having discussed the termination and scheduling events, let’s see an example of how this issue can be reproduced.</p>
<h2 id="toc-hId--756633057">Example</h2>
<p>Let’s observe the following cluster with 4 nodes and 4 master instance replicas.</p>
<pre>PS C:\Users&gt; kubectl get pods -n mssql-cluster `
-l MSSQL_CLUSTER=mssql-cluster,app=master,plane=data,role=master-pool,type=sqlservr -o wide

NAME       READY   STATUS    RESTARTS   AGE     IP            NODE                                NOMINATED NODE
master-0   4/4     Running   0          7d2h    xx.xxx.x.xx   aks-agentpool-15065227-vmss000003   &lt;none&gt;
master-1   4/4     Running   0          7d2h    xx.xxx.x.xx   aks-agentpool-15065227-vmss000002   &lt;none&gt;
master-2   4/4     Running   0          7d2h    xx.xxx.x.xx   aks-agentpool-15065227-vmss000001   &lt;none&gt;
master-3   4/4     Running   0          3h37m   xx.xxx.x.xx   aks-agentpool-15065227-vmss000000   &lt;none&gt;</pre>
<p>Currently the replicas are spread across the nodes as expected. Let’s cause a pod termination by deleting a pod.</p>
<pre>PS C:\Users&gt; kubectl delete pod master-2 -n mssql-cluster
pod "master-2" deleted</pre>
<p>Immediately afterwards we check the status of the pods:</p>
<pre>PS C:\Users&gt; kubectl get pods -n mssql-cluster -l MSSQL_CLUSTER=mssql-cluster,app=master,plane=data,role=master-pool,type=sqlservr -o wide

NAME       READY   STATUS              RESTARTS   AGE     IP            NODE
master-0   4/4     Running             0          7d2h    xx.xxx.x.xx   aks-agentpool-15065227-vmss0000<strong>03</strong>
master-1   4/4     Running             0          7d2h    xx.xxx.x.xx   aks-agentpool-15065227-vmss0000<strong>02</strong>
master-2   0/4     ContainerCreating   0          5s      &lt;none&gt;        aks-agentpool-15065227-vmss0000<strong>01</strong>
master-3   4/4     Running             0          3h43m   xx.xxx.x.xx   aks-agentpool-15065227-vmss0000<strong>00</strong></pre>
<p>The pod was rescheduled onto node 01 again, and this is the behavior we want. Node 01 was a feasible node, and the “SelectorSpreadPriority” policy caused it to rank highest and become the preferred node.</p>
<p>Now let’s repeat the exercise, but make node 01 infeasible by putting a <a href="https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/" target="_blank" rel="noopener noreferrer">taint</a> on it. A taint places a restriction on a node so that only pods that explicitly tolerate the taint can be scheduled on it.</p>
<p>We taint node 01 with “lactose”.</p>
<pre>PS C:\Users&gt; kubectl taint node aks-agentpool-15065227-vmss000001 lactose=true:NoSchedule
node/aks-agentpool-15065227-vmss000001 tainted</pre>
<p>We delete pod master-2.</p>
<pre>PS C:\Users&gt; kubectl delete pod master-2 -n mssql-cluster
pod "master-2" deleted</pre>
<p>We check the pods again:</p>
<pre>PS C:\Users&gt; kubectl get pods -n mssql-cluster `
-l MSSQL_CLUSTER=mssql-cluster,app=master,plane=data,role=master-pool,type=sqlservr -o wide

NAME       READY   STATUS              RESTARTS   AGE    IP            NODE                                NOMINATED NODE
master-0   4/4     Running             0          7d2h   xx.xxx.x.xx   aks-agentpool-15065227-vmss000003   &lt;none&gt;
master-1   4/4     Running             0          7d2h   xx.xxx.x.xx   aks-agentpool-15065227-vmss000002   &lt;none&gt;
master-2   0/4     ContainerCreating   0          9s     &lt;none&gt;        aks-agentpool-15065227-vmss0000<strong>00</strong>   &lt;none&gt;
master-3   4/4     Running             0          4h2m   xx.xxx.x.xx   aks-agentpool-15065227-vmss0000<strong>00</strong>   &lt;none&gt;</pre>
<p>Now we can observe that the pod was scheduled on node 00, because node 01 is tainted with “lactose=true” and master-2 does not tolerate lactose; master-2 and master-3 now share node 00. It will also take longer for this pod to start on the newly assigned node, because its persistent volume claims need to be reattached to the new host.</p>
<p>Notice that if we remove the taint, the pods remain where they are; no “rebalance” is performed unless a pod is explicitly deleted again.</p>
<h2 id="toc-hId-1730879776">Generalized Scenario</h2>
<p>We can now generalize this scenario and assert that the problem arises when:</p>
<ul>
<li>A pod belonging to the availability group is terminated, and</li>
<li>the scheduler cannot “spread” the pod, either because it finds no other feasible nodes or because <em>SelectorSpreadPriority</em> is at odds with other scoring priorities.</li>
</ul>
<p>Pod eviction due to memory starvation is likely to cause this problem if there are no other feasible nodes available. After the pod eviction th</p>
e%20node%20will%20not%20be%20selected%20to%20reschedule%2C%20since%20the%20%3CA%20href%3D%22https%3A%2F%2Fkubernetes.io%2Fdocs%2Ftasks%2Fadminister-cluster%2Fout-of-resource%2F%23scheduler%22%20target%3D%22_blank%22%20rel%3D%22noopener%20nofollow%20noopener%20noreferrer%20noopener%20noreferrer%20noopener%20noreferrer%22%3EMemoryPressure%3C%2FA%3E%20condition%20will%20be%20active%20and%20this%20will%20filter%20out%20the%20host.%20%26nbsp%3B%3C%2FP%3E%0A%3CP%3EThe%20scheduler%20most%20schedule%20the%20pod%20in%20another%20node%20because%20the%20kubernetes%20priority%20is%20to%20preserve%20desired%20state%20(number%20of%20pods).%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CH2%20id%3D%22toc-hId--76574687%22%20id%3D%22toc-hId--76574687%22%20id%3D%22toc-hId--76574687%22%3EHow%20to%20avoid%20this%20scenario%3F%3C%2FH2%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%0A%3CP%3EThis%20most%20likely%20happen%20in%20clusters%20with%20a%20very%20small%20number%20of%20nodes%20available.%3C%2FP%3E%0A%3CP%3EIt%20is%20important%20to%20plan%20and%20reserve%20specific%20sets%20of%20pods%20with%20the%20appropriate%20resources%20assigned%20to%20serve%20the%20different%20planes%20for%20the%20Big%20Data%20cluster.%3C%2FP%3E%0A%3CP%3EBefore%20deployment%20you%20can%20label%20your%20nodes%20and%20group%20them%20according%20to%20each%20role.%20During%20the%20configuration%20phase%20of%20the%20BDC%20you%20can%20then%20patch%20your%20configuration%20to%20use%20node%20selectors%20for%20the%20different%20statefulsets%20as%20described%20%3CA%20href%3D%22https%3A%2F%2Fdocs.microsoft.com%2Fen-us%2Fsql%2Fbig-data-cluster%2Fdeployment-custom-configuration%3Fview%3Dsql-server-ver15%23podplacement%22%20target%3D%22_blank%22%20rel%3D%22noopener%20noopener%20noreferrer%20noopener%20noreferrer%20noopener%20noreferrer%22%3Ehere%3C%2FA%3E.%3C%2FP%3E%0A%3CDIV%20id%3D%22tinyMceEditorclipboard_image_5%22%20class%3D%22mceNonEditable%20lia-copypaste-placeholder%22%3E%26nbsp%3B%3C%2FDIV%3E%0A%3CP%3E%3CSPAN%20class%3D%22lia-inlin
e-image-display-wrapper%20lia-image-align-inline%22%20style%3D%22width%3A%20999px%3B%22%3E%3CIMG%20src%3D%22https%3A%2F%2Fgxcuf89792.i.lithium.com%2Ft5%2Fimage%2Fserverpage%2Fimage-id%2F163972i50293803A515BEDA%2Fimage-size%2Flarge%3Fv%3D1.0%26amp%3Bpx%3D999%22%20alt%3D%22clipboard_image_11.png%22%20title%3D%22clipboard_image_11.png%22%20%2F%3E%3C%2FSPAN%3E%3C%2FP%3E%0A%3CP%3EWe%20will%20further%20discuss%20the%20deployment%20planning%20in%20another%20post.%3C%2FP%3E%0A%3CP%3E%26nbsp%3B%3C%2FP%3E%3C%2FLINGO-BODY%3E%3CLINGO-TEASER%20id%3D%22lingo-teaser-1093356%22%20slang%3D%22en-US%22%3E%3CP%3ERecently%20we%20have%20observed%20a%20couple%20of%20occurrences%20where%202%20or%20more%20AG%20replicas%20in%20a%20Big%20Data%20Cluster%20deployment%20are%20being%20executed%20within%20the%20same%20node.%3C%2FP%3E%0A%3CP%3EIn%20this%20post%20we%20will%20discuss%20this%20scenario%2C%20its%20cause%20and%20how%20can%20we%20avoid%20it.%3C%2FP%3E%3C%2FLINGO-TEASER%3E%3CLINGO-LABS%20id%3D%22lingo-labs-1093356%22%20slang%3D%22en-US%22%3E%3CLINGO-LABEL%3Eag%3C%2FLINGO-LABEL%3E%3CLINGO-LABEL%3Eavailability%20groups%3C%2FLINGO-LABEL%3E%3CLINGO-LABEL%3Ebdc%3C%2FLINGO-LABEL%3E%3CLINGO-LABEL%3EBig%20Data%20Clusters%3C%2FLINGO-LABEL%3E%3CLINGO-LABEL%3Eha%3C%2FLINGO-LABEL%3E%3CLINGO-LABEL%3EKubernetes%3C%2FLINGO-LABEL%3E%3CLINGO-LABEL%3ESQL%202019%3C%2FLINGO-LABEL%3E%3C%2FLINGO-LABS%3E%3CLINGO-SUB%20id%3D%22lingo-sub-1093709%22%20slang%3D%22en-US%22%3ERe%3A%20Availability%20Groups%20in%20Big%20Data%20Clusters.%20Two%20or%20more%20replicas%20in%20the%20same%20node%3F%3C%2FLINGO-SUB%3E%3CLINGO-BODY%20id%3D%22lingo-body-1093709%22%20slang%3D%22en-US%22%3E%3CP%3EThank%20you%20Fernando%20for%20Sharing%20this%20Blogpost%20with%20the%20Community%26nbsp%3B%3CIMG%20class%3D%22lia-deferred-image%20lia-image-emoji%22%20src%3D%22https%3A%2F%2Fgxcuf89792.i.lithium.com%2Fhtml%2Fimages%2Femoticons%2Fcool_40x40.gif%22%20alt%3D%22%3Acool%3A%22%20title%3D%22%3Acool%3A%22%20%2F%3E%3C%2FP%3E%3C%2FLINGO-BODY%3E
Microsoft

Availability Groups in Big Data Clusters. Two or more replicas in the same node?

 

Big Data Clusters supports high availability for all of its components; most notably, for the SQL Server master instance it provides an Always On Availability Group out of the box. You can deploy your big data cluster using the HA configurations explained here.

Once your BDC cluster is deployed, you can check the status of your replicas using kubectl in PowerShell.

 

kubectl get pods -n mssql-cluster -l MSSQL_CLUSTER=mssql-cluster,app=master,plane=data,role=master-pool,type=sqlservr -o wide

 

Recently we have observed a couple of occurrences where two or more AG replicas end up running on the same node. In such a scenario, the command above would produce output analogous to this:

 

 

PS C:\Users> kubectl get pods -n mssql-cluster `
-l MSSQL_CLUSTER=mssql-cluster,app=master,plane=data,role=master-pool,type=sqlservr -o wide

 

NAME       READY   STATUS    RESTARTS   AGE     IP            NODE
master-0   4/4     Running   0          3h29m   xx.xxx.x.xx   aks-agentpool-15065227-vmss000003
master-1   4/4     Running   0          3h28m   xx.xxx.x.xx   aks-agentpool-15065227-vmss000003

 

 

In the above output you can observe that both master-0 and master-1 are hosted on node aks-agentpool-15065227-vmss000003. The problem should already be apparent from this example: if the node hosting both replicas goes down, no failover can occur, and the containers will have to be recreated and started on different nodes. This defeats the purpose of a high availability solution.
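A quick way to spot this condition is to count replicas per node. The following is a minimal sketch that feeds the sample output above through awk; on a live cluster you would pipe the output of `kubectl get pods ... -o wide --no-headers` instead:

```shell
# Count master replicas per node; the NODE column is field 7 in
# `kubectl get pods -o wide --no-headers` output. Any node printed
# below hosts two or more replicas of the availability group.
awk '{print $7}' <<'EOF' | sort | uniq -c | awk '$1 > 1 {print $2}'
master-0   4/4     Running   0    3h29m   xx.xxx.x.xx   aks-agentpool-15065227-vmss000003
master-1   4/4     Running   0    3h28m   xx.xxx.x.xx   aks-agentpool-15065227-vmss000003
EOF
```

With the sample data this prints aks-agentpool-15065227-vmss000003; an empty output means the replicas are properly spread.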

In this post we will discuss this scenario, its cause, and how we can avoid it.

 

How is HA achieved in BDC?

 

Big Data Clusters uses Kubernetes itself as the high availability solution. The master instance AG is implemented using a StatefulSet object named master. A StatefulSet is very similar to a ReplicaSet, but it additionally guarantees a stable, unique identity for each pod. Kubernetes will do its best to ensure that the desired state for the set is maintained at all times; that is, that the number of replicas defined for the set is running.

If you manually delete one of the pods you will see that a new one is immediately rescheduled.

 


 

Let’s quickly review the process of recovering a pod after an event such as a deletion:

 

 

  • Let’s suppose that we have a 4-node cluster and a 3-replica master StatefulSet. Each pod is running on a different node.
 


 

 

  • An event that causes the pod to terminate occurs.


 

 

  • The kube-controller-manager process constantly checks with the API server for the state of controller objects. For the master StatefulSet in our example, if the number of running pods is less than 3, it will request the creation of additional pods to satisfy the desired state.

 

  • The Kubernetes API server creates the new pod object in its internal database. But saving the metadata does not mean that the pod is immediately scheduled onto a node. While only the metadata for the pod exists in the k8s internal database, the pod remains in the “Pending” state.

 


 

 

 

 

  • The scheduler process checks for pods in the Pending state and decides where each one should be placed. Once the scheduler has made its decision, it requests the API server to bind the pod to the chosen node. The kubelet process running on that node is then notified and starts the pod.
 


 

Now, coming back to our Availability Group discussion, two events in the above list interest us the most:

 

  • Pod termination: Why was the pod removed from the node in the first place?

 

  • Pod scheduling: How does the scheduler decide where to place a pod? Why can two pods from the same AG be placed on the same node?

 

Regarding pod termination, the possibilities are manual deletion, container process termination, pod eviction, and node failure. A manual deletion should be unlikely in a production environment. If the SQL Server process terminates unexpectedly, or even if a shutdown is issued, the pod will terminate. The kubelet process proactively monitors resource usage on a node and can terminate pods to reclaim resources if the node is being starved. Finally, a node shutdown will obviously cause all of its workload to fail.

 

Scheduling is performed by default by the kube-scheduler program (although a custom scheduler can be implemented). For every pending pod, the scheduler finds the “best” available node and schedules the pod onto it. The scheduler determines which node is best in two phases:

 

  • Filtering: the scheduler filters out nodes that are not feasible to assign the pod to. This includes ensuring that the node is in the Ready state, has enough resources to fit the pod, and is not under resource pressure, as well as checking node selectors and taints, among others. See the complete list here.

 

  • Scoring: the scheduler ranks the feasible nodes based on a set of policies listed here and chooses the best-ranked one. One policy that is very relevant for an AG is the “Selector Spread Priority”, which attempts to spread pods from the same StatefulSet across hosts.
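To make the two phases concrete, here is a toy sketch with hypothetical node data (this is not real scheduler code): filtering drops tainted nodes, and a simplistic spread score then prefers the feasible node hosting the fewest master replicas:

```shell
# Columns: node  tainted  existing_master_replicas (hypothetical data).
# Phase 1 (filtering): drop tainted nodes.
# Phase 2 (scoring): pick the feasible node with the fewest replicas,
# a crude stand-in for the Selector Spread Priority.
awk '$2 == "no" {print $3, $1}' <<'EOF' | sort -n | head -1 | awk '{print $2}'
vmss000000 no  1
vmss000001 yes 0
vmss000002 no  0
EOF
```

vmss000001 would have been the emptiest node, but filtering removes it first, so vmss000002 is chosen; filtering always runs before scoring, which is the ordering of phases behind the behavior discussed in this post.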

 

Having discussed the termination and scheduling events, let’s see an example of how this issue can be reproduced.

 

Example

 

Let’s observe the following cluster with 4 nodes and 4 master instance replicas.

 

 

PS C:\Users> kubectl get pods -n mssql-cluster `
-l MSSQL_CLUSTER=mssql-cluster,app=master,plane=data,role=master-pool,type=sqlservr -o wide


NAME       READY   STATUS    RESTARTS   AGE     IP            NODE                                NOMINATED NODE  
master-0   4/4     Running   0          7d2h    xx.xxx.x.xx   aks-agentpool-15065227-vmss000003   <none>          
master-1   4/4     Running   0          7d2h    xx.xxx.x.xx   aks-agentpool-15065227-vmss000002   <none>          
master-2   4/4     Running   0          7d2h    xx.xxx.x.xx   aks-agentpool-15065227-vmss000001   <none>          
master-3   4/4     Running   0          3h37m   xx.xxx.x.xx   aks-agentpool-15065227-vmss000000   <none>       

 

 

 

Currently the replicas are spread across the nodes as expected. Let’s cause a pod termination by deleting a pod.

 

 

PS C:\Users> kubectl delete pod master-2 -n mssql-cluster
pod "master-2" deleted

 

 

 

Immediately afterwards we check the status of the pods:

 

PS C:\Users> kubectl get pods -n mssql-cluster -l MSSQL_CLUSTER=mssql-cluster,app=master,plane=data,role=master-pool,type=sqlservr -o wide


NAME       READY   STATUS              RESTARTS   AGE     IP            NODE                                
master-0   4/4     Running             0          7d2h    xx.xxx.x.xx   aks-agentpool-15065227-vmss000003      
master-1   4/4     Running             0          7d2h    xx.xxx.x.xx   aks-agentpool-15065227-vmss000002   
master-2   0/4     ContainerCreating   0          5s      <none>        aks-agentpool-15065227-vmss000001   
master-3   4/4     Running             0          3h43m   xx.xxx.x.xx   aks-agentpool-15065227-vmss000000   

  

 

 

 

The pod was rescheduled onto node 01 again, and this is the behavior we want. Node 01 was a feasible node, and the “Selector Spread Priority” policy caused it to rank highest and be the preferred node.

 

Now let’s repeat the exercise, but this time making node 01 unfeasible by placing a taint on it. A taint places a restriction on a node so that only pods that explicitly tolerate the taint can be scheduled there.

We taint node 01 with “lactose”.

 

PS C:\Users> kubectl taint node aks-agentpool-15065227-vmss000001 lactose=true:NoSchedule
node/aks-agentpool-15065227-vmss000001 tainted

 

We delete pod master-2.

 

 

PS C:\Users> kubectl delete pod master-2 -n mssql-cluster
pod "master-2" deleted

 

 

 

We check the pods again:

 

 

PS C:\Users> kubectl get pods -n mssql-cluster `
-l MSSQL_CLUSTER=mssql-cluster,app=master,plane=data,role=master-pool,type=sqlservr -o wide


NAME       READY   STATUS              RESTARTS   AGE    IP            NODE                                NOMINATED NODE  
master-0   4/4     Running             0          7d2h   xx.xxx.x.xx   aks-agentpool-15065227-vmss000003   <none>          
master-1   4/4     Running             0          7d2h   xx.xxx.x.xx   aks-agentpool-15065227-vmss000002   <none>          
master-2   0/4     ContainerCreating   0          9s     <none>        aks-agentpool-15065227-vmss000000   <none>          
master-3   4/4     Running             0          4h2m   xx.xxx.x.xx   aks-agentpool-15065227-vmss000000   <none>        

 

 

 

Now we can observe that the pod was scheduled on node 00, due to node 01 being tainted with “lactose=true” and master-2 not being tolerant to lactose. It will also take longer for this pod to start on the newly assigned node, because the persistent volume claims need to be reattached to the new host.

Notice that if we remove the taint, the pods will remain where they are; no “rebalance” is performed unless a pod is explicitly deleted again.

 

Generalized Scenario

 

We can now generalize this scenario and assert that this problem will arise when:

 

  • A pod belonging to the availability group is terminated.

 

  • The scheduler cannot “spread” the pod, either because it does not find feasible nodes or because the SelectorSpreadPriority is at odds with other scoring priorities.

 

Pod eviction due to memory starvation is likely to cause this problem if there are no other nodes available. After the pod eviction, the node will not be selected for rescheduling, since the MemoryPressure condition will be active and will filter out that host.

The scheduler must then schedule the pod on another node, because the Kubernetes priority is to preserve the desired state (number of pods).
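The filtering effect of that condition can be illustrated with a small sketch; the node data below is hypothetical, and on a live cluster you would check `kubectl describe node <name> | grep MemoryPressure` instead:

```shell
# Columns: node  MemoryPressure-status (hypothetical sample data).
# Nodes printed here report MemoryPressure=True and would be
# filtered out by the scheduler when placing the evicted pod.
awk '$2 == "True" {print $1}' <<'EOF'
aks-agentpool-15065227-vmss000000 False
aks-agentpool-15065227-vmss000001 True
aks-agentpool-15065227-vmss000002 False
EOF
```

In this sample, aks-agentpool-15065227-vmss000001 is excluded, so the evicted pod must land on one of the other nodes, possibly alongside another replica of the same AG.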

 

How to avoid this scenario?

 

This is most likely to happen in clusters with a very small number of available nodes.

It is important to plan and reserve specific sets of nodes, with the appropriate resources assigned, to serve the different planes of the Big Data cluster.

Before deployment you can label your nodes and group them according to each role. During the configuration phase of the BDC, you can then patch your configuration to use node selectors for the different StatefulSets as described here.
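As a rough sketch, a configuration patch pinning the master pool to labeled nodes could look like the following; the resource path and the bdc-master label value are illustrative assumptions, so consult the pod placement documentation linked above for the exact format expected by azdata bdc config patch:

```json
{
  "patch": [
    {
      "op": "add",
      "path": "spec.resources.master.spec.nodeLabel",
      "value": "bdc-master"
    }
  ]
}
```

You would first label the target nodes with a matching label, for example (hypothetical key and value) kubectl label node aks-agentpool-15065227-vmss000000 mssql-resource=bdc-master; the label key BDC actually matches on is described in the same article.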

 


We will further discuss the deployment planning in another post.

 
