This post covers changes to the Cloud Management Gateway (CMG) infrastructure in the internal Microsoft environment and describes actions we’ve taken to handle shifts in management traffic due to the increased number of remote workers in the past few weeks.
Like many other ConfigMgr customers, we rely on CMGs hosted in Azure to route client traffic from remote devices through to our ConfigMgr servers even when an explicit VPN connection is not established. This enables our team to maintain client communication and ensure high deployment success rates for devices not on the corporate network.
Our previous implementation mimicked the layout of regional Primary Sites in our ConfigMgr hierarchy, where each of the North America (3), Europe (1), and Asia (1) sites had a CMG associated with it. Aided by explicit latency testing done by our dev teams, we have been on a path to reduce the number of CMGs and shift to the recommended centralized approach, with just two CMG instances in the West US 2 and Central US Azure regions. As CMG selection is random for Internet clients, we follow the model of placing our CMGs in the regions with our largest client populations. Thus far, we have not observed any adverse impacts, and this also lets us take additional steps toward a zero trust architecture with less distributed infrastructure.
Even as current events shift our tally of remote clients, we rely on Azure's scaling ability to adjust the number of CMG VM instances as needed. Our decisions here are based on the published scale numbers for the CMG.
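As a rough illustration of that math, using the published figure of 6,000 simultaneous client connections per CMG VM instance (verify against current documentation for your build) and a hypothetical remote client count:

# Estimate CMG VM instances needed for a given client population.
$connectionsPerInstance = 6000      # published per-instance figure; confirm against current docs
$estimatedRemoteClients = 50000     # hypothetical remote client population
$instances = [math]::Ceiling($estimatedRemoteClients / $connectionsPerInstance)
"Estimated CMG VM instances required: $instances"

The instance count itself is then adjusted on the CMG's properties as needed.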
We leverage visualizations in Power BI to track CMG connections.
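The specifics of our dataset are internal, but as a minimal sketch of the kind of query that can feed such a report, assuming the documented v_CombinedDeviceResources view and placeholder server/database names:

# Count online clients by connection type; server and database names are placeholders.
Invoke-Sqlcmd -ServerInstance "CMSQL01" -Database "CM_ABC" -Query @"
SELECT CNIsOnInternet AS IsOnInternet, COUNT(*) AS Clients
FROM dbo.v_CombinedDeviceResources
WHERE CNIsOnline = 1
GROUP BY CNIsOnInternet
"@

CNIsOnInternet reflects whether the client's last communication came in over the Internet, which in our configuration means via the CMG.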
Windows Update for Business has been an undoubted lifesaver in current times as our team has not had to make any changes to the way we deliver updates to our workstation devices. Our colleague Rob York has described at length the various options available to tweak content flow for updates in previous blog posts. For the purposes of this post, we are broadly defining “management traffic” as the communication between ConfigMgr clients and core Site Systems like the Management Point and Software Update Point.
Most devices in the internal Microsoft environment use an auto-on, split-tunnel VPN configuration. For those interested in additional details on this, our colleagues have documented the current configuration in this post. This does mean, however, that when the VPN connection is established, the ConfigMgr client on remote devices is able to enumerate its local AD and assigned Management Point and assumes a “Currently Intranet” connection mode (a quick way to check this on a given client is shown below). To reduce any potential impact to our VPN gateways, we have shaped our ConfigMgr client traffic in specific regions to prefer the CMG for communication with internal Site Systems via two different methods.
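For anyone wanting to confirm which mode a given client has settled on, the client exposes this through the ClientInfo class in its root\ccm WMI namespace; a quick check (class and property names as commonly documented) is:

# Returns True when the client considers itself on the Internet (talking to the CMG),
# False when it assumes "Currently Intranet"; run elevated on the client.
(Get-CimInstance -Namespace root\ccm -ClassName ClientInfo).InInternet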
The first method, covered by Rob in the previous post, is to associate the CMG as a Site System within a Boundary Group, allowing the client to leverage the CMG for Management Point communication. Note that for this to function correctly, the hierarchy setting “Clients prefer to use management points specified in boundary groups” must be enabled. This option, introduced in build 1802, allows clients to prefer the Management Points associated with their current boundary group before considering any others. Without it, adding the CMG to the Site System list in the Boundary Group affects only content download scenarios (à la Cloud DP).
Additionally, having a combination of MPs and CMGs in the Site System list and enabling “Prefer cloud based sources over on-premise sources” will not cause the client to prefer the CMG for MP communication; that setting only controls content downloads, where a preference can be set for downloading content through the CMG rather than from a local DP.
So, in regions where we want clients to prefer the CMG for MP communication, we remove MPs as Site Systems from specific Boundary Groups to enable the desired behavior. An additional caveat is that in this configuration, though clients prefer the CMG for MP communication, they will not use the CMG for Software Update Point (WSUS) communication.
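For reference, this boundary group change can also be scripted via the ConfigurationManager PowerShell module. A minimal sketch, assuming hypothetical server and boundary group names (and noting that parameter names can vary across module versions):

# Run from a ConfigMgr console PowerShell session (site drive loaded).
# Swap the on-premises MP out of the boundary group and add the CMG,
# so clients in that group prefer the CMG for MP communication.
Set-CMBoundaryGroup -Name "Remote Workers - EMEA" `
    -AddSiteSystemServerName "CONTOSOCMG.CLOUDAPP.NET" `
    -RemoveSiteSystemServerName "MP01.CONTOSO.COM"

The hierarchy setting mentioned above still needs to be enabled for clients to act on this.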
The second route involves explicitly configuring ConfigMgr clients as “AlwaysInternet” in their communication, also mentioned in previous blog posts. In regions where isolating or determining the exact VPN ranges is challenging, this option allows us to force clients to always leverage the CMG for both MP and SUP communication. While this means we do not have to account for existing boundary group configuration, we do need to be careful to enable it for the right remote worker population and to revert the setting as remote workers eventually return to office locations. A targeted Configuration Item can be leveraged to enable AlwaysInternet mode, since checks like detecting a Tablet/Laptop chassis, the presence of VPN, and so on can be employed to ensure correct targeting; a sketch of such checks follows.
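As a simplified illustration of those targeting checks (the chassis values follow common SMBIOS conventions, and the VPN adapter check is an assumption about your VPN client rather than our exact detection logic), a CI detection script could look something like:

# Chassis types 9, 10, and 14 are common laptop/notebook values (SMBIOS).
$laptopChassis = 9, 10, 14
$chassis = (Get-CimInstance -ClassName Win32_SystemEnclosure).ChassisTypes
$isLaptop = @($chassis | Where-Object { $laptopChassis -contains $_ }).Count -gt 0

# Hypothetical VPN check: looks for any adapter whose description mentions VPN.
$hasVpn = @(Get-NetAdapter -IncludeHidden -ErrorAction SilentlyContinue |
    Where-Object { $_.InterfaceDescription -match 'VPN' }).Count -gt 0

# The CI's compliance rule would evaluate this returned string.
if ($isLaptop -and $hasVpn) { 'Targeted' } else { 'NotTargeted' }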
Outside of any targeting checks, a CI would need to set the ClientAlwaysOnInternet registry key, and since we may not want to restart the ConfigMgr Client Service (CCMExec) during evaluation, we can force the client to become aware of the change by kicking off a Default MP refresh cycle. High-level PowerShell actions for this could be:
$RegKey ="HKLM:\SOFTWARE\Microsoft\CCM\Security"
Set-ItemProperty -Path $RegKey -Name ClientAlwaysOnInternet -Value 1
Invoke-WmiMethod -Namespace root\ccm -Class sms_client -Name TriggerSchedule "{00000000-0000-0000-0000-000000000023}"
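When remote workers do return to office locations, the same mechanics apply in reverse: set ClientAlwaysOnInternet back to 0 and kick off the same Default MP refresh so the client re-evaluates its connection type.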
An additional tip: the in-console dashboards support toggling these views as well.
We hope you find this information useful in your own CMG implementations. If not, do what we do for all things in life: blame @SQLBenjamin