Software Update Points in Configuration Manager Service Pack 1
First published on CloudBlogs on March 27, 2013

In the Service Pack 1 release of System Center 2012 Configuration Manager, we’ve made some significant changes to software update points, and I want to talk about those here. So what’s changed with software update points in Service Pack 1? First, that whole active software update point, or single logical software update point per primary site, limitation is gone. Add multiple software update points per primary site as you need them, and we’ll also take care of fault tolerance for you. Want to add a software update point in an untrusted forest? Not a problem, go right ahead. Security team got you down because they won’t let your top-level software update point communicate with the Internet to get to the Windows Update catalog? No worries, just point your top-level software update point to source the update catalog from an internal WSUS server. Here, we’ll walk through these changes in detail.

One other thing to note is that some of the changes made to software update points also affect how you should think about using Group Policy to set the WSUS server when using WSUS for the initial deployment of the Configuration Manager client. The answer is Group Policy Preferences, and that’s covered in depth in a separate blog that you can find here.
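For reference, a Group Policy Preference for the WSUS server ultimately just writes the Windows Update policy registry values on the client. Here’s a minimal PowerShell sketch of the values involved; the server URL is a placeholder:

    # Illustrative only: these are the policy registry values a Group Policy
    # Preference for the WSUS server populates. The server URL is a placeholder.
    $wuKey = 'HKLM:\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate'
    New-Item -Path $wuKey -Force | Out-Null
    Set-ItemProperty -Path $wuKey -Name WUServer -Value 'http://wsus01.contoso.com:8530'
    Set-ItemProperty -Path $wuKey -Name WUStatusServer -Value 'http://wsus01.contoso.com:8530'
    # Tell the Windows Update Agent to use the intranet update service above.
    $auKey = Join-Path $wuKey 'AU'
    New-Item -Path $auKey -Force | Out-Null
    New-ItemProperty -Path $auKey -Name UseWUServer -Value 1 -PropertyType DWord -Force | Out-Null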

Software Update Points with Failover

First let’s talk about the changes that now allow for multiple software update points per primary site, which provides fault tolerance without the complexity of NLB. That said, I want to be clear: software update point failover is designed for fault tolerance, not pure load balancing, and it is not as robust as NLB in that respect.

Also, software update point failover is necessarily designed differently from the pure randomization model used for management points. Unlike switching management points, switching software update points has client and network performance costs. This is due to the underlying WSUS architecture on which software update points are built: switching WSUS servers between scans increases the scan catalog size. For this reason, we try to preserve affinity with the last software update point the client successfully scanned against whenever possible, to mitigate this catalog tax and its network and client-side performance implications.

From a configuration standpoint, the first thing we recommend when setting up WSUS for use by your failover software update points is to use a shared WSUS database. Having your underlying WSUS servers share a database significantly mitigates (but does not completely eliminate) the client and network performance impact of switching. When the WSUS servers the client switches between share a database, the scan delta still occurs, but it is much smaller than it would be if each WSUS server had its own database. Without writing a dissertation on switching costs, let’s leave it at this: we recommend using a shared WSUS database when using multiple software update points within a forest boundary.
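On Windows Server 2012, for example, you can point each WSUS server at the same remote SQL Server database during post-installation. A minimal sketch, with placeholder server and path names:

    # Run on each WSUS server. Installs WSUS with SQL database support, then
    # binds it to a shared remote SQL Server instance (placeholder names).
    Install-WindowsFeature -Name UpdateServices-Services, UpdateServices-DB -IncludeManagementTools
    & "$env:ProgramFiles\Update Services\Tools\wsusutil.exe" postinstall `
        'SQL_INSTANCE_NAME=SQL01.contoso.com' 'CONTENT_DIR=D:\WSUS'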

Now that you have your WSUS servers configured, updated with the required KBs, and sharing a database, let’s walk through the new software update point setup workflow. When adding a software update point, you go through pretty much the same workflow as in previous versions of the product. The first software update point you set up on a primary site will be the default software update point and will serve as the update source for all additional software update points that you add to that primary site. Additional software update points use the same workflow. It is no longer required that you set any of the software update points as active: they are all active. Additionally, you now set the Client Connection Type directly during software update point setup, which is how you create an Internet-based software update point (rather than making these changes in the properties after setup, as was the case in previous versions).
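You can also script this with the Configuration Manager PowerShell cmdlets. A sketch, assuming placeholder site code and server names (the parameter names shown reflect later versions of the module, so check Get-Help in yours):

    # Connect to the site's PowerShell drive from a machine with the console installed.
    Import-Module "$env:SMS_ADMIN_UI_PATH\..\ConfigurationManager.psd1"
    Set-Location 'PS1:'   # 'PS1' is a placeholder site code
    # Add the site system server, then the software update point role.
    New-CMSiteSystemServer -SiteSystemServerName 'SUP02.contoso.com' -SiteCode 'PS1'
    Add-CMSoftwareUpdatePoint -SiteSystemServerName 'SUP02.contoso.com' -SiteCode 'PS1' `
        -ClientConnectionType Intranet -WsusIisPort 8530 -WsusIisSslPort 8531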

Software update points with failover should address any fault tolerance needs you have, but if you do still want to use NLB, that configuration is no longer available in the UI. You can configure NLB through the SDK or through a PowerShell cmdlet. Essentially, you add all the software update points that are NLB members (pre-configured through WSUS NLB setup), and then configure the VIP or shared FQDN for the NLB node through the SDK.
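The role instances involved live in the SMS Provider. As a starting point, here’s a sketch that lists the software update point role instances for a site through WMI (the NLB VIP itself is set through the SDK and isn’t shown here); the site code and provider server name are placeholders:

    # List the software update point role instances registered with the site.
    Get-WmiObject -Namespace 'root\SMS\site_PS1' -Class SMS_SCI_SysResUse `
        -ComputerName 'CM01.contoso.com' |
        Where-Object { $_.RoleName -eq 'SMS Software Update Point' } |
        Select-Object NetworkOSPath, SiteCode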

After you’ve added your software update points and initiated a synchronization, you can view the status of the software update points, as well as their relationships, in the Software Update Point Synchronization Status node in the Monitoring workspace. In the example below in Figure 1, I have two software update points on a standalone primary site. JRG2K8R2-DC is the source software update point (the first one I added), and it synchronizes with Windows Update. It’s also clear that JRG2K8R2-ROLE is a software update point that synchronizes from JRG2K8R2-DC. If you ever remove the source software update point from the Configuration Manager console, you will be prompted to select a new synchronization source from the list of available software update points, as shown below in Figure 2. Changing the synchronization source does incur a synchronization cost, so only do this when necessary (such as when you’re decommissioning the existing synchronization-source software update point).
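If you’d rather check this from a script than from the console, the same per-software-update-point status is surfaced through the SMS Provider. A sketch, assuming the SMS_SUPSyncStatus class and placeholder names (verify the class and property names in your environment):

    # Query per-software-update-point synchronization status from the SMS Provider.
    Get-WmiObject -Namespace 'root\SMS\site_PS1' -Class SMS_SUPSyncStatus `
        -ComputerName 'CM01.contoso.com' |
        Select-Object WSUSServerName, WSUSSourceServer, LastSuccessfulSyncTime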

Figure 1 – Software Update Synchronization Status with Synchronization Source

Figure 2 – Removing the synchronization-source software update point, with the prompt to select a new synchronization source

How Software Update Point Switching Works

Okay, now you have multiple software update points configured for your primary site, so if one software update point goes down or is unreachable, clients can fail over to another software update point and still scan for the latest updates. Keep in mind, as mentioned previously, that the solution in this release was designed for fault tolerance, not pure load balancing. Also, per the earlier comment on the cost of switching, we try to persist affinity with the client’s currently assigned software update point whenever possible, and only fail over when necessary. Let’s walk through that process next.

First, if a client already has a software update point assigned, it stays assigned to that software update point unless it fails to scan successfully. If you have an active software update point (SUP01) on the RTM version of Configuration Manager, upgrade to SP1, and then add a second software update point (SUP02), existing clients will only switch to SUP02 after a failed scan. A new client installed after you’ve upgraded to SP1 and configured SUP01 and SUP02 will be randomly assigned to SUP01 or SUP02 at install, and will persist affinity with that assigned software update point unless scan failures force a switch. Finally, if SUP01 goes down for more than a few hours while all clients are trying to scan against it, they will all fail over to SUP02, unable to contact SUP01.

So what determines a scan failure, and how does the client react to these conditions? Scans can fail with a number of retry and non-retry error codes. For failover purposes, software update points only act on retry error codes, and there are eleven of those we use from the Windows Update Agent and WinHTTP to determine that a scan has failed. These errors cause the client to retry its scan and, after enough failures (four), switch software update points. The error codes themselves aren’t something you need to worry about; the high-level conditions behind a failed scan are typically that the WSUS server couldn’t be reached or that it’s temporarily overloaded. The retry error codes are all variations on these two themes. In any case, the scan just didn’t work, in which case we do the following (a sketch of this retry loop follows the list):

  • The client scans at its scheduled time, or when a scan is initiated client-side through the control panel or SDK. If the scan fails, the client waits 30 minutes and tries again against the same software update point.
  • The client retries up to four times at 30-minute intervals. After the fourth failure, and after two more minutes, it moves to the next software update point in its list.
  • The same process repeats against the next software update point until a scan succeeds. Once a scan succeeds against a software update point, the client persists affinity with it until it again fails to scan, and then only after failing four times at 30-minute intervals.
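To make the retry-and-switch behavior concrete, here’s an illustrative PowerShell sketch. This models the documented behavior only; it isn’t actual client code, and Invoke-Scan is a hypothetical stand-in for a WSUS scan attempt:

    # Illustrative model of the documented retry/switch behavior; not client code.
    function Invoke-ScanWithFailover {
        param([string[]]$SupList)   # software update points in the client's list
        foreach ($sup in $SupList) {
            for ($attempt = 1; $attempt -le 4; $attempt++) {
                if (Invoke-Scan -Server $sup) { return $sup }           # success: keep affinity here
                if ($attempt -lt 4) { Start-Sleep -Seconds (30 * 60) }  # wait 30 minutes, retry same SUP
            }
            Start-Sleep -Seconds 120   # after the fourth failure, wait 2 more minutes, then switch
        }
        return $null   # scan failed against every software update point
    }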

Here are some additional points to consider around software update points retries and switching:

  • If a client is disconnected from the corporate intranet and the scan fails, we do not switch software update points. This is an expected failure: the client can’t reach the corporate network or the intranet software update point. There is no point retrying against a different intranet software update point when we know the client can’t reach it, so we skip the retry process in this scenario. Whether the corporate network, and therefore the intranet software update point, is available is determined by the Configuration Manager client framework.
  • If Internet-based client management is enabled, and there are multiple software update points deployed for clients on the Internet, switching will follow the retry process outlined above.
  • If the scan started but the client was shut down before it completed, this does not count as a scan failure and doesn’t count as one of the four retries.

So that’s how software update point switching works. The key takeaways are: 1) unlike management point switching, software update point switching persists affinity whenever possible to avoid the client-side tax of switching scan sources, and 2) switching occurs when a client fails to scan four times at 30-minute intervals (a net two hours of scan failures). Also, make sure your WSUS servers use a shared database to reduce the impact of switching scan sources.

Software Update Points Cross-Forest

The other important scenario the software update point redesign supports is the ability to create a software update point (or multiple software update points) on a primary site in one forest to support clients across an untrusted forest boundary.  Management points and distribution points supported this capability at RTM, and now software update points also support this configuration.

To add a software update point cross-forest, you first have to build a WSUS server in the intended forest. Next, you add a site system server the way you would add any site system server, specifying an account that can reach the server in the other forest that hosts WSUS. You also need to specify an account for the connection to the WSUS server on the appropriate wizard page. Complete the wizard, and configuration of the cross-forest software update point will begin.
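Scripted, this might look like the following sketch. All names are placeholders, the parameter names reflect later versions of the ConfigMgr module, and the WSUS connection account (the wizard page mentioned above) isn’t shown:

    # Add a site system server in the untrusted forest with an explicit
    # installation account, then add the software update point role.
    New-CMSiteSystemServer -SiteSystemServerName 'SUP03.forestb.local' -SiteCode 'PS1' `
        -AccountName 'FORESTB\cm-install'
    Add-CMSoftwareUpdatePoint -SiteSystemServerName 'SUP03.forestb.local' -SiteCode 'PS1' `
        -ClientConnectionType Intranet -WsusIisPort 8530 -WsusIisSslPort 8531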

As an example, say your primary site is in Forest A, and two software update points (SUP01 and SUP02) are configured in that forest.  Also, you’ve added two software update points (SUP03 and SUP04) from that same primary site to untrusted Forest B.  When switching occurs in this scenario, the software update points from the same forest that the client is in are prioritized first, ahead of the cross-forest software update points, which are likely not reachable by the client anyway unless the appropriate ports have been opened.

Source Top-Level Software Update Point from Internal WSUS Server

The last change to talk about here is the ability to source your top-level software update point from an internal WSUS server. This capability was added in response to feedback from customers whose security policies don’t allow any Configuration Manager role to communicate with the Internet. That’s problematic for software updates, because the catalog is typically sourced from Windows Update/Microsoft Update, which requires Internet access; the same is true for update content. With the changes we’ve made in SP1, you can now specify an internal WSUS server as the catalog source. For example, you may have a WSUS server in the DMZ with Internet access to reach Windows Update/Microsoft Update, while your top-level software update point has none. With this change, you can specify that DMZ WSUS server as the update catalog source for your top-level software update point. It’s a simple configuration in the setup wizard for your top-level software update point. Just make sure the appropriate connection account is specified and that the ports are open for connectivity from the top-level software update point to the remote WSUS server it’s using as a source.
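In script form, the equivalent setting lives on the software update point component. A sketch, assuming parameter names from later versions of the ConfigMgr module and a placeholder URL:

    # Point the top-level site's catalog synchronization at an internal (DMZ)
    # WSUS server instead of Microsoft Update.
    Set-CMSoftwareUpdatePointComponent -SiteCode 'PS1' `
        -SynchronizeAction SynchronizeFromAnUpstreamDataSourceLocation `
        -UpstreamSourceLocation 'https://wsus-dmz.contoso.com:8531'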

Wrap Up

This blog covers the big changes to software updates in Configuration Manager Service Pack 1. In addition to these architectural changes, we’ve also added some out-of-box templates for definition updates and Patch Tuesday, to be used with the Deploy Software Updates Wizard and automatic deployment rules, which should make your life easier. We’ve also made architectural changes to support deploying definition updates through software updates as often as three times a day. We hope these changes improve your experience with the feature, and as always, let us know how they work for you.

-- Jason Githens

This posting is provided "AS IS" with no warranties, and confers no rights.
