I worked through the process of configuring a second Software Update Point (SUP) on my lab Primary site to see how it all works. It was different from what I expected but is very easy and straightforward to implement.
First things first – the installation requirements. Each SUP requires its own WSUS instance. Note that I did NOT say each SUP requires its own WSUS database. It is fully supported to have multiple WSUS servers share a single SUSDB database, and doing so is a more efficient approach for synchronizing software updates. I’ll explain shortly.
Pick the additional server(s) that will host the remote SUPs and install WSUS on each of them. Just make sure you use WSUS 3.0 SP2 or later, and decide whether to use a locally installed database or, during setup, select the SUSDB that was created when the first SUP at the site was installed.
In my lab the hierarchy is as follows:
I chose Primary 1 as the site to have multiple SUPs. SUP a was installed first on the primary site server itself – this is no different from a typical SUP installation. Next it was time to install SUP b. I added the SUP role to this external site system and decided to let that server install its own SUSDB.
Configuring SUPs to share a single SUSDB requires you to configure WSUS that way during install. From there the SUP is installed normally and the shared database is available through WSUS. The advantage of doing this is a reduction in the amount of time required to synchronize updates to the SUSDB at the primary. In a shared scenario the SUSDB will be updated by the master SUP – in this case SUP a (or the first one you install). Since SUP b shares SUSDB there is no need for the update to be synchronized into SUSDB again because it is already there. In the scenario where each SUP maintains its own SUSDB, each update would have to be synchronized into every database separately.
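To make that efficiency point concrete, here is a trivial Python sketch of the cost difference. The function name and model are purely illustrative – nothing here is a ConfigMgr or WSUS API – it just captures the arithmetic of "sync once into a shared SUSDB" versus "sync into every SUP's private SUSDB":

```python
def susdb_writes_per_update(num_sups: int, shared_susdb: bool) -> int:
    """How many times a single update must be written into a SUSDB at the site.

    With a shared SUSDB the master SUP writes it once and every SUP sees it;
    with separate databases each SUP's SUSDB must receive its own copy.
    """
    return 1 if shared_susdb else num_sups


# Two SUPs at the site, as in this lab:
print(susdb_writes_per_update(2, shared_susdb=True))   # shared: 1 write
print(susdb_writes_per_update(2, shared_susdb=False))  # separate: 2 writes
```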
If we look at the monitoring node of the console we see all of our SUPs clicking along happily.
If we look at the logs for SUP a, our first SUP, we see the expected components.
If we compare this to the logs available on the remote SUP it’s a bit different.
Well now that's interesting. First, note the log location is in the SMS folder. More interesting – note the only standard WSUS log is the WSUSCtrl.log. So what does this mean? Let’s go back to the master SUP, SUP a, and look at a couple of logs.
On SUP a if we open WCM.log we see that this component is responsible for monitoring both SUP a and SUP b.
So the term master SUP isn’t just a figurative term, it’s the role SUP a actually plays. We see in this log that the health of both SUPs at the site is being checked and also that a WSUS Group is updated with the results. In my case there are no changes so there is nothing to update. If we had found a SUP with problems, or if a new SUP had been checked for the first time, then the membership of the group would have been adjusted accordingly.
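ConfigMgr logs such as WCM.log follow a consistent `<![LOG[message]LOG]!><time=... component=...>` line format, which makes it easy to pull out the health-check messages programmatically when you are watching more than one SUP. A minimal Python sketch (the sample log line below is invented for illustration, not copied from a real WCM.log):

```python
import re

# ConfigMgr log lines look like:
# <![LOG[message]LOG]!><time="..." date="..." component="..." context="" type="1" ...>
LOG_LINE = re.compile(
    r'<!\[LOG\[(?P<message>.*?)\]LOG\]!>.*?component="(?P<component>[^"]*)"'
)


def parse_cm_log(text: str):
    """Yield (component, message) pairs from ConfigMgr-style log text."""
    for match in LOG_LINE.finditer(text):
        yield match.group("component"), match.group("message")


# Hypothetical sample line for demonstration purposes only:
sample = (
    '<![LOG[Checking server health on SUP b]LOG]!>'
    '<time="10:00:00.000" date="01-01-2013" '
    'component="SMS_WSUS_CONFIGURATION_MANAGER" context="" type="1" '
    'thread="1234" file="wcm.cpp:1">'
)

for component, message in parse_cm_log(sample):
    print(component, "->", message)
```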
So what about WsyncMgr? Yep, it is only present on the master SUP and is responsible for updating all SUPs at the site – this is where you see the efficiency if the SUSDB is shared between SUPs, because the update only needs to happen once for all of the SUPs at the site.
So this log should look familiar because it is much the same as what you would see with just a single SUP. Note the differences though. First, we detect that there are two SUPs that need to be synchronized. Next we go through and synchronize the master SUP from the CAS – in this case there isn’t much to do because it has been synced recently. From there we start to sync our replica servers – in this case SUP b. This is a brand new SUP so we need to go through the entire sync process. If SUP b had been configured to use the existing SUSDB we would still see WsyncMgr go through the motions but there wouldn’t be anything to do as the updates are already there.
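The sync order described above can be sketched as a short Python model. This is an illustration of the sequence only – the function and step strings are hypothetical, not real WsyncMgr output: the master syncs from its upstream source first, then each replica is handled, with the shared-SUSDB case having nothing left to copy:

```python
def site_sync_steps(master: str, replicas: list[str], shared_susdb: bool) -> list[str]:
    """Model the order WsyncMgr works through the SUPs at a site."""
    # The master SUP always syncs from its upstream source (the CAS here) first.
    steps = [f"sync {master} from upstream"]
    for sup in replicas:
        if shared_susdb:
            # Updates already landed in the shared SUSDB via the master.
            steps.append(f"verify {sup} (nothing to copy, shared SUSDB)")
        else:
            # Each private SUSDB must receive its own full copy.
            steps.append(f"full sync of {sup} from {master}")
    return steps


for step in site_sync_steps("SUP a", ["SUP b"], shared_susdb=False):
    print(step)
```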
Cool, so pretty easy to implement and now we understand how it works. Enjoy!