Client on Management Point doesn't work after in-place OS upgrade


Following the docs here https://docs.microsoft.com/en-us/mem/configmgr/core/servers/manage/upgrade-on-premises-infrastructure for an in-place OS upgrade of a CM 2010 install, the servers went from 2012 R2 to 2019. The upgrade worked fine for the primary site server, which holds most of the roles (SQL Server, Reporting Services, WSUS/SUP with a shared content folder). I then upgraded our management point, which also serves as a secondary SUP. I removed WSUS on both servers before the upgrades and added it back afterwards as described in the docs. The upgrades went fine, I did a site reset to make sure everything was good, there were no issues in the console, and clients were working fine for all features.

After a few days I noticed that the management point is no longer compliant with its configuration baselines. It turns out it hasn't run any of them since the upgrade. I reinstalled the client and it finished with no errors; it seems to register fine and the certificate is fine, but when it tries to download policy or upload messages to the MP (itself), the jobs all error out.

The DataTransferService.log shows the following sorts of errors, repeated:

=============

CDTSJob::HandleErrors: DTS Job '{E27D24C3-091D-4793-92D6-CB8040D35D4C}' BITS Job '{4EA2941A-D5F2-4760-9947-DC6EC8ACD937}' under user 'S-1-5-18' OldErrorCount 415 NewErrorCount 416 ErrorCode 0x80072EFE
CDTSJob::HandleErrors: DTS Job '{3C7E4C35-2A83-4614-9565-608585A49D1C}' BITS Job '{57E58E15-8BEF-41AF-BD25-2F47AA42BC17}' under user 'S-1-5-18' OldErrorCount 131 NewErrorCount 132 ErrorCode 0x80072EFE
CDTSJob::HandleErrors: DTS Job ID='{3C7E4C35-2A83-4614-9565-608585A49D1C}' URL='https://<MP FQDN>:443/SMS_MP' ProtType=3
CDTSJob::HandleErrors: DTS Job '{ECDFDEFC-2DCF-4570-A0D9-03701B0FF9D2}' BITS Job '{C0EB62DB-F1DC-42E5-94E8-DAE216713B0F}' under user 'S-1-5-18' OldErrorCount 113 NewErrorCount 114 ErrorCode 0x80072EFE
CDTSJob::HandleErrors: DTS Job ID='{ECDFDEFC-2DCF-4570-A0D9-03701B0FF9D2}' URL='https://<MP FQDN>:443/SMS_MP' ProtType=3
CDTSJob::HandleErrors: DTS Job '{8F8B924E-04B6-4CD1-8928-963E00DE343C}' BITS Job '{5EF47274-86E8-44AA-B9F0-EEDD903D0F37}' under user 'S-1-5-18' OldErrorCount 208 NewErrorCount 209 ErrorCode 0x80072EFE
CDTSJob::HandleErrors: DTS Job ID='{8F8B924E-04B6-4CD1-8928-963E00DE343C}' URL='https://<MP FQDN>:443/SMS_MP' ProtType=3
CDTSJob::HandleErrors: DTS Job '{AC6517A8-EEC9-42F3-9205-215B281F2240}' BITS Job '{64F4719A-9053-400C-BA90-C0AAE00210B9}' under user 'S-1-5-18' OldErrorCount 131 NewErrorCount 132 ErrorCode 0x80072EFE

========================

 

That error code comes from WinHTTP and means "The connection with the server was terminated abnormally". I see a matching entry in the IIS logs with a 403 status, but I don't know how to get more details about what is causing it.
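
For reference, the error code itself can be translated locally, and the IIS log can at least be searched for the failing requests. A rough PowerShell sketch; it assumes the default IIS log location and the W3SVC1 site ID, which may differ on the MP:

# Translate the WinHTTP HRESULT into its message text
certutil -error 0x80072efe

# Pull the most recent 403 entries out of the IIS log for the site
# (adjust the path and site ID to match the MP's web site)
Get-ChildItem 'C:\inetpub\logs\LogFiles\W3SVC1\*.log' |
    Select-String ' 403 ' |
    Select-Object -Last 20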

 

Looking at the queue with bitsadmin shows the following:

===================

{5211F5B3-89FF-47AC-9BDC-5C67B990CFA7} 'CCM Message Upload {9EF459CF-4EB2-4A75-8FFF-FF400050A3B8}' TRANSIENT_ERROR 0 / 1 0 / 14138
{73F46E79-CD10-4846-840E-7ECD0CCBB976} 'CCM Message Upload {6CE3E579-1C78-4E2D-A1C6-C7118BBAA75E}' TRANSIENT_ERROR 0 / 1 0 / 24650
{9B0D204F-0D88-4683-8800-ADE66C176A0C} 'CCMDTS Job' TRANSIENT_ERROR 0 / 4 0 / UNKNOWN
{63EA641C-266A-4EC5-A428-D3DC30266040} 'CCMDTS Job' TRANSIENT_ERROR 0 / 4 0 / UNKNOWN
{B6C82074-7794-4A34-999A-3938A2216933} 'CCMDTS Job' TRANSIENT_ERROR 0 / 28 0 / UNKNOWN
{D68CF136-7EFB-4F5F-8E56-7B03C6C83830} 'CCMDTS Job' TRANSIENT_ERROR 0 / 4 0 / UNKNOWN
{47152487-B6AA-4134-BBD8-83A85467B8E5} 'CCMDTS Job' TRANSIENT_ERROR 0 / 4 0 / UNKNOWN
{90F73916-3F8C-48CD-B201-7F3AD9A5003F} 'CCMDTS Job' TRANSIENT_ERROR 0 / 3 0 / UNKNOWN
{DF682297-9FFE-4348-B104-FB344B178C13} 'CCMDTS Job' TRANSIENT_ERROR 0 / 4 0 / UNKNOWN
{6A839CF0-45BB-453A-98D9-F75343BE97D5} 'CCMDTS Job' TRANSIENT_ERROR 0 / 4 0 / UNKNOWN
{EA51C2C6-EEB4-4018-AC8A-E0475BD53513} 'CCMDTS Job' TRANSIENT_ERROR 0 / 4 0 / UNKNOWN
{2471FF3C-FD78-4BB4-A037-7E146619FDB8} 'CCMDTS Job' TRANSIENT_ERROR 0 / 4 0 / UNKNOWN
{6E50A082-1771-4147-B06A-76DD40DD6DAD} 'CCMDTS Job' TRANSIENT_ERROR 0 / 28 0 / UNKNOWN
{454513C9-AC01-43E1-B0E8-102FBE25779E} 'CCMDTS Job' TRANSIENT_ERROR 0 / 57 0 / UNKNOWN
{34F8B723-1371-461C-A4C5-A4866323EC44} 'CCMDTS Job' TRANSIENT_ERROR 0 / 57 0 / UNKNOWN

===============

 

If I delete the BITS jobs, they just come back again, of course. I don't know if it is useful to anyone, but when I run bitsadmin /info on one of the jobs I get the following:

=================

bitsadmin /info {5211F5B3-89FF-47AC-9BDC-5C67B990CFA7} /verbose

BITSADMIN version 3.0
BITS administration utility.
(C) Copyright Microsoft Corp.

GUID: {5211F5B3-89FF-47AC-9BDC-5C67B990CFA7} DISPLAY: 'CCM Message Upload {9EF459CF-4EB2-4A75-8FFF-FF400050A3B8}'
TYPE: UPLOAD STATE: TRANSIENT_ERROR OWNER: NT AUTHORITY\SYSTEM
PRIORITY: NORMAL FILES: 0 / 1 BYTES: 0 / 14138
CREATION TIME: 7/30/2021 5:26:37 PM MODIFICATION TIME: 8/3/2021 7:27:22 PM
COMPLETION TIME: UNKNOWN ACL FLAGS:
NOTIFY INTERFACE: UNREGISTERED NOTIFICATION FLAGS: 3
RETRY DELAY: 600 NO PROGRESS TIMEOUT: 1209600 ERROR COUNT: 1193
PROXY USAGE: NO_PROXY PROXY LIST: NULL PROXY BYPASS LIST: NULL
ERROR FILE: https://<MP FQDN>:443/CCM_Incoming/{9EF459CF-4EB2-4A75-8FFF-FF400050A3B8} -> D:\SMS_CCM\ServiceData\LocalPayload\{9EF459CF-4EB2-4A75-8FFF-FF400050A3B8}
ERROR CODE: 0x80072efe - The connection with the server was terminated abnormally
ERROR CONTEXT: 0x00000005 - The error occurred while the remote file was being processed.
DESCRIPTION:
JOB FILES:
0 / 14138 WORKING https://<MP FQDN>:443/CCM_Incoming/{9EF459CF-4EB2-4A75-8FFF-FF400050A3B8} -> D:\SMS_CCM\ServiceData\LocalPayload\{9EF459CF-4EB2-4A75-8FFF-FF400050A3B8}
NOTIFICATION COMMAND LINE: none
owner MIC integrity level: SYSTEM
owner elevated ? true

Peercaching flags
Enable download from peers :false
Enable serving to peers :false

CUSTOM HEADERS: NULL
CLIENT CERTIFICATE INFORMATION:
Certificate Store Location : CERT_STORE_LOCATION_LOCAL_MACHINE
Certificate Store Name : MY
Certificate Hash : 41C2067A522B94550F626B1A136015C4C6FE46D9
Certificate Subject Name : NULL

HTTP security flags
Enable CRL Check :true
Ignore invalid common name in server certificate :false
Ignore invalid date in server certificate :false
Ignore invalid certificate authority in server certificate :false
Ignore invalid usage of certificate :false
URL redirection policy :Redirects will be automatically allowed.
Redirection from HTTPS to HTTP allowed :false

=================

The certificate hash in the job is the same one that shows up in ClientIDManagerStartup.log in the client log folder:

======

>>> Client selected the PKI Certificate [Thumbprint 41C2067A522B94550F626B1A136015C4C6FE46D9] issued to '<MP FQDN>' ClientIDManagerStartup 8/3/2021 5:59:50 PM 5304 (0x14B8)

======
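
As a sanity check, the cert BITS is presenting can be inspected directly on the MP. A PowerShell sketch using the thumbprint from the job above; it just confirms the cert is in the machine MY store, is time-valid, and carries the Client Authentication EKU:

# Look up the cert BITS selected and check its validity and EKUs
$thumb = '41C2067A522B94550F626B1A136015C4C6FE46D9'
$cert  = Get-Item "Cert:\LocalMachine\My\$thumb"
$cert | Format-List Subject, NotBefore, NotAfter, EnhancedKeyUsageList

# Verify the chain builds and the cert is valid for Client Authentication
Test-Certificate -Cert $cert -EKU '1.3.6.1.5.5.7.3.2'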

 

I've been scouring the web for a few days now and can't seem to find anything that helps, and I see no other obvious errors in the log files. Again, no other machines are having this issue.

 

I know I could probably remove the MP role and reinstall it, or bring up a new MP, but I would rather not. This is my dev server, where I'm testing the in-place upgrade before doing the same thing in prod, and if the same thing happens when I upgrade prod I'm not looking forward to making those sorts of changes this close to the start of the semester. Anyone have any ideas?

17 Replies
Anyone have any ideas? I am seeing other computers that aren't Configuration Manager servers showing similar problems in DataTransferService.log. They either aren't sending hardware inventory to the management point or are having issues downloading BITS jobs from it.

This in-place server OS upgrade has not been a smooth process.
I posted to Reddit as well and got some pointers saying you need to remove the MP role if you fully reinstall the client. There were no links to any specific docs, more of an "I've heard" sort of thing.

I removed the MP role, but left the SUP role. This seemed to remove the client at the same time. At least it removed the Control Panel entry.
I then reinstalled the MP role and installed the client. Both setups exited with error code 0, but the issue still remains.
I dug into the logs a bit more, and it seems like it may be something to do with the certs.
Keep in mind this is all on the management point itself and we do require HTTPS via an internal ADCS PKI. If I look at one of the errors in the DataTransferService.log and grab the BITS job:

CDTSJob::HandleErrors: DTS Job '{F0AD54F5-7364-49BE-91C4-33BA0DED45DC}' BITS Job '{20FFFB81-7308-4187-832D-500312F8B7D6}' under user 'S-1-5-18' OldErrorCount 1397 NewErrorCount 1398 ErrorCode 0x80072EFE DataTransferService 8/25/2021 9:46:47 PM 1316 (0x0524)

Then, checking the BITS job, I can get the URL (https://SERVER-FQDN:443/SMS_MP/.sms_pol?%7B33B321B6-72BD-4827-95FF-623A04B7CE53%7D.SHA256:52BC11A0A1...):
=====================
bitsadmin /info '{20FFFB81-7308-4187-832D-500312F8B7D6}' /verbose

BITSADMIN version 3.0
BITS administration utility.
(C) Copyright Microsoft Corp.

GUID: {20FFFB81-7308-4187-832D-500312F8B7D6} DISPLAY: 'CCMDTS Job'
TYPE: DOWNLOAD STATE: TRANSIENT_ERROR OWNER: NT AUTHORITY\SYSTEM
PRIORITY: HIGH FILES: 0 / 1 BYTES: 0 / UNKNOWN
CREATION TIME: 8/20/2021 4:11:01 PM MODIFICATION TIME: 8/25/2021 9:30:13 PM
COMPLETION TIME: UNKNOWN ACL FLAGS:
NOTIFY INTERFACE: REGISTERED NOTIFICATION FLAGS: 11
RETRY DELAY: 60 NO PROGRESS TIMEOUT: 28800 ERROR COUNT: 1380
PROXY USAGE: NO_PROXY PROXY LIST: NULL PROXY BYPASS LIST: NULL
ERROR FILE: https://SERVER-FQDN:443/SMS_MP/.sms_pol?%7B33B321B6-72BD-4827-95FF-623A04B7CE53%7D.SHA256:52BC11A0A1... -> D:\SMS_CCM\Staging\{33B321B6-72BD-4827-95FF-623A04B7CE53}.3.00.tmp
ERROR CODE: 0x80072efe - The connection with the server was terminated abnormally
ERROR CONTEXT: 0x00000005 - The error occurred while the remote file was being processed.
DESCRIPTION:
JOB FILES:
0 / UNKNOWN WORKING https://SERVER-FQDN:443/SMS_MP/.sms_pol?%7B33B321B6-72BD-4827-95FF-623A04B7CE53%7D.SHA256:52BC11A0A1... -> D:\SMS_CCM\Staging\{33B321B6-72BD-4827-95FF-623A04B7CE53}.3.00.tmp
NOTIFICATION COMMAND LINE: none
owner MIC integrity level: SYSTEM
owner elevated ? true
This job is read-only to the current CMD window because the job's mandatory
integrity level of SYSTEM is higher than the window's level of HIGH.
Peercaching flags
Enable download from peers :true
Enable serving to peers :true

CUSTOM HEADERS: NULL
CLIENT CERTIFICATE INFORMATION:
Certificate Store Location : CERT_STORE_LOCATION_LOCAL_MACHINE
Certificate Store Name : MY
Certificate Hash : 41C2067A522B94550F626B1A136015C4C6FE46D9
Certificate Subject Name : NULL

HTTP security flags
Enable CRL Check :true
Ignore invalid common name in server certificate :false
Ignore invalid date in server certificate :false
Ignore invalid certificate authority in server certificate :false
Ignore invalid usage of certificate :false
URL redirection policy :Redirects will be automatically allowed.
Redirection from HTTPS to HTTP allowed :false

================
Finally, if I look in the IIS log for that URL I see a 403.7 error, which from what I can tell indicates a mutual cert auth failure (the client certificate was required but not accepted).
============================
2021-08-26 01:46:47 10.10.28.120 HEAD /SMS_MP/.sms_pol %7B33B321B6-72BD-4827-95FF-623A04B7CE53%7D.SHA256:52BC11A0A11362FDE7CF392B055993FE9E2BB81CF5A8BF4F69A19C5CE9D7E2DB 443 - 10.10.28.120 Microsoft+BITS/7.8 - 403 7 64 1

================
To rule out cert issues, I deleted all the certs except the one that is configured for both client and server auth, which is also the one set up for IIS. I verified in ClientIDManagerStartup.log that the CM client had selected that cert. Browsing to the IIS server from the client shows no issues, and of course other machines can talk to the MP just fine.
Anyone have any other ideas, or know how to get more detailed logging out of BITS or IIS?
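
For reference, two standard Windows knobs surface more server-side detail on failed client-certificate negotiation than the IIS log does. A sketch; the registry value and event log names are the usual Windows ones, nothing ConfigMgr-specific:

# 1) Turn up Schannel event logging (errors, warnings and success events go to the System log).
#    A reboot may be needed before the new level takes effect.
Set-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL' `
    -Name 'EventLogging' -Value 7 -Type DWord

# 2) Enable the CAPI2 operational log to watch certificate chain building and revocation checks
wevtutil sl Microsoft-Windows-CAPI2/Operational /e:true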
I've now removed the CM roles and all the prerequisites, reinstalled them, and I am still seeing BITS errors from multiple clients talking to the MP. Some downloads and uploads seem to work fine, but others don't. It seems very similar to the issue described in this blog post, but that was an old issue on Windows Server 2016 and we have upgraded to 2019 with all the latest updates: https://systemcenterdudes.com/sccm-hardware-inventory-problem-windows-10-1607/
So at this point, is the MP role still in an errored state? Also, based on the logs, it seems like you have SSL enabled on the MP?
The MP role itself doesn't show any errors. It thinks it's working fine, but the clients are having issues with some BITS transfers. The clients can talk to the MP, find out what apps and updates are available, and download them from the DPs with no issues. Uploading client data like hardware inventory or baseline compliance, and downloading some configuration baselines, are the main issues.

We are all HTTPS with an internal PKI.
That sounds like a phantom issue. Since CM was originally installed on 2012 R2, the prerequisites were different. Perhaps grab the SCCM Prerequisites Tool and re-run it for your server.

https://msendpointmgr.com/configmgr-prerequisites-tool/

I would run it for the Primary Site, DP, and MP roles to ensure you're not missing any components, then see if that fixes your BITS issues. Keep us posted.
Thanks for the suggestion. That tool did find a few things that need to be installed.
For the management point it installed:
NET-Framework-45-Features
BITS-Compact-Server
RSAT-Bits-Server
Web-ASP
For the primary site it installed UpdateServices-RSAT, and for the DP role it installed Web-Scripting-Tools. After it was done I removed the MP role and added it back, but the issue is still happening.
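
For reference, the same components can be added straight from an elevated PowerShell prompt. A sketch using exactly the feature names listed above; the tool may configure more than just the features:

Import-Module ServerManager

# Management point
Install-WindowsFeature NET-Framework-45-Features, BITS-Compact-Server, RSAT-Bits-Server, Web-ASP

# Primary site server
Install-WindowsFeature UpdateServices-RSAT

# Distribution point
Install-WindowsFeature Web-Scripting-Tools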
I thought that would help for sure! Any reboots after the component updates?
Do other clients connecting to this MP have the same issue? Or is it only the client on the MP itself?
Yes, other clients also have the issue.
Try resetting the BITS queue manually:
net stop bits
Del "%ALLUSERSPROFILE%\Application Data\Microsoft\Network\Downloader\qmgr*.dat"
net start bits
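
A rough equivalent with the BitsTransfer PowerShell module, if you'd rather cancel the queued jobs than delete the queue files (run it elevated so the SYSTEM-owned jobs are visible):

# Cancel every BITS job for all users; the CM client will recreate its own jobs
Import-Module BitsTransfer
Get-BitsTransfer -AllUsers | Remove-BitsTransfer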
I have done that. The BITS jobs just get recreated by the CM client.
Hi,
Have you checked the SSL settings (Require SSL enabled) for the APIRemoting30, ClientWebService, DSSAuthWebService, ServerSyncWebService, and SimpleAuthWebService virtual directories?
Have you run this command line for WSUS:
%ProgramFiles%\Update Services\Tools\WsusUtil.exe configuressl <SUP.FQDN>
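
The Require SSL setting on those virtual directories can also be checked or set from PowerShell. A sketch, assuming the default "WSUS Administration" IIS site name (adjust if the WSUS site was created under a different name):

# Require SSL on the WSUS virtual directories that must use HTTPS
Import-Module WebAdministration
$vdirs = 'APIRemoting30','ClientWebService','DSSAuthWebService','ServerSyncWebService','SimpleAuthWebService'
foreach ($v in $vdirs) {
    Set-WebConfigurationProperty -PSPath "IIS:\Sites\WSUS Administration\$v" `
        -Filter 'system.webServer/security/access' -Name 'sslFlags' -Value 'Ssl'
}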

Sources (Read the « Update 2 ») :
https://sccmentor.com/2021/07/27/in-place-upgrade-of-configmgr-site-server-from-windows-2012-r2-to-2...
Thanks for the response, Renald. I have done all those steps, but we don't have any issues with updates or DP communications. It's mainly clients uploading and downloading policy and device data to and from the management point.
Hi,
OK, any chance it could be an antivirus, proxy, or firewall related issue?
CRL checking is enabled; have you checked that the client certs weren't revoked?
Have you tried troubleshooting a client with the Microsoft Support Center client tools? (You can do live monitoring and troubleshooting of client policy, certs, apps, updates, etc.)

Thanks for the suggestions.

There are no AV alerts, and we only use Windows Defender. The proxy and firewall settings have been checked, and the servers are talking fine. I see the 403.7 errors in the IIS logs, so the communication is getting there; the mutual cert auth is just failing for some reason.

CRL checking is enabled and the certs all check out fine at the OS level. I have also tried the Support Center client tools with no luck.

It's something to do with BITS mutual cert auth, but I couldn't find a root cause beyond that, and nothing online was helpful. I ended up just creating a whole new management point on a fresh Server 2019 box and switching everyone to that, which resolved the issue. I guess this thread will just stand as a warning that an in-place upgrade of the server OS isn't always a slam dunk.