Universal print jobs take an unusually long amount of time & user gets error message


Hi,

We've got Universal Print set up for 2 users and roughly 8-9 printers.

All of our print connections go through the connector (no firmware-based connections), which runs on a Windows Server 2019 Azure VM and reaches the printers over a site-to-site (S2S) tunnel via an Azure VPN gateway.

What we're seeing is that user print jobs are taking an unusually long time (up to 3 minutes), even for a simple test print, and users receive error messages saying their print job failed even when the job still comes out.

When I perform a test print to one of the test printers, I can see the jobs failing in the test system's print queue:

[Image: printpic1.png — jobs showing as failed in the local print queue]

In this case, I didn't even see the print jobs hit the print queue on the print connector server.

This had been working pretty much flawlessly for 2-3 weeks; no network changes have been made, in Azure or on-prem.

Looking at the PrintConnector event log, we're seeing event ID 1, category LogError:

Retry 1 Exception: System.NullReferenceException: Object reference not set to an instance of an object.
at ProxyLibrary.NotificationHandler.<SubscribeAsync>d__10.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at ProxyLibrary.NotificationHandler.<>c__DisplayClass8_0.<<CreateSubscriptionAsync>b__0>d.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at ProxyLibrary.RetryHandler.<RetryAsync>d__0`1.MoveNext()

and event ID 21, category IPPError:

IPP Request operation: CreatePrinterSubscriptions
Uri: https://notification.print.microsoft.com/printers/INSERTGUIDHERE/
response:StatusCode: 503, ReasonPhrase: 'Service Unavailable', Version: 1.1, Content: System.Net.Http.StreamContent, Headers:
{
  X-Content-Type-Options: nosniff
  X-MSEdge-Ref: Ref A: <REF> Ref B: CH1EDGE1006 Ref C: 2021-02-23T20:59:33Z
  Cache-Control: no-store
  Date: Tue, 23 Feb 2021 21:00:02 GMT
  Set-Cookie: UserRegion=usa; path=/; samesite=lax; httponly
  Content-Length: 0
}
Http client failed with statusCode: ServiceUnavailable
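For context on why jobs can take minutes yet still print: a 503 from the notification endpoint is a transient server-side error, and clients typically retry it with exponential backoff, so repeated 503s translate into minutes of delay before a job eventually succeeds. A minimal sketch of that retry pattern (illustrative only — this is not the connector's actual implementation):

```python
import time


def retry_with_backoff(operation, max_attempts=4, base_delay=1.0, sleep=time.sleep):
    """Retry `operation` on a transient failure, doubling the delay each time.

    `operation` returns an HTTP-style status code; 503 is treated as
    transient. Hypothetical helper for illustration, not Universal Print code.
    """
    delay = base_delay
    for attempt in range(1, max_attempts + 1):
        status = operation()
        if status != 503:
            return status, attempt
        if attempt < max_attempts:
            sleep(delay)  # back off before the next try
            delay *= 2
    return status, max_attempts


# Example: a service that fails twice with 503, then recovers.
responses = iter([503, 503, 200])
status, attempts = retry_with_backoff(lambda: next(responses), sleep=lambda _: None)
# With base_delay=1.0 a real caller would have waited 1s + 2s before the
# third attempt succeeded -- repeated 503s add up to the delays users see.
```

This also explains the "job failed" pop-ups followed by the job printing anyway: an early attempt errors out and gets reported, while a later retry still goes through.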

event ID 19, category IPPError:

Notification response for printer INSERTGUIDHERE 
 System.AggregateException: One or more errors occurred. ---> Microsoft.Falcon.MPS_IPPLibrary.ZeroByteReadException: Initial read returned 0 bytes
   at Microsoft.Falcon.MPS_IPPLibrary.IppResponse.<DeserializeAsync>d__24.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
   at Microsoft.Falcon.MPS_IPPLibrary.IppResponse.<CreateAsync>d__23.MoveNext()
   --- End of inner exception stack trace ---
   at System.Threading.Tasks.Task`1.GetResultCore(Boolean waitCompletionNotification)
   at ProxyLibrary.NotificationHandler.<GetNotificationsAsync>d__12.MoveNext()
---> (Inner Exception #0) Microsoft.Falcon.MPS_IPPLibrary.ZeroByteReadException: Initial read returned 0 bytes
   at Microsoft.Falcon.MPS_IPPLibrary.IppResponse.<DeserializeAsync>d__24.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
   at Microsoft.Falcon.MPS_IPPLibrary.IppResponse.<CreateAsync>d__23.MoveNext()<---

and finally event ID 2, also category LogError:

Failed to get Active Subscription for printer INSERTGUIDHERE, Exception: System.NullReferenceException: Object reference not set to an instance of an object.
   at ProxyLibrary.NotificationHandler.<GetActiveSubcriptionsAsync>d__6.MoveNext()

Has anyone else seen anything similar?

4 Replies

@ersjohnson If you are already using the latest version of the Connector (https://aka.ms/upconnector), then I recommend opening a support case with Microsoft.

Thanks Jimmy! Things seem to be better now, but we're still noticing these error messages in the Connector event log. Should we still open a support ticket?
I had a follow-up with the team and found out that the notification service has been receiving excessive calls from some older connectors, which caused a backlog of events to process. The notification service should be returning to normal health, and the team is continuing to monitor.