OMS Gateway Errors in Event Log

Hi, 

I am regularly seeing the following error in the OMS Gateway Event Log:

2018-01-15 10:34:00 [34] ERROR GatewayServer - Error in worker thread
System.IO.IOException: Unable to read data from the transport connection: An existing connection was forcibly closed by the remote host. ---> System.Net.Sockets.SocketException: An existing connection was forcibly closed by the remote host
at System.Net.Sockets.Socket.Receive(Byte[] buffer, Int32 offset, Int32 size, SocketFlags socketFlags)
at System.Net.Sockets.NetworkStream.Read(Byte[] buffer, Int32 offset, Int32 size)
--- End of inner exception stack trace ---
at System.Net.Sockets.NetworkStream.Read(Byte[] buffer, Int32 offset, Int32 size)
at Microsoft.HttpForwarder.Library.HttpRequestParser.ReadLine(TcpConnection connection)
at Microsoft.HttpForwarder.Library.HttpRequestParser.ParseHttpRequestLine(TcpConnection connection)
at Microsoft.HttpForwarder.Library.GatewayLogic.<RunAsync>d__0.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at Microsoft.HttpForwarder.Library.GatewayServer.<HandleClientAsync>d__e.MoveNext()

I also have an issue where several agents are unable to send Service Map data to OMS, and I see this in the Operations Manager event log:

A subscriber data source in management group XXXXXXXXX has posted items to the workflow, but has not received a response in 17 minutes. Data will be queued to disk until a response has been received. This indicates a performance or functional problem with the workflow.
Workflow Id : Microsoft.SystemCenter.CollectApplicationDependencyMonitorInformationToCloud
Instance : XXXXXXXXXXX
Instance Id : {F7AA8097-51AA-E6A8-421C-99C975AE4FC9}

We do have a large number of agents reporting correctly, but a number have the above issue. I can't work out why, as they are configured the same and are in the same subnet.

4 Replies

Hi Martin.

I've been able to collect some information on the errors.

The first error could indicate that the message being sent is too large. There is a limit on the POST body size, and the workflow on the non-communicating agents is probably exceeding it.
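
If you want to test whether a size limit is in play, a rough sketch like the following could send increasingly large POST bodies through the gateway and show where the connection gets reset. The gateway address and target URL below are placeholders I made up, and the gateway will only forward to endpoints on its allowed list, so adjust accordingly:

import requests

# Placeholders -- substitute your gateway host/port and an endpoint it forwards to.
GATEWAY_PROXY = "http://oms-gateway.contoso.local:8080"
TARGET_URL = "https://workspace-id.ods.opinsights.azure.com/"

proxies = {"https": GATEWAY_PROXY}

# Send progressively larger bodies; a reset at some size would be consistent
# with a POST body-size limit somewhere on the gateway path.
for size_mb in (1, 4, 16, 32, 64):
    body = b"x" * (size_mb * 1024 * 1024)
    try:
        resp = requests.post(TARGET_URL, data=body, proxies=proxies, timeout=60)
        print(f"{size_mb} MB -> HTTP {resp.status_code}")
    except requests.exceptions.ConnectionError as exc:
        # This is the client-side view of 'connection forcibly closed'.
        print(f"{size_mb} MB -> connection error: {exc}")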

The second warning happens when there are connectivity issues: we queue the data and keep retrying until we can send it.
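
That queue-and-retry behaviour is roughly this shape. This is just a toy Python sketch of the pattern described in the event-log text, not the agent's actual code:

import random
import time
from collections import deque

def send_to_cloud(item: bytes) -> bool:
    """Stand-in for the agent's upload call; fails randomly to mimic an outage."""
    return random.random() > 0.5

queue = deque()  # the real agent spills this queue to disk, per the event text

def post_item(item: bytes) -> None:
    queue.append(item)
    flush()

def flush() -> None:
    # Deliver in order; on failure, keep everything queued and back off.
    while queue:
        if send_to_cloud(queue[0]):
            queue.popleft()   # response received, item can be dropped
        else:
            time.sleep(0.1)   # brief back-off, data stays queued for retry
            break

for i in range(5):
    post_item(f"service-map-batch-{i}".encode())
print(f"{len(queue)} item(s) still queued awaiting a response")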

Put together, I would assume data has accumulated on some agents due to connectivity issues and has perhaps reached a size limit.

Has the issue been resolved yet?

Hi Noa, many thanks for the reply. We still have the issue, and I have a case open with support for it. I will update this thread when I get further news.

@Martin Lamb did you solve this? I'm facing the exact same problem now... 2 years later...

@vitordias No, we didn't solve it, but in the end we removed the OMS gateways as all servers were migrated to Azure. We then controlled the traffic using NSGs, along the lines sketched below.
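
For anyone landing here later, an outbound NSG rule of this general shape is the idea. All names here are placeholders, the AzureMonitor service tag is one way to scope the destination, and this is an illustrative sketch with the Azure Python SDK rather than what we actually ran:

from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.network.models import SecurityRule

# Placeholder subscription, resource group, and NSG names.
client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

rule = SecurityRule(
    protocol="Tcp",
    source_address_prefix="VirtualNetwork",
    source_port_range="*",
    destination_address_prefix="AzureMonitor",  # service tag covering Log Analytics traffic
    destination_port_range="443",
    access="Allow",
    direction="Outbound",
    priority=200,
)

# Create or update the rule on the NSG and wait for the operation to finish.
client.security_rules.begin_create_or_update(
    "example-rg", "example-nsg", "Allow-AzureMonitor-Outbound", rule
).result()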