Hi everyone! My name is Rajneesh Mahajan and I’m a lead software developer on the Remote Desktop Virtualization (RDV) team. Our team is responsible for developing Microsoft RemoteFX in the Windows Server 2012 and Windows 8 releases.
Included in RemoteFX are significant improvements in the area of transports, collectively known as RemoteFX for WAN. In this blog post, we will provide more details on these transport improvements and the goals behind them.
The traditional definition of WAN (wide-area network) implies a high latency network that may be bandwidth constrained (when compared to LAN). However, in practice there is a wide range of network configurations that may qualify as WAN. Examples of such networks include United States coast-to-coast links (e.g. 60ms RTT, 5 Mbps capacity, 1% loss), intercontinental links (e.g. 250ms RTT, 3 Mbps capacity, 1% loss) and branch office links (e.g. 100ms RTT, 256 Kbps capacity, 0.5% loss). Apart from these traditional WAN networks, we also have challenging mobile and wireless networks (e.g. 3G/4G link with 200ms RTT, 1 Mbps capacity, 5% loss) that present many issues for remote desktops because of high packet loss and latency jitter from signal interference.
When designing the RemoteFX for WAN transports, instead of focusing on optimizing for a specific type of network, we focused on addressing issues caused by each of the fundamental factors affecting the experience—latency, packet loss, and available bandwidth. We also looked at other important factors like variations in latency (also called jitter) and the packet loss patterns (single or continuous, congestion-induced or non-congestion-induced).
In the next few sections, we discuss the effect of these fundamental factors on WAN performance and the relevant RemoteFX improvements.
The goal of an efficient transport is to ensure that the network capacity is utilized to its fullest while incurring minimal queuing delays. Either of these goals is easy to achieve on its own, but it is much harder to achieve both simultaneously. Sending data at a higher rate than the network can handle will achieve the goal of high throughput, but it will result in large queues forming at different points in the network. These queues will increase the delay, which is extremely detrimental to the responsiveness of any interactive, real-time application such as Remote Desktop.
Conversely, a transport can achieve the goal of low queuing delays by sending a minimal amount of data. However, this would reduce the quality of the experience due to limited bandwidth for streams such as graphics and multimedia. For a network with bottleneck capacity C and round-trip latency RTT, the transport ideally needs to keep close to C × RTT bytes (the bandwidth-delay product) in flight at all times, but no more. The challenge is in determining the network latency and bandwidth available to a RemoteFX connection.
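The C × RTT pipe-filling target above is just the classic bandwidth-delay product. As a quick sanity check (function name is ours, the link numbers are the example networks listed earlier):

```python
def bdp_bytes(capacity_bps: float, rtt_s: float) -> float:
    """Bandwidth-delay product: the number of bytes that must be kept
    in flight to fill the pipe without building queues."""
    return capacity_bps / 8 * rtt_s

# Coast-to-coast example from above: 5 Mbps capacity, 60 ms RTT
print(bdp_bytes(5_000_000, 0.060))   # → 37500.0 bytes in flight

# Intercontinental example: 3 Mbps capacity, 250 ms RTT
print(bdp_bytes(3_000_000, 0.250))   # → 93750.0 bytes in flight
```

Note how the slower intercontinental link actually needs more data in flight, because its much larger RTT dominates the product; underestimating either factor leaves the pipe partly empty.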
Many of you might be familiar with the Remote Desktop Connection client option of choosing your connection speed (shown in the following diagrams). By selecting these options we were able to improve the RemoteFX throughput (and hence experience) by changing the estimates of pipe filling requirements mentioned previously. Unfortunately, it did require customers to guess their connection speed and even then it may not have been completely accurate.
Windows 7 Remote Desktop Connection Client
Windows 8 Remote Desktop Connection Client
For RemoteFX in the Windows Server 2012 and Windows 8 releases, we have added the RemoteFX Network Auto-Detect feature to address the pipe filling problem. This feature is the default on Remote Desktop Connection, alleviating the need for users to guess their connection speed. During connection establishment and run-time data flow, this feature is used to calculate the available network capacity and link latency. These factors are then used to decide the amount of data needed to keep the pipe full while minimizing delays. Also, the detected network conditions are made available to other RemoteFX features to allow them to be adaptive to changing network conditions (e.g. RemoteFX Media Streaming may decide the target bitrate for video based on network conditions).
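The post does not disclose how RemoteFX Network Auto-Detect measures the link, so purely as an illustration, here is a sketch of packet-pair probing, a classic capacity-estimation technique that fits the description above; the function name and numbers are hypothetical, not the actual RemoteFX algorithm:

```python
def packet_pair_capacity(probe_size_bytes: int, arrival_gap_s: float) -> float:
    """Estimate bottleneck capacity (bits/sec) from two back-to-back
    probe packets: the bottleneck link spaces their arrivals apart by
    exactly one packet's transmission time, so
    capacity ≈ packet_bits / inter-arrival gap."""
    return probe_size_bytes * 8 / arrival_gap_s

# Two 1500-byte probes arriving 2.4 ms apart imply a ~5 Mbps bottleneck.
estimate = packet_pair_capacity(1500, 0.0024)
print(f"{estimate / 1e6:.1f} Mbps")
```

In practice a real estimator would send many probe pairs and filter out gaps distorted by cross-traffic and jitter before trusting the result, and would combine this with RTT samples to size the in-flight window.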
The figure below illustrates the improvements in network throughput achieved with RemoteFX Network Auto-Detect. We are not showing no-loss LAN configurations here because most protocols perform well under those conditions. The data shows that in percentage terms, we have significantly improved the TCP throughput compared to Windows Server 2008 R2. However, there is a theoretical limit on TCP throughput at higher losses and latencies. We will discuss these limitations and related RemoteFX features in the next section.
Even if a protocol solves the problem of pipe filling, packet loss can wreak havoc on performance. There are two reasons why packet loss is so harmful:
Head of Line Blocking : Because TCP ensures reliable and in-order delivery of data, a lost packet must be retransmitted by the sender. While the packet is being retransmitted, anything sent after the lost packet is stuck in the receiver’s TCP queues and can’t be delivered. This stalling is called Head of Line Blocking and it is especially detrimental to responsiveness on WAN because the stall time is at least 1.5 times the network RTT.
Loss of Throughput : Any well-behaving network protocol needs to implement congestion control, which allows senders to reduce the sending rate in response to congestion. Not implementing any congestion control can lead to a congestion collapse where the packet loss and delays are very high and little useful throughput is available. TCP uses a loss-based congestion control [NewReno or other variations of Additive Increase Multiplicative Decrease (AIMD)] where every packet loss is assumed to come from network congestion. Therefore, it backs off (by cutting its sliding window by half) on every packet loss. This approach has worked exceptionally well to keep a well-behaving global Internet. However, it can lead to severe loss of throughput in networks where we have inherent, non-congestion related loss (e.g. WiFi, 3G/4G and most high latency networks). The effect of loss on reducing TCP throughput is more pronounced at higher latencies. The following figure illustrates this effect by looking at the approximate maximum theoretical TCP throughput (calculated by the Mathis equation) for a 50 Mbps link at a few different loss and latency levels. As shown here, loss and latency can combine to reduce the throughput to as low as 270 Kbps despite having a 50 Mbps link.
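The Mathis equation referenced above bounds steady-state TCP throughput at roughly (MSS / RTT) × (C / √p), independent of the raw link speed. A small calculator makes the effect concrete (the constant C ≈ 1.22 applies to standard AIMD TCP; the example MSS, RTT, and loss values here are illustrative, not the exact points plotted in the figure):

```python
import math

def mathis_throughput_bps(mss_bytes: int, rtt_s: float, loss_rate: float) -> float:
    """Approximate maximum steady-state TCP throughput (bits/sec) per the
    Mathis equation: rate ≈ (MSS / RTT) * (C / sqrt(p)), C ≈ 1.22."""
    return (mss_bytes * 8 / rtt_s) * (1.22 / math.sqrt(loss_rate))

# 1460-byte segments on a lossy high-latency path: 300 ms RTT, 2% loss.
# The result is a few hundred Kbps -- no matter how fat the 50 Mbps
# underlying link is, standard TCP cannot go faster than this.
rate = mathis_throughput_bps(1460, 0.300, 0.02)
print(f"{rate / 1000:.0f} Kbps")
```

Because the bound scales with 1/RTT and 1/√p, halving the loss rate or the latency each buys back throughput, but on intercontinental wireless paths both are bad at once, which is exactly the regime the figure highlights.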
As long we use TCP, we are bound by its rules of reliable, in-order delivery and a congestion control that backs off aggressively for every loss. RemoteFX addresses this limitation by adding a UDP transport where we take control of all these factors. This UDP transport can provide both reliable and best-effort delivery of data depending on the needs of data producers. For example, audio and video streams are less concerned about recovering from packet loss and more concerned about jitter reduction. For such data flows, the RemoteFX UDP transport can provide the best-effort delivery without the need for retransmissions. For other producers where reliable delivery is more important, RemoteFX UDP transport incorporates reliability semantics with some important improvements mentioned below.
Forward Error Correction: RemoteFX UDP transport uses Forward Error Correction (FEC) to recover from the lost data packets. In the cases where such packets can be recovered, the transport doesn’t need to wait for the data to be retransmitted, which allows immediate delivery of data and prevents Head of Line Blocking. Preventing this stall results in an overall improved responsiveness.
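The post does not specify which FEC scheme RemoteFX uses, but the idea can be illustrated with the simplest possible code: one XOR parity packet per group of data packets, which lets the receiver rebuild any single lost packet without waiting a round trip for a retransmission (this sketch is for intuition only, not the actual RemoteFX wire format):

```python
def xor_parity(packets):
    """Build one parity packet as the byte-wise XOR of a group of
    equal-length data packets; sent alongside the group as redundancy."""
    parity = bytearray(len(packets[0]))
    for pkt in packets:
        for i, b in enumerate(pkt):
            parity[i] ^= b
    return bytes(parity)

def recover(received, parity):
    """Rebuild a single missing packet (marked None) by XOR-ing the
    parity packet with every packet that did arrive."""
    missing = [i for i, p in enumerate(received) if p is None]
    assert len(missing) == 1, "XOR parity repairs at most one loss per group"
    repaired = bytearray(parity)
    for pkt in received:
        if pkt is not None:
            for i, b in enumerate(pkt):
                repaired[i] ^= b
    out = list(received)
    out[missing[0]] = bytes(repaired)
    return out

group = [b"AAAA", b"BBBB", b"CCCC"]
parity = xor_parity(group)
# Second packet lost in transit -- recovered locally, no retransmission:
print(recover([b"AAAA", None, b"CCCC"], parity))
```

The trade-off is visible even in this toy: the parity packet is pure overhead when nothing is lost, which is why (as noted below) the transport avoids paying the FEC cost when it isn't needed.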
Addressing Loss of Throughput: A protocol without rigorous congestion control is very detrimental to a network’s performance and may impact all network flows even outside remoting. Therefore, the RemoteFX UDP protocol implements industry standard congestion control, while avoiding the unnecessary back-off that TCP performs on non-congestion related losses.
Because Forward Error Correction requires some redundancy in the transmitted data, we didn’t want this overhead to be incurred when not necessary. Therefore, the UDP transport is added in addition to the current TCP transport, which is still used for initial connectivity and non-interactive traffic. UDP is used to transfer interactive data like graphics, audio, video, and touch commands.
The following figure illustrates the improvements we can achieve in overall throughput compared to TCP. This is a very small subset of the performance measurements matrix used to evaluate RemoteFX UDP transport.
Throughput is just a part of an optimized WAN transport. Queuing delay has an equally important role to play in overall responsiveness. The following figure shows how RemoteFX UDP transport is able to keep delays in a good range (around or less than 100ms) even under extreme WAN conditions.
The following figure provides a visual summary of the types of performance improvements and their expected impact.
As discussed above, RemoteFX UDP transport provides significantly improved performance over various WAN configurations. We have ensured that these performance gains don’t come at the expense of security.
When UDP is used for reliable data transfer, it is secured by using Secure Sockets Layer (SSL), similar to the TCP transport. However, SSL can’t be used for best-effort delivery where some data may be lost. Rather than inventing a custom security mechanism for such best-effort UDP transports, we have teamed with the Windows Security Team to use Datagram Transport Layer Security (DTLS), which is a proposed standard defined by IETF RFC 6347.
UDP presents many connectivity challenges because of its best-effort nature and lack of support in some network configurations. We have taken many steps to ensure that UDP is available in most network configurations. The most significant of these steps is the native UDP support in RD Gateway (rather than tunneling it through TCP like some VPNs). For security and ease of management, only one port (3391) is opened on RD Gateway for UDP connectivity (similar to TCP). Also, only one port (3389) is opened on the RD Session Host server and RD Virtualization Host VMs for UDP (similar to TCP).
The availability of UDP transport is indicated by the connection quality indicator in the Remote Desktop Connection bar.
We have provided the necessary Group Policy settings to allow administrator control over new RemoteFX for WAN configurations. These policy settings are located under Computer Configuration -> Administrative Templates -> Windows Components -> Remote Desktop Services -> Remote Desktop Session Host -> Connections.
One of these policy settings allows an administrator to configure the use of UDP and TCP transports.
Another policy setting allows an administrator to configure RemoteFX Network Auto-Detect.
In summary, RemoteFX incorporates transports that are adaptive to the changing network conditions and packet loss. Although UDP transport will provide the best performance over wireless and high latency WAN networks, we realize that UDP connections may not always be established in many real world network configurations. Therefore, many of the other RemoteFX for WAN improvements such as RemoteFX Network Auto-Detect have also been incorporated in the TCP transport. We dynamically switch to TCP where UDP is not available. The UDP transport incorporates industry standard encryption and security practices and is also natively supported through RD Gateway for best connectivity. Combined with other improvements in the RemoteFX Adaptive Graphics stack, it provides an optimal user experience for all networks.
NOTE: Questions and comments are welcome. However, please DO NOT post a request for troubleshooting by using the comment tool at the end of this post. Instead, post a new thread in the RDS & TS forum at http://social.technet.microsoft.com/Forums/en-US/winserverTS/threads.