Top 10 RDP Protocol Misconceptions – Part 1
Published Sep 07 2018 06:31 PM
First published on CloudBlogs on Mar 03, 2009

Hi,

My name is Nadim Abdo and I’m the development manager responsible for the Remote Desktop Protocol (RDP).

Since we first shipped RDP in 1998 with Windows NT Terminal Services Edition, we’ve gotten a lot of very useful feedback on RDP (please keep it coming!). But we’ve also heard a lot of ‘interesting’ myths and misconceptions about RDP and its performance. So I thought, why not try to bust some of those myths?

This post will at times get technical for those inclined toward the details, but I’ll also try to share some useful tidbits that end users can apply to get an even better RDP experience.

Top 10 RDP Protocol Misconceptions (Part 1 of 2):

1) Myth: RDP is pretty slow because it has to scrape the screen and can only send giant bitmaps

This is a common misconception. While many alternative protocols are principally screen scrapers, RDP uses sophisticated techniques to get much better performance than can be obtained with a simple screen scraping approach.

To drill into this it helps to first talk a little about what screen scraping really means (i.e. what RDP does not do today) and why it can be slow:
In a screen scraping protocol the server side has to ‘poll’ screen contents frequently to see if anything has changed. Screen scraping polling involves frequent and costly memory ‘scrapes’ of screen content and then scanning through a lot of memory (a typical 1600x1200 by 32bpp screen is about 7MB of data) to see what parts may have changed. This burns up a lot of CPU cycles and leaves the protocol with few options but to send large resulting bitmaps down to the client.
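To put rough numbers on that, here is a minimal back-of-the-envelope sketch in Python. The resolution and the polling rate are illustrative assumptions, not measurements of any real product:

    # Back-of-the-envelope cost of naive screen-scrape polling.
    # Assumes a full-frame read and compare on every poll; the resolution
    # and poll rate below are illustrative assumptions only.

    WIDTH, HEIGHT, BYTES_PER_PIXEL = 1600, 1200, 4   # 32bpp desktop
    POLLS_PER_SECOND = 10                            # assumed polling cadence

    frame_bytes = WIDTH * HEIGHT * BYTES_PER_PIXEL
    print(f"One frame: {frame_bytes / 2**20:.1f} MB")   # ~7.3 MB

    # Every poll has to read the whole frame and scan it against the previous
    # copy to find what changed -- even when nothing changed at all.
    bytes_touched_per_second = frame_bytes * 2 * POLLS_PER_SECOND
    print(f"Memory touched just to detect changes: "
          f"{bytes_touched_per_second / 2**20:.0f} MB/s per session")

And that is before the protocol has sent a single byte to the client; the scraped regions still have to be encoded and pushed down the wire as bitmaps.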

So what does RDP do differently today, and why is it faster?

RDP uses presentation virtualization to enable a much better end-user experience, scalability and bandwidth utilization. RDP plugs into the Windows graphics system the same way a real display driver does, except that, instead of being a driver for a physical video card, RDP is a virtual display driver. Instead of sending drawing operations to a physical hardware GPU, RDP makes intelligent decisions about how to encode those commands into the RDP wire format. This can range from encoding bitmaps to, in many cases, encoding much smaller display commands such as “Draw line from point 1 to point 2” or “Render this text at this location.”
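To make the contrast concrete, here is a small sketch. The opcode and field layout below are invented purely for illustration (they are not the real RDP wire format); the point is simply how much smaller a display command is than the pixels it produces:

    import struct

    # Hypothetical, simplified encoding -- NOT the real RDP wire format.
    # It only illustrates why a display command beats sending raw pixels.

    def encode_draw_line(x1, y1, x2, y2, color):
        """One 'draw line' command: 1-byte opcode, four 16-bit coords, 32-bit color."""
        return struct.pack("<BHHHHI", 0x01, x1, y1, x2, y2, color)

    def raw_bitmap_bytes(width, height, bytes_per_pixel=4):
        """Bytes needed to ship the same region as an uncompressed bitmap."""
        return width * height * bytes_per_pixel

    cmd = encode_draw_line(0, 0, 799, 0, 0xFF000000)          # an 800-pixel horizontal line
    print(f"Command encoding: {len(cmd)} bytes")               # 13 bytes
    print(f"Raw 800x1 bitmap: {raw_bitmap_bytes(800, 1)} bytes")  # 3,200 bytes

Thirteen bytes versus a few kilobytes for a single line of pixels, and the gap only grows for text and larger shapes.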

To illustrate some of the benefits on CPU load, terminal servers today can scale to many hundreds of users. In some of our scalability tests we see that even with hundreds of users connecting to one server running knowledge worker apps (e.g. Word, Outlook, Excel) the total CPU load consumed by RDP to encode and transmit the graphics is only a few percent of the whole system CPU load!

With this approach RDP avoids the costs of screen scraping and has a lot more flexibility in encoding the display as either bitmaps or a stream of commands to get the best possible performance.

2) Myth: RDP uses a lot of bandwidth

In many common and important scenarios such as knowledge worker applications and line of business app centralization RDP’s bandwidth usage is very low (on the order of Kbps per user depending on the app and scenario).

This is certainly much lower than many of the screen scraping approaches can hope to achieve (see point #1 above). More importantly, it’s low enough to provide a good experience for many users sharing the same network and datacenter infrastructure, even over a slow network.

So why has there been a perception that RDP uses a lot of bandwidth?

This is a good question, and the answer probably lies in the fact that RDP does not use a constant amount of bandwidth; it actually tries to reduce bandwidth usage to zero when nothing is changing on the screen. Bandwidth consumption only goes up in proportion to what is changing on screen. For instance, if you just run a line-of-business app with basic graphics and not much animation, you may end up sending just a few Kbps down the wire. Of course, if you start running animation-heavy applications or graphics, your bandwidth usage will go up to support that scenario.
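Here is a toy model of that “pay only for what changes” behavior. The per-update byte costs are invented for illustration and are not RDP measurements; the shape of the curve is what matters:

    # Toy model: bandwidth scales with how much of the screen changes and
    # drops to zero when nothing changes. Byte costs are illustrative only.

    def bytes_for_update(dirty_regions):
        """dirty_regions: list of (width, height) areas changed since the last update."""
        HEADER_BYTES = 32                 # assumed fixed per-update overhead
        if not dirty_regions:
            return 0                      # idle screen -> nothing on the wire
        payload = sum(w * h // 50 for w, h in dirty_regions)  # assume ~50:1 effective encoding
        return HEADER_BYTES + payload

    print(bytes_for_update([]))              # 0 bytes: nothing changed
    print(bytes_for_update([(200, 20)]))     # a line of typed text: ~a hundred bytes
    print(bytes_for_update([(1600, 1200)]))  # full-screen repaint: tens of KB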

Sample bandwidth usage figures for RDP 6.1 in common scenarios can be found in the RDP Performance Whitepaper.

Your mileage will vary depending on your application and network conditions, so it’s important to measure empirically for your scenario, but the whitepaper gives useful general trends.

3) Myth: I can’t get the same rich experience I get locally when working over RDP

This is also a misconception. RDP provides a scalable remoting experience. By default it cuts down on rich effects in the desktop and application experience in order to preserve bandwidth and save on server load (e.g. CPU, memory). However, if you want the highest end user experience it is possible to turn on many rich effects and display enhancements such as:

· ClearType

· Wallpaper

· The Aero theme with full glass and 3D effects (when connecting to Vista with RDP 6.1)

· 32-bit per pixel color

The key to enabling many of these effects is to run the Remote Desktop client, click Options, and then click the Experience tab. Here you can select and enable many high-end features. Note that in some cases your admin might have controlled access to these features with server-side group policies.
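If you’d rather script these choices than click through the dialog each time, the same options can be stored in a saved .rdp connection file. A minimal sketch follows; the setting names are the ones the Remote Desktop client stores in .rdp files, but double-check them against a file saved by your own client version, and note that the output filename here is just an example:

    # Sketch: write the Experience-tab choices as .rdp file settings.
    # Verify the setting names against a file your own client saves;
    # the output filename is illustrative.

    experience_settings = {
        "session bpp": 32,               # 32-bit per pixel color
        "allow font smoothing": 1,       # ClearType
        "allow desktop composition": 1,  # Aero glass (Vista with RDP 6.1)
        "disable wallpaper": 0,          # 0 = keep the wallpaper
        "disable themes": 0,
        "disable full window drag": 0,
        "disable menu anims": 0,
    }

    with open("experience-settings.rdp", "w", encoding="utf-8") as rdp_fragment:
        # Each .rdp setting is a "name:i:value" line (i = integer type).
        for name, value in experience_settings.items():
            rdp_fragment.write(f"{name}:i:{value}\n")

Merge lines like these into the connection file you already use. As noted above, server-side group policies set by your admin can still override what the client asks for.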

In many cases you can get a great end user experience with good parity to the local case.

We’re also constantly working to ‘close the gap’ between the local and remote experience, and we’re looking to improve the remote experience even more in future versions.

4) Myth: RDP can’t be tuned to get better performance

This is again a misconception. RDP has a set of defaults that tries to provide the best balance between bandwidth usage, the remote user experience, and server scalability. However, you can override many settings if you want to manually tune for a specific scenario, and in some cases get very significant boosts in performance.

TIP: One of my favorite such settings is the ability to set policy on the server to optimize RDP compression. This can give you as much as a 60% bandwidth improvement over previous versions of RDP. The tradeoff is that you’d be consuming more server resources (such as memory and possibly CPU) to achieve that bandwidth reduction.

The Group Policy setting to control this is:

Administrative Templates\Windows Components\Terminal Services\Terminal Server\Remote Session Environment\“Set compression algorithm for RDP data”

There is more information on tuning the bulk compressor, as well as other tunable RDP parameters such as cache sizes, in the RDP Performance Whitepaper.
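To build an intuition for that bandwidth-versus-resources trade-off, here is a sketch using Python’s general-purpose zlib compressor. This is not RDP’s bulk compressor; it just shows the general pattern that squeezing harder costs more CPU time (and working memory) per byte saved:

    import time
    import zlib

    # Generic compressor demo of the bandwidth-vs-CPU trade-off.
    # This is NOT RDP's bulk compressor; the payload is synthetic.

    payload = b"GET /app/report?id=12345 HTTP/1.1\r\n" * 2000   # repetitive, app-like data

    for level in (1, 6, 9):                          # light -> aggressive compression
        start = time.perf_counter()
        compressed = zlib.compress(payload, level)
        elapsed_ms = (time.perf_counter() - start) * 1000
        ratio = len(compressed) / len(payload)
        print(f"level {level}: {ratio:.1%} of original size, {elapsed_ms:.2f} ms to compress")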

5) Myth: Using lower color depths -- e.g. 8bpp -- gives the best end user experience

This is a common misconception; it was historically true, but not anymore!

The first version of RDP only supported 8bpp color. However, ever since Windows XP, RDP has supported up to 32bpp color.

The reason for this is that more and more apps have come to expect 32bpp mode as the default. Even the Windows Aero experience requires it.

Rather than deny this trend and create a difference between the local and remote experiences, we put a lot of effort into optimizing the 32bpp case to bring down its cost. This allows the user to have the flexibility to pick what is best for their scenario without necessarily having to incur a much bigger bandwidth cost.

In general I’d recommend running your scenario at 32bpp and measuring the resulting bandwidth to see if it’s acceptable. 32bpp will usually give the best visual experience and in several cases will consume only a small percentage more data than 16bpp.
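If you want a rough upper bound to start from, here is a small sketch that only compares raw, uncompressed framebuffer sizes at different depths. Keep in mind that real RDP traffic is far lower than this, and the measured gap between 16bpp and 32bpp is much narrower once the encoder and compressor have done their work; the desktop size below is just an example:

    # Raw, uncompressed framebuffer size at different color depths.
    # This is only the pre-compression upper bound; measured RDP bandwidth
    # is far lower and the 16bpp vs 32bpp gap is much smaller in practice.

    WIDTH, HEIGHT = 1280, 1024   # illustrative desktop size

    for bpp in (8, 16, 32):
        size_mb = WIDTH * HEIGHT * (bpp // 8) / 2**20
        print(f"{bpp:>2} bpp: {size_mb:.2f} MB per full frame (raw)")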

That’s it for Part 1. I hope this list has been useful; come back soon for Part 2 of the Top 10 RDP Misconceptions list.
