Project Prague: What is it and why should you care?
Published Mar 21 2019 06:07 AM
First published on MSDN on Jul 23, 2017

Guest blog from Charlie Crisp, Microsoft Student Partner at the University of Cambridge

This year at Build, Microsoft announced a ton of cool new stuff, including the new Cognitive Services Labs – a service that gives developers early access to the newest and most experimental tools being developed by Microsoft.

Particularly exciting was the announcement of Project Prague, which aims to empower developers to make use of advanced gesture recognition within their applications without needing to write a single line of code.

And why should you care? Well aside from being ridiculously cool, this is the sort of stuff that even your non-techie friends will want to hear about. So let me set the scene…

The use of keyboards dates back to the 1870s, when they were used to type and transmit stock market text data across telegraph lines, which was then immediately printed onto ticker tape. Mice, on the other hand, took a lot longer to come about, and it wasn’t until 1946 that the world was first introduced to the ‘Trackball’ – a pointing device used as an input for a radar system developed by the British Royal Navy.

Ever since, computers have been used primarily with a keyboard and mouse (or trackpad), and advances in technologies such as intelligent pens and gesture control have done little to change this. Navigating through different right-click menus and keyboard shortcuts, however, can be cumbersome and time-consuming.

Gesture control offers a natural and intuitive alternative way of interacting with a computer. Whether it’s moving and rotating pictures, navigating through tabs, or inserting emojis, Project Prague allows developers to recognise and react to any gesture a user can make with their hands.

But the coolest part of this is not how easy this technology is for users, but how easy it is for developers.

Gestures are defined as a series of different hand positions, and this can be done either in code or by using Microsoft’s visual interface. Then it is as simple as adding an event listener which will be triggered whenever that gesture is recognised. Microsoft will even automatically generate visual graphics which show the user what gestures are available in any given program, and what the effects of those gestures are.
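To make that concrete, here is a minimal sketch of the pattern in Python-flavoured pseudocode. Note the caveats: the real Project Prague SDK is .NET-based, and every name below (`HandPose`, `Gesture`, `observe`, the listener method) is illustrative, not the actual API – this just models the idea of "a gesture is an ordered sequence of hand poses, plus an event listener that fires on completion":

```python
# Illustrative sketch only: models the "gesture = sequence of hand
# poses + event listener" idea described above. The class and method
# names are hypothetical, not the real Project Prague API.

class HandPose:
    def __init__(self, name, finger_states):
        self.name = name
        self.finger_states = finger_states  # e.g. {"index": "extended"}

class Gesture:
    def __init__(self, name, poses):
        self.name = name
        self.poses = poses        # ordered sequence of HandPose objects
        self._listeners = []
        self._progress = 0        # index of the next pose to match

    def add_triggered_listener(self, callback):
        self._listeners.append(callback)

    def observe(self, pose_name):
        # Advance through the pose sequence; fire listeners on completion.
        if pose_name == self.poses[self._progress].name:
            self._progress += 1
            if self._progress == len(self.poses):
                self._progress = 0
                for callback in self._listeners:
                    callback(self.name)
        else:
            self._progress = 0    # wrong pose: start matching over

# Define a two-pose "rotate" gesture and react when it completes.
pinch = HandPose("pinch", {"thumb": "touching", "index": "touching"})
spread = HandPose("spread", {"thumb": "open", "index": "open"})
rotate = Gesture("rotate", [pinch, spread])
rotate.add_triggered_listener(lambda name: print(f"{name} gesture recognised!"))

for observed in ["pinch", "spread"]:  # simulated camera frames
    rotate.observe(observed)
```

The appeal of the real service is that the pose-matching state machine sketched here – and the hand tracking feeding it – is handled for you; your application only supplies the gesture definition and the listener.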

If you are even half as excited as I am about this, then I would urge you to check out the Project Prague page, which has more information and a series of awesome demos that are well worth a watch. You can even sign up to test out the technology for yourself thanks to the wonders of Cognitive Services Labs! At the very least, it’s a great way of really freaking out your grandparents!

If you have found this interesting and want to learn more, then I strongly suggest that you check out the Project Prague documentation, which includes code samples and the SDK.
