UCL & Intel® VisualNav v2 - Facial Navigation for Visual Studio using OpenVINO Technology
Published Sep 03 2022

Facial Navigation with OpenVINO for Accessible Development in Visual Studio

This blog highlights a new touchless interface built on UCL MotionInputV3.


Over the Covid period, University College London academics and students developed UCL MotionInputV3. This technology uses computer vision with a regular webcam to control your computer like a mouse and keyboard, and natural language processing to control your existing applications and shortcuts. Working with Intel® UK and Microsoft, we created VisualNav for Visual Studio, an extension that makes coding accessible and fun: disabled developers and young pupils with accessibility needs can design, write and test programs in Visual Studio.



The WIMP (windows, icons, menus and pointers) interface paradigm dominates modern computing systems, but disabled users may find it challenging to use. UCL's MotionInput enables voice and gesture control of a computer with only a webcam and microphone. VisualNav for Visual Studio provides a touchless interface for writing code, designed to be approachable at every level, from beginner through to advanced.


The project adopts a visual coding paradigm, in which blocks of code are pieced together like Lego bricks; this will be familiar to children coming from Microsoft MakeCode. We use CefSharp, an embedded web browser based on Chromium, to integrate Blockly, the JavaScript library that MakeCode is built upon, and render this panel directly within Visual Studio.
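To give a flavour of what the embedded panel works with, here is a minimal sketch of a Blockly block defined in the JSON format that `Blockly.defineBlocksWithJsonArray()` consumes, plus a small helper that builds the toolbox XML Blockly reads. The block name and helper are illustrative assumptions, not VisualNav's actual definitions; in VisualNav, the page hosting such definitions is rendered inside Visual Studio by CefSharp.

```javascript
// Hypothetical Blockly block definition in JSON form. In the browser page
// this would be registered with Blockly.defineBlocksWithJsonArray([printBlock]).
const printBlock = {
  type: "console_print",          // illustrative block type name
  message0: "print %1",           // label text with one input slot
  args0: [{ type: "input_value", name: "TEXT" }],
  previousStatement: null,        // can connect below another statement
  nextStatement: null,            // and above the next one
  colour: 160,
};

// Build the <xml> toolbox string Blockly expects, listing block types.
function buildToolboxXml(blockTypes) {
  const items = blockTypes.map((t) => `<block type="${t}"></block>`).join("");
  return `<xml id="toolbox">${items}</xml>`;
}

console.log(buildToolboxXml([printBlock.type]));
```

Because the definitions are plain data, the same JSON can describe blocks for any of the supported target languages; only the code generator differs.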


How does it work?

Users select a code block from an accessible command palette with minimal motor movement, via a radial dial component. To ensure the correct block is selected, a preview shows a description and visualisation. Finally, a building workspace lets users drag and assemble the blocks, which are then compiled into code. Throughout the process, voice commands can trigger shortcuts as accelerators.
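The core of a radial dial like this is mapping a pointer angle (here, derived from the facial-navigation cursor) to one of N evenly sized sectors. The sketch below shows that mapping under assumed names; the palette categories and function are illustrative, not VisualNav's actual API.

```javascript
// Map a pointer angle in degrees to a sector index on an N-item radial dial.
function sectorForAngle(angleDegrees, sectorCount) {
  // Normalise into [0, 360) so negative angles wrap correctly.
  const norm = ((angleDegrees % 360) + 360) % 360;
  return Math.floor(norm / (360 / sectorCount));
}

// Illustrative palette categories for a six-sector dial.
const palette = ["loops", "logic", "math", "text", "variables", "functions"];

// A cursor pointing at 95 degrees lands in sector 1 (60-120 degrees).
console.log(palette[sectorForAngle(95, palette.length)]); // → logic
```

Because selection depends only on the angle, a small head movement anywhere near the dial's edge is enough, which is what keeps the required motor movement minimal.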


The project supports 9 kinds of blocks and contains 65 block elements, supporting JavaScript, Python, PHP, Lua, Dart and C#. C# adds the extra feature of 'custom blocks', which allows library functions to be added to the radial menu as blocks, enabling advanced developers to build more complex applications.
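Blockly ships generators for JavaScript, Python, PHP, Lua and Dart, so emitting C# requires a custom generator. The sketch below shows the idea under assumed names: a registry mapping block types (including a hypothetical 'custom_call' block standing in for a library function surfaced as a custom block) to functions that emit C# source. The block shape and registry are illustrative, not VisualNav's implementation.

```javascript
// Hypothetical per-block C# emitters, keyed by block type.
const csharpGenerators = {
  // A library function exposed on the radial menu as a custom block.
  custom_call: (block) => `${block.fields.NAME}(${block.fields.ARGS});`,
  // A repeat block wrapping already-generated body code in a for loop.
  repeat: (block, body) =>
    `for (int i = 0; i < ${block.fields.TIMES}; i++)\n{\n${body}\n}`,
};

function generateCSharp(block, body = "") {
  const gen = csharpGenerators[block.type];
  if (!gen) throw new Error(`no C# generator for block '${block.type}'`);
  return gen(block, body);
}

console.log(
  generateCSharp({
    type: "custom_call",
    fields: { NAME: "Console.WriteLine", ARGS: '"hi"' },
  })
); // → Console.WriteLine("hi");
```

Registering a new library function then only means adding one entry to the registry and one block definition, which is what makes the custom-block mechanism practical for advanced users.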


It is now possible to write code with only facial movement and voice commands - Visit Site




Version history
Last update: Sep 04 2022 12:54 AM