Facial Navigation with OpenVINO for Accessible Development in Visual Studio
This blog highlights a new touchless interface for use with the UCL MotionInputV3.
During the COVID-19 pandemic, University College London academics and students developed UCL MotionInputV3. This technology uses computer vision with a regular webcam to control a computer in place of a mouse and keyboard, and it uses natural language processing to drive existing applications and shortcuts. Working with Intel® UK and Microsoft, we created VisualNav for Visual Studio, an extension that makes coding accessible and enjoyable for disabled developers and young pupils with accessibility needs, letting them design, write and test programs in Visual Studio.
The WIMP (windows, icons, menus and pointers) interface paradigm dominates modern computing, but disabled users may find it challenging to use. UCL's MotionInput enables voice and gesture control of a computer with only a webcam and microphone. VisualNav for Visual Studio builds on this to provide a touchless interface for writing code, designed to be easy to pick up for newcomers while remaining useful from beginner through to advanced level.
How does it work?
Users select a code block from an accessible command palette with minimal motor movement, using a radial dial component. To confirm that the correct block has been selected, a preview shows a description and a visualisation. Finally, a building workspace lets users drag and assemble the blocks, which are then compiled into code. Throughout the process, voice commands can trigger shortcuts and act as an accelerator.
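The radial-dial selection step can be illustrated with a small sketch. This is a hypothetical example, not the actual VisualNav implementation: the function name, the sector layout, and the idea of mapping a head or nose displacement vector to a dial sector are all assumptions made for illustration.

```python
import math

def radial_sector(dx: float, dy: float, n_sectors: int) -> int:
    """Map a 2D displacement vector (e.g. nose offset from a neutral
    position, as tracked by a webcam) to one of n_sectors equal slices
    of a radial dial. Sector 0 is straight up; sectors increase
    clockwise. Screen coordinates are assumed (y grows downward).

    Hypothetical sketch; not the VisualNav source code.
    """
    # atan2(dx, -dy) puts 0 degrees at the top and increases clockwise
    angle = math.degrees(math.atan2(dx, -dy)) % 360.0
    return int(angle // (360.0 / n_sectors))

# Example: an 8-sector dial of code-block categories
palette = ["if", "for", "while", "function", "variable",
           "class", "return", "print"]
print(palette[radial_sector(1.0, 0.0, 8)])   # head moved right
```

A head movement straight up selects sector 0, a movement to the right selects sector 2 on an 8-sector dial, and so on; the selected sector indexes into the command palette of available code blocks.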
It is now possible to write code using only facial movements and voice commands - Visit Site