Cloud AI/ML APIs: how easy is it to deploy these on the edge?
First published on MSDN on Feb 12, 2019


Custom Vision Identification

The process is very easy: we use the Custom Vision service ( http://customvision.ai ) to train an image classifier and then export the finished model in a number of different formats – CoreML, TensorFlow, ONNX, or a Dockerfile for IoT Edge/Azure Functions/Azure ML.



So the process is:

  1. Use CustomVision.ai to build a model.
  2. Export the model in the appropriate format, e.g. TensorFlow.
  3. Take the boilerplate IoT Edge module code and swap in the new model files.
  4. Connect a test data source as appropriate and execute against the model (a minimal sketch of this step follows the list).
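
As a minimal sketch of step 4, the snippet below loads a Custom Vision TensorFlow export (model.pb plus labels.txt) and scores one test image. The node names 'Placeholder:0' and 'loss:0' and the 224x224 input size are assumptions based on typical Custom Vision exports – check them against your own model.

```python
# Minimal sketch: score one image against a Custom Vision TensorFlow
# export (model.pb + labels.txt). TensorFlow 1.x style, as in 2019.
import numpy as np
import tensorflow as tf
from PIL import Image

graph_def = tf.GraphDef()
with open('model.pb', 'rb') as f:
    graph_def.ParseFromString(f.read())

with open('labels.txt') as f:
    labels = [line.strip() for line in f]

with tf.Graph().as_default() as graph:
    tf.import_graph_def(graph_def, name='')

# Resize the test image to the shape the model was trained on
# (224x224 RGB assumed here - adjust to match your export).
image = Image.open('test.jpg').convert('RGB').resize((224, 224))
batch = np.expand_dims(np.array(image, dtype=np.float32), axis=0)

with tf.Session(graph=graph) as sess:
    # 'Placeholder:0' / 'loss:0' are the node names typical Custom
    # Vision exports use; verify against your own model.
    scores = sess.run('loss:0', feed_dict={'Placeholder:0': batch})

for label, score in zip(labels, scores[0]):
    print(f'{label}: {score:.3f}')
```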


For an end-to-end example see https://www.hackster.io/glovebox/image-recognition-with-azure-iot-edge-and-cognitive-services-a...

What can this run on?

The evolution of this is to exploit Intel's Myriad processors by running a model – such as those provided by Intel's OpenVINO – on the Myriad VPU under the control of a Pi. This has enabled some significant CNN models to be executed at the edge.
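
The sketch below shows roughly what that offload looks like with the OpenVINO Inference Engine Python API as it stood around 2019: the Pi's CPU only loads an IR model and dispatches inference to the 'MYRIAD' device. The model file names here are placeholders, and the class names vary between OpenVINO releases, so treat the details as assumptions.

```python
# Hedged sketch: running an OpenVINO IR model on a Myriad VPU (e.g. an
# Intel Neural Compute Stick) attached to a Raspberry Pi.
import numpy as np
from openvino.inference_engine import IECore, IENetwork

ie = IECore()
# Placeholder file names - substitute your own IR model.
net = IENetwork(model='model.xml', weights='model.bin')

input_name = next(iter(net.inputs))
output_name = next(iter(net.outputs))

# 'MYRIAD' targets the VPU; the Pi's ARM CPU only orchestrates.
exec_net = ie.load_network(network=net, device_name='MYRIAD')

# Dummy frame with the network's expected NCHW input shape.
n, c, h, w = net.inputs[input_name].shape
frame = np.zeros((n, c, h, w), dtype=np.float32)

result = exec_net.infer(inputs={input_name: frame})
print(result[output_name].shape)
```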

Using Containers

We are containerising our Cognitive Services ( https://docs.microsoft.com/en-us/azure/cognitive-services/cognitive-services-container-support ). Currently available are Computer Vision – OCR, Face, LUIS, Text Analytics – key phrase extraction, Text Analytics – language detection, and Text Analytics – sentiment analysis.
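
Once one of these containers is running locally (for example the Text Analytics sentiment image mapped to port 5000), calling it is just a REST request that never leaves your network. The route and API version below assume the v2.1 sentiment container; check the container documentation for the exact path your image exposes.

```python
# Hedged sketch: call a locally running Text Analytics sentiment
# container. Assumes the v2.1 route on port 5000.
import requests

endpoint = 'http://localhost:5000/text/analytics/v2.1/sentiment'
payload = {
    'documents': [
        {'id': '1', 'language': 'en',
         'text': 'Deploying this on the edge was surprisingly easy.'}
    ]
}

response = requests.post(endpoint, json=payload)
response.raise_for_status()

# v2.1 returns a sentiment score between 0 (negative) and 1 (positive).
for doc in response.json()['documents']:
    print(doc['id'], doc['score'])
```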

Voice Recognition

Voice recognition with key-phrase activation is an interesting topic. There are of course OS features for Linux and Windows which provide this on low-end devices such as the Raspberry Pi. Windows IoT Core has speech recognition for a number of languages, and there are a number of examples of simple voice assistants; one such example is https://wotudonet.wordpress.com/2016/08/04/project-patrick-build-your-own-personal-assistant/
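
As a hedged sketch of key-phrase activation on a Pi-class device, the snippet below uses the community SpeechRecognition package with the offline PocketSphinx engine (neither is mentioned in the article; pip install SpeechRecognition pocketsphinx). The wake phrase and sensitivity are illustrative values.

```python
# Hedged sketch: offline wake-phrase detection with SpeechRecognition
# and PocketSphinx on a low-end device.
import speech_recognition as sr

recognizer = sr.Recognizer()

with sr.Microphone() as mic:
    recognizer.adjust_for_ambient_noise(mic)
    print('Listening for the wake phrase...')
    while True:
        audio = recognizer.listen(mic, phrase_time_limit=3)
        try:
            # keyword_entries biases recognition towards the wake
            # phrase; sensitivity runs from 0 (strict) to 1 (loose).
            recognizer.recognize_sphinx(
                audio, keyword_entries=[('hey device', 0.8)])
            print('Wake phrase heard - hand off to full recognition.')
            break
        except sr.UnknownValueError:
            continue  # nothing matched; keep listening
```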

There are also specialist chips which provide local voice processing, specifically activation-word recognition, such as the XMOS chip used in the ReSpeaker expansion board: https://www.xmos.com/

ReSpeaker even has a workshop covering the end-to-end architecture of a smart speaker: https://respeaker.io/make_a_smart_speaker/

Embedded Learning Library

ELL targets resource-constrained platforms and single-board computers: https://github.com/Microsoft/ELL

The Embedded Learning Library (ELL) allows you to design and deploy intelligent machine-learned models onto resource-constrained platforms and small single-board computers, like Raspberry Pi, Arduino, and micro:bit. The deployed models run locally, without requiring a network connection and without relying on servers in the cloud. ELL is an early preview of the embedded AI and machine learning technologies developed at Microsoft Research.

Go to the ELL website for tutorials, instructions, and a gallery of pretrained ELL models for use in your projects.
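
As a rough illustration of what the deployed code looks like, the sketch below calls a compiled ELL model from Python. ELL generates a device-specific wrapper module per model; the module name ('model') and the functions shown approximate those in the ELL tutorials and should be treated as assumptions – the tutorials document the exact generated API.

```python
# Hedged sketch: invoking a compiled ELL model on-device, no network
# connection required. Wrapper/function names approximate the ELL
# tutorials and may differ in your build.
import numpy as np
import model  # the device-specific wrapper ELL compiles per model

# Ask the compiled model what input it expects, then feed a placeholder
# frame of that size (a real app would pass a camera image here).
input_shape = model.get_default_input_shape()
frame = np.zeros(input_shape.Size(), dtype=np.float32)

predictions = model.predict(frame)
print('Top score:', max(predictions))
```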

