Perceptmobile: Azure Percept Obstacle Avoidance LEGO Car
Published May 14 2021 08:00 AM

Azure Percept is a platform of hardware and services that simplifies the use of Azure AI technologies on the edge. The development kit comes with an intelligent camera, Azure Percept Vision, and it can also be extended with Azure Percept Audio, a linear microphone array. Azure Percept works out of the box with Azure services such as Cognitive Services, Machine Learning, Live Video Analytics, and others to deliver vision and audio insights in real time. Scenarios like object detection, spatial analytics, anomaly detection, keyword spotting, and others can easily be solved with pre-built Azure AI models for the edge.

 

I build "Perceptmobile", an Azure Percept-powered obstacle avoidance LEGO Boost car, as a weekend project and in this post I will walk you through all steps how it was built.

 

[Image: Perceptmobile, side view]

 

In the standard LEGO Boost package you can find instructions on how to build 4 different models, one of which is the M.T.R. 4 model, which I modified a bit to fit the needs of this project. The model was used as a base on top of which I placed Azure Percept with the Azure Percept Vision camera. The LEGO Boost package also comes with 3 cones, which I took pictures of and used to train a Custom Vision model. Custom Vision models can easily be deployed to Azure Percept via Azure Percept Studio, and you can easily test them against the camera stream.
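Once the model is deployed, the vision module publishes each inference result as device-to-cloud telemetry. As an illustration only (the exact schema depends on the module version, so treat the field names below as assumptions), a detection message looks roughly like this:

// Illustrative only: approximate shape of one detection telemetry message
// from the Azure Percept vision module (field names are assumptions).
const exampleMessage = {
  NEURAL_NETWORK: [
    {
      bbox: [0.36, 0.42, 0.59, 0.78], // normalized [x1, y1, x2, y2]
      label: "cone",                  // class from the Custom Vision model
      confidence: "0.92",
      timestamp: "1620979200000000000"
    }
  ]
};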

 

With Node.js I made a small backend based on the quickstart "Send telemetry from a device to an IoT hub and read it with a back-end application", and with Express I served the results on localhost. For the frontend application I used the "Lego Boost Browser Application", a great React application for controlling LEGO Boost from the browser via the Web Bluetooth API. With it you can easily connect to your LEGO Boost, and the application gives you a nice interface where you can not only control motors and other sensors but also write different commands and programming logic. A sketch of both pieces follows below.
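As a minimal sketch of the backend (assuming the quickstart's @azure/event-hubs approach; the connection string placeholders come from the Azure CLI commands listed further below), the idea is to subscribe to the IoT Hub's Event Hub-compatible endpoint and expose the latest detection over HTTP:

const { EventHubConsumerClient } = require("@azure/event-hubs");
const express = require("express");

// Assumed placeholders: fill these in from the Azure CLI output below.
const connectionString =
  "Endpoint={eventHubEndpoint};SharedAccessKeyName=service;" +
  "SharedAccessKey={primaryKey};EntityPath={eventHubPath}";

let latestDetection = null;

async function main() {
  // Read telemetry from the default consumer group.
  const client = new EventHubConsumerClient("$Default", connectionString);
  client.subscribe({
    processEvents: async (events) => {
      for (const event of events) {
        latestDetection = event.body; // detection results from Azure Percept
      }
    },
    processError: async (err) => console.error(err),
  });

  // Serve the latest detection so the browser frontend can poll it.
  const app = express();
  app.get("/latest", (req, res) => res.json(latestDetection));
  app.listen(3000, () => console.log("Listening on http://localhost:3000"));
}

main().catch(console.error);

On the frontend side, the avoidance logic can then be a simple polling loop. The steer and drive callbacks below are hypothetical placeholders for whatever motor commands the Lego Boost Browser Application exposes:

// Hypothetical sketch: poll the backend and steer away when a cone is seen.
async function avoidanceLoop(steer, drive) {
  while (true) {
    const res = await fetch("http://localhost:3000/latest");
    const detection = await res.json();

    const cone =
      detection &&
      Array.isArray(detection.NEURAL_NETWORK) &&
      detection.NEURAL_NETWORK.find((d) => d.label === "cone");

    if (cone) {
      await steer(90); // turn away from the obstacle
    } else {
      await drive(10); // keep moving forward
    }
    await new Promise((r) => setTimeout(r, 500)); // poll interval
  }
}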

 

In the following video you can see a full walkthrough of how to build this project:

 

 

The quickstart "Send telemetry from a device to an IoT hub and read it with a back-end application (Node.js)" can be found here.

 

Azure CLI commands used:

 

# Event Hub-compatible endpoint
az iot hub show --query properties.eventHubEndpoints.events.endpoint --name {YourIoTHubName}

# Event Hub-compatible path
az iot hub show --query properties.eventHubEndpoints.events.path --name {YourIoTHubName}

# Primary key for the "service" shared access policy
az iot hub policy show --name service --query primaryKey --hub-name {YourIoTHubName}
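
These three values plug into a standard Event Hub-compatible connection string (this is the general format, as used in the backend sketch above):

Endpoint={eventHubEndpoint};SharedAccessKeyName=service;SharedAccessKey={primaryKey};EntityPath={eventHubPath}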

 

 

The pictures used for model training and the relevant code are available in the GitHub repository.

 

4 Comments
Copper Contributor

If I create a brand new IoT Hub with a different name, how do I get Azure Percept to work again?

I tried editing the device_connection_string in /etc/iotedge/config.yaml.
It connects to the device in the new IoT Hub, but it only has two modules:
edgeAgent and edgeHub.

Microsoft

I have not tested that, but knowing how IoT Hub and IoT Edge work, I would assume you have to go through the Azure Percept device OOB setup to change the IoT Hub, so that Azure Percept sets up the new IoT Hub with the right deployment configured for the device. Changing the device connection string in config.yaml only points the IoT Edge runtime to a different IoT Hub; when it connects, the new IoT Hub will not have an IoT Edge deployment set up indicating which modules the device needs to pull and run.
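(As an illustration: if you have a copy of the original deployment manifest, you could apply it to the device in the new hub yourself with the Azure CLI. The hub and device names here are placeholders.)

az iot edge set-modules --hub-name {NewIoTHubName} --device-id {DeviceId} --content ./deployment.json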

Copper Contributor

Makes sense.

Since I'm fairly new to a number of things in the robotics stack, IoT Edge being one of them, it seems I need the deployment manifest that was used to define the modules and their publish/subscribe relationships.

 

I know this from doing the hello world sample from Visual Studio Code.

 

My guess is that I somehow need to find where the OOB experience runs, whether that's a container or some systemctl/systemd service in the CBL-Mariner operating system.

My guess is that the website that runs the OOB experience has the Azure IoT Edge project and manifest file at its disposal? Or you call out to GitHub, or maybe an Azure DevOps pipeline?

That would give you the default vision processing pipeline, which you then instruct us to manipulate via the device twin properties to run a differently trained computer vision algorithm.

 

It's good value to have a ready-to-go thing like this. But it's also very important that newbie developers and vets alike can follow some multi-part training where we can dig into the guts of this thing, so that we can make it our own and learn the Azure IoT Edge way.

 

This device is good for decision makers / POC demonstrations.

But if you also want grassroots adoption from the army of developers out there, then we need that middle ground.

NVIDIA Jetson is the other end of the spectrum, but somewhere in the middle is the sweet spot. Maybe you can put me in touch with the team and I can work with them on developing these types of documentation, samples, tutorials, and POCs internally as well as for clients? juan.suero@gmail.com

I'm working on combining computer vision on intelligent machines with HoloLens 2 for collaboration and training of robots.

 


 

Microsoft

Let's keep the conversation in one place:

Re: azure percept point to new iothub - Microsoft Tech Community

 
