Saving Tweety with Azure Percept
Published May 24 2021

Goran Vuksic

Goran works as a Technical Manager at Stratiteq Sweden. He is a Microsoft AI MVP with 15 years of experience in IT and broad knowledge of various technologies and programming languages. He has worked on projects for notable clients, and his work has been featured many times on sites like Forbes, The Next Web, NVIDIA Developer, TechCrunch, Macworld and others. In recent years he has taken part in several hackathons and other competitions where his skills and work were recognised and awarded. Goran is a tech enthusiast who writes technical blog posts and likes to share his knowledge through workshops and talks. You can connect with Goran on LinkedIn and follow him on Twitter.

 

Introduction

Azure Percept is an easy-to-use platform for creating edge AI solutions. The Azure Percept Development Kit comes with an intelligent camera, Azure Percept Vision. Services like Azure Cognitive Services, Azure Machine Learning, Azure Live Video Analytics, and many others work out of the box with Azure Percept. With the development kit you can set up a proof of concept in minutes and integrate it with Azure AI and Azure IoT services. An overview of the Azure Percept Development Kit can be found here, where you will also find instructions for setting up your device.

 

blog-header.png

 

This post will show you how to create a simple project with the Azure Percept Development Kit. I will use Tweety and Sylvester LEGO minifigures from the Looney Tunes special edition. The idea is to train an AI model to recognise Tweety and Sylvester and find them amongst other LEGO minifigures. With a model that can tell where Tweety and Sylvester are, we can keep Tweety safe, for the obvious reason that Sylvester is always hungry and should not be anywhere near Tweety.

 

Azure Percept Studio

Azure Percept Studio makes it really easy to work with your Azure Percept, and it is the single launch point for creating the edge AI models and solutions you want to develop.

 

blog-azure-portal.png

 

The interface will guide you in an intuitive way, whether you are new to AI models and want to create a prototype, want to try out some sample applications, or want to explore more advanced tools.

 

blog-percept-studio.png

 

In the left menu, click the "Devices" option and an overview of your Percept devices will appear. For each device you can see whether it is connected, and you can click on it to view its details. For this project we will use the second tab, "Vision", where you can capture images for your project, view the device stream, and deploy a custom model to the device.

 

blog-percept-device.png
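
The connection status shown here comes from the Azure IoT Hub that the development kit is attached to. If you want to check it from code as well, here is a minimal sketch using the azure-iot-hub Python package; the connection string and the device id "my-percept-dk" are placeholders you would replace with your own values:

```python
# Minimal sketch: read a Percept device's connection state from IoT Hub.
# Assumes the "azure-iot-hub" package; the connection string and device id
# below are placeholders for your own values.
from azure.iot.hub import IoTHubRegistryManager

IOTHUB_CONNECTION_STRING = "HostName=<your-hub>.azure-devices.net;SharedAccessKeyName=...;SharedAccessKey=..."
DEVICE_ID = "my-percept-dk"  # hypothetical device name

registry_manager = IoTHubRegistryManager(IOTHUB_CONNECTION_STRING)

# The device twin reports the connection state ("Connected" / "Disconnected")
twin = registry_manager.get_twin(DEVICE_ID)
print(f"{twin.device_id}: {twin.connection_state}")
```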

 

Training the model

We want to create a custom model that will be able to recognise Tweety and Sylvester. Open Custom Vision in a new tab, log in and create a new project. Give your project a name, create a resource, and select "Object detection" and the "General" domain. Custom Vision project domains are explained here in more detail.

 

blog-custom-vision-new.png
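
If you prefer to script this step, the Custom Vision training SDK can create the same project. A minimal sketch, assuming the azure-cognitiveservices-vision-customvision Python package; the endpoint, training key and project name are placeholders for your own values:

```python
# Minimal sketch: create an object detection project with the Custom Vision SDK.
# Endpoint, training key and project name are placeholders.
from azure.cognitiveservices.vision.customvision.training import CustomVisionTrainingClient
from msrest.authentication import ApiKeyCredentials

ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com/"
TRAINING_KEY = "<your-training-key>"

credentials = ApiKeyCredentials(in_headers={"Training-key": TRAINING_KEY})
trainer = CustomVisionTrainingClient(ENDPOINT, credentials)

# Pick the "General" object detection domain, as selected in the portal
domain = next(d for d in trainer.get_domains()
              if d.type == "ObjectDetection" and d.name == "General")

project = trainer.create_project("Saving Tweety", domain_id=domain.id)
print(f"Created project {project.id}")
```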

 

Images can also be added to the project from Azure Percept Studio, using the option we saw earlier. If you use that option, you will be able to take pictures with the Azure Percept Vision camera. You can also add any images you prepared earlier.

 

blog-add-images.png

 

Once the images are added, you need to tag them, marking the objects your model will recognise. In this example I created two tags named "Sylvester" and "Tweety". The minimum needed to train the model is 15 images per tag, but for an AI model to reliably recognise objects like this, you should add many more.

 

blog-tagging.png
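
Tagging can be scripted too. With the SDK, the regions (the bounding boxes you draw in the portal) are attached when images are uploaded. A minimal sketch continuing from the "trainer" and "project" objects above; the file name and region coordinates are made-up examples:

```python
# Minimal sketch: create the two tags and upload one image with a tagged region.
# Continues from the "trainer" and "project" objects above; the file name
# and region coordinates are made-up examples.
from azure.cognitiveservices.vision.customvision.training.models import (
    ImageFileCreateBatch, ImageFileCreateEntry, Region)

tweety_tag = trainer.create_tag(project.id, "Tweety")
sylvester_tag = trainer.create_tag(project.id, "Sylvester")

# Region coordinates are normalised (0..1): left, top, width, height
region = Region(tag_id=tweety_tag.id, left=0.30, top=0.25, width=0.20, height=0.35)

with open("tweety_01.jpg", "rb") as f:
    entry = ImageFileCreateEntry(name="tweety_01.jpg", contents=f.read(),
                                 regions=[region])

upload = trainer.create_images_from_files(project.id,
                                          ImageFileCreateBatch(images=[entry]))
if not upload.is_batch_successful:
    print("Upload failed:", [image.status for image in upload.images])
```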

 

After you have tagged the images, select "Train" to train the model; you can choose the "Quick training" option. In a minute or two your model will be ready and you can go back to Azure Percept Studio.
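
The training run can likewise be started and polled from the SDK. A minimal sketch, again continuing from the "trainer" and "project" objects in the previous snippets:

```python
# Minimal sketch: start training and wait for it to finish.
# Continues from the "trainer" and "project" objects above.
import time

iteration = trainer.train_project(project.id)  # quick training is the default
while iteration.status == "Training":
    print("Training...")
    time.sleep(5)
    iteration = trainer.get_iteration(project.id, iteration.id)

print(f"Training finished with status: {iteration.status}")
```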

 

Testing the model

In Azure Percept Studio, click the option "Deploy a Custom Vision project" and select the model you just trained. You can also select a specific model iteration if you have trained the model several times.

 

percept-deploy-to-device.png
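
Percept Studio handles the actual deployment to the device for you. If you also want to call the trained iteration directly over the Custom Vision prediction endpoint (used in the test snippet later in this post), you first publish it. A minimal sketch, where the publish name "tweety-detector" and the prediction resource id are placeholders:

```python
# Minimal sketch: publish the trained iteration to the prediction endpoint.
# Continues from the objects above; the publish name and prediction
# resource id are placeholders for your own values.
PUBLISH_NAME = "tweety-detector"
PREDICTION_RESOURCE_ID = (
    "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/"
    "Microsoft.CognitiveServices/accounts/<prediction-resource>")

trainer.publish_iteration(project.id, iteration.id,
                          PUBLISH_NAME, PREDICTION_RESOURCE_ID)
print(f"Iteration published as '{PUBLISH_NAME}'")
```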

 

Once the model is deployed, you can click "View your device stream" to see the live camera feed and test the model. A notification will appear after a few seconds when your stream is ready, and you can open it in a separate tab.

 

percept-webstream.png

Testing a model with a live stream is a great way to see how it actually performs: you can add different objects, position the camera at different angles, and get a real idea of the performance. If your model is not accurate enough, or if it falsely detects other objects, you can take more pictures, add them to Custom Vision, re-train the model, and test the new iteration to see how it works.
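
Besides the live device stream, you can also sanity-check an iteration from code through the prediction endpoint. A minimal sketch, assuming the iteration was published as "tweety-detector" as shown earlier and that test.jpg is a photo of the minifigures; the endpoint and prediction key are placeholders:

```python
# Minimal sketch: run a test image through the published iteration.
# Assumes the iteration was published as "tweety-detector"; the endpoint,
# prediction key and test image are placeholders.
from azure.cognitiveservices.vision.customvision.prediction import CustomVisionPredictionClient
from msrest.authentication import ApiKeyCredentials

ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com/"
PREDICTION_KEY = "<your-prediction-key>"

credentials = ApiKeyCredentials(in_headers={"Prediction-key": PREDICTION_KEY})
predictor = CustomVisionPredictionClient(ENDPOINT, credentials)

with open("test.jpg", "rb") as f:
    results = predictor.detect_image(project.id, "tweety-detector", f.read())

# Each prediction carries a tag name, a probability and a bounding box
for p in results.predictions:
    print(f"{p.tag_name}: {p.probability:.1%} at "
          f"({p.bounding_box.left:.2f}, {p.bounding_box.top:.2f})")
```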

 

Summary

Through this article you have learned about Azure Percept Studio, how to train your Custom Vision model, how to deploy it to the Azure Percept device, and how to test it via live stream. Now that you have learned more about Azure, you can claim your Azure Heroes Learner badge by scanning the QR code on the following link (please note that only 50 badges are available).

Resources

Microsoft Learn Modules on Custom Vision AI 
Explore computer vision in Microsoft Azure - Learn | Microsoft Docs
Analyze images with the Computer Vision service - Learn | Microsoft Docs
Classify images with the Custom Vision service - Learn | Microsoft Docs
Detect objects in images with the Custom Vision service - Learn | Microsoft Docs

 
