Questions about VAIDK, Shutter Options & External Trigger


Hi Everyone,

I see that Microsoft is offering the Vision AI Dev Kit, which lets people run CustomVision.ai exported models directly on the device.

My questions are:

  1. Many manufacturing applications require a triggered camera, where image acquisition is initiated by an external trigger. This limits the number of unnecessary frames captured and analyzed, and improves the accuracy of results. Is there an option to make the VAIDK capture only one frame when an external trigger is provided (say, a sensor indicates that the product is in the right place)?
  2. The current camera uses a rolling shutter, which limits its usage in manufacturing, where products can be fast moving. Typically, a global-shutter sensor with image acquisition synced to an external sensor solves this issue. If the VAIDK were to have another version offering built-in lighting, a global shutter, and an external trigger, the use cases would be transformed significantly.
  3. Are there any other boards on which CustomVision-exported models can be run directly with hardware acceleration?

Thanks & Regards,

4 Replies
Hi Mouman,
AFAIK, the Vision AI Dev Kit doesn't have sensors such as a motion detector for what you are trying to achieve.
I recommend you take a look at Azure Percept, which offers a newer dev kit that you could more easily customize to achieve your goal.

Is it possible to attach a custom MIPI camera to the Azure Percept, like an Arducam global-shutter camera? Is there any document explaining the supported camera types, or does it just support the one camera it ships with for now?

Technically speaking you can do this, but the Azure Percept DK doesn't have a list of supported cameras, nor documentation specifically for this, as it is a dev kit designed to let you experience and evaluate the Azure Percept Vision camera. That said, Azure Percept uses Azure IoT Edge and IoT Hub, which you can work with directly, modifying module configuration and setup, and even code, to use a different camera. I believe the vision module (the one running the vision model) uses the RTSP feed from the camera, so it shouldn't be too hard to modify it. You can also take a look at the VisionOnEdge project, which makes building such projects simpler (and which can be used with the Azure Percept DK): https://techcommunity.microsoft.com/t5/internet-of-things-blog/bringing-your-vision-ai-project-at-th...

@NoumanQaiser1620

Perhaps it's good to mention that Azure Percept runs a fully fledged Linux distro that you, as an admin, can get root access to, so theoretically you can build all sorts of applications with it to access and use the sensors. Out of the box, Azure Percept starts several Azure IoT Edge modules, such as azureeyemodule, which runs continuously and provides a local RTSP endpoint that other services can access. If your use case is only capturing a frame after an event happens, it might be sufficient to write a few lines of code to access the camera and receive one frame. Here is an example of how this can be done (and here is one showing how this library can also be used as an Azure IoT Edge module).
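To illustrate the idea, here is a minimal Python sketch that polls an external trigger and then grabs a single frame from the local RTSP endpoint using OpenCV. The RTSP URL, the trigger source, and the helper names are my assumptions for illustration, not documented Percept APIs — you'd substitute the real endpoint and, say, a GPIO read for the trigger callable.

```python
# Hypothetical sketch: grab one frame from the azureeyemodule's local RTSP
# endpoint after an external trigger fires. The URL and trigger source are
# assumptions; adapt them to your device.
import time

RTSP_URL = "rtsp://127.0.0.1:8554/h264"  # assumed local RTSP endpoint


def wait_for_trigger(poll, timeout_s=30.0, interval_s=0.05):
    """Poll a trigger callable (e.g. a GPIO-backed sensor read) until it
    returns True, or give up after timeout_s seconds."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if poll():
            return True
        time.sleep(interval_s)
    return False


def capture_one_frame(url=RTSP_URL, out_path="frame.jpg"):
    """Open the RTSP stream, read a single frame, and write it to disk."""
    import cv2  # lazy import: OpenCV only needs to be present on the device

    cap = cv2.VideoCapture(url)
    try:
        ok, frame = cap.read()
        if ok:
            cv2.imwrite(out_path, frame)
        return ok
    finally:
        cap.release()


# Usage (on the device):
#     if wait_for_trigger(read_my_sensor):
#         capture_one_frame()
```

Such a script could run either directly on the device or be packaged as its own IoT Edge module alongside azureeyemodule, since the RTSP endpoint is reachable locally.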

Concerning your question about a custom MIPI camera, this is how the backside of the Azure Percept vision chip looks, with one of the two camera connectors marked with an arrow:

Picture2.png

 

Arducam uses a different MIPI CSI-2 connector, so you can't just plug it into the chip directly. What should work is a USB 3.0 global-shutter camera; the disadvantage there is the extra time needed to read data from the camera and put it as a tensor on the VPU, whereas the MIPI interface offered by the Myriad chip's media subsystem makes it possible to process the data directly on the chip.