Extract business insights with Live Video Analytics and Intel OpenVINO using Intel NUC devices.

In this technical blog post we'll look at the powerful combination of Azure Live Video Analytics (LVA) 2.0 and the Intel OpenVINO DL Streamer – Edge AI Extension. In our sample setup we use an Intel NUC as the edge device. You can read through this post to get an understanding of the setup, but repeating the steps requires some technical skills, so we rely on existing tutorials and samples as much as possible. We will show how seamlessly the two integrate: LVA creates and manages the media pipeline on the edge device, and the Intel OpenVINO DL Streamer – Edge AI Extension module extracts metadata, with both managed through a single deployment manifest via Azure IoT Edge Hub. For this post we use a 10th-generation Intel NUC, but the setup can run on any Intel device. We will look at the specifications and performance of this small, low-power device and how well it performs as an edge device for LVA and for AI inferencing with Intel DL Streamer. The device receives a simulated camera stream, and we use gRPC as the protocol to feed images from the camera feed to the inference service at the actual frame rate (30 fps).

 

The Intel OpenVINO Model Server (OVMS) and Intel Video Analytics Serving (VA Serving) can utilize the iGPU of the Intel NUC device. The Intel DL Streamer – Edge AI Extension we are using here is based on Intel’s VA Serving and has native support for Live Video Analytics. We will show you how easy it is to enable the iGPU for AI inferencing thanks to this native support from Intel.


Live Video Analytics (LVA) is a platform for building AI-based video solutions and applications. You can generate real-time business insights from video streams, processing data near the source and applying the AI of your choice. Record videos of interest on the edge or in the cloud and combine them with other data to power your business decisions.

lva-overview.png

LVA was designed to be a flexible platform where you can plug in AI services of your choice. These can come from Microsoft, from the open source community, or be your own. To further extend this flexibility, we have designed the service to allow integration with existing AI models and frameworks. One of these integrations is the OpenVINO DL Streamer – Edge AI Extension module.

The Intel OpenVINO™ DL Streamer - Edge AI Extension module is based on Intel’s Video Analytics Serving (VA Serving) and serves video analytics pipelines built with OpenVINO™ DL Streamer. Developers can send decoded video frames to the AI extension module, which performs detection, classification, or tracking and returns the results. The AI extension module exposes gRPC APIs.

 

vas-overview.png

 

Setting up the environment and pipeline

We will walk through the steps to set up LVA 2.0 with the Intel DL Streamer – Edge AI Extension module on my Intel NUC device. I will use the three pipelines offered by the Intel OpenVINO DL Streamer – Edge AI Extension module: Object Detection, Object Classification and Object Tracking.

lva-architecture.png

Once you’ve deployed the Intel OpenVINO DL Streamer – Edge AI Extension module, you can switch between the pipelines by setting the environment variables PIPELINE_NAME and PIPELINE_VERSION in the deployment manifest. The supported pipelines are:

PIPELINE_NAME           PIPELINE_VERSION
object_detection        person_vehicle_bike_detection
object_classification   vehicle_attributes_recognition
object_tracking         person_vehicle_bike_tracking
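
For example, to switch the module to the tracking pipeline, the environment section of the module in the deployment manifest would contain the fragment below (a minimal sketch; the classification variant used later in this post is shown further down):

"Env": [
    "PIPELINE_NAME=object_tracking",
    "PIPELINE_VERSION=person_vehicle_bike_tracking"
]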

 

The hardware used for the demo

For this test I purchased an Intel NUC Gen10 for around $1,200 USD. The Intel NUC is a small-form-factor device with a good performance-to-power ratio. It puts full-size PC power in the palm of your hand, which makes it convenient as a powerful edge device for LVA. It comes in different configurations, so you can trade off performance against cost. It is available ready to run, as a Performance Kit, or as just the NUC board for custom applications. I went for the most powerful i7 Performance Kit and ordered the maximum supported memory separately. The full specs are:

  • Intel NUC10i7FNH – 6 cores at up to 4.7 GHz
  • 200 GB M.2 SSD
  • 64 GB DDR4 memory
  • Intel® UHD Graphics for 10th Gen Intel® Processors

nuc-01.png

nuc-02.png

 

Let’s set everything up

These steps expect that you have already set up your LVA environment by using one of our quickstart tutorials. This includes:

  • Visual Studio Code with all extensions mentioned in the quickstart tutorials
  • Azure Account
  • Azure IoT Edge Hub
  • Azure Media Services Account

In addition to the prerequisites for the LVA tutorials, we also need an Intel device where we will run LVA and extend it with the Intel OpenVINO DL Streamer Edge AI Extension Module.

  1. Connect your Intel device and install Ubuntu. In my case I will be using Ubuntu 20.10.
  2. Once you have the OS installed, follow these instructions to set up the IoT Edge Runtime.
  3. (Optional) Install the Intel GPU Tools: sudo apt-get install intel-gpu-tools
  4. Now install LVA. Assuming you already have an LVA environment set up, you can start with this step.

When you’re done with these steps your Intel device should be visible in your IoT Extension in VS Code.

vsc-iot-extension.png

Now I’m going to follow this tutorial to set up the Intel OpenVINO DL Streamer Edge AI Extension module: https://aka.ms/lva-intel-openvino-dl-streamer-tutorial

Once you’ve completed these steps you should have:

  1. Intel edge device with IoT Edge Runtime connected to IoT Hub
  2. Intel edge device with LVA deployed
  3. Intel OpenVINO DL Streamer Edge AI Extension module deployed

 

The use cases

Now that we have our setup up and running, let's go through some of the use cases where it can help you. We'll use the sample videos we have available and observe the results we get from the module.
Since the Intel NUC has a very small form factor, it can easily be deployed in close proximity to a video source like an IP camera. It is also very quiet and does not generate a lot of heat. You can mount it above a ceiling, behind a door, on top of or inside a bookshelf, or underneath a desk, to name a few examples. I will be using sample videos such as a recording of a parking lot and a cafeteria. You can imagine a situation where the NUC is located at these venues to analyze the camera feed.

 

Highway Vehicle Classification and Event Based Recording

Let’s imagine a use case where I’m interested in vehicles of a specific type and color using a particular stretch of highway and want to know about, and see, the video frames where these vehicles appear. We can use LVA together with the Intel DL Streamer – Edge AI Extension module to analyze a highway feed and trigger on a specific combination of vehicle type, color and confidence level, for instance a white van with a confidence above 0.8. Within LVA we can deploy a custom module such as the objectsEventFilter module. The module sends a trigger to the Signal Gate node when these three conditions are met, which produces an Azure Media Services asset that we can play back from the cloud. The diagram looks like this:

full-topology.png

When we run the pipeline, the RTSP source is split: one branch goes to the Signal Gate node, which holds a buffer of the video, and the other is sent to the gRPC Extension node. The gRPC Extension node creates images out of the video frames and feeds them into the Intel DL Streamer – Edge AI Extension module. When using the classification pipeline, the module returns inference results containing type attributes. These are forwarded as IoT messages and fed into the objectsEventFilter module, where we can filter on specific attributes and send an IoT message that triggers the Signal Gate node, resulting in an Azure Media Services asset.
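
The node that turns an inference trigger into a recording is the Signal Gate processor, which buffers the video and only opens when it receives a trigger event. As an illustration, such a node looks roughly like the sketch below in the topology JSON; the property names follow the LVA event-based recording sample topologies, while the values are just examples you would tune for your scenario:

{
  "@type": "#Microsoft.Media.MediaGraphSignalGateProcessor",
  "name": "signalGateProcessor",
  "inputs": [
    { "nodeName": "iotMessageSource" },
    { "nodeName": "rtspSource" }
  ],
  "activationEvaluationWindow": "PT1S",
  "activationSignalOffset": "-PT5S",
  "minimumActivationTime": "PT30S",
  "maximumActivationTime": "PT30S"
}

When the gate opens, the buffered video flows on to the Asset Sink node.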
In the inference results you will see a message like this:

 

{
      "type": "entity",
      "entity": {
        "tag": {
          "value": "vehicle",
          "confidence": 0.8907926
        },
        "attributes": [
          {
            "name": "color",
            "value": "white",
            "confidence": 0.8907926
          },
          {
            "name": "type",
            "value": "van",
            "confidence": 0.8907926
          }
        ],
        "box": {
          "l": 0.63165444,
          "t": 0.80648696,
          "w": 0.1736759,
          "h": 0.22395049
        }
      }
}

 

This meets the objectsEventFilter module thresholds, which produces the following IoT message:

 

[IoTHubMonitor] [2:05:28 PM] Message received from [nuclva20/objectsEventFilter]:
{
  "confidence": 0.8907926,
  "color": "white",
  "type": "van"
}

 

This triggers the Signal Gate to open and forward the video feed to the Asset Sink node.

 

[IoTHubMonitor] [2:05:29 PM] Message received from [nuclva20/lvaEdge]:
{
  "outputType": "assetName",
  "outputLocation": "sampleAssetFromEVR-LVAEdge-20210325T130528Z"
}

 

The Asset Sink Node will store a recording on Azure Media Services for cloud playback.

highway-playback.png

 

Deploying objectsEventFilter module

You can follow this tutorial to deploy a custom module for event-based recording; only this time we will use the objectsEventFilter module instead of the objectCounter module. You can copy the module code from here. The steps to build the image and push it to your container registry are the same as in the objectCounter tutorial.
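
Besides building and pushing the module, the deployment manifest needs message routes so that inference events from lvaEdge reach objectsEventFilter and the filter's trigger is routed back into the media graph. A sketch of what those $edgeHub routes can look like is shown below; the output, input and route names here are illustrative, so match them to your own manifest and topology:

"routes": {
  "LVAToObjectsEventFilter": "FROM /messages/modules/lvaEdge/outputs/inferenceOutput INTO BrokeredEndpoint(\"/modules/objectsEventFilter/inputs/detectedObjects\")",
  "ObjectsEventFilterToLVA": "FROM /messages/modules/objectsEventFilter/outputs/objectsEventFilterTrigger INTO BrokeredEndpoint(\"/modules/lvaEdge/inputs/recordingTrigger\")",
  "LVAToIoTHub": "FROM /messages/modules/lvaEdge/outputs/* INTO $upstream"
}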

I will be using video samples that I upload to my device at the following location: /home/lvaadmin/samples/input/. They are then available through the RTSP simulator module at rtsp://rtspsim:554/media/{filename}.
Next we deploy a manifest to the device with the environment settings that specify the type of model. In this case I want to detect and classify the vehicles that show up in the image:

 

"Env":[
                    "PIPELINE_NAME=object_classification",
                    "PIPELINE_VERSION=vehicle_attributes_recognition",

 

The next step is to change the “operations.json” file of the c2d-console-app to reference the RTSP file. For instance, if I want to use “co-final.mkv”, I set this in the operations.json file:

 

{
  "name": "rtspUrl",
  "value": "rtsp://rtspsim:554/media/co-final.mkv"
}

 

Now that I have deployed the module to my device, I can invoke the media graph by executing the c2d-console-app (i.e., press F5 in VS Code).
Note: Remember to listen for event messages by clicking “Start Monitoring Built-in Event Endpoint” in the VS Code IoT Hub extension.
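
For reference, the c2d-console-app simply walks through the list of direct method calls against the lvaEdge module defined in operations.json. Its overall shape is roughly as follows; the opName values follow the LVA 2.0 sample app, but treat this as a sketch rather than the exact file:

{
  "apiVersion": "2.0",
  "operations": [
    { "opName": "GraphTopologySet", "opParams": { "topologyUrl": "<link to your topology JSON>" } },
    { "opName": "GraphInstanceSet", "opParams": { "name": "SampleGraph", "properties": { "topologyName": "<your topology name>", "parameters": [ { "name": "rtspUrl", "value": "rtsp://rtspsim:554/media/co-final.mkv" } ] } } },
    { "opName": "GraphInstanceActivate", "opParams": { "name": "SampleGraph" } },
    { "opName": "WaitForInput", "opParams": { "message": "Press Enter to deactivate the graph" } },
    { "opName": "GraphInstanceDeactivate", "opParams": { "name": "SampleGraph" } },
    { "opName": "GraphInstanceDelete", "opParams": { "name": "SampleGraph" } }
  ]
}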

In the output window of VS Code we will see messages flowing in a JSON structure. For co-final.mkv, using object tracking for vehicles, persons and bikes, the output contains:

  • Timestamp: the timestamp of the media. We maintain the timestamp end to end, so you can always relate messages across the media timespan.
  • Entity tag: which type of object was detected (vehicle, person or bike).
  • Entity attributes: the color of the entity (white) and the type of the entity (van).
  • Box: the size and location of the bounding box on the picture where the entity was detected.

Let's have a look at the CPU load of the device. When we SSH into the device we can run “sudo htop”, which shows details of the device load such as CPU and memory usage.

cpu.png

We see a load of ~32% for this model on the Intel NUC while it is extracting and analyzing at 30 fps. So we can safely say we can run multiple camera feeds on this small device, as we have plenty of headroom. We could also trade off fps to allow an even higher camera feed density per device.

 

iGPU offload support

  1. Right-click on this template and select “Generate IoT Edge Deployment Manifest”. The deployment manifest is now available in the “edge/config/” folder.
  2. Right-click the generated deployment manifest, choose to deploy to a single device, and select your Intel device (the fragment after this list shows the key difference in this manifest).
  3. Now execute the same c2d-console-app again (press F5 in VS Code). After about 30 seconds you will see the same data again in your output window.
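
What makes the GPU manifest different from the CPU one is essentially that the extension module is given access to the integrated GPU and instructed to run inference on it. The container gets the GPU by passing the host's /dev/dri render devices through in the module's createOptions, along the lines of the fragment below (shown expanded for readability; in the manifest itself createOptions is a JSON string). The environment variable that selects the inference device comes from the template itself and may differ between module versions, so it is not reproduced here:

"createOptions": {
  "HostConfig": {
    "Devices": [
      {
        "PathOnHost": "/dev/dri",
        "PathInContainer": "/dev/dri",
        "CgroupPermissions": "rwm"
      }
    ]
  }
}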

gpu.png

gpu2.png

Here you can see the iGPU showing a load of around 44% running the AI tracking model. At the same time we see a 50% decrease in CPU usage compared to the first run, which used only the CPU. We still observe some CPU activity because the LVA media graph itself still runs on the CPU.

 

To summarize

In this blog post and during the tutorial we have walked through the steps to:

  1. Deploy the IoT Edge Runtime on an Intel NUC.
  2. Connect the device to our IoT Hub so we can control and manage it using IoT Hub together with the VS Code IoT Hub extension.
  3. Use the LVA sample to deploy LVA onto the Intel device.
  4. Take the Intel OpenVINO DL Streamer – Edge AI Extension module and deploy it onto the Intel device, also through IoT Hub.

This enables us to use the combination of LVA and the Intel OpenVINO DL Streamer – Edge AI Extension module to extract metadata from the video feed using Intel’s pre-trained models. The Intel OpenVINO DL Streamer – Edge AI Extension module allows us to change the pipeline by simply changing variables in the deployment manifest. It also lets us make full use of the iGPU capabilities of the device to increase throughput and inference density (multiple camera feeds), and to use more sophisticated models. With this setup you can bring powerful AI inferencing close to the camera source. The Intel NUC packs enough power to run the model for multiple camera feeds with low power consumption, low noise and a small form factor. The inference data can then be used for your business logic.

 

