Sep 19 2020 05:00 PM
Initial setup: I got a new Vision AI Dev Kit, set it up, updated the firmware, and everything works fine. I can see it detect a person and draw a box around them, and I also get events in IoT Hub.
Problem: I built a Custom Vision model, and when I update the property "ModelZipUrl" to point to the model zip, no events are generated. I tried multiple times, pointing to different models (built by others that worked in the past on their Vision AI Dev Kits), but with the same result: no events are generated or sent to IoT Hub. The camera stays in this state even if I remove the URL; to get the camera working again and sending events to IoT Hub, I have to update the firmware and reset it all over again.
I also tried building a new image from scratch using the instructions below, but saw the same behavior:
https://github.com/microsoft/vision-ai-developer-kit/tree/master/samples/official/ai-vision-devkit-g...
Any pointers to help overcome this issue?
Additional details:
Firmware version
adb shell cat /etc/version
v0.5370_Perf
Modules on IoT edge after a firmware update
C:\Users\srkothal>adb shell
/ # docker ps
CONTAINER ID   IMAGE                                                                    COMMAND                   CREATED          STATUS          PORTS                                                                  NAMES
9bab7e3702de   mcr.microsoft.com/aivision/visionsamplemodule:webstream_0.0.13-arm32v7   "npm start"               24 minutes ago   Up 24 minutes                                                                          WebStreamModule
e19544242c5d   mcr.microsoft.com/azureiotedge-hub:1.0                                   "/bin/sh -c 'echo \"$…"   24 minutes ago   Up 24 minutes   0.0.0.0:443->443/tcp, 0.0.0.0:5671->5671/tcp, 0.0.0.0:8883->8883/tcp   edgeHub
df21957d2178   mcr.microsoft.com/aivision/visionsamplemodule:1.1.3-arm32v7              "python3 -u ./main.py"    24 minutes ago   Up 24 minutes                                                                          AIVisionDevKitGetStartedModule
88740555d71f   mcr.microsoft.com/azureiotedge-agent:1.0.7.1
Example of events being sent to IoT Hub (when there is no value in the ModelZipUrl property):
send: b'\x89\x80?\xd4d\xc1'
Found result object {"label": "person", "position_x": 1152.0, "width": 306.816, "id": 1, "confidence": 81, "position_y": 373.896, "height": 618.948}
Confirmation[0] received for message with result = OK
Properties: {}
Total calls confirmed: 2
send: b'\x89\x80\xf4\x1e\xca1'
send: b'\x89\x80\xe9Jc\xeb'
Found result object {"label": "person", "position_x": 504.96, "width": 626.88, "id": 2, "confidence": 98, "position_y": 398.952, "height": 594.0}
Confirmation[0] received for message with result = OK
Properties: {}
Total calls confirmed: 3
send: b'\x89\x806=\\\x03'
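For reference, the inference events above can be pulled out of the module log with a few lines of Python. This is just a sketch assuming the `Found result object {...}` payload shape shown above; the confidence threshold is an arbitrary example, not anything the module enforces:

```python
import json

def parse_result_lines(log_lines, min_confidence=80):
    """Extract inference payloads from 'Found result object {...}' log lines."""
    marker = "Found result object "
    results = []
    for line in log_lines:
        if marker in line:
            payload = json.loads(line.split(marker, 1)[1])
            if payload.get("confidence", 0) >= min_confidence:
                results.append(payload)
    return results

# Sample lines copied from the log dump above.
log = [
    'Found result object {"label": "person", "position_x": 1152.0, "width": 306.816, "id": 1, "confidence": 81, "position_y": 373.896, "height": 618.948}',
    'Found result object {"label": "person", "position_x": 504.96, "width": 626.88, "id": 2, "confidence": 98, "position_y": 398.952, "height": 594.0}',
]
people = parse_result_lines(log, min_confidence=90)
print([r["id"] for r in people])  # only the id-2 detection (confidence 98) clears 90
```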
I set the property "ModelZipUrl" in the module identity twin.
FYI: I tried both classification and object-detection models, as below:
"ModelZipUrl": "https://sridharvisionaistorage.blob.core.windows.net/hardhatmodels/hardhat-classification-model.zip"
"ModelZipUrl": "https://sridharvisionaistorage.blob.core.windows.net/hardhatmodels/hardhat-objectdetection-model.zip"
Received twin update: {'ShowVideoOverlay': 'true', 'ObjectsOfInterest': '["hardhat","nohardhat"]', 'TimeBetweenMessagesInSeconds': 2, 'FrameRate': 30, 'VideoAnalyticsEnabled': 'true', 'Resolution': '1080P', 'ModelZipUrl': 'https://sridharvisionaistorage.blob.core.windows.net/hardhatmodels/hardhat-objectdetection-model.zip', '$version': 14, 'VideoOverlayConfig': 'inference', 'Bitrate': '1.5Mbps', 'Codec': 'AVC/H.264', 'HdmiDisplayActive': 'true', 'ShowVideoPreview': 'true'}
Configuring camera_client
2020-09-19 23:02:52,819 - iotccsdk - INFO - ipcprovider - __send_request:212 - API: http://192.168.1.158:1080/overlay data {'switchStatus': False}
Turning analytics off
configure_overlay: inference
2020-09-19 23:02:52,854 - iotccsdk - INFO - ipcprovider - __send_request:212 - API: http://192.168.1.158:1080/overlayconfig data {'ov_start_x': 0, 'ov_type_SelectVal': 5, 'ov_height': 0, 'ov_usertext': 'Text', 'ov_color': '869007615', 'ov_position_SelectVal': 0, 'ov_width': 0, 'ov_start_y': 0}
configure_overlay_state: on
2020-09-19 23:02:52,885 - iotccsdk - INFO - ipcprovider - __send_request:212 - API: http://192.168.1.158:1080/overlay data {'switchStatus': True}
set_analytics_state: on
2020-09-19 23:02:52,911 - iotccsdk - INFO - ipcprovider - __send_request:212 - API: http://192.168.1.158:1080/vam data {'switchStatus': True, 'vamconfig': 'MD'}
2020-09-19 23:02:53,541 - iotccsdk - INFO - ipcprovider - __send_request:212 - API: http://192.168.1.158:1080/vam data {}
2020-09-19 23:02:53,604 - iotccsdk - INFO - camera - _get_vam_info:431 - RESPONSE: {'status': False}:
2020-09-19 23:02:53,613 - iotccsdk - INFO - camera - _get_vam_info:444 - vam url: None
Send prop: {"FrameRate": 30}
Send prop: {"VideoOverlayConfig": "inference"}
Send prop: {"Codec": "AVC/H.264"}
Send prop: {"RtspDataUrl": null}
Send prop: {"RtspVideoUrl": "rtsp://192.168.1.158:8900/live"}
Send prop: {"Resolution": "1080P"}
Send prop: {"Bitrate": "1.5Mbps"}
Send prop: {"VideoAnalyticsEnabled": true}
Send prop: {"HdmiDisplayActive": true}
Send prop: {"ShowVideoOverlay": true}
Send prop: {"ShowVideoPreview": true}
Send prop: {"SupportedBitrates": "512Kbps | 768Kbps | 1Mbps | 1.5Mbps | 2Mbps | 3Mbps | 4Mbps | 6Mbps | 8Mbps | 10Mbps | 20Mbps"}
Send prop: {"SupportedConfigOverlayStyles": "text | inference"}
Send prop: {"SupportedEncodingTypes": "HEVC/H.265 | AVC/H.264"}
Send prop: {"SupportedFrameRates": "24 | 30"}
Send prop: {"SupportedResolutions": "4K | 1080P | 720P | 480P"}
Send prop: {"ModelZipUrl": "https://sridharvisionaistorage.blob.core.windows.net/hardhatmodels/hardhat-objectdetection-model.zip"}
Send prop: {"TimeBetweenMessagesInSeconds": 6}
Send prop: {"ObjectsOfInterest": "[\"hardhat\", \"nohardhat\"]"}
Confirmation of 204 received for {"FrameRate": 30}.
Confirmation of 204 received for {"VideoOverlayConfig": "inference"}.
Confirmation of 204 received for {"Codec": "AVC/H.264"}.
Confirmation of 204 received for {"RtspDataUrl": null}.
Confirmation of 204 received for {"RtspVideoUrl": "rtsp://192.168.1.158:8900/live"}.
Confirmation of 204 received for {"Resolution": "1080P"}.
Confirmation of 204 received for {"Bitrate": "1.5Mbps"}.
Confirmation of 204 received for {"VideoAnalyticsEnabled": true}.
Confirmation of 204 received for {"HdmiDisplayActive": true}.
Confirmation of 204 received for {"ShowVideoOverlay": true}.
Confirmation of 204 received for {"ShowVideoPreview": true}.
Confirmation of 204 received for {"SupportedBitrates": "512Kbps | 768Kbps | 1Mbps | 1.5Mbps | 2Mbps | 3Mbps | 4Mbps | 6Mbps | 8Mbps | 10Mbps | 20Mbps"}.
Confirmation of 204 received for {"SupportedConfigOverlayStyles": "text | inference"}.
Confirmation of 204 received for {"SupportedEncodingTypes": "HEVC/H.265 | AVC/H.264"}.
Confirmation of 204 received for {"SupportedFrameRates": "24 | 30"}.
Confirmation of 204 received for {"SupportedResolutions": "4K | 1080P | 720P | 480P"}.
Confirmation of 204 received for {"ModelZipUrl": "https://sridharvisionaistorage.blob.core.windows.net/hardhatmodels/hardhat-objectdetection-model.zip"}.
Confirmation of 204 received for {"TimeBetweenMessagesInSeconds": 6}.
Confirmation of 204 received for {"ObjectsOfInterest": "[\"hardhat\", \"nohardhat\"]"}.
send: b'\x89\x80C\x1c\xaa\xba'
send: b'\x89\x80\xe4M\xa7\xb1'
send: b'\x89\x80\x88L\xd7\xb3'
send: b'\x89\x80N,u\xb8'
send: b'\x89\x80D\xa1\x99E'
send: b'\x89\x80\x84\x9aX4'
send: b'\x89\x80\xef\x15\xbd\xbb'
send: b'\x89\x80e\x1f\xf8!'
send: b'\x89\x80\x1b\x0cA\xa0'
send: b'\x89\x80\x0e\x8c\r\x1b'
send: b'\x89\x80\xb7\xe3\xe8\xac'
send: b'\x89\x802\xb44\x10'
send: b'\x89\x80\xe8\xfal\xee'
send: b'\x89\x80\x1f\xfe.\xfe'
send: b'\x89\x80\x16\xf4V\xae'
send: b'\x89\x80\xe2\xa8\xef\xfb'
send: b'\x89\x80R\x80\xd8`'
send: b'\x89\x80\xd3D=='
send: b'\x89\x80u0A\x08'
send: b'\x89\x80\xd4c\xab]'
send: b'\x89\x80\xc5\x9a\x03\x08'
Sep 20 2020 08:46 PM
Hi @Sridhar_Kothalanka, I would suggest you do a factory reset, configure it once again, and let us know if the problem still persists.
Here is the link to configure the dev kit: https://azure.github.io/Vision-AI-DevKit-Pages/docs/Run_OOBE/. Meanwhile, we will investigate the issue and let you know.
Sep 23 2020 04:15 AM - edited Sep 23 2020 04:41 AM
I ran into the same problem. I factory reset a couple of times, setting up everything from scratch. Neither of our two cameras will even run the default model anymore after a factory reset and setting up the Azure resources from scratch per the quick start. Everything seems to be running OK on the camera and in Azure, but no boxing of objects is happening.
Sep 23 2020 04:31 AM
@Kristian_Heim, @Sridhar_Kothalanka, We're looking into this with another team, I'll follow up once we find out more.
Sep 25 2020 02:41 AM
Solution @Nikunja - I found one problem. There was no more space available on the camera for some reason. I reflashed it using the fastboot procedure listed in troubleshooting. The default model OOBE deployment now works. However, I'm still running into the same issue with the model not running when specifying ModelZipUrl, as mentioned in this thread.
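For anyone else hitting the out-of-space problem: free space can be checked from Python's standard library before pulling a model zip. A minimal sketch; /data is the mount discussed in this thread, but here it is demonstrated on the root filesystem so it runs anywhere:

```python
import shutil

def percent_used(path):
    """Return the percentage of the filesystem at `path` that is in use."""
    usage = shutil.disk_usage(path)
    return 100.0 * usage.used / usage.total

# On the camera this would be run against the /data mount:
#   percent_used("/data")
print(f"root fs used: {percent_used('/'):.1f}%")
```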
Dec 01 2020 05:38 PM
I would like to know also, as I am seeing the same issue with a custom model generated for the AI SDK from the Custom Vision portal. The sample SSD model worked fine, sending human-readable text... It's almost like the tensor names might be off in the camera VA config, or the JSON serialization in the sample is not pointing to the right tensor name for custom models (as opposed to the COCO SSD from the sample)?.. Anyway, I'm seeing the same output:
\x89\x80\x00\x85\x86M.... or other variants...
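As an aside, those `\x89\x80...` byte strings look like masked, zero-length WebSocket ping frames (RFC 6455) from the SDK's MQTT-over-WebSockets transport leaking into stdout, rather than inference output: 0x89 is FIN plus the ping opcode, 0x80 is the mask bit with payload length 0, and the remaining 4 bytes are the masking key. A quick sketch that decodes the header, using the exact bytes quoted above:

```python
def decode_ws_frame_header(frame: bytes):
    """Decode the fixed part of a WebSocket frame header (RFC 6455)."""
    b0, b1 = frame[0], frame[1]
    return {
        "fin": bool(b0 & 0x80),
        "opcode": b0 & 0x0F,        # 0x9 == ping
        "masked": bool(b1 & 0x80),
        "payload_len": b1 & 0x7F,   # 0 for these frames
        "mask_key": frame[2:6] if b1 & 0x80 else None,
    }

hdr = decode_ws_frame_header(b"\x89\x80\x00\x85\x86M")
print(hdr)  # opcode 9 (ping), masked, zero-length payload
```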
Dec 06 2020 04:27 AM - edited Dec 06 2020 04:30 AM
Hi..
I found out more... this is not a space-on-the-camera issue: df reports the /data disk is only 34% full.
1) I built the samples from the SDK from scratch, pushed them to my own repo, and, using the default DLC model, the camera recognizes a new ModelZipUrl property value and applies it properly.
2) When I publish my own DLC exported from the Custom Vision portal and place it in storage, the IoT Edge agent promptly downloads the new zip and applies it correctly. By design it wipes the original zip and the contents of the mounted drive, then unzips properly.
3) The agent forces the visionsample module to stop and start. Properties are applied, including the new model, except that during startup the /vam data lookup API call in the iotccsdk now always fails to return a good result, /data in the event is null, and even though the original device twin report showed analytics set to 'true', the camera property is then always auto-set to 'false'. I'm guessing this is because the model DLC fails to load correctly onto the on-camera NN processor, due to certain nodes in the custom DLC exported directly from the portal with the version of the 'caffe' model used as the source.
4) Setting back to the COCO DLC from the original sample on GitHub returns valid values for the API call through /data. This led me to believe it might be something with my model DLC, even though it was a straight export from the portal.
5) Luckily I had a version of my custom model DLC from August (almost 50 iterations prior). That zip had been known to produce custom results on camera and load correctly. And guess what: it did!! Although I still get the weird "send: \x89\x80\x00\..."...
6) But my guess now is that this is actually telemetry, maybe direct from the chip or camera itself, or maybe from the IoT Hub device client, being directed through the console and showing up in the logs as an encoded byte stream. I think it's the Hub, because I see my message count rising when no other registered devices are connected or on.
7) Since I was able to load an older model, I decided to attempt to create my own DLC from an existing frozen graph. I downloaded the Snapdragon SDK from Qualcomm, got it up and running correctly, and attempted to convert one of my new frozen graphs myself. The SDK also has tools for viewing the contents of a DLC, which come in handy if you want to know what the DLC NN structure is.
8) The SDK could not convert the SavedModel to DLC due to structure and other node anomalies! The Snapdragon SDK version for the working models is ~25, while the most recent version used in the Custom Vision portal is ~40...? Anyway, I think the older model works because the base 'caffe' version of the model used to create the DLC from the portal now generates a DLC that is too complex for the version of the chip on the camera, since my model is far more complex than the one from August (the working DLC from my project, iteration 80).
9) The issue with the camera and custom models is probably down to how the SDK version hosted behind the Custom Vision portal picks operators during the conversion, and whether or not the generated structure is supported by the Snapdragon NN bits on the camera.
10) With no container/module changes, and changing only the device twin property "ModelZipUrl" back to the DLC from the sample (the COCO model), the camera set analytics to true and reported results again.
Something about the DLC generated from the Custom Vision portal and its compatibility with the 'SNPE' on camera: my project has ~1800 training images of varying sizes and crops and only ~13 labels. I have Snapdragon SDK 1.43, and attempting to run the manual conversion to DLC myself against a TensorFlow (instead of Caffe) model fails with errors about an invalid/unsupported model structure. I also attempted to run the inference with 'Runtime'=0 set to force CPU instead of GPU, and it failed the same way on the camera.
Thanks in advance!
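Since most of the failures above come down to what's inside the model zip, a quick sanity check of the archive before setting ModelZipUrl can help. This is a sketch: the expected file names (model.dlc, labels.txt, va-snpe-engine-library_config.json) are assumed from the DevKit sample model packages, so adjust them if your package differs:

```python
import io
import zipfile

# File names assumed from the Vision AI DevKit sample model packages.
EXPECTED = {"model.dlc", "labels.txt", "va-snpe-engine-library_config.json"}

def missing_model_files(zip_bytes: bytes):
    """Return the set of expected files missing from a model zip."""
    with zipfile.ZipFile(io.BytesIO(zip_bytes)) as zf:
        # Compare bare file names so nested folders inside the zip still match.
        names = {name.rsplit("/", 1)[-1] for name in zf.namelist()}
    return EXPECTED - names

# Build a stand-in zip in memory to demonstrate the check.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("model.dlc", b"")
    zf.writestr("labels.txt", "hardhat\nnohardhat\n")
missing = missing_model_files(buf.getvalue())
print(missing)  # the va-snpe config file is absent from this stand-in zip
```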
Dec 30 2020 02:35 AM
@Sridhar_Kothalanka Hello, I am having the same problem. Have you found a solution yet?