We are trying to deploy a custom model onto the Microsoft AI Camera (QCS603 chipset). We converted our custom model into a DLC file and validated it with offline inference on SNPE (Snapdragon Neural Processing Engine), and it works as expected.
Now we are trying to move the DLC file to the camera so we can tap into the camera pipeline and run inference live on incoming frames. For this, we are pushing the DLC via Azure IoT Hub. When we deploy our DLC and its associated files, the camera recognizes that it has to download a new runtime (essentially the Docker image with the new DLC and related files) and downloads it, but we do not see the bounding-box overlay from our model in the output.
We checked the IoT Hub portal pages and found that the module running the DLC exited with status "Backoff". Any suggestions or advice on how to debug this issue? Can anyone point us to the right resources to resolve it?
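For context, the only diagnostics we have looked at so far are the IoT Hub portal pages. As we understand it (we have not yet confirmed this on this specific device), the Azure IoT Edge runtime on the camera should also expose module state and logs locally over an adb/SSH shell, along the lines of:

```shell
# List all IoT Edge modules and their current status
# (a module in "Backoff" should show up here as failing/restarting)
sudo iotedge list

# Pull the recent logs of the failing module to see why it keeps exiting;
# <module-name> is a placeholder for the name reported by `iotedge list`
sudo iotedge logs <module-name> --tail 200

# Inspect the IoT Edge daemon itself for deployment or image-pull errors
sudo journalctl -u iotedge --no-pager -n 200
```

If these are the right commands for this camera's runtime, the module log output is presumably where the actual crash reason (bad DLC path, missing file, container exit code) would appear.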