Forum Discussion
Ask the IoT Expert: Computer Vision based AI Workloads at the Edge
pdecarlo I have been following your guidance and I really appreciate all the work you put into these documents. However, I am having trouble figuring out how to attach USB cameras as a source. I believe I have the right brands, but my goal is to use two USB cameras for a connected project. I might have missed where you discussed this in the past; if so, I apologize.
Essentially, you need to expose the relevant /dev/video* entries to the container, and from there they should be accessible from your IoT Edge workload.
First list the available USB video devices like this:
ls -ltrh /dev/video*
This should list your available devices. Then, for each device that you would like exposed to an IoT Edge module, add an entry similar to the following to your deployment.template.json:
"HostConfig": {
  "Devices": [
    { "PathOnHost": "/dev/video0", "PathInContainer": "/dev/video0", "CgroupPermissions": "rwm" },
    { "PathOnHost": "/dev/video1", "PathInContainer": "/dev/video1", "CgroupPermissions": "rwm" }
  ]
}
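For context, that HostConfig block sits under the module's createOptions in the deployment template; the IoT Edge tooling stringifies it when generating the final manifest. Here is a rough sketch of the surrounding structure (the module name CameraCaptureModule and its image are placeholders for your own module):

```json
{
  "modules": {
    "CameraCaptureModule": {
      "version": "1.0",
      "type": "docker",
      "status": "running",
      "restartPolicy": "always",
      "settings": {
        "image": "${MODULES.CameraCaptureModule}",
        "createOptions": {
          "HostConfig": {
            "Devices": [
              { "PathOnHost": "/dev/video0", "PathInContainer": "/dev/video0", "CgroupPermissions": "rwm" },
              { "PathOnHost": "/dev/video1", "PathInContainer": "/dev/video1", "CgroupPermissions": "rwm" }
            ]
          }
        }
      }
    }
  }
}
```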
We have an article that focuses on connecting CSI cameras which should help with the specifics; while it is not about USB cameras, the underlying concepts are very much relevant:
https://www.hackster.io/pjdecarlo/custom-object-detection-with-csi-ir-camera-on-nvidia-jetson-c6d315
We also have a recorded livestream that I believe covers the usage of USB cameras and how to modify an associated DeepStream configuration to use them @ https://www.youtube.com/watch?v=yZz-4uOx_Js
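As a quick sketch of what that DeepStream change looks like: in a deepstream-app configuration, a USB camera is typically wired up as a V4L2 source group. The resolution and frame-rate values below are examples, and exact keys can vary by DeepStream version, so treat this as a starting point rather than a drop-in config:

```ini
# Example [source0] group for a USB (V4L2) camera in a deepstream-app config.
# type=1 selects the CameraV4L2 source; dev-node 0 maps to /dev/video0.
[source0]
enable=1
type=1
camera-width=1280
camera-height=720
camera-fps-n=30
camera-fps-d=1
camera-v4l2-dev-node=0
```

A second USB camera would get its own [source1] group with camera-v4l2-dev-node=1, matching the second device you exposed to the container.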
Let me know if this helps. While not specific to your request, it should give details on the bigger picture and help you understand how to accommodate a variety of video input methods.
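One last tip: once the module is deployed, you can verify from inside the container that the device mappings took effect. A minimal sketch (the device paths are examples matching the deployment entries above):

```shell
# Run inside the IoT Edge module container (e.g. via `docker exec`) to
# confirm the mapped device nodes are visible; prints one line per device.
for dev in /dev/video0 /dev/video1; do
  if [ -e "$dev" ]; then
    echo "mapped: $dev"
  else
    echo "missing: $dev"
  fi
done
```

If a device shows as missing, double-check the PathOnHost/PathInContainer entries and redeploy.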