User Profile
pdecarlo
Joined 7 years ago
Recent Discussions
Using IoTEdge with Cognitive Services Containers to translate Retro Video Games 🧠+🎮
GitHub repo with instructions to reproduce the Dev.to article on using Cognitive Services Containers with IoT Edge. This project runs AI services side by side with RetroArch on top of Lakka, using containers to allow for interesting interactions with retro video games in a modular and remotely configurable fashion.
Re: Ask the IoT Expert: Computer Vision based AI Workloads at the Edge

Essentially, you need to expose the /dev/video* entry to the container; from there it should be accessible from your IoT Edge workload. First, list the available USB video devices:

    ls -ltrh /dev/video*

This should list your available devices. Then, for each device that you would like exposed to an IoT Edge module, add an entry similar to the following in your deployment.template.json:

    "HostConfig": {
      "Devices": [
        {
          "PathOnHost": "/dev/video0",
          "PathInContainer": "/dev/video0",
          "CgroupPermissions": "rwm"
        },
        {
          "PathOnHost": "/dev/video1",
          "PathInContainer": "/dev/video1",
          "CgroupPermissions": "rwm"
        }
      ]
    }

We have an article that focuses on connecting CSI cameras that should assist with more specifics; while not related to USB cameras, the underlying concepts are very much relevant: https://www.hackster.io/pjdecarlo/custom-object-detection-with-csi-ir-camera-on-nvidia-jetson-c6d315

We also have a recorded livestream that I believe covers the usage of USB cameras and how to modify an associated DeepStream configuration to use them: https://www.youtube.com/watch?v=yZz-4uOx_Js

Let me know if this helps. While not specific to your request, it should give details on the bigger picture to help you understand how to accommodate a variety of video input methods.
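For context, here is a minimal sketch of where such a Devices entry typically sits inside a module definition in deployment.template.json, under the $edgeAgent desired properties. This assumes the VS Code IoT Edge solution tooling, which accepts createOptions as a nested object in the template and stringifies it when generating the deployment manifest; the module name CameraCaptureModule and the ${MODULES.CameraCaptureModule} image placeholder are hypothetical, so substitute your own module and image reference.

    "modules": {
      "CameraCaptureModule": {
        "version": "1.0",
        "type": "docker",
        "status": "running",
        "restartPolicy": "always",
        "settings": {
          "image": "${MODULES.CameraCaptureModule}",
          "createOptions": {
            "HostConfig": {
              "Devices": [
                {
                  "PathOnHost": "/dev/video0",
                  "PathInContainer": "/dev/video0",
                  "CgroupPermissions": "rwm"
                }
              ]
            }
          }
        }
      }
    }

After deployment, you can confirm the device is visible from inside the running module by repeating the ls /dev/video* check in the module's container.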
Ask the IoT Expert: Computer Vision based AI Workloads at the Edge

Recent advances in artificial intelligence give computers the ability to identify objects within image data and video streams in near real time. This concept, referred to as "Computer Vision," is now more accessible than ever before with the advent of services like CustomVision.AI, which let you train custom object detection models by providing sample images of the objects of interest. As compute power increases, it is unlocking the ability to run accelerated computer vision workloads on small form-factor IoT devices. This opens the door to a wave of AI-powered solutions that can perform vision-related tasks without the need for heavy computational resources located off-site. This paradigm is often referred to as the Artificial Intelligence of Things, or AIOT. Applications of AIOT can benefit practically any physical space (factories, smart cities, office space, etc.) by allowing for solutions that detect anomalies, alert on visual cues, and provide insight into the environment by using camera feeds as a new means of sensor input operated on by computer vision algorithms.

My name is Paul DeCarlo, Principal Cloud Developer Advocate at Microsoft, with a focus on Internet of Things solutions and the application of computer vision concepts within edge environments. I have a passion for sharing how to design and develop AIOT solutions with broad audiences, and I often do so in the form of developer livestreams and the creation of online training materials.

Are you interested in learning more about how computer vision workloads might work for you, or in exploring the field to understand what is currently possible? I am here to answer any and all questions you might have on the topic, at all levels: whether you have a specific question on how to implement the appropriate computer vision algorithm to solve a problem for your line of business, or you want to know where to get started as a beginner. I will be monitoring this forum throughout the month of October and look forward to helping answer your questions as they arise.

If you have any questions regarding Computer Vision based AI Workloads at the Edge, please leave them as comments in this discussion, and either I or members of the community will be on hand to provide you with answers. To make this Ask the IoT Expert globally inclusive, the Q&A will play out in this post and last for the whole month of October.
Learn TV - Sharpen your AI Edge skills with Microsoft Learn and Oxford University, 12th Oct 2020

Agenda

On October 12 at 1 PM PDT (9 PM BST), Ayse Mutlu from the University of Oxford and Paul DeCarlo from Microsoft's IoT Advocacy team will livestream an in-depth walkthrough of a custom Continuous Integration / Continuous Delivery pipeline using Azure DevOps for IoT Edge solutions. The content will be based on an interactive learning module from Microsoft Learn that can be followed at your leisure at https://aka.ms/learnlive/cicd. You can follow along with us live on October 12, or join the Microsoft IoT Cloud Advocates in the forum at https://techcommunity.microsoft.com/t5/internet-of-things-iot/ct-p/IoT throughout October to ask your questions about CI/CD development with Azure DevOps for IoT Edge solutions.

Meet the Presenters

Ayşe Mutlu - Data Scientist & Tutor of AI: Cloud and Edge Implementations, University of Oxford
Paul DeCarlo - Principal Cloud Advocate, Microsoft

Session Details

The Internet of Things is a technology paradigm that involves the use of internet-connected devices to publish data, often in conjunction with real-time data processing, machine learning, and/or storage services. Development of these systems can be enhanced through the application of modern DevOps principles, which include automation, monitoring, and all steps of the software engineering process from development and testing through quality assurance and release.

In this Learn TV Live session, we will create a DevOps solution for Azure IoT Edge devices. The solution will employ a CI/CD (continuous integration/continuous deployment) strategy using Azure DevOps, Azure Pipelines, and Azure Monitor Application Insights on a Kubernetes cluster.

Learn Live module discussed and demonstrated during the session: https://aka.ms/learnlive/cicd

Ready To Go

Our livestream will be shown live on this page and on Microsoft Learn TV on Monday, 12th October 2020. This is a global event and can be viewed live at these times:

India: 13:30 IST | EU: 10pm CEST | UK: 9pm BST | US: 4pm EDT | US: 1pm PDT

We hope to be able to bring you live chat on Learn TV on that date, but you can also watch the live stream and interact with us on Learn TV: http://aka.ms/IOTOxford
Recent Blog Articles
🤖🧵Fabric + AI HackTogether Wrap-Up: How to submit and get ready for Fabric Community Conf
Hack Together: The Microsoft Fabric Global AI Hack
The Microsoft Fabric Global AI Hack is your playground for creating and experimenting with Microsoft Fabric. With mentorship from Microsoft expert...

🤖🧵Microsoft Fabric AI Hack Together: Create, Evaluate, and Score a Churn Prediction Model
Hack Together: The Microsoft Fabric Global AI Hack
The Microsoft Fabric Global AI Hack is your playground for creating and experimenting with Microsoft Fabric. With mentorship from Microsof...

🤖🧵Microsoft Fabric AI Hack Together Kickoff: Ready, Set Hack! Do more with AI in Microsoft Fabric
Hack Together: The Microsoft Fabric Global AI Hack
The Microsoft Fabric Global AI Hack is your playground for creating and experimenting with Microsoft Fabric. With mentorship from Microsoft...