Internet of Things Blog
6 MIN READ

A Visual Guide to Azure Percept

nitya
Microsoft
Sep 16, 2021

 

Welcome to an illustrated guide to Azure Percept – a new end-to-end edge AI platform from Microsoft that helps IoT practitioners go seamlessly from silicon to services when developing & deploying intelligent edge applications. The guide gives an overview of the Azure Percept platform capabilities and components, explains how it solves a key problem for edge AI developers today, and concludes with a list of relevant resources to jumpstart your own prototyping journey.

About Visual Guides:

Did you know 65% of us are visual learners? Our brains are wired to absorb information from visual cues and use them to detect patterns and make connections faster, and with better recall. Visual guides offer a “big picture” summary of the topic that you can use as a pre-read (before diving into documentation) or a post-recap resource (to identify gaps in learning or coverage). Download a high-resolution image of the visual guide to Azure Percept and use it as desktop wallpaper or print it out as a handy reference to support your learning journey.

About AI at the Edge:

Edge Computing defines a distributed architecture in which compute resources are placed close to the devices and sensors that gather data, reducing the network latency and bandwidth usage associated with sending everything to the cloud. By pairing an intelligent edge with an intelligent cloud, we get faster decision making, offline operation, optimized network usage, and data privacy protections. Edge AI uses edge compute resources to run machine learning and data analytics processes on-device (e.g., for real-time insights, intelligent decision-making, and workflow automation solutions), making such platforms critical to hybrid cloud strategies.
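To make the on-device inference idea concrete, here is a minimal Python sketch (not specific to Azure Percept) of running a pre-trained vision model locally with ONNX Runtime. The model file name, input shape, and synthetic frame are assumptions for illustration only – a real edge application would feed camera frames into a model deployed to the device.

    # Minimal on-device inference sketch using ONNX Runtime (illustrative only).
    # "model.onnx" and the 1x3x416x416 input shape are assumptions, not Percept specifics.
    import numpy as np
    import onnxruntime as ort

    session = ort.InferenceSession("model.onnx")      # model shipped to the edge device
    input_name = session.get_inputs()[0].name

    # Stand-in for a camera frame captured on the device.
    frame = np.random.rand(1, 3, 416, 416).astype(np.float32)

    # Inference runs locally – no cloud round-trip is required.
    outputs = session.run(None, {input_name: frame})
    print("Model output shapes:", [o.shape for o in outputs])

Because the model executes on the device itself, the application can keep producing insights even when the network is slow or unavailable – exactly the offline-operation benefit described above.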

 

Why Azure Percept?

 

Many edge AI solutions today must be built from the ground up using diverse hardware (devices) and software (services) that need to be integrated and managed manually, creating workflow complexity for developers. Creating and deploying AI models also assumes a level of data science and machine learning expertise that many traditional IoT developers lack.

Azure Percept is an end-to-end technology platform from Microsoft that was designed to tackle these challenges, making it easier for IoT practitioners to rapidly prototype, deploy, and manage their edge AI solutions. Azure Percept has three core aspects, which we explore in the core components section below.

With Azure Percept, practitioners can build and deploy custom AI models, set up and manage IoT device collections, and integrate seamlessly with a rich set of Azure cloud services – for edge AI application prototyping & deployment at scale. Azure Percept fits naturally into familiar Azure IoT architectures, lowering the learning curve for adoption. It supports rich tools and documentation for low-code development, so developers can build & deploy edge AI solutions without needing deep data science or cloud computing expertise. Let's explore the visual guide to Azure Percept!

A Visual Guide To Azure Percept

 

The illustrated guide below gives a visual summary of the Azure Percept Overview documentation. I recommend you download the high-resolution image of this guide and use it as a reference for the rest of this post.


 

In the next few sections, we’ll walk through key parts of the visual guide (with relevant links for self-guided deep dives), focusing on three aspects: the big picture, the core components, and next steps to get started. Let's dive in!

 

1. Azure Percept: The Big Picture


Azure Percept is a family of hardware, software, and services that covers the full stack from silicon to services, helping customers solve the integration challenges of using AI at the edge, at scale. It tackles three main points of friction for edge AI integrations:

  • Selecting the right silicon (hardware integrations) to power the edge AI solution.
  • Providing end-to-end security (hardware, software, models, data) for edge AI.
  • Building & managing edge AI solutions (device & service deployment) at scale.


2. Azure Percept: Core Components

To achieve this, the Azure Percept platform provides support for on-device prototyping (dev kit), cross-device workflow and solution management (portal), and guidance for best practices in each case. Let’s look at the three aspects briefly:

 

  • Azure Percept Development Kit – with flexible hardware options for diverse AI prototyping scenarios using computer vision or speech. Provides built-in hardware acceleration and trusted security solutions. Supports the 80/20 rail system for limitless device mounting configurations. Integrates with Azure AI and Azure IoT services – but also runs AI models on-device without a connection to the cloud, for reliable and efficient real-time insights (see the device-to-cloud sketch after this list). It also integrates seamlessly with Azure Percept Audio, an optional accessory for speech solutions.
  • Azure Percept Studio – a single launch and management portal for your edge AI models and solutions. Access pre-built AI models or develop custom versions for your app requirements. Use guided workflows to seamlessly integrate AI-capable hardware (edge devices) and cloud services (Azure AI and Azure IoT) – using a low-code approach to development. Create end-to-end edge AI solutions quickly without extensive data science or cloud computing experience.
  • AI Hardware Reference Design and Certification Programs – from best practices (e.g., security recommendations) to device specifications (e.g., datasheets for Azure Percept Vision, Azure Percept DK) and support for firmware updates and device deployments.
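As a rough illustration of the dev kit's Azure IoT integration mentioned above, the hedged Python sketch below sends an on-device inference result to Azure IoT Hub with the azure-iot-device SDK. The connection string, device id, and payload fields are placeholders, not values from this post.

    # Hedged sketch: report an on-device inference result to Azure IoT Hub.
    # CONNECTION_STRING and the payload fields are placeholders for illustration.
    import json
    from azure.iot.device import IoTHubDeviceClient, Message

    CONNECTION_STRING = "HostName=<your-hub>.azure-devices.net;DeviceId=<device-id>;SharedAccessKey=<key>"

    client = IoTHubDeviceClient.create_from_connection_string(CONNECTION_STRING)
    client.connect()

    # A detection produced locally by the vision model on the device.
    payload = {"label": "person", "confidence": 0.92, "camera": "percept-vision-01"}

    msg = Message(json.dumps(payload))
    msg.content_type = "application/json"
    msg.content_encoding = "utf-8"
    client.send_message(msg)      # device-to-cloud telemetry

    client.shutdown()

In practice, the dev kit's guided setup and IoT Edge integration handle much of this plumbing; the sketch simply shows the kind of device-to-cloud message that downstream Azure services consume.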


3. Azure Percept Studio: Getting Started

To get started prototyping your edge AI applications or solutions, you’ll need access to suitable device hardware and an Azure account for service integration and solution management needs.

  • The Azure Percept Development Kit is currently available for purchase here.
  • The Azure Percept Studio is accessible here (with a valid Microsoft Azure account).

Start by logging into the Azure Percept Studio portal – you'll see an entry page like the one below.

  • The sidebar menu shows how the portal supports both device management (view and deploy edge devices) and application management (create and manage AI vision or speech projects) – see the device-management sketch after this list.
  • The portal also provides handy links to Vision and Speech demos & tutorials for development – with two complete sample applications (People Counting showcasing Live Video Analytics, and Vision On Edge showcasing end-to-end pipelines with 100+ camera feeds deployed) that can be deployed to Azure with one click, for hands-on experimentation.
  • Finally, the portal provides access to advanced tools for cloud development (e.g., Azure ML Notebooks), local device deployment (e.g., AI dev toolchain) and security measures (e.g., AI model protection).
  • Plus, access other advanced development content in preview mode on this GitHub repo.
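To illustrate what the device-management side can look like programmatically (outside the portal UI), here is a hedged Python sketch that registers an edge device and reads back its twin using the azure-iot-hub service SDK. The connection string and device id are placeholders, and the exact SDK surface may differ by version – the portal's guided workflows accomplish the same steps without any code.

    # Hedged sketch: register a device and read its twin with the azure-iot-hub SDK.
    # IOTHUB_CONNECTION_STRING and DEVICE_ID are placeholders for illustration.
    from azure.iot.hub import IoTHubRegistryManager

    IOTHUB_CONNECTION_STRING = "HostName=<your-hub>.azure-devices.net;SharedAccessKeyName=<policy>;SharedAccessKey=<key>"
    DEVICE_ID = "percept-dk-01"

    registry_manager = IoTHubRegistryManager(IOTHUB_CONNECTION_STRING)

    # Register the device with auto-generated SAS keys.
    device = registry_manager.create_device_with_sas(
        DEVICE_ID, primary_key=None, secondary_key=None, status="enabled"
    )
    print("Registered:", device.device_id)

    # Read the device twin to inspect tags and reported properties.
    twin = registry_manager.get_twin(DEVICE_ID)
    print("Twin tags:", twin.tags)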


The "Create a Prototype" option is a good place to start your prototyping journey. The Azure Percept Studio portal provides  computer vision and speech (voice assistant) tutorials with a low-code approach (using visual interfaces & templates for interactive configuration) that walks you through the process of exploring sample AI models, creating custom models, and deploying these prototypes to your edge devices – all from one unified interface. Sample models exist for people detection, vehicle detection, general object detection and products-on-shelf detection use Azure IoT Hub and Azure IoT Edge service integrations for seamless deployment.

Interested in deploying AI at the Edge in your solutions and projects? We’d love to hear from you about the application domain and usage scenarios and keep you updated on what’s coming next. Drop a comment below, subscribe to the blog, stay in touch!

 

4. Summary & Next Steps

Thanks for reading! This was a quick visual introduction to Azure Percept – an end-to-end edge AI platform that supports the “Sense. Know. Act.” requirements for intelligent edge applications!

  • Azure Percept consists of an Azure Percept Development Kit (for devices), a backend Azure Percept Studio portal (for service integrations and solution management), and sample AI models (for computer vision and speech) for rapid prototyping of edge AI applications.
  • The illustrated guide to Azure Percept summarizes the main takeaways from the Azure Percept Overview, with a downloadable high-resolution image for pre-read or post-recap help.

Want to learn more on your own? Here are some relevant resources to get you going:

 

Azure IoT Fundamentals:

Azure Percept:

 

 



Updated Sep 15, 2021
Version 1.0
  • benkotvis (Copper Contributor)

    This visual guide is very nice! What does Microsoft envision as the next step after prototyping a vision solution? The Percept hardware and the supporting software are really easy to use and work quite well, but after the vision model is created it's not clear where to go from there. There is no "PK" (production kit). The vision model that is produced by Azure Custom Vision is not a compact domain vision model that can be operationalized elsewhere. So the only option is to use that model as an API in the cloud, which is a very different architecture from what was being prototyped and doesn't use edge computing in a meaningful way. Are there guidelines in the works post-prototype?

  • benkotvis, thanks so much for taking the time and effort to provide such valuable feedback! I'll make sure to update this comment thread when Amiyouss and the awesome IoT / Azure Percept team hit the relevant roadmap marker.