The spread of COVID-19 has changed our day-to-day lives in unprecedented ways. Organizations around the world are taking action to contain and help prevent further spread of the disease, using AI technologies such as computer vision to help ensure the safety of their employees and customers.
Azure Cognitive Services now provides mask detection functionality to assist application developers in building solutions that can help monitor and contain the spread. Mask detection can be deployed anywhere, from the cloud using the Face service, to the edge using the Spatial analysis service.
Mask detection on the edge
Spatial analysis is a capability of Computer Vision, part of Azure Cognitive Services. It understands people’s movements in a physical space by analyzing real-time video, significantly increasing efficiency and providing valuable insights that enable a variety of scenarios, including:
Counting people in a space for maximum occupancy
Understanding the distance between people for social distancing measures
Determining customer footfall such as in retail spaces
Determining wait time in a checkout line
Determining trespassing in protected areas
Spatial analysis can now detect whether a person is wearing a protective face covering or not. With this new capability, businesses can leverage insights to build applications that measure safety and enhance compliance. For example, a business can aggregate data on the percentage of people wearing masks in a physical space to improve compliance measures. To help ensure the safety of people working in a given space, mask detection can also be used to send a notification when a person enters the space without a face mask.
Mask detection can be enabled for the following spatial analysis operations: personcount, personcrossingline, and personcrossingpolygon. The classifier model is disabled by default; it can be enabled by setting the ‘ENABLE_FACE_MASK_CLASSIFIER’ parameter to True. The attributes face_mask and face_noMask will be returned as metadata, with a confidence score, for each person detected in the video stream.
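As a sketch, the flag can be set in the operation's detector node configuration within the IoT Edge deployment manifest (the surrounding fields shown here are illustrative placeholders; only ENABLE_FACE_MASK_CLASSIFIER is the parameter named above):

```json
{
  "DETECTOR_NODE_CONFIG": {
    "gpu_index": 0,
    "ENABLE_FACE_MASK_CLASSIFIER": true
  }
}
```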
Face mask and Person detection with Spatial analysis
Spatial analysis operations provide a real-time video analysis pipeline on new and existing RTSP cameras. The deployment of the spatial analysis container on edge devices is facilitated by Azure IoT Hub. When video is streamed and processed by spatial analysis, the container emits AI insight events about people’s movement which in turn are sent to Azure IoT Hub as IoT telemetry. From IoT Hub you can create various routes to other Azure services and build your business solutions.
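Downstream of IoT Hub, a business application can tally mask attributes from each telemetry payload. The following is a minimal sketch of such a tally; the exact payload shape is an illustrative assumption based on the detections metadata described above:

```python
import json

def summarize_masks(event_json):
    """Tally face_mask / face_noMask attributes across all person
    detections in one spatial analysis event payload.
    Note: the payload shape used here is an illustrative assumption."""
    counts = {"face_mask": 0, "face_noMask": 0}
    for detection in json.loads(event_json).get("detections", []):
        attrs = detection.get("metadata", {}).get("attributes", {})
        for name in counts:
            if name in attrs:
                counts[name] += 1
    return counts

# Example payload with one masked and one unmasked person detected.
sample = json.dumps({
    "detections": [
        {"metadata": {"attributes": {"face_mask": 0.97}}},
        {"metadata": {"attributes": {"face_noMask": 0.88}}},
    ]
})
print(summarize_masks(sample))  # {'face_mask': 1, 'face_noMask': 1}
```

A function like this could run in an Azure Function triggered by an IoT Hub route, feeding an aggregate compliance dashboard.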
Spatial analysis container deployment with Azure IoT
The events from each operation are egressed to Azure IoT Hub in JSON format. Sample JSON for an event output by the cognitiveservices.vision.spatialanalysis-personcount operation:
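The sketch below illustrates the shape of such an event, trimmed for brevity; values are placeholders, and the face_mask attribute carries the classifier's confidence score for that detection:

```json
{
  "events": [
    {
      "type": "personCountEvent",
      "properties": { "personCount": 2 }
    }
  ],
  "detections": [
    {
      "type": "person",
      "metadata": {
        "attributes": { "face_mask": 0.99 }
      }
    }
  ]
}
```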
To learn how to build business applications with spatial analysis, follow these instructions to deploy a sample Azure web application that presents a live view of people counting events in a physical space. You can adapt this app to other spatial analysis operations and modify it based on the event output of the container.
Mask detection in the cloud
Mask detection is also available through the face detection cloud endpoint in the Azure Cognitive Services Face API. This capability analyzes an image and detects one or more human faces, along with attributes for each face in the image. The face mask attribute is available with the latest detection_03 model, together with an additional attribute, “noseAndMouthCovered”, that indicates whether the mask covers both the nose and mouth.
To leverage the latest mask detection capability, specify the detection model in the API request by setting the detectionModel parameter to detection_03. Refer to How to specify a detection model to learn more about the capabilities of each detection model and for sample code to call it.
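A minimal sketch of such a request, using the Face REST detect endpoint with detectionModel set to detection_03 and the mask attribute requested; the endpoint and key values are placeholders you would replace with your own resource's:

```python
import json
import urllib.parse
import urllib.request

ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"  # placeholder
KEY = "<your-subscription-key>"  # placeholder

def build_detect_url():
    """Build the detect URL selecting the detection_03 model and
    requesting the mask attribute."""
    params = urllib.parse.urlencode({
        "detectionModel": "detection_03",
        "returnFaceAttributes": "mask",
    })
    return f"{ENDPOINT}/face/v1.0/detect?{params}"

def detect_faces(image_url):
    """POST an image URL to the detect endpoint. The response is a list
    of faces; each carries faceAttributes.mask with the mask type and
    the noseAndMouthCovered flag."""
    req = urllib.request.Request(
        build_detect_url(),
        data=json.dumps({"url": image_url}).encode(),
        headers={
            "Ocp-Apim-Subscription-Key": KEY,
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```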
Face mask detection with Face Service
Detection_03 API response with face mask attribute:
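The trimmed sketch below illustrates what such a response can look like; the rectangle values are placeholders, and the mask attribute reports the detected mask type along with whether the nose and mouth are covered:

```json
[
  {
    "faceRectangle": { "top": 78, "left": 394, "width": 113, "height": 100 },
    "faceAttributes": {
      "mask": {
        "type": "faceMask",
        "noseAndMouthCovered": true
      }
    }
  }
]
```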
Microsoft’s principled approach enables developers to build rich solutions while ensuring responsible use.
Responsible deployment recommendations for spatial analysis are provided in accordance with Microsoft’s Responsible AI principles: fairness, reliability & safety, privacy & security, inclusiveness, transparency, and human accountability. For general guidelines and specific recommendations for camera height, angle, and camera-to-focal-point distance, see the Camera placement guide. Also refer to the Face API Transparency Note for clear guidance on the use of facial recognition, to help ensure it fits your goals and achieves accurate results.