Applying AI to Save Lives: How a Team at Microsoft Made Drones Self-Aware
Published Apr 09 2020

One of the more memorable proofs of concept I participated in was alongside Salt Spring Island, BC-based startup InDro Robotics. This drone operator reached out to our team to explore how they could better equip their drones for search and rescue efforts. The plan we collaborated on was to test how automatic object detection could transform search and rescue missions in Canada, with the goal of assisting organizations such as the Canadian Coast Guard in future search and rescue efforts. Prior to partnering with our team, InDro Robotics was manually monitoring specific environments to recognize emergency situations. Each drone had to be watched by a person, resulting in a 1:1 ratio of operators to drones and a heavy reliance on operator attention.

 

Leveraging the Custom Vision Cognitive Service, Azure IoT Hub, and other Azure services, the developers at InDro Robotics can now equip their drones to:
 

  • Recognize emergency situations and notify control stations immediately, before a rescue squad is assigned
  • Identify objects in large bodies of water, such as life vests and boats, and determine the severity of the findings
  • Establish communication between boats, rescue squads, and control stations. The infrastructure supports permanent storage and per-device authentication

The Custom Vision service makes it possible to teach machines to identify objects with only a handful of labeled images. Prior to this offering, deep learning approaches typically required hundreds of images to train a model to recognize just one type of object, which made edge computing impractical for a drone with limited onboard compute.
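As an illustration, here is a minimal sketch of how a classification project and its tags might be created with the Custom Vision Python SDK (azure-cognitiveservices-vision-customvision). The endpoint, key, and tag names below are placeholders, not values from the actual project.

```python
from azure.cognitiveservices.vision.customvision.training import CustomVisionTrainingClient
from msrest.authentication import ApiKeyCredentials

# Placeholder endpoint and key for a Custom Vision training resource.
ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com/"
TRAINING_KEY = "<training-key>"

credentials = ApiKeyCredentials(in_headers={"Training-key": TRAINING_KEY})
trainer = CustomVisionTrainingClient(ENDPOINT, credentials)

# A classification project: whole images are tagged, no bounding boxes required.
project = trainer.create_project("sar-life-vest-poc")

# Hypothetical tags: the object of interest plus an "empty" environment tag.
life_vest_tag = trainer.create_tag(project.id, "life_vest")
empty_ocean_tag = trainer.create_tag(project.id, "empty_ocean")
```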

 

The following steps were taken, and learnings observed, in completing the proof of concept:
 

  • Training of the Custom Vision service works best in a closed or static environment.
    • This ensures the best results in training the service for object detection.
       
  • The service does not require object boundaries to be defined or any specific information regarding object locations.
    • Simply tagging all images lets the framework compare images with different tags to work out what distinguishes each object, which means less time is needed to prepare data for the service.
       
  • Provide images of objects in real environments for better identification.
    • The best training environment for life jacket detection was the water itself. Training images captured in the water, as well as images of the water without any objects, greatly increased accuracy. Images of life vests on a white background did not work to train the model because the environment was too different. InDro Robotics drones were used to capture photos in real time to properly prepare for the project.
       
  • A minimum of 30 images per tag/label is required to adequately train the model.
    • The ability to work from such a small image set really helps to simplify the workflow, which is an important aspect, especially when it comes to training many different objects (see the upload-and-train sketch below).
       
      Custom Vision AI: Training the model
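Continuing the earlier sketch, the batch upload and training step might look roughly like this. The folder layout, placeholder IDs, and the 64-images-per-batch handling are assumptions for illustration, not details taken from the original project.

```python
import os
import time

from azure.cognitiveservices.vision.customvision.training import CustomVisionTrainingClient
from azure.cognitiveservices.vision.customvision.training.models import (
    ImageFileCreateBatch,
    ImageFileCreateEntry,
)
from msrest.authentication import ApiKeyCredentials

trainer = CustomVisionTrainingClient(
    "https://<your-resource>.cognitiveservices.azure.com/",
    ApiKeyCredentials(in_headers={"Training-key": "<training-key>"}),
)
project_id = "<project-id>"  # from the project created earlier
tag_ids = {"life_vest": "<life-vest-tag-id>", "empty_ocean": "<empty-ocean-tag-id>"}

# Build one upload entry per image; a few dozen images per tag is enough to start.
entries = []
for tag_name, folder in [("life_vest", "images/life_vest"), ("empty_ocean", "images/empty_ocean")]:
    for file_name in os.listdir(folder):
        with open(os.path.join(folder, file_name), "rb") as image:
            entries.append(
                ImageFileCreateEntry(
                    name=file_name,
                    contents=image.read(),
                    tag_ids=[tag_ids[tag_name]],
                )
            )

# The service accepts up to 64 images per batch call.
for start in range(0, len(entries), 64):
    trainer.create_images_from_files(
        project_id, ImageFileCreateBatch(images=entries[start:start + 64])
    )

# Kick off training and poll until the iteration completes.
iteration = trainer.train_project(project_id)
while iteration.status != "Completed":
    time.sleep(5)
    iteration = trainer.get_iteration(project_id, iteration.id)
```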

 

  • One or more tags can be assigned per image during or after the upload process.
     
  • It is important to create an “empty” environment tag.
    • This project required uploading images of the ocean without life vests, captured at an altitude of 3,000 feet, using different filters to replicate weather patterns and varying water depths:
       
      Custom Vision AI: Tagging images
 

 

  • Training of the images took a matter of seconds, and the model was ready for testing once all the tags were completed.
     
  • The “Performance” tab can be used to check how many images were accepted per tag.
    • The current iteration can be set as the default if you are satisfied with the results (a sketch of reading performance through the SDK follows the image below):
       
      Custom Vision AI: Performance per tag
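For reference, iteration performance can also be read, and the iteration published for prediction, programmatically. This is a hedged sketch using the same Python SDK; the iteration name and prediction resource ID are placeholders, not values from the actual deployment.

```python
from azure.cognitiveservices.vision.customvision.training import CustomVisionTrainingClient
from msrest.authentication import ApiKeyCredentials

trainer = CustomVisionTrainingClient(
    "https://<your-resource>.cognitiveservices.azure.com/",
    ApiKeyCredentials(in_headers={"Training-key": "<training-key>"}),
)
project_id, iteration_id = "<project-id>", "<iteration-id>"

# Per-tag precision and recall for the trained iteration.
performance = trainer.get_iteration_performance(project_id, iteration_id, threshold=0.5)
for tag in performance.per_tag_performance:
    print(f"{tag.name}: precision={tag.precision:.2f}, recall={tag.recall:.2f}")

# Publish the iteration so the Prediction API can reach it by name.
trainer.publish_iteration(
    project_id, iteration_id, "life-vest-classifier", "<prediction-resource-id>"
)
```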

 

  • The Custom Vision portal provides several ways to test images from your dataset directly in the browser.
    • Click the “Quick Test” button and upload an image to accomplish this (the same check can be scripted, as sketched below):
       
      Custom Vision AI: Quick test
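The training SDK exposes a comparable quick-test call, sketched below under the same placeholder endpoint, key, project ID, and file name; treat the exact method name as something to verify against the SDK version you use.

```python
from azure.cognitiveservices.vision.customvision.training import CustomVisionTrainingClient
from msrest.authentication import ApiKeyCredentials

trainer = CustomVisionTrainingClient(
    "https://<your-resource>.cognitiveservices.azure.com/",
    ApiKeyCredentials(in_headers={"Training-key": "<training-key>"}),
)

# Run a quick test against the latest trained iteration with a local image.
with open("test-images/ocean_with_vest.jpg", "rb") as image:
    result = trainer.quick_test_image("<project-id>", image.read())

for prediction in result.predictions:
    print(f"{prediction.tag_name}: {prediction.probability:.1%}")
```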

       
  • The “Predictions” tab can be used to see all results on each iteration, and reassign images using existing or new tags to improve the model on future iterations.
     
  • Integrating the prediction model with your application can be completed quickly.
    • Select the “Performance” tab, then the Prediction URI menu item:
       
      Custom Vision AI: Prediction API

 

It is possible to send either image URIs or the image files themselves. The images in this proof of concept were stored in a private blob container, so direct URIs would not have worked; the image bytes were sent instead (a prediction sketch follows the image below).

 
Custom Vision AI: Image set in blob storage
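As a rough illustration of that integration, a published iteration can be called with raw image bytes through the Custom Vision prediction SDK. The endpoint, keys, published iteration name, and file path here are placeholders rather than values from the actual deployment.

```python
from azure.cognitiveservices.vision.customvision.prediction import CustomVisionPredictionClient
from msrest.authentication import ApiKeyCredentials

predictor = CustomVisionPredictionClient(
    "https://<your-resource>.cognitiveservices.azure.com/",
    ApiKeyCredentials(in_headers={"Prediction-key": "<prediction-key>"}),
)

# Because the blob container is private, send the image bytes rather than a URL.
with open("downloads/drone_frame_0421.jpg", "rb") as image:
    results = predictor.classify_image("<project-id>", "life-vest-classifier", image.read())

for prediction in results.predictions:
    print(f"{prediction.tag_name}: {prediction.probability:.1%}")
```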

 

Beyond Custom Vision, the following Azure services were used to make the proof of concept successful:
 

  • Azure IoT Hub
    • Authenticated drones sent notifications to the infrastructure when new images became available. The messages contained details such as the image’s storage location, GPS coordinates, and other pertinent metadata
       
  • Azure Blob Storage
    • Stored images permanently and made them available to the control center application
       
  • Azure Functions
    • Triggered by new messages from IoT Hub to invoke the Custom Vision service and update the Azure SQL DB (a sketch of this pipeline follows the list)
       
  • Azure SQL 
    • Permanent storage used to house and index final results
       
  • Azure App Service
    • Provided the infrastructure for communication between the control center application and Azure SQL
       
  • Xamarin
    • The primary technology enabling a cross-platform application for rescue services
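To make the flow concrete, here is a hedged sketch of what the Functions step might look like: a Python Azure Function triggered by IoT Hub messages (via the Event Hub-compatible endpoint) that downloads the referenced image from Blob Storage, scores it with the Custom Vision prediction endpoint, and records the result in Azure SQL. All names, message fields, environment variables, and the table schema are assumptions for illustration, not the project’s actual code.

```python
import json
import logging
import os

import azure.functions as func
import pyodbc
from azure.cognitiveservices.vision.customvision.prediction import CustomVisionPredictionClient
from azure.storage.blob import BlobClient
from msrest.authentication import ApiKeyCredentials


def main(event: func.EventHubEvent):
    """Process one device-to-cloud message sent by a drone through IoT Hub."""
    message = json.loads(event.get_body().decode("utf-8"))

    # Hypothetical message fields: where the drone stored the frame, and where it was taken.
    container, blob_name = message["container"], message["blob_name"]
    latitude, longitude = message["latitude"], message["longitude"]

    # Pull the image bytes from the (private) blob container.
    blob = BlobClient.from_connection_string(
        os.environ["STORAGE_CONNECTION_STRING"], container, blob_name
    )
    image_bytes = blob.download_blob().readall()

    # Score the frame against the published Custom Vision iteration.
    predictor = CustomVisionPredictionClient(
        os.environ["CUSTOM_VISION_ENDPOINT"],
        ApiKeyCredentials(in_headers={"Prediction-key": os.environ["CUSTOM_VISION_KEY"]}),
    )
    result = predictor.classify_image(
        os.environ["CUSTOM_VISION_PROJECT_ID"], "life-vest-classifier", image_bytes
    )
    top = max(result.predictions, key=lambda p: p.probability)
    logging.info("Blob %s classified as %s (%.1f%%)", blob_name, top.tag_name, top.probability * 100)

    # Persist the finding so the control center application can query it.
    with pyodbc.connect(os.environ["SQL_CONNECTION_STRING"]) as connection:
        connection.execute(
            "INSERT INTO Detections (BlobName, Tag, Probability, Latitude, Longitude) "
            "VALUES (?, ?, ?, ?, ?)",
            blob_name, top.tag_name, top.probability, latitude, longitude,
        )
```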

Plans are underway to take the project further. Please feel free to comment below with any additional questions or ideas on how to further evolve this proof of concept.

 
