Forum Discussion
Regenerative Maps Alive on the Edge
This week Mapbox announced it will integrate its Vision SDK with the Microsoft Azure IoT platform, enabling developers to build applications and solutions for smart cities, the automotive industry, public safety, and more. This is an important moment in the evolution of map creation. The Mapbox Vision SDK provides artificial intelligence (AI) capabilities for identifying objects through semantic segmentation, a computer vision technique that classifies every pixel of a camera image by the kind of object it shows. Running semantic segmentation on the edge means objects such as stop signs, crosswalks, speed limit signs, people, bicycles, and other moving objects can be identified in real time by a camera with AI running under the covers. Maps built from these classifications are generally referred to as HD (high definition) maps.
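To make the per-pixel classification idea concrete, here is a minimal sketch. The Vision SDK's on-device models are proprietary, so torchvision's pretrained DeepLabV3 stands in purely as an illustration, and `dashcam_frame.jpg` is a hypothetical input frame.

```python
# Minimal semantic-segmentation sketch: classify every pixel of one frame.
# torchvision's DeepLabV3 is a stand-in for the Vision SDK's own models.
import torch
from PIL import Image
from torchvision import transforms
from torchvision.models.segmentation import deeplabv3_resnet50

model = deeplabv3_resnet50(weights="DEFAULT").eval()

preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

frame = Image.open("dashcam_frame.jpg").convert("RGB")  # hypothetical frame
batch = preprocess(frame).unsqueeze(0)

with torch.no_grad():
    logits = model(batch)["out"][0]   # shape: (num_classes, H, W)

# Each pixel takes its highest-scoring class (person, bicycle, etc.),
# which downstream code can turn into candidate map objects.
class_map = logits.argmax(0)
print(class_map.shape, class_map.unique())
```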
HD maps are a more machine-friendly input for autonomous vehicles. Once the HD map objects are classified, onboard sensors such as GPS and the accelerometer provide position and motion data, so the objects can be georeferenced and placed onto a map, or, in the case of "living maps," registered into the map at run time. This is an important concept, and it is where edge computing intersects with location to streamline the digitization of our world. Edge computing, like what will be powered by Azure IoT Edge and the Mapbox Vision SDK, will enable map data generation in real time. What will be critical is to allow these HD maps to be (1) created through semantic segmentation; (2) integrated into the onboard map; (3) validated against the map using CRUD (create, read, update, delete) operations; and (4) distributed back to devices for use, as sketched below. Microsoft's Azure Machine Learning services can train these AI modules to support additional segmentation capabilities, and other Azure services can provide deeper analysis of the data.
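Step (3) is essentially an upsert problem: a fresh georeferenced detection either updates an object already on the map or creates a new one. Below is a hedged sketch of that reconciliation loop in Python; the `MapObject` and `MapStore` types, the `reconcile` helper, and the 5-metre matching radius are illustrative assumptions, not Mapbox or Azure APIs.

```python
import math
from dataclasses import dataclass

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two WGS84 points."""
    r = 6_371_000
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2
         + math.cos(math.radians(lat1)) * math.cos(math.radians(lat2))
         * math.sin(dl / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

@dataclass
class MapObject:
    obj_id: str
    kind: str       # e.g. "stop_sign", "crosswalk", "speed_limit_sign"
    lat: float
    lon: float

class MapStore:
    """In-memory stand-in for the authoritative map, exposing CRUD."""
    def __init__(self):
        self.objects = {}                       # obj_id -> MapObject

    def create(self, obj):
        self.objects[obj.obj_id] = obj

    def read(self, obj_id):
        return self.objects.get(obj_id)

    def update(self, obj):
        self.objects[obj.obj_id] = obj

    def delete(self, obj_id):
        self.objects.pop(obj_id, None)

def reconcile(store, detection, match_radius_m=5.0):
    """Upsert a detection: update a nearby object of the same kind
    if one exists, otherwise create a new map object."""
    for existing in store.objects.values():
        if (existing.kind == detection.kind
                and haversine_m(existing.lat, existing.lon,
                                detection.lat, detection.lon) < match_radius_m):
            detection.obj_id = existing.obj_id  # keep the stable identity
            store.update(detection)
            return
    store.create(detection)

store = MapStore()
reconcile(store, MapObject("det-1", "stop_sign", 47.62050, -122.34930))
reconcile(store, MapObject("det-2", "stop_sign", 47.62051, -122.34931))
print(len(store.objects))  # 1: the second sighting refreshed the first
```

In practice the validated store would live server-side, deletes would wait for repeated non-observations rather than a single missed frame, and the updated map would then be pushed back to devices, which is step (4) above.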
Read about it in the Azure blog.