Mixed reality is enabling developers to merge the physical and digital worlds, creating rich, immersive experiences. Our platform provides capabilities for building the 3D representations of people, spaces, and objects that are core to these experiences. We are expanding this portfolio with a new service: Microsoft Azure Object Anchors.
In many of today’s mixed reality scenarios, there is a need to align digital content with physical objects. Today, many customers accomplish this with physical markers such as QR codes combined with manual alignment. Azure Object Anchors enables mixed reality developers to automatically align and anchor 3D content to objects in the physical world. By aligning 3D content to real-world objects, Object Anchors saves significant manual labor, reduces alignment errors, and improves the user experience.
With Object Anchors, developers can build applications that automatically detect a specific physical object in the user’s environment and align 3D content to it without using any markers. Object Anchors leverages depth and geometry data about your object, captured by your HoloLens 2, to enable object-specific anchoring and automatic 3D content alignment.
Object Anchors encompasses two main steps: converting your 3D model into a trained detection model, and then using that model at runtime to detect the physical object and align content to it.
First, the Object Anchors service converts your 3D assets into AI models that enable object-aware mixed reality experiences.
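As an illustration only, the conversion step could be sketched as below. This is a hypothetical mock, not the real Azure Object Anchors SDK: the service, class, and method names (`MockConversionService`, `submit`, `poll`, `download_model`) and the parameters (`gravity`, `scale_meters`) are all assumptions made for the sake of the sketch.

```python
# Illustrative mock of the asset-conversion flow: submit a 3D asset,
# poll the conversion job, then download the resulting detection model.
# All names here are hypothetical -- NOT the real Azure Object Anchors API.
from dataclasses import dataclass
from enum import Enum
import uuid


class JobStatus(Enum):
    RUNNING = "running"
    SUCCEEDED = "succeeded"


@dataclass
class ConversionJob:
    job_id: str
    asset_name: str
    status: JobStatus = JobStatus.RUNNING


class MockConversionService:
    """Stand-in for the cloud service that turns a 3D asset
    (e.g. a CAD or mesh file) into a detection model."""

    def __init__(self) -> None:
        self._jobs = {}

    def submit(self, asset_name, gravity, scale_meters):
        # A real service would need details such as the asset's units and
        # gravity direction; the mock simply records the job.
        job = ConversionJob(job_id=str(uuid.uuid4()), asset_name=asset_name)
        self._jobs[job.job_id] = job
        return job

    def poll(self, job_id):
        job = self._jobs[job_id]
        job.status = JobStatus.SUCCEEDED  # the mock finishes instantly
        return job

    def download_model(self, job_id):
        assert self._jobs[job_id].status is JobStatus.SUCCEEDED
        return b"detection-model-for-" + self._jobs[job_id].asset_name.encode()


service = MockConversionService()
job = service.submit("engine.fbx", gravity=(0.0, -1.0, 0.0), scale_meters=1.0)
job = service.poll(job.job_id)
model_blob = service.download_model(job.job_id)
```

The downloaded model blob is the "training output" used by the on-device runtime described next.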
Now that you have your training output, you can start detecting your object(s) of interest and aligning 3D content to them. In many scenarios, users can experience automatic hologram alignment without network access by leveraging the runtime functionality on HoloLens 2.
To create a runtime experience, your code loads the detection model produced by the service, queries the environment to detect the physical object, and then aligns your 3D content to the detected pose.
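The runtime flow described above might be sketched as follows. Again, this is a hypothetical mock rather than the real runtime SDK: `MockObjectObserver`, `detect`, `align_hologram`, and the `Pose` representation are all illustrative assumptions.

```python
# Illustrative mock of the on-device runtime flow: load the downloaded
# detection model, query for the physical object, and align a hologram
# to the detected pose. Names are hypothetical, NOT the real SDK.
from dataclasses import dataclass
from typing import Optional


@dataclass
class Pose:
    position: tuple  # meters, in the device's world coordinate frame
    rotation: tuple  # quaternion (x, y, z, w)


class MockObjectObserver:
    """Stand-in for the runtime component that detects the physical
    object described by a detection model."""

    def __init__(self, model_blob: bytes) -> None:
        self._model = model_blob

    def detect(self) -> Optional[Pose]:
        # A real detector would match depth and geometry data from the
        # headset against the model; the mock returns a fixed pose.
        return Pose(position=(1.0, 0.0, 2.0), rotation=(0.0, 0.0, 0.0, 1.0))


def align_hologram(content_offset: tuple, pose: Pose) -> tuple:
    """Place 3D content relative to the detected object by translating
    the content's local offset into world space (rotation omitted for
    brevity)."""
    return tuple(c + p for c, p in zip(content_offset, pose.position))


observer = MockObjectObserver(b"detection-model")
pose = observer.detect()
if pose is not None:
    world_position = align_hologram((0.0, 0.5, 0.0), pose)
```

In a real application this detect-and-align step would run against live sensor data on the device, which is what allows the alignment to work without network access.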
We’ve made it easy to get started by offering Object Anchors sample applications leveraging Unity and MRTK. You can also write custom code using our runtime SDK.
In mixed reality, the key scenarios that we see repeatedly include task guidance, training, and visual inspection. With Object Anchors, you can:
Toyota is evaluating Object Anchors for their mixed reality application, which provides their technicians with a custom task guidance experience. Markerless detection and alignment were crucial requirements for Toyota, in part due to the risk of cosmetic damage to customers’ vehicles when placing physical markers. Furthermore, Toyota wanted the ability to train their vehicle model once at HQ and enable technicians around the world to then utilize the runtime experience for maintenance procedures. Koichi Kayano summed it up well when he noted:
“Azure Object Anchors enables our technicians to service vehicles more quickly and accurately thanks to markerless and dynamic 3D model alignment. It has removed our need for QR codes, and eliminated the risk of error from manual model alignment, thus making our maintenance procedures more efficient.”
Koichi Kayano, Project Manager Technical Service Division at Toyota
Bouvet, a leading AR/VR agency in Norway, has started evaluating Object Anchors for a range of scenarios. Oyvind from Bouvet describes their experience with Object Anchors below:
“Having the ability to anchor models to objects without needing QR codes is a very good, highly valuable addition to our mixed reality development experience. Azure Object Anchors is simple to setup, easy to use, and improves our overall user experience.”
Oyvind Soroy, Bouvet
We are excited to expand our mixed reality services with Azure Object Anchors and announce private preview of this service. We cannot wait to see what you build with Object Anchors. Sign up here to be considered for the private preview. Learn more about our other mixed reality services here.