Azure Object Anchors is now in private preview
Published Sep 22 2020 07:11 AM
Microsoft

Mixed reality is enabling developers to merge physical and digital worlds, creating rich and immersive experiences. Our platform provides capabilities to enable 3D representations of people, spaces, and objects that are core to these experiences. We are expanding this portfolio with a new service called Microsoft Azure Object Anchors. 

 


 

In many of today’s mixed reality scenarios, there is often a need to align digital content with physical objects. Many customers accomplish this today with physical markers, such as QR codes, combined with manual alignment. Azure Object Anchors enables mixed reality developers to automatically align and anchor 3D content to objects in the physical world. By aligning 3D content to real-world objects, Object Anchors saves significant manual labor, reduces alignment errors, and improves the user experience. 

 

With Object Anchors, developers can build applications that automatically detect a specific physical object in the user’s environment and align 3D content to it without using any markers. Object Anchors leverages depth and geometry data about your object, captured by your HoloLens 2, to enable object-specific anchoring and automatic 3D content alignment. 

 

Azure Object Anchors Workflow 

Object Anchors encompasses two main steps: 

  1. Training experience: the first step is to use a 3D model of the object you want to align with. Run this 3D model through our cloud-based training and ingestion pipeline. From this, you'll receive an object model output to use on your HoloLens 2 device. 
  2. Runtime experience: the second step is to leverage the object model on a HoloLens 2 by calling our SDK to get real-time, markerless 3D content alignment. 

Training experience 

The Object Anchors service converts 3D assets into AI models that enable object-aware mixed reality experiences. The usage flow involves the following steps (a code sketch follows the list): 

  1. For the physical object to which you wish to align content, you’ll need a 3D model of it in one of our supported formats (glTF, GLB, OBJ, FBX, PLY). 
  2. Through the Object Anchors service, the 3D model is ingested and run through our training pipeline. 
  3. Once training is complete, the pipeline outputs an object model binary. 
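
To make this flow concrete, here is a minimal sketch in C#. The client type and every method on it (ObjectAnchorsTrainingClient, SubmitAssetAsync, WaitForCompletionAsync, DownloadObjectModelAsync) are hypothetical placeholder names used for illustration only; the actual service interface may differ.

// Hypothetical sketch of the training flow. ObjectAnchorsTrainingClient and
// its methods are illustrative names, not the shipping SDK surface.
using System;
using System.IO;
using System.Threading.Tasks;

public static class TrainingFlow
{
    public static async Task<string> TrainAsync(string assetPath)
    {
        // Step 1: read a 3D model of the physical object
        // (one of the supported formats: glTF, GLB, OBJ, FBX, PLY).
        byte[] asset = await File.ReadAllBytesAsync(assetPath);

        // Step 2: submit the asset to the cloud ingestion/training pipeline
        // and wait for the training job to finish.
        var client = new ObjectAnchorsTrainingClient("<account-id>", "<account-key>");
        Guid jobId = await client.SubmitAssetAsync(asset);
        await client.WaitForCompletionAsync(jobId);

        // Step 3: download the trained object model binary for use on HoloLens 2.
        string outputPath = Path.ChangeExtension(assetPath, ".model");
        await client.DownloadObjectModelAsync(jobId, outputPath);
        return outputPath;
    }
}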

Runtime experience 

Now that you have your training output, you can start detecting your object(s) of interest and aligning 3D content to them. In many scenarios, users can experience automatic hologram alignment without requiring network access, because the runtime functionality runs on the HoloLens 2 itself. 

 

To create a runtime experience, your code flow will look like this (a sketch follows the list): 

  1. Start session 
  2. Load object model(s) 
  3. Set search area 
  4. Asynchronously detect object(s) and align content 
  5. Lock alignment 
  6. Render 3D content  
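
To show how those steps fit together, here is a minimal sketch of a Unity script in C#. Every type and method name in it (ObjectAnchorsSession, LoadObjectModelAsync, SearchArea.FromSphere, DetectAsync, LockAlignment) is a hypothetical placeholder that mirrors the six steps above, not the actual runtime SDK surface.

// Hypothetical sketch mirroring the six runtime steps; the session, model,
// and detection types are illustrative names, not the shipping SDK surface.
using System.Threading.Tasks;
using UnityEngine;

public class ObjectAlignment : MonoBehaviour
{
    public GameObject hologram; // the 3D content to align to the physical object

    private async Task AlignAsync()
    {
        // 1. Start a detection session on the device.
        var session = await ObjectAnchorsSession.StartAsync();

        // 2. Load the object model produced by the training pipeline.
        var model = await session.LoadObjectModelAsync("engine.model");

        // 3. Set a search area around the user to bound the detection cost.
        var searchArea = SearchArea.FromSphere(Camera.main.transform.position, radius: 2.0f);

        // 4. Asynchronously detect the object; a pose is returned when it is found.
        var detection = await session.DetectAsync(model, searchArea);

        // 5. Lock alignment once the pose is stable so the hologram stops drifting.
        detection.LockAlignment();

        // 6. Render the 3D content at the detected pose.
        hologram.transform.SetPositionAndRotation(detection.Position, detection.Rotation);
    }
}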

We’ve made it easy to get started by offering Object Anchors sample applications leveraging Unity and MRTK. You can also write custom code using our runtime SDK.  

 

Use case scenarios 

In mixed reality, the key scenarios that we see repeatedly include task guidance, training, and visual inspection. With Object Anchors, you can: 

  1. Seamlessly guide employees step by step: walking employees through a set of tasks can be greatly simplified with mixed reality. Object Anchors makes it easy to overlay digital instructions and best practices on top of a physical object. 
  2. Simplify training development: create mixed reality training experiences for your workers, without the need to place markers or spend time manually ensuring hologram alignment. 
  3. Visual inspection: leverage Object Anchors with existing 3D models of objects in your physical space to locate and track instances of those objects in your environment, then quickly perform an inspection with digital content overlaid. 

Customer feedback 

Toyota is evaluating Object Anchors for their mixed reality application, providing their technicians with a custom task guidance experience. Markerless detection and alignment was a crucial requirement for Toyota, in part due to the risk of cosmetic damage to customers’ vehicles when placing physical markers. Furthermore, Toyota wanted the ability to train their vehicle model once at HQ and enable technicians around the world to then utilize the runtime experience for maintenance procedures. Koichi Kayano summed it up well when he noted: 

 

“Azure Object Anchors enables our technicians to service vehicles more quickly and accurately thanks to markerless and dynamic 3D model alignment. It has removed our need for QR codes, and eliminated the risk of error from manual model alignment, thus making our maintenance procedures more efficient.”

Koichi Kayano, Project Manager, Technical Service Division, Toyota 

 


 

Bouvet, a leading AR/VR agency in Norway, has started evaluating Object Anchors for a range of scenarios. Oyvind Soroy from Bouvet describes their experience with Object Anchors below: 

 

“Having the ability to anchor models to objects without needing QR codes is a very good, highly valuable addition to our mixed reality development experience. Azure Object Anchors is simple to set up, easy to use, and improves our overall user experience.” 

Oyvind Soroy, Bouvet  

 

We are excited to expand our mixed reality services with Azure Object Anchors and to announce the private preview of this service. We cannot wait to see what you build with Object Anchors. Sign up here to be considered for the private preview. Learn more about our other mixed reality services here.

8 Comments
Iron Contributor

This is amazing! I had tried object detection and anchors on mobile platforms like ARKit. It's great to see support coming to Azure Mixed Reality Services as well!

Copper Contributor

Nice, any chance of being able to create object targets using the HoloLens for those cases where you don't have access to a 3D model? 

Copper Contributor

Will the "runtime experience" also be available on devices other than Hololens?

Microsoft

@AndersMarkstedt at the moment, that functionality doesn't exist, so you'll need a 3D model of your object(s) in order to leverage Azure Object Anchors. 

 

@dvisentin today we only support HoloLens 2. As usage and engagement with our technology evolves, our team constantly looks at platform support needs. But for now, HoloLens 2 is the only supported device.

Copper Contributor

How sensitive will detection be?
I tried other expensive third-party add-ons, and unless the model matched the physical object 100%, no match was attained.

Let's say we add an alternator from the engine bay of a car:
will it still match even though wires are attached to it, etc.?

 

Regards

Søren

Copper Contributor

A great technology. How precise is it in practice? 

Copper Contributor

Thank you very much, this looks very interesting! This sounds promising even for medical applications, for instance in navigation systems (i.e., neuronavigation). But there is one crucial question: how accurate is the object overlay? I read on a different page from Microsoft about 2-8 cm for large objects. How accurate would it be for objects around 30 cm in diameter? 

Copper Contributor

How can I try out the product?
