World Locking Tools and Cross Session Object Persistence


Hello all, I am porting an application originally developed for the HoloLens 1 to the HoloLens 2 (if you are curious, here you can find a video description of the app). Most elements of the port have been straightforward, but the new persistence management is causing me many issues (and headaches). I have two key requirements:

 

1. Move and rotate objects in the environment at runtime on the HoloLens 2, so that each time the application starts it recognizes the room and repositions all objects where they were placed in previous runs.

 

2. Share the real-world position/rotation of my 3D objects with other HoloLens 2 devices, so that all objects are correctly aligned for all users. Sharing is done over a local network, which might not have internet access.

 

When I developed this system for the HoloLens 1 (a long time ago), everything worked fine: I relied on the WorldAnchorStore and spatial anchors (https://docs.microsoft.com/en-us/windows/mixed-reality/develop/unity/persistence-in-unity). With the latest developments of the Mixed Reality APIs, however, I have studied and tested the two new available approaches:

 

 a) World Locking Tools

 b) ARAnchorManager, using the OpenXR plugin and the AR Foundation API

 

A first observation is that the World Locking Tools approach seems far more stable than the ARAnchorManager approach, and from the documentation it appears to be the one Microsoft recommends. Therefore I would like to meet requirement 1 with World Locking Tools. I seem to have it working, but (1) my solution feels quite hacky, and (2) I am unable to share my information with other devices (failing requirement 2).

 

In the World Locking Tools methodology, I understand that there are no more "explicit" anchors, since the WLT system instantiates them in the background. Still, it is possible to create attachment points, which help stabilize objects (this is my understanding). So, to load objects in the same position on each application run, what I do with attachment points is:

 

- read from a local XML file a list of all my movable objects with their positions and rotations (saved in previous runs)

- assign each of these movable objects an attachment point (using AttachmentPointManager.CreateAttachmentPoint(), https://docs.microsoft.com/en-us/dotnet/api/microsoft.mixedreality.worldlocking.core.iattachmentpoin...)

- each time an object is moved, I save (or update) its position and rotation in the local file
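To make this concrete, here is a stripped-down sketch of what I do per movable object. The XML load/save helpers are placeholders, and my use of the location-handler delegate follows my reading of the IAttachmentPointManager documentation, so please treat it as illustrative rather than authoritative:

```csharp
using Microsoft.MixedReality.WorldLocking.Core;
using UnityEngine;

public class MovablePersistence : MonoBehaviour
{
    private IAttachmentPoint attachmentPoint;

    void Start()
    {
        // Restore the pose saved in a previous run (LoadSavedPose is my own
        // XML-reading helper, shown only as a placeholder here).
        Pose savedPose = LoadSavedPose(gameObject.name);
        transform.SetPositionAndRotation(savedPose.position, savedPose.rotation);

        // Create an attachment point at the object's saved pose.
        var manager = WorldLockingManager.GetInstance().AttachmentPointManager;
        attachmentPoint = manager.CreateAttachmentPoint(savedPose, null, OnLocationUpdate, null);
    }

    // WLT calls this when the attachment point's position is adjusted
    // (e.g., after a refit); I apply the correction to the object.
    private void OnLocationUpdate(Pose adjustment)
    {
        Pose current = new Pose(transform.position, transform.rotation);
        Pose corrected = adjustment.Multiply(current);
        transform.SetPositionAndRotation(corrected.position, corrected.rotation);
    }

    // Called when the user moves the object: notify WLT, then persist the pose.
    public void OnUserMoved()
    {
        Pose newPose = new Pose(transform.position, transform.rotation);
        WorldLockingManager.GetInstance().AttachmentPointManager
            .MoveAttachmentPoint(attachmentPoint, newPose);
        SavePose(gameObject.name, newPose); // my XML-writing helper (placeholder)
    }

    void OnDestroy()
    {
        if (attachmentPoint != null)
        {
            WorldLockingManager.GetInstance().AttachmentPointManager
                .ReleaseAttachmentPoint(attachmentPoint);
        }
    }

    private Pose LoadSavedPose(string id) { /* read from local XML file */ return Pose.identity; }
    private void SavePose(string id, Pose pose) { /* write to local XML file */ }
}
```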

 

Although this solution seems to work (i.e., when I launch the application again, objects are where I previously positioned them), there are a couple of open points that I can't figure out and that are not exhaustively explained in the documentation.

 

1. What is the point of creating an attachment point at runtime if there is no link between it and its associated game object?

2. I understand there are different coordinate spaces (https://docs.microsoft.com/en-us/mixed-reality/world-locking-tools/documentation/concepts/advanced/c...). Do I need to adjust (during Update) the positions of my objects according to the other coordinate spaces (Frozen, Locked)? If so, how?

3. Am I in charge of serializing and deserializing each object's position and rotation to keep their poses persistent across sessions?

4. How can I share this serialized information with other devices so that they all load the objects in the same position/rotation? I understand Azure Spatial Anchors exist for this; however, I need to share the objects' real-world positioning without internet access (I only have a local wireless network).
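For reference, the per-object record I serialize (and would like to share over the local network) is essentially just a pose in Unity world space; the type and field names below are illustrative, and my question is precisely whether these coordinates are meaningful on another device or must first pass through some shared reference frame:

```csharp
using System;
using UnityEngine;

// Illustrative record of what I persist per movable object in the XML file.
[Serializable]
public class MovableObjectRecord
{
    public string id;           // unique name of the movable object
    public Vector3 position;    // world-space position at last save
    public Quaternion rotation; // world-space rotation at last save
}
```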

 

I hope someone can help me; I am happy to provide further details and code if necessary.

 

Thanks a lot!

1 Reply
Dude, I suggest you post your question on the World Locking Tools GitHub issues page; only a few people come around here.