OpenXR
2 Topics

How to unproject a 2D point in a captured image back to its accurate location in the physical world
Hi all,

I'm creating HoloLens 2 software that detects corners in an image and then shows their actual location in the world. However, there are slight deviations between the projected points and their actual locations, and the deviations grow with distance from the user, as the following figure shows.

Here's my code snippet for projecting the detected corners: I convert the corners from image coordinates to screen coordinates, then to NDC for unprojection. Next, I use the inverse of the projection matrix to convert the points from NDC to camera space, and then from camera space to world space using the camera-to-world matrix. Finally, I instantiate the detected points using a Ray and RaycastHit.

For getting the cameraToWorldMatrix and projectionMatrix, I referred to https://github.com/cviss-lab/XRIV/blob/main/XRIV/Assets/LocatableCamera/Scripts/LocatableCamera.cs

My guess is that either the depth information is not estimated properly, or the values of cameraToWorldMatrix and projectionMatrix are incorrect. However, I have investigated both of these possibilities without success. Could anyone give me some insight into how to mitigate this projection deviation? I'd greatly appreciate any help!
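For reference, the pipeline described in the question (pixel → NDC → camera space → world space, then casting a ray) can be sketched in plain Python. This is a minimal illustration of the math only, not the asker's actual code: the function names are made up, the matrices are assumed to be 4x4 row-major, and it assumes a symmetric perspective projection (so applying the inverse projection only needs the two diagonal focal terms). In Unity this would use Matrix4x4 and Vector3 instead.

```python
import math

def mat_mul_point(m, p):
    """Multiply a 4x4 row-major matrix by a 3D point (w = 1)."""
    x, y, z = p
    return tuple(m[i][0] * x + m[i][1] * y + m[i][2] * z + m[i][3]
                 for i in range(3))

def unproject_pixel(u, v, width, height, proj, cam_to_world):
    """Turn a pixel (u, v) in the captured image into a world-space ray.

    Because the pixel alone carries no depth, the result is a ray
    (origin, direction) in world space, not a single point; the actual
    corner lies somewhere along it.
    """
    # Pixel -> NDC: x right, y up, both in [-1, 1].
    x_ndc = 2.0 * u / width - 1.0
    y_ndc = 1.0 - 2.0 * v / height      # image rows grow downward

    # NDC -> camera space direction; the camera looks down -Z.
    dir_cam = (x_ndc / proj[0][0], y_ndc / proj[1][1], -1.0)

    # Camera space -> world space for both the ray origin and direction.
    origin = mat_mul_point(cam_to_world, (0.0, 0.0, 0.0))
    target = mat_mul_point(cam_to_world, dir_cam)
    d = tuple(t - o for t, o in zip(target, origin))
    length = math.sqrt(sum(c * c for c in d))
    direction = tuple(c / length for c in d)
    return origin, direction
```

With an identity camera-to-world matrix, the center pixel of the image unprojects to a ray from the origin straight down the camera's forward (-Z) axis, which is a quick sanity check for the matrix conventions.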
sk_gpu.h - MR cross platform rendering API

I recently made some cool milestones with the cross-platform renderer I've been working on for StereoKit, so I figured I'd share a bit about it on here!

So first! sk_gpu.h is a single-header, cross-platform graphics library focused on Mixed Reality rendering. It's somewhat low level, but it's definitely meant to be much easier than directly working with Vulkan or D3D12! It uses real HLSL for the shaders (I love HLSL!) and the goal is that it'll work on any platform that also has a Mixed Reality headset 🙂

The exciting part is that as of last week, I've already replaced StereoKit's old D3D11 renderer with sk_gpu in a branch, and it works great! StereoKit itself still only works with OpenXR on Windows/HoloLens platforms, but sk_gpu.h by itself works with OpenXR on Windows, HoloLens, and Oculus Quest! It even works on the flat web too; while I'd love to get to WebXR at some point, there are still some obstacles there.

This is just the test scene I've been working with, but it demonstrates dynamic meshes, render to texture, cubemaps, instancing, and a few other things!

But a lot of people ask me, why did I write this? Why not use something else? There's excellent tech out there already, like BGFX, and the amazing sokol_gfx! Even webgpu native could work if it wasn't so early. But here's my reasoning why:

- I really wanted to use real HLSL. Not GLSL, or a pseudo shader language!
- Tight focus on Mixed Reality rendering allows for simplification and hardware assumptions.
- Smaller code base and more agility to respond to new MR graphics features!
- I uh... couldn't figure out how to build any of the other libraries 😞
- ...and I really enjoy writing graphics code!

There's still plenty more to do with sk_gpu.h: it definitely needs some cleanup, and I still need to figure out the best way to render in stereo on Quest! I'd also love to add some higher-level things to it, like simple image and mesh loading, to make it easier for people to try.
Maybe even a Dear ImGui integration. Anyhow, I'd love to know what people think of it, or what everyone would like to see from a library like this 🙂 The source is MIT-licensed up on GitHub! And definitely check out StereoKit too if this tickles your fancy!