Guest Blog by UCL/Microsoft project team on Mixed Reality within HealthCare with UCLH
Hololens Mixed Reality team at UCL Computer Science
Tim Kuzhagaliyev - https://github.com/TimboKZ
Laura Foody - https://twitter.com/lbf_l
Fraser Savage - https://github.com/savage-engineer/
Hello there. We are a group of 2nd year computer scientists from University College London (UCL) working on applications of Microsoft HoloLens in medicine. At the moment virtual and augmented reality development is mainly driven by the personal entertainment industry — but people have begun to look at other applications in fields like architecture, robotics, and of course medicine.
In our case, we are very interested in the value that augmented reality can add to routine medical imaging, particularly when preparing for complicated surgery. Most medical scans are displayed as 2-dimensional images, which are not very good at revealing the complexity of 3-dimensional anatomy.
Our project, PEACH Reality, aims to aid surgeons and other medical specialists in visualising surgery before, during and after the operation. We accomplish this by converting CT scans into 3D models and displaying them as holograms for the user to interact with.
My name is Tim and I was the leader of our team, focusing on HoloLens development. Before we dive into the details of the project, I'd like to talk about the resources we used and give some useful advice to beginner HoloLens developers.
Mixed Reality Academy should be the first step of your journey into the world of HoloLens development. It will give you a basic understanding of the features HoloLens offers to you and your users, as well as some common implementation strategies. Unfortunately, the tutorials don't really go over the code; the first half is more about understanding how to set up a HoloLens project in Unity and Visual Studio, how to build your project, how to deploy it to a HoloLens or the emulator, and so on. The second half is copying and pasting various snippets of pre-written code and seeing the whole HoloLens development pipeline in action.
I'd recommend glancing over the code in the tutorials and trying to understand the basic mechanisms and libraries involved in making certain things happen, like how information about the user's hand movements is used in Holograms 211, or how Gaze can be used to interact with objects in Holograms 210.
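To give a feel for what those mechanisms look like in code, here is a minimal sketch of gaze-plus-air-tap interaction in the style of the HoloToolkit-era input module (the interface and namespace names follow that version of the toolkit and may differ in newer releases):

```csharp
using HoloToolkit.Unity.InputModule;
using UnityEngine;

// Attach to any GameObject with a collider. The object highlights while
// the user gazes at it, and changes colour when they air-tap on it.
public class TapToHighlight : MonoBehaviour, IInputClickHandler, IFocusable
{
    private Material material;

    private void Start()
    {
        material = GetComponent<Renderer>().material;
    }

    // Called when the gaze cursor enters/leaves this object.
    public void OnFocusEnter() { material.color = Color.yellow; }
    public void OnFocusExit()  { material.color = Color.white; }

    // Called when the user air-taps while gazing at this object.
    public void OnInputClicked(InputClickedEventData eventData)
    {
        material.color = Color.red;
    }
}
```

The key idea, which the tutorials demonstrate at length, is that gaze and gesture events arrive through interfaces your components implement, rather than through polling.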
Make sure not to spend too much time learning the code snippets in the tutorials by heart, because some parts might be outdated, and the specifics of the implementation could have changed completely since the tutorials were recorded. More about this in the next section.
Definitely read more about MixedRealityToolkit-Unity on its GitHub page, and browse the code and examples in the repo if you have the time. If you've done the tutorials, you will have encountered the standalone MixedRealityToolkit in some shape or form. MixedRealityToolkit-Unity is pretty much the same thing, but optimised to play nicely with the Unity game engine. Keep in mind that the version used in the tutorials is most likely outdated, so I suggest cloning the repository from its GitHub page directly. As mentioned before, scripts and their interfaces might differ from those used in the tutorials, so copying and pasting tutorial code into a Unity project with the most recent version of MixedRealityToolkit-Unity is probably not the best idea.
It's a collection of useful scripts that cover pretty much all of the basic needs of a HoloLens app developer, such as billboard/tagalong components, spatial understanding, hologram sharing and more. I wasn't able to find much documentation for it, so for me using it was largely a trial-and-error process, but you might have more luck than I did.
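As an example of how little code those components need, here is a sketch of the billboard/tagalong pattern, assuming the HoloToolkit-era `Billboard` and `Tagalong` scripts (component and property names may vary between toolkit versions):

```csharp
using HoloToolkit.Unity;
using UnityEngine;

// Example: make an info panel follow the user around and always face them.
public class FollowPanelSetup : MonoBehaviour
{
    private void Start()
    {
        // Billboard keeps the object rotated towards the camera.
        var billboard = gameObject.AddComponent<Billboard>();
        billboard.PivotAxis = PivotAxis.Y; // only rotate around the vertical axis

        // Tagalong keeps the object within the user's view frustum,
        // lazily catching up as they move their head.
        gameObject.AddComponent<Tagalong>();
    }
}
```

In practice you would usually just drag these components onto the object in the Unity editor; adding them from code is handy when panels are created at runtime.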
Have a look at this short overview of setting up a Unity project for HoloLens development. The section you're interested in is "1. Initial set up of the unity project"; the rest might be a bit too specific to apply to you. Again, the info there can be outdated, but it should provide some basic guidance.
One of the most important things to remember is that the HoloLens app you're developing in Unity is first and foremost a Unity app. Most of the challenges you'll experience during the development process are common Unity issues, so make sure to scavenge the Unity forums for solutions before blaming HoloLens. While it is true that you'll most likely be developing some MR-specific functionality, Microsoft did an amazing job seamlessly integrating a lot of its APIs into Unity's game engine, so at some points you won't even notice the difference between developing a HoloLens app and a generic Unity game.
Choosing the right Unity version for the job is a completely different story. First of all, if you're working in a group, you should all use the same version of Unity, or you'll have to spend ages re-importing and re-configuring your projects, if they load at all. Also, do not update to the most recent version of Unity unless it has a feature crucial for your development process or you know what you're doing. It can break your app, it will mean every developer in your team has to update, and, worst case scenario, HoloLens development might not even be supported on that new version, so you'll have to go back to an older one. To make sure you're using the right tools, you'll have to spend some time reading the docs in the MR Academy and on various GitHub pages. It might seem annoying, but it is definitely worth it in the end.
Since you'll be working with Microsoft technology (duh), Visual Studio (VS) is your best bet when it comes to choosing an IDE. Unity is also nicely integrated with VS, so this choice is really a no-brainer. You might want to install ReSharper, an amazing extension for VS that gives it some of the superpowers of IntelliJ IDEA. If you're a student, you can get a free license for a year and extend it in the future if needed.
I strongly suggest you read Microsoft's Performance recommendations for Unity before you start working on your project. It has some very useful tips on application architecture that are quite hard to implement retrospectively.
To save yourself some time developing the UI for your app, you can use the existing UnityUI framework. To make it work with HoloLens, you'll have to make some adjustments, which can be found in this post on the Unity forums.
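The gist of those adjustments is usually to take the Canvas out of screen space and position it in the world. A rough sketch (the distance and scale values are illustrative; tune them for your scene):

```csharp
using UnityEngine;

// Converts a UnityUI Canvas for use as a hologram: on HoloLens there is
// no 2D screen to overlay, so the Canvas must live in world space.
public class HolographicCanvasSetup : MonoBehaviour
{
    private void Start()
    {
        var canvas = GetComponent<Canvas>();
        canvas.renderMode = RenderMode.WorldSpace;

        // Place the canvas 2 m in front of the user (1 Unity unit = 1 metre).
        var cam = Camera.main.transform;
        transform.position = cam.position + cam.forward * 2f;

        // Canvases are authored in pixel-sized units, so shrink them down.
        transform.localScale = Vector3.one * 0.001f;
    }
}
```

The forum post linked above covers further details such as making UnityUI elements respond to gaze and air-tap instead of mouse input.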
Another thing that's worth mentioning: if you're a seasoned developer, remember that Unity isn't your only option. HoloLens runs the Universal Windows Platform (UWP), which opens up a bunch of different approaches and programming languages you can use for HoloLens development. This is especially relevant if you're interested in the computer vision aspect of HoloLens development. Microsoft recently released HoloLens for Computer Vision, a collection of example projects that work with the headset's sensors directly (HoloLens has quite a few of them).
In this section I'll talk about PEACH Reality as a whole, as well as the challenges we encountered during the development process.
Viewing raw CT scans: initial experiments
The core challenge has been to come up with a seamless pipeline from raw CT scan data to holographic patient cases, wrapped in an intuitive user interface, and there are many smaller challenges along the way!
We are working with the Translational Imaging Group of UCL, which kindly provided us with a deep learning system for extracting 3D meshes of body structures from CT scans. The time taken to do this extraction is constantly improving as the project develops. The PEACH Reality project wraps this system in a set of tools that make up a platform for studying and sharing holographic patient cases.
One of our key aims is to make the user experience as straightforward and unambiguous as possible. To achieve this, we’re designing a web app, an API and a Microsoft HoloLens application, which provide the user with means of completing different steps in the process of creating a holographic patient case. We hope to cater for users of varying levels of computer literacy by designing user-friendly interfaces and automating most of the non-essential tasks.
The web app is the entry point to our system. It allows users to view, create and modify holographic patient cases, as well as invite their colleagues to collaborate on particular cases. It communicates with the API, which does all of the heavy lifting: it handles security, processes uploaded files, extracts body structures as 3D models and optimises those models to improve performance.
Finally, holographic patient cases can be opened in our HoloLens application. There, users can view and interact with models and raw CT images in a mixed reality setting, annotating parts of them by attaching voice recordings or text notes. The changes can later be uploaded back through the API and exported using the web app.
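To give a feel for that annotation round-trip, here is a hypothetical sketch of how a note could be sent back from the headset; the endpoint URL, class and field names below are illustrative only, not our actual API:

```csharp
using System.Collections;
using System.Text;
using UnityEngine;
using UnityEngine.Networking;

// Hypothetical annotation payload pinned to a point on a model.
[System.Serializable]
public class Annotation
{
    public string caseId;
    public Vector3 anchorPoint; // where the note is attached on the mesh
    public string textNote;
}

public class AnnotationUploader : MonoBehaviour
{
    // Illustrative endpoint only; the real API path will differ.
    private const string ApiUrl = "https://example.com/api/annotations";

    // Run as a coroutine so the upload doesn't block the render loop.
    public IEnumerator Upload(Annotation annotation)
    {
        string json = JsonUtility.ToJson(annotation);
        using (var request = new UnityWebRequest(ApiUrl, UnityWebRequest.kHttpVerbPOST))
        {
            request.uploadHandler = new UploadHandlerRaw(Encoding.UTF8.GetBytes(json));
            request.downloadHandler = new DownloadHandlerBuffer();
            request.SetRequestHeader("Content-Type", "application/json");
            yield return request.SendWebRequest();
        }
    }
}
```

Voice recordings follow the same pattern, just with a binary upload instead of JSON.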
Exploding the 3D model: initial experiments
We continue to grow the tools and the PEACH Reality repository, and to develop further whitepaper specifications to help future developers understand the challenges we encountered and the solutions we came up with.