I consider myself extremely lucky: part of the future, special, and in the right place at the right time.
As part of attending the Microsoft Build conference, I got a chance to attend the Holographic Academy. I was one of the first few hundred developers to try developing an app for HoloLens. It was four and a half quality hours in a hotel hall with a big Microsoft team and magic all around. No cameras or recording devices were allowed, so I don't have photos to share. There is still a lot of secrecy and many unanswered questions, but what a spectacular unveiling of the technology of the future!
Somebody came around with an optician's tool to measure something about my eyes. I don't fully understand it, but I believe it measures the distance between the pupils. My number was 60, and I think it was the smallest among the people around me, go figure.
HoloLens customization and deployment can be done wirelessly, but our network was unreliable, so we had to connect the HoloLens to the computer through a USB cable. As a first step, I needed to access the device through the browser at an IP address and port number. That page is where you plug in your number to customize your HoloLens.
The Development Tools
The tools you need are Unity for the 3D work (https://unity3d.com/) and Visual Studio 2015 to compile and deploy the code.
I opened an existing application, built it in Unity, and compiled and deployed it from Visual Studio. While it compiled, I had to look at the spot where I expected to see my hologram. Once the deployment finished, I disconnected the cable and I was free to move around with my hologram: a small toy race car. With a small tap in the air, I was able to move it around.
The HoloLens has spatial recognition: it sees the space around you, so my car would fall off the edge of the coffee table and stop on impact with my backpack.
Then I started a new project in Unity. Microsoft provided us with assets to use for the app. There are three main components/sensors you code for: gaze, tap, and voice commands! We added C# scripts for gaze, tap, and voice recognition and attached them to objects.
A little ring cursor points at an object when your gaze reaches it.
A little tap with the index finger in the air triggers a command.
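As a rough sketch of how such a script could look (this is my reconstruction, not the code from the lab: the object names are hypothetical, and it assumes the Unity 5.x-era `Physics.Raycast` for gaze and `GestureRecognizer` from `UnityEngine.VR.WSA.Input` for the air tap):

```csharp
using UnityEngine;
using UnityEngine.VR.WSA.Input; // Unity 5.x namespace for HoloLens gestures

// Hypothetical sketch: cast a ray from the camera (the user's gaze),
// park the ring cursor on whatever it hits, and listen for the air tap.
public class GazeAndTap : MonoBehaviour
{
    public GameObject cursor;            // the little ring cursor
    private GestureRecognizer recognizer;
    private GameObject focusedObject;    // the object the gaze is resting on

    void Start()
    {
        recognizer = new GestureRecognizer();
        recognizer.TappedEvent += (source, tapCount, headRay) =>
        {
            // Forward the air tap to whatever object the gaze is on.
            if (focusedObject != null)
                focusedObject.SendMessage("OnSelect",
                    SendMessageOptions.DontRequireReceiver);
        };
        recognizer.StartCapturingGestures();
    }

    void Update()
    {
        // Gaze = a ray from the head position along the view direction.
        RaycastHit hit;
        if (Physics.Raycast(Camera.main.transform.position,
                            Camera.main.transform.forward, out hit))
        {
            focusedObject = hit.collider.gameObject;
            cursor.transform.position = hit.point; // snap the ring to the surface
            cursor.transform.rotation = Quaternion.LookRotation(hit.normal);
        }
        else
        {
            focusedObject = null;
        }
    }

    void OnDestroy()
    {
        recognizer.StopCapturingGestures();
        recognizer.Dispose();
    }
}
```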
I was able to implement my own commands in code. I was impressed by how good the voice recognition was; it was easy to implement my commands to reset and drop the object. I think the HoloLens actually interprets my accent better than the people around me do.
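Voice commands like these can be sketched with Unity's `KeywordRecognizer` (in `UnityEngine.Windows.Speech`). Again, this is only an illustration under my assumptions; the phrases "Reset world" and "Drop ball" are examples, not the exact commands from the lab:

```csharp
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.Windows.Speech; // Unity's speech namespace for Windows/HoloLens

// Hypothetical sketch: map spoken keywords to actions and broadcast a
// message to this object and its children when a phrase is recognized.
public class VoiceCommands : MonoBehaviour
{
    private KeywordRecognizer recognizer;
    private readonly Dictionary<string, System.Action> commands =
        new Dictionary<string, System.Action>();

    void Start()
    {
        // Example phrases; each maps to a message other scripts can handle.
        commands.Add("Reset world",
            () => BroadcastMessage("OnReset", SendMessageOptions.DontRequireReceiver));
        commands.Add("Drop ball",
            () => BroadcastMessage("OnDrop", SendMessageOptions.DontRequireReceiver));

        recognizer = new KeywordRecognizer(
            new List<string>(commands.Keys).ToArray());
        recognizer.OnPhraseRecognized += args =>
        {
            System.Action action;
            if (commands.TryGetValue(args.text, out action))
                action();
        };
        recognizer.Start();
    }

    void OnDestroy()
    {
        recognizer.Stop();
        recognizer.Dispose();
    }
}
```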
With the provided assets and code, my app had a small board with objects on it. Two balls floated in the air, and I could drop them with a tap or a voice command. When the first ball dropped, the board would break open and expose the underground world. Then I would place my gaze on the second ball, tap or give a voice command, and the ball would drop into the underworld, where I could follow it with my eyes.
I could give my HoloLens to somebody else and they would experience my world. We were not able to interact with each other in one shared augmented reality; that functionality will come in the future.
When I saw the press-release video from Microsoft in January, and again when I saw the demo at the //Build keynote, my reaction was: no idea how this can be real. The hands-on experience with the HoloLens and the SDK surpassed my expectations by far! The augmented reality is very real, the HoloLens is comfortable, and writing an app for it was not as hard as it sounds. I probably cannot explain the extent of my excitement and fascination. I now have dreams about how to use the HoloLens and how to build apps for it. And after dreams and vision, it's time to roll up my sleeves and learn Unity…