My thoughts on developing with Magic Leap One for a week

As many of you asked, my Magic Leap review after spending a week with it:

I genuinely love it actually.
Compared to ‘that other AR headset’ I find it more refined in most areas:

Wearing it is very comfortable as it’s so light and well balanced.

Fitting was not as delicate as I expected; you can go through a proper fitting/calibration routine, but I generally don’t do this for demos.

It comes with 2 forehead pieces and 5 nose pieces, and you can do eye calibration and adjust the waveguide planes during calibration.
Maybe you can fit (small) glasses underneath but that’s probably not a good idea (you’d risk scratching the optics).

The increased FoV is welcome and noticeable (especially vertically) and the slightly blocked-off sides actually help.

Visual quality is much improved, with no more RGB separation (due to the more modern, much flatter waveguides).

Build/deploy cycles are very fast (there is no UWP overhead for example).

“Zero Iteration Mode” allows you to have a complete live Unity scene and access to the profiler in realtime. Haven’t checked if this is available for UE4 yet.

The controller is very nice, navigating the menus works much better than using hand gestures, you always feel like you’re in control.

On a tech level there are lots of welcome features:

More hand gestures and true hand/finger points.
World meshing is better and faster.
Plane detection/extraction (from the world mesh) is very fast and easy.
Raycasting using the controller, head direction and eye direction is great; using the controller feels very intuitive and more precise than just the head direction (a minimal sketch follows below this list).
Eye tracking works well (even though I wear contact lenses) and opens up new doors for interaction/storytelling.
Integrated image recognition/tracking means no need for (slow) Vuforia anymore.
The increased compute and GPU power is very, very welcome; it feels like we can actually do things now and not be constantly restricted.
Using UE4 is supported (although the samples aren’t as comprehensive as Unity).
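
To make the raycasting point above a bit more concrete, here's a minimal Unity-style sketch. This is my own illustration, not the Magic Leap SDK API: I'm assuming the platform hands you a Transform with the controller's (or head's, or eye gaze's) pose, and the rest is plain Physics.Raycast against the scene/world mesh.

    using UnityEngine;

    // Illustrative only: raycast from whichever pose you want to point with.
    // "pointerSource" is an assumed placeholder Transform; in practice you'd drive it
    // from the platform SDK (controller pose) or simply assign the head camera.
    public class PointerRaycaster : MonoBehaviour
    {
        public Transform pointerSource;
        public float maxDistance = 10f;

        void Update()
        {
            if (pointerSource == null)
                return;

            Ray ray = new Ray(pointerSource.position, pointerSource.forward);
            RaycastHit hit;
            if (Physics.Raycast(ray, out hit, maxDistance))
            {
                Debug.Log("Pointing at " + hit.collider.name + " at " + hit.distance + " m");
            }
        }
    }

Switching between controller, head and eye pointing is then just a matter of which Transform you feed it, which also makes the precision difference between them easy to compare.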

Tracking is good but not perfect.

In comparison, HoloLens has slightly better tracking, but on that platform, if an app falls below 60 fps (and many do, due to the lack of CPU/GPU power), tracking quickly gets wobbly.

Magic Leap (the company) seems very responsive and helpful on social media and on their forums.

There are many small OS/driver updates instead of a few large ones per year (at least so far in the pre 1.0 stage).

There is actually a warranty in case there is a hardware problem (this should be a no-brainer, but some other companies refuse warranty on developer kits).

The SDK/examples are clean and good (documentation is excellent), though not as comprehensive as the massive MRTK community effort.

At the moment live-streaming is still missing, which is a bit tricky when doing demos (it should be coming soon though).

At the moment there is no built in multi-device sharing (I hear that is coming as well and of course you can roll your own).

There’s no carrying case; the box comes with a handle. I’m using a Pelican-style case (well, different brand, same idea).

I can literally develop all day; with the powered USB hub it keeps its charge while connected and in use.

It’s also comfortable enough to wear for hours on end while using Unity / Visual Studio.

 

Bottom line: the Magic Leap One is cheaper, offers more features, and is mostly of higher quality for the price.

 

 

And before you ask: don’t get me wrong, I still love my HoloLens 1 and hope Microsoft will soon let us know more about v2. The AR headset wars have finally begun! 🙂

Saturday, September 29th, 2018 Magic Leap, Ranting, XR No Comments

VRLA/FMX talk available online

As some of you may know my VRLA talk video footage got lost….
But thanks to someone from the excellent video crew going out of his way to track it down (in his precious spare time, no less!) it is now finally available! 😁

Friday, July 20th, 2018 Talks, XR No Comments

Kinect v4 (aka Kinect for Azure) all the details we know so far

Dedicated page moved to here: https://brekel.com/kinect4

Wednesday, May 9th, 2018 Kinect No Comments

Talks in spring 2018 at FMX and VRLA

It’s official: I will be speaking at

FMX: April 24 – 27 in Stuttgart, Germany
https://fmx.de

 
And
 

VRLA: May 4 – 5 in Los Angeles, United States
http://virtualrealityla.com

 
Early bird tickets are already available for both conferences at their respective websites.
 
 
Topic will be “Ahh Screw It…Let’s Use Depth Sensors and VR/AR Equipment in Production”

 

The main vibe of this talk will be inspirational, showing real-world examples of using low-cost hardware in 3D animation, visual effects and game production.

In the first half we'll explore the use of consumer depth sensors (like Kinect, Orbbec & RealSense).
Topics include pointclouds (volumetric video) and motion capture.

In the second half we'll look at recording Vive/Rift tracking data and see how it compares to some higher-end motion capture equipment,
as well as how a HoloLens can be used on a high-end mocap shoot to help visualize realtime 3D characters on set.

Oh and there may be some 80s & 90s animated gifs :)
Thursday, February 1st, 2018 Highend Mocap, HoloLens, Kinect, Talks, Tools, XR 2 Comments

Collaborative VR experiment with multiple Valve/HTC Vive headsets

Recently we had the opportunity to play with two Valve/HTC Vive headsets and controllers so we opted to experiment a bit for a few days.

Since I strongly believe the Virtual Reality revolution needs a social element in order to become more than just a hype, we implemented the following things to play with:
– multi user collaborative experience
– interaction with virtual objects using Vive’s hand controllers
– implemented using two (networked) machines running Unity (a minimal sketch of the pose-syncing idea follows below this list)
– built it so it works when users share the same physical space (like in this example)
– but also works when users are in different places with access to a (5 Mbps) internet connection
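
To give a flavour of the multi-user part, here's a minimal sketch of the pose-syncing idea using Unity's built-in (UNET) networking of that era. This is my illustration of the concept, not the actual project code; the pointcloud streaming is a separate, much heavier data path.

    using UnityEngine;
    using UnityEngine.Networking;

    // Illustrative sketch: each user has an avatar object with this behaviour.
    // The local user pushes their tracked HMD pose to the server, which relays
    // it to the other participant(s) via the SyncVars.
    public class HeadPoseSync : NetworkBehaviour
    {
        [SyncVar] private Vector3 headPosition;
        [SyncVar] private Quaternion headRotation;

        public Transform localHead; // assumed: the tracked camera/HMD transform

        void Update()
        {
            if (isLocalPlayer)
            {
                CmdSetPose(localHead.position, localHead.rotation);
            }
            else
            {
                // Apply the remote user's latest pose to their avatar.
                transform.position = headPosition;
                transform.rotation = headRotation;
            }
        }

        [Command]
        void CmdSetPose(Vector3 pos, Quaternion rot)
        {
            headPosition = pos;
            headRotation = rot;
        }
    }

The same pattern works for the Vive hand controllers; since only a handful of transforms travel over the network, this part easily fits within a 5 Mbps connection.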

R&D project by:
Jasper Brekelmans - brekel.com (pointcloud streaming & Unity integration)
Jeroen de Mooij - thefirstfloor.nl (Unity networking & scene design)

Thanks to:
Adriaan Rijkens - VRheroes.nl
Marald Bes - AnyMotion.nl
Erwin Kho - Zerbamine.nl

 

Tuesday, January 12th, 2016 Ranting, XR 2 Comments

My HoloLens experience and technical insights

A little while ago in my role as Microsoft Emerging Experiences MVP I was given the opportunity to experience the Microsoft HoloLens during a Holo Academy session in Redmond.
We attended with a group of tech-savvy MVPs, so naturally we started analyzing the device and experience with that mindset.
Photo and recording devices were not allowed, so you’ll have to make do with text and this picture 🙂
[Photo: Holographic Academy session]
Since others have posted reports on their HoloLens experiences (like here & here), I thought I’d try to report some views that focus more on the technical aspects.
What follows are my personal observations, opinions, speculations and views on the hardware that was presented to us.
These may or may not describe the actual hardware that will be available at some point in the future.
They are in no way official specs, but nevertheless may be interesting to others, so here goes.

If you don’t know what a HoloLens is you may want to look here.

 

The device itself:

  • The device is completely tetherless, all computing and batteries are embedded in the helmet itself with no cable(s) connected to a computer.
  • All of the device’s weight rests on a band that fits snugly around the head; none of it rests on the nose, making it more comfortable to wear than current-gen VR headsets for example.
  • The lenses can not only be adjusted up and down but can also freely slide towards and away from the face; group members wearing glasses could easily keep them on while wearing a HoloLens.

 

The projection/display:

  • This is not using standard LCD or OLED displays but something called waveguide technology, more on that here.
  • By extending our arms forward and spreading thumb/pinky we estimated the Field of View to be slightly less than 40 degrees (horizontally).
  • Although it has been reported that at some events the interpupillary distance (IPD) of the user had to be measured, in our case we weren’t measured.
    I’m not sure if the device is now capable of automatically measuring/adjusting this, or whether an average IPD setting was used for our demos.
  • Virtual objects felt mostly opaque; only when the real-world background was very bright could you see the projection being slightly transparent.
  • Resolution looked pretty high; its limits only became apparent when objects were very, very small in view.
  • We couldn’t check frame rate but things appeared very smooth.

 

How does the limited Field of View feel in practice:

  • Since you can see through the glasses of the HoloLens and your peripheral view is not blocked you never get the claustrophobic feel that a limited FoV in VR can give you.
  • Only experiences where virtual objects are clipped by the edges of the FoV will make you aware of it.
  • Since virtual objects are truly anchored to the real world this seems to trigger something special in your brain.
    Even if objects are clipped by the FoV this doesn’t seem too bothersome since you have a natural tendency to move around them and step backwards if needed.
    Your brain perceives it as a ghostly appearance that is maybe not real but is truly there in the real world, even if it sometimes is partly clipped.
    I find it hard to describe and, to be honest, my expectations going into the demo were skeptical.

 

Additional hardware components:

  • We noticed 2 lenses on each side, placed above the temples, pointed slightly forward and backward; they appear to be miniaturized depth sensors (similar to the Kinect) used for tracking purposes.
    Note that during stage demos Microsoft used a Kinect v2 strapped to a witness camera to show external footage of the mixed reality experience, which confirms the use of depth sensor(s) for tracking.
  • We noticed 2 sensors on the front, in between the eyebrows.
    Most probably a color camera as the photo app was reported to be taking snapshots from this sensor location.
    Possibly also an ambient light sensor although we couldn’t confirm this.

 

Tracking:

  • The device tracks full positional and rotational information.
  • Tracking was impressively robust.
  • There appears to be hardly any drift, even after walking away 6 meters and coming back to the same spot and walking around for several minutes, virtual objects stayed anchored to the same spot in the real world as you would expect from a real object.
  • We expect it to be based on something similar to Kinect Fusion, where it continuously builds a rough 3D model of the surroundings to track the head transformations.
  • In our demos the environment was scanned upon initialization (depicted by a cool shading effect for a few seconds).
  • Other demos are known to work with pre-scanned environments where things can already be placed within this known environment.
  • Tracking could be broken intentionally by occluding 3 or 4 of the depth sensors.
  • When tracking is lost it seems the IMU (accelerometer, gyro, magnetometer) takes over to still provide rotational data.
  • The IMU is probably also used for high-speed (rotational) tracking in between depth frames (a generic sketch of this idea follows below this list).
  • Occasionally with big changes/occlusions in surroundings or after lost tracking it would be slightly jittery for a second, after that it would be rock solid again and remain so.
  • The “Project XRay demo” (which unfortunately wasn’t available for us to try) also tracks the forearm and hand to attach a virtual gun so quite possibly the Kinect’s body tracking functionality can be used.
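
Purely to illustrate the speculation in the last two bullets (and nothing more than that), here's a generic sketch of how an IMU can carry orientation between slower vision/depth updates, the classic complementary-filter idea; this says nothing about what the HoloLens actually does internally.

    using UnityEngine;

    // Generic orientation-fusion sketch: integrate the gyro every IMU sample for
    // low-latency rotation, and nudge towards the vision/depth tracker's pose
    // whenever one arrives (or keep running on gyro alone when tracking is lost).
    public class OrientationFusionSketch
    {
        public Quaternion orientation = Quaternion.identity;
        public float correctionWeight = 0.05f; // how hard vision corrects gyro drift

        // Called at IMU rate; gyroRate in radians/second, dt in seconds.
        public void OnGyroSample(Vector3 gyroRate, float dt)
        {
            Vector3 deltaDegrees = gyroRate * Mathf.Rad2Deg * dt;
            orientation = orientation * Quaternion.Euler(deltaDegrees);
        }

        // Called whenever the depth/vision tracker delivers a fresh orientation.
        public void OnVisionPose(Quaternion visionOrientation)
        {
            orientation = Quaternion.Slerp(orientation, visionOrientation, correctionWeight);
        }
    }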

 

Occluding virtual/real world:

  • It appears the depth sensors build a 3D mesh of the surroundings upon initialization, and continuously add to this when new areas become visible (for example when looking around).
  • This 3D mesh is used for occlusion; for example if a person is present during this initialization, virtual objects can be placed in front of or behind the person (a minimal sketch of the occluder idea follows below this list).
    If the person moves this would break the illusion as the occluder would still remain the same as initialized.
  • It was unclear if this occluder geometry is updated over time when running the demo for longer periods.
    I believe the Kinect Fusion algorithms can adjust for objects that have disappeared after initialization.
    Note that algorithm settings may have been primarily tuned for lower computational needs and battery consumption and this may even vary between applications.
  • With 5 people in a living-room-sized area (and 5 more of those groups in the bigger open-space room) I’m still amazed tracking was so robust.
    I can only imagine what the full room would look like in InfraRed, knowing how much active IR projection a depth sensor does.
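
To illustrate the occluder idea above in Unity terms: the scanned environment mesh simply gets a material that writes depth but renders no color, so real-world geometry hides virtual objects behind it. A minimal sketch, with the depth-only material as an assumed asset (not something taken from the demos):

    using UnityEngine;

    // Illustrative only: turn scanned environment mesh chunks into occluders by
    // giving them a material whose shader writes to the depth buffer but renders
    // no color (e.g. "ColorMask 0" with "ZWrite On").
    public class SpatialMeshOccluder : MonoBehaviour
    {
        public Material occlusionMaterial; // assumed depth-only material asset

        // Call this for every mesh chunk the spatial mapping system produces/updates.
        public void ApplyTo(GameObject meshChunk)
        {
            MeshRenderer meshRenderer = meshChunk.GetComponent<MeshRenderer>();
            if (meshRenderer != null)
            {
                meshRenderer.sharedMaterial = occlusionMaterial;
            }
        }
    }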

 

Hand controls:

  • The primary interaction with the device is through hand gestures.
  • I’m unsure if a controller can be paired with the device at this point.
  • The main gesture is an “Air Tap”: simply tapping your index finger and thumb together, which acts like a mouse click (a minimal sketch of this follows below the list).
  • Click and drag functionality exists.
  • Another gesture we used was making a fist with palm facing upwards and then opening the fingers, used to go back to the main menu.
  • There may be other gestures that simply weren’t used for our particular demo applications.
  • Gesture detection seemed to work in a wide variety of body poses; I deliberately tried triggering it with my hand extended to the front, across the body, to the side, etc., and detection kept working.
  • I did miss some finer control like buttons or thumbsticks; it may be interesting to try and pair a controller in the future (like the ones from HTC or Oculus, for example).
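
As a small aside on the "tap = click" point above: an air tap essentially triggers a raycast along your gaze and "clicks" whatever it hits. A minimal sketch follows; how the tap gets detected is platform-specific (a gesture API raising an event), so OnAirTap() and the "OnSelect" message are my own placeholder conventions, not the official API.

    using UnityEngine;

    // Illustrative only: an air tap is essentially "click whatever I'm gazing at".
    public class GazeClick : MonoBehaviour
    {
        public Camera headCamera;   // the user's head/gaze camera
        public float maxDistance = 5f;

        // Assumed to be hooked up to the platform's tap/select gesture event.
        public void OnAirTap()
        {
            Ray gaze = new Ray(headCamera.transform.position, headCamera.transform.forward);
            RaycastHit hit;
            if (Physics.Raycast(gaze, out hit, maxDistance))
            {
                // Treat the gazed-at object as "clicked".
                hit.collider.SendMessage("OnSelect", SendMessageOptions.DontRequireReceiver);
            }
        }
    }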

 

Voice control:

  • The device has built-in microphones and can be controlled by voice commands, just like the Xbox One console with Kinect for example.
  • Personally I didn’t try this in depth but heard from others this didn’t always work 100%, which may have been due to the noise in the room (with 30+ people talking).

 

Software:

  • As mentioned in earlier Microsoft presentations the device runs a version of Windows 10 (don’t expect a regular desktop though, as this doesn’t make sense).
  • There were several familiar 2D applications (like Photos and the Edge browser), these could be pinned anywhere in the world and would remain anchored to that spot.
  • One of the engineers loaded some of the demos onto my personal HoloLens device from a desktop machine over a USB cable; this took a few seconds to copy.
  • Many of the demos we saw appeared to have been built using the Unity game engine.
  • I got several confirmations that Unity is a strong focus of integration.
  • There may be other ways to deploy Universal Apps and/or scenes from other game engines like Unreal Engine 4 to the device, but I can’t confirm this.

 

Overall experience:

  • I was very impressed with the device and how well polished it felt for a pre-generation-1 product.
  • Even though it’s relatively big and heavier than a VR headset it felt much more comfortable to wear, mainly due to the weight distribution resting on the head and not at all on the nose.
  • There was no feeling of being nauseous or weary in any way, most probably since your peripheral vision always has the real world to anchor to.
  • Tracking was much more robust than what I expected from using Kinect Fusion in the past.
  • The limited FoV had much less of an effect than I expected as my brain seemed to interpret these “Holograms” differently than a VR experience.
  • I see this current generation being most useful for non-games applications, although I do see potential for new interesting game concepts when hardware improves.
  • This is a very social device since you can keep communicating with others in the room without restriction, and potentially with other networked HoloLens users over an internet connection.
  • There is a lot of very powerful hardware (remember this includes a fully portable computer) crammed into a small device so I don’t expect this to be cheap (for the near future), but then again neither is a VR headset with computer.
Sunday, January 10th, 2016 HoloLens 8 Comments

“He-Man vs Skeletor”

A short animation film by Hethfilms and Heiko Thies, captured with Brekel Pro Body v2.
These guys were clearly having fun!

Wednesday, September 16th, 2015 Kinect, Ranting No Comments

First Alpha of new Multi Sensor solution available for testing


After many months of work the first alpha version is available for testing (currently for v2 license holders only).

More info here:
https://brekel.com/multi-sensor

Thursday, August 27th, 2015 Kinect 5 Comments

Microsoft MVP (Most Valuable Professional)

Microsoft has awarded me the MVP (Most Valuable Professional) title in appreciation of my work with the Kinect v1.x and v2.x SDKs.
In practice this means I’ll be active in forums/social media/email and will answer your questions like I’ve always done 🙂

But it’s very nice to feel the appreciation from the cool folks at MS that gave the world these innovative tools in the first place!

https://mvp.microsoft.com/en-us/mvp/Jasper%20Brekelmans-5001356


 

 

Saturday, April 4th, 2015 Kinect, Ranting No Comments

ProBody v2 + ProFace v2 Bundle

Due to popular demand, a Bundle of ProBody2 and ProFace2 was added to the online shop for $239 (save $39).


Tuesday, March 24th, 2015 Kinect 9 Comments