Some info regarding Kinect/Mobu

Just an in between post to answer a few questions.
Most importantly, I’m looking at releasing something at the end of this week or next week for you all to play with.
Just check in on this page, or follow me on twitter.

My implementation is based on OpenNI and NITE by PrimeSense. It runs at 640×480, 30 fps for image/depth acquisition, and a reasonably recent machine can process and stream at 30 fps as well (using multiple threads).
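The acquisition/processing split described above is a classic producer–consumer pipeline: one thread grabs frames from the sensor while another does the heavier pointcloud/skeleton work. A toy Python sketch of the idea (this is an illustration only, not the actual tool's code; the frame contents are placeholders):

```python
import threading
import queue

def acquisition(frames, n_frames=5):
    """Producer: stands in for the 30 fps depth/image grab loop."""
    for i in range(n_frames):
        frames.put({"frame": i, "depth": [0] * 4})  # placeholder frame data
    frames.put(None)  # sentinel: no more frames

def processing(frames, results):
    """Consumer: pointcloud/skeleton work runs off the acquisition thread."""
    while True:
        frame = frames.get()
        if frame is None:
            break
        results.append(frame["frame"])

frames = queue.Queue(maxsize=2)  # a small buffer decouples the two threads
results = []
worker = threading.Thread(target=processing, args=(frames, results))
worker.start()
acquisition(frames)
worker.join()
```

The bounded queue is what lets the grab loop keep its 30 fps cadence even when a single frame takes a little longer to process.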

There is a base application for data acquisition, pointcloud calculation and skeleton tracking.
You can export color and depth image frames in several formats, as well as geometry in .obj (and a few other) formats. Currently this is as fast as it gets (depending on the export format), but nowhere near 30 fps.
At the moment it can't export motion directly (BVH, for example); it only streams it out to MotionBuilder.
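The pointcloud calculation and .obj export mentioned above can be sketched like this: back-project each depth pixel through pinhole intrinsics, then dump the points as `v x y z` lines. A minimal Python illustration (the intrinsics and function names are my own placeholders, not values from the actual application):

```python
def depth_to_points(depth, width, height, fx=594.0, fy=591.0, cx=None, cy=None):
    """Back-project a flat depth map (mm) to 3D points via pinhole intrinsics.
    fx/fy defaults are rough Kinect-class values, used only for illustration."""
    cx = width / 2.0 if cx is None else cx
    cy = height / 2.0 if cy is None else cy
    points = []
    for v in range(height):
        for u in range(width):
            z = depth[v * width + u]
            if z == 0:  # 0 means "no reading" in the Kinect depth map
                continue
            x = (u - cx) * z / fx
            y = (v - cy) * z / fy
            points.append((x, y, z))
    return points

def write_obj(points, path):
    """Write a point cloud as .obj vertex lines (one 'v x y z' per point)."""
    with open(path, "w") as f:
        for x, y, z in points:
            f.write("v %.3f %.3f %.3f\n" % (x, y, z))
```

At 640×480 that is up to ~300k vertices per frame, which is one reason a per-frame .obj export can't keep up with 30 fps.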

Then there is a separate device for MoBu (2009–2011, 32 & 64-bit). It connects to the base app over TCP, so you can either run both on a single machine or network two machines together.
The device generates a skeleton with positions for the Hips, Spine, Neck, Head, Uparms, Forearms, Hands, Uplegs, Lowerlegs and Feet, and rotations for everything except the head, hands and feet (unfortunately that's a limitation of the NITE skeleton tracker, at least for now). The device is recordable and can create a MoBu character node for retargeting to other characters.
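To give a feel for what streaming such a skeleton over TCP involves, here is a rough Python sketch of serializing one frame of joint positions as packed little-endian floats. The joint names and wire format here are entirely my own assumptions for illustration; the actual protocol between the base app and the MoBu device is not published:

```python
import struct

# Assumed joint order for this sketch (16 joints, per the list in the post).
JOINTS = ("Hips", "Spine", "Neck", "Head",
          "LeftUpArm", "RightUpArm", "LeftForeArm", "RightForeArm",
          "LeftHand", "RightHand", "LeftUpLeg", "RightUpLeg",
          "LeftLowLeg", "RightLowLeg", "LeftFoot", "RightFoot")

def pack_frame(positions):
    """Serialize one skeleton frame: joint count, then x/y/z per joint."""
    data = struct.pack("<I", len(JOINTS))
    for name in JOINTS:
        x, y, z = positions[name]
        data += struct.pack("<3f", x, y, z)
    return data

def unpack_frame(data):
    """Inverse of pack_frame, as the receiving end (the device) would do it."""
    (count,) = struct.unpack_from("<I", data, 0)
    out = {}
    for i, name in enumerate(JOINTS[:count]):
        out[name] = struct.unpack_from("<3f", data, 4 + 12 * i)
    return out
```

A fixed binary layout like this keeps each frame small (under 200 bytes here), which is what makes streaming at 30 fps over a LAN unproblematic.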

There are still some issues with rotational flips I have to iron out; once that is done, and after a little polishing, I'll release something.
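One common way such flips are handled (not necessarily the fix used here) comes from the fact that a quaternion q and its negation -q represent the same rotation, so a tracker can legitimately emit either one from frame to frame. Keeping each new quaternion on the same hemisphere as the previous frame removes the discontinuity:

```python
def ensure_continuity(prev_q, q):
    """If the new quaternion points away from the previous one (dot < 0),
    negate it: q and -q are the same rotation, but interpolation and
    plotted curves only behave if consecutive frames stay consistent."""
    dot = sum(a * b for a, b in zip(prev_q, q))
    if dot < 0.0:
        return tuple(-c for c in q)
    return q
```

Applied per joint, per frame, this turns a sudden 360°-looking spike in the curves into a smooth channel.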

Multiple Kinects would be possible in the future, but the drivers for that aren't there yet.
You could theoretically use additional Kinects to fill occlusions or to increase the framerate; I've seen/heard of others experimenting with this, and both approaches seem viable.

The current quality of this technology is by no means a replacement for a professional (optical) capture system. (I use and maintain a pipeline with a 24-camera, multi-actor Vicon system in my day job.)

The strengths are in low cost, easy setup, and use with regular clothing.

The problem areas lie in low data quality, motion artifacts, a small capture area, no rotation for certain body parts (this could change in the future), and a lack of resolution for finger/facial capture.
You also have to be careful to stay visible to the sensor: watch out for hands behind your back, sitting/crouching, loose clothing, leaving the view, or passing behind objects.

But hey, it's most definitely a very accessible and fun device to play with. And this kind of sensor has a lot of potential to cause a big stir in interface design.
For example, I want to be able to swipe through my media files, from the couch, on my media player instead of clicking a remote! 🙂

Ok, out for now, will keep you posted.

Thursday, January 6th, 2011 Kinect, Ranting

5 Comments to Some info regarding Kinect/Mobu

  • eric says:

    Dear Jasper,
    You are always the first ! First to make physics, and now Kinect, on MotionBuilder.
    So you beat the Russians' iPi (Motion Capture for the Masses), who were just about to release their standalone application.

  • Thomas Goddard says:

    This is very cool! Great work and thank you for sharing so generously. The interaction ideas for this are awesome. I would love it if I never had to worry about finding my television remote again 😉

  • taran says:

    Awesome, can't wait to try this out. Even if it isn't a full solution it will go a long way toward getting the base animation into MotionBuilder, so that you can carry on animating on top.

  • Johan Steen says:

    Thanks for the update!
    I'm really looking forward to taking the first iteration of your implementation for a spin.
    I’m so excited about your development! Brilliant! 😀
