Wearable Kinect aka Kinect Snorricam
Just posted some info on my latest experiments creating a wearable Kinect setup for face tracking, based on the SnorriCam principle.
I feel foolish, again! I'd like to download the beta of your Body v3, but I don't know where to find it! "Help – Download" on WHAT site/page?? I can't find a HELP tab on your Brekel site. How am I going to be a mocap virtuoso if I can't even find the link???
Also into 3d and saw what you have going on here. I make cool stuff too: (Engineering Hardware) dreamcutter.com , (3D Animation) farmpeeps.com .
Here is an idea: let's make a levitating SnorriCam / body cam based on a gyroscopic toy helicopter that is programmed to track the eye retina (IR heat) and use it as a reference to hover 40 cm in front of your face. The retinas give off a unique IR signature compared to the rest of the body and are "identifiable" heat signatures. I have access to engineering the hardware and electromechanical controls, and you seem to have the software integration aspect covered. Just a thought at this point.
Sounds a bit dangerous and maybe not too practical.
But it also sounds like something incredibly cool if you can pull it off and get a helicopter to stay in front of a person!
Guess it does demand an actor with balls of steel and a very loud voice though 🙂
Please, how can I get hold of your custom wearable SnorriCam? Can you make me one?
All the parts are listed so it shouldn’t be very hard to build your own using equivalent parts.
Okay, could you please give a link to where you listed them? I think I must have missed it.
Have a look at the bottom of the full article by clicking on the photo of this blogpost, or using the link in the menu on the right.
Or, if it's not too much to ask, could you make a tutorial?
Can Brekel Pointcloud isolate the face and neck and save to .obj?
There is no specific code to do it.
However, if you make sure there is a bit of distance between you and the background, you can use the cropping functions to throw out any data that is too far away and end up with the head being isolated.
You can save as OBJ sequences, but also to Alembic and Realflow cache formats which will be much more efficient for most packages.
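The depth-cropping idea is simple enough to sketch with NumPy. This is a hedged illustration, not Brekel's actual implementation: the frame dimensions, depth values, and cutoff distance below are all made up for the example.

```python
import numpy as np

# Hypothetical depth frame in millimetres: a "head" region close to the
# sensor (~600 mm) in front of a background wall (~2000 mm).
depth = np.full((480, 640), 2000, dtype=np.uint16)
depth[100:300, 200:400] = 600  # stand-in for the head/neck

# Depth crop: discard every sample beyond a cutoff distance, leaving
# only the near geometry (the head) in the point cloud.
cutoff_mm = 1000
mask = depth < cutoff_mm
isolated = np.where(mask, depth, 0)  # zero marks "no data"

print("pixels kept:", mask.sum())
```

Anything farther than the cutoff is zeroed out, so only the head survives; the same principle applies whatever export format (OBJ, Alembic, RealFlow) is used afterwards.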
This is really cool… I'm curious though, are you using the Xbox Kinect or the Windows version of the Kinect sensor? And would it make a difference for facial animation with one or the other?
When mounting it on a desk and sitting in front of it you can use both sensors.
When wearing it on your body you want it as close as possible to be practical, so the Kinect for Windows is much nicer since near mode reduces the minimum distance to 45cm (for the Xbox sensor the minimum is about 80cm).
For the second example using Faceshift I used a PrimeSense Carmine 1.09 close-range sensor.