Brekel Face v1 – FAQ
- What is Brekel Kinect Pro Face?
Brekel Kinect Pro Face enables real-time motion capture of head movement and basic tracking of facial features using the Microsoft Kinect sensor.
It can export 3D data in FBX, BVH, TXT and .PZ2 (for Daz/Poser) files as well as standard AVI and WAV files for the video and audio data.
- What is it not?
It’s not a high-end facial capture and retargeting tool; there are other products out there for that at higher price points, so contact me if that is what you’re looking for. It does, however, provide robust tracking that does not need to be pre-trained for a specific actor.
- Which drivers are used?
Brekel Kinect Pro Face uses the Microsoft Kinect drivers; the installer will automatically download and install these for you, or you can find them yourself here.
- Does it support capturing with more than one sensor at a time?
Currently only a single sensor can be used at a time. You can, however, run the software more than once and choose which sensor each instance uses if you have more than one connected.
- Which sensors are supported?
Both the ‘Kinect for Windows’ and ‘Kinect for XBox’ sensors are fully supported; the Asus Xtion and PrimeSense Carmine sensors are not supported.
- What are the relevant differences between the ‘for Windows’ and ‘for XBox’ sensors?
- ‘for XBox’ is officially licensed by Microsoft for developers only, while ‘for Windows’ is licensed for commercial and consumer use
- ‘for Windows’ supports Near Mode, which can track between 40cm and 3m, whereas regular mode tracks between 80cm and 4m (mainly interesting for face tracking)
- ‘for Windows’ adds control over the video camera’s exposure and white balance settings
- What are the minimum hardware requirements?
- ‘Kinect for Windows’ or ‘Kinect for XBox’ sensor
- Power supply for the Kinect sensor
- Microsoft Windows 7/8
- 32 bit (x86) or 64 bit (x64) processor
- Intel Core i5 or faster (or equivalent) processor
- Dedicated USB 2.0 bus
- 4 GB RAM
- Graphics card with OpenGL support
- Is there a trial available and what are its limitations?
There is a free trial available which allows you to explore the tracking quality.
All the export and network streaming functionality is disabled and only available in the retail version.
- Is there an evaluation available without limitations?
If you want to test Brekel Kinect Pro Face, including its recording and streaming functionality, for a few days, please contact me for a full evaluation version.
- Why don’t you have an app that tracks both Body and Face simultaneously?
I have experimented with this; however, a single Kinect currently does not provide enough data to do this well.
The limited resolution does not provide good results for the face tracker when standing at a typical distance for the body tracker.
And having only a single viewpoint means you will have to keep looking at the sensor so your face stays visible, which would severely limit the range of motion you could perform.
- I’m getting jitters and noise, what can I do to improve?
- Make sure the video image is well lit and not too dark
- Adjust and experiment with the Filtering sliders (a rough sketch of the smoothing trade-off they control follows this list)
- Get as close to the sensor as possible (a ‘Kinect for Windows’ sensor with Near Mode enabled will allow you to get even closer)
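The exact filtering used internally isn’t documented here, but as a rough illustration of the trade-off the Filtering sliders control (more smoothing means less jitter but more lag), here is a minimal exponential-smoothing sketch in Python; the function and values are purely illustrative and are not Brekel’s actual filter:

    # Minimal sketch of smoothing noisy tracking data (not Brekel's actual filter).
    # Lower alpha = smoother output but more lag; higher alpha = more responsive but noisier.
    def smooth(samples, alpha=0.3):
        filtered = []
        previous = None
        for value in samples:
            previous = value if previous is None else alpha * value + (1.0 - alpha) * previous
            filtered.append(previous)
        return filtered

    # Example: a jittery head X position (values in centimetres).
    print(smooth([10.0, 10.4, 9.7, 10.6, 9.8, 10.2], alpha=0.3))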
- I’m having trouble recording video files
Video output is dependent on which video codecs are installed on your machine.
When the ‘Ask for codec’ option is disabled, the ‘XVID’ codec will automatically be used if available.
If you’re having trouble with codecs I can recommend the K-Lite codec pack and/or Xvid:
http://www.codecguide.com/download_kl.htm
Remember to also install the x64 version of the codec if you’re using the x64 version of Brekel Kinect Pro Face.
- What exactly is being tracked and exported to the output file and network stream?
- Head position as x,y,z coordinates
- Head rotation as x,y,z euler rotation values
- The face points (see the mesh in the 3D window) as x,y,z position coordinates (note that not all of these contain unique tracking data)
- Animation units: values representing facial poses (for example, to drive blend/morph shapes)
- Shape units: static values representing facial structure (for example head height, eye width, mouth width, etc.)
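As an illustration of the per-frame data listed above, here is a hypothetical Python structure; the field names are made up for this sketch and do not reflect the actual FBX/BVH/TXT layouts or the network stream format:

    # Hypothetical per-frame record mirroring the tracked data listed above.
    from dataclasses import dataclass, field
    from typing import Dict, List, Tuple

    @dataclass
    class FaceFrame:
        head_position: Tuple[float, float, float]   # x, y, z coordinates
        head_rotation: Tuple[float, float, float]   # x, y, z Euler rotation values
        face_points: List[Tuple[float, float, float]] = field(default_factory=list)  # face mesh points
        animation_units: Dict[str, float] = field(default_factory=dict)  # e.g. {"Jaw Open": 0.42}
        shape_units: Dict[str, float] = field(default_factory=dict)      # static face-structure values

    frame = FaceFrame(head_position=(0.0, 0.1, 1.2),
                      head_rotation=(2.5, -10.0, 0.3),
                      animation_units={"Jaw Open": 0.42})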
- Not all of the points move, how come?
This is correct: not all of these points are tracked individually; internally they are derived from the Animation Units and Shape Units.
- Which animation units are there?
The following animation units are currently tracked; you can use them directly to drive blend/morph shapes in your face rig, for example (a small scripting sketch follows the list).
- “Brows Inner Up”
- “Brows Inner Down”
- “Brows Outer Up”
- “Brows Outer Down”
- “Lip Stretch”
- “Lip Kiss”
- “Lip Corners Up”
- “Lip Corners Down”
- “Upper Lip Up”
- “Upper Lip Down”
- “Jaw Open”
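As an example of using these values, here is a hypothetical Python sketch that copies each animation unit onto a same-named morph target. It assumes the values arrive normalized to the 0..1 range and that your rig exposes a set_morph_weight call; neither of these is defined by Brekel Kinect Pro Face itself, so adapt it to your own pipeline:

    # Hypothetical sketch: drive blend/morph shapes from the tracked animation units.
    def apply_animation_units(rig, units):
        for name, value in units.items():
            weight = max(0.0, min(1.0, value))   # clamp to a valid morph weight
            rig.set_morph_weight(name, weight)   # 'set_morph_weight' is a placeholder for your rig's API

    class DummyRig:
        def set_morph_weight(self, name, weight):
            print(f"{name}: {weight:.2f}")

    apply_animation_units(DummyRig(), {"Jaw Open": 0.6, "Lip Corners Up": 0.25})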
- What is the difference between a shape and an animation unit?
Shape units are static values; they describe the structure of the face, for example how far apart the person’s eyes are.
Animation units are animated values; they describe the expression of the face and can be used to drive blend/morph shapes in your face rig.
- Can you add more Animation or Shape Units or tracking points?
These units are a direct result of the Microsoft face tracker and currently cannot be altered.
If you have feature requests, please contact the Microsoft Kinect team about including them in an upcoming release.
- Is it safe to stare into the Kinect’s Infra Red light source?
I’m no expert, but I believe it is; you can find a lengthy discussion about it here: