Pro Body 2 – FAQ

    • What are the minimum hardware specifications?

Kinect for Windows v2 sensor or Kinect for Xbox One with separately available adapter
Windows 8 / 8.1 / 10 (the USB stack of Windows 7 and below can’t handle the bandwidth requirements of the v2 sensor and is NOT supported)
USB 3.0 port with an Intel or Renesas chipset only! (other brands may or may not work)
DirectX 11 capable GPU
4 GB or more RAM
i5/i7 CPU 3GHz or faster
1280×1024 screen (recommended: 1920×1080 or higher)


    • What are the minimum GPU chipsets?

Intel HD 4400 integrated display adapter
ATI Radeon HD 5400 series
ATI Radeon HD 6570
ATI Radeon HD 7800 (256-bit GDDR5, 2 GB / 1000 MHz)
NVidia Quadro 600
NVidia GeForce GT 640
NVidia GeForce GTX 660
NVidia Quadro K1000M


    • Does this just use the standard Kinect SDK tracking?

Yes and no. Tracking starts with the joint positions from the SDK, but a lot of technology has been built on top of that to drastically improve quality, especially with regard to rotations and additional joints.


    • Is there an upgrade discount for v1 license owners?

Yes, please contact me with the name and/or email address used for your v1 purchase.
I’ll send you a personal coupon code for a 30% discount on the v2 software.


    • Which sensor does this work with, and where can I buy one?

Kinect for Xbox One with the separately available USB 3.0/power adapter; besides the Microsoft Store, many places like Amazon and local game stores also sell these.
Kinect for Windows v2 (now fully replaced by the functionally identical Kinect for XBox One sensor).


    • Does this work with the Kinect for XBox360 or Kinect for Windows v1 sensor?

No, please have a look at Pro Body, Pro Face and Pro PointCloud instead.


    • Does this work with the Intel RealSense sensors?

No, the RealSense sensors are not supported.

I have one of the close-range sensors and am evaluating whether it’s possible to add support to Pro Face 2 and Pro PointCloud 2 in the future; Pro Body 2 is dependent on the Kinect 2 sensor though.
It seems Intel will only sell these as devkits and will focus on integration with tablets/laptops/all-in-ones for retail, so market penetration will probably be limited among the 3D animation users who are my main clients.


    • Can I install this alongside the other Brekel Pro or Brekel (free) applications?

Yes, the Kinect v2 sensor uses completely separate drivers, so installing it will not interfere at all.


    • Does this new v2 sensor physically interfere with a Kinect v1 or Leap Motion sensor?

Nope, there seems to be no interference.
The three sensor technologies use slightly different infrared light frequencies for their depth measurements and can therefore be used together.


    • Does this work on Windows 7 or XP?

Unfortunately no.
The Kinect v2 drivers need the completely rewritten USB 3.0 stack that Microsoft introduced in Windows 8/8.1.
On Windows 7, USB 3.0 support was left to the manufacturers, and high-bandwidth devices prove to be too unstable.


    • Why do I need an Intel or Renesas USB 3.0 controller, and does it work with other brands?

These are the chipsets officially supported by the Microsoft drivers; they are proven to handle the high bandwidth requirements robustly.
Support for other chipsets may be added in the future, but they are likely to be unstable or not work at all.


    • Why do I need a DirectX11 capable graphics card?

The drivers convert the raw infrared video stream on the GPU to ensure realtime performance; this is based on DirectX 11 DirectCompute code.


    • Does this software support multiple sensors on the same machine?

No, this is in fact a hardware limitation.
Due to the sensor’s high bandwidth requirements, the limiting factor is the PCI bus speed of most computers.
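The arithmetic behind this limitation can be sketched as follows. The stream resolutions and frame rate are the Kinect v2’s published specs; the byte sizes (16-bit depth/IR, 2-byte-per-pixel color) are approximations, and real driver overhead will add to these figures:

```python
# Back-of-the-envelope bandwidth estimate for a single Kinect v2 sensor.
# Resolutions and 30 fps are the sensor's published specs; bytes-per-pixel
# values are approximations (16-bit depth/IR, 2-byte YUY2 color).
FPS = 30

def stream_mb_per_s(width, height, bytes_per_pixel, fps=FPS):
    """Megabytes per second for one raw video stream."""
    return width * height * bytes_per_pixel * fps / 1e6

color = stream_mb_per_s(1920, 1080, 2)   # ~124 MB/s
depth = stream_mb_per_s(512, 424, 2)     # ~13 MB/s
infrared = stream_mb_per_s(512, 424, 2)  # ~13 MB/s

total = color + depth + infrared         # ~150 MB/s for one sensor
```

Roughly 150 MB/s per sensor already pushes what a single USB 3.0 controller handles reliably in practice, which is why a second sensor on the same bus is not feasible.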


    • Does this software support multiple sensors on multiple machines?

I am experimenting with this (you can find some posts about it on the blog), but it is currently in an internal pre-alpha state.
Keep an eye on this webpage and the social media streams for future announcements on if/when this becomes an add-on module.

However, all Brekel software has a “Record Triggering” feature which allows synced recording across multiple apps on one or more machines.
So yes, you can record simultaneously using multiple sensors, but the software doesn’t automatically fuse the data.
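To illustrate the general idea of triggered recording (this is a generic sketch, not Brekel’s actual protocol; the port number and message are hypothetical), one app can send a small network datagram that listening apps on other machines react to:

```python
import socket

TRIGGER_PORT = 9250            # hypothetical port, not the actual Brekel one
TRIGGER_MSG = b"RECORD_START"  # hypothetical message

def send_trigger(host="127.0.0.1", port=TRIGGER_PORT):
    """Send a start-recording trigger datagram to a listening app."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.sendto(TRIGGER_MSG, (host, port))

def wait_for_trigger(port=TRIGGER_PORT, timeout=5.0):
    """Block until a trigger datagram arrives, then return its payload."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.bind(("127.0.0.1", port))
        s.settimeout(timeout)
        data, _ = s.recvfrom(64)
        return data
```

Each machine still writes its own take; as noted above, fusing the recordings into one skeleton is a separate (manual) step.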


    • How much better is this compared to the v1 software?

Depth measurement is based on a different principle than the previous generation Kinect and has about 3 times the fidelity.
Skeleton tracking now tracks 25 points on the body (vs 20 on v1) with more anatomically correct placement.
Due to the extra detail, joint rotations can be calculated more accurately.
Skeleton data is in general less noisy than on the v1 sensors.
Up to 6 people can be tracked simultaneously (although the camera view will be pretty much filled).
The lenses have a wider field of view so a bigger area can be seen.


    • What is the minimum and maximum depth range a person can be tracked in?

Minimum: 0.5 meters
Maximum: 4.5 meters
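If you post-process the exported data, you could use these limits to flag frames where a body was at the edge of, or outside, the reliable tracking range; this helper is a hypothetical sketch, not part of the Brekel software:

```python
# Kinect v2 tracking range quoted above: 0.5 m to 4.5 m from the sensor.
MIN_RANGE_M = 0.5
MAX_RANGE_M = 4.5

def body_in_range(joint_depths_m):
    """Return True if every joint depth (meters from the sensor)
    falls inside the supported tracking range."""
    return all(MIN_RANGE_M <= d <= MAX_RANGE_M for d in joint_depths_m)
```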


    • Will this run on a Mac?

The software requires Windows 8/8.1/10 to run.
It has been confirmed to work well under BootCamp, but it will most probably not work in a virtual machine, since these usually don’t support DirectX 11 or the full USB 3.0 specs.


    • What can I do when the hands/forearms flip?

– make sure your graphics card drivers are updated to the latest version from the NVidia/AMD/Intel site
– place your sensor at about chest/head height and slightly tilted down so it can still see the floor
– hand/forearm rotations are dependent on visibility of the thumb
– acting with open/relaxed hands is better than fists
– don’t wear baggy clothing that obscures the wrists/hands
– certain clothing reflects very little infrared light, which can result in lower tracking accuracy; check the IR/depth views
– you can play with the Forearm and Hand Roll Sensitivity settings in the Skeleton tab to tune stability vs fidelity


    • How does this differ from iPi?

iPi and Brekel internally use very different algorithms and have different workflow philosophies.
I suggest you simply try both and see what you like, but here are some differences:

– operates in realtime and needs no offline processing
– starts tracking instantaneously when a body is seen, no initialization is needed
– has little to no learning curve (select format, stand in front of sensor, hit record)
– can track 1-6 bodies simultaneously
– produces higher fidelity data including shoulder, head, and hand movement without the need for additional hardware
– is a bit more susceptible to occlusions compared to the much higher-priced multi-sensor iPi (which is not yet available for Kinect 2, by the way)
– can track hand states: open/closed/lasso (two fingered pointing) and drive finger joints using various poses
– can do simple 2D face tracking to enhance head rotations (also exported to FBX)
– can record pointcloud data and export to various mesh and particle cache formats using Pro PointCloud
– can record audio in sync
– exports to the much better FBX file format (including hand states, face tracking and additional state info), as well as BVH (skeleton only)
– can export to TXT and CSV formats
– does not do biomechanical analysis
– no subscription service, you simply buy and own a permanent license old-school-style
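The hand states mentioned in the list above (open/closed/lasso) can be pictured as a small enum driving finger poses; the names and curl values below are a hypothetical sketch of the idea, not Brekel’s actual data model:

```python
from enum import Enum

class HandState(Enum):
    # Hypothetical names mirroring the states listed above
    OPEN = 0
    CLOSED = 1
    LASSO = 2  # two-fingered pointing

# Hypothetical curl amount (0 = fingers straight, 1 = fully bent)
# used to drive finger joints from a detected hand state.
FINGER_CURL = {
    HandState.OPEN:   0.0,
    HandState.CLOSED: 1.0,
    HandState.LASSO:  0.5,
}
```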
