Brekel Body v3 – FAQ

  • What are the minimum system requirements?

This depends on the number and type of sensors you want to use.
You can find more info in the documentation included with the app.

 

  • Can I mix & match different sensor types/brands?

Yes, definitely: the app was designed to handle data from different sensor types.
In fact, combining different sensor types can be beneficial in some cases, since they all have different noise/accuracy characteristics in the pointcloud and skeleton data they deliver.
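
Purely as an illustration of why mixing sensor types can help (and not necessarily how the Brekel solver combines data internally), here is a minimal Python sketch that fuses one joint position from several sensors using inverse-variance weighting; the sensor names and noise values are made up:

```python
import numpy as np

# Hypothetical per-sensor 3D estimates of the same joint, with rough
# standard deviations (in meters) reflecting each sensor's noise level.
estimates = {
    "azure_kinect": (np.array([0.512, 1.031, 2.498]), 0.005),
    "kinect_v2":    (np.array([0.520, 1.040, 2.470]), 0.010),
    "webcam":       (np.array([0.530, 1.020, 2.600]), 0.050),  # depth only estimated
}

# Inverse-variance weighting: noisier sensors contribute less to the result.
weights = {name: 1.0 / sigma**2 for name, (_, sigma) in estimates.items()}
total = sum(weights.values())
fused = sum(weights[name] * pos for name, (pos, _) in estimates.items()) / total

print("fused joint position:", fused)
```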

 

  • What sensor/brand/type is the best?

As always, ‘best’ is a subjective term and the answer is ‘it depends’.

Azure Kinect and Kinect v2 produce the best tracking quality, but Azure Kinect generally needs a faster CPU/GPU, and Kinect v2 is restricted to one sensor per machine (due to its drivers/SDK). Kinect v1 offers decent tracking with low GPU/USB requirements. Orbbec offers OK tracking with low requirements and doesn’t need a power supply. RealSense also needs no power supply but is generally very noisy.

 

  • Is a webcam just as good as a depth sensor?

The short answer is no.
A depth sensor provides depth readings that the skeleton tracker can use directly for 3D joint placement.
A webcam only delivers 2D data, so depth has to be estimated by the deep-learning based tracker.
Webcams are a great addition to a setup with one or more depth sensors, providing additional viewpoints and improving the quality of the solve.
Webcams are generally also less expensive, use less USB bandwidth, and some can provide 60 fps.

 

  • Which webcam brands/types are supported?

All webcams should work as long as they provide decent image quality.
The tracker needs to know the “field of view” of the lens (in degrees) in order to estimate depth.
Logitech webcams are automatically recognized and their field of view is known; for other brands you will have to enter this value manually.
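
As a rough illustration of why the field of view matters (a generic pinhole-camera sketch, not Brekel’s actual implementation): the horizontal FOV determines the focal length in pixels, which is what turns a 2D detection into a 3D viewing ray that depth can be estimated along. The resolution and FOV below are example values:

```python
import math

def focal_length_px(image_width_px: int, hfov_deg: float) -> float:
    """Focal length in pixels for a pinhole camera with the given horizontal FOV."""
    return (image_width_px / 2.0) / math.tan(math.radians(hfov_deg) / 2.0)

def pixel_to_ray(u: float, v: float, width: int, height: int, hfov_deg: float):
    """Unit direction of the 3D viewing ray through pixel (u, v)."""
    f = focal_length_px(width, hfov_deg)
    x, y, z = (u - width / 2.0), (v - height / 2.0), f
    n = math.sqrt(x * x + y * y + z * z)
    return (x / n, y / n, z / n)

# Example: a 1920x1080 webcam with a 78-degree horizontal FOV.
print(focal_length_px(1920, 78.0))               # ~1185 pixels
print(pixel_to_ray(960, 540, 1920, 1080, 78.0))  # image center -> (0, 0, 1)
```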

 

  • How many sensors should I use?

The software works with one or more sensors (as many as your hardware can handle).
Adding sensors with different viewpoints can increase quality, since they see parts of the subject that are occluded from other sensors.
So basically the more the merrier.

 

  • How many sensors can I use on a single computer?

This depends on your machine’s USB bandwidth and CPU/GPU, and on the type of sensors used.

Kinect v2 (Xbox One) and Orbbec sensors have driver/SDK limitations restricting usage to a single sensor per machine.

For desktop machines you can add PCI-Express cards to expand your USB bandwidth.

You can also use additional machines connected over the network and use the sensors attached to them.

(see documentation for more information)

 

  • Why can’t I use multiple Kinect v2 or Orbbec sensors on a single computer?

Kinect v2 (Xbox One) has a driver/SDK limitation that prevents access to more than a single sensor per machine; there is no way around this.
The reverse-engineered open-source LibFreenect2 SDK could potentially access color/pointcloud data (on some hardware setups), but that excludes body tracking.

Orbbec’s SDK can access multiple sensors for color/pointcloud data, but its body tracker currently only works on the first sensor.

 

  • How much better is a setup with sensors A, B, C versus a setup with X, Y or just a single sensor Z? (Substitute the letters with your favorite sensor brands/types.)

Generally speaking, multiple sensors are preferable since they see more angles of the subject and suffer fewer occlusions.
Newer sensor types also almost always provide better/cleaner data than older ones, especially within the Kinect sensor range.
I have most likely not tested your particular setup, so if you need a more specific answer, try it out with the trial and/or evaluation version.

 

  • Do I need multiple licenses when using multiple sensors/machines?

One “Multi Sensor” license allows you to connect to as many sensors/machines as you want.

When using multiple machines, the idea is to run the GUI on one machine and the headless/console version of the same app on your other machines, then use the “Network Sensor” option to receive data from the sensors connected to those networked machines.

 

  • Is there interference between overlapping sensors?

Yes and no.
Structured-light sensors (like Kinect v1 & Orbbec Astra) can produce a bit more noise in overlapping areas.
Kinect v2 (time-of-flight) can occasionally show some Z-wobble, since these sensors cannot be synchronized.
Azure Kinect sensors can be synchronized, which largely eliminates interference.
In general, interference does not pose much of an issue for the solver.

 

  • Can sensors be synchronized?

Azure Kinect sensors have sync in/out ports on their backs (remove the cover) and can be daisy-chained using simple 3.5 mm audio jack cables; the Brekel app will automatically detect this and set things up accordingly.
Internally the software synchronizes all incoming data using timestamps.
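
As a purely illustrative sketch of what timestamp-based alignment generally looks like (not the app’s actual internals): for each frame from a reference sensor, pick the frame from another sensor whose timestamp is closest, and reject pairs that are too far apart:

```python
import bisect

def match_by_timestamp(ref_timestamps, other_timestamps, max_offset=0.010):
    """Pair every reference timestamp with the closest timestamp in the other
    stream; keep only pairs that are within max_offset seconds of each other."""
    other = sorted(other_timestamps)
    pairs = []
    for ts in ref_timestamps:
        i = bisect.bisect_left(other, ts)
        candidates = [other[j] for j in (i - 1, i) if 0 <= j < len(other)]
        if not candidates:
            continue
        closest = min(candidates, key=lambda t: abs(t - ts))
        if abs(closest - ts) <= max_offset:
            pairs.append((ts, closest))
    return pairs

# Example: two 30 fps sensors, the second one offset by 4 milliseconds.
a = [i / 30.0 for i in range(5)]
b = [i / 30.0 + 0.004 for i in range(5)]
print(match_by_timestamp(a, b))
```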

 

  • How is v3 different from v1/v2?

v3 supports aligning & fusing data from multiple sensors, which can, depending on your setup, improve quality under occlusion and/or increase the capture volume.
v3 has a new set of skeleton solvers compared to v1/v2.
v3 has the option to use a deep-learning based tracker for improved quality and to help determine the left/right/front/back of people.
v3 is in active development.

 

  • What is the Deep-Tracking functionality?

The Deep-Tracking feature uses a deep-learning based tracker to help improve quality by providing additional joint estimates in 2D and 3D; it is generally better at identifying the left/right/front/back of people than some (older) sensors.
It can run in CPU mode, as long as your CPU supports AVX2 instructions (generally all CPUs from 2013 and later), or in GPU mode on NVIDIA RTX 2xxx/3xxx GPUs with Tensor Cores.
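
If you’re unsure whether your CPU supports AVX2, one quick way to check is with the third-party py-cpuinfo Python package (this is just a convenience suggestion, not something the Brekel app requires you to run):

```python
# pip install py-cpuinfo
import cpuinfo

# The 'flags' list contains lowercase CPU feature names such as 'avx2'.
flags = cpuinfo.get_cpu_info().get("flags", [])
print("AVX2 supported:", "avx2" in flags)
```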

 

  • Are there upgrade discounts for v1/v2 license owners?

Yes, of course; you can find more info about updates & upgrades here.

 

  • Why are there no sample files?

Data quality depends on, for example, how many sensors you use, their brand/type, how you set them up in terms of angle/distance to the subject, and whether or not you use the deep tracker.
Because of these variables it’s best to try things out yourself for your particular setup.

 

  • Why don’t I see color when loading a Body v3 BPC file in PointCloud v3?

By default Body v3 is tuned for the best body-tracking performance and uses infrared streams where possible, as these are generally faster to decode than color streams and offer more stable lighting.

You can however switch to using color instead using this toggle from the top menu: “Settings > Force using Color for video”.

 

  • When can we expect a macOS release?

Many of the supported sensors don’t come with macOS drivers.
At the same time, Apple dropping OpenGL support in favor of a proprietary substitute, not offering cross-compilation, not supporting NVIDIA CUDA, forcing developers into a paid development subscription, and moving away from x86/x64 CPUs makes macOS not a very friendly platform to develop for. It would also require rewriting major portions of the apps for a very small user base.
So there are currently no plans for a macOS release.

 

  • What about a Linux release?

Some sensors offer Linux drivers and some portions of the app could potentially be ported to Linux.
At the moment there are no plans, but things may change in the future if there is enough demand.
