Multi Kinect v2
The R&D below has been migrated into:
Brekel Body v3
PointCloud v3
Some screenshots of the Multi-Kinect v2 Calibrator pre-alpha that we developed at & for the Microsoft Hackathon in Amsterdam.
Showing 3 sensors (more if you have more sensors & machines) calibrated in less than a second.
Unfortunately we couldn’t port our C++ skeletal streaming into Unity C# and finish a game concept in time, but still had lotsa fun!!
Note this is highly experimental internal pre-alpha code and not available for public testing as of yet.
However my intention is to develop this further into an add-on module for the Brekel apps for Kinect v2 in the future.
This project is still very interesting!
Have you any news?
This has been integrated (and rewritten) into these apps:
https://brekel.com/brekel-body-v3
https://brekel.com/brekel-pointcloud-v3
Hi Brekel,
Have you finished this project? I’d like to know: how can you find the correct correspondence between the different cameras to control the 3D model through 360 degrees? (Because Kinect always assumes that the user is facing the camera, a single sensor only works through 180 degrees; hence the left/right hand problem!)
I’ve moved on to rewriting it from scratch, here’s a teaser of the new multi sensor skeleton solver: https://www.youtube.com/watch?v=s0oviurS5Mw
There are still some things to work on, but I should be able to share a beta version with all existing customers soon.
Hi, is there any news about Brekel multi-sensor skeletal fusion?
Hi, I’m buying the Kinect package with Pro Body and Pro Face, but I’m interested in multi-sensor too.
I wonder if there is any forecast for completing this program, or if there is already a test version compatible with Pro Body.
Well Brekel, I hope to see a multi-Kinect Pro Body v2 as an upcoming motion capture solution. Best of luck!
Hello, I am working with multiple Kinect v2 sensors to track the human body. Since we get occlusions with a single Kinect camera, we are trying to form one skeleton by fusing the skeleton data from multiple Kinect cameras.
I am facing the problem of synchronizing the data frames from the multiple cameras.
Another problem is with initialization of the Kinects: multiple Kinects are initialized at different times (a Kinect only starts collecting skeleton data when it recognizes a human skeleton in front of it), so I could not figure out how to synchronize the cameras in order to fuse their data.
Hello
I am also working with multiple Kinect v2 sensors to track the human body, but I’m facing a lot of problems because I am a student and just beginning. I am wondering if you could give me your code, just for study? Thank you!
Hi Brekel
We are using multiple Kinect v2 sensors for a surface capture system, with a checkerboard to calibrate the whole system. However, we notice the calibration is never perfect. We are considering using an object (a ball or a cylinder) to calibrate the system instead. Could I ask what kind of calibration object you used? Could you give some advice on how to calibrate multiple Kinect v2 sensors? Thanks very much!!
Checkerboards work reasonably well for intrinsics calibration but don’t provide a lot of information for extrinsics calibration. You may want to look into other patterns, for example ones that also provide information about which way they’re pointing.
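For example, with a pattern of uniquely identifiable markers each detection immediately tells you which part of the board you’re looking at and which way it’s pointing. A minimal sketch using OpenCV’s contrib ArUco module (purely illustrative, assuming the pre-4.7 API; this is not our calibrator’s actual code):

    #include <opencv2/opencv.hpp>
    #include <opencv2/aruco.hpp>

    // Estimate the 6-DOF pose of an ArUco grid board as seen by one sensor.
    // Unlike checkerboard corners, every marker has a unique ID, so the
    // board orientation is unambiguous and partial views still work.
    bool estimateBoardPose(const cv::Mat& image,
                           const cv::Mat& cameraMatrix,  // from intrinsics calibration
                           const cv::Mat& distCoeffs,
                           cv::Vec3d& rvec, cv::Vec3d& tvec)
    {
        cv::Ptr<cv::aruco::Dictionary> dict =
            cv::aruco::getPredefinedDictionary(cv::aruco::DICT_4X4_50);

        // 5x7 markers, 4 cm wide with 1 cm gaps (example dimensions).
        cv::Ptr<cv::aruco::GridBoard> board =
            cv::aruco::GridBoard::create(5, 7, 0.04f, 0.01f, dict);

        std::vector<int> ids;
        std::vector<std::vector<cv::Point2f>> corners;
        cv::aruco::detectMarkers(image, dict, corners, ids);
        if (ids.empty()) return false;

        // Returns the number of markers used; 0 means no pose was found.
        int used = cv::aruco::estimatePoseBoard(corners, ids, board,
                                                cameraMatrix, distCoeffs,
                                                rvec, tvec);
        return used > 0;
    }

Once each sensor knows the board’s pose in its own camera frame, the relative extrinsics between any two sensors follow by chaining the two transforms.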
Hi Brekel,
I’m quite interested in doing mocap with more than one Kinect sensor (possibly v2), and I’ve been reading about noise due to mutual interference between the sensors.
An in-depth study specific to both the v1 and v2 Kinect versions is “Kinect Range Sensing: Structured-Light versus Time-of-Flight Kinect”, which can be found here:
http://www.researchgate.net/publication/277023318_Kinect_Range_Sensing_Structured-Light_versus_Time-of-Flight_Kinect?enrichId=rgreq-ff1d5f67-3d8f-4c82-aebd-a79cbcebc913&enrichSource=Y292ZXJQYWdlOzI3NzAyMzMxODtBUzoyMzMyNzU5MDE4NzAwODBAMTQzMjYyODcxNzQ2Mg%3D%3D&el=1_x_2
According to it, v2 is much more sensitive to reciprocally induced noise than v1. Quoting from page 30: “The Kinect ToF camera (v2) shows low interference for the majority of the frames (RMSE: < 5mm), but extreme interference errors for some 25% of the frames (RMSE up to 19.3mm) that occur in a sequence which has a nearly constant repetition rate. This behavior is most likely due to the asynchronous operation of the two devices.”
Did you experience noise due to mutual interference of the sensors?
Please describe your experience with noise and the way you coped with it.
Thanks very much, and keep up the good work.
Thanks for the link to that paper, hadn’t seen it yet but will definitely study it 🙂
Yes there certainly can be interference with Kinect v1 and v2, and both are different in nature.
With v1 you generally get noise where the sensors overlap; the noise is there all the time and can be quite severe, to the point where the data becomes unusable.
This is because the sensor works by projecting a pattern of dots and analyzing how it distorts in space; multiple devices essentially confuse each other’s patterns.
There was a paper showing how to reduce the noise by adding vibrating motors at different speeds to each sensor but I’ve never been able to get stable results with that myself.
The v2 works with about 300 light pulses per second which are also extremely short.
There is a small chance that multiple devices will send a pulse at the same time and this will be perceived as an IR image that is too bright, resulting in a shift in Z in the depth signal.
In practice this only happens occasionally and phases in and out. I haven’t done measurements, but in severe cases the error seems to be in the range of up to 5-10 cm or so.
It may be happening less when devices have warmed up but I’m not 100% sure.
I’m not at a point yet where I can give a definitive answer on how bad the data is when actually using pointclouds/skeletons, but it seems more manageable and happens less often than with the v1. And there may be some ways (at least theoretically) to compensate.
On a side note the v2 sensor’s video and IR streams also allow for easier intrinsic/extrinsic calibration of multiple devices.
Thanks for sharing your insights/observations.
I read a post where a guy reported a reduction in the interference after 20 minutes of warming up the v2 devices.
Unfortunately he also reported, in a later post, that after a while the interference pattern seemed to recur (I wonder whether the sensor cooled down due to fan operation, or the interference just follows a random temporal pattern unrelated to sensor temperature).
I also read another post where people were speculating about random changes of the two modulating frequencies (around 80 MHz) used for phase detection and wavelength period discrimination (such a frequency change being an autonomous attempt by the sensor to cope with ambient light and dynamic range issues), and regarding that as a possible cause of the recurring temporal interference pattern.
Assuming, as it seems, that there is no external way to synchronize multiple devices or alter the modulating frequencies to avoid mutual interference: perhaps the included HD RGB cameras could be used for a visual hull check that constrains the point cloud, as compensation for the occasional severe distortions (up to 20 cm in the worst case according to that paper) caused by random light interference among the sensors. Although with such a small number of sensors (2 or 3 with overlapping emitter/receiver lines of sight) the visual hull reconstruction might be too coarse, rendering it close to useless.
What do you think about it?
I haven’t gone into trying to compensate yet, too much other work to do with calibration and skeleton data fusion first.
You may be able to do some detection in the 2D IR image based on intensities and previous frames and compensate the depth values.
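A very rough sketch of that idea (purely illustrative and untested, not actual compensation code): track a running average of the IR frame brightness and flag frames that spike above it, since an interference pulse shows up as an abnormally bright IR image:

    #include <cstdint>
    #include <vector>

    // Flags IR frames whose overall brightness spikes relative to a running
    // average; depth values from flagged frames can be discarded or blended
    // with the previous frame instead of trusted as-is.
    class IrSpikeDetector {
    public:
        explicit IrSpikeDetector(double threshold = 1.5) : threshold_(threshold) {}

        // Returns true if this frame is likely corrupted by interference.
        bool isSuspect(const std::vector<uint16_t>& irFrame) {
            double sum = 0.0;
            for (uint16_t v : irFrame) sum += v;
            const double mean = sum / irFrame.size();

            if (avg_ < 0.0) { avg_ = mean; return false; }  // prime on first frame
            const bool suspect = mean > avg_ * threshold_;
            if (!suspect) avg_ = 0.95 * avg_ + 0.05 * mean; // learn from clean frames only
            return suspect;
        }

    private:
        double avg_ = -1.0;  // running mean intensity, -1 = uninitialized
        double threshold_;   // spike factor above the running mean
    };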
Any recent updates on the multi-Kinect setup?
Hard at work on it….
I am wondering if you will allow multiple Kinect v1s to be used as well, since most computers can’t handle multiple Kinect v2s. This would be useful for users who want 360-degree data using one computer.
Thanks
All the groundwork is currently being done with the v2 sensors; I haven’t yet tested how well the v1s can be calibrated with their low-quality video, but I will investigate that.
Also note that most machines can only run up to two v1 sensors unless you add additional hardware.
If I’m able to have 2 to 3 Kinect V2s connected to one computer, I’ll be happy… Very happy.
So what additional hardware is required to run multiple Kinect V2 sensors from a single laptop (I’m in the process of customizing a laptop purchase)?
You cannot run multiple sensors from a single computer (and definitely not from a laptop) due to PCI Express bandwidth and driver/SDK constraints.
I suppose the next gen motherboards with updated PCIe technology and an updated driver/SDK will resolve this issue.
Amazing stuff!
Yes, but then it’s still in Microsoft’s hands to support that in their drivers/SDK.
I’m surprised that Microsoft would even put any limitations to begin with.
The limitation is at the hardware level; software-wise, multi-sensor addressing isn’t implemented since it’s not a possibility on 99.99% of the systems out there. And of course the resources of the MS Kinect team are finite.
Gotcha. Hopefully the next-gen hardware will allow full use of this amazing product. Thanks for the update.
If it does I’ll definitely work hard at implementing it 🙂
Due to PCI-Express bandwidth limitations you’ll need one machine per sensor.
This says otherwise: https://github.com/OpenKinect/libfreenect2
and Doc_OK https://www.reddit.com/r/oculus/comments/32vfhi/i_can_see_myself_in_vr_kinect_v2_proof_of_concept/cqf3x17
That is referring to the LibFreenect2 drivers, which last time I checked were very slow and needed some good knowledge to get installed and working.
I’m using the official Microsoft drivers/SDK which are much more robust and much faster.
Note that LibFreenect2 also doesn’t support any (body/face) tracking functionality at all.
This multi-Kinect2 fusion is awesome! You are certainly a programming wizard! Do you have any updates on the status of this project? I would definitely purchase 2 more laptops (3 total, plus the Kinects) to be a beta tester for you in a 3-Kinect setup.
So you say this has a range of about 4.5m (or 15 feet), so in a 3-Kinect2 setup the largest room “cube” the sensors could capture data for would be 15 feet x 15 feet x 15 feet? Just curious. The 3 Kinects would let you turn around, etc.?
I definitely plan to purchase a Kinect and your software for my laptop.
Keep up the awesome work!!!
The last couple of months life (the not so pleasant aspects) and contract work got in the way of serious dedicated development time.
Fortunately I’ve recently started picking things up again and am making progress.
It’s still too early to release any more info but I’m definitely looking at opening a beta for existing license owners once the needed features are in a usable state.
Hi Jasper,
I’m planning to buy Brekel Pro Body, but I want to use 2 Kinects (the new version). Is there already support for that in Pro Body?
Thanks for the reply.
Regards, Jasper
At the moment this add-on is still in an experimental pre-alpha phase and not yet available.
You can run the software multiple times and capture data from multiple sensors simultaneously, but you will then have to combine the data yourself in your favorite 3D package.
Keep in mind that in all cases you need one machine per Kinect v2 sensor because of the high bandwidth.
Can I link this into Unreal Engine 4?
Maybe at some point in the future.
Any updates about multi-kinect support with Pro body v2?
All my time has gone into Pro Face v2 lately, so no new announcement on the multi-sensor stuff just yet. Stay tuned.
Now that Pro Face v2 is out, are you going to be focusing on Multi Kinect v2? Because this looks like it could be extremely useful…
Yeah, I agree! Multi-Kinect would be extremely helpful and I would use it for sure!! When will it be released?
Looks awesome, can’t wait!
I have a workstation with 3 (or more) separate USB3 buses.
Will Multi Kinect v2 work without limitations?
There is currently no way to run multiple Kinect v2 sensors on a single machine.
The problem is not the USB3 bus but the PCI Express bus it’s connected to.
OK, got it.
How do I get to be a beta tester?
I may have a particular use for this technology.
Can I pay for a beta solution, just to support the development efforts?
This is not in beta yet, it’s an internal pre-alpha.
What if there’s a PC with multi PCI-E USB 3.0 Controllers?
In theory if you have one of those newest types of motherboards and a USB3 chipset connected to those new lanes it could work.
But in practice multi device support is not in the drivers/SDK since hardware support is very rare on consumer machines.
Hi all
“In theory if you have one of those newest types of motherboards and a USB3 chipset connected to those new lanes it could work.”
Which types of motherboards do you mean?
I only know about the dedicated 4x 5Gb/s USB 3.0 port controllers, e.g.:
http://www.amazon.com/HighPoint-RocketU-PCI-Express-Controller-1144C/dp/B00DW5QGLM/ref=dp_ob_title_ce
http://www.amazon.com/Express-SuperSpeed-Adapter-Dedicated-Channels/dp/B00HJZEA2S/ref=dp_ob_title_ce
The limitation is related to the PCI Express bus speed that the USB3 ports are connected to on the motherboard and the fact that the drivers/SDK therefore don’t support multiple sensors on one machine.
No way around that.
OK, but which “newest types of motherboards” do you mean?
Thanks for your time!
Is this going to be separate software from Pro Body v2, or just an upgraded version? Is there going to be any discount if I have Pro Body v2?
This is separate software that connects to multiple instances (through a network port) of Pro PointCloud/Body/Face v2.
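Purely as an illustration of that topology (the host list, port number and wire format below are made up for the example; the apps’ actual network protocol isn’t documented here), the fan-in side could look roughly like this:

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <unistd.h>
    #include <cstdint>
    #include <string>
    #include <vector>

    // Open one TCP connection per machine running a single-sensor app
    // instance; the fusion software then reads frames from each socket.
    std::vector<int> connectToSensors(const std::vector<std::string>& hosts,
                                      uint16_t port)
    {
        std::vector<int> sockets;
        for (const std::string& host : hosts) {
            int fd = socket(AF_INET, SOCK_STREAM, 0);
            sockaddr_in addr{};
            addr.sin_family = AF_INET;
            addr.sin_port   = htons(port);
            inet_pton(AF_INET, host.c_str(), &addr.sin_addr);
            if (connect(fd, reinterpret_cast<sockaddr*>(&addr), sizeof(addr)) == 0)
                sockets.push_back(fd);
            else
                close(fd);  // skip unreachable machines
        }
        return sockets;
    }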
Hi
I’ll be very interested in testing it as soon as it’s available.
If you are doing a private beta: I am a current customer and would like to be considered.
At the moment it’s still a highly experimental internal pre-alpha project, once it’s ready for beta testing I’ll announce it on this site and social media streams.
[…] The author has also demoed tests with multiple Kinect sensors; related applications should follow later. https://brekel.com/multikinectv2/ […]
Does the multi-sensor add-on and Pro Body v2 work with Kinect v1?
The v2 apps are specifically designed for the v2 sensors only (Kinect for Windows v2 and Kinect for Xbox One).
The v1 apps remain available for the v1 sensors (Kinect for Windows v1 and Kinect for Xbox 360).
The experimental multi-sensor support is also just for the v2.
In fact the v1 sensors use a very different way of measuring depth and overlapping multiple sensors generates a lot of noise.
The increased resolution of the v2 sensor is also needed for accurately (and quickly) aligning them into one coherent coordinate system.
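For the curious: aligning two sensors into one coordinate system from matched 3D points (e.g. calibration-pattern corners both sensors can see) boils down to a least-squares rigid fit. A minimal Kabsch-style sketch using Eigen, just to illustrate the principle (not necessarily the exact solver used here):

    #include <Eigen/Dense>
    #include <vector>

    // Least-squares rigid transform (R, t) mapping point set P onto Q.
    // P and Q are matched 3D points expressed in two sensors' camera frames.
    void rigidAlign(const std::vector<Eigen::Vector3d>& P,
                    const std::vector<Eigen::Vector3d>& Q,
                    Eigen::Matrix3d& R, Eigen::Vector3d& t)
    {
        const size_t n = P.size();
        Eigen::Vector3d cp = Eigen::Vector3d::Zero(), cq = Eigen::Vector3d::Zero();
        for (size_t i = 0; i < n; ++i) { cp += P[i]; cq += Q[i]; }
        cp /= double(n); cq /= double(n);

        // Cross-covariance of the centered point sets.
        Eigen::Matrix3d H = Eigen::Matrix3d::Zero();
        for (size_t i = 0; i < n; ++i)
            H += (P[i] - cp) * (Q[i] - cq).transpose();

        Eigen::JacobiSVD<Eigen::Matrix3d> svd(H, Eigen::ComputeFullU | Eigen::ComputeFullV);
        Eigen::Matrix3d D = Eigen::Matrix3d::Identity();
        // Guard against a reflection sneaking into the solution.
        D(2, 2) = (svd.matrixV() * svd.matrixU().transpose()).determinant() > 0 ? 1.0 : -1.0;
        R = svd.matrixV() * D * svd.matrixU().transpose();
        t = cq - R * cp;
    }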
Very interesting indeed
I know it’s very early days yet, but I do have a couple questions:
Would your system still be running real-time?
How large is the capture volume if you set up multiple kinects?
I’ll definitely keep an eye on this!
thanks,
e1
Yes, this can still run in realtime on multiple networked machines.
Streaming skeleton data is light, and so is fusing it.
Streaming pointcloud data is too heavy to do at full resolution at 30fps (yes, I do compress it), but for visualization purposes it can be downsampled in both space & time, as in the sketch below.
But each machine can receive the calibration data and then record on its own at full quality.
The size of the capture volume depends entirely on the amount of sensors and their overlap.
Given that data becomes too noisy at about 4.5m, there are limitations though.
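To give an idea of the spatial downsampling (a generic voxel-grid sketch, not the actual implementation): snap each point to a coarse 3D grid and keep one point per occupied cell; temporal downsampling is then simply streaming every Nth frame:

    #include <cmath>
    #include <cstdint>
    #include <unordered_map>
    #include <vector>

    struct Point { float x, y, z; };  // minimal stand-in for a cloud point

    // Keep one point per voxel of `cell` meters. Larger cells mean fewer
    // points on the wire; the full-resolution cloud can still be recorded
    // locally on each machine.
    std::vector<Point> voxelDownsample(const std::vector<Point>& cloud, float cell)
    {
        std::unordered_map<uint64_t, Point> grid;
        for (const Point& p : cloud) {
            // Pack the three voxel indices into one 64-bit key (21 bits each);
            // good enough for a sketch, though distant cells can collide.
            const uint64_t ix = uint64_t(int64_t(std::floor(p.x / cell))) & 0x1FFFFF;
            const uint64_t iy = uint64_t(int64_t(std::floor(p.y / cell))) & 0x1FFFFF;
            const uint64_t iz = uint64_t(int64_t(std::floor(p.z / cell))) & 0x1FFFFF;
            grid.emplace((ix << 42) | (iy << 21) | iz, p);  // first point per cell wins
        }
        std::vector<Point> out;
        out.reserve(grid.size());
        for (const auto& kv : grid) out.push_back(kv.second);
        return out;
    }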
Hello Brekel,
Impressive setup you’ve got there. I am wondering how you got three Kinects connected to a single machine; are they all recording in realtime at once?
Microsoft usually states that the data of one Kinect already saturates the USB bus, so I would be very interested to learn how you pulled this off.
What are your plans for this software? Will it be published, and if so: when, where, and at what cost?
Do you have some more media than just these three screenshots?
Thanks for your answers, this looks really interesting,
C. Leske
This is running on multiple machines; as MS states, it’s one sensor per machine.
Since it’s highly experimental internal pre-alpha code, it’s too soon to answer questions regarding release, price, etc.
However I do intend to devote more development time on it once the single-sensor beta apps reach maturity.
So keep an eye on this site and social media.
Hello, I’m a programming student and I really like this class of software. I want to do mocap with 3 Kinects and your software can do it. Please, if you release your software for free use, I’d like to try it.
Sorry, this will not be free software.
I hope we only have to pay once for this module 🙂
I’m waiting Pro Face v2 and this module to be released 🙂
Good luck Brekel..
OIC. Thx Brekel.
exp
Sir Brekel,
I have a simple question:
3 Kinects on 3 PCs, but MotionBuilder runs on 1 PC.
Can I build a multi-Kinect system in MotionBuilder?
From the text above: “Note this is highly experimental internal pre-alpha code and not available for public testing as of yet”