Hi guys,
I don’t know if any of you have had this problem (I know this is a really old conversation), but when I stream into MoBu my skeleton keeps coming in mirrored (e.g. right bones become left bones and left bones become right bones). Is there any way to work around this?
I’m working with Valve models in MoBu 2013, with Brekel v0.5.
Thanks,
Jacob
The old free version was created way before MoBu 2012 existed, so there may be issues with versions above 2011.
Switching the character retargeter from the now-default HIK solver to the old MB solver may help.
The new Pro version fully supports recent MotionBuilder versions and is actively tested with them.
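As an aside (not part of the original replies): if a stream comes in mirrored, one generic workaround is to swap the Left/Right markers in the bone names before characterizing, so the retargeter maps each side correctly. A minimal sketch, assuming bones follow the common `Left…`/`Right…` naming convention:

```python
import re

def swap_left_right(name):
    """Swap Left/Right markers in a bone name so a mirrored
    skeleton maps correctly (e.g. 'LeftArm' -> 'RightArm')."""
    # Use a placeholder so names aren't swapped twice.
    tmp = re.sub(r'Left', '\x00', name)
    tmp = tmp.replace('Right', 'Left')
    return tmp.replace('\x00', 'Right')

bones = ['Hips', 'LeftArm', 'RightUpLeg', 'LeftFoot']
remapped = [swap_left_right(b) for b in bones]
print(remapped)  # ['Hips', 'RightArm', 'LeftUpLeg', 'RightFoot']
```

Inside MotionBuilder you would apply the same remapping when filling in the character definition, rather than on plain strings.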
Hi,
sorry, me again…
Would it be possible to implement loading recorded ONI files instead of live Kinect capture? I don’t have a device for testing.
Thanks
Stefan
Someone else also mentioned ONI files; I’ll look into them at some point, but mainly for recording purposes.
Hi there,
I tried to export data to Nuke. I’m only able to connect a TIFF RGBXYZ or EXR RGBXYZ to the “xyz” input of the “PositionToPoints” node, with the same image on the “col” input, but then I don’t get an RGB picture on top of the point cloud. I tried setting the color “Export Format” to PNG, but when I connect the PNG image to “col” and an RGBXYZ (TIFF/EXR) to the “xyz” input, Nuke crashes.
Did I do anything wrong?
thanks
stefan
Sounds like something funky inside of Nuke, does it also crash when you try to just display the PNG in 2D mode?
When I connect a PNG directly to the “Viewer” node, Nuke displays the PNG in 2D mode. But when I connect the PNG and depth_rgbxyz.tif through the “PositionToPoints” node to the “Viewer”, Nuke crashes. Same with EXR files.
Connecting both “col” and “xyz” to the rgbxyz.tif works.
( http://www.pictureupload.de/originals/pictures/040211102809_kinect_nuke.jpg )
I just tried it here and it works fine using NukeX 6.1v1 64bit.
What Nuke version are you on? Maybe there are issues with the PositionToPoints node on certain versions.
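For background (not from the original thread): an RGBXYZ export like the one discussed here bakes a per-pixel world position into the image channels, which PositionToPoints then reads from its “xyz” input. A rough sketch of how a depth map turns into those XYZ values via a pinhole camera model, using an assumed horizontal FOV for the Kinect:

```python
import math

def depth_to_xyz(depth, width, height, fov_h_deg=57.0):
    """Convert a depth map (meters, row-major list) to per-pixel
    XYZ positions with a simple pinhole camera model, similar to
    what an RGBXYZ export bakes into the image channels."""
    fov_h = math.radians(fov_h_deg)            # assumed Kinect horizontal FOV
    fx = (width / 2.0) / math.tan(fov_h / 2.0) # focal length in pixels
    fy = fx                                     # assume square pixels
    cx, cy = width / 2.0, height / 2.0          # principal point at center
    points = []
    for v in range(height):
        for u in range(width):
            z = depth[v * width + u]
            x = (u - cx) * z / fx
            y = (cy - v) * z / fy               # flip so +Y is up
            points.append((x, y, z))
    return points

# Tiny 2x2 example: a flat wall 1 m away
pts = depth_to_xyz([1.0] * 4, 2, 2)
```

The real exporter’s intrinsics may differ; this only illustrates the mapping that the point-cloud node reconstructs.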
great stuff. enjoyed using it so far. some people are just never happy though, eh?! ‘can it be… better and use more Kinects!?’… great work utilising a 100 quid gadget. here’s a vid i posted after playing with it.
http://vimeo.com/19504423
Multiple Kinects should be possible at some point but will take me quite some time still.
Pretty cool stuff on your vimeo btw, well done!
Well my wishlist would be:
1) Ability to capture 2 skeletons at the same time
2) Being able to use multiple (at least two) Kinect devices to cover hidden body parts
3) Motor control within your application
BVH support isn’t as crucial in my opinion. The data I’m receiving still needs a lot of manual cleanup, and MotionBuilder is best suited for that job anyway.
Keep up the great work, can’t wait to get my hands on the next release.
Nice work!
Is there a way to get 2 skeletons working at the same time? I tried adding 2 devices, but only one gets recognised inside MB.
Also, head rotations are not possible due to Kinect limitations, right?
Head rotations are a limitation by the NITE skeleton tracker at the moment indeed.
And 2 skeletons is not doable yet either, but it is on my todo list.
Hi Thanks again for the great job
My trial version of MoBu is running out. I just wondered if you could put together a little application that outputs the translation and rotation values from the Brekel Kinect 3D Scanner into a file, so that I can read it into a Biped in 3ds Max. 😉
Actually…. working on that for the next version.
I’ve got BVH saving done and working, except Biped is quite a piece of crap since it has a non-standard coordinate system and is extremely specific in what type of BVH it understands.
So it’ll take a bit more time to reorient and massage the Kinect data so Biped doesn’t keep whining like a little child. 🙂
Thanks a lot.
BVH should sort me out for the moment.
Looking forward to the release.
Once again well done for the good job
Hi there
Please, would it be possible to get a pre-release of the next version with a functional BVH feature to play/work with? My MoBu trial has expired. The CAT tool in 3ds Max can read BVH.
Thanks
Nice software…
Thanks for developing it
http://vimeo.com/18655901
http://vimeo.com/18727953
http://vimeo.com/18781777
http://vimeo.com/18909224
You’re welcome, and btw cool stuff you did!
Hi.
I’d like to see more video tutorials, especially on recording a motion.
I don’t know MotionBuilder operations very well, and tutorials would help foreigners like me.
Wow. You’re a geeenius! Do you think at some point there could be support for simultaneously tracking more than one person’s skeleton? Also, what would you say is the maximum capture area/distance?
Once again, this is perfect…
It should theoretically be possible to support tracking multiple users.
In fact someone mentioned that detection and drawing of the skeleton dots on the viewports already works for up to 4 people. But my internal processing and streaming to MoBu only supports the first calibrated user at the moment.
However, the question is how useful it will be in practice, as 2 users will occlude each other very easily or fight for pixel space on the small sensor.
As for the maximum area/distance: I think the sensor is rated between 50 cm and roughly 4 meters, and from experience the data quality drops the further you get from the sensor.
So practically the area is, let’s say, one step to each side of a center position?
Thanks a lot Brekel… Do you think using multiple Kinects would solve the occlusion problem and increase the capture area? I would really like to be able to capture the area of an actual stage.
I’ll keep you posted on anything I do… Cheers for all this…
Multiple Kinects should both help with occlusions and allow for bigger areas.
However, you do have to calibrate them so they all know where they are in world space relative to each other.
So you’ll be moving more and more towards a pro mocap setup then 🙂
And at some point the interference patterns will start to interfere too much. (I’ve seen tests with 2 and 3 where it still holds up fairly well, but I haven’t tested anything myself as I only own one.)
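The calibration step mentioned above boils down to knowing each extra sensor’s rigid pose in a shared world frame, then transforming its points into that frame before merging. A sketch under that assumption (the pose values below are made up for illustration):

```python
import math

def make_transform(yaw_deg, tx, ty, tz):
    """4x4 rigid transform: rotation about Y followed by translation,
    e.g. the calibrated pose of a second Kinect in world space."""
    a = math.radians(yaw_deg)
    c, s = math.cos(a), math.sin(a)
    return [[c,   0.0, s,   tx],
            [0.0, 1.0, 0.0, ty],
            [-s,  0.0, c,   tz],
            [0.0, 0.0, 0.0, 1.0]]

def transform_points(m, points):
    """Bring one sensor's points into the shared world frame."""
    out = []
    for x, y, z in points:
        out.append((m[0][0]*x + m[0][1]*y + m[0][2]*z + m[0][3],
                    m[1][0]*x + m[1][1]*y + m[1][2]*z + m[1][3],
                    m[2][0]*x + m[2][1]*y + m[2][2]*z + m[2][3]))
    return out

# Hypothetical second sensor: rotated 90 degrees, offset 2 m sideways.
m = make_transform(90.0, 2.0, 0.0, 0.0)
# Merge its cloud with the first sensor's (already in world space).
merged = transform_points(m, [(0.0, 0.0, 1.0)]) + [(0.0, 0.0, 1.0)]
```

In practice the pose itself would come from a calibration routine (e.g. both sensors observing a shared target), not from hand-typed numbers.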
I seem to have lost the bottom half of the 3D Scanner screen. Even after uninstalling and reinstalling it’s still missing, which leads me to believe the application is reading & writing a config file somewhere that saves the settings of the previous session… Where do these settings get saved?
You’ve stumbled upon a bug 🙂
Settings are stored in the registry, but there is a way to get it back from the GUI; it’s in the FAQ on this site under ‘I’ve accidentally closed the window with the color and depth images, how can I get it back?’
I’ve fixed it properly for the upcoming version, but there is still one big feature to complete and I’ve just caught the flu, so things are a bit slow.
Thanks! I just found the setting myself in the registry. I deleted it and everything functions properly again.
I was just looking some more at the MoBu plugin, and it seems that if the camera isn’t at waist height looking straight on, the skeleton appears to lean backward or forward depending on whether the camera is looking up or down slightly. I guess this makes sense as there is no camera calibration going on, but it’s easy to adjust, so not a big deal. Just thought I’d mention it since it helps you get a better recording 🙂
Aaaah good point, yes it makes sense everything is relative to the Kinect sensor itself.
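Since everything is relative to the sensor, the lean can be compensated in a post step by counter-rotating the joints about the X axis by the camera’s pitch. A hypothetical sketch (the pitch angle would have to be measured or eyeballed):

```python
import math

def untilt(points, pitch_deg):
    """Rotate skeleton joints about the X axis to compensate for a
    Kinect that is pitched up or down, so the skeleton stands upright."""
    a = math.radians(pitch_deg)
    c, s = math.cos(a), math.sin(a)
    out = []
    for x, y, z in points:
        # Standard rotation about X: x stays fixed, y/z rotate.
        out.append((x, c * y - s * z, s * y + c * z))
    return out

# Camera tilted 10 degrees down -> counter-rotate joints by -10 degrees.
upright = untilt([(0.0, 1.0, 2.0)], -10.0)
```

The same idea is what a proper extrinsic calibration would do automatically.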
Hi mate,
http://www.youtube.com/watch?v=wXh-v8tuB1o
my first try with the 3D Scanner… works great!
Cheers!
Alex
Heheheh, you look scary like that dude.
But very cool 🙂
Yeah ~~ it works!!
Thanks brekel !!
I uploaded my test on youtube :
http://www.youtube.com/watch?v=2vLKoo9g-tw
Hey thanks for posting it up there.. Had 70 hits already! :))
Well, here’s the video…
Thanks mate, I put your website address at the end.
Hope you like it. It was a lot of joy for everyone there. We put on a great event, as you can see.
http://www.youtube.com/watch?v=DOPkLgSnOF8
Enjoy!
P.s – If you like the music on our site just let me know would be happy to send you some cds (its my label)..
Wow that is so cool! Seems like people are having a blast, very nicely done!
I’ve included it in a new User Gallery section.
P.S. thanks for the cd offer but to be honest I’m actually a metal head, and quite a heavy one at times 🙂
So, after a complete reinstallation I still have the same problem. I’ve tried another Kinect app, FAAST, and that works great, so I think the drivers are good. :/
I have no idea where the problem could be, maybe my configuration? I have an AMD Turion 64 X2 and 1.5 GB of RAM.
Sorry man, but I really don’t have a clue.
As you say the drivers should be good.
I haven’t tested on AMD but I don’t think that should be a problem either.
What graphics card do you have?
I have an ATI Mobility Radeon X1300..
That’s the only thing I can think of at the moment, that your graphics card can’t handle something. Maybe try updating to the latest drivers if you haven’t already.
I found pressing maximise snapped all the windows to fit on mine. Yeah, you remember on my nvidia 7900gs based laptop it couldn’t create the required streams… that was weird too, but I didn’t try the new build, which I will do later. Just working on this vid for now. I have faith in your skills, so I’m sure you will debug them all in the end :))
Be sure to check the latest build (v0.32 of the base app, v0.11 of the MoBu plugin) as the one before that was a lot buggier.
Hi,
very nice work, but I have a problem using the Kinect 3D Scanner: it won’t work.
When I run it, the first window opens and says “waiting for connection”, then the app’s window opens, but as soon as I try to interact with it nothing happens and the program stops running. I noticed that the app’s window is also taller than my screen resolution; that could be the origin of the problem.
I know it’s experimental work, so I really understand that some trouble can happen. It’s already a big piece of work 😉
Thanks, benima !
The “awaiting connection” message just means there is no connection to MotionBuilder yet, but that shouldn’t prevent the rest of the app from showing live images.
Did you go through the installation instructions and were you able to run one of the OpenNI examples?
https://www.brekel.com/?page_id=170
Yes, I followed all the installation instructions and the OpenNI examples work; I can see myself in yellow. I’ve reinstalled the app at least 3 times and the result is still the same. :/ Actually, the window “doesn’t respond” (I don’t remember how to say it in English) and is closed after a few minutes by a Windows error.
So if OpenNI works, the problem is the 3D Scanner app, isn’t it?
(sorry for my poor English, courtesy of French school lol)
Hmmm, I really don’t have a clue, you’re the first one to report it.
So if I understand correctly: it works at first, streaming some images, and as soon as you click inside the window it stops?
I just ran it from my laptop which has a smaller screen so the bottom of the window was clipped, and it ran fine.
Lol, I never do things like the others…
No, they are not streaming; sometimes both videos at the bottom show an image, but when I try to move nothing happens. And the main video never appears. I think the app stops running right at the beginning, and when I click inside, Windows tells me the app doesn’t respond.
Maybe I should uninstall everything and reinstall. Do I have to install the 3D Scanner before or after all the drivers?
It shouldn’t matter whether you install the drivers first and then my app, or my app first and then the drivers.
But it worked fine while I projected it :))
It worked great man!!! I had a few teething problems with the space, but people were freaking out :)) Video real soon! Yay, a world first there then :))
Can’t wait to see this driver progress..
It still crashes occasionally; just the 3D Scanner stops, but the character keeps going until you press close, so maybe it’s not NITE that’s causing it. But it’s still way better than the first build :))
Thanks brekel :))
Alex
That’s great to hear! 🙂
It’s sometimes difficult to catch crashes when a program is multithreaded, but I’m sure I’ll be able to catch ’em all in the debugger sometime.
🙂 OVER 3HRS no probs..happy days 🙂
wow, it’s like 10x more stable under both 2009 & 2011, and switching between apps works fine now, whereas before it was getting jammed. It did go down once, but no problems recording and driving a character for 30 mins now, so great work. I don’t know what you did, but it’s much better. I’m going to use it for a show this weekend so will try to film it 🙂
FBX file reloading stuff is totally stable.. well done :))
just a quick question: when I switch off all features of 3D Scanner the CPU is still at 50%, is that normal? Also, I see the frame rate in the point cloud changing when it’s switched off; does that mean it’s still active underneath?
Anyway, thanks so much for responding to the teething issues in time for the weekend. You’re a star!! Many, many people will love you for this.
Thanks again and again..;))
Alex.
Good to hear it improved, the crashes turned out to be an easy fix actually, some memory initialization issues.
Switching the features off with 50% CPU is normal.
Internally the data collection and some calculations are still going on, as things depend on one another.
It only turns off the display and the calculations related to that.
I probably could/should handle this more cleverly once I start optimizing code, but for now I’m still trying to get a few more features working first 🙂
Good luck with your show, would love to see some pics/movie if you have the time to pull out a recording device for a sec.
Cheers,
Brekel
I am trying to get it shot in HD 🙂 will also post a test for the show..
Glad to hear that’s normal CPU usage… it’s been running a character for well over an hour with no problems, so that’s great work. Going to use it for sure 🙂
Something else I noticed, which was a problem we had with our MB drivers for the inertial systems where I work: when we record in MB, the recorded keyframes don’t match up to the frames in MB; they’re not on frame. The same seems to be happening here. I can contact the programmer who managed to fix it next week for some info if it looks painful 🙂 Just thought I would mention it; not really an issue for me at this stage though.
I did also see that the skeleton is below the ground in MB sometimes.
It’s still running, so I’m going to leave it and see how long it lasts :))
Cheers then!
Alex
Very cool to hear you intend to use it for your show 🙂 🙂
Good point on the subframes; to be honest I never thought much of it. With our Vicon at work I usually leave it that way, since we plot on frame somewhere in the pipeline anyway.
But you may be right that it’s cleaner to be on frame from the start. If you could ask your programmer for some tips on how I can implement it, that would be great. (You can also contact me by mail if you don’t want the answer to be public.)
As far as the feet below the ground go, in fact I’m just using a fixed offset of 90 now to raise the skeleton up from the default NITE coordinate system.
Apparently that’s my personal hip height then 🙂
I think just making a little offset slider in the MoBu GUI for the next version would be the cleanest option then.
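Both fixes discussed above are small in principle. A hypothetical sketch (the frame rate and offset values are illustrative, not from the plugin): snap each device timestamp to the nearest whole frame before keying, and expose the vertical offset as a parameter instead of a hard-coded hip height.

```python
def snap_to_frame(t_seconds, fps=30.0):
    """Snap a device timestamp to the nearest whole frame so
    recorded keys land on frame instead of at subframe times."""
    frame = round(t_seconds * fps)
    return frame, frame / fps

def apply_height_offset(joints, offset=90.0):
    """Raise all joints by a user-adjustable vertical offset,
    instead of a hard-coded hip height."""
    return [(x, y + offset, z) for x, y, z in joints]

frame, t = snap_to_frame(1.016)  # a key at 1.016 s snaps onto frame 30
raised = apply_height_offset([(0.0, 0.0, 0.0)], 92.5)
```

In MotionBuilder the snapping would happen where the plugin writes keys, and the offset would be the slider value from the GUI.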
That sounds great..
heres something for you..
http://www.youtube.com/watch?v=Shd4P24kPTU
no sound, just a small-scale render… it’s not finished yet… quite a bit to do before the show…
will try to speak with my friend next week and mail you..
enjoy!
Alex.
Yeah, that’s pretty darn cool!
Brekel, you are the man! :)) I can’t wait to go test the updates…
hey, big thanks for fixing the FBX file glitch…
Let you know how it goes :))
Cheers mate..
Alex
Nice one! I’ve had the previous version running on Win XP with MoBu 2011. It works well; my comp is a Core2Duo with 2 GB RAM and I get about 15 frames/sec. Good stuff, keep it up!