Well, if it can do real-time expression capture I'll be very satisfied. When buying the software, could a sample scene file or a demo video be included? Thank you.
Earlier I looked at your original capture data files and found that the jitter is sometimes still quite large, yet no obvious shaking appears in the video, so I'm wondering how you got the captured data so smooth.
Smoothing can be adjusted to taste using the sliders for the head and face independently in the GUI.
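To illustrate what a smoothing slider like this does conceptually, here is a minimal sketch of exponential smoothing applied to a jittery stream of values. This is not Brekel's actual implementation; the function and parameter names are illustrative only.

```python
def smooth(samples, alpha=0.2):
    """Exponentially smooth a stream of scalar samples.

    alpha near 0 -> heavy smoothing (more lag, like sliding the
    smoothing slider up); alpha near 1 -> little smoothing (less lag).
    """
    smoothed = []
    prev = None
    for s in samples:
        # Blend the new sample with the previous smoothed value.
        prev = s if prev is None else alpha * s + (1 - alpha) * prev
        smoothed.append(prev)
    return smoothed

# A jittery signal flattens out toward its average as alpha decreases.
jittery = [0.0, 1.0, 0.0, 1.0, 0.0, 1.0]
print(smooth(jittery, alpha=0.5))
```

In a real setup you would run separate filters (with separate slider values) for the head transform and the face channels, which is effectively what independent head/face sliders give you.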
I’m looking at setting up some tutorial material but it’s a bit tricky since everyone uses different software and different rigging methods.
So it’ll probably become more conceptual instead of step-by-step.
Hi,
Nice work! Would you be able to share the rig or the bones, or tell a bit more about how it was rigged?
Are you using physics to animate the beard and the hat, or just damping? Do you use morph keys or bones to animate the character?
So many questions and so little time.
Thanks,
e1
I’m not sure if I can share it yet but I’ll look into it.
Beard/hat are just simple damping, no real physics.
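Simple damping of this kind can be sketched in a few lines: each frame, the beard/hat value moves a fixed fraction toward its target, so it lags behind the head and settles without any physics simulation. This is a hypothetical illustration, not the rig's actual code.

```python
def damped_follow(targets, damping=0.3):
    """Move a value a fixed fraction toward the target each frame.

    Lower damping -> more lag and floatier follow-through,
    higher damping -> snappier tracking of the target.
    """
    value = targets[0]
    out = []
    for t in targets:
        # Close a fixed fraction of the remaining gap every frame.
        value += (t - value) * damping
        out.append(value)
    return out

# A sudden head move (0 -> 1) is followed with a soft lag.
print(damped_follow([0.0, 1.0, 1.0, 1.0], damping=0.5))
# -> [0.0, 0.5, 0.75, 0.875]
```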
It’s rigged using bones but they are driven using MotionBuilder’s Character Face tool which allows you to setup poses.
So in essence it’s very similar to using morphtargets/blendshapes.
I’m connecting them to the ‘animation units’ from Brekel Kinect Pro Face instead of directly to the markers.
The reason is that those ‘animation units’ are pretty much the same for any person, whereas the markers are a bit more precise but may shift a bit from person to person.
And I wanted to create a scene that would just work with anyone walking by.
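The mapping described above, from person-independent ‘animation units’ to pose/blendshape weights, could be sketched like this. The unit names, value ranges, and the `gain` parameter are assumptions for illustration, not Brekel's or MotionBuilder's actual API.

```python
# Illustrative animation-unit values as a tracker might output them,
# normalized to 0..1 so they behave the same for any performer.
ANIMATION_UNITS = {
    "jaw_open": 0.7,    # how far the mouth is open
    "brow_raise": 0.2,  # eyebrow lift
    "smile": 0.4,       # mouth-corner pull
}

def units_to_pose_weights(units, gain=1.0):
    """Map animation units to pose/blendshape weights, clamped to 0..1.

    Because the units are already person-independent, the same mapping
    works for anyone who steps in front of the sensor.
    """
    return {name: max(0.0, min(1.0, v * gain)) for name, v in units.items()}

print(units_to_pose_weights(ANIMATION_UNITS, gain=1.2))
```

Driving the rig's poses from these weights (instead of raw marker positions) is what lets the same scene work for any person walking by without recalibration.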