Markerless, video-based facial motion capture

Here’s a tech preview of a markerless, video-based facial motion capture solution I’ve been working on at Motek Entertainment.
And yes, that's what I look like in infrared through a wide-angle lens (bottom left, that is) πŸ˜‰

Friday, January 6th, 2012 Highend Mocap, Projects

11 Comments to Markerless, video-based facial motion capture

  • Thanks for sharing all of this!

I saw a request of yours for SLAM info, and being an old computer-vision grad student (a long time ago) I went and researched SLAM. I found:

    https://www.rocq.inria.fr/imara/dw/_media/users/oussamaelhamzaoui/coreslam.pdf

    and the associated code (see the link tinySLAM) at openSLAM.org:
    http://openslam.org/tinyslam.html

I am very comfortable with C, so let me know if you need help getting it ported, or whatever you're doing with it.

    -Mike

  • bartosz says:

    This is amazing! Do you plan on sharing such beautiful work? πŸ™‚

    • brekel says:

      We’ve been collaborating with Dynamixyz for a while now on developing this.
      Their video-based facial tracker is a commercial product.

At Motek we’re working on the Maya integration, rigging & retargeting, as well as integration with our full-body capture workflow (Vicon).
      We’re looking at offering services using this technology in the near future.

So if you want more info (tech or practical), feel free to contact us.

  • josh purple says:

    Looks excellent πŸ™‚ !

  • Greg says:

So seeing how the Kinect runs at a 30Hz video rate, and given that it's markerless, can your software use the Kinect data to do this?
It's all in the algorithm, correct?

    • brekel says:

The tracking algorithms are 2D at the moment; although they might be extended to 3D, I'm not sure how the inherent noise of the Kinect would come into play.

      Facial motion (especially eye motions and the mouth during speech) really benefits from at least 60fps though. In fact we’re doing our new experiments at 120fps right now.

Besides, the current Kinect/Asus/PrimeSense sensors are still too bulky and heavy to be comfortably strapped to a helmet worn by an actor.
For comparison, our video camera weighs only 70 grams and is a few centimeters in size.

Stereo or quad-camera setups could potentially be used, and as with everything tech-related, things can change quickly in the near future. πŸ™‚

      • tri says:

Any chance of seeing this solution released in the near future? It really looks amazing!

        • brekel says:

We’re currently looking into offering this technology commercially in the near future at the company I work for, Motek Entertainment.

          • Nick Tesi says:

            Would be interested in seeing more of this. We would like to offer it as a service when we find the right tools or system.

            • Marcus says:

Aww, too bad πŸ™‚ We are currently working on a project for a course at our university, and it would be neat to have facial motion capture. Hopefully there will be some free or inexpensive software soon. I wish more companies would release affordable EDU licences; Adobe has EDU licences, for example, but most students seem to use pirated copies as they can’t afford it πŸ˜›
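Editor's note: the reply above mentions that the tracking algorithms are purely 2D and that higher frame rates (60-120fps) help. As an illustration only (this is not Dynamixyz's actual tracker), here is a single-step Lucas-Kanade patch tracker, the textbook 2D technique for following a feature between video frames, sketched with NumPy; the function name and synthetic test are my own:

```python
import numpy as np

def lucas_kanade_patch(prev, curr, x, y, half=7):
    """Estimate the 2D translation of the patch centered at (x, y)
    between two grayscale frames (single Lucas-Kanade step)."""
    # Extract the patch from both frames (y = row, x = column).
    p0 = prev[y - half:y + half + 1, x - half:x + half + 1].astype(float)
    p1 = curr[y - half:y + half + 1, x - half:x + half + 1].astype(float)
    # Spatial gradients of the reference patch, temporal difference.
    Iy, Ix = np.gradient(p0)
    It = p1 - p0
    # Brightness constancy: Ix*dx + Iy*dy = -It, solved by least squares
    # over all pixels in the patch.
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    d, *_ = np.linalg.lstsq(A, -It.ravel(), rcond=None)
    return d  # (dx, dy) displacement in pixels
```

In practice a step like this is wrapped in a coarse-to-fine pyramid and iterated, which is one reason higher capture rates help: smaller inter-frame motion keeps the linearization behind the equations valid.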
