Help on Bad Mocap Correction

Hey Ragdoll Team!

I have been following your work for a while now and am finally in a good spot to play around with it and see if I can integrate it into my project pipeline to add some fun nuance to the movement! I’ve watched your tutorial videos, and I have a couple of questions about setting it up optimally and playing with the tool.

Similar to your “bad mocap” tutorial on YouTube, I want to fix up some mocap data, including monocular video → mocap data. My main goals are:

  • remove the jitter
  • make sure the foot contacts are actually on the floor
  • fix the lean in the monocular mocap (it struggles with depth, so the data sometimes has a slight lean)

I was super excited to see the example using the CMU data set in that video, because that is one of the sources I was looking at! However, after checking out that video and the Blender pipeline, I am still not totally confident about my setup/process. I am super new to these concepts, so I apologize for not being a pro user at this point (but hopefully one day that will change!).

Here is an example of the raw data. I have both in-place and root-motion data; for these questions I’ll start with just the in-place data. This is a “carrying a heavy object” type of walk cycle.

From there, I went down the kinematic chains and selected “Assign and Connect”, then moved and reshaped the markers to get to this point:

Now when I run the animation with the ragdoll simulation, I can see the spine (I think?) working with the ragdoll, but I don’t see much change in the hip jitter.

I do think the feet are looking better though, in terms of contact on the ground:

Overall, I think I am partially there, but I would be curious about tips/tricks/recommendations from your team/community to nail this mocap correction. Here is the Maya scene I am working on, if that is helpful. The goal at the end of this is to have a super clean piece of mocap data that I can then use in my projects (the animation is recorded at 120 fps).

https://drive.google.com/drive/folders/1fzqIZfe75_EJsoWKNzrVDvY5CLU5qSwD?usp=sharing

Thanks so much! I will keep reading through the documentation and rewatching your videos!

Katie

Hello @katie, and welcome to the forums! :partying_face:

I can spot a few pieces of low-hanging fruit in your attempt so far, but the headline of my response would be that Ragdoll - alongside all pure-physics approaches - is no panacea when it comes to cleaning up motion capture.

But let’s talk about what it can do and then dive into the known limitations so far.

The first thing to address in your example is the proportions of your character. It’s important that every child has less volume than its parent. There are exceptions, but generally this gives you the most stable and realistic simulation, and mimics our own anatomy well. Most importantly, those clavicles could never sustain the weight of those arms. The rest is not bad, although I’d make the feet wider to get a more stable and flat contact with the ground. The hands and fingers are much, much too thin. So in addition to making children smaller, also try to maintain a 1-10x mass ratio between every Marker and its children. For example, if the hand weighs 1.0 kg, try to keep its children in the 0.1-0.5 kg range.
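
The rule of thumb above can be sketched as a quick sanity check. This is plain illustrative Python, not Ragdoll’s API; the masses and hierarchy are made-up example values.

```python
# Illustrative sketch of the guideline above: every child Marker should be
# lighter than its parent, but no lighter than ~1/10th of it.
# Hypothetical example values -- not Ragdoll API calls.

def check_mass_ratios(masses, hierarchy, max_ratio=10.0):
    """Return warnings for children heavier than their parent,
    or lighter than parent mass / max_ratio."""
    warnings = []
    for parent, children in hierarchy.items():
        for child in children:
            ratio = masses[parent] / masses[child]
            if ratio < 1.0:
                warnings.append("%s is heavier than its parent %s" % (child, parent))
            elif ratio > max_ratio:
                warnings.append("%s is lighter than 1/%g of %s" % (child, max_ratio, parent))
    return warnings

masses = {"hand": 1.0, "thumb": 0.3, "finger": 0.05}
hierarchy = {"hand": ["thumb", "finger"]}
print(check_mass_ratios(masses, hierarchy))
# The 0.05 kg finger violates the 1-10x rule against the 1.0 kg hand
```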

As your file is an .mb, which I’m unable to open on our end for security reasons, I’ve used another CMU clip instead. Here’s the starting point.

As we can see, it’s able to match the pose, but not retain balance. To retain balance, we have a few options. One is to make parts of your character Animated, such as in your example. However, this means those parts will be unaffected by the simulation and will retain any defects, such as jitter.

To affect jitter, we must let the entire character remain Simulated and instead achieve balance via, for example, a Pin Constraint.

Parenting the Pin Constraint underneath the control we’ve assigned it to carries the animation into the simulation, similar to the Animated behaviour, but still allows some deviation based on the Pin Constraint parameters.

We’re not there yet though, because the Pin Constraint is clearly pulling the character in every direction. So what if we remove any pull in the Y-axis, letting him fall to the ground whilst still following the walking path?

Better. Now what if we try the opposite: having it follow only Y and not XZ?
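
The two variants above boil down to masking axes of a worldspace spring pull. Here’s a minimal sketch of that idea in plain Python; the function name, stiffness value and positions are illustrative, not Ragdoll’s actual attributes.

```python
# Sketch of a per-axis pin: a spring pull toward the mocap target,
# with a 0/1 mask choosing which axes actually follow.
# Illustrative only -- not how Ragdoll exposes this internally.

def pin_force(current, target, stiffness, mask):
    return tuple(stiffness * (t - c) * m
                 for c, t, m in zip(current, target, mask))

hip = (0.0, 0.75, 0.0)   # current simulated hip position
mocap = (0.5, 1.0, 2.0)  # where the mocap wants the hip to be

# Follow XZ only: walks along the path, but is free to drop in Y
print(pin_force(hip, mocap, stiffness=10.0, mask=(1, 0, 1)))  # (5.0, 0.0, 20.0)

# Follow Y only: held up, forward motion must come from the mocap itself
print(pin_force(hip, mocap, stiffness=10.0, mask=(0, 1, 0)))  # (0.0, 2.5, 0.0)
```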

Now any forward movement comes purely from the motion capture itself, which is very natural. But it’s still not there; it’s still being pulled by a worldspace force, like someone reaching into the scene and grabbing the hip.

So what if we attempt to follow - not a worldspace position - but a worldspace rotation only?

Now the character remains upright, because the hip is staying upright. But the feet are still getting stuck in the ground, either because the mocap is too close to it, or because our physics shapes are too big. We could attempt to rid ourselves of this by removing friction altogether.

But that just ends up looking slippery, which is not what we want. So what if we augment the mocap with an animation layer, lifting the leg slightly to let the simulation walk as intended?
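
The animation-layer trick is just a per-frame additive offset on top of the untouched mocap curve. A toy sketch, with made-up frame values:

```python
# Toy sketch of an additive animation layer: the raw mocap curve stays intact,
# and a layer of offsets lifts the foot during the swing phase.
# Frame values are invented for illustration.

def layered(base_curve, additive_curve):
    return [b + a for b, a in zip(base_curve, additive_curve)]

foot_y = [0.0, 0.0, 0.25, 0.5, 0.25, 0.0]   # raw mocap: toe scrapes the floor
lift   = [0.0, 0.0, 0.25, 0.25, 0.25, 0.0]  # additive layer during the swing

print(layered(foot_y, lift))  # [0.0, 0.0, 0.5, 0.75, 0.5, 0.0]
```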

Ok, I could do a better job animating, but hopefully you get the gist. Counter-animating may seem counter-intuitive if your goal is to fix the simulation to better match the mocap, but remember that the mocap is flawed and the simulation is approximate. Expect to edit both to find a common middle ground.

Next, things are a little wobbly, so I’ll crank up the stiffness to stick more closely to the mocap orientation, and the damping to make it less springy.
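
Stiffness and damping behave like a spring-damper: more damping means less overshoot past the target. A rough 1D sketch with arbitrary values (not Ragdoll’s actual units):

```python
# 1D spring-damper stepping toward a fixed target with semi-implicit Euler.
# Shows why raising damping makes the result less springy.
# Arbitrary illustrative values, not Ragdoll's actual units.

def settle(stiffness, damping, steps=2000, dt=0.01):
    x, v, target = 0.0, 0.0, 1.0
    peak = 0.0
    for _ in range(steps):
        a = stiffness * (target - x) - damping * v
        v += a * dt
        x += v * dt
        peak = max(peak, x)
    return peak  # furthest point reached; > 1.0 means overshoot

print(settle(stiffness=50.0, damping=1.0))   # springy: overshoots well past 1.0
print(settle(stiffness=50.0, damping=20.0))  # damped: settles without overshoot
```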

Ok, decent. In this case, we’ve got an invisible environment, so let’s add in the critical elements.

And presto, our result.

There is still much to improve. Some of which can be achieved by tuning the input mocap and simulation parameters, but it is a hard challenge so do not underestimate it.

As a bonus, if you find yourself tuning something towards the middle or end of your simulation and don’t want to continuously replay, try the Ragdoll → Edit → Cache menu item to refresh the current frame whilst you edit.

Limitations

Our result does not perfectly align with the worldspace position of the input mocap. And it could not, not without sacrificing realism. How close it gets depends on, amongst other things:

  1. The size of the feet; longer feet would get further
  2. The geometry of the feet; whether they are square or round affects where contact happens
  3. Friction between foot and ground; less friction would allow for more slip, thus reducing the overall distance
  4. Gravity and other physics parameters; we’re relying on defaults here, but if we take a look at the scale Ragdoll thinks your character is at, we can see he’s actually a giant
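
To see why scale matters (point 4): gravity is a fixed real-world constant, so an oversized character falls what looks like slow motion relative to its own body. A back-of-the-envelope check, assuming a hypothetical unit mix-up of centimetres read as metres:

```python
import math

# Free-fall time over one body height: t = sqrt(2h / g).
# If 180 cm of mocap data is read as 180 m, the "giant" takes 10x longer
# to fall its own height -- which reads as slow motion on screen.

def fall_time(height_m, g=9.81):
    return math.sqrt(2.0 * height_m / g)

print(round(fall_time(1.8), 2))    # ~0.61 s at human scale
print(round(fall_time(180.0), 2))  # ~6.06 s at 100x scale
```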

Pure-physics approaches also do not understand balance. What we’re doing here is one big cheat, using worldspace forces that don’t exist in the real world. The closest real-world equivalent would be being strapped into a harness, like stunt actors are. It adds to the feeling of being fake, and must be hidden and tempered to blend in.

That’s why things like foot contacts are such a difficult challenge to solve with pure physics. The true solution is one that does understand balance, and hopefully we can roll out our take on this real soon.

Hope it helps, and let me know if you have any further questions.

PS: Here’s the scene file, made with Ragdoll 2024.05.07
mocap_v003.zip (1.8 MB)
