Live Feedback/Telemetry

Hi Marcus,

First of all, thank you for creating this incredible software. I hope it becomes an industry standard for a multitude of use cases in the near future!

To help that effort, I made an ambitious decision a few months ago to design a universal vehicle rig that uses Ragdoll to simulate physics. I’m happy to say that, after some initial tests followed by a working prototype, Ragdoll is fully capable of producing realistic simulations. I will post some results on the forum later on.

There are some caveats though, and I’m hoping you could implement a method that allows processing live data during playback in a future release, if there’s no existing solution for this already. (I did my best to dig for one.)

What I’m hoping for is a variety of output attributes on both markers and constraints that expose their current state, so it can be processed further at the node-network level and then used to update input parameters for the next frame. This would open doors for a ‘semi-automated’ workflow that includes the node network in the simulation loop, ultimately influencing the simulation with regular Maya nodes.

After testing the ‘rMarker.outputMatrix’ attribute, I’ve found that it has a strange delay and a tendency not to update at all during playback, so at the moment it’s not too reliable.
Having access to already-calculated data such as forces, velocity, world positions, angles, etc. would be really awesome. What I’m imagining as a solution is either having them fed into output attributes, or perhaps extracting them through a pickMatrix kind of node connected to .currentState or such.
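To make that extraction idea concrete, here’s a minimal pure-Python sketch (illustrative only; this is not an existing Ragdoll or Maya API, and the function name is made up) of pulling a world position out of a flat, row-major 4x4 matrix like the one outputMatrix holds:

```python
def world_position(matrix16):
    """Extract world-space translation from a flat, row-major
    4x4 transform matrix (translation lives in the last row)."""
    return (matrix16[12], matrix16[13], matrix16[14])

# A transform translated to (5, 2, -3):
m = [1, 0, 0, 0,
     0, 1, 0, 0,
     0, 0, 1, 0,
     5, 2, -3, 1]
print(world_position(m))  # (5, 2, -3)
```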

What do you think, would it be possible to implement this?


Welcome to the forums @mate :partying_face:

Yes, this is definitely doable; there is tons of data chugging along in the background that would make sense to expose as Maya attributes. If you have some idea of the data you are looking for, or the final results you aim for, that would help us figure out what to expose.

Yes, this attribute can only give you the previously computed results, as the current computed results depend on the current frame. Only once Maya has provided the current frame can we compute the subsequent simulated position.

It should definitely update during playback though, so that doesn’t sound right. Could this be a DG versus Parallel evaluation issue? Ragdoll works best in Parallel. It should consistently output exactly the last frame of simulation.

Aha, here we go. Some of these are a challenge, because Maya does not like attributes being created or removed during playback, which rules out dynamic array attributes. Forces are a good example: their number naturally changes over time, so any attribute outputting an array of vectors representing forces would need to change in count over time.

What could work, however, is you specifying a maxCount for forces, like 10, and then reading from those constant 10 attributes, most of which would be 0 whenever there are fewer than 10 forces.
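As a rough pure-Python sketch of that fixed-count idea (names hypothetical, not Ragdoll code):

```python
def pack_forces(forces, max_count=10):
    """Pad (or truncate) a varying list of 3D force vectors into a
    fixed-size list, so the attribute count never changes during
    playback; unused slots stay at zero."""
    zero = (0.0, 0.0, 0.0)
    packed = list(forces[:max_count])
    packed += [zero] * (max_count - len(packed))
    return packed

# Two active forces this frame, eight zero-filled slots:
print(pack_forces([(1.0, 0.0, 0.0), (0.0, 2.0, 0.0)]))
```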

World position and angles are already output; that’s the .outputMatrix. Velocity is possible in the same way, so long as you’re happy with Maya always being 1 frame ahead of the simulation.
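For illustration, a hedged pure-Python sketch of deriving that velocity by finite differences between two consecutive frame samples of a world matrix (function names and the default fps are assumptions, not a Ragdoll API); by construction it is one frame behind, as described above:

```python
def world_position(matrix16):
    """Translation row of a flat, row-major 4x4 transform."""
    return (matrix16[12], matrix16[13], matrix16[14])

def velocity_from_matrices(prev_m, curr_m, fps=24.0):
    """Finite-difference linear velocity between two frame samples."""
    (px, py, pz) = world_position(prev_m)
    (cx, cy, cz) = world_position(curr_m)
    dt = 1.0 / fps
    return ((cx - px) / dt, (cy - py) / dt, (cz - pz) / dt)
```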

Woah, thanks for the rapid response! Apologies for the delay on my end, but I needed to collect my thoughts on this. Glad to hear you’re open to these additions! Below are some ideas from a mostly simulation-oriented perspective, rather than animation.

This explains the “weird” in its behavior. I only recently discovered that scrubbing the timeline triggers Maya to refresh on a mouse drag event basis rather than frame change. (A locator connected to the outputMatrix of an rMarker catches up the second time that frame is refreshed.)
I need to refresh my memory on the evaluation graph to understand why the outputMatrix delay occurs. Is this a fundamental limitation that cannot be resolved or a design choice to improve performance? I’m wondering if calculating mid-frames could help here. It would improve simulation accuracy without having to change scene frame rate too, which would also be welcome in certain situations.

I certainly have some ideas here:

  • Forces: I was thinking a sum of all forces acting on the marker, channeled into a single matrix attribute, could be pretty useful in general, but since things like gravity, air/drag, restitution, etc. may be part of that, I can see that value being inaccurate if you only want to know impact forces.
    The attribute itself would be useful if you wanted to calculate the force needed to stop the object when it’s in motion by reversing that force, or, accounting for its weight, to calculate the force needed to accelerate it to a certain speed.

  • Velocity: This would help a lot where you want to calculate/adjust constraint parameters actively, such as a suspension with active damping. BeamNG v0.27 introduced a long travel suspension design that alters damping based on position and velocity.

  • Constraint attributes: If available, it would be lovely to utilize values such as a “current distance” on a distanceConstraint or a “current offset” on a pinConstraint to actively adjust their input parameters for the next frame, but since these are easy enough to calculate from the outputMatrix already, they may not be worth the effort.

I think these are the only fundamentals; what I’m looking for can be achieved with these, and I can’t really think of other scenarios where these values, together with the outputMatrix, wouldn’t be enough to calculate things, but I may be wrong.
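To illustrate the kind of node-network math I have in mind for the forces and velocity bullets above, here’s a hypothetical pure-Python sketch (made-up names, not a Ragdoll API): the braking force is just the reversed momentum change over a frame, and the active suspension reduces to a spring-damper whose gains you retune each frame:

```python
def stopping_force(mass, velocity, dt):
    """Force needed to bring a moving body to rest over dt seconds
    (reverse the momentum change over the interval)."""
    return tuple(-mass * v / dt for v in velocity)

def suspension_force(offset, velocity, stiffness, damping):
    """Spring-damper force along one axis; 'active damping' would
    adjust the stiffness/damping inputs per frame from live data."""
    return -stiffness * offset - damping * velocity

# Stop a 2 kg body moving at 4 units/s within half a second:
print(stopping_force(2.0, (4.0, 0.0, 0.0), 0.5))  # (-16.0, -0.0, -0.0)
```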

  • Another slightly related idea came up though: Input Forces!
    Bullet Physics had vector attributes called Impulse/Torque Impulse that I found very useful for simulation purposes. You would use them when you want an object to move or rotate persistently on its own.
    There are several use cases for this, one of which would be a rotating car wheel. Unfortunately, using rotateStiffness to spin the wheel controllers is not ideal, as releasing the stiffness to allow free roll puts them out of sync, and weird things happen if you want to reapply stiffness.
    Another idea is a helicopter type of thing that can levitate using these attributes.

  • One more thing: What are the chances of soft-body simulation coming to Ragdoll in the near future? Obviously it would be tremendously useful for so many things, but my wacky hack to simulate tires leaves a bit of an aftertaste and could use an upgrade :)

Thanks so much for considering these again.


It’s a limitation of our design. We want simulation to both affect and be affected by an object in Maya; that’s cyclic behaviour. To break that cycle, we output the previous frame. That output is not intended to be read by the user; we use it only during the recording process, by which point the simulation is already cached. You can manually set the solver to be cached, at which point the output matrices will no longer cause a cycle and will be up to date, at the expense of an interactive simulation.

Mid-frames (a.k.a. substeps) are already there.

This one is possible, it’s something I’d also like to have internally and in the UI, to visualise what is happening to the user.

This one is rather simple to compute by hand; but having it directly output from the system does make sense and is possible.

True, this makes sense.

See Fields

Eventually, yes. I’d like it for real-time muscle, cloth and hair. Near future, unlikely, there is too much to squeeze out of rigid body simulation before we begin to open that pandora’s box. :blush:

Let me have a think about the suggested attributes; they fall somewhat out of scope for what Ragdoll is intended for (i.e. ragdolls), but I can see the value in such simple things, and the cost on our side (maintenance-wise) is small. These are attributes that already exist, and I expect always will, so outputting them should come at zero cost.

I know that I’m venturing away from the intended scope of Ragdoll, so I really appreciate the consideration of anything that I brought up here. I do believe there’s great potential for Ragdoll beyond animation, due to its strong foundations, and I think not much is needed for it to make a leap into surprising directions.

Fields vs Input Forces (Impulse/Torque Impulse):
Fields are fantastic, but input forces would open the door to a different kind of manipulation. They are versatile and exclusive by nature, as opposed to fields, which affect every object unless each one is set to ‘ignore all fields’ except the object needing the manipulation. Field groups (similar to collision groups) would help in certain situations, but I believe there’s currently no way to make an object rotate around its centre without animation-based input, which has the issues I highlighted earlier (think of a car wheel that can also roll free). There would be other benefits too, such as automation that can be scripted.

Realtime Feedback:

Well, it makes sense now; this isn’t actually a “limitation”, it’s just the way sampling works, isn’t it?
What I believe would solve this problem entirely is if the state of the sub-steps could be piped into the outputMatrix instead of the state of the full frames, so that on a full frame the latest sub-step value is measured rather than the previous full frame. The higher the sub-step rate, the closer the outputMatrix gets to the true current value; with sub-steps set to 16, the difference/delay would be negligible.
Since Maya nodes are only evaluated on full frames, I risk saying that only the latest sub-step value would appear in the output anyway, so it wouldn’t cost performance either, but I’m not 100% sure about this one.
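To quantify the proposal, a tiny pure-Python sketch (purely hypothetical, illustrating the idea rather than Ragdoll’s actual design) of how far the readable output would trail the true state under each scheme:

```python
def output_lag_seconds(fps, substeps, pipe_latest_substep):
    """Lag between the true simulation state and what the output
    attribute reports: one whole frame under the current design,
    or one substep if the latest substep were piped into the
    outputMatrix instead."""
    frame_dt = 1.0 / fps
    return frame_dt / substeps if pipe_latest_substep else frame_dt

# At 24 fps with 16 substeps, the lag would shrink 16x:
# from roughly 41.7 ms down to roughly 2.6 ms.
print(output_lag_seconds(24.0, 16, False))
print(output_lag_seconds(24.0, 16, True))
```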

Anyway, I hope I sparked some thoughts on these matters. Again, any of these being taken into consideration is greatly appreciated, and I believe they would benefit Ragdoll in the long run.



Not quite, it’s a problem of evaluation. Ragdoll (and Maya in general) evaluates an attribute as you ask for it. So when you ask for outputMatrix, Ragdoll will go ahead and compute it for you. But outputMatrix depends on inputMatrix, and the inputMatrix is your wheel. So now the rotation of your wheel depends on the rotation of your wheel, and bam! A cycle is born. In your specific case, since you don’t want the inputMatrix to affect your wheel - you just want a static impulse - this could have worked. But this is one of those situations where your usecase and the usecase of Ragdoll differ; Ragdoll assumes you want to follow the original animation.

It’s a little late in the day to fully think this through, but I think not. Ragdoll does not sample Maya e.g. 16 times, but rather once, and linearly interpolates 16 times. That interpolation happens between the previous and current frames, and with the above in mind, there is no way to both read and write to the current frame at the same time. So you are already getting the latest possible sample.
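A minimal pure-Python sketch of that sampling scheme (illustrative names, assuming simple scalar inputs): Maya is read once per frame, and each substep input is a linear interpolation between the previous and current frame values.

```python
def lerp(a, b, t):
    """Linear interpolation between a and b."""
    return a + (b - a) * t

def substep_inputs(prev_value, curr_value, substeps):
    """Interpolate a once-per-frame input linearly across the
    substeps, from just past the previous frame to the current one."""
    return [lerp(prev_value, curr_value, (i + 1) / substeps)
            for i in range(substeps)]

print(substep_inputs(0.0, 4.0, 4))  # [1.0, 2.0, 3.0, 4.0]
```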

Overall, the outputMatrix is not meant to be read or connected; it’s an internal attribute we use during the recording process, which assumes the simulation has already been cached. I.e. the inputMatrix has already been fully read.

Ok, I see what you mean. I can see the use for impulse forces to design e.g. rockets. But for wheels, I’d need some convincing. :slight_smile: All cars we’ve ever done with Ragdoll so far have used the vanilla Rotate Stiffness/Damping you’ve already got, with limits preventing rotation in all but 1 axis. Speed is controlled by the animation on e.g. Rotate X. For continuous rotation, you’d set two keyframes with linear interpolation, and cycle them with offset. That way, the wheels will spin at a constant speed indefinitely.
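The cycled-keyframe trick reduces to a constant slope; a small pure-Python sketch (hypothetical names, just the math) of the resulting angle target over time:

```python
def wheel_angle(frame, frames_per_cycle, degrees_per_cycle):
    """Angle target produced by two linear keyframes cycled with
    offset: a constant slope that keeps accumulating forever."""
    return frame / frames_per_cycle * degrees_per_cycle

# One full revolution every 24 frames:
print(wheel_angle(48, 24, 360.0))  # 720.0
```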

The benefit of this approach, as opposed to the impulse-force method, is that you gain control over exactly how many rotations each wheel takes, which means tuning is much more robust and your vehicle is much more likely to end up at its final destination.

If you needed to automate speed, e.g. hook it up to an XBox controller, then you’d control the Y-value of the last keyframe. A steeper slope means higher speed; no slope means zero velocity.

The one limitation is that it becomes challenging to achieve very high velocities; if you set a target past 180 degrees, the wheel would try to spin backwards (shortest distance). But, I have yet to find a usecase that runs into this limitation.
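A small pure-Python sketch (illustrative, not Ragdoll code) of the shortest-distance behaviour behind that limitation, showing how a target past 180 degrees flips the sign of the correction:

```python
def shortest_delta(current_deg, target_deg):
    """Signed shortest angular difference, wrapped into (-180, 180]."""
    d = (target_deg - current_deg) % 360.0
    return d - 360.0 if d > 180.0 else d

print(shortest_delta(0.0, 90.0))   # 90.0
print(shortest_delta(0.0, 190.0))  # -170.0  (spins backwards!)
```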

So, very doable, but I’m cautious to implement features that do not serve a clear and unique usecase. So, help me help you!

Hi Marcus, sorry for the delayed answer, I wanted to experiment further before getting back.

Oh yeah, that’s true… I overlooked that, sorry. What you’re saying makes sense. I think this is as far as I can get without knowing the inner workings of Ragdoll, which is fine. If you ever do find a way to output live data, I’d be happy to test/experiment with it, even if it requires some caution to use. Perhaps a dedicated attribute? Ignore the wheel example here, but I genuinely believe a live outputMatrix could be put to use in several ways, such as camera tracking or updating the rig/geo in real time, etc.

Did some tests and made some observations that may or may not help looking into this further:

  • A marker’s outputMatrix is evaluated before the inputMatrix. (probably the design choice you mentioned)
  • Caching doesn’t seem to “un-delay” the outputMatrix value. A connected locator is still lagging behind.
  • If you run playback with a script that sets the same frame twice using currentTime($frame);, you can actually force the outputMatrix to be up to date on the desired frame. (Here’s a script to test this, if interested.)

I’m sorry if I mixed these two things together here; the live feedback and the impulse forces are two separate desires. The driven wheel is a challenge I’m dealing with at the moment, and impulse forces would be the optimal solution.

I understand your point, but instead of a predetermined approach, impulse forces would allow per-frame interactions in a very simple fashion while letting the simulation “take the wheel” (ba-dum tss). Animating CG vehicles is a different beast; they tend to behave badly on screen because they are very hard to master. If you want to animate one, you have to simulate it and let physics drive it completely, manipulating it only via forces (steering, gas/brake, etc.); otherwise it’s an uphill battle that most people lose.

Impulse forces would open the door to a certain level of automation that can help keep things under control while letting physics do its thing. They aren’t always the answer, but sometimes they can make a massive difference in efficiency, well beyond vehicle physics.

Imagine shooting at a character where you want the shoulder to be hit. Instead of trying to find the best approach to achieve this effect, you can directly add an impulse for a single frame that affects only the shoulder, and get it done in a minute without overthinking it. When you get a note that the character needs a headshot instead, you simply cut/paste the curves over. (In production: quick & dirty > smart, clean, nice or creative.)
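The physics behind such a one-frame impulse is just an instantaneous velocity change of J/m; a hedged pure-Python sketch (illustrative names, not an existing Ragdoll attribute):

```python
def apply_impulse(velocity, impulse, mass):
    """Velocity change from a single-frame impulse: v' = v + J/m."""
    return tuple(v + j / mass for v, j in zip(velocity, impulse))

# A 10 N*s impulse on a 2 kg shoulder adds 5 m/s along X:
print(apply_impulse((0.0, 0.0, 0.0), (10.0, 0.0, 0.0), 2.0))  # (5.0, 0.0, 0.0)
```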

These are my main points to justify this feature. I could think of other situations if that helps but wanted to keep this comment “short” lol

Thanks again!

Technically this is not a problem, it’s a matter of choosing what the input should be. At the moment, the input and output lead to the same transform. If you want live output, you’ll need to separate them; i.e. those rendered Markers would need to be an actual duplicate of the transform. More work, but definitely doable. We used to do this in the past, before Markers.

If you can demonstrate, even with this 1-frame offset, at least one usecase, that would help justify incorporating such a feature for Ragdoll.

It sounds like we’re talking about e.g. gamepad input to Maya, as opposed to keyframed input? Typically, animators working in Maya pre-define all inputs via keyframes, so this is a little unusual. Can you confirm?

Yes, true. This is a good usecase. For such a force, we’ve got it on our list to make a new node type similar to the Pin Constraint. You’d (1) select a control and (2) add a force. You would end up with an arrow in the viewport whose direction and magnitude you could control. For angular forces, I imagine we’d have a speedometer graphic of sorts to visualise the angles per second.

Apologies for the long silence, I had to put the project aside but I managed to pull a little demo together for you to see the source of my desires.

In the demo below you see a locator moving on a path. The car isn’t constrained to that locator in any way, it is just following it. It’s based on some simple math that takes distance and angle into account to control the car, a basic autopilot if you will. There’s no controller input, that isn’t the goal. It’s a simulation driven system that can be influenced with forces rather than animated.

The live data output and the ability to apply direct impulses to the markers would open doors for me in multiple scenarios, yielding better results at higher accuracy with far less bloat.
One example case is active suspension with real-time spring/damping adjustments based on its load (to mimic something similar to what’s explained in this video). The other is a way to control the vehicles with external forces to make them drift, lean or do any random thing we might need them to do, depending on the action.

Please note: this project is not about this specific car; it’s a universal system, and this model is currently being used to test its capabilities. What you’re seeing is heavily work-in-progress and is only here to support my case.

Happy Holidays,


Oh wow, that looks great. Would it be possible to see it with just the Markers, without any geometry? The way you’ve managed to get suspension without translation limits is impressive. Now I’m tempted to expose a directOutputMatrix just to see what you can come up with. :slight_smile:

I’m very interested to see how that works, if you’re OK with sharing! Looks awesome!


Just a minor note on this; the Pin Constraint does that too. It considers the difference between where an object is, and where you want it to be, along with mass and current velocity, to figure out a force to then apply to it. It sounds like that’s what you have done here as well; have you tried attaching a Pin Constraint to that path, and have the car controlled by it? I expect you can get similar if not identical results that way.
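A minimal pure-Python sketch of that idea, a PD-style controller as described (made-up names and gains; this is not the actual Pin Constraint implementation): stiffness acts on the positional error, damping on the current velocity, scaled by mass so heavier bodies receive proportionally more force.

```python
def pin_force(pos, target, vel, mass, stiffness, damping):
    """PD-style force: error times stiffness, minus damping on the
    current velocity, scaled by mass."""
    return tuple(mass * (stiffness * (t - p) - damping * v)
                 for p, t, v in zip(pos, target, vel))

# A unit error along X with zero velocity on a 2 kg body:
print(pin_force((0.0, 0.0, 0.0), (1.0, 0.0, 0.0),
                (0.0, 0.0, 0.0), 2.0, 10.0, 1.0))  # (20.0, 0.0, 0.0)
```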

Thank you, there’s been a lot of trial and error behind these results, even before bringing Ragdoll into the picture, and there are still a number of milestones to hit.
Truth is, there’s a joint-based system designed to mimic the characteristics of the suspension accurately, which is already fully animatable, and even though Ragdoll is layered on top of it, it isn’t driving things 1-to-1, despite the same limitations being applied to the markers. There’s a sort of bridging involved which offers some benefits, while also relieving pressure on the accuracy of the baking process.
So long story short, what you’re seeing here is the best representation of the simulation. For now I’d like to keep things covered at least until the system is production ready. I will eventually reveal the full capabilities of this design and the inner workings of it, I’ll make sure to let you know.

That’s exciting, I’d be very interested to test that out! I’m sure I’ll run into some walls (…cycles…) but hopefully I can come up with some ways to utilize it regardless, I’ll post the results! I’ve lost hope during this project way too many times to be afraid of it at this point…

The Pin Constraint is a good thing and I’m using it for other purposes; I can’t use it here, though. I need the wheels to actually rotate by force. Pulling with a constraint will feel like something’s pulling the car, and it introduces a few other unwanted side effects as well. This is the main reason I’d welcome impulse forces very much. It’s a little clunky to achieve drive at the moment, but I’ll keep trying to fine-tune it.