More sophisticated integration algorithms #6610
Replies: 3 comments 3 replies
-
Thanks for taking the time to write a summary of our discussion. Continuous input is an interesting idea, but I am not sure how we should implement it from a UX point of view. We would need a way for users to tell the physics server that something has changed and that the current step needs to be interrupted. Ideally this would happen automatically, but I have no clue how to implement that. I suppose we could create a list of modified parameters and then iterate over it and update all affected objects at the end of an input tick?
PS: Did you seriously take the time to write LaTeX equations for this post? Hats off, this is one of the most detailed proposals I have seen in a while.
-
Yeah, there definitely are issues to consider! I would probably provide a thread-safe function to ask the physics server, "Please respond to the latest changes at your earliest convenience." It could be called at any time from any thread. That would at least keep all the complicated threading issues internal to the physics engine. And it would just be a suggestion, not guaranteed to have any particular effect, so alternate physics engines would be free to ignore it. For an initial implementation, we could probably rely on users to call it directly when they thought it was appropriate. Later on if we felt ambitious, we could try to make it get called automatically.
Thanks! LaTeX isn't bad once you get used to it.
-
I'm not as experienced in calculus as you are, but as far as I know, the constant step is used for deterministic physics. This is important for multiplayer, recorded replays, and anywhere perfect physics is required (for example, bullet hells and hardcore platformers like IWBTG). As far as I know, a variable step cannot guarantee determinism in principle.
-
This is a followup to a discussion on the physics forum. There are lots of ways in which the numerical integration code in Godot could be improved. This is going to be a bit of a treatise, involving several distinct but connected features. I'll try to take it one step at a time and provide the necessary background. I'm not necessarily advocating for any particular one of these changes. They just seem worth considering.
Integrators
An "integrator" is an algorithm for advancing a simulation through time. At any moment, your system is described by a vector of coordinates $\mathbf{x}$ and its time derivative $\mathbf{v}=d\mathbf{x}/dt$. The elements of $\mathbf{x}$ can be anything that changes with time based on physics: positions, rotation angles, etc. Given $\mathbf{x}(t)$ and $\mathbf{v}(t)$, their values at some time $t$, the integrator's job is to compute their values a short interval $dt$ later, $\mathbf{x}(t+dt)$ and $\mathbf{v}(t+dt)$. This is called a "time step". The interval $dt$ is called the "step size". A simulation consists of doing this over and over, taking one step after another, to advance the simulation through time. The smaller $dt$ is, the more accurate the simulation will tend to be, but the more steps (and therefore CPU time) you'll need to simulate the same amount of time.
How Godot works
Here is my understanding of how Godot currently works based on looking at the code. Please correct me if any part of this is wrong.
It uses the "explicit Euler" integrator:

$$\mathbf{x}(t+dt) = \mathbf{x}(t) + \mathbf{v}(t)\,dt$$

$$\mathbf{v}(t+dt) = \mathbf{v}(t) + \mathbf{a}(t)\,dt$$

where $\mathbf{a}$ is the vector of accelerations due to forces acting on the coordinates.
This is a very simple integrator, and also an inefficient one. It assumes the coordinates move in a straight line at constant speed between $t$ and $t+dt$. Since objects frequently don't move at constant speed in a straight line, it's a very rough approximation. That requires you to take lots of tiny steps to get decent accuracy.
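To make this concrete, here is a minimal Python sketch of one explicit Euler step (illustrative names, not Godot's actual API):

```python
def euler_step(x, v, accel, t, dt):
    """One explicit Euler step for a single 1-D coordinate.
    `accel(x, v, t)` returns the acceleration due to the forces."""
    x_new = x + v * dt               # assumes constant velocity across the step
    v_new = v + accel(x, v, t) * dt  # assumes constant acceleration across the step
    return x_new, v_new
```

Simulating one second of free fall in 100 such steps gives $x \approx -4.85$ m instead of the exact $-4.9$ m; being first order, the error only shrinks linearly as you shrink $dt$.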
Every step has the same size, determined by the "physics ticks per second" project setting. The default value is 60 steps per second. Games with complex physics or fast action often increase it.
After each step is complete, it checks whether any user input occurred since the last step and handles it. That means $dt$ affects input latency. If you run physics at 60 Hz, it's possible for up to 1/60 second to pass between when the user presses a button and when the physics engine responds. Some games increase the physics tick rate just to improve the input latency, even though a lower rate would be good enough for accurate simulation.
Higher order integrators
You can write the "true" coordinates (what you would get with infinitely small steps) as a power series in $dt$:

$$\mathbf{x}(t+dt) = \mathbf{x}(t) + \mathbf{v}(t)\,dt + \frac{1}{2}\,\mathbf{a}(t)\,dt^2 + \frac{1}{6}\,\frac{d\mathbf{a}}{dt}\,dt^3 + \cdots$$
Integrators are designed to exactly reproduce the first few terms of this series. The highest term an integrator reproduces exactly is called the "order" of the integrator. The first term it does not reproduce exactly determines the error in the result. For example, the explicit Euler integrator above exactly reproduces the $dt$ term but not the $dt^2$ term. That means it is a first order integrator, and its error scales as $dt^2$.
Higher order integrators have smaller errors, which means you can take larger steps. A very popular method is the Verlet integrator. It can be formulated in a few ways that are exactly equivalent to each other. Usually people use either the velocity Verlet formulation

$$\mathbf{x}(t+dt) = \mathbf{x}(t) + \mathbf{v}(t)\,dt + \tfrac{1}{2}\,\mathbf{a}(t)\,dt^2$$

$$\mathbf{v}(t+dt) = \mathbf{v}(t) + \tfrac{1}{2}\left[\mathbf{a}(t) + \mathbf{a}(t+dt)\right]dt$$

or the Leapfrog formulation

$$\mathbf{v}(t+\tfrac{dt}{2}) = \mathbf{v}(t-\tfrac{dt}{2}) + \mathbf{a}(t)\,dt$$

$$\mathbf{x}(t+dt) = \mathbf{x}(t) + \mathbf{v}(t+\tfrac{dt}{2})\,dt$$
The Verlet integrator still requires only one force evaluation and one position update per step, so the cost of calculating a step is usually no higher than for explicit Euler. But it has much lower error, which means you can take much larger steps.
Verlet has some subtle and surprising properties. For example, the highest term that appears in the formulas is proportional to $dt^2$, so you would assume it is a second order integrator. But due to a cancellation of terms, it actually is a third order integrator, two higher than explicit Euler. The error scales as $dt^4$. It also is a "symplectic" integrator. That's a harder property to explain, but it means Verlet is really excellent at conserving energy, much better than you would expect based on the size of its errors. There are good reasons Verlet is such a popular integrator.
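As a sketch (again with illustrative names, and assuming the acceleration depends only on position and time, which is what lets the force from the end of one step be reused at the start of the next), a velocity Verlet step can be written as:

```python
def verlet_step(x, v, accel, t, dt):
    """One velocity Verlet step for a single 1-D coordinate.
    `accel(x, t)` returns the acceleration; a real engine would cache the
    value from the end of the previous step instead of recomputing it."""
    a0 = accel(x, t)
    x_new = x + v * dt + 0.5 * a0 * dt * dt   # position update uses start-of-step force
    a1 = accel(x_new, t + dt)                  # the only new force evaluation
    v_new = v + 0.5 * (a0 + a1) * dt           # velocity uses the average of both forces
    return x_new, v_new
```

Note the single new force evaluation per step, as described above: the cost per step matches explicit Euler.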
Another popular class of integrators are the Runge-Kutta (RK) integrators. They use multiple force evaluations and multiple position updates for each time step. That sounds like it would make them expensive, but in practice they make up for it by being able to take larger steps. Writing the whole state as a single vector $\mathbf{y} = (\mathbf{x}, \mathbf{v})$ with $d\mathbf{y}/dt = \mathbf{f}(\mathbf{y}, t)$, the second order RK (midpoint) integrator is

$$\mathbf{k}_1 = \mathbf{f}(\mathbf{y}(t),\, t)$$

$$\mathbf{k}_2 = \mathbf{f}\!\left(\mathbf{y}(t) + \tfrac{dt}{2}\,\mathbf{k}_1,\; t + \tfrac{dt}{2}\right)$$

$$\mathbf{y}(t+dt) = \mathbf{y}(t) + \mathbf{k}_2\,dt$$
The 2nd, 3rd, and 4th order RK integrators are all widely used.
One case is particularly worth commenting on. If an object moves under constant acceleration (for example, gravity), any integrator that's 2nd order or higher can exactly compute its motion with arbitrarily large time steps. Given how common gravity is in games, there's a large benefit to using at least a 2nd order integrator.
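To illustrate the constant-acceleration case, here is a hedged Python sketch of the midpoint (second order RK) step applied to the state pair $(x, v)$; the names are illustrative, not any engine's API:

```python
def rk2_step(x, v, accel, t, dt):
    """One midpoint (second-order Runge-Kutta) step for a 1-D coordinate,
    treating the state as the pair (x, v)."""
    # Stage 1: derivative of the state at the start of the step.
    k1_x, k1_v = v, accel(x, v, t)
    # Stage 2: derivative at the midpoint, reached using stage 1.
    xm = x + 0.5 * dt * k1_x
    vm = v + 0.5 * dt * k1_v
    k2_x, k2_v = vm, accel(xm, vm, t + 0.5 * dt)
    # Full update using only the midpoint derivative.
    return x + dt * k2_x, v + dt * k2_v
```

Because the midpoint velocity already contains the exact $\tfrac{1}{2}\,a\,dt^2$ contribution, a single step of any size reproduces projectile motion under constant gravity exactly.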
Continuous event handling
There are other reasons beyond accuracy that can limit the size of the time steps. You want animation to look smooth. If the frame rate were limited to the physics tick rate, that could lead to jerky animation. To avoid this, you generally want to decouple physics from rendering. The physics engine can take large steps, and then interpolate between the endpoints to produce frames for rendering. Godot already has some support for that. When using higher order integrators, you interpolate along a curve rather than a straight line.
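The decoupling can be sketched as follows (an assumed design for illustration, not Godot's actual implementation): keep the two most recent physics states and blend between them based on how far the render clock has advanced into the current physics step.

```python
def interpolated_position(x_prev, x_curr, render_time, step_start, dt):
    """Linearly interpolate between the last two physics states for rendering.
    `render_time` is assumed to lie in [step_start, step_start + dt]."""
    alpha = (render_time - step_start) / dt   # fraction of the step elapsed
    return x_prev + alpha * (x_curr - x_prev)
```

With a higher order integrator, the linear blend would be replaced by the integrator's own interpolating curve, for example a Hermite polynomial built from the endpoint positions and velocities.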
Input latency is another issue. Suppose you use a high order integrator that lets you take 100 ms time steps. If inputs only get handled between steps, that would lead to very high input latency. Each button press might need to wait up to 100 ms to get processed.
But you don't need to wait! Especially with a method like Verlet that only depends on the forces at the start of the step, you're free to change your mind about the step size at any time. You take a step, which you intend to span 100 ms. 25 ms later the user presses a button. So we change our mind about the step size: it's only going to be a 25 ms step after all. You can immediately process the input and begin a new step. The step size no longer introduces any input latency, which means you never need to increase the tick rate just to reduce latency.
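Here is a hypothetical sketch of that idea (not existing Godot behavior): a velocity Verlet step that was planned to span `dt_planned` is simply ended early when an input event arrives, which is safe because the step depends only on the state and forces at its start.

```python
def step_until(x, v, accel, t, dt_planned, input_time=None):
    """Take one velocity Verlet step intended to span `dt_planned` seconds.
    If an input event arrives `input_time` seconds into the step, end the
    step there instead so the input can be processed immediately.
    Returns the new state and the simulation time actually reached."""
    dt = dt_planned if input_time is None else min(dt_planned, input_time)
    a0 = accel(x, t)
    x_new = x + v * dt + 0.5 * a0 * dt * dt
    v_new = v + 0.5 * (a0 + accel(x_new, t + dt)) * dt
    return x_new, v_new, t + dt
```

After processing the input, the caller simply begins the next step from `(x_new, v_new)` at the new time, so no latency is introduced by the step size.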
You can also do this with collision detection. Many of the methods used for continuous collision detection can be easily adapted to higher order integrators. They generally involve extruding a shape along its motion direction, then checking for collisions with the extruded shape. When using a higher order integrator, you extrude it along a curved path rather than a straight line.
Variable step size integrators
All the methods described above assume that $dt$ is the same for every step. The user needs to choose it in advance. If it's set too large, the physics won't be accurate and may go unstable. You can address that by reducing $dt$, but at the cost of more CPU time.
That seems wasteful. In a lot of games, there often is very little going on in terms of physics. Objects are just sitting there or moving along smooth paths. You can integrate that accurately with very large steps. Then some complicated interactions happen and for a short time you need very small steps to compute them accurately. Using the small steps even when they aren't needed wastes a lot of processing.
Variable step size integrators try to solve this problem. Instead of choosing a step size, you choose an error tolerance. The integrator continuously adjusts $dt$ (and sometimes the integration order) to keep errors within the limit you choose.
To do this, the integrator needs to measure how much error it makes in each step. Usually that's done by computing the step with two different integration algorithms, one more accurate than the other. The difference between the two is a pretty good estimate of the error in the less accurate method. Of course, when it actually comes time to take the step you'll want to use the more accurate method, not the less accurate one. So it's more useful to think of the difference as being a very conservative estimate of the error in the more accurate method.
This sounds expensive! Doesn't computing the step with two different methods take a lot more work than computing it with only one? Not necessarily! You can take advantage of "embedded" integrators. That's when one algorithm is completely contained in a second one, so that you'll automatically compute the first one in the process of computing the second.
A good example of this is found in the RK integrators. RK3 involves three updates, the first two of which are identical to the two updates in RK2. This leads to a really nice variable time step algorithm.
This leads to very efficient integration. When not much is happening, it takes large steps. When necessary to resolve fast changes, it automatically reduces the step size, then increases it again as soon as possible.
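A minimal sketch of such an adaptive loop, using the embedded RK2/RK3 pair just described and written for a scalar first-order ODE $dy/dt = f(t, y)$ to keep it short (the acceptance and growth constants are illustrative; production integrators use more careful step-size controllers):

```python
def adaptive_rk23(f, y, t, t_end, tol, dt=0.1):
    """Integrate dy/dt = f(t, y) from t to t_end, adjusting dt so that the
    estimated error of each step stays below `tol`. Returns y(t_end)."""
    while t < t_end:
        dt = min(dt, t_end - t)
        # Three stages of Kutta's third-order method; the first two are
        # exactly the stages of the second-order midpoint method.
        k1 = f(t, y)
        k2 = f(t + 0.5 * dt, y + 0.5 * dt * k1)
        k3 = f(t + dt, y - dt * k1 + 2.0 * dt * k2)
        y2 = y + dt * k2                            # embedded 2nd-order result
        y3 = y + (dt / 6.0) * (k1 + 4.0 * k2 + k3)  # 3rd-order result
        if abs(y3 - y2) <= tol:
            y, t = y3, t + dt   # accept, keeping the more accurate result
            dt *= 1.5           # plenty of accuracy: try a larger step next
        else:
            dt *= 0.5           # too much error: retry this step, smaller
    return y
```

For $dy/dt = y$ starting from $y(0) = 1$, this reaches $y(1) \approx e$ while taking steps as large as the tolerance allows.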
In addition to being more efficient, variable step size integrators are more user friendly. With a fixed step size integrator, the user has to choose the step size in advance. They may not have much idea what value is necessary to get sufficient accuracy. They just have to try a value and see what happens. The step size needs to be chosen based on the fastest events that will ever happen, and those may only happen infrequently during testing. Even after you find an acceptable step size, an unrelated change like reducing the mass of a body or increasing a spring constant could cause it to no longer be stable. And if the physics does occasionally glitch out, it's hard to tell whether it was caused by the step size being too large or by some other sort of problem. Choosing an error tolerance instead of a step size is easier and more reliable.
What should we do?
If we're looking for low hanging fruit, switching from Euler to Verlet is the top thing to consider. It's much more accurate, no more expensive, and not much more complicated.
I think continuous input handling is a good thing to consider. It both reduces input latency and saves CPU time.
I really like variable step size integrators, but they would probably take bigger architectural changes to implement. I don't know whether it's worth the trouble. Fixed step size Verlet might be good enough for our purposes. It's so much more accurate than Euler, it would already let us significantly increase the step size while still being more accurate than before.