Whether you call it a step or a slice doesn't matter; "tick" is another common term for the same concept. Fabien's solution is mentioned in Glenn's article. Glenn just takes it a stage further and allows the fractional time remaining to be accounted for when rendering.
As I understand it, the inner while loop is essentially playing "catch up" with the outer while.
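For concreteness, this is roughly the loop I have in mind (my own reconstruction from the article's description, not its exact code; the SDL setup and helper functions are my guesses):

    #include <SDL.h>
    #include <stdint.h>

    static uint32_t current_position = 0;
    static const uint32_t speed = 1;

    static void update(uint32_t dt_ms) { current_position += speed * dt_ms; }
    static void render(void)           { /* draw using current_position */ }

    int main(int argc, char **argv)
    {
        SDL_Init(SDL_INIT_TIMER);
        uint32_t simulation_time = 0;             /* ms of game time simulated so far */

        for (;;) {                                /* outer loop: once per rendered frame */
            uint32_t real_time = SDL_GetTicks();  /* ms of wall-clock time since SDL_Init */

            while (simulation_time < real_time) { /* inner loop: catch up in fixed slices */
                update(16U);                      /* advance the world by exactly 16 ms */
                simulation_time += 16U;
            }
            render();
        }
    }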
For simplicity's sake, let's assume that simulation_time is incremented by 1 in the inner while (simulation_time += 1U).
Further, let us assume that real_time is 42 on the first outer iteration. Now, the inner while needs to execute 42 times until the loop condition fails (42 < 42).
On the second outer iteration, real_time is equal to 58 (42 + 16). The inner while now executes (42 < 58) 16 times.
On the third outer iteration, real_time is equal to 59 (58 + 1). The inner while executes (58 < 59) 1 time.
The real_time cannot stay the same, as it polls the number of ticks since SDL was initialized. So apparently, simulation_time is always smaller than real_time.
(If, for some reason, real_time isn't incremented in an iteration, then the inner loop will not get executed. However, I don't see a case where that can happen.)
Now, inside the update function there is a position update akin to current_position += speed * 16U. With the above iterations:
- The first iteration causes 42 update calls (so current_position is updated 42 times as well).
- The second iteration causes 16 update calls.
- The third iteration calls update 1 time.
So we are advancing the position of something a variable number of times. (We are also executing the inner while a variable number of times.)
Maybe I am missing something here, but why doesn't the non-constant number of updates cause jitter? To be honest, I really don't understand why it works at all. I am trying to understand it fully.
The simulated time is incremented by 16ms per inner-loop iteration, i.e. at roughly a 60Hz rate. You're correct that the simulated time catches up to the real time; you're just wrong about the number of sub-steps and how the timers proceed. Technically, Fabien's solution will likely run slightly ahead of real time because of the inner loop condition.
You do get jitter from a changing number of sub-steps per variable step, which is why you want to do things like the interpolation in Glenn's solution. This is also why you still want to do everything in your power as a game developer to reduce the variance in real step size.
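Very roughly, the interpolation part looks like this (a sketch of the idea from Glenn's article bolted onto the loop above, with a 1D position only; draw() and running are placeholders, not his exact code):

    double prev_pos = 0.0, curr_pos = 0.0;   /* state at the last two fixed steps */
    double velocity = 100.0;                 /* units per second */
    double sim_time = 0.0;                   /* seconds of simulated time */
    const double dt = 1.0 / 60.0;            /* fixed step, ~16.7 ms */

    while (running) {
        double real_time = SDL_GetTicks() / 1000.0;

        while (sim_time < real_time) {       /* same catch-up loop as before */
            prev_pos  = curr_pos;
            curr_pos += velocity * dt;       /* advance by exactly one fixed step */
            sim_time += dt;
        }

        /* sim_time has overshot real_time by a fraction of a step; blend the last
           two states so the frame you draw corresponds to real_time, not sim_time. */
        double alpha = 1.0 - (sim_time - real_time) / dt;
        draw(prev_pos + (curr_pos - prev_pos) * alpha);
    }

That way what you draw tracks real time even though the simulation only ever advances in whole fixed steps.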
> This is also why you still want to do everything in your power as a game developer to reduce the variance in real step size.
Well, the main reason is that for either variable time slices or fixed time slices with interpolation, the simulation/interpolation time you actually want is the time of the current frame, but you don't have that until you have rendered it. So what you do is assume that it will take about as long as the previous frame, which is a bad prediction if the frame-time variance is too big. If you really want to eliminate the jitter, you'd need to simulate/interpolate for a time that you are guaranteed to be able to render the frame in, and then make sure that the frame is displayed when that time elapses and not before. This of course increases latency, so there is always a tradeoff.
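In loop form, that assumption is just the usual variable-step pattern, something like this (a sketch; update, render and running are placeholders):

    uint32_t last = SDL_GetTicks();
    while (running) {
        uint32_t now = SDL_GetTicks();
        uint32_t dt  = now - last;  /* how long the previous frame took */
        last = now;
        update(dt);                 /* simulate assuming this frame will take about as long */
        render();
    }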
Another cause for jitter is that even measuring the time between two successive frames can be difficult with modern GPUs - the CPU time at the start of the game loop doesn't really cut it: https://medium.com/@alen.ladavac/the-elusive-frame-timing-16...
> The simulated time is incremented by 16ms per inner-loop iteration, i.e. at roughly a 60Hz rate. You're correct that the simulated time catches up to the real time; you're just wrong about the number of sub-steps. Technically, Fabien's solution will likely run slightly ahead of real time because of the inner loop condition.
Can't the inner while take 1 ms or 1 ns per iteration? I don't see how the inner while's execution time is roughly 16 ms.
Okay, if my number of sub-steps is wrong, then I am really missing something here. Just trying to understand what exactly is wrong with my thinking.
It is supposed to be so simple, yet I have real trouble understanding it currently. I am basically stuck now.
The inner loop can take as little time as it likes to actually do the update, but it is accounted for in simulated time by adding 16 to the timer, and that is reflected in the update by passing in 16 as the amount of time to update for.
Real time is updated once at the start of the step, then simulated time catches up in 16ms increments. So you get as many update calls as there are 16ms increments to simulated time before it exceeds real time.
This is all very simple but also pretty finicky to think through. I’d replicate the code in the language of your choice and step through it.
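Something as small as this is enough to see it (a standalone sketch with made-up frame start times standing in for SDL_GetTicks):

    #include <stdio.h>

    int main(void)
    {
        /* Pretend wall-clock readings (ms) at the start of three frames. */
        unsigned real_times[] = { 42U, 58U, 59U };
        unsigned simulation_time = 0U;

        for (int frame = 0; frame < 3; frame++) {
            unsigned real_time = real_times[frame];
            int substeps = 0;

            while (simulation_time < real_time) { /* same inner loop, 16 ms slices */
                simulation_time += 16U;           /* stands in for update(16) */
                substeps++;
            }

            printf("frame %d: real=%u sim=%u substeps=%d\n",
                   frame, real_time, simulation_time, substeps);
        }
        return 0;
    }

With your numbers that prints 3, 1 and 0 sub-steps (not 42, 16 and 1), and simulated time ends up slightly ahead of real time (48 vs 42, then 64 vs 58 and 59).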
I think I finally got it now. I had set myself up with roadblocks that kept me from seeing the forest for the trees.
After learning what delta time (and variable timesteps) are, I finally understood what the core issue (frame rate independence) is and why you need a fixed timestep for it.