Table of contents
- The Game Loop
- The Problem: It Works On My Machine
- Attempt 2: Delta Time (Δt)
- The Trap of Variable Delta Time
- The Solution: Fixed Timestep
- The Final Polish: Interpolation
- The Butterfly Effect
- Pure Functions
- The Illusion of Precision
- Why This Breaks Your Game
- The Myth of Impossibility
- Taming the Chaos
- The Payoff: Infinite Replays
- The Nuclear Option
- The Bedrock of Integers
- The Millimeter Analogy
- The Jitter Test
- Forging Fixed64
- Conclusion
- Resources & Inspiration
The Game Loop
At the core of every game engine lies a single while loop. It’s the heartbeat of your entire simulation, repeating dozens or hundreds of times per second to do two things:
- Update: Advance the state of the world (move characters, check collisions, run AI).
- Render: Draw the current state of the world to the screen.
When you write your first engine, you usually start with something like this just to get a sprite moving on screen:
```cpp
while (game_running) {
    // 1. Update the world
    character.position.x += 5.0;

    // 2. Draw the world
    render(character);
}
```
It’s simple, intuitive, and completely broken. The hidden assumption here is that every frame takes the exact same amount of time to process.
The Problem: It Works On My Machine
Run this on your dev machine, and it looks great. Send it to a friend with a high refresh rate monitor, and your character moves like they’re on speed. Run it on an old laptop, and they’re moving in slow motion.
- Old Laptop (30 FPS): Loop runs 30 times/sec. Character moves 30 × 5 = 150 pixels per second.
- High-End PC (120 FPS): Loop runs 120 times/sec. Character moves 120 × 5 = 600 pixels per second.
Your game speed is tied to the frame rate, which means the gameplay experience is completely different depending on the hardware. We need to break that link.
Demo: Hardware Dependent Movement
Drag the slider to change the "Hardware Speed" (Simulated FPS). Watch how the box speed changes.
Attempt 2: Delta Time (Δt)
The standard fix is Delta Time (Δt). Instead of thinking in “pixels per frame”, we start thinking in “pixels per second”.
We measure how much time passed since the last frame, and scale our movement by that amount.
At a speed of 300 pixels per second: if the frame took 0.1 seconds (10 FPS), we move 300 × 0.1 = 30 pixels. If the frame took 0.01 seconds (100 FPS), we move 300 × 0.01 = 3 pixels.
The code looks like this:
```cpp
double last_time = get_time();

while (game_running) {
    double current_time = get_time();

    // Calculate how much time passed since the last loop
    double dt = current_time - last_time;
    last_time = current_time;

    // Update position based on time, not frames
    character.position.x += 300.0 * dt;

    render(character);
}
```
Demo: The Delta Time Fix
Off: Move a fixed number of pixels per frame.
On: Move a fixed number of pixels per second.
Try the Delta Time Fix demo above. Toggle the checkbox to see how the math corrects the speed.
This looks correct.
- Slow PC (0.1s per frame): 10 frames × 30 pixels = 300 pixels/sec.
- Fast PC (0.01s per frame): 100 frames × 3 pixels = 300 pixels/sec.
Both computers now move the character the same distance over the same amount of real time.
When you ship the game it feels smooth. But then players start reporting bugs: falling through floors, missed jumps on laptops, or physics exploding when the game lags.
The math feels right, so what’s wrong?
The Trap of Variable Delta Time
The issue lies in how computers simulate physics. Physics engines use Numerical Integration.
In the real world, time is continuous. An object moves smoothly from point A to point B. In a computer, time is discrete. We take snapshots. To find the new position of an object, we look at its current position and velocity and take a “step” forward in time:

position = position + velocity × Δt

This equation assumes that velocity is constant during that entire Δt. But in a complex game, velocity is constantly changing due to gravity, friction, and collisions.
When Δt is small (high FPS), the error is tiny because the velocity doesn’t change much in that short time. When Δt is large (low FPS or a lag spike), the error becomes massive.
The Tunneling Effect
This is a symptom of Discrete Collision Detection. By default, many physics engines don’t check if a bullet touched a wall during its travel. They only check if the bullet is inside the wall at the end of the frame.
- At 60 FPS (Δt ≈ 0.016s): The bullet moves in small steps. It is likely to land inside the wall during one of those steps, triggering a collision.
- At 5 FPS (Δt = 0.2s): The bullet takes one giant step. It starts in front of the wall and ends up behind the wall. The engine sees no overlap, so no collision occurs.
If we were using continuous collision detection (sweeping the bullet along its path and testing the whole segment instead of just the endpoints), this tunneling would not happen. But continuous tests are more expensive, so many games rely on discrete checks and simply try to keep Δt small enough that tunneling is rare.
Demo: Quantum Tunneling
Lower the simulated FPS and/or increase the speed to see the ball "tunnel" through the wall.
This behavior is inconsistent. On a fast machine, the bullet hits the wall. On a slow machine (or during a lag spike), it tunnels through. You cannot tune your game logic to prevent this because you never know how big the step will be. A game that works perfectly on your developer PC might break completely on a player’s laptop.
The Determinism Problem
Because the result of the integration depends on Δt, and in our variable-delta loop Δt comes from real time and floating-point math, the simulation is now non-deterministic. In a perfect world, if you replayed the exact same inputs you would get the exact same sequence of states. In reality, tiny differences in Δt (from OS scheduling, clocks, and rounding) mean each machine takes slightly different-sized steps through time. The same jump will land at a slightly different position on every run, and those sub-pixel errors accumulate over thousands of frames. This makes it impossible to have rock-solid gameplay, replays, or multiplayer synchronization.
In the Quantum Tunneling demo above, you can see this in action. Lower the simulated FPS, and the ball will eventually skip right through the wall. This isn’t a bug in the demo. It’s a fundamental flaw of using large, variable time steps for collision.
The Solution: Fixed Timestep
We are in a bind.
- We want the game to run as fast as possible (Variable FPS) for smooth rendering.
- We need the physics to run at a constant speed (Fixed FPS) for stable simulation.
The solution is to decouple them. We run the rendering loop as fast as the hardware allows, but force the physics loop to run at a strict, fixed heartbeat (e.g., 60 times per second).
We do this using the Accumulator Pattern.
The Accumulator Pattern
Think of time as a currency.
- Real Time is your income. Every millisecond that passes in the real world is added to your bank account (the accumulator).
- Simulation Steps are your expenses. Each physics step costs a fixed amount (e.g., 0.016 seconds).
In every frame of our loop, we look at our bank account. If we can afford a physics step, we “buy” one (run the physics) and subtract the cost. We keep buying steps until we can’t afford any more. Whatever is left over stays in the account for the next frame.
Here is the standard implementation:
```cpp
// We want physics to run exactly 60 times per second
const double dt = 1.0 / 60.0;

double current_time = get_time();
double accumulator = 0.0;

while (game_running) {
    double new_time = get_time();
    double frame_time = new_time - current_time;
    current_time = new_time;

    Input current_input = read_input();

    // 1. Income: Add real time to the accumulator
    accumulator += frame_time;

    // 2. Expense: Spend time to run physics steps
    // We use a while loop because if the game lagged, we might need
    // to run multiple physics steps to catch up.
    while (accumulator >= dt) {
        integrate_physics(current_state, current_input, dt);
        accumulator -= dt;
    }

    // 3. Render: Draw the state
    render(current_state);
}
```
A Note on Input
Notice we read input once per frame, but we might use it multiple times in the while loop. This is acceptable for most games. For high precision competitive games, you might want to buffer inputs with timestamps, but for now, applying the “current” input to all physics steps in this frame is a good approximation.
Now, no matter how fast or slow the display is, integrate_physics is always called with dt = 1.0 / 60.0 ≈ 0.0167 seconds.
- Fast PC: Render runs 1000 times/sec. Accumulator fills up slowly. Physics runs once every ~16 render frames.
- Slow PC: Render runs 30 times/sec. Accumulator fills up fast. Physics runs twice per render frame.
The physics simulation is now identical on both machines.
The “Spiral of Death”
There is one dangerous edge case.
What if your integrate_physics function takes longer to run than the dt itself?
Imagine dt is 16ms (60 FPS), but your physics calculations are heavy and take 20ms to complete.
- Frame 1: You accumulate 16ms. You enter the while loop.
- Physics: You run integrate_physics. It takes 20ms of real time.
- Result: By the time you finish, another 20ms has passed in the real world!
- Frame 2: The accumulator now has the leftover time PLUS the new 20ms. You now have ~36ms accumulated.
- Catch Up: The loop sees 36ms. It needs to run physics twice (16ms + 16ms) to catch up.
- Disaster: Running physics twice takes 40ms. Now you are even further behind.
Your game freezes. The window stops responding. You have to force-quit.
This is the Spiral of Death. The engine falls behind, tries to simulate more frames to catch up, which takes even more time, putting it further behind. It’s an infinite loop of playing catch-up that it can never win.
The fix is simple: Clamp the Accumulator. We set a maximum limit on how much time we simulate per frame (e.g., 0.25 seconds). If the game falls further behind than that, we just accept the slowdown (slow motion) rather than freezing the computer.
```cpp
accumulator += frame_time;

// Safety: Never simulate more than 0.25 seconds in one frame
if (accumulator > 0.25) {
    accumulator = 0.25;
}

while (accumulator >= dt) { ... }
```
Why Doubles Are A Trap
You might have noticed we used double for time.
In a simple game, this is fine. But if you’re building something serious, especially networked multiplayer, floating point time is a trap.
Floating point numbers lose precision as they get larger.
- At time = 0.0, precision is tiny (nanoseconds or better).
- At time = 10,000.0 (roughly 3 hours), a 32-bit float can only resolve about 1 millisecond. A double holds out far longer, but its precision decays in exactly the same way as the value grows.
So when you do time += dt; or accumulator += frame_time; every frame, each addition is rounded to the nearest representable value. Individually these errors are microscopic, but they always push in some direction.
Leave your game running overnight, and things start to get weird. Animations jitter. Physics behaves slightly differently. In multiplayer, this is fatal: if Player A’s clock drifts even a fraction of a millisecond from Player B’s, their simulations diverge.
Why Integers Matter:
Modern engines measure time in “Ticks” (usually nanoseconds) using a 64-bit integer (int64_t).
- 1 second = 1,000,000,000 nanoseconds.
- dt (60Hz) = 16,666,667 nanoseconds.
Integers never lose precision. Whether the server has been up for 1 second or 10 years, 16,666,667 ticks is always exactly 16,666,667 ticks on every machine, and the sequence of dt values is bit identical across clients.
In later sections, we’ll dig deeper into how to tame floating point and build simulations that stay deterministic even under the most demanding multiplayer conditions.
Here is the final loop using high precision integers:
```cpp
// 16.666ms in nanoseconds
constexpr int64_t dt = 16666667;

int64_t current_time = get_time();
int64_t accumulator = 0;

while (game_running) {
    int64_t new_time = get_time();
    int64_t frame_time = new_time - current_time;

    // Prevent Spiral of Death: Max 0.25 seconds (250,000,000 ns)
    if (frame_time > 250000000) {
        frame_time = 250000000;
    }
    current_time = new_time;

    accumulator += frame_time;

    while (accumulator >= dt) {
        previous_state = current_state;
        // Convert nanoseconds to seconds for the physics math
        integrate_physics(current_state, dt / 1000000000.0);
        accumulator -= dt;
    }

    // ... Render ...
}
```
The Final Polish: Interpolation
We have solved the physics stability, but introduced a visual problem. Since the physics updates (60Hz) are decoupled from the rendering (Variable Hz), the screen might draw between two physics steps.
If the character is at position A at step 1, and position B at step 2, but we render halfway through, the character will still be drawn at A (because the physics hasn’t updated yet). Then, suddenly, they snap to B. This causes “jitter” or “stutter.”
To fix this, we use the leftover time in the accumulator. The accumulator holds the time that we haven’t simulated yet. It tells us exactly how far we are between the previous physics step and the next one.
We calculate an interpolation factor alpha = accumulator / dt:
- If alpha is 0.5, we are exactly halfway between the last physics state and the current one.
- We blend the two states to find the render position.
```cpp
// Calculate how far we are into the next frame (0.0 to 1.0)
// We cast to double here because alpha IS a fraction.
const double alpha = (double)accumulator / (double)dt;

State interpolated_state = interpolate(previous_state, current_state, alpha);
render(interpolated_state);
```
Demo: Smoothing the Jitter
The Cost of Smoothness
There is a catch. By interpolating between previous_state and current_state, we are technically rendering the past.
If alpha is 0.5, we are showing the user where the character was half a frame ago.
This introduces a latency of exactly one physics frame (e.g., 16ms). In the grand scheme of things (where input lag from the monitor and OS can be 50ms+), this is a negligible price to pay for removing all visual jitter. But it is a trade-off: we trade a tiny bit of immediacy for perfect visual stability.
In the Smoothing the Jitter demo above, notice how the Green box (Interpolated) is always slightly behind the Red box (Raw). It is waiting for the next physics frame to arrive so it can blend towards it.
Try lowering the Physics Rate to 2 Hz in the demo. You will clearly see the Green box trailing significantly behind the Red box. This is the latency in action.
The Butterfly Effect
We might look at this perfect loop and assume our work is done. We might assume that if we run this code on two different computers, feeding them the exact same inputs at the exact same ticks, they will produce the exact same simulation.
This assumption is dangerous.
In the chaotic world of numerical simulation, a difference of a single bit is not a minor error. It is a divergent timeline. If a character’s position differs by 0.00000001 on Frame 1, that tiny discrepancy will:
- Ripple through the collision system
- Amplify through the physics solver
- Compound over thousands of frames
By Frame 1000, one simulation has the character landing a jump, while the other has them falling into a pit.
This is the Butterfly Effect. And in the world of floating point math, the butterfly is always flapping its wings.
Pure Functions
Before we even look at the math, we have to look at the state.
A deterministic system is simple:
State(n+1) = Update(State(n), Input(n))
If you run this function twice with the same inputs, you get the same output. That is a Pure Function.
But most game code is filthy. We rely on global state, system timers, and memory pointers. To achieve determinism, we must purge these sins. Your simulation needs to live in a bubble, completely isolated from the real world.
- No System Time: Never read time or clock (e.g., std::chrono::steady_clock) inside your simulation logic. If your logic reads the wall clock, it is different every frame.
- No Random Numbers: Never use rand(); its behavior varies across platforms. Use a seeded deterministic PRNG, and explicitly save and sync the seed.
- No Memory Addresses: Never iterate over a map keyed by pointers. ASLR (Address Space Layout Randomization) ensures these addresses are different every time you run the game. Your iteration order will change, your update order will change, and your game will break.
The Illusion of Precision
“Okay,” you say. “I wrote clean code. No globals. No random numbers. I am safe.”
No, you are not. Because you are using float/double.
We think of computers as math machines. 1.0 + 2.0 = 3.0. It feels precise.
But floating point numbers aren’t real numbers. They are scientific notation with a limited number of bits.
- Around 0.0, they are precise (nanometers).
- Around 1,000,000.0, they get fuzzy (millimeters).
- Around 10,000,000,000.0, they can’t even represent every integer.
This lack of precision leads to rounding errors. And here is the kicker: The order of operations changes the rounding errors.
The IEEE 754 Standard
Why? Because a 32-bit float only stores 23 bits of precision in the Mantissa (24 counting the implicit leading 1). That is about 7 decimal digits. When you add a tiny number to a huge one, the computer has to shift the tiny number’s bits to match the huge number’s exponent. If you shift it too far, the bits fall off the edge.
10^20 + 1.0 = 10^20. The 1.0 was shifted right off the cliff.
The Associativity Trap
In the math we learned in school, addition is associative. The order in which you add numbers does not matter.
In the world of floating point, this is a lie.
Consider a scenario where we have three values:
- A: A massive number (1e20)
- B: The negative of that massive number (-1e20)
- C: A small number (1.0)

Mathematically, A and B should cancel each other out, leaving C. The result should be 1.0.
Let’s see what the computer actually does.
```cpp
// Feel free to copy-paste and run this yourself!
// I think it's always more impactful to see it with your own eyes.
#include <iostream>
#include <iomanip>

int main() {
    volatile float a = 1e20f;
    volatile float b = -1e20f;
    volatile float c = 1.0f;

    // Case 1: Cancel the huge numbers first
    // (1e20 - 1e20) = 0. Then 0 + 1 = 1.
    volatile float result1 = (a + b) + c;

    // Case 2: Add the small number to the huge number first
    // (-1e20 + 1): the 1 vanishes due to lack of precision. The result
    // is just -1e20. Then (1e20 - 1e20) = 0.
    volatile float result2 = a + (b + c);

    std::cout << std::fixed << std::setprecision(10);
    std::cout << "result1 = " << result1 << "\n";
    std::cout << "result2 = " << result2 << "\n";
}
```
Output:
```
result1 = 1.0000000000
result2 = 0.0000000000
```
In the second case, the 1.0 was simply erased from existence. It was too small to be represented when added to 1e20, so it fell off the end of the mantissa.
Why This Breaks Your Game
“I am not adding 1e20 in my game.”
Maybe not. But you are summing forces.
Force = Gravity + Wind + Explosion
If you compile for different architectures (e.g., x86 vs ARM), the compiler might optimize this to (Gravity + Wind) + Explosion on one, and Gravity + (Wind + Explosion) on the other. You get different results.
- Frame 1: Player A gets 10.000001. Player B gets 10.000002.
- Frame 100: That tiny difference causes a collision to happen on B’s machine but miss on A’s machine.
- Result: Desync. Game over.
The Myth of Impossibility
It is often said that floating point determinism is impossible. For a long time, the standard advice for networked games was to abandon floats entirely and use Fixed Point math (integers) to guarantee synchronization.
This was true in the era of the x87 FPU, a piece of hardware notorious for its inconsistent internal precision. But the hardware landscape has changed.
Modern CPUs, from Intel to AMD to ARM, use standardized instruction sets like SSE and AVX. These instructions are rigorously defined. An ADDSS (Add Scalar Single-Precision) instruction will produce the exact same bit-pattern result on any compliant processor, provided the inputs are identical.
It is possible to tame the float. But it requires strict discipline.
NOTE: For a deep dive into the modern state of floating point determinism, Erin Catto’s article Box2D Determinism is an excellent resource.
Taming the Chaos
To achieve determinism with floats, we must strip away anything that introduces ambiguity.
- Disable “Fast Math”: Compilers have a flag (often -ffast-math or /fp:fast) that explicitly allows them to break the rules of math to gain speed. It tells the compiler to assume associativity where none exists. We must turn this off and enforce strict IEEE 754 compliance.
- Watch out for FMA (Fused Multiply-Add): Some CPUs can perform a * b + c in a single step with one rounding error. Others do it in two steps: (a * b) then + c, incurring two separate rounding errors. One is more accurate, but they are different. You must ensure your compiler generates consistent instructions.
- Avoid Transcendental Functions: Operations like sin(), cos(), and pow() are often implemented in software libraries (the Standard Library) rather than hardware instructions. These implementations vary by compiler and operating system. A sine wave calculated on MSVC/Windows might differ slightly from one calculated on GCC/Linux. (sqrt() is the notable exception: IEEE 754 requires it to be correctly rounded, so compliant hardware agrees on it.)
The Payoff: Infinite Replays
Why go through all this trouble? Why fight the compiler and the hardware for every single bit of precision?
Because if we succeed, we unlock a superpower: Time Travel.
In a non-deterministic game, recording a replay means saving the position and rotation of every object in the world, every single frame. This results in massive files that are essentially video recordings of data.
In a deterministic game, we don’t need to save the world. We only need to save the Inputs.
If we record the initial “seed” of the world, and then record every keystroke the player makes, we can reconstruct the entire match by simply re-running the simulation. Because the logic is deterministic, the game will play out exactly the same way, bit for bit, as it did the first time.
We can record an hour-long match in a file the size of a text message. We can rewind time, fast-forward, and even jump into the replay and take control.
This is the promise of determinism. It is the foundation of modern netcode, from fighting games to RTS epics.
But for some, even this isn’t enough. For some, the slight risk of a compiler optimization or a library mismatch is too high. For those who demand absolute, unshakeable truth, there is only one option left.
The Nuclear Option
In the previous sections, we explored the treacherous landscape of floating point math. We learned that with enough discipline, enough compiler flags, and enough rigorous testing, we can coax floats into behaving deterministically. For 99% of games, this is the correct path. It is fast, it is hardware-accelerated, and it is standard.
But what if you are building the next Street Fighter? What if you are building an RTS with 10,000 units? What if you want to support cross-play between a PC, a console, and a mobile phone, and you absolutely, positively cannot risk a single bit of desync?
Sometimes, “careful” isn’t enough. Sometimes, you want a guarantee.
When the cost of failure is total desynchronization, we turn to the Nuclear Option. We abandon floating point numbers entirely. We return to the bedrock of computing: Integers.
This approach solves two fundamental problems at once:
- Determinism: Integers behave identically on every processor architecture.
- Precision: Integers provide uniform precision across the entire number line, eliminating the “wobbly” physics that occur far from the origin in floating point worlds.
The Bedrock of Integers
Integers (int, long, int64_t) are the only honest numbers in a computer.
- 1 + 1 = 2. Always.
- 1,000,000 + 1 = 1,000,001. Always.
There is no fuzziness. There is no mantissa. There is no rounding mode that changes based on the phase of the moon. An integer operation on an Intel CPU produces the exact same bit-pattern as an integer operation on an ARM chip, a GPU, or a toaster.
But how do we build a physics engine with integers? How do we represent a speed of 0.5 meters per second, or a gravity of 9.8, or the value of PI?
We use a technique called Fixed Point Math.
The Millimeter Analogy
The concept is deceptively simple. It is all about changing your perspective on units.
Imagine we are measuring distance in meters.
- Float: 1.5 meters.
- Integer: We can’t do 1.5. We only have 1 or 2.
So let’s change our unit. Let’s stop measuring in meters and start measuring in millimeters.
- Integer: 1500 millimeters.
That is it. That is the entire secret. We pick a “scaling factor” (in this case, 1000) and we multiply all our numbers by it.
- 1.0 becomes 1000.
- 0.5 becomes 500.

Addition works perfectly: 1000 + 500 = 1500 (1.5).
Multiplication requires one extra step. If we multiply 2.0 (stored as 2000) by 0.5 (stored as 500), we get 1,000,000. But we know the answer should be 1.0 (stored as 1000).
Because we multiplied the numbers, we also multiplied the scaling factors (1000 × 1000). To get back to our correct unit, we simply divide by the scaling factor once:
2000 * 500 / 1000 = 1000.
Binary Fixed Point
Computers are binary machines, so they prefer powers of 2 over powers of 10. Instead of scaling by 1000, we scale by shifting bits. A common standard is 16:16 Fixed Point. This uses a 32-bit integer, where the top 16 bits represent the whole number, and the bottom 16 bits represent the fraction.
- Scaling Factor: 2^16 = 65,536.
- 1.0 is stored as 65536.
- 0.5 is stored as 32768.
The Jitter Test
We mentioned that fixed point math provides Uniform Precision. Why does this matter?
Because floats degrade. As a floating point number gets larger, its precision gets worse. It’s like a ruler where the tick marks get further and further apart the further you go.
Fixed Point numbers are a uniform grid. The distance between 1 and 2 is exactly the same as the distance between 1,000,000 and 1,000,001.
We can visualize this degradation with a test. Let’s simulate an object moving very far from the origin. We will compare a standard 32-bit Float against a high-precision 32:32 Fixed Point implementation (since 16:16 would overflow at this distance).
Watch the Red Box (Float). As the coordinate gets larger, it starts to “jitter” or “snap”. It can no longer move smoothly because the computer literally cannot represent the numbers between the snaps. The Green Box (Fixed) remains perfectly smooth, forever.
Demo: Precision Loss at Distance
We are simulating an object moving slowly at a huge X coordinate.
Float (Red): Uses Math.fround() (32-bit float emulation).
Fixed (Green): Uses 64-bit Integers (32:32).
Forging Fixed64
For a modern game engine, 16:16 isn’t enough. The maximum value is only around 32,000. That is barely enough for a small room, let alone a sprawling open world.
We need more range. We need 32:32 Fixed Point.
We store this in a 64-bit integer (int64_t).
- Range: ±2^31 ≈ ±2.1 billion whole units. Enough to simulate a solar system with millimeter precision.
- Precision: 2^-32 ≈ 0.00000000023 of a unit. Sub-atomic accuracy.
```cpp
struct fixed64
{
    int64_t value;
};

fixed64 f64_from_int(int32_t v)
{
    fixed64 result;
    // Shift through uint64_t: left-shifting a negative signed value
    // was undefined behavior before C++20.
    result.value = (int64_t)((uint64_t)(int64_t)v << 32);
    return result;
}

fixed64 f64_from_double(double v)
{
    fixed64 result;
    result.value = (int64_t)(v * 4294967296.0); // v * 2^32
    return result;
}

fixed64 operator+(fixed64 a, fixed64 b)
{
    fixed64 result;
    result.value = a.value + b.value;
    return result;
}

fixed64 operator*(fixed64 a, fixed64 b)
{
    fixed64 result;
    // The raw product is scaled by 2^64; we need the middle 64 bits
    // (a 128-bit multiply, shifted right by 32).
#ifdef _MSC_VER
    // _mul128 requires <intrin.h> on MSVC.
    int64_t high;
    int64_t low = _mul128(a.value, b.value, &high);
    result.value = (int64_t)((unsigned __int64)low >> 32 | (unsigned __int64)high << 32);
#else
    __int128 temp = (__int128)a.value * (__int128)b.value;
    result.value = (int64_t)(temp >> 32);
#endif
    return result;
}
```
This simple struct is robust. It behaves exactly like a float, but it is perfectly, mathematically deterministic.
Conclusion
We started with a simple while loop that ran as fast as it could, and we saw how it fell apart the moment it left our development machine. We tried to fix it with Delta Time, only to discover that variable time steps introduce non-deterministic chaos into our physics.
The solution wasn’t to choose between speed and stability, but to decouple them. By using a fixed timestep accumulator, we gave our game the best of both worlds. A rendering loop that utilizes every ounce of GPU power, and a simulation loop that ticks with the precision of a metronome.
This structure is more than just an optimization. It is a guarantee. It guarantees that a jump on a laptop reaches the same height as a jump on a gaming PC. It guarantees that your physics won’t explode when the frame rate drops. It gives you a stable foundation upon which you can build complex, reliable gameplay systems.
It takes more effort to build than a simple loop. It requires you to understand how your computer handles numbers. But that effort pays off every single time your game runs.
Resources & Inspiration
I wrote this post because I went looking for answers. This article is the collection of everything I learned on that journey.
If you want to go deeper, these are the resources that guided me:
- Fix Your Timestep! by Glenn Fiedler.
- Box2D Determinism by Erin Catto.
- Game Loop by Robert Nystrom.