Numbers That Betray
In the previous chapter, we built a single Full Adder. It could add 1 + 1 to get 2 (`10`). But 2 is a small number. To build a universe, we need larger numbers.
We do this by chaining adders together. We wire the Carry Out of one adder to the Carry In of the next. This creates a chain of dependencies.
But this chain reveals a terrifying truth: Numbers are Finite.
The Cliff
In mathematics, the number line is infinite. You can always add 1.
In a computer, a number is just a row of $N$ wires. If you have 4 wires, you can represent $2^4 = 16$ states (0 to 15).
What happens if you have the number 15 (`1111`) and you add 1?
The math says 16 (`10000`). But we only have 4 wires. The 5th bit (the 16) has nowhere to go. It falls off the edge of the universe. The result wraps around to 0 (`0000`).
The number line is not a line. It is a circle. Hardware integers do not implement \(\mathbb{Z}\) (the integers). They implement arithmetic modulo \(2^N\). This means every operation silently wraps around.
This wrap-around is called Overflow. It is not a hardware error. The hardware did exactly what it was designed to do. The error is assuming the math was infinite.
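The wrap is easy to reproduce in software by doing exactly what the hardware does: throwing away every bit above the register width. A minimal sketch of a 4-bit add (the helper name `add4` is just for illustration):

```python
# A 4-bit register holds 16 states; arithmetic wraps modulo 2**4.
N = 4
MASK = (1 << N) - 1  # 0b1111

def add4(a, b):
    """Add two 4-bit values the way the hardware does: keep only 4 bits."""
    return (a + b) & MASK

print(add4(15, 1))  # 15 + 1 "should" be 16, but the 5th bit is dropped -> 0
print(add4(9, 9))   # 18 mod 16 -> 2
```

Masking with `& MASK` is the software analogue of simply not having a fifth wire.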
The Ripple Delay
Look closely at the simulator when you change a low bit (like bit 0). The value doesn't update instantly across the whole chain.
The Carry bit must be calculated by the first adder before it can be sent to the second. The second needs it before the third.
This carry chain is the first time we encounter a fundamental enemy of computation: Waiting.
This is called Ripple Delay. If you make your number 64 bits wide, you have to wait for the carry to ripple through 64 gates. Because addition sits on the critical path of almost every instruction, its delay limits the maximum clock speed of the entire CPU.
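The chain can be modeled directly: one full adder per bit, each waiting on the carry from the bit below. A sketch in Python (function names like `ripple_add` are illustrative, not from any real tool):

```python
def full_adder(a, b, cin):
    """One full adder: returns (sum_bit, carry_out)."""
    s = a ^ b ^ cin
    cout = (a & b) | (cin & (a ^ b))
    return s, cout

def ripple_add(x, y, n=4):
    """Chain n full adders; each iteration waits on the previous carry."""
    carry = 0
    result = 0
    for i in range(n):  # bit 0 first: the carry ripples upward
        a = (x >> i) & 1
        b = (y >> i) & 1
        s, carry = full_adder(a, b, carry)
        result |= s << i
    return result, carry  # final carry is the bit that "fell off the edge"

print(ripple_add(0b1111, 0b0001))  # (0, 1): sum wrapped to 0, carry out = 1
```

The `for` loop is the software picture of the delay: iteration `i` cannot run until iteration `i - 1` has produced its carry.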
(Note: Signed numbers are not stored with a sign bit and magnitude. They are encoded using a trick called Two’s Complement, which makes subtraction cheap but overflow even more subtle. We will explore this later.)
Is overflow undefined behavior or defined behavior?
In hardware, overflow is always defined: it is wraparound modulo \(2^N\). In software, languages choose what to do. Unsigned integers in C/C++ wrap by definition. Signed integers in C/C++ invoke "undefined behavior" so compilers can optimize aggressively. The hardware is honest; the language chooses the rules.
Why is addition so important compared to other operations?
Because almost everything reduces to addition. Subtraction is addition. Multiplication is repeated addition. Address calculation is addition. Loop counters are addition. If addition is slow, the entire machine is slow.
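The claim that subtraction is addition can be demonstrated with the Two's Complement trick mentioned earlier: invert the bits and add one. An 8-bit sketch:

```python
N = 8
MASK = (1 << N) - 1  # 0xFF

def sub_via_add(a, b):
    """a - b computed using only addition and bit inversion (two's complement)."""
    return (a + ((~b) & MASK) + 1) & MASK

print(sub_via_add(7, 5))  # 2
print(sub_via_add(5, 7))  # 254, i.e. -2 wrapped into 8 bits
```

No dedicated subtractor circuit is needed: the same adder, fed inverted bits with a carry-in of 1, does the job.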
How do CPUs avoid ripple delay?
They cheat using Carry Lookahead, Carry Select, and Prefix Adders. These circuits trade area and power for speed by predicting carry bits in parallel instead of waiting for them to ripple. Faster addition costs more silicon and more energy.
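A rough sketch of the lookahead idea: compute a "generate" and a "propagate" signal for every bit, then derive all the carries from those signals. The Python loop below is still sequential, but in hardware the same sum-of-products logic is flattened so every carry appears after a roughly constant number of gate delays:

```python
def lookahead_carries(x, y, n=4, c0=0):
    """Derive every carry from generate/propagate signals
    instead of waiting for them to ripple."""
    g = [((x >> i) & (y >> i)) & 1 for i in range(n)]   # bit i generates a carry
    p = [((x >> i) ^ (y >> i)) & 1 for i in range(n)]   # bit i propagates a carry
    carries = [c0]
    for i in range(n):
        # c[i+1] = g[i] OR (p[i] AND c[i]); hardware expands this
        # recurrence into parallel logic rather than iterating.
        carries.append(g[i] | (p[i] & carries[i]))
    return carries

print(lookahead_carries(0b1111, 0b0001))  # [0, 1, 1, 1, 1]
```

Expanding the recurrence is exactly the area-for-speed trade: each carry becomes a wider, but shallower, block of gates.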
Why don't we just add more wires?
We do! We went from 8-bit to 64-bit. But every wire costs physical space and adds capacitance. That doesn't mean wider integers are always slower in practice: modern CPUs optimize their adders so 32-bit and 64-bit adds often take the same architectural time. The physical cost still exists, though, and it shows up as power and heat.
Why does overflow cause security bugs?
Because memory sizes and pointer arithmetic use integers. If a size calculation overflows, a program may allocate too little memory and then write past the end, corrupting memory. Many exploits begin with a single overflow.
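A sketch of how such a bug begins, simulating a 32-bit size calculation (the numbers are illustrative, in the spirit of `size_t` on a 32-bit system):

```python
# A 32-bit machine keeps only the low 32 bits of any product.
MASK32 = (1 << 32) - 1

def alloc_size(count, elem_size):
    """count * elem_size as 32-bit hardware computes it."""
    return (count * elem_size) & MASK32

# An attacker requests 2**30 + 1 elements of 4 bytes each:
n = 1_073_741_825
print(alloc_size(n, 4))  # 4, not 4_294_967_300 -- a tiny buffer gets allocated
```

The program then believes it has room for over a billion elements while holding a 4-byte buffer, and every write past byte 4 corrupts neighboring memory.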
Is floating-point any better?
No. Floating-point has more ways to betray you: rounding, NaNs, infinities, and silent precision loss. It handles overflow differently (values blow up to infinity instead of wrapping), but it introduces approximation error on almost every operation. You trade one form of betrayal for another.
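A quick taste of that betrayal in Python, whose `float` is an IEEE 754 double:

```python
import math

# Rounding: neither 0.1 nor 0.2 is exactly representable in binary.
print(0.1 + 0.2)            # 0.30000000000000004
print(0.1 + 0.2 == 0.3)     # False

# Infinity and NaN: arithmetic that has no meaningful answer.
print(math.inf - math.inf)  # nan
```

The wraparound circle is gone, but in its place is a number line full of gaps.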
Is this why Pac-Man glitches at level 256?
Exactly. Pac-Man used an 8-bit integer for the level number. Max value is 255. Level 256 wraps to 0. The game logic breaks because it wasn't designed for that state, resulting in the famous "Kill Screen".
How do Python/Ruby handle huge numbers?
They implement "Arbitrary Precision Integers" (BigInts) in software, chaining multiple machine words together. This makes the math effectively unbounded, but orders of magnitude slower than a single hardware addition.
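You can see this in any Python session: integers simply grow as needed instead of wrapping, and wraparound only happens if you ask for it with a mask:

```python
x = 2 ** 64          # one past the largest 64-bit value range
print(x + 1)         # 18446744073709551617 -- no wrap, just a bigger object

# Opting back in to hardware-style behavior requires an explicit mask:
print((x + 1) & (2 ** 64 - 1))  # 1
```

Every such addition may touch several machine words and allocate memory, which is where the slowdown relative to a single hardware add comes from.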