Chapter 05

Bits Mean Nothing

We have built wires, transistors, gates, adders, and a clock. We can store data and modify it reliably. But what is the data?

If you open the RAM of your computer and look at address `0x1000`, you might see the byte `0x41`.

What does `0x41` mean?

The Lens

The answer is: It is just high and low voltage. The physics doesn't care.

Bits have no intrinsic type.

Meaning is something we impose on top of the physics. This is what we call a Type. A Type is not a property of the data; it is a lens through which we choose to look at the data.

In languages like Python, the Type is sticky—it lives with the value. In C or Assembly, the Type is a lie. You can interpret any memory as any type, often with disastrous results.

Physics Lens: Latency ↓ (No fetch) | Throughput ↑ (Reinterpretation is free) | Energy ↓ (No ALU use) | Waste ↓ (Dense)
Experiment: Toggle the 32 bits below. See how the exact same pattern produces radically different values for Integer, Float, and ASCII.
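If you don't have the interactive widget handy, the same experiment can be sketched in a few lines of Python. The bit pattern `0x41424344` is an arbitrary choice; `struct` does the reinterpretation:

```python
import struct

bits = 0x41424344  # one fixed 32-bit pattern

# Lens 1: unsigned integer -- the plain place-value reading of the bits.
as_int = bits                                       # 1094861636

# Lens 2: IEEE-754 single-precision float -- same bits, different lens.
as_float = struct.unpack('>f', struct.pack('>I', bits))[0]  # about 12.14

# Lens 3: ASCII -- four bytes, each read as a character.
as_text = struct.pack('>I', bits).decode('ascii')   # 'ABCD'

print(as_int, as_float, as_text)
```

Nothing in memory changed between the three readings; only the lens did.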

IEEE-754: The Greatest Betrayal

You will notice that the "Float" view behaves very strangely.

If you set the bits to `0x3F800000`, the Float says `1.0`. Flip the least-significant bit and the value changes by a tiny amount. Flip a bit in the middle (inside the exponent field) and it jumps wildly.
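The same bit-flipping experiment can be sketched in Python, reinterpreting an integer bit pattern as a float with `struct`:

```python
import struct

def bits_to_float(bits):
    """Reinterpret a 32-bit pattern as an IEEE-754 single-precision float."""
    return struct.unpack('<f', struct.pack('<I', bits))[0]

base = 0x3F800000
print(bits_to_float(base))              # 1.0
print(bits_to_float(base ^ 1))          # flip the LSB: about 1.0000001
print(bits_to_float(base ^ (1 << 30)))  # flip an exponent bit: inf
```

One flip nudges the value by a part in eight million; the other catapults it to infinity. Which bit you flip matters because the exponent field scales everything else.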

Floating Point numbers are not spaced evenly. They are dense near zero and sparse near infinity: the gap between adjacent representable values grows with magnitude. This means that (A + B) + C does not always equal A + (B + C). The order of operations changes which small terms get rounded away.
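You can watch associativity break with three ordinary numbers; the large term swallows the small one before it can be counted:

```python
a, b, c = 1e20, -1e20, 1.0

left  = (a + b) + c   # 0.0 + 1.0 -> 1.0
right = a + (b + c)   # 1.0 is absorbed by -1e20 first -> 0.0

print(left, right, left == right)  # 1.0 0.0 False
```

At a magnitude of 1e20, adjacent doubles are thousands apart, so adding 1.0 to -1e20 changes nothing. Grouping decides whether the 1.0 survives.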

Hypotheses
Why does C allow me to treat an Int as a Char?

Because C (and the hardware) assumes you know what you are doing. This is called Type Punning. It is essential for systems programming (e.g., writing a network packet driver), but it is a common source of bugs where you read data through the wrong "lens".
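Python's types are sticky, so it won't let you pun by accident, but the `struct` module lets you do deliberately what C punning does implicitly. Here a float is written to a byte buffer and then read back through the wrong lens (a sketch, not a driver):

```python
import struct

# Write a float into "memory" (a byte buffer)...
memory = struct.pack('<f', 3.14)

# ...then read it back through the wrong lens: as a 32-bit integer.
wrong = struct.unpack('<I', memory)[0]
print(hex(wrong))   # 0x4048f5c3 -- the raw bit pattern, not 3

# The right lens recovers the value (to float32 precision).
right = struct.unpack('<f', memory)[0]
print(right)        # about 3.14
```

The bytes never lied; the second read simply asked the wrong question of them.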

Why is 0.1 + 0.2 != 0.3?

Because 0.1 and 0.2 have infinitely repeating expansions in binary (just like 1/3 is 0.333... in decimal). We have finite bits, so each value is rounded to the nearest representable float. When you add the two rounded values, the errors accumulate, and you get 0.30000000000000004.
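The standard library can show you the exact value that actually gets stored. `Fraction` recovers the true rational number behind a float:

```python
from fractions import Fraction

# The exact value stored for 0.1, after rounding to 53 bits of mantissa:
print(Fraction(0.1))   # 3602879701896397/36028797018963968, slightly above 1/10

total = 0.1 + 0.2
print(total)           # 0.30000000000000004
print(total == 0.3)    # False
```

Neither 0.1 nor 0.2 was ever really in memory; two nearby impostors were, and their errors surfaced in the sum.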

What is Endianness?

It is the order in which we store the bytes of a multi-byte type. If you have the integer `0x12345678`, Big Endian stores the bytes as `12 34 56 78`. Little Endian (used by x86, and by ARM in practice) stores `78 56 34 12`. The value is the same, but the byte order in memory is reversed. This is pure convention, like driving on the left or right side of the road.
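`struct` makes both conventions visible side by side, and `sys.byteorder` tells you which one your machine drives on:

```python
import struct
import sys

value = 0x12345678

print(struct.pack('>I', value).hex())  # '12345678' -- Big Endian
print(struct.pack('<I', value).hex())  # '78563412' -- Little Endian
print(sys.byteorder)                   # this machine's native convention
```

This is why network code pins down a byte order explicitly ("network order" is Big Endian) instead of trusting whatever the local hardware happens to do.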

What programmers usually get wrong here

Assuming the CPU checks types. It does not. If you write a float to memory and read it back as an instruction pointer, the CPU will happily try to execute your number as code. This is the basis of many classic memory-corruption exploits.

Does the CPU know types at all?

No. The CPU only moves and transforms bits. Types exist only in the programmer’s mind, the compiler, and debugging tools.

This works — until we scale it. A system without types is fast, but it is infinitely fragile.