Chapter 25

Why Looking Hurts

We enter the final phase: Algorithms.

The simplest algorithm is Search. Find X in a pile of N things.

Complexity Theory says this is O(N). In the worst case, you look at all N things.

Physics says: it depends on where those things are.

The Cost of Walking

If your data is in an Array (Contiguous), the CPU can see the pattern. It engages the Prefetcher (Chapter 12).

It pulls data before you ask for it. The cost of "looking" drops to nearly zero (a few cycles, hidden by overlap).

If your data is in a Linked List (Scattered), the CPU is blind. It cannot predict the next address. It must wait for memory (Latency). The cost explodes (20-100 cycles).

O(N) Array is fast. O(N) List is slow. They are the same Big-O. They are different Physics.

Physics Lens: Contiguous = Prefetcher Win. Scattered = Latency Loss.
Experiment: Compare scanning 20 items.
Array: ~1 cycle per item (prefetched) → Total Cycles ~20.
List: ~20 cycles per hop (waiting on memory) → Total Cycles ~400.
Looking hurts when you have to walk far to see the next item.

Bandwidth is Finite

Even when each individual look is fast, looking at everything consumes Memory Bandwidth.

Every byte you read clogs the bus. If you have 10 GB of data, a Linear Scan saturates the link between CPU and RAM.

While you scan, nothing else runs fast — not networking, not rendering, not the OS itself.

Brute force is expensive.

We cannot afford to scan memory forever. We are bound by the speed of light and the width of the bus.

To go faster, we must stop Looking. We must start Ignoring.

But Ignoring is not free. To ignore safely, we must make decisions. Decisions introduce branches. And branches bring the Oracle back into the game.

Hypotheses
Why not just use a Hash Map (O(1))?

Hash Maps are O(1) on average, but that "1" is expensive. It involves hashing the key (math) and jumping to a random bucket (Latency/Cache Miss). For small N (e.g., < 100), a linear scan of an Array (1 cycle per item) is often faster than a single Hash Map lookup (100+ cycles).