Why Order Fails
In Chapter 26, we saw that Arrays are too rigid. To insert an item, we had to move everything.
So we built a Tree. A structure of linked nodes. Now we can insert anywhere just by changing a pointer.
But we paid a terrible price. We destroyed Traversal.
The Latency Chain
To read a Tree, you must follow pointers: Node -> Left -> Right.
The CPU doesn't know where Node.Left is until it finishes loading Node.
This is a Dependent Load. The CPU cannot prefetch. It cannot overlap. It must wait for the full latency of memory (hundreds of CPU cycles) at every single step.
The problem is not pointers. The problem is that order forces each step to depend on the previous one, serializing time itself.
1. Array Scan: 15 items. The CPU flies. (~15 cycles).
2. Tree Walk: 15 items. The CPU stalls at every node. (15 dependent loads × hundreds of cycles each — thousands of cycles).
We preserved the logical order, but we lost the physical order.
The Pointer Tax
Every pointer is a promise of a future cache miss.
In an Array, 15 items fit in 1 or 2 Cache Lines. In a Tree, 15 items might be scattered across 15 different Pages.
Each step fetches an entire cache line just to read a single pointer-sized value.
Your "fancy" algorithm is drowning in TLB misses and cache misses. O(N) on paper is O(N * Latency) in silicon.
We are lost in space.
We fragmented memory to gain flexibility. In doing so, we forced the CPU to wait at every step.
If simple traversal is this broken, what happens when we try to find the Shortest Path through a maze of pointers?
We will find that distance is a lie.
What about B-Trees? Aren't they trees?
B-Trees get "Order" right. Instead of 1 item per node, they pack K items per node (e.g., sized to fit a Cache Line). Scanning inside a node is fast (Array physics), and we only jump a pointer occasionally. B-Trees win because they are "Arrays disguising themselves as Trees."