When Structure Collapses
We have spent five chapters building structures to fight Latency.
Arrays for bandwidth. Binary Search for speed. Trees for insertion. Graphs for relationships.
But there is a physical limit that breaks them all: The Working Set.
The Cliff
Every algorithm works perfectly when N is small.
While your data fits in the Cache (L1/L2), random access is essentially free. You feel like a genius. Your Graph Database is flying.
Then you add one more node. The Working Set exceeds the Cache Size.
Collapse.
1. N=8 (Safe): The entire universe fits in L1 Cache. Hit Rate is 100%. Latency is 1 cycle.
2. N=32 (Collapse): The universe is too big. We Thrash. Old nodes are evicted to RAM just before we need them again. Latency spikes to 100 cycles.
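The cliff above can be reproduced with a toy LRU cache simulation. This is a sketch, not a model of real hardware: the cache capacity (16 slots), the item counts, and the pass count are illustrative numbers chosen to mirror the N=8 vs N=32 scenario. Note the pathology it exposes: a cyclic scan over a working set larger than the cache is the worst case for LRU, because every item is evicted just before it is needed again.

```python
from collections import OrderedDict

def hit_rate(n_items, cache_slots, passes=10):
    """Cyclically scan n_items through an LRU cache and report the hit rate."""
    cache = OrderedDict()
    hits = misses = 0
    for _ in range(passes):
        for key in range(n_items):
            if key in cache:
                hits += 1
                cache.move_to_end(key)         # mark as most recently used
            else:
                misses += 1
                if len(cache) >= cache_slots:
                    cache.popitem(last=False)  # evict least recently used
                cache[key] = True
    return hits / (hits + misses)

print(hit_rate(8, 16))   # fits in the cache: 0.9 (only the first cold pass misses)
print(hit_rate(32, 16))  # working set too big: 0.0 — every single access misses
```

The collapse is not gradual. Going from N=8 to N=32 does not degrade the hit rate; it destroys it, because the LRU policy evicts each node exactly one step before its next use.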
The Lie of O(N)
Big-O notation tells you how operations grow with N.
It assumes that accessing Data[0] costs the same as accessing Data[1,000,000].
It is wrong.
There is a cliff where "Constant Time" becomes "Latency Time". In modern computing, crossing this cliff is often the only optimization that matters.
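You can look for the cliff on your own machine with a crude timing sketch. The sizes below are illustrative assumptions (1,000 elements comfortably inside cache; 10,000,000 far beyond any L3), and pure Python's interpreter overhead mutes the effect considerably compared to a C version, but the per-read cost of the large array should still come out worse.

```python
import random
import time

def random_access_time(n, accesses=1_000_000):
    """Average seconds per random read from a list of n integers."""
    data = list(range(n))
    idx = [random.randrange(n) for _ in range(accesses)]
    total = 0
    t0 = time.perf_counter()
    for i in idx:
        total += data[i]
    return (time.perf_counter() - t0) / accesses

small = random_access_time(1_000)        # working set fits in cache
large = random_access_time(10_000_000)   # working set spills far past L3
print(f"small: {small * 1e9:.1f} ns/read, large: {large * 1e9:.1f} ns/read")
```

Both calls execute the identical "O(1)" operation, `data[i]`. Only the working set size changes; only the latency does not stay constant.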
Part VI Conclusion
We tried to impose Order (Sorting, Linking, Mapping) on a machine built for Chaos (Electricity).
We found that Bandwidth is king, Latency is the enemy, and Locality is the only salvation.
But who manages this memory? Who decides what lives in the Cache and what dies in RAM?
It is time to meet the Landlord.
Can't we just buy more RAM to prevent Thrashing?
Adding RAM fixes Capacity thrashing (Disk->RAM) but does not fix Latency thrashing (RAM->Cache). If your Working Set is 1GB, it fits in RAM, but it does not fit in L3 Cache (usually < 50MB). You will still thrash the cache, suffering constant Latency penalties.
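The arithmetic behind that answer can be sketched with a back-of-the-envelope expected-latency formula. The numbers reuse the chapter's stylized costs (1 cycle for a hit, 100 cycles for a miss) and assume uniform random access, so the hit rate is simply the fraction of the working set the cache can hold; real access patterns and real cache hierarchies are messier.

```python
def avg_latency(working_set_mb, cache_mb, hit_cycles=1, miss_cycles=100):
    """Expected cycles per access under uniform random access,
    assuming the hit rate equals the cached fraction of the working set."""
    hit_rate = min(1.0, cache_mb / working_set_mb)
    return hit_rate * hit_cycles + (1 - hit_rate) * miss_cycles

print(avg_latency(32, 32))    # working set fits in a 32MB L3: 1.0 cycle
print(avg_latency(1024, 32))  # 1GB working set vs 32MB L3: 96.9 cycles
```

A 1GB working set against a 32MB cache gives a ~3% hit rate, so the average access costs nearly the full miss penalty. More RAM changes neither number in that formula; only shrinking the working set, or improving Locality so the hot fraction fits, moves you off the cliff.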