Why Distance Lies
In Algorithms class, you learn Dijkstra's Algorithm or BFS.
You learn that the distance between Node A and Node B is the number of "Hops".
Physics disagrees.
The Hop Delusion
To an algorithm, all Hops are equal. Moving from A -> B costs "1".
To a CPU, Hops are Memory Accesses.
If A and B are on the same Cache Line, the Hop is free (1-4 cycles).
If A and B are on different memory pages, the Hop is expensive (100+ cycles, possibly a TLB miss on top).
Logical distance (1 hop) can hide physical costs that vary by 100x.
The cost comes not from distance alone, but from dependency: each hop must wait for the previous one to resolve.
Algorithms count edges. Hardware pays for memory accesses. Minimizing hops does not minimize time if each hop is slow.
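A toy cost model makes the asymmetry concrete. This is not a measurement; the 64-byte line, 4 KB page, and cycle counts are illustrative assumptions. One logical hop, three very different prices depending on where the target byte lives:

```python
LINE = 64      # typical cache line size in bytes (assumption)
PAGE = 4096    # typical page size in bytes (assumption)

def hop_cost(addr_a: int, addr_b: int) -> int:
    """Illustrative cycle cost of following one pointer from addr_a to addr_b."""
    if addr_a // LINE == addr_b // LINE:
        return 4        # same cache line: effectively free
    if addr_a // PAGE == addr_b // PAGE:
        return 40       # same page, different line: a cache-hierarchy trip
    return 100          # distant page: DRAM latency, possible TLB miss

print(hop_cost(0, 32))        # same line      -> 4
print(hop_cost(0, 512))       # same page      -> 40
print(hop_cost(0, 1 << 20))   # far away       -> 100
```

Every call is "1 hop" to the algorithm. The return value is what the hardware actually pays.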
1. Grid Graph: Neighbors are physically close. The BFS wave flows quickly.
2. Random Graph: Neighbors are scattered across memory. The BFS wave stutters violently, stalling on latency at every hop.
Both graphs have 16 nodes. Both traversals visit 16 nodes. One is fast. One is slow.
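You can see the difference without a profiler. The sketch below models a 4x4 grid whose nodes live in an array, with 4 nodes per cache line (an assumption: 16-byte nodes, 64-byte lines). It counts how many edges connect nodes that share a line. The grid layout is the identity; the "random" layout is a fixed scrambling permutation so the result is reproducible:

```python
# 4x4 grid: node id = row*4 + col; each edge is one "hop".
edges = []
for r in range(4):
    for c in range(4):
        i = r * 4 + c
        if c < 3: edges.append((i, i + 1))   # horizontal neighbor
        if r < 3: edges.append((i, i + 4))   # vertical neighbor

LINE_SLOTS = 4   # nodes per cache line (assumes 16-byte nodes, 64-byte lines)

def cheap_hops(slot_of):
    """Count edges whose endpoints land on the same cache line under a layout."""
    return sum(slot_of[a] // LINE_SLOTS == slot_of[b] // LINE_SLOTS
               for a, b in edges)

grid_layout = list(range(16))                    # node i stored at slot i
scattered   = [(5 * i) % 16 for i in range(16)]  # a fixed scrambling

print(cheap_hops(grid_layout), "of", len(edges))  # 12 of 24
print(cheap_hops(scattered),   "of", len(edges))  # 0 of 24
```

Same 16 nodes, same 24 edges. In the grid layout, half the hops stay on one cache line. In the scattered layout, every single hop crosses a line boundary.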
The Invisible Wall
This is why massive graph databases often crawl.
They traverse millions of "edges". But edges are just pointers. If your graph is random (like a Social Network), every friend-of-a-friend lookup is a jump into the abyss of RAM.
The Pre-fetcher is helpless against random graphs. It cannot guess who your friends are.
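Why is the prefetcher helpless? A minimal simulation shows it. Real prefetchers are more sophisticated, but model the simplest one: after you touch cache line L, the hardware speculatively fetches line L+1. That guess is right almost every time during a sequential scan and almost never during a random walk:

```python
import random

LINE = 8   # elements per cache line (illustrative)

def prefetch_hits(order):
    """Toy next-line prefetcher: touching line L prefetches line L+1."""
    prefetched = set()
    hits = 0
    for i in order:
        line = i // LINE
        if line in prefetched:
            hits += 1            # the data was already on its way
        prefetched.add(line + 1)  # hardware bets we keep moving forward
    return hits

n = 64
sequential = list(range(n))
shuffled = list(range(n))
random.Random(0).shuffle(shuffled)   # fixed seed for reproducibility

print(prefetch_hits(sequential))   # 56: everything after the first line is a hit
print(prefetch_hits(shuffled))     # far fewer: the guess is usually wrong
```

Arrays let the hardware predict the future. Pointer-rich graphs do not: the next address only exists after the current load resolves, so there is nothing for the prefetcher to guess.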
The Abstraction is Leaking.
We built elaborate structures to organize our logic. Arrays, Lists, Trees, Graphs.
But the machine only knows one thing: The Cache Line.
If your structure respects the line, it lives. If it fights the line, it dies.
As the structure grows, it eventually outgrows the cache that protects it.
It is time to watch the structure collapse.
Why is A* (A-Star) faster than Dijkstra?
A* uses a heuristic to explore fewer nodes. It reduces the number of Hops. However, if the graph is random/scattered, every Hop it does take is still a Cache Miss. A* saves Logic (steps), but it cannot fix Physics (latency).
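A quick way to see "saves Logic, not Physics" is to count node expansions. The sketch below runs the same priority-queue search on an open 20x20 grid twice: once with a zero heuristic (which is Dijkstra) and once with the Manhattan distance (A*). The grid size and tie-breaking rule are choices for this illustration, not part of either algorithm:

```python
import heapq

W = H = 20
start, goal = (0, 0), (19, 19)

def neighbors(p):
    x, y = p
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nx, ny = x + dx, y + dy
        if 0 <= nx < W and 0 <= ny < H:
            yield (nx, ny)

def search(h):
    """Dijkstra when h is always 0; A* otherwise. Returns nodes expanded."""
    dist = {start: 0}
    done = set()
    pq = [(h(start), 0, start)]   # (f, -g, node); deeper nodes win f-ties
    expanded = 0
    while pq:
        _, neg_g, p = heapq.heappop(pq)
        if p in done:
            continue
        done.add(p)
        expanded += 1             # each expansion = memory accesses the CPU pays for
        if p == goal:
            return expanded
        g = -neg_g
        for q in neighbors(p):
            ng = g + 1
            if ng < dist.get(q, 1 << 30):
                dist[q] = ng
                heapq.heappush(pq, (ng + h(q), -ng, q))
    return expanded

manhattan = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
dijkstra_expanded = search(lambda p: 0)
astar_expanded = search(manhattan)
print(dijkstra_expanded, astar_expanded)  # A* expands far fewer nodes
```

A* wins by touching fewer nodes, not by making any touch cheaper. If each of those nodes is a cache miss, A* pays the same per-hop price as Dijkstra; it just pays it fewer times.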