The History Tables
In Chapter 35, we saw that decoding is a funnel. To keep it full, we must fetch instructions before we know if we need them.
This requires predicting the future. We call it "Branch Prediction," but a better name is Pattern Recognition.
The CPU doesn't guess. It remembers.
The Global History Register (GHR)
The CPU keeps a tiny diary of the last N branches you took.
If you took the last 3 branches (Taken, Taken, Taken), the GHR records 111.
This register is used as an Index into a table of counters, the Pattern History Table (PHT). It asks: "The last time we saw the pattern 111, what happened next?"
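The GHR is just a shift register: each branch outcome is shifted in from the right, and the oldest bit falls off. A minimal sketch (the names GHR_BITS and update_ghr are illustrative, not from any real ISA):

```python
# The Global History Register as a shift register.
GHR_BITS = 3

def update_ghr(ghr: int, taken: bool) -> int:
    """Shift the latest branch outcome in from the right, keeping GHR_BITS bits."""
    return ((ghr << 1) | int(taken)) & ((1 << GHR_BITS) - 1)

ghr = 0
for outcome in (True, True, True):   # three taken branches in a row
    ghr = update_ghr(ghr, outcome)

print(f"{ghr:03b}")   # prints "111" -- this value indexes slot 7 of the counter table
```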
1. Pattern: Try alternating: Taken, Not Taken, Taken, Not Taken...
2. Watch the GHR: the bits shift (1, 0, 1, 0...).
3. Watch the PHT: specific slots (like 1010) turn Green (Taken) or Red (Not Taken). The CPU is learning your specific oscillation.
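The experiment above can be simulated in a few lines. This is a sketch of a simple two-level predictor (a 4-bit GHR indexing a table of 2-bit counters); after a short warm-up, the slots for the histories 1010 and 0101 saturate and the alternating pattern is predicted almost perfectly:

```python
GHR_BITS = 4
ghr = 0
pht = [1] * (1 << GHR_BITS)   # one 2-bit counter per history pattern, start weakly not-taken

def predict() -> bool:
    return pht[ghr] >= 2       # counter states 2 and 3 mean "predict Taken"

def train(taken: bool) -> None:
    global ghr
    # Nudge the counter for the current history, then shift the outcome into the GHR.
    pht[ghr] = min(3, pht[ghr] + 1) if taken else max(0, pht[ghr] - 1)
    ghr = ((ghr << 1) | int(taken)) & ((1 << GHR_BITS) - 1)

outcomes = [i % 2 == 0 for i in range(200)]   # Taken, Not Taken, Taken, ...
hits = 0
for taken in outcomes:
    hits += (predict() == taken)
    train(taken)

print(hits / len(outcomes))   # near 1.0 once the 1010 and 0101 slots saturate
```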
The Saturating Counter
Inside each table slot is a simple 2-bit counter.
- Strongly Taken (3)
- Weakly Taken (2)
- Weakly Not Taken (1)
- Strongly Not Taken (0)
This "Hysteresis" prevents the CPU from panicking after one outlier. It takes two consecutive wrong guesses to flip a strong prediction to the opposite direction.
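The hysteresis is easy to see in code. A small sketch of the 2-bit counter (the class name is illustrative): one outlier moves a strong state to a weak one without changing the prediction; only a second miss in a row flips it.

```python
class SaturatingCounter:
    """2-bit counter: 0=Strongly Not Taken, 1=Weakly Not Taken, 2=Weakly Taken, 3=Strongly Taken."""
    def __init__(self, state: int = 3):
        self.state = state

    def predict(self) -> bool:
        return self.state >= 2   # True = predict Taken

    def update(self, taken: bool) -> None:
        # Saturate at the ends: 3 stays 3 on Taken, 0 stays 0 on Not Taken.
        self.state = min(3, self.state + 1) if taken else max(0, self.state - 1)

c = SaturatingCounter(3)   # Strongly Taken
c.update(False)            # one outlier
print(c.predict())         # prints True -- still predicts Taken (now state 2)
c.update(False)            # second wrong guess in a row
print(c.predict())         # prints False -- the prediction has flipped (state 1)
```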
The Assumption
The History Tables work for one reason only: the future usually looks like the past.
Loops repeat. Branches stabilize. Programs are boring. The CPU exploits this boredom to run fast.
The moment your code stops being boring — when patterns disappear — the predictor collapses.
Why does Randomness destroy performance?
If your data is truly random (e.g., the comparison branches taken while sorting random numbers), the GHR fills with noise and the PHT slots are accessed chaotically. The counters never saturate, prediction accuracy drops toward 50% (a coin flip), and the pipeline flushes constantly.
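The collapse is measurable. Here is a sketch that runs the same simple two-level predictor (an 8-bit GHR over 2-bit counters, built here for illustration) on a regular loop trace and on a coin-flip trace:

```python
import random

def run(outcomes, ghr_bits: int = 8) -> float:
    """Measure a simple two-level predictor's accuracy on a branch trace."""
    ghr, pht, hits = 0, [1] * (1 << ghr_bits), 0
    mask = (1 << ghr_bits) - 1
    for taken in outcomes:
        hits += ((pht[ghr] >= 2) == taken)                             # predict
        pht[ghr] = min(3, pht[ghr] + 1) if taken else max(0, pht[ghr] - 1)  # train
        ghr = ((ghr << 1) | int(taken)) & mask                         # shift history
    return hits / len(outcomes)

random.seed(0)
loop = [i % 4 != 3 for i in range(10_000)]               # T, T, T, N repeating: a boring loop
noise = [random.random() < 0.5 for _ in range(10_000)]   # truly random branches

print(run(loop))    # near 1.0: every history maps to one outcome, counters saturate
print(run(noise))   # near 0.5: no pattern to learn, a coin flip
```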
The CPU knows you.
It has learned your loops. It has memorized your data patterns.
But since this mechanism is deterministic, it can be exploited.
What happens when we write code specifically designed to poison this memory?
It is time to break the learning machine.