Chapter 24

The CPU Learns You

We have seen the CPU predict directions (Left/Right).

But modern programming is built on something more complex: Interfaces.

When you call shape.draw(), the CPU doesn't know where the draw function lives. It could be Square.draw today, and Circle.draw tomorrow.

This is an Indirect Branch. The target changes dynamically. This should be slow.

Yet it isn't slow, because the CPU doesn't need to know types. It simply learns recurring jump destinations.

Monomorphic vs Polymorphic

Among other predictors, the CPU keeps a table called the Branch Target Buffer (BTB). For each branch instruction, it remembers the last target address you jumped to.

If your code only ever uses Square (Monomorphic), the CPU predicts Square.draw with 100% accuracy. The interface cost vanishes.

If you use a high-entropy mix of types (Polymorphic), the CPU fights itself. It predicts Square, sees Circle, flushes, updates...

You are confusing the machine's learning model.

Physics Lens: Type Stability = Speed. Type Chaos = Pipeline Flushes.

Experiment:

1. Mono: Stable type. Speed is nearly perfect.
2. Poly: Alternating types. The BTB fails constantly.

This is why Java/V8 optimizers try to "devirtualize" calls: they want to make Polymorphism look Monomorphic to the hardware.

Capacity Limits

This learning is not infinite. The CPU's predictors are small, lossy tables.

When your program grows large enough, old patterns are evicted to make room for new ones. Code that was once predictable becomes chaotic simply because the machine ran out of memory to remember you.

This is why huge microservices often run slower than benchmarks predict. They suffer from Instruction Cache Misses and Prediction Evictions.

Hypotheses
Is Object-Oriented Programming bad?

No. But Dynamic Dispatch is not free. Using Interfaces everywhere "just in case" adds overhead if the types change frequently in a tight loop.

How do JITs fix this?

Browsers (V8) and JVMs use Inline Caching. They check the type. If it's usually a Square, they rewrite the code to say if (type == Square) call_square_directly(). They convert the Indirect Branch into a simple Conditional Branch (which is easier to predict).

The Machine is not neutral.

It rewards repetition. It punishes entropy.

If you write code that fights its assumptions, it will not argue — it will slow down.

Now we must ask: Which algorithms survive in a world that hates surprise?