Goodhart's Law

Goodhart's Law explains why metrics lose their usefulness the moment you start optimizing for them directly.


When a measure becomes a target, it ceases to be a good measure.

Plausibility Index: 4.8/5 — Rock Solid

Extensively documented across economics, business, and public policy with countless real-world examples.

The quick version

Named after economist Charles Goodhart, this principle reveals a fundamental problem with measurement: once people know they're being judged by a specific metric, they'll game the system to hit that number, often destroying what the metric was supposed to measure in the first place. It's why teaching to the test makes test scores meaningless, and why focusing on quarterly profits can kill long-term value.

Origin story

In 1975, British economist Charles Goodhart was wrestling with a problem that would sound familiar to anyone who's ever worked in a modern corporation. The Bank of England was trying to control inflation by targeting money supply growth, but something weird kept happening: whenever they focused on a particular monetary measure, that measure would start behaving erratically and lose its predictive power.

Goodhart crystallized this frustration into what would become one of the most quoted laws in economics: "Any observed statistical regularity will tend to collapse once pressure is placed upon it for control purposes." In plain English: the moment you turn a measurement into a target, people will find ways to hit that target that completely miss the point.

The insight wasn't entirely new—economists had noticed similar patterns before—but Goodhart gave it a name and a framework that resonated far beyond monetary policy. His law captured something universal about human behavior: we're incredibly good at optimizing for whatever we're measured on, even when that optimization defeats the original purpose.

What started as an observation about British banking policy has since become a fundamental principle in fields ranging from education to software engineering to public health. It turns out that the tendency to game metrics is as predictable as gravity—and often just as destructive.

How it works

Goodhart's Law works because of a fundamental mismatch between what we want to measure and what we can actually measure. Think of it like trying to judge a restaurant's quality by how quickly they serve food. Speed might correlate with good operations initially, but the moment the restaurant knows you're judging them on speed, they'll start cutting corners—reheating pre-made food, reducing portion sizes, or rushing customers out the door.

The mechanism is deceptively simple. First, you identify a metric that seems to capture something important—customer satisfaction scores, lines of code written, hospital readmission rates. Initially, this metric works well because people are naturally doing the thing you want, and the metric reflects that reality. But then you make the metric official, tie it to rewards or punishments, and suddenly the game changes.

Now people start asking: "How can I move this number?" rather than "How can I improve the underlying thing this number represents?" They discover that it's often easier to manipulate the metric than to improve the reality it's supposed to measure. A teacher can raise test scores by teaching test-taking tricks instead of actual knowledge. A salesperson can hit their quota by pushing customers toward cheaper products or delaying sales to the next quarter.

The tragedy is that this optimization often makes the original problem worse while making the numbers look better. You end up with students who can't think critically but ace standardized tests, or companies that hit their quarterly targets while destroying long-term customer relationships. The metric becomes a lie that everyone pretends to believe.
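The dynamic above can be reduced to a toy simulation. This is a hypothetical model, not anything from Goodhart's work: an agent splits effort between genuinely improving quality and gaming the measurement, and the payoff numbers (gaming moves the metric three times faster but adds nothing real) are invented purely for illustration.

```python
# Toy model: an agent splits one unit of effort between real improvement
# and gaming the metric. Payoffs are assumed for illustration: gaming
# inflates the metric 3x faster than real work, but adds zero true quality.

def outcomes(gaming_effort: float) -> tuple[float, float]:
    """Return (metric, true_quality) for a gaming-effort share in [0, 1]."""
    real_effort = 1.0 - gaming_effort
    true_quality = real_effort                   # only real work improves reality
    metric = real_effort + 3.0 * gaming_effort   # gaming moves the number faster
    return metric, true_quality

# Before the metric becomes a target: no reason to game it.
honest_metric, honest_quality = outcomes(gaming_effort=0.0)

# After the metric becomes a target: choose whatever maximizes the number.
best_split = max((g / 10 for g in range(11)), key=lambda g: outcomes(g)[0])
gamed_metric, gamed_quality = outcomes(best_split)

print(honest_metric, honest_quality)  # 1.0 1.0
print(gamed_metric, gamed_quality)    # 3.0 0.0 -- metric up, reality gone
```

Under these assumed payoffs the optimizer pours everything into gaming: the reported number triples while the underlying quality drops to zero, which is the gap between "move this number" and "improve the thing this number represents."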

Real-world examples

Wells Fargo's Account Opening Scandal

Wells Fargo incentivized employees to open new customer accounts, reasoning that more accounts meant happier, more engaged customers. The metric made sense initially—satisfied customers often do open multiple accounts. But when the bank started tying bonuses and job security to account-opening numbers, employees began creating millions of fake accounts without customer knowledge. The metric that was supposed to measure customer engagement instead measured employee desperation. The scandal cost Wells Fargo billions in fines and destroyed its reputation, proving that hitting the target had completely missed the point.

Soviet Nail Factory Production

In the Soviet Union, central planners tried to measure factory efficiency by setting production targets. When they measured nail factories by the number of nails produced, factories made millions of tiny, useless nails. When planners switched to measuring by total weight of nails, the same factories started making giant, impractical railroad spikes. The metric kept changing, but the problem remained: workers optimized for the measurement rather than making nails that people actually needed. This became a classic example of how even well-intentioned metrics can produce absurd outcomes.
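The nail-factory story comes down to simple arithmetic. The numbers below are illustrative, not historical: give a factory a fixed tonne of steel, and the count target is won by the tiniest nail, while the weight target is satisfied identically by every size and so says nothing about usefulness.

```python
# Illustrative numbers (not historical): one tonne of steel, and candidate
# nail sizes ranging from a 1 g pin to a 10 kg spike.

STEEL_G = 1_000_000                     # steel budget, in grams
sizes_g = [1, 10, 100, 1_000, 10_000]   # candidate nail weights, in grams

def nails_made(nail_g: int) -> int:
    """How many nails of this size the steel budget yields."""
    return STEEL_G // nail_g

# Target: nail COUNT -> the tiny, useless nail wins by a huge margin.
best_for_count = max(sizes_g, key=nails_made)
print(best_for_count, nails_made(best_for_count))   # 1 1000000

# Target: total WEIGHT -> every size consumes the whole tonne, so the
# metric cannot distinguish a useful nail from a giant spike; factories
# simply made whatever was cheapest to produce per kilogram.
total_weight = {g: g * nails_made(g) for g in sizes_g}
print(set(total_weight.values()))   # {1000000} -- identical for every size
```

Neither target encodes "a nail someone can actually use," so optimizing either one produces absurd output while the reported numbers look excellent.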

Emergency Room Wait Times

Hospitals began tracking emergency room wait times to improve patient care, which seemed perfectly reasonable—nobody wants to wait hours for medical attention. But when hospital administrators started getting judged on these numbers, some facilities began gaming the system in creative ways. They'd start the 'wait time' clock only after patients were fully registered, or they'd move patients to hallway beds to technically end their 'waiting' even though they hadn't seen a doctor. The metric that was supposed to ensure faster care sometimes just created the illusion of efficiency while patients still suffered from delayed treatment.

Criticisms and limitations

The biggest criticism of Goodhart's Law is that it can become an excuse for not measuring anything at all. Some managers use it to argue against any kind of performance metrics, claiming that all measurement is futile. This misses the point—the problem isn't measurement itself, but rather treating any single metric as the whole truth. Good measurement systems use multiple indicators and regularly rotate what they emphasize.

Another limitation is that Goodhart's Law doesn't tell you how to design better metrics, only why current ones might fail. It's more diagnostic than prescriptive. Knowing that people will game your system doesn't automatically tell you how to make an ungameable system—which might be impossible anyway.

Some critics also argue that certain metrics are more resistant to gaming than others. Financial metrics like profit or revenue, while imperfect, are harder to manipulate than subjective measures like employee satisfaction scores. The law might apply more strongly to some types of measurement than others.

Finally, there's the question of intent. Goodhart's Law assumes that people will try to game metrics, but this isn't always true. In high-trust environments with strong cultural alignment, people might continue optimizing for the underlying goal even when they could easily game the metric. The law works best as a warning about human nature under pressure, not as an iron rule of behavior.

Related concepts

Campbell's Law

A more specific version focusing on how social indicators become corrupted when used for decision-making.

Cobra Effect

Describes how incentive systems can backfire and make problems worse instead of better.

McNamara Fallacy

The error of focusing only on quantifiable metrics while ignoring important qualitative factors.

Go deeper

The Tyranny of Metrics by Jerry Muller (2018) — Comprehensive look at how metric fixation damages institutions and decision-making.

Seeing Like a State by James Scott (1998) — Classic analysis of how simplified measurements can lead to disastrous policy outcomes.

Problems of Monetary Management: The U.K. Experience by Charles Goodhart (1975) — The paper in which Goodhart first articulated his famous law.

Footnotes

  1. Goodhart's original formulation was specifically about monetary policy, but the principle has been generalized far beyond economics.
  2. The law is often quoted as 'When a measure becomes a target, it ceases to be a good measure'—a pithier rephrasing, usually credited to anthropologist Marilyn Strathern, that captures the same idea.
  3. Some organizations try to combat Goodhart's Law by using balanced scorecards with multiple metrics, but this often just creates multiple ways to game the system.