Measurement is both a compass and a trap. It tells us where we are, where we are heading, and how far we’ve gone. Yet the same act of measuring can twist incentives, distort reality, and lead us astray. Two intellectual giants captured this paradox: Peter Drucker with his famous dictum, “If you can’t measure it, you can’t improve it”, and economist Charles Goodhart with his cautionary law, “When a measure becomes a target, it ceases to be a good measure.”
At first glance, Drucker and Goodhart seem to stand on opposite ends of a spectrum, one urging us to measure everything for progress, the other warning that measurement itself may betray us. But if we follow their ideas carefully, a more nuanced truth emerges: they are not adversaries but complementary forces, like an engine and a set of brakes that together allow us to move safely and effectively through the landscape of improvement.
This article explores both insights in depth: what they mean, their misinterpretations, their interplay, and how they form a complete philosophy of measurement.
Peter Drucker’s dictum is deceptively simple: improvement requires feedback. Without some form of measurement, whether numerical or qualitative, you cannot know if your efforts are working. Measurement defines what “better” looks like, creating a standard against which to compare progress; the human mind, after all, has evolved to work comparatively. It establishes feedback loops that allow us to detect whether actions are pushing us forward or dragging us backward. Most importantly, it grounds decisions in evidence rather than hunches. In management, education, or personal growth, metrics are the engine of improvement: they accelerate learning and drive purposeful action. Without them, we are simply guessing.
Despite its clarity, Drucker’s idea is often misapplied. Many interpret it as suggesting that only numbers matter, reducing the rich texture of human experience to cold spreadsheets or ticked-off to-do lists. Yet Drucker never excluded qualitative insights; he meant measurement in the broad sense, including observations and subjective evaluations. Others believe his dictum means that unmeasurable things cannot improve, but history proves otherwise: art, ethics, and relationships evolve without strict metrics, though measurement can still sharpen these processes. Another common trap is assuming we should measure everything, but over-measurement creates noise and false precision. Finally, some confuse measurement with control, believing that tracking numbers automatically solves problems. In truth, metrics merely signal; they do not repair.
Drucker’s principle, then, is about necessity rather than sufficiency. Measurement is the starting point of improvement, not its guarantee.
Goodhart’s Law warns us that once a metric becomes a target, it loses its value as an indicator. Metrics are useful only when they serve as neutral reflections of reality. The moment they are tied to incentives or rewards, people inevitably learn to optimize the number itself, often at the expense of the underlying goal. A test score stops measuring learning when students are taught to memorize answers or game exams. A hospital’s average discharge time stops measuring patient wellbeing when staff rush people out prematurely. A company’s quarterly revenue stops reflecting long-term health when executives manipulate short-term sales figures.

The mechanisms of failure are diverse. Metrics are gamed when people find loopholes that inflate the number without improving the system. They create perverse incentives when optimizing the metric actively harms the larger goal, such as schools lowering standards to boost pass rates. And they cause overfitting when organizations become so fixated on one number that they lose sight of the bigger picture. In each case, the measure has ceased to measure what it was meant to.
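To make this decoupling concrete, here is a small, purely illustrative simulation in Python (the effort split, the coefficients, and the scenario are invented for this sketch, not drawn from any real study). While the proxy is only a neutral indicator, effort flows into genuine work and the number tracks the true goal; once the proxy becomes the target and can be inflated by gaming, the number keeps rising while the underlying quality collapses.

```python
def true_quality(real_effort):
    # The underlying goal: only genuine work improves it.
    return 10 * real_effort

def proxy_metric(real_effort, gaming_effort):
    # The measured number: it can also be inflated by gaming,
    # and gaming is cheaper per point than real work.
    return 10 * real_effort + 15 * gaming_effort

def spend_budget(budget, proxy_is_target):
    """Allocate a fixed effort budget. Once the proxy becomes the target,
    effort shifts toward whatever raises the number fastest."""
    if proxy_is_target:
        real, gaming = 0.3 * budget, 0.7 * budget
    else:
        real, gaming = budget, 0.0
    return true_quality(real), proxy_metric(real, gaming)

for label, as_target in [("proxy as neutral indicator", False),
                         ("proxy as target", True)]:
    quality, proxy = spend_budget(budget=10, proxy_is_target=as_target)
    print(f"{label}: proxy = {proxy:.0f}, true quality = {quality:.0f}")
```

Running it, the proxy climbs above its “honest” value while true quality drops to a fraction of what it was: Goodhart’s Law in miniature.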
Goodhart’s insight, too, is often misunderstood. Some interpret it as saying that measurement itself is bad, but this is inaccurate: the law presupposes that measurement is already in use. Others assume it implies that all metrics will eventually fail, which is not true: metrics fail when they are rigidified into sole targets and never adapted. Finally, some argue that we should abandon metrics altogether, but this is impossible, because systems need feedback to function. Goodhart’s Law is not anti-measurement; it is anti-naïve measurement.
No measurement can be entirely immune to Goodhart’s Law, because the problem arises not from the metric itself but from the act of turning it into a target. When optimization pressure is applied, even the best proxy eventually loses correlation with the underlying goal. However, measurements can be made more resistant to Goodharting. One approach is to use ensembles of independent metrics, making it harder to exploit them all at once. Another is to design dynamic or adversarially shifting metrics that adapt against gaming. Process-based metrics, which evaluate how results are achieved rather than only outcomes, also reduce vulnerability. Human oversight and domain-specific constraints (like physical limits on energy or time) can act as anchors. The goal, then, is not to design a “Goodhart-proof” metric, which is impossible, but to create ones that fail gracefully, resist exploitation, and reveal gaming quickly enough to enable correction.
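As a rough sketch of the “ensemble of independent metrics” idea (the metric names, values, and threshold below are hypothetical, chosen only for illustration), one simple pattern is to average several indicators that should move together and flag the result when they diverge sharply, since gaming typically inflates one number while leaving the others behind:

```python
from statistics import mean, pstdev

def ensemble_score(metrics, divergence_threshold=0.25):
    """Combine several independent metrics (each normalized to 0..1) and
    flag the result when they disagree too much, a common symptom of gaming."""
    values = list(metrics.values())
    score = mean(values)
    divergence = pstdev(values)  # spread across the ensemble
    return score, divergence, divergence > divergence_threshold

# Hypothetical hospital example: discharge speed looks excellent,
# but readmissions and patient-reported outcomes tell another story.
metrics = {
    "discharge_speed": 0.98,            # the number being optimized directly
    "readmission_rate_inverse": 0.35,   # higher is better
    "patient_reported_outcome": 0.40,
}

score, divergence, suspect = ensemble_score(metrics)
print(f"ensemble score = {score:.2f}, divergence = {divergence:.2f}, "
      f"possible gaming = {suspect}")
```

The particular threshold matters less than the design principle: a single number that can be pushed in isolation is easy to game, whereas a bundle of loosely coupled indicators fails more gracefully and exposes gaming as visible disagreement.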
Despite their apparent opposition, Drucker and Goodhart have much in common. Both revolve around the link between measurement and improvement, acknowledging that without metrics, systems cannot progress effectively. Both recognize that metrics shape behavior, sometimes for the better, sometimes for the worse. Both demand careful selection of measures that align with true goals. And both are, in their own way, cautionary: Drucker warns against blindness, while Goodhart warns against distortion.