Marketing Measurement Isn’t a Science — But It Can Be

Marketing loves data but fears uncertainty. To earn credibility, measurement must grow up: testing, validating, and embracing the science behind every dollar spent.

Over the years, marketing has been called creative, qualitative, and intuitive, all of which are strengths in their own right. But when it comes to measurement, that legacy becomes a liability. Too often, vanity metrics serve more as post-hoc justifications than decision-making tools.

The irony is that measurement should be the most scientific part of marketing. We have tools, models, and data. We run tests and generate forecasts. However, the results regularly lack many of the qualities we expect from good science, including rigor, transparency, repeatability, and, most importantly, falsifiability.

In a world of ever-increasing economic uncertainty, marketing worth its spend needs to treat measurement as a scientific discipline. This doesn’t mean turning marketers into statisticians. Instead, it means adopting a mindset and a set of practices that create better, faster feedback loops, ultimately leading to more confidence in the decisions that marketing leaders make.

What Science Measures (and Marketing Often Doesn’t)

In most scientific disciplines, measurement goes beyond simply reporting what happened by testing hypotheses, validating mechanisms, and falsifying assumptions. Importantly, scientific results need to be independently verified. Newton didn’t ask everyone to trust his interpretation of gravity; he showed them the math, the experiment, and the results.

In marketing, we unfortunately don’t always have that luxury. Budgeting decisions are made based on aggregate models and historical patterns. Assumptions go untested. Models tend to get built and rolled out before they can be validated. Once a number hits a dashboard, it’s often treated as truth, without an audit trail or uncertainty range in sight.

In my work with large marketing teams, this has led to three major issues:

  • Leaders act on metrics that can’t be verified
  • Models produce single-point answers when they should show a range
  • Measurement becomes a justification tool rather than a learning tool

If your model says “CTV has a 3.2x ROI,” there’s not much room for uncertainty or debate, even if the underlying data is weak or the assumptions are flawed. It may feel like science, but it’s not.

Bringing Scientific Thinking into Marketing Measurement

What would it look like to bring scientific rigor into marketing measurement? I’d argue it starts with five core principles:

  1. Make your hypotheses explicit. Before analyzing a channel, campaign, or creative strategy, articulate what you believe to be true. For example, “This media investment will drive incremental conversions,” or “This offer will increase account funding.” When you write the hypothesis down, you give yourself the chance to test it properly (and to learn when you’re wrong).
  2. Design tests with counterfactuals in mind. Scientific experiments rely on a control group, and marketing experiments should do the same. Geo holdouts, audience splits, and staggered rollouts can all measure not just what happened, but what would have happened without the spend. If you’re not doing this, you’re measuring correlation, not incrementality (the first sketch after this list shows the math).
  3. Prioritize falsifiability. The goal of measurement shouldn’t be just to prove something works. You also need to disprove what doesn’t work. That means that if your current model can’t be wrong, it’s not useful. Ask yourself: “What would it take for this measurement to tell me this channel isn’t working?” If the answer is at all unclear, your measurement isn’t falsifiable.
  4. Forecast, then validate. Most marketers use MMMs and other models to explain past results. A better approach is to use those models to make predictions and then check whether those predictions came true (the second sketch after this list shows one simple check). This is how science builds confidence in models. Forecast validation is the most straightforward way to determine whether your measurement is valid or merely a complex calculation.
  5. Embrace uncertainty. Every scientific discipline quantifies uncertainty, and marketing needs to do the same. Don’t just report that Meta drove $1.3M in sales. Instead, say that based on your model, you expect Meta to drive between $1M and $1.6M, and use that range to plan. Confidence intervals like these separate real measurement from guesswork; the final sketch after this list shows one way to produce such a range.
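
To make the counterfactual in point 2 concrete, here is a minimal sketch in Python. All conversion counts are hypothetical, and a real geo test would also need matched markets and significance testing; the point is simply that incrementality is the exposed outcome minus the held-out outcome.

```python
# A hedged sketch of geo-holdout incrementality math. All numbers are
# hypothetical; real tests need matched markets and significance testing.
import statistics

exposed = [1240, 1180, 1310, 1275]   # weekly conversions in geos with spend
holdout = [1020, 995, 1115, 1060]    # weekly conversions in matched held-out geos

mean_exposed = statistics.mean(exposed)
mean_holdout = statistics.mean(holdout)

# Incrementality: what happened minus what would have happened without
# the spend (the holdout stands in for the counterfactual).
incremental = mean_exposed - mean_holdout
lift = incremental / mean_holdout

print(f"Incremental conversions per geo-week: {incremental:.0f}")
print(f"Lift vs. counterfactual: {lift:.1%}")
```

In practice, you’d choose holdout geos that match the exposed geos on baseline volume and seasonality before trusting the lift number.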
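
Point 4 needs very little machinery. In this sketch the monthly revenue figures are invented, and MAPE is just one reasonable error metric, but it shows the basic loop: forecast first, then score the forecast against actuals.

```python
# A hedged sketch of "forecast, then validate." Revenue figures ($M) are
# invented; MAPE is one of several reasonable accuracy metrics.
forecast = {"Jan": 1.10, "Feb": 1.25, "Mar": 1.40}  # predictions made ahead of time
actual   = {"Jan": 1.05, "Feb": 1.32, "Mar": 1.21}  # what the business actually booked

# Mean absolute percentage error: how far off the model was, on average.
errors = [abs(forecast[m] - actual[m]) / actual[m] for m in actual]
mape = sum(errors) / len(errors)

print(f"MAPE: {mape:.1%}")  # agree on a threshold in advance, e.g. revisit the model above 15%
```

The discipline is in the sequencing: the forecast has to be written down before the actuals arrive, or the model is only ever explaining the past.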
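
Finally, for point 5, a range like “$1M to $1.6M” can come straight from a bootstrap. The per-store sales samples below are hypothetical; the resampling itself is the standard percentile bootstrap.

```python
# A hedged sketch of a percentile-bootstrap interval. The per-store weekly
# incremental sales samples ($K) are hypothetical.
import random

samples = [38, 52, 41, 60, 47, 55, 35, 49, 58, 44]

random.seed(42)                      # reproducible resampling
boot_means = []
for _ in range(10_000):
    resample = random.choices(samples, k=len(samples))  # sample with replacement
    boot_means.append(sum(resample) / len(resample))

boot_means.sort()
low = boot_means[int(0.025 * len(boot_means))]
high = boot_means[int(0.975 * len(boot_means))]

point = sum(samples) / len(samples)
print(f"Point estimate: ${point:.0f}K per store-week")
print(f"95% interval:   ${low:.0f}K to ${high:.0f}K")
```

A wide interval isn’t a failure of the model; it’s an honest statement of how much the data actually support.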

Changing the Role of Measurement

Measurement shouldn’t end with reporting. It should guide planning. That’s why when CMOs ask where to invest the next $5 million, the answer needs to come from a model that’s transparent, testable, and tied to business outcomes, along with a clear sense of how confident the team is in that recommendation.

This approach gives marketing measurement a scientific foundation: clear hypotheses, consistent testing, an honest reflection of uncertainty, and a willingness to revise the plan when the data don’t hold up. You don’t need a PhD to work this way. You do need a culture built around learning, where getting it “wrong” is baked into the (scientific) method. Because that’s the only way you’ll actually get marketing right.