What if measurement wasn’t just about proving success – but about enabling strategic doubt?

By Ana Adi and Thomas Stoeckle

AMEC’s Measurement Month 2025 arrives at a moment of clarity and momentum. The launch of the Barcelona Principles 4.0 signals something crucial: that measurement and evaluation are becoming more coherent and better integrated into the communication planning, execution and assessment process.

We applaud this update. We contributed to it. We believe it matters.

But we have a challenge.

What if the next step forward in measurement is not a new principle, method or metric, but a new mindset?

The “What if” Framework: Reflexivity as Strategic Infrastructure

In our recently published article in Public Relations Review, we introduce the “What if” framework—a practical scaffold that places reflexivity, deliberation, and dialogic adaptation at the heart of communication planning and evaluation.

It starts not from key performance indicators or conversion rates, but from a simple question:

What if?

What if we’re wrong? What if we saw this from a different perspective? What if the strategy isn’t landing? What if stakeholders are reacting differently than we expected? What if unintended consequences are emerging—but our dashboards can’t yet see them?

Rather than answering this rhetorically, the “What if” framework offers a method to build that question structurally into any communication strategy—so that measurement becomes not just a deterministic management tool of validation and control, but a space for experimentation, learning and development.

The new Barcelona Principles 4.0 mark an important evolution: they affirm that the field is maturing. But measurement still risks being too linear, too client-serving, too controlling, too focused on answering pre-defined questions instead of asking different ones.

That’s where the “What if” framework comes in. It doesn’t replace existing models or metrics—it complements them with strategic reflexivity and contingent adaptation. In short, it allows and invites humility and doubt.

So, this Measurement Month we’re inviting you to use this framework. Here are some ideas for how:

  1. Introduce a “What If” Loop into Your Evaluation Process

Design counterfactuals. Ask: what assumptions underpin our indicators? What surprises might we anticipate? Share those reflections during post-campaign reviews—not just results but risks you avoided or never saw coming.

  2. Host Dialogic Check-Ins with Clients and Stakeholders

Don’t just report. Pause and reflect: how do those receiving your messages understand them? What do they push back on?
Even short conversations can reveal blind spots—and strengthen insight.

  3. Integrate AI as a Critical-Thinking Tool, Not as a Helpful, Aligned Companion

Use generative AI to surface divergent interpretations. Ask “What’s missing from our framing?” or “Who might see this differently?” and make sure you save those answers.
This isn’t sentiment analysis—it’s structured dissent, and it might be useful at a later stage to help you uncover all the facets of the known and unknown.

  4. Start a “What Surprised Us” Diary

Write short reflections on what didn’t go as expected. What metric misled you? What indicator contradicted the narrative? Share those widely—not as failure, but as learning, and as an opportunity for others to learn too.

  5. Challenge Your Insight and Strategy with Moral Pluralism

Inspired by our own findings (that social impact and social value are not correlated, and that they might change over time and depending on context) and by Jonathan Haidt’s work on moral foundations, ask yourself: what if what we call “impact” looks different to others? What if care, liberty, loyalty and sanctity aren’t equally valued across stakeholder publics? What then?

Why This Matters for AMEC—and for You

Yes, this requires resources, and we are familiar with the instinctive application of the Practitioner’s Razor: nice, but we can’t, because there are no resources, no budget, no headspace, and a pressing deadline. That’s not new. It’s just a variation of the urgent-versus-important dilemma. But we can’t allow old arguments to hold us back forever.

And if you’re already practicing at the cutting edge of measurement, this framework won’t slow you down. It might help you pivot faster, see deeper, and build trust more durably.

The opportunity is clear: to go beyond principle-setting and lead a culture of reflexive measurement. A culture that asks not only “did it work?” but “what does it mean to work—across time, across publics, across perspectives?”

So let’s make this a space not just for best practice—but for critical practice. Let’s not only celebrate what we’ve achieved, but test what we’ve assumed. And if you’re willing to try the “What if” framework in your work—or help us improve it—we’d love to hear from you.

This Measurement Month, let’s reflect out loud. Together.