International Association for the Measurement and Evaluation of Communication
At AMEC, we are fortunate to collaborate with industry organisations, partners, and end-user clients. Our membership comprises a diverse range of communication research and intelligence providers, consulting firms, public relations agencies, academics, and non-profit organisations.
This paper, produced by our Public Relations Agency working group, draws on the real-life experiences of selected agencies. It is a resource that any public relations agency can use to improve its communication planning and research practices, and ultimately its results.
We look forward to exploring and discussing the insights and knowledge gleaned from this paper in the coming months. If you wish to stay informed and be a part of these discussions, we invite you to subscribe to our monthly newsletter by clicking here.
Best regards, Aseem.
Introduction – Aseem Sood
Setting the Agency Agenda in Measurement and Evaluation – Stephen Waddington
Buyer versus seller: social listening technologies – Jonny Bentwood
Measurement as an agency revenue stream – Marianne Morgan
What’s the price of measurement and who pays? – Jon Meakin
Keep the human in the loop – Allison Spray
Approaching analytics in the age of attention – Ben Levine
Attribution versus contribution – James Crawford and Ben Levine
This paper has been produced by members of the AMEC Public Relations Agency Working Group. Its goal is to promote measurement and evaluation best practices throughout the public relations agency sector.
The paper covers contemporary issues in measurement best practices and aims to kickstart conversations ahead of the 2023 AMEC Global Summit in Miami, Florida.
Agencies are typically at the sharp end of innovation and client demand. The perspectives of the contributors address areas of innovation in public relations evaluation and measurement.
Each contribution is laid out in the style of a management essay and comprises a synopsis, learning objectives, a discussion and narrative, and further reading.
– Stephen Waddington
An unfortunate truth is that technology providers see the world, and prioritise features, differently from those who buy their solutions. Amid the marketing obfuscation of promises and deliverables, it is hard for the public relations buyer to understand whether a proposed solution is right for them, and whether any limitations stem from the vendor not having developed the capability or from a wider industry issue. Confusion is rampant and vendor differentiation is almost impossible.
This essay sets out how measurement tool vendors can align their messaging to user demand, while supporting buyers in understanding the key criteria to consider in a purchase. It is based on a 2022 survey of 168 AMEC members* (87 providers and 81 buyers). The results break down and prioritise needs across core demands within social listening.
Figure: What is the main use case for Social Listening?
Even though public relations agencies typically need technology solutions for a range of scenarios (e.g. influencer identification, audience understanding), by far the most common demand is monitoring what is being said across news and social channels. Forrester regularly lists over 20 vendors providing this service, and yet, speaking to both buyers and sellers, there is a remarkable difference in how each side sees the significance of the tool: sellers see it as a key strategic asset, whereas users see it as an important part of their data stack, but only a piece of the puzzle rather than the whole.
Figure: The tool cannot be strategic if I need to use several other technologies to get the data I want
It’s interesting to note that sellers consider ‘price’ to be the seventh most important factor influencing the purchase decision. In this they are right. Cost is, of course, a factor, but it is not the one that tops the list.
The simplest message sellers need to hear is that the only thing that truly counts is getting the data. Analytics, functionality, and price are all important, but none of them matter if the public relations team does not have the right data to work from. Users will happily use a more complex solution if it means getting hold of the information that is vital to drawing insightful conclusions.
Figure: The most important piece of any measurement and evaluation process is using the data to create insights that drive actions and strategy
There is always a significant delta between what is available and what is wanted or needed. Excel, for example, has over 400 functions, yet the keenest data scientist would rarely use more than a score of them. However, what sellers consider to be their most important features varies considerably from what buyers value.
Continuing the earlier theme that users will forego ease of use in favour of getting the right insights, it comes as no shock that providers of social monitoring solutions have erred in thinking that visualised dashboards are the number one function data analysts want. In fact, this area is rated only seventh in priority, whereas ‘insights’ are the area users demand most.
“We don’t provide our clients access to our tools and instead use the data to produce our own reports/dashboards”
Compounding this situation is the vendors’ belief that, just because their tool can make a passable attempt at providing a service, it should be chosen over best-of-breed solutions focused on that area. A perfect example is how social monitoring tools recommend key influencers via their reach, engagement, and relevance. What isn’t stressed is that these solutions often miss core channels where their source data is not as rich due to API constraints, most notably when looking at data from Meta or other hard-to-analyse channels such as LinkedIn.
“Over-reliance on Twitter data makes providers unsuitable for influencer identification or audience intelligence”
The one caveat on these findings is that there was a noticeable difference between the needs of enterprise/in-house users and those of agencies: the former often view the same data sets repeatedly, making dashboards a very helpful area of functionality.
Figure: What are the most critical data sources?
Bears have known for thousands of years a lesson that technology providers should take to heart: “fish where the fish are”. In public relations, this translates to ‘go where the audience is’.
It is well known that most social listening providers rely heavily on APIs for their data sources. Even though some vendors shirk that route in favour of harvesting, this reliance means that access to owned analytics, such as data from a company’s own social channels, is often impossible.
Given this situation, vendors are not keen to jeopardise their relationships with companies such as Meta by harvesting data themselves, resulting in incomplete or missing data sets from channels where many key audiences hold their conversations. The inability to analyse Instagram data from non-business accounts, or posts without hashtags, has resulted in an overuse of proxy data from Twitter and other easy-to-find data sets. However, Twitter is not Instagram, and to claim that behaviour on one channel is a good approximation for another is simply not true, forcing many users to purchase multiple solutions to get the information they need. This is not ideal and goes against the objective of having all listening data in one place. It is self-evident from the survey that users place Instagram and Facebook data in their top five demands, whereas sellers list them among the least important. Providers, after all, focus on what they can provide.
Another interesting finding is the significance of data compliance. No-one doubts its importance (vendors ranked it #1), but the buyers of their solutions see it as a hygiene factor and have delegated responsibility for it to their provider. It is important, but it is a given, and it ranks far below other areas.
Figure: Being compliant with data regulations probably isn’t the number one desire from customers, but it absolutely should be
Both buyers and sellers agree that getting the right data through strong Boolean and AI is vital. This corroborates the earlier finding that users are not inclined to favour ease of use over getting the information that counts to create valuable insights.
“All I care about is whether it can get the right data. Strong Boolean is essential. I also don’t mind if the technology is difficult to use provided I get the right results”
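As a concrete illustration of what ‘strong Boolean’ means in practice, the minimal sketch below captures the typical logic of a listening query: brand terms AND topic terms NOT exclusion terms. Exact operators and field names differ by platform, so this is a generic toy example with invented terms rather than any vendor’s actual syntax.

```python
# A toy illustration of the logic behind a social listening Boolean query:
# (brand terms) AND (topic terms) NOT (exclusion terms).
# Real platforms each have their own syntax; all terms below are invented.

BRAND_TERMS = {"acme", "#acme", "@acmehq"}           # hypothetical brand handles
TOPIC_TERMS = {"sustainability", "recycling", "net zero"}
EXCLUDE_TERMS = {"job", "hiring", "vacancy"}          # filter out recruitment chatter

def matches_query(mention: str) -> bool:
    """Return True if a mention satisfies the Boolean logic above."""
    text = mention.lower()
    has_brand = any(term in text for term in BRAND_TERMS)
    has_topic = any(term in text for term in TOPIC_TERMS)
    has_excluded = any(term in text for term in EXCLUDE_TERMS)
    return has_brand and has_topic and not has_excluded

mentions = [
    "Acme's new recycling scheme is a genuinely good idea",
    "Acme is hiring a sustainability manager",   # excluded: recruitment chatter
    "Loving the weather today",                  # no brand or topic terms
]
relevant = [m for m in mentions if matches_query(m)]
print(relevant)  # -> ["Acme's new recycling scheme is a genuinely good idea"]
```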
However, there is a delta when it comes to the value of historic data. Just because something is possible does not mean it is needed. Many companies tout their ability to provide several years of historic information, whereas the survey noted that the need to access anything beyond twelve, or at most 24, months is rare.
Figure: I rarely need more than 12 months of historic data so telling me it is a great offering is irrelevant to me
From a seller perspective, a key differentiator within analytics is how easy analysis is to do on their platform. Automated solutions and reports feature heavily, along with alerts to make users’ lives easier. The contrast with buyers is again strong: analysts are content to work harder and provide information to their stakeholders via their own reporting templates, rather than through exported charts that show content without insight.
“I have no need for automated reporting when the true value comes from an analyst stating what brands need to know from all this information”
The most frequent demand is to take the data that social listening providers have, export it, and analyse it in external platforms combined with other sets so that practitioners can make their own interpretations.
“Strong analytics that enables me to see the insight as a single sentence without having to provide charts is more valuable than anything else”
This move towards data storytelling over data listing is a trend that continues to grow, and vendors should note that their imagery, though exceptional at providing shortcuts to insight, is not the insight itself.
However, the enterprise sale is, once again, different. In conversations with providers, there is an acknowledgement that the greatest revenue opportunities lie with enterprise clients rather than agencies, and as a result their roadmaps are geared more towards that use case (albeit perhaps with fewer users).
Figure: Key criteria we see for enterprise brand priorities include quality of data, and ability to classify beyond sentiment to include areas such as trust, advocacy, relevancy, and more
A common frustration amongst both buyers and sellers is the misalignment of business models. Nobody wants to see a potential partnership fail due to a lack of agreement on this basic need.
From a vendor perspective, the priority is setup, maintenance, reporting, and contract length, whereas users would rather pay according to the data they consume. Buyers also need certainty in their budgeting and prefer a fixed monthly fee so they have a plannable budget based on the data consumed.
Even though sellers often provide a service/consultancy offering, many buyers prefer to use their own in-house teams, which have a deeper knowledge of their industry than the technology vendor does.
Figure: We never use support for reporting, as that is what we specialise in
*The research was conducted in 2022 by the AMEC Tech Hub and presented by Sophia Karakeva (2021-2022 Tech Hub Chair) and Jonny Bentwood at the 2022 AMEC Global Summit on Measurement and Evaluation in Vienna. The session was moderated by Todd Grossman and can be viewed here.
Jonny Bentwood is the global head of data and analytics at Golin, an award-winning communications expert with a focus on data-driven strategies. Under his leadership, Golin has earned prestigious accolades from AMEC, PR Week, Sabre, and PRovoke. He advocates for blending art and science in communications, emphasising the importance of data throughout the customer journey and challenging conventional measurement methods.
Jonny has advised numerous high-profile clients, including YouTube, Unilever, Walmart, McDonald’s, Microsoft, Carlsberg, LEGO, World Economic Forum, World Bank, and Facebook. His innovative analytic and measurement tools, such as Relevance Radar, Customer Journey Modelling, Command Center, TweetLevel, BlogLevel, and Flow140, have garnered numerous awards and media recognition.
A frequent speaker at esteemed institutions and events, Jonny shares his expertise on analytics at venues like the Royal Society, Stanford University, Oxford University, and Social Media Week.
Measurement and reporting are essential elements of most communication programmes but how can agencies turn them from functional tasks to financial drivers? In this article, the author discusses three ways that measurement can generate commercial growth, the skills needed to deliver a revenue-driving measurement offering and whether having a standalone specialist team is always the right option.
The UK Prime Minister Rishi Sunak recently said that he wants all pupils in England to study maths up until the age of 18. The BBC quoted him as saying: “in a world where data is everywhere and statistics underpin every job, letting our children out into that world without those skills is letting our children down”.
The ‘maths until 18’ plan generated heated debate in the UK but the driving force behind it – that all professions are becoming increasingly data-led – is hard to disagree with.
And the impact of this is already clear in the world of public relations. Pressure from other marketing disciplines – such as PPC, where every pound spent can be instantly quantified in terms of sales and bottom-line impact – has seen communication directors fighting for budget by upping their measurement game. And demands on agencies to deliver data-led, actionable insights and prove value seem to be increasing.
Despite this, Muck Rack’s 2022 State of Public Relations report revealed that 40% of agencies are challenged by measuring business impact. And, anecdotal feedback suggests that an even bigger percentage haven’t yet managed to convert measurement into a profitable revenue stream.
It’s hard to quantify the measurement market, as agencies and clients rarely share this information freely.
However, what percentage of a client budget should be allocated to measurement has long been a topic of conversation. Over the years, both PR Week and the UK Government have attempted to put a number on it: 10%. But speak to anyone in the industry – from clients to agencies and measurement specialists – and it is clear that this figure is far from the reality for most campaigns.
The truth is that, despite the value that measurement can add, most businesses want to spend their budget on activation rather than reporting.
This doesn’t mean there isn’t money to be made in measurement. A strong measurement programme brings three key financial benefits to agencies.
Public relations practitioners traditionally deal in stories, experiences, and emotions and the word ‘data’ can often seem daunting. But, the good news is that effective measurement and evaluation don’t have to signal a shift into endless spreadsheets.
In fact, talking to comms directors and agency heads suggests that there are four key skills that most agencies need in order to demonstrate value and unlock new revenue streams.
On the surface, there seems to be a notable structural difference between agencies with a mature measurement offering and those at the start of the measurement journey.
Those known for having a robust measurement revenue stream almost always have a specialist measurement team or outsourced specialist partner. By contrast, agencies with less mature measurement set-ups tend to have more junior team members – account executives, for example – running the reporting for their clients.
It is worth noting that this could simply be down to agency size and cost. Agencies with mature measurement offerings tend to be larger, where having one central team provides clear economies of scale whereas employing measurement specialists may be harder to justify initially for smaller agencies.
Ultimately, agency heads will be best placed to judge what structure will work for them. However, there are a few aspects of the traditional model – where measurement is seen as a rite of passage to be delivered by the most junior team members – that create potential barriers to measurement growth.
Like most communications agencies in the early 2000s, UK- based Citypress offered reporting and evaluation to its existing clients as a way to demonstrate value. Reports were typically produced by account executives and were often provided to clients free of charge – something that the agency showcased as added value during the pitch process.
But, inspired by the advertising and media-planning industries, Citypress spotted an opportunity to shift the dial – using data, clever insight, and learnings-based evaluation on a bigger scale to inform and deliver more impactful client campaigns.
The agency trialled the introduction of a centralised insight function and within six months was recruiting consultants and investing in tools to support its growth. As well as proving popular with the agency’s comms director contacts, it also provided the perfect platform to showcase Citypress’ strategic thinking – embedding the agency more deeply and broadly with contacts at all levels within client organisations.
Today, research and analytics is a profitable standalone practice area for Citypress – driving campaign planning, content creation, and impact assessment for clients across the agency as well as supporting new business acquisition and generating award wins.
The agency highlights a number of key steps that helped make the launch of its research and analytics function successful. These include buy-in from the most senior leadership team; a designated lead who took responsibility for upskilling the agency on best-practice and championing change; and a holistic view of success which took into account the impact on client relationships and retention as well as directly attributable fee income. Key to monetising the function has been a clear product offering aligned to the agency’s broader services (making it logical and easy for clients to ‘bolt on’ research and analytics work) and a focus on actionable insight and business impact.
Marianne Morgan has 20 years of communications measurement experience – spending a decade as a public relations practitioner before setting up a central research and analytics function at Citypress from scratch.
She is a former board director for AMEC and her team has been named Small Research and Measurement Team of the Year three times.
She has advised brands including Aldi, Lloyds Banking Group and eBay on best-practice measurement. She has also been commissioned as an expert witness to provide media due diligence for criminal prosecutions.
Marianne oversees Citypress’ research and analytics offering, providing consultancy to clients and developing new measurement methodologies.
The public relations agency world’s embrace of robust measurement and evaluation practices is slow and patchy, and intrinsically linked to the endemic problem of over-servicing. A bias for tactics by many agencies means measurement and evaluation services are frequently either ignored, inadequately priced, or given away for free. This creates a cycle in which public relations activity is not properly evaluated, leaving the practice as a poor relation of the marketing mix, with agencies paying the price. How can this cycle be broken?
It is no secret that the public relations agency world has been slower than other parts of the marketing mix to embrace rigorous measurement and evaluation methodologies. And while there are many excellent examples of agencies doing ground-breaking work in this area with and for their clients, the picture is still very mixed.
Successive surveys by AMEC over the years have shown a gradual erosion of outdated and discredited practices like Advertising Value Equivalents (AVEs), yet these still persist in some parts of the world, expected or even demanded by some clients, perpetuated by a certain sector of less sophisticated practitioners, and enabled in turn by many technology platforms that still offer up AVEs as an acceptable form of measurement.
The question of why this is the case, why every public relations activity is not robustly evaluated in line with best practice guidelines like the Barcelona Principles, using the methodologies and frameworks that are freely available, is intrinsically linked to the question of how and how much agencies charge their clients.
It is incorrect to lump all agencies together, of course. There is a ‘super league’ of large, mostly international agencies, servicing large, mostly international clients, with sophisticated teams of data analysts, insight specialists, strategists and planners, who provide world-class measurement and evaluation services – and charge appropriately.
There are also smaller agencies, mostly founded in the past decade, that are tech-forward, led and staffed by digital natives, who put insight, measurement, and evaluation at the heart of their offer, and price it in. But these are rare.
Public relations is still viewed by many as a tactical function, rather than strategic, somehow less sophisticated than other marketing disciplines. And it is my belief that agencies themselves are at least partly responsible for this.
Agencies have historically priced activity based on the time taken to execute a series of tactics – liaise with media, produce a piece of content, organise and manage an event, whatever. We charge for activity. But for reasons I have never really understood, the agency world has been reluctant to charge properly for thinking. According to the PRCA, 90% of agencies over-service, and 17% over-service every single client. Small wonder, when we don’t charge adequately for the senior time that goes into thinking and planning, which is after all where the real value lies.
Again, I am generalising here, because there are many clients that truly value strategic communications, and many agencies running crisis management activity or handling communication around public offerings or mergers and acquisitions, and nobody could accuse these of being unsophisticated or tactical.
But they account for a tiny fraction of the multi-billion-dollar global public relations agency market. Most of the rest of that market comprises small-to-medium-sized agencies working hard to deliver against their clients’ briefs to gain visibility and salience in their respective sectors. And most agencies’ fees are not in the millions or even the hundreds of thousands. What that means, and has always meant, is that agencies and clients coalesce around a desire to put as much of that fee as possible towards execution (or ‘doing’). As a result, high-value thinking and planning time gets squeezed, or given away for free. Hello over-servicing, goodbye to any notion of spending time evaluating whether an activity has actually been successful or not. And if you can’t demonstrate success, it’s hard to make the case for an increased budget. It’s a circular problem.
Despite so many agencies ignoring or only delivering a very basic level of analysis when it comes to performance, most do still pay lip service to measurement and evaluation when costing plans and proposals. But there are a number of common pitfalls.
The reasons should be clear.
Measurement and evaluation take time, senior time.
There are hard costs associated with it: many technology platforms enable us to evaluate the success of our work like never before – but these do not come for free, and not charging for their use eats into agency margins.
Anything provided for free is perceived as having no value.
It is a high-value service: Robust evaluation delivers insight into what aspects of our work are the most effective, enabling us to double down on those. That has real value to clients, and to agencies.
Every agency is different and it would be foolish to try to provide a one-size-fits-all recommendation for pricing measurement and evaluation services. Instead, the following guidelines are offered for consideration.
So who pays the price for measurement and evaluation? In all but a handful of cases, it is the agency, rather than the client. But the tools and resources are there to change that dynamic, which will benefit agencies and their clients.
Jon Meakin has almost 30 years’ communications agency experience, gained on both sides of the Atlantic. He has worked for independent boutiques and international networks alike, advising and representing clients across a range of sectors. For the past decade, Jon has been focused on technology clients, and now runs his own brand communications consultancy, Coldharbour Communications. Jon has been a member of the international board of the Association for the Measurement and Evaluation of Communication (AMEC) since 2018 and is the founding Chair of AMEC’s Agency Group, which is focused on embedding best practice measurement methodologies throughout the agency world.
The scope for artificial intelligence (AI) in public relations continues to grow – how can agencies make the most of the technology, while keeping the human in the loop? This essay addresses the use cases for better-known applications, for example, Natural Language Processing (NLP) and other machine learning, as well as newer technologies (GPT-4, DALL-E). It provides suggestions for how agencies might use these applications and summarises a broader point of view on the importance of seeing them as additive to human workflows, rather than as a replacement.
Agencies exist in a fast-moving industry. It requires us to have a constant finger on the pulse of culture; to understand where there is media opportunity, jump on it for clients before the moment passes; and quickly get to grips and manage issues and crises for brands and stakeholders.
In this rapid environment, you would expect AI to have made more significant inroads into agency workflows. But as a discipline, public relations has been slower to adopt data and technology than other marketing verticals.
The pace of change, however, is now undeniable; and if this is our Blockbuster vs. Netflix moment, how should agencies shift course and make better use of these advances?
Fortunately, the scope for AI in public relations continues to grow. While first-mover advantage may have passed, the benefit of being a (slightly) later adopter is that there are countless examples of how AI might be applied.
Before diving into exactly what agencies might consider with AI, let’s start with some definitions. AI is a term with no single definition; it varies by industry but broadly covers an array of machine learning (ML) applications which can perform tasks traditionally associated with human intelligence, for example learning or problem solving. This is obviously a very wide net, so for the purposes of this paper we will focus on the business context – specifically, those use cases where AI can augment what we’re doing now, and automate those processes, at scale.
According to Martin Schmalz, professor at the Saïd Business School, Oxford, many of the well-established use cases leverage machine learning for prediction – because machines are faster, better and more cost-effective than we are at predicting what will happen next (at least, where the past is a good indicator of the future). What AI isn’t so good at just yet are many other components of ‘human intelligence’: intuition, humour, strategy.
Where does that leave us? The best opportunities for public relations agencies to leverage AI are the spaces where human and machine intelligence complement each other. We need to step bravely forward into an augmented future where we leverage technology to do better work.
Let’s talk through a few examples of how agencies are doing just that.
One of the first and most familiar spaces for applying AI in public relations is in the analysis space. AMEC members are likely already employing some elements of the technology in their workflows as large data partners such as Brandwatch and Cision have implemented it into their platforms over the past several years. In the spirit of better, faster and more cost-efficient, let us consider one of the most popular: natural language processing (NLP).
NLP can be helpful in several ways when it comes to augmenting research processes at agencies. For example, social listening allows users to see hundreds of thousands, or even millions, of consumer opinions on a particular topic. But how do analysts get their arms around that much data (without burning through all of the budget)?
Enter NLP. Now baked into many of the data collection tools agencies employ, it helps the machine understand the information – allowing analysts to sort through it more quickly. This might be through spike detection, helping analysts quickly get to the root of what caused an uptick in conversation; or a combination of NLP and ML clustering, as seen in tools like Quid, which gives a visual representation of language similarity across news stories. For large-scale applications, a custom machine learning model could even help brands understand more nuanced metrics like message penetration over time.
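To make the spike detection idea concrete, the sketch below flags days where mention volume jumps well above its recent baseline – the kind of signal that prompts an analyst to dig into what drove the uptick. It is a minimal illustration using invented daily counts, not the method used by any particular listening platform.

```python
# Minimal spike detection on daily mention counts (illustrative data).
# A day is flagged when its volume exceeds the trailing mean by more than
# two trailing standard deviations - a simple stand-in for the spike
# detection built into commercial listening tools.

from statistics import mean, stdev

daily_mentions = [120, 135, 128, 140, 131, 125, 480, 210, 150, 138]  # invented
WINDOW = 5          # days of history used as the baseline
THRESHOLD = 2.0     # how many standard deviations counts as a spike

for day in range(WINDOW, len(daily_mentions)):
    baseline = daily_mentions[day - WINDOW:day]
    mu, sigma = mean(baseline), stdev(baseline)
    if daily_mentions[day] > mu + THRESHOLD * sigma:
        print(f"Day {day}: {daily_mentions[day]} mentions "
              f"(baseline {mu:.0f} +/- {sigma:.0f}) - investigate this spike")
```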
Of course, the news cycle of 2023 has largely focused on other applications of AI – specifically, generative AI. This includes large language model (LLM) text generation applications like ChatGPT or Bard, as well as image generation tools such as DALL-E and Midjourney.
At its most basic: ChatGPT works by analysing the text provided by the user, understanding its meaning and context, and then generating a response that is relevant to the query based on its learned knowledge of language structures and patterns. Its applicability to public relations is perhaps obvious, in a discipline that so often focuses on the written word. But beyond leveraging ChatGPT (or the underlying technology) in proprietary tools there are myriad ways to begin testing the tool now.
Planning: ChatGPT can also be leveraged to reach insight and creative strategy faster. For example – ask ChatGPT to summarise and extract actionable information during the research stage. It can also be used to test ideas; one option is to ask it for the most obvious solutions to a particular question, so you know which clichés to avoid. Most usefully, consider how ChatGPT can provide a multitude of suggestions – effectively allowing agency staff to act as an editor, sifting through to find the most useful combination of ideas (a minimal sketch of this workflow follows the list below).
Drafting: There are some fascinating companies coming through which promise to help us get past writer’s block, and first drafts, faster (and ideally, leveraging a broader corpus of information than any single human could!). One example is Intentful, which helps brands as diverse as Broadway with their content creation.
Research and Analysis: There have been intriguing examples in this space already, including a viral LinkedIn post where a researcher asked ChatGPT how readers of different UK publications would respond on different topics (for example, immigration). Consider as well how generative AI might be able to help your teams get to grips with a topic. Others are looking to use the tools in tasks like the creation of Boolean queries.
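A minimal sketch of the planning workflow described above, assuming the OpenAI Python SDK (v1.x) and an API key in the environment; the model name, prompt, and research notes are illustrative placeholders rather than recommendations, and any output would still need the human editing and fact-checking discussed below.

```python
# Sketch: ask an LLM to summarise research notes and propose campaign angles.
# Assumes the OpenAI Python SDK (v1.x) and OPENAI_API_KEY in the environment;
# the model name and prompt are illustrative placeholders.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

research_notes = """
(placeholder) Key findings from social listening and desk research
about consumer attitudes to refillable packaging...
"""

response = client.chat.completions.create(
    model="gpt-4",  # illustrative; any chat-capable model would work
    messages=[
        {"role": "system",
         "content": "You are a public relations planner. Be concise."},
        {"role": "user",
         "content": (
             "Summarise the research below in five bullet points, "
             "then suggest three campaign angles, flagging which are "
             "the most obvious (cliche) so we can avoid them.\n\n"
             + research_notes
         )},
    ],
)

print(response.choices[0].message.content)  # a human still edits and fact-checks this
```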
With all of these newer use cases, do make sure to sense-check and fact-check the results; ultimately, LLMs work off prediction, and where they don’t know an answer they will make one up – otherwise known as ‘hallucinating’. It also bears repeating that data shared with LLMs isn’t necessarily secure. Do keep in mind that generative AI tools aren’t a safe space to share sensitive client information or private data, and that there are many ethical and legal frameworks to consider.
It’s a great time to test and learn with these new technologies but agencies must also take the time to put together a formal agreement on how (and where) they’re used, and where they’re not.
These are just a handful of examples of where AI is already in use at agencies around the globe. As technology continues to leap forward there is a tremendous opportunity not only to make gains in efficiency – always a priority of agencies – but also to create entirely new ways of working, which make the most of machine and human intelligence.
As you look at where you might choose to invest time and hard costs, consider these three questions.
The agency data team within H+K is responsible for developing tech-enabled solutions that solve the challenges of agency clients. One recurring question posed to account leads was how brands could identify not only the red thread that would tie a campaign together, but also how they might know they were not creating a campaign where there was no editorial interest – or, worse yet, too much noise to cut through.
The team set about breaking down the traditional analysis process to identify the right moment to insert technology to augment human intelligence.
The result? Space+, H+K’s proprietary tool for understanding the white space a brand can own and stand out from the competitive set.
Built in partnership with Quid, it leverages AI and NLP to understand the conversation landscape across a topic, with a custom algorithm to surface the topic areas that have the greatest opportunity for brands.
Taken together this quantifies the highest potential space to own, and the highest areas of potential brand risk within a narrative, specific to a brand and competitive set.
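The scoring logic inside Space+ is proprietary, but the toy sketch below illustrates the general idea of white-space analysis: rank conversation topics by how much audience interest they attract relative to how crowded they already are with brand and competitor content. All topics, figures, and the weighting are invented for illustration; this is not H+K’s actual algorithm.

```python
# Toy white-space scoring: topics with high conversation volume but low
# brand/competitor presence score highest. A generic illustration of the
# concept, not H+K's Space+ algorithm; all numbers are invented.

topics = [
    # (topic, total conversation volume, share already occupied by brands 0..1)
    ("repairability",        12_000, 0.10),
    ("sustainable shipping",  30_000, 0.65),
    ("second-hand resale",    18_000, 0.25),
    ("carbon labelling",       4_000, 0.05),
]

def whitespace_score(volume: int, brand_share: float) -> float:
    """Higher when the conversation is big and brands are largely absent."""
    return volume * (1.0 - brand_share)

ranked = sorted(topics, key=lambda t: whitespace_score(t[1], t[2]), reverse=True)
for topic, volume, share in ranked:
    print(f"{topic:22s} score={whitespace_score(volume, share):>8.0f}")
```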
Since its creation, H+K have helped numerous brands identify spaces to play, and provided greater certainty that their campaigns will pay dividends. These include:
Allison Spray is Managing Director for Data + Analytics at Hill+Knowlton Strategies. She is passionate about making data accessible to everyone, and a big believer in the power of technology to transform how we work for the better.
Allison leads H+K’s specialist data team, and is responsible for measurement, as well as innovating new data strategies for clients and the integration of the agency’s behavioural science offering.
She is a well-respected industry figure and an AMEC Board Director, and in 2022 was awarded a postgraduate diploma with distinction in artificial intelligence for business from the Saïd Business School, University of Oxford.
Our media platforms – from television to the internet, and social media – have had profound effects not only on the ways in which we communicate, but also on the values and expectations we place on all forms of public discourse and interactions. We now expect to be constantly entertained by platforms that demand our constant attention. This shift presents challenges for public relations practitioners who are looking to not only understand what consumer and stakeholder audiences do, but why they do it. In a modern communications ecosystem where there seems to be a limitless supply of data from multiple sources, it’s important to recognise the limitations of these inputs – which have been further stressed by our changing values – and think holistically about how to use the right mix of quantitative and qualitative analytics tools and research methods to build true audience understanding.
In the prophetic book Amusing Ourselves to Death: Public Discourse in the Age of Show Business, published in 1985, author and media theorist Neil Postman examines the effect of media – specifically television – on our culture and discourse. He argues that the advent and ubiquity of television ushered in a new cultural era, shifting us away from an information world dominated by print, which encourages intense intellectual involvement, to one dominated by television, which requires only passive involvement and engagement. Or, to put it more simply, a world in which entertainment becomes the underlying value we expect from all forms of discourse, debate, and communication.
In Postman’s words, “television is altering the meaning of ‘being informed’ by creating a species of information that might properly be called disinformation—misplaced, irrelevant, fragmented, or superficial information that creates the illusion of knowing something but which in fact leads one away from knowing.”
Flash forward nearly 40 years and it is safe to say that Postman’s theory of, and warning about, the consequences of a culture made subordinate to television has played out and multiplied – 24-hour news and sports networks, the expansion of cable television, the internet and high-speed connections, the ubiquity of smartphones, the explosion of streaming services, and of course social media. One could say that while entertainment is still shaping our expectations of cultural, civic, and media institutions, the dominating force above all else is the need to capture our attention (i.e. continuous engagement with social and digital platforms). To be boring is to be irrelevant.
At the heart of this shift is our use of social media. Platforms that in their infancy promised to be spaces to connect and build relationships have, in the last decade, shifted to mediums where users spend more time skimming news, shopping, or simply filling time during the day than making meaningful connections. Our reliance on these platforms is not going to stop any time soon. In 2022, roughly 4.59 billion people used social media worldwide, a figure that is expected to hit 5.85 billion in 2027, according to Statista.
In a world where the value of media, information, and communication are underpinned by the need to capture attention, increasingly one must be not only entertaining, but enraging, shocking, or controversial. In other words, users gravitate to content and news that, more often than not, caters to and reinforces our increasingly polarised identities. If this tenet drives, shapes, and creates most of our discourse on a macro level, as public relations practitioners the question is this: are the tools and techniques we currently use to gather data and information for campaign planning currently fit for purpose to provide true wisdom and insight about target consumer and stakeholder audiences alike?
In 2012, statistician and author Nate Silver explored this challenge in his book “The Signal and the Noise,” which argued that merely having access to more data and information does not, on its own, translate to better insight and decision making. It takes work and the application of research and analytics techniques. One could argue that despite the advancement of digital research and analytics tools since then, finding these signals has not gotten any easier.
The rise of data management platforms (DMPs) that leverage first and third-party data has opened a seemingly limitless world of audience intelligence. These DMPs have promised audience precision and understanding, but it is important to recognise that these technologies are meant to serve different objectives from those we often hold in public relations. Namely, the data provided does not delve deeply into an audience’s motivations, concerns, influences, issues, or aspirations. DMPs, along with demand side platforms (DSPs) are meant, primarily, to connect publishers that have digital space to sell to organisations that want to get their name, product, or service in front of these publishers’ audiences.
Now, this is not to say that this isn’t a valid business aim, even if there is some evidence that the effectiveness of digital advertising is oversold. Rather, it is to make the point that the data collected is primarily transactional in nature. If we rely solely on these data to build communication campaigns, then our insight and creative will also feel transactional and inauthentic. Substituting scale for depth has real implications for the content that agencies and brands alike create, as highlighted by a study from Facebook that found more than half (54%) of people do not feel culturally represented in online ads.
It’s also important to recognise the role of public relations in the marketing mix, which is often tasked with persuasion, managing reputation, and building trust. Indeed, to connect with and build relationships organisations must show they understand audiences on a very personal level. This point is underscored by the fact that 59% of consumers are more trusting of brands where they believe they are represented in their ads, a proportion that increases to 61% among women and 67% among ethnic minorities.
Further, while the online advertising industry continues to expand (according to eMarketer, global digital ad spend is currently $626.9 billion and is expected to hit $835.8 billion in 2026), a growing number of consumers are taking steps to stop the industry from tracking them online. More than three-quarters of a billion people (763.5 million) use ad blockers worldwide, with a 26.4% penetration rate in the US alone. Third-party data available to advertisers will also be impacted by Apple’s move to limit access in its app store and by Google’s planned phase-out of third-party cookies in Chrome. These trends point to a potential situation in which the data available via DMPs and DSPs becomes shallower and less representative. In addition, these data sources can’t fully resolve two critical public relations planning inputs: firstly, the why behind what people do; and secondly, the relevance and context of the actions they track. In a world where online users are increasingly engaging with content on the edges, understanding context and relevance is paramount.
To enrich data, and tap into cultural moments, public relations has long leaned into social media, which has often been described as “the world’s largest focus group.” While this may have been true in the past, the effects that social media platforms have had on our values and expectations of the medium and content we choose to engage with – as argued above – dilutes this maxim.
For one, social listening tools have traditionally over-indexed on Twitter, which no longer cracks the top 10 for monthly active users globally. This means that when listening at scale we are potentially missing a large portion of the discourse on a topic, issue, or cultural phenomenon. Second, social listening tools can’t tap into messenger networks, such as WhatsApp, where audience communities are increasingly convening to engage and share news, content, and information. Third, our attention and means of communication continue to shift towards visual content (usually video), content that is not easily tracked or captured by social listening due to its reliance on Boolean queries.
In the context of the public relations industry, and more specifically among those whose focus is on data, analytics, and measurement, what does the future hold and how should we adapt to the shifting communications and cultural landscape?
Firstly, among measurement and analytics practitioners – from providers to agency and in-house teams – it is important to recognise that there is optimism for the sector and how it will evolve. While challenges abound, according to AMEC’s 2022 Global Membership Survey the industry remains bullish about 2023, with a third (33%) of members stating they believe it will grow by more than 10% (overall, 88% predict some level of growth). These perceptions mirror growth expectations for the media analytics industry, which is predicted to more than double by 2029, to the tune of $11.5 billion. The top three activities that comms measurement, evaluation, and insights businesses believe will power growth in 2023 are providing professional services (65%), new/automated technologies (61%), and building integrated measurement capabilities/solutions (57%) for their clients and end users.
If these trends play out in the year ahead and beyond, they are indeed positive signs for the public relations industry. The points below set out how public relations practitioners need to develop their skills to benefit from this developing opportunity.
Understand just how dramatically social media has reset our cultural values and expectations as this will provide us with the proper framing to better understand how and where branded content the industry creates fits within that ecosystem.
Within this ecosystem, know and understand the limits of the knowledge and insight that can be derived from the signals these mediums and platforms create.
Devise a research and analytics strategy that accounts for scale and depth, quantitative and qualitative data inputs and outputs, and one that considers both short and long-term actions and milestones.
Fundamentally, the goal is to ensure that the data inputs we use to drive campaigns and to engage with consumer audiences and stakeholders are both quantitatively robust and qualitatively relevant. The good news is that many strides have already been taken to meet this aim. Social and media monitoring providers have come a long way in applying AI and machine learning on their platforms to help users find the signals they are after. Agencies and brands alike are innovating to bring data from multiple sources – be it third-party data, survey panels, search, website analytics, etc. – together and analyse it in the round. The key to these advancements will be to continuously scrutinise the data that becomes available, and to analyse it in the context of our cultural times.
Ben Levine leads FleishmanHillard UK’s research and analytics practice, TRUE Global Intelligence, supporting clients with the development and implementation of multi-channel measurement programmes, as well as research focused on brand performance, reputation, and deep audience understanding. Ben has 15+ years of experience helping to transform data and analytics into relevant and actionable insights for clients across a variety of sectors, from healthcare, to consumer, energy, and technology.
Over the last 10 plus years, Ben has helped shape the direction of AMEC (the International Association for the Measurement & Evaluation of Communications) serving on its International Board of Directors from 2015 to 2021. During that time, he supported with the development and launch of AMEC’s Measurement Maturity Mapper and with the Barcelona Principles 3.0 update in 2020.
The demise of third-party cookies and the introduction of Apple’s iOS 14.5 have prompted a shift in marketing measurement, with brands like Google and Meta focusing on holistic approaches that consider all touchpoints of a campaign. As traditional first-click and last-click attribution models prove insufficient, public relations practitioners are turning to econometric techniques like regression analysis, time series analysis, and marketing mix modelling to better understand the impact of their campaigns and drive more effective strategies.
After the death of the third-party cookie and also the introduction of Apple iOS 14.5, it is increasingly important for brands to measure the impact of their efforts in new and more expansive ways. Gone are the days when direct attribution was enough; now companies like Google and Meta recognise that there is more to a successful campaign than just its immediate results.
The big tech companies say they have begun to focus on measuring all touch points associated with a campaign, not just those directly contributing to success or failure. In this essay, we will look at how this shift in attitude towards measurement away from the language of attribution has changed our understanding of the impact of public relations on brand performance.
We will explore the drawbacks of relying solely on first-click attribution, last-click attribution, and econometrics as methods for evaluating public relations campaigns. We will discuss what practitioners should look out for when considering different measurement options, and how they can ensure they are getting results that accurately reflect the success of their public relations efforts.
This shift is away from traditional first-click or last-click attribution models, towards more holistic approaches that look at wider contributions, incorporating econometrics or feeding into more classical measurement techniques.
First-click attribution is a method of measurement which looks at the first point of contact with a public relations campaign. This could be traffic from an email, advertorial, television programme or social media post.
While this approach provides some useful data points, such as who was exposed to the public relations activity and when, it fails to take into account any other factors that could influence the success of public relations; and, thanks to how cookie tracking is changing and how channels like branded search work, it is nigh on impossible to track with 100% accuracy. To illustrate the limitations of first-click attribution, consider a real-life example. Imagine a public relations campaign promoting a new product through various channels, including media relations, social media, email marketing, and influencer partnerships. The first-click attribution model would credit the success of the campaign solely to the first touchpoint that led a customer to the product page, even if subsequent touchpoints played a significant role in the customer’s decision to make a purchase.
For instance, a customer might have initially discovered the product through a social media post but only made a purchase after receiving an email with a discount code. In this case, the first-click attribution model would not give credit to the email channel, even though it played a crucial role in converting the customer. This example highlights the importance of considering other touchpoints along the customer journey and using a more comprehensive attribution model to accurately measure the impact of public relations campaigns.
Another widely used measurement model is last-click attribution. This method looks at the final point of contact with a campaign before a goal is achieved, and attributes the success of the public relations activity to that last touchpoint.
To further illustrate the limitations of last-click attribution, consider an example. Imagine a campaign promoting a charity fundraiser through various channels, including social media, email marketing, and print media. A potential donor might have initially discovered the fundraiser through a print advertisement but only decided to make a donation after seeing a social media post from a friend. In this case, the last-click attribution model would credit the success of the campaign solely to the social media channel, even though the print ad played an important role in raising awareness and creating interest in the fundraiser. Again, this highlights the importance of considering the other touchpoints along the customer journey and using a more comprehensive attribution model; results may not accurately reflect the impact of public relations in its entirety. But what does this mean in reality?
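Before getting to that, the short sketch below makes the difference between the models concrete, assigning credit across the touchpoints of a journey like the ones described above under first-click, last-click, and a simple linear multi-touch rule. The journey and channel names are invented for illustration.

```python
# Credit assignment for one illustrative customer journey under three
# common attribution rules. Channels and ordering are invented examples.

journey = ["social media post", "influencer partnership", "email with discount code"]

def first_click(touchpoints):
    return {touchpoints[0]: 1.0}        # all credit to the first touchpoint

def last_click(touchpoints):
    return {touchpoints[-1]: 1.0}       # all credit to the final touchpoint

def linear(touchpoints):
    share = 1.0 / len(touchpoints)      # credit split equally across the journey
    return {tp: share for tp in touchpoints}

for name, model in [("first-click", first_click),
                    ("last-click", last_click),
                    ("linear multi-touch", linear)]:
    print(f"{name}: {model(journey)}")
```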
The once crystal-clear waters of performance marketing are now muddy. Meta in particular is suffering and we are not convinced Google’s advertising customers are completely happy either. Google and Meta have stated that they are attempting to broaden their measurement offering, looking at all the different ways people interact with the campaign, not just if it made money directly, but also its impact on metrics like brand consideration, visibility, and attitudes to purchasing.
The goal of all this is partly to protect big tech’s revenue streams and partly to help public relations practitioners see further than attribution alone – to understand how much people actually think about their campaigns, how one touchpoint may lead to another, and how activity affects behaviour over time. Or at least that is the plan. Despite improvements in technology and AI, there is no one-size-fits-all approach or silver bullet, but at least the over-reliance on attribution is coming to an end.
For those of us tracking organic performance, the job is even harder. Some might argue that this is made deliberately difficult to push us all to keep spending on ads.
Relax.
All is not lost.
First, we all must accept that there is no perfect way to measure marketing. Many public relations practitioners are now relying on what might be called econometrics to measure impact, which uses economic and statistical models to estimate performance. While this approach can provide robust estimates of success, it takes time, and “time is money”, putting it out of reach for many clients.
We know that the outputs of public relations campaigns can be intangible and difficult to measure. As such, public relations practitioners should be aware of the potential for inaccurate predictions when relying on econometrics techniques.
Measuring marketing performance is a complex task. The temptation to go for the cheapest data and opt for an approach that gives instant gratification can be strong, but this can be misleading if the data is not accurate. In case you were wondering, here are a few econometric techniques. The chances are you have probably used some of them, even if you don’t really know what econometrics is.
Regression analysis – This technique involves analysing the relationship between public relations activities and business outcomes by estimating the impact of variables (such as media coverage or social media mentions) on sales or other relevant metrics, while controlling for other factors that could affect the outcome.
Time series analysis – This technique involves analysing changes in key metrics over time and identifying any correlations with public relations activities. It can help practitioners identify the impact of specific campaigns or events on business outcomes, and track the long-term effects of activities.
Difference-in-differences analysis – This technique involves comparing the changes in key metrics between a group exposed to a public relations campaign and a control group that was not exposed. It can help practitioners isolate the impact of activities from other factors that could affect the outcome.
Attribution modelling – This technique involves assigning credit to different marketing channels, including public relations, for their contribution to specific outcomes such as website traffic, leads or sales. Practitioners can use attribution modelling to identify the impact of activities on specific metrics and optimise their campaigns accordingly.
Marketing mix modelling – This technique involves analysing the impact of different marketing channels, including public relations, on overall business outcomes such as sales or revenue. It can help practitioners identify the optimal allocation of resources across different marketing channels and measure the return on investment of activities (a minimal regression sketch follows this list).
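As a minimal illustration of the regression idea underpinning several of these techniques, the sketch below fits ordinary least squares to invented weekly data, estimating how sales move with earned media coverage and paid spend while controlling for a seasonal dummy. Real marketing mix models are far richer (adstock, saturation curves, many more controls, and much more data), so treat this purely as a sketch.

```python
# Minimal regression sketch: estimate the relationship between weekly sales,
# earned media coverage, and paid spend, controlling for a seasonal dummy.
# All data are invented; real marketing mix models are far more elaborate.

import numpy as np

# Invented weekly observations
coverage   = np.array([ 5, 12,  8, 20,  3, 15, 25, 10,  7, 18])   # articles
paid_spend = np.array([10, 12, 11, 15,  9, 14, 16, 12, 10, 15])   # in thousands
festive    = np.array([ 0,  0,  0,  1,  0,  0,  1,  0,  0,  1])   # seasonal dummy
sales      = np.array([52, 66, 58, 88, 47, 73, 97, 62, 55, 84])   # in thousands

# Design matrix with an intercept column, then ordinary least squares
X = np.column_stack([np.ones_like(coverage), coverage, paid_spend, festive])
coeffs, *_ = np.linalg.lstsq(X, sales, rcond=None)
intercept, b_coverage, b_spend, b_festive = coeffs

print(f"estimated sales uplift per extra article: {b_coverage:.1f}k")
print(f"estimated sales uplift per extra 1k paid spend: {b_spend:.1f}k")
print(f"estimated festive-week uplift: {b_festive:.1f}k")
```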
Over-reliance on attribution based on cookie data can lead to overstating the impact of certain channels while ignoring the wider picture. Public relations, for example, is a hard-to-measure channel but can be extremely valuable in terms of influencing public opinion and driving sales. By not taking public relations into consideration when measuring performance, companies risk missing out on crucial information that could help inform their decisions.
Ultimately, it pays to invest in accurate data over cheap but wrong data. It may cost more upfront, but the long-term value gained is well worth it. With accurate measurements and public relations taken into account, marketing practitioners can be sure that they are making decisions based on solid evidence and an informed understanding of their performance.
Industry legends like Les Binet have long pushed econometrics and marketing mix modelling (MMM) as solutions to current conditions: the death of third-party cookies, nearly 50% of people using ad blockers, and Apple iOS 14 updates stopping ad tracking on its devices.
There is both an interesting challenge and an opportunity for practitioners who work across the PESO mix, in that many digital marketers fall for the fallacy of immediacy – assuming that digital marketing campaigns will result in immediate leads, when in reality purchases often occur long after the original campaign.
Measurement and evaluation is essential for understanding customer behaviour and thus driving more effective campaigns whilst avoiding potential pitfalls from econometric models. Public relations should also be used to track a brand’s reputation and engagement, enabling marketers to respond in real time and maintain a positive presence. To create successful campaigns with lasting results, public relations activity and measurement must go hand-in-hand.
By understanding measurement and attribution techniques, practitioners can better gauge the effectiveness of their campaigns. First-touch attribution, multi-touch attribution, digital attribution technologies, and econometric modelling are all methods that practitioners should be aware of in order to accurately measure success.
It is important for practitioners to note that measurement and attribution techniques are not without flaws. Critics are wary of econometrics being used as a solution due to its expense and reliance on extensive data, while alternatives such as correlation or traditional research can sometimes better explain buyer behaviour. We live in an imperfect world and we must carefully weigh all measurement and attribution techniques in order to make the best decisions for our public relations campaigns.
It is worth noting that not all campaigns are solely focused on sales and acquisition. Public relations campaigns can also aim to build reputation, create awareness and demand, and change perceptions, which require a different approach to measurement. In these cases, a contribution model or approach is often used, which focuses on more qualitative measures.
For example, public relations practitioners might use surveys or focus groups to understand changes in attitudes or perceptions towards a brand or product following a campaign. Quantitative data can also be used to make qualitative inferences, such as using web analytics to understand how users engage with a website and what content resonates with them. It is important to choose the right measurement techniques and tools for each campaign, depending on the objectives and goals of the campaign. A critical and nuanced approach to public relations measurement and attribution can help ensure that practitioners make informed decisions and demonstrate the value of their work to stakeholders.
So, relax. Try and measure something as best you can. Big tech is struggling with it and so is everyone else. But by trying to measure contributions and doing so with an econometric mindset, we think we are on the right track.
James Crawford founded the Manchester and London based PR Agency One, which since its launch in 2012 has delivered award-winning work in the consumer, B2B, and corporate sectors. His company focuses on the space where creativity and measurement meet to deliver commercial outcomes.
His business has won numerous national and international awards and accolades for its work in diverse categories, from the best use of creativity to measurement and insight. He is also a board director at AMEC and runs an accelerator to help found new public relations businesses, which to date has founded two start-ups.
Ben Levine leads FleishmanHillard UK’s research and analytics practice, TRUE Global Intelligence, supporting clients with the development and implementation of multi-channel measurement programmes, as well as research focused on brand performance, reputation, and deep audience understanding. Ben has 15+ years of experience helping to transform data and analytics into relevant and actionable insights for clients across a variety of sectors, from healthcare, to consumer, energy and technology.
Over the last 10 plus years, Ben has helped shape the direction of AMEC serving on its International Board of Directors from 2015 to 2021. During that time, he supported with the development and launch of AMEC’s Measurement Maturity Mapper and with the Barcelona Principles 3.0 update in 2020.