A TAXONOMY OF EVALUATION
WHAT IS A TAXONOMY AND WHAT IS IT FOR?
Many models of PR and communication evaluation exist, using a wide range of terms including inputs, outputs, outtakes, outflows, outgrowths, effects, results, and impact. An even wider range of metrics and methods is proposed for each stage. The field is confusing for many practitioners.
This page presents a taxonomy of evaluation tailored to strategic public communication – a taxonomy being a mapping of a field to produce a categorisation of concepts and terms – in short, to show where things go and where they fit in relation to each other. This taxonomy identifies:
- The major stages of communication (such as inputs, outputs, etc.);
- The key steps involved in each stage (such as distribution of information, reception by audiences, etc.);
- Examples of metrics and milestones that can be generated or identified as part of evaluation at each stage; and
- The most commonly used methods for generating these metrics and milestones.
A taxonomy is not the same as a model, as a taxonomy attempts to list ALL the main concepts, terms, metrics, methods, etc. in a field, while a model is an illustration of a specific program or activity to be applied in practice. However, models should be based on the concepts and methods identified as legitimate in the field and apply them appropriately.
An important benefit of a taxonomy is that it puts concepts, metrics, methods, etc. in their right place – e.g., it avoids output metrics being confused with outcome metrics. The authors of the widely used PR text Effective Public Relations, Cutlip, Center and Broom, noted repeatedly in editions from 1985 to the late 2000s that “the common error in program evaluation is substituting measures from one level for those at another level” (1985, p. 295; 1994, p. 44; Broom, 2009, p. 358). Emeritus Professor of Public Relations Jim Grunig similarly says that many practitioners use “a metric gathered at one level of analysis to show an outcome at a higher level of analysis” (2008, p. 89). PR and strategic communication are not alone in this. The widely used University of Wisconsin Extension (UWEX) guide to program logic models for evaluation notes, for example, that “people often struggle with the difference between outputs and outcomes” (Taylor-Powell & Henert, 2008, p. 19).
No taxonomy is ever complete, but the taxonomy presented here draws on a wide range of research studies to be as comprehensive as possible (see ‘Introduction to the AMEC Integrated Framework for Evaluation’ for details of the origin and basis of this taxonomy and the framework itself).
NOTES FOR USING THIS TAXONOMY
- The key steps, metrics and milestones, and methods listed are not exhaustive, and not all are required in every program. They are indicative of common and typical approaches to evaluation of public communication such as advertising, public relations, and marketing communication. Practitioners should choose relevant metrics, milestones, and methods, ideally selecting at least one at each stage.
- The arrangement of inputs, activities, outputs, etc. should not be interpreted as a simple linear process. Feedback from each stage should be applied to adjust, fine-tune, and change strategy and tactics if necessary. Evaluation is an iterative process.
- Not all evaluation can show impact, particularly when evaluation is undertaken within a relatively short time period following communication. Impact often occurs several years ‘downstream’ of communication. Also, the objective of some public communication is to create awareness (an outtake or short-term outcome) or build trust (an intermediate outcome). However, as a general rule, evaluation should report well beyond outputs and outtakes. Evaluation should identify and report outcomes at a minimum and, when possible, impact.
- An important feature of this taxonomy is that impact includes organizational, stakeholder, and societal impact/outcomes. This aligns with program evaluation theory and program logic models (e.g., Kellogg Foundation, 1998/2004; Taylor-Powell & Henert, 2008; Wholey, 1979; Wholey, Hatry, & Newcomer, 2010) and with the Excellence Theory of PR, which calls for evaluation to be conducted at (1) program level; (2) functional level (e.g., department or unit); (3) organizational level; and (4) societal level (L. Grunig, J. Grunig & Dozier, 2002, pp. 91–92).
Developed by Professor Jim Macnamara for AMEC.
INPUTS
Short definition: What you need in preparation for communication.
Key steps:
• Resources (e.g., staff, agencies, facilities, partnerships)
Example metrics & milestones:
• SMART objectives
• Targets / KPIs
• Baselines / benchmarks (e.g., current awareness)
• Audience needs
Example methods:
• Internal analysis
• Environmental scanning
• Feasibility analysis
• Risk analysis
• Metadata analysis (e.g., past research and metrics)
• Market/audience research (e.g., surveys, focus groups, interviews)
• Stakeholder consultation
• Case studies (e.g., best practice)
• SWOT analysis (or PEST, PESTLE, etc.)

ACTIVITIES
Short definition: Things you do to plan and produce your communication.
Key steps:
• Formative research
• Production (e.g., design, writing, media buying, media relations, media partnerships, etc.)
Example metrics & milestones:
• Strategic plan
• Evaluation plan
• Pre-test data (e.g., …)
• Content produced (e.g., …)
• Media relations
Example methods:
• Pre-testing panels
• Peer review / expert review

OUTPUTS
Short definition: What you put out that is received by target audiences.
Example metrics & milestones:
• Publicity volume
• Media reach
• Share of voice
• Messages placed
• Posts, tweets, etc.
• Advertising TARPs
• E-marketing volume
• Event attendance
• Unique visitors
Example methods:
• Media metrics (e.g., audience statistics, impressions, CPM)
• Media monitoring
• Media content analysis (quant)
• Media content analysis (qual)
• Social media analysis (quant and qual)
• Activity reports (e.g., events, sponsorships)

OUTTAKES
Short definition: What audiences do with and take out of your communication.
Key steps:
• Interest / liking
• Learning / knowledge
Example metrics & milestones:
• Response (e.g., follows, likes, tags, shares, …)
• Return visits/views
• Recall (unaided, aided)
• Positive comments
• Positive response in …
• Subscribers (e.g., RSS, …)
Example methods:
• Web statistics (e.g., …)
• Social media analysis (qual – e.g., comments)
• Feedback (e.g., …)
• Netnography (online ethnography)
• Audience surveys (e.g., re …)
• Focus groups (as above)
• Interviews (as above)

OUTCOMES
Short definition: Effects that your communication has on audiences.
Key steps:
• Attitude change
• Compliance / complying
Example metrics & milestones:
• Message acceptance
• Trust levels
• Statements of support or …
• Registrations (e.g., organ …)
• Brand preference
• Reaffirming (e.g., staff …)
• Public/s support
Example methods:
• Social media analysis
• Database statistics (e.g., …)
• Netnography (online ethnography)
• Opinion polls
• Stakeholder surveys (e.g., re satisfaction, trust)
• Focus groups (as above)
• Interviews (as above)
• Net Promoter Score

IMPACT
Short definition: The results that are caused, in full or in part, by your communication.
Key steps:
• Organisation change
• Public/social change
Example metrics & milestones:
• Meet targets (e.g., blood …)
• Sales increase
• Donations increase
• Cost savings
• Staff retention
• Customer retention / …
• Quality of life / wellbeing
Example methods:
• Database records (e.g., blood donations, health …)
• Sales tracking
• Donation tracking
• CRM data
• Staff survey data
• Reputation studies
• Cost Benefit Analysis / Benefit Cost Ratio
• ROI (if there are financial …)
• Quality of life scales & …
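Several of the example metrics above reduce to simple ratio calculations. A minimal sketch in Python (the function names and all figures are illustrative, not drawn from any real program):

```python
def cpm(cost, impressions):
    """Cost per thousand impressions (CPM; 'mille' = Latin for thousand)."""
    return 1000 * cost / impressions

def share_of_voice(own_coverage, total_coverage):
    """An organisation's coverage as a percentage of all coverage on a topic."""
    return 100 * own_coverage / total_coverage

def benefit_cost_ratio(total_benefits, total_costs):
    """A BCR above 1.0 indicates quantified benefits exceed program costs."""
    return total_benefits / total_costs

# Hypothetical figures for illustration only
print(cpm(5_000, 2_000_000))                 # 2.5 (cost per thousand impressions)
print(share_of_voice(120, 480))              # 25.0 (% of coverage)
print(benefit_cost_ratio(300_000, 120_000))  # 2.5
```

Note that such ratios are output- and impact-level metrics respectively; as stressed above, they should not be substituted for one another across stages.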
SUPPORTING FOOTNOTES FOR THE TAXONOMY OF EVALUATION
- Trust is considered an intermediate outcome because it is sought in order to achieve a longer-term impact, such as being elected to government, customers continuing to do business with a company, etc. It is not an end goal in itself.
- Some program logic models refer to this first stage as Inputs/Resources.
- Advanced outtakes overlap with, or can be the same as, short-term outcomes. This is why the most commonly used program logic models do not use outtakes as a stage. Outtakes and outcomes can be cognitive and/or affective (emotional) and/or conative (behavioural).
- Long-term outcomes overlap with and are sometimes considered to be the same as impact.
- Impact is often evaluated only in relation to the organization. However, as noted in the introduction, impact on stakeholders, publics, and society as a whole should be considered. This is essential for government, non-government, and non-profit organizations focussed on serving the public interest. Moreover, impact on stakeholders and society affects and shapes the environment in which businesses operate (i.e., this evaluation forms part of the environmental scanning, audience research, and market research that will inform future planning).
- Causation is very difficult to establish in many cases, particularly when multiple influences contribute to impact (results), as is often the case. The three key rules of causation must be applied: (a) the alleged cause must precede the alleged effect/impact; (b) there must be a clear relationship between the alleged cause and effect (e.g., there must be evidence that the audience accessed and used information you provided); and (c) other possible causes must be ruled out as far as possible.
- Some include planning in inputs. If planning is treated this way, formative research (which should precede planning) also needs to be included in inputs. Most program evaluation models, however, identify formative research and planning as key activities to be undertaken as part of the communication program; inputs are generally pre-campaign/program.
- Reception refers to what information or messages are received by target audiences and is slightly different to exposure. For example, an audience might be exposed to a story in media that they access, but skip over the story and not receive the information. Similarly, they may attend an event such as a trade show and be exposed to content, but not receive information or messages (e.g., through inattention or selection of content to focus on).
- Learning (i.e., acquisition of knowledge) is not required in all cases, but in some public communication campaigns and projects it is. For example, health campaigns to promote calcium-rich food and supplements to reduce osteoporosis among women found that, first, women had to be ‘educated’ about osteoporosis (what it is, its causes, etc.). Similarly, combatting obesity requires dietary education. Whereas understanding refers to comprehension of messages communicated, learning refers to the acquisition of deeper or broader knowledge that is necessary to achieve the objectives.
- Ethnography is a research method based on intensive first-hand observation over an extended period, often supplemented by interviews and other research methods.
- Netnography is online ethnography in which online users are closely monitored to identify their patterns of behaviour, attitudes, etc. via their comments, click trails, and other digital metrics.
- Net Promoter Score is a score out of 10 based on a single question: ‘How likely is it that you would recommend [brand] to a friend or colleague?’ Respondents scoring 0–6 are considered ‘detractors’ (dissatisfied); those scoring 7–8 are ‘passives’ (satisfied but unenthusiastic); and those scoring 9–10 are ‘promoters’ – loyal enthusiasts, supporters, and advocates. (See https://www.netpromoter.com/know)
- Econometrics is the application of mathematics and statistical methods to test hypotheses and identify the economic relations between factors based on empirical data (see http://www.dummies.com/how-to/content/econometrics-for-dummies-cheat-sheet.html)
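The Net Promoter Score calculation described in the footnote above can be sketched in a few lines of Python (the survey responses are hypothetical):

```python
def net_promoter_score(scores):
    """NPS = % promoters (scores 9-10) minus % detractors (scores 0-6)."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

# 10 hypothetical responses: 4 promoters, 3 passives (7-8), 3 detractors
print(net_promoter_score([10, 9, 9, 10, 8, 7, 8, 5, 6, 3]))  # 10
```

NPS can range from −100 (all detractors) to +100 (all promoters); passives count toward the total number of responses but neither add to nor subtract from the score.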
CPM Cost per thousand (mille = Latin for thousand)
CRM Customer relationship management (data commonly held in CRM databases)
KPI Key Performance Indicator (see http://www.ap-institute.com/what-is-a-key-performance-indicator.aspx)
OTS Opportunities to see (usually calculated the same as ‘impressions’)
PEST An evaluation framework that examines ‘political’, ‘economic’, ‘social’ and ‘technological’ factors
PESTLE An evaluation framework that examines ‘political’, ‘economic’, ‘social’, ‘technological’, ‘legal’ and ‘environmental’ factors (also used as PESTEL, with ‘environmental’ arranged before ‘legal’) (see http://pestleanalysis.com/how-to-do-a-swot-analysis)
ROI Return on investment
SMART Refers to objectives that are ‘specific’, ‘measurable’, ‘achievable’, ‘relevant’ (e.g., linked to organizational objectives), and ‘time-bound’ (i.e., within a specified period)
SWOT A strategic planning method that examines ‘strengths’, ‘weaknesses’, ‘opportunities’, and ‘threats’
TARPs Target audience ratings points based on the ratings system used in advertising (see http://www.multimediabuying.com.au/faq)
UWEX University of Wisconsin Extension program (see Taylor-Powell & Henert, 2008)
REFERENCES & FURTHER INFORMATION
Broom, G. (2009). Cutlip & Center’s effective public relations (10th ed.). Upper Saddle River, NJ: Pearson.
Cutlip, S., Center, A., & Broom, G. (1985). Effective public relations (6th ed.). Englewood Cliffs, NJ: Prentice Hall.
Cutlip, S., Center, A., & Broom, G. (1994). Effective public relations (7th ed.). Upper Saddle River, NJ: Prentice Hall.
Grunig, J. (2008). Conceptualizing quantitative research in public relations. In B. van Ruler, A. Tkalac Verčič, & D. Verčič (Eds.), Public relations metrics: Research and evaluation (pp. 88–119). New York, NY: Routledge.
Grunig, L., Grunig J., & Dozier, D. (2002). Excellent organizations and effective organizations: A study of communication management in three countries. Mahwah, NJ: Lawrence Erlbaum.
Stacks, D., & Bowen, S. (Eds.). (2013). Dictionary of public relations measurement and research. Gainesville, FL: Institute for Public Relations. Available at http://www.instituteforpr.org/topics/dictionary-of-public-relations-measurement-and-research
Taylor-Powell, E., & Henert, E. (2008). Developing a logic model: Teaching and training guide. Retrieved from http://www.uwex.edu/ces/pdande/evaluation/pdf/lmguidecomplete.pdf
Wholey, J. (1979). Evaluation: Promise and performance. Washington, DC: Urban Institute Press.
Wholey J., Hatry, H., & Newcomer, K. (Eds.). (2010). Handbook of practical program evaluation (3rd ed.). San Francisco, CA: Jossey-Bass.