The practices of PR and strategic public communication have struggled with evaluation for almost half a century, as leading practitioners such as Fraser Likely and scholars such as Emeritus Professor Tom Watson have noted (Likely & Watson, 2013). Much good work has been done and significant progress made, such as the Barcelona Declaration of Measurement Principles (AMEC, 2015). However, practitioners face a diverse and often confusing range of models, metrics, and methods, and a lack of standards, which are undoubtedly among the reasons that many practitioners still do not evaluate using rigorous methods.

A number of industry organisations have recognised the need to develop standards for evaluation and to provide practitioners with tools based on best practice. The AMEC Integrated Evaluation Framework is a major initiative in this development.

In developing this Integrated Evaluation Framework, AMEC has worked and continues to work in collaboration with many industry organisations around the world, with practitioners, and with leading academics in the field of evaluation and social science research.

An important breakthrough presented in this framework is that AMEC has looked beyond PR evaluation models to other fields and disciplines including performance management, public administration, organisational development, and advertising and marketing.

Program evaluation is a highly developed field, particularly in public administration, with a body of theory such as program evaluation theory and theory of change – i.e., knowledge about how and why things change and how change can be effected. Within the practice of program evaluation in public administration and fields such as project management, program logic models are widely used as practical models for implementation. While some evaluation models for PR and communication have applied elements of program logic models, much of the knowledge and resources in other fields have been overlooked, resulting in lost opportunities, fragmentation, and ‘reinventing the wheel’.

By reaching into other disciplines and fields of practice such as public administration and management studies, as well as social science disciplines such as social psychology, this framework is able to identify widely used approaches and best practice as the basis for standards.

For example, PR evaluation models commonly identify three or four stages of communication programs and campaigns – e.g., inputs, outputs, outcomes, and sometimes outtakes – and other PR evaluation models introduce new terms such as outflows and outgrowths. By contrast, the program logic models that are widely used across many industries worldwide typically identify stages of inputs, activities, outputs, outcomes, and impact.

The AMEC Integrated Evaluation Framework is built on these widely used logic models, underpinned by a solid foundation of theory and extensive testing in practice, and adapted to strategic public communication.

Best practice in communication evaluation also draws on communication and media theories such as the steps of information processing identified by W. J. McGuire (1985, 2001) in the Handbook of Social Psychology, and the communication-persuasion matrix. The AIDA model (attention, interest, desire, action) used in advertising is a derivative of McGuire’s steps, although the full list of steps in information processing is much more extensive, including exposure, attention, understanding, liking, retention, consideration, acquiring skills or knowledge, attitude change, intention, action/behaviour, and advocacy (McGuire, 2001).

The AMEC Integrated Evaluation Framework is an integration of best practice and best knowledge from these and other fields and disciplines. This brings greater consistency to evaluation of strategic public communication, leads the way towards standards, and ensures rigour and validity.

The taxonomy, model, tools, and resources produced as part of this AMEC framework also align with planning and program management models developed in public relations and corporate communication, including the RACE model of PR planning, which stands for research, action, communication, evaluation (Marston, 1981); the ROPE model, which stands for research, objectives, program/plan, evaluation (Hendrix, 1995); the expanded RAISE model (Kendall, 1997), which advocates research, adaptation, implementation, strategy, evaluation; and Sheila Crifasi’s (2000) ROSIE model, which slightly rearranges the stages as research, objectives, strategies, implementation, evaluation.

Jim Macnamara

Professor of Public Communications,

University of Technology, Sydney


Crifasi, S. (2000). Everything’s coming up Rosie. Public Relations Tactics, 7(9), September. New York, NY: Public Relations Society of America.

Hendrix, J. (1995). Public relations cases (3rd ed.). Belmont, CA: Wadsworth.

Kendall, R. (1997). Public relations campaign strategies: Planning for implementation (2nd ed.). New York, NY: Addison-Wesley.

Likely, F., & Watson, T. (2013). Measuring the edifice: Public relations measurement and evaluation practice over the course of 40 years. In K. Sriramesh, A. Zerfass, & J. Kim (Eds.), Public relations and communication management: Current trends and emerging topics (pp. 143–162). New York, NY: Routledge.

Marston, J. (1981). Modern public relations. New York, NY: McGraw-Hill.

McGuire, W. (1985). Attitudes and attitude change. In G. Lindzey & E. Aronson (Eds.), Handbook of social psychology, Vol. 2 (3rd ed., pp. 233–346). New York, NY: Random House.

McGuire, W. (2001). Input and output variables currently promising for constructing persuasive communications. In R. Rice & C. Atkin (Eds.), Public communication campaigns (3rd ed., pp. 22–48). Thousand Oaks, CA: Sage.