Highlights from a keynote presentation on the future of PR measurement at the Amsterdam AMEC Summit by Professor Jim Macnamara, Deputy Dean, School of Communication, University of Technology, Sydney.
The ‘march to standards’, as the international project to develop standards for measurement and evaluation has been called, is no small or brief undertaking. Nor is it an unimportant one. On the contrary, I take my hat off to those who have planned and initiated the ‘march’, and I urge those not yet enlisted in the quest to join it. Your future, and the future of public relations, depends on it.
People have been looking for ways to measure and demonstrate the value of public relations ever since the birth of modern PR as a defined function. Yet there is still no universal agreement, or even majority agreement, on a single method for measuring the effectiveness of public communication activities.
While there are some shining lights in measurement, practitioners mostly measure outputs and are still caught in a deadlock when it comes to identifying the outcomes of PR and corporate communication and its value to an organization – and its stakeholders.
Standards breakthrough, but still gaps and inconsistencies
Thirty or more metrics are currently used to measure PR and corporate communication. While I strongly support my colleagues who argue that there is no single ‘silver bullet’, the plethora of metrics, such as “awareness”, “engagement” and “influence”, to name just a few, is more confusing than clarifying. Most relate to basic, low-level output measurement, and some are of questionable validity and value.
In most advertising, marketing and PR literature, engagement is poorly defined and described in superficial ways that regard clickthroughs, ‘following’ and ‘likes’ as engagement when, in fact, engagement is a deep psychological concept, involving significant levels of emotional involvement and participation.
The search for an ROI still goes on in many sectors of the PR industry. In my research, I identified 10 different types of ROI and “quasi-ROI” discussed in industry literature. This is not conducive to standards or to achieving understanding of the value of PR and communication. Meanwhile, a number of other tools for measurement and evaluation have not been given significant or, in some cases, any attention in the ‘march to standards’.
But even if we address these gaps and refine draft standards, two big questions remain:
1. Why, after more than 40 years of intensive effort, are we still unable to reliably and clearly demonstrate the value of PR and corporate communication? and
2. How can we break the deadlock in implementing measurement and evaluation in practice?
While acknowledging that numbers have “a rigour and logic” about them, as John Durham Peters says, numbers also have “a serene indifference to the world of human things”. Communication always has a degree of subjectivity about it, an element that sits uneasily with three tenets of the scientific paradigm:
1. The reductionist processes of science which limit knowledge to certain types of observable data;
2. The notion of commensurability, a belief that diverse qualities can be measured by a common standard and reduced to a metric; and
3. The underpinning claim of the scientific method to objectivity achieved through detachment. Scientific research is purportedly conducted from a dispassionate perspective, detached from all human subjectivity and emotion. Therein lies its greatest limitation as a method for studying human communication.
Quantitative methodology dominates the research landscape and is influential in corporate and marketing communication. For instance, the tag line of the major PR industry research organization, the Institute for Public Relations (IPR), is “the science beneath the art”. This clearly indicates a view that PR should be underpinned by scientific knowledge and quantitative methods of research.
I would like to challenge this notion, not to weaken PR and corporate communication, but to liberate them from the straitjacket of “numbers” and allow us to reveal their true value.
Understanding human communication
We have all heard and probably use the phrase ‘perception is reality’. When we look at human communication and practices such as public relations and corporate communication, we need to recognize that the outtakes and outcomes include:
• Awareness of a product, service, feature, problem or condition – not simply in terms of level, but qualitatively such as positive awareness rather than negative awareness;
• Perceptions, such as reputation and brand attributes;
• Attitudes, such as goodwill, support, or intention to buy;
• Opinion, which involves attitudes but is often publicly expressed, whereas attitudes may remain latent;
• Relationships; and
• Behaviour, such as buying, joining, voting, getting fit, advocacy, and so on.
Interpretations and feelings such as these are based on emotion as well as rational logical reasoning. They are subjective, not objective. They are infinitely variable and diverse, not stable phenomena. Yet, we try to measure and evaluate these outtakes and outcomes using scientific methods and quantitative data.
This is a bit like trying to give close personal relationships a financial value or a score out of 100. And, by the way, you need to be prepared to show it to your loved one and be able to scientifically prove that it is a correct calculation.
The point of this simple demonstration is that human interactions, relationships, feelings, attitudes, loyalties, perceptions and engagement do not yield easily, if at all, to numeric quantification and some things cannot be explained in scientific terms. In focusing predominantly on quantitative research and searching for numbers to define our work and our value, we are trying to measure air with a ruler.
Quantitative research is reading a temperature gauge to describe the weather; qualitative research is being out in it.
Qualitative research is not a softer or weaker type of research. Quantitative research produces averages, ratings, scores, generalisations, and pretty charts that look good in marketing meetings. These give rudimentary ‘temperature gauge’ measures of organisation reputation and stakeholder perceptions. Only qualitative research and content analysis can provide understanding of, and insight into, human thinking, perceptions and attitudes.
Shifting focus from looking back to looking forward
Current measurement and evaluation processes predominantly look backwards – at what has been done in the past. In most cases, measurement and evaluation fail to give an organisation and its stakeholders anything other than a retrospective performance review of work done. Sometimes M&E is seen as little more than an exercise in post-rationalisation and self-justification by practitioners.
So what should we do? Let me now introduce you to a new model of measurement and evaluation that changes the game considerably.
A new model for PR measurement and evaluation
Measurement and evaluation obviously must begin with measurement, involving data collection and data analysis. But the key point about this new model is that measurement should collect qualitative data as well as quantitative data.
Measurement should then be followed by in-depth analysis. The second stage of this expanded model looks beyond measurement metrics collected by the organisation. It can draw on available information such as case studies, theories and models. This stage can also incorporate market analysis, competitor analysis and business analysis. This provides a deeper, richer data pool, including ‘big data’ if relevant, and a focused process for producing findings.
This deep analysis is undertaken for two reasons. First, before evaluation, analysis informed by measurement and other data is designed to identify insights that can inform future business or organisation strategy – the third stage of this model. Rather than simply reporting past achievements, insights are forward-looking, creating potential for value adding initiatives by the organisation, whether these create value through increases (e.g., in sales, reputation, or employee loyalty) or reductions (e.g., in costs or risk). Whereas traditional evaluation findings are descriptive, insights involve inferences, predictions, suggestions and recommendations.
Insights might include, for example, identification of a gap left by competitors, an opportunity to seize thought-leadership on an emerging issue, a likely legislative initiative based on patterns of political comment, or a mood swing among stakeholders that can be productively addressed at an early stage.
This forward-looking approach, designed to provide insights that contribute to future business or organisation strategy as well as inform performance management, addresses two other key obstacles identified in recent research.
1. It helps bridge the gap between PR and organisational outcomes
Rather than trying to retrospectively link PR to business or organisational outcomes, which can be seen as post hoc rationalisation, this approach produces positive contributions to the future success of the organisation.
2. It addresses the contradiction at the heart of the PR measurement dilemma
Despite demands for results and accountability, employers often will not pay for and sometimes even do not want rigorous measurement and evaluation. Some do not want to pay for what they feel they already know and what cannot be changed. But they are far more likely to pay for what they don’t know and what can change the future.
Where do insights come from?
Generally speaking, insights are gained when multiple pieces of information and perspectives are brought together. Insights rarely emerge from a single data set, and they do not simply pop into your head. They very often emerge from conflicting or contrasting data, and from finding the signal in the noise.
One of the other barriers to effective measurement and evaluation identified in a number of studies is a lack of research knowledge among practitioners. There is no escaping that professionals today need high levels of research knowledge. Here are some examples of analysis steps and techniques that can create an insightful, value-adding approach to measurement and evaluation:
1. Have enough information
Usually, the more the better, provided you have the tools and skills to analyse large amounts of data. Such data is more obtainable now than ever before: social media, for example, provides a huge pool of qualitative data, the biggest readily available focus group in history.
2. Cross verify
This involves collecting and comparing two, three or more data sets related to the same issue. If one data set suggests something, it may be a correct conclusion; but if two, or particularly three, data sets gained in different ways all suggest the same thing, you can be far more confident in the finding.
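As an illustration, this cross-verification (or triangulation) idea can be sketched in a few lines of code. The data sources and sentiment figures below are entirely hypothetical; the point is simply to check whether independently gathered data sets on the same issue point in the same direction.

```python
# Illustrative triangulation check: compare the direction of findings from
# several independent data sets on the same issue. All figures are hypothetical.

def direction(score, threshold=0.1):
    """Classify a net sentiment score as positive, negative, or neutral."""
    if score > threshold:
        return "positive"
    if score < -threshold:
        return "negative"
    return "neutral"

# Hypothetical net sentiment toward the organisation from three
# independently gathered sources
sources = {
    "media coverage": 0.32,
    "stakeholder survey": 0.27,
    "social media": 0.18,
}

directions = {name: direction(score) for name, score in sources.items()}
agreed = len(set(directions.values())) == 1

for name, d in directions.items():
    print(f"{name}: {d}")
print("Finding corroborated across all sources" if agreed
      else "Sources conflict - investigate further")
```

Here all three sources point the same way, so the finding is corroborated; in practice, conflicting directions are often where the most interesting insights begin.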
3. Immersion in the data
There is no substitute for properly engaging with the information. Too often we skim data and reports, picking up only a general sense of them. A general sense does not produce insights. Insights are often hard-won, gleaned from mountains of material through perspicacity and perseverance, or by using the right analytical tools to filter out the noise.
4. Data organisation
Notwithstanding the need for immersion in the data, it is essential in research to be able to organise and summarise data in forms such as rankings, tables, diagrams, charts, graphs, infographics and maps to help you understand it. Qualitative data in text form, such as interview transcripts, can be condensed by coding and categorising, and by visualisations such as ‘tag clouds’.
5. Seek independent analysis
For example, in content analysis of media articles and interview transcripts, use multiple independent coders whenever possible and conduct intercoder reliability assessment to identify common patterns of meaning. Also, at the analysis stage, colleagues who come fresh to the data, and who have no bias towards particular strategies or activities, can sometimes see things that those most involved cannot.
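To make the intercoder reliability idea concrete, here is a small sketch in which two coders independently classify the same ten media articles as positive, neutral or negative, and we compute simple percent agreement and Cohen's kappa (which corrects for chance agreement). The codings are hypothetical.

```python
# Intercoder reliability sketch: percent agreement and Cohen's kappa
# for two coders classifying the same items. Codings are hypothetical.
from collections import Counter

coder_a = ["pos", "pos", "neu", "neg", "pos", "neu", "neg", "neg", "pos", "neu"]
coder_b = ["pos", "neu", "neu", "neg", "pos", "neu", "neg", "pos", "pos", "neu"]

n = len(coder_a)
observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n

# Expected agreement by chance, from each coder's marginal category proportions
count_a, count_b = Counter(coder_a), Counter(coder_b)
expected = sum((count_a[c] / n) * (count_b[c] / n)
               for c in set(coder_a) | set(coder_b))

# Cohen's kappa: agreement beyond chance, scaled to the maximum possible
kappa = (observed - expected) / (1 - expected)
print(f"Percent agreement: {observed:.0%}, Cohen's kappa: {kappa:.2f}")
```

A kappa well above zero indicates the coders share a common reading of the material; low values suggest the coding scheme needs clearer definitions before the findings can be trusted.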
6. Pick it apart
The refutability principle should be applied, trying to refute initial assumptions or findings to see if they can stand up to scrutiny and contradiction. In other words, deliberately try to prove your own findings wrong.
7. So what?
Ask yourself repeatedly the important research question: ‘So what?’ For every finding and conclusion you draw, quantitative or qualitative, ask what it means and whether it matters. What are the implications? What should the organisation do? What should the organisation not do?
There are also a number of other things researchers do to interpret data and gain insight into its significance and implications, including peer review and presenting findings to groups to test them and gain feedback.