Caroline Topping, marketing services manager at BP Lubricants, is not impressed with media evaluation providers. Indeed, last year, the evaluation reports sent to her by one such firm were so difficult to comprehend that she moved the function in-house. Topping now uses her own methodology – one which she says the rest of her team finally understands.
Many PROs will recognise the problem faced by Topping. The diverse, and often confusing, tags that some evaluation companies use to describe ‘tone’ or ‘reach’ can make benchmarking almost impossible.
Topping previously used two PR agencies for media evaluation – Bell Pottinger for consumer media and Circle Communications for trade press. With some reports describing coverage as ‘negative’, ‘neutral’ or ‘positive’ – and others using the descriptors ‘factual’ or ‘favourable’ – Topping says she could not confidently ascertain the effectiveness of the campaigns being measured.
Indeed, some providers use numerical scores – say six out of ten – to grade coverage, while others stick to the type of words used above. Given that so much evaluation is individually produced, one person’s ‘neutral’ could well be another’s ‘negative’ or ‘positive’.
Topping says she wants some form of standardisation. ‘This would provide a benchmark to measure PR objectives better along with an agency’s performance,’ she says. ‘It also has a large role to play in saving time and money in analysing results, helping to drive efficiency and effectiveness.’
Put simply, media evaluation is subject to personal value judgements, and many PR professionals would feel more comfortable with a standardised methodology linked to scientifically robust formulae.
Evaluating the evaluators
This problem is not confined to the users of evaluation – it also affects evaluation companies themselves. Karen Pritchard is practice head for utilities, government and the environment at Echo Research. She says her firm was recently approached by a client to apply trend data that had been generated by a different evaluation provider, using measures such as ‘factual’, ‘beneficial’ and ‘adverse’. She explains: ‘There was no way for us to establish a benchmark, so we couldn’t do what the client wanted.’
Elsewhere, United Utilities has just completed a six-month review of the media evaluation services it uses. Media relations officer Louise Wylie, who oversaw the review, says the process would have been more straightforward if media evaluators had been using ‘equivalent’ terms of measurement. ‘I’m not saying that media analysis cannot be an interpretative art,’ she says. ‘But it would be good to standardise the factors related to gross media coverage.’
She adds: ‘We deal with words and phrases that have the term “water” in them, but it can be very difficult, using the evaluation I receive, to make like- for-like comparisons.’
So, what is being done to help PR practitioners make sense of evaluation? Trade body the Association for Measurement and Evaluation of Communication (AMEC) now demands that full members sign up to a Quality Assurance Code. This governs areas including data handling and checking procedures.
In addition, in order to encourage greater sharing of best practice, the organisation recently broadened its membership to encompass groups and individuals with an interest in communication planning, research and evaluation.
But AMEC chairman Nick Grant says he is keen to introduce a technical committee to clarify how quantitative measures should be used. ‘I would never advocate any form of a single standard, but there is work we can do to bring a bit more order to the measures that are used,’ he adds.
If the AMEC board approves his idea, Grant envisages the committee providing guidance in areas such as readership figures – for example, addressing issues around the acceptable age and sources of data and demographic segmentation.
The arguments against creating a descriptive standard to measure content centre on the fact that it is the qualitative, rather than quantitative, measures that are the hardest to standardise. And, of course, it is the qualitative that provides the most value. Therefore there is concern that the pursuit of commonality will create an evaluation language so simple as to render the service misleading.
‘It’s important to look at what you’re analysing in context, and realise that not everything is equal for each client or programme,’ says Metrica founder and director Mark Westaby.
He points to ‘favourability’, highlighting that on its own, knowledge of whether an article or broadcast item is positive or negative is worthless.
‘What makes that information valuable is considering who you are reaching. Otherwise you could have 100 items of coverage, but only half of them may be useful to a certain company,’ he says.
All about value
Controversially, of course, Advertising Value Equivalents (AVEs) have long been a measure of choice for many: a news item is physically measured, and a published advertising rate and a multiplier applied, in order to estimate PR’s tangible monetary value. Easy for those outside the communications industry – especially finance and sales personnel – to understand, AVEs are now generally viewed as flawed and are gradually falling out of favour.
But there is still demand. And on the basis that the customer is always right, the indication is that the driver of standardisation is likely to be those commissioning, rather than those conducting, the research.
‘Standardisation will come because of client demand, from those senior board directors who need businesses benchmarked in a transparent way,’ says Romeike business development director Edward Bird.
He claims PR has yet to meet the professional benchmarks set by, say, a company’s financial function. He adds: ‘PR evaluation is still young and the measurement of results only recently became commonplace. But there is likely to be an increasing emphasis on the delivery of measurable value.’
One technological advance, though, could become one of the most important drivers of standardisation: ‘automated sentiment analysis’ software, which reads articles and decides whether they are positive, negative or neutral without human interpretation. Although this may appear less exact, the idea is that because such software runs to a program, at least consistency of evaluation is achieved.
Internet monitoring expert Infonic, a subsidiary of AIM-listed text analysis specialist Corpora, is an advocate of the approach. It has spent five years developing software that calculates the sentiment of text, enabling instant analysis of tens of thousands of news articles. The software assesses the grammatical nature of each word and sentence, giving a positive, neutral or negative score to various parts of the text. The program then calculates the overall effect of these words and phrases to arrive at a ‘sentiment score’ for the article, company or brand in question.
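Infonic’s linguistic engine is proprietary, but the general idea of lexicon-based sentiment scoring can be sketched in a few lines of Python. Everything below – the word list, the averaging rule, the thresholds – is an illustrative assumption, not Infonic’s actual method:

```python
# Minimal sketch of lexicon-based sentiment scoring. The lexicon and
# aggregation rule are illustrative assumptions; a production system
# would also weigh grammar, negation and context.

LEXICON = {
    "profit": 1, "growth": 1, "award": 1, "innovative": 1,
    "loss": -1, "spill": -1, "criticism": -1, "decline": -1,
}

def sentiment_score(text: str) -> float:
    """Score each known word, then average to a value in [-1, 1]."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    hits = [LEXICON[w] for w in words if w in LEXICON]
    if not hits:
        return 0.0  # no sentiment-bearing words: treat as neutral
    return sum(hits) / len(hits)

def label(score: float) -> str:
    """Map a numeric score to the positive/neutral/negative tags used above."""
    if score > 0.2:
        return "positive"
    if score < -0.2:
        return "negative"
    return "neutral"
```

A scheme like this is crude, but – as the article notes – its appeal is consistency: the same text always receives the same score.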
Corpora head of knowledge management Saul Haydon Rowe says the technology is currently being tested by banks and news vendors. ‘As reputation increasingly affects share price, and quantitative reputation scores are applied to financial trading, this software allows automatic assessment across thousands of different companies in a flash,’ he adds.
Aside from the question of whether this technology will find mass adoption, it is perhaps regrettable that reputations might rest on media relations alone. However, Haydon Rowe’s argument does highlight that, rather than standardisation, PR people may be more concerned with accessibility and speed.
Durrants managing director Jeremy Thompson says: ‘We did some research last summer among senior comms decision makers. It showed that because of the proliferation of media, respondents felt they had less and less control of managing their messages.’ In response, his organisation has launched a service that allows users to plan, monitor and evaluate their media coverage online, generating real-time quantitative results.
The quality question
At the heart of the standardisation debate, however, is the question of whether a one-size-fits-all solution will improve the quality and value of media evaluation. And the consensus seems to be a firm ‘no’.
‘Media evaluation is an ongoing learning process, which helps identify what works and what does not, so as a programme progresses you respond and adapt,’ points out Sara Balme, director of consumer lifestyle agency Focus PR.
Her organisation conducts in-house media analysis for clients including Cadbury, Pernod Ricard and Piaggio scooters. She says that in the case of the latter, photography is a priority, so a full-page visual of a model draped over a Vespa rates highly. On the other hand, Pernod Ricard is focusing on wine-trade journalists’ perceptions of its Jacob’s Creek brand.
Having conducted an initial benchmarking audit, Balme says: ‘A large part of our evaluation is looking at how we can change, or have changed, those perceptions.’
Similarly, Porter Novelli research and business development director Mary Baker argues that as PROs try to target audiences more accurately, traditional media evaluation is becoming redundant. ‘We need to see whether we are influencing people to do something different,’ she says.
PN therefore takes a three-tier approach to evaluation, looking at which people, and how many, were reached; whether attitudes were changed, or awareness raised; and ultimately whether coverage changed behaviour.
The priority is to enable understanding of a campaign’s effectiveness. The debate about whether this is best achieved through a standard – or ‘the ongoing learning process’ described above – will no doubt rage on.
In September 2005, BP appointed Echo Research to evaluate coverage of the company as part of its Reputational Research Programme. The energy group was particularly keen to receive clear-cut reports that could be digested easily by parties outside the communications function, including the board.
The main challenge for Echo was not only condensing large amounts of information, but deciding how to depict positive and negative coverage. Echo decided that the clearest way to illustrate tone would be via a thermometer at the top of each report. This showed BP’s overall rating that month in terms of favourability on a scale of 0°C to 100°C, with warmth denoting positivity.
The graphic represents analysis of issues including company profits, rising oil prices, alternative energy sources, environmental concerns and CSR. ‘Analysis of individual articles began from the neutral starting point of 50°C,’ explains Echo practice director Karen Pritchard. ‘Depending on factors such as placement and tone, readers marked articles up or down in increments of five degrees.
‘Occasionally the mathematical formula might rate an article 75°C, for example, but our analysts – who are extremely experienced – might know from reading the piece that it should more realistically be rated at 70°C.’
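Pritchard’s description translates into a simple scoring routine. The sketch below assumes only what she states – a 50°C neutral starting point, five-degree adjustments, a 0-100 scale and a possible analyst override; the factor values in the example are hypothetical:

```python
# Sketch of a thermometer-style favourability score as described above:
# start at a neutral 50, mark up or down in five-degree increments,
# clamp to the 0-100 scale, and allow an experienced analyst to override.

def article_temperature(adjustments: list, analyst_override=None) -> int:
    """Each adjustment is +5 or -5 for a factor such as placement or tone."""
    temp = 50 + sum(adjustments)
    temp = max(0, min(100, temp))  # keep within the 0-100 scale
    if analyst_override is not None:
        temp = analyst_override  # analyst judgement trumps the formula
    return temp

# e.g. three favourable factors: 50 + 5 + 5 + 5 = 65
```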
BP director of UK reputation and internal communications Ian Adam describes the scorecard as ‘a concise, consistent and meaningful monthly report’. He adds: ‘The document is actually read and understood by all, rather than simply sent to all.’
Canada’s Attempts at Standardisation
Canada is among the first countries to tackle standardisation. Last April the Canadian Public Relations Society (CPRS) launched a Media Relations Rating Points (MRP) system, designed to standardise the measurement and reporting of editorial media coverage.
It was developed over four years by an evolving group of volunteer PR professionals across client companies, government departments and agencies. Its aim is to provide an accessible and affordable tool for the qualitative evaluation of any media relations campaign, from a planned PR initiative to a crisis situation.
Available to download for an annual subscription of £317, the system includes a media report template, rating system and tool for obtaining up-to-date, accurate ‘reach’ numbers.
The scoring system is based on a scale of zero to ten. ‘Tone’ accounts for a maximum of five points, with five further customised criteria worth a possible one point each. Depending on campaign objectives, these criteria range from the positioning of an article and whether it uses a picture, to additional messages or a quote.
The system then conducts a cost per contact analysis, based on total programme spend over reach.
According to CPRS measurement committee chair Tracey Bochner, success for a straightforward national campaign would be upwards of 7.5 out of ten.
‘Fundamentally, a standardised system is only as good as the data that supports it and this means that we’re all using the same numbers,’ she says.
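The MRP arithmetic described above – tone out of five, five customised one-point criteria, and a cost-per-contact figure of spend over reach – can be sketched as follows. The criterion names here are illustrative assumptions, not part of the official template:

```python
# Sketch of the MRP scoring scheme described above: 'tone' contributes up
# to five points and up to five customised yes/no criteria one point each,
# for a score out of ten.

def mrp_score(tone: float, criteria: dict) -> float:
    """Tone (0-5) plus one point per customised criterion met."""
    assert 0 <= tone <= 5, "tone is scored out of five"
    assert len(criteria) <= 5, "at most five customised criteria"
    return tone + sum(1 for met in criteria.values() if met)

def cost_per_contact(total_spend: float, total_reach: int) -> float:
    """Total programme spend divided by audience reach."""
    return total_spend / total_reach

# Hypothetical campaign: tone of 4, four of five criteria met -> 8.0
score = mrp_score(4.0, {"photo": True, "quote": True, "key message": True,
                        "positioning": True, "spokesperson": False})
```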
It is not without its critics though. Andrew Laing, president of Toronto-based media analysis firm Cormex, says the MRP measure is fairly simplistic: ‘If a programme scores 8.5 that doesn’t tell you much, so I don’t think it’s a good strategic or tactical tool.’
Furthermore, he is troubled that the system encourages PROs to rate their own work and fails to provide clear metrics for ‘tone’.
He says: ‘Half of the points are awarded for tone, which allows a level of bias. And none of the media evaluation people in Canada were consulted when the system was put together.’
To date, 50 PR agencies, plus many major companies, have subscribed to the Canadian MRP system.
‘Tone’ to the test
How differently would a selection of evaluators grade the same articles in terms of their positivity, negativity or neutrality?
Four evaluators read two articles, provided by Launch Group, about clients BP and Carling. The BP article (in the Financial Times) reported BP’s launch of targetneutral, which lets motorists calculate (and voluntarily offset) their carbon emissions; the Carling article (in The Sun), was about Coors’ low-alcohol beer brand C2.
ARTICLE ABOUT BP
TNS says Neutral/balanced: ‘Overall this article is neutral/balanced towards BP as both sides of the argument are presented in detail. On a more detailed level, we would classify the CSR and leadership messages for BP as positive, but the environmental message as neutral.’
Romeike says1 Partially positive: ‘The BP article is complex. It starts out neutral, then Robin Oakley [of Greenpeace] brings a negative slant to the story, but the final five paragraphs are all positive in tone.’
Media Report says2 Balanced: ‘Taking our four descriptions of “negative”, “positive”, “neutral” and “balanced”, and looking at the “authorial voice” – the comment pieces of the article rather than a straightforward evaluation of whether the story is “good” or “bad” news – the BP story would be rated as “balanced”.’
Infonic says Negative: ‘The overall balance of the article leans subtly to the negative side. The tipping point is the headline, which appears to apply an unreasonable spin to the story given the facts presented in the article.’
Andy Nash, head of media at Launch Group, says: ‘We were happy with the FT story because the key messages about targetneutral all come through strongly and they used a quote from environmentalist Jonathon Porritt backing the scheme, which effectively neutralised Greenpeace’s anti-offsetting stance. We see what Infonic means about the headline, but we would rate this piece as neutral/balanced.’
ARTICLE ABOUT CARLING
TNS says Positive: ‘This piece is positive towards Carling. It also communicates positive messages for responsible drinking and innovation. It sounds as if it has been lifted from a press release.’
Romeike says1 Wholly positive: ‘The Carling article is wholly positive on our five-point scale. The journalists’ comments within both articles are neutral. However, [Carling brand director] Andy Cray’s comments are positive in tone and raise the overall favourability.’
Media Report says2 Favourable: ‘Taking our four descriptions of “negative”, “positive”, “neutral” and “balanced”, and looking at the “authorial voice” – the comment pieces of the article rather than a straightforward evaluation of whether the story is “good” or “bad” news – the Carling article would be neutral.’
Infonic says A predominantly neutral article: ‘The headline is tabloid in style, while the article text uses prudent, but not discernibly sentiment-bearing, equivocation. The balance is tipped by the inclusion of a statement from the Carling brand director encapsulating the attributes of the product, and presenting them in a positive way.’
Andy Nash, head of media at Launch Group, says: ‘Again, we were pleased with the Carling C2 story. The tone of the whole piece feels upbeat. The evaluators appear to agree, apart from Infonic, whose “predominantly neutral” rating seems better suited to the BP piece. It’s interesting that the evaluators couldn’t pick up on the communications context to these stories. BP has had a hard time of late, so a story that was non-negative was exceptionally good news.’
1 Romeike uses a five-point scoring system: ‘wholly positive’; ‘partially positive’; ‘neutral’; ‘partially negative’; and ‘wholly negative’.
2 Neutral: The article is a simple reporting of facts with no opinion in relation to the client. Positive: The article has positive comment in relation to the client. Negative: The article has negative comment in relation to the client. Balanced: The article has a roughly equal number of negative and positive comments in relation to the client.
In this week's PRWeek/CTN podcast you can listen to Andy Nash talk further on the subject of media evaluation.