Driving hard for one formula: As part of the Proof campaign to improve standards of research and evaluation in PR, we invited a dozen practitioners to discuss what constitutes best practice. Stephanie France reports

Last month, PR Week invited a handful of industry practitioners to discuss best practice in PR research and evaluation and whether it is possible to set industry benchmarks in this area. The meeting formed part of PR Week’s Proof campaign, launched in February, which aims to encourage practitioners to allocate ten percent of their PR budget to research and evaluation.

Unsurprisingly, the debate threw up more questions than it answered. At times it seemed as if delegates were in search of the Holy Grail rather than attempting to agree upon best practice.

Clearly the industry is committed to research and evaluation in principle, but it is still confused over which particular strands of a campaign ought to be evaluated and how this should be done. Agreement was reached on one point, however: the old system of totting up column inches has been debunked as the most reliable method of evaluation.

Tom Wells, a board director of Consolidated Communications, says: ’Evaluation of a campaign consists of three stages - output, uptake and outcome. So much evaluation has traditionally concentrated on output - how many column inches a campaign generates - but this doesn’t tell us which messages have actually been seized when someone is reading that newspaper or listening to that programme. Nor does it tell us what consumers have taken out, how it has changed their perceptions or what has made them switch from one brand to another.’

Kim Fernihough, head of PR and communications at Avon Cosmetics, agrees.

’All the consultants I work with can tell me how much coverage a campaign has generated, but few can tell me what impression that campaign has left on the consumer. I don’t know what the reader thinks when she sees it or what she takes out of it when she hears it.’

However, Peter Crowe, director of planning and research at Metrica, is not convinced. He believes tracking the customer to such a degree may lead PR practitioners up a blind alley. ’Do you really want to get too involved in the "Road to Damascus" moment when people are actually reading an article?’ he asks.

’It would be like looking for a needle in a haystack. If you are getting movement in the right direction and this is leading to improved sales, I wouldn’t worry too much about the exact psychological details behind it.’

Peter Walker, president of the IPR, is also sceptical. He says: ’The one thing we have to learn from the advertising industry is not to go down the route of measuring awareness. You can put £11 million through an advertising agency in three weeks (for a TV commercial) and then say "look at all this awareness". But no one has measured how quickly the message becomes degraded.’

Mike McHale, communications development manager of Rover Group, goes one step further, saying audiences are actually filtering messages.

’I was at a football match recently where a well-known company had sponsored the score line. An hour into the game, I asked people who was sponsoring the score and no one knew. There is a difference between the opportunity to see something and registering it.’

Whether tapping into the psyche of the consumer proves to be a PR red herring is open to debate. However, most practitioners are on more solid ground when research and evaluation is tied to specific, strategic and measurable objectives. These are generally linked to business objectives, such as improved sales, a raised share price or effecting a change in Government policy.

For example, Wells at Consolidated is involved in a campaign for Jose Cuervo Tequila. He explains: ’In this campaign we have a very specific objective for a specific client, namely to get the product down the necks of 18-25 year olds in about 35 key West End (London) bars.’

He says it is relatively easy to keep tabs on such a specific campaign, and to evaluate it against sales. But, as David St George, Bayer’s healthcare relations manager, pharmaceutical division, points out, it is a little trickier to set clear, measurable objectives when the campaign is international.

St George says: ’If I’m working on a product campaign, I also have to think that I belong to a pharmaceutical division, in a chemical company, within the UK, within the world. So which objectives should I be fulfilling? If it is the product campaign objective, does it also fulfil the corporate objective that Bayer has to be seen as an acceptable company?’

He adds that if the campaign is global, it can take two to three years to analyse consumer perceptions, by which time he may have switched consultancies.

Peter Hehir, chairman of Countrywide Porter Novelli, believes it is important to set objectives at the very beginning of a campaign ’so that everybody can buy into it’. And Fernihough of Avon believes in the importance of clearly understanding the objectives of a campaign before embarking on it.

’Take the Fashion Targets Breast Cancer campaign, which Avon is sponsoring,’ she says. ’The objectives there are to increase consumer awareness that Avon cares about women, improve the way people feel about Avon and increase customers’ propensity to purchase because they realise we are not just another beauty company.’

Fernihough says the campaign, which is aimed at 25- to 45-year-olds, is being evaluated in several ways, including counting the number of responses to a special telephone number and branded T-shirt sales. It is also measured by the number of radio interviews given and through market research, in which women were asked to list beauty companies involved in breast cancer awareness. However, Fernihough says difficulties have arisen when attempting to put the results into some kind of context.

’It was the first time we had carried out such a campaign, so we could not set a target indicating how many calls we should be getting,’ she explains.

Clearly no two campaigns are identical, meaning the objectives of each will be different, as will the means of evaluating them. But many PR people would still like to see a series of benchmarks against which they can measure the effectiveness of a campaign. Once this practice has been established and accepted by the majority of the industry, practitioners can start to convince clients and CEOs that research and evaluation should be built into every campaign from the outset.

Crowe of Metrica says: ’We have to ask: what is the role of PR in this particular launch or campaign? Everything from developing the strategy to determining how to evaluate the campaign flows from that point. Of course the role of PR is going to be different with each product launch or campaign.’

Katie King, account director at Text 100, agrees: ’We need to offer a series of tools which practitioners can draw upon to measure the different strands of a PR campaign. We need to ask: what is the appropriate PR objective to be set for this type of campaign? We also need to illustrate case studies of good and bad practice. We should not be aiming for one particular measure, but a series of tools.’

Jackie Elliot, CEO of Manning Selvage and Lee and PRCA chairman, is in favour of setting benchmarks to rate the success of a campaign. ’We should agree on and then submit to an external benchmark or standard. That is what we should be driving for.’

Her view is backed by Adrian Wheeler, managing director of GCI Group and chairman-elect of the PRCA. He says: ’We see a universal evaluation system as being the next step in our pursuit of professionalism. It should be a comprehensive blueprint which will include all appropriate methods of linking objectives with evaluation and which will be adopted by our own members and the client community alike.’


