Evaluation. It's the word on everyone's lips in the PR community
these days as the industry seeks to stake its claim alongside its
older and brasher sister, advertising.
But, as with many areas of public relations, the proof often rests on
coming up with more watertight technical systems for measuring media
exposure.
Starting on page 16, we take a close look at an issue that has been
raging in our news and letters pages for weeks: how to handle the
myriad ways that VNR audiences are measured.
Although the majority of PR agency people we spoke to still favor using
Nielsen figures gained through its Sigma encoding system, it appears
that most tend to use a menu of systems, relying on Nielsen as a
primary source and supplementing it with captioning data.
But the problem remains that working out basic audience figures is only
the starting point for evaluating the effectiveness of VNRs.
Even if the average viewership of a show is known, how is it possible to
know whether the particular show that the VNR ran in had especially high
or low ratings because of some other factor? Or in what context the VNR
was shown? Or for how long?
Short of briefing your execs to watch the programs you're confident the
VNR will run in (which already happens to some limited degree), it is
always going to be difficult to report with certainty how well a VNR
has contributed to a campaign.
This appears to go against the grain at a time when PR is so intent on
proving its worth. But maybe it isn't so bad. Look at the ad industry,
which has traditionally done such a good job of marketing its wares to
clients.
Can advertisers isolate how one particular ad has contributed to a
sales shift? Generally not. Does that prevent clients from using
advertising? No.
In the quest for better VNRs, PR should focus on the creative quality of
the product instead of getting too caught up in exactly how many people
will see it. It's time to put more work into quality rather than
quantity.