Lies, damned lies and statistics

Confusion, sloppiness and downright lies mean many published stats are not all they seem, finds Alex Benady.

The Conservatives came to power in 2010 promising to fix the economy. But by January this year it was far from fixed. In fact, it seemed to be floundering. Anxious to remedy that perception, David Cameron appeared in a party political broadcast that set out to prove his policies were working.

Cameron said he would "give people the facts". Then a caption came up saying: "The deficit has been reduced by 25 per cent." Towards the end of the broadcast Cameron looked into the camera and concluded: "So though this Government has had to make some difficult decisions, we are making progress. We are paying down Britain’s debts."

It sounded persuasive. Even more so when the claim was repeated in news reports the next day. It seemed that the Tories were indeed fixing our finances. The only problem was it wasn’t strictly true. Far from being paid down, the national debt was increasing. What’s more, although it is true the deficit was down 25 per cent, if you exclude a one-off £28bn contribution from the Royal Mail pension fund, the reduction was closer to five per cent.

Cameron’s broadcast committed two serious errors, or what you might call stat crimes. First, it muddled ‘deficit’ (the amount by which spending exceeds income in any year) with ‘debt’ (the total amount owed by the Government) – a distinction you might think a Prime Minister should understand. Second, it confused a ‘fall in the rate of increase’ with ‘a fall’.
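The arithmetic is simple enough to sketch in a few lines of Python. The figures below are invented for illustration, not the real public finances; they just show that a shrinking deficit still adds to the debt – the debt only falls when the budget is in surplus:

```python
# Illustrative figures only -- not the actual UK public finances.
debt = 1000           # total owed at the start, in £bn
deficits = [120, 90]  # annual overspend: down 25 per cent year on year

for deficit in deficits:
    debt += deficit   # each year's deficit adds to the stock of debt
    print(debt)       # 1120, then 1210: the debt keeps rising
```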

It may sound like hair-splitting. But the broadcast and its subsequent reporting had a serious effect on national debate. It created a state of such confusion about the basic facts of the economy that even well-informed people could not engage with the subject. The head of economics at one of the UK’s top universities was moved to comment: "I can’t listen to stories about the debt any more because I just don’t understand what is being said."

The truth is that most of us do not give much thought to statistics most of the time, even though they are the way we inform ourselves about almost everything beyond our own direct experience. They are the thin layer of steely fact that runs through the mush of wishful thinking, impression and anecdote to form our view of the world.

And even though they are so powerful, or perhaps because they are so powerful, they remain curiously unaccountable, impervious to logical analysis in a way we would not tolerate with words. "Too many of us, when we see a number, just let it wash over us because so many of us have a problem with even basic statistics," says Will Moy, director of Full Fact, an organisation that promotes accuracy in public debate.

"Bad information makes for bad decisions and misunderstanding of how the world works on important issues." And, it has to be said, on lesser commercial issues that can be presented as significant because they have numbers att­ached; which town buys the most marital aids or how many women are happy with their dress size, for example.

The overwhelming majority of the audience did not even notice the mistakes in Cameron’s broadcast. But Sir Andrew Dilnot, chair of the UK Statistics Authority, or ‘Ofstat’ if you like, was so concerned that he wrote a letter, copied to Cameron, pointing out that the national debt had risen from £811bn in 2009 to £1,111bn at the end of 2012. "The rate at which the debt was growing was slowing," he explains. "But the debt was not being paid down."

But at least when politicians misuse statistics the UKSA is there to bring them to book. After a series of serious stat gaffes this year, 15 advisers at the Department for Work and Pensions were reprimanded by the UKSA and made to attend a statistics summer school.

The UKSA has just completed an audit of 1,700 sets of government statistics and has generally given them a clean bill of health. "By and large the methodology looks fine. There’s no evidence we’ve found of political interference," says Dilnot.

The same cannot be said for statistics from the commercial sector, where neither methodology nor the way the figures are used can be monitored effectively. The apparent authority of numbers, combined with a lack of numeracy on the part of PR people and journalists, produces floods of nonsense stories based on ropey statistics.

"I rarely complete my breakfast with equanimity," comments Lord David Lipsey , chair of the All Party Parliamentary Group on Statistics. "Only the Racing Post is always accurate with numbers. And that’s because money is involved."

Probably the most basic statistical tool is the survey. Surveys are only valid if the people they question accurately represent the relevant population demographically. "The most important things are to get the sampling right and to make sure your questions are free from bias," says Michael Marshall of the Merseyside Skeptics Society, who also writes a blog called Bad PR. "The trouble is many commercial surveys just grab the opinions of their customers and present them as typical of the whole population. That’s because brands often decide on their angle or story and then arrange their survey to prove it."

He cites the example of a recent story that ran in the Mail Online, the Daily Star and other titles, claiming that "50 per cent of all first-year students get an STD". "On closer inspection it turned out the survey was carried out by a contact website called Shagu@uni which only surveyed its members. And not many of them at that. These would be particularly promiscuous single young men, hardly representative of the student population as a whole," says Marshall.
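It is easy to simulate how far a self-selected sample can drift from reality. A minimal sketch, with invented rates rather than Marshall’s actual data: suppose the true figure among all students is 10 per cent, but 50 per cent among the site’s members.

```python
import random

random.seed(1)  # reproducible illustration

# Invented rates: the trait is far more common among the site's
# self-selected members than in the student population at large.
students = [random.random() < 0.10 for _ in range(100_000)]
members = [random.random() < 0.50 for _ in range(200)]

print(sum(students) / len(students))  # ~0.10: the true rate
print(sum(members) / len(members))    # ~0.50: what the "survey" reports
```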

There cannot be a PR practitioner alive who does not know a survey is the quickest way to gain coverage. And if the survey is just a show of hands in the office, or a question to your friends on Facebook, isn’t it up to the media to decide whether the story is strong enough to run?

The problem is that taking advantage of a dodgy poll to generate a story may be good for you in the short term, but it muddies the water and may erode the effectiveness of all PR in the longer term. "The statistical skills in the lower reaches of PR are appalling and have become one of the major reasons PR has earned itself such a bad reputation in many circles," says Marshall.

Holly Sutton, founder of PR agency Journalista, agrees numeracy is not really part of the PR industry skill set: "I rarely get CVs from people with an A-level in maths or with a statistics component at degree level. But the ability to drill down into statistics transforms you from ordinary to excellent."

If you are stats literate, she argues, there is often no need for dubious surveys: "The Government produces so much high quality data that often you can look through data sets to match up issues with your client’s agenda. You can find whole new narratives."

But hopefully not in a Tory party political broadcast kind of way.
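What might that drilling-down look like in practice? A minimal sketch using pandas, assuming a hypothetical government data set – the file name and column names here are invented; any published ONS or data.gov.uk table with a similar shape would do:

```python
import pandas as pd

# Hypothetical file and column names, for illustration only.
df = pd.read_csv("ons_household_spending.csv")

# Drill down to a client-relevant category, broken out by region and
# year -- the raw material for a new narrative.
pets = df[df["category"] == "pets"]
print(pets.groupby(["region", "year"])["weekly_spend"].mean())
```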

Common Statistical Mistakes

1. Sometimes errors are very basic indeed, such as the failure to distinguish between millions and billions, says Labour peer Lord Lipsey.

2. The use of what Will Moy calls "zombie statistics" – ones that sound plausible but that on closer inspection are five, ten or 20 years old.

3. Consistently quoting the top figure from a range. Statistics often come as a range of possibilities. Picking the top figure may create a good headline, but it probably doesn’t represent the truth.

4. False inference. A million foreigners may have entered the country and not be in registered employment. It does not follow that a million foreigners are claiming benefits.

5. Confusing a fall in the rate of growth with a fall. In year one I picked 500 apples; in year two, 520; in year three, 525. The rate of growth has declined, although the absolute number of apples picked has increased (see the sketch after this list, which also works through mistakes 6, 8 and 10).

6. Ascribing false significance. Just by random variation, some samples produce slightly bigger or smaller results than others. If changes are small – under three per cent – there is no way of knowing whether they reflect something different in the real world or are just sampling error.

7. Biased sampling. You cannot draw valid conclusions about the population as a whole unless your sample reflects the population as a whole. So any online poll has to be questionable because a quarter of the population does not own a computer, and their views may differ from those of computer owners.

8. Subsets. Your sample may be legitimate in the first place, but by the time you get down to a subset of a subset (e.g. young women in Yorkshire) the sample size may be too small to allow valid conclusions.

9. Truncating axes. I picked 505 apples last year and 500 apples this year – a small decline. If, however, I start my axis at 500 and run it to 510, it will look as if apple picking has fallen off a cliff.

10. Percentage points. The interest rate rose from five per cent to six per cent. Is that an increase of one per cent or 20 per cent? It’s a one percentage point increase – or, in relative terms, a 20 per cent rise.
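Some of these mistakes are easiest to see with the numbers in front of you. A minimal Python sketch of items 5, 6, 8 and 10, using the illustrative figures from the list above – none of it is real data, and the margin-of-error line uses the standard normal approximation for a sampled proportion:

```python
from math import sqrt

# Item 5: a falling rate of growth is not a fall.
apples = [500, 520, 525]  # years one, two, three
growth = [b - a for a, b in zip(apples, apples[1:])]
print(growth)             # [20, 5]: growth slowed, but totals still rose

# Items 6 and 8: sampling error. The 95% margin of error on a
# proportion p from a simple random sample of size n is roughly
# 1.96 * sqrt(p * (1 - p) / n).
def margin_of_error(p, n):
    return 1.96 * sqrt(p * (1 - p) / n)

print(round(margin_of_error(0.5, 1000), 3))  # 0.031: a 3-point swing may be noise
print(round(margin_of_error(0.5, 40), 3))    # 0.155: a subset of 40 tells you little

# Item 10: percentage points versus per cent.
old_rate, new_rate = 5.0, 6.0
print(new_rate - old_rate)                     # 1.0 percentage point
print((new_rate - old_rate) / old_rate * 100)  # 20.0 per cent, in relative terms
```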
