Nick Sparrow’s February PB Polling Column

February 3rd, 2011

Smoke and Mirrors. The art of poll weighting

So far in 2011, the polls have put the Conservatives as high as 41% and as low as 32%, and Labour as high as 44% and as low as 39%, while the estimate of support for the LibDems has ranged from 15% to just 7%. Such variation can suggest only one of two things: either the electorate is in a nervous state of flux, or the methods used by the polling companies produce different readings.

Of course some pollsters rely on telephone-based samples while others use online methods. But all companies use weighting schemes to ensure their samples are “representative”, so you would think we are comparing like with like. Yet pollsters do not rely on the same set of variables to weight their data, and while some weighting variables are simply explained, others are not. That is not a big problem until you realise that the weights being applied can sometimes be huge, and can thus have a very large impact on the reported data.

The easy ones to explain. We know from census data (carefully updated) the proportions of the population in each demographic group (age, sex, region, work status, etc). It seems entirely reasonable to weight the achieved sample so that its profile matches these facts. The weights applied are usually small and therefore have only a marginal impact on the reported results.
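The arithmetic behind this kind of demographic weighting is straightforward: each group's weight is its target share divided by its achieved share. A minimal sketch in Python, using invented age-group figures (illustrative only, not real census data):

```python
# Hypothetical demographic weighting: all proportions below are invented.
targets = {"18-34": 0.28, "35-54": 0.35, "55+": 0.37}  # census-style targets
sample  = {"18-34": 0.22, "35-54": 0.36, "55+": 0.42}  # achieved sample profile

# Each respondent in a group gets weight = target share / sample share.
weights = {g: targets[g] / sample[g] for g in targets}
for g, w in weights.items():
    print(f"{g}: weight {w:.2f}")
```

Because achieved samples rarely stray far from the census profile on these variables, the weights stay close to 1, which is why their impact on the headline figures is small.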

Trickier ones. But not all the weighting done by the pollsters is by reference to fact. Following the polling debacle of 1992 (and the ongoing problem of overstating Labour in elections between 1983 and 2001), some pollsters started to weight the recall of past votes back to the actual result of the previous election … but “aimed off” to allow for faulty recall (some people, it appears, align their recall of past votes with their current intentions). So the weighting employed has not been to the actual result last time, but has involved an element of judgement as to the nature and extent of misremembering. More recently, in the run-up to the 2010 election, pollsters used target percentages that closely resembled the outcome in 2005. Now the target weights used, or the samples achieved, by ICM, Populus, ComRes (for both online and telephone polls) and Angus Reid closely match the actual result in 2010. Neither Mori nor YouGov weights by past votes.
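One way to picture “aiming off” is as a blend of the actual previous result and the raw recall in the sample, with the blend factor left to the pollster's judgement. The sketch below is a hypothetical illustration of that idea, not any company's actual method, and every number in it is invented:

```python
# Hypothetical "aiming off" for past-vote weighting. All figures invented.
actual_result = {"Con": 0.33, "Lab": 0.36, "LD": 0.22}  # illustrative previous-election shares
recalled      = {"Con": 0.30, "Lab": 0.41, "LD": 0.20}  # invented recall in the sample

aim_off = 0.5  # judgement call: how much faulty recall to allow for
targets = {p: (1 - aim_off) * actual_result[p] + aim_off * recalled[p]
           for p in actual_result}
weights = {p: targets[p] / recalled[p] for p in recalled}
# Here, respondents recalling a Labour vote (over-represented in this
# invented sample) are weighted down, and Conservative recallers up.
```

The point is that `aim_off` is pure judgement: set it at 0 and you weight to the actual result; set it at 1 and you do no past-vote correction at all.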

Even trickier. YouGov choose to weight their data by reference to the party respondents say they most closely identify with (party ID), matching the party ID percentages obtained in each new poll to the party ID percentages obtained by the company at the time of the last election. So this weighting is not to targets derived from census data or fact (such as the result of the last election) but to YouGov’s own estimates of party ID in May 2010. One potential problem is that many voters, unsurprisingly, do not make a real distinction between present party “support” and the party they “identify with most strongly”, and the available data show that the two measures move together over time. The danger is therefore that movements in party ID may not be a measure of an imbalance in the sample, but a reflection of the changing mood of the electorate.

And even trickier. Another form of weighting used by YouGov and Angus Reid is on the basis of newspaper readership. Both YouGov and Angus Reid ask respondents which daily newspaper they read “most often”.

In both cases the results appear to be weighted to data obtained by the National Readership Survey (NRS) (see www.nmauk.co.uk/nma/downloads/NRS_guide.ppt). Angus Reid say that “newspaper readership [weighting] is derived from a combination of NRS data and our own research to account for variations in questions asked”. Nevertheless, the targets used by Angus Reid appear to be very similar to those used by YouGov.

The NRS attempts to measure the average number of readers of each issue of the daily newspapers, known as Average Issue Readership, or AIR for short. For example, the AIR for the Sun was recently recorded as 8.8 million people, and that for the Daily Mail as 5.8 million. For the NRS calculation, “readership” is defined as “read yesterday for at least 2 minutes”. This is, of course, a very different measure from asking people which newspaper they read “most often”, yet it is the former that supplies the weighting targets for the latter.

The circulation (the number of copies sold) of the Sun stands at around 3 million, and the circulation of the Daily Mail at 2.1 million, figures that are of course much lower than the AIR numbers. So each copy of the Sun is read by an average of 2.9 people, and each copy of the Mail by around 2.8. Clearly, some people buy a newspaper and read it only themselves, and for those people their readership may indicate political affiliation. But, equally clearly, there are many situations in which people pick up and read a newspaper chosen and paid for by someone else. The fact that these people read a particular title potentially tells us little about them, or their political affiliation.

The AIR of 8.8 million for the Sun plus 1.8 million for the Star, expressed as a percentage of the 49 million or so adults, gives us 21 or 22%: the targets YouGov and Angus Reid use to weight people who name the Sun or Star as the paper they read most often.
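The back-of-envelope sums above are easy to check. Using only the figures quoted in this column (the adult population of 49 million is the approximation used in the text):

```python
# Readers per copy, and the Red Top share of adults used as a weighting target.
sun_air, mail_air, star_air = 8.8, 5.8, 1.8  # AIR, millions (figures from the text)
sun_circ, mail_circ = 3.0, 2.1               # circulation, millions
adults = 49.0                                # UK adults, millions (approximate)

print(f"readers per copy, Sun:  {sun_air / sun_circ:.1f}")    # about 2.9
print(f"readers per copy, Mail: {mail_air / mail_circ:.1f}")  # about 2.8
print(f"Red Top share of adults: {(sun_air + star_air) / adults:.1%}")  # 21.6%, i.e. 21-22%
```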

So what? People who say they read a Red Top daily newspaper most often are upweighted by YouGov by a factor of about 2 to match an estimate derived from AIR, meaning that every such respondent has their voting preference counted twice (the corresponding Angus Reid weight is about 1.4). At the other end of the spectrum, quality newspaper readers are given a weight of about 0.6 by both Angus Reid and YouGov, meaning that their votes count for little more than half a person. (Data obtained from tables for the YouGov tracker poll of 23-24th Feb and the Angus Reid Omnibus of 6-7th Jan.)
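To see how much weights of that size can matter, here is a hypothetical worked example. The respondent counts and vote splits are invented; only the weight factors (about 2 for Red Top readers, about 0.6 for quality readers) follow the figures quoted above:

```python
# Hypothetical sketch: how reweighting reader groups moves a headline figure.
# Respondent counts and vote shares are invented for illustration.
# group: (respondents, share supporting Party X, weight applied)
groups = {
    "red_top": (110, 0.45, 2.0),  # upweighted by a factor of ~2
    "quality": (300, 0.30, 0.6),  # downweighted to ~0.6
    "other":   (590, 0.35, 1.0),
}

def party_x_share(use_weights):
    """Party X's overall share, with or without the readership weights."""
    num = sum(n * p * (w if use_weights else 1.0) for n, p, w in groups.values())
    den = sum(n * (w if use_weights else 1.0) for n, p, w in groups.values())
    return num / den

print(f"unweighted: {party_x_share(False):.1%}")
print(f"weighted:   {party_x_share(True):.1%}")
```

With these invented numbers the headline figure moves by well over a point, purely because one group's preferences are counted roughly twice and another's at a little over half.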

So when pollsters say they have weighted the data to be “representative” of the adult population, they have in fact used different criteria, and while some weights are derived from simple observable fact, others are not so easily explained. In some cases the weights are large.

The danger, particularly inherent in the more advanced forms of weighting, is that the poll results thus produced are no longer the simple addition of the views of 1,000 or 2,000 people (perhaps corrected for small and obvious imperfections), but descriptions of public opinion that readers could be forgiven for thinking are influenced by the judgements of the pollsters themselves.

Nick Sparrow was formerly head of polling at ICM