Three words pollsters would rather you didn’t mention: differential non response

A special column by ex-ICM boss & polling pioneer, Nick Sparrow

While trumpeting the fact that their samples are representative of the adult population, researchers seldom, if ever, publish response rate data. The truth is that for telephone polls response rates are frighteningly low and falling. The reasons are varied, but include the fact that many of us have become wary of calls from strangers, having been bombarded with unsolicited sales calls and with approaches from “suggers”, the industry term for people selling or list-building under the guise of market research. The Telephone Preference Service, which should stop these calls, is ineffective. Meanwhile, caller identification technology and the increased use of mobiles add to the problems.

Online pollsters cannot provide meaningful response rate data because they use volunteer panels of willing respondents who sign up in the hope of participating in polls. Nevertheless, we might guess that respondents to internet panel polls are a vanishingly small proportion of all those who have ever given the pastime any serious thought.

Low response rates are not a problem in themselves, only if certain types of people are more likely to refuse than others. In this respect, research by Pew in the US is not comforting. Unsurprisingly, they found that people who respond to political surveys are more interested in politics than those who do not, a bias that could not be eliminated by weighting. That finding is likely to apply here too; people are more likely to participate in surveys if the subject matter interests them.

If we think about recent electoral tests here in the UK, the decisions on offer could be simplified to change versus continuity. At the most basic level, referenda on membership of the EU or independence for Scotland, and even the General Election, all ask voters to choose between the status quo and something new and different.

For polls to be accurate, the balance of opinion between change and continuity needs to be the same among people who are interested in politics as among those who are not. That might happen, but the tendency will be for people who haven’t really thought through all the pros and cons of change to opt in greater numbers for continuity. In other words, the people who can be interviewed may easily have somewhat different attitudes to those who cannot.

If this is a problem for the polls, then it will be most acute among the groups we know are least interested in politics. For example, pollsters find it very hard to interview 18-24 year olds, a group far less interested in politics than any other. That could be seen as evidence of differential refusal: the worry is that the willing 18-24 year old respondents are more interested than their peers in politics generally and in the subject of the poll, more available for interview and more likely to vote. But pollsters treat the problem as one solely of availability, and simply work harder to get the right number into the sample or, cheaper still, just upweight the sample achieved to the correct proportion. While this approach will, on the surface, appear to make the sample representative, it may well exacerbate the problem of differential non-response.
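To see why, here is a minimal sketch in Python. The figures are purely hypothetical assumptions chosen for illustration, not real polling data: suppose the few 18-24 year olds a pollster can actually reach are more change-friendly than 18-24 year olds as a whole.

```python
# Toy illustration (hypothetical numbers, not real polling data) of how
# upweighting an under-represented group can amplify, rather than remove,
# differential non-response bias.

TRUE_SHARE_18_24 = 0.12        # assumed population share of 18-24 year olds
TRUE_CHANGE_18_24 = 0.40       # assumed true support for "change" among all 18-24s
RESPONDER_CHANGE_18_24 = 0.55  # assumed support among the engaged 18-24s who respond
CHANGE_25_PLUS = 0.48          # the 25+ sample is assumed representative

raw_share_18_24 = 0.04         # the raw sample only manages 4% 18-24s, not 12%

def poll_estimate(share_18_24: float) -> float:
    """Weighted estimate of support for change, given the 18-24 weight."""
    return (share_18_24 * RESPONDER_CHANGE_18_24
            + (1 - share_18_24) * CHANGE_25_PLUS)

# True figure uses the true attitudes of all 18-24s, reachable or not.
truth = (TRUE_SHARE_18_24 * TRUE_CHANGE_18_24
         + (1 - TRUE_SHARE_18_24) * CHANGE_25_PLUS)
raw = poll_estimate(raw_share_18_24)       # unweighted sample
weighted = poll_estimate(TRUE_SHARE_18_24) # 18-24s upweighted to 12%

print(f"true support for change:    {truth:.1%}")     # 47.0%
print(f"unweighted poll estimate:   {raw:.1%}")       # 48.3%
print(f"estimate after upweighting: {weighted:.1%}")  # 48.8%
```

Run as written, the unweighted sample already leans slightly towards change, and upweighting the unrepresentative young respondents to their correct population share moves the estimate further from the truth, not closer.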

If this theory is right, then polls will contain rather more people in all age groups who are sympathetic to the idea of change than they should, and rather fewer who favour continuity. What is the evidence? Polls in advance of the Scottish independence referendum, the 2011 AV referendum, the 2015 General Election, and indeed the pollsters’ other debacle in 1992 all overestimated the appetite for change.

The obvious danger of polls exaggerating the mood for change is that they create a bandwagon effect. Alternatively, they may promote a spiral of silence, in which those tending to want continuity perceive that their views are in the minority and become inclined to stay quiet, further aggravating the polling error.

The difficulty for pollsters trying to wrestle with this problem is obvious: how can you interview by phone, or recruit to an online panel, people who aren’t interested in answering your questions?

The solution for vote intention polls must be methods that achieve much higher response rates, and thus include more respondents who are not particularly interested in politics. Achieving that is unlikely to make polls cheap or quick to conduct, which is unwelcome news for editors accustomed to a regular diet of headline-grabbing stories from polls predicting sensational (if unlikely) outcomes.

Maybe the answer is to become far less obsessed with who will win and focus instead on understanding the appeal of the arguments both sides are making, where appropriate the attributes of the personalities involved, and how these attitudes change over time. Such polls might point to likely outcomes without risking the potentially false and misleading impression of deadly accuracy.

At this point, dare I mention that pollsters can ill afford an EU referendum in which they predict a better result for the “out” camp than it actually achieves?

Nick Sparrow
