General Election projection: Tories 48 seats short

For the interactive map, go to PoliticsHome

Dr Rob Ford (University of Manchester), Dr Will Jennings (University of Manchester), Dr Mark Pickup (Simon Fraser University) and Professor Christopher Wlezien (Temple University) explain their vote projection method, now published and updated regularly at PoliticsHome.

As PB readers and regulars know, interpreting the torrent of polling during an election campaign is a difficult task. The ‘horse race’ attracts widespread interest and can shape the tone of the campaign, yet the true pattern of public opinion can get lost in erratic shifts produced by survey error and the different methods pollsters use to estimate it. We aim to extract the true preference signal from this noise with statistical techniques that combine different sources of polling information while accounting for their differences, in a way that is both sophisticated and transparent. In this post we explain our vote projection method, which estimates the strength of the parties in Westminster if an election were held today. Note that our projections reflect the information we have now; they are not a prediction of the final result.

Our projection on April 23rd suggested that the Conservatives would fall short of an overall majority by 48 seats. Much can change before Election Day, however. We are political scientists, not clairvoyants, after all.

The first step of our method is to produce an estimate of current electoral sentiment by pooling all the currently available polling data, while taking into account the estimated biases of individual pollsters (‘house effects’), the effect of sample size on the likely accuracy of each poll, and the fact that pollsters’ sampling decisions mean their samples are not truly random (‘design effects’). To estimate the ‘house effects’, we use the 2005 election result as a reference point for judging the accuracy of each pollster, and adjust the poll figures to reflect the biases estimated against that reference point.
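As an illustration of this pooling step, the sketch below first subtracts an assumed house effect from each poll, then averages the corrected shares with inverse-variance weights reflecting sample size and a design effect. The pollster names, shares, house effects and design-effect value are all invented for illustration; they are not the figures the model actually uses.

```python
# Hypothetical recent polls: (pollster, Conservative share %, sample size).
polls = [
    ("Pollster A", 34.0, 1500),
    ("Pollster B", 36.5, 1000),
    ("Pollster C", 35.0, 2000),
]

# Assumed house effects in percentage points, e.g. estimated from how far
# each pollster's final 2005 polls sat from the actual 2005 result.
house_effects = {"Pollster A": -1.0, "Pollster B": +1.5, "Pollster C": 0.0}

def pooled_estimate(polls, house_effects, design_effect=1.3):
    """Inverse-variance weighted average of bias-corrected poll shares."""
    num = den = 0.0
    for pollster, share, n in polls:
        corrected = share - house_effects[pollster]
        p = corrected / 100.0
        # Sampling variance of the share (in points squared), inflated by
        # a design effect because the samples are not simple random samples.
        var = design_effect * p * (1 - p) / n * 100.0 ** 2
        num += corrected / var
        den += 1.0 / var
    return num / den

print(round(pooled_estimate(polls, house_effects), 2))  # prints 35.0
```

Larger polls carry more weight because their sampling variance is smaller; the design effect shrinks every poll's effective sample size by the same factor in this simplified version.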

We do not believe that house effects are the result of polling houses adjusting their samples or results to fit their ideological preferences; pollsters’ public and commercial reputations depend upon accurate polling, which makes this unlikely. Rather, these effects are the result of the methodological decisions polling houses make when attempting to gather a representative sample of voters in a short time frame at reasonable cost. In general, we find that house effects are relatively small. Historical analyses we have conducted suggest that they have become smaller since 1997, with pollsters gradually improving the accuracy of their techniques.

The second step of our method is to project this estimate of electoral sentiment from the polls across individual constituencies, to calculate what the balance of power in the House of Commons would look like if the vote shares reflected our current figures.

Several websites already estimate the translation of votes to seats by applying ‘uniform national swing’: estimating the change in parties’ vote shares implied by current polling, applying this change in every constituency and then adding up the seats won by each party under the new shares. This simplified method gives us a general idea about the state of play politically, but it can be misleading for two reasons.

Firstly, it assumes that the ‘swing’ will be the same everywhere, which may not be the case. Secondly, it makes no allowance for uncertainty. If the UNS calculation indicates that the Conservatives will win a seat by 1%, the seat is allocated to them with the same certainty as another seat where the expected margin is 15%.
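For concreteness, here is a minimal sketch of the uniform-national-swing calculation described above. The constituencies, their 2005 shares and the national changes are invented for illustration:

```python
# Illustrative 2005 constituency results: party -> vote share (%).
seats_2005 = {
    "Seat 1": {"Con": 38.0, "Lab": 42.0, "LD": 20.0},
    "Seat 2": {"Con": 30.0, "Lab": 45.0, "LD": 25.0},
}

# Change in national vote share since 2005 implied by current polling
# (made-up numbers for illustration).
swing = {"Con": +6.0, "Lab": -7.0, "LD": +1.0}

def uns_winner(shares, swing):
    """Apply the same national change in every seat; winner takes all."""
    projected = {party: shares[party] + swing[party] for party in shares}
    return max(projected, key=projected.get)

print([uns_winner(s, swing) for s in seats_2005.values()])
# prints ['Con', 'Lab']
```

Note the all-or-nothing allocation: Seat 1 goes to the Conservatives on a projected margin of 9 points with exactly the same certainty as a seat won by a fraction of a point, which is the second weakness identified above.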

To improve upon the UNS model we use a method developed by David Firth and John Curtice and used by the BBC in its exit-poll-based forecasts in 2005. Our method applies the estimated national changes in vote share, but allows for random variation in the changes that actually occur in individual seats, and for systematic variation in the pattern of swing. We have estimated the degree of random variation in swing using historical data from the 2001 and 2005 elections; using these parameters, our model estimates the probability that each party will win a given seat. Our projection of overall seat shares is then a sum of these probabilities. If the Conservatives are very narrowly ahead in a seat, they may be assigned a probability of 60% – out of every 10 such seats, the model expects them to win 6. Where the Conservatives have a very solid expected lead, the probability might be 90%, and the model expects them to win 9 out of every 10 such seats. For our published seat-by-seat projection, a constituency is rated ‘safe’ where a party has a greater than 98% likelihood of winning, ‘likely’ where this is between 75% and 98%, and ‘lean’ where it is between 50% and 75%.
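The probabilistic step can be sketched as follows: treat each seat's result as varying normally around its expected margin, convert that margin into a win probability with the normal CDF, sum the probabilities to get the seat projection, and apply the safe/likely/lean thresholds quoted above. The margins and the standard deviation `sigma` are invented here; the real model estimates its variation parameters from the 2001 and 2005 results.

```python
from math import erf, sqrt

def win_probability(expected_margin, sigma=4.0):
    """P(actual margin > 0) if the seat-level margin is normally
    distributed around expected_margin with std dev sigma (points).
    sigma=4.0 is an illustrative value, not the authors' estimate."""
    return 0.5 * (1 + erf(expected_margin / (sigma * sqrt(2))))

def rating(p):
    """Seat ratings using the thresholds quoted in the text."""
    if p > 0.98:
        return "safe"
    if p > 0.75:
        return "likely"
    return "lean" if p > 0.50 else "opponent-favoured"

# Expected Conservative margins (points) in three hypothetical seats.
margins = [1.0, 8.0, 15.0]
probs = [win_probability(m) for m in margins]

# The projected seat total is the sum of win probabilities,
# not a simple count of the seats where the party leads.
expected_seats = sum(probs)
print([rating(p) for p in probs])  # prints ['lean', 'likely', 'safe']
```

Summing probabilities is what distinguishes this from UNS: a seat with a 60% chance contributes 0.6 of a seat to the projection rather than a full seat.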

We incorporate two important deviations from uniform swing. Firstly, there is fairly consistent polling evidence that the political landscape looks different in Scotland from the rest of Britain, perhaps in part due to the change in government at Holyrood in 2007. To allow for this, we employ a separate estimate of Scottish opinion derived from the most recent Scotland-specific polls available. Secondly, a series of polls have indicated that the swing from Labour to the Conservatives will be higher in marginal constituencies, a pattern also observed in past elections. To allow for this stronger performance in the marginals, we adjust for an extra two points of swing in seats where Labour holds majorities of between 6% and 14%. To ensure the overall pattern of change sums to what we expect, we also recalibrate the expected swing in the remaining seats to offset the deviations in Scotland and in the marginals.
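The marginals adjustment and the recalibration that keeps the average swing consistent can be sketched like this. The seat majorities and the national swing are invented, and the recalibration here is seat-weighted for simplicity rather than matching the authors' exact procedure:

```python
# Hypothetical Labour-held seats with their 2005 majorities (points).
seats = [
    {"name": "Seat A", "lab_majority": 10.0},  # marginal (6-14 band)
    {"name": "Seat B", "lab_majority": 25.0},
    {"name": "Seat C", "lab_majority": 8.0},   # marginal (6-14 band)
    {"name": "Seat D", "lab_majority": 30.0},
]

NATIONAL_SWING = 6.0   # illustrative Lab-to-Con swing (points)
MARGINAL_BONUS = 2.0   # extra swing in marginal seats, as in the text

marginal = [s for s in seats if 6.0 <= s["lab_majority"] <= 14.0]
others = [s for s in seats if s not in marginal]

# The extra swing in the marginals must be offset elsewhere so that
# the average swing across all seats still equals the national figure.
offset = MARGINAL_BONUS * len(marginal) / len(others)

for s in seats:
    s["swing"] = NATIONAL_SWING + (MARGINAL_BONUS if s in marginal
                                   else -offset)

average = sum(s["swing"] for s in seats) / len(seats)
print(average)  # prints 6.0 - the national swing is conserved
```

The same bookkeeping applies to the separate Scottish estimate: any deviation from the Britain-wide swing in one group of seats is balanced by a correction in the rest.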

We do not claim that our method is a perfect solution to the issue of poll averaging, nor would we claim that averages necessarily provide a better guide to the likely election result than individual pollsters. But we do believe our approach delivers a clearer picture of the political landscape underlying the confusion of ever-shifting daily polls, and what this may mean for the parties in terms of Westminster seats.

A fuller description of our method can be found at the poll centre website at PoliticsHome.
