
The 2010 Election: UNS, Proportional Swing and all that…
Monday, May 31st, 2010

Dr Rob Ford (University of Manchester), Dr Will Jennings (University of Manchester), Dr Mark Pickup (Simon Fraser University) and Professor Christopher Wlezien (Temple University) reflect upon models and methods for projecting the vote at the 2010 British election.
In this post we consider the performance of our projection model and the other models published in the run-up to the May 2010 UK Election, discuss the methodological issues involved, and consider some of the future directions for projection both in the UK and elsewhere.
There were a number of different forecasts of the 2010 election result. The models differed quite a lot, and so did their performance. There were some notable similarities, however. First, all of the models over-predicted the number of Liberal Democrat seats, by at least 24 seats. Second, all of the models under-predicted the number of Labour seats, in all but one case by at least 23 seats. Third, the models tended to predict the number of Conservative seats best.
Our PoliticsHome Poll Centre projection was the only model to exactly predict the number of Conservative seats. Like the other models, however, it underestimated Labour seats and overestimated Liberal Democrat seats. Our model relied on recent voting intention data and a variant of uniform swing. The model that came closest across all three parties was in fact the Hix-Vivyan pooling-the-polls model that assumed a uniform swing, one of three alternative models they offered. The superior performance of these models, all of which relied upon uniform swing or some variant of it, contrasts with the claims of some other forecasters, who argued in favour of proportional swing. Interestingly, the FiveThirtyEight projection model, based on an estimate of proportional swing, was the worst performing of all the models.
Projecting the Vote
As noted both on pb.com and elsewhere, our method for estimating electoral sentiment pooled all of the available polling data, adjusting for the estimated biases of individual pollsters (‘house effects’) and for random sampling error, inflated to reflect the non-random sampling decisions pollsters make (‘design effects’).
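To make the approach concrete, here is a minimal sketch in Python of precision-weighted pooling with house-effect corrections. It is not our actual model: the pollster names, house effects, design effects and poll figures are all invented for illustration, and in practice the house effects are estimated from the data rather than fixed in advance.

```python
from dataclasses import dataclass

@dataclass
class Poll:
    pollster: str
    share: float          # reported share for one party, in %
    n: int                # nominal sample size
    design_effect: float  # variance inflation from non-random sampling

# Hypothetical house effects (percentage points): each pollster's estimated
# systematic bias relative to the pooled trend. In a real model these are
# estimated jointly with the trend, not fixed in advance.
HOUSE_EFFECT = {"HouseA": +1.0, "HouseB": -0.5, "HouseC": 0.0}

def pooled_share(polls: list[Poll]) -> float:
    """Precision-weighted average of bias-corrected poll shares."""
    num = den = 0.0
    for p in polls:
        corrected = p.share - HOUSE_EFFECT[p.pollster]
        s = corrected / 100.0
        # Binomial sampling variance, inflated by the design effect, so
        # polls with less random samples get proportionately less weight.
        variance = p.design_effect * s * (1.0 - s) / p.n
        weight = 1.0 / variance
        num += weight * corrected
        den += weight
    return num / den

polls = [
    Poll("HouseA", 38.0, 1000, 1.2),
    Poll("HouseB", 35.5, 1500, 1.5),
    Poll("HouseC", 36.5, 2000, 1.3),
]
print(f"Pooled estimate: {pooled_share(polls):.1f}%")  # -> 36.5%
```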
Using this method, our final projected Tory share of the Great Britain vote (excluding Northern Ireland, which is not surveyed by the pollsters) was accurate to within 0.1 percentage points of the actual share of 36.9%. The projection underestimated Labour’s share of the vote by over two percentage points (projected 27.6%, actual 29.7%) while significantly overestimating the Lib Dem share (projected 27.2%, actual 23.6%). The model also underestimated the combined share of the other parties (projected 8.3%, actual 9.8%).
By giving extra weight to the last day’s polls, we also reduced the error on our forecast by successfully detecting the late swing to the Conservatives and Labour evident in these polls – our final forecast upgraded the Conservatives’ share by 2 percentage points and Labour by 1 percentage point.
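One simple way to implement this kind of recency weighting is an exponential decay on poll age, sketched below; the two-day half-life is invented for illustration, not the figure we used.

```python
def recency_weight(days_before_election: float, half_life: float = 2.0) -> float:
    """Down-weight older polls: the weight halves every `half_life` days."""
    return 0.5 ** (days_before_election / half_life)

# Final-day polls dominate; a week-old poll contributes under 10% of the weight.
for age in (0, 1, 3, 7):
    print(f"{age} days old -> weight {recency_weight(age):.3f}")
```

In a pooled estimate like the one sketched above, this factor would simply multiply each poll’s precision weight.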
Our vote projections significantly underestimated Labour’s share but reflected the general tendency of the polls to overestimate the Liberal Democrats’ share. The British Polling Council statement on the accuracy of the final polls observed that the polls ‘nevertheless told the main story of the 2010 election – that the Conservatives had established a clear lead. All but one of the nine pollsters came within 2% of the Conservative share’. This explains why the error on seat projections was lower for the Conservatives than for Labour and the Liberal Democrats in all but one of the projection models. The British Polling Council also noted, however, that
‘The tendency at past elections for polls to overestimate Labour came to an abrupt end, with every pollster underestimating the Labour share of the vote, though all but one were within 3%. However, every pollster overestimated the Liberal Democrat share of the vote.’
Final published polls versus the actual Great Britain vote shares. Deviations from the actual share are shown in parentheses, in percentage points; the error column is the sum of the three absolute deviations.

| Rank | Pollster | CON | LAB | LD | Error |
|---|---|---|---|---|---|
| 1 | RNB India: phone | 37 (+0.1) | 28 (-1.7) | 26 (+2.4) | 4.2 |
| 2 | ICM: phone/past vote weighted | 36 (-0.9) | 28 (-1.7) | 26 (+2.4) | 5 |
| 3 | Ipsos-MORI: phone | 36 (-0.9) | 29 (-0.7) | 27 (+3.4) | 5 |
| 4 | Populus: phone/past vote weighted | 37 (+0.1) | 28 (-1.7) | 27 (+3.4) | 5.2 |
| 5 | Harris: online | 35 (-1.9) | 29 (-0.7) | 27 (+3.4) | 6 |
| 6 | ComRes: phone/past vote weighted | 37 (+0.1) | 28 (-1.7) | 28 (+4.4) | 6.2 |
| 7 | Opinium: online | 35 (-1.9) | 27 (-2.7) | 26 (+2.4) | 7 |
| 8 | YouGov: online | 35 (-1.9) | 28 (-1.7) | 28 (+4.4) | 8 |
| 9= | Angus Reid: online | 36 (-0.9) | 24 (-5.7) | 29 (+5.4) | 12 |
| 9= | BPIX: online | 34 (-2.9) | 27 (-2.7) | 30 (+6.4) | 12 |
| 9= | TNS-BMRB: face to face | 33 (-3.9) | 27 (-2.7) | 29 (+5.4) | 12 |
| 12 | OnePoll: online | 30 (-6.9) | 21 (-8.7) | 32 (+8.4) | 24 |
| – | Actual GB share | 36.9 | 29.7 | 23.6 | – |
This election demonstrated – with a number of new and relatively unknown pollsters in the field – that simple averaging of all the polls can present difficulties. While there was a general tendency to underestimate the Labour vote share, it was greater for some pollsters, such as Angus Reid and Opinium, which distorted the vote share projections across all of the projection models noted above. Simply pooling the polls to estimate an average did not resolve this problem, so our attempt to control for house effects was vindicated.
Another problem, little discussed in the pooling of polls, was the averaging of rolling surveys, such as those ComRes conducted in the final weeks of the election campaign, which double-counted respondents and introduced a ‘moving average’ into the time series.
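A toy example (with invented numbers) shows the mechanics: each release of a three-day rolling poll shares two days of respondents with the previous release, so averaging successive releases is really a moving average that counts the middle days several times.

```python
# Invented daily poll readings for one party over five days of a campaign.
daily = [36.0, 38.0, 34.0, 40.0, 36.0]

def rolling_releases(series, window=3):
    """Each published figure is the mean of the last `window` days' samples."""
    return [sum(series[i - window + 1 : i + 1]) / window
            for i in range(window - 1, len(series))]

releases = rolling_releases(daily)
print([round(r, 1) for r in releases])         # [36.0, 37.3, 36.7]
print(round(sum(releases) / len(releases), 2))  # 36.67 (middle days counted 3x)
print(round(sum(daily) / len(daily), 2))        # 36.8  (each day counted once)
```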
From Votes to Seats
The PoliticsHome Poll Centre’s projected outcomes in individual constituencies were correct in 86% of seats. Although our model performed well, there was a large slice of luck involved. We were in fact wrong to assume that the Tories would outperform in the marginals, but this was balanced by Lib Dem underperformance everywhere to deliver roughly the right result.
There were very clear patterns of differential swing in Scotland, as we predicted, although the differences were even larger than the polls had suggested. There were also differential patterns in Wales and in seats with large ethnic minority populations. These would both have been near the top of our list of expected differential effects, but we had no polling evidence on them so did not incorporate them in our model.
The massive overestimate of Liberal Democrat support caused us (and others) to substantially overestimate the number of Liberal Democrat seats. This had less effect on our model and others relying on uniform swing, since uniform swing gave the Liberal Democrats less chance of winning a large number of seats even on a large swing.
The big story, though, with regard to the UNS vs. differential swing debate, is that the pattern of swing was remarkably uniform:
- The change in the Conservative vote varied by less than two percentage points moving from their weakest to their strongest areas, and they actually underperformed somewhat in their weakest areas relative to the average.
- The change in the Labour vote varied somewhat more, but there was no systematic relationship with prior strength – if anything the party performed worse in areas where it started off somewhat weaker.
- The change in the Liberal Democrat vote showed more evidence of proportionality, falling back three points in the strongest areas while rising in the weaker areas. But even here the evidence of proportional swing was weak and patchy at best.
Given the lack of any clear relationship between prior strength and outcomes, we expected models based on proportional swing to perform quite poorly, and so it proved.
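To make the contrast concrete, here is a minimal sketch of the two swing rules applied to a single hypothetical seat; all of the vote shares are invented. Uniform swing adds the same number of percentage points everywhere, while proportional swing scales each party’s local share by its national growth ratio, inflating a surging party’s projected gains precisely where it is already strong.

```python
def uniform_swing(local_prev, nat_prev, nat_now):
    """Add the national change in points to each party's local share."""
    return {p: local_prev[p] + (nat_now[p] - nat_prev[p]) for p in local_prev}

def proportional_swing(local_prev, nat_prev, nat_now):
    """Scale each party's local share by its national growth ratio."""
    return {p: round(local_prev[p] * nat_now[p] / nat_prev[p], 1) for p in local_prev}

nat_prev = {"CON": 33.0, "LAB": 36.0, "LD": 22.0}  # invented previous-election shares
nat_now  = {"CON": 37.0, "LAB": 28.0, "LD": 27.0}  # invented current polling
seat     = {"CON": 20.0, "LAB": 30.0, "LD": 45.0}  # a hypothetical strong Lib Dem seat

print(uniform_swing(seat, nat_prev, nat_now))       # LD projected at 50.0
print(proportional_swing(seat, nat_prev, nat_now))  # LD projected at 55.2
```

With the Lib Dems apparently surging in the polls, proportional swing projected them to pile up extra votes exactly where they were already strong, which is why those models handed them so many more seats and why the polls’ overestimate of the Lib Dem share hurt them most.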
By contrast, the PoliticsHome Poll Centre model performed very well given the poor performance of the polls. Indeed, had the polls been exactly right, our forecast would have been very close to the actual result – Conservatives 305, Labour 249, Liberal Democrats 65. This would have given a total seat error of 19 seats. Only the election night exit poll analysis run by the BBC, which built in demographic and political predictors of deviation from uniform swing, did better.
Visit the Poll Centre.
