My correct election prediction
Let’s be clear – polling is dead. A Nobel Prize-winning method killed it in 2000.
I knew the Conservatives would lose their majority – and put money on it. Why was I so sure? I ignored opinion polls, because their snapshots of opinion on a single left-versus-right dimension don’t work any more. People now weigh multiple dimensions that defy such labelling – social policy, identity politics, Europe. GCSE (Year 11) maths tells us you can’t use a single data point (a vote) to disentangle multiple dimensions. Yet that’s exactly what pollsters do, which is why they always fight the last battle.
The solution is choice modelling. It asks a respondent multiple “how would you vote?” questions, changing the manifestos and other influential factors in a systematic (but realistic) way each time. Why does it work?
At its heart is a remarkably simple assumption: how often I choose A over B indicates how much I value A over B. When A and B are complete potential manifestos we can observe how individual policies (or leaders) influence votes for an individual voter. A manifesto can then be broken down into its constituent parts (like a Lego set) and rebuilt to form a different one, with different predicted voting patterns.
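To see the mechanics of that “Lego set” idea, here is a toy sketch in Python of a logit choice rule of the kind choice modellers use. Every policy name and every number below is invented for illustration – none of it comes from my survey or any real estimate.

```python
import math

# Toy part-worth utilities ("Lego bricks") for one hypothetical voter.
# All names and numbers are purely illustrative, not survey estimates.
part_worths = {
    "soft_brexit": 0.8,
    "hard_brexit": -0.4,
    "nationalise_rail": 0.5,
    "dementia_tax": -1.2,
    "popular_leader": 0.3,
}

def utility(manifesto):
    # A manifesto's utility is the sum of its constituent policies' part-worths.
    return sum(part_worths[policy] for policy in manifesto)

def prob_choose(a, b):
    # Logit rule: how often this voter would pick manifesto A over B.
    ua, ub = utility(a), utility(b)
    return math.exp(ua) / (math.exp(ua) + math.exp(ub))

m1 = ["soft_brexit", "nationalise_rail"]
m2 = ["hard_brexit", "dementia_tax", "popular_leader"]
print(f"P(choose m1 over m2) = {prob_choose(m1, m2):.2f}")  # → 0.93
```

Swap one brick – say, replace the dementia tax with rail nationalisation – and the predicted voting pattern changes, which is exactly how a rebuilt manifesto yields new predictions.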
No opinion poll could legitimately say “Mrs Smith gave Jeremy Corbyn a positive 22% favourability rating and Theresa May a negative 11% rating”. A poll asks a one-shot question, giving ‘Mr Corbyn’ or ‘Mrs May’ as your answer; net favourability ratings emerge only after averaging over people. Choice modellers have long known that this is a ludicrous way to do things. People are inconsistent. They change their minds. They are influenced by various factors. A good choice model varies those factors systematically to calculate their effects. Via multiple questions, it tells us both Mrs Smith’s depth of support and how stable it is.
Just before the election was called I ran a national choice model survey to quantify what influenced votes in the 2016 EU referendum. It’s how I know that the East Midlands (where I live) was the only English region to display a positive net favourability rating for EU migration. Yet we voted LEAVE in huge numbers. Why?
The reason is the same one that leads us to cite ‘long battery life’ as a highly desirable feature of a smartphone, yet buy the latest model that needs charging twice a day. We ignore certain attitudes and concentrate on others when buying (or voting). My modelling quantified how attitudes – which are fairly stable once we reach adulthood – were upweighted or downweighted by events and the media to form a vote in 2016, and then again in my 2017 survey.
Crucially, it showed that had the referendum been rerun in March, REMAIN would have won with 51% – coincidentally the same share LEAVE achieved last year. Why? After all, the pollsters were correct that the LEAVE vote was ‘solidifying’ (more former REMAIN supporters were switching to LEAVE than vice versa). The thing they got wrong (as usual) was turnout – how many people would actually vote.
Yet this is where choice models excel. Professor Dan McFadden won the 2000 ‘Economics’ Nobel Prize for work that perfectly predicted demand for a light rail system before a single rail was laid. The decision to travel on it (or, crucially, not to) is directly analogous to the decision of whether to vote at all.
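The travel-or-stay-home analogy can be sketched the same way: turnout becomes a binary choice between voting and abstaining. The coefficients and variables below (strength of view, cost of voting, expected closeness) are my invented illustration of the general idea, not the specification of my survey’s model.

```python
import math

# Turnout as a discrete choice: vote vs abstain, in the spirit of
# McFadden's travel-demand logit. All coefficients are illustrative only.
def p_turnout(strength_of_view, cost_of_voting, expected_closeness):
    # Utility of voting, as an invented linear combination of drivers.
    v_vote = 1.5 * strength_of_view + 2.0 * expected_closeness - 1.0 * cost_of_voting
    v_abstain = 0.0  # utility of staying home, normalised to zero
    # Binary logit probability of voting rather than abstaining.
    return 1.0 / (1.0 + math.exp(v_abstain - v_vote))

# An abstainer with strong views who thought the result was a foregone conclusion...
easy_win = p_turnout(strength_of_view=0.9, cost_of_voting=0.5, expected_closeness=0.1)
# ...versus the same person facing a rerun everyone expects to be close.
close_race = p_turnout(strength_of_view=0.9, cost_of_voting=0.5, expected_closeness=0.9)
print(easy_win, close_race)
```

The point of the sketch: the same stable attitudes produce very different turnout once the context changes, which is why modelling the choice *not* to vote matters as much as modelling the vote itself.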
A March 2017 referendum rerun would have drawn an additional 5 million 2016 non-voters to the polls, 3.5 million of them in favour of REMAIN (and leaning heavily Labour). I knew these stated intentions weren’t wishful thinking, because these respondents’ views were just as strong as those of actual voters. Perhaps last year they couldn’t stomach following Cameron when everyone thought REMAIN would win anyway. Furthermore, half of them didn’t vote in the 2015 general election. That’s 1.75 million extra voters I knew were up for grabs should Labour keep the door ajar to REMAIN. Initially I wrote Labour off for precisely this reason: Corbyn wasn’t saying what these people wanted to hear.
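For clarity, the arithmetic behind that 1.75 million, using only the figures quoted above:

```python
# Figures from the text above (millions of people).
extra_turnout = 5.0        # 2016 non-voters who would turn out in a March rerun
remain_leaning = 3.5       # of whom this many favour REMAIN (and lean Labour)
skipped_2015_share = 0.5   # half of the REMAIN-leaning group didn't vote in 2015

up_for_grabs = remain_leaning * skipped_2015_share
print(up_for_grabs)  # 1.75 million extra voters up for grabs
```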
However, his subsequent refusal to rule out remaining in the EU made me change my mind. Furthermore, LEAVE views, though more ‘solid’, have recently coalesced around ‘soft’ rather than ‘hard’ BREXIT. A more realistic referendum would offer a choice between the EU and ‘soft BREXIT’ – and on that choice the UK splits almost 50/50, something the Conservatives never spotted but which defused the issue somewhat for Labour. So I placed bets that enough of those 1.75 million would see the possibility of revisiting the issue under Labour.
Of course the dementia tax debacle and Mrs May’s appearance as cheerleader for a re-energised ‘nasty party’ (contrary to the May of yesteryear) didn’t help the Tories. However, my study showed that Theresa May was in trouble long before the pollsters saw it.
When the Tory lead did eventually shrink dramatically, I found it strange that a ‘new improved’ type of survey that even its implementer struggled to endorse got media attention, whilst a Nobel-Prize-winning method with a 90-year pedigree was ignored. Perhaps this is because companies that use it to beat the competition keep quiet. The Australian blue-chip bank Macquarie uses it in place of discredited psychometric tests to recruit people with the personality traits it desires. When I worked as an academic in Sydney, BOSE was our biggest client but forbade us from ever mentioning them in our marketing: why tell your competitors how you came to dominate your markets?
People ask what I’d do to change things. I’d make every influential journalist publish a (large) bet they made on the outcome. As Upton Sinclair said, “It is difficult to get a man to understand something, when his salary depends on his not understanding it.” If reporters’ incomes depended on getting it right, maybe they’d stop following the pollster lemmings over the cliff every five years. Meanwhile, I collected my winnings from the bookmaker yesterday.