
Quick guide to why polling goes wrong – part 2

  • An observed frequency – how often ‘Labour’ is chosen over ‘Conservative’ – is a raw estimate of the probability of voting Labour over Conservative
  • This probability is actually a perfect confound of a mean (how far apart on the political spectrum a respondent places ‘Labour’ and ‘Conservative’) and a variance (the spread of the error distributions around each party FOR THAT INDIVIDUAL)
  • Thus a 60/40 split, if you got him/her to make the choice a number of times, might be due to a big difference on the political spectrum coupled with a lot of uncertainty surrounding what the Conservatives and/or Labour really stand for these days (noise almost drowns out signal). It might equally be due to him/her being very certain of his/her preferences; (s)he merely doesn’t think the two parties are far apart on the spectrum any more (signal dominates noise).
  • It could be any of an infinite number of combinations of such means and variances in between – so there are an INFINITE number of interpretations of this observed 60/40 frequency (a numerical sketch of this follows the list)
  • Conventional pollsters (and stats programs) are not used to thinking like this. They ‘normalise’ (set) the variance to be one, thus interpreting all differences as arising entirely from means.
  • In the modern world people’s degree of certainty is frequently rooted in the factors that go together to make a ‘Conservative’ or ‘Labour’ manifesto. So your overall predicted probabilities are BIASED (the normalisation-bias sketch after the list illustrates this).
  • There is NO WAY to separate these mean and variance effects correctly – you have a single equation with two unknowns. You need judgment (usually only gained with 10+ years of looking at these models), intimate knowledge of your data, sources of information that are external to the model, etc., to make INFORMED estimates as to which of the infinite number of solutions are ‘definitely silly’, which are ‘possible’ and which are ‘highly likely’.
  • A choice model, if used WITH CARE, can do this – you can ‘net out’ the different variance effects, then you can aggregate across people and segment according to REAL differences in where people place parties and policies on the latent (voting) scale (the estimation sketch after the list gives a flavour of this).
  • THEN, knowledge of how the variance effects arise, where they arise, and how they differ according to observable (attitudinal, sociodemographic) variables, helps you understand whether a potential voter merely needs more information, or whether they must be persuaded to alter the trade-offs they are willing to make between (for example) the importance of a European policy vs. the importance of a domestic economic policy.
  • You CANNOT properly do this without getting a handle on the heteroscedasticity in the first place – getting ‘inside an individual voter’s head’.
  • This is why polling is unfit for purpose in the 21st century, where there is no longer a single ‘left-right’ dimension that could have allowed you (to some extent) to appeal to the central limit theorem (CLT).
  • Thus the CLT doesn’t work in the new world (part 3).
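
To make the confound concrete, here is a minimal numerical sketch. It assumes a simple binary probit-style random utility set-up in Python with scipy – an illustrative assumption, not the particular model behind this post – in which the probability of choosing Labour is Phi(delta_mu / sigma), where delta_mu is how far apart the respondent places the two parties on the latent scale and sigma is his/her uncertainty.

```python
# Sketch under an ASSUMED binary probit-style random utility model (illustrative only).
# For one respondent: P(choose Labour) = Phi(delta_mu / sigma), where delta_mu is the
# distance (s)he places between the parties and sigma is the spread of the error
# distribution (uncertainty). Only the RATIO is pinned down by an observed frequency.
from scipy.stats import norm

target_p = 0.60                  # the observed 60/40 split
ratio = norm.ppf(target_p)       # implied delta_mu / sigma, roughly 0.253

for sigma in (0.1, 1.0, 10.0, 100.0):   # wildly different levels of uncertainty...
    delta_mu = ratio * sigma             # ...each paired with a matching distance apart
    p = norm.cdf(delta_mu / sigma)
    print(f"delta_mu = {delta_mu:8.3f}  sigma = {sigma:6.1f}  ->  P(Labour) = {p:.2f}")

# Every line prints P(Labour) = 0.60: the single observed frequency is consistent
# with an infinite family of mean/variance stories, from 'far apart but noisy' to
# 'close together but certain'.
```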
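
A second sketch, under the same assumed probit-style set-up, shows the normalisation bias: two groups of voters place the parties EQUALLY far apart on the latent scale but differ in certainty, and an analyst who forces the variance to one everywhere reads that certainty gap back as a gap in means.

```python
# Sketch of the normalisation bias, under the same ASSUMED probit-style set-up.
# Both groups have the same true distance apart (delta_mu = 1.0); they differ only
# in certainty (sigma). Forcing sigma = 1 for everyone turns the difference in
# observed frequencies into an apparent difference in means.
from scipy.stats import norm

delta_mu = 1.0                                    # true distance apart, same for both groups
groups = {"certain   (sigma = 1)": 1.0,
          "uncertain (sigma = 4)": 4.0}

for name, sigma in groups.items():
    p = norm.cdf(delta_mu / sigma)                # true choice probability for this group
    implied = norm.ppf(p)                         # distance inferred if sigma is forced to 1
    print(f"{name}:  P(Labour) = {p:.2f}   implied delta_mu under sigma=1 = {implied:.2f}")

# Prints roughly: certain group P = 0.84 with implied distance 1.00; uncertain group
# P = 0.60 with implied distance 0.25, even though the true distance is 1.00 for both.
# The apparent 'difference between groups' is pure variance, not mean.
```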
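
Finally, a sketch of what ‘netting out’ the variance can look like. It is purely illustrative: the data are simulated, the model is a heteroscedastic binary logit, and all parameter values and names are assumptions for the purpose of the example. The key point is that repeated choice tasks in which the attribute gap between the parties varies let a group-specific scale be ESTIMATED rather than assumed away.

```python
# Sketch of 'netting out' variance effects with ASSUMED simulated data and a
# heteroscedastic binary logit (illustrative only). Each respondent answers several
# tasks in which the attribute gap x between the two parties varies; that variation
# identifies each group's error scale separately from the mean (taste) effect.
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit  # logistic function

rng = np.random.default_rng(0)

n_per_group, n_tasks = 1000, 10
beta_true = 1.0                                   # common taste weight on the attribute gap
sigma_true = {"certain": 1.0, "uncertain": 3.0}   # group-specific error scales

# Simulate: P(choose Labour) = logistic(beta * x / sigma_g)
data = []
for g, sig in sigma_true.items():
    x = rng.normal(0.0, 2.0, size=(n_per_group, n_tasks))
    y = rng.random((n_per_group, n_tasks)) < expit(beta_true * x / sig)
    data.append((x.ravel(), y.ravel().astype(float), g))

def neg_log_lik(params):
    beta, log_sigma_uncertain = params            # 'certain' group's sigma normalised to 1
    nll = 0.0
    for x, y, g in data:
        sig = 1.0 if g == "certain" else np.exp(log_sigma_uncertain)
        p = np.clip(expit(beta * x / sig), 1e-12, 1 - 1e-12)
        nll -= np.sum(y * np.log(p) + (1 - y) * np.log1p(-p))
    return nll

res = minimize(neg_log_lik, x0=[0.5, 0.0], method="BFGS")
beta_hat, sigma_hat = res.x[0], np.exp(res.x[1])
print(f"estimated beta = {beta_hat:.2f} (true 1.0), "
      f"relative sigma of uncertain group = {sigma_hat:.2f} (true 3.0)")

# With the scale difference estimated rather than assumed away, the two groups'
# positions on the latent scale can be compared on a like-for-like basis.
```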