In the aftermath of the 9/11 terrorist attacks, the US Secretary of Defense was lampooned for saying something that momentarily sounded like nonsense:
‘There are known knowns. There are things we know that we know. There are known unknowns. That is to say, there are things that we now know we don’t know. But there are also unknown unknowns. There are things we do not know we don’t know.’
Although, on reflection, it seemed convincing. A large part of the communications effort leading up to a political campaign is based on the known unknowns and the unknown unknowns. These are the things we try to discover through election surveys:
- What people believe (unknown unknowns)
- How they feel about them (known unknowns)
- How they are likely to act on those beliefs (interpretations that can be largely subjective)
What is the role of election surveys?
Surveys can help campaign managers understand how to use their resources most effectively – to get the most out of their communications and outreach efforts. It is partly a matter of figuring out which voters you can truly influence (swing voters), and partly of avoiding voters who are already on board with you (safe voters) and those who will probably never vote for you (difficult voters). This knowledge can help you target your communications in the best way. The content of that communication can also be derived from the intelligence election surveys yield – which issues to talk about, what to say about them and how to say it.
How accurate are election surveys?
The simple answer is – not very. Opinion research inherently harbours several factors that can lead to errors. India’s national and regional news channels have been frequently and hilariously inaccurate in predicting the outcome of elections. Although there are ways to minimise the effect of such errors, eliminating them entirely is not a promise any surveyor can make; some variance has to be accepted. Broadly, there are three sources of error:
- Pre-observational: These occur when the wrong population is selected for surveying, or when the sample chosen is not representative of the larger population universe (known as sampling error). For instance, if you survey the voting inclination of the Bandra-Kurla area, where our institute is based and which has unfailingly voted for a single party, and assume it to be representative of Mumbai at large, you will be mistaken. Furthermore, if you run those surveys in housing societies only, while ignoring the affordable housing or slums nearby, your results will be skewed towards a certain background.
- Observational: These are errors made while capturing insights – the questions asked; their order, format and articulation; even the media used to capture responses. At its core, this error is caused by a human psychological tendency to not be entirely truthful about certain types of questions. But here is the real catch – the respondent isn’t always lying; they often believe the answer they are giving is correct. Its biggest cause is the tendency to respond in a way that makes one likeable (or less detestable) – social desirability bias. A respondent is, for instance, unlikely to reveal how conservative or liberal they are on certain issues if those views do not conform to those of their immediate circle. The mere order in which questions are asked can change responses drastically. I’ll let this clip from Yes Prime Minister do the explaining:
- Inferential: Such errors emerge from misreading and misinterpretation of data by analysts. In my experience, these can arise if the dependent and independent variables are not accurately identified, or if interdependencies or correlations are inaccurately interpreted. For a very crude example, if you survey a group of engineering students on their views on the legalisation of gay marriage (the dependent variable), assuming them all to be homogenous because of their education and without taking into consideration their socio-economic backgrounds (the independent variable), you are likely to be misled. Another cause is sampling biases that are not adequately adjusted for. On more than one occasion, I’ve found surveyors struggling to infer the right insights because the real measurables were never included in the survey to begin with.
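The damage an unrepresentative sample does, as in the housing-society example above, can be illustrated with a small simulation. All the segment names, population shares and support figures below are invented purely for illustration; only the mechanics of weighting matter.

```python
import random

random.seed(42)  # reproducible illustration

# Hypothetical electorate (all numbers invented for illustration):
# "housing societies" hold 40% of voters, 70% of whom back Party X;
# "slums / affordable housing" hold 60%, of whom only 30% back Party X.
SEGMENTS = [
    {"name": "housing_societies", "share": 0.4, "support": 0.7},
    {"name": "slums", "share": 0.6, "support": 0.3},
]

def true_support():
    # Population-level support is the share-weighted average across segments.
    return sum(s["share"] * s["support"] for s in SEGMENTS)

def survey(weights, n=2000):
    # Sample n respondents, picking each respondent's segment with the
    # given weights, then asking whether they back Party X.
    hits = 0
    for _ in range(n):
        seg = random.choices(SEGMENTS, weights=weights)[0]
        hits += random.random() < seg["support"]
    return hits / n

# Representative poll: segment weights match the population shares.
representative = survey([s["share"] for s in SEGMENTS])
# Biased poll: only housing societies are surveyed.
biased = survey([1.0, 0.0])

print(f"true support:        {true_support():.2f}")  # exactly 0.46
print(f"representative poll: {representative:.2f}")  # close to 0.46
print(f"biased poll:         {biased:.2f}")          # close to 0.70
```

The representative poll lands near the true figure, while the biased poll overstates support by roughly 24 percentage points – not because respondents lied, but because the wrong people were asked.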
In subsequent posts, I will share some insights from my experience on how these errors are eliminated, minimised, mitigated or accounted for.
Why else do election surveys fail? Share your views with me here in the comments below or on Twitter at @HemantGaule
Previously on State Craft: