How are people selected or recruited for interview?
Traditionally, polls were conducted by one of two methods. The first is random sampling, whereby those to be interviewed are selected (by a computer) at random. The second is quota sampling – interviewers are given a ‘quota’ of people with different demographic characteristics, and they then find people to interview who fulfil that quota. The former approach, still used in many academic and government surveys, has a foundation in statistical theory, but the history of opinion polling indicates that quota surveys can be just as reliable as random ones.
Nowadays, most polling is undertaken online, using a somewhat different approach again. People are recruited into a panel of those willing to answer polls. In signing up, they give a lot of detail about themselves. These details are used to select a sample of people representative of the general population, who are then asked to respond to a particular poll.
More important than exactly how respondents are chosen is that they have been chosen at all – rather than having decided for themselves to answer a poll. Any poll in which the survey organisation chooses who is invited to take part is likely to be more reliable than one where anybody can take part and no attempt is made to ensure that the demographic profile of those who answer looks like that of the country. Examples of the latter kind of ‘poll’ are those conducted on Twitter or on a newspaper website.
When are polls conducted?
Polls are often conducted over just one or two days, especially if undertaken online, though in some instances the interviews may have been obtained over a week or so. When public opinion is shifting, perhaps in response to events, speed helps polls to reflect the latest public mood, though it means they are reliant on the responses of those who can be successfully contacted over a short period of time.
Polls published at the same time may not necessarily have been conducted at the same time. One of the polls might have been done over the last two or three days, the other more than a week ago. If opinion is shifting, then, all other things being equal, the more recent poll is more likely to reflect the current position. In contrast, if there has not been any change in attitudes over this time, the difference should not matter.
Any report of a poll should always say when it was conducted.
How many people are interviewed?
Statistical theory indicates that, other things being equal, the more people who are interviewed in a poll, the more likely it is to reflect accurately the views of the population.
However, there is a law of diminishing returns. A poll of 2,000 people is not twice as likely to be accurate as one of 1,000 people. As a result, there is no single “minimum” acceptable sample size. But bearing in mind cost and likely accuracy, the established norm for an opinion poll in Great Britain is that it interviews at least 1,000 people.
This figure applies irrespective of whether the poll covers people across Britain or only part of the country. The likely accuracy of a poll depends on how many people were interviewed, not on the proportion of the population sampled. Thus, a poll of 1,000 designed to represent the whole of Great Britain is no worse than a poll of 1,000 designed to represent the population of a single constituency.
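A rough way to see both points is the textbook “95% margin of error” for a simple random sample. The short Python sketch below is purely illustrative – real polls use quota or panel samples and weighting, and the function name is ours – but it shows how the margin of error shrinks with sample size, and that the formula contains no term at all for the size of the population being represented.

    import math

    def margin_of_error(n, p=0.5, z=1.96):
        # Approximate 95% margin of error, in percentage points, for a share p
        # estimated from a simple random sample of n people.
        return 100 * z * math.sqrt(p * (1 - p) / n)

    for n in (500, 1000, 2000, 4000):
        print(f"n = {n}: +/- {margin_of_error(n):.1f} points")
    # n = 500: +/- 4.4 points
    # n = 1000: +/- 3.1 points
    # n = 2000: +/- 2.2 points
    # n = 4000: +/- 1.5 points

On these assumptions, doubling the sample from 1,000 to 2,000 reduces the margin of error by only about 30 per cent (a factor of the square root of two), not by half; and because the population size does not enter the formula, a constituency poll of 1,000 is roughly as precise as a national poll of 1,000.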
The ‘response rate’ can also matter. If someone sends out a questionnaire to a million people and gets only ten thousand back – a response rate of just 1% – the poll is less likely to be reliable than one of just a thousand people where most of those selected for interview have actually responded.
Beware of attempts to over-interpret sub-group differences in polls. For example, it might be claimed that in a national poll of 1,000, support for a particular party is much higher among respondents of Asian ethnicity than among other ethnicities. A nationally representative sample of one thousand is likely to contain only around sixty people of an Asian background – far too few for such a comparison to be reliable.
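Applying the same illustrative formula as in the sketch above to a sub-group of that size makes the point (again, an approximation under simple random sampling, not how pollsters report results):

    import math

    # Same illustrative calculation as above, applied to a sub-group of about 60 people
    moe_60 = 100 * 1.96 * math.sqrt(0.5 * 0.5 / 60)
    print(f"+/- {moe_60:.0f} points")   # about +/- 13 points

An estimate that could easily be a dozen or so points out in either direction tells us very little about how that sub-group actually intends to vote.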
Where are polls conducted?
Many opinion polls published in the UK do not interview people in Northern Ireland. This is mainly because of the different political system there. They should thus be described as representing the views of people in Great Britain, not of the United Kingdom as a whole. There is nothing wrong with a poll which excludes Northern Ireland so long as everyone is clear this is what has happened.
Similarly, there is nothing wrong with a poll which only interviews people in a part of Britain, such as everywhere south of Sheffield, so long as it is made absolutely clear that this is a poll which excludes the north of England and Scotland, and no attempt is made to suggest that it is representative of Britain as a whole. Indeed, thanks to devolution, polls that only cover people living in Scotland, Wales, or Northern Ireland are not uncommon.
How accurate is a poll likely to be?
It is quite common to see the results of a poll accompanied by a statement that the results “are subject to a sampling error of + or – 3%” (often called the “margin of error”, but more technically the “confidence interval”).
Although technically it is only possible to calculate the confidence interval for random samples, historically it has appeared that quota samples tend to have similar sampling errors.
What this confidence interval means is that if 20 polls were conducted separately, all at the same time and all in exactly the same way, and the average level of support for the Conservatives measured by those polls was 30%, one would expect 19 of these 20 polls to come up with a result of between 27% and 33%. However, there is always a possibility of a “rogue poll”, the one in 20 reading that by chance falls outside this range.
This has implications for how polls should be interpreted. If in a poll of a thousand people the levels of support for two political parties are within two or three percentage points of each other, there is a reasonable chance that other polls conducted at the same time in the same way would have found them tied, or even with their positions reversed. A result this close is “too close to call”, because a single poll cannot give a precise enough measurement to be sure which party is ahead; all the poll tells us is that the parties are roughly “neck and neck”.
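A rough sketch of why, under the same illustrative simple-random-sampling assumption as before: when both parties’ shares come from the same sample, the uncertainty on the gap between them is larger than the uncertainty on either share on its own. The function name and figures below are purely illustrative.

    import math

    def lead_margin_of_error(p1, p2, n, z=1.96):
        # Approximate 95% margin of error, in points, on the lead p1 - p2 when
        # both shares are estimated from the same simple random sample of n people.
        # Var(p1_hat - p2_hat) = (p1*(1-p1) + p2*(1-p2) + 2*p1*p2) / n
        var = (p1 * (1 - p1) + p2 * (1 - p2) + 2 * p1 * p2) / n
        return 100 * z * math.sqrt(var)

    # A 32% vs 30% split in a poll of 1,000 people:
    print(f"+/- {lead_margin_of_error(0.32, 0.30, 1000):.1f} points on the lead")
    # about +/- 4.9 points, so a 2-point lead is well within sampling variation

On these assumptions, a two-point lead in a single poll of 1,000 could easily be no lead at all.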
However, ‘sampling error’ is not the only possible source of inaccuracy in a poll.
Why else might polls be wrong?
There is a lot else that can affect the accuracy of a poll apart from random sampling error. Moreover, whereas random sampling error means the true level of support for a party or proposition is just as likely to be above the poll’s estimate as below it, this is not necessarily true of other sources of error. These other sources may mean that a poll is ‘biased’ in one direction or the other, and repeatedly tends to over- or under-estimate a party’s support.
There are numerous ways in which ‘bias’ can arise. It may be that the way in which people are selected for interview inadvertently results in a greater likelihood of one party’s supporters being approached to take part. Or perhaps one party’s supporters are more likely to respond to the poll. In election surveys, pollsters also have to deal with the question of turnout: not everybody who gives a voting intention will necessarily vote, and one party’s supporters may be more likely to do so than another’s.
Biases like these can arise because things change over time. A poll which has produced unbiased estimates in the past might begin to produce biased measurements. It may be that the poll has always interviewed too many graduates, but hitherto this has not mattered because graduates were fairly evenly spread across the political parties. However, if graduates then start to support one party in particular, the poll may become biased. This is one reason why pollsters constantly have to re-assess how they conduct their polls.
The wording of a question can also make a difference to the answers respondents give, especially on polls of political attitudes rather than voting behaviour. As a result, two polls carried out on the same subject, say, abortion, at the same time in much the same way may give very different results because one portrays abortion differently from the other. It is always important to check precisely what was asked on a poll before jumping to conclusions.
Who pays for polls?
Many published polls are conducted for news organisations and television programmes. While they may be undertaken to gain publicity rather than simply provide a service for readers or viewers, the media have an interest in commissioning polls that are accurate.
Sometimes polling organisations fund political polls out of their own resources, again as a way of gaining publicity. But this strategy is only likely to be worthwhile if they can develop a reputation for accuracy.
Some polling is commissioned by pressure groups. These polls are often conducted with the aim of ‘proving’ that the view of the pressure group in question represents the majority view. Even if the wording of the questions in the poll is not manifestly biased, the poll may touch on only one aspect of the issue and thus paint a partial picture.
Political parties also commission opinion polls, usually to inform their campaigning and with a view to keeping the results private. A poll for a party that is trying to establish its own strengths and weaknesses, or to work out what the potential support is for a particular issue, may well ask questions that are thought to be biased. This is fine so long as the results are only used internally within the party, but it means that claims about what a party’s private polling says have to be treated with caution unless full details of that polling are eventually published.
Where can I find out more about a poll?
Polling companies who are members of the British Polling Council have committed themselves to making extensive details of their published polls publicly available. This includes information on where, when and how a poll was conducted, how many people were interviewed, what questions were asked, and who paid for the poll. These details, together with detailed statistical tables, are posted on each member’s website. The aim is to ensure that anyone can make an informed decision about whether or not to trust the results of a poll.
Should you be unable to find these details, or want to know more, contact the polling organisation in question. BPC members are committed to responding to all reasonable requests for further information about their polling.
January 2023