Why are polling sample sizes so small?
The reason for the differences is not hard to find. American polling organisations tend to rely on relatively small samples (certainly judged by British standards) for their results, often somewhere between 500 and 700 likely voters, compared to the more usual 1,000-2,000-plus for British national polls. The recent New York Times poll that gave Obama a 12 per cent lead was based on interviews with just 283 people. For a country the size of the United States, this is equivalent to stopping a few people at random in the street, or throwing darts at a board. Given that American political life is generally so cut-throat, you might think there was room for a polling organisation that sought a competitive advantage by using the sort of sample sizes that produce relatively accurate results. Why on earth does anyone pay for this rubbish?
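The arithmetic behind that complaint is standard sampling theory: the 95 per cent margin of error on a reported percentage shrinks only with the square root of the sample size. A rough sketch, assuming simple random sampling (which real polls, with their weighting and likely-voter screens, only approximate):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a sample proportion p from n respondents
    under simple random sampling; p=0.5 gives the worst case."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (283, 600, 1000, 2000):
    print(f"n = {n:5d}: +/- {margin_of_error(n) * 100:.1f} points")
# n =   283: +/- 5.8 points
# n =   600: +/- 4.0 points
# n =  1000: +/- 3.1 points
# n =  2000: +/- 2.2 points
```

Note that the margin of error on a *lead* (the gap between two candidates drawn from the same sample) is roughly twice the single-proportion figure, so a 12-point lead from 283 respondents is compatible with anything from a near tie to a landslide.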
The answer is that in an election like this one, the polls aren’t there to tell the real story; they are there to support the various different stories that the commentators want to tell. The market is not for the hard truth, because the hard truth this time round is that most people are voting with the predictability of prodded animals. What the news organisations and blogs and roving pundits want are polls that suggest the voters are thinking hard about this election, arguing about it, making up their minds, talking it through, because that’s what all the commentators like to think they are doing themselves. This endless raft of educated opinion needs to be kept afloat on some data indicating that it matters what informed people say about politics, because it helps the voters to decide which way to jump. If you keep the polling sample sizes small enough, you can create the impression of a public willing to be moved by what other people are saying. That’s why the comment industry pays for this rubbish.