I was trying my best not to dignify the results of the latest Elon “poll” with a response, but since the media seems to simply parrot their results without any scrutiny on their methodology, I feel the need to respond.
1. First, let’s discuss who is sampled. Elon polls all North Carolina households, so as long as you have a phone you have an equal chance of getting called. Whether you are registered to vote or not, or are in the country legally or not, is irrelevant to the Elon poll. Almost every other political poll will at the very least call from the voter file or ask if the person answering the phone is registered to vote. Because, frankly, if you’re not registered to vote, your opinion on politics really doesn’t matter.
Now, there is a place for polling all households on general-interest questions or consumer-behavior research. Do you like Coke or Pepsi? Did you watch the Super Bowl? Those are questions to be asked of all households. Asking all households about the re-election prospects of a politician is useless at best, and may actually do more harm than good by clouding reality for the media and interested observers.
2. Because Elon’s goal is to sample the entire NC population, their demographics are completely off when it comes to a political survey. Take a look at the demographic breakdown of Elon’s poll. While it may be an accurate reflection of the adult population of North Carolina, it is very far off from an accurate reflection of the voters in North Carolina, even if we compare it against the 2008 voter turnout statistics (the election with the highest turnout among the traditionally low-turnout groups, minorities and youth).
Elon’s sample is 63.6% white. In 2008, roughly 75% of general election voters were white. Why does that matter? For one, white voters generally break 65-35 for Republican candidates. Second, it’s just not an accurate sample of the voters.
Similarly, Elon’s poll is 22.9% black and 8.9% Hispanic. Again, comparing that to 2008 voters, the percentage black is pretty close, but the Hispanic portion is far off (only 2.5% of registered voters say they are Hispanic).
But ethnicity isn’t the only place where Elon’s poll diverges from the voting population; its age breakdown differs from that of actual voters as well.
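To see why the composition matters, here is a minimal sketch in Python of the arithmetic. The 65-35 white split and the 75% vs. 63.6% white shares come from the numbers above; the nonwhite split is a purely hypothetical figure used only to illustrate the mechanism:

```python
# Composition effect: identical group-level preferences, different group
# shares in the sample, different topline result.

white_gop_share = 0.65      # from the post: white voters break roughly 65-35
nonwhite_gop_share = 0.20   # ASSUMED for illustration only; not from the post

def topline_gop(white_share):
    """Overall GOP share implied by a given white share of the sample."""
    return white_share * white_gop_share + (1 - white_share) * nonwhite_gop_share

for label, white_share in [("2008 electorate (~75% white)", 0.75),
                           ("Elon sample (63.6% white)", 0.636)]:
    print(f"{label}: GOP topline ~ {topline_gop(white_share):.1%}")

# Under these assumptions, the 11-point gap in white share alone shifts
# the topline by roughly 5 points before a single question is even asked.
```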
3. The third factor that everyone should question in Elon’s poll is its weighting methodology. As with many polls, the people who actually respond may not fit the demographic profile the pollster wants to model the data to. In Elon’s case, too many white voters responded and not enough minority voters. Because 81.5% of the actual respondents to the poll were white and Elon’s target white percentage was 63.6%, each white respondent’s answers were weighted down, counting as only about three-quarters of a response.
Conversely, only 1.3% of the respondents were Hispanic (more than likely because the interviews were done in English only, but that’s a different issue for a different day). But Elon’s target for Hispanic participation was 8.9%. Thus, each Hispanic respondent’s answers were counted at roughly 7 times their actual weight. Elon took the responses of 8 or 9 Hispanics and made them worth the responses of roughly 60 Hispanic voters.
Similarly, 8.9% of the actual respondents to the poll were black, but Elon weighted their responses up by about 2.5 times to reach the 22.9% black share in the final results.
Whenever you are weighting something that much, small variances lead to big margins of error. I’ve seen a lot of polls and talked to a lot of pollsters, but I’ve never met one who would weight a demographic group at 700% of its responses. It just makes for bad math and introduces a higher likelihood of error.
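For anyone who wants to check the math, here is a minimal sketch in Python of how these weights fall out of the numbers above and why they shrink the poll’s real precision. The respondent and target shares are taken from the post; the total sample size is an assumption (roughly 650, consistent with the “8 or 9 Hispanics” at 1.3% of respondents), and only the three groups named above are included:

```python
# Post-stratification weight for a group = target share / observed share.
# Shares come from the post; the total n is ASSUMED for illustration.

groups = {
    #             observed  target
    "white":     (0.815,    0.636),
    "black":     (0.089,    0.229),
    "hispanic":  (0.013,    0.089),
}

n_total = 650  # ASSUMED sample size

weights = []
for name, (observed, target) in groups.items():
    w = target / observed
    count = round(n_total * observed)
    weights.extend([w] * count)
    print(f"{name:9s} weight = {target:.3f}/{observed:.3f} = {w:.2f}x "
          f"({count} respondents counted as ~{count * w:.0f})")

# Kish effective sample size: the more unequal the weights, the less
# information the sample actually carries, and the wider the true
# margin of error becomes.
n_eff = sum(weights) ** 2 / sum(w * w for w in weights)
print(f"effective sample size ~ {n_eff:.0f} out of {len(weights)} respondents")
```

Under these assumptions, the roughly 600 respondents in these three groups carry the statistical information of only about 350 evenly weighted interviews, which is the “bad math” problem in a nutshell: the huge weight placed on a handful of Hispanic respondents drives up the variance.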
While the results of the Elon poll should not be completely dismissed, they should be highly scrutinized and questioned. The fact that responses are weighted so heavily to match the entire adult population makes it practically useless as an instrument for gauging political questions. If we cared whether people in North Carolina prefer Coke or Pepsi, it might be more reliable, but in a political world, it doesn’t pass muster.
Andrew Perrin says
This analysis seems overly harsh. The Elon Poll doesn’t claim to predict elections; it specifically says it’s seeking to represent what North Carolinians *think*. So to call it out for not answering a question it doesn’t claim to answer is quite unfair! Here are some reasons one might care what people think on political questions (not just “Coke or Pepsi”) even if they are unlikely or even unable to vote:
– They may form an “attentive public” – that is, if their views are systematically unrepresented in policy they may *become* voters in the future.
– To the extent that we care about representative government, policy outcomes are supposed to reflect the preferences of all citizens, not just voters.
These points refer to the question of voters vs. citizens. You could still make a case that sampling illegal immigrants is a problem, and I agree that the Hispanic weighting is problematic.