10 Things You Ought to Know About Polls
- Polls are a snapshot of the way people feel at that particular moment. Things can change. They often do.
- Polls that regularly release their findings to the public — with information about survey procedures and question wordings — are usually pretty good.
- Always look at the way poll questions are worded. That can make a huge difference.
- The best way to look at polls is to look at several of them taken at about the same time asking similar questions.
Some of us love polls and some of us hate polls. Most of us love them and hate them at the same time.
We love them when they tell us something we like to hear — such as the fact that nearly 90 percent of Americans favor background checks for gun purchasers.
We hate them when they get things wrong — like in this year’s midterm election when polls kept telling us that many Senate races would be close. They weren’t.
Here are ten things you ought to know about polls.
1. Polls are not designed to be predictive.
You’ve heard this before: polls are a snapshot of the way people feel at that particular moment. Things can change. They often do.
All the polls this year showed Republicans leading Democrats going into the midterm. They just didn’t show how big a lead Republicans would eventually have. It’s likely that most of the people who told polltakers they “didn’t know” or were “not sure” how they were going to vote ended up voting Republican. 2014 was the “Nobama election,” and late deciders (who are typically very weak partisans) got swept up in the mood of the country.
2. Randomness is the key to polling.
How can a sample of one thousand respondents represent the views of 245 million voting-age Americans? The answer is, you have to follow elaborate procedures to make sure that every adult American has a mathematically equal chance of being interviewed. If respondents are truly picked at random, their responses can be generalized to the whole adult population within a known margin of error (about plus or minus 3 percentage points for a sample of 1,000).
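The arithmetic behind that margin can be sketched in a few lines (a minimal sketch assuming a simple random sample and the standard 95 percent confidence level):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a proportion p estimated from a
    simple random sample of n respondents."""
    return z * math.sqrt(p * (1 - p) / n)

# A sample of 1,000 gives roughly plus or minus 3 percentage points:
print(round(margin_of_error(1000) * 100, 1))  # → 3.1
```

Note that the margin shrinks only with the square root of the sample size: quadrupling the sample merely halves the error, which is why pollsters rarely go far beyond 1,000 to 1,500 interviews.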
Random selection is not easy. Or cheap. A lot of younger people are rarely at home. And they don’t have landlines. The public is increasingly wary of callers who may be trying to sell them something. Or asking for a contribution. Nowadays, it can take as many as 20 attempts to complete an interview with someone willing to spend 15 minutes on the telephone.
Pollsters now try to reach some people on their cell phones. That can be expensive. Batteries give out. Pollsters are also interviewing people over the Internet. But many lower income people don’t have Internet access. So pollsters try to weight their samples to compensate for people who are hard to reach.
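That kind of weighting can be sketched as simple post-stratification: upweight groups underrepresented in the sample until its demographics match known population shares. The groups and numbers below are hypothetical, and real pollsters weight on many variables at once:

```python
def weights(sample_shares, population_shares):
    """Post-stratification weights: each group's weight is its
    population share divided by its sample share, so underrepresented
    groups count for more. Shares are fractions summing to 1."""
    return {g: population_shares[g] / sample_shares[g] for g in population_shares}

# Hypothetical: young adults are 25% of the population but only 10% of respondents.
w = weights({"18-29": 0.10, "30+": 0.90}, {"18-29": 0.25, "30+": 0.75})
print(round(w["18-29"], 2))  # → 2.5  (each young respondent counts 2.5x)
print(round(w["30+"], 2))    # → 0.83 (each older respondent counts less)
```

The danger, as the next paragraph notes, is that weighting can only fix the variables you weight on; hard-to-reach people may differ in ways no demographic adjustment captures.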
Some pollsters are even giving up on true random samples. Instead they are using panels of respondents purposely selected to represent the adult population — imposing quotas for, say, Republicans and unmarried women and retirees. That’s risky and controversial. If you select the known percentage of, say, men under 30, you may be ignoring other factors like education or race.
The hardest thing to do is to identify “likely voters.” Ask people if they intend to vote in the coming election and 80 percent will say “Yes.” Yet fewer than 40 percent actually voted this year. Pollsters try to identify likely voters by asking people whether they voted in the last election, whether they know where their polling place is, how closely they are following the campaign, etc. Then they lop off the 40 percent who seem most likely to vote and call them “likely voters.” That’s not science. It’s educated guesswork.
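That educated guesswork can be rendered as a toy scoring screen. The questions, weights, and cutoff below are purely illustrative, not any pollster’s actual model:

```python
def likely_voter_score(respondent):
    """Toy engagement score built from self-reported answers.
    The signals and weights are hypothetical."""
    score = 0
    if respondent.get("voted_last_election"):
        score += 2
    if respondent.get("knows_polling_place"):
        score += 1
    if respondent.get("following_campaign_closely"):
        score += 1
    return score

def likely_voters(sample, turnout=0.40):
    """Lop off the top `turnout` share of respondents by score
    and call them 'likely voters'."""
    ranked = sorted(sample, key=likely_voter_score, reverse=True)
    return ranked[: int(len(ranked) * turnout)]
```

The 40 percent cutoff itself is a judgment call about turnout; get it wrong and the “likely voter” poll misses in a predictable direction.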
3. It’s often difficult to tell a good poll from a bad poll.
You have to be cautious. If the responses seem out of line, check the make-up of the sample. Are there too many Democrats? Not enough African-Americans? Too many seniors? And check the wording of the questions. That can make a huge difference (see #6 below).
Generally speaking, reputation matters. Polls that regularly release their findings to the public — with information about survey procedures and question wordings — are usually pretty good. That includes most polls done for the media (the CBS News-New York Times poll, the NBC News-Wall Street Journal poll, the ABC News-Washington Post poll, etc.). It also includes non-partisan polls that regularly release their findings to the press (Pew Research Center, Gallup). Those polls live and die by their reputations, so they have to be more careful.
4. No single poll should ever be taken as authoritative.
No, not even polls taken by academic institutions (like the University of Michigan Survey Research Center, the National Opinion Research Center at the University of Chicago, the Quinnipiac University poll, the Marist College poll). Those polls are usually pretty reliable. But they are not immune from differences caused by different question wordings and “noise” (unavoidable random variation).
The best way to look at polls is to look at several of them taken at about the same time asking similar questions. If they all show roughly the same thing — like a Republican trend in 2014 — you can feel more confident that the result is true. If the polls are all over the place, beware. But even gold standard polls like Pew and Gallup should never be taken as authoritative.
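One minimal way to combine contemporaneous polls is a sample-size-weighted average. The figures below are hypothetical, and real poll aggregators also adjust for timing and pollster “house effects”:

```python
def poll_average(polls):
    """Average several polls of the same question, weighting each
    result by its sample size. `polls` is a list of
    (share_for_candidate, sample_size) pairs."""
    total_n = sum(n for _, n in polls)
    return sum(share * n for share, n in polls) / total_n

# Three hypothetical polls of the same race, taken the same week:
print(round(poll_average([(0.52, 1000), (0.49, 800), (0.51, 1200)]), 3))  # → 0.508
```

If the individual results cluster tightly around the average, that is the “roughly the same thing” case; if they scatter widely, the average hides real disagreement and you should beware.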
5. Polls of specific subgroups can be tricky.
Suppose you want a sample of African-Americans. Interviewing people who live in heavily black neighborhoods may be the cheapest way to find respondents. But you will be excluding a lot of African-Americans who live in mixed neighborhoods. They may have very different views.
Suppose you want a sample of Jews. It’s not a good idea to go out looking for Jews, by interviewing people in kosher markets or people with religious insignia on their doorposts. You may end up with a sample of very religious Jews, whose views are often quite different from those of more secular Jews. Some pollsters interview people with “distinctive Jewish names” (Cohen, Goldberg, etc.). But they would miss people like Daniel Radcliffe, the actor who played Harry Potter and who happens to be Jewish (who knew?).
The best way to sample subgroups is to let them fall into your larger sample at random. If you interview 1,000 Americans, about 25 of them will likely identify themselves as Jewish. That’s not a large enough sample for statistical reliability. But if you combine Jewish respondents who fell into twenty different national samples at random, you will end up with 500 Jews. That’s about the minimum size for a reliable sample.
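The arithmetic works out as follows, assuming (as the text implies) that about 2.5 percent of a national sample identifies as Jewish:

```python
import math

def moe(n, p=0.5, z=1.96):
    """95% margin of error for a proportion estimated from n respondents."""
    return z * math.sqrt(p * (1 - p) / n)

# About 25 Jewish respondents fall into one 1,000-person sample:
print(round(moe(25) * 100))   # roughly plus or minus 20 points — useless
# Pooling the Jewish respondents from 20 such samples gives about 500:
print(round(moe(500) * 100))  # roughly plus or minus 4 points — usable
```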
6. Words matter.
Always look at the way poll questions are worded. That can make a huge difference. Ask people their opinion of the Affordable Care Act and you will get one answer. Ask them their opinion of “Obamacare” and the answer is likely to be different.
A 2013 Gallup poll press release was headlined, “In U.S., Most Reject Considering Race in College Admissions.” The question asked whether college applicants should be admitted solely on the basis of merit, “even if that results in few minority students being admitted,” or should applicants’ racial and ethnic backgrounds be considered to help promote diversity, “even if that means admitting some minority students who otherwise would not be admitted.” Answer: stick to merit, 67 to 28 percent.
Now here’s the headline of a Pew Research Center press release from 2014: “Public Strongly Backs Affirmative Action Programs on Campus.” The Pew question asked, “Do you think affirmative action programs designed to increase the number of black and minority students on college campuses are a good thing or a bad thing?” By better than two to one (63 to 30 percent), the Pew respondents said affirmative action is a good thing. The public supports programs that help disadvantaged groups meet the prevailing standards of competition. The public opposes making exceptions to those standards for certain groups.
Or take the case of immigration. “Amnesty” for illegal immigrants is not very popular. But the public supports allowing people who came to the U.S. illegally to obtain legal residence if they meet certain conditions (pay a fine, have an otherwise clean record, speak English, have a job, pay taxes). It all depends on how you ask the question.
Another example: a Yale University poll found “Americans Much More Worried About ‘Global Warming’ Than ‘Climate Change.’” Global warming sounds dangerous. Climate change sounds natural.
7. Beware of complex questions.
Why? Because if you give respondents many different signals, you don’t know which ones they are actually responding to.
Here’s a question from a poll taken this summer:
“Now I am going to read you some things a Republican is saying about the economy and what needs to be done to make things better. Please tell me whether you find it very convincing, somewhat convincing, a little convincing or a not at all convincing statement about the economy and how to make it better.
“President Obama has failed on the economy. The middle class is struggling with declining paychecks, high unemployment and the rising costs of health care, college and even a tank of gas. Bigger government, higher spending and never-ending deficits are not the answer. We need to repeal Obamacare, cut regulations and lower taxes. We need to build the Keystone pipeline and use our energy to create jobs and lower gas prices. We can bring back opportunity and balance the budget and spark an economic recovery if we get big government out of the way.”
As it happens, 72 percent of likely voters found that statement “somewhat” or “very” convincing. But what exactly did they find convincing? That President Obama has failed on the economy? That we should repeal Obamacare? Cut taxes? Build the Keystone pipeline? Balance the budget? There’s no way to tell because all those signals are bundled together.
8. Respondents tend to think a poll is a quiz.
Have you ever listened to a poll interview over the telephone? I have. Many times. Here’s what often happens.
The interviewer asks, “Do you favor or oppose sending U.S. ground troops to help people in Ukraine resist Russian aggression?” The respondent thinks about it — often for the first time — and answers the question cautiously, “No, I don’t think that would be a good idea. . . . Um, was that the right answer?”
A question should never give any clue that it has a right answer, because respondents are inclined to try to impress the interviewer. That’s also the reason why telephone polls often work better than in-person interviews. Over the telephone, people feel less compelled to try to impress the interviewer.
Some polls do away with human interviewers altogether. They “robocall” respondents and ask them to answer questions by pushing buttons on their phones (“Press 1 if your answer is yes and 2 for no”). Robocalls have many problems, but no one feels compelled to impress a machine (unless they think someone is secretly recording their answers).
9. Polls are opinions, not behavior.
People sometimes express opinions that may or may not reveal what they would actually do. Gallup asks people if they would vote for a generally well-qualified candidate for President who happens to be a woman. In 2012, 95 percent said yes. That may be because they know what the right answer is: “Of course I would vote for a woman. I’m not prejudiced.”
But would they? What the poll may actually reveal is that people know what the social norms are: don’t endorse prejudice. In the same poll, 68 percent said they would vote for a gay or lesbian candidate for President. 58 percent said they would vote for a Muslim candidate. But are they saying that because they think those are the right answers?
10. A 50-50 result does not necessarily mean people are sharply divided.
Ask people whether they would prefer to have apple pie or ice cream. The answer you get would be something like 50 percent pie and 50 percent ice cream. Is that because the public is deeply polarized between pie-lovers and ice cream-lovers? No. It’s because people like both. They are picking a response at random.
Polls have asked Americans whether they consider Edward Snowden a hero or a traitor. The results usually come out to about 50-50. A lot of people can’t make up their minds about Snowden, or feel that he is both. The 50-50 result seems to suggest that Americans are deeply divided over Snowden. What it may reveal is that people are picking an answer at random. That’s what happens if you force people to make a choice when they feel both ways, or don’t have a strong opinion. They shrug.
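A quick simulation shows how pure coin-flipping masquerades as polarization (a toy model assuming undecided respondents simply pick at random when forced to choose):

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible

def forced_choice(n_respondents):
    """Simulate respondents with no real opinion who are forced to
    pick 'hero' or 'traitor'; return the share picking 'hero'."""
    picks = [random.choice(["hero", "traitor"]) for _ in range(n_respondents)]
    return picks.count("hero") / n_respondents

# Coin-flipping produces a near 50-50 split that looks like deep division:
print(round(forced_choice(10_000), 2))
```

The headline number is indistinguishable from a genuinely split electorate; only follow-up questions about intensity of opinion can tell the two apart.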
Bill Schneider is CNN’s former Senior Political Analyst. He has served on the faculty at Harvard University and Stanford University and taught courses on public policy and political science at Boston College and George Mason University’s School of Public Policy, where he presently holds an endowed chair. The Boston Globe named him “the Aristotle of American politics” for his keen insights and storytelling skill. Follow him on Twitter @BillSchneiderDC.
This is his second column for The Communications Network.