New page: 2012 Polls!

Starting today I have a new “page” on my website. Click the second tab near the top of the page, labeled “2012 Polls.” It will be updated daily, taking into account new poll results and what they mean for the election. I’ll discuss state and national polls, and the state of the 2012 races.

I’m a political scientist but my specialty is international relations and Europe, not polling or even American politics.  I’m doing this for fun.   If you want to read what experts on polls say, check out Nate Silver’s blog at the NY Times, or Amy Fried’s blog at the Bangor Daily News.   I’ll often take into account what they and others say in putting together my own analysis (and cite them as appropriate).

But first, a little bit about polls:

Margin of Error:   Most polls report a margin of error, which can vary widely. As the election nears, some pollsters like Gallup will get margins of error down to 2%, while others can be up near 5%. The margin of error means there is a 95% chance that the true numbers fall within that range of the reported result, assuming a 95% confidence interval (which is common).

An example:  Let’s say you find a poll that says Romney leads Obama 45% to 44% with a 3.5% margin of error. That means you can say with 95% certainty, if the pollster’s methodology is sound, that the race is anywhere between Romney up 48.5 to 40.5 (Romney +8) and Obama up 47.5 to 41.5 (Obama +6). The results are also assumed to fall on a “normal curve,” meaning outcomes near the reported numbers are more likely than outcomes near the edges of the margin of error.
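To see where a margin of error comes from, here is a minimal sketch in Python. The poll numbers come from the example above, but the sample size of 800 is my assumption (the post only gives the 3.5% margin); a simple random sample of roughly 800 respondents produces about that margin at a 95% confidence level.

```python
import math

def margin_of_error(p, n, z=1.96):
    """Half-width of a 95% confidence interval for a proportion p
    estimated from a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# Hypothetical poll: Romney 45%, Obama 44%; n = 800 is an assumed sample size
n = 800
for candidate, p in [("Romney", 0.45), ("Obama", 0.44)]:
    moe = margin_of_error(p, n)
    print(f"{candidate}: {p:.0%}, range {p - moe:.1%} to {p + moe:.1%}")
```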

Outliers:  By construction, roughly one in twenty polls (and near the election we’ll get over 20 national polls a week) will fall outside the margin of error. That means some polls we see are, even if the methodology is correct, way off. This is why most people looking at polls ignore the outliers. Results that are way out of line with most polls are more likely to be outside or at the edges of the margin of error. Even if the methodology is good, this happens because the sample happened, by chance, to include more Republicans who support Obama, or Democrats who support Romney, than is usually the case.
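A quick simulation illustrates the one-in-twenty point. This is only an illustrative sketch; the true support level and sample size are assumptions I chose for the example:

```python
import math
import random

def outlier_rate(true_support=0.45, n=800, trials=10_000):
    """Simulate many honest polls and count how often a result lands
    outside the 95% margin of error purely by sampling chance."""
    moe = 1.96 * math.sqrt(true_support * (1 - true_support) / n)
    outliers = 0
    for _ in range(trials):
        hits = sum(random.random() < true_support for _ in range(n))
        if abs(hits / n - true_support) > moe:
            outliers += 1
    return outliers / trials

print(outlier_rate())  # roughly 0.05 -- about one poll in twenty
```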

Often outliers will get a lot of attention. If an outlier shows a large and unexpected lead for Romney (either nationally or in a state poll), you can bet the Drudge Report will make a big deal out of it, for example. But until an outlier is corroborated by other polls, it is usually just statistical noise.

Tracking Polls:  There are polls, and there are tracking polls. Tracking polls usually roll data over a window of three to seven days, meaning that some of the data is relatively old. The Gallup tracking poll on Saturday, September 1 has Obama up 1. Does that mean Romney got no convention bounce? No: the Gallup poll has a seven-day roll, meaning it includes data from as far back as the Sunday before the GOP convention.

Tracking polls often have smaller margins of error than standard polls, in part because pooling several days of interviews gives them a larger effective sample; standard polls try to gather data over two days to give a “snapshot,” while tracking polls stretch it out. What does this mean? For me, it means I trust a traditional poll more as a reflection of the current state of the race, even if its margin of error is larger than a tracking poll’s. The tracking poll, however, is very good at showing trends. So far both the Rasmussen and Gallup tracking polls show a very close race with no clear trends.

Tracking polls have become very popular, and by October there will be a bunch of them. First, watch for trends, and compare trends between polls. If the polls agree on a particular trend, it’s probably real. Second, don’t overreact to sudden changes. Because the poll drops its oldest day of data each day and replaces it with new data, there can be a quick jump (see the sketch below). In 2008, after John McCain’s decision to suspend his campaign, the next Gallup tracking poll found McCain and Obama tied at 49-49. Within two days Obama had a seven-point lead. But much of that swing came from older data, covering five days, being dropped out of the rolling window.
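Here is a minimal sketch of how a rolling window behaves. The daily margins are made-up numbers, chosen so that a sudden real shift on day 8 takes several days to show up fully in the tracker:

```python
from collections import deque

def tracking_average(daily_margins, window=7):
    """Rolling average over the last `window` days, the way a tracking
    poll combines each new day's interviews with recent days' data."""
    recent = deque(maxlen=window)   # the oldest day falls out automatically
    averages = []
    for margin in daily_margins:
        recent.append(margin)
        averages.append(sum(recent) / len(recent))
    return averages

# Made-up daily margins (candidate A minus candidate B, in points):
# a flat race for a week, then a sudden 5-point shift on day 8.
margins = [0, 1, -1, 0, 1, 0, -1, 5, 5, 5, 5, 5, 5, 5]
for day, avg in enumerate(tracking_average(margins), start=1):
    print(f"day {day:2d}: {avg:+.2f}")
```

The tracker only drifts toward the new 5-point margin as old days drop out, which is why a single day’s jump in a tracking poll can reflect data leaving the window as much as data entering it.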

Methodologies:  Polls have different methodologies. Polling all registered voters usually does not give a very good sense of the final turnout on election day, so most pollsters focus on likely voters as the election nears. Pollsters have learned that asking whether a respondent is likely to vote, and then about the respondent’s recent voting history, yields a solid result. They also use demographic data, which requires pollsters to estimate what kind of voter turnout is likely. Differences in such estimates, or in how likely voters are identified, can yield different results from the same data; different methodologies thus often explain why two polls released at the same time disagree. If you want to really dissect a poll, many of them post their complete results. These will often break down the results by age, gender, and ethnic group, and some polls go into detail about their assumptions and methodology.
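To see how turnout assumptions change the topline, here is a minimal sketch of demographic weighting. The sample, the party breakdowns, and the two turnout models are all made-up numbers for illustration:

```python
from collections import Counter

def weighted_topline(respondents, turnout_model):
    """Re-weight a raw sample so each group matches its assumed share
    of the electorate, then compute each candidate's weighted support.

    respondents: list of (group, choice) pairs
    turnout_model: assumed share of each group in the electorate
    """
    group_counts = Counter(group for group, _ in respondents)
    n = len(respondents)
    # Weight = assumed electorate share / observed sample share
    weights = {g: turnout_model[g] / (group_counts[g] / n)
               for g in group_counts}
    support = Counter()
    for group, choice in respondents:
        support[choice] += weights[group]
    total = sum(support.values())
    return {c: round(v / total, 3) for c, v in support.items()}

# Made-up raw sample: 60 Democrats (55 Obama, 5 Romney),
# 40 Republicans (36 Romney, 4 Obama).
sample = ([("D", "Obama")] * 55 + [("D", "Romney")] * 5 +
          [("R", "Romney")] * 36 + [("R", "Obama")] * 4)

# The same raw data under two different turnout assumptions:
print(weighted_topline(sample, {"D": 0.50, "R": 0.50}))  # Obama ~50.8%
print(weighted_topline(sample, {"D": 0.40, "R": 0.60}))  # Obama ~42.7%
```

The same interviews produce an Obama lead under one turnout model and a Romney lead under the other, which is exactly why two polls released the same day can disagree.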

Some pollsters do things a tad differently. Reuters-Ipsos uses internet interviews, which many consider suspect. For its tracking poll it uses a “credibility interval” rather than a confidence interval. The credibility interval takes past data into account and is essentially a different (Bayesian) way of estimating the probability that the result is accurate.

Sample:   One mistake people make is to overreact to the sample. Recently a poll came out showing an Obama lead, but the sample included 9% more Democrats than Republicans. Those on the right were quick to condemn the poll as biased and designed to make Obama look better than he is. Don’t fall for that! Pollsters get the sample they get based in part on chance, and then they weight it to reflect their demographic assumptions. If that yields more Democrats or Republicans, it probably reflects shifting attitudes rather than bias.

But most people don’t want to dig deep into the polls themselves, and usually there’s no reason to. Know only that polls use different methods, and most don’t go into much public detail about exactly how they weight the data. All (with some exceptions) want to be accurate. I tend not to worry too much about methodological differences because I don’t know enough about polls to really judge which is most accurate (I’ll read people like Nate Silver and Amy Fried for that). I don’t think it’s necessary to get into that detail to make sense of the myriad of election polls.

Partisan Polls:   There are also partisan polls, and usually these aren’t as trustworthy. Some are very good: Public Policy Polling (PPP) leans Democratic, while Rasmussen leans Republican, and both are solid pollsters. Still, partisan polls are more likely to have assumptions and methodologies that serve their purposes. Better are polls from the campaigns themselves; when and if those get leaked, they are often very good, since the campaigns have the most at stake and want accurate numbers. But campaigns usually guard that data, and leaks are hard to confirm.

In general, political scientists trust polls because polling is a science, and most pollsters are quite professional. But we know the limits of polling: a mistaken assumption can yield a faulty methodology, and inevitably some polls will fall outside the margin of error. That means there is only so much one can glean from watching polls, but I’ll try to give my take daily by updating my “page.” It should be a fun election season!
