Straw Polls – Should we listen to them?

Date: September 22, 2011 | Shawn Herbig | News

As a researcher, I absolutely love election season.  While I could say that the reason for this is that I am simply living up to my obligations as a citizen (partly true), the real reason I enjoy it so much is all the polls that are released.  And because there are so many of them, it can be difficult to decipher which ones are good and which ones are political nonsense.  That is what makes it interesting for a researcher!

There has been a lot of talk in the recent Republican primary race about straw polls.  And each of these polls seems to declare a different victor.  Mitt Romney won the New Hampshire poll, Rep. Ron Paul won both the Washington, D.C. and California polls, Herman Cain won the Arizona poll, and Michele Bachmann was victorious in the Iowa poll.  So many polls, so many different winners.  This raises the question: what exactly are straw polls, and should we as potential voters listen to them?

Let’s begin with the first question – what is a straw poll?  There are two broad categories of polling: scientific and unscientific.  Scientific polling uses random sampling controls so that the results from the sample that is drawn are statistically representative of the population.  Previous posts have discussed this in greater detail.  Unscientific polling, on the other hand, has no systematic sampling controls in place that would allow the results to represent a population.  Historically, a lot of straw polls in the United States have been political in nature, and they are usually fielded during election season by a particular political party.  The very name “straw poll” hints at their nature – the idiom is thought to come from holding a piece of straw in the air to determine which direction the wind is blowing.

Most straw polls are very targeted, very narrow surveys of opinion.  Their main purpose is to take a “snapshot” of general opinion at a particular point in time.  This seems valid enough, but the difference between scientific polls and straw polls lies in the methodology.  Most straw polls use a rather unorthodox form of convenience sampling, and the selection bias associated with it can be extreme.
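To make that difference concrete, here is a quick back-of-the-envelope simulation in Python.  Every number in it is invented purely for illustration: a toy electorate, an assumed true level of support for “Candidate A,” and an assumed tendency for one camp to show up at the straw-poll event more often than the other.

```python
import random

random.seed(42)

# Toy electorate of 100,000 voters; 54% actually support Candidate A.
# (All numbers are invented purely for illustration.)
population = ["A"] * 54_000 + ["B"] * 46_000
random.shuffle(population)

def share_for_a(sample):
    """Fraction of a group that supports Candidate A."""
    return sum(1 for v in sample if v == "A") / len(sample)

# Scientific approach: a simple random sample of 1,000 voters.
random_sample = random.sample(population, 1_000)

# Straw-poll approach: people self-select into the event, and suppose
# Candidate B's supporters are three times as likely to show up.
straw_crowd = [v for v in population
               if random.random() < (0.03 if v == "B" else 0.01)]

print(f"True support for A:          {share_for_a(population):.1%}")
print(f"Random-sample estimate:      {share_for_a(random_sample):.1%}")
print(f"Self-selected crowd's guess: {share_for_a(straw_crowd):.1%}")
```

The random sample lands within a point or two of the truth; the self-selected crowd can miss by twenty points or more, simply because of who bothered to show up.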

It is hard to assign a single methodology to all straw polls (each one is different in its own right), but in many of them, such as the Ames Straw Poll in Iowa, the candidates themselves attract attendees to cast a vote for who they believe should be the Republican nominee.  If it sounds like political grandstanding, that’s because, to some degree, it is.  It relies on something of an “honor system” whereby anyone can vote (within the parameters), which opens up a whole debate regarding the validity of the results.

This brings us to our second question – should we pay any heed to the results of these polls?  I listed several of the recent straw polls and their victors above.  There have been many polls, and there have been many different winners.  But to answer this question, we only need to look at the candidates themselves.  They certainly place weight on these polls.  Tim Pawlenty dropped out of the Republican primary because of the lack of support the Iowa poll showed for his campaign.  Entire strategies are formulated based on the results of straw polls.  That is because these polls expose the weaknesses of particular candidates.  And for this reason, candidates are perhaps wise to pay close attention to what the polls are telling them.

However, are they good predictors of ultimate outcomes?  In answering this question, we are reminded of the 1936 presidential election.  The Literary Digest conducted its own straw poll, which showed Franklin Delano Roosevelt being defeated by a large majority.  We all know this was not the case, and the reason for this catastrophic miscalculation (it led to the downfall of the Digest) lay in the methodology of the poll, which is the main criticism of any straw poll.  The Digest administered the poll using its mailing list, which was built from motor vehicle registries and telephone books.  The problem here?  It was the Great Depression – many Americans were too poor to own a car or telephone, and thus a large sector of the population was neglected by the poll (selection bias at its finest), the very sector that was more likely to vote for FDR and his economic reforms.
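If you want to see how badly an under-covered sampling frame can mislead, here is a small sketch of that same mechanism.  The vote shares and ownership rates below are pure assumptions, not the Digest’s real numbers; the point is only that when the people you can reach differ from the people who vote, the poll tilts.

```python
import random

random.seed(1936)

# Hypothetical electorate: 60% back FDR, 40% back his opponent.
# Assume only 35% of FDR voters own a car or telephone, versus 75% of
# the opponent's voters.  (All figures invented for illustration.)
voters = []
for _ in range(100_000):
    if random.random() < 0.60:
        voters.append(("FDR", random.random() < 0.35))       # (choice, reachable?)
    else:
        voters.append(("Opponent", random.random() < 0.75))

# The mailing-list poll can only reach people who own a car or phone.
reachable = [choice for choice, owns in voters if owns]
poll = random.sample(reachable, 2_000)

fdr_in_poll = sum(1 for c in poll if c == "FDR") / len(poll)
fdr_actual = sum(1 for c, _ in voters if c == "FDR") / len(voters)
print(f"Mailing-list poll says FDR support is {fdr_in_poll:.0%}")
print(f"FDR's actual support is               {fdr_actual:.0%}")
```

With those made-up numbers, the poll shows FDR losing badly even though he holds a comfortable majority, which is exactly the kind of miss the Digest made.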

The point of this post is this: take what you hear from these straw polls with a grain of salt.  They do little to predict outcomes, but they can be very valuable to the candidates themselves in adjusting and fine-tuning their campaigns.  Although a vast difference exists between most straw polls and scientific research, it can be surprisingly easy to confuse the reliability of one with the other.  However, knowing how to digest the results of research, both good and bad, will help you avoid unsettling surprises.

“DEWEY DEFEATS TRUMAN” – A case study in trusting the untrustworthy

Date: June 1, 2011 | Shawn Herbig | News

Last week, I posted a commentary on the dangers of trusting polls and research derived from samples of convenience.  In it, I referenced the infamous 1948 Dewey-Truman election, in which the Chicago Tribune declared that New York Governor and Republican challenger Thomas E. Dewey had defeated incumbent President Harry S. Truman.  The headline’s letters were simple, and they were big:  “DEWEY DEFEATS TRUMAN.”

Now, we all know that President Dewey went on to accomplish great and terrible things during his reign as commander in chief of the United States of America.  During his term, he helped establish NATO, fought the communist accusations and Red Scare of Senator McCarthy, sent troops to the Korean War, fired beloved war hero General MacArthur, and renovated the White House, ending his term with a dismal 22 percent approval rating.  Yes, President Dewey was indeed a controversial president, both beloved and hated.  He is the talk of history classrooms throughout the nation!

Pardon the sarcasm.  In fact, Dewey never won the election.  Despite the Tribune’s headline, Truman went on to win the electoral vote 303-189, and Democrats regained the House and Senate.  After I posted this reference in last week’s post, I slowly began to realize, with the help of a colleague, that perhaps not everyone remembers or knows about this infamous blunder.  And lest we forget: when history is forgotten, it repeats itself.  So I am doing my part to keep such disasters (comedic as they are) from repeating themselves.

Some have claimed that this blunder was a result of conservative bias within the Tribune, but what really underlay it was trust in inaccurate polling and data sources.  Similar controversies have occurred since (namely the Bush-Gore election of 2000), but whereas the media took the blame in those instances, the Tribune’s error was a product of trusting untrustworthy data.

A cautionary tale, to be sure, and one we can still learn from.  I can assure you that newspaper editors joke about this incident to others, but behind locked doors they fear that their own paper may fall victim to such missteps.  Organizations and businesses should take heed as well, as trusting data that was not gathered accurately can lead to decisions that are not in the best interest of your organization.

And for posterity’s sake, let us once again be reminded of this infamous photograph (Truman taunting the media the day after his victory as he boards a train in St. Louis). 

Polling: A double-edged sword

Date: May 26, 2011 | Shawn Herbig | News

Let us pretend for a moment that we all understand the foundations of probability theory – because this is a necessity for the purposes of this post.  Even the most seasoned of researchers and statisticians cannot possibly fully grasp something as ethereal as probability.  This is because, in a sense, probability is somewhat akin to gravity – we know it exists because it works.  So long as we don’t go spinning off into space, we know that gravity is indeed doing its job well enough.  Probability is the same way.  We know that if we flip a coin 1 million times, roughly 500,000 of those flips will come up heads – not exactly 500,000, but close.  (Of course, if gravity were to fail then so would the laws of probability, because once we flipped the coin into the air, it would float out into the great unknown reaches of space!)
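If you don’t trust me (or gravity), you can check the coin claim yourself with a few lines of Python.  The simulation below flips a virtual coin one million times; nothing about it is specific to polling, it is just the law of large numbers at work.

```python
import random

random.seed(7)

flips = 1_000_000
# Count a head whenever the virtual coin comes up in the lower half of [0, 1).
heads = sum(random.random() < 0.5 for _ in range(flips))

print(f"{heads:,} heads out of {flips:,} flips ({heads / flips:.2%})")
# The tally lands close to 500,000 but is almost never exactly 500,000;
# the proportion drifting ever closer to 50% as the flips pile up is the
# law of large numbers doing its job.
```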

So, why am I saying this?  Surely it’s not because I have given up on trying to understand why I can do what I do as a researcher without question (though some still question it).  My previous post talked a bit about the power of random sampling.  Similar to gravity and coin flipping, we know that if we randomly choose people out of a particular population, then those people will, within a known margin of error, be representative of that population.

Which brings me to this post – a second in a series on the power of sampling, if you will.  Many times, businesses and organizations will throw a short survey up on their website for any “passerby” to take.  These are called polls, and they usually consist of a few quick questions aimed at taking the pulse of a certain group of people.  They have their uses, but they should never be confused with scientific research.  In order for survey research to be scientific, the sample must be collected at random.  Non-random sampling is indeed sampling, but it leads to results that cannot be claimed to be representative.

Now, we are all familiar with political polling, and some of these polls are indeed scientifically gathered.  However, because of the changing nature of political attitudes, political polling is often only accurate at a particular point in time.  Non-random polling (appropriately referred to as convenience sampling), however, is only representative of the people who participate in the poll in the first place.  One of the first things you’ll learn (or at least should) in any statistics course is that the people who take the time to fill out a poll of convenience (what you typically find in pop-up windows when you visit a website) are impassioned to do so.  In other words, they have had either great or terrible experiences with a particular item.  Such polls rarely capture apathetic viewpoints – and let’s face it, most people are indifferent to most things.
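Here is a rough sketch of what that self-selection does to a pop-up poll.  The satisfaction scores and response rates are entirely made up; the only assumption that matters is that people with strong feelings answer far more often than the indifferent middle.

```python
import random
from statistics import mean

random.seed(0)

# Hypothetical customer base: satisfaction on a 1-5 scale, mostly middling.
satisfaction = random.choices([1, 2, 3, 4, 5],
                              weights=[5, 15, 50, 20, 10], k=50_000)

# Assumption: the delighted and the furious are far more likely to answer
# a pop-up poll than the indifferent middle.
response_prob = {1: 0.30, 2: 0.05, 3: 0.01, 4: 0.05, 5: 0.25}
respondents = [s for s in satisfaction if random.random() < response_prob[s]]

def share_extreme(scores):
    """Fraction of people giving the most impassioned answers (1 or 5)."""
    return sum(1 for s in scores if s in (1, 5)) / len(scores)

print(f"Customer base: mean {mean(satisfaction):.2f}, "
      f"extreme answers {share_extreme(satisfaction):.0%}")
print(f"Pop-up poll:   mean {mean(respondents):.2f}, "
      f"extreme answers {share_extreme(respondents):.0%}")
# The apathetic middle barely shows up among respondents, so the poll
# wildly overstates how many people hold strong opinions.
```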

But some may argue: “What polls lack in representation, they certainly make up for in convenience.”  And when organizations are concerned only with quick answers to their questions, then perhaps that argument makes sense.  But when scrutinized sufficiently, such an argument shatters as quickly as a glass house when the ground starts shaking.  Yes, convenience sampling, by its very nature and name, is designed to give quick and cheap estimates.  However, when answers to intricate questions are at stake, decisions should not be made from such unrepresentative findings.  (Hence the double-“edgedness” of polling.)

Good research demands the appropriate and arduous steps to ensure that whatever you are basing decisions on, whether it is how to bolster sales and tackle a new market or printing tomorrow’s news headline about who won the presidency (Dewey ring a bell?), is accurate and representative.  Again, convenience polling and sampling have their purposes (umm…I guess), but they only tell one side of an infinite-sided die.  Ignorance may be bliss, but randomness is science!

What about you out there?  Have you stumbled across examples of poorly conducted research (namely from the perspective of sampling issues)?  We would like to hear some of your experiences – and they don’t have to be as mind-blowing and historically significant as the Dewey-Truman headline.
