Understanding Research – Political Polls and Their Context

Date: October 26, 2012 | IQS Research | News



Yesterday, Shawn Herbig, president of IQS Research, spent an hour on the radio discussing some of the intricacies involved in the research and polling process. Given the current election season, one thing we know for certain is that there is no shortage of polling results being released.

So that raises the question: how do we know which polls are right and which are not? Is each new poll released on a daily basis reflecting real changes in how we think about the candidates? Is polling and research indicative of emotions, behaviors, or both? These are some of the things Herbig tackled yesterday.

We posted a discussion late last year about why it may be a good idea to look at what are called polls of polls, which aggregate the research done on a particular topic (in this case, political polling). Aggregation helps to weed out fluff polls that may not be very accurate and places a heavier emphasis on the trend rather than on specific points in time.
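To make the poll-of-polls idea concrete, here is a minimal sketch in Python using entirely hypothetical numbers (these are not real poll results): averaging the most recent polls keeps any single outlier from driving the story and puts the focus on the trend.

```python
# A minimal poll-of-polls sketch. All figures below are hypothetical,
# made up purely to illustrate the averaging idea.

polls = [
    # (date, candidate A %, candidate B %)
    ("Oct 20", 49, 46),
    ("Oct 21", 47, 48),  # an apparent outlier on its own
    ("Oct 22", 50, 45),
    ("Oct 23", 48, 47),
    ("Oct 24", 49, 46),
]

def poll_of_polls(results):
    """Unweighted average across polls. Real aggregators also weight by
    sample size, recency, and each pollster's track record."""
    a = sum(r[1] for r in results) / len(results)
    b = sum(r[2] for r in results) / len(results)
    return round(a, 1), round(b, 1)

print(poll_of_polls(polls))  # -> (48.6, 46.4): the trend, not any one poll
```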

But beyond this, understanding the methodology behind polls is useful when deciding whether or not those results are reliable. A few things to note:

1. What is the sample size? – Political polls in particular are attempting to gauge what an entire country of more than 200 million voting-age adults thinks about an election. A sample size of only 385 is needed to be representative of a population of 200 million, yet you often see polls with around 1,000 respondents. Oversampling allows researchers to make cuts in the data (say, what women think, or what African Americans think) and still maintain a comfortable confidence level in the results. (A quick sketch of how the margin of error behaves for these cuts appears just after this list.)

2. How was the sample collected? – Polls on the internet, or ones that are done on media websites, aren’t too trustworthy. They attract a particular group of respondents, thus skewing the results one way or another. Scientific research maintains that a sample must be collected randomly in order for the results to be representative of the population. In other words, each person selected for a political poll must have the same chance of being selected as any other person in the population.

3. Understand the context of the poll/research – When the poll was taken is crucial to understanding what it is telling us. For instance, there was a lot of polling done after each of the presidential debates. Not only did researchers ask who won the debate, but they also asked whom those being polled were going to vote for. After the first debate (which we could argue went in Romney’s favor), most polls showed that the lead Obama had going into the debate had vanished. Several polls showed Romney with a sizable lead. But was this a temporary bump driven by the recent debate and the emotion surrounding it? Or was the increase real?

Recent polls show a leveling between the two candidates now that the debates are over and a more objective look at the candidates can be achieved. However, it is nearly impossible to eliminate emotion from responses, especially in a context as controversial as politics.

4. Interpreting Results – Interpretation ties in nicely with understanding the context of the research that you are viewing. But there is a task for each of us as we interpret, and that is to leave behind our preconceived notions about the results. This is very hard to do, as it is a natural human instinct to believe whatever justifies our own reasoning. This is known as confirmation bias, and it can affect the way we accept or discount research.
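To put some rough numbers behind the oversampling point in item 1, here is a small Python sketch using the standard margin-of-error formula at a 95% confidence level (the subgroup sizes are hypothetical): a poll of about 1,000 respondents carries a margin of error of roughly ±3 points overall, but once you cut the data down to a smaller subgroup, the margin for that cut widens quickly, which is exactly why researchers oversample.

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for a proportion, assuming a simple
    random sample; p=0.5 is the worst (widest) case."""
    return z * math.sqrt(p * (1 - p) / n)

# Hypothetical poll of 1,000 respondents, with hypothetical subgroup sizes
print(f"Full sample (n=1,000):           +/- {margin_of_error(1000):.1%}")
print(f"Women only (n=520):              +/- {margin_of_error(520):.1%}")
print(f"African Americans only (n=120):  +/- {margin_of_error(120):.1%}")
```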

Taking all of this into account can help us sift through the commotion and find the value in the research being produced. This applies not just to political polling but to all research you encounter. Being a good consumer of research takes effort, but it is the only way to gain a more realistic view of the world around you.


City Research: Beyond the Political Polls, What Does Your Community Really Think?

Date: May 29, 2012 | Shawn Herbig | News

Professional, targeted research is in a different league from political polling. The kind of institutional research that IQS Research does is not the same kind of research that a pollster does.

Polls can play an important role in helping city governments understand their constituents, but polls are often surrounded by political messages that can be misleading. To really get a pulse on your community and the opinions of its constituents, it’s vital to talk to the silent majority, who will not hastily and loudly volunteer their needs and views.

From town hall meetings to city and county message boards, often the people who participate — the squeaky wheels — do not represent the majority opinion of the community. It can be too easy to take these participants’ input and run with it because it’s so accessible, but that’s a mistake.

You need the opinions of the silent majority: the ones who are sitting in their living rooms, out working, or volunteering in their neighborhoods, but who are not showing up at meetings. Targeted professional research does just that, and whether you are a politician or a community manager, it can reap tremendous benefits for your city or community. Polling cannot.

A good example of this misunderstanding is the perception of a community’s downtown area. Most people think they know what suburbanites think of downtown: it’s dirty, parking is hard to find, it’s dangerous, it’s confusing to maneuver, and so on.

But what we typically find is that most people don’t actually think these things at all. Most people who aren’t engaging with your city’s downtown don’t hate anything—they’re just apathetic. There’s a lot competing for people’s attention these days, and most of the time your downtown or the causes you’re focused on just don’t make the cut. It’s not about problems to be fixed; it’s about giving people a reason to care because apathy is the enemy, not negativity.

This insight only comes with a higher level of research, not with political polling. With this higher-level research, you can make the changes in your community that will motivate people to make the trek downtown — or address whatever large-scale community issue you’re dealing with — and not be distracted by the hidden agendas of a vocal few.


What Makes a Statistically Valid Sample?

Date: February 23, 2012 | Shawn Herbig | News

Most people have a pretty limited understanding of statistics and research analytics, and they’d probably say they’re thankful for that, but the reality is that we are bombarded with stats, surveys, and analytics every time we watch TV, listen to the radio, log onto the Internet, or go grocery shopping.

So how do companies gather the information they use in their advertising and marketing? How do we know what we’re hearing is legitimate?

When we do a survey and want to determine whether we’re getting an accurate measurement, we need to know if the sample is “statistically valid.” In other words, did we ask enough people to represent the entire population we’re studying?

Let’s try an example: Let’s say we want to find out what percentage of the population of Louisville, Kentucky, would say vanilla is their favorite flavor of ice cream.

The best, most accurate way to find out would be to ask every single person in Louisville. But with roughly 1,000,000 people in the region, that’s impractical. So statisticians and researchers have created methods that allow us to ask a sample of the population to get the information we want.

So what is a small enough sample that we can predict how many Louisvillians like vanilla ice cream? Half a million? No. How about 100,000? No, smaller. Maybe 10,000? Nope.

Surprisingly enough, for a population of about one million people, we would only need to survey 384 people to get a reliable answer. There are specific factors we have to consider (whom to pick, making sure all pockets of the population are included, and the quality of the research itself), but if we can satisfy all of those factors, we only have to talk to 384 people in total.

And if 80% of those 384 people said vanilla’s the way to go, we would have a margin of error of about ±5 percentage points, and we could be confident that between 75% and 85% of the entire one million people in Louisville would say vanilla is their favorite flavor. (We discussed margin of error in a previous post.)

What’s fascinating is that if we widened our population to, say, 100 million people, you might think we would have to ask 38,400 people. Not so. To find out how many people out of 100 million would vote for vanilla, we would only need to survey 385 people. That is still a statistically valid sample, and with just 385 people it can gauge the ice cream preferences of roughly one-third of the United States.

After you hit the 10,000 mark in your population, the number needed to survey goes up very slowly.
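For readers who want to see where numbers like 384 and 385 come from, here is a short Python sketch of one common approach: Cochran’s sample-size formula at a 95% confidence level with a ±5-point margin of error and a worst-case 50/50 split, plus a finite population correction. It is an illustration under those assumptions, not the exact calculation used for any particular study, and the result lands at 384 or 385 depending on rounding.

```python
import math

def required_sample_size(population, moe=0.05, z=1.96, p=0.5):
    """Cochran's formula with a finite population correction.
    Assumes 95% confidence (z=1.96), a +/-5 point margin of error,
    and the worst-case split p=0.5."""
    n0 = (z ** 2) * p * (1 - p) / (moe ** 2)            # ~384.16 for an unlimited population
    return math.ceil(n0 / (1 + (n0 - 1) / population))  # shrinks slightly for small populations

for n_people in (10_000, 1_000_000, 100_000_000):
    print(f"population {n_people:>11,} -> sample of about {required_sample_size(n_people)}")

# The interval around the 80% vanilla finding with a sample of 384:
n, p = 384, 0.80
moe = 1.96 * math.sqrt(p * (1 - p) / n)
print(f"80% +/- {moe:.1%}")  # about +/-4 points, a bit tighter than the worst-case +/-5
```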

The world of surveys and statistics is fascinating, to say the least, and there’s a lot to understand, but making sure your numbers are on par and statistically valid is one essential element that makes research the amazing tool for business it can be.


“DEWEY DEFEATS TRUMAN” – A case study in trusting the untrustworthy

Date: June 1, 2011 | Shawn Herbig | News

Last week, I posted a commentary on the dangers of trusting polls and research derived from samples of convenience. That post referenced the infamous 1948 Dewey-Truman election, in which the Chicago Tribune headline declared that New York Governor and Republican challenger Thomas E. Dewey had defeated incumbent President Harry S. Truman. The letters were simple, and they were big: “DEWEY DEFEATS TRUMAN.”

Now, we all know that President Dewey went on to accomplish great and terrible things during his reign as commander in chief of the United States of America. During his term, he helped establish NATO, fought the communist accusations and Red Scare of Senator McCarthy, sent troops to the Korean War, fired beloved war hero General MacArthur, and renovated the White House, ending his term with a dismal 22 percent approval rating. Yes, President Dewey was indeed a controversial, both beloved and hated, president. He is the talk of history classrooms throughout the nation!

Pardon the sarcasm. In fact, Dewey never won the election. Despite the Tribune’s headline, Truman went on to win the electoral vote 303-189, and Democrats regained the House and Senate. After I posted this reference last week, I slowly began to realize, with the help of a colleague, that perhaps not everyone remembers or knows about this infamous blunder. And lest we forget: as history is forgotten, so it repeats itself. So I am doing my part to keep such disasters (comedic as they are) from repeating themselves.

Some have claimed that this blunder was the result of conservative bias within the Tribune, but what really underlay it was trust in inaccurate polling and data sources. Similar controversies have occurred since (notably the Bush-Gore election of 2000), but whereas the media in those instances were blamed for hasty election-night calls, the Dewey headline was the product of trusting untrustworthy data.

A cautionary tale, to be sure, and one we can still learn from. I can assure you that newspaper editors joke about this incident with others, but behind closed doors they fear that their own paper may fall victim to the same misstep. Organizations and businesses should take heed as well: trusting data that was not gathered accurately can lead to decisions that are not in the best interest of your organization.

And for posterity’s sake, let us once again be reminded of this infamous photograph (Truman taunting the media the day after his victory as he boards a train in St. Louis). 


Polling: A double-edged sword

Date: May 26, 2011 | Shawn Herbig | News

Let us pretend for a moment that we all understand the foundations of probability theory, because that is a necessity for the purposes of this post. Even the most seasoned researchers and statisticians cannot fully grasp something as ethereal as probability. In a sense, probability is akin to gravity: we know it exists because it works. So long as we don’t go spinning off into space, we know that gravity is indeed doing its job well enough. Probability is the same way. We know that if we flip a coin 1 million times, roughly 500,000 of those flips will come up heads. (Of course, if gravity were to fail then so would the laws of probability, because once we flipped the coin into the air, it would float out into the great unknown reaches of space!)
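If you want to watch probability “doing its job,” a quick simulation makes the point (a throwaway sketch in Python): flip a fair coin a million times and the count of heads hugs 500,000, even though it is rarely exactly 500,000.

```python
import random

flips = 1_000_000
heads = sum(random.random() < 0.5 for _ in range(flips))

# The law of large numbers at work: the share of heads sits very close
# to 50%, though the exact count varies from run to run.
print(f"{heads:,} heads out of {flips:,} flips ({heads / flips:.2%})")
```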

So why am I saying this? Surely it’s not because I have given up on trying to understand why I can do what I do as a researcher without question (though some still question it). My previous post talked a bit about the power of random sampling. Similar to gravity and coin flipping, we know that if we randomly choose people from a particular population, those people will be truly representative of that population (within a known margin of error).

Which brings me to this post – the second in a series on the power of sampling, if you will. Many times, businesses and organizations will throw a short survey up on their website for any “passerby” to take. These are called polls, and they usually consist of a few quick questions aimed at taking the pulse of a certain group of people. They have their uses, but they should never be confused with scientific research. In order for survey research to be scientific, the sample must be collected at random. Non-random sampling is indeed sampling, but it leads to results that cannot be claimed to be representative.

Now, we are all familiar with political polling, and some of these polls are indeed scientifically gathered. However, because of the changing nature of political attitudes, political polling is often accurate only for a particular point in time. Non-random polling (appropriately referred to as convenience sampling), however, is only representative of the people who participate in the poll in the first place. One of the first things you’ll learn (or at least should learn) in any statistics course is that people who take the time to fill out a convenience poll (the kind you typically find in pop-up windows when you visit a website) are impassioned enough to do so. In other words, they have had either great or terrible experiences with a particular item. Such polls rarely capture apathetic viewpoints – and let’s face it, most people are indifferent to most things.
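A toy simulation makes the self-selection problem visible. The numbers below are entirely made up: suppose 70 percent of a population is apathetic about some product, 15 percent love it, 15 percent hate it, and the impassioned are ten times more likely than the apathetic to bother clicking a pop-up poll. A random sample recovers the true mix; the convenience poll does not.

```python
import random

random.seed(42)

# Hypothetical population: 70% apathetic, 15% love it, 15% hate it
population = (["apathetic"] * 70 + ["love"] * 15 + ["hate"] * 15) * 10_000

def shares(sample):
    return {view: round(sample.count(view) / len(sample), 2)
            for view in ("love", "hate", "apathetic")}

# Random sample: every person has an equal chance of being selected
random_sample = random.sample(population, 1_000)

# Convenience sample: the impassioned are 10x more likely to respond
weights = [1 if person == "apathetic" else 10 for person in population]
convenience_sample = random.choices(population, weights=weights, k=1_000)

print("True population:   ", shares(population))
print("Random sample:     ", shares(random_sample))
print("Convenience sample:", shares(convenience_sample))
```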

But some may argue: “What polls lack in representation, they certainly make up for in convenience.” And when organizations only need quick answers to simple questions, perhaps that argument makes sense. But when scrutinized, the argument shatters as quickly as a glass house when the ground starts shaking. Yes, convenience sampling, by its very nature and name, is designed to give quick and cheap estimates. However, when answers to intricate questions are on the line, decisions should not be made from such unrepresentative findings. (Hence the double edge of polling.)

Good research demands the appropriate, and sometimes arduous, steps to ensure that whatever you are basing decisions on, whether it is how to bolster sales and tackle a new market or which headline to print tomorrow about who won the presidency (Dewey ring a bell?), is accurate and representative. Again, convenience polling and sampling have their purposes (um… I guess), but they only tell one side of an infinite-sided die. Convenience is bliss, but randomness is science!

What about you out there? Have you stumbled across examples of poorly conducted research (particularly with respect to sampling issues)? We would like to hear some of your experiences – and they don’t have to be as mind-blowing and historically significant as the Dewey-Truman headline.
