Downtown Development Corp. 3rd Annual State of Downtown

Date: March 15, 2013 | iqsresearch | News

The Third Annual State of the Downtown

April 9th, 2013 marks the annual meeting. Join the discussion and hear our very own Shawn Herbig, who, along with three other presenters, will be talking about the future of Louisville. Below you will find links to more information, details on how to get tickets, and a list of presenters.

Event information: The Third Annual State of the Downtown

Reservations: Single Reservation or Corporate Tables Available



Economic and Demographics of Downtown

Janet Kelly, Executive Director, Urban Studies Institute, University of Louisville

Michael Price, State Demographer, Kentucky State Data Center


Public Perceptions: 2013 Metro Survey on Downtown

Survey of Downtown Executives (New Feature)

Shawn Herbig, President, IQS Research


Policies and Initiatives Going Forward

Alan DeLisle, Executive Director, LDDC


Analyzing your data: Is Excel enough?

Date: June 17, 2011 | Shawn Herbig | News

Good question.  As an analyst, I have become familiar with various platforms for analysis, from basic spreadsheet software, to open-source statistical packages, to top-of-the-line products, all of which serve the needs of researchers at varying levels of complexity.  While spreadsheets are perfectly adequate for your “run of the mill” percentages and distributions, you certainly wouldn’t want to use one for, say, logistic regression.  (Even if such a program offered the calculation, it would take a bold individual to trust the accuracy of the statistic it throws at you.)

But let’s be honest: much of the market research community only ever runs basic distribution statistics, and for that, a program such as Excel is perfectly adequate.  It certainly provides the nice graphics you would want to pretty up your reports, something SPSS, for example, has not yet mastered.  I have spent many a day mulling over how to create decent-looking charts in SPSS, but always to no avail.  Then again, SPSS isn’t designed to be a graphical leader, but it certainly trumps Excel in analytical capacity.

Which leads me to the question: when is Excel not enough?  IBM (the parent company of SPSS) recently released a white paper on the potential dangers of using spreadsheet applications in complex analytical procedures.  Aside from this paper being an obvious sales pitch for SPSS, it does bring up some very valid points.

Before diving in to these potential dangers, let’s first discuss what’s good about spreadsheets:
1. They are great tools for organizing data quickly and efficiently.  The sorting functions and vertical lookup capacity of Excel, for example, are unmatched.
2. For a quick analysis of basic characteristics of your data, spreadsheets can more than handle these simple calculations.  If all you want is to report the percentage of customers who are satisfied with your product, then spreadsheets are indeed all you will need.

Okay, but what happens when I want to go beyond the basics?  You can use Excel to calculate t-tests and correlations to more accurately describe relationships that exist in your data, but beyond that, Excel becomes a little risky to trust.  It certainly offers functions for regression analysis and all sorts of fancy forecasting tools, but to be quite frank, Excel’s algorithms for such tasks simply aren’t as advanced as the ones you would use in SPSS, SAS, or Stata.  Yes, you can perform regression modeling in Excel, but statistical software packages not only do the modeling; they also test the assumptions that underpin the model and report significance at levels of detail spreadsheets cannot.
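To make that contrast concrete, here is a minimal sketch in Python (all data hypothetical) of what a statistical package computes beyond the slope itself: a standard error and t-statistic for the slope, the kind of significance testing mentioned above.

```python
import math

# Hypothetical data: advertising spend (x) vs. sales (y)
x = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]
y = [2.1, 3.9, 6.2, 7.8, 10.1, 12.2, 13.8, 16.1]

n = len(x)
mean_x = sum(x) / n
mean_y = sum(y) / n

# Ordinary least squares: slope and intercept
sxx = sum((xi - mean_x) ** 2 for xi in x)
sxy = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
slope = sxy / sxx
intercept = mean_y - slope * mean_x

# Residual standard error and the t-statistic for the slope --
# the sort of output a statistical package reports alongside the fit
residuals = [yi - (intercept + slope * xi) for xi, yi in zip(x, y)]
sse = sum(r ** 2 for r in residuals)
se_slope = math.sqrt(sse / (n - 2)) / math.sqrt(sxx)
t_stat = slope / se_slope

print(f"slope={slope:.3f}, intercept={intercept:.3f}, t={t_stat:.1f}")
```

A spreadsheet will happily hand you the slope; the standard error and t-statistic are what tell you whether that slope is trustworthy, and dedicated packages go further still with diagnostic tests of the model's assumptions.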

Of course, programs such as SPSS and SAS take much more knowledge and training to use to their full capabilities, but the end results will be much more illustrative and solid should the research call for a deeper analysis beyond the basics.  Here at IQS, we use a variety of programs, including spreadsheet software, but we understand the limitations and strengths of each.  When we create forecasting models, for instance, we recognize the complexity involved in the calculations, so we do not use Excel for such tasks.  Research has found that roughly 90% of all spreadsheets contain at least one error.  And you can be certain that as calculations become more complex, the prevalence of these errors increases.  The scary thing is that most of these mistakes go unnoticed, so companies can literally be making decisions based on faulty or inaccurate analysis.

The simple moral of this story is this:  Know the limitations of spreadsheets.  While they can be the best thing in the world for some projects, they can be very dangerous and risky to use for others.  We’ve all been there, realizing a mistake after a report has been released.  Most of the time they are minor, but I personally would hate to be the one having to retract modeling results because of a mistake in my calculations.

Here is a link to the article released by IBM on The Risks of Using Spreadsheets for Statistical Analysis.


The Power of a Sample – Voodoo or Science?

Date: May 18, 2011 | Shawn Herbig | News

A recent study carried out by our company and The Civil Rights Project for Jefferson County Public Schools came under fire because of a common misconception among those who don’t fully understand the power of random sampling. Without going into a long, drawn-out discussion of what the study entailed, the project aimed to gain an understanding of the Louisville community’s perceptions of the student assignment plan and the diversity goals it seeks to accomplish. Perhaps the methods would not have come under such scrutiny had the findings been less controversial, but regardless, the methods did indeed come under attack.

But if we take a moment to understand the science behind sampling methods, and realize that it is not voodoo magic, then I think the community can begin to focus on the real issues the study uncovered. To put it simply, sampling is indeed science. Without going into the theory of probability and the numerous mathematical assessments used to test the validity of a sample, we can say that a random sample, so long as the laws of probability and nature hold true, and some tear in the fabric of the universe has not occurred, is representative, within a quantifiable margin of error, of any population it attempts to embody.

Let us first begin to understand why this is so. When I taught statistics and probability to undergrads during my days as an instructor, I found I needed to keep this explanation simple – not because my students lacked the intelligence to fully understand it, but because probability theory can get a little sticky, and keeping the examples simple seemed to work best. Imagine we have a fair coin that is not weighted in any way (aside from a screw-up at the Treasury, in which case your coin could be worth a bundle of cash). We all know this example. If you flip it, you have a 50-50 chance of getting a particular side of that coin. In essence, that is the law of probability at its simplest.
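That 50-50 intuition is easy to check for yourself. A quick simulation sketch in Python (the seed is an arbitrary choice, fixed only so the run is reproducible) shows the observed share of heads settling toward 50% as the flips accumulate:

```python
import random

random.seed(42)  # arbitrary seed, fixed for reproducibility

# Flip a fair coin repeatedly; the observed share of heads
# drifts toward the theoretical 50% as the flip count grows.
for flips in (10, 100, 10_000):
    heads = sum(random.random() < 0.5 for _ in range(flips))
    print(f"{flips:>6} flips: {heads / flips:.1%} heads")
```

With only 10 flips the proportion can wander well away from 50%; by 10,000 flips it sits within a couple of points of it, which is the same stabilizing behavior that makes a large enough random sample trustworthy.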

Random sampling works the same way. While there are various methods for sampling a population randomly, simple random sampling is the easiest and most commonly used. To put it simply, each member of a population is assigned a unique value, and a random number generator picks values within a defined range (say, 1 to 1,000,000). Each member of that population has an equal chance of being selected. These chosen members become the lucky ones who will be a true representation of the population. They are not “chosen” in the sense that they get to drink the Kool-Aid and ascend beyond, but they are chosen to speak on behalf of an entire population. Pretty cool, huh?!
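The procedure just described fits in a few lines of Python. The population size of 1,000,000 and the sample size of 500 here are illustrative assumptions, not values from any particular study:

```python
import random

random.seed(7)  # arbitrary seed, fixed for reproducibility

# A hypothetical population: one unique ID per member
population_ids = range(1, 1_000_001)

# Simple random sampling: every ID has an equal chance of
# selection, and no ID can be drawn twice
sample = random.sample(population_ids, k=500)

print(len(sample), min(sample), max(sample))
```

That equal chance of selection for every member is the whole trick; it is what lets probability theory guarantee the sample resembles the population.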

These samples are representative because, well, probability tells us they are. I could spend pages and pages of your precious, valuable time discussing why this is the case, but that discussion would undoubtedly put you to sleep. This, however, is why not every person in a population needs to be surveyed. And it is a great cost-saving measure when you only have to sample, say, 500 people to represent a much larger population. Here I could bore you again with monotonic relationships and the diminishing returns of ever-larger samples, but I will not do that. (You can thank me later.)

Now for the real bang! Say you want to measure satisfaction with city services in a small city of 50,000 people. In order to have a representative sample, all you need is a sample of 382 people (with a 5% margin of error). Now, say you want to do the same study on the entire Louisville metro area, with a population of nearly 1.5 million. What size sample do you think you need? Are you ready for this? The number is 385! Wow. Only 3 more randomly selected residents are needed for a population 30 times greater. The beauty of sampling, and the wonders of monotonic relationships! More on that later. You can play around with all sorts of sample size calculators (or do it by long hand, if you dare). I suggest this site.
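If you are curious where those 382 and 385 figures come from, a standard formula reproduces them: Cochran's sample size formula at a 95% confidence level (z = 1.96, worst-case proportion p = 0.5), followed by the finite population correction. A short Python sketch:

```python
import math

def sample_size(population, margin=0.05, z=1.96, p=0.5):
    """Cochran's formula with the finite population correction."""
    n0 = (z ** 2) * p * (1 - p) / margin ** 2  # infinite-population size
    n = n0 / (1 + (n0 - 1) / population)       # correct for finite N
    return math.ceil(n)

print(sample_size(50_000))     # small city      -> 382
print(sample_size(1_500_000))  # metro Louisville -> 385
```

Because the uncorrected size n0 is about 385 regardless of population, the correction barely matters once the population is large; that is exactly why the two answers differ by only 3 people.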

Of course, if you want a smaller margin of error (in essence, if you want to be more confident that your sample truly reflects your population), you need a larger sample. But I’ll save a discussion of margins of error and confidence levels for another day. I leave you now to ponder the brilliance of statistics!!
