How many people really know what “correct research” is? Could you define it? Unless you were a stats major or have worked professionally in the research field, the answer to this is probably no.

We often think we know how to do something we’ve read about, or that we “intuitively” understand it. But the reality is that laypeople don’t appreciate how precise and detailed effective, results-based research actually is. There are innumerable details and turns of phrase involved in the wording, ordering, and layering of questions, and every piece matters. Nothing fails to factor in, and if the formula isn’t built correctly, the “results” can be anything from skewed, to reflecting the wrong information, to completely useless. Or worse, they can seem correct; you make decisions based on them, and your business suffers the consequences.


The way you phrase questions is very important. Something as simple as the word “and” can render a potentially effective survey question worse than useless. If it’s included in a research question, you can almost always be assured that it is a bad question.

The goal of market research is to get precise, focused answers to specific, directed questions. One of the keys of research is measuring a precisely defined, single entity. And the way you measure it is essential.

Let’s say you want to find out how customers’ experience was at your restaurant. You might ask, “How was your experience today?” But is that what you really want to know? Will that really give you actionable information?

Or are you really interested in factors such as their likelihood to return, their likelihood to recommend the restaurant to others, their perception of value, their perception of you compared to your competitors, and so on?

Here’s an example of a junk question from a survey we received recently:

“Please tell us about your experience having your car at our service department this morning. (Scale of 1–5.) Are these things important to you?”

While this question may look okay, it has several fundamental flaws that ensure the information it generates will be misleading and inconclusive. What exactly are they measuring? What are the scale definitions? What is a 1 supposed to mean? Why are they asking for comments and providing a scale in the same question? This single question asks for an importance rating, an assessment, and a comment, which should be at least three separate questions. It will still produce answers, and I am sure the service department will try diligently to use that information. But the results will lack context, and the whole thing will become an exercise in reading comments. Not a bad idea, but clearly not an effective customer feedback program.

In the end, you have to decide the ultimate purpose of your customer feedback. If all you want is general comments and overall opinions, go it alone. Otherwise, turn to a professional, sit back, and enjoy the information that comes rolling in, because you can count on it.
