The popularity of the marketing measurement movement seems to have every PR-hungry consultant jumping on the “survey says” bandwagon to create some “content”. You know the type. “We asked 1,000 people what they thought about….”

The answers are supposed to provide you, the marketing executive, with a benchmark of what your “peers” are doing, so you can gauge the relative performance of your own company or department. Only they don’t. They simply exploit your desire to know and play on your lack of technical knowledge of how to read research results.

It’s ironic, isn’t it, that people who supposedly specialize in credible marketing measurement resort to scientifically flawed methods in their own marketing efforts:

- The survey samples are convenience samples, representative of no larger group (except the group of people who happened to respond to the survey).

- The respondents’ motivation to answer truthfully passes without question.

- No attention is paid to the non-respondents (who, one might presume, are protecting some real, non-public insights).

- And the summaries exclaim that 40% of respondents said this while another 52% said that, all the while ignoring that the study’s margin of error may be ±20 percentage points or more.
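
That ±20% figure is no exaggeration for tiny samples. As a rough illustration (the sample size of 25 below is hypothetical, not taken from any particular survey), the standard margin-of-error formula for a proportion shows how little a small sample can actually tell you:

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate margin of error for a proportion p observed in a
    sample of size n, at roughly 95% confidence (z = 1.96)."""
    return z * math.sqrt(p * (1 - p) / n)

# Hypothetical survey: "40% of respondents said this" -- from 25 responses
print(f"n=25:   +/- {margin_of_error(0.40, 25):.1%}")
# The same 40% from 1,000 responses
print(f"n=1000: +/- {margin_of_error(0.40, 1000):.1%}")
```

At 25 respondents, that reported “40%” carries a margin of error of roughly ±19 percentage points, so a “40% vs. 52%” gap is statistical noise; it takes on the order of a thousand respondents to narrow the margin to about ±3 points.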

If you presented such garbage information to your executive committee, chances are you’d be out on your ass quicker than your resume could get updated.

I’m not diminishing the importance of qualitative research by any means. I’m simply calling on the emerging industry of measurement consultants to adhere to the same standards they advise their clients on. If you seek publicity for qualitative work, be sure to clearly label it as such and use as many words to explain the limitations of the conclusions as you employ in proposing them.

On the client side, you should have higher standards. Ask a few key questions about anything labeled “research”:

1. Is this qualitative or quantitative? Qualitative research summaries shouldn’t be rooted in numerical comparisons across sub-samples. Their findings are only valid at the level of broad observations and hypotheses.

2. What universe is this sample representative of? Compare the number of respondents with the non-respondents and with the full group invited to participate. That comparison tells you whether those who did respond really reflect your “peer” group, and whether the differences reported are meaningful or manufactured.

3. What is the margin of error of the findings? If they can’t say, it’s not a quantitative study, which means you should pay no attention to the actual numbers and percentages reported.

If we all apply somewhat higher standards of credibility in our work, we will collectively advance the marketing discipline’s credibility in its ability to self-measure. Failing that, we’ll continue to be accused of being more interested in PR than in real results.