The Trick to Seeing Through Marketing Statistics

"If you torture the data long enough, it will confess to anything."

One of the challenges with relying on research, particularly research that companies use in their own marketing, is that the statistics have been tortured.

When you see a sales pitch full of statistics, how do you differentiate the real story from the spin? Short of dismissing all of the data presented (a good idea, but not realistic for most of us), these five methods are a good starting point.

1. Turn Stats Upside Down

I was in a recent meeting where a salesperson claimed that 46% of B2B buyers have purchased a product because of a video they watched. They didn't just notice it or remember it: nearly half of potential buyers who watch a video actually bought a product because of it!

Before you react, turn the statistic upside down and look at the other side of the same metric: despite all of the videos vendors are creating, more than half of potential buyers say they have never purchased a product because of a video.
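If it helps to make the flip concrete, here is a minimal sketch in Python (using the 46% figure from the pitch above; the helper name is just for illustration):

```python
# A minimal sketch: flip an "X% of buyers did Y" claim to see its complement.
def flip_stat(claim_pct: float) -> float:
    """Return the share of the same audience that did NOT do Y."""
    return 100.0 - claim_pct

video_purchase_pct = 46.0  # "46% purchased because of a video"
print(f"Never purchased because of a video: {flip_stat(video_purchase_pct):.0f}%")
# Prints: Never purchased because of a video: 54%
```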

Look at the other side of the same statistic to get a more balanced perspective.

But read on; we will look at this same statistic again.

2. Put Stats Back in Context

Marketing often isolates statistics, removing them from any context that would allow for appropriate comparisons.

In the example above, the video statistic was highlighted on its own. But is 46% a lot more, or a lot less, than the same metric for other types of content?

Here is a comparison, from the same study: 41% purchased a product because they saw a banner ad on a tablet.

Wow. Compared to a banner ad on a tablet, watching a video is just slightly better. Being barely better than a banner ad will never be a claim to fame.
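To put numbers on that comparison, here is a small sketch (using only the two figures quoted above) that shows how modest the difference really is:

```python
# A small sketch comparing the headline statistic to a baseline from the same study.
video_pct = 46.0   # purchased because of a video they watched
banner_pct = 41.0  # purchased because of a banner ad on a tablet

lift = video_pct - banner_pct
print(f"Video beats the tablet banner ad by {lift:.0f} percentage points "
      f"(a {lift / banner_pct:.0%} relative difference).")
# Prints: Video beats the tablet banner ad by 5 percentage points (a 12% relative difference).
```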

Put metrics into the context of other measurements before evaluating them.

3. Consider What Isn’t Said

Companies that field research for their own marketing activity never share all of the results. They choose what to use and, more importantly, what not to use in their marketing.

Consider carefully the comparisons that are not made or the metrics that are not reported. What would you like to know that isn't included? It is probably safe to assume that if it were included, the story wouldn't be nearly as rosy.

Here is a hypothetical example:
"79% of end users agree that our solution is easy to use." But what isn't said? Here are a couple of possibilities: Would they recommend it to someone else? Would they buy it again? Our oven is easy to use, but it is way too small. I definitely wouldn't recommend it to someone else or choose it for myself.

If you were doing the research, what else would you want to know? Assume that information is less favorable than what the marketer has chosen to present.

4. Learn How The Questions Were Asked

Market researchers can bias responses through how a question is asked or the choices that are given. Consider the potential results of these two surveys about marketing priorities:

Which of the following most closely matches your top marketing priority?
A. Improving brand metrics (including awareness, perception, intent to purchase and likelihood to recommend).
B. Demand generation (including lead generation, email marketing, content marketing, pipeline development and sales enablement).

In B2B marketing, B is probably the clear answer for many marketers.

However, if we list the answers differently, we might get a significantly different result. Consider the following list of choices:
A. Brand awareness
B. Content marketing
C. Sales enablement
D. Lead generation
E. Marketing automation

Demand generation marketers will split their answers among the last four choices, and brand awareness might become the top priority based on this survey structure.
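Here is a hypothetical sketch of that vote-splitting effect (the 60/40 split below is invented purely for illustration):

```python
# Hypothetical illustration: the same population answering both versions of the question.
# Assume 60% of respondents prioritize demand generation and 40% prioritize brand.

# Version 1: two broad choices.
version_1 = {"Brand metrics": 40, "Demand generation": 60}

# Version 2: demand generation split across four narrower choices (15% each).
version_2 = {
    "Brand awareness": 40,
    "Content marketing": 15,
    "Sales enablement": 15,
    "Lead generation": 15,
    "Marketing automation": 15,
}

for name, results in (("Version 1", version_1), ("Version 2", version_2)):
    top = max(results, key=results.get)
    print(f"{name}: top priority is {top} ({results[top]}%)")
# Version 1: top priority is Demand generation (60%)
# Version 2: top priority is Brand awareness (40%)
# Same marketers, same priorities; only the answer list changed.
```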

Go back to the original question, the full list of choices, and the responses to see where the results were biased.

Bonus: here is a classic example of how marketers biased their research to make bacon part of breakfast.

5. Check the Sample

This is a favorite trick among magazine publishers. Publishers field a readership survey and, surprise, every publisher reports that the audience likes their own publication best.

Well, no duh. They send the survey to their own readers, and it is only opened and answered by the people who care about the publication and read it regularly. Anyone who doesn't care that much about the publication, particularly those who get a free copy of a controlled-circulation magazine, ignores the survey.

Social media research often has a similar problem. Instead of recruiting a cross-section of the audience, respondents are recruited through social media or the researcher's own database. No wonder the results say social media is so important: the heaviest social media users are the only people who responded!
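As a rough illustration of how recruitment alone can move the result, here is a hypothetical simulation (every number in it is invented):

```python
import random

random.seed(42)

# Hypothetical audience: 30% are heavy social media users.
def make_respondent():
    heavy_social = random.random() < 0.30
    # Heavy users are far more likely to call social media "very important".
    says_important = random.random() < (0.80 if heavy_social else 0.20)
    return heavy_social, says_important

audience = [make_respondent() for _ in range(10_000)]

# Cross-section sample: everyone has an equal chance of responding.
cross_section = audience
# Recruited via social media: only the heavy users ever see (and answer) the survey.
social_recruited = [r for r in audience if r[0]]

for label, sample in (("Cross-section", cross_section),
                      ("Recruited via social", social_recruited)):
    share = sum(1 for _, important in sample if important) / len(sample)
    print(f"{label}: {share:.0%} say social media is very important")
# The cross-section lands near 38%; the socially recruited sample lands near 80%,
# purely because of who was able to respond.
```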

An example I recently came across is Passle’s State of Business Blogging 2013 report, with markedly different results from the Content Marketing Institute’s 2014 Benchmarks report.

When you look at the sample audiences, they are vastly different. Which one matches the audience you are looking to understand? (For most content marketers, the answer will be CMI’s research). Hat tip to Ardath Albee for information on the Passle sample audience.

Check that the sample and recruitment are appropriate before relying on the results.

Your Turn

It’s your turn. What would you add to this list, or what would you take off? Let me know in the comments below or on Twitter (@wittlake).

Photo Credit: Shadow Viking via Flickr, modified by author. cc
