
Science Is Dead – Long Live Marketing Research

There are some days when I think I should just stay in bed. But today I’m working off jet-lag in Shanghai, so I’m up at 4am, catching up on my RBDR Daily News Report. There’s Bob Lederer’s smiling face, on his 28 May broadcast (free plug, Bob), extolling the importance of a study done by Instantly. This study purports to show interesting differences between using a mobile device and a PC online to respond to surveys. I believe research-on-research is a good thing for us to do, but this study isn’t a very good exemplar. Because it is getting talked about in a number of blogs, it deserves some skewering.

Instantly begins by claiming, “This research was… designed with an open mind to prove or disprove that mobile gives more accurate insights than online.” First, nobody who isn’t selling mobile research claims or believes that mobile necessarily gives more accurate insights, so proving or disproving this isn’t a burning issue in researchers’ lives. As you read the report, you’ll notice that the writing either favors mobile or apologizes for mobile’s shortcomings; at least the latter are reported.

To run this study, Instantly recruited two panels of shoppers, one participating via mobile and one responding on a PC (the online sample). It was a three-part test. The first part was a shopability study for Lays potato crisps/chips – Prawn Cocktail in the UK and Cheesy Garlic Bread in the US. Prawn Cocktail has been around as long as I’ve been doing research internationally – since 1995 at least. Cheesy Garlic Bread was an in-and-out product for Lays in 2013. Shoppers were asked to go to the store and buy this product, then answer some questions about it – how long it took them to find it and where it was located on the shelf. Mobile shoppers could do this “in the moment” (a phrase I’m coming to abhor for its misuse and overuse), while the online panel had to wait at least until they got back home to respond.

You, dear reader, should be banging your head on the nearest hard surface, asking who, in their right mind, would do a shopability study with an online sample. You’ll be shocked to learn that the study finds major differences between mobile and online responses – shocked, I say! (with apologies to Claude Rains). Mobile users were much more accurate in recalling the location of the product and claimed much shorter shopping times. This is a blinding flash of the obvious. Moreover, any researcher who did a shopability study like this online deserves the bad data they get. NOBODY should ever, ever, ever do this – there is just no excuse. Mobile, on the other hand, is perfect for this type of work.

The next phase of the test was an in-home usage test. Now, I’ve always thought an IHUT was pretty simple. You ask the participant to try your product and then respond to some questions about it. In the case of a one-time trial, like a snack product, you might say something like, “When you’re ready, we’d like you to try the [Prawn Cocktail/Cheesy Garlic Bread] product you bought, then log in to take a quick survey.” Apparently, this was too sophisticated an instruction set: 25% of the mobile users and over half of the online users answered the questions three or more hours after trying the product. If this were my study, I’d delete, as part of data cleaning, anyone who didn’t taste the product just before answering the sensory questions.

Online has a higher Purchase Intent score than does mobile, which the authors believe is another argument in favor of mobile (remember – they are claiming objectivity). The authors state, “…product owners following the online data would over-invest on positioning and product supply”. They do not state the obverse – that believing the lower mobile-generated PI scores could lead to under-investment and product out-of-stocks. They do not do the obvious – tell us which version gives the more accurate sales forecast. Actual sales were a known quantity – they could have told us whether online over-projected or mobile under-projected. In claiming big differences, they also ignore the fact that UK top 2 box scores are identical; this would be the likely case for an existing product. They make no mention of the demographics of the two groups of panelists – if they are different, this could very well account for PI differences. Salty snacks are one of those categories that have an age profile for flavor preferences – different ages, different responses. And we would expect an older online sample compared to a younger mobile sample (I note that space limitations in a promotional piece may have kept them from telling us about the sample).

Finally, they want to claim that the diagnostic data one gets from mobile is much richer. Mobile panelists use an average of eight words per diagnostic, while online users employ only seven. Such a big difference should overwhelm us? The difference may be statistically significant, but I’m not sure how meaningful it is. But then, I’m a quant type of guy, not a qual expert. I do note that they quickly gloss over the fact that sensory ratings show little difference between the panels and are directionally inconsistent, suggesting nothing is there.

While working hard to appear unbiased, they do mention that mobile had a significantly higher drop-out rate, took twice as long to run, and cost 55% more than the online study. But, remaining unbiased, they ask, “Is it wise to spend money on an online study for in-store work when it is proven to be flawed and subject to inaccurate data?” No, it’s not wise to do an online study for in-store work like this. But choosing this topic and technique to compare mobile and online research is at best a straw-man game with a foregone conclusion, and not much of an addition to our body of knowledge about the differences between the tools.

Originally published on the Greenbook Blog on 6 June 2015: http://www.greenbookblog.org/2015/06/15/science-is-dead-long-live-marketing-research/

Let Me Tell You A Story

A priest, a rabbi, and a minister walk into a bar…. No wait, that’s a joke, not a story. Back in the day, we were all encouraged to start our research presentations with a joke to endear ourselves to the audience. We were also told, if we were particularly nervous about presenting, that we should imagine our audience sitting in their underwear. Neither piece of advice was very good, but apparently it worked for some people.

Nowadays, hardly a day goes by on LinkedIn or at a conference without our being told that we’re not doing our job as researchers if we don’t tell a story. Pity the poor researcher who’s been beaten up in the blogs for not knowing enough, not doing enough, and not being focused enough – and now, on top of all that, has to be a great storyteller too! Let me tell you, storytellers are not a dime a dozen (for my British friends, that’s sixpence a dozen – there you go, Ray). Anyone who’s listened to my wife try to tell a story will know that this is not a native skill, and 32 years of marriage suggests it’s not necessarily a teachable skill. Why do we think we have to be storytellers? There are three reasons, none of them very good. This is their story.

First, we’ve been told for years, mostly in surveys of marketers (of questionable quality, both the surveys and the marketers), that we are not being as impactful as we could be and that research presentations are boring. This is hard to argue with – we’ve all sat through some terribly painful hour-long data presentations. It does not follow, though, that telling stories solves this problem. What does follow is that we have to do better presentations. I’d like to think the days of 200 charts of data are over, but I know that’s not true. I just finished a study where I had a nice, compact, 30-page deck that focused on the answer to a very specific question. The client’s marketing group’s reaction? They wanted to see a bunch of other numbers charted, none of them relevant to the research issue. I notice this tends to happen more when the results of the study are not very positive. Presumably, they go digging for some positive nugget; this is counterproductive. It’s an activity that is unlikely to produce much of anything useful and will therefore reinforce the image of research as a waste of time. So let’s not always blame the researcher for long, boring decks. Make it short and sweet, and keep to the point. If the point isn’t a positive one, then that’s learning – maybe you’ve just saved your company a bucket of money.

Second, some people have glommed on to psychological and neurobiological research suggesting all kinds of good things happen when you hear a story: oxytocin release, activation of larger portions of the brain, better retention, and more. The research I’ve seen, though, has two components that we don’t normally encounter when we’re doing research presentations. Typically, the stories told in these studies are “good”, in that they are engaging and usually have a happy ending. Also typically, the academic studies’ audiences are not interested, a priori, in the outcome of the story. When you have a negative story to tell, making it cute and fluffy is not going to make it any more palatable. Our audiences, by contrast, should be vitally interested in the outcome of the research – they should need much less in the way of a story to comprehend what we found. You’ve got a group waiting to hear what you have to say – don’t beat around the bush; tell them what they need to know.


This brings us to the third reason we think we need to tell stories – researchers think our audience is dumb. Sure, we’ve all run into some people and wondered how they possibly got into the positions they are in, but for the most part, our audience is pretty good at what they do. What they are not good at is what we do, and they are never going to be that good at what we do. The sooner we learn that, the happier we will all be. By the time we get to presenting the results of the study, everyone in the room should have heard about the study, agreed to the sample and the methodology, and, with my smarter clients, agreed to what we’re going to do with the results, no matter which way they turn out. Leave out the gory details of sampling and methodology when presenting – put them in the deck, but don’t waste the audience’s time on them. The audience has better things to do in a management presentation than discuss this stuff.

Storytelling does not necessarily make research more impactful. Storytelling does not make negative results more palatable. Your audience is usually smart enough to understand what you have to say if you keep it simple and keep it focused on the issue at hand.

Is there room for storytelling? Of course there is. I’ve seen some wonderful stuff from Niels Schillewaert at InSites Consulting and Fiona Blades at MESH using video to convey ideas that data tables would never communicate. Their work is particularly potent when they are trying to convey what consumers are thinking, and the videos they use are powerful in driving a shift in organizational perspective. That said, much of our research is more focused, looking at the potential for a set of marketing ideas to drive behavioral change. In that case, it’s not storytelling, and it’s not harmonizing (which someone on LinkedIn recently claimed is the next stage of evolution) – it’s keeping it short, simple, logical, and focused on the answer.

Now, about that priest, the rabbi, and the minister…

This post originally appeared on the Greenbook Blog on 1 March 2016.
