
A Bad Week For Neuroscience


For those of us in the business who are scientists, or have pretensions of being scientists, neuroscience is a ridiculously compelling concept. Science uses the term sui generis (because we can't help but use Latin terms profusely and indiscriminately) for something that cannot be reduced to a lower-level concept. The ability to tie marketing concepts to neurological processes – the lowest level of an individual's response – is a holy grail for us. If we can map the process to the response, we get away from asking people questions, and with that away from all the uncertainty, all the biases, and all the controversy in our scientific endeavors.

So it is with both great sadness and a certain amount of smug self-satisfaction that I read two publications this past week that raise serious doubts about neuroscience as it is practiced. The first is an article in Proceedings of the National Academy of Sciences (no, I haven't heard of it either) by Eklund, Nichols, and Knutsson called "Cluster failure: Why fMRI inferences for spatial extent have inflated false positive rates". I give you fair warning – this article is as dense as they come, from both a neurological and a statistical perspective; it is not for the faint of heart. Fortunately, the authors summarize the issue and the results in language we can all understand. The three most common statistical packages for analyzing fMRI data can produce false-positive rates of up to 70% for cluster-based inferences, where the nominal rate should be 5%. That's worth repeating – a test designed to be wrong 5% of the time can be wrong up to 70% of the time, and the authors estimate this calls into question the results of some 40,000 fMRI studies. In practical terms, when someone tells you that this type of stimulus excites this area of the brain and that means it's good or bad, they are likely wrong.
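The mechanism behind the failure is worth a sketch. Cluster-extent inference calls a result significant when a blob of supra-threshold voxels is bigger than chance would allow, and "chance" depends entirely on how the software models the spatial smoothness of the data; Eklund and colleagues found that the packages' assumed Gaussian-shaped spatial autocorrelation doesn't match real fMRI noise. A deliberately exaggerated toy simulation (Python with numpy/scipy, 1D "brains", all parameters invented for illustration – this is not the paper's actual method) shows what happens when the null model understates the true smoothness:

    import numpy as np
    from scipy.ndimage import gaussian_filter1d, label

    rng = np.random.default_rng(0)
    N_VOX, N_SIMS, Z_THRESH = 1000, 2000, 2.3   # toy values

    def max_cluster(z):
        # Size of the largest contiguous run of supra-threshold voxels.
        lab, n = label(np.abs(z) > Z_THRESH)
        return 0 if n == 0 else np.bincount(lab.ravel())[1:].max()

    # Critical cluster size calibrated under a null that (wrongly)
    # assumes spatially independent noise.
    crit = np.quantile(
        [max_cluster(rng.standard_normal(N_VOX)) for _ in range(N_SIMS)], 0.95
    )

    # "Real" null data with spatial autocorrelation, as fMRI data has.
    def smooth_null(sigma=4.0):
        z = gaussian_filter1d(rng.standard_normal(N_VOX), sigma)
        return z / z.std()   # restandardize so the voxel threshold is comparable

    fwe = np.mean([max_cluster(smooth_null()) > crit for _ in range(N_SIMS)])
    print(f"Nominal 5% test, actual false-positive rate: {fwe:.0%}")

Pure noise, analyzed against the wrong null, produces "significant" clusters far more than 5% of the time. The real packages make a subtler version of this mistake, but the direction is the same.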

The other publication I read was the July 2016 issue of Quirk's Marketing Research Review. I recognize that this is an advertiser-supported publication – it's free to subscribers – and I usually find at least one interesting article per issue, sometimes more. There's an ad on the inside back page from a major marketing research supplier, who will go unnamed, promoting their neuroscience business. The headline says, "Think your ad is good? We can make it GREAT." They use EEG, biometrics, facial coding, eye-tracking, and self-report to "get at the truth of how people respond to your ad, so you can run your campaign with confidence."

No, not really. They can tell you whether there is neurological stimulation, and probably where in the ad that stimulation occurs or doesn't. That tells you two things: whether the ad generates any stimulation at all, and whether the stimulation occurs when you think it should. Neither of these will make the ad great, or even good for that matter – it's a report card. They can tell you whether it is more or less stimulating than other ads in your category that have been tested. They can tell you whether people liked the ad, via facial coding and by asking them. That won't make the ad great either. Why not? Simple – we don't understand the relationship between neurological stimulation and purchasing, and we barely understand the relationship between ad-liking and purchasing. At the end of the day, the question is whether advertising drives increased purchasing, and we have yet to establish the linkages needed to answer that neurologically. Research doesn't make anything great – it tells us whether something is likely to be great.

I’ve argued for some time that neuromarketing is its own worst enemy, over-promising and under-delivering. Thankfully, we’ve seen less hyperbole in the last couple of years. Until this week.

Reference – Eklund, A., Nichols, T. E., & Knutsson, H. (2016). Cluster failure: Why fMRI inferences for spatial extent have inflated false positive rates. Proceedings of the National Academy of Sciences. www.pnas.org/cgi/doi/10.1073/pnas.1602413113

Originally published on www.greenbookblog.com on 8 August 2016

Feelings. Nothing More Than Feelings.


Feelings, a 1974 song by Morris Albert, might be the worst song ever to hit the charts (at least for males in the 70s). A recent Quirk's article (http://www.quirks.com/articles/2016/20160808.aspx) about feelings may not be the worst article they've ever published, but it ranks up near the top. The author is flogging a new voice analysis system that claims to detect passion in responses to a new product concept and thereby enable better performance forecasts. Full disclosure – what I know about audio engineering and sentiment analysis would fit in a cocktail glass and leave plenty of room for my martini. The good news – you don't need to know anything about either topic to appreciate the lack of empirical evidence in this article.

The article starts with, "Fundamentally, consumers adopt new products and services that improve their lives." This is by no means fundamental; neither my new Smucker's Chocolate Coconut Ice Cream Topping nor my new Gia Russa Bolognese Pasta Sauce is likely to improve my life, although they both taste good. Most things in our pantries are not life-altering, and food is the category we all buy from most often, transaction by transaction.

The author believes the "enormous question on the table…is how to identify product concepts that have high probabilities of building deep emotional connections with consumers." No, the enormous question on the table, from a new product forecasting point of view, is how to better predict new product trial; that may or may not involve emotions. "Consumers do not talk about products using multi-point scales," the author claims. "They are unnatural modes of expression." Consumers don't talk much about products at all unless we ask them – they have better things to do with their lives (except those who live their lives on Facebook). A well-constructed set of scalar items is no less sensitive or informative than the open-ended questions the author would have us use. Neither modality is more or less natural to consumers.

In a comparative test, "the language-based sentiment metric [open end] yielded a coefficient of variation that was five times higher than the scalar method." As if that were a good thing. What they obtained was a metric far more variable than the scales, and high variability is not a virtue in a prediction exercise. By the way, we do not expect much variation from a 5-point scale either, unless the product is particularly polarizing. And we won't even get into the issues of computing a coefficient of variation on a non-ratio scale (it divides by a mean whose value depends on the arbitrary scale labels) or the wisdom of asking 35 purchase intent questions at one time.
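To see the non-ratio-scale problem concretely, here is a minimal sketch (Python with numpy, made-up ratings): because a Likert-type scale has no true zero, simply relabeling the same answers changes the coefficient of variation.

    import numpy as np

    ratings_1to5 = np.array([1, 2, 3, 4, 5])   # five made-up responses on a 1-5 scale
    ratings_0to4 = ratings_1to5 - 1            # identical answers, relabeled 0-4

    def cov(x):
        return x.std(ddof=1) / x.mean()        # coefficient of variation = sd / mean

    print(cov(ratings_1to5))   # ~0.53
    print(cov(ratings_0to4))   # ~0.79 -- same answers, different "variability"

Nothing about the respondents changed between those two lines; only the arbitrary zero point did. A statistic that shifts with the labeling scheme cannot anchor a claim that one method is five times more sensitive than another.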

In lauding their voice analysis method over typed open-ended answers, the author claims success for the former because respondents spoke 83 words per stimulus compared to 14 typed words. I can write a scathing review of a restaurant in lots of words, or I can just say "It Sucked!!!!" I'm pretty sure the latter conveys exactly how I feel about the restaurant and why you shouldn't go there. Word counts are a ridiculous measure of anything.

There's a lot of fake math in this article surrounding the audio sampling rate – just ignore it; it's wrong.

Using voice analysis, the author redefines the maximum trial potential of a product as the subset of respondents who (a) express positive sentiment toward the concept, (b) show positive activation (an expressed desire to do something), and (c) do so in a passionate way (as defined by the voice analysis). Spoiler alert – there is no validation for this assertion. I'm not saying we, as an industry, are great at predicting new product success. I do, however, expect someone who claims to have a better way to show us some data to back up the claim.
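As I read it, the proposed ceiling is simply the share of respondents who clear all three gates at once. A toy illustration (Python with pandas, using hypothetical flags a real voice-analysis pipeline would have to supply) shows how little machinery is involved – the entire burden falls on whether the three flags actually predict trial, which is precisely the validation the article never provides.

    import pandas as pd

    # Hypothetical per-respondent flags; a real pipeline would have to
    # produce these, and prove that they mean something.
    resp = pd.DataFrame({
        "positive_sentiment":  [True, True, False, True, False],
        "positive_activation": [True, False, False, True, True],
        "passionate":          [True, True, False, False, True],
    })

    max_trial = (resp.positive_sentiment
                 & resp.positive_activation
                 & resp.passionate).mean()
    print(f"Claimed maximum trial potential: {max_trial:.0%}")   # 20% here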

I’m sure this company wants to know what I’m feeling about their new voice analysis approach. I’m not a likely adopter, because while I experienced positive activation in a passionate way, my sentiment is anything but positive.

Originally published on www.greenbookblog.com on 2 September 2016