Monthly Archives: January 2018

PIG is a Great Name for This Strategy

I find it fascinating, as a psychologist, that so many marketing researchers are quick to cite psychology as justification for their musings, even though they have little knowledge of the field. Lately, it seems we believe a cursory reading of the literature suffices to make one an expert in the topic. I understand the desire to add a “why” to a “what”, and a “why” that has some validity because it rests on a psychological principle. I understand research vendors trying to go beyond making things up and actually having a rationale grounded in science. “Good for us”, I say, because I believe that if we deeply understand why shoppers behave the way they do, we’ll be better at marketing to those shoppers. And it’s nice to see my academic profession being used in my making-a-living profession.

Except when people take one little idea, misunderstand it, misrepresent it, and build a whole (and silly) theory around it. Take the PIG strategy espoused by Mr. Sampson Lee. Were this not published on LinkedIn and available in book form (from the author), you might think you had wandered into a Monty Python skit or were reading about it on The Onion. Here’s how Mr. Lee’s logic goes:

  • He references Kahneman and says that all we remember about an experience is the average of its peak and its end-point.
  • Therefore, instead of using resources to make the whole experience good, Lee says we should make only a small part of the experience great – the rest of the experience doesn’t need to be good.
  • Indeed, causing some friction in the process heightens the amplitude of the peak of an experience – we should make things a little difficult.
  • With a high positive peak during the experience and a good ending, you create a better memory of the experience.
  • Hence the PIG (Pain Is Good) strategy (you can read more at linkedin.com/public/pain-good-sampson-lee, among other posts of his).

There are a number of problems with Mr. Lee’s strategy:

  • First, Kahneman doesn’t say this is how memories are formed or retained. He does say that the average of the peak and the ending is a good predictor of affective recall.
  • Second, all the work Kahneman (and Lee) cites is unidirectional. That is, the subject’s experience is all positive or all negative, not a mix of the two. We have no idea whether this encoding would hold up for experiences that are both positive and negative (see the sketch after this list).
  • Third, there are simpler psychological explanations for the examples Mr. Lee cites, ones that don’t require creating friction points in a shopper’s experience in order to heighten the good parts. For example, a business that improves the end of the experience can be seen as making an effort, and that alone can earn better ratings.
  • Fourth, if pain were good, we would not be seeing the growth of relatively painless online sales on Black Friday. Assuming the same high from finding what you want whether you buy it online or in brick and mortar, the greater hassle of brick and mortar should be more appealing – it’s not.
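To make the peak-end arithmetic concrete, here is a minimal sketch in Python. The moment-by-moment ratings are invented, and treating the single highest moment as the “peak” already assumes the rule carries over to mixed experiences, which is exactly the gap flagged above. Under that naive reading, a friction-filled trip with one high point and a good ending scores higher than a uniformly pleasant one; whether real memory works that way is the open question.

```python
# A minimal sketch of the peak-end average Kahneman describes:
# remembered affect is roughly the mean of the peak moment and the final moment.
# The moment-by-moment ratings below are invented for illustration, and using
# max() as the "peak" assumes the rule extends to mixed (positive-and-negative)
# experiences, which is the very thing in question.

def peak_end_score(moments):
    """Average of the most intense moment and the final moment."""
    return (max(moments) + moments[-1]) / 2

smooth_visit = [6, 7, 8, 7, 7]   # uniformly pleasant trip
pig_visit = [2, 1, 2, 9, 8]      # mostly friction, one high peak, good ending

print(peak_end_score(smooth_visit))  # 7.5
print(peak_end_score(pig_visit))     # 8.5; the naive rule "prefers" the painful trip
```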

Mr. Lee has taken one sentence from Kahneman’s book, interpreted it incorrectly, and come up with a whole theory that says customer experience isn’t worth a bucket of spit except for the peak and the end. All you need is one good piece of the experience and a happy ending (no snickering, please). I suspect that if you provide lots of bad experiences, one good one, and a happy ending, it will not bode well for your business – memory is just not that simple or that lazy.

All that said, to the extent that Kahneman is right, there’s a great lesson for retailers. Providing at least one peak experience during a shopping trip and a great ending (i.e. checkout) may go a long way towards improving customer satisfaction. This is why stores like Publix and Wegman’s are always highly rated and Walmart is not – differences at checkout. Hey Walmart – this would make a great test.

A quote attributed to William James goes, “Psychology is the study of the obvious. It tells us things we already know in a language we can’t understand.” Sometimes life is simpler than Mr. Lee would have us believe.

 

 

This is a Test

Words to live by if you grew up in America during the Cold War (and anyone over 40 is reminded of this in our current political climate). However, this is not about politics but about understanding a simple fact of a researcher’s life – sometimes you have to test an idea to see if it will work. Unfortunately, in our line of work we don’t have a lot of “facts”. Here are some common things we don’t know:

  • Does social media make a difference? There’s some research that says yes and some that says no. Worse, some of the studies say a social media push is only effective in specific ways for specific things. Not very helpful, is it? But we want to believe that social/digital media matters, so we keep on trying.
  • How should we advertise? Never mind the newer question of digital versus traditional, which is a whole other issue. What should our ads look like? How should they be delivered (reach, frequency)? On what device(s)? How often should they be changed? You would think that with all the years of research on this topic we would have a pretty good blueprint by now.
  • Does eye-tracking matter? Beyond the simple, “if you don’t see it you don’t buy it” axiom, we haven’t been able to show that attracting more attention to a product increases its sales. Again, we all believe it, we just haven’t proved it.
  • Aren’t all the answers in Big Data? Gee, if I had a dollar for every time someone says this, I could retire soon. Sometimes the answer is in Big Data, sometimes it’s not. Sometimes a database has a great covariance story: every time a brand does this, here’s what happens. That’s a pretty good indicator that if you try it, you’ll come up with the same outcome. More often, you find the data is equivocal, usually because there are additional factors you aren’t considering. And sometimes you want to try something new, something no one else has tried before. Big Data won’t be that useful in that case.
  • Should brands be stocked horizontally or vertically? This is actually something I know about, and the answer, of course, is that it depends on the category. Again, not helpful.

Experimentation needs to be in our research toolbox. It has always been a hallmark of the scientific method, just as observation (think ethnography or data-mining) is and just as hypothesis formation is. Whether you are doing a simple online A/B test, a virtual reality test, a controlled store test, or a live test market, sometimes the only way to know whether a marketing idea will work is to test it (a sketch of a simple A/B significance check follows the list below). Over the 25 years we’ve been doing our research, we’ve found that, contrary to expectations:

  • You can charge more for your product than you may think.
  • Making packages more convenient for consumers may not improve sales.
  • SKU reductions can improve sales for the brand and for the retailer.
  • Shelf signage and displays are not always a good thing – they can actually hurt your sales.
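As for the “simple online A/B test” mentioned above, here is a minimal sketch of how one might read out such a test with a two-proportion z-test. The visitor and conversion counts are invented, and a real test would also need its sample size and significance threshold fixed in advance; this is only an illustration of how a test turns “which version wins?” into a number.

```python
# A minimal sketch of reading out a simple online A/B test with a
# two-proportion z-test. All counts are invented for illustration.
from math import sqrt
from statistics import NormalDist

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Return the z statistic and two-sided p-value for conversion rate A vs. B."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)                 # pooled conversion rate
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))   # standard error of the difference
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))             # two-sided p-value
    return z, p_value

# Hypothetical result: version A converts 120 of 2,400 visitors, version B 156 of 2,400.
z, p = two_proportion_z(120, 2400, 156, 2400)
print(f"z = {z:.2f}, p = {p:.3f}")  # a small p-value suggests the lift is not just noise
```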

Well-designed experiments need not be costly, nor do they need to be time-consuming. They can:

  • Provide a causal, rather than correlation-based, answer to your question; you know the differences you see are due to the test variable.
  • Reduce the risk associated with a marketing action. How many new products would have performed better had they been test marketed?
  • Resolve disagreements on how to market a product when competing ideas exist; test them both and see which wins.

Harvard economist Sendhil Mullainathan has a great quote:

No one would say, “Hey, I think this medicine works, go ahead and use it”. We have testing, we go to the lab, we try it again, we have refinement. But you know what we do on the last mile? “Oh, this is a good idea. People will like this. Let’s put it out there.”

Test more.

I Have a Dream

So began a pretty famous speech, and so begins any number of stories about Category Management 2.0, in which there is once again a call for retailers and manufacturers to work together to deliver what shoppers want. For those of you unfamiliar with CatMan (as those in the know call it, because who can resist a catchy abbreviation), it was the 1990s brainchild of Brian Harris. He created a process by which retailers would have category captains – a major player in the category – who would help guide the retailer to better assortments, shelf layouts, and pricing. The retailers got more profitable categories and, in return, the captains got somewhat preferential treatment when it came time to allocate space or promotion slots or to make delisting decisions.

As I wrote for ESOMAR back in 2007, this all went horribly wrong. The process for doing CatMan “correctly” filled large 3-ring binders with pages of forms that nobody ever looked at. Manufacturers created armies of category analysts at major retailers to assist them with the intricacies of the process, which often ended up being free labor for the retailer (planogrammers of the world, unite!). The town of Rogers, Arkansas, it is said, was created for this very purpose. I talked with a number of manufacturers for the ESOMAR paper and asked them the very pregnant question, “Are you making any money off this process?” Mostly people would look with eyes downcast and not say much; they either didn’t know or, more often, didn’t want to know.

The concept was broken for two very simple and obvious reasons:

  • What is good for the retailer is not always good for the manufacturer, and vice-versa.
  • Understanding shoppers, which is supposed to underpin the whole CatMan process, is not an easy thing, and few do it well.

ASL has been doing CatMan research for 25 years now. As we’ve repeatedly shown, the ability to produce a shelf assortment and/or a shelf layout that helps both the category and the manufacturer’s brand has been limited. Only 15% of the time have we seen a win-win outcome, in a process where all the outcomes should be win-win. Trust me when I tell you that most of the scenarios we’ve tested have never seen the light of day at a retailer presentation.

RESULTS OF 327 CATEGORY MANAGEMENT TESTS

                        CATEGORY
                  POSITIVE   NEUTRAL   NEGATIVE   TOTAL
BRAND POSITIVE      15%        18%        7%       40%
BRAND NEUTRAL        1%        38%        6%       44%
BRAND NEGATIVE       1%         7%        7%       15%
TOTAL               18%        63%       19%      100%

There are a number of reasons why these attempts to improve the assortment or the shelf layout failed, but mostly they fell into two camps:

  • We think we understand the shopper but our understanding is incomplete or incorrect. So when we make a change, it’s not aligned with what the shopper wants or needs.
  • We understand what the shopper wants or needs, but we either can’t translate that to the shelf or we translate it incorrectly.

The new calls for category management simply echo the past rationales for the process. CatMan has always been designed to be shopper-centric, it has always been designed for all parties to share relevant information, and it has always been Pollyanna-ish in believing altruism will prevail over self-interest.

At its heart, category management, along with its poor stepchild shopper marketing, has always been about fact-based selling, which existed well before either came to life. We have yet to meet a retailer who wasn’t interested in a better way to merchandise a category. Being a neutral third party, we always get a hearing for our recommendations. Do good shopper research. Design better in-store programs based on that research. Test the programs before going to the retailer. Stop dreaming about unicorns and true category management.