Thursday, August 27, 2009

Understanding Satisfaction

Each year we – Carlisle & Company – receive tens of thousands of satisfaction surveys. When we look at survey data we tend to ask two questions: (1) What good is it? and (2) What can each company do to improve? Survey results should show relative positions in specific areas that generally reflect process competencies. If you plot them out, they should look like positions on a map, stretching from origins to destinations. Positions closer to destinations should reflect OEMs that are more evolved. When we look at the 2009 NASPC Auto Service Manager Survey results, we largely see “map positions” that reflect evolution – it mostly makes sense to us. So, we conclude it is “good.”

However, how each company can improve is a different matter; OEMs don’t spend enough time thinking this through. Many look at summary metrics without delving below the surface. Dealers are not homogeneous and shouldn’t be treated as such. What makes a dealer dissatisfied? Neutral? Satisfied? Is it always the same thing or lack thereof? This issue is important, so we will be devoting several upcoming blogs to some of the ways – both conventional and unconventional – that best-in-class OEMs are utilizing their survey data to “evolve.”

Today, let’s focus on a somewhat more unconventional approach that has its roots in what we can call “human nature.” Fundamentally, we all want to be liked, so we naturally gravitate toward anything that helps measure our likeability. Therefore, we typically focus on survey scores that reflect “satisfaction” with us and what we do. If we are at the top of the pack we feel good. If we are not at the top of the pack, we: (1) blame it on the survey, (2) blame it on the survey timing, (3) blame it on our customers, (4) rationalize/minimize our results, or (5) try to get better. Those who gravitate toward (5) are the only ones who can march to the top of the pack. But how? Very simply – and maybe against our nature – by focusing on survey scores that reflect “DISsatisfaction.”

How is the “DISsatisfaction” approach different from the conventional “satisfaction” approach? The conventional approach to getting better involves identifying the areas that have the biggest influence (statistical or otherwise) on Overall Satisfaction. Since most survey respondents typically are in the “satisfied” categories (around 80%), they dictate the “drivers”. This approach has been very successful for some; we will document this in future blogs.
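As a minimal sketch of this conventional driver analysis – with invented attribute names, ratings, and a 5-point scale, not the actual NASPC survey items or method – each attribute can be ranked by its correlation with Overall Satisfaction across all respondents:

```python
# Illustrative sketch (not the actual NASPC method): rank survey attributes
# by their Pearson correlation with Overall Satisfaction across all
# respondents. Attribute names and ratings are invented.

def pearson(xs, ys):
    """Plain Pearson correlation using only the stdlib."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

surveys = [  # one row per respondent, 1-5 ratings
    {"parts_availability": 5, "order_accuracy": 4, "field_support": 3, "overall": 5},
    {"parts_availability": 4, "order_accuracy": 4, "field_support": 2, "overall": 4},
    {"parts_availability": 2, "order_accuracy": 3, "field_support": 2, "overall": 2},
    {"parts_availability": 3, "order_accuracy": 2, "field_support": 4, "overall": 3},
    {"parts_availability": 5, "order_accuracy": 5, "field_support": 3, "overall": 5},
]

attributes = ["parts_availability", "order_accuracy", "field_support"]
overall = [r["overall"] for r in surveys]

# Sort attributes from strongest to weakest "driver" of Overall Satisfaction.
drivers = sorted(
    ((a, pearson([r[a] for r in surveys], overall)) for a in attributes),
    key=lambda kv: kv[1],
    reverse=True,
)
for name, corr in drivers:
    print(f"{name}: {corr:+.2f}")
```

In practice a driver analysis would use a larger sample and often regression rather than simple correlation, but the shape of the exercise is the same: one pooled ranking, dominated by the roughly 80% who are satisfied.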

In contrast, the “DISsatisfaction” approach claims that the most important result coming from satisfaction surveys is the feedback you receive from those who are the least satisfied. Human nature throws us a curve ball in that it incorrectly points our common sense towards matters of the heart (How liked am I?) vs. matters of the mind (Why don’t people like me?).

However, we must fight the urge to dismiss the complainers and, instead, focus on their criticisms. Dissatisfied customers are a totally different animal and should be looked at individually. The actions that we most often think of as increasing satisfaction are typically effective in converting “satisfied’s” into “very satisfied’s”, but can be less effective in positively impacting the “dissatisfied’s.” For them, we may need a different approach.

Let’s look at some real survey data. We isolated scores for one participating OEM in our 2009 NASPC Parts Manager Survey and segmented the results into two groups: (1) the “satisfied’s” (“Very” or “Somewhat satisfied” in terms of Overall Satisfaction) and (2) the “dissatisfied’s” (“Neutral”, “Somewhat” or “Very dissatisfied” in terms of Overall Satisfaction). For both groups, we calculated the drivers of satisfaction and then ranked them from high to low. Looking at the results, we see that the groups have some very different issues impacting Overall Satisfaction:



We are not trying to make a statistical point here – there are many ways to look at survey data. In some cases, simply reading the verbatims may be most effective. The bottom line is, “Do you understand what makes your dissatisfied customers unique?” And, “Is there benefit from focusing exclusively on your dissatisfied customers?”
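A rough sketch of that segmentation – again with invented data and attribute names, not the participating OEM’s actual results – shows how the same driver calculation, run within each group, can produce very different rankings:

```python
# Hypothetical illustration: split respondents on Overall Satisfaction,
# then rank drivers (Pearson correlation with Overall) separately within
# each group. Attribute names and ratings are invented for this sketch.

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0

surveys = [  # 1-5 ratings; "overall" is Overall Satisfaction
    {"parts_availability": 3, "order_accuracy": 5, "overall": 5},
    {"parts_availability": 3, "order_accuracy": 4, "overall": 4},
    {"parts_availability": 4, "order_accuracy": 5, "overall": 5},
    {"parts_availability": 4, "order_accuracy": 4, "overall": 4},
    {"parts_availability": 1, "order_accuracy": 3, "overall": 1},
    {"parts_availability": 3, "order_accuracy": 3, "overall": 3},
    {"parts_availability": 1, "order_accuracy": 2, "overall": 1},
    {"parts_availability": 3, "order_accuracy": 2, "overall": 3},
]
attributes = ["parts_availability", "order_accuracy"]

# "Satisfied's" = 4-5 overall; "dissatisfied's" (incl. neutral) = 1-3 overall.
satisfied = [r for r in surveys if r["overall"] >= 4]
dissatisfied = [r for r in surveys if r["overall"] <= 3]

def ranked_drivers(group):
    overall = [r["overall"] for r in group]
    scored = [(a, pearson([r[a] for r in group], overall)) for a in attributes]
    return sorted(scored, key=lambda kv: kv[1], reverse=True)

sat_rank = ranked_drivers(satisfied)
dis_rank = ranked_drivers(dissatisfied)
print("satisfied drivers:   ", sat_rank)
print("dissatisfied drivers:", dis_rank)
```

In this toy data, order accuracy drives satisfaction among the happy dealers while parts availability drives it among the unhappy ones – the kind of divergence a single pooled ranking would hide.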

The chart below provides a different way of looking at dealer satisfaction across the industry (from the 2009 NASPC Service Manager Survey).


Group 1 OEMs all have highly satisfied customers. Over 90% of all service managers fall in the “satisfied” category. For this group the number of neutrals or discontents is so small that there is little to learn from them. So, other than feeling really good once a year about the high scores, what should this group do? Conventional wisdom says focus on converting “satisfied’s” into “very satisfied’s”. Unconventional wisdom says look across the industry to understand the unique nature of the “dissatisfied’s” and make sure your strategy is designed to avoid those issues. We are often asked to do the former analysis, but never the latter.

Group 2 OEMs are satisfaction “middling’s.” Looking at this group, the size of the “neutrals” is larger than the two “dissatisfied” groups (4th and 5th boxes). This group is at its tipping point and can go either way – we should understand the characteristics of these respondents. In addition, the small size of their “dissatisfied’s” puts them in a similar situation to Group 1. Again, we are never asked to probe beyond basic drivers here either.

Group 3 OEMs have a case of bulging neutrals – third-box satisfaction scores that are larger than you’d expect from a company with top-box scores of this magnitude. This could be the equivalent of early cancer detection – they need to probe into why their neutrals are neutral …or things could get worse in the future. Never been asked about this either.

Group 4 OEMs do get involved in trying to fix things, but most tend to get bogged down in the details. For example, you can call each dealer service manager and ask precisely why they gave you middling to low satisfaction scores. After a hundred or so calls you conclude that (1) there are a hundred or so different stories out there, and (2) just the process of calling the dealers was restorative. True on both counts, but pretty unhelpful. What we need to do is understand (1) the trends in our middling-to-low-scoring dealers, and (2) the trends in this group that transcend our brand – that are common to all middling-to-low-scoring dealers. Never asked.
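The grouping above starts from each OEM’s box-score distribution. A toy sketch of that tabulation – with invented response counts and an invented “bulging neutrals” rule; the real cut-offs would come from the survey data itself:

```python
# Hypothetical sketch: tabulate each OEM's satisfaction distribution and
# flag the "bulging neutrals" pattern. Response data and the flag's
# thresholds are invented for illustration.
from collections import Counter

responses = {  # 5 = very satisfied ... 1 = very dissatisfied
    "OEM_A": [5] * 60 + [4] * 33 + [3] * 5 + [2] * 1 + [1] * 1,
    "OEM_C": [5] * 35 + [4] * 30 + [3] * 28 + [2] * 4 + [1] * 3,
}

summary = {}
for oem, scores in responses.items():
    n = len(scores)
    boxes = Counter(scores)
    satisfied = (boxes[5] + boxes[4]) / n     # top two boxes
    neutral = boxes[3] / n                    # third box
    dissatisfied = (boxes[2] + boxes[1]) / n  # bottom two boxes
    # Invented rule: neutrals are "bulging" when they dwarf the true
    # dissatisfied's and exceed 15% of responses.
    bulge = neutral > 2 * dissatisfied and neutral > 0.15
    summary[oem] = dict(satisfied=satisfied, neutral=neutral,
                        dissatisfied=dissatisfied, bulge=bulge)
    print(f"{oem}: sat={satisfied:.0%} neu={neutral:.0%} "
          f"dis={dissatisfied:.0%} bulge={bulge}")
```

Here “OEM_A” looks like a Group 1 company (over 90% satisfied, nothing bulging), while “OEM_C” shows the Group 3 pattern: respectable top-box scores sitting on top of a disproportionate neutral block.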

Over the next few weeks, we will discuss more about how best to leverage your data, covering both conventional and unconventional approaches. But, what’s the bottom line for this week?
  • Remember, we proved that “good to great” really works – that was the message from customer survey work done for NASPC – and “great” customers come back more and buy more.
  • We all need to think more about the needs of various customer constituents – satisfieds, neutrals, and dissatisfieds.
  • It might be that each group has different characteristics and different change-levers.
  • We might have to do things differently to change.
