Indulge us for at least one more week (actually next week may be on a similar topic – then we promise to get back to supply chain issues).
How good intentions may – through unintended consequences – be leading the industry down the road to hell!
Let’s talk about survey coaching. We all know it happens. Most OEs have taken aggressive action to try to control survey coaching, but we also know that it still happens. I have very recent experience.
I decided I would turn in my current leased vehicle early, both to do my patriotic duty by spending money and to get that hot new truck that I coveted. I actually had a pretty good sales experience and everything started out innocently enough. I had done my research – going in with money factors, depreciation, and dealer cash information in hand, and I was ready to complete the sale quickly. The salesman recognized an informed (and willing) consumer and closed the sale in 35 minutes by effectively getting out of the way. Needless to say, I was very pleased with the experience and the dealership was guaranteed a “completely satisfied” sales satisfaction evaluation.
Well, almost. . .
As soon as we closed the sale, the salesperson let me know that I would be asked to evaluate the dealership. "Fair enough," I thought – this guy was great!! At this point though, he started pressuring me to give him all "5s". When I asked what "5" meant, it became apparent that the salesman didn't care what it meant. All I needed to know was that I was supposed to give him all 5s. This pressure was unpleasant, but I quickly forgot about it as soon as I fired up my new V8.
Then the calls started. . .
Over the next week, I started receiving calls from the dealership about my evaluation. Had I received the evaluation? Had I given them all “5s”? One dealer representative actually told me that if I wasn’t going to give them all “5s,” I shouldn’t even send in the evaluation. I was floored! Then I received a call from the salesman. After a quick inquiry about my new truck, he also reiterated that I should only submit the evaluation if it contained all “5s”. I started expecting to see a horse’s head in the back seat of my truck if I didn’t “do the right thing”.
At this point, the dealer had taken a great experience and, in the name of customer satisfaction, turned it into a negative one. In my mind (and likely in the minds of other customers) this experience tarnished both the dealer brand and the vehicle brand. Certainly the vehicle manufacturer was not blameless in this. After all, they are the ones that placed incentives and rewards on dealers that drive them toward survey management. . . all in the name of customer satisfaction!
How did we get here? How did we go from good intentions (the desire to measure customer satisfaction so that we could improve customer treatment) to the road to hell (creating a system that makes customer treatment worse when we try to measure it)?
It all started out reasonably. When J.D. Power started measuring customer satisfaction on a syndicated basis there were many good sides to it. It put the focus on improving the customer experience, and because it was national and random it could not be gamed. Any one dealer had little to gain from coaching since they did not know whether their customers would be surveyed.
The weakness was that this approach did not provide OEs with the level of detail needed to take focused actions to improve. They needed more data and more numbers for that. So in an effort to improve (good intentions), OEs started surveying their own customers and (in good capitalist fashion) instituted rewards to individual dealers for good customer satisfaction performance.
Road to Hell
It was those rewards that pushed us down the road to hell – by giving rise to coaching. The very act of trying to measure customer satisfaction more finely was giving rise to actions that directly changed what we were trying to measure (as predicted by the Hawthorne Effect).
So, in fact, the way we chose to measure satisfaction actually changed the process and the outcome. This may also lead us back to that conundrum we have been discussing for the past several weeks – why don’t customer satisfaction results better predict performance?
Stay with me, I promise to get back to that point soon.
If you graduated from high school, you know about the Scientific Method. This means developing a hypothesis, testing that hypothesis with data and experiments, and evaluating whether the data supports or refutes the hypothesis.
In this application, the hypothesis is “Better customer satisfaction should lead to improved business results.”
As we have seen, though, the data does not bear out the hypothesis. There are two basic possibilities:
- The hypothesis is wrong
- The data we used is not reliable and, therefore, we cannot make inferences based on that data.
Almost certainly the hypothesis, if not wrong, is simplistic. As we discussed in the last blog, there are other factors that play important roles: past customer experience, product quality, product execution, customer expectations. These and more impact our customer experience and whether we return to a given brand.
However, it is almost as certain that the data we have is wrong! By creating an environment that incentivizes coaching, the measures we get back will not be accurate.
I don’t think that these insights are particularly novel. Many have commented on similar concerns. But if this is so, then why do we keep doing it? Why stick with something we know (as an industry) is flawed? It could be for the same reasons we keep doing other things that aren’t working: inertia (we have always done it), the lack of a better alternative, or the feeling that we have to do something, even without a better idea of what that something should be.
So what is that better idea?
There are some things that would move us in the right direction:
- You want to change behavior? Change the incentive
- Take the directly related money or rewards out of the easily gamed satisfaction scores.
- Instead, continue to obtain survey results by making them part of the “required dealer package” needed to keep the most favored dealer benefits. You require your dealers to participate in RIM? Report their inventory? Well, now they also have to conduct the surveys. No longer is the reward tied to the surveys per se, but to the entire package of requirements.
- The biggie – reward dealers for “survey response rate”, not “survey results”.
- But don’t we care about results?
- Of course we do, but we use the results to help dealers improve their performance.
- Many of us compensate dealers for using a consultant to conduct an inventory audit. We pay for the audit, not the results, assuming that, of course, the dealer will want to improve their performance. Why should this be any different?
- We addressed this in an earlier blog. The survey should specifically ask the respondent if they’ve been coached in any way. If more than 20% of a given dealer’s respondents say “yes”, exclude all of that dealer’s surveys. Remember, exclusion of surveys means the dealer doesn’t meet their base package of requirements, which is tied to financial incentives.
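To make the mechanics concrete, here is a minimal sketch of how the two rules above could be combined: the dealer is rewarded only for response rate, and all of a dealer’s surveys are thrown out if more than 20% of respondents report being coached. The 20% threshold comes from the text; the 50% minimum response rate, the function name, and the data shape are all hypothetical illustrations, not anything an OE actually uses.

```python
def dealer_meets_requirement(surveys_sent, responses,
                             min_response_rate=0.5,      # hypothetical threshold
                             coaching_threshold=0.20):   # 20% rule from the text
    """Decide whether a dealer meets the survey portion of the
    'required dealer package'. Note that satisfaction scores are
    never consulted -- only participation counts."""
    if surveys_sent == 0 or not responses:
        return False

    # Exclusion rule: if more than 20% of this dealer's respondents
    # answer 'yes' to the coaching question, exclude all surveys,
    # which means the requirement is not met.
    coached_share = sum(1 for r in responses if r["coached"]) / len(responses)
    if coached_share > coaching_threshold:
        return False

    # Otherwise the only metric that matters is response rate.
    response_rate = len(responses) / surveys_sent
    return response_rate >= min_response_rate
```

A dealer with a healthy return rate and a few isolated coaching reports would pass, while a dealer whose respondents widely report coaching would fail regardless of how high the scores were – which is the point of decoupling the reward from the results.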
- Customer satisfaction has long been recognized as an interim measure, not the final outcome we want. What we really want is customers that bring their business and refer other business back to us.
- Measuring service loyalty to dealers and vehicle repurchase loyalty directly is the way to do this. Instead of paying for high scores in these areas, use the results of these surveys to remediate and train dealers.