Wednesday, May 28, 2014

Russia Focus Day

Over the last three years, most OEMs have seen significant growth in spare parts sales in Russia – up to 100% for automotive OEMs and up to 250% for heavy equipment OEMs. Adding to this growth is a consistently strong outlook across the board.

Most OEM Aftersales business units have developed and implemented plans to accommodate growth in Russia. However, for many Europe Parts Benchmark (EPB) participants, volume in this region is growing even more quickly than anticipated. At the same time, dealer and customer requirements are changing and becoming more demanding. If this situation is not addressed, customers will grow dissatisfied with their service and OEMs eventually risk losing vehicle and whole good repurchase sales. Addressing it, on the other hand, offers significant benefits in a quickly growing market: higher customer satisfaction, higher parts profits, and, eventually, higher vehicle and whole good profits. This is why, in early April, we facilitated a Russian Market Focus Day in Moscow with EPB participants from both the heavy equipment and the automotive industries.

The first thing that struck us was how quickly and robustly the networks have developed – when we first had a close look at the Russian market in 2008, only a couple of OEMs had a presence in the market. Compare this to today, where all EPB participants have at least one warehouse in the region from which they fill both stock and emergency orders, and many OEMs are planning significant structural changes in the upcoming years.

The challenges OEMs face in this marketplace are indicative of a market that is “Emerging, verging on Maturing” - the top five perceived risks are transportation infrastructure, currency, buy/lease real estate, customs, and reverse logistics.

The highest risk factors we discussed were:

First, transportation infrastructure: OEMs indicated that finding the right mix of carriers is key, since there is no “one-stop shop” available. OEMs experience delivery issues in metro areas due to traffic and zone restrictions, while Eastern Russia is challenged by distant and remote delivery points. In both cases, having the right carrier can mitigate these issues substantially.

Second, real estate risk, in terms of the availability of suitable warehouse space, is a key concern for many OEMs. Many OEMs turn to leasing to manage the associated risks and try to negotiate favorable contracts, taking into account the currency risk.

Third, there was broad agreement that finding the right customs broker who has the right relationships and is close to the clearing location is key. The complex clearing process, coupled with a requirement of high document accuracy, can be an impediment, especially on emergency orders.

Bottom Line: OEMs have significantly expanded their presence and volumes in Russia over the last few years and some challenges that EPB participants are tackling, such as an evolving transportation infrastructure and a lack of real estate availability, go hand-in-hand with this growth.

Saturday, May 10, 2014

“Top Box or Top Two Box … Or Other Metrics – That Is the Question.”

Or is it really? In other words, does it really matter which metric we use to measure and report dealer satisfaction with the support they receive from their manufacturer? A debate – if not as old as Shakespeare’s “Hamlet” – at least as old as our surveys themselves. In this blog, we’re going to try to settle that debate.

While we conduct our surveys all around the globe, let’s start closer to home and use the 2013 North American Automotive Parts and Service Manager Surveys for our analysis. For the Parts Manager Survey, 11,700 responses were submitted. The overall average response rate for 19 U.S., 8 Canadian, and 10 Mexican OEMs was 66%. Nearly 9,000 Service Managers from 19 U.S. and 10 Canadian brands responded to the Service Manager Survey for an even higher overall response rate of 76%. Clearly, a very robust foundation. This analysis will look only at 19 U.S. brands and at the scores to the “all-in” question, “How satisfied are you with [OEM]’s total support for your parts/service business?”

In our surveys, we use a five-point satisfaction scale, ranging from “Very Satisfied” and “Somewhat Satisfied” through “Neutral” to “Somewhat Dissatisfied” and “Very Dissatisfied”. Now, we could also write a blog (or several blogs) about the nature of satisfaction scales. Suffice it to say that the academic discussion about which satisfaction scale to use probably predates “Hamlet”, and is equally undecided. We prefer the simple and intuitive five-point scale: our respondents have businesses to run, and we need to make it as easy as possible for them to respond. Needless to say, once a survey is running, it is difficult to change scales, as we would lose historical comparability – in our case, more than ten years.

So, based on the five-point scale, what metrics are available? We will take a closer look at four candidates:
  • Top Box Score: This is calculated by dividing the number of “Very Satisfieds” by the total number of respondents for a particular question: (Very Satisfieds)/(Total number of respondents). This is the “official” metric for our North American Surveys.
  • Top Two Box Score: Similar to “Top Box”, but adds the Second Box (“Somewhat Satisfieds”) to the numerator, so: (Very Satisfieds+Somewhat Satisfieds)/(Total number of respondents). We used this metric until 2012. We will explain why we switched to Top Box in a bit.
  • Average Score: For this calculation, we convert the score labels into numeric values: “Very Satisfieds” = “5”, “Somewhat Satisfieds” = “4” … down to “Very Dissatisfieds” = “1”, and then simply calculate the average. Traditionally, we have not used this metric for reporting scores.
  • Net Promoter Score (NPS): To be clear: this methodology is typically used to measure not satisfaction, but the strength of customers’ recommendations and advocacy. NPS uses a 10-point scale, where “9” and “10” are the “Promoters”, “7” and “8” the “Passives”, and the rest are “Detractors”. You then take the percentage of customers who are “Promoters” and subtract the percentage who are “Detractors”. Obviously, our five-point scale does not perfectly fit this methodology, but let’s give it a try. To mimic NPS the best we can, we are going to use: (Very Satisfieds - (Somewhat Dissatisfieds + Very Dissatisfieds))/(Total number of respondents).
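To make the four definitions concrete, here is a minimal Python sketch that computes all four metrics from a single response distribution; the counts are made up for illustration and are not actual survey data:

```python
# Illustrative response counts on the five-point scale (not survey data)
responses = {
    "Very Satisfied": 40,
    "Somewhat Satisfied": 30,
    "Neutral": 15,
    "Somewhat Dissatisfied": 10,
    "Very Dissatisfied": 5,
}

total = sum(responses.values())

# Top Box: share of "Very Satisfied" respondents
top_box = responses["Very Satisfied"] / total

# Top Two Box: "Very Satisfied" plus "Somewhat Satisfied"
top_two_box = (responses["Very Satisfied"]
               + responses["Somewhat Satisfied"]) / total

# Average: map labels to 5..1 and take the mean
values = {
    "Very Satisfied": 5,
    "Somewhat Satisfied": 4,
    "Neutral": 3,
    "Somewhat Dissatisfied": 2,
    "Very Dissatisfied": 1,
}
average = sum(values[label] * n for label, n in responses.items()) / total

# NPS-style: "promoters" minus "detractors", adapted to five points
nps_like = (responses["Very Satisfied"]
            - (responses["Somewhat Dissatisfied"]
               + responses["Very Dissatisfied"])) / total

print(top_box, top_two_box, average, nps_like)
```

By construction, Top Box is always the strictest of the box scores, since it drops the “Somewhat Satisfieds” from the numerator.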
After calculating the scores for the “Overall Satisfaction” question, we have a set of four scores for each OEM. What is next?

Obviously, each individual OEM’s absolute scores matter, and it is great if an OEM shows year-over-year improvement. But, clearly, it is NOT so great if others have improved MORE than you have. So, we not only care about absolute performance, but also relative performance – or an OEM’s RANK within the industry. Going back to our original question: Does the scoring methodology we use significantly impact the relative rank of an OEM? Let’s determine the rank of each OEM for each of the four scoring methodologies and look for significant differences in rank across methodologies.
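As a sketch of that ranking exercise, here are five hypothetical OEMs with made-up scores for two of the methodologies (the real analysis covers all four methodologies and the full participant set):

```python
# Made-up scores for hypothetical OEMs; real names and data are not shown
scores = {
    "OEM A": {"top_box": 0.52, "top_two": 0.85},
    "OEM B": {"top_box": 0.48, "top_two": 0.88},
    "OEM C": {"top_box": 0.40, "top_two": 0.80},
    "OEM D": {"top_box": 0.35, "top_two": 0.82},
    "OEM E": {"top_box": 0.30, "top_two": 0.70},
}

def rank_by(metric):
    """Return {oem: rank}, with rank 1 for the highest score on `metric`."""
    ordered = sorted(scores, key=lambda oem: scores[oem][metric], reverse=True)
    return {oem: position + 1 for position, oem in enumerate(ordered)}

top_box_rank = rank_by("top_box")
top_two_rank = rank_by("top_two")

# Absolute rank difference per OEM across the two methodologies
diffs = {oem: abs(top_box_rank[oem] - top_two_rank[oem]) for oem in scores}
print(diffs)
```

In this toy data, ranks shift by at most one position between the two methodologies, which is the same kind of comparison we ran on the actual survey results.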

For both surveys, the differences in ranks are quite minor across scoring methodologies: “green” OEMs for Top Box are almost always “green” for the other methods. The same is true for “yellow” and “red”.

In most cases, ranks differ by one or two positions; sometimes not at all. Bigger rank differences (>3) tend to occur for OEMs with smaller networks; scores will vary more in these cases.
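One reason smaller networks bounce around more: each box score is a proportion estimated from a limited number of respondents, and the sampling noise of a proportion shrinks with the square root of the sample size. A quick sketch, where the 0.4 Top Box share and the network sizes are assumptions for illustration:

```python
import math

def top_box_std_error(p, n):
    """Approximate sampling standard error of a Top Box proportion,
    given an underlying share p and n respondents."""
    return math.sqrt(p * (1 - p) / n)

# Same underlying satisfaction (p = 0.4), different network sizes
small_network = top_box_std_error(0.4, 30)    # e.g., 30 responding dealers
large_network = top_box_std_error(0.4, 300)   # e.g., 300 responding dealers
print(small_network, large_network)
```

With ten times the respondents, the score is roughly three times steadier (a factor of the square root of 10), which is why ranks are more stable for OEMs with larger networks.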

The biggest difference in rank is 6, for OEM 9 in the Service Manager Survey. Compared to the OEMs ranked below it, this OEM has a higher share of “Very Satisfieds”, but a lower combined share of “Very Satisfieds” plus “Somewhat Satisfieds” – so it ranks better on Top Box than on Top Two Box (which is a good thing). It is also the only case where the scoring makes the difference between “mid-pack” and “bottom of the barrel”. In general, the rank differences are greater in the Service Manager Survey, but at an average of 2.1 vs. 1.4 for the Parts Manager Survey, this does not lead to significant shifts in rankings either way.

So, is this it: scoring methodologies do not really matter? Not so fast. Consider this picture:

Obviously, there is a big difference here: to the left is “life”, with its constant change and variability. To the right is … well, not sure what there is and can’t really ask anyone. But at a minimum, there is not much to see, unless you like flat lines. So, what’s the picture created by the different scoring methods?

As we would expect in an industry that is mature, but still very much alive, we don’t quite see flat lines. In fact, the picture created by each scoring method is radically different: both Top Box and NPS show a significant difference between the High and the Low; Top Two Box and Average significantly less so. Look closely at the Average score chart: there is little difference between the eight highest scoring participants, almost half the participants, and it’s as close to a flat line as you can get.

Supply chain folks hate variability (they prefer flat, steady, predictable lines); research folks love it. Simply put, where there is variability, there is “life”, and the opportunity to learn by figuring out where you stand and where you need to go. Thus, scoring methodologies tend not to significantly affect the ranking of survey participants, but they do impact the score differences between participants, with some methodologies making the differences more visible and others, well, almost obliterating them.

Of course, applications do exist for methods with less variability. For instance, you may want to consider the Average or Top Two Box scoring methods if you tie performance targets and compensation elements to the survey scores. This is why we supply all box and raw data reports; participants can pursue the scoring methodology that best fits the purpose.

However, we prefer the Top Box method and use it as our official survey metric in North America. It acts as a “magnifying glass” that more clearly differentiates satisfaction performance. (As discussed, we don’t use NPS, as it is not really a satisfaction measurement.)

There are good reasons for our choice. When someone asks you “How are you doing?” you will most likely say “Good”, without even thinking about it. Most of us will not say “Very good” unless it has been a REALLY good day for us. Of course, this is what we REALLY want … and what we prefer over yet another boring regular, “good”, “flat line” day.

Take a look at the chart below from our 2013 Consumer Sentiment Survey, directed at vehicle owners. It shows the likelihood that a vehicle owner will return to the same dealer for service. The data is segmented by customers’ satisfaction with their most recent service event: 90% of the “Very Satisfieds” are “Very Likely” to go back to the same dealer vs. 63% of the merely “Satisfieds”. 27 points – quite a difference!

Yet, that difference disappears when you combine the “Very Likely” to repurchase with the “Somewhat Likely” in both the “Very Satisfied” and “Satisfied” bars: both come in at about 90%. You see, Top Two Box masks a large and important difference of 27 points, as well as the fact that going from “Good” to “Great” matters!
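The masking effect is simple arithmetic. Using the 90% and 63% “Very Likely” figures from the text, plus assumed “Somewhat Likely” shares that bring both bars to about 90% combined:

```python
# Repurchase-likelihood splits in percentage points; the "Very Likely"
# numbers come from the text, the "Somewhat Likely" numbers are assumed
very_satisfied = {"Very Likely": 90, "Somewhat Likely": 2}
satisfied = {"Very Likely": 63, "Somewhat Likely": 27}

top_box_gap = very_satisfied["Very Likely"] - satisfied["Very Likely"]
top_two_gap = sum(very_satisfied.values()) - sum(satisfied.values())
print(top_box_gap, top_two_gap)  # 27-point gap shrinks to 2
```

The 27-point difference in strong repurchase intent collapses to just 2 points once the two boxes are combined.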

By the way, dealer personnel behave the same way as their customers do; they want to be “delighted” and not merely be “Satisfied”. Let’s take a look at parts manager purchase loyalty, which can be seen as equivalent to consumers’ “Service Repurchase Likelihood” in the prior chart.

The chart below is based on our 2013 Automotive Parts Manager Survey and cuts parts purchase loyalty by overall satisfaction level. There is a substantial difference between the purchase loyalty of “Very Satisfied” vs. “Satisfied” parts managers – almost 1.5 points. Multiply that by tens or hundreds of millions, or even billions, of dollars in parts sales and you’ll see that this seemingly small difference matters … big time! And, because it does matter, our survey participants have decided to report satisfaction scores as “Top Box” (% “Very Satisfied”) instead of “Top Two Box” (% “Very Satisfied” plus % “Satisfied”).

To summarize, in a mature industry where big mistakes and huge performance gaps tend to be rare, it has already become (and will increasingly be) too easy to say you are “satisfied”. This matters even more as OEM participants take action based on the survey results and drive performance levels and satisfaction scores even higher. “Satisfied” becomes the new “Neutral”.

In other words, Top Box appropriately applies a stricter standard, by only looking at the “Very Satisfieds” who, conveniently, are also most likely the strongest promoters. (Note the similarity between the Top Box and NPS ranks in the tables above.) So, here is our answer to the question we posed above:
“Top Box or Top Two Box … or other metrics – that is the question.

We think 'tis nobler in the survey to score

By looking at Very Satisfieds only.”

Friday, May 2, 2014

A Tale about Tire Sales

I bought a new car last summer. The car came with nice 235/40R18 Goodyear Eagle F1 asymmetrical summer performance tires, which are great for tearing up country back roads in 90 degree summer heat. However, when the weather turns cold, these fantastic tires have the grip of hockey pucks. As the temperature dropped last fall, I decided to buy a separate set of wheels and winter tires.

I had seen a lot of advertisements for tires through the OEM dealer, including banner ads online and TV advertising, so I figured I’d give them a shot. Here’s what happened when I called them up:

Problem 1: I kept getting pushed back and forth between parts and service. I started in service, but since I wanted wheels to go with the tires, they pushed me over to Parts. When I explained to the parts guy what I wanted, he was able to quote me a price on the four wheels I would need. Then I got pushed back to service for a quote on the tires and mounting.

Problem 2: The dealer actually didn’t have tires that met my needs. In fact, the dealer was unable to locate a wheel and winter tire package for the vehicle through the OEM, and also unable to put together a custom package.

Problem 3: The dealer explicitly told me to go to the independent aftermarket to meet my needs.

So, following the parts counterman’s advice, I called up Discount Tire.

Differentiator 1: I was greeted in a polite and friendly manner by the clerk, who asked me what he could do to help. (How many dealerships’ parts departments answer the phone with “Parts, hold please”?)

Differentiator 2: The clerk was able to look up wheel and tire packages that fit my vehicle, and offered a number of options for both the wheel and tire.

Differentiator 3: The clerk was able to quote an out-the-door price, once I told him what I wanted. He let me know that the tires were in stock at a facing warehouse and would be arriving in three days.

So what are some of the things that the dealer and OEM should do to replicate the aftermarket tire experience?
  1. Availability. In this case, either the tire package didn’t exist, or the dealer did not know how to find and order it. When I went on the OEM website, I found that they did not offer a winter tire for this car, despite the fact that it came from the factory with summer-only performance tires. In contrast, Discount Tire was able to provide pricing and lead-time for several different options.
  2. Single point of contact who can handle the entire order. The service advisor should be able to look up parts prices or package prices and provide a quote to the customer. Bouncing between parts and service does not make for a good customer experience.
  3. Phone handling skills. Greeting the customer, asking what you can do to help and introducing yourself by name makes the customer feel valued.
Bottom line: OEMs have expended a lot of effort to pursue tire sales, but the system still doesn’t match the experience provided at the specialized aftermarket alternatives. If OEM dealers are serious about selling tires, they need to offer availability and customer service that can beat tire stores at their own game.