Monday, January 16, 2012

What Good Is the J.D. Power IQS for Picking Out a Good Car?

I got some interesting feedback last week on the blog. A consistent theme of that feedback was that Consumer Reports is not without sin. I thought I’d have a go at this one more time, but with a vivid example. Conveniently, Monday morning, Automotive News announced that the Hyundai Elantra had won car of the year at the Detroit Auto Show. That seemed top-of-mind enough for me.

Bottom line: Is the Hyundai Elantra any good? Like a typical digital customer, I dredged up a goulash of inputs. I thought I’d see how four different “perspectives” compared: (1) the panel of automotive journalists at the Detroit Auto Show, (2) J.D. Power IQS, (3) Consumer Reports, and (4) TrueDelta.

OK, I know that the “car of the year award” is very different from Power/Consumer Reports/TrueDelta evaluations, but you’d think that the car of the year would shine across a broad array of inputs. Consumers buy stuff based on a very complex set of inputs – quality, reliability, styling, cost, what other customers say, and the list goes on and on. I approached this from the left side of my brain, not the right.

The good thing about the North American Car of the Year is that it represents the professional opinions of 50 journalists – a big number – who spent significant time driving all of the nominated vehicles throughout the year (probably a week with each) and then had a final chance to drive the finalists a couple of months before the final vote. The Hyundai Elantra in 2011 was the second best-selling vehicle in the world, only about 10,000 units behind the Toyota Corolla at roughly 1,010,000 units.

It deserves to be car of the year, despite what J.D. Power IQS says.


The J.D. Power IQS was published in June and the study focused on three components: (1) design problems, (2) defect / malfunction problems, and (3) unspecified other problems. J.D. Power defines these as: “Initial Quality: Taken from the Initial Quality Study (IQS), which looks at owner-reported problems in the first 90 days of new-vehicle ownership. This score is based on problems that have caused a complete breakdown, malfunction, or where controls or features may work as designed, but are difficult to use or understand.” So, IQS is about quality and personal gripes.

It is hard to split hairs across some of these categories. The 2011 Elantra was all new and did not do particularly well in the IQS. In fact, I heard that it fell quite a bit from 2010. Why? Did Hyundai take a nosedive in quality? No, the survey problems were of the apples, pears, and oranges variety. Comparing the 2011 Elantra to the 2010 is comparing a car at the end of its production cycle (2010), where it had become somewhat commoditized, to a brand-new car (2011) that was in short supply and typically sold loaded with high trim levels and lots of options. Every OEM has had experience with this for about 100 years or so. That’s like comparing apples to pears. Now, comparing the 2011 “loaded” Elantra (about 85% sold with Bluetooth) with the 2011 Honda Civic (no Bluetooth) is like comparing pears to oranges. If a customer does not understand how to use Bluetooth, or is not terribly accepting of its peculiarities, well, there’s a lot that can be reported as having “gone wrong”.


The IQS score for the 2011 Elantra looks pretty much like a fruit salad. Compounding the confusion, the Elantra also had high conquest sales and brought new people to the brand. There can be a huge negative bias in “design problems” from customers who are new to the brand. This is easy to imagine. For example, what happens when someone comes from a brand whose seat controls are different from the Elantra’s? They don’t like the Hyundai seats. Owners who switch from other brands don’t like Hyundai windshield wipers because the switch is on a peculiar side of the stalk. If you are not used to this, it can show up as “difficult to use” … a design defect. It might take six months to adjust to the different location and appreciate the value of not having to take your hands off the steering wheel to turn on the wipers.

IQS is not without some value; it does identify some legitimate problems, although most OEMs already pick up most of these with internal surveys. It is a good tool to use with assembly plants, giving them a target to shoot for in improving their assembly quality. There really is no other independent benchmark for assembly quality.

I went online and looked up the Elantra in Consumer Reports (CR); they show steadily improving reliability and name it CR’s “top-rated small car”. Even though CR’s reliability history gives the Elantra top marks for all eight reported “trouble spots by year”, the CR ratings report card pegs it as only average for predicted reliability (looking further into the details, the reliability score was dragged down by average scores for body hardware, power equipment, and audio system).

The Elantra’s overall score is a composite of 50 different, and unidentified, “tests”. Basically, they bury you with detailed color splotches that have real numbers behind them, and average it all up into a malted-milkshake overall score. Consumer Reports has a relatively small staff of engineers who evaluate the vehicles. They have their own biases, such as what Ford experienced with MyFord Touch (Ford dropped from #8 in IQS to #23 because of it and the new dual-clutch transmission). CR seems to like big buttons/knobs and doesn’t like touch screens. They tend to like a sporty feel and not a soft ride.
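To make the “malted milkshake” point concrete, here is a minimal Python sketch. The sub-scores, the equal weights, and the handful of test names are all invented for illustration; CR does not publish its actual 50-test formula.

    # Hypothetical sketch of a composite "overall score". The sub-scores, weights,
    # and test names are invented; CR does not publish its actual formula.
    sub_scores = {
        "acceleration": 85,
        "braking": 90,
        "ride": 60,        # one weak area
        "controls": 95,
        "fuel_economy": 80,
    }

    weights = {name: 1.0 for name in sub_scores}  # equal weights, for illustration

    overall = sum(sub_scores[k] * weights[k] for k in sub_scores) / sum(weights.values())
    print(f"Overall score: {overall:.1f}")  # 82.0

    # The composite reads as a solid 82, and the weak 60 for "ride" disappears
    # unless the underlying numbers are published alongside it.

That is all the “overall score” really is: a roll-up that hides whatever detail sits behind the splotches.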

So where are we so far?

  1. Elantra won car of the year at the Detroit Auto Show.
  2. J.D. Power IQS ranked the Elantra as below average within the compact car segment (from eyeballing the circles). Reliability, measured by a separate J.D. Power survey, has the Elantra above average.
  3. Consumer Reports pegs the Elantra as its “top-rated small car” and shows that it has top score blotches for all 8 reported CR trouble spots. However, CR predicts the Elantra’s reliability as average. This is a different characterization than J.D. Power’s.
  4. TrueDelta is a great source of information that deserves serious consideration. It represents data from a vehicle panel 75,826 members strong, with about 23,000 responses to each quarterly survey. Not bad. TrueDelta shows the Elantra (http://www.truedelta.com/car-reliability.php?brand=13&model=119) as being very reliable. TrueDelta also posts representative member comments about their cars. Digital customers really value this.

One key thing to realize is that everyone asks different questions, and that different questions can legitimately yield different answers. With ANY survey, it's very important to know the actual wording of the questions.
  • J.D. Power: "Things gone wrong – Please mark below ALL problems you have experienced with your new vehicle.” They then ask about things that are broken or not working, controls that are difficult to understand or use, and controls in a poor location. I can understand that if something is broken, I can conclude it is poor quality. But if I simply don’t like where a control is located (e.g., the wipers), how can I conclude that this, too, is poor quality?
  • CR: "Did the car have any problems that you considered serious?” This sort of wording opens the door wide to owner bias; owners can honestly under-report problems if they like the brand or the car, because a problem did not seem serious enough to report.
  • TrueDelta: "Did the car have any successfully completed repairs?" TrueDelta currently focuses on this because they can be sure there was a problem, as the repair shop agreed there was a problem and was able to do something to make it go away. They also request that all repairs be reported, even minor ones. This minimizes the role of subjectivity, so their data isn’t impacted by differing perceptions and attitudes as much as the other two, especially Consumer Reports. As with the other surveys, routine maintenance and preventive repairs (including those related to recalls) aren't included in TrueDelta’s analysis. They also exclude computer re-flashes, as long as these are free to the car owner.
TrueDelta appeals to the purist in me. Their results, on average, are over nine months ahead of the others, and often over a year ahead. This is a key benefit. Do you want to know how reliable a car was a year ago, when it was a year newer, or how reliable it has been recently? For the Elantra, both J.D. Power and CR are measuring how well the Elantra did in its first few months. TrueDelta reports how it did during its first year, through the end of September and soon through the end of 2011. TrueDelta is the only public information source that provides the actual repair frequencies, and not just dots. The dots used by the others can make differences between models seem much larger than they actually are.
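As a rough illustration of that last point, here is a small Python sketch. The repair frequencies and the dot cut points below are made up, not anyone’s published data; the point is only that coarse buckets can exaggerate a modest difference.

    # Hypothetical illustration: two models with similar underlying repair
    # frequencies can land in different "dot" buckets. The cut points and
    # repair rates are invented, not any survey's actual data.
    def dot_rating(repairs_per_100_per_year):
        """Map a repair frequency to a coarse dot bucket (made-up cut points)."""
        if repairs_per_100_per_year < 15:
            return "much better than average"
        if repairs_per_100_per_year < 25:
            return "better than average"
        if repairs_per_100_per_year < 35:
            return "average"
        if repairs_per_100_per_year < 45:
            return "worse than average"
        return "much worse than average"

    models = {"Model A": 24.0, "Model B": 26.0}  # repairs per 100 vehicles per year

    for name, freq in models.items():
        print(f"{name}: {freq} repairs/100 vehicles/year -> {dot_rating(freq)}")

    # Model A and Model B differ by only about 8%, yet they fall on opposite sides
    # of a cut point and get different dots. Publishing the frequencies themselves,
    # as TrueDelta does, keeps the difference in proportion.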

Bottom Line: I can’t find much good in the J.D. Power IQS, because I can’t separate “quality” (which I internalize as reliability and durability) from personal preferences. If I were to care about other people’s personal preferences, I’d go to online customer reviews or depend on the auto rags. Using my Volt as an example, just about everything inside the car is different, and I can imagine how this could really disturb some customers. My guess is those folks are the typical ones who would invest 23.4 minutes of their time to take the survey. Not me. I want to be surprised, and I don’t gauge my surprise by some average surveyed “surprise” expectation. Now, I would like to know more about the Volt’s “quality”, but I can’t get it from the NY Times advertisements on who wins the J.D. Power IQS competitions.

I agree that the Elantra deserved to be car of the year. It is a very good car. I couldn’t get that out of Power IQS. J.D. Power IQS is like Netflix – huge product momentum that is, ultimately, incredibly fragile. I can poke a lot at Consumer Reports, but it, too, confirmed that the Elantra is a very good car.

Power seems to me to be “no brain”, whereas Consumer Reports is both left and right brain. TrueDelta is all left brain. It is about the facts. TrueDelta is what our emerging digital customers really want. They want a kettle full of left-brain facts so that they can make their own right-brain decisions.
