
Ins and Outs of Those Surveys

When J.D. Power & Associates recently announced a first-place tie between the Buick and Jaguar brands, overtaking Toyota and Lexus in its closely watched Vehicle Dependability Study, many industry observers were left scratching their heads.

“It's like I woke up in Bizarro Land this morning,” wrote one poster on an auto fansite. “Jaguar on top of ANY dependability study?!”

Others noted the poor finish for Toyota Motor Corp.'s Scion brand, which placed near the bottom of the rankings, while the Lexus and Toyota marques came in second and third, respectively.

Aren't Scion models heavily based on Toyotas sold overseas and assembled in the auto maker's prized manufacturing plants in Japan?

Similarly odd results in J.D. Power's other well-known survey, the Initial Quality Study, as well as in Consumer Reports magazine's Reliability Survey, have some asking whether surveys of auto owners are too subjective.

And with quality scores ever improving and the gap narrowing between good, better and best, are such surveys still relevant?

“I think it is fair to conclude the auto industry has not been helped much by these measures,” says Claes Fornell, a professor of business administration at the University of Michigan's Ross School of Business and overseer of the American Customer Satisfaction Index.

“The problem, in my view, is that they are not based on scientific methods and therefore have too much error and confuse cause and effect.”

Fornell's American Customer Satisfaction Index reports scores nationally on a 100-point scale and produces indexes for 10 economic sectors, 43 industries and more than 200 companies and government agencies.

For 22 years, J.D. Power's Initial Quality Study has measured how well vehicles perform during the first 90 days of ownership, while the Vehicle Dependability Study, in its 19th year, tracks how well vehicles hold up after three years.

Keeping its surveys relevant is important to J.D. Power, which periodically tweaks them, says Dave Sargent, vice president-automotive research for the organization.

J.D. Power also asks for more detail from respondents on its mail-in questionnaire than it did early on.

For instance, Sargent says the topic of wind noise was a small part of the first-generation IQS questionnaire. Now, “there will be a heading, ‘wind noise,’ and a place to mark wind noise on the driver-side front door, passenger-side door, the sunroof, etc.”

One of the biggest beefs against IQS hasn't been its lack of questions but rather the way J.D. Power presents the survey results to the public, combining what Sargent calls “hard” and “soft” problems.

Hard problems include parts that are “broken, not working, fallen off,” Sargent says. Soft problems are components that are difficult to operate or poorly located, or seats that are uncomfortable.

As auto makers have improved vehicles' mechanical integrity over the years, hard problems have been supplanted by soft ones in IQS, Sargent admits.

A prominent soft issue in recent years has been consumer dissatisfaction with BMW AG's infamous iDrive human-machine interface.

“People look at that and say, ‘Wow, BMW's making a bunch of crap cars, aren't they?’” says Michael Karesh, founder of TrueDelta.com, an online surveyor of vehicle reliability, and a frequent critic of both J.D. Power and Consumer Reports.

“Well no, it's just because of iDrive. It doesn't have anything to do with reliability at all,” Karesh says.

J.D. Power splits out the hard and soft problems in a more in-depth version of IQS, purchased by auto makers. Sargent says the onus then is on the OEMs to decide whether to make public how many of their issues are classified as hard or soft.

J.D. Power's for-profit status comes under fire regularly on the Internet, with many critics alleging the organization is cozy with auto makers, which regularly hire the firm for research projects.

Sargent says there are “Chinese walls” separating J.D. Power's automotive research unit, which he oversees, from the client-management and solutions groups, the latter working with the auto companies to help them improve their performance in surveys.

“I'm compensated (based) on how good is the research, not who wins,” Sargent says.

He says J.D. Power's clients use the survey information on “the assumption that it's absolutely accurate.” Sargent says if clients believed it didn't represent “the voice of the consumer, they would stop using it, and our whole business would fall apart.”

Unlike J.D. Power, Consumer Reports, published by the 73-year-old non-profit Consumers Union, crafts its survey for the consumer, not the OEM.

But as with J.D. Power, Consumer Reports' annual car Reliability Survey has been skewered by industry watchdogs.

The consumer survey amasses data “typically from readers, a non-random sample,” Fornell says of a possible methodology issue.

Critics question the validity of the reliability survey because the magazine draws its respondents from its import-oriented subscriber base. Consumer Reports officials say readers can't be pigeonholed.

“We found that we have large sample size on a lot of the domestic cars as well, so I don't know where that comes from,” says Anita Lam, automotive data program manager at the magazine.

From readers, Consumer Reports gets “an enormous” amount of data on domestic vehicles, namely Ford Motor Co.'s F-150 pickup and Chrysler LLC's minivans, says Jake Fisher, a senior automotive engineer for the magazine.

However, he admits the publication “might have a little more (import owners) than the average, it's true. Obviously people read us (and take our advice),” Fisher says, referring to the magazine's history of rating import auto brands above those of the Detroit Three.

Even though he has criticized Consumer Reports, Karesh doesn't think it is biased toward import brands, specifically the Japanese.

Yet, he says biases of subscribers can come into play when they respond to a well-known question from Consumer Reports: “Did you have a problem you considered serious?”

“Honest people can distort the results, because they honestly think their problem's serious because, at this point, they hate Ford,” Karesh says as an example. “So any problem Ford has feels more serious than it would otherwise.”

Yet, with his own surveys, Karesh often sees people downplay problems with their import-brand vehicles, because they perceive the problems as just a fluke. “They say, ‘It's not because Honda is this sloppy, evil company and they gave me a piece of crap,’” Karesh says.

“‘Honda gave me the best car they could, and this one just happened to be bad.’ I see that in emails to me all the time.”

What may be “serious” varies from one person to another, he says, and can change over time, which is why Karesh takes issue with Consumer Reports and J.D. Power asking people to recall problems with their vehicle over the past 12 months.

“It feels a lot more serious if (a problem occurred) last week vs. eight months ago,” he notes.

To prevent car buyers from having to search far back into their memory to recall possible maintenance issues, Karesh's email surveys go out once a month.

“People ask, ‘Why do you have to send it so often?’ Well, because most people can't remember,” he says.

Sargent says J.D. Power will do monthly surveys if asked by auto makers, using owner data they supply.

But in the case of the independent IQS, which culls participants from state vehicle-registration data, “by the time we have access to all the (registration) information, it's almost been 90 days,” he says.

The Institute for Social Research at the University of Michigan estimates 30% of the information gleaned from consumer surveys is “random noise.”

That happens when “a respondent misunderstands the question, deliberately gives false information, answers too quickly, and on the other side, the interviewer pushes the wrong button on the computer,” Fornell says of possible ways human error can skew survey scores.

Sargent and Consumer Reports' Lam and Fisher all say survey questionnaires are scrutinized for abnormalities.

J.D. Power also filters out questionnaires with “an excessive number of problems” marked, Sargent says.

Lam says Consumer Reports' survey department randomly pulls out and checks some of the data. She, for instance, disregards a form if someone has listed “an unusually low or high mileage” for their vehicle.

But the magazine did miss one big error involving the Honda Fit.

Consumer Reports asked subscribers to report on problems they had with their vehicle from March 2005 to March 2006. The subcompact went on sale in the U.S. in April 2006. A number of vehicle owners responded anyway.

Lam and Fisher say Consumer Reports planned to recommend the car nonetheless, based on Honda's history of “basically being 100% in terms of reliability.” With the survey data in hand, Lam says, the magazine decided to use it.

She admits the flub caused the magazine to change the survey period listed on the questionnaire to the “past 12 months.”

How do auto makers feel about the surveys? “A lot of people don't like them but they will never go on record,” says an ex-auto executive. — with James Amend
