As a follow-up to Online Reviews and Small Businesses, check out this posting about the J curve of ratings. The most salient paragraph is:
Across many clients in diverse industries this “U” curve turns out to be more like a “J” curve…almost a reverse “L”. The average rating across all clients is 4.3 out of 5 stars. The distribution looks like a J, where there are more 1s than 2s, but far more 4s and 5s than the lower ratings.
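As a rough illustration, here is a minimal Python sketch with hypothetical counts (not BazaarVoice’s actual data) of how a J-shaped distribution – more 1s than 2s, but far more 4s and 5s – still averages out around 4.3 stars:

```python
# Hypothetical review counts shaped like a J: more 1s than 2s, far more 4s and 5s.
ratings = {1: 120, 2: 40, 3: 80, 4: 300, 5: 900}

total = sum(ratings.values())
average = sum(stars * count for stars, count in ratings.items()) / total

print(f"{total} reviews, average {average:.1f} out of 5 stars")  # 1440 reviews, average 4.3
```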
It could simply be that mostly happy customers will fill in a survey, but it looks like there is little downside to enabling online ratings. The goal of online surveys is, frankly, not only to get the “truth” but also to spread the word.
I remember from my days as a teacher having students evaluate classes – most would give a 4 or 5 no matter how bad the teacher was. Thus the real measure of the evaluation was how close the average was to 5 rather than 4.
The J curve doesn’t surprise me.
It also doesn’t distinguish between reviewers who know what they are talking about and those who don’t.
People who don’t like something will walk rather than rate – that’s another reason for the J curve.
Guy,
Great to see you last night in Cleveland.
And you’re right… WHO does wear suspenders anymore?
-Ed
Well, it is not a surprise, but it is an interesting phenomenon considering the upsurge in social networks and other media that take people’s opinions as input.
I think the best way to discover what’s wrong with a business is to actually go in there and find out “What’s up?”
This article shares some similarities with the word-of-mouth marketing items from your 28 August post. For example, negative reviews stem from bad services rather than bad products, which would explain why the people who have heard about an incident are more likely to stay away; they fear there would be no justice should the product be defective. Likewise, a great product elicits raves from people who have experienced mediocrity from other vendors in the past. It seems that the conclusion is to offer the best products (the “wow” items) and then, should something go wrong, do everything to ensure that the customer is vindicated.
Hi Guy
Along the lines of what the last commenter said… this is as much a question-anchoring problem as it is a problem of which customers respond. I’m a survey researcher, and I attended a presentation recently which covered this issue (I’ve hunted, but cannot find a relevant link, sorry). The gist was that more useful, distinguishing information could be obtained by changing the response options from something like:
Extremely good
…
Extremely bad
To something like:
Way beyond expectations
Better than expected for a good firm
About what I expect from a good firm
Not as good as expected for a good firm
Way below expectations
The effect is to gravitate answers to the midpoint, so that the extremes actually serve their purpose: to discriminate between the really good and the really bad.
I would add to Ben’s point that another enhancement is to offer an even number of options, so that some degree of preference or dissatisfaction has to be given and the mid-range answer (the equivalent of “don’t know”) is thus avoided.
Six stars (or any even number over two) is a better option than five, I think, if you really want feedback. Except for ten – people think 5/10 is average, but it isn’t.
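A quick arithmetic check of that last point (a small Python sketch, not from the commenter): the true centre of a 1–10 scale is 5.5, so a “5 out of 10” actually sits below average, while an even-length scale such as 1–6 has no single midpoint to hide behind.

```python
# Midpoint of an integer rating scale: the mean of its possible values.
def midpoint(scale):
    values = list(scale)
    return sum(values) / len(values)

print(midpoint(range(1, 11)))  # 5.5 -> "5 out of 10" is actually below centre
print(midpoint(range(1, 7)))   # 3.5 -> a six-point scale has no whole-number midpoint
```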
Ok, before I comment on this: admittedly, I am somewhat biased about reports like this, as my firm does online reviews (surveys, polls, data collection, etc.) every day for clients domestically and internationally.
Based on my experience and training, getting people to respond, especially the “happy” ones, boils down to providing enough of an incentive for them to do so. Moreover, this doesn’t always mean offering a cash or prize reward. You must also answer the “what’s in this for me?” question before they ask you!
Despite what anyone says, whatever the subject of the online review, the most critical questions are the ones usually left out: “Would you recommend this company/product/service/tool/etc. to your family and friends, and would you personally use/buy these products/services/etc. again?”
Lastly, I do agree with one underlying theme of the author’s post. If the client is not conducting the online review for the right reasons, then the reviews are meaningless outside of potential PR benefits.
I also think it’s because most products/services today are actually surprisingly good once you choose them.
I think it comes down to: “Is great good enough?” I mean, there are so many great products out there that great products have become mediocre.
Seth Godin pointed this out years ago with his book “Purple Cow”.
I think people star mediocre products as well as remarkable ones – they just don’t talk about the mediocre ones.
I wonder if it would be more of a U shape if the reviews were anonymous. Perhaps people are less likely to give negative ratings, or to submit any ratings at all, if they know their identity will be revealed. E.g., how many times have you given your waitress negative feedback on those restaurant surveys that occasionally accompany the bill? I’m running a small experiment myself WRT rating people anonymously online (www.tomslist.net). There are only a couple hundred ratings so far and the distribution looks a bit more U-ish, but it’s still quite early. We’ll see how it goes.
I’m really confused about the presumptions being made with a “study” such as this one. We’re talking Real people, Real products/services, Real opinions (all TM :P ).
Assuming that the end product “should” be a U instead of a J presumes:
1) the business put out an equal number of bad vs good or consistently mediocre products
2) the business gives an equal number of bad vs good or consistently mediocre services
3) the people experiencing said services are equally in a bad vs good mood &/or mediocre mood
4) the people who are happy or unhappy with services are equally as likely to answer surveys
5) the people giving the services or providing the product have no idea whether, at any given moment, the services or product in question are in the customer’s best interest, of good quality, performing to expectations, etc.
I’ve taken statistics and experimental psychology. That’s enough statistics & psychology to know that the data is biased. Why?
1) any business even able to get off the ground and have enough customers to be worthy of surveying about must have a product or service that people want or need — straight from Guy’s lectures ;)
2) same business would have to be run by someone smart/savvy/diligent (or rich :P ) enough to get it off the ground in the first place
3) if the owner/managers are smart enough to keep it afloat long enough to get enough customers to warrant a survey, they may presumably also know that customer satisfaction = success.
Since the people giving the products and services are presumably TRYING to elicit good opinions about said products &/or services, the data is biased via the ATTEMPT to elicit a positive response. The products are biased towards people pleasing. The services are biased towards people pleasing. That’s the point of a real business.
It’s also not a double-blind survey. The response being elicited is a product of both the product/service being biased towards customer satisfaction AND the providers (the humans – managers, salespeople, customer service, the product design team – everyone) KNOWING the response they wish to elicit from the customer.
People are social creatures. If you’re genuinely hoping they’ll like you or your services, guess what? They’re more likely to like you or your services. Funny, that eh?
*********************
Criss,
The issue is that you’re assuming that the purpose of online ratings is to find the “truth” in an objective, scientific sense. That’s really not the case. The real purpose of online ratings is to foster a sense of community, enable people to praise/vent, and for the company to receive casual, informal, and unscientific feedback.
Guy
Then again, perhaps it has to do with conscious winnowing. We all self-censor, self-edit, look at what others say, and tweak our presentations a bit. Eventually, the net result is material or output which garners upward-biased results. Which is good for everyone, in this scenario.
Without a report saying what data this assertion is based on, it’s impossible to know if and when it’s true.
Interestingly, BazaarVoice, which put out this report, sells online reviewing software to businesses. On its web page (http://www.bazaarvoice.com/solution.html), it explains that businesses that use its software can screen which reviews they publish:
“You own 100% of your rating/review content. You stay in control and protect your brand.”, and
“Every submission is reviewed by professionals to ensure that only accurate, relevant, and appropriate reviews are posted to your site.”
From this, it seems that users can ask BazaarVoice to withhold negative reviews.
It would be interesting if the “1 to 5” rating system were changed to a “0 to 7” rating. If you have a group of “1” ratings, but at least one of the reviewers would have voted “0” if offered the option, this would stretch and change things a bit….
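To make that stretch concrete, here is a hedged Python sketch with made-up counts (reusing the hypothetical J-shaped numbers from the sketch above): the angriest reviewers drop from 1 to 0, the rest are spread proportionally over 0–7 and rounded to whole stars, and the score ends up further from the maximum.

```python
# Same hypothetical reviewers, two scales: 1-5 as today, and 0-7 with a lower
# option for the angriest reviewers and the rest spread proportionally.
def mean(dist):
    return sum(score * count for score, count in dist.items()) / sum(dist.values())

five_point = {1: 120, 2: 40, 3: 80, 4: 300, 5: 900}
seven_point = {0: 120, 2: 40, 4: 80, 5: 300, 7: 900}

print(f"1-5 scale: mean {mean(five_point):.2f}/5 = {mean(five_point) / 5:.0%} of maximum")
print(f"0-7 scale: mean {mean(seven_point):.2f}/7 = {mean(seven_point) / 7:.0%} of maximum")
```

Under these made-up numbers the same sentiment reads as 85% of the maximum on the 1–5 scale but only about 81% on the 0–7 scale.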