No one likes Yelp.
If you are not already familiar with Yelp, the crowd-sourced, review-based site has long been one of the web's venerable establishments. Its local pages brim with opinions, far too much data to sort through yourself, so the Yelp machine sorts the most appropriate reviews for you. Relevant, recent, and trusted reviews float to the top, and you read as many as you feel comfortable with.


Crowd-sourced reviews are arguably the most unbiased way of obtaining real opinions: trustworthy qualitative data from the common man or woman. But the way the data is sorted can actually create incredible bias, because there is no such thing as (*ahem*) a free lunch. Let me explain. These are restaurants (which are businesses) interacting with Yelp (also a business, and now publicly traded). Money changes hands so that Yelp's more than 900 employees can put food on the table (*ahem* again). Every profitable business needs a business model.
One of those businesses, the 125-year-old restaurant Fior d'Italia, known for its great food and service, did not see the need to use Yelp. It had an established clientele and refused to advertise on the site. The owner's complaint was that Fior had 218 posted reviews averaging two and a half stars, with many terrible one-star reviews on display. Unseen were the "filtered reviews" that would have averaged out to more than four stars. The only way Yelp staff could "help us" was if Fior would advertise with them.

How does a customer cut out the bias? Can a diner trust the data or not?

Funded by a grant from Google, a study turned website called RevMiner (the name a combination of Review + Miner) attempts to break Yelp's massive data down into manageable chunks. As an added beneficial consequence, I think it may have re-democratized the data as well. Jeff Huang, a PhD candidate at the University of Washington, set out to improve the smartphone app experience. In the research article, "RevMiner: An Extractive Interface for Navigating Reviews on a Smartphone," he first identifies the problem as the struggle to fit content built for the big PC screen onto a smartphone touchscreen.
The mechanics within RevMiner use Natural Language Processing (NLP) algorithms to summarize reviewer opinions into attributes (words that embody the restaurant). If you want to know why a keyword appears, a quick hover over the attribute explains itself. It is a clever way of taking all of the Yelp reviews and making them available at a glance. All of a sudden it's easy to see, through keywords, what is largely being said about a restaurant. Further still, color coding makes the association of what is "good" versus what is "bad" instantly recognizable. With this app, the qualitative data is suddenly searchable, browsable, and graphical, so RevMiner is much faster to use than Yelp. And there's real beauty in the fact that the review-mining machine removes the bias of review ordering. This truly is unbiased data at its most lucid, yes?
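To make that concrete, here is a minimal sketch in Python of the attribute-plus-sentiment idea. This is not RevMiner's actual code; the toy lexicon, the naive adjective-noun pattern, and the function names are my own stand-ins for illustration.

```python
import re
from collections import defaultdict

# Toy sentiment lexicon; a stand-in for the learned polarity scores a real
# system would use. Values range from -1 (awful) to +1 (great).
LEXICON = {
    "amazing": 1.0, "great": 0.8, "good": 0.3,  # note: "good" is barely positive
    "slow": -0.5, "rude": -0.8, "terrible": -1.0,
}

# Naive pattern: an opinion word directly before a noun-ish word,
# e.g. "great pasta", "rude waiter".
PAIR = re.compile(r"\b(" + "|".join(LEXICON) + r")\s+(\w+)")

def extract_attributes(reviews):
    """Collect polarity scores per attribute across all reviews."""
    scores = defaultdict(list)
    for review in reviews:
        for opinion, attribute in PAIR.findall(review.lower()):
            scores[attribute].append(LEXICON[opinion])
    # Average per attribute, so each keyword gets one color-codable value.
    return {attr: sum(vals) / len(vals) for attr, vals in scores.items()}

reviews = [
    "Amazing pasta, but the service was slow.",
    "Good pasta and a rude waiter.",
]
for attr, score in extract_attributes(reviews).items():
    color = "green" if score > 0 else "red"  # the at-a-glance color coding
    print(f"{attr}: {score:+.2f} ({color})")
```

Run on those two toy reviews, this prints "pasta: +0.65 (green)" and "waiter: -0.80 (red)"; the hover-to-explain feature would simply keep the matched review snippets around instead of discarding them after scoring.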
On such a scale, the word "good" equates to barely positive. A review might say the food was good, but the service or the atmosphere sucked! I've noticed that I actually take this into consideration when I see a Yelp endorsement sticker on the outside of a business. Truly, I do not know what to expect, and I walk in fully aware that the atmosphere is a tossup. I once attended a Yelp Elite event as a +1 (accompanying my legit Yelper friend and taking advantage of the free hors d'oeuvres). The event was held at a total dive bar, well loved on Yelp. Indeed, the typical Yelper likes getting a lot for a little (perhaps similar to a Groupon customer?). "They love us on Yelp" is an award that must be taken in context. The Yelp star rating system has a meaning in itself.
There are some self-proclaimed limitations of the RevMiner app. Transforming qualitative reviews into quantitative data is not a perfect carryover. Perhaps what the RevMiner study calls out best are the inefficiencies in the Yelp model. In answering the customer's main question, "What do I want to eat?", RevMiner solves for efficiency but may lack the effectiveness that Yelp's opinions bring. Within the study there are several iterations, including different attribute priorities, word clouds, and color bars for ratings based on keywords, all tested against Yelp to find what users preferred. In general use, participants preferred RevMiner in 45% of cases, versus 30% for Yelp. For a conjunctive query (one using more than one attribute), RevMiner earned a 62% preference versus Yelp's 27%. So there is something there. Perhaps the two can meet in the middle: Yelp streamlined, RevMiner made more engaging, to answer the question we all ask ourselves multiple times per day, every day: "What do I want to eat?"