Barley Mowat 

The Problem With Beer Ratings


As some of you might have noticed, I’ve started putting up beer reviews. I’ve avoided doing just this very thing for a long time because, much as with Most Things In General, I have a problem with beer ratings.

It all boils down to this main issue: people are not objective. We spend a lot of energy prancing around the issue and claiming to solve the problem through various ingenious inventions, but ultimately people suck at rating things.

If I were to take a bucket of my very own homebrew swill, pour it into three bottles marked “OK Spring,” “Driftwood,” and “Chuck’s Discount Swillery,” and then ask you to rate all three, you would rate each one very differently.

Like it or not, your impression of the label sets an expectation in your mind, and you adjust your review accordingly. It’s not just brand bias: beer from a bottle labeled “Driftwood” actually, truthfully, tastes better to you than the same beer sopped up with a mop and wrung out into the glass in front of you (it’s a clean mop… made of hops).

Okay, so let’s remove the labels; this is where it gets interesting. You see, it doesn’t really matter WHAT is in the bottles. You’re going to give the beer a 7/10, or a 4/5, or a 20/25. Sure, maybe not every time, but that will be your average, despite the very real fact that an “average” beer should trend towards 50%. Statistically, you won’t deviate very much either, and when you do deviate, your mood at the time of rating will have a much, much greater effect on the result than the actual beer.


Ironically, beer salted with tears is actually quite delicious, but no one has realized it yet

Don’t believe me? Take a look at the recent ratings from two professional reviewers, Joe Wiebe (@thirstywriter) and Jan Zeschky (@jantweats). Sure looks like a lot of 4/5 and 20/25. Whoa, all the way down to 3.5/5? That beer must have sucked. 21.5/25? Best Beer Ever.

I don’t want to take anything away from Joe and Jan. They’re rating beers for publications that demand a simple number, and rating beer is freaking hard. Plus, it’s hard to balance not wanting to rate too highly (and lose credibility with people who don’t like that beer) against not wanting to rate too lowly (and risk the publication not running the review for fear of insulting advertisers). In their shoes I’d likely have the exact same results. It’s not the people that are at fault; it’s our desire to put a number on things.

It isn’t just beer, either. The wine world has been struggling with this problem for a while, too. Ever seen a wine rated less than 80/100? Me neither. This is so prevalent that now some reviewers have switched to rating wine out of 20… only to see their average reviews move up to compensate.

It’s almost as if we don’t want to use the lower end of the scale because we’re afraid of offending the beer (or wine, or restaurant). However, the reality of the matter is that there ARE shitty wines and beers out there, and there ARE products deserving of 1/5, 1/10, 1/25 or even 1/100.


Or even 1/100, as the case may be.

This is why I like RateBeer for beer ratings, as imperfect as it is. It lets all the humans fuss about whether a beer is a 3.80 or a 3.85 out of 5 (or maybe even a 4.05), and then it applies cold, hard statistics to rank all beers by percentile. That horrible, horrible beer we inexplicably gave 2.5/5.0 to? RateBeer boils that down to a much more honest 15/100.
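To make the percentile idea concrete, here’s a minimal sketch of how a pile of tightly clustered averages can be re-expressed on a 0–100 percentile scale. The beer names and numbers are made up, and this is not RateBeer’s actual algorithm (which presumably does something more sophisticated with rater weighting and sample sizes); it just illustrates the general idea.

```python
# Minimal sketch: convert clustered average ratings into percentile ranks.
# Hypothetical data and function names; not RateBeer's real method.

def percentile_ranks(avg_scores):
    """Map each beer's average rating to the percentage of other beers it beats."""
    ranks = {}
    for beer, score in avg_scores.items():
        beaten = sum(1 for other in avg_scores.values() if other < score)
        ranks[beer] = round(100 * beaten / (len(avg_scores) - 1))
    return ranks

# Raw averages all huddle between 2.5 and 4.2 out of 5...
scores = {"Swill Lager": 2.5, "Decent Pale": 3.6, "Fine Stout": 3.8, "Great IPA": 4.2}
print(percentile_ranks(scores))
# ...but the percentiles spread them back out:
# {'Swill Lager': 0, 'Decent Pale': 33, 'Fine Stout': 67, 'Great IPA': 100}
```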

And now to bring all this back to me and my beer reviews, so I can wrap this up and go have a Friday pint. You might notice that my reviews contain no ratings. RateBeer has that covered; if you want a single number to impossibly summarize the complexity that is a particular beer, go there.

Do you want my opinion on what’s going on with this beer? Read my review. I will slip in a Barley Mowat Seal of Approval, but that is not a rating. That’s just how excited I, personally, am about this beer. Bronze is “yeah, I’ll buy it again if I see it.” Silver is “I will go out of my way to purchase this” and Gold is “If you are holding the last bottle of this, I will slit your throat and take it. And then kick you in the jewels, because that’s just the kind of guy I am.”

The vast majority of beers, though, get nothing, not because they’re awful (although some truly are), but rather because they don’t excite me… no, not like that… okay, maybe a little like that.

Written by chuck

November 2nd, 2012 at 3:42 pm

Posted in Beer and You


5 Responses to 'The Problem With Beer Ratings'


  1. Two things:

    1) Published reviewers don’t rate all beers. There aren’t many reviews of Bud Lite Tea or whatever, even if that’s a bit (understandably) biased.

    2) Brewers aren’t making beer at random–when they make a bad beer, they try to improve. Or, they create Bud Light and just produce the same beer forever. A failing grade is below 50%, but 50% of beers are not undrinkable, even though 50% are inherently below average.

    RicardoB

    2 Nov 12 at 19:25

  2. This is something I’ve wrestled with since I started my blog, Chuck.
    I considered not giving ratings in my reviews because of their complete subjectivity, for the many reasons you already give: my personal taste, my mood that day, where I’m drinking the beer, how little sleep my daughter’s given me the previous night (cf. “mood”), etc. I’ve never pretended that my reviews are anything other than completely subjective.
    But in the end I felt a rating was necessary as a kind of entry point for readers. I try to make my blog as accessible as I can for the layman and beer geek alike, and a rating makes for a nice, tidy summation of my experience with the beer.
    I guess “experience” is the crucial word here, as my reviews and ratings comprise my impression of my experience(s) with any given beer, much like what I would do with a review of a pub, restaurant, beer dinner or event.
    I’ll be the first to admit that there’s a problem with the rating scale I use in that it’s gauged to every kind of beer out there, not just craft beer, which is what 99% of my reviews are about.
    So Coors Light would probably come in at 0/5 or 0.5/5, while most craft beer scores at least a 3/5 (though I have gone as low as 2.5).
    But I think a genuine factor behind the many 4/5 ratings is that a lot of the beer I review I find to be genuinely excellent, which I think is a reflection on the quality of beer brewed in B.C.
    A couple of final clarifying points, regarding the “balance” you mention in rating: I don’t feel my ratings are influenced in any way by peer pressure from other craft beer fans — I don’t really give a crap if anyone mutters about a high rating I gave to an unpopular beer; and the ratings I give are in absolutely no way influenced by advertising pressure. My blog is my own domain, so to speak.
    Your system is probably the best solution to the whole ratings dilemma and if I had a chance I’d switch to something similar (sidestepping any trademark issues, of course…). But that’d mean going back and editing 100+ reviews and I’m time impoverished as it is.
    Your post? I’d give it a 4/5 😉

    Jan

    2 Nov 12 at 22:03

  3. @RicardoB Yup. People do tend to leave out the crappy beer. Several beer reviewers have commented to me that they simply don’t post bad reviews out of fear of being blackballed by the brewers.

    @Jan Thanks for the great comment. I don’t really have much to add 🙂 If you’d like to use my SOA graphics, have at ‘er, or we could feed the talented Gardy a bunch of beer and see if she’ll make a Jan’s Dolphin of Good Beer…

    chuck

    3 Nov 12 at 14:32

  4. I never considered reviewing beers publicly before urbandiner.ca asked me to start doing so in the spring. Prior to that, if an article I was writing called for me to make a judgement about a beer I would do so, but not with any sort of numerical rating system.

    Then, when I started writing for urbandiner.ca, I had to adopt their format, which is based on five categories, each scored out of 5, for a maximum of 25 points. I took a look at what previous reviewers had done, and you’re right, Chuck: they rarely dipped below 3 and rarely topped 4.5, with most ratings ending up around 3.5.

    I thought about how I would approach it and decided that it was similar to what I had to do when marking papers in the Freelance Writing course I taught for 5 years. A mark below 50% could potentially result in a student failing the course, so an assignment had to be an abject failure to get such a grade. (I still handed out that grade from time to time!) That translates into anything below 2.5/5 being a fail. Here’s how I look at each numerical grade:

    2.5 – acceptable: the baseline for craft beer
    3.0 – satisfactory though certainly not spectacular
    3.5 – showing some positive characteristics or at least some promise
    4.0 – very good
    4.5 – exceptional
    5.0 – buy a case!

    In a general sense, I don’t think any craft beer produced in BC should receive a grade lower than 15/25. Some beers might deserve a lower rating, but given the quality of the overall industry and the perceptiveness of consumers, I doubt any self-respecting craft brewer would try to sell a product that was that poor. I certainly wouldn’t consider it craft beer if it fell below that grade, and likely, I wouldn’t bother reviewing it.

    On the other end of the scale, I believe that some beers do deserve a 5/5 in certain categories, or even, potentially, a 25/25 overall. If nothing can reach that grade, then why include a 5 on the scale? It’s not to say that I don’t believe I will ever taste a better beer—I hope I will—but in the meantime, why not celebrate an incredible beer? The highest rating I’ve given on urbandiner.ca so far was the 22.5/25 I gave the Central City Imperial IPA (http://urbandiner.ca/2012/06/19/central-city-brewing-imperial-ipa/), and even that one could have been higher, since I marked down the Overall rating to 4/5 because I felt it was overpriced at more than $10/bomber. Had I given it a 5/5 in that category, it would have hit 23.5. I gave this year’s Sartori a 21.5/25 (http://urbandiner.ca/2012/09/27/driftwood-sartori-harvest-ipa-2012/), and I’ll admit I weighed that score against the Central City score because I felt that Sartori was a notch lower overall.

    As for the issue of the ratio of good-versus-bad reviews, I’ve been reviewing books for 15 years or so and I’ve also taught reviewing in my freelance writing course so I have thought a lot about it. I have written scathingly bad reviews, but not many, and I imagine if I looked at all the reviews I’ve written over the years, the majority would be considered positive. Part of that is because I often request certain books by authors I know I like, and usually those books get a deservedly positive review. But occasionally I have been disappointed with a book I expected to like and in those cases I have written a negative review, usually making the point that it didn’t meet my expectations based on the author’s previous work(s).

    Another factor affecting the good/bad ratio is the editorial stance of the publication. In the case of my urbandiner.ca reviews, I am free to write whatever I like, but one newspaper books editor I used to work with would trim negative reviews considerably, arguing that he didn’t want to waste the limited space on his page. Occasionally, he would choose not to run a negative review at all. But in my experience, that is rare. Most books editors I work with publish my reviews without touching them, regardless of whether they are positive, negative, or middling.

    When I teach freelance writing, I always include a lesson on reviewing and make my students write a review. I teach them that they must take an opinionated stance and back it up. They have a responsibility to their reader to do so. Why else would someone read the review? One can easily find and read a description of the thing being reviewed—on the back of the book or the label of the beer. Many students struggle with this and end up writing a rave review that doesn’t really stand up to scrutiny. Or, if not a rave, they write a bland, middling description with no real opinion behind it, which a reader will quickly get bored of and stop reading. Most students tell me they don’t enjoy the assignment—it’s a challenge, for sure—but every once in a while, it strikes a chord with someone. One or two of my students have become regular reviewers for newspapers or online publications.

    And as for being concerned about what a brewery might think if I wrote a negative review, well, that comes with the territory of writing reviews. I accepted this long ago when I began reviewing novels while I was trying to write novels myself. What if I turned off an editor or publisher who would later hold my own manuscript’s fate in his/her hands? I am writing a guidebook to BC craft breweries now and I have the same responsibility there to write honest assessments for my readers. Obviously, most of the book will be positive – otherwise there would be no logical need for the book. Again, it comes down to thinking about my reader first and foremost, not the brewery and what they will think.

    I enjoy reviewing but I do find it challenging sometimes, and I certainly don’t take it lightly. I’d be interested in hearing what readers think. Do they value a numerical rating or would they prefer a general assessment?

    Joe Wiebe

    6 Nov 12 at 10:12

  5. @Joe and @Jan : Wow. Thanks for the great insight guys; as with my post on the CAMRA Awards and beer judging, the comments are again far more valuable than the article itself.

    chuck

    6 Nov 12 at 10:55
