The Journal of Wine Economics has just published a study by Robert T. Hodgson titled "An Examination of Judge Reliability at a major U.S. Wine Competition." The reported findings should provide fodder for about 10,000 wine blog articles over the next few weeks.
The study tracked the ability of wine competition judges to replicate the scores that they gave to wines (during blind tasting competition) at the California State Fair. The study found that (emphasis is mine):
...judges were perfectly consistent... about 18 percent of the time. However, this usually occurred for wines that were rejected. That is, when the judges were very consistent, it was often for wines that they did not like...

Let the blood-letting commence!
I fear that the media will take hold of this and start to sound the death knell for the ability of so-called experts to taste and rate wines (again), or use it to reinforce the already arguably unfavorable view that wine appreciation and competition are the height of snobbery.
Neither is true, and this study does little to bolster either point. Why? Because wine tasting is, at its heart, a subjective exercise.
The study is clear about its intentions, which were not to shake up the world of wine competition, but to "provide a measure of a wine judge’s ability to consistently evaluate replicate samples of an identical wine. With such a measure in hand, it should be possible to evaluate the quality of future wine competitions using consistency as well as concordance with the goal to continually improve reliability and to track improvements associated with procedural changes..."
To understand why this study doesn't ring so true with me, I need to give you a little detail on the mechanics of the study:
When possible, triplicate samples of all four wines were served in the second flight of the day randomly interspersed among the 30 wines. A typical day’s work involves four to six flights, about 150 wines... The judges first mark the wine’s score independently, and their scores are recorded by the panel’s secretary. Afterward the judges discuss the wine. Based on the discussion, some judges modify their initial score; others do not. For this study, only the first, independent score is used to analyze an individual judge’s consistency in scoring wines.

In summary: the judges weren't consistent when faced with tasting hundreds of wines in a day, and their revised scores (based on panel discussion, which can have a huge impact on how you would evaluate a wine) weren't used.
If the study proves anything, I think it shows that trying to judge hundreds of wines in a day is a first-class, non-stop ticket to palate fatigue, even for experienced wine judges.
Now that I think about it, blind tasting is so notoriously difficult that I give the judges in this study credit for being consistent almost 20% of the time. That would be a respectable hitting percentage in baseball (not sure... I don't follow baseball actually)...
While the media may latch onto this one, the study hinted that there is a modicum of possible salvation for the madness surrounding wine competitions in general - not by way of wine judges, but by way of the ultimate judges of wine: the Consumer.
...a recent article in Wine Business Monthly (Thach, 2008) conducted as a joint effort by 10 global universities with specialties in wine business and marketing found that consumers are not particularly motivated by medals when purchasing wine in retail stores. If consumer confidence is to be improved, managers of wine competitions would be well advised to validate their recommendations with quantitative standards.

Interesting conclusion. And a hopeful one.
(images: legaljuice.com, wine-economics.org)