Wine Scoring & Marketing Measurement

Tuesday evening I was browsing the latest edition of the LCBO’s ‘Vintages Release Catalogue’. This catalogue provides descriptions and sometimes wine critics’ quality scores for the new wine products about to be released through Vintages stores in Ontario. As I browsed, two thoughts came to mind.

Firstly, I noticed that most of the scores in this catalogue were between 88 and 92 on a 100-point scale. It struck me that this suggested the majority of the wines in this catalogue were of very similarly high quality, with almost all wines rated within a narrow five-point band. I found that odd, perhaps unrealistic, and decided to think about it. Secondly, I noticed that the wine descriptions were making me thirsty.


Seeing the wisdom in choosing the beverage that best suited the task at hand, I poured myself a glass of red to complement my thinking, sat down with the catalogue and made a few calculations and notes. Here are some highlights.

  • Vintages published scores for 57 of the 120 wines in this catalogue. The wine critics quoted used the 100-point scale for 48 of the 57 rated wines; the other nine were rated on 20-, 5- or 3-point scales.
  • Of the 48 scored on the 100-point scale, 41 (85.4%) received a score between 88 and 92, and 30 of those scored either 90 or 91, which confirmed my first observation. The remaining seven wines were rated above 92, leaving no scores below 88.

Taking a sip from my glass, I contemplated why so many wines received such similar scores, and how all this relates to marketing measurement. Here are a few thoughts:

Wine Scoring: The LCBO is in the business of selling wine and I suspect they have a policy of only publishing scores of 88 or higher. I tested this theory by looking at the two previous Release Catalogues and wasn’t able to find a wine scoring 87 or lower. Perhaps they’ve learned that lower scores reduce sales and so don’t publish scores below 88.

Marketing Measurement: Marketers are in the business of spending money effectively to drive positive business outcomes. Instead of measuring only the best marketing programs or those you might want to cast in a favourable light, measure and rank all programs so you can identify which are most and least effective, and then optimize future strategies accordingly.

Wine Scoring: By my rough count, the 48 scores using the 100-point scale were sourced from 26 different wine critics. While each used a 100-point scale, I have a hard time believing all 26 used the scale in exactly the same way. I also suspect that some critics are more generous with their scores than others, like my calculus teacher in CEGEP. The other important issue is that scoring wine is a highly subjective exercise. It isn’t at all uncommon for two or more tasters to disagree on a wine’s quality and the corresponding score. Experts have different opinions on subjective matters.

Marketing Measurement: To minimize inconsistencies, reduce or eliminate subjectivity and personal bias from your measurement processes. Having 26 experts using similar but sometimes different methods of scoring your marketing programs based on their personal opinions would not be a recipe for consistency. One person needs to lead your measurement efforts using one methodology that your organization understands and supports.

Wine Scoring: I did a little reading on wine scoring and discovered that wine critics can be inconsistent in the scores they award to the exact same wine on different occasions. For example, the influential wine critic Robert M. Parker has apparently pointed out that he sometimes assigns different scores to the same wine at different tastings, but that those scores tend to be no more than 3 points apart. It seems that differences in tasting conditions and the taster’s emotions can lead to different scores. To address this, I believe Robert Parker publishes average scores when multiple tastings produce different scores.

Marketing Measurement: Consistency is important in making comparisons meaningful. Pick one methodology that can be used consistently across all programs. Consistency should help you to avoid having all your scores cluster within a narrow range where differences may not be significant, or actionable. Programs can differ significantly in their effectiveness at meeting your objectives, and so their scores should reflect those differences. Also, if data for one metric is collected at various times, or from different sources, you might want to follow Robert Parker’s lead and use an average score for that metric.

Advice for Wine Drinkers: Don’t worry about the difference in quality between a wine that scores 88 and another that scores 92. Both are high-quality wines, and the difference in scores may come down to who tasted the wine, under what conditions, and that taster’s preferences. Here’s the fun part: through trial and error, you should eventually be able to determine which wine critic your tastes best align with, and then the ratings and tasting notes from that critic will help you to make better wine purchasing decisions.

Advice for Marketers: Similarly, there will be some trial and error involved, but not nearly as much fun. Select a measurement methodology that you can apply fairly, without personal bias, and consistently across all programs. Be disciplined about measurement and it will ultimately highlight which marketing programs best meet your objectives and create value for your business. That will help you to make better marketing decisions, which you may wish to celebrate by opening a bottle of your favourite wine!

Apples, Oranges and Bananas

A funny thing happened yesterday on my way to the refrigerator.  I was working from home.  It was mid-afternoon and time for my snack.

I rose from my desk, went downstairs and walked my appetite into the kitchen, but stopped short of opening the fridge door.  I paused, wondering what to eat.  With the Christmas eating marathon still fresh on my mind, and around my waist, I was looking for a healthy snack, likely a piece of fruit, but which one?

My choice of available fruit came down to an apple, an orange and a banana.  I considered my options.

  • Apples: They are high in pectin, a fibre with a long list of health benefits, the flavonoids reduce diabetes risk, and they taste refreshing.
  • Oranges: The antioxidants offer protection from all sorts of disease, the vitamin C supports the immune system, and they taste great.
  • Bananas: The potassium lowers stroke risk, the vitamin B6 keeps the nervous system in top shape, and they are more filling than the other two.

Hmmm…  They’re all good, I thought, but in different ways.  While as fruit they have their similarities, they are each designed to meet different objectives.  How do I compare them?  How do I choose?

Naturally, my first big decision of 2011 reminded me of the problem marketers face when trying to decide which of a group of marketing programs is most effective.  Deciding which piece of fruit or marketing program is most effective depends heavily on my objectives related to eating, or on the marketer’s objectives related to each program.

One of the challenges in comparing Marketing Program (or fruit) A to B to C is that each has different objectives.  That means the right metrics for measuring one program might be quite different from the metrics for measuring the others, which makes comparison very difficult. As they say: it’s like comparing apples to oranges.

To make this comparison easier, you need to focus on comparing how effective each marketing program is at doing whatever it is supposed to do. Let’s start with the last six words of that sentence.

Step 1:  Decide which metrics to use. Answer two simple questions about each program:

  • Who are you targeting?
  • What do you want them to do?

For example, consider the different metrics you might use to measure:

  • A public relations campaign to raise awareness among non-customers
  • An email program to incent loyalty and improve customer retention
  • An online contest to add email addresses to your customer database and incent referrals to non-customers

Step 2:  Level the playing field. This is the part where you compare the relative effectiveness of programs measured with different metrics:

  • Create a standard scorecard for your business. This becomes your template.  Your scorecard needs to have the flexibility to measure all types of marketing programs, and accommodate all types of metrics.  For a simple program you might need 5 to 10 metrics, whereas for a complex one you might need 30 to 40.
  • Customize your template to create a scorecard for each program. Some metrics will appear on each program’s scorecard, while others will vary from one scorecard to the next, given that the programs each had different objectives.
  • Score each metric according to how it performed vs. its objective (actual ÷ objective × 100%).  This is the pivotal step that converts all metrics into one common unit, in this case a percentage.  Working with a common unit lets you score each metric and total the scores on each scorecard.

That last step is critical: it enables you to compare programs measured with different metrics.  Instead of figuratively looking at apples, oranges and bananas and trying to figure out which is better, you’re now just looking at fruit, with a simple, comparable rating for each. Rank them, and you’ll know which programs were best and worst at meeting their objectives and delivering the results you wanted.
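To make that concrete, here is a minimal sketch in Python of how the conversion and ranking might work. Everything in it is hypothetical: the program names, metrics and numbers are invented for illustration, and where the step above totals the scores, this sketch averages them so that scorecards with different numbers of metrics land on the same footing.

```python
# A minimal sketch of the scorecard idea. Program names, metrics and
# numbers are hypothetical; averaging (rather than totaling) the scores
# is my own assumption, so scorecards of different sizes compare fairly.

def metric_score(actual: float, objective: float) -> float:
    """Convert one metric into the common unit: percent of objective."""
    return actual / objective * 100.0

def scorecard_score(metrics: dict[str, tuple[float, float]]) -> float:
    """Combine a program's metric scores into one overall score."""
    scores = [metric_score(actual, objective)
              for actual, objective in metrics.values()]
    return sum(scores) / len(scores)

# Each program is measured by its own metrics, as (actual, objective) pairs.
programs = {
    "PR awareness campaign": {
        "media impressions": (1_200_000, 1_000_000),
        "aided awareness lift (pts)": (4.0, 5.0),
    },
    "Email retention program": {
        "open rate (%)": (22.0, 25.0),
        "12-month retention (%)": (81.0, 78.0),
    },
    "Online contest": {
        "new email addresses": (9_500, 8_000),
        "referrals to non-customers": (620, 1_000),
    },
}

# Rank the programs by how well each met its own objectives.
ranked = sorted(programs.items(),
                key=lambda item: scorecard_score(item[1]),
                reverse=True)
for rank, (name, metrics) in enumerate(ranked, start=1):
    print(f"{rank}. {name}: {scorecard_score(metrics):.1f}% of objectives")
```

Run as-is, this ranks the three invented programs by the share of their objectives each achieved; swap in your own scorecard data and the same comparison falls out, even though no two programs share a metric.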

To solve my little dilemma yesterday, I suppose I could have created a Fruit Measurement Scorecard, based on my specific eating objectives at that moment, to give me a way to rate and rank three different pieces of fruit, but that would have been a bit weird.  OK, a lot weird.  Anyway, I was hungry; there just wasn’t time.

Oh, if you’re wondering which fruit I chose, without a scorecard to assist me, I caved and ate the last piece of blueberry pie.  Hey, those blueberries are loaded with antioxidants!