Counting or Measuring?

I recently attended a marketing industry event in Toronto where a presenter made a statement about marketing measurement that got me thinking and eventually inspired this month’s newsletter.

While talking about current trends in marketing, the speaker identified marketing measurement as one of the top issues currently facing marketing executives. He talked about how technology and big data have become pervasive in marketing, while also suggesting that as an industry we have yet to figure out how to measure marketing properly and make sense out of all the data.

This got me thinking because I realized an aspect of this point of view was consistent with the mixed messages I have been noticing in marketing measurement circles. Measurement is a hot topic and a growing industry. Yet, for all the talk and activity, and the apparent progress we’ve made, there still seems to be a strong point of view out there that marketers aren’t measuring as well as they would like.

Measurement is Everywhere

On one hand, it seems as though wherever I go, whatever I read, whoever I talk to, measurement keeps coming up. With all the buzz about data and analytics tools, you would think that everyone is measuring effectively.

I also keep hearing about new and/or improved marketing analytics tools and approaches. This is especially true regarding digital, social and mobile marketing as there is so much innovation happening in these channels. As new platforms emerge with new ways for brands to interact with customers, so do new data gathering and analytics tools to measure the interactions on those and other platforms.

We’re Not Measuring Very Well

On the other hand, while there certainly is a lot of talk about measurement, and probably a lot of action, I’m sensing that few companies are truly happy with their measurement efforts. There is excitement about the availability of so much data, yet there are also growing challenges related to the complexity and cost of sorting out all that data and clarifying the degree to which marketing is working.

Social media provides a great illustration of the notion that measurement is everywhere but we’re not measuring very well. There are plenty of social media platforms with which to engage consumers, and dozens, if not hundreds, of tools for gathering data about those engagements.

Yet, despite all the data that is available, I keep hearing suggestions that we’re not measuring very well. In the same week earlier this month, a digital marketing agency president told me that no one has figured out how to measure social media properly, and a marketing researcher told me her clients don’t measure social media, they just keep trying an assortment of things in the hopes that enough of it will work.

Why the Mixed Messages?

So, why all the talk but seemingly so little satisfactory action? I’m starting to think part of the problem is that we don’t all define measurement the same way. Let’s look at a dictionary definition of measurement.

  • to ascertain the extent, dimensions, quantity, capacity, etc., of, especially by comparison with a standard: to measure boundaries.
  • to estimate the relative amount, value, etc., of, by comparison with some standard: to measure the importance of an issue.

Contrast that with a dictionary definition of counting:

  • to check over (the separate units or groups of a set) one by one to determine the total number; add up; enumerate: He counted his tickets and found he had ten.
  • to include in a reckoning; take into account: There are five of us here, counting me.

Are you Counting or Measuring?

Maybe a lot of the measurement that is happening out there is really just counting, and maybe those counting are the ones feeling the most discomfort. To be fair, there’s no shame in counting well, which involves gathering data accurately, reliably and consistently. Proper counting is an essential step that comes before measurement. Good quality data makes proper measurement possible.

Still, those who are mostly just counting, even if they are gathering great data with the latest analytics tools, are likely not learning enough from their “measurement” efforts. Pure counting is not nearly as actionable as measuring against something, such as an objective or a benchmark like a comparable marketing program.

Measurement delivers its greatest benefits when it enables you to understand which marketing programs are creating the most value for your company and, therefore, how best to deploy your marketing budget in the future. Good quality measurement makes better decision making possible.

So, what action can you take if you are feeling frustrated by marketing measurement? Start by questioning whether you are counting or measuring. Look at the important data you have about any marketing program. Taking one metric at a time, ask yourself questions like “How good is that number?” and “Compared to what?”

If most of the time you don’t have a good answer, you may well be stalled at the counting stage. On the other hand, if you can assess how good a metric is vs. an objective or some sort of standard, or if you can rank programs according to an overall rating, then you are measuring. If you are able to make better decisions about how to allocate your marketing budgets and that leads to better business outcomes, then you may already be well ahead of the pack!
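If it helps to see the difference spelled out, here’s a tiny Python sketch using entirely hypothetical program names and numbers. The first calculation merely counts; the second measures the same data against each program’s objective and ranks the programs.

```python
# Hypothetical numbers for illustration only: the same data first counted,
# then measured against each program's objective.

programs = {
    "Spring Email Campaign": {"new_customers": 420, "objective": 500},
    "Summer Social Push":    {"new_customers": 310, "objective": 250},
}

# Counting: an accurate total, but with no way to judge how good it is.
total = sum(p["new_customers"] for p in programs.values())
print(f"Counted: {total} new customers overall")

# Measuring: each result compared to its objective, then programs ranked.
scores = {name: p["new_customers"] / p["objective"] for name, p in programs.items()}
for name, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: {score:.0%} of objective")
```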

Predictions and Marketing Knowledge Succession

On January 15th, I attended Deloitte’s Technology, Media and Telecommunications Predictions 2013. The presentation was delivered at The Carlu in Toronto by Duncan Stewart, Deloitte Canada’s Director of Research.

I enjoy attending events like this for two main reasons. The first is for the content, as it helps me to stay on top of developments and trends that impact my clients and the environment in which they operate.

The second reason is for the networking and to possibly meet new people or bump into a familiar face or two. One of the familiar faces I saw after the event belonged to my friend and fellow consultant, Rob Coatsworth. After a quick hello, we decided to go chat over a coffee.

One topic of our conversation was knowledge succession, which is one of Rob’s areas of expertise. Knowledge succession helps organizations to capture, retain and pass on the experience-based knowledge that individual employees accumulate, rather than have the knowledge leave when employees leave the organization.

While knowledge succession is hardly a new challenge, I found our conversation thought provoking because I think the challenge is now greater than ever, and I believe this is especially true for marketers. Here are some reasons why:

General Environmental Factors

Multiple Jobs Per Career. The days of working for one company your whole career and getting the gold watch upon retirement are long gone. Employer/employee relationships are less loyal and have evolved from being career-long marriages to serially monogamous relationships. At some point, either or both parties decide that it’s time to move on and try something new.

Short-Term Financial Pressures. The financial markets exert tremendous pressure on publicly traded companies to hit their quarterly financial targets. Senior executives whose compensation is tied to those targets often reduce headcount and salary expense in order to hit them. Well-paid, long-service employees, who have generally accumulated the most knowledge, become attractive targets for layoffs because of the expense that can be saved.

Aging Baby Boomers Will Retire. While the impact of this has been delayed by the volatility in financial markets, a large number of highly experienced, long-service employees are at an age where they would like to retire if they could, and will when they can, taking their knowledge with them.

Marketing Environmental Factors

Marketers Change Jobs Frequently. Employers value marketers with a range of experiences working on different brands in different categories for different companies. That encourages marketers to keep changing jobs to drive up their marketability and market value.

Accelerating Marketing Complexity and Speed. Marketing is changing rapidly, with innovation, technology and media fragmentation driving much of that change. There are more ways to communicate with consumers and many more touch points along the path to purchase. The pace at which marketers have to execute campaigns leaves them little time between campaigns to measure and learn from past efforts.

Measurement and Marketing Knowledge Succession

Organizations that want to improve their marketing effectiveness need to learn from their successes and failures. That means they need to measure and learn which campaigns are most and least effective, so they can adjust their strategies going forward.

Without an organizational approach to capturing and retaining the lessons learned, it is left to individual marketers to do their own learning. So long as they stay, the organization benefits, but if they decide to leave, the knowledge leaves with them.

The reality is that employees leave and unless companies find ways to learn along with the employees, they are just training their competitors’ future marketing talent. A big step in the right direction is to measure all marketing.

Here’s a five-step plan for Marketing Knowledge Succession:

  1. Commit to learning. Broaden your focus on executing marketing programs efficiently and effectively to also include learning from those programs to enhance future strategies and tactics.
  2. Embrace measurement as the key to learning what works and what doesn’t for your brands. Track your success at meeting each campaign’s objectives.
  3. Adopt a measurement methodology that can be applied consistently across brands, programs and time. A consistent methodology will give you benchmarks and a basis for rating and ranking programs.
  4. Measure the things that matter. Choose metrics that indicate whether your marketing is driving profitable customer behaviour and creating value for your business. Keep in mind those Key Performance Indicators that matter to financial markets, owners, investors and business managers and find a way to connect your marketing measurement to the health of the business.
  5. Keep good records. Make sure you have a way to securely collect, store and share the results of your measurement efforts. You’ll be building a knowledge database that current and future marketers in your organization can use to refine their strategies and execute more effective programs (a simple sketch of such a record follows this list).
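To illustrate step 5, here’s a minimal sketch, in Python and with hypothetical field names, of the kind of consistent record a marketing knowledge database could hold for each measured campaign.

```python
# A sketch of the kind of record step 5 describes, with hypothetical field
# names. Capturing every measured campaign in one consistent structure keeps
# the learning with the organization rather than with individual marketers.

from dataclasses import dataclass, asdict
import json

@dataclass
class CampaignRecord:
    brand: str
    campaign: str
    year: int
    objectives: dict        # metric name -> target
    results: dict           # metric name -> actual
    overall_score: float    # from your consistent scoring methodology
    lessons_learned: str

record = CampaignRecord(
    brand="Brand X",
    campaign="Back-to-School Promotion",
    year=2013,
    objectives={"coupon_redemptions": 10000, "revenue_lift_pct": 5.0},
    results={"coupon_redemptions": 8700, "revenue_lift_pct": 6.2},
    overall_score=7.5,
    lessons_learned="In-store signage drove most redemptions; email underperformed.",
)

# Store records like this centrally (a shared file, database or intranet)
# so current and future marketers can query and compare past programs.
print(json.dumps(asdict(record), indent=2))
```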

Knowledge succession has a medium- to long-term focus, but committing to it also provides short-term benefits. To be able to aggregate and pass knowledge along, you first have to capture it.

To learn what works and doesn’t work in marketing, you have to measure it, and what you learn can pay dividends on your very next campaign. That will benefit both marketers and their employers, today and in the future.

This is not a prediction, unlike those presented by Deloitte, but I do firmly believe that organizations who commit to marketing measurement and knowledge succession will have a brighter future.


Christmas Giving

I used to think that Christmas was all about the gifts, especially the receiving of gifts. As a kid, I remember the excitement of flipping through the Eaton’s and Sears catalogues and telling my parents which gifts I wanted Santa to bring me. My Christmas spirit was all about what was in it for me!

Of course, as I grew older my focus on gifts gradually shifted from receiving to the thrill of giving great gifts. Seeking out those great gifts became a big part of my getting into the Christmas spirit.

For a number of years, my siblings and I shared a fun tradition of going to our local mall on the afternoon of December 24th to pick up last minute gifts and stocking stuffers. Part of the fun was seeing who we’d bump into at the mall, like my high school gym teacher, Mr. Martin. I didn’t have the wisdom to ask him but I suspect he was there every year to see who he could bump into. Maybe that was one of his Christmas traditions and a way for him to get into the spirit.

With time, I’ve come to realize that much of my Christmas spirit comes from the traditions I’ve shared with my family. For example, we have many eating traditions; tourtière and French onion soup for Christmas Eve dinner, croissants on Christmas morning, the big turkey dinner, and of course a vast selection of desserts suitable for any meal or between meals!

A fun tradition emerged quite by accident one year when my mom gave each of us four kids a flexible, pliable little Santa Claus. We quickly realized we could entertain ourselves and each other by bending and twisting Santa into a new position, and then giving him a new name. I’m not quite sure how it started but it’s likely that my brother Jim had something to do with it. Here are a few favourites:

You get the idea. The possibilities are endless. This year, I have decided to print Christmas cards for my three nephews featuring these and other versions of Santa in the hopes it might inspire them to develop some silly traditions of their own.

I still enjoy giving gifts and in that spirit, I’ve got one for you. All you have to do is ask for it.

As a reader of this newsletter, you are probably aware that I advocate using a scorecard to measure your marketing effectiveness. I’ve developed a new version of my scorecard and I’d like to know what you think of it.

I’d like to give you a generic template of my scorecard so that you can customize and use it in your business to measure your marketing. I’ll also include an example to help you understand how to use it. I’ll send it to you as an Excel file so it will be easy to work with.

The scorecard is not designed to be a stand-alone product but rather part of a larger measurement process, so you may need a little help to get started. I’d be happy to provide some guidance by phone or whatever method makes sense.

In exchange, I’ll ask you for some feedback to see if I’m on track with this new version of the scorecard. In general, here are the kinds of things I’ll want to learn about:

  • How was the overall experience of working with this scorecard?
  • How do you feel using the scorecard has helped or could help to improve your marketing effectiveness?
  • Is this approach to measuring marketing suitable for your business?
  • How could the scorecard be improved?
  • What would stop you from using it?

I’m open minded about where our follow up discussion might go and what we might each learn in the process. I think that we should both benefit from this and that we might learn something unexpected.

If you’d like to receive your Optiv8 Christmas gift, email me at rick@optiv8.com and I will send you the scorecard. Please make sure to include your contact information so I can follow up with you in the new year.

I look forward to hearing from you. In the meantime, I’ll be making Christmas cards for my nephews!

Leap of Faith

I have to admit that marketing measurement was the last thing on my mind as I watched daredevil Felix Baumgartner take his extraordinary leap of faith from a balloon at an altitude of 128,100 feet. The only thing on my mind was that Felix was clearly out of his.

Few things in life are certain, but I am quite certain that I could never do anything like that jump. It would require a bravery possessed by very few people on the planet. I’m not one of them, nor am I out of my mind, at least that’s what I think!

As I thought more about whether Felix might actually be out of his mind, I decided that while he definitely was brave, he probably wasn’t crazy. I also realized he needed something more than his bravery to make such a jump.

I consider Felix’s plunge towards the New Mexico desert a leap of faith because, before he could hop off that ledge into a four-plus-minute free fall, Felix absolutely had to believe in three things:

  • His Team – That they knew what they were doing and wouldn’t let him down.
  • His Technology – That his spacesuit would protect him and that his parachute would open BEFORE he hit the ground.
  • Himself – That no matter what happened, he could handle it, such as pulling out of a wild spin before blacking out or dying.

Without these beliefs, I’m pretty sure Felix wouldn’t have jumped, as the risks and the price of failure would have seemed insurmountable. One mistake, one miscalculation or one malfunction could certainly have killed him.

By contrast, a decision to measure marketing is considerably less risky and dramatic than a decision to jump to earth from the edge of space. Still, it can seem daunting to leap into marketing measurement as there are some risks, including that you might:

  • Not learn anything that helps you improve your marketing
  • Waste precious resources, like money and time
  • Expose the fact that some of your marketing is ineffective

While these are legitimate concerns, there are lessons from Felix’s leap of faith that we can apply to marketing measurement which also help to mitigate those risks.

Believe in what you’re doing: There are many ways to approach marketing measurement. What matters is to commit to a methodology that you can execute consistently. If your organization can commit to an approach and stick with it, then you greatly improve your chances of success. Much of what you will learn will come from applying one approach across all forms of marketing spending.

Get all team members on the same page: Successful teams focus on common goals. Everyone needs to understand and agree on your reasons for measuring, on what you’re trying to learn and on how you define measurement success.

Get the help you need: Support your measurement efforts appropriately, with the people, time, expertise and funding you need. You may have sufficient internal resources or you may need to supplement those resources with external help. It’s tough to take that leap if you think you will be out there on your own.

Remember that it’s a journey: Your efforts to develop effective measurement practices will likely be a long journey with a lot of small victories along the way, and probably a few mistakes, too. The full experience of that journey with all the victories and mistakes is where you’ll learn what you need to know to succeed. The things you’ll learn along the way will often pay dividends immediately, like helping to identify and eliminate ineffective marketing programs.

I can imagine that Felix overcame many obstacles in the years, months and days leading up to his big jump. The spectacular success of his jump was not so much a single event as it was an end point in a journey, and while it may be an end point for Felix, it is also a key milestone in an ongoing journey for science and space exploration.

Those who succeed at marketing measurement make a commitment to the journey and begin that journey believing they have what it will take to overcome obstacles, mitigate risks and achieve success. They also know that by making sure they have the right stuff for measurement – a blend of people, expertise, technology and methodology – they can believe in their journey and take their own much less risky leap of faith.

Taste of Marketing Measurement

Last month I attended ‘Taste of the Danforth’, my favourite summer event in Toronto. Now in its 19th year, this weekend-long event along a 3 km stretch of Danforth Avenue typically attracts around 1.3 million people. I live nearby, attend almost every year and generally eat too much while I’m there.

I planned to meet a friend on the Danforth Friday evening but when the forecast called for rain, we called off our plans. The forecast turned out to be wrong and as I sat at home on a fairly dry Friday night, I wondered just how many other people didn’t go to Taste of the Danforth due to the threat of rain and how much that might have reduced sales for each restaurant participating in the event.

Marketing Effectiveness

I also thought about how event organizers seemed to have been very effective at creating awareness for the event. I saw a lot of local media coverage and I noticed how Taste of the Danforth was listed in every what’s-going-on-in-Toronto/Ontario listing I saw in print and on-line.

Marketing effectiveness measurement tends to focus on whether specific program objectives were achieved, such as attracting and keeping profitable customers and creating value for the business. Yet, as I was reminded by sitting at home when I should have been out eating too much, marketers can do everything right and still end up with bad results due to factors outside of their control, such as bad weather (or a bad forecast).

Event Objectives

I’m guessing the event’s main marketing objectives are to generate awareness, attendance and trial of both the area and of individual restaurants, and to also increase post-event traffic and revenue for local merchants.

While I sat home that night, I watched a segment of a local newscast about Taste of the Danforth in which a restaurant owner told the interviewer that this event generally makes his year. I imagine that some restaurants might aim to sell enough souvlaki in three days to possibly pay for a renovation or to simply make enough that weekend to stay in business for another year.

Objectives like this point the way towards some of the metrics I would include on a scorecard to measure the effectiveness of the marketing, but there are other factors to consider.

Those Pesky External Factors

For this event, the number one external factor outside of marketing’s control that can impact its success is the weather. I imagine that the event organizers must make the appropriate sacrificial offering (lamb would seem appropriate) to the Greek god (Zeus?) most responsible for weather. A hard rain could severely reduce attendance for an evening or whole weekend.

The Measurement Dilemma

There are three general approaches to choose from regarding external factors.

1. Ignore Them

This option will always be as tempting as all the delicious foods one finds at Taste of the Danforth. I can’t imagine how I’d ever determine how many people didn’t go to Taste of the Danforth that Friday night because they thought it might rain, or how many sticks of souvlaki didn’t get sold as a result.

2. Model Them

This option is worth considering when you have a lot of data. If the organizers have 19 years’ worth of data correlating daily attendance with weather forecasts and actual rainfall, then that would be a start. Still, for most marketers, the costs of sophisticated models and analytics can quickly become too high relative to the size of the marketing expense they’re meant to measure.

3. Track Them

I think it is well worth tracking any external factor that could impact results, such as weather, competitive activity and labour disruptions. I arbitrarily score each external factor on a five-point scale, where the low end of the numerical scale corresponds to “very negative” and moves up through “somewhat negative”, “no impact”, “somewhat positive” and “very positive”.

I keep this very unscientific scoring of external factors separate from the rest of the scoring I do on the factors that seem to be within marketing’s control and on the results that can reasonably be attributed to the marketing. I don’t muddy the waters by including the external factors in the calculation of the overall score, but I do note them and score the severity of those factors.
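For readers who like to see the mechanics, here’s a minimal sketch in Python of that separation, using hypothetical metrics and scores. Only the factors within marketing’s control feed the overall score; the external factors are simply noted on the five-point scale.

```python
# A minimal sketch of the approach described above, with hypothetical metrics
# and scores. External factors get a 1-5 rating so they are on the record,
# but they are deliberately left out of the program's overall score.

EXTERNAL_SCALE = {1: "very negative", 2: "somewhat negative", 3: "no impact",
                  4: "somewhat positive", 5: "very positive"}

program = {
    "name": "Taste of the Danforth (hypothetical year)",
    # factors within marketing's control, scored against objectives (out of 10 here)
    "controllable_scores": {"media coverage": 9, "event listings": 8, "signage": 7},
    # external factors noted separately on the five-point scale
    "external_factors": {"weather": 2, "competing events": 3},
}

overall = sum(program["controllable_scores"].values()) / len(program["controllable_scores"])
print(f"{program['name']} - overall score (controllable factors only): {overall:.1f}/10")

for factor, rating in program["external_factors"].items():
    print(f"Noted, not scored: {factor} = {rating} ({EXTERNAL_SCALE[rating]})")
```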

The value of tracking external factors comes when you analyze a group of marketing programs, such as past years of Taste of the Danforth. Imagine looking back and seeing great year-to-year variations in attendance and not knowing in which years it rained all weekend, or there was a transit strike.

Without tracking external factors, it would be easy to come to the wrong conclusions about the effectiveness of specific marketing programs. It would be hard to decide whether specific programs should be repeated or changed and also to learn which tactics were the most and least effective.

Always track those factors outside of your control and the degree to which they may have helped or hindered your results. That will at least give you a taste of what else might have been going on at the time that may have impacted your results.

Down In The Alley

A few years ago, the couple who live across the alley behind my house decided to host a small gathering on their driveway for those of us living nearby on either side of the alley. A few homes had sold recently and they decided that this would be a good way to welcome the new neighbours and help everyone get to know each other a little better. It was fun, and this annual tradition lives on 4 or 5 years later.

This year’s edition of the alley event happened last Saturday afternoon. One of the things I enjoy about these neighbourhood gatherings is having the time to learn about each other in a relaxed setting.

On Saturday, I was having a good chat with my exceedingly well-named neighbour, Rick. He asked about the nature of my work and so I gave him a bit of an overview about how I measure marketing effectiveness. That quickly led to him asking me this question:

“What did you think of General Motors’ announcement that they were pulling their Facebook ads?”

My first reaction was that GM presumably felt their ad spending on Facebook wasn’t working. As we continued, our conversation shifted to the timing of GM’s announcement, mere days before Facebook’s highly anticipated initial public offering. We concluded that something must have gone wrong in their relationship with Facebook for GM to announce their decision at such a sensitive time.

After Rick, who is an actor, entertained the local kids by juggling while walking on stilts, I went home, considered our conversation, did some research and organized my thoughts.

What I Found & My Thoughts – #1

In case you missed it, GM announced they would eliminate $10 million of advertising spending on Facebook. This still leaves another $30 million which they spend on their Facebook marketing initiatives, although I don’t believe any of that spending becomes Facebook revenue.

Clearly, GM thinks there’s an audience on Facebook worth engaging through marketing, but not so much for advertising, at least not yet.

What I Found & My Thoughts – #2

The $10 million is a drop in the bucket compared to GM’s 2011 total US ad spending of $1.8 billion ($3 billion globally), and Facebook’s 2011 revenue total of $3.7 billion, most of which was for advertising.

Smart marketers who spend $3 billion annually on advertising almost certainly also measure the effectiveness of that spending pretty rigorously. It is a natural part of the process to question, evaluate and optimize all parts of that spend on an ongoing basis, and the Facebook ad spend would be subject to that scrutiny.

What I Found & My Thoughts – #3

It has recently been reported that Facebook and GM are back in talks to renew GM’s advertising and that GM is asking Facebook for more data to bolster their measurement efforts.

Perhaps the problem was not so much that GM’s Facebook advertising didn’t work, but rather that GM couldn’t prove whether or to what degree it did, or didn’t. I also wonder whether GM’s pre-IPO announcement was a negotiating tactic to get the data they want from Facebook.

What I Found & My Thoughts – #4

I noticed that following GM’s announcement, their rival Ford tweeted something to the effect that Facebook ads are effective when used properly. Let’s assume the people at Ford are also pretty smart and measure rigorously, too. By implying they know their ads are effective, their tweet also implies they are better than GM at measuring Facebook ad success, and thereby raises some related questions:

  • Does Ford use Facebook ads differently and in a way that makes measurement easier?
  • Is Ford better than GM at setting measurable objectives for each ad?
  • Does Ford already get better Facebook data than GM?
  • Was Ford’s tweet just an attempt to position themselves as smarter than GM?

We can’t know the answers to these questions, but we can remind ourselves of a few marketing measurement fundamentals:

Set clear and measurable marketing objectives: To know whether a marketing program worked, you have to first define exactly what it would mean for your program to “work”. In other words, what outcomes would make you happy?

Your objectives must be reasonable and attainable: A clearly defined objective isn’t necessarily attainable. A good outcome can still fall well short of an unreasonable objective, and be classified as a failure, when in fact the failure was in the setting of the objective.

You need to be able to get the data you need, consistently, reliably and cost-effectively: This may be at the crux of GM’s discussions with Facebook. GM may know exactly where they want to go with their Facebook ads, but they just can’t tell if they’re getting there, which, when you’re behind the wheel of a $10 million ad spend, is sort of important.

It will be interesting to see whether GM and Facebook can reach an agreement. My guess is that GM won’t want to walk away from advertising to Facebook’s massive and targetable audience, particularly if it seems their competitor(s) may be having success in this regard. Maybe GM just needs to know if they’re meeting their objectives and whether their Facebook ad spend has them driving on a six-lane superhighway, or somewhere down in the alley.

Happy or Not Happy?

Last week I spoke on Marketing Measurement at an event called ‘Effective Marketing in a Digital World’. During the pre-event networking, I ran into a friend I hadn’t seen since he and his wife had their first child.

Naturally, our conversation revolved around his daughter and he showed me a photo of her that he keeps on his phone. As I remarked on her beautiful blue eyes (all credit goes to his wife) and how cute she is, he also pointed out what a good baby she is and how she doesn’t cry too much or too loudly.

Then, perhaps influenced by the topic on which I was about to speak, we started joking about how his daughter rates quite highly on two important metrics for measuring baby quality: cuteness and crying volume. We decided one could use these two metrics to categorize all babies into one of four quadrants of a matrix, as follows:

  • Quadrant #1 – Very Cute babies that cry quietly
  • Quadrant #2 – Very Cute babies that cry loudly
  • Quadrant #3 – Not Very Cute babies that cry quietly
  • Quadrant #4 – Not Very Cute babies that cry loudly

Of course, few parents would put their babies into the 3rd or 4th quadrants, but assuming some did, here’s how parents in each quadrant might feel:

#1 – Pleased to have a quiet and very cute baby
#2 – Hoping baby will grow out of this ‘Loud Crying’ phase, but thankful for the cuteness.
#3 – Hoping baby will grow out of this ‘Not Very Cute’ phase, but thankful for the quietness.
#4 – Hoping for improvement on both characteristics.

We laughed about the inappropriateness of categorizing babies this way, agreed to get together soon and continued networking with others.

The next day, as I reflected on the great people I met and the conversations we had, I recalled that silly matrix conversation. Then I remembered how I had once devised a similar matrix to categorize all marketers by their measurement efforts and whether they were happy with those efforts. In this case, the four quadrants were:

  • Quadrant #1 – Companies who measure marketing and are happy with their measurement
  • Quadrant #2 – Companies who don’t measure marketing and are happy they don’t
  • Quadrant #3 – Companies who measure marketing and are not happy with their measurement
  • Quadrant #4 – Companies who don’t measure marketing and are not happy they don’t

Unlike the baby matrix where most parents would say they are in the 1st or 2nd quadrants, I think a lot of companies would say they are in the 3rd or 4th quadrants. This is not surprising as marketing measurement cannot be done perfectly and so there is always a way to improve.

It can be helpful to decide in which quadrant your company sits, and why, as this can lead to improving your marketing measurement. Let’s look at a few key characteristics of companies in each quadrant:

Quadrant #1

  • Spending enough on marketing that they need to evaluate and manage that spending.
  • Learning what they need to know to improve marketing decisions and business results.
  • Spending appropriately on measurement relative to the size of their marketing budget.

Quadrant #2

  • Those with small marketing budgets have little or nothing to measure.
  • Those with larger marketing budgets who don’t measure are either making great instinctive marketing decisions based on limited information, or they may just be unaware of their ineffectiveness and any missed chances for improvement.

Quadrant #3

  • Measuring but not learning enough to improve marketing decisions.
  • Current measurement efforts may be inconsistent or sporadic.
  • May not have a standardized approach, making it difficult to compare individual program results to benchmarks and other programs.
  • Might be overspending on measurement, making it too big a percentage of their marketing budget.

Quadrant #4

  • May not be achieving their business objectives and are feeling pressure to better manage their marketing spending to that end.
  • May not measure due to a shortage of resources, such as time, money, people and expertise.
  • May lack clear, measurable marketing objectives that facilitate effective measurement.
  • Measurement may seem too daunting to undertake, given the increasing complexity of marketing and customer decision-making processes and, possibly, a resulting perception that the only suitable approach to measurement must also be complex and therefore too costly.

Where Are You Now and Where Do You Want To Be?

I know, that’s a little vague. I’m not looking for answers like “I’m at the office and I want to be at the cottage”, although that is a very good answer. I’m wondering which quadrant your company is in currently, and whether that’s good enough for you.

If you’re already happily in Quadrant #1, congratulations, you can leave now for the cottage! However, if you’re in one of the other quadrants and you’re spending a significant amount of money on marketing, you may still have some work to do before you pack your SUV, particularly if you’re not achieving your business objectives.

One of the points I made in my presentation is that to be effective at marketing, you have to do four things well:

  1. Research: Insights about markets, competitors, customers, etc.
  2. Strategy: For the business, your brands and how you will go to market
  3. Execution: The marketing programs that help you find, develop and keep profitable customers
  4. Measurement: To know if your strategies and executions are delivering

One of the main benefits of measurement is the ability it gives you to make improvements to your strategies and executions. If you are not measuring, or if you are not happy with your current measurement efforts, there is a solution, and that is to build an effective measurement process. If you need my help, I’ll be at the cottage.


Inputs & Outputs

One of the challenges in writing a monthly newsletter is writer’s block. It generally hits me in one of two ways. Either I have no idea what to write about, or I have an idea, but no story or setting for the idea.

I have two approaches to deal with writer’s block. I find that going for a walk in nearby Monarch Park is a great way to clear my head and then somehow the ideas come to me. Finding a story or a setting for my idea can be harder. Something has to happen so I can connect the idea to a story. Usually, I need to read something or get out and do something. Through interacting with a new person or situation, a story sometimes emerges.

Monday evening, faced with neither an idea nor a story for this newsletter, I ventured out to a McGill Alumni event at the Carlu where I could mingle and meet people. Among those I met were two relatively recent graduates (relative to me, that is) with whom I had a very enjoyable, wide ranging conversation. Unfortunately, nothing in our conversation triggered an idea or a story for this month’s newsletter, although I was happy to learn about “The Undercover Economist” Tim Harford, whose writing I’m already enjoying.

On my way home, I thought about other people I’d met lately and then the idea came to me. I realized how a discussion a couple of weeks ago with a highly skilled and experienced market researcher related to how marketing scorecards are an effective way to organize diverse types of data.

We discussed how the various things that can be measured about marketing are either inputs, the things that influence the desired customer behaviour, or outputs, the results of that customer behaviour. This concept can be very helpful in determining how to organize the marketing metrics on your scorecard, and in deciding how to weight them within your overall scoring system. Let’s look at some examples.

Marketing Input Metrics

First of all, there are two broad categories of inputs: those you control and those you don’t. Inputs under your control are generally related to how well you execute the program you are measuring. Examples could include:

  • The percentage of the in-store displays or signs you printed and distributed that were actually and properly put up in store
  • The percentage of all the promotional labels or neck tags your merchandizing partner actually affixed to your products
  • The number of and cost per impression of all your on and off-line marketing communications related to this program

Inputs outside of your control that might impact the success of your program could include:

  • Competitive activity – they dropped or increased their price, promoted heavily while your program was in market, had a PR disaster on Twitter, etc.
  • Weather – no one showed up at your well promoted event because of a massive snow storm

Marketing Output Metrics

There are also two types of outputs, but they are defined a little differently. The first are those outputs or results that are directly attributable to your marketing program. Examples might include:

  • Number of unique visitors to a landing page on your website built for this program
  • Click through rate from your landing page to the buying page
  • Number of new customers who bought using your promotion codes

The other type of output covers results that are potentially, but not definitely or entirely, attributable to your program. These are typically key business performance metrics that can be influenced by a variety of inputs. Examples might include:

  • Revenue for the brand being promoted
  • Market share of that brand
  • Average price per unit sold during the program

Grouping your metrics in this logical fashion on your scorecard can make it easier for you to select your metrics and make decisions about how to weight them by group. Inputs directly under your control and outputs directly attributable to your program should be more heavily weighted than outputs potentially attributable to your program. This is especially true if you tend to have a lot of programs in the market simultaneously. Whatever weightings you use, be consistent over time to ensure you can meaningfully compare programs to each other.
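Here’s a minimal sketch, with hypothetical metrics, objectives and weights, of how that grouping and weighting could be calculated.

```python
# A minimal sketch of the grouping and weighting idea, using hypothetical
# metrics, objectives and weights. Each metric is scored as attainment versus
# its objective, metrics are grouped, and the groups are weighted so that
# controllable inputs and directly attributable outputs count more than
# outputs only potentially attributable to the program.

scorecard = {
    "inputs_under_your_control": {
        "displays_properly_installed_pct": {"objective": 90, "actual": 82},
    },
    "outputs_directly_attributable": {
        "sales_via_promo_codes": {"objective": 5000, "actual": 5600},
    },
    "outputs_potentially_attributable": {
        "brand_revenue_growth_pct": {"objective": 4.0, "actual": 3.1},
    },
}

group_weights = {
    "inputs_under_your_control": 0.40,
    "outputs_directly_attributable": 0.40,
    "outputs_potentially_attributable": 0.20,
}

def group_score(metrics):
    # average attainment versus objective, capped at 100% per metric
    return sum(min(m["actual"] / m["objective"], 1.0) for m in metrics.values()) / len(metrics)

overall = sum(group_weights[g] * group_score(m) for g, m in scorecard.items())
print(f"Overall program score: {overall:.0%}")

# Inputs outside your control (weather, competitive activity) would be noted
# separately rather than folded into this calculation.
```

The exact weights shown are illustrative; what matters, as noted above, is applying the same ones consistently over time so programs remain comparable.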

Exclude those inputs outside of your control from your overall calculations. It would be very hard to set objectives for them, to score against those objectives, or to know how much of an impact they really had. We know that a blizzard of the century will keep more people home than a light dusting of snow, but the amount of snow that makes people decide to stay home is different for everyone. Still, note whether you think external factors significantly impacted your results.

As I wrote this, I realized my opening story does connect to the idea for this newsletter, after all. My story was about an input, an activity under my control, in this case networking and meeting people. That created an output that was at least partially attributable to my networking efforts. I may have still come up with the idea without going to the Carlu, but I might not have found my story!


Measure Well, Sleep Well

If you know me or have been reading this newsletter for any length of time, you may know that photography is my favourite pastime. What you may not know is that organizations sometimes bring me in to take photos of their events, which is how I found myself at the AllerGen 2012 Annual Research Conference.

AllerGen is a not-for-profit organization whose role is to mobilize Canadian science to reduce the illness, mortality and socio-economic costs of allergic disease. The conference showcased the latest research in this regard and while often over my head scientifically (not hard to do), I found it quite interesting.

During an afternoon break at the conference, a distinguished looking gentleman named Douglas Barber approached me to talk photography. Our pleasant conversation eventually shifted to the conference and he told me a story that I quickly realized fit my thinking on marketing measurement.

Douglas explained he is on AllerGen’s board and that an issue of concern to him is the cost to the Canadian economy from the “asthma drag” on productivity. He explained how asthmatics can be less productive at work or even miss entire days of work following sleepless nights caused by asthma. Parents of asthmatic children can also experience the same productivity losses. Douglas also told me how he once did a quick “back of the envelope” calculation to estimate that asthma costs our economy between $10 and $20 billion per year in lost productivity.

Sometime after Douglas did his quick calculation, a full study was done to properly analyze and estimate the economic impact of asthma’s drag on productivity. The study concluded the annual costs are $15 billion. That’s right; a costly and complex measurement process produced the same answer as one expert using a pen and the back of an envelope.

Two aspects of this story relate to my views on marketing measurement:

  • Douglas’s back of the envelope calculation relative to the full study is similar to how a marketing scorecard can be a proxy for a sophisticated and costly marketing measurement process. In both cases, the less sophisticated approach doesn’t need to be perfect, just accurate enough to support analyzing options and making the right decisions. As I like to say, it’s not about precision, it’s about the decision.
  • The back-of-the-envelope estimate worked because it was done by an expert using a sound methodology. Douglas has an extensive business background and apparently knows more than just a little about productivity and related calculations. Scorecards are a proven methodology that you can enhance with expertise about your marketing and your business.

There is another lesson in Douglas’ story, and that’s the need to right size your measurement efforts to the magnitude of the decisions you need to make.

Research Investment Decision

  • Douglas’ back of the envelope calculation and the full-blown study produced essentially the same estimate and both pointed toward making the same decision. It’s a pretty compelling proposition if investing perhaps a few hundred million dollars into research would lead to recovering even just 10%, or $1.5 billion of the lost productivity, especially as that benefit would be realized every year.
  • The problem is that any decision to potentially invest a few hundred million dollars needs to be substantiated by more than a back of the envelope calculation. In this case, the cost of the research needed and the probability of recapturing that 10% are two other variables that I think would need to be estimated. It’s understandable that a full-blown study was needed to examine the overall business case (a rough worked version of this arithmetic follows below).
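For what it’s worth, here’s that back-of-the-envelope arithmetic laid out, with the two unknowns (the research cost and the probability of recapturing the 10%) filled in with assumed values purely for illustration.

```python
# The back-of-the-envelope arithmetic from the story. The research cost and
# the probability of recapturing the 10% are not given, so the values below
# are assumed purely for illustration.

annual_productivity_loss = 15e9        # $15 billion per year (the study's estimate)
recapture_rate = 0.10                  # recover even just 10% of the loss
annual_benefit = annual_productivity_loss * recapture_rate   # $1.5 billion per year

research_investment = 300e6            # assumed: "a few hundred million dollars"
probability_of_success = 0.5           # assumed: the other variable to estimate

expected_annual_benefit = annual_benefit * probability_of_success
payback_years = research_investment / expected_annual_benefit

print(f"Expected annual benefit: ${expected_annual_benefit / 1e9:.2f} billion")
print(f"Simple payback period: {payback_years:.1f} years")
```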

Marketing Investment Decision

  • Similarly, for companies that invest tens of millions annually in marketing, it makes sense to support the decisions that need to be made with sophisticated marketing measurement efforts that might cost hundreds of thousands, or more.
  • For most companies with smaller marketing budgets, a practical lower cost approach such as one using a scorecard may well be the right sized measurement solution. In most cases, the overall measurement expense likely needs to be a small single digit percentage of the total marketing budget.

I like simple and elegant solutions that deliver what you need. A marketing scorecard’s simplicity keeps measurement costs down, while its elegance allows the flexibility to include a suitable level of expertise and sophistication to right size your measurement efforts to your marketing budgets.

Whichever measurement approach you choose, be sure to combine a sound methodology with the right expertise to learn what you need to know to make the right decisions. Measuring well will help you to sleep well and be a productive marketer!

Wine Scoring & Marketing Measurement

Tuesday evening I was browsing the latest edition of the LCBO’s ‘Vintages Release Catalogue’. This catalogue provides descriptions and sometimes wine critics’ quality scores for the new wine products about to be released through Vintages stores in Ontario. As I browsed, two thoughts came to mind.

Firstly, I noticed that most of the scores in this catalogue were between 88 and 92 on a 100-point scale. It struck me that this suggested the majority of the wines in this catalogue were of very similarly high quality, with almost all wines rated within a narrow five-point range. I found that odd, perhaps unrealistic, and decided to think about it. Secondly, I noticed that the wine descriptions were making me thirsty.

Wine Glass, Red Wine

Seeing the wisdom in choosing the beverage that best suited the task at hand, I poured myself a glass of red to complement my thinking, sat down with the catalogue and made a few calculations and notes. Here are some highlights.

  • Vintages published scores for 57 of the 120 wines in this catalogue. The wine critics quoted used the 100-point scale for 48 of the 57 rated wines. The other nine were based on 20, 5 or 3-point scales.
  • Of the 48 using the 100-point scale, 41 (85.4%) received a score between 88 and 92, and 30 of those were either 90 or 91, which confirmed my first observation. The other 7 wines were rated higher than 92, leaving no scores below 88.

Taking a sip from my glass, I contemplated why so many wines received such similar scores, and how all this relates to marketing measurement. Here are a few thoughts:

Wine Scoring: The LCBO is in the business of selling wine and I suspect they have a policy of only publishing scores of 88 or higher. I tested this theory by looking at the two previous Release Catalogues and wasn’t able to find a wine scoring 87 or lower. Perhaps they’ve learned that lower scores reduce sales and so don’t publish scores below 88.

Marketing Measurement: Marketers are in the business of spending money effectively to drive positive business outcomes. Instead of measuring only the best marketing programs or those you might want to cast in a favourable light, measure and rank all programs so you can identify which are most and least effective, and then optimize future strategies accordingly.

Wine Scoring: By my rough count, the 48 scores using the 100-point scale were sourced from 26 different wine critics. While each used a 100-point scale, I have a hard time believing all 26 used the scale in exactly the same way. I also suspect that some critics are more generous with their scores than others, like my calculus teacher in CEGEP. The other important issue is that scoring wine is a highly subjective exercise. It isn’t at all uncommon for two or more tasters to disagree on a wine’s quality and the corresponding score. Experts have different opinions on subjective matters.

Marketing Measurement: To minimize inconsistencies, reduce or eliminate subjectivity and personal bias from your measurement processes. Having 26 experts using similar but sometimes different methods of scoring your marketing programs based on their personal opinions would not be a recipe for consistency. One person needs to lead your measurement efforts using one methodology that your organization understands and supports.

Wine Scoring: I did a little reading on wine scoring and discovered that wine critics can be inconsistent in the scores they award to the exact same wine on different occasions. For example, the influential wine critic Robert M. Parker has apparently pointed out that he sometimes assigns different scores to the same wine at different tastings, but that those scores tend to be no more than 3 points apart. It seems that differences in tasting conditions and the taster’s emotions can lead to different scores. To address this, I believe Robert Parker publishes average scores when multiple tastings produce different scores.

Marketing Measurement: Consistency is important in making comparisons meaningful. Pick one methodology that can be used consistently across all programs. Consistency should help you to avoid having all your scores cluster within a narrow range where differences may not be significant, or actionable. Programs can differ significantly in their effectiveness at meeting your objectives, and so their scores should reflect those differences. Also, if data for one metric is collected at various times, or from different sources, you might want to follow Robert Parker’s lead and use an average score for that metric.

Advice for Wine Drinkers: Don’t worry about the difference in quality between a wine that scores 88 and another that scores 92. Both are high-quality wines, and the difference in scores may come down to who did the tasting, under what conditions, and the taster’s preferences. Here’s the fun part. Through trial and error, you should eventually be able to determine which wine critic your tastes best align with, and then the ratings and tasting notes from that critic will help you to make better wine purchasing decisions.

Advice for Marketers: Similarly, there will be some trial and error involved, but not nearly as much fun. Select a measurement methodology that you can apply fairly, without personal bias, and consistently across all programs. Be disciplined about measurement and it will ultimately highlight which marketing programs best meet your objectives and create value for your business. That will help you to make better marketing decisions, which you may wish to celebrate by opening a bottle of your favourite wine!