Where the Rubber Hits the Road

While walking east along the south side of Bloor Street in midtown Toronto with my colleague Bob, I saw something quite unusual. Hovering above a large wooden table on the sidewalk in front of William Ashley’s store at 55 Bloor Street West was a full-sized Indy race car. I don’t know a lot about racing, but I do know that cars and their tires are supposed to be on the road, so I was intrigued.

On closer inspection, I discovered the 1,400-pound replica car wasn’t hovering but was sitting on four delicate-looking bone china tea cups, one under each tire. The tea cups themselves were part of an elegant table setting featuring William Ashley’s finest.

A little online snooping uncovered that William Ashley launched the display in 2011, 25 days before the 25th anniversary of the Honda Indy race in Toronto. The race organizers had approached William Ashley, a long-time sponsor of Toronto’s annual Indy race, with this outdoor display idea to jointly promote the race and the store, along with the superior strength, durability and performance of bone china.

As we discussed the uniqueness of this marketing idea, Bob turned to me and asked the question I get asked the most. “So, how would you measure that marketing program?”

Without flinching, I replied with my favourite answer, “It depends”. Since I didn’t know William Ashley’s actual planned objectives for creating this display, I couldn’t know exactly how to measure whether or not it worked.

Despite that roadblock, we agreed to speculate on what their objectives might generally have been, and I’ve added some measurement thoughts for each:

  • Objective #1: Create a unique and interesting event to generate press coverage.
  • Measurement #1: In its simplest form, this is a matter of tracking the number of impressions from stories and mentions about the launch event across print, broadcast, digital and social media.
  • Objective #2: Communicate the key attributes of bone china.
  • Measurement #2: While the first objective relates to how much coverage, this one relates to more important issues, such as the quality, accuracy and tone of the coverage. It could get into things like media monitoring, text analytics and sentiment analysis of the various forms of coverage. You could supplement that with before and after surveys and by intercepting people on the street to see if they saw the display and understood the message.
  • Objective #3: Increase store traffic.
  • Measurement #3: Count the customers, of course, but you need to compare the count to something, like how many customers they normally get on Wednesdays in June, or when they’ve created similar displays previously.
  • Objective #4: Increase bone china sales.
  • Measurement #4: It’s easy enough to add up the sales, but it would be helpful to compare the total to an average, or baseline, as with the store traffic example. You’d also have to decide how long that display might affect bone china sales. Seeing that display made me think (and write) about the superior strength, durability and performance of William Ashley’s bone china, and maybe now you’re starting to think about it. I’m not in the market for bone china right now, but maybe in the future and perhaps you will be too!

The key lesson in all of this is that you need to set clear measurable objectives when planning your marketing in order to know what to measure and learn whether you’ve succeeded. In other words, measurement should be directly linked to your planning process. Defining how you will assess whether a marketing program is successful should be an integral part of planning.

Good objectives will define the metric(s) that will be used to measure success, and the specific numerical outcome you want to achieve. For example, it can be a percentage change from a comparable period, or a specific outcome that you’ve determined would be worthwhile relative to the cost of the program.

My four speculated objectives above were purposely vague to highlight the challenges presented by a lack of clarity. When you set your objectives, be very clear about the outcome you’re looking for. Here’s a better version of Objective #4: “Increase bone china (all brands) sales for June and July by 20% vs. June and July of last year”. That, you can measure.
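
To make that concrete, here’s a minimal sketch, in Python, of how such a percentage objective could be scored. The sales figures are invented for illustration only; William Ashley’s actual numbers are unknown.

```python
# Hypothetical figures for illustration -- not actual William Ashley data.
def objective_met(baseline, actual, target_pct_increase):
    """Return (pct_change, met) comparing a period's sales to a baseline period."""
    pct_change = (actual - baseline) / baseline * 100
    return pct_change, pct_change >= target_pct_increase

# e.g. June + July this year vs. June + July last year, against a 20% target
pct, met = objective_met(baseline=100_000, actual=125_000, target_pct_increase=20)
print(f"Change: {pct:.1f}% -> objective met: {met}")
```

The point is less the arithmetic than the discipline: once the objective names a metric, a baseline period and a target number, the “did it work?” question becomes a simple comparison instead of a guessing game.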

Without proper objectives, how or what to measure becomes an exercise in guessing, much like Bob and I had to do. To take the guesswork out of your marketing measurement, it needs to begin as part of your planning process. That’s where the rubber hits the road.

 

Aligning Interests

Introduction

Last week something strange happened in front of my house that knocked out my power. I’m glad it happened on a relatively cool day rather than on a hot, sticky day like today.

While I was inconvenienced by about 12 hours without power, I’m thankful for the inspiration for this month’s newsletter, which discusses the importance of aligning marketing’s interests with those of the whole organization and how doing measurement properly helps get you there.

__________________________________________________________________________

As I opened my front door Wednesday morning to toss a few items into the recycle bin, I quickly realized that something had gone terribly wrong in front of my house.

From my front door, I saw a police officer, two firefighters, three Toronto Hydro linemen, an Atlas Van Lines driver, their respective vehicles, flashing lights, barricades, pylons and a cat. Taking in this scene, it quickly sank in that the large Atlas moving van with live hydro wires draped across it was probably the cause of all the commotion.

Here’s what happened. The van was very tall. The overhead hydro wires, which reach across the street to feed electricity into the houses on my side of the street, hang very low. Tall van + low wires = problem. As the van drove up my street, it snagged and pulled down the wires that feed electricity into MY house, which knocked out my power.

My neighbour Blair saw the whole thing happen. He called 911. The 911 dispatcher called the police, the firefighters and the hydro guys. No one knows who called the cat, or why the cat was there other than to hold the humans in contempt.

Over the next three hours, I had productive conversations with all of the aforementioned, as well as with my insurance agent, the claims adjuster, a contractor and an electrician. All were professional and courteous. But, here’s the thing.

Everyone I talked to had a different agenda, a different boss to answer to, and a different view on how to proceed. I found this both interesting and frustrating, yet not at all unusual. To varying degrees, most organizations experience this.

While this “organization” had been assembled hastily to address the downed power lines, it behaved as most organizations behave. That is, the first concerns of the individuals involved were guided by their own self-interests. More importantly, they were able to find common ground within those interests and come together around common objectives.

What does this have to do with marketing measurement? I thought you’d never ask! Some of the biggest challenges and benefits of marketing measurement are related to getting everyone on the same page and aligning their interests.

Aligning marketing’s objectives with those of the organization is critical to both the success of marketing measurement efforts and the success of the organization at meeting its overall objectives.

Alignment

Here are three key principles to achieving both types of success:

1. The whole organization must commit to marketing measurement.

Marketing and other parts of the organization need to mobilize around a clearly defined measurement objective, such as finding the best and most effective ways of spending marketing budgets. Marketing can’t and shouldn’t go it alone. It needs support and commitment from the rest of the organization for measurement to work and lead to better spending decisions.

2. The organization and marketing must jointly commit to a measurement methodology.

If marketing unilaterally develops an approach to marketing measurement, others in the organization might think that marketing developed their approach with their own self-interests in mind. That is, they might assume that the methodology is biased towards showing that the marketers in question are brilliant and highly effective.

On the other hand, if marketing involves other elements of the organization that might naturally have competing interests or alternate perspectives on how to measure marketing, then those “competing” interests will bring more balance to the methodology and more acceptance by all of the results.

A joint commitment to a methodology means they must agree on a way to measure marketing’s success. I define success as marketing meeting its objectives and helping the organization to meet its objectives. Objectives-based measurement forces alignment around the objectives themselves.

3. The organization and marketing must jointly decide what to measure.

Focus

Remember that marketing’s purpose is to attract customers who create the most value for the whole organization. That means you need input from each key functional area to know how they each define value, high value customers and profitable customer behaviour. Those definitions will point the way to the key performance indicators that should be included in your measurement.

Including a range of metrics that matter to all aspects of the organization will mean that you will be measuring marketing according to how the whole organization defines success. It will also be easier to get the support and data you need to measure marketing in terms that everyone will understand.

In an organization, everyone has a role to play. Each person has his or her own biases and priorities. At times, it can seem as though different people and parts of the organization have competing interests.

Effective organizations find and focus on the common objectives within those competing interests. One of the most important benefits of measuring marketing based on results vs. objectives is that to do it properly those objectives will need to align with those of the overall organization.

Taste of Marketing Measurement

Last month I attended ‘Taste of the Danforth’, my favourite summer event in Toronto. Now in its 19th year, this weekend-long event along a 3km stretch of Danforth Avenue typically attracts around 1.3 million people. I live nearby, attend almost every year and generally eat too much while I’m there.

I planned to meet a friend on the Danforth Friday evening but when the forecast called for rain, we called off our plans. The forecast turned out to be wrong and as I sat at home on a fairly dry Friday night, I wondered just how many other people didn’t go to Taste of the Danforth due to the threat of rain and how much that might have reduced sales for each restaurant participating in the event.

Marketing Effectiveness

I also thought about how event organizers seemed to have been very effective at creating awareness for the event. I saw a lot of local media coverage and I noticed how Taste of the Danforth was listed in every what’s-going-on-in-Toronto/Ontario listing I saw in print and on-line.

Marketing effectiveness measurement tends to focus on whether specific program objectives were achieved, such as attracting and keeping profitable customers and creating value for the business. Yet, as I was reminded by sitting at home when I should have been out eating too much, marketers can do everything right and still end up with bad results due to factors outside of their control, such as bad weather (or a bad forecast).

Event Objectives

I’m guessing the event’s main marketing objectives are to generate awareness, attendance and trial of both the area and of individual restaurants, and to also increase post-event traffic and revenue for local merchants.

While I sat home that night, I watched a segment of a local newscast about Taste of the Danforth in which a restaurant owner told the interviewer that this event generally makes his year. I imagine that some restaurants might aim to sell enough souvlaki in three days to possibly pay for a renovation or to simply make enough that weekend to stay in business for another year.

Objectives like this point the way towards some of the metrics I would include on a scorecard to measure the effectiveness of the marketing, but there are other factors to consider.

Those Pesky External Factors

For this event, the number one external factor outside of marketing’s control that can impact its success is the weather. I imagine that the event organizers must make the appropriate sacrificial offering (lamb would seem appropriate) to the Greek god (Zeus?) most responsible for weather. A hard rain could severely reduce attendance for an evening or whole weekend.

The Measurement Dilemma

There are three general approaches to choose from regarding external factors.

1. Ignore Them

This option will always be as tempting as all the delicious foods one finds at Taste of the Danforth. I can’t imagine how I’d ever determine how many people didn’t go to Taste of the Danforth that Friday night because they thought it might rain, or how many sticks of souvlaki didn’t get sold as a result.

2. Model Them

This option is worth considering when you have a lot of data. If the organizers have 19 years’ worth of data correlating daily attendance with weather forecasts and actual rainfall, that would be a start. Still, for most marketers, the costs of sophisticated models and analytics can quickly become too high relative to the size of the marketing expense they’re meant to measure.

3. Track Them

I think it is well worth tracking any external factor that could impact results, such as weather, competitive activity and labour disruptions. I arbitrarily score each external factor on a five-point scale, where the low end of the numerical scale corresponds to “very negative” and moves up through “somewhat negative”, “no impact”, “somewhat positive” and “very positive”.

I keep this very unscientific scoring of external factors separate from the rest of the scoring I do on the factors that seem to be within marketing’s control and on the results that can reasonably be attributed to the marketing. I don’t muddy the waters by including the external factors in the calculation of the overall score, but I do note them and score the severity of those factors.
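
Here’s a minimal sketch, assuming invented metric names and scores, of what keeping external factors separate from the overall score can look like in practice:

```python
# Illustrative sketch only: metric names, scores and factor values are invented.
# External factors are noted and scored on the five-point scale, but they are
# deliberately excluded from the overall score calculation.
EXTERNAL_SCALE = {1: "very negative", 2: "somewhat negative",
                  3: "no impact", 4: "somewhat positive", 5: "very positive"}

program = {
    "name": "Taste of the Danforth (hypothetical year)",
    # Factors within marketing's control / results attributable to it (0-100 each)
    "controllable_scores": {"media coverage": 85, "attendance vs. target": 70},
    # External factors: tracked alongside, never mixed into the score
    "external_factors": {"weather": 2, "transit strike": 3},
}

overall = sum(program["controllable_scores"].values()) / len(program["controllable_scores"])
print(f"Overall score: {overall:.1f}")
for factor, score in program["external_factors"].items():
    print(f"External - {factor}: {EXTERNAL_SCALE[score]}")
```

Structured this way, a later review of several years of programs can line up the overall scores next to the external-factor notes, rather than having the weather silently baked into the numbers.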

The value of tracking external factors comes when you analyze a group of marketing programs, such as past years of Taste of the Danforth. Imagine looking back and seeing large year-to-year variations in attendance and not knowing in which years it rained all weekend, or there was a transit strike.

Without tracking external factors, it would be easy to come to the wrong conclusions about the effectiveness of specific marketing programs. It would be hard to decide whether specific programs should be repeated or changed and also to learn which tactics were the most and least effective.

Always track those factors outside of your control and the degree to which they may have helped or hindered your results. That will at least give you a taste of what else might have been going on at the time that may have impacted your results.

Down In The Alley

A few years ago, the couple who live across the alley behind my house decided to host a small gathering on their driveway for those of us living nearby on either side of the alley. A few homes had sold recently and they decided that this would be a good way to welcome the new neighbours and help everyone get to know each other a little better. It was fun, and this annual tradition lives on 4 or 5 years later.

This year’s edition of the alley event happened last Saturday afternoon. One of the things I enjoy about these neighbourhood gatherings is having the time to learn about each other in a relaxed setting.

On Saturday, I was having a good chat with my exceedingly well-named neighbour, Rick. He asked about the nature of my work and so I gave him a bit of an overview about how I measure marketing effectiveness. That quickly led to him asking me this question:

“What did you think of General Motors’ announcement that they were pulling their Facebook ads?”

My first reaction was that I supposed GM felt their ad spending on Facebook wasn’t working.  As we continued, our conversation shifted to the timing of GM’s announcement, mere days before Facebook’s highly anticipated initial public offering. We concluded that something must have gone wrong in their relationship with Facebook for GM to announce their decision at such a sensitive time.

After Rick, who is an actor, entertained the local kids by juggling while walking on stilts, I went home, considered our conversation, did some research and organized my thoughts.

What I Found & My Thoughts – #1

In case you missed it, GM announced they would eliminate $10 million of advertising spending on Facebook. That still leaves the $30 million GM spends on their Facebook marketing initiatives, although I don’t believe any of that spending becomes Facebook revenue.

Clearly, GM thinks there’s an audience on Facebook worth engaging through marketing, but not so much for advertising, at least not yet.

What I Found & My Thoughts – #2

The $10 million is a drop in the bucket compared to GM’s 2011 total US ad spending of $1.8 billion ($3 billion globally), and Facebook’s 2011 revenue total of $3.7 billion, most of which was for advertising.

Smart marketers who spend $3 billion annually on advertising almost certainly also measure the effectiveness of that spending pretty rigorously. It is a natural part of the process to question, evaluate and optimize all parts of that spend on an ongoing basis, and the Facebook ad spend would be subject to that scrutiny.

What I Found & My Thoughts – #3

It has recently been reported that Facebook and GM are back in talks to renew GM’s advertising and that GM is asking Facebook for more data to bolster their measurement efforts.

Perhaps the problem was not so much that GM’s Facebook advertising didn’t work, but rather that GM couldn’t prove whether or to what degree it did, or didn’t. I also wonder whether GM’s pre-IPO announcement was a negotiating tactic to get the data they want from Facebook.

What I Found & My Thoughts – #4

I noticed that following GM’s announcement, their rival Ford tweeted something to the effect that Facebook ads are effective when used properly. Let’s assume the people at Ford are also pretty smart and measure rigorously, too. By implying they know their ads are effective, their tweet also implies they are better than GM at measuring Facebook ad success, and thereby raises some related questions:

  • Does Ford use Facebook ads differently and in a way that makes measurement easier?
  • Is Ford better than GM at setting measurable objectives for each ad?
  • Does Ford already get better Facebook data than GM?
  • Was Ford’s tweet just an attempt to position themselves as smarter than GM?

We can’t know the answers to these questions, but we can remind ourselves of a few marketing measurement fundamentals:

Set clear and measurable marketing objectives: To know whether a marketing program worked, you have to first define exactly what it would mean for your program to “work”. In other words, what outcomes would make you happy?

Your objectives must be reasonable and attainable: A clearly defined objective isn’t necessarily attainable. A good outcome can still fall well short of an unreasonable objective, and be classified as a failure, when in fact the failure was in the setting of the objective.

You need to be able to get the data you need, consistently, reliably and cost-effectively: This may be at the crux of GM’s discussions with Facebook. GM may know exactly where they want to go with their Facebook ads, but they just can’t tell if they’re getting there, which when you’re behind the wheel of a $10 million ad spend, is sort of important.

It will be interesting to see whether GM and Facebook can reach an agreement. My guess is that GM won’t want to walk away from advertising to Facebook’s massive and targetable audience, particularly if it seems their competitor(s) may be having success in this regard. Maybe GM just needs to know if they’re meeting their objectives and whether their Facebook ad spend has them driving on a six-lane superhighway, or somewhere down in the alley.

Inputs & Outputs

One of the challenges in writing a monthly newsletter is writer’s block. It generally hits me in one of two ways. Either I have no idea what to write about, or I have an idea, but no story or setting for the idea.

I have two approaches to deal with writer’s block. I find that going for a walk in nearby Monarch Park is a great way to clear my head and then somehow the ideas come to me. Finding a story or a setting for my idea can be harder. Something has to happen so I can connect the idea to a story. Usually, I need to read something or get out and do something. Through interacting with a new person or situation, a story sometimes emerges.

Monday evening, faced with neither an idea nor a story for this newsletter, I ventured out to a McGill Alumni event at the Carlu where I could mingle and meet people. Among those I met were two relatively recent graduates (relative to me, that is) with whom I had a very enjoyable, wide ranging conversation. Unfortunately, nothing in our conversation triggered an idea or a story for this month’s newsletter, although I was happy to learn about “The Undercover Economist” Tim Harford, whose writing I’m already enjoying.

On my way home, I thought about other people I’d met lately and then the idea came to me. I realized how a discussion a couple of weeks ago with a highly skilled and experienced market researcher related to how marketing scorecards are an effective way to organize diverse types of data.

We discussed how the various things that can be measured about marketing are either inputs, the things that influence the desired customer behaviour, or outputs, the results of that customer behaviour. This concept can be very helpful in determining how to organize the marketing metrics on your scorecard, and in deciding how to weight them within your overall scoring system. Let’s look at some examples.

Marketing Input Metrics

First of all, there are two broad categories of inputs: those you control and those you don’t. Inputs under your control are generally related to how well you execute the program you are measuring. Examples could include:

  • The percentage of the in-store displays or signs you printed and distributed that were actually and properly put up in store
  • The percentage of all the promotional labels or neck tags your merchandizing partner actually affixed to your products
  • The number of and cost per impression of all your on and off-line marketing communications related to this program

Inputs outside of your control that might impact the success of your program could include:

  • Competitive activity – they dropped or increased their price, promoted heavily while your program was in market, had a PR disaster on Twitter, etc.
  • Weather – no one showed up at your well promoted event because of a massive snow storm

Marketing Output Metrics

There are also two types of outputs, but they are defined a little differently. The first are those outputs or results that are directly attributable to your marketing program. Examples might include:

  • Number of unique visitors to a landing page on your website built for this program
  • Click through rate from your landing page to the buying page
  • Number of new customers who bought using your promotion codes

The other type of outputs are those that are potentially but not definitely or entirely attributable to your program.  These are typically key business performance metrics that can be influenced by a variety of inputs. Examples might include:

  • Revenue for the brand being promoted
  • Market share of that brand
  • Average price per unit sold during the program

Grouping your metrics in this logical fashion on your scorecard can make it easier for you to select your metrics and make decisions about how to weight them by group. Inputs directly under your control and outputs directly attributable to your program should be more heavily weighted than outputs potentially attributable to your program. This is especially true if you tend to have a lot of programs in the market simultaneously. Whatever weightings you use, be consistent over time to ensure you can meaningfully compare programs to each other.
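
A sketch of what group-weighted scoring could look like, assuming invented weights and 0–100 scores; the weighting scheme is the point here, not the particular numbers:

```python
# Hypothetical group weights: heavier on what you control and what is directly
# attributable, lighter on what is only potentially attributable.
GROUP_WEIGHTS = {
    "controlled_inputs": 0.4,   # execution factors under your control
    "direct_outputs": 0.4,      # results directly attributable to the program
    "indirect_outputs": 0.2,    # results only potentially attributable
}

def score_program(group_scores):
    """Weighted average of per-group scores (each group already averaged to 0-100)."""
    return sum(GROUP_WEIGHTS[group] * score for group, score in group_scores.items())

total = score_program({"controlled_inputs": 90,
                       "direct_outputs": 75,
                       "indirect_outputs": 60})
print(f"Program score: {total:.1f}")
```

Whatever weights you settle on, the consistency rule from above applies in code too: change the weights between programs and the scores stop being comparable.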

Exclude those inputs outside of your control from your overall calculations. It would be very hard to set objectives for them, to score against those objectives, or to know how much of an impact they really had. We know that a blizzard of the century will keep more people home than a light dusting of snow, but the amount of snow that makes people decide to stay home is different for everyone. Still, note whether you think external factors significantly impacted your results.

As I wrote this, I realized my opening story does connect to the idea for this newsletter, after all. My story was about an input, an activity under my control, in this case networking and meeting people. That created an output that was at least partially attributable to my networking efforts. I may have still come up with the idea without going to the Carlu, but I might not have found my story!


From My Perspective

I like walking. Many mornings, my walk takes me to nearby Monarch Park. Over the last few years, I’ve frequently taken my camera to the park as part of a photographic self-improvement exercise which involves photographing the same subjects over and over.

Going back to the same park repeatedly forces me to develop my ability to see and capture photos of the same subjects in new and creative ways. I’ve learned there isn’t one right way to capture any one subject, and there are usually many fine ways to capture the same subject.

Here’s what I’ve observed about the two main variables I have to work with: light and composition.

  • The light in the park can look very different in different seasons, at different times of day and in different weather.
  • As for composition, I see the best new photos when I change my perspective by changing where I’m walking or standing.

A recent foggy morning created new circumstances. The same old views looked very different due to the soft light and the masking effect of the fog.  In search of a new composition, I changed my perspective by leaving my usual route and walking through a more wooded area. Through the combination of the fog and a different perspective, I quickly saw something I had never seen, which led to my capturing one of my favourite images of the park.

When I first developed my approach to measuring marketing, my perspective was that the problem to solve was helping marketers answer questions like “Did that marketing program work?” or “Did I use my money wisely on that program?”. From that perspective, I designed a scorecard to measure individual marketing programs.

I started using that approach but quickly realized that my perspective on marketers’ problem needed to shift slightly.

  • Instead of asking “Did that marketing program work?” marketers wanted to know “Which of my programs worked best?”.
  • Instead of asking “Did I use my money wisely on that program?” they wanted to know “What are the best ways for me to use my money?”.

The differences between the original and the revised question in each pair are small in words but large in meaning. Answering the original questions can provide some insights about individual programs, whereas answering the second questions goes well beyond those insights.

With my slight perspective shift came more clarity about the problem marketers need solved. I developed a more robust scorecard, using a methodology that could be applied consistently across all programs. That change enables marketers to compare programs to each other so they can see which programs are most and least effective, and then adjust their marketing strategies and improve business results.

Just as importantly, I created an effective process for identifying and ensuring the right things would be measured on that scorecard.

Light and composition are the two main variables that impact taking photos, while a marketing measurement system’s two main variables are the design of the scorecard and the choice of metrics to put on the scorecard. In both cases, there isn’t one right or perfect approach, and many will provide worthwhile results if you get the fundamentals right and focus on solving the right problem.

  • The Right Problem to Solve: The reason to measure your marketing is to optimize your marketing decisions and improve your business results.
  • Scorecard Design Fundamentals: It needs to be flexible enough to measure any kind of marketing program, while also consistently using a standardized methodology that makes it meaningful to compare each program to all the others.
  • Choice of Metrics Fundamentals: Understand how your company creates value, who your ideal customers are and how you define profitable customer behaviour. Your marketing should target those customers and that behaviour, and the metrics you choose should help you to see whether marketing is helping to create value for your business.

Measurement is an integral part of continuously improving your marketing effectiveness. With a steady effort, an occasional shift in perspective and an eye on the fundamentals, your measurement will evolve and improve over time, as will your marketing.  In the meantime, I’m here if you need my help, unless I happen to be out in the park changing my perspective.


Social Media and Social Eating

Somewhere in the middle of a Dim Sum eating frenzy last Sunday at Rol San in Chinatown, my friend Elliot pointed out that five of our group of seven sitting around the table worked in marketing. Despite the fact that marketers can be creative and some in our group of seven are rather artistic, we’re nothing at all like Canada’s renowned Group of Seven painters. After our efforts last Sunday though, I’d say we are a group of seven skilled in the art of social eating.

Elliot’s comments came during a discussion about how a certain academic institution appeared to be measuring the success of a controversial event they had publicized through their website and social media.  In response to criticism of this event, they pointed to their number of subscribers, as if that somehow indicated a level of support for their controversial point of view.

Of course, just being a subscriber doesn’t automatically imply agreement with every point of view expressed. In this case, the number of subscribers was irrelevant. It would be more relevant to know the ratio of subscribers for vs. against the event taking place, and/or the point of view being presented.

Around the table we began discussing how to measure social media and quickly agreed that volume or Activity metrics aren’t as relevant as metrics that track customer Engagement. Even more important to track is a third group called Conversion metrics. To illustrate, let’s look at these three types of metrics in the context of measuring social media and also our customer experience at Rol San.

Activity Metrics

  • Social Media: Examples include number of subscribers, followers, followers/following ratio, tweets, fans, and links clicked. You can get a sense of what people are doing, but less about why or how they’re feeling.
  • Dim Sum Customer Experience: Examples include the total plates ordered, the average items eaten per person, and the average revenue per person. These metrics would tell Rol San how much we ate, but not whether we were satisfied customers.

It is generally more relevant to look at:

Engagement Metrics

  • Social Media: Examples include forwards, mentions, likes, comments, retweets and the sentiment of comments, tweets and blog posts. These types of metrics can provide more insights into what your customers are thinking and feeling about your brands and marketing programs.
  • Dim Sum Customer Experience: Rol San might want to know if that second order we placed repeated any items from our first order. (It was hard to tell amidst the flurry of plates and chopsticks.) Did anyone tweet or blog about our meal, or post a review somewhere? Were the comments or reviews positive or negative? Are the people who posted comments influential with the right audience?

It can be hard to tell what customers think and whether they are truly satisfied. That’s why so many eating establishments include a customer satisfaction survey with your bill. Many of these direct you to a website to give your feedback, which can then be linked to your transaction (what you ordered, your server’s name, etc.) to help round out the customer experience picture.

Still, engagement metrics and customer satisfaction scores have their limits. What customers say can often be different from what they actually do. Attitudes and opinions can help to predict behaviour, but all that investors, shareholders and bankers really care about is profitable customer behaviour, and how that behaviour converts into value for the business.

Conversion Metrics

  • Social Media: The greatest Conversion metrics of all are revenue and profit. Other examples include qualified leads generated, content downloads, registrations, reservations and orders; basically anything that might track key steps in acquiring, keeping and cultivating profitable customers.
  • Dim Sum Customer Experience: Rol San should care about whether we come back as a group, or individually with more friends, and whether we recommend dining there to others. In a retail business, these metrics can be hard to track, which is one of the reasons loyalty and viral marketing programs exist: to both incent and track profitable customer behaviour. It’s also why hosts or greeters sometimes ask “Is this your first time here?” or “How did you hear about us?”

I can’t speak for the others in our Group of Seven Social Eaters (G7SE?), but I think I will probably return to Rol San someday.  How’s that for mildly positive sentiment and uncertain repurchase intent? Rol San could invest a lot of money trying to predict my behaviour, but even I can’t predict what I’m going to do. They’d be better off tracking what I actually end up doing.

Conversion metrics are the most important metrics to track and they should be more heavily weighted on your scorecard. At the same time, don’t ignore Activity and Engagement metrics, as they are predictors of conversion. They can help you to identify where programs are succeeding and failing in creating the customer behaviour that leads to profits.
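To make the weighting idea concrete, here’s a minimal scorecard sketch in Python. The metrics, numbers and weights are all invented for illustration, not Rol San’s actual data; the point is simply that Conversion metrics count for more than Activity and Engagement metrics:

```python
# Hypothetical scorecard: each metric is scored as actual vs. objective,
# then weighted by type, with Conversion weighted most heavily.
TYPE_WEIGHTS = {"activity": 1.0, "engagement": 2.0, "conversion": 4.0}

metrics = [
    # (name, type, actual, objective) -- all numbers are made up
    ("new followers",       "activity",      900,   1_000),
    ("mentions",            "engagement",    130,     100),
    ("positive reviews",    "engagement",     45,      50),
    ("repeat reservations", "conversion",     60,      50),
    ("revenue ($)",         "conversion",  9_500,  10_000),
]

weighted_total = 0.0
weight_sum = 0.0
for name, mtype, actual, objective in metrics:
    score = actual / objective * 100          # % of objective achieved
    w = TYPE_WEIGHTS[mtype]
    weighted_total += score * w
    weight_sum += w
    print(f"{name:22s} {mtype:10s} {score:6.1f}%")

print(f"Weighted program score: {weighted_total / weight_sum:.1f}%")
```

With these numbers, missing the follower target barely dents the overall score, while hitting the repeat-reservation target lifts it; swap the weights around and the same data tells a different story.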

Why am I hungry?

Warren and Me

While reading my good friend Warren Buffett’s 2010 letter to his Berkshire Hathaway shareholders, I found myself smiling and nodding on several occasions. Before I explain, I should point out that Warren and I are not actually friends; I just said that so you’d keep reading. I suppose it would be fair to say that I know Warren a lot better than he knows me, which is not at all.

The reason I referred to Warren as a friend, aside from the attention grabbing value of doing so, is that when I read his various comments about how he measures his company’s performance, I saw many parallels to my own views on measuring marketing performance. In that sense, we are friends. Here are a few examples featuring excerpts from Warren’s well crafted letter.

Example 1

  • Warren: “I believe that those entrusted with handling the funds of others should establish performance goals at the onset of their stewardship. Lacking such standards, managements are tempted to shoot the arrow of performance and then paint the bull’s-eye around wherever it lands.”
  • Me: Those managing marketing budgets have the same responsibility. Set performance goals up front so everyone is clear on how marketing spending will be judged. Selecting goals after the fact introduces a bias towards using metrics that prove marketing worked rather than determining whether it worked.

Example 2

  • Warren: “Our job is to increase per-share intrinsic value at a rate greater than the increase (including dividends) of the S&P 500.” … “The challenge, of course, is the calculation of intrinsic value. Present that task to Charlie (Vice Chairman, Charlie Munger) and me separately, and you will get two different answers. Precision just isn’t possible.” … “To eliminate subjectivity, we therefore use an understated proxy for intrinsic value – book value – when measuring our performance.”
  • Me: Marketing’s duty is to run programs whose objectives align with those of the organization. Any business exists to make money, but I don’t try to measure the exact financial ROI of each program because I feel that type of precision just isn’t possible. My proxy for ROI is to measure program results against their objectives, which should be focused on driving profitable customer activity and creating value for the business.

Example 3

  • Warren: In writing about how he values Berkshire, Warren explains why he doesn’t use net income as a metric. “Regardless of how our business might be doing, Charlie and I could – quite legally – cause net income in any given period to be almost any number we would like.”
  • Me: Choose metrics that are reliable and meaningful, and above suspicion of being manipulated to tell the story you want to tell. You want the people that matter to trust that your numbers accurately reflect the truth, not your version of the truth.

Example 4

  • Warren: Berkshire uses a well accepted accounting standard (Black-Scholes) for valuing option contracts, a standard that Warren doesn’t seem to like because under certain circumstances it can produce “wildly inappropriate values”. On this, Warren writes “Part of the appeal of Black-Scholes to auditors and regulators is that it produces a precise number. Charlie and I can’t supply one of those.” … “Our inability to pinpoint a number doesn’t bother us: We would rather be approximately right than precisely wrong.”
  • Me: I love that last sentence! There is a natural inclination to want to measure marketing precisely but I don’t think a high level of precision is needed to make good decisions. If you can be approximately right at identifying which marketing programs were most and least effective at meeting their objectives and creating value for your business, then you can make very good decisions that will optimize your marketing effectiveness.

I was glad to read how Warren’s point of view aligns with my thinking on marketing measurement. Any good measurement process just needs to be right enough to be an effective decision support tool. We need to measure the right things well enough that we learn what we need to know to make better decisions.

Warren and I may not be friends, but he’s a guy that I’d love to sit down with, have a hamburger (he apparently loves hamburgers) and soak up any wisdom he’d like to share. Since that’s not likely to happen, I’ll have to make do with a pretty good letter from a wise man.

PS. If you’d like to read Warren’s full letter, you can find it at the Berkshire Hathaway website: http://www.berkshirehathaway.com/letters/2010ltr.pdf

Apples, Oranges and Bananas

A funny thing happened yesterday on my way to the refrigerator.  I was working from home.  It was mid-afternoon and time for my snack.

I rose from my desk, went downstairs and walked my appetite into the kitchen, but stopped short of opening the fridge door.  I paused, wondering what to eat.  With the Christmas eating marathon still fresh on my mind, and around my waist, I was looking for a healthy snack, likely a piece of fruit, but which one?

My choice of available fruit came down to an apple, an orange and a banana.  I considered my options.

  • Apples: The pectin, a fibre, delivers a long list of health benefits, the flavonoids reduce diabetes risk, and they taste refreshing.
  • Oranges: The antioxidants offer protection from all sorts of disease, the vitamin C supports the immune system, and they taste great.
  • Bananas: The potassium lowers stroke risk, the vitamin B6 keeps the nervous system in top shape, and they are more filling than the other two.

Hmmm…  They’re all good, I thought, but in different ways.  While as fruit they have their similarities, they are each designed to meet different objectives.  How do I compare them?  How do I choose?

Naturally, as you might expect, my first big decision of 2011 reminded me of the problem marketers face when trying to decide which of a group of marketing programs was most effective.  Deciding which piece of fruit or marketing program was most effective depends heavily on my objectives related to eating, or on the marketer’s objectives related to each program.

One of the challenges in comparing Marketing Program (or fruit) A to B to C is that they each have different objectives.  That means the right metrics to measure each program might be quite different from the metrics used to measure the other programs, which makes comparison very difficult.  As they say, it’s like comparing apples to oranges.

To make this comparison easier, you need to focus on comparing how effective each marketing program is at doing whatever it is supposed to do. Let’s start with the last six words of that sentence.

Step 1:  Decide which metrics to use. Answer two simple questions about each program:

  • Who are you targeting?
  • What do you want them to do?

For example, consider the different metrics you might use to measure:

  • A public relations campaign to raise awareness among non-customers
  • An email program to incent loyalty and improve customer retention
  • An online contest to add email addresses to your customer database and incent referrals to non-customers

Step 2:  Level the playing field. This is the part where you compare the relative effectiveness of programs measured with different metrics:

  • Create a standard scorecard for your business. This becomes your template.  Your scorecard needs to have the flexibility to measure all types of marketing programs, and accommodate all types of metrics.  For a simple program you might need 5 to 10 metrics, whereas for a complex one you might need 30 to 40.
  • Customize your template to create a scorecard for each program. Some metrics will appear on each program’s scorecard, while others will vary from one scorecard to the next, given that the programs each had different objectives.
  • Score each metric according to how it performed vs. its objective (actual / objective × 100%).  This is the pivotal step that converts all metrics into one common metric, in this case a percentage.  Working with a common metric enables scoring each one and totalling your scores for each scorecard.

That last step is critical to enabling you to compare programs with differing metrics.  Instead of figuratively looking at apples, oranges and bananas and trying to figure out which is better, now you’re just looking at fruit, with a simple comparable rating for each. Then rank them, and you’ll know which programs were best and worst at meeting their objectives and delivering the results you wanted.
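These steps can be sketched in code.  Here’s a minimal Python illustration, using invented programs, metrics and objectives (none of them real data), of converting each metric to a percentage of its objective and then ranking the programs on that common scale:

```python
# Hypothetical programs, each measured with different metrics.
# Every metric is converted to a common scale: actual / objective * 100%.
programs = {
    "PR campaign": {
        # metric: (actual, objective) -- numbers invented for illustration
        "media impressions": (1_200_000, 1_000_000),
        "aided awareness %": (22, 25),
    },
    "Retention email": {
        "open rate %": (28, 30),
        "repeat purchases": (450, 400),
    },
    "Online contest": {
        "new email addresses": (3_000, 5_000),
        "referrals": (700, 500),
    },
}

def program_score(metrics):
    """Average percentage-of-objective across a program's metrics."""
    scores = [actual / objective * 100 for actual, objective in metrics.values()]
    return sum(scores) / len(scores)   # simple unweighted average

# Rank programs best-to-worst on the common scale.
ranked = sorted(programs, key=lambda p: program_score(programs[p]), reverse=True)
for name in ranked:
    print(f"{name:16s} {program_score(programs[name]):6.1f}%")
```

A real scorecard would likely weight some metrics more heavily than others, but even this unweighted version turns apples, oranges and bananas into one comparable rating per program.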

To solve my little dilemma yesterday, I suppose I could have created a Fruit Measurement Scorecard, based on my specific eating objectives at that moment, to give me a way to rate and rank three different pieces of fruit, but that would have been a bit weird.  OK, a lot weird.  Anyway, I was hungry; there just wasn’t time.

Oh, if you’re wondering which fruit I chose, without a scorecard to assist me, I caved and ate the last piece of blueberry pie.  Hey, those blueberries are loaded with antioxidants!

Return On Corona

My friend Dan was in town recently.  Our friendship goes back to our university days at McGill, which is another way of saying we’ve known each other for a very long time.  Of course, we’ve both aged quite gracefully.  We get together when we can, and when Dan had to be in town for meetings a couple of weeks ago, we made plans for Saturday night.

Dan and I decided to get caught up while watching a rare live performance by my musician friends, The Weber Brothers, of their Paul McCartney tribute called ‘Getting Better’.  The guys delivered a great performance, as always, with a set list that included ‘Yesterday’, ‘Let It Be’ and ‘Maybe I’m Amazed’.  I was also thankful that Ryan and Sam Weber chose not to perform ‘Silly Love Songs’.

Whenever we get together, Dan and I usually pass some of our time updating each other on our business endeavours.  I always enjoy hearing Dan’s perspective and he usually asks great questions that help me to focus on the right issues.

As we discussed my marketing measurement work, Dan questioned whether I measure Return On Investment (ROI), which is a natural question and one I’m commonly asked.  My answer went something like this.

As we sat at the bar, I looked down at the clear glass bottle in my right hand.  I said, “Let’s use my Corona as an example.  I don’t remember what marketing program caused me to try it years ago for the first time, I can’t tell you why it’s among the half dozen or so brands that I tend to order, and I don’t know what caused me to order it tonight.”

Let’s suppose Molson-Coors made $0.50 profit on the sale of my one bottle.  To calculate the ROI on their marketing for this transaction, they’d have to understand which marketing investments influenced my buying decision, and by how much.  Here are some thoughts on their marketing programs that I can recall:

  • I know I like watching their commercials
  • I’m sure I’ve seen several print ads, and the image of their clear glass bottle sparkling in the sun, with a wedge of lime, lingers in my mind
  • Not too long ago, I noticed a contest to win a bar fridge
  • I remember a great poolside bar promotion while vacationing at an all-inclusive a few years back that likely still influences my purchases.

Those are the ones I can recall, but I’m sure there are others I don’t remember that have influenced me.  Here’s where calculating ROI gets more complicated.

  • I have no idea which of these marketing investments influenced me most, or least, nor how much of the $0.50 profit to attribute to each.
  • I can’t begin to consider how to account for the combined impact of all those marketing investments that somehow accumulate within me over the years to influence my buying decisions.

The key point is, if I can’t do the profit allocation for my own buying decision, even if Molson-Coors could somehow get inside my head and have a good look around (it wouldn’t take long…) they wouldn’t figure it out either. To further complicate things, all their other customers each have their own influences and reasons for buying.

We humans each make our own very complex buying decisions, often influenced by factors outside the marketers’ control, in ways we may not consciously understand.  It’s extremely difficult and costly to isolate all the variables involved to truly and accurately measure financial return on investment of marketing spending. We end up having to make too many assumptions, or guesses at allocations.
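A toy example in Python shows why those allocation guesses matter.  Everything here is invented: the same $0.50 of profit is split across the marketing programs I listed above under two equally arbitrary attribution assumptions, and each program’s apparent return changes depending on which assumption you pick:

```python
# The same $0.50 of profit, allocated under two made-up attribution models.
profit = 0.50
programs = ["TV commercials", "print ads", "fridge contest", "poolside promo"]

# Assumption A: every remembered program influenced the sale equally.
equal_split = {p: profit / len(programs) for p in programs}

# Assumption B: weight some programs more (the weights are pure guesses).
weights = {"TV commercials": 4, "print ads": 2,
           "fridge contest": 1, "poolside promo": 3}
total_w = sum(weights.values())
weighted_split = {p: profit * w / total_w for p, w in weights.items()}

for p in programs:
    print(f"{p:15s} equal: ${equal_split[p]:.3f}  "
          f"weighted: ${weighted_split[p]:.3f}")
# Neither allocation is verifiably right; the "ROI" attributed to each
# program depends entirely on which assumption you picked.
```

Both models account for every cent of the $0.50, yet the TV commercials look either modestly or heavily responsible for the sale.  That’s the assumption problem in miniature.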

However, this doesn’t mean we shouldn’t measure something.  Instead of ROI, I focus on measuring how effective marketing is at meeting objectives, using metrics that involve as few assumptions as possible.  Here are a few thoughts on metrics:

  • Rather than trying to focus on one killer metric, like ROI, select a group of metrics that together give you a balanced view of whether a specific marketing program drove value in your business.
  • Assemble your various metrics in a scorecard that allows you to evaluate each metric against its objectives.
  • Decide which metrics you want to use before you launch your marketing program in case you need to gather data while the program is in market.
  • Just because I’m letting you off the hook on measuring ROI, it doesn’t mean you should ignore financial metrics.  Your scorecard should definitely include financial metrics, such as revenue and average transaction value, which tends to be a good indicator of profit.

I’m not comfortable making decisions or recommendations supported by numbers that are based on a lot of assumptions or guesses.  Build your marketing measurement process on as many facts and clean data as you can find.

Oh, and one more thought.  My Return On Corona (ROC) a couple of Saturdays ago was exceptional, given my objectives to hang out with a great friend and to be entertained by talented musicians!