Where the Rubber Hits the Road

While walking east along the south side of Bloor Street in mid-town Toronto with my colleague Bob, I saw something quite unusual. Hovering above a large wooden table on the sidewalk in front of William Ashley’s store at 55 Bloor Street West was a full-sized Indy race car. I don’t know a lot about racing, but I do know that cars and their tires are supposed to be on the road, so I was intrigued.

On closer inspection, I discovered the 1,400 pound replica car wasn’t hovering but was sitting on four delicate-looking bone china tea cups, one under each tire. The tea cups themselves were part of an elegant table setting featuring William Ashley’s finest.

A little online snooping uncovered that William Ashley launched the display in 2011, 25 days before the 25th anniversary of the Honda Indy race in Toronto. The race organizers had approached William Ashley, a long-time sponsor of Toronto’s annual Indy race, with this outdoor display idea to jointly promote the race and the store, along with the superior strength, durability and performance of bone china.

As we discussed the uniqueness of this marketing idea, Bob turned to me and asked the question I get asked the most. “So, how would you measure that marketing program?”

Without flinching, I replied with my favourite answer, “It depends”. Since I didn’t know William Ashley’s actual planned objectives for creating this display, I couldn’t know exactly how to measure whether or not it worked.

Despite that roadblock, we agreed to speculate on what their objectives might generally have been, and I’ve added some measurement thoughts for each:

  • Objective #1: Create a unique and interesting event to generate press coverage.
  • Measurement #1: In its simplest form, this is a matter of tracking the number of impressions from the stories and mentions about the launch event across print, broadcast, digital and social media.
  • Objective #2: Communicate the key attributes of bone china.
  • Measurement #2: While the first objective relates to how much coverage, this one relates to more important issues, such as the quality, accuracy and tone of the coverage. It could get into things like media monitoring, text analytics and sentiment analysis of the various forms of coverage. You could supplement that with before and after surveys and by intercepting people on the street to see if they saw the display and understood the message.
  • Objective #3: Increase store traffic.
  • Measurement #3: Count the customers, of course, but you need to compare the count to something, like how many customers they normally get on Wednesdays in June, or when they’ve created similar displays previously.
  • Objective #4: Increase bone china sales.
  • Measurement #4: It’s easy enough to add up the sales, but it would be helpful to compare the total to an average, or baseline, as with the store traffic example. You’d also have to decide how long that display might affect bone china sales. Seeing that display made me think (and write) about the superior strength, durability and performance of William Ashley’s bone china, and maybe now you’re starting to think about it. I’m not in the market for bone china right now, but maybe in the future and perhaps you will be too!

The key lesson in all of this is that you need to set clear measurable objectives when planning your marketing in order to know what to measure and learn whether you’ve succeeded. In other words, measurement should be directly linked to your planning process. Defining how you will assess whether a marketing program is successful should be an integral part of planning.

Good objectives will define the metric(s) that will be used to measure success, and the specific numerical outcome you want to achieve. For example, it can be a percentage change from a comparable period, or a specific outcome that you’ve determined would be worthwhile relative to the cost of the program.

My four speculated objectives above were purposely vague to highlight the challenges presented by the lack of clarity. When you set your objectives, be very clear about the outcome you’re looking for. Here’s a better version of Objective #4: “Increase bone china (all brands) sales for June and July by 20% vs. June and July of last year”. That, you can measure.
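To make that concrete, here’s a minimal sketch in Python of checking a percentage-change objective against a comparable baseline period. The sales figures are invented placeholders, not William Ashley’s actual numbers:

```python
# Measure a sales objective of the form "increase by X% vs. the
# comparable period last year". All figures here are hypothetical.

def percent_change(current: float, baseline: float) -> float:
    """Percentage change of the current period vs. the baseline period."""
    return (current - baseline) / baseline * 100

# Hypothetical June-July bone china sales, this year vs. last year
baseline_sales = 120_000.00
current_sales = 150_000.00

change = percent_change(current_sales, baseline_sales)
target = 20.0  # the objective: +20% vs. June-July of last year

print(f"Change: {change:.1f}% (target {target}%)")
print("Objective met" if change >= target else "Objective missed")
```

The point isn’t the arithmetic, which is trivial; it’s that a clearly stated objective tells you exactly which two numbers to compare.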

Without proper objectives, how or what to measure becomes an exercise in guessing, much like Bob and I had to do. To take the guesswork out of your marketing measurement, it needs to begin as part of your planning process. That’s where the rubber hits the road.


Aligning Interests


Last week something strange happened in front of my house that knocked out my power. I’m glad it happened on a relatively cool day rather than on a hot, sticky day like today.

While I was inconvenienced by about 12 hours without power, I’m thankful for the inspiration for this month’s newsletter, which talks about the importance of aligning marketing’s interests with those of the whole organization and how doing measurement properly helps to get you there.


As I opened my front door Wednesday morning to toss a few items into the recycle bin, I quickly realized that something had gone terribly wrong in front of my house.

From my front door, I saw a police officer, two firefighters, three Toronto Hydro linemen, an Atlas Van Lines driver, their respective vehicles, flashing lights, barricades, pylons and a cat. Taking in this scene, it quickly sank in that the large Atlas moving van with live hydro wires draped across it was probably the cause of all the commotion.

Here’s what happened. The van was very tall. The overhead hydro wires, which reach across the street to feed electricity into the houses on my side of the street, hang very low. Tall van + low wires = problem. As the van drove up my street, it snagged and pulled down the wires that feed electricity into MY house, which knocked out my power.

My neighbour Blair saw the whole thing happen. He called 911. The 911 dispatcher called the police, the firefighters and the hydro guys. No one knows who called the cat, or why the cat was there other than to hold the humans in contempt.

Over the next three hours, I had productive conversations with all of the aforementioned, as well as with my insurance agent, the claims adjuster, a contractor and an electrician. All were professional and courteous. But, here’s the thing.

Everyone I talked to had a different agenda, a different boss to answer to, and a different view on how to proceed. I found this both interesting and frustrating, yet not at all unusual. To varying degrees, most organizations experience this.

While this “organization” had been assembled hastily to address the downed power lines situation, it behaved as most organizations behave. That is, the first concerns of the individuals involved were guided by their own self-interests. More importantly, they were able to find common ground within those interests and come together around common objectives.

What does this have to do with marketing measurement? I thought you’d never ask! Some of the biggest challenges and benefits of marketing measurement are related to getting everyone on the same page and aligning their interests.

Aligning marketing’s objectives with those of the organization is critical to both the success of marketing measurement efforts and the success of the organization at meeting its overall objectives.


Here are three key principles to achieving both types of success:

1. The whole organization must commit to marketing measurement.

Marketing and other parts of the organization need to mobilize around a clearly defined measurement objective, such as finding the best and most effective ways of spending marketing budgets. Marketing can’t and shouldn’t go it alone. It needs support and commitment from the rest of the organization for measurement to work and lead to better spending decisions.

2. The organization and marketing must jointly commit to a measurement methodology.

If marketing unilaterally develops an approach to marketing measurement, others in the organization might think that marketing developed the approach with its own self-interests in mind. That is, they might assume that the methodology is biased towards showing that the marketers in question are brilliant and highly effective.

On the other hand, if marketing involves other elements of the organization that might naturally have competing interests or alternate perspectives on how to measure marketing, then those “competing” interests will bring more balance to the methodology and more acceptance by all of the results.

A joint commitment to a methodology means they must agree on a way to measure marketing’s success. I define success as marketing meeting its objectives and helping the organization to meet its objectives. Objectives-based measurement forces alignment around the objectives themselves.

3. The organization and marketing must jointly decide what to measure.


Remember that marketing’s purpose is to attract customers who create the most value for the whole organization. That means you need input from each key functional area to know how they each define value, high value customers and profitable customer behaviour. Those definitions will point the way to the key performance indicators that should be included in your measurement.

Including a range of metrics that matter to all aspects of the organization will mean that you will be measuring marketing according to how the whole organization defines success. It will also be easier to get the support and data you need to measure marketing in terms that everyone will understand.

In an organization, everyone has a role to play. Each person has his or her own biases and priorities. At times, it can seem as though different people and parts of the organization have competing interests.

Effective organizations find and focus on the common objectives within those competing interests. One of the most important benefits of measuring marketing’s results against its objectives is that, done properly, those objectives will need to align with those of the overall organization.

Taste of Marketing Measurement

Last month I attended ‘Taste of the Danforth’, my favourite summer event in Toronto. Now in its 19th year, this weekend-long event along a 3km stretch of Danforth Avenue typically attracts around 1.3 million people. I live nearby, attend almost every year and generally eat too much while I’m there.

I planned to meet a friend on the Danforth Friday evening, but when the forecast called for rain, we called off our plans. The forecast turned out to be wrong, and as I sat at home on a fairly dry Friday night, I wondered just how many other people didn’t go to Taste of the Danforth due to the threat of rain, and how much that might have reduced sales for each restaurant participating in the event.

Marketing Effectiveness

I also thought about how event organizers seemed to have been very effective at creating awareness for the event. I saw a lot of local media coverage and I noticed how Taste of the Danforth was listed in every what’s-going-on-in-Toronto/Ontario listing I saw in print and on-line.

Marketing effectiveness measurement tends to focus on whether specific program objectives were achieved, such as attracting and keeping profitable customers and creating value for the business. Yet, as I was reminded by sitting at home when I should have been out eating too much, marketers can do everything right and still end up with bad results due to factors outside of their control, such as bad weather (or a bad forecast).

Event Objectives

I’m guessing the event’s main marketing objectives are to generate awareness, attendance and trial of both the area and of individual restaurants, and to also increase post-event traffic and revenue for local merchants.

While I sat home that night, I watched a segment of a local newscast about Taste of the Danforth in which a restaurant owner told the interviewer that this event generally makes his year. I imagine that some restaurants might aim to sell enough souvlaki in three days to possibly pay for a renovation or to simply make enough that weekend to stay in business for another year.

Objectives like this point the way towards some of the metrics I would include on a scorecard to measure the effectiveness of the marketing, but there are other factors to consider.

Those Pesky External Factors

For this event, the number one external factor outside of marketing’s control that can impact its success is the weather. I imagine that the event organizers must make the appropriate sacrificial offering (lamb would seem appropriate) to the Greek god (Zeus?) most responsible for weather. A hard rain could severely reduce attendance for an evening or whole weekend.

The Measurement Dilemma

There are three general approaches to choose from regarding external factors.

1. Ignore Them

This option will always be as tempting as all the delicious foods one finds at Taste of the Danforth. I can’t imagine how I’d ever determine how many people didn’t go to Taste of the Danforth that Friday night because they thought it might rain, or how many sticks of souvlaki went unsold as a result.

2. Model Them

This option is worth considering when you have a lot of data. If the organizers have 19 years’ worth of data correlating daily attendance with weather forecasts and actual rainfall, that would be a start. Still, for most marketers, the costs of sophisticated models and analytics can quickly become too high relative to the size of the marketing expense they’re meant to measure.
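As a toy illustration of the modelling option, here’s a sketch that correlates rainfall with attendance. The daily figures are entirely invented, and a real model would need many more years of data and variables than this:

```python
# Toy example of the "model them" option: measure how strongly
# rainfall and event attendance move together, using the Pearson
# correlation coefficient. All data points below are invented.

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical history: daily rainfall (mm) vs. attendance (thousands)
rainfall = [0, 2, 15, 0, 8, 30]
attendance = [450, 430, 300, 470, 380, 220]

r = pearson(rainfall, attendance)
print(f"Correlation between rainfall and attendance: {r:.2f}")
```

A strongly negative coefficient would suggest rain really does keep people home, which is the first step toward adjusting attendance results for weather.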

3. Track Them

I think it is well worth tracking any external factor that could impact results, such as weather, competitive activity and labour disruptions. I arbitrarily score each external factor on a five-point scale, where the low end of the numerical scale corresponds to “very negative” and moves up through “somewhat negative”, “no impact”, “somewhat positive” and “very positive”.

I keep this very unscientific scoring of external factors separate from the rest of the scoring I do on the factors that seem to be within marketing’s control and on the results that can reasonably be attributed to the marketing. I don’t muddy the waters by including the external factors in the calculation of the overall score, but I do note them and score the severity of those factors.
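Here’s a minimal sketch of that record-keeping: external-factor scores on the five-point scale sit alongside the program’s performance score but are never folded into it. The program name, score and factors below are illustrative assumptions:

```python
# Track external factors on a five-point scale, kept separate from
# the performance score for factors within marketing's control.

IMPACT_SCALE = {
    1: "very negative",
    2: "somewhat negative",
    3: "no impact",
    4: "somewhat positive",
    5: "very positive",
}

def record_program(name, performance_score, external_factors):
    """Store the marketing score and external-factor notes side by side,
    without letting the external factors change the overall score."""
    return {
        "program": name,
        "performance_score": performance_score,  # controllable factors only
        "external_factors": {
            factor: IMPACT_SCALE[score]
            for factor, score in external_factors.items()
        },
    }

# Hypothetical example: a rainy festival weekend
program = record_program(
    "Taste of the Danforth 2012",
    performance_score=82,
    external_factors={"weather": 1, "competitive activity": 3},
)
print(program)
```

Keeping the two apart means a later review can ask “how well did marketing execute?” and “what else was going on?” as separate questions.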

The value of tracking external factors comes when you analyze a group of marketing programs, such as past years of Taste of the Danforth. Imagine looking back and seeing great year-to-year variations in attendance and not knowing in which years it rained all weekend, or there was a transit strike.

Without tracking external factors, it would be easy to come to the wrong conclusions about the effectiveness of specific marketing programs. It would be hard to decide whether specific programs should be repeated or changed and also to learn which tactics were the most and least effective.

Always track those factors outside of your control and the degree to which they may have helped or hindered your results. That will at least give you a taste of what else might have been going on at the time that may have impacted your results.

Down In The Alley

A few years ago, the couple who live across the alley behind my house decided to host a small gathering on their driveway for those of us living nearby on either side of the alley. A few homes had sold recently and they decided that this would be a good way to welcome the new neighbours and help everyone get to know each other a little better. It was fun, and this annual tradition lives on 4 or 5 years later.

This year’s edition of the alley event happened last Saturday afternoon. One of the things I enjoy about these neighbourhood gatherings is having the time to learn about each other in a relaxed setting.

On Saturday, I was having a good chat with my exceedingly well-named neighbour, Rick. He asked about the nature of my work and so I gave him a bit of an overview about how I measure marketing effectiveness. That quickly led to him asking me this question:

“What did you think of General Motors’ announcement that they were pulling their Facebook ads?”

My first reaction was that I supposed GM felt their ad spending on Facebook wasn’t working.  As we continued, our conversation shifted to the timing of GM’s announcement, mere days before Facebook’s highly anticipated initial public offering. We concluded that something must have gone wrong in their relationship with Facebook for GM to announce their decision at such a sensitive time.

After Rick, who is an actor, entertained the local kids by juggling while walking on stilts, I went home, considered our conversation, did some research and organized my thoughts.

What I Found & My Thoughts – #1

In case you missed it, GM announced they would eliminate $10 million of advertising spending on Facebook. This still leaves another $30 million which they spend on their Facebook marketing initiatives, although I don’t believe any of that spending becomes Facebook revenue.

Clearly, GM thinks there’s an audience on Facebook worth engaging through marketing, but not so much for advertising, at least not yet.

What I Found & My Thoughts – #2

The $10 million is a drop in the bucket compared to GM’s 2011 total US ad spending of $1.8 billion ($3 billion globally), and Facebook’s 2011 revenue total of $3.7 billion, most of which was for advertising.

Smart marketers who spend $3 billion annually on advertising almost certainly also measure the effectiveness of that spending pretty rigorously. It is a natural part of the process to question, evaluate and optimize all parts of that spend on an ongoing basis, and the Facebook ad spend would be subject to that scrutiny.

What I Found & My Thoughts – #3

It has recently been reported that Facebook and GM are back in talks to renew GM’s advertising and that GM is asking Facebook for more data to bolster their measurement efforts.

Perhaps the problem was not so much that GM’s Facebook advertising didn’t work, but rather that GM couldn’t prove whether or to what degree it did, or didn’t. I also wonder whether GM’s pre-IPO announcement was a negotiating tactic to get the data they want from Facebook.

What I Found & My Thoughts – #4

I noticed that following GM’s announcement, their rival Ford tweeted something to the effect that Facebook ads are effective when used properly. Let’s assume the people at Ford are also pretty smart and measure rigorously, too. By implying they know their ads are effective, their tweet also implies they are better than GM at measuring Facebook ad success, and thereby raises some related questions:

  • Does Ford use Facebook ads differently and in a way that makes measurement easier?
  • Is Ford better than GM at setting measurable objectives for each ad?
  • Does Ford already get better Facebook data than GM?
  • Was Ford’s tweet just an attempt to position themselves as smarter than GM?

We can’t know the answers to these questions, but we can remind ourselves of a few marketing measurement fundamentals:

Set clear and measurable marketing objectives: To know whether a marketing program worked, you have to first define exactly what it would mean for your program to “work”. In other words, what outcomes would make you happy?

Your objectives must be reasonable and attainable: A clearly defined objective isn’t necessarily attainable. A good outcome can still fall well short of an unreasonable objective, and be classified as a failure, when in fact the failure was in the setting of the objective.

You need to be able to get the data you need, consistently, reliably and cost-effectively: This may be at the crux of GM’s discussions with Facebook. GM may know exactly where they want to go with their Facebook ads, but they just can’t tell if they’re getting there, which, when you’re behind the wheel of a $10 million ad spend, is sort of important.

It will be interesting to see whether GM and Facebook can reach an agreement. My guess is that GM won’t want to walk away from advertising to Facebook’s massive and targetable audience, particularly if it seems their competitor(s) may be having success in this regard. Maybe GM just needs to know if they’re meeting their objectives and whether their Facebook ad spend has them driving on a six-lane superhighway, or somewhere down in the alley.

Inputs & Outputs

One of the challenges in writing a monthly newsletter is writer’s block. It generally hits me in one of two ways. Either I have no idea what to write about, or I have an idea, but no story or setting for the idea.

I have two approaches to deal with writer’s block. I find that going for a walk in nearby Monarch Park is a great way to clear my head and then somehow the ideas come to me. Finding a story or a setting for my idea can be harder. Something has to happen so I can connect the idea to a story. Usually, I need to read something or get out and do something. Through interacting with a new person or situation, a story sometimes emerges.

Monday evening, faced with neither an idea nor a story for this newsletter, I ventured out to a McGill Alumni event at the Carlu where I could mingle and meet people. Among those I met were two relatively recent graduates (relative to me, that is) with whom I had a very enjoyable, wide ranging conversation. Unfortunately, nothing in our conversation triggered an idea or a story for this month’s newsletter, although I was happy to learn about “The Undercover Economist” Tim Harford, whose writing I’m already enjoying.

On my way home, I thought about other people I’d met lately and then the idea came to me. I realized how a discussion a couple of weeks ago with a highly skilled and experienced market researcher related to how marketing scorecards are an effective way to organize diverse types of data.

We discussed how the various things that can be measured about marketing are either inputs, the things that influence the desired customer behaviour, or outputs, the results of that customer behaviour. This concept can be very helpful in determining how to organize the marketing metrics on your scorecard, and in deciding how to weight them within your overall scoring system. Let’s look at some examples.

Marketing Input Metrics

First of all, there are two broad categories of inputs: those you control and those you don’t. Inputs under your control are generally related to how well you execute the program you are measuring. Examples could include:

  • The percentage of the in-store displays or signs you printed and distributed that were actually and properly put up in store
  • The percentage of all the promotional labels or neck tags your merchandising partner actually affixed to your products
  • The number of and cost per impression of all your on and off-line marketing communications related to this program

Inputs outside of your control that might impact the success of your program could include:

  • Competitive activity – they dropped or increased their price, promoted heavily while your program was in market, had a PR disaster on Twitter, etc.
  • Weather – no one showed up at your well promoted event because of a massive snow storm

Marketing Output Metrics

There are also two types of outputs, but they are defined a little differently. The first are those outputs or results that are directly attributable to your marketing program. Examples might include:

  • Number of unique visitors to a landing page on your website built for this program
  • Click through rate from your landing page to the buying page
  • Number of new customers who bought using your promotion codes

The other type of outputs are those that are potentially but not definitely or entirely attributable to your program.  These are typically key business performance metrics that can be influenced by a variety of inputs. Examples might include:

  • Revenue for the brand being promoted
  • Market share of that brand
  • Average price per unit sold during the program

Grouping your metrics in this logical fashion on your scorecard can make it easier for you to select your metrics and make decisions about how to weight them by group. Inputs directly under your control and outputs directly attributable to your program should be more heavily weighted than outputs potentially attributable to your program. This is especially true if you tend to have a lot of programs in the market simultaneously. Whatever weightings you use, be consistent over time to ensure you can meaningfully compare programs to each other.
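As a sketch of that weighting approach, here’s how a simple scorecard might combine the three groups. The group weights and scores below are illustrative assumptions, not recommendations; the point is that the weights, once chosen, stay fixed across programs:

```python
# Weight metric groups on a scorecard: controllable inputs and
# directly attributable outputs count for more than outputs only
# potentially attributable to the program. Weights are hypothetical.

GROUP_WEIGHTS = {
    "inputs_controlled": 0.35,     # execution factors you control
    "outputs_attributable": 0.45,  # results directly tied to the program
    "outputs_potential": 0.20,     # broader business metrics
}

def overall_score(group_scores: dict) -> float:
    """Weighted average of group scores (each scored 0-100)."""
    return sum(GROUP_WEIGHTS[g] * s for g, s in group_scores.items())

score = overall_score({
    "inputs_controlled": 90,     # e.g. 90% of displays properly set up
    "outputs_attributable": 75,  # e.g. promo-code sales vs. objective
    "outputs_potential": 60,     # e.g. brand revenue vs. baseline
})
print(f"Overall program score: {score:.2f}")
```

Because the weights sum to 1.0 and never change between programs, the overall scores remain comparable from one program to the next.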

Exclude those inputs outside of your control from your overall calculations. It would be very hard to set objectives and to score against those objectives, or to know how much of an impact they really had. We know that a blizzard of the century will keep more people home than a light dusting of snow, but the amount of snow that makes people decide to stay home is different for everyone. Still, note whether you think external factors significantly impacted your results.

As I wrote this, I realized my opening story does connect to the idea for this newsletter, after all. My story was about an input, an activity under my control, in this case networking and meeting people. That created an output that was at least partially attributable to my networking efforts. I may have still come up with the idea without going to the Carlu, but I might not have found my story!

From My Perspective

I like walking. Many mornings, my walk takes me to nearby Monarch Park. Over the last few years, I’ve frequently taken my camera to the park as part of a photographic self-improvement exercise which involves photographing the same subjects over and over.

Going back to the same park repeatedly forces me to develop my ability to see and capture photos of the same subjects in new and creative ways. I’ve learned there isn’t one right way to capture any one subject, and there are usually many fine ways to capture the same subject.

Here’s what I’ve observed about the two main variables I have to work with: light and composition.

  • The light in the park can look very different in different seasons, at different times of day and in different weather.
  • As for composition, I see the best new photos when I change my perspective by changing where I’m walking or standing.

A recent foggy morning created new circumstances. The same old views looked very different due to the soft light and the masking effect of the fog. In search of a new composition, I changed my perspective by leaving my usual route and walking through a more wooded area. Through the combination of the fog and a different perspective, I quickly saw something I had never seen, which led to my capturing one of my favourite images of the park.