Can Communication Success be Quantified?

31 03 2010

Can communicators quantify their success? The short answer is sort of. Measuring success in public relations is a controversial and messy business, which is why I didn’t even mention it in Explaining Research. I felt that detailed discussion of the issues would detract from the utility of the book for researchers, who are more interested in learning how to explain their research than how public information officers grapple with the “sausage-making” of measurement.

However, I was reminded of how persistent and frustrating the measurement issue remains when a PIO colleague at a major research laboratory asked for advice about a new boss’s request for a quantitative measure of the office’s media relations success. The new metric-minded boss came from a marketing background—where measuring results is a holy quest—rather than from science communication, a more complex communication environment. In an e-mail message, my colleague asked some experienced communicators, including me, to discuss “what captures how well we’re doing our jobs without bogging us down so much with collecting or analyzing information that we can’t do our jobs.”

So, for the benefit of PIOs—and for those researchers interested in such sausage-making—here are some of the issues and pitfalls we explored:

One major measurement pitfall, in my opinion, is reliance on a long-criticized metric called “advertising value equivalent” (AVE): a dollar figure for how much media stories would have been worth if they had been paid advertising. Developing AVEs for news stories is an incredibly expensive proposition. One news office manager at a university where I worked spent well over $10,000 per year (she wouldn’t reveal the actual cost) with a company that produced an annual AVE for the office’s media clips. Of course, the AVE was huge—many hundreds of thousands of dollars as I recall—and she advertised that amount to her superiors as a meaningful quantitative measure of media relations success.
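
For readers who haven’t encountered the mechanics, AVE vendors typically price the space a story occupies at the outlet’s advertising rate, sometimes inflated by an arbitrary “editorial credibility” multiplier. Here is a minimal sketch of that arithmetic (all rates and multipliers are invented for illustration):

    # Rough illustration of how an AVE is typically computed: the space a story
    # occupies is priced at the outlet's ad rate, sometimes with an arbitrary
    # "credibility" multiplier. All numbers here are hypothetical.
    def advertising_value_equivalent(column_inches, rate_per_inch, multiplier=1.0):
        """Dollar 'value' an AVE report would assign to a single print clip."""
        return column_inches * rate_per_inch * multiplier

    # A 20-column-inch story in a paper charging $300 per column inch,
    # with a 2.5x "credibility" premium some vendors apply:
    print(advertising_value_equivalent(20, 300, 2.5))  # 15000.0

Multiply that across a year of clips and the totals look impressive, which is exactly why the numbers get quoted to superiors.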

But AVEs are very poor measurements for many reasons. The best-articulated case against them that I’ve found is a post on the blog MetricsMan that I recommend reading. Basically, MetricsMan declares AVEs invalid because

  • They don’t capture the inherent value of news articles as credible, independent validation of a story, as opposed to the paid appearance of an ad.
  • They don’t measure the impact of an article on a reader.
  • They don’t take into account other important media relations services such as strategic counsel, crisis communications and viral campaigns.
  • They don’t measure the value of keeping negative news out of the media or of coping with it in communication terms.
  • They don’t distinguish between articles that appear in publications important to the institution, versus those that are less important. AVEs only count the cost of advertising in the publication.
  • They count even predominantly negative articles as positive comparison value.
  • There is no way to calculate the value of a hit on the front page of a newspaper or a cover story in a magazine, because ads aren’t sold in those places.
  • AVE results may be going up when other legitimate communication measures, such as communication of messages, or share of positive coverage, may be going down.
  • AVEs don’t cover such non-traditional media as blogs or positive conversations on social networking sites.

In our e-mail discussion, veteran research communicator Rick Borchelt summarized the problem of quantification by telling our fellow PIO:

I think the take-away message is that there is no really good quantitative metric for media relations success, since media relations is/are an assessment of your relationships with media, not of how much ink they spill about you. You can’t really say with a straight face most of the time that the release you put on EurekAlert! generated the story in the New York Times that was read by the junior staffer of the senior Senator who put an earmark in the DOE appropriations bill that got the new molecular biology building. What we struggle with is how to prove the negative: how much worse would a story about the lab have been if you didn’t know the reporter and couldn’t talk her off the ledge of a sensational (but inaccurate) story? Or factor in the opportunity cost of giving one reporter an exclusive that pisses off a dozen others. Or how much more likely a reporter with whom you have a relationship is to come to the lab for comment on a breaking story, out of all the contacts in his Rolodex. These are intangibles.

Another veteran communicator, Ohio State’s Earle Holland, recommended that our colleague ask some basic questions before even beginning to address the measurement issue:

You said that the new boss asked you to “come up with a way to measure how well we’re doing our jobs.” First, you need to answer the question of “What is your job?” in both your eyes and his. Otherwise you won’t know what’s there for comparison—you can’t have a metric without a scale to gauge it against…. Is the goal to get mountains of news media coverage? To what end? Is it to protect the reputation of [the laboratory]—only good news goes out? Is it to motivate actions or opinions of key constituencies—something that’s probably impossible to gauge causally? Or is it to convey interesting, accurate science information to the public because [the laboratory] is a publicly supported enterprise and the public deserves to know? Who are the constituencies you want to reach, and which are more important than others—you can’t just say “they all are.” My point is that you have to know what would be seen as success before you try to measure how successful you are.

To those cogent comments, I would add that when a boss asks for any kind of measurement, a reasonable response is “What will you use that measurement for?” I have always followed a rule that if some piece of data is not necessary for making a specific managerial decision, then it is not worth gathering.

In the case of the news office manager cited above, she declared, “I will use the AVE for our news clips in advocating for our budget.” But in my experience, such information has never had any significant effect on budget-making. Other factors, such as the economic state of the institution, political advocacy, and persuasion, have been far more important.

Even given the caveats and complexities of quantification, though, there are some legitimate numbers that PIOs can offer management, as long as they are put in the context of the overall communications program.

For example, Holland and his colleagues in OSU’s Research Communications office produce an annual report that includes numbers: how many stories produced, how many times they appeared in major media, how big the audiences for those publications were, etc. But these numbers are intended only to give a sense of productivity, not to suggest impact.

The report also explains how the stories were distributed—via blogs, posting on EurekAlert!, Newswise, etc.—and quantifies the audiences for those outlets. And the report quantifies the number of visitors to OSU’s research Web sites. Such data are available directly from the news services and, for the Web sites, from Google Analytics. The appearance of news stories on Google News can also be monitored using Google Alerts.
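
As a rough sketch of how such productivity counts might be assembled, here is a small Python example that tallies clips by outlet and claimed audience by research area; the file name and column layout are hypothetical, not OSU’s actual reporting format:

    # Tally media clips for an annual productivity report.
    # "clips_2009.csv" and its columns (outlet, audience, research_area)
    # are assumptions for illustration only.
    import csv
    from collections import Counter, defaultdict

    clips_by_outlet = Counter()
    audience_by_area = defaultdict(int)

    with open("clips_2009.csv", newline="") as f:
        for row in csv.DictReader(f):
            clips_by_outlet[row["outlet"]] += 1
            audience_by_area[row["research_area"]] += int(row["audience"])

    print("Total clips:", sum(clips_by_outlet.values()))
    for outlet, count in clips_by_outlet.most_common(10):
        print(f"  {outlet}: {count} stories")
    for area, audience in sorted(audience_by_area.items()):
        print(f"  {area}: claimed audience {audience:,}")

None of this measures impact, of course; it only documents output, which is the spirit in which the OSU report presents its numbers.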

Importantly, however, the annual report also documents the research areas of the university from which news came, to demonstrate the comprehensiveness of coverage. And, it discusses the broad range of other ways the office uses stories, interacts with reporters and serves faculty members. Thus, the annual report goes beyond mere numbers to present a full picture of the office’s activities.

Such documentation of productivity is important. Also critical, however, and often neglected, is proactively making sure that key administrators and other audiences are aware of news stories and other communications achievements.

My favorite example of such proactive demonstration is the process that Borchelt established to remedy the lack of visibility for important media stories, when he was communications director at Oak Ridge National Laboratory. “Here was an institution focused on media stories as their goal. So, they would get this great story in the New York Times, and they would mention the story when visiting their congressman, and he’d ask ‘What story?’ ”

Thus, Borchelt began sending major media stories, along with a letter from the director, to important members of Congress, as well as to program officers and directors of the DOE, which funds the laboratory. “The letter would say ‘Thank you so much for giving us the opportunity to work on this exciting research that is reported in today’s New York Times,’ ” said Borchelt. “And we would often append the news release, because it tended to have a better explanation of what we were doing; and also because we could acknowledge the funding agency, so they could see that they got credit. It was hellishly labor-intensive, but incredibly useful,” said Borchelt. Members of Congress would use the articles in their communications to colleagues and even read them into the Congressional Record.

So, although media relations productivity can be sort of quantified, numbers are not enough. They must constitute only one part of a comprehensive effort to communicate productivity in all its forms to the people who sign the paychecks.



2 responses

31 03 2010
Rick Borchelt

Well, actually, I think “communications” success can be quantified — in public relations the best measures are indicators of satisfaction with the relationship(s) that exist between the lab/university and its various stakeholders (cf. Jim Grunig’s work for the Institute for Public Relations in the “Excellence Study”). These are quantifiable (also qualifiable) and can be tracked over time, and perhaps even correlated with the presence of a communications campaign if you use focus groups and recall studies. Quantifiable, but VERY expensive. “Media relations” success as a single parameter — and that’s what we were discussing with Dennis and Earle — not so much, although you can also use indicators of satisfaction that various members of the media enjoy with your organization. What you still can’t do is track an individual placement to a particular media relations strategy with any degree of fidelity. Absent perjury.

27 05 2010
George Rossolatos

http://www.grossolatos.com/blog/

From simple advertising message recall to depth of message inscription: How to quantify a missing link in advertising effectiveness

As is well known among brand-savvy marketers, the recording and close monitoring of ad message recall in the context of an advertising-effectiveness tracking survey is a must-have. However, when it comes to quantifying the depth of inscription in a segment’s memory, which works as an indispensable qualifier and major determinant of a campaign’s memorability, first-level descriptive stats won’t do the job.
As a meta-analytic approach to determining the depth of main-message recall from longitudinal tracking-survey data, I designed the Episodic/Semantic Index. It attempts to quantify the qualifying difference between the aggregate score normally yielded by tabulating raw percentage data from open-ended questions and what that score practically means for deciding whether to discontinue an ad campaign or add further fuel to it, based on the effective brand associations that are built over time.
The composite index, which draws on the relevant literature on episodic/semantic memory (e.g., http://www.allbusiness.com/marketing/advertising/316202-1.html ), yields crucial insights into the effectiveness of a campaign, especially once regressed against media pressure (actual GRPs back-weighted to 30’’ spot equivalents, the same as one would do in an awareness-index modeling exercise). In combination with qualitative insights about the various executional elements of a campaign, the episodic/semantic index can provide a useful benchmark for monitoring the ongoing impact of a campaign’s main message, as well as for gauging its memorability in off-air periods. The degree of episodic memory is used as a proxy for the various elements recalled in an ad-content recall section, whereas the degree of semantic memory is used as a proxy for correct main-message recall (basically the commercial’s tagline).

On a methodological level, here’s how it works:

– First, you select correctly semantically processed statements from the main-message recall section of the tracker and produce a single score
– Then you identify correctly episodically processed statements from the ad-content recall section of the tracker and produce a single score
– Next, category averages are produced for both episodically and semantically processed statements, and straight after that inter-brand indices
– Finally, having imported the cross-category comparative aspect into the above index-generation procedure, you return to an intra-brand level and produce a ratio per brand by dividing the semantic index by the episodic index; this constitutes the episodic/semantic ratio, or the depth of inscription of brand comms in a segment’s memory
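
A minimal sketch of how the last two steps might be computed, assuming percent recall scores per brand and inter-brand indices expressed against the category average at 100 (both conventions are my assumptions, not necessarily the author’s exact specification):

    # Illustrative sketch of the episodic/semantic ratio described above.
    # Brand names, scores, and the indexing-to-100 convention are invented.
    from statistics import mean

    # Percent of respondents whose open-ended answers were coded as
    # correctly processed, per brand.
    semantic_scores = {"BrandA": 34.0, "BrandB": 21.0, "BrandC": 28.0}  # main-message recall
    episodic_scores = {"BrandA": 52.0, "BrandB": 47.0, "BrandC": 39.0}  # ad-content recall

    # Category averages, then inter-brand indices (each brand vs. the category average, = 100).
    sem_avg = mean(semantic_scores.values())
    epi_avg = mean(episodic_scores.values())
    semantic_index = {b: 100 * s / sem_avg for b, s in semantic_scores.items()}
    episodic_index = {b: 100 * s / epi_avg for b, s in episodic_scores.items()}

    # Back at the intra-brand level: semantic index divided by episodic index
    # gives the episodic/semantic ratio, i.e. the depth of inscription.
    depth_of_inscription = {b: semantic_index[b] / episodic_index[b] for b in semantic_scores}

    for brand, ratio in sorted(depth_of_inscription.items()):
        print(f"{brand}: {ratio:.2f}")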

For the sake of clarity, in cases where multiple executions are aired in the same time frame you may want to append a note on the relative contribution of each campaign’s elements to the production of the above indices, thus shedding further light on the extent to which a variation of an executional strategic platform contributes to effective, long-term brand associations or even, if a previous campaign has been discontinued, gauging what mnemonic traces are still operative.



