Can Communication Success be Quantified?

31 03 2010

Can communicators quantify their success? The short answer: sort of. Measuring success in public relations is a controversial and messy business, which is why I didn’t even mention it in Explaining Research. I felt that a detailed discussion of the issues would detract from the book’s utility for researchers, who are more interested in learning how to explain their research than in how public information officers grapple with the “sausage-making” of measurement.

However, I was reminded of how persistent and frustrating the measurement issue remains when a PIO colleague at a major research laboratory asked for advice about a new boss’s request for a quantitative measure of the office’s media relations success. The metric-minded new boss came from a marketing background, where measuring results is a holy quest, rather than from science communication, a far more complex environment to measure. In an e-mail message, my colleague asked some experienced communicators, including me, to discuss “what captures how well we’re doing our jobs without bogging us down so much with collecting or analyzing information that we can’t do our jobs.”

So, for the benefit of PIOs—and for those researchers interested in such sausage-making—here are some of the issues and pitfalls we explored:

One major pitfall, in my opinion, is reliance on a long-criticized metric called “advertising value equivalent” (AVE): a dollar figure estimating what media stories would have been worth had they been paid advertising. Developing AVEs for news stories is also an incredibly expensive proposition. One news office manager at a university where I worked spent well over $10,000 per year (she wouldn’t reveal the actual cost) with a company that produced an annual AVE for the office’s media clips. Of course, the AVE was huge (many hundreds of thousands of dollars, as I recall), and she advertised that amount to her superiors as a meaningful quantitative measure of media relations success.
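
To make concrete what gets criticized, here is roughly how an AVE is computed: measure the size of each clip, multiply by the outlet’s advertising rate, and sum. The sketch below is a minimal illustration only; the outlets, ad rates, and the “editorial multiplier” some vendors apply are all invented for the example, not any real vendor’s method.

    # Minimal sketch of a typical AVE calculation. All figures are
    # invented for illustration; real vendors' methods vary.

    # Hypothetical media clips: (outlet, coverage size in column inches)
    clips = [
        ("Metro Daily", 12.0),
        ("National Science Weekly", 8.5),
        ("Regional Tribune", 20.0),
    ]

    # Hypothetical per-column-inch display-ad rates for each outlet
    ad_rates = {
        "Metro Daily": 150.00,
        "National Science Weekly": 425.00,
        "Regional Tribune": 95.00,
    }

    # Some vendors multiply the raw figure by 2-3x on the theory that
    # editorial coverage is "worth more" than advertising, a practice
    # critics single out.
    EDITORIAL_MULTIPLIER = 2.5

    ave = sum(inches * ad_rates[outlet] * EDITORIAL_MULTIPLIER
              for outlet, inches in clips)
    print(f"Estimated AVE: ${ave:,.2f}")  # Estimated AVE: $18,281.25

Note that nothing in that arithmetic reflects whether a story was accurate, positive, or read by anyone the institution cares about, which is precisely the critics’ point.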

But AVEs are very poor measurements for many reasons. The best-articulated case against them that I’ve found is a post on the blog MetricsMan, which I recommend reading. Basically, MetricsMan declares AVEs invalid because:

  • They don’t capture the inherent value of news articles as credible, independent validation of a story, as opposed to the paid appearance of an ad.
  • They don’t measure the impact of an article on a reader.
  • They don’t take into account other important media relations services such as strategic counsel, crisis communications and viral campaigns.
  • They don’t measure the value of keeping negative news out of the media or of coping with it in communication terms.
  • They don’t distinguish between articles that appear in publications important to the institution and those that appear in less important outlets; AVEs count only the cost of advertising in the publication.
  • They count even predominantly negative articles as positive value in the comparison.
  • There is no way to calculate the value of a hit on the front page of a newspaper or a cover story in a magazine, because ads aren’t sold in those places.
  • AVE results may be going up even as other legitimate communication measures, such as communication of key messages or share of positive coverage, are going down.
  • AVEs don’t cover such non-traditional media as blogs or positive conversations on social networking sites.

In our e-mail discussion, veteran research communicator Rick Borchelt summarized the problem of quantification by telling our fellow PIO:

I think the take-away message is that there is no really good quantitative metric for media relations success, since media relations is/are an assessment of your relationships with media, not with how much ink they spill about you. You can’t really say with a straight face most of the time that the release you put on EurekAlert! generated the story in the New York Times that was read by the junior staffer of the senior Senator who put an earmark in the DOE appropriations bill that got the new molecular biology building. What we struggle with is how to prove the negative: how much worse would a story about the lab have been if you didn’t know the reporter and could talk her off the ledge of a sensational (but inaccurate) story? Or factor in the opportunity cost of giving one reporter an exclusive that pisses off a dozen others. Or how much more likely a reporter with whom you have a relationship is to come to the lab for comment on a breaking story, out of all the contacts in his Rolodex. These are intangibles.

Another veteran communicator, Ohio State’s Earle Holland, recommended that our colleague ask some basic questions before even beginning to address the measurement issue:

You said that the new boss asked you to “come up with a way to measure how well we’re doing our jobs.” First, you need to answer the question of “What is your job?” in both your eyes and his. Otherwise you won’t know what’s there for comparison—you can’t have a metric without a scale to gauge it against…. Is the goal to get mountains of news media coverage? To what end? Is it to protect the reputation of [the laboratory]—only good news goes out? Is it to motivate actions or opinions of key constituencies—something that’s probably impossible to gauge causally? Or is it to convey interesting, accurate science information to the public because [the laboratory] is a publicly supported enterprise and the public deserves to know? Who are the constituencies you want to reach, and which are more important than others—you can’t just say “they all are.” My point is that you have to know what would be seen as success before you try to measure how successful you are.

To those cogent comments, I would add that when a boss asks for any kind of measurement, a reasonable response is “What will you use that measurement for?” I have always followed a rule that if some piece of data is not necessary for making a specific managerial decision, then it is not worth gathering.

In the case of the news office manager cited above, she declared, “I will use the AVE for our news clips in advocating for our budget.” But in my experience, such information has never had any significant effect on budget-making. Other factors, such as the economic state of the institution, political advocacy, and persuasion, have been far more important.

Even given the caveats and complexities of quantification, though, there are some legitimate numbers that PIOs can offer management, as long as they are put in the context of the overall communications program.

For example, Holland and his colleagues in OSU’s Research Communications office produce an annual report that includes numbers: how many stories were produced, how many times they appeared in major media, how large the audiences for those publications were, and so on. But these numbers are intended only to give a sense of productivity, not to suggest impact.

The report also explains how the stories were distributed (via blogs, posting on EurekAlert!, Newswise, and other services) and quantifies the audiences for those outlets, as well as the number of visitors to OSU’s research Web sites. Such data are available directly from the news services and, for the Web sites, from Google Analytics. The appearance of news stories on Google News can likewise be monitored using Google Alerts.
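
For offices that want to automate such monitoring, Google Alerts can deliver results as an RSS feed that a short script can tally. The following is a minimal sketch under that assumption; the feed URL is a placeholder for a real alert feed, and feedparser is a third-party Python package.

    # Minimal sketch: list new media mentions from a Google Alerts RSS feed.
    # ALERT_FEED_URL is a placeholder; create an alert and choose RSS
    # delivery to get a real feed address. Requires the third-party
    # package feedparser (pip install feedparser).
    import feedparser

    ALERT_FEED_URL = "https://www.google.com/alerts/feeds/EXAMPLE_FEED_ID"

    feed = feedparser.parse(ALERT_FEED_URL)

    print(f"{len(feed.entries)} new mentions:")
    for entry in feed.entries:
        # Each feed entry carries the story's headline and a link to it.
        print(f"- {entry.title}")
        print(f"  {entry.link}")

A count of mentions produced this way is, of course, exactly the kind of number discussed above: a gauge of productivity and reach, not of impact.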

Importantly, however, the annual report also documents the research areas of the university from which news came, to demonstrate the comprehensiveness of coverage. And it discusses the broad range of other ways the office uses stories, interacts with reporters, and serves faculty members. Thus, the annual report goes beyond mere numbers to present a full picture of the office’s activities.

Such documentation of productivity is important. Just as critical, however, and often neglected, is proactively making sure that key administrators and other audiences are aware of news stories and other communications achievements.

My favorite example of such proactive demonstration is the process that Borchelt established to remedy the lack of visibility for important media stories when he was communications director at Oak Ridge National Laboratory. “Here was an institution focused on media stories as their goal. So, they would get this great story in the New York Times, and they would mention the story when visiting their congressman, and he’d ask ‘What story?’”

Thus, Borchelt began sending major media stories, along with a letter from the director, to important members of Congress, as well as to program officers and directors at DOE, which funds the laboratory. “The letter would say, ‘Thank you so much for giving us the opportunity to work on this exciting research that is reported in today’s New York Times,’” said Borchelt. “And we would often append the news release, because it tended to have a better explanation of what we were doing, and also because we could acknowledge the funding agency, so they could see that they got credit. It was hellishly labor-intensive, but incredibly useful.” Members of Congress would use the articles in their communications to colleagues and even read them into the Congressional Record.

So, although media relations productivity can be sort of quantified, numbers are not enough. They must constitute only one part of a comprehensive effort to communicate productivity in all its forms to the people who sign the paychecks.
