“Wormholes” Author is a Liar and Thief… But Says It’s O.K.

27 09 2013

As the author of the new sci-fi adventure novel Wormholes, I’m a liar and a thief. I’ll explain why, and the reasons I think it’s O.K.

Some background: The idea for the novel had its beginning years ago in a simple question: “What if holes were to suddenly open up into other universes?” As is perhaps the case with most novelists, from that seed of an idea, I began to build a story. And in the process, I invented all kinds of physics. That’s when I became a liar.

I had to invent a scientific-sounding explanation of why, in its travels through the galaxy, our solar system enters a region of lurking wrinkles in spacetime. These wrinkles, I fabricated, constitute weaknesses in the spacetime fabric that cause holes to open up, seemingly at random, from our universe into other universes. So on Earth and on other planets, holes suddenly appear that might intrude into other universes’ interstellar space, into the fiery centers of stars, or onto the surfaces of alien planets. To drive my fictional story, I also invented some exotic physical properties for these “transdimensional apertures” that enabled me to plunge my characters into all kinds of perilous adventures. (I won’t reveal details, because that would give away the plot, and I’d like readers to be surprised.)

I was a bald-faced liar because my physics was all scientific poppycock.

Then I became a thief. I misappropriated the term wormholes to name these apertures, because it was popular and would attract readers. Again, it was poppycock, because scientifically, my “wormholes” are nothing like the theoretical wormholes of real astrophysics.

So, why should I care that I was propagating poppycock? After all, other sci-fi authors devise scientifically ridiculous stuff all the time, from Star Trek to Doctor Who. And sci-fi fans are perfectly willing—like the White Queen in Through the Looking-Glass—to believe six impossible things before breakfast.

However, I felt guilty because in my profession as a science communicator, I had for decades tried to write accurately about real astronomy and astrophysics, working at three of the country’s top universities in the field—Caltech, MIT, and Cornell. Was I betraying my own principles and incurring the scorn of scientists whom I greatly respect?

Fortunately, I’ve been able to bury that nagging guilt beneath some pretty substantive—and I think interesting—rationales.

For one thing, I wanted to grab readers and lure them into exploring real science, just as I was captivated as a boy by the imaginative writings of legendary science fiction writers Isaac Asimov, Robert Heinlein, Ray Bradbury and Arthur C. Clarke. Their books, which launched flights of fictional fancy from real science, inspired me to want to know more about science, and ultimately to write about it. To give readers a path to that science, I even added a list of sources of real science and engineering that inspired the book to the Wormholes web site.

My lying and thievery were also justified because I sought in the novel to reveal some greater truths about science and scientists.

For one thing, they’re an incredibly courageous and indefatigable lot. Few lay people realize that the vast majority of scientific experiments are failures. Scientists only advertise their successes, in scientific journal articles and news releases. But despite failure after failure, scientists persist, laboring away until they achieve success. And so, in Wormholes, my characters—including intrepid geologist Dacey Livingstone and iconoclastic physicist Gerald Meier—suffer failures that are sometimes deadly, resolutely learning from each failure and trying again.

The novel also portrays another greater truth—that scientists have been censured and censored for their theories, even in the face of good evidence. Among the most notorious modern examples is the censorship of climatologist James Hansen for his assertions that global warming is caused by human activities like burning fossil fuels.

I also experienced censorship in my career as a public information officer, which is a particular reason I wanted to portray it in the novel. At Caltech, for example, the administration in 1983 killed a news release I’d written about economist Roger Noll. He had analyzed the organizational structure of large government R&D programs, including the then-new Space Shuttle. He declared the Shuttle program a “catastrophe,” because it rushed headlong into a massive construction program without carefully evolving the technology over multiple generations. Noll’s criticisms were borne out by the Shuttle’s massive cost overruns, under-performance, and of course the subsequent, tragic Challenger and Columbia disasters. When my release was killed, I suspected it had to do with Caltech’s ties to NASA via its Jet Propulsion Laboratory. But I thought maybe the administration knew something I didn’t about Noll or the Shuttle program.

Another egregious example: At Caltech, I’d written a news release about a paper by geochemist Clair Patterson on the health hazards of global lead pollution. The head of his Caltech division killed that release, even though Patterson’s evidence was solid and widely accepted. At the time, I believed that the censorship was due to some scientific issue I wasn’t aware of. Today I believe it might well have been fear of offending the powerful oil industry. Patterson’s advocacy ultimately led to the removal of lead from gasoline and other products.

So, perhaps I am a liar and a thief. But I can live with it, because I have tried not only to spin an entertaining sci-fi adventure tale, but also to inspire readers to explore real science and to give them some real insight into scientists and their quests for discovery.





Coping with a Hyperstory: Lessons from a Biologist’s Ordeal

19 07 2010

[Photo: Samantha Joye coped with a tidal wave of media.]

Being inundated by a “hyperstory” that attracts white-hot media attention can be disconcerting and even traumatizing for researchers used to the relative anonymity of the laboratory and the seminar room. The best recent example is University of Georgia biologist Samantha Joye’s experience when her research revealed the presence of underwater oil plumes in the Gulf of Mexico during the BP oil spill. Her communication response, and that of the university’s news service, offers lessons in how scientists and their institutions should—and should not—handle a hyperstory. Joye’s research and experience with the media were covered in a July 2, 2010, Science magazine article by Erik Stokstad.

I should emphasize that my critique of this case is in no way meant as a criticism of the competence or professionalism of Joye or the university’s news service. Nobody who has not been inundated by a hyperstory could possibly get everything right the first time in terms of communications. Also, I could not know the politics, and the organizational and resource limitations, that would affect the university’s communication response.

With those caveats in mind, first the apparent missteps:

  • According to the Science article, when Joye first recognized the existence of the underwater plume, she tipped off New York Times reporter Justin Gillis, who wrote a story that was published on May 15. Giving such an exclusive might seem logical to a media-naive scientist, since a Times story would be more likely to be accurate. But it was a poor decision for two reasons: first, it shut out the huge cadre of other media covering the story, which invariably generates ill feelings and legitimate charges of unfairness; and second, such an exclusive puts the scientist at the total mercy of what one reporter decides to write. Instead, Joye should have first notified the university news service, worked with its science PIO to come up with a comprehensive statement and press kit, and held a news conference. The news conference could have included audio and even video teleconference feeds to enable reporters worldwide to participate.
  • When the inevitable flood of media calls began, Joye simply unplugged her phone, according to the Science article—an unwise move in terms of communication. Far more effective would have been to simply change her voice message to refer reporters to the news service, where calls would be answered, background information provided, and her response organized. The message also could have included a reference to a Web site containing a comprehensive set of materials on her research, her findings, and her plans.
  • The university has created a page covering Joye’s work, but it is minimalist. The page does include such information as a notice of media briefings, a podcast of a June news conference, and Joye’s Congressional testimony. However, it does not include other useful content such as a gallery of publication-quality photos of Joye and her work, and links to news stories in such publications as the Christian Science Monitor, Science’s ScienceInsider column, the Wall Street Journal, or Stokstad’s Science article. It does not even include a link to Joye’s laboratory site or to a three-part background video produced by the university that as of this writing is available on the university’s home page. Ironically, the NSF’s release on its grant to Joye (which for some reason is provided as a pdf file on the university news service page, rather than as a link) does offer a set of images produced by Joye. Generally, the news service page does not reflect a new understanding of such institutional Web sites: they no longer merely serve the media, but the public directly. This new mission calls for them to be designed not as simple link lists but as full-fledged news sites with a visual design quality rivaling commercial media sites.

However, Joye and the news service also took positive steps that should be emulated.

Although the researchers did include a videographer on their cruise, who produced the video series, they could also have embedded a public information officer, as discussed in this chapter of Working with Public Information Officers. Such an embedded PIO could have produced blog posts, news releases, photos, and videos. At the least, Joye could have designated one of her team members to act as an information officer, who with some training by communicators could have produced such material.

Coping with a hyperstory is challenging enough with plenty of preparation, but the instant hyperstory—as was the case with Joye’s research—can be a nightmare. However, by developing a general communication plan for handling crises and hyperstories, and adopting an “all-hands-on-deck” approach to managing them, communicators can make such events reflect well on both the researcher and the institution.





Can Communication Success be Quantified?

31 03 2010

Can communicators quantify their success? The short answer is: sort of. Measuring success in public relations is a controversial and messy business, which is why I didn’t even mention it in Explaining Research. I felt that detailed discussion of the issues would detract from the utility of the book for researchers, who are more interested in learning how to explain their research than in how public information officers grapple with the “sausage-making” of measurement.

However, I was reminded of how persistent and frustrating the measurement issue remains when a PIO colleague at a major research laboratory asked for advice about a new boss’s request for a quantitative measure of the office’s media relations success. The new metric-minded boss came from a marketing background—where measuring results is a holy quest—rather than from science communication, a more complex communication environment. In an e-mail message, my colleague asked some experienced communicators, including me, to discuss “what captures how well we’re doing our jobs without bogging us down so much with collecting or analyzing information that we can’t do our jobs.”

So, for the benefit of PIOs—and for those researchers interested in such sausage-making—here are some of the issues and pitfalls we explored:

One major measurement pitfall, in my opinion, is reliance on a long-criticized metric called “advertising value equivalent” (AVE), a dollar amount that quantifies how much media stories would have been worth if they had been paid advertising. Developing AVEs for news stories is an incredibly expensive proposition. One news office manager at a university where I worked spent well over $10,000 per year (she wouldn’t reveal the actual cost) with a company that produced an annual AVE for the office’s media clips. Of course, the AVE was huge—many hundreds of thousands of dollars as I recall—and she advertised that amount to her superiors as a meaningful quantitative measure of media relations success.
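To make the mechanics concrete (a simplified, hypothetical illustration, not the actual figures from that office): if a story occupies ten column inches in a newspaper that charges $500 per column inch for display advertising, its AVE is 10 × $500 = $5,000. Tally such figures across hundreds of clips in a year, and a six-figure total appears almost automatically, which is part of why the number flatters and misleads.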

But AVEs are very poor measurements for many reasons. The best-articulated case against them that I’ve found is a post on the blog MetricsMan that I recommend reading. Basically, MetricsMan declares AVEs invalid because:

  • They don’t capture the inherent value of news articles as credible, independent validation of a story, as opposed to the paid appearance of an ad.
  • They don’t measure the impact of an article on a reader.
  • They don’t take into account other important media relations services such as strategic counsel, crisis communications and viral campaigns.
  • They don’t measure the value of keeping negative news out of the media or of coping with it in communication terms.
  • They don’t distinguish between articles that appear in publications important to the institution, versus those that are less important. AVEs only count the cost of advertising in the publication.
  • They count even predominantly negative articles as positive comparison value.
  • There is no way to calculate the value of a hit on the front page of a newspaper or a cover story in a magazine, because ads aren’t sold in those places.
  • AVE results may be going up even while other legitimate communication measures, such as delivery of messages or share of positive coverage, are going down.
  • AVEs don’t cover such non-traditional media as blogs or positive conversations on social networking sites.

In our e-mail discussion, veteran research communicator Rick Borchelt summarized the problem of quantification by telling our fellow PIO:

I think the take-away message is that there is no really good quantitative metric for media relations success, since media relations is/are an assessment of your relationships with media, not with how much ink they spill about you. You can’t really say with a straight face most of the time that the release you put on EurekAlert! generated the story in the New York Times that was read by the junior staffer of the senior Senator who put an earmark in the DOE appropriations bill that got the new molecular biology building. What we struggle with is how to prove the negative: how much worse would a story about the lab have been if you didn’t know the reporter and could talk her off the ledge of a sensational (but inaccurate) story? Or factor in the opportunity cost of giving one reporter an exclusive that pisses off a dozen others. Or how much more likely a reporter with whom you have a relationship is to come to the lab for comment on a breaking story out of all the contacts in his Rolodex. These are intangibles.

Another veteran communicator, Ohio State’s Earle Holland, recommended that our colleague ask some basic questions before even beginning to address the measurement issue:

You said that the new boss asked you to “come up with a way to measure how well we’re doing our jobs.” First, you need to answer the question of “What is your job?” both in your eyes and his. Otherwise you won’t know what’s there for comparison—you can’t have a metric without a scale to gauge it against…. Is the goal to get mountains of news media coverage? To what end? Is it to protect the reputation of [the laboratory]—only good news goes out? Is it to motivate actions or opinions of key constituencies—something that’s probably impossible to gauge causally? Or is it to convey interesting, accurate science information to the public because [the laboratory] is a publicly supported enterprise and the public deserves to know? Who are the constituencies you want to reach, and which are more important than others—you can’t just say “they all are.” My point is that you have to know what would be seen as success before you try to measure how successful you are.

To those cogent comments, I would add that when a boss asks for any kind of measurement, a reasonable response is “What will you use that measurement for?” I have always followed a rule that if some piece of data is not necessary for making a specific managerial decision, then it is not worth gathering.

In the case of the news office manager cited above, she declared, “I will use the AVE for our news clips in advocating for our budget.” But in my experience, such information has never had any significant effect on budget-making. Other factors, such as the economic state of the institution, political advocacy, and persuasion, have been far more important.

Even given the caveats and complexities of quantification, though, there are some legitimate numbers that PIOs can offer management, as long as they are put in the context of the overall communications program.

For example, Holland and his colleagues in OSU’s Research Communications office produce an annual report that includes numbers: how many stories produced, how many times they appeared in major media, how big the audiences for those publications were, etc. But these numbers are intended only to give a sense of productivity, not to suggest impact.

The report also explains how the stories were distributed—via blogs, posting on EurekAlert!, Newswise, etc.—and quantifies the audiences for those outlets. And the report quantifies the number of visitors to OSU’s research Web sites. Such data are available directly from the news services, and for the Web sites by using Google Analytics. Also, the appearance of news stories on Google News can be monitored using Google Alerts.
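For PIOs who want to automate that kind of monitoring, here is a minimal sketch, in Python, of how a Google Alert delivered as an RSS feed could be tallied programmatically. It assumes an alert configured with “RSS feed” delivery and the third-party feedparser package; the feed URL below is a placeholder, not a real address.

```python
# Minimal sketch: tally mentions from a Google Alert delivered as RSS.
# Assumes: an alert configured for "RSS feed" delivery, and the
# third-party "feedparser" package (pip install feedparser).
# The feed URL below is a placeholder, not a real address.
import feedparser

ALERT_FEED_URL = "https://www.google.com/alerts/feeds/YOUR_ID/YOUR_TOKEN"

def list_mentions(feed_url: str) -> int:
    """Print each alert entry's date, title, and link; return the count."""
    feed = feedparser.parse(feed_url)
    for entry in feed.entries:
        published = entry.get("published", "undated")
        print(f"{published}  {entry.title}")
        print(f"    {entry.link}")
    return len(feed.entries)

if __name__ == "__main__":
    total = list_mentions(ALERT_FEED_URL)
    print(f"\n{total} mentions in this alert feed")
```

Even a simple tally like this, run weekly, gives an office a defensible count of appearances to report alongside the fuller context its annual report provides.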

Importantly, however, the annual report also documents the research areas of the university from which news came, to demonstrate the comprehensiveness of coverage. And, it discusses the broad range of other ways the office uses stories, interacts with reporters and serves faculty members. Thus, the annual report goes beyond mere numbers to present a full picture of the office’s activities.

Such documentation of productivity is important. Also critical, however, and often neglected, is proactively making sure that key administrators and other audiences are aware of news stories and other communications achievements.

My favorite example of such proactive demonstration is the process that Borchelt established, when he was communications director at Oak Ridge National Laboratory, to remedy the lack of visibility for important media stories. “Here was an institution focused on media stories as their goal. So, they would get this great story in the New York Times, and they would mention the story when visiting their congressman, and he’d ask ‘What story?’ ”

Thus, Borchelt began sending major media stories, along with a letter from the director, to important members of Congress, as well as program officers and directors of the DOE, which funds the laboratory. “The letter would say ‘Thank you so much for giving us the opportunity to work on this exciting research that is reported in today’s New York Times,’ ” said Borchelt. “And we would often append the news release, because it tended to have a better explanation of what we were doing; and also because we could acknowledge the funding agency, so they could see that they got credit. It was hellishly labor-intensive, but incredibly useful,” said Borchelt. Members of Congress would use the articles in their communications to colleagues and even read them into the Congressional Record.

So, although media relations productivity can be sort of quantified, numbers are not enough. They must constitute only one part of a comprehensive effort to communicate productivity in all its forms to the people who sign the paychecks.





How Caveats Evaporate: Facebook Study Offers a Cautionary Tale

22 08 2009

Ohio State research communicator Earle Holland has written an insightful account in the online Columbia Journalism Review of media misrepresentation of a pilot study by an OSU graduate student that showed a link between Facebook use and lower grades.

The basic problem was that, while the study found only a correlation between the two, The Sunday Times of London published a story declaring a causative link, saying that “the website is damaging students’ academic performance.” This erroneous report set the tone for much subsequent inaccurate coverage.

OSU did everything right in issuing the news release. It quotes the researcher up front saying that “We can’t say that use of Facebook leads to lower grades and less studying—but we did find a relationship.”

And further down in the news release: “There may be other factors involved, such as personality traits, that link Facebook use and lower grades,” she said. … “It may be that if it wasn’t for Facebook, some students would still find other ways to avoid studying, and would still get lower grades. But perhaps the lower GPAs could actually be because students are spending too much time socializing online.”

While the reporters who confused correlation and causation are the obvious culprits in the miscommunication, some people criticized the university news office for issuing a news release on such a preliminary study. But that criticism misses an important point: the study was presented at a poster session at a meeting of a major scientific society. And since it was on a hot topic—social media—it was likely to be noticed by reporters, and stories would have been written anyway.

So, the most responsible course of action by the news office was to prepare a carefully written news release that got the facts straight, put them in context, and protected the researcher against claims of misrepresentation.

One clear lesson here is that, when in doubt, the best course is to produce a news release that clearly explains the findings and spells out the caveats. A second lesson is to assume that those caveats may well be ignored, so you and your public information officer should carefully monitor coverage and respond to inaccuracies.