‘Gaiter Gate’ exposed the Achilles’ heel of quality journalism: the rush to be the first notification you see and the most gripping headline you read.
I get a tip — an email, a phone call, or a friend who knows somebody who heard a thing. I tell my editor, who confirms it’s worth looking into. It’s a good story, we’re both excited — but also wary. What do we know? All we have is a tip, a juicy tip that could drive big traffic numbers on the site.
For us writers and editors, it’s exciting to imagine all those eyeballs seeing your work — your story. But it’s worth nothing if you aren’t the first to publish. After all, some other crack reporter is always hot on your heels.
It’s a rush — and damned if it isn’t good. Breaking a story feels like hitting a home run. Everything comes together perfectly and, for a moment, everyone’s eyes are on you. It’s no exaggeration, being a journalist can feel amazing. So it’s no wonder that in the chase to capture that feeling — hit that home run — sometimes you swing big at the wrong pitch.
Last week, many major news outlets took a mighty swing at a big story — and whiffed. “Neck Gaiters Do Not Curb COVID-19, Study Finds,” Forbes wrote. “Wearing a neck gaiter may be worse than no mask at all, researchers find,” The Washington Post headline read. “Study Finds Neck Gaiters May Not Prevent the Spread of COVID-19,” USA Today proclaimed.
Even outdoor-endemic media jumped on the bandwagon: “Thinking About Using Your Neck Gaiter As a Coronavirus Mask? Don’t,” Adventure Journal blogged.
The problem here: That’s not what the study found.
Gaiter Gate: Right Story, Wrong Headlines
“We tried to be as careful with our language as possible in interviews,” one of the study’s co-authors, Warren S. Warren, told Science News. “The press coverage has careened out of control.”
That’s because Warren and a team of Duke University researchers suddenly found themselves in the public spotlight after their study — now the focus of conflicting media headlines — was published in the journal “Science Advances” this month. But their research didn’t conclude anything about the efficacy of masks.
Rather, the study — titled, “Low-cost measurement of facemask efficacy for filtering expelled droplets during speech” — evaluated whether readily available and cheap materials could be used to create a device that may help evaluate non-medical-grade face coverings.
In short, the Duke researchers wanted to demonstrate that they could create a new type of tool. They did not set out to immediately apply this as-yet-untested tool to draw firm conclusions about anything else.
What Went Wrong
Still, it’s easy to see why so many news headlines drew the conclusions they did.
“In proof-of-principle studies, we compared a variety of commonly available mask types and observed that some mask types approach the performance of standard surgical masks, while some mask alternatives, such as neck fleece or bandanas, offer very little protection,” the study’s abstract reads.
It certainly seems to say this team of highly regarded researchers found that neck fleeces and bandanas are effectively useless in preventing the spread of germs. But according to medical professionals, that’s not at all what it says.
“In my view, their stated goal was to determine if they could use inexpensive materials to create a tool to assess masks,” Dr. Andrew Thompson, a board-certified clinician in Internal Medicine and Infectious Diseases, told me.
“The conclusion I draw is that droplets can be reduced by masks and quantified with simple, inexpensive materials. I disagree that wearing a gaiter is worse than no mask at all. The error bars, or standard deviation, were too wide to draw any firm conclusion.”
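To see why wide error bars preclude a firm conclusion, consider a toy illustration. The numbers below are invented for demonstration and are not from the Duke study: if the uncertainty ranges around two masks’ measured droplet counts overlap, the data simply cannot tell you which mask performed better.

```python
# Toy illustration of overlapping uncertainty intervals.
# All numbers here are hypothetical, NOT data from the Duke study.

def interval(mean, sd):
    """Return the (low, high) range of mean +/- one standard deviation."""
    return (mean - sd, mean + sd)

def overlaps(a, b):
    """True if two (low, high) ranges overlap."""
    return a[0] <= b[1] and b[0] <= a[1]

# Hypothetical relative droplet counts for two face coverings.
gaiter = interval(mean=1.1, sd=0.6)   # wide error bars
no_mask = interval(mean=1.0, sd=0.4)

# Because the ranges overlap, the measurement cannot support the claim
# that one covering is worse than the other.
print(overlaps(gaiter, no_mask))  # prints True
```

The point is not the arithmetic but the logic: a headline claiming “worse than no mask at all” requires non-overlapping intervals, and by Dr. Thompson’s reading the study’s data did not provide that.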
Dr. Jenny Morber — a freelance science writer who has edited scientific papers for universities and written for the Proceedings of the National Academy of Sciences — agrees.
In a series of tweets, Dr. Morber pointed out a host of issues journalists should have flagged before drawing up those headlines. The order in which researchers evaluated the masks, whether or not a test subject took a sip of water between tests, or if they sat in the exact same position when wearing different masks — according to Dr. Morber, any of these can influence findings.
And again, because this study did not set out to specifically test masks, but rather show this tool could be used to test masks, no one should expect this study to account for any of that.
“All of which is to say, yes, of course material makes a difference and if your mask material is super thin it is probably not as good as one with thicker, layered material. Also fit makes a big difference. And this study was never intended to provide efficacy information,” Dr. Morber concluded.
“However, many journalists covered the paper uncritically, without outside expert opinion, and the authors made conclusions in the paper and in interviews that I feel were unsupported, or weakly supported, by the data.”
Stories Should Steer Journalists, Not Vice Versa
So what happened? A perfect storm of factors, none of them new or all that surprising.
First, the internet has profoundly accelerated the news cycle. Push notifications and social media sharing mean that whoever publishes first stands the best chance of capturing the most clicks. Unfortunately, that pressure completely undermines the second factor: research studies require thorough reading and vetting to report on properly.
Dr. Thompson said, “The ‘smoking gun’ headlines really never accurately reflect scientific reality. Good science is almost never good clickbait. The goal of research in medicine and science is usually incremental and seldom results in firm conclusions. I think the popular press seeks definitive conclusions that grab attention.”
Rare is the study that blows the doors off the world as we know it. Yet time and again, we see bombastic headlines like, “Chemical in McDonald’s French Fries Could Cure Baldness: Study,” which the New York Post ran in 2018. And we see these because they’re clickable.
But let’s face it, baldness remains, and lots of bald people enjoy French fries — some regularly. (For the real truth on why neither eating French fries nor rubbing them on your head will do anything other than make you fat and greasy, click here.)
Finally, COVID presents the ideal catalyst to exploit both of the above factors. It is the most far-reaching, gripping news cycle in decades. A pandemic, the likes of which the modern world hasn’t seen, is stoking fascination, fear, and hope in equal measure. It offers the potential for massive traffic if you break a story.
But, being rooted in medicine, it also demands skepticism, patience, and a nuanced understanding of some very complicated (and often unexciting) topics.
Responsibility of Journalists, Readers
Where does that leave us? I reached out to Steve Casimiro, the founder of Adventure Journal. As noted above, Adventure Journal ran the story titled, “Thinking About Using Your Neck Gaiter As a Coronavirus Mask? Don’t.”
I asked Casimiro if, in the wake of all the disagreement over what the study actually said, he stood by that headline.
“Yes, we stand by the headline and the story. Our view is that resolving the coronavirus crisis requires best practices on the part of individuals,” he told me.
“The science around coronavirus is evolving, and we understand that not everyone will draw the same conservative line that we do. That said, it’s important to note that our piece was not designed to dive into the details of the study or differences between masks. It was a short blog post created as a heads-up for our readers and to point them to the study and Duke’s video so they could draw their own conclusions.”
Personally, I think the headline goes too far. I don’t think this study suggests anyone should forgo a neck gaiter as a precautionary measure. But I agree with his point that you, as readers, share the same responsibility we, as publishers, do: to vet all information thoroughly before drawing any conclusions.
In this age, you share in that rush; being the first to share a big headline makes you a veritable co-author. You bask in the likes and attention just like we do.
But we both need to pause. What do we know? Did we make an assumption or fill in a gap by accident? Did we ask the wrong questions? Heck, did we just misread something? At some point, I’ve made all of those mistakes.
We should be doing our homework to give you the best information. But you should always remain skeptical and hold us to account. If we all do that, we can wait for the right pitch before swinging for the fences.