THE PREMISE: Every time we read an article or turn on the TV, there seems to be a study warning consumers about something. But how reliable are these studies? Are they based on scientific facts, or are they just scary gotcha headlines used to draw in readers and gin up ratings?
THE PROBLEM: Reporters are blindly covering ideologically driven studies that lack any scientific integrity whatsoever, presenting the findings as sound, credible facts worthy of widespread public awareness.
WHY IT’S HARMFUL: The public looks to the news media to provide factual information they can use to enhance their lives. But when reporters give baseless studies undeserved attention, cities and communities across our nation endure the consequences. Such publicity can cause unnecessary panic, instilling fear and worry in the minds of consumers. And when the news media promote these invalid studies and give them national exposure, regulatory entities often feel pressured to take swift and decisive action, which can deny consumers the ability, and the right, to make their own choices about safe products that offer convenience and benefit in their everyday lives.
ARE ALL STUDIES BAD? No, and that’s important to note. Studies that can withstand the rigor of proper scientific review and analysis, regardless of the sponsor, are worthy of attention and focus by the media. If a study upholds responsible scientific standards, we must give it careful consideration. And it’s important to draw this distinction if we are to separate studies that demand our collective focus from those that simply don’t deserve it.
GIVE ME AN EXAMPLE: Recently, Belgian researcher Soren Verstraete gave a presentation to the Endocrine Society capturing highlights from a March 2016 study asserting a correlation between ADHD and phthalates from vinyl blood bags, catheters, intubation devices and other sterile vinyl components and devices key to patient treatment and recovery. As bizarre as this research premise appears, the study claimed a connection between these products and the development of ADHD later in life in the patients who were evaluated. The problem is that it was an observational study, with no data included to establish causation. The study had no controls, and no additional testing was conducted. Also noteworthy: the ADHD diagnosis was subjective, as the study used no biomarker. And no consideration was given to any number of other factors that could have contributed to the participants’ ADHD diagnoses, such as pre-existing conditions, other drugs administered before or during their care in the hospital, or even events that occurred throughout their lives after their hospital stay. The researcher may as well have asked these individuals if they’d ever used the telephone, and drawn the same conclusions.
To give this study any respect or notable attention would be ludicrous. Yet the Washington Post’s Amy Ellis Nutt did just that, authoring an article that repeated, almost verbatim, a press release distributed to promote the researcher’s findings. Ms. Nutt failed to question the scientific integrity of the results because the story was too good for her not to tell. And in doing so, she no doubt spread unnecessary fear among hospital patients everywhere that they, too, might develop ADHD if treated with vinyl medical products, which for years have saved countless human lives.
Then again, let’s remember The Washington Post is the same outlet that previously reported on an observational analysis claiming a link between ADHD and kids with August birthdays (you read that correctly), a record that raises some rather serious questions about the organization’s standards in assessing the credibility of the studies it covers.
WHAT NEEDS TO BE DONE? Special interests will no doubt continue to promote studies to advance their respective agendas with the news media. That won’t change. But it is the media’s responsibility to scrutinize these studies and determine whether they have the scientific strength to warrant exposure to readers and viewers. The simple “scare factor” of any given study cannot, and must not, be included in this calculus. To this end, reporters such as Ms. Nutt must ask themselves all of the following questions when evaluating a study’s integrity:
- Is it an observational study? If it is, the results are inconclusive. Any study that forgoes in-depth scientific analysis and is based solely on observing a group of participants lacks proper scientific credibility, because any number of unknowns, such as those revealed in the ADHD example above, can be the real contributing factors.
- Is it peer-reviewed? If not, it fails to include the proper checks and balances, and represents only the views of the authors themselves, without any independent validation.
- Is causation proven? Researchers often confuse correlation with causation. Just because two variables share a relationship doesn’t mean one causes the other. (Example: People who like bananas tend to drink more coffee. Does eating bananas cause one to drink coffee? Probably not.) Studies that cannot demonstrate a clear causal connection between the claim and the conclusion cannot be taken seriously. (The first sketch after this list simulates exactly this trap.)
- Is the study published in a reputable journal? Research appearing in obscure outlets that fail to garner the support and respect of the broad scientific or medical community must be approached with a sensible degree of skepticism. And when those outlets have a specific ideological outlook, where the study’s conclusions support the advocacy efforts of the publisher, it’s usually because the quality of the research wasn’t strong enough to appear in a more reputable venue.
- Is the sample size substantial? If the number of people in a given study is too small, no reliable scientific conclusions can be drawn from it. (The second sketch after this list shows how wildly small-sample results can swing.)
- Is it an outlier? One study that challenges the body of scientific literature on a particular subject should not reflexively attract media attention just because it’s different, as so often happens today. In fact, such findings should be approached with a strong sense of suspicion, and if the research lacks any of the standards described here, it doesn’t deserve our attention. (The third sketch after this list shows how chance alone produces a steady supply of contrarian “findings.”)
- Are the same standards being applied to both NGO and industry studies? Industry studies are often automatically dismissed by the media, regardless of their scientific rigor, while many NGO studies lacking any legitimacy are given widespread exposure. To uphold the journalistic ethos of fairness and objectivity, both NGO- and industry-funded research must be weighed equally against these standards.
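For readers who want to see the correlation-versus-causation trap in action, here is a minimal Python sketch. Everything in it is hypothetical: a made-up “illness severity” score stands in for the kinds of unmeasured factors flagged in the observational-study bullet above. Two variables that never influence each other can still correlate strongly when a third factor drives both:

```python
import random

def pearson(xs, ys):
    # Plain Pearson correlation coefficient.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

random.seed(42)
n = 10_000

# Hypothetical confounder: how sick each patient is on arrival.
severity = [random.gauss(0, 1) for _ in range(n)]

# Exposure and outcome BOTH depend on severity, never on each other:
# sicker patients spend more time on medical devices, and sicker
# patients also have more problems later, whatever the devices are
# made of.
exposure = [s + random.gauss(0, 1) for s in severity]
outcome = [s + random.gauss(0, 1) for s in severity]

print("raw correlation:", round(pearson(exposure, outcome), 2))

# Hold the confounder roughly constant (one narrow severity slice)
# and the apparent relationship largely disappears.
stratum = [(e, o) for s, e, o in zip(severity, exposure, outcome)
           if -0.1 < s < 0.1]
es, os_ = zip(*stratum)
print("within one severity stratum:", round(pearson(es, os_), 2))
```

The raw correlation comes out near 0.5; within a single severity stratum it collapses toward zero. An observational study that never measures the confounder sees only the first number.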
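Similarly, the sample-size question can be made concrete with a quick simulation, again with entirely invented numbers. Suppose some condition truly affects 10 percent of a population, and imagine a thousand independent “studies” each drawing their own random sample:

```python
import random

random.seed(7)
TRUE_RATE = 0.10  # hypothetical true prevalence in the population

def run_studies(sample_size, n_studies=1_000):
    # Each simulated "study" estimates the rate from its own sample.
    estimates = []
    for _ in range(n_studies):
        cases = sum(random.random() < TRUE_RATE for _ in range(sample_size))
        estimates.append(cases / sample_size)
    return min(estimates), max(estimates)

for n in (20, 2_000):
    lo, hi = run_studies(n)
    print(f"n={n:5d}: estimates ranged from {lo:.1%} to {hi:.1%}")
```

With 20 participants, individual studies report anywhere from zero to several times the true rate; with 2,000, the estimates cluster tightly around 10 percent. A headline built on the small-sample extremes is noise, not news.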
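Finally, on outliers: even when no effect exists at all, conventional statistics guarantees a trickle of “significant” results by chance. A back-of-the-envelope simulation, with made-up numbers, of 100 studies comparing two groups drawn from the same population:

```python
import random

random.seed(1)
N_STUDIES, GROUP_SIZE = 100, 50

false_positives = 0
for _ in range(N_STUDIES):
    # Both groups come from the SAME population: no real effect exists.
    a = [random.gauss(0, 1) for _ in range(GROUP_SIZE)]
    b = [random.gauss(0, 1) for _ in range(GROUP_SIZE)]
    # Two-sample z statistic (known unit variance, for simplicity).
    diff = sum(a) / GROUP_SIZE - sum(b) / GROUP_SIZE
    z = diff / (2 / GROUP_SIZE) ** 0.5
    if abs(z) > 1.96:  # the conventional 5% significance cutoff
        false_positives += 1

print(f"{false_positives} of {N_STUDIES} null studies look 'significant'")
```

Roughly five of the hundred null studies clear the significance bar. Those five are the “surprising” outliers most likely to generate headlines, and the ones least likely to replicate.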
It is the media’s responsibility to ask these questions when determining whether a scientific study merits public awareness. When reporters fail to do so, they perform a disservice to their readers. That is why we will continue to use this forum to expose media organizations, such as The Washington Post, when they forsake their obligations in this regard.
5.11.16 UPDATE: John Oliver touched on many of these exact points in a recent “Last Week Tonight” segment. He overlooks the fact that ideologically minded groups should be held to the same high standards expected of industry when assessing the credibility of sponsored scientific studies. But he makes a strong case that the media too often shirk their responsibility to the public by failing to distinguish junk science from the real thing.
Watch Here: https://www.youtube.com/watch?v=0Rnq1NpHdmw