We’ve all seen news segments and click-bait headlines touting a “new scientific study” containing alarming health claims that make us question whether we should avoid certain products or change our daily behaviors. (We’ve touched on this topic before.) Some of these reports are grounded in sound science and are worthy of our time – the American Cancer Society’s recent recommendation lowering the suggested age for colonoscopy screenings from 50 to 45 is one example. But in today’s report-first / clarify-later (or never) news environment, many utterly worthless studies gain national attention – the views they generate are simply too tempting for news organizations to pause and fact-check. And when news outlets blindly cover these “junk” studies without determining whether they have any redeeming scientific qualities, they do a great disservice to the American public.
Nearly 50% of all studies the news media report on turn out to be wrong. And when a more robust study on the same topic reaches a different conclusion, the correction rarely receives the same level of exposure as the initial faulty coverage. That gives readers and viewers little reason to question the legitimacy of bad studies, which in turn incites baseless public fear and leads many to make unnecessary changes to the way they lead their lives.
Elements of Good Research
The characteristics of a credible study aren’t complicated: it must be rigorous, thorough, and replicable. But many studies today fall well short of those standards, making observational claims from sample sizes so small that no reliable scientific conclusions can possibly be drawn.
Statistical Significance
Statistically significant and clinically important results reflect real, meaningful differences between study subject groups. If a study’s sample size is too small, it can fail to detect an important difference between groups. If a study has enough participants, even the most trivial differences between groups can register as statistically significant – but that doesn’t mean those differences are clinically important or meaningful to ordinary people. Best practice isn’t solely about selecting a certain sample size; it’s the extent to which the researcher goes to eliminate doubt and build trust in the results by employing the best methods of study design.
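To make the point concrete, here is a minimal simulation (not from any study discussed here; the effect size and sample sizes are purely illustrative) showing how a trivial difference between two groups can become “statistically significant” once the sample gets large enough, without becoming any more meaningful in practice:

```python
# Illustrative sketch: a tiny, clinically meaningless difference between two
# groups looks "statistically significant" once the sample size is huge.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
true_difference = 0.02  # a trivial effect: 2% of one standard deviation

for n in (100, 1_000, 100_000):                    # participants per group
    group_a = rng.normal(0.0, 1.0, n)              # "control" group
    group_b = rng.normal(true_difference, 1.0, n)  # "exposed" group
    t_stat, p_value = stats.ttest_ind(group_a, group_b)
    print(f"n per group = {n:>7,}  p-value = {p_value:.4f}")

# With small n the difference is invisible (p well above 0.05); with a very
# large n the same trivial difference will almost always fall below 0.05 --
# statistically significant, yet still meaningless to an ordinary person.
```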
Methodology
Trustworthy studies (1) rely on epidemiological research, assuming the sample sizes are sufficient and the study subject group practices noted above are followed, (2) use meta-analysis, (3) follow a peer-review process, or (4) apply a randomized controlled trial, the gold standard. In public health, epidemiologists are taught to look at the probability value (p-value) to determine whether the observed differences between subject group outcomes (i.e., ill and not ill) are true differences, or whether they are simply the result of chance. But many academic researchers don’t take the time to use rigorous epidemiological methods, instead drawing far-reaching conclusions from limited evidence or small numbers of people. Some even engage in data manipulation to achieve dubious statistical outcomes (this practice is called p-hacking; you can read more about it here). According to the NIH, a best practice for all researchers is to explain why a claimed relationship may indeed be real, and not a rare fluke or a misuse of data analysis methods. For example, demonstrating that results agree with other studies in different human populations would further clarify their statistical value. Clinical researchers are also encouraged to consider other relevant measures because of the severe limitations on patient sampling and hypothesis testing.
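For readers curious what p-hacking looks like in practice, here is a hedged sketch (the numbers and variable names are invented for illustration): test enough unrelated outcomes on pure noise and, by chance alone, a few will cross the conventional p < 0.05 threshold.

```python
# Illustrative p-hacking sketch: no real effect exists, yet testing many
# outcomes produces a handful of "significant" findings by chance.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_subjects = 50
n_outcomes = 100   # e.g., 100 unrelated health measures, none actually affected

exposed = rng.normal(size=(n_subjects, n_outcomes))
control = rng.normal(size=(n_subjects, n_outcomes))

# Test each outcome separately, as a p-hacker would, and count the "hits".
p_values = stats.ttest_ind(exposed, control, axis=0).pvalue
false_positives = int(np.sum(p_values < 0.05))
print(f"{false_positives} of {n_outcomes} outcomes look 'significant' by chance")
# Roughly 5 spurious "findings" are expected even though nothing real is there.
```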
Peer-Review
Peer-review is one of the most important processes in scientific research. In the best case, peer-review strengthens quality control on studies before they are released to the public. For that reason, trustworthy scientific journals require all submissions to be peer-reviewed. But not all peer-reviews are equally rigorous. A good peer-review should be critical and performed by qualified researchers who do not share affiliations or ideological biases with the author of the study being reviewed (you can read more about what makes a good peer-review from the American Chemical Society here). Unfortunately, researchers often fail to use objective peer-reviewers and opt for those who are already predisposed to support their conclusions.
Junk Science in Action
Vinyl material is often the target of this kind of shoddy science and reporting. Activists, agenda-driven researchers, and some journalists often promote poorly designed and executed studies that malign vinyl. Because vinyl products and manufacturing processes are complex and highly technical, most readers aren’t able to sort fact from fiction in these studies. Here are a few recent examples:
New York Times on ‘chemicals in your mac and cheese.’ In 2017, the New York Times ran a story based on an observational study that purported to identify toxic chemicals in mac and cheese from vinyl containers. Just a few problems: First, the study doesn’t actually say anything about the effects of those chemicals; it simply assumes they’re inherently toxic, which isn’t true. Second, the study invents a new standard for exposure that contradicts the scientifically accepted standard. The reporter noted neither of these facts, even though they were provided to her before publication. You can read more here: NYT’s Rabin Compromises Journalistic Integrity In “Click-bait” Clown Story On Phthalates
ABC News promotes a flawed study on phthalates and language delays. In 2018, an ABC affiliate in Duluth, MN ran a segment about a study that claimed exposure to vinyl products could cause language delays in children. The study in question was an observational study with a sample of only about 1,200 people – small for this type of study – upon which it based a far-fetched cause-and-effect relationship. Additionally, the study relied on self-reported questionnaires, which are highly unscientific. The reporter failed to provide any of this important context for viewers; in fact, she didn’t even identify the study so that viewers could verify her report. More about this report here: Correcting The Record On Vinyl And Phthalates
Environmental Health News article on phthalates and motor skills. Most recently, Environmental Health News published an article about a study on phthalate exposure and vinyl material. EHN starts off with the assertion that “[k]ids exposed to phthalates prenatally and as 3-year-olds have decreased motor skills later in their childhood.” They don’t acknowledge until the second-to-last paragraph that the study doesn’t actually prove that. And once again, the study observed only 209 kids – a sample far too small to support such broad conclusions.
Why Good Science Matters
When reporters allow the sensational findings of flawed studies to drive what they relay to the public, they spread confusion and promote inaccurate information that can have real-world consequences. One only has to look as far as the child vaccination issue to understand how widely reported junk science can lead people to make poor decisions that affect public health and safety. For years, studies claiming a link between the MMR (measles) vaccine and autism dominated the headlines, leading many parents to avoid immunizing their children. As a result, measles outbreaks are now on the rise. But landmark research on the MMR vaccine – a recently completed nationwide cohort study that followed 657,461 children over more than a decade – found no link between the vaccine and autism. And as expected, the attention this study received paled in comparison to the misleading reports of the past decade.
Media responsibility matters. And when journalists reflexively publish click-bait stories based on studies they know have no scientific credibility, they willfully deceive readers and viewers. It doesn’t take much to ascertain whether a particular research report passes the red face test.
It just requires the reporters who cover them to care.
MORE: Scientific Studies, Last Week Tonight