Three Ways to Spot Junk Science
“Law lags science; it does not lead it.” Our legal system requires proof, and in many cases only scientific evidence can provide it. With controversies swirling about fraud and misconduct in scientific publications, how do product liability lawyers distinguish between credible scientific evidence and science that is, as Justice Scalia would say, “junky”? Here are three ways to spot junk science before it trashes your case:
1. Give the study a background check.
The prestigious and highly respected journal Science recently retracted a major study that purported to show short conversations with openly gay canvassers could change people’s minds on same-sex marriage. The study was retracted when questions surfaced about the reliability, and existence, of the underlying data. The recent headline-grabbing incident in Science is not an isolated example of a study being pulled because of suspected scientific misconduct. According to the New York Times, a scientific paper is, on average, retracted every day because of misconduct. This scientific article retraction epidemic is widespread and being chronicled at Retraction Watch, a blog that is “[t]racking retractions as a window into the scientific process.”
Acknowledging that “[t]ransparency, open sharing, and reproducibility are core features of science, but not always part of daily practice,” more than 100 journals – including Science – and more than 40 organizations recently became signatories to new guidelines for publishing scientific studies. The Transparency and Openness Promotion Guidelines were introduced in a Science article and include “eight modular standards, each with three levels of increasing stringency.” They “provide flexibility for adoption depending on disciplinary variation, but simultaneously establish community standards.”
While it may seem counterintuitive, increasing retraction rates may actually be a sign of greater reliability in the scientific field. There is evidence that the growing number of retractions is not the result of more scientific fraud, but rather an indication that instances of misconduct are being caught more often and more quickly than before. “[T]he next time you hear of a big scientific retraction, take hope: it may be that the system is getting stronger.”
That there may be no increase in fraud or misconduct in scientific publications – just more instances being discovered – is not very comforting. That some journals are getting better at preventing, finding, and correcting or retracting articles that turn out to be unreliable is good, but far from sufficient. It is critical that studies relied on in litigation be met with skepticism and thoroughly vetted, including an investigation of the study, its authors, its findings, and the journal in which it is published.
2. Be aware of correlation vs. causation.
A 2012 New England Journal of Medicine article reported a relationship between a country’s chocolate consumption and its number of Nobel laureates. The paper concluded that “[c]hocolate consumption enhances cognitive function, which is a sine qua non for winning the Nobel Prize, and it closely correlates with the number of Nobel laureates in each country. It remains to be determined whether the consumption of chocolate is the underlying mechanism for the observed association with improved cognitive function.” A second article on the chocolate habits of Nobel winners appeared in the journal Nature in 2013.
Some media outlets ran with the tasty story, encouraging readers to eat more chocolate and possibly get their Nobel Prize. Some, however, saw the study as a prime example of researchers forgetting that correlation does not mean causation. A much simpler possible explanation for the phenomenon? Wealth, according to the Scientific American blog: “[p]ut simply, people who eat more chocolate are likely to be better off . . . . Greater affluence means better higher education, research opportunities and perhaps Nobel Prizes.”
Just because two things happen at the same time does not mean the events are connected, or that one has caused the other. In other words, correlation does not equal causation. That distinction continues to befuddle and be misused. While correlation may demonstrate a relationship between two factors, it cannot explain WHY or HOW that relationship exists. Proving causation is much harder, and it is causation, not simply correlation, that is relevant in product liability cases. Correlations can also be found where there is no legitimate relationship between the two factors at all – like per capita cheese consumption and the number of people who die by becoming tangled in their bedsheets.
Thus, it is important to thoroughly analyze studies purporting to make causal links. The scientific method is an ongoing process that tests hypotheses by trying to disprove them, not just by seeking evidence that supports a theory. Finding a correlation may be necessary to establish causation, but it is not, by itself, sufficient.
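The cheese-and-bedsheets trap is easy to demonstrate with a short simulation (the numbers below are entirely made up for illustration): two quantities that have nothing to do with each other, but that both happen to drift upward over time, will show a strong correlation simply because they share a time trend.

```python
import random

random.seed(0)

# Simulate two unrelated quantities over 30 "years". Neither one
# depends on the other; each is just an upward trend plus noise.
years = range(30)
series_a = [10 + 0.5 * t + random.gauss(0, 1) for t in years]    # e.g. cheese consumption
series_b = [100 + 3.0 * t + random.gauss(0, 10) for t in years]  # e.g. bedsheet deaths

def pearson_r(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

r = pearson_r(series_a, series_b)
print(f"correlation between two unrelated series: r = {r:.2f}")
# A strong r here reflects only a shared time trend, not causation.
```

The point of the sketch is that a high correlation coefficient falls out of the arithmetic whenever two series move together for any reason, including pure coincidence of trend; nothing in the number itself distinguishes a causal mechanism from a confounder like time or, as the Scientific American blog suggested for chocolate and Nobel Prizes, wealth.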
3. Popularity is not a substitute for reliability.
Tom Siegfried at ScienceNews argues that you should be skeptical about studies that make big headlines. Scientific studies covered by the media, he suggests, are less likely to stand up to scrutiny:
Journalists like to write stories about findings that are “contrary to previous belief,” for instance. But such findings are often bogus, at least in cases where the “previous belief” was based on actual sound scientific evidence. . . .
Journalists also prefer to write about the “first report” of a finding. First reports are notoriously unreliable. Effects in first reports are commonly exaggerated or even wrong.
Rebecca Canary-King is co-author of this article.