Fanelli’s findings

At first blush, the results of the PLoS ONE study by Daniele Fanelli don’t seem so surprising: Researcher productivity positively correlates with reporting experimental findings that support a given hypothesis. What stands behind this, though, is a general, cross-discipline, elephantine bias: As the pressure to publish increases, so does the need to support a given hypothesis, irrespective of what the experimental results might suggest.

[Pause to allow the significance of this to sink in.]

No doubt, this is a very difficult claim to isolate, given all the factors that might account for academic productivity, but Fanelli’s study admirably controls for research discipline, methodology, and funding, so it’s not so easy to whistle our way past the results. Just to put this in its proper perspective:

Numerous states reported between 95% and 100% positive results . . . . In absence of bias of any kind, this would mean that corresponding authors in these states almost never failed to find support for the hypotheses they tested. But negative results are virtually inevitable, unless all the hypotheses tested were true, experiments were designed and conducted perfectly, and the statistical power available were always 100% — which it rarely is, and is usually much lower.

Seriously. Based on articles analyzed for this study, researchers in Nebraska, Arizona, Vermont, Idaho, Hawaii, Wyoming, South Dakota, and the District of Columbia have the extraordinary gift of producing results that only support their hypotheses, whatever they may be. Researchers from at least a dozen other states are nearly as canny.
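To see why a 95–100% positive rate strains credulity, it helps to run the arithmetic the quoted passage implies. The sketch below is a back-of-the-envelope illustration, not Fanelli’s analysis; the prior, power, and significance level are illustrative assumptions chosen to be generous to researchers.

```python
# Back-of-the-envelope: what share of hypothesis tests should come out
# "positive"? All parameter values here are illustrative assumptions,
# not figures from the study.

def expected_positive_rate(p_true, power, alpha):
    """Expected share of positive results: true hypotheses that are
    detected (power) plus false alarms on false hypotheses (alpha)."""
    return p_true * power + (1 - p_true) * alpha

# Generous assumptions: half of all tested hypotheses are true,
# studies have 80% power, and the false-positive rate is 5%.
rate = expected_positive_rate(p_true=0.5, power=0.8, alpha=0.05)
print(f"{rate:.1%}")  # prints 42.5%
```

Even under these friendly assumptions, fewer than half of all tests should support the hypothesis; with the lower power typical of real studies, the expected rate drops further, which is why near-universal positive results point to bias rather than to uncommonly good guessing.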

Fanelli’s best explanation for the positive-results bias is that researchers are much less likely to scrutinize and second-guess findings that support a hypothesis compared with those that don’t, and this bias only increases as the pressure to publish increases. His explanation about the ecology of research publishing is worth quoting at length:

Many factors contribute to this publication bias against negative results, which is rooted in the psychology and sociology of science. Like all human beings, scientists are confirmation-biased (i.e. tend to select information that supports their hypotheses about the world), and they are far from indifferent to the outcome of their own research: positive results make them happy and negative ones disappoint. This bias is likely to be reinforced by a positive feedback from the scientific community. Since papers reporting positive results attract more interest and are cited more often, journal editors and peer reviewers might tend to favour them, which will further increase the desirability of a positive outcome to researchers, particularly if their careers are evaluated by counting the number of papers listed in their CVs and the impact factor of the journals they are published in.

Confronted with a “negative” result, therefore, a scientist might be tempted to either not spend time publishing it (what is often called the “file-drawer effect”, because negative papers are imagined to lie in scientists’ drawers) or to turn it somehow into a positive result. This can be done by re-formulating the hypothesis (sometimes referred to as HARKing: Hypothesizing After the Results are Known), by selecting the results to be published, by tweaking data or analyses to “improve” the outcome, or by willingly and consciously falsifying them. Data fabrication and falsification are probably rare, but other questionable research practices might be relatively common.

All this appears to point toward a persistent, intense form of peer pressure among knowledge makers to equate professional success with experimental success. Is it possible for someone to be a successful professional while conducting experiments that frequently work against his or her own predictions? To the extent that we regard knowledge making as a kind of business, then the answer would seem to be no — negative results don’t lend themselves to profitability in the marketplace or in academia.

Someone like Thomas Kuhn might regard this trend as emblematic of the present scientific paradigm, one that points toward a growing disconnect between what we know and the knowledge we are able to talk about. Feel free to hypothesize about the nature of the scientific revolution to follow.

Link to the Science Daily summary.

Link to the PLoS ONE study.


About michael

Marketing & Sales Manager since 2012