Researchers from the University of California San Diego have found that non-replicable data is cited 153 times more often than data that has been replicated. They say that this is because non-replicable findings are often more ‘interesting’ than those that can be reproduced.
For the study, the researchers examined data from three influential replication projects that systematically attempted to reproduce findings published in well-known journals such as Nature and Science. Among psychology studies, only 39% of the 100 experiments analyzed were successfully replicated. The replication rates were 61% for the 18 economics studies and 62% for the 21 studies from general science journals.
The researchers then examined whether papers that failed to replicate were cited more often than those that replicated successfully, and found that non-replicable papers were significantly more likely to be cited. Papers in the general science category showed the largest gap: non-replicable papers were cited 300 times more than replicable ones.
The researchers further found that, on average, papers that fail to replicate are cited 16 times more per year. They added that only 12% of post-replication citations of non-replicable findings acknowledge the failed replication.
This relationship held even after accounting for other characteristics of the studies, including the number of authors, the proportion of male authors, the study's location and language, and the field in which the paper was published.
As for how and why this happens, the researchers pointed to several possible reasons. Results that cannot be, or have not been, replicated may be seen as more ‘interesting’ or ‘groundbreaking’ than replicable ones. Such results then attract more media coverage and are shared more widely on social media.
They also noted that journals may feel pressure to publish interesting findings, and may apply lower standards of reproducibility to ‘interesting’ results. Moreover, academic institutions tend to use citation counts as a metric when deciding whether a faculty member should be promoted.
“We hope our research encourages readers to be cautious if they read something that is interesting and appealing,” says Marta Serra-Garcia, one of the study’s authors. “Whenever researchers cite work that is more interesting or has been cited a lot, we hope they will check if replication data is available and what those findings suggest.”