
Do you have a tough time reproducing experiments?


How big is the reproducibility problem?

"It is unclear why our results...", "Interestingly, our results did not...", "In our hands...": do you recognize any of these tell-tale sentences?
In this article:
  1. How big is the issue of reproducing scientific results?
  2. What are the facts?
  3. Why do we fail in reproducing experiments?
  4. Our advice on prevention.


How big is the issue of reproducing scientific results?

More than two-thirds of researchers have tried and failed to reproduce another scientist's experiment, according to a Nature survey of 1,576 researchers.

Although alarm about experiments that fail to reproduce is growing, the reproducibility crisis goes back to the 1950s. To investigate the scope of the issue, Nature ran an online questionnaire on reproducible research among 1,576 researchers. Asked whether they had ever had problems replicating another scientist's experiments, 60-85% said they had, and 40-65% had even failed to replicate their own experiments.

The results are striking: fewer than 31% of those surveyed think that a failure to reproduce a published result means the result might be wrong and the study's conclusion therefore incorrect. Most say they still trust the published literature. How come?

In 2014, the Center for Open Science and Science Exchange began collaborating to independently replicate selected results from 50 high-profile papers in the field of cancer biology. The project aimed to provide evidence about reproducibility in cancer biology and an opportunity to identify the factors that influence reproducibility more generally.


What are the facts?

Replication is crucial for building the evidence that supports a hypothesis, yet it rarely happens. Many of the contributing factors relate to intense competition and time pressure. And only 13% of researchers who attempted to publish an unsuccessful replication succeeded in doing so (Monya Baker, Nature, 2016).

Nature's survey showed striking similarities with another survey, published by the American Society for Cell Biology.

  • In both surveys, more than half of the respondents agree there is a significant reproducibility crisis, and the percentage of researchers who have tried and failed to reproduce another scientist's experiment is over 60%.
  • Only about 20% of the surveyed researchers said they had ever been contacted by another scientist who was unable to reproduce their work.
  • Nature also found that scientists published their unsuccessful reproduction attempts in only 13% of cases.
  • The primary factor behind so little replication research being published is the pressure to publish in high-profile journals (~40% say so).


Why do we fail in reproducing experiments?

It is critical to take a more in-depth look at the various aspects of the crisis:

First: why is this crisis 'so big'?

Let's not be naive: there is little or no reward for reproducing other scientists' work, so there is no urge to reproduce experiments unless you have to. Most scientists assume published results are valid and build their own studies on top of them; reanalyzing one's own published results is rarer still.

Second: where do you publish irreproducible results?

Although science would benefit from sharing the results of failed replication experiments, high-profile journals generally reject replication attempts as not innovative enough. And since your career as a scientist depends on your output, publishing your results in journals with a lower impact factor does not sound appealing.

Third: the replication ecosystem, such as it is, lacks visibility, value, and conventions.

When browsing through the Methods sections of scientific papers, you will find tremendous variability in the level of detail provided. That level can be frustratingly low, even in high-impact journals such as Science. So even if you want to replicate a published experiment, the missing Methods details will hinder you. Moreover, more than 70% of researchers will not contact the authors for those missing details (Monya Baker, Nature, 2016). There is no standard for recording methods, and it is uncommon to share failed attempts with the scientific community, so there is no easy way for peers to learn about them.

Finally, personnel turnover at labs is high.

Most researchers stay only a few years, to get their PhD or complete a postdoctoral fellowship. When they leave the lab, their skills sometimes leave with them. In most cases this is not fraud but a loss of knowledge, and even detailed protocol books aren't always enough to compensate.



Our advice on prevention

Although individual laboratories are increasing their replication attempts, the results are often not shared. As stated before, this problem probably stems from innovation being prioritized far above scientific replication (Alberts et al., 2014; Nosek et al., 2012).

But in that respect, things are improving. Researchers who want to tell the scientific community about their replication studies now have new outlets. Recently, the online platform F1000 launched a dedicated Preclinical Reproducibility and Robustness channel.

"The Preclinical Reproducibility and Robustness channel is a platform for open and transparent publication of confirmatory and non-confirmatory studies in biomedical research. The channel is open to all scientists from both academia and industry and provides a centralized space for researchers to start an open dialogue, thereby helping to improve the reproducibility of studies."

In addition, scientific journals are now tightening their author guidelines to require more detail on methods and raw data. Some, like Perspectives on Psychological Science, have even begun publishing alternative article types devoted to discussing replication studies.

Whether these measures will get a grip on the problem remains to be seen. Most importantly, though, the word is out there, and scientists themselves are starting to think about how they perform their experiments and how to pin that down on paper.

Best practice is to use reliable and consistent methods so your experiments can be replicated easily, and to give fellow researchers a hand in replicating your setup. The good news is that you don't have to figure it out yourself: we can guide you through the steps.

