By: Jenny Lausch
Public trust in our work as scientists is of the utmost importance. While some scientists receive funding from private companies and individuals, most science in the United States is made possible by tax dollars that the government allocates through the National Institutes of Health (NIH) and the National Science Foundation (NSF). The NIH alone supplies nearly 40 billion dollars annually to support researchers at universities and institutions! As scientists, we have an obligation to ensure that these public funds are used ethically and responsibly. Each day we spend in the lab is (hopefully) a testament to our desire to benefit human health and promote human flourishing through our efforts.
Notably, one essential foundation of public trust in science is ensuring that different scientists trust each other’s work. A major way this trust is built is by demonstrating that a discovery can be replicated. If a given research finding cannot be replicated by another scientist using the same methods and systems, then the validity of the finding is called into question, and it is less likely to be useful in furthering our understanding of key biological pathways or diseases.
For example, if I as a graduate student want to repeat an experiment from a published paper to verify its results (a very common occurrence), this would be considered a reproducibility experiment. With a reproducibility experiment – in contrast with a hypothesis-driven experiment – I should already know the expected result. Enter the reproducibility crisis: there is rapidly growing concern that scientific findings too often cannot be repeated by other scientists. The reproducibility crisis poses a huge problem for science; not only does it waste time, the public’s money, and valuable resources, but it also hinders the discovery of novel targets for disease treatment and the broader expansion of knowledge.
A bigger issue than we may have thought
Unfortunately, the problem may be widespread. A Nature survey of more than 1,500 scientists found that 52% think there is a “significant” reproducibility crisis, and another 38% said there is a “slight” crisis.1 While these are subjective opinions, studies that have examined reproducibility directly reflect the severity of the problem: in cancer biology, only about 10% of the studies tested could be reproduced.2 Surprisingly, that study also found that a paper’s lack of demonstrated reproducibility does not prevent it from being cited, and that a high citation count does not mean a paper’s results are more likely to be reproducible (Table 1). Many scientists think the culture of science itself is the problem: the pressure to publish results quickly may be pushing people toward questionable research practices.1–3 So, what is driving this reproducibility crisis, and what can be done about it?
“Publish or perish”: academia’s true fear
Publishing is the currency of academic science. At the end of the day, if you fail to publish your findings, your results do not reach or help anyone. Publishing in an academic science journal typically requires a novel discovery: the identification of a key scientific idea that no one else has found before. Studies that only reproduce data from other studies are nearly impossible to publish in any quality journal, providing very little incentive for principal investigators (PIs), the leaders of individual research labs, to invest in this work. Similarly, negative-data experiments – those that don’t support the initial hypothesis (i.e., don’t yield the results a researcher expected), are inconclusive, contradict other results, or fail to find something novel – are almost never published on their own. While this may not seem problematic, knowing what has already been tried and failed is arguably as important as knowing what works. For current graduate students, spending years on a project that yields only negative data can prevent you from graduating and publishing, and it also means that others will never know what you tried and failed, leaving them to fall into the same “trap”.
As an academic scientist, the quality of your research is often judged by citations. When you publish a peer-reviewed article, other scientists can cite your work in their own papers, either corroborating your findings or showing that your work has influenced the field. This becomes a sink-or-swim situation when a PI seeks funding from the NIH and submits a grant application to support their scientific ideas. The application is then reviewed by a panel of experts who score it on several criteria, including significance (is the research important?), innovation (is the idea novel?), approach (do the proposed techniques answer the question?), environment (can your institution support you?), and investigator (do you have a history of success?). Publishing in a high-impact journal – one with an extremely high citation count per article – is one of the major indicators of whether a PI will receive NIH funding.
Additionally, PIs are heavily evaluated by their h-index, a metric that combines how many publications a PI has with how many times those papers have been cited: it is the largest number h such that the PI has h papers that have each been cited at least h times. If a PI has an h-index of 15, they have 15 papers that have each been cited at least 15 times, but not 16 papers cited at least 16 times (a minimal sketch of this calculation appears below). Thus, getting publications out in a timely manner and into the best journal possible is of the utmost importance to one’s scientific career. It is no wonder that pressure to publish is one of the factors researchers most often cite as leading to reproducibility issues (Figure 1)! Reproducibility can seem at odds with these goals, as replicating experiments is often both time-consuming and expensive, further delaying publication.
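For readers who like to see the arithmetic, here is a minimal sketch (in Python, written for this post rather than drawn from any of the cited papers) of how an h-index can be computed from a researcher’s citation counts:

```python
def h_index(citations):
    """Return the largest h such that there are h papers
    with at least h citations each."""
    # Sort citation counts from most to least cited.
    sorted_counts = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(sorted_counts, start=1):
        # The paper at this position is the rank-th most-cited paper;
        # if it still has at least `rank` citations, h can grow to `rank`.
        if count >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical example: five papers cited 25, 18, 15, 3, and 1 times.
print(h_index([25, 18, 15, 3, 1]))  # prints 3
```

In this hypothetical example the h-index is 3: three papers have at least three citations each, but there are not four papers with at least four citations.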
So, what’s the solution? It’s simple: if we want people to invest in reproducibility efforts, we must make it worth their while.
Table 1. The number of times a peer-reviewed paper was cited was not affected by its reproducibility status or by the impact factor of the journal that published it.2

Cherry-picking: selectively choosing the data that fits your hypothesis
When scientists were asked what contributes most to the publication of irreproducible data, the most common response was selective reporting, colloquially known as cherry-picking (Figure 1).1 Cherry-picking occurs when a scientist reports only the data that fit their hypothesis and omits any data that contradict their ideas. This practice is extremely biased, producing articles that are fundamentally inaccurate and deceitful while failing to acknowledge the true difficulty of science. Many believe it is encouraged by top-tier journals that demand “perfect” stories in which every experiment supports the hypothesis and no failures appear.
While we (as scientists) generally believe that most researchers are trustworthy, well-intentioned, and do not set out to cherry-pick data,4 setting standards to prevent it is always the best approach. One existing safeguard is the grant review process, in which we describe what we plan to do, our methods, expected outcomes, and potential pitfalls; this helps the NIH understand our ideas and decide whether they are worth investing in. Another way to guard against cherry-picking is to pre-submit hypotheses and analysis plans to a third party, a practice known as pre-registration.1 In addition to supporting reproducibility, pre-registration forces scientists to think through their design and potential results or pitfalls, leading to more rigorous experiments.

So, is science destined for failure?
Despite these disheartening metrics, 73% of surveyed researchers said they trust at least half of the papers published in their field, and fewer than a third said that a failure to reproduce a result means the original research is probably wrong.1 How can this discrepancy be explained? Feedback from the survey1 suggests that researchers often question themselves when results don’t replicate, or assume there is a good reason (such as a technical or methodological difference) why the results conflict, rather than assuming the original work is fraudulent.
Other scientists cite the immense difficulty of science, arguing that a lack of robust reproducibility is simply the result of doing hard science and discovering new things, rather than a failure of the scientists. In fact, some researchers believe that scientific morals are not getting worse, and that the crisis stems from a lack of research misconduct guidelines and from misguided incentives.5 As Trueblood et al. put it, “the problem is that the goals of publishing—the documentation of new knowledge and establishing scientific credentials—are often in tension.”3 Thus, if we are going to fix the reproducibility crisis, we need to fight for institutional changes and a cultural shift that ultimately benefits society rather than the status of individual scientists.
How to solve the reproducibility crisis
Luckily, the NIH is beginning to acknowledge this crisis and take steps to solve it. The current NIH director, Dr. Jay Bhattacharya, outlined his plan to change the culture of academic science in a recent podcast. His thoughts and ideas for potential reforms are summarized below:
- Create incentives for PIs to do replication work: Currently, large funding grants cannot be obtained for replication work alone, nor can a PI earn tenure for it (tenure allows a PI to investigate ideas or invest in projects that may take a long time to produce results, providing job security even during funding droughts, which are extremely common). Nevertheless, we must reward people who do replication work in a creative, efficient, or high-throughput way. Not everyone can devote themselves to replication work, or we risk hampering new discoveries; therefore, some have suggested formally “counting” replication efforts3 and rewarding those who are first to repeat a prominent experiment.
- Establish NIH-based journals and databases to publish both replication work and negative results: Creating an outlet where replication work can be published is essential if researchers are to spend their money and time repeating experiments. Additionally, creating journals that publish negative data will disincentivize researchers from cherry-picking and committing misconduct, because they will be able to publish their results regardless of the outcome.
- Measure scientific prosocial behavior: One of the major barriers to reproducibility is the perceived lack of benefit from sharing your work with other scientists. When we hear that someone is repeating our work, we are quick to see it as a threat rather than a chance to have our work supported and validated! By objectively measuring a researcher’s sharing of materials and data, mentorship of graduate students, and active engagement in scientific meetings, we could reframe the value of a career beyond publication achievements and focus instead on holistic contributions to scientific advancement.
At the end of the day, institutional changes are needed to solve the reproducibility crisis. Making them is essential to ensure that both the scientific and public communities can trust the work we publish, and that we leave future researchers as robust and reliable a foundation as possible for further advancement.
TL;DR:
- Selective reporting of data and the pressure to publish are driving a reproducibility crisis in science.
- Plans to increase reproducibility include providing funds for replication work, creating journals to publish findings that do not turn out to be “true”, and measuring scientists’ prosocial, collaborative behavior.
References
1. Baker, M. (2016). 1,500 scientists lift the lid on reproducibility. Nature 533, 452–454. https://doi.org/10.1038/533452a.
2. Begley, C.G., and Ellis, L.M. (2012). Raise standards for preclinical cancer research. Nature 483, 531–533. https://doi.org/10.1038/483531a.
3. Trueblood, J.S., Allison, D.B., Field, S.M., Fishbach, A., Gaillard, S.D.M., Gigerenzer, G., Holmes, W.R., Lewandowsky, S., Matzke, D., Murphy, M.C., et al. (2025). The misalignment of incentives in academic publishing and implications for journal reform. Proceedings of the National Academy of Sciences 122, e2401231121. https://doi.org/10.1073/pnas.2401231121.
4. Fanelli, D. (2009). How Many Scientists Fabricate and Falsify Research? A Systematic Review and Meta-Analysis of Survey Data. PLOS ONE 4, e5738. https://doi.org/10.1371/journal.pone.0005738.
5. Fanelli, D. (2018). Is science really facing a reproducibility crisis, and do we need it to? Proceedings of the National Academy of Sciences 115, 2628–2631. https://doi.org/10.1073/pnas.1708272114.