Letter. Open Res Eur. 2025 Jun 30;5:25. Originally published 2025 Jan 27. [Version 2] doi: 10.12688/openreseurope.19408.2

Achieving reproducibility in the innovation process

Maurice Whelan 1,a, Eann Patterson 2
PMCID: PMC12397739  PMID: 40895215

Version Changes

Revised. Amendments from Version 1

To better distinguish the current work described in this letter from prior efforts, we have more clearly defined both our intention and its novelty. The set of references cited has been updated with more recent publications and ones from a broader range of subject areas. A definition of innovation has been included to help clarify the target of this work and the use of the TRL terminology. A paragraph has been added to explain and expand on the use of terminology related to reproducibility and how this varies across subject areas and domains. Finally, the issues encountered in the second and third phases of the innovation cycle related to reproducibility have been further elaborated.

Abstract

Reproducibility is essential for innovation but is often hard to achieve in practice. One reason for this is a lack of appreciation of what needs to be reproduced and how in each phase of the innovation process. In the discovery phase, conclusions need to be reproduced through orthogonal investigation. In the translation phase, key attributes and outputs of derived products or processes should be reproducible by defining transferable specifications and protocols, whereas in the application phase, the goal is to achieve reproducible performance in real-world environments through appropriate quality assurance systems.

Keywords: Innovation, reproducibility, discovery, translation, application

Introduction

It has been widely reported that there is a reproducibility crisis in science. In 2016, for example, Baker reported that, in a survey of 1575 scientists from biology, chemistry, earth and environmental science, medicine, and physics and engineering, most respondents had experienced a failure to reproduce results 1 . More recently, Chakravorti et al. surveyed 452 professors in India and the USA across a wide spectrum of disciplines and described ‘national and disciplinary gaps in attention to reproducibility and transparency in science’ 2 . Results that cannot be reproduced carry reputational costs for both individuals and the scientific community 3 , as well as economic costs, such as in medicine when drug trials fail 4, 5 . Conversely, a recent and extensive set of reproducibility and replicability studies in the Netherlands, spanning the social sciences, medical sciences and humanities, found that replication corroborates original findings, enhances understanding and serves as an educational tool 6 .

There is much confusion in the literature about the terms ‘reproducibility’ and ‘replicability’, with different disciplines adopting contradictory definitions, as discussed by Barba 7 and by Plesser 8 , and more recently by Antunes & Hill 9 in the context of high performance computing, who highlight that the Association for Computing Machinery (ACM) swapped its definitions of the terms in 2020. A dictionary definition of the adjective ‘reproducible’ is ‘that can be produced or done again in the same way’ 10 , while in metrology it is defined in terms of precision, or closeness of agreement between replicate measurements 11 . A 2015 report to the US National Science Foundation (NSF) 12 defined reproducibility as ‘the ability of a researcher to duplicate the results of a prior study using the same materials and procedures as were used by the original investigator’, which can be expressed as the same raw data and the same statistical analysis; replicability, by contrast, refers to the ability of a researcher to ‘duplicate the results of a prior study if the same procedures are followed but new data are collected’. Similar definitions were adopted by the US National Academies of Sciences, Engineering, and Medicine in 2019 13 and by the ACM 14 , although the latter’s definition of replicability is somewhat broader: ‘obtaining consistent results across studies aimed at answering the same question, each of which has obtained its own data’.
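The metrological definition, which frames reproducibility as closeness of agreement between replicate measurements, can be illustrated with a small numerical sketch. The code below is purely illustrative: the laboratory names and measurement values are hypothetical, and the calculation is a simplified estimate in the spirit of ISO 5725 (pooled within-laboratory variance for repeatability; within- plus between-laboratory variance for reproducibility), not a complete implementation of that standard.

```python
import statistics

# Hypothetical replicate measurements of the same quantity by two labs
# (illustrative numbers only).
labs = {
    "lab_A": [10.1, 10.2, 9.9, 10.0],
    "lab_B": [10.6, 10.5, 10.7, 10.4],
}

# Repeatability: spread of replicates under the *same* conditions
# (same team, same setup), estimated as the pooled within-lab variance.
within_vars = [statistics.variance(v) for v in labs.values()]
s_repeatability = (sum(within_vars) / len(within_vars)) ** 0.5

# Reproducibility: spread when conditions change (different lab, operator,
# equipment), estimated as within-lab variance plus the between-lab
# variance of the lab means.
lab_means = [statistics.mean(v) for v in labs.values()]
s_reproducibility = (sum(within_vars) / len(within_vars)
                     + statistics.variance(lab_means)) ** 0.5

print(f"repeatability sd:   {s_repeatability:.3f}")   # 0.129
print(f"reproducibility sd: {s_reproducibility:.3f}")  # 0.376
```

The reproducibility standard deviation is necessarily at least as large as the repeatability standard deviation, since it adds between-laboratory variation on top of within-laboratory scatter; this is the quantitative sense in which metrology speaks of reproducibility as a (typically looser) level of precision.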

Plenty of advice can be found in the scientific literature about conducting studies that are more likely to be reproducible, most of which rests on three tenets: good study design based on a sound understanding of the underpinning science 5 ; robust execution of the methodology following the study design, using appropriately calibrated and maintained equipment operated by trained personnel 15 ; and open, transparent reporting of study protocols, measurement procedures, data acquisition and analysis, including algorithms and code 16 . However, most of this guidance, whilst sound, does not address the issues that a lack of reproducibility and replicability can create downstream in the innovation process. Hence, in addition to this guidance and as a novel departure, in this letter we propose that the three major phases of the innovation process, namely discovery, translation and application, represent different contexts in which to consider reproducibility and replicability, and thus that a different approach is required for each phase, as illustrated in Figure 1. Successful innovation, that is, the practical implementation of ideas resulting in new or improved products or processes, requires a good understanding and appreciation of the challenges faced in each phase. The intent of our proposal is therefore to stimulate productive discussion and interaction about these challenges between the stakeholders in any innovation process.

Figure 1. Schematic representation of the three phases of the innovation process and how to approach reproducibility in each.


Proposed approach

The discovery phase typically extends from demonstration of basic principles to proof-of-concept, representing Technology Readiness Levels (TRLs) 1–3 17 . In this initial phase, it is the generalisation of the original conclusion that is paramount, which needs to be more than producing the same result in the same way by the same team (repeatable) or by a different team (reproducible). To provide sufficient confidence in a discovery for the innovation process to progress, it is necessary to arrive at the same conclusion via multiple orthogonal routes, i.e., independent demonstrations of the same conclusion. In other words, epistemic parity needs to be achieved by establishing that the same conclusion can be reached using a range of methodologies that generate their own appropriate data, which goes beyond replicability to ‘generalised’ outcomes. Hence, it is important that new knowledge arising from the discovery is described in a manner that facilitates the design of appropriate orthogonal confirmatory routes.

The translation phase of the innovation process extends from laboratory demonstration of a prototype (TRL 4) to full-scale demonstration in an environment representing the real world (TRL 6). In this phase, the prototype product or process must be reproducible in terms of predefined attributes and outputs associated with fulfilling its purpose. For this, it is essential to have transferable specifications and protocols that allow prototype products and processes to be deployed in different representative environments by different users while achieving parity of operation and output (different team, same setup = reproducibility). Therefore, when describing an element of a specification or a step in a protocol, the appropriate level of detail is needed, neither too little nor too much, together with the rationale for including particular elements or steps. In a commercial environment, reproducibility will need to be achieved within an organisation, perhaps at different locations, but probably not across different organisations, unless the intention is to license the technology to others. In some cases, the value of an organisation will be related to its ability to reliably reproduce prototype results for stakeholders, including potential investors.

The final phase, termed the application phase, extends from demonstration in the operational environment (TRL 7) to an operational product or process (TRL 9). Here the aim is to achieve parity in overall performance. Customers expect reliability in products and processes, i.e., that they will perform in a consistent manner over time. This requires both robustness in the design of the final product or process and the implementation of appropriate quality assurance procedures, so that the expected success rate is achieved across both production items and their lifecycle. Delivery of a consistent success rate is the form of reproducibility expected in the application phase of the innovation process (different team, same setup = reproducibility). If an organisation fails to deliver this form of reproducibility, it is unlikely to be commercially successful or viable. At the same time, commercial success is likely to depend on protecting the intellectual property underpinning the innovation; hence, a balance must be struck between delivering the expected reliability of products and preventing competitors from reproducing the same success. In this context, it is important to be able to define the attributes of reproducibility in order to protect them.

It is usual for each of the three phases of the innovation process to be undertaken by separate groups of people, often in different organisations with differing motivations and incentives. Discovery generally occurs in research laboratories in universities or specialist research institutes where the drivers are advancing science, publishing scientific papers and attracting funding. Translation is often performed in applied research organisations that have a specific mission and setup to bridge the gap between discovery and application. The application phase is predominantly undertaken by commercial organisations when they see the potential of a new product or process to generate revenue and capture market share. In some scenarios these distinctions are blurred, for example when a university spinout company attempts to pursue the entire innovation process. While this blurring can avoid the creation of silos that can stall innovation, it also has the potential to cause a failure to recognise the different contexts of reproducibility in the phases of the innovation process.

In conclusion, successful innovation requires a better understanding and appreciation of the philosophical, contextual and practical differences in establishing reproducibility within the discovery, translation and application phases of the innovation process. This will improve the design, conduct and outcomes of reproducibility studies and facilitate more fruitful discussion and cooperation between key actors.

Disclaimer

The views expressed in this article are those of the authors. Publication in Open Research Europe does not imply endorsement by the European Commission.

Ethics and consent

Ethical approval and consent were not required.

Funding Statement

This project received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreements No. 754660 (project name: Matrix Optimization for Testing by Interaction of Virtual And Test Environments - MOTIVATE) and No. 820951 (project name: Development of Integrated MEasurement System - DIMES), and from the institutional work programme of the European Commission’s Joint Research Centre (JRC).

[version 2; peer review: 1 approved, 3 approved with reservations]

Data availability

No data are associated with this article.

References

  • 1. Baker M: Reproducibility crisis. Nature. 2016;533(26):353–66. [PubMed: 27193681] [Google Scholar]
  • 2. Chakravorti T, Koneru S, Rajtmajer S: Reproducibility and replicability in research: what 452 professors think in Universities across the USA and India. PLoS One. 2025;20(3): e0319334. 10.1371/journal.pone.0319334 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 3. Hagiopol C, Leru PM: Scientific truth in a post-truth era: a review. Sci & Educ. 2024;1–34. 10.1007/s11191-024-00527-x [DOI] [Google Scholar]
  • 4. Prinz F, Schlange T, Asadullah K: Believe it or not: how much can we rely on published data on potential drug targets? Nat Rev Drug Discov. 2011;10(9):712. 10.1038/nrd3439-c1 [DOI] [PubMed] [Google Scholar]
  • 5. Mann DL: The rising cost of developing cardiovascular therapies and reproducibility in translational research: do not blame it (all) on the bench. JACC Basic Transl Sci. 2017;2(5):627–629. 10.1016/j.jacbts.2017.09.006 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 6. Derksen M, Meirmans S, Brenninkmeijer J, et al. : Replication studies in the Netherlands: lessons learned and recommendations for funders, publishers and editors, and universities. Account Res. 2024;1–19. 10.1080/08989621.2024.2383349 [DOI] [PubMed] [Google Scholar]
  • 7. Barba LA: Terminologies for reproducible research. arXiv preprint arXiv: 1802.03311.2018. 10.48550/arXiv.1802.03311 [DOI] [Google Scholar]
  • 8. Plesser HE: Reproducibility vs. replicability: a brief history of a confused terminology. Front Neuroinform. 2018;11: 76. 10.3389/fninf.2017.00076 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 9. Antunes B, Hill DRC: Reproducibility, replicability and repeatability: a survey of reproducible research with a focus on high performance computing. Comput Sci Rev. 2024;53: 100655. 10.1016/j.cosrev.2024.100655 [DOI] [Google Scholar]
  • 10. Oxford English Dictionary. J Simpson & E Weiner (eds), 2nd edition, Oxford: Oxford University Press, 1989. Reference Source [Google Scholar]
  • 11. Joint Committee for Guides in Metrology: International vocabulary of metrology—basic and general concepts and associated terms. Paris, France: International Organization of Legal Metrology,2006. Reference Source [Google Scholar]
  • 12. Bollen K, Cacioppo JT, Kaplan R, et al. : Social, behavioral, and economic sciences perspectives on robust and reliable science. National Science Foundation, Arlington, VA,2015. Reference Source [Google Scholar]
  • 13. Committee on Reproducibility and Replicability in Science: Reproducibility and replicability in science. 2019; last accessed June 11th, 2025. 10.17226/25303 [DOI] [PubMed] [Google Scholar]
  • 14. 2024 Association for Computing Machinery Conference on Reproducibility and Replicability. Last accessed 11th June 2025. Reference Source [Google Scholar]
  • 15. Lusoli W, (ed): Reproducibility of scientific results in the EU.European Commission, Directorate-General for Research and Innovation, Brussels,2020. 10.2777/341654 [DOI] [Google Scholar]
  • 16. Goodman SN, Fanelli D, Ioannidis JPA: What does research reproducibility mean? Sci Transl Med. 2016;8(341): 341ps12. 10.1126/scitranslmed.aaf5027 [DOI] [PubMed] [Google Scholar]
  • 17. Olechowski A, Eppinger SD, Joglekar N: Technology readiness levels at 40: a study of state-of-the-art use, challenges, and opportunities.In: 2015 Portland international conference on management of engineering and technology (PICMET). IEEE, August,2015;2084–2094. 10.1109/PICMET.2015.7273196 [DOI] [Google Scholar]
Open Res Eur. 2025 Sep 1. doi: 10.21956/openreseurope.22484.r56904

Reviewer response for version 2

Yibei Chen 1

This paper starts with the reproducibility crisis in science and then argues that the practices recommended for improving reproducibility in science do not apply to the innovation process, which is essentially the industrialization of scientific products. This is an interesting perspective. However, it also implies that the transition from science to innovation/technology may only apply to certain types of science.

The paper then proposes three phases: (1) demonstration of basic principles to proof-of-concept (TRL 1–3), (2) translation of lab prototype (TRL 4) to full-scale demonstration in the real world (TRL 6), and (3) demonstration in the operational environment (TRL 7) of an operational product or process (TRL 9).

But what about TRL 5 and TRL 8? Are they intermediate steps between 4 and 6, and 7 and 9?

I’m not familiar with the industry context, but here are some comments that may help make the article more accessible to readers:

1. If the goal of this paper is to propose how to achieve reproducibility in the innovation process, the introduction should start with the core concepts, “reproducibility” and “innovation process”, rather than focusing first on reproducibility and replicability in science. The opening paragraph could go straight to the core argument instead of providing such a broad context.

2. In the “Proposed approach” section, the three phases are quite abstract. It would help if the author provided examples to illustrate them. This could be done with one example that moves through all three phases, or three separate examples, one for each phase.

3. It would also be helpful if the paper included a clear, concrete definition of Technology Readiness Levels, ideally covering TRL 1 through 9, so readers can see the full picture.

4. “Achieving reproducibility” in the three phases remains quite conceptual. The paper could be strengthened by including operationalized definitions along with examples. A table could work well here, listing: (1) Rules/advice for achieving reproducibility, (2) Conceptual definition, (3) Operationalized definition, and (4) Real-world example

Where applicable, are recommendations and next steps explained clearly for others to follow? (Please consider whether others in the research community would be able to implement guidelines or recommendations and/or constructively engage in the debate)

Partly

Does the article adequately reference differing views and opinions?

Partly

Are all factual statements correct, and are statements and arguments made adequately supported by citations?

Yes

Is the rationale for the Open Letter provided in sufficient detail? (Please consider whether existing challenges in the field are outlined clearly and whether the purpose of the letter is explained)

Partly

Is the Open Letter written in accessible language? (Please consider whether all subject-specific terms, concepts and abbreviations are explained)

Yes

Reviewer Expertise:

neuroscience

I confirm that I have read this submission and believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard, however I have significant reservations, as outlined above.

Open Res Eur. 2025 Aug 29. doi: 10.21956/openreseurope.22484.r56902

Reviewer response for version 2

Joris Frese 1

In this open letter, the authors combine discussions about reproducibility/replicability in science with discussions about the innovation process in industrial settings. The key claim that the paper argues for is that there are three distinct phases in the innovation process of companies (discovery, translation, application), that each require distinct approaches to reproducibility. The authors propose ways to take reproducibility into account during each phase.

While I am not an expert whatsoever on industrial innovation processes (my expertise is rather related to replicability in scientific research), I appreciate the attempt to use this scientific lingo and transfer it to other areas. The authors' argument that reproducibility is important yet underappreciated in industry lingo seems plausible to me, and if that is indeed the case, then their letter makes a valuable contribution.

I also appreciate that the authors addressed various comments made by the previous reviewers compared to version 1 of this letter. I do however think that two points of the previous reviewers (which I agree with) have not yet been sufficiently addressed:

1) As mentioned in a previous review, there is quite an abrupt switch of focus from the introduction to the main text. First, the authors talk almost exclusively about reproducibility in scientific research, but then the last paragraph of the introduction and the main text are suddenly adopting a very industry/engineering-focussed perspective. I believe it might be worthwhile to add another paragraph in the introduction that clearly lays out the benefits of adopting reproducibility-lingo from science/academia to problems in industry/engineering/innovation enterprises and then keep the rest of the letter focussed on those latter aspects, rather than seemingly being stuck in the twilight zone between industry and academia (I believe academia already has more than its fair share of opinion pieces on the reproducibility crisis, so a piece focussed primarily and clearly on reproducibility issues in industry might be more valuable).

2) As also mentioned in a previous review, the letter would benefit substantially from incorporating some more concrete examples into the main text (e.g., what is one concrete example of a situation where companies need to worry about reproducibility during the translation and/or the application phase of the innovation process). Keeping everything at the rather abstract level that it is currently at might risk confusing many non-expert readers who could otherwise benefit from this piece.

Where applicable, are recommendations and next steps explained clearly for others to follow? (Please consider whether others in the research community would be able to implement guidelines or recommendations and/or constructively engage in the debate)

Partly

Does the article adequately reference differing views and opinions?

Yes

Are all factual statements correct, and are statements and arguments made adequately supported by citations?

Yes

Is the rationale for the Open Letter provided in sufficient detail? (Please consider whether existing challenges in the field are outlined clearly and whether the purpose of the letter is explained)

Yes

Is the Open Letter written in accessible language? (Please consider whether all subject-specific terms, concepts and abbreviations are explained)

Partly

Reviewer Expertise:

metascience; open science; reproducibility; interdisciplinary collaboration

I confirm that I have read this submission and believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard, however I have significant reservations, as outlined above.

Open Res Eur. 2025 Aug 21. doi: 10.21956/openreseurope.22484.r56277

Reviewer response for version 2

Tim Booth 1

The revised manuscript addresses the majority of the concerns raised in my initial review.

The authors have clarified their intent and strengthened the philosophical and contextual framing of reproducibility across the three proposed phases of the innovation process. The addition of a paragraph discussing the divergent uses of "reproducibility," "replicability," and "repeatability" across disciplines, including references to Barba, Plesser, and Antunes & Hill, significantly improves the manuscript’s clarity and positioning. Their choice to retain a broad, epistemological use of "reproducibility" is now more clearly justified and appropriately caveated.

Further, the discussion of the translation and application phases has been expanded and sharpened. The authors now acknowledge that reproducibility in these contexts may be internal to organisations and shaped by commercial concerns such as IP protection and stakeholder confidence. This framing is more useful and realistic.

The reference base has been broadened and now better reflects the interdisciplinary nature of the topic, though life sciences still dominate the examples.

That said, one point remains underdeveloped. While the manuscript now does a good job of outlining *why* reproducibility takes different forms in discovery, translation, and application, it still leaves open the question of *what should be done differently*. Having mapped reproducibility needs onto TRL stages and organizational types, the logical next step would be to indicate how stakeholders in each context might act on these insights. What should academic researchers working in early-stage discovery consider when designing reproducible claims for downstream translation? What protocols or specification strategies might applied research centers adopt to enable transfer? How can quality assurance in the application phase be framed not only as a technical obligation but as a communicable standard for investors or regulatory bodies? Even brief examples or references to existing practices would help anchor the authors’ framework in practical decision-making. As it stands, the manuscript remains largely conceptual, and readers looking for guidance may be left unsure of how to act on the insights presented by the end of the article.

Finally, I would note that a means of identifying the changes in the revised manuscript would have been greatly appreciated. As it stands, there is no documentation of the edits, and readers are left to manually compare versions. A brief point-by-point summary or marked-up copy would have facilitated a more efficient review process.

In summary, the revisions significantly improve the manuscript, and I would now consider it suitable for indexing, with the noted reservations.

Where applicable, are recommendations and next steps explained clearly for others to follow? (Please consider whether others in the research community would be able to implement guidelines or recommendations and/or constructively engage in the debate)

Partly

Does the article adequately reference differing views and opinions?

Partly

Are all factual statements correct, and are statements and arguments made adequately supported by citations?

Partly

Is the rationale for the Open Letter provided in sufficient detail? (Please consider whether existing challenges in the field are outlined clearly and whether the purpose of the letter is explained)

Yes

Is the Open Letter written in accessible language? (Please consider whether all subject-specific terms, concepts and abbreviations are explained)

Yes

Reviewer Expertise:

2D Materials, Innovation

I confirm that I have read this submission and believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard, however I have significant reservations, as outlined above.

Open Res Eur. 2025 Aug 21. doi: 10.21956/openreseurope.22484.r56276

Reviewer response for version 2

Antunes Benjamin 1,2

The added section about reproducible research terminology, and the correct usage of the different terms (repeatability, reproducibility, and replicability), makes the paper clearer.

We can now better see the connection between TRL terminology used in industry and the concept of reproducibility.

The authors highlighted the importance of the reproducibility process for industry, while acknowledging that it can vary depending on intellectual property. In my opinion, the different contexts are now well defined.

Where applicable, are recommendations and next steps explained clearly for others to follow? (Please consider whether others in the research community would be able to implement guidelines or recommendations and/or constructively engage in the debate)

Not applicable

Does the article adequately reference differing views and opinions?

No

Are all factual statements correct, and are statements and arguments made adequately supported by citations?

Partly

Is the rationale for the Open Letter provided in sufficient detail? (Please consider whether existing challenges in the field are outlined clearly and whether the purpose of the letter is explained)

Partly

Is the Open Letter written in accessible language? (Please consider whether all subject-specific terms, concepts and abbreviations are explained)

Yes

Reviewer Expertise:

Reproducibility in computer science

I confirm that I have read this submission and believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard.

Open Res Eur. 2025 May 19. doi: 10.21956/openreseurope.21002.r53244

Reviewer response for version 1

Tim Booth 1

The authors argue that reproducibility, a cornerstone of scientific and technological credibility, must be understood in relation to three distinct phases of the innovation process: i) Discovery, ii) Translation, and iii) Application. Each phase is associated with different reproducibility criteria: epistemic confirmation for discovery, specification transferability for translation, and performance consistency for application. These phases are mapped to Technology Readiness Levels (TRLs), offering a structured lens to evaluate reproducibility across the innovation lifecycle.

The conceptual framing of reproducibility in innovation in terms of these three phases is a thoughtful attempt to bridge epistemological concerns from academia with operational needs in industry. This appears to be a novel and potentially impactful contribution, particularly for research managers and policymakers. However, the letter does not sufficiently distinguish itself from prior work on the reproducibility crisis in academic contexts, nor does it fully resolve tensions between academic and industrial interpretations of reproducibility.

Data and methodology

This is a conceptual piece without original data, but the reasoning is generally coherent. However, the proposal would strongly benefit from concrete examples or case studies illustrating how reproducibility efforts differ (or should differ) across TRLs.

Terminology: The authors' use of "reproducibility" to cover what is elsewhere called "replicability" or "repeatability" may confuse readers, especially in computational and biomedical domains where these terms have distinct meanings. While some justification is provided through epistemological framing, a clearer acknowledgment of prior work that sharpens the definition of competing terminologies (e.g., ref. 1, ref. 2) would improve clarity and credibility. Readers may disagree that confirmation of conclusions through orthogonal experimental routes qualifies as reproducibility. Conversely, if one cannot independently verify a conclusion using different methods, samples, and data, it might be difficult to argue that the original conclusion is not reproducible in a precise sense, rather than irreplaceable in the current investigation. The paper would benefit from a more explicit justification of the terminology used, particularly the umbrella use of "reproducibility" in light of current consensus in relevant fields such as computational science, engineering, and biology.

Further, if the aim is to influence innovation practice, the authors should enrich the article by proposing concrete tools or mechanisms (for example, protocol repositories, QA systems, or documentation practices) that could enhance reproducibility at each phase. This would allow stakeholders to more easily action the authors' recommendations.

Many references are drawn from biomedical sciences. Inclusion of examples from materials science, computer science, or engineering would help the piece resonate across more disciplines. While biomedical sciences have naturally dominated the literature on reproducibility due to high stakes, system complexity, and challenges in methodological transparency, there is a wide acknowledgement of the problem across fields.

In their effort to encompass and speak to as many stakeholders as possible, the authors risk conflating issues in reproducibility of knowledge generation (academic) with those of production and deployment (industrial). The three-phase breakdown used by the authors is a positive step, though arguably, it is the challenges in the Discovery phase that lie at the heart of most reproducibility concerns. The Translation and Application steps, for example, rarely require communication of protocols to other stakeholders for reasons of industrial competition and secrecy. It is difficult to imagine start-ups spending much time assisting other start-ups in reproducing their competitive advantage, or incumbent industry leaders helping competitors achieve parity. The authors might assist readers by clarifying the specific issues in phases ii) and iii).

Several minor language issues (such as missing commas or slightly awkward constructions) could be addressed with a careful copy-edit.

Conclusions

The article makes a worthwhile conceptual contribution by linking reproducibility concerns to the innovation pipeline and suggesting that different forms of reproducibility are relevant at different stages. However, in its current form, the piece straddles the boundary between commentary and actionable guidance without fully satisfying either mode. With the addition of clearer definitions, concrete examples, and interdisciplinary breadth, the article could be a useful resource for those involved in translational research, policy, and innovation governance.

Recommendation

Major revision. The core idea is solid but requires clarification, expansion, and sharper framing to fully realize its potential contribution.

Where applicable, are recommendations and next steps explained clearly for others to follow? (Please consider whether others in the research community would be able to implement guidelines or recommendations and/or constructively engage in the debate)

Partly

Does the article adequately reference differing views and opinions?

Partly

Are all factual statements correct, and are statements and arguments made adequately supported by citations?

Partly

Is the rationale for the Open Letter provided in sufficient detail? (Please consider whether existing challenges in the field are outlined clearly and whether the purpose of the letter is explained)

Yes

Is the Open Letter written in accessible language? (Please consider whether all subject-specific terms, concepts and abbreviations are explained)

Yes

Reviewer Expertise:

2D Materials, Innovation

I confirm that I have read this submission and believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard, however I have significant reservations, as outlined above.

References

  • 1. Barba LA: Terminologies for Reproducible Research. 2018. arXiv:1802.03311. https://arxiv.org/abs/1802.03311
  • 2. Antunes B, Hill DRC: Reproducibility, Replicability and Repeatability: A survey of reproducible research with a focus on high performance computing. Computer Science Review. 2024;53:100655. https://doi.org/10.1016/j.cosrev.2024.100655
Open Res Eur. 2025 Feb 19. doi: 10.21956/openreseurope.21002.r50404

Reviewer response for version 1

Antunes Benjamin 1,2

This open letter discusses reproducibility in science. The paper proposes that the innovation process is divided into three categories: discovery, translation, and application. It states that reproducibility concerns vary across these categories.

For the first category, the focus is on the reproducibility of scientific conclusions. For the second, it concerns the reproducibility of the protocols and processes used to develop the product. For the third, it addresses the reproducibility of the final quality of the product, which is essential for commercial success.

For all these stages, they use the "Technology Readiness Level" (TRL) scale, which is "a tool for assessing the maturity of technologies during complex system development", to measure the maturity of the project.

My comments:

The introduction of this paper begins by presenting the challenges we face regarding the reproducibility of published research papers. In my opinion, the references cited [1-10] primarily address academic research, particularly the lack of reproducibility in published studies (for multiple reasons, such as not sharing the data or the code, poor statistics, and so on). The paper also provides some clarification on three key tenets the authors consider essential for reproducibility: good study design, robust methodological execution, and open science, which includes sharing code, data, etc.

However, it seems that the authors then shift their focus to industrial processes rather than academic research. In my view, concerns about reproducibility in industry differ from those in academic research. In science (academia), our objective is to build knowledge upon the existing knowledge of others. The reproducibility crisis mentioned in the introduction refers to this issue: if research papers cannot be reproduced, it raises concerns about the reliability of published results and our ability to build upon previous research.

In an industrial context, however, a probable goal of the company is growth and profitability. Of course, standardizing the products they sell is essential (as discussed in the "application" category), but can we truly talk about reproducibility in this context? A company may aim for internal reproducibility, but its objective might not be to make its protocols and processes (category “translation”) or its final quality assurance (category “application”) easily reproducible by others.

Therefore, in my opinion, this paper might be mixing these two distinct concepts: the reproducibility crisis in academic research (which pertains to scientific publications) and the industrial processes a company establishes to produce and sell standardized products. I fail to see the relevance of considering the latter in this discussion in the context of reproducibility.

A scientific publication is an innovation in itself; it does not necessarily have to lead to the commercialization of a new product. In practice, only a very small percentage of scientific publications result in the production and commercialization of new products.

More focused comments:

The paper uses the TRL scale to describe the maturity of a project. While this may be well-suited for industrial projects, I am unsure whether it effectively applies to scientific publications. The discovery phase corresponds to TRL levels 1 to 3, which include: "Basic principles observed and reported", "Technology concept and/or application formulated", and "Analytical and experimental critical function and/or characteristic proof-of-concept".

The paper states: "In this initial phase, it is the reproducibility of the original conclusion that is paramount, which needs to be more than producing the same result in the same way".

First, in my opinion, a published paper is a complete entity in itself. The process of reaching a conclusion is not limited to TRL levels 1 to 3. The tools and methods used—such as code, data, and other resources—significantly influence the final result and, consequently, the conclusion. I believe that a scientific publication is a project/innovation on its own, and this is where the reproducibility crisis lies, as well as the reproducibility issues highlighted in the references cited by the authors.

Secondly, in the context of scientific research, the current consensus is not to use the term “reproducibility” when referring to obtaining the same scientific conclusion using a different method. In this context, the appropriate term is “replicability.” However, I acknowledge that in epistemology, "reproducibility" remains the broader term. Here are some references:

BARBA, Lorena A. Terminologies for reproducible research. arXiv preprint arXiv:1802.03311, 2018 [Ref 2].

ANTUNES, Benjamin and HILL, David R.C. Reproducibility, Replicability and Repeatability: A survey of reproducible research with a focus on high performance computing. Computer Science Review, Volume 53, 2024, 100655, ISSN 1574-0137, https://doi.org/10.1016/j.cosrev.2024.100655 [Ref 1].

https://www.acm.org/publications/policies/artifact-review-and-badging-current

"As a result of discussions with the National Information Standards Organization (NISO), it was recommended that ACM harmonize its terminology and definitions with those used in the broader scientific research community, and ACM agreed with NISO's recommendation to swap the terms 'reproducibility' and 'replication' with the existing definitions used by ACM as part of its artifact review and badging initiative. ACM took action to update all prior badging to ensure consistency."

Conclusion:

Overall, in conclusion, I feel that I did not fully grasp the purpose of the article. Perhaps the issue is on my side. I believe that both the references cited and my own perspective focus more on the reproducibility of scientific research papers (whether academic or industry-based), due to lack of data, code, and so on, while this article seems to address the reproducibility of industrial processes for producing a final commercial product.

However, in this context, no specific tools are proposed. I am not an expert in industry, but considering the concept of reproducibility (if the entity producing and selling the final product wants others to be able to reproduce it), the authors could have suggested tools for the different phases. For the discovery phase, they could have introduced tools for sharing code, data, or documentation methods. For the translation phase, workflow tools that facilitate process replication could have been discussed. For the application phase, standardized quality assurance methods might have been relevant.

Additionally, for a general-purpose paper on reproducibility, the references seem overly concentrated on biology (or related fields).

Maybe I didn't grasp the interest of this publication. I am open for discussion.

Some typos:

Abstract:

Missing comma between “how” and “in”: One reason for this is a lack of appreciation of what needs to be reproduced and how in each phase of the innovation process.

Introduction:

Missing comma between "this" and "however": In addition to this however, as illustrated

Where applicable, are recommendations and next steps explained clearly for others to follow? (Please consider whether others in the research community would be able to implement guidelines or recommendations and/or constructively engage in the debate)

Not applicable

Does the article adequately reference differing views and opinions?

No

Are all factual statements correct, and are statements and arguments made adequately supported by citations?

Partly

Is the rationale for the Open Letter provided in sufficient detail? (Please consider whether existing challenges in the field are outlined clearly and whether the purpose of the letter is explained)

Partly

Is the Open Letter written in accessible language? (Please consider whether all subject-specific terms, concepts and abbreviations are explained)

Yes

Reviewer Expertise:

Reproducibility in computer science

I confirm that I have read this submission and believe that I have an appropriate level of expertise to state that I do not consider it to be of an acceptable scientific standard, for reasons outlined above.

References

  • 1. Antunes B, Hill DRC: Reproducibility, Replicability and Repeatability: A survey of reproducible research with a focus on high performance computing. Computer Science Review. 2024;53:100655. doi:10.1016/j.cosrev.2024.100655
  • 2. Barba LA: Terminologies for Reproducible Research. 2018. arXiv:1802.03311

Associated Data

    This section collects any data citations, data availability statements, or supplementary materials included in this article.

    Data Availability Statement

    No data are associated with this article.


    Articles from Open Research Europe are provided here courtesy of European Commission, Directorate General for Research and Innovation
