Abstract
The field of prevention science aims to understand societal problems, identify effective interventions, and translate scientific evidence into policy and practice. There is growing interest among prevention scientists in the potential for transparency, openness, and reproducibility to facilitate this mission by providing opportunities to align scientific practice with scientific ideals, accelerate scientific discovery, and broaden access to scientific knowledge. The overarching goal of this manuscript is to serve as a primer introducing and providing an overview of open science for prevention researchers. In this paper, we discuss factors motivating interest in transparency and reproducibility, research practices associated with open science, and stakeholders engaged in and impacted by open science reform efforts. In addition, we discuss how and why different types of prevention research could incorporate open science practices, as well as ways that prevention science tools and methods could be leveraged to advance the wider open science movement. To promote further discussion, we conclude with potential reservations and challenges for the field of prevention science to address as it transitions to greater transparency, openness, and reproducibility. Throughout, we identify activities that aim to strengthen the reliability and efficiency of prevention science, facilitate access to its products and outputs, and promote collaborative and inclusive participation in research activities. By embracing principles of transparency, openness, and reproducibility, prevention science can better achieve its mission to advance evidence-based solutions to promote individual and collective well-being.
Keywords: Open science, Prevention, Replication, Reproducibility, Research transparency
Transparent, Open, and Reproducible Prevention Science
The field of prevention science aims to generate reliable evidence concerning the etiology of and responses to educational, health, and social issues—and to translate that evidence into policies and practices to promote individual and collective well-being (Botvin, 2000). The potential influence of prevention science on significant societal problems and inequities demands a high level of scientific rigor and research integrity (Catalano et al., 2012). Consequently, prevention scientists have worked to establish methodological and ethical standards that yield valid and actionable evidence (Crowley et al., 2018; Flay et al., 2005; Gottfredson et al., 2015; Leadbeater et al., 2018; Spoth et al., 2013). As it has done previously, prevention science continues to revisit its standards and norms in response to new opportunities and concerns, starting with constructive discussion and debate.
There is growing interest in prevention science and related disciplines in the transparency, openness, and reproducibility of scientific research (see Table 1). This movement—most commonly referred to as “open science”—aims to make the scientific process public and auditable, and to ensure the free availability and usability of scientific knowledge (Bezjak et al., 2018). Examples of open science practices include registering studies, protocols, and analysis plans; sharing data, analytic methods, and other research materials; reporting and disclosing all study methods and findings; and disseminating research outputs via open access outlets (for a more complete review, see Miguel et al., 2014; Munafò et al., 2017; Nosek et al., 2015). As awareness and support of these practices have increased in recent years (Christensen et al., 2020), conversations in most disciplines have evolved from “whether” open science should be the norm to “how” to implement transparent, open, and reproducible research practices (National Academies of Sciences, 2018, 2019). Notably, as fields transition to open science, researchers often raise pragmatic concerns about the potential for additional burdensome bureaucracy and regulation, stifled creativity and discovery, and inappropriate application to studies not based on hypothetico-deductive models (Academy of Medical Sciences, 2015).
Table 1.
Glossary
Term | Definition | Reference |
---|---|---|
Analysis Plan | Technical, detailed elaboration of the procedures for executing the analysis described in the protocol | Gamble et al. (2017) |
Availability Standards | Guidelines for making data, analytic code, and research materials findable, accessible, interoperable, and reusable | Nosek et al. (2015) |
Dynamic Documents | Documents combining code, rendered output, and prose that can be continually edited and updated | Xie et al. (2020) |
Inferential Reproducibility | Making knowledge claims of similar strength from a study replication or reanalysis | Goodman et al. (2016) |
Methods Reproducibility | Ability to implement study procedures as exactly as possible, with the same data and tools, to obtain the same results | Goodman et al. (2016) |
Open Access | Free, immediate, online availability of research articles, with copyright that allows sharing and adaptation | Tennant et al. (2016) |
Open Notebook | Practice of making the primary record of a research project publicly available (e.g., online as it is recorded) | Schapira and Harding (2020) |
Open Source | Software source code released with a copyright that allows use, adaptation, and distribution for any purpose | Peng (2011) |
Preprints | Version of a scientific manuscript posted on a public server prior to formal peer review | Sarabipour et al. (2019) |
Protocol | Document with comprehensive details on study background, rationale, objectives, design, and methods | Chan et al. (2013) |
Registered Reports | Publishing format in which protocols undergo peer review, followed by in-principle acceptance of the results paper | Chambers (2013) |
Reporting Standards | Minimum set of study information needed for an accurate, complete, and transparent account of what was done and found | Simera et al. (2010a, 2010b) |
Research Lifecycle | Stages of a research study, such as prioritization, design, conduct, reporting, and overall management of a research study | National Academies of Sciences (2018) |
Results Reproducibility | Production of corroborating results in a new study, having followed the same methods as the original study | Goodman et al. (2016) |
Scientific Ecosystem | Interacting community of scientific stakeholders and their environments | Moher et al. (2016) |
Study Registration | Process of entering a minimum dataset about an empirical study in an independently controlled registry that is accessible to the public | De Angelis et al. (2005) |
Version Control | System that records changes to files over time in a way that facilitates later recall of specific file versions | Gentzkow and Shapiro (2014) |
Workflow | Management and organization of folders, files, metadata, analytic code, and other study data documentation | Project TIER (2016) |
The Society for Prevention Research (SPR) has identified the relevance of specific open science practices in prior work, such as task forces on standards of evidence for prevention interventions (Gottfredson et al., 2015) and ethical issues encountered by prevention scientists (Leadbeater et al., 2018). More recently, a featured roundtable session at the 2019 SPR Annual Meeting explicitly focused on open science within prevention science (Bradshaw et al., 2019). To promote further discussion on this critical issue, the panel participants and session attendees recommended that Prevention Science publish a special issue on transparency, openness, and reproducibility—which three session participants and co-authors of this manuscript (SG, FG, and CPB) subsequently pursued. This paper serves as a primer introducing and reviewing key concepts for this special issue, with subsequent papers providing deeper analysis on the implications of specific concepts to prevention science. In this paper, we review the opportunities and concerns motivating the wider open science movement. We also consider core practices, resources, and stakeholders involved in advancing an open science reform effort, with attention to the intersection of open science practices and prevention science methods. We conclude with some challenges to consider in future discussions about the transition to a transparent, open, and reproducible prevention science.
Factors Motivating the Open Science Movement
Proponents of open science advocate for transparency, openness, and reproducibility as mechanisms to align scientific practice with scientific ideals, accelerate scientific discovery, and broaden access to scientific knowledge (National Academies of Sciences, 2018, 2019). Depending on the nature and importance of a study, these principles are operationalized as one or more relevant open science practices (Mayo-Wilson & Dickersin, 2018). In this section, we summarize these factors motivating our call for concerted efforts to align prevention science with the open science movement.
Aligning Scientific Practice with Scientific Ideals
Transparency, openness, and reproducibility are inherent in fundamental scientific ideals, such as communality, universalism, disinterestedness, and organized skepticism (Merton, 1973). For example, open science practices better enable researchers to verify the work of others. Verifiability relates to the ideal of science as “self-correcting,” which means the scientific community governs itself in order to calibrate evidentiary claims and limit unavoidable errors, thereby safeguarding credibility and instilling trust in the scientific literature (Vazire, 2018). Because verification requires access to the empirical support behind scientific claims, practices like data sharing make that evidence available for scrutiny; in this way, open science bolsters research integrity by facilitating verifiability. As researchers increasingly espouse these ideals, making open science the norm would better align actual scientific practice with the ideals to which scientists subscribe (Anderson et al., 2016; Anderson et al., 2010).
Accelerating Scientific Discovery and Progress
Open science also can accelerate the progress of science as a cumulative enterprise. Transparency and reproducibility facilitate reuse and building on the work of others, leading to greater returns on research investments (Academy of Medical Sciences, 2015). For example, sharing data, code, and materials allows a greater proportion of products from previous research to influence new studies (Goodman et al., 2016). These practices can speed the process of new discoveries and expedite error detection, thereby redirecting unproductive lines of research more quickly (Vazire, 2018). A new research team can better check the internal consistency of another team’s results, reanalyze data using the original analytical strategy, and examine robustness to alternative analytical choices, prior to conducting a new study (Nuijten, 2022). In addition, openness enables collaborations not possible through siloed research, such as crowdsourced initiatives that build large datasets to create opportunities for a greater number of rich data analyses (Moshontz et al., 2018). Data sharing also yields greater power to investigate new or more complex questions (e.g., intervention effects on rare outcomes, subgroup effects, or moderated mediation) that require larger sample sizes than are typically found in one study (Leijten et al., 2020). Adopting protocols, software, and analytic strategies from previous studies can increase standardization, facilitating more efficient discoveries and research syntheses that summarize the cumulative evidence within a line of scientific inquiry (Goodman et al., 2016).
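To make the power argument concrete, consider a brief illustration in R; the prevalences and sample sizes below are hypothetical rather than drawn from any cited study. Detecting an intervention effect on a rare outcome may be underpowered in a single trial but well powered when individual-level data from several comparable trials are pooled.

```r
# Illustrative only: power to detect a reduction in a rare outcome from 4% to 2%
# (two-sided test, alpha = .05), comparing one trial with pooled individual-level
# data from four comparable trials. All numbers are hypothetical.

power.prop.test(n = 500,  p1 = 0.04, p2 = 0.02)$power  # single trial (500 per arm), ~0.46
power.prop.test(n = 2000, p1 = 0.04, p2 = 0.02)$power  # pooled trials (2,000 per arm), ~0.96
```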
Broadening Access to Scientific Knowledge
The open science movement also focuses on making research products and outputs more usable and freely available to everyone, broadening access to scientific knowledge and resources. For example, disparities in financial, human, and physical resources across research institutions can be mitigated by the free availability and reuse of protocols, data, code, software, and materials from previous research (Gennetian et al., 2020). In addition, open access articles can be read online or downloaded freely by stakeholders not affiliated with research institutions that have journal subscriptions, such as non-governmental organizations, policymakers, and engaged citizens. Through this focus on free availability of research findings and products, open science can accelerate the flow of scientific evidence to the public.
Need for an Open Research Lifecycle
Researchers make numerous decisions across all stages of research, or the research lifecycle, including question formulation, study design, data collection and analysis, and reporting and dissemination (National Academies of Sciences, 2018). Without transparency, researchers have undisclosed flexibility in making these decisions (sometimes called “researcher degrees of freedom”), and this undisclosed flexibility underlies many of the specific concerns motivating the open science movement (Wicherts et al., 2016). For example, a “closed” research lifecycle hinders the ability to reproduce previous research (Goodman et al., 2016), facilitates selective non-reporting of studies and results (i.e., publication bias and outcome reporting bias) and other detrimental research practices (Dwan et al., 2013), prevents detection of unintentional errors and intentional misconduct (Fanelli, 2009), and exacerbates perverse incentives for career scientists (Smaldino & McElreath, 2016). In this section, we consider some of the concerns and challenges for the field of prevention science that can be addressed by adopting open science.
The “Reproducibility Crisis”
Over the last decade, scientists and other stakeholders have contended that the behavioral, social, and health sciences are experiencing a “reproducibility crisis” (Fidler & Wilcox, 2018). Numerous large-scale collaborative efforts have found low reproducibility rates in psychology (Klein et al., 2014; Open Science Collaboration, 2015), economics (Camerer et al., 2016; Chang & Li, 2015), the social sciences (Camerer et al., 2018), and medicine (Nosek & Errington, 2017). While irreproducibility can occur for substantive reasons, scientific stakeholders are concerned that the number of key research findings that cannot be reproduced is higher than expected or desired, particularly in high-profile scientific journals (Shrout & Rodgers, 2018). Because the ability to reproduce findings is one indicator (of many) of the truth of a scientific claim (Goodman et al., 2016), these results are commonly taken as evidence that a greater proportion of published research findings are false than previously believed (Baker, 2016; Gall et al., 2017). A high proportion of false research findings can hinder scientific progress, delay translation of research into policy and practice applications, lead to waste of resources, and threaten the reputation of and public trust in science (Academy of Medical Sciences, 2015).
Goodman et al. (2016) offer a three-term taxonomy that may be helpful to facilitate shared understanding of and clear communication about reproducibility within prevention science. First, “methods reproducibility” refers to the ability to implement the same methodological and computational procedures with the same data to obtain the same results as a previous study. It facilitates trust that data and analyses are as represented, requiring provision of enough detail about original study methods and data for another to repeat the same procedures. “Results reproducibility” refers to the ability to implement the same methodological procedures with a new, independent dataset to produce results corroborating a previous study. Using this terminology, a replication study generally refers to a study designed to examine or test the results reproducibility of a previous study, with the potential to provide new evidence for a scientific claim (Academy of Medical Sciences, 2015). Finally, “inferential reproducibility” refers to the ability to draw conclusions that are qualitatively similar to a previous study, either from an independent replication or reanalysis of the original study data. All three types of reproducibility are relevant to the field of prevention science and germane to the open science movement, each with important considerations across stages of the research lifecycle.
Detrimental Research Practices
While there are various determinants of reporting findings that are false and irreproducible, common “detrimental research practices” may be important contributors (Munafò et al., 2017). Some researchers intentionally engage in these practices with full knowledge of their negative consequences; however, most researchers likely do so unknowingly or under the belief that these practices are acceptable and compatible with research integrity (John et al., 2012). Regardless of intention or understanding, these practices have detrimental effects on research integrity by inflating the false positive error rate in the research literature (National Academies of Sciences, 2017; Simmons et al., 2011). Unfortunately, evidence suggests that many of these practices are not only common, but may be increasing over time (Chavalarias et al., 2016; Fanelli, 2012; Masicampo & Lalande, 2012; Pocock et al., 2004).
Chief among these detrimental research practices is selective non-reporting of studies and results, which occurs when the nature of study findings (rather than methodological rigor) influences the decision to submit, disseminate, or publish them (Chalmers, 1990; Chan, 2008). There is ample and long-standing evidence across disciplines that “statistically significant” or “positive” results are more likely to be published than results that are “not statistically significant,” “negative,” “null,” “inconclusive,” or otherwise countervailing (Axford et al., 2020; Dwan et al., 2013; Fanelli, 2010a, 2010b; Franco et al., 2014; Hartgerink et al., 2016; Sterling, 1959). Researchers may selectively refrain from writing up and submitting entire studies for publication based on the nature or direction of results (Rosenthal, 1979), leading to a biased subsample of studies being published in the literature on a research topic. For example, selective non-reporting of entire studies (“publication bias”) has been documented in psychology and education using evidence from institutional review boards and doctoral dissertations that shows studies with statistically nonsignificant results are less likely to be published (Cooper et al., 1997; Pigott et al., 2013). In clinical psychology and medicine, interventional trials with statistically nonsignificant results are less likely to be published than clinical trials with statistically significant results (Cuijpers et al., 2010; Dwan et al., 2013; Niemeyer et al., 2012, 2013; Song et al., 2010). An evaluation of trials funded by the National Institute of Mental Health found that studies with small effects were less likely to be published than studies with large effects, inflating the apparent effectiveness of psychotherapies (Driessen et al., 2015).
There is also selective non-reporting of study results (“reporting bias” or “selective outcome reporting”), which occurs when researchers choose a subset of outcomes to report in manuscripts (Axford et al., 2020; Chan et al., 2004). Selective non-reporting of results may be difficult to detect because it tends to be apparent only when study protocols and statistical analysis plans are registered prospectively, and when reviewers or readers check published results against registered outcomes and analyses. Nonetheless, there is also empirical evidence that statistically nonsignificant results are more likely to be omitted selectively from manuscripts (Chan & Altman, 2005; Staines & Cleland, 2007). In contrast, selective non-reporting of whole studies may be more apparent, especially in the case of large prevention trials. Scientific claims based on these bodies of evidence are undermined as a result of authors, journal editors, and peer reviewers using the statistical significance, magnitude, or direction of results to make publication decisions (Dickersin, 1992; Emerson et al., 2010; Olson et al., 2002).
Selective non-reporting is related to other detrimental research practices that virtually guarantee (spuriously) finding statistically significant or interesting results (Goodman et al., 2016). For example, “p-hacking” refers to repeatedly searching a dataset or trying multiple alternative analyses until a statistically significant or desired finding is obtained—and then failing to fully report how this result was obtained (Simonsohn et al., 2014). “Hypothesizing After Results are Known,” or “HARKing,” involves reporting a hypothesis formed after seeing study results as if it were an a priori hypothesis formed before collecting or analyzing study data (Kerr, 1998). Undisclosed flexibility in the research lifecycle also can hinder the ability of peer reviewers and readers to detect research practices that result in overfitted statistical models, i.e., overly optimistic “findings” from the statistical model of a dataset that do not occur in the target population due to idiosyncrasies of the sample at hand (Babyak, 2004). Spurious findings from overfitted statistical models (such as linear regressions, logistic regressions, structural equation models, and other models common in prevention science) are highly likely to fail to replicate in future samples, threatening the credibility of scientific claims supported by these findings. In addition, overinterpretation and misuse of inferential statistics can occur in cases of low statistical power (Button et al., 2013; Szucs & Ioannidis, 2017), lenient and arbitrary thresholds for statistical significance (Benjamin et al., 2018; Lakens et al., 2018), errors in reporting p-values (Nuijten et al., 2016), and inappropriate application of null hypothesis significance testing (Goodman et al., 2016).
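To illustrate how such undisclosed flexibility inflates false positives, the following R simulation is a minimal sketch with purely hypothetical data (not drawn from any cited study): it tests ten unrelated null outcomes and reports only the smallest p-value, a simplified stand-in for p-hacking.

```r
# Illustrative simulation: a two-arm study measuring 10 independent outcomes
# with no true intervention effects. Reporting only the "best" of the 10 tests
# inflates the false positive rate from the nominal 5% to about 1 - 0.95^10 = 40%.
set.seed(1)
n_sims <- 5000; n_per_arm <- 50; k_outcomes <- 10

false_positive <- replicate(n_sims, {
  p_values <- replicate(k_outcomes, {
    control      <- rnorm(n_per_arm)   # null outcome in the control arm
    intervention <- rnorm(n_per_arm)   # null outcome in the intervention arm
    t.test(control, intervention)$p.value
  })
  min(p_values) < 0.05                 # keep only the smallest p-value
})

mean(false_positive)  # approximately 0.40, far above the nominal 0.05
```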
While more rare than the aforementioned detrimental research practices, high-profile cases of intentional research misconduct have also generated recent interest in reproducibility (Stroebe et al., 2012). These intentional practices include fabrication, falsification, and plagiarism (National Academy of Sciences, National Academy of Engineering, & Institute of Medicine, 1992). Fabrication involves making up data or results, while falsification involves misrepresenting research through the manipulation of materials or data. In contrast, plagiarism is appropriating another person’s work without due credit when proposing, performing, or reporting research. As human beings, researchers also make honest technical or human errors—such as model misspecification or data entry errors in a spreadsheet (Academy of Medical Sciences, 2015). Whether intentional or not, closed research lifecycles hinder the ability to detect these issues that negatively impact the validity of reported research findings.
An Overview of Core Open Science Practices
In response to what has been called the reproducibility crisis, the open science movement represents part of a broader “credibility revolution” (Spellman, 2015), promoting standards and norms that increase the reliability of scientific research (Goodman et al., 2016; Vazire, 2018). One of many scientific reform efforts (Munafò, 2019), open science aims to promote a shift from traditionally closed to more open research lifecycles (Miguel et al., 2014; Nosek et al., 2015). To achieve this shift, open science proponents commonly advocate for a core set of practices (see Fig. 1).
Fig. 1.
Roadmap for a Transparent, Open, and Reproducible Research Lifecycle. Note: Figure adapted from the roadmap co-developed by SG for the Berkeley Initiative for Transparency in the Social Sciences Research Transparency and Reproducibility Training (RT2) workshops: https://www.bitss.org/resource-library/
It is important to note, however, that this set of core open science practices has largely arisen from idealized versions of studies using the hypothetico-deductive scientific method, such as randomized trials and experiments testing confirmatory analyses via null hypothesis significance testing in a frequentist statistical framework (Munafò et al., 2017). While principles of transparency and openness are relevant to all empirical research in prevention science, it may be premature or undesirable to require each open science practice for every type of empirical study a prevention scientist might conduct. Rather, a goal of this paper is to provide readers (particularly those new to transparency and reproducibility) with an overview of prominent open science practices. In turn, this article (and the Special Issue of Prevention Science that it anchors) can serve as a foundation for further discussion about the various elements of open science that can be applied in prevention science. In the sections that follow, we describe open science practices in greater detail, along with a description of the stakeholders and contexts of research that contribute to a need for greater openness and transparency. We then connect these activities and concepts to prevention science research practices.
Study Registration
Study registration is a time-stamped entry of a minimum set of information in a publicly accessible, independently controlled registry (De Angelis et al., 2005). Researchers may register their studies before collecting new data or accessing existing datasets (De Angelis et al., 2004). Study registrations can address publication bias by documenting that particular studies exist, and they can serve as “identification numbers” that link various products and outputs of a study, such as protocols, data, code, research materials, and manuscripts (Altman et al., 2014). In this way, a study registration acts as a “one-stop shop” for other researchers and interested stakeholders to discover and gather information on planned, current, and completed research on a topic, even if that research is unpublished (Harrison & Mayo-Wilson, 2014).
Protocols and Analysis Plans
A protocol is a document with details on study background, rationale, objectives, design, and methods (Chan et al., 2013). An analysis plan provides a technical and detailed elaboration of the procedures for executing the analysis described in the protocol (Gamble et al., 2017). Researchers can register, publish, or share these documents in advance of data collection and analysis in order to pre-specify the rationale, proposed methods, analysis plan, ethical considerations, and management of a research study (Nosek et al., 2012). Protocol and analysis plan registration does not prevent exploratory analyses (Wagenmakers et al., 2012). Rather, prospective registration limits opportunities for detrimental research practices (e.g., outcome switching, selective outcome reporting, and HARKing) by facilitating external identification of planned versus actual study procedures and analyses (Goodman et al., 2016). Protocols and analysis plans also encourage research teams to more carefully plan in advance (Nosek et al., 2018). They also help research teams to conduct the study; human research protection programs to assess the risks and benefits of proposed study procedures; and research consumers to monitor and evaluate changes throughout the research lifecycle (Tetzlaff et al., 2012).
Organized Workflows
Study workflow involves folders, files, metadata, code for analyses, and other data documentation (Project TIER, 2016). Study workflows can be organized coherently, with file management procedures documented clearly (Long, 2008). A reproducible workflow includes (a) clear computing and communication, (b) version control that tracks changes in real time across collaborators and versions (ideally with a cloud-based mirror), (c) tracking the chronology and origin of research objects (e.g., data, source code), (d) maximum programmatic automation and minimal manual file edits, and (e) containment of a computational environment to share with others who would like to repeat the workflow (Martinez et al., 2020). Ideally, researchers maintain a dynamic, digital notebook that records decisions made throughout the research lifecycle, and make this notebook publicly available after study completion (Schapira & Harding, 2020). In addition, a well-commented markdown file can capture which and how many analyses were performed and ultimately reported in published manuscripts (Goodman et al., 2016).
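As a minimal sketch of what such an automated workflow can look like (the script and file names are hypothetical, and R is used only because it is common in prevention research), a single master script can regenerate the entire pipeline from raw data to a dynamic document:

```r
# Hypothetical master script: each step reads only files produced by the
# previous step, so the full analysis can be regenerated with one command.
source("code/01_clean_raw_data.R")      # data/raw/ -> data/processed/analytic.csv
source("code/02_fit_models.R")          # data/processed/ -> outputs/estimates.csv
source("code/03_make_tables_figures.R") # outputs/ -> outputs/tables/, outputs/figures/

# Render a dynamic document that interleaves prose, code, and results, so every
# reported number is regenerated directly from the analytic dataset.
rmarkdown::render("reports/manuscript.Rmd", output_dir = "reports")

# The whole project directory sits under version control (e.g., Git), recording
# who changed which files, when, and why, across the research lifecycle.
```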
Transparent Reporting
Incomplete reporting leads to the omission of information essential to appraise study quality, reproduce findings, and synthesize a body of evidence (Grant et al., 2013). Reporting guidelines use explicit methodology to provide standards on the minimum set of study information to include in a manuscript for an accurate, complete, and transparent account of what was done and found in a study (Moher et al., 2010). These reporting standards are organized in a checklist keyed to the introduction, methods, results, and discussion sections of an article, often accompanied by a diagram for capturing the flow of participants through study stages (Simera et al., 2010a, 2010b). The EQUATOR (Enhancing the QUAlity and Transparency Of health Research) Network is an international initiative that provides a catalog of reporting guidelines for various study designs, as part of its mission to improve the quality of scientific publications through transparent and accurate reporting (Simera et al., 2010a, 2010b).
Data, Code, and Materials Sharing
Sharing analytical datasets with relevant metadata, code, and related research materials facilitates reproducibility. To safeguard quality, researchers can carefully plan and describe management procedures at the beginning of a study, make those procedures accessible to the research team during the study, and communicate these procedures to external stakeholders after the study (Nosek et al., 2015). Data relevant for sharing can range from initial raw data to the final processed dataset—given the many judgments and choices made by study teams along the path of cleaning, transformation, and preparation for analysis (Goodman et al., 2016). Availability standards offer guidelines for managing data, code, and research materials, and for making them findable, accessible, interoperable, and reusable (Wilkinson et al., 2016). Sharing allows reported findings to be verified directly through reproducibility and sensitivity checks, and enables other investigators to pursue new questions in secondary data analyses (Gilmore et al., 2018). Barring legal, ethical, and proprietary constraints, researchers can share their data, code, and materials in permanent repositories dedicated to archiving, such as GitHub, Dataverse, Dryad, Vivli, and the Interuniversity Consortium for Political and Social Research (Christensen et al., 2019).
Stakeholders in the Scientific Ecosystem
Researcher behaviors leading to reproducibility problems are influenced by aspects of the current context, culture, and incentive structure of scientific careers (Fidler & Wilcox, 2018). Incentives for academic researchers have become increasingly perverse over the last several decades because of the hyper-competitive research environment and its focus on productivity, novelty, and innovation (Edwards & Roy, 2017). Competition for publications, research funding, media coverage, and permanent employment can incentivize detrimental research practices (Smaldino & McElreath, 2016), creating a tension between career advancement and the credibility of the published scientific literature (Nosek et al., 2012). The resulting “publish or perish” and “funding or famine” culture—in combination with “closed” research lifecycles—is a key determinant of the proportion of false positives, irreproducible results, and detrimental research practices in the scientific literature (Moher et al., 2018). Given that these systemic factors facilitate detrimental research practices, cultural changes are needed to improve the reproducibility of the scientific literature. Consequently, open science reform efforts target not only researcher behaviors, but also focus on changing the scientific ecosystem, i.e., the way that scientific stakeholders, the wider research environment, and the resultant incentives interact as a system (Moher et al., 2016). Several specific stakeholders in the scientific ecosystem are commonly targeted to encourage researcher adoption of open science practices (see Table 2). Engaging multiple stakeholders in promoting transparency, openness, and reproducibility is particularly important for decentralized fields like prevention science that have fewer regulations than biomedicine (Dal-Ré et al., 2015; Faggiano et al., 2014).
Table 2.
Proposed Stakeholder Actions for Supporting Open Science
Stakeholder | Proposed Action | Reference |
---|---|---|
Researchers | Adopt transparent, open, and reproducible research practices in empirical prevention science | Christensen et al. (2019) |
Universities and Research Institutions | Explicitly support, recognize, and reward the use of transparent, open, and reproducible practices by faculty and researchers | Moher et al. (2019) |
Students, Postdocs, and Early Career Researchers | Enroll in courses and training in open science practices, build coalitions for peer support, and incorporate open science into daily lab work, dissertations, and research | Morling and Calin-Jageman (2020) |
Journals and Publishers | Implement policies and procedures that promote publishing articles of transparent, open, and reproducible research | Nosek et al. (2015) |
Funders | Promote or mandate adherence to open science practices in grant applications and funded projects | National Academies of Sciences (2018) |
Scientific Societies | Advance the use of transparent, open, and reproducible research practices in guidance, conference proceedings, and trainings | McVay and Conroy (2019) |
Practitioners | Advocate for transparent, open, and reproducible research practices in empirical scientific studies of and standards for establishing evidence-based interventions | Mayo-Wilson et al. (2020) |
Policymakers | Incorporate transparent, open, and reproducible research practices into policy analysis and standards of evidence | Hoces de la Guardia et al. (2021) |
Media | Incorporate considerations and standards related to transparency, openness, and reproducibility when reporting about science to the public | Academy of Medical Sciences (2015) |
The Public | Participate and collaborate in scientific research to increase scientific knowledge and address problems of concern in their local communities | Chari et al. (2019) |
Journals and Publishers
As dissemination of research through journal articles influences career opportunities, peer review and publication models are important for the open science movement. Journals have been criticized for traditional publication models that focus on novelty rather than reproducibility, results rather than methods, and narrative rather than data and analysis (Fidler & Wilcox, 2018; Neuliep & Crandall, 1990; Schmidt, 2009). In addition, publishers are a stakeholder group distinct from journals, creating their own policies and procedures that shape research practice and career incentives for scientists (Nosek et al., 2015). For example, publishers influence (and in many cases prescribe) the standards, language, and format of the instructions-for-authors pages of their journal websites. They also set the fields and functionalities of article submission systems, review systems, and the templates for journal articles and their landing webpages (Mayo-Wilson et al., 2021). Moreover, active debate abounds regarding for-profit versus non-profit publishers, open access fees, and relevant consequences on the representativeness and welfare of the research community (McNutt, 2019). To address these observations and concerns, publishers can enable journal editors to adopt policies and procedures that promote transparency, openness, and reproducibility of the science that they publish (Azar et al., 2015; Cybulski et al., 2016; Grant et al., 2013; Knüppel et al., 2013; Milette et al., 2011; Riehm et al., 2015; Scott et al., 2015).
The Transparency and Openness Promotion (TOP) Guidelines comprise eight modular standards that journals can incorporate into their policies for manuscript submission and publication (Nosek et al., 2015). In tandem with the TOP Guidelines, some journals award digital “open science badges” to manuscripts that involve these practices, such as data sharing, materials sharing, or study registration (Kidwell et al., 2016). Furthermore, journals can offer Registered Reports: a two-step submission process where the protocol is reviewed prior to conducting the research, with in-principle acceptance of the subsequent results papers should second-stage review confirm that any deviations from the approved protocol are justifiable (Chambers, 2013). In addition to addressing publication biases, this model also allows feedback on the protocol to improve the actual conduct of the study, as changes to design and conduct can still be incorporated and lead to a more constructive peer review process (Chambers, 2019). To complement Registered Reports, journals can offer “Exploratory Reports” for empirical submissions that address relatively open research questions using abductive and inductive approaches without strong a priori predictions (McIntosh, 2017). Offering Exploratory Report and Registered Report formats would respect both the exploratory and confirmatory phases of discovery vital to prevention science. Lastly, to increase the transparency of the review process itself, journals and publishers are increasingly trialing “open peer review models,” including making reviewer and author identities known, publishing review reports alongside articles, and crowdsourcing participation in the peer review process (Ross-Hellauer, 2017).
To facilitate accessibility and more efficient discovery of their completed work, journals can have options for authors to publish open access manuscripts, as well as post working versions or preprints of submitted papers (Tennant et al., 2016). Open access publication involves free and immediate online availability of research articles, with copyright that allows sharing and adaptation (Tennant et al., 2016). Preprints are publicly available scientific manuscripts posted on dedicated servers prior to journal-managed peer review (Sarabipour et al., 2019). While benefits of preprints include more rapid dissemination of and feedback on academic work, concerns include sharing and subsequent media coverage of substandard work with significant implications (Kaiser, 2017). Journals and publishers vary in their policies on open access publishing and pre-print sharing, with evidence to suggest a growing number of journals with options for both practices (da Silva & Dobránszki, 2019; Laakso, 2014).
Funders
As grants and contracts also influence career opportunities, funders can implement policies and procedures to promote the transparency, openness, and reproducibility of the research that they sponsor. To ensure maximal return on their investments, funders could require that researchers be transparent about their procedures and share all products of their funded scientific research (Gennetian et al., 2020). For example, the San Francisco Declaration on Research Assessment (2018) recommends considering the value and impact of research outputs beyond publications—such as datasets, software, computational environment, and code—when evaluating the scientific productivity of grant applicants. The National Institutes of Health has policies that set explicit expectations on sharing data, open access publication, and registering and sharing results of clinical trials. Funders also can have specific “requests for proposals” related to transparency, openness, and reproducibility. For instance, the Institute of Education Sciences (2021) has a dedicated request for applications on systematic replication studies that vary one or more aspects of a previous study to better understand which interventions improve education outcomes, for whom, and under what conditions. Funders also can dedicate resources to infrastructure, training, and staff required for open science practices, such as the National Institute of General Medical Sciences (2018) clearinghouse of training modules to enhance reproducibility. Such dedicated resources are essential, given robust evidence that actual rates and quality of data sharing by principal investigators are suboptimal, even when support for and willingness to share data are high (Ohmann et al., 2021).
Universities and Research Institutions
Universities and research institutions also influence researcher behaviors. Given the role of these institutions in enabling perverse “publish or perish” and “funding or famine” incentives (Bouter, 2020), open science proponents are increasingly calling on universities and research institutions to empower researchers to stop using detrimental research practices (Woolston, 2021) and to ensure that review committees reward transparent, reproducible research practices in career advancement decisions (Moher et al., 2020). Hiring, promotion, and tenure assessments of faculty at universities could reward transparently publishing all research results and openly sharing data, code, protocols, and other research materials (Moher et al., 2018). Universities also can provide training on open science practices through formal coursework on transparency, openness, and reproducibility for graduate students and postdoctoral fellows (Krishna & Peter, 2018), as well as support through fostering Open Science Communities at their institutions (Armeni et al., 2021). Given the costs involved in learning new knowledge and skills, universities and research institutions also can seek mechanisms to provide their students, faculty, and researchers with protected funding and time to develop proficiency in open science practices, such as resources to support data archiving (Gilmore et al., 2020). Universities also could consider leveraging existing research administration and quality assurance offices—such as clinical trials offices (Mayo-Wilson et al., 2018) and human subjects research protection programs (Grant & Bouskill, 2019)—to help facilitate the transparency, openness, and reproducibility of ongoing research. Lastly, universities and research institutions can adopt policies signaling support for open science. For example, several research institutions—including Child Trends (https://www.childtrends.org/policies-on-integrity-independence-and-transparency), the International Initiative for Impact Evaluation (https://www.3ieimpact.org/our-work/research-transparency), and MDRC (https://www.mdrc.org/publication/research-transparency-and-replication-mdrc)—have created research transparency policies that support practices such as study registration, data archiving, and open access publication. The National Academies of Sciences (2021) recently developed an extensive toolkit of resources that universities and research institutions can use to foster open science.
Policymakers and Practitioners
Policymakers and practitioners would benefit from the more efficient scientific discoveries and accessible evidence afforded by transparency, openness, and reproducibility. For example, incorporating open science practices into the standards used by clearinghouses to designate interventions as “evidence-based” could influence researchers to use these practices in program evaluations, as well as lead to an even more reliable evidence-base for decision-making (Buckley et al., 2021; Mayo-Wilson et al., 2020). Federal agencies that oversee policy and program evaluation efforts have demonstrated a growing interest in open science methods as critical to fulfilling obligations to be a steward of and efficiently use taxpayer dollars (Holzwart & Wagner, 2020). The Administration for Children and Families (2014) has created an evaluation policy that includes a commitment to transparency and openness via publishing study plans in advance, comprehensively presenting all results, and making timely information about planned, ongoing, and completed evaluations easily accessible. The Office of Evaluation (2020) likewise publishes analysis plans prospectively, and it provides resources on pre-registration of and handling null results from program evaluations. Furthermore, the U.S. Department of Agriculture requires contractors to adhere to specific data management processes and then reviews these processes and all materials to ensure compliance (Burdg, 2019). In addition, the Foundations for Evidence-Based Policymaking Act of 2018 (P.L. 115–435) includes requirements related to transparency and openness of federal research and evaluation—including public-facing annual evaluation plans, open data, and data inventories—as part of enhancing federal government capacity for evidence building. These practices will lead to more credible and useful evidence on policies and programs that directly impact prevention efforts.
Media and the Public
Open science also facilitates the inclusion of media and the public in the scientific enterprise. Issues of reproducibility in science recently have garnered attention in popular media (Harris, 2017; Yong, 2018). Engaging the media as part of open science efforts can facilitate better communication about the scientific process in the popular press (Academy of Medical Sciences, 2015; Sumner et al., 2014). In addition, open science offers unique opportunities for public engagement in research. The new paradigm of “citizen science” allows members of the general public to collect scientific data for freely available datasets that provide actionable information for their local community (Chari et al., 2019). These practices provide promising mechanisms for improving public discourse on and trust in science.
Applying Open Science to Prevention Science
Each field—including prevention science—has its own standards, approaches, methods, and culture that need to be considered in reform efforts (Academy of Medical Sciences, 2015). The problems addressed by specific open science practices, and how those practices are implemented, vary in relevance across different phases of prevention research. In this section, we consider different types of prevention science research and ways in which they can adopt elements of open science. While not intended to serve as formal standards for the field, these considerations may serve as a foundation for discussion and for the creation of such standards and recommendations for different types of prevention science research by an established task force or working group (Hesse et al., 2021).
Epidemiology and Etiology
A core aspect of prevention science is the investigation of the distribution and causes of physical, mental, and social health problems among populations. Epidemiological research within prevention science may be at greater risk of multiple hypothesis testing, and the selective non-reporting of studies and results, because of increased capacity to fit increasingly complex models (Goodman et al., 2016). Non-reporting of results from epidemiological research wastes resources and can increase the chances that the wrong risk and protective factors are pursued in future intervention research (Chan et al., 2014; Glasziou et al., 2014). Project management systems, such as the Open Science Framework, offer prevention scientists conducting epidemiological research free, open, and online platforms to collaboratively organize workflows, manage files, and share notebooks (Foster & Deardorff, 2017). In addition, using reporting guidelines such as the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) Statement (von Elm et al., 2014) can lead to more transparent reporting of observational studies, which constitute most epidemiological research.
Development and Testing of Interventions
The development and testing of interventions are fundamental to prevention science. From an open science perspective, study registration and transparent reporting are essential practices for these stages of research. Trial registration involves recording important information about trial design, particularly complete and transparent definitions of all planned outcome measures (Dickersin, 1992; Simes, 1986). All studies that prospectively assign human participants to one or more interventions should be registered, regardless of phase, setting, intervention, and outcome (World Medical Association, 2001). Trial registration is a long-standing practice in clinical medicine, with the International Committee of Medical Journal Editors requiring prospective trial registration as a condition for publication since 2005 (De Angelis et al., 2004; De Angelis et al., 2005). Because trials funded by the NIH after 2019 must be registered and their results must be reported on ClinicalTrials.gov, the practice of registration is expected to increase in prevention science and related disciplines. Prevention scientists conducting trials with health outcomes can use ClinicalTrials.gov (Zarin et al., 2016), while those working on non-health topics may prefer subject-specific registries such as the Registry of Efficacy and Effectiveness Studies in education (Spybrook et al., 2019) or the American Economic Association registry.
As recommended in Standard 8 of the SPR Standards of Evidence for Efficacy (Gottfredson et al., 2015), prevention scientists can consult reporting guidelines when writing manuscripts to ensure accurate representations of their intervention evaluations (Morris & Clark, 2013). For example, the Consolidated Standards for Reporting Trials (CONSORT) Guidelines have been officially endorsed by over 600 journals that implement these guidelines as part of manuscript submission, peer-review, and editorial decision-making (Shamseer et al., 2016). The CONSORT extension for Social and Psychological Interventions (CONSORT-SPI) identifies the minimum information needed to understand and apply the results of randomized controlled trials (RCTs) that evaluate interventions thought to work through social and psychological mechanisms of action (Montgomery et al., 2018). To facilitate adherence, the user’s manual provides guidance tailored to concepts, theories, and taxonomies used in the social and behavioral sciences (Grant et al., 2018).
Translational Research, Policy, and Practice
Translational research sits at the intersection of prevention science and public policy, where insights from epidemiological and interventional research inform real-world policies and practices that promote individual and collective well-being. Public policy research involves not only researchers within academic institutions, but also individuals located within government agencies, nonprofits, and other settings where research is often inaccessible due to journal paywalls. Open information systems are critical in building and using knowledge management systems to advance dissemination and implementation science (Chorpita & Daleiden, 2014). Preprints and open access articles give consumers of evidence and practitioners more direct access to findings and (in the case of preprints) make evidence available sooner. Decisions to scale particular evidence-based programs often depend on windows of opportunity and funding allocations, raising the stakes of decisions about using evidence (Fagan et al., 2019); these higher stakes make open science practices even more important (Supplee & Meyer, 2015). Registration can support testing the reproducibility of research on innovations, because it documents the study details needed as a first step in that process. Prespecified tests would earn more confidence from the public policy community—and therefore have more utility in decisions around scaling up programs. Study registration is also critical for high-quality research synthesis that informs policy. Currently, conclusions drawn from meta-analyses and systematic reviews can be limited by “closed” primary research, as reviews cannot fully assess the extent to which particular programs have been tested without yielding significant effects. Finally, reproducible workflows, in combination with archived data and code, could provide the reproducibility needed to increase confidence in findings and in decisions about whether to scale a particular program.
Innovative Methods and Statistics
Open science can advance prominent methods and approaches in prevention science, such as community-based participatory research, qualitative methods, and administrative data.
Community-Based Participatory Research
Community-based participatory research (CBPR) entails unique challenges and opportunities for transparency and openness, given shared power structures with non-scientists, use of a broad range of methods, concerns about privacy, and unstructured data. The forward planning and transparency demands of the open science movement may initially seem anathema to prevention scientists working in the context of partnerships with communities in the development, evaluation, and dissemination and scale-up of preventive interventions; however, open science practices could improve capacity for ongoing communication, transparency, accountability, reliability, and reciprocity in relationships with community stakeholders across all phases of prevention science. Discussions of ethics in community-based or participatory-action research have identified the clear need for openness and transparency and for ongoing review of assumptions and objectives throughout the lifecycle of a research-practice partnership (Hiriscau et al., 2014; Leadbeater et al., 2006; Leadbeater et al., 2018; Solomon et al., 2016; Tamariz et al., 2015). Rather than constraining action, open science approaches may offer a structure for establishing agreements about key expectations, workflow, data-sharing, dissemination, and reproducing findings, and for reviewing and revising these agreements as the research progresses. For example, collaborative partnerships between researchers and community members can include equitable access to and use of datasets (Gennetian et al., 2020).
Several challenges of the CBPR process could be anticipated and avoided by following open science principles that lead the partnership systematically through a series of planning discussions that are open and transparent not only to the researchers but also to their community partners. To date, the open science movement has focused primarily on the need for transparency in relation to statistical problems of defining research hypotheses, data collection, workflow for analyses, data sharing, and reproducibility. However, jointly clarifying research and community goals for a project at the outset could also enhance overall project transparency and reproducibility. For example, written agreements could be beneficial in bridging academic and community cultural gaps by jointly considering:
registration of agreed plans to clarify aspirations, objectives, and expected outcomes;
specifying how work will progress and what timelines are realistic;
delineating plans for data analysis, ownership, sharing, and publication;
reviewing cultural values and ethical concerns that guide the partnership and define limits of the partnership and protections for vulnerable individuals and communities;
defining the scope of independent and collaborative roles in adapting, controlling, and implementing knowledge gained from the partnership; and
creating mechanisms for reproducibility (e.g., manuals, protocols, codebooks) so that communities not originally involved can benefit from the knowledge generated.
Funding to do this up-front work is also more likely if it is clearly spelled out in a systematic framework and connected to defining the nature of the community-based collaboration. While an open science approach may not be the only way to organize this foundational knowledge for community-based research, following the intent of open science to improve the transparency and clarity of research partnerships with community partners may strengthen these relationships and the quality of the research produced through their collaborations.
Qualitative Research
Given the amount of attention open science has paid to experimental and quantitative approaches, open science practices present unique epistemological and methodological issues for qualitative and mixed-methods research (Chauvette et al., 2019; Pownall et al., 2021). Qualitative scholars are exploring how open science practices rooted in hypothetico-deductive frameworks can be translated to qualitative inquiry and its commitments to validity, transparency, ethics, reflexivity, and collaboration (Humphreys et al., 2021). For example, rather than being used to establish experimental predictions, registration could define the aims of a project, outline presuppositions, be updated as data are collected and analyzed to track the development of the interpretative framework, and combat dissemination biases in the qualitative literature (Haven & Van Grootel, 2019; Lorenz & Holland, 2020). Qualitative researchers can aspire to share materials like detailed memos, codebooks, and information on inter-rater reliability (Lorenz & Holland, 2020). Qualitative researchers are also demonstrating ways in which data management plans can be developed to share various forms of data—such as photos, audio recordings, interview transcripts, and field notes—in an ethically and legally appropriate manner (Antonio et al., 2020). Prevention scientists could contribute empirical examples to the nascent but dynamic literature on making qualitative research more transparent and open (Kapiszewski & Karcher, 2021).
Administrative Data
Administrative data involve information that organizations routinely collect to monitor and evaluate how well their operations achieve their intended goals (Goerge et al., 2017). For example, McNeely et al. (2019) created a panel dataset of all students enrolled in a public school in a metropolitan county in a Midwestern state between 2004 and 2015 by linking data from the state’s Department of Education, the state’s Department of Human Services, and the county attorney’s office. With this dataset, they conducted a quasi-experimental difference-in-differences analysis to evaluate long-term effects of a truancy diversion program on school attendance. The decreasing costs of obtaining big datasets, combined with improved technology, make research using administrative data easier to conduct over time. While these advances allow for higher-powered analyses, they also risk spurious findings if multiple results are calculated but reported incompletely (Huffman, 2018). Prevention scientists using administrative data would gain efficiency and accuracy in their research processes by leveraging principles of data and computational science with powerful, existing open-source software. The study of computational reproducibility is an emerging area, powered by recent advancements in computational and data sciences (Stodden et al., 2016). Although other social and behavioral disciplines have made advances in these areas, these computational principles and tools have yet to gain a strong foothold in prevention science. For example, research using administrative data would benefit from organized workflows with consistent and predictable structures. A basic research study with a reproducible workflow would contain a folder structure for storing analytic code, raw data, processed data, outputs, and narrative reports, all managed under version control (Wilson et al., 2017). Project folders contain a “README” file that describes each folder in sufficient detail for another researcher to understand its contents and how to reproduce any analyses generating processed data, outputs, and narrative reports. The DRESS (Documenting Research in the Empirical Social Sciences) Protocol provides a set of standards for organizing and documenting workflows for reproducibility purposes (Project TIER, 2016). Projects in RStudio, with the R programming language, provide an excellent starting point for building reproducible workflows for each prevention research project or manuscript and are easily extensible to other collaborative and interactive programmatic tools such as web applications (Gandrud, 2013; Kitzes, 2018).
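As a concrete illustration, the brief sketch below shows one way such a project skeleton could be created in R, consistent with the workflow conventions described above. This is a minimal example using only base R; the folder and file names are hypothetical and not part of any prescribed standard.

```r
# A minimal sketch (illustrative, not prescriptive) of a reproducible project
# skeleton in R. All folder and file names are hypothetical examples.

dirs <- c(
  "data_raw",        # original administrative extracts; never edited by hand
  "data_processed",  # derived analytic files produced only by scripts
  "code",            # numbered data-cleaning and analysis scripts
  "output",          # tables, figures, and model objects
  "reports"          # narrative reports (e.g., R Markdown)
)

invisible(lapply(dirs, dir.create, showWarnings = FALSE))

# A top-level README documents each folder and the order in which scripts run,
# so another researcher can regenerate processed data, outputs, and reports.
writeLines(c(
  "# Project README",
  "",
  "Folders: data_raw/ (untouched source data), data_processed/ (derived files),",
  "code/ (run 01_clean.R then 02_analyze.R), output/ (tables and figures),",
  "reports/ (narrative reports).",
  "",
  "The entire project directory is tracked with version control (e.g., Git)."
), "README.md")
```

A structure like this pairs naturally with version control and with the DRESS Protocol folder standards cited above, because another researcher can identify where raw data enter the project and which scripts regenerate every downstream product.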
Leveraging Prevention Science to Advance the Open Science Movement
Open science proponents often refer to their work as “meta-science” or “meta-research,” i.e., the scientific study of science itself in order to evaluate and improve research practices (Ioannidis et al., 2015; Munafò et al., 2017). Following a translational framework of science (Hardwicke et al., 2020), open science reforms require a broad communal effort, involving a collaborative ecosystem of scientists, research institutions, journals, funders, and other stakeholders across disciplines and countries to change researcher behaviors and scientific culture (Holzwart & Wagner, 2020). A “one-size-fits-all” approach therefore will not be effective: multiple measures must be identified, tailored, and implemented from both the “top-down” and the “bottom-up” (Academy of Medical Sciences, 2015).
Prevention science is well-positioned to engage with the open science movement, given its focus on examining and addressing complex social and behavioral issues. Prevention scientists have unique expertise in socio-ecological, systems-based, context-sensitive approaches needed to identify, develop, and implement open science reforms (Fawcett et al., 2000). For example, open science efforts can be operationalized and approached using established frameworks for intervention development, evaluation, and implementation (Craig et al., 2008). In terms of intervention development, the design, conduct, analysis, and reporting of any study can be seen as behaviors of researchers embedded within a complex social system of stakeholders (Norris & O’Connor, 2019). Open science efforts, therefore, are an attempt to change behavioral and social causes of problems in the research process, requiring the use of tools from behavior change interventions and complex social systems science to help stakeholders adopt desired practices across the research lifecycle (Bartholomew Eldredge et al., 2016; Michie et al., 2014). Once such efforts are designed, theories from implementation science can be used to identify potential facilitators of and obstacles to their delivery (Atkins et al., 2017). Once implemented, interventions to promote open science should be evaluated rigorously to examine whether they are delivered as intended, achieve desired effects, and avoid unintended negative consequences (Craig et al., 2017; Moore et al., 2015).
Compared with researchers in other applied disciplines, prevention scientists could be particularly helpful to the open science movement through their use of program planning models to rigorously develop, organize, and guide strategic actions intended to improve transparency, openness, and reproducibility (Green & Kreuter, 2005). That is, the motivations for and efforts of the open science movement can be conceptualized as problem and program theory, with a continuum of interventions to promote open science across primary, secondary, and tertiary levels of prevention (see Fig. 2). Adapting a disease prevention perspective, one can conceptualize the distal outcome of the open science movement as the prevalence of reported research findings that are false (Ioannidis, 2005). A key issue perceived to contribute significantly to this distal outcome is the irreproducibility of research findings. The behavioral and social determinants of this issue are selective non-reporting, research misconduct, and misaligned incentives in the scientific ecosystem (Ioannidis, 2014). A key factor enabling these behavioral and social determinants is the traditionally “closed” lifecycle of human subjects research in the health, social, and behavioral sciences. Following this problem theory, open science efforts can be framed positively using the “program theory” of a strengths-based intervention approach (Staudt et al., 2001). That is, rather than assuming malicious intent and policing bad behavior, the ultimate goal of the open science movement can be conceptualized from a health promotion perspective as protecting and further advancing the value of (Macleod et al., 2014) and public trust in science (National Academies of Sciences, 2017). Key distal outcomes include increasing the prevalence of research findings that are “true” as an indicator of more rigorous and reliable bodies of research (Ioannidis, 2014), as well as promoting more inclusive creation of scientific knowledge and accelerated scientific progress (National Academies of Sciences, 2018). This program planning model can underpin an iterative, continuous quality improvement process that ensures open science efforts are theoretically sound, empirically based, and outcome-oriented.
Fig. 2.
Logic Models of Open Science "Problem Theory" and "Program Theory"
Potential Challenges of a Transparent, Open, and Reproducible Prevention Science
Challenges to the movement toward a transparent, open, and reproducible prevention science include both warranted concerns and misconceptions (see Table 3). For example, prevention science commonly involves collecting sensitive personal information from vulnerable populations. This requires special care to ensure that sharing de-identified data, code, and materials does not increase risks to participants through violations of privacy and confidentiality (Grant & Bouskill, 2019). In addition, researchers have expressed concern about work being “scooped,” excessive criticism by others, and tension with intellectual property restrictions in the context of transparent, open research (Gilmore et al., 2020). To allay these concerns, embargo periods could provide researchers with protected time to be the first to analyze their data and publish findings, followed by appropriate rewards for the sharing and citation of data, code, and materials once the embargo ends (Gennetian et al., 2020; Moher et al., 2018). Open science reforms also need to avoid reinforcing existing inequitable power structures by ensuring stakeholders from under-resourced settings (Nabyonga-Orem et al., 2020), historically underrepresented and excluded groups (Dutta et al., 2021; Fox et al., 2021; Sabik et al., 2021), and diverse epistemic backgrounds (Devezer et al., 2019; Siegel et al., 2021) are included in influential reform discussions. Moreover, proponents need to address concerns about the potential for open science to add burdensome bureaucracy and regulation, stifle creativity and discovery, and be wholly inappropriate outside of the hypothetico-deductive model (Academy of Medical Sciences, 2015). Lastly, proponents need to attend to the potential for open science practices to falsely signal quality and result in the same problems they aim to address (Gorman et al., 2019). Concerted, meaningful discussion of these reservations is needed to yield sustained uptake of open science practices among prevention scientists.
Table 3.
Potential Reservations about Open Prevention Science: A Tool for Promoting Discussion
Reservation | Response |
---|---|
Open Science Generally | |
The field of prevention science is so different from clinical medicine and lab-based experiments that “open science” doesn’t really fit with the type of work we do | Open science is for all fields of science. The goal of open science is to make the scientific process (rationale, design, methods, statistical approaches) transparent and the results more accessible to scientific and public audiences. This goal is especially important for prevention science to fulfill its mission because it must be accessible and trusted by policymakers and the public |
Open science dictates one type of scientific study for everyone and restricts academic freedom and discovery | Open science is a set of principles supporting transparency in scientific discovery across scientific methods. These principles can be operationalized in ways sensitive to the underpinnings of each type of study and that respect exploratory work |
Study Registration | |
Prospective registration doesn’t work for prevention science because it is difficult to predict all the possible outcomes that might result from a preventive intervention (particularly over the life course), and it precludes exploratory work (like subgroup effects) | Prospective registration does not preclude the addition of outcomes over the course of a study. Rather, it transparently documents which research questions, hypotheses, outcomes, and analyses were planned at which points in time during a research project |
Study registration isn’t appropriate or needed for descriptive or epidemiologic studies; it is really only relevant for research using hypothesis-testing approaches such as RCTs | Study registration can be useful for documenting the existence of, and linking products from, any empirical study. While prospective registration of study protocols and analysis plans is an established practice in randomized trials, researchers using other study designs are discovering benefits to the transparent documentation of the planned research approach prior to study initiation |
Data Sharing and Archiving | |
Data archiving is an expensive and burdensome process, particularly because it typically happens after the award ends when the grant has already closed out and there is no funding left to support it | While it is true that data archiving has costs, many funders are beginning to either require or encourage the practice, opening the potential for archiving processes to be built into grant budgets and timelines |
I don’t have the time or staffing to respond to questions or requests for data files from old projects, and it is too much work to do all of this extra stuff to make my files available to other researchers | Archiving data following best practices can improve the quality of the data available to external parties, minimizing the amount and intensity of requests. Planning for data archiving at the outset of projects can minimize the amount of “extra” work involved |
I am concerned that if I archive my data sets, someone will try to scoop me before I have had a chance to publish my main findings or supplemental studies | Several platforms allow for archiving data with an embargo period, providing researchers with protected time to publish their findings after self-archiving |
I am concerned that, if I make my data files and code available to others, then someone may try to prove me wrong, make me look bad, or imply I am unethical or biased in my reporting of prior findings | As with any set of principles, open science practices can be used competitively or for personal gain at the expense of others. The documentation of research rationales, designs, methods, data, and analyses may afford protection against unfounded or bad-faith charges |
I worked hard to collect all these data, and my collaborators and study partners trust me to keep the data private | There are sophisticated methods for protecting the privacy of data (e.g., data masking, pseudonymization, data generalization, and synthetic data creation; see the illustrative sketch following this table) |
These data reflect years of my effort and energy. Why would I want to just turn them over to anyone else? | Advances in science rely on shared information across researchers and disciplines. Rather than just publishing findings, open science advocates for sharing additional scientific products like data to expand discovery |
Privacy and Ethics | |
Our consent forms did not include the archiving of data for future research use, so we cannot archive or share data | Research conducted in the past needs to follow the data sharing permissions granted in the informed consent. Going forward, investigators should use informed consent templates that allow for future research use |
My IRB or study participants won’t let me archive or share my data, as it is too sensitive | Open science principles encourage data sharing but not at the expense of privacy and confidentiality. There may be some data that is sensitive or subject to privacy concerns. However, much of the data collected in prevention science can be shared in fully de-identified form, while protecting privacy and confidentiality |
I work with Indigenous groups who own data collected and do not want it shared | Research conducted in partnership with Indigenous groups should discuss data sharing as part of study preparation activities. Teams should respect Indigenous Peoples’ rights to control, access, and govern their data |
Genetic data cannot be used for research outside of its intended purpose and therefore cannot be shared, so open science rules cannot apply to these data | Ethical principles in the use and sharing of genetic data are unique and should be clearly discussed in study planning and transparent in consent forms |
Impact on the Future of Prevention Science | |
Prevention studies are not sufficiently funded or resourced to do these additional types of open science activities | As support for open science grows, public and private funders are increasingly including allowances or requirements for open science practices and providing grant support for these activities. In addition, many open science practices are already part of standard research practice, such as clarifying study designs, publishing study protocols, and specifying pre-analysis plans in grant applications |
The effect sizes for many preventive programs are often small, reflecting the complexity of our work. As such, the application of open science standards has a strong potential to undermine our findings and funding for future prevention research | Prevention science is dedicated to a rigorous process for identifying evidence-based practices using high standards of evidence. Where effect sizes are not robust, this might suggest the need to further enhance the potency of preventive interventions to yield larger impacts |
Restrictive rules can set the field back, particularly with regard to the public’s perspective on the impact of prevention, much less the scale-up of any programs previously thought to be “effective.” | Evidence-based policy depends on the trust and understanding that decision-makers and the public have in research. Support for evidence-based policy will continue if it produces strong, replicable outcomes. Open science practices can support these goals |
Incentives | |
My university doesn’t give “credit” for engaging in open science, publishing in open access journals, posting pre-prints, or sharing data. I can’t spend a lot of my time doing something that doesn’t count for promotion | Many open science practices can increase researchers’ impact on the field through broader dissemination of findings, engaging with other researchers in collaboration and dialog, and increasing professional reputations |
Early-career scholars will be negatively affected by having to follow all these new requirements | The science of early-career scholars has the potential to be strengthened through increased collaboration, increased public awareness and value for their science, and the use of resources such as open methods and open data to advance their science |
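To make one of the privacy-protection techniques mentioned in the table more concrete, the minimal sketch below illustrates pseudonymization in R: direct identifiers are replaced with salted hashes before data are archived or shared. The dataset, identifiers, and salt are hypothetical, and a real de-identification procedure would involve additional steps and ethical review.

```r
# Minimal, illustrative pseudonymization sketch (not a complete
# de-identification procedure). All identifiers and values are hypothetical.
# Requires the 'digest' package: install.packages("digest")
library(digest)

participants <- data.frame(
  student_id = c("A1042", "B2260", "C3317"),  # hypothetical direct identifiers
  absences   = c(12, 3, 27)
)

# A project-specific salt, stored securely and never released with the data,
# prevents re-identification by simply re-hashing known identifiers.
salt <- "project-specific-secret"

participants$pseudo_id <- vapply(
  participants$student_id,
  function(id) digest(paste0(salt, id), algo = "sha256", serialize = FALSE),
  character(1)
)

# Share only the pseudonymized columns; the original identifiers stay behind.
share <- participants[, c("pseudo_id", "absences")]
```

Techniques like this address only direct identifiers; combinations of indirect identifiers can still pose re-identification risks, which is why data generalization or synthetic data may also be needed before archiving.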
Conclusion
We have identified open science activities that could strengthen the reliability and efficiency of prevention science, facilitate access to its products and outputs, and promote collaborative and inclusive participation in research activities. Overall, we contend that prevention scientists are well-positioned to engage with the open science movement, especially given their expertise in designing solutions for complex social and behavioral problems. In addition, because prevention scientists intervene in the lives of research participants and seek to impact the lives of others, they are scientifically and ethically obligated to conduct and report research in a manner that is likely to produce accessible, true results. Prevention science can better achieve its mission to promote individual and collective well-being by identifying ways to engage with principles of transparency, openness, and reproducibility.
Acknowledgements
This paper was endorsed by the SPR Board of Directors on February 14, 2022, and SPR provided the funds for open access. We thank Max Crowley for insightful suggestions and feedback at the conceptual stage of the paper.
Funding
This material is based upon work supported by Arnold Ventures (SG and EMW), the National Science Foundation Graduate Research Fellowship under Grant Number 006784–00002 (KEW), and the National Center for Advancing Translational Sciences of the National Institutes of Health under Award Number UL1TR003015 (CPB). The content is solely the responsibility of the authors and does not necessarily represent the official views of Arnold Ventures, the National Science Foundation, the National Institutes of Health, or Child Trends. All other authors did not disclose any funding sources for developing this manuscript.
Declarations
Ethics Approval
Not applicable.
Research Involving Human Participants
Not applicable.
Informed Consent
Not applicable.
Disclosure of Potential Conflicts of Interest
CPB is the editor of the journal Prevention Science, and SG and FG are guest editors of this special issue of Prevention Science; however, the peer-review of the manuscript was managed by a separate associate editor not affiliated with the paper. SG has received honoraria from the Berkeley Initiative for Transparency in the Social Sciences for serving as faculty on their Research Transparency and Reproducibility Training (RT2), and the Office of Planning, Research, and Evaluation (Administration for Children and Families, US Department of Health and Human Services) for speaking at their 2019 meeting on “Methods for Promoting Open Science in Social Policy Research.” SG is a Senior Research Fellow for the International Initiative for Impact Evaluation (3ie), which includes advising on their research transparency policy.
Conflicts of Interest
All other authors report no conflicts of interest.
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
References
- Academy of Medical Sciences. (2015). Reproducibility and reliability of biomedical research: Improving research practice. London, UK: Academy of Medical Sciences.
- Administration for Children and Families. (2014). Evaluation Policy; Cooperative Research or Demonstration Projects (79 FR 51574). Retrieved 19 January 2022, from https://www.federalregister.gov/documents/2014/08/29/2014-20616/evaluation-policy-cooperative-research-ordemonstration-projects
- Altman DG, Furberg CD, Grimshaw JM, Shanahan DR. Linked publications from a single trial: A thread of evidence. Trials. 2014;15:369. doi: 10.1186/1745-6215-15-369. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Anderson, M. S., Martinson, B. C., & Vries, R. D. (2016). Normative Dissonance in Science: Results from a National Survey of U.S. Scientists. Journal of Empirical Research on Human Research Ethics. 10.1525/jer.2007.2.4.3 [DOI] [PubMed]
- Anderson MS, Ronning EA, Vries RD, Martinson BC. Extending the Mertonian Norms: Scientists' Subscription to Norms of Research. The Journal of Higher Education. 2010;81:366–393. doi: 10.1080/00221546.2010.11779057. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Antonio MG, Schick-Makaroff K, Doiron JM, Sheilds L, White L, Molzahn A. Qualitative data management and analysis within a data repository. Western Journal of Nursing Research. 2020;42:640–648. doi: 10.1177/0193945919881706. [DOI] [PubMed] [Google Scholar]
- Armeni, K., Brinkman, L., Carlsson, R., Eerland, A., Fijten, R., Fondberg, R., ... Zurita-Milla, R. (2021). Towards wide-scale adoption of open science practices: The role of open science communities. Science and Public Policy, 48(5), 605-611.
- Atkins, L., Francis, J., Islam, R., O’Connor, D., Patey, A., Ivers, N., ... Michie, S. (2017). A guide to using the Theoretical Domains Framework of behaviour change to investigate implementation problems. Implementation Science, 12(1), 77. [DOI] [PMC free article] [PubMed]
- Axford N, Berry V, Lloyd J, Hobbs T, Wyatt K. Promoting Learning from Null or Negative Results in Prevention Science Trials. Prevention Science. 2020 doi: 10.1007/s11121-020-01140-4. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Azar, M., Riehm, K. E., McKay, D., & Thombs, B. D. (2015). Transparency of Outcome Reporting and Trial Registration of Randomized Controlled Trials Published in the Journal of Consulting and Clinical Psychology. PLoS One, 10(11), e0142894. [DOI] [PMC free article] [PubMed]
- Babyak MA. What you see may not be what you get: A brief, nontechnical introduction to overfitting in regression-type models. Psychosomatic Medicine. 2004;66:411–421. doi: 10.1097/01.psy.0000127692.23278.a9. [DOI] [PubMed] [Google Scholar]
- Baker M. 1,500 scientists lift the lid on reproducibility. Nature. 2016;533:452–454. doi: 10.1038/533452a. [DOI] [PubMed] [Google Scholar]
- Bartholomew Eldredge LK, Markham CM, Ruiter RAC, Fernandez ME, Kok G, Parcel GS. Planning health promotion programs: An Intervention Mapping approach. 4. Jossey-Bass; 2016. [Google Scholar]
- Benjamin, D., Berger, J., Johannesson, M., Nosek, B., Wagenmakers, E., Berk, R., ... Johnson, V. E. (2018). Redefine statistical significance. Nature Human Behaviour, 2(1), 6-10. [DOI] [PubMed]
- Bezjak, S., Clyburne-Sherin, A., Conzett, P., Fernandes, P., Görögh, E., Helbig, K., ... Heller, L. (2018). Open Science Training Handbook. 10.5281/zenodo.1212538
- Botvin GJ. Inaugural Editorial. Prevention Science. 2000;1:1–2. doi: 10.1023/A:1010091031329. [DOI] [Google Scholar]
- Bouter L. What research institutions can do to foster research integrity. Science and Engineering Ethics. 2020;26:2363–2369. doi: 10.1007/s11948-020-00178-5. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Bradshaw, C. P., Chinman, M., Gardner, F., Grant, S., Lochman, J. E., & Spybrook, J. (2019). Transparency, replication, and open science: implications for the field of prevention science. Paper presented at the Society for Prevention Research Conference.
- Buckley PR, Ebersole CR, Steeger CM, Michaelson LE, Hill KG, Gardner F. The role of clearinghouses in promoting transparent research: A methodological study of transparency practices for preventive interventions. Prevention Science. 2021 doi: 10.1007/s11121-11021-01252-11125. [DOI] [PubMed] [Google Scholar]
- Burdg, J. (2019). Copycat: Data Review in the Office of Policy Support. Paper presented at the OPRE Methods Meeting.
- Button KS, Ioannidis JPA, Mokrysz C, Nosek BA, Flint J, Robinson ESJ, Munafò MR. Power failure: Why small sample size undermines the reliability of neuroscience. Nature Reviews Neuroscience. 2013;14:365–376. doi: 10.1038/nrn3475. [DOI] [PubMed] [Google Scholar]
- Camerer, C. F., Dreber, A., Forsell, E., Ho, T. H., Huber, J., Johannesson, M., ... Wu, H. (2016). Evaluating replicability of laboratory experiments in economics. Science, 351(6280), 1433-1436. [DOI] [PubMed]
- Camerer, C. F., Dreber, A., Holzmeister, F., Ho, T. H., Huber, J., Kirchler, M., ... Wu, H. (2018). Evaluating the replicability of social science experiments in Nature and Science between 2010 and 2015. Nature Human Behaviour, 2, 637-644. [DOI] [PubMed]
- Catalano RF, Fagan AA, Gavin LE, Greenberg MT, Irwin CE, Ross DA, Shek DTL. Worldwide application of prevention science in adolescent health. The Lancet. 2012;379:1653–1664. doi: 10.1016/S0140-6736(12)60238-4. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Chalmers I. Underreporting Research Is Scientific Misconduct. JAMA. 1990;263:1405–1408. doi: 10.1001/jama.1990.03440100121018. [DOI] [PubMed] [Google Scholar]
- Chambers C. What’s next for Registered Reports? Nature. 2019;573:187–189. doi: 10.1038/d41586-019-02674-6. [DOI] [PubMed] [Google Scholar]
- Chambers CD. Registered Reports: A new publishing initiative at Cortex. Cortex. 2013;49:609–610. doi: 10.1016/j.cortex.2012.12.016. [DOI] [PubMed] [Google Scholar]
- Chan A-W. Bias, Spin, and Misreporting: Time for Full Access to Trial Protocols and Results. PLoS Medicine. 2008;5:e230. doi: 10.1371/journal.pmed.0050230. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Chan A-W, Altman DG. Identifying outcome reporting bias in randomised trials on PubMed: Review of publications and survey of authors. BMJ (clinical Research Ed.) 2005;330:753. doi: 10.1136/bmj.38356.424606.8F. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Chan A-W, Hróbjartsson A, Haahr MT, Gøtzsche PC, Altman DG. Empirical Evidence for Selective Reporting of Outcomes in Randomized Trials: Comparison of Protocols to Published Articles. JAMA. 2004;291:2457–2465. doi: 10.1001/jama.291.20.2457. [DOI] [PubMed] [Google Scholar]
- Chan, A.-W., Song, F., Vickers, A., Jefferson, T., Dickersin, K., Gøtzsche, P. C., ... Worp, H. B. V. D. (2014). Increasing value and reducing waste: Addressing inaccessible research. The Lancet, 383(9913), 257-266. 10.1016/S0140-6736(13)62296-5 [DOI] [PMC free article] [PubMed]
- Chan, A. W., Tetzlaff, J. M., Gøtzsche, P. C., Altman, D. G., Mann, H., Berlin, J. A., ... Moher, D. (2013). SPIRIT 2013 explanation and elaboration: Guidance for protocols of clinical trials. BMJ, 346, e7586. [DOI] [PMC free article] [PubMed]
- Chang, A., & Li, P. (2015). Is economics research replicable? Sixty published papers from thirteen journals say 'usually not’. Finance and Economics Discussion Series 2015–083. Washington, D.C.: Board of Governors of the Federal Reserve System.
- Chari R, Blumensthal MS, Matthews LJ. Community citizen science: From promise to action. RAND Corporation; 2019. [Google Scholar]
- Chauvette A, Schick-Makaroff K, Molzahn AE. Open data in qualitative research. International Journal of Qualitative Methods. 2019;18:1609406918823863. doi: 10.1177/1609406918823863. [DOI] [Google Scholar]
- Chavalarias D, Wallach JD, Li AHT, Ioannidis JPA. Evolution of Reporting P Values in the Biomedical Literature, 1990–2015. JAMA. 2016;315:1141–1148. doi: 10.1001/jama.2016.1952. [DOI] [PubMed] [Google Scholar]
- Chorpita B, Daleiden E. Structuring the Collaboration of Science and Service in Pursuit of a Shared Vision. Journal of Clinical Child & Adolescent Psychology. 2014;43:323–338. doi: 10.1080/15374416.2013.828297. [DOI] [PubMed] [Google Scholar]
- Christensen G, Freese J, Miguel E. Transparent and reproducible social science research: How to do open science. University of California Press; 2019. [Google Scholar]
- Christensen, G., Wang, Z., Paluck, E. L., Swanson, N. B., Birke, D., Miguel, E., & Littman, R. (2020). Open Science Practices are on the Rise: The State of Social Science (3S) Survey. Working Paper Series No. WPS-106. Berkeley, CA: Center for Effective Global Action. University of California, Berkeley.
- Cooper H, DeNeve K, Charlton K. Finding the missing science: The fate of studies submitted for review by a human subjects committee. Psychological Methods. 1997;2:447–452. doi: 10.1037/1082-989X.2.4.447. [DOI] [Google Scholar]
- Craig, P., Dieppe, P., Macintyre, S., Michie, S., Nazareth, I., & Petticrew, M. (2008). Developing and evaluating complex interventions: The new Medical Research Council guidance. BMJ, 337. 10.1136/bmj.a1655 [DOI] [PMC free article] [PubMed]
- Craig P, Katikireddi SV, Leyland A, Popham F. Natural Experiments: An Overview of Methods, Approaches, and Contributions to Public Health Intervention Research. Annual Review of Public Health. 2017;38:39–56. doi: 10.1146/annurev-publhealth-031816-044327. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Crowley, D. M., Dodge, K. A., Barnett, W. S., Corso, P., Duffy, S., Graham, P., ... Plotnick, R. (2018). Standards of Evidence for Conducting and Reporting Economic Evaluations in Prevention Science. Prevention Science, 19(3), 366-390. 10.1007/s11121-017-0858-1 [DOI] [PMC free article] [PubMed]
- Cuijpers P, Smit F, Bohlmeijer E, Hollon SD, Andersson G. Efficacy of cognitive-behavioural therapy and other psychological treatments for adult depression: Meta-analytic study of publication bias. The British Journal of Psychiatry: The Journal of Mental Science. 2010;196:173–178. doi: 10.1192/bjp.bp.109.066001. [DOI] [PubMed] [Google Scholar]
- Cuijpers P, Straten AV, Bohlmeijer E, Hollon SD, Andersson G. The effects of psychotherapy for adult depression are overestimated: A meta-analysis of study quality and effect size. Psychological Medicine. 2010;40:211–223. doi: 10.1017/S0033291709006114. [DOI] [PubMed] [Google Scholar]
- Cybulski L, Mayo-Wilson E, Grant S. Improving transparency and reproducibility through registration: The status of intervention trials published in clinical psychology journals. Journal of Consulting and Clinical Psychology. 2016;84:753–767. doi: 10.1037/ccp0000115. [DOI] [PubMed] [Google Scholar]
- da Silva JAT, Dobránszki J. Preprint policies among 14 academic publishers. The Journal of Academic Librarianship. 2019;45:162–170. doi: 10.1016/j.acalib.2019.02.009. [DOI] [Google Scholar]
- Dal-Ré, R., Bracken, M. B., & Ioannidis, J. P. A. (2015). Call to improve transparency of trials of non-regulated interventions. BMJ, 350. 10.1136/bmj.h1323 [DOI] [PubMed]
- De Angelis, C., Drazen, J. M., Frizelle, F. A., Haug, C., Hoey, J., Horton, R., ... Weyden, M. B. V. D. (2004). Clinical Trial Registration: A Statement from the International Committee of Medical Journal Editors. New England Journal of Medicine, 351(12), 1250-1251. [DOI] [PubMed]
- De Angelis, C. D., Drazen, J. M., Frizelle, F. A., Haug, C., Hoey, J., Horton, R., ... Van Der Weyden, M. B. (2005). Is This Clinical Trial Fully Registered? Annals of Internal Medicine, 143(2), 146-148. 10.7326/0003-4819-143-2-200507190-00016 [DOI] [PubMed]
- Declaration on Research Assessment. (2018). San Francisco Declaration on Research Assessment. Retrieved 19 January 2022, from https://sfdora.org/read/
- Devezer, B., Nardin, L. G., Baumgaertner, B., & Buzbas, E. O. (2019). Scientific discovery in a model-centric framework: Reproducibility, innovation, and epistemic diversity. PLoS One, 14(5), e0216125. [DOI] [PMC free article] [PubMed]
- Dickersin K. Keeping posted: Why register clinical trials?—Revisited. Controlled Clinical Trials. 1992;13:170–177. doi: 10.1016/0197-2456(92)90022-R. [DOI] [PubMed] [Google Scholar]
- Driessen E, Hollon SD, Bockting CLH, Cuijpers P, Turner EH. Does Publication Bias Inflate the Apparent Efficacy of Psychological Treatment for Major Depressive Disorder? A Systematic Review and Meta-Analysis of US National Institutes of Health-Funded Trials. PLoS ONE. 2015;10:e0137864. doi: 10.1371/journal.pone.0137864. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Dutta, M., Ramasubramanian, S., Barrett, M., Elers, C., Sarwatay, D., Raghunath, P., ... Zapata, D. (2021). Decolonizing open science: Southern interventions. Journal of Communication. 10.1093/joc/jqab1027
- Dwan K, Gamble C, Williamson PR, Kirkham JJ. Systematic Review of the Empirical Evidence of Study Publication Bias and Outcome Reporting Bias — An Updated Review. PLoS ONE. 2013;8:e66844. doi: 10.1371/journal.pone.0066844. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Edwards MA, Roy S. Academic research in the 21st century: Maintaining scientific integrity in a climate of perverse incentives and hypercompetition. Environmental Engineering Science. 2017;34:51–61. doi: 10.1089/ees.2016.0223. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Emerson GB, Warme WJ, Wolf FM, Heckman JD, Brand RA, Leopold SS. Testing for the Presence of Positive-Outcome Bias in Peer Review: A Randomized Controlled Trial. Archives of Internal Medicine. 2010;170:1934–1939. doi: 10.1001/archinternmed.2010.406. [DOI] [PubMed] [Google Scholar]
- Fagan A, Bumbarger B, Barth R, Bradshaw CP, Rhoades Cooper B, Supplee L, Walker D. Scaling up evidence-based interventions in US public systems to prevent behavioral health problems: Challenges and opportunities. Prevention Science. 2019;20:1147–1168. doi: 10.1007/s11121-019-01048-8. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Faggiano, F., Allara, E., Giannotta, F., Molinar, R., Sumnall, H., Wiers, R., ... Conrod, P. (2014). Europe Needs a Central, Transparent, and Evidence-Based Approval Process for Behavioural Prevention Interventions. PLoS Medicine, 11(10), e1001740. [DOI] [PMC free article] [PubMed]
- Fanelli, D. (2009). How many scientists fabricate and falsify research? A systematic review and meta-analysis of survey data. PLoS One, 4(5), e5738. [DOI] [PMC free article] [PubMed]
- Fanelli D. Do Pressures to Publish Increase Scientists' Bias? An Empirical Support from US States Data. PLoS ONE. 2010;5:e10271. doi: 10.1371/journal.pone.0010271. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Fanelli D. “Positive” Results Increase Down the Hierarchy of the Sciences. PLoS ONE. 2010;5:e10068. doi: 10.1371/journal.pone.0010068. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Fanelli D. Negative results are disappearing from most disciplines and countries. Scientometrics. 2012;90:891–904. doi: 10.1007/s11192-011-0494-7. [DOI] [Google Scholar]
- Fawcett SB, Francisco VT, Schultz JA, Berkowitz B, Wolff TJ, Nagy G. The Community Tool Box: A Web-based resource for building healthier communities. Public Health Reports. 2000;115:274–278. doi: 10.1093/phr/115.2.274. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Fidler, F., & Wilcox, J. (2018). Reproducibility of Scientific Results. In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy (Winter 2018 ed.). Stanford, CA: Metaphysics Research Lab, Stanford University.
- Flay, B. R., Biglan, A., Boruch, R. F., Castro, F. G., Gottfredson, D., Kellam, S., ... Ji, P. (2005). Standards of Evidence: Criteria for Efficacy, Effectiveness and Dissemination. Prevention Science, 6(3), 151-175. 10.1007/s11121-005-5553-y [DOI] [PubMed]
- Foster ED, Deardorff A. Open Science Framework (OSF). Journal of the Medical Library Association: JMLA. 2017;105:203–206. doi: 10.5195/jmla.2017.88. [DOI] [Google Scholar]
- Fox, J., Pearce, K. E., Massanari, A. L., Riles, J. M., Szulc, Ł., Ranjit, Y. S., ... Gonzales, A. L. (2021). Open science, closed doors? Countering marginalization through an agenda for ethical, inclusive research in communication. Journal of Communication. 10.1093/joc/jqab1029
- Franco A, Malhotra N, Simonovits G. Publication bias in the social sciences: Unlocking the file drawer. Science. 2014;345:1502–1505. doi: 10.1126/science.1255484. [DOI] [PubMed] [Google Scholar]
- Gall, T., Ioannidis, J. P., & Maniadis, Z. (2017). The credibility crisis in research: Can economics tools help? PLoS Biology, 15(4), e2001846. [DOI] [PMC free article] [PubMed]
- Gamble, C., Krishan, A., Stocken, D., Lewis, S., Juszczak, E., Doré, C., ... Loder, E. (2017). Guidelines for the content of statistical analysis plans in clinical trials. JAMA, 318, 2337-2343. [DOI] [PubMed]
- Gandrud C. Reproducible research with R and R studio. CRC Press; 2013. [Google Scholar]
- Gennetian LA, Tamis-LeMonda CS, Frank MC. Advancing Transparency and Openness in Child Development Research: Opportunities. Child Development Perspectives. 2020;14:3–8. doi: 10.1111/cdep.12356. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Gentzkow M, Shapiro JM. Code and data for the social sciences: A practitioner’s guide. University of Chicago; 2014. [Google Scholar]
- Gilmore RO, Cole PM, Verma S, Aken MAGV, Worthman CM. Advancing Scientific Integrity, Transparency, and Openness in Child Development Research: Challenges and Possible Solutions. Child Development Perspectives. 2020;14:9–14. doi: 10.1111/cdep.12360. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Gilmore RO, Kennedy JL, Adolph KE. Practical Solutions for Sharing Data and Materials From Psychological Research. Advances in Methods and Practices in Psychological Science. 2018 doi: 10.1177/2515245917746500. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Glasziou, P., Altman, D. G., Bossuyt, P., Boutron, I., Clarke, M., Julious, S., ... Wager, E. (2014). Reducing waste from incomplete or unusable reports of biomedical research. The Lancet, 383(9913), 267-276. 10.1016/S0140-6736(13)62228-X [DOI] [PubMed]
- Goerge R, Gjertson L, De La Cruz E. Administrative Data for the Public Good. Chapin Hall at the University of Chicago; 2017. [Google Scholar]
- Goodman, S. N., Fanelli, D., & Ioannidis, J. P. (2016). What does research reproducibility mean? Science Translational Medicine, 8(341), 341ps312. [DOI] [PubMed]
- Gorman DM, Elkins AD, Lawley M. A Systems Approach to Understanding and Improving Research Integrity. Science and Engineering Ethics. 2019;25:211–229. doi: 10.1007/s11948-017-9986-z. [DOI] [PubMed] [Google Scholar]
- Gottfredson DC, Cook TD, Gardner FEM, Gorman-Smith D, Howe GW, Sandler IN, Zafft KM. Standards of Evidence for Efficacy, Effectiveness, and Scale-up Research in Prevention Science: Next Generation. Prevention Science. 2015;16:893–926. doi: 10.1007/s11121-015-0555-x. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Grant S, Bouskill KE. Why institutional review boards should have a role in the open science movement. PNAS. 2019;116:21336–21338. doi: 10.1073/pnas.1916420116. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Grant S, Mayo-Wilson E, Montgomery P, Macdonald G, Michie S, Hopewell S, Moher D. CONSORT-SPI 2018 Explanation and Elaboration: Guidance for reporting social and psychological intervention trials. Trials. 2018;19:406. doi: 10.1186/s13063-018-2735-z. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Grant, S. P., Mayo-Wilson, E., Melendez-Torres, G. J., & Montgomery, P. (2013). Reporting Quality of Social and Psychological Intervention Trials: A Systematic Review of Reporting Guidelines and Trial Publications. PLoS One, 8(5), e65442. [DOI] [PMC free article] [PubMed]
- Green LW, Kreuter MW. Health Promotion Planning: An Educational and Ecological Approach. 4. McGraw-Hill; 2005. [Google Scholar]
- Hardwicke TE, Serghiou S, Janiaud P, Danchev V, Crüwell S, Goodman SN, Ioannidis JPA. Calibrating the Scientific Ecosystem Through Meta-Research. Annual Review of Statistics and Its Application. 2020;7:11–37. doi: 10.1146/annurev-statistics-031219-041104. [DOI] [Google Scholar]
- Harris R. Rigor mortis: How sloppy science creates worthless cures, crushes hope, and wastes billions. Basic Books; 2017. [Google Scholar]
- Harrison BA, Mayo-Wilson E. Trial registration: Understanding and preventing reporting bias in social work research. Research on Social Work Practice. 2014;24:372–376. doi: 10.1177/1049731513512374. [DOI] [Google Scholar]
- Hartgerink, C., van Aert, R., Nuijten, M. B., Wicherts, J. M., & Assen, M. A. L. M. V. (2016). Distributions of p-values smaller than .05 in psychology: what is going on? PeerJ, 4, e1935. [DOI] [PMC free article] [PubMed]
- Haven TL, Van Grootel DL. Preregistering qualitative research. Accountability in Research. 2019;26:229–244. doi: 10.1080/08989621.2019.1580147. [DOI] [PubMed] [Google Scholar]
- Hesse, B. W., Conroy, D. E., Kwaśnicka, D., Waring, M. E., Hekler, E., Andrus, S., ... Diefenbach, M. A. (2021). We’re all in this together: Recommendations from the Society of Behavioral Medicine’s Open Science Working Group. Translational Behavioral Medicine, 11(3), 693-698. [DOI] [PubMed]
- Hiriscau IE, Stingelin-Giles N, Stadler C, Schmeck K, Reiter-Theil S. A right to confidentiality or a duty to disclose? Ethical guidance for conducting prevention research with children and adolescents. European Child & Adolescent Psychiatry. 2014;23:409–416. doi: 10.1007/s00787-014-0526-y. [DOI] [PubMed] [Google Scholar]
- Hoces de la Guardia, F., Grant, S., & Miguel, E. (2021). A framework for open policy analysis. Science and Public Policy, 48(2), 154–163.
- Holzwart, R., & Wagner, H. (2020). Methods for promoting open science in social policy research: Summary of 2019 OPRE Methods Meeting (OPRE Report 2020–24). Washington, DC: Office of Planning, Research and Evaluation, Administration for Children and Families, U.S. Department of Health and Human Services.
- Huffman JE. Examining the current standards for genetic discovery and replication in the era of mega-biobanks. Nature Communications. 2018;9:5054. doi: 10.1038/s41467-018-07348-x. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Humphreys L, Lewis NA, Sender K, Won AS. Integrating qualitative methods and open science: Five principles for more trustworthy research. Journal of Communication. 2021 doi: 10.1093/joc/jqab1026. [DOI] [Google Scholar]
- Institute of Educational Sciences. (2021). Program Announcement: Research Grants Focused on Systematic Replication CFDA 84.305R. Washington, D.C.: U.S. Department of Education.
- Ioannidis, J. P. (2005). Why most published research findings are false. PLoS Medicine, 2(8), e124. [DOI] [PMC free article] [PubMed]
- Ioannidis JPA. How to Make More Published Research True. PLoS Medicine. 2014;11:e1001747. doi: 10.1371/journal.pmed.1001747. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Ioannidis JPA, Fanelli D, Dunne DD, Goodman SN. Meta-research: Evaluation and Improvement of Research Methods and Practices. PLoS Biology. 2015;13:e1002264. doi: 10.1371/journal.pbio.1002264. [DOI] [PMC free article] [PubMed] [Google Scholar]
- John LK, Loewenstein G, Prelec D. Measuring the prevalence of questionable research practices with incentives for truth telling. Psychological Science. 2012;23:524–532. doi: 10.1177/0956797611430953. [DOI] [PubMed] [Google Scholar]
- Kaiser J. The preprint dilemma. Science. 2017;357:1344–1349. doi: 10.1126/science.357.6358.1344. [DOI] [PubMed] [Google Scholar]
- Kapiszewski, D., & Karcher, S. (2021). Transparency in practice in qualitative research. PS: Political Science & Politics, 54(2), 285–291.
- Kerr NL. HARKing: Hypothesizing after the results are known. Personality and Social Psychology Review. 1998;2:196–217. doi: 10.1207/s15327957pspr0203_4. [DOI] [PubMed] [Google Scholar]
- Kidwell, M. C., Lazarević, L. B., Baranski, E., Hardwicke, T. E., Piechowski, S., Falkenberg, L.-S., ... Nosek, B. A. (2016). Badges to Acknowledge Open Practices: A Simple, Low-Cost, Effective Method for Increasing Transparency. PLoS Biology, 14(5), e1002456. [DOI] [PMC free article] [PubMed]
- Kitzes J. The Basic Reproducible Workflow Template. In: The Practice of Reproducible Research: Case Studies and Lessons from the Data-Intensive Sciences. University of California Press; 2018. [Google Scholar]
- Klein, R. A., Ratliff, K. A., Vianello, M., Adams Jr, R. B., Bahník, Š., Bernstein, M. J., ... Nosek, B. A. (2014). Investigating variation in replicability: A “many labs” replication project. Social Psychology, 45(3), 142-152.
- Knüppel, H., Metz, C., Meerpohl, J. J., & Strech, D. (2013). How Psychiatry Journals Support the Unbiased Translation of Clinical Research. A Cross-Sectional Study of Editorial Policies. PLoS One, 8(10), e75995. 10.1371/journal.pone.0075995 [DOI] [PMC free article] [PubMed]
- Krishna, A., & Peter, S. M. (2018). Questionable research practices in student final theses–Prevalence, attitudes, and the role of the supervisor’s perceived attitudes. PLoS One, 13(8), e0203470. [DOI] [PMC free article] [PubMed]
- Laakso M. Green open access policies of scholarly journal publishers: A study of what, when, and where self-archiving is allowed. Scientometrics. 2014;99:475–494. doi: 10.1007/s11192-013-1205-3. [DOI] [Google Scholar]
- Lakens, D., Adolfi, F. G., Albers, C. J., Anvari, F., Apps, M. A. J., Argamon, S. E., ... Zwaan, R. A. (2018). Justify your alpha. Nature Human Behaviour, 2(3), 168-171.
- Leadbeater BJ, Banister E, Benoit C, Jansson M, Marshall A, Riecken T. Ethical issues in community-based research with children and youth. University of Toronto Press; 2006. [Google Scholar]
- Leadbeater, B. J., Dishion, T., Sandler, I., Bradshaw, C. P., Dodge, K., Gottfredson, D., ... Smith, E. P. (2018). Ethical Challenges in Promoting the Implementation of Preventive Interventions: Report of the SPR Task Force. Prevention Science, 19(7), 853-865. [DOI] [PMC free article] [PubMed]
- Leijten, P., Scott, S., Landau, S., Harris, V., Mann, J., Hutchings, J., ... Gardner, F. (2020). Individual Participant Data Meta-analysis: Impact of Conduct Problem Severity, Comorbid Attention-Deficit/Hyperactivity Disorder and Emotional Problems, and Maternal Depression on Parenting Program Effects. Journal of the American Academy of Child & Adolescent Psychiatry, 59(8), 933-943. 10.1016/j.jaac.2020.01.023 [DOI] [PubMed]
- Long JS. The workflow of data analysis using Stata. Stata Press; 2008. [Google Scholar]
- Lorenz TK, Holland KJ. Response to Sakaluk (2020): Let’s Get Serious About Including Qualitative Researchers in the Open Science Conversation. Archives of Sexual Behavior. 2020;49:2761–2763. doi: 10.1007/s10508-020-01851-3. [DOI] [PubMed] [Google Scholar]
- Macleod, M. R., Michie, S., Roberts, I., Dirnagl, U., Chalmers, I., Ioannidis, J. P., ... Glasziou, P. (2014). Biomedical research: Increasing value, reducing waste. The Lancet, 383(9912), 101-104. [DOI] [PubMed]
- Martinez, C., Hollister, J., Marwick, B., Szöcs, E., Zeitlin, S., Kinoshita, B. P., ... Meinke, B. (2020). Reproducibility in Science: A Guide to enhancing reproducibility in scientific results and writing. Retrieved 19 January 2022, from https://ropensci.github.io/reproducibility-guide/
- Masicampo, E. J., & Lalande, D. R. (2012). A peculiar prevalence of p values just below .05. Quarterly Journal of Experimental Psychology (2006), 65(11), 2271–2279. [DOI] [PubMed]
- Mayo-Wilson E, Dickersin K. Challenges stemming from NIH’s extended registration and reporting requirements. Nature Human Behaviour. 2018;2:97–97. doi: 10.1038/s41562-017-0286-z. [DOI] [Google Scholar]
- Mayo-Wilson, E., Grant, S., & Supplee, L. (2020). Clearinghouse Standards of Evidence on the Transparency and Reproducibility of Intervention Evaluations. MetaArXiv. [DOI] [PMC free article] [PubMed]
- Mayo-Wilson E, Grant S, Supplee L, Kianersi S, Amin A, DeHaven A, Mellor D. Evaluating implementation of the Transparency and Openness Promotion (TOP) guidelines: The TRUST process for rating journal policies, procedures, and practices. Research Integrity and Peer Review. 2021;6:1–11. doi: 10.1186/s41073-020-00104-0. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Mayo-Wilson, E., Heyward, J., Keyes, A., Reynolds, J., White, S., Atri, N., ... Ford, D. E. (2018). Clinical trial registration and reporting: A survey of academic organizations in the United States. BMC Medicine, 16(1), 60. [DOI] [PMC free article] [PubMed]
- McIntosh RD. Exploratory reports: A new article type for Cortex. Cortex. 2017;96:A1–A4. doi: 10.1016/j.cortex.2017.07.014. [DOI] [PubMed] [Google Scholar]
- McNeely CA, Lee WF, Rosenbaum JE, Alemu B, Renner LM. Long-term effects of truancy diversion on school attendance: A quasi-experimental study with linked administrative data. Prevention Science. 2019;20:996–1008. doi: 10.1007/s11121-019-01027-z. [DOI] [PubMed] [Google Scholar]
- McNutt M. “Plan S” falls short for society publishers—and for the researchers they serve. Proceedings of the National Academy of Sciences. 2019;116:2400–2403. doi: 10.1073/pnas.1900359116. [DOI] [PMC free article] [PubMed] [Google Scholar]
- McVay, M. A., & Conroy, D. E. (2019). Transparency and openness in behavioral medicine research. Translational Behavioral Medicine. [DOI] [PMC free article] [PubMed]
- Merton RK. The Sociology of Science: Theoretical and Empirical Investigations. University of Chicago Press; 1973. [Google Scholar]
- Michie S, Atkins L, West R. The Behavior Change Wheel: A guide to designing interventions. Silverback Publishing; 2014. [Google Scholar]
- Miguel, E., Camerer, C., Casey, K., Cohen, J., Esterling, K. M., Gerber, A., ... Van der Laan, M. (2014). Promoting transparency in social science research. Science, 343(6166), 30-31. [DOI] [PMC free article] [PubMed]
- Milette K, Roseman M, Thombs BD. Transparency of outcome reporting and trial registration of randomized controlled trials in top psychosomatic and behavioral health journals: A systematic review. Journal of Psychosomatic Research. 2011;70:205–217. doi: 10.1016/j.jpsychores.2010.09.015. [DOI] [PubMed] [Google Scholar]
- Moher, D., Bouter, L., Kleinert, S., Glasziou, P., Har Sham, M., Barbour, V., ... Dirnagl, U. (2019). The Hong Kong principles for assessing researchers: Fostering research integrity. OSF Preprints. [DOI] [PMC free article] [PubMed]
- Moher, D., Bouter, L., Kleinert, S., Glasziou, P., Har Sham, M., Barbour, V., ... Dirnagl, U. (2020). The Hong Kong principles for assessing researchers: Fostering research integrity. PLoS Biology, 18(7), e3000737. [DOI] [PMC free article] [PubMed]
- Moher, D., Glasziou, P., Chalmers, I., Nasser, M., Bossuyt, P. M., Korevaar, D. A., ... Boutron, I. (2016). Increasing value and reducing waste in biomedical research: Who's listening? The Lancet, 387(10027), 1573-1586. [DOI] [PubMed]
- Moher, D., Naudet, F., Cristea, I. A., Miedema, F., Ioannidis, J. P., & Goodman, S. N. (2018). Assessing scientists for hiring, promotion, and tenure. PLoS Biology, 16(3), e2004089. [DOI] [PMC free article] [PubMed]
- Moher, D., Schulz, K. F., Simera, I., & Altman, D. G. (2010). Guidance for Developers of Health Research Reporting Guidelines. PLoS Medicine, 7(2), e1000217. [DOI] [PMC free article] [PubMed]
- Montgomery P, Grant S, Mayo-Wilson E, Macdonald G, Michie S, Hopewell S, Moher D. Reporting randomised trials of social and psychological interventions: The CONSORT-SPI 2018 Extension. Trials. 2018;19:407. doi: 10.1186/s13063-018-2733-1. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Moore, G. F., Audrey, S., Barker, M., Bond, L., Bonell, C., Hardeman, W., ... Baird, J. (2015). Process evaluation of complex interventions: Medical Research Council guidance. BMJ, 350. [DOI] [PMC free article] [PubMed]
- Morling B, Calin-Jageman RJ. What psychology teachers should know about open science and the new statistics. Teaching of Psychology. 2020;47:169–179. doi: 10.1177/0098628320901372. [DOI] [Google Scholar]
- Morris M, Clark B. You want me to do WHAT? Evaluators and the pressure to misrepresent findings. American Journal of Evaluation. 2013;34:57–70. doi: 10.1177/1098214012457237. [DOI] [Google Scholar]
- Moshontz, H., Campbell, L., Ebersole, C. R., Ijzerman, H., Urry, H. L., Forscher, P. S., ... Chartier, C. R. (2018). The Psychological Science Accelerator: Advancing Psychology Through a Distributed Collaborative Network. Advances in Methods and Practices in Psychological Science. 10.1177/2515245918797607 [DOI] [PMC free article] [PubMed]
- Munafò M. Raising research quality will require collective action. Nature. 2019;576:183–183. doi: 10.1038/d41586-019-03750-7. [DOI] [PubMed] [Google Scholar]
- Munafò, M. R., Nosek, B. A., Bishop, D. V., Button, K. S., Chambers, C. D., Du Sert, N. P., ... Ioannidis, J. P. (2017). A manifesto for reproducible science. Nature Human Behaviour, 1(1), 0021. [DOI] [PMC free article] [PubMed]
- Nabyonga-Orem, J., Asamani, J. A., Nyirenda, T., & Abimbola, S. (2020). Article processing charges are stalling the progress of African researchers: a call for urgent reforms. BMJ Global Health, 5(9), e003650. [DOI] [PMC free article] [PubMed]
- National Academies of Sciences, Engineering, and Medicine . Fostering integrity in research. The National Academies Press; 2017. [PubMed] [Google Scholar]
- National Academies of Sciences, Engineering, and Medicine . Open science by design: Realizing a vision for 21st century research. The National Academies Press; 2018. [PubMed] [Google Scholar]
- National Academies of Sciences, Engineering, and Medicine . Reproducibility and replicability in science. The National Academies Press; 2019. [PubMed] [Google Scholar]
- National Academies of Sciences, Engineering, and Medicine. (2021). Developing a Toolkit for Fostering Open Science Practices: Proceedings of a Workshop. Washington, DC: The National Academies Press. [PubMed]
- National Academy of Sciences, National Academy of Engineering, & Institute of Medicine . Responsible Science: Ensuring the Integrity of the Research Process. National Academy Press; 1992. [Google Scholar]
- National Institute of General Medical Sciences. (2018). Clearinghouse for Training Modules to Enhance Data Reproducibility.
- Neuliep JW, Crandall R. Editorial bias against replication research. Journal of Social Behavior & Personality. 1990;5:85–90. [Google Scholar]
- Niemeyer H, Musch J, Pietrowsky R. Publication bias in meta-analyses of the efficacy of psychotherapeutic interventions for schizophrenia. Schizophrenia Research. 2012;138:103–112. doi: 10.1016/j.schres.2012.03.023. [DOI] [PubMed] [Google Scholar]
- Niemeyer H, Musch J, Pietrowsky R. Publication bias in meta-analyses of the efficacy of psychotherapeutic interventions for depression. Journal of Consulting and Clinical Psychology. 2013;81:58–74. doi: 10.1037/a0031152. [DOI] [PubMed] [Google Scholar]
- Norris E, O’Connor DB. Science as behaviour: Using a behaviour change approach to increase uptake of open science. Psychology & Health. 2019;34:1397–1406. doi: 10.1080/08870446.2019.1679373. [DOI] [PubMed] [Google Scholar]
- Nosek, B. A., Alter, G., Banks, G. C., Borsboom, D., Bowman, S. D., Breckler, S. J., ... Yarkoni, T. (2015). Promoting an open research culture. Science, 348, 1422-1425. [DOI] [PMC free article] [PubMed]
- Nosek BA, Ebersole CR, DeHaven AC, Mellor DT. The preregistration revolution. Proceedings of the National Academy of Sciences. 2018;115:2600–2606. doi: 10.1073/pnas.1708274114. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Nosek, B. A., & Errington, T. M. (2017). Reproducibility in Cancer Biology: Making sense of replications. eLife, 6, e23383. 10.7554/eLife.23383 [DOI] [PMC free article] [PubMed]
- Nosek, B. A., Spies, J. R., & Motyl, M. (2012). Scientific Utopia: II. Restructuring Incentives and Practices to Promote Truth Over Publishability. Perspectives on Psychological Science, 7(6), 615–631. 10.1177/1745691612459058 [DOI] [PMC free article] [PubMed]
- Nuijten, M. B. (2022). Assessing and improving robustness of psychological research findings in four steps. In Clinical Psychology and Questionable Research Practices. Springer.
- Nuijten, M. B., Hartgerink, C. H. J., van Assen, M. A. L. M., Epskamp, S., & Wicherts, J. M. (2016). The prevalence of statistical reporting errors in psychology (1985–2013). Behavior Research Methods, 48, 1205–1226. https://doi.org/10.3758/s13428-015-0664-2
- Office of Evaluation Sciences. (2020). OES Evaluation Process. General Services Administration.
- Ohmann, C., Moher, D., Siebert, M., Motschall, E., & Naudet, F. (2021). Status, use and impact of sharing individual participant data from clinical trials: A scoping review. BMJ Open, 11(8), e049228.
- Olson, C. M., Rennie, D., Cook, D., Dickersin, K., Flanagin, A., Hogan, J. W., ... Pace, B. (2002). Publication Bias in Editorial Decision Making. JAMA, 287(21), 2825–2828.
- Open Science Collaboration. (2015). Estimating the reproducibility of psychological science. Science, 349(6251), aac4716.
- Peng, R. D. (2011). Reproducible Research in Computational Science. Science, 334, 1226–1227. https://doi.org/10.1126/science.1213847
- Pigott, T. D., Valentine, J. C., Polanin, J. R., Williams, R. T., & Canada, D. D. (2013). Outcome-Reporting Bias in Education Research. Educational Researcher. https://doi.org/10.3102/0013189X13507104
- Pocock, S. J., Collier, T. J., Dandreo, K. J., Stavola, B. L. D., Goldman, M. B., Kalish, L. A., ... McCormack, V. A. (2004). Issues in the reporting of epidemiological studies: A survey of recent practice. BMJ, 329, 883. https://doi.org/10.1136/bmj.38250.571088.55
- Pownall, M., Talbot, C. V., Henschel, A., Lautarescu, A., Lloyd, K. E., Hartmann, H., ... Siegel, J. A. (2021). Navigating Open Science as Early Career Feminist Researchers. Psychology of Women Quarterly. https://doi.org/10.1177/03616843211029255
- Project TIER. (2016). The DRESS Protocol (version 1.0): Documenting Research in the Empirical Social Sciences. Retrieved 19 January 2022 from https://www.projecttier.org/tier-protocol/dress-protocol/
- Riehm, K. E., Azar, M., & Thombs, B. D. (2015). Transparency of outcome reporting and trial registration of randomized controlled trials in top psychosomatic and behavioral health journals: A 5-year follow-up. Journal of Psychosomatic Research, 79, 1–12. https://doi.org/10.1016/j.jpsychores.2015.04.010
- Rosenthal, R. (1979). The file drawer problem and tolerance for null results. Psychological Bulletin, 86, 638–641. https://doi.org/10.1037/0033-2909.86.3.638
- Ross-Hellauer, T. (2017). What is open peer review? A systematic review. F1000Research, 6, 588.
- Sabik, N. J., Matsick, J. L., McCormick-Huhn, K., & Cole, E. R. (2021). Bringing an intersectional lens to “open” science: An analysis of representation in the reproducibility project. Psychology of Women Quarterly. https://doi.org/10.1177/03616843211035678
- Sarabipour, S., Debat, H. J., Emmott, E., Burgess, S. J., Schwessinger, B., & Hensel, Z. (2019). On the value of preprints: An early career researcher perspective. PLoS Biology, 17, e3000151. https://doi.org/10.1371/journal.pbio.3000151
- Schapira, M., & Harding, R. J. (2020). Open laboratory notebooks: Good for science, good for society, good for scientists. F1000Research, 8, 87.
- Schmidt, S. (2009). Shall we really do it again? The powerful concept of replication is neglected in the social sciences. Review of General Psychology, 13, 90–100. https://doi.org/10.1037/a0015108
- Scott, A., Rucklidge, J. J., & Mulder, R. T. (2015). Is Mandatory Prospective Trial Registration Working to Prevent Publication of Unregistered Trials and Selective Outcome Reporting? An Observational Study of Five Psychiatry Journals That Mandate Prospective Clinical Trial Registration. PLoS ONE, 10, e0133718. https://doi.org/10.1371/journal.pone.0133718
- Shamseer, L., Hopewell, S., Altman, D. G., Moher, D., & Schulz, K. F. (2016). Update on the endorsement of CONSORT by high impact factor journals: A survey of journal ‘instructions to authors’ in 2014. Trials, 17, 301. https://doi.org/10.1186/s13063-016-1408-z
- Shrout, P. E., & Rodgers, J. L. (2018). Psychology, Science, and Knowledge Construction: Broadening Perspectives from the Replication Crisis. Annual Review of Psychology, 69, 487–510. https://doi.org/10.1146/annurev-psych-122216-011845
- Siegel, J. A., Calogero, R. M., Eaton, A. A., & Roberts, T. A. (2021). Identifying Gaps and Building Bridges Between Feminist Psychology and Open Science. Psychology of Women Quarterly. https://doi.org/10.1177/03616843211044494
- Simera, I., Moher, D., Hirst, A., Hoey, J., Schulz, K. F., & Altman, D. G. (2010). Transparent and accurate reporting increases reliability, utility, and impact of your research: Reporting guidelines and the EQUATOR Network. BMC Medicine, 8, 24. https://doi.org/10.1186/1741-7015-8-24
- Simera, I., Moher, D., Hoey, J., Schulz, K. F., & Altman, D. G. (2010). A catalogue of reporting guidelines for health research. European Journal of Clinical Investigation, 40, 35–53. https://doi.org/10.1111/j.1365-2362.2009.02234.x
- Simes, R. J. (1986). Publication bias: The case for an international registry of clinical trials. Journal of Clinical Oncology, 4, 1529–1541. https://doi.org/10.1200/JCO.1986.4.10.1529
- Simmons, J. P., Nelson, L. D., & Simonsohn, U. (2011). False-positive psychology: Undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychological Science, 22, 1359–1366. https://doi.org/10.1177/0956797611417632
- Simonsohn, U., Nelson, L. D., & Simmons, J. P. (2014). P-curve: A key to the file-drawer. Journal of Experimental Psychology: General, 143, 534. https://doi.org/10.1037/a0033242
- Smaldino, P. E., & McElreath, R. (2016). The natural selection of bad science. Royal Society Open Science, 3(9), 160384.
- Solomon, S., DeBruin, D., Eder, M. M., Heitman, E., Kaberry, J. M., McCormick, J. B., ... Anderson, E. E. (2016). Community-Engaged Research Ethics Review: Exploring Flexibility in Federal Regulations. IRB, 38(3), 11–19.
- Song, F., Parekh, S., Hooper, L., Loke, Y. K., Ryder, J., Sutton, A. J., ... Harvey, I. (2010). Dissemination and publication of research findings: An updated review of related biases. Health Technology Assessment, 14(8), iii, ix–xi, 1–193.
- Spellman, B. A. (2015). A Short (Personal) Future History of Revolution 2.0. Perspectives on Psychological Science, 10(6), 886–899. https://doi.org/10.1177/1745691615609918
- Spoth, R., Rohrbach, L. A., Greenberg, M., Leaf, P., Brown, C. H., Fagan, A., ... Contributing, A. (2013). Addressing Core Challenges for the Next Generation of Type 2 Translation Research and Systems: The Translation Science to Population Impact (TSci Impact) Framework. Prevention Science, 14(4), 319–351. https://doi.org/10.1007/s11121-012-0362-6
- Spybrook, J., Anderson, D., & Maynard, R. (2019). The Registry of Efficacy and Effectiveness Studies (REES): A step toward increased transparency in education. Journal of Research on Educational Effectiveness, 12, 5–9. https://doi.org/10.1080/19345747.2018.1529212
- Staines, G. L., & Cleland, C. M. (2007). Bias in Meta-Analytic Estimates of the Absolute Efficacy of Psychotherapy. Review of General Psychology, 11, 329–347. https://doi.org/10.1037/1089-2680.11.4.329
- Staudt, M., Howard, M. O., & Drake, B. (2001). The Operationalization, Implementation, and Effectiveness of the Strengths Perspective. Journal of Social Service Research, 27, 1–21. https://doi.org/10.1300/J079v27n03_01
- Sterling, T. D. (1959). Publication Decisions and their Possible Effects on Inferences Drawn from Tests of Significance—or Vice Versa. Journal of the American Statistical Association, 54, 30–34. https://doi.org/10.1080/01621459.1959.10501497
- Stodden, V., McNutt, M., Bailey, D. H., Deelman, E., Gil, Y., Hanson, B., ... Taufer, M. (2016). Enhancing reproducibility for computational methods. Science, 354(6317), 1240–1241.
- Stroebe, W., Postmes, T., & Spears, R. (2012). Scientific misconduct and the myth of self-correction in science. Perspectives on Psychological Science, 7, 670–688. https://doi.org/10.1177/1745691612460687
- Sumner, P., Vivian-Griffiths, S., Boivin, J., Williams, A., Venetis, C. A., Davies, A., ... Chambers, C. D. (2014). The association between exaggeration in health related science news and academic press releases: Retrospective observational study. BMJ, 349.
- Supplee, L. H., & Meyer, A. L. (2015). The intersection between prevention science and evidence-based policy: How the SPR evidence standards support human services prevention programs. Prevention Science, 16, 938–942. https://doi.org/10.1007/s11121-015-0590-7
- Szucs, D., & Ioannidis, J. P. A. (2017). Empirical assessment of published effect sizes and power in the recent cognitive neuroscience and psychology literature. PLoS Biology, 15, e2000797. https://doi.org/10.1371/journal.pbio.2000797
- Tamariz, L., Medina, H., Taylor, J., Carrasquillo, O., Kobetz, E., & Palacio, A. (2015). Are Research Ethics Committees Prepared for Community-Based Participatory Research? Journal of Empirical Research on Human Research Ethics, 10, 488–495. https://doi.org/10.1177/1556264615615008
- Tennant, J. P., Waldner, F., Jacques, D. C., Masuzzo, P., Collister, L. B., & Hartgerink, C. H. (2016). The academic, economic and societal impacts of Open Access: An evidence-based review. F1000Research, 5, 632.
- Tetzlaff, J. M., Moher, D., & Chan, A.-W. (2012). Developing a guideline for clinical trial protocol content: Delphi consensus survey. Trials, 13, 176. https://doi.org/10.1186/1745-6215-13-176
- Vazire, S. (2018). Implications of the credibility revolution for productivity, creativity, and progress. Perspectives on Psychological Science, 13, 411–417. https://doi.org/10.1177/1745691617751884
- von Elm, E., Altman, D. G., Egger, M., Pocock, S. J., Gøtzsche, P. C., & Vandenbroucke, J. P. (2014). The Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) Statement: Guidelines for reporting observational studies. International Journal of Surgery, 12, 1495–1499. https://doi.org/10.1016/j.ijsu.2014.07.013
- Wagenmakers, E.-J., Wetzels, R., Borsboom, D., van der Maas, H. L. J., & Kievit, R. A. (2012). An Agenda for Purely Confirmatory Research. Perspectives on Psychological Science, 7, 632–638. https://doi.org/10.1177/1745691612463078
- Wicherts, J. M., Veldkamp, C. L., Augusteijn, H. E., Bakker, M., Van Aert, R., & Van Assen, M. A. (2016). Degrees of freedom in planning, running, analyzing, and reporting psychological studies: A checklist to avoid p-hacking. Frontiers in Psychology, 7, 1832.
- Wilkinson, M. D., Dumontier, M., Aalbersberg, I. J., Appleton, G., Axton, M., Baak, A., ... Mons, B. (2016). The FAIR Guiding Principles for scientific data management and stewardship. Scientific Data, 3, 160018.
- Wilson, G., Bryan, J., Cranston, K., Kitzes, J., Nederbragt, L., & Teal, T. K. (2017). Good enough practices in scientific computing. PLoS Computational Biology, 13(6), e1005510.
- Woolston, C. (2021). University drops impact factor. Nature, 595, 462. https://doi.org/10.1038/d41586-021-01759-5
- World Medical Association. (2001). World Medical Association Declaration of Helsinki: Ethical principles for medical research involving human subjects. Bulletin of the World Health Organization, 79(4), 373–374.
- Xie, Y., Allaire, J. J., & Grolemund, G. (2020). R Markdown: The Definitive Guide. Chapman & Hall/CRC.
- Yong, E. (2018). Psychology’s Replication Crisis Is Running Out of Excuses. The Atlantic.
- Zarin, D. A., Tse, T., Williams, R. J., & Carr, S. (2016). Trial reporting in ClinicalTrials.gov—the final rule. New England Journal of Medicine, 375(20), 1998–2004.