Author manuscript; available in PMC 2019 Dec 27.
Published in final edited form as: Adv Methods Pract Psychol Sci. 2018 Oct 1;1(4):501–515. doi: 10.1177/2515245918797607

The Psychological Science Accelerator: Advancing Psychology through a Distributed Collaborative Network

Hannah Moshontz 1, Lorne Campbell 2, Charles R Ebersole 3, Hans IJzerman 4, Heather L Urry 5, Patrick S Forscher 6, Jon E Grahe 7, Randy J McCarthy 8, Erica D Musser 9, Jan Antfolk 10, Christopher M Castille 11, Thomas Rhys Evans 12, Susann Fiedler 13, Jessica Kay Flake 14, Diego A Forero 15, Steve M J Janssen 16, Justin Robert Keene 17, John Protzko 18, Balazs Aczel 19, Sara Álvarez Solas 20, Daniel Ansari 21, Dana Awlia 22, Ernest Baskin 23, Carlota Batres 24, Martha Lucia Borras-Guevara 25, Cameron Brick 26, Priyanka Chandel 27, Armand Chatard 28, William J Chopik 29, David Clarance 30, Nicholas A Coles 31, Katherine S Corker 32, Barnaby James Wyld Dixson 33, Vilius Dranseika 34, Yarrow Dunham 35, Nicholas W Fox 36, Gwendolyn Gardiner 37, S Mason Garrison 38, Tripat Gill 39, Amanda C Hahn 40, Bastian Jaeger 41, Pavol Kačmár 42, Gwenaël Kaminski 43, Philipp Kanske 44, Zoltan Kekecs 45, Melissa Kline 46, Monica A Koehn 47, Pratibha Kujur 48, Carmel A Levitan 49, Jeremy K Miller 50, Ceylan Okan 51, Jerome Olsen 52, Oscar Oviedo-Trespalacios 53, Asil Ali Özdoğru 54, Babita Pande 55, Arti Parganiha 56, Noorshama Parveen 57, Gerit Pfuhl 58, Sraddha Pradhan 59, Ivan Ropovik 60, Nicholas O Rule 61, Blair Saunders 62, Vidar Schei 63, Kathleen Schmidt 64, Margaret Messiah Singh 65, Miroslav Sirota 66, Crystal N Steltenpohl 67, Stefan Stieger 68, Daniel Storage 69, Gavin Brent Sullivan 70, Anna Szabelska 71, Christian K Tamnes 72, Miguel A Vadillo 73, Jaroslava V Valentova 74, Wolf Vanpaemel 75, Marco A C Varella 76, Evie Vergauwe 77, Mark Verschoor 78, Michelangelo Vianello 79, Martin Voracek 80, Glenn P Williams 81, John Paul Wilson 82, Janis H Zickfeld 83, Jack D Arnal 84, Burak Aydin 85, Sau-Chin Chen 86, Lisa M DeBruine 87, Ana Maria Fernandez 88, Kai T Horstmann 89, Peder M Isager 90, Benedict Jones 91, Aycan Kapucu 92, Hause Lin 93, Michael C Mensink 94, Gorka Navarrete 95, Miguel A Silan 96, Christopher R Chartier 97
PMCID: PMC6934079  NIHMSID: NIHMS1541903  PMID: 31886452

Abstract

Concerns have been growing about the veracity of psychological research. Many findings in psychological science are based on studies with insufficient statistical power and nonrepresentative samples, or may otherwise be limited to specific, ungeneralizable settings or populations. Crowdsourced research, a type of large-scale collaboration in which one or more research projects are conducted across multiple lab sites, offers a pragmatic solution to these and other current methodological challenges. The Psychological Science Accelerator (PSA) is a distributed network of laboratories designed to enable and support crowdsourced research projects. These projects can focus on novel research questions, or attempt to replicate prior research, in large, diverse samples. The PSA’s mission is to accelerate the accumulation of reliable and generalizable evidence in psychological science. Here, we describe the background, structure, principles, procedures, benefits, and challenges of the PSA. In contrast to other crowdsourced research networks, the PSA is ongoing (as opposed to time-limited), efficient (in terms of re-using structures and principles for different projects), decentralized, diverse (in terms of participants and researchers), and inclusive (of proposals, contributions, and other relevant input from anyone inside or outside of the network). The PSA and other approaches to crowdsourced psychological science will advance our understanding of mental processes and behaviors by enabling rigorous research and systematically examining its generalizability.

Keywords: Psychological Science Accelerator, crowdsourcing, generalizability, theory development, large-scale collaboration


The Psychological Science Accelerator (PSA) is a distributed network of laboratories designed to enable and support crowdsourced research projects. The PSA’s mission is to accelerate the accumulation of reliable and generalizable evidence in psychological science. Following the example of the Many Labs initiatives (Ebersole et al., 2016; Klein et al., 2014; Klein et al., 2018), Chartier (2017) called for psychological scientists to sign up to work together towards a more collaborative way of doing research. The initiative quickly grew into a network with over 300 data collection labs, an organized governance structure, and a set of policies for evaluating, preparing, conducting, and disseminating studies. Here, we introduce readers to the historical context from which the PSA emerged, the core principles of the PSA, the process by which we pursue our mission in line with these principles, and a short list of likely benefits and challenges of the PSA.

Background

Psychological science has a lofty goal: to describe, explain, and predict mental processes and behaviors. Currently, however, our ability to meet this goal is constrained by standard practices in conducting and disseminating research (Lykken, 1991; Nosek & Bar-Anan, 2012; Nosek, Spies, & Motyl, 2012; Simmons, Nelson, & Simonsohn, 2011). In particular, the composition and insufficient size of typical samples in psychological research introduce uncertainty about the veracity (Anderson & Maxwell, 2017; Cohen, 1992; Maxwell, 2004) and generalizability of findings (Elwert & Winship, 2014; Henrich, Heine, & Norenzayan, 2010).

Concerns about the veracity and generalizability of published studies are not new or specific to psychology (Baker, 2016; Ioannidis, 2005), but, in recent years, psychological scientists have engaged in reflection and reform (Nelson, Simmons, & Simonsohn, 2018). As a result, standard methodological and research dissemination practices in psychological science have evolved during the past decade. The field has begun to adopt long-recommended changes that can protect against common threats to statistical inference (Motyl et al., 2017), such as flexible data analysis (Simmons et al., 2011) and low statistical power (Button et al., 2013; Cohen, 1962). Psychologists have recognized the need for a greater focus on replication (i.e., conducting an experiment one or more additional times with a new sample), using a high degree of methodological similarity (also called direct or close replication; Brandt et al., 2014; Simons, 2014), and employing dissimilar methodologies (also called conceptual or distant replications; Crandall & Sherman, 2016). Increasingly, authors are encouraged to consider and explicitly indicate the populations and contexts to which they expect their findings to generalize (Kukull & Ganguli, 2012; Simons, Shoda, & Lindsay, 2017). Researchers are adopting more open scientific practices, such as sharing data, materials, and code to reproduce statistical analyses (Kidwell et al., 2016). These recent developments are moving us toward a more collaborative, reliable, and generalizable psychological science (Chartier et al., 2018).

During this period of reform, crowdsourced research projects in which multiple laboratories independently conduct the same study have become more prevalent. An early published example of this kind of crowdsourcing in psychological research, The Emerging Adulthood Measured at Multiple Institutions (EAMMI; Reifman & Grahe, 2016), was conducted in 2004. The EAMMI pooled data collected by undergraduate students in statistics and research methods courses at 10 different institutions (see also The School Spirit Study Group, 2004). More recent projects such as the Many Labs project series (Klein et al., 2014; Ebersole et al., 2016), Many Babies (Frank et al., 2017), the Reproducibility Project: Psychology (Open Science Collaboration, 2015), the Pipeline Project (Schweinsberg et al., 2016), the Human Penguin Project (IJzerman et al., 2018), and Registered Replication Reports (RRR; Alogna et al., 2014; O’Donnell et al., 2018; Simons, Holcombe, & Spellman, 2014) have involved research teams from many institutions contributing to large-scale, geographically distributed data collection. These projects accomplish many of the methodological reforms mentioned above, either by design or as a byproduct of large-scale collaboration. Indeed, crowdsourced research generally offers a pragmatic solution to four current methodological challenges.

First, crowdsourced research projects can achieve high statistical power by increasing sample size. A major limiting factor for individual researchers is the available number of participants for a particular study, especially when the study requires in-person participation. Crowdsourced research mitigates this problem by aggregating data from many labs. Aggregation results in larger sample sizes and, as long as the features that might cause variations in effect sizes are well-controlled, more precise effect-size estimates than any individual lab is likely to achieve independently. Thus, crowdsourced projects directly address concerns about statistical power within the published psychological literature (e.g., Fraley & Vazire, 2014) and are consistent with recent calls to emphasize meta-analytic thinking across multiple data sets (e.g., Cumming, 2014; LeBel, McCarthy, Earp, Elson, & Vanpaemel, 2018).
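
As a rough, hedged illustration of this point (not part of any PSA protocol), the sketch below shows how pooling participants across labs increases power for a simple two-group comparison. The effect size, per-lab sample size, and number of labs are hypothetical values chosen only for demonstration.

```python
# Minimal sketch: power for a two-group comparison as labs are pooled.
# All numbers below are hypothetical illustrations, not PSA defaults.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
effect_size = 0.20   # a small standardized mean difference (Cohen's d)
alpha = 0.05

for n_labs in (1, 5, 20):
    n_per_group = 50 * n_labs  # 50 participants per group per lab, pooled
    power = analysis.power(effect_size=effect_size,
                           nobs1=n_per_group,
                           alpha=alpha,
                           ratio=1.0,
                           alternative='two-sided')
    print(f"{n_labs:>2} lab(s): n per group = {n_per_group:>4}, power = {power:.2f}")
```

With these assumed values, a single lab of 100 participants is badly underpowered for a small effect, whereas the pooled sample from 20 labs is not; the precise numbers matter less than the qualitative pattern.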

Second, to the extent that findings do vary across labs, crowdsourced research provides more information about the generalizability of the tested effects than most psychology research. Conclusions from any individual instantiation of an effect (e.g., an effect demonstrated in a single study within a single sample at one point in time) are almost always overgeneralized (e.g., Greenwald, Pratkanis, Leippe, & Baumgardner, 1986). Any individual study occurs within an idiosyncratic, indefinite combination of contextual variables, most of which are presumed to be irrelevant to current theory. Testing an effect across several levels and combinations of such contextual variables (which is a natural byproduct of crowdsourcing) adds to our knowledge of its generalizability. Further, crowdsourced data collection can allow for estimating effect heterogeneity across contexts and can facilitate the discovery of new psychological mechanisms through exploratory analyses.
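
One common way to quantify such between-lab heterogeneity is a random-effects estimate of the between-site variance. The sketch below, which is an illustration rather than the PSA’s prescribed analysis, applies the DerSimonian–Laird estimator to a handful of hypothetical per-lab effect estimates and sampling variances.

```python
# Illustrative sketch: between-lab heterogeneity via the DerSimonian-Laird
# estimator. The per-lab estimates and variances are hypothetical.
import numpy as np

effects = np.array([0.31, 0.12, 0.45, 0.05, 0.22])     # per-lab effect estimates
variances = np.array([0.02, 0.03, 0.04, 0.02, 0.05])   # per-lab sampling variances

w = 1.0 / variances                          # fixed-effect (inverse-variance) weights
fixed = np.sum(w * effects) / np.sum(w)      # fixed-effect pooled estimate
Q = np.sum(w * (effects - fixed) ** 2)       # Cochran's Q statistic
k = len(effects)
C = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (Q - (k - 1)) / C)           # between-lab variance (tau^2)

print(f"Pooled effect = {fixed:.3f}, Q = {Q:.2f}, tau^2 = {tau2:.3f}")
```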

Third, crowdsourced research fits naturally with, and benefits significantly from, open scientific practices, as demonstrated by several prominent crowdsourced projects (e.g., the Many Labs projects). Crowdsourced research requires providing many teams access to the experimental materials and procedures needed to complete the same study. This demands greater transparency and documentation of the research workflow. Data from these projects are frequently analyzed by teams at multiple institutions, requiring researchers to take much greater care to document and share data and analyses. Once materials and data are ready to share within a collaborating team, they are also ready to share with the broader community of fellow researchers and consumers of science. This open sharing allows for secondary publications based on insights gleaned from these data sets (e.g., Vadillo, Gold, & Osman, in press; Van Bavel, Mende-Siedlecki, Brady, & Reinero, 2016).

Finally, crowdsourced research can promote inclusion and diversity within the research community, especially when it takes place in a globally distributed network. Researchers who lack the resources to independently conduct a large project can contribute to high-quality, impactful research. Similarly, researchers and participants from all over the world (with varying languages, cultures, and traditions) can participate, including people from countries presently under-represented in the scientific literature. In countries where most people do not have access to the Internet, studies administered online can produce inaccurate characterizations of the population (e.g., Batres & Perrett, 2014). For researchers who want to implement studies in countries with limited internet access, crowdsourced collaborations offer a means of accessing more representative samples by enabling the implementation of in-person studies from a distance.

These inherent features of crowdsourced research can accelerate the accumulation of reliable and generalizable empirical evidence in psychology. However, there are many ways in which crowdsourced research can itself be accelerated, and additional benefits can emerge given the right organizational infrastructure and support. Crowdsourced research, as it has thus far been implemented, has a high barrier to entry because of the resources required to recruit and maintain large collaboration networks. As a result, most of the prominent crowdsourced projects in psychology have been created and led by a small subset of researchers who are connected to the requisite resources and professional networks. This limits the impact of crowdsourced research to subdomains of psychology that reflect the idiosyncratic interests of the researchers leading these efforts.

Furthermore, even for the select groups of researchers who have managed these large-scale projects, recruitment of collaborators has been inefficient. Teams are formed ad hoc for each project, requiring a great deal of time and effort. Project leaders have often relied on crude methods, such as recruiting from the teams that contributed to their most recent crowdsourced project. This yields teams that are insular, rather than inclusive. Moreover, researchers who “skip” a project risk falling out of the recruitment network for subsequent projects, thus reducing opportunities for future involvement. For the reasons elaborated on above, and in order to make crowdsourced research more commonplace in psychology, to promote diversity in crowdsourcing, and to increase the efficiency of large-scale collaborations, we created the Psychological Science Accelerator (PSA).

Core Principles and Organizational Structure

The PSA is a standing, geographically distributed network of psychology laboratories willing to devote some of their research resources to large, multi-site, collaborative studies, at their discretion. As described in detail below, the PSA formalizes crowdsourced research by evaluating and selecting proposed projects, refining protocols, assigning them to participating labs, aiding in the ethics approval process, coordinating translation, and overseeing data collection and analysis. Five core principles, which reflect the four Mertonian norms of science (universalism, communalism, disinterestedness, and skepticism; Merton, 1942/1973), guide the PSA as follows:

  1. The PSA endorses the principle of diversity and inclusion: We endeavor towards diversity and inclusion in every aspect of the PSA’s functioning. This includes cultural and geographic diversity among participants and researchers conducting PSA-supported projects, as well as a diversity of research topics.

  2. The PSA endorses the principle of decentralized authority: PSA policies and procedures are set by committees in conjunction with the PSA community at large. Members collectively guide the direction of the PSA through the policies they vote for and the projects they support.

  3. The PSA endorses the principle of transparency: The PSA mandates transparent practices in its own policies and procedures, as well as in the projects it supports. All PSA projects require pre-registration: when the research is confirmatory, a pre-registration of hypotheses, methods, and analysis plans (e.g., Van ‘t Veer & Giner-Sorolla, 2016); when it is exploratory, an explicit statement to that effect. In addition, open data, open code, open materials, and depositing an open-access preprint report of the empirical results are required.

  4. The PSA endorses the principle of rigor: The PSA currently enables, supports, or requires appropriately large samples (Cohen, 1992; Ioannidis, 2005), expert review of the theoretical rationale (Cronbach & Meehl, 1955; LeBel, Berger, Campbell, & Loving, 2017), and vetting of methods by advisors with expertise in measurement and quantitative analysis.

  5. The PSA endorses the principle of openness to criticism: The PSA integrates critical assessment of its policies and research products into its process, requiring extensive review of all projects and annually soliciting external feedback on the organization as a whole.

Based on these five core principles, the PSA employs a broad committee structure to realize its mission (see Appendix for current committees). In keeping with the principle of decentralized authority, committees make all major PSA and project decisions based on majority vote, while the Director oversees day-to-day operations and evaluates the functioning and policies of the PSA with respect to the core principles. This structure and the number and focus of committees were decided by an interim leadership team appointed by the Director early in the PSA’s formation. The committees navigate the necessary steps for completing crowdsourced research such as selecting studies, making methodological revisions, ensuring that studies are conducted ethically, translating materials, managing and supporting labs as they implement protocols, analyzing and sharing data, writing and publishing manuscripts, and ensuring that people receive credit for their contributions. The operations of the PSA are transparent, with members of the PSA network (including participating data-collection labs, committee members, and any researcher who has opted to join the network) able to observe and comment at each major decision point.

How the Psychological Science Accelerator Works

PSA projects undergo a specific step-by-step process, moving from submission and evaluation of a study proposal, through preparation and implementation of data collection, to analysis and dissemination of research products. This process unfolds in four major phases.

Phase 1: Submission & Evaluation

Proposing authors submit a description of the proposed study background, desired participant characteristics, materials, procedures, hypotheses, effect-size estimates, and data-analysis plan, including an analysis script and simulated data when possible, much like a Stage 1 manuscript submitted under a Registered Reports model. These submissions are then masked and evaluated according to a process overseen by the Study Selection Committee. If proposing authors are members of the PSA network, they and any close colleagues of proposing authors recuse themselves from participating in the evaluation of their proposals and all proposals submitted in response to that particular call for studies.

The evaluation process includes an initial feasibility check of the methods to gauge whether the PSA could run the proposed project given its currently available data-collection capacity, ethical concerns, and resource constraints; this is decided by vote of the Study Selection Committee. Protocols that use, or could be adapted to use, open source and easily transferable platforms are prioritized. Next, protocols undergo peer review by 10 individuals with appropriate expertise: six qualified committee members of the PSA who will evaluate specific aspects of the proposal, two additional experts within the network, and two experts outside the network. These individuals submit brief reviews to the Study Selection Committee while the Director concurrently shares submissions with the full network to solicit feedback and assess interest among network laboratories regarding their preliminary willingness and ability to collect data, should the study be selected. Finally, the Study Selection Committee votes on final selections based on reviewer feedback and evaluations from the PSA network. Selected projects proceed to the next phase. Proposing authors whose projects are not selected may be encouraged to revise the protocol or use another network of team-based psychology researchers (e.g., StudySwap; McCarthy & Chartier, 2017), depending on the feedback produced by the review process.

Phase 2: Preparation

Next, the Methodology and Data Analysis Committee, whose members are selected on the basis of methodological and statistical expertise, evaluates and suggests revisions of the selected studies to help prepare the protocols for implementation. At least one committee member will work alongside the proposing authors to provide sustained methodological support throughout the planning, implementation, and dissemination of the project. The final protocols and analysis plans that emerge from this partnership are shared with the full network for a brief feedback period, after which the proposing authors make any necessary changes.

Drawing on general guidelines specified by the Authorship Criteria Committee, the proposing authors simultaneously establish specific authorship criteria to share with all labs in the network who might collect data for the study. Next, the Logistics Committee identifies specific labs willing and able to run the specific protocols, bundling multiple studies into single laboratory sessions to maximize data collection efficiency when possible. The Logistics Committee then matches data collection labs to projects. Not every network lab participates in every study. Rather, labs are selected from the pool of willing and able labs based on the sample size needed (derived from power analyses), each lab’s capacity and technological resources (e.g., their access to specific software), and with consideration of the project’s need for geographic and other types of subject and lab diversity. Once data collection labs have committed to collect data for a specific study, including agreeing to authorship criteria and the proposed timeline for data collection, the Ethics Review Committee aids and oversees securing ethics approval at all study sites with consideration given to data sharing during this process. Data-collection labs revise provided template ethics materials as needed for their home institution and submit ethics documents for review. The data-collection labs, aided by the Translation and Cultural Diversity Committee, translate the procedures and study materials as needed following a process of translation, back-translation, and rectifying of differences (Behling & Law, 2000; Brislin, 1970).

Phase 3: Implementation

Implementation is the most time-intensive and variable phase. This process begins with pre-registering the hypotheses and confirmatory or exploratory research questions, the data-collection protocol, and the analysis plan developed in Phase 2, with instructional resources and support provided to the proposing authors as needed by the Project Management Committee. Pre-registration of confirmatory analysis plans, methods, and hypotheses is a minimum requirement of the PSA. The PSA encourages exploratory research and exploratory analyses, as long as these are transparently reported as such. Proposing authors are encouraged (but not required) to submit a Stage 1 Registered Report to a journal that accepts this format prior to data collection. Authors are encouraged to write the analysis script and test it on simulated data when possible. Following pre-registration, but prior to initiating data collection, the lead authors establish and rehearse their data-collection procedures and record a demonstration video, where appropriate, with mock participants. In consultation with the proposing authors, the Project Management Committee will evaluate these materials and make decisions about procedural fidelity to ensure cross-site quality. If differences are found by the Project Management Committee, contributing labs receive feedback and have a chance to respond. Once approved by the Project Management Committee, labs collect data. Following data collection, each lab’s data and final materials are anonymized, uploaded, and made public on a repository such as the Open Science Framework (OSF), in accordance with ethics approval and other logistical considerations. A PSA team is available to review the analysis code, data, and materials after the project is finished. Final responsibility for the project is shared by the PSA and proposing authors.
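
To make concrete what “write the analysis script and test it on simulated data” can look like in practice, the sketch below simulates a two-condition study under assumed parameters and runs a planned confirmatory test end to end. The design, effect size, and sample size are hypothetical, and a simple Welch’s t-test stands in for whatever analysis a given project actually pre-registers.

```python
# Sketch: rehearsing a planned analysis on simulated data before collection.
# The two-condition design, effect size, and n are hypothetical choices.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2018)
n_per_condition = 200
assumed_d = 0.25   # assumed smallest effect of interest, for illustration only

control = rng.normal(loc=0.0, scale=1.0, size=n_per_condition)
treatment = rng.normal(loc=assumed_d, scale=1.0, size=n_per_condition)

# The stand-in "pre-registered" confirmatory test: Welch's t-test.
t_stat, p_value = stats.ttest_ind(treatment, control, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # confirms the script runs end to end
```

Running such a script before any real data exist verifies that the analysis code executes, that its inputs match the planned data structure, and that its outputs are interpretable.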

Phase 4: Analysis and Dissemination

The proposing authors complete confirmatory data analyses, as described in their pre-registration. Once the confirmatory analyses are conducted, the proposing authors draft the empirical report. Drafting authors are encouraged to write the manuscript as a dynamic document, for example using R Markdown. All contributing labs and other authors (e.g., those involved in designing and implementing the project) are given the opportunity to provide feedback and approve the manuscript with reasonable lead time prior to submission. Following the principle of transparency, the PSA prefers publishing in open-access outlets or as open-access articles. At a minimum, PSA articles are required to be “green open access,” meaning that proposing authors upload a preprint of their empirical report (i.e., the version of the report submitted for publication) to at least one stable, publicly accessible repository (e.g., PsyArXiv). Preferably, PSA articles are also “gold open access,” meaning that the article is made openly available by the journal itself.

When the project is concluded, all data, analytic code, and metadata are posted in full and made public, or made as publicly available as possible given ethical and legal constraints (Meyer, 2018), on the OSF by default or on another independent repository on a case-by-case basis (e.g., Databrary; Gilmore, Kennedy, & Adolph, 2018). These data are made available for other researchers to conduct exploratory and planned secondary analyses. Data releases are staged such that a “train” dataset is publicly released quickly after data collection and preparation, and the remaining “test” dataset is released several months later (e.g., as in Klein et al., 2018). The exact timing of data release and the specific method of splitting the sample (e.g., the percentage of data held, whether and how the sampling procedure will account for clustering) are determined on a case-by-case basis to accommodate the unique goals and data structure of each project (Anderson & Magruder, 2017; Dwork et al., 2015; Fafchamps & Labonne, 2017). Plans for staged data release are described in an early, widely distributed public announcement that includes the exact timing. Any researcher can independently use additional cross-validation strategies to reduce the possibility that their inferences are based on overfitted models that leverage idiosyncratic features of a particular data set (see Yarkoni & Westfall, 2017). By staging data release, the PSA facilitates robust, transparent, and trustworthy exploratory analyses.
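
One simple way to respect clustering when constructing such a split is to assign whole labs, rather than individual participants, to the train or test partition. The sketch below is a minimal illustration under assumed column names and a hypothetical 70/30 split; it is not the PSA’s prescribed procedure, which is set per project.

```python
# Sketch of a lab-level "train"/"test" split for a staged data release, so that
# all observations from a given lab land in the same partition.
# Column names and the 70/30 split fraction are hypothetical choices.
import numpy as np
import pandas as pd

def split_by_lab(data: pd.DataFrame, lab_col: str = "lab_id",
                 train_fraction: float = 0.7, seed: int = 42):
    rng = np.random.default_rng(seed)
    labs = data[lab_col].unique()
    rng.shuffle(labs)                                   # shuffle lab identifiers
    n_train_labs = int(round(train_fraction * len(labs)))
    train_labs = set(labs[:n_train_labs])
    train = data[data[lab_col].isin(train_labs)]
    test = data[~data[lab_col].isin(train_labs)]
    return train, test

# Example usage with a toy data frame of 10 labs x 20 participants:
toy = pd.DataFrame({"lab_id": np.repeat(np.arange(10), 20),
                    "dv": np.random.default_rng(0).normal(size=200)})
train_df, test_df = split_by_lab(toy)
print(len(train_df), len(test_df))
```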

Benefits and Challenges

Our proposal to supplement the typical individual-lab approach with a crowdsourced approach to psychological science might seem utopian. However, teams of psychologists have already succeeded in completing large-scale projects (Ebersole et al., 2016; Grahe et al., 2017; IJzerman et al., 2018; Klein et al., 2014; Leighton, Legate, LePine, Anderson, & Grahe, 2018; Open Science Collaboration, 2015; Reifman & Grahe, 2016; Schweinsberg et al., 2016), thereby demonstrating that crowdsourced research is indeed both practical and generative. Accordingly, since its inception approximately ten months prior to this writing, the PSA community has steadily grown to include 346 labs, and we have approved three projects in various phases of the process described above. We therefore cultivate and work to maintain the expertise required to capitalize on the benefits and overcome the challenges of our standing-network approach to crowdsourcing research.

Benefits

Although the PSA leverages the same strengths available to other crowdsourced research, its unique features afford additional advantages. First, above and beyond the resource-sharing benefits of crowdsourced research, the standing nature of the PSA network further reduces the costs and inefficiency of recruiting new research teams for every project. This will lower the barrier to entry for crowdsourced research and allow more crowdsourced projects to take place.

Second, the PSA infrastructure enables researchers to discover meaningful variation in phenomena undetectable in typical samples collected at a single location (e.g., Corker, Donnellan, Kim, Schwartz, & Zamboanga, 2017; Hartshorne & Germine, 2015; Murre, Janssen, Rouw, & Meeter, 2013; Rentfrow, Gosling, & Potter, 2008). Unlike meta-analysis and other methods of synthesizing existing primary research retrospectively, PSA-supported projects can intentionally introduce and explicitly model methodological and contextual variation (e.g., in time, location, language, culture). In addition, anyone can use PSA-generated data to make such discoveries on an exploratory or confirmatory basis.

Third, by adopting transparent science practices, including pre-registration, open data, open code, and open materials, the PSA maximizes the informational value of its research products (Munafò et al., 2017; Nosek & Bar-Anan, 2012), increasing the chances that psychologists can develop formal theories. As a side benefit, the adoption of transparent practices will improve the trustworthiness of the products of the PSA and of psychological science more broadly (Vazire, 2017). Moreover, because a lack of education and information often impedes the use of transparent science practices, the PSA could increase their adoption by exposing hundreds of participating researchers to them. Furthermore, by creating a crowdsourcing research community that values open science, we provide a vehicle whereby adherence to recommended scientific practices is increased and perpetuated (see Banks, Rogelberg, Woznyj, Landis, & Rupp, 2016).

Fourth, because of its democratic and distributed research process, the PSA is unlikely to produce research that reflects the errors or biases of an individual. No one person has complete control of how the research questions are selected, the materials prepared, the protocol and analysis plans developed, the methods implemented, the effects tested, or the findings reported. For each of these tasks, committees populated with content and methodological experts work with proposing authors to identify methods and practices that lead to high levels of scientific rigor. Furthermore, the PSA’s process facilitates error detection and correction. The number of people involved at each stage, the oversight provided by expert committees, and the PSA’s commitment to transparency (e.g., of data, materials, and workflow; Nosek et al., 2012) all increase the likelihood of detecting errors. Driven by our goal to maximize diversity and inclusion of both participants and scientists, decisions reflect input from varied perspectives. Altogether, the PSA depends on distributed expertise, a model likely to reduce many common mistakes that researchers make during the course of independent projects.

Fifth, the PSA provides an ideal context in which to train early-career psychological scientists, and in which psychological scientists of all career stages can learn about new methodological practices and paradigms. With over 300 laboratories in our network, the PSA serves as a natural training ground. Early career researchers contribute to PSA projects by serving on committees, running subjects, and otherwise supporting high-quality projects that have benefited from the expertise of a broad range of scientific constituencies that reflect the core principles discussed above. The PSA demonstrates these core principles and practices to a large number of scientists, including trainees.

Sixth, the PSA provides tools to foster research collaborations beyond the projects ultimately selected for PSA implementation. For example, anyone within or outside the standing network of labs can potentially locate collaborators for very specific research questions by geographic region using an interactive and searchable map (psysciacc.org/map). Because all labs in the network are, in principle, open to multi-site collaborations, invitations to collaborate within the network may be more likely to be accepted than those outside of it.

Finally, the PSA provides a unique opportunity for methodological advancement via methodological research and metascience. As a routine part of conducting research with the PSA, the methodology and translation committees proactively consider analytic challenges and opportunities presented by crowdsourced research (e.g., assessing cross-site measurement invariance, accounting for heterogeneity across populations, using simulations to assess power). In doing so, the PSA can help researchers identify and question critical assumptions that pertain to measurement reliability and analysis generally and with respect to cross-cultural, large-scale collaborations. As a result, the PSA enables methodological insights and research to the benefit of the PSA and the broader scientific community.
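
As a small illustration of the kind of methodological question the committees consider (e.g., “using simulations to assess power”), the sketch below estimates power for a multi-site design in which the true effect varies across labs. All parameters are hypothetical, and a pooled Welch’s t-test is used as a simplified stand-in for a mixed-effects analysis.

```python
# Sketch: simulation-based power for a multi-site design with between-lab
# heterogeneity (tau = SD of the true effect across labs). Parameters are
# hypothetical; a pooled t-test stands in for a mixed model.
import numpy as np
from scipy import stats

def simulate_power(n_labs=20, n_per_cell=50, mean_d=0.2, tau=0.1,
                   alpha=0.05, n_sims=500, seed=1):
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(n_sims):
        treatment_groups, control_groups = [], []
        for _ in range(n_labs):
            lab_d = rng.normal(mean_d, tau)              # lab-specific true effect
            control_groups.append(rng.normal(0.0, 1.0, n_per_cell))
            treatment_groups.append(rng.normal(lab_d, 1.0, n_per_cell))
        pooled_t = np.concatenate(treatment_groups)
        pooled_c = np.concatenate(control_groups)
        _, p = stats.ttest_ind(pooled_t, pooled_c, equal_var=False)
        rejections += p < alpha
    return rejections / n_sims

print(f"Estimated power: {simulate_power():.2f}")
```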

Challenges

Along with the benefits described above, the PSA faces a number of logistical challenges arising from the same features that give the PSA its utility: namely, its system of distributed responsibility and credit among a large number of diverse labs. The decentralized approach to decision making, in which all researchers in the network can voice their perspectives, may exacerbate these challenges. By anticipating specific challenges and enlisting the help of people who have navigated other crowdsourced projects, however, the PSA is well-positioned to meet the logistical demands inherent to its functioning.

First, the ability to pool resources from many institutions is a strength of the PSA, but one that comes with a great deal of responsibility. The PSA draws on resources for each of its projects that could have been spent investigating other ideas. Our study selection process is meant to mitigate the risk of wasting valuable research resources and to calibrate the investment of resources to the potential of research questions. To keep these opportunity costs in check, each project must justify its required resources, a priori, to the PSA committees and the broader community.

Second, because the PSA is international, it faces theoretical and methodological challenges related to translation: both literal linguistic translation of stimuli and instructions, and more general translational issues related to cultural differences. Data integration and adaptation of studies to suit culturally diverse samples come with a host of assumptions to consider when designing the studies and when interpreting the final results. We are proactive in addressing these challenges, as members of our Translation and Cultural Diversity Committee and Methodology and Data Analysis Committee have experience with managing these difficulties. However, unforeseen challenges with managing such broad collaborations will still occur. Of course, the PSA was designed for these challenges and is committed to resolving them. We thus encourage studies that leverage the expertise of our diverse network.

Third, many of the PSA’s unique benefits arise from its diverse and inclusive nature; a major challenge facing the PSA is to realize these benefits across our member labs and participant populations. The PSA places a premium on promoting diversity and inclusion within our network. As shown in the map in Figure 1, we have recruited large numbers of labs in North America and Europe but far fewer labs from Africa, South America, and Asia. In addition to geographic and cultural diversity, a diverse range of topic expertise and subject areas is represented in the network and on each committee in ways that we believe facilitate diversity in the topics that the PSA studies. Maintaining and broadening diversity in expertise and geographical location requires concerted outreach, and entails identifying and eliminating the barriers that have resulted in underrepresentation of labs from some regions, countries, and types of institutions.

Figure 1. The global PSA network as of July 2018, consisting of 346 laboratories at 305 institutions in 53 countries.

A fourth challenge facing the PSA is to protect the rights of participants and their data. The Ethics Review Committee oversees the protection of human participants at every site for every project. Different countries and institutions have different guidelines and requirements for research on human participants. The PSA is committed to ensuring compliance with ethical principles and guidelines at each collection site, which requires attention and effort from all participating researchers.

Fifth, because the PSA relies on the resources held by participating labs, as with other forms of research and collaboration, the PSA is limited in the studies that it can conduct without external funding. Some types of studies are more difficult for the PSA to support than others (e.g., small group interactions, behavioral observation, protocols that require the use of specialized materials or supplies). Currently, the studies we select are limited to those that do not require expensive or uncommon equipment and are otherwise easy to implement across a wide variety of laboratories. As such, deserving research questions may not be selected by the PSA for feasibility reasons. We actively seek funding to support the organization and expand the range of studies that will be feasible for the PSA. For now, researchers can apply for and use grant funding to support project implementation via the PSA. There are currently a handful of labs with specialized resources (e.g., fMRI), and we hope that the network will eventually grow enough to support projects that require such specialized resources (e.g., developmental research that requires eye-tracking and research assistants trained to work with young children). Further, we are in the process of forming a new Funding committee devoted solely to the pursuit of financial support for the PSA and its member labs.

A final set of challenges for the PSA arises from the inherently collaborative nature of the research that the PSA will produce. Coordinating decision-making among hundreds of people is difficult. The PSA’s policies and committee structure were designed to facilitate effective communication and efficient decision-making; these systems will remain subject to revision and adaptation as needed. For example, decision deadlines are established publicly, and can sometimes be extended on request. The network’s size is a great advantage; if people, labs, or other individual components of the network are unable to meet commitments or deadlines, the network can proceed either without these contributions or with substituted contributions from others in the network. Another challenge that arises from the collaborative nature of the PSA’s products is awarding credit to the many people involved. Contributions to PSA-affiliated projects are clearly and transparently reported using the CRediT taxonomy (Brand, Allen, Altman, Hlava, & Scott, 2015). Authorship on empirical papers resulting from PSA projects is granted according to predetermined standards established by the lead authors of the project and differs from project to project. Finally, the collaborative and decentralized structure of the PSA increases the risk that responsibility for discrete research tasks like error-checking becomes too diffuse for any one person to take action. Our committee structure was designed in part to address this concern: committees composed of small groups of people take responsibility for executing specific tasks, such as translation. These committees implement quality control procedures, such as back-translation, to increase the probability that when errors occur, they are caught and corrected. Diffusion of responsibility is an ongoing concern that we will continue to monitor and address as our network expands and changes.

In sum, the PSA faces a number of challenges. We believe these are more than offset by its potential benefits. We take a proactive and innovative approach to facing these and any other challenges we encounter by addressing them explicitly through collaboratively-developed and transparent policies. By establishing flexible systems to manage the inherent challenges of large-scale, crowd-sourced research, the PSA is able to offer unprecedented support for psychological scientists who would like to conduct rigorous research on a global scale.

Conclusion

In a brief period of time, the PSA has assembled a diverse network of globally distributed researchers and participant samples. We have also assembled a team with wide-ranging design and analysis expertise and considerable experience in coordinating multi-site collaborations. In doing so, the PSA provides the infrastructure needed to accelerate rigorous psychological science. The full value of this initiative will not be known for years or perhaps decades. Individually manageable investments of time, energy, and resources, if distributed across an adequately large collaboration of labs, have the potential to yield important, lasting contributions to our understanding of psychology.

Success in this endeavor is far from certain. However, collaborative, multi-lab, and culturally diverse research initiatives like the PSA can not only advance understanding of specific phenomena and potentially resolve past disputes in the empirical literature, but also advance methodology and psychological theorizing. We thus call on all researchers with an interest in psychological science, regardless of discipline or area, world region, resources, or career stage, to join us and transform the PSA into a powerful tool for gathering reliable and generalizable evidence about human behavior and mental processes. If you are interested in joining the project or receiving regular updates about our work, please complete the brief sign-up form at https://psysciacc.org/get-involved/. Please join us; you are welcome in this collective endeavor.

Figure 2. The four major phases of a PSA research project.

Acknowledgments

This work was partially supported as follows. Hans IJzerman’s research is partly supported by the French National Research Agency in the framework of the “Investissements d’avenir” program (ANR-15-IDEX-02). Erica D. Musser’s work is supported in part by the United States National Institute of Mental Health (R03MH110812-02). Susann Fiedler’s work is supported in part by the Gielen-Leyendecker Foundation. Diego A. Forero is supported by research grants from Colciencias and VCTI. This material is based upon work supported by the National Science Foundation Graduate Research Fellowship awarded to Nicholas A. Coles. Any opinion, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation. This material is based upon work that has been supported by the National Science Foundation (DGE-1445197) to S. Mason Garrison. Tripat Gill’s work is partially supported by the Canada Research Chairs Program (SSHRC). Miguel A. Vadillo’s work is supported by Comunidad de Madrid (Programa de Atracción de Talento Investigador, Grant 2016-T1/SOC-1395). Evie Vergauwe’s work is supported in part by the Swiss National Science Foundation (PZ00P1_154911). Lisa M. DeBruine’s work is partially supported by ERC KINSHIP (647910). Ana Maria Fernandez’s work is partially supported by Fondecyt (1181114). Peder M. Isager’s work is partially supported by NWO VIDI 452-17-013. We thank Chris Chambers, Chuan-Peng Hu, Cody Christopherson, Darko Lončarić, David Mellor, Denis Cousineau, Etienne LeBel, Jill Jacobson, Kim Peters and William Jiménez-Leal for their commitment to the PSA through their service as members of our organizational committees.

Appendix

The Psychological Science Accelerator: Organizational Structure
Director: The Director oversees all operations of the PSA, appoints members of committees, and ensures that the PSA activities are directly aligned with our mission and core principles. Christopher R. Chartier (Ashland University)
Leadership Team: The LT oversees the development of PSA committees and policy documents. It will soon establish procedures for electing members of the Leadership Team and all other PSA committees. Sau-Chin Chen (Tzu-Chi University), Lisa DeBruine (University of Glasgow), Charles Ebersole (University of Virginia), Hans IJzerman (Universite Grenoble Alpes), Steve Janssen (University of Nottingham-Malaysia Campus), Melissa Kline (MIT), Darko Lončarić (University of Rijeka), Heather Urry (Tufts University)
Study Selection Committee: The SSC reviews study submissions and selects which proposals will be pursued by the PSA. Jan Antfolk (Åbo Akademi University), Melissa Kline (MIT), Randy McCarthy (Northern Illinois University), Kathleen Schmidt (Southern Illinois University Carbondale), Miroslav Sirota (University of Essex)
Ethics Review Committee: The ERC reviews all study submissions, identifies possible ethical challenges imposed by particular projects, and assists in getting ethics approval from participating institutions. Cody Christopherson (Southern Oregon University), Michael Mensink (University of Wisconsin-Stout), Erica D. Musser (Florida International University), Kim Peters (University of Queensland), Gerit Pfuhl (University of Tromso)
Logistics Committee: The LC manages the final matching of proposed projects and contributing labs. Susann Fiedler (Max Planck Institute for Research on Collective Goods), Jill Jacobson (Queen’s University), Ben Jones (University of Glasgow)
Community Building and Network Expansion Committee: The CBNEC exists to improve the reach of and access to the PSA, both internally and with regard to public-facing activities. Activities include lab recruitment and social media. Jack Arnal (McDaniel College), Nicholas Coles (University of Tennessee), Crystal N. Steltenpohl (University of Southern Indiana), Anna Szabelska (Queen’s University Belfast), Evie Vergauwe (University of Geneva)
Methodology and Data Analysis Committee: The MDAC provides guidance to team leaders regarding the feasibility of design, power to detect effects, sample size, etc. It is also involved in addressing the novel methodological challenges and opportunities of the PSA. Balazs Aczel (Eötvös Loránd University), Burak Aydin (RTE University), Jessica Flake (McGill University), Patrick Forscher (University of Arkansas), Nick Fox (Rutgers University), Mason Garrison (Vanderbilt University), Kai Horstmann (Humboldt-Universitat zu Berlin), Peder Isager (Eindhoven University of Technology), Zoltan Kekecs (Lund University), Hause Lin (University of Toronto), Anna Szabelska (Queen’s University Belfast)
Authorship Criteria Committee: The ACC assists proposing authors in determining authorship requirements for data collection labs. Denis Cousineau (University of Ottawa), Steve Janssen (University of Nottingham-Malaysia Campus), William Jiménez-Leal (Universidad de los Andes)
Project Management Committee: The PMC provides guidance to team leaders regarding the management of crowd-sourced projects. Charles Ebersole (University of Virginia), Jon Grahe (Pacific Lutheran University), Hannah Moshontz (Duke University), John Protzko (University of California-Santa Barbara)
Translation and Cultural Diversity Committee: The TCDC advises the project leaders and committees with regard to standards and best practice of translation procedures and possible challenges in cross-cultural research. It also proposes actions to support cultural diversification of research and participation of otherwise underrepresented cultures and ethnic groups. Sau-Chin Chen (Tzu-Chi University), Diego Forero (Universidad Antonio Nariño), Chuan-Peng Hu (Johannes Gutenberg University Medical center), Hans IJzerman (Université Grenoble Alpes), Darko Lončarić (University of Rijeka), Oscar Oviedo-Trespalacios (Queensland University of Technology), Asil Özdoğru (Üsküdar University), Miguel Silan (University of the Philippines Diliman), Stefan Stieger (Karl Landsteiner University of Health Sciences), Janis Zickfeld (University of Oslo)
Publication and Dissemination Committee: The PDC oversees the publication and dissemination of PSA-supported research products. Chris Chambers (Registered Reports, Cardiff University), Melissa Kline (Pre-prints, MIT), Etienne LeBel (Curate Science), David Mellor (Pre-registration & open-access, Center for Open Science)

Contributor Information

Hannah Moshontz, Duke University.

Lorne Campbell, University of Western Ontario.

Charles R. Ebersole, University of Virginia

Hans IJzerman, Université Grenoble Alpes.

Heather L. Urry, Tufts University

Patrick S. Forscher, University of Arkansas

Jon E Grahe, Pacific Lutheran University.

Randy J. McCarthy, Northern Illinois University

Erica D. Musser, Florida International University

Jan Antfolk, Åbo Akademi University.

Christopher M. Castille, Nicholls State University

Thomas Rhys Evans, Coventry University.

Susann Fiedler, Max Planck Institute for Research on Collective Goods.

Jessica Kay Flake, McGill University.

Diego A. Forero, Universidad Antonio Nariño

Steve M. J. Janssen, University of Nottingham - Malaysia Campus

Justin Robert Keene, Texas Tech University.

John Protzko, University of California, Santa Barbara.

Balazs Aczel, ELTE, Eotvos Lorand University.

Sara Álvarez Solas, Universidad Regional Amazónica Ikiam.

Daniel Ansari, The University of Western Ontario.

Dana Awlia, Ashland University.

Ernest Baskin, Haub School of Business, Saint Joseph’s University.

Carlota Batres, Franklin and Marshall College.

Martha Lucia Borras-Guevara, University of St Andrews.

Cameron Brick, University of Cambridge.

Priyanka Chandel, Pt Ravishankar Shukla University.

Armand Chatard, Université de Poitiers et CNRS.

William J. Chopik, Michigan State University

David Clarance, Busara Center for Behavioral Economics.

Nicholas A. Coles, University of Tennessee

Katherine S. Corker, Grand Valley State University

Barnaby James Wyld Dixson, The University of Queensland.

Vilius Dranseika, Vilnius University.

Yarrow Dunham, Yale University.

Nicholas W. Fox, Rutgers University

Gwendolyn Gardiner, University of California, Riverside.

S. Mason Garrison, Vanderbilt University.

Tripat Gill, Wilfrid Laurier University.

Amanda C Hahn, Humboldt State University.

Bastian Jaeger, Tilburg University.

Pavol Kačmár, University of Pavol Jozef Šafárik in Košice.

Gwenaël Kaminski, Université de Toulouse.

Philipp Kanske, Technische Universität Dresden.

Zoltan Kekecs, Lund University.

Melissa Kline, MIT.

Monica A Koehn, Western Sydney University.

Pratibha Kujur, Pt. Ravishankar Shukla University.

Carmel A. Levitan, Occidental College

Jeremy K. Miller, Willamette University

Ceylan Okan, Western Sydney University.

Jerome Olsen, University of Vienna.

Oscar Oviedo-Trespalacios, Queensland University of Technology.

Asil Ali Özdoğru, Üsküdar University.

Babita Pande, Pt. Ravishankar Shukla University.

Arti Parganiha, Pt. Ravishankar Shukla University.

Noorshama Parveen, Pt. Ravishankar Shukla University.

Gerit Pfuhl, UiT The Arctic University of Norway.

Sraddha Pradhan, Pt. Ravishankar Shukla University.

Ivan Ropovik, University of Presov.

Nicholas O. Rule, University of Toronto

Blair Saunders, University of Dundee.

Vidar Schei, NHH Norwegian School of Economics.

Kathleen Schmidt, Southern Illinois University Carbondale.

Margaret Messiah Singh, Pandit Ravishankar Shukla University.

Miroslav Sirota, University of Essex.

Crystal N. Steltenpohl, University of Southern Indiana

Stefan Stieger, Karl Landsteiner University of Health Sciences.

Daniel Storage, University of Illinois.

Dr. Gavin Brent Sullivan, Coventry University

Anna Szabelska, Queen’s University Belfast.

Christian K. Tamnes, University of Oslo

Miguel A. Vadillo, Universidad Autónoma de Madrid

Jaroslava V. Valentova, University of Sao Paulo

Wolf Vanpaemel, University of Leuven.

Marco A. C. Varella, University of Sao Paulo

Evie Vergauwe, University of Geneva.

Mark Verschoor, University of Groningen.

Michelangelo Vianello, University of Padova.

Martin Voracek, University of Vienna, Austria.

Glenn P. Williams, Abertay University

John Paul Wilson, Montclair State University.

Janis H. Zickfeld, University of Oslo

Jack D. Arnal, McDaniel College

Burak Aydin, RTE University.

Sau-Chin Chen, Tzu-Chi University.

Lisa M. DeBruine, University of Glasgow

Ana Maria Fernandez, Universidad de Santiago.

Kai T. Horstmann, Humboldt-Universität zu Berlin

Peder M. Isager, Eindhoven University of Technology

Benedict Jones, University of Glasgow.

Aycan Kapucu, Ege University.

Hause Lin, University of Toronto.

Michael C. Mensink, University of Wisconsin-Stout

Gorka Navarrete, Universidad Adolfo Ibáñez.

Miguel A. Silan, University of the Philippines Diliman

Christopher R. Chartier, Ashland University
