PLOS ONE. 2021 Mar 5;16(3):e0247986. doi: 10.1371/journal.pone.0247986

A decade of theory as reflected in Psychological Science (2009–2019)

Jonathon McPhetres*, Nihan Albayrak-Aydemir, Ana Barbosa Mendes, Elvina C. Chow, Patricio Gonzalez-Marquez, Erin Loukras, Annika Maus, Aoife O’Mahony, Christina Pomareda, Maximilian A. Primbs, Shalaine L. Sackman, Conor J. R. Smithson, Kirill Volodko
Editor: T. Alexander Dececchi

PMCID: PMC7935264 PMID: 33667242

Abstract

The dominant belief is that science progresses by testing theories and moving towards theoretical consensus. While it’s implicitly assumed that psychology operates in this manner, critical discussions claim that the field suffers from a lack of cumulative theory. To examine this paradox, we analysed research published in Psychological Science from 2009–2019 (N = 2,225). We found mentions of 359 theories in-text, most of which were referred to only once. Only 53.66% of all manuscripts included the word theory, and only 15.33% explicitly claimed to test predictions derived from theories. We interpret this to suggest that the majority of research published in this flagship journal is not driven by theory, nor can it be contributing to cumulative theory building. These data provide insight into the kinds of research psychologists are conducting and raise questions about the role of theory in the psychological sciences.


The problem is almost anything passes for theory. -Gigerenzer, 1998, p. 196 (1).

Introduction

Many have noted that psychology lacks the cumulative theory that characterizes other scientific fields [1–4]. So pressing has this deficit become in recent years that many scholars have called for a greater focus on theory development in the psychological sciences [5–11].

At the same time, it has been argued that there are perhaps too many theories to choose from [3, 12–14]. One factor contributing to this dilemma is that theories are often vague and poorly specified [2, 15], so a given theory is unable to adequately explain a range of phenomena without relying on rhetoric. Thus, psychology uses experimentation to tell a narrative rather than to test theoretical predictions [16, 17]. From this perspective, psychology needs more exploratory and descriptive research before moving on to theory building and testing [18–20].

Despite these competing viewpoints, it is often claimed that psychological science follows a hypothetico-deductive model like most other scientific disciplines [21]. In this tradition, experiments exist to test predictions derived from theories. Specifically, researchers should be conducting strong tests of theories [22–24] because strong tests of theory are the reason some fields move forward faster than others [2, 4, 25]. That is, the goal scientists should be working towards is theoretical consensus [1, 2, 26–28]. At a glance, it would appear that most psychological research proceeds in this fashion, because papers often use theoretical terms in introduction sections, or name theories in the discussion section. However, no research has been undertaken to examine this assumption and what role theory actually plays in psychological research.

So, which is it? If there is a lack of theory, then most articles should be testing a-theoretical predictions or conducting descriptive and exploratory research. If there is too much theory, then almost every published manuscript should exist to test theoretically derived predictions.

To examine the role of theory in psychological research, we analysed articles published from 2009–2019 in the journal Psychological Science. We use these data to answer some specific questions. First, we are interested in distinguishing between specific and casual uses of theory. So, we analyse how often theory-related words are used overall and how often a specific theory is named and/or tested. Additionally, given that preregistration can help prevent HARKing [29], we examine whether articles that name and/or test a theory are more likely to be preregistered. Next, it’s possible that some subsets of psychological research might be more or less reliant on theory. To examine this, we investigate whether studies that name and/or test a theory are more likely to generate a specific kind of data. Finally, to provide greater context for these analyses, we examined how many theories were mentioned over this time period and how many times each was mentioned.

Disclosures

All analyses conducted are reported and deviations are disclosed at the end of this section. Our sample size was pre-determined and was based on the entire corpus of published articles. Finally, because this research does not involve human subjects, ethics approval was not sought.

Materials and methods

We accessed all the articles published in Psychological Science from 2009–2019. We chose this journal because it is the flagship journal of the Association for Psychological Science and one of the top journals in the field that publishes a broad range of research from all areas of the discipline. Additionally, this journal explicitly states that theoretical significance is a requirement for publication [30, 31].

As preregistered (https://osf.io/d6bcq/?view_only=af0461976df7454fbcf7ac7ff1500764), we excluded comments, letters, errata, editorials, and other articles that did not test original data, because they could not be coded or because, in some cases, they were simply replications or re-analyses of previously published articles. This resulted in 2,225 articles being included in the present analysis.

Definition

Many useful definitions and operationalisations of a scientific theory have been put forward [4, 32–34] and we drew on these for the present work. The definition of a scientific theory for the purposes of this research is as follows:

A theory is a framework for understanding some aspect of the natural world. A theory often has a name—usually this includes the word theory, but may sometimes use another label (e.g., model, hypothesis). A theory can be specific or broad, but it should be able to make predictions or generally guide the interpretation of phenomena, and it must be distinguished from a single effect. Finally, a theory is not an untested prediction, a standard hypothesis, or a conjecture.

We used this definition in order to distinguish specific uses of the word from colloquial and general ones, not to evaluate the strength, viability, or suitability of a theory.

Text mining

Article PDFs were first mined for the frequency of the words theory, theories, and theoretical using the tm [35] and quanteda [36] packages in R. Word frequencies were summed and percentages were calculated for each year and for the entire corpus. We did not search or code for the terms model or hypothesis because these are necessarily more general and have multiple different meanings, none of which overlap with theory (but see the Additional Considerations section for more on this).
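To make this step concrete, a minimal sketch of such a pipeline is given below. This is not the authors’ actual script: it assumes the article PDFs sit in a local folder pdfs/ and uses the pdftools package for text extraction alongside quanteda.

```r
# Minimal sketch of the mining step described above (assumptions: PDFs in
# "pdfs/", pdftools available for extraction; not the authors' own script).
library(pdftools)
library(quanteda)

files <- list.files("pdfs", pattern = "\\.pdf$", full.names = TRUE)
texts <- vapply(files, function(f) paste(pdf_text(f), collapse = " "),
                character(1))

# Build a document-feature matrix and keep only the three target words
toks <- tokens(corpus(texts), remove_punct = TRUE)
dfm_theory <- dfm_select(dfm(toks), pattern = c("theory", "theories",
                                                "theoretical"))

# Percentage of articles containing each word at least once
colMeans(as.matrix(dfm_theory) > 0) * 100
```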

Coding

After identifying the articles that used the words theory and theories, 10 trained coders further examined those articles. Instances of the word theoretical were not examined further because it is necessarily used generally (and because it was used less than, but often alongside, theory and theories).

Each article was initially scored independently by two individual coders who were blind to the purpose of the study; Fleiss’ Kappa is reported for this initial coding. Recommendations suggest that a kappa between .21–.40 indicates fair agreement, .41–.60 indicates moderate agreement, .61–.80 indicates substantial agreement, and .81–1.0 is almost perfect agreement [37].

After the initial round of coding, two additional blind coders and the first author each independently reviewed a unique subset of disagreements to resolve ties. This means that the ratings we analyse in the following section are only those for which two independent coders (or two out of three coders) agreed 100%.
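As a concrete illustration, the sketch below computes the initial kappa and applies the two-out-of-three resolution rule. The data frame ratings and its columns coder1, coder2, and coder3 are hypothetical names, and the irr package is assumed for Fleiss’ kappa.

```r
# Sketch of the two-stage agreement workflow (hypothetical data layout:
# one row per article; coder1/coder2 hold the initial codes, coder3 the
# tie-breaker codes collected for disagreements).
library(irr)

# The kappa reported in the text is computed on the initial two coders only
kappam.fleiss(ratings[, c("coder1", "coder2")])

# Final code: the agreed value, or the value endorsed by two of three
# coders; NA if no two coders agree (possible with >2 categories)
final <- ifelse(ratings$coder1 == ratings$coder2, ratings$coder1,
         ifelse(ratings$coder3 == ratings$coder1, ratings$coder1,
         ifelse(ratings$coder3 == ratings$coder2, ratings$coder2, NA)))
```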

For each article, the following categories were coded:

Was a specific theory referred to by name?

For each article, the coder conducted a word-search for the string “theor” and examined the context of each instance of the string. We recorded whether each paper, at any point, referred to a specific theory or model by name. Instances of words in the reference section were not counted nor coded further. General references to theory (e.g., psychological theory) or to classes or groups of theories (e.g. relationship theories) were not counted because these do not allow for specific interpretations or predictions. Similarly, instances where a theory, a class of theories, or an effect common across multiple studies was cited in-text along with multiple references but not named explicitly—for example, “cognitive theory (e.g. Author A, 1979; Author B, 1996; Author C & Author D, 2004) predicts”—were also not counted because these examples refer to the author’s own interpretation of or assumptions about a theory rather than a specific prediction outlined by a set of theoretical constraints. Initial coder agreement was 78% (and significantly greater than chance, Fleiss’ kappa = .45, p < .001).
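The word-search itself can be approximated with quanteda’s keyword-in-context tool, as in the sketch below (toks is the tokens object from the mining sketch above); screening out reference-section hits and judging whether a theory was named specifically still required human coders.

```r
# Pull every occurrence of "theor*" with ten words of context on each
# side, for manual inspection by the coders
library(quanteda)
theory_contexts <- kwic(toks, pattern = "theor*", window = 10)
head(theory_contexts)
```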

Did the article claim to test a prediction derived from a specific theory?

For each article, the coder examined the abstract, the section prior to introducing the first study, the results, and the beginning of the general discussion. We recorded whether the paper, at any point, explicitly claimed to test a prediction derived from a specific theory or model. As above, this needed to be made explicit by the authors, to avoid categorising general predictions, auxiliary assumptions, indirect and verbal interpretations of multiple theories or models, or hypotheses derived from personal expectations as being theoretically derived. Initial coder agreement was 74% (and significantly greater than chance, Fleiss’ kappa = .24, p < .001).

What was the primary type of data generated by the study?

For each article, the coder examined the abstract, the section prior to introducing the first study, the results, and the beginning of the general discussion. The primary type of data used in the study was coded as either self-report/survey, physiological/biological, observational/behavioural (including reaction times), or other. In the case of multiple types of data across multi-study papers, we considered the abstract, the research question, the hypothesis, and the results in order to determine the type of data most relevant to the question. Initial coder agreement was 64% (and significantly greater than chance, Fleiss’ kappa = .42, p < .001).

Did the article include a preregistered study?

Preregistration is useful for restricting HARKing [29]. It is also useful for testing pre-specified and directional predictions, and hypotheses derived from competing theories. As such, we reasoned that preregistered studies may be more likely to test theoretical predictions.

We coded whether the article included a preregistered study. This was identified by looking for a badge as well as conducting a word search for the strings “prereg” and “pre-reg”. Initial coder agreement was 99% (and significantly greater than chance, Fleiss’ kappa = .97, p < .001).
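A sketch of the string search is below; texts is the vector of full article texts from the mining sketch above, and the badge check (done by eye) is not reproduced.

```r
# Flag articles containing either preregistration string, case-insensitively
has_prereg <- grepl("prereg|pre-reg", texts, ignore.case = TRUE)
mean(has_prereg) * 100  # percentage of articles flagged as preregistered
```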

Theory counting

The number of theories named directly in the text was recorded and summed by year to provide an overview of how frequently each theory was invoked. The goal was simply to create a comprehensive list of the names and number of theories that were referred to in the text at any point. To be as inclusive as possible, slightly different classification criteria were used (see S1 File).
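The tallying step might look like the sketch below, assuming a hypothetical long-format data frame mentions with one row per named-theory mention and columns theory and year.

```r
# Total mentions per theory, sorted descending (the basis for S7 Table)
totals <- aggregate(year ~ theory, data = mentions, FUN = length)
names(totals)[2] <- "n_mentions"
totals <- totals[order(-totals$n_mentions), ]

# Per-year breakdown of mentions, as displayed in Table 1
with(mentions, table(theory, year))
```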

Transparency statement

Our original preregistered analysis plan did not include plans for counting the total number of theories mentioned in text, nor for examining the frequency of the words model and hypothesis. Additionally, coding the instances of the word hypothesis was not preregistered, but was added after a round of reviews. Finally, for simplicity, we have focused on percentages out of the total articles coded (rather than presenting separate percentages for frequencies of theory and theories); complete counts and percentages are presented in the S1 File.

Results

Question 1: How often are theory-related words used?

To begin, the complete corpus of articles was analysed (N = 2,225). Between the years 2009 and 2019, the word theory was used in 53.66% of articles, the word theories was used in 29.80% of articles, and the word theoretical was used in 32.76% of articles (note that these categories are non-exclusive). Total percentages and raw counts by year are presented in the S1 and S2 Tables in S1 File.

Question 2: How often was a theory named and/or tested?

The 1,605 articles including the word theory or theories were further coded to examine the context of the word. Of these articles, only 33.58% named a specific theory—that is, 66.42% used the word superfluously. Further, only 15.33% of the 1,605 articles explicitly claimed to test a prediction derived from a theory.

To put this differently, only 24.22% of all the articles published over the 11-year period (N = 2,225) actually named a specific theory in the manuscript; the rest used general phrases (e.g., “psychological theory” or “many theories…”) instead of naming and citing a specific theory. This means that the remainder of those papers either 1) did not derive predictions, operationalisations, analytic strategies, and interpretations of their data from theory, or 2) did not credit previous theory for this information.

The words theories and theoretical showed similar patterns, but they were used less often than the word theory; for simplicity, we present a detailed summary of these counts by year in the S2 Table in S1 File. The pattern of these effects by year is depicted in Fig 1, below.

Fig 1. Percentage of total Psychological Science articles from 2009–2019 that use the word theory, name a specific theory, and include a preregistered study.


The percentage of articles that included the words theory/theories, mentioned a theory by name, and were preregistered was calculated out of the total number of articles published from 2009–2019 in Psychological Science excluding comments, editorials, and errata (N = 2,225); note that for simplicity this figure counts all articles that received a preregistered badge (even if they were not coded in the present study).

Question 3: Are articles that name a specific theory more likely to be preregistered?

Because there were no preregistered articles prior to 2014, we considered only articles published from 2014 onwards (N = 737) for this part of the analysis. Articles that named a specific theory were no more or less likely to be preregistered. Specifically, 11.11% of articles that explicitly named a specific theory were preregistered. In contrast, 11.31% of articles that did not name a theory were preregistered.

However, articles that claimed to test a specific theory were only slightly more likely to be preregistered: of the articles that were preregistered, 15.66% stated that they tested a specific theory, compared with 12.84% of the articles that were not preregistered. See S3 and S4 Tables in S1 File for full counts by year.

Question 4: Are studies that name and/or test theories more likely to generate a specific kind of data?

Of the 1,605 articles coded over the 11-year period, the majority (55.26%) relied on self-report and survey data. Following this, 28.35% used observational data (including reaction times), 11.03% used biological or physiological data, and the remaining 5.30% used other types of data or methodologies (for example, they used computational modelling or presented a new method) to answer their primary research question.

However, it does not appear that studies using different types of data are any more or less prone to invoking theory. Of the studies that used self-report and survey data, 26.16% named a specific theory. Of the studies that used biological and physiological data, 19.77% named a specific theory. Of the studies that used observational or behavioural data, 22.20% named a specific theory. Of the studies that used other types of data, 25.88% named a specific theory. See S5 and S6 Tables in S1 File for complete counts.

Further, it does not appear that any specific type of data is more conducive to testing theoretically derived predictions. Only 17.36% of studies using self-report data, 11.86% of studies using biological/physiological data, 11.87% of studies using observational data, and 20% of studies using other types of data explicitly claimed to be testing theoretically derived predictions.

Question 5: How many theories were mentioned in this 11-year period?

We also counted the number of theories that were mentioned or referred to explicitly in each of the 2,225 manuscripts. As described in the S1 File, slightly different criteria were used for this task so as to be as inclusive as possible. A total of 359 theories were mentioned in text over the 11-year period. Most theories were mentioned in only a single paper (mode = 1, median = 1, mean = 1.99). The full list of theories is presented in S7 Table in S1 File. For ease of reference, the top 10 most-mentioned theories are displayed below in Table 1.

Table 1. The top 10 most mentioned theories sorted according to the total number of mentions.

Name 2009 2010 2011 2012 2013 2014 2015 2016 2017 2018 2019 Total
Signal Detection Theory 2 2 2 3 2 6 1 2 3 2 2 27
Prospect Theory (Also Cumulative Prospect) 3 2 2 4 2 2 2 1 3 21
Attachment Theory 1 5 2 2 3 2 2 17
Life History Theory 2 3 2 1 4 2 1 15
Construal-Level Theory (Psychological Distance) 2 4 2 2 3 1 14
Social-Identity Theory 2 2 2 2 1 1 3 13
System Justification Theory 1 2 1 1 3 1 2 1 12
Game Theory 1 1 5 1 2 1 11
Item Response Theory 2 1 4 1 1 1 1 11
Self-Affirmation Theory 2 2 2 1 1 1 1 10
Terror Management Theory 1 2 1 1 3 1 1 10

Exploratory analysis: Did authors use the word hypothesis in place of theory?

One concern may be that authors are misusing the word hypothesis to refer to formal, higher-level theories—that is, writing hypothesis where theory would be the correct term. To examine this possibility, we mined all 2,225 documents for the word hypothesis and examined the immediate context surrounding each instance.

If the authors were referring to a formally named, superordinate hypothesis derived from elsewhere (i.e., if it satisfied the criteria for a theory), it was coded as 1. It was coded as 0 if the authors were using hypothesis in its conventional sense: if they were referring to their own hypothesis or expectations (e.g., our hypothesis, this hypothesis, etc.), if they were describing a statistical analysis (e.g., null hypothesis), or if they were describing an effect or pattern of results (e.g., the hypothesis that…). Instances in the references were not counted. Two independent coders rated each instance of the word. Initial coder agreement was 89.5% and significantly greater than chance (Fleiss’ kappa = .61, p < .001). As before, after initial coder agreement was analysed, a third coder resolved any disagreements and the final ratings (consisting of scores for which at least two coders agreed) were analysed.

Of the 2,225 articles published over the 11 years, 62% used the word hypothesis (n = 1,386). Of those, 14.5% (n = 202) used hypothesis to refer to a larger, formal, or externally derived theory. Put differently, this constitutes 9% of the total corpus (N = 2,225). Complete counts according to year are displayed in S8 Table in S1 File. Thus, it appears that this misuse of the word is not very common. However, even if we were to add this total count to our previous analysis of theory, it would not change our overall interpretation: the majority of papers published in Psychological Science are not discussing nor relying on theories in their research.

Discussion

The Psychological Science website states that “The main criteria for publication in Psychological Science are general theoretical and empirical significance and methodological/statistical rigor” [30, 31]. Yet, only 53.66% of articles published used the word theory, and even fewer named or claimed to test a specific theory. How can research have general theoretical significance if the word theory is not even present in the article?

A more pressing question, perhaps, is how can a field be contributing towards cumulative theoretical knowledge if the research is so fractionated? We identified 359 psychological theories that were referred to in-text (see S7 Table in S1 File for the complete list) and most of these were referred to only a single time. A recent review referred to this as theorrhea (a mania for new theory), and described it as a symptom stifling “the production of new research” [38]. Indeed, it’s hard to imagine that a cumulative science is one where each theory is examined so infrequently. One cannot help but wonder how the field can ever move towards theoretical consensus if everyone is studying something different—or, worse, studying the same thing with a different name.

These data provide insight into how psychologists are using psychological theories in their research. Many papers made no reference to a theory at all and most did not explicitly derive their predictions from a theory. It’s impossible to know why a given manuscript was written in a certain way, but we offer some possibilities to help understand why some authors neglected to even include the word theory in their report. One possibility is that the research is truly a-theoretical or descriptive. There is clear value in descriptive research—value that can ultimately amount to theoretical advancement [17, 18, 20]—and it would be misguided to avoid interesting questions because they did not originate from theory.

It’s also possible that researchers are testing auxiliary assumptions [39] or their own interpretations (instead of the literal interpretations or predictions) of theories [40]. This strategy is quite common: authors describe certain effects or qualities of previous literature (e.g., the literature review) in their introduction to narrate how they developed a certain hypothesis or idea, then they state their own hypothesis. Such a strategy is fine, but certainly does not amount to a quantifiable prediction derived from a pre-specified theory. Further, given that psychological theories are almost always verbal [2, 15], there may not even be literal interpretations or predictions to test.

An additional possibility is that researchers may be focusing on “effects” and paradigms rather than theories per se. Psychology is organized topically—development, cognition, social behaviour, personality—and these topics are essentially collections of effects (e.g., motivated reasoning, the Stroop effect, etc). Accordingly, researchers tend to study specific effects and examine whether they hold under different conditions. Additionally, a given study may be conducted because it is the logical follow-up from a previous study they conducted, not because the researchers are interested in examining whether a theory is true or not.

However, it’s also important to consider the qualities of the research that did use the word theory and why. Recall that only 33.58% of articles using the word theory or theories said anything substantial about a theory. For the remaining articles, it’s possible that these words and phrases were injected post-hoc to make the paper seem theoretically significant, because it is standard practice, or because it is a journal requirement. That is, this may be indicative of a specific type of HARKing: searching the literature for relevant hypotheses or predictions after data analysis, known as RHARKing [29]. For example, some researchers may have conducted a study for other reasons (e.g., personal interest), but then searched for a relevant theory to connect the results to after the fact. It’s important to note that HARKing can be prevented by preregistration, but preregistration was only used in 11.11% of the papers that claimed to test a theory. Of course, it’s impossible to know an author’s motivation in the absence of a preregistration, but the possibility remains quite likely given that between 27% and 58% of scientists admit to HARKing [29].

Finally, these data provide insight into the kind of research psychologists are conducting. The majority (55.26%) is conducted using self-report and survey data. Much less research is conducted using observational (28.35%) and biological or physiological (11.03%) data. While not as bleak as a previous report claiming that behavioural data is completely absent in the psychological sciences [41], this points to a limitation in the kinds of questions that can be answered. Of course, self-report data may be perfectly reasonable for some questions, but such questions are necessarily restricted to a narrower slice of human behaviour and cognition. Further, a high degree of reliance on a single method certainly contrasts with the large number of theories being referenced. It is worth considering how much explanatory power each of these theories has if most of them are discussed exclusively in the context of self-report and survey data.

Limitations and additional considerations

The present results describe only one journal: Psychological Science. However, we chose this journal because it is one of the top journals in the field, because it publishes research from all areas of psychology, and because it has explicit criteria for theoretical relevance. Thus, we expected that research published in this journal would be representative of some of the theoretically relevant research being conducted. So, we do not claim that the results described here statistically generalize to other journals, only that they describe the pattern of research in one of the top journals in psychology. One specific concern is that Psychological Science limits articles to 2,000 words, and this may have restricted the ability to describe and reference theories. This may be true, though it would seem that the body of knowledge a piece of research is contributing towards would be one of the most important pieces of information to include in a report. That is, if the goal of that research were to contribute to cumulative knowledge, it does not require many words to refer to a body of theory by name.

An additional concern may be that, in some areas of psychology, “theories” may be referred to with a different name (e.g., model or hypothesis). However, the terms model and hypothesis do not carry the formal weight that scientific theory does. In the hierarchy of science, theories are regarded as the highest status a claim can achieve—that most articles use the word casually and conflate it with other meanings is problematic for clear scientific communication. In contrast, model or hypothesis could be used to refer to several different things: if something is called a model, then it’s not claiming to be a theory. Our additional analysis identified only a small minority of papers that used hypothesis in this fashion (9% of the total corpus). While this number is relatively small, it does highlight an additional issue: the lack of consistency with which theories are referred to and discussed. It is difficult and confusing to consistently add to a body of knowledge if different names and terms are used.

Another claim might be that theory should simply be implicit in any text; that it should permeate through one’s writing without many direct references to it. If we were to proceed in this fashion, how could one possibly contribute to cumulative theory? If theory need not be named, identified, or referred to specifically, how is a researcher to judge what body of research they are contributing to? How are they to interpret their findings? How is one even able to design an experiment to answer their research question without a theory? The argument has been made that researchers need theory to guide methods [5, 6, 9]—this is not possible without, at least, clearly naming and referencing theories.

A final limitation to note is one regarding the consistency of the coders. While the fair to moderate kappas obtained here may seem concerning at first, we believe this reflects the looseness and vagueness with which words like theory are used. Authors are often ambiguous and pad their introductions and discussions with references to models and other research; it is often not explicit whether a model is simply being mentioned or whether it is actually guiding the research. Further complicating things is that references to theories are often inconsistent. Thus, it can be a particularly difficult task to determine whether an author actually derived their predictions from a specific theory or whether they are simply discussing it because they later noted the similarities. Such difficulties could have contributed to the lower initial agreement among coders. Therefore, along with noting that the kappas are lower than would be ideal, we also suggest that future researchers be conscious of their writing: it’s very easy to be extremely explicit about where one’s predictions were derived from and why a test is being conducted. We believe this to be a necessary component of any research report.

Concluding remarks

Our interpretation of these data is that the published research we reviewed is simultaneously saturated and fractionated, and theory is not guiding the majority of research published in Psychological Science despite this being among the main criteria for acceptance. While many articles included the words theory and theories, these words are most often used casually and non-specifically. In a large subset of the remaining cases, the theoretical backbone is no more than a thin veneer of relevant rhetoric and citations.

These results highlight many questions for the field moving forward. For example, it’s often noted that psychology has made little progress towards developing clearly specified, cumulative theories [2, 15] but what should that progress look like? What is the role of theory in psychological science? Additionally, while it is widely assumed that psychological research follows the hypothetico-deductive model, these data suggest this is not necessarily the case. There are many other ways to do research and not all of them involve theory testing. If the majority of research in a top journal is not explicitly testing predictions derived from theory, then perhaps it exists to explore and describe interesting effects. There is certainly nothing wrong with a descriptive approach, and this aim of psychology has been suggested for at least half a century [20, 42, 43].

To be clear, we are not suggesting that every article should include the word theory, nor that it should be a requirement for review. We are not even suggesting that research needs to be based in theory. Instead, we are simply pointing out the pattern of research that exists in one of the leading research journals with the hope that this inspires critical discussion around the process, aims, and motivation of psychological research. There are many ways to do research. If scientists want to work towards developing nomothetic explanations of human nature then, yes, theory can help. If scientists simply want to describe or explore something interesting, that’s fine too.

Supporting information

S1 File

(DOCX)

Data Availability

All relevant data are available from the Open Science Framework (OSF) database (osf.io/hgn3a). The OSF preregistration is also available (osf.io/d6bcq/).

Funding Statement

The author(s) received no specific funding for this work.

References

1. Kuhn T., The structure of scientific revolutions, 3rd ed. (The University of Chicago, 1996).
2. Meehl P. E., Theoretical risks and tabular asterisks: Sir Karl, Sir Ronald, and the slow progress of soft psychology. J. Consult. Clin. Psychol. 46, 806–834 (1978).
3. Mischel W., The toothbrush problem. APS Obs. 21 (2008).
4. Muthukrishna M., Henrich J., A problem in theory. Nat. Hum. Behav. 3, 221–229 (2019). doi: 10.1038/s41562-018-0522-1
5. Fiedler K., What Constitutes Strong Psychological Science? The (Neglected) Role of Diagnosticity and A Priori Theorizing. Perspect. Psychol. Sci. 12, 46–61 (2017).
6. Fiedler K., The Creative Cycle and the Growth of Psychological Science. Perspect. Psychol. Sci. 13, 433–438 (2018).
7. Gervais W. M., Practical Methodological Reform Needs Good Theory. PsyArXiv Preprint (2020).
8. Oberauer K., Lewandowsky S., Addressing the theory crisis in psychology. Psychon. Bull. Rev. 26, 1596–1618 (2019). doi: 10.3758/s13423-019-01645-2
9. Smaldino P., Better methods can’t make up for mediocre theory. Nature 575, 9 (2019). doi: 10.1038/d41586-019-03350-5
10. Szollosi A., Donkin C., Arrested theory development: The misguided distinction between exploratory and confirmatory research. PsyArXiv Preprint (2019).
11. van Rooij I., Psychological science needs theory development before preregistration. Psychon. Soc. (2019).
12. Gigerenzer G., Personal Reflections on Theory and Psychology. Theory Psychol. 20, 733–743 (2010).
13. Kruglanski A. W., That “vision thing”: The state of theory in social and personality psychology at the edge of the new millennium. J. Pers. Soc. Psychol. 80, 871–875 (2001).
14. Tryon W. W., A connectionist network approach to psychological science: Core and corollary principles. Rev. Gen. Psychol. 16, 305–317 (2012).
15. Yarkoni T., The generalizability crisis. PsyArXiv, 1–26 (2019).
16. Finkelman D., Science and Psychology. Am. J. Psychol. 91, 179–199 (1978).
17. Lin H., Werner K. M., Inzlicht M., Promises and Perils of Experimentation: Big-I Triangulation Offers Solutions. PsyArXiv Preprint (2020). doi: 10.31234/osf.io/hwubj
18. Gray K., How to Map Theory: Reliable Methods Are Fruitless Without Rigorous Theory. Perspect. Psychol. Sci. 12, 731–741 (2017). doi: 10.1177/1745691617691949
19. Greenwald A. G., Pratkanis A. R., Leippe M. R., Baumgardner M. H., Under What Conditions Does Theory Obstruct Research Progress? Psychol. Rev. 93, 216–229 (1986).
20. Rozin P., Social psychology and science: Some lessons from Solomon Asch. Personal. Soc. Psychol. Rev. 5, 2–14 (2001).
21. Laudan L., Science and Hypothesis: Historical essays on scientific methodology (D. Reidel Publishing Company, 1980).
22. Bacon F., The new organon and related writings (Liberal Arts Press, 1960).
23. Mayo D. G., Novel evidence and severe tests. Philos. Sci. 58, 523–552 (1991).
24. Popper K., Objective knowledge: An evolutionary approach (Oxford University Press, 1979).
25. Platt J. R., Strong Inference. Science 146, 347–353 (1964). doi: 10.1126/science.146.3642.347
26. Cole S., The Hierarchy of the Sciences? Am. J. Sociol. 89, 111–139 (1983).
27. van Lange P. A. M., What We Should Expect From Theories in Social Psychology: Truth, Abstraction, Progress, and Applicability As Standards (TAPAS). Personal. Soc. Psychol. Rev. 17, 40–55 (2013).
28. Lykken D. T., “What’s wrong with psychology anyway?” in Thinking Clearly about Psychology, Cicchetti D., Grove W., Eds. (University of Minnesota, 1991), pp. 3–39.
29. Rubin M., When Does HARKing Hurt? Identifying When Different Types of Undisclosed Post Hoc Hypothesizing Harm Scientific Progress. Rev. Gen. Psychol. 21, 308–320 (2017).
30. APS, 2019 Submission Guidelines (2018) (accessed May 5, 2020).
31. APS, 2020 Submission Guidelines (2020) (accessed May 5, 2020).
32. Duhem P., The aim and structure of physical theory (Atheneum, 1962).
33. National Academies of Science, Science, evolution, and creationism (National Academies Press, 2008).
34. Planck M., Johnston W. H., The philosophy of physics (W.W. Norton & Co., 1936).
35. Feinerer I., Hornik K., tm: Text mining package (2019).
36. Benoit K., et al., quanteda: An R package for the quantitative analysis of textual data. J. Open Source Softw. 3, 774 (2018).
37. Landis J. R., Koch G. G., The Measurement of Observer Agreement for Categorical Data. Biometrics 33, 159 (1977).
38. Antonakis J., On doing better science: From thrill of discovery to policy implications. Leadersh. Q. 28, 5–21 (2017).
39. Lakatos I., Falsification and the Methodology of Scientific Research Programmes (1978).
40. McGuire W. J., An Additional Future for Psychological Science. Perspect. Psychol. Sci. 8, 414–423 (2013).
41. Baumeister R. F., Vohs K. D., Funder D. C., Psychology as the Science of Self-Reports and Finger Movements: Whatever Happened to Actual Behavior? Perspect. Psychol. Sci. 2, 396–403 (2007). doi: 10.1111/j.1745-6916.2007.00051.x
42. Gergen K. J., Social psychology as history. J. Pers. Soc. Psychol. 26, 309–320 (1973).
43. Scheel A. M., et al., Why hypothesis testers should spend less time testing hypotheses. Perspect. Psychol. Sci. (2020). doi: 10.1177/1745691620966795

Decision Letter 0

T Alexander Dececchi

21 Dec 2020

PONE-D-20-28543

A decade of theory as reflected in Psychological Science (2009-2019)

PLOS ONE

Dear Dr. McPhetres,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

==============================

I would like to thank you for this submission. I speak for myself and hopefully both reviewers when I say it was a pleasure to read and looks to be a strong and impactful addition to the literature. I agree with the reviewers, though, that some minor additions are required before acceptance. There were no major conflicts between what the reviewers said; in fact, I personally think they complement each other quite well, so addressing their concerns in one area should help the entire work overall. While I think all the suggestions are valid, and they are not too onerous on you and your team, I believe the most critical issue is addressing reviewer 1's statements about the usage of the term "hypothesis", as they correctly point out how the lack of capture in your coding regime may overestimate the atheoretical nature of the field. I would also stress the need to address reviewer 2's concerns about the agreement scores between coders. Once these issues are addressed, I believe this work will be strong enough to publish. I understand that, with the upcoming holidays for many institutions as well as Covid restrictions, it may be difficult for you and your team to address all these concerns quickly; therefore, while I suggested approximately 48 days for resubmission (slightly more than the typical 45), if you require more time please contact us and we can extend this deadline. We all understand that the current pace of the academic and non-academic world is not typical, and we do not want you or your team to feel constrained by this timeline. I thank you for your submission.

==============================

Please submit your revised manuscript by January 29th 2021. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols

We look forward to receiving your revised manuscript.

Kind regards,

T. Alexander Dececchi, Ph.D

Academic Editor

PLOS ONE

Journal Requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at

https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and

https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

2. We note the study analyses publications from a single publication (Psychological Science) as part of this study. We note that you have acknowledged this as a limitation in the Discussion, and indicate that "we do not claim that the results described here statistically generalize to other journals".

However, some of the conclusions made do appear to suggest the results are generalizable to wider group, e.g. "We interpret this to suggest that most psychological research is not driven by theory, nor can it be contributing to cumulative theory building."

Please revise accordingly. This is required in order to meet PLOS ONE's 4th publication criterion, which states that 'Conclusions are presented in an appropriate fashion and are supported by the data.'

https://journals.plos.org/plosone/s/criteria-for-publication#loc-4

Additional Editor Comments:

First off, I would like to apologize for the delays, and I thank you for your understanding and patience. Second, I wish to congratulate you on an overall very compelling and informative study. This line of inquiry is needed to help drive psychological research forward. That said, I also agree with the reviewers on their most significant suggestions, especially the omission of "hypothesis" from your analysis as brought up by reviewer 1 and the moderate coder agreement scores brought forward by reviewer 2. I believe addressing these in the next version will greatly improve it and make it even more accessible to a wider audience. I thank you all for this manuscript and I look forward to your re-submission.


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

Reviewer #2: Partly

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

Reviewer #2: Yes

**********

3. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #2: Yes

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: Thank you for the opportunity to review this manuscript. The authors have chosen a fascinating topic and approached it in an elegant and innovative fashion. Their analytic approach is well considered and, given the constraints of their analysis, the conclusions they draw from their results are sound. I certainly believe that this submission contributes in a unique and meaningful way to the literature on theorizing in psychology, and I think it would make a fine addition to your outlet.

My only concern regards the authors’ decision to omit the term “hypothesis” from their analyses. Although the authors go to some lengths to justify this decision, I remain unconvinced by their argument. Anecdotally, I think it is common in the field to refer to theory and hypothesis interchangeably, and there are certain types of hypothesis that satisfy the sort of superordinate status the authors ascribe to “theory”. In the evolutionary psychological literature, for example, theories such as inclusive fitness are referred to as “first order hypotheses”, from which subsidiary, testable hypotheses or predictions can be derived. In a similar vein, it is widely recognised that psychology progresses via cumulative tests of lower-order hypotheses derived from higher-order theories – here, it is quite reasonable to expect that researchers only explicitly refer to the former (i.e., the subject of their analysis), rather than the broader theoretical framework from which their hypotheses are derived; nevertheless, progressive empirical support for lower-order hypotheses constitutes cumulative support for higher-order theories. In short, these two terms cannot be readily individuated. On the other hand, I am sympathetic to the fact that a hypothesis can also refer to its more trivial sense (i.e., specific, testable predictions), which would require a more nuanced, qualitative analysis and coding of target articles to differentiate the more substantive use of the term (i.e., theory) from its more trivial form (i.e., empirical predictions). Nevertheless, I believe that such an analysis is required to demonstrate, convincingly, whether psychological science operates in the atheoretical manner the authors describe.

Otherwise, another, minor suggestion is that the authors might like to consider complementing some of their results with inferential analyses (e.g., chi-square analyses), where appropriate. It would be interesting to see whether the differences they cite reach statistical significance.

In closing, I would like to congratulate the authors on a fascinating submission, and I wish them all the best in their future endeavours.

Reviewer #2: Overview: This manuscript explored mentions of theory in the past 10 years in the journal Psychological Science. This paper attempts to provide an answer about the extent to which modern psychological research is guided by theory. This manuscript is innovative, clever, and overall well-written. The authors present interesting findings about psychological research’s current lack of grounding in theory without necessarily prescribing a need for change. My primary concern is the low agreement between coders on what constitutes a reference to theory, as captured by the Fleiss’ kappas. While these values suggest coders agreed at better than chance rates, their agreement was only fair to moderate at best. This goes back to the authors’ question of how to identify a theory and thus a reference to theory. More detail and explanation for these low agreement scores is needed.

This manuscript examined

1. It would be helpful to readers to include the theories that were mentioned most often in the text section on how many theories were mentioned in addition to the supplemental information.

2. Psych Science article’s introduction and discussion sections are limited to 2000 words. The authors might consider whether this word limit could have contributed to lower rates of including references to theory.

3. The manuscript currently lacks information to interpret Fleiss’ kappa according to cut points (i.e., no agreement, slight agreement, fair agreement, etc.) to help the reader better understand the level of agreement between coders. Furthermore, according to cut points for Fleiss’ kappa, coders showed only moderate agreement for the initial question of referring to a specific theory and only fair agreement for testing a prediction from a specific theory. These low kappas are concerning. The authors should note this is a limitation and offer potential explanations for why coders showed these levels of disagreement. It would help to contextualize the kappas based on what other studies using this as a measure of agreement have found.

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: No

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.


Author response to Decision Letter 0


11 Jan 2021

Response to comments

Editor comments

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming.

Response: I have reviewed the guidelines and believe that my files now satisfy these requirements.

2. We note the study analyses publications from a single publication (Psychological Science) as part of this study. We note that you have acknowledged this as a limitation in the Discussion, and indicate that "we do not claim that the results described here statistically generalize to other journals".

However, some of the conclusions made do appear to suggest the results are generalizable to wider group, e.g. "We interpret this to suggest that most psychological research is not driven by theory, nor can it be contributing to cumulative theory building."

Please revise accordingly. This is required in order to meet PLOS ONE's 4th publication criterion, which states that 'Conclusions are presented in an appropriate fashion and are supported by the data.'

Response: I have removed, to the best of my knowledge, statements that imply a generalisation to all of psychology. For example, I have reworded the statement you pointed out to read “that the research published in this flagship journal is not driven by theory”, in the beginning of the concluding remarks “the published research we reviewed” and “theory is not guiding the majority of research published in Psychological Science.”

Reviewer Comments

Reviewer #1: Thank you for the opportunity to review this manuscript. The authors have chosen a fascinating topic and approached it in an elegant and innovative fashion. Their analytic approach is well considered and, given the constraints of their analysis, the conclusions they draw from their results are sound. I certainly believe that this submission contributes in a unique and meaningful way to the literature on theorizing in psychology, and I think it would make a fine addition to your outlet.

Response: Thank you for the positive evaluation of our work.

1. My only concern regards the authors’ decision to omit the term “hypothesis” from their analyses. Although the authors go to some lengths to justify this decision, I remain unconvinced by their argument. Anecdotally, I think it is common in the field to refer to theory and hypothesis interchangeably, and there are certain types of hypothesis that satisfy the sort of superordinate status the authors ascribe to “theory”. In the evolutionary psychological literature, for example, theories such as inclusive fitness are referred to as “first order hypotheses”, from which subsidiary, testable hypotheses or predictions can be derived. In a similar vein, it is widely recognised that psychology progresses via cumulative tests of lower-order hypotheses derived from higher-order theories – here, it is quite reasonable to expect that researchers only explicitly refer to the former (i.e., the subject of their analysis), rather than the broader theoretical framework from which their hypotheses are derived; nevertheless, progressive empirical support for lower-order hypotheses constitutes cumulative support for higher-order theories. In short, these two terms cannot be readily individuated. On the other hand, I am sympathetic to the fact that a hypothesis can also refer to its more trivial sense (i.e., specific, testable predictions), which would require a more nuanced, qualitative analysis and coding of target articles to differentiate the more substantive use of the term (i.e., theory) from its more trivial form (i.e., empirical predictions). Nevertheless, I believe that such an analysis is required to demonstrate, convincingly, whether psychological science operates in the atheoretical manner the authors describe.

Response: We have included this additional analysis. I now detail the results in the “exploratory analysis” section and have included a table detailing this data by year. The results show that, while some people do use hypothesis in place of theory, this is a minority of papers (only 9% of the total corpus).

2. Otherwise, another, minor suggestion is that the authors might like to consider complementing some of their results with inferential analyses (e.g., chi-square analyses), where appropriate. It would be interesting to see whether the differences they cite reach statistical significance.

Response: We have not included inferential statistics because we have analysed the entire corpus of articles. Thus, there is no ‘population’ to generalise our results to with the interpretation of a p-value. That is, because we have all the articles, any difference in absolute value is an actual difference; when you have the whole population, no p-values are needed to determine whether the numbers would differ significantly under a frequentist methodology and interpretation (e.g., what would happen if we repeated the study 100 times).

3. In closing, I would like to congratulate the authors on a fascinating submission, and I wish them all the best in their future endeavours.

Response: Thank you again for your constructive feedback!

Reviewer #2: Overview: This manuscript explored mentions of theory in the past 10 years in the journal Psychological Science. This paper attempts to provide an answer about the extent to which modern psychological research is guided by theory. This manuscript is innovative, clever, and overall well-written. The authors present interesting findings about psychological research’s current lack of grounding in theory without necessarily prescribing a need for change. My primary concern is the low agreement between coders on what constitutes a reference to theory, as captured by the Fleiss’ kappas. While these values suggest coders agreed at better than chance rates, their agreement was only fair to moderate at best. This goes back to the authors’ question of how to identify a theory and thus a reference to theory. More detail and explanation for these low agreement scores is needed.

Response: Thanks for pointing this out; I think this is the result of a miscommunication on my part. I have included additional text in the methods section to clarify how the data were coded by raters and why I do not believe the fair-to-moderate kappas to be a problem. I will also explain in a bit more detail here.

First, just to clarify, the coding took place in two stages. Initially, two coders independently reviewed each article and recorded ratings; the kappa reported in the article was computed on this initial coding only. Then, in the second stage, a third coder reviewed the disagreements, and it is the ratings after this final round of coding that we analyse. So, every code analysed for the main results satisfied one of two conditions: either a) the two initial coders agreed 100%, or b) two out of three coders agreed 100%.

This means that the lower level of agreement was corrected when the third coder independently reviewed the disagreements (i.e., the kappa does not necessarily describe the data we analysed).

Thus, I do not think this is an issue because 1) agreement was not too bad to begin with, remaining at moderate levels for the more complicated ratings; 2) having more categories necessarily lowers agreement; and 3) the tie-breaking means every rating is the result of agreement by at least two coders.
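As a sketch of the resolution rule just described (the function and the category labels are illustrative assumptions, not our actual code):

from typing import Optional

def resolve(coder1: str, coder2: str, coder3: Optional[str] = None) -> Optional[str]:
    # Condition (a): the two initial coders agreed 100%.
    if coder1 == coder2:
        return coder1
    # Condition (b): the tie-breaking third coder sides with one of them.
    if coder3 is not None and coder3 in (coder1, coder2):
        return coder3
    return None  # unresolved; would require further review

print(resolve("names theory", "names theory"))              # -> names theory
print(resolve("names theory", "no theory", "no theory"))    # -> no theory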

In response, I have made the following changes to the manuscript.

On pages 3-4, where I describe the coding, I have made two changes: first, I have reworded the description of the procedure; second, I have included some brief rules of thumb for interpreting kappa. The passage now reads as follows:

“Each article was initially scored independently by two individual coders who were blind to the purpose of the study; Fleiss’ Kappa is reported for this initial coding. Recommendations suggest that a kappa between .21-.40 indicates fair agreement, .41-.60 indicates moderate agreement, .61-.80 indicates substantial agreement, and .81-1.0 indicates almost perfect agreement (37).

After the initial round of coding, two additional blind coders and the first author each independently reviewed a unique subset of the disagreements to resolve ties. This means that the ratings we analyse in the following section are only those codes on which two independent raters (or two out of three raters) agreed 100%.”
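For readers who wish to reproduce this kind of agreement statistic, the sketch below computes Fleiss’ kappa with statsmodels and maps it onto the cut-points quoted above; the ratings are hypothetical placeholders, not the study’s data.

import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# Rows are articles, columns are the two initial coders;
# 0 = does not name a theory, 1 = names a theory (illustrative codes).
ratings = np.array([[1, 1], [0, 0], [1, 0], [0, 0], [1, 1], [0, 1]])
table, _ = aggregate_raters(ratings)          # subjects x categories count table
kappa = fleiss_kappa(table, method="fleiss")

# Cut-points quoted in the manuscript excerpt above (Landis & Koch style).
bands = [(0.20, "slight"), (0.40, "fair"), (0.60, "moderate"),
         (0.80, "substantial"), (1.00, "almost perfect")]
label = next(name for upper, name in bands if kappa <= upper)
print(f"Fleiss' kappa = {kappa:.2f} ({label} agreement)")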


1. It would be helpful to readers to include the theories that were mentioned most often in the text section on how many theories were mentioned in addition to the supplemental information.

Response: I have noted this under the heading “Question 5: How many theories were mentioned…” (page 8) and have included a table with the top-10 most-mentioned theories.

2. Psychological Science articles’ introduction and discussion sections are limited to 2,000 words. The authors might consider whether this word limit could have contributed to lower rates of including references to theory.

Response: Good point. At the beginning of the limitations section I have stated the following:

“One specific concern is that Psychological Science limits articles to 2,000 words, and this may have restricted the ability to describe and reference theories. This may be true, though it would seem that the body of knowledge a piece of research is contributing towards would be one of the most important pieces of information to include in a report. That is, if the goal of the research were to contribute to cumulative knowledge, it does not require many words to refer to a body of theory by name.”

3. The manuscript currently lacks information to interpret Fleiss’ kappa according to cut points (i.e., no agreement, slight agreement, fair agreement, etc.) to help the reader better understand the level of agreement between coders. Furthermore, according to cut points for Fleiss’ kappa, coders showed only moderate agreement for the initial question of referring to a specific theory and only fair agreement for testing a prediction from a specific theory. These low kappas are concerning. The authors should note this is a limitation and offer potential explanations for why coders showed these levels of disagreement. It would help to contextualize the kappas based on what other studies using this as a measure of agreement have found.

Response: I have included the rules of thumb for kappa in the methods section, as described earlier. This is related to my previous response regarding the calculation of the kappas: namely, we only analysed the codes on which two raters agreed (i.e., after tie-breaking). However, there are a few other practical considerations to be made here.

First, these categories are extremely difficult to code. They may seem straightforward, but 1) authors are often extremely vague; 2) we are coding something for which we expect there to be misuses of the word, which adds noise; and 3) coding multiple categories will necessarily reduce agreement.

The coders performed almost perfectly when coding whether a study was preregistered; had this not been the case, I would have been more concerned about the other categories.

Going into this project, I initially thought it would be straightforward to identify what a theory is, but it is not. People use the word so loosely that it makes any coding scheme feel inadequate; authors contradict themselves and make ambiguous statements. I think this has more to do with the articles than with the coders or the coding scheme. Some of these thoughts were already in the manuscript, and I am hesitant to put all of them into the paper, but I have added some discussion of this issue to the end of the limitations section.

Attachment

Submitted filename: Response to comments.docx

Decision Letter 1

T Alexander Dececchi

18 Feb 2021

A decade of theory as reflected in Psychological Science (2009-2019)

PONE-D-20-28543R1

Dear Dr. McPhetres

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements.

Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice for payment will follow shortly after the formal acceptance. To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double-check that your user information is up to date. If you have any billing-related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,

T. Alexander Dececchi, Ph.D

Academic Editor

PLOS ONE

Additional Editor Comments (optional):

After reading your revisions, the reviewers and I all agree that we should accept your manuscript. Congratulations. I know this was a long time in the works, and I apologize for that. Thank you for your patience.

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #1: All comments have been addressed

Reviewer #2: All comments have been addressed

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

Reviewer #2: Partly

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

Reviewer #2: Yes

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #2: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: The authors have done a fine job responding to the reviewers' concerns. I wish them all the best in their future endeavours.

Reviewer #2: The authors did a great job incorporating reviewer feedback into the revised document. The only other change I would suggest is tempering some of the strong language in the abstract and discussion to be more suggestive of potential implications of the findings. For example, the abstract states: “We interpret this to suggest that the majority of research published in this flagship journal is not driven by theory, nor can it be contributing to cumulative theory building.” Maybe instead say something like, “Given that the majority of research published in this flagship journal does not derive specific hypotheses from theory, we suggest that theory is not a primary driver of much of this research. Further, the research findings themselves may not be contributing to cumulative theory building.” From what I understand of the findings, several studies did reference theory; they just did not use theory to specifically derive their hypotheses.

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: No

Acceptance letter

T Alexander Dececchi

25 Feb 2021

PONE-D-20-28543R1

A decade of theory as reflected in Psychological Science (2009-2019)

Dear Dr. McPhetres:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.

If we can help with anything else, please email us at plosone@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,

PLOS ONE Editorial Office Staff

on behalf of

Dr. T. Alexander Dececchi

Academic Editor

PLOS ONE

Associated Data


    Supplementary Materials

S1 File (DOCX)


    Data Availability Statement

    All relevant data are available from the Open Science Framework (OSF) database (osf.io/hgn3a). The OSF preregistration is also available (osf.io/d6bcq/).

