Perspectives on Psychological Science. 2020 May 5;15(4):1042–1053. doi: 10.1177/1745691620904775

Unburdening the Shoulders of Giants: A Quest for Disconnected Academic Psychology

Dario Krpan
PMCID: PMC7370646  PMID: 32369707

Abstract

In current academic psychology, scholars typically develop their research and ideas by drawing on the work of other contemporary and preceding psychological scientists and by following certain conventions of the field. I refer to this variant of psychology as connected because the emphasis is on connecting various research findings and ideas generated by different scholars (e.g., by showing how they are related to each other via referencing). In this article, I argue that, although connected psychology advances psychological knowledge, it restricts the total amount of knowledge that could eventually be produced and therefore limits the potential of the discipline to improve the understanding of psychological phenomena. As a solution, I propose that, alongside the currently existing connected psychology, disconnected psychology should be established. In disconnected psychology, researchers develop their ideas by following the main principles of psychological method, but they are disconnected from a “field” consisting of other psychologists and therefore do not follow the discipline’s norms and conventions. By drawing on one of the core constructs from information theory—information entropy—I argue that combining the two streams of psychology would result in the most significant advancement of psychological knowledge.

Keywords: knowledge, psychology, information, entropy, method


The fundamental aim of psychology is to increase psychological knowledge (i.e., the understanding of the human mind and behavior). Although psychological scientists typically understand this objective implicitly, and it is evident from numerous publications in the discipline without being openly stated, various professional organizations that represent psychological scientists, such as the British Psychological Society (BPS) or the American Psychological Association (APA), have articulated it in their definitions of psychology.1 In this article, I identify a fundamental obstacle that limits psychological knowledge, and I develop a solution to overcome this obstacle. I start by proposing how psychological knowledge can be expressed using quantitative language and then argue that current academic psychology—to which I refer as connected psychology—restricts the quantity of psychological knowledge that could potentially be produced. I then formulate the concept of disconnected psychology and argue that psychological knowledge can be maximized through the interaction of connected and disconnected psychology.

What Is “Knowledge” in the Context of Psychological Science?

Various philosophers across many different traditions have attempted to define knowledge (Audi, 2011; Lehrer, 2018). In the realm of philosophy of science, one of the most influential ideas in this regard was laid out by Popper (1959, 1963), according to whom scientific knowledge is not merely the accumulation of observations. Instead, it constitutes “the repeated overthrow of scientific theories and their replacement by better or more satisfactory ones” (Popper, 1963, p. 215). On the opposite side of the spectrum, Feyerabend (1975) proposed that science is an anarchistic enterprise, which means that it does not involve a series of theories that gradually replace each other. Instead, scientific knowledge is an “increasing ocean of mutually incompatible alternatives” (Feyerabend, 1975, p. 21) where different theories compete against each other and thereby constantly force each other into greater articulation. As exemplified by these two contrasting views, there is no consensus among philosophers concerning what knowledge is (Lehrer, 2018). Therefore, in the present article, my aim is not to follow a particular philosophical definition, but rather to propose a functional definition that describes how psychology currently operates as a field and that can also be expressed quantitatively.

I posit that psychological knowledge can be defined as a reduction of uncertainty regarding the occurrence of phenomena of interest to psychology: mental states and behaviors. Using psychological terminology, we can conceptualize these phenomena as dependent variables (DVs; i.e., measurable behaviors such as eating, discrimination, socialization, etc., and measurable mental states such as attitudes, intentions, affect, etc.). This definition can be extrapolated from sources in which psychological knowledge is typically documented (e.g., journal articles, books, conference proceedings) that can be broadly divided into empirical (e.g., journals such as Psychological Science) and theoretical (e.g., journals such as Psychological Review).

For example, in the case of a typical research project that may be published in an empirical journal, the overarching goal is to investigate whether one or more independent variables (IVs) influence or predict DVs of interest. Researchers may decide on which IVs to test on the basis of previous literature and their own experiences or observations. However, regardless of how well informed the selection of the IVs is, it remains uncertain whether they do in fact influence or predict the DVs of interest until research has been conducted to test this and the appropriate statistical analyses have been implemented. Research, then, to some degree resolves the uncertainty regarding the occurrence of the phenomena studied because it clarifies whether and to what extent the IVs tested influence or predict the occurrence of these phenomena. No study can, of course, provide a definite answer in this regard, but conducting multiple studies and their replications can increase the confidence regarding the existence (or absence) of meaningful IV–DV links (Brandt et al., 2014; Hagger et al., 2016; Koole & Lakens, 2012; Simons, 2014; Verhagen & Wagenmakers, 2014).

Whereas empirical publications reduce uncertainty regarding specific IV–DV links they test, the aim of a typical theoretical publication is to reduce uncertainty regarding whether and how different empirical findings are linked to each other by proposing an underlying principle that connects them. A theory can be developed deductively, by formulating a set of principles based on a large body of available empirical findings and specifying testable predictions stemming from these principles; inductively, by starting with a core set of principles based on one or a few empirical findings and then testing whether these principles apply “universally” across many different phenomena and settings; or abductively, by forming a best explanation about a phenomenon based on one’s own incomplete observations of the world and the limited empirical evidence that exists (Fann, 1970; Locke, 2007, 2015; Locke & Latham, 2002; Muthukrishna & Henrich, 2019; Seth, 2015). Regardless of the process through which a theory is developed, it must eventually connect many empirical publications via an underlying set of principles that reduce uncertainty regarding when and why the phenomena studied occur. For example, writing about construal-level theory, Trope and Liberman (2010) argued that by understanding the psychological distance (i.e., distance in terms of time, space, probability, or social connectedness) of a stimulus (e.g., an event, object, or person), it is possible to understand whether and to what degree certain mental states (e.g., attitudes, affect) or behaviors (e.g., prejudice, politeness, self-control) regarding this stimulus will occur. The theory explains a range of findings from many empirical publications and also makes new testable predictions.

To further clarify psychological knowledge, it is necessary to outline how theoretical and empirical sources interact to resolve uncertainty regarding the occurrence of psychological phenomena. The field does not function in a strict Popperian way, according to which psychological scientists would comprehensively test one or more overarching theories empirically and then gradually replace them with better and more satisfactory theories (Locke, 2007, 2015; Muthukrishna & Henrich, 2019). Instead, it is characterized by a more "anarchistic" set of practices: Although in some cases empirical publications are guided by well-developed theories such as cognitive dissonance theory (e.g., Matz & Wood, 2005) or self-affirmation theory (e.g., Martens, Johns, Greenberg, & Schimel, 2006), in many cases, researchers eclectically combine insights from various articles and their own experiences to form "hodgepodge" theories that inform their research.

In this regard, whereas individual empirical articles may reduce uncertainty regarding the specific IV–DV links tested, the accumulation of a large body of empirical articles may increase uncertainty regarding psychological phenomena on a macro level because it is not clear how all of these articles are related and which overarching set of rules they provide regarding the occurrence of these phenomena. Theoretical publications may then attempt to reduce this uncertainty by proposing some underlying principle that links the empirical findings and that can spawn new testable predictions. This description of psychological knowledge is broadly consistent with certain propositions by both Kuhn (1962) and Feyerabend (1975). In line with Kuhn (1962), it indicates that psychology is characterized by a constant increase and reduction of uncertainty. However, in contemporary psychology, this process does not occur via the replacement of paradigms (i.e., sets of key theories, methodologies, and metaphysical assumptions) such as behaviorism (Liu & Liu, 1997), which are overthrown during periods of turmoil and succeeded by new paradigms. Instead, it is more dynamic and resembles a "sea" of different findings and approaches that, depending on their domain, may draw on each other and/or compete and in some cases be reconciled via rigorous review and theoretical articles, but more frequently via eclectic referencing and argumentation (Feyerabend, 1975). Ultimately, in an ideal future, this dynamic cycle of interactions between theory and research would end in a unified theory that would explain the occurrence of all mental states and behaviors. Although such theories have been proposed (Henriques, 2003, 2011), they have not come close to accomplishing this objective.

Although I have defined psychological knowledge as uncertainty reduction and argued how it operates, it remains necessary to specify how uncertainty must be reduced for that reduction to count as psychological knowledge. Indeed, stating that psychology aims to reduce uncertainty regarding the occurrence of psychological phenomena is not specific enough because the uncertainty-reduction principle is evident in many different domains of human functioning. For example, it is possible to argue that religion aims to reduce uncertainty regarding different events happening in the world by producing a set of beliefs that can explain these events in relation to one or more gods (Hirsh, Mar, & Peterson, 2012; Hogg, Adelman, & Blagg, 2010). This need to reduce uncertainty in many different domains may in fact reflect one of the core principles of the human brain (Friston, 2009, 2010). What separates science more generally, and psychology more specifically, from other domains is that uncertainty needs to be reduced via some kind of scientific method to count as knowledge (Feyerabend, 1975; Koch, 1981; Rosnow & Rosenthal, 1989).

Broadly speaking, psychological method can be described as a sum of research designs and statistical techniques that have evolved throughout the existence of the field. Many arguments that psychological scientists have had concerning the validity of and preference for different kinds of research designs and statistical techniques (e.g., Benjamin et al., 2018; Held & Ott, 2018; Koch, 1981; Loftus, 1996; Rosnow & Rosenthal, 1989; Wagenmakers, Wetzels, Borsboom, van der Maas, & Kievit, 2012) indicate that psychological method is far from being objective, and one "ideal" method that defines the field does not exist. However, it is possible to identify two general characteristics of psychological method (Cohen, 1977; Popper, 1959, 1963; Rosnow & Rosenthal, 1989; Shrout & Rodgers, 2018): (a) It requires that any claims that psychological scientists make regarding the occurrence of behaviors or mental states need to be validated via observation in the physical world, which implies that these behaviors or mental states need to be measurable, and (b) it requires justifying that the occurrence of behaviors or mental states under a specific set of circumstances (e.g., under the presence of certain IVs) is not mere chance and would to some extent repeat whenever these circumstances are present. In other words, psychology can reduce uncertainty regarding the occurrence of some behaviors or mental states only if it can observe them in the physical world and show that the instances when they occurred via observation were unlikely to have happened merely by chance. Throughout this article, I refer to the first characteristic of psychological method as observability and to the second characteristic as nonaccidentality. Overall, for any idea expressed in a psychological publication to become part of psychological knowledge, it needs to eventually be supported by psychological method.

Expressing Psychological Knowledge Through Information Entropy

To argue why current academic psychology fails to maximize the quantity of psychological knowledge produced, it is necessary to express this knowledge in a more precise, quantifiable manner. For this purpose, I use a concept from information theory—Shannon’s (1948) information entropy—that quantifies uncertainty and has already been implemented in relation to psychological phenomena and knowledge more broadly (Dretske, 1981, 1983; Fanelli, 2019; Hirsh et al., 2012). Information entropy (H) can be expressed using the following equation:

$$H = -\sum_{i=1}^{n} p(x_i) \log_2 p(x_i) \tag{1}$$

In Equation 1, x refers to a variable consisting of any events, outcomes, or more generally, possibilities, ranging from {x1 . . . xn}. For example, in information theory, x would typically refer to a string of numbers, whereas in psychology, it has previously been used to denote a set of different behaviors that a person may consider in a given situation, such as walking, running, talking to someone, purchasing a food item, and so forth (Hirsh et al., 2012). The expression p(xi) denotes the probability of a given event belonging to x. For example, if x refers to different behavioral possibilities that a person may consider in a situation, p(xi) corresponds to the probability that this person will undertake one of these behaviors. Let us assume that the person is deciding among five different behaviors. If he or she is highly uncertain and does not know which behavior to undertake, the probabilities of each of the five behaviors will be the same (.20). However, if the person is highly certain of which behavior to undertake, that behavior may, for example, have a high probability of .92, whereas each of the remaining four behaviors may have a probability of .02. If we now calculate information entropy for the high- (vs. low-) uncertainty situation, we will see that it corresponds to 2.32 (vs. 0.56). Higher information entropy therefore indicates higher uncertainty, and the units in which it is expressed are called bits.
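The numbers in this example can be verified with a few lines of Python. The following minimal sketch (the function name shannon_entropy is mine, not part of the article) implements Equation 1 for the two hypothetical probability distributions described above:

```python
import math

def shannon_entropy(probs):
    """Information entropy H in bits (Equation 1): H = -sum of p(x_i) * log2 p(x_i)."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# High-uncertainty situation: five behaviors, each with probability .20.
print(round(shannon_entropy([0.20] * 5), 2))                      # 2.32 bits
# Low-uncertainty situation: one behavior at .92, the remaining four at .02.
print(round(shannon_entropy([0.92, 0.02, 0.02, 0.02, 0.02]), 2))  # 0.56 bits
```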

Let us first use the concept of information entropy to define what it means to produce knowledge more generally before focusing on psychological knowledge more specifically. If knowledge corresponds to uncertainty reduction, then the amount of uncertainty before some knowledge-producing event (what counts as a knowledge-producing event will depend on the domain of human functioning in question; e.g., psychology, religion, art), which I refer to as entropy prior (Hp), needs to be larger than the amount of uncertainty after this event, which I refer to as entropy final (Hf): Hp > Hf. If this condition has been met, the amount of knowledge produced (K) can be expressed via the following equation:

$$K = H_p - H_f \tag{2}$$

From Equation 2, we can see that the largest possible amount of knowledge is produced when Hp is as large as possible and Hf is as small as possible. It is assumed that both Hp and Hf contain exactly the same instances of x ranging from {x1 . . . xn}. However, in Hf, these instances have different probabilities than in Hp because of some knowledge-producing event that changed them. The magnitude of Hf depends on the degree to which a knowledge-producing event that preceded it increased the probability of one specific instance of x while decreasing the probability of other instances (see Equation 1); the smallest possible value of Hf is 0, which indicates complete absence of uncertainty (i.e., some xi has a probability of 1). Therefore, for Hf to be small, it needs to contain some xi that has a high probability relative to all other instances of x, assuming that the knowledge-producing event can effectively demonstrate that this is indeed the case. In contrast, Hp is determined by the total number (n) of all instances of variable x that can range from {x1 . . . xn}. I refer to this number as n(x). Indeed, the higher n(x) for Hp, the higher the maximum possible magnitude of Hp. The relationship between n(x) and maximum possible Hp can be seen in Figure 1. To further clarify why n(x) is important for the quantity of knowledge produced (K), let us assume that we have some Hf that is always 0. In this case, K will depend on the maximum possible Hp: When n(x) = 2, this value is 1; when n(x) = 3, this value is 1.585; when n(x) = 4, this value is 2, and so on (Fig. 1). Therefore, to maximize knowledge, it is not enough to identify xi that can result in a small Hf—it is also necessary to increase n(x).
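As a minimal illustration of these quantities, the following Python sketch (the function names are mine, not part of the article) computes Equation 2 under the assumption that Hf = 0, showing how the maximum possible K is bounded by the maximum possible Hp, which equals the base-2 logarithm of n(x):

```python
import math

def knowledge_produced(h_prior, h_final):
    """Knowledge K (Equation 2): the reduction in entropy, K = Hp - Hf."""
    return h_prior - h_final

def max_entropy(n):
    """Maximum possible entropy (in bits) for n equally probable instances of x."""
    return math.log2(n)

# With Hf fixed at 0, the maximum possible K grows with n(x):
# 1 bit for n(x) = 2, ~1.585 for n(x) = 3, 2 for n(x) = 4, and so on.
for n in (2, 3, 4, 8, 100):
    print(n, round(knowledge_produced(max_entropy(n), 0), 3))
```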

Fig. 1. The relationship between n(x), which corresponds to the number of different instances of x ranging from {x1 . . . x100} in this example, and Hp, which corresponds to maximum possible entropy prior. When entropy final (Hf) equals 0, the amount of maximum possible knowledge produced (K) for some x corresponds to the value of maximum possible Hp that is determined by n(x). Maximum possible Hp values in the graph are expressed in bits.
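Because only the caption of Figure 1 is reproduced here, the curve it describes can be re-created with the short Python sketch below (assuming matplotlib is available; the styling of the published figure is not reproduced):

```python
import math
import matplotlib.pyplot as plt

# Maximum possible entropy prior (Hp, in bits) as a function of n(x),
# for n(x) ranging from 1 to 100, as described in the Fig. 1 caption.
n_values = list(range(1, 101))
max_hp = [math.log2(n) for n in n_values]

plt.plot(n_values, max_hp)
plt.xlabel("n(x)")
plt.ylabel("Maximum possible Hp (bits)")
plt.title("Maximum possible entropy prior as a function of n(x)")
plt.show()
```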

Given that Equation 2 is formulated in terms of the difference in entropy before and after some knowledge-producing event, it is informative to relate it to other similar conceptualizations. Technically, this formulation can be linked to the degree of Bayesian belief updating in terms of the relative entropy or uncertainty between prior and posterior beliefs, which is a key quantity in many fields. In the visual neurosciences, for example, this is known as Bayesian surprise (Itti & Baldi, 2009; Sun, Gomez, & Schmidhuber, 2011), whereas in developmental robotics, it is known as intrinsic motivation (Barto, Mirolli, & Baldassarre, 2013; Oudeyer & Kaplan, 2009). This quantity can also be regarded as the information gain afforded by some data, given some prior beliefs or hypothesis space; in statistics, the best experiments maximize this quantity, which underwrites the principles of optimal Bayesian design (Lindley, 1956; MacKay, 1992). The quantity is also an important part of active inference, where it is known as intrinsic or epistemic value (Friston et al., 2015; Moulin & Souchay, 2015).
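To make the link to Bayesian belief updating concrete, the relative entropy (Kullback–Leibler divergence) between a posterior and a prior over the same set of possibilities can be computed as in the sketch below; the two distributions are arbitrary illustrations rather than values taken from the article.

```python
import math

def kl_divergence(posterior, prior):
    """Relative entropy D_KL(posterior || prior) in bits: the 'Bayesian surprise'
    or information gain produced by updating beliefs from prior to posterior."""
    return sum(q * math.log2(q / p) for q, p in zip(posterior, prior) if q > 0)

# Hypothetical beliefs over four competing explanations, before and after seeing data.
prior = [0.25, 0.25, 0.25, 0.25]
posterior = [0.70, 0.10, 0.10, 0.10]
print(round(kl_divergence(posterior, prior), 3))  # ~0.643 bits of belief updating
```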

Now that I have explained Equation 2 and linked it to other similar formulations, I proceed with expressing psychological knowledge production in relation to K. In this regard, a knowledge-producing event is any event that has reduced uncertainty regarding the occurrence of psychological phenomena (i.e., behaviors and mental states) via an application of psychological method. For example, this event can involve a single research study or a set of any number of research studies that investigate some psychological phenomenon and result in some Hf that is smaller than Hp. Parameter x (see Equation 1) used to compute Hf and Hp corresponds to a set of different explanations of the phenomenon of interest (i.e., circumstances that may predict it or give rise to it) that has been tested. This parameter can refer to many different constructs, depending on the nature of research and psychological method used. For example, it can refer to a number of different mediation models (or any other statistical models) that need to be tested to determine which one offers the best explanation of how some group of variables is linked to a psychological phenomenon (DV); it can refer to a number of different theories (e.g., comprehensive theories that would be published in theoretical journals and/or hodgepodge theories that eclectically combine insights from various empirical and theoretical journals) that guided the selection of IVs across one or more studies that constitute the knowledge-producing event, and so forth.2

Considering that the number of methods and approaches in psychology is immense, parameter x can take many other forms and shapes and should not be limited to only a few. The only important criterion is that it involves different “explanations” of some phenomenon—because computing information entropy requires comparing different explanations, depending on how likely it is that each of these explanations is the best explanation of the phenomenon of interest. In some instances, the probabilities for each element of x could potentially be estimated with a relatively high degree of accuracy; for example, as would be the case for x that refers to a set of statistical models such as mediation (Hayes, 2018). In other instances, depending on the psychological method used, the probabilities for each element of x would be rough estimates. However, to discuss why current psychological science does not maximize the production of psychological knowledge, as I propose in the next section, exact computations are not necessary; it is sufficient to understand how K is limited by n(x) and other parameters I have previously tackled.
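As a rough worked example of the kind of knowledge-producing event described here, the sketch below (all probabilities are invented for illustration) treats x as four competing theories of a phenomenon, assigns them uniform prior probabilities, concentrates the probability on one theory after a hypothetical set of studies, and computes K via Equation 2:

```python
import math

def entropy(probs):
    """Shannon entropy (in bits) of a probability distribution over explanations."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Entropy prior: before the studies, none of the four theories is favored.
hp = entropy([0.25, 0.25, 0.25, 0.25])        # 2.0 bits
# Entropy final: after the studies, the evidence favors one theory.
hf = entropy([0.85, 0.05, 0.05, 0.05])
# Knowledge produced by this hypothetical knowledge-producing event (Equation 2).
print(round(hp - hf, 3))                      # ~1.152 bits
```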

Why Does Current Psychology Fail to Maximize Knowledge Production?

I have argued that the amount of knowledge produced (K) is determined by Hf, whose magnitude depends on whether it contains some xi that has a high probability relative to all other xi (assuming that the knowledge-producing event can demonstrate this), and by Hp, which depends on n(x). In this section, I posit that current psychology fails to maximize the production of psychological knowledge because it makes the discovery of some xi with highest possible probability less likely and because it restricts n(x).

I first discuss why current psychology fails to maximize the likelihood of discovering some xi with the highest possible probability. I have explained that parameter x, which contains elements xi ranging from {x1 . . . xn}, broadly corresponds to a set of different “explanations” of some psychological phenomenon and can take many forms and shapes (e.g., a set of different theories regarding the phenomenon). Let us say that some x (e.g., a set of theories) consists of a large n(x) that corresponds to the number of all possible instances of x that can exist (e.g., all possible theories) and that this x contains the best possible xi (e.g., a unifying theory, which I refer to as xu) that can lead to Hf = 0. We can never precisely know n(x), but for practical purposes, we can say that it tends to infinity. In this case, what would maximize the possibility of finding xu? Ideally, one would try to randomly “sample” different xi from the entire distribution of x (similar to how psychologists may randomly sample participants from a population of interest to ensure a representative sample) rather than focusing on some very narrow subset of x and predominantly working on very few similar xi. Practically speaking, this would correspond to trying to develop as many diverse theories that are substantially different from each other as possible, as many distinct methodological approaches as possible, and so forth, because this approach would maximize our chances of identifying a theory or methodology that can lead to the smallest Hf. However, the field of psychology (this also applies to other sciences) currently operates in such a way that the focus is on a relatively narrow subset of x (compared with all possibilities of x that could eventually exist) enforced by various conventions, trends, and politics pertaining to either the field more generally or to various research domains within the field (see Medin, Ojalehto, Marin, & Bang, 2017; Rozin, 2001, 2009), even if no objective indications that this subset in fact contains xu exist.

This premise can be supported by arguments on many different levels. For example, it has been acknowledged that APA style, on which psychologists widely rely when writing psychological sources of knowledge, is not just a set of explicit guidelines for presenting information (Budge & Katz, 1995; Madigan, Johnson, & Linton, 1995). In fact, APA style is itself an epistemology that enforces certain values and beliefs regarding psychology as a discipline and reflects its conventions. Moreover, the peer-review process is also guided by various biases and epistemological beliefs of the reviewers and may therefore propel research trajectories that are in line with these biases and beliefs (Blackburn & Hakel, 2006; Marsh, Jayasinghe, & Bond, 2008; Pier et al., 2018; Simon & Fyfe, 1994; Suls & Martin, 2009). Indeed, if psychology generally functions as other sciences, then it may be dominated by a group of highly influential psychological scientists who propel their own ideas and ideas of their collaborators but make it more difficult for other opposing or different ideas to enter the field, either directly or indirectly, by creating conventions that are unfavorable to such ideas (Azoulay, Fons-Rosen, & Graff Zivin, 2019). This empirically supported premise is famously known as Planck’s Principle (Hull, Tessner, & Diamond, 1978). Finally, as proposed by Lakatos (1970), psychology may, similar to other sciences, contain various research programs (i.e., sequences of theories that share some fundamental principles or assumptions) that shape research agendas of groups of psychologists, even if we acknowledge that not all psychologists operate according to the strict definition of research programs. For example, embodied cognition or evolutionary psychology can to some degree be considered research programs.

Another practice in psychology that impedes the discovery of xu by influencing psychologists to focus on a relatively narrow subset of x (e.g., theories and methods) is the referencing convention that it uses to connect different sources of psychological knowledge (i.e., journal articles, books, etc.). In the early days of the discipline, before the onset of the information age spawned by technological advancements, psychologists were generally forced to work more independently because they did not have access to an extensive “knowledge” network consisting of many psychological sources (Leahey, 1987, 1994). Independently developing new methodologies, theories, approaches, and so forth, was therefore a necessity. The advent of the Internet and the explosion of information led to a substantial increase in the number of citations per article that has been more dramatic in psychology than in other sciences, such as physics (Adair & Vohra, 2003; Sigal & Pettit, 2012). Referencing is undoubtedly useful when it comes to connecting various sources of psychological knowledge and understanding how theories, methodological approaches, and empirical findings are related. However, it also forces psychologists to develop their ideas in relation to other published research and theories, to fill in “gaps” in the literature, or to work on research topics that are highly cited to increase their scientific reputation and/or chances of tenure (e.g., Anderson et al., 2019; Moher et al., 2018; Safer & Tang, 2009). Overall, such practices decrease the likelihood of developing as many diverse theories, methodologies, and approaches that are substantially different from each other as possible.

A similar argument can be used to posit that current psychology restricts n(x), thereby lowering Hp, given that various conventions, trends, and practices I have discussed (e.g., APA style and the associated epistemological beliefs, referencing, peer-review process, impact of highly influential scientists on the discipline, research programs) have negative consequences for the diversity of theories, methodologies, and approaches that the field produces. If we assume that parameter x can contain all possible elements xi that correspond to different "explanations" of some psychological phenomenon, then the n(x) that psychological scientists could potentially develop is relatively large, and although we do not know its exact value, we can state that it tends toward infinity. Despite this, psychological conventions, trends, and practices prevent psychologists from continuously realizing (i.e., inventing or discovering) many different xi, which would increase Hp at a high rate. Instead, they focus on relatively few xi and develop ideas that are related to or belong to these xi until changes in conventions, trends, practices, programs, and so forth, that allow for the invention of other xi occur.

Solution: Disconnected Academic Psychology

In this section, I introduce disconnected academic psychology as a solution to the knowledge-production problem from which current (i.e., connected) psychology suffers. Rather than proposing that disconnected psychology should replace connected psychology, I argue that psychological knowledge production can be maximized only if these two streams coexist. I start by defining disconnected psychology by contrasting it with connected psychology. I then discuss how the former overcomes some of the problems from which the latter suffers and why psychological knowledge production would be maximized under the existence of both streams.

As shown in Table 1, both connected and disconnected psychology are grounded on the main foundations of psychological method—observability and nonaccidentality—because psychological method is what defines psychological science, and without it, psychology cannot be a science (Cohen, 1977; Feyerabend, 1975; Koch, 1981; Popper, 1959, 1963; Rosnow & Rosenthal, 1989; Shrout & Rodgers, 2018). However, how this method is practiced differs between the two streams. In connected psychology, scholars need to connect their work to other work that has been done in the discipline (e.g., their domain of research or other domains of research). Their application of psychological method needs to be informed by previous work in the discipline, and they need to follow certain reporting and writing conventions. By being connected to a field consisting of other psychologists from their domain of research, connected psychological scientists to some degree operate according to the field’s norms, conventions, trends, or principles.

Table 1.

Main Principles of Connected and Disconnected Academic Psychology

Principle 1
Connected psychology: Grounded on observability and nonaccidentality as the foundations of psychological method.
Disconnected psychology: Grounded on observability and nonaccidentality as the foundations of psychological method.

Principle 2
Connected psychology: Psychologists "stand on the shoulders of giants"—it is a requirement to refer to previous literature in the field and connect one's theories and research to previous literature.
Disconnected psychology: Psychologists "do not stand on anyone's shoulders"—they develop their ideas by relying solely on observability and nonaccidentality without following the work of other psychologists. They continuously build on their own previous work; they use referencing, but in relation to their own previous work.

Principle 3
Connected psychology: Psychologists follow certain widely accepted reporting and referencing guidelines (e.g., APA style).
Disconnected psychology: Psychologists use reporting and referencing styles that best suit their work.

Principle 4
Connected psychology: Psychologists are "connected" to a field consisting of other psychologists. They understand the conventions of their respective research domains and/or the field more generally and may to some degree operate according to the shared norms, beliefs, or trends. They are members of various psychological organizations and/or research groups, etc.
Disconnected psychology: Psychologists are "disconnected" from a field consisting of other psychologists. They are not aware of the conventions of connected psychology and instead form their own conventions, norms, and principles over time. They are not members of various psychological organizations and/or research groups, etc.

Note: APA = American Psychological Association.

In contrast, in disconnected psychology, there is no attempt to connect the work of different psychologists. The only requirement is that they ground their work on psychological method, but how they interpret and develop this method is up to them. Their work evolves in line with their own experiences, observations, past ideas, and so forth, and not in relation to other psychologists and the conventions, epistemology, or assumptions these psychologists share. Overall, it can be said that in disconnected psychology, psychologists themselves are a field; each establishes his or her own norms, conventions, and principles over time and may develop one or more research agendas or programs across a lifetime. A critic may object that disconnected psychology cannot be classified as psychological science. However, if psychological method is what defines psychology as a science, then whoever adheres to this method is a psychological scientist, even if he or she chooses to do this without connecting to a field consisting of other psychological scientists and adhering to norms and principles that emerged among them (i.e., without adhering to connected psychology). Adherence to norms, rules, principles, or conventions cannot constitute psychological science because no one can objectively prove that some specific norms or conventions that emerged within the field can lead to greater discoveries via psychological method than some other possible existing principles that a psychologist can develop individually or that may have emerged in the field under other circumstances.

In relation to dividing psychology into connected and disconnected, it is important to understand that, when referring to current psychology as connected, I do not imply that all research findings are perfectly connected and that it is clear how they are theoretically related to each other. In that sense, the term “connected” should not be taken too literally. Indeed, I have argued that the field metaphorically resembles a “sea” of different findings and approaches whose constantly fluctuating degree of connectedness (or lack of it) may depend on the research domains to which they belong and on various other factors (e.g., Feyerabend, 1975; Haslam & Lusher, 2011). By referring to psychology as connected, I posit that connectedness is inherent to the field given that, for published research findings, it is of crucial importance to explain how they are linked to other relevant research and contribute to it (e.g., Safer & Tang, 2009); that it is generally expected that psychologists develop their research and ideas by drawing on the work of other contemporary and preceding psychological scientists (e.g., Adair & Vohra, 2003; Sigal & Pettit, 2012); that psychologists, regardless of their research domain, are broadly connected via the referencing styles and other reporting conventions that prompt shared epistemological underpinnings (e.g., Budge & Katz, 1995; Madigan et al., 1995); that groups of psychologists may be connected via research programs, domains, or agendas (Lakatos, 1970); and so on.

In contrast, in disconnected psychology, all these aspects of connectedness are avoided. Psychologists do not draw on the work of other contemporary and preceding psychologists and do not attempt to link their work to certain domains of research via referencing; they do not share reporting styles or conventions; they are not connected via shared research programs or agendas; and so forth. It could be said that disconnected psychology slightly resembles the field of psychology in its early days, when psychologists were generally forced to work more independently because information could not be easily shared and they were therefore more affected by their immediate environments and life circumstances (Leahey, 1987, 1994). Of course, considering that we currently live in the information age, even if disconnected psychologists were deliberately disconnected from psychology as a field, they would be exposed to an immense amount of information from their immediate environment and from across the world.

The question is how disconnected psychology overcomes the problems related to knowledge production from which connected psychology suffers. The first problem I identified is that connected psychology fails to maximize the likelihood of discovering some xi with the highest possible probability (e.g., xu), because various conventions, trends, and practices of the field (e.g., APA style, referencing, impact of highly influential scientists on the discipline) lead psychologists to focus on a relatively narrow subset of x and to predominantly work on relatively few similar xi instead of maximizing the diversity of theories, methodologies, and approaches. Besides increasing the possibility of attaining xu and therefore producing an Hf of the lowest possible magnitude, a larger diversity of theories, methodologies, and approaches directly corresponds to a larger n(x). Hence, increasing this diversity also resolves the second problem I identified, which is that connected psychology restricts n(x) and in doing so limits the magnitude of Hp.

Therefore, to overcome the problems linked to connected psychology, disconnected psychology needs to increase the diversity of theories, methodologies, approaches, and so forth. One potential argument against the premise that disconnected psychology would indeed achieve this goal is that, without following the work of each other and what has previously been achieved in the discipline, disconnected psychologists would simply repeat each other’s work and the work of connected psychologists and keep reinventing the wheel. Whereas repetitions would undoubtedly happen, as they happen even in connected psychology, where different scientists tend to propose similar concepts under different names (e.g., priming and anchoring; Strack & Mussweiler, 1997), I offer several arguments justifying why disconnected psychology should nevertheless increase the diversity of theories, methodologies, approaches, and so forth.3

All psychological scientists are exposed to a unique set of an immense number of life circumstances (e.g., their childhood, education, culture, everyday events, friends, other influences) that, in interaction with their individual differences (e.g., genetics, personality), shape their thoughts and actions. Without some guiding principles such as rules, norms, and conventions of the field that would regulate their use of psychological method, it would be difficult to expect that they would develop exactly the same ideas, concepts, and terminology that would guide their theories, methodologies, and approaches. In addition to the individual characteristics and life circumstances that would shape their work, their recent ideas would continuously be influenced by their previous ideas, which would result in a body of work that, at the end of their career, would be a unique consequence of the interaction between all these influences and psychological method. Overall, psychologists have already discussed that various constraints that the field imposes on its members may restrict diversity of approaches and ideas (e.g., Medin et al., 2017). Disconnected psychology can be seen as a more extreme extension of this notion, according to which completely removing any constraints by disconnecting psychologists from the “field” would allow the immense number of circumstances present in the world to shape their work, which would, over time, result in unique theories, methodologies, approaches, and so forth, for each psychological scientist.

Although no definite evidence can back up this claim—because there are few if any examples of scientists who developed their ideas according to the principles of disconnected psychology—there are certain events and research findings on which I rely to defend it. For example, we know that there are roughly 6,000 to 8,000 languages that evolved over the course of human history, and these languages are generally diverse and "vary radically in sound, meaning, and syntactic organization" (Evans & Levinson, 2009, p. 429). This language diversity may have been caused by a gradual accumulation of random changes over time in combination with the need to adapt to different environments in which the languages evolved (Lupyan & Dale, 2016). Likewise, over the course of human history, an immense number of cultural traditions and norms have evolved in different geographic areas (United Nations Educational, Scientific and Cultural Organization [UNESCO], 2009), and this cultural diversity may in fact be one of the factors that influenced language diversity (Dunn, Greenhill, Levinson, & Gray, 2011). Cultures and languages generally evolved during periods of human history devoid of modern communication and information systems, which means that humans could not easily share information unless they were in geographical proximity (Harari, 2014). Cultural and language diversity therefore indicates that unique sets of circumstances operating in different environments, which are not connected to a shared "field" that imposes certain conventions, norms, and rules, will give rise to unique intellectual creations. Here I am not claiming that individual psychologists who are disconnected from the field can be exactly equated with groups of geographically disconnected generations of people who developed different languages and cultural traditions. However, the basic notion that, in the absence of norms and rules prescribed by some overarching field, different circumstances and environments would give rise to diverse ideas and approaches should also apply to disconnected psychologists over the course of their lifetime.

Another indication that disconnected psychology may lead to diversity of ideas is that original and transformative scientific contributions were in many cases made by “outsiders” who, although not disconnected from their fields in the sense that disconnected psychology advocates, were operating outside of typical confines of these fields. Given that there seems to be no exact statistical information in this regard, I start by giving some examples. Albert Einstein, who created one of the most influential theories of all time (relativity), was working as a patent clerk when he developed some of his most important ideas (Pais, 1982). Sadi Carnot, who originated the second law of thermodynamics, was a military engineer and spent his life working for the military (Erlichson, 1999). Julian Jaynes, who wrote one of the most original psychology books—The Origin of Consciousness in the Breakdown of the Bicameral Mind (1976)—also did not have a typical academic career, and despite giving lectures at many universities, he was not interested in tenure. In general, science seems to evolve when outsiders who do not work with dominant figures in their field manage to introduce their ideas into the field, which in many cases happens after the death of these dominant figures, who may, for different reasons, impede the acceptance of new ideas (Azoulay et al., 2019).

There are also several other anecdotal lines of evidence that may to some extent support my argument regarding disconnected psychology and that are linked to the exploration of different hypothesis or theory spaces during evidence-based searches. From the Bayesian perspective, this reduces to optimizing Bayesian model selection and inherent structure learning through a search on model space (Tervo, Tenenbaum, & Gershman, 2016). Perhaps the clearest example of this is evolution, where natural selection becomes Bayesian model selection (Campbell, 2016). In this setting, each phenotype represents a different theory or hypothesis about which kinds of creatures are best suited to a given ecological niche. In this regard, diversity plays a key role—and the arguments in this article are essentially akin to those that underwrite speciation and coevolution. Similar kinds of mechanics are found in machine learning and engineering. In this instance, random explorations of hypothesis spaces are implicit in procedures such as stochastic optimal control and Bayesian filters (e.g., particle filters). These structural approaches to optimizing models are based on model evidence (Friston, 2010, 2013), which may be a principle that underlies not just psychological enquiry but also the very existence of psychologists.

Finally, now that I have addressed why disconnected psychology overcomes some of the problems of connected psychology, it is necessary to discuss why psychological knowledge production can be maximized only if these two streams coexist. First, for knowledge (K) to be created, some knowledge-producing event (which in psychology may correspond to a single research study or a set of many research studies) that will lead to Hf needs to occur. Disconnected psychology itself cannot generate a knowledge-producing event (e.g., it cannot produce studies that would test many different theories) because disconnected psychologists develop their own work but do not try to connect it to the work of other psychologists. Therefore, connected psychology would be responsible for generating knowledge-producing events that involve testing explanations of psychological phenomena produced by many different disconnected and connected psychologists. Moreover, given that the two streams of psychology operate according to different principles, the diversity of theories, methodologies, and approaches that corresponds to n(x) should be largest across these streams together rather than in isolation. In other words, this means that n(x) can be maximized when different instances of x from both fields are combined, which can therefore potentially lead to the largest possible Hp.

Conclusion

In this article I argue that connected psychology, in which psychological scientists build on each other’s work and are connected in a “field,” restricts the amount of psychological knowledge produced as a result of various norms, conventions, and rules that reduce the diversity of theories, methods, and approaches. To overcome this problem, I propose disconnected psychology, in which psychological scientists ground their work on psychological method but are disconnected from a “field” that comprises other psychologists. I posit that the production of psychological knowledge can be maximized only if connected and disconnected psychology coexist. In this regard, the role of connected psychology would be to continue operating in the same way that it currently does, but also to constantly browse theories, methodologies, and approaches developed by disconnected psychologists so that it can continuously test them in combination with theories, methodologies, and approaches from connected psychology and determine the ones that best explain psychological phenomena of interest. This may be a challenge given that disconnected psychologists may use different reporting styles and terminologies in their work. However, considering recent advancements in artificial intelligence concerning exploration of scientific literature and data (Extance, 2018), the time may be ripe for disconnected psychology to arise and enrich psychological knowledge in combination with connected psychology.

1. According to APA (2020), "psychology is the study of the mind and behavior. . . . In every conceivable setting from scientific research centers to mental healthcare services, 'the understanding of behavior' is the enterprise of psychologists." According to BPS (2020), "psychology is the scientific study of the mind and how it dictates and influences our behavior, from communication and memory to thought and emotion."

2. If we computed Hf for many different K quantities, with each K pertaining to a different psychological phenomenon or to a set of different psychological phenomena, a highly successful theory xi would have the highest p(xi) in each of these cases, which would mean that it offers the best prediction of many different research findings constituting the knowledge-producing event and therefore most successfully connects these findings. A unified theory xi would lead to Hf = 0 for any possible K referring to some psychological phenomenon, which indicates complete certainty that, out of all examined theories, this theory most accurately predicts the circumstances under which this phenomenon will occur; it also implies that the theory produces the highest possible K for some n(x).

3. It is, however, important to emphasize that even the repetitions to which I refer would constitute a valuable contribution to psychological science. For example, if two scientists from the same unit confirm each other's hypotheses, this is much less compelling than "convergent evolution" toward the same constructs by scientists who have never communicated with each other.

Footnotes

Transparency

Action Editor: Laura A. King

Editor: Laura A. King

Declaration of Conflicting Interests: The author(s) declared that there were no conflicts of interest with respect to the authorship or the publication of this article.

References

1. Adair J. G., Vohra N. (2003). The explosion of knowledge, references, and citations: Psychology's unique response to a crisis. American Psychologist, 58, 15–23.
2. American Psychological Association. (2020). About APA. Retrieved from https://www.apa.org/support/about-apa
3. Anderson C. A., Allen J. J., Plante C., Quigley-McBride A., Lovett A., Rokkum J. N. (2019). The MTurkification of social and personality psychology. Personality and Social Psychology Bulletin, 45, 842–850.
4. Audi R. (2011). Epistemology: A contemporary introduction to the theory of knowledge (3rd ed.). New York, NY: Routledge.
5. Azoulay P., Fons-Rosen C., Graff Zivin J. S. (2019). Does science advance one funeral at a time? American Economic Review, 109, 2889–2920.
6. Barto A., Mirolli M., Baldassarre G. (2013). Novelty or surprise? Frontiers in Psychology, 4, Article 907. doi: 10.3389/fpsyg.2013.00907
7. Benjamin D. J., Berger J. O., Johannesson M., Nosek B. A., Wagenmakers E. J., Berk R., . . . Cesarini D. (2018). Redefine statistical significance. Nature Human Behaviour, 2, 6–10.
8. Blackburn J. L., Hakel M. D. (2006). An examination of sources of peer-review bias. Psychological Science, 17, 378–382.
9. Brandt M. J., IJzerman H., Dijksterhuis A., Farach F. J., Geller J., Giner-Sorolla R., . . . Van't Veer A. (2014). The replication recipe: What makes for a convincing replication? Journal of Experimental Social Psychology, 50, 217–224.
10. British Psychological Society. (2020). What is psychology? Retrieved from https://www.bps.org.uk/public/what-is-psychology
11. Budge G. S., Katz B. (1995). Constructing psychological knowledge: Reflections on science, scientists and epistemology in the APA Publication Manual. Theory & Psychology, 5, 217–231.
12. Campbell J. O. (2016). Universal Darwinism as a process of Bayesian inference. Frontiers in Systems Neuroscience, 10, Article 49. doi: 10.3389/fnsys.2016.00049
13. Cohen J. (1977). Statistical power analysis for the behavioral sciences. Hillsdale, NJ: Erlbaum.
14. Dretske F. (1981). Knowledge and the flow of information. Cambridge, MA: MIT Press.
15. Dretske F. I. (1983). Précis of knowledge and the flow of information. Behavioral & Brain Sciences, 6, 55–63.
16. Dunn M., Greenhill S. J., Levinson S. C., Gray R. D. (2011). Evolved structure of language shows lineage-specific trends in word-order universals. Nature, 473, 79–82.
17. Erlichson H. (1999). Sadi Carnot, 'Founder of the Second Law of Thermodynamics.' European Journal of Physics, 20, 183–192.
18. Evans N., Levinson S. C. (2009). The myth of language universals: Language diversity and its importance for cognitive science. Behavioral & Brain Sciences, 32, 429–448.
19. Extance A. (2018). How AI technology can tame the scientific literature. Nature, 561, 273–274.
20. Fanelli D. (2019). A theory and methodology to quantify knowledge. Royal Society Open Science, 6, Article 181055. doi: 10.1098/rsos.181055
21. Fann K. T. (1970). Peirce's theory of abduction. The Hague, The Netherlands: Martinus Nijhoff.
22. Feyerabend P. (1975). Against method. London, England: New Left.
23. Friston K. (2009). The free-energy principle: A rough guide to the brain? Trends in Cognitive Sciences, 13, 293–301.
24. Friston K. (2010). The free-energy principle: A unified brain theory? Nature Reviews Neuroscience, 11, 127–138.
25. Friston K. (2013). Life as we know it. Journal of the Royal Society Interface, 10, Article 20130475. doi: 10.1098/rsif.2013.0475
26. Friston K., Rigoli F., Ognibene D., Mathys C., Fitzgerald T., Pezzulo G. (2015). Active inference and epistemic value. Cognitive Neuroscience, 6, 187–214.
27. Hagger M. S., Chatzisarantis N. L., Alberts H., Anggono C. O., Batailler C., Birt A. R., . . . Calvillo D. P. (2016). A multilab preregistered replication of the ego-depletion effect. Perspectives on Psychological Science, 11, 546–573.
28. Harari Y. N. (2014). Sapiens: A brief history of humankind. London, England: Random House.
29. Haslam N., Lusher D. (2011). The structure of mental health research: Networks of influence among psychiatry and clinical psychology journals. Psychological Medicine, 41, 2661–2668.
30. Hayes A. F. (2018). Introduction to mediation, moderation, and conditional process analysis: A regression-based approach (2nd ed.). New York, NY: Guilford Press.
31. Held L., Ott M. (2018). On p-values and Bayes factors. Annual Review of Statistics and Its Application, 5, 393–419.
32. Henriques G. (2003). The tree of knowledge system and the theoretical unification of psychology. Review of General Psychology, 7, 150–182.
33. Henriques G. (2011). A new unified theory of psychology. New York, NY: Springer Science & Business Media.
34. Hirsh J. B., Mar R. A., Peterson J. B. (2012). Psychological entropy: A framework for understanding uncertainty-related anxiety. Psychological Review, 119, 304–320.
35. Hogg M. A., Adelman J. R., Blagg R. D. (2010). Religion in the face of uncertainty: An uncertainty-identity theory account of religiousness. Personality and Social Psychology Review, 14, 72–83.
36. Hull D. L., Tessner P. D., Diamond A. M. (1978). Planck's principle. Science, 202, 717–723.
37. Itti L., Baldi P. (2009). Bayesian surprise attracts human attention. Vision Research, 49, 1295–1306.
38. Jaynes J. (1976). The origin of consciousness in the breakdown of the bicameral mind. New York, NY: Houghton Mifflin Harcourt.
39. Koch S. (1981). The nature and limits of psychological knowledge: Lessons of a century qua "science." American Psychologist, 36, 257–269.
40. Koole S. L., Lakens D. (2012). Rewarding replications: A sure and simple way to improve psychological science. Perspectives on Psychological Science, 7, 608–614.
41. Kuhn T. S. (1962). The structure of scientific revolutions. Chicago, IL: University of Chicago Press.
42. Lakatos I. (1970). Falsification and the methodology of scientific research programmes. In Lakatos I., Musgrave A. (Eds.), Criticism and the growth of knowledge (pp. 91–196). London, England: Cambridge University Press.
43. Leahey T. H. (1987). A history of psychology: Main currents in psychological thought. Englewood Cliffs, NJ: Prentice Hall.
44. Leahey T. H. (1994). A history of modern psychology. Englewood Cliffs, NJ: Prentice Hall.
45. Lehrer K. (2018). Theory of knowledge (2nd ed.). New York, NY: Routledge.
46. Lindley D. V. (1956). On a measure of the information provided by an experiment. The Annals of Mathematical Statistics, 27, 986–1005.
47. Liu J. H., Liu S. H. (1997). Modernism, postmodernism, and neo-Confucian thinking: A critical history of paradigm shifts and values in academic psychology. New Ideas in Psychology, 15, 159–178.
48. Locke E. A. (2007). The case for inductive theory building. Journal of Management, 33, 867–890.
49. Locke E. A. (2015). Theory building, replication, and behavioral priming: Where do we need to go from here? Perspectives on Psychological Science, 10, 408–414.
50. Locke E. A., Latham G. P. (2002). Building a practically useful theory of goal setting and task motivation: A 35-year odyssey. American Psychologist, 57, 705–717.
51. Loftus G. R. (1996). Psychology will be a much better science when we change the way we analyze data. Current Directions in Psychological Science, 5, 161–171.
52. Lupyan G., Dale R. (2016). Why are there different languages? The role of adaptation in linguistic diversity. Trends in Cognitive Sciences, 20, 649–660.
53. MacKay D. J. (1992). Information-based objective functions for active data selection. Neural Computation, 4, 590–604.
54. Madigan R., Johnson S., Linton P. (1995). The language of psychology: APA style as epistemology. American Psychologist, 50, 428–436.
55. Marsh H. W., Jayasinghe U. W., Bond N. W. (2008). Improving the peer-review process for grant applications: Reliability, validity, bias, and generalizability. American Psychologist, 63, 160–168.
56. Martens A., Johns M., Greenberg J., Schimel J. (2006). Combating stereotype threat: The effect of self-affirmation on women's intellectual performance. Journal of Experimental Social Psychology, 42, 236–243.
57. Matz D. C., Wood W. (2005). Cognitive dissonance in groups: The consequences of disagreement. Journal of Personality and Social Psychology, 88, 22–37.
58. Medin D., Ojalehto B., Marin A., Bang M. (2017). Systems of (non-)diversity. Nature Human Behaviour, 1, Article 0088. doi: 10.1038/s41562-017-0088
59. Moher D., Naudet F., Cristea I. A., Miedema F., Ioannidis J. P., Goodman S. N. (2018). Assessing scientists for hiring, promotion, and tenure. PLOS Biology, 16(3), Article e2004089. doi: 10.1371/journal.pbio.2004089
  60. Moulin C., Souchay C. (2015). An active inference and epistemic value view of metacognition. Cognitive Neuroscience, 6, 221–222. [DOI] [PubMed] [Google Scholar]
  61. Muthukrishna M., Henrich J. (2019). A problem in theory. Nature Human Behaviour, 3, 221–229. [DOI] [PubMed] [Google Scholar]
  62. Oudeyer P. Y., Kaplan F. (2009). What is intrinsic motivation? A typology of computational approaches. Frontiers in Neurorobotics, 1, Article 6. doi: 10.3389/neuro.12.006.2007 [DOI] [PMC free article] [PubMed] [Google Scholar]
  63. Pais A. (1982). ‘Subtle is the Lord . . . ’: The science and the life of Albert Einstein. New York, NY: Oxford University Press. [Google Scholar]
  64. Pier E. L., Brauer M., Filut A., Kaatz A., Raclaw J., Nathan M. J., . . . Carnes M. (2018). Low agreement among reviewers evaluating the same NIH grant applications. Proceedings of the National Academy of Sciences, 115, 2952–2957. [DOI] [PMC free article] [PubMed] [Google Scholar]
  65. Popper K. R. (1959). The logic of scientific discovery. London, England: Hutchinson. [Google Scholar]
  66. Popper K. R. (1963). Conjectures and refutations: The growth of scientific knowledge. London, England: Routledge. [Google Scholar]
  67. Rosnow R. L., Rosenthal R. (1989). Statistical procedures and the justification of knowledge in psychological science. American Psychologist, 44, 1276–1284. [Google Scholar]
  68. Rozin P. (2001). Social psychology and science: Some lessons from Solomon Asch. Personality and Social Psychology Review, 5, 2–14. [Google Scholar]
  69. Rozin P. (2009). What kind of empirical research should we publish, fund, and reward? A different perspective. Perspectives on Psychological Science, 4, 435–439. [DOI] [PubMed] [Google Scholar]
  70. Safer M. A., Tang R. (2009). The psychology of referencing in psychology journal articles. Perspectives on Psychological Science, 4, 51–53. [DOI] [PubMed] [Google Scholar]
  71. Seth A. K. (2015). Inference to the best prediction: A reply to Wanja Wiese. In Metzinger T., Windt J. M. (Eds.), Open MIND (pp. 1–8). Frankfurt am Main, Germany: MIND Group. [Google Scholar]
  72. Shannon C. E. (1948). A mathematical theory of communication. Bell Systems Technical Journal, 27, 379–423. [Google Scholar]
  73. Shrout P. E., Rodgers J. L. (2018). Psychology, science, and knowledge construction: Broadening perspectives from the replication crisis. Annual Review of Psychology, 69, 487–510. [DOI] [PubMed] [Google Scholar]
  74. Sigal M. J., Pettit M. (2012). Information overload, professionalization, and the origins of the publication manual of the American Psychological Association. Review of General Psychology, 16, 357–363. [Google Scholar]
  75. Simon R. J., Fyfe J. J. (Eds.). (1994). Editors as gatekeepers: Getting published in the social sciences. Lanham, MD: Rowman & Littlefield. [Google Scholar]
  76. Simons D. J. (2014). The value of direct replication. Perspectives on Psychological Science, 9, 76–80. [DOI] [PubMed] [Google Scholar]
  77. Strack F., Mussweiler T. (1997). Explaining the enigmatic anchoring effect: Mechanisms of selective accessibility. Journal of Personality and Social Psychology, 73, 437–446. [Google Scholar]
  78. Suls J., Martin R. (2009). The air we breathe: A critical look at practices and alternatives in the peer-review process. Perspectives on Psychological Science, 4, 40–50. [DOI] [PubMed] [Google Scholar]
  79. Sun Y., Gomez F., Schmidhuber J. (2011). Planning to be surprised: Optimal Bayesian exploration in dynamic environments. In Schmidhuber J., Thórisson K. R., Looks M. (Eds.), International Conference on Artificial General Intelligence (pp. 41–51). Berlin, Germany: Springer. [Google Scholar]
  80. Tervo D. G. R., Tenenbaum J. B., Gershman S. J. (2016). Toward the neural implementation of structure learning. Current Opinion in Neurobiology, 37, 99–105. [DOI] [PubMed] [Google Scholar]
  81. Trope Y., Liberman N. (2010). Construal-level theory of psychological distance. Psychological Review, 117, 440–463. [DOI] [PMC free article] [PubMed] [Google Scholar]
  82. United Nations Educational, Cultural and Scientific Organization. (2009). UNESCO world report: Investing in cultural diversity and intercultural dialogue. Paris, France: Author. [Google Scholar]
  83. Verhagen J., Wagenmakers E. J. (2014). Bayesian tests to quantify the result of a replication attempt. Journal of Experimental Psychology: General, 143, 1457–1475. [DOI] [PubMed] [Google Scholar]
  84. Wagenmakers E. J., Wetzels R., Borsboom D., van der Maas H. L., Kievit R. A. (2012). An agenda for purely confirmatory research. Perspectives on Psychological Science, 7, 632–638. [DOI] [PubMed] [Google Scholar]
