2013 Aug 16;61(1):38–47. doi: 10.1027/1618-3169/a000225

Illusion of Control

The Role of Personal Involvement

Ion Yarritu 1, Helena Matute 1, Miguel A Vadillo 2
PMCID: PMC4013923  PMID: 23948387

Abstract

The illusion of control consists of overestimating the influence that our behavior exerts over uncontrollable outcomes. Available evidence suggests that an important factor in the development of this illusion is the personal involvement of participants who are trying to obtain the outcome. The dominant view assumes that this is due to social motivations and self-esteem protection. We propose that it may instead be due to a bias in contingency detection that occurs when the probability of the action (i.e., of the potential cause) is high. Indeed, personal involvement may often have been confounded with the probability of acting, as participants who are more involved tend to act more frequently than those for whom the outcome is irrelevant and who therefore become mere observers. We tested these two variables separately. In two experiments, the outcome was always uncontrollable, and we used a yoked design in which the participants in one condition were actively involved in obtaining it while the participants in the other condition observed the adventitious cause-effect pairs. The results support the latter approach: Those acting more often to obtain the outcome developed stronger illusions, and so did their yoked counterparts.

Keywords: illusion of control, illusion of causality, contingency judgments, causal judgments, causal learning


In her seminal work on the illusion of control, Langer (1975) found that people trying to obtain a desired outcome that occurred independently of their behavior tended to believe that they were controlling it. The experiments conducted by Langer were followed by many studies with a common feature: Even though the participants’ behavior was not the actual cause of the outcomes, participants nevertheless believed that they were controlling the outcomes (e.g., Alloy & Abramson, 1979; Matute, 1995, 1996; Ono, 1987; Rudski, Lischner, & Albert, 1999; Thompson, 1999; Vyse, 1997).

A common index to measure the contingency between two events is the normative ∆p rule (Jenkins & Ward, 1965), computed as the difference between the probability that the outcome occurs in the presence and in the absence of the potential cause: ∆p = p(O|C) − p(O|¬C). If these two probabilities are equal, the contingency between the two events is zero and there is no causal relationship between them. It is in these null-contingency cases that the illusion of control occurs.
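The ∆p computation can be illustrated with a short sketch (a hypothetical example for clarity, not code from the study). The cell labels follow the standard 2 × 2 contingency table: a = cause and outcome, b = cause and no outcome, c = no cause and outcome, d = no cause and no outcome.

```python
def delta_p(a: int, b: int, c: int, d: int) -> float:
    """Return p(O|C) - p(O|not C) for the given cell frequencies."""
    p_o_given_c = a / (a + b)        # outcome rate when the cause is present
    p_o_given_not_c = c / (c + d)    # outcome rate when the cause is absent
    return p_o_given_c - p_o_given_not_c

# A null contingency: the outcome occurs at rate .80 with or without the cause.
print(delta_p(40, 10, 40, 10))  # -> 0.0
```

With ∆p = 0 the outcome is objectively uncontrollable, and any positive judgment of control is, by definition, illusory.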

The traditional approach to the illusion of control has been framed in motivational terms (e.g., Koenig, Clements, & Alloy, 1992; Langer, 1975; Thompson, Armstrong, & Thomas, 1998). From this perspective, people’s judgments of control are influenced by subjective needs related to the maintenance and enhancement of self-esteem (e.g., Heider, 1958; Kelley, 1973; Weiner, 1979). One of these is the so-called need for control (e.g., Adler, 1930; Kelley, 1973; White, 1959). It has been shown that the sense of having control benefits well-being (e.g., Bandura, 1989; Lefcourt, 1973). The perception of uncontrollability, in turn, has been related to negative consequences at the emotional, cognitive, and motivational levels (Overmier & Seligman, 1967; Seligman & Maier, 1967), and even to depression (Abramson, Seligman, & Teasdale, 1978).

Given the importance of actual and perceived control, some researchers have suggested that the illusion of control is a self-serving bias that protects people from the negative consequences of perceiving the uncontrollability of important events (e.g., Alloy & Abramson, 1979; Alloy, Abramson, & Kossman, 1985; Koenig et al., 1992). Like other self-serving biases, the illusion of control is seen as a self-esteem-enhancing mechanism that allows people to take credit for successful actions and to deny responsibility for failures (Bradley, 1978; Heider, 1976). In this way, when people acting to obtain a desired outcome face a random sequence of successes and failures, they may tend to view themselves as responsible for the successes and attribute the failures to other causes such as, for example, chance (e.g., Langer & Roth, 1975). Moreover, some researchers have found a positive relationship between the degree of need for an outcome and participants’ overconfidence in their own chances of obtaining it (Biner, Angle, Park, Mellinger, & Barber, 1995).

From this perspective, overestimating the actual degree of control over an event matters only to the extent that controlling it might pose a challenge to self-esteem. Thus, people do not need to overestimate their control over events that are irrelevant to their self-esteem. The extent to which people are involved in obtaining the outcome, or the extent to which the outcome is important to them, becomes a crucial factor in this approach (see Thompson, 1999). This factor, which we will call personal involvement, depends on the potential causal role of the participant’s actions, as opposed to external causes (Alloy et al., 1985; Langer, 1975; Langer & Roth, 1975). Following this reasoning, Alloy et al. (1985) also claimed that the illusion of control should be larger in situations in which a person’s behavior is the potential cause because these situations are relevant to self-esteem; cases in which the person’s behavior is not a potential cause are irrelevant and should not produce an illusion.

Evidence for this view comes mainly from studies on the depressive realism effect (Alloy & Abramson, 1979; Alloy, Abramson, & Viscusi, 1981; Alloy et al., 1985; Msetfi, Murphy, & Simpson, 2007; Msetfi, Murphy, Simpson, & Kornbrot, 2005; Presson & Benassi, 2003). In their seminal work, Alloy and Abramson (1979) found that depressed and nondepressed people differed in their ability to detect the absence of control. Nondepressed participants showed an illusion of control when they judged the control they exerted over uncontrollable outcomes, whereas depressed participants accurately perceived their absence of control. This has generally been interpreted as depressed participants lacking the motivation to use the self-serving mechanism that leads to the illusion of control (or, vice versa, as a weaker susceptibility to the illusion of control being part of the causal chain leading to depression; see Alloy & Abramson, 1979; Alloy et al., 1985).

A very different approach has emphasized the cognitive aspects of the illusion of control. Within this framework, the illusion of control is seen as a deviation from the accurate judgments of contingency (i.e., those based on ∆p; see, e.g., Allan & Jenkins, 1983) that should be expected when participants learn the relationship between their behavior and uncontrollable outcomes. Research in this field has been interested in how people make use of the information derived from cause-outcome pairings, regardless of whether the cause is the behavior of the person who judges the causal relation or an external event (e.g., Allan & Jenkins, 1983; Blanco, Matute, & Vadillo, 2013; Jenkins & Ward, 1965; Kao & Wasserman, 1993; Shanks, 2007; Wasserman, 1990). From this perspective, the illusion of control has been regarded as a special case of a more general illusion known as the illusion of causality (see Matute, Yarritu, & Vadillo, 2011). Therefore, the illusion of control is expected to work just like any other causal illusion in which the potential cause is an external event.

When participants act (potential cause) to obtain the outcome, their action can be successful (the outcome occurs) or not (the outcome does not occur). These two situations are represented by cells a and b of the contingency table (see Table 1), respectively. Similarly, if the participant does not act to obtain the outcome (i.e., the potential cause is absent), the outcome can occur or not, which is represented in Table 1 by cells c and d. The potential cause in this table does not need to be the participant’s behavior. Despite the many differences among the various theories of contingency judgments that attempt to explain the illusion of control and related effects (see Blanco, Matute, & Vadillo, 2011, 2012), they all agree that decades of research in this area have shown that people do not give the same weight to each cell in the contingency matrix (e.g., Kao & Wasserman, 1993). Cause-outcome coincidences (i.e., cell a) are known to be the pieces of information that have the largest impact on contingency judgments (e.g., Anderson & Sheu, 1995; Kao & Wasserman, 1993; Matute et al., 2011; Smedslund, 1963; White, 2003). Thus, a variety of theories of contingency judgments, which are clearly different from each other (see Shanks, 2007, 2010, for comprehensive reviews of associative, inferential, and other theoretical accounts of contingency judgments), will nevertheless predict that any factor that increases the number of cell a events relative to the other cells should promote higher judgments.

Table 1. Contingency matrix containing the four possible cause-outcome combinations.

                O (outcome)    ¬O (no outcome)
C (cause)            a                b
¬C (no cause)        c                d
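As an illustration of that prediction, consider a simple weighted-cell rule. This is an illustrative descriptive model with assumed weights, not a model proposed in this article: any scheme that, as the cell-weighting literature suggests, weights cell a most heavily will produce higher judgments whenever cell a frequencies rise, even at zero contingency.

```python
# Assumed weights ordered a > b ~ c > d, for illustration only.
WEIGHTS = {"a": 1.0, "b": 0.6, "c": 0.6, "d": 0.4}

def weighted_judgment(a, b, c, d):
    """Confirming evidence (a, d) minus disconfirming evidence (b, c), cell-weighted."""
    num = WEIGHTS["a"] * a - WEIGHTS["b"] * b - WEIGHTS["c"] * c + WEIGHTS["d"] * d
    return num / (a + b + c + d)

# Two null-contingency conditions, p(O) = .80 in both, p(C) = .80 vs .20:
high_pc = weighted_judgment(64, 16, 16, 4)  # cause present on 80 of 100 trials
low_pc = weighted_judgment(16, 4, 64, 16)   # cause present on 20 of 100 trials
print(high_pc > low_pc)  # the high-p(C) condition yields the larger judgment
```

Under these assumed weights the high-p(C) condition scores well above the low-p(C) condition even though ∆p = 0 in both, which is the qualitative pattern the cell-weighting theories predict.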

One such factor is the probability of the outcome, p(O). It is well known that when p(O) is high, people tend to overestimate the relationship between the potential cause and the outcome. This is known as the outcome-density bias and is a key factor in the development of the illusions of causality and of control (Allan & Jenkins, 1983; Alloy & Abramson, 1979; Hannah & Beneteau, 2009; Matute, 1995; Msetfi et al., 2005; Tennen & Sharpe, 1983). In addition to the outcome-density bias there is the cue-density bias, which refers to an overestimation of contingency when the probability of the potential cause, p(C), is high (Allan & Jenkins, 1983; Hannah & Beneteau, 2009; Vadillo, Musca, Blanco, & Matute, 2011). While the outcome-density bias has been widely studied, both in situations in which participants are personally involved (i.e., the participants’ behavior is the potential cause; see, e.g., Matute, 1995) and in situations in which they are not (i.e., an external event is the potential cause; see, e.g., Allan, Siegel, & Tangen, 2005), the effect of the probability of the cause (i.e., the action) on the illusion of control has received less attention. However, there is evidence supporting the idea that the more the participants act, the greater their contingency judgments will be (e.g., Blanco, Matute, & Vadillo, 2009; Blanco et al., 2011; Matute, 1996).

It follows from these analyses that even when the outcome is uncontrollable, if p(O) is high, a person who acts frequently to obtain the outcome will experience a high number of cause-outcome coincidences and will almost certainly develop an illusion of control (Blanco et al., 2011; Matute, 1996). Importantly, participants who are personally involved in trying to obtain an outcome tend to act more frequently than those for whom the outcome is irrelevant, who often become mere observers (at best). Thus, these two variables, personal involvement and action probability, may often have been confounded. We therefore propose that if the two variables are tested separately, it might turn out that it is not personal involvement per se, but the probability of action, that produces the illusion.

Importantly, there is evidence suggesting that being the one who performs the action is not even necessary. The effect of p(C) has been demonstrated in situations in which the potential cause is an external event (e.g., Kutzner, Freytag, Vogel, & Fiedler, 2008; Matute et al., 2011; Perales, Catena, Shanks, & González, 2005; Vadillo et al., 2011). For instance, in a recent experiment by Matute et al. (2011), the contingency between a potential cause (a fictitious medicine administered by a fictitious agent) and an outcome (recovery from illness) was zero, but p(O) was .80. For one group p(C) was .80; for the other it was .20. Both groups showed an illusion of causality, but the former gave significantly higher judgments than the latter. Thus, being personally involved is not necessary to develop the illusion, and a high probability of one’s own action is not necessary either. Instead, it is the high frequency with which the potential cause occurs (assuming that the desired outcome is also frequent, and regardless of who the agent is) that predicts when the illusion will occur. Nevertheless, it should be noted that those experiments used scenarios in which all participants were observers and did not compare them to conditions in which participants were acting to obtain the outcome and a true illusion of control could develop. The present research aimed to provide such a comparison.

To our knowledge, one of the very few studies that empirically compared the illusion of control under conditions in which the potential cause was the participant’s behavior or an external cause is that of Alloy et al. (1985). Their conclusions were opposite to our expectations: They reported that personal involvement, and not p(C), was the necessary factor in the development of the illusion. However, several methodological issues in their study could explain those results. What Alloy et al. (1985) found was that the illusion of control appeared when participants were asked about the causal relationship between their behavior and an outcome, and not when asked about the predictive relationship between external events. This result need not mean that personal involvement is necessary for the illusion of control to occur. Alternatively, it could be due to the fact that different questions (i.e., causal vs. predictive) give rise to different judgments (Matute, Vegas, & De Marez, 2002; Vadillo & Matute, 2007; White, 2003). In addition, the difference observed by Alloy et al. could be due to their using causes in one group and predictors in the other, as causes and predictors have also been shown to produce different judgments (Pineño, Denniston, Beckers, Matute, & Miller, 2005). Moreover, Alloy et al. did not report the participants’ number of actions. In their studies, the number and sequence of action and no-action trials produced by the participants who were involved in obtaining the outcome could have been very different from the number of cue events presented to the participants who were observers, and this difference might also explain their differential judgments. The fact that this variable was not reported suggests that it was not considered relevant and might have been confounded.
Ideally, to compare the judgments in the two cases, the cue (whether the participant’s behavior or an external event) must occur with the same frequency and distribution in both. Moreover, similar cause and effect events and similar assessment questions should be used in both cases. The present research aimed to provide a fairer comparison between conditions in which the potential cause is the participants’ behavior and conditions in which the cause is an external event.

Experiment 1

We used a yoked design. Participants were shown the records of fictitious patients who suffered from a fictitious disease. Each participant in Group Active was free to administer a fictitious medicine to their patients. Each participant in Group Yoked observed the sequence of actions performed by their counterpart in Group Active, as well as their consequences. Therefore, the probability and sequence with which the cause occurred was defined by Group Active. For participants in Group Active the potential cause of the outcome was their own behavior; for those in Group Yoked it was an external event.

The yoked design allowed us to test the effect of two variables that have often been confounded: personal involvement (Active vs. Yoked Group) and p(C). It is when these two variables are disentangled that the predictions of the motivational and the cognitive approaches become clearly different. According to the motivational approach, if the two variables are separated from each other, only personal involvement should affect the judgments of contingency. By contrast, according to the cognitive account, it is p(C) that should affect the participants’ judgments, regardless of whether they are actors or observers.

Method

Participants and Apparatus

Ninety-two anonymous volunteers participated in the experiment in exchange for a cafeteria voucher. The sequence of cause-outcome pairings presented to each participant in Group Yoked was derived from the performance of the corresponding active participant. Thus, it was necessary to program the computer differently for each yoked participant. For this reason, the first 10 participants were assigned to Group Active. Participants were then randomly assigned to each condition as they arrived at the laboratory, resulting in a total of 46 participants in Group Active and 46 in Group Yoked. The experiment was run on personal computers located in individual booths.

Procedure and Design

The task was an adaptation of the allergy task, which has been widely used in contingency judgment research. This task has proven sensitive to the illusion of causality both when the potential cause is an external event (e.g., Matute et al., 2011) and when it is the participant’s behavior (Blanco et al., 2011). As in Blanco et al.’s study, we modified the standard procedure so that it would allow the participants’ actions to serve as potential causes. Participants were asked to imagine being a medical doctor specialized in a rare disease called “Lindsay Syndrome.” They were told about a new medicine (Batatrim) that could cure the crises caused by the disease; their mission was to find out whether this medicine was effective. There were 100 learning trials (i.e., 100 fictitious patients) before the test phase. In each trial, participants in Group Active were free to act (to administer the medicine to a fictitious patient) and observe the effects. Participants in Group Yoked saw, in each trial, whether the patient was given the medicine (cause) as well as whether the patient recovered (outcome). The probability of the cause for each pair of Active-Yoked participants was thus defined by the number of trials in which the active participant decided to administer the medicine, divided by the total number of trials. The sequence of trials in which the cause (i.e., Batatrim) was present or absent for the participants in Group Yoked was likewise defined by the sequence of trials in which their counterpart active participant decided to administer the medicine. Neither the active nor the yoked participants were aware of this feature of the design. The occurrence of the outcome (recovery from the crises) was independent of the participants’ behavior and followed a predefined pseudorandom sequence, identical for both groups. Therefore, the resulting sequence of cue-outcome pairings was identical for each Active-Yoked pair of participants.
The probability of the outcome was high (.80) because, as described above, this is known to lead to a stronger illusion of control.
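The yoking logic just described can be sketched as follows. The function and variable names are hypothetical, not the authors' actual software, and the action rate is a stand-in for a real participant's behavior: the outcome sequence is predefined and pseudorandom, so it is independent of anyone's actions, and the yoked observer sees exactly the cause-outcome pairs generated by the active participant.

```python
import random

def make_outcome_sequence(n_trials=100, p_outcome=0.80, seed=7):
    """Predefined pseudorandom outcomes, identical for both members of a pair."""
    rng = random.Random(seed)
    return [rng.random() < p_outcome for _ in range(n_trials)]

def yoked_trials(active_actions, outcomes):
    """Pair each of the active participant's decisions with its outcome."""
    return list(zip(active_actions, outcomes))

outcomes = make_outcome_sequence()
rng = random.Random(1)                                       # stand-in for a real participant
active_actions = [rng.random() < 0.59 for _ in range(100)]   # assumed action rate, for illustration
pairs = yoked_trials(active_actions, outcomes)
p_cause = sum(active_actions) / len(active_actions)          # identical for active and yoked member
```

The design point this captures is that p(C), and the exact trial sequence, are necessarily matched within each Active-Yoked pair, so any judgment difference between the two groups can only reflect personal involvement.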

After completing all 100 training trials, participants were presented with the following question: To what extent do you think that Batatrim was effective in healing the crises of the patients you have seen? In the illusion of control experiments, participants are usually asked about the extent to which they believe that their behavior was effective in controlling the outcome. Because the potential cause in our experiment was an external event for half of the participants, we substituted the standard controllability wording for the more general “effectiveness” phrasing. This allowed us to present the same question to all participants. The answers were given by clicking on a 0–100 scale, anchored at 0 (definitely NOT) and 100 (definitely YES).

Results and Discussion

The mean p(C) was computed from the actions of the active participants, so its value was the same for the Active and Yoked Groups: The mean and standard error of the mean were 0.59 and 0.03, respectively. We conducted a multiple regression analysis with personal involvement, p(C), the interaction between these two factors, and the actual experienced contingency as predictors of the judgments. We used the backward elimination method, which tests a series of regression models, excluding from each new model the worst predictor of the previously tested model according to a statistical criterion (p ≥ .10). This strategy reduces the risk of failing to detect a relationship that actually exists (see Menard, 1995). The results of this analysis can be seen in Table 2. According to this method, actual experienced contingency, personal involvement, and the Personal Involvement × p(C) interaction were excluded, in that order, as predictors of the participants’ judgments. The final and most parsimonious model contained only p(C).

Table 2. Results of backward elimination regression analysis.

Model 1: R² = .26, F(4, 91) = 7.61, p = .001
  Cause probability: β = 0.54, t(91) = 3.90, p = .001
  Cause Probability × Personal Involvement: β = 0.41, t(91) = 1.64, p = .104
  Personal involvement: β = 0.41, t(91) = 1.62, p = .109
  Experienced contingency: β = −0.08, t(91) = −0.57, p = .569

Model 2: R² = .26, F(3, 91) = 10.11, p = .001
  Cause probability: β = 0.48, t(91) = 5.25, p = .001
  Cause Probability × Personal Involvement: β = 0.41, t(91) = 1.65, p = .102
  Personal involvement: β = 0.41, t(91) = 1.63, p = .107

Model 3: R² = .23, F(2, 91) = 13.59, p = .001
  Cause probability: β = 0.48, t(91) = 5.20, p = .001
  Cause Probability × Personal Involvement: β = 0.03, t(91) = 0.37, p = .711

Model 4: R² = .23, F(1, 91) = 27.31, p = .001
  Cause probability: β = 0.483, t(91) = 5.23, p = .001

To further assess the influence of p(C), the sample was then classified as a function of the number of actions performed by the Active Group, that is, their p(C). We selected participants at or below the 33.33rd percentile of this variable (Low p(C), a probability of 0.50 or lower) and participants at or above the 66.66th percentile (High p(C), a probability of 0.68 or higher). The mean judgments for each p(C) condition in each of the two personal involvement conditions can be seen in Figure 1. A 2 (Probability of the Cause: High vs. Low) × 2 (Personal Involvement: Active vs. Yoked) analysis of variance (ANOVA) showed a main effect of p(C), F(1, 60) = 14.08, p < .001, ηp² = .19. All other effects were nonsignificant, largest F(1, 62) = 1.91, ηp² = .03. Thus, as expected from the cognitive account, it was the frequency with which the cause occurred (be it the participant’s behavior or an external event) that produced a higher or lower illusion, and not the fact that some participants acted and others observed.

Figure 1. Mean judgments given by participants of Experiment 1 in the Active and Yoked groups as a function of p(C), high or low. Error bars denote the standard error of the mean.


Despite the fact that these results clearly suggest that the key factor in the development of the illusion of control is p(C), there are reasons why a definitive claim in favor of the cognitive hypothesis must be taken with caution. First, it could be argued that the active participants in this experiment were not really engaged in the task. Given that participants were prompted to detect the relationship between the medicine and recovery from the crises, and not to obtain the outcome (i.e., recovery), it is possible that their motivation to control the outcome was low. Second, it could be argued that the personal involvement and the probability-of-the-cause factors did not have the same chances to affect participants’ judgments. In this experiment p(C) was a continuous variable derived from the action rate of Group Active, while personal involvement was a dichotomous variable that resulted from the experimental manipulation. The greater number of levels of the cognitive variable, p(C), in comparison to the involvement variable, could have favored the observation of a significant correlation between judgments and p(C). The next experiment addresses these concerns.

Experiment 2

We introduced two main modifications in Experiment 2 with respect to the involvement factor and one with respect to the cognitive factor. First, we tried to better motivate the active participants by changing the overall goal of the task: In this experiment we explicitly informed all participants that the main goal was to obtain as many outcomes as possible, that is, to heal as many patients as possible. Second, we manipulated personal involvement using the actor-observer procedure commonly used in the self-serving literature. To do so, we used an on-line yoked procedure in which, while the active participant was performing the experiment on his or her computer, the yoked participant observed everything (i.e., both the decisions of the active participant and their outcomes) on a cloned screen. In this case both active and yoked participants were aware of this feature. That is, the self-esteem relevance of the task for the active participants came not only from their agent role and their motivation to obtain more outcomes, but also from being observed.

With respect to the cognitive factor, in Experiment 1 we did not manipulate the probability of acting; we simply measured it. Thus, to further clarify the effect of this factor, in Experiment 2 we manipulated the probability with which the active participants acted (i.e., administered the medicine). This manipulation featured two levels, high and low, thereby also ensuring that the personal involvement and the cognitive factor (probability of the cause) had the same chances to affect the participants’ judgments.

As in the previous experiment, the predictions of the two approaches to the illusion of control are also clearly different from each other in Experiment 2. From the motivational approach, it is expected that the illusion of control will be larger when participants judge the effects of their own behavior (active participants) than when they judge the effects of the behavior of others (yoked participants). From the cognitive approach, there is no reason to expect that differences should emerge as a function of whether the potential cause is the participants’ behavior or somebody else’s behavior. From this perspective, only p(C) is expected to influence the judgments.

Method

Participants and Apparatus

One hundred anonymous volunteers were paid €5 for their participation. They were run in pairs in individual booths. For each pair of participants, one of them was randomly assigned to the active cubicle (clearly labeled “Participant A” on the wall above the screen, and including a mouse in addition to the computer screen). The other one was assigned to the yoked cubicle (labeled “Participant B” and containing only a screen). The two screens were connected to the same computer so that they showed identical information at all times.

Procedure and Design

This experiment used an adaptation of the task used in Experiment 1. To manipulate personal involvement and the probability of the cause in a more comparable manner, the experiment used a 2 × 2 factorial design. Participants in the two involvement conditions were exposed to exactly the same contingency information, and both were told that the goal of the active participants was to heal as many (fictitious) patients as possible. The instructions they received were also identical, with the following paragraph stating what each of them should do: “If you are participant ‘A’ you will have to decide whether or not to administer Batatrim to each patient. If you are participant ‘B’ you will have to observe those decisions and their consequences.”

The probability of the cause was also manipulated in two levels. Participants in the High condition had a maximum of seven doses of Batatrim for every 10 patients (trials). Participants in the Low condition had a maximum of three doses for every 10 patients. Participants were told that every 10 patients they would get a new supply of seven (or three) doses. They were also requested to use them all. Thus, some participants were asked to respond in 30% of the trials (Low p(C) Group) while others were requested to respond in 70% of trials (High p(C) Group).

As in the previous experiment, the probability of the outcome (recovery) was high (.80) regardless of whether or not the cause was present, and the outcome followed a predefined pseudorandom sequence. Once the training phase was finished, participants gave their effectiveness judgment. The test question was the same as in Experiment 1 but was administered using paper and pencil because each pair of participants shared the same computer. Once participants wrote down their judgment, they received a second sheet of paper with the following question, aimed at assessing whether the involvement manipulation had been effective: To what extent did you feel involved in the healing of the patients? The answers to both questions were given using a 0–100 scale, anchored at 0 (definitely NOT) and 100 (definitely YES).

Results and Discussion

Because the active participants were free to administer Batatrim in each trial (always within the limits of the number of doses imposed by the experimental manipulation), that is, some of them could choose not to act, we first needed to ensure that their action rates coincided with those planned for each condition. To do so, we imposed a selection criterion on the action rate that each active participant had to satisfy for his or her data (and those of the corresponding yoked participant) to be included in the analyses: Active participants had to perform at least 95% of all possible actions. In the Low p(C) Group the limit was 30 doses, and participants were asked to use them all. Therefore, if an active participant in this condition administered the medicine in fewer than 27 trials (95% of 30), the data of that pair of participants were removed from subsequent analyses. For the High p(C) condition, the criterion was that the active participant administer the medicine in 63 trials or more (95% of 70). These criteria were satisfied by 39 of the 50 pairs of participants (78 participants in total). Of these 78 participants, 40 (20 active and 20 yoked) were in the Low p(C) condition and 38 (19 active and 19 yoked) were in the High p(C) condition.

We next analyzed the answers to the question added at the end of the experiment to check whether the involvement manipulation had been effective. The means (and standard errors of the means) for the active and yoked participants were 67.38 (4.02) and 46.36 (4.21), respectively. A 2 (Probability of the Cause) × 2 (Personal Involvement) ANOVA found that, as expected, the degree of personal involvement that participants felt toward the task was higher for the active than for the yoked participants, F(1, 74) = 10.75, p < .005, ηp² = .13. Also as expected, the main effect of p(C) and the interaction were nonsignificant, largest F(1, 74) = 0.94, ηp² = .01. That is, the involvement manipulation worked as planned.

The critical results are the mean judgments of effectiveness for each condition, shown in Figure 2. The figure suggests that judgments did not differ between active and yoked participants, and that judgments were higher in the High than in the Low p(C) condition. A 2 (Probability of the Cause) × 2 (Personal Involvement) ANOVA confirmed these impressions. As expected, a significant main effect of p(C) was found, F(1, 74) = 16.41, p < .001, ηp² = .18, and neither a main effect of personal involvement nor an interaction was observed, largest F(1, 74) = 0.47, ηp² = .01. Therefore, and consistent with our hypothesis, participants’ judgments of contingency were affected by p(C) and not by personal involvement.

Figure 2. Mean judgments given by Active and Yoked groups of Experiment 2 in each p(C) group, high or low. Error bars denote the standard error of the mean.


The results of this experiment are congruent with those of Experiment 1. Moreover, in this case it is difficult to question the validity of the personal involvement manipulation. As shown by the manipulation check, the experimental manipulation affected the extent to which participants felt motivated toward the task. Importantly, this difference between active and yoked participants did not affect their judgments, which were only affected by p(C). This finding leads us to suspect that previous results that have been attributed to personal involvement may not always be due to a direct effect of motivational factors on contingency estimation. Instead, the present results suggest that the apparent effect of personal involvement on judgments might be due to the higher probability of action of participants who are more personally involved.

General Discussion

The results of the two experiments presented here provide little support for the motivational approach. According to this approach, people must be personally involved in trying to obtain the outcome, with their self-esteem at risk, for the illusion to occur (Alloy et al., 1985; Thompson, 1999; Thompson et al., 1998). This claim rests on the idea that the illusion of control is a self-serving bias that is activated when the relationship being judged is relevant to self-esteem (e.g., Alloy & Abramson, 1979; Dudley, 1999; Koenig et al., 1992). However, we did not find an effect of personal involvement when it was tested independently of p(C). Participants in the Yoked Group showed the illusion of control even though their judgments were not relevant to protecting their self-esteem. Moreover, we found a strong effect of p(C). As we noted earlier, this p(C) effect could explain results that have often been attributed to personal involvement in previous research, given that participants who are more involved tend to perform more actions to obtain the outcome.

Alloy et al. (1985) had previously reported an investigation in which, as in the present one, personal involvement and p(C) were separated from each other. They reported that participants who judged the predictive value of an external event did not significantly overestimate the contingency, whereas participants who judged the capacity of their own behavior to control the outcome did. Alloy et al. concluded that people overestimate contingency only when they are judging their own behavior, because only this is relevant for self-protection. The present results do not support their conclusion. Instead, the differences observed by Alloy and her colleagues could be due, as mentioned in the Introduction, to the different assessment question that they used in each case (Matute et al., 2002; Vadillo & Matute, 2007; White, 2003), or to the fact that they used causes in one group and predictors in the other (see Pineño et al., 2005, for differences between them). In addition, Alloy et al. reported neither the number of attempts (i.e., actions) performed by participants in the active condition nor the value of p(C) presented to passive participants. The influence of this factor proved significant in the present research, whereas personal involvement did not. As our results show, when p(C) was high, the illusion was high as well. This is in line with previous studies in which the influence of p(C) was tested. Indeed, this p(C) effect is often described more generally as the probability-of-the-cue effect, or the cue-density effect, as it occurs with either causes or predictors as cue events (see, e.g., Blanco et al., 2011, 2013; Hannah & Beneteau, 2009; Matute, 1996; Matute et al., 2011; Perales et al., 2005; Vadillo et al., 2011).

As noted in the Introduction, another factor that is known to favor the illusion of control is p(O). Thus, we used a situation in which this probability was always high. Given that p(O) is high in the cases in which the illusion occurs, the effect of p(C) appears to be due to the fact that a high p(C) makes it very likely that the cause and the outcome coincide in many trials (see Blanco et al., 2011, 2013). Moreover, it is well known that these cause-effect coincidences tend to carry more weight in the perception of causal relations than trials in which only the cause or only the outcome occurs (e.g., Kao & Wasserman, 1993). As noted in the Introduction, this result is predicted by many different theories of contingency judgments (see Blanco et al., 2011, 2012).
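This reasoning can be made concrete with a short sketch (the probabilities .80 and .20 below are illustrative values chosen by us, not the exact parameters of the experiments). With a null contingency, ΔP = P(O|C) − P(O|¬C) = 0 whatever the value of p(C); what changes is how often cause and outcome coincide (cell a of the 2 × 2 contingency table):

```python
def contingency_cells(p_c, p_o, n=100):
    """Expected 2x2 cell counts when cause and outcome are independent."""
    a = p_c * p_o * n              # cause present, outcome present (coincidences)
    b = p_c * (1 - p_o) * n        # cause present, outcome absent
    c = (1 - p_c) * p_o * n        # cause absent, outcome present
    d = (1 - p_c) * (1 - p_o) * n  # cause absent, outcome absent
    return a, b, c, d

def delta_p(a, b, c, d):
    """Programmed contingency: P(O|C) - P(O|no C)."""
    return a / (a + b) - c / (c + d)

for p_c in (0.20, 0.80):
    a, b, c, d = contingency_cells(p_c, p_o=0.80)
    print(p_c, round(delta_p(a, b, c, d), 6), round(a))
```

With p(C) = .20 the expected number of coincidences per 100 trials is 16; with p(C) = .80 it is 64, even though ΔP is zero in both cases.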

The main contribution of the present experiments is that the effects of personal involvement and the probability of the cause were tested independently of each other. Even though the predictions of the motivational and cognitive approaches are often identical (because increased motivation produces more active behavior), when these two variables are tested separately the predictions of the two approaches become clearly different. For these cases, the motivational approach predicts that only those who act to obtain the outcome should develop the illusion, whereas the cognitive approach predicts that only p(C) should influence the illusion. In our experiments, the judgments of participants who were involved in obtaining the outcome can be directly contrasted with the judgments of those who simply observed identical events. Under these conditions, the results showed that the probability of the potential cause was the only variable that clearly influenced participants’ judgments.

Although our results suggest that personal involvement has no influence on the illusion of control, we must acknowledge that our conclusions are based on the absence of significant differences with respect to this variable. It is possible that our participants were not sufficiently engaged in the task, so that their performance was actually irrelevant to their self-esteem. Nevertheless, in the absence of more convincing evidence about the role of personal involvement in the illusion of control, it seems more parsimonious to assume that a single process (biased contingency detection due to a high probability of the cause) is responsible for the illusions previously attributed to personal involvement (Alloy et al., 1985). Indeed, Matute, Vadillo, Blanco, and Musca (2007) have shown that even an artificial learning system using a very simple and popular learning algorithm, such as the Rescorla and Wagner (1972) model, will develop these illusions when the outcome occurs frequently and the system acts frequently. On the other hand, although the influence of self-protection cannot be ruled out in all cases in which people develop illusions of control, our results show that this influence is not necessary to account for all instances of the illusion of control reported in the literature. In any situation in which personal involvement may translate into more active behavior, psychologists need to be aware that the increase in p(C), rather than a need to protect self-esteem, may be producing the illusion.
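The point about the Rescorla-Wagner model can be illustrated with a toy simulation (a sketch under assumed parameters, not a reproduction of Matute et al., 2007: the learning rate, the deterministic trial sequences, and the 100-trial length are our own choices). A context cue is present on every trial and competes with the action cue; the outcome occurs on 80% of trials regardless of the action, so the programmed contingency is null, yet before asymptote the action cue accumulates more associative strength when it occurs frequently:

```python
ALPHA, LAMBDA = 0.1, 1.0  # assumed learning rate and outcome asymptote

def rescorla_wagner(trials):
    """trials: list of (cue_present, outcome_present) pairs.
    A context cue is present on every trial and competes with the action cue
    for the shared prediction error."""
    v_cue, v_ctx = 0.0, 0.0
    for cue, outcome in trials:
        prediction = v_ctx + (v_cue if cue else 0.0)
        error = (LAMBDA if outcome else 0.0) - prediction
        v_ctx += ALPHA * error
        if cue:
            v_cue += ALPHA * error
    return v_cue

def block(p_c_high):
    # 25-trial blocks with p(O) = .8 and null contingency, in a fixed
    # interleaved order (no randomness, so results are reproducible).
    if p_c_high:  # p(C) = .8: 16 cue+O, 4 cue alone, 4 context+O, 1 context alone
        unit = [(True, True)] * 4 + [(True, False), (False, True)]
        return unit * 4 + [(False, False)]
    else:         # p(C) = .2: 4 cue+O, 1 cue alone, 16 context+O, 4 context alone
        unit = [(False, True)] * 4 + [(False, False), (True, True)]
        return unit * 4 + [(True, False)]

v_high = rescorla_wagner(block(True) * 4)   # 100 trials, high p(C)
v_low = rescorla_wagner(block(False) * 4)   # 100 trials, low p(C)
print(round(v_high, 3), round(v_low, 3))    # v_high should exceed v_low
```

Although the cue-outcome contingency is zero in both sequences, the frequently acting "participant" assigns the action cue clearly more strength than the rarely acting one, mirroring the p(C) effect observed in the judgments.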

In closing, it is important to note that even though the motivational approach is normally presented as an explanation of the illusion of control, it does not really provide such an explanation. That is, it predicts that the illusion will be stronger when people are more personally involved, but it does not attempt to explain how the illusion takes place (see Matute & Vadillo, 2012, for discussion). Once this is acknowledged, our proposal becomes perfectly compatible with the motivational framework. The p(C) explanation we have advanced aims to provide just such an underlying mechanism.

Acknowledgments

Support for this research was provided by Grant No. PSI2011-26965 from Dirección General de Investigación of the Spanish Government and Grant No. IT363-10 from the Basque Government. Ion Yarritu was supported by fellowship BES-2008-009097 from the Spanish Government. We would like to thank Fernando Blanco, Pablo Garaizar, Cristina Orgaz, Nerea Ortega-Castro, and Sara Steegen for illuminating discussions.

Footnotes

1

Because participants in the Active Group are free to act in each trial and the occurrence of the outcome event is predefined in a pseudo-random sequence, there is some degree of variance in the contingency to which participants are actually exposed, but previous research has shown that this variance does not influence participants’ judgments (Blanco, Matute, & Vadillo, 2011). Nevertheless, and despite this variance being identical for both groups in the present research, we preferred to include this variable in the regression analysis.

2

We also conducted an alternative analysis with the complete sample, including those participants who did not comply with the data selection criterion. The results of this alternative analysis do not differ from the analysis presented here.

References

  1. Abramson L. Y., Seligman M. E. P., & Teasdale J. D. (1978). Learned helplessness in humans: Critique and reformulation. Journal of Abnormal Psychology, 87, 49–74. [PubMed] [Google Scholar]
  2. Adler A. (1930). Individual psychology. In Murchison C. (Ed.), Psychologies of 1930 (pp. 395–405). Worcester, MA: Clark University Press. [Google Scholar]
  3. Allan L. G., & Jenkins H. M. (1983). The effect of representations of binary variables on judgment of influence. Learning and Motivation, 14, 381–405. doi: 10.1016/0023-9690(83)90024-3 [Google Scholar]
  4. Allan L. G., Siegel S., & Tangen J. M. (2005). A signal detection analysis of contingency data. Learning & Behavior, 33, 250–263. [DOI] [PubMed] [Google Scholar]
  5. Alloy L. B., & Abramson L. Y. (1979). Judgment of contingency in depressed and nondepressed students: Sadder but wiser? Journal of Experimental Psychology: General, 108, 441–485. doi: 10.1037/0096-3445.108.4.441 [DOI] [PubMed] [Google Scholar]
  6. Alloy L. B., Abramson L. Y., & Kossman D. A. (1985). The judgment of predictability in depressed and nondepressed college students. In Brush F. R., & Overmier J. B. (Eds.), Affect, conditioning, and cognition: Essays on the determinants of behavior (pp. 229–246). Hillsdale, NJ: Erlbaum. [Google Scholar]
  7. Alloy L. B., Abramson L. Y., & Viscusi D. (1981). Induced mood and the illusion of control. Journal of Personality and Social Psychology, 41, 1129–1140. doi: 10.1037/0022-3514.41.6.1129 [Google Scholar]
  8. Anderson J. R., & Sheu C. (1995). Causal inferences as perceptual judgments. Memory & Cognition, 23, 510–524. doi: 10.3758/BF03197251 [DOI] [PubMed] [Google Scholar]
  9. Bandura A. (1989). Human agency in social cognitive theory. American Psychologist, 44(9), 1175–1184. doi: 10.1037/0003-066X.44.9.1175 [DOI] [PubMed] [Google Scholar]
  10. Biner P. M., Angle S. T., Park J. H., Mellinger A. E., & Barber B. C. (1995). Need state and the illusion of control. Personality and Social Psychology Bulletin, 21, 899–907. doi: 10.1177/0146167295219004 [Google Scholar]
  11. Blanco F., Matute H., & Vadillo M. A. (2009). Depressive realism: Wiser or quieter? Psychological Record, 59, 551–562. [Google Scholar]
  12. Blanco F., Matute H., & Vadillo M. A. (2011). Making the uncontrollable seem controllable: The role of action in the illusion of control. Quarterly Journal of Experimental Psychology, 64, 1290–1304. doi: 10.1080/17470218.2011.552727 [DOI] [PubMed] [Google Scholar]
  13. Blanco F., Matute H., & Vadillo M. A. (2012). Mediating role of activity level in the depressive realism effect. PLoS ONE, 7, e46203 doi: 10.1371/journal.pone.0046203 [DOI] [PMC free article] [PubMed] [Google Scholar]
  14. Blanco F., Matute H., & Vadillo M. A. (2013). Interactive effects of the probability of the cue and the probability of the outcome on the overestimation of null contingency. Learning & Behavior. Advance online publication. doi: 10.3758/s13420-013-0108-8 [DOI] [PubMed] [Google Scholar]
  15. Bradley G. W. (1978). Self-serving biases in the attribution process: A reexamination of the fact or fiction question. Journal of Personality and Social Psychology, 36, 56–71. doi: 10.1037/0022-3514.36.1.56 [Google Scholar]
  16. Dudley R. (1999). The effect of superstitious belief on performance following an unsolvable problem. Personality and Individual Differences, 26, 1057–1064. doi: 10.1016/S0191-8869(98)00209-8 [Google Scholar]
  17. Hannah S. D., & Beneteau J. L. (2009). Just tell me what to do: Bringing back experimenter control in active contingency tasks with the command-performance procedure and finding cue density effects along the way. Canadian Journal of Experimental Psychology, 63, 59–73. doi: 10.1037/a0013403 [DOI] [PubMed] [Google Scholar]
  18. Heider F. (1958). The psychology of interpersonal relations. New York, NY: Wiley. [Google Scholar]
  19. Heider F. (1976). A conversation with Fritz Heider. In Harvey J. H., Ickes W. J., & Kidd R. F. (Eds.), New directions in attribution research, Vol 1, Hillsdale, NJ: Erlbaum. [Google Scholar]
  20. Jenkins H. M., & Ward W. C. (1965). Judgment of contingency between responses and outcomes. Psychological Monographs, 79, 1–17. [DOI] [PubMed] [Google Scholar]
  21. Kao S. -F., & Wasserman E. A. (1993). Assessment of an information integration account of contingency judgment with examination of subjective cell importance and method of information presentation. Journal of Experimental Psychology: Learning, Memory, and Cognition, 19, 1363–1386. doi: 10.1037/0278-7393.19.6.1363 [Google Scholar]
  22. Kelley H. H. (1973). The process of causal attribution. American Psychologist, 28, 107–128. [Google Scholar]
  23. Koenig L. J., Clements C. M., & Alloy L. B. (1992). Depression and the illusion of control: The role of esteem maintenance and impression management. Canadian Journal of Behavioural Science/Revue Canadienne Des Sciences Du Comportement, 24, 233–252. doi: 10.1037/h0078706 [Google Scholar]
  24. Kutzner F., Freytag P., Vogel T., & Fiedler K. (2008). Base-rate neglect as a function of base rates in probabilistic contingency learning. Journal of the Experimental Analysis of Behavior, 90, 23–32. doi: 10.1901/jeab.2008.90-23 [DOI] [PMC free article] [PubMed] [Google Scholar]
  25. Langer E. J. (1975). The illusion of control. Journal of Personality and Social Psychology, 32, 311–328. doi: 10.1037/0022-3514.32.2.311 [Google Scholar]
  26. Langer E. J., & Roth J. (1975). Heads I win, tails it’s chance: The illusion of control as a function of the sequence of outcomes in a purely chance task. Journal of Personality and Social Psychology, 32, 951–955. [Google Scholar]
  27. Lefcourt H. M. (1973). The function of the illusions of control and freedom. American Psychologist, 28, 417–425. doi: 10.1037/h0034639 [DOI] [PubMed] [Google Scholar]
  28. Matute H. (1995). Human reactions to uncontrollable outcomes: Further evidence for superstitions rather than helplessness. Quarterly Journal of Experimental Psychology, 48B, 142–157. doi: 10.1080/14640749508401444 [Google Scholar]
  29. Matute H. (1996). Illusion of control: Detecting response-outcome independence in analytic but not in naturalistic conditions. Psychological Science, 7, 289–293. doi: 10.1111/j.1467-9280.1996.tb00376.x [Google Scholar]
  30. Matute H., & Vadillo M. (2012). Causal learning and illusions of control. In Seel N. M. (Ed.), Encyclopedia of the Sciences of Learning. Berlin, Germany: Springer; doi: 10.1007/SpringerReference_301904 [Google Scholar]
  31. Matute H., Vadillo M. A., Blanco F., & Musca S. C. (2007). Either greedy or well informed: The reward maximization – unbiased evaluation trade-off. In Vosniadou S., Kayser D., & Protopapas A. (Eds.), Proceedings of the European Cognitive Science Conference (pp. 341–346). Hove, UK: Erlbaum. [Google Scholar]
  32. Matute H., Vegas S., & De Marez P. J. (2002). Flexible use of recent information in causal and predictive judgments. Journal of Experimental Psychology: Learning, Memory, and Cognition, 28, 714–725. doi: 10.1037/0278-7393.28.4.714 [PubMed] [Google Scholar]
  33. Matute H., Yarritu I., & Vadillo M. A. (2011). Illusions of causality at the heart of pseudoscience. British Journal of Psychology, 102, 392–405. doi: 10.1348/000712610X532210 [DOI] [PubMed] [Google Scholar]
  34. Menard S. (1995). Applied logistic regression analysis. London, UK: Sage. [Google Scholar]
  35. Msetfi R. M., Murphy R. A., & Simpson J. (2007). Depressive realism and the effect of intertrial interval on judgements of zero, positive, and negative contingencies. Quarterly Journal of Experimental Psychology, 60, 461–481. doi: 10.1080/17470210601002595 [DOI] [PubMed] [Google Scholar]
  36. Msetfi R. M., Murphy R. A., Simpson J., & Kornbrot D. E. (2005). Depressive realism and outcome density bias in contingency judgments: The effect of the context and intertrial interval. Journal of Experimental Psychology: General, 134, 10–22. doi: 10.1037/0096-3445.134.1.10 [DOI] [PubMed] [Google Scholar]
  37. Ono K. (1987). Superstitious behaviour in humans. Journal of the Experimental Analysis of Behavior, 47, 261–271. doi: 10.1901/jeab.1987.47-261 [DOI] [PMC free article] [PubMed] [Google Scholar]
  38. Overmier J. B., & Seligman M. E. P. (1967). Effects of inescapable shock upon subsequent escape and avoidance responding. Journal of Comparative and Physiological Psychology, 63, 28–33. [DOI] [PubMed] [Google Scholar]
  39. Perales J. C., Catena A., Shanks D. R., & González J. A. (2005). Dissociation between judgments and outcome-expectancy measures in covariation learning: A signal detection theory approach. Journal of Experimental Psychology: Learning, Memory, and Cognition, 31, 1105–1120. doi: 10.1037/0278-7393.31.5.1105 [DOI] [PubMed] [Google Scholar]
  40. Pineño O., Denniston J. C., Beckers T., Matute H., & Miller R. R. (2005). Contrasting predictive and causal values of predictors and of causes. Learning & Behavior, 33, 184–196. [DOI] [PubMed] [Google Scholar]
  41. Presson P. K., & Benassi V. A. (2003). Are depressive symptoms positively or negatively associated with the illusion of control? Social Behavior and Personality, 31, 483–495. doi: 10.2224/sbp.2003.31.5.483 [Google Scholar]
  42. Rescorla R. A., & Wagner A. R. (1972). A theory of Pavlovian conditioning: Variations in the effectiveness of reinforcement and nonreinforcement. In Black A. H., & Prokasy W. F. (Eds.), Classical conditioning II: Current research and theory (pp. 64–99). New York, NY: Appleton-Century-Crofts. [Google Scholar]
  43. Rudski J. M., Lischner M. I., & Albert L. M. (1999). Superstitious rule generation is affected by probability and type of outcome. Psychological Record, 49, 245–260. [Google Scholar]
  44. Seligman M. E. P., & Maier S. F. (1967). Failure to escape traumatic shock. Journal of Experimental Psychology, 74, 1–9. [DOI] [PubMed] [Google Scholar]
  45. Shanks D. R. (2007). Associationism and cognition: Human contingency learning at 25. Quarterly Journal of Experimental Psychology, 60, 291–309. doi: 10.1080/17470210601000581 [DOI] [PubMed] [Google Scholar]
  46. Shanks D. R. (2010). Learning: From association to cognition. Annual Review of Psychology, 61, 273–301. doi: 10.1146/annurev.psych.093008.100519 [DOI] [PubMed] [Google Scholar]
  47. Smedslund J. (1963). The concept of correlation in adults. Scandinavian Journal of Psychology, 4, 165–173. [Google Scholar]
  48. Tennen H., & Sharp J. (1983). Control orientation and the illusion of control. Journal of Personality Assessment, 47, 369–374. doi: 10.1207/s15327752jpa4704_6 [DOI] [PubMed] [Google Scholar]
  49. Thompson S. C. (1999). Illusions of control: How we overestimate our personal influence. Current Directions in Psychological Science, 8, 187–190. doi: 10.1111/1467-8721.00044 [Google Scholar]
  50. Thompson S., Armstrong W., & Thomas C. (1998). Illusions of control, underestimations, and accuracy: A control heuristic explanation. Psychological Bulletin, 123, 143–161. [DOI] [PubMed] [Google Scholar]
  51. Vadillo M. A., & Matute H. (2007). Predictions and causal estimations are not supported by the same associative structure. Quarterly Journal of Experimental Psychology, 60, 433–447. doi: 10.1080/17470210601002520 [DOI] [PubMed] [Google Scholar]
  52. Vadillo M. A., Musca S. C., Blanco F., & Matute H. (2011). Contrasting cue-density effects in causal and prediction judgments. Psychonomic Bulletin & Review, 18, 110–115. doi: 10.3758/s13423-010-0032-2 [DOI] [PubMed] [Google Scholar]
  53. Vyse S. A. (1997). Believing in magic: The psychology of superstition. New York, NY: Oxford University Press. [Google Scholar]
  54. Wasserman E. A. (1990). Detecting response-outcome relations: Toward an understanding of the causal texture of the environment. In Bower G. H. (Ed.), The psychology of learning and motivation, Vol 26, (pp. 27–82). San Diego, CA: Academic Press. [Google Scholar]
  55. Weiner B. (1979). A theory of motivation for some classroom experiences. Journal of Educational Psychology, 71, 3–25. [PubMed] [Google Scholar]
  56. White P. A. (2003). Effects of wording and stimulus format on the use of contingency information in causal judgment. Memory & Cognition, 31, 231–242. [DOI] [PubMed] [Google Scholar]
  57. White R. (1959). Motivation reconsidered: The concept of competence. Psychological Review, 66, 297–333. doi: 10.1037/h0040934 [DOI] [PubMed] [Google Scholar]

Articles from Experimental Psychology are provided here courtesy of Hogrefe Publishing
