Author manuscript; available in PMC 2021 Sep 1. Published in final edited form as: Motiv Sci. 2019 Sep 12;6(3):266–274. doi: 10.1037/mot0000157

On the Mechanics of Goal Striving: Experimental Evidence of Coasting and Shifting

J Lukas Thürmer a,b,c, Michael F Scheier b, Charles S Carver d
PMCID: PMC8022896  NIHMSID: NIHMS1048771  PMID: 33834088

Abstract

Carver and Scheier’s (1990) account of goal striving predicts that unexpectedly fast goal progress leads to reduced effort at that goal (coasting) and to shifting focus toward other goals (shifting). Although these hypotheses are key to this goal-striving account, empirical evidence of coasting and shifting is scarce. Here we demonstrate coasting and shifting in 2 experiments: Participants performed a lexical decision task and were promised a bonus if they delivered a specific number of correct responses (accuracy goal) and a specific number of fast responses (speed goal). After half of the trials, participants received (randomly allocated) feedback on their progress regarding the 2 goals, in which progress toward 1 goal was either above or below the target. In line with hypotheses, better-than-needed progress toward 1 goal led to (a) reduced subsequent progress toward that goal (as reflected in lower goal-related performance; coasting) and (b) a shift of resources toward the alternative goal (as reflected in higher goal-related performance on the alternative goal; shifting). Experiment 1 further demonstrated that positive feedback led to positive affect, and Experiment 2 demonstrated the causal role of affect in coasting and shifting. The implications of the present findings for future research on goal striving are discussed.

Keywords: self-regulation, goal striving, cybernetic control, coasting and shifting, affect and emotion


Goals guide much if not most of human behavior. Such goals include getting a bagel, having a nice day out with family, or finishing a degree. Although these goals may differ in important ways, they all energize and direct behavior. Much of motivation science focuses on these processes. One focus is on how people select their goals (i.e., goal setting; Gollwitzer, 1990). Another is on how people regulate their behavior to move toward goals (i.e., goal striving; Gollwitzer, 1990).

One approach to goal striving utilizes principles of cybernetic control (Carver & Scheier, 1990, 2009, 2017; Powers, 1973). The cybernetic control model explains goal-directed action through the function of a set of feedback loops. These loops entail monitoring the discrepancy between a current state and a reference value and acting to minimize the discrepancy. Carver and Scheier (1990, 1998) argued further that one type of loop monitors the behavior-goal discrepancy (i.e., whether behavior deviates from the goal [direction]) and another loop monitors the rate of progress toward discrepancy reduction (i.e., whether discrepancy reduction is occurring at an acceptable rate [velocity]). Carver and Scheier (1990, 1998) argued that the output function of the second loop is affective in nature: Negative affect is proposed to occur when progress is slower than expected or needed and positive affect when progress is faster than expected, needed, or desired (for greater detail see Carver & Scheier, 1998). Negative affect then leads to investing more effort (pushing) and positive affect leads to investing less effort (coasting).

The pushing hypothesis is quite intuitive and has been researched extensively (e.g., Schmidt & DeShon, 2007). The coasting hypothesis is less intuitive: Why would one reduce effort when making fast progress towards one’s goal and feeling good? One may argue that feeling good is a sign of a high commitment to one’s goal, leading to sustained or even increased effort (Gollwitzer & Rohloff, 1999). However, a number of theoretical approaches have suggested that fast progress may lead to reduced effort, for instance when a positive mood signals that one has invested enough effort to attain one’s goal (Gendolla, 2000; Martin, 2001). Focusing on immediate reactions during ongoing goal pursuit, Carver (2003) argued that people generally pursue many goals at once, even with respect to a single task. He suggested that fast goal progress toward one goal can lead to an openness toward alternative goals where progress is slower (shifting).

As an example, one may want to finish a work report on time (speed criterion) as well as to produce a report that is accurate and thorough (accuracy criterion). Each of these goals is distinct, but they are also interdependent. That is, it is hard to attain both goals simultaneously (speed-accuracy tradeoff; e.g., Wickelgren, 1977). However, unexpectedly fast progress on one of these criteria opens up the possibility to shift one’s effort towards the other one. For instance, realizing that one is completing the report more quickly than expected (overshoot on speed) allows slowing down (coasting) to work on the content more thoroughly (shifting). Such adaptive investment of resources allows satisfactory progress to be made on multiple goals (see Carver & Scheier, 1990, for a more thorough discussion).

Current research on multiple goal pursuit supports this reasoning. Louro, Pieters, and Zeelenberg (2007) observed that participants who made fast progress towards their goal (and consequently felt better) reported investing less effort into their focal goal or showed less goal-directed behavior, at least when the goal was close. Moreover, a program of research by Fishbach and colleagues (Fishbach & Finkelstein, 2012; Fishbach, Zhang, & Koo, 2009; Koo & Fishbach, 2008) has demonstrated that positive feedback reduces subsequent effort when it signals goal attainment. Although this research did not directly test the coasting hypothesis, it indicates that fast goal progress may lead to reduced effort.

We are aware of only one published study that has tested the coasting and shifting hypotheses directly. Fulford, Johnson, Llabre, and Carver (2010) conducted a longitudinal diary study and observed that daily self-reported goal progress negatively predicted subsequent self-reported effort. Despite the longitudinal design, this study leaves open whether variations in personal goals or common method variance (i.e., self-report-error) underlay the observed effects. To rule out such alternative explanations, one would need to experimentally manipulate goal progress and subsequently observe actual behavior.

In addition, Gollwitzer and Rohloff (described in Gollwitzer & Rohloff, 1999) sought to provide an experimental test of coasting. Participants worked on an arithmetic task and received regular feedback regarding their goal progress. This feedback was experimentally manipulated to be either slow, on-target, fast, or very fast. Participants increased their performance (time and tasks solved) in response to negative feedback, but they did not decrease their performance in response to positive feedback (i.e., no coasting). Unfortunately, this study did not include a secondary goal, so task shifting was not possible. In the current study, we thus sought to provide a direct test of coasting and shifting.

The Present Research

We conducted two experiments to test the coasting and shifting hypotheses. Specifically, we tested in Experiments 1 and 2 whether faster than expected or needed progress on one goal leads to reduced goal-related performance on that goal and enhanced performance on the alternative goal. In Experiment 2, we further tested whether the affect generated from being ahead or behind the targeted rate of progress was responsible for these performance effects. To be able to test these hypotheses, a task needs to fulfill at least three conditions. First, the task must allow for unambiguous performance feedback as to whether participants’ progress is at, over, or under a specific target. This condition implies that participants must be provided with a clear target for performance and progress. Second, because the purpose of the study is to provide causal evidence, it is crucial that we can randomly assign participants to conditions and provide false feedback that is nevertheless credible. Last, Carver (2003; see also Gollwitzer & Rohloff, 1999) suggests that coasting may occur mainly when one can shift to another goal. The task therefore needed to have at least two relevant performance criteria.

We developed a lexical decision task that fulfills all these conditions. We asked participants to determine whether a letter string formed a word or a non-word. This task allowed setting a threshold for participants by asking for a certain number of fast and a certain number of accurate responses. This task also allowed manipulating feedback. Since our stimuli were relatively challenging and participants usually cannot guess their own reaction times, participants were unsure about their performance and thus likely to believe randomly assigned feedback. Lastly, the goals of responding quickly and responding accurately are negatively interdependent (speed-accuracy tradeoff; e.g., Wickelgren, 1977), which should make coasting possible and functional.

Experiment 1

We first sought to provide experimental evidence of coasting and shifting. To this end, participants performed a language task with the goals of providing fast and correct responses. Halfway through the task, we provided randomly assigned feedback indicating either high or low progress with respect to either the fast or the accurate responses, or indicating on-target progress on both dimensions (control condition). We expected that positive feedback would lead to lower subsequent performance in the respective domain (speed vs. accuracy) than negative feedback (i.e., coasting), and that performance in the other domain would increase (i.e., shifting). We explored the role of affect and effort in participants’ self-reports. Experiment 1 was approved by the Institutional Review Board of Carnegie Mellon University.

Method

Design and participants.

Participants were randomly assigned to one of five conditions following a 2 Progress (overshoot vs. undershoot) × 2 Dimension (speed vs. accuracy) design plus an additional control condition (on target for both speed and accuracy). We sought to collect a sample of at least 150 participants, but also decided (a priori) to collect data until we had used our allocated experimental budget.

One hundred seventy-three Amazon Mechanical Turk (MTurk) workers (Mage = 34.75 years, 58 female) recruited via TurkPrime (Litman, Robinson, & Abberbock, 2017) completed the study in return for a payment of $1 and a bonus of $0.50 (see below). We used the “Prevent Ballot Box Stuffing” option in Qualtrics’ survey software to prevent participants from taking the survey more than once. Four participants failed at least one of the attention checks (i.e., did not select the response they were asked to select; see below), ten participants aborted the task because of computer problems, and eight participants had more than 50% incorrect responses, leaving 151 participants for analyses. Sensitivity analyses (α = .05; 1 – β = .95) for a generic F test using G*Power (Faul, Erdfelder, Buchner, & Lang, 2009) indicated that this sample allowed us to detect F(1, 146) = 3.91, λ = 13.168.
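The reported sensitivity figures can be checked with a short pure-Python sketch. G*Power uses the exact noncentral F distribution; the normal approximation below (reasonable for a 1-df contrast with large error df) is our own simplification, and it recovers a noncentrality parameter close to the reported λ = 13.168:

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def power_1df(lam, z_crit=1.96):
    """Approximate power of a 1-df F test with noncentrality lam:
    reject when |Z + sqrt(lam)| exceeds the two-sided critical value."""
    s = math.sqrt(lam)
    return (1.0 - norm_cdf(z_crit - s)) + norm_cdf(-z_crit - s)

def solve_lambda(target_power=0.95):
    """Bisect for the noncentrality parameter yielding the target power."""
    lo, hi = 0.0, 50.0
    for _ in range(100):
        mid = (lo + hi) / 2.0
        if power_1df(mid) < target_power:
            lo = mid  # power increases monotonically in lam
        else:
            hi = mid
    return (lo + hi) / 2.0
```

With α = .05 and 1 – β = .95, `solve_lambda()` returns roughly 13.0, in line with the λ ≈ 13.1–13.2 values reported for the two experiments.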

Procedure.

Participants who accepted our MTurk task entitled “Language Task” were redirected to the experimental materials, which stated that the study concerned assessing the language capabilities of MTurk workers. To this end, participants would see letter strings and their task would be to decide whether the letter string formed an English word (e.g., HOUSE) or not (e.g., HESOU). Participants were further informed that we needed to obtain at least 120 fast responses below 550 milliseconds and at least 120 correct responses. Since there were only 200 trials in total, participants had to respond quickly as well as accurately. To ensure participants’ goal commitment, we offered them a 50-cent bonus for attaining these goals, which represented a 50% increase in remuneration. Participants were further informed that they would receive feedback halfway through the experiment, and that they should aim to have 60 correct responses and 60 fast responses at that point. Participants then rated their current mood on a 7-point scale (−3: very sad to 3: very happy) and how much effort they planned to invest into the language task (0: no effort at all to 6: complete effort). We included one attention check item in the scales that asked participants to select a specific response (i.e., please select “4”).

Participants then performed a set of 200 trials of the language task. After completing the first block of 100 trials, participants received performance feedback that was in fact randomly assigned. We deemed this deception necessary to manipulate goal progress independently of other factors such as ability, effort, or task difficulty. The feedback contained a horizontal bar graph with two bars, the top one labeled “Fast Responses” and the bottom one labeled “Accurate Responses”. The x-axis was scaled from 0 to 120 with a large black line in the middle (60) labeled “on target”. The area from 0 to 60 was labeled “below target” and the area from 60 to 120 was labeled “above target”. A short paragraph instructed participants on how to interpret the graph.

The speed/undershoot feedback indicated 30 fast responses (below target) and 60 accurate responses (on target) and the accuracy/undershoot feedback had a reversed pattern (60 fast and 30 accurate responses). The speed/overshoot feedback indicated 90 fast responses (above target) and 60 accurate responses (on target) and accuracy/overshoot feedback had a reversed pattern (60 fast and 90 accurate responses). Control feedback indicated 60 fast and 60 accurate responses (both on target). Participants again rated their current mood and how much effort they planned to expend, and then performed the second round of the task (including an attention check item). Lastly, participants provided their age, gender, and educational attainment, and were fully debriefed. All participants received the 50-cent bonus.

Language task.

We sought to create a task that would be sufficiently challenging to make undershooting and overshooting possible without evoking emotions through the stimulus material (i.e., via the words that were used in the task). We therefore used the rated word list from Warriner, Kuperman, and Brysbaert (2013) to carefully select 100 ten-letter words that were neutral in valence (M = 5.17, SD = 0.69), moderate in arousal (M = 4.17, SD = 0.45), and moderate in dominance (M = 5.17, SD = 0.55). We then created one pronounceable non-word letter string from each word and distributed the words and non-words into two blocks each containing 50 words and 50 non-words. The presentation order of the two blocks was counterbalanced and the presentation order within each block was fully randomized. During the task, participants were presented with a fixation cross (500ms) followed by the respective letter string (300ms) and then responded whether they had seen a word (L-key) or not (A-key). Qualtrics recorded their speed and whether their response was correct.
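The stimulus distribution and counterbalancing scheme just described can be sketched as follows; the function name and data layout are our own illustration, not the actual Qualtrics implementation:

```python
import random

def build_blocks(words, nonwords, counterbalance, seed=None):
    """Distribute 100 words and 100 non-words into two blocks of
    50 words + 50 non-words each. Block order is counterbalanced
    (0 or 1) across participants; trial order within each block
    is fully randomized."""
    assert len(words) == 100 and len(nonwords) == 100
    rng = random.Random(seed)
    w, nw = words[:], nonwords[:]
    rng.shuffle(w)
    rng.shuffle(nw)
    block_a = w[:50] + nw[:50]
    block_b = w[50:] + nw[50:]
    rng.shuffle(block_a)  # randomize trial order within each block
    rng.shuffle(block_b)
    return (block_a, block_b) if counterbalance == 0 else (block_b, block_a)
```

Each returned block contains 100 trials (50 words, 50 non-words), matching the design of the 200-trial task.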

We calculated the number of correct responses per block and the mean response time, excluding all response times on incorrect trials (11.15%), below 150ms (0.03%), and above 1500ms (2.02%), for each participant.
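A minimal sketch of these per-block summaries, assuming trials are stored as (reaction time in ms, correct?) pairs; the thresholds follow the exclusion rules just described:

```python
def summarize_block(trials, rt_min=150, rt_max=1500):
    """trials: list of (rt_ms, correct) tuples for one block.
    Returns (number of correct responses, mean reaction time).
    The mean RT excludes incorrect trials and RTs below rt_min
    or above rt_max, mirroring the exclusions described above."""
    n_correct = sum(1 for rt, ok in trials if ok)
    kept = [rt for rt, ok in trials if ok and rt_min <= rt <= rt_max]
    mean_rt = sum(kept) / len(kept) if kept else float("nan")
    return n_correct, mean_rt
```

Note that the correct-response count includes all correct trials, whereas the RT outlier cutoffs apply only to the mean reaction time.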

Results

We followed the recommendations by Rosenthal and colleagues (Furr & Rosenthal, 2003; Rosenthal & Rosnow, 1985) to test our predictions using a-priori contrast analyses. Rather than conducting multiple tests, such as main effects and interactions in an ANOVA with subsequent contrast tests, one specifies an overall predicted data pattern including all conditions and conducts only one test for this overall data pattern. Such analyses provide greater power than traditional ANOVA analyses and can capture the entire design and sample variance in one analysis. In line with the recommendation by Furr and Rosenthal (2003), all reported contrast tests are one-tailed.
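The logic of such a single planned-contrast test can be sketched in a few lines of Python. This illustrative version works on raw group scores and omits the Round 1 covariate adjustment used in our ANCOVAs; the function name is our own:

```python
import statistics

def contrast_test(groups, weights):
    """Planned-contrast F test for a one-way design.
    groups: one list of scores per condition; weights: a-priori
    contrast weights encoding the predicted pattern (sum to zero)."""
    assert abs(sum(weights)) < 1e-9, "contrast weights must sum to zero"
    means = [statistics.mean(g) for g in groups]
    ns = [len(g) for g in groups]
    # Contrast value: weighted combination of condition means
    L = sum(w * m for w, m in zip(weights, means))
    ss_contrast = L ** 2 / sum(w ** 2 / n for w, n in zip(weights, ns))
    # Pooled within-condition error term
    ss_error = sum(sum((x - m) ** 2 for x in g)
                   for g, m in zip(groups, means))
    df_error = sum(ns) - len(groups)
    f_value = ss_contrast / (ss_error / df_error)
    return f_value, 1, df_error  # F, numerator df, denominator df
```

A single F value with 1 numerator df thus tests the entire predicted ordering of the five conditions at once, rather than piecemeal pairwise comparisons.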

We were primarily interested in differences between conditions while accounting for baseline variance and therefore followed an ANCOVA logic (Tabachnick & Fidell, 2013; see Online Supplementary Material, for alternative analytic approaches) using the CONTRAST command within the UNIANOVA procedure in SPSS 25. Unless otherwise indicated, the assumptions of homogeneity of variances and homogeneity of regression slopes were not violated (Tabachnick & Fidell, 2013).

Accuracy.

For our accuracy and speed analyses, we reasoned that (a) participants would be more sensitive to negative than positive feedback (i.e., pushing should be somewhat stronger than coasting), (b) feedback related to the respective dimension should have a greater impact than feedback related to the alternative dimension (i.e., a stronger coasting than shifting effect), and (c) that the neutral feedback in the on-target condition would, if anything, lead to decreased performance (note that setting the contrast weight of the control condition to 0 would exclude this condition from analyses). We thus set the contrast weights for accuracy to 3 (speed undershoot), −3 (speed overshoot), −5 (accuracy undershoot), 4 (accuracy overshoot), and 1 (on-target control). To ensure that random assignment was successful, we first compared the groups on their performance during Round 1, prior to feedback. The contrast was non-significant, F(1,146) = 0.65, p = .211, ηp2 = .004.

We then entered the number of errors in Round 2 into our analysis, including Round 1 errors as a covariate. In line with our prediction, the contrast was significant, F(1,145) = 15.67, p < .001, ηp2 = .098. Follow-up contrasts using covariate-adjusted means (ANCOVA approach) indicated that participants who overshot on correct responses in Round 1 (i.e., received feedback that they delivered more correct responses than needed) made more errors in Round 2 than participants who undershot on correct responses (i.e., received feedback that they delivered fewer correct responses than needed), F(1,145) = 7.01, p = .005, ηp2 = .046 (raw means and standard deviations in Table 1; adjusted means in Figure 1). In line with the idea that overshooting leads to shifting one’s focus to the other aspect of the task, participants who overshot on fast responses in Round 1 (i.e., received feedback that they delivered more fast responses than expected) made fewer errors in Round 2 than participants who undershot on fast responses (i.e., received feedback that they responded slower than expected), F(1,145) = 11.85, p < .001, ηp2 = .076 (raw means and standard deviations in Table 1; adjusted means in Figure 1).

Table 1:

Raw Mean Errors, Reaction Time, Mood, and Effort as a Function of Progress Feedback Provided after Round 1 of the Lexical Decision Task (Experiment 1)

| Measure | Speed: Undershoot | Speed: Overshoot | Accuracy: Undershoot | Accuracy: Overshoot | Control (all on target) |
| --- | --- | --- | --- | --- | --- |
| Errors, Round 1 | 11.26 (9.87) | 11.64 (9.24) | 10.50 (9.07) | 12.94 (9.96) | 12.86 (10.76) |
| Errors, Round 2 | 14.26 (10.42) | 9.88 (8.93) | 10.50 (9.99) | 16.06 (10.06) | 13.36 (9.27) |
| Errors, Difference | 3.00 (6.28) | −1.76 (5.84) | 0.00 (4.69) | 3.13 (5.49) | 0.50 (4.96) |
| Reaction time (ms), Round 1 | 787.43 (163.39) | 790.53 (140.15) | 773.81 (164.22) | 763.93 (132.13) | 831.68 (242.59) |
| Reaction time (ms), Round 2 | 710.18 (145.47) | 742.38 (120.36) | 744.31 (151.56) | 714.22 (127.36) | 799.06 (156.37) |
| Reaction time (ms), Difference | −77.25 (64.47) | −48.15 (83.25) | −29.51 (77.79) | −49.71 (45.51) | −32.62 (108.90) |
| Mood (−3: very sad to 3: very happy), Round 1 | 1.00 (1.10) | 0.88 (0.93) | 0.88 (1.24) | 0.74 (1.15) | 1.50 (1.32) |
| Mood, Round 2 | 0.04 (1.43) | 0.79 (1.17) | 0.19 (1.65) | 0.68 (1.40) | 1.46 (1.45) |
| Mood, Difference | −0.96 (1.13) | −0.09 (0.77) | −0.69 (1.40) | −0.06 (0.57) | −0.04 (0.74) |
| Effort (0: no effort at all to 6: complete effort), Round 1 | 5.78 (0.70) | 5.67 (0.69) | 5.91 (0.30) | 5.71 (0.64) | 5.86 (0.52) |
| Effort, Round 2 | 5.78 (0.58) | 5.70 (0.59) | 5.78 (0.79) | 5.74 (0.68) | 5.82 (0.55) |
| Effort, Difference | 0.00 (0.28) | 0.03 (0.17) | −0.13 (0.75) | 0.03 (0.31) | −0.04 (0.19) |

Note. Standard Deviations are in parentheses.

Figure 1.

Experiment 1: Round 2 marginal mean errors (black bars, left y-axis) and mean reaction times (white bars, right y-axis), adjusted for Round 1, as a function of the progress feedback given after Round 1 of the lexical decision task. Error bars represent standard errors.

Speed.

We next analyzed reaction times using the same approach and setting the contrast weights to −5 (speed undershoot), 4 (speed overshoot), 3 (accuracy undershoot), −3 (accuracy overshoot), and 1 (on-target control). Entering baseline reaction time did not yield a significant contrast, F(1,146) = 0.13, p = .362, ηp2 = .001. Entering Round 2 reaction times as the dependent measure and including Round 1 reaction time as a covariate yielded the predicted contrast, F(1,145) = 7.29, p = .004, ηp2 = .048, and the pattern of covariate-adjusted means was consistent with predictions: Participants who overshot on fast responses in Round 1 responded more slowly in Round 2 than participants who undershot on fast responses (coasting), F(1,145) = 3.27, p = .037, ηp2 = .022; and participants who overshot on correct responses in Round 1 responded faster in Round 2 than participants who undershot on correct responses (shifting; raw means and standard deviations in Table 1; adjusted means in Figure 1), although the respective follow-up contrast was not significant, F(1,145) = 2.02, p = .079, ηp2 = .014. We observed heterogeneity in regression slopes, F(4,141) = 3.43, p = .010, apparently stemming from the control condition. We therefore repeated our analyses for speed excluding the control condition and setting the contrast weights to −5 (speed undershoot), 5 (speed overshoot), 3 (accuracy undershoot), and −3 (accuracy overshoot). Regression slopes were not heterogeneous, F(3,115) = 1.11, p = .348, and the contrast was significant, F(1,118) = 5.07, p = .013, ηp2 = .041.

Self-reported mood and effort.

We next analyzed mood and effort in a similar fashion. We hypothesized that overshooting would lead to more positive mood and less effort than undershooting, with no prediction for the on-target control condition. Accordingly, we assigned the following contrast weights: 1 (speed undershoot), −1 (speed overshoot), 1 (accuracy undershoot), −1 (accuracy overshoot), and 0 (on-target control). Entering the mood variable, we did not observe a significant contrast at baseline, F(1,146) = 0.37, p = .272, ηp2 = .003. Homogeneity of variances was violated at Round 2, F(4,146) = 6.56, p < .001, and we therefore used robust heteroscedasticity-consistent (HC0) standard-error estimation in the L-Matrix command. Participants receiving positive feedback (overshoot) reported better mood in Round 2 than participants receiving negative feedback (undershoot; raw means in Table 1); and the Round 2 contrast using Round 1 as a covariate was significant, T(1) = −4.11, p < .001. Regarding self-reported effort, we neither observed a significant contrast at baseline, F(1,146) = 2.09, p = .075, ηp2 = .014, nor at follow-up, F(1,145) = 0.99, p = .161, ηp2 = .007.

Different data analytic strategies may lead to substantially different results (Silberzahn et al., 2018). The editor and one reviewer pointed to two important alternative approaches to analyzing the data, namely repeated measures and difference scores. The results of these ancillary analyses were largely consistent with our ANCOVA analyses (see Online Supplementary Material [OSM]).

Discussion

Experiment 1 provided a first experimental test of coasting and shifting. Participants made less progress on their accuracy goal in the language task (measured by errors in responses) when they received positive instead of negative feedback regarding the number of correct responses, and this pattern of results also emerged for reaction times. Even though our ANCOVA test results are in line with our hypotheses, it is noteworthy that participants in the control condition showed a different regression slope than participants in the other conditions. In hindsight, this may indicate that some participants interpreted the neutral feedback as positive while others viewed it as negative. Moreover, although our planned contrasts emerged as predicted, one follow-up contrast for reaction times was not significant. Also noteworthy is the finding that self-reported effort did not mirror the observed performance effects. This finding highlights the importance of observing actual behavior or physiological markers of effort (cf. Baumeister, Vohs, & Funder, 2010; Richter, Gendolla, & Wright, 2016). Lastly, we observed that positive feedback improved participants’ self-reported affect (see also Phan & Beck, in press). However, because we measured rather than manipulated affect, we were unable to test whether affect is causally related to performance (cf. Bullock, Green, & Ha, 2010; Spencer, Zanna, & Fong, 2005). The causal role of affect in coasting and shifting thus remained unclear.

Experiment 2

In Experiment 2, we sought to obtain causal evidence for the role of affect. To this end, we either provided participants with an effective emotion-regulation plan or a control plan before we provided positive or negative feedback on correct responses. Since we obtained parallel results regarding reaction times in Experiment 1, we did not manipulate feedback regarding reaction times. We expected that using an effective emotion-regulation plan would attenuate the coasting and shifting responses. Experiment 2 was approved by the Institutional Review Board of Carnegie Mellon University.

Method

Design and participants.

Participants were randomly assigned to one of four conditions in a 2 Progress (overshoot vs. undershoot) × 2 Emotion Regulation (yes vs. no) design. We aimed to recruit a sample well above the minimum sample size of N = 150 to account for potential exclusions. Consequently, we collected data for an entire academic year, including fall and spring term, and concluded data collection when the spring term finished. Two hundred twenty-three undergraduate students (155 female, Mage = 19.42 years) enrolled in psychology courses at Carnegie Mellon University (CMU) participated in return for course credit and entry into a raffle for a $50 Amazon voucher as a bonus. No participant experienced computer failure; one participant responded incorrectly in 50% of the trials and consequently was excluded, leaving 222 participants for analyses. Sensitivity analyses (α = .05; 1 – β = .95) as in Experiment 1 indicated that this sample allowed us to detect F(1, 217) = 3.88, λ = 13.110.

Procedure.

Experiment 2 largely followed the procedures of Experiment 1, with the following exceptions: Participants responded to a posting in the psychology department’s student research participation pool and were redirected to the online questionnaire on Qualtrics. Additionally, before receiving their progress feedback after Round 1, participants completed a filler task, ostensibly to “clear their head” before the second round of the task.

In fact, we used the filler task to unobtrusively introduce the emotion-regulation plan or the control plan. Specifically, participants were asked to look at three abstract paintings by the painter Mark Rothko (Blue, Green, and Brown, 1952; Orange and Yellow, 1957; Untitled [Brown and Grey], 1969) and to briefly describe their experience. Participants in the experimental condition were then informed that the pictures may elicit emotions that interfere with the language task. Consequently, they were asked to form the plan “When I’m working on the word recognition task, then I will ignore my feelings.” Past research demonstrates that such an ignore-if-then plan can help deal with the negative effects of emotions (Schweiger Gallo, Keil, McCulloch, Rockstroh, & Gollwitzer, 2009) on performance (Thürmer, McCrea, & Gollwitzer, 2013; Thürmer, Wieber, & Gollwitzer, 2017).

Participants in the control condition were informed that thoughts of the pictures may distract them and were therefore asked to form the plan “When I’m working on the word recognition task, then I will ignore the pictures.” Participants in both conditions typed their respective plan three times, as is common in if-then planning interventions (Gollwitzer, Wieber, Myers, & McCrea, 2010; Wieber, Thürmer, & Gollwitzer, 2015b). Although both conditions are highly similar, only the emotion regulation plan helps participants regulate their emotional response resulting from the feedback. Therefore, provision of this plan should lead to less coasting and shifting.

Participants then received feedback. In the undershoot condition, feedback indicated that participants had provided 33 out of 120 accurate responses. In the overshoot condition, feedback indicated that participants had provided 93 out of 120 accurate responses. Across both conditions, feedback indicated that participants had provided 63 out of 120 fast responses (i.e., on target). Then, the second round of the task followed. The self-report mood and effort scales, as well as the embedded attention checks, were dropped from Experiment 2 to minimize the time it took participants to complete the experiment.

We again calculated the number of correct responses per block and the mean response time, excluding all response times on incorrect trials (14.55%), below 150ms (0.13%), and above 1500ms (3.49%), for each participant.

Results

We again used the CONTRAST command within the UNIANOVA procedure in SPSS as a primary test of our hypotheses. Specifically, for the control plan conditions, we predicted that overshoot feedback would lead to decreased accuracy in comparison to undershoot feedback (i.e., coasting), but that participants would respond faster after accuracy overshoot feedback than accuracy undershoot feedback (i.e., shifting). Moreover, we expected that these effects would rely on the affective response of the participant and therefore predicted that feedback would not have these effects in the emotion-regulation plan conditions. We thus defined the contrast weights as −1 (undershoot/control plan), 3 (overshoot/control plan), −1 (undershoot/emotion-regulation plan), and −1 (overshoot/emotion-regulation plan).

Accuracy.

Participants received feedback on their accuracy, and this accuracy analysis thus served as a test of the coasting hypothesis. Our contrast was not significant when entering baseline errors (Round 1) into our analysis, F(1,218) = 1.28, p = .130, ηp2 = .006, but we did observe the expected contrast in Round 2, using Round 1 errors as a covariate, F(1,217) = 5.80, p = .009, ηp2 = .026. As the covariate-adjusted means show, participants in the no-emotion regulation conditions made relatively more errors in Round 2 when they received positive feedback on their error rate (overshoot) than when they received negative feedback on their error rate (undershoot), F(1,217) = 5.89, p = .008, ηp2 = .026 (see Table 2, for raw means and standard deviations; Figure 2 for adjusted means). We did not observe this difference when participants received a helpful if-then plan to regulate their emotions, F(1,217) = 1.10, p = .148, ηp2 = .005.

Table 2:

Raw Mean Errors and Reaction Time as a Function of the Progress Feedback Given After Round 1 of the Lexical Decision Task (Experiment 2)

| Measure | Control plan (ignore pictures): Undershoot | Control plan: Overshoot | Emotion-regulation plan (ignore feelings): Undershoot | Emotion-regulation plan: Overshoot |
| --- | --- | --- | --- | --- |
| Errors, Round 1 | 12.35 (7.92) | 12.38 (7.35) | 15.74 (9.26) | 13.40 (9.50) |
| Errors, Round 2 | 12.27 (8.75) | 15.41 (10.00) | 15.19 (10.49) | 14.75 (8.43) |
| Errors, Difference | −0.08 (5.53) | 3.03 (8.01) | −0.54 (6.70) | 1.35 (7.53) |
| Reaction time (ms), Round 1 | 847.95 (124.62) | 823.46 (110.76) | 837.64 (136.67) | 820.71 (107.56) |
| Reaction time (ms), Round 2 | 832.65 (120.33) | 749.87 (114.77) | 809.39 (125.55) | 773.49 (130.52) |
| Reaction time (ms), Difference | −15.31 (71.53) | −73.59 (67.02) | −28.24 (68.78) | −47.22 (78.96) |

Note. Standard Deviations are in parentheses.

Figure 2.

Experiment 2: Round 2 marginal mean errors (black bars, left y-axis) and mean reaction times (white bars, right y-axis), adjusted for Round 1, as a function of emotion-regulation versus control plan and the progress feedback provided after Round 1 of the lexical decision task, with the Round 1 measure included as a covariate. Error bars represent standard errors.

Speed.

To test the shifting hypothesis, we next analyzed the reaction times using the same approach. We did not observe a significant contrast at baseline, F(1,218) = 0.44, p = .255, ηp2 = .002. Analyzing Round 2 reaction times with Round 1 reaction times as a covariate, the contrast was significant, F(1,217) = 18.92, p < .001, ηp2 = .080. Covariate-adjusted means indicated that participants in the no-emotion-regulation conditions were relatively faster in Round 2 when they received positive feedback on their error rate (overshoot) than when they received negative feedback on their error rate (undershoot), F(1,217) = 22.61, p < .001, ηp2 = .094 (see Table 2 for raw means and standard deviations; Figure 2 for adjusted means). This difference was smaller and nonsignificant when participants had received a helpful if-then plan to regulate their emotions, F(1,217) = 2.67, p = .052, ηp2 = .012. As in Experiment 1, we performed ancillary repeated-measures and difference-score analyses, which yielded results largely consistent with our ANCOVA analyses (see OSM).
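The covariate-adjusted means referenced in both analyses follow the standard one-way ANCOVA adjustment: each raw group mean is shifted along the pooled within-group regression slope to the grand covariate mean. A minimal sketch of that adjustment (a generic implementation with toy data, not the authors’ SPSS output):

```python
import numpy as np

def adjusted_means(ys, xs):
    """Covariate-adjusted group means, as in a one-way ANCOVA.

    ys : list of 1-D outcome arrays (e.g., Round 2 scores), one per group
    xs : list of 1-D covariate arrays (e.g., Round 1 scores), same order
    """
    xs = [np.asarray(x, dtype=float) for x in xs]
    ys = [np.asarray(y, dtype=float) for y in ys]
    grand_x = np.concatenate(xs).mean()
    # Pooled within-group slope: b_w = sum_j Sxy_j / sum_j Sxx_j
    sxy = sum(np.sum((x - x.mean()) * (y - y.mean()))
              for x, y in zip(xs, ys))
    sxx = sum(np.sum((x - x.mean()) ** 2) for x in xs)
    b_w = sxy / sxx
    # Shift each group mean to the grand covariate mean
    return [float(y.mean() - b_w * (x.mean() - grand_x))
            for x, y in zip(xs, ys)]

# Toy illustration: two groups whose covariate means differ by 2
adj = adjusted_means(ys=[[0, 1, 2], [10, 11, 12]],
                     xs=[[0, 1, 2], [2, 3, 4]])
```

In the toy case the pooled within-group slope is 1, so the group that entered with a higher baseline has its Round 2 mean adjusted downward, mirroring how Round 1 performance is partialed out of the Round 2 comparisons above.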

Discussion

ANCOVA contrast analyses were in line with the prediction that emotions are causally responsible for reduced performance after positive feedback. Specifically, we again observed coasting in the no-emotion-regulation conditions: Overshooting on correct responses (i.e., positive feedback) subsequently led participants to make more errors than undershooting (i.e., negative feedback). However, we did not observe this effect among participants who formed an effective if-then plan to ignore their feelings (i.e., regulate their affect).

Fully in line with the shifting hypothesis, complementary analyses of reaction times showed a shift of focus after positive feedback on accuracy: Overshooting on correct responses subsequently led participants to respond faster than undershooting on correct responses. Again, we observed this effect only in the no-emotion-regulation conditions. Emotions were thus responsible for this shift in focus after positive feedback. In sum, the primary results of Experiment 2 are consistent with the position that emotional responses are a causal process in coasting and shifting.

General Discussion

The cybernetic control model is an influential account of goal striving that continues to attract considerable research attention (e.g., DeShon, Kozlowski, Schmidt, Milner, & Wiechmann, 2004; Wang & Mukhopadhyay, 2012; Wilkowski & Ferguson, 2016). Yet empirical evidence for coasting and shifting, two key predictions of the model, is scarce. We demonstrated coasting across two experiments: Participants who received feedback that their goal progress was faster (rather than slower) than needed showed reduced subsequent performance on that goal. This coasting response was adaptive: Participants shifted their focus toward the alternative task goal, as reflected in increased performance on that alternative task dimension (i.e., shifting).

We also obtained evidence that the affective response is an important element in the production of coasting. Providing correlational evidence, Experiment 1 showed that positive feedback indeed increased participants’ self-reported positive affect. Experiment 2 manipulated affect by providing participants with either an effective affect regulation plan or a control plan. Whereas we observed coasting and shifting among control participants, participants who had formed an effective if-then plan to regulate their affect did not exhibit this pattern. The present two studies thus provide experimental evidence for the coasting and shifting hypotheses, as well as for the role that affect plays in their occurrence.

We should note that although all main contrast effects in both experiments were consistent with our hypotheses, one of the specific follow-up contrasts was not. Carver (2003) has suggested that coasting and shifting become more likely as the person’s resources are taxed by an increase in the number of goals being pursued. Perhaps stronger effects would be obtained by further increasing the load on the participant. One could achieve this in the present paradigm by superimposing additional, extraneous tasks or by using longer words. Future research should thus explore how variations in the demand placed on participants impact coasting and shifting.

Our experiments differ from those of Gollwitzer and Rohloff (1999), who did not find evidence for coasting. One reason for this discrepancy may be that we included a secondary goal, which made goal revision unlikely and coasting an adaptive response. As Gollwitzer and Rohloff note, “only when other important goals have to be served at the same time might one observe a decrease in effort” (p. 150). Future research should thus replicate our findings and explore the boundary conditions of the cybernetic control model.

For instance, one could argue that cybernetic control governs implemental processes during goal striving but does not account well for deliberative aspects. Such a hypothesis could easily be tested by inducing different mindsets (Gollwitzer, 2018) in participants before providing the respective performance feedback. Our research moreover suggests that if-then plans can regulate shifting and coasting, and future research should explore how if-then plans help regulate goal pursuit. For instance, if-then plans can help people stay focused when making decisions (Thürmer, Wieber, & Gollwitzer, 2015; Wieber, Thürmer, & Gollwitzer, 2015a) and may therefore help them attain one goal before shifting to the next.

Some limitations of our research warrant discussion. First, it is important to note that we observed performance (speed and accuracy) rather than participants’ effort directly. Moreover, we explicitly instructed participants about the negative interdependence of the two goals, potentially inducing experimenter demand. Although effort and performance incentives likely increase performance on both speed and accuracy, future research should employ more direct measures of effort (Richter et al., 2016) and make no reference to the interdependence of the different goals. Direct replications of our research would moreover yield more reliable estimates of the observed effect sizes.

Second, we administered our study via online questionnaires, which reduced our control over the participants’ testing situation. However, online administration allowed us to reach MTurk samples that are more representative of the general population than student samples. Moreover, attention checks in Experiment 1, and detailed performance data in both experiments, allowed us to identify and exclude inattentive participants.

Lastly, we used a highly controlled experimental task that participants may not perform outside the experiment. However, we incentivized task performance to increase the personal relevance of the task. Recent neuroscientific evidence, moreover, indicates that performance in experimental reaction-time tasks corresponds with performance in everyday tasks (Wolff, Thürmer, Stadler, & Schüler, 2019). In line with this reasoning, existing longitudinal field research has demonstrated the applied relevance of coasting (Fulford et al., 2010).

In sum, the present research provides an important direct experimental demonstration of coasting and shifting. Whether you are getting a bagel, having a nice day out with family, or finishing a degree, fast progress toward your goal can lead to reduced subsequent progress on that goal and to shifting toward alternative goals. Our research also suggests how to manage such coasting: Regulating one’s affective response helps people stay focused and ensures maximal goal progress.

Supplementary Material

1

Acknowledgments

This project has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement no 703042 and the National Institute of Mental Health Approach, Control, and Adaptation, grant no 1R01MH110477. We thank Josh Swanson for his help in programming the computerized task. We regret to inform readers that Chuck Carver, one of the co-authors on this paper, passed away on June 22, 2019 while the paper was under revision. We and the field will miss him.

1 We thank the editor Guido Gendolla for suggesting this analysis.

References

  1. Baumeister RF, Vohs KD, & Funder DC (2010). Psychology as the science of self-reports and finger movements: Whatever happened to actual behavior? In Agnew CR, Carlston DE, Graziano WG & Kelly JR (Eds.), Then a miracle occurs: Focusing on behavior in social psychological theory and research. (pp. 12–27). New York, NY: Oxford University Press. [Google Scholar]
  2. Bullock JG, Green DP, & Ha SE (2010). Yes, but what’s the mechanism? (don’t expect an easy answer). Journal of Personality and Social Psychology, 98, 550–558. doi: 10.1037/a0018933 [DOI] [PubMed] [Google Scholar]
  3. Carver CS (2003). Pleasure as a sign you can attend to something else: Placing positive feelings within a general model of affect. Cognition & Emotion, 17, 241. [DOI] [PubMed] [Google Scholar]
  4. Carver CS, & Scheier MF (1990). Origins and functions of positive and negative affect: A control-process view. Psychological Review, 97, 19–35. [Google Scholar]
  5. Carver CS, & Scheier MF (1998). On the self-regulation of behavior. Cambridge: Cambridge University Press. [Google Scholar]
  6. Carver CS, & Scheier MF (2009). Action, affect, and two-mode models of functioning. In Morsella E, Bargh JA & Gollwitzer PM (Eds.), Oxford handbook of human action. (pp. 298–327). New York, NY US: Oxford University Press. [Google Scholar]
  7. Carver CS, & Scheier MF (2017). Self-regulatory functions supporting motivated action. In Elliot AJ (Ed.), Advances in Motivation Science (Vol. 4, pp. 1–37). Waltham, MA: Academic Press. [Google Scholar]
  8. DeShon RP, Kozlowski SWJ, Schmidt AM, Milner KR, & Wiechmann D. (2004). A multiple-goal, multilevel model of feedback effects on the regulation of individual and team performance. Journal of Applied Psychology, 89, 1035–1056. doi: 10.1037/0021-9010.89.6.1035 [DOI] [PubMed] [Google Scholar]
  9. Faul F, Erdfelder E, Buchner A, & Lang A-G (2009). Statistical power analyses using G*Power 3.1: Tests for correlation and regression analyses. Behavior Research Methods, 41, 1149–1160. doi: 10.3758/brm.41.4.1149 [DOI] [PubMed] [Google Scholar]
  10. Fishbach A, & Finkelstein SR (2012). How feedback influences persistence, disengagement, and change in goal pursuit. In Aarts H & Elliot A. (Eds.), Goal-directed behavior (pp. 203–230). New York, NY: Psychology Press. [Google Scholar]
  11. Fishbach A, Zhang Y, & Koo M. (2009). The dynamics of self-regulation. European Review of Social Psychology, 20, 315–344. doi: 10.1080/10463280903275375 [DOI] [Google Scholar]
  12. Fulford D, Johnson SL, Llabre MM, & Carver CS (2010). Pushing and coasting in dynamic goal pursuit: coasting is attenuated in bipolar disorder. Psychological Science, 21, 1021–1027. doi: 10.1177/0956797610373372 [DOI] [PMC free article] [PubMed] [Google Scholar]
  13. Furr RM, & Rosenthal R. (2003). Evaluating theories efficiently: The nuts and bolts of contrast analysis. Understanding Statistics, 2, 33–67. doi: 10.1207/S15328031US0201_03 [DOI] [Google Scholar]
  14. Gendolla GHE (2000). On the impact of mood on behavior: An integrative theory and a review. Review of General Psychology, 4, 378–408. doi: 10.1037/1089-2680.4.4.378 [DOI] [Google Scholar]
  15. Gollwitzer PM (1990). Action phases and mind-sets. In Higgins ET & Sorrentino RM (Eds.), Handbook of motivation and cognition: Foundations of social behavior (Vol. 2, pp. 53–92). New York, NY: Guilford Press. [Google Scholar]
  16. Gollwitzer PM (2018). The goal concept: A helpful tool for theory development and testing in motivation science. Motivation Science, 4, 185–205. doi: 10.1037/mot0000115 [DOI] [Google Scholar]
  17. Gollwitzer PM, & Rohloff UB (1999). The speed of goal pursuit. In Wyer RS Jr. (Ed.), Perspectives on behavioral self-regulation (pp. 147–159). Hillsdale, NJ: Erlbaum. [Google Scholar]
  18. Gollwitzer PM, Wieber F, Myers AL, & McCrea SM (2010). How to maximize implementation intention effects. In Agnew CR, Carlston DE, Graziano WG & Kelly JR (Eds.), Then a miracle occurs: Focusing on behavior in social psychological theory and research. (pp. 137–161). New York, NY: Oxford University Press. [Google Scholar]
  19. Koo M, & Fishbach A. (2008). Dynamics of self-regulation: How (un)accomplished goal actions affect motivation. Journal of Personality and Social Psychology, 94, 183–195. doi: 10.1037/0022-3514.94.2.183 [DOI] [PubMed] [Google Scholar]
  20. Louro MJ, Pieters R, & Zeelenberg M. (2007). Dynamics of multiple-goal pursuit. Journal of Personality & Social Psychology, 93, 174–193. [DOI] [PubMed] [Google Scholar]
  21. Martin LL (2001). Mood as input: A configural view of mood effects. In Martin LL & Clore GL (Eds.), Theories of mood and cognition: A user’s guidebook. (pp. 135–157). Mahwah, NJ: Lawrence Erlbaum Associates Publishers. [Google Scholar]
  22. Phan V, & Beck JW (in press). The impact of goal progress velocity on affect while pursuing multiple sequential goals. Motivation Science. doi: 10.1037/mot0000149 [DOI] [Google Scholar]
  23. Powers WT (1973). Behavior: The control of perception. Oxford: Aldine. [Google Scholar]
  24. Richter M, Gendolla GHE, & Wright RA (2016). Three decades of research on motivational intensity theory: What we have learned about effort and what we still don’t know. In Elliot AJ (Ed.), Advances in Motivation Science (Vol. 3, pp. 149–186). Waltham, MA: Academic Press. [Google Scholar]
  25. Rosenthal R, & Rosnow RL (1985). Contrast analysis: Focused comparisons in the analysis of variance. Cambridge: Cambridge University Press. [Google Scholar]
  26. Schmidt AM, & DeShon RP (2007). What to do? The effects of discrepancies, incentives, and time on dynamic goal prioritization. Journal of Applied Psychology, 92, 928–941. doi: 10.1037/0021-9010.92.4.928 [DOI] [PubMed] [Google Scholar]
  27. Schweiger Gallo I, Keil A, McCulloch KC, Rockstroh B, & Gollwitzer PM (2009). Strategic automation of emotion regulation. Journal of Personality and Social Psychology., 96, 11–31. doi: 10.1037/a0013460 [DOI] [PubMed] [Google Scholar]
  28. Silberzahn R, Uhlmann EL, Martin DP, Anselmi P, Aust F, Awtrey E, et al. (2018). Many analysts, one data set: Making transparent how variations in analytic choices affect results. Advances in Methods and Practices in Psychological Science, 1, 337–356. doi: 10.1177/2515245917747646 [DOI] [Google Scholar]
  29. Spencer SJ, Zanna MP, & Fong GT (2005). Establishing a causal chain: Why experiments are often more effective than mediational analyses in examining psychological processes. Journal of Personality and Social Psychology, 89, 845–851. doi: 10.1037/0022-3514.89.6.845 [DOI] [PubMed] [Google Scholar]
  30. Tabachnick BG, & Fidell LS (2013). Using multivariate statistics (6 ed.). Boston, MA: Pearson Education. [Google Scholar]
  31. Thürmer JL, McCrea SM, & Gollwitzer PM (2013). Regulating self-defensiveness: If-then plans prevent claiming and creating performance handicaps. Motivation and Emotion, 37, 712–725. doi: 10.1007/s11031-013-9352-7 [DOI] [Google Scholar]
  32. Thürmer JL, Wieber F, & Gollwitzer PM (2015). A self-regulation perspective on hidden-profile problems: If–then planning to review information improves group decisions. Journal of Behavioral Decision Making, 28, 101–113. doi: 10.1002/bdm.1832 [DOI] [Google Scholar]
  33. Thürmer JL, Wieber F, & Gollwitzer PM (2017). Planning and performance in small groups: Collective implementation intentions enhance group goal striving. Frontiers in Psychology, 8. doi: 10.3389/fpsyg.2017.00603 [DOI] [PMC free article] [PubMed] [Google Scholar]
  34. Wang C, & Mukhopadhyay A. (2012). The dynamics of goal revision: A cybernetic multiperiod test-operate-test-adjust-loop (TOTAL) model of self-regulation. Journal of Consumer Research, 38, 815–832. doi: 10.1086/660853 [DOI] [Google Scholar]
  35. Warriner AB, Kuperman V, & Brysbaert M. (2013). Norms of valence, arousal, and dominance for 13,915 English lemmas. Behavior Research Methods, 45, 1191–1207. doi: 10.3758/s13428-012-0314-x [DOI] [PubMed] [Google Scholar]
  36. Wickelgren WA (1977). Speed-accuracy tradeoff and information processing dynamics. Acta Psychologica, 41, 67–85. doi: 10.1016/0001-6918(77)90012-9 [DOI] [Google Scholar]
  37. Wieber F, Thürmer JL, & Gollwitzer PM (2015a). Attenuating the escalation of commitment to a faltering project in decision-making groups: An implementation intention approach. Social Psychological and Personality Science, 6, 587–595. doi: 10.1177/1948550614568158 [DOI] [Google Scholar]
  38. Wieber F, Thürmer JL, & Gollwitzer PM (2015b). Promoting the translation of intentions into action by implementation intentions: Behavioral effects and physiological correlates. Frontiers in Human Neuroscience, 9. doi: 10.3389/fnhum.2015.00395 [DOI] [PMC free article] [PubMed] [Google Scholar]
  39. Wilkowski BM, & Ferguson EL (2016). The steps that can take us miles: Examining the short-term dynamics of long-term daily goal pursuit. Journal of Experimental Psychology: General, 145, 516–529. [DOI] [PubMed] [Google Scholar]
  40. Wolff W, Thürmer JL, Stadler K-M, & Schüler J. (2019). Ready, set, go: Cortical hemodynamics during self-controlled sprint starts. Psychology of Sport and Exercise, 41, 21–28. doi: 10.1016/j.psychsport.2018.11.002 [DOI] [Google Scholar]
