Abstract
Einstein et al. (2005) predicted no cost to an ongoing task when a prospective memory task meets certain criteria. Smith et al. (2007) used prospective memory tasks that met these criteria and found a cost to the ongoing task, contrary to Einstein et al.'s prediction. Einstein and McDaniel (in press) correctly note that there are limitations to using ongoing task performance as a measure of the processes that contribute to prospective memory performance; however, the alternatives suggested by Einstein and McDaniel all focus on ongoing task performance and therefore do not move beyond the cost debate. This article describes why the Smith et al. findings are important, provides recommendations for issues to consider when investigating cost, and discusses individual cost measures. Finally, noting the blurry distinction between Einstein and McDaniel's description of the reflexive associative processes and preparatory attentional processes, and the difficulty of extending the multiprocess view to nonlaboratory tasks, suggestions are made for moving beyond the cost debate.
Keywords: Prospective memory, memory, attention, intentions
What Costs Do Reveal and Moving Beyond the Cost Debate: Reply to Einstein and McDaniel (in press)
Prospective memory researchers have debated the extent to which event-based prospective memory involves resource demanding processes. Resource demands associated with performing a prospective memory task can be determined by examining performance on a concurrent ongoing task. For instance, participants may perform an ongoing lexical decision task and also try to remember to press the F1 key if their name appears during the lexical decision task (Smith, Hunt, McVay, & McConnell, 2007). Performance on the ongoing task on nontarget trials is examined when the ongoing task is performed alone versus when the ongoing task is performed with the embedded prospective memory task. A cost to the ongoing task is demonstrated when performance on the ongoing task is worse with the embedded prospective memory task relative to when the ongoing task is performed alone. The cost may reflect preparatory attentional processes (e.g. Smith, 2003), which are resource demanding processes that prepare for a possible switch from the ongoing task response to the prospective memory task response. The preparatory attentional processes can, but do not have to, include overt monitoring for the prospective memory target. For instance, Einstein et al. (2005) proposed that “If performing a PM task produces substantial increases in the time (or decreases in the accuracy) to perform the ongoing task (for nontarget items), then this would be evidence that participants were relying on monitoring” (p. 329).
This reply notes what Smith et al.'s (2007) findings do reveal and discusses cost reduction versus elimination, including a report of new data highlighting issues to consider when examining cost. Individual cost measures are considered and the PAM theory and multiprocess view are compared. Finally, suggestions are made for moving beyond the cost debate.
What Costs Do Reveal: Testing a Prediction
Einstein et al. (2005, p. 329) predicted that “There should be no or minimal costs, however, when moderate emphasis instructions, a single target, and an ongoing task that encourages focal processing of the target are used”. Smith et al. (2007) used prospective memory tasks that met the criteria specified by Einstein et al. Contrary to Einstein et al.'s clearly stated prediction, Smith et al. demonstrated a significant cost to the ongoing task. Einstein and McDaniel (in press) correctly note that perceived task demands could encourage participants to engage in preparatory attentional processes, even when these processes are not required. Nonetheless, Einstein et al. predicted that no cost would be found when certain criteria are met. Smith et al. met those criteria and, contrary to the multiprocess view's prediction, a cost was found. Thus, those criteria cannot be used to predict when a cost will not be found. New criteria must be specified if the multiprocess view is to be tested.
Eliminating Cost
Einstein and McDaniel (in press) suggest that “a good starting point for testing whether preparatory attentional processes are necessary for prospective memory retrieval is to observe prospective memory performance under conditions of no task interference” (p. XX). Data showing no cost to the ongoing task and no loss of prospective memory accuracy would support the multiprocess view and contradict the PAM theory in most cases. (See discussion, below, of salience and cost.) However, this conclusion requires the complete elimination of cost to the ongoing task. The resource demands of preparatory attentional processing can be subtle (Smith, 2003, 2008; Smith et al., 2007); thus, demonstrating that the cost associated with prospective memory can be reduced does not counter the proposals of the PAM theory, nor does it indicate that the intention is being retrieved spontaneously. Of course, combining various methods for reducing cost may eventually produce a clear and convincing case in which cost to the ongoing task is eliminated. In fact, the experiment by Cohen, Jaudas, and Gollwitzer (2008) that is cited by Einstein and McDaniel has been cited elsewhere as evidence for spontaneous retrieval (e.g. Scullin, Einstein, & McDaniel, 2009). This experiment deserves particular attention because Cohen et al.'s method closely matches that of Smith et al., but the results are puzzling in that they directly contradict those of Smith et al.
Cohen et al.'s (2008) suggestion that their results differ from Smith et al.'s (2007) due to instructional differences is ruled out by Smith et al.'s Experiment 3, which used the same instructions and found a cost. A different explanation is suggested by the fact that Cohen et al.'s baseline response time, but not Block 2 response time, for the one target prospective memory condition is rather long compared to similar experiments (e.g. Smith et al.; Loft et al., 2008). This pattern suggests that Cohen et al.'s participants may have sacrificed speed for accuracy in the baseline block. Cohen et al. do not report lexical decision accuracy.
The goal of the current experiment was to replicate, within a single design, the response time patterns from both Smith et al. (2007) and Cohen et al. (2008), while also examining lexical decision accuracy. Accuracy was emphasized in the baseline block for participants in the accuracy group, whereas both speed and accuracy were emphasized in the standard group. A cost to ongoing task response times was predicted for the standard group but not for the accuracy group; the accuracy group, however, was predicted to show a cost on ongoing task accuracy. The predicted outcome would indicate that the weight placed upon Cohen et al.'s experiment as evidence for spontaneous retrieval may not be warranted.
Method
Participants and Design
The 159 participants, who completed the experiment for course credit, were randomly assigned to one of four conditions resulting from the orthogonal combination of the variables of condition (prospective memory versus control) and Block 1 emphasis (accuracy versus standard).
Materials and Procedure
Participants signed a consent form and read instructions for the ongoing lexical decision task, which included instructions to place their index fingers on the J and F keys. The assignment of keys as the response for words or nonwords was counterbalanced. Materials for the ongoing lexical decision task were taken from Smith et al. (2007) and matched those in Cohen et al. (2008). All 126 words and 126 nonwords appeared once in a random order during Block 1 of the lexical decision task. For half of the participants, the lexical decision instructions for Block 1 emphasized accuracy only. At the end of Block 1, participants were informed that they had “finished the first half of the letter string task.”
A pool of six possible target words was selected from the 126 words. The average frequency for the nontargets and targets was 138, with a range of 131 to 143 for the target words and 120 to 155 for the nontarget words. The selection of a particular target word for a given subject was counterbalanced. To equate memory load for the prospective memory and control conditions, all participants learned the target word and response, but only participants in the prospective memory condition were instructed to make the response when the target word occurred in the second block of the lexical decision task. The target word was displayed on the screen for 30 seconds, followed by instructions for the filler task. In the two minute filler task, participants saw a letter in the center of the computer display and pressed the key corresponding to the letter that immediately followed the displayed letter in the alphabet. Participants were encouraged to perform this task as quickly and carefully as possible. At the end of the filler task, participants completed the second block of lexical decision trials, with words and nonwords appearing in a different random order. The target word appeared on trials 20, 41, 62, 82, 103, 123, 144, 164, 185, 205, 226, and 246 of Block 2. All participants were instructed that both speed and accuracy were important in Block 2.
Results: Cost to the Ongoing Task
An alpha level of .05 was used for all statistical tests.1 The manipulation of emphasis in the baseline block was effective, with more accurate lexical decision performance in Block 1 when accuracy was emphasized, M = .98, SEM = .002, relative to when standard instructions were used, M = .97, SEM = .003, F(1,160) = 6.73, MSE < .001, p = .01, ηp2 = .04. This improvement in accuracy was accompanied by longer baseline response times, shown in Table 1, F(1,160) = 5.57, MSE = 8254, p = .02, ηp2 = .03. Neither baseline accuracy nor response time differed between the prospective memory and control conditions, and the manipulation of emphasis did not interact with condition, Fs < 1.63, ps > .20.2
Table 1.

| Emphasis in Block 1 | Condition | N | Block 1 M | Block 1 SEM | Block 2 M | Block 2 SEM | Block 2 − Block 1 M | Block 2 − Block 1 SEM |
|---|---|---|---|---|---|---|---|---|
| Standard | PM | 40 | 609 | 16 | 617 | 13 | 8 | 9 |
| Standard | Control | 42 | 604 | 13 | 586 | 11 | −19 | 8 |
| Accuracy Only | PM | 41 | 625 | 10 | 608 | 10 | −17 | 8 |
| Accuracy Only | Control | 41 | 656 | 17 | 619 | 20 | −37 | 17 |

Note: PM = Prospective Memory. Standard = lexical decision instructions request that participants be both fast and accurate on the lexical decision task. Accuracy only = accuracy is emphasized in Block 1.
Cost to the ongoing task was evaluated by calculating a difference score for each participant. Block 1 performance was subtracted from Block 2 performance for both accuracy and response time. Because the emphasis groups differed in baseline performance, the planned comparisons were conducted separately for each emphasis group, starting with an analysis of response time difference scores for the standard group. Replicating Smith et al.'s (2007) findings, the difference scores for the prospective memory and control conditions in the standard group were significantly different, F(1,80) = 4.36, MSE = 3232, p = .04, ηp2 = .05. Furthermore, the difference score for the control condition was significantly different from zero, t(41) = 2.21, p = .03, indicating a practice effect that was not present for the prospective memory condition, t < 1, p > .42. In other words, a significant cost to ongoing task response times was demonstrated in the standard group.3
At the same time, the pattern of results reported in Cohen et al. (2008) showing no cost to ongoing task response times was replicated in the accuracy group. The difference scores in this group were not significantly affected by condition, F < 1.19, p > .27, and both the prospective memory and control conditions showed a significant practice effect on response times, t(40) = 2.06 and 2.21, p = .04 and .03, respectively. Thus, the response time difference scores for the accuracy group suggest that there is no cost associated with prospective memory. However, accuracy difference scores were significantly affected by condition, F(1,80) = 8.96, MSE < .001, p = .004, ηp2 = .10. Furthermore, the decline in performance from Block 1 to Block 2 in the prospective memory condition of 1.6% was significantly different from zero, t(40) = 4.60, p < .001. The decline of 0.3% for the control condition was not significant, t(40) = 1.09, p > .27. In short, in the accuracy emphasis group a cost was found by examining ongoing task accuracy that would not have been detected if only response times had been considered.4
Eliminating Cost: Summary and Recommendations
The current results indicate that in the absence of accuracy data, response time data cannot be clearly interpreted, and they illustrate the complexity of evaluating null effects, which play a key role in Einstein and McDaniel's (in press) alternative approach. The following recommendations are provided for demonstrating a clear and convincing case of cost elimination accompanied by maintenance of prospective memory performance.
- Trim response time data using individual participant means and standard deviations calculated separately for each item type (e.g. word and nonword) and separately for each block (e.g. baseline block and prospective memory block).
- Analyze data for different types of ongoing task trials separately. In the current experiment targets appeared only on word trials, and as in previous studies (e.g. Smith et al., 2007), no cost was found on nonword trials.5 More importantly, analyzing the data from the current experiment collapsed across trial type eliminates the effect of condition on the response time difference scores in the standard group, F < 1.45, p > .23, and eliminates the effect of condition on the accuracy difference scores in the accuracy group, F < 1, p > .57. In other words, the cost is hidden when word and nonword trials are analyzed together. Interpreting null effects in data analyzed in this way (e.g. Scullin, McDaniel, & Einstein, 2010) is problematic.
- Design issues must also be considered carefully, and as noted by Einstein and McDaniel (in press) there are arguments to be made on both sides.6 While a within-subjects-only counterbalanced design is more efficient in terms of the number of participants, the usefulness of this design is limited in that a nonsignificant difference between a baseline control block and a prospective memory block can in fact conceal a cost (e.g. Smith et al., 2007); therefore a null effect is ambiguous. Furthermore, a within-subjects-only design precludes a comparison of baseline response times, which is necessary to assure that conditions are equated, and can result in order effects (e.g. Einstein et al., 2005). On the other hand, everyone agrees that the between-subjects design yields unambiguous data. Thus, the between-subjects design provides the strongest case.
- Divided attention has been proposed as a way of discouraging preparatory attentional processing (e.g. Scullin et al., 2010). However, interpretation of ongoing task data under conditions of divided attention is complicated by the fact that the divided attention task forces the participant to be prepared to switch between responding to the primary and secondary tasks. Thus, dividing attention in both the prospective memory and control blocks might eliminate evidence for a cost because participants are engaging preparatory attentional processes in both blocks.
- Cost measures are only an indirect measure of the processes contributing to prospective memory performance and can be affected by other factors. For instance, a greater cost in one condition versus another may reflect an increased difficulty in target/nontarget discrimination and not an increase in preparatory attentional processes.
- Prospective memory failures can occur because of failures of retrospective memory or failure to complete the intention, which could impact the fine-grained analysis suggested by Einstein and McDaniel (in press). For instance, if there is no difference in cost preceding prospective memory hits and misses, this may indicate that preparatory attentional processes are engaged in both cases and that the misses are due to retrospective memory failures.
- As demonstrated in the current experiment, failure to consider both speed and accuracy in the ongoing task makes interpretation of findings complicated, if not impossible. When interpreting effects on ongoing task response time, speed-accuracy trade-offs must be ruled out. At a minimum, performance on the ongoing task has to be below ceiling.
- Ceiling and floor levels of performance on the prospective memory task must also be avoided. If prospective memory performance is at ceiling or floor across conditions, then claims that performance is unaffected by manipulations that reduce cost are unfounded. Using multiple target occurrences is also preferable, as this increases the likelihood of detecting effects while avoiding the need for large numbers of participants.
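The first two recommendations above (within-cell trimming and trial-type-separate difference scores) can be made concrete with a short sketch. This is a minimal illustration, not the analysis code from Smith et al. (2007): the data layout, the 2.5 SD cutoff, and all names are assumptions chosen for the example.

```python
from statistics import mean, stdev

def trim(rts, n_sd=2.5):
    """Drop RTs more than n_sd standard deviations from this cell's mean.

    Applied once per participant x item type (word/nonword) x block,
    per the recommendation; never to data pooled across cells.
    """
    m, s = mean(rts), stdev(rts)
    return [rt for rt in rts if abs(rt - m) <= n_sd * s]

def cost_scores(data):
    """Per-participant Block 2 minus Block 1 mean RT, separately by item type.

    `data` maps participant -> item type ('word'/'nonword') -> block -> RT list.
    Returns participant -> item type -> difference score (positive = slowing).
    """
    out = {}
    for pid, by_type in data.items():
        out[pid] = {}
        for item_type, blocks in by_type.items():
            m1 = mean(trim(blocks["block1"]))
            m2 = mean(trim(blocks["block2"]))
            out[pid][item_type] = m2 - m1
    return out

# Toy illustration: one hypothetical participant who slows on word trials
# only, with one outlier (1500 ms) in Block 2 that trimming removes.
data = {
    "p01": {
        "word": {"block1": [600, 610, 590, 605, 595, 615, 602, 598, 608, 592],
                 "block2": [640, 650, 630, 645, 635, 655, 642, 638, 648, 1500]},
        "nonword": {"block1": [650, 660, 640, 655, 645],
                    "block2": [648, 658, 638, 653, 643]},
    }
}
scores = cost_scores(data)
```

Collapsing the word and nonword scores into one number here would dilute the word-trial slowing with the flat nonword trials, which is exactly how a real cost can be hidden.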
Examining Cost for Individual Participants
Einstein and McDaniel (in press) argue that cost measures should be examined for each participant separately, but Einstein and McDaniel's particular approach has limitations. First, it is not possible to clearly interpret individual ongoing task difference scores. One does not know whether a particular difference score reflects a cost for an individual or not. How much of a difference is significant? Second, if a person shows no difference between the blocks, does this mean the participant was not engaging in preparatory attentional processes? Perhaps not: that person might have shown a practice effect had they not been performing the prospective memory task. Finally, if a given participant shows a cost, the cost could be attributed to either a practice effect (if the prospective memory block precedes the control block) or fatigue (if the prospective memory block follows the control block). Given that clear interpretation of a single cost measure is not possible, the value of this suggested approach is unclear.
The adjustment procedure for individual cost measures used in Einstein et al. (2005) may distort the results regarding cost. To illustrate, consider the hypothetical data presented in Table 2. As in Einstein et al., the size of the cost differs for the two counterbalancing conditions: when the control block is performed first the average cost is 0 ms, and in the other counterbalancing order the average cost is 32.5 ms. Also as in the Einstein et al. study, in the hypothetical data set the mean response time is faster for Block 2 than for Block 1. Table 2 also shows the data after applying the same adjustment procedure used by Einstein et al. The mean response time for Block 2 (821.25 ms, collapsing over counterbalancing order) was subtracted from the mean response time for Block 1 (837.5 ms). This difference (in this example, 16.25 ms) was added to the Block 2 response times for each participant. Prior to adjustment, three out of the four participants have longer response times in the prospective memory block relative to the control block. After adjustment, only two participants show this cost pattern.7
Table 2.

| Counterbalancing | Block 1 | Block 2 | Cost | Adjusted Block 1 | Adjusted Block 2 | Adjusted Cost |
|---|---|---|---|---|---|---|
| Control then PM | 850 | 825 | −25 | 850 | 841.25 | −8.75 |
| Control then PM | 825 | 850 | 25 | 825 | 866.25 | 41.25 |
| PM then Control | 850 | 800 | 50 | 850 | 816.25 | 33.75 |
| PM then Control | 825 | 810 | 15 | 825 | 826.25 | −1.25 |
| Block Average | 837.5 | 821.25 | | 837.5 | 837.5 | |
Note: PM = Prospective Memory. Cost = PM Block response time minus Control Block response time.
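The adjustment arithmetic behind Table 2 can be checked directly. The sketch below uses the hypothetical values from Table 2; the variable names are mine. It shifts every Block 2 response time by the difference between the Block 1 and Block 2 grand means and recomputes each participant's cost.

```python
# Hypothetical response times (ms) from Table 2.
# Each tuple: (order, block1_rt, block2_rt). The PM block is Block 2 for
# "control_first" participants and Block 1 for "pm_first" participants.
participants = [
    ("control_first", 850, 825),
    ("control_first", 825, 850),
    ("pm_first", 850, 800),
    ("pm_first", 825, 810),
]

block1_mean = sum(b1 for _, b1, _ in participants) / len(participants)  # 837.5
block2_mean = sum(b2 for _, _, b2 in participants) / len(participants)  # 821.25
shift = block1_mean - block2_mean                                       # 16.25 ms

def cost(order, b1, b2, shift=0.0):
    """Cost = PM-block RT minus control-block RT, after shifting Block 2."""
    b2 = b2 + shift
    return b2 - b1 if order == "control_first" else b1 - b2

raw_costs = [cost(*p) for p in participants]                   # [-25, 25, 50, 15]
adjusted_costs = [cost(*p, shift=shift) for p in participants] # [-8.75, 41.25, 33.75, -1.25]

print(sum(c > 0 for c in raw_costs))       # 3 of 4 show a cost before adjustment
print(sum(c > 0 for c in adjusted_costs))  # 2 of 4 after adjustment
```

The uniform shift interacts with counterbalancing order, because adding a constant to Block 2 inflates costs for control-first participants and deflates them for PM-first participants; that is the distortion at issue.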
Recommendations. While the demonstration of a positive relationship between cost and prospective memory performance may not provide a definitive test of the multiprocess view, such demonstrations do indicate that the processes that produce the cost to ongoing tasks have a positive impact on prospective memory performance, and thus shed light on how the tasks are performed. Conversely, if there is no relationship between cost and prospective memory performance, this would support the multiprocess view, assuming other factors are not masking the effect, such as differences in the retrospective component. Therefore, a correlational approach, and related approaches such as an extreme groups design, can be informative. When examining the relationship between cost and prospective memory it is important to employ tasks with sufficient reliability, as discussed in McDaniel and Einstein (2007), and to avoid problems of restricted range arising, for instance, from possible ceiling effects on either task.
The PAM Theory and the Multiprocess View
Are intentions retrieved automatically? At one time this was a key question driving the cost debate. If spontaneous retrieval is not automatic retrieval then it seems that the multiprocess view and the PAM theory provide the same response. Given this, the cost debate may begin to play a less central role in advancing our understanding of prospective memory.
Are Reflexive Associative and Preparatory Attentional Processes Different?
Einstein and McDaniel (in press) note that spontaneous retrieval does not equal automatic retrieval and describe reflexive associative processes in the following way:
...the idea is that after forming a good association between a cue and an action, later occurrence of the cue (and full processing of it) will cause the associated action to be delivered to consciousness... It may be, however, that these processes are not fully automatic and may require some general attentional resources when the cue occurs... For example, dividing attention may interfere with full processing of the cue and/or we may vary our thresholds for allowing cue-driven thoughts into consciousness.... Further empirical work is needed to determine the capacity requirements of these processes. (p. XX)
Einstein and McDaniel (in press) state that they are not proposing preparatory attentional processes. Nonetheless, the distinction between their reflexive associative processes and preparatory attentional processes is blurry. In the case of reflexive associative processes a threshold determines which “cue-driven thoughts unrelated to the ongoing task” enter consciousness. The threshold is set prior to occurrence of the target event and “as the ongoing task becomes more demanding and requires more of our attentional focus, we may set a higher threshold for allowing cue-driven thoughts unrelated to the ongoing task to enter consciousness.” (p. XX) This description suggests that cue-driven processing only works if the individual is prepared to allow thoughts unrelated to the ongoing task into consciousness, and therefore, depends upon the state of the individual: Is the individual exclusively focused on the ongoing task or is she prepared to process additional information as well?
Consider the following description of the PAM theory: “For the intention to be executed, I must devote some amount of my limited cognitive resources to making decisions about how to respond to my environment... I must be prepared for the possibility that I will make a decision to change from an ongoing action to some other action. This does not mean that considering the change to a specific action is the focus of attention. Rather, this simply means that I am prepared for the possibility in a general sense” (Smith, 2008, p. 38).
Based upon Einstein and McDaniel's (in press) description of the multiprocess view, the distinction between PAM theory and the multiprocess view is far less clear than when reflexive associative processes were described as automatic, as in the following from Einstein and McDaniel (2005, p. 287): “when the target event occurs, an automatic associative-memory system triggers retrieval of the intended action and delivers it into awareness”. Similarly, in Einstein et al. (2005, p. 328): “an automatic associative system...delivers the intended action to consciousness.” If reflexive associative processes are not automatic, then the relationship between the PAM theory and the multiprocess view will depend upon the specifics of the reflexive associative processes, particularly regarding the threshold. As noted by Einstein and McDaniel, additional work is needed to establish the resource demands of the reflexive associative processes. Hopefully, this will include specifying the relationship between threshold setting and maintenance and attentional control processes. If the reflexive associative version of spontaneous retrieval occurs only when some capacity is available for processes unrelated to the ongoing task, then the PAM theory and multiprocess view are likely to make similar predictions in such cases.
Salience and Cost
It is important to distinguish between automatic retrieval of an intention and spontaneous attention capture. The PAM theory argues that retrieval of the intended action is not automatic. The PAM theory does allow for a spontaneous capturing of attention. The PAM theory predicts that in the case of an attention capturing event, preparatory attentional processes may or may not be engaged prior to the occurrence of the target event, but the attention capturing event can result in the engagement of preparatory attentional processes. As described in Smith (2008, p. 37), salient events may “lead to an evaluation of how one is to respond to the stimulus or what the meaning of the stimulus is; that is, preparatory attentional processes are engaged” by the salient targets. If an attention capturing event such as an alarm is used, then participants may be more likely to rely on the capture of attention rather than devoting resources to preparatory attentional processing prior to the timer going off. However, given that capture of attention can be influenced by attentional control (e.g. Al-Aidroos, Harrison, & Pratt, 2010), engaging preparatory attentional processing should facilitate attention capture in such cases.
Regardless of whether preparatory attentional processes are engaged prior to the event or by an attention capturing event, the preparatory attentional processes are resource demanding and do not involve automatic retrieval of the intention. The key point in Smith et al.'s discussion of salient target events is that the PAM theory would not predict automatic retrieval of the intention. If spontaneous retrieval is not automatic retrieval, then the multiprocess view and the PAM theory appear to be in agreement regarding target events that spontaneously capture attention.
Difficulty with the Focal/Nonfocal Classification
Einstein and McDaniel (in press) correctly note that Smith et al. (2007) made errors when classifying prior studies as involving a focal or nonfocal task. In addition to the Marsh, Cook, and Hicks (2006) experiment noted by Einstein and McDaniel, the task used by MacCauley and Levine (2004) was also misclassified as focal when it was nonfocal. These errors in task classification highlight the difficulty of applying this distinction a priori. Application is even more difficult when extending the focal/nonfocal distinction to nonlaboratory tasks.
Consider these descriptions from page 157 of Einstein and McDaniel (2008). The following is provided as an example of a focal task: “You need to make an appointment with an office across campus. You need to talk to someone in person rather than leaving a message. You walk past the office on the way to your next class.” The next description is provided as an example of a nonfocal task: “You need to make an appointment with an office across campus. You need to talk to someone in person rather than leaving a message.” From these descriptions it would seem that passing the office as part of your routine is sufficient to make the task a focal task. But if simply passing the office were sufficient, then it seems that the task used by Marsh et al. (2006) would be focal: the word is not only passed but is read by the participant.
If classifying the task used by Marsh et al. (2006) as focal requires that the ongoing task must direct participants to process the target word as a member of the furniture category (e.g. category verification), then this would imply that in the examples from Einstein and McDaniel (2008) passing the office alone is insufficient. Einstein and McDaniel (in press, p. XX) propose that in the case of focal tasks “it is especially critical that the ongoing task encourages processing of the cue in a way that closely matches how it was processed at encoding.” In the office example processing at encoding involves processing the office as a place to make an appointment: while walking to class I need to be thinking about places that I need to make an appointment. In other words, the ongoing activity involves processes directed towards the intention. In what way then is retrieval spontaneous? Ultimately, confusion over the focal/nonfocal distinction as it applies to nonlaboratory tasks limits the usefulness of this classification and further blurs the distinction between the multiprocess view and PAM theory.
Moving Beyond the Cost Debate
Alternative Approaches to Data Analysis
Preparatory attentional processes are not the only factor contributing to successful prospective memory performance, and the proportion of observable prospective memory responses is not a direct measure of preparatory attentional processes (Smith & Bayen, 2004). In other words, in contrast to Einstein and McDaniel's (in press) reasoning, a prospective memory hit rate of .85 does not necessarily mean that there was a probability of .85 that participants engaged in preparatory attentional processing. In addition, cost measures can be affected by a variety of factors, including the difficulty of making target/nontarget discriminations, and cost measures provide only an indirect indicator of preparatory attentional processes. Therefore, alternative approaches are needed for investigating the underlying cognitive processes that contribute to prospective memory performance. For example, multinomial modeling (Smith & Bayen, 2004, 2005, 2006; Smith, Bayen, & Martin, 2010) and structural equation modeling (Salthouse, Berish, & Siedlecki, 2004) have been used successfully to investigate prospective memory. Broader use of these methods has the potential to advance our understanding of prospective memory without relying on cost measures.
Investigating the Processes that Produce the Cost
Although the debate over cost continues to motivate prospective memory researchers, the PAM theory and the multiprocess view make contradictory predictions regarding cost in a relatively small set of situations that meet the multiprocess view's criteria for spontaneous retrieval tasks. Furthermore, according to Einstein and McDaniel (in press), if a cost is found in cases where the multiprocess view would not predict a cost, this does not falsify the multiprocess view. However that may be, growing empirical evidence shows that resource demanding processes do occur in many, if perhaps not all, prospective memory tasks. Therefore, it will be fruitful to expand the focus to include questions about the processes that are producing the cost to the ongoing task. For instance, what is the exact nature of these processes? Is it the case, as suggested by the PAM theory, that these processes often take place on the periphery of our attentional focus? What are the benefits and trade-offs of shifting these processes to a more central focus? How are these processes affected by different factors, such as motivation, personality, illness or injury, and development? How can we learn to engage these processes more effectively in order to increase prospective memory success while minimizing disruptions to other activities? Can tasks be transformed from resource demanding prospective memory tasks to automatic procedural memory tasks? Such investigations have the potential to contribute substantially to our understanding of prospective memory and to the development of techniques for improving prospective memory outside of the laboratory.
Summary
Einstein and McDaniel (in press) suggest that demonstrating a cost to an ongoing task does not mean that spontaneous retrieval is not also taking place. This is of course true. What is also true is that the multiprocess view made a specific prediction concerning when a cost would or would not be found. Smith et al. (2007) tested that prediction, and the results were inconsistent with the multiprocess view's prediction.
Einstein and McDaniel (in press) note that spontaneous retrieval does not equal automatic retrieval and that reflexive associative processes depend upon a threshold for determining cue-driven processing. Without greater detail regarding the nonautomatic reflexive associative processes, and the threshold in particular, it is unclear that the multiprocess view is fundamentally different from the PAM theory. Also missing is a precise description of the conditions under which the multiprocess view would predict no cost to the ongoing task. No doubt researchers will continue to look for such cases, but given that the empirical evidence shows that resource demanding processes can contribute to successful prospective memory performance, perhaps an even more fruitful avenue for future work will be to focus the proverbial microscope on the processes themselves.
Acknowledgments
Support was provided by Grant SC1 AG034965 from the National Institute on Aging at the National Institutes of Health. The following individuals provided assistance with data collection: Kathryn Dunlap, Joshua Lopez, Amy Nurnberger, Catherine Prazak, Laura Randol, Matt Thompson, Marie Villasenor, and Jarryd Willis.
Footnotes
Publisher's Disclaimer: The following manuscript is the final accepted manuscript. It has not been subjected to the final copyediting, fact-checking, and proofreading required for formal publication. It is not the definitive, publisher-authenticated version. The American Psychological Association and its Council of Editors disclaim any responsibility or liabilities for errors or omissions of this manuscript version, any version derived from this manuscript by NIH, or other third parties. The published version is available at www.apa.org/pubs/journals/xlm.
Prospective memory performance was not significantly affected by the manipulation of emphasis in the baseline block, F(1,79) = 2.49, MSE = .05, p > .11. Participants pressed the F1 key on 87% of the target trials (SEM = 3%).
Analyses of lexical decision accuracy and response times excluded the first ten trials from each block, as well as the target trials and three trials following each target. Response time analyses also excluded the following trials: inaccurate trials, trials on which the response time was less than 300 ms, and all trials that were more than 3 standard deviations from the mean, using standard deviations and means calculated individually for each participant separately for each block and each string type.
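The trimming rules above can be expressed procedurally. The following is a minimal sketch, not the authors' analysis code; the trial record fields (`block`, `string_type`, `index`, `is_target`, `follows_target`, `accurate`, `rt`) are hypothetical names chosen to mirror the exclusion criteria described in the footnote.

```python
# Hypothetical sketch of the response-time trimming described in the text.
# Field names are illustrative; cutoffs (first 10 trials, 300 ms, 3 SDs)
# follow the footnote.

from statistics import mean, stdev

def trim_rts(trials):
    """Return the RTs retained for analysis from one participant's trials."""
    # Drop warm-up trials, target trials, the three trials following each
    # target, inaccurate trials, and responses faster than 300 ms.
    kept = [t for t in trials
            if t["index"] >= 10
            and not t["is_target"]
            and not t["follows_target"]
            and t["accurate"]
            and t["rt"] >= 300]

    # Drop outliers beyond 3 SDs of the mean, with the mean and SD computed
    # separately for each block x string-type cell of this participant's data.
    result = []
    for cell in {(t["block"], t["string_type"]) for t in kept}:
        rts = [t["rt"] for t in kept if (t["block"], t["string_type"]) == cell]
        if len(rts) < 2:
            result.extend(rts)
            continue
        m, sd = mean(rts), stdev(rts)
        result.extend(rt for rt in rts if abs(rt - m) <= 3 * sd)
    return result
```

Note that because the outlier cutoff is computed per participant, per block, and per string type, a given raw RT can survive trimming in one cell and be excluded in another.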
In the standard group, lexical decision accuracy declined in the prospective memory condition from Block 1 to Block 2 by 1.9% (SEM = 1.7%), compared to a decline of 1.1% (SEM = 0.4%) in the control condition. The effect of condition approached, but did not reach, significance, F < 1, p > .06, M = -.01, SEM = .01.
As noted by Einstein and McDaniel (in press), the demonstration of a cost to the ongoing task could reflect unnecessary monitoring and does not rule out spontaneous retrieval.
As in previous articles (e.g. Smith et al., 2007), this report focuses on performance on word trials; a summary of nonword trial performance is provided here. There were no significant effects on baseline accuracy for nonword trials, Fs < 1.89, ps > .17, and difference scores (Block 2 – Block 1) for nonword accuracy did not differ significantly from zero in any condition, ts < 1, ps > .35. Baseline nonword response times were longer in the accuracy group, F(1, 160) = 5.71, MSE = 33059, p < .02, ηp2 = .03, but the effect of task condition was not significant and the variables did not interact, Fs < 1.41, ps > .23. Nonword response time difference scores were significantly different from zero in all four conditions, ts > 3.45, ps < .002.
In their discussion of the baseline debate, Einstein and McDaniel (in press) suggest that Smith et al. (2007) implied ... “that the control block [in Einstein et al., 2005] intervened between the prospective memory instruction and the prospective memory block” (p. XX). Einstein and McDaniel (p. XX) go on to say that “This implied characterization of our procedure, however, is simply incorrect. Participants who received the control block followed by the prospective memory block received the prospective memory instructions right before the prospective memory block.” Einstein and McDaniel are inaccurate in their description of Smith et al. Smith et al. said the following (p. 736): “In all of Einstein et al.'s experiments, they used a within-subjects control condition, counterbalancing the order of the control block and the PM block. Thus, for half of the participants, the PM instructions occurred before the control block.” When the prospective memory block occurred before the control block in Einstein et al. (2005), those participants received the prospective memory instructions at the start of Block 1 and then performed the control condition in Block 2. Thus, for these participants the prospective memory instructions were given prior to the control block. Smith et al. did not say this was true for the participants who received the control block first.
An anonymous reviewer noted that Einstein et al. (2005, p. 336) conducted additional analyses of individual cost scores in which they calculated confidence intervals for the individual difference scores. The intervals were based upon each “participant's variability in response times.” (It is not clear whether the confidence intervals were for the adjusted or the actual difference scores.) Participants were then assigned to a cost group or no-cost group depending upon whether the confidence interval included zero. This was done in a variety of ways. In one case, for instance, participants whose confidence intervals included zero or fell below zero were considered to show no cost. A limitation to this particular classification is that a difference score that is not significantly different from zero can indicate a cost (e.g. Smith et al., 2007). Einstein et al. also used a stricter criterion in which only participants whose confidence intervals fell entirely below zero were considered to show no cost. Einstein et al. compared prospective memory in this no cost group (M = .91) with a group of individuals that was determined to have shown a cost (M = .96). The difference in prospective memory performance was not significant. This was taken as evidence for spontaneous retrieval. Leaving aside the fact that only 14 out of 104 participants met this strict criterion for no cost, the comparison of prospective memory is uninformative due to ceiling effects.
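The two classification criteria discussed in this footnote can be made concrete. The sketch below is illustrative only: Einstein et al. (2005) do not fully specify how the intervals were constructed, so the normal-approximation interval built from each participant's own trial-to-trial RT variability is an assumption, and the function name is hypothetical.

```python
# Illustrative sketch of classifying one participant as showing a cost or
# no cost from the confidence interval around their RT difference score.
# The interval construction (normal approximation, per-participant trial
# variability) is an assumption for illustration.

from math import sqrt
from statistics import mean, stdev

def classify_cost(control_rts, pm_rts, z=1.96):
    """Return (lenient, strict) classifications for one participant."""
    # Difference score: mean PM-block RT minus mean control-block RT.
    diff = mean(pm_rts) - mean(control_rts)
    # Standard error of the difference from this participant's own variability.
    se = sqrt(stdev(pm_rts) ** 2 / len(pm_rts)
              + stdev(control_rts) ** 2 / len(control_rts))
    lower, upper = diff - z * se, diff + z * se

    # Lenient criterion: an interval that includes zero or falls below it
    # counts as "no cost" (problematic, since a difference score that is not
    # significantly different from zero can still reflect a cost).
    lenient = "no cost" if lower <= 0 else "cost"
    # Strict criterion: only an interval entirely below zero counts.
    strict = "no cost" if upper < 0 else "cost"
    return lenient, strict
```

Under the lenient rule, a participant with a positive but noisily estimated difference score is classified as showing no cost, which is exactly the limitation noted above.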
References
- Al-Aidroos N, Harrison S, Pratt J. Attentional control settings prevent abrupt onsets from capturing visual spatial attention. Quarterly Journal of Experimental Psychology. 2010;63(1):31–41. doi: 10.1080/17470210903150738.
- Cohen A-L, Jaudas A, Gollwitzer PM. Number of cues influences the cost of remembering to remember. Memory & Cognition. 2008;36:149–156. doi: 10.3758/MC.36.1.149.
- Einstein GO, McDaniel MA. Prospective memory and what costs do not reveal about retrieval processes: A commentary on Smith, Hunt, McVay, and McConnell. Journal of Experimental Psychology: Learning, Memory, and Cognition. in press. doi: 10.1037/a0019184.
- Einstein GO, McDaniel MA. Prospective memory and metamemory: The skilled use of basic attentional and memory processes. In: Benjamin AS, Ross B, editors. The Psychology of Learning and Motivation. Vol. 48. Elsevier; San Diego, CA: 2008. pp. 145–173. doi: 10.1016/S0079-7421(07)48004-5.
- Einstein GO, McDaniel MA, Thomas RA, Mayfield S, Shank H, Morrisette N, Breneiser J. Multiple processes in prospective memory retrieval: Factors determining monitoring versus spontaneous retrieval. Journal of Experimental Psychology: General. 2005;134:327–342. doi: 10.1037/0096-3445.134.3.327.
- Loft S, Kearney R, Remington R. Is task interference in event-based prospective memory dependent on cue presentation? Memory & Cognition. 2008;36:139–148. doi: 10.3758/mc.36.1.139.
- Marsh RL, Cook GI, Hicks JL. Task interference from event-based intentions can be material specific. Memory & Cognition. 2006;34:1636–1643. doi: 10.3758/bf03195926.
- McCauley SR, Levine HS. Prospective memory in pediatric traumatic brain injury: A preliminary study. Developmental Neuropsychology. 2004;25:5–20. doi: 10.1207/s15326942dn2501&2_2.
- McDaniel MA, Einstein GO. Prospective memory: An overview and synthesis of an emerging field. Sage; Thousand Oaks, CA: 2007.
- Salthouse TA, Berish DE, Siedlecki KL. Construct validity and age sensitivity of prospective memory. Memory & Cognition. 2004;32(7):1133–1148. doi: 10.3758/bf03196887.
- Scullin MK, Einstein GO, McDaniel MA. Evidence for spontaneous retrieval of suspended, but not completed prospective memories. Memory & Cognition. 2009;37:425–433. doi: 10.3758/MC.37.4.425.
- Scullin MK, McDaniel MA, Einstein GO. Control of cost in prospective memory: Evidence for spontaneous retrieval processes. Journal of Experimental Psychology: Learning, Memory, and Cognition. 2010;36:190–203. doi: 10.1037/a0017732.
- Smith RE. The cost of remembering to remember in event-based prospective memory: Investigating the capacity demands of delayed intention performance. Journal of Experimental Psychology: Learning, Memory, and Cognition. 2003;29:347–361. doi: 10.1037/0278-7393.29.3.347.
- Smith RE. Connecting the past and the future: Attention, memory, and delayed intentions. In: Kliegel M, McDaniel MA, Einstein GO, editors. Prospective memory: Cognitive, neuroscience, developmental, and applied perspectives. Erlbaum; Mahwah, NJ: 2008. pp. 27–50.
- Smith RE, Bayen UJ. A multinomial model of event-based prospective memory. Journal of Experimental Psychology: Learning, Memory, and Cognition. 2004;30:756–777. doi: 10.1037/0278-7393.30.4.756.
- Smith RE, Bayen UJ. The effects of working memory resource availability on prospective memory: A formal modeling approach. Experimental Psychology. 2005;52:243–256. doi: 10.1027/1618-3169.52.4.243.
- Smith RE, Bayen UJ. The source of age differences in event-based prospective memory: A multinomial modeling approach. Journal of Experimental Psychology: Learning, Memory, and Cognition. 2006;32:623–635. doi: 10.1037/0278-7393.32.3.623.
- Smith RE, Bayen UJ, Martin C. The cognitive processes underlying event-based prospective memory in school age children and young adults: A formal model-based study. Developmental Psychology. 2010;46:230–244. doi: 10.1037/a0017100.
- Smith RE, Hunt RR, McVay JC, McConnell MD. The cost of event-based prospective memory: Salient target events. Journal of Experimental Psychology: Learning, Memory, and Cognition. 2007;33:734–746. doi: 10.1037/0278-7393.33.4.734.