Author manuscript; available in PMC: 2011 Sep 1.
Published in final edited form as: Interact Comput. 2010 Sep 1;22(5):417–427. doi: 10.1016/j.intcom.2010.03.001

The impact of progress indicators on task completion

Frederick G Conrad, Mick P Couper, Roger Tourangeau, Andy Peytchev
PMCID: PMC2910434  NIHMSID: NIHMS194609  PMID: 20676386

Abstract

A near-ubiquitous feature of user interfaces is feedback on task completion, or progress indicators, such as the graphical bar that grows as more of the task is completed. The presumed benefit is that users will be more likely to complete the task if they see they are making progress, but it is also possible that feedback indicating slow progress may sometimes discourage users from completing the task. This paper describes two experiments that evaluate the impact of progress indicators on the completion of on-line questionnaires. In the first experiment, progress was displayed at different speeds throughout the questionnaire. If the early feedback indicated slow progress, abandonment rates were higher and users' subjective experience more negative than if the early feedback indicated faster progress. In the second experiment, intermittent feedback seemed to minimize the costs of discouraging feedback while preserving the benefits of encouraging feedback. Overall, the results suggest that when progress seems to outpace users' expectations, feedback can improve their experience though not necessarily their completion rates; when progress seems to lag behind what users expect, feedback degrades their experience and lowers completion rates.

Introduction

A friend tries to encourage you by observing “There's light at the end of the tunnel.” The comment may help you persevere because, with the end in sight, the remainder of the task becomes more pleasant or the prospect of abandoning it more unpleasant. In either case, your friend is trying to influence how you perceive the task to help you complete it. The belief seems to be that long, boring tasks will be experienced as shorter and more interesting, or at least more tolerable, when we can tell we are making progress. This appears to be the rationale of designers who provide feedback to users about their progress.

Designers routinely employ "progress indicators" to convey this information across a broad range of tasks: downloading files, installing software, flying across the ocean, responding to on-line questionnaires, waiting to speak with a help desk advisor, and so on. The feedback is often graphical (e.g. bars that change size in accord with the proportion of the task completed, an airplane icon that moves across the flight trajectory to indicate current position), though it can be textual (e.g. "13% completed") or spoken (e.g. "There are six callers in front of you."). In all of these cases the goal is to make users feel better about a task that may seem to be moving slowly and thus to reduce the chances that they will abort the task.

Progress indicators can be used to track how much of the task has been completed by either the system or the user: the airplane icon on an in-flight “distance from destination” map represents the user's progress traveling across the ocean; the graphical bar that grows while software is installing indicates the system's progress completing the installation process. In both cases the rationale seems to be that giving users information about progress – either theirs or the system's – improves the users' experience; designing an interface that does not display this kind of information (whether intentionally or as an oversight) leaves users in the dark and degrades their experience.

Crawford, Couper and Lamias (2001) report an on-line survey experiment in which feedback on the users' progress led to marginally more positive feelings about participating in the survey than no feedback. They suggest that feedback about users' progress can reduce the perception of task burden. Similarly, Myers (1985) reports an experiment in which users carried out a series of search tasks with and without a system progress indicator. The main finding was that users preferred searching when they received feedback on system progress. He suggests that this kind of feedback lowers users' anxiety by allowing them to use “wait” time more efficiently. Meyer, Bitan and Shinar (1995) report that users were more positive about the interface when it displayed information about system wait time in a way that communicated how much waiting remained. For example during a system wait interval they displayed an array of Xs and incremented the array as time passed; Meyer et al. observed more positive evaluations when they enclosed the Xs in a rectangle to indicate how many more Xs could be presented than when the display was unbounded. So there is some evidence that progress indicators positively affect the users' experience. We refer to this as the knowledge is pleasing hypothesis.

However, providing feedback on task completion may only please users and improve their experience to the extent that the feedback communicates encouraging news, such as that the task will be brief or is moving quickly; when the feedback conveys discouraging news, such as that the task will last a long time or is moving slowly, it may displease users and lower their satisfaction. We refer to this as the knowledge cuts both ways hypothesis.

A study by Lawrence, Carver & Scheier (2002) demonstrates that positive feedback – evidence of better performance – improves participants' affect relative to negative feedback – evidence of worse performance – even though the absolute levels of performance reached by all participants at the end of the experiment were equivalent. The authors asked participants to guess the meaning of unfamiliar words and gave them "false feedback" about their performance indicating either initially strong performance – 90% correct – dropping to 50% correct, or initially weak performance – 10% correct – increasing to 50% correct. Participants whose apparent performance improved were pleased, even if only to moderate levels of success, whereas those whose apparent performance worsened were discouraged, even though their overall performance was better than that of the improving group. Clearly the content of the feedback – progress is being made or not – affects the way people feel. In a real world task where completion is optional, feedback that makes users feel good should be more likely to increase completion rates than discouraging feedback.

A crucial component of how users interpret progress feedback, i.e. whether it is positive or negative, may well be the degree to which it confirms their expectations, particularly expectations about duration. It could be that respondents' metrics for task duration – what is long and what is short – are relative to what they expect. Thus, it will be discouraging to learn that a task is taking longer than expected but encouraging to learn it will end sooner than expected. Boltz (1993) explored the effects of temporal expectations by telling participants they were one third, one half or two thirds finished with an experimental task when in fact they were half way through the task; as a result they expected the task to take more time, as much time or less time than it actually did. She observed that when the task lasted longer than participants expected they judged it to take more time than it actually did and showed evidence of more fatigue (slower response times) than when their expectations were confirmed; when the task lasted less time than participants expected, they perceived it to be shorter in duration than it actually was and responded more quickly. She proposed that people allocate mental resources depending on how long they expect the task to last and if their expectations are violated their perceptions of duration and the efficiency of their performance are affected accordingly.

Web Survey Task

In the current study, we examine when feedback on task completion might help and when it might hurt. We look at how feedback on users' progress affected their perceptions of the task and the likelihood they completed it. Our test bed is the web survey response task where completion rates are intimately related to the quality of the statistics generated by the survey. When respondents (users) abandon the task it is hard to interpret their missing data. How would they have answered if they had completed the questionnaire? Should their missing answers be statistically imputed or ignored? Will the results of the survey be misleading if the respondents who complete the questionnaire provide fundamentally different answers than the nonrespondents would have provided had they answered? Thus if progress feedback increases completion rates (reduces abandonment) it has great value to the survey enterprise which produces critical data for government, social science, market research, public health, political campaigns, and so on.

Given the ubiquity of progress indicators in web surveys, the prevailing view among designers seems to be that knowledge is pleasing and that respondents will be more likely to continue to answer questions if they have some sense of where they are in the process. The unspoken rationale seems to be that progress feedback is an affordance (in Norman's [1988] sense) of paper questionnaires but not web questionnaires and so must be explicitly engineered in the latter. In a paper questionnaire, the proportion of remaining pages, evident by looking at or handling the questionnaire, is a good indicator of the proportion of the task completed. In web questionnaires, most implementations display one page at a time and give no inherent indication of how many pages or questions remain [1]. The assumption among survey methodologists seems to be that respondents find uncertainty about their position in the questionnaire disconcerting, and that presenting progress feedback relieves that uncertainty, increasing not only satisfaction but completion rates as well.

However, the evidence that progress indicators improve users' experience and increase completion rates is mixed. Couper, Traugott and Lamias (2001) found no difference in completion rates when progress indicators were used and when they were not used. They proposed that because the progress indicator was a graphical image (a pie chart indicating percent completed), it increased the download time – page by page – relative to the version of the questionnaire with no progress indicator, thus mitigating any benefits from the feedback. (Note that when the study was conducted most users were likely to have accessed the internet via relatively slow dial-up connections.) Crawford, Couper and Lamias (2001) controlled transfer time and actually found a lower completion rate when progress indicators were used than when they were not. They observed that much of the abandonment occurred on questions requiring users to type free text ("open questions"), presumably a more difficult response task than selecting from fixed choices (usually radio buttons or check boxes). They report a follow-up study in which the open questions were excised and observed a modest but reliable increase in completion rates with a progress indicator.

These mixed results could also support the view that feedback cuts both ways. For example, Crawford et al. (2001) suggest that the progress indicator understated actual progress because the early difficult items took longer than later ones, thus discouraging users who (correctly) believed they were further along than the feedback indicated. In particular, when the feedback indicated that users had completed 20% of the questions, they had in fact spent about 50% of the time they would ultimately spend on the task. In general, discouraging information (e.g., that the task will take a long time or more time than expected) may well deter users from completing the questionnaire. And the timing of the information may matter as well. Encouraging information that arrives late will not motivate users who have already abandoned the task due to discouraging early information or who have stopped attending to the progress indicator because they have determined that it bears only bad news.

We explored this and related ideas in two experiments. In the first experiment, we investigated how the speed (fast versus slow) and timing (early versus late) of information displayed in progress indicators affected the completion of an on-line questionnaire. In the second experiment we examined how the frequency of progress feedback (always versus sometimes available) and the initiative required to obtain the feedback (the system volunteers it or the user must request it) affected completion rates.

Experiment One

Our first experiment examined whether encouraging feedback led to fewer breakoffs than discouraging feedback and whether early or late progress information had more impact on breakoffs. It could be that users rely on the early information to calibrate their expectations about the duration and difficulty of the session, so that if the feedback indicates the task is easier and quicker than expected they will be more likely to complete the questionnaire (less likely to break off). By the same reasoning, respondents should break off more often if the feedback indicates the task is harder and slower than expected. We also ask whether progress feedback affects users' subjective experience: does feedback actually make the task seem faster (or slower) and more (or less) interesting, or is it just useful information about the pace at which the task is proceeding?

Method

Progress Indicators

A textual progress indicator (e.g. “17% completed”) appeared at the top of each page for half the users (see Figure 1). The other half was not given any feedback about their progress. Whether or not a particular user received feedback was randomly determined when they began to answer the questionnaire. Progress feedback reflected the percentage of pages on which a question was displayed that had been completed at any given point, including the current page. Progress was not incremented for pages that only announced section breaks so, for example, if page 42 presented a question and page 43 did not, they both displayed the same progress information. Questions were displayed on 57 pages and 10 pages served as section breaks (including the final screen) but did not contain questions.

Figure 1. Example page from questionnaire. Progress is indicated textually above the question.

To test the impact of progress feedback on the performance and experience of users, we calculated progress in one of three ways for the users selected to receive feedback (see Figure 2). For one type of progress indicator (Constant Speed), we divided each page number by 57 and presented the result as a percent. Thus progress increased as a nearly linear function of pages [2] and, therefore, at a constant rate across the questionnaire (Figure 2a). This is the typical way of calculating progress, i.e., as a linear function of task completion (see, e.g., Myers, 1985).

Figure 2. Rates of progress displayed in three progress indicators.

For another type of progress indicator (Fast-to-Slow) the rate of progress decelerated across the questionnaire, accumulating quickly at first but more slowly toward the end (Figure 2b). We produced this pattern of feedback by dividing the log of each page by the log of the final page (expressed as a percent). For example, after only 9 pages users would pass the 50% mark (progress for page 9 is 52%) but would need to advance through another 36 pages to reach the 90% mark (progress for page 45 is 91%). Thus, the feedback is more encouraging – progress accumulates faster – in the beginning than the end.

For a third group (Slow-to-Fast), the rate of progress accelerated across the questionnaire, accumulating slowly at first and more quickly toward the end (Figure 2c). We produced this pattern of feedback by dividing the inverse log of each page by that of the final page. For example, to reach the 50% mark in the full questionnaire, these users would need to complete 60 pages (progress for page 60 is 52%) but only another 7 pages to surpass the 90% mark (for page 66 the amount of progress displayed is 83% and it jumps to 100% for page 67). Thus this feedback is discouraging (i.e., moves slowly) early on and gets more encouraging as the task unfolds.
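To make the three feedback schedules concrete, the sketch below reproduces them in Python. The denominators and the exact form of the "inverse log" (Slow-to-Fast) curve are our reconstruction from the worked examples above (the constant-speed indicator is described in the text as the page number divided by the 57 question pages); treat the functions as illustrative, not as the production code used in the survey.

```python
import math

FINAL_PAGE = 67  # 57 question pages plus 10 section-break pages

def constant_progress(page):
    # Roughly linear feedback: position in the page sequence as a percent.
    return round(100 * page / FINAL_PAGE)

def fast_to_slow_progress(page):
    # Decelerating feedback: log of the current page over log of the final page.
    return round(100 * math.log(page) / math.log(FINAL_PAGE))

def slow_to_fast_progress(page):
    # Accelerating feedback, assumed here to mirror the log curve
    # ("inverse log" in the text); it reaches 100% on the final page.
    return round(100 * (1 - math.log(FINAL_PAGE + 1 - page) / math.log(FINAL_PAGE)))

# Spot checks against the examples in the text; under these assumptions the
# values agree with the reported figures to within about a percentage point.
print(fast_to_slow_progress(9), fast_to_slow_progress(45))   # ~52%, ~91%
print(slow_to_fast_progress(60), slow_to_fast_progress(66))  # ~51%, ~84%
```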

We varied how progress was calculated in this way for experimental purposes – not because we necessarily advocate its use in production web surveys. The current technique allowed us to vary the speed of progress without varying the actual questionnaire length and content, and to vary the speed at different points in the questionnaire.

The progress indicator was designed so that the time to display a page was unaffected by whether or not any feedback was presented: a small script was executed on the user's computer determining what progress, if any, to display, and this script required the same transfer and execution time, irrespective of what it displayed.

Difficulty of questionnaire items

To the extent that users judge the difficulty of the task on the basis of early progress information, they might experience the difficulty of later questions differently because of their early expectations, even if these were no longer consistent with the external feedback. More specifically, if the early progress feedback led users to believe the task was going quickly, they might find a relatively difficult item to be more tolerable than if the early feedback indicated a more laborious questionnaire. To evaluate this possibility, we included a test item in the middle of the questionnaire that appeared in one of two forms. One form of the question was intended to be relatively easy to answer, requiring users to register their answers by selecting radio buttons. The other form was intended to be relatively difficult to answer, requiring users to enter their answers into text boxes. The topic of the item was automobile ownership. The idea was that if the feedback in the first half had been generally positive, then the difficulty of this item might not affect breakoffs. However, if users had received generally discouraging feedback from the progress indicator, then the difficult form of the question could lead to more breakoffs than the easy form. Thus the two main factors in this analysis were speed of progress (4 levels – no progress indicator and three different speeds) and the form of the automobile ownership question (2 levels).

Participants

Users were recruited from two commercial, “opt-in” panels maintained by Survey Sampling, Inc (SSI). These are essentially lists of email addresses provided voluntarily by users who wish to receive survey invitations. Potential users were invited by email to answer a web-based questionnaire concerning a variety of “lifestyle” topics: health, nutrition, exercise and cars. The invitation indicated that the survey would take five to seven minutes to complete. This was our best estimate based on a previous similar survey but turned out to underestimate actual duration for many respondents.

As an incentive to complete the questionnaire, respondents who reached the final page qualified for entry into a sweepstakes in which they could win up to $10,000. A total of 39,217 email invitations to participate in the current questionnaire were sent, in response to which 3,179 persons (8%) logged into the survey [3]. Approximately half (n = 1563) of those who accessed the questionnaire were randomly selected to receive no progress feedback (the control group) [4]; the remaining users were randomly assigned to one of the three feedback conditions: constant speed (n = 562), fast-to-slow (n = 530) and slow-to-fast (n = 532).

Questionnaire

Of the 57 pages containing questions, usually just one question was presented per page but in a few cases (e.g. the automobile ownership question) multiple questions appeared on a page. The ten pages on which no questions were presented included nine pages that displayed introductory text about the subsequent section and the final page of the questionnaire that thanked users for their participation.

Users moved between all pages by clicking a navigation ("Next Screen" or "Previous Screen") button. There was no conditional logic (i.e., no "skip patterns") controlling the flow of questions to users so that all questions were displayed to all users irrespective of their particular answers. Thus, a "percent completed" figure was associated with each page and if a respondent revisited a page, the progress information was the same as it had been previously [5].

Results and Discussion

We first present the objective results (breakoffs, response times and item nonresponse) for the entire questionnaire, then we examine breakoffs for the automobile question designed to differ in difficulty in its two forms, and finally we report subjective results (responses to debriefing questions) for the entire questionnaire.

Breakoffs across the questionnaire

Of the 3,179 users who started the questionnaire, 457 broke off, for an overall breakoff rate of 14.4%. The question is how these breakoffs were distributed across the different progress indicator conditions. The breakoff rate was highest (21.8%) when the early progress feedback was discouraging (Slow-to-Fast), lowest (11.3%) when the initial feedback was encouraging (Fast-to-Slow), and intermediate with constant speed feedback (14.4%) and with no feedback (12.7%), overall χ2(3) = 31.57, p < .001 (see Table 1, Row 1). This pattern supports the knowledge cuts both ways hypothesis and calls into question the wisdom of implementing progress indicators in web surveys without knowing more about what respondents expect and the likelihood their expectations will match actual progress (as communicated through feedback). Clearly progress indicators can have a deleterious effect on completion rates when the progress moves slowly.
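For readers who want to see how such an omnibus comparison can be run, the sketch below rebuilds an approximate 4 × 2 contingency table from the group sizes and breakoff percentages reported here and applies a chi-square test of independence. The counts are back-calculated from rounded percentages, so the statistic will only approximate the reported χ2(3) = 31.57; the snippet illustrates the analysis, not the raw study data.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Approximate breakoff counts back-calculated from the reported group sizes
# and breakoff percentages (illustration only, not the raw study data).
conditions = ["None", "Constant Speed", "Slow-to-Fast", "Fast-to-Slow"]
group_sizes = np.array([1563, 562, 532, 530])
breakoff_rates = np.array([0.127, 0.144, 0.218, 0.113])

broke_off = np.rint(group_sizes * breakoff_rates).astype(int)
completed = group_sizes - broke_off
table = np.column_stack([broke_off, completed])  # 4 conditions x 2 outcomes

chi2_stat, p_value, dof, expected = chi2_contingency(table)
print(f"chi2({dof}) = {chi2_stat:.2f}, p = {p_value:.5f}")
```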

Table 1.

Breakoff rates and breakoff page, Experiment 1 and breakoff rates overall and by sample, Experiment 2.

Progress Indicator | None | Constant Speed | Slow-to-Fast | Fast-to-Slow
% Breakoffs (exp. 1) | 12.7 | 14.4 | 21.8*** | 11.3
Median Final Page Number [10] (exp. 1) | 33 | 32 | 27 | 34
% Breakoffs (exp. 2) | 14.3 | 14.4 | 19.9* | 11.3
  % Breakoffs in SSI sample | 15.5 | 14.8 | 23.9* | 9.5*
  % Breakoffs in AOL sample | 13.2 | 13.8 | 15.4 | 13.2
* p < .05; *** p < .001; reference category is the No Progress Indicator condition.

It is clear that early discouraging and encouraging feedback move breakoffs in opposite directions but are their impacts symmetrical? If so, this would indicate that while progress indicators can hurt under discouraging conditions, they can also help when the news is encouraging. To address this issue we compared the effects of each progress indicator condition to no progress indicator in a logistic regression analysis. Initially discouraging (Slow-to-Fast) feedback led to reliably greater chances of breaking off than no progress feedback (p < .001) but the 1.4% reduction in breakoffs from initially encouraging (Fast-to-Slow) feedback relative to no feedback was not reliable. It is possible that there are potential benefits from encouraging feedback under some circumstances but that these were blunted by the underestimated task duration in the survey invitation; in the invitation, the duration was described as 5 – 7 minutes but the median completion time exceeded 20 minutes in all conditions (see Table 2, row 2). As a result, even when progress seemed to be moving relatively fast (Fast-to-Slow) it may still have moved no faster or even slower than expected. Perhaps a more accurate (longer) prediction of task duration would have set respondents' expectations more realistically so that feedback indicating speedier than expected completion would have reliably reduced breakoffs. We cannot know on the basis of the current data but Yan, Conrad, Tourangeau & Couper (2007) observed that progress indicators reduced breakoffs when the task duration was consistent with what was promised in the survey invitation.
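The pairwise comparisons described above can be approximated with a logistic regression in which the no-indicator group is the reference category. A minimal sketch follows; the respondent-level records are expanded from the same approximate counts used in the previous snippet and stand in for the actual data.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Respondent-level records expanded from approximate counts (condition, broke_off);
# placeholders that stand in for the study data.
cells = [("None", 1563, 198), ("Constant", 562, 81),
         ("SlowToFast", 532, 116), ("FastToSlow", 530, 60)]
records = []
for condition, n_total, n_broke in cells:
    records += [{"condition": condition, "broke_off": 1}] * n_broke
    records += [{"condition": condition, "broke_off": 0}] * (n_total - n_broke)
df = pd.DataFrame(records)

# Treatment (dummy) coding with the no-indicator group as the reference category,
# mirroring the comparisons described in the text.
model = smf.logit("broke_off ~ C(condition, Treatment(reference='None'))", data=df)
print(model.fit(disp=0).summary())
```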

Table 2.

Answers to debriefing questions.

Progress Indicator | None | Constant Speed | Slow-to-Fast | Fast-to-Slow
How interesting was this survey? [11] | 4.09 | 4.03 | 4.07 | 4.27
How many minutes do you think it took you to complete the survey?* | 14.43 (20.63) | 13.97 (20.93) | 15.38 (21.63) | 13.47 (22.06)
Did the progress indicator accurately reflect your progress through the survey? [12] | — | 94.1 | 74.7 | 90.2
How useful did you find the progress indicator? [13] | — | 2.18 | 2.54 | 2.17
* Actual duration, in minutes, appears in parentheses.

The constant speed progress indicator also did not reliably reduce breakoffs compared to no progress feedback, contrary to what common wisdom would predict. Simply being informed about their position in the sequence of questions did not motivate these users to complete the task. Moreover, if a constant speed progress indicator were displayed with a longer questionnaire than we administered, it could well increase breakoffs reliably relative to no progress indicator by conveying the discouraging news that the task is long (in absolute terms) and involves substantial effort.

Not only did discouraging news early on lead to more breakoffs than did the information from the other progress indicators, but there is also a suggestion that it led to earlier breakoffs. We removed data from users who (1) broke off on the first page (their decision to break off could not have been affected by the type of progress indicator because none had been displayed by this point), and (2) completed the questionnaire (by definition they did not break off), and then analyzed the final page number before these users broke off. The median final page number increases (Table 1, row 2) as breakoff rate decreases (Table 1, row 1), although a comparison of the groups for whom the final page number differs most (Slow-to-Fast and Fast-to-Slow) was not significant (Mann-Whitney U = 19230, p = .10).
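The comparison of final page numbers is a nonparametric test on two groups of page indices. A sketch of how such a test can be run with SciPy appears below; the page values are placeholders, not the study's data.

```python
from scipy.stats import mannwhitneyu

# Placeholder final-page-before-breakoff values for the two groups being compared
# (Slow-to-Fast vs. Fast-to-Slow); not the study's data.
slow_to_fast_last_pages = [4, 11, 19, 27, 33, 41, 50]
fast_to_slow_last_pages = [8, 16, 25, 34, 42, 51, 60]

u_statistic, p_value = mannwhitneyu(slow_to_fast_last_pages,
                                    fast_to_slow_last_pages,
                                    alternative="two-sided")
print(f"U = {u_statistic}, p = {p_value:.3f}")
```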

Response Times

It is possible that, in addition to increasing breakoffs, the discouraging early feedback might have led users to shorten the task by responding more quickly as they advanced through the questionnaire. If this were observed, it would be a concern because faster answers may reflect less careful response processes, which are likely to compromise the quality of information collected by the survey (see Gray & Fu, 2005 and Krosnick, 1991 for discussions of mental shortcuts that lead to sub-optimal performance). Faster responding across the questionnaire when the initially discouraging progress indicator is displayed would be consistent with the suggestion by Boltz (1993) that people allocate mental resources for a task based on how long they expect the task to last; if users' expectations change when the feedback indicates the task will take longer than expected, they might give less time to each question to stretch their allotted resources. By the same logic, the group receiving encouraging feedback might slow down as the progress indicator communicates the apparent brevity of the task and suggests that they have allocated sufficient resources to invest more in each item.

To test this we computed the proportion of total time that had elapsed (elapsed time divided by total time) for each user at each page and tested the linearity of the elapsed time functions for each progress indicator group. If users modified their pace on the basis of the feedback, their elapsed time functions should have appeared as mirror images of the progress feedback curves in Figure 2, which would be non-linear for the variable-speed progress indicators: when initial progress moved quickly users would slow down and when it moved slowly they would speed up. In fact their elapsed time functions were quite linear and similar across the progress indicator conditions.

In a linear regression analysis in which page number and progress indicator condition predicted percent elapsed time averaged across users, the model fit the data almost perfectly (R2 = .98) and the effect of page number was highly significant, F (1, 195) = 12176, p < .001. Moreover, the variable-speed progress indicators did not interact with page number, F (1,195) < 1, indicating that there was no evidence of speed-up or slow-down for any of the progress indicators. As a further check that users did not change their speed of responding based on the apparent speed of the progress, we examined the percent elapsed time at the middle question in the questionnaire. The elapsed time percentages were virtually identical for the three progress indicator conditions: 46.4% for both of the variable-speed progress indicators and 46.7% for the constant-speed progress indicator. Thus it seems that the rate of apparent progress does not affect users' speed one way or the other and suggests that the effort given to answering individual questions is stable across the questionnaire irrespective of the type of progress indicator.
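A sketch of this linearity check is given below: percent of total elapsed time is regressed on page number and its interaction with progress indicator condition. The data frame is synthetic (roughly linear elapsed-time curves plus noise), standing in for the per-page means described above; with the study's data, a near-perfect fit (R² ≈ .98) and a negligible interaction would reproduce the reported pattern.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in: per-page mean percent of total elapsed time for each
# progress indicator condition (roughly linear, as the study reports).
rng = np.random.default_rng(0)
rows = []
for condition in ["Constant", "FastToSlow", "SlowToFast"]:
    for page in range(1, 68):
        rows.append({"condition": condition,
                     "page": page,
                     "pct_elapsed": 100 * page / 67 + rng.normal(0, 0.5)})
frame = pd.DataFrame(rows)

# Percent elapsed time as a function of page number, condition, and their
# interaction; a flat interaction term indicates no speed-up or slow-down.
fit = smf.ols("pct_elapsed ~ page * C(condition)", data=frame).fit()
print(f"R^2 = {fit.rsquared:.3f}")
print(fit.params.filter(like=":"))  # interaction coefficients
```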

Item Nonresponse

It is, however, possible that progress indicators can affect users' willingness to answer particular questions, more specifically, the item non-response rate. If so, then we might expect that, for users who complete the questionnaire, early positive information might lead them to skip fewer questions. We do not find reliable support for this proposal, F(3,2718)=.992, n.s.

Impact of Task Difficulty

The breakoff data indicate that the speed with which progress moves initially may affect how users experience the remainder of the task. Although it did not reliably reduce overall breakoffs, it is possible that good news up front might, in effect, motivate users to persevere for items that are particularly difficult or boring. We designed the automobile ownership item to test this possibility. The form of the question intended to be difficult did in fact pose a more difficult task for users than the form intended to be easy: the breakoff rates were higher for the difficult (.021 of all users) than the easy (.004 of all users) form, χ2(1) = 17.35, p < .001. More importantly, the speed of initial progress affected users' willingness to carry on when the task was difficult. As shown in Figure 3, breakoffs were very low for the easy version of the question irrespective of the type of progress indicator and relatively high for the difficult form of the item for all progress indicators except the one that presented encouraging information in the beginning (Fast-to-Slow), leading to a question form × progress indicator interaction, χ2(3) = 28.65, p < .001. For the Fast-to-Slow group the difference between forms of the automotive item was virtually eliminated (and not reliable). However, the form difference was reliable for the groups that received no feedback (z = 3.40, p < .001) and constant speed feedback (z = 2.62, p < .01). The early good news apparently helps users persevere well after progress begins to accumulate more slowly (for these users progress advanced by only 1% from the preceding to the current page by the time they reached this difficult item).

Figure 3. Percent of all break-offs on easy and difficult forms of question for different progress indicators.

Subjective Experience

If users happened to notice one type of progress indicator earlier than the others they could have devoted more attention to it which might have been at least partly responsible for the pattern of breakoffs rather than the speed of progress. In a debriefing question near the end of the questionnaire, users were asked to indicate when they noticed the feedback on a four point scale running from the beginning of the survey, which we have coded as 1, to the page on which the debriefing question was displayed, which we have coded as 4. The mean score was 1.13 suggesting that most users noticed the feedback very early. Moreover, there were no reliable differences between the three progress indicator groups, suggesting that all groups of users were equally aware of the information. While we cannot say that those who broke off prior to answering this question would have registered similar responses, there is no a priori reason they should have noticed the progress indicators any later than those who answered the debriefing question, especially given the systematic effect of the different progress indicators on their breakoffs.

In general, users' self-reports about the questionnaire, provided in response to debriefing questions, indicate that the speed of initial progress (as communicated by the progress indicator) affected users' experience much as it affected breakoffs (see Table 2). First, the type of progress indicator affected how interesting users found the task, F (3, 2709) = 3.95, p < .01. Those who received good news early on judged the questionnaire to be more interesting than did those in the other progress indicator groups; the comparison of the Fast-to-Slow group to the other three progress indicator groups was significant, t(1,2709)=125.25, p < .001. Although the absolute size of this effect is small, it indicates that progress feedback affects the way users feel. Apparently, people evaluate the content of the questionnaire more favorably when things initially appear to be going well than when they do not.

This finding is consistent with a study by London and Monello (1974) who slowed down or speeded up the movement of an ordinary wall clock while subjects wrote short essays. When the clock moved more slowly than actual time, participants rated the task as more boring than when the clock moved faster than actual time. A study by Sackett, Meyvis, Nelson, Converse, & Sackett (2010) makes a similar point. Participants better tolerated an irritating noise (i.e., rated it less irritating) when “time flew” (the timer was accelerated) than when time dragged (decelerated timer). Moreover, participants rated a favorite song more pleasant when time flew than when it dragged.

Second, the type of progress indicator affected users' judgment of the duration of the task, F (3, 2709) = 4.35, p < .01. The same users who judged the questionnaire to be more interesting, i.e. those who received encouraging feedback first, estimated that it took less time to complete than users in the other progress indicator groups; again, the comparison of the Fast-to-Slow group to the other three progress indicator groups was significant, t(1,2709) = 3.12, p < .001. Actual duration did in fact vary with speed of progress indicator, F (3, 2554) = 3.33, p < .05, but the pattern reversed what we observed for the estimated times, i.e., the group of users who judged the task to take longer, the Slow-to-Fast group, actually took fewer minutes than the group who judged the task to be quicker, the Fast-to-Slow group (see Table 2, Row 2, parenthesized values). This suggests that the speed of early progress had a powerful impact on users' perceptions of the task duration, over and above actual duration (see Czerwinski, Horvitz, & Cutrell, 2001, for a discussion of subjective duration as a measure of usability). The impact of the feedback on perceived duration is especially striking because users did not speed up or slow down, as indicated earlier; the difference in reported duration between progress indicator conditions was perceptual, not based on differences in behavior.

Third, users' judgments about whether the progress feedback accurately reflected their progress ("Yes" versus "No" responses) were affected by the speed of the feedback they received, χ2 (2) = 74.77, p < .001. Those who received discouraging news at first (Slow-to-Fast) were less likely to respond "Yes" than were those presented encouraging information at first (Fast-to-Slow), p < .001, and those who were given Constant rate progress information throughout, p = .027.

Fourth, the type of progress indicator affected users' judgments about how useful they found the feedback, F (2, 1352) = 9.40, p < .001. Even though they found the feedback less accurate than did the other groups, those who received discouraging information first (Slow-to-Fast) reported that it was more useful than did those who received encouraging news first (p < .001) and constant rate information (p < .001). This may indicate that discouraging information is taken to heart to a greater degree than encouraging information, which could be an example of the general principle that losses are often more painful than comparable gains (Kahneman, Knetsch & Thaler, 1991).

Overall, the debriefing results are striking given that, by the time users completed the debriefing questions at the end of the questionnaire, the rate of progress had largely reversed for the variable-speed indicators yet did not seem to reverse users' perceptions. It appears, from these data, that users formulate opinions about the task early on and these first impressions are not substantially modified by later evidence.

Summary

Experiment 1 demonstrated that knowledge about the speed of progress affects breakoffs, driving the phenomenon in different directions depending on whether the initial feedback indicates progress is moving quickly (lowest breakoffs) or slowly (highest breakoffs). This supports the hypothesis that knowledge cuts both ways and casts doubt on the idea that any information about progress necessarily pleases users and increases completion rates. Early feedback that was discouraging increased breakoffs across the questionnaire; early encouraging feedback did not reliably lower breakoffs across the questionnaire but did reduce breakoffs on a difficult task in the middle of the questionnaire, as if increasing users' resolve to get through tough times. Encouraging early feedback also led users to judge the task to be more interesting and shorter in duration than when the early feedback was discouraging. Under the latter conditions, more users judged the feedback to be inaccurate, yet they rated it as more useful. Users did not adjust the speed with which they answered questions nor did they skip questions at different rates as a function of the particular feedback they received.

Experiment Two

Many tasks have a rhythm, a pattern of events that is repeated regularly over time. In the web questionnaire response task, the “beat” might be defined by the presentation of each new question. Because the progress feedback in Experiment 1 was displayed on every page of the questionnaire it may well have reinforced this rhythm, emphasizing the question-by-question character of the task. It seems reasonable that the pace at which the feedback is displayed could affect users' perception of the rhythm and by extension the task's duration. This could, in turn, affect their willingness to complete the questionnaire. The hypothesis, then, is that more frequent feedback makes the task feel slower and increases breakoffs.

Several studies provide support for this idea by showing that users perceive system wait time to be longer when the system presents dynamic feedback more frequently over the wait interval. For example, users of a telephone interface judged the duration of a wait interval to be greater when a ticking sound presented during the interval was repeated more frequently than when it was repeated less frequently (Polkoski & Lewis, 2002). Users' self-reports also indicated more anxiety when the tick rate was faster. Meyer and his colleagues (Meyer, Bitan & Shinar, 1995; Meyer, Shinar, Bitan & Leisner, 1996) observed a similar phenomenon with visual presentation. While users waited, a graphical pattern such as a string of Xs increased in number, i.e., it accumulated more Xs. Users judged the wait time to be greater when the pattern increased in number more quickly than when it grew at a slower rate.

An alternative view is that more frequent feedback intensifies the way one experiences the passage of time, whether slow or fast, so that a task that already feels slow will feel even slower with more frequent feedback but a task that seems to be moving quickly will seem even faster with frequent feedback. The studies of system wait time explore the way users passively experience unfilled intervals; this may well feel inherently slow because the users are just waiting, not actively working on a task. But progress indicators in the web questionnaire task concern the speed with which the user is actively working and, as demonstrated in experiment 1, this can feel fast or slow depending in part on the speed of progress conveyed by the feedback. The hypothesis from this perspective, then, is that the frequency with which feedback is displayed interacts with the speed of the progress conveyed by the feedback in affecting breakoffs. Frequent reminders that the task is moving slowly may be more discouraging than intermittent reminders; frequent reminders that the task is moving quickly may be more encouraging than intermittent reminders.

Another feature of the way progress information is displayed that could affect its impact is user initiative. In experiment 1, the feedback was displayed by default, i.e., whether the user wanted it or not. An alternative approach is to allow users to request progress feedback, for example, by clicking a link in the user interface. It could be that the feedback will have more impact, one way or the other, when users request it because they desire the information and devote more attention to it. Heerwegh & Loosveldt (2006) found that giving users the choice at the start of the session of whether to display progress feedback on all pages or not at all led to higher breakoff rates for those who opted to receive the feedback than when feedback was provided by default. In this case, user control increased the impact of the progress information, though not as designers would have wanted. Enabling users to make a similar choice on each page as opposed to once for the entire task could further increase the impact of progress indicators, for better or worse.

Method

We test both of these possibilities by displaying feedback at three different frequencies: Always-on (as in Experiment 1), Intermittent (at nine transition points in the questionnaire) and On-Demand (users could obtain feedback by clicking a link labeled "Show Progress" as displayed in Figure 4). Note that we did not know in advance how often users would request progress with the On-Demand interface, though based on our work concerning user requests for clarification (Conrad, Couper, Tourangeau & Peytchev, 2006), in which only 13% of respondents ever clicked for a definition, we suspected requests for progress information would be similarly infrequent. If this turned out to be the case, we would be able to compare three levels of frequency, one of which involved user initiative.

Figure 4. User interface for On-Demand progress feedback.

In addition to the variation in frequency/initiative of progress feedback we also varied the speed of progress as in Experiment 1 (Slow-to-Fast, Fast-to-Slow and Constant speed). This created nine combinations of speed × frequency of feedback. Users were randomly assigned to these nine groups and to a tenth group which received no progress information. The ten groups were approximately equal in size.

Questionnaire

As in experiment 1, the questionnaire in the current experiment concerned lifestyle issues and was about 80 pages long. The progress indicator text appeared at the top of each page as in the earlier questionnaire. Again there were no skip patterns.

Participants

As noted earlier the members of opt-in web survey panels may differ from the general population in how certain characteristics (e.g., age, computer use) are distributed. To test the generality of the results across different kinds of web survey respondents, we recruited respondents from two sources associated with users of different levels of experience answering web-based questionnaires. One group of participants was recruited from one of the SSI opt-in panels used in experiment 1; many of these respondents have taken part in 30 or more web surveys. The other group was recruited from AOL “Opinion Place,” a service that routes AOL members to web surveys through banner advertisements. We expected many of the AOL users to be first time web survey respondents.

The SSI users received an email invitation similar to what was sent in the first experiment, indicating that the questionnaire would take five to seven minutes to complete [6]; the AOL users received no advance information about the duration of the questionnaire. The SSI users received the same incentive as in the first experiment (entry into a sweepstakes) and the AOL users were offered American Airlines miles.

Results and Discussion

The impact of speed of progress very closely replicated what was observed in Experiment 1. Breakoff rates varied as a function of speed (χ2(3)=27.92, p < .001) in a pattern that closely matched what was observed in the first experiment: 14.3, 14.4, 19.9 and 11.3 percent for None, Constant Speed, Slow-to-Fast and Fast-to-Slow Progress Indicators, respectively (see Table 1, row 3). Again, early discouraging (Slow-to-Fast) information led to reliably more breakoffs than did no progress indicator (p < .01) but early encouraging (Fast-to-Slow) feedback did not reliably lower breakoffs. And as in experiment 1, Constant Speed feedback did not affect breakoffs.

Breakoff rates were ordered similarly for the SSI and AOL samples (see Table 1, rows 4 and 5). In both samples breakoffs were highest when the early feedback was discouraging (Slow-to-Fast) and lowest when the early feedback was encouraging (Fast-to-Slow). None of the differences are significant in the AOL sample but in the SSI sample both the increase in breakoffs due to the Slow-to-Fast feedback and the decrease in breakoffs due to the Fast-to-Slow feedback differ reliably from no progress indicator (p < .05 in both cases). This suggests that the degree to which progress indicators affect performance may depend, in part, on who the participants are. It also raises the question of why the encouraging early news led to reliably lower breakoffs for the SSI sample in the current experiment but not in experiment 1. One possibility is that encouraging news can reduce breakoffs but did not in experiment 1 because of a floor effect; breakoffs were substantially lower in experiment 1 than for the SSI sample in experiment 2, potentially serving as a lower bound below which breakoff rates could not be moved. In contrast the higher breakoff rates in experiment 2 provided the "room" for positive feedback to have its effect.

Turning to frequency of progress feedback, users in the On-Demand group rarely requested progress feedback – only 1.1 times on average by users who requested feedback and only 37.4% of the users in this group ever requested it. Because the variation in frequency of feedback conformed to the expected pattern (Always-On = 100%, Intermittent = 11.3%, On-Demand = 1.1%), it was possible to examine the impact of frequency of feedback on breakoffs over a wide range. As it turns out, the frequency of progress feedback did not reliably affect the rate of breakoffs (χ2(2)=.19, n.s.).

Because the user requests in the On-Demand group were so rare, it is hard to reach any conclusions about the impact of user initiative on breakoffs. The relative non-use of the progress indicator by this group is consistent with our observations of on-demand definitions in web surveys (Conrad et al., 2006); simply making interactive features available to users does not guarantee they will use them. Perhaps clearer instructions about the On-Demand feedback would have increased its use, but clearer instructions did not increase requests for definitions in our study of definition seeking.

Frequency of feedback did affect breakoffs differently for different speeds, interaction Likelihood Ratio χ2(4) = 14.14, p < .01, as displayed in Figure 5. The highest breakoff rate in the study was observed when discouraging early news (Slow-to-Fast) was displayed on every page (Always-on). However, early discouraging news was substantially less detrimental when delivered intermittently and even less harmful when barely ever delivered (On-Demand). In contrast, encouraging early feedback (Fast-to-Slow) led to lower breakoffs at all frequencies of delivery, but especially for Always-on and Intermittent. This pattern of results supports the hypothesis that more frequent feedback intensifies the content of what is communicated by the early feedback, whether progress seems to be moving fast or slow. This is especially clear for the Constant Speed (linear) progress indicator, which produced a breakoff pattern that reverses the results for the Slow-to-Fast indicator. In both the Constant and Fast-to-Slow speeds, On-Demand feedback led to higher breakoffs than Always-On and Intermittent feedback.
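The interaction reported above can be tested as a likelihood-ratio comparison of two logistic regressions, one with and one without the speed-by-frequency term. The sketch below runs that comparison on purely synthetic respondent-level data (nine cells of 200 simulated respondents each); it illustrates the analysis, not the study's results.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import chi2

# Purely synthetic respondent-level data for the nine speed-by-frequency cells.
rng = np.random.default_rng(1)
rows = []
for speed in ["Constant", "SlowToFast", "FastToSlow"]:
    for frequency in ["AlwaysOn", "Intermittent", "OnDemand"]:
        cell_rate = 0.15 + rng.uniform(-0.05, 0.05)  # arbitrary breakoff rate
        rows += [{"speed": speed, "frequency": frequency,
                  "broke_off": int(rng.random() < cell_rate)} for _ in range(200)]
data = pd.DataFrame(rows)

# Likelihood-ratio test for the speed-by-frequency interaction on breakoffs,
# analogous to the chi-square(4) interaction test reported in the text.
full = smf.logit("broke_off ~ C(speed) * C(frequency)", data=data).fit(disp=0)
reduced = smf.logit("broke_off ~ C(speed) + C(frequency)", data=data).fit(disp=0)
lr_stat = 2 * (full.llf - reduced.llf)
dof = int(full.df_model - reduced.df_model)
print(f"LR chi2({dof}) = {lr_stat:.2f}, p = {chi2.sf(lr_stat, dof):.3f}")
```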

Figure 5. Break-offs due to speed and frequency of feedback.

Intermittent feedback may be one way for designers to hedge their bets: presenting feedback only at section transitions led to breakoff rates that were either intermediate or the lowest across the different frequencies we tested. This result is consistent with the finding by Thaler (1992) that investors were more tolerant of undulations in the value of their portfolio when the feedback was infrequent than when it was frequent.

General Discussion

Progress indicators had a substantial impact on users' decisions to continue or break off in both experiments. In experiment 1 there is nearly a two-fold difference in the breakoff rates depending on the feedback users got about their progress. The nature of the feedback (encouraging, discouraging, etc.) given early in the task seems to be the critical factor in determining how the progress indicators affected users' decision to continue or not. If the early feedback indicated slower than expected progress, this seemed to discourage users from sticking with the task; encouraging feedback seemed to help (e.g., users were more likely to complete the difficult version of an item in experiment 1) and in experiment 1, the encouraging early feedback led users to perceive the whole experience more favorably.

Encouraging early feedback did not reliably reduce overall breakoffs relative to no feedback in experiment 1 or in the AOL sample in experiment 2, but it did reliably reduce breakoffs in the SSI sample in experiment 2. Thus, the results are not conclusive about whether encouraging feedback can ever increase perseverance. The current data suggest that encouraging progress feedback may reduce breakoffs when (1) users' expectations about duration are closer to actual completion time than in the current study, so that encouraging feedback is clearly encouraging, and (2) overall breakoff rates are high enough that a reduction in breakoffs is noticeable.

Intermittent Feedback

Intermittent feedback may strike a balance between communicating good news and downplaying, but still reporting, discouraging news. In experiment 2, it never led to the highest rate of breakoffs but always produced intermediate or the lowest rates. The other frequencies (Always-on and On-Demand) led to the highest breakoff rates for at least one speed of feedback. What was in effect no feedback (On-Demand) outperformed intermittent feedback when the initial progress was slow, but only under these conditions. And feedback that was always displayed outperformed intermittent feedback only when the progress moved at a constant speed. Because designers usually lack knowledge about the task's duration and difficulty, they are unlikely to know how users will perceive progress feedback. Under these conditions, intermittent feedback may be a reasonable gamble.

More study of intermittent feedback is certainly necessary before its use can be recommended or discouraged. This is the case in part because there are numerous ways to present feedback intermittently. We provided feedback at section breaks. Alternatively, one could notify the user when he or she passes milestone percentages such as those divisible by 10. Yet another approach is to indicate the number of the section just started and total number of sections in the questionnaire (e.g., "section 3 of 6"). Clearly there are many design possibilities and little is known about their impact on completion rates [7].

Do Users Actually Desire Progress Feedback?

One implication of the current work is that if the task is very long, a garden-variety (constant speed) progress indicator may increase breakoffs relative to no feedback because the progress will appear to accumulate slowly. One could therefore make the case for presenting no feedback, intentionally keeping users in the dark about their progress. The assumption that users desire knowledge about how much of the task remains – that any knowledge is preferable to none – is not supported by the current research. Users in experiment 2 virtually never requested progress feedback even though it was easy to do so. It is possible that with an even lower-effort interface like a roll-over, more users would have requested progress feedback – we (Conrad et al., 2006) observed a near three-fold increase in the number of web respondents who requested definitions of question wording via mouse roll-overs relative to mouse clicks – but clearly users are more willing to complete the task without progress feedback than to click a mouse to obtain it. Thus if designers were to choose not to display progress feedback, users seem unlikely to miss it. Of course, they might have sought feedback more often if the questionnaire had been longer than the one we used.

Communicating Task Structure

Whether the feedback is intermittent or always displayed, it may better help users assess the amount of remaining time if the feedback communicates the structure of the task. Experimental participants more accurately judge task duration when the task has a coherent, predictable structure. For example, Boltz (1998) asked participants to perform four tasks (Word Processing, Proof Reading, Data Entry, and Computation of Means) two times and varied whether the order of tasks was the same in the two blocks of trials. When the order of the tasks was repeated across blocks, the participants were quite accurate judging overall duration and much less accurate when the order was different. Boltz suggests that the repetition made it easier for users to extract temporal information without explicitly attending to duration; with a predictable and familiar structure, duration is easily inferred from the task.

A progress indicator that makes explicit the structure of the questionnaire and the user's location at any moment may similarly reduce the amount of attention required to determine how much time has passed and how much remains, leading to more accurate estimates of task duration. For example, a section map could present the length (number of questions) of each section and possibly the difficulty of the questions [8] within each section, as well as the user's current position in the section – much like the feedback displayed on exercise machines, which often includes elapsed and remaining time, some difficulty measure like incline or resistance, and current position. Progress feedback like this would take much of the guesswork out of the duration estimation process, allowing users to make accurate duration judgments on the basis of implicit temporal information and potentially freeing resources for allocation to the task. The question is whether it is desirable to help users make accurate assessments of remaining time or only to encourage them about their progress.

Impact of Up-Front Duration Statement

A set of issues raised but not systematically explored by our study concerns the relationship between users' expectations about task duration before reading the first question and the degree to which these expectations are confirmed or disconfirmed by progress feedback. The only information available to the users in the current experiments was the estimated duration in the invitation email, “five to seven minutes,” which unintentionally underestimated actual duration. Users in all groups in experiment 1 recognized that the task took considerably longer than this estimate – their subjective duration reports varied between about 14 and 16 minutes – but they substantially underestimated actual duration (which averaged about 21 to 22 minutes per progress indicator group).

It is possible that, because of the relatively brief estimated duration in the invitation email, at least some of those invited to participate were willing to do so but only for five to seven minutes [9]. Consistent with this idea, Yan, Conrad, Tourangeau & Couper (2007) observed that more people started a web questionnaire when the invitation provided shorter (5 or 10 minutes) rather than longer (25 or 40 minutes) duration estimates, but more broke off when the expected short task (5 or 10 minutes) turned out to take longer than promised; this was intensified by the display of a progress indicator. Although fewer started a task advertised as taking longer (25 or 40 minutes), those who did start seemed more tolerant of a task that outlasted expectations and did not break off at higher rates under these circumstances, even with a progress indicator. Crawford et al. (2001) report that users invited to participate in a shorter survey broke off at higher rates than those invited to take part in a longer one when a progress indicator was presented. Heerwegh and Loosveldt (2006) observed that giving a precise duration in the invitation led to more breakoffs than a vague description of the duration, but only when a progress indicator was displayed in the questionnaire. Presumably the vague duration did not produce a clear expectation. It would seem that one's expectations going into the task contribute to one's willingness to persevere, but even the most willing users may break off when a progress indicator underscores the longer-than-expected duration of the task.

Conclusion

The display of progress information can change users' perceptions of a task's difficulty and duration, thus affecting their moment-to-moment decisions to continue or abandon the task. The influence of early progress information is particularly strong but can cut both ways. As a result, designers need to weigh the costs and benefits of giving users (1) accurate information about their progress, (2) encouraging, though only approximately accurate, information about their progress, or (3) no information at all. Any one of these approaches might be the most appropriate, as long as its consequences for the particular application are carefully weighed.

Acknowledgments

This research was supported by National Science Foundation Grant # SES-0106222 and National Institutes of Health Grant # R01 HD041386-01A1. We are very grateful to Reg Baker, Scott Crawford and Chan Zhang.

Footnotes

1. Scrolling designs, of course, do communicate progress through the position of the slider, but these designs are now relatively rare.

2. The function was not perfectly linear because progress was unchanged for the 10 pages which served as section breaks.
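As a purely illustrative sketch, the calculation below holds the displayed value constant on section-break pages by counting only question pages toward progress; the total page count and break positions are hypothetical, not the exact figures from our questionnaire.

    # Illustrative "percent completed" calculation in which section-break
    # pages do not advance the indicator. Page counts are hypothetical.
    def percent_completed(current_page, section_break_pages, total_pages):
        """Displayed progress after finishing page current_page (1-based);
        pages listed in section_break_pages do not count toward progress."""
        question_pages_total = total_pages - len(section_break_pages)
        question_pages_done = sum(
            1 for p in range(1, current_page + 1) if p not in section_break_pages
        )
        return round(100 * question_pages_done / question_pages_total)

    # With 50 pages, 10 of which are section breaks:
    breaks = {5, 10, 15, 20, 25, 30, 35, 40, 45, 50}
    print(percent_completed(4, breaks, 50))   # 10 (four question pages done)
    print(percent_completed(5, breaks, 50))   # still 10: page 5 is a section break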

3. The 8% response rate is low for surveys in general but is typical for web surveys. The primary concern with a low response rate is that the nonrespondents may differ from respondents on the characteristics measured by the survey. If this is the case, survey statistics may not generalize to the entire population – the goal of most surveys. The debate about this issue for web surveys is on-going (e.g. Chang & Krosnick, 2009; Kaplowitz, Hadlock & Levine, 2004) and it concerns serious topics for social scientists. However, the current study was not conducted to produce population estimates but to explore the impact of progress indicators on completion rates among web respondents. The low response rates do not prevent us from generalizing the current results to other samples of web respondents and, in fact, we test this by partially replicating the current experiment with a somewhat different sample in experiment 2.

4. We designed the control group to be relatively large because the survey in which the current experiment was embedded had numerous other purposes that we did not want to compromise in the event that certain progress indicators seriously lowered completion rates. Had this been the case, we could have restricted analyses to 50% of users in the control group, a large enough sample to produce reliable data in the other studies. In fact, this was not a problem and we used a smaller control group in experiment 2.

5. This would have been more complicated if the questionnaire had implemented a skip pattern, because “percent completed” might surge forward when respondents are routed past a series of questions or might actually move backward at branch points when the denominator in the progress calculation increases. See Kaczmirek, Neubarth, Bosnjak and Bandilla (2004) for a discussion of how to smooth changes in progress feedback when a questionnaire branches.
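The toy calculation below, written under our own simplifying assumptions, illustrates the problem: when a branch adds questions, a naive denominator grows and the displayed value can move backward. The averaging rule shown for comparison is only one possible way to soften such jumps; it is not Kaczmirek et al.'s specific method.

    # Toy illustration of jumpy progress under branching. The smoothing
    # rule is a hypothetical example, not a published algorithm.
    def naive_percent(answered, remaining_estimate):
        """Percent completed using the current estimate of remaining
        questions; the denominator shifts whenever a branch changes
        that estimate, so the display can jump or move backward."""
        return round(100 * answered / (answered + remaining_estimate))

    def smoothed_percent(answered, min_remaining, max_remaining):
        """Average the shortest and longest remaining paths so that a
        single branch decision changes the display less abruptly."""
        optimistic = 100 * answered / (answered + min_remaining)
        pessimistic = 100 * answered / (answered + max_remaining)
        return round((optimistic + pessimistic) / 2)

    # After 10 answers, a branch reveals 30 further questions instead of 10:
    print(naive_percent(10, 10))          # 50 before the branch
    print(naive_percent(10, 30))          # 25 after the branch (moves backward)
    print(smoothed_percent(10, 10, 30))   # 38, a gentler change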

6. We learned in experiment 1 that actual completion time was substantially greater than 5–7 minutes but retained the figure in the experiment 2 invitations to enable comparisons across experiments.

7. Yan et al. (2007) observed no difference in breakoffs between a section-level progress indicator (e.g., “3 of 9”) and percent-completed feedback like that in the current study. They did not, however, present the section-level feedback intermittently.

8. The drawback to presenting difficulty information or remaining time is that particular questions may be more or less difficult for some users than for others. This kind of feedback may lead a user to invest more or less effort and time than the task warrants in his or her particular case if the user attempts to conform to the normative levels of performance presented in the feedback. This might affect both the quality of the answers and the likelihood of breaking off if the user finds the individual response tasks more difficult and more time-consuming than the feedback indicates.

9. It seems unlikely that the duration mentioned in the survey invitations, whether accurate or not, could have produced the different patterns of breakoffs we observed for the different progress indicators, primarily because users were randomly assigned to progress indicators. In addition, in experiment 2, the AOL sample did not receive any advance indication of duration, and the different-speed progress indicators produced the same pattern of breakoffs among them as among the SSI participants, who did receive the duration estimate in their invitation.

10. Users who broke off on the first page (where no progress is displayed) and users who completed the questionnaire were removed from this analysis.

11. 1=Not at all interesting; 6=Extremely interesting.

12. % “Yes” responses; note that those who were not presented any feedback did not answer this question.

13. 1=Not at all useful; 6=Extremely useful; note that those who were not presented any progress feedback did not answer this question.

Contributor Information

Frederick G. Conrad, Institute for Social Research, University of Michigan, Joint Program in Survey Methodology, University of Maryland.

Mick P. Couper, Institute for Social Research, University of Michigan, Joint Program in Survey Methodology, University of Maryland.

Roger Tourangeau, Institute for Social Research, University of Michigan, Joint Program in Survey Methodology, University of Maryland.

Andy Peytchev, Research Triangle Institute.

References

1. Boltz MG. Time estimation and expectancies. Memory and Cognition. 1993b;21:853–863. doi: 10.3758/bf03202753.
2. Boltz MG. The processing of temporal and nontemporal information in the remembering of event durations and musical structure. Journal of Experimental Psychology: Human Perception & Performance. 1998;24:1087–1104. doi: 10.1037//0096-1523.24.4.1087.
3. Chang L, Krosnick J. National surveys via RDD telephone interviewing versus the Internet: Comparing sample representativeness and response quality. Public Opinion Quarterly. 2009;73:641–679.
4. Conrad FG, Couper MP, Tourangeau R, Peytchev A. Use and non-use of clarification features in web surveys. Journal of Official Statistics. 2006;22:245–269.
5. Couper M, Traugott M, Lamias M. Web survey design and administration. Public Opinion Quarterly. 2001;65:230–253. doi: 10.1086/322199.
6. Crawford S, Couper MP, Lamias M. Web surveys: Perception of burden. Social Science Computer Review. 2001;19:146–162.
7. Czerwinski M, Horvitz E, Cutrell E. Subjective duration assessment: An implicit probe for software usability. Proceedings of IHM-HCI 2001; Lille, France; September 2001. pp. 167–170.
8. Gray WD, Fu W. Soft constraints in interactive behavior: The case of ignoring perfect knowledge in-the-world for imperfect knowledge in-the-head. Cognitive Science. 2004;28:359–382.
9. Heerwegh D, Loosveldt G. An experimental study on the effects of personalization, survey length statements, progress indicators, and survey sponsor logos in web surveys. Journal of Official Statistics. 2006;22:191–210.
10. Kaczmirek L, Neubarth W, Bosnjak M, Bandilla W. Progress indicators in filter-based surveys: Computing methods and their impact on drop out. Paper presented at the RC33 6th International Conference on Social Science Methodology; Amsterdam; August 2004.
11. Kahneman D, Knetsch JL, Thaler RH. Anomalies: The endowment effect, loss aversion and the status quo bias. Journal of Economic Perspectives. 1991;5:263–291.
12. Kaplowitz MD, Hadlock TD, Levine R. A comparison of Web and mail survey response rates. Public Opinion Quarterly. 2004;68:94–101.
13. Krosnick JA. Response strategies for coping with the cognitive demands of attitude measures in surveys. Applied Cognitive Psychology. 1991;5:213–236.
14. Lawrence JW, Carver CS, Scheier MF. Velocity toward goal attainment in immediate experience as a determinant of affect. Journal of Applied Social Psychology. 2002;32:788–802.
15. London H, Monello L. Cognitive manipulation of boredom. In: London H, Nisbett RE, editors. Thought and Feeling: Cognitive Alteration of Feeling States. Chicago: Aldine; 1974. pp. 74–81.
16. Meyer J, Bitan Y, Shinar D. Displaying a boundary in symbolic and graphic “wait” displays: Duration estimates and users' preferences. International Journal of Human-Computer Interaction. 1995;7:273–290.
17. Meyer J, Shinar D, Bitan Y, Leiser D. Duration estimates and users' preferences in human-computer interaction. Ergonomics. 1996;39:46–60. doi: 10.1080/00140139608964433.
18. Myers BA. The importance of percent-done progress indicators for computer-human interfaces. Proceedings of SIGCHI; 1985. pp. 11–17.
19. Polkoski MD, Lewis JR. Effect of auditory waiting cues on time estimation in speech recognition telephony applications. International Journal of Human-Computer Interaction. 2002;14:423–446.
20. Sackett AM, Meyvis T, Nelson LD, Converse BA, Sackett AL. You're having fun when time flies: The hedonic consequences of subjective time progression. Psychological Science. 2010;21:111–117. doi: 10.1177/0956797609354832.
21. Thaler RH. The Winner's Curse. New York: Russell Sage Foundation; 1992.
22. Yan T, Conrad F, Tourangeau R, Couper M. Should I stay or should I go: The effects of progress indicators, promised duration, and questionnaire length on completing web surveys. Paper presented at the annual conference of the American Association for Public Opinion Research; Anaheim, CA; 2007.
