Author manuscript; available in PMC: 2016 Jul 7.
Published in final edited form as: Surv Res Methods. 2015;9(2):101–109.

Assessing Quality of Answers to a Global Subjective Well-being Question Through Response Times

Ting Yan, Lindsay Ryan, Sandra E Becker, Jacqui Smith
PMCID: PMC4936784  NIHMSID: NIHMS798925  PMID: 27398099

Abstract

Many large-scale surveys measure subjective well-being (SWB) through a single survey item. This paper takes advantage of response time data to explore the relation between the time taken to answer a single SWB item and the reliability and validity of answers to that item. We found that the reliability and validity of answers to the SWB item are low for fast respondents aged 70 and above and for slow respondents between the ages of 50 and 70. The findings indicate that longer time spent answering the single SWB item is associated with data of lower quality for respondents aged between 50 and 70, but with data of higher quality for respondents aged 70 and above. This paper speaks to the importance of capitalizing on response times that are readily available from computerized interviews to evaluate answers provided by respondents, and calls survey researchers’ attention to differences in the time taken to answer a survey question across respondent subgroups.

Keywords: response times, validity, reliability, subjective well-being

Introduction

Assessment of the quality of answers to attitudinal survey questions is often challenging, if not impossible. First of all, true values are needed to validate answers reported by survey respondents. For behavioral or factual survey questions (such as medical expenses in the past 12 months), which can be validated in theory, true values are often not readily available. Even when true values can be obtained from administrative records or other external data sources, there are still problems with accessibility, timeliness, record linkage, and the accuracy of those records themselves. For attitudinal items (such as subjective well-being), which ask for respondents’ subjective evaluations, true values do not exist at all. As a result, survey methodologists and researchers turn to other data sources (in the absence of true values or gold standards) that can provide an indirect indication of data quality. Response times, or response latencies, are one such alternative source of data used by survey researchers and methodologists to evaluate data quality.

From survey methodologists’ point of view, the task of answering an attitudinal survey question is complex and prone to error (Tourangeau, Rips, and Rasinski 2000). Take a sample life satisfaction question as an example: “Please think about your life-as-a-whole. How satisfied are you with it? Are you completely satisfied, very satisfied, somewhat satisfied, not very satisfied, or not at all satisfied?” Respondents first need to understand what the question means and determine what it refers to. They are then assumed to review and retrieve all relevant aspects of their lives. They retrieve an existing judgment on their well-being if one exists or construct, on the spot, an evaluative judgment about their well-being based on what they have retrieved. Respondents then have to map their judgment onto one of the response categories. The response options are rather vague, and respondents have to decide, again on their own, what constitutes ‘somewhat satisfied’ and how it differs from ‘not very satisfied.’ Obviously, things could go wrong at any of these response stages (Tourangeau, Rips, and Rasinski 2000). Respondents might misunderstand the intent of the question. They may not be able to retrieve all relevant information. They could have trouble weighing and integrating the retrieved information into an evaluative judgment. They might not be able to map their evaluative judgment onto one of the response options provided.

Furthermore, undergoing these cognitive processes and carrying out these cognitive tasks take time and require cognitive effort. Insufficient cognitive capacity or unwillingness to exert the required cognitive effort may lead some respondents to adopt a less optimal processing route by either processing all tasks less thoroughly or skipping certain tasks completely. Such a suboptimal processing strategy is called “satisficing” in the survey literature (Krosnick 1991; 1999; Tourangeau et al. 2000). For instance, respondents interviewed on the telephone may simply opt for the last response option regardless of their evaluative judgment because that is what is retained in their working memory (Krosnick 1991; 1999). A top-down judgment strategy could be adopted in lieu of conducting all required cognitive tasks (Diener, Inglehart, and Tay 2013). All of these satisficing behaviors are undesirable and are believed to lead to answers of low quality (Krosnick 1991; 1999).

Unfortunately, the actual answers alone do not provide enough information about how respondents came up with their answers and how good or bad their answers are. Response times have been used as a proxy measure of the amount of processing required to answer a survey question (Bassili and Fletcher 1991), and response time data are increasingly used to examine the quality of survey responses (Yan and Olson 2013). The survey literature has shown that response times are highly correlated with question characteristics, respondent characteristics, and interviewer characteristics. For instance, longer questions in terms of the number of words, the number of clauses or sentences, and the number of response options take longer to answer (Couper and Kreuter 2013; Yan and Tourangeau 2008). Questions requiring extensive retrieval and integration (such as complex attitudinal and behavioral questions) and open-ended questions are subject to longer processing times (Bassili and Fletcher 1991; Yan and Tourangeau 2008). Furthermore, poorly designed or flawed survey questions take longer to answer (Bassili 1996; Bassili and Scott 1996; Lenzner, Kaczmirek, and Lenzner 2010). In terms of respondent characteristics, people with a lower level of cognitive ability, such as older people and people with less education, are found to need more time to come up with answers (Couper and Kreuter 2013; Yan and Tourangeau 2008). Those experienced with completing web surveys and with using the Internet take less time to answer web survey questions (Yan and Tourangeau 2008). Interviewers are consistently found to speed up as they conduct more interviews (Olson and Peytchev 2007; Olson and Bilgen 2011).

Of particular interest to this paper are several studies that demonstrate empirically the relationship between response times and data quality. Using data from two telephone surveys, Draisma and Dijkstra (2004) examined response times as an indicator of response error. They selected survey questions for which the true scores could be determined for individual respondents and found a negative relation between response times and respondents’ likelihood of providing correct answers to these questions. Their findings show that nonsubstantive answers (e.g., Don’t Know answers) produce the longest response times, followed by incorrect answers and then correct answers. In a web study, Heerwegh (2003) showed that respondents who did not know the answer to a knowledge question took longer to come up with an answer. Furthermore, two studies found that people with unstable or weak attitudes needed more time to answer attitudinal questions than those with stable or firm attitudes (Bassili and Fletcher 1991; Heerwegh 2003). These studies provide evidence that long response times can be used as an indicator of response error due to uncertainty and inability to answer.

By contrast, short response times are found to be associated with the tendency to acquiesce or to answer positively regardless of content (Knowles and Condon 1999; Bassili 2003) and with the tendency to engage in satisficing or less thoughtful processing (Malhotra 2008; Callegaro, Yang, Bhola, Dillman, and Chin 2009; Kaminska, McCutcheon, and Billiet 2010; Zhang and Conrad 2013). For instance, Malhotra (2008) showed that respondents speeding through a questionnaire are more likely to exhibit primacy effects (i.e., selecting the first response option presented on a computer screen regardless of what is asked and of their true evaluations). Zhang and Conrad (2013) demonstrated that respondents who answered questions very fast were also more likely to straightline by providing identical answers to a series of questions.

Therefore, both long and short response times could reflect poor response quality. In this paper, we propose to make use of response times to examine answers to a global single-item measure of subjective well-being. Global single-item measures of subjective well-being (SWB) have found their way into many large-scale surveys, such as the British Household Panel Survey (BHPS), the German Socioeconomic Panel (SOEP), the Swiss Household Panel (SHP), the American National Election Studies (ANES), the General Social Surveys (GSS), the Health and Retirement Study (HRS) and its sister surveys conducted in various countries (such as SHARE in Europe and CHARLS in China), and the European Social Surveys (ESS), to name just a few. However, the ability of this type of single survey item to measure SWB has been challenged across disciplines (Kahneman and Krueger 2006). Empirically speaking, global measures of SWB have wide-ranging test-retest reliabilities, from less than desirable values (0.40) to values well within the desirable range (0.89) (Krueger and Schkade 2008). Furthermore, responses to single-item global measures are found to be sensitive to small changes in question wording, the order of survey questions, modes of administration, and other contextual factors not relevant to the survey questions, such as respondents’ mood (Schwarz and Clore 1983; Schwarz and Strack 1999; Schwarz 2007; Bertrand and Mullainathan 2001; Conti and Pudney 2011; Dolan and Kavetsos 2012).

Making use of response time data and building on the existing survey methodology literature, this paper makes the first attempt to assess the validity and reliability of a single-item SWB measure by taking into consideration the amount of time taken to answer the item. The first research question addressed in this paper is to what extent the amount of time spent answering a global SWB item is related to the reliability and validity of the resultant answers.

In addition, we expect that older people take more time on average to answer the SWB question than their younger counterparts for two reasons. First, older adults have reduced fluid cognitive resources (e.g., slower processing speed and smaller working memory capacity) compared to younger people (Salthouse 1991). Second, older adults have a longer life history to retrieve, review, and integrate. Therefore, the second research question addressed in this paper is whether or not the same relation between the amount of time taken to answer the SWB question and the quality of the answers holds for older people compared to their younger counterparts.

Data and methods

Data

For this analysis, we draw on data from the Research on Well-being and Use of Time (ROBUST) study conducted by the Survey Research Center, University of Michigan. A sample of 968 adults aged between 50 and 97 (M = 69.33, SD = 11.64) was included in our study. Sample recruitment was stratified by age decade (50s, 60s, 70s, and 80s and above) and gender. One sub-sample (n = 642) was selected via Random Digit Dialing (RDD) across the continental United States and completed a Computer-Assisted Telephone Interview (CATI). A second sub-sample of individuals (n = 326) was recruited locally for Computer-Assisted Personal Interviews (CAPI). All respondents first completed a background, health, and well-being interview and were interviewed a second time one month later for follow-up well-being assessments. Finally, all participants were asked to complete a self-administered paper questionnaire containing additional psychosocial measures. The completion rate for all three components of the study was 91%.

Measurement of response times

Response times to the general SWB item are measured via a latent timer and are calculated as the difference between the time when the item appears on the computer screen and the time when the interviewer clicks the ‘next’ button to go to the next survey question. Response times measured via latent timers record the amount of time respondents spent answering that particular survey question, including comprehension (i.e., listening to the interviewer reading the question and, if they are read, the response options), retrieving information and using the retrieved information to construct an evaluation, reporting the evaluation back to the interviewer, and the interviewer selecting a response option or entering a verbatim answer. An alternative method of measuring response times involves active timers, which start when the interviewer finishes reading a survey question and end when the respondent starts to give an answer (e.g., Bassili and Fletcher 1991). However, the validity of response times obtained through active timers has been challenged, since respondents do not always wait until the end of a survey question to start processing it (Yan and Tourangeau 2008). Furthermore, response times produced by active and latent timers are shown to be highly correlated and to produce consistent and comparable results (Mulligan, Grant, Mockabee, and Monson 2003; Yan and Tourangeau 2008).
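As an illustration, a latent-timer measurement reduces to the difference between two paradata timestamps recorded by the interviewing software; the timestamps and field names below are hypothetical, not taken from the ROBUST instrument.

```python
from datetime import datetime

# Hypothetical paradata timestamps for a single administration of the SWB item.
item_displayed = datetime(2015, 3, 1, 10, 15, 2)   # item appears on the interviewer's screen
next_clicked   = datetime(2015, 3, 1, 10, 15, 25)  # interviewer clicks 'next' after entering the answer

# Latent-timer response time: the full span from display to moving on, covering question
# reading, respondent processing, answer reporting, and answer entry.
response_time_seconds = (next_clicked - item_displayed).total_seconds()
print(response_time_seconds)  # 23.0
```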

Measurement of subjective well-being

Subjective well-being was measured with a global question on life satisfaction: “Please think about your life as a whole. How satisfied are you with it?” Five response options are provided: not at all satisfied, not very satisfied, somewhat satisfied, very satisfied, and completely satisfied.

Analytical methods

To examine the impact of the time taken to answer the SWB item on the quality of answers, we first divided respondents into two groups based on the time they took to answer the general SWB question. The “fast respondents” group spent less than 30 seconds answering the question, whereas the “slow respondents” group spent at least 30 seconds.1 Table 1 displays the number of respondents in each response time group and age group, together with the mean response times in seconds taken by that group to answer the SWB item.

Table 1.

Mean Response Times in Seconds (and Sample Sizes) by Response Time Group by Age Group

                               Fast Respondents    Slow Respondents
50 to less than 70 years old   18.2 (345)          43.7 (111)
70 years or older              18.4 (321)          46.8 (109)
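A minimal sketch of how the fast/slow split described above and the Table 1 cell means might be computed, assuming a pandas data frame with hypothetical column names and illustrative values:

```python
import pandas as pd

# Hypothetical analysis file; column names and values are for illustration only.
df = pd.DataFrame({
    "age":        [55, 63, 72, 81, 68, 75],
    "rt_seconds": [12.5, 41.0, 18.0, 52.3, 27.9, 16.4],
})

# 30-second cutoff separating fast and slow respondents (see footnote 1).
df["rt_group"] = pd.cut(df["rt_seconds"], bins=[0, 30, float("inf")],
                        labels=["fast", "slow"], right=False)

# The two age groups used throughout the analysis.
df["age_group"] = pd.cut(df["age"], bins=[50, 70, 200],
                         labels=["50 to <70", "70+"], right=False)

# Cell means and counts, analogous to Table 1.
print(df.groupby(["age_group", "rt_group"], observed=True)["rt_seconds"].agg(["mean", "count"]))
```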

We then examine the reliability and validity of answers to the SWB question by response time group. To assess reliability, we correlated answers to the same single-item SWB question asked at Wave 1 and Wave 2 (one month apart). Of course, this method of assessing reliability assumes uncorrelated errors, an assumption that may be violated in practice when respondents remember what they answered at Wave 1 and try to be consistent at Wave 2. However, this test-retest reliability measure is used in the SWB literature (Krueger and Schkade 2008) and is employed here for comparative purposes.

In a similar way, to assess validity, we correlated answers to the SWB question with answers to other questions that are conceptually related (such as general health, positive affect, and Diener’s Satisfaction With Life Scale). Again, this is not a pure measure of validity (see Saris and Gallhofer 2007, p. 193), but it is employed here to understand correlational differences due to response times across age groups. General health is measured through a single survey question asking people about their general health: “Would you say your health is excellent, very good, good, fair, or poor?” Positive affect is measured by counting the number of positive feelings respondents reported in a battery of questions asking how they felt yesterday. The Satisfaction With Life Scale (Diener’s SWLS) is a short 5-item scale created by Diener and colleagues to measure global cognitive judgments of satisfaction with life (Diener, Emmons, Larsen, and Griffin 1985).2 Correlations are transformed to Fisher’s Z scores and differences are tested on the transformed Z scores.
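As a sketch of this comparison, the standard test for a difference between two independent Pearson correlations transforms each correlation with Fisher's r-to-z and scales the difference by its standard error. The function below is a generic illustration rather than the authors' code; the example values are the Table 2 reliability correlations paired with the Table 1 group sizes, so the resulting statistic may differ slightly from the published one because the analytic sample sizes are not reported here.

```python
import numpy as np
from scipy import stats

def compare_independent_correlations(r1, n1, r2, n2):
    """Two-sided test of the difference between two independent Pearson correlations
    using Fisher's r-to-z transformation."""
    z1, z2 = np.arctanh(r1), np.arctanh(r2)        # Fisher's z transforms
    se = np.sqrt(1.0 / (n1 - 3) + 1.0 / (n2 - 3))  # standard error of the difference
    z = (z1 - z2) / se
    p = 2 * stats.norm.sf(abs(z))                  # two-sided p-value
    return z, p

# Illustrative values: fast vs. slow test-retest correlations (Table 2) with Table 1 group sizes.
z, p = compare_independent_correlations(r1=0.59, n1=666, r2=0.55, n2=220)
print(round(z, 2), round(p, 2))
```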

We further take a more refined approach to examining the four relations mentioned above. We regress Wave 1 SWB on characteristics related to SWB (e.g., gender, education, employment status, marital status, and income) and the other three variables (general health, positive affect, and Diener’s SWLS). In addition, we regress Wave 2 SWB on Wave 1 SWB together with the same set of demographic characteristics. We then compare the regression coefficients by age group and response time group. This approach has the advantage of controlling for differences in sample composition across the response time and age groups, as well as the advantage of isolating the effects of the demographic characteristics on the relationships between Wave 1 SWB and Wave 2 SWB, general health, positive affect, and Diener’s SWLS. All analyses are unweighted.
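The two regression specifications could be set up along the following lines. This is a minimal sketch with synthetic data and assumed variable names, fit within a single age-group by response-time cell; it is not the authors' actual code.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for one age-group x response-time cell; variable names are assumed.
rng = np.random.default_rng(0)
n = 300
df = pd.DataFrame({
    "swb_w1": rng.integers(1, 6, n), "swb_w2": rng.integers(1, 6, n),
    "general_health": rng.integers(1, 6, n), "positive_affect": rng.integers(0, 8, n),
    "swls": rng.normal(20, 6, n), "female": rng.integers(0, 2, n),
    "education": rng.integers(8, 18, n), "employed": rng.integers(0, 2, n),
    "married": rng.integers(0, 2, n), "income": rng.normal(50, 20, n),
})
controls = "C(female) + education + C(employed) + C(married) + income"

# Wave 1 SWB regressed on the three conceptually related measures plus demographics.
w1_model = smf.ols(f"swb_w1 ~ general_health + positive_affect + swls + {controls}", data=df).fit()

# Wave 2 SWB regressed on Wave 1 SWB plus the same demographics (the reliability analogue).
w2_model = smf.ols(f"swb_w2 ~ swb_w1 + {controls}", data=df).fit()

print(w1_model.params[["general_health", "positive_affect", "swls"]])
print(w2_model.params["swb_w1"])
```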

Results

We first examine, in Table 2, whether the validity and reliability of answers to the SWB item differ for fast and slow respondents. In general, correlations between answers to the SWB item at Wave 1 and the other measures are larger among respondents who answered quickly than among those who took longer to answer. However, none of the differences in correlation coefficients between the two respondent groups reaches statistical significance at the 0.05 level.

Table 2.

Reliability and Validity of Answers to SWB by Fast and Slow Respondents

Correlation between SWB at Wave 1 with   Fast Respondents   Slow Respondents   Z-Score Difference   p-value
W2 SWB (Reliability)                     0.59               0.55               0.81                 0.42
General Health (Concurrent Validity)     0.32               0.24               1.14                 0.25
Positive Affect (Concurrent Validity)    0.40               0.36               0.60                 0.55
Diener’s SWLS (Congruent Validity)       0.53               0.52               0.24                 0.81

To investigate age differences in the relationship between response times and data quality, we grouped respondents by age and response time and present the same set of correlation coefficients in Table 3.

Table 3.

Assessing Reliability and Validity of SWB answers by Age and Response Times

                                Fast Respondents   Slow Respondents   Z-Score Difference   p-value

Reliability: Correlation between SWB at Wave 1 and Wave 2
50 to less than 70 years old    0.66               0.46                2.63                0.01
70 years or older               0.45               0.60               −1.85                0.06

Validity: Correlation between SWB at Wave 1 and General Health
50 to less than 70 years old    0.40               0.17                2.33                0.02
70 years or older               0.23               0.31               −0.77                0.44

Validity: Correlation between SWB at Wave 1 and Positive Affect
50 to less than 70 years old    0.54               0.36                2.02                0.04
70 years or older               0.22               0.35               −1.23                0.22

Validity: Correlation between SWB at Wave 1 and Diener's SWLS
50 to less than 70 years old    0.60               0.48                1.46                0.20
70 years or older               0.38               0.55               −1.92                0.06

In general, the time taken to answer the SWB question has a significant impact on the reliability and validity of the answers given by younger respondents between the ages of 50 and 70. Specifically, those who answered faster produced answers with significantly higher reliability and validity (on two of the three validity measures) than those who took a longer time. For older respondents aged 70 and above, however, the relationship between the time taken to answer the SWB question and the reliability and validity of the actual answers is the opposite: quality measures are better for people who took more time than for those who took less time.

Shown in Table 4 are regression coefficients from models that control for differences in sample composition and isolate the effects of demographic characteristics on the relationships (the full regression results are displayed in the Appendix). These regression coefficients reflect the relations between Wave 1 SWB and the other four variables (Wave 2 SWB, general health, positive affect, and Diener’s SWLS) holding demographics constant. The Potthoff test is employed to examine whether regression coefficients differ significantly across respondent subgroups (Weaver and Wuensch 2013). In other words, the Potthoff test examines whether or not the four relationships of interest, after removing the impact of demographic characteristics, vary by age and response time group; a sketch of this comparison follows the table. Table 4 conveys the same conclusions as Table 3: the relationships between Wave 1 SWB and Wave 2 SWB, general health, positive affect, and Diener’s SWLS are stronger for fast respondents aged less than 70 than for slow respondents in that age group. In addition, the four relationships are stronger for slow respondents aged 70 or older than for their fast counterparts.

Table 4.

Regression Coefficients by Age and Response Time Groups

Regression Coefficient of SWB on Wave 2 SWB
                          Fast Respondents   Slow Respondents   Potthoff Test Statistic   p-value
50 to 69 years of age     0.58               0.40               F(1,446)=3.23             0.07
70 years or older         0.38               0.62               F(1,417)=4.10             0.04

Regression Coefficient of General Health on SWB
                          Fast Respondents   Slow Respondents   Potthoff Test Statistic   p-value
50 to 69 years of age     0.27               0.10               F(1,447)=3.81             0.05
70 years or older         0.16               0.21               F(1,420)=0.77             0.38

Regression Coefficient of Positive Affect on SWB
                          Fast Respondents   Slow Respondents   Potthoff Test Statistic   p-value
50 to 69 years of age     0.58               0.30               F(1,423)=8.28             0.00
70 years or older         0.23               0.31               F(1,409)=1.63             0.20

Regression Coefficient of Diener's SWLS on SWB
                          Fast Respondents   Slow Respondents   Potthoff Test Statistic   p-value
50 to 69 years of age     0.28               0.24               F(1,422)=0.70             0.40
70 years or older         0.19               0.28               F(1,405)=2.88             0.09
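A minimal sketch of a Potthoff-style slope comparison, using synthetic data and assumed variable names rather than the study file: pool the two response-time groups, let the intercept and slope differ by group, and F-test the slope interaction (Weaver and Wuensch 2013 describe the full procedure, which can also test intercept differences jointly).

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for the two response-time groups within one age group; names are assumed.
rng = np.random.default_rng(1)
n = 400
df = pd.DataFrame({
    "swb_w2": rng.normal(4, 1, n),
    "swb_w1": rng.normal(4, 1, n),
    "slow":   rng.integers(0, 2, n),  # 1 = slow respondent, 0 = fast respondent
})

# Pool the groups and allow both the intercept and the slope of swb_w1 to differ by group.
pooled = smf.ols("swb_w2 ~ swb_w1 * slow", data=df).fit()

# Test whether the swb_w1 slope differs between fast and slow respondents,
# analogous to the F(1, df) entries reported in Table 4.
print(pooled.f_test("swb_w1:slow = 0"))
```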

Discussion

This paper looks into the quality of answers provided to a single item measuring global subjective well-being. Unlike previous research on SWB, this paper is the first to examine the quality of answers to this item in the context of the amount of time spent answering it. It is also the first to examine age differences in the impact of the time taken to answer the SWB item on the quality of the resultant answers.

We first analyzed the reliability and validity of answers to the SWB item through bivariate correlations and found that, for respondents aged between 50 and 70, those who took a longer time to answer the SWB question tended to produce answers of lower reliability and validity compared to their counterparts who answered more quickly. By contrast, for older respondents aged 70 or above, those who spent more time answering the SWB question tended to provide answers of higher reliability and validity compared to those in the same age group who spent less time. In addition, we re-examined these relationships after controlling for differences in sample composition and again found stronger relationships for fast respondents aged between 50 and 70 and for slow respondents aged 70 or above.

We take the findings to indicate that, for respondents less than 70 years of age, the longer response times spent answering the SWB item reflect difficulties with answering the item more than thoughtful processing. For older respondents aged 70 or above, it seems that the difficult task of answering the SWB item might have caused some of them to give up on thoughtful processing and to adopt a satisficing response strategy, leading to shorter response times and data of worse quality.

These conclusions warrant attention from survey researchers who include this type of single survey item measuring global subjective well-being in their surveys and from data users who work with answers to this type of item in their analyses. Survey items measuring SWB vary in question wording and response options, as evidenced by the World Database of Happiness (http://worlddatabaseofhappiness.eur.nl/) and the OECD guidelines (OECD, 2013). Given our findings, we strongly recommend that response times be taken advantage of to evaluate answers to these single-item measures of SWB. Survey researchers and data analysts are encouraged to replicate our analytic methods in assessing the quality of answers to different versions of SWB questions and to consider alternative ways of measuring SWB that are cognitively less demanding and able to produce answers of high quality.

One major limitation of our study is that our results are based on data from a survey of respondents aged 50 and above who resided in the United States. As a result, we cannot generalize our findings to data from a younger population or data collected in a different culture. Future research is needed to replicate our findings with a representative sample of the general population and with surveys collected in a cross-cultural comparative design.

Supplementary Material

Appendix 1

Footnotes

1

The 30-second cutoff point was recommended by a reviewer based on the OECD guidelines (OECD, 2013, p. 64). We also used median values and age-adjusted median values (that is, respondents within each age group were divided into two groups based on whether the time they took to answer the question was less than the median response time for all people in that age group). Our findings and conclusions do not change.

2

Please refer to http://internal.psychology.illinois.edu/~ediener/SWLS.html for the exact wording of the five items.

References

1. Bassili JN. The Minority Slowness Effect: Subtle Inhibitions in the Expression of Views not Shared by Others. Journal of Personality and Social Psychology. 2003;84:261–276.
2. Bassili J, Fletcher J. Response-Time Measurement in Survey Research: A Method for CATI and a New Look at Nonattitudes. Public Opinion Quarterly. 1991;55(3):331–346.
3. Bassili JN, Scott BS. Response Latency as a Signal to Question Problems in Survey Research. Public Opinion Quarterly. 1996;60:390–399.
4. Bassili J, Krosnick JA. Do Strength-Related Attitude Properties Determine Susceptibility to Response Effects? New Evidence from Response Latency, Attitude Extremity, and Aggregate Indices. Political Psychology. 2000;21(1):107–132.
5. Bertrand M, Mullainathan S. Do People Mean What They Say? Implications for Subjective Survey Data. American Economic Review. 2001;91:67–73.
6. Bodenhausen GV, Wyer RS Jr. Social Cognition and Social Reality: Information Acquisition and Use in the Laboratory and the Real World. In: Hippler H-J, Schwarz N, Sudman S, editors. Social Information Processing and Survey Methodology. New York: Springer-Verlag; 1987. pp. 6–41.
7. Callegaro M, Yang Y, Bhola D, Dillman D, Chin T. Response Latency as an Indicator of Optimizing in Online Questionnaires. Bulletin de Methodologie Sociologique. 2009;103(1):5–25.
8. Conti G, Pudney S. Survey Design and the Analysis of Satisfaction. Review of Economics and Statistics. 2011;93:1087–1093.
9. Couper MP, Kreuter F. Using Paradata to Explore Item Level Response Times in Surveys. Journal of the Royal Statistical Society, Series A (Statistics in Society). 2013;176(1):271–286. doi: 10.1111/j.1467-985X.2012.01065.x.
10. Diener E, Emmons RA, Larsen RJ, Griffin S. The Satisfaction with Life Scale. Journal of Personality Assessment. 1985;49:71–75. doi: 10.1207/s15327752jpa4901_13.
11. Diener E, Inglehart R, Tay L. Theory and Validity of Life Satisfaction Scales. Social Indicators Research. 2013;112:497–527.
12. Dolan P, Kavetsos G. Happy Talk: Mode of Administration Effects on Subjective Well-Being. CEP Discussion Paper dp1159. Centre for Economic Performance, LSE; 2012. Accessed at http://ideas.repec.org/p/cep/cepdps/dp1159.html.
13. Draisma S, Dijkstra W. Response Latency and (Para)Linguistic Expressions as Indicators of Response Error. In: Presser S, Rothgeb J, Couper M, Lessler J, Martin E, Martin J, Singer E, editors. Methods for Testing and Evaluating Survey Questionnaires. Hoboken, NJ: John Wiley & Sons; 2004. pp. 131–147.
14. Heerwegh D. Explaining Response Latencies and Changing Answers Using Client-Side Paradata from a Web Survey. Social Science Computer Review. 2003;21:360–373.
15. Kahneman D, Krueger AB. Developments in the Measurement of Subjective Well-Being. Journal of Economic Perspectives. 2006;20:3–24.
16. Kahneman D, Krueger AB, Schkade DA, Schwarz N, Stone AA. A Survey Method for Characterizing Daily Life Experience: The Day Reconstruction Method. Science. 2004;306:1776–1780. doi: 10.1126/science.1103572.
17. Kaminska O, McCutcheon AL, Billiet J. Satisficing Among Reluctant Respondents in a Cross-National Context. Public Opinion Quarterly. 2010;74(5):956–984.
18. Knowles ES, Condon CA. Why People Say ‘Yes’: A Dual-Process Theory of Acquiescence. Journal of Personality and Social Psychology. 1999;77:379–386.
19. Krueger A, Schkade D. The Reliability of Subjective Well-being Measures. Journal of Public Economics. 2008;92:1833–1845. doi: 10.1016/j.jpubeco.2007.12.015.
20. Krosnick J. Survey Research. Annual Review of Psychology. 1999;50:537–567. doi: 10.1146/annurev.psych.50.1.537.
21. Krosnick JA. Response Strategies for Coping with the Cognitive Demands of Attitude Measures in Surveys. Applied Cognitive Psychology. 1991;5(3):213–236.
22. Lenzner T, Kaczmirek L, Lenzner A. Cognitive Burden of Survey Questions and Response Times: A Psycholinguistic Experiment. Applied Cognitive Psychology. 2010;24(7):1003–1020.
23. Malhotra N. Completion Time and Response Order Effects in Web Surveys. Public Opinion Quarterly. 2008;72(5):914–934. doi: 10.1093/poq/nfn059.
24. Mulligan K, Grant JT, Mockabee ST, Monson JQ. Response Latency Methodology for Survey Research: Measurement and Modeling Strategies. Political Analysis. 2003;11:289–301.
25. OECD. OECD Guidelines on Measuring Subjective Well-being. OECD Publishing; 2013. http://dx.doi.org/10.1787/9789264191655-en.
26. Olson K, Bilgen I. The Role of Interviewer Experience on Acquiescence. Public Opinion Quarterly. 2011;75(1):99–114.
27. Olson K, Peytchev A. Effect of Interviewer Experience on Interview Pace and Interviewer Attitudes. Public Opinion Quarterly. 2007;71:273–286.
28. Saris WE, Gallhofer IN. Design, Evaluation, and Analysis of Questionnaires for Survey Research. New York: John Wiley; 2007.
29. Schwarz N. Attitude Construction: Evaluation in Context. Social Cognition. 2007;25:638–656.
30. Schwarz N, Clore GL. Mood, Misattribution, and Judgments of Well-Being: Informative and Directive Functions of Affective States. Journal of Personality and Social Psychology. 1983;45(3):513–523.
31. Schwarz N, Strack F. Reports of Subjective Well-being: Judgmental Processes and Their Methodological Implications. In: Kahneman D, Diener E, Schwarz N, editors. Well-being: The Foundations of Hedonic Psychology. New York: Russell Sage; 1999. pp. 61–84.
32. Tourangeau R, Rips LJ, Rasinski K. The Psychology of Survey Response. New York: Cambridge University Press; 2000.
33. van Zandt T. Analysis of Response Time Distributions. In: Pashler H, Wixted J, editors. Stevens’ Handbook of Experimental Psychology (3rd ed., Vol. 4): Methodology in Experimental Psychology. New York: John Wiley & Sons; 2002.
34. Wagner-Menghin M. Towards the Identification of Non-scalable Personality Questionnaire Respondents: Taking Response Time into Account. Psychologische Beiträge. 2002;44:62–77.
35. Weaver B, Wuensch KL. SPSS and SAS Programs for Comparing Pearson Correlations and OLS Regression Coefficients. Behavior Research Methods. 2013;45:880–895. doi: 10.3758/s13428-012-0289-7.
36. Yan T, Olson K. Analyzing Paradata to Investigate Measurement Error. In: Kreuter F, editor. Improving Surveys with Paradata: Analytic Use of Process Information. John Wiley & Sons; 2013. pp. 73–96.
37. Yan T, Tourangeau R. Fast Times and Easy Questions: The Effects of Age, Experience and Question Complexity on Web Survey Response Times. Applied Cognitive Psychology. 2008;22(1):51–68.
38. Zhang C, Conrad F. Speeding in Web Surveys: The Tendency to Answer Very Fast and Its Association with Straightlining. Survey Research Methods. 2013;8:127–135.
