Journal of Graduate Medical Education
Editorial
J Grad Med Educ. 2022 Dec;14(6):629–633. doi:10.4300/JGME-D-22-00722.1

Getting the Most Out of Surveys: Optimizing Respondent Motivation

Anthony R Artino Jr,1 Quentin R Youmans,2 Matthew G Tuck3
PMCID: PMC9765912; PMID: 36591428

Health professions education researchers, including those who study graduate medical education (GME), have a long-standing love affair with surveys. Evidence of this fondness can be found by reviewing recent articles published in the Journal of Graduate Medical Education (JGME). By our count, 56% of Original Research and Brief Report articles published in 2021 used a survey. This large proportion is not surprising, considering the constraints that many GME scholars face, including limitations of time, money, and methodological expertise. Consequently, surveys are often the most accessible research method for GME investigators. In addition, surveys are commonly used by GME educators for trainee assessment and program evaluation. Surveys are also quite adaptable and can be an efficient way to assess hard-to-measure psychological constructs like beliefs, values, attitudes, perceptions, and opinions.1

Notwithstanding their widespread use and methodological flexibility, surveys come with several inherent weaknesses. One weakness, supported by decades of empirical evidence in fields like public opinion polling, sociology, and psychology, is that low levels of respondent motivation can lead to poor-quality data.2 In GME the problem may be even more acute, as resident physicians face many competing demands on their time, including clinical and educational responsibilities, as well as life beyond work. These and other constraints make prioritizing surveys difficult, regardless of the merit of any particular GME study or evaluation effort.

With this landscape in mind, we focus on the issue of respondent motivation in this Editorial. To address motivation, we first discuss cognition and highlight what participants typically consider when completing a survey. Next, we describe several response behaviors that can occur when motivation is low, thereby resulting in low-quality survey data. We conclude with design and implementation strategies that can help researchers optimize respondent motivation and ultimately lead to more precise, accurate, and interpretable survey data.

Survey Response Process

To understand respondent motivation, it is helpful to first examine the psychology of survey response. A classic framework used to describe the cognitive work of taking a survey is Tourangeau and colleagues' response process model (see Figure).3 It proposes that respondents move through 4 cognitive processes when taking a survey. First, they need to comprehend the survey item and interpret the meaning of the words on the page (in a self-administered survey). Next, they need to retrieve from their long-term memory the relevant information needed to respond to the item. That information could include specific dates for activities in the past, or an attitude or opinion about a topic. Generally, something must be retrieved from memory. Next, respondents need to integrate that information into a judgment and, in some cases, make an estimation. For example, a respondent asked to report how often they gave blood last year might not remember all instances and therefore would need to estimate the number based on how often blood drives are held. If blood drives are conducted quarterly, then the respondent might estimate “4” as the number of times they gave blood last year. Finally, once respondents have an answer in mind, they must report that answer on the survey and adapt their response based on the options provided. In the example above, if the response options for the frequency of blood donations are presented as “sometimes” or “often,” then a respondent would need to convert their answer “4” into what they believe is the most appropriate response category.

Figure. Components of the Survey Response Process (adapted from Tourangeau et al3)

It is important to note that respondents may jump around, and even skip steps, while working through the 4 cognitive processes. For instance, a person asked to report how often they saw a physician last year might begin retrieving that information from memory but wonder if going to the physical therapist should be counted. They might then jump back to the comprehension step and reread the question to look for clues. The respondent might then jump forward to the response step to look for other clues about what a reasonable number of physician visits might be, based on the response options provided. In this way, the survey response process is nonlinear; respondents hop around and use contextual clues provided by the survey to help navigate and respond to individual survey questions.

Another important point about these 4 cognitive steps is that difficulties encountered at any point along the process can produce errors. For example, respondents might misunderstand a question because of confusing wording or atypical visual layout, not be able to retrieve the relevant information because they have forgotten it, or not be able to make an accurate judgment because they do not have the necessary information to give an informed answer. In each example, response errors may occur, and the answers provided are more likely to be imprecise, inaccurate, and difficult for survey researchers to interpret. Furthermore, respondents can and often do take cognitive shortcuts while working through a survey. That is, at any step in the response process—comprehension, retrieval, judgment and estimation, or reporting—respondents may not optimize the survey response process. Instead, they may choose to conserve their mental energy and satisfice.4

Motivation and Satisficing

Concerns about respondent motivation have long been described in the survey design literature. More than 2 decades ago, Krosnick4 noted that “a great deal of cognitive work is required to generate an optimal answer to even a single question.” As such, high-quality answers tend to come from respondents who are motivated to expend that energy and optimize the survey response process. Respondents are motivated by numerous factors, including a desire for self-expression, an appetite for intellectual challenge, and a wish to be helpful. In GME, residents report being motivated to participate in surveys out of a sense of duty and professionalism.5 On the other hand, personal experience and decades of empirical research tell us that respondents are often unmotivated to provide high-quality answers to survey questions. Krosnick calls the resulting response behavior satisficing.4

Satisficing is the degree to which respondents “compromise their standards and expend less energy.”4 That is, rather than devote the necessary effort to generate optimal answers, respondents often give “good enough” answers by, for example, being less thoughtful about a question's meaning, searching their memory less thoroughly, integrating retrieved information carelessly, and/or selecting a response imprecisely. Thus, instead of carefully working their way through the 4 cognitive steps of the response process to generate the best, most precise answers (ie, optimizing the process), respondents who satisfice conserve their mental energy and settle for giving just satisfactory answers.4 Although empirical evidence is limited,5 we suspect that satisficing may be particularly prevalent for GME trainees given their unique time and context constraints.

In practice, satisficing manifests in a number of response behaviors that lead to low-quality survey data: (1) rushing through a survey; (2) selecting the first reasonable answer; (3) agreeing with all statements presented on the survey; (4) selecting the same options repeatedly, in a straight line (so-called straightlining); (5) selecting “don't know” or “not applicable” without actually thinking about the question being asked; and (6) skipping items or entire sections of a survey.2 Satisficing is epitomized by this quote from a resident at a large Midwestern academic medical center who was asked about their survey behaviors: “A lot of the time… I'll just click the middle all the way through, because I have nothing really to contribute and I just want to get through it.”5 As this statement implies, answers from respondents who are not optimizing the response process result in poor-quality data that are unlikely to be trustworthy, credible, or valid for their intended use.
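
To make these warning signs concrete, the brief sketch below shows one way an evaluator might screen a survey export for two of the satisficing signatures listed above, straightlining and rushing. It is a minimal illustration in Python using the pandas library; the column names, the 5-point scale, and the 60-second speed threshold are hypothetical choices for demonstration, not validated cutoffs.

    import pandas as pd

    # Hypothetical export from an online survey tool: one row per respondent,
    # five 5-point Likert items (q1-q5) plus total completion time in seconds.
    responses = pd.DataFrame({
        "respondent": ["r1", "r2", "r3", "r4"],
        "q1": [4, 3, 3, 5],
        "q2": [4, 2, 3, 4],
        "q3": [4, 4, 3, 5],
        "q4": [4, 1, 3, 4],
        "q5": [4, 5, 3, 2],
        "seconds": [45, 210, 38, 180],
    })

    items = ["q1", "q2", "q3", "q4", "q5"]

    # Straightlining: the same option selected for every item
    # (exactly one unique value across the row).
    responses["straightlined"] = responses[items].nunique(axis=1) == 1

    # Rushing: finishing faster than a plausible reading time. The 60-second
    # threshold is arbitrary here; in practice it should come from pilot data.
    responses["rushed"] = responses["seconds"] < 60

    # Flag rows showing either signature for closer review, not automatic deletion.
    flagged = responses[responses["straightlined"] | responses["rushed"]]
    print(flagged[["respondent", "straightlined", "rushed"]])

Flags like these only identify responses worth a second look; whether to exclude them is an analytic judgment, since a respondent with genuinely uniform views can straightline honestly.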

Mitigating Satisficing and Encouraging Thoughtful Responses

In light of the problems that result from satisficing, it is important for GME educators and researchers to understand the phenomenon and work to implement solutions that mitigate harms to data quality. Krosnick4 described 3 conditions that promote satisficing: (1) greater task difficulty; (2) lower respondent ability or education level; and (3) lower participant motivation to respond. In most cases, respondent ability and education level are fixed. Fortunately, GME researchers often are surveying high-ability participants who are well educated and thus less likely to satisfice than, for example, a high school student. As for task difficulty and respondent motivation, survey designers can influence these factors—and appreciably mitigate satisficing—through thoughtful design and implementation practices.6

Easing task difficulty is the most important way to mitigate satisficing. In the case of a survey, the task is completing the survey itself, and the best way for designers to ease that task is to follow evidence-informed best practices. The goal is to design a high-quality survey that supports respondents as they work their way through the 4 cognitive response processes. Although these design practices have been articulated in detail elsewhere,1,7-9 we highlight in Table 1 a number of high-yield practices that GME survey designers can use to simplify the task of survey completion.

Table 1. Strategies for Making Survey Completion Easier

Write clear instructions
• Respondents should easily understand the purpose of the survey and its subsections5

Ask questions
• Questions on a survey are more conversational and easier to understand than statements7
• Where possible, ask respondents questions instead of asking them to agree or disagree with a number of statements10

Ask one question at a time
• Respondents will struggle to give clear answers to questions that have more than one component (ie, multi-barreled items)11
• Split up questions with multiple elements and consider asking only the most important of the component parts

Keep it positive
• Positively worded survey questions are easier to comprehend and answer than negatively worded questions7

Avoid reverse-scored items
• Some sets of survey questions use reverse-scored items whose valence is the opposite of the other items on the survey (or in the scale)
• Reverse-scored items are meant to “keep respondents on their toes,” but in practice, they usually lead to less reliable scores12

Label all response options
• Labeling each response option (as opposed to only the end points, for example) helps respondents better comprehend what is being asked7

Pretest the survey
• Despite careful adherence to evidence-informed practices, respondents may still struggle to optimize the survey task
• Pretest individual survey items (and potentially the entire survey) using techniques such as expert reviews13 and cognitive interviews14
• Pretesting activities can help survey designers catch problems early in the development process, prior to full implementation

Finally, designers can directly address respondent motivation—the motivation to accept a survey invitation, start the survey, and optimize the response process—by viewing a survey request as a social exchange. As described by Dillman et al,8 “people are more likely to comply with a request from someone else if they believe and trust that the rewards for complying with that request will eventually exceed the costs of complying.” In other words, potential respondents often consider 3 primary factors: rewards (What will I gain by taking this survey?), costs (How much time will it take?), and trust (Do I trust the invitation source and the proposed data use?). Table 2 includes several practices that designers can employ to magnify rewards, decrease costs, and fortify trust.

Table 2. Strategies for Bolstering Respondent Motivation Through Rewards, Costs, and Trust

Magnify Rewards

Provide incentives
• Reward respondents with a small incentive; in graduate medical education, free food, cash, or gift cards can go a long way5
• Giving everyone an incentive up-front is more effective than entering them into a drawing to win a prize15

Ask for help or advice
• Residents often feel good when they are able to provide advice or otherwise help a colleague5
• Frame the survey as asking potential respondents for help or advice, since providing it can be socially rewarding8

Provide social validation
• Telling potential respondents that other colleagues have already helped can be validating, since much of human behavior is normative8

Decrease Costs

Keep it short
• A shorter survey is almost always better than a longer survey, since the time to complete a survey is the primary cost8,15

Make it convenient
• A simple way to decrease the cost of participation is to make completing the survey easy (eg, providing a hyperlink to the survey or providing a captive audience with time during training to complete a paper survey)8

Minimize requests for personal information
• Asking respondents for sensitive information such as income and age increases the costs of survey completion16
• Avoid or otherwise minimize requests for personal information in a survey and, when needed, obtain sensitive information at the end of the survey, once trust and rapport have been better established8

Fortify Trust

Obtain sponsorship
• Potential respondents are more likely to respond to a survey request if it comes from a legitimate source or authority with whom they are familiar8

Make the task appear important
• Make each contact with potential respondents appear important8
• Avoid unprofessional communications or sloppy-looking surveys, which are unlikely to garner much trust or interest
• With online survey tools like Qualtrics and SurveyMonkey, for example, it is easy to design a professional-looking survey that is error free and aesthetically pleasing15

Ensure confidentiality and security
• Most respondents are rightly concerned about the confidentiality of their responses and the security of their personal information
• Explain how survey data will be used and what efforts will be taken to ensure confidentiality and security8

Summary

High-quality survey results come from participants who are motivated to optimize the response process. Unfortunately, many respondents are unmotivated; they tend to conserve their mental energy and satisfice, settling for “good enough” answers. A useful model for understanding how respondents think through a survey is the response process model, which describes 4 cognitive steps: comprehension, retrieval, judgment and estimation, and reporting. By using this model, survey developers can anticipate the cognitive work of respondents and mitigate respondents' tendencies to satisfice. Respondent motivation can be further bolstered by considering the rewards, costs, and trust involved in survey completion. By employing these strategies, researchers and educators can optimize respondent motivation and collect better-quality survey data. In addition, we encourage investigators to study GME-specific survey strategies, designed for the unique GME population of residents, staff, and faculty, to optimize survey data quality.

Footnotes

Disclaimer: Small portions of this editorial were originally published on the Harvard Macy Institute Community Blog (https://harvardmacy.org/index.php/hmi/designing-better-surveys) under a Creative Commons license (CC BY). The CC BY license allows material to be distributed, remixed, adapted, and built upon in any medium or format, so long as attribution is given to the creator. The views expressed in this article are those of the authors and do not necessarily reflect the official policy or position of the Uniformed Services University of the Health Sciences, the Department of Defense, or the US Government.

References

1. Phillips AW, Artino AR Jr. Lies, damned lies, and surveys. J Grad Med Educ. 2017;9(6):677–679. doi:10.4300/JGME-D-17-00698.1
2. Vriesema CC, Gehlbach H. Assessing survey satisficing: the impact of unmotivated questionnaire responding on data quality. Educ Res. 2021;50(9):618–627. doi:10.3102/0013189X211040054
3. Tourangeau R, Rips LJ, Rasinski K. The Psychology of Survey Response. Cambridge, UK: Cambridge University Press; 2000.
4. Krosnick JA. Survey research. Annu Rev Psychol. 1999;50(1):537–567. doi:10.1146/annurev.psych.50.1.537
5. Colbert CY, Brateanu A, Nowacki AS, Prelosky-Leeson A, French JC. An examination of resident perspectives on survey participation and methodology: implications for educational practice and research. J Grad Med Educ. 2021;13(3):390–403. doi:10.4300/JGME-D-20-01431.1
6. Hamby T, Taylor W. Survey satisficing inflates reliability and validity measures: an experimental comparison of college and Amazon Mechanical Turk samples. Educ Psychol Meas. 2016;76(6):912–932. doi:10.1177/0013164415627349
7. Gehlbach H, Artino AR Jr. The survey checklist (manifesto). Acad Med. 2018;93(3):360–366. doi:10.1097/ACM.0000000000002083
8. Dillman DA, Smyth JD, Christian LM. Internet, Phone, Mail, and Mixed-Mode Surveys: The Tailored Design Method. 4th ed. Hoboken, NJ: John Wiley & Sons; 2014.
9. Sullivan GM, Artino AR Jr. How to create a bad survey instrument. J Grad Med Educ. 2017;9(4):411–415. doi:10.4300/JGME-D-17-00375.1
10. Dykema J, Schaeffer NC, Garbarski D, Assad N, Blixt S. Towards a reconsideration of the use of agree-disagree questions in measuring subjective evaluations. Res Soc Adm Pharm. 2022;18(2):2335–2344. doi:10.1016/j.sapharm.2021.06.014
11. Artino AR Jr, Gehlbach H, Durning SJ. AM last page: avoiding five common pitfalls of survey design. Acad Med. 2011;86(10):1327. doi:10.1097/ACM.0b013e31822f77cc
12. Swain SD, Weathers D, Niedrich RW. Assessing three sources of misresponse to reversed Likert items. J Mark Res. 2008;45(1):116–131. doi:10.1509/jmkr.45.1.116
13. Artino AR Jr, La Rochelle JS, Dezee KJ, Gehlbach H. Developing questionnaires for educational research: AMEE Guide No. 87. Med Teach. 2014;36(6):463–474. doi:10.3109/0142159X.2014.889814
14. Willis GB, Artino AR Jr. What do our respondents think we're asking? Using cognitive interviewing to improve medical education surveys. J Grad Med Educ. 2013;5(3):353–356. doi:10.4300/JGME-D-13-00154.1
15. Phillips AW, Durning SJ, Artino AR Jr. Survey Methods for Medical and Health Professions Education: A Six-Step Approach. Philadelphia, PA: Elsevier; 2022.
16. Tourangeau R, Yan T. Sensitive questions in surveys. Psychol Bull. 2007;133(5):859–883. doi:10.1037/0033-2909.133.5.859
