Abstract
Surveys are a common tool used to evaluate educational initiatives and to collect data for many types of research. However, clinician educators conducting survey‐based evaluation and research may struggle to administer their surveys efficiently. As a result, they often fail to obtain adequate response rates and thus may have difficulty publishing their survey results. Previous papers in this series focused on the initial steps of survey development and validation, but it is equally important to understand how best to administer a survey to obtain meaningful responses from a representative sample. In this paper, we take the lessons learned from designing a survey and collecting validity evidence and prepare to administer the survey for research. We focus specifically on how researchers can reach individuals in the target population, methods of contact and engagement, evidence‐informed factors that enhance participation, and recommendations for follow‐up with nonrespondents. We also discuss the challenges of survey administration and provide guidance for navigating low response rates.
INTRODUCTION
Given the complexity of survey‐based medical education research, our intention with this series was to simplify the process of conducting survey research within academic emergency medicine. 1 , 2 Our goal was to improve the approach to survey methodology; as such, we sought to develop a series of best practices articles as a guide for medical education researchers. While other papers have briefly touched on survey administration, our paper offers a different approach, delving further into the challenges and benefits of each type of administration option. 3 , 4 We further discuss the benefits and challenges with respect to tracking, personalization, survey length, use of incentives, and engaging with respondents.
SAMPLING THE TARGET POPULATION
When considering survey administration and delivery methods, researchers must first identify their target population as well as their sampling frame. 5 The target population is the group of individuals the researcher ultimately aims to describe and potentially make inferences about. The sampling frame, on the other hand, is the group or list from which the sample is drawn. In a perfect world, the sampling frame would perfectly match the target population, but in practice this seldom occurs. For example, the target population might be all the emergency medicine core faculty in ACGME‐accredited programs. To sample this target population, the researcher might use the Council of Residency Directors in Emergency Medicine (CORD) listserv. This is not a perfect approximation of the target population, since not all core faculty may subscribe to the listserv; however, the researcher could make a reasoned argument that the listserv is a close approximation of the target population.
To determine the most appropriate target population and corresponding sampling frame, the researcher must consider both the objectives of the survey and practical issues, balancing appropriate coverage (to ensure representative responses) against feasibility. Another important factor is the survey's response rate. Response rate refers to the fraction or percentage of potential survey respondents who return completed surveys. Stated another way, response rate is the number of individuals who responded to the survey divided by the total number of potential respondents. The ultimate response rate can be affected by a number of important factors, including, among other things, the number of open‐ended versus closed‐ended items (respondents generally do not like completing long, open‐ended items, so response rates can suffer). 6 , 7 Response rates are important because if a survey is not completed by enough people, the resulting data may not be representative of the attitudes, opinions, beliefs, or behaviors of the entire group (i.e., nonresponse bias may exist). Therefore, representative sampling depends on having a large enough sample to make meaningful inferences, and the response rate provides some (but not all) of the information needed to know how representative the sample is likely to be.
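Expressed as a simple formula, this definition is:

\[
\text{Response rate (\%)} = \frac{\text{number of individuals who returned a completed survey}}{\text{total number of potential respondents invited}} \times 100
\]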
As a rule, researchers should aim to achieve the highest response rate possible, given their study's contextual limitations (e.g., the overall size of the target population and the study's financial constraints). In addition, it is important to know that some journals require a minimum response rate. For example, JAMA asks that survey studies “have sufficient response rates, generally ≥60%.” 8 That said, regardless of the sample size (but especially when response rates are low), researchers should assess for potential nonresponse bias using techniques such as wave or follow‐up analysis. For a more complete description of response rates and nonresponse bias, interested readers are directed to AMEE Guide No. 102: Improving Response Rates and Evaluating Nonresponse Bias in Surveys. 6
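To illustrate what a wave (or follow‐up) analysis might look like in practice, the brief sketch below compares early and late responders on a single survey item, with late responders serving as a rough proxy for nonresponders. The data, column names, and 14‐day cutoff are hypothetical and used only for illustration.

```python
# Minimal wave-analysis sketch: compare early vs. late responders on one item.
# The data, column names, and 14-day cutoff are illustrative assumptions.
import pandas as pd

responses = pd.DataFrame({
    "days_to_respond": [1, 2, 3, 5, 9, 15, 18, 21, 25, 30],
    "satisfaction":    [4, 5, 4, 3, 4, 3, 2, 3, 2, 3],  # 1-5 Likert item
})

# Treat responses received after a reminder (here, >14 days) as the "late wave."
early = responses[responses["days_to_respond"] <= 14]["satisfaction"]
late = responses[responses["days_to_respond"] > 14]["satisfaction"]

print(f"Early-wave mean: {early.mean():.2f} (n={len(early)})")
print(f"Late-wave mean:  {late.mean():.2f} (n={len(late)})")
# A meaningful difference between waves suggests possible nonresponse bias.
```

If the two waves differ substantially on key items, responders may not resemble nonresponders, and the results should be interpreted with that caveat in mind.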
MEDIUM OF SURVEY ADMINISTRATION
Once the target population and sampling frame have been determined, researchers should consider the best way or ways to administer the survey to potential respondents to ensure the highest possible response rate. This decision should also factor in budgetary constraints and institutional resources. There are multiple ways to administer or distribute a survey: paper surveys, in person or via postal mail; electronic surveys, by email or smartphone (mobile app or text); audience response systems, at in‐person or virtual meetings; and social media. 5 , 6 , 7 , 9 , 10 Ultimately, a multimodal approach will typically yield the highest response rate. 11 Some of the benefits, challenges, and other considerations for these various administration approaches are described below (see Table 1 for a summary).
TABLE 1.
Survey delivery tool.
| Survey delivery mode | Examples | Benefits | Drawbacks | Potential solutions |
|---|---|---|---|---|
| Paper survey (in person) | In a classroom or conference | 100% delivery to intended audience; physical copy; improved response rate if time is allotted | Increased cost; increased time investment; may not be delivered by the PI (if there is a power differential); environmental impact | Query institutional resources earmarked for this purpose; engage research assistants for survey delivery |
| Electronic survey (web‐based survey software) | Email delivery; QR code; text message; social media; app based (respondents may need the app) | Low cost; works on desktop, mobile device, or smartphone; charts, images, and graphs may be integrated; allows for branching logic; can be automated; may utilize listservs | Email fatigue; possible technical issues for individuals who prefer paper; easily missed | Include a prenotification; personalize the invitation; include the survey in the subject line of the email |
| Audience response systems | In a classroom or conference; in a live webinar | 100% delivery to attendees | Missed responses from those not in attendance | Use a multimodal survey approach and send a follow‐up survey to those not in attendance |
Note: Adapted from Step 4: Survey Delivery. 5
Hard‐copy surveys
Paper surveys, administered in person or, less commonly, delivered by postal mail, have several advantages over web‐based or other types of electronic surveys. Hard‐copy surveys provide respondents with a tactile motivator, which may increase response rates. 7 In‐person paper surveys can also allow researchers to easily reach a large sampling frame in a relatively short period of time (e.g., an in‐person survey delivered to a captive audience of students in a classroom). Moreover, a paper survey may allow respondents to better review a visual representation of queried content, as opposed to being asked such questions on a phone survey.
One challenge with using in‐person or mailed paper surveys is that respondents may be concerned about confidentiality and thus less likely to respond or provide accurate responses to sensitive questions. 5 Further, in‐person or mailed paper surveys can have higher overhead costs associated with printing the surveys, mailing them, and then entering the data into statistical software or a spreadsheet. Using paper surveys with scannable forms can reduce data entry expense but can also add a small layer of complexity in answering on a “bubble sheet” (from the respondent's perspective).
Electronic surveys
If considering the use of electronic surveys, researchers have a multitude of survey platforms to choose from (e.g., SurveyMonkey, Google Forms, Qualtrics, REDCap). Researchers might explore institution‐sponsored, web‐based options before independently investing funds in these resources (specific examples can be found in Table 2). Electronic modes of administration offer several benefits through features incorporated into the platforms themselves.
TABLE 2.
Digital survey platforms.
| Service | Benefits | Drawbacks |
|---|---|---|
| Qualtrics | | |
| SurveyMonkey | | |
| Google Forms | | |
| LimeSurvey | | |
| REDCap | | |
Note: Adapted from Step 4: Survey Delivery. 5
Each of these tools has various delivery options and other considerations:
- Electronic surveys can be sent to each potential respondent directly via email. Sending the survey to individual email addresses allows the researcher to appropriately target the sample and monitor the response rate. 5
- Most of the current platforms used for survey‐based research have security options in which a single‐use link may be generated. Doing so requires personalization and typically the generation of a contact list that includes email addresses, as well as the collection of potentially identifiable information such as IP addresses. Depending on the platform, the downloaded survey results may be confidential or anonymous; therefore, it is important to review the security features of the chosen platform to ensure, for example, that a promised “anonymous survey” is in fact anonymous.
- Electronic surveys can be sent as a link placed in a webinar chat or virtual meeting or presented as a QR code on a slide (a minimal example of generating such a code appears after this list). Depending on the chat traffic in a virtual meeting or webinar, however, a survey link can easily be missed or ignored. This approach also misses potential respondents from the target population who are not in attendance, and some potential respondents may struggle with how to use a QR code.
- Surveys may be distributed to broad populations via social media or by placing a hyperlink in a message sent to multiple respondents (e.g., via a listserv). 7 , 9 , 12
- Audience response surveys, like those used in in‐person or virtual classrooms, have the benefit of allowing researchers to obtain real‐time data from a captive audience. Several systems (e.g., PollEverywhere, Kahoot!, Socrative) can also track the responses, which allows the researcher to quantify, in real time, the response rate of the individuals present in the audience. However, audience response systems allow only those present in the room (either in person or virtually) to participate and may inappropriately limit the population.
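For researchers who want to present a survey link as a QR code on a slide (as mentioned in the list above), the minimal sketch below uses the third‐party Python qrcode package; the package requirement, URL, and output filename are assumptions for illustration only.

```python
# Minimal sketch: generate a QR code image for a survey link to place on a slide.
# Requires the third-party "qrcode" package with Pillow: pip install qrcode[pil]
# The URL and output filename are placeholders, not from the original paper.
import qrcode

survey_url = "https://example.qualtrics.com/jfe/form/SV_hypotheticalID"
img = qrcode.make(survey_url)  # returns a PIL image containing the QR code
img.save("survey_qr.png")      # embed this image in the presentation slide
print("Saved QR code for:", survey_url)
```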
There are several challenges and special considerations with electronic surveys and in the reporting of their results. 13 If using individual emails, researchers should specify whether a survey is confidential (i.e., deidentified) as opposed to truly anonymous (i.e., no identifying data are collected). Most electronic survey tools, like Qualtrics and SurveyMonkey, have system settings that allow for various types of confidential or anonymous data collection. This distinction is key because if the survey is truly anonymous (i.e., there is no way to link responses to an individual's personal information), then this should be explicitly stated. In practice, however, personal information is often collected for the purpose of linking survey data to other outcomes (e.g., linking a medical student's opinions on a course to their course grades). Such an approach also facilitates recontact or follow‐up with nonresponders.
One of the major challenges of disseminating a survey by social media or large listservs is the inability to track the true sampling frame or response rate, which can limit representativeness and yield results that are difficult to publish. What is more, it can be difficult to calculate a response rate for surveys distributed via email because the denominator is sometimes unclear: some researchers count emails sent, others count emails opened, and still others count the number of links clicked in an email. Regardless of the method used, it is important for researchers to describe exactly how they calculated their response rate and how they defined a completed survey. 14
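As a hypothetical illustration of how much the choice of denominator matters, the short sketch below computes the same survey's response rate three ways; all counts are invented for the example.

```python
# Hypothetical counts showing how the denominator changes the reported response rate.
completed = 120        # surveys actually completed
emails_sent = 400      # invitations sent
emails_opened = 310    # invitations opened
links_clicked = 180    # survey links clicked

for label, denominator in [("emails sent", emails_sent),
                           ("emails opened", emails_opened),
                           ("links clicked", links_clicked)]:
    rate = 100 * completed / denominator
    print(f"Response rate using {label} as the denominator: {rate:.1f}%")
# 30.0%, 38.7%, and 66.7% -- the same survey yields three very different rates.
```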
Further, researchers may end up sampling responses from respondents who do not fall within their sampling frame. Therefore, it is important to demonstrate why this medium is appropriate for a specific study (e.g., access to special populations) and to ensure adequate representativeness of the respondents. Similarly, placing a link in an email sent via a listserv also makes it difficult to determine the response rate, especially if the listserv is populated with outdated addresses. Additionally, the listserv may include individuals who are not part of the intended sample (e.g., the CORD listserv includes more than just program directors).
FACTORS IMPACTING PARTICIPANT ENGAGEMENT
As described above, most researchers strive to obtain a high response rate in an effort to reduce nonresponse bias. The following strategies can and should be used to bolster respondent motivation and improve overall response rates.
Tracking
In selecting a survey delivery method, researchers should consider their ability to track receipt and responses. Doing so allows the research team to send reminders to nonresponding participants. It is important to remember that, in most cases, participants who respond to a survey request will do so within the first 2 weeks of the invitation to participate. 7 , 9
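As a simple illustration of tracking, the sketch below matches an invitation list against the identifiers of completed responses to generate a reminder list for nonrespondents; the names and email addresses are hypothetical.

```python
# Minimal tracking sketch: identify nonrespondents so reminders can be targeted.
# The contact list and completed-response identifiers are hypothetical placeholders.
invited = {
    "a.jones@example.edu": "Dr. Jones",
    "b.smith@example.edu": "Dr. Smith",
    "c.lee@example.edu": "Dr. Lee",
}
completed = {"a.jones@example.edu"}  # typically exported from the survey platform

nonrespondents = {email: name for email, name in invited.items()
                  if email not in completed}

for email, name in nonrespondents.items():
    print(f"Send reminder to {name} <{email}>")
```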
Personalization
Personalizing survey invitations using conversational salutations can have a positive motivational effect on respondents, particularly if the invitation also comes from an individual the respondent knows. In some cases, a prenotification of an incoming survey, delivered by a person who has either influence or an existing relationship, may have a positive impact. If the survey is being used for research, then such an approach will require review by the local institutional review board.
Survey length
Survey length is one of the most influential factors in determining participant engagement. In a recent meta‐analysis of methods to increase response rates, Edwards and colleagues found that responses were almost twice as likely to occur when shorter surveys were used (as compared to longer tools). 9 In addition, providing a qualifier such as “brief” or “short” in the survey invitation may be more helpful than specifying the number of questions on the survey or estimating the amount of time required to complete the survey. 5 , 6 , 7 , 17
Incentives
Incentives in the form of cash, a gift, or a gift card are widely used by researchers with varying success. In their meta‐analysis, Edwards et al. found that the odds of response were more than doubled when a monetary incentive was used, and those odds nearly doubled again when such incentives were not conditional on response. 15 , 16 , 18 In other words, the most effective incentives are those that are given up front with no conditions or strings attached. This unconditional approach has the effect of creating a “social contract” between the researcher and the respondent. That is, the researcher has given the potential respondent money with no conditions attached, and so the respondent feels obligated to return the favor and complete the survey, even though completion is not required. 18 , 19 Providing monetary incentives (and especially up‐front incentives for everyone, with no conditions) tends to be a much more efficacious approach than incentives that require survey completion or lottery‐based incentives (e.g., “if you participate, you will be entered into a lottery to potentially win a prize”).
Participant interest
Interesting surveys garner higher response rates than uninteresting surveys. In fact, Edwards and colleagues 15 found that surveys designed with the participant's interest in mind were more than twice as likely to be returned. 9 Researchers can use this finding to their advantage by creating high‐quality survey tools that are interesting to potential respondents. Researchers can also explicitly address the importance of the survey to their broader research efforts and tell respondents how this work might link to topic areas of interest to them. On the other hand, surveys that ask sensitive questions tend to create response bias and have much lower response rates, even when respondent anonymity is promised. 9
COMMUNICATION AND RECONTACT PROCESSES
Communicating with respondents surrounding survey administration requires careful thought and consideration. From the outset, the invitation must have a coherent and straightforward description of the survey and its purpose. Researchers should articulate the relevance of the survey study to the individual participant (in the hopes of piquing their interest, as discussed above). Although there is no specific invitation timing that tends to work best for all respondents, varying the time of delivery of initial and follow‐up invitations may help to improve response rates. Moreover, the literature suggests that a minimum of three attempts (or reminders) should be made to improve the overall response rate. 4 , 15 , 16 Willis et al. 20 also found no significant improvement in response rates when more than three requests were sent to potential respondents. 15 , 20 , 21
CONCLUSIONS
Researchers should consider factors that influence survey administration. These factors include the researcher's target population and sampling frame; the selected modality for administration, taking into account the respective benefits and challenges of each; factors that influence respondent participation; and finally, modes of communication and follow‐up. For researchers to apply their results to the target population, the sampling frame should be representative of that group. Further, to enhance engagement and meaningful responses, it is helpful if the research is relevant and interesting to the target population. When feasible, a multimodal approach to administration is recommended, as it often results in the highest response rates. In addition, personalization, the use of incentives, and frequent, high‐quality communication with potential respondents can also improve response rates. In the end, researchers should be intentional about collecting representative data about their target population and make deliberate choices when it comes to garnering responses in support of the inferences they intend to make.
CONFLICT OF INTEREST STATEMENT
The authors declare no conflicts of interest.
Ogle KY, Hill J, Santen SA, Gottlieb M, Artino AR Jr. Educator's blueprint: A how‐to guide on survey administration. AEM Educ Train. 2023;7:e10906. doi: 10.1002/aet2.10906
Supervising Editor: Anne Messman
REFERENCES
- 1. Hill J, Ogle K, Santen SA, Gottlieb M, Artino AR Jr. Educator's blueprint: a how‐to guide for survey design. AEM Educ Train. 2022;6(4):e10796.
- 2. Hill J, Ogle K, Gottlieb M, Santen SA, Artino AR. Educator's blueprint: a how‐to guide for collecting validity evidence in survey‐based research. AEM Educ Train. 2022;6:e10835.
- 3. Nikiforova T, Carter A, Yecies E, Spagnoletti CL. Best practices for survey use in medical education: how to design, refine, and administer high‐quality surveys. South Med J. 2021;114(9):567‐571.
- 4. Brasel K, Haider A, Haukoos J. Practical guide to survey research. JAMA Surg. 2020;155(4):351‐352.
- 5. Phillips AW, Durning SJ, Artino AR. Step 4: survey delivery. In: Survey Methods for Medical and Health Professions Education: A Six‐Step Approach. Elsevier; 2022:53‐61.
- 6. Phillips AW, Reddy S, Durning SJ. Improving response rates and evaluating nonresponse bias in surveys: AMEE guide No. 102. Med Teach. 2016;38:217‐228.
- 7. Millar MM, Dillman DA. Improving response to web and mixed‐mode surveys. Public Opin Q. 2011;72(2):270‐286.
- 8. JAMA Network. Instructions for Authors. American Medical Association. 2023. Accessed July 14, 2023. https://jamanetwork.com/journals/jama/pages/instructions‐for‐authors
- 9. Beebe TJ, Locke GR, Barnes SA, Davern ME, Anderson KJ. Mixing web and mail methods in a survey of physicians. Health Serv Res. 2007;42:1219‐1234.
- 10. Millar M, Dillman DA. Encouraging survey response via smartphones. Surv Pract. 2012;5(3):1‐6.
- 11. Dillman DA, Smyth JD, Christian LM. Internet, Phone, Mail, and Mixed‐Mode Surveys: The Tailored Design Method. John Wiley & Sons; 2014.
- 12. Thoma B, Paddock M, Purdy E, et al. Leveraging a virtual community of practice to participate in a survey‐based study: a description of the METRIQ study methodology. AEM Educ Train. 2017;1:110‐113.
- 13. Eysenbach G. Improving the quality of web surveys: the checklist for reporting results of internet E‐surveys (CHERRIES). J Med Internet Res. 2004;6(3):e34. doi: 10.2196/jmir.6.3.e34
- 14. The American Association for Public Opinion Research. Standard Definitions: Final Dispositions of Case Codes and Outcome Rates for Surveys. 10th ed. AAPOR; 2023.
- 15. Edwards P, Roberts I, Clarke M, et al. Increasing response rates to postal questionnaires: systematic review. BMJ. 2002;324:1183.
- 16. McFarlane E, Olmsted MG, Murphy J, Hill CA. Nonresponse bias in a mail survey of physicians. Eval Health Prof. 2007;30:170‐185.
- 17. Jepson C, Asch DA, Hershey JC, Ubel PA. In a mailed physician survey, questionnaire length had a threshold effect on response rate. J Clin Epidemiol. 2005;58:103‐105.
- 18. Church AH. Estimating the effect of incentives on mail survey response rates: a meta‐analysis. Public Opin Q. 1993;57:62‐79.
- 19. James JM, Bolstein R. The effect of monetary incentives and follow‐up mailings on the response rate and response quality in mail surveys. Public Opin Q. 1990;54(3):346‐361.
- 20. Willis GB, Smith T, Lee HJ. Do additional recontacts to increase response rate improve physician survey data quality? Med Care. 2013;51:945‐948.
- 21. Cho YI, Johnson TP, Vangeest JB. Enhancing surveys of health care professionals: a meta‐analysis of techniques to improve response. Eval Health Prof. 2013;36:382‐407.
