Abstract
Survey response is higher when the request comes from a familiar entity than from an unknown sender, but little is known about how the sender influences response to surveys of organizations. We assessed whether the familiarity of the sender influences response outcomes in a survey of emergency medical services agencies. Emergency medical services agencies in one U.S. state were randomly assigned to receive survey emails from either a familiar or an unfamiliar sender. Both deployment approaches were subsequently used nationwide, with each state selecting one of the two contact methods. Experimental results showed that requests from the familiar sender achieved higher survey response (54.3%) than requests from the unfamiliar sender (36.9%; OR: 2.03; 95% CI: 1.23, 3.33). Similar results were observed in the subsequent nationwide survey: in states where the familiar sender deployed the survey, 62.0% of agencies responded, compared to 51.0% when the survey was sent by the unfamiliar sender (OR: 1.57; 95% CI: 1.47, 1.67). The response difference required nearly 60 additional hours of staff time for telephone follow-up to non-respondents. When surveying healthcare organizations, surveyors should recognize that obtaining responses is more challenging without a pre-established relationship with the organizations.
Keywords: Survey sponsorship, survey response rate, survey breakoffs, web surveys, establishment surveys, emergency medical services
Background
Surveys of healthcare organizations, such as hospitals, care facilities, and other service providers, are an important tool for better understanding the contexts in which treatment decisions are made and care is delivered (Loft et al., 2015). Healthcare organization surveys can identify disparities in the distribution of specialized services or equipment, and are also integral in the evaluation of quality improvement efforts within healthcare systems. Surveys of healthcare organizations are a type of establishment survey, a term used to describe surveys that collect information at the organizational level rather than the individual level. Although individuals complete these surveys on behalf of organizations, the response process is often more complex than in surveys intended for individual-level data collection (Willimack & Nichols, 2010).
Methods for increasing survey response at the individual level have been widely studied (Edwards et al., 2009), but less research has focused on organizational-level surveys (Loft et al., 2015). This is particularly problematic given that there is some evidence that organizational-level surveys obtain lower response than surveys of individuals (Baruch & Holtom, 2008). Low response can potentially result in nonresponse bias (Foo et al., 2019; Fulton, 2018), which is difficult to identify in organizational surveys (Lewis et al., 2013). Because organizational surveys are completed by individual informants within the organization, best practices in improving response to surveys of individuals may also be relevant for organizational-level surveys.
One factor known to affect response in surveys of individuals is the sponsor or sender of a survey request. For example, individuals are more likely to respond to a survey from a familiar or local entity compared to less-familiar entities or those located out-of-state (Edwards et al., 2014; Ladik et al., 2007). Additionally, research has shown that surveys obtain higher response when sent from universities rather than commercial organizations (Edwards et al., 2002; Fox et al., 1988; Hansen & Pedersen, 2012). The sender of an emailed survey request can also affect the likelihood of survey breakoffs (Boulianne et al., 2010), i.e., when recipients begin answering questions but do not complete the survey. Such findings have led many surveyors to assume that survey response will be higher and survey breakoff will be lower if the request is from a familiar or trusted sender.
However, the effect of sponsorship is more complicated in health surveys. In surveys of healthcare providers, some studies have shown that familiarity or personalization improved response (Field et al., 2002). Other studies found non-significant effects depending upon the type of sender (Brehaut et al., 2006; Myers et al., 2007), and one study demonstrated that endorsement by well-known, respected physicians resulted in lower survey response (Bhandari et al., 2003). Overall, this body of research illustrates mixed results on the impact of survey senders in surveys of healthcare professionals (VanGeest et al., 2015). Little is known about whether the sender of a survey affects response outcomes in surveys of healthcare organizations.
While a great deal of research addresses methods for improving survey response rates, response rate is not the only relevant response outcome. One metric that has been less examined is response timeliness (McCoy & Hargie, 2007). Research evaluating the effect of survey sponsor on survey response speed has found conflicting results (Faria & Dickinson, 1992). Response timeliness has implications for survey costs, quality, and representativeness. Encouraging participants to respond early in the survey administration process can substantially reduce the costs of fielding a survey by eliminating the need for subsequent, time-consuming follow-up. Some have suggested that telephone call follow-ups are not cost effective (Hendra & Hill, 2019) and researchers should invest instead in strategies to encourage response to the initial survey request (Breen et al., 2010).
Speedy response can also affect survey quality by ensuring temporal relevance of responses. This can be especially important in organizational surveys because responses are susceptible to changes in organizational policies and personnel over time (Gupta et al., 2000). Response timeliness can also inform assessments of representativeness and survey nonresponse bias (Groves, 2006; Johnson & Wislar, 2012; Lewis et al., 2013). Research indicates that individuals who respond to health surveys often differ from those who do not, potentially introducing bias into survey estimates (Li et al., 2018; Perneger et al., 2005).
This study aimed to assess the effect of survey sender on response to a nationwide organizational survey of emergency medical services (EMS) agencies. We sought to address this question because changes in how this recurring survey would be conducted in 2020 meant that some EMS agencies would be receiving survey invitations from an unfamiliar sender. Prior to nationwide survey deployment, we conducted a randomized experiment to determine the difference in response when the survey was administered by a familiar sender with an established relationship with the EMS agencies compared to a less-familiar sender. We also examined the effect of survey sender on two secondary response outcomes, early response and survey breakoff. We then compared these response outcomes and time spent in telephone follow-up by survey sender as the survey was deployed nationwide.
Methods
Study Setting
We evaluated the effect of survey sender within a nationwide survey of EMS agencies conducted by the Emergency Medical Services for Children (EMSC) Program of the U.S. Department of Health and Human Services’ Health Resources and Services Administration (HRSA). The EMSC Program awards partnership grants to 58 U.S. states, territories, and the District of Columbia (hereafter collectively referred to as “states”) to improve pediatric health outcomes in the pre-hospital and hospital emergency medical systems across the nation (Genovesi et al., 2018; National EMS for Children Data Analysis Resource Center, 2017). The EMSC Program developed a series of infrastructure performance measures that have been evaluated through recurring nationwide surveys of EMS agencies since 2007 (Ely et al., 2020; Genovesi et al., 2018; Hewes et al., 2019). Details on the performance measures and prior rounds of the survey have been previously published (Ely et al., 2020; Hewes et al., 2019). The 2020 survey contained up to 26 questions (depending upon skip patterns) and also collected EMS agency contact and demographic information. A single, uniform web-based instrument was developed and hosted centrally by the program’s national data resource center, described below. This survey was considered exempt from review by the Institutional Review Board at the University of Utah, and the submission of the completed survey served as consent to participate.
The population of interest for the performance measures survey was EMS agencies in the United States (including all 50 states, the District of Columbia, and seven U.S. territories) that provide transporting and non-transporting ground EMS services. Excluded were EMS agencies that provide only air or water services; agencies on federal properties, such as military, tribal, or Indian Health Services agencies; and agencies that do not respond to public 911 calls (National EMS for Children Data Analysis Resource Center, 2017). The study aimed to collect information for over 16,000 EMS agencies across the U.S.
Survey Deployment Options and Experiment
Prior to 2020, the survey was conducted approximately every three years at the state level by the manager, or main point of contact, of each state’s EMSC program. Technical support for deploying the survey was provided by an EMSC Program-funded national data resource center located at the University of Utah. Starting in 2020, the survey was deployed annually. Therefore, to reduce the burden placed on the state-level program managers, in 2020 the EMSC Program introduced a new survey administration option. Each state EMSC program manager could choose one of two deployment options for their state. For Option 1, the new approach, the data resource center, an unfamiliar sender, would deploy the survey. This deployment included emailing all survey invitations and reminders on the data resource center’s letterhead via a generic email address (emsc@hsc.utah.edu). The data resource center would also conduct telephone calls to nonresponding agencies but encourage optional telephone call assistance from the program manager. For Option 2, the program manager, a familiar sender, would deploy the survey in their state as had been done in previous survey rounds. The program managers would send all email messages via their own email addresses, use state-specific EMSC Program letterhead, and conduct all follow-up telephone calls.
The data resource center created a recommended survey protocol, designed using established guidelines (Dillman et al., 2014). The protocol included templates for all email communications and a suggested timeline for their delivery within the 3-month survey period. The suggested contact sequence began with an initial emailed invitation to participate in the survey. Three additional reminder email messages were designed for nonresponding EMS agencies, followed by telephone calls, as sketched below. The data resource center followed this protocol for all states selecting the Option 1 deployment. Program managers who selected Option 2 for their states were encouraged, but not required, to use the templates and timeline developed by the data resource center.
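For illustration only, the recommended contact sequence can be represented as an ordered schedule. The day offsets below are hypothetical placeholders, since the actual template timeline is not reproduced here; only the order of contacts (invitation, three reminders, then telephone follow-up) comes from the protocol described above.

```python
# Illustrative encoding of the recommended contact protocol.
# Day offsets are hypothetical; the contact order follows the protocol above.
CONTACT_SEQUENCE = [
    ("invitation", 0),         # initial emailed invitation to participate
    ("reminder_1", 14),        # reminder emails sent only to nonresponders
    ("reminder_2", 28),
    ("reminder_3", 42),
    ("phone_follow_up", 56),   # telephone calls to remaining nonresponders
]
```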
Experimental Design and Evaluation of Two Deployment Options
Prior to the official 2020 nationwide survey deployment, we conducted an experiment in one state (State A) to determine the difference in response rates between the Option 1 and Option 2 deployment methods. All EMS agencies in State A (n=259) were randomly assigned to one of the deployment options. Both study arms used the recommended survey protocol, including the contact templates and distribution timeline.
Next, a field study was performed to further refine the new methodology. Two additional states participated in the field study, with each using one of the deployment options (State B: Option 1; State C: Option 2). In the field study, both the data resource center and the program manager used the recommended protocol, but the program manager in State C made slight modifications to the timeline and the contact messages. Additionally, during the field study two supplemental recruitment email messages were added to the protocol for Option 1. They were designed to be sent by the program manager in State B to help establish the legitimacy of the survey requests and overcome the challenge of sending the survey from the unfamiliar data resource center. The first of these messages was a pre-notification to be sent before any survey requests from the data resource center. Pre-notices increase response in mailed paper surveys (Heberlein & Baumgartner, 1978). While they are less common in web surveys, there is evidence that they also help encourage response in web surveys of health professionals (Dykema et al., 2011). The second email was a supplemental reminder from the program manager urging nonresponding agencies to complete the survey. Additional reminders typically boost survey response (Edwards et al., 2009) and could be more effective if sent by the familiar sender.
Finally, the survey was deployed to all remaining states during the nationwide survey in 2020, with each program manager in remaining states selecting Option 1 or 2. All program managers who opted to administer the survey in their state (Option 2) were provided with the recommended protocol, but could choose to what extent to follow it. For states that selected Option 1, the data resource center followed the established protocol. The program managers in the Option 1 states were also asked to send the supplemental pre-notification and reminder message that were used in the field study. Table 1 provides information on the study design, states, and number of agencies for each phase of the study.
Table 1.
Study sample and design for a nationwide survey of Emergency Medical Services agencies.
| Phase | Contact method | State(s) or no. of states | Study design | Timeframe | Agencies (n) |
|---|---|---|---|---|---|
| Experiment | Option 1 | State A | Randomized experiment | July-October 2019 | 130 |
| Experiment | Option 2 | State A | Randomized experiment | July-October 2019 | 129 |
| Field study | Option 1 | State B | Single-method field study | September-November 2019 | 217 |
| Field study | Option 2 | State C | Single-method field study | September-November 2019 | 65 |
| Nationwide survey | Option 1 | 22 states | Program managers chose | January-March 2020 | 8,604 |
| Nationwide survey | Option 2 | 33 states | Program managers chose | January-March 2020 | 7,138 |
Note: Option 1=Data resource center deploys; Option 2=State’s program manager deploys.
Variables and Analysis
The independent variable was familiar (program manager) vs. unfamiliar (data resource center) survey sender. The primary outcome was survey response, defined as whether or not each EMS agency submitted the web-based survey. Two secondary outcomes were also examined: 1) survey breakoff, indicating if an agency started answering questions but did not complete the survey, compared to those who completed the full survey; and 2) early response, defined as submission of a completed survey prior to any reminder messages being sent.
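As a minimal sketch of how these outcome variables relate to one another, the following Python fragment derives the three flags from a hypothetical per-agency record; the field names are illustrative and are not the study’s actual data schema.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class AgencyRecord:
    started: bool                  # answered at least one survey question
    submitted: Optional[datetime]  # submission timestamp; None if never submitted
    first_reminder: datetime       # date the first reminder email was sent

def outcomes(rec: AgencyRecord) -> dict:
    responded = rec.submitted is not None
    return {
        "responded": responded,
        # breakoff: started answering but never submitted the survey
        "breakoff": rec.started and not responded,
        # early response: submitted before any reminder messages were sent
        "early_response": responded and rec.submitted < rec.first_reminder,
    }
```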
We performed cross tabulations of survey sender and each outcome variable. We also calculated odds ratios and 95% confidence intervals for each outcome using logistic regression models. We analyzed data for the randomized experiment separately from the rest of the states for which each program manager selected their preferred deployment option. We grouped the two non-randomized field study states with all remaining states.
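For readers who want to reproduce the odds ratio calculations, a minimal sketch follows, using the standard Woolf (log-scale) confidence interval for a 2×2 table rather than the logistic regression models we fit; the counts in the example come from Table 2. This is an illustration, not our actual analysis code.

```python
import numpy as np

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Woolf (log-scale) 95% CI for a 2x2 table.

    a, b: responders and non-responders in group 1 (manager deploys);
    c, d: responders and non-responders in group 2 (resource center deploys).
    """
    or_ = (a * d) / (b * c)
    se = np.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo, hi = np.exp(np.log(or_) + np.array([-z, z]) * se)
    return or_, lo, hi

# Response counts from Table 2: 70/59 (manager) vs. 48/82 (resource center)
print(odds_ratio_ci(70, 59, 48, 82))  # -> approximately (2.03, 1.23, 3.33)
```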
After the nationwide survey administration, we estimated the difference in time spent conducting telephone follow-up calls for non-responding agencies between the two options. Based on the number of nonresponding agencies for each option at the time the data resource center began telephone call follow-up for the nationwide survey, we determined the number of excess telephone calls needed for Option 1, assuming a single telephone call attempt per EMS agency. We used a sample of available telephone call log notes to estimate the proportion of calls that resulted in successfully speaking with a person about the survey (20%, estimated six minutes per call), the proportion for which a message was left (59%, two minutes per call), and the proportion of calls to a bad number or with no answer (21%, 30 seconds per call). Applying these proportions to the total number of needed telephone calls, we estimated the number of calls that would have obtained each outcome and the total amount of staff time needed for the calls.
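As a worked illustration of this estimate (not our original analysis script), the calculation reduces to a weighted average of minutes per call applied to the excess-call count; the figure of 1,363 excess calls comes from the Results section below.

```python
# Call-outcome mix (proportion of calls, minutes per call), from the call logs
OUTCOME_MIX = {
    "spoke_with_person": (0.20, 6.0),
    "left_message": (0.59, 2.0),
    "bad_number_or_no_answer": (0.21, 0.5),
}
EXCESS_CALLS = 1363  # additional Option 1 nonresponders needing a call (Results)

minutes_per_call = sum(p * m for p, m in OUTCOME_MIX.values())  # 2.485 minutes
total_hours = EXCESS_CALLS * minutes_per_call / 60
print(f"{total_hours:.1f} hours")  # -> 56.5 hours of additional staff time
```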
Results
The randomized experiment revealed that deployment by the program manager (Option 2) resulted in significantly higher response (54.3%) than deployment by the data resource center (Option 1; 36.9%; Table 2).
Table 2.
Response outcome results from randomized experiment in a survey of EMS agencies in State A (n=259).
| Outcome | Total N (%) | Manager deploys N (%) | Data resource center deploys N (%) | OR | 95% CI | p |
|---|---|---|---|---|---|---|
| Responded to survey | | | | 2.03 | 1.23, 3.33 | 0.005 |
| Yes | 118 (45.6) | 70 (54.3) | 48 (36.9) | | | |
| No | 141 (54.4) | 59 (45.7) | 82 (63.1) | | | |
| Breakoff | | | | 0.86 | 0.22, 3.36 | 0.825 |
| Yes | 9 (7.1) | 5 (6.7) | 4 (7.7) | | | |
| No | 118 (92.9) | 70 (93.3) | 48 (92.3) | | | |
| Early response | | | | 3.25 | 1.27, 8.32 | 0.014 |
| Yes | 32 (27.1) | 25 (35.7) | 7 (14.6) | | | |
| No | 86 (72.9) | 45 (64.3) | 41 (85.4) | | | |
Note: OR = odds ratio. Odds ratios compare response outcomes among EMS agencies contacted by their state’s program manager with those contacted by the data resource center. Breakoffs are defined as answering at least one survey question but not completing the survey; breakoff percentages are based upon the total number of agencies that answered at least one question (n=127). Early response is defined as a survey response submitted prior to any reminder messages being sent; early response percentages are based upon the total number of EMS agencies that responded to the survey (n=118).
Option 2 also resulted in faster data collection. Among responding EMS agencies that were contacted by their program manager, 35.7% responded prior to any reminder messages, whereas only 14.6% of responding agencies contacted by the data resource center completed the survey prior to any reminder messages (OR: 3.25; 95% CI: 1.27, 8.32). There were nine survey breakoffs, and no significant difference in the frequency of breakoffs across the two experimental arms.
For the remaining states surveyed in the 2020 deployment, including the two field study states, 56.0% of agencies responded to the survey. The median survey completion time was just over 6.5 minutes. The results were consistent with the experiment (Figure 1). The response proportion for Option 2 states (program manager deploys), 62.0%, was significantly higher than that for Option 1 states (data resource center deploys), 51.0% (OR: 1.57; 95% CI: 1.47, 1.67). Option 2 also resulted in significantly fewer breakoffs (3.5% compared to 5.2%; OR: 0.66; 95% CI: 0.54, 0.80). We did not observe a significant difference between Options 1 and 2 in the percent of responses obtained before any reminder messages were sent (20.0% and 20.2%, respectively; OR: 1.01; 95% CI: 0.91, 1.12).
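Applying the same 2×2 sketch from the Methods section to the nationwide figures reproduces the reported odds ratio. The counts below are approximate reconstructions from the rounded response percentages and the agency totals in Table 1 (with the field study states included), not the study’s raw data.

```python
# Approximate nationwide counts back-calculated from rounded percentages:
# Option 2 (manager): ~62.0% of 7,203 agencies responded (4,466 yes / 2,737 no)
# Option 1 (center):  ~51.0% of 8,821 agencies responded (4,499 yes / 4,322 no)
print(odds_ratio_ci(4466, 2737, 4499, 4322))  # -> roughly (1.57, 1.47, 1.67)
```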
Figure 1.

Response outcomes in a nationwide survey of EMS agencies (n=16,024).
Note: Includes field study states B and C.
The two optional activities through which program managers could help encourage participation in Option 1 states were also associated with higher response. Response among agencies in Option 1 states was higher when the program managers sent the supplemental email messages (59.3%) than when they did not send any emails (32.5%; OR: 3.02; 95% CI: 2.72, 3.35). Response also improved when program managers assisted the data resource center in making telephone calls in Option 1 states (57.2% compared to 31.7%; OR: 2.87; 95% CI: 2.60, 3.17).
Because Option 2 obtained more responses earlier, by the time telephone call follow-up began, Option 1 had 1,363 more non-responding agencies that needed a follow-up telephone call compared to Option 2. Based on estimates of the amount of time needed for call attempts to each agency, this excess number of nonresponding agencies necessitated at least 56.5 hours of additional staff time for telephone calls.
Discussion and Conclusions
This study evaluated the effect of the familiarity of the sender of emailed survey requests on survey response outcomes in an organizational-level survey. The results of both a randomized experiment and a large, nationwide survey confirm that response to a survey of EMS agencies varies significantly depending upon who sends the survey requests. When deployed by a familiar sender, in this case, the state program manager, survey response was higher than when deployed by an unfamiliar sender, in this case, a data resource center housed within a university. In the nationwide survey, there were also fewer breakoffs when the survey was deployed by a familiar sender. Our results are consistent with prior research demonstrating the value of survey sponsorship and requests from a recognized entity (Edwards et al., 2014; Ladik et al., 2007). Furthermore, our findings demonstrate that the sender’s identity is also relevant in surveys of healthcare organizations. Future studies are warranted to evaluate the effect of survey sender in surveys of other types of organizations to determine generalizability of our findings.
Organizational surveys are unique in that the respondent is an individual serving as an informant on behalf of the organization, rather than reporting their own personal experiences or opinions (Loft et al., 2015). Nevertheless, the intervention tested here was designed to influence the behavior of a particular individual who received the emails, not the organization itself. Therefore, the social processes involved in recipients’ responses to the survey requests are similar to those in surveys of individuals. In this context, trust is undoubtedly an important underlying factor determining recipients’ responses to the emails (Fang et al., 2009).
Trust is perhaps the most important issue affecting people’s decision to respond to a survey (Dillman et al., 2014). Researchers have noted that the importance of trust in a survey request is more acute in the online environment, which introduces additional uncertainties (Fang et al., 2009). Lack of trust is considered the largest barrier preventing people from engaging in online interactions (Urban et al., 2009); thus, it can be particularly challenging to establish trust in the legitimacy of a survey request when it is sent via email (Dillman et al., 2014). Recipients’ trust in a survey’s sponsor can influence their willingness to participate (Callegaro et al., 2015). Familiar senders may help to establish trust that the survey request is legitimate and that it is safe to click on the survey link (Dillman et al., 2014). In anticipation of the potential difficulty in establishing legitimacy, the data resource center used an “.edu” extension for the email address to lend some trustworthiness to the messages. Whatever effect this may have had on survey response, it was not enough to overcome the unfamiliarity of the sender. It is possible that the Option 1 requests could have been improved by sending them from a named individual at the data resource center rather than from the generic, organizational email address. Even if recipients were unfamiliar with the individual sending the emails, this added level of personalization might have helped.
In the experimental phase of this study, survey sender also affected the timeliness of response. EMS agencies that received requests from a familiar sender were more likely to respond to the initial request, eliminating the need for reminder messages. In the nationwide survey, there was no significant difference in early response by survey sender. One possible explanation for this null result is that we did not have records of the precise dates on which Option 2 states sent their first reminder message. We used the dates recommended to program managers as a proxy when creating this measure, and this may not have been precise enough to evaluate this outcome in the nationwide survey. Future research is needed to evaluate predictors of timeliness of response. This outcome is less studied than overall response, but methods for promoting quicker data collection offer the potential to significantly reduce the amount of time and resources required to obtain responses.

Although the proportion of responses in both options during nationwide data collection was similar prior to the first reminder, throughout the data collection period we observed that the response rate was consistently lower for Option 1 than for Option 2. By the time we reached the telephone-call follow-up phase, a larger proportion of agencies needed to be called in states using Option 1. We estimated that the additional calling time needed by the unfamiliar sender was nearly 60 hours, which equates to nearly 1.5 work weeks for the average staff member, time that could be spent on other tasks. We were not able to conduct a complete cost analysis for this study, but future research should consider how familiar senders affect survey costs.
This study indicates that methods used to improve response in surveys of individuals are also relevant for organizational surveys. After observing the effect of the unfamiliar sender in the experiment, two additional email messages were created for program managers to voluntarily send in Option 1 states: a pre-notice and an additional reminder. We found that more agencies responded in states that used these additional messages. These results are consistent with previous research demonstrating that pre-notification and additional reminder messages improve survey response (Edwards et al., 2009; Fox et al., 1988). Pre-notification has increased participation in a variety of health survey settings, including population-based health surveys (Koitsalu et al., 2018) and surveys of physicians (Dykema et al., 2011). Response was also higher when program managers in Option 1 states assisted in making telephone calls to nonresponding agencies. Telephone follow-up reminders are an effective way to improve survey participation (Salim Silva et al., 2002). Using a different mode of follow-up contact can improve response by drawing attention to a novel survey request or by increasing the ability to successfully make contact (de Leeuw, 2005). While these varying contact strategies are effective generally, our goal in incorporating these supplemental recruitment contacts by the program manager in states that were contacted by the data resource center was to leverage the benefit of a familiar sender. It appears that these contacts from the familiar sender may have helped to convey the legitimacy of the survey requests. After 2020 data collection was complete, the data resource center sent a brief summary report of the study results to all EMS agencies, including those that did not participate. Providing information about how the study results are used could help to encourage more agencies to respond in 2021. For future rounds of the survey beginning in 2021, agencies may also be more familiar with the data resource center after receiving its survey communications the prior year.
This study has limitations. First, the onset of the COVID-19 pandemic abruptly cut short the telephone call follow-up contacts for both Option 1 and 2. As COVID-19 began spreading throughout the U.S., many program managers were reassigned to COVID-19 response tasks and could not complete the follow-up telephone calls. As lockdown orders began affecting multiple states, all telephone follow-up for both sender options ceased. This premature end to data collection likely affected the overall response proportion for both options. We do not have any indication to suggest that this cessation of telephone calling disproportionately affected one option or another, or that it affected the conclusions drawn from these analyses.
Another limitation is that the use of the two sender approaches across different states was not randomly assigned in the nationwide survey. It is possible that the program managers who opted to deploy the survey themselves were more invested in the survey response outcomes than managers who chose for the data resource center to deploy the survey. Additionally, the completeness of contact information for the EMS agencies varied by state and could have factored into managers’ choice of deployment option. Also, some program managers who selected Option 1 performed informal promotion of the survey, in addition to the standard protocol deployed by the data resource center, which may have affected the Option 1 response rate. Because program managers who selected Option 2 had the freedom to decide how to administer the survey in their states, there was variation in the extent to which Option 2 states used the recommended templates and timeline. Finally, we did not collect data from Option 2 managers on the dates they sent reminders, making it difficult to precisely identify early responses.
The results of this study highlight the challenge of conducting online surveys of healthcare organizations when the surveyor does not have a pre-established relationship with the recipients. These results can inform methods used in similar surveys of healthcare organizations. Surveyors should be aware that it will be more difficult to obtain responses when the survey is administered by an unfamiliar sender.
Funding:
The author(s) disclosed receipt of the following financial support for the research, authorship and/or publication of this article: This work was supported by the Health Resources and Services Administration (HRSA) of the U.S. Department of Health and Human Services (HHS) [grant number UJ5MC30824] as part of the Emergency Medical Services for Children Data Center award totaling $3,000,000 with 0% financed with non-governmental sources. The contents are those of the author(s) and do not necessarily represent the official views of, nor an endorsement, by HRSA, HHS, or the U.S. Government. For more information, please visit HRSA.gov. The first author is also supported in part by the National Center for Research Resources and the National Center for Advancing Translational Sciences, National Institutes of Health [grant number UL1TR002538].
Footnotes
Conflicts of interest:
The authors declare that there is no conflict of interest.
References
- Baruch Y, & Holtom BC (2008). Survey response rate levels and trends in organizational research. Human Relations, 61(8), 1139–1160. 10.1177/0018726708094863
- Bhandari M, Devereaux PJ, Swiontkowski MF, Schemitsch EH, Shankardass K, Sprague S, & Guyatt GH (2003). A randomized trial of opinion leader endorsement in a survey of orthopaedic surgeons: effect on primary response rates. International Journal of Epidemiology, 32(4), 634–636. 10.1093/ije/dyg112
- Boulianne S, Klofstad CA, & Basson D (2010). Sponsor prominence and response patterns to an online survey. International Journal of Public Opinion Research, 23(1), 79–87. 10.1093/ijpor/edq026
- Breen CL, Shakeshaft AP, Doran CM, Sanson-Fisher RW, & Mattick RP (2010). Cost-effectiveness of follow-up contact for a postal survey: a randomised controlled trial. Australian and New Zealand Journal of Public Health, 34(5), 508–512. 10.1111/j.1753-6405.2010.00598.x
- Brehaut JC, Graham ID, Visentin L, & Stiell IG (2006). Print format and sender recognition were related to survey completion rate. Journal of Clinical Epidemiology, 59(6), 635–641. 10.1016/j.jclinepi.2005.04.012
- Callegaro M, Manfreda KL, & Vehovar V (2015). Web survey methodology. Sage Publications.
- de Leeuw ED (2005). To mix or not to mix data collection modes in surveys. Journal of Official Statistics, 21(2), 233–255.
- Dillman DA, Smyth JD, & Christian LM (2014). Internet, phone, mail, and mixed-mode surveys: the tailored design method (4th ed.). Wiley.
- Dykema J, Stevenson J, Day B, Sellers SL, & Bonham VL (2011). Effects of incentives and prenotification on response rates and costs in a national web survey of physicians. Evaluation & the Health Professions, 34(4), 434–447. 10.1177/0163278711406113
- Edwards ML, Dillman DA, & Smyth JD (2014). An experimental test of the effects of survey sponsorship on internet and mail survey response. Public Opinion Quarterly, 78(3), 734–750. 10.1093/poq/nfu027
- Edwards P, Roberts I, Clarke M, DiGiuseppe C, Pratap S, Wentz R, & Kwan I (2002). Increasing response rates to postal questionnaires: Systematic review. BMJ, 324(7347), 1183. 10.1136/bmj.324.7347.1183
- Edwards PJ, Roberts I, Clarke MJ, Diguiseppi C, Wentz R, Kwan I, Cooper R, Felix LM, & Pratap S (2009). Methods to increase response to postal and electronic questionnaires. Cochrane Database of Systematic Reviews (3), MR000008. 10.1002/14651858.MR000008.pub4
- Ely M, Edgerton EA, Telford R, Page K, Hemingway C, Vernon D, & Olson LM (2020). Assessing infrastructure to care for pediatric patients in the prehospital setting. Pediatric Emergency Care, 36(6), e324–e331. 10.1097/PEC.0000000000001649
- Fang J, Shao P, & Lan G (2009). Effects of innovativeness and trust on web survey participation. Computers in Human Behavior, 25(1), 144–152. 10.1016/j.chb.2008.08.002
- Faria AJ, & Dickinson JR (1992). Mail survey response, speed, and cost. Industrial Marketing Management, 21(1), 51–60. 10.1016/0019-8501(92)90033-P
- Field TS, Cadoret CA, Brown ML, Ford M, Greene SM, Hill D, Hornbrook MC, Meenan RT, White MJ, & Zapka JM (2002). Surveying physicians: do components of the "Total Design Approach" to optimizing survey response rates apply to physicians? Medical Care, 40(7), 596–605. 10.1097/00005650-200207000-00006
- Foo CY, Reidpath DD, & Sivasampu S (2019). The association between hospital characteristics and nonresponse in an organization survey: An analysis of the national healthcare establishment and workforce survey in Malaysia. Evaluation & the Health Professions, 42(1), 3–23. 10.1177/0163278717713569
- Fox RJ, Crask MR, & Kim J (1988). Mail survey response rate: A meta-analysis of selected techniques for inducing response. Public Opinion Quarterly, 52(4), 467–491. Retrieved from http://www.jstor.org/stable/2749256
- Fulton BR (2018). Organizations and survey research: Implementing response enhancing strategies and conducting nonresponse analyses. Sociological Methods & Research, 47(2), 240–276. 10.1177/0049124115626169
- Genovesi AL, Edgerton EA, Ely M, Hewes H, & Olson LM (2018). Getting more performance out of performance measures: The journey and impact of the EMS for Children Program. Clinical Pediatric Emergency Medicine, 19(3), 206–215. 10.1016/j.cpem.2018.08.009
- Groves RM (2006). Nonresponse rates and nonresponse bias in household surveys. Public Opinion Quarterly, 70(5), 646–675. 10.1093/poq/nfl033
- Gupta N, Shaw JD, & Delery JE (2000). Correlates of response outcomes among organizational key informants. Organizational Research Methods, 4(4), 323–347. 10.1177/109442810034002
- Hansen KM, & Pedersen RT (2012). Efficiency of different recruitment strategies for web panels. International Journal of Public Opinion Research, 24(2), 238–249. 10.1093/ijpor/edr020
- Heberlein TA, & Baumgartner R (1978). Factors affecting response rates to mailed questionnaires: A quantitative analysis of the published literature. American Sociological Review, 43(4), 447–462. 10.2307/2094771
- Hendra R, & Hill A (2019). Rethinking response rates: New evidence of little relationship between survey response rates and nonresponse bias. Evaluation Review, 43(5), 307–330. 10.1177/0193841x18807719
- Hewes HA, Ely M, Richards R, Shah MI, Busch S, Pilkey D, Hert KD, & Olson LM (2019). Ready for children: Assessing pediatric care coordination and psychomotor skills evaluation in the prehospital setting. Prehospital Emergency Care, 23(4), 510–518. 10.1080/10903127.2018.1542472
- Johnson TP, & Wislar JS (2012). Response rates and nonresponse errors in surveys. JAMA, 307(17), 1805–1806. 10.1001/jama.2012.3532
- Koitsalu M, Eklund M, Adolfsson J, Grönberg H, & Brandberg Y (2018). Effects of pre-notification, invitation length, questionnaire length and reminder on participation rate: a quasi-randomised controlled trial. BMC Medical Research Methodology, 18(1), 3. 10.1186/s12874-017-0467-5
- Ladik DM, Carrillat FA, & Solomon PJ (2007). The effectiveness of university sponsorship in increasing survey response rate. Journal of Marketing Theory and Practice, 15(3), 263–271. 10.2753/MTP1069-6679150306
- Lewis EF, Hardy M, & Snaith B (2013). Estimating the effect of nonresponse bias in a survey of hospital organizations. Evaluation & the Health Professions, 36(3), 330–351. 10.1177/0163278713496565
- Li A, Cronin S, Bai YQ, Walker K, Ammi M, Hogg W, Wong ST, & Wodchis WP (2018). Assessing the representativeness of physician and patient respondents to a primary care survey using administrative data. BMC Family Practice, 19(1), 77. 10.1186/s12875-018-0767-9
- Loft JD, Murphy J, & Hill CA (2015). Surveys of health care organizations. In Johnson TP (Ed.), Handbook of Health Survey Methods (pp. 545–560). John Wiley & Sons.
- McCoy M, & Hargie O (2007). Effects of personalization and envelope color on response rate, speed and quality among a business population. Industrial Marketing Management, 36(6), 799–809. 10.1016/j.indmarman.2006.02.009
- Myers RP, Shaheen AA, & Lee SS (2007). Impact of pharmaceutical industry versus university sponsorship on survey response: a randomized trial among Canadian hepatitis C care providers. Canadian Journal of Gastroenterology, 21(3), 169–175. 10.1155/2007/945630
- National EMS for Children Data Analysis Resource Center. (2017). EMS for children performance measures: implementation manual for state partnership grantees. Retrieved from https://nedarc.org/performanceMeasures/documents/EMS%20Perf%20Measures%20Manual%20Web_0217.pdf
- Perneger TV, Chamot E, & Bovier PA (2005). Nonresponse bias in a survey of patient perceptions of hospital care. Medical Care, 43(4), 374–380. 10.1097/01.mlr.0000156856.36901.40
- Salim Silva M, Smith WT, & Bammer G (2002). Telephone reminders are a cost effective way to improve responses in postal health surveys. Journal of Epidemiology and Community Health, 56, 115–118. 10.1136/jech.56.2.115
- Urban GL, Amyx C, & Lorenzon A (2009). Online trust: State of the art, new frontiers, and research potential. Journal of Interactive Marketing, 23(2), 179–190. 10.1016/j.intmar.2009.03.001
- VanGeest JB, Beebe TJ, & Johnson TP (2015). Surveys of physicians. In Johnson TP (Ed.), Handbook of Health Survey Methods (pp. 515–543). John Wiley & Sons.
- Willimack DK, & Nichols E (2010). A hybrid response process model for business surveys. Journal of Official Statistics, 26(1), 3–24.
