INTRODUCTION
Obtaining adequate survey response rates from registered nurses can at times seem like an uphill battle. Even students enrolled in undergraduate nursing programs are often inundated with requests to participate in surveys and may stop responding to them altogether (Nulty, 2008). Ignoring survey requests does not bode well for a researcher trying to collect data to complete a post-graduate degree, or for one attempting to publish what could prove to be an important study.
This article does not seek to explain why nurses and nursing students respond to surveys at low rates, but rather to identify how best to design a survey that will yield the highest response rate. The question was examined by comparing a sample of nursing studies that used surveys against the guidance of non-nursing survey experts such as D.A. Dillman. While not all suggestions are applicable to all nursing studies, the use of these techniques should alleviate some of the stress of obtaining too few responses.
RESEARCH METHODOLOGY
This research study followed the format introduced by Whittemore and Knafl (2005) for an integrative review. A selection of nursing studies that used surveys to collect data was examined for quantitative and qualitative questions, along with their survey methodologies. The studies were then compared against prominent survey researchers' guidelines for an effective survey.
Dillman states that the ideal response rate for a survey should be at least 70% (Dillman, 2000), a figure echoed by Sills and Song for Internet-based surveys (Sills & Song, 2002). In this sample, the majority (n=31) of the survey studies fell below that threshold, and common issues with the design and dissemination of the surveys became readily apparent.
From the 50 nursing survey studies examined in this review, the following criteria were used for comparison: a) data collection method (web-based, mail, both, or a handout at the place of work), b) types of hospital staff included, c) number of surveys distributed, d) number of surveys returned, e) response rate, f) whether an invitation to participate was issued, g) whether an incentive was provided, h) types of questions (if available; qualitative, quantitative, or both), and i) the number of questions in the survey. Items (c) and (d) typically had to be calculated manually from the published study. These criteria were chosen to examine which types of questions and techniques may elicit the highest response rates, and to compare which types of nurses are more likely to respond. Of note, a potential respondent who is contacted more than three times about the same survey is highly unlikely to respond, out of frustration (Nulty, 2008; Porter & Whitcomb, 2003; Sax, Gilmartin, & Bryant, 2003).
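As a purely illustrative aid, the short sketch below shows one way the quantities behind criteria (c) through (e) could be tabulated and checked against Dillman's 70% benchmark. The record structure, field names, and figures are hypothetical and are not drawn from the reviewed studies.

```python
from dataclasses import dataclass

# Hypothetical record for one reviewed study; the field names and the
# example figures below are illustrative, not data from the review.
@dataclass
class SurveyStudy:
    delivery: str      # "web", "mail", "both", or "handout" (criterion a)
    distributed: int   # number of surveys distributed (criterion c)
    returned: int      # number of surveys returned (criterion d)

    @property
    def response_rate(self) -> float:
        """Response rate (criterion e) as a percentage."""
        return 100.0 * self.returned / self.distributed

# Made-up example records for illustration only.
studies = [
    SurveyStudy("handout", 120, 88),
    SurveyStudy("mail", 900, 510),
    SurveyStudy("web", 2500, 1420),
]

# Dillman's benchmark: a response rate of at least 70%.
below_benchmark = [s for s in studies if s.response_rate < 70.0]
print(f"{len(below_benchmark)} of {len(studies)} studies fall below 70%")
```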
ANALYSIS AND COMPARISON TO SURVEY RESEARCHERS’ SUGGESTIONS
The methods by which the nurses received a survey were organized into three categories: web-based, mailed, and handed-out surveys. Surveys categorized as 'web' used an online platform, whereas 'handed out' referred to paper-based surveys given to nurses during a staff event or placed in their professional mailboxes. 'Mailed' refers to surveys that were addressed to the respondent and delivered via postal service.
Nursing researchers, and indeed most researchers, tend to prefer hosting the survey on a website such as Survey Monkey for ease of both access and data collection. Conveniently, these online platforms tend to favor quantitative questions, which are simple to answer and do not require a great deal of thought, a benefit to the researcher (Messer, Edwards, & Dillman, 2012). For this sample, the response rates for surveys that were mailed, e-mailed (web-based), and handed out to nurses were 58%, 57.4%, and 71.8%, respectively. Interestingly, the response rate tended to be inversely related to the number of surveys sent out; the larger the nursing population, the worse the response rate.
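Continuing the illustrative sketch above, per-mode averages of the kind summarized in Table 1 could be reproduced by grouping study records by delivery method. This is only a sketch over the same hypothetical records, not the analysis performed in the review.

```python
from collections import defaultdict

# Group the hypothetical SurveyStudy records from the earlier sketch by
# delivery method and average their response rates, mirroring Table 1.
by_mode = defaultdict(list)
for s in studies:
    by_mode[s.delivery].append(s.response_rate)

for mode, rates in sorted(by_mode.items()):
    mean_rate = sum(rates) / len(rates)
    print(f"{mode:>8}: n={len(rates)}, mean response rate = {mean_rate:.1f}%")
```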
Findings on the types of questions asked revealed that the studies with the highest response rates relied predominantly on quantitative questions. Only seven studies with a response rate of 50% or greater included any qualitative questions. Although the average number of questions did not appear to matter, the two studies that relied on a single qualitative question produced the lowest overall response rates, 3% and 16%. The studies examined did not report how long nurses took to complete the surveys, though the number of survey questions, which partly dictates completion time, is briefly discussed below.
Survey researchers claim that offering an invitation to complete a survey will also increase the response rate, with a personalized invitation letter being ideal (Sills & Song, 2002; Yammarino, Skinner, & Childers, 1991). The sampled surveys do not bear this out: the majority (n=36) invited nurses to participate, yet their response rates varied widely. Similarly, the use of an incentive is lauded by survey researchers, but it was not a significant indicator of a higher response rate for nurses, whether the incentive was offered before or after administering the survey (Brown et al., 2016; Pedersen & Nielsen, 2016; Sills & Song, 2002; Stern, Bilgen, & Dillman, 2014). A majority (75%) of the studies that offered an incentive fell below the ideal 70% response rate.
One factor that did seem to matter was selectivity: the more narrowly a survey targeted its nursing population, the higher its response rate. This agrees with the findings of Sax, Gilmartin, and Bryant (2003): when a survey has applicability to the lives of its respondents, they are more likely to respond.
CONSIDERATIONS
If nursing researchers seek the most suitable survey design for their purposes, a few suggestions can be drawn from this small sample:
- Visit your survey population: Physically handing the survey to nurses, or placing it in their professional mailboxes, appears to be the ideal delivery method. In addition, having the researcher physically present during the survey to explain the purpose of the research and to answer questions directly would be beneficial.
- Be specific when selecting nurses: Surveys handed out to specialized nurses, such as oncology nurses, were answered more readily, suggesting that applicability to respondents' lives raises response rates (Table 1).
- Include easy-to-answer questions: Surveys consisting mostly of quantitative questions that busy nurses can complete quickly seem to consistently raise response rates, whereas a survey with only a small number of qualitative questions seems to depress them (Table 2).
- Find a manageable number of questions: While the total number of questions does not seem to make a difference, asking only one question is likely to produce a very low response rate (less than 17% in this sample) (Table 3).
- An incentive is unnecessary: Offering an incentive for nurses to complete the survey did not make a difference in the response rate, though relatively few incentives were offered in this sample. Research into types of incentives may help contextualize these findings (Table 4).
Table 1. Response rates by delivery method and nurse population.

| | Mailed Response Rate (%) | E-Mailed Response Rate (%) | Handed Out Response Rate (%) |
|---|---|---|---|
| All Nurses | 58.0% | 57.4% | 71.8% |
| Specialized Nurses | 56.2% | 56.0% | 73.1% |
Table 2. Number of studies by question type and response-rate range.

| Response-Rate Range (%) | 0–10 (n=0) | 11–20 (n=3) | 21–30 (n=5) | 31–40 (n=3) | 41–50 (n=1) | 51–60 (n=10) | 61–70 (n=10) | 71–80 (n=5) | 81–90 (n=12) | 91–100 (n=1) |
|---|---|---|---|---|---|---|---|---|---|---|
| Quantitative Only | - | 1 | 3 | 1 | 1 | 6 | 9 | 3 | 11 | - |
| Qualitative Only | - | 2 | - | - | - | 1 | - | - | - | - |
| Both Qual. and Quant. | - | - | 1 | 2 | - | 3 | - | 1 | 1 | 1 |
| Unknown Question Types | - | - | 1 | - | - | - | 1 | 1 | - | - |
Table 3. Average number of survey questions by response-rate range.

| Response-Rate Range (%) | 0–10 (n=0) | 11–20 (n=3) | 21–30 (n=5) | 31–40 (n=3) | 41–50 (n=1) | 51–60 (n=10) | 61–70 (n=10) | 71–80 (n=5) | 81–90 (n=12) | 91–100 (n=1) |
|---|---|---|---|---|---|---|---|---|---|---|
| Average Number of Questions | - | 11.7 | 49 | 22 | 4 | 51 | 34.6 | 36 | 43.1 | - |
Table 4. Number of studies offering an incentive by response-rate range.

| Response-Rate Range (%) | 0–10 (n=0) | 11–20 (n=3) | 21–30 (n=5) | 31–40 (n=3) | 41–50 (n=1) | 51–60 (n=10) | 61–70 (n=10) | 71–80 (n=5) | 81–90 (n=12) | 91–100 (n=1) |
|---|---|---|---|---|---|---|---|---|---|---|
| Surveys with Incentives | - | - | 1 | 1 | 1 | 2 | 1 | - | 2 | - |
CONCLUSIONS
The ideal response rate of 70% is attainable, and many of the examined nursing survey studies appear to follow the suggestions outlined by Dillman (2000), though the actual survey instruments were not available for review. The fact that some studies received very low response rates does not necessarily mean they failed to follow Dillman's suggestions. However, when the aforementioned suggestions are combined, a positive impact on response rates is evident. Overall, handing out a survey of 30 to 50 quantitative questions to a specific group of fewer than 1,000 nurses seems to be the best option for nursing researchers. The five considerations outlined in the previous section point to what we already know about nursing: it is hands-on. Its research, then, needs to be equally hands-on to achieve the highest response rates, and thus more representative data.
REFERENCES
- Brown JA, Serrato CA, Hugh M, Kanter MH, Spritzer KL, Hays RD. Effect of a post-paid incentive on response rates to a web-based survey. Survey Practice. 2016;9(1):1–7.
- Dillman DA. Procedures for conducting government-sponsored establishment surveys: Comparisons of the total design method (TDM), a traditional cost-compensation model, and tailored design. Paper presented at: Proceedings of the American Statistical Association, Second International Conference on Establishment Surveys; 2000.
- Messer BL, Edwards ML, Dillman DA. Determinants of item nonresponse to web and mail respondents in three address-based mixed-mode surveys of the general public. Survey Practice. 2012;5(2):1–8.
- Nulty DD. The adequacy of response rates to online and paper surveys: What can be done? Assessment & Evaluation in Higher Education. 2008;33(3):301–314.
- Pedersen MJ, Nielsen CV. Improving survey response rates in online panels: Effects of low-cost incentives and cost-free text appeal interventions. Social Science Computer Review. 2016;34(2):229–243.
- Porter SR, Whitcomb ME. The impact of contact type on web survey response rates. The Public Opinion Quarterly. 2003;67(4):579–588.
- Sax LJ, Gilmartin SK, Bryant AN. Assessing response rates and nonresponse bias in web and paper surveys. Research in Higher Education. 2003;44(4):409–432.
- Sills SJ, Song C. Innovations in survey research: An application of web-based surveys. Social Science Computer Review. 2002;20(1):22–30.
- Stern MJ, Bilgen I, Dillman DA. The state of survey methodology: Challenges, dilemmas, and new frontiers in the era of the tailored design. Field Methods. 2014;26(3):284–301.
- Whittemore R, Knafl K. The integrative review: Updated methodology. Journal of Advanced Nursing. 2005;52(5):546–553. doi: 10.1111/j.1365-2648.2005.03621.x.
- Yammarino F, Skinner S, Childers T. Understanding mail survey response behavior. Public Opinion Quarterly. 1991;55:613–639.