American Journal of Pharmaceutical Education. 2008 Apr 15;72(2):43. doi: 10.5688/aj720243

Response Rates and Responsiveness for Surveys, Standards, and the Journal

Jack E. Fincham
PMCID: PMC2384218  PMID: 18483608

The Journal has regularly published the results of survey research. As an academy we seem to be very interested in learning what our faculty members and students think, how they perform, and what is going on at other schools and colleges of pharmacy. A survey is often the best approach to acquiring that knowledge. However, the Editors believe that survey research published in the Journal has varied in quality and that standards for survey research can be used to improve the quality of research in the academy and the quality of papers published in the Journal. With that in mind, a decision was made in early 2008 to clarify expectations for survey research manuscripts submitted to the Journal. In Volume 72, Issue 1 of the Journal, Draugalis and colleagues1 presented an excellent paper detailing “best practices” for survey research manuscripts. These standards are now recommended to authors and reviewers, and will be used by the Editors in making decisions regarding acceptance of manuscripts.

One item addressed in the paper1 was the importance of response rates in questionnaire research; another was sample representativeness. The Draugalis et al1 paper and an examination of previously published survey research manuscripts in the Journal have led to the application of more stringent expectations for manuscripts published in the Journal.

Expectations for Survey Research Response Rates

There are now higher expectations for survey response rates. Response rates approximating 60% for most research should be the goal of researchers and certainly are the expectation of the Editor and Associate Editors of the Journal. For survey research intended to represent all schools and colleges of pharmacy, a response rate of ≥ 80% is expected.

The following sentences will be included by the Journal Editors in letters sent to authors of manuscripts that do not meet generally accepted standards for survey research:

“We are now applying stricter standards for survey research. For a discussion of the rationale behind the new standards, please refer to the paper by Draugalis et al, Article 11 in Volume 72, Issue 1 of AJPE (http://www.ajpe.org/view.asp?art=aj720111&pdf=yes). In brief, survey reports that are intended to be generalized to all colleges/schools of pharmacy should (1) have a response from at least 80% and (2) demonstrate that the sample includes representation of colleges based on the following factors that are similar to the overall profile of US institutions: public vs. private, geographic location, and university affiliation (stand-alone, part of a comprehensive university, or part of an academic health center).”

Why Are Representativeness and Response Rates Important Issues?

Representativeness

Representativeness refers to how well the sample drawn for the questionnaire research compares with (ie, is representative of) the population of interest. Can the reader evaluate the study findings with assurance that the sample of respondents reflects elements of the population with breadth and depth? Lack of response to the questionnaire by potential respondents in a sample or population is nonresponse, and the systematic error it introduces when nonrespondents differ from respondents is nonresponse bias. Nonresponse bias is a deadly blow to both the reliability and validity of survey study findings. If a survey achieves only a 30% response rate, the nonresponse rate is 70%; at a 20% response rate, nonresponse reaches 80%, and the potential for bias grows accordingly. Brick and Kalton2 suggest that one way of dealing with lack of representativeness is to weight the study sample segments to reflect the attributes of the greater population. However, the universe of pharmacy faculty members is too diverse and segmented for this to be a viable option for pharmacy education research.
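To make the weighting idea concrete, here is a minimal Python sketch of the post-stratification weighting that Brick and Kalton2 describe. The strata, shares, and counts are invented for illustration; they are not pharmacy data.

```python
# Minimal sketch of post-stratification weighting: scale each stratum of
# respondents so the weighted sample mirrors a known population profile.
# All figures below are hypothetical.

# Known population composition (assumed shares, not real data)
population_share = {"public": 0.60, "private": 0.40}

# Respondents actually obtained in each stratum (hypothetical counts)
respondents = {"public": 90, "private": 30}
total_respondents = sum(respondents.values())

# Weight for each stratum = (population share) / (sample share)
weights = {
    stratum: population_share[stratum] / (count / total_respondents)
    for stratum, count in respondents.items()
}

for stratum, w in weights.items():
    print(f"{stratum}: weight = {w:.2f}")
# public: weight = 0.80, private: weight = 1.60
```

Note that weighting can only rebalance the respondents one has; as the paragraph above observes, it cannot rescue a sample drawn from a population as segmented as the pharmacy academy.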

Draugalis et al1 listed 10 criteria for survey research reports in the Appendix of their paper. Two of these criteria, one addressing representativeness (criterion 3) and one addressing response rates (criterion 7), are reproduced below in this section and the next, respectively:

Criterion 3. Did the authors select samples that well represent the population to be studied?

  1. What sampling approaches were used?

  2. Did the authors provide a description of how coverage and sampling error were minimized?

  3. Did the authors describe the process to estimate the necessary sample size?

After conducting a meta-analysis of web- and Internet-based surveys, Cook et al point out that: “Response representativeness is more important than response rate in survey research. However, response rate is important if it bears on representativeness.”3(p821) When total nonresponse occurs in sample elements drawn from small populations, the effect of nonresponse bias is even more profound. Because the academy is relatively small and made up of disparate entities (small vs. larger schools; public vs. private; research intensive vs. teaching; religiously affiliated vs. unaffiliated; stand-alone vs. medical center- or liberal arts-based; and combinations and permutations of the above), samples must be appropriately representative of the greater academy in scope so as to further diminish the negative effects of nonresponse bias. A response rate of 80% has been chosen as the standard for evaluation for the Journal.
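As a rough illustration of the kind of representativeness check this standard implies, the following sketch compares the profile of responding schools against an assumed population profile on a single factor (public vs. private). All counts and shares are hypothetical; a real check would repeat this for each factor named in the Journal's letter (geographic location, university affiliation, and so on).

```python
# Hypothetical representativeness check: does the sample's composition
# resemble the population's on one stratifying factor?

population_profile = {"public": 0.55, "private": 0.45}  # all US schools (assumed)
sample_counts = {"public": 40, "private": 25}           # responding schools (assumed)

n = sum(sample_counts.values())
for category, pop_share in population_profile.items():
    sample_share = sample_counts[category] / n
    print(f"{category}: sample {sample_share:.1%} vs population {pop_share:.1%} "
          f"(difference {sample_share - pop_share:+.1%})")
```

Large differences on any factor would signal that the respondents do not mirror the academy, regardless of how high the raw response rate is.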

Response rates

Draugalis et al1 list the following criterion for consideration of response rates:

Criterion 7. Was the response rate sufficient to enable generalizing the results to the target population?

  1. What was the response rate?

  2. How was response rate calculated?

  3. Were follow-ups planned for and used?

  4. Do authors address potential nonresponse bias?

Response rates are calculated by dividing the number of usable responses returned by the total number of eligible subjects in the chosen sample. Mitchell4 argues, with documentation from others, that the survey response rate should be calculated as the number of returned questionnaires divided by the total sample initially sent the survey. Others subtract the number of undeliverable questionnaires from the initial sample to obtain the denominator. Mitchell4 argues that this latter calculation determines only the questionnaire's success in inducing respondents to return the survey, and masks a potentially large sample selection bias in the instrument.
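A minimal sketch of the two calculation conventions described above may make the distinction concrete; the counts are invented for illustration.

```python
# Two conventions for calculating a survey response rate,
# using hypothetical figures.

sent = 500            # questionnaires initially sent (assumed)
undeliverable = 20    # returned as undeliverable (assumed)
usable_returns = 280  # usable completed questionnaires (assumed)

# Mitchell's stricter convention: divide by everyone initially sent
rate_strict = usable_returns / sent

# Common alternative: remove undeliverables from the denominator
rate_adjusted = usable_returns / (sent - undeliverable)

print(f"Strict (Mitchell): {rate_strict:.1%}")    # 56.0%
print(f"Adjusted:          {rate_adjusted:.1%}")  # 58.3%
```

The adjusted rate is always at least as high as the strict one, which is precisely Mitchell's4 concern: trimming the denominator can flatter a survey while hiding who was never reachable in the first place.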

Questionnaires can be administered by telephone, in person, by mail only, by e-mail only, or via the Internet only, or by a combination of these. Response rates to e-mail surveys have decreased since the late 1980s.5 E-mail response rates may only approximate 25% to 30% without follow-up e-mails and other reinforcements.6 E-mail surveys incorporating multimode approaches may yield response rates as high as 70%.6 Allowing respondents to return surveys by differing methods (e-mail and/or mailed options, ie, a multimode approach) aids those who prefer to print out a survey instrument and respond via US mail. In a study carried out by Yun and Trumbo,6 a response rate of 72% was obtained with a multimode approach. A methodology comparable to what has enhanced response rates to mailed survey instruments had not been developed for e-mail surveys by the late 1990s.7 This has since changed, with Schaefer and Dillman7 asserting that a multimode approach to e-mail survey administration will enhance response rates. In a study comparing differing methods of administration, multimode contacts achieved response rates close to 60%.7 This mixed-mode approach, combining mailed and e-mailed survey instruments with an Internet-based response mechanism, also helps reduce the problem of coverage error in survey administration.

Reviews of electronic survey research report response rates similar to those obtained via mailed survey methodologies.8 In a comparative study, mailed surveys alone or combined with e-mail/web follow-up produced higher response rates than an e-mail/web survey followed by a mailed contact to nonrespondents.8 Response rates to both web and mailed survey instruments increased when preceded by a mailed contact to potential respondents.9 Multiple contacts, appearance, incentives, personalization, and sponsorship all have significant impacts on survey response rates.10 High response rates are achievable and have been achieved across many studies. Sitzia and Wood11 examined a large, global sample of response rates to patient satisfaction studies and found an average response rate of 76.7% among the studies analyzed. Even so, they concluded that patient satisfaction studies show poor awareness of important methodological considerations in design and administration.

Summary and Points About Previously Published Research in the Journal

A perusal of past issues of the Journal will reveal previously published papers containing survey research that does not meet these new criteria for representativeness and response rates. This is understood, and the points related to it are not lost on the Journal Editors. Nevertheless, these new standards will be seen as positive by those in the academy and beyond who look to the Journal for quality in all aspects of pharmacy education research and in the manuscript submissions emanating from such studies.

REFERENCES

1. Draugalis JR, Coons SJ, Plaza CM. Best practices for survey research reports: a synopsis for authors and reviewers. Am J Pharm Educ. 2008;72(1): Article 11. doi:10.5688/aj720111.
2. Brick JM, Kalton G. Handling missing data in survey research. Stat Methods Med Res. 1996;5:215-38. doi:10.1177/096228029600500302.
3. Cook C, Heath F, Thompson RL. A meta-analysis of response rates in web- or internet-based surveys. Educ Psychol Meas. 2000;60(6):821-36.
4. Mitchell RC. Using Surveys to Value Public Goods: The Contingent Valuation Method. Washington, DC: Resources for the Future; 1989.
5. Sheehan K. E-mail survey response rates: a review. J Comput Mediat Commun. 2001;6(2). Available at: http://jcmc.indiana.edu/vol6/issue2/sheehan.html. Accessed April 1, 2008.
6. Yun GW, Trumbo CW. Comparative response to a survey executed by post, e-mail, & web form. J Comput Mediat Commun. 2000;6(1). Available at: http://jcmc.indiana.edu/vol6/issue1/yun.html. Accessed April 1, 2008.
7. Schaefer DR, Dillman DA. Development of a standard e-mail methodology. Public Opin Q. 1998;62:378-97.
8. Converse PD, Wolfe EW, Oswald FL. Response rates for mixed-mode surveys using mail and e-mail/web. Am J Eval. 2008;29(1):99-107.
9. Kaplowitz MD, Hadlock TD, Levine R. A comparison of web and mail survey response rates. Public Opin Q. 2004;68(1):94-101.
10. Dillman DA. Mail and Internet Surveys: The Tailored Design Method. 2nd ed. New York: John Wiley & Sons, Inc. Survey implementation; p. 149.
11. Sitzia J, Wood N. Response rate in patient satisfaction research: an analysis of 210 published studies. Int J Qual Health Care. 1998;10(4):311-7. doi:10.1093/intqhc/10.4.311.
