Canadian Family Physician
2012 Apr;58(4):e225–e228.

Why are response rates in clinician surveys declining?


Ellen R Wiebe, Janusz Kaczorowski, Jacqueline MacKay
PMCID: PMC3325475  PMID: 22611609

Abstract

Objective

To understand why response rates in clinician surveys are declining.

Design

Cross-sectional fax-back survey.

Setting

British Columbia.

Participants

Random sample of family physicians and all gynecologists in the College of Physicians and Surgeons of British Columbia’s registry.

Main outcome measures

Accuracy of the College of Physicians and Surgeons of British Columbia’s registry, and the prevalence and characteristics of physicians with policies not to participate in any surveys.

Results

Of 542 physicians who received surveys, 76 (14.0%) responded. On follow-up we found the following: the College of Physicians and Surgeons of British Columbia’s registry was inaccurate for 94 (17.3%) listings; 14 (2.6%) physicians were away; 100 (18.5%) were not eligible; and 197 (36.3%) had an office policy not to participate in any surveys. Compared with the respondents, physicians with an office policy not to participate in any surveys were more likely to be men, less likely to be white, more likely to have urban-based practices, and more likely to have been in practice for more than 15 years.

Conclusion

Many physicians have an office policy not to participate in any surveys. Given the trend toward lower response rates, the minimum response rates that many journals recommend for clinician surveys might need to be reassessed.


Surveys of clinicians are an important source of information when designing, implementing, or evaluating new or existing programs and policies. Surveys are an efficient, inexpensive, and flexible way of collecting standardized information from a large number of respondents. A high response rate is generally seen as key to legitimizing a survey’s results. A number of medical journals, including national journals in the United States and Canada, recommend survey response rates of at least 60% to ensure that nonresponse bias does not threaten the validity of the findings.1,2

Although several systematic reviews have identified strategies to increase response rates,3–6 there has been a steady downward trend in clinicians’ response rates to surveys.7 Although faxed, mailed, and e-mailed surveys achieve similar response rates, McMahon et al suggest that a combination of these methods might be better than any one alone.8

METHODS

We conducted a fax-back survey of family doctors and gynecologists in British Columbia, originally designed to assess their knowledge about, attitudes toward, and practices for handling side effects of hormonal contraceptives. We used a modified Dillman tailored design with pretesting and 3 fax or e-mail contacts but, owing to budgetary and time constraints, no incentive and no introductory letter or postcard before the survey.9 The questionnaire was 2 pages long and included 15 questions. Our sampling frame consisted of every sixth family doctor in the alphabetic listing of the College of Physicians and Surgeons of British Columbia’s (the College’s) registry between 2008 and 2009, and every gynecologist in the same registry. Descriptive statistics and between-group comparisons were conducted using SPSS software, version 18. All statistical tests were 2-sided at a predetermined α level of .05. The study was approved by the University of British Columbia Behavioural Research Ethics Board. Because the response rate was so low, we explored why this might have been the case.
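The between-group comparisons were run in SPSS, but the underlying test is a standard Pearson χ2 on a contingency table. As a hedged sketch of that calculation (the counts below are illustrative, not the study’s data):

```python
# Minimal sketch of the Pearson chi-squared statistic used for the
# between-group comparisons (the paper ran these tests in SPSS at a
# 2-sided alpha of .05). The table below is a hypothetical 2x2 example,
# not data from the study.

def chi_square_statistic(table):
    """Pearson chi-squared statistic for a contingency table (list of rows)."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    n = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / n
            stat += (observed - expected) ** 2 / expected
    return stat

# Illustrative 2x2 table: eg, sex (rows) by response group (columns).
table = [[30, 20], [10, 40]]
stat = chi_square_statistic(table)
# df = (rows - 1) * (cols - 1) = 1; the critical value at alpha = .05 is 3.841,
# so a statistic of this size would be significant.
print(round(stat, 3))  # -> 16.667
```

In practice a statistical package reports the exact P value; the statistic is compared against the χ2 distribution with the appropriate degrees of freedom.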

RESULTS

Of the 542 physicians who received faxed surveys (faxed 3 times), only 76 (14.0%) responded. The research assistant then telephoned all physicians who had not completed the survey to determine whether they had clinical practices in which they saw women for contraception and whether there was an office policy on participation in surveys. We found that the College registry was inaccurate for 94 (17.3%) listings (eg, physicians not in practice, wrong fax numbers). In addition, 14 (2.6%) physicians were away and 100 (18.5%) were not eligible because they did not see women for contraception (eg, their practices were limited to geriatrics or infertility). More important, 197 (36.3%) physicians had an office policy not to participate in any surveys.
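The percentages above can be verified with simple arithmetic. The adjusted rate at the end is our own back-of-envelope illustration (it excludes inaccurate listings and ineligible physicians from the denominator, a figure the paper itself does not report):

```python
# Back-of-envelope check of the reported percentages; all counts come from
# the results above. The final adjusted response rate is illustrative only:
# it removes inaccurate listings and ineligible physicians from the
# denominator, which the paper does not do.

surveyed = 542
responded = 76
inaccurate_listing = 94
away = 14
ineligible = 100
office_policy = 197

# Raw percentages as reported (rounded to one decimal place)
print(round(100 * responded / surveyed, 1))           # -> 14.0
print(round(100 * inaccurate_listing / surveyed, 1))  # -> 17.3
print(round(100 * office_policy / surveyed, 1))       # -> 36.3

# Illustrative adjusted response rate over contactable, eligible physicians
eligible_contactable = surveyed - inaccurate_listing - ineligible
print(round(100 * responded / eligible_contactable, 1))  # -> 21.8
```

Even with the cleaner denominator, the response rate stays well below the 60% threshold many journals recommend.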

Information from the College registry was used to compare the respondents with the eligible physicians with an office policy not to participate in any surveys and the other eligible nonrespondents. Compared with the respondents, physicians with an office policy not to participate in any surveys were more likely to be men, less likely to be white, more likely to have urban-based practices, and more likely to have been in practice for more than 15 years (Table 1). The profile of physicians with an office policy not to participate in any surveys closely resembled the characteristics of nonrespondents.

Table 1.

Characteristics of respondents, nonrespondents, and physicians with an office policy not to participate in any surveys: N = 335.*

CHARACTERISTIC | RESPONDENTS, N = 74, N (%) | NONRESPONDENTS, N = 64, N (%) | PHYSICIANS WITH AN OFFICE POLICY NOT TO PARTICIPATE IN ANY SURVEYS, N = 197, N (%) | P VALUE
Sex | | | | .02
  • Male | 31 (41.9) | 38 (59.3) | 127 (64.4) |
  • Female | 43 (58.1) | 26 (40.6) | 69 (35.0) |
Ethnicity | | | | .002
  • White | 71 (95.9) | 49 (76.6) | 143 (72.6) |
  • East Asian | 2 (2.7) | 6 (9.4) | 36 (18.3) |
  • South Asian | 1 (1.4) | 4 (6.3) | 7 (3.6) |
  • Other | 0 (0) | 4 (4.7) | 11 (5.6) |
Practice setting | | | | .002
  • Urban | 40 (54.1) | 48 (75.0) | 148 (75.1) |
  • Small city | 10 (13.5) | 8 (12.5) | 31 (15.7) |
  • Rural | 23 (31.1) | 7 (10.9) | 18 (9.1) |
Years in practice | | | | < .001
  • < 15 | 31 (41.9) | 13 (20.3) | 29 (14.7) |
  • ≥ 15 | 43 (58.1) | 50 (79.7) | 168 (85.3) |
* Eligible physicians only (ie, physicians who did not see women for contraception were excluded).
Data were not available for all physicians for all characteristics.
P values are for the differences between respondents and physicians with an office policy not to participate in any surveys (χ2 test).

DISCUSSION

While there is ample evidence about nonrespondents and their characteristics, there is little information on the prevalence and profile of physicians with an office policy not to participate in any surveys. Most evidence-based strategies to increase response rates are likely to be ineffective for this group, because physicians with such policies would never see any of the surveys addressed to them. The trend toward lower response rates and the high proportion of clinicians with a policy of not participating in any surveys indicate that conducting a mail or fax survey might no longer be an effective or useful method of collecting information about physicians’ knowledge of, attitudes toward, and experiences with various medical practices. Low response rates also make it more important to describe the nonrespondents for each survey.

If our results are representative of physician office policies in other jurisdictions, then the minimum response rates that many journals recommend for clinician surveys might need to be reassessed. It is disturbing that so many clinicians have such policies, as their input is critical when designing, implementing, or evaluating new or existing programs and policies.

Acknowledgments

Dr Wiebe was supported by the Vancouver Foundation through a BC Medical Services Foundation grant to the Community-Based Clinical Investigator Program of the University of British Columbia’s Department of Family Practice.

EDITOR’S KEY POINTS

  • Clinician surveys are an important source of information to help plan education and policy, and a high response rate is generally seen as key to legitimizing a survey’s results.

  • The trend toward lower response rates to surveys and the high rate of clinicians who have a policy not to participate in any surveys indicate that conducting a mail or fax survey might no longer be an effective or useful method of collecting information about physicians’ knowledge of, attitudes toward, and experiences with various medical practices.

  • Many journals might need to reassess their recommended response rates for clinician surveys.


Footnotes

This article has been peer reviewed.


Contributors

All authors contributed to the concept and design of the study; data gathering, analysis, and interpretation; and preparing the manuscript for submission.

Competing interests

None declared

References

1. JAMA [website]. JAMA instructions for authors. Chicago, IL: American Medical Association; 2012. Available from: http://jama.ama-assn.org/misc/ifora.dtl#SurveyResearch. Accessed 2010 Jul 27.
2. Burns KE, Duffett M, Kho ME, Meade MO, Adhikari NK, Sinuff T, et al. A guide for the design and conduct of self-administered surveys of clinicians. CMAJ. 2008;179(3):245–52. doi: 10.1503/cmaj.080372.
3. Edwards PJ, Roberts I, Clarke MJ, Diguiseppi C, Wentz R, Kwan I, et al. Methods to increase response to postal and electronic questionnaires. Cochrane Database Syst Rev. 2009;(3):MR000008. doi: 10.1002/14651858.MR000008.pub4.
4. Nakash RA, Hutton JL, Jørstad-Stein EC, Gates S, Lamb SE. Maximising response to postal questionnaires—a systematic review of randomised trials in health research. BMC Med Res Methodol. 2006;6:5. doi: 10.1186/1471-2288-6-5.
5. Thorpe C, Ryan B, McLean SL, Burt A, Stewart M, Brown JB, et al. How to obtain excellent response rates when surveying physicians. Fam Pract. 2009;26(1):65–8. doi: 10.1093/fampra/cmn097. Epub 2008 Dec 12.
6. Edwards P, Roberts I, Clarke M, DiGuiseppi C, Pratap S, Wentz R, et al. Methods to increase response rates to postal questionnaires. Cochrane Database Syst Rev. 2007;(2):MR000008. doi: 10.1002/14651858.MR000008.pub3.
7. Cook JV, Dickinson HO, Eccles MP. Response rates in postal surveys of healthcare professionals between 1996 and 2005: an observational study. BMC Health Serv Res. 2009;9:160. doi: 10.1186/1472-6963-9-160.
8. McMahon SR, Iwamoto M, Massoudi MS, Yusuf HR, Stevenson JM, David F, et al. Comparison of e-mail, fax, and postal surveys of pediatricians. Pediatrics. 2003;111(4 Pt 1):e299–303. doi: 10.1542/peds.111.4.e299.
9. Dillman DA. Mail and Internet surveys. The tailored design method. 2nd ed. Hoboken, NJ: John Wiley & Sons; 2007.
