Editorial

BMJ. 2002 May 18;324(7347):1168–1169. doi: 10.1136/bmj.324.7347.1168

Improving the response rates to questionnaires

Several common sense strategies are effective

Liam Smeeth 1,2, Astrid E Fletcher 1,2
PMCID: PMC1123146  PMID: 12016167

Most readers of the BMJ probably receive postal questionnaires from time to time. Whether such questionnaires are dutifully completed and returned, left to gather dust, or rapidly thrown away may seem like a random process of little importance. However, while response may be of little consequence at the individual level, for many research studies a high response rate to a postal questionnaire is critical. No matter how expensive, well designed, or important a study, a poor response rate can introduce such uncertainty—and worse still, bias—in the results as to make the study of little scientific value. Nevertheless, postal questionnaires remain attractive to researchers because they are likely to be substantially cheaper than data collection based on interviews. Postal questionnaires are also increasingly used in other areas of health care, for example in screening programmes, to assess patient satisfaction, or to assess outcomes after treatments such as surgery. Methods to maximise response rates from postal questionnaires therefore have considerable relevance for medical researchers, practitioners, and policy makers alike.
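The bias mentioned above arises when non-response is related to the very thing being measured. A minimal simulation can make this concrete; the figures used here (a 30% true prevalence, and lower willingness to respond among people with the symptom) are purely hypothetical assumptions chosen for illustration, not drawn from any study.

```python
import random

random.seed(42)

# Hypothetical scenario: 10,000 people are posted a questionnaire
# about a symptom whose true prevalence is 30%. People WITH the
# symptom respond less often (40%) than those without (70%), so
# responders are not a random sample of the population.
N = 10_000
population = [random.random() < 0.30 for _ in range(N)]

responses = [has_symptom for has_symptom in population
             if random.random() < (0.40 if has_symptom else 0.70)]

true_prev = sum(population) / len(population)
est_prev = sum(responses) / len(responses)
response_rate = len(responses) / N

print(f"response rate:        {response_rate:.1%}")
print(f"true prevalence:      {true_prev:.1%}")
print(f"estimated prevalence: {est_prev:.1%}")
```

With these assumed response probabilities, the survey-based estimate understates the true prevalence by roughly a third, and no amount of extra sample size fixes it: the error comes from who responds, not from how many. This is why strategies that raise response rates across the board matter more than simply posting more questionnaires.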

In this issue Edwards and colleagues present a systematic review of interventions to improve response rates to postal questionnaires (p 1183).1 The review included 292 randomised trials that evaluated 75 different strategies. The scale of the review indicates the need for high quality, rigorous systematic reviews—identifying, appraising, and collating such an enormous volume of research would clearly be beyond even the best intentioned researchers planning to post a questionnaire. The systematic review identified several factors that were associated with increased response rates, including monetary incentives; sending the questionnaires by recorded delivery and by first class post; short questionnaires; coloured ink; personalised letters; and follow up contact and second copies. Questionnaires including questions of a sensitive nature, and those from commercial as opposed to university sources, were less likely to be returned. None of these factors is likely to surprise readers. However, even though the review by Edwards and colleagues serves to confirm many ideas that make sense, it is important because it provides a firm evidence base for researchers trying to improve response rates and therefore the quality of their research.

Some caution is required in interpreting the findings of the review. Many of the included trials had nothing to do with health care. This meant that many more trials could be included in the review, allowing the reviewers to assess a wider range of possible interventions and greatly increasing the power and precision of the estimates of effect of these interventions. However, the extent to which findings from, for example, commercial fields such as marketing can safely be generalised to a healthcare setting is questionable. The intervention found to have the greatest effect on response rates—offering money—raises a number of ethical considerations and is a strategy that many people in health care would be reluctant to use, particularly with vulnerable groups. The current relevance of the findings is also important. Some of the trials were done some decades ago, when the public was relatively naive. Personalised letters, coloured inks, free pens, promises of free gifts, and even gold or silver envelopes are now routinely used by the commercial sector to attract the attention of potential customers. Many recipients may now be immune to such devices. One clear message that does emerge is the need for health researchers to make their letters distinguishable from those of commercial organisations.

Edwards and colleagues quite rightly focused on a single issue: response rates to postal questionnaires. Their review is the first Cochrane review focusing on research methodology.2 Systematic reviews addressing a wide range of other methodology questions are clearly needed. For example, while postal questionnaires are relatively cheap and high response rates can be obtained, they are also associated with higher levels of missing or incomplete responses.3 Choosing between postal questionnaires and other methods for collecting data is another important question where the evidence is unclear, indicating the need for a systematic review. Much of the research on postal questionnaires will be irrelevant in some developing countries, where strategies such as door to door surveys are more likely to be used. Again, the evidence about factors associated with higher response rates for door to door surveys is unclear and a systematic review would be of great value.

The evidence base for research methodology is growing fast. An early example was a study showing that in randomised trials, concealment of allocation (meaning no one can predict which group participants will be randomised to) and blinding of outcome assessments were associated with reduced bias.4 Other areas, such as methods for undertaking systematic reviews and health services research, have a substantial literature.5,6 Such evidence matters because it can improve the quality of research and ultimately improve clinical care and health policy. As more is known about the factors associated with high quality research, it is up to investigators to make more use of research findings. The review by Edwards and colleagues is a valuable step towards making evidence based research a reality.

Papers p 1183

Footnotes

  LS and AF work in the same institution as some of the authors of the paper by Edwards and colleagues, but have no research links with them. LS is an unpaid editor for the Cochrane Collaboration.

References

1. Edwards P, Roberts I, Clarke M, DiGuiseppi C, Pratap S, Wentz R, et al. Increasing response rates to postal questionnaires: systematic review. BMJ 2002;324:1183–1185. doi: 10.1136/bmj.324.7347.1183.
2. Edwards P, Roberts I, Clarke M, DiGuiseppi C, Pratap S, Wentz R, et al. Methods to influence response to postal questionnaires (Cochrane Methodology Review). In: The Cochrane Library, Issue 4. Oxford: Update Software; 2001. Cochrane library number: MR000008.
3. Smeeth L, Fletcher AE, Stirling S, Nunes M, Breeze E, Ng E, et al. Randomised comparison of three methods of administering a screening questionnaire to elderly people: findings from the MRC trial of the assessment and management of older people in the community. BMJ 2001;323:1403–1407. doi: 10.1136/bmj.323.7326.1403.
4. Schulz KF, Chalmers I, Hayes RJ, Altman DG. Empirical evidence of bias. Dimensions of methodological quality associated with estimates of treatment effects in controlled trials. JAMA 1995;273:408–412. doi: 10.1001/jama.273.5.408.
5. Egger M, Davey Smith G, Altman DG. Systematic reviews in health care: meta-analysis in context. 2nd ed. London: BMJ Books; 2001.
6. Black N, Brazier J, Fitzpatrick R, Reeves B. Health services research methods. London: BMJ Books; 1998.
