Understanding your study group is key to getting a good response to a questionnaire; dealing with the resulting mass of data is another challenge
The first step in producing good questionnaire research is getting the right questionnaire.1 However, even the best questionnaire will not get adequate results if it is not used properly. This article outlines how to pilot your questionnaire, distribute and administer it, and get it returned, analysed, and written up for publication. It is intended to supplement published guidance on questionnaire research, three quarters of which focuses on content and design.2
Piloting
Questionnaires tend to fail because participants don't understand them, can't complete them, get bored or offended by them, or dislike how they look. Although friends and colleagues can help check spelling, grammar, and layout, they cannot reliably predict the emotional reactions or comprehension difficulties of other groups. Whether you have constructed your own questionnaire or are using an existing instrument, always pilot it on participants who are representative of your definitive sample. You need to build in protected time for this phase and get approval from an ethics committee.3
During piloting, take detailed notes on how participants react to both the general format of your instrument and the specific questions. How long do people take to complete it? Do any questions need to be repeated or explained? How do participants indicate that they have arrived at an answer? Do they show confusion or surprise at a particular response—if so, why? Short, abrupt questions may unintentionally provoke short, abrupt answers. Piloting will provide a guide for rephrasing questions to invite a richer response (box 1).
Box 1: Patient preference is preferable
I worked on a sexual health study where we initially planned to present the questionnaire on a computer, since we had read that people were supposedly more comfortable “talking” to a computer. Although this seemed to be the case in practices with middle class patients, we struggled to recruit in practices where participants were less familiar with computers. Their reasons for refusal were not linked to the topic of the research; rather, they saw our laptops as something they might break, that could make them look foolish, or that would feed directly to the internet (which was inextricably linked to computers in some people's minds). We found that offering a choice between completing the questionnaire on paper or on the laptop computer greatly increased response rates.
Planning data collection
You should be aware of the relevant data protection legislation (for the United Kingdom see www.informationcommissioner.gov.uk) and ensure that you follow your institution's internal codes of practice—for example, obtaining and completing a form from your data protection officer. Do not include names, addresses, or other identifying markers in your electronic database; record only a participant number, linked to a securely kept manual file.
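A minimal sketch of what this separation can look like in practice, assuming a simple file based workflow (the file names, fields, and example participant are illustrative, not part of the original guidance):

```python
import csv

# Illustrative layout only: analysis data carry a participant number but no identifiers;
# names and addresses live in a separate linkage file that is kept securely.
def save_response(participant_id, answers, path="responses.csv"):
    """Append one de-identified row of questionnaire answers."""
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow([participant_id, *answers])

def save_linkage(participant_id, name, address, path="linkage_secure.csv"):
    """Record identifying details separately; this file is never merged into the dataset."""
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow([participant_id, name, address])

save_response("P001", [3, 4, 2, 5])
save_linkage("P001", "Jane Doe", "1 Example Street")
```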
The piloting phase should include planning and testing a strategy for getting your questionnaire out and back—for example, who you have invited to complete it (the sampling frame), who has agreed to do so (the response rate), who you've had usable returns from (the completion rate), and whether and when you needed to send a reminder letter. If you are employing researchers to deliver and collect the questionnaire it's important they know exactly how to do this.4
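As an illustration of how these figures might be tracked during the pilot, the sketch below uses hypothetical counts and expresses both rates against the sampling frame (definitions vary, so state yours explicitly in your protocol):

```python
# Hypothetical pilot tracking figures, not data from a real study.
invited = 120    # sampling frame: everyone asked to complete the questionnaire
agreed = 84      # those who agreed to take part
usable = 70      # returns complete enough to analyse

response_rate = agreed / invited    # proportion of the sampling frame who agreed
completion_rate = usable / invited  # proportion of the sampling frame giving usable returns

print(f"Response rate: {response_rate:.0%}")      # 70%
print(f"Completion rate: {completion_rate:.0%}")  # 58%
```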
Administrative errors can hamper the progress of your research. Real examples include researchers giving the questionnaire to wrong participants (for example, a questionnaire aimed at men given to women); incomplete instructions on how to fill in the questionnaire (for example, participants did not know whether to tick one or several items); postal surveys in which the questionnaire was missing from the envelope; and a study of over 3000 participants in which the questionnaire was sent out with no return address.
Administering your questionnaire
The choice of how to administer a questionnaire is too often made on convenience or cost grounds (see table A on bmj.com). Scientific and ethical considerations should include:
The needs and preferences of participants, who should understand what is required of them; remain interested and cooperative throughout completion; be asked the right questions and have their responses recorded accurately; and receive appropriate support during and after completing the questionnaire
The skills and resources available to your research team
The nature of your study—for example, short term feasibility projects, clinical trials, or large scale surveys.
Maximising your response rate
Sending out hundreds of questionnaires is a thankless task, and it is sometimes hard to pay attention to the many minor details that combine to raise response and completion rates. Extensive evidence exists on best practice (box 2), and principal investigators should ensure that they provide their staff with the necessary time and resources to follow it. Note, however, that it is better to collect fewer questionnaires with good quality responses than high numbers of questionnaires that are inaccurate or incomplete. The third article in this series discusses how to maximise response rates from groups that are hard to research.15
Accounting for those who refuse to participate
Survey research tends to focus on people who have completed the study. Yet those who don't participate are equally important scientifically, and their details should also be recorded (remember to seek ethical approval for this).4,16,17
Box 2: Factors shown to increase response rates
The questionnaire is clearly designed and has a simple layout5
It offers participants incentives or prizes in return for completion6
It has been thoroughly piloted and tested5
Participants are notified about the study in advance with a personalised invitation7
The aim of the study and the means of completing the questionnaire are clearly explained8,9
A researcher is available to answer questions and collect the completed questionnaire10
If using a postal questionnaire, a stamped addressed envelope is included7
The participant feels they are a stakeholder in the study11
Questions are phrased in a way that holds the participant's attention11
The questionnaire has a clear focus and purpose and is kept concise7,8,11
The questionnaire is appealing to look at,12 as is the researcher13
If appropriate, the questionnaire is delivered electronically14
One way of reducing refusal and non-completion rates is to set strict exclusion criteria at the start of your research. For example, for practical reasons many studies exclude participants who are unable to read or write in the language of the questionnaire and those with certain physical and mental disabilities that might interfere with their ability to give informed consent, cooperate with the researcher, or understand the questions asked. However, research that systematically excludes hard to reach groups is increasingly seen as unethical, and you may need to build additional strategies and resources into your study protocol at the outset.15 Keep a record of all participants who fit the different exclusion categories (see bmj.com).
Collecting data on non-participants will also allow you to monitor the research process. For example, you may find that certain researchers seem to have a higher proportion of participants refusing, and if so you should work with those individuals to improve the way they introduce the research or seek consent. In addition, if early refusals are found to be unusually high, you might need to rethink your overall approach.10
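A minimal sketch of this kind of monitoring, assuming a recruitment log kept as a pandas DataFrame (the researcher codes and outcomes are hypothetical):

```python
import pandas as pd

# Hypothetical recruitment log: one row per person approached.
log = pd.DataFrame({
    "researcher": ["A", "A", "A", "B", "B", "B", "B", "C", "C"],
    "outcome":    ["completed", "refused", "completed",
                   "refused", "refused", "completed", "refused",
                   "completed", "completed"],
})

# Proportion of approaches ending in refusal, by researcher.
refusal_rate = (
    log.assign(refused=log["outcome"].eq("refused"))
       .groupby("researcher")["refused"]
       .mean()
)
print(refusal_rate)  # flags researchers whose refusal rate looks unusually high
```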
Entering, checking, and cleaning data
Novice researchers often assume that once they have selected, designed, and distributed their questionnaire, their work is largely complete. In reality, entering, checking, and cleaning the data account for much of the workload. Some principles for keeping quantitative data clean are listed on bmj.com.
Even if a specialist team sets up the database(s), all researchers should be taught how to enter, clean, code, and back up the data, and the system for doing this should be universally agreed and understood. Agree on the statistical package you wish to use (such as SPSS, Stata, EpiInfo, Excel, or Access) and decide on a coding system before anyone starts work on the dataset.
It is good practice to enter data into an electronic database as the study progresses rather than face a mountain of processing at the end. The project manager should normally take responsibility for coordinating and overseeing this process and for ensuring that all researchers know what their role is with data management. These and other management tasks are time consuming and must be built into the study protocol and budget. Include data entry and coding in any pilot study to get an estimate of the time required and potential problems to troubleshoot.
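A minimal sketch of the routine checks this involves, assuming responses have been entered into a CSV file and are inspected with pandas (column names and the valid range are illustrative):

```python
import pandas as pd

# Hypothetical dataset: one row per returned questionnaire.
df = pd.read_csv("responses.csv")

# 1. Duplicate participant numbers usually mean double entry.
duplicates = df[df.duplicated(subset="participant_id", keep=False)]

# 2. Values outside the agreed coding scheme (eg a 1-5 Likert item).
bad_likert = df[~df["q3_satisfaction"].isin([1, 2, 3, 4, 5])]

# 3. Missing data, item by item, so patterns of non-completion are visible.
missing_by_item = df.isna().sum()

print(f"Duplicate rows: {len(duplicates)}")
print(f"Out-of-range q3 values: {len(bad_likert)}")
print(missing_by_item)
```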
Analysing your data
You should be able to predict the type of analysis required for your different questionnaire items at the planning stage of your study by considering the structure of each item and the likely distribution of responses (box 3).1 Table B on bmj.com shows some examples of data analysis methods for different types of responses.18,19 w1
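For instance, a categorical item compared between groups might call for a chi-squared test, whereas an ordinal rating might call for a non-parametric comparison. A minimal sketch, using hypothetical data and scipy (the right test always depends on your actual design and distributions):

```python
import numpy as np
from scipy import stats

# Categorical item (yes/no) compared between two groups:
# chi-squared test on a hypothetical 2x2 table of counts.
table = np.array([[30, 20],    # group 1: yes, no
                  [18, 32]])   # group 2: yes, no
chi2, p_chi, dof, expected = stats.chi2_contingency(table)

# Ordinal Likert item (1-5) compared between two groups: Mann-Whitney U test.
group1 = [4, 5, 3, 4, 4, 5, 2, 4]
group2 = [2, 3, 3, 1, 4, 2, 3, 2]
u_stat, p_mw = stats.mannwhitneyu(group1, group2, alternative="two-sided")

print(f"Chi-squared p = {p_chi:.3f}; Mann-Whitney p = {p_mw:.3f}")
```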
Writing up and reporting
Once you have completed your data analysis, you will need to think creatively about the clearest and most parsimonious way to report and present your findings. You will almost certainly find that you have too much data to fit into a standard journal article, dissertation, or research report, so deciding what to include and omit is crucial. Take statistical advice from the outset of your research. This can keep you focused on the hypothesis or question you are testing and the important results from your study (and therefore what tables and graphs to present).
Box 3: Nasty surprise from a simple questionnaire
Moshe selected a standardised measure on emotional wellbeing to use in his research, which looked easy to complete and participants answered readily. When he came to analysing his data, he discovered that rather than scoring each response directly as indicated on the questionnaire, a complicated computer algorithm had to be created, and he was stumped. He found a statistician to help with the recoding, and realised that for future studies it might be an idea to check both the measure and its scoring system before selecting it.
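The scoring problem in box 3 is common with standardised measures: some items are reverse worded and the published algorithm must be applied before totals mean anything. A minimal sketch, assuming a hypothetical five item wellbeing scale scored 1-5 with two reverse-scored items (not the measure used in the box):

```python
# Hypothetical responses to a five-item scale (each item scored 1-5).
responses = {"item1": 4, "item2": 2, "item3": 5, "item4": 1, "item5": 3}

REVERSED = {"item2", "item4"}  # negatively worded items in this hypothetical measure
MAX_SCORE = 5

def score(responses):
    """Apply reverse coding where required, then sum to a total scale score."""
    total = 0
    for item, value in responses.items():
        total += (MAX_SCORE + 1 - value) if item in REVERSED else value
    return total

print(score(responses))  # 4 + 4 + 5 + 5 + 3 = 21
```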
Box 4: An unexpected result
Priti, a specialist registrar in hepatology, completed an attitude questionnaire in patients having liver transplantation and those who were still waiting for a donor. She expected to find that those who had received a new liver would be happier than those awaiting a donor. However, the morale scale used in her questionnaire showed that the transplantation group did not have significantly better morale scores. Priti felt that this negative finding was worth further investigation.
Methods section
The methods section should give details of your exclusion criteria and discuss their implications for the transferability of your findings. Data on refusals and unsuitable participants should also be presented and discussed, preferably using a recruitment diagram.w2 Finally, state and justify the statistical or qualitative analyses used.18,19 w2
Results section
When compiling the results section you should return to your original research question and set out the findings that address it. In other words, make sure your results are hypothesis driven. Do not be afraid to report non-significant results, which in reality are often as important as significant results—for example, if participants did not experience anxiety in a particular situation (box 4). Don't analyse and report on every question within your questionnaire.
Choose the most statistically appropriate and visually appealing format for graphs (table). w3 Label graphs and their axes adequately and include meaningful titles for tables and diagrams. Refer your reader to any tables or graphs within your text, and highlight the main findings.
Table 1

| Format | When to use | When to avoid |
|---|---|---|
| Data table | If you need to produce something simple and quick that has a low publication cost for journals, or if you want to make data accessible to the interested reader for further manipulation | Do not use if you want to make your work look visually appealing. Too many tables can weigh down the results section and obscure the really key results; the reader is forced to work too hard and may give up reading your report |
| Bar chart | If you need to convey changes and differences, particularly between groups (eg how men and women differed in their views on an exercise programme for recovering heart attack patients) | If your data are continuous and each point relates to the previous one, use a line graph instead; bar charts treat data as separate groups, not continuous variables |
| Scatter graph | Mostly used for displaying correlations or regressions (eg association between number of cigarettes smoked and reduced lung capacity) | If your data are based on groups or aggregated outcomes rather than individual scores |
| Pie chart | Used for simple summaries of data, particularly if a small number of choices were provided | As with bar charts, avoid if you want to present linear or relational data |
| Line graph | Where the points on the graph are logically linked, usually in time (eg scores on quality of life and emotional wellbeing measures taken monthly over six months) | If your data were not linked over time, repetition, etc, it is inappropriate to suggest a linear relation by presenting findings in this format |
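To illustrate the distinction drawn in the table, the sketch below plots hypothetical summary figures with matplotlib, using a bar chart for a comparison between groups and a line graph for a measure repeated over time:

```python
import matplotlib.pyplot as plt

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(9, 3.5))

# Bar chart: differences between groups (hypothetical agreement, %).
ax1.bar(["Men", "Women"], [62, 74])
ax1.set_ylabel("Agreed with programme (%)")
ax1.set_title("Group comparison: bar chart")

# Line graph: the same measure repeated monthly, so points are logically linked.
months = [1, 2, 3, 4, 5, 6]
wellbeing = [52, 55, 57, 60, 59, 63]
ax2.plot(months, wellbeing, marker="o")
ax2.set_xlabel("Month")
ax2.set_ylabel("Wellbeing score")
ax2.set_title("Repeated measure: line graph")

plt.tight_layout()
plt.savefig("figures.png")  # or plt.show()
```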
If you have used open ended questions within your questionnaire, do not cherry pick quotes for your results section. You need to outline what main themes emerged, and use quotes as necessary to illustrate the themes and supplement your quantitative findings.
Discussion section
The discussion should refer back to the results section and suggest what the main findings mean. You should acknowledge the limitations of your study and couch the discussion in the light of these. For example, if your response rate was low, you may need to recommend further studies to confirm your preliminary results. Your conclusions must not go beyond the scope of your study—for example, if you have done a small, parochial study do not suggest changes in national policy. You should also discuss any questions your participants persistently refused to answer or answered in a way you didn't expect.
Taking account of psychological and social influences
Questionnaire research (and indeed science in general) can never be completely objective. Researchers and participants are all human beings with psychological, emotional, and social needs. Too often, we fail to take these factors into account when planning, undertaking, and analysing our work. A questionnaire means something different to participants and researchers.w4 Researchers want data (with a view to publications, promotion, academic recognition, and further grant income). Junior research staff and administrators, especially if poorly trained and supervised, may be put under pressure, leading to critical errors in piloting (for example, piloting on friends rather than the target group), sampling (for example, drifting towards convenience rather than random samples) and in the distribution, collection, and coding of questionnaires.15 Staff employed to assist with a questionnaire study may not be familiar with all the tasks required to make it a success and may be unaware that covering up their ignorance or skill deficits will make the entire study unsound.
Research participants, on the other hand, may be motivated to complete a questionnaire through interest, boredom, a desire to help others (particularly true in health studies), because they feel pressurised to do so, through loneliness, or for an unconscious ulterior motive (“pleasing the doctor”). All of these introduce potential biases into the recruitment and data collection process.
Summary points
Piloting is essential to check that the questionnaire works in the study group and to identify administrative and analytical problems
The method of administration should be determined by scientific considerations, not just costs
Entering, checking, and cleaning data should be done as the study progresses
Don't try to include all the results when reporting studies
Do include exclusion criteria and data on non-respondents
Supplementary Material
This is the second in a series of three articles edited by Trisha Greenhalgh
References w1-10, illustrative examples, and further information on using questionnaires are on bmj.com
I thank Alicia O'Cathain, Trish Greenhalgh, Jill Russell, Geoff Wong, Marcia Rigby, Sara Shaw, Fraser Macfarlane, and Will Callaghan for their helpful feedback on earlier versions of this paper and Gary Wood for advice on statistics and analysis.
PMB has taught research methods in a primary care setting for the past 13 years, specialising in practical approaches and using the experiences and concerns of researchers and participants as the basis of learning. This series of papers arose directly from questions asked about real questionnaire studies. To address these questions she and Trisha Greenhalgh explored a wide range of sources from the psychological and health services research literature.
Competing interests: None declared.
References
1. Boynton PM, Greenhalgh T. Hands-on guide to questionnaire research: selecting, designing, and developing your questionnaire. BMJ 2004;328:1312-5.
2. Wall CR, De Haven MJ, Oeffinger KC. Survey methodology for the uninitiated. J Fam Pract 2002;51:21-7.
3. Gillham B. Developing a questionnaire (real world research). London: Continuum, 2000.
4. Brogger J, Bakke P, Eide GE, Gulsvik A. Contribution of follow-up of nonresponders to prevalence and risk estimates: a Norwegian respiratory health survey. Am J Epidemiol 2003;157:558-66.
5. Puleo E, Zapka J, White MJ, Mouchawar J, Somkin C, Taplin S. Caffeine, cajoling, and other strategies to maximise clinician survey response rates. Eval Health Prof 2002;25:169-84.
6. Halpern SD, Ubel PA, Berlin JA, Asch DA. Randomized trial of $5 versus $10 monetary incentives, envelope size, and candy to increase physician response rates to mailed questionnaires. Med Care 2002;40:834-9.
7. Edwards P, Roberts I, Clarke M, DiGuiseppi C, Pratap S, Wentz R, et al. Increasing response rates to postal questionnaires: systematic review. BMJ 2002;324:1183.
8. Oppenheim AN. Questionnaire design, interviewing and attitude measurement. London, New York: Continuum, 1992.
9. Sapsford R. Survey research. London: Sage, 1999.
10. Boynton PM. The research companion: a practical guide for the social and health sciences. London: Taylor and Francis (in press).
11. McColl E, Jacoby A, Thomas L, Soutter J, Bamford C, Steen N, et al. Design and use of questionnaires: a review of best practice applicable to surveys of health service staff and patients. Health Technol Assess 2001;5.
12. Keeter S, Kennamer JD, Ellis JM, Green RG. Does the use of colored paper improve response rates to mail surveys? A multivariate experimental evaluation. J Soc Serv Res 2001;28:69-78.
13. Gueguen N, Legoherel P, Jacob C. Solicitation for participation in a survey via email. Effect of live presence and physical attractiveness of the solicitor on response rate. Can J Behav Sci 2003;35:84-96.
14. Shannon DM, Bradshaw CC. A comparison of response rate, response time, and costs of mail and electronic surveys. J Exp Educ 2002;70:179-92.
15. Boynton PM, Wood GW, Greenhalgh T. Hands-on guide to questionnaire research: reaching beyond the white middle classes. BMJ (in press).
16. Meadows KA, Gardiner E, Greene T, Rogers D, Russell D, Smoljanvic L. Factors affecting general practice patient response rates to postal survey of health status in England: a comparative analysis of three disease groups. J Eval Clin Pract 1998;4:243-7.
17. Dallosso HM, Matthews RJ, McGrother CW, Clarke M, Perry SI, Shaw C, et al. An investigation into nonresponse bias in a postal survey on urinary symptoms. BJU Int 2003;91:631-6.
18. Howitt D, Cramer D. First steps in research and statistics. London: Routledge, 2000.
19. Richardson JTE. Handbook of qualitative research methods for psychology and the social sciences. Leicester: BPS Books, 1996.