American Journal of Pharmaceutical Education. 2008 Feb 15;72(1):11. doi: 10.5688/aj720111

Best Practices for Survey Research Reports: A Synopsis for Authors and Reviewers

JoLaine Reierson Draugalis,a Stephen Joel Coons,b Cecilia M Plazac
PMCID: PMC2254236  PMID: 18322573

INTRODUCTION

As survey researchers, as well as reviewers, readers, and end users of the survey research literature, we are all too often disheartened by the poor quality of survey research reports published in the peer-reviewed literature. For the most part, poor quality can be attributed to 2 primary problems: (1) ineffective reporting of sufficiently rigorous survey research, or (2) poorly designed and/or executed survey research, regardless of the reporting quality. The standards for rigor in the design, conduct, and reporting of survey research in pharmacy should be no lower than the standards for the creation and dissemination of scientific evidence in any other discipline. This article provides a checklist and recommendations for authors and reviewers to use when submitting or evaluating manuscripts reporting survey research that used a questionnaire as the primary data collection tool.

To place elements of the checklist in context, a systematic review of the Journal was conducted for 2005 (volume 69) and 2006 (volume 70) to identify articles that reported the results of survey research. Survey research methods (not including personal or telephone interviews) were used in 10/39 (26%) of the research articles published in 2005 (volume 69) and 10/29 (35%) of those published in 2006 (volume 70). As stated by Kerlinger and Lee, “Survey research studies large and small populations (or universes) by selecting and studying samples chosen from the population to discover the relative incidence, distribution, and interrelations of sociological and psychological variables.”1 This is easier said than done, at least if it is to be done in a methodologically sound way. Although survey research projects may use personal interviews, panels, or telephones to collect data, this paper considers only mail, e-mail, and Internet-based data collection approaches. For clarity, the term survey should be reserved for the research method, whereas a questionnaire or survey instrument is the data collection tool; in other words, the terms survey and questionnaire should not be used interchangeably. Likewise, data collection instruments are used in many research designs, such as pretest/posttest and experimental designs, and use of the term survey to describe the instrument or the methodology in these cases is inappropriate. In Journal volumes 69 and 70 (2005-2006), 11/68 research articles (16%) used inappropriate terminology. Survey research can be very powerful and may well be the only way to conduct a particular inquiry or ongoing body of research.

There is no shortage of text and reference books; to name but a few of our favorites: Dillman's Mail and Internet Surveys: The Tailored Design Method,2 Fowler's Survey Research Methods,3 Salant and Dillman's How to Conduct Your Own Survey,4 and Aday and Cornelius's Designing and Conducting Health Surveys: A Comprehensive Guide.5 Numerous guidelines, position statements, and best practices are also available from a wide variety of associations in the professional literature and via the Internet; we cite a number of these throughout this paper. Unfortunately, it is apparent from both the published literature and the many requests to contribute data to survey research projects that these materials are not always consulted and applied. In fact, it seems quite evident that there is a false impression that conducting survey research is relatively easy. As an aside to his determination of the effectiveness of follow-up techniques in mail surveys, Stratton found that “the number of articles that fell short of a scholarly level of execution and reporting was surprising.”6 In addition, Desselle more recently observed that, “Surveys are perhaps the most used, and sometimes misused, methodological tools among academic researchers.”7

We will structure this paper based on a modified version of the 10 guiding questions established in the Best Practices for Survey and Public Opinion Research by the American Association for Public Opinion Research (AAPOR).8 The 10 guiding questions are: (1) was there a clearly defined research question? (2) did the authors select samples that well represent the population to be studied? (3) did the authors use designs that balance costs with errors? (4) did the authors describe the research instrument? (5) was the instrument pretested? (6) were quality control measures described? (7) was the response rate sufficient to enable generalizing the results to the target population? (8) were the statistical, analytic, and reporting techniques appropriate to the data collected? (9) was evidence of ethical treatment of human subjects provided? and (10) were the authors transparent to ensure evaluation and replication? These questions can serve as a guide for reviewers and researchers alike for identifying features of quality survey research. A grid addressing the 10 questions and subcategories is provided in Appendix 1 for use in preparing and reviewing submissions to the Journal.

Clearly Defined Research Question

Formulating the research questions and study objectives depends on prior work and knowing what is already available either in archived literature, American Association of Colleges of Pharmacy (AACP) institutional research databases, or from various professional organizations and associations.9,10 The article should clearly state why the research is necessary, placing it in context, and drawing upon previous work via a literature review.9 This is especially pertinent to the measurement of psychological constructs, such as satisfaction (eg, satisfaction with pharmacy services). Too many researchers just put items down on a page that they think measure the construct (and answer the research question); however, they may miss the mark because they have not approached the research question and, subsequently, item selection or development from the perspective of a theoretical framework or existing model that informs the measurement of satisfaction. Another important consideration is whether alternatives to using survey research methods have been considered, in essence asking the question of whether the information could better be obtained using a different methodology.8

Sampling Considerations

For a number of reasons (eg, time, cost), data are rarely obtained from every member of a population. A census, while appropriate in certain specific cases where responses from an entire population are needed to adequately answer the research question, is not generally required in order to obtain the desired data. In the majority of situations, sampling from the population under study will both answer the research question and save both time and money. Survey research routinely involves gathering data from a subset or sample of individuals intended to represent the population being studied.11 Therefore, since researchers are relying on data from samples to reflect the characteristics and attributes of interest in the target population, the samples must be properly selected.12 To enable the proper selection of a sample, the target population has to be clearly identified. The sample frame should closely approximate the full target population; any significant departure from that should be justified. Once the sample frame has been identified, the sample selection process needs to be delineated including the sampling method (eg, probability sampling techniques such as simple random or stratified). Although nonprobability sample selection approaches (eg, convenience, quota, or snowball sampling) are used in certain circumstances, probability sampling is preferred if the survey results are to be credibly generalized to the target population.13
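
To make the distinction among sampling approaches concrete, here is a minimal sketch, in Python, of drawing both a simple random sample and a proportionate stratified sample from a sampling frame; the frame, strata, and sample size are hypothetical and chosen only for illustration.

```python
import random
from collections import defaultdict

# Hypothetical sampling frame: (id, stratum) pairs. In practice this would be
# the roster that approximates the target population (eg, all enrolled students).
frame = [(i, "first-year" if i <= 800 else "fourth-year") for i in range(1, 1201)]
n = 120  # desired sample size (illustrative)

# Simple random sample: every member of the frame has an equal chance of selection.
srs = random.sample(frame, n)

# Proportionate stratified sample: sample within each stratum in proportion to its
# share of the frame, guaranteeing representation of each subgroup.
by_stratum = defaultdict(list)
for unit in frame:
    by_stratum[unit[1]].append(unit)

stratified = []
for stratum, units in by_stratum.items():
    k = round(n * len(units) / len(frame))
    stratified.extend(random.sample(units, k))

print(len(srs), len(stratified))
```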

The required sample size depends on a variety of factors, including whether the purpose of the survey is to simply describe population characteristics or to test for differences in certain attributes of interest by subgroups within the population. Authors of survey research reports should describe the process they used to estimate the necessary sample size including the impact of potential nonresponse. An in-depth discussion of sample size determination is beyond the scope of this paper; readers are encouraged to refer to the excellent existing literature on this topic.13,14
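
Although the details are left to the cited literature, one common illustration of the process is estimating the sample size needed to measure a population proportion within a chosen margin of error and then inflating it for anticipated nonresponse. The sketch below shows that calculation (Cochran's formula with a finite population correction); all numeric values are assumptions for illustration, not recommendations.

```python
import math

def sample_size_for_proportion(margin_of_error, population_size,
                               p=0.5, z=1.96, expected_response_rate=1.0):
    """Sample size to estimate a proportion, with a finite population
    correction and inflation for anticipated nonresponse."""
    n0 = (z ** 2) * p * (1 - p) / margin_of_error ** 2   # large-population estimate
    n = n0 / (1 + (n0 - 1) / population_size)            # finite population correction
    return math.ceil(n / expected_response_rate)         # invite enough to offset nonresponse

# Illustrative values: +/-5% margin of error, 95% confidence, a sampling frame
# of 2,000, and an anticipated 60% response rate.
print(sample_size_for_proportion(0.05, 2000, expected_response_rate=0.60))   # 538
```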

Balance Between Costs and Errors

Balance between costs and errors deals with a realistic appraisal of the resources, both monetary and human, needed to carry out the study. Tradeoffs are necessary but involve more than just the number of subjects; for example, a large sample with insufficient follow-up must be weighed against a smaller, more targeted representative sample with multiple follow-ups. A seemingly large sample does not necessarily represent a probability sample. If follow-ups are not planned and budgeted for, the study should not be initiated. The effectiveness of incentives and approaches to follow-up are discussed in detail elsewhere,2,4,5 but the importance of well-planned follow-up procedures cannot be overstated. In volumes 69 and 70 of the Journal, 11/20 (55%) survey research papers reported the use of at least 1 follow-up to the initial invitation to participate.

Description of the Survey Instrument

The survey instrument or questionnaire used in the research should be described fully. If an existing questionnaire was used, evidence of psychometric properties such as reliability and validity should be provided from the relevant literature. Evidence of reliability indicates that the questionnaire is measuring the variable or variables in a reproducible manner. Evidence supporting a questionnaire's validity indicates that it is measuring what is intended to be measured.15 In addition, the questionnaire's measurement model (ie, scale structure and scoring system) should be described in sufficient detail to enable the reader to understand the meaning and interpretation of the resulting scores. When open-ended, or qualitative, questions are included in the questionnaire, a clear description must be provided as to how the resulting text data will be summarized and coded, analyzed, and reported.

If a new questionnaire was created, a full description of its development and testing should be provided. This should include discussion of the item generation and selection process, choice of response options/scales, construction of multi-item scales (if included), and initial testing of the questionnaire's psychometric properties.15 As with an existing questionnaire, evidence supporting the validity and reliability of the new questionnaire should be clearly provided by authors. If researchers are using only selected items from scales in an existing questionnaire, justification for doing so should be provided and their measurement properties in their new context should be properly tested prior to use. In addition, proper attribution of the source of scale items should be provided in the study report. In volumes 69 and 70 in the Journal, 10/20 (50%) survey research papers provided no or insufficient information concerning the reliability and/or validity of the survey instrument used in the study.
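
As one concrete example of reliability evidence, internal consistency of a multi-item scale is frequently summarized with Cronbach's alpha. The sketch below computes it from a small, invented respondent-by-item matrix; the data and the 1-5 response scale are hypothetical.

```python
import numpy as np

def cronbach_alpha(item_scores):
    """Cronbach's alpha for a respondents-by-items matrix from a single scale."""
    items = np.asarray(item_scores, dtype=float)
    k = items.shape[1]                                    # number of items
    sum_item_variances = items.var(axis=0, ddof=1).sum()  # sum of item variances
    total_variance = items.sum(axis=1).var(ddof=1)        # variance of scale totals
    return (k / (k - 1)) * (1 - sum_item_variances / total_variance)

# Invented responses from 5 respondents to a 4-item agreement scale (1-5).
responses = [[4, 5, 4, 4],
             [2, 2, 3, 2],
             [5, 4, 5, 5],
             [3, 3, 3, 4],
             [1, 2, 2, 1]]
print(round(cronbach_alpha(responses), 2))
```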

Commonly measured phenomena in survey research include frequency, quantity, feelings, evaluations, satisfaction, and agreement.16 Authors should provide sufficient detail for reviewers to be able to discern that the items and response options are congruent and appropriate for the variables being measured. For instance, a reviewer would question an item asking about the frequency of a symptom with the response options ranging from “excellent” to “poor.” In an extensive review article, Desselle provides an overview of the construction, implementation, and analysis of summated rating attitude scales.7

Pretesting

Pretesting is critical because it identifies ambiguous questions or wording, unclear instructions, and other problems with the instrument prior to widespread dissemination, thereby providing valuable information about potential threats to reliability and validity before data collection begins. In volumes 69 and 70 of the Journal, only 8/20 survey research papers (40%) reported that pretesting of the survey instrument was conducted. Authors should clearly describe how a survey instrument was pretested. While pretesting is often conducted with a focus group of peers or others similar to the intended subjects, cognitive interviewing is becoming increasingly important in the development and testing of questionnaires as a way to explore how members of the target population understand, mentally process, and respond to the items on a questionnaire.17,18 Cognitive testing consists of both verbal probing by the interviewer (eg, “What does the response ‘some of the time’ mean to you?”) and think-aloud interviewing, in which the interviewer asks respondents to verbalize whatever comes to mind as they answer the question.16 This technique helps determine whether respondents are interpreting the questions and the response sets as intended by the questionnaire developers. If done with a sufficient number of subjects, the cognitive interviewing process also provides the opportunity to fulfill some of the roles of a pilot test, in which length, flow, ease of administration, ease of response, and acceptability to respondents can be assessed.19

Quality Control Measures

The article should describe in the methods section whether procedures such as omit or skip patterns (procedures that direct respondents to answer only those items relevant to them) were used on the survey instrument. The article should also describe whether a code book was used for data entry and organization and what data verification procedures were used, for example spot checking a random 10% of data entries back to the original survey instruments. Outliers should be verified and the procedure for handling missing data should be explained.
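
A minimal sketch of the verification step described above, under assumed file and variable names (entered_data.csv and respondent_id, q1-q3 are hypothetical): draw a random 10% of entered records for re-checking against the original instruments, and flag records with missing values so the handling rule can be applied and reported.

```python
import csv
import random

REQUIRED_FIELDS = ["respondent_id", "q1", "q2", "q3"]    # hypothetical variable names

with open("entered_data.csv", newline="") as f:          # hypothetical data file
    records = list(csv.DictReader(f))

# Spot check: re-verify a random 10% of entries against the original instruments.
to_verify = random.sample(records, max(1, len(records) // 10))
print("Re-check respondent IDs:", [r["respondent_id"] for r in to_verify])

# Flag incomplete records so the missing-data rule (eg, exclusion, imputation)
# can be applied consistently and described in the report.
incomplete = [r["respondent_id"] for r in records
              if any(not r.get(field) for field in REQUIRED_FIELDS)]
print("Records with missing items:", incomplete)
```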

Response Rates

In general, response rate can be defined as the number of respondents divided by the number of eligible subjects in the sample. A review of survey response rates reported in the professional literature found that over a quarter of the articles audited failed to define response rate.20 As stated by Johnson and Owens, “when a ‘response rate’ is given with no definition, it can mean anything, particularly in the absence of any additional information regarding sample disposition.”20 Hence, of equal importance to the response rate itself is transparency in its reporting. As with the CONSORT guidelines for randomized controlled trials, the flow of study subjects from initial sample selection and contact through study completion and analysis should be provided.21 Drop-out or exclusion for any reason should be documented and every individual in the study sample should be accounted for clearly. In addition, there may be a need to distinguish between the overall response rate and item-level response rates. Very low response rates for individual items on a questionnaire can be problematic, particularly if they represent important study variables.
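
Because the term can be computed in more than one way, a transparent report defines the response rate explicitly and accounts for every sampled individual. The sketch below illustrates one common (but not universal) convention, removing ineligible and undeliverable cases from the denominator; the disposition counts are invented for illustration.

```python
# Hypothetical sample disposition: every sampled individual falls into exactly one category.
disposition = {
    "completed": 412,
    "partial": 23,
    "refused": 61,
    "no_contact": 180,
    "ineligible": 14,      # eg, no longer a member of the target population
    "undeliverable": 37,   # eg, bad postal or e-mail address
}

sampled = sum(disposition.values())
eligible = sampled - disposition["ineligible"] - disposition["undeliverable"]
response_rate = disposition["completed"] / eligible

print(f"Sampled: {sampled}, eligible: {eligible}, response rate: {response_rate:.1%}")
```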

Fowler states that there is no agreed-upon standard for acceptable response rates; however, he indicates that some federal funding agencies ask that survey procedures be used that are likely to result in a response rate of over 75%.3 Bailey also asserted that the minimal acceptable response rate was 75%.22 Schutt indicated that below 60% was unacceptable, but Babbie stated that a 50% response rate was adequate.23,24 As noted in the Canadian Medical Association Journal's editorial policy, “Except for in unusual circumstances, surveys are not considered for publication in CMAJ if the response rate is less than 60% of eligible participants.”10 Fowler states that, “…one occasionally will see reports of mail surveys in which 5% to 20% of the selected sample responded. In such instances, the final sample has little relationship to the original sampling process; those responding are essentially self-selected. It is very unlikely that such procedures will provide any credible statistics about the characteristics of the population as a whole.”3 Although the literature does not reflect agreement on a minimum acceptable response rate, there is general consensus that at least half of the sample should have completed the survey instrument. In volumes 69 and 70 of the Journal, 7/20 survey research papers (35%) had response rates less than 30%, 6/20 (30%) had response rates between 31% and 60%, and 7/20 (35%) had response rates of 61% or greater. Of the 13 survey research articles with less than a 60% response rate, 8/13 (61.5%) mentioned the possibility of response bias.

The lower the response rate, the higher the likelihood of response bias or nonresponse error.4,25 “Nonresponse error occurs when a significant number of subjects in the sample do not respond to the survey and when they differ from respondents in a way that influences, or could influence, the results.”26 Response bias stems from the survey respondents being somehow different from the nonrespondents and, therefore, not representative of the target population. The article should address both follow-up procedures (timing, method, and quantity) and response rate. While large sample sizes are often deemed desirable, they must be tempered by the consideration that low response rates are more damaging to the credibility of results than a small sample.12 Most of the time, response bias is very hard to rule out due to lack of sufficient information regarding the nonrespondents. Therefore, it is imperative that researchers design their survey method to optimize response rates.2,27 To be credible, published survey research must meet acceptable levels of scientific rigor, particularly in regard to response rate transparency and the representativeness or generalizability of the study's results.

Statistical, Analytic, and Reporting Techniques

As noted in the Journal's Instructions to Reviewers, there should be a determination of whether the appropriate statistical techniques were used. The article should indicate what statistical package was used and which statistical technique was applied to which variables. Decisions must be made as to how data will be presented, for example, using a pie chart to provide simple summaries of data but not to present linear or relational data.28 The authors should provide sufficient detail to allow reviewers to match each hypothesis with the relevant statistical analysis. In addition, if the questionnaire included qualitative components (eg, open-ended questions), a thorough description should be provided as to how and by whom the textual responses were coded for analysis.
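
As a small, hypothetical illustration of matching a hypothesis to an analysis: a hypothesis that a categorical response differs between two subgroups might be paired with a chi-square test of independence. The counts below and the choice of test are assumptions for illustration only, not a prescription.

```python
from scipy.stats import chi2_contingency

# Hypothetical cross-tabulation: rows are subgroups, columns are response
# categories (agree, neutral, disagree).
observed = [[48, 20, 12],
            [30, 25, 25]]

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi-square = {chi2:.2f}, df = {dof}, p = {p_value:.3f}")
```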

Human Subjects Considerations

Even though most journals now require authors to indicate Institutional Review Board (IRB) compliance, there are still many examples of requests to participate, particularly in web-based or e-mail data collection modes, that have obviously not been subjected to IRB scrutiny. Evidence of this includes insufficient information in the invitation to participate (eg, estimates of time to complete, perceived risks and benefits), “mandatory” items (thereby violating subjects' right to refuse to answer any or all items), and use of listservs for “quick and dirty” data gathering when the ultimate intent is to disseminate the findings. The authors should explicitly state which IRB granted approval, the IRB designation received (eg, exempt, expedited), and how consent was obtained.

Transparency

The authors should fully specify their methods and report in sufficient detail such that another researcher could replicate the study. This consideration permeates the previous 9 sections. For example, an offer to provide the instrument upon request does not substitute for the provision of reliability and validity evidence in the article itself. Another example related to transparency of methods would be the description of the mode of administration. In volume 69 of the Journal, 3/10 (30%) survey research articles used mixed survey methods (both Internet and first-class mail) but did not provide sufficient detail as to what was collected by each respective method. Also, in volume 69 of the Journal, 1 survey research article simply used the word “sent” without providing any information as to how the instrument was delivered.

Additional Considerations Regarding Internet or Web-based Surveys

The use of Internet or e-mail based surveys has grown in popularity as a less expensive and more efficient method of conducting survey research.2,29-32 The apparent ease of data collection can give the impression that survey research is easily conducted; however, the good principles established for traditional mail surveys still apply, and the mode of administration does not change what must be done before administration. One potential problem associated with web-based surveys is that they can be forwarded to inappropriate or unintended subjects.31 Web-based surveys also suffer from potential problems with undeliverable e-mails due to outdated listservs or incorrect e-mail addresses, which affects the calculation of the response rate and the determination of the most appropriate denominator.2,30,31 The authors should describe specifically how the survey instrument was disseminated (eg, e-mail with a link to the survey) and what web-based survey tool was used.

SUMMARY

We have provided 10 guiding questions and recommendations regarding what we consider to be best practices for survey research reports. Although our recommendations are not minimal standards for manuscripts submitted to the Journal, we hope that they provide guidance that will result in an enhancement of the quality of published reports of questionnaire-based survey research. It is important for both researchers/authors and reviewers to seriously consider the rigor that needs to be applied in the design, conduct, and reporting of survey research so that the reported findings credibly reflect the target population and are a true contribution to the scientific literature.

ACKNOWLEDGEMENT

The ideas expressed in this manuscript are those of the authors and do not represent the position of the American Association of Colleges of Pharmacy.

Appendix 1. Criteria for Survey Research Reports

  1. Was there a clearly defined research question?

    1. Are the study objectives clearly identified?

    2. Did the authors consider alternatives to using a survey technique to collect information? (ie, did they justify using survey research methods?)

      • AACP databases

      • Readily available literature

      • Other professional organizations

  2. Did the authors select samples that well represent the population to be studied?

    1. What sampling approaches were used?

    2. Did the authors provide a description of how coverage and sampling error were minimized?

    3. Did the authors describe the process to estimate the necessary sample size?

  3. Did the authors use designs that balance costs with errors? (eg, strive for a census with inadequate follow-up versus smaller sample but aggressive follow-up)

  4. Did the authors describe the research instrument?

    1. Was evidence provided regarding the reliability and validity of an existing instrument?

    2. How was a new instrument developed and assessed for reliability and validity?

    3. Was the scoring scheme for the instrument sufficiently described?

  5. Was the instrument pretested?

    1. Was the procedure used to pretest the instrument described?

  6. Were quality control measures described?

    1. Was a code book used?

    2. Did the authors discuss what techniques were used for verifying data entry?

  7. Was the response rate sufficient to enable generalizing the results to the target population?

    1. What was the response rate?

    2. How was response rate calculated?

    3. Were follow-ups planned for and used?

    4. Do authors address potential nonresponse bias?

  8. Were the statistical, analytic, and reporting techniques appropriate to the data collected?

  9. Was evidence of ethical treatment of human subjects provided?

    1. Did the authors list which IRB they received approval from?

    2. Did the authors explain how consent was obtained?

  10. Were the authors transparent to ensure evaluation and replication?

    1. Was evidence for validity provided?

    2. Was evidence of reliability provided?

    3. Were results generalizable?

    4. Is replication possible given information provided?

REFERENCES

1. Kerlinger FN, Lee HB. Foundations of Behavioral Research. 4th ed. Orlando, FL: Harcourt College Publishers; 2000. p. 599.
2. Dillman DA. Mail and Internet Surveys: The Tailored Design Method. 2nd ed. Hoboken, NJ: John Wiley & Sons; 2007. (2007 Update).
3. Fowler FJ. Survey Research Methods. 3rd ed. Thousand Oaks, CA: Sage Publications; 2002.
4. Salant P, Dillman DA. How to Conduct Your Own Survey. New York, NY: John Wiley & Sons, Inc; 1994.
5. Aday LA, Cornelius LJ. Designing and Conducting Health Surveys: A Comprehensive Guide. 3rd ed. San Francisco, CA: Jossey-Bass; 2006.
6. Stratton TP. Effectiveness of follow-up techniques with nonresponders in mail surveys. Am J Pharm Educ. 1996;60:165–72.
7. Desselle SP. Construction, implementation, and analysis of summated rating attitude scales. Am J Pharm Educ. 2005;69:Article 97.
8. Best Practices for Survey and Public Opinion Research. American Association for Public Opinion Research. Available at: http://www.aapor.org/best_practices_for_survey_and_public_opinion_research.asp. Accessed March 8, 2007.
9. Kelley K, Clark B, Brown V, Sitzia J. Good practices in the conduct and reporting of survey research. Int J Qual Health Care. 2003;15:261–6. doi: 10.1093/intqhc/mzg031.
10. Huston P. Reporting on surveys: information for authors and peer reviewers. Can Med Assoc J. 1996;154:1695–8.
11. Scheuren F. What is a survey? American Statistical Association. Available at: http://www.whatisasurvey.info/. Accessed March 8, 2007.
12. Fink A, Kosecoff J. How to Conduct Surveys: A Step by Step Guide. Thousand Oaks, CA: Sage Publications, Inc; 1998.
13. Henry GT. Practical Sampling. Newbury Park, CA: Sage Publications; 1990.
14. Cohen J. Statistical Power Analysis for the Behavioral Sciences. 2nd ed. Hillsdale, NJ: Lawrence Erlbaum Associates, Inc; 1988.
15. Streiner DL, Norman GR. Health Measurement Scales: A Practical Guide to Their Development and Use. 3rd ed. New York, NY: Oxford University Press; 2003.
16. Fowler FJ. Improving Survey Questions: Design and Evaluation. Thousand Oaks, CA: Sage Publications; 1995.
17. Willis GB, DeMaio TJ, Harris-Kojetin B. Is the bandwagon headed to the methodological promised land? Evaluating the validity of cognitive interviewing techniques. In: Sirken MG, et al., editors. Cognition and Survey Research. New York, NY: John Wiley & Sons; 1999. pp. 133–53.
18. Willis GB. Cognitive Interviewing: A Tool for Improving Questionnaire Design. Thousand Oaks, CA: Sage Publications; 2005.
19. Collins D. Pretesting survey instruments: an overview of cognitive methods. Qual Life Res. 2003;12:229–38. doi: 10.1023/a:1023254226592.
20. Johnson T, Owens L. Survey response rate reporting in the professional literature. Paper presented at the 58th Annual Meeting of the American Association for Public Opinion Research, Nashville, May 2003. Available at: http://www.srl.uic.edu/publist/Conference/rr_reporting.pdf. Accessed March 6, 2007.
21. Moher D, Schulz KF, Altman DG, for the CONSORT Group. The CONSORT statement: revised recommendations for improving the quality of reports of parallel-group randomized trials. Ann Intern Med. 2001;134:657–62. doi: 10.7326/0003-4819-134-8-200104170-00011.
22. Bailey KD. Methods of Social Research. 3rd ed. New York, NY: Free Press; 1987.
23. Schutt RK. Investigating the Social World: The Process and Practice of Research. 2nd ed. Thousand Oaks, CA: Pine Forge Press; 1999.
24. Babbie E. Survey Research Methods. Belmont, CA: Wadsworth; 1990.
25. Hager MA, Wilson S, Pollak TH, Rooney PR. Response rates for mail surveys of nonprofit organizations: a review and empirical test. Nonprofit Voluntary Sector Q. 2003;32:252–267.
26. Harrison DL, Draugalis JR. Evaluating the results of mail survey research. J Am Pharm Assoc. 1997;NS37:662–6. doi: 10.1016/s1086-5802(16)30271-6.
27. Dillman DA. Mail and Telephone Surveys: The Total Design Method. New York, NY: John Wiley; 1978.
28. Boynton PM. Administering, analysing, and reporting your questionnaire. BMJ. 2004;328:1372–5. doi: 10.1136/bmj.328.7452.1372.
29. Schmidt WC. World-wide web survey research: benefits, potential problems, and solutions. Behavior Research Methods, Instruments & Computers. 1997;29:274–9.
30. Sills SJ, Song C. Innovations in survey research: an application of web-based surveys. Social Science Computer Review. 2002;20:22–30.
31. Zhang Y. Using the internet for survey research: a case study. Journal of the American Society for Information Science. 1999;51:57–68.
32. Cook C, Heath F, Thompson RL. A meta-analysis of response rates in web- or internet-based surveys. Educational and Psychological Measurement. 2000;60:821–36.
