American Journal of Pharmaceutical Education. 2013 Feb 12;77(1):4. doi: 10.5688/ajpe7714

The Importance of Survey Research Standards

Jack E. Fincham, JoLaine R. Draugalis
PMCID: PMC3578336  PMID: 23460755

Abstract

Every discipline within the research enterprise has instituted guidelines and templates for research endeavors and the subsequent publication of findings, with the ultimate result being greater quality and acceptance by researchers within and across disciplines. These efforts are by nature ongoing, as they should be. Such enhancements and guideline development have occurred in the basic science disciplines, clinical pharmacy, and pharmacy administration, relevant to the subsequent scholarly publication of research findings. Specific research endeavors have included bench research, clinical trials and randomized clinical trials, meta-analyses, outcomes research, and large-scale database analyses. A similar need for quality and standardization exists for survey research and scholarship. The purpose of this paper is to clarify why such standards are crucial for the Journal and our academy.

INTRODUCTION

In the Research Standards section of Instructions to Authors (http://archive.ajpe.org/instructions.asp), the Journal provides guidelines for authors to consider when preparing a manuscript for submission to the Journal. These standards are important for a number of reasons and may be seen as unique and groundbreaking among academic health professions journals. This paper is intended to add clarity to this sometimes controversial set of Journal guidelines.

Whether referring to sampling texts such as Cochran’s Sampling Techniques, 3rd edition,1 or Kish’s Survey Sampling,2 or using guidelines or tables based on these classics as found in Krejcie and Morgan,3 Salant and Dillman,4 Bartlett and colleagues,5 and Dillman,6 the researcher will find that small populations require data from a large proportion of their members (ie, high response rates) to confidently generalize results because of the potential for sampling error. The recommended minimum sample size for a study depends upon the desired confidence level (typically 95%) and how varied the population is with respect to the variable(s) of interest.

Using the conservative approach of a 50/50 split (in other words, an equal chance of one response versus another) on a dichotomous variable of interest at the conventional 95% confidence level, a population of 100 would require a sample of 80 to ensure a sampling error of no more than +/- 5%. For a population of 100, if a response rate of 50% were achieved for an item with a simple yes/no answer (eg, “Do you have a full-time biostatistician employed by the college?”) and responses were evenly split (50% yes and 50% no), it would not be prudent to extrapolate those findings to the overarching population (100), because the range of possible true percentages would be 25%-75% (that is, all, some, or none of the 50 nonrespondents could have a biostatistician at their college).7(p55)
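The 25%-75% range follows directly from assigning the nonrespondents every possible answer. A minimal sketch of that bounds reasoning, written here in Python for illustration (the function name and figures are ours, not from the cited sources):

```python
def nonresponse_bounds(n_yes, n_respondents, population):
    """Range of possible true 'yes' percentages in the population when
    every nonrespondent could be either 'yes' or 'no'."""
    nonrespondents = population - n_respondents
    low = 100 * n_yes / population                      # all nonrespondents answer 'no'
    high = 100 * (n_yes + nonrespondents) / population  # all nonrespondents answer 'yes'
    return low, high

# 50 of 100 respond, answers evenly split: 25 yes, 25 no
print(nonresponse_bounds(25, 50, 100))  # (25.0, 75.0)
```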

For a variable with a smaller standard deviation in response to a survey item, say an 80/20 split (eg, 80% agree, 20% disagree), a sample size of only 71 (rather than 80) would be required to maintain the same precision as in the previous example, ie, +/- 5% sampling error at the 95% confidence level. However, according to Salant and Dillman,4(p55) “unless we know the split ahead of time, it is best to be conservative and use 50/50.” Continuous data sets may not require as many data points; however, “if a categorical variable will play a primary role in data analyses…the categorical sample size formulas should be used.”5(p46) Estimating the sample size required for a continuous variable would necessitate a measure of variability in the population, which may not be easily discerned, thus “the sample size for the proportion is frequently preferred.”8(p4) As well, “the effect of nonresponse on one variable can be very different than for others in the same survey.”7(p54)
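The sample sizes of 80 and 71 quoted above follow from the standard formula for estimating a proportion, with the finite population correction applied (see, eg, Cochran1). A minimal Python sketch, assuming z = 1.96 for the 95% confidence level and rounding to the nearest whole subject (the function name is ours):

```python
def required_sample(population, p=0.5, margin=0.05, z=1.96):
    """Sample size needed to estimate a proportion p within +/- margin
    at the confidence level implied by z (1.96 for 95%)."""
    n0 = (z ** 2) * p * (1 - p) / margin ** 2  # infinite-population estimate
    n = n0 / (1 + (n0 - 1) / population)       # finite population correction
    return round(n)

print(required_sample(100, p=0.5))  # 80 -- conservative 50/50 split
print(required_sample(100, p=0.8))  # 71 -- 80/20 split
```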

Others have simply called for a census in small populations, again necessitating high response rates.8,9 These considerations supported the rationale for the expectations set forth in the Viewpoint by Fincham.10

There are 129 doctor of pharmacy degree programs in academic pharmacy, each in 1 of 3 classifications of accreditation: 109 with full accreditation, 15 with candidate status, and 5 with precandidate status.11 The recommended sample size for N=129 at +/- 5% sampling error and the 95% confidence level is 97, or a 75% response rate, for a 50/50 split. Modeling on a variable with an 80/20 split (ie, less variability in the population) would result in a recommended sample size of 85, or a 66% response rate. Because of the increase in the number of colleges and schools of pharmacy in the United States, the Journal will now accept a 70% response rate threshold for those survey projects collecting data on multiple variable types with the intent of generalizing results to the entire population.
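Continuing the illustrative sketch above, the same required_sample function reproduces these figures:

```python
print(required_sample(129, p=0.5))  # 97 -> 97/129, about a 75% response rate
print(required_sample(129, p=0.8))  # 85 -> 85/129, about a 66% response rate
```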

The paper by Draugalis and Plaza12 provides several examples of the importance of striving for a census and of how much confidence readers could have in a published study with a less than optimal response rate, including the annual AACP Faculty Salary Survey. As an example of the potential effects of nonresponse on specific variables in a study, consider the following from a published study on career planning and preparation strategies of pharmacy deans.13 The subjects were 53 “new” deans with less than 5 years’ experience and 40 “experienced” deans previously in the database with greater than 5 years’ experience, for a cohort of 93 sitting permanent deans (ie, acting and interim deans were excluded) in 2009. Descriptive findings were presented for the total cohort as well as for separate groups on a number of variables when contrasts were desired. “Newly named deans spent an average of 17.1 +/- 8.7 years in the professoriate prior to assuming their first deanship, compared with established deans who had spent an average of 19.0 +/- 5.1 years (p = 0.006).” If just 3 of the new dean respondents with no or few years in the professoriate had not participated in the study, the mean would have increased to 18.1, the comparison would no longer have been significant, and an important finding would have been missed. In the career path ladder variable, 9 of the 53 new deans fell into the nontraditional category. Had any of these subjects been nonrespondents, and increasingly so the closer the number came to all 9, the descriptive findings would have been skewed and longitudinal comparisons obscured.
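A small numerical sketch of that sensitivity; the 3 low year-counts below are hypothetical values of ours, chosen only so that the published group mean of 17.1 shifts to approximately 18.1 when those respondents are removed:

```python
n_all, mean_all = 53, 17.1     # all new-dean respondents (published figures)
low_values = [0.0, 0.3, 1.0]   # hypothetical respondents with few years

total_years = n_all * mean_all                        # 906.3 total years
remaining = total_years - sum(low_values)             # drop the 3 low respondents
mean_without = remaining / (n_all - len(low_values))  # mean of the other 50
print(round(mean_without, 1))  # 18.1 -- the contrast with 19.0 loses significance
```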

High response rates to a research survey do not ensure the validity of the findings, as there are other potential sources of error to consider. While attaining a high response rate is a necessary first step, it is not sufficient in and of itself. The specific research question determines the acceptable research methods. For example, in some inquiries, a survey of all colleges and schools of pharmacy may not be necessary or desirable. Depending on the research question, interviews or focus groups may be useful, but the results cannot be generalized to all institutions. Some projects may be intended to gather information only from certain types of institutions, such as private entities or programs affiliated with a health sciences center. A demonstration project with descriptive findings may also be useful to others; in such a case, the argument is for methodological development, with the method, rather than the specific institutional findings, being generalizable and useful to others. Also, the accepted tools of modeling and decision analytic methods may be appropriate alternatives.

IMPORTANCE OF RESEARCH GUIDELINES AND STANDARDS

In several other research arenas, standards for research methods have been proposed, implemented, and well accepted, and other journals have set standards for the research and publications appearing in them. In the 1990s, an international collaboration set in motion a process whereby research standards were developed to enhance the quality and validity of results from clinical trials. A thorough scrutiny of refereed journals accessed through MEDLINE, Embase, Cochrane Central, and associated reference lists was undertaken, and experts then developed the CONSORT checklist, which has subsequently been shown to improve the methodology, quality, and external validity of reports of randomized clinical trials.14,15

Similarly, a checklist has been published for qualitative research in hopes of promoting explicit, comprehensive reporting of such research.16 A Canadian group has proposed developing a survey reporting guideline for health research beginning in 2013 (David Moher, Director, Evidence-based Practice Centre, University of Ottawa, Canada, personal communication, May 17, 2012).

The EQUATOR network (the resource center for good reporting of health research studies) has also been developed to make recommendations addressing the “growing evidence demonstrating widespread deficiencies in the reporting of health research studies.”17 The EQUATOR Web site provides a list of collected tools and guidelines available for assessing health research issues (www.equator-network.org).

Poor reporting guidance leads to deficient reporting of outcomes in written summaries of research. Bennett and colleagues have summarized this problem as follows: “There is limited guidance and no consensus regarding the optimal reporting of survey research. As in other areas of research poor reporting compromises both transparency and reliability, which are fundamental tenets of research.”18(p8)

In addressing their concerns over established response rates, Mészáros and colleagues19 point to the Journal of Dental Education and Academic Medicine as publications similar to the Journal that do not specify response rate criteria. In fact, the issue of response rates has been addressed repeatedly and specifically in these journals. As early as 1983, Creswell and Kuster,20 writing in the Journal of Dental Education, noted that at that juncture, 40% of papers published over the previous 5 years were survey studies. Thirty years ago, they called for increased diligence in assessing appropriate sample sizes, adequate attention to survey response rates, and greater effort in improving the quality of survey-related research in the Journal of Dental Education.

In 2009, in an excellent analysis of survey research issues in the Journal of Dental Education, Chambers and Licari suggested: “Evidence that is not grounded in theory is just data. There is a natural pull on the authors of surveys to interpret their findings as supporting policies or positions they favor.”21(p288) The authors also speak to the importance of adequate response rates: “…that the precision of any claim based on a survey is strongly affected by sample size.”21(p294) The authors point to sample saturation as a technique to reduce the impact of bias in surveys. This technique directly addresses the response rate issue: the larger the sample size and the higher the response rate, the more accuracy can be attributed to the study results. A built-in assumption is that missing data, even of unknown character, adversely affect the conclusions of an analysis. As the response rate rises, it becomes increasingly unlikely that contrary results from the remaining nonrespondents could meaningfully change the findings obtained from the data in hand.

Response rates matter a great deal, and this point has been made in the Journal of Dental Education over a 30-year period. The issue is not that the Journal of Dental Education has chosen not to develop standards for survey research papers, but rather that the American Journal of Pharmaceutical Education has taken a leadership role in this regard.

Although it is true that Academic Medicine does not explicitly list an acceptable response rate, the October 2011 issue provided summary guidance for survey research published in their journal.22 In this excellent summary of good research practices relative to survey design and reporting, 5 references are listed.23-27 These seminal references provide explicit information regarding sampling, research design, response rates and associated problems with biases, and acceptability indices in other components of survey research. In one of these “gold standard” references, Krosnick notes that: “It is important to recognize the inherent limitations of nonprobability sampling methods and to draw conclusions about populations or differences between populations tentatively when nonprobability sampling methods are used.”25(p541) This point becomes even more significant when low response rates are achieved in nonprobability samples.

GUIDELINES AND STANDARDS AS A QUALITY CONTROL MECHANISM

Setting standards and suggesting guidelines is in no way a move on the part of the Journal editors to stifle research or unfairly limit the reporting of research findings; nor are these standards intended in any manner to arbitrarily curtail creativity. Many fine survey research papers are published in the Journal and contribute to the academy. There are simply no published studies that have pointed out a negative impact of such standard-setting processes on research endeavors in clinical, health services, or sociological research.

REFERENCES

1. Cochran WG. Sampling Techniques. 3rd ed. New York, NY: John Wiley & Sons, Inc; 1977.
2. Kish L. Survey Sampling. New York, NY: John Wiley & Sons, Inc; 1965.
3. Krejcie RV, Morgan DW. Determining sample size for research activities. Educ Psychol Meas. 1970;30:607–610.
4. Salant P, Dillman DA. How to Conduct Your Own Survey. New York, NY: John Wiley & Sons, Inc; 1994.
5. Bartlett JE, Kotrlik JW, Higgins CC. Organizational research: determining appropriate sample size in survey research. Inf Technol Learn Perform J. 2001;19(1):43–50.
6. Dillman DA. Mail and Internet Surveys: The Tailored Design Method. 2nd ed. Hoboken, NJ: John Wiley & Sons, Inc; 2007.
7. Fowler FJ. Survey Research Methods. 4th ed. Thousand Oaks, CA: Sage Publications, Inc; 2009.
8. Israel GD. Determining sample size. 2011. http://edis.ifas.ufl.edu/pd006. Accessed April 4.
9. Morris E. Sampling from small populations. 2012. http://uregina.ca/~morrisev/Sociology/Sampling%20from%20small%20populations.htm. Accessed August 13.
10. Fincham JE. Response rates and responsiveness for surveys, standards, and the Journal. Am J Pharm Educ. 2008;72(2):Article 43. doi: 10.5688/aj720243.
11. Accreditation Council for Pharmacy Education. ACPE Update, August 2012. https://www.acpe-accredit.org/pdf/ACPEAugustNewsletter.pdf. Accessed August 14.
12. Draugalis JR, Plaza CM. Best practices for survey research reports revisited: implications of target population, probability sampling, and response rate. Am J Pharm Educ. 2009;73(8):Article 142. doi: 10.5688/aj7308142.
13. Draugalis JR, Plaza CM. A 20-year perspective on preparation strategies and career planning of pharmacy deans. Am J Pharm Educ. 2010;74(9):Article 162. doi: 10.5688/aj7409162.
14. Improving the quality of reports of parallel group randomized trials. Lancet. 2001;357(9263):1191–1194.
15. Plint AC, Moher D, Morrison A, Schulz K, Altman DG, Hill C, Gaboury I. Does the CONSORT checklist improve the quality of reports of randomised controlled trials? A systematic review. Med J Aust. 2006;185(5):263–267. doi: 10.5694/j.1326-5377.2006.tb00557.x.
16. Tong A, Sainsbury P, Craig J. Consolidated criteria for reporting qualitative research (COREQ): a 32-item checklist for interviews and focus groups. Int J Qual Health Care. 2007;19(6):349–357. doi: 10.1093/intqhc/mzm042.
17. Simera I, Moher D, Hoey J, Schulz KF, Altman DG. A catalogue of reporting guidelines for health research. Eur J Clin Invest. 2010;40(1):35–53. doi: 10.1111/j.1365-2362.2009.02234.x.
18. Bennett C, Khangura S, Brehaut JC, et al. Reporting guidelines for survey research: an analysis of published guidance and reporting practices. PLoS Med. 2011;8(8):1–10. doi: 10.1371/journal.pmed.1001069.
19. Mészáros K, Barnett MJ, Lenth RV, Knapp KK. Pharmacy school survey standards revisited. Am J Pharm Educ. 2013;77(1):Article 3. doi: 10.5688/ajpe7713.
20. Creswell JW, Kuster CG. Survey practices in dental education research. J Dent Educ. 1983;47(10):676–680.
21. Chambers DW, Licari FW. Issues in the interpretation and reporting of surveys in dental education. J Dent Educ. 2009;73(3):287–302.
22. AM Last Page: avoiding five common pitfalls of survey design. Acad Med. 2011;86(10):1327. doi: 10.1097/ACM.0b013e31822f77cc.
23. Dillman DA, Smyth JD, Christian LM. Internet, Mail, and Mixed-Mode Surveys: The Tailored Design Method. 3rd ed. New York, NY: John Wiley & Sons; 2009.
24. Schwarz N. Self-reports: how the questions shape the answers. Am Psychol. 1999;54(2):93–105.
25. Krosnick JA. Survey research. Annu Rev Psychol. 1999;50:537–567. doi: 10.1146/annurev.psych.50.1.537.
26. Tourangeau R, Rips LJ, Rasinski KA. The Psychology of Survey Response. New York, NY: Cambridge University Press; 2000.
27. Weng L. Impact of the number of response categories and anchor labels on coefficient alpha and test-retest reliability. Educ Psychol Meas. 2004;64(6):956–972.
