American Journal of Pharmaceutical Education. 2013 Feb 12;77(1):3. doi: 10.5688/ajpe7713

Pharmacy School Survey Standards Revisited

Károly Mészáros,a Mitchell J Barnett,a Russell V Lenth,b Katherine K Knappa
PMCID: PMC3578335 PMID: 23459404

Abstract

In a series of 3 papers on survey practices published from 2008 to 2009, the editors of the American Journal of Pharmaceutical Education presented guidelines for reporting survey research, and these criteria are reflected in the Author Instructions provided on the Journal’s Web site. This paper discusses the relevance of these criteria for publication of survey research regarding pharmacy colleges and schools. In addition, observations are offered about surveying small "universes," such as the one comprising US colleges and schools of pharmacy. The reason for revisiting this issue is the authors’ concern that, despite the best of intentions, overly constraining publication standards might discourage research on US colleges and schools of pharmacy at a time when interest in the growth of colleges and schools, curricular content, clinical education, competence at graduation, and other areas is historically high. In the best traditions of academia, the authors share these observations with the community of pharmacy educators in the hope that the publication standards for survey research about US pharmacy schools will encourage investigators to collect and disseminate valuable information.

Keywords: survey, pharmacy education, sample size, response rate, research

INTRODUCTION

The American Journal of Pharmaceutical Education is the official scholarly publication of the American Association of Colleges of Pharmacy and, as such, is committed to providing accurate and relevant information about colleges and schools of pharmacy. In a series of 3 papers on survey practices published from 2008 to 2009,1-3 the editors of the Journal presented guidelines for survey research reports, which are now referenced in the Journal’s Author Instructions as criteria for publication. The paper by Draugalis and colleagues provided a thoughtful review of good practices and recommendations for the planning, execution, analysis, and reporting of survey research to enhance the quality of published reports.1 Regarding response rates for survey instruments, the authors concluded that “Although the literature does not reflect agreement on a minimum acceptable response rate, there is general consensus that at least half of the sample should have completed the survey instrument.” In the summary, the authors also stated that “…our recommendations are not minimal standards for manuscripts submitted to the Journal.”

The subsequent paper by Fincham2 set forth 2 specific publication standards: “survey reports that are intended to be generalized to all [US] colleges/schools of pharmacy should (1) have a response rate of at least 80%, and (2) demonstrate that the sample includes representation of colleges based on the following factors that are similar to the overall profile of US institutions: public vs. private, geographic location, and university affiliation (standalone, part of a comprehensive university, or part of an academic health center).” The reader was referred to the paper by Draugalis and colleagues1 for a “discussion of the rationale behind the new standards”; however, that paper provided little to no support for the 80% requirement. The 2009 paper by Draugalis and Plaza endorsed the requirement of an 80% response rate for surveys of US colleges and schools of pharmacy.3 Support for this requirement was based solely on a table entitled “Sample Required from a Given Population to be Representative,” adapted from a short paper by Krejcie and Morgan.4 (Since the publication of Fincham’s paper, the number of US colleges and schools of pharmacy has increased from 102 to 120; thus, even the most stringent standard would require a response rate of 77% instead of the declared 80%.)

In this paper, we comment on the relevance of these criteria for publication of survey research regarding pharmacy colleges and schools. In addition, we offer some observations about surveying small universes such as US colleges and schools of pharmacy. The reason for revisiting this issue is the concern that, despite the best of intentions, declared and overly constraining publication standards not only limit the dissemination of information but may also discourage the collection of data on US pharmacy colleges and schools. This would be unfortunate at a time when interest in the growth of colleges and schools, curricular content, clinical education, competence at graduation, and other areas is high. In the best traditions of academia, we share these observations with the community of pharmacy educators in the hope that the publication standards for survey research about US pharmacy colleges and schools will encourage investigators to collect and disseminate valuable information.

SAMPLE SIZE DETERMINATION

All sample size formulas take into account 3 factors: the expected variation of answers to the question(s), the desired precision of the measurement, and the size of the (total) population.5 The formulas yield the number of returned (completed) questionnaires needed for the desired level of precision. These factors in Krejcie’s formula4 are considered below with regard to their relevance to surveying pharmacy colleges and schools.

With respect to the expected variation of answers, the Krejcie equation assumes the greatest possible variation in the population (ie, an expected 50/50 split in response to a categorical, dichotomous item). An example of a categorical, dichotomous item where a 50/50 split might be expected, though not in all populations, is the frequently used question of gender (male or female). Compared with all other types of items, this item requires the highest number of responses to make it statistically valid.5 Thus, the Krejcie formula applies only to the most restrictive, and relatively rare, items in surveying. All other types of items require fewer responses. Even categorical, dichotomous items for which a 50/50 distribution of responses is not expected require fewer responses. For example, an 80/20 distribution requires only 71 responses out of a population of 100, instead of 80 responses.5 An example of a categorical, dichotomous item that is not likely to return a 50/50 distribution of responses is the following: “When was your pharmacy school founded: before 2005 or after 2005?”
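The numbers above follow directly from the Krejcie-Morgan formula, s = X²NP(1−P)/[d²(N−1) + X²P(1−P)], where N is the population size, P the expected proportion, d the margin of error, and X² the chi-square value for 1 degree of freedom at the desired confidence level.4 The following is a minimal sketch of that calculation; the function name and rounding convention are ours, not drawn from the papers under discussion.

```python
import math

def krejcie_morgan(N, p=0.5, d=0.05, chi2=3.841):
    """Returned responses required for a categorical item (Krejcie & Morgan, 1970).

    N    -- population size
    p    -- expected proportion for the dichotomous item (0.5 = maximum variation)
    d    -- desired margin of error (0.05 = plus or minus 5%)
    chi2 -- chi-square value for 1 df at the desired confidence (3.841 ~ 95%)
    """
    n = (chi2 * N * p * (1 - p)) / (d**2 * (N - 1) + chi2 * p * (1 - p))
    return round(n)  # rounded to the nearest integer, as in the published tables

print(krejcie_morgan(100))         # 80 -- the 50/50 worst case for N = 100
print(krejcie_morgan(100, p=0.8))  # 71 -- an 80/20 split needs fewer responses
print(krejcie_morgan(120))         # 92 -- about 77% of today's 120 schools
```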

Furthermore, not all items are categorical. A major class of survey items comprises those exploring continuous variables. For example, the question “How many students are enrolled in the first year class of your program?” will return a series of numbers. For exploring continuous variables, different equations are used to determine the number of required responses.6,7 The main difference is that an estimate of the standard deviation of the continuous variable is used instead of an estimate of variance based on the distribution of categories. An illustration of the different requirements for items with categorical and continuous variables is provided by Bartlett and colleagues in a table that compares “…minimum returned sample size for a given population size for continuous and categorical data.”6 For categorical data, the table includes the same numbers as those found in Krejcie’s table. Using similar parameters of precision and reliability, items exploring continuous data require markedly fewer responses. For example, for a population size of 100, only 55 responses are needed, whereas a categorical, dichotomous item would require 80 responses.
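As a sketch of how the 55-response figure arises, the following assumes the parameters Bartlett and colleagues use for scaled data (a 7-point scale, a margin of error equal to 3% of the scale range, t = 1.96 for 95% confidence, and a standard deviation estimated as the scale range divided by 6), applying Cochran’s formula with the finite-population correction; the function name and parameter defaults are ours.

```python
import math

def cochran_continuous(N, scale_points=7, margin=0.03, t=1.96):
    """Returned responses required for a continuous (scaled) item.

    Follows Cochran (1977) as applied by Bartlett et al. (2001): the standard
    deviation is estimated as scale_points / 6, and the acceptable margin of
    error is margin * scale_points (here, 3% of the scale range).
    """
    s = scale_points / 6                  # estimated standard deviation
    d = margin * scale_points             # margin of error in scale units
    n0 = (t**2 * s**2) / d**2             # required n for a very large population
    return math.ceil(n0 / (1 + n0 / N))   # finite-population correction

print(cochran_continuous(100))  # 55 -- vs 80 for a 50/50 categorical item
```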

We have demonstrated above the narrow scope of the Krejcie equation and the need for different approaches to different types of items (categorical or continuous; debate remains over whether Likert scale items should be treated as continuous or categorical variables8,9). Most strikingly, Krejcie’s approach markedly overestimates the number of responses required for continuous variables. Based on these considerations, we suggest that in most instances it is unnecessary to use the values presented in the Krejcie paper, which require the highest number of responses. Because most survey instruments include a variety of item types, judgments about the adequacy of response rates will be highly dependent on the predominant type of item in the survey instrument. Therefore, it is highly unlikely that a single standard will apply to all survey instruments.

Cochran echoes these suggestions in his consideration of typical survey instruments, which contain a mix of items with continuous and categorical variables.7 He recommends calculating the number of required responses (n) separately for each important item. If the largest n seems impractical, the “standard of precision may be relaxed for certain of the items, in order to permit the use of a smaller n.” This advice, coming from a respected expert in survey research, is worth serious consideration when surveying small populations such as colleges and schools of pharmacy.

The desired precision is a major determinant in sample size equations. The precision used in the Krejcie equation corresponds to a confidence level of 95% with a ±5% margin of error. As with all other preset values of accuracy or significance, this level of precision is merely a matter of convention.10-12 Depending on the nature of the question(s), greater or lesser precision could be acceptable. For example, information that more than half of US pharmacy colleges and schools are planning to expand class sizes is well worth knowing, even if the precision (confidence level) of this information is only 90%, or even 85%.
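To illustrate how relaxing the confidence level lowers the required number of responses, the lines below reuse the krejcie_morgan() sketch from above on the current universe of 120 schools; the chi2 arguments are simply the squared two-sided standard normal critical values, an assumption of ours rather than a value from the papers discussed.

```python
# Reusing krejcie_morgan() from the earlier sketch. The chi2 argument is the
# squared two-sided critical value: 1.645**2 ~ 2.706 (90%), 1.440**2 ~ 2.072 (85%).
print(krejcie_morgan(120, chi2=3.841))  # 92 responses (~77%) at 95% confidence
print(krejcie_morgan(120, chi2=2.706))  # 83 responses (~69%) at 90% confidence
print(krejcie_morgan(120, chi2=2.072))  # 76 responses (~63%) at 85% confidence
```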

As to the size of the population, the number of US pharmacy colleges and schools is presently about 120, which is a small universe. Therefore, unlike surveys of larger populations, sampling is generally not a consideration. Survey research of pharmacy colleges and schools typically takes the form of a census where all members of this universe are queried. Conducting a census of this population is fairly easy because the American Association of Colleges of Pharmacy maintains accurate e-mail lists and mailing label databases and makes these available to researchers.

Surveyors of large populations may be able to estimate the nonresponse rate and adjust the target sample size upward accordingly.6 No such luxury is to be had when surveying pharmacy colleges and schools, where the investigator is already surveying the entire universe and cannot adjust the sampled population upward. The result may be a less-than-optimal response rate, even when follow-up methods5 or incentives have been applied. In this case, the ability of researcher(s) to demonstrate that the responding population shares relevant characteristics with the universe is critical.
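A hypothetical illustration of the upward adjustment available only to surveyors of large populations (the figures below are ours, invented for illustration): with an expected response rate of 60%, the surveyor simply issues more invitations, an option a census of 120 schools cannot exercise.

```python
import math

required_completes = 92        # eg, the Krejcie value for N = 120 at 95% confidence
expected_response_rate = 0.60  # hypothetical, estimated from prior experience

invitations = math.ceil(required_completes / expected_response_rate)
print(invitations)  # 154 -- more invitations than the 120-school universe contains
```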

Regarding the above issues, we offer the following suggestions. First, in judging the adequacy of a response rate, the mix of item types should be taken into consideration, with the Krejcie values defining the upper limit of required responses. Second, editors and reviewers should take into consideration the difficulties of achieving adequate response rates from a small population in which increasing the sample size is not an option. Adopting these suggestions does not reduce the rigor or quality of decisions about publication because a second requirement, that the sample be a true representation of all colleges and schools, must also be met.

REPRESENTATION AND NONRESPONSE BIAS

The second standard acknowledges that an established response rate alone is not sufficient to conclude that findings from a survey can be generalized to the universe.2 This standard requires that the responding colleges and schools be similar to all US pharmacy colleges and schools with respect to the following characteristics: “public-private, geographic location, and university affiliation (stand-alone, part of a comprehensive university, or part of an academic center).”2 While every researcher has the obligation to establish that a response set is representative of the universe being investigated, this particular list of characteristics is neither exhaustive nor necessarily relevant to all surveys. The characteristics used to establish representativeness should vary depending on the questions and the purpose of the survey. For example, class sizes may be similar across geographic locations but are typically smaller in new schools than in older ones. Thus, in a survey on class size, the ratio of old schools to new schools among the respondents should be similar to that ratio in the universe.

One aspect of determining whether respondent data adequately represent the population studied is whether there is evidence of nonresponse bias. Fincham provides a rather unorthodox definition of nonresponse bias: “Lack of response to the questionnaire by potential responders in a sample or population is referred to as nonresponse bias.”2 As an example, the paper states that a 20% return meant an 80% nonresponse bias. There is little support for this view in the literature, as recipients may have a number of reasons for not answering a questionnaire other than a biased attitude toward some or all of the questions. In fact, Dillman points out that “nonresponse error is not simply a function of low response rates.”5 Moreover, Groves observes that “…while nonresponse bias clearly does occur, the nonresponse rate of a survey alone is not a very good predictor of the magnitude of the bias.”13

In the case of a census of pharmacy colleges and schools, both nonresponse bias and representativeness are more easily assessed because the response rate is equal to the percentage of colleges and schools that return the completed questionnaire. Because “responders + nonresponders = universe,” the known characteristics of the respondent pool and the universe define the characteristics of nonresponders. Thus, if appropriate comparisons are made between the respondents and the universe, it should be possible to determine whether there are meaningful differences between the universe and the nonresponder pool, and on this basis to examine, and likely establish, whether nonresponse introduced bias.
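As a minimal sketch of one such comparison, with hypothetical counts (the 70/50 public-private split and the 80 returns are invented for illustration), the nonresponder characteristics follow by subtraction, and a chi-square goodness-of-fit statistic tests whether respondents mirror the universe proportions on a single characteristic.

```python
# Hypothetical census of 120 schools compared with 80 returns on one
# characteristic (public vs private control). All counts are illustrative.
universe    = {"public": 70, "private": 50}   # hypothetical: all 120 schools
respondents = {"public": 50, "private": 30}   # hypothetical: 80 returns (67%)

n_resp = sum(respondents.values())
N = sum(universe.values())

# Nonresponder counts follow directly because responders + nonresponders = universe.
nonresponders = {k: universe[k] - respondents[k] for k in universe}
print(nonresponders)  # {'public': 20, 'private': 20}

# Chi-square goodness of fit: do respondents mirror the universe proportions?
chi2 = sum(
    (respondents[k] - n_resp * universe[k] / N) ** 2 / (n_resp * universe[k] / N)
    for k in universe
)
print(round(chi2, 2))  # 0.57 < 3.84 (df = 1, alpha = .05): no evidence of bias here
```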

IS RESPONSE RATE A STANDARD FOR PUBLICATION?

The editorial policies of 9 social science and 9 health science journals that regularly publish survey data were compared in 2003.14 None of the 18 journals reported having a minimum response-rate standard. Among the comments, one editor reported expecting a minimum of a “60% response rate with rare exceptions.” The editor of another journal commented, “I don’t equate standardization with rigor.” Several of the 18 editors stated that decisions were made on a case-by-case basis.14 We reviewed the editorial policies of 2 publications more closely related to AJPE: Academic Medicine, the official journal of the Association of American Medical Colleges, and the Journal of Dental Education, the official publication of the American Dental Education Association. Each of these organizations comprises fewer than 140 professional schools. Neither of the 2 journals has specific standards or instructions to authors concerning survey response rates.15,16 In fact, a review of articles published in Academic Medicine from 2010 to 2012 revealed 9 national survey studies on medical schools in which the response rates were as low as 51% to 64%.17-25

CONCLUSIONS

For the evaluation of surveys of US pharmacy colleges and schools, 2 strict standards have been put forward by the Journal. The first requirement, that a survey must attain a predetermined threshold response rate, is not supported by available research. Experts in survey research provide guidelines based on the types of items included in the survey instrument and other considerations. These guidelines help authors, editors, and reviewers to determine whether an adequate number of responses has been obtained.

The second requirement, that the respondents represent all colleges and schools of pharmacy, ensures that the survey results can be extended to all colleges and schools. However, the tests of representation must be appropriate for the subject and purposes of the survey; no single list of tests fits all studies.

Overly constraining survey research standards might discourage the collection and dissemination of valuable information about US pharmacy colleges and schools. The recommendations put forth by Draugalis and colleagues provide a broad, solid, and sufficient basis for both researchers and reviewers regarding the expected quality of survey research on various populations, including pharmacy colleges and schools.1 In short, research should not be suppressed because it fails to pass a predefined hurdle, nor should it be automatically deemed of high quality if it passes the hurdle. If the methodology is sound, the data are presented correctly, the statistical analysis is done responsibly, and the limitations are acknowledged, then the results will stand or fall on their own merits. Whether the outcomes allow generalization to all US pharmacy colleges and schools should be decided on a case-by-case basis.

ACKNOWLEDGEMENTS

We thank Dr. Jon Krosnick for helpful discussions and for sharing his manuscript prior to publication. We also thank Ms. Tamara Trujillo for excellent library support.

REFERENCES

1. Draugalis JR, Coons SJ, Plaza CM. Best practices for survey research reports: a synopsis for authors and reviewers. Am J Pharm Educ. 2008;72(1):Article 11. doi: 10.5688/aj720111.
2. Fincham JE. Response rates and responsiveness for surveys, standards, and the Journal. Am J Pharm Educ. 2008;72(2):Article 43. doi: 10.5688/aj720243.
3. Draugalis JR, Plaza CM. Best practices for survey research reports revisited: implications of target population, probability sampling, and response rate. Am J Pharm Educ. 2009;73(8):Article 142. doi: 10.5688/aj7308142.
4. Krejcie RV, Morgan DW. Determining sample size for research activities. Educ Psychol Meas. 1970;30:607–610.
5. Dillman DA, Smyth JD, Christian LM. Internet, Mail, and Mixed-Mode Surveys: The Tailored Design Method. 3rd ed. Hoboken, NJ: John Wiley & Sons; 2009:55–57, 62.
6. Bartlett JE, Kotrlik JW, Higgins CC. Organizational research: determining appropriate sample size in survey research. Inf Technol Learn Perform J. 2001;19(1):43–50.
7. Cochran WG. Sampling Techniques. 3rd ed. New York, NY: John Wiley & Sons; 1977:81.
8. Harrison DL. Improving the quality of survey research. J Am Pharm Assoc. 2008;48(4):458–459. doi: 10.1331/JAPhA.2008.08531.
9. Hardigan PC, Carvajal MJ. An application of the Rasch rating scale model to the analysis of job satisfaction among practicing pharmacists. J Am Pharm Assoc. 2008;48(4):522–529. doi: 10.1331/JAPhA.2008.07042.
10. Ziliak ST, McCloskey DN. The cult of statistical significance. Presented at the Joint Statistical Meetings; August 3, 2009; Washington, DC. http://stephentziliak.com/doc/2009ZiliakMcCloskeyJSM%20PROCEEDINGS.pdf. Accessed May 15, 2012.
11. Bacchetti P. Current sample size conventions: flaws, harms and alternatives. BMC Med. 2010;8:17. doi: 10.1186/1741-7015-8-17. http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2856520/pdf/1741-7015-8-17.pdf. Accessed May 15, 2012.
12. Bacchetti P, Deeks SG, McCune JM. Breaking free of sample size dogma to perform innovative translational research. Sci Transl Med. 2011;3(87):87ps24. doi: 10.1126/scitranslmed.3001628. http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3134305/pdf/nihms-307674.pdf. Accessed May 15, 2012.
13. Groves RM. Nonresponse rates and nonresponse bias in household surveys. Public Opin Q. 2006;70(5):646–675.
14. Johnson T, Owens L. Survey response reporting in the professional literature. Presented at the 58th Annual Meeting of the American Association for Public Opinion Research; 2003; Nashville, TN. http://www.amstat.org/sections/srms/Proceedings/y2003/Files/JSM2003-000638.pdf. Accessed May 15, 2012.
15. Academic Medicine, Journal of the Association of American Medical Colleges. Publication criteria for research reports. http://journals.lww.com/academicmedicine/Pages/checklistPubCriterial.aspx. Accessed May 15, 2012.
16. Journal of Dental Education. Instructions to authors. http://www.jdentaled.org/site/misc/ifora.xhtml. Accessed May 15, 2012.
17. Lucey CR, Sedmak D, Notestine M, Souba W. Rock stars in academic medicine. Acad Med. 2010;85(8):1269–1275. doi: 10.1097/ACM.0b013e3181e5c0bb.
18. Friedman E, Sainte M, Fallar R. Taking note of the perceived value and impact of medical student chart documentation on education and patient care. Acad Med. 2010;85(9):1440–1444. doi: 10.1097/ACM.0b013e3181eac1e0.
19. O'Brien BC, Poncelet AN. Transition to clerkship courses: preparing students to enter the workplace. Acad Med. 2010;85(12):1862–1869. doi: 10.1097/ACM.0b013e3181fa2353.
20. Friedman E, Karani R, Fallar R. Regulation of medical student work hours: a national survey of deans. Acad Med. 2011;86(1):30–33. doi: 10.1097/ACM.0b013e3181ff9725.
21. Ferullo A, Silk H, Savageau JA. Teaching oral health in U.S. medical schools: results of a national survey. Acad Med. 2011;86(2):226–230. doi: 10.1097/ACM.0b013e3182045a51.
22. Chimonas S, Patterson L, Raveis VH, Rothman DJ. Managing conflicts of interest in clinical care: a national survey of policies at U.S. medical schools. Acad Med. 2011;86(3):293–299. doi: 10.1097/ACM.0b013e3182087156.
23. Liston BW, Fischer MA, Way DP, Torre D, Papp KK. Interprofessional education in the internal medicine clerkship: results from a national survey. Acad Med. 2011;86(7):872–876. doi: 10.1097/ACM.0b013e31821d699b.
24. Eickmeyer SM, Do KD, Kirschner KL, Curry RH. North American medical schools' experience with and approaches to the needs of students with physical and sensory disabilities. Acad Med. 2012;87(5):567–573. doi: 10.1097/ACM.0b013e31824dd129.
25. Kelly WF, Papp KK, Torre D, Hemmer PA. How and why internal medicine clerkship directors use locally developed, faculty-written examinations: results of a national survey. Acad Med. 2012;87(7):924–930. doi: 10.1097/ACM.0b013e318258351b.
