Journal of the Medical Library Association : JMLA

Editorial. 2012 Jan;100(1):1–2. doi:10.3163/1536-5050.100.1.001

Survey research: we can do better

Susan Starr 1
PMCID: PMC3257493  PMID: 22272152

Survey research is a commonly employed methodology in library and information science and the most frequently used research technique in papers published in the Journal of the Medical Library Association (JMLA) [1]. Unfortunately, very few of the survey reports that the JMLA receives provide sufficiently sound evidence to qualify as full-length JMLA research papers. A great deal of effort often goes into such studies, and our profession really needs the kind of evidence that their authors hoped to provide. Fortunately, the problems in these studies are not all that difficult to resolve. However, they do have to be addressed at the outset, before the survey is sent to potential respondents. Once the survey has been administered, it is too late.

To determine whether a report qualifies for publication as a research study, the JMLA uses the definition of research given by the US Department of Health and Human Services: “a systematic investigation…designed to develop or contribute to generalizable knowledge” [2]. Problems arise when submitted surveys do not meet these criteria: the reader cannot generalize from the findings to the population at large, the survey does not add to the knowledge base of health sciences librarianship, or both. If the results seem interesting, the JMLA may publish the paper as a brief communication in the hope that others will follow up with more in-depth investigations. However, many of these problematic surveys could have provided critically needed information if only they had been done slightly differently. There are three common problems with the surveys that the JMLA receives, and each has a relatively straightforward solution.

Three common problems

Problem #1: The survey has not been designed to answer a question of interest to a substantial group of potential readers of the JMLA. A survey intended for publication should be designed to shed light on research questions relevant to health sciences librarianship or the delivery of biomedical information. Questions regarding user behavior, the effectiveness of interventions, barriers to using information, the utility of metadata, and so on are all potentially answerable with survey methodology. For example, a survey could be designed to reveal what influences users' decisions to use a library, whether physicians retain information retrieval techniques that are taught in medical school, what prevents clinicians from consulting published research, whether users appreciate good metadata, and so on. These are all important questions on issues of general interest, and surveys to help answer them are suitable for publication.

Problems arise because surveys can be used to provide information on local issues as well. For example, a librarian may wish to determine whether library users will tolerate increases in interlibrary loan fees, whether searchers are having trouble with a proxy server, or if local administrators approve of library services. A survey can be the best method to uncover this kind of information. However, such surveys are usually not publishable, even as a brief communication, as the questions included relate almost exclusively to local problems.

Solution #1: Before embarking on a survey intended for publication, review the current literature on the topic of interest. Design the survey to specifically address an issue of general importance that is not already answered in the literature. Survey questions should be written to provide information that can be used by others. A few questions specific to your institution or user group can also be included if necessary.

Problem #2: The results cannot be generalized beyond the group of people who answered the survey. Unfortunately, a major problem in all survey research is that respondents are almost always self-selected. Not everyone who receives a survey is likely to answer it, no matter how many times they are reminded or what incentives are offered. If those who choose to respond are different in some important way from those who do not, the results may not reflect the opinions or behaviors of the entire population under study. For example, to identify barriers to nurses' use of information, a survey should be answered by a representative sample of the nursing population. If only recent graduates of a nursing program, only pediatric nurses, or only nurses who are very annoyed with lack of access to computers in their hospital answer, the results may well be biased and so cannot be generalized to all nurses. Such a survey could be published as a brief communication, if the results were provocative and might stimulate research by others, but it would not be publishable as a research paper.

Solution #2: To address sample bias, take these three steps:

  1. Send the survey to a representative sample of the population. Use reminders and incentives to obtain a high response rate (over 60%), thus minimizing the chances that only those with a particular perspective are answering the survey. And…

  2. Include questions designed to identify sample bias. Questions will vary according to the topic of the survey, but typically such questions identify the demographics (age, sex, educational level, position, etc.) of the respondents or the characteristics of the organization (size, budget, location, etc.). Then…

  3. Compare the characteristics of those answering the survey to known distributions in the population to identify possible bias. Samples of librarians, for example, can be compared to Medical Library Association (MLA) member surveys to determine if they reflect the general characteristics of MLA members. Samples of clinicians can be compared to statistics on the nation's medical professionals, and samples of academic libraries can be compared to the characteristics reported in the Association of Academic Health Sciences Libraries (AAHSL) annual survey. If the sample appears to be biased, acknowledge that as a limitation of the study's results.
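The comparison in step 3 can be run as a chi-square goodness-of-fit test. The sketch below is illustrative only: the respondent counts and the population proportions (standing in for figures one might take from an MLA member survey) are hypothetical, and it assumes the SciPy library is available.

```python
from scipy.stats import chisquare

# Hypothetical respondent counts by library type: academic, hospital, other
observed = [55, 30, 15]

# Hypothetical population proportions for the same categories,
# e.g. drawn from an MLA member survey
population_props = [0.45, 0.40, 0.15]

# Expected counts if respondents mirrored the population
n = sum(observed)
expected = [p * n for p in population_props]

# Goodness-of-fit test: does the sample differ from the population?
stat, p_value = chisquare(f_obs=observed, f_exp=expected)

if p_value < 0.05:
    print("Sample differs from the population on this variable; "
          "report it as a limitation.")
else:
    print("No evidence of bias on this variable.")
```

A non-significant result on one demographic variable does not rule out bias on others, so the test should be repeated for each characteristic collected in step 2.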

Problem #3: The answers to the survey questions do not provide the information needed to address the issue at hand. Survey questions in studies submitted to the JMLA are often ambiguous; when it is impossible to determine what the answers represent, the paper must be rejected. A related and more subtle problem occurs when the survey did not ask about all the relevant issues. For example, a librarian might decide to survey clinicians to identify barriers to their use of mobile devices. She designs a survey that includes questions related to physical barriers, such as screen size, and questions on availability issues, such as accessibility of a particular database. The paper reports that the major barriers to use of mobile devices are physical problems with the devices. However, reviewers may note that there are many other possible barriers to using mobile technology in a clinical setting. Infrastructure issues, such as wireless connectivity in the hospital, and organizational issues, such as policies with respect to using cell phones in front of patients, can be critical factors. As a result, the conclusion of the survey is misleading, and the paper cannot be published.

Solution #3: Interview a few representative members of the intended survey population to identify all the critical aspects of the study topic before designing the survey. Then, pretest the survey on others and discuss the survey with pretest participants to identify ambiguous answers or unintelligible questions.

Benchmarking surveys

Benchmarking surveys provide data on the characteristics of a particular population of individuals, businesses, or organizations. Their purpose is not to add to the knowledge base of a discipline, but instead to provide numerical information that others can use for that purpose. The US Census is an example of a benchmarking survey; the MLA membership survey is another benchmarking tool, as are the AAHSL annual survey and many of the surveys undertaken by the Pew Research Center. The data in these surveys are used by others both for practical purposes and for research. Social scientists use census data to develop economic models; academic medical libraries use AAHSL data to justify their budgets; and policy makers use the Pew data to understand social trends in the United States.

To be useful, a benchmarking study must be structured so that the data can be used either by researchers to compare different groups or by organizations, such as hospitals or libraries, to identify a peer group for comparative purposes. Using data to reliably compare groups selected according to multiple variables requires a very large-scale study, an unbiased sample, and a thoroughly pretested survey instrument. Because benchmarking surveys need to be large and use a professionally constructed sample and survey instrument, most such surveys are done by organizations rather than individuals. Few, if any, benchmarking surveys submitted to the JMLA have a large enough sample to permit detailed analysis or identification of peer groups. They remain suggestive rather than conclusive and are normally only published as brief communications.

Three problems and three solutions

The solutions are not all that difficult to implement, but, as noted at the beginning of this editorial, they must be put in place before the survey is administered. To “develop or contribute to generalizable knowledge,” a survey needs to be created to answer a question that is important to others, gather information that will allow the researcher to identify sample bias, and use a well-designed, unambiguous set of questions. The research question comes first; if the answer is already in the literature, then no further research is required. Developing a sampling methodology comes next, including identifying possible sources of bias and creating questions that will allow them to be detected. Last are interviews to refine the questions and pretesting to identify problematic language. We can do better surveys, and if we do, we will have the evidence we need to improve the delivery of biomedical information.

References

  • 1. Gore S.A., Nordberg J.M., Palmer L.A., Piorun M.E. Trends in health sciences library and information science research: an analysis of research publications in the Bulletin of the Medical Library Association and Journal of the Medical Library Association from 1991 to 2007. J Med Libr Assoc. 2009 Jul;97(3):203–11. doi:10.3163/1536-5050.97.3.009.
  • 2. US Department of Health and Human Services. Protection of human subjects. 45 CFR 46.102(d). 2005.
