Jon Eldredge had just given an excellent keynote address, and he and I were talking in the hotel lounge. The occasion was the annual meeting of the Midcontinental Chapter of the Medical Library Association (MLA), held in Kansas City, Missouri, last fall (the same hotel and same meeting where Lynn and I had gotten married nine years previously). Eldredge's topic was “Evidence-based Librarianship (EBL),” a subject of which he has been the leading proponent in the United States.
I have been cautiously skeptical about EBL. As Eldredge and others have pointed out, the key to applying evidence-based principles is asking the right questions, and I have not been convinced that the questions most important to librarianship are amenable to the sort of rigorous investigation that, it has seemed to me, EBL calls for.
Eldredge had spent some time during his presentation on the question of questions and was very effective in getting the audience engaged in developing samples. Afterward, he and I talked about applying the methods of EBL to the range of issues that librarians face.
A few weeks before, I had been in Belfast attending a meeting of the Health Libraries Group of the United Kingdom. There, I had the chance to talk about some of these same issues with Andrew Booth (sort of the UK Eldredge; or, maybe, Eldredge is the US Booth. I can never quite tell). I had given a talk about “informationists” and “information specialists in context” and their relationship to the long experience of clinical librarianship. In preparing for the talk, I drew on two recently published systematic reviews of the literature of clinical librarianship, one published in the Journal of the Medical Library Association (JMLA) [1] and one in Health Information and Libraries Journal (HILJ) [2]. Having edited the article by Wagner and Byrd, I knew that they had become aware of Winning and Beverley's work in the late stages of preparing their report. The two groups had made some attempts to combine forces, but each was too far along in its own project, so, while they referred to each other's work in their papers, the studies appeared separately. One might consider this a lost opportunity, but, as I looked at the two papers, I decided that we were actually quite fortunate that both were published, because their conclusions reinforce each other.
What was most striking was that, despite all of the articles written on clinical librarianship over a thirty-year period (more than thirty studies were reviewed in each article), the authors were unable to draw any compelling conclusions demonstrating that clinical librarian programs actually have the kinds of positive effects on patient care that their proponents hope for. Most of the articles under review were descriptive. When they were evaluative, the evaluations were idiosyncratic enough that they could not be combined in any compelling meta-analysis. Despite all of the effort that has been put into developing clinical librarian programs and writing articles about them, we are not much closer to demonstrating their value than we were over a quarter century ago. We have many articles; we do not have a body of evidence.
In our conversations, both Eldredge and Booth expressed their concerns about this state of affairs. This is one of the critical issues that they, and their kindred spirits, are attempting to address. In health sciences librarianship, we have seen a significant increase in attention to research over the past several years. Hypothesis <http://gain.mercer.edu/mla/research/hypothesis.html>, the newsletter of the Research Section, has evolved into an excellent publication full of advice, examples, and background useful to librarians contemplating research projects. Many of MLA's chapters now have research sections of their own, and, at two of the chapter meetings I attended this year (Midcontinental and Southern Chapters), awards were given for the best research posters and papers, as they are in many other chapters as well. Many chapters also have formal research mentoring programs.
Over the past few years, the number of articles submitted to the JMLA has increased significantly. While a number of factors are likely involved in this (the greater availability of the JMLA due to its being hosted on PubMed Central is probably the most influential), I think that it also reflects an increasing interest and effort on the part of our colleagues to do research and to try to get the reports of that research published.
Nonetheless, a growing number of published reports does not automatically translate into building a body of evidence. When I talk with potential authors about structuring their papers, one of the things that I always emphasize is the need to make sure that their work is soundly rooted in the existing literature of the topic. In the best papers, the authors work carefully to ensure that this is in fact done. If you look at the paper in this issue by Dee and Stanley [3], for example, you will see that, throughout their discussion section, they have carefully linked their results to previous work, pointing out when their results seem to confirm earlier studies and where their results differ. This is extremely useful but is something I do not often see in articles that we publish.
We have amassed, over the years, quite a number of articles on information-seeking behavior. Note the article by Andrews et al. [4] that precedes Dee and Stanley's study in this issue. Here, we have two articles on a similar topic, with slightly different subject populations and quite different approaches. The conclusions have similarities. Both studies are valuable and important contributions to the literature but present a bit of a challenge for the researcher who might want to directly compare the information-seeking behaviors of those two populations.
I use those two papers as examples only because they are ready to hand. It would be easy enough to come up with numerous similar cases. Generally, in the JMLA, we try to include questionnaires or relevant portions of questionnaires as appendixes. We do this believing, primarily, that it is useful for the reader, when interpreting the results, to know exactly how a question was presented and, secondarily, that the questionnaires are useful to other researchers who may be interested in doing similar work and would like to see samples that they can use in designing their own.
How often, though, do those researchers use the same questionnaire or at least the same or similar (enough) questions (after getting proper permissions and giving proper attributions, of course)? How often, when selecting survey participants, do they try to control for the same factors as the studies they are using as examples? How often, in other words, do they approach their project from the standpoint of gathering results that will be directly comparable to the work they are using as models? Not having studied this systematically myself, I cannot say for sure, but my impression is that the answer would have to be: not very often.
Some of you are aware of the LibQual+ survey <http://www.libqual.org> that is now being run by the Association of Research Libraries (ARL). It is an online survey designed to provide feedback on how well a library is meeting the expectations of its primary clientele. Established in 2000 as an experimental project with 13 libraries, the survey was being managed by ARL by 2004, when 204 institutions participated. One of the most valuable things about the LibQual+ survey is that all of the participating libraries ask the same questions in the same way. The results are truly comparable from institution to institution. Although the survey has been modified from year to year, the managers of the survey are very cognizant of the need to maintain that comparability over time. They are building a body of data that will be an increasingly rich source for research in the years to come. A recent volume describes some of the work being done to analyze LibQual+ results and includes several papers from health sciences librarians [5].
Many opportunities exist in health sciences librarianship to work on building a body of evidence. Two of the most important pieces of library research documenting the value of libraries are colloquially known as the King study [6] and the Rochester study [7]. In a quick check of the Web of Science citation database, I see that the King study has been cited at least 51 times (since 1995, the earliest year for which I have electronic access) and the Rochester study 87 times. But a quick scan of the titles of those citing papers indicates that virtually none of them document attempts to replicate and verify the results. Just think how much easier it would make your life, on the day that your hospital administrator is musing about whether or not this library stuff really matters to the bottom line, if you could present him or her with a raft of related studies, systematically analyzed, rather than the handful that you would actually be able to identify.
In discussing the results of the last JMLA readership survey in my editorial in the October 2003 issue, I said that I was “disturbed by the number of people who said they had never submitted an article, because they did not think they had anything of interest to write about” [8]. Let me make a few suggestions:
- Review Eldredge's overview of EBL in the October 2000 Bulletin of the Medical Library Association [9].
- Browse the special issue on EBL published as a supplement to the June 2003 issue of HILJ, paying particular attention to the lead editorial by Booth and Eldredge [10].
- Read a few other articles by these two authors and the others that you will quickly identify as part of the growing global EBL movement.
- Settle in for an evening with a stack of recent JMLAs and HILJs and whatever other research-oriented library publications appeal to you, and browse for articles that remind you of your own library situation. Keep your mind in “curious mode.” Be alert to the questions that come to mind: “Well, that's interesting—I wonder if my users would respond that way…” Find an article that compels your interest and, perhaps, your skepticism.
- Call a colleague or two and say, “I've got an idea. Wouldn't it be great if we could redo this study, and see if we find the same thing in our libraries?” (You might even want to see if the author of the paper that you are looking at would be interested in collaborating.)
Finding something to study that no one has thought to look at before is definitely worthwhile. But we also need to spend time testing what we think we know and validating what we think we have proved. Single studies do not accomplish that. We have come a long way, as a profession, in improving our research skills and our understanding of the importance of research. But we still have considerable work to do in shaping our projects so that they contribute to building a body of evidence.
Editor's blog
As an aside, in sort of a research spirit of my own, I have been experimenting with a blog. If I have not gotten bored with it by the time this editorial comes out, you can find it at http://tscott.typepad.com. I would be happy to hear from you.
References
1. Wagner KC, Byrd GD. Evaluating the effectiveness of clinical medical librarian programs: a systematic review of the literature. J Med Libr Assoc. 2004 Jan;92(1):14–33.
2. Winning MA, Beverley CA. Clinical librarianship: a systematic review of the literature. Health Info Libr J. 2003 Jun;20(suppl 1):10–21.
3. Dee C, Stanley EE. Information-seeking behavior of nursing students and clinical nurses: implications for health sciences librarians. J Med Libr Assoc. 2005 Apr;93(2):213–21.
4. Andrews JE, Pearce KA, Ireson C, Love M. Information-seeking behaviors of practitioners in a primary care practice-based research network (PBRN). J Med Libr Assoc. 2005 Apr;93(2):206–12.
5. Heath FM, Kyrillidou M, Askew CA, eds. Libraries act on their LibQual+ findings: from data to action. Binghamton, NY: Haworth Information Press, 2004. (Published simultaneously as J Libr Admin. 2004;40(3–4).)
6. King DN. The contribution of hospital library information services to clinical care: a study in eight hospitals. Bull Med Libr Assoc. 1987 Oct;75(4):291–301.
7. Marshall JG. The impact of the hospital library on clinical decision making: the Rochester study. Bull Med Libr Assoc. 1992 Apr;80(2):169–78.
8. Plutchak TS. The JMLA readership survey [editorial]. J Med Libr Assoc. 2003 Oct;91(4):389–91.
9. Eldredge JD. Evidence-based librarianship: an overview. Bull Med Libr Assoc. 2000 Oct;88(4):289–302.
10. Booth A, Eldredge JD. …and even evidence-based librarianship? Health Info Libr J. 2003 Jun;20(suppl 1):1–2.