Journal of the Medical Library Association : JMLA
2007 Jan;95(1):46–53.

Empowering your institution through assessment

Douglas J Joubert 1, Tamera P Lee 2
PMCID: PMC1773025  PMID: 17252066

Abstract

Objectives: The objectives of this study are to describe the process of linking Association of Academic Health Sciences Libraries (AAHSL) data with 2002 LibQUAL+ data and to address four analytical questions created by the AAHSL Task Force on Quality Assessment that relate both to user satisfaction and to services provided by AAHSL libraries.

Methods: For the thirty-five AAHSL libraries that participated in the 2002 LibQUAL+ survey, nested effects of variance were analyzed using a linear mixed model. Using the Pearson correlation coefficient, this study explored four questions about the effect of user demographics on perceived levels of satisfaction with library services.

Results: The supposition that library user satisfaction may differ according to library institutional reporting structure was unsupported. Regarding effect on mean overall satisfaction, size of library staff is not significant (P = 0.860), number of constituents is slightly significant (P = 0.027), and ratio of staff to constituents has a moderate and significant effect (P = 0.004).

Conclusions: From a demographic perspective, the 2002 LibQUAL+ survey represents the largest cross section of AAHSL libraries. Increased understanding of how qualitative assessment can supplement quantitative data supports evidence-based decision-making and practice. It also could promote changes in data collection and usage.


Highlights

  • Possible relationships between measures of user satisfaction and traditional library administrative data were explored in a cross section of health sciences libraries.

  • Organizational structure and size of library staff had no direct impact on service quality, while the ratio of constituents to library staff was correlated with user satisfaction ratings.

Implications

  • Libraries can be distinctive regardless of size in terms of overall user satisfaction.

  • The ratio of library staff to primary user population should be considered as a new measure for further exploration as an outcomes assessment.

  • Qualitative measures strengthen libraries' ability to center resources and services on users' reported needs and expectations.

BACKGROUND

Contemporary research reveals a number of factors that combine to place an ever-increasing strain on library resources [1]. This issue is compounded by an increased demand for specialized levels of service provided to internal and external clients and an increase in competition from other organizations [1, 2].

One possible solution for dealing with both increased competition and the need to enhance services provided to library patrons is the integration of qualitative and quantitative research methods, thus implementing mixed-model approaches to assessment. For example, researchers may examine qualitative data from student narratives, user surveys, and focus groups and consider this in relation to quantitative data such as library budgets and personnel expenditures. Findings from such investigations could allow senior level planners to develop highly targeted and effective programs and services.

CREATING A CULTURE OF ASSESSMENT

Recent literature in library and information science clearly documents the need for creating a “culture of assessment” in libraries [2–5]. Lakos defines a culture of assessment as the organizational and cultural structures of an organization that embrace research as a method for meeting customer needs [3]. Creating a culture of assessment requires input from a number of diverse domains, both research-oriented and practical. Specifically, the research-oriented domain is supported by investigators engaged in intellectual inquiry, which in turn facilitates the dissemination of original research. Conversely, library practitioners and information professionals serve researchers by assessing the generalizability and transferability of research. The two fields share a common language that symbiotically links them and facilitates the transfer of knowledge from one area to the other. One problem that appears to affect this relationship is the diversity that exists in defining terms commonly associated with assessment [2, 5]. However, as more libraries integrate continuous and systematic programs of qualitative and quantitative assessment, terms associated with assessment will become integrated into the language of library practitioners.

At the national level, a number of initiatives provide guidelines for developing, implementing, and evaluating outcomes-based programs. The Association of College & Research Libraries (ACRL) Institute for Information Literacy and the Association of Research Libraries (ARL) New Measures Initiative and Higher Education Outcomes (HEO) Research Review are examples of such programs [6]. Furthermore, a diverse group of governing bodies, professional organizations, and accrediting agencies can influence outcomes assessment at institutions of higher education. Each distinct group contributes its own “language of evaluation” to define and describe its culture of assessment. ACRL and the Association of American Medical Colleges (AAMC) are organizations that have developed documents shaping outcomes assessment in libraries. For example, ACRL created the Objectives for Information Literacy Instruction, and AAMC formulated the Learning Objectives for Medical Student Education [7, 8]. Having many external forces shaping organizations can create a myriad of definitions and a dynamic tension as institutions navigate the landscape of evaluation. As mentioned previously, having researchers, librarians, and administrators communicate in a common language is a crucial step toward developing and facilitating a culture of assessment.

The Association of Academic Health Sciences Libraries (AAHSL) Task Force on Quality Assessment developed eight questions for analysis that shaped the multifaceted objectives of this paper. The AAHSL questions sought analysis related to potential correlation of qualitative measures of the AAHSL LibQUAL+ survey and quantitative measures of the AAHSL Annual Statistics [9]. Specifically, this paper addresses research questions created from the AAHSL queries: (1) How do satisfaction ratings differ for various reporting structures? (2) How does the size of library staff affect satisfaction ratings? (3) How does the number of constituents affect satisfaction ratings? and (4) How does the ratio of staff to constituents affect satisfaction ratings? Additionally, this paper explores the relationships that exist between these disparate sets of data and frames this investigation within the context of creating a culture of assessment at both the individual and institutional levels.

METHODS

LibQUAL+ instrument

LibQUAL+ is an assessment tool developed collaboratively by the Association of Research Libraries (ARL) and Texas A&M University [10]. LibQUAL+ was developed to measure user satisfaction in libraries, specifically levels of satisfaction based exclusively on the perceptions of the library user [10, 11]. Such measures provide a qualitative assessment tool for librarians and administrators; in particular, this instrument is not based on traditional quantitative measures such as the size of library collections or the number of patrons served [11]. LibQUAL+ has been demonstrated to be a reliable and valid instrument and is but one tool available to librarians interested in improving the level of service provided to their users [11].

In 2002, LibQUAL+ participants were grouped into four institutional-type categories: (1) Four-Year Institutions, (2) Community Colleges, (3) Health Science Libraries, and (4) Other Institutions [10, 12]. This paper focuses exclusively on responses from AAHSL health science libraries. Thirty-five AAHSL libraries participated in the 2002 LibQUAL+ survey. Participants at these libraries generated 14,897 completed surveys, which translated into a 50% completion rate [12]. The highest percentage of respondents were faculty (35.5%), followed by graduate students (34.3%) and non-library staff (22.6%) [12].

The 2002 LibQUAL+ Survey consisted of twenty-five core questions and five local questions developed for AAHSL libraries. When answering the core questions, participants were asked to base their selections on three criteria [12]:

  • Minimum—the number that represents the minimal level of acceptable service

  • Desired—the number that represents the desired level of service

  • Perceived—the number that represents the perceived level of service

For the core questions, participants were instructed to rate all three levels (Minimum, Desired, and Perceived) or choose non-applicable (NA) for that question. The 2002 version of LibQUAL+ organized the twenty-five core questions into four distinct divisions called Dimensions. These Dimensions were Access to Information, Affect of Service, Library as Place, and Personal Control [12].

Of the five AAHSL-specific questions, three were included to measure the overall level of satisfaction among users and the quality of service provided by the library; the remaining two questions addressed library usage, specifically internal usage of library resources versus collections accessed electronically. The three “satisfaction” questions were: (1) In general, I am satisfied with the way in which I am treated at the libraries; (2) In general, I am satisfied with library support for my learning, research, and/or teaching needs; and (3) How would you rate the overall quality of the service provided by the library? [12]. The scale for Questions 1 and 2 ranged from 1 (strongly disagree) to 9 (strongly agree); the scale for Question 3 ranged from 1 (extremely poor) to 9 (extremely good). The third question forms the basis for addressing all four research questions posed by the investigators.

AAHSL annual survey

Published since 1978, the AAHSL Annual Survey provides comparative data on the characteristics of collections, expenditures, personnel, and services in medical school libraries in the United States and Canada [9, 13]. Data from the twenty-fourth AAHSL survey were used for this analysis because the descriptive library data that include library reporting structure were not collected in the twenty-fifth AAHSL survey; starting with the twentieth edition of the AAHSL survey, demographic information has been collected only once every five years, on the assumption that some library characteristics vary little over time [13].

Data considerations

Of the 125 AAHSL member institutions, 121 contributed data to the twenty-fourth AAHSL Annual Survey, and a cross section of 35 participated in the 2002 LibQUAL+ survey. The 35 libraries that participated in the 2002 LibQUAL+ survey and contributed data to the twenty-fourth Annual Survey form the cohort for this study.

Before statistical analysis, the two distinct sets of AAHSL and LibQUAL+ data were aggregated into one complete file and reformatted as necessary to facilitate the analysis. Missing scores were identified, and two individuals not associated with the study verified the merged data. Data verification consisted of checking the merged data files for accuracy and making sure that the variable labels were consistently applied and that the SPSS variables were properly identified as either numeric or string (non-numeric) data. Validation and independent review of the data were crucial in linking a dataset containing 120 records (AAHSL) to a dataset containing almost 14,000 records (LibQUAL+).
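To make this linking step concrete, the sketch below shows how such a merge and verification pass might look in Python with pandas. The file names, the institution_id key, and the specific checks are hypothetical stand-ins; the actual work was performed in SPSS.

```python
# Illustrative sketch only: the study used SPSS, and these file and
# column names are hypothetical stand-ins for the actual AAHSL and
# LibQUAL+ fields.
import pandas as pd

aahsl = pd.read_csv("aahsl_annual_survey_24.csv")     # ~120 library-level records
libqual = pd.read_csv("libqual_2002_responses.csv")   # ~14,000 respondent records

# Link each LibQUAL+ respondent to their library's AAHSL record on a
# shared institution identifier.
merged = libqual.merge(aahsl, on="institution_id", how="inner")

# Verification analogous to that described above: locate missing
# scores and confirm that numeric variables were not read as strings.
print(merged.isna().sum())
print(merged.dtypes)
```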

Research questions

Overall quality of service, examined as the outcome variable in each of the four analyses described below, was characterized from responses to the 2002 LibQUAL+ survey question: “How would you rate the overall quality of the service provided by the library?” [12]. Table 1 lists the descriptive statistics for this question.

Table 1 Descriptive statistics for overall quality of services*


“How do satisfaction ratings differ for various reporting structures?”

This analysis was defined by responses to Question 1 of the twenty-fourth Annual AAHSL Descriptive Library Statistics Survey (2001–2002): “To whom does the library report?” The researchers hypothesized that the mean measures of overall quality of services would vary according to library reporting structure. AAHSL defines library reporting structure according to these categories:

  • medical school

  • other health science school

  • university library

  • health science center administration

  • university administration

  • other

Two institutions did not contribute information about library reporting structure and were not included in the analysis for Question 1.

“How does the size of library staff affect satisfaction ratings?”

The number of total full-time equivalent staff for each library was computed from the following categories: (1) professional staff, (2) support staff, (3) clerical staff, and (4) student and hourly staff.

“How does the number of constituents affect satisfaction ratings?”

Analysis of Question 3 used AAHSL data regarding the number of primary and secondary clientele served by the individual libraries. From the AAHSL data, the number of total constituents served was computed from the following categories: (1) faculty; (2) interns, residents, and fellows; and (3) students. These categories were selected for analysis as priority customers. The “staff” category from the AAHSL data was not included because of the heterogeneous nature of the group (clerical, basic, and clinical sciences) and a lower response rate among the reporting libraries. Another consideration was that many of these diverse staff were in support roles for the primary clientele.

“How does the ratio of staff to constituents affect satisfaction ratings?”

The examination used AAHSL data reporting (1) total constituents served and (2) total library full-time equivalent (FTE) staff, yielding the ratio: total constituents served / total FTE staff. This ratio provided a way to determine the number of clients served per library employee.
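The three derived predictors described in this section (total FTE staff for Question 2, total constituents served for Question 3, and their ratio for Question 4) reduce to simple column arithmetic. A minimal sketch, assuming hypothetical AAHSL field names, follows:

```python
# Minimal sketch, assuming hypothetical AAHSL field names; the
# survey's actual variable names are not given in the paper.
import pandas as pd

aahsl = pd.read_csv("aahsl_annual_survey_24.csv")  # hypothetical file

# Question 2 predictor: total full-time equivalent (FTE) staff.
aahsl["fte_total"] = (
    aahsl["professional_staff"]
    + aahsl["support_staff"]
    + aahsl["clerical_staff"]
    + aahsl["student_hourly_staff"]
)

# Question 3 predictor: total constituents served (priority customers).
aahsl["constituents_total"] = (
    aahsl["faculty"]
    + aahsl["interns_residents_fellows"]
    + aahsl["students"]
)

# Question 4 predictor: clients served per library employee.
aahsl["constituents_per_fte"] = aahsl["constituents_total"] / aahsl["fte_total"]
```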

Statistical analysis

For Question 1, the researchers constructed a linear mixed model (LMM) to test whether overall satisfaction differed by reporting structure. For the LMM, the model dimensions were library reporting structure (fixed effects) and institution ID within library reporting structure (random effects). For research questions 2 through 4, linear regression analyses were performed, Pearson correlation coefficients were computed, and two categories of saved variables were created. First, Cook's D and leverage variables were created for evaluating the effect of outliers on the scatter plot. Second, residual variables were created for post hoc examination of the regression analysis. For Question 2, although the scatter plot revealed a number of possible outliers, statistical analysis revealed no significant outliers (Cook's D < 1; SDRESID < 3). For Questions 3 and 4, statistical analysis revealed that one score was an outlier (Question 3 Cook's D = 3.107; Question 4 Cook's D = 1.440). A filter variable was created, allowing this case to be excluded from the analysis for Questions 3 and 4. The researcher conducting the statistical analysis (DJ) computed statistics with and without the data from this outlier case. Data were analyzed using SPSS version 11 (SPSS, Inc., Chicago, IL, USA), and graphics were produced using JMP version 5 (SAS Institute Inc., Cary, NC, USA).
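As an illustration of the shape of these analyses, the sketch below approximates them with Python's statsmodels and scipy rather than SPSS and JMP. Variable names are hypothetical, and the nested random-effects structure is only approximated by a single institution-level grouping factor.

```python
# Illustrative reanalysis sketch; the original work was done in SPSS 11.
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

data = pd.read_csv("merged_aahsl_libqual.csv")  # hypothetical merged file

# Question 1: linear mixed model with reporting structure as a fixed
# effect and institution as the random grouping factor (an
# approximation of "institution ID within reporting structure").
lmm = smf.mixedlm(
    "overall_quality ~ C(reporting_structure)",
    data=data,
    groups=data["institution_id"],
).fit()
print(lmm.summary())

# Questions 2-4: library-level linear regression with Pearson's r and
# Cook's D screening, shown here for the staff-size predictor.
lib = (
    data.groupby("institution_id")
    .agg(mean_quality=("overall_quality", "mean"),
         fte_total=("fte_total", "first"))
    .reset_index()
)
r, p = stats.pearsonr(lib["fte_total"], lib["mean_quality"])
ols = smf.ols("mean_quality ~ fte_total", data=lib).fit()
cooks_d = ols.get_influence().cooks_distance[0]
outliers = cooks_d >= 1  # flag influential cases for a sensitivity rerun
print(f"r = {r:.3f}, P = {p:.3f}, outliers flagged: {outliers.sum()}")
```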

RESULTS

AAHSL Question 1: How do satisfaction ratings differ for various reporting structures?

The findings did not support the hypothesis that perceptions of library service quality differed according to library reporting structure (F = 0.924; df = 4, 27.038; P = 0.464).

AAHSL Question 2: How does the size of library staff affect satisfaction ratings?

The scatter plot for Question 2 seems to indicate that size of library staff and mean overall quality of services by institution were not correlated (Figure 1). The sample of 35 libraries did not reveal a statistically significant correlation between full-time equivalent staff and mean overall quality of services provided by libraries (r = −0.031; r² = 0.001; P = 0.860; N = 35).

Figure 1 Scatterplot of total library full-time equivalent staff and mean overall quality of service

AAHSL Question 3: How does the number of constituents affect satisfaction ratings?

The scatter plot for Question 3 appears to indicate a negative correlation between the total number of constituents served and mean overall quality of services provided by libraries; thus, as the total number of constituents served increased, the measures of mean overall quality of service decreased slightly. Additionally, the scatter plot revealed a possible outlier (Figure 2).

Figure 2 Scatterplot of total number of constituents served and mean overall quality of service

In the analysis that included this outlier (Case 4), the number of constituents had a low and non-statistically significant effect on mean overall quality of services (r = −0.244; r² = 0.060; P = 0.171; N = 33). In the analysis that excluded Case 4, the number of constituents had a low but statistically significant effect on mean overall quality of services (r = −0.391; r² = 0.153; P = 0.027; N = 32).

AAHSL Question 4: How does the ratio of staff to constituents affect satisfaction ratings?

The scatter plot for Question 4 seems to indicate a negative correlation between the ratio of constituents to full-time equivalent staff and the mean overall quality of services provided by libraries. Specifically, as the number of constituents served per full-time equivalent increased, the measures of overall quality of services decreased. Additionally, the scatter plot revealed a possible outlier, as demonstrated by Case 4 (Figure 3).

Figure 3 Scatterplot of the ratio of total number of constituents served to number of library full-time equivalent staff and mean overall quality of service

In the analysis that included Case 4, the ratio of staff to constituents had a moderate and statistically significant effect on mean overall satisfaction (r = −0.485; r² = 0.235; P = 0.004; N = 33). In the analysis that excluded Case 4, the ratio had a stronger and statistically significant effect on mean overall satisfaction (r = −0.592; r² = 0.351; P = 0.0003; N = 32).

DISCUSSION

This study was developed to address four research questions that focused on three quantitative aspects of AAHSL survey data—library reporting structure, staff, and constituents served. These questions were examined in relation to measures of overall quality of service defined by responses from the 2002 LibQUAL+ survey. The researchers found that library reporting structure and the size of library staff had no discernible effects on measures of overall perceived library service quality. However, the total number of constituents served did have a small and statistically significant effect on mean overall quality of service. The strongest correlation was between the ratio of staff to constituents and the overall perceived quality of services provided by AAHSL libraries.

From a demographic perspective, the 2002 LibQUAL+ survey represents the largest cohort and cross section of AAHSL libraries. This is an important factor in the present study, as it allowed the researchers to measure the strength of the relationship between measures of overall satisfaction and demographic data submitted by AAHSL institutions. However, before drawing conclusions about the larger population of academic health science centers, further analysis is needed. Part of the challenge with the present sample relates to sample size, assumptions about the normality of the distribution, and constancy of variance. Additionally, the AAHSL Task Force should encourage member libraries to participate in multiple iterations of LibQUAL+ so that the present study can be replicated, thus strengthening the generalizability of its findings.

These results suggest that overall service quality is not affected by library reporting structure or size of library staff. Because service quality and customer satisfaction levels are not determined by an institution's organizational structure or size, libraries do not need to be large to be outstanding. This reinforces the tradition of service as the library's hallmark. A library's distinctive impact can be on a small patron group, and its performance can be premier without any association with size.

The analysis showed that overall library service quality is influenced slightly by the number of constituents served. A broader inference could be that library programs and staff serving customized needs for smaller, more targeted user groups could elicit increased satisfaction. A practical example of this is the commonly practiced liaison or subject specialist program within academic libraries. An emerging example is the informationist programs in practice at the National Institutes of Health Library and the Annette and Irwin Eskind Biomedical Library, Vanderbilt University [14, 15]. While NIH and Vanderbilt are large institutions, their informationist programs are service initiatives customized for constituent groups rather than the entire user population, i.e., making a large institution “smaller” for users through focused service initiatives based on the special or unique needs of specific populations.

The strongest tested correlation with a moderate and statistically significant effect on overall library service quality was the ratio of library staff to constituents served. The logical implication may be that those libraries with the most compressed ratios would elicit the highest service satisfaction. This evidence might inform libraries making staffing and other resource allocation decisions. Matching staff with modestly sized user populations could enable increased customization with enhanced user satisfaction; this could occur whether the institution is large or small. The ratio of library staff to primary user population should be considered as a new measure for the AAHSL Annual Survey of Statistics for further exploration as an outcomes assessment. Composition of staff (ratio of degreed professionals to support staff or generalists versus specialized liaisons) might also have a significant effect on service quality perceptions.

This study is based on LibQUAL+ as a well-established qualitative assessment tool [6]. Potential limitations include variance in response rates and sampling techniques among different institutions. Comparisons of user satisfaction also may be affected by differing user expectations among libraries. Issues related to respondent bias reflect another possible limitation, although the survey administration has some built-in parameters to discount inconsistencies. Another potential limitation relates to differences in how contributing libraries report AAHSL data, although official member guidelines help optimize consistency.

This study reinforces the idea of a culture of assessment, referring to the integration of assessment methods, philosophies, and practices into a sustained framework of action [3]. Furthermore, it implies continuous and deliberate means of evidence-based decision-making. While quantitative measures have been at the traditional forefront of libraries for centuries, qualitative measures can strengthen and support libraries in centering resources and services in the ways users need and expect them. At a time when libraries are amidst transformational change within an increasingly digital environment, more rigorous assessment studies based on qualitative and quantitative methods are needed to guide their future value and accountability.

A related consideration has much broader implications than the present analysis—empowerment. A culture of assessment and the process of its development is empowering to libraries, individuals, and the profession. For the authors and their institution, engagement in this study helped create a culture of assessment at the Medical College of Georgia Greenblatt Library in that it increased collaborative learning within the institution and promoted communications with users about library changes. The Greenblatt Library maintains a commitment to administer the survey every two to three years, to assess continuously the LibQUAL+ data, and to use LibQUAL+ with other assessment measures in planning library improvements.

This study also helped advance the work of the AAHSL Task Force on Service Quality Assessment, which guided four iterations of LibQUAL+ as a special cohort and presented highlights of results at national meetings. The Task Force paved the way for the creation of the AAHSL Assessment and Statistics Committee formed in November 2005, with overall responsibility for the philosophy, design, and production of statistical reports and data files that describe and support the planning needs of academic health sciences center libraries.

Participation in this study and engagement in the process of library assessment facilitated a number of relevant changes in researcher perceptions. These include an increased awareness of the role of assessment in libraries, the need for greater collaboration between library practitioners and researchers, and the benefit of segmented professional development activities tailored to reflect the diversity of our profession. Specifically, library researchers could benefit from taking advantage of educational opportunities from organizations such as the American Library Association (ACRL assessment activities) and ARL (Service Quality Evaluation Academy and Library Assessment Conferences). Additionally, this study sparked the initiative for a Special Interest Group (SIG) on Assessment and Benchmarking within the Medical Library Association. The SIG was formed in January 2003 with the purpose of fostering learning and increasing cooperation and support among libraries and librarians interested in assessment, benchmarking, and outcome measures.

The multifaceted development of a culture of assessment also empowers the profession. Both authors of this paper have presented findings from the present study at professional meetings in anticipation of engaging more library practitioners in research [16–18]. As librarians, the authors recognize how important it is for library practitioners to realize the symbiotic relationship that exists between research (which at times can seem removed from the daily practice of information professionals) and the services and programs that librarians provide to users. A culture of assessment is empowering because it may help motivate librarians to engage in more research and subsequently strengthen and invigorate the profession.

This paper explores one study as an approach designed to help develop a culture of assessment. The dynamic, multifaceted process of assessment will be enriched by studies that employ alternative and complementary assessment tools, including benchmarking, descriptive research, and mixed-model methods that employ both qualitative and quantitative measures.

A culture of assessment, supported by studies such as this one, empowers libraries and librarians in initiating change based on research. Potentially, it can advance achievement of strategic goals and launch new initiatives. It can help identify areas for improvement based on what users need and expect rather than what librarians think they want and expect. It can affirm or challenge existing knowledge about libraries. Ultimately, it can support evidence-based librarianship through a deliberate and dynamic means of decision-making and practice.

REFERENCES

  1. Fraser B, McClure C, Leahy E. Toward a framework for assessing library and institutional outcomes. Portal Libr Acad. 2002 Oct;2(4):505–28.
  2. Kyrillidou M. From input and output measures to quality and outcome measures, or, from the user in the life of the library to the library in the life of the user. J Acad Librariansh. 2002 Jan;28(1/2):42.
  3. Lakos A. Culture of assessment as a catalyst for organizational culture change in libraries. In: Stein J, Kyrillidou M, Davis D, eds. Proceedings of the 4th Northumbria international conference on performance measurement in libraries and information services; 2001 Aug 12–16; Pittsburgh, PA. Association of Research Libraries; 2002:311–19.
  4. Thebridge S, Dalton P. Working towards outcomes assessment in UK academic libraries. J Libr Info Sci. 2003 Jun;35(2):93–104.
  5. Young S. Research library involvement in learning outcomes assessment programs. Assoc Res Libr. 2003 Oct;230/231:14–17.
  6. Association of Research Libraries (ARL). Statistics and measurement program. [Web site]. Washington, DC: ARL, 2005. [rev. 5 Apr 2005; cited 1 Mar 2004]. <http://www.arl.org/stats/>.
  7. Association of American Medical Colleges. Learning objectives for medical student education: guidelines for medical schools. [Web document]. Washington, DC: AAMC, 1998. [rev. Jan 1998; cited 1 Mar 2004]. <http://www.aamc.org/meded/msop/msop1.pdf>.
  8. American Library Association. Objectives for information literacy instruction: a model statement for academic librarians. [Web document]. Chicago, IL: ALA, 2005. [rev. 1 Jun 2005; cited 1 Mar 2004]. <http://www.ala.org/ala/acrl/acrlstandards/objectivesinformation.htm>.
  9. Lee T. Exploring outcomes assessment: the AAHSL LibQUAL+™ experience. J Libr Adm. 2004 Sep;40(3/4):49–58.
  10. Heath F. LibQUAL+™: a methodological suite. Portal Libr Acad. 2002 Jan;2(1):1–2.
  11. Thompson B, Cook C, Thompson RL. Reliability and structure of LibQUAL+™ scores: measuring perceived library service quality. Portal Libr Acad. 2002 Jan;2(1):3–12.
  12. Webster D, Heath FM. LibQUAL+™ Spring 2002 survey results: ARL LibQUAL+™ 2002 Dimensions. Washington, DC: Association of Research Libraries, 2002.
  13. Shedlock J, Byrd GD. The Association of Academic Health Sciences Libraries annual statistics: a thematic history. J Med Libr Assoc. 2003 Apr;91(2):178–85.
  14. Vanderbilt University Medical Center, Annette and Irwin Eskind Biomedical Library. Research Informatics Consult Service (RICS) home page. [Web document]. Nashville, TN: Annette and Irwin Eskind Biomedical Library. [cited 15 Aug 2006]. <http://www.mc.vanderbilt.edu/biolib/services/rics/index.html>.
  15. NIH Library, National Institutes of Health. NIH Library informationist home page. [Web document]. Bethesda, MD: NIH Library. [cited 15 Aug 2006]. <http://nihlibrary.nih.gov/LibraryServices/Informationists.htm>.
  16. Joubert D, Dennison L, Lee T. Considering the integration of quantitative and qualitative. Presented at the AAHSL annual meeting; San Francisco, CA; Nov 12, 2002.
  17. Joubert D. Em(p)owering your institution: a mixed-model approach to assessment. Presented at MLA '04, the 104th Annual Meeting of the Medical Library Association; Washington, DC; May 23, 2004.
  18. Lee T, Shedlock J, Forsman R. Surfing the tsunami of service quality: the AAHSL/ARL partnership in exploring outcomes assessment through LibQUAL+. Presented at MLA '03, the 103rd Annual Meeting of the Medical Library Association; San Diego, CA; May 4, 2003.
