Abstract
The purpose of this study is to examine the relationship between librarians' perception of the difficulty of patron consultations and a variety of factors that characterize these interactions in the context of an academic library at a large public university. The study also provides insight into how changes in library service operations due to the global COVID-19 pandemic have affected the perceived difficulty of library consultations. Data samples were drawn from a LibInsight dataset and limited to consultations from Fall 2019 and Spring 2020 (N = 3331). Statistical analysis was conducted using ordinal logistic regression to quantify the relationship between perceptions of difficulty and factors indicating pre/post-COVID-19 modifications, patron type, scheduling, question format, library department, consultation duration, semester, and campus. Most notably, results indicate a statistically significant (p < 0.001) increase in the perceived difficulty of consultations that followed the closure of the library's physical spaces due to COVID-19, even when controlling for other factors in multiple model formulations. These results, as well as insights pertaining to other factors associated with library consultations and perceptions of difficulty, have implications for how librarians frame, understand, and manage their workloads. Additionally, findings may provide library service managers with the evidence needed to better coordinate and evaluate library services.
Keywords: Library consultations, Patron services, COVID-19, Reference interactions, Assessment
Introduction
Patron support is an integral component of library work, regardless of institution type. Specifically, consultations to help patrons find books, access materials, and complete research activities are a key facet of most librarians' work in both academic and other types of libraries. Despite extensive research in the library and information sciences literature on consultation service models and assessment, there has been limited examination of which patron and consultation characteristics are most related to the difficulty of these exceptionally important interactions. A better understanding of factors associated with difficult patron interactions would aid librarians and managers in predicting workloads, planning and balancing schedules, and providing adequate internal support.
Library consultations shifted drastically in spring 2020, when many libraries halted face-to-face interactions due to COVID-19. This unexpected disruption presented an opportunity to investigate the difficulty of patron interactions under varying circumstances. This study examines the relationship between a librarian's perception of the difficulty of an academic library consultation and related factors such as patron demographics, location, duration, the medium of communication, and the nature of the inquiry, as well as the relative change in these perceptions associated with the campus closure due to COVID-19. This analysis employed an ordinal logistic regression to examine data from Georgia State University's Library Patron Transaction Form for the Fall 2019 and Spring 2020 semesters.
Literature review
Reference statistics and assessment
Assessing reference services and patron interactions, or consultations, is a prominent theme in library and information sciences literature (Logan, 2009). As Krikelas (1966) points out, there are three common reasons libraries collect statistics: to support administrative decisions, to describe organizational activities, and “to establish general principles and relationships concerning library organizations, administration, and use” (p. 494). While assessment methods ranging from surveys and focus groups to case studies and observational studies have been used (Cassell & Hiremath, 2018), one of the simplest and most common modes is self-imposed observation, which often takes the form of transaction diaries or preset forms. The reference statistics librarians gather through self-imposed observation and supplemental assessment methods can be used to evaluate and improve services, determine optimal staffing levels and locations, and identify unmet service needs (Maloney & Kemp, 2015; Reiter & Huffman, 2016; Scales et al., 2015; Sullivan et al., 1994). Statistics might also be utilized to demonstrate the importance of reference services (Kloda & Moore, 2016) and advocate for additional resources, or, as Ryan (2008) notes, as a source of guidance when facing budget reductions that necessitate adjustments to staffing and service hours.
Despite a seeming consensus in the literature about the general utility and normative practice of collecting reference statistics, there has not been one single approach that has worked for, or been adopted by, a majority of libraries. This lack of standardization has long been a concern: Rothstein (1964) points out that varying approaches have been debated for decades, and as Krikelas (1966) notes, it has long been difficult for libraries to develop uniform terms and categories. Novotny (2002) reported in a study of Association of Research Libraries (ARL) member libraries that there was still little consensus on how reference statistics should be recorded; this inconsistency persisted despite efforts to standardize methods in the 1970s (Emerson, 1977) and 1980s (Association of Research Libraries, 1986). When the ARL began an in-depth discussion of reference statistics, most libraries wanted the ability to monitor success rates of reference transactions and the effect of bibliographic instruction on reference queries, but 23% of libraries involved could not supply statistics on reference transactions (Association of Research Libraries, 1986).
Even in the twenty-first century, with reference statistics now more widely captured, data recorded about reference interactions is often specific to an institution and focused on what a particular library wants to know about the services it provides (Murgai, 2006), suggesting that the long history of inconsistency reflects the varying locations under study and the individual needs and choices of those collecting the data (Oberg, 2011). Logan (2009) argues that measuring reference duties has changed over time from assessing quality to justifying the need for reference services, and concludes that because local conditions and advocacy needs vary, institutions should develop their own individualized assessment methodologies that create a holistic perspective of reference services. Thus, while inconsistent data makes comparative studies of reference statistics challenging, the institution-specific benefits of collecting data in a particular way might be perceived to outweigh the disadvantages of nonstandard practice.
Regardless of their degree of standardization, data collection and assessment projects focused on reference services in academic libraries tend to center patron service needs (Scales et al., 2015), student experiences and satisfaction (Murgai, 2012; Reiter & Cole, 2019; Rogers & Carrier, 2017), or student learning outcomes (Bradley et al., 2020; Maddox & Stanfield, 2020; Miller, 2018; Newton & Feinberg, 2020; Sikora & Fournier, 2016). The effects on librarians of providing these services and the use of reference statistics in predicting or assessing librarian workload and effort are comparatively little studied.
Difficulty of patron interactions
The Reference Effort Assessment Data (READ) Scale is a widely used tool, first developed in 2003 with the goal of systematizing perceptions of question difficulty when recording patron interactions in libraries and thus making reference statistics more useful (Gerlich & Berard, 2007). The six-point scale is the most common method that librarians use for measuring and comparing reference data within a standardized framework, although recent initiatives like ACRL's Project Outcome aim to provide more consistent instruments for assessing public services. Designed as a way to measure the skills and resources required to answer a question or complete a consultation, thus providing a mechanism for tracking and reporting “workload effort” (Gerlich & Berard, 2010, p. 137), the READ scale is often incorporated into reference tracking and assessment models and used to help differentiate between complex and simple interactions or those that require more and less effort from the librarian (Cassell & Hiremath, 2018).
Although the READ Scale was widely tested, with many institutions reporting willingness to adopt it (Gerlich & Berard, 2010), it is not without limitations. Some librarians, preferring fewer levels to choose from, needing a scale tailored to a particular reference format, or wishing to incorporate existing assessment terminology or systems for categorizing patron interactions, have instead implemented modified versions of the scale (Sheehan, 2011; Stieve & Wallace, 2018). Furthermore, while the READ Scale is a usable model for tracking effort in a somewhat consistent way and offers more nuance than traditional reference classification systems that use broad categories such as “directional” and “ready-reference” questions, its effectiveness and the accuracy of the resulting data depend on consistent training of users and normalization of inputs (Bowron & Weber, 2017; Gerlich & Berard, 2010). In other words, although the READ Scale relies on clearly defined categories, it does not perfectly address the underlying issue of consistently assessing effort and difficulty in order to select an appropriate category when reporting interactions.
Any attempt to study the effort or difficulty involved in patron interactions has limitations, particularly because effort and difficulty are relative and are not easily defined. While the difficulty of reference questions asked in academic libraries was once little studied (Whitlatch, 1989), the challenge of determining the difficulty of reference questions has received considerable attention in the literature over the last thirty-five years (Brown, 1985; Childers et al., 1991; Janes, 2002; Michell & Dewdney, 1998; Robinson, 1989; White & Iivonen, 2002). Some of the interest in determining the difficulty of questions stems from the use of these judgments to triage and appropriately direct inquiries. For example, question difficulty is often part of the training process and workflow for helping student workers, paraprofessionals, or other non–subject experts providing in-person or chat reference services know when to refer a patron to a specialist librarian (Avery & Ward, 2010; Keyes & Dworak, 2017; Pomerantz, 2004).
Recognizing difficult questions or complex research needs, however, is not always straightforward. Oberg (2011) notes that a recurring theme in this literature about difficulty is in fact the challenge of defining and measuring difficulty. While there are many different frameworks for defining the difficulty or complexity of a reference question, all of these typically involve necessarily self-determined and thus subjective judgments by librarians rather than consistent, objective measures. These subjective measures include the type of source required to answer (Brown, 1985), the amount of effort required (Robinson, 1989), and the time or skill required (Childers et al., 1991). Childers et al. (1991) propose addressing this problem by instead using proxy measures that are perceived to be more easily measured, such as number of sources, prior knowledge of subject, and ease of access to sources. But even less subjective measures are problematic proxies for difficulty; for example, the time spent answering a question or completing a consultation can be influenced by schedule constraints, additional patrons waiting for assistance, and other factors unrelated to the complexity of the question (Oberg, 2011). A further problem for tracking and understanding the effort expended in patron interactions is that the difficulty of the initial question itself does not necessarily correspond to the difficulty or complexity of—in other words, the effort expended during—the consultation. That is to say, studies of question difficulty used to inform referral decisions might not map perfectly onto assessments of interaction difficulty as gauged after the fact (Michell & Dewdney, 1998).
The literature exploring the difficulty of patron transactions covers a wide range of patron interaction types, from bibliographic questions presented at a service desk to emails and phone calls with reference librarians to extended, individualized appointments with specialists. The latter category of interaction is sometimes treated simply as a subset of reference service featuring particularly involved research questions (Cassell & Hiremath, 2018) and other times considered part of a library's instruction program (Yi, 2003). These interactions—referred to variously as one-on-one instruction, reference appointments, research clinics (Becker, 1993; Cardwell et al., 2001), personal research sessions (Whelan & Hansen, 2017), personalized research assistance, and research conversations (Maksin, 2015)—have emerged from the shift in service models from the “professionally staffed reference desk” to “a research consultation service” that relies on office hours or appointments where “librarians can spend uninterrupted time working with a user to offer research assistance and targeted instruction” (Bopp & Smith, 2011, p. 329). But while discussions of this type of patron interaction acknowledge their intensive nature, the existing literature pays little attention to how the rise of this model affects the interactions' difficulty and what degree of effort is required from librarians. These extended consultations are not the exclusive focus of this article, but they are of particular interest because such interactions tend to be time- and resource-intensive and thus warrant careful consideration in the context of operational decision-making.
COVID-19
The dataset examined in this study contains data collected during the period in which the Georgia State University Library's model for patron transactions shifted suddenly and dramatically in response to the COVID-19 pandemic (caused by the SARS-CoV-2 virus). On Friday, March 13th, GSU closed for an extended spring break and then transitioned to online learning for the remainder of the semester. From March 13th until May 5th (the end of the semester and last date included in the dataset), the majority of library employees worked from home, all in-person reference services were suspended, and all patron interactions took place via phone, email, chat, or other remote technologies. GSU's response was typical of academic libraries during this time: as the Centers for Disease Control and Prevention began issuing social distancing guidelines (National Center for Immunization and Respiratory Diseases, 2020) and cases of COVID-19 appeared across the United States, many U.S. academic libraries expanded their virtual consultation offerings, shifted rapidly from an in-person to an online consultation service model, and advertised one-on-one virtual research appointments (“COVID-19 News”, n.d.). Lisa Janicke Hinchliffe and Christine Wolff-Eisenberg, with Ithaka S+R, quickly launched a survey on March 11th to gather detailed information about how academic libraries were responding to the crisis. Final results are not yet available, but preliminary reports indicate that 65% of libraries at first continued reference services as usual, with only 25% limiting hours or switching their offerings to virtual or phone only (Hinchliffe & Wolff-Eisenberg, 2020a). Within two weeks after the survey launched, however, 96% of libraries initially reporting continuation of normal reference services reported that they had switched to exclusively virtual and online reference (Hinchliffe & Wolff-Eisenberg, 2020b).
In quickly adjusting to these new circumstances, academic libraries were in effect expanding on the growing practice of offering and assessing virtual patron consultations—that is, patron interactions that occur over email, instant messaging, or video conferencing (Bennett, 2017; Duff & Johnson, 2001; Maddox & Stanfield, 2019; Steiner, 2011; Steiner, 2013; Tibbo, 1995). This virtual service is particularly important for libraries serving institutions with multiple campuses or extensive distance-learning programs (Guillot & Stahr, 2004). The sudden growth in virtual reference services spurred by COVID-19 highlights the relevance of the Association of College and Research Libraries' (ACRL's) “Standards for Distance Learning Library Services,” which deems consultation services essential support in an online-learning environment (ACRL Distance Learning, 2008), and the Reference and User Services Association's (RUSA's) “Guidelines for Implementing and Maintaining Virtual Reference Services,” which outlines definitions and core requirements for any remote reference services, including virtual consultations (American Library Association, 2017). The aforementioned extensive body of literature on standards and methods of conducting virtual reference tends to focus primarily on its effectiveness in meeting patron needs, which has largely been the focus of the emergency online pivot during the COVID-19 crisis, and less on virtual reference's consequences for librarians.
Transitioning to fully online modes of patron interaction during the recent disruption has enabled librarians to provide continuity of service, emphasize their ongoing availability to student and faculty researchers, and develop and test new skills and strategies. But while some academic libraries have been forced in the past to rapidly modify their reference services in response to localized disruptions such as fires and natural disasters (Benefiel & Mosley, 2000; Littrell & Coleman, 2019; Liu et al., 2017; Missingham & Fletcher, 2020), it has not been well established how such disruptions might affect interaction difficulty, particularly in the case of widespread physical library closures prompted by public health concerns. It is also not yet known how the many disruptive and challenging circumstances that COVID-19 has created for students (Betancourt, 2020) might affect the frequency, type, or difficulty of patron interactions during the relevant period. These conditions have also underscored the need for clarity regarding the burdens and requirements of patron interactions in order to effectively manage expectations and workloads in an environment of rapidly evolving demands and delivery methods.
Patron type, interaction format, and time spent
Beyond a broad examination of reference difficulty via proxies such as skill required or number of sources consulted for a particular question, there are factors not connected to the topic or substance of the question that affect how that exchange takes place and how much effort it involves. Analyses of reference statistics have frequently examined relationships between such factors as the type of patron, the format of the interaction, and the time a librarian spends on the interaction. Some of these studies have examined the relationship between patron type and interaction format in particular. For example, research into which patron categories are most likely to use virtual reference has found that undergraduates have comparatively high rates of usage for chat reference services, while faculty and graduate students initiate most of the virtual interactions overall (Broughton, 2002; Nolen et al., 2012). Similarly, Schwartz (2004) and Lewis and DeGroote (2008) report that graduate students are more likely than other groups to use email reference. Gerlich and Berard (2010) observe that the format of an interaction can alter its difficulty level from what might be expected based on question content alone, and Maloney and Kemp (2015) specifically found that chat reference questions were seen as more complex than those asked in person.
Several studies have evaluated the amount of time spent by librarians on patron interactions in a variety of formats (Attebury et al., 2009; Gale & Evans, 2007; Lederer & Feldmann, 2012; Spencer & Dorsey, 1998; Yi, 2003). Time spent is often a particularly important measure when determining whether a reference service or initiative can scale, although use of scheduling tools, patron-initiated appointment models, and other management strategies can mitigate workload problems associated with interaction length or otherwise improve scalability by decreasing some of the logistical burdens on librarians (Cole & Reiter, 2017; Hess, 2014; Hoskisson & Wentz, 2001; Newton & Feinberg, 2020; Reiter & Cole, 2019). Magi and Mardeusz (2013) note that individual research consultations increase the likelihood that interactions based on complex or difficult questions will be rewarding because they “give students and librarians more time and space” compared to more perfunctory exchanges in person or online. These non-scheduled interactions are “often stressful and frustrating because the librarian feels pressure to ‘dispense with’ the student more quickly to help other waiting patrons” (pp. 290–91). This observation suggests that while the amount of time spent on an interaction might correspond with the difficulty of the question, time spent does not necessarily reflect the stress or burden involved in the interaction. It is clear from the literature that patron type, interaction format, and time spent are interrelated and affect the outcome of an interaction. However, despite extensive research over decades on librarian burnout across various library types (Affleck, 1996; Birch et al., 1986; Lindén et al., 2018; Nardine, 2019; Nelson, 1987; Sheesley, 2001; Smith & Nielsen, 1984), there has been little exploration of whether these characteristics of consultations (e.g. patron type, format, and time spent) are meaningfully predictive of an interaction's perceived complexity or difficulty and thus its effects on librarian capacity or burnout potential.
Patron transactions within specialized teams
One notable variable explored in the current study is the relationship between the reported difficulty of a patron interaction and the involvement of a specialized team of reference professionals, particularly those in research data services and special collections units. Dedicated support in libraries for data analysis, visualization, and management has grown considerably in recent years, making research data services an emerging area of academic librarianship extensively explored in the literature (Corrall et al., 2013; Koltay, 2017; Pryor et al., 2013; Si et al., 2015; Swygart-Hobaugh, 2017; Tenopir et al., 2012; Tenopir et al., 2014). Less well examined is the difficulty or complexity of these increasingly frequent research data transactions. Gao et al. (2018) use time spent as a proxy for difficulty in data-related patron interactions, describing data consultations as “lengthy” and noting that this “suggests the complexity and intensity of data-related questions” (p. 589). This observation may not be generalizable, though, as the time used for data-focused interactions is not compared to that for all patron interactions, and it is unclear how many of the questions in their dataset were answered by data specialists versus other librarians. Parrish (2006) similarly argues that GIS consultations, one type of research data interaction, are more time-consuming than other patron interactions but does not provide comparative data.
While many special collections and archives have long been housed within academic libraries, often staffed by individuals with traditional library reference training and tasked with collecting reference statistics in library-wide systems, the literature on reference assessment in these settings and on the difficulty of patron interactions is also sparse. This might be attributable in part to differences in the types of questions asked of special collections and archives teams (Lavender et al., 2005; Martin, 2001). With the release and implementation of the Society of American Archivists' “Standardized Statistical Measures and Metrics for Public Services in Archival Repositories and Special Collections Libraries” (SAA-ACRL/RBMS Joint Task Force, 2018), which includes metrics such as “time spent responding” and “question complexity” for patron interactions, special collections teams may now begin to collect more standardized statistics to assess the difficulty and burden of interactions (Hawk, 2018). Currently, though, there is little known about how difficulty of interactions compares between special collections units and other specialized teams or institutions' overall reference services, especially during a disruptive event like COVID-19 that limits the team's and researchers' access to collections.
Methods
Data
Data for this analysis spans the duration of the Fall 2019 and Spring 2020 semesters and was drawn from Georgia State University (GSU) Library's Patron Transaction Log. Samples were drawn from the first day of courses for Fall 2019, August 26th, through the last day of courses for Spring 2020, May 5th. GSU is a large public university with more than 50,000 students across seven campuses: Atlanta (main campus), Alpharetta, Buckhead, Clarkston, Decatur, Dunwoody, and Newton. The library at each campus is a central hub for student learning, socializing, and coursework completion. Given the dynamic nature of GSU, with one of the largest student bodies in the United States, multiple campuses, and a high number of library consultations, the data from GSU's patron interactions are uniquely suited for this analysis. Beyond the demographic characteristics of the university, GSU Library employees use an instrument that records myriad details about their individual consultations, including date, campus, patron type, question difficulty, time spent, question format, patron department, whether the consultation was scheduled, whether the consultation was related to a specialized team, and space to write in more information. The analysis of consultation difficulty in this study uses most of the aforementioned variables, including difficulty level, whether the consultation was scheduled, patron type, format, date, and campus.
Level of Difficulty
Level of Difficulty is measured with a four-point ordinal scale: (1) “Directional/Very Basic,” (2) “Some Effort Required,” (3) “Effort Required,” and (4) “Significant Effort Required.” This ordinal scale represents a modified version of the READ Scale (Gerlich & Berard, 2007) as it is implemented at GSU. The (1) “Directional/Very Basic” designation is used as the reference group in the models. Level of Difficulty is the dependent variable in this study.
COVID-19
GSU Library implemented a work-from-home policy for the overwhelming majority of employees as of March 16th, 2020, which was the first day of spring break and thirteen days after the midterm of the semester. We generated a variable based on the date to indicate whether a consultation occurred prior to the work-from-home policy or from the time the policy began until the end of the semester, May 5th, 2020. Thus, the COVID-19 variable is coded as (0) prior to the work-from-home policy and (1) after the work-from-home policy. Prior to COVID-19 (0) is used as the reference group in the models. COVID-19 is a main independent variable in this study.
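The date-based recoding described above can be sketched as follows. This is an illustrative sketch rather than the authors' actual Stata code; the cutoff date comes from the text, while the function name is our own.

```python
from datetime import date

# Cutoff taken from the text: the work-from-home policy began March 16th, 2020.
WFH_START = date(2020, 3, 16)

def covid_flag(consult_date: date) -> int:
    """Return 0 for consultations before the work-from-home policy, 1 otherwise."""
    return 1 if consult_date >= WFH_START else 0
```

For example, a consultation on February 10th, 2020 is coded 0, while one on April 1st, 2020 is coded 1.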
Duration
Time spent on a consultation is measured with a seven-point interval scale: (1) “less than 10 minutes,” (2) “10–20 min,” (3) “20–30 min,” (4) “30–40 min,” (5) “40–50 min,” (6) “50–60 min,” and (7) “60+ min,” with the additional possibility of (8) “Unknown.” “Unknown” responses were removed, as they shift the level of measurement for the scale from interval to nominal and thus limit the analytical interpretation of the data points. Consultations recorded as (1) “less than 10 minutes” are used as the reference group in the models. Duration is a main independent variable in this study.
Scheduled
Scheduled is measured with a single indicator “Was this transaction scheduled in advance?” and the options are coded as (0) “No” and (1) “Yes.” The reference category used in this variable for the model is (0) “No.” Scheduled is a main independent variable in this study.
Specialized Teams
GSU Library records patron interactions that are managed by two teams whose work is widely considered to be distinct from “traditional” subject-librarian and reference-librarian work. The first group is Special Collections and Archives (SCA), representing a team of archivists and other professionals who support research, scholarship, and preservation activities pertaining to GSU as an institution and to the subject areas represented in the collections. The second group is Research Data Services (RDS), representing a group of five library faculty members and one graduate research assistant who specialize in data visualization, quantitative and qualitative research methods, and a variety of software and methods related to data-driven research. Within the data, individual samples that pertain to SCA or RDS are explicitly noted and easily distinguished from other patron interactions. Both SCA and RDS observations in the data are labeled as (0) or (1) where appropriate. Implicitly, the reference group for Specialized Teams represents “traditional” patron-librarian interactions and is represented by observations where both SCA and RDS are labeled as (0).
Patron Type
Patron Type is generated from a single indicator with the following options: “Alumni,” “Community,” “Library Donor,” “Faculty,” “Graduate Student,” “Library Colleague,” “PhD Student,” “Staff,” “Undergraduate Student,” “University Administration,” and “Unknown.” We collapsed “PhD Student” into the “Graduate Student” category. Each of the aforementioned categories was dichotomized for the analysis. While some types of patrons had small numbers of consultations, we decided to keep them separate, as they are distinct groups of library patrons and collapsing them into a generic “other” category would have removed the potential nuance of differences between the groups. The smallest groups were “University Administration” (n = 3), “Alumni” (n = 28), and “Staff” (n = 55). Patrons with an unknown status were used as the reference group in the models. Patron Type is a main independent variable in this study.
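The collapsing of “PhD Student” into “Graduate Student” and the subsequent dichotomization can be sketched as follows. The category labels come from the text; the function names and the treatment of “Unknown” as the omitted reference column are our own illustrative choices.

```python
def collapse_patron_type(patron_type: str) -> str:
    """Fold 'PhD Student' into the 'Graduate Student' category."""
    return "Graduate Student" if patron_type == "PhD Student" else patron_type

def dichotomize(patron_types, categories):
    """Build one 0/1 dummy indicator per category. 'Unknown' serves as the
    implicit reference group, so it gets no column of its own."""
    return {
        cat: [1 if collapse_patron_type(p) == cat else 0 for p in patron_types]
        for cat in categories
        if cat != "Unknown"
    }
```

Keeping small groups such as “University Administration” (n = 3) as their own dummies, rather than pooling them into an “other” column, is what preserves the between-group distinctions the text describes.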
Format of Consultation
Library consultation format was measured with a series of dichotomous variables generated from a single nominal indicator titled “Question Format” with the response options of (1) “In Person,” (2) “Email,” (3) “Phone,” (4) “Online, real-time,” and (5) “Social Media.” “Online, real-time” and “Social Media” were collapsed into a single indicator of “Online.” “In Person” was used as the reference group in the models, because most traditional library consultations occur in person at locations like a reference desk or face to face with a librarian. Format of Consultation is a main independent variable in this study.
Semester
The semester in which the consultation occurred was controlled for in this study. We used the dates of the consultations to determine whether each consultation occurred in the Fall 2019 or Spring 2020 semester. “Fall 2019” is used as the reference group to respect temporal order. “Spring 2020” has a higher average difficulty (M = 2.63, SD = 1.02) than “Fall 2019” (M = 2.33, SD = 1.03), and this difference is highly statistically significant (t(3329) = 8.19, p < 0.001). Given the higher number of consultations in the fall semester (n = 1756, 52.72%) and the statistically significant difference between the semesters on Level of Difficulty, the dependent variable, Semester is used as a control variable in this study.
Campus
GSU has a total of seven campuses: Atlanta (main campus), Alpharetta, Buckhead, Clarkston, Decatur, Dunwoody, and Newton. Campus is measured by a single indicator where librarians selected from the list of aforementioned cities across metro Atlanta to indicate the campus where the consultation took place or with which the librarian is affiliated. We generated dichotomous variables for each campus location. “Atlanta,” the main campus, is used as a reference in the model. Campuses are used as control variables in this study.
We applied listwise deletion, removing any patron transaction that was missing one or more of the study variables. This reduced the total number of patron transactions (N = 26,334) to a final sample size of N = 3331. The large amount of missing data is due to the normative practice at GSU of logging only the difficulty of the question, with little or no other information, when librarians are particularly busy at service points.
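Listwise deletion simply drops any record with a missing value on any study variable; a minimal sketch (Python for illustration; the record structure and variable names are hypothetical):

```python
# Illustrative study-variable names, not the LibInsight field names
STUDY_VARS = ["difficulty", "covid", "patron", "scheduled",
              "format", "department", "duration", "semester", "campus"]

def listwise_delete(records):
    # Keep only records with a non-missing value for every study variable
    return [r for r in records if all(r.get(v) is not None for v in STUDY_VARS)]

records = [
    {v: 1 for v in STUDY_VARS},                       # complete -> kept
    {**{v: 1 for v in STUDY_VARS}, "duration": None}, # missing -> dropped
]
print(len(listwise_delete(records)))  # 1
```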
Methods of analysis
Basic descriptive statistics and ordered logistic regression were used to complete this analysis. The basic descriptive statistics used were percent, n, and range. We do not report standard deviations, as each study variable is dichotomous, which makes standard deviations uninformative. Ordered logistic regression was used because the dependent variable, Level of Difficulty, is an ordinal scale: higher values mean more difficulty, but the exact difference in difficulty between adjacent categories cannot be meaningfully quantified. Prior to running the analysis, we tested for multicollinearity among all independent and control variables to ensure the model assumptions for ordered logistic regression were met (Pampel, 2000). Six model formulations were evaluated in a stepwise manner, with each successive model including a progressively broader set of independent variables. Model 1, the base model, uses only the COVID-19, Semester, and Campus independent variables. Model 6 contains all independent variables. Analysis was completed in Stata 16.1. GSU granted IRB approval for this study in April 2020.
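Under the proportional-odds (ordered logit) model, the probability of each response category comes from differences of cumulative logistic probabilities evaluated at the estimated cut points. A stdlib-only sketch (Python for illustration; the cut points and linear predictor below are made up, not the fitted values from Table 2):

```python
import math

def ordered_logit_probs(xb, cuts):
    # P(Y <= k) = logistic(cut_k - xb); category probabilities are the
    # successive differences of these cumulative probabilities.
    def logistic(z):
        return 1.0 / (1.0 + math.exp(-z))
    cum = [logistic(c - xb) for c in cuts] + [1.0]
    return [cum[0]] + [cum[k] - cum[k - 1] for k in range(1, len(cum))]

# Illustrative only: three cut points yield four difficulty categories
probs = ordered_logit_probs(xb=0.5, cuts=[-2.0, 0.0, 1.5])
print([round(p, 3) for p in probs])  # probabilities sum to 1
```

Raising the linear predictor `xb` shifts probability mass toward the higher (more difficult) categories, which is how a positive coefficient in Table 2 should be read.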
Results
Descriptive statistics
As seen in Table 1 , observations of Level of Difficulty, the dependent variable in our models, are spread out across four categories: “Basic,” “Some Effort,” “Effort,” and “Significant Effort.” The most common observation for this variable was “Some Effort” (N = 1092, 32.78%). Most observations for the COVID-19 variable are from before the GSU Library closed its physical spaces (N = 2651, 79.59%), which approximately reflects the percent of days of the academic year pre-COVID-19 and post-COVID-19. Observations for the Patron Type variable are dominated by three groups, the largest being “Undergraduate Student” (N = 1095, 32.87%), followed closely by “Graduate Student” (N = 1021, 30.65%) and then “Faculty” (N = 504, 15.13%). All remaining patron types combined account for less than 22% of total observations.
Table 1.
Descriptive statistics.
Ordered logistic regression modelling results
In Table 2 , the magnitude and statistical significance of individual independent variable parameters are reported across six different models. These results highlight a variety of interesting insights into the data and the nature of GSU Library's patron services.
Table 2.
Level of difficulty of consultations, ordered logistic regression.
| | Model 1 | Model 2 | Model 3 | Model 4 | Model 5 | Model 6 |
|---|---|---|---|---|---|---|
| COVID-19 | 0.45⁎⁎⁎ | 0.34⁎⁎⁎ | 0.40⁎⁎⁎ | 0.48⁎⁎⁎ | 0.35⁎⁎ | 0.12 |
| Patron Type (undergrad) | | | | | | |
| Alumni | | 0.57 | 0.26 | 0.20 | 0.75⁎ | 0.24 |
| Community | | −0.26 | −0.33⁎ | −0.25 | 0.51⁎⁎ | 0.10 |
| Library donor | | −0.50⁎⁎ | −0.74⁎⁎⁎ | −0.58⁎⁎⁎ | 0.34 | 0.46⁎ |
| Faculty | | 0.20⁎ | 0.14 | 0.19 | 0.45⁎⁎⁎ | 0.07 |
| Graduate student | | 0.98⁎⁎⁎ | 0.65⁎⁎⁎ | 0.66⁎⁎⁎ | 0.93⁎⁎⁎ | 0.35⁎⁎ |
| Library colleague | | 0.21 | −0.17 | −0.17 | 0.40⁎ | 0.12 |
| Staff | | 0.36 | 0.15 | 0.20 | 0.86⁎⁎ | 0.31 |
| Administration | | 0.81 | −0.34 | −0.12 | 0.74 | 0.69 |
| Unknown | | −1.03⁎⁎⁎ | −0.74⁎⁎⁎ | −0.98⁎⁎⁎ | −0.28 | 0.68⁎⁎⁎ |
| Scheduled | | | 1.61⁎⁎⁎ | 1.30⁎⁎⁎ | 1.47⁎⁎⁎ | 0.09 |
| Format (in person) | | | | | | |
| Email | | | | −0.54⁎⁎⁎ | −0.85⁎⁎⁎ | −0.11 |
| Online | | | | 0.39 | 0.56⁎ | 0.34 |
| Phone | | | | −0.39 | −0.60⁎⁎ | −0.55⁎ |
| Specialized Teams | | | | | | |
| Special collections | | | | | −1.28⁎⁎⁎ | −0.75⁎⁎⁎ |
| Research data services | | | | | −1.50⁎⁎⁎ | −1.71⁎⁎⁎ |
| Duration (less than 10 min) | | | | | | |
| 10–20 min | | | | | | 2.43⁎⁎⁎ |
| 20–30 min | | | | | | 3.95⁎⁎⁎ |
| 30–40 min | | | | | | 4.85⁎⁎⁎ |
| 40–50 min | | | | | | 5.74⁎⁎⁎ |
| 50–60 min | | | | | | 5.74⁎⁎⁎ |
| 60 min or longer | | | | | | 7.17⁎⁎⁎ |
| Semester (fall) | −0.17⁎ | −0.23⁎⁎ | 0.00 | −0.01 | 0.04 | −0.21⁎ |
| Campus (Atlanta) | | | | | | |
| Alpharetta | −3.08⁎⁎⁎ | −2.97⁎⁎⁎ | −3.11⁎⁎⁎ | −3.41⁎⁎⁎ | −4.13⁎⁎⁎ | −2.53⁎⁎⁎ |
| Buckhead | −0.05 | −0.61 | −0.51 | −0.71 | −1.45⁎⁎ | 0.18 |
| Clarkston | −3.22⁎⁎⁎ | −3.11⁎⁎⁎ | −2.85⁎⁎⁎ | −3.23⁎⁎⁎ | −3.97⁎⁎⁎ | −2.88⁎⁎⁎ |
| Decatur | −3.01⁎⁎⁎ | −2.83⁎⁎⁎ | −2.59⁎⁎⁎ | −2.95⁎⁎⁎ | −3.75⁎⁎⁎ | −2.73⁎⁎⁎ |
| Dunwoody | 0.04 | 0.37 | 0.02 | 0.07 | −0.27 | −0.31 |
| Intercepts (cut points) | | | | | | |
| Basic → Some Effort | −2.28 | −2.19 | −1.89 | −2.29 | −2.98 | −1.62 |
| Some Effort → Effort | −0.24 | −0.03 | 0.35 | −0.03 | −0.59 | 2.11 |
| Effort → Significant Effort | 1.10 | 1.43 | 1.99 | 1.63 | 1.19 | 4.98 |
| Pseudo r² (Δ in r²) | 0.13 | 0.16 (0.03) | 0.20 (0.04) | 0.20 (0.00) | 0.23 (0.03) | 0.43 (0.20) |
| χ² (df) | 1145 (7)⁎⁎⁎ | 1420 (16)⁎⁎⁎ | 1788 (17)⁎⁎⁎ | 1828 (20)⁎⁎⁎ | 2107 (22)⁎⁎⁎ | 3923 (28)⁎⁎⁎ |

N = 3331. Data: Georgia State University Library Consultations Fall 2019–Spring 2020. Cell entries are ordered logistic regression coefficients (b); reference categories in parentheses.

⁎ p < 0.05. ⁎⁎ p < 0.01. ⁎⁎⁎ p < 0.001.
The overall model fit for all six models is shown to be statistically significant (p < 0.001), according to each model's respective chi-squared (χ2) and degrees-of-freedom (df) metrics. While the pseudo-r2 performance metric increases with almost every progressive model formulation, there was little to no increase in model performance between Models 3 and 4 (Δ in r2 < 0.001), indicating that the addition of the Format variable to the model had no appreciable effect on overall model performance. Conversely, the largest increase in model performance is between Models 5 and 6 (Δ in r2 = 0.20) with the inclusion of Duration (length of consultation). This indicates that the addition of the Duration variable has a notable influence on overall model performance.
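Stata's reported pseudo-r² is McFadden's measure, 1 − lnL(full)/lnL(null). As a consistency check, the reported χ² = 2·(lnL(full) − lnL(null)) and pseudo-r² for Model 6 jointly imply a pair of log-likelihoods. A sketch (Python for illustration; the log-likelihoods are back-calculated from the rounded Table 2 statistics, not taken from the Stata output):

```python
def mcfadden_r2(ll_full, ll_null):
    # McFadden's pseudo-R^2 as reported by Stata: 1 - lnL(full) / lnL(null)
    return 1.0 - ll_full / ll_null

# Model 6 reports chi2 = 3923 and pseudo-r2 = 0.43 (both rounded), implying:
chi2, r2 = 3923.0, 0.43
ll_null = -chi2 / (2.0 * r2)      # approximately -4561.6
ll_full = (1.0 - r2) * ll_null    # approximately -2600.1
print(round(mcfadden_r2(ll_full, ll_null), 2))  # 0.43
print(round(2.0 * (ll_full - ll_null)))         # 3923
```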
All models reported in Table 2 contain the variables COVID-19, Semester, and Campus. For COVID-19, the parameter estimates show that patron interactions that took place after the libraries closed their physical spaces were rated as more difficult by librarians. In all but Model 6, the COVID-19 variable's relationship with difficulty is statistically significant (Models 1–4: p < 0.001; Model 5: p < 0.01). In contrast, the Semester variable has a negative relationship with the dependent variable in multiple model formulations, indicating that, once the COVID-19 closure is accounted for, patron interactions in Spring 2020 were generally given lower difficulty ratings. The Campus variable indicates that patron interactions at three of the five library locations represented in the models (“Alpharetta,” “Clarkston,” and “Decatur”) have highly significant negative relationships (p < 0.001) with the dependent variable when compared to patron interactions at the downtown Atlanta campus. These relationships hold across all models. The parameter estimates for the remaining two campuses, “Buckhead” and “Dunwoody,” are not statistically significant in most of the models, likely due to the relatively small sample sizes associated with these two campuses (N = 18, 0.54%, and N = 11, 0.33%, respectively).
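Because ordered-logit coefficients are on the log-odds scale, each b in Table 2 can be read as an odds ratio by exponentiation. For example, the Model 1 COVID-19 coefficient of 0.45 implies roughly 57% higher odds of a consultation falling in a higher difficulty category, holding the other variables constant:

```python
import math

def odds_ratio(b):
    # Convert an ordered-logit coefficient (log-odds) to an odds ratio
    return math.exp(b)

print(round(odds_ratio(0.45), 2))   # COVID-19, Model 1 -> 1.57
print(round(odds_ratio(-3.08), 2))  # Alpharetta, Model 1 -> 0.05
```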
Patron Type, Scheduled, Format, Specialized Teams, and Duration are only included in specific subsets of models. For Patron Type, although the parameter estimates and statistical significance vary from group to group and model to model, patron interactions involving graduate students are consistently shown to be labelled as more difficult by librarians in all model formulations as compared to the “Undergraduate Student” reference group. In Model 6 this result is statistically significant (p < 0.01), and for Models 2, 3, 4, and 5, the results are highly significant (p < 0.001). The Scheduled variable exhibits similar patterns to Patron Type. With the exception of Model 6, where the relationship is not statistically significant, the parameter estimates for Scheduled are highly significant in each model (p < 0.001) and indicate that patron transactions that were scheduled in advance tended to be more difficult than unscheduled interactions.
For the Format variable, the parameter estimates across all factor levels and relevant model formulations vary with respect to their statistical significance. When compared to in-person interactions, patron interactions labelled as “Email” and “Phone” had a negative relationship with the dependent variable, indicating that these interactions tended to be recorded as less difficult by librarians. While the parameter estimates indicate that interactions labelled as “Online” tended to be recorded as more difficult by librarians, these results are not statistically significant in two out of three relevant models.
The results for the variable Specialized Teams indicate a strong distinction between the substantive types of patron interactions librarians engage in. In Models 5 and 6, the parameter estimates for both “Special Collections” and “Research Data Services” show a negative relationship with the dependent variable and are statistically highly significant (p < 0.001) as compared to more “traditional” patron-librarian interactions.
Lastly, the Duration variable, which is only present in Model 6, is statistically highly significant (p < 0.001), and the magnitude of the parameter estimate increases notably at each successive factor level of Duration. The presence of Duration in Model 6 also appears to attenuate the effects and statistical significance of many variables and individual factor levels in the model. Both Level of Difficulty and Duration are ordinal variables, and these results suggest that the two are at least partially correlated. This resonates intuitively, as one would expect patron interactions and queries that take longer to resolve to be perceived as more difficult by individual librarians.
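The association between two ordinal variables such as Duration and Level of Difficulty can be quantified with a Spearman rank correlation. A self-contained sketch using average ranks for ties (Python for illustration; the toy data below are not drawn from the study dataset):

```python
def avg_ranks(xs):
    # Average ranks, assigning tied values the mean of their rank positions
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        for k in range(i, j + 1):
            ranks[order[k]] = (i + j) / 2 + 1
        i = j + 1
    return ranks

def spearman(x, y):
    # Spearman's rho: Pearson correlation of the rank-transformed data
    rx, ry = avg_ranks(x), avg_ranks(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5

# Toy ordinal data: longer consultations coded with higher difficulty
duration = [1, 1, 2, 3, 4, 5, 6, 7]
difficulty = [1, 2, 2, 2, 3, 3, 4, 4]
print(round(spearman(duration, difficulty), 2))  # strong positive association
```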
Discussion
Utility and application
The analytic approach shown can be used to provide libraries with measurable insights into the nature of librarian-patron interactions. While the study results do not provide causal explanations for the relationships that are manifest in the data, they offer a variety of insights that can provide library service managers with the leads necessary to further investigate and improve the quality of patron services.
First, and most timely, the results show that the average difficulty of patron interactions has increased since the GSU campuses closed due to COVID-19. Since the results also show that online patron interactions are associated with higher Level of Difficulty ratings in general, library service managers may expect to see elevated difficulty ratings going into the Fall 2020 semester as patron services remain largely remote and online. This increased perception of difficulty may be a consequence of librarians needing better IT equipment, training, and practice in using synchronous communication tools for providing library services and support. These results may also mean that librarians need to be able to allocate larger portions of their time toward individual patron interactions at the expense of time spent on other duties in order to maintain a healthy work schedule.
Second, the results indicate that librarians do not rate the difficulty of individual patron transactions in a standardized or uniform way. For instance, the library's Special Collections and Research Data Services teams' interactions with patrons are generally limited to a specialized subset of patron needs and inquiries. As a result, it is assumed that these interactions are rarely characterized by brief durations and low-level inquiries with simple, canned, or predictable responses. Consequently, we would expect to see patron interactions associated with Specialized Teams to be more difficult on average. Despite this, the results indicate that patron interactions involving specialized librarians and staff are actually rated as less difficult on average. This information is a rich foundation for follow-up investigations, training, and inquiry. It is possible that different sub-groups within the library have different understandings of how to rate the difficulty of individual patron interactions. It is equally possible that the types of inquiries that different groups of librarians engage with are measurably more or less challenging than the types of inquiries that other groups engage with. In both cases, further investigation by service managers is warranted to determine if organization-wide training and normalization is needed and whether the duties and responsibilities both within and across different groups in the library need to be adjusted. Further research comparing Level of Difficulty ratings between Specialized Teams and the library overall might also examine whether the unexpected Level of Difficulty gap widened or narrowed during the COVID-19 period. Factors such as physical facility closures and service changes might have influenced the type and complexity of patron questions handled by members of Special Collections and Research Data Services.
Lastly, the results show very notable discrepancies in Level of Difficulty ratings between campus locations. Specifically, when compared with the downtown Atlanta campus, librarians at the Alpharetta, Clarkston, and Decatur campuses consistently assign lower difficulty ratings to their interactions with patrons. Similar to the results for Specialized Teams, these results may indicate that librarians at different campus locations have different perspectives on how to apply difficulty ratings, even if the types of patron inquiries they engage with are largely the same across campus locations. Alternatively, it is possible that there are intrinsic differences in the nature and relative difficulty of patron interactions at different campus libraries. If the former scenario is true, then service managers, again, may wish to pursue organization-wide training that ensures everyone has a shared understanding of the difficulty scale. If the latter scenario is true, then it represents an opportunity for librarians to engage in outreach efforts targeting students and faculty with the objective of promoting more advanced interactions between librarians and patrons at specific campuses.
Limitations
As with all research, this study has several notable limitations: missing data, lack of input from librarians on their perceptions, and limited data scope. We have a significant amount of missing data, as only 3331 out of 26,334 consultations in the original LibInsight dataset contained enough information to complete the analysis. The cases with missing data are largely from the reference desk, where librarians do not log information beyond difficulty, campus, and date. Since the data was not missing at random (Rubin, 1987) and the majority of consultations contained missing data (87.35%), multiple imputation was not used (Rubin, 2019). Additionally, this study only used the records generated by librarians about the difficulty of their consultations and related factors. The study would have been strengthened by an explanatory sequential mixed methods (Ivankova et al., 2006) approach in which the researchers followed up with librarians qualitatively to ask their perceptions of difficulty of consultations based on the statistical findings within the analysis as well as perceptions of how to reduce the difficulty of consultations. Lastly, the data used in this study represents only a single institution, and the results shown may not generalize to other institutions.
Recommendations and future directions
Librarians at all levels can use these findings to help frame their approach to and understanding of the difficulty level of consultations. On an individual level, librarians can use these findings to potentially identify sources of strain and opportunities for growth with respect to how they engage with patrons. Library administrators and supervisors can use these findings to inform deeper assessments of service operations, develop effective training programs, and better ensure that librarians receive the support they need to prevent burnout, particularly during periods of disruption or when providing increased remote services. Administrators and supervisors can also use these findings for training purposes to better prepare librarians who work directly with patrons to be aware of common patterns within more difficult consultations. Burnout among librarians is common, and understanding which factors are associated with more difficult consultations provides opportunities to reduce burden on librarians.
Most libraries already collect, maintain, and use data points comparable to those used in this study. Scholars at other libraries should seek to reproduce this analysis with their own data; analysis of a library's own data will help library leaders better prepare and train their librarians for patron interactions. Qualitative researchers should use these findings to frame interviews and focus groups with librarians on why consultations that occurred during the COVID-19 pandemic and online consultations are reported as more difficult than data analysis–focused consultations and interactions with certain types of patrons. Specifically, librarians should be interviewed about why certain library consultations are more difficult and what could be done to reduce their difficulty. The insights gleaned from those studies, paired with this analysis, would be instrumental in developing best practices for librarians working through difficult consultations and under conditions that might increase the effort required to provide reference services.
Conclusion
With the COVID-19 pandemic still ongoing at the time of writing, we believe that now is an opportune time for individual librarians and service managers to seriously investigate the factors contributing to the perceived burden and difficulty of individual interactions with library patrons. While we have presented a quantitative approach to evaluating the types of data that many librarians already have access to, there are many rich opportunities for continued quantitative and qualitative inquiry in this area. Further examination of the perceived difficulty and burden imposed by different types of patron interactions is an essential strategy for understanding how the growth of virtual reference affects librarian workload, mitigating the deleterious effects of librarian burnout and low morale, and maintaining high-quality public services during facility closures or changes in service models.
CRediT authorship contribution statement
Raeda Anderson: Conceptualization, Methodology, Software, Formal analysis, Writing - original draft, Writing - review & editing, Project administration. Katherine Fisher: Resources, Methodology, Writing - original draft, Writing - review & editing. Jeremy Walker: Conceptualization, Methodology, Formal analysis, Resources, Writing - original draft, Writing - review & editing, Visualization.
Acknowledgments
Thank you to Laura Carscaddon, Mandy Swygart-Hobaugh, José Rodriquez, and Jason Puckett for your insights on normative practices surrounding the logging of patron consultations. Additionally, thank you to each of the faculty and staff at Georgia State University Library for completing thousands of consultations each semester. Your work for the GSU Library and our scholars is invaluable.
Appendix A. Patron transaction form
| LibInsight question label | Response options |
|---|---|
| Start date | Datetime |
| Campus | Alpharetta; Atlanta; Buckhead; Clarkston; Decatur; Dunwoody; Newton |
| Location | CURVE; Circulation; Circulation/reference; Office; Other; Reference; Roving; Special collections; Tech support |
| Patron type | Alumni; Community; Donor; Faculty; Graduate student; Library colleague; Ph.D. student; Staff; Undergraduate student; University administration; Unknown |
| Question type | 1. Directional/very basic; 2. Some effort required; 3. Effort required; 4. Significant effort required |
| Time spent | Less than 10 min; 10–20 min; 20–30 min; 30–40 min; 40–50 min; 50–60 min; 60+ min; Unknown |
| Question format | In person; Email; Phone; Online, real-time; Social media |
| Department(s) | 73 unique options |
| Special Collections & Archives area | Photographic Collections; Music & Radio Broadcasting; Pulp Literature & Zines; Social Change; Southern Labor Archives; University Archives; Women's Collections; Gender & Sexuality; Rare Books; Other |
| Question | Free text response field |
| Answer | Free text response field |
| Was this transaction scheduled in advance? | Yes/No |
| Campus ID | Free text response field |
Appendix B. Code
Analysis completed in Stata/SE 16.1
---------------------
** import data
** data cleaning
gen patron=.
replace patron=1 if patrontype=="Alumni"
replace patron=2 if patrontype=="Community"
replace patron=3 if patrontype=="Donor"
replace patron=4 if patrontype=="Faculty"
replace patron=5 if patrontype=="Graduate Student"
replace patron=6 if patrontype=="Library Colleague"
replace patron=5 if patrontype=="Ph.D. Student"
replace patron=7 if patrontype=="Staff"
replace patron=0 if patrontype=="Undergraduate Student"
replace patron=8 if patrontype=="University Administration"
replace patron=9 if patrontype=="Unknown"
label variable patron "patron status"
label define patron3 1 "Alumni" 2 "Community" 3 "Donor" 4 "Faculty" 5 "Graduate Student" 6 "Library Colleague" 7 "Staff" 8 "University Administration" 9 "Unknown" 0 "undergrad student"
label values patron patron3
gen sem=.
replace sem=1 if semester=="spring "
replace sem=0 if semester=="fall "
label variable sem "semester 1=spring"
label define sem3 1 "spring" 0 "fall"
label values sem sem3
gen covid19=.
replace covid19=0 if covid=="no"
replace covid19=1 if covid=="yes"
label variable covid19 "covid19 0=no 1=yes"
label define covid192 0 "no" 1 "yes"
label values covid19 covid192
gen format=.
replace format=1 if questionformat=="Email"
replace format=2 if questionformat=="Facebook"
replace format=0 if questionformat=="In Person"
replace format=4 if questionformat=="Online, real-time"
replace format=5 if questionformat=="Phone"
replace format=2 if questionformat=="Social Media"
replace format=2 if questionformat=="Twitter"
label variable format "Format for consultation"
label define format2 1 "Email" 2 "Social Media" 0 "In Person" 4 "Online, Real Time" 5 "Phone"
label values format format2
rename scheduled schedule
gen scheduled = .
replace scheduled=0 if schedule=="No"
replace scheduled=1 if schedule=="Yes"
label variable scheduled "Was the consultation scheduled 1=yes"
gen camp = .
replace camp=0 if campus==""
replace camp=2 if campus=="Alpharetta"
replace camp=1 if campus=="Atlanta"
replace camp=3 if campus=="Buckhead"
replace camp=4 if campus=="Clarkston"
replace camp=5 if campus=="Decatur"
replace camp=6 if campus=="Dunwoody"
replace camp=7 if campus=="Newton"
label variable camp "Campus"
label define camp1 1 "Atlanta" 2 "Alpharetta" 3 "Buckhead" 4 "Clarkston" 5 "Decatur" 6 "Dunwoody" 7 "Newton"
label values camp camp1
gen time=.
replace time=1 if timespent=="Less than 10 minutes"
replace time=2 if timespent=="10-20 minutes"
replace time=3 if timespent=="20-30 minutes"
replace time=4 if timespent=="30-40 minutes"
replace time=5 if timespent=="40-50 minutes"
replace time=6 if timespent=="50-60 minutes"
replace time=7 if timespent=="60+ minutes"
label variable time "Time spent during consultation"
label define time1 1 "less than 10 min" 2 " 10-20" 3 "20-30" 4 "30-40" 5 "40-50" 6 "50-60" 7 "60+"
label values time time1
gen diff = .
replace diff =1 if questiontype == "1. Directional / Very Basic"
replace diff =2 if questiontype == "2. Some Effort Required"
replace diff =3 if questiontype == "3. Effort Required"
replace diff =4 if questiontype == "4. Significant Effort Required"
label variable diff "Level of difficulty of consultation 1= low"
gen spcoll = 1
replace spcoll=0 if specialcollections == ""
label variable spcoll "special collections 1=yes 0=no"
drop if diff==.
drop if scheduled==.
drop if time==.
drop if patron==.
drop if format==.
drop if camp==.
drop if dataservices==.
drop if sem==.
** analysis
* descriptive stats
sum i.diff covid19 i.patron scheduled i.format spcoll dataservices i.time sem i.camp
tab1 diff covid19 patron scheduled format spcoll dataservices time sem camp
ttest diff, by(sem)
*Ordered logistic regression DV: difficulty
ologit diff covid19 sem i.camp
ologit diff covid19 i.patron sem i.camp
ologit diff covid19 i.patron scheduled sem i.camp
ologit diff covid19 i.patron scheduled i.format sem i.camp
ologit diff covid19 i.patron scheduled i.format spcoll dataservices sem i.camp
ologit diff covid19 i.patron scheduled i.format spcoll dataservices i.time sem i.camp
---------------------
References
- ACRL Distance Learning Section Guidelines Committee Standards for distance learning library services: Approved by the ACRL Board of Directors, July 2008. C&RL News. 2008;69(9):558–569. doi: 10.5860/crln.69.9.8067. [DOI] [Google Scholar]
- Affleck M.A. Burnout among bibliographic instruction librarians. Library & Information Science Research. 1996;18(2):165–183. doi: 10.1016/S0740-8188(96)90018-3. [DOI] [Google Scholar]
- American Library Association Guidelines for implementing and maintaining virtual reference services. 2017. http://www.ala.org/rusa/resources/guidelines/virtrefguidelines/
- Association of Research Libraries . Minutes of the 108th meeting. 1 May 1986. Minneapolis, Minnesota. 1986. Research libraries: Measurement, management, marketing.https://files.eric.ed.gov/fulltext/ED278401.pdf [Google Scholar]
- Attebury R., Sprague N., Young N.J. A decade of personalized research assistance. Reference Services Review. 2009;37(2):207–220. doi: 10.1108/00907320910957233. [DOI] [Google Scholar]
- Avery S., Ward D. Reference is my classroom: Setting instructional goals for academic library reference services. Internet Reference Services Quarterly. 2010;15(1):35–51. doi: 10.1080/10875300903530264. [DOI] [Google Scholar]
- Becker K.A. Individual library research clinics for college freshmen. Research Strategies. 1993;11(4):202–210. [Google Scholar]
- Benefiel C.R., Mosley P.A. Coping with the unexpected: A rapid response group in an academic library. Technical Services Quarterly. 2000;17(2):25–35. doi: 10.1300/J124v17n02_03. [DOI] [Google Scholar]
- Bennett J.L. Virtual research consultations study. Internet Reference Services Quarterly. 2017;22(4):193–200. doi: 10.1080/10875301.2017.1305475. [DOI] [Google Scholar]
- Betancourt N. Student experiences during COVID-19: Actionable insights driving institutional support for students. Ithaka S+R. 2020, April 22. https://sr.ithaka.org/blog/student-experiences-during-covid-19/
- Birch N.E., Marchant M.P., Smith N.M. Perceived role conflict, role ambiguity, and reference librarian burnout in public libraries. Library and Information Science Research. 1986;8(1):53–65. [Google Scholar]
- Bopp R.E., Smith L.C. 4th ed. ABC-CLIO; 2011. Reference and information services: An introduction. [Google Scholar]
- Bowron C.R., Weber J.E. Implementing the READ Scale at the Austin Peay State University Library. The Journal of Academic Librarianship. 2017;43(6):518–525. doi: 10.1016/j.acalib.2017.08.010. [DOI] [Google Scholar]
- Bradley D.R., Oehrli A., Rieh S.Y., Hanley E., Matzke B.S. Advancing the reference narrative: Assessing student learning in research consultations. Evidence Based Library & Information Practice. 2020;15(1):4–19. doi: 10.18438/eblip29634. [DOI] [Google Scholar]
- Broughton K.M. Usage and user analysis of a real-time digital reference service. The Reference Librarian. 2002;38(79–80):183–200. doi: 10.1300/J120v38n79_12. [DOI] [Google Scholar]
- Brown D.M. Telephone reference: A characterization by subject, answer format, and level of complexity. RQ. 1985;24(3):290–303. https://www.jstor.org/stable/25827388 [Google Scholar]
- Cardwell C., Furlong K., O’Keeffe J. My librarian: Personalized research clinics and the academic library. Research Strategies. 2001;18(2):97–111. doi: 10.1016/S0734-3310(02)00072-1. [DOI] [Google Scholar]
- Cassell K.A., Hiremath U. 4th ed. American Library Association; 2018. Reference and information services: An introduction. [Google Scholar]
- Childers T., Lopata C., Stafford B. Measuring the difficulty of reference questions. RQ. 1991;31(2):237–243. https://www.jstor.org/stable/25829006 [Google Scholar]
- Cole C., Reiter L. Online appointment-scheduling for optimizing a high volume of research consultations. Pennsylvania Libraries: Research & Practice. 2017;5(2):138–143. doi: 10.5195/palrap.2017.155. [DOI] [Google Scholar]
- Corrall S., Kennan M.A., Afzal W. Bibliometrics and research data management services: Emerging trends in library support for research. Library Trends. 2013;61(3):636–674. doi: 10.1353/lib.2013.0005. [DOI] [Google Scholar]
- COVID-19 news and resource pages Association of Research Libraries. https://www.arl.org/resources/covid-19-resource-updates-pages/ Retrieved June 11, 2020, from.
- Duff W., Johnson C. A virtual expression of need: An analysis of e-mail reference questions. The American Archivist. 2001;64(1):43–60. doi: 10.17723/aarc.64.1.q711461786663p33. [DOI] [Google Scholar]
- Emerson K. National reporting on reference transactions, 1976–78. RQ. 1977;16(3):199–207. https://www.jstor.org/stable/25825825 [Google Scholar]
- Gale C.D., Evans B.S. Face-to-face: The implementation and analysis of a research consultation service. College & Undergraduate Libraries. 2007;14(3):85–101. doi: 10.1300/J106v14n03_06. [DOI] [Google Scholar]
- Gao W., Ke I., Martin L. Using consultation data to guide data services training for liaison librarians. Journal of Library Administration. 2018;58(6):583–596. doi: 10.1080/01930826.2018.1491190. [DOI] [Google Scholar]
- Gerlich B.K., Berard G.L. Introducing the READ Scale: Qualitative statistics for academic reference services. 2007;43(8) https://digitalcommons.kennesaw.edu/glq/vol43/iss4/4 [Google Scholar]
- Gerlich B.K., Berard G.L. Testing the viability of the READ Scale (Reference Effort Assessment Data)©: Qualitative statistics for academic reference services. College & Research Libraries. 2010;71(2):116–137. doi: 10.5860/0710116. [DOI] [Google Scholar]
- Guillot L., Stahr B. A tale of two campuses: Providing virtual reference to distance nursing students. Journal of Library Administration. 2004;41(1–2):139–152. doi: 10.1300/J111v41n01_11. [DOI] [Google Scholar]
- Hawk A. Proceedings of the 2018 Library Assessment Conference: Building effective, sustainable, practical assessment. 2018. Implementing standardized statistical measures and metrics for public services in archival repositories and special collections libraries; pp. 836–843.https://www.libraryassessment.org/past-conferences/2018-library-assessment-conference/2018-proceedings/ [Google Scholar]
- Hess A.N. Scheduling research consultations with YouCanBook.Me: Low effort, high yield. College & Research Libraries News. 2014;75(9):510–513. doi: 10.5860/crln.75.9.9197. [DOI] [Google Scholar]
- Hinchliffe L.J., Wolff-Eisenberg C. Ithaka S+R; 2020, March 13. Academic library response to COVID19. https://sr.ithaka.org/blog/academic-library-response-to-covid19/
- Hinchliffe L.J., Wolff-Eisenberg C. Ithaka S+R; 2020, March 24. First this, now that: A look at 10-day trends in academic library response to COVID19. https://sr.ithaka.org/blog/first-this-now-that-a-look-at-10-day-trends-in-academic-library-response-to-covid19/
- Hoskisson T., Wentz D. Simplifying electronic reference: A hybrid approach to one-on-one consultation. College & Undergraduate Libraries. 2001;8(2):89–102. doi: 10.1300/J106v08n02_09.
- Ivankova N., Creswell J., Stick S. Using mixed-methods sequential explanatory design: From theory to practice. Field Methods. 2006;18(1):3–20.
- Janes J. Digital reference: Reference librarians' experiences and attitudes. Journal of the American Society for Information Science and Technology. 2002;53(7):549–566. doi: 10.1002/asi.10065.
- Keyes K., Dworak E. Staffing chat reference with undergraduate student assistants at an academic library: A standards-based assessment. The Journal of Academic Librarianship. 2017;43(6):469–478. doi: 10.1016/j.acalib.2017.09.001.
- Kloda L.A., Moore A.J. Evaluating reference consults in the academic library. In: Proceedings of the 2016 Library Assessment Conference: Building effective, sustainable, practical assessment. Association of Research Libraries; 2016. pp. 626–633. https://www.libraryassessment.org/past-conferences/2016-library-assessment-conference/2016-proceedings/
- Koltay T. Research 2.0 and research data services in academic and research libraries: Priority issues. Library Management. 2017;38(6/7):345–353. doi: 10.1108/LM-11-2016-0082.
- Krikelas J. Library statistics and the measurement of library services. ALA Bulletin. 1966;60(5):494–499.
- Lavender K., Nicholson S., Pomerantz J. Building bridges for collaborative digital reference between libraries and museums through examination of reference in special collections. Journal of Academic Librarianship. 2005;31(2):106–118. doi: 10.1016/j.acalib.2004.12.013.
- Lederer N., Feldmann L.M. Interactions: A study of office reference statistics. Evidence Based Library and Information Practice. 2012;7(2):5. doi: 10.18438/B88K6C.
- Lewis K.M., DeGroote S.L. Digital reference access points: An analysis of usage. Reference Services Review. 2008;36(2):194–204. doi: 10.1108/00907320810873057.
- Lindén M., Salo I., Jansson A. Organizational stressors and burnout in public librarians. Journal of Librarianship and Information Science. 2018;50(2):199–204. doi: 10.1177/0961000616666130.
- Littrell L., Coleman T. Smoke on the water, fire in the sky! Proving library value by losing your library [Conference presentation]. Southeastern Library Assessment Conference; Atlanta, GA, United States; 2019, November 8.
- Liu J., Tu-Keefner F., Zamir H., Hastings S.K. Social media as a tool connecting with library users in disasters: A case study of the 2015 catastrophic flooding in South Carolina. Science & Technology Libraries. 2017;36(3):274–287. doi: 10.1080/0194262X.2017.1358128.
- Logan F.F. A brief history of reference assessment: No easy solutions. The Reference Librarian. 2009;50(3):225–233. doi: 10.1080/02763870902947133.
- Maddox J., Stanfield L. A survey of technology used to conduct virtual research consultations in academic libraries. Journal of Library & Information Services in Distance Learning. 2019;13(3):245–261. doi: 10.1080/1533290X.2018.1555567.
- Maddox J., Stanfield L.T. Did it work?: The effects of research consultations on the quality of sources used in an undergraduate class [Conference presentation]. Georgia International Conference on Information Literacy; Savannah, GA, United States; 2020, February 20–22. https://digitalcommons.georgiasouthern.edu/gaintlit/2020/2020/13/
- Magi T.J., Mardeusz P.E. What students need from reference librarians: Exploring the complexity of the individual consultation. College & Research Libraries News. 2013;74(6). doi: 10.5860/crln.74.6.8959.
- Maksin M. From ready reference to research conversations: The role of instruction in academic reference service. In: Forbes C., Bowers J., editors. Rethinking reference for academic libraries: Innovative developments and future trends. Rowman & Littlefield; 2015. pp. 169–184.
- Maloney K., Kemp J.H. Changes in reference question complexity following the implementation of a proactive chat system: Implications for practice. College & Research Libraries. 2015;76(7):959–974. doi: 10.5860/crl.76.7.959.
- Martin K. Analysis of remote reference correspondence at a large academic manuscripts collection. The American Archivist. 2001;64(1):17–42. doi: 10.17723/aarc.64.1.g224234uv117734p.
- Michell G., Dewdney P. Mental models theory: Applications for library and information science. Journal of Education for Library and Information Science. 1998;39(4):275–281. doi: 10.2307/40324303.
- Miller R.E. Reference consultations and student success outcomes. Reference & User Services Quarterly. 2018;58(1):16–21. doi: 10.5860/rusq.58.1.6836.
- Missingham R., Fletcher J. Dark clouds and silver linings: An epistemological lens on disaster recovery. Journal of the Australian Library and Information Association. 2020;69(2):246–261. doi: 10.1080/24750158.2020.1730067.
- Murgai S.R. Reference use statistics: Statistical sampling method works. The Southeastern Librarian. 2006;54(1):45–57. https://digitalcommons.kennesaw.edu/seln/vol54/iss1/10/
- Murgai S.R. They sought our help: A survey of one-on-one research assistance at the University of Tennessee Lupton Library. The Southeastern Librarian. 2012;60(1):24–38. https://digitalcommons.kennesaw.edu/seln/vol60/iss1/6/
- Nardine J. The state of academic liaison librarian burnout in ARL libraries in the United States. College & Research Libraries. 2019;80(4):508–524. doi: 10.5860/crl.80.4.508.
- National Center for Immunization and Respiratory Diseases (NCIRD), Division of Viral Diseases. Social distancing. Coronavirus disease 2019 (COVID-19). Centers for Disease Control and Prevention; 2020, July 15. https://www.cdc.gov/coronavirus/2019-ncov/prevent-getting-sick/social-distancing.html
- Nelson V.C. Burnout: A reality for law librarians? Law Library Journal. 1987;79(2):267–275.
- Newton L., Feinberg D.E. Assisting, instructing, assessing: 21st century student centered librarianship. The Reference Librarian. 2020;61(1):25–41. doi: 10.1080/02763877.2019.1653244.
- Nolen D.S., Powers A.C., Zhang L., Xu Y., Cannady R.E., Li J. Moving beyond assumptions: The use of virtual reference data in an academic library. Portal: Libraries and the Academy. 2012;12(1):23–40. doi: 10.1353/pla.2012.0006.
- Novotny E. Reference service statistics and assessment: A SPEC kit. Association of Research Libraries, Office of Leadership and Management Services; 2002. https://hdl.handle.net/2027/mdp.39015052546648
- Oberg K.J. An investigation of reference question difficulty over time [Master's thesis, University of North Carolina]. Carolina Digital Repository; 2011. https://cdr.lib.unc.edu/concern/masters_papers/df65vc94g
- Pampel F. Logistic regression: A primer. Sage Publications; 2000.
- Parrish A. Improving GIS consultations: A case study at Yale University Library. Library Trends. 2006;55(2):327–339. doi: 10.1353/lib.2006.0060.
- Pomerantz J. Factors influencing digital reference triage: A think-aloud study. The Library Quarterly. 2004;74(3):235–264. doi: 10.1086/422765.
- Pryor G., Jones S., Whyte A. Delivering research data management services: Fundamentals of good practice. Facet Publishing; 2013.
- Reiter L., Cole C. Beyond face value: Evaluating research consultations from the student perspective. Reference & User Services Quarterly. 2019;59(1):23–30. doi: 10.5860/rusq.59.1.7222.
- Reiter L., Huffman J.P. Yes Virginia, it will scale: Using data to personalize high-volume reference interactions. Journal of Academic Librarianship. 2016;42(1):21–26. doi: 10.1016/j.acalib.2015.09.011.
- Robinson B.M. Reference services: A model of question handling. RQ. 1989;29(1):48–61. https://www.jstor.org/stable/25828432
- Rogers E., Carrier H.S. A qualitative investigation of patrons' experiences with academic library research consultations. Reference Services Review. 2017;45(1):18–37. doi: 10.1108/RSR-04-2016-0029.
- Rothstein S. The measurement and evaluation of reference service. Library Trends. 1964;12(3):456–472.
- Rubin D. Multiple imputation for nonresponse in surveys. Wiley; 1987.
- Rubin D. Statistical analysis with missing data. Wiley; 2019.
- Ryan S.M. Reference transactions analysis: The cost-effectiveness of staffing a traditional academic reference desk. Journal of Academic Librarianship. 2008;34(5):389–399. doi: 10.1016/j.acalib.2008.06.002.
- SAA-ACRL/RBMS Joint Task Force on the Development of Standardized Statistical Measures for Public Services in Archival Repositories and Special Collections Libraries. Standardized statistical measures and metrics for public services in archival repositories and special collections libraries. Society of American Archivists; 2018. https://www2.archivists.org/standards/standardized-statistical-measures-and-metrics-for-public-services-in-archival-repositories
- Scales B.J., Turner-Rahman L., Hao F. A holistic look at reference statistics: Whither librarians? Evidence Based Library & Information Practice. 2015;10(4):173–185. doi: 10.18438/B8X01H.
- Schwartz J. Toward a typology of e-mail reference questions. Internet Reference Services Quarterly. 2004;8(3):1–15. doi: 10.1300/J136v08n03_01.
- Sheehan L.A. Re-inventing reference. In: Declaration of interdependence: The proceedings of the ACRL 2011 Conference; 2011. pp. 384–389. http://www.ala.org/acrl/sites/ala.org.acrl/files/content/conferences/confsandpreconfs/national/2011/papers/re-re_inventing.pdf
- Sheesley D.F. Burnout and the academic teaching librarian: An examination of the problem and suggested solutions. The Journal of Academic Librarianship. 2001;27(6):447–451. doi: 10.1016/S0099-1333(01)00264-6.
- Si L., Xing W., Zhuang X., Hua X., Zhou L. Investigation and analysis of research data services in university libraries. The Electronic Library. 2015;33(3):417–449. doi: 10.1108/EL-07-2013-0130.
- Sikora L., Fournier K. Assessing the impact of individualized research consultations on students' search techniques and confidence levels. uO Research; 2016. https://ruor.uottawa.ca/handle/10393/36153
- Smith N.M., Nielsen L.F. Burnout: A survey of corporate librarians. Special Libraries. 1984;75(3):221–227.
- Spencer J.S., Dorsey L. Assessing time spent on reference questions at an urban university library. The Journal of Academic Librarianship. 1998;24(4):290–294. doi: 10.1016/S0099-1333(98)90105-7.
- Steiner H. Bridging physical and virtual reference with virtual research consultations. Reference Services Review. 2011;39(3):439–450. doi: 10.1108/00907321111161421.
- Steiner H. Making the most of teachable moments: Livening and enhancing the virtual reference experience. LOEX Conference Proceedings. 2013;2011:75–78. https://commons.emich.edu/loexconf2011/13/
- Stieve T., Wallace N. Chatting while you work: Understanding chat reference user needs based on chat reference origin. Reference Services Review. 2018;46(4):587–599. doi: 10.1108/RSR-09-2017-0033.
- Sullivan W., Schoppmann L.A., Redman P.M. Analysis of the use of reference services in an academic health sciences library. Medical Reference Services Quarterly. 1994;13(1):35–55. doi: 10.1300/J115V13N01_03.
- Swygart-Hobaugh A.J. Data services in academic libraries—What strange beast is this? SLIS Student Research Journal. 2017;6(2). https://scholarworks.sjsu.edu/ischoolsrj/vol6/iss2/2/
- Tenopir C., Birch B., Allard S. Academic libraries and research data services: Current practices and plans for the future: An ACRL white paper. Association of College and Research Libraries; 2012. http://www.ala.org/acrl/sites/ala.org.acrl/files/content/publications/whitepapers/Tenopir_Birch_Allard.pdf
- Tenopir C., Sandusky R.J., Allard S., Birch B. Research data management services in academic research libraries and perceptions of librarians. Library and Information Science Research. 2014;36(2):84–90. doi: 10.1016/j.lisr.2013.11.003.
- Tibbo H. Interviewing techniques for remote reference: Electronic versus traditional environments. The American Archivist. 1995;58(3):294–310. doi: 10.17723/aarc.58.3.61625250t016287r.
- Whelan J.L.A., Hansen A. Personal research sets the stage for change. Reference Librarian. 2017;58(1):67–83. doi: 10.1080/02763877.2016.1194245.
- White M.D., Iivonen M. Assessing level of difficulty in web search questions. The Library Quarterly: Information, Community, Policy. 2002;72(2):205–233. doi: 10.1086/603355.
- Whitlatch J.B. Unobtrusive studies and the quality of academic library reference services. College & Research Libraries. 1989;50(2):181–194. doi: 10.5860/crl_50_02_181.
- Yi H. Individual research consultation service: An important part of an information literacy program. Reference Services Review. 2003;31(4):342–350. doi: 10.1108/00907320310505636.