ABSTRACT
BACKGROUND
Implementing quality improvement efforts in clinics is challenging. Assessment of organizational “readiness” for change can set the stage for implementation by providing information regarding existing strengths and deficiencies, thereby increasing the chance of a successful improvement effort. This paper discusses organizational assessment in specialty mental health, in preparation for improving care for individuals with schizophrenia.
OBJECTIVE
To assess organizational readiness for change in specialty mental health in order to facilitate locally tailored implementation strategies.
DESIGN
EQUIP-2 is a site-level controlled trial at nine VA medical centers (four intervention, five control). Providers at all sites completed an organizational readiness for change (ORC) measure, and key stakeholders at the intervention sites completed a semi-structured interview at baseline.
PARTICIPANTS
At the four intervention sites, 16 administrators and 43 clinical staff completed the ORC, and 38 key stakeholders were interviewed.
MAIN RESULTS
The subscales of training needs, communication, and change had the lowest mean scores (i.e., potential deficiencies), ranging from 23.8 to 36.2 on a scale of 10–50, while the staff attributes of growth and adaptability had the highest mean scores (i.e., potential strengths), ranging from 35.4 to 41.1. Semi-structured interviews revealed that staff perceptions and experiences of change and decision-making are affected by larger structural factors, such as change mandates from VA headquarters.
CONCLUSIONS
Motivation for change, organizational climate, staff perceptions and beliefs, and prior experience with change efforts contribute to readiness for change in specialty mental health. Sites with less readiness for change may require more flexibility in the implementation of a quality improvement intervention. We suggest that uptake of evidence-based practices can be enhanced by tailoring implementation efforts to the strengths and deficiencies of the organizations that are implementing quality improvement changes.
KEY WORDS: organizational readiness for change, formative evaluation, specialty mental health, quality improvement
INTRODUCTION
Improving care for chronic illness, across many conditions, has proven to be very challenging1,2 in large part because of the conceptual and organizational changes required by a paradigm shift from acute treatment to long-term management.3,4 While there have been some successes in improving care in primary care,5,6 efforts in specialty mental health settings have been fewer and have not yielded substantial changes in long-term outcomes.7 In care for patients with severe mental illness (SMI), providers often have not adopted practices that improve outcomes.8,9
Implementing changes in clinical care is notoriously difficult.10,11 Passive approaches such as education may improve knowledge, but tend neither to induce change nor to improve care.12 Moreover, clinical interventions are often instituted without a baseline understanding of the context in which the change is to occur. Greater attention to existing organizational infrastructures could facilitate higher quality implementation.13,14 In particular, assessing organizational readiness for change can help tailor innovation adoption strategies to local structure and context.15 This paper describes a baseline assessment of readiness for change in EQUIP-2 (Enhancing QUality in Psychosis), a controlled trial of an intervention to improve mental health outpatient care for schizophrenia.13 With a recent upsurge in large-scale efforts to improve adoption of evidence-based practices in mental health (e.g., the Veterans Health Administration Uniform Mental Health Services Package)16 and in chronic conditions generally, detailed case studies are needed as examples of how to facilitate quality improvement. The goal of this paper is to illustrate briefly how the formative evaluation data were analyzed and used to address organizational issues before quality improvement efforts were implemented, in order to increase the probability that those efforts would succeed.
METHODS
The goals of EQUIP-2 were to increase use of the evidence-based practices of weight reduction (i.e., wellness) and supported employment (i.e., placement and support) for individuals with SMI. To assess the context and readiness for uptake of these services at the four clinics in the intervention, administrators and staff members completed an organizational readiness survey, and a subset also completed a qualitative interview.
Clinic Sites and Sampling
VA health care is managed within 21 national Veterans Integrated Service Networks (VISNs) of medical centers, veteran centers, and outpatient clinics offering primary and specialized care. Selected specialty mental health clinics in four VISNs are participating in EQUIP.
Of the 21 administrators eligible to participate at baseline, 20 (95%) consented; of the 95 eligible staff, 75 (79%) consented. Of those who consented, 16 (80%) administrators and 43 (57%) staff completed the organizational survey.
For the semi-structured interview, stratified purposeful sampling17 was used to select key stakeholders in the clinics who could provide their perspectives on organizational culture, readiness for change, and experience with quality improvement projects. All administrators (n = 10) participated in the interview, as did all clinicians (n = 28) who had direct contact with patients with schizophrenia.
Procedures and Measures
All administrators and staff who consented were asked in person to complete an online survey. Individuals who did not complete the survey in a timely manner were prompted via phone, visit, or e-mail. The 20-min electronic survey included several scales from the Organizational Readiness for Change (ORC) instrument.18 The ORC, originally developed for substance abuse treatment facilities, assesses five domains: motivation for change, resources, staff attributes, organizational climate, and training exposure and utilization. To tailor the ORC to VA specialty mental health clinics and minimize participant burden, minor adjustments were made to the wording, and two domains (resources, training exposure and utilization) were not included, as these would have required extensive changes to make them relevant to mental health and VA settings. Ten subscales were utilized: program needs, training needs, and pressures for change in the “motivation for change” domain; growth and adaptability in the “staff attributes” domain; and mission, cohesion, autonomy, communication, and change in the “organizational climate” domain. Each item is rated on a Likert scale from 1 = strongly disagree to 5 = strongly agree.
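For readers who want to mirror this structure in an analysis script, the domain-to-subscale mapping above translates directly into a simple data structure. The following is a minimal Python sketch based only on the text of this section; item wording and item-to-subscale assignments belong to the ORC instrument itself18 and are not reproduced here.

```python
# ORC domains and subscales used in EQUIP-2, as described in the Methods.
# Item content comes from the ORC instrument (Lehman et al.) and is
# intentionally omitted here.
ORC_SUBSCALES = {
    "motivation for change": [
        "program needs", "training needs", "pressures for change",
    ],
    "staff attributes": ["growth", "adaptability"],
    "organizational climate": [
        "mission", "cohesion", "autonomy", "communication", "change",
    ],
}
```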
Individuals who participated in the semi-structured interview component of the assessment were interviewed by the project coordinator at each site or by the lead author. The interview protocol was developed by the three authors and addressed perceptions of the needs of patients with schizophrenia, knowledge about the care targets, organizational culture in terms of decision-making and change, and experience with quality improvement projects. Interviews lasted 20–30 min on average. All interviews were digitally recorded and professionally transcribed.
Data Analysis
To analyze the ORC, subscale scores were obtained by summing responses to the items (reversing scores when necessary), dividing the sum by the number of items included (yielding an average), and multiplying by 10 in order to rescale final scores so they ranged from 10 to 50. Means for each scale were examined across sites and in comparison to normative data. In addition, 25th and 75th percentiles were calculated for further comparisons.
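As an illustration, this scoring procedure can be expressed in a few lines of code. The following is a minimal Python sketch under stated assumptions: the item names are hypothetical, and the actual item sets and reverse-scored items are defined by the ORC instrument.18

```python
import statistics

def score_subscale(responses, reverse_items=()):
    """Score one ORC subscale for a single respondent.

    responses: dict mapping item name -> Likert rating (1-5).
    reverse_items: names of items whose ratings are reversed
        (1 <-> 5, 2 <-> 4) before averaging.
    Returns a score on the 10-50 scale.
    """
    values = [6 - r if item in reverse_items else r
              for item, r in responses.items()]
    # Average the (possibly reversed) ratings, then multiply by 10 to
    # rescale from the 1-5 item range to the 10-50 subscale range.
    return 10 * sum(values) / len(values)

def summarize_site(scores):
    """Mean and SD of one subscale across a site's respondents."""
    return statistics.mean(scores), statistics.stdev(scores)

# Worked example: four hypothetical items rated 4, 4, 3, and 5 average
# to 4.0, which rescales to a subscale score of 40.0.
print(score_subscale({"q1": 4, "q2": 4, "q3": 3, "q4": 5}))  # 40.0
```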
Subscale scores above 30 indicate areas of strength, which translates to the idea of “readiness” in that the respondents generally agree that the organization has the attribute in a given subscale. Scores below 30 (which are unusual) indicate areas of weakness that might need attention prior to change efforts. The standard deviations (SDs) of the ORC scores are also illuminating in that they indicate the level of consensus on any given subscale; SDs above 9 indicate considerable variability in responses and prompt questions about why the subscale topic is perceived differently across respondents (D. Simpson, personal communication, February 10, 2009).
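To make these decision rules concrete, the sketch below applies the cutoffs described above (30 for the mean, 9 for the SD); the example values are site C's training needs mean and SD from Table 1. How a mean of exactly 30 should be treated is not specified, so the code arbitrarily groups it with weaknesses.

```python
def interpret_subscale(mean_score, sd):
    """Apply the interpretation rules described in the text: means above
    30 suggest strength ("readiness"), means at or below 30 suggest a
    weakness to address before change efforts, and SDs above 9 flag
    limited consensus among respondents."""
    strength = "strength" if mean_score > 30 else "potential weakness"
    consensus = "limited consensus" if sd > 9 else "consistent responses"
    return f"{strength}, {consensus}"

# Site C, training needs (mean 23.8, SD 6.9; see Table 1):
print(interpret_subscale(23.8, 6.9))  # potential weakness, consistent responses
```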
Analysis of the interview data was conducted using ATLAS.ti. Transcripts were coded using content analysis, with codes deductively derived from the interview guide topics. Results within each interview topic were examined to determine areas that required subcoding. Results were reviewed collectively by the three authors, and narratives that speak to organizational predisposition are reported below.
RESULTS
Provider Background
Nineteen men and 24 women completed the staff version of the ORC. Approximately one-quarter (28%) of these respondents were psychiatrists, and the remaining respondents were other clinicians. Half (51%) had worked at the VA for 11 or more years, 9% for 5–10 years, and 40% for 4 years or less.
Organizational Readiness for Change
As depicted in Figure 1, scores on the staff ORC generally fell within the 25th and 75th percentile norms for the instrument, indicating moderately favorable conditions for change in terms of motivation for change, staff attributes, and organizational climate.
As depicted in Table 1, scores were consistent, with few high standard deviations. The staff attributes domain seemed to be the area of greatest strength across sites, with staff reporting high levels of agreement with regard to staff growth and adaptability. The program needs subscale (see Fig. 2) had the most inconsistency, with SDs of 9 or more in three of the four sites.
Table 1. Mean (SD) staff ORC subscale scores by clinic

| Clinic | N | Program needs | Training needs | Pressures for change | Growth | Adaptability | Mission | Cohesion | Autonomy | Communication | Change |
|---|---|---|---|---|---|---|---|---|---|---|---|
| A | 11 | 33.6 (9.1) | 30.9 (8.7) | 34.3 (7.0) | 40.7 (2.7) | 41.1 (5.0) | 38.7 (4.2) | 37.4 (3.4) | 37.6 (3.3) | 36.2 (4.8) | 34.2 (4.9) |
| B | 9 | 31.3 (10.9) | 28.3 (8.1) | 34.7 (4.4) | 35.8 (7.7) | 40.6 (5.8) | 37.6 (7.3) | 40.6 (9.8) | 35.8 (7.3) | 34.7 (9.5) | 32.7 (9.6) |
| C | 6 | 31.3 (11.8) | 23.8 (6.9) | 32.6 (4.8) | 38.7 (6.3) | 38.8 (3.4) | 33.3 (4.5) | 39.2 (3.3) | 33.3 (3.3) | 32.7 (5.5) | 31.3 (2.7) |
| D | 17 | 34.1 (8.3) | 31.7 (6.7) | 33.1 (5.6) | 38.9 (5.1) | 35.4 (6.0) | 33.6 (5.4) | 31.4 (6.9) | 33.5 (4.2) | 33.1 (6.6) | 32.4 (4.6) |

Program needs, training needs, and pressures for change comprise the motivation for change domain; growth and adaptability, the staff attributes domain; and mission, cohesion, autonomy, communication, and change, the organizational climate domain. Possible scores range from 10 to 50.
Scores were generally moderate for site A, indicating good structure and functioning, with high consistency across respondents, especially in the organizational climate scale. Site B scores tended to have higher standard deviations than site A, with high variability in scores on program needs, cohesion, communication, and change, indicating less consensus in these areas. Site C also displayed high variability in perceptions of program needs, but was otherwise consistent. Sites B and C had the lowest mean scores on program needs, and site C had the lowest scores for training needs, pressures for change, autonomy, mission, communication, and change. All scores were moderate and consistent for site D, which reported the lowest adaptability and cohesion.
Key Stakeholder Interviews
Interviews were analyzed with reference to the ORC findings to provide more detail about areas of organizational strength and weakness. Site A had particularly high and consistent scores in the staff attributes domain, indicating strength with regard to opportunities for professional growth and adaptability to change; it also had a high autonomy score. This autonomy was evident in the interviews, where respondents described being able to make changes as necessary and to manage their own workloads and activities. Site B’s particularly high adaptability and cohesion scores were seen as strengths; staff also indicated little need for additional training. These findings were consistent with the interviews, where respondents described “team efforts.” One respondent stated, “We don’t always get somebody higher up for advice…In any area where we could do better, we do it.” Site C had a consistent and low change score. This subscale emphasizes an attitude of trying new things (e.g., “You are encouraged here to try new and different techniques.”). A lower score indicates only moderate agreement with such statements, and therefore a sense of having little latitude to initiate change. Accordingly, this issue was investigated in the interviews at this site, where we found that staff perceived “constant changes” to be occurring, but mostly from the top down, i.e., changes that were being imposed on them.
Site D interviews were explored for discussions related to that site’s greater reported program and training needs, and its lower adaptability and cohesion scores, relative to the other sites. There was a sense in the interviews that this site was, as one respondent stated, “moving toward making changes,” but that there had been, as another stated, “a very large amount of inertia.” Interviews revealed that this clinic was somewhat fragmented at baseline; some suggested that decisions were made behind closed doors and then imparted to staff, and others described how small changes could be made by their clinical team, but “major changes” were not possible.
DISCUSSION
This study of organizational readiness provides insight into the complexity of engaging in organizational change18 and evidence-based quality improvement. Each intervention site had different strengths and weaknesses, but these would likely not have been revealed without the triangulated ORC and semi-structured interview data.
Implementation strategies at each site in the site-level controlled trial were tailored based on the findings from the baseline assessment. Sites A and B were both ready for change. Site A served as the lead implementation site, given its adaptability and emphasis on professional growth and trying new techniques; its staff developed testimonials for the other sites describing the successes of its implementation strategies. At site B, implementation built on the strong sense of cohesion: the opinion leader spoke to the staff as a whole about implementation, and frequent meetings were held with all staff to discuss the project and address problems.
Sites C and D required more tailoring of the context of the intervention to give it its best chance of acceptance. In site C, training needs were addressed by heightening awareness of gaps in care through use of opinion leaders and educational programs; changing leadership was compensated for by keeping the remaining team the same and by maintaining a consistent message about the mission and goals of the intervention targets; autonomy was encouraged by allowing clinicians to design aspects of their interventions (wherever possible given the evidence base) around the VISN-established care targets.
In site D, adaptability issues were addressed by bringing the opinion leader and key staff on board first and providing direct guidance on the specifics of implementing the intervention; efforts were made to link study goals with administration’s goals in order to address mission issues. Cohesion was enhanced by building teams within the clinic, and as with site C, autonomy was encouraged by allowing clinicians to design the implementation of their interventions.
Overall, intervention sites were moderately ready to change, and may have been identified by VISN leadership because they were perceived to be ready to change (sites A and B) and/or in need of change but also in need of additional support (sites C and D). The latter sites required more flexibility from the coordinating site as to how to implement the intervention, whereas the former sites could implement the interventions closer to the original intent. As noted, half (51%) of the intervention sites’ respondents had worked at the VA for 11 years or more, and the response rate was relatively low among staff (57%). This could suggest that intervention site staff were entrenched in their routines and/or had seen change efforts that did not work or were not sustained, leaving them skeptical about engaging in the project. The low response rate may also indicate resistance at the intervention sites to the presence of the project; despite numerous targeted efforts to increase the response rate, those who consented but refused to complete the instrument were firm in their refusal, usually because they perceived that the instrument would take too much time to complete.
The program needs subscale showed the most variability in responses, with high standard deviations in three of the four sites. This variability may reflect differences in educational and professional backgrounds, such that respondents placed differing emphases on the areas in which their clinics needed additional guidance. Training needs, program needs, and change were the subscales with generally lower scores, while staff attributes scores remained high. The latter finding could be because the staff attributes questions focus on perceptions of one’s own professional motivation (growth) and personality style (adaptability), which may be more proximal and stable, whereas most of the other subscales call for evaluation of the clinic and may therefore be more distal and subject to fluctuation with daily work dynamics.
Next Steps in Organizational Assessment and Implementation
EQUIP-2 falls within a burgeoning number of studies that assess barriers and facilitators to implementation and then focus on refining implementation strategies during intervention.19 Staff expectations, perceptions, and attitudes may encourage or inhibit adoption of evidence-based practices,20 and are thus critical to investigate and address at the outset of a quality improvement project through formative evaluation methods.21 Also critical to investigate are the larger systemic and structural factors that may affect attitudes toward and experiences of change. For example, in a system such as the VA, staff expect change to come from the “top,” and such changes are typically neither voluntary nor optional.
Key stakeholder perspectives will be gathered mid- and post-intervention, and organizational readiness will be assessed again post-intervention to see if readiness changed over time. This long-term perspective on adoption of innovation has been recommended by others who have noted that change toward evidence-based care in mental health care is a slow and uneven process, warranting a longitudinal perspective.22,23 Issues such as organizational vision and commitment may affect the long-term sustainability of innovations, so ongoing assessment of these issues could be critical.24 With a longitudinal perspective, this study will be able to explore associations between patient outcomes and organizational readiness, as has been demonstrated by others.25 Whitley and colleagues26 suggest that implementation of illness management and recovery requires strong leadership, an organizational culture that embraces innovation, effective training, and committed staff. Lin and colleagues1 suggest that employees’ perceptions of an organization’s orientation, activities, and support of quality management are associated with their perceptions of whether implementation of quality improvement activities will lead to improved patient outcomes.
Limitations
This study has limitations: the sample sizes are not large and there is variability across sites, so generalizability is limited. Data from administrators were particularly limited, so we cannot offer insight into the ways in which administrators and staff may differ in their perceptions of organizational readiness, as operationalized by the ORC and as explored in the interviews. Additionally, averaging survey responses across a group of employees is a limited way to assess organizational readiness, at least in part because employees likely differ in their awareness of and contribution to readiness. This study was not designed to test variable contributions to readiness, but an investigation that tests this idea would be clinically and empirically valuable.
Conclusion
One of the main points of this quality improvement effort was the need to acknowledge from the outset that clinical settings, though they may be similar in many ways, are never completely equivalent, despite best efforts to render them so for empirical purposes. Accordingly, we designed the study to explore the ways in which the sites were not equivalent, i.e., to assess in some depth each site’s organizational climate, with more emphasis on the intervention sites because we wanted to maximize uptake of the intervention in order to improve quality of care.
Organizational change, though difficult to achieve, can occur. Adoption of evidence-based care in specialty mental health is critical for the improvement of patient outcomes.27 The onus is on implementation researchers to continue to identify factors that facilitate successful adoption of appropriate and effective clinical practices. One step toward the identification of these factors is thorough assessment of organizational readiness for change, so subsequent intervention efforts are more carefully attuned to the strengths and barriers present in each site. Ideally, with locally tailored implementation strategies, adoption of evidence-based practices will increasingly become the norm, and patients seeking services will receive the care that they need in order to have optimal health outcomes.
ACKNOWLEDGMENTS
Earlier drafts of this work were presented in poster form at the VA Quality Enhancement Research Initiative (QUERI) National Meeting in December 2008, and orally and in poster form at the VA Health Services Research and Development National Conference in February 2009. This material is based upon work supported by the Department of Veterans Affairs, Veterans Health Administration, Office of Research and Development, Health Services Research and Development Quality Enhancement Research Initiative (MNT 03-213) and the Desert Pacific Mental Illness Research, Education and Clinical Center (MIRECC), and by the National Institute of Mental Health (NIMH) UCLA-RAND Partnered Research Center for Quality Care (P30 MH082760). We appreciate the efforts of the site Principal Investigators, clinicians, and project managers. We also appreciate the guidance of Dr. Dwayne Simpson. The views expressed in this article are those of the authors and do not necessarily reflect the position or policy of the Department of Veterans Affairs or the United States government.
Conflicts of Interest The authors declare that they have no conflicts of interest.
REFERENCES
1. Lin M, Marsteller J, Shortell S, et al. Motivation to change chronic illness care: results from a national evaluation of quality improvement collaboratives. Health Care Manage Rev. 2005;30(2):139–156. doi:10.1097/00004010-200504000-00008.
2. Wagner E, Austin B, Davis C, Hindmarsh M, Schaefer J, Bonomi A. Improving chronic illness care: translating evidence into action. Health Aff. 2001;20(6):64–78. doi:10.1377/hlthaff.20.6.64.
3. Glasgow R, Wagner E, Kaplan R, Vinicor F, Smith L, Norman J. If diabetes is a public health problem, why not treat it as one? A population-based approach to chronic illness. Ann Behav Med. 1999;21(2):159–170. doi:10.1007/BF02908297.
4. Soubhi H. Toward an ecosystemic approach to chronic care design and practice in primary care. Ann Fam Med. 2007;5:263–269. doi:10.1370/afm.680.
5. Bodenheimer T, Wagner E, Grumbach K. Improving primary care for patients with chronic illness: the chronic care model, part 2. J Am Med Assoc. 2002;288(15):1909–1914. doi:10.1001/jama.288.15.1909.
6. Rothman A, Wagner E. Chronic illness management: what is the role of primary care? Ann Intern Med. 2003;138(3):256–261. doi:10.7326/0003-4819-138-3-200302040-00034.
7. Williams J Jr, Gerrity M, Holsinger T, Dobscha S, Gaynes B, Dietrich A. Systematic review of multifaceted interventions to improve depression care. Gen Hosp Psychiatry. 2007;29(2):91–116. doi:10.1016/j.genhosppsych.2006.12.003.
8. Grol R. Knowledge transfer in mental health care: how do we bring evidence into day-to-day practice? Can J Psychiatry. 2008;53(5):275–276. doi:10.1177/070674370805300501.
9. Proctor E, Knudsen K, Fedoravicius N, Hovmand P, Rosen A, Perron B. Implementation of evidence-based practice in community behavioral health: agency director perspectives. Adm Policy Ment Health. 2007;34(5):479–488. doi:10.1007/s10488-007-0129-8.
10. Auerbach A, Landefeld C, Shojania K. The tension between needing to improve care and knowing how to do it. N Engl J Med. 2007;357(6):608–613. doi:10.1056/NEJMsb070738.
11. Michie S, Pilling S, Garety P, et al. Difficulties implementing a mental health guideline: an exploratory investigation using psychological theory. Implement Sci. 2007;2:8.
12. Gilbody S, Whitty P, Grimshaw J, Thomas R. Educational and organizational interventions to improve the management of depression in primary care: a systematic review. J Am Med Assoc. 2003;289(23):3145–3151. doi:10.1001/jama.289.23.3145.
13. Brown A, Cohen A, Chinman M, Kessler C, Young A. EQUIP: implementing chronic care principles and applying formative evaluation methods to improve care for schizophrenia: QUERI Series. Implement Sci. 2008;3:9.
14. Glisson C, Landsverk J, Schoenwald S, et al. Assessing the organizational social context (OSC) of mental health services: implications for research and practice. Adm Policy Ment Health. 2008;35(1–2):98–113. doi:10.1007/s10488-007-0148-5.
15. Rosenheck R. Organizational process: a missing link between research and practice. Psychiatr Serv. 2001;52:1607–1612. doi:10.1176/appi.ps.52.12.1607.
16. Department of Veterans Affairs. Uniform mental health services in VA medical centers and clinics. VA Handbook 1160.01. 2008.
17. Patton M. Qualitative evaluation and research methods. 2nd ed. Newbury Park, CA: Sage Publications; 1990.
18. Lehman WEK, Greener JM, Simpson DD. Assessing organizational readiness for change. J Subst Abuse Treat. 2002;22:197–209. doi:10.1016/S0740-5472(02)00233-7.
19. Proctor E, Landsverk J, Aarons G, Chambers D, Glisson C, Mittman B. Implementation research in mental health services: an emerging science with conceptual, methodological, and training challenges. Adm Policy Ment Health. 2009;36(1):24–34. doi:10.1007/s10488-008-0197-4.
20. Glisson C, Landsverk J, Schoenwald S, et al. Assessing the Organizational Social Context (OSC) of mental health services: implications for research and practice. Adm Policy Ment Health. 2008;35:98–113. doi:10.1007/s10488-007-0148-5.
21. Curran G, Mukherjee S, Allee E, Owen R. A process for developing an implementation intervention: QUERI Series. Implement Sci. 2008;3:17.
22. Gioia D, Dziadosz G. Adoption of evidence-based practices in community mental health: a mixed-method study of practitioner experience. Community Ment Health J. 2008;44:347–357. doi:10.1007/s10597-008-9136-9.
23. Kimberly J, Cook J. Organizational measurement and the implementation of innovations in mental health services. Adm Policy Ment Health. 2008;35:11–20. doi:10.1007/s10488-007-0143-x.
24. Nutting P, Gallagher K, Riley K, White S, Dietrich A, Dickinson W. Implementing a depression improvement intervention in five health care organizations: experience from the RESPECT-Depression trial. Adm Policy Ment Health. 2007;34:127–137. doi:10.1007/s10488-006-0090-y.
25. Litaker D, Ruhe M, Weyer S, Stange K. Association of intervention outcomes with practice capacity for change: subgroup analysis from a group randomized trial. Implement Sci. 2008;3:25.
26. Whitley R, Gingerich S, Lutz W, Mueser K. Implementing the illness management and recovery program in community mental health settings: facilitators and barriers. Psychiatr Serv. 2009;60:202–209. doi:10.1176/appi.ps.60.2.202.
27. Franx G, Kroon H, Grimshaw J, Drake R, Grol R, Wensing M. Organizational change to transfer knowledge and improve quality and outcomes of care for patients with severe mental illness: a systematic overview of reviews. Can J Psychiatry. 2008;53(5):294–305. doi:10.1177/070674370805300503.