Author manuscript; available in PMC: 2013 Sep 30.
Published in final edited form as: Res Soc Work Pract. 2007 Aug 7;18(4):285–291. doi: 10.1177/1049731507302263

From Knowledge Production to Implementation: Research Challenges and Imperatives

Enola K Proctor 1, Aaron Rosen 1
PMCID: PMC3786596  NIHMSID: NIHMS514102  PMID: 24089591

Abstract

As evidence-based practice is increasingly accepted in social work, the challenges associated with its actual implementation become more apparent and pressing. This article identifies implementation as a critical issue for research; implementation itself must be better understood if evidence-based practices are to be used and resultant improvements to practice are to be realized. Social work needs to engage more fully in (a) service system research and (b) implementation research, each of which complements and has potential to extend the benefits of efficacy and effectiveness research. Service system research can enhance the fit of empirically supported treatments to the needs of real-world practice and thus facilitate their implementation. Implementation studies examine the acceptability of evidence-based interventions, the feasibility and likelihood of their sustained use, and the decision-support procedures that can help practitioners apply probabilistically based, empirically supported treatments to the individual case in real-world practice.

Keywords: evidence-based practice, implementation research, service systems research, social work research


Evidence-based practice (EBP) has been increasingly advocated and appears to be gaining acceptance in social work. It signifies reaffirmation of social work’s commitment to a scientific knowledge base in general and, more specifically, to an expectation that practice decisions be informed by and based on evidence from scientific research. But thus far, advocacy for EBP has had little tangible impact on social work. Despite the growing recognition of its appropriateness, EBP is not routinely implemented in practice. Findings from studies of practitioners’ use of research in practice, a primary component of EBP, have also been disappointing (Rosen, 1994; Rosen, Proctor, Morrow-Howell, & Staudt, 1995). The many difficulties in using research evidence in practice, and thus in implementing EBP (Rosen, 2003), have only rarely been addressed constructively through systematic research.

This article focuses on and encourages social work researchers to study the many issues related to implementation of scientific evidence in practice. Here, implementation refers to the use or employment by practitioners of pretested and empirically supported treatments (ESTs) to attain outcomes. For too long, implementation has been considered “after the fact”— that is, once ESTs are developed and tested. We believe that the challenges associated with implementation of ESTs necessitate its consideration at the “front end” of efforts to develop EBP. Accordingly, in this article, we discuss implementation research within the more general context of knowledge development for practice. Thus, we address three interrelated, mutually informing domains of knowledge development in relation to implementation research: service system research, efficacy research, and effectiveness research.

SERVICE SYSTEM RESEARCH

Social work has engaged in relatively little research on its service delivery systems, in contrast to health care, where services research has flourished for more than two decades. The importance of service system research lies in its capacity to inform intervention research and enhance its relevance to practice. Service system research focuses on and captures knowledge of the practice landscape. It is a wide field of research, but several issues are particularly important with respect to implementation. First, services research is required for an understanding of the organization and financing of care. The available sources of payment for social work services have tremendous impact on the providers, duration, and locations of care. Reimbursement policies should be studied and considered in relation to the development of interventions so that the treatments developed and tested through efficacy and effectiveness research are feasible in actual practice. Conversely, efficacy and effectiveness research should more productively inform reimbursement policy.

The social service workforce itself is a second, less studied but important focus of services research. Social workers’ perceptions of practice needs can and should inform the priorities of intervention research. Moreover, their clinical hypotheses about “what works” could be important bases for efficacy research (Zeira & Rosen, 2000). Certainly, as is generally accepted in EBP writings, research-based knowledge must be implemented in relation to the particular client (Gambrill, 2003) and be attuned to “local knowledge” (Stricker & Trierweiler, 1995), that is, knowledge of the agency and community setting, prior experience, and theory. Although sometimes given short shrift, local knowledge is a critical complement to knowledge developed in traditional efficacy and effectiveness research. Finally, providers’ knowledge and training, practice preferences, sources of and openness to new practices and innovations, and patterns of professional decision making all affect their receptivity to ESTs derived from efficacy and effectiveness studies and the likelihood that those ESTs will be implemented.

“Clinical epidemiology” constitutes another important focus for services research. Such studies capture the service needs and types of problems that are presented for care in various agency settings. This information informs the desired outcomes of service and the setting of priorities for developing evidence-based treatments. The context of problems must also be understood, including problem duration and severity. Knowledge of co-occurring conditions, or “comorbidity,” is also critical to the development of EBPs; such conditions should be treated as covariates (Videka, 2003) rather than as exclusionary criteria. Finally, information about clients, their prior experience with and attitudes toward treatment, and their preferences for care are important foci for service system research. It is especially important to understand the factors that serve as barriers and facilitators to clients’ help-seeking and to their participation in and completion of treatment protocols. These factors are important for study in treatment research as possible moderators of the intervention–outcome links.

INTERVENTION RESEARCH

Intervention research is concerned with formulating interventions and testing their relationship to desired outcomes of service. Two progressive research phases are usually distinguished in intervention research—efficacy and effectiveness. Research on the efficacy of interventions subjects the basic clinical hypothesis—that a given outcome will be attained through the implementation of a specified treatment—to the most rigorous test. Efficacy research is designed to render the clinical hypothesis vulnerable to disconfirmation and, therefore, is conducted to maximize internal validity. Because the purpose is to determine a treatment’s potency and to rule out competing or alternative hypotheses, efficacy is typically studied under highly controlled conditions. One typical consequence is the compromise of external validity, or generalizability of the results, in favor of internal validity. Efficacy research should, therefore, be viewed as a critically necessary but insufficient phase of intervention research. Both researchers and practitioners often express concerns about whether findings from efficacy research can be applied in practice, with questions particularly focused on the relevance and applicability of the evidence to different client groups, settings, problems, and providers. Such cautions about the generalizability of findings from efficacy research give rise to and underscore the importance of an emphasis on effectiveness research.

Once a basic clinical hypothesis is supported under rigorous, albeit controlled, conditions with possibly low verisimilitude to real-world practice, the hypothesis linking the intervention and outcome must be tested under conditions reflective of the practice situations in which the intervention is likely to be employed. Such studies aim to test and determine the limits of, and qualifications on, the generalizability of findings, particularly across different service system variables. Thus, if they are to be relevant to practice and appropriately designed, effectiveness studies must take account of and be responsive to the results of service system research. Service system research yields the relevant practice situation characteristics (client variables, problems, agency variables, provider variables) under which an intervention should be studied to determine, and perhaps to qualify, its external validity. Effectiveness studies also identify the modifications and adaptations needed to make a treatment applicable to particular client populations or practice situations.

Intervention research, as described here, produces the basic “building blocks” of social work’s knowledge for practice. As a field, social work has approached the challenge of effective practice by exhorting researchers to disseminate the products of intervention research, particularly of effectiveness studies, to practitioners so that they can be applied in practice. A variety of means have been proposed to facilitate that process, including practitioner evaluation of practice using single-system designs, practitioner-conducted literature searches to guide particular cases, the conduct and publication of systematic reviews of the research literature, and the development of practice guidelines in social work (Proctor & Rosen, 2003). Each of these means has merit. Yet social work faces two formidable challenges in its aspiration to EBP. The first concerns the volume of relevant, well-designed research undertaken by social workers. In a review of the published articles in 13 social work journals, Rosen, Proctor, and Staudt (1999) found that only a small proportion of all published work concerned intervention research, and an even smaller proportion of studies could be considered minimally well designed and capable of informing practice. The need for more and better-designed intervention research in social work, particularly effectiveness studies, has been voiced frequently (Fortune, 1999; Fortune & Proctor, 2001; Fraser, 2003; Schilling, 1997), and there are signs of increased activity in this area (Reid, Kenaley, & Colvin, 2004). But social work must continue to invest more resources and considerably greater effort in intervention research. The second obstacle is the insufficient attention paid to date to the challenges confronting practitioners in actually implementing research-based knowledge, such as ESTs. Although the “buzz words” of research use and dissemination were evident in social work circles in the 1980s and 1990s (Grasso & Epstein, 1992; Rosen, 1983, 1994) and although intervention research has increased the volume of evidence-based social work practices considerably, the actual use of research knowledge lags behind (Addis, 2002; Addis & Krasnow, 2000; Mullen & Bacon, 2003; Rosen, 1994; Rosen et al., 1995).

Clearly, social work has come a long way in its preparedness to base professional practice on empirically supported evidence. Efforts must continue relentlessly to research and develop practice-relevant interventions for as-specific-as-possible client groups and practice situations and to package these in the most accessible and usable manner for practitioners. But the challenge of implementation remains to be directly addressed. Even when empirically supported intervention knowledge is available, the provider is still left with considerable uncertainty regarding how effective a given intervention will be with a given client. Uncertainty regarding an intervention’s effectiveness inheres in the probabilistic nature of research evidence on one hand and in the fit between the research-based generalization and the uniqueness of the client and the practice situation on the other hand (Rosen, 2003).

IMPLEMENTATION CHALLENGES AND IMPLEMENTATION RESEARCH

Implementation of ESTs in practice is a complex process—one on which many variables impinge. The remainder of this article highlights a number of these variables to signal some of the necessary directions for research. First, however, it is important to underscore again the distinction between implementation research and intervention research. Whereas intervention research concerns the production of knowledge that can guide practitioners toward effective attainment of the goals of treatment, implementation research concerns the production of knowledge that can help practitioners actually use and apply the products of intervention research responsibly and reliably in practice. Difficulties in practitioners’ use of research knowledge have been noted frequently, and a variety of factors have been identified as affecting it: practitioners’ lack of preparation in research (Kirk & Penka, 1992), lack of awareness of the relevant literature (Mullen & Bacon, 2003), attitudes toward research (Rosen & Mutschler, 1982) and toward empirically based treatment manuals (Addis & Krasnow, 2000), practitioners’ difficulties in critical thinking (Gibbs & Gambrill, 1999), and the carryover of lay modes of thinking into professional practice (Rosen, 1996). Rather than acknowledging the difficulties and challenges of implementation and incorporating them into a research agenda, many in the profession, and researchers in particular, have tended to place the burden of using research evidence in practice primarily on practitioners (Wakefield & Kirk, 1996). But knowledge use, or implementation in practice, can no longer be ignored or dismissed from the agenda of research in our profession. The purpose of knowledge use, the characteristics and organization of the knowledge to be used (Rosen, 1983; Proctor & Rosen, 2003), the decision-making processes involved in implementation, and the contingencies affecting the knowledge-using agency and the knowledge-using practitioner (including, of course, client characteristics) all need to be considered part of knowledge use and be integrated into the profession’s knowledge development enterprise. Thus, traditional intervention research—typically efficacy and effectiveness studies—needs to be supplemented and complemented with implementation research, the products of which aim to enhance the actual use of ESTs in practice.

The literature is replete with claims and demonstrations that practitioners, like lay persons, base their decisions on implicit, nonrational considerations rather than adhere to evidence-based, rational decision making (Dawes, 2001; Lilienfeld, 2002; Reber, 1993; Rosen, 2003; Shafir & LeBoeuf, 2002; Sloman, 1996). Basing clinical decisions on research evidence, as well as implementing ESTs with fidelity, requires a rational mode of thinking and decision making. We often assume that such modes of thinking are inculcated through the process of professional education—an assumption that, alas, is often unwarranted (Lilienfeld, 2002). An important challenge for implementation research is to focus efforts on the process and means through which practitioners can be taught, compelled, or constrained to use rational modes of decision making in their professional roles.

The likelihood of implementing research-based evidence is further complicated by the basic dilemma of idiographic application of empirical generalizations to the individual case (Rosen, 2003). Many decisions that practitioners make in the course of treatment are categorical in nature—to act in a certain manner or not. On the other hand, all research-based knowledge is probabilistic, varying in degree, and fraught with uncertainty. The uncertainty characterizing ESTs concerns the external validity of the knowledge in relation to the case at hand, for example, limited and selective sampling of populations, client types, situations, behaviors, and practitioner types. Uncertainty also surrounds the internal validity of findings supporting the EST, for example, nature and robustness of study design, the extent of control of rival hypotheses (history, maturation, etc.), the effect size, and the probabilistic robustness of the findings (Type I vs. Type II errors).
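For readers who want the standard statistical definitions behind these last terms (these are general statistics conventions, not formulas given in the article):

```latex
% Standard definitions of the error probabilities referenced above
% (general statistics; not specific to this article).
\[
\alpha = \Pr(\text{reject } H_0 \mid H_0 \text{ true}) \quad \text{(Type I error rate)}
\]
\[
\beta = \Pr(\text{retain } H_0 \mid H_0 \text{ false}) \quad \text{(Type II error rate)},
\qquad \text{power} = 1 - \beta .
\]
```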

Hence, even when well specified in treatment manuals, interventions found effective through research remain generalized formulations that have been tested with samples of clients. Yet the practitioner’s own client is not likely to share with individuals in the research samples all the characteristics that are relevant to the intervention. Thus, having a solid foundation of empirical support does not guarantee that a given intervention will meet the needs of a particular client. This is the crux of the dilemma facing the practitioner who is knowledgeable and willing to use ESTs in practice.

Thus, a critical function of implementation research is addressing the knowledge application challenge. Through implementation research, decision aids and tools can be devised and tested, and (evidence-based) methods developed for introducing them successfully into routine practice settings. Agency records have the potential to serve a decision-support function to the extent that they structure assessment and information gathering and call for recording not only the treatment used but also the rationale for its selection, such as the extent of evidence for its effectiveness. Both theory and some research evidence suggest that such means can be successfully developed (Sloman, 1996; Stanovich & West, 1999; also, systematic planned practice [SPP] is one example of a constraining system for rational and systematic practice decision making; see Rosen, 1993; Rosen, Proctor, Morrow-Howell, Auslander, & Staudt, 1993). A variety of decision and implementation aids can help the practitioner apply uncertain and probabilistic knowledge to a unique case. All such aids should enhance the practitioner’s capacity to use preformulated knowledge flexibly and responsively in a real-life situation (thereby also lowering resistance to using preformulated knowledge) and yet provide a procedure that can help in improvising on interventions (changing, adapting) and achieving a case-by-case best fit between the knowledge brought into the situation and the attainment of the desired outcome. Implementation research should also address how practice guidelines and decision-support tools can be used, as well as how practitioners can flexibly engage in recursive application–evaluation–improvisation processes in using ESTs when working on a particular case.
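As a purely illustrative sketch (the structure and field names below are hypothetical and are not drawn from the article or from the SPP materials it cites), an agency record that plays this decision-support role might capture both the selected treatment and the rationale and evidence behind its selection:

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List

# Hypothetical record structure; all field names are illustrative assumptions,
# not a specification from the article or from systematic planned practice (SPP).
@dataclass
class TreatmentDecisionRecord:
    client_id: str
    presenting_problem: str               # problem targeted for change
    desired_outcomes: List[str]           # operationally defined outcomes
    selected_intervention: str            # the EST (or modified EST) chosen
    selection_rationale: str              # why this intervention for this client
    evidence_level: str                   # e.g., "efficacy trial", "effectiveness study"
    planned_modifications: List[str] = field(default_factory=list)
    decision_date: date = field(default_factory=date.today)

# Example entry a practitioner might record at the point of treatment selection.
record = TreatmentDecisionRecord(
    client_id="C-001",
    presenting_problem="social isolation",
    desired_outcomes=["weekly contact with two social supports"],
    selected_intervention="hypothetical manualized skills-training EST",
    selection_rationale="closest match between client profile and study samples",
    evidence_level="effectiveness study",
)
print(record)
```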

As with service system research, implementation research needs to be conducted in real-world practice settings and in “trench–bench” partnership with service delivery organizations (Proctor, 2003). Implementation research is concerned with outcomes at a variety of “levels”: the service organization; often, a clinical supervisor; the provider; and the client. Among the challenges of such research are the necessarily small “ns” at the organization or agency level and the “nesting” of data, as when multiple clients are served by a given group of providers, who in turn are supervised by a smaller number of clinical supervisors within one service agency. In addition to the basic focus on the clinical effectiveness of ESTs in relation to client outcomes, new and unique outcomes need to be conceptualized and measured in implementation research. These include (a) the acceptability of the EST to the agency leadership (administrator, supervisor, board, funding sources), to the provider, and to the client; (b) the feasibility of implementation within the agency and real-world contexts; and (c) the sustainability of the EST in terms of replicability of provider behaviors, fidelity to the treatment as designed, and agency resources to maintain a conducive environment for delivering the EST.
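As a minimal sketch of how such nested data might be handled analytically (the simulated data and variable names below are hypothetical assumptions, not from the article), a mixed-effects model with a random intercept for provider is one common way to account for clients clustered within providers; agency- and supervisor-level clustering would add further grouping levels:

```python
# Assumes numpy, pandas, and statsmodels are installed.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_providers, clients_per_provider = 12, 20

# Simulate clients nested within providers (hypothetical data).
rows = []
for p in range(n_providers):
    provider_effect = rng.normal(0, 0.5)      # provider-level variation
    for c in range(clients_per_provider):
        est = rng.integers(0, 2)              # 1 = EST delivered, 0 = usual care
        outcome = 1.0 * est + provider_effect + rng.normal(0, 1.0)
        rows.append({"provider": p, "est": est, "outcome": outcome})
df = pd.DataFrame(rows)

# Random intercept for provider acknowledges the nesting of clients in providers.
model = smf.mixedlm("outcome ~ est", df, groups=df["provider"])
print(model.fit().summary())
```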

Thus, the challenge and imperative for implementation researchers is to devise tools that help workers implement the most effective treatments. Implementation methods need to be empirically developed, yielding “evidence-based implementation strategies.” Practice–research networks are essential for capturing data on the implementation process so that the experience of the field can be reaped and used to inform subsequent research to “fine-tune” or calibrate a given EST. This challenge also requires the engagement of service delivery systems and, in most cases, an increase in resources for recording and evaluation infrastructure, as well as ongoing connections with researchers. Implementation research should also address supervision and management processes necessary to support practitioners’ use of ESTs.

IMPLEMENTATION–EVALUATION FEEDBACK LOOPS

Even when intervention research has capitalized on and been informed by service system research and the implementation process has been carefully designed and studied, pressing questions remain: questions about the goodness of fit between the needs of practice, the effectiveness of the EST, and its reliability, replicability, acceptability, and effectiveness as implemented. Unfortunately, social work practice and research are too often conducted in silos, and the steps of the EBP process are too rarely followed up, evaluated, and connected to one another.

The importance of evaluating the implementation effort and providing feedback to practitioners and researchers alike must be underscored. It is unrealistic to expect a perfect fit between an empirically supported, standardized intervention and the needs of a particular client or practice situation. When a practitioner doubts the goodness of fit of an evidence-based treatment to the needs of the client and situation at hand, he or she is likely to, and should be encouraged to, supplement or modify the EST. In such instances especially, and also when interventions are implemented as originally formulated, a recursive evaluation–feedback loop should be adopted to determine whether the intervention as implemented attains the predicted results or whether it should be further modified or abandoned. Engaging in this process should not only serve to allay practitioners’ concerns about using preformulated, generalized interventions with individual clients but also provide the means for enhancing the fit between the products of research and client needs. The results of such case-by-case recursive evaluation–feedback implementation efforts would yield more specific clinical hypotheses—intervention–outcome links—potentially applicable to designated subgroups of clients and situations compared to the more general original formulation of an EST. These hypotheses could then be subjected to more direct effectiveness evaluation through practice–research networks, thereby enriching the profession’s arsenal of empirically tested treatments.

When the practice situation differs from the context in which the supporting research was conducted, the following elements in the EST may require modification: (a) intermediate outcomes may need to be added or omitted; (b) the frequency, intensity, or duration of the treatment inputs may need to be altered; (c) the tasks given to clients, such as homework assignments, may need to be changed; and (d) because many manualized ESTs do not specify procedures for establishing and maintaining a facilitative helping relationship, the practitioner may need to supplement an EST with his or her own knowledge and skills in developing a good relationship with the client. The particular modification to be made depends, of course, on the practitioner’s knowledge and the reasons for undertaking the modification.

Using or modifying established evidence-based treatments carries two risks. First, as practitioners are aware, there is the risk of overcommitting to standardized interventions and applying them “as is” without considering their fit to the client. Doing so may be “taking the easy way out,” abandoning the rigors of critical thinking, professional judgment, and evaluation. Second, there is the risk associated with substantively changing empirically standardized treatments. Our encouragement to modify and revise ESTs should not be taken as a call to change interventions freely. Modifications must be careful and well reasoned, and the practitioner cannot assume that a modified intervention retains the effectiveness of the original EST; any modification introduces change that may affect effectiveness. Therefore, both adherence to and modification of evidence-based treatments should always be accompanied by ongoing evaluation.

Three points are critical in evaluation. First, all outcomes pursued need to be defined operationally and assessed as specifically as possible by clinically meaningful indicators. Second, when available, clinically relevant standardized measures with acceptable reliability and validity should be used (Fischer & Corcoran, 1994). Third, although some outcomes are categorical in nature (e.g., obtaining housing, avoiding pregnancy, finding employment), the attainment of many commonly pursued outcomes can be appropriately measured on continuous scales. Such measurement affords a more discriminating evaluation of change and enables assessment of outcome attainment over time—including comparisons of treatment with pretreatment status, assessment of progress during and maintenance of change after treatment, and other comparisons of interest. Constant monitoring and recursive evaluation and revision of treatment should lessen practitioner concern that EBPs may be insensitive to clients and their needs.
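As an illustrative sketch of the kind of repeated, continuous measurement the text describes (the scores, session count, and instrument are hypothetical, not taken from the article), a practitioner or evaluator might track a standardized measure across sessions and compare it with pretreatment status:

```python
import pandas as pd

# Hypothetical repeated measurements on a standardized symptom scale
# (lower scores = better functioning); session 0 is the pretreatment baseline.
scores = pd.DataFrame({
    "session": [0, 1, 2, 3, 4, 5, 6],
    "score":   [28, 26, 24, 21, 18, 16, 15],
})

baseline = scores.loc[scores["session"] == 0, "score"].iloc[0]
latest = scores["score"].iloc[-1]
change = latest - baseline                 # comparison with pretreatment status
trend = scores["score"].diff().mean()      # average per-session change (progress during treatment)

print(f"Change from pretreatment: {change}")
print(f"Average change per session: {trend:.2f}")
```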

CONCLUSION

In concert with and building on the very important issues advanced in the conference “What Works: Modernizing the Knowledge Base of Social Work,” we believe that social work must redouble efforts to empirically address the many critical issues related to practitioner use and implementation of knowledge in actual practice. We must design, compile, and test varied means to aid practitioners to deal with the many difficulties inherent in attempting to apply research-based knowledge to the individual case. Scholars must address these challenges through new forms of research that complement efficacy and effectiveness research. Service systems research must precede and inform efficacy and effectiveness studies. Then implementation research is needed to test the feasibility, acceptability, and sustainability of EBPs in real-world practice. Finally, evaluation–feedback studies are needed to examine the extent to which real-world implementation compromises the replicability and effectiveness of EBPs and to inform needed modifications through subsequent efficacy and effectiveness research.

Acknowledgments

Preparation of this article was supported in part by the Center for Mental Health Services Research at the George Warren Brown School of Social Work of Washington University in St. Louis through an award from the National Institute of Mental Health (5P30 MH 068579).

Footnotes

This article was presented at the conference “What Works: Modernizing the Knowledge Base of Social Work,” in Bielefeld, Germany, on November 11, 2005.

References

  1. Addis ME. Methods for disseminating research products and increasing evidence-based practice: Promises, obstacles, and future directions. Clinical Psychology: Science and Practice. 2002;9:367–378.
  2. Addis ME, Krasnow AD. A national survey of practicing psychologists’ attitudes toward psychotherapy treatment manuals. Journal of Consulting and Clinical Psychology. 2000;68:331–339. doi: 10.1037//0022-006x.68.2.331.
  3. Dawes RM. Everyday irrationality: How pseudo scientists, lunatics, and the rest of us systematically fail to think rationally. Boulder, CO: Westview; 2001.
  4. Fischer J, Corcoran K. Measures for clinical practice: A sourcebook. 2nd ed. New York: Free Press; 1994.
  5. Fortune AE. Intervention research [Editorial]. Social Work Research. 1999;23:2–3.
  6. Fortune AE, Proctor EK. Research on social work intervention [Editorial]. Social Work Research. 2001;25:67–69.
  7. Fraser MW. Intervention research in social work: A basis for evidence-based practice and practice guidelines. In: Rosen A, Proctor EK, editors. Developing practice guidelines for social work interventions: Issues, methods, and research agenda. New York: Columbia University Press; 2003. pp. 17–36.
  8. Gambrill E. Evidence-based practice: Implications for knowledge development and use in social work. In: Rosen A, Proctor EK, editors. Developing practice guidelines for social work interventions: Issues, methods, and research agenda. New York: Columbia University Press; 2003. pp. 37–58.
  9. Gibbs L, Gambrill E. Critical thinking for social workers: Exercises for the helping professions. 2nd ed. Thousand Oaks, CA: Pine Forge Press; 1999.
  10. Grasso AJ, Epstein I. Toward a developmental approach to program evaluation. Administration in Social Work. 1992;16:187–204. doi: 10.1300/j147v16n03_11.
  11. Kirk SA, Penka CE. Research utilization and MSW education: A decade of progress? In: Grasso AJ, Epstein I, editors. Research utilization in the social services. Binghamton, NY: Haworth Press; 1992. pp. 407–421.
  12. Lilienfeld SO. When worlds collide: Social science, politics, and the Rind et al. (1998) child sexual abuse meta-analysis. American Psychologist. 2002;57:176–188.
  13. Mullen EJ, Bacon WF. Practitioner adoption and implementation of practice guidelines and issues of quality control. In: Rosen A, Proctor EK, editors. Developing practice guidelines for social work intervention: Issues, methods, and research agenda. New York: Columbia University Press; 2003. pp. 223–235.
  14. Proctor EK. Research to inform the development of social work interventions. Social Work Research. 2003;27:3–5.
  15. Proctor EK, Rosen A. The structure and function of social work practice guidelines. In: Rosen A, Proctor EK, editors. Developing practice guidelines for social work intervention: Issues, methods, and research agenda. New York: Columbia University Press; 2003. pp. 108–127.
  16. Reber A. Implicit learning and tacit knowledge: An essay on the cognitive unconscious. New York: Oxford University Press; 1993.
  17. Reid WJ, Kenaley BD, Colvin J. Do some interventions work better than others? A review of comparative social work experiments. Social Work Research. 2004;28:71–81.
  18. Rosen A. Barriers to utilization of research by social work practitioners. Journal of Social Service Research. 1983;6(3/4):1–15.
  19. Rosen A. Systematic planned practice. Social Service Review. 1993;67:84–100.
  20. Rosen A. Knowledge use in direct practice. Social Service Review. 1994;68:561–577.
  21. Rosen A. The scientific practitioner revisited: Some obstacles and prerequisites for fuller implementation in practice. Social Work Research. 1996;20:105–111.
  22. Rosen A. Evidence-based social work practice: Challenges and promise. Social Work Research. 2003;27:197–208.
  23. Rosen A, Mutschler E. Correspondence between planned and actual use of interventions in treatment. Social Work Research & Abstracts. 1982;18:28–34.
  24. Rosen A, Proctor EK, Morrow-Howell N, Auslander W, Staudt M. Systematic planned practice: A tool for planning, implementation, and evaluation. St. Louis, MO: Washington University; 1993.
  25. Rosen A, Proctor EK, Morrow-Howell N, Staudt M. Rationales for practice decisions: Variations in knowledge use by decision task and social work service. Research on Social Work Practice. 1995;5:501–523.
  26. Rosen A, Proctor EK, Staudt M. Social work research and the quest for effective practice. Social Work Research. 1999;23:4–14.
  27. Schilling RF. Developing intervention research programs in social work. Social Work Research. 1997;21:173–180.
  28. Shafir E, LeBoeuf RA. Rationality. Annual Review of Psychology. 2002;53:491–517. doi: 10.1146/annurev.psych.53.100901.135213.
  29. Sloman SA. The empirical case for two systems of reasoning. Psychological Bulletin. 1996;119:3–22.
  30. Stanovich KE, West RF. Individual differences in reasoning and the heuristics and biases debate. In: Ackerman PL, Kyllonen PC, Roberts RD, editors. Learning and individual differences: Process, trait, and content determinants. Washington, DC: American Psychological Association; 1999. pp. 389–411.
  31. Stricker G, Trierweiler SJ. The local clinical scientist: A bridge between science and practice. American Psychologist. 1995;50:995–1002. doi: 10.1037//0003-066x.50.12.995.
  32. Videka L. Accounting for variability in client, population, and setting characteristics: Moderators of intervention effectiveness. In: Rosen A, Proctor EK, editors. Developing practice guidelines for social work interventions: Issues, methods, and research agenda. New York: Columbia University Press; 2003. pp. 169–192.
  33. Wakefield JC, Kirk SA. Unscientific thinking about scientific practice: Evaluating the scientist-practitioner model. Social Work Research. 1996;20:83–95.
  34. Zeira A, Rosen A. Unraveling “tacit knowledge”: What social workers do and why they do it. Social Service Review. 2000;74:103–123.
