Administration and Policy in Mental Health. 2010 Apr 15;37(1):201–204. doi: 10.1007/s10488-010-0300-5

The Demand Side: Uses of Research in Child and Adolescent Mental Health Services

Abram Rosenblatt, Vivian Tseng
PMCID: PMC2874034  PMID: 20393795

Abstract

This special issue on child and adolescent mental health contains a thoughtful set of papers that address many of the challenges in bridging research and practice. These articles, however, focus predominantly on the supply side of producing research for use by a range of audiences, including practitioners, administrators, and policy makers. This commentary emphasizes the importance of attending to, and better understanding, the demand side with regard to how research evidence is evaluated, understood, and utilized. Drawing from work underway at the William T. Grant Foundation, the authors argue for the need to understand three broad topics: user settings and perspectives; political, economic, and social contexts; and the various uses of research. Furthermore, understanding the use of research evidence, or the demand side, is itself a topic for empirical investigation. The authors conclude that, when it comes to supplying evidence, the demand side should not be forgotten.

Keywords: Child and adolescent mental health, Evidence-based practices, Child and adolescent mental health policy, Research use


The papers in this Special Issue on child and adolescent mental health services thoughtfully tackle persistent challenges in bridging research and practice. This volume comes at a time when calls for evidence-based policy and practice are ubiquitous. On the research side, considerable time and money are spent trying to produce stronger research for use in policy and practice. On the policy and practice sides, increasingly high stakes and incentives are attached to using research evidence.

The market metaphor of supply and demand is often used to describe the production and use of research. The articles in this issue focus predominantly on the supply side of producing research for use in child and adolescent mental health services and systems. What receives less extensive and rigorous attention is the demand side: when and how research is used. What do policymakers and agency administrators think of extant research and its fit with their information needs? How do they evaluate its relevance and credibility for their work? What factors constrain and support their use of research? These are important and researchable questions. When billions of dollars and countless hours of hard work are spent trying to generate stronger research, it is important to know whether those efforts generate greater and more beneficial use of research, and why they do or do not.

As Garland, Bickman, and Chorpita (this volume) write: “if we want to improve the children’s mental health system, we need to understand its functioning, identify what is broken, and tailor improvement efforts for maximal potential impact. Effective, sustainable prescriptions for change should be based on accurate knowledge about the current system….” We agree, and would push these arguments further: researchers need to develop a stronger understanding of the demand side. Many of the key points in this commentary are drawn from work underway through the William T. Grant Foundation (Tseng and Senior Program Team 2009). The first author received support through the Foundation’s Distinguished Fellows program to learn about practice and policy by working in those roles. The second author leads the Foundation’s initiative to increase understanding of when and how research evidence is used in practice and policy. We argue for the need to understand three topics: (1) user settings and perspectives; (2) political, economic, and social contexts; and (3) the various uses of research.

User Settings and Perspectives

The William T. Grant Foundation has a longstanding interest in supporting research that can inform policy and practice. From experience, we know that research sometimes gets used, and we can tell stories about when that happened and what seemed to bring it about. What we lack is a more systematic understanding of when, how, and under what conditions research is used, and of how to improve its use. To address this need, the Foundation recently began an initiative to build theory and empirical knowledge about how policymakers and practitioners acquire, interpret, and use research evidence.

In this work, we think it is important to develop a stronger understanding of the intended users of research—the nature of policy and practice work, how that work is shaped by organizational and institutional settings, the forces that impel and impede change, and the role played by intermediary organizations such as technical assistance providers and advocacy groups. In understanding the adoption of evidence-based programs, for example, it is important to understand how decisions are made, what affects decision making, and the role research and other types of evidence play in that process.

Early findings from exploratory studies reinforce the importance of understanding practitioners’ and policymakers’ definitions of and perspectives on research. People in different roles tend to hold differing definitions of research, evidence, and evidence-based practice—definitions that can be strongly held and defended (Sexton et al. this volume). At a minimum, it is useful to recognize these differences so that researchers and practitioners understand each other at the outset, but more importantly so that they can find ways to work together more effectively. Researchers often use the terms research and evidence interchangeably, either implicitly or explicitly defining evidence as empirical findings derived from the scientific method. In contrast, policymakers and practitioners often view research evidence as only one form of evidence that is important for their work (e.g. Reay, this volume; Rosenblatt and Compian 2007; Kazdin 2006). Along these lines and with Foundation support, Lawrence Palinkas recently conducted focus groups and interviews with local leaders of child welfare, probation, and mental health agencies to understand their definitions of evidence-based practice. Many researchers define the term as referring to practices with demonstrated impact in randomized controlled trials. In contrast, the practitioners interviewed held varying definitions, including practices that have been tested widely and subjected to a variety of studies; have a body of research to support them; have proven effectiveness as reflected in positive outcomes or measurable changes; come with curricula, manuals, and training; or have specific requirements for training and fidelity to curriculum.

In addition to these definitional issues, it is vital to understand how researchers, policymakers, and practitioners assess the credibility and relevance of research on practices and other issues. As the research community seeks to improve research, is it doing so in ways that matter to policymakers and administrators and are understood by them? There are numerous initiatives underway to generate stronger research on the impact of programs and practices and to synthesize that research as lists of evidence-based practices and programs. Do the administrators charged with adopting programs and practices view the research and the lists as credible and relevant? The agency leaders and managers in Palinkas’ exploratory study evaluated the utility of research in terms of its applicability to their local contexts. They valued research conducted with local data or in sites similar to theirs in size, demographics, or location (urban or rural). When it came to evidence-based practices, they seemed to be swayed less by the strength of the research design or methods than by endorsements from trusted colleagues and by a desire to see the practices implemented on the ground.

In addition, it may be important to consider how the outcomes studied in research affect the uptake of that work. Various researchers have written about this issue, in effect hypothesizing that research addressing certain outcomes, or producing practitioner-friendly measures, would see greater uptake. Rosenblatt (1993), for example, argued for the value of assessing whether youth are “in home, in school, and out of trouble” as a way of focusing on core outcome indicators that may be of most direct public and policy interest. Lyons (2009) emphasizes the “communimetrics” of measures that are designed to transmit information of specific value to clinical staff and to link to treatment decisions and plans. Kazdin (2006) suggested grounding measures in metrics applicable to the real world. Analytic techniques such as the Reliable Change Index (Jacobson and Truax 1991) may help translate statistical significance into a more easily understood categorical judgment of clinical significance.
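To make that last point concrete, the Reliable Change Index compares an individual client’s pretest-to-posttest change against the change one would expect from measurement error alone. The sketch below is a minimal illustration of the Jacobson and Truax (1991) computation; the scale values in the example are hypothetical and are not drawn from any of the studies discussed here.

```python
import math


def reliable_change_index(pre: float, post: float,
                          sd_pre: float, reliability: float) -> float:
    """Jacobson and Truax (1991) Reliable Change Index (RCI).

    pre, post   -- one client's pretest and posttest scores
    sd_pre      -- standard deviation of pretest scores in a reference group
    reliability -- test-retest reliability of the measure
    """
    # Standard error of measurement for a single score.
    se_measurement = sd_pre * math.sqrt(1.0 - reliability)
    # Standard error of the difference between two scores.
    s_diff = math.sqrt(2.0 * se_measurement ** 2)
    return (post - pre) / s_diff


# Hypothetical example: a symptom scale with SD = 10 and reliability = .80.
# A drop from 60 to 45 yields RCI = -2.37; because |RCI| exceeds 1.96, the
# change is unlikely to reflect measurement error alone.
rci = reliable_change_index(pre=60.0, post=45.0, sd_pre=10.0, reliability=0.80)
print(f"RCI = {rci:.2f}; reliable change: {abs(rci) > 1.96}")
```

A value beyond ±1.96 indicates change unlikely, at the conventional .05 level, to be due to measurement error—the kind of categorical, clinically interpretable statement that may travel further with practitioners than a p value.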

Broader Contexts that Affect Research Use

It is also important to understand the broader political, economic, and social contexts that affect policymakers’ and administrators’ work, their research needs, and how research gets put to use. Foster et al. (this volume), for example, illustrate the myriad links between economics and the delivery of evidence-based practices. As we write this commentary, Congress is debating healthcare legislation that could alter the fiscal and regulatory landscape of healthcare delivery. The legislation passed by the Senate, for instance, would expand Medicaid in most states; private insurance would be mandated for individuals and employers; and many limits on what private payors cover would be eliminated. If this or similar legislation passes, millions more Americans will have access to free or relatively affordable health care and, presumably, given recent parity legislation, to mental health care. Understanding these changes enables researchers to direct their efforts toward pressing policy and practice questions: How should an already problematic child and adolescent mental health system grow? What services should be reimbursed? What should the relationship between public and private payors look like? What incentives would induce the provision of effective services?

As agencies face greater political and fiscal demands to use research, it is important to understand how they are responding to those demands and whether the desired consequences are being achieved. Some anecdotal evidence from California suggests the need to understand research use within the complicated terrain of service delivery and changing economic circumstances. A voter initiative (Proposition 63) dedicated the proceeds of a 1% tax on personal incomes above one million dollars to mental health services. The resulting Mental Health Services Act (MHSA) established the mechanisms by which those resources would be allocated, and one emphasis was on providing evidence-based practices (EBPs).

This situation is creating unexpected dilemmas in some counties because the MHSA provides funding at a time when “usual care” services funded through Medicaid are facing significant cuts. The evidence-based services that are established and allowable through the MHSA do not necessarily cover all young people who have been receiving “usual care,” and they likely do not encompass services, such as case management and some community-based services, that many practitioners, providers, and caregivers consider essential for youth with severe emotional disturbance. The MHSA was not meant to replace Medicaid-based services, and its funds are explicitly barred from supplanting those services. Economic challenges, however, are creating difficult choices for counties. Evidence-based practices are being applied more widely than originally anticipated, and there is little understanding of how greater use of EBPs will affect access, costs, and outcomes when they are applied as a system-wide strategy. Some observers and many providers worry that this situation will have unintended negative consequences for which youth receive care. In addition, some people are questioning the effectiveness of EBPs when applied to California’s culturally and ethnically diverse youth population—groups that have not been studied in trials of many of the interventions (e.g. Alegria et al. this volume). Whether or not these concerns bear out, they highlight the need to understand the broader context in which research use occurs.

Various Uses of Research

Policy makers, administrators, and clinicians use research evidence in various ways. Carol Weiss, Sandra M. Nutley, and Huw T.O. Davies offer descriptions of several types of research use. The articles in this special issue generally focus on instrumental uses of research—how research is directly applied to decision making to address particular problems. There are other ways, however, that research is used. Conceptual use refers to situations in which research influences or informs how policymakers and practitioners think about issues, problems, or potential solutions. Tactical use, related to strategic and symbolic uses, occurs when research evidence is used to justify existing positions, such as supporting a piece of legislation or challenging a reform effort. Imposed use, a type recently defined by Weiss, refers to situations in which there are mandates to use research evidence, such as when government funding requires that practitioners adopt programs backed by research evidence.

There are numerous illustrations of these different types of research use in child and adolescent mental health services. Imposed use of evidence-based practices is becoming more common, but as Foster et al. (this volume) suggest, imposing use may not achieve the desired results, a possibility worth rigorous empirical examination. Tactical use is well known to researchers who have seen their work used to support or justify legislation, sometimes in ways that surprise the researcher. When funding is at stake, for example, the pressure to produce positive findings can be intense and difficult to resist. Measurement feedback systems such as the one described by Bickman (2008) are examples of research-based tools designed for instrumental use in clinical settings.

Planning for Demand by Understanding Research Acquisition, Interpretation, and Use

We hope we have added some context to the extraordinary papers in this special issue by emphasizing the importance of understanding the demand side: how research evidence is acquired, interpreted, and used. Understanding the demand for research can itself be a rigorous empirical field (Tseng and Senior Program Team 2009). Although the 1970s and 1980s were heralded as a “Golden Age” for studies of research use, successfully meeting today’s challenges for evidence-based policy and practice requires stronger theory and empirical work on how research is used and how to improve its use.

The papers in this special issue provide depth, complexity, and understanding with regard to how research can help build more equitable, efficient, and effective child and adolescent mental health services. In light of this depth and complexity, the message here is deceptively simple: when it comes to supplying evidence, don’t forget the demand side.

Open Access

This article is distributed under the terms of the Creative Commons Attribution Noncommercial License which permits any noncommercial use, distribution, and reproduction in any medium, provided the original author(s) and source are credited.

Contributor Information

Abram Rosenblatt, Phone: +1-408-364-4016, Email: abram@itsa.ucsf.edu.

Vivian Tseng, Phone: +1-212-752-0071.

References

  1. Bickman L. A measurement feedback system (MFS) is necessary to improve mental health outcomes. Journal of the American Academy of Child and Adolescent Psychiatry. 2008;47:114–119. doi: 10.1097/CHI.0b013e3181825af8.
  2. Jacobson NS, Truax P. Clinical significance: A statistical approach to defining meaningful change in psychotherapy. Journal of Consulting and Clinical Psychology. 1991;59(1):12–19. doi: 10.1037/0022-006X.59.1.12.
  3. Kazdin AE. Arbitrary metrics: Implications for identifying evidence-based treatments. American Psychologist. 2006;61(1):42–49. doi: 10.1037/0003-066X.61.1.42.
  4. Lyons JS. Communimetrics: A communication theory of measurement in human service settings. New York: Springer; 2009.
  5. Rosenblatt A. In home, in school, and out of trouble. Journal of Child and Family Studies. 1993;2(4):275–282. doi: 10.1007/BF01321225.
  6. Rosenblatt A, Compian L. Exchanging glances? Systems, practice and evidence in children’s mental health services. In: Fisher WH, editor. Research in community and mental health vol. 14—research on community based mental health services for children and adolescents. Amsterdam: Elsevier Ltd; 2007.
  7. Tseng V, Senior Program Team. Focusing on the demand side in studies of the use of research evidence in policy and practice. William T. Grant Foundation 2008 Annual Report. New York, NY: William T. Grant Foundation; 2009.
