Abstract
Objective
To model children's mental health policy making dynamics and simulate the impacts of knowledge broker interventions.
Data sources
Primary data from surveys (n = 221) and interviews (n = 64) conducted in 2019–2021 with mental health agency (MHA) officials in state agencies.
Study design
A prototype agent‐based model (ABM) was developed using the PARTE (Properties, Actions, Rules, Time, Environment) framework and informed through primary data collection. In each simulation, a policy is randomly generated (salience weights: cost, contextual alignment, and strength of evidence) and discussed among agents. Agents are MHA officials and heterogeneous in their properties (policy making power and network influence) and policy preferences (based on salience weights). Knowledge broker interventions add agents to the MHA social network who primarily focus on the policy's research evidence.
Data collection/extraction methods
A sequential explanatory mixed method approach was used. Descriptive and regression analyses were used for the survey data and directed content analysis was used to code interview data. Triangulated results informed ABM development. In the ABM, policy makers with various degrees of decision influence interact in a scale‐free network before and after knowledge broker interventions. Over time, each decides to support or oppose a policy proposal based on policy salience weights and their own properties and interactions. The main outcome is an agency‐level decision based on policy maker support. Each intervention and baseline simulation runs 250 times across 50 timesteps.
Principal findings
Surveys and interviews revealed that barriers to research use could be addressed by knowledge brokers. Simulations indicated that policy decision outcomes varied by policy making context within agencies.
Conclusions
This is the first application of ABM to evidence‐informed mental health policy making. Results suggest that the presence of knowledge brokers can: (1) influence consensus formation in MHAs, (2) accelerate policy decisions, and (3) increase the likelihood of evidence‐informed policy adoption.
Keywords: health policy/politics/law/regulation, mental health, state health policies
What is known on this topic
Dissemination barriers, such as lacking access to contextually relevant evidence, have been found to inhibit the use of research evidence in public agency policy making.
Prior research suggests that knowledge brokers could address these barriers.
What this study adds
When knowledge brokers are present, evidence‐informed policies may be more likely to be adopted.
The presence of knowledge brokers can influence consensus formation in public agencies.
The process of simulation model development and the results here serve as a prototype for agent‐based models that employ social network analysis to model policy making within agencies.
1. INTRODUCTION
Policy making is a complex process. 1 , 2 Complex processes are characterized by behaviors that dynamically interact over time through feedback loops and evolving, nonlinear relationships to “create … a larger whole.” 3 For example, policy makers must not only consider the best available research evidence to inform what decision alternatives might have the most efficient, positive impact, but also assess the potential impact of decision alternatives given other relevant decisions being made and their knowledge of the context in which they are intervening. 4 , 5 , 6 , 7 , 8 Ornstein and colleagues clarified how complexity shapes policy making through three aspects: (1) dimensionality (the need for multiple, interconnected decisions to be made); (2) ruggedness (the effect of one decision is difficult to disentangle from prior or future decisions); and (3) context‐specificity (the impact of decisions is dependent upon their context). 4 The behaviors that emerge from complexity, such as policy making decisions, are often difficult to anticipate or accurately identify without formally modeling and simulating the interactions between system components. 9 , 10
The current study focuses on the role of research evidence in policy making related to children in state mental health agencies (MHAs). 11 MHAs fund approximately 8500 providers and serve about 7.3 million patients, of whom about one‐third are under age 20. 11 , 12 MHAs have discretion in terms of which children's mental health policies and programs to adopt in their systems and how they are implemented. 12 Although the complexities of research use in policy making apply to a range of issues, the current study focuses on children's mental health policy because rates of mental health problems (e.g., anxiety, depression, suicide) have been increasing in the United States 13 , 14 , 15 , 16 and are being exacerbated by the COVID‐19 pandemic. 17 , 18 , 19 , 20 , 21 , 22 , 23 , 24 , 25 , 26 There is a growing evidence base about policy and programmatic interventions—such as coordinated specialty care for first‐episode psychosis 27 and trauma‐focused cognitive behavioral therapy for youth 28 —that can improve children's mental health and also increase access to services. 29 , 30 , 31 By simulating the use of children's mental health research in policy making within MHAs, the current study seeks to identify strategies to accelerate the translation of this evidence into practice via policy.
1.1. Policy making occurs in a social system
Policy making is an inherently social process. Even when an individual policy maker or small group is tasked with adopting a particular policy, networks of peers, advocates, and practitioners can influence policy makers. 32 , 33 , 34 Policy making thus requires decision‐making processes that have been conceptualized as a function of collaborative social networks and cooperative agenda setting. 35 Social network analysis (SNA), which offers a systems‐based methodology for modeling group dynamics, 36 , 37 , 38 may prove useful for modeling policy making through identifying network‐based targets for policy making process interventions. 9 , 37 , 39 , 40
Previous SNAs of policy making processes have primarily been developed for fields such as environmental science and resource management. 7 , 41 , 42 Some have focused on how health‐related conditions or behaviors (e.g., obesity, 43 tobacco use 37 ) can spread through social networks. While system dynamics models and agent‐based models (ABMs) have been used to study policy making in other fields such as agriculture and environmental policy, 44 none have done so in mental health. 45 The current study seeks to address this knowledge gap.
1.2. Knowledge brokers, complex systems, and evidence‐informed policy making
One way to influence evidence‐informed policy making is the use of knowledge brokers. 46 Knowledge brokers can act as knowledge managers who compile and assess evidence for a particular policy, intermediary agents who facilitate connections between policy makers and researchers, and capacity builders who provide research training to policy makers. 46 Knowledge brokers have been studied in a range of policy and practice settings, including clinical care, 47 schools, 48 , 49 , 50 and health departments. 51 , 52 They may be consultants or longtime university partners. 51 , 52 , 53 Findings suggest that knowledge brokers may be particularly effective when functioning as intermediary agents. 48 Moreover, knowledge broker interventions have increased evidence‐informed decision‐making among clinical practitioners 51 and improved patient outcomes. 47
Knowledge broker interventions remain understudied in mental health policy making. Recent evidence shows that stronger partnerships between researchers and MHA officials may improve confidence in research use. 54 A survey of MHA officials found that dissemination barriers—such as lacking access to contextually relevant evidence—were inversely and independently associated with the frequency of research use in policy making. 55 Dissemination barriers may be reduced by the introduction of knowledge brokers into MHAs.
This manuscript aims to advance the use of systems science in health policy research by using mixed‐methods primary data collection and analysis to inform the development of a prototype ABM. The ABM employs SNA to model policy making dynamics and explores how the following may influence evidence‐informed policy adoption decisions related to children's mental health in MHAs: (1) policy characteristics (i.e., policy cost, contextual alignment, and strength of evidence), and (2) interventions that leverage knowledge brokers to increase policy makers' awareness of supporting evidence.
2. METHODS
A sequential explanatory (QUAN → qual) mixed‐methods research design was used to collect and analyze primary data from state MHAs. 56 , 57 Surveys were analyzed using quantitative methods, and findings informed qualitative interview data collection and analysis. Quantitative and qualitative results were then triangulated to directly inform development and design of the ABM.
2.1. Surveys
2.1.1. Data
Web‐based surveys of state MHA officials were fielded between November 2019 and February 2020. Complete details about the survey methodology are published elsewhere. 55 The survey was completed by 221 officials (response rate: 33.7%). The response rate was significantly higher among respondents from states in the Midwest than Northeast US Census region (44.9% vs. 25.2%, p = 0.004). There was not a significant difference in response rate by gender (p = 0.11, as predicted by first name using Gender API Estimator, used in prior work with MHA officials). 58 All survey items were explicitly related to policy‐ and decision‐making and research use related to children's mental health. Research was defined in the survey as "information produced by using reliable data and systematic methods—such as findings reported in peer‐reviewed publications or from analyses of local, state, or national data," a definition consistent with that of the US Commission on Evidence-Based Policymaking. 59 This manuscript focuses on survey variables that were influential in shaping ABM design decisions.
Frequency of using children's mental health research for instrumental purposes: Respondents separately indicated the frequency with which they "Use research to decide about content or direction of a policy or program" during three policy making process phases: Agenda setting, policy development, and policy implementation (1 = "Very rarely," 5 = "Very frequently"). Aggregate scores of the frequency of instrumental research use in children's mental health policy making ranged from 3 to 15, α = 0.82.
Dissemination‐related barriers to using children's mental health research: Respondents were asked "To what extent do you perceive each issue as a barrier to using children's mental health research in your work?" and rated the extent to which four issues were barriers on 5‐point Likert scales (1 = "Not a barrier," 5 = "Major barrier"). Aggregate scores of dissemination‐related barriers to using children's mental health research ranged from 4 to 20, α = 0.71.
Primary sources when seeking children's mental health research: Respondents were asked “If you were going to seek out children's mental health research to make a policy decision, who would you turn to?” and instructed to select up to three sources from a list of eight options (e.g., internal staff, partner agencies, university researchers).
Important features of children's mental health research: Respondents were asked “If you were to receive children's mental health research, how important would it be that the research have each of the following characteristics?” and rated the importance of five factors on 5‐point Likert scales (1 = “Not important,” 5 = “Very important”).
Factors that influence MHA children's mental health policy priorities: Respondents were presented with a list of nine factors and, for each, instructed to rate “How much influence you think it currently has on your agency's children's mental health priorities in general” on 5‐point Likert scales (1 = “No influence,” 5 = “Major influence”).
2.1.2. Analysis
Analyses were conducted with respondents stratified according to the relative amount of control they could exercise within their MHA's organizational structure. Respondents in roles such as MHA director, deputy director, and medical director were coded as high‐control, those in roles such as children's division director were coded as moderate‐control, and SAMHSA program administrators were coded as low‐control. One‐way ANOVAs and chi‐square tests compared the means and proportions across these three levels of organizational hierarchy.
2.2. Interviews
2.2.1. Data
Semistructured Zoom‐based interviews were conducted with 64 state and county mental health (n = 33) and substance use (n = 31) agency officials between January and June 2021. The officials were from agencies in eight purposively selected states that varied in terms of aggregate scores for frequency of research use (high/low) and barriers to research use (high/low)—classified as such through analysis of the MHA survey dataset and a similar survey of substance use agency officials. 60
Interviews focused on agency officials' experiences with children's policy making and factors that influence policy making processes. In particular, questions focused on the role of research evidence and dissemination‐related barriers, and questions around policy making were added to inform the ABM. Interview questions were informed by prior descriptive and linear regression analyses of the survey datasets. 54 , 55 , 61 The interviews were approximately 45 min in duration, audio recorded, transcribed, and imported into NVivo 11 (QSR International Pty Ltd, Melbourne, Australia) for analysis.
2.2.2. Analysis
To inform the design of the ABM, directed content analysis 62 was conducted using an approach adapted from the "rigorous and accelerated data reduction (RADaR)" technique. 63 Two interviewers reviewed transcripts and assigned text to broad coding categories with definitions that represented three factors that influenced officials' support for a new policy: cost, contextual alignment, and strength of evidence. The selection of these three factors was informed by survey data and reviews of research use in policy making. 8 , 61 , 64 , 65 , 66 , 67 , 68 The three factors are hereafter referred to as salience weights because, as explained further below, different agents in the simulation model prioritized each differently and assigned different levels of salience to each. The two coders then reviewed text coded to these salience weights and assigned text to a priori sub‐codes with definitions that captured variation in the extent to which each salience weight influenced support for a new policy. The two coders and a third member of the project team then collaboratively developed concise declarative statements that captured main findings related to each factor.
2.3. ABM development
2.3.1. Agents and environment
Agents in the model were of three types: MHA policy makers, MHA staff, and knowledge brokers. MHA officials were represented either as policy makers, the only agents with final voting power to adopt or reject a proposed policy, or as staff. Knowledge brokers were modeled as agents initially external to MHAs.
Dekker's 69 extension of Kawachi et al.'s 70 network rewiring method was used to construct MHA social networks. This method allows the network structures to approximate characteristics of real‐world social networks (i.e., a low mean distance between network members, a moderate prevalence of local clustering between members, and an approximation of the power‐law degree distribution of nodes). 69 , 71 Ties in the model represent regular communications between members (agents or nodes). During baseline and intervention simulations, each MHA had a fixed population of 25 agents (5 policy makers, 20 staff). Results were analyzed across 50 timesteps in which agents interacted. Policy makers and staff were randomly assigned the three salience weights, informed by interview results, for strength of evidence, cost, and contextual alignment. Knowledge brokers in the intervention runs were also assigned salience weights; however, their salience for strength of evidence was always set higher than the other two. The 50 timesteps represented 50 weeks, which was intended to approximate an MHA's annual budget cycle. Each set of 50 timesteps represents a model run, and each baseline and intervention condition contained 250 runs.
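To illustrate the kind of network construction described above, the following is a minimal Python sketch: it builds a 25-agent ring lattice and then rewires ties with a degree-biased (preferential) rule so that the degree distribution skews toward a few well-connected hubs while local clustering persists. This is a simplified stand-in, not Dekker's exact algorithm, and all parameter values here are illustrative assumptions.

```python
import random
from collections import defaultdict

def build_agency_network(n=25, k=4, rewire_p=0.3, seed=42):
    """Sketch of a small agency network with short paths, local clustering,
    and a right-skewed degree distribution (simplified; not Dekker's method)."""
    rng = random.Random(seed)
    # Ring lattice: each of the n agents is tied to its k nearest neighbors.
    edges = set()
    for i in range(n):
        for j in range(1, k // 2 + 1):
            edges.add(frozenset((i, (i + j) % n)))
    edges = [tuple(sorted(e)) for e in edges]
    # Track current degrees for preferential rewiring.
    degree = defaultdict(int)
    for a, b in edges:
        degree[a] += 1
        degree[b] += 1
    rewired = []
    for a, b in edges:
        if rng.random() < rewire_p:
            # Reattach endpoint b to a node chosen proportionally to degree.
            pool = [v for v in range(n) if v != a for _ in range(degree[v])]
            c = rng.choice(pool)
            if c != b and tuple(sorted((a, c))) not in rewired:
                degree[b] -= 1
                degree[c] += 1
                rewired.append(tuple(sorted((a, c))))
                continue
        rewired.append((a, b))
    return rewired

edges = build_agency_network()
```

In the paper's model, ties built this way represent the regular communication channels along which agents exchange policy opinions each timestep.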
The following simulation outcomes were assessed: Agent policy support level, the likelihood of policy adoption or rejection, and time to decision for both those agents in high‐control positions (i.e., policy makers) and all MHA agents. Agents' policy support levels ranged from 0 (no support) to 1 (complete support); at the end of each model run, agents' policy support levels translated to policy decisions as follows: adoption (policy maker support >0.85), rejection (policy maker support <0.15), or indecision (policy maker support 0.15–0.85). Time to decision was defined as the timestep in which the final policy decision stabilized, or when agents' preferences stopped adjusting.
A unique policy was generated for each simulation run consisting of three salience weights: cost, contextual alignment, and strength of evidence. Each salience weight was randomly assigned scores ranging from −1 (worst) to 1 (best), with 0 representing moderate. Policy quality was the average of these three scores. Policy preference scores, ranging from 0 (not important) to 1 (very important), were also randomly generated at baseline to model an agent's perceived importance of the three policy characteristics for their individual policy making decision.
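The policy generation and decision rules above can be sketched directly. In this hedged Python sketch, a policy is three random salience-weight scores in [-1, 1] whose average is policy quality, and agency decisions follow the stated 0.85/0.15 thresholds; averaging the policy makers' support levels is our assumption, since the paper defers the exact aggregation rule to Appendix A.

```python
import random

ADOPT_THRESHOLD, REJECT_THRESHOLD = 0.85, 0.15  # thresholds from the text

def generate_policy(rng):
    """Randomly generate a policy: three scores in [-1, 1] (worst to best)
    for cost, contextual alignment, and strength of evidence; quality is
    their average, as described in the text."""
    policy = {w: rng.uniform(-1, 1) for w in ("cost", "context", "evidence")}
    policy["quality"] = sum(policy[w] for w in ("cost", "context", "evidence")) / 3
    return policy

def classify_decision(policymaker_support):
    """Translate policy makers' end-of-run support levels (0 to 1) into an
    agency decision. Taking the mean is an illustrative assumption."""
    mean_support = sum(policymaker_support) / len(policymaker_support)
    if mean_support > ADOPT_THRESHOLD:
        return "adopt"
    if mean_support < REJECT_THRESHOLD:
        return "reject"
    return "no decision"
```

For example, `classify_decision([0.9, 0.95, 0.9, 0.88, 0.92])` returns `"adopt"`, while support levels clustered near 0.5 yield `"no decision"`, the most common outcome reported in Section 3.3.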
2.3.2. Baseline simulation
Adapted from Friedkin and Johnsen 72 and Namatame and Chen, 73 each MHA agent was randomly assigned an alpha level (mean = 0.5, SD = 0.15, range = 0–1) that captured the degree to which the agent was open to the influence of other agents relative to being anchored to their own initial opinion. We created a vector in which agents consulted with their connected colleagues in the network at each timestep. MHA agents thus updated their policy support level at each timestep through a weighted average of their current policy support level and the policy support levels of agents with whom they interacted. The three baseline conditions were distinguished by the degree to which policy makers were connected to other network members: high‐degree (highly connected to other members), low‐degree (few connections to other members), and random‐degree (randomly connected to other members).
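A single timestep of this Friedkin-Johnsen-style updating can be sketched as follows. Here alpha weights a simple unweighted mean of neighbors' support against the agent's anchored initial opinion; the paper's exact neighbor weighting is given in its Appendix A equations, so this version is a simplification.

```python
def update_support(support, initial, alpha, neighbors):
    """One timestep of opinion updating (simplified Friedkin-Johnsen rule):
    each agent's new support is alpha * (mean of neighbors' support)
    + (1 - alpha) * (the agent's own initial opinion).

    support, initial, alpha: dicts mapping agent id -> value in [0, 1]
    neighbors: dict mapping agent id -> list of connected agent ids
    """
    new = dict(support)
    for i, nbrs in neighbors.items():
        if not nbrs:
            continue  # isolated agents keep their current support
        social = sum(support[j] for j in nbrs) / len(nbrs)
        new[i] = alpha[i] * social + (1 - alpha[i]) * initial[i]
    return new
```

An agent with alpha near 0 barely moves from their initial opinion, while an agent with alpha near 1 tracks their neighbors, which is how the model captures heterogeneity in openness to influence.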
2.3.3. Tested interventions
Based on prior research 46 and the current study's survey and interview results, we identified the introduction of external knowledge brokers to MHAs as an evidence‐informed policy making intervention and assessed the degree to which it impacted MHA policy making. Knowledge brokers were modeled as agents initially external to MHAs who developed relationships with policy makers and aimed to build organizational capacity for evidence‐informed policy making. We tested three intervention conditions in which we introduced four knowledge brokers to MHA social networks. Policy makers and staff interacted with one another, while knowledge brokers were directly connected only to policy makers and had a fixed degree of four (i.e., interacted with four policy makers). More information, including the parameters and equations for the model, is included in Appendix A (Table A1 and Equations).
2.3.4. Example: A week in the life of an agent
Angela is a policy maker in the MHA. She starts out the budget year (model run) in week one considering a policy to implement. Her initial support level for the policy is determined by her preferences for cost, strength of evidence, and contextual alignment (i.e., her randomly assigned salience weights) and how closely those preferences match the policy under consideration. During each week (or timestep) in the model, Angela discusses the policy under consideration with her colleagues—those she is connected to in the agency network—and updates her support level for the policy based on her interactions, previous preferences, and level of influence/preference malleability within the network. If knowledge brokers are present (i.e., under the intervention conditions), she is likely to consult with them as well. Angela's support for the policy is updated as an average of her support and that of her connected colleagues, weighted by the level of influence/preference malleability (i.e., alpha) of each agent. Her support for the policy stabilizes at some point, when her preferences cease to change after continued interactions. At this point, Angela has decided to adopt, reject, or make no decision on the policy. If she and the majority of her policy making colleagues support the policy, the agency adopts it.
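Angela's full year can be sketched end-to-end as a loop that runs the weekly consultation process until preferences stop adjusting, returning the stabilization timestep (the "time to decision" defined above). The per-agent update mirrors the rule in Section 2.3.2; the convergence tolerance is our assumption, since the paper does not state one.

```python
def time_to_decision(support, initial, alpha, neighbors, weeks=50, tol=1e-4):
    """Run weekly updates over the 50-week budget cycle until agents'
    support levels stop changing (within tol); return the final support
    levels and the stabilization week. tol is an assumed tolerance."""
    for week in range(1, weeks + 1):
        new = {}
        for i, nbrs in neighbors.items():
            # Each week the agent averages connected colleagues' support,
            # weighted by alpha against their anchored initial opinion.
            social = sum(support[j] for j in nbrs) / len(nbrs) if nbrs else support[i]
            new[i] = alpha[i] * social + (1 - alpha[i]) * initial[i]
        if max(abs(new[i] - support[i]) for i in support) < tol:
            return new, week
        support = new
    return support, weeks
```

Because each agent stays partially anchored to their initial opinion, support levels typically settle well before the 50-week cap, which is what allows the model to compare time to decision across baseline and knowledge-broker conditions.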
2.3.5. ABM dashboard
A dashboard was created to expose the inner workings of the ABM, allowing the research team to observe the mechanisms, processes, outcomes, and other model metrics. Figure 1 shows a screenshot of the dashboard.
FIGURE 1.
Knowledge broker ABM dashboard. The dashboard tracks model metrics, such as the maximum number of connections and other network statistics in the agency, as well as time to decision and policy characteristics [Color figure can be viewed at wileyonlinelibrary.com]
3. RESULTS
3.1. Survey results
We selectively present results from new analyses of survey data that were particularly influential in shaping the ABM. Research was used frequently for instrumental purposes (aggregate instrumental research use score = 12.0 on 15‐point scale) and dissemination‐related barriers to using children's mental health research were prevalent but not pervasive (aggregate dissemination‐related barrier score = 11.9 on 20‐point scale) (Table 1). ANOVAs revealed that the US Census Region of a respondent's state was not significantly associated with frequency of research use for any of the purposes (p ≥ 0.79).
TABLE 1.
Key results from survey of state mental health agency (MHA) officials that informed decisions about the design of the agent‐based model (ABM) (N = 221)
| | All (N = 221) | High‐control (n = 62) | Moderate‐control (n = 66) | Low‐control (n = 93) | F/χ² | p |
|---|---|---|---|---|---|---|
| Research use | | | | | | |
| Aggregate frequency of instrumental research use score (range: 3–15) | 12.01 | 12.50 | 11.69 | 11.96 | 1.81 | 0.17 |
| Frequency of instrumental research use during agenda setting phase | 4.04 | 4.16 | 4.02 | 3.99 | 0.62 | 0.54 |
| Frequency of instrumental research use during policy development phase | 3.96 | 4.13 | 3.80 | 3.99 | 2.09 | 0.13 |
| Frequency of instrumental research use during policy implementation phase | 3.93 | 4.04 | 3.86 | 3.90 | 0.56 | 0.57 |
| Dissemination‐related barriers to using children's MH research in policy making | | | | | | |
| Aggregate dissemination‐related barrier score (range: 4–20) | 11.86 | 11.78 | 11.55 | 12.23 | 0.68 | 0.51 |
| Lack of actionable messages/recommendations in summaries of research | 3.16 | 3.09 | 3.11 | 3.25 | 0.41 | 0.66 |
| Lack of interaction or collaboration with researchers | 3.06 | 3.02 | 2.90 | 3.24 | 1.34 | 0.26 |
| Questions that researchers ask are not relevant to decisions | 2.90 | 3.02 | 2.77 | 2.94 | 0.83 | 0.44 |
| Unclear presentation/communication of research findings | 2.74 | 2.80 | 2.72 | 2.70 | 0.16 | 0.86 |
| Where to turn for children's mental health research to inform policy decisions | | | | | | |
| University researchers | 57.0% | 61.3% | 59.1% | 52.7% | 1.29 | 0.53 |
| Staff within my agency | 38.0% | 37.1% | 36.4% | 39.8% | 0.22 | 0.90 |
| Importance of features of disseminated children's MH research evidence | | | | | | |
| Be relevant to residents of my state | 4.47 | 4.45 | 4.53 | 4.43 | 0.46 | 0.63 |
| Provides data on cost‐effectiveness/budget impact | 4.41 | 4.42 | 4.48 | 4.35 | 0.57 | 0.57 |
| Be delivered by someone I know or respect | 3.27 | 3.29 | 3.20 | 3.31 | 0.21 | 0.81 |
| Factors that influence agency children's mental health policy priorities | | | | | | |
| Budget issues | 4.34 | 4.48 | 4.50 | 4.13 | 4.66 | 0.01 |
| State resident demand | 3.55 | 3.53 | 3.56 | 3.55 | 0.01 | 0.99 |
| Research evidence | 3.29 | 3.32 | 3.41 | 3.17 | 1.15 | 0.32 |

Note: The high‐, moderate‐, and low‐control columns stratify respondents by their position within the agency organizational structure.
Over half (57.0%) of MHA officials identified university researchers as a primary source they turn to when seeking out children's mental health research to inform policy decisions. Just over one‐third (38.0%) of MHA officials reported that they would turn to staff within their agency for this research. This indicates that research evidence is exchanged between officials within agencies and, along with the policy making literature described above, contributed to the decision for the ABM to have a network design.
The features of children's mental health research perceived as most important by MHA officials were research being relevant to state residents (mean = 4.47) and research providing data on cost‐effectiveness/budget impact (mean = 4.41). The finding regarding relevance to state residents signals the importance of research evidence being aligned with the contexts in which MHA officials work, contributing to the decision to include a "contextual alignment" salience weight in the ABM. The finding that data about cost‐effectiveness/budget impact was perceived as a very important feature contributed to the decision to include a "cost" salience weight in the ABM. The source of research being someone that the MHA official "know[s] or respect[s]" (mean = 3.27) was rated as only a moderately important feature of children's mental health research. Given this was not perceived as a very important feature, a knowledge broker intervention may be feasible since it entails an external agent (who may not be initially known or respected by MHA officials) joining the MHA to facilitate linkages to research.
Budget issues (mean = 4.34) were rated as factors that have substantial influence on children's mental health priorities of MHAs. This is consistent with “data about cost‐effectiveness/budget impact” being perceived as a very important feature of research and further supported the inclusion of “cost” salience weight in the ABM. State resident demand (mean = 3.55) and research evidence (mean = 3.29) were perceived as having moderate influences on children's mental health priorities of MHAs, further supporting the inclusion of “contextual alignment” and “strength of evidence” salience weights in the ABM.
3.2. Interview results
Table A2 presents key findings and illustrative quotes that informed the development and inclusion of salience weights in the ABM. These findings are also concisely summarized below.
3.2.1. Cost salience weight
The overwhelming majority of agency officials expressed that cost is the single factor with the most influence on agency decisions related to children's behavioral health policies. These officials also noted, however, that cost did not have as much influence on their personal opinions about policies and that cost was only one of many factors (e.g., along with research evidence, politics, community needs) they consider when deciding to support or oppose a new policy.
3.2.2. Strength of evidence salience weight
Most MHA officials and about half of substance use agency officials indicated that research evidence was as important as cost and other factors in children's behavioral health policy making. Opinions about the influence of research evidence varied considerably across respondents. Factors frequently identified as affecting the impact of research evidence on children's behavioral health policy making included agency culture, time and staff resources, personal interest in research, access to locally relevant research, and payer and legislature requirements to use research evidence and fund evidence‐informed practices.
3.2.3. Contextual alignment salience weight
Agency mission/vision and local needs were identified as contextual factors that affected children's behavioral health policy making as well as the impact of research evidence. When asked about the influence of agency mission/vision on the policy making process, the majority of MHA officials felt that it often had substantial influence on their personal support for a new policy as well as the agency decision to adopt or reject it. This feeling was not universal, however, as some respondents believed that their agency mission/vision was too vague to guide child‐specific policy making. Most respondents also expressed that policy making processes were strongly influenced by the needs of the populations they serve. For example, many respondents described how they used local data to inform their children's policy making priorities. Some respondents emphasized the importance of meeting basic unmet service needs before considering the widespread adoption of new evidence‐informed practices, and others expressed that there was often misalignment between the characteristics of research evidence and local needs.
3.3. ABM results
Informed by the interview and survey results along with the extant literature on policy making, 1 , 2 the prototype ABM simulated knowledge broker interventions in MHAs to increase the uptake of evidence‐informed policies. We focus on two main outcomes from the model: policy decisions and time to decisions in baseline and intervention model runs.
In the 1500 runs of the model (250 each for the three baseline and three intervention conditions), most policies resulted in indecision, meaning that support for the policy stabilized somewhere between the adoption and rejection thresholds. The top panel of Figure 2 shows that policies were adopted in 13.2% of runs, rejected in 13.4%, and that no decision was reached in 73.4%. The bottom panel of Figure 2 shows that when policies were of relatively high or low quality based on the three salience weights, decisions were reached more quickly than for middle‐quality policies under all conditions. The lowest‐quality policies resulted in a decision after about 20 weeks (or timesteps), as did the highest‐quality ones. Policies that were neither relatively poor nor high quality—those scaled around zero and rated moderate—took the longest amount of time to reach either a decision or stabilize between the two thresholds.
FIGURE 2.
Likelihood of policy adoption (top) and time to decision (bottom), based on overall policy quality. In each baseline and intervention run policy makers were either highly connected in the agency network (high‐degree), had only a few connections (low‐degree), or were initialized with a random number of connections (random‐degree). Intervention runs added knowledge brokers to the agency network [Color figure can be viewed at wileyonlinelibrary.com]
Figure 3 shows the time to decision (agency‐wide and policy makers only, respectively) under each baseline and intervention condition. The largest differences are seen under the knowledge broker intervention when policy makers had only a few ties, or low degrees, in the agency network. This indicates that sparsely connected policy makers reached consensus more quickly than under conditions where they were more connected to other network members. Under the two other intervention conditions (high‐ and random‐degree policy makers), the differences from the baseline runs were minimal.
FIGURE 3.
Time to decision for entire agency compared to that for policy makers under each model condition, based on overall policy quality. In each intervention run, policy makers were either highly connected in the agency network (high‐degree), had only a few connections (low‐degree), or were initialized with a random number of connections. Intervention runs added knowledge brokers to the agency network [Color figure can be viewed at wileyonlinelibrary.com]
Finally, we created a two‐way analysis of variance interaction plot to investigate how the introduction of knowledge brokers influenced the time to policy decisions. Figure 4 shows the interaction between the policy maker network degree condition and the presence of knowledge brokers. The introduction of knowledge brokers in both the high‐ and random‐degree policy maker conditions potentially increases the average time to decision; for relatively isolated (low‐degree) policy makers, however, knowledge brokers are associated with a decrease in time to decision.
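The cell means behind such an interaction plot can be computed directly. The sketch below uses hypothetical run‐level times to decision (not the study's data) to show how opposite‐signed broker effects across degree conditions produce the non‐parallel lines of an interaction:

```python
from statistics import mean

# Hypothetical run-level results: (degree_condition, brokers_present, time_to_decision)
runs = [
    ("low", False, 30), ("low", False, 34), ("low", True, 18), ("low", True, 20),
    ("high", False, 22), ("high", False, 24), ("high", True, 26), ("high", True, 27),
]

def cell_means(rows):
    """Mean time to decision for each (degree, brokers) cell of the 2-way design."""
    cells = {}
    for degree, brokers, t in rows:
        cells.setdefault((degree, brokers), []).append(t)
    return {cell: mean(times) for cell, times in cells.items()}

means = cell_means(runs)
# An interaction appears when broker effects differ across degree conditions:
low_effect = means[("low", True)] - means[("low", False)]     # negative: faster decisions
high_effect = means[("high", True)] - means[("high", False)]  # positive: slower decisions
```

Plotting the four cell means, one line per broker condition, reproduces the kind of crossing pattern shown in Figure 4; a formal test would fit a two‐way ANOVA including the degree‐by‐broker interaction term.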
FIGURE 4.
Interaction between policy maker context and presence of knowledge brokers.
4. DISCUSSION
This study is the first application of system science methods to understand policy making in MHAs, and one of a growing number of applications of these methods to understand evidence‐informed policy making more broadly. 45, 74, 75, 76 The mixed‐methods approach used here serves as an example of how empirical quantitative and qualitative data can be analyzed to inform development of an SNA‐based ABM. The survey and interview findings were largely consistent with prior research about mental health policy making in the United States. 61, 64 Considerations related to costs and community and political context were perceived as major drivers of decision‐making, and research evidence was largely perceived as valuable, regularly used, and influential in certain contexts. The ABM findings, however, shed new light on how policy making dynamics could be shaped by knowledge brokers to promote evidence‐informed decision‐making. Over half (57.0%) of MHA officials identified university researchers as a primary source they turn to when seeking out children's mental health research to inform policy decisions. This finding suggests a desire for academic research among MHA officials and supports the notion that a knowledge broker intervention may be acceptable and desirable. 54
Employing a sequential explanatory mixed‐methods research design to inform computational model development could prove a useful approach to address complex issues in public health and beyond. Simulation results suggest that the introduction of knowledge brokers into MHA social networks changes the dynamics of the policy making process. While there were virtually no differences in the number of normatively good policies adopted between the baseline (no knowledge brokers) and intervention (knowledge brokers present) conditions, the policy making process was qualitatively different when knowledge brokers consulted policy makers. Though knowledge brokers may not have the power to change the level of resources (i.e., time, budget) in an agency, they can likely position MHAs to take advantage of policy windows and adopt evidence‐informed policies more quickly by facilitating and expediting consensus formation within agencies around policy adoption. 77, 78, 79 While many simulation models focus solely on outcomes such as policy adoption or rejection, 45 examining process variables such as time to decision is also worthwhile.
The prototype ABM results also suggest that when knowledge brokers are present and policy options have strong research evidence support, and the cost of implementation is aligned with an agency's resources as well as its mission or scope (i.e., context), such policies are more likely to be adopted. Knowledge brokers act as a mechanism or tipping point, moving agencies across the threshold of policy adoption (or rejection, in the case of policies whose costs are high or whose evidence base is weak).
4.1. Limitations, implications, and future directions
A limitation of the study is the MHA official survey response rate of 33.7%. Although higher than other recent surveys of similar policy makers, 58, 60 the response rate raises questions about whether the research use practices of survey respondents are representative of all MHA officials. While we observed significant differences in response rate by US Census region, this is unlikely to have biased the results because US Census region was not significantly associated with the frequency of research use.
The model development process and the results here serve as a prototype for ABMs that employ SNA to model health policy making within various types of agencies. The steps of (1) quantitative data collection and analysis, which informs (2) qualitative data collection and analysis, followed by (3) triangulation of these results with the extant literature on consensus formation and policy‐ and decision‐making, and finally (4) development and design of computational simulation models that take advantage of innovative social network techniques and strategies, can plausibly be applied to other substantive, complex challenges in public health and health care service utilization.
The prototype model presented here has caveats, such as equally weighting three policy characteristics (i.e., salience weights of cost, contextual alignment, and strength of evidence) that are likely not weighted equally in day‐to‐day policy making. Nevertheless, the results demonstrate that final outcomes (policy adoption, rejection, or indecision) are not the only metrics by which to judge the quality or effectiveness of a particular intervention. Time to decision, as well as the potential qualitative influence of knowledge brokers, is also important: these factors might influence the degree to which policy makers can capitalize on often brief policy windows and build consensus within the agency, which in turn might lead to more effective implementation and sustainment of policy decisions.
Next steps for the development of this model include varying: (1) the number of knowledge brokers introduced into the agency context, (2) agency size, (3) the number and proportion of policy makers within the agency, and (4) the salience weights given to each of cost, contextual alignment, and strength of evidence as policy characteristics. Furthermore, the research team will augment the model with additional variables to uncover other mechanisms promoting evidence‐informed policy making and other emergent phenomena, including inter‐agency collaboration as well as state executive and legislative influence over agency policy making.
5. CONCLUSION
This is the first application of ABM to evidence‐informed decision‐making in health policy. Results suggest that the presence of knowledge brokers can influence consensus formation in MHAs, accelerate policy decisions, and increase the likelihood of evidence‐informed policy adoption.
ACKNOWLEDGMENTS
None.
APPENDIX A.
TABLE A1.
Key agent‐based model parameters
Parameter | Description | Values |
---|---|---|
Policy | ||
Cost | The cost of the proposed policy | Range: −1 to 1; Higher values indicate a less expensive policy. |
Contextual alignment | The degree to which the proposed policy is aligned with the unique context of the agency | Range: −1 to 1; Higher values indicate greater contextual alignment. |
Strength of evidence | The degree to which the evidence base supports the proposed policy | Range: −1 to 1; Higher values indicate greater strength of evidence. |
Policy quality | The average value of the policy's cost, contextual alignment, and strength of evidence | Range: −1 to 1; Simple average of the above three scores. |
Individual agents | ||
Type | The type of agent | Policy maker, staff, knowledge broker. |
Policy preference scores | ||
Cost | An agent's preference for considering costs of the proposed policy | Range: 0–1. |
Contextual alignment | An agent's preference for considering the contextual alignment of the proposed policy | Range: 0–1. |
Strength of evidence | An agent's preference for considering the strength of evidence of the proposed policy | Range: 0–1; (the policy preferences are scaled so that the three preference values add up to 1). |
Alpha (influence) | The degree to which an agent is open to policy opinion influence from other agents | Range: 0–1; normally distributed with mean = 0.15; std. deviation = 0.15. Values close to 1 imply that agents will mostly value their own policy opinion; values close to 0 imply that agents will mostly value the policy opinions of others. |
Policy support level | The degree to which an agent supports or opposes the proposed policy | Policy support starts at zero, can then become more positive or negative after policy evaluations and interactions with other agency members. |
Scaled policy support level | The degree to which an agent supports or opposes the proposed policy | Range: 0–1; logistic transform of the policy support score. Scaled positive support approaches 1.0; scaled negative support approaches 0.0. |
Organizational structure | ||
Agency size | Number of agency members | 25; 5 policy makers and 20 other agency members (note that for interventions, four knowledge brokers are added). |
Agent degree | How many ties each agent has to other agents | Average degree = 4. |
Model outcomes | ||
Policy decision (all MHA agents) | The final decision on the proposed policy reached by all agency members | Adopt, reject, no decision. A policy is adopted when the average scaled support exceeds 0.85; a policy is rejected when the average scaled support dips below 0.15. |
Policy decision (policy makers) | The final decision on the proposed policy reached by agency policy makers | Same as above, but just for decision makers. |
Time to policy decision (all MHA agents) | The time step (tick) in which the policy decision is reached (i.e., the average scaled support first exceeds 0.85 or dips below 0.15) | Range: 1–50 or NA (NA when there is no policy decision at the end of the model run). |
Time to policy decision (policy makers) | The time step (tick) in which the policy decision is reached (i.e., the average scaled support first exceeds 0.85 or dips below 0.15) | Same as above, but just for policy makers. |
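Table A1's organizational parameters (25 agency members, average degree of 4) correspond to the scale‐free agency network described in the abstract. Below is a minimal sketch of how such a network could be initialized using Barabási–Albert‐style preferential attachment; the generator settings (m = 2 links per new node, which yields a mean degree a little under 2m) are assumptions for illustration, since the paper does not specify them.

```python
import random

def scale_free_network(n, m=2, seed=1):
    """Preferential attachment: each new node links to m distinct existing
    nodes chosen proportionally to their current degree."""
    rng = random.Random(seed)
    edges = set()
    repeated = []                    # node ids, repeated once per edge endpoint
    targets = set(range(m))          # the first new node attaches to the seed nodes
    for new in range(m, n):
        for t in targets:
            edges.add((t, new))
            repeated.extend([t, new])
        targets = set()
        while len(targets) < m:      # degree-proportional sampling without replacement
            targets.add(rng.choice(repeated))
    return edges

net = scale_free_network(25)         # an agency of 25 members, as in Table A1
avg_degree = 2 * len(net) / 25       # close to the table's average degree of 4
```

High‐degree hubs emerge among the earliest nodes, which is what lets the model distinguish highly connected (high‐degree) from sparsely connected (low‐degree) policy makers across conditions.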
A.1. Model equations
Equations for two core elements of REDMOD:

1. Initial policy evaluation (calculated once for every actor in the agency at the beginning of the model run) is the weighted sum of the actor's preferences for cost, feasibility, and research multiplied by the policy's actual cost, feasibility, and amount of research support:

Eval_i = w_{cost,i} · P_{cost} + w_{feas,i} · P_{feas} + w_{res,i} · P_{res}

2. Policy support updating (calculated every tick for every actor in the agency):

Support_t = Support_{t−1} + EvalAvg + TH

where Support_t is the updated support for the policy at tick t; Support_{t−1} is the support for the policy at the previous tick; TH is a random "trembling hand" adjustment that adds noise to the support calculation; and EvalAvg is the weighted average of the actor's own policy evaluation with the policy evaluations of the actor's connected neighbors.

EvalAvg is calculated as follows:

EvalAvg = α · Eval_i + (1 − α) · (1 / |N_i|) · Σ_{j ∈ N_i} Eval_j

where Eval_i is the actor's initial policy evaluation; Eval_j is a connected neighbor j's initial policy evaluation; N_i is the set of the actor's connected neighbors; and α is a salience weight that determines how much each actor pays attention to their own policy evaluation versus the policy evaluations of their connected neighbors.
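Combined with the adoption and rejection thresholds in Table A1, these update rules can be sketched in a few lines of code. This is an illustrative reading of the equations, not the authors' implementation; the trembling‐hand noise scale (0.05), the preference values, and the neighbor evaluations below are assumed for the example.

```python
import math
import random

def initial_evaluation(prefs, policy):
    """Eval_i: weighted sum of the agent's salience preferences and the
    policy's actual attribute values."""
    return sum(prefs[k] * policy[k] for k in ("cost", "context", "evidence"))

def update_support(support_prev, own_eval, neighbor_evals, alpha, rng):
    """Support_t = Support_{t-1} + EvalAvg + TH (random trembling-hand noise)."""
    neighbor_mean = sum(neighbor_evals) / len(neighbor_evals)
    eval_avg = alpha * own_eval + (1 - alpha) * neighbor_mean
    th = rng.gauss(0, 0.05)  # assumed noise scale
    return support_prev + eval_avg + th

def scaled_support(support):
    """Logistic transform onto (0, 1); adoption above 0.85, rejection below 0.15."""
    return 1 / (1 + math.exp(-support))

rng = random.Random(0)
prefs = {"cost": 0.3, "context": 0.3, "evidence": 0.4}   # sums to 1, per Table A1
policy = {"cost": 0.8, "context": 0.5, "evidence": 0.9}  # a high-quality policy
ev = initial_evaluation(prefs, policy)                   # 0.75
support = 0.0                                            # support starts at zero
for _ in range(10):                                      # ten ticks of interaction
    support = update_support(support, ev, [0.6, 0.7], alpha=0.5, rng=rng)
adopted = scaled_support(support) > 0.85
```

Because the positive evaluation accumulates each tick, the scaled support climbs toward 1.0 and crosses the 0.85 adoption threshold; a policy with negative attribute values would drift toward the 0.15 rejection threshold instead.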
TABLE A2.
Key findings from rapid analysis of interviews with state and county mental health and substance use agency officials that informed salience weights included in agent‐based model (N = 64)
Salience weight | Main supporting finding | Illustrative quotes |
---|---|---|
Cost | Cost considerations have a major influence on policy decisions, but other factors also have substantial influence |
“‘How much would cost function into the decision?’ I would say, having years ago lost my total commitment to idealism, significant impact. If I can't fund it, it doesn't make sense to do it. I just frustrate everybody.” “‘The cost?’ Well, it's an aspect. But it's one dimension. I mean, you can't evaluate that one dimension on its own. I mean, there's like also the side of, ‘What are the alternatives that makes it actually relevant to what the cost is?’ ‘What are the benefits?’ ‘What else is already being offered?’ ‘What are competing priorities or options in the space?’.” |
Strength of evidence | Research evidence is valued by agency officials and has influence on policy decisions, but other factors often have greater influence and barriers often inhibit the impact of research on decision making |
“I think [research is] pretty important. I mean, because those are the things that we then have to go back and have to tell our executive director or our county executive why: why we are or why we are not going to move forward with a project or continue to fund a project. So if I don't have actual research or action steps that support that decision, we can definitely get jammed up for either funding or defunding a program, so I would say it's pretty important.” “[Decision making] happens really fast. We don't have the time or feel like we should make the time to slow down, look at research, make decisions based on the data and what's happening in the real world. So, I think we feel a lot of pressure to produce things before we've had a chance to actually get enough time to analyze them.” “And we do have [Data] Workgroup that really helps inform our efforts … So that's really been helpful to us because we believe prevention should be data driven. If you're not looking at your data and you're—I like to use the term “prevention confetti”—you're just like throwing things up in the air and hoping they work.” |
Contextual alignment | When agency mission/vision is clear and relevant to child‐specific issues, it is one of many factors that influence agency decision making processes ᵃ |
“I have the mission taped right behind my computer, and I have since the day I started. I was raised under leadership that really stressed being mission‐driven. And so, it's so important because we're in such a tight budget situation. I was raised under a boss that walked around the office and would randomly ask people to recite the mission … I get asked on a daily basis to make a decision to fund a program or to contract with a provider and—I can't say enough—on a daily basis we've got to go back to our mission to give me any sort of backbone with making these decisions. Because otherwise, we would be everywhere and doing nothing.” “Our vision is pretty focused on serving kids and families in [the state] … It's good to know that's where the agency is at, but I don't think it necessarily impacts my decision‐making. There are other variables that come into play in terms of feasibility and cost.” “I don't think our agency really, frankly, has a mission or vision for children's services.” |
The extent to which research evidence is aligned with local context and needs affects the extent to which it influences agency decision‐making processes |
“And so, it's not just about cost or data. I don't want to just do a program because people think it's a good idea. I think it has to also meet a need. It has to, you know, meet the local or statewide use patterns.” “Research can be so rigid in terms of, you know, like clinical trials. Research can be so rigid that it can only work in this tiny vacuum … So, if you are doing research around Black youth—that's who it's gonna be in [my area], just because of our demographics—and thinking about risk factors and protective factors involved in sort of supporting youth. The way that I see it is when we're thinking about substance use disorder, you know, what we're treating is not [only the substance use issue] what we're treating is trauma, right? We're treating trauma and we're treating pain. So if we're not thinking about where that pain originates from, I mean, of course it's not going to be effective, right?” |
ᵃ This finding mainly reflects interviews with mental health agency, not substance use agency, officials because a question explicitly about agency mission/vision was asked only in the mental health agency official interviews.
Combs T, Nelson KL, Luke D, et al. Simulating the role of knowledge brokers in policy making in state agencies: An agent‐based model. Health Serv Res. 2022;57(Suppl. 1):122‐136. doi: 10.1111/1475-6773.13916
[Correction added on 4 April 2022, after first online publication: the word ‘focusing’ in the ‘Conclusions’ section of the Abstract section has been removed in this version.]
Funding information National Institute of Mental Health, Grant/Award Numbers: F31MH122155‐01A1, P50MH113662; National Institute on Drug Abuse, Grant/Award Number: P50MH113662‐01A1S1
REFERENCES
- 1. Zagonel AA, Rohrbaugh J. Using group model building to inform public policy making and implementation. In: Qudrat‐Ullah H, Spector JM, Davidsen PI, eds. Complex Decision Making: Theory and Practice. Understanding Complex Systems. Springer; 2008:113‐138. doi: 10.1007/978-3-540-73665-3_7
- 2. Hammond RA. Considerations and Best Practices in Agent‐Based Modeling to Inform Policy. National Academies Press. 2015. Accessed July 29, 2021. https://www.ncbi.nlm.nih.gov/books/NBK305917/
- 3. Miller JH, Page SE. Complex Adaptive Systems: An Introduction to Computational Models of Social Life. Princeton University Press; 2007.
- 4. Ornstein JT, Hammond RA, Padek M, Mazzucca S, Brownson RC. Rugged landscapes: complexity and implementation science. Implement Sci. 2020;15(1):85. doi: 10.1186/s13012-020-01028-5
- 5. Sheldrick RC, Hyde J, Leslie LK, Mackie T. The debate over rational decision making in evidence‐based medicine: implications for evidence‐informed policy. Evid Policy. 2021;17(1):147‐159. doi: 10.1332/174426419X15677739896923
- 6. Huber GP. The nature of organizational decision making and the design of decision support systems. MIS Q. 1981;5(2):1‐10. doi: 10.2307/249220
- 7. Huber R, Bakker M, Balmann A, et al. Representation of decision‐making in European agricultural agent‐based models. Agric Syst. 2018;167:143‐160. doi: 10.1016/j.agsy.2018.09.007
- 8. Oliver K, Lorenc T, Innvær S. New directions in evidence‐based policy research: a critical analysis of the literature. Health Res Policy Syst. 2014;12(1):34. doi: 10.1186/1478-4505-12-34
- 9. Meadows DH. Thinking in Systems: A Primer. Chelsea Green Publishing; 2008.
- 10. Sterman J. Business Dynamics: Systems Thinking and Modeling for a Complex World. McGraw Hill; 2000.
- 11. Miller JE. Too Significant to Fail: The Importance of State Behavioral Health Agencies in the Daily Lives of Americans with Mental Illness, for their Families, and for their Communities. National Association of State Mental Health Program Directors. 2012. Accessed July 20, 2021. https://www.nasmhpd.org/content/too-significant-fail-importance-state-behavioral-health-agencies-daily-lives-americans
- 12. Substance Abuse and Mental Health Services Administration. Funding and Characteristics of Single State Agencies for Substance Abuse Services and State Mental Health Agencies, 2015. Substance Abuse and Mental Health Services Administration. 2017. Accessed July 20, 2021. https://store.samhsa.gov/sites/default/files/d7/priv/sma17-5029.pdf
- 13. Mojtabai R, Olfson M, Han B. National trends in the prevalence and treatment of depression in adolescents and young adults. Pediatrics. 2016;138(6):e20161878. doi: 10.1542/peds.2016-1878
- 14. Ruch DA, Sheftall AH, Schlagbaum P, Rausch J, Campo JV, Bridge JA. Trends in suicide among youth aged 10 to 19 years in the United States, 1975 to 2016. JAMA Netw Open. 2019;2(5):e193886. doi: 10.1001/jamanetworkopen.2019.3886
- 15. Twenge JM, Cooper AB, Joiner TE, Duffy ME, Binau SG. Age, period, and cohort trends in mood disorder indicators and suicide‐related outcomes in a nationally representative dataset, 2005‐2017. J Abnorm Psychol. 2019;128(3):185‐199. doi: 10.1037/abn0000410
- 16. Centers for Disease Control and Prevention. Youth Risk Behavior Survey Data Summary & Trends Report 2007‐2017. National Prevention Information Network. Accessed October 18, 2021. https://npin.cdc.gov/publication/youth‐risk‐behavior‐survey‐data‐summary‐trends‐report‐2007‐2017
- 17. Leeb RT. Mental health–related emergency department visits among children aged <18 years during the COVID‐19 pandemic — United States, January 1–October 17, 2020. MMWR Morb Mortal Wkly Rep. 2020;69:1675‐1680.
- 18. Raviv T, Warren CM, Washburn JJ, et al. Caregiver perceptions of children's psychological well‐being during the COVID‐19 pandemic. JAMA Netw Open. 2021;4(4):e2111103. doi: 10.1001/jamanetworkopen.2021.11103
- 19. Patrick SW, Henkhaus LE, Zickafoose JS, et al. Well‐being of parents and children during the COVID‐19 pandemic: a national survey. Pediatrics. 2020;146(4):e2020016824. doi: 10.1542/peds.2020-016824
- 20. Krass P, Dalton E, Doupnik SK, Esposito J. US pediatric emergency department visits for mental health conditions during the COVID‐19 pandemic. JAMA Netw Open. 2021;4(4):e218533. doi: 10.1001/jamanetworkopen.2021.8533
- 21. Racine N, Cooke JE, Eirich R, Korczak DJ, McArthur B, Madigan S. Child and adolescent mental illness during COVID‐19: a rapid review. Psychiatry Res. 2020;292:113307. doi: 10.1016/j.psychres.2020.113307
- 22. Loades ME, Chatburn E, Higson‐Sweeney N, et al. Rapid systematic review: the impact of social isolation and loneliness on the mental health of children and adolescents in the context of COVID‐19. J Am Acad Child Adolesc Psychiatry. 2020;59(11):1218‐1239.e3. doi: 10.1016/j.jaac.2020.05.009
- 23. Hill RM, Rufino K, Kurian S, Saxena J, Saxena K, Williams L. Suicide ideation and attempts in a pediatric emergency department before and during COVID‐19. Pediatrics. 2021;147(3):e2020029280. doi: 10.1542/peds.2020-029280
- 24. Verlenden JV. Association of children's mode of school instruction with child and parent experiences and well‐being during the COVID‐19 pandemic — COVID experiences survey, United States, October 8–November 13, 2020. MMWR Morb Mortal Wkly Rep. 2021;70(11):369‐376.
- 25. Panchal N, Kamal R, Cox C, Garfield R, Chidambaram P. Mental Health and Substance Use Considerations Among Children During the COVID‐19 Pandemic. KFF. Published May 26, 2021. Accessed October 17, 2021. https://www.kff.org/coronavirus-covid-19/issue-brief/mental-health-and-substance-use-considerations-among-children-during-the-covid-19-pandemic/
- 26. Interim Guidance on Supporting the Emotional and Behavioral Health Needs of Children, Adolescents, and Families During the COVID‐19 Pandemic. Accessed October 17, 2021. https://www.aap.org/en/pages/2019‐novel‐coronavirus‐covid‐19‐infections/clinical‐guidance/interim‐guidance‐on‐supporting‐the‐emotional‐and‐behavioral‐health‐needs‐of‐children‐adolescents‐and‐families‐during‐the‐covid‐19‐pandemic/
- 27. Murphy SM, Kucukgoncu S, Bao Y, et al. An economic evaluation of coordinated specialty care (CSC) services for first‐episode psychosis in the U.S. public sector. J Ment Health Policy Econ. 2018;21(3):123‐130.
- 28. Cary CE, McMillen JC. The Data behind the Dissemination: A Systematic Review of Trauma‐Focused Cognitive Behavioral Therapy for Use with Children and Youth. Centre for Reviews and Dissemination (UK). 2012. Accessed October 17, 2021. https://www.ncbi.nlm.nih.gov/books/NBK91652/
- 29. Raghavan R, Bright CL, Shadoin AL. Toward a policy ecology of implementation of evidence‐based practices in public mental health settings. Implement Sci. 2008;3(1):26. doi: 10.1186/1748-5908-3-26
- 30. Dopp AR, Lantz PM. Moving upstream to improve children's mental health through community and policy change. Adm Policy Ment Health. 2020;47(5):779‐787. doi: 10.1007/s10488-019-01001-5
- 31. So M, McCord RF, Kaminski JW. Policy levers to promote access to and utilization of children's mental health services: a systematic review. Adm Policy Ment Health. 2019;46(3):334‐351. doi: 10.1007/s10488-018-00916-9
- 32. Hyde JK, Mackie TI, Palinkas LA, Niemi E, Leslie LK. Evidence use in mental health policy making for children in foster care. Adm Policy Ment Health. 2016;43(1):52‐66. doi: 10.1007/s10488-015-0633-1
- 33. Brownson RC, Gurney JG, Land GH. Evidence‐based decision making in public health. J Public Health Manag Pract. 1999;5(5):86‐97. doi: 10.1097/00124784-199909000-00012
- 34. Christopoulos D, Ingold K. Exceptional or just well connected? Political entrepreneurs and brokers in policy making. Eur Political Sci Rev. 2015;7(3):475‐498. doi: 10.1017/S1755773914000277
- 35. Calanni JC, Siddiki SN, Weible CM, Leach WD. Explaining coordination in collaborative partnerships and clarifying the scope of the belief homophily hypothesis. J Public Adm Res Theory. 2015;25(3):901‐927. doi: 10.1093/jopart/mut080
- 36. Valente TW, Palinkas LA, Czaja S, Chu K‐H, Brown CH. Social network analysis for program implementation. PLoS One. 2015;10(6):e0131712. doi: 10.1371/journal.pone.0131712
- 37. Institute of Medicine. Assessing the Use of Agent‐Based Models for Tobacco Regulation. The National Academies Press; 2015. doi: 10.17226/19018
- 38. Luke DA, Stamatakis KA. Systems science methods in public health: dynamics, networks, and agents. Annu Rev Public Health. 2012;33:357‐376. doi: 10.1146/annurev-publhealth-031210-101222
- 39. Rutter H, Savona N, Glonti K, et al. The need for a complex systems model of evidence for public health. Lancet. 2017;390(10112):2602‐2604. doi: 10.1016/S0140-6736(17)31267-9
- 40. Hawe P, Shiell A, Riley T. Theorising interventions as events in systems. Am J Community Psychol. 2009;43(3–4):267‐276. doi: 10.1007/s10464-009-9229-9
- 41. Ghorbani M, Azadi H. A social‐relational approach for analyzing trust and collaboration networks as preconditions for rangeland comanagement. Rangel Ecol Manage. 2021;75:170‐184. doi: 10.1016/j.rama.2020.10.008
- 42. Villamor GB, Troitzsch KG, van Noordwijk M, Vlek PLG. Human decision making in empirical agent‐based models: pitfalls and caveats for land‐use change policies. 26th European Conference on Modelling and Simulation. European Council for Modeling and Simulation; 2012. doi: 10.7148/2012-0631-0637
- 43. Christakis NA, Fowler JH. The spread of obesity in a large social network over 32 years. N Engl J Med. 2007;357(4):370‐379. doi: 10.1056/NEJMsa066082
- 44. Perello‐Moragues A, Noriega P. Using Agent‐Based Simulation to Understand the Role of Values in Policy‐Making. Springer Nature; 2020. doi: 10.13039/501100003741
- 45. Langellier BA, Yang Y, Purtle J, Nelson KL, Stankov I, Diez Roux AV. Complex systems approaches to understand drivers of mental health and inform mental health policy: a systematic review. Adm Policy Ment Health. 2019;46(2):128‐144. doi: 10.1007/s10488-018-0887-5
- 46. Ward V, House A, Hamer S. Knowledge brokering: the missing link in the evidence to action chain? Evid Policy. 2009;5(3):267‐279. doi: 10.1332/174426409X463811
- 47. Romney W, Salbach N, Parrott JS, Deutsch JE. A knowledge translation intervention designed and implemented by a knowledge broker improved documented use of gait speed: a mixed‐methods study. J Geriatr Phys Ther. 2020;43(3):E1‐E10. doi: 10.1519/JPT.0000000000000239
- 48. Neal JW, Neal ZP, Mills KJ, Lawlor JA, McAlindon K. What types of brokerage bridge the research‐practice gap? The case of public school educators. Soc Networks. 2019;59:41‐49. doi: 10.1016/j.socnet.2019.05.006
- 49. Daly AJ, Finnigan KS, Moolenaar NM, Che J. The critical role of brokers in the access and use of evidence at the school and district level. In: Finnigan KS, Daly AJ, eds. Using Research Evidence in Education: From the Schoolhouse Door to Capitol Hill. Policy Implications of Research in Education. Springer International Publishing; 2014:13‐31. doi: 10.1007/978-3-319-04690-7_3
- 50. Armstrong R, Waters E, Dobbins M, et al. Knowledge translation strategies to improve the use of evidence in public health decision making in local government: intervention design and implementation plan. Implement Sci. 2013;8:121. doi: 10.1186/1748-5908-8-121
- 51. Dobbins M, Traynor RL, Workentine S, Yousefi‐Nooraie R, Yost J. Impact of an organization‐wide knowledge translation strategy to support evidence‐informed public health decision making. BMC Public Health. 2018;18(1):1412. doi: 10.1186/s12889-018-6317-5
- 52. Dobbins M, Robeson P, Ciliska D, et al. A description of a knowledge broker role implemented as part of a randomized controlled trial evaluating three knowledge translation strategies. Implement Sci. 2009;4:23. doi: 10.1186/1748-5908-4-23
- 53. Langeveld K, Stronks K, Harting J. Use of a knowledge broker to establish healthy public policies in a city district: a developmental evaluation. BMC Public Health. 2016;16:271. doi: 10.1186/s12889-016-2832-4
- 54. Cervantes PE, Seag DEM, Nelson KL, Purtle J, Hoagwood KE, Horwitz SM. Academic‐policy partnerships in evidence‐based practice implementation and policy maker use of child mental health research. Psychiatr Serv. 2021;72(9):1076‐1079.
- 55. Purtle J, Nelson KL, Horwitz SM, McKay MM, Hoagwood KE. Determinants of using children's mental health research in policymaking: variation by type of research use and phase of policy process. Implement Sci. 2021;16(1):13. doi: 10.1186/s13012-021-01081-8
- 56. Palinkas LA. Qualitative and mixed methods in mental health services and implementation research. J Clin Child Adolesc Psychol. 2014;43(6):851‐861. doi: 10.1080/15374416.2014.910791
- 57. Creswell JW, Plano Clark VL. Designing and Conducting Mixed‐Methods Research. 3rd ed. SAGE Publications, Inc; 2017. https://us.sagepub.com/en-us/nam/designing-and-conducting-mixed-methods-research/book241842
- 58. Purtle J, Nelson KL, Horwitz SM, Palinkas LA, McKay MM, Hoagwood KE. Impacts of COVID‐19 on mental health safety net services for youths: a national survey of agency officials. Psychiatr Serv. 2021;appips202100176.
- 59. U.S. Commission on Evidence‐Based Policymaking. The Promise of Evidence‐Based Policymaking: Report of the Commission on Evidence‐Based Policymaking. U.S. Commission on Evidence‐Based Policymaking. 2017. Accessed July 27, 2021. https://bipartisanpolicy.org/download/?file=/wp-content/uploads/2019/03/Full-Report-The-Promise-of-Evidence-Based-Policymaking-Report-of-the-Comission-on-Evidence-based-Policymaking.pdf
- 60. Purtle J, Nelson KL, Henson RM, Horwitz SM, McKay MM, Hoagwood KE. Policymakers' priorities for addressing youth substance use and factors that influence priorities. Psychiatr Serv. 2021.
- 61. Purtle J, Nelson KL, Bruns EJ, Hoagwood KE. Dissemination strategies to accelerate the policy impact of children's mental health services research. Psychiatr Serv. 2020;71(11):1170‐1178. doi: 10.1176/appi.ps.201900527
- 62. Hsieh H‐F, Shannon SE. Three approaches to qualitative content analysis. Qual Health Res. 2005;15(9):1277‐1288. doi: 10.1177/1049732305276687
- 63. Watkins DC. Rapid and rigorous qualitative data analysis: the “RADaR” technique for applied research. Int J Qual Methods. 2017;16(1):1609406917712131. doi: 10.1177/1609406917712131
- 64. Hoagwood KE, Purtle J, Spandorfer J, Peth‐Pierce R, Horwitz SM. Aligning dissemination and implementation science with health policies to improve children's mental health. Am Psychol. 2020;75(8):1130‐1145. doi: 10.1037/amp0000706
- 65. Jakobsen MW, Eklund Karlsson L, Skovgaard T, Aro AR. Organisational factors that facilitate research use in public health policy‐making: a scoping review. Health Res Policy Syst. 2019;17(1):90. doi: 10.1186/s12961-019-0490-6
- 66. Masood S, Kothari A, Regan S. The use of research in public health policy: a systematic review. Evid Policy. 2020;16(1):7‐43. doi: 10.1332/174426418X15193814624487
- 67. Williamson A, Makkar SR, McGrath C, Redman S. How can the use of evidence in mental health policy be increased? A systematic review. Psychiatr Serv. 2015;66(8):783‐797. doi: 10.1176/appi.ps.201400329
- 68. Oliver K, Innvar S, Lorenc T, Woodman J, Thomas J. A systematic review of barriers to and facilitators of the use of evidence by policymakers. BMC Health Serv Res. 2014;14(1):2. doi: 10.1186/1472-6963-14-2
- 69. Dekker AH. Realistic social networks for simulation using network rewiring. MODSIM 2007 International Congress on Modelling and Simulation. Modelling and Simulation Society of Australia and New Zealand; 2007. https://www.mssanz.org.au/MODSIM07/papers/13_s20/RealisticSocial_s20_Dekker_.pdf
- 70. Kawachi Y, Murata K, Yoshii S, Kakazu Y. The structural phase transition among fixed cardinal networks. Proceedings of the 7th Asia‐Pacific Conference on Complex Systems. Complex 2004 Conference Secretariat; 2004. https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.211.8497&rep=rep1&type=pdf
- 71. Barabási A‐L, Albert R. Emergence of scaling in random networks. Science. 1999;286(5439):509‐512. doi: 10.1126/science.286.5439.509
- 72. Friedkin N, Johnsen E. Social influence networks and opinion change. Adv Group Process. 1999;16:1‐29.
- 73. Namatame A, Chen S‐H. Agent‐based influence dynamics. Agent‐Based Modeling and Network Dynamics. Oxford University Press; 2016. doi: 10.1093/acprof:oso/9780198708285.003.0007
- 74. Li Y, Kong N, Lawley M, Weiss L, Pagán JA. Advancing the use of evidence‐based decision‐making in local health departments with systems science methodologies. Am J Public Health. 2015;105(Suppl 2):S217‐S222. doi: 10.2105/AJPH.2014.302077
- 75. Atkinson J‐AM, Wells R, Page A, Dominello A, Haines M, Wilson A. Applications of system dynamics modelling to support health policy. Public Health Res Pract. 2015;25(3):e2531531. doi: 10.17061/phrp2531531
- 76. Parks AL, Walker B, Pettey W, et al. Interactive agent based modeling of public health decision‐making. AMIA Annu Symp Proc. 2009;2009:504‐508.
- 77. Brownson RC, Royer C, Ewing R, McBride TD. Researchers and policymakers: travelers in parallel universes. Am J Prev Med. 2006;30(2):164‐172. doi: 10.1016/j.amepre.2005.10.004
- 78. Howlett M. Designing Public Policies: Principles and Instruments. 2nd ed. Routledge; 2019. https://www.routledge.com/Designing-Public-Policies-Principles-and-Instruments/Howlett/p/book/9781138293649
- 79. Kingdon JW. Agendas, Alternatives, and Public Policies. 2nd ed. Pearson; 2010.