Published in final edited form as: NEJM Catal Innov Care Deliv. 2023 Mar 15;4(4):10.1056/CAT.23.0034. doi: 10.1056/CAT.23.0034

Implementation Support for a Social Risk Screening and Referral Process in Community Health Centers

Rachel Gold 1,2, Jorge Kaufmann 3, Erika K Cottrell 4,5, Arwen Bunce 6, Christina R Sheppler 7, Megan Hoopes 8, Molly Krancari 9, Laura M Gottlieb 10, Meg Bowen 11, Julianne Bava 12, Ned Mossman 13, Nadia Yosuf 14, Miguel Marino 15
PMCID: PMC10161727  NIHMSID: NIHMS1885926  PMID: 37153938

Abstract

Evidence is needed about how to effectively support health care providers in implementing screening for social risks (adverse social determinants of health) and providing related referrals meant to address identified social risks. This need is greatest in underresourced care settings. The authors tested whether an implementation support intervention (6 months of technical assistance and coaching study clinics through a five-step implementation process) improved adoption of social risk activities in community health centers (CHCs). Thirty-one CHC clinics were block-randomized to six wedges that occurred sequentially. Over the 45-month study period from March 2018 to December 2021, data were collected for 6 or more months preintervention, the 6-month intervention period, and 6 or more months postintervention. The authors calculated clinic-level monthly rates of social risk screening results entered at in-person encounters and rates of social risk-related referrals. Secondary analyses measured impacts on diabetes-related outcomes. Intervention impact was assessed by comparing outcomes in clinics that had versus had not yet received the intervention, contrasting the preintervention period with the intervention and postintervention periods. In assessing the results, the authors note that five clinics withdrew from the study for various bandwidth-related reasons. Of the remaining 26, a total of 19 fully or partially completed all 5 implementation steps, and 7 fully or partially completed at least the first 3 steps. Social risk screening was 2.45 times (95% confidence interval [CI], 1.32–4.39) higher during the intervention period compared with the preintervention period; this impact was not sustained postintervention (rate ratio, 2.16; 95% CI, 0.64–7.27). No significant difference was seen in social risk referral rates during the intervention or postintervention periods. The intervention was associated with greater blood pressure control among patients with diabetes and lower rates of diabetes biomarker screening postintervention. All results must be interpreted considering that the Covid-19 pandemic began midway through the trial, which affected care delivery generally and patients at CHCs particularly. Finally, the study results show that adaptive implementation support was effective at temporarily increasing social risk screening. It is possible that the intervention did not adequately address barriers to sustained implementation or that 6 months was not long enough to cement this change. Underresourced clinics may struggle to participate in support activities over longer periods without adequate resources, even if lengthier support is needed. As policies start requiring documentation of social risk activities, safety-net clinics may be unable to meet these requirements without adequate financial and coaching/technical support.


Social risks — the downstream material manifestations of adverse social determinants of health — are associated with a higher risk of chronic diseases such as type 2 diabetes, and hamper individuals’ ability to effectively self-manage these diseases.1–9 Clinical providers seeking to help mitigate the impacts of social risks (e.g., by referring patients to social services such as food banks) must know about patients’ risks. One strategy to increase provider awareness of social risks involves conducting social risk screening and documenting screening results in the electronic health record (EHR). Such screening is foundational to subsequent activities for mitigating the health impacts of social risks, such as adjusting care plans, connecting patients with community services, and conducting related advocacy efforts.

Numerous national initiatives now recommend incorporating social risk screening10–13 into health care settings, and rates of such screening are increasing. This is particularly true in community health centers (CHCs),14,15 which primarily serve low-income populations that disproportionately experience poverty-associated barriers to health promotion. Many CHCs were early adopters of social risk screening, but some have found it difficult to conduct systematic screening with EHR documentation of those screening results or to sustain or expand screening efforts.16,17 From 2016 to 2018, while 67% of 107 CHCs conducted any social risk screening, only 2% of all patients at these clinics had documented results of such screening.18 This pattern is not unique to CHCs — in the few prior articles on social risk screening in other care settings, screening rates have similarly varied widely.19

Increasing systematic social risk screening and referrals — and then ensuring that screening results and referrals made are documented in the EHR — will require addressing the multiple implementation barriers described in a 2019 National Academies of Sciences, Engineering, and Medicine report20 and other publications.21–23 Implementation barriers include obtaining leadership and staff buy-in,20 especially if follow-up service referrals are not feasible; determining which patients to screen for which social risks and how often; configuring the EHR to easily ingest screening data and present them in a useful manner; developing and adopting effective workflows (e.g., determining which staff will conduct screening at which step during or before a clinical encounter, and whether staff expected to enter such data can access the appropriate EHR interfaces); deciding when and how documented social risks will be reviewed and whether and how the clinic will act on reported risks; maintaining up-to-date lists of social service agencies, if referrals to such agencies are planned; staffing and resource limitations; and others.20

Given these challenges, multifaceted implementation support may be necessary to enhance adoption of social risk screening, referral-making, and documentation of screening results and referrals in the EHR. Empirical evidence is needed about which practices for supporting this implementation will be most useful for CHCs, which are chronically underresourced. This trial (R18DK114701) examined whether an implementation support intervention — 6 months of tailored technical assistance and coaching — improved EHR documentation of social risks and associated referral-making in CHCs. It also assessed the intervention’s impact on diabetes outcomes. To our knowledge, no previous studies have rigorously tested a multicomponent implementation support intervention designed to facilitate the adoption of social risk screening and related referral-making in any health care setting. Trial results should inform the work of health care organizations aiming to implement social care initiatives in underresourced care settings.

Methods

Data Source and Trial Design

OCHIN, Inc. (a nonprofit health care innovation center based in Portland, Oregon) serves a national network of locally controlled organizations that provide independent, community-based care to patients across a variety of health care settings; as of 2021, OCHIN supported more than 21,000 providers who reach more than 6 million patients at nearly 1,000 community health care sites in 45 states.

At the beginning of our study, in 2018, there were 593 OCHIN member clinic sites in 16 states that shared a single instance of the Epic EHR. Our stepped-wedge trial included 31 CHC clinics recruited from OCHIN’s member CHCs; these 31 clinics are located in California, Georgia, Massachusetts, Montana, Ohio, Oregon, Washington, and Wisconsin. We included no more than two clinics from a given CHC organization. Recruitment occurred in two waves (14 in spring 2018 and 17 in fall 2019) to ensure that no recruited clinics waited more than 1 year for the intervention. Each set of clinics was block-randomized to wedges 1–3 and 4–6, respectively. A stepped-wedge design was chosen because it suits interventions that cannot be rolled out to all sites simultaneously but that all sites should eventually receive; metrics from before implementation are compared with those during and after the intervention. Clinics were eligible to participate if they were interested in implementing or expanding social risk screening and/or referral activities. They had to commit to identifying staff members to serve as a Clinician Champion and/or Operational Champion for the project and allowing those staff to participate in intervention activities (≥2 hours per month interacting with the implementation support team).
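
For illustration only, a minimal sketch of this block randomization (recruitment waves as blocks, hypothetical clinic identifiers, and the wedge sizes shown in Table 1; this is not the study’s actual randomization code) could look like the following:

    import random

    # Hypothetical clinic identifiers for each recruitment wave (block).
    wave_1 = [f"clinic_{i:02d}" for i in range(1, 15)]   # 14 clinics, spring 2018
    wave_2 = [f"clinic_{i:02d}" for i in range(15, 32)]  # 17 clinics, fall 2019

    def block_randomize(clinics, wedge_labels, wedge_sizes, seed=2018):
        """Shuffle a wave's clinics, then split them into wedges of fixed sizes."""
        rng = random.Random(seed)
        shuffled = clinics[:]
        rng.shuffle(shuffled)
        assignment, start = {}, 0
        for label, size in zip(wedge_labels, wedge_sizes):
            assignment[label] = shuffled[start:start + size]
            start += size
        return assignment

    # Wave 1 clinics are allocated to wedges 1-3; wave 2 clinics to wedges 4-6.
    print(block_randomize(wave_1, ["wedge 1", "wedge 2", "wedge 3"], [4, 5, 5]))
    print(block_randomize(wave_2, ["wedge 4", "wedge 5", "wedge 6"], [5, 6, 6]))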

Intervention

The intervention details and conceptual frameworks underlying this study have been described previously.24 In brief, study clinics received 6 months of technical assistance in the use of relevant EHR tools (within the shared Epic EHR) and practice coaching in how to use these tools in clinic workflows, both tailored to the individual clinic’s needs. (The relevant tools supported: identifying patients due for social risk screening; customizing which patients were considered due; documenting and reviewing screening results; and ordering social service referrals.25,26) At the outset, one person served as both EHR trainer and practice coach. By wedge 3, we modified the intervention staff structure by bringing on a separate practice coach. Thereafter, the EHR trainer and practice coach worked as a team to support clinics (e.g., preparing for, leading, and debriefing after clinic meetings and further supporting clinics as needed by email or calls).

The trainer/coach team guided each wedge of study clinics through a five-step implementation process:27 (1) secure leadership buy-in; (2) set goals; (3) develop workflows; (4) orient staff; and (5) implement and iterate. Throughout the 6-month intervention, the dedicated trainer/coach team met with clinic representatives two to three times monthly and tracked clinic progress in completing these steps. All meetings were conducted via video conferencing with one clinic at a time, a feature intended to enhance the intervention’s potential scalability. The coach and trainer each spent about 3–6 hours per month per clinic meeting with and preparing to meet with the study clinics. The intervention was designed to address barriers to social risk screening/referral implementation identified in a prior pilot study (R18DK105463) and as reported in the literature. Prior evidence suggested that each intervention component — practice coaching/facilitation, technical assistance, interdisciplinary support teams, tailored support, staff training, feedback data, goal identification, leadership engagement, peer-to-peer learning, orientation materials, and how-to guides — had the potential to effectively support practice changes in primary care settings.28–35

Study Period

The study period was March 2018 through December 2021 and included six 6-month intervention wedges. The first 6-month intervention (wedge 1) began in September 2018, and the last (wedge 6) began in January 2021. This allowed for at least 6 months of data collection before each wedge started and after the last wedge ended. Thus, all months before the 6-month intervention periods were considered preintervention, the 6 months of the intervention were the intervention phase, and all months from the intervention period’s end through December 2021 were postintervention. Of note, wedge 4 began in February 2020, as the Covid-19 pandemic began affecting care delivery in the study clinics.

Outcome Measures

Patient- and encounter-level data were aggregated to the clinic level, limited to people 18 years of age or older. Patients seen only for Covid-19 vaccination/testing were excluded (n = 3,720; 0.7% of the total study sample). Primary analyses centered on social risk screening and related referrals and included two outcome measures.

The first outcome measure was the monthly clinic rate of social risk screening, measured as the number of patients with documented social risk screening results entered at a face-to-face clinical encounter in the measurement period (excluding patients seen only for Covid-19 testing/vaccination, as many people who received these services at the study sites were not otherwise patients at these clinics). Domains of social risk screening included child/family care insecurity, education, employment, financial strain, food insecurity, health insurance, health literacy, housing instability, inadequate physical activity, relationship safety, social isolation, stress, transportation needs, and utilities insecurity. Because the shared EHR enabled clinics to select from several commonly used social risk screening tools, the use of any of the questions from any of these tools was counted.
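
A minimal sketch of how such a monthly clinic-level rate could be computed from an encounter-level extract is shown below; the column names are hypothetical, and the denominator of all patients seen in the month is an assumption made for illustration (paralleling the referral measure described next), not a detail taken from the study protocol.

    import pandas as pd

    # Hypothetical extract: one row per in-person encounter, flagged for whether
    # a social risk screening result was documented at that encounter.
    encounters = pd.DataFrame({
        "clinic_id":            ["A", "A", "A", "B", "B"],
        "patient_id":           [1, 2, 2, 3, 4],
        "month":                ["2019-01"] * 5,
        "has_screening_result": [True, False, True, False, False],
    })

    # Numerator: distinct patients with a documented screening result that month.
    screened = (
        encounters[encounters["has_screening_result"]]
        .groupby(["clinic_id", "month"])["patient_id"].nunique()
        .rename("patients_screened")
    )
    # Denominator (assumed): distinct patients seen that month.
    seen = (
        encounters.groupby(["clinic_id", "month"])["patient_id"].nunique()
        .rename("patients_seen")
    )
    monthly = pd.concat([seen, screened], axis=1).fillna(0).reset_index()
    monthly["screening_rate"] = monthly["patients_screened"] / monthly["patients_seen"]
    print(monthly)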

The second outcome was the monthly clinic rate of provision of social risk-related referrals, measured as the number of patients with a documented referral among all patients seen in the measurement period (regardless of whether social risk screening was documented). This outcome included referrals internal (e.g., to a social worker) or external (e.g., to housing services) to the clinic. Procedure and diagnosis codes were used to identify social risk-related referrals. Some codes are specific enough that they were considered to indicate a social risk-related referral on their own. Other codes, such as referrals to a social worker, were more ambiguous and thus were considered to indicate a social risk-related referral only in the presence of a related positive social risk screening result before or on the same date as the referral code.
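
The classification logic for specific versus ambiguous referral codes might look like the following sketch; the code values and function here are hypothetical placeholders, not the study’s actual code sets.

    from datetime import date

    # Hypothetical code sets (placeholders, not the study's actual codes).
    SPECIFIC_REFERRAL_CODES = {"HOUSING-NAV-REF", "FOOD-BANK-REF"}
    AMBIGUOUS_REFERRAL_CODES = {"SOCIAL-WORK-REF"}

    def is_social_risk_referral(referral_code, referral_date, positive_screen_dates):
        """Classify one referral, given the dates of the patient's positive screenings."""
        if referral_code in SPECIFIC_REFERRAL_CODES:
            # Specific codes indicate a social risk-related referral on their own.
            return True
        if referral_code in AMBIGUOUS_REFERRAL_CODES:
            # Ambiguous codes count only if a positive screening result was
            # documented on or before the referral date.
            return any(d <= referral_date for d in positive_screen_dates)
        return False

    positive_screens = [date(2019, 3, 2)]
    print(is_social_risk_referral("SOCIAL-WORK-REF", date(2019, 3, 10), positive_screens))  # True
    print(is_social_risk_referral("SOCIAL-WORK-REF", date(2019, 2, 1), positive_screens))   # False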

The EHR enabled documenting when patients declined to answer social risk screening questions or declined offered referrals. Documented declinations were considered to indicate that screening or referral actions were taken and were included in the numerators described above.

Given the known association between social risks and diabetes outcomes, secondary analyses assessed intervention impacts on diabetes control and receipt of relevant diabetes care. Patients with an encounter during the study period and diabetes established before the second month of their clinic’s baseline period made up this subpopulation cohort (excluding pregnant women). Guideline-concordant diabetes-related care was assessed monthly for this cohort and included whether patients were up to date on receipt of their: (1) annual lipid panel; and (2) biannual hemoglobin A1c (HbA1c) screening. Three diabetes-control measures were assessed monthly among patients screened that month: (1) blood pressure (BP; <130/80 mm Hg); (2) HbA1c (<7.0%); and (3) low-density lipoprotein (LDL; <100 mg/dL).
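
The control thresholds and up-to-date definitions above can be expressed as simple checks, as in the sketch below; the look-back windows (365 days for the annual lipid panel, 183 days for the biannual HbA1c screen) are assumptions made for illustration, not details drawn from the study protocol.

    from datetime import date, timedelta

    def bp_controlled(systolic, diastolic):
        # BP control threshold: <130/80 mm Hg
        return systolic < 130 and diastolic < 80

    def hba1c_controlled(hba1c_pct):
        # HbA1c control threshold: <7.0%
        return hba1c_pct < 7.0

    def ldl_controlled(ldl_mg_dl):
        # LDL control threshold: <100 mg/dL
        return ldl_mg_dl < 100

    def up_to_date(last_test_date, as_of, window_days):
        # Up to date if the most recent test falls within the assumed look-back window.
        return last_test_date is not None and (as_of - last_test_date) <= timedelta(days=window_days)

    print(bp_controlled(128, 76))                                # True
    print(up_to_date(date(2021, 1, 15), date(2021, 6, 1), 183))  # True: HbA1c within ~6 months
    print(up_to_date(date(2019, 12, 1), date(2021, 6, 1), 365))  # False: lipid panel overdue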

Baseline Covariates

The baseline period was defined as the 6 months before each clinic’s wedge began. Analyses accounted for clinic-level baseline measures: the number of years since the clinic began using its current EHR; whether the clinic conducted screening at or above the 50th percentile of all study clinics (to capture prior experience with such screening); and patient characteristics aggregated to the clinic level (Table 1).

Table 1.

Wedge Characteristics in the 6-Month Baseline Period

Wedge 1 Wedge 2 Wedge 3 Wedge 4 Wedge 5 Wedge 6
Clinics N = 4 N = 5 N = 5 N = 5 N = 6 N = 6
Intervention started 9/2018 3/2019 9/2019 2/2020 8/2020 1/2021
Urban location, N 4 5 4 5 4 5
Accountable Health Communities participant,* N 2 2 1 1 0 1
Clinic characteristics
 Years using OCHIN Epic EHR, EpicCare
  Median (range)** 1 (0–6) 3 (1–10) 4 (−2 to 6) 2 (−2 to 11) 2 (−1 to 3)  1 (−1 to 5)
 Encounters
  Median 3,532 14,586 4,449 16,486 23,682 41,591
  Range 579–23,827 1,755–108,301 520–14,574 7,539–63,521 13,151–32,088 4,502–147,310
 Telehealth encounters, % (SD) 0.0 (0.0) 0.0 (0.0) 0.3 (0.4)  0.1 (0.2) 11.5 (8.0)  13.8 (9.3)
 Patients (≥18 years of age)
  Median 601 3,285 1,505 2,345 4,473 9,088
  Range 161–5,628 695–20,977 264–3,428 1,653–10,765 2,576–5,856 1,724–11,901
 Age, median (range), y 36 (24–47) 44 (43–50) 45 (29–46) 45 (39–55) 44 (35–53) 45 (41–57)
 Female, % (SD) 58.2 (20.6) 60.6 (2.9) 61.1 (21.2) 55.0 (12.1) 52.3 (11.2) 57.5 (6.3)
 Race/ethnicity, % (SD)
  Hispanic 9.9 (9.7) 24.2 (21.1) 30.4 (21.5) 15.2 (18.7) 19.1 (21.0) 40.9 (34.8)
  Non-Hispanic Black 21.7 (38.1) 4.5 (5.2) 34.1 (40.4) 10.0 (12.1) 15.0 (21.5) 5.7 (6.8)
  Non-Hispanic, non-Black, and nonwhite 24.6 (42.8) 21.5 (34.7) 3.3 (1.6) 3.3 (1.9) 3.6 (4.0) 3.9 (3.2)
  Non-Hispanic white 41.2 (38.5) 46.5 (31.6) 27.6 (14.2) 61.0 (20.4) 54.4 (23.7) 42.1 (35.7)
  Not documented in EHR 2.5 (2.5) 3.4 (1.2) 4.7 (4.2) 10.6 (2.5) 7.9 (4.8) 7.3 (3.2)
 Primary language, % (SD)
  English 71.7 (42.9) 64.1 (33.4) 77.9 (17.0) 87.2 (18.4) 85.8 (16.9) 73.2 (23.4)
  Not documented in EHR 1.9 (3.3) 0.1 (0.1) 0.1 (0.2) 1.1 (1.0) 0.4 (0.3) 1.1 (0.7)
 Federal poverty level, % (SD)
  ≤200% 91.4 (4.9) 91.9 (2.8) 92.6 (3.2) 76.9 (21.4) 71.0 (24.4) 78.2 (13.9)
  Not documented in EHR 1.3 (1.1) 0.6 (0.5) 1.9 (1.7) 10.7 (14.2) 16.1 (21.5) 10.1 (6.1)
 Insurance status, % (SD)
  Public 44.0 (26.4) 65.5 (8.2) 37.1 (10.8) 62.9 (12.3) 51.5 (14.8) 61.7 (8.9)
  Uninsured 38.5 (28.1) 20.1 (3.7) 53.5 (14.9) 19.2 (9.0) 26.6 (15.1) 20.5 (7.7)
 Patients screened for social risk, median (range) 2 (0–751) 57 (0–882) 360 (1–563) 23 (0–332) 67 (10–311) 430 (2–2,771)
 Patients with social risk referrals, median (range) 1 (0–74) 28 (0–135) 19 (0–113) 3 (0–22) 13 (1–108) 89 (7–1,072)
 Patients with diabetes, range 8–944 187–3,984 22–829 343–3,618 352–923 495–2,981

EHR = electronic health record, SD = standard deviation.

* Participated in the Accountable Health Communities program of the U.S. Centers for Medicare & Medicaid Services.

** The negative numbers in the range indicate that the clinic did not begin using EpicCare until 1 or 2 years after the baseline period.

Source: The authors

We also accounted for whether the clinic was concurrently involved in the U.S. Centers for Medicare & Medicaid Services (CMS) Innovation Center’s Accountable Health Communities (AHC) Model,36 a large national demonstration project targeting implementation of social risk screening and navigation services. Participants involved in this demonstration received modest financial incentives but only minimal implementation support from CMS.

Statistical Analysis

Clinic-level outcomes were measured monthly from March 2018 through December 2021 (totaling 1,384 monthly time points across 31 clinics). Generalized linear mixed models (GLMMs) were used to assess intervention effect by comparing outcomes during time periods in clinics that had versus had not yet participated in the intervention. GLMMs were used to account for a general time trend and to flexibly model the intervention effect over time postintervention. Negative binomial mixed-effects modeling was used to evaluate the primary outcomes; mixed-effects linear regression was used to evaluate secondary outcomes. Each GLMM fit flexible time effects by treating time as a categorical variable, included random effects for clinics, adjusted for baseline covariates, and used robust standard errors. Average differences are reported comparing the preintervention period versus: (1) the 6-month intervention period; and (2) the postintervention period. Rate ratios for the primary outcomes, rate differences for the secondary outcomes, and corresponding 95% confidence intervals (CIs) are reported. A more detailed description of the GLMM is provided in Exhibit 1 of the Appendix. Analyses were performed by using Stata 15 (StataCorp); hypothesis tests were two-sided with a type I error of 0.05.
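
As a rough, illustrative approximation in Python (not the study’s Stata code, and with hypothetical variable names and file layout), a secondary-outcome model of this general form could be fit with statsmodels’ linear mixed model; the primary outcomes instead used mixed-effects negative binomial regression (e.g., Stata’s menbreg), which this sketch does not reproduce, and the robust standard errors noted above are likewise omitted here.

    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical clinic-month panel: one row per clinic per month, with the
    # outcome (e.g., % of patients with diabetes whose BP was controlled),
    # 0/1 indicators for the clinic's intervention and postintervention periods,
    # and clinic-level baseline covariates.
    df = pd.read_csv("clinic_month_panel.csv")  # assumed layout, for illustration

    model = smf.mixedlm(
        # Categorical month effects capture the general (secular) time trend;
        # the period indicators carry the average added intervention effects.
        "bp_controlled_pct ~ C(study_month) + during_intervention + post_intervention"
        " + years_on_ehr + high_baseline_screening",
        data=df,
        groups=df["clinic_id"],  # random intercept for each clinic
    )
    result = model.fit()
    print(result.summary())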

Note on Protocol Deviation

The GLMM specifications written at the time of trial registration were based on the then-flagship article for analysis of stepped-wedge clinic-randomized trials.37 That approach used continuous variables to model the general time trend of the outcomes without intervention and to estimate the added intervention effect that accrues over time after intervention initiation; it also provided a stand-alone estimate of the intervention effect in the first time period (month) of intervention. Since trial registration, advances have been made in the analysis of such study designs that improve the ability to estimate intervention effects.38–40 We updated the GLMM specifications in these analyses to reflect these advances by using more flexible, categorical time variables. In addition, we dropped the stand-alone estimator for the intervention effect in the first month of intervention, as it is unlikely that the effect was immediate and constant; instead, this estimator was absorbed into the categorical estimators for added intervention effects. Ultimately, these modifications changed our reported outcomes. Instead of reporting the intervention effect in the first month of intervention and the added intervention effect beyond the first month until study end, we report the average intervention effect for the 6 months of hands-on intervention and for the postintervention period. We report only estimates from the updated models in the main results. A comparison of estimates using both the original and updated models is provided in Exhibit 2 of the Appendix.
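
To make the updated specification concrete, one plausible form for a primary outcome (written here as an assumption for illustration; the exact specification is given in Exhibit 1 of the Appendix) is:

    \log E[Y_{ct}] = \log N_{ct} + \beta_{t} + \theta_{1} D^{\mathrm{int}}_{ct} + \theta_{2} D^{\mathrm{post}}_{ct} + \gamma^{\top} Z_{c} + u_{c}

where Y_{ct} is the number of patients screened in clinic c during calendar month t, N_{ct} is the number of patients seen (an offset), \beta_{t} are categorical month effects capturing the secular trend, D^{\mathrm{int}}_{ct} and D^{\mathrm{post}}_{ct} indicate whether clinic c is in its intervention or postintervention period in month t (so \exp(\theta_{1}) and \exp(\theta_{2}) correspond to the reported average rate ratios), Z_{c} holds baseline covariates, and u_{c} is a clinic-level random intercept, with the outcome modeled as negative binomial. Under this reading, the originally registered specification instead used a continuous time trend and a linear term for time since intervention initiation, plus the stand-alone first-month effect described above.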

Results

Of 31 clinics enrolled in the study, five withdrew before intervention initiation due to various bandwidth-related issues (three after March 2020). Of the remaining 26 clinics, 7 fully completed all 5 implementation process steps, 12 fully or partially completed all steps (i.e., all steps were started, but not all were completed), and 7 fully or partially completed at least the first 3 steps, including 3 that completed the last step but skipped an intermediate step. All 31 recruited clinics were included in analyses for intention-to-treat assessments. Five clinics (including 2 of the 5 that withdrew from the study) did not have preintervention data from 6 months before the start of wedge 1 (due to activating their EHR after that date), but all 31 clinics had ≥6 months of data before the start of their wedge. Exhibit 3 in the Appendix presents months of observation and denominators for all outcomes.

Table 1 presents characteristics of study clinics and their patients according to wedge in the 6 months before a given wedge’s intervention began. Variation was seen across wedges in clinic patients’ median age, distribution according to race/ethnicity, primary language, poverty level, and insurance status; regression models adjusted for these variables. Notably, there was variability in the extent to which study clinics conducted social risk screening and referrals in the 6-month baseline period. Seven study clinics were involved in the CMS Innovation Center’s AHC initiative.

Figure 1 shows clinic screening rate patterns over the study period according to wedge. In March 2020, at month 24 of the study, the Covid-19 pandemic began to severely affect clinic operations; this time period coincides with when the intervention began for wedge 4. As shown in Figure 1, the study sites had very different preintervention rates of social risk screening. These screening rates rarely increased in a linear or sustained manner but instead varied substantially over time and by clinic site. Exhibit 4 of the Appendix presents the variation in social risk screening in all study clinics over the study period; of note, what may be a secular trend of increased social risk screening in months 1–23 appears to flatten when the pandemic began. Exhibit 5 of the Appendix presents patterns of social risk referral rates in each wedge’s clinics over time.

FIGURE 1. Patterns of Screening Rates over Time, According to Study Clinics Within Study Wedges.


Screening rates rarely increased in a linear manner and varied widely across study sites. Notes: The y-axis denotes the percentage of patients screened for social risk. The x-axis denotes the study period, from month 1 (March 2018) to month 45 (December 2021). The colored lines represent each of the 31 individual clinics. Wedge 1 has four clinics; wedges 2, 3, and 4 have five clinics; and wedges 5 and 6 have six clinics each. The solid vertical line denotes the start of the implementation for that wedge. The dashed vertical line at month 24 is March 2020, when the Covid-19 pandemic began affecting U.S. clinics.

Source: The authors

NEJM Catalyst (catalyst.nejm.org) © Massachusetts Medical Society

Social Risk Screening and Referral Outcomes

Results of adjusted regression analyses assessing intervention effects on social risk screening and referral rates are presented in Table 2.

Table 2.

Effects of Implementation Support on Social Risk Screening Rates and Referrals

Added Effects of Intervention Compared with Preintervention, RR (95% CI), Updated Adjusted Model
Outcome During 6-Month Intervention Postintervention
All patients
Social risk screening 2.45 (1.32–4.39) 2.16 (0.64–7.27)
Social risk referral 1.33 (0.73–2.43) 0.89 (0.18–1.93)
 Documented need 0.79 (0.46–1.36) 0.56 (0.21–1.48)
 No documented need 1.11 (0.60–2.04) 0.40 (0.12–1.34)

Social risk screening rates were significantly higher during the 6-month intervention period compared with the preintervention period. Estimates were derived by using mixed-effects negative binomial regression with random effects for clinic, adjusted for baseline characteristics. Note: Boldface data are considered statistically significant at P < .05. RR = rate ratio, CI = confidence interval. Source: The authors

The rate of social risk screening was 2.45 times (95% CI, 1.32–4.39) higher during the 6 intervention months compared with preintervention rates. This impact was not sustained in the postintervention period; although the effect size was similar in magnitude (rate ratio, 2.16; 95% CI, 0.64–7.27), it lacked statistical significance. No significant difference was seen in rates of social risk referrals during or after the intervention, regardless of whether patients had documented social risks.

Diabetes Outcomes

In brief, analyses showed little intervention impact on diabetes outcomes (Table 3) in the intervention and postintervention periods compared with the preintervention period, with a few exceptions.

Table 3.

Association Between Implementation Support Intervention and Diabetes Biomarkers and Receipt of Recommended Biomarker Screening

Outcome Added Effects Associated with Intervention Compared with Preintervention, Absolute Percent Change (95% CI)
During 6-Month Intervention Postintervention
All patients with diabetes
 HbA1c screen, % up to date −9.92 (−15.91 to −3.94) −30.51 (−50.20 to −10.83)
 LDL screen, % up to date −10.02 (−15.75 to −4.30) −33.20 (−54.01 to −12.39)
 BP, % controlled −0.90 (−3.37 to 1.56) 1.57 (−6.21 to 9.36)
 HbA1c, % controlled −2.05 (−5.26 to 1.17) −6.46 (−13.52 to 0.61)
 LDL, % controlled 4.63 (−0.05 to 9.30) 5.61 (−2.99 to 14.20)
Patients with diabetes screened for social risks (subset)
 HbA1c screen, % up to date −7.50 (−14.51 to −0.49) −23.26 (−48.13 to 1.62)
 LDL screen, % up to date −8.53 (−15.03 to −2.02) −29.95 (−54.58 to −5.32)
 BP, % controlled 1.35 (−1.96 to 4.65) 11.26 (1.51 to 21.00)
 HbA1c, % controlled −2.01 (−7.50 to 3.47) −6.74 (−15.98 to 2.49)
 LDL, % controlled 4.19 (−1.86 to 10.25) 7.33 (−2.61 to 17.27)

Significant decreases in guideline-concordant diabetes care may reflect decreased in-person encounters resulting from the Covid-19 pandemic. Notes: Results are given for all patients with diagnosed diabetes on or before the first baseline month and for the subset of patients screened for social risk. Estimates were derived by using mixed-effects linear regression with random effects for clinic, adjusted for baseline characteristics. Boldface data indicate statistical significance at P < .05. CI = confidence interval, HbA1c = hemoglobin A1c, LDL = low-density lipoprotein, BP = blood pressure. Source: The authors

In the intervention and postintervention periods, the monthly average percentage of patients with up-to-date diabetes-related screenings declined significantly from the preintervention period, both among all patients with diabetes and among the subset who were screened for social risks. (The percentage of those screened for social risks who had up-to-date HbA1c screening was not significantly lower postintervention but trended in the same direction.) Among patients with biomarker screenings in a given month, no patterns were seen in the percentage with controlled HbA1c or LDL. There was a significant increase in the percentage of patients with diabetes who had controlled BP in the postintervention period among those screened for social risks. To assess the influence of the Covid-19 pandemic on these outcomes, we ran these analyses stratified according to wedges occurring before and after March 2020. (Both sets of clinics had postintervention data from after March 2020, but all postintervention data from wedges 4–6 occurred in this period; results not shown.) Outcomes related to being up to date on diabetes-related screenings showed a larger effect size in the later three wedges, suggesting that practice changes made in response to the pandemic influenced these outcomes; hypotheses about this pattern are given in the Discussion.

Discussion

During a 6-month tailored implementation support intervention, CHC clinics’ social risk screening rates were more than twice as high as screening rates during the preintervention period (P < .01). In the postintervention period, screening rates trended in the same direction as in the intervention period, but differences from the preintervention period were not statistically significant. Rates of social service referrals did not increase, regardless of patients’ social risk status. The intervention was associated with greater BP control among patients with diabetes, and lower receipt of recommended diabetes biomarker screening, in the postintervention period.

These results suggest that intensive, adaptive implementation support was effective at temporarily increasing social risk screening rates. The increase in screening seen during the intervention period may be lower than what would have occurred without the impact of the Covid-19 pandemic on clinics’ ability to adopt new practices, as discussed below. Furthermore, most study sites chose to test and iterate screening workflows in a subset of their patients, as suggested in implementation step 3 of the five-step process: (1) secure leadership buy-in; (2) set goals; (3) develop workflows; (4) orient staff; and (5) implement and iterate. The changes seen in this study may, therefore, indicate success in meeting those goals, but as clinic-defined targets changed frequently, it was not possible to limit analyses to each clinic’s target population; this study limitation is discussed below. In addition, although this study measured rates of EHR-documented social risk screening and referrals, CHCs have been providing contextualized, person-centered care since their inception. It is possible that the implementation support furthered this work but did not always result in measurable screening/referral outcomes. Conversely, the increases in social risk screening may reflect a secular trend, as a growing national emphasis on such screening occurred during the study period. Others’ research concurs; a 2022 report,41 for example, suggests that screening rates are rising nationally. Policy changes may be relevant: in data from 2019,14 higher social risk screening rates were reported by clinics in states with Medicaid accountable care organizations. These factors may have influenced the results seen here.

Multiple components of the intervention — including staff training, using small tests of change, having a champion, leadership support, and flexibility in screening implementation — have been proven effective in other circumstances.41–45 This may explain the intervention’s initial impact. The fact that this result was not consistent across study sites aligns with prior research showing variable impacts of multifaceted interventions targeting practice change.46–49 Furthermore, although the intervention was designed based on results of a pilot study, limited prior evidence on social risk screening adoption, and evidence on methods for supporting practice change in general,41–45 it may not have adequately addressed key barriers to sustained screening/referral implementation in underresourced clinics. The timing of the reported effects was surprising, as we expected that screening rates would increase after rather than during the 6-month intervention period. This finding suggests that improvements may have occurred as a result of clinic staff engaging in implementation efforts with outside support, rather than because of specific intervention elements. Qualitative analyses now underway should help explain these findings.

Results also suggest the possibility that the 6-month intervention did not last long enough; the implementation science literature on how to effectively maintain practice changes after implementation support ends is nascent.50 However, it may be challenging for underresourced clinics to participate in such support activities over a longer period without resources to cover staff time spent on such efforts. Research is needed to explore whether incentive structures enhance the implementation of sustained social risk screening. (As noted above, 7 of the 31 study clinics concurrently participated in a national initiative through which they received modest reimbursement for conducting screening and referrals; our team is now analyzing the relative impacts of reimbursement versus hands-on support in these clinics.) It is critical to understand the resources that CHCs need to effectively implement and sustain social risk screening as state Medicaid policies begin to require documentation of such screening and related interventions. These requirements might motivate targeted clinics to focus on social risk screening, but the results presented in this article and by others14,22,51 suggest that without adequate support — both financial and coaching/technical assistance — underresourced clinics may struggle to meet these requirements.

Another potential driver of these results is the limited evidence on best practices for social risk efforts in clinical environments (e.g., who should be screened, how often, and with which screening instruments) or on the effectiveness of interventions meant to address identified social risks.16,17,20,52–54 It may be easier to implement and sustain new processes that have clear protocols and/or solid evidence on the expected health impacts of such processes. Research is needed to provide this evidence.

Although social risk screening increased significantly during the 6-month implementation support period, no such increase occurred in related referrals. Previous research suggests possible explanations. Standards for documenting formal referrals are evolving,55,56 and our methods may not have captured some referrals. For example, as documented referrals to a clinic’s social worker did not always specify whether that referral was for behavioral health or social service navigation support, they were not considered social risk referrals here unless a concurrent positive social risk screening was documented, as described in the Methods. In addition, some patients with identified social needs likely declined such referrals.57,58 Care team members might have provided patients with relevant information via written materials, without documenting this action in the EHR. It is also possible that clinics experienced challenges in connecting patients with social services, such as a lack of referral resources.52,59 Prior research found that care team members may be reluctant to screen for social risk factors if they feel unable to address identified needs;60,61 if increased social risk screening was not coupled with an increased ability to make referrals, it may have diminished enthusiasm for continuing screening postintervention. Finally, although the intervention was designed to help clinics start making such referrals, doing so may not have been a priority for every clinic, and/or the support provided may not have been enough to overcome the challenges to making these referrals described above.

We posit that the association seen between the intervention and decreased provision of guideline-concordant diabetes care is because of the Covid-19 pandemic. In the pandemic’s first year (our study months 24–35), in-person clinical encounters decreased dramatically, which certainly affected clinics’ ability to conduct HbA1c or LDL tests. There are several possible explanations for the significant and substantial postintervention improvement in BP control among patients with diabetes who were screened for social risks. It is possible that having social risks documented drove care teams to provide social service referrals (documented or not) to patients with reported risks or to make care plan adjustments for these patients to enhance their ability to follow recommended care. Alternatively, social risk documentation may have been used by clinics to prioritize which patients were contacted via outreach during the pandemic. Another possibility is that patients whose care involved titrating BP medications had additional encounters and thus more opportunities to receive social risk screening. However, it is also possible that CHCs focused outreach efforts on patients with the highest documented BP, who may also be those with social risks, in which case social risk documentation did not play a role.

The effect size for these diabetes-related outcomes was greater among clinics for which all data collection occurred after the Covid-19 pandemic began. This supports the proposed explanation that the changes seen in diabetes-related screening up-to-date status and in rates of BP control reflect changes in clinic processes made in response to the pandemic. This influence on overall study outcomes is likely because, given the stepped-wedge design, all study clinics had some follow-up data during the pandemic.

Limitations

All study results must be interpreted with caution given that the Covid-19 pandemic began midway through the trial and almost certainly affected outcomes. The pandemic dramatically increased financial insecurity among CHC patients and also heightened CHCs’ interest in social risk screening; thus, it may have both increased clinics’ motivation to conduct social risk screening and referrals and affected their ability to do so. It is clear that the pandemic profoundly disrupted primary care clinics’ capacity, workflows, staffing, and ability to implement non–pandemic-related practice changes. It also affected the capacity of social service organizations.

Several additional limitations must be considered in interpreting these findings, some of which were mentioned earlier. First, in this pragmatic trial, each study clinic targeted different groups of patients for their initial screening efforts, and it was not feasible to limit analyses to those target populations. Results, therefore, reflect clinic-wide rather than population-specific changes, even though only certain patient populations were targeted for screening. Second, some screenings and referrals may have been documented in EHR text notes and not captured in analyses. This would bias results toward the null, and because the intervention’s goal was to improve documentation in discrete data fields, this limitation is noted but is not especially concerning. Third, some of the study clinics’ concurrent participation in the AHC initiative may have affected study results. As noted, we adjusted for this in analytic models, and analyses now underway are assessing the potential interplay between these projects. Last, recruitment bias may affect the generalizability of these findings. Clinics that agreed to take part in this study were motivated to implement social risk screening; many study sites had clearly attempted to implement such screening in the past and may have struggled to do so effectively. Study results should be interpreted as generalizable to clinics that are eager to implement or expand their social risk screening efforts.

Looking Ahead

Although social risk screening is increasingly emphasized by national health care leaders and payers, many primary care clinics face complex barriers to implementing social risk screening and related referral-making. Substantial and ongoing investment and support are needed to enable this practice change. This is especially important for safety-net CHCs given that documentation of social risk screening is becoming a requirement of many state Medicaid agencies. A publicly available implementation guide based on study findings may be a useful resource for clinics seeking to adopt or expand social risk screening and referral-making efforts.27

Supplementary Material

CAT.23.0034-Appendix

Acknowledgments

We express our deep gratitude to the OCHIN member clinics that participated in this study. We also thank the study’s expert advisory committee, whose guidance was invaluable: Laura Mae Baldwin, Deborah Cohen, Yuriko de la Cruz, Jennifer DeVoe, Arvin Garg, Nancy Gordon, Amber Haley, Danielle Marie Hessler-Jones, Christian Hill, Hilary Placzek, Bryon Powell, Michelle Proser, and Thomas J. Schuch.

Footnotes

Disclosures: Rachel Gold, Jorge Kaufmann, Erika K. Cottrell, Arwen Bunce, Christina R. Sheppler, Megan Hoopes, Molly Krancari, Laura M. Gottlieb, Meg Bowen, Julianne Bava, Ned Mossman, Nadia Yosuf, and Miguel Marino have nothing to disclose. The Kaiser Permanente Northwest Institutional Review Board (FWA #00002344, IRB #00000405) approved the study (Project 1394354) and continues to review study activities and monitor progress. All clinics in the study consented to participate. Study data and materials are available by request. The study was funded by the National Institute of Diabetes and Digestive and Kidney Diseases grant 1R18DK114701. Trial registration: clinicaltrials.gov, NCT03607617 (https://clinicaltrials.gov/ct2/show/NCT03607617; registration date: July 31, 2018 — retrospectively registered).

Contributor Information

Rachel Gold, Lead Research Scientist, OCHIN, Portland, Oregon, USA; Senior Investigator, Kaiser Permanente Center for Health Research, Portland, Oregon, USA.

Jorge Kaufmann, Biostatistician, Department of Family Medicine, Oregon Health & Science University, Portland, Oregon, USA.

Erika K. Cottrell, Senior Investigator, OCHIN, Portland, Oregon, USA; Research Associate Professor, Department of Medical Informatics and Clinical Epidemiology, School of Medicine, Oregon Health & Science University, Portland, Oregon, USA.

Arwen Bunce, Qualitative Research Scientist, OCHIN, Portland, Oregon, USA.

Christina R. Sheppler, Research Associate III, Kaiser Permanente Center for Health Research, Portland, Oregon, USA.

Megan Hoopes, Manager of Research Analytics, OCHIN, Portland, Oregon, USA.

Molly Krancari, Research Associate, OCHIN, Portland, Oregon, USA.

Laura M. Gottlieb, Professor of Family and Community Medicine, School of Medicine, University of California San Francisco, San Francisco, California, USA.

Meg Bowen, Practice Coach, OCHIN, Portland, Oregon, USA.

Julianne Bava, Trainer, OCHIN, Portland, Oregon, USA.

Ned Mossman, Director of Social and Community Health, OCHIN, Portland, Oregon, USA.

Nadia Yosuf, Project Manager III, Fred Hutchinson Cancer Research Center, Seattle, Washington, USA.

Miguel Marino, Assistant Professor, Department of Family Medicine, Oregon Health & Science University, Portland, Oregon, USA.

References
