PLOS Medicine. 2021 Oct 4;18(10):e1003796. doi: 10.1371/journal.pmed.1003796

The effects of an evidence- and theory-informed feedback intervention on opioid prescribing for non-cancer pain in primary care: A controlled interrupted time series analysis

Sarah L Alderson 1,*, Tracey M Farragher 2, Thomas A Willis 1, Paul Carder 3, Stella Johnson 3, Robbie Foy 1
Editor: Zirui Song4
PMCID: PMC8489725  PMID: 34606504

Abstract

Background

The rise in opioid prescribing in primary care represents a significant international public health challenge, associated with increased psychosocial problems, hospitalisations, and mortality. We evaluated the effects of a comparative feedback intervention with persuasive messaging and action planning on opioid prescribing in primary care.

Methods and findings

A quasi-experimental controlled interrupted time series analysis used anonymised, aggregated practice data from electronic health records and prescribing data from publicly available sources. The study included 316 intervention and 130 control primary care practices in the Yorkshire and Humber region, UK, serving 2.2 million and 1 million residents, respectively. We observed the number of adult patients prescribed opioid medication by practice between July 2013 and December 2017. We excluded adults with coded cancer or drug dependency. The intervention, the Campaign to Reduce Opioid Prescribing (CROP), entailed bimonthly, comparative, and practice-individualised feedback reports to practices, with persuasive messaging and suggested actions over 1 year. Outcomes comprised the number of adults per 1,000 adults per month prescribed any opioid (main outcome), prescribed strong opioids, prescribed opioids in high-risk groups, prescribed other analgesics, and referred to musculoskeletal services. The number of adults prescribed any opioid rose pre-intervention in both intervention and control practices, by 0.18 (95% CI 0.11, 0.25) and 0.36 (95% CI 0.27, 0.46) per 1,000 adults per month, respectively. During the intervention period, prescribing per 1,000 adults fell in intervention practices (change −0.11; 95% CI −0.30, 0.08) and continued rising in control practices (change 0.54; 95% CI 0.29, 0.78), with a difference of −0.65 per 1,000 patients (95% CI −0.96, −0.34), corresponding to 15,000 fewer patients prescribed opioids. These trends continued post-intervention, although at slower rates. Prescribing of strong opioids, total opioid prescriptions, and prescribing in high-risk patient groups also generally fell. Prescribing of other analgesics fell whilst musculoskeletal referrals did not rise. Effects were attenuated after feedback ceased. Study limitations include the restriction to 1 region of the UK, possible coding errors in routine data, being unable to fully account for concurrent interventions, and uncertainties over how general practices actually used the feedback reports and whether reductions in prescribing were always clinically appropriate.

Conclusions

Repeated comparative feedback offers a promising and relatively efficient population-level approach to reduce opioid prescribing in primary care, including prescribing of strong opioids and prescribing in high-risk patient groups. Such feedback may also prompt clinicians to reconsider prescribing other medicines associated with chronic pain, without causing a rise in referrals to musculoskeletal clinics. Feedback may need to be sustained for maximum effect.


Using a controlled interrupted time series analysis, Sarah L. Alderson and colleagues examine the association between an evidence- and theory-informed feedback intervention and opioid prescribing for non-cancer pain in primary care in the UK.

Author summary

Why was this study done?

  • Opioid prescribing for non-cancer pain is rising despite limited knowledge on effectiveness and increasing evidence of harms, such as falls, fractures, overdose, and addiction.

  • There are large differences in opioid prescribing between practices, suggesting prescribing is driven by clinician habits rather than patient need.

  • We delivered evidence-based and theory-informed feedback reports to 316 general practices in Yorkshire, UK, every 2 months for 1 year, intended to reduce opioid prescribing by prompting physicians to think twice before starting patients on opioid medication and to review patients not currently benefiting from the medication.

What did the researchers do and find?

  • We looked at trends in the number of patients prescribed opioids for non-cancer pain before, during, and after the intervention in 316 practices that received the feedback compared to 130 practices that did not.

  • We also assessed changes in prescribing in patients at higher risk of longer or stronger opioid prescribing, and changes in the prescribing of other painkiller medications, to look at wider impacts on prescribing for pain.

  • During the intervention period, the number of adults prescribed any opioid per 1,000 patients per month fell in intervention practices (change −0.11; 95% CI −0.30, 0.08) and rose in control practices (change 0.54; 95% CI 0.29, 0.78), with a difference of −0.65 (95% CI −0.96, −0.34), corresponding to 15,000 fewer patients prescribed opioids at the end of the intervention year.

  • Prescribing of strong opioids, total opioid prescriptions, and prescribing in high-risk groups generally fell, although effects lessened after the feedback stopped.

  • Prescribing of painkillers not specifically targeted by feedback also fell, without any increases in referrals to musculoskeletal services.

What do these findings mean?

  • Repeated comparative feedback offers a promising and relatively efficient population-level approach to reduce opioid prescribing in primary care.

  • Feedback may need to be sustained for maximum effect.

Introduction

Opioid prescribing is an internationally recognised threat to population health and a pressing challenge for healthcare services [1–5]. Prescription opioid use in the US has fallen little from 2010 peaks, despite increased awareness of risks and opioid abuse [6]. North America is experiencing an ‘opioid crisis’, with rapidly rising opioid-related mortality, initially due to prescription opioids and more recently due to illicit heroin and fentanyl use, reaching a peak in 2016 [7]. Other higher-income countries risk following similar trajectories [8]. These trends are largely attributed to prescribing for chronic non-cancer pain [9], where opioids are no more effective than non-opioid pain medications and are associated with increased falls, fractures, dependence, overdose, and mortality [10,11]. Despite increased awareness of the potential harms in opioid prescribing, prescription rates remain historically high in both North America and Europe [12–14].

Whilst a growing body of work has investigated problematic opioid prescribing [1,12,15–17], less attention has been paid to evaluating proposed solutions. A Cochrane review found inadequate evidence for interventions targeting opioid use in individuals with chronic pain [18]. However, more recent studies indicate the value of provider- and system-level interventions [19–21], including a multifaceted approach comprising nurse care management, an electronic registry, data-driven academic detailing, and electronic decision tools [20].

Large variations in opioid prescribing have been observed, up to 10-fold in a UK study of primary care practices, suggesting that physician habits and norms are a major driver rather than patient need and evidence of benefit [16]. An ‘upstream’ population approach would therefore aim to change physician behaviour around both initiating and continuing opioid prescribing. The audit and feedback approach involves giving healthcare providers a summary of their clinical performance over a specified period [22]. It generally has modest effects on healthcare practice, which can translate into substantial population impacts [22].

We devised and applied an evidence- and theory-informed feedback intervention, the Campaign to Reduce Opioid Prescribing (CROP), to reduce opioid prescribing in primary care by prompting physicians to initiate opioids with caution and review patients currently prescribed opioids with no clear individual benefit. We evaluated the effect of the feedback intervention on prescribing of opioids and, anticipating the possibility of unintended consequences, prescribing of other analgesics and referrals to musculoskeletal services.

Methods

Study design and setting

In the UK, primary care is provided by general practices. Contracts for providing medical care relate to the practice rather than individual physicians. Patients are registered with a single practice rather than individual general practitioners, with an average practice list size of 9,000 patients and a single common electronic health record (EHR). The Yorkshire and Humber region covers an ethnically diverse population of 5.4 million residents with above average socioeconomic deprivation levels [23,24]. This study arose from our previous work, which showed a rise in opioid prescribing in Leeds and Bradford, the 2 largest cities in West Yorkshire. Medicines optimisation leads employed by clinical commissioning groups (CCGs) in West Yorkshire asked us to deliver an intervention to reduce opioid prescribing in this area. West Yorkshire (intervention group) has a population of 2.2 million residents served by 317 practices organised within 10 CCGs in 2016. One practice declined data sharing for this study. Five CCGs from the wider Yorkshire and Humber region (outside of West Yorkshire), with a population of 1 million residents and 130 practices, provided control data. We chose CCGs in the same region as the intervention sample for the control sample, as these would be subject to similar region-wide prescribing initiatives. Our main study population, and hence sample size, was therefore limited by the coverage of the data-sharing agreements. A further 3 CCGs in the region, comprising 134 practices and approximately 650,000 residents, were included as additional controls in an analysis using publicly available prescribing data.

We conducted a controlled interrupted time series (ITS) analysis. Controlled ITS is a quasi-experimental design used to evaluate the longitudinal effects of interventions, through regression modelling. The addition of a control group minimises potential confounding from concurrent interventions [25]. This design can detect whether an intervention effect is significantly greater than underlying trends and is appropriate in evaluating area-wide service improvement strategies when randomisation is not feasible [26,27].
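To make the design concrete, a controlled ITS is typically analysed as a segmented regression. The display below is a simplified, illustrative two-period sketch of the fixed-effect part only; the study's actual models use 3 periods, random effects, and practice covariates, as described under Data analysis and in S4 Text.

```latex
Y_{it} = \beta_0 + \beta_1 t + \beta_2 G_i + \beta_3 (G_i \times t)
       + \beta_4 P_t + \beta_5 (P_t \times t)
       + \beta_6 (G_i \times P_t) + \beta_7 (G_i \times P_t \times t) + \varepsilon_{it}
```

Here Y_{it} is the outcome rate for practice i in month t, G_i indicates intervention (1) versus control (0) practices, and P_t indicates months after the interruption; beta_5 captures the change in trend among control practices after the interruption, and beta_7 the additional change in trend attributable to the intervention.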

Intervention

Evidence- and theory-informed feedback [28] to each practice reported the number of patients 18 years and older prescribed opioids in the preceding 8 weeks, excluding those with coded cancer, palliative care, or drug dependence, compared to other practices within their CCG and West Yorkshire, as well as changes over time. We did not define clinical categories, given highly variable diagnostic coding for painful conditions. Report content and formats followed a design previously demonstrated to reduce high-risk prescribing in primary care that addressed identified Theoretical Domains Framework determinants of adherence to quality indicators [28,29]. Reports emphasised ‘thinking twice’ before initiating opioids, rather than addressing the more complex patients prescribed multiple opioids (see S1 Text for the TIDieR summary and S2 Text for an illustrative report). Feedback highlighted patient groups at higher risk of long-term or stronger opioid prescribing: for example, individuals 75 years and older, individuals with coded mental health diagnoses, and individuals co-prescribed antidepressants [16]. The reports incorporated evidence-informed behaviour change techniques, such as specific recommendations for action and action plans, designed to enhance effectiveness [30]. Given the competing priorities and demands that primary care physicians face in routine practice, the reports used non-judgmental and encouraging language. We granted practices access to our EHR searches, allowing them to identify and review individual patients.

Practices received a total of 6 bimonthly reports. We posted 5 copies of each report to practice managers, and the local medicine optimisation leads emailed PDF copies to practice managers for 8 out of 10 CCGs.

The intervention did not involve any changes to existing musculoskeletal or pain services, which general practices and patients could access as usual throughout the study period.

Data sources and outcomes

Our primary outcome was the number of adults prescribed any opioid per 1,000 adults per month. Secondary outcomes included the number of adults prescribed any opioid per 1,000 adults per month in the high-risk groups highlighted in the feedback [16]. Co-prescription with antidepressants was used as a proxy for mental illness, reflecting our previous finding that mental health diagnoses are often poorly recorded in EHRs in the UK [31]. We collected retrospective aggregated, anonymised practice-level data for intervention and control CCGs through the centralised reporting of 2 EHR systems (The Phoenix Partnership SystmOne and EMIS Health), at monthly intervals for 3 periods: pre-intervention (1 July 2013 to 31 March 2016; 33 months), intervention (1 April 2016 to 31 March 2017; 12 months), and post-intervention (1 April 2017 to 31 December 2017; 9 months). We extracted data on numbers of adults prescribed opioids in the previous 8 weeks, excluding those ever coded with cancer, palliative care, or drug dependence. We categorised opioid strength according to World Health Organization reported potency [16]. ‘Weaker’ opioids (with or without acetaminophen or ibuprofen) comprised codeine, dihydrocodeine, tramadol, pethidine, meptazinol, and tapentadol. ‘Strong’ opioids comprised diamorphine, morphine, oxycodone, fentanyl, hydromorphone, buprenorphine (excluding preparations used for substance misuse), pentazocine, dipipanone, and papaveretum. We collected data to assess any potential wider impacts on prescribing for pain, specifically the number of adults prescribed non-steroidal anti-inflammatory drugs (NSAIDs) and gabapentinoids, and referrals to musculoskeletal services (see S3 Text for sample search). We converted the numbers of adults in a prescription category into monthly rates based on monthly numbers of relevant adults per practice. The denominator for all outcomes was the number of adults per practice per month, except for the number of adults aged over 75 years prescribed opioids, where the number of adults aged over 75 years per practice per month was used. No patient-level data were extracted to calculate morphine equivalent doses.
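To illustrate the rate construction described above, the following minimal sketch (with hypothetical column names, not the study's extraction code) converts practice-level monthly counts into the per-1,000-adults rates used as outcomes:

```python
import pandas as pd

# Hypothetical long-format extract: one row per practice per month.
df = pd.DataFrame({
    "practice": ["A", "A", "B"],
    "month": ["2016-04", "2016-05", "2016-04"],
    "n_opioid": [520, 505, 310],     # adults prescribed any opioid in the previous 8 weeks
    "n_adults": [9000, 9010, 6200],  # registered adults that month (denominator)
})

# Primary outcome: adults prescribed any opioid per 1,000 adults per month.
df["opioid_rate_per_1000"] = df["n_opioid"] / df["n_adults"] * 1000
print(df)
```

For the outcome in adults aged over 75 years, the denominator would instead be the number of registered adults aged over 75 years in that practice and month.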

We collected monthly data on total opioid prescriptions from the publicly available OpenPrescribing database for the same time periods, to assess overall opioid prescribing trends for all intervention and control practices [32]. We converted the monthly prescribing data into opioid prescribing monthly rate per 1,000 patients based on the 2017–2018 practice list size. We collected data from the 2017–2018 Public Health England National General Practice Profiles [33] for practice-level variables, comprising practice list size; female-to-male patient ratio; percentage of patients with long-term conditions, as a proxy for disease burden; and percentage of patients reporting a positive experience of their practice, as a marker of satisfaction with care. We used the percentage of patients in employment and practice-level Index of Multiple Deprivation (IMD) score as markers of deprivation. The IMD measures area deprivation and is determined for each patient on the list, where available, and then averaged over the practice. We used overall achievement in the clinical domain of the Quality and Outcomes Framework—a performance management system whereby primary care practices are remunerated according to achievement of targets—as a measure of overall quality of care [34].

We estimated intervention costs based on known costs (e.g., postage and data extraction fees) and time spent by staff (full-time equivalent salaries). Potential opioid prescription savings were calculated based on national opioid prescription costs and trends for the West Yorkshire population. A formal economic analysis was not conducted.

Data analysis

We used multilevel linear mixed-effects models (LMMs) for all outcomes. This was a 3-level model with a random intercept and random slope on month at the practice level, and a random intercept at the CCG level, with practice nested within CCG (S4 Text). The LMMs allowed the outcome to differ over time for each practice and accounted for correlations in outcomes over time within a practice and between practices within the same CCG area. A fixed-effect interaction term of intervention (control/intervention), the 3 intervention periods (pre-intervention/intervention/post-intervention), and month (July 2013 to December 2017) estimated the change in the outcomes over time across the 3 periods, and differences in change in outcomes between intervention and control practices, within a single model. We compared different structures of the covariance matrices (unstructured, independent, and identity) to assess which best accounted for autocorrelation. For all outcomes, the unstructured covariance (i.e., distinct variances and covariance) was the most appropriate, comparing both the Akaike information criterion (AIC) and Bayesian information criterion (BIC) values. Finally, each LMM included the predetermined practice characteristics as fixed effects, to assess whether any differences in the outcomes between the intervention and control arms were due to practice differences. We checked that the LMM assumptions regarding autocorrelation, homoscedasticity of the residuals, and normality of the residuals’ distribution were not violated for all unadjusted and adjusted models. We confirmed that seasonality would not be an influence by reviewing changes in outcomes for each practice over time before developing models.
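The sketch below shows one way such a nested random-intercept and random-slope model could be specified in Python with statsmodels. It is an approximation under assumed, hypothetical variable names, not the authors' code; in particular, the covariance structure selected in the study (unstructured, per S4 Text) may differ from this simplified specification.

```python
import statsmodels.formula.api as smf

# df: one row per practice-month with (hypothetical) columns:
#   rate     outcome, e.g. adults prescribed any opioid per 1,000 adults
#   month    months elapsed since July 2013 (numeric)
#   arm      "control" or "intervention"
#   period   "pre", "during", or "post"
#   ccg, practice                            cluster identifiers
#   pct_female, qof, pos_exp, pct_ltc, imd   practice-level covariates

model = smf.mixedlm(
    # Fixed effects: arm x period x month interaction plus practice covariates.
    "rate ~ C(arm) * C(period) * month + pct_female + qof + pos_exp + pct_ltc + imd",
    data=df,
    groups=df["ccg"],     # random intercept for CCG
    re_formula="1",
    vc_formula={
        "practice": "0 + C(practice)",              # random intercept per practice (nested in CCG)
        "practice_month": "0 + C(practice):month",  # random slope on month per practice
    },
)
result = model.fit(reml=True)
print(result.summary())
```

Note that this variance-components formulation treats practice-level intercepts and slopes as independent, whereas the authors report comparing unstructured, independent, and identity covariance structures and selecting the unstructured form.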

Sensitivity analysis (S5 Text) explored and confirmed the robustness of the modelling approaches, based on the adjusted LMM for the main outcome. We refitted the model after removing observations with residuals greater than 2 or less than −2 to assess the impact of outliers; this made little difference to model estimates (S6 Text). We found no multicollinearity between the practice characteristics (no pairwise correlations with ρ > 0.7 and p < 0.05), and while some practice characteristics were associated with different rates of adults prescribed opioids (determined by including a 4-way interaction term with intervention, the 3 periods, and month), these differences did not change over time. Comparisons of AIC and BIC values for multilevel mixed-effects Poisson and negative binomial regression models and the adjusted LMM (all without the CCG level due to convergence issues) for the main outcome indicated that the LMM was the most appropriate fit to the data (S6 Text).
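For reference, the information criteria used in these model comparisons take their standard forms, where k is the number of estimated parameters, n the number of observations, and L-hat the maximised likelihood; lower values indicate a better balance of fit and complexity:

```latex
\mathrm{AIC} = 2k - 2\ln\hat{L}, \qquad \mathrm{BIC} = k\ln n - 2\ln\hat{L}
```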

We adhered to current reporting recommendations for ITS [35–37]. Our statistical analysis plan is provided (S7 Text).

Ethical approval

The University of Leeds School of Medicine Research Ethics Committee provided ethical approval for the evaluation (MREC 17–042).

Results

Intervention practices were similar to control practices but generally had larger list sizes, fewer patients with long-term conditions, and more deprived populations (Table 1). Before the intervention, the median rate of adults prescribed opioids per 1,000 adults per month was 58.1 in intervention practices and 62.2 in control practices (Table 2). The number of patients at higher risk of long-term or stronger opioid prescribing who were prescribed opioids; the number of patients prescribed NSAIDs, gabapentin, or pregabalin; and the number of patients referred to musculoskeletal services were similar between intervention and control practices.

Table 1. Summary of practice characteristics.

Dataset and group  Number of practices Median list size (IQR)  Mean percent female (95% CI)  Median percent positive patient experience (IQR)a  Mean percent with LTC (95% CI)b  Median percent QOF score (IQR)c  Mean IMD score (95% CI)d 
CCG data
Control practices  130  6,673 (4,102, 9,803)  49.4 (49.0, 51.6)  83.3 (76.5, 89.6)  55.4 (54.0, 58.1)  98.1 (96.1, 99.5)  28.9 (26.5, 32.1) 
Intervention practices  313  7,550 (4,452, 10,540)  49.2 (48.8, 51.4)  83.8 (76.3, 89.7)  51.0 (50.0, 53.5)  98.1 (96.1, 99.4)  30.3 (28.9, 33.0) 
OpenPrescribing data
Control practices 264 7,131 (3,982, 9,878) 51.5 (48.0, 55.3)  86.4 (77.9, 91.6)  54.9 (53.9, 57.4)  98.6 (96.5, 99.8)  25.1 (23.3, 28.0) 
Intervention practices 313 7,550 (4,452, 10,540) 49.2 (48.8, 51.4)  83.8 (76.3, 89.7)  51.0 (50.0, 53.5)  98.1 (96.1, 99.4)  30.3 (28.9, 33.0) 

CCG, clinical commissioning group; GP, general practitioner; IMD, Index of Multiple Deprivation; LTC, long-term condition; QOF, Quality and Outcomes Framework.

aResults from GP patient survey question: ‘Overall, how would you describe your experience of your GP practice’. The indicator value is the percentage of people who answered ‘very good’ or ‘fairly good’.

bResults from GP patient survey question: ‘Do you have any long-term physical or mental health conditions, disabilities or illnesses’. The indicator value is the percentage of people who answered ‘Yes’.

cThe percentage of all QOF points achieved across all domains as a proportion of all achievable points. (QOF is a financially incentivised quality improvement programme for all GP practices in England.)

dAn overall measure of multiple deprivation experienced by people living in an area: the higher the score, the greater the deprivation.

Table 2. Summary of opioid prescribing and other outcome-related characteristics at baseline for intervention and control practices.

Characteristic Median (IQR) number of adults per 1,000 adults at baseline (2013 September)
CCG data OpenPrescribing data
Control practices Intervention practices Control practices Intervention practices
Opioid prescription 62.2 (49.7, 76.8) 58.1 (44.9, 71.9) 40.3 (30.6, 50.8) 34.5 (25.7, 44.7)
Strong opioid prescription 4.2 (2.8, 5.8) 4.9 (3.2, 7.0)
Opioid prescription—patient >75 years 108.1 (83.9, 138.0) 119.5 (97.8, 143.7)
Anti-depressant prescription 14.5 (10.0, 18.9) 12.8 (9.1, 17.0)
Mental health diagnosis 23.9 (17.6, 32.3) 23.9 (17.7, 30.0)
Benzodiazepine prescription 4.8 (3.3, 6.6) 3.9 (2.1, 5.9)
Non-steroidal anti-inflammatory prescription 27.0 (20.2, 34.1) 27.9 (20.8, 40.5)
Gabapentin prescription 8.0 (5.9, 10.8) 6.4 (4.5, 8.8)
Pregabalin prescription 5.3 (3.5, 7.2) 5.2 (3.5, 7.2)
Musculoskeletal referral 2.8 (1.9, 4.0) 3.5 (2.4, 4.6)

CCG, clinical commissioning group.

For the primary outcome, the rate of any opioid prescribing rose across all practices during the pre-intervention period, increasing more in control than in intervention practices, with an adjusted change in rate of 0.36 (95% CI 0.27, 0.46) and 0.18 (95% CI 0.11, 0.25) adults prescribed opioids per 1,000 per month, respectively (Table 3). During the intervention period, the opioid prescribing rate rose by 0.53 per 1,000 per month (95% CI 0.29, 0.77) in control practices but fell in intervention practices by 0.12 per 1,000 per month (95% CI −0.30, 0.07), a difference of −0.65 (95% CI −0.95, −0.35; Fig 1). Post-intervention, the opioid prescribing rates decreased in both groups, with a smaller difference in mean change per month between the control and intervention practices of −0.26 (95% CI −0.57, 0.05). By the final month of follow-up, there was a mean difference of −7.4 (95% CI −17.4, 2.6) adults prescribed opioids per 1,000 between intervention and control practices. We estimate that this corresponds to around 15,000 fewer adults prescribed opioids during the intervention year in our total intervention population of 1.9 million. Estimated intervention effects changed little after adjustment for practice characteristics, and therefore adjusted estimates are shown.

Table 3. Mean number of adults prescribed opioid per 1,000 adults and mean change per month: multilevel linear model—electronic health record data and denominator.

Outcome and time period Month Mean (95% CI) number of adults prescribed opioid per 1,000 adults Mean (95% CI) change per month, over the time period
Control (n = 130) Intervention (n = 313) Difference Control (n = 130) Intervention (n = 313) Difference
Adults prescribed opioid—unadjusted
Pre-intervention 2013–09 57.3 (49.6, 64.9) 57.0 (50.0, 63.9) −0.3 (−10.6, 10.0) 0.36 (0.27, 0.46) 0.18 (0.11, 0.25) −0.18 (−0.30, −0.07)
2016–03 68.2 (61.0, 75.4) 62.4 (55.7, 69.0) −5.8 (−15.6, 3.9)
Intervention 2016–04 63.9 (56.6, 71.1) 63.7 (57.0, 70.3) −0.2 (−10.1, 9.6) 0.53 (0.29, 0.77) −0.12 (−0.30, 0.07) −0.65 (−0.95, −0.35)
2017–03 69.7 (62.5, 77.0) 62.4 (55.7, 69.0) −7.4 (−17.2, 2.5)
Post-intervention 2017–04 66.3 (59.0, 73.5) 61.7 (55.0, 68.4) −4.6 (−14.4, 5.3) 0.22 (−0.03, 0.46) −0.04 (−0.24, 0.15) −0.26 (−0.57, 0.05)
2018–03 68.7 (61.3, 76.0) 61.2 (54.5, 68.0) −7.4 (−17.4, 2.6)
Adults prescribed opioid—adjusted a
Pre-intervention 2013–09 55.0 (46.8, 63.2) 58.2 (50.6, 65.9) 3.2 (−8.0, 14.5) 0.36 (0.27, 0.46) 0.18 (0.11, 0.25) −0.18 (−0.30, −0.07)
2016–03 65.9 (58.2, 73.7) 63.6 (56.2, 71.0) −2.3 (−13.1, 8.5)
Intervention 2016–04 61.7 (53.8, 69.6) 64.9 (57.4, 72.4) 3.2 (−7.7, 14.0) 0.54 (0.29, 0.78) −0.11 (−0.30, 0.08) −0.65 (−0.96, −0.34)
2017–03 67.6 (59.7, 75.5) 63.7 (56.2, 71.1) −4.0 (−14.8, 6.9)
Post-intervention 2017–04 64.2 (56.3, 72.1) 63.0 (55.5, 70.5) −1.2 (−12.0, 9.7) 0.21 (−0.03, 0.46) −0.05 (−0.24, 0.15) −0.26 (−0.57, 0.05)
2018–03 66.5 (58.6, 74.5) 62.5 (55.0, 70.0) −4.0 (−15.0, 7.0)
Adults prescribed strong opioid—adjusted a
Pre-intervention 2013–09 3.9 (3.1, 4.7) 4.9 (4.2, 5.6) 1.0 (0.0, 2.1) 0.04 (0.03, 0.05) 0.04 (0.03, 0.05) 0.002 (−0.01, 0.01)
2016–03 5.1 (4.4, 5.8) 6.2 (5.5, 6.8) 1.1 (0.1, 2.0)
Intervention 2016–04 4.4 (3.7, 5.2) 6.2 (5.6, 6.8) 1.8 (0.8, 2.7) 0.01 (−0.01, 0.03) −0.10 (−0.11, −0.08) −0.11 (−0.13, −0.08)
2017–03 4.6 (3.9, 5.3) 5.1 (4.5, 5.8) 0.6 (−0.4, 1.5)
Post-intervention 2017–04 4.2 (3.5, 4.9) 5.1 (4.5, 5.7) 0.9 (−0.1, 1.8) −0.03 (−0.05, −0.01) −0.02 (−0.03, −0.003) 0.01 (−0.01, 0.04)
2018–03 3.9 (3.2, 4.6) 4.9 (4.2, 5.5) 1.0 (0.1, 2.0)
Adults aged >75 years prescribed opioid—adjusted a
Pre-intervention 2013–09 81.9 (64.4, 99.3) 111.0 (95.8, 126.2) 29.1 (5.9, 52.3) 1.54 (1.33, 1.76) 0.77 (0.60, 0.94) −0.78 (−1.05, −0.50)
2016–03 128.2 (113.3, 143.1) 134.1 (120.6, 147.6) 5.9 (−14.3, 26.1)
Intervention 2016–04 106.5 (91.5, 121.4) 137.4 (123.8, 151.0) 30.9 (10.7, 51.2) 1.82 (1.37, 2.27) 0.06 (−0.28, 0.41) −1.76 (−2.33, −1.19)
2017–03 126.5 (112.0, 141.0) 138.1 (124.8, 151.4) 11.6 (−8.2, 31.4)
Post-intervention 2017–04 118.1 (103.6, 132.6) 133.1 (119.8, 146.4) 15.0 (−4.8, 34.7) 0.72 (0.28, 1.17) 0.09 (−0.26, 0.45) −0.63 (−1.20, −0.06)
2018–03 126.1 (111.7, 140.5) 134.1 (120.9, 147.3) 8.1 (−11.6, 27.7)
Adults co-prescribed an antidepressant with opioid—adjusted a
Pre-intervention 2013–09 11.7 (9.7, 13.8) 11.7 (9.9, 13.5) −0.03 (−2.8, 2.7) 0.1 (0.07, 0.14) 0.12 (0.09, 0.14) 0.02 (−0.03, 0.06)
2016–03 14.8 (13.0, 16.7) 15.3 (13.6, 17.0) 0.4 (−2.1, 3.0)
Intervention 2016–04 14.2 (12.3, 16.2) 16.2 (14.5, 18.0) 2.0 (−0.6, 4.6) 0.18 (0.09, 0.27) −0.003 (−0.08, 0.07) −0.18 (−0.30, −0.06)
2017–03 16.2 (14.3, 18.2) 16.2 (14.4, 17.9) −0.04 (−2.7, 2.6)
Post-intervention 2017–04 15.4 (13.5, 17.4) 16.1 (14.3, 17.8) 0.7 (−2.0, 3.3) 0.13 (0.04, 0.23) 0.04 (−0.03, 0.11) −0.09 (−0.21, 0.03)
2018–03 16.9 (14.9, 18.9) 16.5 (14.7, 18.3) −0.4 (−3.1, 2.3)
Adults with a mental health diagnosis prescribed opioid—adjusted a
Pre-intervention 2013–09 20.3 (16.0, 24.6) 22.3 (18.2, 26.4) 2.0 (−4.0, 7.9) 0.2 (0.16, 0.23) 0.14 (0.11, 0.16) −0.06 (−0.11, −0.02)
2016–03 26.2 (22.0, 30.4) 26.3 (22.3, 30.4) 0.1 (−5.8, 5.9)
Intervention 2016–04 24.6 (20.4, 28.8) 27.0 (22.9, 31.0) 2.4 (−3.5, 8.2) 0.27 (0.19, 0.35) 0.03 (−0.03, 0.09) −0.24 (−0.35, −0.14)
2017–03 27.6 (23.4, 31.8) 27.3 (23.2, 31.3) −0.3 (−6.2, 5.6)
Post-intervention 2017–04 26.2 (22.0, 30.4) 27.0 (22.9, 31.1) 0.8 (−5.1, 6.7) 0.18 (0.10, 0.26) 0.08 (0.02, 0.15) −0.1 (−0.20, 0.004)
2018–03 28.2 (23.9, 32.5) 27.9 (23.8, 32.0) −0.3 (−6.2, 5.6)
Adults co-prescribed a benzodiazepine with opioid—adjusted a
Pre-intervention 2013–09 5.9 (5.0, 6.7) 4.6 (3.9, 5.2) −1.3 (−2.4, −0.2) −0.02 (−0.04, 0.01) 0.02 (0.0002, 0.04) 0.04 (0.006, 0.07)
2016–03 5.3 (4.6, 6.1) 5.2 (4.6, 5.7) −0.2 (−1.1, 0.7)
Intervention 2016–04 5.2 (4.4, 6.0) 5.5 (4.8, 6.1) 0.2 (−0.8, 1.3) 0.05 (−0.02, 0.13) −0.03 (−0.09, 0.03) −0.09 (−0.19, 0.01)
2017–03 5.8 (5.0, 6.7) 5.1 (4.4, 5.7) −0.8 (−1.8, 0.3)
Post-intervention 2017–04 5.3 (4.5, 6.2) 5.3 (4.7, 6.0) 0.0 (−1.1, 1.1) 0.02 (−0.06, 0.09) −0.07 (−0.13, −0.01) −0.09 (−0.19, 0.01)
2018–03 5.5 (4.6, 6.4) 4.5 (3.8, 5.3) −1.0 (−2.2, 0.2)
Adults prescribed a non-steroidal anti-inflammatory—adjusted a
Pre-intervention 2013–09 35.2 (29.9, 40.6) 40.8 (36.1, 45.4) 5.5 (−1.6, 12.7) −0.15 (−0.20, −0.10) −0.08 (−0.12, −0.05) 0.07 (0.005, 0.13)
2016–03 30.8 (25.8, 35.8) 38.3 (33.8, 42.7) 7.5 (0.8, 14.2)
Intervention 2016–04 28.7 (23.6, 33.7) 36.6 (32.2, 41.1) 8.0 (1.2, 14.7) −0.1 (−0.25, 0.05) −0.35 (−0.47, −0.24) −0.25 (−0.44, −0.06)
2017–03 27.6 (22.6, 32.6) 32.8 (28.3, 37.2) 5.2 (−1.5, 11.9)
Post-intervention 2017–04 26.3 (21.4, 31.3) 30.8 (26.4, 35.2) 4.5 (−2.2, 11.2) −0.11 (−0.26, 0.03) −0.16 (−0.28, −0.04) −0.04 (−0.23, 0.15)
2018–03 25.1 (20.2, 30.0) 29.1 (24.7, 33.5) 4.0 (−2.7, 10.6)
Adults prescribed gabapentin—adjusted a
Pre-intervention 2013–09 6.6 (4.4, 8.8) 6.1 (4.3, 8.0) −0.5 (−3.4, 2.5) 0.07 (0.04, 0.11) 0.11 (0.09, 0.14) 0.04 (−0.005, 0.08)
2016–03 8.8 (6.6, 11.0) 9.5 (7.6, 11.3) 0.7 (−2.2, 3.5)
Intervention 2016–04 9.0 (6.7, 11.3) 10.5 (8.6, 12.5) 1.5 (−1.5, 4.5) 0.1 (−0.04, 0.24) −0.15 (−0.26, −0.04) −0.25 (−0.42, −0.07)
2017–03 10.1 (7.8, 12.4) 8.9 (7.0, 10.8) −1.2 (−4.2, 1.8)
Post-intervention 2017–04 10.2 (7.9, 12.4) 9.0 (7.0, 10.9) −1.2 (−4.2, 1.8) 0.03 (−0.11, 0.17) −0.02 (−0.13, 0.09) −0.05 (−0.23, 0.13)
2018–03 10.5 (8.2, 12.7) 8.7 (6.8, 10.6) −1.7 (−4.7, 1.2)
Adults prescribed pregabalin—adjusted a
Pre-intervention 2013–09 8.4 (7.0, 9.7) 5.0 (3.9, 6.1) −3.4 (−5.1, −1.6) −0.07 (−0.11, −0.04) 0.06 (0.04, 0.09) 0.14 (0.10, 0.18)
2016–03 6.1 (4.8, 7.5) 6.8 (5.7, 7.9) 0.7 (−1.0, 2.5)
Intervention 2016–04 5.6 (4.1, 7.1) 6.3 (5.1, 7.5) 0.7 (−1.2, 2.6) −0.03 (−0.17, 0.10) −0.02 (−0.13, 0.08) 0.01 (−0.16, 0.18)
2017–03 5.3 (3.8, 6.8) 6.0 (4.8, 7.2) 0.8 (−1.2, 2.7)
Post-intervention 2017–04 3.4 (1.9, 4.9) 5.2 (4.0, 6.4) 1.8 (−0.1, 3.8) 0.25 (0.11, 0.39) 0.24 (0.14, 0.35) −0.01 (−0.18, 0.17)
2018–03 6.1 (4.6, 7.6) 7.9 (6.7, 9.1) 1.8 (−0.2, 3.7)
Adults referred to musculoskeletal services—adjusted a
Pre-intervention 2013–09 2.7 (2.0, 3.3) 3.8 (3.2, 4.4) 1.1 (0.3, 2.0) 0.02 (0.009, 0.03) 0.004 (−0.005, 0.01) −0.02 (−0.03, −0.002)
2016–03 3.3 (2.8, 3.8) 3.9 (3.4, 4.4) 0.6 (−0.1, 1.3)
Intervention 2016–04 3.4 (2.8, 3.9) 4.2 (3.7, 4.6) 0.8 (0.1, 1.5) −0.02 (−0.04, 0.01) 0.003 (−0.01, 0.02) 0.02 (−0.008, 0.05)
2017–03 3.2 (2.7, 3.7) 4.2 (3.7, 4.7) 1.0 (0.3, 1.7)
Post-intervention 2017–04 3.3 (2.8, 3.9) 4.5 (4.0, 5.0) 1.1 (0.4, 1.9) −0.05 (−0.07, −0.02) −0.05 (−0.07, −0.04) −0.01 (−0.04, 0.02)
2018–03 2.8 (2.3, 3.4) 3.9 (3.4, 4.4) 1.1 (0.3, 1.8)

aAdjusted for percent female, Quality and Outcomes Framework score, percentage of patients reporting a positive experience of their practice, percentage of patients with long-term conditions, and Index of Multiple Deprivation.

Fig 1. Mean number of adults prescribed opioid per 1,000 adults: multilevel linear model estimates: Electronic health record data and denominator.


Adjusted for percent female, Quality and Outcomes Framework score, percentage of patients reporting a positive experience of their practice, percentage of patients with long-term conditions, and Index of Multiple Deprivation. Black line = intervention practices; grey line = control practices.

We observed trends generally favouring the intervention for groups at higher risk of long-term or stronger opioid prescribing. During the intervention, the rate of strong opioid prescribing decreased more in intervention than control practices (−0.11; 95% CI −0.13, −0.08), although rates in both groups similarly declined post-intervention. The rate of opioid prescribing in those aged 75 years and over decreased more in intervention practices than in control practices during the intervention period (−1.76; 95% CI −2.33, −1.19), with a sustained, if reduced, post-intervention difference (−0.63; 95% CI −1.20, −0.06). During the intervention period, rates of opioid prescribing fell more per month in intervention practices than in control practices in adults co-prescribed an antidepressant (−0.18; 95% CI −0.30, −0.06) and in adults with a mental health diagnosis (−0.24; 95% CI −0.35, −0.14), although post-intervention differences were not sustained. Rates of co-prescribed benzodiazepines did not differ significantly between intervention and control practices.

Regarding other analgesics, we observed declining pre-intervention trends for NSAID prescribing, with a larger decrease in intervention practices than control practices during the intervention (−0.35; 95% CI −0.47, −0.24) and both groups having similar post-intervention decreases. Rates of gabapentin prescribing decreased more in intervention practices than control practices during the intervention period (−0.25; 95% CI −0.42, −0.07), but this was not the case for pregabalin prescribing (0.01; 95% CI −0.16, 0.18). We observed no differences in rates of musculoskeletal referrals between intervention and control practices during the intervention period (0.02; 95% CI −0.008, 0.05) or afterwards (−0.01; 95% CI −0.04, 0.02).

Using publicly available data for total opioid prescriptions, we observed rising pre-intervention trends for both groups, a small decline during the intervention in intervention practices (−0.1; 95% CI −0.19, −0.01), and fairly static post-intervention rates in both groups (Table 4).

Table 4. Mean number of prescriptions for opioids per 1,000 adults: Multilevel linear model—OpenPrescribing data, Public Health England National General Practice Profiles denominator.

Time period Month Mean (95% CI) number of prescriptions for opioids per 1,000 adults Mean (95% CI) change per month, over the time period
Control (n = 264) Intervention (n = 313) Difference Control (n = 264) Intervention (n = 313) Difference
Pre-intervention 2013–09 38.7 (33.2, 44.2) 33.5 (29.3, 37.6) −5.3 (−12.1, 1.6) 0.11 (0.06, 0.15) 0.11 (0.09, 0.14) 0.01 (−0.04, 0.06)
2016–03 41.9 (36.6, 47.2) 36.9 (32.9, 40.9) −5.0 (−11.7, 1.7)
Intervention 2016–04 43.3 (38.0, 48.6) 37.9 (33.9, 41.9) −5.4 (−12.1, 1.3) 0.02 (−0.05, 0.10) −0.08 (−0.13, −0.03) −0.10 (−0.19, −0.01)
2017–03 43.5 (38.2, 48.8) 37.0 (33.0, 41.1) −6.5 (−13.2, 0.2)
Post-intervention 2017–04 43.7 (38.4, 49.0) 37.1 (33.0, 41.1) −6.7 (−13.4, 0.0) −0.03 (−0.10, 0.05) −0.05 (−0.10, 0.001) −0.02 (−0.11, 0.07)
2018–03 43.4 (38.1, 48.8) 36.5 (32.5, 40.6) −6.9 (−13.6, −0.2)

Model includes CCG and practice levels; adjusted for percent female, Quality and Outcomes Framework score, percentage of patients reporting a positive experience of their practice, percentage of patients with long-term conditions, and Index of Multiple Deprivation.

The results of a simple (uncontrolled) ITS of intervention practices mirrored those of the controlled ITS (Table 5). This consistency provides greater confidence that the association between the intervention and the effect is likely to be causal, and suggests that the control practices did not experience some other event that would bias the comparison [35].

Table 5. Mean number of adults prescribed opioid per 1,000 adults: Multilevel linear model—electronic health record data and denominator, intervention only model (n = 313).

Time period Month Mean (95% CI) number of adults prescribed opioid per 1,000 adults—adjusted Mean (95% CI) change per month, over the timeframe
Pre-intervention 2013–09 57.1 (54.3, 70.6) 0.18 (0.10, 0.25)
2016–03 62.4 (55.6, 72.1)
Intervention 2016–04 63.7 (55.5, 71.9) –0.12 (–0.32, 0.08)
2017–03 62.4 (54.3, 70.6)
Post-intervention 2017–04 61.8 (53.6, 70.0) –0.04 (–0.25, 0.16)
2018–03 61.3 (53.1, 69.6)

We estimated that the feedback intervention cost approximately US$66,000 to deliver, including US$52,000 in staff costs, US$3,200 in data extraction fees, and US$5,200 in stationery costs. Nationally, opioid prescription costs rose by approximately US$26,000 per 100,000 population during the intervention year. The reduction in opioid prescribing equated to around US$1,155,000 in savings across intervention CCGs. Once all costs were accounted for, the intervention yielded an estimated overall saving of around US$1,000,000.
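As a rough worked check, using only the estimates reported above:

```latex
\text{net saving} \approx \$1{,}155{,}000 - \$66{,}000 = \$1{,}089{,}000
```

which is broadly consistent with the reported overall saving of around US$1,000,000, allowing for rounding of the component estimates.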

Discussion

We observed that repeated evidence- and theory-informed comparative feedback reversed a rising trend of opioid prescribing in primary care, with sustained, if attenuated, effects. We have therefore demonstrated a successful, scalable strategy to reduce population-level opioid prescribing. The feedback intervention had a modest effect, with a difference of 0.65 fewer adults prescribed any opioid per 1,000 per month in intervention practices compared to control practices. However, at a population level, there were substantially fewer patients taking prescribed opioid medications.

The number of patients prescribed strong opioids fell during the intervention, although at a slower rate than the number of patients prescribed any opioid, possibly reflecting a longer de-prescribing process than for weaker opioids, given the need for gradual reductions to limit withdrawal symptoms. The intervention also had sustained effects for patients in targeted high-risk groups, including adults with coded mental health diagnoses and those co-prescribed antidepressants. The greatest effect was in adults aged 75 years and older, with a greater reduction in intervention practices than control practices of almost 1.8 adults aged 75 years and older prescribed opioids per 1,000 per month. This is important given the heightened risks of premature mortality, associated falls, and unplanned hospital admissions in this population [38,39].

Contrary to expectations, we observed reductions in wider analgesic prescribing not specifically targeted by feedback, specifically of NSAIDs and gabapentin, and no increases in referrals to musculoskeletal services. This provides some reassurance that the intervention had few rebound effects on wider service utilisation and costs. Indeed, it may have prompted primary care physicians to think differently about the value of prescribing analgesics in chronic non-cancer pain, and to prefer self-management options.

Prescribing data from publicly available sources [32] confirm that the intervention changed the underlying trend of rising opioid prescriptions, although total prescribing levelled off rather than fell. The smaller effect in this dataset is likely due to additional ‘noise’ in these data, which include prescriptions for cancer pain and drug dependency, especially as primary care physicians are encouraged to prescribe stronger opioids earlier and for longer in palliative care [40].

There is a growing evidence base on the value of provider- and system-level interventions to reduce opioid use in adults with chronic non-cancer pain [19–21,41,42]. We provide evidence for a relatively efficient and scalable population strategy to address prescribing of both weaker and stronger opioids. The widespread use of EHR systems means that primary care prescribing data can be used to both drive and monitor change at a relatively low cost [43–45]. Our estimated costs suggest this intervention is relatively efficient given potential savings in projected opioid prescription costs.

Our intervention incorporated a range of evidence- and expert-informed suggestions to improve the effectiveness of feedback, such as providing repeated feedback with comparators to reinforce desired behaviour, recommending specific actions, and ensuring credibility of information [30]. However, the success of our strategy may also have depended upon contextual factors, specifically the timing and nature of the targeted clinical behaviour [46]. The intervention occurred during a period when primary care physicians were becoming increasingly aware of an opioid prescription problem and recognised a need for action. Feedback, used alone or with other interventions, may not be effective in changing all types of clinical behaviour [29]; opioid prescribing represents a relatively discrete behaviour that is reasonably within physician control [30].

We highlight 5 limitations. First, our study took place in a single region, potentially limiting generalisability to the rest of the UK and other healthcare systems. However, primary care physicians internationally report similar types of challenges in managing opioid prescribing [47], and performance feedback has been shown to work in many settings [22]. As only 1 out of 317 practices declined participation, selection bias is unlikely. We also demonstrated effects in a population with relatively high levels of socioeconomic deprivation, a factor that is associated with higher levels of opioid prescribing [48].

Second, routinely collected data are prone to coding errors. Such errors are less likely for prescribing data, but our use of ‘ever coded’ diagnoses may have overestimated current diagnoses, especially cancer and drug dependence. Some practices may have responded to feedback by re-categorising patients as drug dependent, thereby taking them out of the denominator and inflating intervention effects. However, we observed similar patterns of reductions in OpenPrescribing data. Sensitivity analysis showed that ‘extreme’ values, possibly due to coding errors, did not affect model estimates. Furthermore, the modelling approaches accounted for missing data within practices.

Third, the quasi-experimental design cannot fully account for concurrent interventions. Our previous publication showing the rise in prescribing in this area may have alerted practices to rising opioid prescribing [16]. Media attention to the North American ‘opioid crisis’ during the intervention period may also have influenced prescribing behaviour. Media coverage of the scale of UK opioid prescribing began towards the end of the intervention period and is unlikely to significantly account for observed changes in opioid prescribing [49,50]. Use of control practices [26] and a simple ITS analysis provides greater confidence that any association between intervention and effect is likely to be causal.

Fourth, this study did not specifically examine the acceptability of the feedback reports and whether or how they were used by general practices. This will be addressed in a separate process evaluation.

Fifth, we cannot be certain whether reductions in opioid prescribing were always clinically appropriate as we did not assess individual patient clinical indications and outcomes. The absence of any increases in prescribing of other potentially harmful analgesics and in referrals suggests that the intervention did not generate increased demand.

Patients have strong expectations for prescription pain relief, making reductions in prescribing challenging if they are perceived as undermining therapeutic relationships and patient satisfaction. Strategies to bring about significant improvements in healthcare delivery are unlikely to succeed if they fail to address multiple barriers and enablers. Addressing the rise of opioid prescribing and its legacy is likely to require sustained, coordinated efforts across all levels of healthcare systems that target organisational, clinical, and patient behaviours [51]. Performance feedback offers one approach that can be coupled with complementary educational campaigns and decision support to change physician prescribing habits and patient expectations [22]. We welcome further research to determine whether our findings can be replicated in other healthcare systems. There are further opportunities to evaluate and enhance feedback effectiveness, ideally involving head-to-head comparisons of different ways of delivering feedback within randomised designs [52].

Conclusions

We observed that an evidence- and theory-informed feedback intervention reversed rising opioid prescribing trends in a primary care setting. Effects decreased following cessation of the feedback, which may need to be sustained for maximum long-term impact. We observed no concurrent increases in prescribing of other analgesics or demand for musculoskeletal services. Feedback therefore offers a scalable approach to reduce population-level opioid prescribing.

Supporting information

S1 Text. TIDieR checklist for the Campaign to Reduce Opioid Prescribing intervention, incorporating the reporting and design elements of audit and feedback intervention recommendations.

(PDF)

S2 Text. Sample practice report.

The baselayer of the map used in this file is from https://commons.wikimedia.org/wiki/File:England_Clinical_Commissioning_Group_(CCG)_Map_(Labelled).svg.

(PDF)

S3 Text. Primary outcome search terms for The Phoenix Partnership SystmOne electronic health record system.

(PDF)

S4 Text. Multilevel linear mixed-effects model.

(PDF)

S5 Text. Sensitivity analysis.

(PDF)

S6 Text. Mean number of adults prescribed opioid per 1,000 adults: Multilevel linear model—electronic health record data and denominator, residuals ±2 removed.

(PDF)

S7 Text. Statistical analysis plan.

(PDF)

Acknowledgments

We would like to thank Mohammed Imran at West Yorkshire Research and Development for his role in data collection.

Abbreviations

CCG: clinical commissioning group

EHR: electronic health record

IMD: Index of Multiple Deprivation

ITS: interrupted time series

LMM: linear mixed-effects model

NSAID: non-steroidal anti-inflammatory drug

Data Availability

Data cannot be shared publicly because of risk of patient identification where small numbers of patients per practice are included. Data are available from the University of Leeds School of Medicine Ethics Committee (contact via fmhuniethics@leeds.ac.uk) for researchers who meet the criteria for access to confidential data.

Funding Statement

SA received a starter grant for clinical lecturers from the Academy of Medical Sciences, the Wellcome Trust, the Medical Research Council, the British Heart Foundation, Arthritis Research UK, the Royal College of Physicians, and Diabetes UK [grant number SGL017/1033] to undertake this research. URL: https://acmedsci.ac.uk/grants-and-schemes/grant-schemes/starter-grants. The funders had no role in the study design, data collection and analysis, decision to publish, or preparation of the manuscript.

References

1. Kiang MV, Humphreys K, Cullen MR, Basu S. Opioid prescribing patterns among medical providers in the United States, 2003–17: retrospective, observational study. BMJ. 2020;368:l6968. doi: 10.1136/bmj.l6968
2. Kalkman GA, Kramers C, van Dongen RT, van den Brink W, Schellekens A. Trends in use and misuse of opioids in the Netherlands: a retrospective, multi-source database study. Lancet Public Health. 2019;4(10):e498–505. doi: 10.1016/S2468-2667(19)30128-8
3. Fredheim OMS, Mahic M, Skurtveit S, Dale O, Romundstad P, Borchgrevink PC. Chronic pain and use of opioids: a population-based pharmacoepidemiological study from the Norwegian Prescription Database and the Nord-Trøndelag Health Study. Pain. 2014;155(7):1213–21. doi: 10.1016/j.pain.2014.03.009
4. Lalic S, Gisev N, Bell JS, Korhonen MJ, Ilomäki J. Predictors of persistent prescription opioid analgesic use among people without cancer in Australia. Br J Clin Pharmacol. 2018;84(6):1267–78. doi: 10.1111/bcp.13556
5. Rosner B, Neicun J, Yang JC, Roman-Urrestarazu A. Opioid prescription patterns in Germany and the global opioid epidemic: systematic review of available evidence. PLoS ONE. 2019;14(8):e0221153. doi: 10.1371/journal.pone.0221153
6. Jeffery MM, Hooten WM, Henk HJ, Bellolio MF, Hess EP, Meara E, et al. Trends in opioid use in commercially insured and Medicare Advantage populations in 2007–16: retrospective cohort study. BMJ. 2018;362:k2833. doi: 10.1136/bmj.k2833
7. Ballantyne JC, Mao J. Opioid therapy for chronic pain. N Engl J Med. 2003;349(20):1943–53. doi: 10.1056/NEJMra025411
8. Berterame S, Erthal J, Thomas J, Fellner S, Vosse B, Clare P, et al. Use of and barriers to access to opioid analgesics: a worldwide, regional, and national study. Lancet. 2016;387(10028):1644–56. doi: 10.1016/S0140-6736(16)00161-6
9. Ray WA, Chung CP, Murray KT, Hall K, Stein C. Prescription of long-acting opioids and mortality in patients with chronic noncancer pain. JAMA. 2016;315(22):2415–23. doi: 10.1001/jama.2016.7789
10. Dowell D, Haegerich TM, Chou R. CDC guideline for prescribing opioids for chronic pain—United States, 2016. JAMA. 2016;315(15):1624–45. doi: 10.1001/jama.2016.1464
11. Krebs EE, Gravely A, Nugent S, Jensen AC, DeRonne B, Goldsmith ES, et al. Effect of opioid vs nonopioid medications on pain-related function in patients with chronic back pain or hip or knee osteoarthritis pain: the SPACE randomized clinical trial. JAMA. 2018;319(9):872–82. doi: 10.1001/jama.2018.0899
12. Jani M, Birlie Yimer B, Sheppard T, Lunt M, Dixon WG. Time trends and prescribing patterns of opioid drugs in UK primary care patients with non-cancer pain: a retrospective cohort study. PLoS Med. 2020;17(10):e1003270. doi: 10.1371/journal.pmed.1003270
13. Schieber LZ, Guy GP Jr, Seth P, Young R, Mattson CL, Mikosz CA, et al. Trends and patterns of geographic variation in opioid prescribing practices by state, United States, 2006–2017. JAMA Netw Open. 2019;2(3):e190665. doi: 10.1001/jamanetworkopen.2019.0665
14. Verhamme KMC, Bohnen AM. Are we facing an opioid crisis in Europe? Lancet Public Health. 2019;4(10):e483–4. doi: 10.1016/S2468-2667(19)30156-2
15. Curtis HJ, Croker R, Walker AJ, Richards GC, Quinlan J, Goldacre B. Opioid prescribing trends and geographical variation in England, 1998–2018: a retrospective database study. Lancet Psychiatry. 2019;6(2):140–50. doi: 10.1016/S2215-0366(18)30471-1
16. Foy R, Leaman B, McCrorie C, Petty D, House A, Bennett M, et al. Prescribed opioids in primary care: cross-sectional and longitudinal analyses of influence of patient and practice characteristics. BMJ Open. 2016;6(5):e010276. doi: 10.1136/bmjopen-2015-010276
17. Zin CS, Chen L-C, Knaggs RD. Changes in trends and pattern of strong opioid prescribing in primary care. Eur J Pain. 2014;18(9):1343–51. doi: 10.1002/j.1532-2149.2014.496.x
18. Eccleston C, Fisher E, Thomas KH, Hearn L, Derry S, Stannard C, et al. Interventions for the reduction of prescribed opioid use in chronic non-cancer pain. Cochrane Database Syst Rev. 2017;11(11):CD010323. doi: 10.1002/14651858.CD010323.pub3
19. Katzman JG, Qualls CR, Satterfield WA, Kistin M, Hofmann K, Greenberg N, et al. Army and Navy ECHO pain telementoring improves clinician opioid prescribing for military patients: an observational cohort study. J Gen Intern Med. 2019;34(3):387–95. doi: 10.1007/s11606-018-4710-5
20. Liebschutz JM, Xuan Z, Shanahan CW, LaRochelle M, Keosaian J, Beers D, et al. Improving adherence to long-term opioid therapy guidelines to reduce opioid misuse in primary care: a cluster-randomized clinical trial. JAMA Intern Med. 2017;177(9):1265–72. doi: 10.1001/jamainternmed.2017.2468
21. Samet JH, Tsui JI, Cheng DM, Liebschutz JM, Lira MC, Walley AY, et al. Improving the delivery of chronic opioid therapy among people living with human immunodeficiency virus: a cluster randomized clinical trial. Clin Infect Dis. 2020 Jul 22. doi: 10.1093/cid/ciaa1025
22. Ivers N, Jamtvedt G, Flottorp S, Young JM, Odgaard‐Jensen J, French SD, et al. Audit and feedback: effects on professional practice and healthcare outcomes. Cochrane Database Syst Rev. 2012;6:CD000259. doi: 10.1002/14651858.CD000259.pub3
23. Race Disparity Unit. Regional ethnic diversity. GOV.UK; 2020 [cited 2021 Sep 16]. Available from: https://www.ethnicity-facts-figures.service.gov.uk/uk-population-by-ethnicity/national-and-regional-populations/regional-ethnic-diversity/latest.
24. Department for Communities and Local Government. The English indices of deprivation 2010: statistical release. London: Department for Communities and Local Government; 2011.
25. Shadish WR, Cook TD, Campbell DT. Experimental and quasi-experimental designs for generalized causal inference. Boston: Houghton Mifflin; 2002. doi: 10.1037/1082-989x.7.1.3
26. Fretheim A, Zhang F, Ross-Degnan D, Oxman AD, Cheyne H, Foy R, et al. A reanalysis of cluster randomized trials showed interrupted time-series studies were valuable in health system evaluation. J Clin Epidemiol. 2015;68(3):324. doi: 10.1016/j.jclinepi.2014.10.003
27. Wagner AK, Soumerai SB, Zhang F, Ross-Degnan D. Segmented regression analysis of interrupted time series studies in medication use research. J Clin Pharm Ther. 2002;27(4):299–309. doi: 10.1046/j.1365-2710.2002.00430.x
28. Glidewell L, Willis TA, Petty D, Lawton R, McEachan RRC, Ingleson E, et al. To what extent can behaviour change techniques be identified within an adaptable implementation package for primary care? A prospective directed content analysis. Implement Sci. 2018;13(1):32. doi: 10.1186/s13012-017-0704-7
29. Willis TA, Collinson M, Glidewell L, Farrin AJ, Holland M, Meads D, et al. An adaptable implementation package targeting evidence-based indicators in primary care: a pragmatic cluster-randomised evaluation. PLoS Med. 2020;17(2):e1003045. doi: 10.1371/journal.pmed.1003045
30. Brehaut JC, Colquhoun HL, Eva KW, Carroll K, Sales A, Michie S, et al. Practice feedback interventions: 15 suggestions for optimizing effectiveness. Ann Intern Med. 2016;164(6):435–41. doi: 10.7326/M15-2248
31. McLintock K, Russell AM, Alderson SL, West R, House A, Westerman K, et al. The effects of financial incentives for case finding for depression in patients with diabetes and coronary heart disease: interrupted time series analysis. BMJ Open. 2014;4(8):e005178. doi: 10.1136/bmjopen-2014-005178
32. EBM DataLab. OpenPrescribing. Oxford: University of Oxford; 2017.
33. Public Health England. National general practice profiles. London: Public Health England; 2021 [cited 2021 Sep 16]. Available from: https://fingertips.phe.org.uk/profile/general-practice.
34. Doran T, Fullwood C, Gravelle H, Reeves D, Kontopantelis E, Hiroeh U, et al. Pay-for-performance programs in family practices in the United Kingdom. N Engl J Med. 2006;355(4):375–84. doi: 10.1056/NEJMsa055505
35. Lopez Bernal J, Cummins S, Gasparrini A. The use of controls in interrupted time series studies of public health interventions. Int J Epidemiol. 2018;47(6):2082–93. doi: 10.1093/ije/dyy135
36. Hudson J, Fielding S, Ramsay CR. Methodology and reporting characteristics of studies using interrupted time series design in healthcare. BMC Med Res Methodol. 2019;19(1):137. doi: 10.1186/s12874-019-0777-x
37. Turner SL, Karahalios A, Forbes AB, Taljaard M, Grimshaw JM, Korevaar E, et al. Creating effective interrupted time series graphs: review and recommendations. Res Synth Methods. 2021;12(1):106–17. doi: 10.1002/jrsm.1435
38. Reid MC, Henderson CR, Papaleontiou M, Amanfo L, Olkhovskaya Y, Moore AA, et al. Characteristics of older adults receiving opioids in primary care: treatment duration and outcomes. Pain Med. 2010;11(7):1063–71. doi: 10.1111/j.1526-4637.2010.00883.x
39. Daoust R, Paquet J, Moore L, Émond M, Gosselin S, Lavigne G, et al. Recent opioid use and fall-related injury among older patients with trauma. CMAJ. 2018;190(16):E500–6. doi: 10.1503/cmaj.171286
40. Chapman EJ, Edwards Z, Boland JW, Maddocks M, Fettes L, Malia C, et al. Practice review: evidence-based and effective management of pain in patients with advanced cancer. Palliat Med. 2020;34(4):444–53. doi: 10.1177/0269216319896955
41. Tadrous M, Greaves S, Martins D, Nadeem K, Singh S, Mamdani MM, et al. Evaluation of the fentanyl patch-for-patch program in Ontario, Canada. Int J Drug Policy. 2019;66:82–6. doi: 10.1016/j.drugpo.2019.01.025
42. Chen T-C, Chen L-C, Knaggs RD. A 15-year overview of increasing tramadol utilisation and associated mortality and the impact of tramadol classification in the United Kingdom. Pharmacoepidemiol Drug Saf. 2018;27:487–94. doi: 10.1002/pds.4320
43. Curtis HJ, Goldacre B. OpenPrescribing: normalised data and software tool to research trends in English NHS primary care prescribing 1998–2016. BMJ Open. 2018;8(2):e019921. doi: 10.1136/bmjopen-2017-019921
44. Hallsworth M, Chadborn T, Sallis A, Sanders M, Berry D, Greaves F, et al. Provision of social norm feedback to high prescribers of antibiotics in general practice: a pragmatic national randomised controlled trial. Lancet. 2016;387(10029):1743–52. doi: 10.1016/S0140-6736(16)00215-4
45. Guthrie B, Kavanagh K, Robertson C, Barnett K, Treweek S, Petrie D, et al. Data feedback and behavioural change intervention to improve primary care prescribing safety (EFIPPS): multicentre, three arm, cluster randomised controlled trial. BMJ. 2016;354:i4079. doi: 10.1136/bmj.i4079
46. Brown B, Gude WT, Blakeman T, van der Veer SN, Ivers N, Francis JJ, et al. Clinical Performance Feedback Intervention Theory (CP-FIT): a new theory for designing, implementing, and evaluating feedback in health care based on a systematic review and meta-synthesis of qualitative research. Implement Sci. 2019;14(1):40. doi: 10.1186/s13012-019-0883-5
47. Desveaux L, Saragosa M, Kithulegoda N, Ivers NM. Understanding the behavioural determinants of opioid prescribing among family physicians: a qualitative study. BMC Fam Pract. 2019;20(1):59. doi: 10.1186/s12875-019-0947-2
48. Mordecai L, Reynolds C, Donaldson LJ, de C Williams AC. Patterns of regional variation of opioid prescribing in primary care in England: a retrospective observational study. Br J Gen Pract. 2018;68(668):e225–33. doi: 10.3399/bjgp18X695057
49. Gornall J. Exposed: national disgrace as a quarter of a million patients are turned into drug addicts by their doctors. The Daily Mail. 2017 Mar 28.
50. Rhodes D. NHS accused of fuelling rise in opioid addiction. BBC News. 2018 Mar 15 [cited 2021 Sep 20]. Available from: https://www.bbc.co.uk/news/uk-england-43304375.
  • 51.Ferlie EB, Shortell SM. Improving the quality of health care in the United Kingdom and the United States: a framework for change. Milbank Q. 2001;79(2):281–315. doi: 10.1111/1468-0009.00206 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 52.Grimshaw J, Ivers N, Linklater S, Foy R, Francis JJ, Gude WT, et al. Reinvigorating stagnant science: implementation laboratories and a meta-laboratory to efficiently advance the science of audit and feedback. BMJ Qual Saf. 2019;28(5):416–23. doi: 10.1136/bmjqs-2018-008355 [DOI] [PMC free article] [PubMed] [Google Scholar]

Decision Letter 0

Artur Arikainen

23 Nov 2020

Dear Dr Alderson,

Thank you for submitting your manuscript entitled "The effects of an evidence and theory-informed feedback intervention on opioid prescribing for non-cancer pain in primary care: a controlled interrupted time series analysis" for consideration by PLOS Medicine.

Your manuscript has now been evaluated by the PLOS Medicine editorial staff and I am writing to let you know that we would like to send your submission out for external peer review.

However, before we can send your manuscript to reviewers, we need you to complete your submission by providing the metadata that is required for full assessment. To this end, please login to Editorial Manager where you will find the paper in the 'Submissions Needing Revisions' folder on your homepage. Please click 'Revise Submission' from the Action Links and complete all additional questions in the submission questionnaire.

Please re-submit your manuscript within two working days, i.e. by .

Login to Editorial Manager here: https://www.editorialmanager.com/pmedicine

Once your full submission is complete, your paper will undergo a series of checks in preparation for peer review. Once your manuscript has passed all checks it will be sent out for review.

Feel free to email us at plosmedicine@plos.org if you have any queries relating to your submission.

Kind regards,

Artur A. Arikainen,

Associate Editor

PLOS Medicine

Decision Letter 1

Emma Veitch

16 Jan 2021

Dear Dr. Alderson,

Thank you very much for submitting your manuscript "The effects of an evidence and theory-informed feedback intervention on opioid prescribing for non-cancer pain in primary care: a controlled interrupted time series analysis" (PMEDICINE-D-20-05335R1) for consideration at PLOS Medicine.

Your paper was evaluated by a senior editor and discussed among all the editors here. It was also discussed with an academic editor with relevant expertise, and sent to independent reviewers, including a statistical reviewer (r#2). The reviews are appended at the bottom of this email and any accompanying reviewer attachments can be seen via the link below:

[LINK]

In light of these reviews, I am afraid that we will not be able to accept the manuscript for publication in the journal in its current form, but we would like to consider a revised version that addresses the reviewers' and editors' comments. Obviously we cannot make any decision about publication until we have seen the revised manuscript and your response, and we plan to seek re-review by one or more of the reviewers.

In revising the manuscript for further consideration, your revisions should address the specific points made by each reviewer and the editors. Please also check the guidelines for revised papers at http://journals.plos.org/plosmedicine/s/revising-your-manuscript for any that apply to your paper. In your rebuttal letter you should indicate your response to the reviewers' and editors' comments, the changes you have made in the manuscript, and include either an excerpt of the revised text or the location (eg: page and line number) where each change can be found. Please submit a clean version of the paper as the main article file; a version with changes marked should be uploaded as a marked up manuscript.

In addition, we request that you upload any figures associated with your paper as individual TIF or EPS files with 300dpi resolution at resubmission; please read our figure guidelines for more information on our requirements: http://journals.plos.org/plosmedicine/s/figures. While revising your submission, please upload your figure files to the PACE digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email us at PLOSMedicine@plos.org.

We expect to receive your revised manuscript by Feb 08 2021 11:59PM. Please email us (plosmedicine@plos.org) if you have any questions or concerns.

***Please note while forming your response, if your article is accepted, you may have the opportunity to make the peer review history publicly available. The record will include editor decision letters (with reviews) and your responses to reviewer comments. If eligible, we will contact you to opt in or out.***

We ask every co-author listed on the manuscript to fill in a contributing author statement, making sure to declare all competing interests. If any of the co-authors have not filled in the statement, we will remind them to do so when the paper is revised. If all statements are not completed in a timely fashion this could hold up the re-review process. If new competing interests are declared later in the revision process, this may also hold up the submission. Should there be a problem getting one of your co-authors to fill in a statement we will be in contact. YOU MUST NOT ADD OR REMOVE AUTHORS UNLESS YOU HAVE ALERTED THE EDITOR HANDLING THE MANUSCRIPT TO THE CHANGE AND THEY SPECIFICALLY HAVE AGREED TO IT. You can see our competing interests policy here: http://journals.plos.org/plosmedicine/s/competing-interests.

Please use the following link to submit the revised manuscript:

https://www.editorialmanager.com/pmedicine/

Your article can be found in the "Submissions Needing Revision" folder.

To enhance the reproducibility of your results, we recommend that you deposit your laboratory protocols in protocols.io, where a protocol can be assigned its own identifier (DOI) such that it can be cited independently in the future. For instructions see http://journals.plos.org/plosmedicine/s/submission-guidelines#loc-methods.

Please ensure that the paper adheres to the PLOS Data Availability Policy (see http://journals.plos.org/plosmedicine/s/data-availability), which requires that all data underlying the study's findings be provided in a repository or as Supporting Information. For data residing with a third party, authors are required to provide instructions with contact information for obtaining the data. PLOS journals do not allow statements supported by "data not shown" or "unpublished results." For such statements, authors must provide supporting data or cite public sources that include it.

We look forward to receiving your revised manuscript.

Sincerely,

Emma Veitch, PhD

PLOS Medicine

On behalf of Clare Stone, PhD, Acting Chief Editor,

PLOS Medicine

plosmedicine.org

-----------------------------------------------------------

Requests from the editors:

*In the last sentence of the Abstract Methods and Findings section, please include a note about any key limitation(s) of the study's methodology.

*In the abstract, effects and confidence intervals are currently a bit hard to read and could be reformatted so the reader can follow these better eg:

"The number of adults prescribed any opioid rose pre-intervention in both intervention and control practices BY 0.18 (95% CI; 0.11, 0.25) and 0.36 (95% CI; 0.27, 0.46) per 1,000 adults per month respectively. During the intervention period, prescribing fell in intervention practices (change -0.11; 95% CI -0.30, -0.08) and continued rising in control practices (change 0.54; 95% CI 0.29, 0.78) with a difference of -0.65 per 1000 patients (95% CI -0.96, -0.34)"

*At this stage, we ask that you include a short, non-technical Author Summary of your research to make findings accessible to a wide audience that includes both scientists and non-scientists. The Author Summary should immediately follow the Abstract in your revised manuscript. This text is subject to editorial change and should be distinct from the scientific abstract. Please see our author guidelines for more information: https://journals.plos.org/plosmedicine/s/revising-your-manuscript#loc-author-summary

*On page 9 of the paper, currently the authors note a sensitivity analysis that isn't included with the paper (below); the journal prefers to ensure that all claims made in the paper are backed up with analyses presented and there are no page/length limits; we'd recommend that the sensitivity analysis noted below is included, with supplementary materials if needed:

"Sensitivity analysis (not shown) explored and confirmed the robustness of the modelling approaches, based on the main outcome adjusted LMM"

*On page 12, the paper references some analyses on acceptability and value of the feedback intervention (below). If the PLOS Medicine paper is accepted then it won't be possible to include a reference to a manuscript under review, and the best outcome is if the acceptability/value paper is accepted for publication by this stage (and then it can be cited and included in the reference list as "in press" - or with a full citation/DOI). Please consider this when you come to resubmit the revision, it's also possible that the way this article is cited can be updated later, if the PLOS Medicine paper gets to the stage of provisional acceptance and materials are being prepared for publication. If the acceptability/value paper isn't accepted by this point, then the authors would need to be prepared to remove any reference/citation to it.

"Our process evaluation suggests the acceptability and perceived value of credible, tailored feedback targeting issues of emerging or established concern (manuscript under review)"

*Did your study have a prospective protocol or analysis plan? Please state this (either way) early in the Methods section.

a) If a prospective analysis plan (from your funding proposal, IRB or other ethics committee submission, study protocol, or other planning document written before analyzing the data) was used in designing the study, please include the relevant prospectively written document with your revised manuscript as a Supporting Information file to be published alongside your study, and cite it in the Methods section. A legend for this file should be included at the end of your manuscript.

b) If no such document exists, please make sure that the Methods section transparently describes when analyses were planned, and when/why any data-driven changes to analyses took place.

c) In either case, changes in the analysis-- including those made in response to peer review comments-- should be identified as such in the Methods section of the paper, with rationale.

-----------------------------------------------------------

Comments from the reviewers:

Reviewer #1:

There is an urgent need for evidence-based interventions to reduce inappropriate opioid prescribing at both population and individual patient level and this work certainly has potential to contribute to the former category. The supplementary information regarding the intervention characteristics (TIDiER checklist) and example feedback were very helpful. The overall finding of reduced opioid prescribing in practices receiving a fairly straightforward and low cost feedback intervention is very welcome. However, 3 years have elapsed since the intervention period, during which time there have been changes in prescriber awareness and behaviour that may impact the findings and I have some reservations about how the study is reported which are summarised below.

Introduction

Paragraph 1

Whilst it is true that the US has an opioid crisis - the scale of this may have peaked around 2016, based on CDC data from 2017-18. Reference 7 is particularly old & I suggest using that to highlight that in the last 15-20 years an opioid crisis developed - in the US & other western countries - before adding more up-to-date information from the US, UK and elsewhere that suggests the rise in prescribing may have peaked but hasn't fallen so much as may have been expected. I recommend highlighting other opioid-related harms - not just dependency, e.g. overdose & falls/fractures.

Paragraph 2

Line 61-62 - there is a lack of evidence for all kinds of interventions - why highlight only psychological?

I don't think ref 16 is particularly well summarised in line 62-63 - is the struggle with managing chronic pain & the lack of effective medicines rather than with opioids per se?

Line 65 - reference 17 seems to be wrong

Methods

Study design and setting

Line 78 - whilst accepting this is worded for an international audience, it seems odd not to mention General Practice or GP practices when discussing UK primary care.

I suspect controlled ITS is not a study design that many readers will be very familiar with, and I think a more detailed description of that, and the rationale for using it, would be more useful to most readers than some of the detail given about UK CCGs etc., which is to some extent repeated in the results section.

I'm aware that there are not such clear reporting guidelines for studies using ITS design as for RCTs - but I think there are areas where more detail is needed.

How did the authors decide on the sample size - i.e. number of practices included - and what is the justification for the substantial difference in numbers of intervention and control practices?

Discussion and conclusions

The study findings are reported as though they arise from an RCT rather than a quasi-experimental study design, and whilst the shortcomings of the design are acknowledged to some extent, the authors conclude that "Repeated evidence and theory informed comparative feedback reversed a rising trend of opioid prescribing in primary care" - but does this study design really permit such conclusions about causation?

Review by a methodologist and statistician is recommended - for a more expert opinion on these aspects than I can give.

-----------------------------------------------------------

Reviewer #2:

Thanks for the opportunity to review your manuscript. My role is as a statistical reviewer so my queries are focused on study design, data, and analysis.

This study uses a multi-level interrupted time series analysis with control sites to test the effect of an intervention intended to reduce prescription of opioid analgesics. I have put overall queries first, followed by questions related to a specific section of the manuscript with a page/line reference.

Was there any information available in the study from the practices about whether the intervention reports were used by physicians (or others in the practices?), and what they thought of the information (i.e. acceptability)?

P6. L86. Is the same database (electronic prescribing data) used throughout the different CCGs?

P6. L86. How were the areas/practices that received the intervention decided? Was this random allocation or deliberate? The limitations of the study mention the quasi-experimental design but not specifically whether there were systematic differences between the intervention and control areas and whether these could lead to differing temporal changes.

P6. L87. To clarify, these extra practices were in the Yorkshire + Humber region, but not in the West Yorkshire area, and not selected from the five CCGs that provide control data?

P6. L87. Is the data source for these extra practices the same as from the other sites (i.e. just a different means of data provision)?

P6.L87. Do patients always use the same practice? Is there any information on how many patients would move practices throughout the study period?

P6, L98. Were there other services available to patients in this area to replace opioid prescriptions? i.e. specialist pain management?

P8. L127. Is it possible to derive total morphine equivalent dose from the prescription data? Or is lowering any opioid prescribing the main aim of the intervention, not lowering the dose or substituting a less strong agent?
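[Editorial illustration, not part of the review: what the reviewer means by total morphine equivalent dose is a weighted sum of dispensed quantities. The sketch below is illustrative only; the drug list, conversion factors, and data layout are assumptions and are not drawn from the study's data or analysis.]

```python
# Illustrative sketch only: expressing dispensed opioid quantities as oral
# morphine equivalents (OME). Conversion factors are commonly cited
# approximations, chosen for illustration; they are not the study's values.
OME_FACTORS = {
    "codeine": 0.1,
    "tramadol": 0.1,
    "morphine": 1.0,
    "oxycodone": 1.5,
}

def total_ome_mg(prescriptions):
    """Sum oral morphine-equivalent milligrams over (drug, total_mg_dispensed) pairs."""
    return sum(OME_FACTORS[drug] * mg for drug, mg in prescriptions if drug in OME_FACTORS)

# Example: 30 x 30 mg codeine tablets plus 28 x 50 mg tramadol capsules
print(total_ome_mg([("codeine", 900), ("tramadol", 1400)]))  # 90 + 140 = 230 OME mg
```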

P8. L144. This is a reasonably complicated model and I think it would be beneficial to include this in an appendix as a formula. Was a 'shift' parameter at the beginning of the intervention period used or just change over time?
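[Editorial illustration, not part of the review: the "shift" parameter the reviewer refers to is the level-change term of a standard segmented regression. The single-group formulation below is a minimal sketch for orientation only; it is simpler than the authors' actual multilevel mixed-effects model, which is given in S4 Text.]

```latex
\[
Y_t = \beta_0 + \beta_1 t + \beta_2 X_t + \beta_3 (t - t_0) X_t + \varepsilon_t
\]
```

Here $t$ indexes months, $X_t$ is an indicator equal to 1 from the start of the intervention at time $t_0$, $\beta_2$ captures the immediate level change (the "shift"), and $\beta_3$ the change in slope relative to the pre-intervention trend $\beta_1$; a controlled analysis adds group terms and their interactions with these.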

P8. L164. For which outcomes was the Poisson model used, and which for the Neg Bin? How was the decision to use these made?

P11, L219. Were these stratified analyses done by restricting the patients in the analysis, or by including an interaction (e.g. age x control/int variable)? It isn't detailed in the methods how this was done. Was the effect in over 75s tested directly or is this a comparison of stratified analyses?

Figure 1. Are these the estimated rates from the mixed models? Is it possible to also show the crude rates as well as the estimates in each month of the study? I would also consider modifying the x-axis, to make this easier to read (e.g. maybe have each year in labels and ticks for each month/quarter?)

-----------------------------------------------------------

Reviewer #3:

This article, "The effects of an evidence and theory-informed feedback intervention on opioid prescribing for non-cancer pain in primary care: a controlled interrupted time series analysis", has significant merit in that it reports a large-scale, low-intensity intervention and its population-level impact. Because the focus was on population exposure to opioids, the primary outcome, appropriately chosen, was any adult prescribed an opioid. This is essentially how many people are exposed to opioids, and not necessarily any impact on people already receiving opioids, particularly those with risky opioid use.

By section:

Abstract- well written, clearly stated.

Introduction:

First paragraph-

Line 55- North America is experiencing an opioid crisis but rising opioid mortality is related more to heroin and synthetic opioids than prescription opioids. The prescription opioid crisis occurred in the 2000s-2010 after which heroin (2010) and then fentanyl (2014) took over as leading causes of opioid overdose death. Important not to promulgate misinformation about the contribution of prescription opioids to the current crisis.

Line 58- Opioids are likely to be no more effective than non-opioid pain medication (see Krebs SPACE trial published in JAMA in 2019), which doesn't mean they are of limited effectiveness. It is important to be clear that opioids do help some people but clearly have potential for great harm and don't help everyone.

Second paragraph;

Contrary to line 60's assertion, a number of approaches have been tested to decrease opioid prescribing. There is a literature on ECHO to reduce opioid prescribing, e.g. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6420488/; also, implementation of prescription drug monitoring programs (a vast literature showing decreases in population-based prescribing associated with PDMPs), clinical interventions to improve guideline-adherent prescribing using academic detailing and nurse care managers (https://pubmed.ncbi.nlm.nih.gov/28715535/, https://pubmed.ncbi.nlm.nih.gov/32697847/), and academic detailing alone (plentiful literature on this).

Methods:

Study Design:

It might be useful to define "lists" in this first paragraph since this is used later in the manuscript, particularly in the tables, and it would be useful to understand how a list relates to the description of the practices for those in North America not familiar with the system in the UK.

Intervention:

Line 93: It would be useful to name the evidence and theory that drive the intervention. The manuscript is diminished by lack of a published protocol that could have outlined this in detail.

Data Sources and Outcomes:

The primary outcome is appropriate for this intervention. It wasn't clear why the particular high-risk groups were targeted and what the goal was. For example, for patients prescribed anti-depressants and opioids: was the denominator people prescribed anti-depressants and the goal to decrease opioids, or was it to ensure that people prescribed opioids also be prescribed anti-depressants for concomitant depression and for analgesic impact? Similarly, the strong vs. weak opioid analysis wasn't clearly justified, although it is more intuitive. Was it based on morphine equivalent doses? If so, the reporting on the data might include that, since someone on a weak opioid might be prescribed high amounts and someone on a stronger opioid might be prescribed just a few tablets. In the US, hydrocodone is one of the most commonly prescribed opioids but this was not listed here. Perhaps that is due to a different name in the UK or that it isn't on the formulary. Lastly, excluding individuals with drug dependence may exclude people prescribed opioids and then later diagnosed with drug dependence as a result of the prescription.

The use of gabapentin and NSAID was useful to include as a comparator.

Data Analysis-

A statistician should comment on these analyses, but to a non-statistician, this looks complete and well detailed.

The discussion reports on cost estimates, but there is no description where those calculations come from.

See comment below on results/tables

Results:

The reporting of the results in the body of the manuscript seems clear but doesn't seem to match up with the values in the tables, which were a bit confusing. For example, the results in Table 3 showed two values - the beginning of the period and the end of the period. But in some cases, the numbers didn't make sense. For example, the number of adults prescribed opioids per 1,000 in the control group in 2016m3 was 68.2 but the next month, 2016m4, was 63.9. Why was there such a disparity month to month? The same with the figure - it looks like it drops precipitously within a one-month period. Why wasn't this displayed as a rolling average? It might not be right to display it this way, but in Larochelle, 2016, Annals of Internal Medicine, appendix figure 2, we showed a daily average before and after an index event, which was steadier over time, reflecting the reality of opioid prescribing. This article might do something similar using an 8-week rolling average of adults per 1,000 given opioids. The fact that the data is displayed this way makes me question whether all months of data were used, and if so, why weren't they incorporated into the tables and figures?
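[Editorial illustration, not part of the review: the rolling-average display the reviewer proposes amounts to smoothing the monthly rates before plotting. The sketch below is an assumption-laden illustration; the column names are hypothetical, and only the 68.2 and 63.9 values come from the reviewer's example.]

```python
import pandas as pd

# Hypothetical monthly rates (patients prescribed opioids per 1,000 adults);
# only 68.2 and 63.9 are taken from the reviewer's comment, the rest are invented.
monthly = pd.DataFrame({
    "month": pd.period_range("2016-01", periods=6, freq="M"),
    "per_1000": [67.5, 68.0, 68.2, 63.9, 64.5, 64.1],
})

# A two-month rolling mean approximates the 8-week window the reviewer suggests.
monthly["rolling_mean"] = monthly["per_1000"].rolling(window=2, min_periods=1).mean()
print(monthly)
```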

It wasn't clear why the authors did an uncontrolled Interrupted time series of the intervention practices alone and it doesn't augment any of the findings or discussion in the manuscript. This is superfluous data analysis.

Cost estimates should go in Results section along with details of how they were calculated prior to the discussion.

Discussion:

Paragraph 1- it would be useful to show the estimates of the population impact earlier, in the results

Paragraph 2- line 216-17 Since there was no analysis of MME, just the type of opioid, it isn't at all clear why there would need to be a gradual reduction to limit withdrawal symptoms. Most people on moderate doses would not experience withdrawal.

Paragraph 3- the reduction of non-opioid medication and referrals may have had positive financial impact, but it may be that practitioners are leaving patients without any treatment for pain. There was no evidence of self-management options in the data reported. Furthermore, it generally takes longer to work with patients on self-management than a prescription medication so it is hard to know what actually occurred. Best to leave out speculation and do some future studies examining what did happen for patients with pain.

Paragraph 5- see comment above about presenting cost data in the discussion without presenting first in the results. That being said, this has important implications for the intervention's impact on the population and health service.

Paragraph 10 (lines 273-276) The lack of information on what actually happened in the clinical interaction with the patient is one of the most important limitations of the study. The generalizability and coding errors were expected as part of the study but would be balanced by the large size and reach (generalizability) as well as balance between intervention and control (coding errors). What we don't know is how the patients fared with the intervention. The lack of referrals and non-opioid medications is not necessarily reassuring because it could indicate that the pain was not addressed at all.

Tables:

Table 1:

It would be important to define some of the headings in expanded footnotes below, including:

List

Patient Experience- what does median patient experience mean- what measure is used and what is the scale?

LTC- what are included in these?

QoF- What does % score mean?

IMD- What does % IMD indicate?

Table 2:

Baseline characteristics- are these a 12 month average? One month average? The time frame needs to be defined.

CCG should be defined in footnote of table

Table 3:

See critique above-

Also, this table is very busy with lots of detail. It would be useful to simplify, if possible. Perhaps show mean numbers for the pre-intervention, intervention and post-intervention periods rather than the multiple data points.

Table 4:

Similar critique as for Table 3- could simplify it

In summary, this is an important intervention study with powerful population-level results. It could use improvement in each of the main sections of the paper and in the display of data.

Jane Liebschutz, MD MPH

University of Pittsburgh

-----------------------------------------------------------

Any attachments provided with reviews can be seen via the following link:

[LINK]

Decision Letter 2

Raffaella Bosurgi

4 Mar 2021

Dear Dr. Alderson,

Thank you very much for re-submitting your manuscript "The effects of an evidence and theory-informed feedback intervention on opioid prescribing for non-cancer pain in primary care: a controlled interrupted time series analysis" (PMEDICINE-D-20-05335R2) for review by PLOS Medicine.

I have discussed the paper with my colleagues and the academic editor and it was also seen again by the below reviewers. I am pleased to say that provided the remaining editorial and production issues are dealt with we are planning to accept the paper for publication in the journal.

The remaining issues that need to be addressed are listed at the end of this email. Any accompanying reviewer attachments can be seen via the link below. Please take these into account before resubmitting your manuscript:

[LINK]

***Please note while forming your response, if your article is accepted, you may have the opportunity to make the peer review history publicly available. The record will include editor decision letters (with reviews) and your responses to reviewer comments. If eligible, we will contact you to opt in or out.***

In revising the manuscript for further consideration here, please ensure you address the specific points made by each reviewer and the editors. In your rebuttal letter you should indicate your response to the reviewers' and editors' comments and the changes you have made in the manuscript. Please submit a clean version of the paper as the main article file. A version with changes marked must also be uploaded as a marked up manuscript file.

Please also check the guidelines for revised papers at http://journals.plos.org/plosmedicine/s/revising-your-manuscript for any that apply to your paper. If you haven't already, we ask that you provide a short, non-technical Author Summary of your research to make findings accessible to a wide audience that includes both scientists and non-scientists. The Author Summary should immediately follow the Abstract in your revised manuscript. This text is subject to editorial change and should be distinct from the scientific abstract.

We expect to receive your revised manuscript within 1 week. Please email us (plosmedicine@plos.org) if you have any questions or concerns.

We ask every co-author listed on the manuscript to fill in a contributing author statement. If any of the co-authors have not filled in the statement, we will remind them to do so when the paper is revised. If all statements are not completed in a timely fashion this could hold up the re-review process. Should there be a problem getting one of your co-authors to fill in a statement we will be in contact. YOU MUST NOT ADD OR REMOVE AUTHORS UNLESS YOU HAVE ALERTED THE EDITOR HANDLING THE MANUSCRIPT TO THE CHANGE AND THEY SPECIFICALLY HAVE AGREED TO IT.

Please ensure that the paper adheres to the PLOS Data Availability Policy (see http://journals.plos.org/plosmedicine/s/data-availability), which requires that all data underlying the study's findings be provided in a repository or as Supporting Information. For data residing with a third party, authors are required to provide instructions with contact information for obtaining the data. PLOS journals do not allow statements supported by "data not shown" or "unpublished results." For such statements, authors must provide supporting data or cite public sources that include it.

Please review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript.

Please note, when your manuscript is accepted, an uncorrected proof of your manuscript will be published online ahead of the final version, unless you've already opted out via the online submission form. If, for any reason, you do not want an earlier version of your manuscript published online or are unsure if you have already indicated as such, please let the journal staff know immediately at plosmedicine@plos.org.

If you have any questions in the meantime, please contact me or the journal staff on plosmedicine@plos.org.  

We look forward to receiving the revised manuscript by Mar 11 2021 11:59PM.   

Sincerely,

Dr Raffaella Bosurgi,

Executive Editor

PLOS Medicine

plosmedicine.org

------------------------------------------------------------

Requests from Editors:

Comments from Reviewers:

Reviewer #1: The authors have taken on board reviewer feedback. As a result, the revised manuscript is much improved and reads well - thank you! I have no further suggestions.

Reviewer #2: Thanks for the opportunity to see your revised manuscript. Overall I think that the changes in the version resolve all of my queries from the first version I reviewed. The data sources section is clearer and the inclusion of the SAP was helpful for me as well.

The supplementary material was a useful addition. I can now follow the analysis with the equation for the main LMM model displayed. The Model fit table (S2) supports the use of the LMM and the results of the sensitivity analysis are similar during the pre and intervention periods. The post-intervention change over time seems to shift in the intervention areas, becoming positive whereas before it declined (although it's still lower than the control areas). The numbering of these tables may need to be adjusted as Table S1 follows Table S2 in the appendix.

I think the information you provided about how the study groups were formed (request from staff at practices that eventually received the intervention, and then surrounding area) should be presented in the methods.

This is a nice study - clearly lots of work went into this and it has resulted in a good manuscript.

Reviewer #3: Overall, this is a much improved manuscript. It reads more clearly and was very responsive to the reviewer comments. I was not familiar with CITS and it was helpful to get the explanations provided in the response to reviewers, particularly about interpretation of the tables.

There are a few things remaining that would be useful to clarify.

It was not clear to me until reading the response to the reviewers that this study took advantage of a natural experiment to examine the impact of a public health/health policy-driven intervention to address a rise in opioid prescribing in Leeds and Bradford by conducting a quasi-experimental analysis. This might be of interest to the readers, and also puts the entire project into context. While statisticians are likely to understand that a controlled interrupted time series is quasi-experimental, many readers who are interested in clinical interventions for opioid prescribing may not understand this. Thus, I suggest that it be clarified in the methods section that the intervention was implemented in targeted areas and the control sites were identified to match the region and other characteristics of the intervention site. And it might be useful to add the term quasi-experimental in the abstract to alert readers unfamiliar with controlled interrupted time series. In my mind, the quasi-experimental design is an important limitation, as these sites were aware on some level that they were receiving this intervention because of poor performance. While the analysis makes a convincing argument that the intervention was impactful, there may be other forces at play. This should be added to the limitations.

There is some confusion about the number of control practices. In the abstract, it says 130 control practices. In the Author Summary it says 187 practices. In the methods, on line 130, it says 187 practices provided control data, and on line 132 it says 134 practices were added as controls.

In my prior review, I may not have been clear about my question about anti-depressants. If I understand the response to reviewers, anti-depressants were used as a proxy for mental illness in addition to analysis of diagnoses of mental illness. It might be helpful to be explicit about that in the manuscript. Other studies of pain control have used anti-depressants as a specific intervention for chronic pain, so higher antidepressant prescribing may be interpreted as a therapeutic intervention, not a proxy for mental illness.

Tables: Table 1 is much improved - it would be useful to put the explanations below the table in the order that the columns appear in the table. Patient experience is currently at the bottom despite being the 5th column.

Minor typo- line 278 used the Pound rather than Dollar sign.

Any attachments provided with reviews can be seen via the following link:

[LINK]

Decision Letter 3

Beryne Odeny

3 Sep 2021

Dear Dr Alderson, 

On behalf of my colleagues and the Academic Editor, Dr. Zirui Song, I am pleased to inform you that we have agreed to publish your manuscript "The effects of an evidence and theory-informed feedback intervention on opioid prescribing for non-cancer pain in primary care: a controlled interrupted time series analysis" (PMEDICINE-D-20-05335R3) in PLOS Medicine.

Before your manuscript can be formally accepted you will need to complete some formatting changes, which you will receive in a follow up email. Please be aware that it may take several days for you to receive this email; during this time no action is required by you. Once you have received these formatting requests, please note that your manuscript will not be scheduled for publication until you have made the required changes.

In the meantime, please log into Editorial Manager at http://www.editorialmanager.com/pmedicine/, click the "Update My Information" link at the top of the page, and update your user information to ensure an efficient production process. 

PRESS

We frequently collaborate with press offices. If your institution or institutions have a press office, please notify them about your upcoming paper at this point, to enable them to help maximise its impact. If the press office is planning to promote your findings, we would be grateful if they could coordinate with medicinepress@plos.org. If you have not yet opted out of the early version process, we ask that you notify us immediately of any press plans so that we may do so on your behalf.

We also ask that you take this opportunity to read our Embargo Policy regarding the discussion, promotion and media coverage of work that is yet to be published by PLOS. As your manuscript is not yet published, it is bound by the conditions of our Embargo Policy. Please be aware that this policy is in place both to ensure that any press coverage of your article is fully substantiated and to provide a direct link between such coverage and the published work. For full details of our Embargo Policy, please visit http://www.plos.org/about/media-inquiries/embargo-policy/.

To enhance the reproducibility of your results, we recommend that you deposit your laboratory protocols in protocols.io, where a protocol can be assigned its own identifier (DOI) such that it can be cited independently in the future. Additionally, PLOS ONE offers an option to publish peer-reviewed clinical study protocols. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols

Thank you again for submitting to PLOS Medicine. We look forward to publishing your paper. 

Sincerely, 

Beryne Odeny 

Associate Editor 

PLOS Medicine

Associated Data

    This section collects any data citations, data availability statements, or supplementary materials included in this article.

    Supplementary Materials

    S1 Text. TIDieR checklist for the Campaign to Reduce Opioid Prescribing intervention, incorporating the reporting and design elements of audit and feedback intervention recommendations.

    (PDF)

    S2 Text. Sample practice report.

    The baselayer of the map used in this file is from https://commons.wikimedia.org/wiki/File:England_Clinical_Commissioning_Group_(CCG)_Map_(Labelled).svg.

    (PDF)

    S3 Text. Primary outcome search terms for The Phoenix Partnership SystmOne electronic health record system.

    (PDF)

    S4 Text. Multilevel linear mixed-effects model.

    (PDF)

    S5 Text. Sensitivity analysis.

    (PDF)

    S6 Text. Mean number of adults prescribed opioid per 1,000 adults: Multilevel linear model—electronic health record data and denominator, residuals ±2 removed.

    (PDF)

    S7 Text. Statistical analysis plan.

    (PDF)

    Attachment

    Submitted filename: Response to reviewers.docx

    Attachment

    Submitted filename: Response to Reviewers 2.docx

    Data Availability Statement

    Data cannot be shared publicly because of risk of patient identification where small numbers of patients per practice are included. Data are available from the University of Leeds School of Medicine Ethics Committee (contact via fmhuniethics@leeds.ac.uk) for researchers who meet the criteria for access to confidential data.

