Author manuscript; available in PMC: 2024 Dec 11.
Published in final edited form as: Healthc (Amst). 2024 Oct 4;12(4):100753. doi: 10.1016/j.hjdsi.2024.100753

Implementation and adaptation of clinical quality improvement opioid measures

Catherine Hersey a,*, Sarah Shoemaker-Hunt a, Michael Parchman b, Ellen Childs c, John Le d, Wesley Sargent d
PMCID: PMC11631668  NIHMSID: NIHMS2028559  PMID: 39368348

1. Introduction

We describe how nine health systems participating in an opioid quality improvement (QI) collaborative selected their QI measures and how they operationalized those measures. These examples illustrate approaches that other health systems interested in improving opioid prescribing may adopt. While prior work has identified system-level implementation factors,1,2 here we focus specifically on implementation of the clinical QI measures.

In 2022, approximately 82 percent of overdose deaths in the United States (US) involved an opioid, and 13 percent involved a prescription opioid.3 From 1999 to 2020, more than 263,000 people died in the US from prescription opioid-related overdoses.4 Of all patients who reported noncancer pain symptoms or received pain-related diagnoses between 2000 and 2010, approximately 20 percent received an opioid prescription.4 The prescribing rate declined after 2012 and in 2020 was at its lowest in 15 years: 43.3 prescriptions per 100 persons.5 However, the 2020 prescribing rate varied widely across the country, with almost 4 percent of US counties dispensing enough opioid prescriptions for every person to have one.5

In March 2016, the Centers for Disease Control and Prevention (CDC) released the Guideline for Prescribing Opioids for Chronic Pain (Guideline)6 to ensure patients have access to safer, more effective chronic pain treatment while reducing the risks associated with long-term opioid therapy, including opioid use disorder and overdose. In 2018, the CDC published the Quality Improvement and Care Coordination: Implementing the CDC Guideline for Prescribing Opioids for Chronic Pain.7 This resource included 16 opioid quality improvement (QI) measures that were created in collaboration with clinicians in the field and that align with the Guideline’s 12 recommendations. The process of developing the 16 opioid QI measures is described elsewhere.8 The QI measures were specified for electronic health record (EHR) data, providing an alternative to current measures that rely on claims and enrollment data. EHRs allow systems to integrate measure rates into dashboards, facilitating identification of and clinician engagement in specific QI activities. To encourage uptake of the Guideline and the QI measures, the authors solicited health system participation in an Opioid QI Collaborative (Collaborative).

In addition to supporting systems in their implementation of the Guideline, the Collaborative aimed to understand the systems’ approaches to and experiences with implementing the QI measures. Because systems varied widely in their context, available technical resources, and EHR systems, the Collaborative allowed participating systems to operationalize the measures in ways that fit their needs.9 The 2022 CDC Clinical Practice Guideline for Prescribing Opioids for Pain (2022 Clinical Practice Guideline) updates and replaces the 2016 CDC Guideline for Prescribing Opioids for Chronic Pain.10

2. Materials and methods

2.1. Selection and description of participating health systems

We identified and recruited nine health systems in two cohorts. The first cohort began in 2018. After receiving additional funding, we recruited the second cohort, which began in 2019. We identified potential systems through recommendations of state stakeholders addressing opioid overdoses, the team’s professional networks, and cold outreach to individuals from health systems across the country. We conducted telephone readiness assessments to determine the interest and capability of candidate systems. The assessments addressed the extent of leadership buy-in, available organizational resources to support participation, presence of an identified champion, and major changes expected in the system, such as transferring to a new EHR system.

Participating systems varied in size, geographic location, and number of participating clinics, as illustrated in Table 1.

Table 1.

Health systems in the CDC opioid quality improvement collaborative (n = 9).

System | Region | Participating Clinics (n) | Approximate Participating Primary Care Provider Full-Time Equivalents (n) | EHR Vendor
Cohort 1
System A | Northeast | 23 | 300 | Epic
System B | Northeast | 4 | 74 | Epic
System C | West | 8 | Not Reported | Centricity
System D | Midwest | 15 | 136 | Centricity
Cohort 2
System A | South | 2 | 34 | Centricity
System B | South | 4 | 30 | Epic
System C | South | 3 | 40 | Epic
System D | West | 4 | 1400 | Epic
System E | Northeast | 4 | 200 | Epic

Provides descriptive information on participating health systems.

One participating system was not formally a health system but a nursing school with primary care clinics. Four systems were part of academic medical centers. Each system pursued at least five of the opioid QI measures, participated in monthly telephone calls with Collaborative staff, and submitted measure-related data. The systems received various forms of technical assistance and opportunities for cross-system collaboration, including a QI website, clinical webinars, and, occasionally, direct support and guidance. The systems also received a small amount of funding to help offset some of the costs.

2.2. Data collection

Cohort 1 provided measure information in a format of their choosing, which typically included a narrative description of measure approaches supplemented by charts or graphs with measure data. For Cohort 2, we implemented a more standardized data collection approach, whereby systems reported comparable information in a structured, secure, online form we provided. We used monthly Collaborative meeting notes to supplement measure data where needed for clarity. We did not collect any patient identifiers or protected health information.

We reviewed all reported measure data and entered it into a structured Excel database. This database reflected: 1) information on which measures systems selected and why; 2) measure definitions, including measure numerators, denominators, exclusions, and contextual information with respect to how they were operationalized; and 3) time periods, including measurement and relevant lookback periods. All systems reported between one and two years of data with the start date varying among systems. Two authors independently reviewed the data and resolved any differences, with the principal investigator also reviewing the findings for accuracy.

Abt’s Institutional Review Board reviewed and approved the data collection strategies for this study.

3. Results

3.1. Measure selection

Most health systems prioritized long-term opioid therapy (LTOT) measures over new opioid prescription measures. Among systems reporting new opioid prescription measures, most sought to decrease the days’ supply for new opioid prescriptions to 3 days or less (n = 4 systems). The most common LTOT measures aimed to decrease rates of co-prescribed benzodiazepines (n = 7 systems); increase rates of annual urine drug testing (UDT) (n = 5 systems); decrease the number of patients on high daily dosages, defined as ≥ 50 or ≥ 90 morphine milligram equivalents (MMEs) per day (n = 6 systems); increase quarterly prescription drug monitoring program (PDMP) checks (n = 5 systems); and increase the rate of quarterly follow-up visits (n = 5 systems). See Table 2 for the number of systems that selected each measure.

Table 2.

CDC opioid quality improvement measures and number of systems reporting (n = 9).

Measure | # of systems reporting
New Opioid Prescription Measures
The percentage of patients with a new opioid prescription for acute pain for a 3 days’ supply or less | 4
The percentage of patients with a new opioid prescription for chronic pain with documentation that a PDMP was checked prior to prescribing | 3
The percentage of patients with a new opioid prescription for chronic pain with documentation that a urine drug test was performed prior to prescribing | 3
The percentage of patients with a new opioid prescription for an immediate-release opioid | 1
The percentage of patients with a follow-up visit within 4 weeks of starting an opioid for chronic pain | 0
Long-Term Opioid Therapy Measures
The percentage of patients on long-term opioid therapy who received a prescription for a benzodiazepine | 7
The percentage of patients on long-term opioid therapy with documentation that a urine drug test was performed at least annually | 5
The percentage of patients on long-term opioid therapy who had documentation that a PDMP was checked at least quarterly | 5
The percentage of patients on long-term opioid therapy who had a follow-up visit at least quarterly | 5
The percentage of patients on long-term opioid therapy who are taking 50 MMEs or more per day | 4 (a)
The percentage of patients on long-term opioid therapy who are taking 90 MMEs or more per day | 4 (a)
The percentage of patients on long-term opioid therapy who had at least quarterly pain and functional assessments | 3
The percentage of patients on long-term opioid therapy who were counseled on the purpose and use of naloxone, and either prescribed or referred to obtain naloxone | 2
The percentage of patients with chronic pain who had at least one referral or visit to nonpharmacological therapy as a treatment for pain | 1
The percentage of patients on long-term opioid therapy whom the clinician counseled on the risks and benefits of opioids at least annually | 1
The percentage of patients with an opioid use disorder (OUD) who were referred to or prescribed medication-assisted treatment | 0

Describes the measures used in the quality improvement initiative and the number of health systems reporting each.

(a) Two systems selected both MME measures.

The new prescription and LTOT measures both reflect UDT and PDMP checks. Three of the five systems reporting PDMP checks for patients on LTOT also selected the new prescription PDMP measure; only one system reporting UDT for patients on LTOT also reported the new prescription UDT measure.

Few systems explicitly stated why they selected certain measures, but some noted choosing measures that would be easiest to implement. Six systems provided reasons for not selecting specific measures. Reasons included 1) not yet being prepared to produce the measure, 2) an inability to collect measure data accurately or in a structured way, and 3) confidence that their system was already sufficiently adhering to the recommended practice. Some systems’ efforts in a particular area were in nascent stages and their administrators reported not feeling ready to begin a related QI effort.

3.2. Operationalizing measure components

Each measure has specific components in the numerator, denominator, or exclusion criteria, and each system had to determine how it would capture those data. Systems varied in their approaches to constructing measures, often opting for approaches that were more feasible to build into their existing EHR systems than the suggested measure specifications. Here we present approaches to the most common measure elements, including calculating days’ supply, identifying acute and chronic pain, and identifying exclusions; approaches to capturing structured data in the EHR; and other notable approaches for operationalizing less commonly selected measures. Tables 3 and 4 present summaries of how systems defined denominators and exclusions and how they operationalized key measure components, respectively.

Table 3.

System-defined denominators and exclusions (n = 9).

New Opioid Prescription Denominators (a) | Systems (n)
Patients prescribed an opioid who had no opioid prescription in the previous 45 days (CDC suggested) | 3
Patients prescribed an opioid for chronic pain who had no opioid prescriptions in the prior 45 days | 1
Patients prescribed an opioid for acute pain who had no opioid prescriptions in the prior 45 days | 1
Patients prescribed an immediate-release opioid for acute pain (b) | 1
Patients prescribed an opioid who have payers that provide claims data | 1
Patients prescribed an opioid who are not on long-term opioid therapy or diagnosed with opioid use disorder | 1
Long-Term Opioid Therapy Denominators (c)
Patients with ≥ 60-day supply of opioids within a quarter (CDC suggested) | 3
Patients with ≥ 60-day supply of opioids within a quarter or prescribed 70 pills or more | 1
Patients with a medication safety agreement | 1
Patients with a medication safety agreement and prescribed 70 pills or more | 1
Patients with > 4 opioid prescriptions in a six-month period | 1
Patients on opioids for ≥ 3 months | 1
Patients with a prescription in ≥ 2 of the past 3 months | 2
Exclusions (d)
Active cancer, palliative and end-of-life care (defined by ICD-10 codes) (CDC suggested) | 6
None | 2
Patients with opioid use disorder | 1

Summaries of how each health system operationalized measure denominators and exclusions.

(a) One system uses slightly different denominators for the acute and chronic new opioid prescription measures and is represented here twice. Three systems did not select any new prescription measures.

(b) One system restricted their Measure 5 denominator to those patients prescribed an immediate-release opioid.

(c) One system did not report their LTOT denominator.

(d) Three systems did not report exclusions. One system applied no exclusions to its LTOT population while applying the cancer, palliative, and end-of-life care and opioid use disorder exclusions to its new opioid prescription population. This system is represented in each category.

Table 4.

Operationalization of key measure components.

Key Measure Components Operationalization
Calculating days’ supply
  • Calculate days between prescriptions using prescription start date

  • Calculate the maximum dose a patient could take on an hourly or daily basis with prescription data in the electronic health record (EHR)

  • Manually code dose and frequency for common prescriptions to calculate a maximum daily dose

  • Use total days’ supply documented in EHR prescription instructions or medication order

  • Calculate number of pills dispensed

Determining acute or chronic pain
  • Use acute pain diagnosis codes

  • Use chronic pain diagnosis codes

  • System-specific EHR intake form indicating acute or chronic pain

  • Include patients with either acute or chronic pain

Identifying exclusions
  • Use cancer registry

  • Pull cancer diagnosis from EHR/problem list

  • Use hospice discharge dispositions or limited life/comfort care modifiers

Identifying a Urine Drug Test (UDT)
  • Develop list of drug screens and implement UDT prompt

  • Pull most recent UDT into progress note using EHR functionality

  • Specify a UDT every 3 months

  • Specify a UDT annually, or ≤ 12 months prior to opioid prescription

  • UDT documented ≤ 7 days prior to medication order start date

Identifying patient referrals
  • Use existing patient referral categories (which may not be limited to opioid status)

Identifying naloxone prescriptions
  • Implement an EHR checkbox and manually review records for accuracy

  • Patients with either a prescription for naloxone or an order for a naloxone kit

  • Use date of last naloxone kit issued

  • Include patients who were either offered or prescribed naloxone

  • Focus on patients on daily dosages of ≥ 50 MMEs in denominator

  • Use EHR functionality to prompt naloxone prescription

  • Collect prescription and counseling as separate data elements

Calculating Morphine Milligram Equivalents (MMEs)
  • Use CDC or external MME calculator

  • Use EHR calculated MME

  • Sum MME across prescriptions using medication start date

  • Use average MME over the past 90 days

Identifying overlapping benzodiazepine prescriptions
  • Identify benzodiazepines using CDC list or EHR groupings

  • Define overlap as benzodiazepines prescribed for 3 contiguous months during the same 6-month lookback used to determine LTOT

  • Focus on benzodiazepines prescribed by the same department that prescribed the opioid

  • Define overlap as at least one concurrent prescription in the quarter

  • Define overlap as > 1 day of overlap within a quarter

Identifying a follow-up visit
  • At least one follow-up visit with the opioid-prescribing clinician (for any reason)

  • A follow-up visit for pain (with any provider)

  • Include office and telehealth visits

  • A follow-up visit for chronic pain

  • Any follow-up visit within 90 days of the first medication order start date in the quarter

  • Any medical home visit

Identifying pain and functional assessments
  • Use of the pain, enjoyment, and general quality of life (PEG) assessment

  • Document assessment in progress note and identify with manual chart review

  • Build PEG assessment into EHR flowsheet

  • Use EHR functionality to make “documentation of functional goals” pull into the progress note and prompt the provider to document

  • Make functional goals part of patient instructions

  • Measure pain and function separately

Identifying a prescription drug monitoring program (PDMP) check
  • Identify a PDMP check through manual chart review

  • Link to PDMP from EHR/User interface function with direct access to PDMP

  • Implement EHR form/checkbox

  • Use EHR functionality to display PDMP check in visit summary

  • PDMP check < 7 days prior to the medication order start date

  • Manually confirm PDMP check in records containing “controlled substance database”

  • Prompt provider documentation with EHR

Risks and Benefit Counseling
  • Include in controlled substance agreement and review charts to identify

Summaries of how each health system operationalized specific parts of each measure.
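To make the “Calculating Morphine Milligram Equivalents” row of Table 4 concrete, the sketch below sums daily MME across a patient’s concurrent prescriptions. The conversion factors follow the CDC’s published oral MME reference values, but the data shape and field names are illustrative assumptions, not any participating system’s actual implementation.

```python
# Illustrative sketch only: summing MME across concurrent prescriptions.
# Conversion factors are the CDC's oral MME reference values; the
# prescription dict layout is a hypothetical example.
CONVERSION_FACTORS = {
    "morphine": 1.0,
    "oxycodone": 1.5,
    "hydrocodone": 1.0,
    "hydromorphone": 4.0,
    "codeine": 0.15,
}

def daily_mme(prescriptions):
    """Sum daily MME across all active prescriptions for one patient.

    Each prescription is a dict with 'drug', 'strength_mg' (per dose),
    and 'doses_per_day' (maximum daily frequency).
    """
    total = 0.0
    for rx in prescriptions:
        factor = CONVERSION_FACTORS[rx["drug"]]
        total += rx["strength_mg"] * rx["doses_per_day"] * factor
    return total

# Example: oxycodone 10 mg up to 4x/day -> 60.0 MME/day
rx_list = [{"drug": "oxycodone", "strength_mg": 10, "doses_per_day": 4}]
```

A system using its EHR’s built-in MME field (another row in Table 4) would skip this calculation entirely and read the precomputed value.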

3.3. Days’ supply

Of the seven systems that reported new opioid prescription measures, only three used a denominator consistent with the resource guidance. Similarly, with LTOT measures, only three of the nine systems defined their LTOT population using the resource guidance—a 60 days’ supply of opioids within a quarter.

Variation in how EHR systems capture prescribing instructions such as dosage and frequency complicated systems’ ability to identify days’ supply or daily dosages. Even among systems that used the same EHR, some captured prescribing information as free text and others as structured data. One system’s EHR captured only quantity, so they manually coded the most common dosages and frequencies, determined the maximum number of doses a patient could take on either an hourly or daily basis, and used that information to determine total days’ supply. Another system implemented something similar, using the total days’ supply documented in either the EHR fields or the medication order and calculating a maximum dosage. Alternatively, one system used only quantity information, defining a 3 days’ supply as a quantity of 18 pills and, to capture what may have been dispensed outside of the practice, implementing a manual search for quantities of six that an emergency department may have dispensed.
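The manual-coding approach described above can be sketched as follows: map a few common free-text sigs to a maximum daily frequency, then impute days’ supply from the dispensed quantity. The sig strings and the mapping are hypothetical; a system would code its own most common prescribing instructions.

```python
# Hypothetical sketch of manually coding common sigs (prescribing
# instructions) to impute days' supply from pill quantity.
MAX_DOSES_PER_DAY = {
    "1 tab every 4-6 hours as needed": 6,
    "1 tab every 6 hours as needed": 4,
    "1 tab twice daily": 2,
}

def impute_days_supply(quantity, sig):
    """Return imputed days' supply, or None if the sig was not coded
    (an uncoded sig would fall back to manual chart review)."""
    doses_per_day = MAX_DOSES_PER_DAY.get(sig)
    if doses_per_day is None:
        return None
    return quantity / doses_per_day

# 18 pills at up to 6 doses/day -> 3.0 days' supply, matching the
# quantity-of-18 proxy for a 3 days' supply described above
```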

Several systems forwent calculating or imputing days’ supply and developed an alternative definition for LTOT. Six systems used the number of opioid prescriptions within a certain number of months, and two used the number of pills dispensed during the period. Three systems also used the presence of a treatment agreement to identify additional LTOT patients.
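One of the prescription-count definitions of LTOT from Table 3 (“a prescription in ≥ 2 of the past 3 months”) can be sketched as below. The function name and parameters are illustrative, not a Collaborative specification.

```python
# Illustrative sketch: flag LTOT from prescription dates alone,
# avoiding any days'-supply calculation.
from datetime import date

def is_ltot(rx_dates, as_of, months_required=2, lookback_months=3):
    """True if the patient had an opioid prescription in at least
    `months_required` of the past `lookback_months` calendar months."""
    # Build the set of (year, month) pairs in the lookback window
    window = set()
    y, m = as_of.year, as_of.month
    for _ in range(lookback_months):
        window.add((y, m))
        m -= 1
        if m == 0:
            y, m = y - 1, 12
    rx_months = {(d.year, d.month) for d in rx_dates}
    return len(rx_months & window) >= months_required

# Prescriptions in January and February, assessed at the end of March:
# counts as LTOT under this definition
```

A pills-dispensed threshold (e.g., the 70-pill criterion some systems added) would replace the month count with a sum over dispensed quantities.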

3.4. Acute and chronic pain

The new opioid prescription measures pertain to patients with either acute pain or chronic pain. Although all systems selected LTOT measures to report, only seven of the nine systems selected new prescription measures, primarily because of difficulties in identifying patients with acute versus chronic pain. Five systems reported an acute pain measure, and they employed a variety of alternative definitions to work around those difficulties. For example, while one used acute pain diagnosis codes, two systems did not restrict their applicable denominators to these codes: one did not consider its list of acute pain codes exhaustive, and the other was unable to capture the codes as structured data.

Identifying new prescriptions for chronic pain was even more challenging. One system built a customized EHR intake form where clinicians documented whether pain was acute or chronic, in addition to other information supporting their opioid QI efforts. Other systems simply chose not to distinguish between acute and chronic pain for their new prescription measures. For example, one system defined newly prescribed patients as those prescribed an opioid but neither on LTOT nor diagnosed with opioid use disorder. See Table 4 for a summary of how each system operationalized identification of patients with acute or chronic pain.

3.5. Identifying exclusions

With respect to denominator exclusions, six systems aligned with the CDC Guideline by excluding patients with cancer or receiving palliative or end-of-life care. Of note, one system did not explicitly exclude these patients because their EHR did not always accurately identify them and to do so would require a manual chart review. One system also excluded patients with opioid use disorder in addition to the cancer, palliative care, and end-of-life exclusions from their new opioid prescription measures but applied no exclusions to their LTOT population. Table 3 summarizes the exclusions participating health systems used.
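The CDC-suggested exclusion is defined by ICD-10 codes (Table 3). As an illustration of that diagnosis-code approach, the sketch below flags patients whose problem list contains a malignant-neoplasm code (C00–C96) or an encounter for palliative care (Z51.5). These prefixes are examples only; a participating system would use the CDC-suggested value set plus its own palliative and end-of-life codes.

```python
# Illustrative exclusion filter over problem-list ICD-10-CM codes.
# Prefixes are examples: C00-C96 malignant neoplasms, Z51.5 encounter
# for palliative care. Not a complete value set.
EXCLUSION_PREFIXES = tuple(f"C{i:02d}" for i in range(97)) + ("Z51.5",)

def is_excluded(problem_list_codes):
    """True if any diagnosis code matches a cancer/palliative prefix."""
    return any(code.startswith(EXCLUSION_PREFIXES)
               for code in problem_list_codes)
```

Where the problem list is unreliable, as one system reported, this filter would be supplemented or replaced by a cancer registry lookup or manual chart review (Table 4).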

3.6. Capturing structured data

Several health systems used EHR prompts to facilitate data entry, or built or used EHR functionality to convert notes into structured data. One system configured their EHR to allow clinicians to indicate dosages for outside prescriptions. Alerts that systems used included prompts for naloxone prescriptions, documentation of functional assessments, documentation of PDMP checks, and ordering of an appropriate UDT. Systems also implemented checkboxes to capture structured data, such as PDMP checks, state-issued naloxone kits, and naloxone counseling, but these were often inconsistently completed at the point of care, making them unreliable. Despite the use of an EHR, not all data could be structured: as just one example, two systems used manual chart reviews to determine whether the PDMP was checked.

3.7. Other approaches

Because Collaborative systems each reported different measures, there was a multitude of measure components to operationalize. Two systems separated measures into distinct parts. One system captured the naloxone prescription and counseling components of the “patients on LTOT who were counseled on the purpose and use of naloxone, and either prescribed or referred to obtain naloxone” measure separately, and another reported the pain and functional assessments of the “patients on LTOT who had at least quarterly pain and functional assessments” measure separately. One system that reported the naloxone prescribing and counseling measure restricted the denominator to patients taking a daily dosage of more than 50 MMEs, thereby focusing on the patients most likely to benefit. Similarly, with respect to patients receiving a new opioid prescription with a 3 days’ supply or less, one system restricted the denominator specifically to patients newly prescribed an immediate-release opioid.

Several systems used treatment agreements. As discussed earlier, in some cases these identified their patients on LTOT. One system also used treatment agreements to guide discussions of the risks and benefits of opioids; the presence of a signed treatment agreement thus served as a proxy for annual risks and benefits counseling.

4. Discussion

Of the 16 opioid QI measures, Collaborative systems focused on LTOT measures, particularly those directly connected with mitigating the risk of overdose: decreasing co-prescribed benzodiazepines and opioids, decreasing daily dosages, and increasing quarterly PDMP checks. The most common challenges were defining days’ supply, differentiating between acute and chronic pain, and an inability to capture structured data within an EHR. In general, these health systems decided whether precision or ease was more important to their organization and often approximated their patient populations using data they could obtain more easily.

Even when opting for ease, systems often had to supplement their EHR data with manual chart review. Previous studies have underscored the importance of consistently capturing structured data in the EHR as an underpinning to any QI initiative and enumerated challenges in configuring an EHR to facilitate quality measurement in the primary care setting.11–13 Our findings highlight these challenges in an opioid QI initiative. Systems were often unable to capture relevant measure data accurately or as structured data. They commonly reported creating “homegrown” lists and forms or defining new EHR functions to support data collection and QI efforts. EHR checkboxes and functions that convert text to structured data were often insufficient or subject to inconsistent documentation by clinicians.

Opting for ease could sacrifice precision. Using easily available data can overestimate the denominator, which may result in underestimating measure rates. Use of inconsistently completed checkboxes or relying on information that is not widely available at the point of care could result in underestimated numerators and understated measure rates. This can be frustrating for clinicians, whose “buy-in” is needed for QI initiatives. Regardless of rates being over- or under-estimated, a consistently defined measure can support QI efforts by showing relative changes over time. Indeed, most systems showed improvements in at least one measure over the course of the Collaborative.14
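The direction of these errors can be seen with a toy calculation (all numbers invented): an over-broad denominator and a leaky numerator both depress the measured rate, yet quarter-over-quarter change remains visible when the same definition is reused.

```python
# All figures below are invented for illustration.
true_num, true_den = 40, 100      # true annual UDT rate: 40%
loose_den = true_den + 25         # over-broad denominator sweeps in extra patients
missed_num = true_num - 10        # inconsistent checkboxes miss some UDTs

measured_q1 = missed_num / loose_den          # 30 / 125 = 0.24, understated
measured_q2 = (missed_num + 6) / loose_den    # 36 / 125 = 0.288

# The absolute rate is wrong in both quarters, but the improvement is
# still detectable because the definition was applied consistently.
improved = measured_q2 > measured_q1
```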

These approaches to measure operationalization represent an ongoing challenge to constructing meaningful measure sets for this population. The National Quality Forum, an organization that formally endorses measures for quality measurement and improvement, recently released two relevant reports of technical expert panel findings. One reflected efforts to describe and address challenges related to implementing EHR-based quality measures, including challenges discussed here, such as unstructured data and EHR limitations. The other identified priorities in opioid and opioid use disorder quality measurement, naming opioid tapering strategies and patient-centered pain management, both of which are reflected in the measures discussed here and in other work, as gaps that should be prioritized.15–17 This reinforces the widespread support for engaging systems in this important public health crisis and the acknowledgement that EHR data are, at present, insufficient.

Systems taking part in the Collaborative found pragmatic approaches to understanding their opioid prescribing, began to use data to measure care processes, and ultimately looked toward improving the care of patients with chronic pain who are prescribed opioids. Systems had to reflect on what was important to them and what they were capable of, and then determine how to extract the data they needed from their systems. Examples include: relying on the number of pills dispensed or the number of opioid prescriptions over several months instead of a specific number of days; separating multi-part measures or focusing certain measures on patients most likely to benefit, such as those on higher daily dosages; and leveraging treatment agreements to identify long-term opioid therapy patients or facilitate risk/benefit counseling.

If the Collaborative had required measures be precisely operationalized, rather than used as guidance, these systems may have struggled to make any progress. For example, the five systems that were unable to determine days’ supply were allowed the flexibility to alternatively define new opioid prescriptions, giving them more time to focus on improvement rather than measurement. Supporting health systems with opioid QI measures that they can apply in ways they find most feasible or meaningful might be needed for systems to make progress in their opioid QI efforts.

4.1. Limitations

Our data reflect a small sample of differing systems that volunteered to participate and were selected based on their existing ability to launch a QI initiative. Many had previous QI experience. As such, our results may not apply to health systems more generally. Further, each measure had data from, on average, three of the nine systems. Three measures had data from only one system. We only had contact with systems over the course of the Collaborative, and we do not know whether systems sustained their approaches beyond that period, but there is no evidence that systems considered their approaches limited or short-term. Further, over the course of the Collaborative, health systems developed skills that promote ongoing quality improvement, such as identifying areas for improvement, understanding and using data, and tracking performance over time.18 Though not the goal of our study, the differing approaches of each system preclude summarizing or comparing across systems. However, each system can use such a tailored approach to evaluate its own progress. Whereas prior work has focused on a single health system or data that may not be widely available at the point of care,2,19 our results provide clear and relevant examples of how to define and collect data on opioid prescribing in different primary care settings.

Additionally, though measurement is a key tool in any QI initiative, it is often not sufficient. Childs and colleagues detail the contextual factors that influenced health systems’ implementation of their opioid prescribing QI efforts more broadly, such as staff engagement, QI experience, and state opioid prescribing laws.1

5. Conclusions

Our results provide specific, practical approaches to operationalizing measures that monitor opioid prescribing practices. This is a critical first step for systems looking to measure or improve their opioid prescribing practices and, based on our results, requires a pragmatic approach that reflects health system priorities, resources, and capabilities. Using a flexible measurement approach driven by system-specific needs and priorities in other key areas of public health concern may be valuable when engaging health systems in quality improvement efforts. A better understanding of whether the Collaborative’s investments in quality measurement resulted in improved outcomes can provide additional insights into system-specific QI initiatives.

Acknowledgements

Contributors:

We are grateful for the work done by staff at all participating health systems, Abt Global, and the CDC.

Sources of support

This work was funded by the Centers for Disease Control and Prevention [Contract No. 200-2016-F-92356 and 200-2018-F-03382].

Footnotes

Disclaimers

The content, findings, and conclusions of this paper do not necessarily reflect the views, policies, or official positions of the US Department of Health and Human Services, the Centers for Disease Control and Prevention, or Abt Global, nor does the mention of trade names, commercial products, or organizations imply endorsement by the U.S. Government.

CRediT authorship contribution statement

Catherine Hersey: Writing – review & editing, Writing – original draft, Methodology, Investigation, Formal analysis, Data curation, Conceptualization. Sarah Shoemaker-Hunt: Writing – review & editing, Supervision, Conceptualization. Michael Parchman: Writing – original draft, Conceptualization. Ellen Childs: Writing – review & editing, Writing – original draft. John Le: Writing – review & editing. Wesley Sargent: Writing – review & editing.

Declaration of competing interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Data availability

Data will be made available on request.

References
