Author manuscript; available in PMC: 2016 Dec 28.
Published in final edited form as: Obs Stud. 2016 Feb 1;2:24–38.

Electronic Health Records to Evaluate and Account for Non-response Bias: A Survey of Patients Using Chronic Opioid Therapy

Susan M Shortreed 1, Michael Von Korff 2, Stephen Thielke 3, Linda LeResche 4, Kathleen Saunders 5, Dori Rosenberg 6, Judith A Turner 7
PMCID: PMC5193131  NIHMSID: NIHMS811035  PMID: 28042621

Abstract

Background

In observational studies concerning drug use and misuse, persons misusing drugs may be less likely to respond to surveys. However, little is known about differences in drug use and drug misuse risk factors between survey respondents and nonrespondents.

Methods

Using electronic health record (EHR) data, we compared respondents and non-respondents in a telephone survey of middle-aged and older chronic opioid therapy patients to assess predictors of interview nonresponse. We compared general patient characteristics, specific opioid misuse risk factors, and patterns of opioid use associated with increased risk of opioid misuse. Inverse probability weights were calculated to account for nonresponse bias by EHR-measured covariates. EHR-measured covariate distributions for the full sample (nonrespondents and respondents), the unweighted respondent sample, and the inverse probability weighted respondent sample are reported. We present weighted and unweighted prevalence of self-reported opioid misuse risk factors.

Results

Among 2489 potentially eligible patients, 1477 (59.3%) completed interviews. Response rates differed by age (45–54 years, 51.8%; 55–64 years, 58.7%; 65–74 years, 67.9%; and 75 years or older, 59.9%). Tobacco users had lower response rates than nonusers (53.5% versus 60.9%). Charlson comorbidity score was also related to response rates: individuals with a Charlson score of 2 had the highest response rate (65.6%), whereas response rates were lower among patients with the lowest Charlson scores (i.e., the fewest health conditions; 56.7–60.0%) and the highest scores (the most health conditions; 52.2–56.0%). These bivariate relationships persisted in adjusted multivariable logistic regression models predicting survey response. Response rates of persons with and without specific opioid misuse risk factors were similar (e.g., 58.7% for persons with substance abuse diagnoses, 59.4% for those without). Opioid use patterns associated with opioid misuse did not predict response rates (e.g., 60.6% versus 59.2% for those receiving versus not receiving opioids from 3 or more physicians outside their primary care clinic). Very few patient characteristics predicted nonresponse; thus, inverse probability weights accounting for nonresponse had little impact on the distributions of EHR-measured covariates or self-reported measures related to opioid use and misuse.

Conclusions

Response rates differed by characteristics that predict nonresponse in general health surveys (age, tobacco use), but did not appear to differ by specific patient or drug use risk factors for prescription opioid misuse among middle- and older-aged chronic opioid therapy patients. When observational studies are conducted in health plan populations, electronic health records may be used to evaluate nonresponse bias and to adjust for variables predicting interview nonresponse, complementing other research uses of EHR data in observational studies.

Keywords: Inverse probability weights, missing data, electronic medical records

1. Introduction

A National Research Council (NRC) report observed that for more than two decades, response rates in observational studies, including large surveys such as the National Health Interview Survey, have been declining (National Research Council, 2013). The NRC report concluded that, “Current trends in nonresponse, if not arrested, threaten to undermine the potential of household surveys to elicit information that assists in understanding social and economic issues. The trends also threaten to weaken the validity of inferences drawn from estimates based on those surveys. High nonresponse rates create the potential or risk for bias in estimates and affect survey design, data collection, estimation, and analysis.” (National Research Council, 2013, p. 1). The NRC report called for “research on the relationship between nonresponse rates and nonresponse bias and on the variables that determine when such a relationship is likely.” (National Research Council, 2013, p. 4)

Nonresponse bias is of particular concern in observational studies pertaining to drug use and abuse. Gfroerer et al. (1997) observed, “Drug abuse surveys are particularly vulnerable to nonresponse and measurement error because of the difficulties in accessing heavy drug users and the likelihood that the illegal and stigmatized nature of drug abuse may lead to underreporting.” (Gfroerer, Lessler, and Parsley, 1997, p. 291). Although there is evidence that hazardous alcohol use is associated with interview nonresponse (Ahacic, Kareholt, Helgason, and Allebeck, 2013), most empirical studies of predictors of nonresponse are based on comparisons of rates of loss to follow-up in longitudinal cohort studies, not on comparisons of respondents and nonrespondents in cross-sectional surveys or at the initial, baseline survey of longitudinal studies.

Although differences in response rates based on demographic variables such as age and sex have been reported frequently, efforts to characterize differences between survey respondents and nonrespondents on variables of primary interest in health surveys have generally been inconclusive because key measures are not available for nonrespondents (National Research Council, 2013). With the growing availability of rich sampling frames (Groves, 2006) based on electronic health record (EHR) data, a qualitative improvement in the assessment of, and adjustment for, nonresponse factors is now possible. EHR data can be used to measure health characteristics of primary interest in observational studies of health outcomes and behaviors for both respondents and nonrespondents, including measures pertaining to sensitive topics such as indicators of substance abuse and other stigmatized health problems. Although EHR data have occasionally been used to adjust for variables related to survey nonresponse (Von Korff et al., 2005; Tivesten et al., 2012), these data are a largely untapped resource.

The aim of this report is to demonstrate the use of data collected from EHRs in evaluating and accounting for potential nonresponse bias. Specifically, we used EHR data to assess variables related to nonresponse in an observational study of prescription opioid use, misuse, and outcomes among middle- and older-aged patients with chronic pain. We compared differences in survey response by general patient characteristics, by specific opioid misuse risk factors, and by patterns of opioid use that predict opioid misuse. If missing at random assumptions are met, inverse probability weighting can account for nonresponse bias due to measured covariates (Little and Rubin, 2002). We constructed inverse probability of response weights and report the weighted and unweighted prevalence of self-reported opioid misuse risk factors among respondents.

2. Methods

2.1 Setting

The Middle-Aged/Seniors Chronic Opioid Therapy (MASCOT) study was conducted at Group Health, a health plan in Washington State serving over 600,000 persons (Turner, Shortreed, Saunders, LeResche, and Von Korff, In Press; Von Korff, Turner, Shortreed, Saunders, Rosenberg, Thielke, and LeResche, In Press). MASCOT study procedures, including use of de-identified EHR data for nonrespondents, were approved by Group Health’s Institutional Review Board.

2.2 Study design

The analyses reported here were performed using cross-sectional data from a sample of Group Health patients receiving chronic opioid therapy. Study patients were age 45 years or older and had been enrolled at Group Health for at least one year. They were contacted between November 2010 and March 2013. The goal of this observational study was to survey individuals who were likely transitioning to long-term opioid use. Patients were identified as eligible if, in the 120 days prior to sample selection, they had filled at least three opioid prescriptions totaling at least 60 days' supply of opioid medication, with a period of at least 90 days with no opioid prescriptions dispensed prior to the date of the index prescription. We anchored the definition of time-varying covariates to the sampling date, which we defined as the date a potential participant was identified as eligible and his or her contact information was given to our survey team.
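To make these eligibility rules concrete, the sketch below flags qualifying patients from a hypothetical opioid dispensing extract. It is an illustrative Python sketch, not the study's actual programming; the column names (patient_id, fill_date, days_supply) are assumptions for illustration only.

```python
# Illustrative sketch (not the authors' code) of the MASCOT EHR eligibility
# rules, assuming a hypothetical opioid dispensing table with columns
# patient_id, fill_date (datetime), and days_supply (numeric).
import pandas as pd


def ehr_eligible(fills: pd.DataFrame, selection_date: pd.Timestamp) -> list:
    """Return patient_ids meeting the EHR-based eligibility rules."""
    window_start = selection_date - pd.Timedelta(days=120)
    recent = fills[(fills.fill_date > window_start) & (fills.fill_date <= selection_date)]

    eligible = []
    for pid, grp in recent.groupby("patient_id"):
        # Rule 1: at least 3 opioid fills totaling at least 60 days' supply
        # in the 120 days before sample selection.
        if len(grp) < 3 or grp.days_supply.sum() < 60:
            continue
        # Rule 2: at least 90 days with no opioid fills before the index
        # (first) fill of this episode.
        index_date = grp.fill_date.min()
        prior = fills[(fills.patient_id == pid) & (fills.fill_date < index_date)]
        if prior.empty or (index_date - prior.fill_date.max()).days >= 90:
            eligible.append(pid)
    return eligible
```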

When contacted by telephone, these patients were eligible for the study if they reported taking prescription analgesics on at least 7 days in the prior 2 weeks. They were told that the aims of the survey were to study how pain affects daily life and emotional health, how people use medicines to manage pain, ways that prescription pain medicines help, and problems with prescription pain medicines. A $2 pre-incentive was sent with an initial informational mailing, and a $25 payment was provided to persons who enrolled and completed the initial interview.

Patients were excluded if they were incapable of doing the telephone interview due to physical, mental or hearing impairments; were no longer enrolled at Group Health or planned to disenroll in the coming year; or did not speak English. Patients who had received 2 or more cancer diagnoses in the prior year or who were receiving hospice or nursing home care at the time of sampling were excluded. Eligible and consenting patients completed a 25-minute baseline telephone interview and agreed to be contacted for follow-up interviews 4 and 12 months later.

2.3 Analytic sample and definition of survey nonresponse

Persons included in the nonresponse analyses (a) provided initial screening information and were determined to be eligible, (b) declined eligibility screening, (c) declined linkage of EHR and survey data, or (d) could not be contacted to determine study eligibility status. Persons known to be ineligible for the baseline survey were excluded. Survey interview nonresponse was defined as not completing the baseline interview. An individual was considered to have completed the baseline interview if he or she reached the end of the survey without breaking off, regardless of the number of questions answered over the course of the interview. The analyses reported here compared characteristics of persons who completed the baseline interview with those who either did not provide eligibility screening information or were eligible but not interviewed.

2.4 Study measures

We present analyses using two sets of patient covariates. The first set of variables was obtained from EHR data for respondents and nonrespondents, and was used to assess which patient characteristics predicted interview nonresponse and to construct inverse probability of response weights. The second set included self-reported variables collected during the baseline patient survey. These self-report measures included variables associated with opioid use and misuse and were used to characterize the sample of respondents.

2.4.1 Patient information collected from EHR data

General patient characteristics included age, sex, race, ethnicity, the Romano version of the Charlson comorbidity score (Romano, Roos, and Jollis, 1993), based on diagnoses in the year prior to the sampling date, and current tobacco use recorded in the 2 years prior to the sampling date.

Specific opioid misuse risk factors - A concern in a survey of persons using opioids is that those with specific risk factors for opioid misuse, such as a history of substance abuse, may be unwilling to participate. For this reason, we assessed the association between survey response and opioid misuse risk factors. The time period used to assess these risk factors was the 2 years prior to the sampling date, unless otherwise indicated below.

Substance use disorder diagnoses included alcohol and drug abuse disorders as identified by relevant International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) codes (Public Health Service and Health Care Financing Administration, 1980).

Mood/anxiety disorder diagnoses included mood and anxiety disorders as identified by ICD-9-CM codes.

Hepatitis C diagnoses and liver cirrhosis diagnoses were assessed using ICD-9-CM codes because they have been found to predict opioid misuse (Palmer et al., 2015).

Excess days' supply of opioids was defined as receipt of more than a 20% excess in days' supply of opioids dispensed in a three-month period, for at least one quarter in the prior 2 years. This was operationally defined as a patient receiving 109 or more days' supply of long-acting opioids, or 109 or more days' supply of short-acting opioids, in a 90-day interval (a computational sketch of this measure follows the list of EHR measures below). Long-acting and short-acting opioids are considered separately because some patients are prescribed both, to be used concurrently when needed. Receiving excess days' supply of opioids has been shown to predict opioid misuse (Sullivan, Von Korff, Banta-Green, Merrill, and Saunders, 2010).

Receiving opioids from 3 or more doctors outside the patient's primary care clinic, a measure of potential doctor-shopping for opioids, was assessed for the 1-year period prior to the sampling date. This measure has been shown to predict opioid misuse (Palmer et al., 2015).

Receipt of opioid prescriptions from an emergency department provider has been shown to predict opioid misuse (Palmer et al., 2015) and was assessed over the same 2-year period as most other covariates.

Average daily morphine-equivalent dose (MED) (Von Korff et al., 2008) was calculated from opioids dispensed between the index prescription date and the sampling date. Prior research has found higher opioid doses to be associated with increased risk of prescription opioid misuse (Palmer et al., 2015).

Use of sedative-hypnotic medications was defined by the days' supply of sedative-hypnotic medications received in the six months prior to the sampling date. This measure has been shown to predict opioid misuse (Palmer et al., 2015).

Frequency of no-shows for health care appointments was determined from the patients’ EHR for the year prior to the sampling date.
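As a concrete illustration of the excess days' supply measure defined above, the sketch below computes the 109-days-in-a-90-day-interval flag from a hypothetical dispensing table. The column names (patient_id, fill_date, days_supply, acting) and the use of 90-day windows counted back from the sampling date are assumptions for illustration; this is not the study's actual code.

```python
# Illustrative sketch (not the authors' code): flag patients who received
# 109 or more days' supply of long-acting or of short-acting opioids within
# any 90-day interval of the 2 years before the sampling date.
import pandas as pd


def excess_days_supply(fills: pd.DataFrame, sampling_date: pd.Timestamp) -> pd.Series:
    """Boolean flag per patient_id; True when the excess-supply rule is met."""
    start = sampling_date - pd.DateOffset(years=2)
    recent = fills[(fills.fill_date > start) & (fills.fill_date <= sampling_date)].copy()

    # Assign each fill to a 90-day interval counted back from the sampling
    # date (an assumption; the study's quarters may be defined differently).
    recent["interval"] = (sampling_date - recent.fill_date).dt.days // 90

    # Total days' supply per patient, interval, and opioid type ("acting" is
    # "long" or "short"); long- and short-acting opioids are kept separate.
    totals = recent.groupby(["patient_id", "interval", "acting"])["days_supply"].sum()

    flagged = (totals >= 109).groupby(level="patient_id").any()
    # Patients with no fills in the window default to False.
    return flagged.reindex(fills.patient_id.unique(), fill_value=False)
```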

2.4.2 Self-reported information collected from MASCOT baseline interview

Depressive symptom severity was assessed using the Patient Health Questionnaire-8 (PHQ-8, 0–24 scale) (Kroenke, Spitzer, and Williams, 2001; Kroenke, Spitzer, Williams, and Lowe, 2010).

Severity of anxiety symptoms was measured using the Generalized Anxiety Disorder-2 scale (GAD-2, range 0–6) (Spitzer, Kroenke, Williams, and Lowe, 2006; Skapinakis, 2007).

Smoking status was assessed among all survey respondents (never, former, current) with the following question: “Are you a current smoker, an ex-smoker or have you ever smoked?”

The Prescribed Opioids Difficulties Scale (PODS) (Banta-Green, Von Korff, Sullivan, Merrill, Doyle, and Saunders, 2010) consists of twelve questions about the patient's experience with potential side effects of opioids (e.g., opioids caused me to have trouble concentrating, feel sluggish, or lose interest in usual activities) as well as potential concerns about continued use of opioids (e.g., I am worried I am becoming addicted to opioids, I want to cut down or stop using opioids, or I need a higher opioid dose than I am currently receiving). Two PODS subscale scores were computed: the PODS Concerns subscale and the PODS Problems subscale (Banta-Green et al., 2010).

2.5 Data analyses

Chi-square tests were performed to determine whether response rates differed by more than chance expectation across patient characteristics and opioid use risk factors. Because our aim was to describe patient characteristics related to nonresponse and to assess bias that might occur in analyses restricted to respondents only, we used percentages to describe EHR-collected patient characteristics in the full sample (both respondents and nonrespondents), in the respondents only, and in the weighted respondent sample. If patient characteristics are predictive of nonresponse, then the covariate distributions will differ between the full sample and the unweighted respondent-only sample. Applying inverse probability of response weights to the respondent-only sample accounts for bias due to these measured covariates; thus, the weighted respondent sample should have covariate distributions similar to those of the full sample for all patient characteristics included in the nonresponse weight model.

To construct response weights, we first estimated a multivariable logistic regression model using patient characteristics gathered from EHR data to predict the binary outcome variable of response or nonresponse (1 and 0, respectively) to the baseline MASCOT interview. We then generated the predicted probability of response, p̂, given this set of covariates for all respondents. Weights for individuals who responded to the survey were calculated as one over (i.e., the inverse of) the predicted probability of responding (1/p̂). Inverse probability weighted analyses were then performed in the respondent sample, such that nonrespondents were assigned weights of zero (Robins, Rotnitzky, and Zhao, 1994; Little and Rubin, 2002). We used percentages to report weighted and unweighted survey responses describing depressive and anxiety symptom severity, tobacco use, and PODS items and subscales. All analyses were performed using Stata version 12.1 (StataPress, 2011).
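To make the weighting procedure concrete, the following sketch mirrors these steps on simulated data. It is an illustrative Python sketch (the study's analyses were performed in Stata), and the variable names and the small covariate set are assumptions, not the actual MASCOT variables.

```python
# Illustrative sketch (not the authors' Stata code): fit a logistic model for
# interview response, form inverse probability of response weights, and compare
# unweighted and weighted percentages of a survey-reported item.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2489
# Hypothetical analytic file: one row per sampled patient, with EHR covariates
# observed for everyone and a survey item observed only for respondents.
ehr = pd.DataFrame({
    "age_group": rng.choice(["45-54", "55-64", "65-74", "75+"], size=n),
    "current_tobacco": rng.integers(0, 2, size=n),
    "substance_abuse_dx": rng.integers(0, 2, size=n),
    "responded": rng.binomial(1, 0.6, size=n),
})
ehr["worried_dependent"] = np.where(ehr.responded == 1,
                                    rng.integers(0, 2, size=n), np.nan)

# 1. Logistic regression predicting response (1) versus nonresponse (0)
#    from covariates available for respondents and nonrespondents alike.
fit = smf.logit("responded ~ C(age_group) + current_tobacco + substance_abuse_dx",
                data=ehr).fit(disp=False)

# 2. Inverse probability of response weights: 1 / p_hat for respondents,
#    0 for nonrespondents.
ehr["p_hat"] = fit.predict(ehr)
ehr["ipw"] = np.where(ehr.responded == 1, 1.0 / ehr.p_hat, 0.0)

# 3. Unweighted versus weighted prevalence of the survey item among respondents.
resp = ehr[ehr.responded == 1]
unweighted = 100 * resp.worried_dependent.mean()
weighted = 100 * np.average(resp.worried_dependent, weights=resp.ipw)
print(f"unweighted: {unweighted:.1f}%  weighted: {weighted:.1f}%")
```

Under this scheme, weighted covariate percentages among respondents should track the full-sample percentages for any covariate included in the response model, which is the check reported in Table 2.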

3. Results

Among 3172 persons initially identified as eligible based on EHR data, 364 could not be contacted for the telephone screen despite repeated attempts. Of the 2808 individuals who completed telephone screening, 683 were screened as ineligible for the study.

The analytic sample for this report consisted of 2489 patients who either (a) were known to be eligible for the survey or (b) had unknown eligibility status because they did not complete the telephone screen. Among the 2489 persons in the analytic sample, 1477 (59.3%) completed the baseline interview. The contact rate for initial telephone screening was high (88.5%), and 69.5% of the 2125 persons who were eligible after telephone screening completed the baseline interview. Among those who completed the baseline interview, item nonresponse was low: only 92 participants (6.2%) did not respond to one or more of the survey items considered in this report, and among individuals with any missing survey item, the mean number of missing responses was 1.5 items.

Response rates differed by age (see Table 1). The interview response rate was 51.8% for those 45–54 years of age, increased to 67.9% among patients 65–74 years of age, and then declined to 59.9% among those 75 years or older. Current tobacco use (measured using EHR data) was associated with lower interview response: 53.5% of current tobacco users completed the survey compared with 60.9% of those who were not current tobacco users. Charlson comorbidity scores were also associated with differential response rates, with a slight U-shaped pattern. Individuals with a Charlson score of 2 had the highest response rate (65.6%), whereas those with a Charlson score of 0 and those with a score of 6 or more had response rates of about 56%. Gender, race-ethnicity, and the frequency of no-shows at scheduled health care appointments did not appear to be associated with differences in interview response rates.

Table 1.

Interview response rates by electronic health record measures.

General Patient Characteristics    N    Response Rate (%)    Chi-square p-value    Adjusted Odds Ratio (95% CI)
Age, years 45 to 54 573 51.7 REF
55 to 64 866 58.7 30.89 1.30 (1.04, 1.62)
65 to 74 551 67.9 p<0.0001 1.89 (1.45, 2.46)
75 and older 499 59.9 1.29 (0.98, 1.71)

Sex Female 1538 60.7 2.92 REF
Male 951 57.2 p=0.09 0.93 (0.78, 1.10)

Race/ethnicity White, non-Hispanic 2105 59.9 REF
Hispanic or non-White 311 56.9 1.62 1.30 (1.04, 1.62)
Unknown 73 54.8 p=0.45 0.92 (0.57, 1.48)

Charlson Score 0 1101 56.7 REF
1 220 60.0 1.08 (0.80, 1.47)
2 387 65.6 12.64 1.42 (1.11, 1.82)
3 to 5 481 52.2 p=0.013 1.13 (0.89, 1.43)
6 or more 300 56.0 0.82 (0.62, 1.09)

No-show appointments in prior year
0 1681 59.6 REF
1 487 60.6 1.25 1.05 (0.85, 1.30)
2 or more 321 56.7 p=0.54 0.90 (0.70, 1.16)

Current Smoking No 1951 60.9 9.60 REF
Yes 538 53.5 p=0.002 0.80 (0.65, 0.98)

Specific Opioid Misuse Risk Factors

Substance abuse diagnosis
No 2172 59.7 0.99 REF
Yes 317 58.8 p=0.32 0.96 (0.74, 1.24)

Mood/anxiety disorder diagnosis
No 1388 58.1 2.10 REF
Yes 1101 60.9 p=0.15 1.21 (1.01, 1.45)

Hepatitis C/cirrhosis diagnosis
No 2338 59.8 2.70 REF
Yes 151 53.0 p=0.10 0.80 (0.56, 1.13)

Excess days’ supply of opioids: prior 2 years
No 2273 59.2 0.31 REF
Yes 216 61.1 p=0.58 1.12 (0.56, 1.32)

Opioids from 3+ doctors outside primary care clinic in prior year
No 2238 59.2 0.17 REF
Yes 251 60.6 p=0.68 1.15 (0.87, 1.52)

Opioids from emergency department prescriber
No 2221 59.2 0.15 REF
Yes 268 60.5 p=0.70 1.08 (0.82, 1.41)

Opioid morphine equivalent dose/day
less than 15mg 1218 59.1 REF
15mg to < 50mg 931 60.4 0.91 1.08 (0.90, 1.29)
50mg or more 327 57.5 p=0.64 0.96 (0.74, 1.24)

Sedative/hypnotic days’ supply (half-year)
None 1382 61.1 REF
1 to 29 days 381 54.3 6.06 0.79 (0.62, 1.00)
30 to 89 days 314 57.6 p=0.11 0.91 (0.70, 1.17)
90 or more days 412 59.5 0.87 (0.69, 1.10)

Adjusted odds ratios are from a multivariable logistic regression model that predicted survey response using variables shown in the table.

The presence or absence of specific substance abuse risk factors did not predict differences in interview response rates. Substance abuse (drug or alcohol), depression, anxiety, and hepatitis C/cirrhosis diagnoses did not appear to be associated with interview nonresponse (Table 1). Likewise, response rates were similar across measures of drug use patterns previously found to be associated with differences in rates of opioid misuse (Table 1). Excess days' supply of opioids, receiving opioids from multiple prescribers, receiving opioids from emergency department providers, higher opioid dose, and more frequent use of sedative-hypnotic medications did not appear to predict survey response. For example, patients on low opioid doses (less than 15 mg mean daily MED) had an interview response rate of 59.3%, whereas patients with intermediate doses (15 to less than 50 mg MED/day) had a 60.1% response rate, and those with higher doses (50 mg or greater MED/day) had a 56.4% response rate.

Similar patterns of nonresponse were observed in the multivariable logistic regression analyses (see Table 1) as in the bivariate analyses. One notable difference was that, in the multivariable analyses, individuals with a prior mood or anxiety disorder diagnosis appeared to have slightly higher odds of survey response, with an adjusted odds ratio of 1.21 (95% confidence interval [CI]: 1.01, 1.45). The response rate for those without a mood or anxiety disorder diagnosis recorded in their EHR was 58.1%, whereas the response rate for those with a mood or anxiety disorder diagnosis was 60.9%.

Very few patient characteristics were predictors of nonresponse; thus, the distribution of patient characteristics in the full sample (both respondents and nonrespondents) is similar to the distributions in both the unweighted and the weighted respondent samples (Table 2). For example, in the full sample, 23.0% of patients were between 45 and 54 years old, compared with 20.0% of unweighted respondents and 22.9% of the weighted respondent sample. A similar pattern was observed for current tobacco use, with 78.4% of the full sample, 80.5% of the respondent sample, and 78.5% of the weighted respondent sample classified as not currently using tobacco. The data shown in Table 2, in particular the similarity between the weighted and unweighted distributions and their closeness to the full-sample distributions, indicate that the survey respondents are broadly representative of the target population of interest.

Table 2.

Patient characteristics measured using electronic health records in full sample and in sample of survey respondents, unweighted and weighted using inverse probability of response weights to account for measured nonresponse bias.

General Patient Characteristics    Full Sample (%)    Respondents, Unweighted (%)    Respondents, Weighted (%)
Age, years 45 to 54 23.0 20.0 22.9
55 to 64 34.8 34.4 34.7
65 to 74 22.1 25.3 22.2
75 and older 20.1 20.2 20.2

Sex Female 61.8 63.2 61.9
Male 38.2 36.8 38.1

Race/ethnicity White non-Hispanic 84.6 85.3 84.5
Other 12.5 12.0 12.6
Unknown 2.9 2.7 2.9

Charlson Score 0 44.2 42.3 44.1
1 8.8 8.9 8.9
2 15.5 17.2 15.6
3 to 5 19.3 20.2 19.4
6 or more 12.1 11.4 12.1

No-show appointments in prior year
0 67.5 67.7 67.5
1 19.6 20.0 19.6
2 or more 12.9 12.3 12.9

Current Smoking No 78.4 80.5 78.5
Yes 21.6 19.5 21.5

Specific Opioid Misuse Risk Factors

Substance abuse diagnosis
No 87.3 87.8 87.3
Yes 12.7 12.2 12.7

Mood/anxiety disorder diagnosis
No 55.8 54.6 55.7
Yes 44.2 45.4 44.3

Hepatitis C/cirrhosis diagnosis
No 93.9 94.6 93.9
Yes 6.1 5.4 6.1

Excess days’ supply of opioids: prior 2 years
No 91.3 91.1 91.3
Yes 8.7 8.9 8.7

Opioids from 3+ doctors outside primary care clinic in prior year
No 89.9 89.7 90.0
Yes 10.1 10.3 10.0

Opioids from emergency department prescriber
No 89.2 89.0 89.2
Yes 10.8 11.0 10.8

Opioid morphine equivalent dose/day
less than 15mg 49.5 49.2 49.4
15mg to less than 50mg 37.4 38.1 37.5
50mg or more 13.1 12.7 13.1

Sedative/hypnotic days’ supply (half-year)
None 55.5 57.1 55.5
1 to 29 days 15.3 14.0 15.2
30 to 89 days 12.6 12.3 12.7
90 or more days 16.6 16.6 16.7

Weighted distributions of self-reported patient characteristics among survey respondents did not differ substantially from the unweighted distributions (Table 3). For example, in the unweighted respondent sample, 13.5% of participants self-reported having never smoked, whereas the corresponding percentage in the weighted respondent sample was 14.8%. Similar patterns were observed for the PODS subscale sum scores and the PODS items; weighting had little impact on the distribution of responses. Among respondents, 47.7% had a PODS Concerns subscale score of 0; in the weighted sample this percentage was 47.4%. Among respondents, 14.6% reported that they had worried they might be dependent on or addicted to opiate pain medicines; the corresponding percentage in the weighted respondent sample was 14.7%.

Table 3.

Self-reported patient characteristics among respondents; unweighted and weighted to account for nonresponse bias by EHR-measured covariates.

General Patient Characteristics    Respondents, Unweighted (%)    Respondents, Weighted (%)
Smoking Status Never 13.5 14.8
Ever 50.3 49.8
Current 36.2 35.4

Depression symptom severity (PHQ-8)
0 to 10 69.7 69.5
11 to 19 27.1 27.3
20 to 24 3.2 3.2

Anxiety symptom severity (GAD-2)
0 to 3 73.0 72.8
4 to 6 27.0 27.2

Prescription opioids difficulties scale items and subscales

PODS concerns subscale sum score 0 47.7 47.4
1 to 5 32.9 33.0
6 to 10 12.0 12.1
11 to 20 7.5 7.5

In last two weeks, I have been preoccupied
with or thought constantly about use of
opiate pain medicines.
Disagree / strongly disagree 81.2 81.1
Neutral 8.3 8.3
Agree / strongly agree 10.5 10.6

In the last 3 months, I have felt that I could
not control how much or how often I used
opiate medicine.
Disagree / strongly disagree 91.9 92.0
Neutral 3.7 3.8
Agree / strongly agree 4.4 4.3

In the last 3 months, I have needed to use
a higher dose of opiate pain medicine to
get the same effect.
Disagree / strongly disagree 76.5 76.0
Neutral 6.8 7.0
Agree / strongly agree 16.7 17.1

In the last 3 months, I have worried that I
might be dependent on or addicted to
opiate pain medicines.
Disagree / strongly disagree 78.2 77.8
Neutral 7.3 7.6
Agree / strongly agree 14.6 14.7

In the last 3 months, I have wanted to stop
using opiate pain medicines or cut down
on the amount of opiate medicines I use.
Disagree / strongly disagree 49.3 49.1
Neutral 13.7 14.0
Agree / strongly agree 37.1 37.0

PODS problems subscale sum score 0 34.1 33.8
1 to 5 36.7 36.9
6 to 10 14.1 14.0
11 to 15 7.4 7.5
16 to 28 7.7 7.8

In the past 2 weeks, opiate medicines have
caused me to lose interest in my usual
activities
Disagree / strongly disagree 79.4 79.3
Neutral 9.1 9.1
Agree / strongly agree 11.5 11.6

In the past 2 weeks, opiate medicines have
caused me to have trouble concentrating
or remembering.
Disagree / strongly disagree 76.0 75.9
Neutral 9.7 9.8
Agree / strongly agree 14.3 14.4

In the past 2 weeks, opiate medicines have
caused me to feel slowed down, sluggish
or sedated
Disagree / strongly disagree 64.3 64.3
Neutral 10.6 10.8
Agree / strongly agree 25.1 25.0

In the past 2 weeks, opiate pain medicines
have caused me to feel depressed, down,
or anxious.
Disagree / strongly disagree 80.9 80.8
Neutral 8.9 8.9
Agree / strongly agree 10.2 10.3

In the past 2 weeks, how often have side
effects of opiate medicine interfered with
your work, family, or social responsibilities?
Never / rarely 83.6 83.3
Sometimes 9.2 9.4
Often / almost every day 7.2 7.3

In the past 2 weeks, how often did opiate
medicine make it hard for you to think
clearly?
Never / rarely 83.4 83.4
Sometimes 11.5 11.4
Often / almost every day 5.2 5.3

Over the past 3 months, how
bothersome have you found side
effects of opiate pain medicines?
Not at all bothersome 50.9 50.5
A little / moderately bothersome 44.2 44.7
Very / extremely bothersome 4.9 4.8

4. Discussion

It is unusual to have extensive, survey-relevant data on sensitive variables for both survey respondents and nonrespondents (National Research Council, 2013). EHR data provide a valuable resource for obtaining such information on respondents and nonrespondents alike when surveys are conducted within health systems with linked EHR data. Using high-quality EHR data available for both survey respondents and nonrespondents, we compared the two groups on general patient characteristics and specific opioid misuse risk factors. Two variables that predicted nonresponse in our study (age and tobacco use) have been found to predict nonresponse in general health surveys (Herzog and Rodgers, 1988; Cunradi, Moore, Killoran, and Ames, 2005). We found that response rates increased with age until age 75 years; participants aged 75 years and older had response rates similar to those aged 45 to 54 years. Other surveys have found that response rates decrease with age (Gfroerer et al., 1997; Herzog and Rodgers, 1988). This modest difference in response rate patterns might be due to our survey being limited to persons age 45 years or older.

We observed a U-shaped association between nonresponse and Charlson comorbidity score. Patients with a score of 2 (a moderate level of comorbidity) had the highest response rate; response rates decreased as the Charlson score increased or decreased from 2. This pattern of nonresponse could reflect patients with high scores being more likely to be too ill to respond, and very healthy patients not responding due to greater competing demands on their time.

Patterns of opioid use previously found to be associated with opioid misuse did not predict differences in response rates. This suggests decisions about whether to participate in this particular survey regarding use of, and problems with, prescription opioid medications were influenced by patient characteristics associated with nonresponse in general health surveys rather than by specific risk factors for opioid misuse. Specific patient-related prescription opioid misuse risk factors did not appear to be associated with survey nonresponse, although the two patient characteristics associated with nonresponse in our study (younger age, tobacco use) have been found to predict opioid and other drug misuse (Palmer et al., 2015).

In public opinion research, it has been found that the framing of a request to participate in a survey typically has only modest effects on a person's likelihood of participating (Tourangeau, Presser, and Sun, 2014). Less is known about nonresponse biases in drug abuse surveys. The Census Match Study found that response rates varied across household and neighborhood characteristics likely related to differences in the prevalence of drug abuse, but some populations with low response rates in that study (e.g., older persons and high-income populations) have lower drug abuse prevalence rates (Gfroerer et al., 1997). Thus, it is difficult to predict whether persons with substance abuse problems are more or less likely to decline to participate in a survey that concerns substance use and abuse.

This survey was limited to persons receiving chronic opioid therapy who were 45 years of age or older in a single health plan in Washington State. The results reported here cannot be generalized to differences in response rates among younger persons, persons using illicit drugs, or general population surveys. This was a telephone survey, in which consent was obtained and all survey questions were answered via telephone interview; predictors of nonresponse in observational studies relying on different survey modes may differ. We used patient characteristics gathered from EHR data to investigate the relationship between those variables and survey nonresponse. It is possible that patient characteristics not available in the EHR are associated with survey nonresponse; this could lead to unmeasured nonresponse bias even in analyses that use weights constructed as in this paper.

Although these analyses were conducted in a specific patient population, the results are relevant to the feasibility of using EHR data to compare drug abuse risk factors and patterns of drug use among all persons selected for inclusion in drug abuse surveys, wherever EHR data are available for both respondents and nonrespondents. Data gathered from EHRs can be used to determine whether specific drug abuse risk factors assessable with EHR data predict differences in response rates in observational studies of illicit drug use and abuse conducted in health plan populations with EHRs. In this study, very few patient characteristics predicted nonresponse; therefore, the weighted distribution of self-reported patient characteristics among respondents was very similar to the unweighted distribution. Given the variety of information on risk factors for prescription drug misuse and abuse available in EHR data, the similarity of the weighted and unweighted results suggests that nonresponse bias is unlikely to compromise unweighted analyses of the data gathered in the MASCOT study.

As demonstrated here, nonresponse weights can be constructed using data gathered from EHRs, and survey responses can be weighted to account for nonresponse bias by patient characteristics measured in the EHR. In order for weighting to remove nonresponse bias, missing at random assumptions must be met (Little and Rubin, 2002). Given the rich information available in the EHR, this assumption is plausible; EHR data provide an invaluable resource for understanding predictors of nonresponse and evaluating the potential for nonresponse bias.

In a paper prepared for the NRC report on survey nonresponse (National Research Council, 2013), Peytchev concluded that "unbiased inference from probability-based surveys relies on the collection of data from all sample members – in other words, a response rate of 100 percent" (Peytchev, 2013, p. 89). In conventional observational studies, relevant information is typically not available for all persons in the sampling frame, except for demographic variables such as age and sex. While no analytic method can replace a high response rate, when appropriate assumptions are met, EHR data can be leveraged to obtain unbiased estimates. Specifically, relevant measures from EHR data provide a means of rigorously assessing differences between respondents and nonrespondents. Moreover, extensive relevant data on respondents and nonrespondents permit weighting study data to obtain less biased population estimates (Little, 1982).

As EHR data become available for larger and more representative segments of the United States population, the trade-offs between conducting observational studies in populations with and without linked EHR data should be considered. The advantages of access to linked EHR data for evaluating and adjusting for interview nonresponse bias are complemented by the use of EHR data to obtain additional information on health status, key health behaviors, and health care utilization. As people contacted to participate in observational studies become less willing to complete extended research interviews, the ability to obtain key study measures from EHR data sources now available for large, representative populations is an attractive methodologic option.

We conclude that EHR data can be used to assess predictors of nonresponse, estimate nonresponse adjustment weights, and enhance understanding of when nonresponse bias may undermine scientific inference. Given increasing rates of survey nonresponse, these findings suggest the potential utility of conducting observational studies in health systems with EHRs, which permit evaluation of and adjustment for nonresponse bias.

Acknowledgments

This research was supported by grants from the National Institute on Aging (AG034181, Von Korff) and from the Patient-Centered Outcomes Research Institute (R-IHS-1306-02198, Von Korff).

Competing interests: Dr. Von Korff is the Principal Investigator of grants to GHRI from Pfizer Inc. that concern opioids and is a co-investigator on grants from the Campbell Alliance, a consortium of pharmaceutical companies carrying out FDA-mandated studies regarding the safety of extended-release opioids. Ms. Saunders is the programmer employed on Dr. Von Korff's grants funded by the pharmaceutical industry listed above. She owns stock in Merck. Dr. Von Korff was also the Principal Investigator of a grant to GHRI from Johnson & Johnson concerning prediction of clinical pain outcomes. This grant also supported Ms. Saunders and Drs. Turner and LeResche. Dr. Shortreed has received funding from research grants awarded to Group Health Research Institute (GHRI) by Bristol-Myers Squibb and Pfizer Inc. and is a Co-Investigator on a grant awarded to GHRI from the Campbell Alliance. She has also received funding to attend review panel and methods meetings through the Patient-Centered Outcomes Research Institute.

Contributor Information

Susan M. Shortreed, Group Health Research Institute, Seattle, WA, USA; Department of Biostatistics, University of Washington, Seattle, WA, USA.

Michael Von Korff, Group Health Research Institute, Seattle, WA, USA.

Stephen Thielke, Department of Psychiatry and Behavioral Sciences, University of Washington, Seattle, WA, USA; Geriatric Research, Education, and Clinical Center, Puget Sound Veterans Affairs Medical Center, Seattle, WA, USA.

Linda LeResche, Department of Oral Medicine, University of Washington, Seattle, WA, USA.

Kathleen Saunders, Group Health Research Institute, Seattle, WA, USA.

Dori Rosenberg, Group Health Research Institute, Seattle, WA, USA.

Judith A. Turner, Department of Psychiatry and Behavioral Sciences; Department of Rehabilitation Medicine; Department of Anesthesiology and Pain Medicine, University of Washington, Seattle, WA, USA.

References

1. Ahacic K, Kareholt I, Helgason AR, Allebeck P. Non-response bias and hazardous alcohol use in relation to previous alcohol-related hospitalization: comparing survey responses with population data. Subst Abuse Treat Prev Policy. 2013;8:10. doi: 10.1186/1747-597X-8-10.
2. Banta-Green CJ, Von Korff M, Sullivan MD, Merrill JO, Doyle SR, Saunders K. The prescribed opioids difficulties scale: a patient-centered assessment of problems and concerns. Clin J Pain. 2010;26(6):489–497. doi: 10.1097/AJP.0b013e3181e103d9.
3. Cunradi CB, Moore R, Killoran M, Ames G. Survey nonresponse bias among young adults: the role of alcohol, tobacco, and drugs. Subst Use Misuse. 2005;40(2):171–185. doi: 10.1081/ja-200048447.
4. Gfroerer J, Lessler J, Parsley T. Studies of nonresponse and measurement error in the national household survey on drug abuse. NIDA Res Monogr. 1997;167:273–295.
5. Groves RM. Nonresponse rates and nonresponse bias in household surveys. Public Opinion Quarterly. 2006;70.
6. Herzog AR, Rodgers WL. Age and response rates to interview sample surveys. J Gerontol. 1988;43(6):S200–S205. doi: 10.1093/geronj/43.6.s200.
7. Kroenke K, Spitzer RL, Williams JBW. The PHQ-9: validity of a brief depression severity measure. J Gen Intern Med. 2001;16(9):606–613. doi: 10.1046/j.1525-1497.2001.016009606.x.
8. Kroenke K, Spitzer RL, Williams JB, Lowe B. The patient health questionnaire somatic, anxiety, and depressive symptom scales: a systematic review. Gen Hosp Psychiatry. 2010;32(4):345–359. doi: 10.1016/j.genhosppsych.2010.03.006.
9. Little RJA. Models for nonresponse in sample surveys. J Am Stat Assoc. 1982;77:327–350.
10. Little RJA, Rubin DB. Statistical Analysis with Missing Data. 2nd ed. New York, NY: J Wiley & Sons; 2002.
11. National Research Council. Nonresponse in Social Science Surveys: A Research Agenda. Technical report, Division of Behavioral and Social Sciences and Education. 2013.
12. Palmer RE, Carrell DS, Cronkite D, Saunders K, Gross DE, Masters E, Donevan S, Hylan TR, Von Korff M. The prevalence of problem opioid use in patients receiving chronic opioid therapy: computer-assisted review of electronic health record clinical notes. Pain. 2015;156(7):1208–1214. doi: 10.1097/j.pain.0000000000000145.
13. Peytchev A. Consequences of survey nonresponse. The Annals of the American Academy of Political and Social Science. 2013;645:88–111.
14. Public Health Service and Health Care Financing Administration. International Classification of Diseases, 9th Revision, Clinical Modification. Technical report, Public Health Service. 1980.
15. Robins JM, Rotnitzky A, Zhao LP. Estimation of regression coefficients when some regressors are not always observed. Journal of the American Statistical Association. 1994;89:447–482.
16. Romano PS, Roos LL, Jollis JG. Further evidence concerning the use of a clinical comorbidity index with ICD-9-CM administrative data. J Clin Epidemiol. 1993;46:1085–1090. doi: 10.1016/0895-4356(93)90103-8.
17. Skapinakis P. The 2-item generalized anxiety disorder scale had high sensitivity and specificity for detecting GAD in primary care. Evid Based Med. 2007;12(5):149. doi: 10.1136/ebm.12.5.149.
18. Spitzer RL, Kroenke K, Williams JB, Lowe B. A brief measure for assessing generalized anxiety disorder: the GAD-7. Arch Intern Med. 2006;166(10):1092–1097. doi: 10.1001/archinte.166.10.1092.
19. StataPress. Stata Statistical Software: Release 12.1. 2011.
20. Sullivan MD, Von Korff M, Banta-Green C, Merrill JO, Saunders K. Problems and concerns of patients receiving chronic opioid therapy for chronic non-cancer pain. Pain. 2010;149(2):345–353. doi: 10.1016/j.pain.2010.02.037.
21. Tivesten E, Jonsson S, Jakobsson L, Norin H. Nonresponse analysis and adjustment in a mail survey on car accidents. Accid Anal Prev. 2012;48:401–415. doi: 10.1016/j.aap.2012.02.017.
22. Tourangeau R, Presser S, Sun H. The impact of partisan sponsorship on political surveys. Public Opinion Quarterly. 2014;78:510–522.
23. Turner JA, Shortreed S, Saunders K, LeResche L, Von Korff M. Association of levels of opioid use with pain and activity interference among patients initiating chronic opioid therapy: a longitudinal study. Pain. In press. doi: 10.1097/j.pain.0000000000000452.
24. Von Korff M, Katon W, Lin EH, Simon G, Ludman E, Oliver M, Ciechanowski P, Rutter C, Bush T. Potentially modifiable factors associated with disability among people with diabetes. Psychosom Med. 2005;67(2):233–240. doi: 10.1097/01.psy.0000155662.82621.50.
25. Von Korff M, Saunders K, Thomas Ray G, Boudreau D, Campbell C, Merrill J, Sullivan MD, Rutter CM, Silverberg MJ, Banta-Green C, Weisner C. De facto long-term opioid therapy for noncancer pain. Clin J Pain. 2008;24(6):521–527. doi: 10.1097/AJP.0b013e318169d03b.
26. Von Korff M, Turner JA, Shortreed SM, Saunders KW, Rosenberg D, Thielke S, LeResche LA. Timeliness of care planning upon initiation of chronic opioid therapy for chronic pain. Pain Medicine. In press. doi: 10.1093/pm/pnv054.
