Author manuscript; available in PMC: 2016 Nov 4.
Published in final edited form as: AIDS Behav. 2007 Mar;11(2):161–173. doi: 10.1007/s10461-006-9133-3

Assessing Antiretroviral Adherence via Electronic Drug Monitoring and Self-Report: An Examination of Key Methodological Issues

Cynthia R Pearson 1, Jane M Simoni 2, Peter Hoff 3, Ann E Kurth 4, Diane P Martin 5
PMCID: PMC5096443  NIHMSID: NIHMS825020  PMID: 16804749

Abstract

We explored methodological issues related to antiretroviral adherence assessment, using 6 months of data collected in a completed intervention trial involving 136 low-income HIV-positive outpatients in the Bronx, NY. Findings suggest that operationalizing adherence as a continuous (versus dichotomous) variable and averaging adherence estimates over multiple assessment points (versus using only one) explain greater variance in HIV-1 RNA viral load (VL). Self-reported estimates provided during a phone interview accounted for similar variance in VL as EDM estimates (R2 = .17 phone versus .18 EDM). Self-reported adherence was not associated with a standard social desirability measure, and no difference in the accuracy of self-report adherence was observed for assessment periods of 1–3 days. Self-reported poor adherence was more closely associated with EDM adherence estimates than self-reported moderate and high adherence. On average across assessment points, fewer than 4% of participants reported taking an incorrect amount of medication.

Keywords: HIV/AIDS, HAART, Antiretroviral adherence assessment, Electronic drug monitoring, MEMS, Self-report

Introduction

A frequent refrain among reviewers of the medication adherence literature, dating back to the field’s pioneers (Rudd et al., 1988), laments the difficulty in measuring adherence and underscores the need for further research and better methods. In addition to uncertainty about the best assessment strategy, there is little consensus regarding how best to operationalize medication adherence for analysis (Osterberg & Blaschke, 2005).

Accurately assessing highly active antiretroviral therapy (HAART) adherence is critical to HIV clinical management as well as for research purposes. An abundance of convergent empirical evidence confirms that strict adherence to HAART is key to successful treatment of HIV/AIDS (Bangsberg et al., 2000; Paterson et al., 2000). However, there is decidedly less agreement about how best to assess HAART adherence (Liu et al., 2001). The difficulty may stem in part from the lack of a commonly accepted definition of the term “adherence” (Simoni, Frick, Pantalone, & Turner, 2003).

Current strategies for assessing HIV medication adherence range from direct methods that assess medication levels in bodily fluids to indirect methods that rely on more peripheral indicators. However, with respect to comparing adherence methods, Gao and Nau (2000) commented: “The advantages of cost, convenience, and acceptability have generally served as counterweights to accuracy, stability, and comprehensiveness.” (p. 1117). Recent comprehensive reviews of HAART-specific adherence assessment methods leave many methodological questions unanswered (Ammassari et al., 2002; Liu et al., 2001; Miller and Hays, 2000a; Paterson, Potoski, & Capitano, 2002; Simoni et al., 2006; Turner, 2002; Wutoh et al., 2003).

In the present paper, we used data from a completed intervention trial among indigent men and women, HIV-infected mainly through injection drug use, to examine some key questions and methodological issues with respect to measuring antiretroviral adherence. Focusing on two adherence assessment methodologies— self-report (the most ubiquitous) and electronic drug monitoring (EDM; the newest and one of the most highly lauded)—we aimed to identify (a) how best to operationalize adherence, (b) the quality of data elicited from self-report measures during telephone and in-person interviews, and (c) discrepancies between adherence estimates based on data from self-report versus EDM.

Patient self-report measures have many advantages including low cost, minimal participant burden, and flexibility in terms of mode of administration and timing of assessment, all of which have contributed to their widespread use in clinics and countries where resources are limited (Reynolds, 2004). Self-reports are most often elicited during telephone or in-person interviews or with written questionnaires. Measures range from a single open-ended question such as “How many of your HIV medication doses did you miss in the last 7 days?” (Golin et al., 2002) to a standardized set of open- and closed-ended questions, such as the Adult AIDS Clinical Trials Group (AACTG) questionnaire (Chesney, 1999) and the Simplified Medication Adherence Questionnaire (SMAQ; Morisky, Green, and Levine, 1986). Some of these longer questionnaires, however, are complex to administer and analyze, which limits their value in clinical settings. Other types of self-report measures include patient diaries and pill identification tests (Miller and Hays, 2000b; Parienti et al., 2001).

A recent addition to self-report measures is the visual analogue scale (VAS), a linear scale ranging from 0% to 100%. In some versions of the VAS there are no demarcations or anchors (other than the endpoints) on the line (Wewers and Lowe, 1990); in others, there are demarcations at intervals of 10% (Walsh, Mandalia, & Gazzard, 2002). In both versions, individuals are asked to put a cross on a line indicating their best guess as to how much medication they have ingested. Although first used and validated in resource-rich countries (Walsh et al., 2002), the VAS has particular appeal and preliminary evidence of validity in less literate populations and resource-poor countries as well (Bangsberg, 2004; Kalichman et al., 2005; Oyugi et al., 2004).

Adherence can also be assessed with audio computer-assisted self-interviewing (A-CASI), in which patients provide responses via a computer keyboard to questions posed through headphones. A-CASI may be less susceptible to reporting bias than personal interviews, although it presents significantly greater financial and technological burdens (Bangsberg, Bronstone, Chesney, & Hecht, 2002).

Self-reports can provide useful adherence estimates. A recent review indicated self-reported HAART adherence and HIV-1 RNA VL were consistently associated, even though most of the studies in the review relied on chart-abstracted VL data instead of blood samples drawn at the same time that adherence was assessed (Simoni, Frick, & Huang, 2006). Associations between self-reported adherence and CD4 count were less robust. A recent meta-analysis demonstrated that despite significant study heterogeneity the pooled association between self-reported HAART adherence and VL was statistically significant (adjusted OR = 2.31; 95% CI, 1.99–2.68; Nieuwkerk & Oort, 2005).

Nevertheless, many researchers remain skeptical of the quality of self-report measures. Indeed, many aspects of self-reported assessment may affect results, including how the measures are administered, length of recall, how often and when adherence is assessed, and how adherence is operationalized in the analysis (Bangsberg, Hecht, Clague, et al., 2001; Reynolds, 2004; Simoni, Frick, & Huang, 2006). For example, social desirability bias and the inherent limitations of human memory may lead to an overestimation or inaccurate estimation of adherence (Miller & Hays, 2000a). Dunbar-Jacob and Schlenk (2001) found that the accuracy of remembering doses declined after just one day. Self-reported adherence has been shown to have generally high specificity (i.e., patients’ reports of non-adherence can generally be believed) (Chesney, 1999; Liu et al., 2001), although self-reports seem to have limited accuracy in terms of measuring the precise number of missed doses (Kimmerling, Wagner, & Ghosh-Dastidar, 2003; Wagner & Rabkin, 2000). Single-item measures that ask about missed doses at a single assessment point often are as predictive of VL (Murri et al., 2000) as are multiple-item measures, which assess food requirements, timing of dose, and missed doses for each medication (Chesney et al., 2000). In addition to these measurement issues, there remains debate about how to operationalize adherence based on self-report data to produce the best adherence estimates: from a single point in time, averaging points in time, or using summary or composite measures that combine two or more measures (Bangsberg, Hecht, Clague, et al., 2001; Simoni, Frick, & Huang, 2006).

EDM technology, such as the Medication Event Monitoring System (MEMS®; AARDEX Corporation, Zug, Switzerland), employs a medicine container with a built-in microchip in the cap that registers the time and date of each opening as a presumptive dose. EDM has the potential to provide more accurate data by detecting overestimates of adherence, pill dumping, and the “white coat effect” (improved adherence immediately prior to clinic appointments). The primary advantage of EDM is its ability to estimate dose timing adherence and to describe patterns of adherence over time. For these reasons, numerous studies have employed it as the criterion against which to compare other adherence estimates (Arnsten et al., 2001; Golin et al., 2002; Hugen, Burger, et al., 2002; Kimmerling et al., 2003; Knobel et al., 2001; Liu et al., 2001; Melbourne et al., 1999; Oyugi et al., 2004; Wagner, 2003; Walsh et al., 2002). Studies have indicated EDM is more sensitive than self-report for the detection of non-adherence, suggesting it is more appropriate for adherence intervention trials (Arnsten et al., 2001; Melbourne et al., 1999).

However, EDM has several drawbacks. It is expensive, it cannot reveal whether the medication was actually ingested, and it provides no information about how much of the medication was taken (Bova et al., 2005; Deschamps et al., 2004; Nieuwkerk, 2003; Samet, Sullivan, Traphagen, & Ickovics, 2002; Wendel et al., 2001). Its use requires that all medications be stored in the container, that only the correct number of pills be removed at each opening, that the container only be opened during dosing, and that the container be closed after each dose (Bangsberg, Hecht, Charlebois, Chesney, & Moss, 2001). EDM is not appropriate for liquid medications, may serve as a temporary inducement to adherence, can be cumbersome to carry, and is problematic for patients accustomed to using pillboxes or other adherence devices incompatible with electronic monitoring. Such individuals may refuse to participate, thereby increasing the likelihood of non-representative samples; if they do participate, their adherence may decline. If used continuously over a long period of time, EDM produces data that are difficult to interpret because of spans of misuse or non-use. One example of misuse is the common practice of “pocket dosing” (also called “double dosing,” “decanting,” or “cacheing”), which is the removal of more than one dose per opening. Also, the batteries or the device itself may fail, and patients may lose their vials or forget to bring them to assessment visits. Although widely used in HIV medication adherence research, at least in resource-rich settings, the practical limitations of EDM have curbed its clinical application.

Often adherence estimates based on data from EDM are used as a “gold standard” because they are more highly correlated with VL than adherence estimates based upon self-report. Results from studies comparing adherence data obtained using self-report versus EDM indicate EDM produces higher estimates of non-adherence than self-report (Deschamps et al., 2004; Hugen, Langebeek, et al., 2002; Liu et al., 2001; Paterson et al., 2002). Correlations between data from EDM and self-report tend to be weak to moderate, r = .34–.64 (Golin et al., 2002; Liu et al., 2001; Wagner & Rabkin, 2000; Walsh et al., 2002). Walsh and colleagues computed Kappas to examine agreement between a variety of self-report measures and EDM (Walsh et al., 2002). They concluded that self-report and EDM provided comparable estimates, supported by sensitivity and specificity analyses. Additionally, one of the few reports examining medication adherence in a resource-poor setting found that among 34 Ugandans, self-report (3-day recall and 30-day visual analog scale) and EDM produced comparable estimates of adherence to HIV medications (91–94%), were highly correlated, r = .77–.89, and both related significantly to VL at 12 weeks after baseline (Oyugi et al., 2004). However, ceiling effects may have affected these findings.

Methods

Participants and Procedures

We conducted secondary analyses with data obtained from a randomized controlled trial of a 3-month peer support adherence intervention conducted between 2000 and 2002 at a public outpatient infectious disease clinic at Montefiore Medical Center in the Bronx, New York (more details about the trial and outcome analyses are reported in Simoni, Pantalone, Plummer, & Huang, in press). Patients eligible for the trial were: at least 18 years of age, proficient in English, currently on HAART, and not experiencing symptoms of dementia or psychosis.

Research assistants collected self-report adherence data during in-person interviews (via pen and paper) at baseline, at 3 months, and at 6 months, and by phone during 2- and 4-month assessment interviews. Participants were compensated $20–$30 for completing each interview. Assistants abstracted the prescribed medication regimen data from patient medical records at the time of the 3- and 6-month interviews and abstracted VL data at 6 months. EDM methods (i.e., MEMS caps) were used to assess adherence continuously throughout the 6-month study period.

Preliminary analyses indicated the main findings from this paper did not differ for participants in the intervention versus control arm of the original trial. Therefore, to simplify the presentation of our findings, we report only the analyses for the participants overall.

The study participants were mainly African American (46.3%) and Puerto Rican (44.1%) men and women of low socio-economic status. There were slightly more men (55.2%) than women, and the mean age was 42.3 years (SD = 8.9, range 22–77). A little more than half of the participants (53.9%) reported monthly incomes between $501 and $1,000, with 30.8% reporting monthly incomes less than $500. Mean education was 11.2 years (SD = 2.7), with 56.3% having less than a high school degree. Most participants reported sexual orientation as exclusively heterosexual (84%). About half of the sample reported heavy alcohol use (51.5%) and crack or heroin use (52%) in their lifetime. Preliminary analyses indicated that no socio-demographic indicator was associated with any measure of adherence.

Measures

Viral Load

In analyses, we used the VL data that were drawn closest to the 6-month assessment. On average, data were collected 86 days (SD = 74) after the 6-month interview or 9 months from baseline. As VL data were non-normal, they were natural-log-transformed.

Self-reported Adherence

In a format similar to that used by the AACTG (Chesney et al., 2000), participants were queried about their adherence at the baseline and 2-, 3-, 4-, and 6-month assessments. Adherence items followed a preamble acknowledging the difficulties in taking all of the medications. Participants were asked (a) “What are the names of the HIV medications you took yesterday?” For each medication, they were asked (b) “How many times did you take this medication?”; (c) “Did you take more or less than you were told to at any time?” (yes or no); (d) “Did you take any dose more than 2 hours earlier or later than you were supposed to?” (yes or no); and (e) “Did you follow any special instructions every time you took the drug? With food; on an empty stomach; with plenty of fluids; none of these” (yes or no). (Note that we modified the original AACTG by asking whether the participant took more or less of the prescribed dose for each medication instead of how many pills of each medication they had taken, and we asked number of times they took the medication, not number of doses.) Participants supplied the same information about their adherence for the day before yesterday and the day before that. For each day, we calculated dose adherence as the number of times they took their medication for the day divided by the number of daily doses prescribed (based on chart-abstracted prescription information) and multiplied by 100 to form a percentage. These three daily percentages were averaged across the 3-day period to calculate the percentage of doses taken, out of doses prescribed, across all medications. Additionally, participants indicated how often in the past 4 weeks they had missed a dose of their medication: every day, most days, a few days a week, about once a week, less than once a week, or not once.
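The dose-adherence calculation described above can be expressed compactly. The following is an illustrative sketch, not the study's actual code; the function and variable names are hypothetical:

```python
# Illustrative sketch of the 3-day self-reported dose-adherence calculation:
# each day's adherence is (times taken / daily doses prescribed) x 100,
# then the three daily percentages are averaged.

def daily_adherence(times_taken: int, doses_prescribed: int) -> float:
    """Percentage of prescribed doses taken on a single day."""
    return 100.0 * times_taken / doses_prescribed

def three_day_adherence(reports) -> float:
    """Average daily adherence over a 3-day recall window.

    `reports` is a list of (times_taken, doses_prescribed) pairs,
    one per day, aggregated across all medications.
    """
    daily = [daily_adherence(t, d) for t, d in reports]
    return sum(daily) / len(daily)

# Example: prescribed 3 doses/day; took 3, 2, and 3 doses on the 3 days
print(three_day_adherence([(3, 3), (2, 3), (3, 3)]))  # ~88.9%
```

A participant who took every prescribed dose on all 3 days would score 100%; missing one of three daily doses on one day lowers the 3-day estimate by about 11 percentage points.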

EDM Adherence

All participants were asked to keep a single medication in a pill bottle with an EDM cap for the 6-month duration of the study. The protease inhibitor was selected because it generally has to be taken most frequently and often produces the most severe side effects and, therefore, would presumably yield the greatest variance and the most conservative estimates of overall adherence. Also, the protease inhibitors appear to be the most unforgiving in terms of tolerance for drug holidays and, therefore, adherence with the protease inhibitor is the most crucial to medical outcomes (Duong et al., 2001). If the participant was not on a protease inhibitor or was taking more than one, the medication with the most frequent dosing schedule was targeted for the EDM. EDM dose adherence was operationalized as the number of EDM openings divided by the total number of prescribed doses in a specified time interval and multiplied by 100 to form a percentage. For example, if a participant was prescribed three doses per day and opened the bottle 68 times over a 28-day interval (84 prescribed doses), dose adherence was calculated as (68/84) × 100 ≈ 81%. EDM dose adherence percentages were calculated at the 2-, 3-, 4-, and 6-month assessment points for 1-, 2-, 3-, and 28-day assessment intervals.
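A minimal sketch of the EDM calculation (hypothetical function name; the worked example assumes the 28-day assessment interval used for the monthly EDM estimates):

```python
# Illustrative sketch: EDM dose adherence as cap openings divided by
# prescribed doses over an interval, expressed as a percentage.

def edm_adherence(openings: int, doses_per_day: int, days: int) -> float:
    """Percentage of prescribed doses for which the cap was opened."""
    prescribed = doses_per_day * days
    return 100.0 * openings / prescribed

# 68 openings over a 28-day interval at 3 doses/day (84 prescribed doses)
print(round(edm_adherence(68, 3, 28)))  # 81
```

Note that openings in excess of prescribed doses (e.g., curiosity openings) would push this estimate above 100%, one reason EDM data typically require cleaning before analysis.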

Social Desirability

The Marlowe-Crowne Social Desirability Scale was used to identify the tendency of individuals to describe themselves in favorable terms to garner approval from others. The scale’s ten items consist of five socially desirable but probably untrue statements (e.g., “I always try to practice what I preach”) and five socially undesirable but probably true statements (e.g., “I like to gossip at times”). Internal consistency, α = .88, and test–retest reliability, r = .88, are reportedly high (Crowne & Marlowe, 1964). For this study, α = .61.

Results

The analyses address a series of questions grouped into three conceptual areas: (a) the best operationalizations of adherence, (b) the quality of data from self-report measures, and (c) discrepancies between adherence estimates based on self-report versus EDM.

Operationalizing Adherence

Researchers most commonly report adherence as the percentage of prescribed doses taken (Simoni, Frick, & Huang, 2006), but there are many other issues involved in determining the most appropriate operationalization of adherence. In this first section, we address a series of questions to determine which operationalization of adherence performs “best” in terms of demonstrating the strongest associations with VL.

Which Cut-point (i.e., 100%, 95%, 90%, 85%, or 80%) is Best for Dichotomizing Continuous Adherence Data?

Because continuous adherence data do not provide clinically meaningful information, researchers often opt to dichotomize them. But which cut-point for dichotomization is best? Using a 100% cut-point is statistically practical because it often approximates a median split (G. Wagner, personal communication, March 2, 2005), but as a clinical goal it is likely unreasonable for most patients long-term. A cut-point of 95% is often used based on the finding that virologic success is significantly less likely in patients taking less than 95% of their prescribed doses (Paterson et al., 2000).

However, adherence data for very short assessment intervals are often highly skewed, rendering the issue of dichotomizing at high levels moot. For example, adherence estimates based on our 3-day EDM data from the 6-month assessment were identical (i.e., 27% of the sample was “adherent”) whether we dichotomized at 100%, 95%, or 90%. For continuous adherence data based on longer assessment intervals, the question of which cutoff is best is considerably more germane.

To empirically address this issue, we ran 20 Pearson correlations with the 28-day EDM data, employing cut-points of 100%, 95%, 90%, 85%, and 80% to determine which produced a variable that best correlated with VL. Because we had collected self-report continuous adherence data only for assessment intervals of 3 days or less, we were not able to use them to investigate this question.

Analyses indicated that no single cut-point produced an EDM variable that was consistently correlated with VL. In fact, there were only two significant correlations for any 28-day EDM dichotomized adherence measure and VL: the 85% cut-point at 2 months, r = −.26, P < .02, and the 100% cut-point at 4 months, r = −.25, P < .05. A visual inspection of the scatter plots of adherence and VL at each of the assessment points confirmed the lack of any clear cut-point. Given these results, we recommend, and will use in the remaining analyses, the most conservative cut-point of 100%.
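The dichotomize-and-correlate procedure used here can be illustrated with toy data (the numbers below are hypothetical, not the study's; the point-biserial correlation of a binary variable with VL is computed as an ordinary Pearson correlation):

```python
# Toy illustration: dichotomize a continuous adherence measure at several
# cut-points and correlate each binary "adherent" indicator with log VL.
import math

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical 28-day adherence percentages and log viral loads
adherence = [100, 98, 96, 92, 88, 85, 81, 76, 70, 60]
log_vl    = [2.0, 2.2, 2.5, 3.0, 3.4, 3.6, 4.1, 4.5, 5.0, 5.8]

def cutpoint_correlation(cut):
    """Point-biserial correlation of the dichotomized measure with log VL."""
    binary = [1 if a >= cut else 0 for a in adherence]
    return pearson(binary, log_vl)

for cut in (100, 95, 90, 85, 80):
    print(cut, round(cutpoint_correlation(cut), 2))
```

Dichotomizing discards information relative to the continuous measure, which is one reason the correlations for the binary variables are attenuated and vary with the chosen cut-point.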

What is the Best Level of Measurement for an Adherence Variable?

Generally, converting continuous data to categorical data decreases the degrees of freedom in the analysis, resulting in lost information, reduced power, and an increased probability of a Type II error (i.e., failing to detect real differences; Streiner, 2002). However, when data are highly skewed (as is often the case with adherence data), categorical data may be more appropriate.

To determine whether continuous data (percentage of prescribed doses taken), ordinal data (based on a 6-point scale), or dichotomous data (100% vs. < 100% of doses) best predicted VL, we ran 30 unadjusted general linear model (GLM) analyses regressing VL on adherence. For continuous and dichotomous measures, we used the self-reported and EDM data for the 3-day and the EDM data for the 28-day intervals for each assessment point. For ordinal measures, we had only self-reported data for the 28-day interval.

Overall, the continuous measures outperformed the ordinal and dichotomous measures. Specifically, for the continuous measures, there were nine models (five self-report and four EDM), seven (78%) of which were significant (four self-report and three EDM). For the ordinal measures, there were four self-report models, and only one (25%) was significant. Of the 17 dichotomous models (nine self-report and eight EDM), three (18%) were significant (all self-report). The explained variance (R2) ranged from 0% for the dichotomized measure at 4 months to 18.9% for the continuous measure at 2 months and averaged 8.3% for the continuous measures, 2.6% for the dichotomized measures, and 1.6% for the ordinal measures. Test statistics were generally higher and often changed from non-significant to significant when the analysis substituted continuous variables for dichotomous or ordinal variables.

Which Assessment Interval for Measuring Adherence is Best?

Recommendations regarding the appropriate length of the assessment interval vary. Shorter intervals are assumed to improve recall, but longer intervals probably provide a better representation of adherence behavior over time, which is more likely to affect VL. A 1-day interval has been used successfully (Arnsten et al., 2001); 3 days is most common (Nieuwkerk & Oort, 2005); 7 days has been suggested because it always includes a weekend during which adherence may be especially challenging (Simoni et al., 2006); and 30 days provides a representative behavioral sample but may be too long for accurate recall.

We used GLM to regress adherence as assessed by EDM percentage of doses taken for 1-, 2-, 3-, and 28-day intervals on VL. Similarly, we used GLM to regress self-report percentage of doses taken for 1-, 2-, and 3-day intervals on VL.

Overall, findings revealed little effect of length of EDM assessment interval on the association with VL (see Table 1). Analyses of self-report percentage of doses taken for 1, 2, and 3 days revealed similar results with little fluctuation in explained variance among the three measures across all time points (differences in R2 ranged from .13% to 1.19%).

Table 1.

Amount of variance in HIV-1 RNA viral load explained by electronic drug monitoring antiretroviral adherence estimates at each assessment point by assessment interval among HIV-positive outpatients

Cell entries are R2; β coefficient (SE).

Assessment interval | 2-month (n = 92) | 3-month (n = 85) | 4-month (n = 76) | 6-month (n = 90)
1 day | .10; −.84 (.30)** | .10; −.86 (.30)** | .01; −.23 (.31)** | .11; −.89 (.32)**
2 days | .12; −.94 (.31)** | .08; −.84 (.34)** | .01; −.30 (.33)** | .15; −1.1 (.31)**
3 days | .18; −1.3 (.31)** | .06; −.76 (.35)* | .03; −.45 (.33)* | .12; −.99 (.29)**
28 days | .19; −1.5 (.35)** | .08; −.95 (.39)* | .01; −.30 (.37)** | .05; −.71 (.37)*
* P < .05; ** P < .01

Is Calculating Average Adherence Across Multiple Assessment Points Better than Relying on Data from Any One Point?

Because antiretroviral adherence is dynamic and inconsistent (Howard et al., 2002), averaging multiple estimates over time instead of relying on any one might yield a more representative depiction of long-term adherence, thus potentially improving the power to predict VL (Johnson et al., 2003; Lievens, 2002).

We used the 3- and 28-day continuous adherence measures to create three averages for self-report and three for EDM—one for the two telephone assessments, one for the two in-person assessments, and one for all four assessments. We then used 12 GLM models to regress each averaged estimate on VL at each assessment period.
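The averaging-and-regression step can be sketched as follows (an illustrative sketch with hypothetical data, not the study's models; for simple OLS with one predictor, R2 equals the squared Pearson correlation):

```python
# Illustrative sketch: average a participant's adherence estimates across
# assessment points, then regress log VL on the averaged estimate and
# report the explained variance (R^2).

def average_across_points(estimates_by_point):
    """Row-wise mean across assessment points (one row per participant)."""
    return [sum(row) / len(row) for row in estimates_by_point]

def ols_r2(x, y):
    """R^2 from a simple linear regression of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((a - mx) ** 2 for a in x)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    syy = sum((b - my) ** 2 for b in y)
    return (sxy ** 2) / (sxx * syy)  # for one predictor, R^2 = r^2

# Hypothetical 3-day adherence at months 2, 3, 4, and 6, one row per participant
adherence = [[100, 90, 95, 100], [80, 70, 85, 75], [60, 50, 65, 55], [100, 100, 90, 95]]
log_vl = [2.1, 3.8, 5.2, 2.4]

avg = average_across_points(adherence)
print(round(ols_r2(avg, log_vl), 2))
```

Averaging smooths out point-to-point fluctuations in a dynamic behavior, which is why the averaged estimate can explain more variance in VL than any single assessment.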

With one exception, for both self-report and EDM 3-day data, averaging estimates across all four assessment points explained more variance in VL than did estimates based on two assessments (see Table 2) or on only one assessment (see Table 3). For EDM, averaging estimates across either pair of assessment points (i.e., months 2 and 4 or months 3 and 6) explained more variance in VL than three of the four single estimates. For self-report, the estimates averaged over the two phone assessment points at months 2 and 4 explained more variance than the estimate at 4 months yet less variance than the 2-month estimate.

Table 2.

Amount of variance in HIV-1 RNA viral load explained by self-reported and electronic drug monitoring (EDM) 3-day antiretroviral adherence estimates based on either two or four assessment points among HIV-positive outpatients

Cell entries are R2; β coefficient (SE).

Assessment points averaged | EDM | Self-report
2 and 4 months | .17; −1.3 (.35)** | .12; −1.2 (.31)** (telephone)
3 and 6 months | .17; −1.4 (.39)** | .05; −.59 (.24)** (in person)
2, 3, 4, and 6 months | .21; −1.5 (.32)** | .14; −1.7 (.38)**
** P < .01

Table 3.

Amount of variance in HIV-1 RNA viral load explained by 3-day antiretroviral adherence estimates from electronic drug monitoring (EDM) and in-person and telephone self-reports among HIV-positive outpatients

Cell entries are R2; β coefficient (SE).

Assessment point | EDM | Self-report (in person) | Self-report (telephone)
Baseline | N/A | .06; −1.05 (.40)** | N/A
2-month | .18; −1.25 (.31)** | N/A | .17; −1.40 (.31)**
3-month | .06; −.76 (.35)* | .05; −.76 (.38)* | N/A
4-month | .03; −.45 (.33) | N/A | .03; −.62 (.39)
6-month | .12; −1.0 (.29)** | .04; −.61 (.30)* | N/A
* P < .05; ** P < .01

Similarly, with respect to the 28-day measures, averaging estimates across all four assessment points improved predictive power (data not shown). For the 28-day EDM data, averaging estimates across four assessment points explained slightly more variance, R2 = .14, than averaging across two assessment points (2 and 4 months: R2 = .13; 3 and 6 months: R2 = .10). Although their predictive ability was weaker overall, the 28-day self-report measures followed the same pattern: predictive ability was stronger when all four assessment points were combined, R2 = .10, than with any two assessment points (2 and 4 months: R2 = .07; 3 and 6 months: R2 = .06).

Self-Report Adherence Measures

Many researchers have questioned the quality of self-report assessments of adherence, pointing to the inherent limitations of human memory (Miller & Hays, 2000a, b; Wagner & Miller, 2004). It is important to examine these concerns as self-report is the less expensive and more practical method of assessing adherence, especially in resource-constrained settings. Therefore, in this section, we conduct a detailed examination of several questions to address issues of bias and accuracy of self-report measures.

Is Self-reported Adherence Prone to Social Desirability Bias?

Self-reported estimates of adherence generally overestimate adherence when compared with data from EDM and other less subjective assessment modalities. This is assumed to be due to social desirability—the tendency to present oneself in a socially approved fashion. We found only two published studies that directly examined the relation between social desirability and self-reported antiretroviral medication adherence (Wagner & Miller, 2004; Di Matteo et al., 1993); neither reported a significant association.

We used 14 Spearman rank order and Pearson correlations to examine the association, at each assessment point and across assessment points, between scores on the short form of the Marlowe-Crowne Social Desirability Scale and self-reported adherence according to (a) the continuous 3-day dose adherence variable and (b) the ordinal 28-day measure. Further, because there may be a relation with social desirability only for reports of good adherence, we examined the correlation between social desirability and adherence within subgroups of participants reporting good (≥95%) and poor (≤70%) adherence.

No associations were significant using the 3-day measures (r ranged from .03 to .14). For the 28-day measure, only at 6 months was there a significant correlation, r = .27. Subgroup analyses were all non-significant, with no significant relation between social desirability and adherence among good adherers (r ranged from .00 to .09) or poor adherers (r ranged from .04 to .12).

Are Self-reports of High or Moderate Adherence Less Credible than Self-reports of Poor Adherence?

Previous studies have demonstrated that self-reported adherence is valid among individuals who report missing a dose (Chesney, 1999; Kimmerling et al., 2003) and that acknowledgment of non-adherence can generally be believed (Liu et al., 2001). Self-reported perfect adherence, however, is often inaccurate (Wagner & Rabkin, 2000). Kimmerling, Wagner and Ghosh-Dastidar (2003) demonstrated an even finer distinction: reports of very poor and perfect adherence were generally accurate but reports of adherence falling somewhere between these extremes were less accurate.

To clarify at what level self-reported adherence is accurately reported, we used the 3-day self-report and the 3-day EDM data, both averaged across all assessment points, and grouped participants into three mutually exclusive adherence categories based on Chesney’s typology (1999): those whose self-reported adherence was high (95–100%), moderate (71–94%), or low (<70%). We ran binomial tests of proportions to determine the accuracy of self-reported adherence for individuals within each group, using EDM adherence estimates as more objective estimates of adherence.
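The categorization-and-accuracy step can be sketched with hypothetical data (this is an illustrative simplification, not the study's code; "accuracy" here means the EDM estimate falls in the same category as the self-report):

```python
# Illustrative sketch: assign participants to adherence categories from
# self-report, then check each report against the EDM-based category.

def category(pct: float) -> str:
    """Adherence category following the typology described above."""
    if pct >= 95:
        return "high"
    if pct >= 71:
        return "moderate"
    return "low"

def accuracy_by_group(self_report, edm):
    """Proportion within each self-report group whose EDM estimate
    falls in the same category."""
    counts, hits = {}, {}
    for sr, em in zip(self_report, edm):
        g = category(sr)
        counts[g] = counts.get(g, 0) + 1
        hits[g] = hits.get(g, 0) + (category(em) == g)
    return {g: hits[g] / counts[g] for g in counts}

# Hypothetical 3-day averages: self-report tends to exceed EDM
sr  = [100, 98, 96, 85, 80, 72, 60, 50, 40]
edm = [70, 96, 60, 84, 55, 50, 58, 45, 30]
print(accuracy_by_group(sr, edm))
```

In this toy example, as in the study's data, reports of low adherence agree with EDM far more often than reports of high or moderate adherence.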

Of the 98 participants, 44 self-reported high adherence, 22 self-reported moderate adherence, and 32 reported low adherence. Nine of 44 (20%) of those reporting high adherence, 8 of 22 (36%) of those reporting moderate adherence, and 29 of 31 (94%) of those reporting low adherence were accurate according to EDM estimates. The difference in the proportion accurate between the high and moderate adherers versus the low adherers was significant, McNemar’s χ2 (1, N = 98) = 33.4, P < .01.

Does the Accuracy of Self-report Adherence Data Decline Significantly After One Day Post-event?

Earlier reports suggested that individuals report adherence behavior more accurately for briefer than for longer assessment intervals, with accuracy dropping off significantly as soon as 24 h after the event (Turner & Hecht, 2001; Wagner & Miller, 2004; Walsh, Horne, Dalton, Burgess, & Gazzard, 2001).

To assess accuracy of recall, we computed the percentage of doses taken for each of the 3 days before each assessment point according to self-report and according to EDM and then calculated the difference in these estimates for the first day before the assessment, the second day before, and the third day before. Three paired t tests were then used to determine if any pair was significantly different (i.e., day 1 vs. day 2, day 1 vs. day 3, or day 2 vs. day 3). In additional analyses, we used VL as the criterion. We calculated 15 Pearson r coefficients to examine whether the correlation with VL was stronger for self-reported adherence estimates from day 1, day 2, or day 3.
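The day-by-day comparison can be sketched as follows. The data are simulated: the discrepancy scores and viral load values are hypothetical stand-ins for the study variables, not the trial data.

```python
import numpy as np
from scipy.stats import ttest_rel, pearsonr

rng = np.random.default_rng(1)
n = 98  # hypothetical number of participants with complete data

# Simulated self-report-minus-EDM adherence discrepancies (%) for the
# 1st, 2nd, and 3rd day before an assessment point
disc = {d: rng.normal(loc=25, scale=30, size=n) for d in (1, 2, 3)}

# Paired t tests for each pair of days; non-significance would suggest
# recall accuracy does not decay over the 3-day window
results = {(a, b): ttest_rel(disc[a], disc[b])
           for a, b in [(1, 2), (1, 3), (2, 3)]}

# Criterion check: correlate one day's self-reported adherence with log VL
log_vl = rng.normal(loc=3.0, scale=1.0, size=n)
sr_day1 = rng.uniform(0, 100, size=n)
r_day1, _ = pearsonr(sr_day1, log_vl)
```

Repeating the correlation for day-2 and day-3 recall and comparing coefficients mirrors the 15-correlation check described above.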

Findings suggested recall did not decline over the 3 days. Specifically, none of the paired t tests were statistically significant at the P < .05 level, and the association between adherence and VL did not vary based on how long ago the adherence behavior was recalled. Indeed, coefficients and significance levels were often identical. For example, at 3 months, the correlations between self-reported adherence and VL for day 1, day 2, and day 3 were −.27, −.30, and −.27, respectively (all P values < .01). At 6 months, these correlations were −.20, −.19, and −.21, respectively (all P values < .05).

Do Individuals Who Report Taking a Dose of a Medication Actually Take More or Less than What Was Prescribed?

Most adherence measures inquire about missed or ingested doses without defining what constitutes a dose (Simoni et al., 2006). It is possible that individuals who report taking a dose may actually have taken more or less medication than was prescribed, although partial dosing (i.e., ingesting less than the prescribed amount of medication at a given dosing event) is reportedly rare (G. Wagner, personal communication, March 2, 2005).

Recall that for each of the 3 days prior to each assessment point, participants reported how many of their prescribed HAART medications they took. For each medication they took, they were asked, “How many times did you take this medication?” and then “Did you take more or less than you were told to at any time?”

Descriptive frequencies indicated that among participants who reported taking any amount of a prescribed medication, on average across all 15 assessed days over the 6-month study period, only 3.7% (SD = 3.8%; range 0–15.4%) reported taking more or less than the prescribed amount of that medication at one or more of the dosing times that day.

Comparing Self-report versus EDM Adherence Assessment

In the next section, we compare the relative ability of different adherence assessment modalities (i.e., self-report in person, self-report by telephone, and EDM) to predict VL and consider whether a composite score across different modalities improves the results over the use of any one modality.

What are the Differences in Estimates of Adherence Based on Self-report versus EDM?

Although, as just shown, self-reported adherence can be useful in terms of its associations with VL, self-report generally produces higher estimates of adherence than EDM.

To compare estimates generated by self-report and EDM, we plotted mean adherence levels according to 3-day self-reports and EDM at each assessment point. Then, we used paired t tests to determine whether 3-day self-reported adherence was significantly higher than EDM adherence at each assessment point.

As seen in Fig. 1, self-reported and EDM mean adherence levels slightly increase, decrease, and level off over the 6-month study period. However, at each assessment point, self-reported adherence estimates were about 27 percentage points higher than EDM estimates. Each difference was significant, t = 4.4 to 6.6, all P values < .05.

Fig. 1

Three-day antiretroviral adherence estimates based on self-report (SR) and electronic drug monitoring (EDM) among HIV-positive individuals

Which Assessment Modality (i.e., Self-report in Person, Self-report by Telephone Interview, or EDM) is Best?

We examined the association of adherence estimates based on self-report and EDM at each assessment point using 28 Pearson r correlations. For each modality, we included estimates of adherence for 3 days and 28 days at each assessment point as well as the estimates that were averaged across two time points and across four time points. In addition, we conducted 30 GLM regressions to determine which modality most consistently predicted VL.

Results indicated that the self-report phone measures were moderately correlated with EDM, r = .44–.48, but that the self-report in-person measures were only weakly correlated with EDM, r = .26–.33. All correlations were statistically significant. (Note that we could not directly compare adherence estimates of the two types of self-report modalities because they were never collected at the same assessment point.) This same pattern—moderate correlations with the phone measures (r = .42–.49) and weak correlations with the in-person measures (r = .26–.34)—was also evident with the 28-day measures.

As illustrated in Table 3, percentages of variance in VL explained by the adherence estimates were comparable for self-report and EDM modalities at 2 and 4 months (telephone) and at 3 months (in-person). However, at 6 months, EDM explained more variance in VL than the self-report in-person measure. Although the self-report phone and EDM estimates at 2 and 4 months differed on average by 27 percentage points, they accounted for comparable explained variance in VL at both phone assessment points.

Is it Better to Combine Measures Across Modalities than to Use Measures from a Single Modality?

As illustrated above, self-report measures overestimate adherence when compared to EDM, yet EDM may underestimate adherence because of inconsistent use (Bova et al., 2005; Deschamps et al., 2004). Two research teams have argued for combining adherence estimates from different assessment modalities in order to optimize validity (Liu et al., 2001; Llabre et al., 2006). Their solutions, however, involve rather complicated algorithms and statistical techniques. We aimed to find a relatively simple way to combine measures from different modalities.

Using the EDM and self-report measures averaged across all assessment points, we employed two different statistical techniques for combining the measures to determine if either could produce a composite adherence estimate capable of explaining greater variance in VL than an estimate based on a single measure. If self-report and EDM adherence estimates contain a substantial amount of non-overlapping information about the “true” dose taken, and if the “true” dose taken is a better predictor of VL than either EDM or self-reported adherence alone, then incorporating a composite variable in a model should substantially improve prediction of VL. The two techniques were (a) calculating a factor score derived from an unrotated principal component of the two variables, which produces a weighted value, and (b) fitting a multiple-regression model with the two individual variables (EDM adherence and self-reported adherence) and their interaction term to account for the variance common to both measures.
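The two composite techniques can be sketched in a few lines with simulated data (the effect sizes below are illustrative only, not the study estimates). Note that because the regression with both measures nests the EDM-only model, its in-sample R2 can only match or exceed the single-measure value; the question is whether the gain is meaningful.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 98

# Simulated adherence estimates (%): self-report runs higher than EDM
edm = rng.uniform(20, 90, size=n)
sr = np.clip(edm + rng.normal(25, 20, size=n), 0, 100)
log_vl = 5.0 - 0.02 * edm + rng.normal(0, 1, size=n)  # VL falls with adherence

def r_squared(X, y):
    """In-sample R^2 from an OLS fit with an intercept."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - resid.var() / y.var()

# (a) Factor score: first unrotated principal component of the z-scored measures
z = np.column_stack([(edm - edm.mean()) / edm.std(),
                     (sr - sr.mean()) / sr.std()])
_, _, vt = np.linalg.svd(z, full_matrices=False)
factor = z @ vt[0]

# (b) Multiple regression with both measures and their interaction term
both = np.column_stack([edm, sr, edm * sr])

r2_edm, r2_factor, r2_both = (r_squared(m, log_vl) for m in (edm, factor, both))
```

Comparing r2_edm, r2_factor, and r2_both parallels the comparison of single-measure and composite models reported below.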

Results indicated neither technique explained significantly greater variance in VL than using either measure alone. The factor loadings of the two measures were identical, indicating that neither contributed more than the other to the factor score, and the factor score explained 19.7% of the variance in VL. In the multiple-regression model with the EDM and self-report variables (data not shown), the self-report variable was not significant. This model explained only 1% more variance in VL, R2 = .22, than the model with the EDM measure alone, R2 = .21, and only 8% more than the model with the self-report measure alone, R2 = .14.

Discussion

Using antiretroviral adherence data from a completed intervention trial, we examined key issues regarding the operationalization of adherence, the quality of self-report measures, and discrepancies in adherence estimates based on data from self-report versus EDM. The questions and main findings are summarized in Table 4; further discussion for a few of the results follows.

Table 4.

Summary of main findings

Operationalizing adherence
1. Which cut-point is best for dichotomizing continuous adherence data? No single cut-point was superior (although a 100% cut is considered more appropriate for data from very short assessment intervals, i.e., < 7 days)
2. What is the best level of measurement for an adherence variable? Continuous (versus dichotomous or ordinal) data consistently explained the most variance in VL
3. Which assessment interval for measuring adherence is best? No assessment interval (3- or 28-day) was superior in explaining VL
4. Is calculating average adherence across multiple assessment points better than relying on data from any one point? Yes, averaging adherence estimates over 2 or more assessment points increased explained variance in VL over estimates based on only a single point
Self-report measures of adherence
1. Is self-reported adherence prone to social desirability bias? No, the association between self-report adherence and social desirability was non-significant
2. Are self-reports of high or moderate adherence less credible than self-reports of poor adherence? Yes, both self-reported high and moderate adherence were less associated with electronic drug monitoring data than self-reported poor adherence
3. Does the accuracy of self-report adherence data decline significantly after one day post-event? No, the accuracy of self-reported adherence estimates (in terms of their association with EDM data) was similar for the first, second, and third days post-event
4. Do individuals who report taking a dose of a medication actually take more or less than what was prescribed for that dose? No, participants’ reports of a “dose” only seldom (M = 3.7% of the time) referred to an incorrect amount
Comparing adherence assessment modalities
1. What is the difference in estimates of adherence based on self-report versus EDM? Self-report mean adherence estimates were consistently and significantly higher than EDM mean adherence estimates (mean difference 26.8%, SD 32%)
2. Which assessment modality (i.e., self-report in-person, self-report by telephone interview, or EDM) is best? There were no clear differences among modalities. Self-report by telephone at 2 and 4 months and in-person at 3 months were similar with respect to EDM estimates; EDM at 6 months explained more variance in VL than the self-report in-person measure
3. Is it better to combine measures across modalities than to use a single modality? No, combining modalities did not explain significantly more variance in VL than the best single modality

Our findings provide some guidance for operationalizing variables for self-report and EDM data that will account for greater variance in VL. We recommend using a continuous variable (versus categorical) for both self-report and EDM and averaging measures over multiple assessment points (versus relying on any single point).

In the cross-sectional comparison of adherence modalities (Table 3), we found self-report by telephone and EDM estimates accounted for almost the same amount of explained variance in VL, whereas the in-person modality explained considerably less variance than the EDM. This difference in explained variance in VL by self-report modality may be an issue of unmeasured social desirability bias, because self-report by telephone seems equivalent to EDM but self-report in person overestimates adherence. A possible explanation is that the Marlowe-Crowne scale, used in this study to measure social desirability, may not capture motivations specific to medication adherence reporting. Low-income, ethnic minority respondents (such as the participants in this study) may respond more “honestly” to the scale’s items than to queries about their medication adherence behavior because they perceive that reports of good adherence are related to tangible rewards or, conversely, to threatening consequences such as loss of medication and other benefits. Additionally, phone interviews were conducted when the respondents were presumably at home, in the place they usually take their medications, with access to their bottles and pill boxes; this may have led to a better estimation of adherence. Use of ACASI may also improve the quality of in-person reporting (Rogers et al., 2005).

Our analyses of self-report data indicated poor adherence was reported more accurately than both moderate adherence and high adherence, at least according to EDM data. These findings contradict two other reports (Kimmerling et al., 2003; Wagner & Rabkin, 2000), which found less accurate reporting among those who reported missed doses than among those who reported taking all of their doses. However, these authors only analyzed data using two groups—perfect and non-perfect self-reported adherence—rather than further dividing the non-perfect group into poor and moderate adherers. By taking our analyses one step further, we found that poor adherence is reported more accurately than moderate adherence. It may be that people who moderately adhere to their medication regimens have the most difficulty accurately reporting their adherence.

With respect to self-report data on amount and timing of dosage, most participants reported taking their entire dose each time they reported taking their medication; in other words, participants are not generally referring to partial doses of a medication when they report doses taken. One way to avoid any confusion around this issue is to include precise definitions in adherence questionnaires as to what constitutes a “dose” or even a “missed dose” (see Sankar, Neufeld, Nevedal, and Luborsky, 2006) or more simply, to ask about number of pills taken for each medication in the regimen.

Comparative analyses indicated EDM produced lower estimates of adherence than self-report, 46–54% vs. 72–80%, as has been consistently demonstrated in other studies (Arnsten et al., 2001; Deschamps et al., 2004; Liu et al., 2001). In the present study, participants used the EDM device with the protease inhibitor, which is the drug most likely to be missed due to generally more frequent dosing and adverse side effects, while self-report estimates were based on averaged reports for all medications. Also, EDM cannot distinguish non-adherence from non-use of the device, which may lead to underestimates of actual adherence. These factors may at least partially explain differences in adherence estimates. Although it is difficult to discern where EDM estimates or self-report estimates fall in relation to “true” adherence, we would suspect that “true” adherence likely lies somewhere between—precisely where is an ongoing research question.

Averaged across all assessment points, EDM adherence estimates explained greater variance in VL than the self-report estimates, R2 = .21 vs. R2 = .14. However, a comparison at each assessment point of the three modalities—EDM versus self-report in-person versus self-report by telephone—indicated that at three of the four assessment points there were no differences in the amount of variance in VL they explained. These findings suggest EDM may have a slight but not consistent or overwhelming advantage in predicting VL—good news for resource-constrained settings for which EDM is prohibitively expensive. These findings confirm other recent reports of the utility of self-report modalities with respect to HAART adherence assessment (Nieuwkerk & Oort, 2005; Simoni et al., 2006).

In sum, self-reported adherence estimates can provide useful information, with a few caveats. First, self-report measures are likely to overestimate adherence based on EDM. Second, they can indicate poor adherence, but clinicians may want to follow up with further questions when patients report moderate or good adherence. In all cases, clinicians and researchers should consider their population and the cultural context in which adherence measures are used. Evaluating self-report and EDM qualitatively and quantitatively will assist in adapting the measures for use with a specific population.

Several limitations in the present study warrant mention. First, the data were obtained from an adherence intervention trial and were not collected specifically to examine the present aims. For this reason, we did not have available at each assessment point all versions of the measures for each modality, limiting the extent of our comparisons. Additionally, we recognize the very modest (and at times non-significant) associations between the adherence estimates and VL. Possibly, the lower correlations were due to the time lag between the adherence assessment and the date blood was drawn to calculate VL (recall that we used VL data abstracted from medical charts which was, on average, actually drawn 9 months after baseline). Ideally, VL data would be based on blood drawn the day of the assessment. Although the correlations are lower than what some studies report (e.g., Liu et al., 2001), they were not much lower than those reported with the use of direct study assays (Arnsten et al., 2001; Wagner, Kanouse, Koegel, & Sullivan, 2003; Walsh et al., 2002). Finally, the modifications to the AACTG prevented us from obtaining a more in-depth analysis of adherence, because we could not determine the precise number of pills missed for each medication at each dose.

Despite these limitations, our study provides some answers to basic questions regarding best practices for operationalizing adherence, the quality of self-report data, and the relative merits of EDM and self-report measures. Yet more work remains to be done. Given the growing HIV/AIDS pandemic and the increasing availability of HAART throughout the developing world, the successful resolution of these methodological issues has never been more urgent.

Acknowledgments

This work was supported by Stroum Endowed Minority Dissertation Fellowship funding to Dr. Cynthia Pearson, University of Washington Center for AIDS Research Sociobehavioral and Prevention Research Core (P30 AI 27757) funding to Dr. Kurth, and 2 R01 MH58986 to Dr. Simoni.

Contributor Information

Cynthia R. Pearson, Email: pearsonc@u.washington.edu, Department of Health Services, School of Public Health and Community Medicine, University of Washington, Seattle, WA, USA. Department of Psychology, University of Washington, Seattle, WA 98105-1525, USA

Jane M. Simoni, Department of Psychology, University of Washington, Seattle, WA 98105-1525, USA

Peter Hoff, Department of Statistics, University of Washington, Seattle, WA, USA.

Ann E. Kurth, School of Nursing, University of Washington, Seattle, WA, USA. Center for AIDS Research, University of Washington, Seattle, WA, USA

Diane P. Martin, Department of Health Services, School of Public Health and Community Medicine, University of Washington, Seattle, WA, USA

References

  1. Ammassari A, Trotta M, Murri R, Castelli F, Narciso P, Noto P, et al. Correlates and predictors of adherence to highly active antiretroviral therapy: Overview of published literature. Journal of Acquired Immune Deficiency Syndromes. 2002;31:S123–S127. doi: 10.1097/00126334-200212153-00007. [DOI] [PubMed] [Google Scholar]
  2. Arnsten JH, Demas PA, Farzadegan H, Grant RW, Gourevitch MN, Chang CJ, et al. Antiretroviral therapy adherence and viral suppression in HIV-infected drug users: Comparison of self-report and electronic monitoring. Clinical Infectious Diseases. 2001;33(8):1417–1423. doi: 10.1086/323201. [DOI] [PMC free article] [PubMed] [Google Scholar]
  3. Bangsberg DR. U.S. researcher starts treatment fund in Uganda: Interview with David Bangsberg, M.D., M.P.H. AIDS Treatment News. 2004;(402):3–5. [PubMed] [Google Scholar]
  4. Bangsberg DR, Bronstone A, Chesney MA, Hecht FM. Computer-assisted self-interviewing (CASI) to improve provider assessment of adherence in routine clinical practice. Journal of Acquired Immune Deficiency Syndromes. 2002;31(3):S107–S111. doi: 10.1097/00126334-200212153-00004. [DOI] [PubMed] [Google Scholar]
  5. Bangsberg DR, Hecht FM, Charlebois ED, Chesney MA, Moss A. Comparing objective measures of adherence to HIV antiretroviral therapy: Electronic medication monitors and unannounced pill counts. AIDS and Behavior. 2001;5:272–281. [Google Scholar]
  6. Bangsberg DR, Hecht FM, Charlebois ED, Zolopa AR, Holodniy M, Sheiner L, et al. Adherence to protease inhibitors, HIV-1 viral load, and development of drug resistance in an indigent population. AIDS. 2000;14:357–366. doi: 10.1097/00002030-200003100-00008. [DOI] [PubMed] [Google Scholar]
  7. Bangsberg DR, Hecht FM, Clague H, Charlebois ED, Ciccarone D, Chesney M, et al. Provider assessment of adherence to HIV antiretroviral therapy. Journal of Acquired Immune Deficiency Syndrome. 2001;26(5):435–442. doi: 10.1097/00126334-200104150-00005. [DOI] [PubMed] [Google Scholar]
  8. Bova CA, Fennie KP, Knafl GJ, Dieckhaus KD, Watrous E, Williams AB. Use of electronic monitoring devices to measure antiretroviral adherence: Practical considerations. AIDS and Behavior. 2005;9(1):103–110. doi: 10.1007/s10461-005-1685-0. [DOI] [PubMed] [Google Scholar]
  9. Chesney M. The challenge of adherence. Bulletin of Experimental Treatments for AIDS. 1999;12(1):10–13. [PubMed] [Google Scholar]
  10. Chesney MA, Ickovics JR, Chambers DB, Gifford AL, Neidig J, Zwickl B, et al. Self-reported adherence to antiretroviral medications among participants in HIV clinical trials: The AACTG adherence instruments. Patient Care Committee and Adherence Working Group of the Outcomes Committee of the Adult AIDS Clinical Trials Group (AACTG) AIDS Care. 2000;12(3):255–266. doi: 10.1080/09540120050042891. [DOI] [PubMed] [Google Scholar]
  11. Crowne D, Marlowe D. Social desirability scale. In: The approval motive. John Wiley and Sons Inc; New York: 1964. [Google Scholar]
  12. Deschamps AE, Graeve VD, van Wijngaerden E, De Saar V, Vandamme AM, van Vaerenbergh K, et al. Prevalence and correlates of nonadherence to antiretroviral therapy in a population of HIV patients using Medication Event Monitoring System. AIDS Patient Care and STDS. 2004;18(11):644–657. doi: 10.1089/apc.2004.18.644. [DOI] [PubMed] [Google Scholar]
  13. Di Matteo MR, Hays RD, Gritz ER. Patient adherence to cancer control regimens: Scale development and initial validation. Psychological Assessment. 1993;5:101–112. [Google Scholar]
  14. Dunbar-Jacob J, Schlenk E. Patient adherence to treatment regimen. In: Baum A, Revenson TA, Singer JE, editors. Handbook of health psychology. Mahwah; NJ: 2001. pp. 571–580. [Google Scholar]
  15. Duong M, Piroth L, Peytavin G, Forte F, Kohli E, Grappin M, et al. Value of patient self-report and plasma human immunodeficiency virus protease inhibitor level as markers of adherence to antiretroviral therapy: relationship to virologic response. Clinical Infectious Diseases. 2001;33(3):386–392. doi: 10.1086/321876. Epub 2001 Jun 2021. [DOI] [PubMed] [Google Scholar]
  16. Gao X, Nau DP. Congruence of three self-report measures of medication adherence among HIV patients. Annals of Pharmacotherapy. 2000;34(10):1117–1122. doi: 10.1345/aph.19339. [DOI] [PubMed] [Google Scholar]
  17. Golin CE, Liu H, Hays RD, Miller LG, Beck CK, Ickovics J, et al. A prospective study of predictors of adherence to combination antiretroviral medication. Journal of General Internal Medicine. 2002;17(10):756–765. doi: 10.1046/j.1525-1497.2002.11214.x. [DOI] [PMC free article] [PubMed] [Google Scholar]
  18. Howard AA, Arnsten JH, Lo Y, Vlahov D, Rich JD, Schuman P, Stone VE, et al. A prospective study of adherence and viral load in a large multi-center cohort of HIV-infected women. AIDS. 2002;16(16):2175–2182. doi: 10.1097/00002030-200211080-00010. [DOI] [PubMed] [Google Scholar]
  19. Hugen PW, Burger DM, Aarnoutse RE, Baede PA, Nieuwkerk PT, Koopmans PP, Hekster YA. Therapeutic drug monitoring of HIV-protease inhibitors to assess noncompliance. Theraputic Drug Monitoring. 2002;24(5):579–587. doi: 10.1097/00007691-200210000-00001. [DOI] [PubMed] [Google Scholar]
  20. Hugen PW, Langebeek N, Burger DM, Zomer B, van Leusen R, Schuurman R, et al. Assessment of adherence to HIV protease inhibitors: Comparison and combination of various methods, including MEMS (electronic monitoring), patient and nurse report, and therapeutic drug monitoring. Journal of Acquired Immune Deficiency Syndrome. 2002;30(3):324–334. doi: 10.1097/00126334-200207010-00009. [DOI] [PubMed] [Google Scholar]
  21. Johnson RL, Botwinick G, Sell RL, Martinez J, Siciliano C, Friedman LB, et al. The utilization of treatment and case management services by HIV-infected youth. Journal of Adolescent Health. 2003;33(2 Suppl):31–38. doi: 10.1016/s1054-139x(03)00158-7. [DOI] [PubMed] [Google Scholar]
  22. Kalichman SC, Cain D, Fuhrel A, Eaton L, Di Fonzo K, Ertl T. Assessing medication adherence self-efficacy among low-literacy patients: development of a pictographic visual analogue scale. Health Education Research. 2005;20(1):24–35. doi: 10.1093/her/cyg106. [DOI] [PubMed] [Google Scholar]
  23. Kimmerling M, Wagner G, Ghosh-Dastidar B. Factors associated with accurate self-reported adherence to HIV antiretrovirals. International Journal of STD and AIDS. 2003;14(4):281–284. doi: 10.1258/095646203321264917. [DOI] [PubMed] [Google Scholar]
  24. Knobel H, Guelar A, Carmona A, Espona M, Gonzalez A, Lopez-Colomes JL, et al. Virologic outcome and predictors of virologic failure of highly active antiretroviral therapy containing protease inhibitors. AIDS Patient Care and STDs. 2001;15:193–199. doi: 10.1089/10872910151133729. [DOI] [PubMed] [Google Scholar]
  25. Lievens F. Trying to understand the different pieces of the construct validity puzzle of assessment centers: An examination of assessor and assessee effects. Journal of Applied Psychology. 2002;87(4):675–686. doi: 10.1037/0021-9010.87.4.675. [DOI] [PubMed] [Google Scholar]
  26. Liu H, Golin CE, Miller LG, Hays RD, Beck CK, Sanandaji S, et al. A comparison study of multiple measures of adherence to HIV protease inhibitors. Annals of Internal Medicine. 2001;134(10):968–977. doi: 10.7326/0003-4819-134-10-200105150-00011. [DOI] [PubMed] [Google Scholar]
  27. Llabre MM, Weaver KE, Durán RE, Antoni MH, McPherson-Baker S, Klimas N, et al. A measurement model of medication adherence to highly active antiretroviral therapy and its relation to viral load in HIV+ adults. AIDS Patient Care and STDs. 2006 doi: 10.1089/apc.2006.20.701. [DOI] [PubMed] [Google Scholar]
  28. Melbourne KM, Geletko SM, Brown SL, Willey-Lessne C, Chase S, Fisher A. Medication adherence in patients with HIV infection: A comparison of two measurement methods. AIDS Read. 1999;9(5):329–338. [PubMed] [Google Scholar]
  29. Miller LG, Hays RD. Adherence to combination antiretroviral therapy: Synthesis of the literature and clinical implications. The AIDS Reader. 2000a;10(3):177–185. [PubMed] [Google Scholar]
  30. Miller LG, Hays RD. Measuring adherence to antiretroviral medications in clinical trials. HIV Clinical Trials. 2000b;1(1):36–46. doi: 10.1310/enxw-95pb-5ngw-1f40. [DOI] [PubMed] [Google Scholar]
  31. Morisky DE, Green LW, Levine DM. Concurrent and predictive validity of a self-reported measure of medication adherence. Medical Care. 1986;24(1):67–74. doi: 10.1097/00005650-198601000-00007. [DOI] [PubMed] [Google Scholar]
  32. Murri R, Ammassari A, Gallicano K, De Luca A, Cingolani A, Jacobson D, et al. Patient-reported nonadherence to HAART is related to protease inhibitor levels. Journal of Acquired Immunodeficiency Syndromes. 2000;24(2):123–128. doi: 10.1097/00126334-200006010-00006. [DOI] [PubMed] [Google Scholar]
  33. Nieuwkerk PT. Electronic monitoring of adherence to highly active antiretroviral therapy changes medication-taking behaviour? AIDS. 2003;17(9):1417–1418. doi: 10.1097/00002030-200306130-00029. [DOI] [PubMed] [Google Scholar]
  34. Nieuwkerk PT, Oort FJ. Self-reported adherence to antiretroviral therapy for HIV-1 infection and virologic treatment response: a meta-analysis. Journal of Acquired Immune Deficiency Syndromes. 2005;38(4):445–448. doi: 10.1097/01.qai.0000147522.34369.12. [DOI] [PubMed] [Google Scholar]
  35. Osterberg L, Blaschke T. Adherence to medication. New England Journal of Medicine. 2005;353(5):487–497. doi: 10.1056/NEJMra050100. [DOI] [PubMed] [Google Scholar]
  36. Oyugi JH, Byakika-Tusiime J, Charlebois ED, Kityo C, Mugerwa R, Mugyenyi P, et al. Multiple validated measures of adherence indicate high levels of adherence to generic HIV antiretroviral therapy in a resource-limited setting. Journal of Acquired Immune Deficiency Syndromes. 2004;36(5):1100–1102. doi: 10.1097/00126334-200408150-00014. [DOI] [PubMed] [Google Scholar]
  37. Parienti JJ, Verdon R, Bazin C, Bouvet E, Massari V, Larouze B. The pills identification test: A tool to assess adherence to antiretroviral therapy. JAMA. 2001;285(4):412. doi: 10.1001/jama.285.4.412. [DOI] [PubMed] [Google Scholar]
  38. Paterson D, Potoski B, Capitano B. Measurement of adherence to antiretroviral medications. Journal of Acquired Immune Deficiency Syndromes. 2002;31:S103–S106. doi: 10.1097/00126334-200212153-00003. [DOI] [PubMed] [Google Scholar]
  39. Paterson D, Swindells S, Mohr J, Brester M, Vergis E, Squier C, et al. Adherence to protease inhibitor therapy and outcomes in patients with HIV infection. Annals of Internal Medicine. 2000;133(1):21–30. doi: 10.7326/0003-4819-133-1-200007040-00004. [DOI] [PubMed] [Google Scholar]
  40. Reynolds NR. Adherence to antiretroviral therapies: State of the science. Current HIV Research. 2004;2(3):207–214. doi: 10.2174/1570162043351309. [DOI] [PubMed] [Google Scholar]
  41. Rogers SM, Willis G, Al-Tayyib A, Villarroel MA, Turner CF, Ganapathi L, et al. Audio computer assisted interviewing to measure HIV risk behaviours in a clinic population. Sexually Transmitted Infections. 2005;81(6):501–507. doi: 10.1136/sti.2004.014266. [DOI] [PMC free article] [PubMed] [Google Scholar]
  42. Rudd P, Byyny RL, Zachary V, LoVerde ME, Mitchell WD, Titus C, et al. Pill count measures of compliance in a drug trial: Variability and suitability. American Journal of Hypertension. 1988;1(3 Pt 1):309–312. doi: 10.1093/ajh/1.3.309. [DOI] [PubMed] [Google Scholar]
  43. Samet JH, Sullivan LM, Traphagen ET, Ickovics JR. Measuring adherence among HIV-infected persons: Is MEMS consummate technology? AIDS and Behavior. 2002;5(1):21–30. [Google Scholar]
  44. Sankar A, Neufeld S, Nevedal D, Luborsky M. What is a missed dose? Implications for construct validity and patient adherence. AIDS Care. 2006 doi: 10.1080/09540120600708501. [DOI] [PMC free article] [PubMed] [Google Scholar]
  45. Simoni JM, Frick PA, Huang B. A longitudinal evaluation of a social support model of medication adherence among HIV-positive men and women on antiretroviral therapy. Health Psychology. 2006;25:74–81. doi: 10.1037/0278-6133.25.1.74. [DOI] [PMC free article] [PubMed] [Google Scholar]
  46. Simoni JM, Frick PA, Pantalone DW, Turner BJ. Antiretroviral adherence interventions: A review of current literature and ongoing studies. Topics in HIV Medicine. 2003;11(6):185–198. [PubMed] [Google Scholar]
  47. Simoni JM, Kurth A, Pearson CR, Pantalone DW, Merrill J, Frick PA. A review of self-report measures of HIV antiretroviral adherence. AIDS and Behavior. 2006 doi: 10.1007/s10461-006-9078-6. [DOI] [PMC free article] [PubMed] [Google Scholar]
  48. Simoni JM, Pantalone DW, Plummer MD, Huang B. A randomized controlled trial of a peer support intervention to improve antiretroviral adherence and decrease depressive symptomatology among HIV-positive individuals. Health Psychology. doi: 10.1037/0278-6133.26.4.488. (in press) [DOI] [PMC free article] [PubMed] [Google Scholar]
  49. Streiner DL. Breaking up is hard to do: the heartbreak of dichotomizing continuous data. Canadian Journal of Psychiatry. 2002;47(3):262–266. doi: 10.1177/070674370204700307. [DOI] [PubMed] [Google Scholar]
  50. Turner B. Adherence to antiretroviral therapy by human immunodeficiency virus-infected patients. The Journal of Infectious Diseases. 2002;185(Suppl 2):S143–151. doi: 10.1086/340197. [DOI] [PubMed] [Google Scholar]
  51. Turner BJ, Hecht FM. Improving on a coin toss to predict patient adherence to medications. Annals of Internal Medicine. 2001;134(10):1004–1006. doi: 10.7326/0003-4819-134-10-200105150-00015. [DOI] [PubMed] [Google Scholar]
  52. Wagner G, Miller LG. Is the influence of social desirability on patients’ self-reported adherence overrated? Journal of Acquired Immune Deficiency Syndromes. 2004;35(2):203–204. doi: 10.1097/00126334-200402010-00016. [DOI] [PubMed] [Google Scholar]
  53. Wagner GJ. Does discontinuing the use of pill boxes to facilitate electronic monitoring impede adherence? International Journal of STD and AIDS. 2003;14(1):64–65. doi: 10.1258/095646203321043327. [DOI] [PubMed] [Google Scholar]
  54. Wagner GJ, Kanouse DE, Koegel P, Sullivan G. Adherence to HIV antiretrovirals among persons with serious mental illness. AIDS Patient Care STDS. 2003;17(4):179–186. doi: 10.1089/108729103321619782. [DOI] [PubMed] [Google Scholar]
  55. Wagner GJ, Rabkin JG. Measuring medication adherence: Are missed doses reported more accurately then perfect adherence? AIDS Care. 2000;12(4):405–408. doi: 10.1080/09540120050123800. [DOI] [PubMed] [Google Scholar]
  56. Walsh JC, Horne R, Dalton M, Burgess AP, Gazzard BG. Reasons for non-adherence to antiretroviral therapy: patients’ perspectives provide evidence of multiple causes. AIDS Care. 2001;13(6):709–720. doi: 10.1080/09540120120076878. [DOI] [PubMed] [Google Scholar]
  57. Walsh JC, Mandalia S, Gazzard BG. Responses to a 1 month self-report on adherence to antiretroviral therapy are consistent with electronic data and virological treatment outcome. AIDS. 2002;16(2):269–277. doi: 10.1097/00002030-200201250-00017. [DOI] [PubMed] [Google Scholar]
  58. Wendel CS, Mohler MJ, Kroesen K, Ampel NM, Gifford AL, Coons SJ. Barriers to use of electronic adherence monitoring in an HIV clinic. Annals of Pharmacotherapy. 2001;35(9):1010–1015. doi: 10.1345/aph.10349. [DOI] [PubMed] [Google Scholar]
  59. Wewers ME, Lowe NK. A critical review of visual analogue scales in the measurement of clinical phenomena. Res Nurs Health. 1990;13(4):227–236. doi: 10.1002/nur.4770130405. [DOI] [PubMed] [Google Scholar]
  60. Wutoh A, Elekwachi O, Clarke-Tasker V, Daftary M, Powell N, Campusano G. Assessment and predictors of antiretroviral adherence in older HIV-infected patients. Journal of Acquired Immune Deficiency Syndromes. 2003;33(2):S106–S114. doi: 10.1097/00126334-200306012-00007. [DOI] [PubMed] [Google Scholar]