Abstract
This study sought to elucidate methodological issues in adherence research by comparing multiple methods of assessing adherence to antiretroviral medication. From 2003 to 2004, 24 youths with vertically acquired HIV disease (mean age = 14.0 years; range, 8–18) and their caregivers participated in a 6-month study. These children were all on highly active antiretroviral therapy (HAART) and were relatively healthy (mean CD4 absolute count = 711.8 ± 604.5). Adherence was assessed with the Medication Event Monitoring System (MEMS), pill counts, and interviews. Patients and caregivers completed the Perceptions of Adherence Study Participation (PASP) questionnaire. MEMS provided the most detailed adherence information, and its reliability was supported by significant correlations with medical markers. Pill counts yielded similar adherence rates, whereas patients and caregivers reported nearly perfect adherence in interviews. Problems were encountered with each method: MEMS caps were expensive, occasionally malfunctioned, and lacked a consistent guiding principle for data interpretation; with pill counts, families forgot to bring all medication bottles to clinic; and interviews were compromised by social desirability and difficulty reaching families by telephone. Most patients and caregivers believed study participation improved the child's adherence, although PASP ratings were unrelated to adherence at the study endpoint. MEMS may be the most reliable method, but pill counts offer comparable data at less cost, while interviews appeared least accurate in this study. Most participants reported positive perceptions of their research experience. A consensus among researchers is needed for defining and measuring adherence, and specific recommendations are offered for achieving this goal.
Introduction
The identification of variables associated with medication adherence is of critical importance in the development of effective interventions aimed at improving medication-taking behavior. Adherence is particularly essential among HIV-positive populations given that highly active antiretroviral therapy (HAART) has changed the course of HIV from a fatal disease to a chronic illness. With recent advances in antiretroviral (ARV) therapy that allow for reduced pill burden and less frequent dosing, adherence may be less challenging than in the past for many individuals. However, reliable adherence to HAART remains critical for preventing opportunistic infections and for consistent suppression of HIV-1 RNA (viral load) levels,1,2 while poor adherence increases the risk of disease progression and the development of drug resistance.3,4 Unfortunately, nonadherence remains a substantial problem in the United States, with adult studies citing prevalence rates of 3% to 68%.5–8 Among children and adolescents with HIV disease, estimates of the prevalence of nonadherence range from 16% to 60%.9–12 Methodological inconsistencies involving measurement and conceptualization of the construct may account for this variation and need to be pinpointed and addressed to ascertain the most accurate information about medication-taking behavior within this and other patient populations.
Methods of assessing adherence
Although numerous methodologies have been developed to measure adherence, all have been associated with a variety of limitations. Three commonly used assessment techniques are electronic monitoring systems, pill counts, and patient reports.
Electronic monitoring systems
In the past decade, studies using electronic monitoring methods have become increasingly common. These devices, such as the Medication Event Monitoring System (MEMS) Track cap, contain a microelectronic circuit that registers the exact dates and times of all bottle openings, thus providing more detailed information about the timing of doses than can be obtained through most other methods. Despite being considered among the most accurate methods of assessing adherence,13,14 the expense of these caps can be a limiting factor;15,16 for example, outfitting one medication for a sample of 50 patients can cost thousands of dollars. In addition, bottle openings do not always correspond to an ingested dose, and patients who use weekly pill trays may resist changing their routine if it works well for them.
Pill counts
Collecting adherence estimates via pill counts is a technique that has been promoted as relatively easy to use, inexpensive,17 and a valid indicator of overall adherence.9,18 Pill counts offer estimates of the total quantity of doses consumed across a given time span, usually between clinic visits, but do not provide information about the timing of those doses.17 Also, this method assumes that all pills missing from the bottles have been ingested. Another potential problem is the fact that when multiple family members take the same medication, which is often the case in families living with HIV disease, pills from the same bottle may be shared.19 Furthermore, accurate pill counts rely upon patients remembering to bring all empty, full, and partially full bottles into clinic. This may be particularly difficult since some patients keep pill supplies in multiple locations (e.g., at home, school, work, relatives' houses).19 Pill counting machines may reduce the time required and the likelihood of human error, and although these devices can be costly, they are a one-time expense as opposed to MEMS caps, which are not reusable across patients.
Patient and caregiver reports
Although some researchers support the validity of patient and/or caregiver interviews,20–23 multiple empirical studies have noted that such subjective reports are a less accurate method of assessing adherence compared to more objective measures, including biologic markers24 and electronic monitoring.13,15,25,26 For example, Farley et al.15 found that MEMS adherence rates were associated with undetectable viral loads, while caregiver reports were not. Hypothesized explanations for inflated patient and caregiver reports include social desirability,27,28 poor memory,25,28 and language comprehension difficulties, particularly in child reporters.11 In addition, perceptions of the patient's adherence may differ across family members.29
Issues in conceptualizing adherence
Irrespective of the methodology used to measure adherence, an additional source of variance in existing literature lies in the conceptualization of the construct. In the absence of universally accepted criteria for establishing adherence, researchers diverge in their methods of defining and enumerating successful medication-taking behavior.
Adherence commonly is quantified as the percentage of doses taken as prescribed. However, variables such as the frequency (number of doses per day) and quantity (number of pills per dose) as well as the complexity (e.g., number of medications) and specificity (e.g., meal indications, medication storage conditions) of the regimen often are handled differently across studies. The pronounced impact that such variation may have on reported outcomes and interpretations was exemplified by Arnsten et al.,30 who calculated mean adherence rates ranging from 26% to 64% in a sample of HIV-infected adults depending on the criteria used to establish adherence. Similarly, Giacomet et al.31 found that 79% of their sample of HIV-positive children were classified as 100% adherent when the quantity of doses taken was examined, whereas only 11% qualified as perfectly adherent when timing of dose administration and meal specifications were considered. Deschamps et al.32 found differences in MEMS adherence rates by comparing the percentage of days in which the correct number of doses were taken (mean adherence rate = 98%), the percentage of prescribed doses taken (91.5%), and the percentages of doses taken within a ± 1-hour window of the “target time” (86%). The above studies underscore the importance of establishing a consistent operational definition of adherence.

Further divergence occurs in the analytical approach used by adherence researchers. Data often are left in continuous form, as a number or percentage, for analytical and reporting purposes. In other cases, discrete categories are defined to classify adherence levels,22,33 and these categories vary across studies. Some researchers have imposed a single critical value, typically in the range of 80%15,34,35 to 100%,36–39 to dichotomize adherence. While several of these cutoff values are somewhat arbitrary, others have scientific underpinnings. For instance, the criterion of 100% adherence (no missed doses) over the span of a week used by Heckman et al.36 was rooted in the knowledge that maximal viral suppression is associated with adherence rates of 95% or above,33 and that 95% adherence equated to less than one missed dose in the given assessment period.
Participant perceptions of adherence assessments
While a number of past studies have evaluated the strengths and limitations of various methods of assessing adherence from the researchers' perspectives, no published reports were identified that included the perspectives of patients and their caregivers. An understanding of patients' and caregivers' perceptions of adherence research participation may lead to identification of the factors that influence their willingness to enroll in future studies and to follow adherence assessment procedures.
Study aims and hypotheses
The current study sought to describe the medication adherence of a group of HIV-positive children taking HAART, and to compare adherence rates obtained by use of electronic monitoring, pill counts, and patient and caregiver interviews. We hypothesized that adherence rates obtained from electronic measurement would be lower than patient and caregiver interviews, and would be more reliable as indicated by their relationship with medical markers. In an exploratory fashion, patient and caregiver perspectives of adherence research participation were compared and also examined in terms of their relation to adherence rates.
Method
Eligible participants
HIV-positive children from around the United States were referred for participation in ARV treatment protocols at the National Cancer Institute (NCI), typically by their local medical providers. From 2003 to 2004, families of these children were invited to participate in the current adherence study during a routine clinic visit at any point during their ARV treatment protocol, provided they met all eligibility requirements. To be eligible, children must have been between 8 and 18 years of age, have had vertically acquired HIV disease, and have been on a HAART regimen that included a protease inhibitor (PI) in solid form for at least 3 months.
Thirty-eight eligible families were invited to participate, and 30 agreed to enroll. Of these, 2 families withdrew because the caregivers and/or patients reported discomfort with study procedures (e.g., “felt we were being checked up on”). Two patients' ARV medications were discontinued temporarily for medical reasons. One caregiver with a history of psychiatric problems became depressed during her participation and requested that she and her child be withdrawn from the study. Finally, 1 child's medical care was transitioned to another clinic in the patient's home town. Thus, 24 families provided baseline and follow-up data for this study. Seventeen of these families participated in the electronic monitoring arm of the study (as described in more detail in the following sections).
Measures and procedures
The current study was approved by the NCI Institutional Review Board. Informed consent was obtained from the patient's parent or legal guardian prior to enrollment, and minor assent was obtained from all patients as well.
Electronic monitoring
MEMS track caps record the exact times and dates when a pill bottle is opened. At the baseline evaluation, each family was given a MEMS cap and a log for recording cap openings that did not reflect an actual dosing (e.g., if they removed the cap accidentally or took out pills early for a dose to be taken later). Such logs have been used in previous studies of adherence.8 MEMS adherence rates were calculated in four ways: as the percentage of doses taken within a 1-, 2-, and 3-hour window of the target time, and as the percentage of days in which the correct number of doses were taken (regardless of dosing times). Data from MEMS caps and MEMS logs were obtained at the patients' 3-month and 6-month visits. Families that used pill trays were given the option of using a MEMS cap during the study, but were told that they could continue using their pill trays if they preferred in order to avoid disrupting a routine that was working well for them. Families who chose to be on the non-MEMS arm of the study could participate in all other protocol procedures.
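To make these four calculations concrete, the following is a minimal sketch (in Python) of how adherence percentages might be derived from cap-opening timestamps. The timestamps, target times, and function names are hypothetical illustrations rather than the study's actual analysis code, and the target times are assumed known (see the target-time calculation described below).

```python
from datetime import datetime, timedelta

# Hypothetical cap openings for a twice-daily (every 12 hours) regimen over 3 days;
# the evening dose on day 3 is missing.
openings = [
    datetime(2004, 6, 1, 8, 5),  datetime(2004, 6, 1, 20, 40),
    datetime(2004, 6, 2, 7, 55), datetime(2004, 6, 2, 21, 15),
    datetime(2004, 6, 3, 8, 10),
]
start, days, doses_per_day = datetime(2004, 6, 1), 3, 2
target_hours = [8, 20]  # assumed target times for this patient

def window_adherence(window_hours):
    """Percentage of prescribed doses with an opening within +/- window_hours of a target."""
    taken, used = 0, set()
    for d in range(days):
        for hour in target_hours:
            target = start.replace(hour=hour) + timedelta(days=d)
            for i, o in enumerate(openings):
                if i not in used and abs((o - target).total_seconds()) <= window_hours * 3600:
                    taken += 1
                    used.add(i)
                    break
    return 100.0 * taken / (days * doses_per_day)

def correct_days():
    """Percentage of days on which the correct number of openings was recorded."""
    counts = [sum(o.date() == (start + timedelta(days=d)).date() for o in openings)
              for d in range(days)]
    return 100.0 * sum(c == doses_per_day for c in counts) / days

for w in (1, 2, 3):
    print(f"+/-{w}-hour window: {window_adherence(w):.1f}%")          # 66.7%, 83.3%, 83.3%
print(f"Days with correct number of doses: {correct_days():.1f}%")   # 66.7%
```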
For consistency, we monitored each patient's primary PI. Thus, at study entry, the MEMS cap was placed on a new bottle of the patient's PI. Families were instructed on how to use the MEMS cap and given a list of written instructions to take home. These instructions included the fact that the caregiver should remove the cap from the first bottle when it was empty and place it onto the next full bottle of the PI.
In order to calculate adherence based on doses taken during a certain time frame (e.g., 12 ± 2 hours apart for twice daily dosing), it was necessary to calculate a target time from which acceptable windows were defined. This was achieved by reviewing the data obtained from the MEMS software to identify times, 12 hours apart for twice daily dosing and 8 hours apart for three times daily dosing, that yielded the highest percentage of doses taken within the specified time frames. Data for some patients included a schedule transition (i.e., they began taking their medication at later times when school ended for the summer). To avoid penalizing these patients unfairly, their data were broken down into phases, with each phase utilizing a different target time. We defined a schedule transition as a change from one set of target time points to another set that continued for at least 1 month or until the dataset concluded, whichever came first. Changes in dose time due to weekends or brief holidays did not constitute a transition. Doses were inserted or deleted from the MEMS data according to patient logs.
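A sketch of this target-time search is shown below: it scans candidate first-dose times and selects the schedule (times 12 hours apart for twice-daily dosing) that captures the largest percentage of openings within the chosen window. The example data and the 15-minute search granularity are illustrative assumptions, not the study's actual procedure.

```python
from datetime import datetime

def best_target_time(openings, window_hours=2, interval_hours=12):
    """Return the first-dose clock time (in hours, e.g., 7.75 = 07:45) whose evenly spaced
    schedule captures the highest percentage of openings within +/- window_hours."""
    def pct_within(first_hour):
        targets = [first_hour + k * interval_hours for k in range(24 // interval_hours)]
        hits = 0
        for o in openings:
            clock = o.hour + o.minute / 60.0
            # circular distance to the nearest target time (wrapping around midnight)
            dist = min(min(abs(clock - t), 24 - abs(clock - t)) for t in targets)
            hits += dist <= window_hours
        return 100.0 * hits / len(openings)

    candidates = [h / 4 for h in range(24 * 4)]  # every 15 minutes
    return max(candidates, key=pct_within)       # earliest candidate on ties

# Hypothetical openings clustered near 07:45 and 20:00
openings = [
    datetime(2004, 6, 1, 7, 50), datetime(2004, 6, 1, 20, 5),
    datetime(2004, 6, 2, 7, 40), datetime(2004, 6, 2, 19, 55),
    datetime(2004, 6, 3, 8, 0),  datetime(2004, 6, 3, 20, 10),
]
print(best_target_time(openings))
```

For patients whose data included a schedule transition, the same search would simply be repeated separately within each phase.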
Pill counts
For the pill count assessment, the number of pills returned to clinic was subtracted from the number of pills dispensed at the previous visit. That number was divided by the number of pills the patient should have taken according to their prescribed dosing schedule, and then multiplied by 100 to obtain the adherence percentage. To facilitate accurate pill counts, families were asked to bring in all full, partially full, and empty bottles of the PI at the baseline visit. When they returned for their 3-month and 6-month appointments, all remaining pills were counted using an electronic pill counter.
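Expressed as code, the calculation reduces to the following (the numbers are hypothetical and serve only to illustrate the arithmetic):

```python
def pill_count_adherence(dispensed, returned, prescribed):
    """Pill count adherence: pills presumed taken as a percentage of pills prescribed."""
    return 100.0 * (dispensed - returned) / prescribed

# Hypothetical 3-month interval: 180 pills dispensed, 22 returned, 182 prescribed
print(round(pill_count_adherence(dispensed=180, returned=22, prescribed=182), 1))  # 86.8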
Patient and caregiver interviews
To gather self-report adherence data, structured interviews with patients and caregivers were conducted monthly throughout the study, by telephone at months 1, 2, 4, and 5, and in person at the patients' 3- and 6-month clinic visits. During these interviews, a research assistant asked respondents how many doses of the patient's PI had been missed during the previous 3 days, and adherence was calculated as the percentage of doses taken over those 3 days. Composite adherence variables were formed by combining interview data from months 1–3 (time 1) and months 4–6 (time 2). When interview data were missing from a certain time-point, data from the remaining time-points were combined and adherence percentages were calculated to reflect the percentage of doses taken within the time period covered by the interviews.
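A minimal sketch of how such a composite might be assembled when one interview is missed is given below; the dictionaries and numbers are hypothetical.

```python
def composite_interview_adherence(interviews):
    """Pool 3-day recall interviews: doses reportedly taken as a percentage of doses
    prescribed, computed over only the interviews that were actually completed."""
    completed = [iv for iv in interviews if iv is not None]
    taken = sum(iv["prescribed"] - iv["missed"] for iv in completed)
    prescribed = sum(iv["prescribed"] for iv in completed)
    return 100.0 * taken / prescribed

# Hypothetical time 1 (months 1-3), twice-daily regimen = 6 prescribed doses per 3-day recall;
# the month-2 interview could not be completed, so it drops out of the denominator.
time1 = [{"prescribed": 6, "missed": 1}, None, {"prescribed": 6, "missed": 0}]
print(round(composite_interview_adherence(time1), 1))  # 91.7
```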
Perceptions of adherence study participation
At the 6-month study end point, patients and caregivers were administered the Perceptions of Adherence Study Participation (PASP) questionnaire. This measure was designed by the research team to evaluate participants' perceptions of the adherence assessment tools, as well as their general perceptions of being involved in the research study. The measure contains five questions assessing beliefs about study participation (e.g., “Participation in this study has improved the way my child takes his/her medicine”) and responses are based on a 5-point Likert scale ranging from “strongly disagree” to “strongly agree.” For each statement marked “agree” or “strongly agree,” participants indicated which part(s) of the study (i.e., MEMS cap, MEMS logs, pill counts, telephone interviews, or other) led them to feel that way. In addition, open-ended questions provided patients and caregivers with the opportunity to identify other factors that influenced their perceptions of study participation.
Medical variables
CD4+ percentages and absolute counts as well as levels of HIV-1 RNA PCR (viral load) were collected as required by the patient's ARV treatment protocol. These values were obtained at the child's baseline, 3-month, and 6-month evaluations on the adherence study.
Statistical analyses
Descriptive statistics were calculated for demographic, medical, and adherence variables. Analyses of variance (ANOVAs) and Pearson correlations were used to assess relationships of adherence to demographic and medical variables at 3 and 6 months. χ2 tests of independence and ANOVAs were utilized to assess differences in demographic variables and adherence (pill count) rates between participants in the MEMS and non-MEMS arms of the study. A paired t test was used to examine the difference between MEMS and pill count adherence rates; interviews were not included in this analysis since they covered a different time period compared to MEMS and pill counts (3 days versus approximately 3 months). In a separate t test, adherence rates were compared between child interviews and caregiver interviews. Differences among the four MEMS calculations (1-, 2-, 3-hour windows, and percentage of days in which the correct number of doses were taken) were assessed with a repeated measures ANOVA with post hoc comparisons. Finally, to determine if there was a relationship between perceptions of participation and adherence, participants were divided into two groups based on whether their mean score on the five PASP items was greater than or equal to 4 (indicating more positive perceptions of participation) or less than 4 (indicating less positive perceptions of participation). Mean adherence percentages for these two groups were compared using ANOVA.
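For readers wishing to reproduce analyses of this kind, the sketch below shows the main comparisons using standard Python statistical libraries. The data are synthetic placeholders (not study data), and the original analyses were presumably run in a dedicated statistics package.

```python
import numpy as np
import pandas as pd
from scipy import stats
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(0)
n = 17  # size of the MEMS arm

# Synthetic adherence percentages (illustration only)
mems_2h = rng.normal(78, 20, n).clip(0, 100)
pill_count = rng.normal(86, 15, n).clip(0, 100)

# Paired t test: MEMS (2-hour window) versus pill count adherence in the same patients
t_stat, p_paired = stats.ttest_rel(mems_2h, pill_count)

# Repeated-measures ANOVA across the four MEMS calculations
long = pd.DataFrame({
    "subject":   np.repeat(np.arange(n), 4),
    "method":    np.tile(["1h", "2h", "3h", "correct_days"], n),
    "adherence": rng.normal(75, 20, 4 * n).clip(0, 100),
})
rm_results = AnovaRM(long, depvar="adherence", subject="subject", within=["method"]).fit()

# PASP grouping: mean of the five items >= 4 versus < 4, compared on adherence by ANOVA
pasp_mean = np.linspace(2.5, 4.8, n)          # placeholder PASP item means
positive = pasp_mean >= 4
f_stat, p_anova = stats.f_oneway(mems_2h[positive], mems_2h[~positive])
```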
Results
Demographic variables
Table 1 shows the demographic characteristics of the study participants. Twenty-four children (13 boys and 11 girls) with vertically acquired HIV infection participated (mean age = 14.0 years; range, 8 to 18 years). Most children were African American (46%) or Caucasian (42%), slightly more than half (54%) were living in two-parent homes, and 42% were the biologic offspring of their caregiver. Seven (29%) children were living with at least one caregiver who was HIV-positive. The mean years of caregiver education was 13.4 (standard deviation [SD] = 3.1).
Table 1.
Demographic Characteristics of Child and Caregiver Participants (N = 24)
Characteristic | n (%) |
---|---|
Gender of child | |
Male | 13 (54) |
Female | 11 (46) |
Ethnicity of child | |
African American | 11 (46) |
Caucasian | 10 (42) |
Hispanic | 1 (4) |
Native American | 2 (8) |
Family composition | |
Single-parent | 11 (46) |
Two-parent | 13 (54) |
Child–caregiver relationship | |
Biologic parent | 10 (42) |
Extended family | 7 (29) |
Adoptive parent | 5 (21) |
Foster parent | 2 (8) |
Medical variables
With respect to medical functioning, the mean CD4 percentage at baseline was 27% (SD = 12%), the mean absolute CD4 count was 711.8 (SD = 604.5), and the mean viral load was 19,968 (SD = 46,997). Nine (38%) children had undetectable viral load levels at study enrollment. The mean length of time since initiation of the children's first antiretroviral regimen was 6.9 years (SD = 0.68). Eighty-eight percent of patients were on either three or four antiretroviral drugs (all regimens included one or more PIs). Also, 88% were on a twice-daily dosing schedule and the remaining 12% were on a three times per day schedule.
Adherence rates
Given their significant relationship with medical markers (results described below), MEMS ± 2-hour adherence percentages were used to represent adherence rates in all analyses that follow except where otherwise specified. There were no significant differences between 3-month and 6-month adherence percentages for the three assessment methods. Thus, only 6-month data are displayed in Table 2 for simplicity. There were no differences in MEMS adherence rates by demographic variables, including gender, ethnicity, family composition (single- versus two-parent home), child–caregiver relationship (biologic versus alternative caregiver), HIV status of primary caregiver, or caregiver education (p > 0.05). The difference in adherence rates, as measured by pill count, between patients who used a MEMS cap (84.2% ± 15.3%) and those who did not (77.7% ± 19.4%) was not statistically significant (p = 0.44).
Table 2.
Adherence Percentages for MEMS, Pill Counts, and Interviews
Adherence assessment | Mean (%) | SD |
---|---|---|
MEMS: 1-hour window | 66.87 | 24.1 |
MEMS: 2-hour window | 78.47 | 21.9 |
MEMS: 3-hour window | 81.60 | 21.1 |
MEMS: days correct doses taken | 76.60 | 24.9 |
Pill count | 86.49 | 15.5 |
Child interview | 97.07 | 7.6 |
Caregiver interview | 99.77 | 1.1 |
All data are from the 6-month evaluation.
MEMS, Medication Event Monitoring System; SD, standard deviation.
There was not a significant difference between adherence rates obtained from MEMS (78%) and pill counts (86%; p = 0.059), or between child (98%) and caregiver interviews (99%; p = 0.088). Seventy-five percent of caregivers and 67% of patients reported 100% adherence at every interview despite contradictory pill count and MEMS data. For the seventeen families who used MEMS caps, data from the 3 days corresponding to the interviews were compared to interview data in order to determine the level of consistency between the two measures. Interview data from only two (12%) caregivers and two (12%) children (from the same two families) were completely consistent with MEMS data at every time-point, and adherence was 100% in both patients during each of these time periods. The remaining 88% of caregivers and 88% of children reported that they had taken at least one dose that the MEMS caps did not record; none of these discrepancies were accounted for on the MEMS logs.
Significant differences emerged among the four methods used to calculate MEMS adherence rates (F[1,14] = 168.53, p < 0.001). Post hoc comparisons revealed different adherence rates across each of the three time windows (1-hour versus 2-hour versus 3-hour; p < 0.001). When defined as the percentage of days in which the correct number of doses were taken, mean adherence rates were significantly higher than rates obtained via the 1-hour window (F[1,14] = 14.51, p = 0.002) and significantly lower than rates obtained from the 3-hour window (F[1,14] = 12.10, p = 0.004). There was not a significant difference between the percentage of days in which the correct number of doses were taken and the 2-hour time window (p = 0.23).
Relationship between adherence and medical markers
Of the three methods used to assess adherence in this study, only MEMS adherence rates were related to medical markers. As shown in Table 3, MEMS adherence percentages using a 2-hour and 3-hour window correlated significantly with CD4+ percentages at 3 months (p = 0.048 and 0.04, respectively), and with viral load at 3 months (p = 0.04 and 0.01) and 6 months (both p = 0.03). Medical markers were unrelated to MEMS adherence percentages calculated using a 1-hour window or according to the percentage of days in which the correct number of doses were taken. Neither pill counts nor interview data were correlated with any of the medical markers. Also, CD4 absolute counts were not correlated with any of the adherence rates at either time point.
Table 3.
Correlations Between Adherence and Medical Markers
Adherence assessment | 3-month CD4 % | 3-month CD4 abs | 3-month VL | 6-month CD4 % | 6-month CD4 abs | 6-month VL |
---|---|---|---|---|---|---|
MEMS 1-hour window | 0.46 | 0.36 | −0.42 | 0.45 | 0.30 | −0.48 |
MEMS 2-hour window | 0.50a | 0.43 | −0.51a | 0.43 | 0.29 | −0.55a |
MEMS 3-hour window | 0.52a | 0.40 | −0.61a | 0.45 | 0.31 | −0.56a |
Pill count | −0.24 | −0.32 | −0.10 | −0.16 | −0.05 | −0.36 |
Child interview | −0.17 | −0.01 | −0.25 | −0.11 | 0.01 | −0.30 |
Caregiver interview | −0.15 | 0.03 | −0.13 | −0.21 | −0.08 | 0.02 |
a p < 0.05.
Three-month medical markers are correlated with 3-month adherence data and 6-month medical markers are correlated with 6-month adherence data.
CD4 abs, CD4 absolute counts; VL, viral load.
A comparison of adherence assessment methods: researchers' perspectives
Electronic monitoring
Several problems emerged with the use of MEMS in this study, some of which have not been adequately addressed in the literature. For example, upon study enrollment, some families who used pill trays worried about disrupting their routine and declined using MEMS for that reason (n = 7). Also, data for nearly all patients (94%) contained “phantom openings,” or openings that occurred in excess of the prescribed regimen, with as many as nine openings being displayed in 1 day. These openings deflated the adherence rates representing the percentage of days in which the correct number of doses was taken, as the day would be considered “nonadherent” if there were four openings recorded instead of the two that the regimen required. Because pill count data did not indicate that patients were taking too many pills, we countered this artificial deflation by retaining the doses that fell closest to the target times. If no two points came close to the target times, we retained the doses that were most consistent with those taken in the day immediately preceding this period. Although this gives the patient the benefit of the doubt in terms of the timing of their doses, others have handled this problem in a similar fashion.40
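As an illustration of this cleaning step, the sketch below keeps, for each day, only the opening closest to each target time and discards surplus "phantom" openings. The data and function are hypothetical and simplified relative to the study's actual handling (which also considered the preceding day's pattern when no opening fell near a target).

```python
from datetime import datetime

def drop_phantom_openings(openings, target_hours):
    """For each calendar day, retain only the opening closest to each target time;
    any additional openings that day are treated as phantom openings and discarded."""
    kept = []
    for day in sorted({o.date() for o in openings}):
        todays = [o for o in openings if o.date() == day]
        used = set()
        for hour in target_hours:
            target = datetime(day.year, day.month, day.day, hour)
            candidates = [o for o in todays if o not in used]
            if candidates:
                best = min(candidates, key=lambda o: abs((o - target).total_seconds()))
                kept.append(best)
                used.add(best)
    return sorted(kept)

# Hypothetical day with four recorded openings on a twice-daily (08:00 / 20:00) regimen
openings = [
    datetime(2004, 6, 1, 7, 55),  datetime(2004, 6, 1, 8, 2),
    datetime(2004, 6, 1, 14, 30), datetime(2004, 6, 1, 19, 50),
]
print(drop_phantom_openings(openings, target_hours=[8, 20]))
# keeps 08:02 and 19:50; the 07:55 and 14:30 openings are dropped
```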
Additional variance was introduced with the MEMS logs. Six (35%) of the 17 patients using a MEMS cap returned complete logs, 4 (24%) submitted partially completed logs, and 7 (41%) did not submit a log at any point during the study. Accuracy of the MEMS data for patients without complete logs was questionable. It may be that caregivers who were more vigilant about completing their logs were also more vigilant about their child's adherence.
Finally, technical difficulties presented multiple problems. Four patients required a replacement MEMS cap due to an apparent malfunction. In addition to the financial cost of replacing the caps, data were lost during the interval between the discovery that a cap was not working and the family's receipt of a new cap in the mail.
Pill counts
Of the 48 possible pill counts (24 patients, 2 time-points each), 16 were invalid or missing due to practitioner or patient/caregiver error. Prior to this study, our medical team did not complete pill counts routinely. During the study, there were several instances of practitioners disposing of the remaining pills before they were counted. On three occasions, patients forgot to bring in their pills and/or empty bottles. Several patients had pills at another family member's home and forgot to bring those pills to clinic. Finally, 2 siblings who enrolled on the study at the same time both were taking nelfinavir. During the patients' 3-month adherence interview, we were informed that the siblings had been sharing pills from each other's bottles; this rendered the 3-month pill count data invalid. We requested that the caregiver ensure that each child took pills only from their assigned bottle from that point on.
Interviews
A common problem with adherence telephone interviews involved having to reach the child and caregiver on the same day in order to assess adherence over the same 3-day period. A total of 8 child and 8 caregiver interviews were missed because of difficulty contacting one or both family members. One may speculate that participants who had been less adherent may have avoided the telephone interviews for that reason. Also, adherence estimates from interviews were much higher than those obtained from MEMS and pill counts, suggesting a social desirability bias that compromised the validity of the self-report data.
A comparison of adherence assessment methods: caregiver and patient perspectives
On the PASP, 87% of caregivers agreed that study participation made them more aware of their child's medication-taking behavior, and most attributed this increased awareness to MEMS (55%) and/or pill counts (45%). More specifically, 73% believed the study helped them become more aware of what time their child was taking his/her medicine. Additionally, 59% believed that study participation improved their child's adherence; of those, most (69%) attributed this improvement to MEMS. Fifty-nine percent of caregivers believed they would do better at monitoring their child's adherence in the future, with most (69%) of those attributing future improvements to MEMS.
Patients reported similar results, with 71% indicating that study participation increased their awareness of their medication-taking behavior. The majority of patients attributed this greater awareness to telephone interviews (59%) and/or MEMS (53%). Seventy-one percent of patients also reported greater awareness of what time they took their medicine, again attributing this to phone interviews (47%) and/or MEMS (47%). Seventy-nine percent of patients indicated study participation helped them “do a better job” of taking their medication. Moreover, 79% believed their adherence would be better in the future and judged phone interviews as the contributing factor (48%) followed by MEMS (46%). Thirty-six percent of caregivers and 8% of patients reported that study participation was stressful, most often citing pill counts as the primary reason.
The open-ended section of the PASP allowed caregivers and patients to identify other factors relevant to their perceptions of study participation. Four caregivers and one patient reported that their involvement in the study made them feel as though they had a greater support system, as the study involved monthly phone calls between research team members and families. Moreover, three caregivers and one patient reported feeling as though they were being held more accountable for medication administration, ultimately influencing their adherence. One patient wrote, “I liked that somebody was keeping up with my taking medicine every month, and I know I need to take my medicine to stay healthy.” One caregiver expressed that being part of an adherence study validated the challenges in her own family, explaining, “Knowing that others may have the same problems we do [with medication adherence] made us feel more normal.”
Relationship between adherence and perceptions of study participation
Mean adherence rates (via MEMS) averaged across time 1 and time 2 were not significantly different between caregivers with more positive (M = 80.8, SD = 19) versus less positive (M = 79.5, SD = 17) perceptions of study participation (F[1,15] = 0.02, p = 0.85). Similarly, adherence was not significantly different between patients with more positive (M = 80.3, SD = 18) versus less positive (M = 79.5, SD = 17) perceptions of study participation (F[1,15] = 0.44, p = 0.52).
Discussion
To our knowledge, this study is the first to directly compare multiple adherence measures in terms of indicated levels of adherence, consistency with medical markers, researcher perspectives, and caregiver and patient perspectives. With respect to adherence outcomes, child and caregiver interviews offered the highest estimates while pill counts and MEMS yielded rates of adherence that were notably lower. Moreover, adjusting the MEMS time window (e.g., from 1 hour to 2 hours) changed adherence percentages dramatically for many patients. Consistent with our hypothesis, only MEMS data using 2-hour and 3-hour windows produced significant correlations with medical markers, both with CD4+ percentages (at 3 months) and viral load (at 3 and 6 months), suggesting that this method of assessment may be most reliable to the extent that adherence is linked to clinical outcomes. We chose to use MEMS 2-hour adherence rates in most analyses rather than 3-hour rates to be slightly more conservative. The above differences between adherence measures highlight the fact that medication-taking behavior can appear very different depending on the method of analysis and interpretation utilized.
Based on the researchers' collective experience conducting this study, numerous limitations involving each of the three adherence measures were encountered. With respect to electronic monitoring, some families declined the use of a MEMS cap because they used pill trays, thus limiting the data for our study. This problem is cited in other studies as well15,16,41 despite some research suggesting that the temporary use of MEMS caps among individuals previously using pill trays does not decrease their adherence.42 As noted previously in this paper and numerous others,16,40,43–45 MEMS data reflect only the number of times the cap is removed from the bottle rather than how many pills are ingested by the patient. In addition, target time calculations were very time-consuming, particularly for patients whose data incorporated a schedule transition.
Barriers to obtaining accurate pill counts included families forgetting to bring all medication bottles to clinic and siblings who were prescribed the same drug sharing pills. However, pill counts remain an inexpensive and relatively easy method of obtaining adherence estimates. The reliability of pill counts may be improved by providing more explicit instructions to families about the importance of bringing all pill bottles to each visit, including those not kept with the primary pill bottle (e.g., relative's house, pill trays), and calling families to remind them to bring their pills before each visit. More frequent reminders to the medical team regarding the procedures for obtaining pill count data may be useful as well, especially if pill counts have not been a routine part of medical visits prior to study initiation.
Consistent with our findings, several past studies have proposed that self-reports overestimate adherence,13,30,46 while others have found comparatively worse adherence through interviews.11,31,47 One reason that our sample may have produced inflated self-reports is that families who obtain their care at the NIH are required to be enrolled on a research protocol. These families receive all study evaluations and medications at no cost. Thus, they may have been motivated to appear adherent to avoid being removed from their primary medical protocol. Another difficulty in obtaining interview data was reaching participants by telephone; asking families to provide their email addresses to help schedule specific interview times may be helpful for future studies.
With respect to caregiver and patient perspectives, most reported that study participation was helpful, with caregivers most often citing MEMS and pill counts and patients most often citing telephone interviews and MEMS. The perceived benefit of the phone interviews was presumably due to the regular contact with the medical team. Specific benefits of study involvement included increased accountability and a greater support system due to more frequent monitoring, as well as normalization of adherence challenges. Many participants indicated present and anticipated future improvements in the child's medication adherence, but neither patient nor caregiver PASP ratings were significantly related to adherence percentages during the study, and no follow-up measures of adherence were collected. Studies with longer-term follow-up assessments may determine whether adherence research participation leads to beneficial adherence outcomes that are maintained over time.
Conclusions
Despite this study's limitations, several tentative conclusions and recommendations can be offered. Given the various advantages and disadvantages of the adherence measures compared in this study and the relationship between these measures and medical markers, MEMS seemed to be the most reliable method of assessing adherence, particularly when rates were calculated using the 2- or 3-hour windows. However, had additional steps been taken regarding collection of pill count data (as discussed in the preceding paragraphs), this method may have offered a similarly reliable account of adherence at less expense. Interviews should not be used as the sole measure of adherence but may be useful for supplementing another method, particularly given their cost effectiveness and the fact that many participants appreciated the more frequent contact with the medical team.
Researchers should choose a method of measuring adherence based on the goals of their study. For example, studies aimed at gaining an in-depth understanding of medication-taking behavior may benefit from having access to detailed information about the timing of doses that is available from MEMS caps. If funding is a prohibitive factor to using MEMS, pill counts can offer a sound alternative and involve a much less complex process of data analysis and interpretation compared to MEMS. Furthermore, researchers should consider the capacities of the patient population and the potential burden imposed by each method. For example, methods that rely on a patient's recall (e.g., to bring pill bottles to clinic visits and of recent pill-taking behavior) may be inappropriate with a medical population whose memory is compromised due to dementia or other cognitive deficits.
While many of the problems described in this article have been noted in the literature, few studies provided detailed methodology regarding how those problems were addressed. For example, among studies that used MEMS caps, it would help to know how the researchers handled missing data from broken caps, extraneous cap openings, and data from logs or diaries that did not correspond with MEMS data. No measure of adherence is perfect, and each measure should be used with a detailed set of methods to guide data collection, analysis, and interpretation, as well as detailed instructions for participants. This article provides a starting point for reaching a consensus regarding defining and assessing adherence, strategies for addressing methodological challenges in adherence research, and an improved confidence in the interpretations that stem from study findings.
Acknowledgments
The authors wish to express their sincere appreciation to the families who participated in this research. We also would like to thank Mary Anne Toledo-Tamula, M.A. and the rest of the pediatric HIV working group of the National Cancer Institute.
This research was supported by the Intramural Research Program of the National Institutes of Health, National Cancer Institute, and by federal contracts N01-SC-07006 and HHSN261200477004C.
Author Disclosure Statement
No competing financial interests exist.
References
1. Berk DR, Falkovitz-Halpern MS, Hill DW, et al. Temporal trends in early clinical manifestations of perinatal HIV infection in a population-based cohort. JAMA. 2005;293:2221–2231. doi: 10.1001/jama.293.18.2221.
2. Gortmaker SL, Hughes M, Cervia J, et al. Effect of combination therapy including protease inhibitors on mortality among children and adolescents infected with HIV-1. N Engl J Med. 2001;345:1522–1528. doi: 10.1056/NEJMoa011157.
3. Harrigan PR, Hogg RS, Dong WW, et al. Predictors of HIV drug-resistance mutations in a large antiretroviral-naive cohort initiating triple antiretroviral therapy. J Infect Dis. 2005;191:339–347. doi: 10.1086/427192.
4. Sethi AK, Celentano DD, Gange SJ, Moore RD, Gallant JE. Association between adherence to antiretroviral therapy and human immunodeficiency virus drug resistance. Clin Infect Dis. 2003;37:1112–1118. doi: 10.1086/378301.
5. Barclay TR, Hinkin CH, Castellon SA, et al. Age-associated predictors of medication adherence in HIV-positive adults: Health beliefs, self-efficacy, and neurocognitive status. Health Psychol. 2007;26:40–49. doi: 10.1037/0278-6133.26.1.40.
6. Flandre P, Peytavin G, Meiffredy V, et al. Adherence to antiretroviral therapy and outcomes in HIV-infected patients enrolled in an induction/maintenance randomized trial. Antivir Ther. 2002;7:113–121.
7. Goldman JD, Cantrell RA, Mulenga LB, et al. Simple adherence assessments to predict virologic failure among HIV-infected adults with discordant immunologic and clinical responses to antiretroviral therapy. AIDS Res Hum Retroviruses. 2008;24:1031–1035. doi: 10.1089/aid.2008.0035.
8. Hugen PW, Langebeek N, Burger DM, et al. Assessment of adherence to HIV protease inhibitors: Comparison and combination of various methods, including MEMS (electronic monitoring), patient and nurse report, and therapeutic drug monitoring. J Acquir Immune Defic Syndr. 2002;30:324–334. doi: 10.1097/00126334-200207010-00009.
9. Farley JJ, Montepiedra G, Storm D, et al. Assessment of adherence to antiretroviral therapy in perinatally HIV-infected children and youth using self-report measures and pill count. J Dev Behav Pediatr. 2008;29:377–384. doi: 10.1097/DBP.0b013e3181856d22.
10. Marhefka SL, Koenig LJ, Allison S, et al. Family experiences with pediatric antiretroviral therapy: Responsibilities, barriers, and strategies for remembering medications. AIDS Patient Care STDs. 2008;22:637–647. doi: 10.1089/apc.2007.0110.
11. Mellins CA, Brackis-Cott E, Dolezal C, Abrams EJ. The role of psychosocial and family factors in adherence to antiretroviral treatment in human immunodeficiency virus-infected children. Pediatr Infect Dis J. 2004;23:1035–1041. doi: 10.1097/01.inf.0000143646.15240.ac.
12. Williams PL, Storm D, Montepiedra G, et al. Predictors of adherence to antiretroviral medications in children and adolescents with HIV infection. Pediatrics. 2006;118:e1745–e1757. doi: 10.1542/peds.2006-0493.
13. Levine AJ, Hinkin CH, Marion S, et al. Adherence to antiretroviral medications in HIV: Differences in data collected via self-report and electronic monitoring. Health Psychol. 2006;25:329–335. doi: 10.1037/0278-6133.25.3.329.
14. Llabre MM, Weaver KE, Duran RE, Antoni MH, McPherson-Baker S, Schneiderman N. A measurement model of medication adherence to highly active antiretroviral therapy and its relation to viral load in HIV-positive adults. AIDS Patient Care STDs. 2006;20:701–711. doi: 10.1089/apc.2006.20.701.
15. Farley JJ, Hines S, Musk A, Ferrus S, Tepper V. Assessment of adherence to antiviral therapy in HIV-infected children using the medication event monitoring system, pharmacy refill, provider assessment, caregiver self-report, and appointment keeping. J Acquir Immune Defic Syndr. 2003;33:211–218. doi: 10.1097/00126334-200306010-00016.
16. Paes AH, Bakker A, Soe-Agnie CJ. Measurement of patient compliance. Pharm World Sci. 1998;20:73–77. doi: 10.1023/a:1008663215166.
17. Farmer KC. Methods for measuring and monitoring medication regimen adherence in clinical trials and clinical practice. Clin Ther. 1999;21:1074–1090. doi: 10.1016/S0149-2918(99)80026-5.
18. Kalichman SC, Amaral CM, Cherry C, et al. Monitoring medication adherence by unannounced pill counts conducted by telephone: Reliability and criterion-related validity. HIV Clin Trials. 2008;9:298–308. doi: 10.1310/hct0905-298.
19. Craig HM. Accuracy of indirect measures of medication compliance in hypertension. Res Nurs Health. 1985;8:61–66. doi: 10.1002/nur.4770080112.
20. Deschamps AE, De Geest S, Vandamme AM, Bobbaers H, Peetermans WE, Van Wijngaerden E. Diagnostic value of different adherence measures using electronic monitoring and virologic failure as reference standards. AIDS Patient Care STDs. 2008;22:735–743. doi: 10.1089/apc.2007.0229.
21. Munoz-Moreno JA, Fumaz CR, Ferrer MJ, et al. Assessing self-reported adherence to HIV therapy by questionnaire: The SERAD (Self-Reported Adherence) Study. AIDS Res Hum Retroviruses. 2007;23:1166–1175. doi: 10.1089/aid.2006.0120.
22. Murri R, Ammassari A, Gallicano K, et al. Patient-reported nonadherence to HAART is related to protease inhibitor levels. J Acquir Immune Defic Syndr. 2000;24:123–128. doi: 10.1097/00126334-200006010-00006.
23. Simoni JM, Kurth AE, Pearson CR, Pantalone DW, Merrill JO, Frick PA. Self-report measures of antiretroviral therapy adherence: A review with recommendations for HIV research and clinical management. AIDS Behav. 2006;10:227–245. doi: 10.1007/s10461-006-9078-6.
24. Duong M, Piroth L, Peytavin G, et al. Value of patient self-report and plasma human immunodeficiency virus protease inhibitor level as markers of adherence to antiretroviral therapy: Relationship to virologic response. Clin Infect Dis. 2001;33:386–392. doi: 10.1086/321876.
25. Kimmerling M, Wagner GJ, Ghosh-Dastidar B. Factors associated with accurate self-reported adherence to HIV antiretrovirals. Int J STD AIDS. 2003;14:281–284. doi: 10.1258/095646203321264917.
26. Wagner GJ. Predictors of antiretroviral adherence as measured by self-report, electronic monitoring, and medication diaries. AIDS Patient Care STDs. 2002;16:599–608. doi: 10.1089/108729102761882134.
27. Simoni JM, Frick PA, Lockhart D, Liebovitz D. Mediators of social support and antiretroviral adherence among an indigent population in New York City. AIDS Patient Care STDs. 2002;16:431–439. doi: 10.1089/108729102760330272.
28. Wagner GJ, Miller LG. Is the influence of social desirability on patients' self-reported adherence overrated? J Acquir Immune Defic Syndr. 2004;35:203–204. doi: 10.1097/00126334-200402010-00016.
29. Dolezal C, Mellins C, Brackis-Cott E, Abrams EJ. The reliability of reports of medical adherence from children with HIV and their adult caregivers. J Pediatr Psychol. 2003;28:355–361. doi: 10.1093/jpepsy/jsg025.
30. Arnsten JH, Demas PA, Farzadegan H, et al. Antiretroviral therapy adherence and viral suppression in HIV-infected drug users: Comparison of self-report and electronic monitoring. Clin Infect Dis. 2001;33:1417–1423. doi: 10.1086/323201.
31. Giacomet V, Albano F, Starace F, et al. Adherence to antiretroviral therapy and its determinants in children with human immunodeficiency virus infection: A multicentre, national study. Acta Paediatr. 2003;92:1398–1402. doi: 10.1080/08035250310006737.
32. Deschamps AE, Graeve VD, van Wijngaerden E, et al. Prevalence and correlates of nonadherence to antiretroviral therapy in a population of HIV patients using Medication Event Monitoring System. AIDS Patient Care STDs. 2004;18:644–657. doi: 10.1089/apc.2004.18.644.
33. Paterson DL, Swindells S, Mohr J, et al. Adherence to protease inhibitor therapy and outcomes in patients with HIV infection. Ann Intern Med. 2000;133:21–30. doi: 10.7326/0003-4819-133-1-200007040-00004.
34. Avants SK, Margolin A, Warburton LA, Hawkins KA, Shi J. Predictors of nonadherence to HIV-related medication regimens during methadone stabilization. Am J Addict. 2001;10:69–78. doi: 10.1080/105504901750160501.
35. Eldred LJ, Wu AW, Chaisson RE, Moore RD. Adherence to antiretroviral and pneumocystis prophylaxis in HIV disease. J Acquir Immune Defic Syndr Hum Retrovirol. 1998;18:117–125. doi: 10.1097/00042560-199806010-00003.
36. Heckman BD, Catz SL, Heckman TG, Miller JG, Kalichman SC. Adherence to antiretroviral therapy in rural persons living with HIV disease in the United States. AIDS Care. 2004;16:219–230. doi: 10.1080/09540120410001641066.
37. Kleeberger CA, Buechner J, Palella F, et al. Changes in adherence to highly active antiretroviral therapy medications in the Multicenter AIDS Cohort Study. AIDS. 2004;18:683–688. doi: 10.1097/00002030-200403050-00013.
38. Nieuwkerk P, Gisolf EH, Sprangers M, Danner SA. Adherence over 48 weeks in an antiretroviral clinical trial: Variable within patients, affected by toxicities and independently predictive of virological response. Antivir Ther. 2001;6:97–103.
39. Weiss L, French T, Finkelstein R, Waters M, Mukherjee R, Agins B. HIV-related knowledge and adherence to HAART. AIDS Care. 2003;15:673–679. doi: 10.1080/09540120310001595159.
40. Wall TL, Sorensen JL, Batki SL, Delucchi KL, London JA, Chesney MA. Adherence to zidovudine (AZT) among HIV-infected methadone patients: A pilot study of supervised therapy and dispensing compared to usual care. Drug Alcohol Depend. 1995;37:261–269. doi: 10.1016/0376-8716(94)01080-5.
41. Wendel CS, Mohler MJ, Kroesen K, Ampel NM, Gifford AL, Coons SJ. Barriers to use of electronic adherence monitoring in an HIV clinic. Ann Pharmacother. 2001;35:1010–1015. doi: 10.1345/aph.10349.
42. Wagner GJ. Does discontinuing the use of pill boxes to facilitate electronic monitoring impede adherence? Int J STD AIDS. 2003;14:64–65. doi: 10.1258/095646203321043327.
43. Gross R, Bilker WB, Friedman HM, Strom BL. Effect of adherence to newly initiated antiretroviral therapy on plasma viral load. AIDS. 2001;15:2109–2117. doi: 10.1097/00002030-200111090-00006.
44. Ickovics JR, Wilson TE, Royce RA, et al. Prenatal and postpartum zidovudine adherence among pregnant women with HIV: Results of a MEMS substudy from the Perinatal Guidelines Evaluation Project. J Acquir Immune Defic Syndr. 2002;30:311–315. doi: 10.1097/00126334-200207010-00007.
45. Walsh JC, Mandalia S, Gazzard BG. Responses to a 1 month self-report on adherence to antiretroviral therapy are consistent with electronic data and virological treatment outcome. AIDS. 2002;16:269–277. doi: 10.1097/00002030-200201250-00017.
46. Wagner GJ, Rabkin JG. Measuring medication adherence: Are missed doses reported more accurately than perfect adherence? AIDS Care. 2000;12:405–408. doi: 10.1080/09540120050123800.
47. Van Dyke RB, Lee S, Johnson GM, et al. Reported adherence as a determinant of response to highly active antiretroviral therapy in children who have human immunodeficiency virus infection. Pediatrics. 2002;109:E61. doi: 10.1542/peds.109.4.e61.