Abstract
Objective
To examine concordance between member self‐reports and the organization's administrative claims data for two key health factors: number of chronic conditions, and number of prescription drugs.
Data
Medicare Advantage plan claims data and member survey data from 2011 to 2012.
Design
Surveys were mailed to 15,000 members, each enrolled for a minimum of 6 months, drawn from a random sample of primary care physician practices with at least 200 plan members.
Methods
Descriptive statistics were generated for extent of concordance. Multivariable logistic regressions were used to analyze the association of selected respondent characteristics with likelihood of concordance.
Findings
Concordance for number of chronic conditions was 58.4 percent, with 27.3 percent under‐reporting and 14.2 percent over‐reporting. Concordance for number of prescription drugs was 56.6 percent, with 38.9 percent under‐reporting and 4.5 percent over‐reporting. For chronic conditions, number of prescriptions and assistance in survey completion were associated with a higher likelihood of concordance. For prescription drugs, assistance in survey completion and number of chronic conditions were associated with higher concordance, while age and number of prescriptions were associated with lower concordance.
Conclusions
Self‐reported numbers of chronic conditions and prescription medications are not in high concordance with claims data. Health care researchers and policy makers using patient self‐reported data should be aware of these potential biases.
Keywords: Survey, elderly, Medicare, agreement, chronic condition
Survey instruments have been widely used for data collection in the health care industry as a means of gathering patient feedback. Medicare surveys its beneficiaries to evaluate their care experience when using the health care delivery system. The Centers for Medicare and Medicaid Services (CMS) uses two survey instruments: the Consumer Assessment of Healthcare Providers and Systems (CAHPS) and the Health Outcomes Survey (HOS). CAHPS and HOS surveys are an integral part of CMS's efforts to improve health care in the United States. These data are relied upon as a key element in the CMS Star Ratings program, which rates the quality performance and enrollee experience in Medicare Advantage (MA) plans (Jones, Jones, and Miller 2004). With the growing number of people with multiple chronic conditions and polypharmacy, the CMS is increasingly interested in beneficiaries' experience, health outcomes, and medication adherence (Sequist et al. 2008; Brown and Bussell 2011; Tinetti, Fried, and Boyd 2012). The quality of these services in the Star Ratings program is measured clinically, administratively, and through self‐reported surveys.
It is essential that health service researchers and the CMS consider the validity of self‐reported data. In this study, validity is defined as the extent to which a question or scale measures the specific concept it is intended to measure (i.e., criterion validity).
To evaluate the validity of self‐reports, this study measured responses in two specific areas that, based on clinical judgment, impact the overall health and well‐being of the elderly population: the number of chronic conditions and the number of prescription medications. These two metrics were chosen because they are accessible as discrete data in the organization's administrative claims database. Chronic conditions are identified using hierarchical condition category codes from claims submitted by providers. Prescription medication data are transmitted electronically from the externally contracted pharmacy benefit manager (PBM) to the organization's administrative claims database. This study investigated the extent to which self‐reported measures in these areas matched the administrative claims data, and which patient characteristics were predictive of a greater likelihood of agreement.
Background
The CMS has employed the CAHPS and HOS surveys to measure and report on the performance of MA plans since 2001 in an effort to better inform beneficiaries about plan quality. The importance of these surveys has increased over time. In 2007, CMS introduced a five‐star quality rating system that rates MA plans on more than 50 measures in five domains: staying healthy; getting care from your doctor; timeliness of information from your health plan; managing chronic conditions; and administrative measures related to appeals and grievances. Star Rating measures are gathered from a variety of sources, including the Healthcare Effectiveness Data and Information Set (HEDIS), CAHPS, HOS, and MA plans' administrative data, which are reported to CMS (Darden and McCarthy 2013). In 2012, CMS introduced a weighting system into the Star Ratings program that identified three categories of measures: process measures, assigned a weight of 1; patient experience measures, assigned a weight of 1.5; and outcome measures, assigned a weight of 3. Self‐reported responses to the CAHPS survey are categorized as patient experience measures and therefore carry a weight of 1.5, whereas HOS self‐reported survey measures are categorized as outcome measures and fall in the triple‐weighted category.
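To make the weighting concrete, a minimal sketch is shown below. The measure names and star scores are hypothetical, and CMS's actual rating methodology includes additional rules (e.g., improvement measures and rounding) that are not modeled here.

```python
# Hypothetical illustration of the 2012 Star Ratings weighting scheme described
# above: process measures weight 1.0, patient experience (CAHPS) weight 1.5,
# outcome measures (including HOS) weight 3.0. Measure names and scores are
# invented for illustration only.
WEIGHTS = {"process": 1.0, "experience": 1.5, "outcome": 3.0}

measures = [
    ("Breast cancer screening", "process", 4),         # hypothetical star score
    ("Getting needed care (CAHPS)", "experience", 3),  # hypothetical star score
    ("Improving physical health (HOS)", "outcome", 3), # hypothetical star score
]

weighted_sum = sum(WEIGHTS[cat] * score for _, cat, score in measures)
total_weight = sum(WEIGHTS[cat] for _, cat, _ in measures)
print(f"Weighted overall rating: {weighted_sum / total_weight:.2f}")  # ~3.18
```

Because outcome and experience measures carry larger weights, self‐reported survey responses have a disproportionate influence on the overall rating relative to process measures.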
One particularly critical question in survey research relates to the accuracy of respondents' answers. Respondents, especially the elderly, may experience emotional stress associated with survey length, health literacy, and educational factors such as an inability to read (Bradburn et al. 1978; Baker et al. 2002), which may increase the potential for conceptual measurement errors and response bias (Barofsky 2000). These measurement errors include factors such as the level of inaccuracy; the characteristics of the inaccurate responders (e.g., age, gender, and socioeconomic status); and the determination of whether the inaccuracy is related to the respondent or the survey question (Presser 1984).
The health of elderly respondents may also be conjectured to be a factor in the accuracy of their survey responses. The prevalence of chronic conditions, defined as “those conditions that last a year or more and require ongoing medical attention and/or limit activities of daily living” (Hwang et al. 2001, p. 268), is growing, and the trend is expected to continue (Anderson and Horvath 2004). Among those above 65 years of age, it is estimated that three in four individuals have more than one chronic condition (Tinetti, Fried, and Boyd 2012). Also of significance, with the aging of the baby boomer generation, is the projected additional 10 million cases of Alzheimer's disease by 2050, bringing the prevalence to between 11 and 16 million cases, with the most dramatic increase among individuals 85 years and older (Alzheimer's Facts and Figures 2012). These health issues may all be confounding factors for accurate survey responses in the elderly, regardless of the validity of the survey instrument.
This study explores whether the following patient characteristics are predictors of agreement between survey responses and the administrative claims data: total number of chronic conditions, total number of prescription medications, age, gender, race, marital status, living alone or with others, educational status, length of time with the Plan, and assistance with survey completion.
It is hypothesized that survey respondents with more chronic conditions are less likely to have answers that agree with the organization's administrative claims data, as there is evidence in the literature that multiple chronic conditions can affect cognitive recall (Williams, Manias, and Walker 2008). Along parallel lines, it is hypothesized that survey respondents on more prescription medications are also less likely to have answers that agree with the organization's administrative claims data.
Methods
Data for this study were obtained from the patient survey administered by the MA plan, which was linked to the organization's claims data and de‐identified prior to analysis. The survey was administered to individual enrollees who were members in the MA plan's network of primary care practices as part of the organization's member satisfaction survey process. The survey targeted primary care practices with a minimum of 200 of the MA plan's members who had been enrolled in the health plan for at least 6 months. A prenotification letter was mailed to randomly selected members 1 week prior to mailing the surveys. The survey instrument was mailed with a cover letter explaining the purpose and significance of the survey and a return business reply envelope addressed to an external third party. Each respondent received only one survey. Fifteen thousand surveys were mailed to members in three separate increments of 5,000 mailings. Data were collected during September 2011, October 2011, and January 2012. A 99 percent confidence level with a 3 percent margin of error was used to determine the sample size. Four thousand seven hundred fifty‐two surveys were returned, representing a 31.6 percent response rate. The three survey waves contributed the following shares of the sample: September 2011, 32.7 percent; October 2011, 32 percent; and January 2012, 35.3 percent. Responders were compared to nonresponders to assess response bias.
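As an illustration of the stated sampling target, the sketch below applies the conventional sample‐size formula for a proportion at a 99 percent confidence level and a 3 percent margin of error. The assumption of maximum variability (p = 0.5) is ours and is not stated in the paper.

```python
# Minimal sketch of a sample-size calculation for a proportion, assuming the
# conventional formula n = z^2 * p * (1 - p) / e^2 with maximum variability
# p = 0.5. The paper states only the 99% confidence level and 3% margin of
# error; the use of p = 0.5 here is an assumption for illustration.
from scipy.stats import norm

confidence = 0.99
margin_of_error = 0.03
p = 0.5                                 # most conservative assumption
z = norm.ppf(1 - (1 - confidence) / 2)  # ~2.576 for 99% confidence

n_required = (z ** 2) * p * (1 - p) / margin_of_error ** 2
print(round(n_required))  # ~1,843 completed surveys
```

Under that assumption the target is roughly 1,843 completed surveys, which the 4,752 returned surveys comfortably exceed; mailing 15,000 surveys presumably anticipated a response rate near the 31.6 percent observed.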
Enrollee characteristics were self‐reported on the survey and included age, gender, race, educational level, marital status, length of time enrolled in the health plan, assistance in completing the survey, living alone or with others, total number of chronic conditions, and total number of prescription medications. One question in the survey instrument asked respondents to report their number of chronic conditions as no chronic conditions, 1–3 chronic conditions, 4–6 chronic conditions, or ≥7 chronic conditions. A second survey question asked respondents to report the number of prescription medications currently being taken as no prescription medications, 1–5 prescription medications, 6–8 prescription medications, or more than 8 prescription medications. These two questions were used to test agreement of self‐reported responses with the organization's claims data. The total number of chronic conditions was calculated using a 3‐year look‐back period in the organization's claims data, allowing for at least 6 months of claims lag; unique chronic conditions during this 3‐year period were aggregated. The total number of prescription medications was calculated as the number of unique prescriptions filled for at least a 30‐day supply within 365 days of the survey date.
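A minimal sketch of how the two claims‐based counts described above might be derived is shown below, assuming pandas and illustrative column names; the organization's actual claims schema and its handling of claims lag are not modeled.

```python
import pandas as pd

# Hypothetical claims extracts; column names are illustrative only and do not
# reflect the organization's actual schema.
dx_claims = pd.DataFrame({
    "member_id": [1, 1, 1, 2],
    "hcc_code": ["HCC18", "HCC18", "HCC85", "HCC19"],
    "service_date": pd.to_datetime(["2009-05-01", "2010-02-11", "2011-03-04", "2011-06-20"]),
})
rx_claims = pd.DataFrame({
    "member_id": [1, 1, 2],
    "drug_name": ["metformin", "lisinopril", "atorvastatin"],
    "days_supply": [90, 30, 15],
    "fill_date": pd.to_datetime(["2011-01-15", "2011-04-02", "2011-05-30"]),
})

survey_date = pd.Timestamp("2011-09-15")

# Unique chronic conditions over a 3-year look-back ending at the survey date.
dx_window = dx_claims[
    dx_claims["service_date"].between(survey_date - pd.DateOffset(years=3), survey_date)
]
n_conditions = dx_window.groupby("member_id")["hcc_code"].nunique()

# Unique medications filled for at least a 30-day supply within 365 days of the survey date.
rx_window = rx_claims[
    (rx_claims["days_supply"] >= 30)
    & (rx_claims["fill_date"].between(survey_date - pd.Timedelta(days=365), survey_date))
]
n_medications = rx_window.groupby("member_id")["drug_name"].nunique()
print(n_conditions, n_medications, sep="\n")
```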
Empirical Approach
Respondents' characteristics and distribution of chronic conditions were analyzed using descriptive statistics. All inferential statistical analyses used an alpha level of .05.
The first outcome of interest was the degree of agreement between self‐reported responses to the survey questions asking the number of chronic conditions and the number of prescription medications. Responses were considered to be in agreement if the categorized self‐responses matched the categorized total from claims. Cohen's kappa statistic was used to quantify the concordance between respondents' self‐reports and administrative claims, a method frequently used in the literature (Robinson et al. 1997; Kwon et al. 2003; Garber et al. 2004). The null hypothesis that the kappa statistic is 0 (i.e., no agreement between the two measures) was tested at the .05 level. The 95 percent confidence interval (CI) was calculated using the estimate (k) and its standard error (SEk). Note that it is not enough to reject the null of no agreement; rather, convention dictates that a kappa level should be at least 0.6, preferably 0.7, before concluding that there is a good level of agreement (Landis and Koch 1977; Kriegsman et al. 1996).
Total concordance, total discordance, percent of self‐responses under‐reporting, and percent of self‐responses over‐reporting the number of chronic conditions and number of prescription medications with the claims data were calculated.
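For concreteness, the sketch below reproduces these quantities for the prescription‐medication cross‐tabulation reported later in Table 2, using a common large‐sample approximation for the kappa standard error; the published interval was presumably computed with a more exact formula, but the results are close.

```python
import numpy as np

# Cross-tabulation of self-reported vs. claims-based prescription-medication
# categories (rows = self-report, columns = claims), taken from Table 2.
table = np.array([
    [92,   82,   4,   2],    # self-report: none
    [24, 1533, 828, 277],    # self-report: 1-5
    [ 9,   79, 351, 491],    # self-report: 6-8
    [ 6,   15,  61, 471],    # self-report: 9+
])
n = table.sum()

# Concordance, under-reporting (self < claims), over-reporting (self > claims).
concordant = np.trace(table)
under = np.triu(table, k=1).sum()   # above the diagonal: claims category exceeds self-report
over = np.tril(table, k=-1).sum()   # below the diagonal: self-report exceeds claims
print(concordant / n, under / n, over / n)   # ~0.566, ~0.389, ~0.045

# Cohen's kappa: observed vs. chance-expected agreement.
p_obs = concordant / n
p_exp = (table.sum(axis=1) * table.sum(axis=0)).sum() / n**2
kappa = (p_obs - p_exp) / (1 - p_exp)

# Approximate large-sample standard error and 95% CI.
se = np.sqrt(p_obs * (1 - p_obs) / (n * (1 - p_exp) ** 2))
print(kappa, kappa - 1.96 * se, kappa + 1.96 * se)   # ~0.339 (0.317, 0.362)
```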
The second question of interest was to evaluate the characteristics that may influence the accuracy of self‐reports of the number of chronic conditions and the number of prescription medications. Six of the ten control variables were recoded into two categories; the full set of control variables was as follows:
Age: <65 and 65 years and older
Gender: Female and Male
Race: “Minority” and “Non‐Minority”
Educational level: High School and above and “All Others”
Marital status: Married and Not Married
Living alone or with others: Living alone and Not living alone
Assistance in completing the survey
Years enrolled with the health plan at the time of the survey
Total number of prescription medications (range 0 to >9)
Total number of chronic conditions (range 0 to >7)
The total number of chronic conditions and total number of prescription medications were linked from the claims database to the individual survey respondent by a third party.
Two separate multivariate logistic regression models were estimated to analyze how respondent characteristics predicted the outcomes of interest, which respectively are (1) agreement with the number of chronic conditions and (2) agreement with the number of prescription medications. For both outcomes, “agreement” was denoted by one (1), and lack of agreement by zero (0).
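A minimal sketch of the type of logistic model described above is shown below, using statsmodels with illustrative variable names; the analysis itself was carried out in SPSS, and the data frame `df` is assumed to contain the recoded respondent characteristics and a binary agreement indicator.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Minimal sketch of the agreement model described above. `df` is assumed to be
# the analytic file with one row per respondent; variable names are illustrative.
# agree_conditions: 1 if the self-reported chronic-condition category matched
# the claims-based category, 0 otherwise.
def fit_agreement_model(df: pd.DataFrame):
    model = smf.logit(
        "agree_conditions ~ age_65plus + female + assistance + lives_alone"
        " + hs_education + nonminority + plan_years + total_conditions"
        " + total_prescriptions + married",
        data=df,
    ).fit()
    # Exponentiate coefficients to report odds ratios with 95% CIs, as in Table 4.
    odds_ratios = np.exp(model.params)
    conf_int = np.exp(model.conf_int())
    return model, odds_ratios, conf_int
```

The second model (Model 1B) is identical except that the outcome is the prescription‐medication agreement indicator.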
Data Management
Data management and analysis were performed using SPSS statistical software version 20 (IBM Corp. 2011). Prior to analysis of the data, each survey question was evaluated for out‐of‐range values, mismatched identifier numbers, and eligibility criterion of being enrolled in the health plan for at least 6 months. This resulted in 51 cases that did not meet the eligibility criterion and 82 out‐of‐range responses. The ineligible cases and individual out‐of‐range responses were eliminated from the analysis. Additionally, four survey questions—self‐reported number of chronic conditions, self‐reported number of prescription medications, self‐rating of physical health, and self‐rating of mental health—were evaluated for missing responses. Three hundred seventy‐five cases with missing responses to these questions were excluded from the dataset, resulting in a sample size of 4,325 respondents.
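The exclusion steps described above could be expressed as in the sketch below; this is an illustration with hypothetical column names, not the actual SPSS procedure used.

```python
import pandas as pd

# Sketch of the exclusion steps described above; column names are illustrative.
def clean_survey(raw: pd.DataFrame) -> pd.DataFrame:
    # Drop respondents not enrolled in the health plan for at least 6 months.
    eligible = raw[raw["months_enrolled"] >= 6].copy()

    # Set individual out-of-range answers to missing rather than dropping the case.
    valid_codes = {"n_conditions_cat": {0, 1, 2, 3}, "n_meds_cat": {0, 1, 2, 3}}
    for col, codes in valid_codes.items():
        eligible.loc[~eligible[col].isin(codes), col] = pd.NA

    # Exclude cases missing any of the four key self-report items.
    key_items = ["n_conditions_cat", "n_meds_cat", "self_rated_physical", "self_rated_mental"]
    return eligible.dropna(subset=key_items)
```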
Results
Descriptive Statistics
Descriptive statistics for the independent variables used in the study are presented in Table 1. Of the 4,325 respondents included in this study, 55.4 percent were female. The majority of respondents were white (72.1 percent), with a mean age of 76.3 (SD: 8.0) years. The mean length of time respondents were enrolled with the health plan was 9.2 years, and 41.4 percent of respondents reported completing a high school education. Descriptive statistics for respondents who did not answer one or more of the four questions revealed that 63.8 percent were female, 61.5 percent were white, the mean age was 77.1 years (SD: 8.1), and the mean number of years with the plan was 9.3 years (SD: 3.06). Additionally, descriptive statistics for nonresponders to the survey revealed that 55.9 percent were female and 62 percent were white, with a mean age of 77 years and a mean of just over 10 years with the plan.
Table 1.
| Variable | N | Mean [SD] |
|---|---|---|
| Age (years) | 4,325 | 76.3 [8.0] |
| Plan years | 4,325 | 9.2 [2.8] |

| Variable | n | % |
|---|---|---|
| Age (N = 4,325) | | |
| <65 | 307 | 7.1 |
| ≥65 | 4,018 | 92.9 |
| Race (N = 4,237) | | |
| Nonminority | 3,119 | 72.1 |
| Minority | 1,118 | 25.8 |
| Marital status (N = 4,280) | | |
| Married | 2,043 | 47.2 |
| Nonmarried | 2,237 | 51.8 |
| Gender (N = 4,270) | | |
| Male | 1,874 | 43.3 |
| Female | 2,396 | 55.4 |
| Living status (N = 4,263) | | |
| Alone | 1,397 | 32.3 |
| Not alone | 2,866 | 66.3 |
| Assistance in completing survey (N = 4,291) | | |
| Yes | 632 | 14.6 |
| No | 3,659 | 84.6 |
| Education (N = 4,232) | | |
| High school and above | 3,044 | 71.9 |
| All others | 1,188 | 28.0 |
Details of concordance between self‐reported responses and claims data for the number of chronic conditions and the number of prescription medications are presented in Table 2. The overall concordance, discordance, over‐reporting, and under‐reporting for both measures are summarized in Table 3. With respect to chronic conditions, total concordance was 58.4 percent, with 27.3 percent under‐reporting and 14.2 percent over‐reporting. Agreement between self‐reports and the claims database declined in the two highest chronic‐condition categories: agreement was only 18 percent among respondents whose claims indicated 4–6 conditions and 0 percent among those whose claims indicated ≥7 conditions. With respect to prescription medications, total concordance was 56.6 percent, with 38.9 percent under‐reporting and 4.5 percent over‐reporting. Agreement was 70 percent among respondents whose claims showed no prescription medications and 89 percent among those with 1–5 medications; agreement fell to 28 percent for those with 6–8 medications and was 38 percent for those with nine or more.
Table 2.

Chronic conditions: self‐reported category (rows) by claims data category* (columns).

| Self‐Reported | None | 1–3 | 4–6 | 7 plus | Total |
|---|---|---|---|---|---|
| None | 543 | 711 | 60 | 0 | 1,314 |
| 1–3 | 457 | 1,879 | 404 | 5 | 2,745 |
| 4–6 | 10 | 116 | 108 | 5 | 239 |
| 7 plus | 2 | 9 | 16 | 0 | 27 |
| Total | 1,012 | 2,715 | 588 | 10 | 4,325 |

Prescription medications: self‐reported category (rows) by claims data category** (columns).

| Self‐Reported | None | 1–5 | 6–8 | 9 plus | Total |
|---|---|---|---|---|---|
| None | 92 | 82 | 4 | 2 | 180 |
| 1–5 | 24 | 1,533 | 828 | 277 | 2,662 |
| 6–8 | 9 | 79 | 351 | 491 | 930 |
| 9 plus | 6 | 15 | 61 | 471 | 553 |
| Total | 131 | 1,709 | 1,244 | 1,241 | 4,325 |
*kappa = 0.206; **kappa = 0.339.
Table 3.
| | Total Concordance, n (%) | Total Discordance, n (%) | Under‐Reported, n (%) | Over‐Reported, n (%) |
|---|---|---|---|---|
| Number of chronic conditions | 2,530 (58.4) | 1,795 (41.5) | 1,185 (27.3) | 610 (14.2) |
| Number of prescription medications | 2,447 (56.6) | 1,878 (43.4) | 1,684 (38.9) | 194 (4.5) |
The kappa statistic allowed rejection of the null of no agreement between self‐reported chronic conditions and the organization's claims database (k = 0.206; 95 percent CI: 0.181–0.231; p < .001). The estimated magnitude of 0.206 for the kappa statistic puts it at the upper limit of the “slight agreement” range of 0.0–0.2 (Landis and Koch 1977). For prescription medications, the null of no agreement between self‐reported number of prescription medications and the claims database was also rejected (k = 0.339; 95 percent CI: 0.319–0.358; p < .001). The estimated magnitude in this case, 0.339, puts it in the “fair agreement” range of 0.21–0.40 (Landis and Koch 1977).
The total number of chronic conditions and the total number of prescription medications were included in both regression analyses, anticipating that the two counts would be correlated. Tables 4 and 5, respectively, present the multivariate logistic regression results for agreement on number of chronic conditions and agreement on number of prescription drugs. The first outcome variable of interest (Model 1A) was the agreement between reported chronic conditions and the organization's claims database. Three predictor variables, “total number of chronic conditions,” “total number of prescription medications taken,” and “assistance in survey completion,” demonstrated statistical significance. Each additional chronic condition was associated with lower odds of agreement with the claims database by a factor of 0.769 (p < .001). Each additional prescription medication was associated with higher odds of agreement with the claims database by a factor of 1.049 (p < .001). Respondents who reported having assistance with survey completion had 1.218 times the odds of agreement compared to the reference group with no assistance (p < .05). The sociodemographic factors of age, race, marital status, living alone or with others, education, gender, and years with the Plan have the potential to introduce bias, contributing to reporting errors; in this study they did not demonstrate statistical significance, which is consistent with past research (Glandon, Counte, and Tancredi 1992; Law et al. 1996; Reijneveld 2000; Ritter et al. 2001; Lubeck and Hubert 2005).
Table 4.
| Predictor | p | Odds Ratio | 95% CI Lower | 95% CI Upper |
|---|---|---|---|---|
| Age >65 | .564 | 1.077 | .837 | 1.387 |
| Female gender | .452 | 1.053 | .920 | 1.206 |
| Assistance with survey completion | .042* | 1.218 | 1.007 | 1.473 |
| Living status | .490 | .940 | .789 | 1.120 |
| Education | .199 | 1.101 | .951 | 1.275 |
| Nonminority race | .190 | .908 | .786 | 1.049 |
| Years in plan | .193 | .985 | .964 | 1.007 |
| Total chronic conditions | <.001** | .769 | .732 | .809 |
| Prescription count | <.001** | 1.049 | 1.029 | 1.069 |
| Marital status | .744 | .972 | .819 | 1.153 |
| Intercept | .005 | 1.728 | | |
*p < .05; **p < .01.
Table 5.
| Predictor | p | Odds Ratio | 95% CI Lower | 95% CI Upper |
|---|---|---|---|---|
| Age >65 | .005* | .683 | .525 | .890 |
| Female gender | .313 | 1.074 | .935 | 1.235 |
| Assistance in survey completion | .036* | 1.231 | 1.014 | 1.496 |
| Living status | .966 | 1.004 | .839 | 1.201 |
| Education | .343 | 1.076 | .925 | 1.251 |
| Nonminority race | .043* | 1.164 | 1.005 | 1.349 |
| Years in plan | .158 | .984 | .961 | 1.006 |
| Total chronic conditions | .006* | 1.073 | 1.020 | 1.129 |
| Prescription count | <.001** | .833 | .816 | .851 |
| Marital status | .723 | 1.032 | .866 | 1.230 |
| Intercept | <.001 | 5.093 | | |
*p < .05; **p < .01.
We additionally conducted supplementary analyses on factors that predicted the degree of disagreement (0 if agreement, else the “count” of categories by which administrative data differed from the self‐reported range) using multivariate Poisson models. Poisson models were selected over negative binomial models since likelihood ratio tests failed to reject the null that the overdispersion parameter was 0. The findings were in accordance with the logistic model in that the number of chronic conditions was associated with a higher level of disagreement (incidence rate ratio or IRR: 1.19, p < .001) and number of prescription medications was associated with a lower level of disagreement (IRR: 0.97, p < .001). Receiving assistance compared to no assistance was associated with a lower level of disagreement, but this result was statistically imprecise in the Poisson model (IRR: 0.913, p = .20). The full results are not shown but are available on request.
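A sketch of this model comparison, assuming statsmodels and the same illustrative variable names as before, might look like the following; the specification of the disagreement count and covariates follows the paper's description, but the code itself is an illustration rather than the original analysis.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

# Sketch of the supplementary analysis described above: a Poisson model for the
# count of categories by which claims and self-report differ, compared against
# a negative binomial alternative via a likelihood ratio test on the
# overdispersion parameter. `df` and the variable names are illustrative.
FORMULA = ("disagreement_count ~ age_65plus + female + assistance + lives_alone"
           " + hs_education + nonminority + plan_years + total_conditions"
           " + total_prescriptions + married")

def compare_count_models(df: pd.DataFrame):
    poisson_fit = smf.poisson(FORMULA, data=df).fit()
    negbin_fit = smf.negativebinomial(FORMULA, data=df).fit()

    # LR test of the overdispersion parameter (alpha = 0); failure to reject
    # supports keeping the simpler Poisson specification, as in the paper.
    lr = 2 * (negbin_fit.llf - poisson_fit.llf)
    p_value = stats.chi2.sf(lr, df=1)

    # Exponentiated Poisson coefficients are the incidence rate ratios (IRRs).
    irr = np.exp(poisson_fit.params)
    return irr, lr, p_value
```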
The second outcome variable of interest (Model 1B) was the agreement between reported prescription medications and the organization's claims database. Five predictor variables demonstrated statistical significance: total number of prescription medications, total number of chronic conditions, age, nonminority race, and assistance in survey completion. Each additional prescription was associated with lower odds of agreement by a factor of 0.833 (p < .001). The odds of agreement with the claims database were lower by a factor of 0.683 for those 65 and older as compared to the reference group younger than 65 years of age (p < .05). Nonminority respondents demonstrated higher odds of agreement compared to the reference group of minority respondents (p < .05). A higher total number of chronic conditions was associated with increased odds of agreement by a factor of 1.073 per condition (p < .01). Assistance in survey completion (p = .036) and nonminority race (p = .043) demonstrated positive associations with the likelihood of agreement.
We conducted analyses here as well on factors that predicted the extent of disagreement using multivariate Poisson models. Again, the results largely agreed with those of the logistic model. The number of prescription medications was associated with more disagreement (IRR: 1.10, p < .001), as was being 65 or older (IRR: 1.22, p < .05). The number of chronic conditions was associated with lower levels of disagreement (IRR: 1.07, p < .01). Assistance in survey completion was associated with lower level of disagreement—though the result for this variable fell just short of statistical significance at the .05 level (IRR: 1.23, p = .06). Full results from these models are available on request.
Discussion
The findings from this study are consistent with results from previous research exploring various influences on the potential accuracy of responses in the Medicare population, including the number of chronic conditions and medications, socioeconomic factors, and demographic factors (Sherbourne and Meredith 1992; Hoffman, Rice, and Sung 1996; Reidy and Richards 1997; Wagner et al. 1998). In examining factors that predicted concordance between self‐reported responses and claims‐based results for chronic conditions and prescription medications, we found that those with more chronic conditions (based on claims data) had lower odds of concordance for chronic conditions, and those using more prescription medications (based on claims data) had lower odds of concordance for prescription medications. Sociodemographic factors did not appear to be significantly associated with the odds of concordance for chronic conditions, though being above 65 years of age and minority race appeared to reduce concordance in the case of prescription drugs. It is interesting to note that education level had no significant association with the odds of concordance in either model. Assistance with survey completion was associated with higher odds of concordance, providing grounds for speculating that in some cases the assistance might have been provided by an informal caregiver who was more aware of the member's health than the member him‐ or herself, though more research is needed to confirm whether this is the case.
The findings have implications for policy makers and administrators of MA plans, especially as survey results continue to increase in importance in CMS's programs related to quality bonus payments to MA plans. CMS policy makers need to be aware of issues regarding the validity of self‐reported patient data, and of patient characteristics that may be correlated with the ability to self‐report accurately, particularly when such data inform financial decisions. Plan administrators are concerned about the discordance between self‐reported and administrative claims data. Discordance may have implications for the integrity of the claims data, communication failures between patients and the MA plan, and even the modalities for delivering care. The consequences of underestimation or overestimation of the number of medications and/or the number of chronic conditions may be far reaching and costly. While there is little current research about the implications of self‐reported over‐ or under‐reporting, certain conjectures can be made about routes through which this may affect Star Ratings. For example, one survey question that is directly related to a star measure requires respondents to recall whether they received a flu shot in the past 6 months. Patients whose self‐reports on chronic conditions and medications are not in concordance with claims may also provide incorrect responses on whether they received a flu shot. Purposeful attention by CMS to such discordance in self‐reported data when weighting survey measures, as well as more in‐depth research into the implications that such discordance may have on Star Ratings, could yield meaningful outcomes for both policy makers and MA plan administrators.
This study sought to evaluate self‐reported survey data as an area that has the potential to influence an MA plan's CMS Star Rating, thereby affecting revenue and the quality bonus payments that fund supplemental plan benefits. As the CMS evolves its methodology for the Star Rating program, policies linking revenue to enrollees' self‐reported survey responses should be reassessed continually to ensure validity or to adjust for response patterns in an elderly population, as these responses have the potential to significantly affect an organization's Star Rating. The Star Rating program is technically complex, with inconsistencies in measures and methodology from year to year. These inconsistencies also need to be addressed if the program is to remain a valid policy tool for measuring quality in MA plans.
Limitations
The study was limited to survey respondents who were enrolled in an MA plan in southeastern Louisiana; therefore, results may not be generalizable beyond this geographic region and should be interpreted with caution. The survey relied on self‐reported assessments of respondents' number of chronic conditions and number of prescription medications, so response and recall bias may limit the accuracy of self‐reports. Using claims data presents a limitation in that the data are only as complete as the claims providers submit to the health plan, and the data may also be subject to errors in claims processing, coding of diagnoses, and so forth. Additionally, relying on data from the PBM presents a limitation because enrollees may acquire their prescription medications through a variety of other sources, such as free clinics or the Veterans Administration system, and these data are generally not reported to the MA plan. This limitation may contribute to undercounting of both chronic conditions and total number of prescription medications in the organization's claims data.
Recommendations for Future Research
Areas for future research include continuing to develop our understanding of attributes respondents consider when reporting their health status and determining how and if those considerations change over time. With an aging population that is living longer with more chronic conditions, developing a more comprehensive understanding of the physical and emotional impacts of chronic conditions will be important considerations for MA plans as they make strategic business decisions. Additionally, it would be useful to examine the steps that MA plans can take to understand and mitigate the discordance between self‐reported survey data and administrative claims data.
In summary, the use of enrollee self‐reported survey data continues to have advantages for research studies. It is generally an inexpensive way to collect data, and within the Medicare population response rates are historically excellent. This study sought to further the understanding of the validity of self‐reported survey responses among an elderly population by studying self‐reported chronic conditions and number of prescription medications in an MA plan population. It is hoped that this research contributes to a better understanding of the factors that confound the validity of self‐reported survey responses among the elderly.
Acknowledgments
Joint Acknowledgment/Disclosure Statement: This study was partially supported by Peoples Health, a Medicare Advantage plan in southeast Louisiana. The data used in this work are the property of the Peoples Health organization. The organization had no role in the design or conduct of the study or approval of the manuscript. The content is solely the responsibility of the authors and does not represent the official views of the organization.
Disclosures: None.
Disclaimers: None.
References
- Alzheimer's Facts and Figures. 2012. “2012 Alzheimer's Disease Facts and Figures.” Alzheimer's & Dementia 8 (2): 131–68. doi:10.1016/j.jalz.2012.02.001.
- Anderson, G., and Horvath, J. 2004. “The Growing Burden of Chronic Disease in America.” Public Health Reports 119 (3): 263.
- Baker, D. W., Gazmararian, J. A., Williams, M. V., Scott, T., Parker, R. M., Green, D., and Peel, J. 2002. “Functional Health Literacy and the Risk of Hospital Admission among Medicare Managed Care Enrollees.” American Journal of Public Health 92 (8): 1278–83.
- Barofsky, I. 2000. “The Role of Cognitive Equivalence in Studies of Health‐Related Quality‐of‐Life Assessments.” Medical Care 38 (9): 125–9.
- Bradburn, N. M., Sudman, S., Blair, E., and Stocking, C. 1978. “Question Threat and Response Bias.” Public Opinion Quarterly 42 (2): 221–34.
- Brown, M. T., and Bussell, J. K. 2011. “Medication Adherence: WHO Cares?” Mayo Clinic Proceedings 86 (4): 304–14. doi:10.4065/mcp.2010.0575.
- Darden, M., and McCarthy, I. M. 2013. The Star Treatment: Estimating the Impact of Star Ratings on Medicare Advantage Enrollments. SSRN 2328803 [accessed August 26, 2015]. Available at http://economics.wpdev.gsu.edu
- Garber, M. C., Nau, D. P., Erickson, S. R., Aikens, J. E., and Lawrence, J. B. 2004. “The Concordance of Self‐Report with Other Measures of Medication Adherence: A Summary of the Literature.” Medical Care 42 (7): 649–52.
- Glandon, G. L., Counte, M. A., and Tancredi, D. 1992. “An Analysis of Physician Utilization by Elderly Persons: Systematic Differences between Self‐Report and Archival Information.” Journal of Gerontology 47 (5): S245–52.
- Hoffman, C., Rice, D., and Sung, H. Y. 1996. “Persons with Chronic Conditions.” Journal of the American Medical Association 276 (18): 1473.
- Hwang, W., Weller, W., Ireys, H., and Anderson, G. 2001. “Out‐of‐Pocket Medical Spending for Care of Chronic Conditions.” Health Affairs 20 (6): 267–78.
- IBM Corp. 2011. IBM SPSS Statistics for Windows, Version 20.0. Armonk, NY: IBM Corp.
- Jones, N., Jones, S., and Miller, N. 2004. “The Medicare Health Outcomes Survey Program: Overview, Context, and Near‐Term Prospects.” Health and Quality of Life Outcomes 2 (1): 33–43.
- Kriegsman, D. M. W., Penninx, B. W. J. H., Van Eijk, J. T. M., Boeke, A. J. P., and Deeg, D. J. H. 1996. “Self‐Reports and General Practitioner Information on the Presence of Chronic Diseases in Community Dwelling Elderly: A Study on the Accuracy of Patients' Self‐Reports and on Determinants of Inaccuracy.” Journal of Clinical Epidemiology 49 (12): 1407–17.
- Kwon, A., Bungay, K. M., Pei, Y., Rogers, W. H., Wilson, I. B., Zhou, Q., and Adler, D. A. 2003. “Antidepressant Use: Concordance between Self‐Report and Claims Records.” Medical Care 41 (3): 368–74.
- Landis, J. R., and Koch, G. G. 1977. “The Measurement of Observer Agreement for Categorical Data.” Biometrics 33 (1): 159–74.
- Law, M. G., Hurley, S. F., Carlin, J. B., Chondros, P., Gardiner, S., and Kaldor, J. M. 1996. “A Comparison of Patient Interview Data with Pharmacy and Medical Records for Patients with Acquired Immunodeficiency Syndrome or Human Immunodeficiency Virus Infection.” Journal of Clinical Epidemiology 49 (9): 997–1002.
- Lubeck, D. P., and Hubert, H. B. 2005. “Self‐Report Was a Viable Method for Obtaining Health Care Utilization Data in Community‐Dwelling Seniors.” Journal of Clinical Epidemiology 58 (3): 286–90.
- Presser, S. 1984. “Is Inaccuracy on Factual Survey Items Item‐Specific or Respondent‐Specific?” The Public Opinion Quarterly 48 (1): 344–55.
- Reidy, J., and Richards, A. 1997. “Anxiety and Memory: A Recall Bias for Threatening Words in High Anxiety.” Behaviour Research and Therapy 35 (6): 531–42.
- Reijneveld, S. A. 2000. “The Cross‐Cultural Validity of Self‐Reported Use of Health Care: A Comparison of Survey and Registration Data.” Journal of Clinical Epidemiology 53 (3): 267–72.
- Ritter, P. L., Stewart, A. L., Kaymaz, H., Sobel, D. S., Block, D. A., and Lorig, K. R. 2001. “Self‐Reports of Health Care Utilization Compared to Provider Records.” Journal of Clinical Epidemiology 54 (2): 136–41.
- Robinson, J. R., Young, T. K., Roos, L. L., and Gelskey, D. E. 1997. “Estimating the Burden of Disease: Comparing Administrative Data and Self‐Reports.” Medical Care 35 (9): 932–47.
- Sequist, T. D., Schneider, E. C., Anastario, M., Odigie, E. G., Marshall, R., Rogers, W. H., and Safran, D. G. 2008. “Quality Monitoring of Physicians: Linking Patients' Experiences of Care to Clinical Quality and Outcomes.” Journal of General Internal Medicine 23 (11): 1784–90. doi:10.1007/s11606‐008‐0760‐4.
- Sherbourne, C. D., and Meredith, L. S. 1992. “Quality of Self‐Report Data: A Comparison of Older and Younger Chronically Ill Patients.” Journal of Gerontology 47 (4): 204–11.
- Tinetti, M. E., Fried, T. R., and Boyd, C. M. 2012. “Designing Health Care for the Most Common Chronic Condition—Multimorbidity.” Journal of the American Medical Association 307 (23): 2493–4.
- Wagner, A. K., Gandek, B., Aaronson, N. K., Acquadro, C., Alonso, J., Apolone, G., and Kaasa, S. 1998. “Cross‐Cultural Comparisons of the Content of SF‐36 Translations across 10 Countries: Results from the IQOLA Project.” Journal of Clinical Epidemiology 51 (11): 925–32.
- Williams, A., Manias, E., and Walker, R. 2008. “Interventions to Improve Medication Adherence in People with Multiple Chronic Conditions: A Systematic Review.” Journal of Advanced Nursing 63 (2): 132–43.