Table 1. Checklist for the Reporting of Studies That Use Claims Data*

| Checklist item | Questions to consider |
| --- | --- |
| Description of data source | Was a description of the sociodemographic characteristics and health care profile of the population provided? Did the investigators acknowledge limitations on the services captured that arise from the enrollees' type of insurance or plan, the benefit design, and how providers are reimbursed for the services studied? |
| Checks of data quality | Sources of unreliable data include changes in reporting or coding practices over time, changes in reporting that result from changes in reimbursement, and services that are inadequately captured because they are not covered by the plan. Did the investigators explain how they handled missing and out-of-range values? Did the investigators explain how they handled duplicate claims and inconsistencies (e.g., differences in the age of the same patient on different claims)? An illustrative sketch of such data-quality checks follows the table. Did the investigators compare the reported rates of disease or use with established norms or other data sources? If other researchers have studied the reliability and validity of the data source used, those studies should be cited. Have the necessary linkages among data sources or sites of care been carried out appropriately? Is there an explanation of how member eligibility was determined? |
| Sample selection | Is there a sample selection figure that clearly shows readers the numbers of enrollees included and excluded and the reasons why? Is justification provided for the inclusion/exclusion criteria used to select beneficiaries for the study sample? Is there a transparent listing of all of the ICD-9-CM and CPT codes used in the study? Were enrollees who were not continuously enrolled in the health plan during the entire study period included in the analysis? A sketch of a continuous-enrollment check follows the table. |
| Analysis | Is the data analysis plan clearly described? Were the research hypotheses generated a priori, or were the findings the result of unsystematic data exploration? Did the investigators provide a cogent rationale for the study design chosen, in light of the data, setting, and research questions? Are the limitations of the chosen study design clearly delineated for the reader? Examples of potential biases include selection bias, maturation, and regression to the mean. For studies reporting treatment effects, was a control group created for comparison with the group receiving the intervention? Did the investigators censor subjects, and, if so, did they explain how this may affect the sample selection or the generalizability of the cohort? Are the end points or outcomes clearly defined on the basis of diagnosis or procedure codes or other criteria? Did the investigators justify the definitions of the end points they chose for the analysis or cite other sources that used similar criteria? Were sensitivity analyses performed to explore the impact of changing the criteria for study inclusion or the definition of the outcome(s) of interest? Is there a temporal relationship between the exposure and the outcome of interest (i.e., did the researchers require the exposure to precede the outcome)? |
| Statistics | Were important confounding factors identified and adjusted for in the analyses, either by stratifying the sample by the confounding variable or by using multivariable statistical techniques? A sketch of multivariable adjustment follows the table. What sort of risk adjustment was performed? Did the investigators account for differences in sociodemographic characteristics, medical comorbidities, and disease severity? Were adequate tests of the statistical assumptions performed? Examples include testing for multicollinearity and adjusting for multiple comparisons. |
| Discussion/conclusions | Did the investigators provide a rationale for the study findings in light of the existing literature? Were alternative explanations for the findings offered? Did the investigators comment on the clinical or economic relevance of the study findings, given that statistical significance may not translate into clinical significance? Did the authors address concerns about the generalizability of the study findings to other groups? |
| Funding sources | Were the funding sources for the analyses clearly identified? Did the funding sources participate in designing or conducting the study? |
* CPT = Current Procedural Terminology; ICD-9-CM = International Classification of Diseases, 9th Revision, Clinical Modification.
Adapted from Motheral B, Brooks J, Clark MA, et al. A checklist for retrospective database studies—report of the ISPOR Task Force on Retrospective Databases. Value Health 2003;6:90-7.2
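The data-quality row asks whether investigators explained how they handled missing and out-of-range values, duplicate claims, and internal inconsistencies. As an illustration only, the following minimal sketch shows how such checks might be run on a hypothetical pandas DataFrame of claims; all column names, example values, and plausibility bounds are assumptions and are not part of the checklist.

```python
import pandas as pd

# Hypothetical claims extract; column names and values are illustrative only.
claims = pd.DataFrame({
    "member_id":   ["A1", "A1", "A1", "B2", "B2"],
    "claim_id":    [101, 101, 102, 201, 202],
    "age":         [64, 64, 66, 41, 41],               # 64 vs. 66 for A1 is inconsistent
    "paid_amount": [120.0, 120.0, -15.0, 80.0, None],  # negative and missing values
})

# Duplicate claims: the same claim_id appearing on more than one row.
duplicates = claims[claims.duplicated(subset="claim_id", keep=False)]

# Missing and out-of-range values (assumed plausibility bounds).
missing_paid = claims["paid_amount"].isna().sum()
out_of_range = claims[(claims["paid_amount"] < 0) | (claims["age"] > 120)]

# Inconsistencies: the same member reported with different ages on different claims.
age_conflicts = claims.groupby("member_id")["age"].nunique().loc[lambda s: s > 1]

print(f"{len(duplicates)} duplicate claim rows, {missing_paid} missing paid amounts, "
      f"{len(out_of_range)} out-of-range rows, {len(age_conflicts)} members with inconsistent ages")
```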
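The sample-selection row calls for a transparent accounting of how many enrollees were included or excluded and whether continuous enrollment over the study period was required. The sketch below, again using hypothetical data and an assumed 12-month study window, shows one way to derive the attrition counts a sample selection figure would report.

```python
import pandas as pd

STUDY_MONTHS = 12  # assumed 1-year study period

# Hypothetical member-month enrollment records (one row per covered month).
enrollment = pd.DataFrame({"member_id": ["A1"] * 12 + ["B2"] * 7 + ["C3"] * 12})
enrollment["month"] = enrollment.groupby("member_id").cumcount() + 1

# Months of coverage per member, and the continuous-enrollment criterion.
months_covered = enrollment.groupby("member_id")["month"].nunique()
eligible = months_covered[months_covered >= STUDY_MONTHS].index

# The counts a sample selection figure would report.
print(f"Members in source data:              {months_covered.size}")
print(f"Excluded, not continuously enrolled: {(months_covered < STUDY_MONTHS).sum()}")
print(f"Included in analytic sample:         {len(eligible)}")
```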
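Finally, the statistics row asks whether measured confounders were handled by stratification or by multivariable adjustment. One of several reasonable approaches is a multivariable logistic regression, sketched here on simulated data with statsmodels; the exposure, outcome, and covariates are hypothetical, and stratified or propensity-score analyses would satisfy the checklist equally well.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500

# Hypothetical analytic file: an exposure, an outcome, and measured confounders.
df = pd.DataFrame({
    "exposed":           rng.integers(0, 2, n),
    "age":               rng.normal(65, 10, n),
    "comorbidity_count": rng.poisson(2, n),
})
logit_p = -3 + 0.5 * df["exposed"] + 0.03 * df["age"] + 0.2 * df["comorbidity_count"]
df["outcome"] = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

# Multivariable adjustment for measured confounders.
model = smf.logit("outcome ~ exposed + age + comorbidity_count", data=df).fit(disp=False)
print(model.summary())
```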