Family Practice. 2016 Jul 28;33(6):639–643. doi: 10.1093/fampra/cmw065

The validation of electronic health records in accurately identifying patients eligible for colorectal cancer screening in safety net clinics

Amanda F Petrik a,*, Beverly B Green b, William M Vollmer a, Thuy Le c, Barbara Bachman a, Erin Keast a, Jennifer Rivelli a, Gloria D Coronado a
PMCID: PMC5161488  PMID: 27471224

Abstract

Background.

While electronic health records (EHRs) play a key role in increasing colorectal cancer (CRC) screening by identifying individuals who are overdue, important shortfalls remain.

Objectives.

As part of the Strategies and Opportunities to STOP Colon Cancer (STOP CRC) study, we assessed the accuracy of EHR codes in identifying patients eligible for CRC screening.

Methods.

We selected a stratified random sample of 800 study participants from 26 participating clinics in the Pacific Northwest region of the USA. We compared data obtained through EHR codes against findings from a manual chart audit. A trained chart abstractor completed the abstraction for both eligible and ineligible patients.

Results.

Of 520 individuals identified via the EHR as in need of CRC screening, 459 were confirmed through chart review (positive predictive value = 88%). Of 280 individuals flagged as up-to-date with screening per EHR data, 269 were confirmed through chart review (negative predictive value = 96%). Among the 61 patients incorrectly classified as eligible, 83.6% of disagreements were due to evidence of a prior colonoscopy or referral that was not captured in recognizable fields in the EHR.

Conclusions.

Our findings highlight the importance of better capture of past screening events in the EHR. While the need for better population-based data is not unique to CRC screening, CRC screening provides an important example of the use of population-based data not only for tracking care, but also for delivering interventions.

Key words: Colorectal cancer, community medicine, electronic health record, gastroenterology, screening.

Introduction

Despite unequivocal evidence that colorectal cancer screening effectively reduces mortality and morbidity, screening rates remain low in the USA (1,2). Data from the US National Health Interview Survey (NHIS), for example, show that in 2010, 41% of American adults aged 50–75 (nearly 35 million people) were not up-to-date with CRC screening (3). CRC continues to be the second leading cause of cancer-related death in both Europe and the USA (4).

Colorectal cancer screening is conducted through faecal testing (annually), flexible sigmoidoscopy (every 5 years) with annual faecal testing, or colonoscopy (every 10 years) (3). Despite the availability of these options, almost 30% of eligible American adults have never received CRC screening. It is troubling that the patients least likely to be up-to-date with screening are those who receive care at community clinics and Federally Qualified Health Center (FQHC) delivery sites across the USA (5–7). Although new interventions and systems for increasing screening are being tested, few physicians report using these approaches (8). This study, the STOP CRC project, tests a centralized approach to increasing colorectal cancer screening (9).

Electronic health records (EHRs) hold much promise for increasing screening rates by identifying eligible individuals who are overdue for, or who have never completed, CRC screening. However, important barriers to their use remain. For example, records of colonoscopy completion are frequently missing from the EHR. This is especially true in community clinics, where patients are referred for colonoscopy to external providers with separate EHR systems, making it difficult to identify procedures performed outside the clinic. Other sources of error include clinics’ reliance on patient self-report of testing, as patients tend to over-report colonoscopy completion (10–13). Even when the date of the colonoscopy is correctly reported by the patient, test results are often missing, which makes it difficult to determine when follow-up tests are due.

Variation in workflow and documentation of outside procedures also contributes to incomplete capture of CRC screening and testing events in the EHR (14). Results captured in notes rather than in discrete fields typically are not recoverable by the automated system processes used to identify patients due for screening (4). These data issues are compounded by the fact that records often lack the granularity needed to determine screening intervals (e.g. pathology reports) (14). Filling these important gaps in medical records is vital to realizing the promise of employing EHRs to increase CRC screening.

As part of the Strategies and Opportunities to Stop Colon Cancer in Priority Populations (STOP CRC) study, we sought to design EHR tools to identify patients due for CRC screening. Our overall goal was to test an automated, data-driven, direct-mail faecal testing intervention that would rely on these tools. As such, it was critical for us to understand the accuracy of the EHR data we used to identify which patients were eligible and due for CRC screening and to exclude those who were not (e.g. because of recent CRC screening, end-stage renal failure, prior CRC or inflammatory bowel disease). This article presents our analysis of the predictive value of the EHR data used to identify patients eligible for CRC screening.

Methods

The design of the STOP CRC trial has been described elsewhere (9). Briefly, STOP CRC is a cluster-randomized trial being conducted in 26 clinics from 8 FQHCs in the Pacific Northwest region of the USA to evaluate the impact of an EHR-driven intervention designed to increase CRC screening among adults who are not up-to-date with screening. Patients meet screening guidelines if they have had a faecal immunochemical test or faecal occult blood test (FIT/FOBT) in the past year, a flexible sigmoidoscopy in the past 5 years with annual faecal testing, or a colonoscopy in the past 10 years (3).

The study was a collaboration among OCHIN, the Center for Health Research (CHR) at Kaiser Permanente Northwest, and FQHCs in the USA. OCHIN is a non-profit Health Center-Controlled Network headquartered in Portland, Oregon, with an organization-wide single EHR that allows researchers to access clinical and utilization data across all OCHIN clinic sites. It also provides researchers with a robust data warehouse. OCHIN’s EHR includes a practice management data system (claims, billing, appointments) that has been customized for FQHCs and provides critical tools for clinical oversight, reporting, and quality improvement.

The STOP CRC intervention is pragmatic and therefore required the ability to accurately identify patients eligible for screening, i.e. those who did not meet current screening guidelines. We therefore conducted a chart audit substudy to assess the predictive value of our EHR-based screening algorithm. Specifically, we answered two questions: (i) ‘What proportion of individuals identified by EHR databases as being eligible for the study (i.e. not up-to-date with screening) really are eligible?’ and (ii) ‘What proportion of individuals identified as meeting criteria for study ineligibility really are ineligible (i.e. are up-to-date with screening)?’

STOP CRC population

Individuals were eligible for screening if they were 50–75 years of age and had no prior diagnosis of CRC, inflammatory bowel disease, end-stage renal failure, or total colectomy. We also required participants to have had at least one clinic visit in the past year and a viable address. The clinics that participated in the STOP CRC project were diverse: at randomization, 62% were located in rural areas, clinic size ranged from around 300 to 2600 eligible patients, Hispanic patients accounted for 2% to 36% of patients, and screening rates ranged from 18% to 60%.

We classified individuals as not being up-to-date with screening guidelines if they had: no colonoscopy with results in the prior 9 years; no flexible sigmoidoscopy with results in the prior 4 years; no FIT/FOBT test with results in the past 11 months; no referral for a colonoscopy or flexible sigmoidoscopy order during the previous year; or no FIT/FOBT order within the previous 6 months. The coding details we used to define these events are presented in the supplemental materials.
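Read operationally, the rule flags a patient as due for screening when none of the qualifying events falls inside its look-back window. The following is a minimal Python sketch of that logic; the record fields are hypothetical stand-ins rather than OCHIN's actual schema, and the month and year windows are approximated in days.

```python
from dataclasses import dataclass
from datetime import date, timedelta
from typing import Optional

@dataclass
class PatientHistory:
    """Hypothetical, simplified record of a patient's most recent events."""
    last_colonoscopy_result: Optional[date] = None    # resulted colonoscopy
    last_sigmoidoscopy_result: Optional[date] = None  # resulted flexible sigmoidoscopy
    last_fit_result: Optional[date] = None            # resulted FIT/FOBT
    last_scope_referral: Optional[date] = None        # colonoscopy/sigmoidoscopy order
    last_fit_order: Optional[date] = None             # FIT/FOBT order

def within(event: Optional[date], as_of: date, days: int) -> bool:
    """True if the event exists and occurred within `days` before `as_of`."""
    return event is not None and timedelta(0) <= (as_of - event) <= timedelta(days=days)

def not_up_to_date(h: PatientHistory, as_of: date) -> bool:
    """Flag as due for screening (study-eligible): no qualifying event in its window."""
    return not any([
        within(h.last_colonoscopy_result, as_of, 9 * 365),    # prior 9 years
        within(h.last_sigmoidoscopy_result, as_of, 4 * 365),  # prior 4 years
        within(h.last_fit_result, as_of, 11 * 30),            # past 11 months
        within(h.last_scope_referral, as_of, 365),            # previous year
        within(h.last_fit_order, as_of, 6 * 30),              # previous 6 months
    ])

# Example: a patient whose only event is a colonoscopy 12 years ago is due.
print(not_up_to_date(PatientHistory(last_colonoscopy_result=date(2001, 5, 1)),
                     as_of=date(2013, 6, 1)))  # True
```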

Protocol for chart validation

We used SAS/STAT® software to draw a simple random sample of 20 individuals from each clinic who met the study’s eligibility criteria and 10 who did not (two clinics were oversampled by 10 patients). The sample sizes were chosen to provide good precision for estimating both positive and negative predictive value for the clinics overall, while at the same time enabling us to identify major problems at individual clinics that we could then address as needed. Across the 26 clinics, we reviewed 800 charts: 520 to validate eligibility [i.e. to assess positive predictive value (PPV)] and 280 to validate ineligibility [i.e. to assess negative predictive value (NPV)]. Clinic systems ranged in size from 739 to 5961 patients. Details of the selection algorithms and proportions of patients sampled are in the supplemental materials.
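For illustration only, the per-clinic draw amounts to a simple random sample within each eligibility stratum. The sketch below is a hypothetical Python stand-in for the SAS/STAT selection; `clinic_patients` and the fixed seed are assumptions of the sketch, not study artifacts.

```python
import random

def draw_validation_sample(clinic_patients, n_eligible=20, n_ineligible=10, seed=42):
    """Simple random sample within one clinic, mirroring the 20/10 design.

    clinic_patients: dict mapping patient id -> True if flagged eligible
    (not up-to-date) by the EHR algorithm, False otherwise.
    """
    rng = random.Random(seed)  # fixed seed so the draw is reproducible
    eligible = sorted(pid for pid, flag in clinic_patients.items() if flag)
    ineligible = sorted(pid for pid, flag in clinic_patients.items() if not flag)
    return rng.sample(eligible, n_eligible), rng.sample(ineligible, n_ineligible)
```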

A trained chart abstractor logged into each clinic’s EPIC platform at OCHIN to complete the chart review. Validation forms were created and tested by the investigators, analyst, and abstractor.

For comparison, OCHIN staff pulled inclusion and exclusion data from the end user databases (Clarity). The data were then sent for analysis to CHR, where researchers compared the chart abstraction data to OCHIN end user datasets. Two staff analyzed the data sources and compared findings. All disagreements were discussed and arbitrated.

Analysis

We calculated PPV as the proportion of individuals flagged by the EHR algorithm as not being up-to-date with screening who, based on the chart review, were confirmed as not being up-to-date. Similarly, we defined NPV as the proportion of individuals deemed up-to-date with screening per the EHR who were confirmed as such by the chart review. Continuity-adjusted confidence intervals were calculated using formulas provided by Fleiss (15).
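For reference, the continuity-adjusted interval given by Fleiss is the Wilson score interval with continuity correction. The sketch below is an illustration (not the study's SAS code) and reproduces the overall figures reported in Table 1.

```python
import math

def fleiss_ci(successes: int, n: int, z: float = 1.96):
    """Wilson score interval with continuity correction (Fleiss) for a proportion x/n."""
    p = successes / n
    q = 1.0 - p
    lo = (2*n*p + z*z - 1 - z*math.sqrt(z*z - 2 - 1/n + 4*p*(n*q + 1))) / (2*(n + z*z))
    hi = (2*n*p + z*z + 1 + z*math.sqrt(z*z + 2 - 1/n + 4*p*(n*q - 1))) / (2*(n + z*z))
    return max(0.0, lo), min(1.0, hi)  # clamp to [0, 1]

# Overall PPV: 459 of 520 EHR-flagged patients confirmed -> 88.3% (85.1-90.9), as in Table 1.
print(fleiss_ci(459, 520))
# Overall NPV: 269 of 280 confirmed -> 96.1% (92.9-97.9), as in Table 1.
print(fleiss_ci(269, 280))
```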

Data from the chart review included free text, scanned documents, and other data that might not be captured by EHR codes. We used the chart review as the reference standard (i.e. treated it as the gold standard), assuming that the chart audit accurately reflects what is in the medical record, even though the record itself may be incomplete. Consequently, our analysis focused on estimating the predictive value of our operational participant selection algorithm. For this analysis, we assumed all missing data were truly missing and therefore did not conduct sensitivity analyses.

Results

Positive predictive value

Of the 520 individuals we found in need of CRC screening by applying our EHR-based classification rule, 459 (88%) were confirmed by chart review (Table 1). These patients did not have evidence in their medical record of being current for CRC screening (i.e. a colonoscopy, sigmoidoscopy or FIT/FOBT within the defined time periods). Across sites, PPV ranged from 77.5% to 95.8%.

Table 1.

Percent and 95% confidence intervals (95% CI) of patients who were correctly included (PPV) and excluded (NPV) from eligibility list, by health centre

Clinic system   Positive predictive value               Negative predictive value
                N     % correctly included (95% CI)     N     % correctly excluded (95% CI)
1                80   77.5 (66.5–85.8)                   40   97.5 (85.3–99.9)
2                40   90.0 (75.4–96.7)                   40   100.0 (89.1–99.8)
3                60   90.0 (78.8–95.9)                   30   96.7 (81.0–99.8)
4                60   95.0 (85.2–98.7)                   30   90.0 (72.3–97.4)
5                80   85.0 (74.9–91.7)                   40   100.0 (89.1–99.8)
6                40   80.0 (63.9–90.4)                   20   95.0 (73.1–99.7)
7                40   87.5 (72.4–95.3)                   20   90.0 (66.9–98.2)
8               120   95.8 (90.0–98.4)                   60   95.0 (85.2–98.7)
Total           520   88.3 (85.1–90.9)                  280   96.1 (92.9–97.9)

Of the 61 patients incorrectly classified as needing screening, 83.6% (51/61) had evidence of a prior colonoscopy or referral that was not captured in recognizable fields in the EHR (Table 2). This included 31.1% (19/61) who had evidence of a colonoscopy with a procedure report present; for example, clinician notes indicated communication with the GI specialist to whom the patient was referred, and the procedure or pathology report was located in scanned documents. A further 44.3% (27/61) of the disagreements had no colonoscopy report present but did have a physician-reported colonoscopy in the problem list. Additionally, among the disagreements due to colonoscopy, five individuals (8.2%) had an indication of a patient-reported colonoscopy and four (6.6%) had evidence of a referral during the past year in a notes field. These findings are not mutually exclusive, and patients could fall into more than one category.

Table 2.

Reasons for incorrect inclusion (n = 61) and incorrect exclusion (n = 11) found on chart audit

Reason for incorrect inclusion found on chart audit^a                         n = 61, n (%)
Evidence of prior colonoscopy or referral^a                                   51 (83.6%)
    Evidence of colonoscopy reported by physician in encounter,
      problem list (no pathology report present)                              27 (44.3%)
    Evidence of colonoscopy with pathology report present                     19 (31.1%)
    Patient-reported colonoscopy, no pathology report present                 5 (8.2%)
    Evidence of colonoscopy referral only                                     4 (6.6%)
Evidence of prior FIT                                                         12 (19.7%)
    Encounter notes about sending home FIT tests                              5 (8.2%)
    FIT codes not recognized                                                  7 (11.5%)
Other exclusion                                                               4 (6.6%)
    Unrecognized codes of unspecified colorectal cancer screening             2 (3.3%)
    Colitis                                                                   1 (1.6%)
    Colectomy                                                                 1 (1.6%)

Reason for incorrect exclusion found on chart audit                           n = 11, n (%)
Evidence of resulted FIT, but more than 1 year ago                            6 (54.5%)
No codes found that indicate ineligibility                                    3 (27.3%)
Evidence of prior colonoscopy or referral but not up-to-date                  2 (18.2%)

^a Reasons are not mutually exclusive.

Some of the disagreements in eligibility were due to missed faecal test orders and results (Table 2). Of the 61 disagreements, 19.7% (n = 12) were due to missed evidence of faecal testing: 8.2% (n = 5) because FIT orders were recorded in notes, and 11.5% (n = 7) because tests were processed in house without codes or with non-standardized or internal codes (4/7), or carried obscure codes for unknown reasons (3/7). A further 6.6% (n = 4) of the 61 disagreements were due to unrecognized codes for unspecified colorectal cancer screening, colitis or colectomy.

Negative predictive value

Of the 280 individuals flagged per their EHR data as ineligible for the study (i.e. up-to-date with screening or meeting another exclusion criterion), 269 (96%) were confirmed by chart review (Table 1). These patients had evidence in their record of screening or of other exclusion criteria (e.g. prior CRC diagnosis or renal failure) and were excluded from the study. Across sites, NPV ranged from 90.0% to 100%.

Of the 11 incorrectly excluded patients, 54.5% (n = 6) were excluded because of faecal testing; these patients had a resulted test more than 11 months before the study period, which should have made them eligible (Table 2). Another 27.3% (3/11) were excluded even though we found no codes indicating ineligibility. The final 18.2% (2/11) had evidence of a colonoscopy or a colonoscopy referral that fell outside the window of ineligibility (colonoscopy more than 10 years earlier, or referral more than a year earlier).

Discussion

We assessed the accuracy of EHR codes in identifying patients eligible for CRC screening and found a need for better capture of past screening events. The power of EHRs, data registries, and population-based approaches offers researchers and care providers a robust tool for improving CRC screening rates. In using EHRs to estimate screening prevalence, Hubbard et al. calculated the bias in estimates of screening colonoscopies and reported underestimates of 3% and overestimates of 12% across methods such as EHR capture and self-report (10). Similarly, Palaniappan et al. (11) found that EHR data showed 6% to 14% less screening than self-reported data, and Reiter et al. (12) found that participants over-reported CRC screening: 68% reported screening versus 49% shown in the medical records. The community clinic environment often relies on communication from specialists or patients to track colonoscopy screening. The need for accurate data will continue to grow as interest in improving CRC screening uptake increases; however, resources for improving capture are often limited in community clinics.

In our study, the EHR databases captured accurate inclusion and exclusion data more often than expected; nearly all of the ineligible patients were correctly excluded. This might reflect our study’s pragmatic design, which used few exclusion criteria and relied on conditions for which coding is expected to be more reliable (e.g. CRC and renal failure diagnoses).

Implications

The extent of disagreement among the eligible population illustrates the need to improve data capture. The biggest reason for disagreement between the data sources was that colonoscopies were not captured in discoverable fields, which is consistent with previous research (16). Colonoscopies for FQHC patients are completed by outside specialists in facilities without direct data linkage to primary care, which makes data capture difficult: the EHR procedural codes necessary for documentation and billing are recorded at another facility whose EHR is not linked to the community clinic’s EHR (a lack of interoperability). Once the community clinic provider receives the procedure report, there is no consistent process for transferring the data into the EHR. Even when the procedure is documented and discoverable, results of the procedure and pathology results (if a biopsy is performed) are often not available. These data would ideally be entered into discrete fields to inform subsequent screening intervals or surveillance after positive results; EHRs, however, have not yet been able to handle gathering this type of data. While EHR tools such as Health Maintenance (EPIC) provide a partial solution by aggregating EHR data to inform population care needs, data still need to be added manually to the EHR. At the time of the STOP CRC rollout, Health Maintenance was relatively new and not widely used in participating FQHCs. Optimal use requires training and specific workflows to ensure that completed procedure results are obtained and entered properly into the EHR system. While these tools make managing repeated screening more standardized, they are still a ‘work-around’ that might be better solved by direct sharing of data across providers and facilities.

Data capture in clinics can be improved by creating workflows and processes to follow up on referred and patient-reported colonoscopies, and to acquire missing or scanned reports that have not been recorded in discrete fields. While the STOP CRC project created a series of reports to assist in the ‘scrubbing’ and identification of scanned colonoscopies that are not otherwise recorded, such reports are used with varying consistency. Variation in PPV across clinic organizations is not surprising: because different specialists are used in different geographical areas, processes for communicating results vary. Data capture might also be improved by sharing colonoscopy results with patients, who could then share them with a provider. This would be especially beneficial when patients change clinic organizations and the EHR is not shared.

When people are incorrectly included in CRC screening programs, clinics incur unnecessary costs from the misclassification of patients’ CRC screening status. While FIT kits are inexpensive (generally about $6.00 per kit), there are additional costs: postage and staff time spent preparing, ordering, and mailing the kits. To minimize this expense, staff at most participating clinics chose to scrub reports of eligible participants to identify colonoscopies among patients in the eligible pool. Incorrect classification of screening status may have other unintended effects: an invitation to screen may cause anxiety in a patient who recently had a normal screening result, undermine confidence in the provider, or cause confusion for a patient under active surveillance after a positive test. Unnecessary screening may also create patient and provider uncertainty if a FIT test is positive after a recent normal colonoscopy.

Colonoscopies that are not recorded correctly can put patients at risk when biopsy results are unknown and recommended follow-up testing is not tracked. Currently, some EHRs default to a follow-up colonoscopy interval of 10 years. In the event a colonoscopy has abnormal results, this sort of default could be dangerous if those results were not received and the follow-up interval was not modified.

Laboratory results can also be problematic. The EHR vendor (in this case, OCHIN) has laboratory interfaces for direct transfer of data with most, but not all, outside labs where FIT tests were analyzed; without an interface, results have to be entered into the EHR manually. Potential solutions include population-wide data repositories, with successful examples emerging internationally and early experiments at the state level (17). These issues are not unique to CRC screening; similar issues emerge for other screening tests (e.g. mammography), immunizations, and the care of chronic conditions. Clinical protocols allowing the capture of screening completion and results in discrete EHR fields are therefore necessary for population-wide screening programs to be effective.

Limitations

There are several limitations to this study. STOP CRC study clinics volunteered to participate in a cluster trial and agreed to be randomized to implement EHR-embedded tools for improving population CRC screening or to a delayed intervention; our clinics may therefore be more engaged than most in EHR-based activities for promoting population-based care. Additionally, our study was conducted in an FQHC environment and may not be generalizable to other community clinics, private practices, health care systems, or countries with universal health care and centralized data systems. These findings should, however, generalize to community clinics that serve underinsured, uninsured or low-income patients, which typically comprise a network of primary care clinics (sometimes with additional services such as dental and mental health care) whose specialty services are external to their clinics and EHR systems.

Another limitation is our use of the EHR medical record as the gold standard, which itself may have incomplete CRC screening data. The clinics share a common EHR and a common set of tools for managing population health, and receive added support and training to use these tools. Also, we could only validate our classification rules against what was in the EHR, and thus could have missed outside utilization not occurring at (or even ordered by) these FQHCs. Finally, we report only on PPV and NPV. Although these were the pertinent statistics for STOP CRC, we recognize that, unlike sensitivity and specificity, predictive values depend on the underlying prevalence of screening in the population under study. For a given sensitivity and specificity, the PPV of any EHR-based rule for identifying who needs CRC screening will decrease as the prevalence of CRC screening in the population increases, while at the same time the NPV will increase. For instance, for a sensitivity of 90% and a specificity of 82% (the naïve estimates from this study, ignoring sampling), PPV will decrease from 94% to 63% as the screening prevalence increases from 25% to 75%, while NPV will increase from 73% to 96%. Hence, to truly generalize our findings to other settings, one would need to know the sensitivity and specificity of our classification rules. Unfortunately, our sampling protocol makes it extremely difficult to estimate these quantities, since (i) we used different sampling fractions for those classified as needing versus not needing screening and (ii) these fractions further varied across clinics. Even if we knew the sensitivity and specificity, the generalizability of our findings to another setting might vary depending on the relative richness of the data in that setting’s EHR: a more comprehensive EHR end-user database would be expected to yield greater predictive value, while a less comprehensive one should yield lower predictive value. Existing reports, however, have supplied us with the prevalence of CRC screening in our clinics. The National Quality Forum (NQF) score measures the percentage of patients aged 50–75 who have had appropriate screening for colorectal cancer; in 2013, the NQF scores in these clinics ranged from 15.4% to 59.3%.
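This prevalence dependence follows directly from Bayes’ theorem. The sketch below reproduces the figures quoted above; note that ‘condition positive’ here means truly not up-to-date, so the proportion needing screening is one minus the screening prevalence.

```python
def predictive_values(sens: float, spec: float, p_needs: float):
    """PPV/NPV of a rule flagging patients as needing screening, where
    p_needs is the true proportion not up-to-date with screening."""
    tp = sens * p_needs              # flagged and truly due for screening
    fp = (1 - spec) * (1 - p_needs)  # flagged but actually up-to-date
    tn = spec * (1 - p_needs)        # not flagged and up-to-date
    fn = (1 - sens) * p_needs        # missed patients who are due
    return tp / (tp + fp), tn / (tn + fn)

# Screening prevalence 25% -> 75% of patients need screening.
print(predictive_values(0.90, 0.82, 0.75))  # PPV ~0.94, NPV ~0.73
# Screening prevalence 75% -> only 25% need screening.
print(predictive_values(0.90, 0.82, 0.25))  # PPV ~0.63, NPV ~0.96
```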

Conclusions

The STOP CRC study has increased CRC screening uptake in populations with extremely low screening rates (18). EHR data allowed us to rapidly identify patients who were eligible for CRC screening and to deliver an automated, mailed-FIT CRC screening program. While the high NPV suggests that our algorithm does not miss many individuals who truly need screening, we did not formally assess test sensitivity. By contrast, the lower PPV suggests that our protocol will result in some over-screening. While erring on the side of over-screening may be preferable in under-screened priority populations, over-screening is not without potential harm (19).

As CRC screening uptake increases, so does the need to improve data capture of all screening events. While the need for better population-based data is not unique to CRC screening, CRC provides an important example of using population-based data not only for tracking needed care but also for directly delivering needed interventions.

Supplementary material

Supplementary material is available at Family Practice online.

Declaration

Funding: NIH-sponsored Health Systems Collaboratory Project (UH3CA188640), Clinical Trials NCT01742065; National Institutes of Health (NIH) Common Fund, through a cooperative agreement (U54 AT007748) from the Office of Strategic Coordination within the Office of the NIH Director.

Ethical approval: none.

Conflict of interest: The authors have no conflicts of interest to declare.


Acknowledgements

The article was presented at the 8th Annual Conference on Dissemination and Implementation, Washington, DC, December 14, 2015. The views presented here are solely the responsibility of the authors and do not necessarily represent the official views of the National Institutes of Health.

References

1. US Department of Health and Human Services Health Resources and Services Administration. Colorectal Cancer Screening [online]. http://www.hrsa.gov/quality/toolbox/measures/colorectalcancer/index.html (accessed 6 September 2016).
2. Centers for Disease Control and Prevention (CDC). Vital signs: colorectal cancer screening test use—United States, 2012. MMWR Morb Mortal Wkly Rep 2013; 62: 881–8.
3. Centers for Disease Control and Prevention. Morbidity and Mortality Weekly Report: Cancer Screening—United States, 2010. Atlanta, GA, 2012, pp. 41–56.
4. Garborg K, Holme O, Loberg M, Kalager M, Adami HO, Bretthauer M. Current status of screening for colorectal cancer. Ann Oncol 2013; 24: 1963–72.
5. Sarfaty M, Doroshenk M, Hotz J, et al. Strategies for expanding colorectal cancer screening at community health centers. CA Cancer J Clin 2013; 63: 221–31.
6. Davis TC, Rademaker A, Bailey SC, et al. Contrasts in rural and urban barriers to colorectal cancer screening. Am J Health Behav 2013; 37: 289–98.
7. Klabunde CN, Cronin KA, Breen N, Waldron WR, Ambs AH, Nadel MR. Trends in colorectal cancer test use among vulnerable populations in the United States. Cancer Epidemiol Biomarkers Prev 2011; 20: 1611–21.
8. Yabroff KR, Zapka J, Klabunde CN, et al. Systems strategies to support cancer screening in U.S. primary care practice. Cancer Epidemiol Biomarkers Prev 2011; 20: 2471–9.
9. Coronado GD, Vollmer WM, Petrik A, et al. Strategies and opportunities to STOP colon cancer in priority populations: design of a cluster-randomized pragmatic trial. Contemp Clin Trials 2014; 38: 344–9.
10. Hubbard R, Chubak J, Rutter C. Estimating screening test utilization using electronic health records data. EGEMS (Washington, DC) 2014; 2: 14.
11. Palaniappan LP, Maxwell AE, Crespi CM, Wong EC, Shin J, Wang EJ. Population colorectal cancer screening estimates: comparing self-report to electronic health record data in California. Int J Canc Prev 2011; 4: pii: 28540.
12. Reiter PL, Katz ML, Oliveri JM, Young GS, Llanos AA, Paskett ED. Validation of self-reported colorectal cancer screening behaviors among Appalachian residents. Public Health Nurs 2013; 30: 312–22.
13. Shokar NK, Vernon SW, Carlson CA. Validity of self-reported colorectal cancer test use in different racial/ethnic groups. Fam Pract 2011; 28: 683–8.
14. Hersh WR, Weiner MG, Embi PJ, et al. Caveats for the use of operational electronic health record data in comparative effectiveness research. Med Care 2013; 51: S30–7.
15. Fleiss JL. Statistical Methods for Rates and Proportions. New York: John Wiley & Sons, 1981.
16. Baker DW, Liss DT, Alperovitz-Bichell K, et al. Colorectal cancer screening rates at community health centers that use electronic health records: a cross sectional study. J Health Care Poor Underserved 2015; 26: 377–90.
17. van Hees F, Zauber AG, van Veldhuizen H, et al. The value of models in informing resource allocation in colorectal cancer screening: the case of the Netherlands. Gut 2015; 64: 1985–97.
18. Coronado GD, Vollmer WM, Petrik A, et al. Strategies and opportunities to STOP colon cancer in priority populations: pragmatic pilot study design and outcomes. BMC Cancer 2014; 14: 55.
19. Whitlock EP, Lin J, Liles E, et al. Screening for Colorectal Cancer: An Updated Systematic Review [Internet]. Rockville, MD: Agency for Healthcare Research and Quality (US), 2008 (Evidence Syntheses, No. 65.1). http://www.ncbi.nlm.nih.gov/books/NBK35179/

