Abstract
PURPOSE
Population-based administrative health care data could be a valuable resource with which to study the cancer diagnostic interval. The objective of the current study was to determine the first encounter in the diagnostic interval and compute that interval in a cohort of patients with breast cancer using an empirical approach.
METHODS
This is a retrospective cohort study of patients with breast cancer diagnosed in Ontario, Canada, between 2007 and 2015. We used cancer registry, physician claims, hospital discharge, and emergency department visit data to identify and categorize cancer-related encounters that were more common in the three months before diagnosis. We used statistical control charts to define lookback periods for each encounter category. We identified the earliest cancer-related encounter that marked the start of the diagnostic interval. The end of the interval was the cancer diagnosis date.
RESULTS
The final cohort included 69,717 patients with breast cancer. We identified an initial encounter in 97.8% of patients. Median diagnostic interval was 36 days (interquartile range [IQR], 19 to 71 days). Median interval decreased with increasing stage at diagnosis and varied across initial encounter categories, from 9 days (IQR, 1 to 35 days) for encounters with other cancer as the diagnosis to 231 days (IQR, 77 to 311 days) for encounters with cyst aspiration or drainage as the procedure.
CONCLUSION
Diagnostic interval research can inform early detection guidelines and assess the success of diagnostic assessment programs. Use of administrative data for this purpose is a powerful tool for improving diagnostic processes at the population level.
CONTEXT
Key Objective
Can routinely collected administrative health data be used to measure the cancer diagnostic interval by identifying the earliest cancer-related health care encounter before diagnosis?
Knowledge Generated
Use of a data-driven method to identify the earliest cancer-related encounter in the breast cancer diagnostic interval resulted in a median interval of 36 days (interquartile range, 19 to 71 days), with 10% of patients waiting more than 144 days for diagnosis. The observed breast cancer diagnostic interval was longer than previously reported, owing to improved methods to identify the first encounter in the diagnostic interval.
Relevance
Health systems and health care providers strive to provide timely care for patients diagnosed with cancer. The methods developed and applied in this study offer a generalized approach to characterizing the cancer diagnostic interval using routinely collected health care data that can be applied across cancer sites and jurisdictions with similar data holdings, thereby supporting health systems surveillance and quality improvement initiatives aimed at care within the diagnostic interval.
INTRODUCTION
Ineffective cancer diagnostic processes as reflected in system-related diagnostic delay may contribute to a more advanced stage at diagnosis and cause distress to patients.1-8 The length of the cancer diagnostic interval, defined in the Aarhus statement as the time from a patient’s presentation to the health care system to the patient’s cancer diagnosis, has been recognized as a determinant of cancer outcomes, including survival.9-12 Initiatives that are aimed at promoting an earlier cancer diagnosis have targeted care within the diagnostic interval using practice guidelines and diagnostic pathways.13-17 There is a need to accurately monitor the diagnostic interval and changes in the interval associated with such initiatives. Administrative health care data have the potential to provide such information.
Previous studies that have evaluated the cancer diagnostic interval using administrative data have used arbitrary lookback periods to identify cancer-related health care encounters, likely leading to interval length inaccuracies.18,19 These studies also restricted cancer-related encounters to cancer tests and specialist visits, excluding provisional diagnoses or misdiagnoses, which likely resulted in an underestimation of interval length and the activity within the interval.18-20 Here, we report on an enhanced methodology for calculating the diagnostic interval using administrative data. Our method includes a broader range of cancer-related encounters and an empirical approach for identifying the first cancer-related encounter.
The current report demonstrates the use of this method in a cohort of patients with breast cancer. Our initial developmental work was in oral cavity cancer,21 and we have also used this method in colorectal cancer.22,23 Although breast cancer can be detected early with screening mammograms, 44% to 52% of patients are diagnosed symptomatically, which increases their risk of late-stage disease.20,24-26 Characterizing the breast cancer diagnostic interval is a first step toward developing successful strategies to ensure the early recognition and efficient diagnostic evaluation of breast cancer signs and symptoms and to achieve a timely, earlier diagnosis.
METHODS
The study population was a retrospective cohort of patients with breast cancer who were diagnosed in Ontario, Canada, between January 1, 2007, and December 31, 2015. Patients were excluded if they were missing a data linkage identifier, diagnosed on death certificate only, eligible for Ontario Health Insurance Plan (OHIP) coverage for less than 6 months before diagnosis, younger than 18 years or older than 105 years at diagnosis, diagnosed with a previous cancer or with a concurrent cancer within 6 months of the breast cancer diagnosis, male, not an Ontario resident at diagnosis, or diagnosed with stage 0 cancer.
This study used linked administrative databases from ICES.27 Data sources are listed in Table 1. The following sections describe the steps involved in identifying the initial health care encounter that defines the start of the diagnostic interval. This process is also depicted in Figure 1. We defined the end of the interval as the diagnosis date in the Ontario Cancer Registry (OCR), which is normally the first positive biopsy date.
TABLE 1.
Administrative Data Sources
FIG 1.
Steps to identifying the initial cancer-related health care encounter.
Step One: Identifying Relevant Physician Specialties
We determined the physician specialties whose encounters we would examine in Step Two by identifying those specialties that patients saw more often in the 3 months immediately preceding diagnosis than in the period more than 3 to 15 months before diagnosis. We used OHIP billing claims, and specialty selection was also guided by clinical advice to ensure face validity.
Step Two: Identifying Cancer-Related Health Care Encounters
We identified diagnoses and procedures on OHIP claims that occurred more often in the 3 months preceding diagnosis than in the period more than 3 to 12 months before diagnosis. We examined all encounters with physicians whose specialties were identified in Step One. We categorized these encounters by grouping similar diagnoses and similar procedures. This work was reviewed by a clinician for face validity.
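The comparison underlying Steps One and Two amounts to a simple frequency contrast between a near-diagnosis window and an earlier background window. The sketch below is illustrative only and is not the authors' code: the claims table and its column names (days_before_dx, diag_code) are assumptions, as is the ratio-based ranking. Step One follows the same pattern with a 3-to-15-month comparison window and grouping by physician specialty instead of claim code, and the study's final selections were also reviewed clinically.

```python
import pandas as pd

def flag_cancer_related(claims: pd.DataFrame,
                        code_col: str = "diag_code",
                        near_days: int = 90,
                        far_days: int = 365) -> pd.DataFrame:
    """Flag codes whose weekly frequency in the near_days before diagnosis
    exceeds their weekly frequency in the (near_days, far_days] window."""
    near = claims[claims["days_before_dx"] <= near_days]
    far = claims[(claims["days_before_dx"] > near_days) &
                 (claims["days_before_dx"] <= far_days)]

    # Normalize counts by window length so the two periods are comparable.
    near_rate = near.groupby(code_col).size() / (near_days / 7.0)
    far_rate = far.groupby(code_col).size() / ((far_days - near_days) / 7.0)

    out = pd.DataFrame({"near_rate": near_rate, "far_rate": far_rate}).fillna(0.0)
    out["ratio"] = out["near_rate"] / out["far_rate"].where(out["far_rate"] > 0)
    out["candidate"] = out["near_rate"] > out["far_rate"]
    return out.sort_values("ratio", ascending=False)
```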
Step Three: Determining Each Encounter Category’s Lookback Period
We used cumulative sum control charts to identify the lookback period for each encounter category.28-30 We defined lookbacks across three time periods—diagnosed in 2007 to 2009, 2010 to 2012, or 2013 to 2015—to account for lookback variations across this 9-year study.
All cancer-related encounters were captured in the 15 months before diagnosis regardless of physician specialty. We also captured equivalent Canadian Institute for Health Information hospital discharge and ambulatory clinic encounters as well as National Ambulatory Care Reporting System emergency department encounters not already identified in OHIP.
For each encounter category, control charts compared the weekly encounter frequencies for each of the 52 weeks before diagnosis with the background encounter frequency, defined as the mean weekly encounter frequency in the period more than 12 and up to 15 months before cancer diagnosis. We used four established rules to identify a signal in the control charts30:
Rule 1: Any weekly count more than three standard deviations greater than the background frequency
Rule 2: Two of three consecutive weekly counts greater than two standard deviations more than the background frequency
Rule 3: Four of five consecutive weekly counts greater than one standard deviation more than the background frequency
Rule 4: Eight consecutive weekly counts more than the background encounter frequency
The encounter category-specific lookback period extended to the farthest week from diagnosis that had a signal: scanning backward from the diagnosis date, the lookback period ended at the first week with no signal, even if a signal re-emerged further back in time. We calculated four lookback period cut points for each encounter category using rules one to four, one to three, one and two, and one only, each providing a progressively shorter lookback period.
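A minimal sketch of how these rules and the backward scan might be implemented is shown below. It is not the authors' code: the attribution of a window-based signal (rules 2 to 4) to every week in the qualifying window and the use of the sample standard deviation of the background weekly counts are assumptions made for illustration.

```python
import numpy as np

def lookback_weeks(weekly_counts, background_counts, rules=(1, 2, 3, 4)):
    """weekly_counts: counts for weeks 1..52 before diagnosis (index 0 = week
    nearest diagnosis). background_counts: weekly counts from the >12- to
    15-month background period. Returns the lookback period in weeks."""
    counts = np.asarray(weekly_counts, dtype=float)
    mu = float(np.mean(background_counts))
    sd = float(np.std(background_counts, ddof=1))
    n = len(counts)
    signal = np.zeros(n, dtype=bool)

    def window_rule(k, m, threshold):
        # Flag every week inside any k-week window with >= m counts above threshold.
        hit = counts > threshold
        flagged = np.zeros(n, dtype=bool)
        for start in range(n - k + 1):
            if hit[start:start + k].sum() >= m:
                flagged[start:start + k] = True
        return flagged

    if 1 in rules:                                  # Rule 1: > mean + 3 SD
        signal |= counts > mu + 3 * sd
    if 2 in rules:                                  # Rule 2: 2 of 3 > mean + 2 SD
        signal |= window_rule(3, 2, mu + 2 * sd)
    if 3 in rules:                                  # Rule 3: 4 of 5 > mean + 1 SD
        signal |= window_rule(5, 4, mu + 1 * sd)
    if 4 in rules:                                  # Rule 4: 8 consecutive > mean
        signal |= window_rule(8, 8, mu)

    # Scan backward from diagnosis; stop at the first week with no signal,
    # even if a signal re-emerges further back in time.
    lookback = 0
    for week in range(n):
        if not signal[week]:
            break
        lookback = week + 1
    return lookback
```

Calling this function with rules=(1, 2, 3), (1, 2), or (1,) yields the progressively shorter cut points described above.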
Step Four: Assessing Each Encounter Category’s Signal Strength
We computed a signal strength measure for each lookback period by calculating the proportion of encounters in the lookback period that exceeded the expected number on the basis of the background mean weekly frequency. Signal strength was computed for all four lookback period cut points, stopping once a signal strength of 80% or greater was achieved. If a signal strength of 80% or greater could not be achieved, that encounter category was excluded from our collection of cancer-related encounters.
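As we read it, the signal strength expresses the excess over the expected count as a share of the observed encounters in the lookback period, and the four cut points are tested in order until one reaches 80%. The sketch below reflects that interpretation and reuses lookback_weeks from the previous sketch; it is an assumption about the calculation, not the authors' code.

```python
def signal_strength(weekly_counts, background_mean, lookback):
    """Proportion of encounters within the lookback period that are in excess
    of the number expected from the background mean weekly frequency."""
    observed = sum(weekly_counts[:lookback])        # weeks nearest diagnosis
    if observed == 0:
        return 0.0
    expected = background_mean * lookback
    return max(0.0, (observed - expected) / observed)

def select_lookback(weekly_counts, background_counts):
    """Evaluate the four cut points in order and keep the first with >= 80%
    signal strength; return (None, None) if none qualifies, in which case the
    encounter category is excluded."""
    mu = sum(background_counts) / len(background_counts)
    for rules in [(1, 2, 3, 4), (1, 2, 3), (1, 2), (1,)]:
        lb = lookback_weeks(weekly_counts, background_counts, rules)
        if lb and signal_strength(weekly_counts, mu, lb) >= 0.80:
            return lb, rules
    return None, None
```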
Step Five: Collecting All Prediagnostic Cancer-Related Encounters for Each Patient
For each patient, we had now captured all cancer-related encounters and their dates. Mammogram-related encounters were categorized as screening or diagnostic on the basis of the procedure codes, when possible. Those with nonspecific codes were categorized using subsequent procedures: if the mammogram was followed by another screening or diagnostic mammogram or by a breast ultrasound, it was categorized as a screening mammogram; otherwise, it was categorized as a diagnostic mammogram.
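The reclassification of nonspecific mammogram codes lends itself to a small rule-based helper, sketched below. The Encounter structure and the category labels are hypothetical stand-ins; the study applied this rule using OHIP procedure codes.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Encounter:
    patient_id: int
    service_date: date
    category: str        # e.g., "screening_mammogram", "diagnostic_mammogram",
                         # "nonspecific_mammogram", "breast_ultrasound", ...

FOLLOW_UP = {"screening_mammogram", "diagnostic_mammogram", "breast_ultrasound"}

def classify_nonspecific_mammogram(index: Encounter, later: list) -> str:
    """Resolve a nonspecific mammogram code: screening if it is followed by
    further breast imaging for the same patient, otherwise diagnostic."""
    followed = any(e.patient_id == index.patient_id and
                   e.service_date > index.service_date and
                   e.category in FOLLOW_UP
                   for e in later)
    return "screening_mammogram" if followed else "diagnostic_mammogram"
```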
Step Six: Adding Referring Doctor Encounters
We identified the referring physician for all nonscreening procedure–based encounters, then looked for a diagnosis-based encounter with that physician before the procedure. If there was no such visit, we identified the last visit with that referring physician for any reason in the 6 months before the procedure date. This referring physician encounter was then assigned to the associated procedure encounter record for the purpose of identifying the index contact date. For procedure-based encounters with no referring doctor identified or no visit to the referring doctor found, we assigned a category code of x.5, where x is the procedure encounter category number, and we used the procedure date on record when identifying the earliest encounter.
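The referral lookup can be sketched as follows, under assumed table layouts: the column names (referring_physician_id, is_diagnosis_based, visit_date, procedure_date) are placeholders, and 182 days stands in for the 6-month window. This is an illustrative sketch, not the study's implementation.

```python
import pandas as pd

def attach_referral_date(procedures: pd.DataFrame, visits: pd.DataFrame) -> pd.DataFrame:
    """Attach the referring physician visit date to each nonscreening procedure
    encounter (diagnosis-based visit if available, otherwise the last visit for
    any reason in the prior 6 months); keep the procedure date and an x.5
    category when no referring doctor or qualifying visit is found."""
    out = procedures.copy()
    out["index_date"] = out["procedure_date"]
    out["index_category"] = out["category"] + 0.5    # x.5 fallback (categories 8-22 are numeric)

    for i, proc in out.iterrows():
        ref_id = proc["referring_physician_id"]
        if pd.isna(ref_id):
            continue                                 # no referring doctor: keep fallback
        prior = visits[(visits["patient_id"] == proc["patient_id"]) &
                       (visits["physician_id"] == ref_id) &
                       (visits["visit_date"] < proc["procedure_date"])]
        dx_based = prior[prior["is_diagnosis_based"]]
        if not dx_based.empty:
            chosen = dx_based["visit_date"].max()
        else:
            recent = prior[prior["visit_date"] >=
                           proc["procedure_date"] - pd.Timedelta(days=182)]
            if recent.empty:
                continue                             # no visit found: keep fallback
            chosen = recent["visit_date"].max()
        out.at[i, "index_date"] = chosen
        out.at[i, "index_category"] = proc["category"]
    return out
```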
Initial Health Care Encounter Identification and Calculation of the Diagnostic Interval
We identified each patient’s earliest breast cancer–related encounters from all encounters collected. If there was more than one encounter on the index contact date, we applied a hierarchy (Table 2) to assign the index contact encounter type. Diagnostic interval was calculated as the time from the index contact to diagnosis date, with a minimum of 1 day.
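To make the index contact selection and interval calculation concrete, a minimal sketch follows. The hierarchy values and category names are placeholders for the ordering given in Table 2, and the 1-day minimum mirrors the rule stated above.

```python
from datetime import date

# Hypothetical hierarchy: lower rank means higher priority when encounter dates tie
# (the actual ordering is given in Table 2).
HIERARCHY = {"breast_cancer_dx": 1, "abnormal_screen": 2, "breast_imaging": 3,
             "symptom_visit": 4}

def diagnostic_interval(encounters, dx_date):
    """encounters: list of dicts with 'date' and 'category' keys. Returns the
    index contact encounter and the diagnostic interval in days (minimum 1 day)."""
    index_date = min(e["date"] for e in encounters)
    same_day = [e for e in encounters if e["date"] == index_date]
    index_contact = min(same_day, key=lambda e: HIERARCHY.get(e["category"], 99))
    interval = max(1, (dx_date - index_date).days)
    return index_contact, interval

# Example with hypothetical encounters: two tie on the earliest date, and the
# hierarchy selects the abnormal screen as the index contact (interval = 62 days).
encs = [{"date": date(2014, 3, 1), "category": "symptom_visit"},
        {"date": date(2014, 3, 1), "category": "abnormal_screen"},
        {"date": date(2014, 4, 10), "category": "breast_imaging"}]
contact, days = diagnostic_interval(encs, date(2014, 5, 2))
```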
TABLE 2.
Breast Cancer–Related Health Care Encounter Lookback Periods
We describe the diagnostic interval distribution and report the proportion of patients who were diagnosed within 7 weeks of their index contact. This is in accordance with a Canadian timeliness benchmark that 90% of patients with screen-detected breast cancer who require a tissue biopsy should be diagnosed within 7 weeks.31
RESULTS
The final cohort had 69,717 patients (Data Supplement). Mean age was 61.1 years (standard deviation, 13.8 years). Cancer stage distribution was as follows: stage I, 40.0%; stage II, 36.7%; stage III, 13.5%; stage IV, 4.7%; and unknown, 5.0%.
Relevant physician specialties included family medicine, general surgery, internal medicine, and diagnostic radiology. Encounters with these specialties were used to define the cancer-related encounter categories listed in Table 2 (see Data Supplement for OHIP and CIHI category coding). Diagnosis-based encounters—categories 1 to 7—included those that were overtly cancer-related, such as breast and other related cancers, and breast-related provisional diagnoses, such as breast cysts. Diagnostic encounter categories 1 and 2 can indicate either a pathologically confirmed diagnosis of breast cancer or a physician’s suspicion of breast cancer based on the patient’s signs or symptoms. Procedure-based encounters—categories 8 to 22—included consults, breast imaging procedures, and surgical procedures.
Final lookback periods and corresponding signal strength values for each encounter category are listed in Table 2. Most encounter categories contained intracategory variations in the lookback periods across the three time periods, with differences of as much as 31 weeks (category 9, cyst aspiration or drainage). Intercategory variation ranged from a low of 6 weeks (lymph system–related conditions) to a high of 52 weeks (cyst aspiration or drainage). Four encounter categories—6, 7, 17, and 21—were discarded because they did not achieve 80% signal strength. Encounter category 19 (other magnetic resonance imaging) only achieved 80% signal strength among patients who were diagnosed in 2007 to 2009, so it was excluded in the other years. Median weekly encounter frequency in the 52 weeks before diagnosis varied across encounter categories, with the lowest among breast magnetic resonance imaging, cyst aspiration and drainage, and mastectomy, and the highest among other x-ray and signs and symptoms, not breast-related.
Figure 2 presents control chart examples for four encounter categories in patients who were diagnosed in 2013 to 2015. In each control chart, there is an increase in the encounter frequency closer to diagnosis; however, the rate of increase varied across encounter categories. The decrease in Ontario Breast Screening Program abnormal screening immediately preceding diagnosis is likely a result of the time needed to complete diagnostic evaluations after an abnormal screen. Although the shapes of the control charts are similar across encounter categories, the y-axis frequencies vary considerably. For lymph system–related conditions, there were no more than 20 encounters per week in the entire cohort. Breast cancer diagnosis encounters had the highest weekly frequency, with more than 7,500 patients in the week immediately preceding the diagnosis date in the OCR.
FIG 2.
Control charts for four encounter categories: (A) Breast cancer, (B) lymph system–related conditions, (C) mastectomy, and (D) Ontario Breast Screening Program (OBSP) abnormal screening mammogram. Blue lines plot weekly encounter count in 1 year before diagnosis. Red dashed lines plot the background period confidence limits.
We identified an index contact for 68,220 patients (97.8%) in the final cohort. This group was similar to the whole cohort with regard to age and stage (results not shown). The diagnostic interval distribution—overall and by stage group—is listed in Table 3. Median diagnostic interval was 36 days, with an interquartile range (IQR) of 19-71 days. Ninety percent of patients were diagnosed within 144 days. Diagnostic interval was shorter with more advanced disease, with a median interval of 20 days (IQR, 5-50 days) in stage IV disease compared with 41 days (IQR, 22-80 days) in stage I disease. Overall, 49.9% of patients were diagnosed within 7 weeks of their index contact date. Patients with stage IV disease were most likely to meet the 7-week benchmark at 65.9%, whereas among those with stage I, only 43.7% met the benchmark.
TABLE 3.
Diagnostic Interval Distribution in Days, Overall and Stratified by Stage at Diagnosis
As shown in Table 4, the most common index encounter was an abnormal Ontario Breast Screening Program mammogram, accounting for 25.8% of the cohort, followed by 17.8% in the breast cyst, cystic disease, abscess, hypertrophy, other diagnostic category, which is commonly used for breast lump presentations. The shortest median intervals were observed in patients with a breast cancer or other cancer diagnosis on their index contact, at 15 days (IQR, 2-40 days) and 9 days (IQR, 1-35 days), respectively. The longest median interval, 231 days, was observed in the small group whose index contact involved cyst aspiration or drainage and for whom no referring physician visit could be identified.
TABLE 4.
Diagnostic Interval Distribution in Days, by Index Contact Encounter Category
DISCUSSION
We used an empirical approach to identify the first health care encounter leading to a breast cancer diagnosis. This is a necessary first step in computing the diagnostic interval and characterizing diagnostic pathways. Our principal data sources were the OCR to identify the cohort and their diagnosis date and OHIP claims data to identify breast cancer–related encounters using diagnostic and procedure codes. We used control charts to identify lookback periods for cancer-related encounter capture, with patients providing their own background encounter rates.28-30 We included only those encounters that achieved an 80% or greater signal strength, which led to the exclusion of four encounter categories. The earliest cancer-related encounter identified for each patient marked the start of the patient’s breast cancer diagnostic interval.
The control chart results highlight the importance of using encounter-specific lookback periods when identifying the index contact, with encounter lookback periods varying from 6 to 52 weeks. Using a single lookback period for all encounters could have erroneously included or excluded relevant encounters from the interval. Although we identified 18 categories of cancer-related encounters, 80.5% of the cohort had an index contact that was either symptom related or for mammography. Other encounter categories more often reflected activity that occurred after the index encounter. The occurrence of mastectomy within the diagnostic interval may reflect errors in the OCR diagnosis date when mastectomy-based pathology reports were used to identify the diagnosis date.
The median diagnostic interval was slightly more than 1 month, and for 10% of patients it was more than 5 months. The diagnostic interval observed in this study is longer than that reported in previous research.25,32-34 The difference may be a result, in part, of differences in the interval definition9 and/or methodologic differences in identifying the earliest cancer-related encounter. For instance, some of our previous work that measured the breast cancer diagnostic interval in Ontario in 2011 using ICES data observed a shorter diagnostic interval, particularly at the tail of the distribution, with a median of 32 days, a 75th percentile of 60 days, and a 90th percentile of 107 days.25 The index contact in that study was defined as the earliest of one of six breast cancer–related procedures or emergency department visits, taking the referring physician visit when available. This definition more closely reflects the Aarhus secondary care interval, which measures the time from specialist referral to diagnosis, rather than the diagnostic interval.9 Our approach to identifying the diagnostic interval index contact—using symptom- and procedure-related visits and cancer-specific encounters as well as encounters that reflect provisional diagnosis or misdiagnoses—captures encounters that occurred before and after the first specialist referral, thereby measuring the entire diagnostic interval and all relevant activity within that interval. In particular, for many symptomatic patients, our longer interval reflects the added duration of the primary care interval.9
A protracted cancer diagnostic interval may result in more advanced disease at diagnosis, necessitating more invasive treatment and contributing to poorer survival.1,35,36 A systematic review examining the relation between the diagnostic interval and cancer outcomes across 27 cancer sites found that an expedited diagnosis can promote earlier cancer detection and improved survival, with the strongest evidence for this relationship coming from breast, head and neck, colorectal, and testicular cancers and melanoma.4 Inconsistent results in that review were attributed to interval definition differences as well as the waiting time paradox, a phenomenon whereby patients with more advanced disease and poorer outcomes often have shorter diagnostic intervals because of patient triaging on the basis of symptom severity.9,37,38 Because of the waiting time paradox and the need to provide a comprehensive workup, the relation between a shorter diagnostic interval and better outcomes is not straightforward; however, holding all else constant, the literature supports activities that aim to expedite the diagnostic interval to improve outcomes.
This study has a number of strengths. It builds on previous efforts with the incorporation of provisional and less-specific diagnoses as cancer-related encounters. The control chart methodology avoids the use of arbitrary lookback periods to identify relevant encounters, thereby ensuring that only those encounters related to the diagnosis of the cancer are captured. All decisions regarding the selection of cancer-related encounters and lookback periods were made to ensure a conservative estimate of the diagnostic interval. This methodology lends itself to surveillance and quality improvement efforts as it uses existing, linked administrative data and can be translated across jurisdictions and cancer sites, with previous success in adapting these methods to studies of the diagnostic interval in colorectal and oral cavity cancer.21-23 Our efforts to refine the encounter category lookback periods by allowing them to vary over time ensure that this methodology is sensitive to changes in the diagnostic interval over time. The method has the added benefit of providing a complete account of diagnostic activity, which allows for the study of diagnostic processes as well as the length of the diagnostic interval.39
The main weakness of this work was the missing encounter data for referring physician visits. Most breast cancer–related procedures would have required a referral, and our inability to capture the referring physician visit could result in an underestimate of the diagnostic interval length. This problem affects, at most, 4% of the cohort. We were unable to examine whether the lookback periods varied within the 3-year diagnosis periods, as shorter periods did not provide a sufficient number of encounters to identify signal in the control charts. We have not validated these intervals against another data source, such as medical charts. We attempted such a validation in oral cavity cancer but found the medical charts at the treating centers to be incomplete for this purpose.
This study offers a generalized methodology with which to measure and characterize the cancer diagnostic interval using administrative data. Algorithms developed from this methodology could be used for ongoing program surveillance and evaluation. For example, we have been collaborating with Cancer Care Ontario to incorporate this approach into their surveillance activities and for use in the assessment of their diagnostic assessment programs.
Our results indicate a prolonged diagnostic interval for patients with breast cancer, with a median interval of more than 1 month. Studies such as this can inform the development of standardized diagnostic pathways and empirically based early detection guidelines. Integrating the evaluation of the diagnostic interval into cancer system performance assessments offers a first step in improving the effectiveness and efficiency of the cancer diagnostic interval.
ACKNOWLEDGMENT
The opinions, results, and conclusions reported in this article are those of the authors and are independent from the funding sources. No endorsement by ICES or the Ontario Ministry of Health and Long-Term Care is intended or should be inferred. These data sets were linked using unique encoded identifiers and analyzed at ICES. Parts of this material are based on data and/or information compiled and provided by Canadian Institute for Health Information (CIHI). However, the analyses, conclusions, opinions, and statements expressed in the material are those of the author(s) and not necessarily those of CIHI. Parts of this material are based on data and information provided by Cancer Care Ontario (CCO). The opinions, results, views, and conclusions reported in this paper are those of the authors and do not necessarily reflect those of CCO. No endorsement by CCO is intended or should be inferred.
Footnotes
Presented in part at the Canadian Cancer Research Conference, Montreal, QC, Canada, November 8-10, 2015; and at Cancer Care Ontario Research Day, Toronto, ON, Canada, April 12, 2018.
Funded by grants from the Canadian Institutes of Health Research and Cancer Care Ontario. Supported by ICES, which is funded by an annual grant from the Ontario Ministry of Health and Long-Term Care.
Preprint version available on bioRxiv.
AUTHOR CONTRIBUTIONS
Conception and design: Patti A. Groome, Colleen Webber, Eva Grunfeld, Andrea Eisen, Julie Gilbert, Claire Holloway, Jonathan C. Irish, Hugh Langley
Collection and assembly of data: Patti A. Groome, Colleen Webber, Marlo Whitehead, Eva Grunfeld
Data analysis and interpretation: All authors
Manuscript writing: All authors
Final approval of manuscript: All authors
Accountable for all aspects of the work: All authors
AUTHORS' DISCLOSURES OF POTENTIAL CONFLICTS OF INTEREST
The following represents disclosure information provided by authors of this manuscript. All relationships are considered compensated. Relationships are self-held unless noted. I = Immediate Family Member, Inst = My Institution. Relationships may not relate to the subject matter of this manuscript. For more information about ASCO's conflict of interest policy, please refer to www.asco.org/rwc or ascopubs.org/jco/site/ifc.
Andrea Eisen
Other Relationship: Cancer Care Ontario
No other potential conflicts of interest were reported.
REFERENCES
1. Richards MA. The size of the prize for earlier diagnosis of cancer in England. Br J Cancer. 2009;101(suppl 2):S125-S129. doi: 10.1038/sj.bjc.6605402.
2. Gospodarowicz M, O'Sullivan B, Sobin LH. Prognostic Factors in Cancer, ed 3. Hoboken, NJ: John Wiley & Sons; 2006.
3. Ferrante JM, Chen PH, Kim S. The effect of patient navigation on time to diagnosis, anxiety, and satisfaction in urban minority women with abnormal mammograms: A randomized controlled trial. J Urban Health. 2008;85:114-124. doi: 10.1007/s11524-007-9228-9.
4. Neal RD, Tharmanathan P, France B, et al. Is increased time to diagnosis and treatment in symptomatic cancer associated with poorer outcomes? Systematic review. Br J Cancer. 2015;112(suppl 1):S92-S107. doi: 10.1038/bjc.2015.48.
5. Macleod U, Mitchell ED, Burgess C, et al. Risk factors for delayed presentation and referral of symptomatic cancer: Evidence for common cancers. Br J Cancer. 2009;101(suppl 2):S92-S101. doi: 10.1038/sj.bjc.6605398.
6. Montgomery M, McCrone SH. Psychological distress associated with the diagnostic phase for suspected breast cancer: Systematic review. J Adv Nurs. 2010;66:2372-2390. doi: 10.1111/j.1365-2648.2010.05439.x.
7. Brocken P, Prins JB, Dekhuijzen PNR, et al. The faster the better?—A systematic review on distress in the diagnostic phase of suspected cancer, and the influence of rapid diagnostic pathways. Psychooncology. 2012;21:1-10. doi: 10.1002/pon.1929.
8. Liao M-N, Chen M-F, Chen S-C, et al. Uncertainty and anxiety during the diagnostic period for women with suspected breast cancer. Cancer Nurs. 2008;31:274-283. doi: 10.1097/01.NCC.0000305744.64452.fe.
9. Weller D, Vedsted P, Rubin G, et al. The Aarhus statement: Improving design and reporting of studies on early cancer diagnosis. Br J Cancer. 2012;106:1262-1267. doi: 10.1038/bjc.2012.68.
10. Butler J, Foot C, Bomb M, et al. The International Cancer Benchmarking Partnership: An international collaboration to inform cancer policy in Australia, Canada, Denmark, Norway, Sweden and the United Kingdom. Health Policy. 2013;112:148-155. doi: 10.1016/j.healthpol.2013.03.021.
11. Weller D, Vedsted P, Anandan C, et al. An investigation of routes to cancer diagnosis in 10 international jurisdictions, as part of the International Cancer Benchmarking Partnership: Survey development and implementation. BMJ Open. 2016;6:e009641. doi: 10.1136/bmjopen-2015-009641.
12. Rose PW, Rubin G, Perera-Salazar R, et al. Explaining variation in cancer survival between 11 jurisdictions in the International Cancer Benchmarking Partnership: A primary care vignette survey. BMJ Open. 2015;5:e007212. doi: 10.1136/bmjopen-2014-007212.
13. Cancer Care Ontario. Colorectal cancer diagnosis pathway map, 2018. https://www.cancercareontario.ca/sites/ccocancercare/files/assets/DPMColorectalDiagnosis.pdf
14. Borugian MJ, Kan L, Chu CCY, et al. Facilitated "fast track" referral reduces time from abnormal screening mammogram to diagnosis. Can J Public Health. 2008;99:252-256. doi: 10.1007/BF03403749.
15. Del Giudice ME, Vella ET, Hey A, et al. Guideline for referral of patients with suspected colorectal cancer by family physicians and other primary care providers. Can Fam Physician. 2014;60:717-723, e383-e390.
16. Cancer Care Ontario. Oropharyngeal squamous cell cancer diagnosis pathway map, 2017. https://www.cancercareontario.ca/sites/ccocancercare/files/assets/DPMOropharyngealSquamousDiagnosis.pdf
17. Cancer Care Manitoba. Work-up of suspected breast cancer, 2015. https://www.cancercare.mb.ca/export/sites/default/For-Health-Professionals/.galleries/files/diagnostic-pathway-files/breast-diagnostic-pathway-files/IN60_BC-Revised-FullPath_02-02-2016.pdf
18. Singh H, De Coster C, Shu E, et al. Wait times from presentation to treatment for colorectal cancer: A population-based study. Can J Gastroenterol. 2010;24:33-39. doi: 10.1155/2010/692151.
19. Cheung WY, Butler JR, Kliewer EV, et al. Analysis of wait times and costs during the peri-diagnostic period for non-small cell lung cancer. Lung Cancer. 2011;72:125-131. doi: 10.1016/j.lungcan.2010.08.001.
20. Yuan Y, Li M, Yang J, et al. Using administrative data to estimate time to breast cancer diagnosis and percent of screen-detected breast cancers: A validation study in Alberta, Canada. Eur J Cancer Care (Engl). 2015;24:367-375. doi: 10.1111/ecc.12277.
21. Groome PA, Whitehead M, Grunfeld E, et al. The initial prediagnostic encounter leading to a cancer diagnosis: Development of an administrative data-based approach to its identification. Presented at the Union for International Cancer Control World Cancer Congress, Montreal, QC, Canada, August 27-30, 2012.
22. Webber C. Availability and quality of colonoscopy resources and the colorectal cancer diagnostic interval [doctoral dissertation]. Queen's University, Kingston, ON, Canada, 2017.
23. Webber C, Flemming J, Birtwhistle R, et al. Wait times and patterns of cancer in the colorectal cancer diagnostic interval. Int J Pop Data Sci. 2017;1:191.
24. Cancer Care Ontario. Ontario Cancer Screening Performance Report 2016. Toronto, ON, Canada: Cancer Care Ontario; 2016.
25. Jiang L, Gilbert J, Langley H, et al. Breast cancer detection method, diagnostic interval and use of specialized diagnostic assessment units across Ontario, Canada [in French]. Health Promot Chronic Dis Prev Can. 2018;38:358-367. doi: 10.24095/hpcdp.38.10.02.
26. Yuan Y, Li M, Yang J, et al. Factors related to breast cancer detection mode and time to diagnosis in Alberta, Canada: A population-based retrospective cohort study. BMC Health Serv Res. 2016;16:65. doi: 10.1186/s12913-016-1303-z.
27. ICES. Data. Discovery. Better health. https://www.ices.on.ca/
28. Shewhart WA. Economic Control of Quality of Manufactured Product. New York, NY: Van Nostrand Company; 1931.
29. Page ES. Continuous inspection schemes. Biometrika. 1954;41:100-115.
30. United States Navy. Handbook for Basic Process Improvement, Module 10: Control chart, 1996. http://www.au.af.mil/au/awc/awcgate/navy/bpi_manual/mod10-control.pdf
31. Canadian Partnership Against Cancer. Report from the Evaluation Indicators Working Group: Guidelines for Monitoring Breast Cancer Screening Program Performance. Toronto, ON, Canada: Canadian Partnership Against Cancer; 2013.
32. Caplan LS, May DS, Richardson LC. Time to diagnosis and treatment of breast cancer: Results from the National Breast and Cervical Cancer Early Detection Program, 1991-1995. Am J Public Health. 2000;90:130-134. doi: 10.2105/ajph.90.1.130.
33. Tartter PI, Pace D, Frost M, et al. Delay in diagnosis of breast cancer. Ann Surg. 1999;229:91-96. doi: 10.1097/00000658-199901000-00012.
34. Redaniel MT, Martin RM, Ridd MJ, et al. Diagnostic intervals and its association with breast, prostate, lung and colorectal cancer survival in England: Historical cohort study using the Clinical Practice Research Datalink. PLoS One. 2015;10:e0126608. doi: 10.1371/journal.pone.0126608.
35. Neal RD. Do diagnostic delays in cancer matter? Br J Cancer. 2009;101(suppl 2):S9-S12. doi: 10.1038/sj.bjc.6605384.
36. Richards MA, Westcombe AM, Love SB, et al. Influence of delay on survival in patients with breast cancer: A systematic review. Lancet. 1999;353:1119-1126. doi: 10.1016/s0140-6736(99)02143-1.
37. Tørring ML, Frydenberg M, Hamilton W, et al. Diagnostic interval and mortality in colorectal cancer: U-shaped association demonstrated for three different datasets. J Clin Epidemiol. 2012;65:669-678. doi: 10.1016/j.jclinepi.2011.12.006.
38. Tørring ML, Frydenberg M, Hansen RP, et al. Evidence of increasing mortality with longer diagnostic intervals for five common cancers: A cohort study in primary care. Eur J Cancer. 2013;49:2187-2198. doi: 10.1016/j.ejca.2013.01.025.
39. Guan Z. Colorectal cancer diagnostic pathways in Ontario [master's thesis]. Queen's University, Kingston, ON, Canada, 2018.






