Clinical Epidemiology
. 2009 Aug 9;1:109–117. doi: 10.2147/clep.s5755

Specifying exposure classification parameters for sensitivity analysis: family breast cancer history

Anne M Jurek,1,2 Timothy L Lash,3 George Maldonado4
PMCID: PMC2943170  PMID: 20865092

Abstract

One of the challenges to implementing sensitivity analysis for exposure misclassification is the process of specifying the classification proportions (eg, sensitivity and specificity). The specification of these assignments is guided by three sources of information: estimates from validation studies, expert judgment, and numerical constraints given the data. The purpose of this teaching paper is to describe the process of using validation data and expert judgment to adjust a breast cancer odds ratio for misclassification of family breast cancer history. The parameterization of various point estimates and prior distributions for sensitivity and specificity was guided by external validation data and expert judgment. We used both nonprobabilistic and probabilistic sensitivity analyses to investigate the dependence of the odds ratio estimate on the classification error. Under our assumptions, the range of odds ratios adjusted for family breast cancer history misclassification was wider than that portrayed by the conventional frequentist confidence interval.

Keywords: breast cancer, family cancer history, sensitivity analysis, sensitivity, specificity

Introduction

A standard quantitative analysis of epidemiologic data implicitly assumes that the exposure (risk marker, risk factor) classification proportions (eg, sensitivity and specificity) equal 1.0 (ie, perfect classification). For many studies, however, this assumption may not be justified. Epidemiologists are therefore strongly encouraged to incorporate sensitivity analyses into the analysis in these situations.1–9

One of the challenges to implementing sensitivity analysis for exposure misclassification of a binary exposure variable is the process of specifying the sensitivity and specificity values. The difficulty lies in determining which values should be used and explaining why those values were chosen. The specification of these values is guided by three sources of information: estimates from validation studies, expert judgment, and numerical constraints given the data.10

These three sources of information can be used in both nonprobabilistic and probabilistic (Monte-Carlo) sensitivity analysis. When adjusting for exposure misclassification, nonprobabilistic sensitivity analysis11 uses multiple fixed values for the sensitivity and specificity proportions. In contrast, in probabilistic sensitivity analysis,5,7,11–15 an investigator specifies probability distributions for the classification proportions. Prior probabilities are not specified for the effect measure of interest or the exposure prevalence; thus the analysis corresponds to using noninformative priors for these parameters in Bayesian bias analysis.7,11,16–18

The goal of this teaching paper is to illustrate how to specify values of classification parameters for nonprobabilistic11 and probabilistic sensitivity analyses5,7,11–15 using two of the three sources of information: validation data and expert judgment. We will specify single-point estimates and probability distributions for classification parameters. Then we will use these estimates and distributions to adjust one odds ratio (OR) estimate for possible exposure misclassification.

Application

For many types of cancer, an important predictor of a person’s cancer risk is an established family history of that cancer. Although accurate reporting of relatives’ cancer might be expected, validation studies have shown that self-reported history of cancer in family members is often inaccurate.19–21

Epidemiologic studies that rely on these self-reports of cancer in family members without adjustment for classification errors can produce inaccurate results and underestimate the true uncertainty. Adjusting relative-risk estimates for systematic error under such circumstances (eg, exposure misclassification) has been strongly encouraged.1–8,11,22,23

We selected breast cancer as our example because it is prevalent and because a family history of breast cancer is an established predictor of breast cancer risk. We chose one case-control study24 that provided a 2 × 2 table of first-degree relative’s (FDR’s) breast cancer history and breast cancer risk. There were 316 exposed breast cancer cases, 1567 unexposed cases, 179 exposed noncases, and 1449 unexposed noncases, where exposure was any FDR’s breast cancer history. From these data, the calculated crude OR estimate associating FDR breast cancer history with breast cancer occurrence for women from Los Angeles County, California, was 1.63 (95% confidence limits: 1.34, 1.99). The OR adjusted for confounders was 1.68.
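For readers who wish to verify the crude estimate, the following minimal sketch (Python, our choice of language; only the standard library is used) reproduces the crude OR and its Woolf-type 95% confidence limits from the cell frequencies above:

```python
import math

# Observed 2 x 2 table from the case-control study (reference 24):
# exposure = breast cancer history in any first-degree relative
a, b = 316, 1567   # exposed cases, unexposed cases
c, d = 179, 1449   # exposed noncases, unexposed noncases

or_crude = (a * d) / (b * c)                   # cross-product ratio, ~1.63
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)   # SE of ln(OR)
lo = math.exp(math.log(or_crude) - 1.96 * se_log_or)
hi = math.exp(math.log(or_crude) + 1.96 * se_log_or)
print(f"crude OR = {or_crude:.2f} (95% CI {lo:.2f}, {hi:.2f})")
# crude OR = 1.63 (95% CI 1.34, 1.99)
```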

Methods

Source 1: Validation data

Identifying validation studies

The observed exposure measure was self-reported breast cancer history in any FDR – a parent, sibling, or child – by the index subject. “Gold standard” measurements used to verify the breast cancer status in FDRs were verbal confirmation by the FDR, medical records, pathology reports, cancer registries, and/or death certificates. While these are labeled “gold standard,” they are themselves likely measured with some error. We defined sensitivity as the proportion of FDRs reported as having breast cancer among those with breast cancer according to the gold-standard measurement, and specificity as the proportion of FDRs not reported as having breast cancer among those without breast cancer according to the gold-standard measurement at the time of the index subject’s interview.

With these criteria, we sought articles that validated self-reported data on any FDR. Our approach was guided by a 2004 article by Murff and colleagues19 that summarized the results from validation studies that determined the accuracy of self-reported history of cancer in family members for colon, prostate, breast, endometrial, and ovarian cancers. Because medical subject headings change over time, the first author (AMJ) consulted a research librarian for search-strategy assistance. In April 2008, AMJ performed a database literature search to find English-language articles that provided sensitivity and specificity values for classification of self-reported family breast cancer history. The following medical subject headings from PubMed were used: “sensitivity and specificity”, “breast neoplasms”, “reproducibility of results”, and “medical history taking”. A text-word search for “validation study” combined with the above terms was also performed. Article titles, abstracts, and text were reviewed for inclusion. Reference lists of identified articles were searched to identify additional studies.

We also performed a cited-reference search of the Murff and colleagues19 article to learn whether it was referenced in recently published studies. Studies that determined accuracy (eg, positive-predictive value) of family breast cancer history,25–29 expanded first-degree relatives to include aunts,30 did not distinguish between FDRs and second-degree relatives,31 validated bilateral breast cancer,32 or were a sub-study of a larger included validation study33 were not used. Five publications20,21,34–36 met our criteria.

Incorporating validation data

We assumed the data from the five validation studies (Table 1) to be appropriate for adjusting the OR for misclassification. Using these data, we explored various scenarios for possible classification error. The scenarios involved differential classification error because the validation data (Table 1) indicated the classification processes were differential.

Table 1.

Validation studies that reported sensitivity and specificity values for self-reported first-degree relative’s breast cancer history

Authors | Breast cancer cases: Sensitivity (No./Total) (95% CI) | Breast cancer cases: Specificity (No./Total) (95% CI) | Healthy noncases: Sensitivity (No./Total) (95% CI) | Healthy noncases: Specificity (No./Total) (95% CI) | Source population | “Gold standard” measurement tool(s)
Chang et al20 | – | – | 0.72 (61/85) (0.62, 0.81) | 0.99 (1114/1127) (0.98, 0.99) | Sweden | Swedish Cancer Registry
Kerber and Slattery34 | 0.85 (11/13) (0.55, 0.98) | 0.96 (107/112) (0.90, 0.99) | 0.82 (18/22) (0.60, 0.95) | 0.91 (167/184) (0.87, 0.95) | Utah, USA | Utah Population Database
Soegaard et al21 | – | – | 0.94 (121/129) (0.90, 0.98) | 1.00 (4505/4527) (0.99, 1.00) | Denmark | Danish Cancer Registry
Verkooijen et al35 | 0.98 (60/61) (0.91, 1.00) | 0.99 (247/249) (0.97, 1.00) | – | – | Geneva, Switzerland | Cantonal Population Office and Geneva Cancer Registry
Ziogas and Anton-Culver36 | 0.95 (188/197) (0.93, 0.98) | 0.97 (850/873) (0.96, 0.98) | – | – | Orange County, California, USA | Pathology, self-reported, or death certificates

Abbreviation: CI, confidence interval.

For nonprobabilistic sensitivity analysis

We specified single-point values as scenarios for possible classification proportions. Since Kerber and Slattery34 reported classification proportions for both cases and noncases (Table 1), we considered this validation study as one scenario (scenario 2, Table 2). Then we combined the noncase sensitivity and specificity values from Chang and colleagues20 with the breast cancer case sensitivity and specificity values from Verkooijen and colleagues35 and Ziogas and Anton-Culver36 for scenarios 3 and 4 (Table 2), respectively. Similarly, we combined the noncase classification proportions from Soegaard and colleagues21 with case classification proportions from Verkooijen and colleagues35 and Ziogas and Anton-Culver36 for scenarios 5 and 6 (Table 2), respectively. We also defined scenarios for the lower (scenario 7, Table 2) and upper (scenario 8, Table 2) extreme values from all five studies. Finally, we investigated a scenario within the ranges of validation data (scenario 9, Table 2) and other combinations from the validation data (scenarios 10 and 11, Table 2).

Table 2.

Single point-estimate values for classification errors and nonprobabilistic sensitivity analysis results

Scenario | Breast cancer cases: Sensitivity | Breast cancer cases: Specificity | Healthy noncases: Sensitivity | Healthy noncases: Specificity | OR adjusted
1a | 1.00 | 1.00 | 1.00 | 1.00 | 1.63
2 | 0.85 | 0.96 | 0.82 | 0.91 | 6.67
3 | 0.98 | 0.99 | 0.72 | 0.99 | 1.19
4 | 0.95 | 0.97 | 0.72 | 0.99 | 1.08
5 | 0.98 | 0.99 | 0.94 | 1.00 | 1.46
6 | 0.95 | 0.97 | 0.94 | 1.00 | 1.33
7 | 0.85 | 0.96 | 0.72 | 0.91 | 5.73
8 | 0.98 | 0.99 | 0.94 | 1.00 | 1.47
9b | 0.92 | 0.98 | 0.93 | 0.99 | 1.61
10 | 0.98 | 0.96 | 0.72 | 1.00 | 0.87
11 | 0.85 | 0.99 | 0.94 | 0.91 | 9.62

Notes: a Crude odds ratio scenario; b approximately nondifferential.

Abbreviation: OR adjusted, odds ratio adjusted for family breast cancer history (exposure) misclassification.

For probabilistic sensitivity analysis

To assign probability distributions to the classification parameters, we examined each column of sensitivity and specificity data in Table 1 for cases and noncases separately. Although we assumed the ranges of validation data to be adequate for our probability distributions, we were not 100% confident in the distributions’ shapes. As a result, we constructed different distribution scenarios to investigate how the adjusted OR depended on the assumed classification error.

To allow each value within the range an equal probability of occurring, we began by specifying continuous uniform distributions informed by the lower and upper values of the validation data values (scenario 13, Table 3). Since the case and noncase classification proportions each had three values, triangular distributions were then used for both cases and noncases (scenarios 14 and 15, Table 3). That is, we specified triangular distributions using the lower and upper validation data values as the minimum and maximum, respectively, and the middle value (scenario 14) and average value (scenario 15) as the modes for each distribution.

Table 3.

Descriptions of the probability distributions used for exposure classification errors

Scenario | Breast cancer cases: Sensitivity | Breast cancer cases: Specificity | Healthy noncases: Sensitivity | Healthy noncases: Specificity
12 | Custom uniforma (1.00) | Custom uniform (1.00) | Custom uniform (1.00) | Custom uniform (1.00)
13 | Uniformb (0.85, 1.00) | Uniform (0.96, 1.00) | Uniform (0.72, 1.00) | Uniform (0.91, 1.00)
14 | Triangularc (0.85, 0.95, 1.00) | Triangular (0.96, 0.97, 1.00) | Triangular (0.72, 0.82, 1.00) | Triangular (0.91, 0.99, 1.00)
15 | Triangular (0.85, 0.93, 1.00) | Triangular (0.96, 0.97, 1.00) | Triangular (0.72, 0.83, 1.00) | Triangular (0.91, 0.97, 1.00)

Notes: a Discrete uniform distribution with a single value at 1.00 with probability of occurring = 1; b continuous uniform distribution (minimum value, maximum value); c triangular distribution (minimum value, mode, maximum value).
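To make the distribution specifications in Table 3 concrete, here is a minimal sketch of drawing from them (Python with NumPy; an assumption on our part, since the analyses in this paper were run in Crystal Ball):

```python
import numpy as np

rng = np.random.default_rng(seed=1)
N = 50_000  # simulation trials, matching the analyses reported below

# Scenario 13: continuous uniform over the validation-data range
# (case sensitivity, Table 3)
se_cases_13 = rng.uniform(0.85, 1.00, size=N)

# Scenario 14: triangular(minimum, mode, maximum), with the middle
# validation-data value as the mode (noncase sensitivity, Table 3)
se_noncases_14 = rng.triangular(0.72, 0.82, 1.00, size=N)
```

These draws are independent; the 0.80 correlation between case and noncase parameters described in the probabilistic-analysis section is illustrated in the sketch given there.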

Source 2: Incorporating expert knowledge

We changed the upper limit to 1.00 (perfect sensitivity and specificity) in scenarios 13–15 (Table 3), because we cannot rule out the possibility that all individuals with and without breast cancer may be correctly classified.

Source 3: Incorporating numerical constraints given the data

Adjustment for misclassification may result in negative cell frequencies when certain combinations of observed data and classification proportions are used. Because negative cell frequencies are impossible, combinations of values yielding negative corrected cell frequencies are impossible and should be excluded from the sensitivity analysis, as illustrated in the sketch below. In our sensitivity analyses, no combination of values assigned to sensitivity and specificity resulted in negative adjusted cell frequencies, so no values were excluded within the explored ranges.
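As a concrete illustration (a hypothetical helper, not code from the original analysis), the admissibility check can be written directly from the adjustment equations given in Table 4 below: a sensitivity–specificity pair is excluded when it would drive an adjusted cell frequency negative.

```python
def admissible(obs_exposed, total, se, sp):
    """True when the misclassification-adjusted count of exposed
    subjects falls strictly between 0 and the group total."""
    adjusted = (obs_exposed - (1 - sp) * total) / (se + sp - 1)
    return 0 < adjusted < total

# Case series from the example study: 316 exposed of 1883 cases.
# Even the most extreme case values explored (scenario 7, Table 2)
# remain admissible:
print(admissible(316, 1883, se=0.85, sp=0.96))  # True
```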

Nonprobabilistic sensitivity analyses

For each of the 11 scenarios (Table 2), we calculated an OR adjusted for family breast cancer history misclassification (OR adjusted) using the exposure misclassification adjustment methods of Greenland and Lash.11 Briefly, we used the observed cell frequencies along with sensitivity and specificity values for cases and noncases (Table 1) to calculate a 2 × 2 table of cell frequencies adjusted for exposure misclassification and a corresponding adjusted odds ratio (Table 4).

Table 4.

2 × 2 tablea after adjustment for exposure misclassification

Any first-degree family breast cancer history (e, f for cases and g, h for noncases denote the adjusted frequencies with and without exposure, respectively):

Breast cancer cases:
e = [a − (1 − Spcases)(a + b)] / (Secases + Spcases − 1)
f = (a + b) − e

Breast cancer noncases:
g = [c − (1 − Spnoncases)(c + d)] / (Senoncases + Spnoncases − 1)
h = (c + d) − g

Odds ratio adjusted for exposure misclassification:
ORadjusted = (e × h) / (f × g)

Note: a a, breast cancer cases classified as having a first-degree family breast cancer history; b, breast cancer cases classified as not having a first-degree family breast cancer history; c, breast cancer noncases classified as having a first-degree family breast cancer history; d, breast cancer noncases classified as not having a first-degree family breast cancer history.

Abbreviations: OR, odds ratio; Se, sensitivity; Sp, specificity.
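The equations in Table 4 translate directly into a few lines of code. The sketch below (Python; a hypothetical implementation, not the authors’ original program) reproduces the nonprobabilistic adjustment for scenario 2 of Table 2:

```python
def adjust_or(a, b, c, d, se_ca, sp_ca, se_no, sp_no):
    """Odds ratio adjusted for exposure misclassification using the
    back-calculation equations of Table 4 (Greenland and Lash)."""
    e = (a - (1 - sp_ca) * (a + b)) / (se_ca + sp_ca - 1)  # adjusted exposed cases
    f = (a + b) - e                                        # adjusted unexposed cases
    g = (c - (1 - sp_no) * (c + d)) / (se_no + sp_no - 1)  # adjusted exposed noncases
    h = (c + d) - g                                        # adjusted unexposed noncases
    return (e * h) / (f * g)

# Scenario 2 (Kerber and Slattery values, Table 2):
print(round(adjust_or(316, 1567, 179, 1449,
                      0.85, 0.96, 0.82, 0.91), 2))  # 6.67
```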

Probabilistic sensitivity analyses

We employed probabilistic sensitivity analysis based on published methods.5,11,37 In short, we used the equations in Table 4 to adjust the observed cell frequencies for exposure misclassification, substituting the probability distributions in Table 3 for the sensitivity and specificity values. We also imposed a correlation11,37 of 0.80 between the sensitivities for cases and noncases and between the specificities for cases and noncases to prevent extreme differentiality on any particular simulation trial. As a last step, we incorporated random error to obtain an OR estimate adjusted for exposure misclassification and random error. Adjustment for random error requires specification of a random error distribution for the data-generating process.38 We used the formula exp[ln(ORadjusted) − z·SE], in which random error is modeled by a standard normal deviate (z) and SE is the standard error of the log odds ratio computed from the original (misclassified) cell frequencies.11,22,23

For each scenario, we graphed a frequency (uncertainty) distribution of the odds ratio adjusted for exposure misclassification only and for exposure misclassification and random error. These frequency distributions depend on our assumptions about the classification proportions and random error parameters. We also calculated 95% uncertainty limits by taking the lower 2.5 and upper 97.5 percentiles of the frequency distribution. These percentiles provide the lower and upper limits for the odds ratio adjusted for our beliefs about the relative proportions of the exposure-classification values (ie, uncertainty-analysis-parameter values).8 Crystal Ball (version 7.3; Oracle, Redwood Shores, CA, USA) software was used to run 50,000 simulation trials for each of the four simulation experiments.
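For readers who want to reproduce the simulations without Crystal Ball, a minimal sketch follows (Python with NumPy and SciPy, our assumption; a Gaussian copula stands in for Crystal Ball’s correlation mechanism, so results should approximate, rather than exactly match, Table 5). It runs scenario 13 with the 0.80 correlation and the random-error step described above:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=1)
N, RHO = 50_000, 0.80

# Observed cell frequencies and the SE of ln(OR) from the original data
a, b, c, d = 316, 1567, 179, 1449
se_ln_or = np.sqrt(1/a + 1/b + 1/c + 1/d)

def correlated_uniforms(rho, n):
    """Two U(0,1) vectors linked through a Gaussian copula."""
    z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=n)
    return stats.norm.cdf(z[:, 0]), stats.norm.cdf(z[:, 1])

# Scenario 13 (Table 3): uniform distributions; case and noncase
# parameters correlated at 0.80, as described in the text
u1, u2 = correlated_uniforms(RHO, N)
se_ca, se_no = 0.85 + 0.15 * u1, 0.72 + 0.28 * u2
u3, u4 = correlated_uniforms(RHO, N)
sp_ca, sp_no = 0.96 + 0.04 * u3, 0.91 + 0.09 * u4

# Adjust the cell frequencies on every trial (equations of Table 4)
e = (a - (1 - sp_ca) * (a + b)) / (se_ca + sp_ca - 1)
g = (c - (1 - sp_no) * (c + d)) / (se_no + sp_no - 1)
or_adj = (e * ((c + d) - g)) / (((a + b) - e) * g)

# Random-error step: exp[ln(OR_adjusted) - z * SE]
or_total = np.exp(np.log(or_adj) - rng.standard_normal(N) * se_ln_or)

# Drop impossible trials (negative adjusted cells), then summarize
keep = (e > 0) & (e < a + b) & (g > 0) & (g < c + d)
print(np.percentile(or_total[keep], [2.5, 50, 97.5]))
```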

Results

Table 2 presents the results of the nonprobabilistic sensitivity analyses. The OR adjusted for misclassification took a wide range of values, assuming the OR adjusted for misclassification is the true value, our assumptions are correct, and no other systematic errors exist. Some combinations of classification proportions (scenarios 2, 7, and 11, Table 2) gave ORs adjusted for misclassification that were much greater than the crude OR of 1.63; other combinations resulted in ORs between 1 and the crude OR (scenarios 3–6, 8, and 9, Table 2); and one combination produced a protective effect (scenario 10, Table 2). These results demonstrate that differential classification error can produce error toward (scenarios 2, 7, and 11), away from (scenarios 3–6, 8, and 9), or past the null value of 1 (scenario 10).39 Approximately nondifferential misclassification (scenario 9, Table 2) resulted in an OR adjusted for exposure misclassification that was less than the crude value.

The results of the probabilistic sensitivity analyses are presented in Table 5 and Figures 1 and 2. The geometric means and medians are greater than the crude OR value of 1.63 for scenarios in which classification was imperfect, and more than half of the simulation trials resulted in ORs adjusted for exposure misclassification greater than the crude OR. The 95% uncertainty limits are wider than the conventional limits (1.34, 1.99). Compared with the conventional analysis (scenario 12, analysis b, Table 5), the ratio of the upper to the lower 95% uncertainty limit was largest for the uniform scenario (scenario 13, analysis b, Table 5). Minor changes in the modal values shifted the distribution of ORs adjusted for exposure misclassification further from the crude OR for scenario 15 than for scenario 14, because scenario 15 is slightly more differential than scenario 14.

Table 5.

Probabilistic sensitivity analyses resultsa after 50,000 simulation trials, by scenario

Scenario | Analysis | ORadjusted geometric mean | ORadjusted median | 95% uncertainty limits for ORadjusted | % of trials with ORadjusted > crude ORb | Ratio of upper to lower 95% uncertainty limit
12c | a. No misclassification | 1.63 | 1.63 | (1.63, 1.63) | 0 | 1.00
12c | b. Conventional analysis (random error only) | 1.63 | 1.63 | (1.34, 1.99) | 50.6 | 1.49
13 | a. Misclassification only | 2.46 | 2.25 | (1.41, 5.88) | 84.4 | 4.17
13 | b. Misclassification and random error | 2.46 | 2.27 | (1.33, 6.01) | 83.3 | 4.52
14 | a. Misclassification only | 1.89 | 1.74 | (1.36, 3.84) | 62.5 | 2.82
14 | b. Misclassification and random error | 1.89 | 1.77 | (1.25, 3.93) | 64.0 | 3.14
15 | a. Misclassification only | 2.09 | 1.96 | (1.46, 4.13) | 85.0 | 2.83
15 | b. Misclassification and random error | 2.09 | 1.99 | (1.37, 4.21) | 81.8 | 3.07

Notes: a Correlation between the sensitivities for cases and noncases and between the specificities for cases and noncases = 0.80; b crude odds ratio = 1.63; c crude odds ratio scenario.

Abbreviation: OR adjusted, odds ratio adjusted for family breast cancer history (exposure) misclassification.

Figure 1. Frequency distributions of breast cancer odds ratios adjusted for family breast cancer history misclassification, by scenario.

Figure 2. Frequency distributions of breast cancer odds ratios adjusted for family breast cancer history misclassification and random error, by scenario.

Discussion

We performed partial sensitivity analyses to adjust a breast cancer OR estimate for misclassification of family breast cancer history. In general, three sources10 of information are used to specify scenarios for sensitivity analysis: validation data (we found existing data in the literature20,21,34–36); expert judgment (we modified ranges of values from the validation studies based on our expert judgment of sensitivity and specificity for family history of breast cancer); and numerical constraints given the data (we were prepared to exclude values assigned to classification proportions that yielded negative cell frequencies). For all sensitivity analyses we further assumed that the OR estimate adjusted for exposure misclassification was not affected by other systematic errors.

We used both nonprobabilistic and probabilistic sensitivity analyses because they are complementary yet imperfect techniques. Since no likelihood (probability) is associated explicitly with each scenario in the nonprobabilistic sensitivity analysis, the results should not necessarily be viewed as having equal probability. The nonprobabilistic sensitivity analyses resulted in a wide range of ORs adjusted for exposure misclassification: from less than 1 to almost six times the crude OR value. Similar results were found using probabilistic sensitivity analyses.

As guided by the literature, classification errors were differential for all scenarios. It is well known that the effect of differential misclassification on study results is unpredictable. Both our nonprobabilistic and probabilistic sensitivity analysis results show the wide range of values that are possible. Importantly, approximately nondifferential misclassification resulted in an OR adjusted for misclassification that was less than the crude OR (Table 2, scenario 9). Thus, the sensitivity analysis results demonstrate the importance of quantitatively evaluating the effect of differential misclassification. Note, too, that nondifferential misclassification biases the expected value of an OR estimate toward the null only under very specific conditions.39

When available, internal validation data from the study of interest are the recommended data to inform the values used for sensitivity analysis, so long as the internal validation study itself was not biased by, for example, selection of subjects into the validation substudy. When such unbiased validation data are available, sampling-error distributions can be specified for the classification probabilities observed in the validation substudy. Since we did not have internal validation data for the sensitivities and specificities from the study of interest,24 we could not use this approach.

We were able to find external validation data to inform the values assigned to classification proportions in our sensitivity analyses. The validation data, however, were not generated from the same population that produced the crude OR data. Therefore, these external validation data may not be generalizable across different populations. Further, the classification proportions were not calculated by type of first-degree relative (eg, mother, sister, or daughter), which may differ by generation. Nonetheless, we know of no existing methodology that incorporates selection forces into the classification proportions for sensitivity analyses.

When only external validation data for the classification proportion estimates are available, it is difficult to know which of these estimates to use. Therefore, we varied our probability distributions by specifying several different distributions. In addition, it is not recommended to pool the results from multiple validation studies or to use the variance of the pooled result to parameterize a distribution. Instead, it is usually better to use the range of classification proportion values to parameterize a probability distribution (eg, triangular) or to conduct a multidimensional bias analysis over that range. Further, we did not specify a probability distribution for each classification probability reported in the external validation studies (a complete sensitivity analysis that accounts for the uncertainty in the classification proportions is the best route for funded analyses). Rather, we used the reported classification proportions to construct one composite probability distribution for each scenario.

Specifying the shape and range of a probability distribution is often difficult, even in light of internal or external validation data. In this research, we specified one uniform and two triangular distributions out of an infinite number of possibilities. Other probability distributions that can be used include the trapezoidal, logit-normal, logit-logistic, and beta.11,14

When validation data are unavailable or inapplicable, investigators must assign values to the classification parameters based on expert judgment and numerical constraints given the data. This option, while perhaps suboptimal, has two advantages over conventional analyses that ignore quantitative estimates of uncertainty from classification errors. First, it emphasizes the absence of reliable validation data and identifies that absence as a research gap that should be a priority to fill. Second, conventional analyses implicitly treat the classification as perfect, and substituting expert judgment about actual classification errors for this often untenable assumption at least allows a quantitative assessment of the uncertainty arising from these errors.

Acknowledgments

The authors thank Dr Sander Greenland and the anonymous reviewers for helpful comments on an earlier draft. The authors report no conflicts of interest in this work. This study was supported in part by the Children’s Cancer Research Fund, Minneapolis, MN, USA (to AMJ).

References

1. Greenland S. Basic methods for sensitivity analysis of biases. Int J Epidemiol. 1996;25:1107–1116.
2. Maldonado G. Informal evaluation of bias may be inadequate [Abstract]. Am J Epidemiol. 1998;147:S82.
3. Phillips CV, Maldonado G. Using Monte Carlo methods to quantify the multiple sources of error in studies [Abstract]. Am J Epidemiol. 1999;149:S17.
4. Maldonado G, Greenland S. Estimating causal effects. Int J Epidemiol. 2002;31:422–429.
5. Lash TL, Fink AK. Semi-automated sensitivity analysis to assess systematic errors in observational data. Epidemiology. 2003;14:451–458. doi: 10.1097/01.EDE.0000071419.41011.cf.
6. Phillips CV. Quantifying and reporting uncertainty from systematic errors. Epidemiology. 2003;14:459–466. doi: 10.1097/01.ede.0000072106.65262.ae.
7. Greenland S. Multiple-bias modelling for analysis of observational data (with discussion). J R Stat Soc A. 2005;168:267–306.
8. Maldonado G. Adjusting a relative-risk estimate for study imperfections. J Epidemiol Community Health. 2008;62:655–663. doi: 10.1136/jech.2007.063909.
9. Fox MP. Creating a demand for bias analysis in epidemiological research. J Epidemiol Community Health. 2009;63:91. doi: 10.1136/jech.2008.082420.
10. Jurek AM, Maldonado G, Spector LG, Ross JA. Periconceptional maternal vitamin supplementation and childhood leukemia: an uncertainty analysis. J Epidemiol Community Health. 2009;63:168–172. doi: 10.1136/jech.2008.080226.
11. Greenland S, Lash TL. Bias analysis. Ch. 19. In: Rothman KJ, Greenland S, Lash TL, editors. Modern Epidemiology. 3rd ed. Philadelphia, PA: Lippincott Williams & Wilkins; 2008.
12. Morgan MG, Henrion M. Uncertainty: A Guide to Dealing with Uncertainty in Quantitative Risk and Policy Analysis. Cambridge: Cambridge University Press; 1990.
13. Eddy DM, Hasselblad V, Shachter R. Meta-Analysis by the Confidence Profile Method. Boston, MA: Academic Press; 1992.
14. Vose D. Risk Analysis: A Quantitative Guide. 2nd ed. Chichester: John Wiley and Sons; 2000.
15. Jurek AM, Maldonado G, Greenland S, Church TR. Uncertainty analysis: an example of its application to estimating a survey proportion. J Epidemiol Community Health. 2007;61:650–654. doi: 10.1136/jech.2006.053660.
16. Chu H, Wang Z, Cole SR, Greenland S. Sensitivity analysis of misclassification: a graphical and a Bayesian approach. Ann Epidemiol. 2006;16:834–841. doi: 10.1016/j.annepidem.2006.04.001.
17. Turner RM, Spiegelhalter DJ, Smith GCS, Thompson SG. Bias modelling in evidence synthesis. J R Stat Soc A. 2009;172:21–47. doi: 10.1111/j.1467-985X.2008.00547.x.
18. MacLehose RF, Olshan AF, Herring AH, et al. Bayesian methods for correcting misclassification: an example from birth defects epidemiology. Epidemiology. 2009;20:27–35. doi: 10.1097/EDE.0b013e31818ab3b0.
19. Murff HJ, Spigel DR, Syngal S. Does this patient have a family history of cancer? An evidence-based analysis of the accuracy of family cancer history. JAMA. 2004;292:1480–1489. doi: 10.1001/jama.292.12.1480.
20. Chang ET, Smedby KE, Hjalgrim H, Glimelius B, Adami HO. Reliability of self-reported family history of cancer in a large case-control study of lymphoma. J Natl Cancer Inst. 2006;98:61–68. doi: 10.1093/jnci/djj005.
21. Soegaard M, Jensen A, Frederiksen K, et al. Accuracy of self-reported family history of cancer in a large case-control study of ovarian cancer. Cancer Causes Control. 2008;19:469–479. doi: 10.1007/s10552-007-9108-3.
22. Greenland S. The impact of prior distributions for uncontrolled confounding and response bias: a case study of the relation of wire codes and magnetic fields to childhood leukemia. J Am Stat Assoc. 2003;98:47–54.
23. Lash TL. Bias analysis applied to Agricultural Health Study publications to estimate non-random sources of uncertainty. J Occup Med Toxicol. 2007;2:15. doi: 10.1186/1745-6673-2-15.
24. Carpenter CL, Ross RK, Paganini-Hill A, Bernstein L. Effect of family history, obesity and exercise on breast cancer risk among postmenopausal women. Int J Cancer. 2003;106:96–102. doi: 10.1002/ijc.11186.
25. Love RR, Evans AM, Josten DM. The accuracy of patient reports of a family history of cancer. J Chronic Dis. 1985;38:289–293. doi: 10.1016/0021-9681(85)90074-8.
26. Theis B, Boyd N, Lockwood G, Tritchler D. Accuracy of family cancer history in breast cancer patients. Eur J Cancer Prev. 1994;3:321–327. doi: 10.1097/00008469-199407000-00004.
27. Parent ME, Ghadirian P, Lacroix A, Perret C. Accuracy of reports of familial breast cancer in a case-control series. Epidemiology. 1995;6:184–186. doi: 10.1097/00001648-199503000-00018.
28. Parent ME, Ghadirian P, Lacroix A, Perret C. The reliability of recollections of family history: implications for the medical provider. J Cancer Educ. 1997;12:114–120. doi: 10.1080/08858199709528465.
29. Sijmons RH, Boonstra AE, Reefhuis J, et al. Accuracy of family history of cancer: clinical genetic implications. Eur J Hum Genet. 2000;8:181–186. doi: 10.1038/sj.ejhg.5200441.
30. Albert S, Child M. Familial cancer in the general population. Cancer. 1977;40:1674–1679. doi: 10.1002/1097-0142(197710)40:4<1674::aid-cncr2820400442>3.0.co;2-c.
31. Schneider KA, DiGianni LM, Patenaude AF, et al. Accuracy of cancer family histories: comparison of two breast cancer syndromes. Genet Test. 2004;8:222–228. doi: 10.1089/gte.2004.8.222.
32. Breuer B, Kash KM, Rosenthal G, et al. Reporting bilaterality status in first-degree relatives with breast cancer: a validity study. Genet Epidemiol. 1993;10:245–256. doi: 10.1002/gepi.1370100405.
33. Anton-Culver H, Kurosaki T, Taylor TH, et al. Validation of family history of breast cancer and identification of the BRCA1 and other syndromes using a population-based cancer registry. Genet Epidemiol. 1996;13:193–205. doi: 10.1002/(SICI)1098-2272(1996)13:2<193::AID-GEPI5>3.0.CO;2-9.
34. Kerber RA, Slattery ML. Comparison of self-reported and database-linked family history of cancer data in a case-control study. Am J Epidemiol. 1997;146:244–248. doi: 10.1093/oxfordjournals.aje.a009259.
35. Verkooijen HM, Fioretta G, Chappuis PO, et al. Set-up of a population-based familial breast cancer registry in Geneva, Switzerland: validation of first results. Ann Oncol. 2004;15:350–353. doi: 10.1093/annonc/mdh072.
36. Ziogas A, Anton-Culver H. Validation of family history data in cancer family registries. Am J Prev Med. 2003;24:190–198. doi: 10.1016/s0749-3797(02)00593-7.
37. Fox MP, Lash TL, Greenland S. A method to automate probabilistic sensitivity analyses of misclassified binary variables. Int J Epidemiol. 2005;34:1370–1376. doi: 10.1093/ije/dyi184.
38. Greenland S. Randomization, statistics, and causal inference. Epidemiology. 1990;1:421–429. doi: 10.1097/00001648-199011000-00003.
39. Jurek AM, Greenland S, Maldonado G, Church TR. Proper interpretation of misclassification effects: expectations versus observations. Int J Epidemiol. 2005;34:680–687. doi: 10.1093/ije/dyi060.
