Abstract
This article addresses issues relevant to interpreting findings from 26 epidemiologic studies of persons exposed to low-dose radiation. We review the extensive data from both epidemiologic studies of persons exposed at moderate or high doses and from radiobiology that together have firmly established radiation as carcinogenic. We then discuss the use of the linear relative risk model that has been used to describe data from both low- and moderate- or high-dose studies. We consider the effects of dose measurement errors; these can reduce statistical power and lead to underestimation of risks but are very unlikely to bring about a spurious dose response. We estimate statistical power for the low-dose studies under the assumption that true risks of radiation-related cancers are those expected from studies of Japanese atomic bomb survivors. Finally, we discuss the interpretation of confidence intervals and statistical tests and the applicability of the Bradford Hill principles for a causal relationship.
Since the publication of the BEIR VII report in 2006, numerous epidemiologic studies of persons exposed at low doses and dose rates have been conducted with the objective of providing a direct evaluation of cancer risks from such doses. This article is part of a broader effort to review epidemiologic studies found to meet several criteria, including the availability of radiation dose estimates for individual study subjects and a mean dose of 100 mGy or less. Twenty-six studies were found to meet the required criteria (1). These included studies of environmental [eight studies (2–9)], medical [four studies (10–13)], and occupational [14 studies, reported in 17 publications (14–30)] exposures and are listed in Table 1. Earlier articles in this monograph discuss potential biases related to dosimetry (35), confounding (36), and outcome ascertainment (37). In this article, we discuss additional topics relevant to the interpretation of results from the 26 studies as well as from future low-dose epidemiologic studies. These include consideration of radiation dose-response data from other studies, analytic methods, the effects of dose uncertainties, power and precision, and causation.
Table 1. Power for selected endpoints using a one-sided test of trend (with type I error α = 0.05)*

| Description | Incidence or mortality | Age at exposure | Mean dose/max dose (mSv) | Leukemia excluding CLL: Cancer cases or deaths | Leukemia excluding CLL: Power (%)* | Cancer excluding leukemia: Cancer cases or deaths | Cancer excluding leukemia: Power (%)* | Other endpoints: Endpoint | Other endpoints: Cancer cases or deaths | Other endpoints: Power (%)* |
|---|---|---|---|---|---|---|---|---|---|---|
| Environmental | | | | | | | | | | |
| Chornobyl residents (3) | Incidence | Childhood | 6/391 | 421 | 11.1 | | | | | |
| Three Mile Island (4) | Incidence | Adulthood | 0.1/2.1 | 55‖ | NC | 1588 | | | | |
| China background (9) | Mortality | Adulthood | 66/125+ | 15‖ | 10.3 | 941 | 13.5 | | | |
| Great Britain (GB) background (6) | Incidence | Childhood | 4/31 | 9058 | 72.4 | 18 389 | NC | CNS cancers | 6585 | 35.8 |
| Swiss background (8) | Incidence | Childhood | 9/49 | 530 | 23.2 | | | CNS cancers | 423 | 14.1 |
| Techa River solid cancers (2) | Incidence | All ages | 60/960 | | | 1933‡ | 60.4 | | | |
| Finnish background study (7) | Incidence | Childhood | 2/12 | 1093 | NC | | | | | |
| Taiwanese 60Co contamination study (5) | Incidence | All ages | 48/2363 | 11 | 9.7 | 274 | 11.8 | | | |
| Medical | | | | | | | | | | |
| Canadian cardiac imaging (11) | Incidence | Adulthood | ~18/30+ | | | 11 033‡ | 24.9§ | | | |
| French pediatric computed tomography (CT) (12) | Incidence | Childhood | 9/100+ (bone marrow), 23/100+ (brain) | 25 | 9.9 | | | CNS cancers | 27 | 15.2 |
| UK pediatric CT (10) | Incidence | Childhood | 12/100+ (bone marrow), 45/400+ (brain) | 74 | 18.9 | | | Brain tumors | 135 | 85.6 |
| Pooled childhood thyroid (low-dose) (13) | Incidence | Childhood | 30/200 | | | | | Thyroid cancer | 394 | 70.9 |
| Occupational | | | | | | | | | | |
| 1st Korean workers (14) | Mortality + incidence | Adulthood | 6/159 | 9‖,¶ | 8.7‖ | 289†,¶ | 6.1 | | | |
| Chornobyl liquidators (19) | Incidence | Adulthood | 13/500+ | 19 | 9.9 | | | | | |
| UK National Registry for Radiation Workers (NRRW) (25) | Incidence | Adulthood | 25/500+ | 234 | 43.7§ | 10 855 | 92.5§ | | | |
| 2nd Korean nuclear workers (18) | Mortality | Adulthood | 20/481 | 3 | 7.8§ | 96 | 8.0§ | | | |
| Rocketdyne workers (16) | Mortality | Adulthood | 14/1000 | 159 | 12.3§ | 4646 | 10.3§ | | | |
| Japanese workers (15) | Mortality | Adulthood | 12/100+ | 80‖ | 14.4 | 2636 | 12.7 | | | |
| Canadian nuclear workers (30) | Mortality | Adulthood | 22/679 | 17 | 12.8 | 468 | 10.5 | | | |
| Ukrainian Chornobyl liquidators (29) | Incidence | Adulthood | 82/3220 | 52 | NC | | | | | |
| German nuclear workers (24) | Mortality | Adulthood | 30/100+ | 7 | NC | 119 | NC | | | |
| US nuclear workers (28) | Mortality | Adulthood | 20/100+ | 369 | 52.3 | 10 877 | 46.6 | | | |
| International Nuclear Workers Study (INWORKS) (23, 27) | Mortality | Adulthood | 21/1332 (27), 16/1218 (23) | 531 | 61.4 | 17 957‡ | 51.1 | | | |
| US atomic veterans (17) | Mortality | Adulthood | ∼9/908 | 74 | 9.5§ | | | | | |
| French nuclear workers (22) | Mortality | Adulthood | 18/669 | 57 | 14.0 | 2356‡ | 13.1 | | | |
| US Radiologic Technologists (USRT) (21) | Incidence | Adulthood | 56/1735 | | | | | Basal cell carcinoma | 3615 | 77.3 |
| USRT (26) | Incidence | Adulthood | 37/647+ | | | | | Breast cancer | 1922 | 46.3 |
| USRT (20) | Mortality | Adulthood | 12/290 | | | | | CNS cancers | 193 | 5.4 |
*Unless otherwise stated, power for solid cancers was estimated by fitting a stratified linear excess relative risk model to the solid cancer incidence data of Preston et al. (31) or the cancer mortality data of Ozasa et al. (32), with stratification by sex, city, age at exposure, time since exposure, and distance (proximal vs distal) (and, for skin and thyroid, by Adult Health Study status). Power for leukemia was estimated by fitting a stratified linear excess relative risk model to the leukemia incidence data of Hsu et al. (33) or the leukemia mortality data of Ozasa et al. (32), with stratification by sex, city, age at exposure, attained age, and time since exposure, or by city, sex, age at exposure, and attained age. Power was estimated using the method outlined by Little et al. (34), using 10 000 Monte Carlo samples. NC = not calculated because dose distributions needed for power calculations were not available. GB = Great Britain; CT = computed tomography; NRRW = National Registry for Radiation Workers; INWORKS = International Nuclear Workers Study; USRT = US Radiologic Technologists.
†All cancers excluding leukemia and non-Hodgkin lymphoma.
‡All solid cancers.
§Power calculations were based on the distribution of people (or cases) by cumulative dose at the end of the study instead of the more appropriate distribution of dose by person-years. Thus, power is likely overestimated. See text.
‖All leukemia.
¶Deaths, including automobile workers.
Background: Other Studies of Radiation Dose-Response
In interpreting results from the low-dose epidemiologic studies evaluated in this monograph, it is important to consider the abundance of related data from other sources. Ionizing radiation has been clearly established as carcinogenic, with evidence coming predominantly from groups exposed at moderate and high doses (38); epidemiologic studies of populations exposed in this higher dose range offer a wealth of information on the expected magnitude and patterns of cancer risks in humans. Results of studies of persons exposed at lower doses have not been as clear but have been reasonably consistent with these findings [eg, (38–42)]. Experimental studies also offer valuable insights on the mechanisms of radiation carcinogenesis.
Evidence From Epidemiologic Studies of Carcinogenicity in Persons Exposed at Moderate and High Doses
The Life Span Study (LSS) cohort of Japanese atomic bomb survivors, which includes persons of all ages and both sexes, has served as the primary resource for estimating carcinogenic risks from low linear energy transfer (LET) external exposure (39–41). The LSS comprises 120 321 survivors of the Hiroshima and Nagasaki bombings including 26 580 persons who were not in either city at the time of the bombings and 7021 people who were in either city for whom doses could not be estimated (43, 44). Individual estimates of radiation dose range from less than 0.005 Gy to about 4 Gy, with 29 676 survivors with estimated doses in the range 0.005–0.1 Gy (43). Exposure was to the whole body, which allows the evaluation of risks of cancers of any specific site. Both cancer mortality (1950–2003) and cancer incidence (1957–2009) have been extensively studied, with compelling evidence of dose-response relationships for leukemia excluding chronic lymphatic leukemia (CLL) (33), all solid cancers (32, 45), and many site-specific cancers (31, 44, 46). Linear-quadratic functions have provided a good description of the dose response for all leukemias excluding CLL, with the linear term dominating at low doses. However, this may be driven by heterogeneity by leukemia type. The most recent analysis of all solid cancer incidence for 1958–2009 provides some evidence of nonlinearity in the dose response for males but not for females (45), a finding that appears to be driven by heterogeneity of risk by cancer site and the combining of different cancer endpoints within the “all solid cancer” rubric. The usefulness of this rubric is discussed in the “Disease Endpoints” section of this article. Recent evaluations of the dose-response relationships for lung (44) and female breast (46) found no evidence of departure from linearity. Based on an earlier evaluation (31), there is little evidence of departure from linearity for most other site-specific cancers. For the objectives of this monograph, it is particularly relevant that there is evidence of a statistically significant dose response for all solid cancers when analyses are restricted to the lowest doses (<0.1 Gy) (45).
Workers at the Mayak plutonium production complex in the Southern Urals also provide information on low-dose rate exposures (47–49). The mean colon dose from external gamma rays is 354 mGy but ranges above 3 Gy (48). Many of these workers also received a substantial dose from plutonium-derived alpha particles, which primarily exposes the lung, liver, and bone (49). External dose-response relationships have been demonstrated for leukemia (47) and for all solid cancers at sites with low potential for plutonium exposure (ie, solid cancers other than lung, liver, and bone) (48, 49). The solid cancer dose response was consistent with linearity.
In addition, there are extensive data on persons exposed for therapeutic or diagnostic medical reasons, many of whom have been followed for decades (39, 41). Dose-response relationships from fractionated exposure at high therapeutic doses (mean organ dose >4 Gy) have been demonstrated for several site-specific cancers, including cancers of the breast, lung, thyroid, bone, brain, bladder, stomach, and pancreas (50–52). Dose-response relationships have also been demonstrated in medical studies with more modest doses (mean organ dose 0.1–4 Gy) (53–56). These studies may be more relevant for the purposes of this monograph and tend to involve persons exposed for diagnostic reasons or for treatment for nonmalignant disease.
Evidence From Biology
For the class of tissue reaction (formerly deterministic) effects, the International Commission on Radiological Protection (ICRP) (57) assumes there is a threshold dose, below which there is no effect and that the response (probability of effect) smoothly increases above that point. Biologically, it is much more likely there is a threshold for tissue reaction (formerly deterministic) effects than for stochastic effects; tissue reaction effects ensue when a sufficiently large number of cells are damaged within a certain critical time period that the body cannot replace them (57, 58), but other mechanisms may also be involved (57). As outlined by Harris (59) [but see also (60)], there are compelling biological data to suggest that cancer arises from a failure of cell differentiation and that it is largely unicellular in origin. Canonically, cancer is thought to result from mutagenic damage to a single cell, specifically to its nuclear DNA, which in principle could be caused by a single radiation track (60), although there is limited evidence of polyclonality for some tissues and tumor types (61).
It has been known for some time that the efficiency of cellular repair processes varies with dose and dose rate (60, 62), and this may be the reason for the curvature in cancer dose response and dose rate effects observed in some epidemiologic and animal data for some endpoints. DNA double-strand breakage is thought to be the most critical lesion induced by radiation (60); although there is evidence that other targets within the cell may also be involved (63, 64), as reviewed by Little et al. (65) and Doss et al. (66), the relevance of these to cancer risk in humans at low doses is questionable. Repair of double-strand breaks relies on a number of pathways, even the most accurate of which, homologous recombination, is prone to errors (62); other repair pathways—for example, nonhomologous end joining, single-strand annealing—are intrinsically much more error prone (62, 67). The variation in efficacy of repair that undoubtedly occurs will affect the magnitude of unrepaired and misrepaired damage and, whereas unrepaired damage is likely to result in cell death, misrepaired damage is likely to result in mutation.
Relating to this, there has been considerable discussion in the literature about the existence of a dose-effect threshold or even beneficial (hormetic) carcinogenic effects (65, 68) at low doses. There is little evidence for this either for cancer in the Japanese atomic bomb survivors (45, 69, 70) or in various other datasets [eg, in groups exposed in utero (71), in some of the studies evaluated in this monograph (6, 8, 13), and some others (42)]; naturally, thresholds below a certain size cannot be ruled out in all these datasets, but the totality of evidence suggests that thresholds cannot be larger than about 10 mGy. Taken together with the biological data discussed above, thresholds or hormetic effects much above 10 mGy can be largely discounted for cancer.
A low LET dose of 1 mGy corresponds to about one electron track hitting each cell nucleus in the field of exposure (60, 62). As Brenner et al. (72) point out, this means that at low doses (10 mGy or less over a year) of low LET radiation, it is unlikely that temporally and spatially separate electron tracks could cooperatively produce DNA damage. Brenner et al. (72) surmise from this that in this low-dose region, DNA damage at a cellular level would be proportional to dose. Even at slightly higher levels of dose (<100 mGy), it is likely that the effects of low dose rate–low LET radiation (<5 mGy/h) (73) would be approximately linear, because the probability of cancer would be proportional to the number of electron track traversals of nuclear DNA. However, it is to be expected that there will be variation in the effect per unit dose by age at exposure and attained age, given the multi-stage nature of the carcinogenic process (74, 75), so that the overall excess radiation risk associated with a temporally distributed low-dose-rate radiation field will be a weighted sum of the radiation doses accumulated in small intervals of time or age. Effects of dose rate are also to be expected, and these may be independent of the effects of dose. The ICRP recommends application of a dose and dose-rate effectiveness factor of 2, by which excess risks per unit dose at low doses (roughly <100 mSv) and low dose rates (approximately <5 mSv/h) are assumed to be less than those at high doses or dose rates (40). The biological basis for such dose and dose rate effects is likely to be the saturation of repair mechanisms after high-dose-rate exposure, which will tend to increase the effectiveness of cancer induction per unit dose.
Summary
These extensive epidemiologic and radiobiologic data generally support a linear exposure response at low doses even in the absence of direct evidence and provide little evidence of a dose threshold or beneficial (hormetic) effects (65, 68). In the low-dose studies reviewed in this monograph, the null hypothesis is usually taken to be that there is no risk from radiation, that is, that the linear slope is zero. Given the well-established carcinogenicity of radiation, a more appropriate null hypothesis could be that the linear slope is positive or that it takes on the value that would be expected based on current risk estimates. As discussed later, this point can be reframed as an emphasis on confidence intervals, which indicate the set of values that would not be rejected if taken as the null hypothesis value.
Analytical Methods
In view of the limited statistical power inherent in studies of small risks, such as those of persons exposed at low doses or low dose rates, it is important to analyze data using statistical methods that have close to optimal power and that yield risk estimates that are as precise as possible. In addition, it should be possible to compare risk estimates with those from other studies, especially the studies that form the basis of radiation protection standards, which have been developed primarily from the LSS cohort (39–41).
Dose Response
A strength of the studies described in this monograph is the availability of dose estimates for individuals and the use of internal comparisons. Analyses were based on a hazard function of the form B(x1, x2, …,xk) RR(z), where B(x1, x2, … xk) is the baseline rate (rate in the absence of radiation exposure) expressed as a function of variables x1, x2, …, xk, which nearly always include age at risk (attained age) and sex, and usually additional variables. Most analyses have been based on a model in which the relative risk (or hazard ratio or rate ratio) (RR) for radiation exposure is assumed to be a linear function of dose, that is, RR(z) = 1 + β z, where β z is the excess relative risk (ERR = RR – 1), z is radiation dose, and β is the ERR per unit of dose. The linear model is chosen so that results can be readily compared with those from other epidemiologic studies and because of biologic considerations as discussed above. Departures from linearity can be evaluated by comparing the fit of the linear model with that of more general parametric models such as the linear-quadratic model and nonparametric models (2, 3, 10, 13, 19, 23, 27–29). However, in studies of populations with a limited dose range, statistical power is usually low for detecting departures from linearity. Such departures should nevertheless be investigated, recognizing that failure to reject linearity does not exclude nonlinearity in the dose response.
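To make the model concrete, the following sketch fits the linear ERR model RR(z) = 1 + βz to grouped person-year data by Poisson maximum likelihood. It is an illustration only: the dose categories, case counts, and person-years are invented and are not taken from any of the studies discussed here, and in a real analysis the baseline would be stratified or modeled by attained age, sex, and other variables as described above.

```python
# Illustrative sketch: fit a linear excess relative risk (ERR) model,
# rate = baseline * (1 + beta * dose), to hypothetical grouped person-year data
# by Poisson maximum likelihood. All numbers are invented.
import numpy as np
from scipy.optimize import minimize

dose = np.array([0.0, 0.05, 0.15, 0.35, 0.75])        # mean dose per category (Gy)
cases = np.array([120, 65, 40, 22, 9])                 # observed cases
pyr = np.array([5.0e5, 2.5e5, 1.4e5, 6.0e4, 2.0e4])    # person-years at risk

def neg_log_likelihood(params):
    log_baseline, beta = params
    expected = pyr * np.exp(log_baseline) * (1.0 + beta * dose)
    if np.any(expected <= 0):                          # require 1 + beta * dose > 0
        return np.inf
    return -np.sum(cases * np.log(expected) - expected)  # Poisson log-likelihood (up to a constant)

fit = minimize(neg_log_likelihood, x0=[np.log(cases[0] / pyr[0]), 0.0], method="Nelder-Mead")
log_b0, beta_hat = fit.x
print(f"Estimated ERR per Gy: {beta_hat:.2f}")
```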
Many of the studies evaluated in this monograph involve chronic exposure that is received over a period of several years with dose estimates available for each of several time periods, for example, for each calendar year. In these studies, the dose metric that is emphasized is cumulative exposure (sum of annual exposures), which increases over time and has appropriately been treated as a time-dependent variable in studies evaluated in this monograph. Often cumulative exposure is calculated up to m years preceding the time at risk, where m is the lag period that reflects a minimal time between exposure and the outcome event. Most of the studies have used a lag of 2 years for leukemia and 10 years for solid cancers.
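For chronic exposures, the lagged cumulative dose is therefore recomputed at each time at risk. A minimal sketch of this bookkeeping, with invented annual doses and the 10-year lag commonly used for solid cancers:

```python
# Minimal sketch: lagged cumulative dose as a time-dependent covariate.
# Annual doses are hypothetical; the 10-year lag is that commonly used for solid cancers.
annual_dose = {1975: 2.0, 1976: 1.5, 1980: 4.0, 1985: 0.5}   # mSv per calendar year (invented)

def lagged_cumulative_dose(year_at_risk, lag=10):
    """Sum of annual doses received at least `lag` years before the year at risk."""
    cutoff = year_at_risk - lag
    return sum(d for year, d in annual_dose.items() if year <= cutoff)

# When computing risk in 1988, exposure after 1978 is ignored:
print(lagged_cumulative_dose(1988))   # 3.5 (the 1975 and 1976 doses only)
```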
Statistical properties of the linear relative risk model can be problematic, especially because dose distributions are nearly always highly skewed to the right. Although asymptotic convergence (as the sample size N→∞) is still guaranteed, when the number of outcomes is small, the asymptotic approximations used to obtain statistical tests and confidence intervals may perform poorly (ie, may yield incorrectly sized tests) and estimates may be biased. For this reason, most studies have used likelihood ratio procedures, which perform better than Wald-based procedures. Even with likelihood ratio-based procedures, a small number of cases with high doses can have an undue influence. This might, for example, have been a problem in estimating confidence intervals in a small cohort study (n = 6242) of Taiwanese residents who were exposed to chronic gamma irradiation from materials contaminated with 60Co that were used in their apartment buildings. This study had only 11 cases of leukemia, with three in the highest dose category of 100+ mGy and none in the lowest category of less than 5 mGy (5). Despite difficulties in inferences based on skewed dose distributions, it is important to conduct analyses using a quantitative dose variable, either continuous dose or a continuous variable composed of the means of a sufficient number of dose categories (particularly at high doses) to characterize the distribution fully. If this is not done, statistical power may be substantially compromised.
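Because of these concerns, profile-likelihood (likelihood ratio) confidence limits for the ERR per unit dose are generally preferred to Wald limits. The following self-contained sketch, using the same kind of invented grouped data as above, scans the ERR parameter and retains the values not rejected by the likelihood ratio test:

```python
# Sketch of a profile-likelihood (likelihood ratio) confidence interval for the ERR
# per Gy in a linear relative risk Poisson model; all data are invented.
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import chi2

dose = np.array([0.0, 0.05, 0.15, 0.35, 0.75])         # mean dose per category (Gy)
cases = np.array([120, 65, 40, 22, 9])                  # observed cases
pyr = np.array([5.0e5, 2.5e5, 1.4e5, 6.0e4, 2.0e4])     # person-years at risk

def profile_loglik(beta):
    """Maximize the Poisson log-likelihood over the baseline rate for fixed beta."""
    rr = 1.0 + beta * dose
    if np.any(rr <= 0):
        return -np.inf
    nll = lambda log_b0: -np.sum(cases * np.log(pyr * np.exp(log_b0) * rr)
                                 - pyr * np.exp(log_b0) * rr)
    return -minimize_scalar(nll, bounds=(-20.0, 0.0), method="bounded").fun

betas = np.linspace(-1.0, 6.0, 701)
loglik = np.array([profile_loglik(b) for b in betas])
cutoff = loglik.max() - chi2.ppf(0.95, df=1) / 2.0      # likelihood ratio criterion
inside = betas[loglik >= cutoff]
print(f"ERR per Gy {betas[np.argmax(loglik)]:.2f}, "
      f"95% profile-likelihood CI ({inside.min():.2f}, {inside.max():.2f})")
```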
In part to address these problems, several studies (4–8, 11, 12, 16, 24) have fitted log-linear models in which the logarithm of RR is a linear function of dose. Although linear and log-linear models will likely lead to very similar results for testing the null hypothesis of no excess risk, parameter estimates and confidence intervals from log-linear models are not directly comparable with those from linear models. When the radiation dose used in the analysis is the sum of doses received at different times, the two models reflect different assumptions; with the linear model, the effects of doses received at different times are assumed to add whereas with the log-linear model they are assumed to multiply. In addition to analyses that treat cumulative dose as a continuous variable, many studies (3, 5, 6, 10, 13–19, 25, 27–30) present RR by categories of dose and in many cases indicate the numbers of cases or deaths in each category. A consistent increase in risk with increasing dose provides more compelling evidence of a causal association than does an increased risk among exposed persons (compared with unexposed persons) that does not depend on dose. Based on a subjective evaluation, the Techa River cohort (2), the Great Britain (GB) background study (6), the UK computed tomography (CT) study (10), the pooled thyroid cancer study (13), the Chornobyl clean-up workers study (19, 29), and the International Nuclear Workers Study (INWORKS) (27) exhibit what appear to be reasonably consistent increases with dose. The categorical estimates (with confidence intervals) and the fitted linear dose response are presented graphically in several studies, which allows a visual evaluation of the shape of the dose response. Presentation of the numbers of outcomes by dose category serves to indicate clearly the dependence of results on a small number of cases at higher doses.
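The difference between the two models described above can be made explicit for a cumulative dose accrued in two increments, z1 and z2 (a restatement of the point in the text, with β and γ denoting the respective slope parameters):

```latex
% Linear model: excess relative risks from doses received at different times add
\mathrm{RR}_{\mathrm{lin}}(z_1 + z_2) = 1 + \beta (z_1 + z_2) = 1 + \beta z_1 + \beta z_2
% Log-linear model: relative risks from doses received at different times multiply
\mathrm{RR}_{\mathrm{loglin}}(z_1 + z_2) = e^{\gamma (z_1 + z_2)} = e^{\gamma z_1} \, e^{\gamma z_2}
% Near the null, e^{\gamma z} \approx 1 + \gamma z, so tests of no excess risk behave similarly
```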
Interstudy Comparisons and Effect Modification
Data on persons exposed at higher doses, such as the LSS cohort, have demonstrated that the radiation effect can be modified by sex, age at exposure, time since exposure, and other variables. This effect modification needs to be considered when comparing results from a low-dose study with those from the LSS. This is often accomplished by conducting special analyses of subsets of the LSS data that are more comparable with the low-dose study of interest. For example, solid cancer risk estimates based on male LSS members exposed between ages 20 and 60 years (27, 76) have been used for comparison with estimates based on nuclear workers. For some specific cancer sites (eg, stomach), baseline rates for the Japanese LSS members are substantially higher (or lower) than those in Western countries. It is not clear whether relative or absolute risks are more comparable across countries with different baseline rates, but this should be considered as a possible explanation for differences in ERR estimates across such countries. Most studies of persons exposed at low doses have not presented absolute excess risk estimates.
The way in which risk is modified by age and other variables is not necessarily the same for chronic low-dose exposure as for a single acute exposure (as in the LSS), and thus direct evaluation of how these variables modify risk is of interest. This can be accomplished by displaying ERRs by categories of variables such as sex and age, and testing dependencies on these variables (trends and heterogeneity). Although such tests tend to have low power, they have sometimes suggested patterns that are different from those observed in the LSS. For example, data from the INWORKS cohort suggest that solid cancer relative risks increase with increasing age at exposure (77), the opposite of the pattern observed in the LSS (39). Effect modification has been investigated in a few of the stronger studies (2, 3, 26, 77).
Pooled Analyses
The studies evaluated in this monograph included several pooled analyses, which combine individual data from several study populations and are a valuable tool not only for increasing study power but also for formally evaluating the consistency of findings across studies and summarizing the overall findings. This approach has proved especially valuable in studies of nuclear workers where both national (22, 25, 28) and international (23, 27) pooled analyses have been carried out and have also been used to evaluate data on thyroid cancer (13) and leukemia risk from low-dose exposure in childhood (42). Evaluating comparability of dosimetry and heterogeneity of results among studies are important aspects of pooled analyses. Consistency of results across studies, especially those from different types of study populations, increases confidence that substantial biases are not present and suggests generalizability of the study findings.
Other Issues Relevant to Interpretation
Confounding
Confounding has been discussed in depth by Schubauer-Berigan et al. (36). To address potential confounding, dose-response analyses are adjusted for attained age, sex, and other variables either through stratification or by including categorical or continuous variables in the expression for the baseline risk. It is worth noting that adequate control for confounding requires reasonably accurate measurement and modeling of the confounding variables. Although analyses are nearly always adjusted for attained age and sex, the choice of additional variables is not straightforward because under-adjustment can lead to bias whereas over-adjustment can lead to loss of statistical power.
The potential for a confounding variable to affect conclusions about an observed statistical association between radiation exposure and disease is greater for the small relative risks expected from low doses of radiation than for larger relative risks (Supplementary Appendix A, available online). It is also likely to be greater for solid cancers than for leukemia, especially leukemia in childhood. Note that if estimates derived from the LSS are reasonably correct, relative risks for solid cancers at a dose of 100 mSv are predicted to be about 1.05 (32, 45). However, for leukemia, relative risks at 100 mSv tend to be in the range of 2–4 for exposure in adulthood and even higher for exposure in childhood (32, 33), making it less plausible to ascribe increased RRs entirely to confounding. Schubauer-Berigan et al. (36) make the important point that potential for bias does not necessarily mean that bias is present.
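The arithmetic behind these magnitudes is simple. Assuming, for illustration, an LSS-based solid cancer ERR of roughly 0.5 per Sv (an approximate value; the precise figure depends on sex, age at exposure, and attained age):

```latex
% Solid cancers: predicted relative risk at 100 mSv under linear extrapolation
\mathrm{RR}(0.1\ \mathrm{Sv}) \approx 1 + 0.5 \times 0.1 = 1.05
% Leukemia: the relative risks of 2--4 at 100 mSv quoted above correspond to
% excess relative risks of 1--3 at that dose, far larger relative excesses
```

An excess of only about 5% is small enough that modest confounding could, in principle, produce or obscure it, which is the point made above.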
Disease Endpoints
Linet et al. (37) evaluated issues related to limitations in outcome assessment and concluded that, with few exceptions, any bias that might have resulted from such limitations would have been towards the null.
In this monograph, we evaluate both leukemia and solid cancers. Leukemia is evaluated separately because data on persons exposed at higher doses indicate that the magnitude of the ERR, temporal patterns, and possibly the shape of the dose response for leukemia differ markedly from those for solid cancers. In the pooled analysis of persons exposed to low dose radiation in childhood, thyroid cancer is evaluated because the thyroid gland is known to be highly radiosensitive to exposure in childhood (13). For studies of childhood CT exposure (10, 12, 78), leukemia and brain cancer were evaluated because these are the most common cancers in children.
In studies involving whole-body irradiation, including most of the environmental and occupational studies, we evaluate all solid cancers as a group as a way of achieving larger numbers and increasing statistical power compared with evaluating individual site-specific cancers. Although LSS results (32, 45) suggest that radiosensitivity may vary by organ, formal tests have not rejected homogeneity. Although the magnitude and shape of the dose response may vary by cancer site, when this heterogeneity is not too great, analyses of all solid cancers as a group can provide a useful summary of radiation effects in exposed populations with relatively uniform whole-body exposures, especially in studies where low power precludes meaningful site-specific analyses.
Several studies involving whole-body radiation present analyses for each of several cancer sites (14–16, 18, 22, 24, 25, 28, 79). Although this monograph does not review these site-specific analyses, we note that interpretation of site-specific results is complicated by the problem of multiple comparisons (including false positive and negative results and the likelihood of upward bias in nominally statistically significant results). Methods for addressing such multiple comparisons have been proposed (80, 81).
Impact of Dose Uncertainty
Dose estimates used in epidemiologic studies are subject to uncertainties. This is true both for the low-dose studies evaluated in this monograph (35) and for the higher dose studies (such as the LSS) that have been used to estimate risks at lower doses through linear extrapolation. The effects of these uncertainties on dose-response analyses depend on the types of error and their magnitude. We assume that errors are nondifferential, that is, that they do not depend on whether the individual has the health effect being studied, an assumption that is reasonable for most of the studies considered in this monograph. Supplementary Appendix B (available online) provides additional details.
The impact of dose uncertainties on dose-response relationships depends on whether the errors are classical or Berkson and on whether they are shared or independent. Classical error can be thought of as error that results from an imprecise measurement device such as a film dosimeter. Berkson error can be thought of as grouping error, or error that occurs when the mean for a group is substituted for the individual doses within the group. An example of the latter is the use of a single factor to convert “recorded” external doses in nuclear worker studies to organ doses when the true factor depends on the specific (but unknown) radiation environment of the individual workers. In many studies with Berkson error, there may also be errors in the group means that are assigned to individuals. These errors are correlated because they are shared by the individuals to whom the group mean is assigned. Further and more rigorous discussion of these issues in the context of radiation studies is found in Schafer and Gilbert (82) and in Gilbert (83).
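In the standard additive notation, with X the true dose, Z the estimated or assigned dose, and U the error (multiplicative, lognormal analogues are more usual in radiation dosimetry, as in the LSS analyses discussed below):

```latex
% Classical error: the estimated dose scatters around the true dose
Z = X + U_{C}, \qquad U_{C} \ \text{independent of } X
% Berkson error: the true dose scatters around the assigned (e.g., group mean) dose
X = Z + U_{B}, \qquad U_{B} \ \text{independent of } Z
```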
Many of the effects of dose uncertainty (ie, measurement error in dose assessment) are well known (84, 85). For example:
1) Loss of power in dose-response analyses that use imperfectly estimated doses compared with using true doses;
2) Biases in the assessment of the dose-response relationship between the disease outcome and imperfect dose compared with the relationship with true dose;
3) Distortion of inferences (confidence intervals) due to shared (non-independent) dosimetry errors.
Loss of study power, point 1, is irretrievable unless one has the option of improving dose estimates. Both classical and Berkson error can lead to loss of power (84, 86–88). However, tests of the null hypothesis that there is no dose response, calculated in the usual way, reflect this loss of power and yield the correct P value. Nevertheless, risk estimates may be biased as discussed below.
Considering point 2, nondifferential classical errors generally weaken the dose response, with linear risk coefficients biased towards zero, whereas Berkson errors cause much less bias in most situations (84, 85). Methods that reduce attenuation towards the null are often a key element of measurement error analysis. A classic example of "de-attenuation" is the measurement error analysis applied to the LSS dosimetry systems (since DS86) using "regression calibration." This method replaces the unknown true dose with its expectation given the measured dose and then performs analyses as if the true dose were known (84, 89). In the LSS, the effect of de-attenuation for lognormal classical errors with roughly a 30–40% coefficient of variation (90–92) is quite modest, with the estimated linear ERR coefficient generally increased by between 4% and 11%, depending on the assumptions made about the magnitude of the dose uncertainties. A similar range of adjustments is seen in certain environmentally exposed groups. Even in cases where dose errors were considerably larger, up to approximately 60%, the adjustments to linear risk coefficients were no more than 25% (93). Several major studies are ongoing, including updates of the UK background radiation study (6) and the UK-NCI CT study (10, 78), in which the analyses will take account of uncertainties in dose estimates.
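The following simulation sketch illustrates the principle of regression calibration; it is not the LSS analysis, and all parameter values are invented. True doses are lognormal, the estimated dose carries multiplicative lognormal classical error, and the calibration step replaces the estimated dose with the expectation of the true dose given the estimate before the dose-response slope is re-estimated.

```python
# Simulation sketch of attenuation from classical dose error and its correction by
# regression calibration (replace the estimated dose with E[true dose | estimated dose]).
# Parameter values are invented; a simple linear outcome model is used for clarity.
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
mu_x, s_x = np.log(0.1), 0.8             # true doses (Gy) ~ lognormal (hypothetical)
s_e = 0.35                               # classical error, ~35% on the log scale
beta_true = 0.5                          # true slope of outcome on dose

x = rng.lognormal(mu_x, s_x, n)                       # true dose
z = x * rng.lognormal(-s_e**2 / 2, s_e, n)            # estimated dose (error has mean 1)
y = beta_true * x + rng.normal(0.0, 0.5, n)           # outcome

c_naive = np.cov(z, y)
naive = c_naive[0, 1] / c_naive[0, 0]                 # attenuated slope from estimated dose

# Regression calibration: E[x | z] under the lognormal dose and error model.
rho = s_x**2 / (s_x**2 + s_e**2)
cond_mean = mu_x + rho * (np.log(z) - (mu_x - s_e**2 / 2))   # E[log x | log z]
cond_var = (1.0 - rho) * s_x**2
x_rc = np.exp(cond_mean + cond_var / 2.0)             # E[x | z]
c_rc = np.cov(x_rc, y)
corrected = c_rc[0, 1] / c_rc[0, 0]                   # de-attenuated slope

print(f"true {beta_true:.2f}, naive {naive:.3f}, calibrated {corrected:.3f}")
```

With these invented error parameters, the naive slope is noticeably attenuated and the calibrated slope recovers the true value to within simulation error.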
Interestingly, the resulting de-attenuation of dose response achieved by regression calibration does not usually alter the statistical significance level of a test of no relationship between dose and disease. Roughly, de-attenuation raises the risk estimate but also (and nearly proportionally) increases the standard error of the risk estimate. Thus, confidence intervals for corrected dose-response estimates tend to cross zero (no effect) only if confidence intervals for uncorrected dose response also cross zero.
The last effect noted, point 3, is especially important in complex exposure estimation (usefully described as "dose reconstruction"). In complex dose reconstruction systems, uncertainty in some important dose determinant can cause correlated or shared errors in many individual dose estimates. If, for example, a source term (such as the total radiation released from a plant) is uncertain, then this would affect some, or all, of the study dose estimates. In this situation, correcting for shared errors involves widening the confidence limits for the risk estimates on a multiplicative scale and does not change significance levels for testing the null hypothesis that there is no association between dose and disease (87, 88). The extent to which confidence intervals are widened depends on the size of the shared components of dose error. To represent complex patterns of shared (and unshared) uncertainty, Monte Carlo dosimetry systems have been devised that provide many realizations of possible dose rather than a single best dose; see, for example, the studies of 131I exposure in Kazakhstan (94) and surrounding the Hanford site (95) and of internal and external exposures at the Mayak plutonium production facility (96) and along the Techa River in Russia (97). Finding the best ways to use these multiple realizations in epidemiologic analysis is a topic of ongoing measurement error research (88, 98).
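Such a Monte Carlo dosimetry system can be caricatured as follows. This is a schematic sketch with invented magnitudes, not a description of any of the cited systems: each realization of the cohort's dose vector perturbs the best-estimate doses by one factor shared by all individuals (for example, an uncertain source term) and by independent individual factors, and the epidemiologic analysis is then repeated across realizations and the results combined.

```python
# Schematic sketch of a two-dimensional Monte Carlo dosimetry system: shared and
# unshared multiplicative lognormal errors applied to best-estimate doses.
# Magnitudes are invented; the cited systems are far more detailed.
import numpy as np

rng = np.random.default_rng(2)
n_people, n_realizations = 1_000, 500

best_dose = rng.lognormal(np.log(0.05), 0.7, n_people)    # best-estimate doses (Gy), invented

sd_shared, sd_unshared = 0.3, 0.2                          # SDs of the log errors (invented)
shared = rng.lognormal(-sd_shared**2 / 2, sd_shared, size=(n_realizations, 1))
unshared = rng.lognormal(-sd_unshared**2 / 2, sd_unshared, size=(n_realizations, n_people))

# Each row is one possible "true" dose vector for the whole cohort.
dose_realizations = best_dose * shared * unshared          # shape (n_realizations, n_people)

# In practice the dose-response analysis is repeated over these realizations (or they
# are used within a likelihood or Bayesian framework), and the spread of the resulting
# risk estimates reflects, in particular, the shared component of the uncertainty.
print(dose_realizations.shape)
```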
Estimation of internal dose may involve especially complex measurement issues. Even if one or many relevant measurements are available, for example, whole-body counts for strontium exposure as in the Techa River study (97), turning counts into (annual) dose histories requires an understanding of the biological and radiological properties of the exposure. Modeling the biological properties, such as the placement of 90Sr proximal to red bone marrow, may involve uncertain parameters that affect an individual's dose history and require data on intake times that are also subject to error. In the occupational studies evaluated in this monograph, internal doses are of concern primarily because they might distort the estimates of risk from external dose. Because estimates of doses from internal exposure are often not available, other approaches have been used to address this issue. For example, the INWORKS study reported results from analyses that excluded those workers with the greatest potential for internal exposure and from analyses that excluded cancers of the lung, liver, and bone, the three main sites of plutonium deposition.
An issue relevant to nuclear worker studies concerns limitations of the dosimeters used in the 1940s and 1950s, when minimum detection levels (MDLs) were higher than in later years and when dosimeters were exchanged weekly instead of monthly as in later years. Doses below the detection limit were in many cases recorded as zero, which could lead to underestimation of dose in this early period. Other practices included assigning one-half of the MDL, which could lead to overestimation of dose because, in some facilities, some workers with little potential for radiation exposure were monitored. In later years, the MDL was smaller and dosimeters were exchanged less frequently, so that this was much less of a problem. There is also an issue of "notional" dose, in which for regulatory purposes the dose from missing periods might be replaced by the appropriate fraction of the annual dose limit (50 mSv/y for much of the period). Daniels et al. (35) note that various efforts in the United Kingdom and United States to adjust results for doses below the detection limit, in particular using a variety of imputation methods (99, 100), did not find evidence of serious bias in risk estimates. Sensitivity analyses can (and should) be applied more broadly to examine whether individuals with special dose estimation problems have an important effect on risk estimates.
Ideally, analyses should take account of uncertainties in dose estimates, but in practice this is complex and requires that dose errors from various sources be quantified and that the correlations of errors between persons be understood. A relatively simple approach, regression calibration, will at least increase risk estimates by correcting for the attenuation towards the null that results from classical error. However, even this requires some knowledge of the magnitude of the classical error, from which the true underlying dose distribution can be obtained (84). More computationally onerous methods of correcting for dose error are also possible and more fully take account of the uncertainty distribution. These are described in Supplementary Appendix B (available online). Dose uncertainty has not been fully accounted for in most of the studies presented in this monograph (35). For these studies, the most likely effects are underestimation of risk coefficients (ERR per Gy) and confidence intervals that are too narrow (because of classical as well as shared Berkson error, as discussed above in paragraphs 5 and 6 of this section). It is very unlikely that dose uncertainty will bring about a spurious statistically significant dose-response relationship (84, 88, 101).
Power and Precision
A key criterion for determining the contribution of a low-dose study to our understanding of radiation risks at low doses is the precision of the resulting risk estimates and the power of the study. Precision and power depend primarily on the expected magnitude of the risk, the magnitude and distribution of the doses, and the number of deaths or incident cases (41). In the power calculations presented here, estimates for the LSS cohort are used to determine the assumed magnitude of the risk, taking account, to the extent feasible, of age at exposure and follow-up time. In all cases, we estimate the power of a one-sided test of trend (with type I error rate [=α] of 5%) using the methodology described by Little et al. (34) and briefly described in Supplementary Appendix C (available online). That is, we estimate the probability that a test of the null hypothesis of no effect would be rejected if the true ERR per unit dose was equal to the estimate from the LSS cohort.
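To illustrate the general idea, the following simplified sketch, which is not the stratified LSS-based method of Little et al. (34), estimates the power of a one-sided trend test by simulating Poisson case counts across dose categories under an assumed ERR per unit dose; the dose distribution, baseline rate, and assumed ERR are all invented.

```python
# Simplified sketch of Monte Carlo power estimation for a one-sided dose-trend test
# (alpha = 0.05). The dose distribution, baseline rate, and assumed ERR are invented;
# the calculations reported in Table 1 use fitted, stratified LSS models (34).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
dose = np.array([0.0, 0.02, 0.06, 0.15, 0.40])     # category mean doses (Gy)
pyr = np.array([4e5, 2e5, 1e5, 4e4, 1e4])          # person-years per category
baseline = 2e-4                                     # baseline rate per person-year
err_per_gy = 0.5                                    # assumed true ERR per Gy
alpha, n_sim = 0.05, 10_000

expected = pyr * baseline * (1.0 + err_per_gy * dose)

# Trend (score) test: conditional on the total number of cases, compare the total dose
# carried by the cases with its expectation under the null hypothesis, under which
# cases are distributed across categories in proportion to person-years.
p_null = pyr / pyr.sum()
mu_d = np.sum(p_null * dose)
var_d = np.sum(p_null * dose**2) - mu_d**2
z_crit = norm.ppf(1.0 - alpha)

rejections = 0
for _ in range(n_sim):
    cases = rng.poisson(expected)
    n_cases = cases.sum()
    if n_cases == 0:
        continue
    z = (np.sum(cases * dose) - n_cases * mu_d) / np.sqrt(n_cases * var_d)
    rejections += z > z_crit

print(f"Estimated power: {rejections / n_sim:.2f}")
```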
These power calculations were necessarily based on individual study data that were readily available, usually data included in publications. The available data differed by study, and not all studies provided data on organ dose. In a few of the nuclear worker studies, the distributions of person-years by lagged dose were not available so that power calculations had to be based on the distribution of people by cumulative dose at the end of the study. Because many persons will have spent much of their follow-up in lower dose groups than the one they end up in, power will generally be overestimated by use of a distribution of persons rather than person-years. Calculations had to be based on the overall dose distributions and thus could not account for differences in these distributions by age, calendar year, and other variables. In addition, calculations did not take account of dose estimation errors. Although the estimates of power shown in Table 1 provide a general idea of the power of various studies, they need to be interpreted cautiously. There were a few studies for which power calculations could not be made, because we could not derive an associated dose distribution from the published information.
Estimates of power are presented in Table 1 with additional detail in Supplementary Appendix C Table C2 (available online). For most studies, including particularly the smaller studies of nuclear workers (14–19), power is low as reflected in the wide confidence intervals for the estimated ERR (1). Of the 18 studies for which power calculations were made (Table 1), nine had power that was less than 20% for all endpoints evaluated. Only seven studies had power that exceeded 50% for any endpoint. These were the Great Britain background (leukemia) (6), the Techa River (solid cancer) (2), the pooled analysis of persons exposed to low dose radiation in childhood (thyroid cancer) (13), the National Registry for Radiation Workers (NRRW) (all cancers excluding leukemia incidence) (25), US workers (leukemia) (28), INWORKS (leukemia, solid cancers) (23, 27), and the US Radiologic Technologists (USRT) (basal cell carcinomas) (21). Because it was beyond the scope of this monograph to derive pooled statistical tests and estimates based on all relevant data (1), we could not calculate power based on combined data. Such power would undoubtedly be considerably larger than that for individual studies. As would be expected if the assumed LSS-based ERR per unit dose were reasonably correct, studies with greater power would be more likely to have positive estimates of the ERR per unit dose that were at least close to statistical significance, whereas studies with low power would be more likely to have estimates with wide confidence intervals that included zero. Discrepancies from this pattern likely reflect limitations in the data available for power calculations and/or differences between LSS-based ERR per unit dose and that estimated in the low-dose study and/or chance.
Although power calculations focus on statistical testing, we are rarely in the position of accepting or rejecting a finding based on the P value from a single study. In fact, many other criteria must be considered, as we discuss in the section below on causation. However, the P value, which is the probability of obtaining a result at least as extreme as that observed if there were no underlying risk, is a useful value to consider in evaluating a study's findings. In the next section, we discuss the relationship between statistical tests and confidence intervals.
Interpretation of Confidence Intervals and Statistical Tests
Earlier we noted that because radiation is a well-established carcinogen, an appropriate null hypothesis might be that the linear slope (ERR per unit dose) is positive rather than the more common null hypothesis that this slope is zero. Thus, rather than no effect we can test the null hypothesis that the ERR per unit of dose takes on any postulated value, for example, a value based on current risk estimates. Relevant to this point, we remind readers that confidence intervals are defined as the set of values that would not be rejected at the given level of type I error if taken as the null hypothesis. For example, Richardson et al. (27) present an estimate for all solid cancers from the INWORKS study of 0.047 per 100 mGy with a 90% confidence interval of 0.018 to 0.079 that can be interpreted as indicating that values less than 0.018 or greater than 0.079 can be rejected with P less than .10 for a two-tailed test. Specifically, the interval includes 0.032, an estimate based on male members of the LSS exposed between 20 and 65 years of age. Thus, the INWORKS data are found to be compatible with the point estimate from the LSS but also compatible with both smaller and larger values. Thus, taking the null hypothesis to be that there is a positive radiation effect can be reframed as emphasizing confidence intervals rather than tests of the null hypothesis of no effect.
Consideration of statistical power (as reflected in confidence intervals) is important in interpreting null results. In each of the studies evaluated in this monograph in which the ERR did not differ statistically significantly from zero (4, 7, 9, 12, 14–22, 24, 26, 28, 30), the confidence intervals included positive values that were usually higher than those that form the basis of radiation protection standards; thus, it could be low statistical power rather than the absence of a true effect that is responsible for the lack of statistical significance. Similarly, studies with (for some endpoints) negative estimates of the linear slope [eg, (4, 7, 9, 15, 17, 21, 24, 30)] should not be interpreted as evidence for hormesis when the confidence intervals include positive values.
Causation
Interpreting findings from epidemiologic studies consists at least in part of evaluating whether it is likely that the observed findings reflect a causal relationship. The English epidemiologist and statistician Austin Bradford Hill described nine principles that can be helpful in assessing causality (102). Bradford Hill notes that the principles are to be applied in a situation in which "our observations reveal an association between two variables, perfectly clear-cut and beyond what we would care to attribute to the play of chance. What aspects of that association should we especially consider before deciding that the most likely interpretation of it is causation?" He also notes that it is not necessary for all principles to be satisfied for an association to be causal, nor does satisfying them prove a causal interpretation of an association. Since Bradford Hill's article (102), it has been recognized that establishing causation is more complex than Bradford Hill discussed (103, 104), particularly in view of advances in molecular biology. The first seven principles are listed below with comments about their applicability to studies of low-dose radiation exposure. Principles 8 (Experiment) and 9 (Analogy) do not apply to the low-dose radiation studies evaluated in this monograph.
1) Strength. This is the effect size, which for low-dose radiation studies is measured by the magnitude of the ERR. Although the size of the ERR at low doses is not large, it is as expected based on linear extrapolation from data on persons exposed at higher doses. Hill (102) notes that “we must not be too ready to dismiss a cause-and-effect hypothesis merely on the grounds that the observed association appears to be slight.”
2) Consistency. The effects of low-dose radiation exposure have been evaluated by many investigators in environmental, medical, and occupational settings, using both cohort and case-control designs. The ERRs per unit dose shown in Figures 1 and 2 of Berrington de Gonzalez et al. (1) are statistically compatible with one another within the categories defined by endpoint (leukemia/solid cancers) and age at exposure (childhood/adult). A possible exception is the Canadian cardiac imaging study (11).
3) Specificity. This principle is not strictly met because there are other agents that can cause leukemia and solid cancers. However, the lack of dose response for most noncancer endpoints and for types of cancer (such as CLL) that have not been typically linked with radiation can add support for a causal relationship rather than the effect of some insidious common bias (eg, differential error in dose assessment between cancer cases and controls).
4) Temporality. In all studies evaluated, exposure preceded the outcome. In most studies, analyses allowed for a 2-year lag for leukemia and a 5- or 10-year lag for solid cancers. However, in some of the medical CT studies [eg, (10, 12)], this principle was harder to evaluate because the imaging may have been prompted by early signs or symptoms of the subsequent cancer, so that the exposure could not have "caused" the outcome; even when a lag period is applied, this can result in bias, an example of confounding due to reverse causation.
5) Biological gradient. Dose-response analyses were conducted for all studies. In studies with adequate power to achieve statistical significance, categorical analyses were presented and indicated a consistent increase with increasing dose (6, 13, 25, 78).
6) Plausibility. There is a plausible mechanism for a linear dose response developed from experimental radiobiological studies that applies to the low doses in these studies.
7) Coherence. The general finding of a linear effect at low doses is consistent with biological data. The ERRs per unit dose from these studies are mostly consistent with those estimated from extensive data on populations exposed at moderate doses, including data from the LSS cohort. That is, the estimated ERRs per unit dose from these studies often have wide confidence intervals that include the LSS-based estimates.
Taken as a group, the studies of low-dose radiation exposure meet five of the Bradford Hill principles: consistency, temporality, biological gradient, plausibility, and coherence. However, as noted by Bradford Hill, “What I do not believe—and this has been suggested—is that we can usefully lay down some hard-and-fast rules of evidence that must be observed before we accept cause and effect. None of my nine [principles] can bring indisputable evidence for or against the cause-and-effect hypothesis and none can be required as a sine qua non. What they can do, with greater or less strength, is to help us to make up our minds on the fundamental question—is there any other way of explaining the set of facts before us, is there any other answer equally, or more, likely than cause and effect?” (102). It is in this spirit that we have discussed the Bradford Hill principles. The Summary article in this monograph (105) provides final comments on the interpretation of the 26 epidemiologic studies of persons exposed at low doses of radiation.
The articles in this monograph consider studies of the radiation dose response in populations for which individual or individualized dose estimates are available, and the population mean doses are less than 100 mGy. This article has focused on issues in the analysis and interpretation of the results of such low-dose studies. Given the likely magnitude of any radiation effect at the low doses in these studies, most studies have low power to detect a statistically significant dose response and are vulnerable to confounding by disease risk factors that are correlated with dose. In recent years, concerns have also been raised about the effects of uncertainties in individual dose estimates on risk estimates from studies of disease risks in populations with low (and moderate) doses.
Studies of radiation effects on disease risk at moderate to high doses (100 mGy or more) in animals and humans provide compelling evidence of radiation-associated increases in the rates of cancer and, possibly, some noncancer outcomes. For many outcomes, the dose response is consistent with linearity over the range of doses considered. In view of these findings and the limited power of most low-dose studies, the results should not be interpreted simply based on the statistical significance of a test of the null hypothesis of no dose response, but also on the consistency with results for comparable outcomes in more heavily exposed populations. Because of the evidence of effects at higher doses, there is value in studies of low-dose populations despite the limited power of any specific study. There is a need for additional efforts to consider the consistency of results from the collection of low-dose–study results and the results of moderate- and high-dose studies. It is also important to avoid interpreting an individual nonstatistically significant result as indicating that there is no risk at low doses, especially if the study is small or the doses are especially low.
One must be cognizant of the potential effects of unmeasured confounders and do what one can to adjust for these effects using explicit adjustment or stratification, when possible. However, it is generally inappropriate to simply dismiss the results of well-designed and carefully analyzed low-dose studies based on vague concerns about potential confounding.
The impact of dose uncertainty on risk estimates is a topic of increasing interest in radiation effect research. Dose uncertainty can bias risk estimates (usually downward), and failure to allow for the effect of dose uncertainty, especially shared uncertainties, can result in confidence intervals that are too narrow, although dose uncertainty is unlikely to affect tests of statistical significance. Methods to allow for the effects of shared uncertainty are being developed, but more work is needed on this important aspect of radiation risk estimation. Importantly, unless dose estimation is dependent on the outcome, dose uncertainty is very unlikely to induce a spurious statistically significant result.
Analysis and interpretation of radiation effects on disease risks (or other outcomes) from low-dose epidemiologic studies present many challenges. Despite these limitations, the collection of results from these dose-response analyses, and the comparison of their findings with those from more powerful studies of populations exposed at higher doses, provides useful and important information on radiation effects on disease risks at low doses.
Funding
This work was supported by the Intramural Research Program of the National Institutes of Health, National Cancer Institute, Division of Cancer Epidemiology and Genetics.
Notes
Affiliations of authors: Radiation Epidemiology Branch, Division of Cancer Epidemiology and Genetics, National Cancer Institute, Bethesda, MD, USA (ESG, MPL); Hirosoft International, Eureka, CA, USA (DLP); Department of Preventive Medicine, School of Medicine, University of Southern California, Los Angeles, CA, USA (DOS).
The authors have no conflicts of interest.
The authors are grateful for the detailed and helpful comments of the referee.
Supplementary Material
References
1. Berrington de Gonzalez A, Daniels RD, Cardis E, et al. Epidemiological studies of low-dose ionizing radiation and cancer: rationale and framework for the monograph and overview of eligible studies. J Natl Cancer Inst Monogr. 2020;2020(56):97–113.
2. Davis FG, Yu KL, Preston D, et al. Solid cancer incidence in the Techa River incidence cohort: 1956-2007. Radiat Res. 2015;184(1):56–65.
3. Davis S, Day RW, Kopecky KJ, et al. Childhood leukaemia in Belarus, Russia, and Ukraine following the Chernobyl power station accident: results from an international collaborative population-based case-control study. Int J Epidemiol. 2006;35(2):386–396.
4. Han YY, Youk AO, Sasser H, et al. Cancer incidence among residents of the Three Mile Island accident area: 1982-1995. Environ Res. 2011;111(8):1230–1235.
5. Hsieh WH, Lin IF, Ho JC, et al. 30 years follow-up and increased risks of breast cancer and leukaemia after long-term low-dose-rate radiation exposure. Br J Cancer. 2017;117(12):1883–1887.
6. Kendall GM, Little MP, Wakeford R, et al. A record-based case-control study of natural background radiation and the incidence of childhood leukaemia and other cancers in Great Britain during 1980-2006. Leukemia. 2013;27(1):3–9.
7. Nikkilä A, Erme S, Arvela H, et al. Background radiation and childhood leukemia: a nationwide register-based case-control study. Int J Cancer. 2016;139(9):1975–1982.
8. Spycher BD, Lupatsch JE, Zwahlen M, et al.; for the Swiss Pediatric Oncology Group. Background ionizing radiation and the risk of childhood cancer: a census-based nationwide cohort study. Environ Health Perspect. 2015;123(6):622–628.
9. Tao Z, Akiba S, Zha Y, et al. Cancer and non-cancer mortality among inhabitants in the high background radiation area of Yangjiang, China (1979-1998). Health Phys. 2012;102(2):173–181.
10. Berrington de Gonzalez A, Salotti JA, McHugh K, et al. Relationship between paediatric CT scans and subsequent risk of leukaemia and brain tumours: assessment of the impact of underlying conditions. Br J Cancer. 2016;114(4):388–394.
11. Eisenberg MJ, Afilalo J, Lawler PR, et al. Cancer risk related to low-dose ionizing radiation from cardiac imaging in patients after acute myocardial infarction. CMAJ. 2011;183(4):430–436.
12. Journy N, Roue T, Cardis E, et al. Childhood CT scans and cancer risk: impact of predisposing factors for cancer on the risk estimates. J Radiol Prot. 2016;36(1):N1–N7.
13. Lubin JH, Adams MJ, Shore R, et al. Thyroid cancer following childhood low-dose radiation exposure: a pooled analysis of nine cohorts. J Clin Endocrinol Metab. 2017;102(7):2575–2583.
14. Ahn YS, Park RM, Koh DH. Cancer admission and mortality in workers exposed to ionizing radiation in Korea. J Occup Environ Med. 2008;50(7):791–803.
15. Akiba S, Mizuno S. The third analysis of cancer mortality among Japanese nuclear workers, 1991-2002: estimation of excess relative risk per radiation dose. J Radiol Prot. 2012;32(1):73–83.
16. Boice JD Jr, Cohen SS, Mumma MT, et al. Updated mortality analysis of radiation workers at Rocketdyne (Atomics International), 1948-2008. Radiat Res. 2011;176(2):244–258.
17. Caldwell GG, Zack MM, Mumma MT, et al. Mortality among military participants at the 1957 PLUMBBOB nuclear weapons test series and from leukemia among participants at the SMOKY test. J Radiol Prot. 2016;36(3):474–489.
18. Jeong M, Jin YW, Yang KH, et al. Radiation exposure and cancer incidence in a cohort of nuclear power industry workers in the Republic of Korea, 1992-2005. Radiat Environ Biophys. 2010;49(1):47–55.
19. Kesminiene A, Evrard AS, Ivanov VK, et al. Risk of hematological malignancies among Chernobyl liquidators. Radiat Res. 2008;170(6):721–735.
20. Kitahara CM, Linet MS, Balter S, et al. Occupational radiation exposure and deaths from malignant intracranial neoplasms of the brain and CNS in U.S. radiologic technologists, 1983-2012. AJR Am J Roentgenol. 2017;208(6):1278–1284.
21. Lee T, Sigurdson AJ, Preston DL, et al. Occupational ionising radiation and risk of basal cell carcinoma in US radiologic technologists (1983-2005). Occup Environ Med. 2015;72(12):862–869.
- 22. Leuraud K, Fournier L, Samson E, et al. Mortality in the French cohort of nuclear workers. Radioprotection. 2017;52(3):199–210. [Google Scholar]
- 23. Leuraud K, Richardson DB, Cardis E, et al. Ionising radiation and risk of death from leukaemia and lymphoma in radiation-monitored workers (INWORKS): an international cohort study. Lancet Haematol. 2015;2(7):e276–e281. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 24. Merzenich H, Hammer GP, Troltzsch K, et al. Mortality risk in a historical cohort of nuclear power plant workers in Germany: results from a second follow-up. Radiat Environ Biophys. 2014;53(2):405–416. [DOI] [PubMed] [Google Scholar]
- 25. Muirhead CR, O'Hagan JA, Haylock RGE, et al. Mortality and cancer incidence following occupational radiation exposure: third analysis of the National Registry for Radiation Workers. Br J Cancer. 2009;100(1):206–212. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 26. Preston DL, Kitahara CM, Freedman DM, et al. Breast cancer risk and protracted low-to-moderate dose occupational radiation exposure in the US Radiologic Technologists Cohort, 1983-2008. Br J Cancer. 2016;115(9):1105–1112. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 27. Richardson DB, Cardis E, Daniels RD, et al. Risk of cancer from occupational exposure to ionising radiation: retrospective cohort study of workers in France, the United Kingdom, and the United States (INWORKS). BMJ. 2015;351:h5359. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 28. Schubauer-Berigan MK, Daniels RD, Bertke SJ, et al. Cancer mortality through 2005 among a pooled cohort of U.S. nuclear workers exposed to external ionizing radiation. Radiat Res. 2015;183(6):620–631. [DOI] [PubMed] [Google Scholar]
- 29. Zablotska LB, Bazyka D, Lubin JH, et al. Radiation and the risk of chronic lymphocytic and other leukemias among Chornobyl cleanup workers. Environ Health Perspect. 2013;121(1):59–65. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 30. Zablotska LB, Lane RS, Thompson PA.. A reanalysis of cancer mortality in Canadian nuclear workers (1956-1994) based on revised exposure and cohort data. Br J Cancer. 2014;110(1):214–223. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 31. Preston DL, Ron E, Tokuoka S, et al. Solid cancer incidence in atomic bomb survivors: 1958-1998. Radiat Res. 2007;168(1):1–64. [DOI] [PubMed] [Google Scholar]
- 32. Ozasa K, Shimizu Y, Suyama A, et al. Studies of the mortality of atomic bomb survivors, report 14, 1950-2003: an overview of cancer and noncancer diseases. Radiat Res. 2012;177(3):229–243. [DOI] [PubMed] [Google Scholar]
- 33. Hsu W-L, Preston DL, Soda M, et al. The incidence of leukemia, lymphoma and multiple myeloma among atomic bomb survivors: 1950-2001. Radiat Res. 2013;179(3):361–382. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 34. Little MP, Wakeford R, Lubin JH, et al. The statistical power of epidemiological studies analyzing the relationship between exposure to ionizing radiation and cancer, with special reference to childhood leukemia and natural background radiation. Radiat Res. 2010;174(3):387–402. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 35. Daniels RD, Kendall GM, Thierry-Chef I, et al. Strengths and weaknesses of dosimetry used in studies of low-dose radiation exposure and cancer. J Natl Cancer Inst Monogr. 2020;2020(56):114–132. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 36. Schubauer-Berigan MK, Berrington de Gonzalez A, Cardis E, et al. Evaluation of confounding and selection bias in epidemiologic studies of populations exposed to low-dose, high-energy photon radiation. J Natl Cancer Inst Monogr. 2020;2020(56):133–153. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 37. Linet MS, Schubauer-Berigan MK, Berrington de Gonzalez A.. Outcome assessment in epidemiologic studies of low-dose radiation exposure and cancer risks: sources, level of ascertainment, and misclassification. J Natl Cancer Inst Monogr. 2020;2020(56):154–175. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 38. Armstrong B, Brenner DJ, Baverstock K, et al. Radiation. Volume 100D. A Review of Human Carcinogens. Lyon, France: International Agency for Research on Cancer; 2012. [Google Scholar]
- 39. Committee to Assess Health Risks from Exposure to Low Levels of Ionizing Radiation, NRC. Health Risks from Exposure to Low Levels of Ionizing Radiation: BEIR VII - Phase 2. Washington, DC: National Academy Press; 2006. [PubMed] [Google Scholar]
- 40. International Commission on Radiological Protection. The 2007 recommendations of the International Commission on Radiological Protection. ICRP publication 103. Ann ICRP. 2007;37(2–4):1–332. [DOI] [PubMed] [Google Scholar]
- 41. United Nations Scientific Committee on the Effects of Atomic Radiation (UNSCEAR). UNSCEAR 2006 Report, Annex A: Epidemiological Studies of Radiation and Cancer. New York: United Nations; 2008:13–322.
- 42. Little MP, Wakeford R, Borrego D, et al. Leukaemia and myeloid malignancy among people exposed to low doses (<100 mSv) of ionising radiation during childhood: a pooled analysis of nine historical cohort studies. Lancet Haematol. 2018;5(8):e346–e358. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 43. Cullings HM, Grant EJ, Egbert SD, et al. DS02R1: improvements to atomic bomb survivors' input data and implementation of dosimetry system 2002 (DS02) and resulting changes in estimated doses. Health Phys. 2017;112(1):56–97. [DOI] [PubMed] [Google Scholar]
- 44. Cahoon EK, Preston DL, Pierce DA, et al. Lung, laryngeal and other respiratory cancer incidence among Japanese atomic bomb survivors: an updated analysis from 1958 through 2009. Radiat Res. 2017;187(5):538–548. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 45. Grant EJ, Brenner A, Sugiyama H, et al. Solid cancer incidence among the life span study of atomic bomb survivors: 1958-2009. Radiat Res. 2017;187(5):513–537. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 46. Brenner AV, Preston DL, Sakata R, et al. Incidence of breast cancer in the Life Span Study of atomic bomb survivors: 1958-2009. Radiat Res. 2018;190(4):433–444. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 47. Kuznetsova IS, Labutina EV, Hunter N.. Radiation risks of leukemia, lymphoma and multiple myeloma incidence in the Mayak cohort: 1948-2004. PLoS One. 2016;11(9):e0162710. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 48. Sokolnikov M, Preston D, Gilbert E, et al. Radiation effects on mortality from solid cancers other than lung, liver, and bone cancer in the Mayak worker cohort: 1948-2008. PLoS One. 2015;10(2):e0117784. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 49. Sokolnikov M, Preston D, Stram DO.. Mortality from solid cancers other than lung, liver, and bone in relation to external dose among plutonium and non-plutonium workers in the Mayak Worker Cohort. Radiat Environ Biophys. 2017;56(1):121–125. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 50. National Council on Radiation Protection and Measurements (NCRP). NCRP Report No. 170: Second Primary Cancers and Cardiovascular Disease After Radiation Therapy. Bethesda, MD: NCRP; 2012.
- 51. Gilbert ES, Curtis RE, Hauptmann M, et al. Stomach cancer following Hodgkin lymphoma, testicular cancer and cervical cancer: a pooled analysis of three international studies with a focus on radiation effects. Radiat Res. 2017;187(2):186–195. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 52. Hauptmann M, Borge Johannesen T, Gilbert ES, et al. Increased pancreatic cancer risk following radiotherapy for testicular cancer. Br J Cancer. 2016;115(7):901–908. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 53. Preston DL, Mattsson A, Holmberg E, et al. Radiation effects on breast cancer risk: a pooled analysis of eight cohorts. Radiat Res. 2002;158(2):220–235. [DOI] [PubMed] [Google Scholar]
- 54. Darby SC, Reeves G, Key T, et al. Mortality in a cohort of women given X-ray therapy for metropathia haemorrhagica. Int J Cancer. 1994;56(6):793–801. [DOI] [PubMed] [Google Scholar]
- 55. Little MP, Stovall M, Smith SA, et al. A reanalysis of curvature in the dose response for cancer and modifications by age at exposure following radiation therapy for benign disease. Int J Radiat Oncol Biol Phys. 2013;85(2):451–459. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 56. Weiss HA, Darby SC, Doll R.. Cancer mortality following X-ray treatment for ankylosing spondylitis. Int J Cancer. 1994;59(3):327–338. [DOI] [PubMed] [Google Scholar]
- 57. International Commission on Radiological Protection. ICRP statement on tissue reactions and early and late effects of radiation in normal tissues and organs—threshold doses for tissue reactions in a radiation protection context. ICRP publication 118. Ann ICRP. 2012;41(1–2):1–322. [DOI] [PubMed] [Google Scholar]
- 58. Edwards AA, Lloyd DC.. Risks from ionising radiation: deterministic effects. J Radiol Prot. 1998;18(3):175–183. [DOI] [PubMed] [Google Scholar]
- 59. Harris H. A long view of fashions in cancer research. Bioessays. 2005;27(8):833–838. [DOI] [PubMed] [Google Scholar]
- 60. United Nations Scientific Committee on the Effects of Atomic Radiation (UNSCEAR). Sources and Effects of Ionizing Radiation. UNSCEAR 1993 Report to the General Assembly, with Scientific Annexes. New York: United Nations; 1993:1–922.
- 61. Parsons BL. Multiclonal tumor origin: evidence and implications. Mutat Res. 2018;777:1–18. [DOI] [PubMed] [Google Scholar]
- 62. National Council on Radiation Protection and Measurements (NCRP). NCRP Report No. 136: Evaluation of the Linear-Nonthreshold Dose-Response Model for Ionizing Radiation. Bethesda, MD: NCRP; 2001.
- 63. Morgan WF. Non-targeted and delayed effects of exposure to ionizing radiation: I. Radiation-induced genomic instability and bystander effects in vitro. Radiat Res. 2003;159(5):567–580. [DOI] [PubMed] [Google Scholar]
- 64. Morgan WF. Non-targeted and delayed effects of exposure to ionizing radiation: II. Radiation-induced genomic instability and bystander effects in vivo, clastogenic factors and transgenerational effects. Radiat Res. 2003;159(5):581–596. [DOI] [PubMed] [Google Scholar]
- 65. Little MP, Wakeford R, Tawn EJ, et al. Risks associated with low doses and low dose rates of ionizing radiation: why linearity may be (almost) the best we can do. Radiology. 2009;251(1):6–12. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 66. Doss M, Little MP, Orton CG.. Point/counterpoint: low-dose radiation is beneficial, not harmful. Med Phys. 2014;41(7):070601. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 67. International Commission on Radiological Protection. Low-dose extrapolation of radiation-related cancer risk. Ann ICRP. 2006;35(4):1–140. [DOI] [PubMed] [Google Scholar]
- 68. Tubiana M, Feinendegen LE, Yang C, et al. The linear no-threshold relationship is inconsistent with radiation biologic and experimental data. Radiology. 2009;251(1):13–22. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 69. Little MP, Muirhead CR.. Curvature in the cancer mortality dose response in Japanese atomic bomb survivors: absence of evidence of threshold. Int J Radiat Biol. 1998;74(4):471–480. [DOI] [PubMed] [Google Scholar]
- 70. Pierce DA, Preston DL.. Radiation-related cancer risks at low doses among atomic bomb survivors. Radiat Res. 2000;154(2):178–186. [DOI] [PubMed] [Google Scholar]
- 71. Wakeford R, Little MP.. Risk coefficients for childhood cancer after intrauterine irradiation: a review. Int J Radiat Biol. 2003;79(5):293–309. [DOI] [PubMed] [Google Scholar]
- 72. Brenner DJ, Doll R, Goodhead DT, et al. Cancer risks attributable to low doses of ionizing radiation: assessing what we really know. Proc Natl Acad Sci USA. 2003;100(24):13761–13766. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 73. Wakeford R, Tawn EJ.. The meaning of low dose and low dose-rate. J Radiol Prot. 2010;30(1):1–3. [DOI] [PubMed] [Google Scholar]
- 74. Little MP. Are two mutations sufficient to cause cancer? Some generalizations of the two-mutation model of carcinogenesis of Moolgavkar, Venzon, and Knudson, and of the multistage model of Armitage and Doll. Biometrics. 1995;51(4):1278–1291. [PubMed] [Google Scholar]
- 75. Little MP, Hawkins MM, Charles MW, et al. Fitting the Armitage-Doll model to radiation-exposed cohorts and implications for population cancer risks. Radiat Res. 1992;132(2):207–221. [PubMed] [Google Scholar]
- 76. Cardis E, Vrijheid M, Blettner M, et al. Risk of cancer after low doses of ionising radiation: retrospective cohort study in 15 countries. Br Med J. 2005;331(7508):77–80. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 77. Daniels RD, Bertke SJ, Richardson DB, et al. Examining temporal effects on cancer risk in the international nuclear workers' study. Int J Cancer. 2017;140(6):1260–1269. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 78. Pearce MS, Salotti JA, Little MP, et al. Radiation exposure from CT scans in childhood and subsequent risk of leukaemia and brain tumours: a retrospective cohort study. Lancet. 2012;380(9840):499–505. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 79. Richardson DB, Cardis E, Daniels RD, et al. Site-specific solid cancer mortality after exposure to ionizing radiation: a cohort study of workers (INWORKS). Epidemiology. 2018;29(1):31–40. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 80. Pierce DA, Preston DL.. Joint analysis of site-specific cancer risks for the atomic bomb survivors. Radiat Res. 1993;134(2):134–142. [PubMed] [Google Scholar]
- 81. Richardson DB, Hamra GB, MacLehose RF, et al. Hierarchical regression for analyses of multiple outcomes. Am J Epidemiol. 2015;182(5):459–467. [DOI] [PubMed] [Google Scholar]
- 82. Schafer DW, Gilbert ES.. Some statistical implications of dose uncertainty in radiation dose-response analyses. Radiat Res. 2006;166(1):303–312. [DOI] [PubMed] [Google Scholar]
- 83. Gilbert ES. The impact of dosimetry uncertainties on dose-response analyses. Health Phys. 2009;97(5):487–492. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 84. Carroll RJ, Ruppert D, Stefanski LA, et al. Measurement Error in Nonlinear Models. A Modern Perspective. Boca Raton, FL: Chapman and Hall/CRC; 2006:1–488. [Google Scholar]
- 85. Thomas D, Stram D, Dwyer J.. Exposure measurement error: influence on exposure-disease relationships and methods of correction. Annu Rev Public Health. 1993;14(1):69–93. [DOI] [PubMed] [Google Scholar]
- 86. Bateson TF, Wright JM.. Regression calibration for classical exposure measurement error in environmental epidemiology studies using multiple local surrogate exposures. Am J Epidemiol. 2010;172(3):344–352. [DOI] [PubMed] [Google Scholar]
- 87. Stram DO, Preston DL, Sokolnikov M, et al. Shared dosimetry error in epidemiological dose-response analyses. PLoS One. 2015;10(3):e0119418. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 88. Zhang Z, Preston DL, Sokolnikov M, et al. Correction of confidence intervals in excess relative risk models using Monte Carlo dosimetry systems with shared errors. PLoS One. 2017;12(4):e0174641. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 89. Pierce DA, Kellerer AM.. Adjusting for covariate errors with nonparametric assessment of the true covariate distribution. Biometrika. 2004;91(4):863–876. [Google Scholar]
- 90. Pierce DA, Stram DO, Vaeth M.. Allowing for random errors in radiation dose estimates for the atomic bomb survivor data. Radiat Res. 1990;123(3):275–284. [PubMed] [Google Scholar]
- 91. Pierce DA, Stram DO, Vaeth M, et al. The errors-in-variables problem: considerations provided by radiation dose-response analyses of the A-bomb survivor data. J Am Statist Assoc. 1992;87(418):351–359. [Google Scholar]
- 92. Stram DO, Sposto R.. Recent uses of biological data for the evaluation of A-bomb radiation dosimetry. J Radiat Res. 1991;32(suppl 1):122–135. [DOI] [PubMed] [Google Scholar]
- 93. Little MP, Kwon D, Zablotska LB, et al. Impact of uncertainties in exposure assessment on thyroid cancer risk among persons in Belarus exposed as children or adolescents due to the Chernobyl accident. PLoS One. 2015;10(10):e0139826. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 94. Land CE, Kwon D, Hoffman FO, et al. Accounting for shared and unshared dosimetric uncertainties in the dose response for ultrasound-detected thyroid nodules after exposure to radioactive fallout. Radiat Res. 2015;183(2):159–173. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 95. Kopecky KJ, Davis S, Hamilton TE, et al. Estimation of thyroid radiation doses for the Hanford thyroid disease study: results and implications for statistical power of the epidemiological analyses. Health Phys. 2004;87(1):15–32. [DOI] [PubMed] [Google Scholar]
- 96. Napier BA. The Mayak Worker Dosimetry System (MWDS-2013): an introduction to the documentation. Radiat Prot Dosimetry. 2017;176(1–2):6–9. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 97. Degteva MO, Napier BA, Tolstykh EI, et al. Enhancements in the Techa River Dosimetry System: TRDS-2016D code for reconstruction of deterministic estimates of dose from environmental exposures. Health Phys. 2019;117(4):378–387. [DOI] [PubMed] [Google Scholar]
- 98. Kwon D, Hoffman FO, Moroz BE, et al. Bayesian dose-response analysis for epidemiological studies with complex uncertainty in dose estimation. Statist Med. 2016;35(3):399–423. [DOI] [PubMed] [Google Scholar]
- 99. Inskip H, Beral V, Fraser P, et al. Further assessment of the effects of occupational radiation exposure in the United Kingdom Atomic Energy Authority mortality study. Br J Ind Med. 1987;44(3):149–160. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 100. Muirhead CR, Goodill AA, Haylock RGE, et al. Second Analysis of the National Registry for Radiation Workers. Chilton, UK: National Radiological Protection Board; 1999. [DOI] [PubMed] [Google Scholar]
- 101. Stram DO, Kopecky KJ.. Power and uncertainty analysis of epidemiological studies of radiation-related disease risk in which dose estimates are based on a complex dosimetry system: some observations. Radiat Res. 2003;160(4):408–417. [DOI] [PubMed] [Google Scholar]
- 102. Hill AB. The environment and disease: association or causation? Proc R Soc Med. 1965;58:295–300. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 103. Rothman KJ, Greenland S.. Causation and causal inference in epidemiology. Am J Public Health. 2005;95(S1):S144–S150. [DOI] [PubMed] [Google Scholar]
- 104. Fedak KM, Bernal A, Capshaw ZA, Gross S.. Applying the Bradford Hill criteria in the 21st century: how data integration has changed causal inference in molecular epidemiology. Emerg Themes Epidemiol. 2015;12(1):14. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 105. Hauptmann M, Daniels RD, Cardis E, et al. Epidemiological studies of low-dose ionizing radiation and cancer: summary bias assessment and meta-analysis. J Natl Cancer Inst Monogr. 2020;2020(56):188–200. [DOI] [PMC free article] [PubMed] [Google Scholar]