Fifty years ago, when the Atomic Bomb Casualty Commission came into being, the health consequences of exposure to ionizing radiation were poorly understood. It was known that, under appropriate circumstances in experimental systems, ionizing radiation could be carcinogenic, mutagenic, and teratogenic. However, the applicability of this information to the human species was uncertain, and epidemiologic studies were few in number. Moreover, most of the studies that were available involved individuals exposed to radiotherapy, in whom the effects of radiation could be confounded with those of the illness being treated. Little was known about the dose–response relationships involved, the relative biological effectiveness of different qualities of radiation, or the factors that could modify risk. Undoubtedly, these uncertainties contributed to the decision of Colonel Ashley Oughterson and the Joint Commission for the Investigation of the Effects of the Atomic Bomb to recommend to Major General Norman Kirk, the Surgeon General of the Army, the need for long-term studies of the effects of exposure to atomic radiation in Hiroshima and Nagasaki. This recommendation led President Truman to direct the National Academy of Sciences–National Research Council to undertake the requisite studies. These uncertainties are echoed, too, in the vagueness of the planning documents that established the Atomic Bomb Casualty Commission, which noted that the areas of concern were “cancer, leukemia, shortened life span, reduced vigor, altered development, sterility, modified genetic pattern, changes in vision, ‘shifted epidemiology,’ abnormal pigmentation, and epilation” (1).
The organizational and logistic problems that confronted the newly created research institution were formidable (2). Japan was a devastated, occupied nation with few resources of its own. As a consequence, the Atomic Bomb Casualty Commission had to be largely self-sufficient. It needed to establish its own clinical facilities, recruit and train staff, and develop the means to meet other needs, such as transportation, essential to the fulfillment of its charge. Nonetheless, within a few years after the initiation of these studies, three radiation-related effects on health had been seen: an increase in the occurrence of “radiation cataracts,” an increase in the frequency of leukemia, particularly of the acute variety, and an increase in mental retardation among those survivors exposed prenatally. Each of these findings is a story in itself. Briefly, the findings arose as follows:
It was known in 1945, largely as an outgrowth of the use of x-irradiation in treating cancer of the brain, that if the eye is within the radiation beam and receives a sufficiently high dose, a characteristic change will occur in the lens that leads to a loss in its translucency. To determine whether similar changes had occurred among the survivors in Hiroshima and Nagasaki, an ophthalmologic survey was initiated in the late summer of 1949. Among the 1,000 survivors examined in the course of the survey—231 of whom were exposed within 1,000 m—10 cases of presumed radiation opacities were observed (3). All had been exposed between 550 and 950 m from the hypocenter and all had epilated, suggesting that their doses had been large, possibly 1 Gy or more. The frequency of radiation cataracts among survivors exposed within 1 km was estimated to be ≈2.5%, but it was thought that this figure might increase with time. A second, larger survey of 3,700 survivors in 1951–1953 revealed 154 individuals with posterior subcapsular plaques large enough to be visible with an ophthalmoscope (4).
Subsequent studies of the atomic bomb survivors, as well as those of persons whose opacities stem from the therapeutic use of irradiation, suggest that there is an exposure threshold—a dose below which these changes do not occur (5). This value is not known precisely, although clinical studies following x-irradiation suggest that it may be in the neighborhood of 2 Gy. The threshold value may, of course, differ for various qualities of irradiation. Based on the atomic bomb survivors, the threshold for γ-irradiation, measured in terms of the dose absorbed by the eye, appears to be ≈0.73 Gy whereas the threshold for neutron irradiation is much lower, under 0.10 Gy. These values, particularly the one for γ-irradiation, seem lower than estimates from the clinical studies, but if the neutron dose is weighted to account for its greater biological effectiveness, the estimated minimal dose of radiation is ≈1.5 Sv. It must be noted, however, that a threshold could be spurious, arising solely because of our inability to detect very small changes, and may not reflect an actual threshold in their occurrence.
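The ≈1.5 Sv figure is the usual weighted combination of the two absorbed-dose components. As a rough sketch, assuming, purely for illustration, a neutron relative biological effectiveness (weighting factor) of about 10 and a lens neutron dose of about 0.08 Gy (neither value is taken from the cataract analyses themselves), the weighted dose is

$$ H \approx D_{\gamma} + w_{n} D_{n} \approx 0.73\ \mathrm{Gy} + 10 \times 0.08\ \mathrm{Gy} \approx 1.5\ \mathrm{Sv}, $$

where \(D_{\gamma}\) and \(D_{n}\) are the γ and neutron doses absorbed by the lens and \(w_{n}\) is the neutron weighting factor.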
The first intimation that leukemia was elevated among the survivors arose through the perceptiveness of a young Japanese physician, Takuso Yamawaki. As early as 1949, he believed that he was seeing more cases of leukemia in his clinical practice than he expected, and he sought the advice of hematologists at the Atomic Bomb Casualty Commission, who confirmed his diagnoses. This finding, the first evidence of a possible increase in any cancer among the survivors, immediately prompted an effort to confirm and extend what apparently was being seen. The task was made difficult, however, by the absence of individual dose estimates, the lack of a systematic case-finding mechanism, and uncertainties about the size of the population at risk. Nevertheless, through clinical examinations, physician referrals, and death certificates, some 19 individuals were identified who had either died from or had the onset of leukemia in the years 1948–1950 among an estimated 98,265 survivors in Hiroshima based on the 1949 census of survivors of the city (and 10 cases among 96,962 survivors in Nagasaki) (6). When these cases and the survivors were distributed by distance from the hypocenter, a significant increase in the number of cases was seen within 2 km. And, by 1953, when 50 cases of leukemia had accumulated in Hiroshima, Moloney and Kastenbaum (7) were able to show that cases increased in frequency from 1 in 12,625 individuals exposed at 2,500 m or beyond to 1 in 80 if exposure occurred within 1 km. Thus, the risk within 1 km was more than 150-fold greater than that at a distance where the dose was presumably very low. When the cases and individuals at risk were divided into those with a history of epilation, oropharyngeal lesions, or purpura or some combination of these and those without such symptoms, the increase in incidence with declining distance from the hypocenter was more striking among those individuals with complaints associated with acute radiation sickness. Statistically, the data were consistent with a straight-line relationship between the logarithm of the distance of the survivors from the hypocenter and the logarithm of the incidence of leukemia, a fact suggested but not established in the earlier study.
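The 150-fold figure follows directly from the two incidence figures, and the distance relationship amounts to a power law; as a sketch,

$$ \frac{1/80}{1/12{,}625} = \frac{12{,}625}{80} \approx 158, \qquad \log(\text{incidence}) = a + b \,\log(\text{distance from hypocenter}), $$

the second expression being the straight-line relationship on the log–log scale referred to above (the coefficients a and b are not estimated here).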
As the number of cases continued to grow, it was possible to examine the relationship of the type of leukemia to exposure and the gender and age distribution of the affected individuals as well as their distance from the hypocenter. Acute forms were the most common, followed by the chronic myelocytic types, but a paucity of cases of chronic lymphocytic leukemia also was noted. Although age was recorded in terms of the apparent onset of symptoms, rather than age at exposure, the risk of leukemia was clearly greater at younger ages. Finally, because some of these affected individuals had been seen repeatedly in the hematological surveys that had occurred immediately before or shortly after the establishment of the Atomic Bomb Casualty Commission, it was possible to gain some insight into the length of time intervening between exposure and the onset of leukemia. It appeared to lie between 2 and 8 years.
Soon after the introduction of radium therapy, case reports began to appear that suggested that, when this therapy was used on pregnant women (generally to treat a malignancy of the cervix of the uterus), the developing fetus often was seriously affected. Finally, in 1929, two obstetricians, Leopold Goldstein and Douglas Murphy, attempted a more systematic study (8). Questionnaires were sent to some 1,700 gynecologists and radiologists in the United States seeking to identify women who had received radiation therapy during pregnancy. One hundred and six women were identified, and of these, 74 were delivered of full-term children. Thirty-eight of these children had more or less serious disturbances of health or development, and 16 of the 38 were described by these investigators as “microcephalic idiotic children.” Fifteen of these were born to women who had received either radium or x-ray treatments early in pregnancy. This was a number far greater than would have been expected if no causality were involved.
These observations stimulated an investigation of the children exposed in utero to atomic radiation. This effort led to the establishment of at least three overlapping samples of individuals exposed prenatally to the atomic bombings of Hiroshima and Nagasaki. These samples are the culmination of a series of actions that occurred between 1950 and 1960. In brief, four avenues of ascertaining information about prenatally exposed survivors were used: the birth records required by Japanese law and maintained by the appropriate city offices in Hiroshima and Nagasaki; the supplementary schedules of the Japanese National Censuses of 1950 and 1960, which sought to identify all survivors then alive; the 1950 Sample Census conducted by the Commission; and finally, the fortuitous recognition of a prenatally exposed individual through the Commission’s Master File of survivors or some other chance encounter. The nature and time of availability of these sources of data defined and limited their utility. For example, individuals ascertained only by means of the 1960 census can contribute to mortality data for the years subsequent to 1960 but have never been a part of either the initial clinical sample or its subsequent revision.
Cognitive function was measured in a number of different ways, varying with the ages of the children at the time of each biennial examination. While they were still young, assessment of the children’s intelligence rested largely on the examining pediatrician’s clinical impression of their intellectual status relative to their peers, their ability to count and perform simple arithmetic problems, and their capacity to form coherent sentences. Once they were older, however, structured intelligence tests were used. These included two Japanese intelligence tests, the Tanaka B and the Koga, the Bender–Gestalt, and the Goodenough “Draw-a-Man” test.
Analysis of these data has shown that the developing human brain is especially sensitive to radiation-related damage in the period 8 to 15 weeks after ovulation, when the cerebral cortex is being established. This damage is manifested as an increased occurrence of severe mental retardation (9) as well as a diminution in the intelligence quotient (10) and school performance (11) among those prenatally exposed survivors not deemed to be mentally retarded clinically. Studies of seizures, especially those without known precipitating cause, also exhibit a radiation effect on those exposed 8–15 weeks after ovulation (12). Radiation can induce small head size as well as mental retardation, but the biologic events that underlie these abnormalities are still unclear. However, magnetic resonance imaging of the brains of some of the mentally retarded survivors has revealed a large region of abnormally situated gray matter, suggesting an abnormality in neuronal migration, although some of the loss in cognitive function may be ascribable to cell killing.
Initially, the studies described briefly above focused on the demonstration of radiation-related effects rather than on dose–response relationships, because even crude estimates of individual doses were not available. For demonstrating effects, opportunistic samples could suffice, but they could not long serve as the basis for a critical assessment of risk. Accordingly, in 1955, the Francis Committee, named after Thomas Francis, an epidemiologist and virologist, urged a reorientation of the studies. It argued that the individual studies had to serve the whole, which could only be done by unifying them through a focus on a common set of survivors. The new research strategy the committee proposed, termed the “Unified Study Program,” included a mortality surveillance, known as the “Life Span Study,” a clinical study to assess health and morbidity, termed the “Adult Health Study,” and a program of autopsies. The first two center on fixed samples of survivors and suitably age-, gender-, and city-matched comparison persons, whereas the last entails the pathological study of as many deceased individuals from the mortality surveillance as possible. This reorientation began in 1957, and the focus on fixed samples, or cohorts, has served and continues to serve the studies well.
With the establishment of the Unified Study Program, the development of a leukemia registry, and the evolution of means to assign individual doses of γ and neutron radiation, understanding of the leukemogenic effect of atomic bomb exposure improved materially. Today, the incidence of leukemia is known to be related to dose—the higher the dose the greater the risk—but this increase is not simply proportional to the dose an individual receives. Risk rises slowly to ≈0.5 Gy and then accelerates.
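A dose–response of this shape, nearly flat at low doses and then accelerating, is conventionally described in radiation epidemiology by a linear-quadratic model; the form below is a generic sketch, not the specific model fitted to the Life Span Study leukemia data:

$$ \mathrm{ERR}(D) = \alpha D + \beta D^{2}, $$

where D is the dose and ERR the excess relative risk; the quadratic term comes to dominate once D exceeds the crossover dose α/β, producing the acceleration described above.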
The frequency of new cases of leukemia among the survivors reached a peak about 1952 and has declined steadily since. It had not, however, completely disappeared as recently as 1985, suggesting a period of risk after exposure of at least 40 years rather than the 25 years that has generally been accepted. Moreover, when incidence by dose was examined in relation to age at the time of the bombing and the calendar time of disease onset, it appeared that the higher the dose, the greater the radiation effect in the early period, before October 1955, and the more rapid the decline in risk in subsequent years. The leukemogenic effect occurred later among individuals who were older at the time of the bombing.
As the number of cases of leukemia continued to grow, it was possible to confirm and extend the earlier findings on the relationship of the different types of this disorder to dose. The radiation-related risk of acute lymphocytic leukemia as well as “the other types” of acute leukemia, such as acute myelogenous leukemia, seemed higher among survivors exposed at younger ages, whereas the frequency of chronic granulocytic leukemia was greater among individuals who were middle-aged or older when exposed. But, it was not clear whether the different types had different dose–response relationships, and as earlier investigators noted, at least one form of leukemia, chronic lymphocytic, did not appear to be radiation-related in the Japanese. However, few cases actually have been seen, and no truly reliable statement of the risk of this form of leukemia among the survivors is possible, although it should be noted that other groups of individuals exposed to above-background levels of ionizing radiation, such as radiation workers, also have not shown a radiation-related increase in chronic lymphocytic leukemia.
Recently, it has been possible to reclassify most of the cases of leukemia occurring among members of the Life Span Study by using the French–American–British system and to reanalyze the accumulated information. This reanalysis has clarified some previously puzzling aspects of the data but also has raised some new questions regarding radiation-related leukemogenesis. For example, it has been recognized for some time that cases of chronic lymphocytic leukemia have occurred only in Nagasaki. This was puzzling. Reclassification, however, reveals that most of these cases are, in fact, instances of adult T-cell leukemia, and it has been demonstrated that infection with the human T-lymphotropic virus type 1, which is associated with this form of leukemia, is common in areas of Kyushu, the westernmost of Japan’s major islands, including Nagasaki, but is relatively rare in the western part of Honshu where Hiroshima is located.
An even more recent analysis, based on 231 cases of leukemia occurring between 1950 and 1987 among survivors receiving doses of 4 Gy or less, suggests that the effects of irradiation differ depending on the type of leukemia involved. Preston and his colleagues (13) find the effect of exposure, as measured by the excess absolute risk, to be somewhat more pronounced on the occurrence of acute myelogenous leukemia (1.1 cases per 10⁴ person-year-Sv) and chronic myelogenous leukemia (0.9 cases) than on acute lymphocytic leukemia (0.6 cases), whereas the excess relative risk is greater for acute lymphocytic (9.1) and chronic myelogenous leukemia (6.2) than for acute myelogenous leukemia (3.3). This difference in the absolute and relative risks is not unexpected because acute lymphocytic leukemia is less common than the other two subtypes, and a smaller absolute risk can give rise to a higher relative risk under these circumstances. Moreover, they find that only acute myelogenous leukemia exhibits a distinctly nonlinear dose–response function; there was no evidence of nonlinearity for the other subtypes. If this apparent difference in the dose–response function is real, it would have interesting implications for radiation-related leukemogenesis, but it must be kept in mind that the number of cases of acute lymphocytic leukemia or chronic myelogenous leukemia is small relative to the number of cases of acute myelogenous leukemia and that the capacity to discriminate among different dose–response functions is correspondingly poorer. These analyses further confirm earlier observations that survivors exposed before the age of 20 are more likely to develop acute leukemia (notably acute lymphocytic leukemia) than older survivors, but the latter are more prone to develop chronic myelogenous leukemia.
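The seeming discrepancy between the two scales follows from the standard definitions of the measures (sketched here in simplified form; the models actually fitted by Preston et al. are more elaborate):

$$ \mathrm{EAR}(D) = \frac{\lambda_{E}(D) - \lambda_{0}}{D} \quad (\text{cases per } 10^{4}\ \mathrm{PY\,Sv}), \qquad \mathrm{ERR}(D) = \frac{\lambda_{E}(D) - \lambda_{0}}{\lambda_{0}\,D}, $$

where \(\lambda_{E}(D)\) is the leukemia rate (per 10⁴ person-years) at dose D and \(\lambda_{0}\) the background rate. Because the excess relative risk divides the same excess by the background rate \(\lambda_{0}\), a subtype with a small background rate, such as acute lymphocytic leukemia, can show a large relative risk even when its absolute excess is modest.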
Toward the end of the 1950s, largely from the clinical studies of members of the Adult Health Study, evidence began to emerge of an increase in thyroid cancer (14, 15). And soon thereafter it was apparent that cancers of the breast, lung, and stomach also were elevated among the survivors. Subsequent years have not only confirmed these early findings but have extended the list of radiation-related malignancies to include cancers of the esophagus, colon, liver, ovary, skin, salivary glands, and urinary bladder (16, 17). Moreover, once individual dose estimates became available, more precise estimates of risk could be made. These estimates revealed risk to be a function of age at exposure, the young (those under 20 at the time of the bombings) having the highest risk, although the difference in risk between males and females was small. Risk of solid tumors collectively was linear with dose; that is, risk increased in direct proportion to the dose (see Table 1).
Table 1. Solid cancer deaths among atomic bomb survivors, by dose
Dose, Sv | Subjects | Observed deaths | Excess deaths | Attributable fraction, % |
---|---|---|---|---|
0 (<0.005) | 36,459 | 3,013 | 0 | 0 |
0.005–0.1 | 32,849 | 2,795 | 34 | 1 |
0.1–0.2 | 5,467 | 504 | 29 | 6 |
0.2–0.5 | 6,308 | 632 | 75 | 12 |
0.5–1.0 | 3,202 | 336 | 78 | 23 |
1.0–2.0 | 1,608 | 215 | 70 | 33 |
>2.0 | 679 | 83 | 49 | 59 |
Total | 86,572 | 7,578 | 334 | |
Adapted from Preston et al. (16).
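The attributable fractions in the last column are simply the excess deaths expressed as a proportion of the observed deaths in each dose category (a check recomputed from the tabulated values, not an independent estimate); for example, for the >2 Sv group,

$$ \mathrm{AF} = \frac{\text{excess deaths}}{\text{observed deaths}} = \frac{49}{83} \approx 0.59 = 59\%. $$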
Over these findings, however, hung one nagging uncertainty: how reliable were the diagnoses on death certificates? Comparison of the death certificate statements with findings at autopsy revealed confirmation rates to be generally high, particularly for cancer, but detection rates were often low (18). Fortunately, in 1957 and 1958, a tumor registry had been established in each of the two cities. These registries could and do provide information on the incidence of cancer, and these incidence figures can be compared with the mortality findings. Such comparison reveals a close correspondence between the risk estimated from the mortality surveillance and that estimated from the registry data (19, 20).
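In this context (stated loosely, as the usual definitions in death certificate–autopsy comparisons; the precise definitions used in ref. 18 may differ in detail), the two rates are

$$ \text{confirmation rate} \approx P(\text{cancer found at autopsy} \mid \text{cancer stated on certificate}), \qquad \text{detection rate} \approx P(\text{cancer stated on certificate} \mid \text{cancer found at autopsy}). $$

A high confirmation rate coupled with a low detection rate therefore implies that cancers recorded on death certificates are usually genuine but that a substantial fraction of cancers found at autopsy never reach the certificate, which is precisely why incidence data from the tumor registries provide a valuable check on the mortality-based risk estimates.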
Until relatively recently, it generally has been thought that all of the life-shortening seen among the survivors was attributable to the increased frequency of cancer. However, some 20 years ago, evidence began to emerge, albeit weak, that noncancer mortality also might be increased (21). Initially, it was tempting to believe that the apparent increase was spurious, ascribable to errors in the causes of death recorded on death certificates. Although we are still uncertain whether the effect is real or attributable to some as-yet-unrecognized bias in the data, the evidence that it may be real continues to grow. First, careful study of the errors inherent in death certificates fails to account for the increase that has been seen (22). Second, data emerging from the biennial clinical examinations of the survivors are beginning to mirror the findings seen in the mortality surveillance.
In retrospect, these findings make more interesting an earlier study of the incidence of stroke and coronary heart disease in the years 1958 through 1974 among the survivors participating in the Adult Health Study. This investigation reported the incidence of these two circulatory diseases to be significantly higher than anticipated among women in Hiroshima who were heavily exposed—who received a T65 dose of 2 Gy or more. Because an exposure effect was not seen among women in Nagasaki or among men in either city, and because there was evidence of a higher autopsy rate among heavily exposed women in Hiroshima, which could have led to a higher rate of recognition of stroke or heart disease, there was a reluctance to accept this seeming association as real. Nevertheless, the effect could not be explained by an inadvertent confounding of such known risk factors as smoking, an elevated level of serum cholesterol, or hypertension (high blood pressure) that contribute to the occurrence of cardiovascular disease.

This earlier study has been extended to include the years from 1974 through 1985, using the newer dose estimates. The results confirm those found earlier among heavily exposed women in Hiroshima, but now there is also a statistically significant increase in the incidence of “heart disease” among heavily exposed men in Nagasaki. Within the other two gender–city groups (Hiroshima men and Nagasaki women), the association of exposure with risk of stroke or coronary artery disease remains equivocal and is not statistically significant.

Although this recent study makes an association more plausible, it does not remove all of the uncertainties. There has been a statistically demonstrable temporal decline in the frequency of cerebrovascular disease in postwar Japan, commonly attributed to dietary changes. This has precipitated a concern that the same westernization of the diet (which has presumably contributed to the diminution in cerebrovascular disease) might increase the frequency of cardiovascular disease, although there is little direct evidence to support this apprehension. As judged by participants in the Adult Health Study, serum cholesterol levels have been rising steadily with time in Japan (some 25 mg percent, on average, in the past 30 years), and elevated levels of serum cholesterol have been associated with a higher frequency of cardiovascular disease, but there has been no consistent upward trend in the occurrence of myocardial infarction. However, the possibility of a temporal trend makes it more difficult to demonstrate a true radiation effect because the trend itself, if one exists, is so poorly understood.

There are other observations that suggest some biological rather than chance basis for these findings. First, x-ray examinations of the Adult Health Study participants have revealed that the frequency of calcification of the aortic arch and the abdominal aorta increases with dose, and second, ophthalmic studies have shown retinal arteriosclerosis also to increase in frequency with dose. As yet, however, it has not been possible to integrate these findings into a coherent biological explanation of the apparent increase with dose of deaths ascribable to causes other than cancer, and it may be some time before this is possible.
As the studies in Japan have proceeded, each new finding (see Table 2) has raised new questions that demand resolution. Some effects seen among the survivors have not been observed in other exposed populations, and effects have been reported in these other groups that have not been seen in the atomic bomb survivors. The origin of these differences must be resolved if our understanding of the biological effects of ionizing radiation is to be complete. Still other unsolved problems involve a better characterization of the contribution of host and environmental factors to the occurrence of radiation-related malignancy. A variety of studies from many areas of the world indicates that some cancers aggregate in families, suggesting that genetic or familial factors play a part in their etiology. But it is not known whether those individuals in Hiroshima and Nagasaki who have developed malignancies, presumably related to their exposure to ionizing radiation, come from families that, even in the absence of such exposure, are cancer-prone. If this should be so, how could this information be used to identify those persons at greater risk? And how much greater is that risk? Similarly, it is important to determine more reliably the trend in risk with attained age among the younger survivors, who generally have exhibited the higher relative risks thus far.
Table 2. Summary of radiation-related findings among the survivors and their children

Significant radiation-related increase
- Malignant tumors: leukemia, cancers of the breast (female), colon, liver, lung, ovary, skin (nonmelanoma), stomach, and thyroid
- Lenticular opacities
- Small head size, mental retardation, diminished IQ and school performance, and increased frequency of seizures (among the prenatally exposed)
- Retarded growth and development (among survivors exposed at a young age or prenatally)
- Chromosome abnormalities in lymphocytes
- Somatic mutations in erythrocytes and lymphocytes

Suggestive radiation-related increase
- Malignant tumors: cancers of the esophagus and urinary bladder, malignant lymphoma, salivary gland tumors, and, possibly, multiple myeloma
- Adult-type malignancies among the prenatally exposed
- Impairment of neuromuscular development among the survivors exposed in utero
- Parathyroid disease
- Mortality from diseases other than malignant tumors, specifically cardiovascular disease and liver cirrhosis, at higher doses
- Specific (humoral or cell-mediated) changes in immunologic competence

No radiation-related increase seen to date
- Malignant tumors: chronic lymphocytic leukemia, osteosarcoma
- Acceleration of aging
- Sterility or infertility among the prenatally or postnatally exposed
- F1 (children of survivors): congenital abnormalities, mortality (including childhood cancer), chromosome aberrations, and mutations in biochemically identifiable genes
Improving the estimates of cancer risk, as well as of other radiation-related damage, necessarily will remain a central activity of the Radiation Effects Research Foundation through the years immediately ahead, but developing better estimates without an understanding of the underlying molecular and cellular processes involved would be an empty victory. This is not to denigrate the importance of risk analysis; it can be helpful in more ways than just through the quantitative expression of risk. It can identify issues that should be of concern to experimentalists and that are addressed more readily in experimental systems than epidemiologically. This argues for a more dynamic interaction than commonly occurs between epidemiologists and statisticians, on the one hand, and experimental biologists, on the other. Nevertheless, in the final analysis, intelligent intervention and the amelioration of risk must be based on biological understanding.
Developments in biology have lifted a corner of the curtain that has obscured this understanding. To further these advances, and in particular their application to radiation-related damage, it is not only important that current tissue repositories, which focus primarily on malignant tumors, be supported but that means be found to collect and store tissues on a wider sampling of exposed individuals, most of whom will not die from a malignancy. These tissues and cells can serve as the bases for future molecular and cellular studies as newer techniques become available. The Adult Health Study has been, and will undoubtedly continue to be, the primary source of much of this biological material, which argues not only for the continuation of these examinations but means, in turn, that the Foundation must maintain an active laboratory program, one with a staff and facilities capable of using the newer techniques as they evolve.
There are still other ways that the studies in Hiroshima and Nagasaki can contribute to the betterment of human health and the quality of life. Over the 30-odd years that the Adult Health Study has continued, an enormous body of data has accumulated pertinent to the process of aging among the Japanese and presumably other ethnic groups. Analysis of these data holds promise of insights into childhood precursors of subsequent cardiovascular disease, for example, or into events premonitory of the occurrence of senile dementia later in life. It can be presumed, therefore, that, in the years ahead, the data of the Foundation will see important uses in the study of variation among individuals in the process of aging.
Future studies of the prenatally exposed undoubtedly will focus on the relationship of exposure to ionizing radiation and aging, including that of the central nervous system. The prenatally exposed survivors are unusual in many respects, not the least of which is the fact that they are the only group of survivors whose life experience subsequent to exposure can be followed from birth to death and, as such, can provide unique insights into the effect of exposure on aging. Obviously, such studies must have direction, and recent experimental investigations could provide this.
Numerous events are involved in the process that brings forth a functional brain, any one of which is potentially susceptible to radiation damage and could lead to a different result. Patently, there is a need to confirm and extend the findings on cerebral cortical impairment after prenatal exposure. To do so, however, will entail more neurologically focused clinical examinations than have occurred in the past, including the various techniques now available to image the brains of the still-living prenatally exposed atomic bomb survivors. These studies could have value well beyond the immediate assessment of the risk of prenatal exposure to irradiation and could contribute to a deeper understanding of human embryonic and fetal development, to a clearer appreciation of the diversity among individuals in the age at achievement of specific embryonic or fetal landmarks, and to a sharper definition of the developmental ages most vulnerable to exposure to chemical or physical teratogens.
As yet, among the prenatally exposed survivors, there have been no studies directed toward the effect of irradiation on specific cortical functions. Nevertheless, many of these functions can be investigated with a surprising degree of precision, and the time at which cortical neurogenesis is initiated in the relevant areas, and its duration, often are known reasonably well. Particularly appealing are the various aspects of visual function. Some 30% of the cortex appears to be involved in the processing of visual stimuli, and the mechanisms through which this processing occurs are better understood than those of any other cortical area.
Members of the prenatally exposed clinical sample are still examined biennially at the Radiation Effects Research Foundation. This examination emphasizes general health, but a search should also be made for evidence of central nervous system damage. The neurological examination itself does not now include tests of motor control and development, but it should. Some cognitive tests, such as tests of word association, learning ability, memory, and intelligence, should also be included. But there is the opportunity to do more. In light of experimental findings in other primates, careful studies of auditory and visual acuity, olfaction, and taste should be contemplated. Evidence of an earlier onset of the loss in hearing or vision that normally accompanies aging should also be sought, because a smaller initial number of neuronal cells could lead to an earlier manifestation of an aging central nervous system.
Cancer among those exposed in utero is another area that has yet to be fully exploited. Although this group of individuals has not exhibited a higher likelihood of developing childhood malignancies, possibly because of the small number of individuals involved, evidence has accumulated and continues to accumulate that their risk of adult-onset cancers is elevated (23, 24). The number of cases seen thus far remains small, however, and site-specific analyses are still problematic, but the years immediately ahead should bring a substantial increase in the data available for analysis because these individuals have now entered those ages in life when the incidence of cancer increases dramatically. Experience with the postnatally exposed survivors has shown that, for solid tumors, the increased risk does not manifest itself until those survivors reach the cancer-prone ages. Finally, ambiguities still exist in the doses assigned to specific survivors. These ambiguities obviously need to be resolved if the full value of the studies in Japan is to be realized.
Clearly, the fabric of effects of exposure to the bombing of these cities is not fully woven. Some heretofore unsuspected consequences surely will emerge as the studies continue, and others will be better defined. Over half of the survivors are still alive (see Table 3), and what their future holds can only be judged in terms of what the past has revealed. It is obviously of the utmost importance that the studies continue, because only thereby will answers be found to such issues as the effect of age at exposure on subsequent risk and the duration of expression of that risk.
Table 3. Number of survivors alive or expected to be alive, 1995–2020, by age at exposure

 | 1995 | 2000 | 2005 | 2010 | 2015 | 2020 |
---|---|---|---|---|---|---|
Age at exposure 0–9 years | 16,450 | 15,990 | 15,290 | 14,280 | 12,710 | 10,390 |
Age at exposure 10–19 years | 14,500 | 13,540 | 12,040 | 9,800 | 6,780 | 3,620 |
Age at exposure ≥20 years | 12,800 | 8,910 | 5,430 | 2,710 | 970 | 100 |
Total | 43,750 | 38,440 | 32,760 | 26,790 | 20,460 | 14,110 |
Average attained age, years | 64.7 | 67.9 | 71.3 | 74.7 | 78.0 | 81.3 |
Average age at the time of the bombings, years | 14.7 | 12.9 | 11.3 | 9.7 | 8.0 | 6.3 |
References
- 1. Beebe G W. Epidemiol Rev. 1979;1:184–210. doi: 10.1093/oxfordjournals.epirev.a036210.
- 2. Schull W J. The Effects of Atomic Radiation: A Half Century of Studies from Hiroshima and Nagasaki. New York: Wiley; 1995.
- 3. Cogan D G, Martin S F, Kimura S J, Ikui H. Trans Am Ophthalmol Soc. 1950;48:62–87.
- 4. Sinskey R M. Am J Ophthalmol. 1955;39:285–293. doi: 10.1016/0002-9394(55)91270-8.
- 5. Otake M, Neriishi K, Schull W J. Radiat Res. 1996;146:339–348.
- 6. Folley J H, Borges W, Yamawaki T. Am J Med. 1952;13:11–21. doi: 10.1016/0002-9343(52)90285-4.
- 7. Moloney W C, Kastenbaum M A. Science. 1954;121:308–309. doi: 10.1126/science.121.3139.308.
- 8. Goldstein L, Murphy D P. Am J Roentgenol. 1929;22:322–331.
- 9. Otake M, Yoshimaru H, Schull W J. Radiation Effects Research Foundation TR 16–87; 1987.
- 10. Schull W J, Otake M. Radiation Effects Research Foundation TR 7–86; 1986.
- 11. Otake M, Schull W J, Fujikoshi Y, Yoshimaru H. Radiation Effects Research Foundation TR 2–88; 1988.
- 12. Dunn K, Yoshimaru H, Otake M, Schull W J. Am J Epidemiol. 1990;131:114–123. doi: 10.1093/oxfordjournals.aje.a115464.
- 13. Preston D L, Kusumi S, Tomonaga M, Izumi S, Ron E, Kuramoto A, Kamada N, Dohy H, Matsuo T, Nonaka H, et al. Radiat Res. 1994;137:S68–S97.
- 14. Hollingsworth D R, Hamilton H B, Tamagaki H, Beebe G W. Atomic Bomb Casualty Commission TR 4–62; 1962.
- 15. Nagataki S, Shibata Y, Inoue S, Yokoyama N, Izumi M, Shimaoka K. J Am Med Assoc. 1994;272:364–370.
- 16. Preston D L, Mabuchi K, Pierce D A, Shimizu Y. In: Implications of New Data on Radiation Cancer Risk. Boice J D Jr, editor. Bethesda, MD: National Council on Radiation Protection and Measurements; 1997. pp. 31–40.
- 17. Pierce D A, Shimizu Y, Preston D L, Vaeth M, Mabuchi K. Radiat Res. 1996;146:1–27.
- 18. Ron E, Carter R L, Jablon S, Mabuchi K. Radiation Effects Research Foundation CR 6–92; 1992.
- 19. Mabuchi K, Ron E, Preston D L. In: Implications of New Data on Radiation Cancer Risk. Boice J D Jr, editor. Bethesda, MD: National Council on Radiation Protection and Measurements; 1997. pp. 41–50.
- 20. Thompson D E, Mabuchi K, Ron E, Soda M, Tokunaga M, Ochikubo S, Sugimoto S, Ikeda T, Terasaki M, Izumi S, et al. Radiat Res. 1994;137:S17–S67.
- 21. Shimizu Y, Kato H, Schull W J, Hoel D G. Radiat Res. 1992;130:249–266.
- 22. Sposto R, Preston D L, Shimizu Y, Mabuchi K. Biometrics. 1992;48:605–617.
- 23. Yoshimoto Y, Kato H, Schull W J. Lancet. 1988;2:665–669. doi: 10.1016/s0140-6736(88)90477-1.
- 24. Delongchamp R R, Mabuchi K, Yoshimoto Y. Radiat Res. 1997;147:385–395.