Abstract
Recognition is growing that terrorism or large-scale accidents could expose hundreds of thousands of people to radiation and that the present guidelines for evaluation after such an event are seriously deficient. Therefore, there is a great and urgent need for after-the-fact biodosimetric methods to estimate radiation dose. To accomplish this goal, the dose estimates must be at the individual level, timely, accurate, and plausibly obtained in large-scale disasters. This paper evaluates current biodosimetry methods, focusing on their strengths and weaknesses in estimating human radiation exposure in large-scale disasters at three stages. First, the authors evaluate biodosimetry’s ability to determine which individuals did not receive a significant exposure so they can be removed from the acute response system. Second, biodosimetry’s capacity to classify those initially assessed as needing further evaluation into treatment-level categories is assessed. Third, biodosimetry’s ability to guide treatment, both short- and long-term, is reviewed. The authors compare biodosimetric methods that are based on physical vs. biological parameters and evaluate the features of current dosimeters (capacity, speed and ease of getting information, and accuracy) to determine which are most useful in meeting patients’ needs at each of the different stages. Results indicate that the biodosimetry methods differ in their applicability to the three stages and that combining physical and biological techniques may sometimes be most effective. In conclusion, biodosimetry techniques have different properties, and knowledge of how those properties meet the needs of each stage will allow their most effective use in a nuclear disaster mass-casualty event.
Keywords: radiation damage, radiation dose, radiation effects, radiation, ionizing
INTRODUCTION
The use of biodosimetry to measure radiation dose after-the-fact has become a very important and high-priority field due to the need for governments to be prepared for the heightened potential for exposures of large numbers of individuals from acts of terrorism or accidents (Grace et al. 2010). Biodosimetry would play a pivotal role in nuclear events, because estimating radiation dose would greatly aid the medical evaluation of the injured in four ways. Biodosimetry would help estimate how many people received doses that do not require acute care, classify those patients who need further evaluation into treatment-level categories, guide actual treatment, and help providers and patients with the long-term consequences of exposures to ionizing radiation, including planning for treatment and patient compensation.
While there are other circumstances in which biodosimetry can be used, this article focuses on its use in events involving large numbers of individuals who are potentially exposed to unknown and variable doses of radiation, for purposes of accurately addressing their immediate and acute needs. The authors focus on large numbers because with only small numbers of subjects, the available health care system would be able to assess and treat them all. The logistic challenges posed by such large events include the need to act rapidly and effectively to assess large numbers of individuals under very chaotic conditions that involve disruption of the usual social and medical support systems. Indeed, as Gougelet et al. argue, not only is the local medical system unlikely to be equipped to deal with such a surge of people with these needs, there is a high probability that the facilities and staff might themselves be compromised by the event (Gougelet et al. 2010). Furthermore, housing, communication networks, transportation, and social supports are all likely to be uncertain and possibly compromised. These conditions would make it difficult to implement biodosimetric methods that require highly specialized expertise and/or collection and transport of samples to a distant site, and would complicate the concomitant requirement to relocate the individual in order to provide feedback and act upon the results.
While the need for biodosimetry is most pertinent to identify efficiently and urgently exposure levels that may lead to acute radiation syndrome (ARS), biodosimetry may also be important to identify doses lower than what would be expected to produce ARS for two potentially important purposes: 1) early decisions on whether to administer some types of mitigating agents, and 2) in the presence of combined injury, because then relatively lower levels of radiation (e.g., 1 Gy) can lead to very significant changes in outcomes (Messerschmidt 1989; Pellmar and Ledney 2005; DiCarlo et al. 2008; Prasanna et al. 2010).
While the focus of this paper is on evaluating biodosimetry’s ability to facilitate treatment decisions involving acute care in such catastrophic events, after-the-fact biodosimetry also has the potential to improve clinically-relevant scientific understanding about the long-term effects of non-lethal exposures to ionizing radiation in large populations (Simon et al. 2010). In addition, biodosimetry’s ability to estimate dose long after the exposure may be important to guide policies about long-term treatment for individuals and establish evidence to be used to compensate victims.
EFFECTIVE DOSIMETRY MEASUREMENT IN LARGE POPULATIONS: TERMS AND CONCEPTS
The overall operational definition of biodosimetry for this paper is the determination of the dose of ionizing radiation from biological processes or structures; unlike classical physical dosimetry, this does not require any preplacement of the material that is used for the dosimetry. Measurements can estimate the dose an individual may have received based either on the extent and type of biological responses or the physical changes in biological structures that occur in proportion to the exposure. Three criteria are minimally necessary for an effective biodosimetric technique: 1) The dose can be assessed after-the-fact; 2) the technique can assay at the level of an individual; and 3) the technique can provide information sufficient to determine what actions should be taken for that individual, e.g., whether or not to enter the medical care system for active treatment and surveillance or whether or not to receive a mitigating agent.
Below, the two main methods of biodosimetry, biologically-based and physically-based, are described, along with their advantages and disadvantages in potentially helping patients at the three stages of radiation exposure following a catastrophic event. Methods for dose estimation based on clinical parameters are also described; while not strictly biodosimetry methods, they are very relevant because of current recommendations to use them to estimate dose.
Biologically-based biodosimetry
Biologically-based biodosimetry approaches are based on biological processes or parameters that are affected by ionizing radiation. Examples of this approach include determination of gene activation, evidence of activation of repair mechanisms, and production of unusual metabolic products (Amundson et al. 2003, 2004, 2010; Long et al. 2007; Sasaki 2009; Dons and Cerveny 1989; Brengues et al. 2010; Britten et al. 2010; Chen et al. 2010; Fenech 2010; Hill 2010; Ossetrova et al. 2010; Sharma et al. 2010; Wilkins et al. 2010). Usually they are based on cellular mechanisms whose natural functions are to control pathophysiological processes and to deal with damaged molecules.
There are some potentially significant advantages to using these types of biological parameters. Many cellular mechanisms, especially those involved in actively responding to damage from radiation, have the capability to be very sensitive to exposure because, as biological responses, they tend to become amplified in order to carry out their physiological role. Perhaps their greatest value is that they tend to reflect the actual amount of biological damage to the individual rather than the dose received. The actual damage may be more useful for clinical decision-making because it reflects not only the dose but also the individual’s response to the radiation. While technically challenging, it seems plausible that some of these techniques can be made highly automated, as has occurred in other aspects of molecular biology (Amundson et al. 2010; Chen et al. 2010).
Biologically-based dosimeters, however, also have some potentially significant limitations. First, the same properties that make them potentially sensitive and indicative of functional biological damage lead to potential artifacts regarding their use as dosimeters per se. Few if any of the parameters proposed for biological dosimetry are completely specific for radiation damage; instead, they are usually indicators of cellular responses to any damage and/or to the activation of repair pathways. Consequently, they are likely to be affected by the presence of other factors such as trauma, burns, or even psychological stress. The extent of the responses also is likely to vary among individuals both from inherent individual variation and as a result of pre-existing pathophysiology.
A second potential problem is that the responses are very likely to be time-dependent in complex ways. Because measurements are assessing responses, they cannot be detected until the pathways that produce the biological response are activated, and therefore are subject to a lag-time after exposure. They also reflect changes in biological responses over time, with most pathways having feedback loops designed to decrease responses over time or reactivate them if needed. Because these two effects lead to a typical pattern of a latent period followed by a rise, a plateau, and then a decline in the parameter being measured (plus the potential of also repeating the rise-plateau-decline cycle), their interpretation requires knowledge of both the time interval since the exposure and the expected pattern of change over time. It also is possible that these time-dependent changes may vary among individuals or according to the presence of trauma or stress.
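The interpretive problem created by this rise-plateau-decline pattern can be illustrated with a toy model. All constants below (lag, peak time, decay rate) are invented for illustration and do not describe any real assay; the point is simply that the same measured biomarker value can correspond to very different doses depending on when the sample is taken.

```python
import math

def biomarker_level(dose_gy, t_h, t_lag=4.0, t_peak=24.0, decay=0.02):
    """Toy rise-plateau-decline response: no signal during the lag period,
    a dose-proportional peak, then first-order decline.
    All rate constants are illustrative, not drawn from a real assay."""
    if t_h <= t_lag:
        return 0.0  # pathway not yet activated
    rise = min(1.0, (t_h - t_lag) / (t_peak - t_lag))
    return dose_gy * rise * math.exp(-decay * max(0.0, t_h - t_peak))
```

In this toy model, a 2 Gy exposure measured at the peak and a 4 Gy exposure measured one decay half-life later produce the same reading, which is why the text stresses that interpretation requires both the time interval since exposure and the expected pattern of change over time.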
Lastly, most of the methods require one or more blood drawings or another invasive procedure, and samples may need to be transported to specialized labs for processing, adding significantly to the logistical problems associated with these methods.
In summary, considering the significant advantages of the biological parameters for more informed decisions about care for ARS and their potential limitations for immediate determination at points of triaging, it is likely that they will be most useful when used in combination with other types of dosimetry, especially physically-based biodosimetry.
Physically-based biodosimetry
Physically-based biodosimetry approaches are based on physical parameters measured in the tissues of individuals; for example, levels of long-lived radiation-induced free radicals detected by Electron Paramagnetic Resonance (EPR) (Brady et al. 1968; Swartz 1965, 2006) and Optically Stimulated Luminescence (OSL) (Bøtter-Jensen et al. 2003; Godfrey-Smith 2008; DeWitt et al. 2010). These physical changes in tissues occur independently of any biological responses to radiation and therefore are not subject to the lag-time or feedback cycles seen with biological parameters. EPR dosimetry is especially well-developed and established as one of the principal methods for estimating doses many years after an exposure (originally, this technology was based on exfoliated teeth). More recently, EPR has been used for acute dosimetry using measurements of teeth in situ (Salikhov and Swartz 2005; Swartz et al. 2005, 2006, 2007; Williams et al. 2007, 2010) and fingernail clippings (Romanyukha et al. 2010; Swarts 2010; Wilcox et al. 2010).
Physical methods have some very attractive features for after-the-fact dosimetry. First, because they are based on predictable changes in tissues that are unlikely to vary by individual, they can be viewed as analogous to an individual’s wearing external dosimeters at the time of exposure. However, unlike the usual dose monitoring devices that must be carried to be effective, teeth and fingernails are always “present” on the body to detect exposure. Second, physical measurements are essentially invariant with respect to time. The “readout” from these natural dosimeters can usually be made at any time after the event and, for teeth especially, persist indefinitely. Third, the methods to measure dose in teeth or fingernails with EPR are potentially fully field deployable; i.e., they are mobile, and do not require special facilities or support and could be operated by personnel with minimal training. Currently, processes are being developed to fully automate the processing of results such that 10 min of training is sufficient to become an operator, and less than 5 min will be needed to measure and obtain results for an individual. Fourth, perhaps the most significant advantage is that, as physical dosimeters, they are not affected by the factors that affect the biological parameters noted above. This implies that the dose estimate reflects only the actual radiation dose received rather than any other cause for damage and is time invariant. Fifth, the physical methods, especially those based on fingernails and toenails, can provide spatially localized measurements of dose, and thereby provide information on the homogeneity of the exposure. These properties make the physical methods particularly well-suited for the initial determination of exposure and the assessment of the need for treatment or further surveillance.
Physical dosimetry methods also have some limitations. The in vivo tooth dosimetry technique, for example, currently requires several minutes for each measurement. Because high throughput is particularly important in catastrophic events, use of this technique would require many instruments operating concurrently to handle large numbers of people. The physical techniques cannot reflect whether the total dose was received in a short period or accumulated over time, which would impact the clinical consequences of the dose and therefore the treatment decisions. Finally, it is unlikely that they will achieve the very high sensitivity of the biological methods or reflect the biological impact of the dose in the particular individual.
In summary, considering the potential advantages of the physical methods, especially for initial screening, and their potential limitations for estimating the clinical significance of the damage, it is likely that they will be quite useful, especially when used in combination with biologically-based dosimetry because of the complementary strengths and limitations of the two types of approaches.
Dose estimation based on clinical parameters
Several clinical measurements, such as serial lymphocyte counts and the timing of nausea and vomiting, have been widely suggested as a means to estimate dose after-the-fact (CDC Radiation Emergencies 2005; Waselenko et al. 2004). While not strictly meeting the definition of biodosimetry, they are very relevant because of current recommendations to use these techniques to estimate dose. For details, see existing published guidelines for managing the medical response to unplanned exposures to ionizing radiation (CDC Radiation Emergencies 2005; Rojas-Palma et al. 2009). However, as detailed in the following section, in large-scale events, these methods may be more pertinent for management after the initial triage rather than for the initial determination of management. On the other hand, these measurements are potentially useful when dealing with small numbers of subjects who would likely all be hospitalized or under close surveillance and for the management of patients already screened into the medical treatment system. In both large- and small-scale events, this information could be useful to ensure that their clinical course is managed appropriately and to gauge their response to mitigating agents or treatment. Table 1 summarizes the advantages and disadvantages of the three methods discussed in the above sections.
Table 1.
Advantages and Disadvantages of Types of Biodosimetry Methods and a Clinical Alternative.
Technology | Description | Advantages | Disadvantages |
---|---|---|---|
Biologically-based Dosimetry | Measuring biological (cellular) processes or parameters that are affected by ionizing radiation | Measures actual cellular damage rather than dose received (individuals can have variable damage when given the same dose); highly sensitive because responses are biologically amplified; potentially automatable | Not specific to radiation (trauma, burns, and psychological stress also trigger responses); responses vary among individuals and change over time (lag, rise, plateau, decline); usually requires invasive sampling and transport to specialized laboratories |
Physically-based Dosimetry | Estimating actual radiation dose via physical changes to body structures such as teeth and nails | Reflects only the dose received, unaffected by biological confounders; readout is essentially time-invariant; potentially field-deployable with minimal operator training; can provide spatially localized dose information | Lower sensitivity than biological methods; cannot distinguish acute from protracted exposure; does not reflect the biological impact on the individual; current in vivo measurements require several minutes each |
Dose Estimation Based on Clinical Parameters | Measuring serial lymphocyte counts, chromosomal aberrations and the timing of nausea and vomiting | Potentially useful when dealing with small numbers of subjects who would likely all be hospitalized or under close surveillance and for the management of patients already screened into the medical treatment system | Requires skilled personnel, serial access to subjects, and/or specialized laboratories; symptoms are nonspecific and slow to appear, making these methods ill-suited to initial mass triage |
LIMITATIONS OF CURRENT GUIDELINES FOR RESPONSE TO LARGE EVENTS
Numerous authoritative guidelines published on official government websites and in peer-reviewed articles have been developed by consensus committees composed of renowned experts on radiation effects on humans. As an excellent example, see Medical Management of the Acute Radiation Syndrome: Recommendations of the Strategic National Stockpile Radiation Working Group (Waselenko et al. 2004). The consensus presented in this example states that individual radiation dose can be assessed by several methods: determining the time to onset and severity of nausea and vomiting, decline in absolute lymphocyte count over several hours or days after exposure, appearance of chromosome aberrations (including dicentrics and ring forms) in peripheral blood lymphocytes, and documentation over time of clinical signs and symptoms (affecting the hematopoietic, gastrointestinal, cerebrovascular, and cutaneous systems). Usually these methods have been employed for radiation events such as nuclear power plant accidents or other incidents involving few people. The resulting recommendations about performing multiple and complex clinical observations over time are suitable for such limited numbers of potentially exposed individuals, all of whom can be followed by medical experts. However, these same features are particularly ill-suited for the goal of immediate triage of a large population into categories for treatment or not, in the context of a catastrophic event.
Determining the time to onset and severity of nausea and vomiting would appear to be a very convenient and effective method for rapid triage, using symptoms that can be observed readily in the field by untrained personnel. A closer look at the evidence to support the use of this determination for biodosimetry, however, reveals substantial limitations. A recent rigorous re-evaluation of the data set used for these recommendations (principally from Chernobyl) indicated that the standard error of prediction for the intended purpose is very high (200%) (Demidenko et al. 2009). There also are several problems in the database on which that relationship was based, which further significantly limit its usefulness for estimating dose and guiding clinical care:
imprecise and inconsistent recording of data (doses were spread over time, rendering it difficult to determine onset; estimates of dose were not well established or consistently recorded; the presence of combined injury in many individuals was not well recorded or adjusted for use in assessing the clinical consequences)
selection bias (analyses did not include individuals who did not vomit)
psychogenic factors were ignored (e.g., the potential effect of panic and fright in the exposed population leading to psychogenic symptoms was not recorded).
In addition to questions about the evidence regarding the specificity and sensitivity of this symptom to indicate dose, there is an important policy consideration: these symptoms could be readily induced. Thus, if time-to-emesis were an important guideline to determine radiation exposure, terrorists could exploit this. For example, terrorists could include emetic agents within a device, simulating radiation exposure and thereby increasing panic and confusion.
Decline in absolute lymphocyte count over several hours or days after radiation exposure is a well-established phenomenon. It has been very useful in managing exposures involving small numbers of individuals, especially with the use of multiple counts per individual over the first two days (Goans et al. 2001, 1997). However, this method requires skilled personnel and technical facilities and access to the subject at several time points. Each of these features is likely to be impractical for nuclear events involving large numbers of people with the associated major disruption of communication and services related to the event.
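The kinetics underlying this method can be sketched briefly. Under the common first-order model, lymphocyte counts decline exponentially after exposure at a rate that increases with dose; serial counts give the decline rate, and a calibration maps that rate to dose. The sketch below assumes a simple linear rate-to-dose calibration; the constant `k_per_gy_h` is a placeholder for illustration only, not a validated clinical value, and published models (e.g., Goans et al.) are more elaborate.

```python
import math

def depletion_rate(times_h, counts):
    """Least-squares slope of ln(count) vs. time, returned as a positive
    first-order decline rate (per hour) for falling counts."""
    logs = [math.log(c) for c in counts]
    n = len(times_h)
    tbar = sum(times_h) / n
    lbar = sum(logs) / n
    num = sum((t - tbar) * (l - lbar) for t, l in zip(times_h, logs))
    den = sum((t - tbar) ** 2 for t in times_h)
    return -num / den

def dose_from_rate(k_per_h, k_per_gy_h=0.012):
    """Invert a hedged linear calibration k = k_per_gy_h * D.
    The calibration constant is illustrative, not a clinical value."""
    return k_per_h / k_per_gy_h
```

Note that the fit requires at least two counts at known times, which is exactly the serial-access requirement that makes the method impractical for mass-casualty triage.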
Appearance of chromosome aberrations (including dicentrics and ring forms) in peripheral blood lymphocytes is also a well-established methodology for estimating radiation exposure. It has usefully estimated doses as low as 1 Gy, which is well below the 2 Gy dose that is generally accepted as the threshold for significant acute clinical risk (Amundson et al. 2001; Bauchinger 1984; Bender and Gooch 1966). The assay, however, requires that blood samples be obtained and then transferred to a specialized laboratory where they are incubated for at least 48 h before being prepared and evaluated by experts. There are currently only a few laboratories in the world with the capability of assaying multiple samples. While considerable effort has been made to establish networks among these laboratories and to establish additional laboratories (Albanese et al. 2007; Prasanna et al. 2005; Carr and Christie 2010), the total available capacity to process samples remains quite low. Besides the concern for insufficient capacity to manage large populations, the logistical complexities of transporting samples and then matching the results to the appropriate individuals would also be significant in a major event.
Documentation of clinical signs and symptoms (affecting the hematopoietic, gastrointestinal, cerebrovascular, and cutaneous systems) over time is essential for effective clinical treatment of individuals with significant radiation exposure. It exemplifies an important paradigm in clinical medicine: treat the patient (i.e., the problem as manifested in signs and symptoms) and not the dose. While this method is appropriate for caring for patients who already have been identified as being at risk and starting to have signs and symptoms, the length of time required for the clinical manifestations to appear and to be documented in individuals precludes this method’s use for the initial triage.
NEEDS FOR BIODOSIMETRY TO PREDICT ACUTE RESPONSES TO AN EXPOSURE
The need for dosimetry, even when restricted to analyzing its usefulness for predicting acute responses to radiation, varies considerably, depending on the context of exposure. Therefore, an analysis of which characteristics are most useful should also first take into account these varying contexts. We have already mentioned some of these criteria in discussing the strengths and limitations of various types of dosimetry. Here we focus on varying contexts (i.e., the purpose of dosimetry in a given situation) to highlight the different needs that various biodosimeters address and why a planned coordinated use of the biodosimetric methods provides the most flexibility and effective deployment. This section concludes by discussing the clinical and other individual-level factors that may impact which features of biodosimetry are the most useful.
The dimensions that define the contexts in which various biodosimetry techniques are most useful include:
the number of people who are potentially exposed
the known or estimated likelihood that people in the area have received a clinically-meaningful exposure
the amount of concurrent disruption of communications and of social and medical networks within the immediate region of the incident
the state of the infrastructure outside of the affected region
the availability and level of personnel who can respond to the incident
many other factors
These factors, like the need for dosimetry itself, will not be static but will evolve over time after the precipitating event.
The section below discusses these dimensions in the context of whether the specific purpose for the measurement is to (1) facilitate making an initial screening decision as to whether or not the individual has been exposed at a clinically-important level or should be triaged for further medical assessment, (2) assess dose to determine the treatment that will probably be required, or (3) guide active acute treatment. The authors argue that the distinctions among these three purposes for dosimetry should not be based on invariant, predetermined threshold levels on each dimension, but instead vary depending on the system’s capacity to deal with the numbers of individuals who are involved. The best choices of dosimetry therefore depend on a careful analysis of each situation.
Purpose: Initially triage individuals into the medical system for further assessment or determine if they do not need to be followed further in the near term
In any situation in which large numbers of people have potentially been exposed, it is very unlikely that everyone, regardless of exposure, can be monitored closely for the onset of classical clinical indications of ARS. Furthermore, in most plausible scenarios involving large numbers, the medical care system and indeed the general infrastructure of the region are likely to be overwhelmed and unable to monitor everyone fully.
Under catastrophic nuclear scenarios, the most important task for guiding the initial management of these patients is to obtain a reliable dose estimate using the simplest possible screening process. This usually will be an assessment of whether a subject has received less than 2 Gy (the usual threshold to decide no further assessment is needed) or more than 2 Gy (to be triaged to receive further assessment and likely treatment). A simple screening process could significantly increase the efficiency of the initial response. Under some circumstances where the number of potential victims is very high, the threshold might be set higher and, if fewer individuals are involved, lower. Within the chosen level for action, first responders and emergency managers of the event could remove individuals who do not require immediate or urgent medical care from the emergency medical care system, thereby allowing subjects who would most benefit from medical care to receive it in a timely manner.
Criteria for such initial screening could be set so as to tolerate a modest degree of false positives (i.e., sending individuals for assessment or treatment who turn out not to need it) but should avoid false negatives (i.e., failing to place people into the medical care system who would benefit from active treatment). The false positive threshold could also depend to some degree on the numbers of people who could be handled by the available medical system; the higher the capacity, the higher the false positive rates that would be allowed. These arguments follow from the assumption, expanded below, that people triaged into the potentially-exposed category (e.g., more than 2 Gy) can be further studied to obtain a more precise estimate of dose and categorized for more targeted action.
In summary, the output of such a screening is information that can be the basis for dichotomizing the population as to whether or not they had a clinically-significant exposure. The most appropriate biodosimetric techniques for this level of use, then, need to be simple and rapid in providing results, have a low false-negative rate, a high throughput to handle large numbers, and be available at the emergency care sites at the event.
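The asymmetric error tolerance described above (accept some false positives, avoid false negatives) can be sketched as a one-line decision rule, assuming approximately Gaussian measurement error on the dose estimate: refer the subject unless the upper confidence bound of the estimate falls below the action threshold. The 2 Gy threshold comes from the text; the measurement uncertainty and confidence level are illustrative assumptions.

```python
def refer_for_assessment(dose_estimate_gy, sigma_gy=0.5,
                         threshold_gy=2.0, z=1.645):
    """Refer unless the approximate one-sided 95% upper bound of the dose
    estimate is below the threshold. This deliberately biases errors toward
    false positives, as the triage criteria in the text require.
    sigma_gy and z are illustrative, not validated values."""
    return dose_estimate_gy + z * sigma_gy >= threshold_gy
```

Raising `threshold_gy` (or shrinking the margin) when the system is saturated, and lowering it when capacity is ample, mirrors the capacity-dependent tuning the text describes.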
Purpose: Assign patients to levels of treatment
The next level of use of biodosimetry, which could follow the initial screening, would be to assist in assigning individuals as rapidly and effectively as possible into major action classes. The number of categories would depend on the volume of people triaged for care and the capabilities of the medical care system for addressing their treatment. Under some circumstances, such as the limited availability of stem-cell transplantation, it would be desirable for the biodosimetry technique to provide reliable estimates for subclasses of risk so that the limited capabilities for high-intensity treatments could be used most effectively. The authors present an example based on three categories. [More detailed analyses of proposed triage categories using high-accuracy biodosimetry methods and consideration of medical resources have been proposed in other papers in this volume (Rea et al. 2010; Riecke et al. 2010)].
Category 1. Identify false positives and those near 2 Gy
These individuals would not need urgent medical care. They might possibly need to be evaluated for risks of long-term effects but would have little need for prompt actions. Individuals assigned to this category could leave the emergency medical care system, at least during the period of time when there is greatest stress and potential for overwhelming the system.
Category 2. Admit patients into the medical care system for observation and, as needed, active medical care
This would be done to reduce the probability of a near-term deleterious clinical course due to ARS. This group is likely to require active symptomatic medical care and may also receive complex (and potentially risky) more aggressive treatments, such as bone marrow transplantation and/or high doses of radiation-mitigating drugs. The assignment of individuals into this action class would typically occur when the dose is in the range of 3–8 Gy.
Category 3. Provide palliative or expectant care
This level would identify individuals whose radiation exposure is too high for effective active or mitigating therapy. The actual threshold level may vary under the conditions of the event and the ability of the system to provide advanced care; however, a likely threshold would be 8 Gy. If fewer individuals are involved and the treatment capability is not overwhelmed, the threshold for entry into this category would probably be increased. On the other hand, if the healthcare system was potentially overwhelmed, the dose range for active treatment (i.e., placement into Category 2) might be narrowed on both ends. That is, more people could be placed into Category 1 (by raising the minimum dose to qualify for active treatment) and more placed into Category 3 (by lowering the maximum dose to qualify for active treatment).
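The three-category logic above, including the capacity-dependent narrowing of the active-treatment band on both ends, can be sketched as follows. The default thresholds (2 Gy and 8 Gy) are taken from the text; the narrowed values used when the system is overwhelmed are arbitrary examples.

```python
def assign_category(dose_gy, active_min=2.0, palliative_min=8.0):
    """Map an estimated dose to the three action categories in the text."""
    if dose_gy < active_min:
        return 1  # release from the emergency medical care system
    if dose_gy < palliative_min:
        return 2  # observation and active medical care
    return 3      # palliative or expectant care

def capacity_adjusted(dose_gy, overwhelmed):
    """Illustrative narrowing of the Category 2 band when the health care
    system is overwhelmed (3-7 Gy here, chosen only as an example)."""
    return assign_category(dose_gy,
                           active_min=3.0 if overwhelmed else 2.0,
                           palliative_min=7.0 if overwhelmed else 8.0)
```

A dose of 2.5 Gy, for instance, maps to Category 2 under normal capacity but to Category 1 when the band is narrowed, matching the text's point that the boundaries are operational choices rather than fixed clinical constants.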
Many of the useful characteristics of biodosimetry techniques for this more refined sorting into action categories would differ from those required for the initial triage. The information would not need to be available as rapidly. While it would be desirable to avoid the need to transport the samples, it would sometimes be feasible to transport samples, especially to nearby facilities such as an emergency center set up near the event site. The throughput could be less. Techniques for measuring dose could include bringing expert operators to the site. It would be important for the technique to have a low false-assignment rate, i.e., neither assigning too high nor too low a category or subcategory. For this purpose, an estimate of dose within ±0.5 to 1.0 Gy of the actual dose is probably sufficient, because the known variation in response among individuals receiving the same exposure dose is likely to render more precise estimates of dose clinically irrelevant.
Purpose: Guide treatment
Another use of biodosimetry is to guide the treatment of individual patients, or measure their recovery and response to mitigating agents within a medical facility. In general, the preferred method to guide treatment is to base decisions on clinical signs and symptoms. However, there are considerable advantages, especially under catastrophic conditions in which many people present for treatment, for clinicians to make judicious use of treatment options based on accurate estimates of the exposure dose of the patients under their care. Equally important, but providing different clinically-useful information, would be to have indicators as to whether the whole body was exposed (homogeneous exposure) or whether parts of the body were shielded (inhomogeneous exposure) (Prassana et al. 2010). For example, in the latter case, treatment decisions would be affected by knowing that spontaneous hematopoietic recovery may occur from marrow that did not receive a high radiation dose.
This particular purpose requires biodosimetry methods that are quite different from those needed for initial screening or further triaging. Immediate output is no longer critical, inasmuch as most active treatments do not need to be started immediately. High throughput is also less important because this type of dosimetry would likely be done within the medical system, using established hospitals or field hospitals rather than emergency care systems. Also, many fewer people would need to be assessed for these advanced treatment plans (particularly assuming that the actions described in the first two purposes have successfully narrowed the population needing treatment). The availability of clinical experts and specialized laboratories is more feasible, so this type of dosimetry can use methods that require high expertise, repeated sampling, and analysis of the samples in specialized laboratories. There is a need for moderately well-resolved estimates of dose for this purpose. Again, an estimate within ±0.5 Gy of the actual dose would probably be sufficient, because more precise estimates are unlikely to add clinical usefulness since individual variations in response are known to fall within this range. In this setting, too, methods that can assess whether doses were spatially localized could be very useful to complement highly-resolved methods for biodosimetry.
Factors that may significantly modify the requirements for biodosimetry for response to acute events
Some factors may significantly alter the type of information that is needed, e.g., by changing the requirements for precision, the time when the results need to be available, and/or the need for identifying the homogeneity of the exposures. Five of the most important and most plausible factors are considered here; other factors may be important in particular circumstances.
The presence of combined injury
Experimental data and limited clinical observations indicate that radiation, when combined with other injuries such as burns and wounds, has a significantly increased deleterious effect on the outcomes. Radiation doses as low as 1 Gy can increase wound-related mortality from 10% to greater than 70% (Pellmar and Ledney 2005; Messerschmidt 1989; Ledney and Elliott 2010).
Because of this apparent synergistic impact, biodosimetry may play a very important additional role in planning effective care and efficient overall use of available resources. Under the very plausible circumstance in which the number of injuries that can be promptly treated by the available facilities and personnel is exceeded, accurate dose assessment in combination with injury assessment will make it feasible to assign some injured individuals for palliation due to their high expected mortality while selecting others for active treatment who will likely benefit.
The requirements for biodosimetry for this purpose will be quite different. Better resolution of dose will be needed for initial screening than what would be required to screen individuals without additional injury. For example, 2 Gy may be sufficient for primary screening in the absence of combined injury, but <1 Gy resolution may be needed to triage in the presence of combined injuries. Having the results of the measurement available rapidly is likely to be even more important for combined injury because of the urgency to make decisions about the type of treatment for both the physical injury and the radiation injury. Because the number of individuals with trauma will likely be limited, dosimetry techniques that have low throughput can be used.
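The effect of combined injury on screening can be sketched as a simple threshold adjustment, using the example values from the text (2 Gy for radiation-only screening, below 1 Gy when burns or wounds are present). The function name and interface are hypothetical, for illustration only.

```python
def needs_further_evaluation(dose_gy, combined_injury=False):
    """Return True if the estimated dose (Gy) exceeds the applicable
    screening threshold for referral into the medical response system."""
    # Combined injury (burns, wounds) greatly worsens outcomes at lower
    # doses, so the screening threshold is lowered (values from the text).
    threshold = 1.0 if combined_injury else 2.0
    return dose_gy >= threshold
```

A 1.5 Gy estimate, for instance, would be screened out for an uninjured person but flagged for evaluation in a patient with significant wounds.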
The availability of dose mitigating agents
The potential availability of dose mitigating agents also could profoundly affect the needs for biodosimetry (Alexander et al. 2007). Like any drug, dose-mitigating agents have potentially serious side effects and toxicity and cannot benefit patients who have no need for them. Therefore, it is most prudent to identify and provide such drugs only to patients with documented significant exposure rather than treat the entire large, potentially exposed population, because even a low incidence of serious side effects could result in a substantial number of injured people— most of whom would receive no benefit. If the mitigating agents were in short supply, this would add to the need to identify and treat only those who could benefit. On the other hand, these agents might be most efficacious if they were initiated soon after exposure. The time urgency, combined with the value of identifying people who could benefit, leads to the importance of having rapidly available dosimetry results to support appropriate administration of dose-mitigating agents.
Inhomogeneous exposures
There are important implications for medical decision-making to know whether exposure was homogeneously spread across the whole body or if a marrow-containing part of the body was substantially shielded. For the hematological syndrome, protection of even a modest volume of the bone marrow significantly reduces the effects of radiation, enabling the patient to use the protected marrow as the source for replenishing damaged marrow (Bond and Robinson 1967; Kereiakes et al. 1972; Smith 1983; Prassana et al. 2010). Therefore, there is a need for biodosimetric techniques that can distinguish whether or not the exposure is inhomogeneous. This determination is most likely to be accomplished with the physically-based techniques for two reasons: These methods assess total dose received (albeit at a defined point in time), and they can assess site-specific total dose. Currently, biodosimetry using fingernails and toenails is the most well-developed technique for assessing multiple sites (i.e., two hands and two feet) per person for dose measurements. There are some biological methods that also appear to have the capability to indicate the occurrence of partial body exposures to the bone marrow (Prassana et al. 2010).
There also are some very significant pathophysiological consequences of high radiation doses to specific organs, particularly the gastrointestinal tract and the lungs. Especially for inhomogeneous exposures, a high dose to a critical organ will determine the true biological risk, even though the total body exposure may not have been at a life-threatening level. This complicating factor gives rise to the need for biodosimetric techniques that can detect organ-specific damage after-the-fact; these are most likely to be based on a biological parameter.
The presence of different types of radiation
The various methods for biodosimetry have usually been developed using high-energy, low linear energy transfer (LET) radiation (i.e., x-rays and gamma rays) as the source for the experimentally-induced radiation exposures in animals or samples from human subjects. However, exposures with high-LET radiation can occur, as indicated in some of the radiation disaster planning scenarios.
Despite the absence of detailed data, there are some general principles that can be useful to consider as to how current techniques to measure dose may apply to other types of radiation. First, it is unlikely that the measurements obtained by either physically- or biologically-based techniques will vary significantly in estimating doses from low-LET radiation. Second, while neutrons are one potential high-LET source of radiation present in a large-scale nuclear explosion, there are only limited circumstances where whole-body exposure to neutrons would be a significant factor leading to ARS. While there are regions near the center of a nuclear blast where there would be a high flux of neutrons, such regions also have a high probability of fatal effects from burns and blast, and thus a concern for either ARS or biodosimetry is unfortunately moot. For the population who are outside of this center region, fallout with particulates is the most likely source of high-LET radiation. As long as these particulates are not ingested or inhaled, they would affect only the skin (which is likely to be washed immediately). Therefore, high-LET radiation is unlikely to be an important factor that would alter the need for dosimetry for the purpose of effective screening or triage for treatment.
A biological assay based on an endpoint that measures effective biological damage could provide very useful information about the effects of different types of radiation. However, to date such assays have not been established and tested for their effectiveness in resolving differential damage due to high-LET radiation.
There is some specific information available on responses of EPR dosimetry to neutrons (Bochvar et al. 1997; Kang et al. 2003; Tikunov et al. 2005; Trompier et al. 2004; Zdravkova et al. 2002). However, teeth are essentially unresponsive to neutrons because of the paucity of hydrogen atoms in enamel, which prevents tooth-based EPR dosimetry from being sensitive to this source. EPR of fingernails should in theory be responsive to neutrons, but detailed data are not yet available.
Exposures at different dose rates
The general effect of dose rate on biological damage is well known (in general, spreading out the time of exposure decreases the biological effects of radiation because the body repairs some of the damage). However, the importance of most variations in dose rate for causing ARS is not resolved. Most scenarios assume that the exposures will be received over very short periods and at high dose rates. However, it is plausible that, under some circumstances where the exposure was primarily due to fallout and subjects remain unshielded during long stays within the fallout field, their exposure would be more prolonged and received at lower dose rates. This was certainly the case at Chernobyl.
Currently there are no biodosimetric methods that provide unambiguous information about dose rate. The physical methods of EPR and OSL, by their nature, only indicate total dose; they are unaffected by the rate at which the dose was delivered. Biologically based techniques may be affected by dose rate, but this dependency has not been established rigorously.
NEEDS FOR BIODOSIMETRY TO EVALUATE THE POTENTIAL LONG-TERM EFFECTS OF AN EXPOSURE TO IONIZING RADIATION
While many of the aspects of biodosimetry for acute responses also apply to evaluation of long-term effects of ionizing radiation, some considerations and factors differ substantially (Simon et al. 2010). Two such issues of particular importance relate to dose levels and field logistics. The relevant exposures for considering long-term consequences are likely to be much lower than those for ARS. This increases the need for precision in dose estimates but decreases the need to cover a large range of doses, particularly those at the higher end, since long-term survival is low for those receiving high doses. The logistical requirements for rapid measurements and immediate output are significantly reduced for purposes of long-term risk assessment and treatment. Similarly, the urgency and the need to process huge numbers concurrently are moot for this purpose, because prompt action is not needed.
There are several different uses for measurements long after-the-fact, which may affect the way the biodosimetry techniques are utilized. The most extensive interest to date has been in estimating the magnitude of the exposures from events that occurred months to years before the measurements. Here, measurements serve to relate biologically-observed phenomena to the dose received, both for basic understanding of the etiology and to assist with the care and consoling of the subjects or compensation for injuries received. There are, unfortunately, several large-scale exposures that have motivated making such measurements (Chumak et al. 1998; Ikeya et al. 1986; Ishii et al. 1990; Nakamura et al. 1998a and b; Romanyukha et al. 2000) and have provided unique data about the long-term effects of ionizing radiation in humans. Such information is essential to advance understanding of the long-term consequences of exposures to ionizing radiation from any source and is an important use of biodosimetry.
COMPARATIVE CHARACTERISTICS OF BIODOSIMETRY TECHNIQUES
As noted, the various biodosimetry techniques each have different characteristics and are based on differing parameters. The physical techniques measure changes that are caused directly by the radiation and do not require biological processing; the signals can therefore be measured immediately and persist indefinitely or decay simply over time. The biological techniques, in contrast, reflect the body’s biological response to tissue damage and therefore most closely align with assessing both the damage sustained and the recovery of the individual and the particular organ systems; however, such responses vary complexly over time and require time to develop. Because the varying circumstances under which these techniques are most likely to be employed present different needs and differing logistical problems, the same characteristic may be a strength under some circumstances and a weakness in others. Therefore, it is useful to consider these characteristics in some detail.
However, before focusing on characteristics that vary among the three types of dosimetry techniques, i.e., biologically-based, physically-based, and clinical measures (including cytogenetic assays), it is useful to note two characteristics common to all. First, all biodosimetry techniques are applied at the level of the individual. This is in contrast with other population-based methods for dose estimation including: (1) those based on classical physical dosimetry techniques, e.g., film badges and environmental monitors; (2) simulation models of the events, e.g., models that predict exposures in the path of fallout according to wind and weather patterns and distance from the epicenter; and (3) models of biological processing of particulates if the latter are involved. These methods are not applied at the level of the individual. Second, all biodosimetry techniques are likely to be employed in the context of having knowledge available from these population-based estimates; they can all be further informed and supplemented by applying the population-based estimates to the area where an individual was located at the time of detonation. Wise utilization of biodosimetry techniques therefore necessitates considering when and how to combine corroborative information about exposure from both population-based and individual-level techniques.
There are several characteristics of biodosimetric techniques, each of which may be more or less important depending on the purpose, as elaborated above. Here the focus is on the most important characteristics that affect their appropriate utilization.
Capacity
Capacity or “throughput” of the device refers to the number of people who can be processed in a given time-frame. Criteria to evaluate capacity need to include both the number of tests that can be processed by the technique in a given timeframe, and also the overall time period for a given test to produce results, i.e., the time elapsed between obtaining the sample and getting clinically useful information from its measurement. Note, however, that the “throughput” needs for an individual device can be offset by the potential to have multiple devices of the same type performing parallel processing.
A proposed standard for a dosimetry technique to be judged as having ‘sufficient capacity’ to deal with a large acute radiation event has been estimated by federal sources (BARDA–Department of Health and Human Services 2008) as the ability to complete up to 1 million assays within 48 h. This “standard” makes sense for the purpose of initially screening people into those needing further assessment vs. those who do not. After individuals have been initially screened and found to be appropriate for medical management, the need to be able to handle a very high volume of people is likely to be much lower, and the need to make the measurements very quickly would also be lower.
Interval before measurements can be made and the length of time that the measurement is still feasible
Several other criteria are strongly interrelated with the capacity of the techniques as well as important in their own right. One factor that affects the true capacity of a biodosimetry technique, i.e., its ability to deliver results within a given time following exposure, is whether there is an interval before measurements can be made. Especially for initial screening and triage, there are advantages to using a dosimetry technique that does not require time for the assay to become positive. This is a characteristic of the existing physically-based techniques, such as EPR dosimetry. Some physical techniques (e.g., EPR based on nails) may have a gradual fall-off in measurable intensity, and this needs to be taken into account in assessing when such measurements are appropriate. Others, such as EPR tooth dosimetry, have an immediate and permanent signature that reflects the dose. Most biologically-based assays measure changes in natural processes that must take time to develop before they can be measured, e.g., genes must be activated or the metabolism of damaged molecules must occur. Most of these assays also will have a level that changes over time, and therefore the interpretation of the results needs to account for this time dependency, as well as how this process may vary among individuals.
Two other factors that impact throughput are (1) the delays caused by the transportation and processing of samples at specialized and distant sites, and (2) the expertise required in some cases to take the samples, make measurements, and process the results. These are briefly addressed next.
Interval before results are available
For immediate screening-triage for mass casualty events, it is desirable to have the results immediately available; some biodosimetry techniques (e.g., physical techniques and some proposed biological techniques) can provide immediate readout at the site of the measurement. For other purposes such as categorizing previously-screened people and guiding care of patients under treatment, particularly in a hospital setting, immediacy of information may not be as important. Some dosimetry techniques use samples that, after they are obtained, require considerable time to process. Sometimes the complexity of the measurements requires longer elapsed times. For example, in the case of dicentrics, the changes need to evolve under laboratory conditions (because cytogenetic changes cannot be manifested until the cells undergo division). Clearly, techniques providing rapid and on-site results are best used where time urgency is critical, and the latter methods are best used when urgency is not crucial.
Requirements for specialized personnel
Many of the currently employed or proposed biodosimetry approaches require specific expertise. In principle, such expertise might be provided via prior training of all potential responders, by planned transport of experts to the emergency site or by moving the samples to the experts. However, each of these alternatives will likely be difficult to implement if there is a large-scale event involving severe logistical disruptions. For example, prior training of first responders for the necessary specific tasks is likely to be impractical because of the large number of individuals involved nation-wide and the difficulty in keeping up specialized training. But even if this were accomplished, in real situations the first responders will be limited in number and fully occupied with other responsibilities (Gougelet et al. 2010).
Transport of experts to the site may be difficult to accomplish. Transporting samples to the experts and back to the treatment site is likely to pose great challenges under these circumstances too. It will be difficult to undertake the initial sample transport to the experts and then to match the results with the person measured because of the expected dispersal of the population after the event, combined with the poor communication that is expected in a mass-casualty event (as was dramatically illustrated in the aftermath of Hurricane Katrina in 2005). For large events, therefore, it is likely to be essential that the techniques can be carried out by non-expert personnel, i.e., persons who happen to be present in the area and are not incapacitated by the event. This puts a premium on automation and simplification of techniques to be used for high-throughput screening.
All of these problems become less acute for smaller incidents and/or when the people who are potentially exposed can be transported to a sophisticated medical care facility, such as a hospital.
Capability of being used in the field (field-readiness)
“Field-readiness” of a biodosimetry technique involves two considerations: the feasibility of obtaining the samples under field conditions and the capability to provide results while the patient is still in the field. As already noted, transport of samples may be limiting under many plausible scenarios, as will reconnection of results to the individual after a period of elapsed time, especially if the subject is not institutionalized for care.
Utilization of a technique in the field would be facilitated by having the device/technique strategically pre-positioned (probably at sites established for response teams) or for the device/technique to be moved to the site quickly and without damage to the device. It is also important to consider what utilities (such as electrical lines, water, or sanitary conditions) must be available and functioning at the site to carry out the measurements.
Inasmuch as it is unlikely that any system could be designed to be fully self-evident to operate, the devices should include the capability to train unfamiliar users with simple and complete instructions sufficient to permit their timely use.
Precision
The necessity for precision in any biodosimetry method varies by the intended purpose. To determine the long-term effects of ionizing radiation, precision of the measurements is very important because of the need to develop information about the relationship between the dose and the long-term effects; ideally precision would be in terms of milligray (mGy). On the other hand, for triage into treatment categories for acute effects, relatively large uncertainties on the order of 0.5 to 1 Gy may be quite tolerable because of the lack of clinical implications from the uncertainties. For initial screening, a key parameter will be the threshold for which the screening is set. As noted above, the threshold should be set to minimize false negatives, with the assumption that false positives can be sorted out in the next steps of the medical process. For special purposes, such as triaging individuals who have injuries such as burns combined with radiation, there may be more need for precision in the measurement of the radiation dose because of the strong synergistic effect on the outcomes.
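The logic of setting a screening threshold to minimize false negatives can be made concrete with a small sketch. It assumes (as an illustration, not a validated model) that dose estimates carry roughly symmetric measurement noise, so the operational cutoff is placed below the clinical decision dose by a safety margin of a few standard deviations.

```python
def screening_cutoff(clinical_dose_gy, sigma_gy, n_sigmas=2.0):
    """Operational screening cutoff (Gy): flag anyone whose estimated
    dose exceeds the clinical threshold minus a noise margin, so that
    truly exposed individuals are rarely missed (few false negatives)."""
    # Lowering the cutoff trades more false positives (sorted out later
    # in the medical process) for fewer missed exposures.
    return max(0.0, clinical_dose_gy - n_sigmas * sigma_gy)

# A 2 Gy clinical threshold measured with 0.5 Gy (1-sigma) uncertainty:
# screening_cutoff(2.0, 0.5)  ->  1.0 (Gy)
```

The wider the measurement uncertainty, the lower the cutoff must be set, which is why imprecise first-stage techniques generate larger pools of people needing second-stage evaluation.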
Applicability to the population
It is likely that some of the techniques will not be applicable to all individuals in the population. For example, some individuals will not be suitable for EPR dosimetry of teeth because they have no natural teeth. Demographic factors such as age (infancy as well as frail elderly) or ethnicity may affect the applicability of techniques or the interpretation of their results. On the other hand, it is unlikely that there will be a need for complete sampling of everyone who is potentially exposed in order to carry out the initial screening and triage. Instead, estimates for a given unmeasured individual could be based on measurements from other individuals who were in the same location at the time of the event.
Cost per test
The costs of making the measurements include both the consumables required for each measurement and fixed costs, including the labor involved in making the measurements plus capital costs for the device and any other specialized equipment that is needed. In general, when the application involves making many assays, the capital costs are likely to have only a modest impact on marginal costs, so the capital cost per test will be small even for expensive equipment. However, in considering cost, it is also important to take into account the cost of maintaining the equipment in operating condition, including periodic testing of functionality and for storing the equipment under appropriate conditions.
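The cost structure described above amortizes straightforwardly; the sketch below illustrates the arithmetic with wholly hypothetical dollar figures.

```python
def cost_per_test(consumables, capital_cost, annual_upkeep,
                  service_years, tests_per_year):
    """Approximate per-test cost: consumables plus the device's capital
    and maintenance costs spread over its lifetime test volume."""
    total_tests = service_years * tests_per_year
    fixed = capital_cost + annual_upkeep * service_years
    return consumables + fixed / total_tests

# A hypothetical $200,000 device, $10,000/yr upkeep, 10-yr life,
# 50,000 tests/yr, $5 consumables per test:
# cost_per_test(5, 200_000, 10_000, 10, 50_000)  ->  5.6 ($ per test)
```

As the text notes, at high test volumes the fixed costs contribute only modestly: here the $200,000 instrument adds well under a dollar per test.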
SUMMARY AND CONCLUSION
Biodosimetry is a set of complex and different techniques, each of which has particular advantages, disadvantages, and likely niches.
Biologically-based methods have the potential to be quite sensitive and to reflect the biological importance of the radiation exposure.
Biological methods also have the potential to be confounded by a number of different factors such as stress, burns, and wounds; have a complex time course; and may be affected by pre-existing variations in the individual.
Physical methods can provide total dose at specific points at any time after exposure with immediate readout, are unlikely to be affected by comorbidities or individual differences in the responses to irradiation, and can estimate dose at specific body-locations.
Physical methods do not indicate the biological implications of the exposure or variations in dose rate.
The niche for various types of dosimetry will likely differ, depending on whether the purpose is for immediate screening and triage, for determining the extent of risk in individuals with significant exposure, or for guiding patient care over the course of treatment.
It is unlikely that any one method or any one type of measurement will be the approach of choice for most applications.
The most useful information is likely to be obtained when multiple, complementary biodosimetry methods are used in a staged and coordinated fashion, especially using a combination of a physically-based method and a biologically-based method (Blakely 2010; Riecke et al. 2010; Gougelet et al. 2010).
The data from biodosimetry will be most useful when such data are integrated with other types of information about the exposure.
Acknowledgments
Research supported by NIH (U19AI067733) and DARPA (HR0011-08-C-0023 and HR0011-08-C-0022).
Footnotes
Disclaimer: The views, opinions, and/or findings contained in this article are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Defense Advanced Research Projects Agency or the Department of Defense.
REFERENCES
- Albanese J, Martens K, Arnold JL, Kelley K, Kristie V, Forte E, Schneider M, Dainiak N. Building Connecticut’s clinical biodosimetry laboratory surge capacity to mitigate the health consequences of radiological and nuclear disasters: a collaborative approach between the state biodosimetry laboratory and Connecticut’s medical infrastructure. Radiat Meas. 2007;42:1138–1142. [Google Scholar]
- Alexander GA, Swartz HM, Amundson SA, Blakely WF, Buddemeier B, Gallez B, Dainiak N, Goans RE, Hayes RB, Lowry PC. BiodosEPR-2006 Meeting: acute dosimetry consensus committee recommendations on biodosimetry applications in events involving uses of radiation by terrorists and radiation accidents. Radiat Meas. 2007;42(6–7):972–996. [Google Scholar]
- Amundson SA, Garty G, Chen Y, Salerno A, Turner H, Zhang J, Lyulko OV, Bertucci A, Xu Y, Wang H, Simaan N, Randers-Pehrson G, Yao L, Brenner DJ. The RABIT: a rapid automated biodosimetry tool for radiological triage. Health Phys. 2010;98 doi: 10.1097/HP.0b013e3181ab3cb6. 000–000. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Amundson SA, Bittner M, Meltzer P, Trent J, Fornace AJ. Biological indicators for the identification of ionizing radiation exposure in humans. Expert Rev Mol Diagn. 2001;1:211–219. doi: 10.1586/14737159.1.2.211. [DOI] [PubMed] [Google Scholar]
- Amundson SA, Grace MB, McLeland CB, Epperly MW, Yeager A, Zhan Q, Greenberger JS, Fornace AJ. Human in vivo radiation-induced biomarkers: gene expression changes in radiotherapy patients. Cancer Res. 2004;64:6368–6371. doi: 10.1158/0008-5472.CAN-04-1883. [DOI] [PubMed] [Google Scholar]
- Amundson SA, Lee RA, Koch-Paiz CA, Bittner ML, Meltzer P, Trent JM, Fornace AJ. Differential responses of stress genes to low dose-rate gamma irradiation. Mol Cancer Res. 2003;1:445–452. [PubMed] [Google Scholar]
- BARDA-Department of Health and Human Services. Physical and biological dosimetry techniques and devices useful in initial triage after radiologic and nuclear events—RFI-BARDA-08-21A (Archived)—Federal Business Opportunities. [Accessed 17 November 2008]; Available at https://www.fbo.gov/index?s=opportunity&mode=form&id=997f67c8155b32dfa03e01951dfa77cc&tab=core&cview=0&cck=1&au=&ck=.
- Bauchinger M. Cytogenetic effects in human lymphocytes as a dosimetry system. Berlin: Springer-Verlag; 1984. [Google Scholar]
- Bender MA, Gooch PC. Somatic chromosome aberrations induced by human whole-body irradiation: the “Recuplex” criticality accident. Radiat Res. 1966;29:568–582. [PubMed] [Google Scholar]
- Blakely WF. Multiple parameter radiation injury assessment using a nonhuman primate radiation model—Biodosimetry applications. Health Phys. 2010;98 doi: 10.1097/HP.0b013e3181b0306d. 000–000. [DOI] [PubMed] [Google Scholar]
- Bochvar I, Kleshchenko E, Kushnereva K, Levochkin F. Sensitivity of human tooth enamel to α-radiation and neutrons. Atom Energy. 1997;83:845–847. [Google Scholar]
- Bond VP, Robinson CV. Effect of ionizing radiation on the haematopoietic tissue. Proceedings of an International Atomic Energy Agency Panel. Vienna: IAEA; 1967. Bone-marrow stem-cell survival in the nonuniformly exposed mammal; pp. 69–74. [Google Scholar]
- Bøtter-Jensen L, McKeever S, Wintle A. Amsterdam, Boston, London: Elsevier Science; 2003. Optically stimulated luminescence dosimetry. [Google Scholar]
- Brady JM, Aarestad NO, Swartz HM. In vivo dosimetry by electron spin resonance spectroscopy. Health Phys. 1968;15:43–47. doi: 10.1097/00004032-196807000-00007. [DOI] [PubMed] [Google Scholar]
- Brengues M, Paap B, Bittner M, Amundson SA, Seligmann B, Korn R, Lenigk R, Zenhausern F. Biodosimetry on small blood volume using gene expression assay. Health Phys. 2010;98 doi: 10.1097/01.HP.0000346706.44253.5c. 000–000. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Britten R, Mitchell S, Johnson AM, Singletary SJ, Keeney SK, Nyalwidhe JO, Karbassi I, Lonart G, Sanford LD, Drake RR. The identification of serum biomarkers of high-LET radiation exposure and biological sequelae. Health Phys. 2010;98 doi: 10.1097/HP.0b013e3181acff7c. 000–000. [DOI] [PubMed] [Google Scholar]
- Carr Z, Christie DH. Global networking for biodosimetry laboratory capacity surge in radiation emergencies. Health Phys. 2010;98:000–000. doi: 10.1097/HP.0b013e3181abaad4.
- CDC. Acute radiation syndrome: a fact sheet for physicians [online]. 2005. Available at: http://www.bt.cdc.gov/radiation/arsphysicianfactsheet.asp. Accessed 13 November 2008.
- Chen Y, Ying T, Nowak I, Wang N, Hyrien O, Wilkins R, Ferrarotto C, Sun H, Dertinger SD. Validating high throughput micronucleus analysis of peripheral reticulocytes for radiation biodosimetry: benchmark against dicentric and CBMN assays in a mouse model. Health Phys. 2010;98:000–000. doi: 10.1097/HP.0b013e3181abaae5.
- Chumak V, Likhtarev I, Shalom S, Meckbach R, Krjuchkov V. Chernobyl experience in field of retrospective dosimetry: reconstruction of doses to the population and liquidators involved in the accident. Radiat Prot Dosim. 1998;77:91–95.
- Demidenko E, Williams BB, Swartz HM. Radiation dose prediction using data on time to emesis in the case of nuclear terrorism. Radiat Res. 2009;171:310–319. doi: 10.1667/RR1552.1.
- DeWitt R, Klein DM, Yukihara EG, Simon SL, McKeever SWS. OSL and tooth enamel and its potential use in post-exposure triage. Health Phys. 2010;98:000–000. doi: 10.1097/01.HP.0000347997.57654.17.
- DiCarlo AL, Hatchett RJ, Kaminski JM, Ledney GD, Pellmar TC, Okunieff P, Ramakrishnan N. Medical countermeasures for radiation combined injury: radiation with burn, blast, trauma and/or sepsis. Report of an NIAID Workshop, March 26–27, 2007. Radiat Res. 2008;169:712–721. doi: 10.1667/RR1295.1.
- Dons RF, Cerveny TJ. Triage and treatment of radiation-injured mass casualties. In: Walker RI, Cerveny TJ, editors. Medical consequences of nuclear warfare. Part I: Textbook of military medicine, Vol 2. Falls Church, VA: TMM Publications, Office of the Surgeon General; 1989.
- Fenech M. The lymphocyte cytokinesis-block micronucleus cytome assay and its application in radiation biodosimetry. Health Phys. 2010;98:000–000. doi: 10.1097/HP.0b013e3181b85044.
- Goans RE, Holloway EC, Berger ME, Ricks RC. Early dose assessment in criticality accidents. Health Phys. 2001;81:446–449. doi: 10.1097/00004032-200110000-00009.
- Goans RE, Holloway EC, Berger ME, Ricks RC. Early dose assessment following severe radiation accidents. Health Phys. 1997;72:513–518. doi: 10.1097/00004032-199704000-00001.
- Godfrey-Smith D. Toward in vivo OSL dosimetry of human tooth enamel. Radiat Meas. 2008;43:854–858.
- Gougelet RM, Nicolalde RJ, Rea M, Swartz HM. The view from the trenches: part 1—emergency medical response plans and the need for EPR screening. Health Phys. 2010;98:000–000. doi: 10.1097/HP.0b013e3181a6de7d.
- Grace MB, Moyer BR, Prasher J, Cliffer KD, Ramakrishnan N, Kaminski J, Coleman N, Manning RG, Maidment BW, Hatchett R. Rapid radiation dose assessment for radiological public health emergencies: roles of NIAID and BARDA. Health Phys. 2010;98:000–000. doi: 10.1097/01.HP.0000348001.60905.c0.
- Hill R. Dynamics of micronuclei in mouse skin fibroblasts after gamma irradiation. Health Phys. 2010;98:000–000. doi: 10.1097/HP.0b013e3181b02f90.
- Ikeya M, Miki T, Kai A, Hoshi M. ESR dosimetry of A-bomb radiation using tooth enamel and granite rocks. Radiat Prot Dosim. 1986;17:181–184.
- Ishii H, Ikeya M, Okano M. ESR dosimetry of teeth of residents close to Chernobyl reactor accident. J Nucl Sci Technol. 1990;27:1153–1155.
- Kang CM, Park KP, Song JE, Jeoung DI, Cho CK, Kim TH, Bae S, Lee SJ, Lee YS. Possible biomarkers for ionizing radiation exposure in human peripheral blood lymphocytes. Radiat Res. 2003;159:312–319. doi: 10.1667/0033-7587(2003)159[0312:pbfire]2.0.co;2.
- Kereiakes J, Van de Riet W, Born C, Ewing C, Silberstein E, Saenger E. Active bone-marrow dose related to hematological changes in whole-body and partial-body 60Co gamma radiation exposures. Radiology. 1972;103:651–656. doi: 10.1148/103.3.651.
- Ledney GD, Elliott TB. Combined injury: factors with potential to impact radiation dose assessments. Health Phys. 2010;98:000–000. doi: 10.1097/01.HP.0000348466.09978.77.
- Long XH, Zhao ZQ, He XP, Wang HP, Xu QZ, An J, Bai B, Sui JL, Zhou PK. Dose-dependent expression changes of early response genes to ionizing radiation in human lymphoblastoid cells. Int J Mol Med. 2007;19:607–615.
- Messerschmidt O. Combined effects of radiation and trauma. Adv Space Res. 1989;9:197–201. doi: 10.1016/0273-1177(89)90438-9.
- Nakamura N, Katanic JF, Miyazawa C. Contamination from possible solar light exposures in ESR dosimetry using human tooth enamel. J Radiat Res. 1998a;39:185–191. doi: 10.1269/jrr.39.185.
- Nakamura N, Miyazawa C, Sawada S, Akiyama M, Awa A. A close correlation between electron spin resonance (ESR) dosimetry from tooth enamel and cytogenetic dosimetry from lymphocytes of Hiroshima atomic-bomb survivors. Int J Radiat Biol. 1998b;73:619–627. doi: 10.1080/095530098141870.
- Ossetrova N, Sandgren DJ, Gallego S, Blakely WF. Combined approach of hematological biomarkers and plasma protein SAA for improvement of radiation dose assessment in triage biodosimetry applications. Health Phys. 2010;98:000–000. doi: 10.1097/HP.0b013e3181abaabf.
- Pellmar TC, Ledney GD. Combined injury: radiation in combination with trauma, infectious disease, or chemical exposures. Bethesda, MD: Armed Forces Radiobiology Research Institute; 2005. A467834.
- Prasanna PG, Martin PR, Subramanian U, Berdycheviski R, Krasnopolsky K, Duffy KL, Manglapus GL, Landauer MR, Srinivasan V, Boreham D, Hagan MP, Jinaratan V, Blakely WF. Cytogenetic biodosimetry for radiation disasters: recent advances. Bethesda, MD: Armed Forces Radiobiology Research Institute (AFRRI); 2005. CD-05-2, p. 10-1–10-14, A667834. Available at: http://en.scientificcommons.org/18624650. Accessed 9 November 2009.
- Prasanna PG, Moroni M, Pellmar TC. Triage dose assessment for partial-body exposure: dicentric analysis. Health Phys. 2010;98:000–000. doi: 10.1097/01.HP.0000348020.14969.4.
- Rea M, Gougelet RM, Nicolalde RJ, Swartz HM. Triaging categories and medical guidelines for the acute radiation syndrome. Health Phys. 2010;98:000–000.
- Riecke A, Ruf CG, Meineke V. Assessment of radiation damage—the need for a multiparametric and integrative approach with the help of both clinical and biological dosimetry. Health Phys. 2010;98:000–000. doi: 10.1097/HP.0b013e3181b97306.
- Rojas-Palma C, Liland A, Jerstad AN, Etherington G, Perez MR, Rahola T, Smith K. TMT handbook: triage, monitoring and treatment of people exposed to ionising radiation following a malevolent act. Norway: Lobo Media AS; 2009.
- Romanyukha A, Ignatiev E, Vasilenko E, Drozhko E, Wieser A, Jacob P, Keirim-Markus I, Kleschenko E, Nakamura N, Miyazawa C. EPR dose reconstruction for Russian nuclear workers. Health Phys. 2000;78:15–20. doi: 10.1097/00004032-200001000-00004.
- Romanyukha A, Reyes RA, Trompier F, Benevides LA. Status of EPR dosimetry in fingernails. Health Phys. 2010;98:000–000. doi: 10.1097/01.HP.0000347999.01948.74.
- Salikhov I, Swartz H. Measurement of specific absorption rate for clinical EPR at 1200 MHz. Appl Magn Reson. 2005;28:287–291.
- Sasaki MS. Advances in the biophysical and molecular bases of radiation cytogenetics. Int J Radiat Biol. 2009;85:26–47. doi: 10.1080/09553000802641185.
- Sharma M, Halligan BD, Wakim BT, Savin VJ, Cohen EP, Moulder JE. The urine proteome for radiation biodosimetry: effect of total body versus local kidney irradiation. Health Phys. 2010;98:000–000. doi: 10.1097/HP.0b013e3181b17cbd.
- Simon SL, Bouville A, Kleinerman R. Current use and future needs of biodosimetry in studies of long-term health risk following radiation exposure. Health Phys. 2010;98:000–000. doi: 10.1097/HP.0b013e3181a86628.
- Smith H. Dose-effect relationships for early response to total body irradiation. J Soc Radiol Prot. 1983;3:5–19.
- Swarts S. Quantitative analysis of EPR spectra taken from irradiated nails, application to measuring the radiation induced signal. Health Phys. 2010;98:000–000.
- Swartz HM, Molenda RP, Lofberg RT. Long-lived radiation-induced electron spin resonances in an aqueous biological system. Biochem Biophys Res Commun. 1965;21:61–65. doi: 10.1016/0006-291x(65)90426-2.
- Swartz HM, Burke G, Coey M, Demidenko E, Dong R, Grinberg O, Hilton J, Iwasaki A, Lesniewski P, Kmiec M, Lo K, Javier Nicolalde R, Ruuge A, Sakata Y, Sucheta A, Walczak T, Williams BB, Mitchell CA, Romanyukha A, Schauer DA. In vivo EPR for dosimetry. Radiat Meas. 2007;42:1075–1084. doi: 10.1016/j.radmeas.2007.05.023.
- Swartz HM, Iwasaki A, Walczak T, Demidenko E, Salikhov I, Khan N, Lesniewski P, Thomas J, Romanyukha A, Schauer D, Starewicz P. In vivo EPR dosimetry to quantify exposures to clinically significant doses of ionising radiation. Radiat Prot Dosim. 2006;120:163–170. doi: 10.1093/rpd/nci554.
- Swartz HM, Iwasaki A, Walczak T, Demidenko E, Salikhov I, Lesniewski P, Starewicz P, Schauer D, Romanyukha A. Measurements of clinically significant doses of ionizing radiation using non-invasive in vivo EPR spectroscopy of teeth in situ. Appl Radiat Isotopes. 2005;62:293–299. doi: 10.1016/j.apradiso.2004.08.016.
- Tikunov D, Trompier F, Ivannikov A, Clairand I, Herve M, Khailov A, Skvortsov V. Relative sensitivity of tooth enamel to fission neutrons: effect of secondary protons. Radiat Meas. 2005;39:509–514.
- Trompier F, Fattibene P, Tikunov D, Bartolotta A, Carosi A, Doca M. EPR dosimetry in a mixed neutron and gamma radiation field. Radiat Prot Dosim. 2004;110:437–442. doi: 10.1093/rpd/nch225.
- Waselenko JK, MacVittie TJ, Blakely WF, Pesik N, Wiley AL, Dickerson WE, Tsu H, Confer DL, Coleman CN, Seed T. Medical management of the acute radiation syndrome: recommendations of the Strategic National Stockpile Radiation Working Group. Ann Intern Med. 2004;140:1037–1051. doi: 10.7326/0003-4819-140-12-200406150-00015.
- Wilcox DE, He X, Gui J, Ruuge AE, Li H, Williams BB, Swartz HM. Dosimetry based on EPR spectral analysis of fingernail clippings. Health Phys. 2010;98:000–000. doi: 10.1097/HP.0b013e3181b27502.
- Wilkins R, Flegal FN, Devantier Y, McNamee JP. Quickscan dicentric chromosome analysis for radiation biodosimetry. Health Phys. 2010;98:000–000. doi: 10.1097/HP.0b013e3181aba9c7.
- Williams BB, Sucheta A, Dong R, Sakata Y, Iwasaki A, Burke G, Grinberg O, Lesniewski P, Kmiec M, Swartz HM. Experimental procedures for sensitive and reproducible in situ EPR tooth dosimetry. Radiat Meas. 2007;42:1094–1098. doi: 10.1016/j.radmeas.2007.05.001.
- Williams BB, Dong R, Kmiec M, Burke G, Demidenko E, Gladstone D, Nicolalde RJ, Sucheta A, Lesniewski P, Swartz HM. In vivo EPR tooth dosimetry. Health Phys. 2010;98:000–000. doi: 10.1097/HP.0b013e3181a6de5d.
- Zdravkova M, Denis JM, Gallez B, Debuyst R. Sensitivity of whole human teeth to fast neutrons and gamma-rays estimated by L-band EPR spectroscopy. Radiat Meas. 2002;35:603–608. doi: 10.1016/s1350-4487(02)00125-7.