Abstract
With the release of the landmark report Toxicity Testing in the 21st Century: A Vision and a Strategy, the U.S. National Academy of Sciences, in 2007, precipitated a major change in the way toxicity testing is conducted. The report envisions increased efficiency in toxicity testing and decreased animal usage by transitioning from current expensive and lengthy in vivo testing with qualitative endpoints to in vitro toxicity pathway assays on human cells or cell lines, using robotic high-throughput screening with mechanistic quantitative parameters. Risk assessment in the exposed human population would focus on avoiding significant perturbations in these toxicity pathways. Computational systems biology models would be implemented to characterize the dose-response relationships for perturbations of pathway function. Extrapolation of in vitro results to in vivo human blood and tissue concentrations would be based on pharmacokinetic models for the given exposure condition. This practice would enhance the human relevance of test results and would broaden coverage of test agents compared with traditional toxicological testing strategies. Because all the tools necessary to implement the vision are currently available or in an advanced stage of development, the key prerequisites for achieving this paradigm shift are a commitment to change in the scientific community, which could be facilitated by a broad discussion of the vision, and the resources needed to improve current knowledge of pathway perturbations and pathway assays in humans and to implement computational systems biology models. Implementation of these strategies would result in a new toxicity-testing paradigm firmly based on human biology.
Toxicity testing is approaching a pivotal point where it is poised to take advantage of the revolution in biology and biotechnology. The current system is the product of an approach that has addressed advances in science by incrementally expanding test protocols or by adding new tests without evaluating the testing system in light of overall risk-assessment and risk-management needs. That approach has led to a system that is somewhat cumbersome with respect to the cost of testing, the use of laboratory animals, and the time needed to generate and review data. In combination with varied statutory requirements for testing, it has also resulted in a system in which there are substantial differences in chemical testing, with many chemicals not being tested at all despite potential human exposure to them. Furthermore, the data that are generated might not be ideal for answering questions regarding risk to human health. Accordingly, the U.S. Environmental Protection Agency (EPA) recognized that the time had come for an innovative approach to toxicity testing and asked the National Research Council (NRC) to develop a long-range vision and strategy for toxicity testing. In response to the U.S. EPA’s request, the NRC convened the Committee on Toxicity Testing and Assessment of Environmental Agents, which prepared a report. This document is based on the NRC Committee’s report on Toxicity Testing and Assessment of Environmental Agents.
HISTORICAL PERSPECTIVE OF REGULATORY TOXICOLOGY
To gain an appreciation of current toxicity-testing strategies, it is helpful to examine how they evolved, why differences arose among and within federal agencies, and who contributed to the process. The current strategies have their foundation in the response to a tragedy that occurred in 1937 (Gad & Chengelis, 2001). At that time, few laws prevented the sale of unsafe food or drugs. A labeling law prohibited the sale of “misbranded” food or drugs, but the law could be enforced only on the basis of criminal charges that arose after sale of a product. During fall 1937, the Massengill Company marketed a drug labeled “Elixir of Sulfanilamide,” which was a solution of sulfanilamide in diethylene glycol. By the time the drug’s toxicity was recognized and it was removed from the market by the Food and Drug Administration (FDA), it had caused at least 73 deaths. The tragedy revealed the inadequacy of the existing law. FDA was able to act only because the drug had been mislabeled; at that time, an elixir was defined as a product that contained alcohol. If the company had labeled the drug “Solution of Sulfanilamide,” FDA would not have been able to act.
As a result of the sulfanilamide tragedy, Congress passed the Food, Drug, and Cosmetic Act (FDCA) of 1938, which required evidence (that is, from toxicity studies in animals) of drug safety before marketing (Gad & Chengelis, 2001). Major amendments to the FDCA in 1962, known as the Kefauver–Harris Amendments, strengthened the original law and required proof not only of drug safety but of drug efficacy. More extensive clinical trials were required, and FDA had to indicate affirmative approval of a drug before it could be marketed. The approval process thus changed from one based on premarket notification to one based on premarket approval.
The FDCA also dealt with food-safety issues and was amended in 1958 to require manufacturers to demonstrate the safety of food additives (Frankos & Rodricks, 2001). FDA was given authority to develop toxicity studies for assessing food additives and to specify criteria to be used in assessing safety. As a result of the need for scientific safety assessments, toxicologists in FDA, academe, and industry developed the first modern protocols in toxicology during the 1950s and 1960s (see, for example, FDA, 1959). Those protocols helped to shape the toxicity-testing programs that are in use today.
Differences in testing strategies between drugs and foods arose in FDA because of differences in characteristics and regulatory requirements (Frankos & Rodricks, 2001). Drugs are chemicals with intended biologic effects in people, whereas food additives—such as antioxidants, emulsifiers, and stabilizers—have intended physical and chemical effects in food. Thus, a drug manufacturer must demonstrate the desired biologic effect, and a food-additive manufacturer must demonstrate the absence of measurable biologic effect. Regarding regulatory requirements, the FDCA requires clinical trials in humans for drug approval; there is no such requirement for food additives. FDA considers risks and benefits when approving a drug but considers only safety when approving a food additive. Thus, differences in approaches to food and drug testing have evolved.
The public has long been concerned about the safety of intentional food additives and drugs. By the late 1960s, concern about exposure to chemical contaminants in the environment was also growing. In 1970, the U.S. EPA was established “to protect human health and to safeguard the natural environment—air, water, and land—upon which life depends” (U.S. EPA, 2005a). Over the years, U.S. EPA has developed toxicity-testing strategies to evaluate pesticides and industrial chemicals that may eventually appear as food residues or as environmental contaminants.
The 1947 Federal Insecticide, Fungicide, and Rodenticide Act (FIFRA) required the registration of pesticides before marketing in interstate or foreign commerce (Conner et al., 1987). The statute was first administered by the U.S. Department of Agriculture, but authority was transferred to the U.S. EPA when it was created. FIFRA has been amended several times, but the 1972 amendments transformed FIFRA and gave U.S. EPA new powers, such as classification of pesticides and regulation of pesticide residues on raw agricultural commodities. Although registration remained the centerpiece of the act, one amendment required proof that the pesticide did not cause “unreasonable adverse effects” on humans or the environment (Conner et al., 1987). That amendment was largely responsible for the testing strategy that eventually emerged in U.S. EPA.
The other critical pieces of legislation that helped to shape the current toxicity-testing strategy for pesticides were amendments to the FDCA. In 1954, the Miller Amendment “required that a maximum acceptable level (tolerance) be established for pesticide residues in foods and animal feed” (Conner et al., 1987). The Food Quality Protection Act of 1996 amended the FDCA (and FIFRA) and “fundamentally changed the way EPA regulates pesticides” (U.S. EPA, 2005b). Some of the most important changes were the establishment of a risk-based standard for pesticide residues on all foods, the requirement that U.S. EPA “consider all non-occupational sources of exposure … and exposure to other pesticides with a common mechanism of toxicity when setting tolerances,” the requirement that U.S. EPA set tolerances that would ensure safety for infants and children, and the requirement that U.S. EPA develop and implement an endocrine-disruptor screening program (U.S. EPA, 2006a).
FIFRA, the FDCA, and the amendments to them are responsible for the current toxicity-testing strategy for pesticides, which typically requires extensive testing before a pesticide can be marketed. The strategy for evaluating industrial chemicals is different. The Toxic Substances Control Act (TSCA) was passed in 1976 to address control of new and existing industrial chemicals not regulated by other statutes (Kraska, 2001). Although manufacturers are required to submit premanufacturing notices—which include such information as chemical identity, intended use, manufacturing process, and expected exposure—no specific toxicity testing is required [see NRC (2006a) interim report for more information on the extent of chemical testing under TSCA]. Instead, the strategy for evaluating industrial chemicals relies heavily on the use of structure–activity relationships.
FDA’s drug and food-additive testing programs and U.S. EPA’s pesticide testing program represent strategies designed to support safety evaluations of chemicals before specified uses. Other testing can occur in response to regulatory concerns regarding environmental agents. For example, U.S. EPA sponsors some toxicity testing, epidemiologic studies, and test development to support its regulatory mandates, such as those under the Safe Drinking Water Act. The Health Effects Institute (HEI), a joint U.S. EPA- and industry-sponsored organization, funds toxicity studies to inform regulatory decisions on air pollutants. As regulatory concerns arise, industry may initiate testing to further evaluate dose-response relationships of important environmental contaminants. The National Toxicology Program (NTP)—which was created in 1978 to “coordinate toxicology testing programs within the federal government[,] … strengthen the science base in toxicology[,] … develop and validate improved testing methods[,] … [and] provide information about potentially toxic chemicals to health, regulatory, and research agencies, scientific and medical communities, and the public” (NTP, 2005a)—performs toxicity tests on agents of public-health concern. For example, its chronic bioassay has become the gold standard for carcinogenicity testing. The NTP has been instrumental in the acceptance and integration of new tests or approaches in toxicity-testing strategies. It has initiated development of medium- and high-throughput tests to address the ever-growing number of newly introduced chemicals and the existing chemicals and breakdown products that have not been tested. [The NTP’s general approach, as described in its Roadmap for the Future, is reviewed in the NRC (2006a) report.] Tests proposed by NTP and others that are alternatives to standard protocols are formally reviewed by an interagency authority, the Interagency Coordinating Committee on the Validation of Alternative Methods (ICCVAM), to ensure that they have value in regulatory decision making.
Another organization that has influenced toxicity-testing programs in the United States is the Organization for Economic Cooperation and Development (OECD). OECD is an organization that “provides a setting where governments can compare policy experiences, seek answers to common problems, identify good practice and co-ordinate domestic and international policies” (OECD, 2006, p. 7). OECD’s broad interests include health and the environment. OECD has been instrumental in developing internationally accepted, or harmonized, toxicity-testing guidelines. The goal of the harmonization program is to reduce the repetition of similar tests conducted by member countries to assess the toxicity of a given chemical. Other OECD programs that have influenced toxicity-testing approaches or strategies include those to define the tests required for a minimal data set for a chemical and to determine the approach to screening endocrine disruptors.
RISK ASSESSMENT
The toxicity data generated by the strategies and programs just described are most often used in a process called risk assessment to evaluate the risk associated with exposure to an agent. The 1983 NRC report Risk Assessment in the Federal Government: Managing the Process, which presented a systematic and organized paradigm, set a standard for risk assessment. The report outlined a three-phase process in which scientific data are moved from the laboratory or the field into the risk-assessment process and then on to decision makers to determine regulatory options.
The research phase is marked by data generation and method development, including basic research and routine testing. For any particular risk assessment, the data used may have many sources, including studies of laboratory animals, clinical tests, epidemiologic studies, and studies of animal and human cells in culture. The data may be reported in peer-reviewed publications, in the general scientific literature and government reports, and in unpublished reports of specific tests undertaken for an assessment.
In the risk-assessment phase, selected data are interpreted and used to evaluate a potential risk to human health and the environment. The 1983 NRC report described this phase in terms of four components: hazard identification (analysis of the available data to describe qualitatively the nature of the response to toxic chemicals, such as tumors, birth defects, and neurologic effects); dose-response analysis (quantification of the relationship between exposure and the response observed in studies used to identify hazard); exposure assessment (quantification of expected exposure to the agent among the general population and differently exposed groups); and risk characterization (synthesis and integration of the analyses in the three other components to estimate the likelihood and scope of risk among the general, sensitive, and differently exposed populations). Although risk assessment is based on scientific data, the process is characterized by gaps in data and fundamental scientific knowledge, and it relies on models, extrapolation, and other inference methods. The process turns to science policies—choice of mathematical models, safety factors, and assumptions—to fill in data and knowledge gaps. Science policies used in risk assessment are distinct from the regulatory policies developed for risk-management decisions described below.
Risk management moves the original data—now synthesized and integrated in the form of a risk characterization—to those responsible for making regulatory decisions. The decision makers consider the products of the risk assessment with data from other fields (for example, economics), societal and political issues, and interagency and international factors to decide whether regulation is needed and, if so, its nature and scope.
The 1983 NRC report and later reports (NRC, 1993, 1996; U.S. EPA, 1998a) recognized a planning and scoping stage in which a host of scientific and societal issues are considered in advance of research and risk assessment. That activity includes examining the expected scope of the problem, available data and expected data needs, cost and time requirements, legal considerations, and community-related issues. The present report identifies some of those considerations, along with other public-health considerations, as “risk contexts” and underlines their important role in decisions related to toxicity testing (see discussion under “The Committee’s Second Task and Approach” in this section).
Reviews and critiques of the 1983 NRC paradigm have for the most part focused on the risk-assessment module and its four components. A review of the literature shows considerably less attention to the research module and the risk-management module. The present report focuses on the research module, in which testing is conducted; however, it ventures into some risk-assessment considerations.
The Committee’s First Task and Key Points From its Interim Report
Anticipating the impact of the many scientific advances and the changing needs of the assessment process, U.S. EPA recognized the need to review existing strategies and develop a long-range vision for toxicity testing and assessment. The committee that was formed in response to U.S. EPA’s request and convened in March 2004 includes experts in developmental toxicology, reproductive toxicology, neurotoxicology, immunology, pediatrics and neonatology, epidemiology, biostatistics, in vitro methods and models, molecular biology, pharmacology, physiologically based pharmacokinetic and pharmacodynamic models, genetics, toxicogenomics, cancer hazard assessment, and risk assessment.
As a first task, the committee was asked to review several relevant reports by U.S. EPA and others and to comment on aspects pertaining to new developments in toxicity testing and proposals to modify current approaches. Accordingly, the committee reviewed the 2002 U.S. EPA evaluation of its reference-dose and reference-concentration process (U.S. EPA, 2002), the International Life Sciences Institute Health and Environmental Sciences Institute draft reports on a tiered toxicity-testing approach for agricultural-chemical safety evaluations (ILSI-HESI, 2004a, 2004b, 2004c), the 2004 European Union report on the REACH (Registration, Evaluation and Authorisation of Chemicals) program, and the 2004 report on the near-term and long-term goals of the National Toxicology Program (NTP, 2004). The committee’s interim report, released in December 2005, fulfilled the first part of the study.
As discussed in its interim report (NRC, 2006a), the committee’s review of current toxicity-testing strategies revealed a system that had reached a turning point. Agencies typically have responded to scientific advances and emerging challenges by simply altering individual tests or adding tests to existing regimens. That patchwork approach has not provided a fully satisfactory solution to the fundamental problem—the difficulty in meeting four objectives simultaneously: depth, providing the most accurate, relevant information possible for hazard identification and dose-response assessment; breadth, providing data on the broadest possible universe of chemicals, endpoints, and life stages; animal welfare, causing the least animal suffering possible and using the fewest animals possible; and conservation, minimizing the expenditure of money and time on testing and regulatory review.
The committee identified several recurring themes and questions in the various reports that it was asked to review. The recurring themes included the following:
- The inherent tension between breadth, depth, animal welfare, and conservation and the challenge to address one of these issues without worsening another.
- The importance of distinguishing between testing protocols and testing strategies while considering modifications of current testing practices.
- The possible dangers in making tests so focused that they evaluate only one endpoint in one species and thus provide no overlap to verify results.
- The need for both chemical-specific tailored testing to enhance understanding of a particular chemical’s mode of action and uniform testing protocols and strategies to enhance comparability.
- The importance of recognizing that toxicity testing for regulatory purposes should be conducted primarily to serve the needs of risk management.
The recurring questions that arose during the committee’s review included the following: Which environmental agents should be tested? How should priorities for testing chemicals be set? What strategies for toxicity testing are the most useful and effective? How can toxicity testing generate data that are more useful for human health risk assessment? How can toxicity testing be applied to a broader universe of chemicals, life stages, and health effects? How can environmental agents be screened with minimal use of animals and efficient expenditure of time and other resources? How should tests and testing strategies be evaluated?
In considering those questions, the committee came to several important conclusions. First, the intensity and depth of testing should be based on practical needs, including the use of the chemical, the likelihood of human exposure, and the scientific questions that testing must answer to support a reasonable science-policy decision. Fundamentally, the design and scope of a toxicity-testing approach need to reflect risk-management needs. Thus, the goal is to focus resources on the evaluation of the more sensitive adverse effects of exposures of greatest concern rather than on full characterization of all adverse effects irrespective of relevance for risk-assessment and risk-management needs. Second, priority setting should be a component of any testing strategy that is designed to address a large number of chemicals. Chemicals to which people are more likely to be exposed, or to which some segment of the population might be exposed at relatively high levels, should undergo more in-depth testing, and this concept is embedded in several existing and proposed strategies. Third, there are major gaps in current toxicity-testing approaches. The importance of the gaps is a matter of debate and depends on whether effects of public-health importance are being missed by current approaches. Testing every chemical for every possible health effect over all life stages is impractical; however, the emerging technologies hold great promise for screening chemicals more rapidly. Fourth, testing strategies will need to be evaluated with respect to the value of information that they provide in light of the four objectives already discussed—depth, breadth, animal welfare, and conservation. In evaluating new tests, there remains the difficult question of what should serve as the gold standard for performance. Simply comparing the outcomes of new tests with the outcomes of currently used tests might not be the best approach; whether it is will depend on the reliability and relevance of the current tests.
The Committee’s Second Task and Approach
For the second part of the study, the committee’s statement of task was to build on the work presented in the first report and develop a long-range vision and strategic plan to advance the practices of toxicity testing and human health assessment of environmental contaminants. The committee was directed to consider the following specific issues:
- Improvements in the assessment of key exposures (for example, potential susceptibility of specific life stages and groups in the general population) and toxicity outcomes (for example, endocrine disruption and developmental neurotoxicity).
- Incorporation of state-of-the-science testing and assessment procedures, methods, and approaches, such as genomics, proteomics, transgenics, bioinformatics, and pharmacokinetics.
- Methods for increasing efficiency in experimental design and reducing the use of laboratory animals.
- Potential uses and limitations of new or alternative testing methods.
- Application of emerging computational and molecular techniques in risk assessment. Issues to be considered included the data necessary to validate the techniques, the limitations of the techniques, the use of such methods to identify plausible mechanisms or pathways of toxicity, and the use of mechanistic insights in risk assessments or testing decisions.
To prepare its final report, the committee held six meetings from April 2005 to June 2006. Three of the meetings included public sessions during which the committee heard presentations by staff members of several U.S. EPA offices, including the Office of Prevention, Pesticides and Toxic Substances, the Office of Children’s Health Protection, the Office of Water, the Office of Solid Waste and Emergency Response, and the Office of Air and Radiation. The committee also heard presentations by persons in other government agencies, industry, and academe.
To develop its long-range vision, the committee identified a variety of scenarios for which toxicity-testing information would be needed to make a decision. Some common scenarios, defined by the committee as “risk contexts” for which toxicity testing is used to generate information needed for decision making, are outlined next.
- Evaluation of new environmental agents. This category covers chemicals that have the potential to appear as environmental contaminants. It includes pesticides; industrial chemicals; chemicals that are destined for use in, for example, consumer products; and chemicals that might be emitted by the combustion of new fuels or new manufacturing processes. It would also include their breakdown products. Because of the large number of new agents that are introduced each year, a mechanism is needed to test the agents rapidly for potential toxicity. Questions have been raised about the safety of and risk posed by new categories of potential environmental agents, such as those introduced through nanotechnology and biotechnology. This category would also include those substances.
- Evaluation of existing environmental agents. Many substances already in the environment have not been evaluated for toxicity. In some cases, a need to evaluate specific existing environmental agents may arise from the discovery of a new source or exposure pathway or from a better understanding of human exposure on the basis of, for example, biomonitoring data. In other cases, scrutiny may be necessary when toxicity is newly recognized, such as toxicity in a worker population. In addition, the backlog of untested chemicals in commerce requires assessment to ensure that the chemicals in use today do not pose unacceptable risks at current exposures. Thus, toxicity testing for existing environmental agents requires a variety of testing approaches, from basic screening of a huge set of chemical agents to use of specific data generated by new exposure or health-effects information.
- Evaluation of a site. In many areas, soil or water has been contaminated by, for example, former industrial, military, or power-generation activities. If a new use, such as the building of a school or office building, is proposed for such a site, a primary goal would be to protect the health of future users of the site. Other goals could include evaluating the risks to neighbors posed by such a site or determining the degree and type of cleanup needed. Sites that are in use also might need evaluation, such as sites of industrial workplaces, schools, or office buildings. Those evaluations almost always involve concerns about exposures to site-specific chemical mixtures.
- Evaluation of potential environmental contributors to a specific disease. Many diseases are suspected of having an etiology that is, at least in part, environmental. A higher prevalence of a disease in one geographic area than in another might require decision makers to consider the role of environmental agents in the disparity. Understanding the role of environmental agents in a prevalent disease can also help to target actions that need to be taken. For example, asthma, which has seen an increase in prevalence over the last two decades in Western societies, is now known to be induced or aggravated by air pollutants. That understanding has allowed decision makers to take action against some pollutants, but other causes or triggers of asthma could yet be discovered.
- Evaluation of the relative risks posed by environmental agents. A risk manager might need to choose between different manufacturing processes or different solvents. Consumers might wish to distinguish between products on the basis of their potential risks to children. A proponent of a new chemical or process might wish to show that it has a lower risk in some ways than the current chemical or process. Such decisions might require less complex risk characterizations if they focus on the possible outcomes or exposures to be compared rather than requiring an in-depth understanding of the risks associated with each possible choice. This scenario emphasizes the need for toxicity-testing information to be directly comparable, standardized, and quantifiable so that such comparisons can be made.
Thus, a primary goal of the committee was to develop a flexible toxicity-testing strategy that would be responsive to the different toxicity-testing needs of the various risk contexts outlined above. Another goal of the committee was to consider the powerful new technologies that have become available and will continue to evolve. For example, bioinformatics, which applies computational approaches to describe and predict biologic function at the molecular level, and systems biology, which is a powerful approach to describing and understanding fundamental mechanisms by which biologic systems operate, have pushed biologic understanding into a new realm. Moreover, genomics, proteomics, and metabolomics offer great potential and are being used to study human disease and to evaluate the safety of pharmaceutical products. Those and other tools are considered to be important in any future toxicity-testing strategy.
ORGANIZATION OF THE DOCUMENT
This document is organized into five more sections. In the section titled “Vision,” the limitations of the current toxicity-testing system, the design goals for a new system, and the options considered by the committee are discussed. An overview of the new long-range vision for toxicity testing of environmental agents is also presented. Each component of the new vision is discussed in greater detail in the section “Components of the Vision.” Tools and technologies that might be used in the future toxicity-testing paradigm are described in the section “Tools and Technologies.” Implementation of the new vision over the course of several decades is considered in the section “Developing the Science Base and Assays to Implement the Vision.” In the final section, “Prerequisites for Implementing the Vision in Regulatory Contexts,” the committee considers the implications of the long-range vision given the current regulatory framework.
VISION
Make no little plans. They have no magic to stir men’s blood and probably themselves will not be realized. Make big plans; aim high in hope and work, remembering that a noble, logical diagram once recorded will never die, but long after we are gone will be a living thing, asserting itself with ever-growing insistency. (Daniel Hudson Burnham, Architect, Designer of the 1893 Chicago World’s Fair)
The goal of toxicity testing is to develop data that can ensure appropriate protection of public health from the adverse effects of exposures to environmental agents. Current approaches to toxicity testing rely primarily on observing adverse biologic responses in homogeneous groups of animals exposed to high doses of a test agent. However, the relevance of such animal studies for the assessment of risks to heterogeneous human populations exposed at much lower concentrations has been questioned. Moreover, the studies are expensive and time-consuming and can use large numbers of animals, so only a small proportion of chemicals have been evaluated with these methods. Adequate coverage of different life stages, of endpoints of public concern, such as developmental neurotoxicity, and of mixtures of environmental agents is a continuing concern. Current tests also provide little information on modes and mechanisms of action, which are critical for understanding interspecies differences in toxicity, and little or no information for assessing variability in human susceptibility. Thus, the committee looked to recent scientific advances to provide a new approach to toxicity testing.
A revolution is taking place in biology. At its center is the progress being made in the elucidation of cellular-response networks. Those networks are interconnected pathways composed of complex biochemical interactions of genes, proteins, and small molecules that maintain normal cellular function, control communication between cells, and allow cells to adapt to changes in their environment. A familiar cellular-response network is signaling by estrogens in which initial exposure results in enhanced cell proliferation and growth of specific tissues or in proliferation of estrogen-sensitive cells in culture (Frasor et al., 2003). In that type of network, initial interactions between a signaling molecule and various cellular receptors result in a cascade of early, midterm, and late responses to achieve a coordinated response that orchestrates normal physiologic functions (Landers & Spelsberg, 1992; Thummel, 2002; Rochette-Egly, 2003).
Bioscience is rapidly enhancing our knowledge of cellular-response networks and allowing scientists to begin to uncover the manner in which environmental agents perturb pathways to cause toxicity. Pathways that can lead to adverse health effects when sufficiently perturbed are termed toxicity pathways. Responses of cells to oxidative stress caused by exposure to diesel exhaust particles (DEP) constitute an example of toxicity pathways within a cellular-response network (Xiao et al., 2003). In a dose-related fashion, in vitro exposures to DEP lead to activation of a hierarchic set of pathways. First, cell antioxidant signaling is increased. As the dose increases, inflammatory signaling is enhanced; finally, at higher doses, there is activation of cell-death (apoptosis) pathways (Nel et al., 2006). Thus, in the cellular-response network dealing with oxidative stress, the antioxidant pathways activated by DEPs are normal adaptive signaling pathways that assist in maintaining homeostasis; however, they are also toxicity pathways in that they lead to adverse effects when oxidant exposure is sufficiently high. The committee capitalizes on the recent advances in elucidating and understanding toxicity pathways and proposes a new approach to toxicity testing based on them.
New investigative tools are providing knowledge about biologic processes and functions at an astonishing rate. In vitro tests that evaluate activity in toxicity pathways are elucidating the modes and mechanisms of action of toxic substances. Quantitative high-throughput assays can be used to expand the coverage of the universe of new and existing chemicals that need to be evaluated for human health risk assessment (Roberts, 2001; Inglese, 2002; Inglese et al., 2006; Haney et al., 2006). The new assays can also generate enhanced information on dose-response relationships over a much wider range of concentrations, including those representative of human exposure. Pharmacokinetic and pharmacodynamic models promise to provide more accurate extrapolation of tissue dosimetry linked to cellular and molecular endpoints. The application of toxicogenomic technologies and systems-biology evaluation of signaling networks will permit genomewide scans for genetic and epigenetic perturbations of toxicity pathways. Thus, perturbations of toxicity pathways, rather than apical endpoints from whole-animal tests, are envisioned as the basis of a new toxicity-testing paradigm for managing the risks posed by environmental agents.
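To make the idea of quantitative concentration-response information from such assays concrete, the sketch below fits a Hill model to hypothetical readings from an in vitro pathway assay. The data, parameter values, and the choice of a Hill function are illustrative assumptions, not results from the committee's report.

```python
# Illustrative sketch (hypothetical data): fitting a Hill model to
# concentration-response data from an in vitro pathway assay, as one way
# the dose-response information described above might be quantified.
import numpy as np
from scipy.optimize import curve_fit

def hill(conc, top, ac50, n):
    """Hill model: fractional pathway response as a function of concentration (uM)."""
    return top * conc**n / (ac50**n + conc**n)

# Hypothetical assay readings (e.g., fold-change in antioxidant-response signaling)
conc = np.array([0.01, 0.03, 0.1, 0.3, 1.0, 3.0, 10.0, 30.0])      # uM
resp = np.array([0.02, 0.05, 0.10, 0.24, 0.55, 0.80, 0.93, 0.97])  # fraction of max

params, _ = curve_fit(hill, conc, resp, p0=[1.0, 1.0, 1.0])
top, ac50, n = params
print(f"Estimated AC50 = {ac50:.2f} uM, Hill slope = {n:.2f}")
```

A fitted potency estimate of this kind (here labeled AC50) is the sort of summary statistic that could feed into the dose-response and extrapolation modeling described later in this document.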
This section provides an overview of the committee’s vision but first discusses the limitations of current toxicity-testing strategies, the design goals for a new system, and the options that the committee considered. Key terms used throughout this report are listed and defined in Table 1.
TABLE 1. Key terms used throughout this report.
Limitations of Current Testing Strategies
The exposure-response continuum shown in Figure 1 effectively represents the current approach to toxicologic risk assessment. It focuses primarily on adverse health outcomes as the endpoints for assessing the risk posed by environmental agents and establishing human exposure guidelines. Although intermediate biologic changes and mechanisms of action are considered in the paradigm, they are viewed as steps along the pathway to the ultimate induction of an adverse health outcome.
Traditional toxicity-testing strategies undertaken in the context of the preceding paradigm have evolved and expanded over the last few decades to reflect increasing concern about a wider variety of toxic responses, such as subtle neurotoxic effects and adverse immunologic changes. The current system, which relies primarily on a complex set of whole-animal-based toxicity-testing strategies for hazard identification and dose-response assessment, has difficulty in addressing the wide variety of challenges that toxicity testing must meet today. Toxicity testing is under increasing pressure to meet several competing demands:
- Test large numbers of existing chemicals, many of which lack basic toxicity data.
- Test the large number of new chemicals and novel materials, such as nanomaterials, introduced into commerce each year.
- Evaluate potential adverse effects with respect to all critical endpoints and life stages.
- Evaluate potential toxicity in the most vulnerable members of the human population.
- Minimize animal use.
- Reduce the cost and time required for chemical safety evaluation.
- Acquire detailed mechanistic and tissue-dosimetry data needed to assess human risk quantitatively and to aid in regulatory decision making.
The current approach relies primarily on in vivo mammalian toxicity testing and is unable to meet those competing demands adequately. In 1979, about 62,000 chemicals were in commerce (Government Accountability Office [GAO], 2005). Today, there are 82,000, and about 700 are introduced each year (GAO, 2005). The large number of new and existing chemicals in commerce is not being fully assessed (see the committee’s interim report, NRC, 2006a). One reason for the testing gaps is that current testing is so time-consuming and resource-intensive. Furthermore, only limited mechanistic information is routinely developed to understand how most chemicals are expected to produce adverse health effects in humans. Those deficiencies limit the ability to predict toxicity in human populations that are typically exposed to much lower doses than those used in whole-animal studies. They also limit the ability to develop predictions about similar chemicals that have not been similarly tested. The following sections describe several limitations of the current system and describe how a system based on toxicity pathways would help to address them.
Low-Dose Extrapolation From High-Dose Data
Traditional toxicity testing has relied on administering high doses to animals of nearly identical susceptibility to generate data for identifying critical endpoints for risk assessment. Historically, exposing animals to high doses was justified by a need for sufficient statistical power to observe high incidences of toxic responses in small test populations with relatively short exposures. In many cases, daily doses in animal toxicity tests are orders of magnitude greater than those expected in human exposures. Thus, the use of high-dose animal toxicity tests for predicting risks of specific apical human endpoints has remained challenging and controversial. Inferring effects at lower doses is difficult because of inherent uncertainty in the nature of dose-response relationships. Effects at high doses may result from metabolic processes that contribute negligibly at lower doses or may arise from biologic processes that do not occur with treatment at lower doses. In contrast, high doses may cause overt toxic responses that preclude the detection of biologic interactions between the chemical and various signaling pathways that lead to subtle but important adverse effects. The vision proposed in this report offers the potential to obtain direct information on toxic effects at exposures more relevant to those experienced by human populations.
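A simple numerical sketch illustrates why low-dose inference is so uncertain: two models that are roughly consistent with the same hypothetical high-dose observations can differ by many orders of magnitude when extrapolated to doses near typical human exposures. All numbers below are invented for illustration.

```python
# Illustrative sketch (hypothetical numbers): two models consistent with the
# same high-dose animal data diverge sharply when extrapolated to low doses.
import numpy as np

def linear_model(dose, slope=0.002):
    """Linear, no-threshold extrapolation of extra risk per mg/kg-day."""
    return slope * dose

def hill_model(dose, max_risk=0.25, kd=60.0, n=3.0):
    """Sublinear (Hill-type) model implying negligible low-dose response."""
    return max_risk * dose**n / (kd**n + dose**n)

high_doses = np.array([50.0, 100.0])   # mg/kg-day, typical bioassay doses
low_dose = 0.01                        # mg/kg-day, closer to human exposure

for d in high_doses:
    print(f"dose {d:6.2f}: linear {linear_model(d):.3f}, hill {hill_model(d):.3f}")
print(f"dose {low_dose:6.2f}: linear {linear_model(low_dose):.2e}, "
      f"hill {hill_model(low_dose):.2e}")
```

In this made-up example the two models agree closely at the bioassay doses yet differ by roughly seven orders of magnitude at 0.01 mg/kg-day, which is the crux of the low-dose extrapolation problem that pathway-level measurements at relevant concentrations are intended to sidestep.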
Animal-to-Human Extrapolation
Other concerns arise about the relationship between the biology of the test species and the heterogeneous human population. Animals have served as models of human response for decades because the biology of the test animals is, in general, similar to that of humans (NRC, 1977). However, although the generality holds true, there are several examples of idiosyncratic responses in test animals and humans in which chemicals do not have a specific toxic effect in a test species but do in humans and vice versa. A classic example is thalidomide: Rats are resistant, and human fetuses are sensitive. The committee envisions a future in which tests based on human cell systems can serve as better models of human biologic responses than apical studies in different species. The committee therefore believes that, given a sufficient research and development effort, human cell systems have the potential to largely supplant testing in animals.
Mixtures
Current toxicity-testing approaches have been criticized because of their failure to consider co-exposures that commonly occur in human populations. Because animal toxicity tests are time-consuming and resource-intensive and result in the sacrifice of animals, it is difficult to use them for substantial testing of chemical mixtures (NRC, 1988; Cassee et al., 1998; Feron et al., 1995; Lydy et al., 2004; Bakand et al., 2005; Pauluhn, 2005; Teuschler et al., 2005). Furthermore, without information on how chemicals exert their biologic effects, testing of mixtures is a daunting task. For example, testing of mixtures in animal assays could involve huge numbers of combinations of chemicals and the use of substantial resources in an effort of uncertain value. In contrast, testing based on toxicity pathways could allow grouping of chemicals according to their effects on key biologic pathways. Combinations of chemicals that interact with the same toxicity pathway could be tested over broad dose ranges much more rapidly and inexpensively. The resulting data could allow an intelligent and focused approach to the problem of assessing risk in human populations exposed to mixtures.
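One way pathway-based grouping might support mixture assessment is through concentration (dose) addition, in which chemicals acting on the same toxicity pathway are expressed as equivalents of an index chemical by means of relative potency factors. The sketch below uses entirely hypothetical chemicals, groupings, and potencies to show the bookkeeping involved.

```python
# Illustrative sketch (hypothetical values): grouping chemicals by shared
# toxicity pathway and combining them by concentration addition, using
# relative potency factors (RPFs) referenced to an index chemical.
pathway_groups = {
    "oxidative_stress": {"chem_A": 1.0, "chem_B": 0.2, "chem_C": 5.0},  # RPFs
    "estrogen_signaling": {"chem_D": 1.0, "chem_E": 0.05},
}

# Hypothetical exposure concentrations (uM) for a mixture
exposure = {"chem_A": 0.5, "chem_B": 2.0, "chem_C": 0.1, "chem_D": 0.3}

def index_equivalent(group_rpfs, exposure):
    """Sum of component concentrations scaled to index-chemical equivalents."""
    return sum(rpf * exposure.get(chem, 0.0) for chem, rpf in group_rpfs.items())

for pathway, rpfs in pathway_groups.items():
    eq = index_equivalent(rpfs, exposure)
    print(f"{pathway}: index-chemical-equivalent concentration = {eq:.2f} uM")
```

The grouped equivalent concentration for each pathway could then be compared with the concentration-response behavior of that pathway's assay, rather than testing every possible combination of chemicals separately.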
Design Criteria for a New Toxicity-Testing Paradigm
The committee discussed the design criteria that should be considered in developing a strategy for toxicity testing in the future. As discussed in the committee’s interim report (NRC, 2006a), which did much to frame those criteria, the goal is to improve toxicity testing by accomplishing the following objectives:
- Provide broader coverage of chemicals and their mixtures, endpoints, and life-stage vulnerabilities.
- Reduce the cost and time of testing, increase efficiency and flexibility, and make it possible to reach a decision more quickly.
- Use fewer animals and cause minimal suffering to animals that are used.
- Develop a more robust scientific basis of risk assessment by providing detailed mechanistic and dosimetry information and by encouraging the integration of toxicologic and population-based data.
The committee considered those objectives as it weighed various options. The following section discusses some of the options considered by the committee.
Options for a New Toxicity-Testing Paradigm
In developing its vision for toxicity testing, the committee explored four options, as presented in Table 2. The baseline option (Option I) applies current toxicity-testing principles and practices. Accordingly, it would use primarily in vivo animal toxicity tests to predict human health risks. The difficulties in interpreting animal data obtained at high doses with respect to risks in the heterogeneous human population would not be circumvented. Moreover, because whole-animal testing is expensive and time-consuming, the number of chemicals addressed would continue to be small. The continued use of relatively large numbers of animals for toxicity testing also raises ethical issues and is inconsistent with emphasis on reduction, replacement, and refinement of animal use (Russell & Burch, 1959). Overall, the current approach does not provide an adequate balance among the four objectives of toxicity testing identified in the committee’s interim report: depth of testing, breadth of testing, animal welfare, and conservation of testing resources.
TABLE 2. Options for a new toxicity-testing paradigm.

| Option I: In vivo | Option II: Tiered in vivo | Option III: In vitro and in vivo | Option IV: In vitro |
|---|---|---|---|
| Animal biology | Animal biology | Primarily human biology | Primarily human biology |
| High doses | High doses | Broad range of doses | Broad range of doses |
| Low throughput | Improved throughput | High and medium throughput | High throughput |
| Expensive | Less expensive | Less expensive | Less expensive |
| Time-consuming | Less time-consuming | Less time-consuming | Less time-consuming |
| Use of relatively large numbers of animals | Use of fewer animals | Use of substantially fewer animals | Use of virtually no animals |
| Based on apical endpoints | Based on apical endpoints | Based on perturbations of critical cellular responses | Based on perturbations of critical cellular responses |
| | Some screening using computational and in vitro approaches; more flexibility than current methods | Screening using computational approaches possible; limited animal studies that focus on mechanism and metabolism | Screening using computational approaches |
The committee extensively considered the expanded use of tiered testing (Option II) to alleviate some of the concerns with present practice. The tiered approach to toxicity testing entails a stepwise process for screening and evaluating the toxicity of agents that still relies primarily on test results in whole animals. The goal of tiered testing is to generate pertinent data for more efficient assessment of potential health risks posed by an environmental agent, taking into consideration available knowledge on the chemical and its class, its modes or mechanisms of action, and its intended use and estimated exposures (Carmichael et al., 2006). Those factors are used to refine testing priorities to focus first on areas of greatest concern in early tiers and then to move judiciously to advanced testing in later tiers as needed. In addition, an emphasis on pharmacokinetic studies in tiered approaches has been considered in recent discussions of improving toxicity testing of pesticides (Carmichael et al., 2006; Doe et al., 2006).
Tiered testing has been recommended in evaluating the toxicity of agricultural products (Doe et al., 2006), in screening for endocrine disruptors (Charles, 2004), and in assessing developmental toxicity (Spielman, 2005) and carcinogenicity (Stavanja et al., 2006) of chemicals and products. A tiered-testing approach also has the promise to include comparative genomic studies to help to identify genes, transcription-factor motifs, and other putative control regions that are involved in tissue responses (Ptacek & Sell, 2005). The increasing complexity of biologic information—including genomic, proteomic, and cell-signaling information—has encouraged the use of a more systematic multilevel approach in toxicity screening (Yokota et al., 2004).
The systematic development of tiered, decision-tree selection of more limited suites of animal tests could conceivably provide toxicity-testing data nearly equivalent to those currently obtained but without the need to conduct tests for as many apical endpoints. The use of appropriately chosen computational models and in vitro screens might also permit sound risk-management decisions in some cases without the need for in vivo testing. Both types of tiered-testing strategies offer the potential of reducing animal use and toxicity-testing costs and allowing flexibility in testing based on risk-management information needs. Although the committee recognized the potential for incremental improvement in toxicity testing through a tiered approach, Option II still represents only a small step in improving coverage, reducing costs and animal use, and increasing mechanistic information in risk assessment. It still relies on whole-animal testing and is geared mainly toward deciding which animal tests are required in risk assessment for any specific agent. Although tiered testing might be pursued more formally in a transition to a more comprehensive toxicity-testing strategy, it does not meet most of the design criteria discussed earlier.
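As a purely schematic illustration of the decision-tree logic that a tiered strategy implies, the sketch below escalates a chemical from computational screening to in vitro assays and only then, if warranted, to targeted in vivo studies. The tiers, thresholds, and inputs are hypothetical placeholders rather than any protocol endorsed in the reports cited above.

```python
# Schematic sketch (hypothetical thresholds): a tiered decision flow in which
# later, more resource-intensive tiers are invoked only when earlier tiers
# cannot support a risk-management decision.
def tiered_testing_decision(qsar_alert: bool, in_vitro_activity: float,
                            exposure_estimate: float) -> str:
    # Tier 1: computational screening (structure-activity alerts) plus exposure
    if not qsar_alert and exposure_estimate < 0.001:  # mg/kg-day, illustrative
        return "low priority: no structural alert and negligible exposure"
    # Tier 2: in vitro pathway assays
    if in_vitro_activity < 0.1:  # fraction of maximal pathway response, illustrative
        return "no significant pathway perturbation at relevant concentrations"
    # Tier 3: targeted in vivo follow-up (mechanism, metabolism)
    return "escalate to targeted in vivo studies"

print(tiered_testing_decision(qsar_alert=True, in_vitro_activity=0.4,
                              exposure_estimate=0.05))
```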
In the committee’s view, a more transformative paradigm shift is needed to achieve the objectives for toxicity testing set out in its interim report, represented by Options III and IV in Table 2. The committee’s vision is built on the identification of biologic perturbations of toxicity pathways that can lead to adverse health outcomes under conditions of human exposure. The use of a comprehensive array of in vitro tests to identify relevant biologic perturbations with cellular and molecular systems based on human biology could eventually eliminate the need for whole-animal testing and provide a stronger, mechanistically based approach for environmental decision making. Computational models could also play a role in the early identification of environmental agents potentially harmful to humans, although further testing would probably be needed. This new approach would be less expensive and less time-consuming than the current approach and result in much higher throughput. Although the reliance on in vitro results lacks the whole-organism integration provided by current tests, toxicologic assessments would be based on biologic perturbations of toxicity pathways that can reasonably be expected to lead to adverse health effects. Understanding of the role of such perturbations in the induction of toxic responses would be refined through toxicologic research. With the further development of in vitro test systems of toxicity pathways and the tools for assessing the dose-response characteristics of the perturbations, the committee believes that its vision for toxicity testing will meet the four objectives set out in its interim report.
Full implementation of the high-throughput, fully human-cell-based testing scheme represented by Option IV in Table 2 would face a number of scientific challenges. Major concerns relate to ensuring adequate testing of metabolites and to the potential difficulties of evaluating novel chemicals, such as nanomaterials and biotechnology products, with in vitro tests. Those challenges require maintenance of some whole-animal tests into the foreseeable future, as indicated in Option III, which includes specific in vivo studies to assess formation of metabolites and some mechanistic studies of target-organ responses to environmental agents and leaves open the possibility that more extensive in vivo toxicity evaluations of new classes of agents will be needed. Like Option IV, Option III emphasizes the development and application of new in vitro assays for biologic perturbations of toxicity pathways. Thus, although the committee notes that Option IV embodies the ultimate goal for toxicity testing, the committee’s vision for the next 10–20 years is defined by Option III.
The committee is mindful of the methodologic developments that will be required to orchestrate the transition from current practices toward its vision. During the transition period, there will be a need to continue the use of many current test procedures, including whole-animal tests, as the tools needed to implement the committee’s vision fully are developed. The steps that need to be taken to achieve the committee’s vision are discussed further in the section “Developing the Science Base and Assays to Implement the Vision.”
The committee notes that European approaches to improving toxicity testing emphasize the replacement of animal tests with in vitro methods (Gennari et al., 2004). However, a major goal of the European approaches is to develop in vitro batteries that can predict the outcome of high-dose testing in animals. The committee distinguishes those in vitro tests from the ones envisioned in Options III and IV, which promise to provide more mechanistic information and to allow more extensive and more rapid determination of biologic perturbations that are directly relevant to human biology and exposures.
Overview of the Committee’s Long-Range Vision for Toxicity Testing
The framework outlined in Figure 2 forms the basis of the committee’s vision for toxicity testing in the 21st century. The figure indicates that the initial perturbations of cell-signaling motifs, genetic circuits, and cellular-response networks are obligatory changes related to chemical exposure that might eventually result in disease. The consequences of a biologic perturbation depend on the magnitude of the perturbation, which is related to the dose, the timing and duration of the perturbation, and the susceptibility of the host. Accordingly, at low doses, many biologic systems may function normally within their homeostatic limits. At somewhat higher doses, clear biologic responses occur. They may be successfully handled with adaptation, although some susceptible people may respond. A more intense or persistent perturbation may overwhelm the capacity of the system to adapt and may lead to tissue injury and possibly to adverse health effects.
In this framework, the goals of toxicity testing are to identify critical pathways that when perturbed can lead to adverse health outcomes and to evaluate the host susceptibility to understand the effects of perturbations on human populations. To implement the new toxicity-testing approach, toxicologists will need to evolve a comprehensive array of test procedures that will allow the reliable identification of important biologic perturbations in key toxicity pathways. And epidemiologists and toxicologists will need to develop approaches to understand the range of host susceptibility within populations. Viewing toxic responses in that manner shifts the focus away from the apical endpoints emphasized in the traditional toxicity-testing paradigm, toward biologic perturbations that can be identified more efficiently without the need for whole-animal testing and toward characterizing host vulnerability to provide the context for assessing the implications of test results.
Figure 3 illustrates the major components of the committee’s proposed vision: chemical characterization, toxicity testing, and dose-response and extrapolation modeling. Each component is discussed in further detail in the next section, and the tools and technologies that might play some role in the future paradigm are discussed in section “Tools and Technologies.”
Chemical characterization involves consideration of physicochemical properties, environmental persistence, bioaccumulation potential, production volumes, concentration in environmental media, and exposure data. Computational tools, such as quantitative structure–activity relationship models and bioinformatics, may eventually be used to categorize chemicals, predict likely toxicity and metabolic pathways, screen for relative potency with predictive models, and organize large databases for analysis and hypothesis generation.
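As one hedged illustration of how such computational characterization might look in practice, the sketch below computes a few physicochemical descriptors with the open-source RDKit toolkit and applies a made-up prioritization rule; the thresholds are placeholders, not validated structure-activity criteria.

```python
# Illustrative sketch: computing simple physicochemical descriptors with the
# open-source RDKit toolkit and applying a hypothetical rule-of-thumb screen.
# The thresholds below are placeholders, not validated QSAR criteria.
from rdkit import Chem
from rdkit.Chem import Descriptors

def characterize(smiles: str) -> dict:
    mol = Chem.MolFromSmiles(smiles)
    return {
        "mol_wt": Descriptors.MolWt(mol),   # molecular weight (g/mol)
        "logP": Descriptors.MolLogP(mol),   # octanol-water partition estimate
        "tpsa": Descriptors.TPSA(mol),      # topological polar surface area
    }

def flag_for_followup(desc: dict) -> bool:
    # Hypothetical prioritization rule: lipophilic, non-polar chemicals are
    # flagged for bioaccumulation review and pathway-assay follow-up.
    return desc["logP"] > 3.0 and desc["tpsa"] < 60.0

desc = characterize("c1ccc2cc3ccccc3cc2c1")  # anthracene, as an example structure
print(desc, "flag:", flag_for_followup(desc))
```

In a real characterization step, descriptors like these would more likely feed trained categorization or read-across models and databases than a fixed rule, but the flow of information is the same: structure in, prioritization signal out.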
Toxicity testing in the committee’s vision seeks to identify the perturbations in toxicity pathways that are expected to lead to adverse effects. The focus on biologic perturbations rather than apical endpoints is fundamental to the committee’s vision. If adopted, the vision will lead to a major shift in emphasis away from whole-animal testing toward efficient in vitro tests and greater human surveillance. Targeted testing is also used to identify or explore functional endpoints associated with adverse health outcomes and may include in vivo metabolic or mechanistic studies.
Dose-response modeling is used to describe the relationship between biologic perturbations and dose in quantitative and, optimally, mechanistic terms; extrapolation modeling is used to make predictions of possible effects in human populations at prevailing environmental exposure concentrations. Computational modeling of the toxicity pathways evaluated with specific high-throughput tests will be a key tool for establishing dose-response relationships. Pharmacokinetic models, such as physiologically based pharmacokinetic models, will assist in extrapolating from in vitro to in vivo conditions by relating concentrations found to be active in toxicity-test systems in vitro to human blood concentrations.
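A minimal sketch of this in vitro to in vivo extrapolation step, assuming a one-compartment steady-state approximation and hypothetical parameter values, is shown below; a physiologically based pharmacokinetic model would perform the same translation with far more physiological detail.

```python
# Minimal sketch (hypothetical parameters): relating an in vitro "active"
# concentration to a human oral dose expected to produce that blood
# concentration at steady state, using a one-compartment approximation.
# A physiologically based pharmacokinetic (PBPK) model would do this more
# rigorously, accounting for tissue partitioning and metabolism.

ac50_uM = 2.0                 # concentration active in the in vitro assay (umol/L)
mol_weight = 250.0            # g/mol, hypothetical chemical
clearance_L_per_h_kg = 0.1    # total body clearance per kg body weight
oral_bioavailability = 0.8

# Convert the in vitro active concentration to mg/L.
css_mg_per_L = ac50_uM * mol_weight / 1000.0

# At steady state: dose rate = Css * CL / F (per kg body weight)
dose_rate_mg_per_kg_h = css_mg_per_L * clearance_L_per_h_kg / oral_bioavailability
oral_equivalent_mg_per_kg_day = dose_rate_mg_per_kg_h * 24.0

print(f"Oral-equivalent dose ~ {oral_equivalent_mg_per_kg_day:.2f} mg/kg-day")
```

The resulting oral-equivalent dose could then be compared with exposure estimates from biomonitoring or exposure modeling to judge whether pathway perturbations are plausible at prevailing human exposures.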
At each step, population-based data and human-exposure information should be considered. For example, human biomonitoring and surveillance can provide data on exposure to environmental agents, host susceptibility, and biologic change that will be key for dose-response and extrapolation modeling. Throughout, the information needs for risk-management decision making must be borne in mind because they will to a great extent guide the nature of the testing required. Thus, the population-based data and exposure information and the risk contexts are shown to encircle the core toxicity-testing strategy in Figure 3.
The components of the toxicity-testing paradigm are semi-autonomous but interrelated modules, containing specific sets of underlying technologies and capabilities. Some chemical evaluations may proceed stepwise from chemical characterization to toxicity testing to dose-response and extrapolation modeling, but that sequence might not always be followed. A critical feature of the new vision is consideration of risk context at each step and the ability to exit the strategy at any point whenever enough data have been generated to inform the decision that needs to be made. Also, the proposed vision emphasizes the generation and use of population-based data and exposure estimates when possible. The committee notes that the development of surveillance systems for chemicals newly introduced into the market will be important. The new vision encourages the collection of such data on important existing chemicals from biomonitoring, surveillance, and molecular epidemiologic studies. Finally, flexibility is needed in the testing of environmental agents to encourage the development and application of novel tools and approaches. The evolution of the toxicity-testing process, as envisioned here, must retain flexibility to encourage incorporation of new information and new methods as they are developed and found to be useful for evaluating whether a given exposure poses a risk to humans. That will require formal procedures for the phasing in or phasing out of standard testing methods. Indeed, that process is attuned to the need for efficient testing of all chemicals in a timely, cost-effective fashion.
The committee envisions a reconfiguration of toxicity testing through the development of in vitro medium- and high-throughput assays. The in vitro tests would be developed not to predict the results of current apical toxicity tests but rather as cell-based assays that are informative about mechanistic responses of human tissues to toxic chemicals. The committee is aware of the implementation challenges that the new toxicity-testing paradigm would face. For example, toxicity testing must be able to address the potential adverse health effects both of chemicals in the environment and of the metabolites formed when the chemicals enter the body. Much research will be needed to ensure that the new system evaluates the effects of the chemicals and their metabolites fully. Moreover, as we shift from a focus on apical endpoints to perturbations in toxicity pathways, there will be a need to develop an appropriate science base to support risk-management actions based on the perturbations. Implementation of the vision and the possible challenges are discussed in the section “Developing the Science Base and Assays to Implement the Vision.”
COMPONENTS OF THE VISION
The committee foresees pervasive changes in toxicity testing and in interpretive risk-assessment activities. The current approach to toxicity testing focuses on predicting adverse effects in humans on the basis of studies of apical endpoints in whole-animal tests. In the committee’s vision, in vitro mechanistic tests provide rapid evaluations of large numbers of chemicals, greatly reduced live-animal use, and results potentially more relevant to human biology and human exposures. As discussed in the previous section, “Vision,” toxicity testing can be increasingly reconfigured with the accrual of better understanding of biologic pathways perturbed by toxicants and of the signaling networks that control activation of the pathways. The use of systems-biology approaches that integrate responses over multiple levels from molecules to organs will enable a more holistic view of biologic processes, including an understanding of the relationship between perturbations in toxicity pathways and consequences for cell and organism function. The central premise of the committee’s vision is that toxicant-induced responses can be quantified with appropriate cellular assays and that empirical or mechanistic models of pathway perturbations can be used as the basis of environmental decision making. Combining a fundamental understanding of cellular responses to toxicants with knowledge of tissue dosimetry in cell systems and in exposed human populations will provide a suite of tools to permit more accurate predictions of conditions under which humans are expected to show pathway perturbations by toxicant exposure. The institutional and infrastructural changes required to achieve the committee’s vision will include changes in the types of tests that support toxicity testing and how toxicity, mechanistic information, and epidemiologic data are used in regulatory decision making. The regulatory transition from the current emphasis on apical endpoint toxicity tests to reliance on perturbations of toxicity pathways will raise many issues. The challenges to implementation and a strategy to implement the vision are discussed in the section “Developing the Science Base and Assays to Implement the Vision.”
This section discusses individual components of the vision: chemical characterization (component A), toxicity testing (component B), dose-response and extrapolation modeling (component C), population-based and human exposure data (component D), and risk contexts (component E). Component B is composed of a toxicity-pathway component and a limited targeted-testing component. The toxicity-pathway component will be increasingly dominant as more and more high-throughput toxicity-pathway assays are developed and validated. Surveillance and biomonitoring data will be needed to understand the effects of toxicity-pathway perturbations on humans. Finally, the overall success of the new paradigm will depend on ensuring that toxicity testing meets the information needs of environmental decision making given the risk contexts.
Component A: Chemical Characterization
An overview of component A is provided in Figure 4. Chemical characterization is meant to address key questions, including the compound’s stability in the environment, the potential for human exposure, the likely routes of exposure, the potential for bioaccumulation, the likely routes of metabolism, and the likely toxicity of the compound and possible metabolites based on chemical structure or physical or chemical characteristics. Thus, data would be collected on physical and chemical properties, use characteristics, possible environmental concentrations, possible metabolites and breakdown products, initial molecular interactions of compounds and metabolites with cellular components, and possible toxic properties. A variety of computational methods might be used to predict those properties when data are not available. Decisions could be made after chemical characterization about further testing that might or might not be required. For example, if a chemical were produced in such a manner that it would never reach the environment, or if it were not sufficiently persistent or bioavailable to result in appreciable human exposure, further toxicity evaluation might not be necessary for regulatory decision making. Moreover, computational tools for estimating biologic activities and potency could be useful in assessing characteristics of compounds during their development or in a premanufacturing scenario to rule out development or introduction of compounds that are expected to lead to biologically important perturbations in toxicity pathways. In most cases, chemical characterization alone is not expected to be sufficient to reach decisions about the toxicity of an environmental agent.
The tools for chemical characterization will include a variety of empirical and computational methods. As outlined in the committee’s first report (NRC, 2006a), computational approaches that can and most likely will be used are in the following categories: tools to calculate physical and chemical properties, models that predict metabolism and metabolic products of a chemical, structure–activity relationship (SAR) and quantitative SAR (QSAR) models that predict biologic activity from molecular structure, and models that predict specific molecular interactions, such as protein–ligand binding, tissue binding, and tissue solubility. An array of computational tools is available to calculate physical and chemical properties (Volarath et al., 2004; Olsen et al., 2006; Grimme et al., 2007; Balazs, 2007). Tools for assessing metabolic fate and biologic activity are continually evolving, and many of the more accurate and refined examples rely on proprietary technology or proprietary databases. Databases that support the most predictive tools may therefore end up being proprietary and substantially different from those available in the public domain. The committee urges the U.S. Environmental Protection Agency (EPA) to consider taking a lead role in ensuring public access to the data sets that are developed for predictive modeling and in providing the resources necessary for the continual evolution of methods to develop SAR, QSAR, and other predictive modeling tools.
Many models used to predict hazard are based only on structure and physical and chemical properties and rely on historical data sets. Their reliability is limited by the relevant data sets, which are continually evolving and increasing in size and accessibility. That is, the predictive value of the structure–activity rules will depend on the chemicals in the data set from which they are derived—their prevalence, structures, and whether they have the toxic activity of interest (see, for example, Battelle, 2002). Computational approaches for predicting toxicity and molecular interactions are available for only a small number of endpoints, such as estrogen-receptor binding, and their predictive value can be low (Battelle, 2002). As approaches improve with time and experience and as the data sets available for model development become larger and more robust, computational tools should become much more useful for chemical characterization, predicting activity in toxicity pathways, and early-stage decision making.
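To make the role of such models concrete, the following minimal sketch (not part of the committee’s report) fits a simple classifier to a handful of invented physicochemical descriptors; in practice the descriptors would be computed with cheminformatics software and the training set would contain thousands of curated compounds with known assay outcomes:

# Minimal QSAR-style sketch: predict a binary activity call (e.g., binding in a
# receptor assay) from a few physicochemical descriptors. All values are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Columns: molecular weight, logP, topological polar surface area, H-bond donors
X_train = np.array([
    [272.4, 3.9, 40.5, 2],   # hypothetical active
    [228.3, 3.3, 46.5, 2],   # hypothetical active
    [180.2, 1.2, 63.6, 1],   # hypothetical inactive
    [ 94.1, 1.5, 20.2, 1],   # hypothetical inactive
    [350.5, 4.8, 29.5, 1],   # hypothetical active
    [150.2, 0.5, 80.0, 3],   # hypothetical inactive
])
y_train = np.array([1, 1, 0, 0, 1, 0])  # 1 = active in the pathway assay

model = LogisticRegression(max_iter=5000).fit(X_train, y_train)

# Descriptor vector for an untested compound (again hypothetical)
x_new = np.array([[265.0, 3.5, 45.0, 2]])
print("Predicted probability of activity:", model.predict_proba(x_new)[0, 1])

The value of such a model depends entirely on the size, quality, and chemical coverage of the training set, which is why public access to the underlying data is emphasized above.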
Component B: Toxicity Testing of Compounds and Metabolites
The long-term vision makes the development of predictive toxicity-pathway-based assays the central component of a broad toxicity-testing strategy for assessing biologic activity of new or existing compounds. The assays will be conducted primarily with cells or cell lines, optimally with human cells or cell lines, and as time passes, the need for traditional apical animal tests will be greatly reduced and optimally eliminated. The overview of component B provided in Figure 5 indicates that toxicity testing will include both pathway testing and targeted testing, which are discussed further in the following.
A period of transition is inevitable because of the need to develop the full suite of toxicity-pathway tests that will be required for a comprehensive assessment of toxicity. Challenges related to the transition from the current paradigm oriented to apical endpoints to that outlined here are addressed separately in the section “Developing the Science Base and Assays to Implement the Vision.”
Toxicity Pathways
The committee’s vision focuses on toxicity pathways. Toxicity pathways are simply normal cellular response pathways that are expected to result in adverse health effects when sufficiently perturbed. For example, in early studies of cancer biology, specific genes that were associated with malignant growth and transformation were called oncogenes (those promoting unrestrained cell replication) and tumor-suppressor genes (those restricting replication). Both oncogenes and tumor-suppressor genes were later found to code for proteins that played important roles in normal biology. For example, oncogenes were involved in cell replication, and suppressor-gene products normally halted some key part of the replication process. However, mutations (such as those which can be induced by some environmental agents) were found to make oncogenes constitutively active or to cause a great reduction in or loss of activity of suppressor genes.
It is the ability of otherwise normal cellular response pathways to be targets for environmental agents that leads to their definition as toxicity pathways. Perturbations of toxicity pathways can be evaluated with a variety of assays, including relatively straightforward biochemical assays, such as receptor binding or reporter-gene expression, or more integrated cellular response assays, such as assays to evaluate proliferation of an estrogen-responsive cell line after treatment with environmental agents. Cellular responses can be broadly dichotomized as those requiring recognition of the structure of an environmental agent and those occurring because of reactivity of the environmental agent. In the first case, the three-dimensional structure is recognized by macromolecular receptors, as with estrogenic compounds. Accordingly, tests for the structurally mediated responses could be based on binding assays or on integrated cellular-response events, such as proliferation, induction of new proteins, or alteration of phosphorylation status of cells after exposure to environmental agents. In the second case, with reactivity-driven responses, the compound or a metabolite reacts with and damages cellular structures. Reactive compounds have the capacity to be much more promiscuous in their targets in cells, and the initial stress responses to tissue reactivity with these agents may also trigger adaptive changes to maintain homeostasis in the face of increased cellular stress (see Figure 2).
Biologic systems from single cells to complex plant and animal organisms have evolved many mechanisms to respond to and counter stressors in their environment. Many responses are mediated through coordinated changes in expression of genes in specific patterns, which result in new operational characteristics of affected cells (Ho et al., 2006; Schilter et al., 2006; Singh & DuMond, 2007). Many stress-response pathways—such as those regulated by hsp90-mediated regulation of chaperone proteins, by Nrf2-mediated antioxidant-element control of cellular glutathione, or by steroid-hormone family (for example, PPAR, CAR, and PXR) receptor-mediated induction of xenobiotic metabolizing enzymes—are conserved across many vertebrate species (Aranda & Pascual, 2001; Handschin & Meyer, 2005; Westerheide & Morimoto, 2005; Kobayashi & Yamamoto, 2006). Initial responses to stressors represent adaptation to maintain normal function. When stressors are applied at increasingly high concentrations in combination with other stressors, in sensitive hosts, or during sensitive life stages, adaptation fails, and adverse effects occur in the cell and organism (see Figure 2).
As stated, the committee’s long-range vision capitalizes on the identification and use of toxicity pathways as the basis of a new approach to toxicity testing and dose-response modeling. An important question for toxicity-testing strategies concerns the number of pathways that might need to be examined as primary targets of chemical toxicants. For example, in the case of reproductive and developmental toxicity, the National Research Council Committee on Developmental Toxicology listed 17 primary intracellular and intercellular signaling pathways that were then known to be involved in normal development (NRC, 2000). Those pathways and the various points for toxic interaction with them are potential targets of chemicals whose structures mimic or disrupt portions of them. Some of the pathways are also important at other life stages, and biologically significant perturbations of them might result in long-lasting effects or effects that are manifested later in life. As discussed in the section “Developing the Science Base and Assays to Implement the Vision,” considerable effort will be required to determine which pathways ultimately to include in the suite of toxicity pathways for testing and what patterns and magnitudes of perturbations will lead to adverse effects.
Some examples of toxicity pathways that could be evaluated with high-throughput methods are listed next, where the consequences of pathway activation are also noted. Most tests are expected to use high-throughput methods, but others could include medium-throughput assays of more integrated cellular responses, such as cytotoxicity, cell proliferation, and apoptosis. Simpler assays, such as receptor binding or reactivity of compounds with targets (for example, tests of inhibition of cholinesterase activity), also could be used as needed.
• Nrf2 antioxidant-response pathway (McMahon et al., 2006; Zhang, 2006). The activation of antioxidant-response element signaling occurs through oxidation of sentinel sulfhydryls on the protein Keap1. Some agents, such as chlorine, activate Nrf2 signaling in vitro, and the oxidative stress likely is the cause of irritation and toxicity in the respiratory tract.
• Heat-shock-response pathway (Maroni et al., 2003; Westerheide & Morimoto, 2005). The activation of protein synthesis by HSF1 transcription factor signaling maintains cellular proteins in an active folded configuration in response to stressors that cause unfolding and denaturation.
• PXR, CAR, PPAR, and AhR response pathways (Waxman, 1999; Handschin & Meyer, 2005; Hillegass et al., 2006; Timsit & Negishi, 2006; Li et al., 2006). The activation of xenobiotic-metabolizing pathways by transcriptional activation reduces concentrations of some biologically active xenobiotics and enhances elimination from the body as metabolites (Nebert, 1994); it can also increase the activation of other xenobiotics to more toxic forms. The toxicity and carcinogenicity of some agents, such as polyaromatic hydrocarbons, occur because of production of mutagenic metabolites by inducible oxidative enzymes.
• Hypo-osmolarity-response pathway (Subramanya & Mensa-Wilmot, 2006). Cellular stressors damage the integrity of the cellular membranes and activate p38 MAP kinase-mediated pathways to counter them (Van Wuytswinkel et al., 2000). The p38 MAP kinase functionality for the stress responses is conserved across eukaryotes.
• DNA-response pathways (Nordstrand et al., 2007). Damage to DNA structures induces repair enzymes that act through GADD45 (Sheikh et al., 2000) and other proteins. Unrepaired damage increases the risk of mutation during cell division and increases the risk of cancer.
• Endogenous-hormone-response pathways (NRC, 1999; Harrington et al., 2006). Enhancement or suppression of activity of transcriptionally active hormone receptors—including estrogen, androgen, thyroid, and progesterone receptors (Aranda & Pascual, 2001)—leads to altered homeostasis and alteration in biologic functions that are controlled by the receptors.
The biologic revolution now making its way into toxicity testing sets the stage for the design of mechanistic cell-based assays that can be evaluated primarily with high-throughput approaches to testing. The promise of the novel cell-system assays is becoming apparent in advances in several areas: genomic studies of cellular signaling networks affected by chemical exposures, identification of common toxicity pathways that regulate outcomes in diverse tissues, and understanding of networks that control cell responses to external stressors. To ensure the value of results for use in environmental decision making, the toxicity-pathway assays should be amenable to measurements of dose-response relationships over a broad range of concentrations. Chemical concentrations should be measured directly in the media used in the toxicity-pathway assays when administered concentrations might not represent the concentrations in vitro (for example, in the case of volatile compounds).
Finding new assays for assessing the dose-response characteristics of the toxicity pathways will have high priority for research and standardization. Environmental agents on which animal, human, and cellular evidence consistently demonstrates increased risk of adverse health outcomes could serve as positive controls for evaluation of toxicity-pathway assays. Those controls would serve as standards for the evaluation of the ability of other compounds to perturb the assayed toxicity pathways. Negative controls would also be needed to evaluate the specificity of responses for the key toxicity pathways. For risk implications in specific populations, interpretation of the studies would consider the results of the assays coupled with information on host susceptibility from other human cell or tissue assays and population-based studies. The research needed to implement the toxicity-pathway approach is discussed further in the section “Developing the Science Base and Assays to Implement the Vision.”
Targeted Testing
As discussed in the section “Vision,” an integral part of the committee’s vision is targeted testing, which would be used to complement toxicity-pathway testing and used in the following circumstances:
• To clarify substantial uncertainties in the interpretation of toxicity-pathway data.
• To understand effects of representative prototype compounds from classes of materials, such as nanoparticles, that may activate toxicity pathways not included in a standard suite of assays.
• To refine a risk estimate when the targeted testing can reduce uncertainty and when a more refined estimate is needed for decision making.
• To investigate the production of possibly toxic metabolites of new compounds.
• To fill gaps in the toxicity-pathway testing strategy to ensure that critical toxicity pathways and endpoints are adequately covered.
One of the challenges of developing an in vitro test system to evaluate toxicity is the current inability of cell assays to mirror the metabolism of a whole animal (Coecke et al., 2006). For the foreseeable future, any in vitro strategy will need to include a provision to assess likely metabolites with whole-animal testing. The metabolites would also need to be tested in a suite of in vitro assays. For very reactive metabolites, the suite of assays should include cell models that have the biotransformation enzymes required for metabolism. Although it may become possible to make comprehensive predictions of the metabolism of environmental agents, any plan to implement the vision outlined here will probably have to rely on some metabolite-identification studies in whole animals. Another challenge is the adequate development of in vitro assays to reliably identify toxicity pathways that are causally related to neurodevelopment and other physiologic processes that depend on the timing and patterns of exposure and on the interactions of multiple pathways. In the near term, targeted in vivo testing will most likely be needed to address those types of toxicities.
Targeted testing might be conducted in vivo or in vitro, depending on the conditions and the toxicity tests available. In the case of metabolite studies, one approach might be to dose small groups of animals with radiolabeled compound, to separate and characterize the excreted radioactivity with modern analytic techniques, and to compare the metabolite structures with known chemistries to determine the need for testing specific metabolites. Similar studies might be conducted in tissue bioreactors, especially a liver bioreactor or cocultures of cells from human liver and other tissues, which might make the studies more applicable to human metabolism. Concerns raised in evaluations of metabolism could necessitate synthesis of specific metabolites that would then be tested in the main toxicity-pathway assays. Within the European Centre for the Validation of Alternative Methods, there has been extensive discussion of the challenges of capturing the possible toxicity of metabolites so that in vitro testing does not miss the ultimate toxicity of a substance (Coecke et al., 2005, 2006).
Although targeted tests could be based on existing toxicity-test systems, they will probably differ from traditional tests in the long term. They could use transgenic species, isogenic strains, new animal models, or other novel test systems (see the committee’s interim report [NRC, 2006a] for further discussion) and could include a toxicogenomic evaluation of tissue responses over wide dose ranges. Whatever system is used, testing protocols would maximize the amount of information gained from whole-animal toxicity testing. For example, routinely used whole-animal toxicity-testing protocols could provide mode-of-action information on toxicity pathways and target tissues in short-term repeat studies. They could emphasize measurement of metabolite formation and applications of transcriptomics and bioinformatics; future designs might include other “-omic” approaches as the technologies mature and the costs of such studies decrease. Toxicogenomic studies of 14–30 days could provide tissues for microarray analysis and information on pathology. A suite of major tissues would be harvested, mRNA analysis performed, and bioinformatics analysis conducted to evaluate dose-response relationships for changes in genes and groups of related genes. mRNA from tissues showing evidence of pathologic alterations at high doses might also be examined along with the major tissues. Thus, the targeted testing in the committee’s vision will not necessarily resemble the standard whole-animal assays now conducted either in the protocol used or in the information gained.
Component C: Dose-Response and Extrapolation Modeling
The committee’s vision includes dose-response and extrapolation modeling modules, which are discussed next; an overview of this component is provided in Figure 6.
Empirical Dose-Response Modeling
As they are currently used in toxicity testing with apical endpoints, empirical dose-response (EDR) models often describe a relationship between the incidence of the endpoint and either the dose given to the animal or the concentration of the environmental agent or its metabolite in the target tissue. In the long-range vision, the committee believes that EDR models will be developed for environmental agents primarily on the basis of data from in vitro, mechanistically based assays described in component B. The EDR models would describe the relationship between the concentration in the test medium and the degree of in vitro response; in some cases, they would provide an estimate of some effective concentration at which a specified level of response occurs. The effective concentration could describe, for example, a percentage of maximal response or a statistical increase above background for a more integrated assay, such as an enhanced-cell-proliferation assay. Considerations in the interpretation of in vitro response metrics would include responses in positive and negative controls, their statistical variability, background historical data, and the experimental dose-response data on the test substance. In general, the toxicity-pathway evaluations require consideration of increases in continuous rather than dichotomous responses.
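As a purely illustrative sketch of how an effective concentration might be derived from in vitro concentration-response data, the following example fits a four-parameter Hill curve to synthetic assay results and solves for the EC10; the data, starting values, and choice of model are assumptions made here for illustration only:

import numpy as np
from scipy.optimize import curve_fit

def hill(conc, bottom, top, ec50, n):
    """Four-parameter Hill model for a continuous assay response."""
    return bottom + (top - bottom) * conc**n / (ec50**n + conc**n)

# Synthetic concentration-response data (concentration in micromolar, response as fold induction)
conc = np.array([0.01, 0.03, 0.1, 0.3, 1.0, 3.0, 10.0, 30.0])
resp = np.array([1.02, 1.05, 1.10, 1.40, 2.30, 3.60, 4.20, 4.40])

params, _ = curve_fit(hill, conc, resp, p0=[1.0, 4.5, 1.0, 1.0])
bottom, top, ec50, n = params

# EC10 = concentration giving 10% of the maximal increase above background
ec10 = ec50 * (0.1 / 0.9) ** (1.0 / n)
print(f"EC50 = {ec50:.2f} uM, Hill slope = {n:.2f}, EC10 = {ec10:.2f} uM")

A lower confidence bound on the EC10, rather than the point estimate alone, would ordinarily be carried forward into extrapolation.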
Dose measures in targeted-testing studies conducted in whole animals could also be expressed in relation to a measure of tissue or plasma concentrations of the parent compound or a metabolite in the organism, such as blood concentration, area under a concentration–time course curve, and rate of metabolism. Preferably, the concentrations would be based on empirical measurements rather than on predictions from pharmacokinetic models. The main reason for insisting that the in vivo studies have a measure of tissue concentration is to permit comparison with the results from the in vitro assays.
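For example, one such dose metric, the area under a blood concentration-time curve, can be computed by simple trapezoidal integration of measured time-course data, as in this brief sketch with hypothetical values:

# Trapezoidal AUC from a hypothetical blood concentration-time course
times = [0.0, 0.5, 1.0, 2.0, 4.0, 8.0, 24.0]      # hours after dosing
blood = [0.0, 4.2, 6.8, 5.1, 2.9, 1.1, 0.1]       # mg/L (hypothetical measurements)

auc = sum(0.5 * (blood[i] + blood[i + 1]) * (times[i + 1] - times[i])
          for i in range(len(times) - 1))
print(f"AUC(0-24 h) = {auc:.1f} mg*h/L")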
In some risk contexts, an EDR model based on in vitro assay results might provide adequate data for a risk-management decision, for example, if host-susceptibility factors of a compound in humans are well understood and human biomonitoring provides good information about its tissue or blood concentrations and about other exposures that affect the toxicity pathway in a human population. Effective concentrations in the suite of in vitro mechanistic assays could be adjusted for host susceptibility and then compared with the human biomonitoring data. In the absence of detailed biomonitoring data and host-susceptibility information, predictions of human response to a toxicant will require building on the data provided by the in vitro EDR models and using physiologically based pharmacokinetic (PBPK) models and perhaps host-susceptibility information on related compounds.
Extrapolation Modeling
Extrapolation modeling encompasses the analytic tools required to predict exposures that might result in adverse effects in human populations primarily on the basis of results of hazard testing completed in component B. In the committee’s vision, extrapolation modeling would most likely include PBPK modeling to equate tissue-media concentrations from toxicity testing with tissue doses expected in humans; toxicity-pathway modeling that provides an understanding of the biologic components that control the toxicity-pathway response in vitro; and consideration of human data on host susceptibility and background exposure that provide the context for interpreting the modeling results. As stated in the committee’s interim report (NRC, 2006a), the computational approaches must be validated, adequately explained, and made accessible to peer review to be valuable for risk assessment. Models not accessible for review may be useful for many scientific purposes but are not appropriate for regulatory use.
Toxicity-Pathway Dose-Response Models
Models of toxicity-pathway perturbations need to be developed to interpret results from toxicity tests in a mechanistic rather than simply empirical manner; they should be achievable in the near future. Toxicity-pathway models should be more readily configured than models of organism-level toxicity because they describe only the toxicity pathway itself and the initial chemical-related perturbations that are believed to be obligatory but not necessarily sufficient for causing the overt adverse health effect.
Several models of normal signaling pathways have been developed, for example, for heat-shock response (El-Samad et al., 2005; Rieger et al., 2005), platelet-derived growth-factor signaling (Bhalla et al., 2002), and nuclear factor kappa-B-mediated inflammatory signaling in response to cytokines, such as tumor necrosis factor-alpha (Hoffmann et al., 2002; Cho et al., 2003). Also, a screen for anticancer drugs has been developed by using the Nrf2 antioxidant-response pathway (Wang et al., 2006a), and a preliminary Nrf2 oxidative-stress model has been developed (Zhang, 2006) to examine chlorine as an oxidative stressor and to evaluate both adaptive and overtly toxic responses of cells in culture. Toxicity-pathway dose-response models optimally would describe the interaction of chemicals with cell constituents that activate or repress the pathway (that is, control it) and describe the cellular consequences of activation (that is, the cellular responses, usually altered gene expression, to changes in normal control). Table 3 and Figure 7 illustrate these concepts in terms of the activation of the Nrf2 antioxidant stress-response pathway.
TABLE 3.
In nontoxic environments, antioxidant genes are repressed through inactivation of the transcriptional regulator Nrf2. The cytoplasmic protein Keap-1 binds Nrf2 and sequesters Nrf2 in the cytoplasm, where it cannot activate transcription of antioxidant genes (see Figure 7). Nrf2 bound to Keap-1 is then quickly degraded through the Cul3-based E3 ligase system (Kobayashi et al., 2004). In toxic environments, some oxidants interact with thiol groups on Keap-1, causing Nrf2 to be released and translocated to the nucleus. Once in the nucleus, Nrf2 heterodimerizes with a small Maf protein and binds to antioxidant response elements; this leads to expression of antioxidant-stress proteins and phase 2 detoxifying enzymes (Motohashi & Yamamoto, 2004).
The negative-feedback response loop has two major portions, each of which could be the target of model development. First, the inactivation of Keap-1 by oxidants and the later formation of the Nrf2-Maf heterodimer are response circuits that can be mathematically modeled to predict low-dose toxic responses. Second, the expression of antioxidant-stress proteins and phase 2 detoxifying enzymes can also be modeled to predict low-dose toxic responses.
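A deliberately simplified sketch of how such a feedback circuit might be cast as a computational dose-response model is given below. It is a toy system with invented rate constants, not the published Nrf2 model cited above, but it illustrates how free Nrf2 and downstream antioxidant-gene output could be simulated across a range of oxidant exposures:

from scipy.integrate import solve_ivp

def nrf2_circuit(t, y, oxidant):
    """Toy Keap1-Nrf2 feedback: oxidant inactivates Keap1, freeing Nrf2, which
    induces an antioxidant gene product that buffers the stress (all rates invented)."""
    nrf2, antiox = y
    keap1_activity = 1.0 / (1.0 + oxidant / (0.5 + antiox))       # feedback through antiox
    d_nrf2 = 1.0 - 5.0 * keap1_activity * nrf2 - 0.1 * nrf2       # synthesis minus degradation
    d_antiox = 2.0 * nrf2**2 / (0.6**2 + nrf2**2) - 0.5 * antiox  # Nrf2-driven induction, turnover
    return [d_nrf2, d_antiox]

for dose in [0.0, 0.1, 0.5, 2.0, 10.0]:                           # arbitrary oxidant levels
    sol = solve_ivp(lambda t, y: nrf2_circuit(t, y, dose), (0.0, 100.0), [0.2, 0.1])
    print(f"oxidant = {dose:5.1f} -> steady-state antioxidant output = {sol.y[1, -1]:.2f}")

In a model of this kind, the shape of the simulated steady-state output as a function of dose, rather than any single parameter, is what informs the low-dose behavior of the pathway.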
Although the toxicity-pathway models are discussed here as part of component C of the vision, creation of the models would occur as a natural extension of developing and validating the in vitro toxicity-pathway tests discussed in component B. In other words, the committee envisions that the models would be developed for many assays in component B. The committee recognizes that in the near term there will be continued reliance on default approaches for low-dose extrapolation, such as the linear dose-response model and application of uncertainty factors to benchmark doses or no-observed-adverse-effect levels. The application of uncertainty and adjustment factors to precursor biologic responses from perturbations will not necessarily involve the same factors as currently used in U.S. EPA risk assessments for noncancer endpoints.
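For orientation, the default approach mentioned above amounts to straightforward arithmetic; the sketch below divides a hypothetical benchmark dose lower bound by a set of illustrative uncertainty factors (the factors and values shown are assumptions for illustration, not agency policy):

# Illustrative derivation of a reference value from a benchmark dose (all values hypothetical)
bmdl10 = 5.0          # mg/kg-day: lower 95% bound on the dose giving a 10% response
uncertainty_factors = {
    "interspecies (animal to human)": 10,
    "intraspecies (human variability)": 10,
    "database insufficiency": 3,
}
total_uf = 1
for name, uf in uncertainty_factors.items():
    total_uf *= uf

reference_dose = bmdl10 / total_uf
print(f"Composite UF = {total_uf}; reference dose = {reference_dose:.4f} mg/kg-day")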
The committee emphasizes the important distinction between models for toxicity-pathway perturbations and biologically based dose-response (BBDR) models for apical responses. Approaches to BBDR modeling for complex apical responses—such as cancer (Moolgavkar & Luebeck, 1990; Conolly et al., 2003), developmental toxicity (Leroux et al., 1996), and cytotoxicity (Reitz et al., 1990; el-Masri et al., 1996)—have focused on integrated processes, such as proliferation, apoptosis, necrosis, and mutation. Experimental studies and biologic and toxicologic research are still required to guide the development and validation of such models. Although toxicity-testing strategies would be enhanced by availability of quantitative BBDR models for apical responses, this type of modeling is still in its infancy and probably will not be available for risk-assessment applications in the near future. Progress in developing the models will rely heavily on biologic studies of disease processes in whole animals and mathematical descriptions of the processes. The committee sees BBDR-model development for apical endpoints as part of a much longer range research program and does not see routine development of the models from toxicity-pathway testing data in the foreseeable future.
Physiologically Based Pharmacokinetic Modeling
PBPK models assist in extrapolations of dosimetry among doses, dose routes, animal species, and classes of similar chemicals (Clark et al., 2004). They also support risk assessment, aid in designing and interpreting the results of biomonitoring studies (Clewell et al., 2005), and facilitate predictions of human body burden based on use and exposure patterns in specific populations. The development of PBPK models requires variable investment, depending on the chemical. For well-studied classes of compounds, PBPK-model development might require collection of compound-specific characteristics or statistical analysis to incorporate descriptions of human variability and to describe uncertainty (see, for example, Bois et al., 1996; Fouchecourt et al., 2001; Poulin & Theil, 2002; Theil et al., 2003). For less well-studied classes of chemicals, model development might require collection of time-course data on tissue concentrations (see, for example, Sarangapani et al., 2002). Validation of existing models is an important consideration. The possibility of studying the pharmacokinetics of low concentrations in environmentally or occupationally exposed humans provides many opportunities for checking the validity of PBPK models. Advances in analytic chemistry now permit kinetic studies at extremely low doses, creating further opportunities for such validation.
In the future, QSAR should allow estimation of such parameters as blood-tissue partitioning, metabolic rate constants, and tissue binding and could give rise to predictive PBPK models validated with a minimal research investment in targeted studies in test animals. The goal of developing predictive PBPK models dates back to efforts to develop in vitro tools to measure model parameters or to develop QSAR models to predict model parameters on the basis of physical and chemical characteristics or properties (Gargas et al., 1988, 1989).
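At its core, a PBPK model is a set of mass-balance differential equations for tissue compartments. The following reduced, hypothetical example (one metabolizing liver compartment plus blood, with invented parameter values) indicates the general form; real models add more tissues, route-specific uptake, and measured or QSAR-estimated partition coefficients:

import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical parameters (not for any real chemical)
Q_liver = 90.0        # liver blood flow, L/h
V_blood = 5.0         # blood volume, L
V_liver = 1.8         # liver volume, L
P_liver = 3.0         # liver:blood partition coefficient
CL_int  = 20.0        # intrinsic hepatic clearance, L/h
k_abs   = 1.0         # first-order oral absorption rate, 1/h

def pbpk(t, y):
    gut_amount, c_blood, c_liver = y
    c_venous_liver = c_liver / P_liver                  # blood leaving the liver
    d_gut   = -k_abs * gut_amount
    d_blood = Q_liver * (c_venous_liver - c_blood) / V_blood
    d_liver = (k_abs * gut_amount
               + Q_liver * (c_blood - c_venous_liver)
               - CL_int * c_venous_liver) / V_liver
    return [d_gut, d_blood, d_liver]

dose_mg = 10.0
sol = solve_ivp(pbpk, (0.0, 24.0), [dose_mg, 0.0, 0.0], dense_output=True)
t = np.linspace(0.0, 24.0, 200)
c_blood = sol.sol(t)[1]
print(f"Peak blood concentration ~= {c_blood.max():.3f} mg/L at t = {t[c_blood.argmax()]:.1f} h")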
Component D: Population-Based and Human Exposure Data
Population-based and human exposure data will be crucial components of the new toxicity-testing strategy. They will be critical for selecting doses in in vitro and targeted in vivo testing, for interpreting and extrapolating from high-throughput test results, for identifying and understanding toxicity pathways, and for identifying toxic chemical hazards. Figure 8 provides an overview of component D, and the following subsections discuss how population-based and exposure data can be integrated with toxicity testing.
Population-Based Data and the Toxicity-Testing Strategy
The new toxicity-testing strategy emphasizes the collection of data on the fundamental biologic events involved in the activation of toxicity pathways after exposure to environmental agents. The collection of mechanistic data on fundamental biologic perturbations will provide new opportunities for greater integration of toxicity testing and population-based studies. In some cases, coordination of the tests will be required; interpretation of toxicity-test results will require an understanding of how human susceptibility factors and background exposures affect the toxicity pathway and how those factors and exposures vary among people.
Genetic epidemiology provides an excellent example of the integration of information from toxicity testing in the long-range vision and population-based studies. It seeks to determine the relationship between specific genes in the population and disease. The finding of genetic loci associated with susceptibility potentially can inform biologists of important cellular proteins that affect disease and can uncover novel disease pathways. Toxicity-testing assays can then be designed to investigate and evaluate the finding and the effects of exogenous chemicals on the disease pathways. For example, human studies have provided information on DNA damage in arsenic-exposed people and motivated laboratory studies on cultured human cells to determine specific DNA-repair pathways affected by arsenic (Andrew et al., 2006).
Conversely, as understanding of toxicity pathways grows, specific genetic polymorphisms that increase or decrease susceptibility to adverse effects of exposure to environmental agents can be more accurately predicted. For example, genetic polymorphisms in some DNA repair and detoxification genes result in higher levels of chromosomal and genomic damage based on the micronuclear centromere content in tissue samples from welders occupationally exposed to welding fumes (Iarmarcovai et al., 2005). Although a substantial amount of normal genetic variation has been identified, only a small fraction of the variation may play a substantive role in influencing differences in human susceptibility. Understanding the biology of the toxicity pathways provides insight into how genetic susceptibility may play an important role. Specifically, a toxicity-testing strategy with a mechanistic focus should define pathways and indicate points that are rate-limiting or are critical signaling nodes in cellular-response systems. Identifying those nodes will allow the potential effects of genotypic variation to be better determined and integrated into chemical-toxicity assessments.
Another example of the interplay between toxicity testing and epidemiology is the generation of potentially important data on biomarkers. The committee’s vision emphasizes studies conducted in human cells that indicate how environmental agents can affect human biologic response. The studies will suggest biomarkers of early biologic effects that could be monitored in human populations (NRC, 2006b). Studying the markers in a variety of cellular systems will help to determine the biomarkers that are best for systematic testing and for use in population-based studies.
Population-health surveillance may indicate human health risks that were not detected in toxicity tests. For example, although pharmaceutical products are subject to extensive toxicologic and clinical testing before their introduction into the marketplace, pharmacovigilance programs have identified adverse health outcomes that were not detected in preclinical and clinical testing (Lexchin, 2005; IOM, 2007). Food-flavoring agents provide another illustrative example. In 2000, several cases of bronchiolitis obliterans, a severe and rare pulmonary disorder, were described in former workers at a microwave-popcorn plant (Akpinar-Elci et al., 2002). Exposure to vaporized flavoring agents used in the production process was associated with decreased lung function (Kreiss et al., 2002). Flavoring-associated respiratory disease was also documented among food-product workers and among workers in facilities that manufactured the flavoring agents (Lockey et al., 2002). Although the toxicity of the flavoring agents was confirmed in animal studies (Hubbs et al., 2002), their inhalation hazards during manufacture and food-product production were not recognized at the time of product approval. Situations in which toxicity testing is not adequately conducted or fails to identify an important human health risk emphasize the need to integrate population-based studies into any toxicity-testing paradigm and the need to collect human data in a structured manner so that they can be used effectively by the toxicology community.
Human-Exposure Data and the Toxicity-Testing Strategy
Human-exposure data may prove to be pivotal as toxicity testing shifts from the current apical endpoint whole-animal testing to cell-based testing. Several types of information will be useful. The first is information collected by manufacturers, users, agencies, or others on exposures of employees in the workplace or on environmental exposures of the population at large. Such exposure information would be considered in the setting of dose ranges for in vitro toxicity testing and of doses for collecting data in targeted pharmacokinetic studies and in selecting concentrations to use in human PBPK models.
Other valuable information will come from biomonitoring surveys of the population that measure environmental agents or their metabolites in blood, urine, or other tissues. New sensitive analytic tools that allow measurement of low concentrations of chemicals in cells, tissues, and environmental media enable tracking of biomarkers in the human population and the environment (Weis et al., 2005; NRC, 2006b). Comparison of concentrations of agents that activate toxicity pathways with concentrations of agents in biologic media in human populations will help to identify populations that may be overexposed, to guide the setting of human exposure guidelines, and to assess the cumulative impact of chemicals that influence the same toxicity pathway. The ability to make such comparisons will be greatly strengthened by a deeper understanding of the pharmacokinetic processes that govern the absorption, distribution, metabolism, and elimination of environmental agents by biologic systems. The enhanced ability to identify media concentrations that can evoke biologic responses will help to reduce the uncertainties associated with a focus on apical effects observed at high doses in animal testing.
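One simple form such a comparison might take, sketched here with entirely hypothetical numbers, is the ratio of the lowest pathway-activating blood concentration (derived, for example, by reverse dosimetry with a PBPK model) to an upper percentile of the biomonitored concentrations; a small ratio would flag the chemical and population for closer evaluation:

import numpy as np

# Hypothetical in vitro point of departure translated to an equivalent blood concentration, mg/L
pathway_activating_blood_conc = 0.80

# Hypothetical biomonitoring results for the same chemical (mg/L in blood)
biomonitoring = np.array([0.002, 0.004, 0.009, 0.015, 0.03, 0.06, 0.12, 0.25])

p95 = np.percentile(biomonitoring, 95)
margin = pathway_activating_blood_conc / p95
print(f"95th-percentile exposure = {p95:.3f} mg/L; "
      f"bioactivity-to-exposure ratio = {margin:.1f}")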
The importance of biomonitoring data emphasizes the need to support and expand such programs as the National Biomonitoring Program conducted by the Centers for Disease Control and Prevention (CDC, 2001, 2003, 2005). Those programs have greatly increased the understanding of human population exposure and have provided valuable information to guide toxicity testing. In time, biomonitoring will enable assessment of the status of the toxicity-pathway activation in the population. That information will be critical in understanding the implications of high-throughput results for the population and for identifying susceptible populations.
Component E: Risk Contexts
Toxicity testing is valuable only if it can be used to make more informed and more efficient responses to public-health concerns faced by regulators, industry, and the public. Early in this article, the committee identified five broad risk contexts requiring decisions about environmental agents, which are listed in Figure 9. Each decision-making context creates a need for toxicity-testing information that, if fulfilled, can help to identify the most effective ways to reduce or eliminate health risks posed by environmental agents.
Some of the risk contexts require rapid screening of environmental agents numbering in the tens of thousands. Others require highly refined dose-response information on effects at environmental concentrations, the ability to test chemical mixtures, or the use of focused assays targeted to specific toxicity pathways or endpoints. Some risk contexts may require the use of population-based approaches, including population health surveillance and biomonitoring. The committee believes that its vision for a new toxicity-testing paradigm will help to respond to decision-making needs, whether regulatory or nonregulatory, and will allow evaluation of all substances of concern whatever their origin might be. Specific implications of the vision for risk management can be illustrated by considering the five risk contexts identified in the first section.
Evaluation of new environmental agents. Two issues arise in the testing of new chemicals or products. First, emerging technologies might require novel testing approaches. For example, nanotechnology, which focuses on materials in the nanometer range, will present challenges in toxicity testing that might not be easily addressed with existing approaches (Institute of Medicine [IOM], 2005; Borm et al., 2006; Gwinn & Vallyathan, 2006; Nel et al., 2006; Powell & Kanarek, 2006). Specifically, the toxic properties of a nanoscale material will probably depend on its physical characteristics, not on the toxic properties of the substance or element itself (such as titanium or carbon) that makes up the material. The nanoscale material might be evaluated with new in vitro tests specially designed to identify biologic perturbations that might be expected from exposure to it. As discussed earlier in this section, nanoscale materials may require some targeted whole-animal testing to ensure that all biologically significant effects are identified. Second, because many new commercial chemicals are developed each year, there is a need for a mechanism to screen them rapidly for potential toxicity. With an emphasis on high- and medium-throughput screens, the committee’s vision for toxicity testing accommodates screening a large number of chemicals.
Evaluation of existing environmental agents. Two issues arise in the testing of existing environmental agents. For widespread and persistent environmental agents that cannot be easily removed from the human environment and can have potentially significant health effects, an in-depth evaluation of toxic properties is important. The committee’s vision, with its emphasis on toxicity-pathway analysis, will provide the deep understanding needed for refined evaluation of the potential human health effects and risks. As in the evaluation of new environmental agents, there is a need for effective screening methods so that the potential toxicity of the tens of thousands of agents already in the environment can be evaluated. The committee’s toxicity-testing strategy, with high-throughput toxicity-pathway assays, should permit greater coverage of the existing environmental agents that have not been adequately tested for toxicity.
Evaluation of a site. Sites invariably contain a mixture of chemical agents. Evaluation of mixtures has proved to be difficult in the existing toxicity-testing strategy (see the section “Vision”). High-throughput assays, as emphasized by the committee, may be the best approach for toxicity assessment of mixtures because they are more easily used to assess combinations of chemicals. Biomonitoring data—whose collection is highlighted in the committee’s vision—can be especially useful in site investigations to identify problematic exposures.
Evaluation of potential environmental contributors to a specific disease. Public-health problems, such as clusters of cancer cases or outbreaks of communicable diseases, can have an environmental component. Asthma has distinct geographic, temporal, and demographic patterns that strongly suggest environmental contributions to its incidence and severity (Woodruff et al., 2004) and provides an excellent illustration of how the committee’s vision could help to elucidate the environmental components of a disease. First, animal models of asthma have been plagued by important species differences, which limit the utility of standard toxicity-testing approaches (Pabst, 2002; Epstein, 2004). Second, substantial data are available on toxicity pathways involved in asthma (Maddox & Schwartz, 2002; Pandya et al., 2002; Lutz & Sulkowski, 2004; Lee et al., 2005; Chan et al., 2006; Nakajima & Takatsu, 2006; Abdala-Valencia et al., 2007); the pathways should be testable with high-throughput assays, which could permit the evaluation of many environmental agents for a potential etiologic role in the induction or exacerbation of asthma. Third, environmental agents that raise concern in the high-throughput assays could have high priority in population-based studies for evaluation of their potential link to asthma in human populations, such as workers. The high-throughput assays that are based on evaluation of toxicity pathways can survey large numbers of environmental agents and identify those which operate through a mechanism that may be relevant to a disease of interest, as in the case of asthma, and may help to generate useful hypotheses that can then be examined in population-based studies.
Evaluation of the relative risks posed by environmental agents. It is often useful to assess the relative risks associated with different environmental agents, such as pesticides or pharmaceutical products, that could have been developed for the same purpose. The new toxicity-testing paradigm will provide information on relative potencies established by computational toxicology, toxicity-pathway analysis, dose-response analysis, and targeted testing.
The future toxicity-testing strategy envisioned by the committee will be well suited to providing the relevant data needed to make the critical risk-management decisions required in the long term.
Toxicity-Testing Strategies in Practice
To illustrate how the results of the tests envisioned by the committee may be applied in specific circumstances, two hypothetical examples of environmental agents that may pose risks to human health are considered. The first example is an irritant gas, and the second is an environmental agent that acts by interactions with estrogen receptors. The committee emphasizes that these examples are intended not to recommend definitive procedures for conducting human health risk assessment but simply to show how assessment might be approached. As the research discussed in the section “Developing the Science Base and Assays to Implement the Vision” is conducted, much will be learned, and new tests and methods to incorporate results into assessments will emerge.
Example 1: Irritant Gas
Toxicity testing and empirical dose-response analysis
Among a larger group of gases tested in multiple high-throughput assays, the agent caused dose-related responses in test assays for glutathione depletion, Nrf2 oxidative-stress pathway activation, inflammatory pathway responses, and general cytotoxicity. Most other human toxicity-pathway tests had negative results, but the test gas was routinely cytotoxic in systems in which gases were easily tested. Nrf2 pathway activation proved to be the most sensitive endpoint, with an EC10 of 10 ppm and a lower bound on the EC10 of 6.5 ppm, where EC10 or ED10 is the concentration or dose that causes a 10% increase in the response or effect over that of the control.
A known hydrolysis product of the test gas—one produced in stoichiometric equivalents on hydrolysis of the gas—produced similar responses in vitro when tested over a thousand-fold concentration range (0.001–1 mM). The test provided a lower bound ED10 of 0.12 mM for Nrf2 pathway activation. The hydrolysis product was tested in a broad suite of toxicity pathways and showed little evidence of pathway-specific responses, but consistently showed toxic responses at concentrations much above 1.0 mM.
At nontoxic concentrations, the compound showed no evidence of mutagenicity.
Extrapolation
Low dose. With positive-control oxidants, low-dose behavior of the Nrf2 pathway was shown to be nonlinear because of high gain in the feedback loops that control activation of this adaptive stress-response pathway. A concentration of one-tenth the lower bound on the EC10 would not be expected to cause substantial pathway activation. That concentration would serve as a starting point for consideration of susceptibility factors, preexisting disease in the human population, and possible coexposures to similarly acting compounds.
In vitro to in vivo. Extrapolation from the in vitro system used a human pharmacokinetic model derived from a computational fluid-dynamics approach. Model inputs derived partially from SAR included reaction rates of the gas in tissues and species-specific breathing rates. The pharmacokinetic dosimetry model was used to calculate the exposure concentrations that would yield 0.012 mM hydrolysis product (that is, 0.12 mM/10) in the nose and lungs during a continuous human inhalation exposure. The pharmacokinetic model, run in Markov-chain Monte Carlo fashion to account for variability and uncertainty, provided lower bound estimates of 2.5–0.6 ppm for the lungs and 15–3 ppm for the nose. Sensitivity analysis of the combined toxicity-pathway dosimetry model indicated key biologic and pharmacokinetic factors that had important roles in dose delivery and the circuitry governing Keap1 and Nrf2 signaling.
Susceptibility. Susceptibility would depend heavily on polymorphisms in critical portions of the Nrf2 pathway. People with higher than average Keap1 or lower than average Nrf2 could fail to have an adaptive response to oxidative stressors and could progress to toxicity at lower exposure concentrations. The observed polymorphisms in the human population and sensitivity with pre-existing diseases suggest that estimates arising from the dose-response analysis should be reduced by a factor of 10.
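Purely to make the arithmetic of this hypothetical example concrete, the following sketch converts the target tissue concentration of 0.012 mM to an equivalent continuous air concentration with an invented linear steady-state dosimetry factor and then applies the factor of 10 for susceptibility; the committee’s example relies on a computational fluid-dynamics model rather than this simple relationship:

# Toy reverse dosimetry for the irritant-gas example (all parameters hypothetical)
target_tissue_mM = 0.012          # one-tenth of the lower-bound ED10 (0.12 mM / 10)

# Assume a linear steady-state relationship: tissue concentration (mM) per ppm of
# continuously inhaled gas, lumping uptake, hydrolysis, and local clearance.
tissue_mM_per_ppm = 0.004         # invented scaling factor

equivalent_air_ppm = target_tissue_mM / tissue_mM_per_ppm
susceptibility_factor = 10.0      # for Nrf2-pathway polymorphisms and preexisting disease

guidance_ppm = equivalent_air_ppm / susceptibility_factor
print(f"Equivalent continuous exposure ~ {equivalent_air_ppm:.1f} ppm; "
      f"after susceptibility adjustment ~ {guidance_ppm:.2f} ppm")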
Risk-assessment guidance
The exposure concentration derived from the high-throughput toxicity-pathway screens and the associated interpretive tools could be used in setting reference standards. The assessment would indicate that the concentration should ensure that an exposure would not lead to biologically significant responses to the compound. In addition, the risk narrative would state that this exposure limit should be protective of other downstream responses—such as respiratory tract toxicity—that might be of concern at higher concentrations, because even adaptive, precursor responses are being avoided.
Estimates of cumulative risk should be considered for situations with simultaneous exposures to the irritant gas and other gases that affect Nrf2 signaling.
Human surveillance
Surveillance studies of workers or other human populations potentially exposed to the irritant gas could test for evidence of Nrf2 oxidative-stress pathway activation and inflammatory pathway responses, possibly using induced sputum samples. To evaluate the results, any increases in activation in the exposed population could be compared with pathway activation in control human populations.
Example 2: Estrogenic Agonist
Toxicity testing and empirical dose-response analysis
Members of a large group of commercial chemicals were tested in multiple high-throughput in vitro assays. One of them triggered dose-related activation of estrogenic signaling in receptor-binding assays and increased DNA replication—indicative of cell proliferation—in human breast-cancer cells in vitro. Receptor-binding assays for this compound had the lowest ED10 values; assay indicators of gene transcription and DNA replication responded only at much higher concentrations (see the concentration–response curve-fitting sketch below). QSAR methods also predicted an estrogenic effect on the basis of a library of tested compounds. All other human toxicity-pathway tests were negative or showed responses only at much higher concentrations. The test compound had low cytotoxicity in most screens and produced estrogen-receptor activation at concentrations one-tenth of those that produced signs of cell toxicity.
A short-term mechanistic in vivo study in ovariectomized female rats confirmed mild estrogenic action in vivo and provided moderate evidence of gene-expression responses in uterine or breast tissues. Predicted conjugated metabolites of the compound were without activity in those assays.
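Estimating an ED10 or EC10 from high-throughput concentration–response data such as the receptor-binding results above is, in practice, a curve-fitting exercise. The sketch below uses invented data and a standard four-parameter Hill model to show one plausible way of doing it; it is not taken from the report, and the concentrations, responses, and parameter bounds are all assumptions.

```python
# A minimal curve-fitting sketch with invented data; not from the report.
import numpy as np
from scipy.optimize import curve_fit

def hill(conc, bottom, top, ec50, n):
    """Four-parameter Hill concentration-response curve."""
    return bottom + (top - bottom) / (1.0 + (ec50 / conc) ** n)

conc = np.array([1e-3, 1e-2, 1e-1, 1.0, 10.0, 100.0])  # micromolar, hypothetical
resp = np.array([0.02, 0.05, 0.20, 0.55, 0.90, 0.98])  # fractional receptor activation

popt, _ = curve_fit(hill, conc, resp, p0=[0.0, 1.0, 1.0, 1.0],
                    bounds=([-0.1, 0.5, 1e-4, 0.2], [0.2, 1.2, 1e3, 5.0]))
bottom, top, ec50, n = popt

# EC10: concentration giving 10% of the fitted response range.
ec10 = ec50 * (0.10 / 0.90) ** (1.0 / n)
print(f"EC50 = {ec50:.3g} uM, Hill slope = {n:.2f}, EC10 = {ec10:.3g} uM")
```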
Extrapolation
Experience with estrogen and other estrogenic chemicals indicates the existence of susceptible populations—such as pubescent girls, fetuses, and infants—that require additional protection and attention. In addition, chemicals that bind to and activate the estrogen receptor may act additively with one another. The extrapolation needs to consider the compound uses, subpopulations that are likely to be exposed to it, other background exposures to estrogenic agents in these subpopulations, and the estimated tissue dose in pregnant and nonpregnant women, fetuses, and infants.
Research on estrogen and estrogen agonists reveals that if receptor occupancy in the most sensitive tissues in susceptible humans is increased by less than x% by this exposure or any combined exposure to estrogenic compounds, an appreciable activation of downstream responses or a biologically significant increase in their activation would be unlikely. An alternative assessment would be based on a functional response in a toxicity-pathway assay, such as transcriptional activation.
Human PBPK models for the compound would be used to model absorption, distribution to sensitive tissues, and elimination of active parent compound. The models (for example, Markov-chain Monte Carlo PBPK model) would be designed to account for human variability in pharmacokinetics and modeling uncertainty. The PBPK models could generate a point-of-departure exposure concentration or a daily intake at which there would be less than x% increase in receptor occupancy or less than x% change in transcriptional activation in susceptible populations (for example, fetuses) and in 95% to 99% of the exposed general population. The PBPK models could also provide the blood concentration associated with the change in receptor occupancy or transcriptional activation. That blood concentration could be expressed in units of “estrogen equivalence” to simplify comparisons with estrogen and similarly acting estrogen agonists. Also, on the basis of estrogen equivalence, the models could be used to assess the effects of cumulative exposure to exogenous estrogenic compounds and could be checked against biologic monitoring data in the human population for validity and to ensure that the point of departure is not overestimated.
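The “less than x% increase in receptor occupancy” criterion can be illustrated with simple mass-action binding arithmetic. The sketch below is hypothetical: the dissociation constant, the background estrogen-equivalent concentration, and the 1% occupancy increment are assumed values standing in for quantities that would actually come from the PBPK and receptor models described above.

```python
# Hypothetical illustration of the occupancy criterion; Kd, background level,
# and the allowed increment are assumed values, not data from the report.
def occupancy(conc_nM, kd_nM):
    """Fractional receptor occupancy from simple mass-action binding."""
    return conc_nM / (conc_nM + kd_nM)

kd_nM = 0.5                 # assumed estrogen-receptor dissociation constant
background_nM = 0.05        # assumed endogenous estrogen-equivalent level
allowed_increase = 0.01     # "x%" expressed as an absolute occupancy increment (1%)

baseline = occupancy(background_nM, kd_nM)
target = baseline + allowed_increase

# Invert occupancy = C / (C + Kd)  =>  C = Kd * target / (1 - target)
total_conc_nM = kd_nM * target / (1.0 - target)
allowed_increment_nM = total_conc_nM - background_nM

print(f"baseline occupancy: {baseline:.3f}")
print(f"allowed added estrogen-equivalent concentration: {allowed_increment_nM:.4f} nM")
```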
Risk-assessment guidance
Reference doses and concentrations used in decision making could be based on a point of departure derived as already described. The reference dose would consider factors, such as susceptibility, that could be altered by polymorphisms in critical portions of downstream estrogen-response pathways or in the conjugating enzymes that clear the compound before it reaches the systemic circulation.
Human surveillance
Human surveillance of workers exposed to the compound could detect subtle indications of early effects in humans if they were to occur.
Toxicity Testing and Risk Assessment
A major application of the results of toxicity testing is in the risk assessment of environmental agents. As illustrated in Figure 10, the committee’s vision for toxicity testing is consistent with the risk-assessment paradigm originally put forward by the National Research Council in 1983. Chemical characterization and toxicity-pathway evaluation would be involved in hazard identification. Pharmacokinetic models would be used to calibrate in vitro and human dosimetry and thereby facilitate the translation of dose in cellular systems to dose in human organs and tissues. Population-based studies would be used to confirm or explore effects observed in cellular systems, to suggest biologic perturbations that require clarification in in vitro tests, and to interpret findings from in vitro studies in the context of human populations. All of these elements would work together to permit establishment of human exposure guidelines based on risk avoidance, which could be used to enforce scientifically based regulatory standards or to support nonregulatory risk-management strategies.
Mode-of-action information is important for informing the dose-response component of the risk-assessment paradigm. A deep understanding of mode of action involves studying the mechanistic pathways by which toxic effects are induced, including the key molecular and other biologic targets in the pathways. Thus, the committee’s vision, outlined in the sections “Vision” and “Components of the Vision” of this report, is a shift away from traditional toxicity testing that focuses on demonstrating adverse health effects in experimental animals toward a deeper understanding of biologic perturbations in key toxicity pathways that lead to adverse health outcomes. The committee believes that its vision of toxicity testing would better inform the assessment of the potential human health risks posed by exposure to environmental agents and ensure efficient testing methods.
TOOLS AND TECHNOLOGIES
The committee provided an overview of its vision for toxicity testing and described the main components of the vision in previous sections. Here, tools and technologies that might be used to implement the committee’s vision are briefly discussed. These tools and technologies will evolve and mature over time, but many are already available. The committee emphasizes that technologies are evolving rapidly and that new molecular technologies will surely become available in the near future for mapping toxicity pathways, assessing their functions, and measuring dose-response relationships.
Tools and Technologies for Chemical Characterization
Various computational methods are available for chemical characterization. The discussion here focuses on structure–activity relationship (SAR) analyses, which use physical and chemical properties to predict the biologic activity, potential toxicity, and metabolism of an agent of concern. All are conceptually based on the similar-property principle—that is, that chemicals with similar structure are likely to exhibit similar activity (Tong et al., 2003). Accordingly, biologic properties of new chemicals are often inferred from properties of similar existing chemicals whose hazards are already known. Specifically, SAR analysis involves building mathematical models and databases that use physical properties (such as solubility, molecular weight, dissociation constant, ionization potential energies, and melting point) and chemical properties (such as steric properties, presence or absence of chemical moieties or functional groups, and electrophilicity) to predict biologic or toxicologic activity of chemicals. SAR analyses can be qualitative (for example, recognition of structural alerts, that is, chemical functional groups and substructures) or quantitative (for example, use of mathematical modeling to link physical, chemical, and structural properties with biologic or toxic endpoints) (Benigni, 2004). Key factors in the successful application of SAR methods include proper representation and selection of structural, physical, and chemical molecular features; appropriate selection of the initial set of compounds (that is, the “training set”) and methods of analysis; the quality of the biologic data; and knowledge of the mode or mechanism of toxic action (McKinney et al., 2000).
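As a concrete, hypothetical illustration of the quantitative branch of SAR analysis, the sketch below fits a toy linear QSAR relating a few physicochemical descriptors to a toxicity endpoint and applies it to an untested chemical. All descriptor and endpoint values are invented; real QSAR models rely on much larger, curated training sets and on validated descriptor selection.

```python
# Toy linear QSAR; all values are hypothetical and purely illustrative.
import numpy as np

# Training-set descriptors: log Kow, molecular weight (scaled), ionization potential (scaled)
X = np.array([
    [1.2, 0.94, 0.88],
    [2.5, 1.12, 0.91],
    [3.1, 1.30, 0.85],
    [0.8, 0.75, 0.95],
    [4.0, 1.55, 0.80],
])
y = np.array([2.1, 3.0, 3.6, 1.7, 4.4])   # endpoint, e.g., log(1/EC50)

# Ordinary least squares: y ~ b0 + b1*logKow + b2*MW + b3*IP
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# Predict the endpoint for an untested chemical (leading 1.0 is the intercept term).
new_chemical = np.array([1.0, 2.0, 1.05, 0.89])
print("coefficients:", np.round(coef, 3))
print("predicted log(1/EC50):", round(float(new_chemical @ coef), 2))
```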
Current applications of SAR analyses include soft drug design, which involves improving the therapeutic index of a drug by manipulating its steric and structural properties (Bodor, 1999); design and testing of chemotherapeutic agents (van den Broek et al., 1989); nonviral gene and targeted-gene delivery (Congiu et al., 2004); creating predictive models of carcinogenicity to replace animal models (Benigni, 2004); predicting the toxicity of chemicals, particularly pesticides and metals (Walker et al., 2003a); and predicting the environmental fate and ecologic effects of industrial chemicals (Walker et al., 2003b). Among the available predictive-toxicity systems, the most widely used are statistically based correlative programs (such as CASE/MultiCASE and TOPKAT) and rule-based expert systems (such as DEREK and ONCOLOGIC) (McKinney et al., 2000).
There are many examples of successful applications of SAR and quantitative SAR (QSAR) analysis. One successful application of SAR analysis in risk assessment is the modeling of Ah-receptor-binding affinities of dioxin-like compounds, including the structurally related polychlorinated dioxins, dibenzofurans, and biphenyls. Specifically, SAR methods were used to establish a common mechanism of action for toxic effects and in the further development of toxic equivalency factors in risk assessments involving exposure to complex mixtures of those compounds (Van den Berg et al., 1998). Other successful applications have examined how structural alterations influence toxicity. For example, toxic effects of nonpolar anesthetics are mediated by a nonspecific action on cell membranes and have been shown to correlate directly with the log octanol–water partition coefficient (log Kow). However, the polar anesthetics—which include such chemicals as phenols, anilines, pyridines, nitrobenzenes, and aliphatic amines—generally show an anesthetic potency 5–10 times higher than expected on the basis of their log Kow alone (Soffers et al., 2001).
Much effort has been directed toward the modeling and prediction of specific toxicities, particularly mutagenicity and carcinogenicity because of the importance of these endpoints, the cost and length of full rodent assays for carcinogenesis, and the availability of high-quality data for modeling purposes. Experimental observation has led to the identification of several structural alerts that can cause both mutation and cancer, including carbonium ions (alkyl, aryl, and benzylic), nitrenium ions, epoxides and oxonium ions, aldehydes, polarized double bonds (alpha- and beta-unsaturated carbonyls or carboxylates), peroxides, free radicals, and acylating intermediates (Benigni & Bossa, 2006). The structural alerts for mutagenicity and carcinogenicity have been incorporated into expert systems for predicting toxic effects of chemicals (Simon-Hettich et al., 2006).
A number of structural alerts have also been associated with developmental toxicity. They were identified on the basis of known developmental responses to environmental agents, such as valproic acid, hydrazides, and carbamates (Schultz & Seward, 2000; Cronin, 2002; Walker et al., 2004). Studies have demonstrated that the presence of a hydroxyl group is required for estrogenic activity of biphenyls; symmetric derivatives are 10 times more active than nonsymmetric ones (Schultz et al., 1998). Analysis of the relationship between estrogenic potency and the size and shape of the nonphenolic moiety among para-substituted phenols demonstrated a trend of increasing estrogenicity with increasing molecular size (Schultz & Seward, 2000). Thus, although predictive models for some toxic endpoints, such as mutagenicity, already exist, more mechanistically complex endpoints—such as acute, chronic, or organ toxicity—are more difficult to predict (Schultz & Seward, 2000; Simon-Hettich et al., 2006).
One final application of SAR analysis is in predicting absorption, distribution, metabolism, and excretion. Qualitative SARs, QSARs, and the related quantitative structure–property relationships have been successfully used to estimate such key properties as permeability, solubility, biodegradability, and cytochrome P-450 metabolism (Feher et al., 2000; Bugrim et al., 2004); to predict drug half-life values (Anderson, 2002); and to describe penetration of the blood-brain barrier (Bugrim et al., 2004).
As indicated earlier, the predictive ability of different models depends on selecting the correct molecular descriptors for the particular toxic endpoints, the appropriate mathematical approach and analysis, and a sufficiently rich set of experimental data. The ability to adapt existing models continuously by building on larger and higher quality data sets is crucial for the improvement and ultimate success of these approaches.
Mapping Toxicity Pathways
As discussed in the sections “Vision” and “Components of the Vision,” the key component of the committee’s vision is the evaluation of perturbations in toxicity pathways. Many tools and technologies are available that can aid in the identification of biologic signaling pathways and the development of assays to evaluate their function. Recent advances in cellular and molecular biology, “-omics” technologies, and computational analysis have contributed considerably to the understanding of biologic signaling processes (Daston, 1997; Ekins et al., 2005). Within the last 15 years, multiple cellular response pathways have been evaluated in increasing depth, as is evidenced by the progress in the basic knowledge of cellular and molecular biology (Fernandis & Wenk, 2007; Lewin et al., 2007). Moreover, systems biology constitutes a powerful approach to describing and understanding the fundamental mechanisms by which biologic systems operate. Specifically, systems biology focuses on the elucidation of biologic components and how they work together to give rise to biologic function. A systems approach can be used to describe the fundamental biologic events involved in toxicity pathways and to provide evolving biologic modeling tools that describe cellular circuits and their perturbations by environmental agents (Andersen et al., 2005a). A longer term goal of systems biology is to create mathematical models of biologic circuits that predict the behavior of cells in response to environmental agents qualitatively and quantitatively (Lander & Weinberg, 2000). Progress in that regard is being made in developmental biology (Cummings & Kavlock, 2005; Slikker et al., 2005). The subsections that follow outline tools and technologies that will most likely be used to elucidate the critical toxicity pathways and to develop assays to evaluate them.
In Vitro Tests
The committee foresees that in vitro assays will make up the bulk of the toxicity tests in its vision. In vitro tests are already used in traditional toxicity testing, and experience with them demonstrates the feasibility of developing and applying in vitro assays (Goldberg & Hartung, 2006). Examples include the 3T3 neutral red uptake phototoxicity assay (Spielman & Liebsch, 2001), cytotoxicity assays (O’Brien & Haskins, 2007), skin-corrosivity tests, and assays measuring vascular injury using human endothelial cells (Schleger et al., 2004). Many tests have been validated by the European Centre for the Validation of Alternative Methods. The committee notes that the current in vitro tests originated as alternatives to or replacements of other toxicity tests. In the committee’s vision, in vitro assays will evaluate biologically significant perturbations in toxicity pathways and thus are not intended to serve as direct replacements of existing toxicity tests.
The committee envisions the use of human cell lines for the in vitro assays. Cell lines have been used for a long time in experimental toxicology and pharmacology. Human cell lines are readily available from tissue-culture banks and laboratories and are particularly attractive because they offer the possibility of working with a system that maintains several phenotypic and genotypic characteristics of the human cells in vivo (Suemori, 2006). Differentiated functions, functional markers, and metabolic capacities may be altered or preserved in cell lines, depending on culture conditions, thereby allowing testing of a wide array of agents in different experimental settings. Other possibilities include using animal cells that are transfected to express human genes and proteins. For example, various cell lines—such as V79, CHO, COS-7, NIH3T3, and HEPG2—have been transfected with complementary DNA (cDNA, DNA synthesized from mature mRNA) coding for human enzymes and used in mutagenesis and drug-metabolism studies (Potier et al., 1995). Individual enzymes have also been stably expressed to identify the major human isoenzymes, such as cytochromes P-450 and UDP-glucuronosyltransferases, responsible for the metabolism of potential therapeutic and environmental agents. The metabolic in vitro screens with human enzymes are usually conducted as a prelude to clinical studies.
A major limitation of using human cell lines is the difficulty of extrapolating data from the simple biologic system of single cells to the complex interactions in whole animals. Questions have also been raised concerning the stability of cell lines over time, the reproducibility of responses over time, and the ability of cell lines to account for genetic diversity of the human population. Nonetheless, cell lines have been used as key tools in the initial screening and evaluation of toxic agents and the characterization of properties of cancer cells (Suzuki et al., 2005) and in gene profiling with microarrays (Wang et al., 2006b). The high-throughput methods now becoming more common will allow the expansion of the methods to larger numbers of endpoints, wider dose ranges, and mixtures of agents (Inglese, 2002; Inglese et al., 2006).
High-Throughput Methods
A critical feature of the committee’s vision is the use of high-throughput methods that will allow economical screening of large numbers of chemicals in a short period. The pharmaceutical industry provides an example of the successful use of high-throughput methods. Optimizing drug-candidate screening is essential for timely and cost-effective development of new pharmaceuticals. Without effective screening methods, poor drug candidates might not be identified until the preclinical or clinical phase of the drug-development process, and this could lead to high costs and low productivity for the pharmaceutical industry (Lee & Dordick, 2006). Pharmaceutical companies have turned to high-throughput screening, which allows automated simultaneous testing of thousands of chemical compounds under conditions that model key biologic mechanisms (Fischer, 2005). Such technologies as hybridization, microarrays, real-time polymerase chain reaction, and large-scale sequencing are some of the high-throughput methods that have been developed (Waring & Ulrich, 2000). High-throughput assays are useful for predicting several important characteristics related to the absorption, distribution, metabolism, excretion, and toxicity of a compound (Gombar et al., 2003). They can predict the interaction of a compound with enzymes, the metabolic degradation of the compound, the enzymes involved in its biotransformation, and the metabolites formed (Masimirembwa et al., 2001). That information is integral for selecting compounds to advance to the next phase of drug development, especially when many compounds may have comparable pharmacologic properties but differing toxicity profiles (Pallardy et al., 1998). High-throughput assays are also useful for rapid and accurate detection of genetic polymorphisms that could dramatically influence individual differences in drug response (Shi et al., 1999).
Microarrays
Microarray technologies have enabled the development of the field of toxicogenomics, which evaluates changes in gene expression in response to environmental agents or toxicants. These technologies permit genome-wide assessments of changes in gene expression associated with exposure to environmental agents. The identification of responding genes can provide valuable information on cellular response and some information on toxicity pathways that might be affected by environmental agents. Some of the relevant tools and technologies are described next.
Microarrays are high-throughput analytic devices that provide comprehensive genome-scale expression analysis by simultaneously monitoring quantitative transcription of thousands of genes in parallel (Hoheisel, 2006). The Affymetrix GeneChip Human Genome U133 Plus 2.0 Array provides comprehensive analysis of genomewide expression of the entire transcribed human genome on a single microarray (Affymetrix Corporation, 2007). Whole-genome arrays are also available for the rat and mouse. The use of the rat arrays will probably increase as the relationships between specific genes and markers on the arrays become better understood.
Protein microarrays potentially offer the ability to evaluate all expressed proteins in cells or tissues. Protein-expression profiling would allow some understanding of the relationship between transcription (the suite of mRNAs in the cell) and the translational readout of the transcripts (the proteins). Protein microarrays have diverse applications in biomedical research, including profiling of disease markers and understanding of molecular pathways, protein modifications, and protein activities (Zangar et al., 2005). However, whole-cell or tissue profiling of expressed proteins is still in the developmental stage. These techniques remain expensive, and the technology is in flux.
Differential gene-expression experiments use comparative microarray analysis to identify genes that are upregulated or downregulated in response to experimental conditions. The large-scale investigation of differential gene expression attaches functional activity to structural genomics. Whole-genome-expression experiments involve hundreds of experimental conditions in which patterns of global gene expression are used to classify disease specimens and discover gene functions and toxicogenomic targets (Peeters & Van der Spek, 2005). Gene-expression profiling will have a role in identifying toxicity pathways in whole-animal studies but is not expected to be the staple technology for identifying and mapping the pathways.
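A minimal sketch of this comparative analysis follows: hypothetical treated and control intensities for a handful of genes are compared by log2 fold change and Welch’s t-test with a crude Bonferroni cutoff. Real toxicogenomic pipelines add normalization, quality control, and more sophisticated statistics; this is only meant to show the shape of the calculation.

```python
# Illustrative differential-expression calculation with simulated data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
genes = [f"gene_{i}" for i in range(5)]

control = rng.lognormal(mean=5.0, sigma=0.1, size=(5, 4))        # 5 genes x 4 arrays
effect = np.array([[1.0], [3.0], [1.1], [0.35], [1.0]])          # induce gene_1, repress gene_3
treated = control * effect * rng.lognormal(mean=0.0, sigma=0.05, size=(5, 4))

log2fc = np.log2(treated.mean(axis=1) / control.mean(axis=1))
pvals = stats.ttest_ind(np.log2(treated), np.log2(control), axis=1, equal_var=False).pvalue

alpha = 0.05 / len(genes)   # Bonferroni-corrected threshold
for gene, fc, p in zip(genes, log2fc, pvals):
    flag = "differentially expressed" if (abs(fc) > 1.0 and p < alpha) else ""
    print(f"{gene}: log2FC = {fc:+.2f}, p = {p:.2g} {flag}")
```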
High-Throughput Functional Genomics
Functional genomics should be distinguished from toxicogenomics. Toxicogenomics is a broad field combining expertise in toxicology, genetics, molecular biology, and environmental health and includes genomics, proteomics, and metabonomics, whereas functional genomics as described here is a specialized discipline that attempts to understand the functions of genes within cellular networks.
Large-scale evaluations of the status of gene expression and protein concentrations in cells allow understanding of the integrated biologic activities in tissues and can be used to catalog changes after in vivo or in vitro treatment with environmental agents. However, evaluation of the organization and interactions among genes in toxicity pathways requires approaches referred to as functional genomics, which encompass a different suite of molecular tools (Brent, 2000). The tools are designed to catalog the full suite of genes that are required for optimal activity of a toxicity pathway. The evaluation of the readout of those functional screens with bioinformatic analysis provides key data about the organization of toxicity pathways and guides computational methods that model the consequences of perturbation of the pathways by environmental agents.
Functional analysis requires a cell-based assay that provides a convenient, automated cell-based measure of functioning of a toxicity pathway (Akutsu et al., 1998; Michiels et al., 2002; Chanda et al., 2003; Lum et al., 2003; Berns et al., 2004; Huang et al., 2004) and requires the ability to automate treatment of the cells with individual cDNAs or small interfering RNAs (siRNAs), which are relatively short RNA oligomers that appear to play important roles in inhibiting gene expression (Hannon, 2002; Meister & Tuschl, 2004; Mello & Conte, 2004; Hammond, 2005). Treatment of the cells with a particular cDNA causes overexpression of the gene (and presumably the protein) that is coded by it. In contrast, treatment with gene-specific siRNA causes knockdown of specific proteins by enhancing degradation of the mRNA from the gene. High-throughput methods permit automation of such cell-based assays by the use of robots and libraries of cDNAs and siRNAs. The screens show which genes increase and which decrease activity of the toxicity pathway.
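The readout of such a screen is commonly summarized as a per-gene score relative to negative controls. The sketch below shows one simple way this could look for a hypothetical Nrf2 pathway reporter: the gene symbols are real pathway members discussed earlier, but the activity values and the z-score cutoff of 3 are invented for illustration.

```python
# Hypothetical scoring of an siRNA knockdown screen against a pathway reporter.
import numpy as np

# Reporter activity in wells treated with non-targeting (negative-control) siRNA.
negative_controls = np.array([1.00, 0.95, 1.05, 0.98, 1.02])
mu, sigma = negative_controls.mean(), negative_controls.std(ddof=1)

# Reporter activity after knockdown of individual genes (invented values).
knockdowns = {"KEAP1": 1.60, "NFE2L2": 0.35, "GCLC": 0.97, "ACTB": 1.01}

for gene, activity in knockdowns.items():
    z = (activity - mu) / sigma
    if z > 3:
        call = "knockdown raises reporter activity (candidate negative regulator)"
    elif z < -3:
        call = "knockdown lowers reporter activity (candidate positive regulator)"
    else:
        call = "no clear effect"
    print(f"{gene}: z = {z:+.1f} -> {call}")
```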
Computational Biology
Computational biology uses computer techniques and mathematical modeling to understand biologic processes. It is a powerful tool to cope with the ever-increasing quantity and quality of biologic information on genomics, proteomics, gene expression, gene varieties, genotyping techniques, and protein and cell arrays (Kriete & Eils, 2006). Computational tools are used in data analysis, data mining, data integration, network analysis, and multiscale modeling (Kitano, 2005). Computational biology is particularly useful for systems biology in understanding structural, regulatory, and kinetic models (Barabasi & Oltvai, 2004); in modeling signal transduction (Eungdamrong & Iyengar, 2004); and in analyzing genome information and its structural and functional properties (Snitkin et al., 2006). Furthermore, computational biology is used to predict toxic effects of chemical substances (Simon-Hettich et al., 2006), to understand the toxicokinetics and toxicodynamics of xenobiotics (Ekins, 2006), to determine gene-expression profiling of cancer cells (Katoh & Katoh, 2006), to help in the development of genomic biomarkers (Ginsburg & Haga, 2006), and to design virtual experiments to replace or reduce animal testing (Vedani, 1999). In drug design and discovery, novel computational technologies help to create chemical libraries of structural motifs relevant to target proteins and their small molecular ligands (Balakin et al., 2006; O’Donoghue et al., 2006).
Cellular signaling circuits handle an enormous variety of functions. Apart from replication and other functions of individual cells, signaling circuits must implement the complex logic of development and function of multicellular organisms. Computer models are helpful in understanding that complexity (Bhalla et al., 2002). Recent studies have extended such models to include electrical, mechanical, and spatial details of signaling (Bhalla, 2004a, 2004b). The mitogen-activated protein kinase (MAPK) pathway is one of the most important and extensively studied signaling pathways; it governs growth, proliferation, differentiation, and survival of cells. Widely varied mathematical models of the MAPK pathway have led to novel insights and predictions as to how it functions (Orton et al., 2005; Santos et al., 2007).
Predictive computational models derived from experimental studies have been developed to describe receptor-mediated cell communication and intracellular signal transduction (Sachs et al., 2005). Physicochemical models attempt to describe biomolecular transformations, such as covalent modification and intermolecular association, with physicochemical equations. The models make specific predictions and work mostly with pathways that are better understood. They can be viewed as translations of familiar pathway maps into mathematical forms (Aldridge et al., 2006). Integrated mechanistic and data-driven modeling for multivariate analysis of signaling pathways is a novel approach to understanding multivariate dependence among molecules in complex networks and potentially can be used to identify combinatorial targets for therapeutic interventions and toxicity-pathway targets that lead to adverse responses (Hua et al., 2006).
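To indicate what such a physicochemical model looks like in code, the sketch below writes a deliberately simplified two-tier kinase cascade as ordinary differential equations and reports the steady-state output at several input levels; the saturable (zero-order) kinetics produce a switch-like response. It is a generic textbook-style construction with assumed rate constants, not any published MAPK model.

```python
# Generic two-tier kinase cascade; all rate constants are assumed values.
from scipy.integrate import solve_ivp

def cascade(t, y, signal, k_act=1.0, k_inact=0.3, km=0.1):
    a, b = y   # fractions of kinase A and kinase B in the active form
    da = k_act * signal * (1 - a) / (km + 1 - a) - k_inact * a / (km + a)
    db = k_act * a * (1 - b) / (km + 1 - b) - k_inact * b / (km + b)
    return [da, db]

for signal in (0.05, 0.2, 0.5, 1.0):
    sol = solve_ivp(cascade, (0.0, 200.0), [0.0, 0.0], args=(signal,), rtol=1e-8)
    print(f"input signal = {signal:4.2f} -> steady-state active B = {sol.y[1, -1]:.2f}")
```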
In Vivo Tests
As discussed in two previous sections, in vivo tests will most likely be used in the foreseeable future to evaluate the formation of metabolites and some mechanistic aspects of target-organ responses to environmental agents, including genomewide evaluation of gene expression. The previous section “Components of the Vision” noted that careful design of those studies could substantially increase the value of information obtained. For example, evaluation of cellular transcriptomic patterns from tissues of animals receiving short-term exposures may provide clues to cellular targets of environmental agents and assist in target-tissue identification. (See the section “Components of the Vision” for further discussion of protocol changes that could increase the value of toxicity tests.) Moreover, technologic advances in detection and imaging have the potential for improving in vivo testing. For example, positron-emission tomography (PET) is an imaging tool that can determine biochemical and physiologic processes in vivo by monitoring the activity of radiolabeled compounds (Paans & Vaalburg, 2000). Because PET can detect the activity of an administered compound at the cellular level, its use in animal models can result in the incorporation of mechanistic processes and an understanding of the pathologic effects of a candidate compound (Rehmann & Jayson, 2005).
Tools and Technologies for Dose-Response and Extrapolation Modeling
As discussed in the sections “Vision” and “Components of the Vision,” two types of modeling will be critical for implementing the committee’s vision: physiologically based pharmacokinetic (PBPK) models and dose-response models of perturbations of toxicity pathways. PBPK models will allow dose extrapolation from in vitro conditions used for assessing toxicity-pathway perturbations to projected human exposures in vivo. Mechanistic models of perturbations of toxicity pathways should aid in developing low-dose extrapolation models that consider the biologic structure of the cellular circuitry controlling pathway activation.
Physiologically Based Pharmacokinetic Models
Assessing the risk associated with human chemical exposure has traditionally relied on the extrapolation of data from animal models to humans, from one route of exposure to another, and from high doses to low doses. Such extrapolation attempts to relate the extent of external exposure to a toxicant to the internal dose in the target tissue of interest. However, differences in biotransformation and other pharmacokinetic processes can introduce error and uncertainty into the extrapolation of toxicity from animals to humans (Kedderis & Lipscomb, 2001).
PBPK models provide a physiologic basis for extrapolating between species and routes of exposure and thus allow estimation of the active form of a toxicant that reaches the target tissue after absorption, distribution, and biotransformation (Watanabe et al., 1988). However, PBPK results can differ significantly in the hands of different modelers (Hattis et al., 1990), and improved modeling approaches for parameter selection and uncertainty analysis are under discussion. PBPK models might also be useful for estimating the effect of exposure at different life stages, such as pregnancy, critical periods of development, and childhood growth (Barton, 2005). Interindividual differences can be incorporated into PBPK models by integrating quantitative information from in vitro biotransformation studies (Bois et al., 1995; Kedderis & Lipscomb, 2001).
The more pervasive use of PBPK approaches in the new strategy for toxicity testing will be in basing dosimetry extrapolations on estimates of partitioning, metabolism, and interactions among chemicals derived from in vitro measurements or perhaps even from SAR or QSAR techniques. Those extrapolations will require some level of validation, which might require data from kinetic studies in volunteers or from biomonitoring studies in human populations. In the committee’s vision for toxicity testing, the development of PBPK models from SAR predictions of partitioning and metabolism would decrease animal use, and continued improvements in in vitro to in vivo extrapolation of kinetics will support the translation of test-tube observations of pathway perturbations into predictions of responses at human exposure levels.
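A minimal reverse-dosimetry calculation in the spirit of this extrapolation is sketched below. It collapses the pharmacokinetics to a one-compartment steady-state approximation; the molecular weight, clearances, and oral bioavailability are assumptions chosen only to show the arithmetic, whereas a PBPK model would represent individual tissues and time courses explicitly.

```python
# Toy reverse dosimetry with a one-compartment steady-state approximation.
# All parameter values are hypothetical.
in_vitro_pod_uM = 1.0          # concentration below which no pathway perturbation was seen
mol_weight = 250.0             # g/mol, hypothetical compound
pod_mg_per_L = in_vitro_pod_uM * mol_weight / 1000.0    # 0.25 mg/L target plasma level

hepatic_clearance_L_per_h = 5.0     # assumed, e.g., scaled from in vitro metabolism data
renal_clearance_L_per_h = 1.0       # assumed
total_clearance = hepatic_clearance_L_per_h + renal_clearance_L_per_h

oral_bioavailability = 0.5          # assumed

# Steady state for repeated dosing: Css = F * dose_rate / CL  =>  dose_rate = Css * CL / F
dose_rate_mg_per_h = pod_mg_per_L * total_clearance / oral_bioavailability
daily_dose_mg = dose_rate_mg_per_h * 24.0
body_weight_kg = 70.0
print(f"oral-equivalent point of departure ~ {daily_dose_mg / body_weight_kg:.2f} mg/kg-day")
```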
Dose-Response Models of Toxicity Pathways
Dose-response modeling of toxicity pathways involves the integration of mechanistic and dosimetric information about the toxicity of a chemical into descriptive mathematical terms to provide a quantitative model that allows dose and interspecies extrapolation (Conolly, 2002). New techniques in molecular biology, such as functional genomics, will play a key role in the development of such models because they provide more detailed information about the organization of toxicity pathways and the dose-response relationships of perturbations of toxicity pathways by environmental agents. Dose-response models have been developed for cell-signaling pathways and used in risk assessment (Andersen et al., 2002). They have found important applications in studying chemical carcinogenesis (Park & Stayner, 2006). In particular, models of cancer formation have been developed to describe the induction of squamous-cell carcinomas of the nasal passage in rats exposed to formaldehyde by inhalation, taking into account both tissue dosimetry and the nonlinear effects of cellular proliferation and formation of DNA–protein cross-links (Slikker et al., 2004a, 2004b; Conolly et al., 2004). However, alternative implementations of the formaldehyde model gave substantially different results (Subramaniam et al., 2006). Emerging developments in systems biology allow modeling of cellular and molecular signaling networks affected by chemical exposures and thereby produce an integrated modeling approach capable of predicting dose-response relationships of pathway perturbations by developmental and reproductive toxicants (Andersen et al., 2005b).
In the next decades, the dose-response modeling tools for perturbations should progress relatively rapidly to guide low-dose extrapolations of initial interactions of toxic compounds with biologic systems. The quantitative linkage of early perturbations to apical responses is likely to develop more slowly. For the foreseeable future, the continued refinement of biologic models of signaling circuitry should guide the extrapolation approaches necessary for conducting risk assessment with the toxicity-pathway tests as the cornerstone of toxicity-testing methods.
DEVELOPING THE SCIENCE BASE AND ASSAYS TO IMPLEMENT THE VISION
Rapid advances in the understanding of the organization and function of biologic systems provide the opportunity to develop innovative mechanistic approaches to toxicity testing. In comparison with the current system, the new approaches should provide wider coverage of chemicals of concern, reduce the time needed for generating toxicity-test data required for decision making, and use animals to a far smaller extent. Accordingly, the committee has proposed development of a testing structure that evaluates perturbations in toxicity pathways and relies on a mix of high- and medium-throughput assays and targeted in vivo tests as the foundation of its vision for toxicity testing. This section discusses the kinds of applied and basic research needed to support the new toxicity-testing approach, the institutional resources required to support and encourage it, and the valuable products that can be expected during the transition from the current apical endpoint testing to a mechanistically based in vivo and in vitro test system.
Most tests in the committee’s vision would be unlike current toxicity tests, which generate data on apical endpoints. The mix of tests in the vision includes in vitro tests that assess critical mechanistic endpoints involved in the induction of overt toxic effects rather than the effects themselves and targeted in vivo tests that ensure adequate testing of metabolites and coverage of endpoints. The move toward a mechanism-oriented testing paradigm poses challenges. Implementation will require (1) the availability of suites of in vitro tests—preferably based on human cells, cell lines, or components—that are sufficiently comprehensive to evaluate activity in toxicity pathways associated with the broad array of possible toxic responses; (2) the availability of targeted tests to complement the in vitro tests and ensure overall adequate data for decision making; (3) models of toxicity pathways to support application of in vitro test results to predict general-population exposures that could potentially cause adverse perturbations; (4) infrastructure changes to support the basic and applied research needed to develop the tests and the pathway models; (5) validation of tests and test strategies for incorporation into chemical-assessment guidelines that will provide direction on interpreting and drawing conclusions from the new assay results; and (6) acceptance of the idea that the results of tests based on perturbations in toxicity pathways are adequately predictive of adverse responses and can be used in decision making. Development of the new assays and the related basic research—the focus of this section—requires a substantial research investment over many years. Institutional acceptance of the new tests and the requisite new risk-assessment approaches—the focus of the next section, “Prerequisites for Implementing the Vision in Regulatory Contexts”—also requires careful planning. They cannot occur overnight.
Ultimately, the time required to conduct the research needed to support large-scale incorporation of the new mechanistic assays into a test strategy that can adequately and rapidly address large numbers of agents depends on the institutional will to commit resources to support the changes. The committee believes that with a concerted research effort, over the next 10 years high-throughput test batteries could be developed that would substantially improve the ability to identify toxicity hazards caused by a number of mechanisms of action. Those results in themselves would be a considerable advance. The time for full realization of the new test strategy, with its mix of in vitro and in vivo test batteries that can rapidly and inexpensively assess large numbers of substances with adequate coverage of possible endpoints, could be 20 or more years.
This section starts by discussing basic research that will provide the foundation for assay development. It then outlines a research strategy and milestones. It concludes by discussing the scientific infrastructure that will support the basic and applied research required to develop the high-throughput and targeted testing strategy envisioned by the committee.
Scope of Scientific Knowledge, Methods, and Assay Development
This section outlines the scientific inquiry required to develop the efficient and effective testing strategy envisioned by the committee. Several basic-research questions need to be addressed to develop the knowledge base from which toxicity-pathway assays and supporting testing technologies can be designed. The discussion here is intended to provide a broad overview, not a detailed research agenda. The committee recognizes the challenges and effort involved in addressing some of these research questions.
Knowledge Development
Knowledge critical for the development of high-throughput assays is emerging from biologic, medical, and pharmaceutical research. Further complementary, focused research will be needed to address fully the key questions that, when answered, will support toxicity-pathway assay development. Those questions are outlined in Table 4 and elaborated next.
TABLE 4. Key knowledge-development questions
Toxicity-pathway identification—What are the key pathways whose perturbations result in toxicity?
Multiple pathways—What alteration in response can be expected from simultaneous perturbations of multiple toxicity pathways?
Adversity—What adverse effects are linked to specific toxicity-pathway perturbations? What patterns and magnitudes of perturbations are predictive of adverse health outcomes?
Life stages—How can the perturbations of toxicity pathways associated with developmental timing or aging be best captured to enable the advancement of high-throughput assays?
Effects of exposure duration—How are biologic responses affected by exposures of different duration?
Low-dose response—What is the effect on a toxicity pathway of adding small amounts of toxicants in light of preexisting endogenous and exogenous human exposures?
Human variability—How do people differ in their expression of toxicity-pathway constituents and in their predisposition to disease and impairment?
Toxicity-pathway identification. The key pathways that, when sufficiently perturbed, will result in toxicity will be identified primarily from future, current, and completed studies in the basic biology of cell-signaling motifs. Identification will involve the discovery of the protein components of toxicity pathways and how the pathways are altered by environmental agents. Many pathways are under investigation with respect to the basic biology of cellular processes. For example, the National Institutes of Health (NIH) has a major program under way to develop high-throughput screening (HTS) assays based on important biologic responses in in vitro systems. HTS has the potential to identify chemical probes of genes, pathways, and cell functions that may ultimately lead to characterization of the relationship between chemical structure and biologic activity (Inglese et al., 2006). Determining the number and nature of toxicity pathways involved in human disease and impairment is an essential component of the committee’s vision for toxicity testing.
Multiple pathways. Adverse biologic change can occur from simultaneous perturbations of multiple toxicity pathways. Environmental agents typically affect more than one toxicity pathway. Although the committee envisions the design of a suite of toxicity tests that will provide broad coverage of biologic perturbations in all key toxicity pathways, biologic perturbations in different pathways may lead to synergistic interactions with important implications for human health. For some adverse health effects, an understanding of the interplay of multiple pathways involved may be important. For others, the research need will be to identify the pathway affected at the lowest dose of the environmental agent.
Adversity. An understanding of possible diseases or functional losses that may result from specific toxicity-pathway perturbations will support the use of pathway perturbations for decision making. Current risk assessments rely on toxicity tests that demonstrate apical adverse health effects, such as disease or functional deficits, that are at various distances downstream of the toxicity-pathway perturbations. In the committee’s vision, the assessment of potential human health impact will be based on perturbations in toxicity pathways. For example, activation of estrogenic action to abnormal levels during pregnancy is associated with undescended testes and, in later life, testicular cancer. Research will be needed to understand the patterns and magnitudes of the perturbations that will lead to adverse effects. As part of the research, biomarkers of effect that can be monitored in humans and studied in whole animals will be useful.
Life stages. An understanding of how pathways associated with developmental timing or aging can be adversely perturbed and lead to toxicity will be needed to develop high-throughput assays that can capture and adequately cover developmental and senescing life stages. Many biologic functions require coordination and integration of a wide array of cellular signals that interact through broad networks that contribute to biologic function at different life stages. That complexity of pathway interaction holds for reproductive and developmental functions, which are governed by parallel and sequential signaling networks during critical periods of biologic development. Because of the complexity of such pathways, the challenge will be to identify all important pathways that affect such functions to ensure adequate protection against risks to the fetus and infant. That research will involve elucidating temporal changes in key toxicity pathways that might occur during development and the time-dependent effects of exposure on these pathways.
Effects of exposure duration. The dose of and response to exposure to a toxicant in the whole organism depend on the duration of exposure. Thus, conventional toxicity testing places considerable emphasis on characterizing risks associated with exposures of different duration, from a few days to the test animal’s lifetime. The ultimate goal in the new paradigm is to evaluate conditions under which human cells are likely to respond and to ensure that these conditions do not occur in exposures of human populations. Research will be needed to understand how the dose-response relationships for perturbations might change with the duration of exposure and to understand pathway activation under acute, subchronic, and chronic exposure conditions. The research will involve investigating the differential responses of cells of various ages and backgrounds to a toxic compound and possible differences in responses of cells between people of different ages.
Low-dose response. The assessment of the potential for an adverse health effect from a small environmental exposure involves an understanding of how the small exposure adds to preexisting exposures that affect the same toxicity pathways and disease processes. For the more common human diseases and impairments, a myriad of exposures from food, pharmaceuticals, the environment, and endogenous processes have the potential to perturb the underlying toxicity pathways. Understanding how a specific environmental exposure combines with those other exposures to modulate a toxicity pathway is critical for understanding low-dose response. Because the toxicity tests in the committee’s long-range vision are based largely on cellular assays involving sensitive biomarkers of alterations in biologic function, it will be possible to study the potential for adverse human health effects at doses lower than is possible with conventional whole-animal tests. Given the cost-effectiveness of the computational methods and in vitro tests that form the core of the toxicity testing, it will be efficient to evaluate effects at multiple doses and thereby build a detailed base of dose-response data. (A toy numerical illustration of additivity to background appears below.)
Human variability. People differ in their expression of toxicity-pathway constituents and consequently in their predisposition to disease and impairment. An understanding of differences among people in the responsiveness of particular toxicity pathways is needed to interpret the importance of small environmental exposures. The comprehensive mapping of toxicity pathways provides an unprecedented opportunity to identify gene loci and other determinants of human sensitivity to environmental exposures. That research will support the development of biomarkers of exposure, effect, and susceptibility for surveillance in the human population, and these discoveries in turn will support an assessment of host susceptibility for use in extrapolating results from the in vitro assays to the general population and susceptible groups. The enhanced ability to characterize interindividual differences in sensitivity to environmental exposures will provide a firmer scientific basis for the establishment of human exposure guidelines that can protect susceptible subpopulations.
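As a toy numerical illustration of the additivity-to-background point raised under “Low-dose response” above, the sketch below places an assumed background dose on the steep part of a Hill-shaped pathway response and compares the incremental response from a small added exposure with the response that the same increment would produce in the absence of background. All parameter values are invented.

```python
# Toy illustration of low-dose response on top of an assumed background exposure.
def pathway_response(dose, ec50=1.0, n=2.0):
    """Hill-shaped fractional pathway activation; parameters are hypothetical."""
    return dose ** n / (dose ** n + ec50 ** n)

background = 0.8      # assumed preexisting effective dose (arbitrary units)
increment = 0.05      # small added environmental exposure

baseline = pathway_response(background)
added = pathway_response(background + increment) - baseline
alone = pathway_response(increment)

print(f"response at background alone: {baseline:.3f}")
print(f"added response from the small increment: {added:.4f}")
print(f"response from the increment with no background: {alone:.4f}")
```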
Research on most, or all, of the subjects just described is going on in the United States and internationally. It is taking place in academe, industry, and government institutions and is funded by foundations and the federal government, mainly to understand the basis of human disease and treatment. Private firms, such as pharmaceutical and biotechnology companies, conduct the research for product development. However, efforts directed specifically toward developing toxicity-testing systems are small.
Test and Analytic Methods Development
The research just described will provide the foundation for the development of toxicity tests and comprehensive testing approaches. The categories of toxicity tests and methods to be developed are outlined next, and the primary questions to be answered in their development are presented in Table 5.
TABLE 5. Key questions for test and analytic methods development
Methods to predict metabolism—How can adequate testing for metabolites in the high-throughput assays be ensured?
Chemical-characterization tools—What computational tools can best predict chemical properties, metabolites, xenobiotic–cellular and molecular interactions, and biologic activity?
Assays to uncover cell circuitry—What methods will best facilitate the discovery of the circuitry associated with toxicity pathways?
Assays for large-scale application—Which assays best capture the elucidated pathways and best reflect in vivo conditions? What designs will ensure adequate testing of volatile compounds?
Suite of assays—What mix of pathway-based high- and medium-throughput assays and targeted tests will provide adequate coverage? What targeted tests should be developed to complement the toxicity-pathway assays? What are the appropriate positive and negative controls that should be used to validate the assay suite?
Human-surveillance strategy—What surveillance is needed to interpret the results of pathway tests in light of variable human susceptibility and background exposures?
Mathematical models for data interpretation and extrapolation—What procedures should be used to evaluate whether humans are at risk from environmental exposures?
Test-strategy uncertainty—How can the overall uncertainty in the testing strategy be best evaluated?
Methods to predict metabolism. A key issue to address at an early phase will be development of methods to ensure adequate testing for metabolites in high-throughput assays. Understanding the range of metabolic products and the variation in metabolism among humans and being able to simulate human metabolism as needed in test systems are critical for developing valid toxicity-pathway assays. Without such methods, targeted in vivo assays will be needed to evaluate metabolism.
Chemical-characterization tools. In addition to metabolism, further development of tools to support chemical characterization will be important. The tools will include computational and structure–activity relationship (SAR) methods to predict chemical properties, potential initial interactions of a chemical and its metabolites with cellular molecules, and biologic activity. A National Research Council report (NRC, 2000) indicated that early cellular interactions are important in understanding potential toxicity and include receptor–ligand interactions, covalent binding with DNA and other endogenous molecules, peroxidation of lipids and proteins, interference with sulfhydryl groups, DNA methylation, and inhibition of protein function. Good predictive methods for chemical characterization will reduce the need for targeted testing and enhance the efficiency of the testing.
Assays to uncover cell circuitry. Development of methods to facilitate the discovery of the circuitry associated with toxicity pathways will involve functional genomic techniques for integrating and interpreting various data types and for translating dose-response relationships from simple to complex biologic systems, for example, from the pathway to the tissue level. It will most likely require improved methods in bioinformatics, systems biology, and computational toxicology. Advances in overexpression with complementary DNA (cDNA) and in gene knockdown with small interfering RNAs (siRNAs) are likely to allow improved pathway mapping and will also lead to studies with cells or cell lines that are more readily transfectable.
Assays for large-scale application. Several substantive issues will need to be considered in developing assays for routine application in a testing strategy. First, as pathways are identified, medium- and high-throughput assays that adequately evaluate pathways and human biology will be developed, including new, preferably human, cell-based cultures for assessment of perturbations. Second, the assay designs that best capture the elucidated pathways and can be applied for rapid large-scale testing of chemicals will need to be identified. Third, an important design criterion for assays will be that they are adequately reflective of the in vivo cellular environment. For any given assay, that will involve an understanding of the elements of the human cellular environment that must be simulated and of culture conditions that affect response. Fourth, the molecular evolution of cell lines during passage in culture and related interlaboratory differences that can result will have to be controlled for. Fifth, approaches for the testing of volatile compounds will require early attention in the development of high-throughput assays; this has been a challenge for in vitro test systems in general. Sixth, assay sensitivity (the probability that the assay identifies the phenomenon that it is designed to identify) and assay specificity (the probability that the assay does not identify a phenomenon as occurring when it does not) will be important considerations in assay design. Individual assays and test batteries should have the capability to predict accurately the effects that they are designed to measure without undue numbers of false positives and false negatives. And seventh, it will be important to achieve flexibility to expand or contract the suites of assays as more detailed biologic understanding of health and disease states emerges from basic research studies.
Suite of assays. An important criterion for the development of a suite of assays for assessing the potential for a substance to cause a particular type of disease or group of toxicities will be adequate coverage of causative mechanisms, affected cell types, and susceptible individuals. Ensuring the right mix of pathway-based high-throughput assays and targeted tests will involve research. For diseases for which toxicity pathways are not fully understood, targeted in vivo or other tests may be included to ensure adequate coverage.
Human-surveillance strategy. Human data on the fundamental biologic events involved in the activation of toxicity pathways will aid the interpretation of the results of high-throughput assays. They will provide the basis of understanding of determinants of human susceptibilities related to a toxicity pathway and of background exposures to compounds affecting the pathway. Research will be needed to assess how population-based studies can best be designed and conducted to complement high-throughput testing and provide the information necessary for data interpretation.
Mathematical models for data interpretation and extrapolation. Procedures for evaluating the impact of human exposure concentrations will involve pharmacokinetic and other modeling methods to relate cell media concentrations to human tissue doses and biomonitoring data and to account for exposure patterns and interindividual variabilities. To facilitate interpretation of high-throughput assay results, models of toxicity pathways (see the section “Components of the Vision”) and other techniques will be needed to address differences among people in their levels of activation of particular response pathways. Although it is not a key aspect of the current vision, in the distant future research may enable the development of biologically based dose-response models of apical responses for risk prediction.
Test-strategy uncertainty. Methods to evaluate the overall uncertainty in a possible testing strategy will assist the validation and evolution of the new methods. Formal methods could be developed that use systematic approaches to evaluate uncertainty in predicting from the test battery results the doses that should be without biologic effect in human populations. These uncertainty evaluations can be used in the construction and selection of testing strategies.
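One simple way such an uncertainty evaluation might be structured is sketched below: lognormal uncertainty distributions are assumed for the in vitro point of departure and for the pharmacokinetic scaling factor (reusing the hypothetical irritant-gas numbers from the earlier example), and Monte Carlo sampling yields a distribution of candidate exposure limits whose spread characterizes the strategy's uncertainty. The distributions and their parameters are illustrative assumptions, not values from the report.

```python
# Toy Monte Carlo uncertainty propagation with assumed distributions.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Uncertainty in the in vitro point of departure (mM), e.g., from assay replicates.
pod_in_vitro_mM = rng.lognormal(mean=np.log(0.012), sigma=0.3, size=n)

# Uncertainty in the factor converting tissue concentration to external exposure (ppm per mM).
pk_scaling_ppm_per_mM = rng.lognormal(mean=np.log(50.0), sigma=0.4, size=n)

human_variability_factor = 10.0
candidate_limit_ppm = pod_in_vitro_mM * pk_scaling_ppm_per_mM / human_variability_factor

p5, p50, p95 = np.percentile(candidate_limit_ppm, [5, 50, 95])
print(f"candidate exposure limit (ppm): 5th = {p5:.3f}, median = {p50:.3f}, 95th = {p95:.3f}")
```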
Whether the testing strategy will detect and predict harmful exposures will depend on whether the major toxicity pathways are addressed by the high-throughput assays or covered by the targeted in vivo and other tests. To ensure that the test system is adequate, the committee envisions a multipronged approach that includes the following components:
- A continuing research and evaluation program to develop, improve, and assess the testing program.
- Adequate validation of the assays, including examination of false-negative and false-positive rates, by applying the assays to sufficient numbers of chemicals of known toxicity.
- A robust program of biomonitoring, human health surveillance, and molecular epidemiology to assess exposures and early indicators of toxicity, to aid in interpretation of high-throughput assay results, and to monitor exposures to ensure that toxic ones are not missed.
Aspects of those endeavors are discussed in the following subsections.
Strategy for Knowledge and Assay Development and Validation
The research strategy to develop the computational tools, suites of in vitro assays, and complementary targeted tests envisioned by the committee will likely involve contributions on multiple fronts, including the following:
- Basic biologic research to obtain the requisite knowledge of toxicity pathways and the potential human health impacts when the pathways are perturbed.
- Science and technology milestones that ensure timely achievement of assays and tool development for the new paradigm.
- Phased basic and applied research to demonstrate success in the transition to the testing emphasis on toxicity pathways.
The basic-research effort will be directed at discovering and mapping toxicity pathways that are the early targets of perturbation by environmental agents and at understanding how agents cause the perturbations. That will be followed by research focused on the design of assays that can be used to determine, first, whether an agent has the potential to perturb the pathway and, if so, the levels and durations of exposure required. The scientific inquiry will involve research at multiple levels of biologic organization, that is, understanding the nature of toxicity pathways at the molecular and cellular levels and how toxicity-pathway alterations may translate to disease processes in tissues, organs, and the whole organism. Some of the tools and technologies that enable this research are described in the previous section “Tools and Technologies.”
In each broad field of toxicity testing, such as neurotoxicology and reproductive and developmental toxicity, systematic approaches to assay development, assay validation, and generalized acceptance of the assays will be organized and pursued. As the research questions presented in the previous section are answered, milestones would be achieved in an orderly manner. Some important milestones to move from pathway research through assay development to validated test strategies are presented in broad brushstrokes in Table 6. The committee recognizes that the implementation of its recommendations would entail extensive planning and expert deliberation; through those processes, the important milestones would be subdivided, elaborated, reshaped, and perhaps even replaced.
TABLE 6. Important milestones for moving from pathway research through assay development to validated test strategies.
- Develop rapid methods and systems to enable in vitro dosing with chemical stressors (including important metabolites and volatile compounds).
- Create and adapt human, human-gene-transfected rodent, and other cell lines and systems, with culture medium conditions, to have an adequate array of in vitro human cell and tissue surrogates.
- Adapt and develop technologies to enable the full elucidation of critical toxicity pathways causing the diseases by the mechanisms selected for pilot-project study.
- Develop toxicity-pathway assays that fully explore the possible effects of exogenous chemical exposure on the diseases and mechanisms selected for pilot-project study, thereby demonstrating proof of concept.
- Establish efficient approaches for validating suites of high-throughput assays.
- Develop the infrastructure for data management, assay standardization, and reporting to enable broad data sharing across academic, government, industry, and nongovernmental-organization sectors and institutions.
The research would progress in sequential phases, whose timelines would overlap. The committee finds that four phases would evolve as follows:
Phase I: Toxicity-pathway elucidation
A focused research effort is pursued first to understand the toxicity pathways for a select group of health effects (that is, apical endpoints) or molecular mechanisms. Early in this first phase, a data-storage, -access, and -management system would be established to enable broad use of the data being generated to facilitate the understanding of the toxicity pathways and research and knowledge development in later phases. A third element of this phase would involve developing standard practices for research methods and reporting of results so that they are understandable and accessible to a broad audience of researchers and to facilitate consistency and validity in the research methods used. Research in this phase would also focus on developing tools for predicting metabolism, characterizing chemicals, and planning a strategy for human surveillance and biomonitoring of exposure, susceptibility, and effect markers associated with the toxicity-pathway perturbations.
Phase II: Assay development and validation
High- and medium-throughput assays would be developed for the toxicity pathways, and for the points of chemical perturbation within them, that are organized for assay development. During this phase, efforts would also be pursued to develop biologic markers of exposure, susceptibility, and effect for use in surveillance and biomonitoring of human populations in which these toxicity pathways might be activated.
Phase III: Assay relevance and validity trial
The third phase would explore assay use, usually in parallel with traditional apical tests. It would screen chemicals that would not otherwise be tested and would begin the biomonitoring and surveillance of human populations.
Phase IV: Assembly and validation of test batteries
Suites of assays would be proposed and validated for use in place of identified apical tests.
Some of the key science and technology development activities for the phases are listed in Figure 11, and some of the critical aspects are described next. All phases would include research on toxicity pathways. Progression through the phases would involve exploring the research questions outlined in Table 4.
Phase I: Toxicity-Pathway Elucidation
Research to understand toxicity pathways
Phase I research would develop pathway knowledge from which assays for health effects would emerge. Systems-biology approaches—including molecular profiling microarrays, pathway mining, and other high-resolution techniques—would reveal key molecular interactions. Mechanistic understanding provides the basis for identifying the key molecular “triggers” or mechanisms of interactions that can alter biologic processes and ultimately cause toxicity after an environmental exposure. Those nodal triggers or interactions would be modeled in vitro and computationally to provide a suite of appropriate assays for detecting toxicity-pathway perturbations and the requisite tools for describing dose-response relationships.
Early efforts would explore possible toxicity pathways for health effects where there is fairly advanced knowledge of mechanisms of toxicity, molecular signaling and interactions. As a case study, the following sketches out how knowledge development might begin for toxic responses that are associated with estrogenic signaling alterations caused by agonists and antagonists of estrogen function.
Even our current appreciation of the number of potential toxicity pathways highlights the breadth of responses that might be evaluated in various high-throughput assays. Consideration of adverse responses at the level of the intact organism that might be associated with altered signaling through estrogen-receptor-mediated responses illustrates some of the challenges. Xenobiotic-caused alteration in estrogen signaling can occur or be measured at a number of points in the various processes that affect estrogen actions, including steroidogenesis, hormone transport and elimination, receptor binding and alteration in numbers of receptors, and changes in nuclear translocation. Those pathways may also be evaluated at different levels of organization—ligand binding, receptor translocation, transcriptional activation, and integrated cellular responses. Some of the processes are outlined here.
Estrogen steroidogenesis. Upstream alterations in steroidogenesis pathways or other independently regulated pathways that affect endocrine signaling would be explored. Knowledge development would focus on understanding of enzymatic function for key steroidogenesis pathways and the interactions of the pathways with each other and on understanding of how key elements of the pathways might be altered, including alterations of precursors, products, and metabolites when pathway dysregulation occurs. The research might involve quantitative assessment of key enzyme functions in in vitro and in vivo systems, analytic techniques to measure various metabolites, and modeling to understand the target and key steps that undergo estrogen-related dysregulation. Other assays would develop SAR information on compounds already associated with altered steroidogenesis in other situations.
Estrogen–receptor interactions. Much is known about the molecular interactions between xenobiotics and estrogen receptors (ERs), for example, direct xenobiotic interaction with ERs, including differential interaction with specific ER subtypes, such as ER-α and ER-β; xenobiotic interactions with discrete receptor domains that give rise to different biologic consequences, such as interactions with the ligand-binding domain that could cause conformational changes that activate or inhibit signaling; and direct xenobiotic interactions with other components of the ER complex, including accessory proteins, coactivators, and other coregulatory elements. Most responses associated with altered estrogen signaling would be more easily evaluated in assays that measure a larger-scale function, such as receptor activation of estrogen-mediated transcription of reporter genes or estrogen-mediated cell responses (for example, proliferation of estrogen-sensitive cells in vitro).
Processes that lead to estrogenic transgenerational epigenetic effects. Assay development to address estrogen-induced transgenerational epigenetic effects would involve understanding how early-life exposures to estrogenic compounds permanently alter transcriptional control of genes, understanding how such early-life exposures might be priming events for later-life alterations in reproductive competence or the development of cancer, and understanding how such exposures may produce transgenerational effects. Specific approaches in this research might include genomewide methods to analyze the patterns of DNA methylation with and without estrogenic exposure, quantification of histone modifications, measurements of microRNAs, and the dissection and mechanistic understanding of hormonal inputs to the epigenetic regulatory phenomena.
Those are just a few examples of the kinds of research on estrogenic compounds that would support assay development. The approaches include relatively small-scale research efforts for processes that are fairly well understood (such as direct ligand–receptor interactions) and larger endeavors for the yet-to-be-explained processes (such as the epigenetic and transgenerational effects of early-life estrogenic-compound exposure). A holistic understanding of estrogenic and other pathways and signaling in humans would be derived incrementally by building on studies in a wide variety of species and tissues. New information from basic studies in biology is likely to lead to improved assays for testing specific toxicity pathways.
The identified estrogenic pathways and signaling processes, once understood, would serve as the substrate for further pathway mining to highlight the critical events that could be tested experimentally in assay systems, that is, events that are obligatory for downstream, apical responses and occur at the lowest exposure of a biologic system to an environmental agent. With studies on the organization of response circuitry controlling the toxicity-pathway responses, a dose-response model would be based on key, nodal points in the circuits that control perturbations, rather than on the overall detail of all steps in the signaling process.
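As one illustration of anchoring a dose-response description on a nodal point rather than on every step of the signaling circuitry, the sketch below uses a simple Hill function for activation of a hypothetical pathway node and scans for the concentration at which a chosen perturbation threshold is first exceeded; the parameter values and threshold are invented for illustration.

    # Illustrative Hill-function model of toxicity-pathway activation at a nodal point.
    # EC50, the Hill coefficient, the baseline, and the threshold are hypothetical.

    def node_activation(conc, ec50=0.8, hill_n=1.5, baseline=0.05):
        """Fractional activation of a pathway node, including a background level."""
        induced = conc**hill_n / (ec50**hill_n + conc**hill_n)
        return baseline + (1.0 - baseline) * induced

    # Concentration (arbitrary units) at which activation first exceeds a chosen
    # "significant perturbation" threshold, found by a coarse upward scan.
    threshold = 0.10
    conc = 0.0
    while node_activation(conc) < threshold:
        conc += 0.001
    print(round(conc, 3))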
Assessing validity of pathway knowledge and linkage to adversity at the organism level
The next step in pathway elucidation would be the assessment of the validity of the pathway knowledge, which would proceed in two steps and involve the broader scientific community.
First, the validity would be tested by artificially modulating the pathways to establish that predicted downstream molecular consequences are consistent and measurable. The perturbations could take place, for example, with the use of standard reference compounds, such as 17β-estradiol, or discrete molecular probes, such as genetically modified test systems, knockout models, or other interventions with siRNA or small-molecule inhibitors of key enzymes or other cellular factors.
Second, the consequences of pathway disruption for the organism—the linkage of molecular events to downstream established biologic effects considered to be adverse or human disease—would be assessed. In the case of perturbations of estrogen signaling, this assessment might include linkage with results from short-term in vivo assays, such as an increase in uterine weight in rats in the uterotrophic assay. The link between the toxicity pathways and adverse effects at the level of the whole organism would be assessed in a variety of in vivo and in vitro experiments.
Development of data-storage, data-access, and data-management systems
Very early in Phase I, data-storage, -access, and -management systems should be developed and standardized. As the altered-estrogen-signaling case study indicates, the acquisition of the knowledge to develop high-throughput testing assays would involve the discovery of toxicity pathways and networks from vast amounts of data from studies of biologic circuitry and interactions of environmental agents with the circuitry. Organization of that knowledge would require data analysis and exploration by interdisciplinary teams of scientists. Understanding the relationships of pathways to adverse endpoints would also involve large-volume data analysis, as would the design of test batteries and their validation. Those efforts could be stymied without easy and wide public access to databases of results from a broad array of research studies: high-throughput assays, quantitative-SAR model development, protein and DNA microarrays, pharmacokinetic and metabolomic experiments, in vivo apical tests, and human biomonitoring, clinical, and population-based studies. Central repositories for “-omics” data are under development and exist to a small extent for some in vivo toxicity data. The scale of data storage and access envisioned by the committee is much larger.
The data should be available, regardless of whether they were generated by industry, academe, federal institutions, or foundations. However, the data-management system must also be able to accommodate confidential data while allowing the confidential components of the database to be shared among parties that agree to the terms of confidentiality. The data-management system would also provide procedures and guidelines for adequate quality control. Central storage efforts would need to be coordinated and standardized as appropriate to ensure usefulness.
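A minimal sketch of the kind of structured record such a data-management system might hold appears below; every field name is invented for illustration and does not correspond to any existing repository schema, and the confidentiality flag stands in for whatever access-control mechanism were actually adopted.

    # Illustrative record structure for a shared assay-results repository.
    # Field names are hypothetical; an actual system would follow agreed
    # community reporting standards and quality-control procedures.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class AssayResult:
        chemical_id: str            # registry identifier, e.g., a CAS number
        assay_id: str               # identifier of the pathway assay and protocol
        laboratory: str
        protocol_version: str
        concentration_um: float     # test concentration, micromolar
        response: float             # normalized assay response
        qc_passed: bool             # quality-control status
        confidential: bool = False  # restricts sharing to parties under confidentiality terms
        notes: Optional[str] = None

    record = AssayResult("50-28-2", "ER-transactivation-01", "Lab A",
                         "v1.2", 0.01, 0.87, qc_passed=True)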
Standardization of research assays and results
With the development of data-management systems, processes for standardizing platforms would have to be developed. Currently, there is little standardization of microarrays, although such efforts are moving more quickly with the Minimum Information About a Microarray Experiment (MIAME) formats now in use (Brazma et al., 2001). Too much standardization can stifle innovation, so approaches to identifying and using the appropriate level of standardization would be needed. Bioinformatics should proceed jointly with the development of assay-platform technology. Data-management systems would have to evolve flexibly to accommodate new data forms and assay platforms.
Phase II: Assay Development and Validation
After the Phase I validity assessment, pathways would be selected for assay development. The focus would be on critical toxicity pathways that lead reliably to adverse effects for the organism and that are not secondary consequences of other biologic perturbations. The first subsection of this section outlined some of the technical issues that would require research to support assay development.
The case-study example of altered estrogen signaling already given indicates how assays may follow from toxicity-pathway identification. Understanding the direct gene-regulation consequences of modulated ER-mediated transcriptional activation would lead to specific assays for quantitative assessment of transcription (RNA), translation (protein), metabolite markers, and altered function. Rapid assays to evaluate function on the scale of receptor activation of estrogen-mediated transcription of reporter genes or even estrogen-mediated cell responses, such as cell proliferation of estrogen-sensitive cells in vitro, could be developed to assess altered estrogen signaling.
Also important for assessing the potential for perturbations in estrogen signaling would be reliable assays for detecting estrogen receptor interactions rapidly. Specific assays that might be developed include ligand–receptor binding assays and more sophisticated computational structural models of ligand interactions with receptor and receptor-complex conformational changes. Further sets of assays would be needed to address the wide variety of toxicity pathways by which estrogenic compounds can operate. In this phase, biomarkers of effect, susceptibility, and exposure would be developed for use in human biomonitoring and surveillance.
Demonstrating that a test is reliable and relevant for a particular purpose is a prerequisite for its routine use for regulatory acceptance. But establishing the validity of any new toxicity assay can be a formidable process—expensive, time-consuming, and logistically and technically challenging. Development of efficient approaches for validating the new mechanistically based assays would add to the challenge. How can the assays come into use within a reasonable time and be sufficiently validated to be used with confidence? That question is discussed by considering first the relevant existing guidance on validation and then the challenges faced in validating the new tests. Finally, some general suggestions are made regarding validation of the new tests. In making its suggestions, the committee acknowledges the considerable work going on in institutions in the United States and Europe to improve validation methods.
Existing validation guidance
Guidelines on the validation of new and revised methods for regulatory acceptance have been developed by both regulatory agencies and consortia (ICCVAM/NICEATM, 2003; OECD, 2005). Such guidelines focus on multifactorial aspects of a test, which cover the following elements:
- Definition of test rationale, test components, and assay conduct and the provision of details on the test protocol.
- Consideration of the relationship of the test-method endpoints to the biologic effect of interest.
- Characterization of reproducibility in and among laboratories, transferability among laboratories, sources of variability, test limits, and other factors related to the reliability of test measurements (sometimes referred to as internal validity).
- Demonstrated biologic performance of the test with reference chemicals, comparison of the performance with that of the tests it is to replace, and description of test limitations (sometimes referred to as external validity).
- Availability, peer review, and good-laboratory-practices status of the data supporting the validation of the test method.
- Independent peer review of the methods and results of the test and publication in the peer-reviewed literature.
Criteria for regulatory acceptance of new test methods have also been published (ICCVAM/NICEATM, 2003). They cover some of the subjects noted already and include criteria related to robustness (insensitivity to minor changes in protocol), time and cost effectiveness, capability of being harmonized and accepted by agencies and international groups, and capability of generating useful information for risk assessment.
Validation of a new test method typically is a prerequisite for regulatory acceptance but is no guarantee of acceptance. It establishes the performance characteristics of a test method for a particular purpose. Different regulatory agencies may decide that they have no need for a test intended for a given purpose, or they may set their criteria of acceptable performance higher or lower than other agencies. To minimize problems associated with acceptance, the Organization for Economic Cooperation and Development (OECD, 2005) recommends that validation and peer-review processes take place before a test is considered for acceptance as an OECD test guideline. OECD recognizes, however, that factors beyond the technical performance of an assay may be viewed differently by different regulatory authorities.
Challenges in validating mechanistically based assays
Validation of the mechanistically based tests envisioned by the committee may be especially challenging for several reasons. First, the tests in the new paradigm that are based on nonapical findings depart from current practice used by regulatory agencies in setting health advisories and guidelines based on apical outcomes. Relevant policy and legal issues are discussed at length in the next section, “Prerequisites for Implementing the Vision in Regulatory Contexts,” and are not covered here except to note that scientific acceptance of a test and its relationship to disease is a critical component of establishment of the validity of the test for regulatory purposes.
Second, the new “-omics” and related technologies will need to be standardized and refined before specific applications can be validated for regulatory purposes (Corvi et al., 2006). Such preliminary work could be seen as an elaborate extension of the routine step of test-method optimization or prevalidation leading to validation of conventional in vivo or in vitro assays. The committee also notes earlier in this report that some degree of standardization will be necessary early to promote understanding and use of assay findings by researchers for knowledge development.
Third, because “-omics” and related technologies are evolving rapidly, the decision to halt optimization of a particular application and begin a formal validation study will be somewhat subjective. Validation and regulatory acceptance of a specific test do not preclude incorporating later technologic advances that would enhance its performance. If it is warranted, the effects of such modifications on performance can be evaluated through an expedited validation that avoids the burdens of a second full-blown validation.
Fourth, the committee envisions that a suite of new tests typically will be needed to replace an individual in vivo test, given that apical findings can be triggered by multiple mechanisms. Consequently, although it is current practice to validate a single test against the corresponding conventional test and then to look for one-to-one correspondence, the new paradigm would routinely entail validation of test batteries and would use multivariate comparisons.
Fifth, existing validation guidelines focus on concordance between the results of the new and the existing assays. In practice, that often means comparing results from cell-based in vitro assays with in vivo data from animals. One of the challenges of validating the medium- and high-throughput assays in the new vision—with its emphasis on human-derived cells, cell lines, and cellular components—will be to identify standards of comparison for assessing their relevance and predictiveness while aiming for a transformative paradigm shift that emphasizes human biology, mechanisms of toxicity, and initial, critical perturbations of toxicity pathways.
Sixth, it is anticipated that virtually all xenobiotics will perturb signaling pathways to some degree, so a key challenge will be to determine when a perturbation leads to downstream toxicity and when it does not. Thus, specificity may be a bigger challenge than sensitivity.
Assay validation under new toxicity-testing paradigm
Validation should not be viewed as an inflexible process that proceeds sequentially through a fixed series of steps and is then judged according to unvarying criteria. For example, because validation assesses fitness for purpose, such exercises should be judged with the specific intended purpose in mind. A test’s intended purpose may vary from use as a preliminary screening tool to use as the definitive test. Similarly, a new test may be intended to model one or a few toxicity mechanisms for a given apical endpoint but not the full array of mechanisms. Given that the new paradigm would emerge gradually, it would be important to consider validating incremental gains, while recognizing their current strengths and weaknesses.
Consequently, applying a one-size-fits-all approach to validation is not conducive to the rapid incorporation of emerging science or technology into regulatory decision making. A more flexible approach to assay validation would facilitate the evolution of testing toward a more mechanistic understanding of toxicity endpoints; the form the validation should take is a point of discussion and deliberation (Balls et al., 2006; Corvi et al., 2006). For nonregulatory uses of assays, such as preliminary data-gathering and exploration of mechanisms, some general guidance on assay performance appears warranted at a minimum. For assays to be used routinely, more rigorous performance standards would have to be met and relevance would have to be established.
Returning to the case study on estrogen signaling, the validation sequence involves the development of specific assays that track the key molecular triggers linked to human estrogenic effects. This validation component is largely focused first on validating that the assay components recapitulate the key molecular interactions already described here and then on the traditional approach of looking at assay performance in terms of reproducibility and relevance.
Assessing intralaboratory and interlaboratory reproducibility is more straightforward than assessing relevance, which is sometimes labeled accuracy. To assess relevance, assays would be formally linked to organism-level adverse health effects. For example, they would provide the basis of evaluating the level of molecular change that potentially corresponds to an adverse effect. In addition, reference compounds would be used to determine the assays’ positive and negative predictive value. Ideally, substances known to cause and substances known not to cause the effect in humans would be used as the reference agents for positive and negative predictivity. In the absence of adequate numbers of xenobiotics known to be positive and negative in humans, animal data may have to be used in validation. For the assays based on human cell lines, that could be problematic, and some creativity and flexibility in the validation process would be desirable. For example, rodent-based cell assays comparable with the human assay could be used to establish relevance and to support the use of the human cell-based assay.
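To make the performance assessment described above concrete, the following sketch computes sensitivity, specificity, and positive and negative predictive values for an assay scored against reference compounds of known activity; the reference calls and assay calls are invented for illustration.

    # Illustrative performance metrics for an assay evaluated against reference
    # compounds whose activity is treated as known. All data are hypothetical.

    def performance(reference, assay):
        tp = sum(r and a for r, a in zip(reference, assay))              # true positives
        tn = sum((not r) and (not a) for r, a in zip(reference, assay))  # true negatives
        fp = sum((not r) and a for r, a in zip(reference, assay))        # false positives
        fn = sum(r and (not a) for r, a in zip(reference, assay))        # false negatives
        return {
            "sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp),
            "positive predictive value": tp / (tp + fp),
            "negative predictive value": tn / (tn + fn),
        }

    # Hypothetical calls for ten reference compounds (True = active).
    reference_calls = [True, True, True, True, True, False, False, False, False, False]
    assay_calls = [True, True, True, False, True, False, False, True, False, False]
    print(performance(reference_calls, assay_calls))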
Phase III: Assay Relevance and Validity Trial
Once assays are developed and formally validated, they would become available for use. The committee suggests three distinct strategies that could aid in the assessment of test validity and relevance and could further the development of improved assays.
First, research entities, such as the National Toxicology Program (NTP), should further develop and run the experimental high-throughput assays, some before they are fully validated, on chemicals that have already been extensively tested with standard or other toxicity tests. The NTP has, for example, initiated mechanistic high-throughput assays on at least 500 chemicals that have already been tested using NTP cancer and reproductive and developmental toxicity studies, and, in collaboration with the NIH Molecular Libraries Initiative, further developed and applied cell-based screening assays that can be automated (NTP, 2006). The U.S. EPA National Center for Computational Toxicology (NCCT) also has an initiative to screen numerous pesticides and some industrial chemicals in high-throughput tests. Those processes would be essential for validating the new assays and for learning more about which health effects can be predicted from specific perturbations of toxicity pathways.
Second, new validated assays should be conducted in parallel with existing toxicity tests for chemicals, such as pesticides and pharmaceuticals, that will be undergoing or have recently undergone toxicity testing under regulatory programs. This research testing, which would be conducted by research entities, would help to foster the evolution of the assays into cell-based test batteries to eventually replace current tests. The testing would also help to gauge the positive and negative predictive values of the various assays and thereby help to avoid (or at least begin to quantify) the risks associated with missing important toxicities with the new assays or with incorporating a new assay that detects meaningless physiologic alterations that are not useful for predicting human risk.
Third, as the new assays are developed further and validated, they should be deployed as screens for evaluation of chemicals that would not currently undergo toxicity testing, such as existing high-production-volume chemicals that have not been tested or have been evaluated only with the screening information data set, or new chemicals that are not currently subject to test requirements. Used as screens for chemicals that would otherwise not be tested or be subject only to little testing, the assays could begin to help to set priorities for testing and could also help to guide the focus of any testing that may be required. Eventually they could provide the basis of an improved framework for addressing chemicals for which testing is limited or not done at all. This is illustrated in Figure 12.
Resources will be required to implement the three approaches: testing of chemicals with large and robust data sets of apical tests, parallel research testing of chemicals subject to existing regulatory testing requirements, and application of high-throughput screens to chemicals that are currently not tested. In making those suggestions, the committee is not recommending expanding test requirements for pesticides or pharmaceuticals. Rather, it notes that the tests developed will be a national resource of wide benefit and worthy of funding by federal research programs. Voluntary testing by industry using validated new assays should also be encouraged. The three approaches are anticipated to pay off substantially in the longer term as scientists, regulators, and stakeholders develop enough familiarity and comfort with the new assays that they begin to replace current apical endpoint tests and as mechanistic indicators are increasingly used in environmental decision making.
In addition to the high-throughput testing by NTP and U.S. EPA of chemicals with robust data sets already described here, the committee notes the increasing use of mechanistic assays, primarily for further evaluation of chemicals that have demonstrated toxicity in standard apical assays. The mechanistic studies are done to evaluate further a tailored subset of toxicity pathways, such as those involving the peroxisome proliferator-activated receptor, the aryl hydrocarbon receptor, and thyroid and sex hormones. Some companies are also using high-throughput assays to guide internal decision making in new chemical development, but their results typically are not publicly available.
A recent example of how the high-throughput assays could play out in the near term is the risk assessment of perchlorate. The data on perchlorate include standard subchronic- and chronic-toxicity tests and developmental-neurotoxicity tests, but risk assessments and regulatory decisions have been based on inhibition of iodide uptake, the key perturbation in the known toxicity pathway through which perchlorate has its effects (U.S. EPA, 2006b; NRC, 2006c). If a new chemical were found to inhibit iodide uptake, standard toxicity tests would not be necessary to demonstrate the predictable effects on thyroid hormone and neurodevelopment. Regulatory decisions could be based on the dose-response relationship for iodide-uptake inhibition. The new data on perchlorate-susceptible subpopulations (for example, those with low iodide) emerging from biomonitoring would also be considered (see Blount et al., 2006). Such a chemical would still need to undergo a full battery of toxicity-pathway testing to ascertain that no other important pathways that might have effects at lower doses were disrupted.
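Purely as an illustration of basing a decision on the dose-response relationship for iodide-uptake inhibition, the sketch below inverts a fitted inhibition curve to find the dose predicted to produce a chosen small degree of inhibition; the curve parameters and the 5% level are hypothetical and are not the values used in any actual perchlorate assessment.

    # Illustrative derivation of a point of departure from an iodide-uptake
    # inhibition curve. Parameters are hypothetical, not actual perchlorate values.

    def fraction_inhibited(dose_ug_per_kg_day, ic50=30.0, hill_n=1.0):
        """Fraction of iodide uptake inhibited at a given daily dose."""
        return dose_ug_per_kg_day**hill_n / (ic50**hill_n + dose_ug_per_kg_day**hill_n)

    def dose_at_inhibition(target_fraction, ic50=30.0, hill_n=1.0):
        """Invert the curve to find the dose producing the target inhibition."""
        return ic50 * (target_fraction / (1.0 - target_fraction)) ** (1.0 / hill_n)

    # Dose at which 5% inhibition is predicted; a reference value could then be
    # derived by applying factors for susceptible subpopulations and life stages.
    print(round(dose_at_inhibition(0.05), 2))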
In the long run, using upstream indicators of toxicity from high-throughput assays based on toxicity pathways can be more sensitive and hence more protective of public health than using apical-endpoint observations from assays in small numbers of live rodents. However, while the new assays are under development, there will be a long period of uncertainty during which the false-positive and false-negative rates of the testing battery will remain unclear, and the ability of the battery to adequately predict effects in susceptible subpopulations or during susceptible life stages will also be unclear. During the phase-in period and afterward, there will be a need to pay close attention to whether important toxicities are being missed or are being exaggerated by the toxicity-pathway screening battery. The concern about missing important toxic endpoints is one of the main reasons for the committee’s recommendation for a long phase-in period during which the new assays are run in parallel with existing assays and tested on chemicals on which there are already large robust data sets of apical findings. Parallel testing will allow identification of toxicities that might be missed if the new assays were used alone and will compel the development of assays to address these gaps.
Many additional issues would need to be considered during the interim phase of assay development. For example, technical issues, such as cell-culture conditions and the selective pressures that drive molecular evolution of cell lines over time and across laboratories, could present problems that could be addressed only with experience and careful review of assay results. Parallel use of new assays and current tests would probably continue for some time before the adoption of the new assays as first-tier screens or as definitive tests of toxicity.
Phase IV: Assembly and Validation of Test Batteries
Once toxicity pathways are elucidated and translated into high-throughput assays for a broad field of toxicity testing, such as neurotoxicology, a progressively more comprehensive suite of validated medium- to high-throughput tests would become available to cover the field. Single assays would not be comprehensive or predictive in isolation but would be assembled into suites with targeted tests that would cover the field. The suite or “panel” of assays and the scoring of the assays would need to be assessed. This may involve a computational assessment of multivariate endpoints. Turning again to the estrogen-signaling case study, known estrogen modulators should register as positive in one or more assays. Confidence in the suite of assays can come from the knowledge that all known mechanisms of estrogenic-signaling alteration are modeled.
The development and assessment of batteries and the overall testing strategy would be facilitated by a formal uncertainty evaluation. For the different risk contexts and decisions to be made (see the section “Components of the Vision”), the preferred test batteries may differ in sensitivity (in this context, the probability that the battery identifies as harmful a dose that is harmful) and specificity (the probability that the battery identifies as not harmful a dose that is not harmful). In screening, a false-negative finding of no harm at a given dose can be far more costly than a false-positive finding of harm (see, for example, Lave & Omenn, 1986). The ability to characterize the specificity and sensitivity of the test battery would aid consideration of the cost effectiveness and value of the information to be obtained from the test battery (Lave & Omenn, 1986; Lave et al., 1988) and would ultimately help to identify preferred test strategies.
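One way to formalize the point that a false negative can be far more costly than a false positive in screening is sketched below: the expected cost of each candidate battery is computed from its sensitivity and specificity, the assumed prevalence of truly harmful doses, and assumed relative costs of the two kinds of error. All numbers are invented for illustration.

    # Illustrative expected-cost comparison of two candidate test batteries.
    # Sensitivities, specificities, prevalence, and costs are hypothetical.

    def expected_cost(sensitivity, specificity, prevalence,
                      cost_false_negative, cost_false_positive):
        fn_rate = prevalence * (1.0 - sensitivity)          # harmful doses called safe
        fp_rate = (1.0 - prevalence) * (1.0 - specificity)  # safe doses called harmful
        return fn_rate * cost_false_negative + fp_rate * cost_false_positive

    # A false negative is assumed 20 times as costly as a false positive.
    battery_a = expected_cost(0.95, 0.70, prevalence=0.10,
                              cost_false_negative=20.0, cost_false_positive=1.0)
    battery_b = expected_cost(0.80, 0.95, prevalence=0.10,
                              cost_false_negative=20.0, cost_false_positive=1.0)
    print(battery_a, battery_b)  # the battery with the lower expected cost is preferred

Under these made-up numbers, the more sensitive but less specific battery has the lower expected cost, which is consistent with weighting false negatives heavily in a screening context.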
Although considerable effort would be directed at the construction of high-throughput batteries, targeted tests would probably also be needed in routine testing strategies to address particular risk contexts (for example, registration of a pesticide for food uses). Still, the endpoint-focused targeted assays should by no means remain static. Instead, they should evolve to incorporate new refinements. For example, the rapid developments in imaging technologies have offered scientists important new tools to enhance the collection of information from animal bioassays. Promising new assays that use nonmammalian models, such as Caenorhabditis elegans, are in development. Combined mammalian assays that incorporate a broader array of sensitive endpoints in a more efficient manner have been developed. The committee assumes that development of those approaches will continue, and it encourages development and validation of them in targeted testing. As newer targeted-testing approaches become available, older apical approaches should be retired.
Intermediate products of assay-development research
One important benefit of the research described is that it could add public-health protection and refinement to current regulatory testing. For example, in some risk contexts, particularly widespread human exposure to existing chemicals, the dose-response data from toxicity-pathway tests could help to refine quantitative relationships between adverse effects identified in the apical tests and perturbations in toxicity pathways and improve the evaluation of perturbations at the low end of the dose-response curve. The results of the toxicity-pathway tests could provide data to aid in interpreting the results of apical tests on a given substance and may guide the selection of further follow-up tests or epidemiologic surveillance. The mechanistic assays would also help to permit the extrapolation of toxicity findings on a chemical under study to other chemicals that target the same mechanism. Additional benefits and research products anticipated for use in the near term include the following:
- A battery of inexpensive medium- and high-throughput screening assays that could be incorporated into tiered-testing schemes to identify the most appropriate tests or to provide preliminary results for screening risk assessments. With experience, the assays would support the phase-out of apical endpoint tests.
- Early cell-based replacements for some in vivo tests, such as those for acute toxicity.
- Work to develop consensus approaches for DNA-reactivity and mutagenicity assays and strategies for using mechanistic studies in cancer risk assessment.
- Online libraries of results of medium- and high-throughput screens for use in toxicity prediction and improving SAR models. For classes of chemicals well studied in apical endpoint tests, the comparison of results from high-throughput studies with those from whole-animal studies could provide the basis of extrapolating toxicity to untested chemicals in the class (a rough read-across sketch follows this list).
- Elucidation of the mechanisms of toxicity of chemicals well studied in high-dose apical endpoint tests. Research to achieve the vision must include the study of perturbations of toxicity pathways of well-studied chemicals, many of which have widespread human exposure. Such research would bring about better understanding of the mechanisms of toxicity of the chemicals and improve risk assessment. Chemicals with known adverse effects and mechanisms well elucidated with respect to toxicity pathways would be good candidates to serve as positive controls in the high-throughput assays. Such studies would help to distinguish between exposures that result in impaired function and disease and exposures that result in adaptation and normal biologic function (see Figure 2).
- Indicators of toxicity-pathway activation in the human population. This knowledge could be used to understand the extent to which a single chemical might contribute to disease processes and would be critical for realistic dose-response modeling.
- Refined analytic tools for assessing the pharmacokinetics of environmental agents in humans exposed at low concentrations. Such evaluations could be used directly in risk assessments based on apical endpoint tests and could aid in design and interpretation of in vitro screens.
- Improvements in targeted human disease surveillance and exposure biomonitoring.
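As a rough illustration of the read-across use of screening libraries mentioned in the list above, the sketch below predicts the activity of an untested chemical from its most similar tested neighbors; the descriptor vectors, similarity measure, and library entries are hypothetical stand-ins for real structural fingerprints and assay results.

    # Illustrative read-across: predict an untested chemical's assay activity from
    # its nearest neighbors in a results library. All data are hypothetical.

    def similarity(desc_a, desc_b):
        """Toy similarity between two descriptor vectors (1 = identical)."""
        distance = sum((a - b) ** 2 for a, b in zip(desc_a, desc_b)) ** 0.5
        return 1.0 / (1.0 + distance)

    def read_across(untested, library, k=2):
        """Similarity-weighted mean activity of the k most similar tested chemicals."""
        ranked = sorted(library, key=lambda entry: similarity(untested, entry[0]),
                        reverse=True)[:k]
        weights = [similarity(untested, descriptor) for descriptor, _ in ranked]
        activities = [activity for _, activity in ranked]
        return sum(w * a for w, a in zip(weights, activities)) / sum(weights)

    # Hypothetical library of (descriptor vector, assay activity) pairs.
    library = [([0.1, 0.9], 0.80), ([0.2, 0.8], 0.75), ([0.9, 0.1], 0.05)]
    print(round(read_across([0.15, 0.85], library), 2))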
Building a Transformative Research Program
Instituting Focused Research
A long-term, large-scale concerted effort is needed to bring the new toxicity-testing paradigm to fruition. A critical element is the conduct of transformative research to provide the scientific basis of creating the new testing tools and to understand the implications of test results and how they may be applied in risk assessments used in environmental decision making.
What type of institutional structure would be most appropriate for conducting and managing the research effort? It is beyond the committee’s charge and expertise to make specific recommendations either to change or to create government institutions or to alter their funding decisions. The committee will simply sketch its thoughts on an appropriate institutional structure for implementing the vision. Other approaches may also be appropriate.
The committee notes that an institutional structure should be selected with the following considerations in mind:
- The realization of the vision will entail considerable research over many years and require substantial funding—hundreds of millions of dollars.
- Much of the research will be interdisciplinary and consequently, to be most effective, should not be dispersed among discipline-specific laboratories.
- The research will need high-level coordination to tackle the challenges presented in the vision efficiently.
- The research should be informed by the needs of the regulatory agencies that would adapt and use the emerging testing procedures, but the research program should be insulated from the short-term orientation and varied mandates of the agencies.
Interdisciplinarity, Adaptability, and Timeline
The need for an institutional structure that encourages and coordinates the necessarily multidisciplinary research cannot be overstated, and a spirit of interdisciplinarity should infuse the research program. Accordingly, the effort would need to draw on a variety of technologies and a number of disciplines, including basic biology, bioinformatics, biostatistics, chemistry, computational biology, developmental biology, engineering, epidemiology, genetics, pathology, structural biology, and toxicology. Good communication and problem solving across disciplines are essential, as is leadership adept at fostering interdisciplinary efforts. The effort will have to be monitored continually, with the necessary cross-interactions engineered, managed, and maintained.
The testing paradigm would be progressively elaborated over many years or decades as experience and successes accumulate. It should continue to evolve with scientific advances. Its evolution is likely to entail midcourse changes in the direction of research as breakthroughs in technology and science open more promising leads. Neither this committee nor any other constituted committee will be able to foresee the full suite of possibilities or potential limitations of new approaches that might arise with increasing biologic knowledge. The research strategy just outlined provides a preview to the future and suggests general steps needed to arrive at a new toxicity-testing paradigm. Some of the suggested steps would need to be reconsidered as time passes and experience is developed with new cell-based assays and interpretive tools, but no global change in the vision, which the committee regards as robust, is expected.
The transition from existing tests to the new tests would require active management, involvement of the regulatory agencies, and coherent long-range planning that invests in the creation of new knowledge while refining current testing and, correspondingly, stimulating changes in risk-assessment procedures and guidelines. Over time, the research expertise and infrastructure involved in testing regimes could be transformed in important ways as the need for animal testing decreases and pathway-related testing increases.
The committee envisions that the new knowledge and technology generated from the proposed research program will be translated to noticeable changes in toxicity-testing practices within 10 years. Within 20 years, testing approaches will more closely reflect the proposed vision than current approaches. That projection assumes adequate and sustained funding. As in the Human Genome Project, progress is expected to be nonlinear, with the pace increasing as technologic and scientific breakthroughs are applied to the effort.
Cross-Institution and Sector Linkages
The research to describe cellular-response networks and toxicity pathways and to develop the complementary human biomonitoring and surveillance strategy would be part of larger current efforts in medicine and biotechnology. Funding of that research is substantial in medical schools and other academic institutions, some U.S. federal and European agencies, and pharmaceutical, medical, and biotechnology industries. Links among different elements in the research community involved in relevant research will be needed to capitalize on the new knowledge, technologies, and analytic tools as they develop. Mechanisms for ensuring sustained communication and collaboration, such as data sharing, will also be needed.
Some form of participation by industry and public-interest groups should be ensured. Firms have a long-term interest in the new paradigm, and most stand to gain from more efficient testing requirements. Public-health and environmental interest groups, as well as those promoting alternatives to animal testing, should also be engaged.
Funding
A large-scale, long-term research program is needed to elucidate the cellular-response networks and the individual toxicity pathways within them. Given the scientific challenges and the knowledge development involved, moderately large funding will be needed. The committee envisions a research and test-development program similar in scale to the NTP or the Institute for Systems Biology in Seattle, WA.
The success of the project will depend on attracting the best thinkers to the task, and the endeavor would compete with related research programs in medicine, industry, and government for these researchers. Attracting the best researchers in turn would depend on an adequately funded and managed venture that appears well placed to succeed.
Institutional Framework
The committee concludes that an appropriate institutional structure for the proposed vision is a research institute that fosters multidisciplinary research intramurally and extramurally. A strong intramural research program is essential. The effort cannot succeed merely by creating a virtual institution to link and integrate organizations that are performing relevant research and by dispersing funding on relevant research projects. A mission-oriented, intramural program with core multidisciplinary programs to answer the critical research questions can foster the kind of cross-discipline activity essential for the success of the initiative. There would be far less chance of success within a reasonable period if the research were dispersed among different locations and organizations without a core integrating and organizing institute. A collocated, strong intramural research initiative will enable the communication and problem solving across disciplines required for the research and assay development.
Similarly, a strong, well-coordinated, targeted extramural program will leverage the expertise that already exists within academe, pharmaceutical companies, the biotechnology sector, and elsewhere and foster research that complements the intramural program. Through its intramural and highly targeted extramural activities, the envisioned research institute would provide the nexus through which the new testing tools would be conceived, developed, validated, and incorporated into coherent testing schemes.
The committee sees the research institute funded and coordinated primarily by the federal government, given the scale of the necessary funding, the multiyear nature of the project, and links to government regulatory agencies. That does not mean that there will be no role for other stakeholders. Biotechnology companies, for example, could co-fund specific projects. Academic researchers could conduct research with the program’s extramural funds. Moreover, researchers in industry and academe will continue making important progress in fields related to the proposed vision independently of the proposed projects.
The key institutional question is where to house the government research institute that carries out the intramural program of core multidisciplinary research and manages the extramural program of research. Should it be an existing entity, such as the National Institute of Environmental Health Sciences (NIEHS), or a new entity devoted exclusively to the proposed vision? The committee notes that the recognized need for research and institutional structures that transcend disciplinary boundaries to address critical biomedical research questions has spawned systems-biology institutes and centers at biomedical firms and several leading universities in the country. However, the committee found few examples in the government sector. The Department of Energy (DOE) Genomics GTL Program seeks to engineer systems for energy production, site remediation, and carbon sequestration based on systems-biology research on microorganisms. In its review of this DOE program, NRC (2006c) found collocated, integrated vertical research to be essential to its success.
If one were to place the proposed research program into an existing government entity, a possible choice would be the NTP, a multiagency entity administered and housed in NIEHS. The NTP has several features that suggest it as a possible institutional home for the research program envisioned here, including its mandate to develop innovative testing approaches, its multiagency character, the similarities between its Vision and Roadmap for the Future and what is envisioned here, and its expertise in validating new tests through the NTP Interagency Center for the Evaluation of Alternative Toxicological Methods and its sister entity, the Interagency Coordinating Committee on the Validation of Alternative Methods, and in “-omics” testing at its Center for Toxicogenomics. It is conceivable that the NTP could absorb the research mandate outlined here if its efforts were dramatically scaled up to accommodate the focused program envisioned. If it were placed in the NTP, structures would have to be in place to ensure that the day-to-day technical focus on short-term problems of high-volume chemical testing would not impede progress in evolving testing strategies. As the new test batteries and strategies are developed and validated, they would be moved out of the research arm and be made available for routine application.
The committee considered housing the proposed research institute in a regulatory agency and notes that this could be problematic. The science and technology budgets of regulatory agencies have been under considerable stress and appear unlikely to sustain such an effort. Although U.S. EPA’s NCCT has initiated important work in this field, the scale of the endeavor envisioned by the committee is substantially larger and could not be sufficiently supported if recent trends in congressional budgeting for the U.S. EPA continue. For example, U.S. EPA’s science and technology research budget has been suboptimal and decreasing in real dollars for a number of years (U.S. EPA, 2006b, 2007).
The research portfolio entailed by the committee’s vision will also require active management to maintain relevance and the scientific focus needed for knowledge development. Although sufficient input from regulatory agencies is needed, insulation of the institute from the short-term orientation of regulatory-agency programs that depend on the results of toxicologic testing is important.
In the end, the committee noted that wherever the institute is housed, it should be structured along the lines of the NTP, with intramural and focused extramural components and interagency input but with its own focused mission and funding stream.
Scientific Surprises and the Need for Midcourse Corrections
Research often brings surprises, and today’s predictions concerning the promise of particular lines of research are probably either pessimistic or optimistic in some details. For example, the committee’s vision of toxicity testing stands on the presumption that a relatively small number of pathways can provide sufficiently broad coverage to allow a moderately sized set of high- and medium-throughput assays to be developed for the scientific community to use with confidence and that any important gaps in coverage can be addressed with a relatively small set of targeted assays. That presumption may be found to be incorrect. Furthermore, the establishment of links between perturbations and apical endpoints may prove especially challenging for some endpoints. Thus, as the research proceeds and learning takes place, adjustments in the vision and the research focus can be anticipated.
In addition to program oversight noted above, the research program should be assessed every 3–5 years by well-recognized scientific experts independently of vested interests in the public and private sectors. The assessment would weigh practical progress, the promise of methods on the research horizon, and the place of the research in the context of other research, and it would recommend midcourse corrections.
Concluding Remarks for Developing the Science Base and Assays to Implement the Vision
In the traditional approach to toxicity testing, the whole animal provides for the integration and evaluation of many toxicity pathways. Yet each animal study is time-consuming and expensive and results in the use of many animals. In addition, many animal studies need to be done to evaluate different endpoints, life stages, and exposure durations. The new approach may require individual assays for hundreds of relevant toxicity pathways. Despite that apparent complexity, emerging methods allow testing of many pathways extremely rapidly and efficiently (for example, in microarray or multiwell-plate formats). If positive signals from the assays can be used with confidence to guide risk management, the new approach will ultimately prove more efficient than the traditional one.
It is clear, however, that much development and refinement will be needed before a new and efficient system could be in place. For some kinds of toxicity, such as developmental toxicity and neurotoxicity, the identification of replacement toxicity-pathway assays might be particularly challenging, and some degree of targeted testing might continue to be necessary. In addition, the validation process may uncover unexpected and challenging technical problems that will require targeted testing. Finally, the parallel interim process may discover that some categories of chemicals or of toxicity cannot yet be evaluated with toxicity-pathway testing. Nonetheless, the committee envisions the steady evolution of toxicity testing from apical endpoint testing to a system based largely on toxicity-pathway batteries in a manner mindful of information needs and of the capacity of the test system to provide information.
In the long term, the committee expects toxicity pathways to become sufficiently well understood and calibrated for batteries of high-throughput assays to provide a substantial fraction of the toxicity-testing data needed for environmental decision making. Exposure monitoring, human surveillance for early perturbations of toxicity-response pathways, and epidemiologic studies should provide an additional layer of assurance that early indications of adverse effects would be detected if they occurred. The research conducted to realize the committee’s vision would support a series of substantial improvements in toxicity testing in the relatively near term.
PREREQUISITES FOR IMPLEMENTING THE VISION IN REGULATORY CONTEXTS
The committee’s vision sets the stage for transformative changes in toxicity testing in the regulatory agencies and the larger scientific community. Although advances in the state of the science are indispensable to realization of the vision, corresponding institutional changes are also important. The changes will promote acceptance of the principles and methods envisioned. Acceptance will depend on several factors, some having scientific origins. For example, the new testing requirements will be expected to reflect the state of the science and to be founded on peer-reviewed research, established protocols, validated models, case examples, and other scientific features. Other factors stem from administrative procedures associated with rulemaking, such as documenting scientific sources; providing opportunities for scientific experts, stakeholders, and the interested public to participate; and consulting with sister agencies and international organizations.
This section explores the conditions required for using the new testing strategy for regulatory purposes. It focuses on the federal agencies and identifies institutional factors—both tangible, such as budget and staffing, and intangible, such as leadership and commitment—that can determine the pace and degree to which the vision is incorporated into agency culture and practice. The section also addresses the fundamental issues related to the use and the validity of the new concepts, technologies, and resulting data for the specific purpose of developing federal regulations.
The committee’s vision anticipates continual change over the next two to three decades. Beyond the scientific and procedural considerations summarized in this section, the state of the economy, changing environmental conditions and social perspectives, and other dynamics that shape the political climate will influence legislative changes and federal budgets that, in turn, will determine the future of toxicity testing in the regulatory context.
Institutional Change to Meet the Vision
Attitudes and Expectations
Full realization of the vision depends on the promotion of new testing principles and methods in the scientific community at large. As in the past, some changes will originate outside the regulatory agencies and work their way into agency practice, and others will originate in the agencies and work their way into the larger scientific community. In both cases, far-reaching shifts in orientation and perception will be critical. For risk assessors and researchers, the shifts will be from familiar types of studies and established procedures involving overt effects in laboratory animals and cross-species extrapolation to new approaches that focus on how chemicals, both endogenous and exogenous, interact in human disease processes (Lieber, 2006). Many analysts in and outside the agencies will have to apply their expertise in new ways.
The need for a change in attitude and orientation extends far beyond risk assessors and the toxicity-testing community. Most difficult, perhaps, will be the new level of scientific understanding needed to enable many participants, especially nonscientists, to become sufficiently informed to engage in discussion of the new methods. Lawmakers who determine policy and appropriate funds, federal executives who determine research priorities, politically accountable managers and decision makers who use data-based risk assessment for making regulatory decisions, courts that review those decisions, and the public, which has an interest in the need for and nature of regulations, will need to become acquainted with new terminology and concepts.
Nonscientists will grasp some aspects of the new science—such as having regulations based on data derived from human cells, cell lines, and tissues rather than on laboratory animals—more easily than other aspects, such as the molecular basis of chemical changes that lead to adverse health effects. Ideally, individual or institutional “champions” will emerge to foster and guide the implementation process.
Developing and Cultivating Expertise
Effective implementation depends on competent scientists and informed agency management. Those factors are crucial: Agency progress depends on the expertise and experience of the technical staff and a supportive management structure. Incorporating new tests and testing strategies into risk-assessment practices and agency testing guidelines will go no further or faster than staffing permits.
For several decades, academic institutions have prepared scientists for toxicity testing and risk analysis through training in chemistry, biology, toxicology, pharmacology, and the related medical and engineering disciplines. Agency scientists receive their basic undergraduate and postgraduate education and training from external institutions and bring their training to bear on their work for the agencies. For many, pre-agency experience includes postdoctoral fellowships, internships, or first jobs at universities, industry laboratories, consulting laboratories, and other outside organizations. The kind of expertise currently available in the agencies therefore reflects in large measure expertise in the larger scientific community. That tradition has contributed to a large and stable cadre of well-trained scientists in the federal agencies that have science-based responsibilities. Thus, implementing the vision will require an infusion of new scientists who have education and experience in the new technologies, as well as special training for current scientific staff and managers.
Scientists in academe, industry, and consulting laboratories and organizations have had a productive exchange with those in regulatory agencies through professional conferences and workshops, joint research projects, and peer-review activities. Fostering and accelerating those activities will be critical for implementing the vision and will require congressional and management support of targeted investment in developing and sustaining agency expertise. Scientists gravitate to attractive, well-funded, and well-staffed programs. To hire and retain high-caliber scientists in the numbers and disciplines needed, agencies will need congressional and management support of the vision reflected in budget allocations and hiring authorizations.
Policies to Foster Development and Use of New Tests
Institutional change does not come easily. The history of toxicity testing indicates that the pace and extent of change will depend in part on policies and incentives. Some policies and incentives to encourage the use and development of the new tests by agencies are discussed here.
First, continued progress in the use of the new technologies constitutes the greatest incentive to reconfiguring agency testing programs in line with the vision. Policies to support and reward effective use of new testing concepts and methods should be implemented. Apart from historical high-visibility examples, such as the Human Genome Project, current broad-scale examples include the development and use of mechanistic data and the expanding list of “-omics” applications.
Second, policies to encourage the use of data generated with the new testing paradigm in chemical assessments by the agencies will be important. That will involve the evolution of agencies’ risk-assessment methods and guidelines as the new tests are developed and used. For decades, the federal agencies have promulgated formal risk-assessment guidelines, based in part on consultation with outside scientists and the public, that codify generally accepted concepts and methods to be followed in assessing the hazards, dose-response relationships, exposures, and risks related to environmental agents (for example, U.S. EPA, 1991, 1996, 1998b, 2005c). Policies to include the new technologies in agency assessments can foster and accelerate their acceptance and institutionalization.
Third, congressional funding of agencies to implement the vision is essential to support relevant research and staffing, encourage work with external scientists outside the agencies, recognize accomplishments by scientists and their management, and support other policies to promote change.
Fourth, dependence of market access on the conduct of specific toxicity tests can be a policy incentive. For example, the European Union’s Registration, Evaluation and Authorisation of Chemicals (REACH) program requires generation of a basic set of toxicity data on new industrial chemicals before the chemicals can enter the market; the program also sets deadlines for receipt of basic toxicity data on existing industrial chemicals. Another example is the registration of pesticides in the United States.
Fifth, scientific progress in toxicity testing depends on work in academic and private-sector laboratories and in the federal sector. Congressional and agency policies and activities must ensure that sufficiently informative data generated from effective new methods are used in the regulatory process and that the large expenditures of money are not in vain.
Sixth, policies designed to overcome tendencies to resist novel approaches and maintain the status quo will be important. Implementing the vision requires periodic re-examination of testing programs and strategies in each agency and possibly a return to Congress to address outdated and ineffective programs that might impede implementation of novel tests and improved testing strategies.
Regulatory Use of New Methods
The committee’s vision sets the stage for transformative change in developing data to meet regulatory objectives codified in laws passed by Congress. Although the term toxicity testing rarely, if ever, appears in the major statutes administered by the U.S. Environmental Protection Agency (EPA), the availability of reliable data on “adverse effects” and health or environmental “risk” is an underlying assumption in them. The Clean Water Act, the Clean Air Act, the Toxic Substances Control Act (TSCA), and pesticide and Superfund legislation are based on the availability of data for risk assessment and regulatory decision making for chemicals in their jurisdictions.
The data can have several sources. Some statutes—such as the Federal Insecticide, Fungicide, and Rodenticide Act (FIFRA), the Food Quality Protection Act, and TSCA—authorize U.S. EPA to require the producers of some chemicals to develop and submit specific categories of data to the agency. Other statutes—such as the Clean Air Act, the Clean Water Act, and the Safe Drinking Water Act—require toxicity data to be considered but depend mainly on information available in the scientific literature or government laboratory reports (in some cases, these statutes authorize U.S. EPA to apply TSCA and FIFRA testing requirements to chemicals in their jurisdiction). Regardless of the statute or the data source, toxicity data are indispensable for well-reasoned conclusions on the nature and dimensions of risk and for well-grounded decisions on the necessity of regulation to protect the public health or the environment and on the nature and scope of any such regulations.
As discussed in previous sections, the committee’s vision will result in the generation of data on perturbations in toxicity pathways with the use of high- and medium-throughput assays. A few of the test methods considered in this report have a long history and a place in the current regulatory testing programs and current risk-assessment guidelines and practices. Others are in early stages of development and have yet to be considered for regulatory use. Still others that will be used eventually are not yet on the drawing board or even imagined. Debate on the scientific validity of nonapical test methods and the application of the resulting data should be expected, and controversy could stall or bar the use of new test methods by regulatory agencies.
The discussion here addresses the prospect of controversy and focuses on the validity and defensibility of the new approaches. The primary measure of validity for regulatory purposes is scientific validity. Evidence of reliability and credibility that satisfies established scientific criteria is the principal basis for adopting and adapting new testing concepts and methods for regulatory use. Validity in this sense does not require de novo testing or further confirmation of previously validated scientific tests (see the section “Developing the Science Base and Assays to Implement the Vision”). Rather, it involves producing documentary evidence that the tests have been validated consistently with standard scientific criteria. The objective is to avoid bringing unproven tests and the resulting data into the regulatory system. However, there are also policy and procedural aspects to validation, so the discussion also addresses administrative policies and procedures and other nonscientific considerations related to promulgating and defending government testing practices and requirements. New data and data categories developed in line with the proposed changes in testing can be expected to affect many aspects of risk assessment and risk management. This section comments mainly on testing requirements.
Scientific Prerequisites of Validity
The federal agencies have a 75-year history of developing and promulgating toxicity-testing requirements for external entities, such as pesticide and drug manufacturers, and internal guidance for government laboratories (see introductory sections of this report). Documenting the validity, reliability, and relevance of test methods to the satisfaction of the scientific community has been and will continue to be an essential first step in identifying appropriate methods for use in the regulatory context. That documentation can also provide information and a tutorial for decision makers, the public, and the courts.
Individual agency testing requirements do not arise de novo. For example, U.S. EPA’s Office of Pesticide Programs promulgates test guidelines and requirements only after a comprehensive development and review process involving public comment, harmonization with other international organizations, and peer review by experts in the field (see, for example, 63 Fed. Reg. 41845–41848 (1998) and U.S. EPA, 2006c). Documentary evidence of validity has many sources and takes several forms. It includes evidence that customary criteria of scientific acceptance, such as peer review and publication in scholarly journals, have been satisfied. Use by other laboratories, other government agencies, or international organizations, such as the Organization for Economic Cooperation and Development, is an indication of scientific acceptability. As new methods emerge, case studies and peer-reviewed testing guidelines, standardized operating procedures, and practice can be used to document validity.
Establishing and documenting the validity of the new nonapical test methods and the validity of markers of adverse responses corresponding to perturbations of toxicity pathways will be important milestones in implementing the committee’s vision for regulatory use. Some considerations for accomplishing this are discussed next.
Adopting and Adapting New Test Systems and Methods
The vision prompts questions regarding the extent to which scientific progress using primarily human cells, cell lines, and cellular components in vitro can replace and, ideally, surpass in vivo mammalian systems as predictors of toxic effects in humans. Testing with cellular systems derived from human tissue and from nonmammalian systems is backed by an impressive scientific literature and has a long history that includes major contributions to cancer research and the Human Genome Project.
Regulatory agencies also use in vitro systems for toxicity testing and risk assessment. In vitro mode-of-action data were central elements when U.S. EPA proposed revisions to the cancer guidelines more than 10 years ago and in the final guidelines (U.S. EPA, 2005c). Mode-of-action data are featured in a wide array of risk assessments in U.S. EPA, other government institutions, and the private sector (for example, Meek et al., 2003; CalEPA, 2004; NTP, 2005b; IARC, 2006). U.S. EPA’s exploration of mode-of-action approaches illustrates the use of information on biologic perturbations involved in key toxicity pathways.
With few exceptions, such studies are used in the regulatory context mainly to supplement or complement data from in vivo studies. As a result, despite the established value of in vitro systems for many purposes, increased reliance on them for regulatory testing may require further evidence of validity. As discussed in this report, a particularly important aspect of establishing validity concerns metabolism. Many of the issues are highlighted in the following statement:
Several major problems are encountered in studying metabolism-related toxicity in vitro: (a) modeling human metabolism …; (b) maintaining tissue-specific function in vitro; (c) selecting an appropriate xenobiotic metabolizing system; (d) keeping enzyme activity stable over time; and (e) the adverse effects to toxicity-indicator cells of subcellular metabolizing fractions …. Two further problems [are] the testing of mixtures of chemicals that might require different enzyme systems … and … the inactivation of exogenous biotransformation systems, due to exposure to certain solvents and test substance. (Coecke et al., 2006, 57)
Unresolved scientific issues of that type are potential barriers to full validation and acceptance of some new concepts and methods for use in the regulatory context. Such issues show that although the vision conforms to the current movement from in vivo to in vitro test systems, a new set of scientific and related issues may replace interspecies extrapolation as a source of controversy. For example, using human cell lines in culture instead of laboratory animals to identify early perturbations in a cellular-response network avoids the uncertainties associated with the customary animal-to-human extrapolation. But such human-to-human methods introduce new issues and related uncertainties, such as extrapolation from isolated cells in tissue culture to intact humans and from the genetic backgrounds of the cultured cells to the genetic backgrounds of individuals or populations of interest for risk-assessment purposes.
Incorporation of emerging methods depends in part on the status of the new methods in the scientific community, which in turn depends on the reliability of new test systems in identifying compounds with known biologic activities. The generic question is “readiness” for regulatory use. Methods still under development are not necessarily barred, but until they are fully tested and documented, questions regarding extrapolation, relevance, and possible controversy with respect to use for regulatory purposes can be expected.
Identifying and Defining Markers and Indicators of Adverse Responses
The vision calls for replacing current tests for apical endpoints, such as tumors and birth defects, with mechanistically based testing that identifies early markers of disease and potential risk. The new tests focus on perturbations that are expected to produce adverse responses. This aspect of the vision presents validation issues that require two kinds of documentation, one scientific and one policy-related.
As discussed earlier, assessment of scientific validity will require evidence, such as peer-reviewed publications and other indicators of acceptance in the scientific community. Similar documentation will be required for other new endpoint categories identified as early indicators of perturbations of critical pathways that have the potential to cause toxic effects.
The policy question is an old one: What constitutes an adverse effect? The regulatory trigger for many statutes administered by U.S. EPA is an adverse effect or some variation. For example, the Safe Drinking Water Act calls for establishing contaminant concentrations at which “no known or anticipated adverse effects on the health of persons occur and which allows an adequate margin of safety.” A FIFRA provision calls for preventing “unreasonable adverse effects on the environment,” a phrase that includes nontarget animals as well as humans. As a result, identifying adverse effects is the objective of many current testing practices and regulations and will be critical for the use of new test methods and data.
Historically, both in legislation and in practice, testing and regulation have focused on apical endpoints, particularly clinically, anatomically, or histopathologically observable endpoints, such as tumors, birth defects, and neurologic impairments. That precedent could provide a basis of resistance to a move from traditional apical endpoints to perturbations of toxicity pathways. However, despite the historical emphasis, scientific and regulatory sources make clear that adverse effects embrace a wide array of endpoint categories. Table 7 provides some definitions that are consistent with the vision’s approach to toxicity testing.
TABLE 7. Definitions of adverse effect consistent with the vision’s approach to toxicity testing

| Definition | Source |
|---|---|
| “Adverse effect: A biochemical change, functional impairment, or pathologic lesion that affects the performance of the whole organism, or reduces an organism’s ability to respond to an additional environmental challenge.” | IRIS, 2007 |
| “Adverse effect: Change in the morphology, physiology, growth, development or life span of an organism, system or (sub) population that results in an impairment of functional capacity, an impairment of the capacity to compensate for additional stress, or an increase in susceptibility to other external influences.” | Renwick et al., 2003 |
| “Adverse effects are changes that are undesirable because they alter valued structural or functional attributes of the entities of interest …. The nature and intensity of effects help distinguish adverse changes from normal … variability or those resulting in little or no significant change.” | Sergeant, 2002 |
| “The spectrum of undesired effects of chemicals is broad. Some effects are deleterious and others are not …. [Regarding drugs], some side effects … are never desirable and are deleterious to the well-being of humans. These are referred to as the adverse, deleterious, or toxic effects of the drug.” | Klaassen & Eaton, 1991 |
| “All chemicals produce their toxic effects via alterations in normal cellular biochemistry and physiology …. It should also be recognized that most organs have a capacity for function that exceeds that required for normal homeostasis, sometimes referred to as functional reserve capacity.” | Klaassen & Eaton, 1991 |
In this case, establishing validity for regulatory purposes involves documenting (1) sources that justify a broad interpretation of adverse effects as a concept and (2) published papers and other materials that show the relationship between responses in toxicity pathways and disease. Case studies that link specific chemicals, mechanistic endpoints, and disease would be useful.
Policy and Procedural Prerequisites of Validity
Ideally, new test systems and agency guidelines that incorporate them will coevolve. In that regard, opportunities for public participation are as important as scientific measures of validity. In the courts, in laboratories subject to government testing requirements, and in the public forum, the perceived legitimacy of new testing approaches also depends on nonscientific factors.
Establishing a Record
For any of the components of the vision, documentary evidence of scientific validity as reviewed earlier in this report makes up the substantive portion of the record, but evidence of public participation is also important. Current U.S. EPA practice often includes extensive discussion with scientists in universities, industry, advocacy groups, and other government agencies at public conferences and workshops. Informal or formal notice-and-comment rulemaking procedures and external peer review are critical steps in the development and issuance of new testing and risk-assessment guidance (U.S. EPA, 1998c, 2005c).
Audience and Communication Issues
The committee’s vision is the product of extensive scientific thought supported by a substantial body of scientific evidence. The scientific principles and methods involved in the implementation of the committee’s vision are well known in the scientific community, a major constituency in the discussion of the scientific validity of data derived from toxicity tests for regulatory use. Scientists have long recognized the importance of effective communication of scientific results to a wide variety of stakeholders in toxicity testing, including other scientists, regulatory authorities, industry, the mass media, nongovernment organizations, and the public (NRC, 1989; Leiss, 2001; Krewski et al., 2006; ATSDR, 2007). However, because of the transformative nature of the committee’s vision for toxicity testing, communication of the scientific basis of the vision and its implications for risk assessment of environmental agents will be challenging.
Here, there is a need for clarity in communicating the essence of the committee’s vision to affected parties. The nature and scientific complexity of the unfamiliar and more sophisticated methods promoted in the vision may require new communication approaches. The scientific community may be best positioned to understand the scientific basis on which the committee’s vision rests but may need time to appreciate its implications fully. Acceptance of the committee’s vision in the scientific community will require further elaboration of the technical details of its implementation and generation of new scientific evidence to support the move away from apical endpoints to perturbations of toxicity pathways. The broad participation of the scientific community in the elaboration of the committee’s vision for toxicity testing is essential for its success.
Even more challenging will be the nonscientists’ understanding and acceptance of the committee’s vision. Regulatory authorities will need to consider how current risk-assessment practices can be adapted to make use of the types of toxicity-testing data underlying the committee’s vision to arrive at human exposure guidelines for environmental agents judged, on the basis of the new test results, to have toxic potential. Lawmakers will need to determine whether the regulatory statutes that form the basis of such guidelines need to be modified to reflect the greater reliance on indicators of toxicity-pathway perturbations than on overt health outcomes. For regulatory and legal experts to support the implementation of the committee’s vision, it is essential that the fundamental biologic tenets underlying it be clearly articulated and reinforced by the development of the scientific data needed to support the shift away from a focus on apical outcomes to biologic perturbations of key toxicity pathways. The communication challenge will be to portray the benefits of adopting the committee’s vision in scientifically valid terms without confusing the vision with over-reliance on intricate scientific detail.
Adoption of the committee’s vision will require acceptance by politicians and the public alike. There will undoubtedly be a lack of support for its implementation if the scientific essence of the vision (the notion of toxicity pathways and the effects of perturbing them) is not communicated in understandable terms. Data will need to be generated to demonstrate that avoidance of such perturbations will provide a level of protection against the potential health risks posed by environmental agents at least as great as the current level. It will also be important to demonstrate that adoption of the committee’s vision will permit an assessment of the potential risks associated with many more agents than is possible with current toxicity-testing practices and that this expanded coverage of the universe of environmental agents can be achieved cost-effectively.
The vision for toxicity testing in the 21st century articulated here represents a paradigm shift from the use of experimental animals and apical endpoints toward the use of more efficient in vitro tests and computational techniques. Implementation of the vision, which will provide much broader coverage of the universe of environmental agents that warrant our attention from a risk-assessment perspective, will require a concerted effort on the part of the scientific community. A substantial commitment of resources will be required to generate the scientific data needed to support that paradigm shift, which can be achieved only with the steadfast support of regulators, lawmakers, industry, and the general public. Their support will be garnered only if the essence of the committee’s vision can be communicated to all stakeholders in understandable terms.
CONCLUSIONS
Change often involves a pivotal event that builds on previous history and opens the door to a new era. Pivotal events in science include the discovery of penicillin, the elucidation of the DNA double helix, and the development of computers. All were marked by inauspicious beginnings followed by unheralded advances over a period of years but ultimately resulted in a pharmacopoeia of lifesaving drugs, a map of the human genome, and a personal computer on almost every desk in today’s workplace.
Toxicity testing is approaching such a scientific pivot point. It is poised to take advantage of the revolutions in biology and biotechnology. Advances in toxicogenomics, bioinformatics, systems biology, epigenetics, and computational toxicology could transform toxicity testing from a system based on whole-animal testing to one founded primarily on in vitro methods that evaluate changes in biologic processes using cells, cell lines, or cellular components, preferably of human origin. Anticipating the impact of recent scientific advances, the U.S. Environmental Protection Agency (EPA) asked the National Research Council (NRC) to develop a long-range vision for toxicity testing and a strategic plan for implementing the vision.
This article is based on the NRC Committee’s report on Toxicity Testing and Assessment of Environmental Agents, prepared in response to U.S. EPA’s request, and envisions a major campaign in the scientific community to advance the science of toxicity testing and put it on a forward-looking footing. The potential benefits are clear. Fresh thinking and the use of emerging methods for understanding how environmental agents affect human health will promote beneficial changes in testing of these agents and in the use of data for decision making. The envisioned change is expected to generate more robust data on the potential risks to humans posed by exposure to environmental agents and to expand capabilities to test chemicals more efficiently. A stronger scientific foundation offers the prospect of improved risk-based regulatory decisions and possibly greater public confidence in and acceptance of the decisions.
With those goals in mind, the committee presents in this report a vision for mobilizing the scientific community and marshalling scientific resources to initiate and sustain new approaches, some available and others yet to be developed, to toxicity testing. This report speaks to scientists in all sectors—government, public interest, industry, university, and consulting laboratories—who design and conduct toxicity tests and who use test results to evaluate risks to human health. The report also seeks to inform and engage decision makers and other leaders who shape the nature and scope of government regulations and who establish budgetary priorities that will determine progress in advancing toxicity testing in the future. The full impact of the committee’s wide-ranging recommendations can be achieved only if both scientists and nonscientists work to advance the objectives set forth in the vision.
The Vision
The current approach to toxicity testing relies primarily on a complex array of studies that evaluate observable outcomes in whole animals, such as clinical signs or pathologic changes, that are indicative of a disease state. Partly because that strategy is so time-consuming and resource-intensive, it has had difficulty in meeting many challenges encountered today, such as evaluating various life stages, numerous health outcomes, and large numbers of untested chemicals. The committee debated several options for improving the current system but concluded that a transformative paradigm shift is needed to achieve the design criteria set out in the committee’s interim report: (1) to provide broad coverage of chemicals, chemical mixtures, outcomes, and life stages, (2) to reduce the cost and time of testing, (3) to use fewer animals and cause minimal suffering in the animals used, and (4) to develop a more robust scientific basis for assessing health effects of environmental agents. For a further discussion of the options considered by the committee, see the section “Options for a New Toxicity-Testing Paradigm.”
The committee considered recent scientific advances in defining a new approach to toxicity testing. Substantial progress is being made in the elucidation of cellular-response networks—interconnected pathways composed of complex biochemical interactions of genes, proteins, and small molecules that maintain normal cellular function, control communication between cells, and allow cells to adapt to changes in their environment. For example, one familiar cellular-response network is signaling by estrogens in which initial exposure results in enhanced cell proliferation and tissue growth in specific tissues. Bioscience is enhancing our knowledge of cellular-response networks and allowing scientists to begin to uncover how environmental agents perturb pathways in ways that lead to toxicity. Cellular response pathways that, when sufficiently perturbed, are expected to result in adverse health effects are termed toxicity pathways. The committee envisions a new toxicity-testing system that evaluates biologically significant perturbations in key toxicity pathways by using new methods in computational biology and a comprehensive array of in vitro tests based on human biology.
Components of the Vision
Figure 3 illustrates the major components of the committee’s vision: chemical characterization, toxicity testing, and dose-response and extrapolation modeling. The components of the vision, which are described in the subsections that follow, are distinct but interrelated modules involving specific sets of technologies and scientific capabilities. Some chemical evaluations may proceed in a stepwise manner—from chemical characterization to toxicity testing to dose-response and extrapolation modeling—but such a sequential evaluation need not always be followed in practice. A critical feature of the new vision is consideration of the risk context (the decision-making context that creates the need for toxicity-testing information) at each step and the ability to exit the strategy at any point when sufficient data have been generated for decision making. The vision emphasizes the generation and use of population-based and human exposure data where possible for interpreting test results and encourages the collection of such data on important chemicals with biomonitoring, surveillance, and epidemiologic studies. Population-based and human exposure data, along with the risk context, will play a role in both guiding and using the toxicity information that is produced. Finally, the vision anticipates the development of a formal process to phase in and phase out test methods as scientific understanding of toxicity-testing methods expands. That process addresses the need for efficient testing of all chemicals in a timely, cost-effective fashion.
Chemical Characterization
Chemical characterization is meant to provide insights into key questions, including a compound’s stability in the environment, the potential for human exposure, the likely routes of exposure, the potential for bioaccumulation, possible routes of metabolism, and the likely toxicity of the compound and possible metabolites based on chemical structure or physical or chemical characteristics. Thus, data would be collected on physical and chemical properties, use, possible environmental concentrations, metabolites and breakdown products, initial molecular interactions of compounds and metabolites with cellular components, and possible toxic properties. A variety of computational methods might be used to predict those properties and characteristics. After chemical characterization, decisions might be made about what further testing is required or whether it is needed at all. In most cases, chemical characterization alone is not expected to be sufficient to reach decisions about the toxicity of an environmental agent.
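Where computational prediction is used in chemical characterization, even simple structure-derived descriptors speak to several of the questions above (for example, lipophilicity as a crude indicator of bioaccumulation potential). The following sketch is purely illustrative and is not part of the committee’s report; it assumes the open-source RDKit toolkit, and the example compound and descriptor set are arbitrary choices.

```python
# Illustrative sketch only: compute a few physicochemical descriptors from a
# chemical structure, of the sort that might inform chemical characterization.
# Assumes the open-source RDKit library; compound and descriptors are arbitrary.
from rdkit import Chem
from rdkit.Chem import Crippen, Descriptors

smiles = "CC(C)(c1ccc(O)cc1)c1ccc(O)cc1"  # bisphenol A, chosen only as an example
mol = Chem.MolFromSmiles(smiles)

profile = {
    "molecular weight": Descriptors.MolWt(mol),              # size, relevant to uptake
    "logP": Crippen.MolLogP(mol),                             # lipophilicity, bioaccumulation potential
    "H-bond donors": Descriptors.NumHDonors(mol),             # crude solubility/reactivity indicator
    "H-bond acceptors": Descriptors.NumHAcceptors(mol),
    "topological polar surface area": Descriptors.TPSA(mol),
}
for name, value in profile.items():
    print(f"{name}: {value:.2f}")
```

In practice, such descriptors would be only a starting point; structure-activity models and predictions of likely metabolites would build on them.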
Toxicity Testing
In the vision proposed (Figure 3), toxicity testing has two components: toxicity-pathway assays and targeted testing. The committee expects that when the vision is achieved, predictive, pathway-based assays will serve as the central component of a broad toxicity-testing strategy for assessing the biologic activity of new and existing compounds. Targeted testing will serve to complement the assays and support evaluation.
Toxicity Pathways
Figure 2 illustrates the activation of a toxicity pathway. The initial perturbations of cell-signaling motifs, genetic circuits, and cellular-response networks are obligatory changes resulting from chemical exposure that might eventually result in disease. The consequences of a biologic perturbation depend on its magnitude, which is related to the dose, the timing and duration of the perturbation, and the susceptibility of the host. Accordingly, at low doses, many biologic systems may function normally within their homeostatic limits. At somewhat higher doses, clear biologic responses occur. They may be successfully handled by adaptation, although some susceptible people may respond. More intense or persistent perturbations may overwhelm the capacity of the system to adapt and lead to tissue injury and possible adverse health effects.
The committee’s vision capitalizes on the identification and use of toxicity pathways as the basis of new approaches to toxicity testing and dose-response modeling. Accordingly, the vision emphasizes the development of suites of predictive, high-throughput assays (high-throughput assays are efficiently designed experiments that can be automated and rapidly performed to measure the effect of substances on a biologic process of interest—these assays can evaluate hundreds to many thousands of chemicals over a wide concentration range to identify chemical actions on gene, pathway, and cell function) that use cells or cell lines, preferably of human origin, to evaluate relevant perturbations in key toxicity pathways. Those assays may measure relatively simple processes, such as binding of environmental agents with cellular proteins and changes in gene expression caused by that binding, or they may measure more integrated responses, such as cell division and cell differentiation. Although the majority of toxicity tests in the vision are expected to use high-throughput methods, other tests could include medium-throughput assays of more integrated cellular responses, such as cytotoxicity, cell proliferation, and apoptosis. Over time, the need for traditional animal testing should be greatly reduced and possibly even eliminated.
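As a purely illustrative sketch of the triage such rapid screening enables, the example below flags chemicals whose normalized readout in a hypothetical reporter assay exceeds a fold-change threshold relative to vehicle controls; the chemical names, readouts, and threshold are assumptions, not data from the report.

```python
# Illustrative sketch: flag "hits" in a hypothetical high-throughput pathway
# assay by comparing normalized readouts with a fold-change threshold.
vehicle_control = 1.0   # normalized baseline response
hit_threshold = 2.0     # illustrative fold-change cutoff

# Hypothetical normalized responses for a small set of test chemicals.
plate_readout = {
    "chemical_A": 0.9,
    "chemical_B": 3.4,
    "chemical_C": 1.2,
    "chemical_D": 6.8,
}

hits = sorted(
    name for name, response in plate_readout.items()
    if response / vehicle_control >= hit_threshold
)
print("Pathway-assay hits flagged for follow-up:", hits)
```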
Targeted Testing
Targeted testing would be used to complement toxicity-pathway tests and to ensure adequate evaluation. It would be used (1) to clarify substantial uncertainties in the interpretation of toxicity-pathway data; (2) to understand effects of representative prototype compounds from classes of materials, such as nanoparticles, that may activate toxicity pathways not included in a standard suite of assays; (3) to refine a risk estimate when the targeted testing can reduce uncertainty, and a more refined estimate is needed for decision making; (4) to investigate the production of possibly toxic metabolites; and (5) to fill gaps in the toxicity-pathway testing strategy to ensure that critical toxicity pathways and endpoints are adequately covered. One of the challenges of developing an in vitro test system to evaluate toxicity is the current inability of cell assays to mirror metabolism in the integrated whole animal. For the foreseeable future, any in vitro strategy will need to include a provision to assess likely metabolites through whole-animal testing.
Targeted testing might be conducted in vivo or in vitro, depending on the toxicity tests available. Although targeted tests could be based on existing toxicity-test systems, they will probably differ from traditional tests in the future. They could use transgenic species, isogenic strains, new animal models, or other novel test systems and could include a toxicogenomic evaluation of tissue responses over wide dose ranges. Whatever system is used, testing protocols would maximize the amount of information gained from whole-animal toxicity testing.
Dose-Response and Extrapolation Modeling
In the vision proposed (Figure 3), dose-response models would be developed for environmental agents primarily on the basis of data from mechanistic, in vitro assays as described in the toxicity-testing component. The dose-response models would describe the relationship between concentration in the test medium and degree of in vitro response. In some risk contexts, a dose-response model based on in vitro results might provide adequate data to support a risk-management decision. An example could involve compounds for which host-susceptibility factors in humans are well understood and for which human biomonitoring provides good information about tissue or blood concentrations of the compound and other related exposures that affect the toxicity pathway in a human population.
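To make the in vitro dose-response step concrete, the sketch below fits a four-parameter Hill model to hypothetical concentration-response data from a pathway assay. The data, starting values, and the choice of the Hill form are illustrative assumptions, not a prescription from the committee.

```python
# Illustrative sketch: fit a Hill concentration-response model to hypothetical
# in vitro pathway-assay data and report the estimated EC50 and slope.
import numpy as np
from scipy.optimize import curve_fit

def hill(conc, bottom, top, ec50, slope):
    """Four-parameter Hill concentration-response model."""
    return bottom + (top - bottom) / (1.0 + (ec50 / conc) ** slope)

# Hypothetical assay readings (e.g., fold induction of a reporter) at each
# test concentration in micromolar.
conc = np.array([0.01, 0.03, 0.1, 0.3, 1.0, 3.0, 10.0, 30.0])
response = np.array([1.0, 1.1, 1.3, 2.0, 3.6, 5.2, 5.9, 6.1])

params, _ = curve_fit(hill, conc, response, p0=[1.0, 6.0, 1.0, 1.0])
bottom, top, ec50, slope = params
print(f"Estimated EC50: {ec50:.2f} uM, Hill slope: {slope:.2f}")
```

The fitted EC50 (or a benchmark-concentration analogue) is the kind of in vitro point of departure that the extrapolation step described next would translate into a human exposure.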
Extrapolation modeling estimates the environmental exposures or human intakes that would lead to human tissue concentrations similar to those associated with perturbations of toxicity pathways in vitro and would account for host susceptibility factors. In the vision proposed, extrapolation modeling has three primary components. First, a toxicity-pathway model would provide a quantitative, mechanistic understanding of the dose-response relationship for the perturbations of the pathways by environmental agents. Second, physiologically based pharmacokinetic modeling would then be used to predict human exposures that lead to tissue concentrations that could be compared with the concentrations that caused perturbations in vitro. Third, human data would provide information on background chemical exposures and disease processes that would affect the same toxicity pathway and provide a basis for addressing host susceptibility quantitatively.
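The sketch below illustrates the spirit of the second component with a deliberately simplified calculation: a one-compartment, linear-clearance model stands in for the physiologically based pharmacokinetic models the vision calls for, and it estimates the daily oral intake that would produce a steady-state blood concentration equal to an in vitro pathway-activating concentration. All parameter values are hypothetical.

```python
# Illustrative reverse-dosimetry sketch (not the committee's method): convert an
# in vitro activating concentration to an equivalent daily oral intake using a
# one-compartment, linear-clearance steady-state model. All values are hypothetical.

def oral_equivalent_dose(css_uM, mol_weight_g_per_mol, clearance_L_per_h_per_kg,
                         oral_bioavailability=1.0):
    """Daily intake (mg/kg/day) that yields the target steady-state concentration."""
    css_mg_per_L = css_uM * mol_weight_g_per_mol / 1000.0  # umol/L -> mg/L
    dose_rate_mg_per_h_per_kg = css_mg_per_L * clearance_L_per_h_per_kg / oral_bioavailability
    return dose_rate_mg_per_h_per_kg * 24.0

# Hypothetical inputs: in vitro EC50 of 1.2 uM, molecular weight 228 g/mol,
# clearance 0.05 L/h/kg, complete oral absorption.
print(f"Oral equivalent dose: {oral_equivalent_dose(1.2, 228.0, 0.05):.3f} mg/kg/day")
```

A full implementation would replace the single compartment with tissue-specific physiologically based pharmacokinetic models and would layer in the host-susceptibility information described as the third component.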
Population-Based and Human Exposure Data
Population-based and human exposure data are important components of the committee’s toxicity-testing strategy (Figure 3). Those data can help to inform each component of the vision and ensure the integrity of the overall testing strategy. The shift toward the collection of more mechanistic data on fundamental biologic perturbations in human cells will require greater use of biomonitoring and human-surveillance studies for data interpretation. Moreover, the interaction between population-based studies and toxicity tests will improve the design of each study type for answering questions about the importance of molecular, cellular, and genetic factors that influence individual and population-level health risks. Because the vision emphasizes studies conducted in human cells that indicate how environmental agents can affect human biologic responses, the studies will suggest biomarkers (indicators of human exposure, effect, or susceptibility) that can be monitored and studied in human populations.
As toxicity testing shifts to cell-based studies, human exposure data from biomonitoring studies [such as those recommended in the National Research Council report Human Biomonitoring for Environmental Chemicals (NRC, 2006b)] may prove pivotal. Such data can be used to select doses for toxicity testing that can provide information on biologic effects at environmentally relevant exposures. More important, comparison of concentrations that activate toxicity pathways with concentrations of agents in blood, urine, or other tissues from human populations will help to identify potentially important exposures to ensure an adequate margin of safety in setting human exposure guidelines.
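A minimal illustration of that comparison is sketched below: the ratio of the lowest pathway-activating concentration observed in vitro to a biomonitored blood concentration gives a crude margin of exposure that could help prioritize chemicals for attention. The concentrations and the screening threshold are assumed values, not recommendations from the report.

```python
# Illustrative sketch: compare an in vitro pathway-activating concentration with
# a biomonitored human blood concentration. All values are hypothetical.
activating_conc_uM = 1.2        # lowest concentration that perturbed the pathway in vitro
biomonitored_conc_uM = 0.004    # e.g., an upper-percentile blood concentration from a survey

margin_of_exposure = activating_conc_uM / biomonitored_conc_uM
print(f"Margin of exposure: {margin_of_exposure:.0f}")

if margin_of_exposure < 100:    # illustrative screening threshold
    print("Small margin: flag for targeted follow-up testing")
```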
Risk Context
Ultimately, toxicity testing is useful only if it can be used to facilitate more informed and efficient responses to the public-health concerns of regulators, industry, and the public. Common scenarios, defined by the committee as “risk contexts,” for which toxicity testing is used to make decisions include evaluation of potential environmental agents, existing environmental agents, sites of environmental contamination, environmental contributors to a human disease, and the relative risk of different environmental agents. Some risk contexts require rapid screening of tens of thousands of environmental agents; some require highly refined dose-response data, extending down to environmentally relevant exposure concentrations; and some require the ability to test chemical mixtures or to use assays focused on specific mechanisms. Some risk contexts might require the use of population-based approaches, including population health surveillance and biomonitoring. With its emphasis on high-throughput assays that use human cells, cell lines, and components to evaluate biologically significant perturbations in key toxicity pathways, the vision presented here will assist the decision-making process in each risk context.
Implementation of the Vision
Implementation of the vision will require (1) the availability of suites of in vitro tests—preferably based on human cells, cell lines, or components—that are sufficiently comprehensive to evaluate activity in toxicity pathways associated with the broad array of possible toxic responses; (2) the availability of targeted tests to complement the in vitro tests and ensure an adequate toxicity database for risk-management decision making; (3) computational models of toxicity pathways to support application of in vitro test results to predict exposures in the general population that could potentially lead to adverse changes; (4) infrastructure changes to support the basic and applied research needed to develop the tests and the pathway models; (5) validation of tests and test strategies for incorporation into chemical-assessment guidelines that will provide direction in interpreting and drawing conclusions from the new assay results; and (6) evidence justifying that the results of tests based on perturbations in toxicity pathways are adequately predictive of adverse health outcomes to be used in decision making.
A substantial and focused research effort will be needed to meet those requirements. The research will need to develop both new scientific knowledge and new toxicity-testing methods. Key questions that need to be addressed regarding knowledge and method development are highlighted in Table 4 and Table 5, respectively.
The research and development needed to implement the vision would progress in phases whose timelines would overlap. Phase I would focus on elucidating toxicity pathways; developing a data-storage, -access, and -management system; developing standard protocols for research methods and reporting; and planning a strategy for human surveillance and biomonitoring to support the toxicity-pathway testing approach. Phase II would involve development and validation of toxicity-pathway assays and identification of markers of exposure, effect, and susceptibility for use in surveillance and biomonitoring of human populations. Phase III would evaluate the assays by running them in parallel with traditional toxicity tests, by applying them to chemicals with large existing data sets, and by using them as a screen for chemicals that would not otherwise be tested. Parallel testing will allow identification of toxicities that might be missed if the new assays were used alone and will compel the development of assays to address these gaps. Surveillance and biomonitoring of human populations would also begin during Phase III. Finally, the validated assays would be assembled into panels in Phase IV for use in place of identified traditional toxicity tests.
Validation will be a critical component of the research and development phases. Establishing the validity of any new toxicity assay can be a formidable process—expensive, time-consuming, and logistically and technically demanding. For several reasons, validation will be especially challenging for the mechanistically based tests envisioned by the committee. First, the test results to be generated in the new paradigm depart from the traditional data used by regulatory agencies to set health advisories and guidelines. Second, the many new technologies developed will need to be standardized and refined before specific applications are validated for regulatory purposes. Third, because new technologies are evolving rapidly, the decision to halt optimization of a particular application and begin a formal validation study will be somewhat subjective. Fourth, the committee envisions that a suite of new tests will typically be needed to replace a specific traditional test. Fifth, existing guidelines focus on concordance between the results of new and existing assays; the difficulty will be to find standards for comparison that can assess the relevance and predictivity of the new assays. Sixth, because virtually all environmental agents will perturb signaling pathways to some degree, a key challenge will be to determine when such perturbations are likely to lead to toxic effects and when they are not.
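For the concordance comparisons mentioned in the fifth point, validation studies commonly summarize agreement between a new assay and a reference test with sensitivity, specificity, and overall concordance. The sketch below computes these statistics from a hypothetical two-by-two table of results; the counts are invented for illustration.

```python
# Illustrative sketch: concordance statistics for a new assay judged against a
# reference test. The two-by-two counts below are hypothetical.
true_pos, false_pos = 42, 6     # new assay positive; reference positive / negative
false_neg, true_neg = 9, 143    # new assay negative; reference positive / negative

sensitivity = true_pos / (true_pos + false_neg)
specificity = true_neg / (true_neg + false_pos)
concordance = (true_pos + true_neg) / (true_pos + false_pos + false_neg + true_neg)
print(f"Sensitivity: {sensitivity:.2f}, Specificity: {specificity:.2f}, "
      f"Concordance: {concordance:.2f}")
```

As the text notes, the deeper challenge is less the arithmetic than finding reference standards that can speak to the relevance and predictivity of the new assays.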
A long-term, large-scale concerted effort is needed to bring the committee’s vision for toxicity testing to fruition. A critical factor for success is the conduct of the transformative research needed to establish the scientific basis of new toxicity-testing tools and to understand the implications of test results and their application in risk assessments used in decision making. The committee concludes that an appropriate institutional structure that fosters multidisciplinary intramural and extramural research is needed to achieve the vision. The effort will not succeed merely by creating a virtual institution to link and integrate organizations that perform relevant research and by dispersing funding among relevant research projects. Mission-oriented intramural and extramural programs with core multidisciplinary activities within the institute to answer the critical research questions listed earlier in this report can foster the kind of interdisciplinary activity essential for the success of the initiative. There would be far less chance of success within a reasonable time if the research were dispersed among different locations and organizations without a core integrating and organizing institute to enable the communication and problem solving required across disciplines.
Research frequently brings surprises, and today’s predictions about the promise of lines of research might prove to be too pessimistic or too optimistic in some details. Therefore, the committee recommends that an independent scientific assessment of the research program supporting implementation of the vision be conducted every 3 to 5 years to provide advice for midcourse corrections. The interim assessments would weigh progress, evaluate the promise of new methods on the research horizon, and refine the committee’s vision in light of the many scientific advances that are expected to occur in the near future.
Regulatory acceptance of the new toxicity-testing strategy will depend on several factors. New testing requirements will be expected to reflect the state of the science and be founded on peer-reviewed research, established test protocols, validated models, and case studies. Other factors affecting regulatory acceptance stem from administrative procedures associated with rulemaking, such as documenting scientific sources; providing opportunities for scientific experts, stakeholders, and the interested public to participate; and consulting with sister agencies and international organizations. Implementing the vision will require improvements and focused effort over a period of decades. However, given the political will and the availability of funds to adapt the current regulatory system to take advantage of the best possible scientific approaches to toxicity testing in the future, the committee foresees no insurmountable obstacles to implementing the vision presented here.
Resources are always limited, and current toxicity-testing practices are long established and deeply ingrained in some sectors. Thus, some resistance to the vision proposed by this committee is expected. However, the vision takes full advantage of current and expected scientific advances to enhance our understanding of how environmental agents can affect human health. It has the potential to greatly reduce the cost and time of testing and to lead to much broader coverage of the universe of environmental agents. Moreover, the vision will lead to a marked reduction in animal use and focus on doses that are more relevant to those experienced by human populations. The vision for toxicity testing in the twenty-first century articulated here is a paradigm shift that will not only improve the current system but transform it into one capable of overcoming current limitations and meeting future challenges.
Footnotes
This article is based on the report Toxicity Testing in the 21st Century: A Vision and a Strategy, prepared by the Committee on Toxicity Testing and Assessment of Environmental Agents. The report was originally published by the U.S. National Research Council in 2007. It is reproduced here with permission of the National Academies Press to serve as the basis for a discussion of progress made toward the implementation of the recommendations in the report, as reflected in the invited papers included in this volume. D. Krewski is the Natural Sciences and Engineering Research Council of Canada Chair in Risk Science at the University of Ottawa.
References
- Abdala-Valencia H, Earwood J, Bansal S, Jansen M, Babcock G, Garvy B, Wills-Karp M, Cook-Mills JM. Non-hematopoietic NADPH oxidase regulation of lung eosinophilia and airway hyperresponsiveness in experimentally-induced asthma. Am J Physiol Lung Cell Mol Physiol. 2007;292:L1111–L1125. doi: 10.1152/ajplung.00208.2006. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Affymetrix Corporation. Gene Chip Arrays. Affymetrix Corporation; 2007. Available at http://www.affymetrix.com/products/arrays/specific/hgu133plus.affx. [Google Scholar]
- Akpinar-Elci M, Kanwal R, Kreiss K. Bronchiolitis obliterans syndrome in popcorn plant workers. Am J Respir Crit Care Med. 2002;165:A526–A526. [Google Scholar]
- Akutsu T, Kuhara S, Maruyama O, Miyano S. A system for identifying genetic networks from gene expression patterns produced by gene disruption and overexpressions. Genome Inform Ser Workshop Genome Inform. 1998;9:151–160. [PubMed] [Google Scholar]
- Aldridge BB, Burke JM, Lauffenburger DA, Sorger PK. Physicochemical modeling of cell signaling pathways. Nat Cell Biol. 2006;8:1195–1203. doi: 10.1038/ncb1497. [DOI] [PubMed] [Google Scholar]
- Andersen ME, Yang RS, French CT, Chubb LS, Dennison JE. Molecular circuits, biological switches, and nonlinear dose-response relationships. Environ Health Perspect. 2002;110(suppl. 6):971–978. doi: 10.1289/ehp.02110s6971. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Andersen ME, Dennison JE, Thomas RS, Conolly RB. New directions in incidence dose-response modeling. Trends Biotechnol. 2005a;23:122–127. doi: 10.1016/j.tibtech.2005.01.007. [DOI] [PubMed] [Google Scholar]
- Andersen ME, Thomas RS, Gaido KW, Conolly RB. Dose-response modeling in reproductive toxicology in the systems biology era. Reprod Toxicol. 2005b;19:327–337. doi: 10.1016/j.reprotox.2004.12.004. [DOI] [PubMed] [Google Scholar]
- Anderson S. The state of the world’s pharmacy: A portrait of the pharmacy profession. J Interprof Care. 2002;16:391–404. doi: 10.1080/1356182021000008337. [DOI] [PubMed] [Google Scholar]
- Andrew AS, Burgess JL, Meza MM, Demidenko E, Waugh MG, Hamilton JW, Karagas MR. Arsenic exposure is associated with decreased DNA repair in vitro and in individuals exposed to drinking water arsenic. Environ Health Perspect. 2006;114:1193–1198. doi: 10.1289/ehp.9008. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Aranda A, Pascual A. Nuclear hormone receptors and gene expression. Physiol Rev. 2001;81:1269–1304. doi: 10.1152/physrev.2001.81.3.1269. [DOI] [PubMed] [Google Scholar]
- ATSDR (Agency for Toxic Substances and Disease Registry) A primer on health risk communication principles and practices. Atlanta, GA: U.S. Department of Health and Human Services, Agency for Toxic Substances and Disease Registry; 2007. Available at http://www.atsdr.cdc.gov/risk/riskprimer/index.html [accessed March 20, 2007] [Google Scholar]
- Bakand S, Winder C, Khalil C, Hayes A. Toxicity assessment of industrial chemicals and airborne contaminants: Transition from in vivo to in vitro test methods: A review. Inhal Toxicol. 2005;17:775–787. doi: 10.1080/08958370500225240. [DOI] [PubMed] [Google Scholar]
- Balakin KV, Kozintsev AV, Kiselyov AS, Savchuk NP. Rational design approaches to chemical libraries for hit identification. Curr Drug Discov Technol. 2006;3:49–65. doi: 10.2174/157016306776637564. [DOI] [PubMed] [Google Scholar]
- Balazs AC. Modeling self-assembly and phase behavior in complex mixtures. Annu Rev Phys Chem. 2007;58:211–233. doi: 10.1146/annurev.physchem.58.032806.104520. [DOI] [PubMed] [Google Scholar]
- Balls M, Amcoff P, Bremer S, Casati S, Coecke S, Clothier R, Combes R, Corvi R, Curren R, Eskes C, Fentem J, Gribaldo L, Halder M, Hartung T, Hoffmann S, Schectman L, Scott L, Spielmann H, Stokes W, Tice R, Wagner D, Zuang V. The principles of weight of evidence validation of test methods and testing strategies. The report and recommendations of ECVAM workshop 58. Altern Lab Anim. 2006;34:603–620. doi: 10.1177/026119290603400604. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Barabasi AL, Oltvai ZN. Network biology: Understanding the cell’s functional organization. Nat Rev Genet. 2004;5:101–113. doi: 10.1038/nrg1272. [DOI] [PubMed] [Google Scholar]
- Barton HA. Computational pharmacokinetics during developmental windows of susceptibility. J Toxicol Environ Health A. 2005;68:889–900. doi: 10.1080/15287390590912180. [DOI] [PubMed] [Google Scholar]
- Battelle. Evaluation of SAR predictions of estrogen receptor binding affinity. Columbus, OH: Battelle; 2002. Prepared for the U.S. Environmental Protection Agency (EPA/68-W-01-023, Work Assignment 2–3). [Google Scholar]
- Benigni R. Chemical structure of mutagens and carcinogens and the relationship with biological activity. J Exp Clin Cancer Res. 2004;23:5–8. [PubMed] [Google Scholar]
- Benigni R, Bossa C. Structure–activity models of chemical carcinogens: State of the art, and new directions. Ann Ist Super Sanita. 2006;42:118–126. [PubMed] [Google Scholar]
- Berns K, Hijmans EM, Mullenders J, Brummelkamp TR, Velds A, Heimerikx M, Kerkhoven RM, Madiredjo M, Nijkamp W, Weigelt B, Agami R, Ge W, Cavet G, Linsley PS, Beijersbergen RL, Bernards R. A large-scale RNAi screen in human cells identifies new components of the p53 pathway. Nature. 2004;428:431–437. doi: 10.1038/nature02371. [DOI] [PubMed] [Google Scholar]
- Bhalla US. Signaling in small subcellular volumes. I. Stochastic and diffusion effects on individual pathways. Biophys J. 2004a;87:733–744. doi: 10.1529/biophysj.104.040469. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Bhalla US. Signaling in small subcellular volumes. II. Stochastic and diffusion effects on synaptic network properties. Biophys J. 2004b;87:745–753. doi: 10.1529/biophysj.104.040501. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Bhalla US, Ram PT, Iyengar R. MAP kinase phosphatase as a locus of flexibility in a mitogen activated protein kinase signaling network. Science. 2002;297:1018–1023. doi: 10.1126/science.1068873. [DOI] [PubMed] [Google Scholar]
- Blount BC, Pirkle JL, Osterloh JD, Valentin-Blasini L, Caldwell KL. Urinary perchlorate and thyroid hormone levels in adolescent and adult men and women living in the United States. Environ Health Perspect. 2006;114:1865–1871. doi: 10.1289/ehp.9466. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Bodor N. Recent advances in retrometabolic design approaches. J Control Release. 1999;62:209–222. doi: 10.1016/s0168-3659(99)00040-1. [DOI] [PubMed] [Google Scholar]
- Bois FY, Krowech G, Zeise L. Modeling human interindividual variability in metabolism and risk: The example of 4-aminobiphenyl. Risk Anal. 1995;15:205–213. doi: 10.1111/j.1539-6924.1995.tb00314.x. [DOI] [PubMed] [Google Scholar]
- Bois FY, Gelman A, Jiang J, Maszle DR, Zeise L, Alexeef G. Population toxicokinetics of tetrachloroethylene. Arch Toxicol. 1996;70:347–355. doi: 10.1007/s002040050284. [DOI] [PubMed] [Google Scholar]
- Borm PJ, Robbins D, Haubold S, Kuhlbusch T, Fissan H, Donaldson K, Schins R, Kreyling W, Lademann J, Krutmann J, Warheit D, Oberdorster E. The potential risks of nanomaterials: A review carried out for ECETOC. Part Fibre Toxicol. 2006;3:11. doi: 10.1186/1743-8977-3-11. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Brazma A, Hingamp P, Quackenbush J, Sherlock G, Spellman P, Stoeckert C, Aach J, Ansorge W, Ball CA, Causton HC, Gaasterland T, Glenisson P, Holstege FC, Kim IF, Markowitz V, Matese C, Parkinson H, Robinson A, Sarkans U, Schulze-Kremer S, Stewart J, Taylor R, Vilo J, Vingron M. Minimum information about a microarray experiment (MIAME)-toward standards for microarray data. Nat Genet. 2001;29:365–371. doi: 10.1038/ng1201-365. [DOI] [PubMed] [Google Scholar]
- Brent R. Genomic biology. Cell. 2000;100:169–183. doi: 10.1016/s0092-8674(00)81693-1. [DOI] [PubMed] [Google Scholar]
- Bugrim A, Nikolskaya T, Nikolsky Y. Early prediction of drug metabolism and toxicity: Systems biology approach and modeling. Drug Discov Today. 2004;9:127–135. doi: 10.1016/S1359-6446(03)02971-4. [DOI] [PubMed] [Google Scholar]
- California Environmental Protection Agency. Public health goal for arsenic in drinking water. Office of Environmental Health Hazard Assessment, California Environmental Protection Agency; 2004. Available at http://www.oehha.ca.gov/water/phg/pdf/asfinal.pdf. [Google Scholar]
- Carmichael NG, Barton HA, Boobis AR, Cooper RL, Dellarco VL, Doerrer NG, Fenner-Crisp PA, Doe JE, Lamb IV JC, Pastoor TP. Agricultural chemical safety assessment: A multisector approach to the modernization of human safety requirements. Crit Rev Toxicol. 2006;36:1–7. doi: 10.1080/10408440500534354. [DOI] [PubMed] [Google Scholar]
- Cassee FR, Groten JP, van Bladeren PJ, Feron VJ. Toxicological evaluation and risk assessment of chemical mixtures. Crit Rev Toxicol. 1998;28:73–101. doi: 10.1080/10408449891344164. [DOI] [PubMed] [Google Scholar]
- Centers for Disease Control and Prevention. National report on human exposure to environmental chemicals. Atlanta, GA: CDC; 2001. Available at http://www.noharm.org/details.cfm?ID=745&type=document. [Google Scholar]
- Centers for Disease Control and Prevention. Second national report on human Exposure to Environmental Chemicals. Atlanta, GA: U.S. Department of Health and Human Services, Centers for Disease Control and Prevention; 2003. Available at http://www.serafin.ch/toxicreport.pdf. [Google Scholar]
- Centers for Disease Control and Prevention. Third national report on human exposure to environmental chemicals. Atlanta, GA: U.S. Department of Health and Human Services, Centers for Disease Control and Prevention; 2005. Available at http://www.cdc.gov/exposurereport/pdf/thirdreport.pdf. [Google Scholar]
- Chan RC, Wang M, Li N, Yanagawa Y, Onoe K, Lee JJ, Nel AE. Pro-oxidative diesel exhaust particle chemicals inhibit LPS-induced dendritic cell responses involved in T-helper differentiation. J Allergy Clin Immunol. 2006;118:455–465. doi: 10.1016/j.jaci.2006.06.006. [DOI] [PubMed] [Google Scholar]
- Chanda SK, White S, Orth AP, Reisdorph R, Miraglia L, Thomas RS, DeJesus P, Mason DE, Huang Q, Vega R, Yu DH, Nelson CG, Smith BM, Terry R, Linford AS, Yu Y, Chirn GW, Song C, Labow MA, Cohen D, King FJ, Peters EC, Schultz PG, Vogt PK, Hogenesch JB, Caldwell JS. Genome-scale functional profiling of the mammalian AP-1 signaling pathway. Proc Natl Acad Sci USA. 2003;100:12153–12158. doi: 10.1073/pnas.1934839100. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Charles GD. In vitro models in endocrine disruptor screening. ILAR J. 2004;45:494–501. doi: 10.1093/ilar.45.4.494. [DOI] [PubMed] [Google Scholar]
- Cho KH, Shin SY, Lee HW, Wolkenhauer O. Investigations into the analysis and modeling of the TNFa-mediated NF-kB-signaling pathway. Genome Res. 2003;13:2413–2422. doi: 10.1101/gr.1195703. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Clark LH, Setzer RW, Barton HA. Framework for evaluation of physiologically-based pharmacokinetic models for use in safety or risk assessment. Risk Anal. 2004;24:1697–1717. doi: 10.1111/j.0272-4332.2004.00561.x. [DOI] [PubMed] [Google Scholar]
- Clewell HJ, Gentry PR, Kester JE, Andersen ME. Evaluation of physiologically based pharmacokinetic models in risk assessment: An example with perchloroethylene. Crit Rev Toxicol. 2005;35:413–433. doi: 10.1080/10408440590931994. [DOI] [PubMed] [Google Scholar]
- Coecke S, Blaauboer BJ, Elaut G, Freeman S, Freidig A, Gensmantel N, Hoet P, Kapoulas VM, Ladstetter B, Langley G, Leahy D, Mannens G, Meneguz A, Monshouwer M, Nemery B, Pelkonen O, Pfaller W, Prieto P, Proctor N, Rogiers V, Rostami-Hodjegan A, Sabbioni E, Steiling W, van de Sandt JJ. Toxicokinetics and metabolism. Altern Lab Anim. 2005;33(suppl. 1):147–175. doi: 10.1177/026119290503301s15. [DOI] [PubMed] [Google Scholar]
- Coecke S, Ahr H, Blaauboer BJ, Bremer S, Casati S, Castell J, Combes R, Corvi R, Crespi CL, Cunningham ML, Elaut G, Eletti B, Freidig A, Gennari A, Ghersi-Egea JF, Guillouzo A, Hartung T, Hoet P, Ingelman-Sundberg M, Munn S, Janssens W, Ladstetter B, Leahy D, Long A, Meneguz A, Monshouwer M, Morath S, Nagelkerke F, Pelkonen O, Ponti J, Prieto P, Richert L, Sabbioni E, Schaack B, Steiling W, Testai E, Vericat JA, Worth A. Metabolism: A bottleneck in in vitro toxicological test development. The report and recommendations of ECVAM workshop 54. Altern Lab Anim. 2006;34:49–84. doi: 10.1177/026119290603400113. [DOI] [PubMed] [Google Scholar]
- Congiu A, Pozzi D, Esposito C, Castellano C, Mossa G. Correlation between structure and transfection efficiency: A study of DC-Chol-DOPE/DNA complexes. Colloids Surf B Biointerfaces. 2004;36:43–48. doi: 10.1016/j.colsurfb.2004.04.006. [DOI] [PubMed] [Google Scholar]
- Conner JD, Ebner LS, O’Connor CA, Volz C, Weinstein KW. Pesticide regulation handbook. New York: Executive Enterprises Publication; 1987. [Google Scholar]
- Conolly RB. The use of biologically based modeling in risk assessment. Toxicology. 2002;181–182:275–279. doi: 10.1016/s0300-483x(02)00295-0. [DOI] [PubMed] [Google Scholar]
- Conolly RB, Kimbell JS, Janszen D, Schlosser PM, Kalisak D, Preston J, Miller FJ. Biologically motivated computational modeling of formaldehyde carcinogenicity in the F344 rat. Toxicol Sci. 2003;75:432–447. doi: 10.1093/toxsci/kfg182. [DOI] [PubMed] [Google Scholar]
- Conolly RB, Kimbell JS, Janszen D, Schlosser PM, Kalisak D, Preston J, Miller FJ. Human respiratory tract cancer risks of inhaled formaldehyde: Dose-response predictions derived from biologically-motivated computational modeling of a combined rodent and human dataset. Toxicol Sci. 2004;82:279–296. doi: 10.1093/toxsci/kfh223. [DOI] [PubMed] [Google Scholar]
- Corvi R, Ahr HJ, Albertini S, Blakey DH, Clerici L, Coecke S, Douglas GR, Gribaldo L, Groten JP, Haase B, Hamernik K, Hartung T, Inoue T, Indans I, Maurici D, Orphanides G, Rembges D, Sansone SA, Snape JR, Toda E, Tong W, van Delft JH, Weis B, Schechtman LM. Meeting report: Validation of toxicogenomics-based test systems: ECVAM-ICCVAM/NICEATM considerations for regulatory use. Environ Health Perspect. 2006;114:420–429. doi: 10.1289/ehp.8247. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Cronin MT. The current status and future applicability of quantitative structure– activity relationships (QSARs) in predicting toxicity. Altern Lab Anim. 2002;30(suppl. 2):81–84. doi: 10.1177/026119290203002S12. [DOI] [PubMed] [Google Scholar]
- Cummings A, Kavlock R. A systems biology approach to developmental toxicology. Reprod Toxicol. 2005;19:281–290. doi: 10.1016/j.reprotox.2004.10.001. [DOI] [PubMed] [Google Scholar]
- Daston GP. Advances in understanding mechanisms of toxicity and implications for risk assessment. Reprod Toxicol. 1997;11:389–396. doi: 10.1016/s0890-6238(96)00153-0. [DOI] [PubMed] [Google Scholar]
- Doe JE, Boobis AR, Blacker A, Dellarco V, Doerrer NG, Franklin C, Goodman JI, Kronenberg JM, Lewis R, Mcconnell EE, Mercier T, Moretto A, Nolan C, Padilla S, Phang W, Solecki R, Tilbury L, van Ravenzwaay B, Wolf DC. A tiered approach to systemic toxicity testing for agricultural chemical safety assessment. Crit Rev Toxicol. 2006;36:37–68. doi: 10.1080/10408440500534370. [DOI] [PubMed] [Google Scholar]
- Ekins S. Systems-ADME/Tox: Resources and network applications. J Pharmacol Toxicol Methods. 2006;53:38–66. doi: 10.1016/j.vascn.2005.05.005. [DOI] [PubMed] [Google Scholar]
- Ekins S, Nikolsky Y, Nikolskaya T. Techniques: Applications of systems biology to absorption, distribution, metabolism, excretion and toxicity. Trends Pharmacol Sci. 2005;26:202–209. doi: 10.1016/j.tips.2005.02.006. [DOI] [PubMed] [Google Scholar]
- el-Masri HA, Thomas RS, Sabados GR, Phillips JK, Constan AA, Benjamin SA, Andersen ME, Mehendale HM, Yang RS. Physiologically based pharmacokinetic/pharmacodynamic modeling of the toxicology interaction between carbon tetrachloride and kepone. Arch Toxicol. 1996;70:704–713. doi: 10.1007/s002040050331. [DOI] [PubMed] [Google Scholar]
- El-Samad H, Kurata H, Doyle JC, Gross CA, Khammash M. Surviving heat shock: Control strategies for robustness and performance. Proc Natl Acad Sci USA. 2005;102:2736–2741. doi: 10.1073/pnas.0403510102. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Epstein MM. Do mouse models of allergic asthma mimic clinical disease? Int Arch Allergy Immunol. 2004;133:84–100. doi: 10.1159/000076131. [DOI] [PubMed] [Google Scholar]
- Eungdamrong NJ, Iyengar R. Computational approaches for modeling regulatory cellular networks. Trends Cell Biol. 2004;14:661–669. doi: 10.1016/j.tcb.2004.10.007. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Feher M, Sourial E, Schmidt JM. A simple model for the prediction of blood-brain partitioning. Int J Pharm. 2000;201:239–247. doi: 10.1016/s0378-5173(00)00422-1. [DOI] [PubMed] [Google Scholar]
- Fernandis AZ, Wenk MR. Membrane lipids as signaling molecules. Curr Opin Lipidol. 2007;18:121–128. doi: 10.1097/MOL.0b013e328082e4d5. [DOI] [PubMed] [Google Scholar]
- Feron VJ, Groten JP, Jonker D, Cassee FR, van Bladeren PJ. Toxicology of chemical mixtures: Challenges for today and the future. Toxicology. 1995;105:415–427. doi: 10.1016/0300-483x(95)03239-c. [DOI] [PubMed] [Google Scholar]
- Fischer HP. Towards quantitative biology: Integration of biological information to elucidate disease pathways and to guide drug discovery. Biotechnol Annu Rev. 2005;11:1–68. doi: 10.1016/S1387-2656(05)11001-1. [DOI] [PubMed] [Google Scholar]
- Food and Drug Administration. Appraisal of the safety of chemicals in foods, drugs and cosmetics. Austin, TX: Association of Food and Drug Officials of the United States; 1959. (Staff of the Division of Pharmacology, Food and Drug Administration, Department of Health, Education and Welfare). [Google Scholar]
- Fouchecourt MO, Beliveau M, Krishnan K. Quantitative structure– pharmacokinetic relationship modeling. Sci Total Environ. 2001;274:125–135. doi: 10.1016/s0048-9697(01)00743-4. [DOI] [PubMed] [Google Scholar]
- Frankos VH, Rodricks JV. Food additives and nutrition supplements. In: Gad SC, editor. Regulatory toxicology. London: Taylor & Francis; 2001. pp. 133–166. [Google Scholar]
- Frasor J, Danes JM, Komm B, Chang KCN, Lyttle CR, Katzenellenbogen BS. Profiling of estrogen up- and down-regulated gene expression in human breast cancer cells: Insights into gene networks and pathways underlying estrogenic control of proliferation and cell phenotype. Endocrinology. 2003;144:4562–4574. doi: 10.1210/en.2003-0567. [DOI] [PubMed] [Google Scholar]
- Gad SC, Chengelis CP. Human pharmaceutical products. In: Gad SC, editor. Regulatory toxicology. London: Taylor & Francis; 2001. pp. 9–69. [Google Scholar]
- Gargas ML, Seybold PG, Andersen ME. Modeling the tissue solubilities and metabolic rate constant (Vmax) of halogenated methanes, ethanes, and ethylenes. Toxicol Lett. 1988;43:235–256. doi: 10.1016/0378-4274(88)90031-8. [DOI] [PubMed] [Google Scholar]
- Gargas ML, Burgess RJ, Voisard DE, Cason GH, Andersen ME. Partition coefficients of low-molecular-weight volatile chemicals in various liquids and tissues. Toxicol Appl Pharmacol. 1989;98:87–99. doi: 10.1016/0041-008x(89)90137-3. [DOI] [PubMed] [Google Scholar]
- Gennari A, van den Berghe C, Casati S, Castell J, Clemedson C, Coecke S, Colombo A, Curren R, Dal Negro G, Goldberg A, Gosmore C, Hartung T, Langezaal I, Lessigiarska I, Maas W, Mangelsdorf I, Parchment R, Prieto P, Sintes JR, Ryan M, Schmuck G, Stitzel K, Stokes W, Vericat JA, Gribaldo L. Strategies to replace in vivo acute systemic toxicity testing. The report and recommendations of ECVAM Workshop 50. Altern Lab Anim. 2004;32:437–459. doi: 10.1177/026119290403200417. [DOI] [PubMed] [Google Scholar]
- Ginsburg GS, Haga SB. Translating genomic biomarkers into clinically useful diagnostics. Expert Rev Mol Diagn. 2006;6:179–191. doi: 10.1586/14737159.6.2.179. [DOI] [PubMed] [Google Scholar]
- Goldberg AM, Hartung T. Protecting more than animals. Sci Am. 2006;294:84–91. doi: 10.1038/scientificamerican0106-84. [DOI] [PubMed] [Google Scholar]
- Gombar VK, Silver IS, Zhao Z. Role of ADME characteristics in drug discovery and their in silico evaluation: In silico screening of chemicals for their metabolic stability. Curr Top Med Chem. 2003;3:1205–1225. doi: 10.2174/1568026033452014. [DOI] [PubMed] [Google Scholar]
- Government Accounting Office. Chemical regulation: Options exist to improve EPA’s ability to assess health risks and manage its chemical review program. Washington, DC: U.S. Government Accounting Office; 2005. (GAO-05-458). Available at http://www.gao.gov/new.items/d05458.pdf. [Google Scholar]
- Grimme S, Steinmetz M, Korth M. How to compute isomerization energies of organic molecules with quantum chemical methods. J Org Chem. 2007;72:2118–2126. doi: 10.1021/jo062446p. [DOI] [PubMed] [Google Scholar]
- Gwinn MR, Vallyathan V. Nanoparticles: Health effects—pros and cons. Environ Health Perspect. 2006;114:1818–1825. doi: 10.1289/ehp.8871. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Hammond SM. Dicing and slicing: The core machinery of the RNA interference pathway. Fed Eur Biochem Soc Lett. 2005;579:5822–5829. doi: 10.1016/j.febslet.2005.08.079. [DOI] [PubMed] [Google Scholar]
- Handschin C, Meyer UA. Regulatory network of lipid-sensing nuclear receptors: Roles for CAR, PXR, LXR, and FXR. Arch Biochem Biophys. 2005;433:387–396. doi: 10.1016/j.abb.2004.08.030. [DOI] [PubMed] [Google Scholar]
- Haney SA, LaPan P, Pan J, Zhang J. High-content screening moves to the front of the line. Drug Discov Today. 2006;11:889–894. doi: 10.1016/j.drudis.2006.08.015. [DOI] [PubMed] [Google Scholar]
- Hannon GJ. RNA interference. Nature. 2002;418:244–251. doi: 10.1038/418244a. [DOI] [PubMed] [Google Scholar]
- Harrington WR, Kim SH, Funk CC, Madak-Erdogan Z, Schiff R, Katzenellenbogen JA, Katzenellenbogen BS. Estrogen dendrimer conjugates that preferentially activate extranuclear, nongenomic versus genomic pathways of estrogen action. Mol Endocrinol. 2006;20:491–502. doi: 10.1210/me.2005-0186. [DOI] [PubMed] [Google Scholar]
- Hattis D, White P, Marmorstein L, Koch P. Uncertainties in pharmacokinetic modeling for perchloroethylene. I. Comparison of model structure, parameters, and predictions for low-dose metabolism rates for models derived by different authors. Risk Anal. 1990;10:449–458. doi: 10.1111/j.1539-6924.1990.tb00528.x. [DOI] [PubMed] [Google Scholar]
- Hillegass JM, Murphy KA, Villano CM, White LA. The impact of aryl hydrocarbon receptor signaling on matrix metabolism: Implications for development and disease. Biol Chem. 2006;387:1159–1173. doi: 10.1515/BC.2006.144. [DOI] [PubMed] [Google Scholar]
- Ho SM, Tang WY, Belmonte de Frausto J, Prins GS. Developmental exposure to estradiol and bisphenol A increases susceptibility to prostate carcinogenesis and epigenetically regulates phosphodiesterase type 4 variant 4. Cancer Res. 2006;66:5624–5632. doi: 10.1158/0008-5472.CAN-06-0516. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Hoffmann A, Levchenko A, Scott ML, Baltimore D. The IkB-NF-kB signaling module: Temporal control and selective gene activation. Science. 2002;298:1241–1245. doi: 10.1126/science.1071914. [DOI] [PubMed] [Google Scholar]
- Hoheisel JD. Microarray technology: Beyond transcript profiling and genotype analysis. Nat Rev Genet. 2006;7:200–210. doi: 10.1038/nrg1809. [DOI] [PubMed] [Google Scholar]
- Hua F, Hautaniemi S, Yokoo R, Lauffenburger DA. Integrated mechanistic and data driven modeling for multivariate analysis of signaling pathways. J R Soc Interface. 2006;3:515–526. doi: 10.1098/rsif.2005.0109. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Huang Q, Raya A, DeJesus P, Chao SH, Quon KC, Caldwell JS, Chanda SK, Izpisua-Belmonte JC, Schultz PG. Identification of p53 regulators by genome-wide functional analysis. Proc Natl Acad Sci USA. 2004;101:3456–3461. doi: 10.1073/pnas.0308562100. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Hubbs AF, Battelli LA, Goldsmith WT, Porter DW, Frazer D, Friend S, Schwegler-Berry D, Mercer RR, Reynolds JS, Grote A, Castranova V, Kullman G, Fedan JS, Dowdy J, Jones WG. Necrosis of nasal and airway epithelium in rats inhaling vapors of artificial butter flavoring. Toxicol Appl Pharmacol. 2002;185:128–135. doi: 10.1006/taap.2002.9525. [DOI] [PubMed] [Google Scholar]
- Iarmarcovai G, Sari-Minodier I, Chaspoul F, Botta C, De Meo M, Orsiere T, Berge-Lefranc JL, Gallice P, Botta A. Risk assessment of welders using analysis of eight metals by ICP-MS in blood and urine and DNA damage evaluation by the comet and micronucleus assays; influence of XRCC1 and XRCC3 polymorphisms. Mutagenesis. 2005;20:425–432. doi: 10.1093/mutage/gei058. [DOI] [PubMed] [Google Scholar]
- IARC. Cobalt in hard metals and cobalt sulfate, gallium arsenide, indium phosphide and vanadium pentoxide. IARC Monogr Eval Carcinog Risks Hum. Vol. 86. Lyon, France: IARC Press; 2006. [PMC free article] [PubMed] [Google Scholar]
- Inglese J. Expanding the HTS paradigm. Drug Discov Today. 2002;7(suppl. 18):S105–S106. doi: 10.1016/s1359-6446(02)02385-1. [DOI] [PubMed] [Google Scholar]
- Inglese J, Auld DS, Jadhav A, Johnson RL, Simeonov A, Yasgar A, Zheng W, Austin CP. Quantitative high-throughput screening: A titration-based approach that efficiently identifies biological activities in large chemical libraries. Proc Natl Acad Sci USA. 2006;103:11473–11478. doi: 10.1073/pnas.0604348103. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Institute of Medicine. Implications of nanotechnology for environmental health research. Washington, DC: National Academies Press; 2005. [PubMed] [Google Scholar]
- Institute of Medicine. The future of drug safety: Promoting and protecting the health of the public. Washington, DC: National Academies Press; 2007. [Google Scholar]
- Integrated Risk Information System (IRIS) Glossary of IRIS terms. Integrated Risk Information System, U.S. Environmental Protection Agency; 2007. Available at http://www.epa.gov/IRIS/help_gloss.htm. [Google Scholar]
- Interagency Coordinating Committee on the Validation of Alternative Methods and National Toxicology Program Interagency Center for the Evaluation of Alternative Toxicological Methods. ICCVAM Guidelines for the nomination and submission of new, revised, and alternative test methods. Research Triangle Park, NC: National Institute of Environmental Health Sciences, National Institutes of Health; 2003. (NIH publication 03-4508). [Google Scholar]
- International Life Sciences Institute Health and Environmental Sciences Institute. Systemic toxicity white paper. Systemic Toxicity Task Force, Technical Committee on Agricultural Chemical Safety Assessment, ILSI Health Sciences Institute; Washington, DC: 2004a. [Google Scholar]
- International Life Sciences Institute Health and Environmental Sciences Institute. Life stages white paper. Life Stages Task Force, Technical Committee on Agricultural Chemical Safety Assessment, ILSI Health Sciences Institute; Washington, DC: 2004b. [Google Scholar]
- International Life Sciences Institute Health and Environmental Sciences Institute. The acquisition and application of absorption, distribution, metabolism, and excretion (ADME) data in agricultural chemical safety assessments. ADME Task Force, Technical Committee on Agricultural Chemical Safety Assessment, ILSI Health Sciences Institute; Washington, DC: 2004c. [Google Scholar]
- Katoh M, Katoh M. Bioinformatics for cancer management in the post-genome era. Technol Cancer Res Treat. 2006;5:169–175. doi: 10.1177/153303460600500208. [DOI] [PubMed] [Google Scholar]
- Kedderis GL, Lipscomb JC. Application of in vitro biotransformation data and pharmacokinetic modeling to risk assessment. Toxicol Ind Health. 2001;17:315–321. doi: 10.1191/0748233701th119oa. [DOI] [PubMed] [Google Scholar]
- Kitano H. International alliance for quantitative modeling in systems biology. Mol Syst Biol. 2005;1:2005.0007. doi: 10.1038/msb4100011. Available at http://www.nature.com/msb/journal/v1/n1/pdf/msb4100011.pdf. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Klaassen CD, Eaton DL. Principles of toxicology. In: Amdur MO, Doull J, Klaassen CD, editors. Casarett and Doull’s toxicology: The basic science of poisons. 4. New York: Pergamon Press; 1991. pp. 12–49. [Google Scholar]
- Kobayashi A, Kang MI, Okawa H, Ohtsuji M, Zenke Y, Chiba T, Igarashi K, Yamamoto M. Oxidative stress sensor Keap1 functions as an adaptor for Cul3-based E3 ligase to regulate proteasomal degradation of Nrf2. Mol Cell Biol. 2004;24:7130–7139. doi: 10.1128/MCB.24.16.7130-7139.2004. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Kobayashi M, Yamamoto M. Nrf2-Keap1 regulation of cellular defense mechanisms against electrophiles and reactive oxygen species. Adv Enzyme Regul. 2006;46:113–140. doi: 10.1016/j.advenzreg.2006.01.007. [DOI] [PubMed] [Google Scholar]
- Kraska RC. Industrial chemicals: Regulation of new and existing chemicals (the Toxic Substances Control Act and similar worldwide chemical control laws) In: Gad SC, editor. Regulatory toxicology. London: Taylor & Francis; 2001. pp. 244–276. [Google Scholar]
- Kreiss K, Gomaa A, Kullman G, Fedan K, Simoes EJ, Enright PL. Clinical bronchiolitis obliterans in workers at a microwave-popcorn plant. N Engl J Med. 2002;347:330–338. doi: 10.1056/NEJMoa020300. [DOI] [PubMed] [Google Scholar]
- Krewski D, Lemyre L, Turner MC, Lee JEC, Dallaire C, Bouchard L, Brand K, Mercier P. Public perception of population health risks in Canada: Health hazards and sources of information. Hum Ecol Risk Assess. 2006;12:626–644. [Google Scholar]
- Kriete A, Eils R. Introducing computational systems biology. In: Kriete A, Eils R, editors. Computational system biology. Boston: Elsevier Academic Press; 2006. pp. 1–14. [Google Scholar]
- Lander ES, Weinberg RA. Genomics: Journey to the center of biology. Science. 2000;287:1777–1782. doi: 10.1126/science.287.5459.1777. [DOI] [PubMed] [Google Scholar]
- Landers JP, Spelsberg TC. New concepts in steroid hormone action: Transcription factors, proto-oncogenes, and the cascade model for steroid regulation of gene expression. Crit Rev Eukaryot Gene Expr. 1992;2:19–63. [PubMed] [Google Scholar]
- Lave LB, Omenn GS. Cost-effectiveness of short-term tests for carcinogenicity. Nature. 1986;324:29–34. doi: 10.1038/324029a0. [DOI] [PubMed] [Google Scholar]
- Lave LB, Ennever FK, Rosenkranz HS, Omenn GS. Information value of the rodent bioassay. Nature. 1988;336:631–633. doi: 10.1038/336631a0. [DOI] [PubMed] [Google Scholar]
- Lee CT, Ylostalo J, Friedman M, Hoyle GW. Gene expression profiling in mouse lung following polymeric hexamethylene diisocyanate exposure. Toxicol Appl Pharmacol. 2005;205:53–64. doi: 10.1016/j.taap.2004.09.015. [DOI] [PubMed] [Google Scholar]
- Lee MY, Dordick JS. High-throughput human metabolism and toxicity analysis. Curr Opin Biotechnol. 2006;17:619–627. doi: 10.1016/j.copbio.2006.09.003. [DOI] [PubMed] [Google Scholar]
- Lieber MM. Towards an understanding of the role of forces in carcinogenesis: A perspective with therapeutic implications. Riv Biol. 2006;99:131–160. [PubMed] [Google Scholar]
- Leiss W, editor. In the chamber of risks: Understanding risk controversies. Montreal: McGill-Queen’s University Press; 2001. [Google Scholar]
- Leroux BG, Leisenring WM, Moolgavkar SH, Faustman EM. A biologically-based dose-response model for developmental toxicology. Risk Anal. 1996;16:449–458. doi: 10.1111/j.1539-6924.1996.tb01092.x. [DOI] [PubMed] [Google Scholar]
- Lewin B, Cassimeris L, Lingappa VR, Plopper G, editors. Cells. Sudbury, MA: Jones and Bartlett; 2007. [Google Scholar]
- Lexchin J. Drug withdrawals from the Canadian market for safety reasons, 1963–2004. Can Med Assoc J. 2005;172:765–767. doi: 10.1503/cmaj.045021. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Li T, Chen W, Chiang JY. PXR induces CYP27A1 and regulates cholesterol metabolism in the intestine. J Lipid Res. 2006;48:373–384. doi: 10.1194/jlr.M600282-JLR200. [DOI] [PubMed] [Google Scholar]
- Lockey J, Mckay R, Barth E, Dahlsten J, Baughman R. Bronchiolitis obliterans in the food flavoring manufacturing industry. Am J Respir Crit Care Med. 2002;165:A461. [Google Scholar]
- Lum L, Yao S, Mozer B, Rovescalli A, Von Kessler D, Nirenberg M, Beachy PA. Identification of Hedgehog pathway components by RNAi in Drosophila cultured cells. Science. 2003;299:2039–2045. doi: 10.1126/science.1081403. [DOI] [PubMed] [Google Scholar]
- Lutz W, Sulkowski WJ. Vagus nerve participates in regulation of the airways: Inflammatory response and hyperreactivity induced by occupational asthmogens. Int J Occup Med Environ Health. 2004;17:417–431. [PubMed] [Google Scholar]
- Lydy M, Belden J, Wheelock C, Hammock B, Denton D. Challenges in regulating pesticide mixtures. Ecol Society. 2004;9:1–6. [Google Scholar]
- Maddox L, Schwartz DA. The pathophysiology of asthma. Annu Rev Med. 2002;53:477–498. doi: 10.1146/annurev.med.53.082901.103921. [DOI] [PubMed] [Google Scholar]
- Maroni P, Bendinelli P, Tiberio L, Rovetta F, Piccoletti R, Schiaffonati L. In vivo heat-shock response in the brain: Signaling pathway and transcription factor activation. Brain Res Mol Brain Res. 2003;119:90–99. doi: 10.1016/j.molbrainres.2003.08.018. [DOI] [PubMed] [Google Scholar]
- Masimirembwa CM, Thompson R, Andersson TB. In vitro high throughput screening of compounds for favorable metabolic properties in drug discovery. Comb Chem High Throughput Screen. 2001;4:245–263. doi: 10.2174/1386207013331101. [DOI] [PubMed] [Google Scholar]
- McKinney JD, Richard A, Waller C, Newman MC, Gerberick F. The practice of structure activity relationships (SAR) in toxicology. Toxicol Sci. 2000;56:8–17. doi: 10.1093/toxsci/56.1.8. [DOI] [PubMed] [Google Scholar]
- McMahon M, Thomas N, Itoh K, Yamamoto M, Hayes JD. Dimerization of substrate adaptors can facilitate cullin-mediated ubiquitylation of proteins by a “tethering” mechanism: A two-site interaction model for the Nrf2–Keap1 complex. J Biol Chem. 2006;281:24756–24768. doi: 10.1074/jbc.M601119200. [DOI] [PubMed] [Google Scholar]
- Meek ME, Bucher JR, Cohen SM, Dellarco V, Hill RN, Lehman-McKeeman LD, Longfellow DG, Pastoor T, Seed J, Patton D. A framework for human relevance analysis of information on carcinogenic modes of action. Crit Rev Toxicol. 2003;33:591–653. doi: 10.1080/713608373. [DOI] [PubMed] [Google Scholar]
- Meister G, Tuschl T. Mechanisms of gene silencing by double-stranded RNA. Nature. 2004;431:343–349. doi: 10.1038/nature02873. [DOI] [PubMed] [Google Scholar]
- Mello CC, Conte D., Jr Revealing the world of RNA interference. Nature. 2004;431:338–342. doi: 10.1038/nature02872. [DOI] [PubMed] [Google Scholar]
- Michiels F, van Es H, van Rompaey L, Merchiers P, Francken B, Pittois K, van der Schueren J, Brys R, Vandersmissen J, Beirinckx F, Herman S, Dokic K, Klaassen H, Narinx E, Hagers A, Laenen W, Piest I, Pavliska H, Rombout Y, Langemeijer E, Ma L, Schipper C, Raeymaeker MD, Schweicher S, Jans M, van Beeck K, Tsang IR, van de Stolpe O, Tomme P, Arts GJ, Donker J. Arrayed adenoviral expression libraries for functional screening. Nat Biotechnol. 2002;20:1154–1157. doi: 10.1038/nbt746. [DOI] [PubMed] [Google Scholar]
- Moolgavkar SH, Luebeck G. Two-event model for carcinogenesis: Biological, mathematical, and statistical considerations. Risk Anal. 1990;10:323–341. doi: 10.1111/j.1539-6924.1990.tb01053.x. [DOI] [PubMed] [Google Scholar]
- Motohashi H, Yamamoto M. Nrf2-Keap1 defines a physiologically important stress response mechanism. Trends Mol Med. 2004;10:549–557. doi: 10.1016/j.molmed.2004.09.003. [DOI] [PubMed] [Google Scholar]
- Nakajima H, Takatsu K. Role of cytokines in allergic airway inflammation. Int Arch Allergy Immunol. 2006;142:265–273. doi: 10.1159/000097357. [DOI] [PubMed] [Google Scholar]
- Nebert DW. Drug-metabolizing enzymes in ligand-modulated transcription. Biochem Pharmacol. 1994;47:25–37. doi: 10.1016/0006-2952(94)90434-0. [DOI] [PubMed] [Google Scholar]
- National Research Council. Drinking water and health. Vol. 1. Washington, DC: National Academy Press; 1977. [Google Scholar]
- National Research Council. Risk assessment in the federal government: Managing the process. Washington, DC: National Academy Press; 1983. [PubMed] [Google Scholar]
- National Research Council. Complex mixtures: Methods for in vivo toxicity testing. Washington, DC: National Academy Press; 1988. [PubMed] [Google Scholar]
- National Research Council. Improving risk communication. Washington, DC: National Academy Press; 1989. [Google Scholar]
- National Research Council. Issues in risk assessment. Washington, DC: National Academy Press; 1993. [Google Scholar]
- National Research Council. Understanding risk: Informing decisions in a democratic society. Washington, DC: National Academy Press; 1996. [Google Scholar]
- National Research Council. Hormonally active agents in the environment. Washington, DC: National Academy Press; 1999. [Google Scholar]
- National Research Council. Scientific frontiers in developmental toxicology and risk assessment. Washington, DC: National Academy Press; 2000. [PubMed] [Google Scholar]
- National Research Council. Toxicity testing for assessment of environmental agents: Interim report. Washington, DC: The National Academies Press; 2006a. [Google Scholar]
- National Research Council. Human biomonitoring for environmental chemicals. Washington, DC: The National Academies Press; 2006b. [Google Scholar]
- National Research Council. Review of the Department of Energy’s Genomics: GTL program. Washington, DC: The National Academies Press; 2006c. [Google Scholar]
- National Toxicology Program. The NTP vision: Toxicology in the 21st century: The role of the National Toxicology Program. National Toxicology Program, National Institute for Environmental Health Sciences; Research Triangle Park, NC: 2004. Available at http://ntp.niehs.nih.gov/ntp/main_pages/NTPVision.pdf. [Google Scholar]
- National Toxicology Program. History of NTP. National Toxicology Program; 2005a. Available at http://ntp-server.niehs.nih.gov. [Google Scholar]
- National Toxicology Program. Report on carcinogens. 11. U.S. Department of Health and Human Services, Public Health Service, National Toxicology Program; 2005b. Available at http://ntp.niehs.nih.gov/ntp/roc/toc11.html. [Google Scholar]
- National Toxicology Program. Current directions and evolving strategies. Research Triangle Park, NC: National Toxicology Program, National Institute of Environmental Health Sciences, National Institutes of Health; 2006. Available at http://ntp.niehs.nih.gov/files/NTP_CurrDir20061.pdf. [Google Scholar]
- Nel A, Xia T, Madler L, Li N. Toxic potential of materials at the nanolevel. Science. 2006;311:622–627. doi: 10.1126/science.1114397. [DOI] [PubMed] [Google Scholar]
- Nordstrand LM, Ringvoll J, Larsen E, Klungland A. Genome instability and DNA damage accumulation in gene-targeted mice. Neuroscience. 2007;145:1309–1317. doi: 10.1016/j.neuroscience.2006.10.059. [DOI] [PubMed] [Google Scholar]
- O’Brien P, Haskins JR. In vitro cytotoxicity assessment. Methods Mol Biol. 2007;356:415–425. doi: 10.1385/1-59745-217-3:415. [DOI] [PubMed] [Google Scholar]
- O’Donoghue SI, Russell RB, Schafferhans A. Three-dimensional structures in target drug discovery and validation. In: Leon D, Markel S, editors. In silico technologies in drug target identification and validation. 6. Boca Raton, FL: CRC Press; 2006. pp. 285–308. [Google Scholar]
- Olsen L, Rydberg P, Rod TH, Ryde U. Prediction of activation energies for hydrogen abstraction by cytochrome p450. J Med Chem. 2006;49:6489–6499. doi: 10.1021/jm060551l. [DOI] [PubMed] [Google Scholar]
- Organization for Economic Cooperation and Development. Guidance document on the validation and international acceptance of new or updated test methods for hazard assessment. Paris: Organisation for Economic Co-operation and Development; 2005. (OECD Series on Testing and Assessment No. 34. ENV/JM/Mono(2005)14). Available at http://appli1.oecd.org/olis/2005doc.nsf/linkto/env-jm-mono(2005)14. [Google Scholar]
- Organization for Economic Cooperation and Development. The OECD: Organization for Economic Co-operation and Development. Organization of Economic Co-operation and Development; 2006. Available at http://www.oecd.org/dataoecd/15/33/34011915.pdf. [Google Scholar]
- Orton RJ, Sturm OE, Vyshemirsky V, Calder M, Gilbert DR, Kolch W. Computational modeling of the receptor-tyrosine-kinase-activated MAPK pathway. Biochem J. 2005;392(pt. 2):249–261. doi: 10.1042/BJ20050908. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Paans AM, Vaalburg W. Positron emission tomography in drug development and drug evaluation. Curr Pharm Des. 2000;6:1583–1591. doi: 10.2174/1381612003398906. [DOI] [PubMed] [Google Scholar]
- Pabst R. Animal models for asthma: Controversial aspects and unsolved problems. Pathobiology. 2002;70:252–254. doi: 10.1159/000070737. [DOI] [PubMed] [Google Scholar]
- Pallardy M, Kerdine S, Lebrec H. Testing strategies in immunotoxicology. Toxicol Lett. 1998;102–103:257–260. doi: 10.1016/s0378-4274(98)00315-4. [DOI] [PubMed] [Google Scholar]
- Pandya RJ, Solomon G, Kinner A, Balmes JR. Diesel exhaust and asthma: Hypotheses and molecular mechanisms of action. Environ Health Perspect. 2002;110(suppl. 1):103–112. doi: 10.1289/ehp.02110s1103. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Park RM, Stayner LT. A search for thresholds and other nonlinearities in the relationship between hexavalent chromium and lung cancer. Risk Anal. 2006;26:79–88. doi: 10.1111/j.1539-6924.2006.00709.x. [DOI] [PubMed] [Google Scholar]
- Pauluhn J. Overview of inhalation exposure techniques: Strengths and weaknesses. Exp Toxicol Pathol. 2005;57(suppl. 1):111–128. doi: 10.1016/j.etp.2005.05.014. [DOI] [PubMed] [Google Scholar]
- Peeters JK, Van der Spek PJ. Growing applications and advancements in microarray technology and analysis tools. Cell Biochem Biophys. 2005;43:149–166. doi: 10.1385/CBB:43:1:149. [DOI] [PubMed] [Google Scholar]
- Potier M, Lakhdar B, Merlet D, Cambar J. Interest and limits of human tissue and cell use in pharmacotoxicology. Cell Biol Toxicol. 1995;11:133–139. doi: 10.1007/BF00756514. [DOI] [PubMed] [Google Scholar]
- Poulin P, Theil FP. Prediction of pharmacokinetics prior to in vivo studies. II. Generic physiologically based pharmacokinetic models of drug disposition. J Pharm Sci. 2002;91:1358–1370. doi: 10.1002/jps.10128. [DOI] [PubMed] [Google Scholar]
- Powell MC, Kanarek MS. Nanomaterial health effects-Part 1: Background and current knowledge. Wisconsin Med J. 2006;105:16–20. [PubMed] [Google Scholar]
- Ptacek T, Sell SM. A tiered approach to comparative genomics. Brief Funct Genomic Proteomic. 2005;4:178–185. doi: 10.1093/bfgp/4.2.178. [DOI] [PubMed] [Google Scholar]
- Rehmann S, Jayson GC. Molecular imaging of antiangiogenic agents. Oncologist. 2005;10:92–103. doi: 10.1634/theoncologist.10-2-92. [DOI] [PubMed] [Google Scholar]
- Reitz RH, Mendrala AL, Corley RA, Quast JF, Gargas ML, Andersen ME, Staats DA, Conolly RB. Estimating the risk of liver cancer associated with human exposures to chloroform using physiologically based pharmacokinetic modeling. Toxicol Appl Pharmacol. 1990;105:443–459. doi: 10.1016/0041-008x(90)90148-n. [DOI] [PubMed] [Google Scholar]
- Renwick AG, Barlow SM, Hertz-Picciotto I, Boobis AR, Dybing E, Edler L, Eisenbrand G, Greig JB, Kleiner J, Lambe J, Muller DJ, Smith MR, Tritscher A, Tuijtelaars S, van den Brandt PA, Walter R, Kroes R. Risk characterization of chemicals in food and diet. Food Chem Toxicol. 2003;41:1211–1271. doi: 10.1016/s0278-6915(03)00064-4. [DOI] [PubMed] [Google Scholar]
- Rieger TR, Morimoto RI, Hatzimanikatis V. Mathematical modeling of the eukaryotic heat-shock response: Dynamics of the hsp70 promoter. Biophys J. 2005;88:1646–1658. doi: 10.1529/biophysj.104.055301. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Roberts SA. High-throughput screening approaches for investigating drug metabolism and pharmacokinetics. Xenobiotica. 2001;31:557–589. doi: 10.1080/00498250110060978. [DOI] [PubMed] [Google Scholar]
- Rochette-Egly C. Nuclear receptors: Integration of multiple signaling pathways through phosphorylation. Cell Signal. 2003;15:355–366. doi: 10.1016/s0898-6568(02)00115-8. [DOI] [PubMed] [Google Scholar]
- Russell WMS, Burch RL. The principles of humane experimental technique. London: Methuen; 1959. [Google Scholar]
- Sachs K, Perez O, Pe’er D, Lauffenburger DA, Nolan GP. Causal protein-signaling networks derived from multiparameter single-cell data. Science. 2005;308:523–529. doi: 10.1126/science.1105809. [DOI] [PubMed] [Google Scholar]
- Santos SD, Verveer PJ, Bastiaens PI. Growth factor-induced MAPK network topology shapes Erk response determining PC-12 cell fate. Nat Cell Biol. 2007;9:324–330. doi: 10.1038/ncb1543. [DOI] [PubMed] [Google Scholar]
- Sarangapani R, Teeguarden J, Plotzke KP, McKim JM, Jr, Andersen ME. Dose-response modeling of cytochrome P450 induction in rats by octamethylcyclotetrasiloxane. Toxicol Sci. 2002;67:159–172. doi: 10.1093/toxsci/67.2.159. [DOI] [PubMed] [Google Scholar]
- Schilter B, Marin-Kuan M, Delatour T, Nestler S, Mantle P, Cavin C. Ochratoxin A: Potential epigenetic mechanisms of toxicity and carcinogenicity. Food Addit Contam. 2006;22(suppl. 1):88–93. doi: 10.1080/02652030500309319. [DOI] [PubMed] [Google Scholar]
- Schleger C, Platz SJ, Deschl U. Development of an in vitro model for vascular injury with human endothelial cells. ALTEX. 2004;21(suppl. 3):12–19. [PubMed] [Google Scholar]
- Schultz TW, Seward JR. Health effects related structure–toxicity relationships: A paradigm for the first decade of the new millennium. Sci Total Environ. 2000;249:73–84. doi: 10.1016/s0048-9697(99)00512-4. [DOI] [PubMed] [Google Scholar]
- Schultz TW, Sinks GD, Bearden AP. QSAR in aquatic toxicology: A mechanism of action approach comparing toxic potency to Pimephales promelas, Tetrahymena pyriformis, and Vibrio fischeri. In: Devillers J, editor. Comparative QSAR. London: Taylor & Francis; 1998. pp. 51–110. [Google Scholar]
- Sergeant A. Ecological risk assessment: History and fundamentals. In: Paustenbach DJ, editor. Human and ecological risk assessment: Theory and practice. New York: John Wiley and Sons; 2002. pp. 369–442. [Google Scholar]
- Sheikh MS, Hollander MC, Fornace AJ, Jr. Role of Gadd45 in apoptosis. Biochem Pharmacol. 2000;59:43–45. doi: 10.1016/s0006-2952(99)00291-9. [DOI] [PubMed] [Google Scholar]
- Shi MM, Bleavins MR, de la Iglesia FA. Technologies for detecting genetic polymorphisms in pharmacogenomics. Mol Diagn. 1999;4:343–351. doi: 10.1016/s1084-8592(99)80011-3. [DOI] [PubMed] [Google Scholar]
- Simon-Hettich B, Rothfuss A, Steger-Hartmann T. Use of computer-assisted prediction of toxic effects of chemical substances. Toxicology. 2006;224:156–162. doi: 10.1016/j.tox.2006.04.032. [DOI] [PubMed] [Google Scholar]
- Singh KP, DuMond JW., Jr Genetic and epigenetic changes induced by chronic low dose exposure to arsenic of mouse testicular Leydig cells. Int J Oncol. 2007;30:253–260. [PubMed] [Google Scholar]
- Slikker W, Jr, Andersen ME, Bogdanffy MS, Bus JS, Cohen SD, Conolly RB, David RM, Doerrer NG, Dorman DC, Gaylor DW, Hattis D, Rogers JM, Setzer RW, Swenberg JA, Wallace K. Dose-dependent transitions in mechanisms of toxicity: Case studies. Toxicol Appl Pharmacol. 2004a;201:226–294. doi: 10.1016/j.taap.2004.06.027. [DOI] [PubMed] [Google Scholar]
- Slikker W, Jr, Andersen ME, Bogdanffy MS, Bus JS, Cohen SD, Conolly RB, David RM, Doerrer NG, Dorman DC, Gaylor DW, Hattis D, Rogers JM, Woodrow Setzer R, Swenberg JA, Wallace K. Dose-dependent transitions in mechanisms of toxicity. Toxicol Appl Pharmacol. 2004b;201:203–225. doi: 10.1016/j.taap.2004.06.019. [DOI] [PubMed] [Google Scholar]
- Slikker W, Xu Z, Wang C. Application of a systems biology approach to developmental neurotoxicology. Reprod Toxicol. 2005;19:305–319. doi: 10.1016/j.reprotox.2004.10.003. [DOI] [PubMed] [Google Scholar]
- Snitkin ES, Gustafson AM, Mellor J, Wu J, DeLisi C. Comparative assessment of performance and genome dependence among phylogenetic profiling methods. BMC Bioinformatics. 2006;7:420. doi: 10.1186/1471-2105-7-420. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Soffers AE, Boersma MG, Vaes WH, Vervoort J, Tyrakowska B, Hermens JL, Rietjens IM. Computer-modeling-based QSARs for analyzing experimental data on biotransformation and toxicity. Toxicol In Vitro. 2001;15:539–551. doi: 10.1016/s0887-2333(01)00060-1. [DOI] [PubMed] [Google Scholar]
- Spielmann H. Predicting the risk of developmental toxicity from in vitro assays. Toxicol Appl Pharmacol. 2005;207(suppl. 2):375–380. doi: 10.1016/j.taap.2005.01.049. [DOI] [PubMed] [Google Scholar]
- Spielmann H, Liebsch M. Lessons learned from validation of in vitro toxicity test: From failure to acceptance into regulatory practice. Toxicol In Vitro. 2001;15:585–590. doi: 10.1016/s0887-2333(01)00070-4. [DOI] [PubMed] [Google Scholar]
- Stavanja MS, Ayres PH, Meckley DR, Bombick ER, Borgerding MF, Morton MJ, Garner CD, Pence DH, Swauger JE. Safety assessment of high fructose corn syrup (HFCS) as an ingredient added to cigarette tobacco. Exp Toxicol Pathol. 2006;57:267–281. doi: 10.1016/j.etp.2005.10.003. [DOI] [PubMed] [Google Scholar]
- Stephens SM, Rung J. Advances in systems biology: Measurement, modeling and representation. Curr Opin Drug Discov Devel. 2006;9:240–250. [PubMed] [Google Scholar]
- Subramaniam RP, Crump KS, Chen C, White P, Van Landingham C, Fox JF, Schlosser P, Covington TR, DeVoney D, Vandenberg JJ, Preuss P, Whalan J. The role of mutagenicity in describing formaldehyde-induced carcinogenicity: Possible inferences using the CIIT model; Presented at the Society of Risk Analysis Annual Meeting; December 3–6; Baltimore, MD. 2006. [Google Scholar]
- Subramanya S, Mensa-Wilmot K. Regulated cleavage of intracellular glycosylphosphatidylinositol in a trypanosome: Peroxisome-to-endoplasmic reticulum translocation of a phospholipase C. Fed Eur Biochem Soc J. 2006;273:2110–2126. doi: 10.1111/j.1742-4658.2006.05225.x. [DOI] [PubMed] [Google Scholar]
- Suemori H. Establishment and therapeutic use of human embryonic stem cell lines. Hum Cell. 2006;19:65–70. doi: 10.1111/j.1749-0774.2006.00011.x. [DOI] [PubMed] [Google Scholar]
- Suzuki N, Higashiguchi A, Hasegawa Y, Matsumoto H, Oie S, Orikawa K, Ezawa S, Susumu N, Miyashita K, Aoki D. Loss of integrin alpha3 expression is associated with acquisition of invasive potential by ovarian clear cell adenocarcinoma cells. Hum Cell. 2005;8:147–155. doi: 10.1111/j.1749-0774.2005.tb00005.x. [DOI] [PubMed] [Google Scholar]
- Teuschler L, Klaunig J, Carney E, Chambers J, Conolly R, Gennings C, Giesy J, Hertzberg R, Klaassen C, Kode R, Paustenbach D, Yang R. Support of science-based decisions concerning the evaluation of the toxicology of mixtures: A new beginning. Atlanta, GA: Society of Toxicology-sponsored meeting: Contemporary concepts in toxicology; 2005. (Charting the future: Building the scientific foundation for mixtures joint toxicity and risk assessment). Available at http://www.toxicology.org/ai/meet/MixturesWhitePapers.doc. [DOI] [PubMed] [Google Scholar]
- Theil FP, Guentert TW, Haddad S, Poulin P. Utility of physiologically based pharmacokinetic models to drug development and rational drug discovery candidate selection. Toxicol Lett. 2003;138:29–49. doi: 10.1016/s0378-4274(02)00374-0. [DOI] [PubMed] [Google Scholar]
- Thummel CS. Ecdysone-regulated puff genes 2000. Insect Biochem Mol Biol. 2002;32:113–120. doi: 10.1016/s0965-1748(01)00112-6. [DOI] [PubMed] [Google Scholar]
- Timsit YE, Negishi M. CAR and PXR: The xenobiotic-sensing receptors. Steroids. 2006;72:231–246. doi: 10.1016/j.steroids.2006.12.006. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Tong W, Welsh WJ, Shi L, Fang H, Perkins R. Structure–activity relationship approaches and applications. Environ Toxicol Chem. 2003;22:1680–1695. doi: 10.1897/01-198. [DOI] [PubMed] [Google Scholar]
- U.S. Environmental Protection Agency. Guidelines for developmental toxicity risk assessment. Washington, DC: Risk Assessment Forum, U.S. Environmental Protection Agency; 1991. (EPA/600/FR-91/001). Available at http://www.epa.gov/NCEA/raf/pdfs/devtox.pdf. [Google Scholar]
- U.S. Environmental Protection Agency. Guidelines for reproductive toxicity risk assessment. Washington, DC: Risk Assessment Forum, U.S. Environmental Protection Agency; 1996. (EPA/630/R-96/009). Available at http://www.epa.gov/ncea/raf/pdfs/repro51.pdf. [Google Scholar]
- U.S. Environmental Protection Agency. Guidelines for ecological risk assessment. Washington, DC: U.S. Environmental Protection Agency, Risk Assessment Forum; 1998a. (EPA/630/R-95/002F). Available at oaspub.epa.gov/eims/eimscomm.getfile?p_download_id=36512. [Google Scholar]
- U.S. Environmental Protection Agency. Guidelines for neurotoxicity risk assessment. Washington, DC: Risk Assessment Forum, U.S. Environmental Protection Agency; 1998b. (EPA/630/R-95/001F). Available at http://www.epa.gov/ncea/raf/pdfs/neurotox.pdf. [Google Scholar]
- U.S. Environmental Protection Agency. Health effects test guidelines: OPPTS 870.4300 combined chronic toxicity/carcinogenicity. Washington, DC: Office of Prevention, Pesticides and Toxic Substances, U.S. Environmental Protection Agency; 1998c. (EPA 712-C-98-212). Available at http://www.epa.gov/opptsfrs/publications/OPPTS_Harmonized/870_Health_Effects_Test_Guidelines/Series/870-4300.pdf. [Google Scholar]
- U.S. Environmental Protection Agency. A review of the reference dose and reference concentration processes. Washington, DC: U.S. Environmental Protection Agency, Risk Assessment Forum; 2002. (Final report EPA/630/P-02/002F). Available at http://www.epa.gov/IRIS/RFD_FINAL%5B1%5D.pdf. [Google Scholar]
- U.S. Environmental Protection Agency. EPA commemorates its history and celebrates its 35th anniversary. U.S. Environmental Protection Agency; 2005a. Available at http://www.epa.gov/history/ [Google Scholar]
- U.S. Environmental Protection Agency. Food Quality Protection Act (FQPA) of 1996. U.S. Environmental Protection Agency, Office of Pesticides; 2005b. Available at http://www.epa.gov/opp00001/regulating/laws/fqpa/index.htm/ [Google Scholar]
- U.S. Environmental Protection Agency. Guidelines for carcinogen risk assessment. Washington, DC: Risk Assessment Forum, U.S. Environmental Protection Agency; 2005c. (EPA/630/P-03/001F). Available at http://www.epa.gov/raf/publications/pdfs/CANCER_GUIDELINES_FINAL_3-25-05.PDF. [Google Scholar]
- U.S. Environmental Protection Agency. Highlights of the Food Quality Protection Act of 1996. Office of Pesticides, U.S. Environmental Protection Agency; 2006a. Available at http://www.epa.gov/pesticides/regulating/laws/fqpa/fqpahigh.htm. [Google Scholar]
- U.S. Environmental Protection Agency. Science and research budgets for the US Environmental Protection Agency for fiscal year 2007; An advisory report by the Science Advisory Board. Washington, DC: U.S. Environmental Protection Agency; 2006b. (EPA-SAB-ADV-06-003). Available at http://yosemite.epa.gov/sab/sabproduct.nsf/36a1ca3f683ae57a85256ce9006a32d0/0EDAAECA1096A5B0852571450072E33E/$File/sab-adv-06-003.pdf. [Google Scholar]
- U.S. Environmental Protection Agency. Pesticides: Science and policy. Office of Pesticides, U.S. Environmental Protection Agency; 2006c. Available at http://www.epa.gov/pesticides/science/index.htm. [Google Scholar]
- U.S. Environmental Protection Agency. Comments on EPA’s strategic research directions and research budget for FY 2008, An advisory report of the US Environmental Protection Agency Science Advisory Board. Washington DC: U.S. Environmental Protection Agency; 2007. (EPA-SAB-ADV-07-004). Available at http://yosemite.epa.gov/sab/sab/product.nsf/997517EFA5FC48798525729F0073B4D4/$File/sab-07-004.pdf. [Google Scholar]
- van den Broek LA, Lazaro E, Zylicz Z, Fennis PJ, Missler FA, Lelieveld P, Garzotto M, Wagener DJ, Ballesta JP, Ottenheijm HC. Lipophilic analogues of sparsomycin as strong inhibitors of protein synthesis and tumor growth: A structure–activity relationship study. J Med Chem. 1989;32:2002–2015. doi: 10.1021/jm00128a051. [DOI] [PubMed] [Google Scholar]
- Van den Berg M, Birnbaum L, Bosveld AT, Brunstrom B, Cook P, Feeley M, Giesy JP, Hanberg A, Hasegawa R, Kennedy SW, Kubiak T, Larsen JC, van Leeuwen FX, Liem AK, Nolt C, Peterson RE, Poellinger L, Safe S, Schrenk D, Tillitt D, Tysklind M, Younes M, Waern F, Zacharewski T. Toxic equivalency factors (TEFs) for PCBs, PCDDs, PCDFs for humans and wildlife. Environ Health Perspect. 1998;106:775–792. doi: 10.1289/ehp.98106775. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Van Wuytswinkel O, Reiser V, Siderius M, Kelders MC, Ammerer G, Ruis H, Mager WH. Response of Saccharomyces cerevisiae to severe osmotic stress: Evidence for a novel activation mechanism of the HOG MAP kinase pathway. Mol Microbiol. 2000;37:382–397. doi: 10.1046/j.1365-2958.2000.02002.x. [DOI] [PubMed] [Google Scholar]
- Vedani A. Replacing animal testing by virtual experiments: A challenge in computational biology. Chimia. 1999;53:227–228. [Google Scholar]
- Volarath P, Wang H, Fu H, Harrison R. Knowledge-based algorithms for chemical structure and property analysis. Conf Proc IEEE Eng Med Biol Soc. 2004;4:3011–3014. doi: 10.1109/IEMBS.2004.1403853. [DOI] [PubMed] [Google Scholar]
- Walker JD, Enache M, Dearden JC. Quantitative cationic-activity relationships for predicting toxicity of metals. Environ Toxicol Chem. 2003a;22:1916–1935. doi: 10.1897/02-568. [DOI] [PubMed] [Google Scholar]
- Walker JD, Jaworska J, Comber MH, Schultz TW, Dearden JC. Guidelines for developing and using quantitative structure–activity relationships. Environ Toxicol Chem. 2003b;22:1653–1665. doi: 10.1897/01-627. [DOI] [PubMed] [Google Scholar]
- Walker JD, editor. Quantitative structure–activity relationships for pollution prevention, toxicity screening, risk assessment, and web applications (QSAR II). Pensacola, FL: SETAC Press; 2004. [Google Scholar]
- Wang XJ, Hayes JD, Wolf CR. Generation of a stable antioxidant response element-driven reporter gene cell line and its use to show redox-dependent activation of nrf2 by cancer chemotherapeutic agents. Cancer Res. 2006a;66:10983–10994. doi: 10.1158/0008-5472.CAN-06-2298. [DOI] [PubMed] [Google Scholar]
- Wang SL, Lan FH, Zhuang YP, Li HZ, Huang LH, Zheng DZ, Zeng J, Dong LH, Zhu ZY, Fu JL. Microarray analysis of gene-expression profile in hepatocellular carcinoma cell, BEL-7402, with stable suppression of hLRH-1 via a DNA vector-based RNA interference. Yi Chuan Xue Bao. 2006b;33:881–891. doi: 10.1016/S0379-4172(06)60122-4. [DOI] [PubMed] [Google Scholar]
- Waring JF, Ulrich RG. The impact of genomics based technologies on drug safety evaluation. Annu Rev Pharmacol Toxicol. 2000;40:335–352. doi: 10.1146/annurev.pharmtox.40.1.335. [DOI] [PubMed] [Google Scholar]
- Watanabe PG, Schumann AM, Reitz RH. Toxicokinetics in the evaluation of toxicity data. Regul Toxicol Pharmacol. 1988;8:408–413. doi: 10.1016/0273-2300(88)90039-6. [DOI] [PubMed] [Google Scholar]
- Waxman DJ. P450 gene induction by structurally diverse xenochemicals: Central role of nuclear receptors CAR, PXR, and PPAR. Arch Biochem Biophys. 1999;369:11–23. doi: 10.1006/abbi.1999.1351. [DOI] [PubMed] [Google Scholar]
- Weis BK, Balshaw D, Barr JR, Brown D, Ellisman M, Lioy P, Omenn G, Potter JD, Smith MT, Sohn L, Suk WA, Sumner S, Swenberg J, Walt DR, Watkins S, Thompson C, Wilson SH. Personalized exposure assessment: Promising approaches for human environmental health research. Environ Health Perspect. 2005;113:840–848. doi: 10.1289/ehp.7651. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Westerheide SD, Morimoto RI. Heat shock response modulators as therapeutic tools for diseases of protein conformation. J Biol Chem. 2005;280:33097–33100. doi: 10.1074/jbc.R500010200. [DOI] [PubMed] [Google Scholar]
- Woodruff TJ, Axelrad DA, Kyle AD, Nweke O, Miller GG, Hurley BJ. Trends in environmentally related childhood illnesses. Pediatrics. 2004;113(suppl. 4):1133–1140. [PubMed] [Google Scholar]
- Xiao GG, Wang M, Li N, Loo JA, Nel AE. Use of proteomics to demonstrate a hierarchical oxidative stress response to diesel exhaust particle chemicals in a macrophage cell line. J Biol Chem. 2003;278:50781–50790. doi: 10.1074/jbc.M306423200. [DOI] [PubMed] [Google Scholar]
- Yokota F, Gray G, Hammitt JK, Thompson KM. Tiered chemical testing: A value of information approach. Risk Anal. 2004;24:1625–1639. doi: 10.1111/j.0272-4332.2004.00555.x. [DOI] [PubMed] [Google Scholar]
- Zangar RC, Varnum SM, Bollinger N. Studying cellular processes and detecting disease with protein microarrays. Drug Metab Rev. 2005;37:473–487. doi: 10.1080/03602530500205309. [DOI] [PubMed] [Google Scholar]
- Zhang DD. Mechanistic studies of the Nrf2-Keap1 signaling pathway. Drug Metab Rev. 2006;38:769–789. doi: 10.1080/03602530600971974. [DOI] [PubMed] [Google Scholar]