Toxicological Sciences. 2009 Jan 23;108(1):19–21. doi: 10.1093/toxsci/kfp008

Pragmatic Challenges for the Vision of Toxicity Testing in the 21st Century in a Regulatory Context: Another Ames Test? …or a New Edition of “the Red Book”?

Bette Meek and John Doull
PMCID: PMC2644399  PMID: 19168570

Most toxicologists were pleased when the National Research Council appointed a committee in 2004 to review established methodologies and develop a long-range vision and strategy for toxicity testing in the future. This committee reviewed reports from the U.S. Environmental Protection Agency (EPA) and other sources and issued an interim report in 2006 entitled “Toxicity Testing for Assessment of Environmental Agents” (NRC, 2006). This report distinguished general toxicity tests from those designed to evaluate specific health effects and classified such tests as battery, tiered, or tailored depending on the approach. It also reviewed the use of human data, alternative approaches, and emerging technologies. Three chapters in this report included cogent committee observations, and these sections plus the summarized information in boxes, tables, and the appendix make this soft-cover report a valuable reference companion for the subsequent hard-cover report. One of the many observations in this report is that toxicity testing protocols never die but, unlike old soldiers who fade away, “grow like Topsy” (a reference to the character Topsy in Harriet Beecher Stowe's Uncle Tom's Cabin; to “grow like Topsy” is to grow wild, with neither plan, structure, nor direction) in response to new perceived or real safety concerns. This approach results in a cookbook/checklist of protocols which is increasingly difficult to apply efficiently in the testing of new agents and is totally inadequate to deal with the substantial backlog of untested existing substances prioritized for consideration in evolving regulatory mandates.

The final report from this committee, “Toxicity Testing in the 21st Century: A Vision and a Strategy,” was issued in 2007 (NRC, 2007a). One of the most important contributions of this new strategy is that it attempts to integrate exciting developments such as those in toxicogenomics and other approaches (NRC, 2007b) to increase the efficiency and relevance of toxicity testing to risk assessment. The advocated use of human cells or tissues has the potential to eliminate the need for interspecies extrapolation, to increase efficiencies in testing, and to reduce the use of animals. An obvious criticism of this approach is that it will not work in complex systems such as the central nervous system, where any pathway perturbation observed in animals or in human cells or systems may be many steps away from the site of damage. And while the use of information from cells or tissues (in vitro) to predict effects in the whole organism (in vivo) presents challenges, it is premature to conclude that they cannot be addressed in the implementation of the vision and strategy in this report. Rather, what significantly undermines both the content and the likely impact of the report is the lack of meaningful consideration of immediate regulatory challenges and the associated advances and opportunities. Greater attention by the committee to understanding these pressures and advances might have led to efficient and pragmatic short-term bridging strategies that would increase the likelihood of success in long-term advancement. It would also have reduced the potential of alienating those whom it is most trying to influence.

While it is recognized that the principal objective of the committee exercise was development of a long-range vision and strategy, these additional aspects would seem to be critical to the stated objectives of the 2009 follow-up paper “Toxicity Testing in the 21st Century: Bringing the Vision to Life” by Andersen and Krewski (2009). Indeed, this document is intended to “initiate a dialog to identify challenges in implementing the vision and address obstacles to change.” Our comments, then, address two principal concerns: the first is that the strategy does not adequately distinguish between effects and adverse effects in a context with which the toxicological and risk assessment communities are currently familiar and the second is that understanding and consideration of the role of existing developments and barriers in regulatory risk assessment is inadequate to ensure meaningful uptake of the recommendations.

All biological effects are the result of an interaction between an agent and a target, and this interaction is defined by the exposure (dose and time). The agent is defined by the effects it can produce following single or repeated exposures, and the target by its susceptibility to these effects. All of these effects result from the action of the agent on the target (dynamics) or from the action of the target on the agent (kinetics), and the rate-limiting reactions (key events) can occur in either pathway (Rozman and Doull, 2001). The goal of the strategy proposed in this report is to use high-throughput testing to detect early pathway perturbations that disrupt normal function in the dynamic pathway. Agents such as dioxin and asbestos, for which the key events occur in the kinetic pathway, will require a different approach. The report also treats these initial perturbations as predictors of adversity, but as shown in Figure 2 of Andersen and Krewski (2009), adaptation can reverse such changes if they do not exceed homeostatic limits. Thus these initial perturbations are not necessarily adverse. Most agents exhibit more than one effect with increasing exposure. These effects generally have different mechanisms or modes of action and would be expected to cause perturbations in several different pathways. Our concern is that agents will produce multiple perturbations of dynamic pathways, and the testing strategy proposed in this report needs a clearly defined approach to categorize these effects as beneficial, adverse, or irrelevant (normal variation) in the context of existing approaches in order to achieve credibility as a risk assessment tool with the regulatory community.
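As a purely illustrative aside (our sketch, not part of the committee's proposal), the role of dose and time in defining exposure can be expressed through the classic Haber relationship discussed by Rozman and Doull (2001), under which, within restrictive conditions, a given toxic effect corresponds to a constant product of exposure concentration and duration:

\[ c \times t = k \]

where \(c\) is the exposure concentration, \(t\) is the exposure duration, and \(k\) is a constant characteristic of the effect; an empirical generalization often fitted to data, \(c^{n} \times t = k\) with \(n > 0\), relaxes the assumption that concentration and time contribute equally. Whether a perturbation observed under such exposure conditions is adverse remains a separate question, depending on whether homeostatic limits are exceeded.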

In considering the prerequisites for establishing the credibility of the new testing strategy in Chapter 6 of the 2007 report, the committee concluded that validation and adversity were critical issues. Validation is considered in Chapter 5; however, adversity is addressed only to the extent that “additional research” is advocated to link effects to apical responses in animals. Part of the difficulty in addressing this aspect may be a poor fit with the four elements that frequently define adversity in regulatory guidelines (pathologic lesions, functional impairment, decreased susceptibility, and biochemical change), which were suggested in 1980 and are overdue for re-evaluation and updating. A pragmatic and seemingly essential first step in addressing this re-evaluation of adversity would be a recommendation to relate early perturbations to apical endpoints in frameworks designed to systematically address consideration of key events in modes of action and their subsequent implications for dose-response in risk assessment (see, e.g., Meek, 2008). This would be instrumental in advancing common understanding, in both the research and risk assessment communities, of the potential appropriate application of data on early events in a toxicity pathway. Increasing experience in this context could provide the necessary basis for revisiting regulatory guidelines.

During our presentation to the NRC committee in 2005, we also addressed the importance of linking the vision of the committee to other ongoing activities in regulatory risk assessment, the need to address critical challenges in moving the regulatory community towards the use of this approach, and the need to better balance the focus on hazard with that on exposure. In fact, the ultimate performance indicator under progressive chemicals legislation currently is not testing and assessment but effective and efficient management of risk. Indeed, delay incurred by redesign of toxicity testing is inconsistent with current regulatory objectives worldwide to be more proactive in this area. A strategy that more meaningfully addresses these regulatory pressures for informed management over the short term, drawing maximally on existing toxicological data as part of a longer-term effort to develop more risk-based and efficient testing strategies, likely has much greater potential for meaningful impact.

In particular, there has been no attempt to understand and/or integrate pragmatic developments in several jurisdictions (in particular, Canada and Europe) to address progressive regulatory requirements to efficiently consider much larger numbers of chemical substances. These include tools developed to identify priorities from amongst the 23,000 compounds included on the Domestic Substances List under the Canadian Environmental Protection Act (Meek and Armstrong, 2007) and the intelligent or integrated hierarchical testing strategies being developed in Europe for implementation of the legislation on the Registration, Evaluation, and Authorization of Chemical Substances (REACH) (Van Leeuwen et al., 2007). Objectives of initiatives under these programs relevant to the content of Andersen and Krewski (2009) include drawing maximally upon existing data on toxicity as a basis to increase efficiency. The former also considered prioritization on the basis of much simpler and more discerning data and tools for the potentially much more influential component of risk assessment, namely exposure estimation. And while the predictive capacity of current computational technologies such as (quantitative) structure-activity relationship analysis (including the threshold of toxicological concern) (Renwick et al., 2003) is currently limited, owing principally to the nature of available toxicological data, their meaningful consideration has important implications for the design of future toxicity testing strategies, including a focus on coverage of “chemical space” rather than individual substances as a critical criterion to increase efficiency, and a focus on in vitro testing strategies for particular modes of action for specific endpoints. These approaches also require limited new resources and promote more effective and efficient use of existing data as a basis to contribute meaningfully to early risk management.

Also lacking is any meaningful strategy to address competing science policy pressures to adopt simplified “default” approaches based on existing though limited toxicological data, as an alternative to developing more relevant and informative testing strategies. Relevant to this aspect are recommendations included in a recent report, also from the NRC, entitled “Science and Decisions: Advancing Risk Assessment” (NRC, 2008). It is recommended therein that regulatory agencies such as EPA should work toward the development of explicitly stated defaults to take the place of implicit defaults and, further, that clear, general standards be established for the level of evidence needed to justify the use of alternative assumptions in place of defaults. While transparency in the consideration and weighting of various options to estimate risk is essential, the existing bias toward the use of defaults, regardless of their (often limited) comparative basis, adversely affects the incentive to develop more relevant and accurate methodology for testing and assessment. This bias results, in large part, from the seemingly greater onus to justify deviation from defaults, based on the principally implicit, and potentially erroneous, science policy consideration that defaults are always protective.

It seems to us, then, that the most significant contribution of the report on Toxicity Testing in the 21st Century is to provide a powerful new approach to detect and characterize biological effects (e.g., “a new Ames test”?). Lacking, however, and likely to significantly undermine its impact, is consideration of a short-term strategy to transition from existing approaches in toxicity testing and to take into account evolving regulatory pressures and the associated pragmatic recent developments that more efficiently and effectively focus effort based on existing data (e.g., “a new edition of the Red Book”?). The strategy also fails to address what is likely the greatest barrier to its implementation, namely the continued bias toward adoption of relatively uninformed default approaches, based on implicit (and potentially erroneous) science policy judgments. Without this understanding and focus, our answer to the question posed in the title of this paper is that the proposed strategy is more like a new Ames test than a new edition of the “Red Book.”

References

1. Andersen ME, Krewski D. Toxicity testing in the 21st century: bringing the vision to life. Toxicol. Sci. 2009;107:324–330. doi: 10.1093/toxsci/kfn255.
2. Meek ME. Recent developments in frameworks to consider human relevance of hypothesized modes of action for tumours in animals. Environ. Mol. Mutagen. 2008;49:110–116. doi: 10.1002/em.20369.
3. Meek ME, Armstrong VC. The assessment and management of industrial chemicals in Canada. In: Van Leeuwen K, Vermeire T, editors. Risk Assessment of Chemicals. Dordrecht, the Netherlands: Kluwer Academic Publishers; 2007. pp. 591–615.
4. NRC (National Research Council). Toxicity Testing for Assessment of Environmental Agents. Washington, DC: National Academy Press; 2006.
5. NRC (National Research Council). Toxicity Testing in the 21st Century: A Vision and a Strategy. Washington, DC: National Academy Press; 2007a.
6. NRC (National Research Council). Applications of Toxicogenomic Technologies to Predictive Toxicology and Risk Assessment. Washington, DC: National Academy Press; 2007b.
7. NRC (National Research Council). Science and Decisions: Advancing Risk Assessment. Washington, DC: National Academy Press; 2008.
8. Renwick AG, Barlow SM, Hertz-Picciotto I, Boobis AR, Dybing E, Edler L, Eisenbrand G, Grieg JB, Kleiner J, Lambe J, et al. Risk characterization of chemicals in food and diet. Food Chem. Toxicol. 2003;41:1211–1271. doi: 10.1016/s0278-6915(03)00064-4.
9. Rozman KK, Doull J. The role of time as a quantifiable variable of toxicity and the experimental conditions when Haber's c x t product can be observed: implications for therapeutics. J. Pharmacol. Exp. Ther. 2001;296:663–668.
10. Van Leeuwen CJ, Patlewicz GY, Worth AP. Intelligent testing strategies. In: Van Leeuwen K, Vermeire T, editors. Risk Assessment of Chemicals. Dordrecht, the Netherlands: Kluwer Academic Publishers; 2007. pp. 467–509.
