Frontiers in Pharmacology. 2010 Apr 23;1:3. doi: 10.3389/fphar.2010.00003

Predictive Toxicity: Grand Challenges

Olavi Pelkonen
PMCID: PMC3112333  PMID: 21713120

Chemicals, be they exogenous or endogenous, affect organisms. Sometimes these effects are considered beneficial, for example when pharmaceuticals are used to treat diseases or their symptoms. However, in most cases the effects are indifferent, unintended, harmful or outright toxic, even life-threatening. Preventing such effects is the best course of action, but this requires appropriate knowledge to assess the hazard and risk that a given chemical presents to a given organism. The best knowledge comes from an organism that has been exposed to the chemical in question at doses at which its effects can be observed and evaluated. However, in most cases this knowledge is gained rather “late”, i.e., from situations in which an organism has already been exposed to a chemical and experienced its effects. Consequently, the most important challenge in toxicology is to develop tools and approaches that predict the toxicity potential of chemicals in an adequately reliable manner before humans, or any other living organisms, have been exposed to them.

Realizing this goal requires scientists with versatile skills and from different backgrounds, using a variety of tools and approaches. Perhaps the most urgent task is to refine, reduce, and replace testing systems based on mammalian species with submammalian or in vitro systems. The current paradigm of toxicity testing is heavily dependent on animal tests that were developed decades ago. Regulatory toxicology, at least in the EU, is now challenged by the REACH legislation, which requires the testing of thousands of chemicals within a few years. This is prohibitively expensive and judged by some to be impossible.

It is clear that robust and reliable in vitro testing systems will eventually be developed to support toxicity prediction and risk assessment. However, this task requires a huge amount of basic research. We have to identify the rate-limiting steps in the sequence of events leading to manifest toxicity, and these steps should be measurable in a suitable in vitro system that contains the appropriate biological components. The test outcome should relate directly to what happens in vivo; for example, a toxicity end-point such as cytotoxicity in a suitable biological component, such as hepatocytes in culture, should predict the potential for in vivo hepatotoxicity. Toxicity testing always involves validation, and even if validation is not very “exciting” scientifically, it is perhaps the most important task from a practical standpoint, because the results of toxicity testing benefit society at large.
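
To make this concrete, the following minimal sketch (in Python, not taken from this article) shows the kind of summary statistic such an in vitro test might report: it fits a four-parameter Hill curve to entirely hypothetical hepatocyte viability data and derives an estimated IC50, one possible starting point for in vitro–in vivo extrapolation.

    import numpy as np
    from scipy.optimize import curve_fit

    def hill(conc, top, bottom, ic50, slope):
        """Four-parameter logistic (Hill) model of viability vs. concentration."""
        return bottom + (top - bottom) / (1.0 + (conc / ic50) ** slope)

    # Hypothetical hepatocyte viability data (% of untreated control)
    conc = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0, 100.0])   # test concentrations, µM
    viability = np.array([99.0, 97.0, 92.0, 78.0, 48.0, 21.0, 9.0])

    params, _ = curve_fit(hill, conc, viability, p0=[100.0, 0.0, 10.0, 1.0])
    top, bottom, ic50, slope = params
    print(f"Estimated in vitro IC50: {ic50:.1f} µM (Hill slope {slope:.2f})")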

An in vitro testing system represents only a small part of a whole living organism. Thus, to extrapolate its results, a number of other factors have to be known, measured or assumed. Putting in vitro experiments into a useful framework requires models and simulations, ranging from simple correlations between in vitro results and in vivo knowledge (e.g., structure–activity relationships) to complex physiologically based models that simulate the behavior and effects of chemicals. Models and simulations without experimental data are empty exercises, so the experimental data must be correct and reproducible. The current literature contains a vast amount of information, but it remains a major task to curate it into a form that can be reliably used for modeling, simulation and validation.
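
As a deliberately simplified illustration of the modeling end of that spectrum, the sketch below uses a one-compartment kinetic model rather than a full physiologically based one; the volume of distribution, clearance and dose are assumed values chosen only to make the example run.

    import numpy as np
    from scipy.integrate import solve_ivp

    # All parameter values are hypothetical, chosen only for illustration.
    V_D = 42.0    # apparent volume of distribution (L), assumed
    CL = 6.0      # total body clearance (L/h), assumed
    DOSE = 100.0  # intravenous bolus dose (mg), assumed

    def one_compartment(t, y):
        """Rate of change of drug amount in a single well-stirred compartment."""
        amount = y[0]
        concentration = amount / V_D
        return [-CL * concentration]  # first-order elimination

    solution = solve_ivp(one_compartment, (0.0, 24.0), [DOSE],
                         t_eval=np.linspace(0.0, 24.0, 7))
    for t, amount in zip(solution.t, solution.y[0]):
        print(f"t = {t:4.1f} h   plasma concentration = {amount / V_D:.2f} mg/L")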

Integrated testing strategies have been suggested to solve the problems inherent in the current paradigms of toxicity testing. Such approaches combine already existing information about chemicals and their congeners with data produced by in vitro and in vivo studies, which may include “omics” technologies, imaging techniques and high-throughput testing platforms, supplemented and confirmed by animal studies where necessary. Naturally, these integrated testing strategies are envisaged to rely heavily on intelligent computational techniques. Perhaps the biggest obstacle to embarking upon such an integrated strategy, however, is a certain measure of conservatism in the regulatory system, although this currently seems to be dissipating somewhat.
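
Purely as a toy illustration, and not any accepted regulatory scheme, the following sketch aggregates hypothetical read-across, in vitro, omics and high-throughput evidence for a single chemical into one weighted prioritization score.

    # The evidence sources, scores and weights below are invented placeholders.
    evidence = {
        "read_across_from_congeners": 0.4,  # similarity-based alert, scaled 0-1
        "in_vitro_cytotoxicity": 0.7,       # scaled potency from cell assays, 0-1
        "omics_pathway_perturbation": 0.6,  # transcriptomic signature score, 0-1
        "high_throughput_screening": 0.5,   # fraction of relevant assays active, 0-1
    }
    weights = {
        "read_across_from_congeners": 0.20,
        "in_vitro_cytotoxicity": 0.35,
        "omics_pathway_perturbation": 0.25,
        "high_throughput_screening": 0.20,
    }

    priority = sum(weights[source] * score for source, score in evidence.items())
    print(f"Weighted hazard-priority score: {priority:.2f} "
          "(higher scores would trigger confirmatory, e.g., animal, studies first)")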

Will predictions based on these novel approaches ever be good enough to convince regulators, and the public in general, to abandon animals as the primary biocomponent in toxicity testing? To reach this goal, new validation approaches are needed, such as validation carried out in connection with actual testing. Whether this would be acceptable to regulators is difficult to predict, but it is useful to consider the consequences of the current validation schemes: because the validation of a single test may take 5–10 years, the tests currently under validation are based on “old” science. Somehow, scientific development should be allowed to feed into the validation process as it proceeds; this is certainly one of the ultimate goals of Frontiers in Predictive Toxicity.

