Abstract
The efficient management of the continuously increasing number of chemical substances used in today's society is assuming greater importance than ever before. Toxicity testing plays a key role in the regulatory decisions of agencies and governments that aim to protect the public and the environment from the potentially harmful or adverse effects of these multitudinous chemicals. Therefore, there is a critical need for reliable toxicity-testing methods to identify, assess and interpret the hazardous properties of any substance. Traditionally, toxicity-testing approaches have been based on studies in experimental animals. However, in the last 20 years, there has been increasing concern regarding the sustainability of these methodologies. This has created a real need for the development of new approach methodologies (NAMs) that satisfy the regulatory requirements and are acceptable and affordable to society. Numerous initiatives have been launched worldwide in attempts to address this critical need. However, although the science to support this is now available, the legislation and the pace of NAM acceptance are lagging behind. This review will consider some of the various initiatives in Europe to identify NAMs to replace or refine the current toxicity-testing methods for pharmaceuticals. This paper also presents a novel systematic approach to support the desired toxicity-testing methodologies that the 21st century deserves.
Keywords: toxicity testing, new approach methodologies, NAMs, animal testing, drug development, pharmaceuticals, risk assessment, 21st century toxicology, toxicology, regulatory toxicology, TOX21, TOX21c, Q(SAR), organ-on-a-chip, read-across, high-throughput screening, innovation
Introduction
In the 21st century, the ever-increasing population and the constant demand for new and improved technologies have set a huge challenge for regulatory toxicology (RT), as many of these innovations are based upon chemical substances. The search for novel chemical entities and the repurposing of existing chemicals are at the heart of everyday society's needs. Major industries, such as the pharmaceutical, food, cosmetics and agricultural industries, are continuously creating and remodelling chemicals. With this creativity comes the risk that these substances may have harmful effects on consumers and the environment [1]. Therefore, the effective management of the use and safety of such chemicals is crucial to the well-being of all.
Today's RT is a relatively young discipline, having emerged mainly from the infamous thalidomide incident of the early 1960s. However, toxicity testing, one of the main pillars of RT, dates back many centuries. The history starts with Paracelsus (1493–1541), the father of toxicology, who demonstrated the dose–response relationship of numerous known remedies and toxins of the time, proving that 'All substances are poisons; there is none which is not a poison. The right dose differentiates a poison and a remedy' [2]. Finding the dose that has no observable adverse effect is crucial, and it has been one of the main aims of toxicity studies of pharmaceutical innovations [3]. It has been the responsibility of RT, among other duties, to approve the right dose and to control the safe marketing of these pharmaceutical innovations [4].
In the 1920s, there was a marked increase in the use of animals for toxicity testing, especially after the introduction of the LD50 (50% lethal dose) test by J. W. Trevan [5]. The increasing recognition of the need to quantify regulatory concepts, such as the acceptable daily intake (ADI) and the no observable (adverse) effect level (NOEL or NOAEL), subsequently led to statistical innovations, which were mainly influenced by pharmacological evaluations [6]. For example, Bliss (1934) first introduced the application of probit regression analysis to fit a dose–response model, from which the LD50 could be calculated [6]. Several other testing methods were developed through the 20th century, which required the extensive use of animals and were neither time nor cost effective [7]. Many of these are still used today, including the 2-year carcinogenicity bioassay and developmental/embryotoxicity studies. Recently, there has been some significant progress in replacing animal models and improving toxicity-testing approaches, such as the replacement of the Draize test with in vitro skin irritation testing [8].
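Bliss's probit approach can be illustrated with a short sketch: a probit dose–response curve is fitted by maximum likelihood on log10 dose, and the LD50 is read off where the predicted mortality reaches 50%. The dose–mortality figures below are hypothetical, and the implementation is a generic modern illustration, not Bliss's original computation.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Hypothetical dose-mortality data: dose (mg/kg), animals per group, deaths.
dose = np.array([10.0, 20.0, 40.0, 80.0, 160.0])
n = np.array([10, 10, 10, 10, 10])
deaths = np.array([1, 3, 5, 8, 10])
log_dose = np.log10(dose)

def neg_log_likelihood(params):
    a, b = params
    p = norm.cdf(a + b * log_dose)   # probit link: P(death) = Phi(a + b*log10(dose))
    p = np.clip(p, 1e-9, 1 - 1e-9)   # guard against log(0)
    return -np.sum(deaths * np.log(p) + (n - deaths) * np.log(1 - p))

fit = minimize(neg_log_likelihood, x0=[0.0, 1.0], method="Nelder-Mead")
a, b = fit.x

# At the LD50 the predicted mortality is 50%, i.e. a + b*log10(LD50) = 0.
ld50 = 10 ** (-a / b)
print(f"Estimated LD50 ~= {ld50:.1f} mg/kg")
```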
However, toxicological risk assessment methodologies have remained relatively unchanged for more than 40 years and contain many well-known, well-used yet imperfect models [9]. Even today, standard toxicity testing mainly uses high-dose, mostly chronic, exposure in animals, from which, using linear extrapolations and apical endpoints, a specific substance is determined as potentially toxic in humans. The failure of these animal-based toxicity studies to provide reproducible data, owing to the various endpoints in multiple testing, can lead to a high number of false-positive and false-negative results, and thereby to statistical and biological inaccuracy [10]. This can lead to flawed toxicology-based decision-making by regulatory authorities, industries and governments, despite these organizations doing everything they can to prevent it. For example, in the selection of a NOAEL, the failure to reject a null hypothesis of no difference between doses does not necessarily mean that there is no difference in reality [6]. Likewise, in the case of thalidomide, the weak dose-dependent foetal anomalies in rats and mice, compared with those in the white rabbit, did not mean there was no teratogenic effect in humans [11]. Furthermore, the incidence of adverse drug reactions (ADRs) from approved medications can result in withdrawal from the market. In Europe, approximately 3.6% of all hospital admissions and nearly 0.5% of deaths are due to ADRs, resulting in extra expense not only for the pharmaceutical industry but also for national healthcare systems [12].
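The NOAEL point above can be made concrete with a small simulation: with typical group sizes, a test can routinely fail to reject the null hypothesis even when a real effect exists. A minimal sketch, with all numbers hypothetical:

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)

# Hypothetical study: a real 10-unit effect on some endpoint, but only 10 animals/group.
n, control_mean, effect, sd, trials = 10, 100.0, 10.0, 15.0, 10_000
rejections = 0
for _ in range(trials):
    control = rng.normal(control_mean, sd, n)
    dosed = rng.normal(control_mean + effect, sd, n)
    _, p_value = ttest_ind(control, dosed)
    rejections += p_value < 0.05

# Power is only ~30%: most studies of this size would miss the real effect,
# and the dose tested could wrongly be taken as a NOAEL.
print(f"Power ~= {rejections / trials:.0%}")
```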
The aim of the toxicological regulation of pharmaceuticals is to enable the research, development and production of new and effective therapeutics for the market, while protecting consumers from unsafe products. However, regulation may seem an obstacle in drug development, as it requires high levels of evidence of the safety and efficacy of the product before approval for use [13]. This can be due to a cautious approach that tries to ensure that no ineffective drug, which may even cause ADRs, is approved, even at the risk of failing to approve a useful agent. Nevertheless, one drug can be a poison for one individual while being a lifesaver for another. For example, Propulsid, a medication developed for heartburn, was removed from the market in March 2000, following the deaths of eight people due to irregular heartbeat caused by its active component, cisapride [14]. However, patients with cerebral palsy can use this agent successfully to digest food painlessly [15].
Since the 1970s, the number of novel medications reaching the market has decreased, while the cost of R&D has increased [16], especially in Europe, the USA and Japan [16]. Current procedures used in the preclinical or non-clinical phases of drug development are not totally reliable, so finding alternative approaches for a more predictive assessment is desirable [78]. Overall, the failure rate of candidate drugs in the clinical phase is >90%, which means that for every 20 compounds reaching clinical trials, only 1 achieves a marketing authorization, a situation that has not changed in the last decade [17]. Preclinical data are mostly based on animal models; however, translation between species does not always work, and therefore many adverse effects are only found in the clinical phases or post-marketing [12]. In the last 10 years or so, the delineation of pathways of toxicity and the development of new, reliable in vitro and in silico screening methods have provided useful tools for screening molecules at an early stage, but these methods have limitations [18] (Table 1).
Table 1.
General advantages and disadvantages of study designs, including in vivo, in vitro and in silico
| | Advantages | Disadvantages |
|---|---|---|
| In vivo | ✓ Whole system | ⊗ Animal models do not reflect humans |
| | ✓ Wide availability of rodent species | ⊗ Expensive and time consuming |
| | ✓ Tests are well established, and limitations are taken into consideration | ⊗ Ethical concerns |
| | | ⊗ High- to low-dose extrapolation |
| | | ⊗ Cannot account for metabolic, systemic or behavioural responses |
| | | ⊗ Low predictivity (~40–70% accurate) |
| | | ⊗ Lacks reproducibility |
| | | ⊗ High rate of false positives |
| | | ⊗ Rodent models are commonly interbred |
| In vitro | ✓ Uses human cells | ⊗ Not a whole system |
| | ✓ Cost and time effective | ⊗ Few systems developed to show interactions between different cell types |
| | ✓ Fewer ethical issues | ⊗ Cannot determine pharmacokinetic or systemic effects |
| | ✓ Wide availability of human cell types | ⊗ Test systems do not always reflect normal human physiological conditions |
| | ✓ Requires less test substance | |
| | ✓ Reduced waste and hazardous materials | |
| In silico | ✓ Reduces animals required | ⊗ Poor-quality data provide poor-quality results |
| | ✓ Reduces screening time | ⊗ Sequences used often represent a fraction of the desired protein |
| | ✓ Increases chances of hits | ⊗ Commonly restricted to Lipinski's rule of 5 |
| | | ⊗ Limited molecule diversity |
Clearly, there is a need for new approach methodologies (NAMs), which include new technologies, alternative strategies and, ideally, predictive models that can enable the improved development of new and better medicines. Classical in vitro and in silico toxicological approaches, along with new non-test applications, including (quantitative) structure–activity relationships ((Q)SARs), read-across, pharmacokinetic/pharmacodynamic models and uncertainty factor models, are all collectively referred to as NAMs. In vitro toxicology, with its cellular and organotypic models and assays, aims to predict toxicity responses to different substances, including drug candidates, in humans. The goal of in silico toxicology is to complement in vitro and in vivo toxicity tests. It has many different computational tools, including huge databases for storing the toxicological properties of chemicals, software for the simulation of biological systems and molecular dynamics, and modelling software for predicting toxicity [18]. Additionally, the impressive benefits of (Q)SARs, namely low cost, speed and the potential to reduce animal-based toxicity testing, make them a valuable component of NAMs [19]. (Q)SARs use mathematical/statistical models to relate quantitatively measured chemical structure to biological, including toxic, activity [19]. In addition, the read-across method, which predicts the unknown toxicity of chemicals from the known toxicity of similar chemicals, is gaining attention as another member of NAMs, since in terms of reproducibility it can outperform animal testing [20].
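To illustrate the read-across idea in code, the sketch below predicts a query chemical's toxicity label from its most structurally similar neighbours, using Tanimoto similarity on Morgan fingerprints via RDKit. The chemicals and toxicity labels are hypothetical toy data; a real read-across additionally requires an expert-justified analogue category and much richer evidence.

```python
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

# Hypothetical source chemicals (SMILES) with illustrative, not real, toxicity labels.
source = {
    "CCO": 0,         # ethanol   -> labelled "non-toxic" for illustration
    "CCCCO": 0,       # 1-butanol
    "c1ccccc1O": 1,   # phenol    -> labelled "toxic" for illustration
    "Cc1ccccc1O": 1,  # o-cresol
}

def fingerprint(smiles):
    mol = Chem.MolFromSmiles(smiles)
    return AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=2048)

def read_across(query_smiles, k=2):
    """Predict the query's label from its k most similar source chemicals."""
    query_fp = fingerprint(query_smiles)
    scored = sorted(
        ((DataStructs.TanimotoSimilarity(query_fp, fingerprint(s)), label)
         for s, label in source.items()),
        reverse=True,
    )[:k]
    # Similarity-weighted vote over the nearest neighbours.
    score = sum(sim * label for sim, label in scored) / sum(sim for sim, _ in scored)
    return ("toxic" if score >= 0.5 else "non-toxic"), scored

print(read_across("CCc1ccccc1O"))  # an ethylphenol: nearest analogues are the phenols
```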
The 3Rs (reduction, refinement and replacement of animals used in scientific research) provide the foundation for NAMs. Reduction of animal use in toxicity testing can be achieved through improvements to study design, including the use of historical in vivo data or the sharing of control groups across multiple experiments. Refinement of animal studies aims to reduce the pain and suffering animals may experience; this is most commonly achieved by improving the living conditions of the animals. These improvements can reduce stress and improve the behaviour of the animals, which leads to more accurate results [21]. Replacement of animal testing remains a more challenging task. The paradigm change in toxicity testing is moving towards the increasing application of in silico models, as these require neither animals nor harvested tissue samples [21]. Many areas have been developing in silico models; for example, cancer research has various genetic resources (cBioPortal, the EPA ToxCast screening library, Gene Ontology) to help with predictive, diagnostic and prognostic markers or to characterize entire biological processes [22, 46].
However, the validation process for NAMs is not completely harmonized and has been slower than expected [23]. In theory, NAMs are accepted when they are considered qualified for a specific context of use. However, there is no specification of how much evidence is required [24]. Also, the new methods are mostly validated against previous animal-model results for the chemical (retrospectively); but what if the animal model was biased? Currently, the general rule is that the introduction of a new method only adds extra knowledge to the drug candidate's toxicity profile without eliminating the need for the old data-gathering and testing methods [25]. However, there are great examples of implementing the vision of faster and cheaper toxicity testing that is more relevant to humans and requires minimal numbers of test animals. The Integrated Testing Strategy (ITS), which merges data from testing and non-testing methods, and the Integrated Approaches to Testing and Assessment (IATA) are brilliant examples of the many approaches that are working towards a complete paradigm shift in regulatory toxicity testing.
How, then, has toxicity testing been changing to realize the vision of more reliable, more rapid and less expensive approaches? Can the gap between research and regulation be completely eliminated? What successes have been achieved in drug development by revolutionizing toxicity-testing techniques? What is next for the RT of pharmaceuticals? This paper aims to address these questions by considering past, present and future European NAM initiatives, together with a few American milestones.
Methods
A systematic strategy for searching and collecting data was specified at the outset. Subject-specific professional books and chapters, journals, articles, reports and websites were used as sources of information, and they provided the foundation for data collection and analysis. The search terms were carefully selected. Table 2 indicates how key terms and suitable synonyms were categorized and used.
Table 2.
Searching parameters, defined key terms used in research
| Main searching categories | Searching subcategories (key terms) |
|---|---|
| Regulatory toxicology | Working areas of regulatory toxicology; drug regulatory agencies by countries*; websites of MHRA, WHO, EMA and ICH; procedure guidelines; pharmacology; pharmaceuticals |
| Drug regulatory toxicology requirements | Toxicity testing in drug regulation; ICH guidelines; clinical trial requirements; acute toxicity studies; sub-chronic studies; mutagenicity studies; carcinogenicity studies; teratogenicity studies; reproductive studies; in vitro studies and toxicity testing in drug regulation; in vivo studies and toxicity testing in drug regulation; phototoxicity; immunotoxicity |
| Alternatives for animal-based toxicity testing | History of 3Rs (reduction, refinement and replacement of animals); use of animals in toxicity testing for human pharmaceuticals; non-clinical safety evaluation; initiatives, programmes and projects |
| Novel technologies in toxicity testing of pharmaceuticals | In silico toxicity testing for human medicines; microphysiological systems; OOC technologies; '-omics' technologies; biomedical microelectromechanical systems; microfluidics; biomimetics; computational modelling |
The search was conducted using key terms in selected databases and information sources.
The collected book chapters were included if they were directly related to regulatory toxicology or toxicity testing for pharmaceutical medicines and their date of publication was after 2014. The inclusion criteria for websites were that the website must be either an official website of a medicinal regulatory authority or contain the latest information related to various toxicity-testing methods in drug regulation or to drug regulation itself. Articles had less strict inclusion criteria, as the time period for their publication was not specified. However, the key terms, or parts of them, had to be mentioned or referred to either in the title or in the abstract (where an abstract was available) of the selected articles.
Analysis of the systematically collected books, book chapters, websites and articles involved the categorization of the selected documents according to their relevance to the predetermined main structure of this paper. The identified information and data were used to prepare the appropriate figures and tables, as well as serving as evidence for the discussion and the conclusions reached.
The principle of the spiral cycle and research onion processes described by Saunders et al. (2007) served as a foundation for the research methodology scheme created and used to collect the relevant information required (Fig. 1).
Figure 1.
Research methodology scheme. Subject-specific books, journal articles and websites were mainly used as sources of information. Before the research was conducted, the inclusion/exclusion criteria and key terms were identified. When collected information was found to be irrelevant or inadequate, the parameters and key terms were refined and a new search was conducted. When all requirements were met, the data were recorded; the findings, however, were revised continuously.
Imperfect models
The ultimate goal of toxicity testing for novel human medicines, or for any chemical, is the identification of any hazardous properties the substance may possess. The traditional methods rely mainly on toxicity endpoints (adverse effects) in animal models. The adverse effects can be described as quantitative (such as the aforementioned LD50) or qualitative, including binary (toxic or non-toxic) and ordinal (low or high toxicity) outcomes. The complexity of these tests lies in the fact that the adverse effect depends on various factors: not only the chemical properties of the substance, but also the route of exposure (oral, dermal, inhalation, etc.), the dose, the frequency (single or multiple), the duration (24 hours or 24 months), as well as biological variables (sex, age, etc.) and ADME properties [18].
There are numerous testing methods for screening the potential toxic effects of drug candidates (Table 3), including the current gold standard for carcinogenicity testing, the 2-year bioassay, which is carried out in two genetically distinct rodent species. During and after exposure to the chemical being tested, the animals are observed, and any tumour development is noted. If there are no signs of tumours, the assay can be ended at 18 months for mice or 24 months for rats. The animals are then sacrificed, and histopathological studies are carried out on all tissues [26].
Table 3.
The main properties of the carcinogenicity and the genotoxicity-testing methods
| Study design | Description | Stage | Dose(s) | Duration | Species | Limitations |
|---|---|---|---|---|---|---|
| Carcinogenicity | Assess the potential of carcinogenic effects occurring from exposure to pharmaceuticals relative to humans | Parallel with late-stage clinical trials | Relevant study designs and dosages can be found in ICH guideline S1B | ≥90 days up to 24 months | Two different rodent species (usually rat and mouse) | Costly, high number of animals required, time consuming and low predictivity value |
| Genotoxicity | Assess the potential of adverse genetic effects occurring from exposure to pharmaceuticals | Non-clinical | Various assays used with ≥2 endpoints, point mutation and chromosomal damage | Relevant study durations can be found in ICH guideline S2(R1) | Various in vitro assays (Ames bacterial point mutation test, mammalian chromosome aberration test, mouse lymphoma thymidine kinase gene mutation test) and the in vivo rodent bone-marrow chromosome damage test | Low specificity, few non-genotoxic tests available |

Representation of the main properties and limitations of selected study designs used for the toxicity testing of drug candidates. The table was compiled from the ICH safety guidelines.
The significant time and resources consumed by the 2-year bioassay make it a very expensive procedure [27]. Moreover, the ability of this bioassay to predict human carcinogenicity is questionable, which has made its usefulness debatable [28]. There are also assumptions in the 2-year bioassay that are not always met, for example that the effects of a chemical in rodents will be reproduced in humans. Rodents do not develop spontaneous tumours in the same tissues as humans; consider, for example, the lung, the skin, the liver and the colon [29]. Also, many anticancer drugs increase the risk of secondary cancers 10–30 years after treatment, which could never be predicted by a 2-year bioassay [30].
Currently, genotoxicity testing uses both in vitro and in vivo tests. One of the main problems with the range of genotoxicity tests available is the high rate of false positives due to their low specificity (Table 3). This is best illustrated by the Ames test, which is only about 60% specific and, because it uses bacteria, lacks some of the eukaryotic properties found in humans [31]. This problem of specificity was first officially raised in 2007 during a European Centre for the Validation of Alternative Methods (ECVAM) workshop, where it was suggested that more mechanistic data on humans were required, as well as clearer guidance on the handling of positive results that have no human relevance [28]. To give an example, it is estimated that to detect a 1/10 000 idiosyncratic drug-induced liver injury (DILI), 30 000 patients would be required during clinical trials [32]. The onset of DILI can occur weeks after treatment; this might be because the liver can repair damage up to a point; however, there are many different mechanisms that can induce hepatotoxicity. One survey found that around 45% of hepatotoxicity tests conducted in both a rodent and a non-rodent species failed to predict DILI in human trials [33], highlighting the need for novel models with improved predictivity for hepatotoxicity. Arguably, the validation process is one of the major hurdles faced by both in vitro and in silico tests, as their outcomes are measured against the data of the old tests. Validation is a five-step process (Fig. 2) in which the sensitivity, specificity, accuracy and reproducibility of a test are assessed.
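The 30 000-patient figure quoted above follows from a standard sample-size argument, sometimes called the 'rule of three': to have a 95% chance of observing at least one case of an event with incidence p, a trial needs roughly 3/p participants. Sketched for p = 1/10 000:

```latex
P(\text{at least one case}) = 1-(1-p)^{n} \ge 0.95
\;\Longrightarrow\;
n \ge \frac{\ln 0.05}{\ln(1-p)} \approx \frac{3}{p} = 3\times 10^{4}
\qquad (p = 10^{-4}).
```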
Figure 2.
The five stages to regulatory acceptance. The test is optimized during the first two stages. Data are gathered during the third stage from prospective studies. The data are then peer reviewed by internal and external examiners to reduce bias, and submitted to a regulatory body to be accepted or rejected.
However, the current practice is that novel technologies and methods do not replace the old ones. Rather, they are added as extra steps to provide supporting information for toxicological risk assessments [25]. Differences in viewpoint between toxicologists and regulators can also lead to complications. The former want to find the truth and explain natural mechanisms through hypothesis-based investigations, while the latter want to solve problems through decisions based on the available data, sometimes where the quality of those data is less than ideal. In essence, the science is advancing rapidly while the regulations are changing more slowly, resulting in a disconnection that is hampering progress.
There have been many critiques of the current validation processes, especially regarding in vitro tests, the main concern being the time taken to approve a new test.
The time for change: Tox21 and Tox21c
The Toxicology in the 21st Century consortium (Tox21) is a federal research collaboration between the US Environmental Protection Agency (EPA) and the US National Toxicology Program (NTP). It was started in 2004 and was initially composed of 22 experts in various fields of toxicology, risk assessment and animal welfare. The experts of Tox21 developed and published a milestone report in 2007, entitled 'Toxicity Testing in the 21st Century: A Vision and a Strategy' (Tox21c). The main aim was to induce a paradigm shift in toxicity testing; the report focuses on developing testing methods that allow the rapid and effective evaluation of the safety of medical products, food additives, commercial chemicals and pesticides [34]. The key concept of Tox21c is to identify and delineate the finite number of pathways of toxicity (PoT) [9]. By determining the connection between the chemical–biological interaction of a toxicant and its adverse outcome in molecular and cellular terms, our understanding of how chemical substances can be harmful to human health and the environment will be much improved [35].
The impact of Tox21c in finding better and more reliable alternatives to current model systems has been enormous. Determining PoT using in vitro and in silico models, to uncover the toxic effects of chemicals based on their mechanism of action (MoA) rather than using animal models with toxicity endpoints (such as carcinogenicity or genotoxicity), is a more logical and favourable approach for toxicologists [37]. In vitro and in silico models can provide faster, cheaper and, in some cases, more reliable toxicity assessments using advanced high-throughput screening (HTS) and computational resources (i.e. software, algorithms, databases, analysis methods, etc.). Over the last two decades, and following the publication of Tox21c, derived initiatives aimed at improving collaboration between academia, industry and regulatory bodies have increased markedly. These collaborations have searched for and identified the major technical advancements that can be used to improve risk assessment [1]. They have also been working towards the accelerated regulatory acceptance of alternative methods by facilitating the sharing of information and knowledge.
Tox21 takes advantage of technological advances to enable regulatory toxicity testing to move away from the traditional system, which is based on apical endpoints in vertebrate animal models, to one that is based on mechanistic endpoints in human-relevant in vitro models [36]. This goal is shared by many other organizations. In October 2018, Tox21 expanded its focus. In addition to its predominant research on developing and applying HTS methods for toxicity testing, it now aims to create and refine alternative test systems and will address limitations of current in vitro tests over the next 5 years [34].
IATA
Current chemical risk assessment is not capable of keeping up with the constantly increasing number of chemicals that require testing. It is for this reason that organizations are starting to take advantage of in silico technologies, alongside in vivo and in vitro methods, to better understand PoT and to develop adverse outcome pathways (AOPs). To this end, the Organisation for Economic Co-operation and Development (OECD) developed IATA. IATA integrates data from multiple methodologies (i.e. in silico, in vivo, in vitro, etc.) to better define the hazardous characteristics of chemicals, which can be used to aid regulatory decision-making [38]. The idea of integrated approaches is not a novel concept, with talk of IATA dating back to 2007, and with ever-advancing computer technologies and NAMs there has been an increased drive towards these approaches [39]. The process of IATA can be broadly summarized in three steps (Fig. 3): (1) gathering relevant existing data from multiple methodologies, (2) assessing whether the weight of evidence (WoE) of the gathered data is at a satisfactory level to make a regulatory decision or whether further evidence is required and (3) generating new data to reach a satisfactory level. IATA can utilize AOPs to help gather the existing data or to help generate further data when needed. A prime example is the use of IATA in the cosmetics industry, where the use of animals has been banned in Europe [40]. This has led to impressive advancements in NAMs in this area. As a result, there are IATA that are used in regulatory decision-making for acute toxicity endpoints such as skin irritation, corrosion and sensitization [41].
Figure 3.
Overview of the steps of IATA and of how AOPs are used to support IATA (diagram adapted from OECD [38]).
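As a purely schematic sketch of this gather–weigh–generate loop (the evidence weights, aggregation rule and decision threshold below are illustrative placeholders, not part of the OECD framework):

```python
import math
from dataclasses import dataclass

@dataclass
class Evidence:
    source: str    # e.g. "QSAR prediction", "in vitro assay", "legacy in vivo study"
    weight: float  # illustrative confidence contribution in [0, 1]

def weight_of_evidence(evidence):
    # Toy aggregation treating lines of evidence as independent;
    # real IATA weight-of-evidence involves structured expert judgement.
    return 1 - math.prod(1 - e.weight for e in evidence)

def iata(existing, new_data_sources, threshold=0.8):
    evidence = list(existing)                        # step 1: gather existing data
    while weight_of_evidence(evidence) < threshold:  # step 2: is the WoE satisfactory?
        if not new_data_sources:
            return "inconclusive", evidence
        evidence.append(new_data_sources.pop(0))     # step 3: generate new data
    return "decision possible", evidence

status, used = iata(
    existing=[Evidence("QSAR prediction", 0.4), Evidence("read-across", 0.3)],
    new_data_sources=[Evidence("in vitro assay", 0.6)],
)
print(status, [e.source for e in used])
```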
The purpose of IATA is to allow flexibility and expert input in the gathering and organization of data from several methodologies. Despite this flexibility, there are some areas that have to be standardized, with expert input removed; these are known as defined approaches (DAs). DAs are developed as our understanding of the mechanisms leading to an adverse outcome progresses [41, 42]. However, as IATA uses WoE and requires expert input, it cannot currently be used for more complex processes or for areas that do not have a large number of NAMs. This is due to the lack of understanding of the underlying mechanistic detail in these areas, which makes an IATA difficult to develop.
While IATA are not yet commonly used, their potential is promising given the increasing advancements in NAMs and our improving understanding of the underlying mechanisms. The prospect of using IATA as a regulatory tool seems very possible.
Development of NAMs
The need to develop human-relevant testing strategies for toxicological risk assessments based solely on reliable NAMs originates from the principles of the 3Rs, coined by Russell and Burch in 1959 [43]. While the refinement and reduction of animal use have been addressed and, in many cases, achieved, the complete replacement of animal models still has a long way to go.
In general, the main goals of the various NAM development programmes are the development and standardization of novel alternative testing methods utilizing advanced technologies, and the acceleration of regulatory acceptance so that these novel techniques can be fully exploited for regulatory purposes. In recent years, there has been an increase in in vitro- and in silico-based initiatives (Fig. 4).
Figure 4.
Timeline of highlighted initiatives for NAMs. The initiatives discussed in this paper are listed in chronological order with brief summaries. Initiatives with shaded backgrounds are American; all those with white backgrounds are located in Europe.
Toxicity Forecaster
Toxicity Forecaster (ToxCast) is part of the EPA's Computational Toxicology Research Program (CompTox), which provides public access to computational toxicology research data via online databases and resources [44, 79]. These online data sources include the Aggregated Computational Toxicology Resource (ACToR), the Distributed Structure-Searchable Toxicity Database Network (DSSTox), the Toxicity Reference Database (ToxRefDB), the Exposure-Based Chemical Prioritization Database (ExpoCastDB) and the Toxicity Forecaster Database (ToxCastDB) [44, 45]. These all contribute to providing better knowledge of, and access to, animal chemical toxicity studies and other exposure or toxicity data for chemicals based on their chemical structure, using quantitative models (both in vitro and in silico) for predictive toxicology. ToxCast was launched in 2007 and has proceeded in three phases. The first was the 'proof of concept' phase, followed by the second phase between 2009 and 2015 and by the third phase, completed in 2018; the latter two phases expanded the database established in the first. Since its launch in 2007, ToxCast has evaluated more than 4500 chemicals using over 700 different HTS assays [44]. These chemicals include bioactive small molecules that are listed in the NIH Chemical Genomics Centre (NCGC) Pharmaceutical Collection and have been found useful for repurposing applications [47]. While such programmes normally patent their innovative ideas, the EPA programme supports the transparency and sharing of data and knowledge as far as possible, all of which contributes towards the paradigm shift in toxicity testing [48, 49].
By helping to prioritize more and more chemicals on the basis of their potential human risk, these and similar future projects can contribute enormously to repurposing small-molecule drugs approved for human use in a more consistent manner, so that they can be used in diseases other than their initial application.
Fund for the Replacement of Animals in Medical Experiments
The Fund for the Replacement of Animals in Medical Experiments (FRAME) was founded in London in 1969 by Dorothy Hegarty. Her aim was to validate reliable and reproducible alternative methods for predicting adverse effects in humans [50].
The first toxicity committee of FRAME was established in 1979 and presented its first report on alternative toxicity-testing methods in 1982 [50]. It contributed greatly to the UK Animals (Scientific Procedures) Act 1986, which complies with the European Union's Directive 2010/63/EU on the protection of animals used for scientific purposes [50, 51].
In 1989, FRAME established the INVITTOX database for the collection of protocols of in vitro methods in toxicity testing, which today is part of the Scientific Information Service of ECVAM (a.k.a. EURL ECVAM). ECVAM has been supported by the European Union Network of Laboratories for the Validation of Alternative Methods (NETVAL) since 2013 [52–55]. The work of FRAME has contributed to the regulatory approval of newer alternative toxicity-testing methods, including the Direct Peptide Reactivity Assay (DPRA) for the in vitro skin sensitization testing of chemicals [50].
Current projects in the FRAME Alternatives Laboratory include the development of a cell-based liver toxicology model using 3D printing, which has the potential to be used for the HTS of drug candidates [56]. If this technology achieves sufficient accuracy in predicting human toxicity, it has the potential to replace animals in hepatotoxicity testing.
National Centre for Replacement, Refinement and Reduction of Animals in Research and CRACK IT
Another important UK-based organization dedicated to the complete implementation of the principles of the 3Rs, by facilitating collaboration among academic institutions, pharmaceutical companies, the chemical and consumer product industries, regulatory bodies and other research funders, is the National Centre for the Replacement, Refinement and Reduction of Animals in Research (NC3Rs). It was established in 2004, and it works through its open innovation programme CRACK IT [57]. This has been developed in two parts: (1) CRACK IT Challenges, which funds collaborations between academia and industry for the benefit of both, and (2) CRACK IT Solutions, which aims to maximize the scientific and commercial benefits of accelerating the development and validation of novel technologies with potential 3Rs impact [57].
The main objectives of a recent NC3Rs workshop, jointly hosted with Unilever in London in 2018, were the application of in vitro and in silico approaches to decision-making in safety assessments, particularly within a regulatory setting, the identification of scientific gaps that still need to be addressed, and the encouragement of collaboration among members of the sector, including academia, industry and regulatory agencies [57, 82]. In addition, funded by Royal Dutch Shell PLC through CRACK IT Challenges, KREATiS has developed a skin/eye (Q)SAR model, iSafeRabbit, for predicting the skin/eye irritation and corrosivity potential of petrochemical substances, to replace animal studies [58]. iSafeRabbit is appropriate for regulatory purposes as it satisfies the five OECD principles recommended for (Q)SARs. KREATiS is expanding its scope to other chemicals, including pharmaceuticals [58].
European Partnership for Alternative Approaches to Animal Testing
The European Partnership for Alternative Approaches to Animal Testing (EPAA), launched in 2005, is a voluntary collaboration between European Commissioners Verheugen and Potocnik, European trade associations, and companies from seven industry sectors, including the pharmaceutical industry [53, 54]. The aim of EPAA is to pool and share knowledge and resources in a broad context to accelerate the development, validation and acceptance of alternative methods to the currently used animal models in toxicity testing, thereby applying the 3Rs principle [53, 54]. In addition, many companies of the European Federation of Pharmaceutical Industries and Associations (EFPIA) are members of this collaboration, which has had many fruitful projects. For instance, in 2009, a data-sharing exercise led by AstraZeneca and facilitated by NC3Rs resulted in the removal of the regulatory requirement for the conventional single-dose acute toxicity test for human pharmaceuticals [53, 54]. This was a landmark change in the toxicity-testing requirements of drug development.
Innovative Medicines Initiative
The Innovative Medicines Initiative (IMI) is the largest public–private initiative and partnership in Europe, between the European Commission and the European pharmaceutical industry (represented by the members of EFPIA) [59]. It supports 113 collaborative projects with 2225 participants, addressing issues including antimicrobial resistance, diabetes, immune and brain disorders and the challenges of regulatory safety-testing designs for human medicines [59, 84]. IMI was launched in 2008 with an overall budget of €5.3 billion [59].
Even though IMI mainly focuses on the discovery of advanced medicines, the 3Rs principle plays a key role in it and, therefore, in many IMI projects, including eTOX, eTRANSAFE, MARCAR and VAC2VAC. eTOX and MARCAR are two successfully completed IMI projects [60, 61]. eTOX integrated bioinformatics and chemo-informatics approaches to develop expert systems for the in silico prediction of toxicities, while MARCAR identified biomarkers and molecular tumour classifiers for non-genotoxic carcinogenesis. The VAC2VAC and eTRANSAFE projects are ongoing, ending in 2021 and 2022, respectively [62, 63]. eTRANSAFE enhances translational safety assessment through integrative knowledge management, while VAC2VAC compares vaccine batch to vaccine batch by consistency testing. Based on the success of the IMI programme, the European Union's research and innovation programme Horizon 2020, the continuation of the European Commission's 7th Framework Programme (FP7), is funding IMI2 with a budget of €1.65 billion from 2014 to 2020 [54]. This includes projects such as the aforementioned eTRANSAFE and VAC2VAC [62, 63].
Safety Evaluation Ultimately Replacing Animal Testing
On the long road of transitioning the regulatory field from an approach based on information derived from animal testing to a revolutionary way of identifying and characterizing the toxicological hazards of chemical substances and predicting safety, the Safety Evaluation Ultimately Replacing Animal Testing (SEURAT) research initiative plays a significant role. It has been sponsored by FP7, along with several other programmes [64]. For instance, the Virtual Physiological Human (VPH) project of the VPH Institute in Belgium, for the full realization of 'in silico medicine', is still ongoing. Another example is EXERA, on the development of 3D in vitro models of oestrogen-reporter mouse tissues for the pharmaco-toxicological analysis of nuclear receptor-interacting compounds, which ran from 2006 to 2009 in Italy [65, 66].
The first phase of the SEURAT initiative, SEURAT-1, was launched on 1 January 2011 with an overall budget of €50 million [64]. The goal of SEURAT-1 was the replacement of repeated-dose systemic toxicity testing. This includes the development of innovative testing methods to support regulatory safety assessment, the introduction of a toxicological mode-of-action strategy to describe how any substance can affect AOPs, and the demonstration of proof of concept for novel regulatory paradigms at multiple levels (theoretical, systems and application) through the formulation of case studies [64]. The initiative is composed of six complementary research projects and one coordination and support project. Despite the complexity and efforts of SEURAT-1, more steps must be taken to reach the ultimate goal of predicting the toxicity of drug candidates in complex biological systems.
Joint ad hoc expert group on the application of the 3Rs in Regulatory Testing of Medicinal Products
In 2010, the European Medicines Agency (EMA) set up the Joint ad hoc expert group on the application of the 3Rs in Regulatory Testing of Medicinal Products (JEG 3Rs) to provide advice and recommendations to committees, including the Committee for Medicinal Products for Human Use (CHMP), on all matters relating to the use of animals in the testing of medicines for regulatory purposes [67]. The group cooperates with both the EURL ECVAM and the European Directorate for the Quality of Medicines and Healthcare (EDQM) to improve and promote the application of 3Rs in the regulatory testing of pharmaceutical products. JEG 3Rs developed and proposed EMA guidelines, in collaboration with the Scientific Advice Working Party (SAWP) of CHMP, regarding regulatory acceptance processes of novel 3R testing approaches (EMA/CHMP/CVMP/JEG-3Rs/450091/2012) [67].
JEG 3Rs, via EMA, has also proposed several changes to some International Council for Harmonisation of Technical Requirements for Pharmaceuticals for Human Use (ICH) safety guidelines. For instance, since 2016, there have been suggestions for changes to the carcinogenicity-testing requirements of the ICH S1 guidelines. However, the debate about whether the 2-year bioassay adds substantial value to drug development programmes still remains [68].
Integrated European ‘Flagship’ Programme Driving Mechanism-based Toxicity Testing and Risk Assessment
The Integrated European 'Flagship' Programme Driving Mechanism-based Toxicity Testing and Risk Assessment for the 21st century (EU-ToxRisk) is another exemplary European collaborative project funded by Horizon 2020, with a budget of over €30 million [69]. It was launched on 1 January 2016, following on from SEURAT-1, and will end in 2022. It integrates advancements in cell biology, '-omics' technologies, systems biology and computational modelling to define complex PoT. Furthermore, it provides proof of concept for mechanism-based safety-testing strategies and guidance for their universal application. It focuses on repeated-dose systemic toxicity and on developmental and reproductive toxicity.
Over the last 10–15 years, many initiatives have been launched. The achievements are significant and provide a solid foundation for future developments.
What is next?
Enormous improvements in in vitro human-derived cell culture methodologies, for instance the development of microphysiological systems (MPS) such as organ-on-a-chip (OOC), are leading the focused efforts to find more suitable, reproducible and predictive alternative systems for toxicity-testing purposes [70]. An OOC is a type of artificial organ that simulates the activities, mechanics and physiological responses of an entire organ [70]. It mainly utilizes human induced pluripotent stem cells (hiPSCs). Currently, this multi-channel 3D microfluidic cell culture chip is receiving a lot of attention from pharmaceutical companies and regulatory authorities worldwide as a means of modelling sequential metabolism and identifying adverse side and/or off-target effects [59]. The hepatic microfluidic bioreactor of the SEURAT-1 HeMiBio project is an excellent example, with an innovative culture system that integrates hepatocytes and non-parenchymal liver cells, derived from hiPSCs, into an MPS [71]. The resulting co-culture allows the induction and maintenance of various types of mature hepatic cell function in a bioreactor that can provide clinically relevant information on drug and chemical clearance and toxicity.
There are several projects aiming to create and analytically validate individual organ-chip models within single platforms, for example the multiple tissue chip testing centres (TCTCs) funded through the Tissue Chip for Drug Screening programme of the National Center for Advancing Translational Sciences (NCATS) in the USA [72]. Despite the fact that the concept of human-on-a-chip, and the stem cell field itself, are still fairly new, huge strides have been made in the last decade towards creating reliable and biologically relevant platforms, which in the future can be applied not just to the pre-clinical but also to the clinical phases of drug development and to precision medicine ('you-on-a-chip') [59].
The International Consortium for Innovation and Quality in Pharmaceutical Development (IQ), made up mainly of pharmaceutical-industry representatives, was formed to address some of the biggest problems in drug development, including the already mentioned DILI [73]. The IQ-DILI initiative focuses on eliminating the existing gaps in current regulatory guidelines on detecting, monitoring, managing and preventing DILI in the clinical phases of drug development. This is another outstanding example, in addition to the many initiatives listed and described previously, of how the synergistic intentions and interests of regulators, industry and academia can lead the technological evolution of toxicity testing forward. However, is the validation of modern strategies for regulatory safety assessment possible, whether in the pharmaceutical industry or in other industries? Can the traditional approaches, which largely rely on animal safety-test data, be abandoned to eliminate, or at least reduce, the risk of flawed data from these animal tests? [81]
The relatively new principle that a finite number of PoT exist provides an excellent modern approach in drug development, as toxicological pathways can be pharmacological targets or even pathways for efficacy. Previously, drug targets were identified through the analysis of pathways leading to the disease, which is the basic concept of AOPs. Additionally, the pharmaceutical industry has already been using the sophisticated ITS approach in the form of serial target-specific, mechanism-based tests, including various in vitro and in silico methods. Thereby, if one test fails to detect all possible MoA of a given compound, another test or NAM can be applied, thus decreasing the number of poor candidates reaching regulatory testing. Despite both the ITS and PoT approaches being implemented by many recent initiatives and pharmaceutical companies, the gap between research and regulatory acceptance still remains. The human predictivity of alternative methods is validated against the predictivity of animal models, which probably prevents the abandonment of the earlier components of regulatory safety assessments.
What could be done?
The regulations for validating toxicity tests may need to be completely overhauled to keep up with scientific innovation. Although there are examples of alternative testing methods that have achieved regulatory acceptance, including the DPRA and the 3T3 Neutral Red Uptake (3T3 NRU) phototoxicity test, a further gradual introduction is necessary to facilitate the paradigm shift [74]. The continuous evaluation of novel toxicity-testing methods, preferably without simply accumulating the new ones alongside the old ones, needs to be more specific, and strategies for its implementation must be defined. Novel technologies should be fully exploited for regulatory purposes [75, 76]. For example, '-omics' technologies allow the concurrent monitoring of many thousands of macromolecules, thereby revealing any functional disturbances. These technologies can provide the necessary tools to examine and identify differences in various cellular molecules, including DNA, RNA and proteins. However, the extensive biological interpretation of the roles that these macromolecules and their related pathways play in physiological processes still needs to be fully delineated and understood.
Nevertheless, there are challenges to face before '-omics' technologies, such as toxicogenomics, can be used in RT. These challenges include how the data produced by these relatively new technologies can be fully incorporated into regulatory decision-making. In addition, computational modelling, which allows the analysis, visualization, simulation and prediction of toxicity, and other mechanistic approaches should be utilized more in the regulatory context.
From a pharmacological perspective, today's toxicity-testing strategies are good but not efficient enough, a situation that needs to change. The time, as well as the many requirements, it takes for NAMs to be validated and to reach complete regulatory acceptance further complicates this situation. The current complex process of drug development and the continued demand by regulators for more evidence of the safety, efficacy and quality of new drug candidates may be decreasing the number of valuable drugs that reach the public. Nevertheless, many potential human medicines fail even before entering clinical trials because they demonstrate a toxic or adverse effect in animal models during the preclinical phase. The increasing concerns of regulatory agencies, who reasonably fear authorizing unsafe medications, do not help the paradigm shift that is needed. Therefore, the predictive value of toxicity testing needs to be increased further to prevent harm to the reputations of industry, regulators and academia.
From the brilliant examples currently ongoing, the extensive changes in RT, and specifically in toxicity testing, can be clearly seen. There have been, and are, exemplary projects for replacing the conservative toxicity-testing methods. For example, Horizon Europe, the next innovation framework programme of the European Commission, has five mission areas. One of them is cancer, which can provide an excellent opportunity for further NAMs to emerge, offering more predictive and faster solutions than the 2-year bioassay [83]. However, more needs to be done.
As a strategic step to promote the complete paradigm shift in toxicity testing, the development of a novel categorization system would be beneficial and valuable (Fig. 5). This pre-validation system for already developed and currently developing NAMs could use the principles of the International Organization for Standardization (ISO). NAMs would gain a certificate and a specific number, categorizing them mainly on the basis of whether they provide a human- or an animal-relevant alternative toxicity-testing method. These two main categories could be subdivided into in vivo, in vitro and in silico methods, which could be further subdivided into pathway-based, systems biology, computational biology, simulation studies and non-test approaches. All subdivisions could be divided further as necessary (a hypothetical encoding is sketched after Fig. 5). This alternative ISO system could feed into an international database, which would provide worldwide access to the location, participants, validation requirements, timescales, descriptions and other specifications of all NAMs. This database could use ICH and other regulatory guidelines to further categorize and support the developers of NAMs.
Figure 5.
Schematic representation of the suggested AISNAMs and IDBNAMs. The categorization system could feed into the international database (TT: toxicity testing).
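To make the proposed categorization concrete, one hypothetical encoding of an AISNAMs certificate is sketched below. The category codes, identifier format and serial numbering are purely this paper's illustration of the proposal; no such ISO scheme currently exists.

```python
from dataclasses import dataclass
from enum import Enum

class Relevance(Enum):   # main categories: human- or animal-relevant method
    HUMAN = "H"
    ANIMAL = "A"

class Method(Enum):      # subdivisions
    IN_VIVO = "VV"
    IN_VITRO = "VT"
    IN_SILICO = "SI"

class Approach(Enum):    # sub-subdivisions
    PATHWAY_BASED = "PB"
    SYSTEMS_BIOLOGY = "SB"
    COMPUTATIONAL_BIOLOGY = "CB"
    SIMULATION = "SM"
    NON_TEST = "NT"

@dataclass
class AISNAMCertificate:
    relevance: Relevance
    method: Method
    approach: Approach
    serial: int          # hypothetically assigned on registration in IDBNAMs

    def identifier(self) -> str:
        return (f"AISNAM-{self.relevance.value}{self.method.value}"
                f"-{self.approach.value}-{self.serial:05d}")

# e.g. a human-relevant, in vitro, pathway-based assay:
cert = AISNAMCertificate(Relevance.HUMAN, Method.IN_VITRO, Approach.PATHWAY_BASED, 42)
print(cert.identifier())  # AISNAM-HVT-PB-00042
```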
Both the alternative ISO system for NAMs (AISNAMs) and the international database for NAMs (IDBNAMs) could promote and support the flow of information from NAM initiatives and projects to the regulatory bodies, as well as back to the initiatives. They would provide transparency while protecting the interests and benefits of each member. AISNAMs could help initiatives to know which regulations they must follow to gain regulatory approval and to find investors. The transparency that the combination of this classification system and IDBNAMs could offer may help each member to better recognize what is still needed and eliminate the gaps between research and regulatory acceptance. This could support the vision of faster, cheaper and more relevant toxicity testing, not only for pharmaceuticals but also for other chemical entities.
In addition, since the ban on animal testing for cosmetics in Europe, there has been a significant reduction in the number of registrations for new chemical approval [77]. Despite the impressive advancements in NAMs and their regulatory acceptance, there is still a heavy reliance on chemical data from previous animal tests, which may not allow the complete abandonment of the conservative paradigm. In this and similar situations, the recommended combined AISNAMs and IDBNAMs system could help to overcome this reliance on animal data.
Conclusions
Evolutionary changes in the scientific community and in scientific developments are clearly reshaping and revolutionizing today's 'old' toxicity testing for the better. Many examples in this paper demonstrate that great efforts have been made to realize the vision of more reliable, more rapid and less expensive approaches. Excellent technological and intellectual innovations are contributing to the elimination of the existing gaps between toxicology and regulation.
However, at the time of writing, the complete abandonment of traditional testing methods, for example in the case of pharmaceuticals, is not possible. This is probably because the desired technology either does not yet exist or exists but not in a ready-to-be-validated form. Although the complete paradigm shift is not yet a reality, it is highly possible, and it seemingly represents the near future of toxicity testing.
The increasing application of NAMs in the earlier phases of drug development, the complete shift of cosmetics towards NAMs and the continuous attempts of other areas to attain the desired toxicity-testing approaches all support the vision of both industry and RT. In this work, the authors have presented a new, internationally shared scheme that combines a qualification/categorization system with a database system. This approach may be applied to improve the transparency of NAMs and to support their validation by regulatory bodies. A possible improvement of this approach would be complete global harmonization of all guidelines related to the toxicity testing of not only new pharmaceutical but also other chemical entities. In the long term, this novel scheme could eliminate the remaining gaps between research and regulation.
The authors' systematic review of the current structures and techniques used has highlighted some areas that are changing, and/or should be changed, to allow the evolution of a regulatory toxicity testing that meets the needs of the 21st century.
Conflict of interest
There are no conflicts to declare.
References
- 1. Adeleye Y, Andersen M, Clewell R et al. Implementing toxicity testing in the 21st century (TT21C): making safety decisions using toxicity pathways, and progress in a prototype risk assessment. Toxicology 2015;332:102–11. [DOI] [PubMed] [Google Scholar]
- 2. Vinken M, Blaauboer BJ. In vitro testing of basal cytotoxicity: establishment of an adverse outcome pathway from chemical insult to cell death 2017, 104–10. [DOI] [PMC free article] [PubMed]
- 3. Baird TJ, Caruso MJ, Gauvin DV et al. NOEL and NOAEL: a retrospective analysis of mention in a sample of recently conducted safety pharmacology studies. J Pharmacol Toxicol Methods 2019;99:106597. [DOI] [PubMed] [Google Scholar]
- 4. Horii I. The principle of safety evaluation in medicinal drug - how can toxicology contribute to drug discovery and development as a multidisciplinary science? J Toxicol Sci 2016;41:SP49–67PMID: 28250284. [DOI] [PubMed] [Google Scholar]
- 5. LR DP. Alternative approaches in median lethality (LD50) and acute toxicity testing. Toxicol Lett 1989;49:159–70. [DOI] [PubMed] [Google Scholar]
- 6. Cox DR, Effron B. Statistical thinking for the 21st century scientists. Science Advences – Applied Mathematics 2017;3:e1700768. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 7. Doke SK, Dhawale SC. Alternatives to animal testing: a review. Saudi Pharmaceutical Journal 2015;23:223–9PMID: 26106269. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 8. Lee S-H. Evaluation of acute skin irritation and phototoxicity by aqueous and ethanol fractions of Angelica keiskei. Experiment and Therapeutic Medicine 2013;5:45–50. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 9. Bhattacharya S, Zhang Q, Carmichael PL et al. Toxicity testing in the 21st century: defining new risk assessment approaches based on perturbation of intracellular toxicity pathways. Journal Plos One 2011;110:40–46. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 10. Blaauboer BJ. The long and winding road of progress in the use of in vitro data for risk assessment purposes: from “carnation test” to integrated testing strategies. Toxicology 2015;332:4–7.
- 11. Vargesson N. Thalidomide-induced teratogenesis: history and mechanisms. Birth Defects Res C Embryo Today 2015;105:140–56.
- 12. Bouvy JC, De Bruin ML, Koopmanschap MA. Epidemiology of adverse drug reactions in Europe: a review of recent observational studies. Drug Saf 2015;38:437–53.
- 13. Jalali RK. Pharmacovigilance and drug safety. In: Vohora D, Singh G (eds), Pharmaceutical Medicine and Translational Clinical Research. Edinburgh: Elsevier, 2018.
- 14. Page RL 2nd, O'Bryant CL, Cheng D et al. Drugs that may cause or exacerbate heart failure. Circulation 2016;134:e32–69.
- 15. Fernando T, Goldman RD. Management of gastroesophageal reflux disease in pediatric patients with cerebral palsy. Can Fam Physician 2019;65:769–98.
- 16. Schuhmacher A, Gassmann O, Hinder M. Changing R&D models in research-based pharmaceutical companies. J Transl Med 2016;14:105.
- 17. Dowden H, Munro J. Trends in clinical success rates and therapeutic focus. Nat Rev Drug Discov 2019;18:495–6.
- 18. Raies AB, Bajic VB. In silico toxicology: computational methods for the prediction of chemical toxicity. WIREs Comput Mol Sci 2016;6:147–72.
- 19. Neves BJ, Braga RC, Melo-Filho CC et al. QSAR-based virtual screening: advances and applications in drug discovery. Front Pharmacol 2018;9:1275.
- 20. Luechtefeld T, Marsh D, Rowlands C et al. Machine learning of toxicological big data enables read-across structure activity relationships (RASAR) outperforming animal test reproducibility. Toxicol Sci 2018;165:198–212.
- 21. Törnqvist E, Annas A, Granath B et al. Strategic focus on 3R principles reveals major reductions in the use of animals in pharmaceutical toxicity testing. PLoS One 2014;9:e101638.
- 22. Jean-Quartier C, Jeanquartier F, Jurisica I et al. In silico cancer research towards 3Rs. BMC Cancer 2018;18:408.
- 23. Parish ST, Aschner M, Casey W et al. An evaluation framework for new approach methodologies (NAMs) for human health safety assessment. Regul Toxicol Pharmacol 2020;112:104592.
- 24. EPA. List of Alternative Test Methods and Strategies (or New Approach Methodologies [NAMs]). Lenexa: EPA, 2018.
- 25. Beken S, Kasper P, Laan J-W. Regulatory acceptance of alternative methods in the development and approval of pharmaceuticals. In: Eskes C, Whelan M (eds), Validation of Alternative Methods for Toxicity Testing, Vol. 856. London: Springer International Publishing, 2016.
- 26. Parasuraman S. Toxicological screening. J Pharmacol Pharmacother 2011;2:74–9.
- 27. Smith CJ, Perfetti TA, King JA. Rodent 2-year bioassays and in vitro and in vivo genotoxicity tests insufficiently predict risk or model development of human carcinomas. Toxicol Res Appl 2019;3.
- 28. Corvi R, Madia F, Guyton KZ et al. Moving forward in carcinogenicity assessment: report of an EURL ECVAM/ESTIV workshop. Toxicol In Vitro 2017;45:278–86.
- 29. Kemp CJ. Animal models of chemical carcinogenesis: driving breakthroughs in cancer research for 100 years. Cold Spring Harb Protoc 2015;2015:865–74.
- 30. Ward JM. The two-year rodent carcinogenesis bioassay—will it survive? J Toxicol Pathol 2007;20:13–9.
- 31. Rodríguez E, Piccini C, Sosa V et al. The use of the Ames test as a tool for addressing problem-based learning in the microbiology lab. J Microbiol Biol Educ 2012;13:175–7.
- 32. Lauschke VM, Hendriks DF, Bell CC et al. Novel 3D culture systems for studies of human liver function and assessments of the hepatotoxicity of drugs and drug candidates. Chem Res Toxicol 2016;29:1936–55.
- 33. Olson H, Betton G, Robinson D et al. Concordance of the toxicity of pharmaceuticals in humans and in animals. Regul Toxicol Pharmacol 2000;32:56–67.
- 34. Tox21. Toxicology in the 21st Century [Online]. 2019. https://tox21.gov/ (cited: 29 January 2019).
- 35. Hartung T, Hoffmann S. Food for thought … on in silico methods in toxicology. ALTEX 2009;26:155–66.
- 36. Andersen ME, Krewski D. Toxicity testing in the 21st century: bringing the vision to life. Toxicol Sci 2009;107:324–30.
- 37. Hartung T. Toxicology for the twenty-first century. Nature 2009;460:208–12.
- 38. OECD. OECD Series on Testing and Assessment No. 260: Guidance Document on the Use of Adverse Outcome Pathways in Developing Integrated Approaches to Testing and Assessment (IATA). Paris: OECD, 2016.
- 39. Tollefsen KE, Scholz S, Cronin MT et al. Applying adverse outcome pathways (AOPs) to support integrated approaches to testing and assessment (IATA). Regul Toxicol Pharmacol 2014;70:629–40.
- 40. Balls M, Combes R, Worth A. The History of Alternative Test Methods in Toxicology. Academic Press, 2018. ISBN 9780128136973.
- 41. Coady K, Browne P, Embry M et al. When are adverse outcome pathways and associated assays “fit for purpose” for regulatory decision-making and management of chemicals? Integr Environ Assess Manag 2019;15:4153.
- 42. Casati S. Integrated approaches to testing and assessment. Basic Clin Pharmacol Toxicol 2018;123:51–5.
- 43. Kandárová H, Letašiová S. Alternative methods in toxicology: pre-validated and validated methods. Interdiscip Toxicol 2011;4:107–13.
- 44. EPA. Distributed Structure-Searchable Toxicity (DSSTox) Database [Online]. 2019. https://www.epa.gov/chemical-research/distributed-structure-searchable-toxicity-dsstox-database (cited: 15 February 2019).
- 45. Smith MN, Grice J, Cullen A et al. A toxicological framework for the prioritization of Children’s Safe Product Act data. Int J Environ Res Public Health 2016;13:431.
- 46. Knudsen TB. ToxCast and Virtual Embryo: in vitro data and in silico models for predictive toxicology. Chapter 3.3. In: Seidle T, Spielmann H (eds), AXLR8-3 Alternative Testing Strategies: Progress Report 2012. Berlin: Springer-Verlag, 2012.
- 47. Huang R, Southall N, Wang Y et al. The NCGC pharmaceutical collection: a comprehensive resource of clinically approved drugs enabling repurposing and chemical genomics. Sci Transl Med 2011;3:80ps16.
- 48. Milton C. What is the Future of Toxicity Testing? Aberdeen: University of Aberdeen, 2019.
- 49. Fischer I. Evolution of Regulatory Toxicology from a Pharmacological Perspective. Aberdeen: University of Aberdeen, 2019.
- 50. FRAME. Fund for the Replacement of Animals in Medical Experiments [Online]. 2019. https://frame.org.uk/ (cited: 31 January 2019).
- 51. EU. Directive 2010/63/EU of the European Parliament and of the Council of 22 September 2010 on the protection of animals used for scientific purposes. Off J Eur Union 2010;276:33–79.
- 52. EC. EU-NETVAL (European Union Network of Laboratories for the Validation of Alternative Methods) [Online]. 2017. https://ec.europa.eu/jrc/en/eurl/ecvam/alternative-methods-toxicity-testing/eu-netval (cited: 8 February 2019).
- 53. European Commission. EU Reference Laboratory for Alternatives to Animal Testing [Online]. 2019. https://ec.europa.eu/jrc/en/eurl/ecvam (cited: 10 February 2019).
- 54. European Commission. European Partnership for Alternative Approaches to Animal Testing [Online]. 2019. https://ec.europa.eu/growth/sectors/chemicals/epaa_en (cited: 20 February 2019).
- 55. Janusch A, van der Kamp, Bottrill K et al. Current Status and Future Developments of Databases on Alternative Methods. London: ATLA, 2018.
- 56. EU-NETVAL. EURL-ECVAM [Online]. https://ec.europa.eu/jrc/en/eurl/ecvam/alternative-methods-toxicity-testing/eu-netval (cited: 22 December 2019).
- 57. NC3Rs. National Centre for the Replacement, Refinement & Reduction of Animals in Research [Online]. 2019. https://www.nc3rs.org.uk/ (cited: 9 February 2019).
- 58. NC3Rs. iSafeRabbit: QSAR model for regulatory irritation/corrosion testing [Online]. https://nc3rs.org.uk/crackit/isaferabbit-qsar-model-regulatory-irritationcorrosion-testing (cited: 28 December 2019).
- 59. IMI. Innovative Medicines Initiative [Online]. 2019. https://www.imi.europa.eu/ (cited: 2 March 2019).
- 60. eTOX [Online]. 2010. http://www.etoxproject.eu/ (cited: 21 February 2019).
- 61. MARCAR. Towards Novel Biomarkers for Cancer Risk Assessment [Online]. 2010. http://www.imi-marcar.eu/ (cited: 4 February 2019).
- 62. eTRANSAFE [Online]. 2017. http://etransafe.eu/ (cited: 31 January 2019).
- 63. VAC2VAC. Vaccine Batch to Vaccine Batch Comparison by Consistency Testing [Online]. 2019. http://www.vac2vac.eu/ (cited: 16 February 2019).
- 64. SEURAT-1. Towards the Replacement of In Vivo Repeated Dose Systemic Toxicity Testing [Online]. 2013. http://www.seurat-1.eu/ (cited: 17 February 2019).
- 65. VPH Institute. Building the Virtual Physiological Human [Online]. 2019. https://www.vph-institute.org/ (cited: 23 February 2019).
- 66. Altaweb EU. EXERA - Estrogen Receptor-Interacting Compounds [Online]. 2009. http://www.altaweb.eu/exera/ (cited: 23 February 2019).
- 67. EMA. Overview of Comments Received on ‘Guideline on Regulatory Acceptance of 3R (Replacement, Reduction, Refinement) Testing Approaches’ (EMA/CHMP/CVMP/JEG3Rs/450091/2012). London: European Medicines Agency - CHMP and CVMP, 2017.
- 68. ICH. The ICH S1 Regulatory Testing Paradigm of Carcinogenicity in Rats [Online]. https://database.ich.org/sites/default/files/S1_StatusReport_2019_0802.pdf (cited: 29 December 2019).
- 69. EU-ToxRisk [Online]. 2019. http://www.eu-toxrisk.eu/ (cited: 3 March 2019).
- 70. Bhatia SN, Ingber DE. Microfluidic organs-on-chips. Nat Biotechnol 2014;32:760–72.
- 71. European Commission. SEURAT-1 Tools & Methods Catalogue. EUR 28123 EN. Budapest: Európai Unió Háza, 2016.
- 72. NCATS. National Center for Advancing Translational Sciences [Online]. 2019. https://ncats.nih.gov/tissuechip/projects/centers/2018 (cited: 21 February 2019).
- 73. IQ-DILI. International Consortium for Innovation and Quality in Pharmaceutical Development [Online]. 2016. https://www.iqdili.org/about-us (cited: 5 March 2019).
- 74. Kim K, Park H, Lim K-M. Phototoxicity: its mechanism and animal alternative test methods. Toxicol Res 2015;31:97–104.
- 75. MacGregor J. The future of regulatory toxicology: impact of the biotechnology revolution. Toxicol Sci 2003;75:236–48.
- 76. Fielden MR, Matthews JB, Fertuck KC et al. In silico approaches to mechanistic and predictive toxicology: an introduction to bioinformatics for toxicologists. Crit Rev Toxicol 2002;32:67–112.
- 77. Hoffstadt L. Scientific officer for hazard assessment at the European Chemicals Agency [interv.]. In: Culliney K (ed), CBD Global Summit: REACH Review. Cyprus: Cosmetics Design Europe, 2019.
- 78. Andrade EL, Bento AF, Cavalli J et al. Non-clinical studies in the process of new drug development - part II: good laboratory practice, metabolism, pharmacokinetics, safety and dose translation to clinical studies. Braz J Med Biol Res 2016;49:e5646.
- 79. EPA. United States Environmental Protection Agency - Toxicity Forecasting [Online]. 2019. https://www.epa.gov/chemical-research/toxicity-forecasting (cited: 10 February 2019).
- 80. Andersen LW, Mackenhauer J, Roberts JC et al. Etiology and therapeutic approach to elevated lactate. Mayo Clin Proc 2013;88:1127–40.
- 81. Akhtar A. The flaws and human harms of animal experimentation. Camb Q Healthc Ethics 2015;24:407–19.
- 82. Hartung T, Luechtefeld TH, Maertens A et al. Food for thought … integrated testing strategies for safety assessments. ALTEX 2013;1:3–18.
- 83. Jemal A, Ward EM, Johnson CJ et al. Annual report to the nation on the status of cancer, 1975-2014, featuring survival. J Natl Cancer Inst 2017;109:1–22.
- 84. Klonoff DC. Personalized medicine for diabetes. J Diabetes Sci Technol 2008;2:335–41.