Abstract
In drug development, early recognition of a potential for blocking the human ether-a-go-go related gene (hERG) channels is perhaps the best way to avoid later disappointment when QT interval prolongation shows up in clinical trials. Knowledge of the hERG blocking liability offers the chance to modify the molecule to reduce, or even eliminate, this unwanted activity, and lack of success in such modification is a good reason to stop further development of the molecule. In this issue of the BJP, different methods for early detection of hERG channel blocking liability are discussed by Pollard et al. One attractive approach is widespread screening of molecules at a very early stage of research to detect compounds with this liability and thereby eliminate them. Several methodologies are now available that offer hERG channel testing in a high-throughput format, but they rely on a diverse selection of direct and indirect readouts of hERG channel blocking activity, and all are subject to practical limitations that need to be considered before investing in a particular experimental approach. The approach selected, if any, should reflect the resources and expertise available. In any case, it is essential to be aware of the experimental limitations and potential inaccuracies that are inherent to each approach.
This article is a commentary on Pollard et al., pp. 12–21 of this issue and is part of a themed section on QT safety. To view this issue visit http://www3.interscience.wiley.com/journal/121548564/issueyear?year=2010
Keywords: hERG, QT interval prolongation, integrated risk assessment
Introduction
The goal of reducing or eliminating the ability to block human ether-a-go-go related gene (hERG) channels, in particular for small molecules, has had a clear impact on programmes of drug discovery. In this issue of the BJP, Pollard et al. (2010) provide an excellent overview of possible experimental approaches aimed at early incorporation of the assessment of hERG blocking potential in drug research and development. As with many goals, there are a variety of paths leading to this one, and each organization involved in drug research and development must design an approach best suited to its own resources and budget.
Pollard et al. (2010) discuss several technologies that are available to test for hERG blocking activity on a high-throughput scale. It appears almost self-evident that one of these approaches should be implemented as early as possible. Whereas we agree conceptually, some practical issues should also be kept in mind and weighed against the costs and efforts associated with any of the proposed high-throughput approaches. In fact, upon evaluation of the various pros and cons, we have chosen not to implement a high-throughput assay for hERG activity (the reasons for which are briefly outlined below). For us, the acquisition of less, but higher quality, data has turned out to be more effective, particularly for the lead optimization process, than large amounts of data with higher variability and greater susceptibility to error.
From hit to lead
‘Hits’ emerging from high-throughput target screens may contain hundreds to thousands of compounds that one might want to test for hERG blocking activity. Some of the testing approaches cited by Pollard et al. (2010) would have no problem testing so many compounds. However, important issues can negatively affect the quality of the results generated and should be considered prior to embarking on large-scale hERG testing. First, the purity of compounds synthesized at this stage of research is rarely optimal and is often below 90%. Thus, the possible contribution of impurities to the test result is difficult to assess and could contribute to a ‘false positive’ result. Perhaps more troublesome is that important physicochemical properties of early research compounds are still unknown. In particular, adequate solubility of the compounds to be tested is critical to all high-throughput test systems. As the testing conditions may also require a pH of 7.4 and may be poorly tolerant of solubilizing agents, there is a real chance that the concentrations apparently being tested are in fact not being attained. It is difficult, if not impossible, within high-throughput test systems to ensure that the intended test concentrations have been achieved. This is compounded by the fact that such tests are typically run at relatively high concentrations (e.g. 1–30 µmol·L−1) to ensure adequate safety margins.
Is the quality of the data produced adequate for making decisions on the fate of chemical classes, given these uncertainties? Indeed, Pollard et al. suggest using the high-throughput tests only as an initial screen and recommend conducting definitive studies, using conventional electrophysiological approaches (and most likely with better characterized and purer test compounds), to define, unequivocally, the full hERG blocking potential. Thus, with high-throughput testing, one must recognize various pitfalls that affect data quality.
These comments are not intended to condemn high-throughput electrophysiological assays. They were designed for use in detecting potent blockers of ion channel drug targets and, in this context, they can be used effectively. What is critical is whether the search is for ion channel activity as the intended molecular target or as an off-target effect. For each aim, the range of activities sought is quite different, as is the tolerance for false negative versus false positive results.
With this in mind, the challenge in developing a test strategy is to optimize the balance between generating large volumes of data and the quality of the data thus generated. Although binding assays have been rightfully criticized for their well-known limitations (as well summarized by Pollard et al., 2010), they do offer the advantages of truly high throughput and low cost. Given the other potential limitations of trying to implement an electrophysiological test at such an early stage, binding assays may offer an acceptable compromise between quality and cost. Supporting the binding assay for each chemical class with traditional electrophysiological methods is, however, strongly recommended.
Improving in silico tools for predicting hERG blocking potential may also offer a true alternative. As they are not subject to issues such as test article purity, solubility or any other physicochemical limitation, in silico approaches may turn out to be ideal for estimating hERG blocking potential very early in the drug discovery process, for instance, in hit cluster prioritization. Although in silico approaches will probably never be perfect, a good in silico model may be as good as, or even better than, high-throughput methods applied to less-than-ideal test articles. As pointed out by Pollard et al., these in silico approaches are most effective if they are based on data coming from the actual chemical classes being optimized. Unfortunately, where such data are not available to train the in silico model, one cannot expect these approaches to work optimally.
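To illustrate the kind of model referred to above, the sketch below shows, in Python, a minimal in silico hERG model: a random forest regressor trained to predict hERG potency (expressed as pIC50) from a few physicochemical descriptors. The descriptors, the training data and the new compound are entirely hypothetical placeholders rather than data from any real chemical series; in practice, as noted above, the training set would come from the chemical classes actually being optimized.

```python
# Minimal sketch of an in silico hERG model: a random-forest regressor trained
# on hypothetical descriptor/pIC50 pairs. All values below are illustrative
# placeholders, not data from any real chemical series.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)

# Hypothetical training set: rows are compounds, columns are simple
# physicochemical descriptors (e.g. logP, molecular weight, basic pKa, TPSA).
X = rng.normal(size=(200, 4))
# Hypothetical hERG potencies as pIC50 (-log10 of the molar IC50).
y = 5.0 + 0.8 * X[:, 0] - 0.5 * X[:, 3] + rng.normal(scale=0.3, size=200)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Hold-out error gives a rough idea of how far predictions can be trusted.
print("MAE (pIC50 units):", mean_absolute_error(y_test, model.predict(X_test)))

# Predict the hERG pIC50 of a new, hypothetical compound from its descriptors,
# for example when prioritizing hit clusters.
new_compound = np.array([[1.2, -0.4, 0.3, 0.8]])
print("Predicted pIC50:", model.predict(new_compound)[0])
```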
Lead optimization
How good do our tests have to be for use in the subsequent lead optimization process? Experience indicates that most drug-like small molecules tend to have at least a modest potency for blocking hERG channels. Unfortunately, only a few compounds are found that have little or no hERG blocking activity (e.g. IC50 > 30 µmol·L−1). On the other hand, there are relatively few compounds that turn out to be highly potent hERG inhibitors (i.e. IC50 in the mid to low nmol·L−1 range). These highly potent hERG blockers are also typically the ones that can be eliminated early in the lead optimization process, where in silico or high-throughput approaches can detect them. This means that the vast majority of small molecules that we examine when optimizing lead structures exhibit hERG IC50s in the range from 1 to 10 µmol·L−1, a remarkably narrow range. Given this narrow range within which we actually work, our medicinal chemistry colleagues are intensely interested to know whether their compound lies at the lower end of this range (e.g. high nmol·L−1 to 3 µmol·L−1) or at the upper end (let us say 10–30 µmol·L−1), as this can be critical for developing a structure–activity relationship and deciding whether or not to advance a given compound. This required precision demands a test system sensitive enough to identify rather subtle differences in hERG blocking potency. For this reason, we have decided to rely exclusively on manual patch clamp studies throughout lead optimization.
As so many compounds fall into this ‘usual’ narrow range, the progress of compounds quite often becomes dependent upon other issues, including the intended route of administration, the anticipated systemic exposure with therapeutic use (as rightfully pointed out by Pollard et al., a very difficult assessment in early research phases) and the extent of plasma protein binding. The intended clinical indication can also play a role in deciding how much activity on the hERG channel can be tolerated, but we agree fully with Pollard et al. that, increasingly, there are few circumstances that would support developing a compound that has a high risk of causing QT prolongation in patients.
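As a purely arithmetical illustration of how anticipated exposure and plasma protein binding enter this judgement, the Python sketch below computes a simple hERG safety margin as the ratio of the measured hERG IC50 to the anticipated free (unbound) plasma concentration. All numbers are hypothetical and are not thresholds recommended by Pollard et al. or by us.

```python
# Illustrative hERG safety-margin calculation: ratio of the measured hERG IC50
# to the anticipated free (unbound) therapeutic plasma concentration.
# All numbers below are hypothetical and chosen only to show the arithmetic.

herg_ic50_umol_per_l = 3.0      # hERG IC50 from a manual patch-clamp study
total_cmax_umol_per_l = 0.5     # anticipated total plasma Cmax at the therapeutic dose
plasma_protein_binding = 0.95   # fraction bound to plasma proteins

# Only the unbound fraction is assumed to be available to block the channel.
free_cmax = total_cmax_umol_per_l * (1.0 - plasma_protein_binding)
safety_margin = herg_ic50_umol_per_l / free_cmax

print(f"Free Cmax: {free_cmax:.3f} umol/L")
print(f"hERG IC50 / free Cmax margin: {safety_margin:.0f}-fold")
```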
We support the thesis of Pollard et al. that reducing hERG blocking activity early in drug development is likely to be the best procedure for avoiding QT prolongation in clinical trials. Nevertheless, testing drugs in vivo for effects on the QT interval in animal models is a subsequent, necessary step in drug development. There have been marked improvements in the sensitivity of in vivo preclinical test systems for detecting changes in the QT interval. However, even the best preclinical systems, using numbers of animals that are statistically valid and ethically sound, may not reach the test sensitivity of a modern clinical ‘thorough QT study’. The authors point out that clinical testing has become very good at picking up effects on the QT interval and that, in fact, the clinical tests may be more sensitive than most preclinical in vivo tests. Thus, the best way of avoiding the unpleasant discovery of a clinical QT prolongation may be to invest more time early in research and ‘dial out’ hERG inhibition as much as possible. The methodological approach one ultimately chooses to achieve this goal will be dependent upon available resources and how they can best be utilized. We recommend caution, however, when considering high-throughput approaches. As alluring as they may sound, they carry with them real limitations that should be considered fully prior to their implementation.
Glossary
Abbreviations:
- hERG: human ether-a-go-go related gene
Reference
- Pollard DE, Gerges NA, Bridgland-Taylor MH, Easter A, Hammond T, Valentin J-P. An introduction to QT interval prolongation and non-clinical approaches to assessing and reducing risk. Br J Pharmacol. 2010;159:12–21. doi: 10.1111/j.1476-5381.2009.00207.x.