In May 2005, the ICH E14 document provided guidance on assessing the propensity of a new drug to cause QT interval prolongation [1]. Regulatory agencies worked closely with pharmaceutical companies to develop the trial methodology for the thorough QT (TQT) study, which has since become a standard component of the programme for evaluating new molecular entities. While it is undoubtedly useful to have detailed data on potentially harmful compounds, the mushrooming of TQT studies has created a conundrum for clinical pharmacology journals. Back in 2010, Darpo reported that the US regulators had already reviewed 112 such trials, yet only a fraction of these had been published [1]. Should greater numbers of TQT studies be submitted to and published in scientific journals?
Proponents of scientific transparency and full release of datasets would undoubtedly argue that results of all clinical trials should be made publicly available. There is clear merit in this argument, but journal editors may counter with the viewpoint ‘Yes certainly, but not in my journal’. There are a number of difficult issues here. What is the added value of publishing TQT studies of every new molecular entity that is being considered? Does it change or improve clinical practice? Will publication of such studies stimulate major improvements in research design and methods amongst the academic community and pharmaceutical industry?
As always, the first step is to take stock of the situation. I conducted a simple PubMed search covering the last five years, using the terms QT or QTc in the title, combined with associated terms such as placebo or moxifloxacin and trial‐related keywords. This yielded a total of 304 publications, equivalent to just over one paper on the topic of QT intervals every week since 2011. What are the contents of this glut of papers? Obviously it would be hugely time‐consuming to check hundreds of manuscripts, so I took the more expedient step of looking at 20 of the most recently published TQT studies in which a new molecular entity was evaluated.
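A search of this kind can be reproduced programmatically against the NCBI E-utilities service. The sketch below is a rough reconstruction, not the author's saved query: the exact field tags, Boolean grouping, and trial-related keywords are assumptions based on the terms described above.

```python
from urllib.parse import urlencode

# Assumed reconstruction of the search strategy described in the text;
# the precise terms and field tags are illustrative, not the original query.
title_terms = '(QT[Title] OR QTc[Title])'
context_terms = '(placebo OR moxifloxacin)'
trial_terms = '(trial OR randomized OR crossover)'
date_filter = '("2011/01/01"[PDAT] : "2015/12/31"[PDAT])'

query = f'{title_terms} AND {context_terms} AND {trial_terms} AND {date_filter}'

# NCBI E-utilities esearch endpoint (a real public service; no request
# is actually sent here -- we only assemble the URL).
base = 'https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi'
url = base + '?' + urlencode({'db': 'pubmed', 'term': query, 'retmax': 0})

print(query)
```

Fetching the assembled URL would return the total hit count in the esearch XML response, which is how one could verify a figure such as the 304 publications mentioned above.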
Perhaps somewhat unsurprisingly, there were no detectable adverse effects on the QT interval for 19 of the 20 compounds I looked at, whilst in the one remaining instance, the measured change in QT interval was considered by the researchers to be a ‘false positive’ finding [2]. From reading the abstracts, I had no reason to doubt that these studies (all published in the last 12 months) were conducted to a high technical standard that meets or exceeds regulatory requirements, and it was very likely that these manuscripts were sufficiently well written to satisfy peer reviewers and journal editors. Yet, I have to confess to feeling strangely unfulfilled after this rapid survey of the literature. To whom is this research important?
This topic is clearly important to regulatory authorities and the pharmaceutical industry, who need to rule out potentially harmful compounds at an early stage. It may well be important to physicians and patients, but only if there is a high prevalence of compounds that have adverse effects on the QT interval, or if there is already a high degree of suspicion attached to a particular agent. If only a minority of drugs are genuinely linked to QT prolongation, the bulk of data from routine screening of all new molecules will generate vast amounts of reassuring findings of ‘no harm’. This, of course, comes at great expense to the pharmaceutical industry, and with very little in the way of demonstrable cost‐effectiveness (based on estimates suggesting €2.4 million per sudden cardiac death prevented) [3]. Equally, as the specificity of the test procedures has not been established, we do not know how many results are false positives, which may erroneously halt or delay the development of a potentially efficacious drug. Indeed, Hondeghem has argued that QT prolongation is not necessarily a good predictor of ventricular arrhythmias [4].
I personally feel that studies conducted to satisfy routine regulatory requirements should have different priorities for publication from studies that aim to address specific questions or concerns that have been raised by researchers, clinicians, or patients. I am not saying that TQT studies should not be published, but I do intend to raise the question of the best place for these datasets. As I firmly believe in full reporting of trial data, I suggest that we consider other options to make the data available. For instance, editors of journals that offer a mixed model of subscription, as well as open access, may choose to channel TQT manuscripts to the open access option. This, in effect, allows authors to bear the costs for publication of data in circumstances where the journal editors have judged the study to be of satisfactory technical quality but of lower importance relative to other manuscripts.
Personally, and perhaps more controversially, I advocate another option, which is that of the curated database. This could be part and parcel of the regulatory process, whereby QT data from regulatory submissions could be stored in a structured open‐access database. One major advantage of this option is that it reduces the resource‐intensive steps of having two or three peer reviewers scrutinize TQT manuscripts. After all, the methodology is standardized as well as being very well defined, and the regulatory authorities are perfectly capable of judging the quality of the work. Peer reviewers for scientific journals are scarce resources and are already groaning under the weight of numerous requests from multiple directions. Why not conserve our resources for work that merits closer scrutiny?
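To make the curated-database proposal concrete, one might imagine each regulatory submission reduced to a small structured record. The layout below is entirely hypothetical; the field names are my own invention, though the 10 ms threshold for declaring a TQT study negative is the one specified in ICH E14.

```python
from dataclasses import dataclass

# Hypothetical record layout for the proposed curated TQT database;
# field names are illustrative assumptions, not an existing standard schema.
@dataclass
class TQTRecord:
    compound: str                # new molecular entity studied
    positive_control: str        # typically moxifloxacin
    supratherapeutic_arm: bool   # was a supratherapeutic dose included?
    max_ddqtc_ms: float          # largest placebo-adjusted change from baseline in QTc
    upper_ci_ms: float           # upper bound of the one-sided 95% CI for that change

    def is_negative(self) -> bool:
        # Per ICH E14, a TQT study is 'negative' when the upper confidence
        # bound for the QTc effect stays below 10 ms.
        return self.upper_ci_ms < 10.0

record = TQTRecord('compound X', 'moxifloxacin', True, 3.2, 6.8)
print(record.is_negative())  # a reassuring 'no harm' result
```

A database of such records could be queried and aggregated directly, sparing both peer reviewers and readers the laborious literature searches described earlier.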
There are parallels to be drawn here from the past. Years ago, the advent of quicker and less expensive technology for quantifying genetic markers soon led to a profusion of publications reporting allele frequencies in diverse sets of people. The fad had to pass, and it did. Nowadays, gene frequency data can, and should be, deposited in one of many freely available online databases. Why not do the same for thorough QT studies? It would be quick and efficient, and the data would become available much sooner than if it had gone through a typical scientific publication route, with time spent on drafting manuscripts and dealing with inevitable rejections and requests for revisions before finally finding a home for the manuscript. Finally, the beauty of a curated database is, of course, the ease and convenience of looking up and downloading results for a wide range of new compounds, instead of having to run a potentially laborious PubMed search. Of course, pharmacological science is progressing all the time, and who knows whether the TQT study will soon be replaced by another beast such as exposure‐response analysis based on careful QT assessments in early phase trials [5]?
Loke YK. The thorough QT study – do we need more of the same? Br J Clin Pharmacol 2016; 81: 400–401. doi: 10.1111/bcp.12871.
References
- 1. Darpo B. The thorough QT/QTc study 4 years after the implementation of the ICH E14 guidance. Br J Pharmacol 2010; 159: 49–57.
- 2. Barbour AM, Magee M, Shaddinger B, Arya N, Tombs L, Tao W, Patel BR, Fossler MJ, Glaser R. Utility of concentration‐effect modeling and simulation in a thorough QT study of losmapimod. J Clin Pharmacol 2015; 55: 661–70.
- 3. Bouvy JC, Koopmanschap MA, Shah RR, Schellekens H. The cost‐effectiveness of drug regulation: the example of thorough QT/QTc studies. Clin Pharmacol Ther 2012; 91: 281–8.
- 4. Hondeghem LM. QTc prolongation as a surrogate for drug‐induced arrhythmias: fact or fallacy? Acta Cardiol 2011; 66: 685–9.
- 5. Darpo B, Garnett C, Keirns J, Stockbridge N. Implications of the IQ‐CSRC Prospective Study: time to revise ICH E14. Drug Saf 2015; 38: 773–80.
