Abstract
There is increasing concern about the reliability of biomedical research, with recent articles suggesting that up to 85% of research funding is wasted. This article argues that an important reason for this is the inappropriate use of molecular techniques, particularly in the field of RNA biomarkers, coupled with a tendency to exaggerate the importance of research findings.
Keywords: Reproducibility, Biomedicine, qPCR, Microarrays, Cancer, Next generation sequencing
1. Time to listen!
Most researchers are aware that the translation into clinical practice of even the most promising findings of basic research is slow and rare [1] and that the clinical effectiveness of drugs cannot be predicted reliably from early research-based evidence [2]. A string of five articles and an editorial published in The Lancet in early 2014 addressed this issue, reaching the staggering conclusion that 85% of biomedical funding is wasted on research that is ill-conceived, poorly executed, inappropriately analysed, inadequately reported, side-tracked by bias and stifled by red tape.
The first article considers research priorities and questions their relevance for human health in the context of a worldwide investment in biomedical research of around US$240 billion [3]. The second highlights the usual absence of detailed written protocols for and poor documentation of research and comments on the systematic preference for quantity rather than quality, and novelty rather than reliability [4]. The third article discusses the modern approach to the regulation, governance, and management of biomedical research and emphasises how inefficient management can easily compromise the interests of patients and the public [5]. The fourth points out that health research protocols, full study reports and participant-level datasets are rarely made available, that there is selective reporting of methods and results, and that this introduces bias, detrimentally affects the care of patients and wastes huge amounts of research funding [6]. The final article re-emphasises the absolute requirement for accurate, exhaustive and transparent reporting and notes that whilst reporting guidelines are important, adoption of and adherence to them remain far lower than they should be [7]. Finally, the editorial discusses the consistent and colossal failure of initially promising research findings to translate into improvements in health in light of the many economic, political, social and cultural factors that influence researchers, funders, regulators, institutions and companies [8]. The authors make the revolutionary suggestion that rather than using journal impact factors to assess academics, it might be more constructive to gauge the methodological rigour, transparency of reporting and reproducibility of results of their output, which would of course facilitate the publication of more reliable and biologically relevant data.
2. Time to act!
Clarion calls don’t come much louder, more insistent or more authoritative. The authors’ principal recommendations involve proposals to improve every aspect of research project procedures relating to standardisation, governance and the research environment infrastructure:
• Enhance translational research outcomes by increasing clarity from researchers, funders and regulators with regard to research proposal justification and transparency of protocols [3].
• Make research results more reliable and relevant by ensuring public availability of protocols, analysis plans and raw data, and by providing continuous training and professional development coupled with reward incentives [4].
• Streamline the management of biomedical research to make it fit for purpose by harmonising the regulations and processes affecting research, together with appropriate research designs and monitoring [5].
• Make information more widely available by adopting performance metrics for the distribution of research protocols and results, developing standards for data-sharing practices and introducing legally enforced policies on information sharing [6].
• Ensure that research reports address the needs of research users by encouraging better and more complete reporting and investing in the development and maintenance of an infrastructure for reporting, reviewing and archiving [7].
3. Is anyone listening?
These are of course very laudable aims and all make perfect sense, yet it is hard not to cast a weary eye over previous such propositions, generally published by the same cohort of authors in similarly prominent high impact factor journals. It really is striking just how long there have been reports about the poor quality of research methodology, inadequate implementation of research methods, use of inappropriate analysis procedures and lack of transparency in reporting. All have failed to stir researchers, funders, regulators, institutions or companies into action.
For example, an earlier, shorter, but similar narrative of the problems facing applied research expressed surprise at the level of waste caused by issues such as poorly designed research studies pursuing the wrong questions, then either delaying or failing to publish in peer-reviewed journals and finally reporting outcomes selectively [9]. The authors’ list of recommendations to address these issues was rather similar to the ones published five years later, and is echoed by several complementary analyses of the challenges facing -omics research with regards to sample size and bias [10], validation practices [11] and design, conduct, and analysis of such studies [12]. Whether these expositions have had much impact is anyone's guess, but the fact that there is an unremitting reiteration of the same issues suggests not. Remarkably in this age of supposed austerity, neither politicians nor the public seem to care that the majority of research effort is wasted. It is shocking that whilst primary health care budgets are being trimmed, patient waiting times are increasing and the medical infrastructure is crumbling, no media investigation scrutinises the huge waste that is so evident to researchers themselves and, if stopped, would bring huge benefits to “our health, our comfort and our grasp of truth” [12]. It is truly extraordinary that whilst most researchers are aware of the fundamental flaws afflicting biomedical research, very few individuals, institutions, journal editors or funding agencies are prepared to put their heads above the parapet of what sometimes looks suspiciously like conspiratorial silence.
4. Lack of reproducibility
There has been a view for some time that the major problem is no longer the validity of expression measurements but rather the reliability of inferences drawn from the data [13]. This conveniently disregards the fact that the results from thousands of these studies remain in the scientific literature and are likely to confuse current opinions and confound future studies. Furthermore, whilst inappropriate data analysis is of course a major problem, the widespread disregard for and dismissal of technical variability continues to be at the heart of much of the irreproducibility endemic in biomedical research. This is most obviously demonstrated by the fact that independent researchers cannot reproduce the vast majority of peer-reviewed published work.
Whilst repeatability, the successful replication of results and conclusions from one study in the same laboratory, is important, reproducibility, defined as the independent confirmation of results and conclusions from a study, is the bedrock on which the scientific principle is based. Sadly, it has become clear that the results reported by most publications in the biomedical field do not fulfil that basic criterion. In a recent investigation at least half of researchers surveyed had the personal experience of being unable to reproduce published data and many were unable to resolve the issue even when helped by the original authors [14]. In another investigation 11/12 (92%) studies could not be reproduced when the methods outlined in the original papers were replicated exactly, 26/38 (68%) when the methods were adapted and 2/2 (100%) when different methods were used [15]. Incidentally, it is unclear whether the authors analysed 67 or 57 projects, as the numbers from one of their figures add up to 67, whereas those from another add up to 57. A third study could not reproduce 47/53 (89%) of findings even when the laboratories that conducted the original experiments were consulted about their methods and, in some cases, experiments were repeated in the original laboratory [16]. Tellingly, all studies for which findings could be reproduced provided complete information on reagents and authors had paid close attention to controls and investigator bias. An examination of 271 in vivo studies using rodent models or non-human primates concluded that only 60% of the articles included information about the number and characteristics of the animals and only 70% provided detailed descriptions of the statistical analyses [17].
In a similar vein, 54% of all resources referred to in over 200 recent publications in the fields of Neuroscience, Developmental Biology, Immunology, Cell and Molecular Biology and General Biology could not be adequately identified, regardless of domain, journal impact factor, or reporting requirements [18]. Yet another investigation set out to assess the study design, statistical analyses and reporting of 156 papers published in 2008 on cerebrovascular research. Few papers were hypothesis-driven or properly designed, many did not include adequate information on methods and results, and most were characterised by poor statistical detail [19]. All these observations lead to the inevitable conclusion that there is a spectacular lack of attention to detail by researchers, reviewers and editors.
5. Technical issues
The viewpoint that there is no longer a problem with the validity of expression measurements is also challenged by the clear evidence that molecular techniques can be unfit for purpose. The problems associated with real-time quantitative PCR (qPCR) have been extensively aired [20], [21], [22], [23], [24], [25], [26], [27], [28], but the emphasis on qPCR has concealed the complications associated with the reverse transcription (RT) step, which converts RNA into cDNA. RT is a basic and essential molecular technique that feeds into a number of other methods that have become the mainstay of applications in modern biology, diagnostics, medicine, forensics and biotechnology. This reaction is carried out by a choice of different enzymes with different properties that are not just enzyme-specific but are also dependent on the advanced buffers supplied with the enzymes. There was early recognition that a consideration of mRNA structure is essential prior to any investigation into gene expression [29] and that reverse transcriptases (RTases) differ in their ability to transcribe through the extensive secondary structure in mRNA [30], [31]. This led to the realisation that both technical and biological variability had to be considered when analysing results from RT-qPCR experiments and that the variability of the RT step presented an important impediment to reliable data interpretation [32]. Subsequent reports showed that the method of cDNA priming affects both accuracy and reproducibility of RT-qPCR experiments [33], [34] and that reactions primed by target-specific primers are linear over a wider range than similar reactions primed by random primers [35]. The first empirical evidence for high variability being an inherent property of the RT step was provided in 2004 with two rather important, but woefully overlooked publications. 
The first showed that RT-qPCR gene expression measurements are comparable only when the same priming strategy, reaction conditions and RNA concentration are used in all experiments [36]. The second reported RT- and target-dependent cDNA yields that differed by up to 100-fold [37], something that also applies to digital RT-PCR [38]. The paper's judicious recommendation that assays should be run at least as duplicates starting with the reverse transcription reaction continues to be ignored, even though the soundness of that advice was demonstrated by a follow-up publication [39]. Yet another paper reported gene-related factors as the primary determinants of RT variability and called for an evaluation of the RT robustness of control genes in RT-qPCR normalisation [40]. A recent paper has demonstrated that the fold changes reported by most RNA biomarkers are well within the range of variability observed when multiple RT reactions are carried out on the same template [41]. The conclusions from these papers are rather stark and raise question marks over many of the results reported not just in the biomedical literature but in the scientific literature as a whole. Unfortunately, recommendations for improvement are universally ignored and a large number of publications that use the RT step are characterised by inadequate and non-transparent reporting and conclusions that are based on expression changes well within experimental variability.
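To put such variability in perspective, a purely illustrative calculation (the 0.5-cycle standard deviation is an assumed figure for demonstration, not taken from the cited studies) shows how quickly Cq scatter between replicate RT reactions translates into fold-change uncertainty:

```python
def fold_change_interval(cq_sd, z=1.96):
    """Approximate 95% fold-change interval implied by Cq scatter between
    replicate RT reactions, assuming ~100% PCR efficiency (one cycle = one
    doubling). cq_sd is the standard deviation of Cq in cycles."""
    half_width = z * cq_sd            # interval half-width in Cq units
    return 2 ** -half_width, 2 ** half_width

# Assumed SD of 0.5 cycles between RT replicates (hypothetical value)
lo, hi = fold_change_interval(0.5)
print(f"95% fold-change interval: {lo:.2f}x to {hi:.2f}x")
```

Under this assumption the interval spans roughly 0.5- to 2-fold, so a reported 1.5-fold expression change sits entirely within technical noise unless the RT step itself is replicated, which is precisely the point the papers above make.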
6. MIQE
The “minimum information for publication of qPCR experiments” (MIQE) guidelines constitute a list of recommendations that target the reliability of qPCR results, with the aim of helping to ensure the integrity of the scientific literature, promote consistency between laboratories, and increase experimental transparency [25]. They are the most widely cited of such initiatives, which include MIAME [42] and MISFISHIE [43] targeted at various molecular techniques. MIQE is based on an earlier proposal, published in 2002, aimed at improving the quality of RT-qPCR experiments [44] and covers every aspect important to the qPCR assay itself, as well as issues relating to pre- and post-assay parameters. By 2009 it was pretty obvious that many publications using PCR-based methods were seriously flawed and that this impaired the readers’ ability to evaluate critically the quality of the published results. The actual trigger for the publication of the MIQE guidelines was provided by the after-effects of a paper that used RT-qPCR to detect measles virus in children with autism [45]. These results were never reproduced independently [46], [47], [48] nor repeated by the authors themselves [49]. An analysis of the raw data underlying the original publication revealed a series of mistakes, inappropriate analysis methods and misinterpretations that completely invalidated any of the paper's findings [50], [51]. Despite the presentation of detailed evidence at the major autism trials in the USA in 2007, and despite three initial judgements and three subsequent appeal judgements confirming that the underlying data were unreliable, this paper has never been retracted and its false conclusions remain uncorrected in the scientific literature.
The number of citations of the original MIQE publication in the peer-reviewed literature has just passed 3000; it has been followed by several editorials on its implementation [26], [52], [53] and has encouraged the publication of MIQE guidelines for digital PCR [54]. All major instrument and reagent manufacturers incorporate the guidelines into training their field application specialists, actively encourage their implementation, support and organise worldwide workshops and publish effective guides to help their realisation [55], [56]. Unfortunately, researchers themselves are far less enthusiastic and, in addition to the thousands of peer-reviewed papers that are already published, thousands more are published every year without regard for the MIQE guidelines and report results and conclusions that are unlikely to be correct [57], [58]. Recent surveys of qPCR-based publications found that the problems addressed by the MIQE guidelines continue to persist [28], [59] and an investigation focusing on normalisation procedures in RT-qPCR-associated colorectal cancer biomarkers concluded that virtually none of the studies could be reliably assessed [60].
7. Biomarkers
The identification of clinically useful biomarkers has been one of the most active areas of research, with tens of thousands of papers claiming to have identified a wide range of diagnostic, prognostic or predictive biomarkers of human disease [8]. However, conclusions are often based on large-scale measurements of biological processes that “drown” any understanding in a “sea of measurements” [61] and on inapplicable statistics [62]. This headlong stampede has only occasionally been punctuated by thoughtful, insightful and hence ignored counsels for caution, such as that provided by one of the earliest discussions of biomarkers, which also included a checklist for evaluating their reliability prior to use in a clinical context [63].
The starting point for most biomarkers continues to be the analysis of microarray results, even though there is plenty of evidence that should counsel caution. One study that deserves to be considered more widely highlights the discordance between microarray results and those from non-microarray-based clinical and biological data [64]. Others have identified poor study design and patient selection, reliance on p-values, as well as technical and analytical issues as obstacles in the identification of practically useful biomarkers, and include recommendations for marker reporting [65], [66]. A fairly recent report comments that “the most spectacular published signatures predictive for chemotherapy response are based on unreliable data” [67]. Another analysis is more optimistic in that it suggests that “reproducible [microarray] data can be achieved and clinical standards are beginning to emerge”, but adds the caveat that there is a need for establishing a suitable workflow to “correctly sample a tumour, isolate RNA and process this for microarray analysis, evaluate the data, and communicate findings in a consistent and timely fashion” [68].
The need for easy and complete public data access [7] and data checklists [69] is highlighted by the results of studies that reanalyse such data and cannot reproduce the original analyses. An analysis of 46 microarray studies addressing major cancer outcomes revealed that they were characterised by selective data analysis and discussion of results [70], whereas the analyses from only two of 18 microarray-based gene expression profiling studies could be reproduced [71]. Suboptimal design and inadequate assay information are typical features of cancer prognostic marker studies [72] and when combined with the strong reporting bias characteristic of many articles suggests that this literature may be largely unreliable [73]. The problem is as serious today as it was five, ten and 15 years ago. A recent microarray study found that the human genomic response to acute inflammatory stresses is not reproduced in current mouse models [74], a finding that challenges the use of mice as essential animal models in biomedical research. However, a reanalysis of the data not only found that human and murine gene expression patterns associated with inflammatory conditions are congruent, but also identified a number of pathways commonly regulated in humans and mice [75]. Incidentally, analysis-related discordance is just as apparent in other biomedical areas of research: 35% of published reanalyses of data from randomised clinical trials (RCTs) led to changes in findings that implied conclusions different from those of the original article about the types and number of patients who should be treated [76]. In addition, not only are many publications describing RCT outcomes inconsistent with their protocol, but also there is clear evidence for publication as well as outcome reporting bias. Studies reporting significant results are more likely to be published and outcomes that are statistically significant have higher odds of being fully reported [77]. 
If outcomes of RCTs are to be clinically useful, there is an obvious need to publish all results and outcomes, regardless of their significance.
Next generation sequencing (NGS), which allows hypothesis-neutral analysis with accuracy and a wide dynamic range and has started to supersede microarrays, is not exempt from these challenges. The variable lengths of RNA obtained after fragmentation prior to sequencing can be a source of bias, resulting in incorrect ranking of differentially expressed genes [78], and the technical variability of transcriptome sequencing experiments is sufficiently high that RNA abundance estimates can disagree substantially even when coverage levels are high [79]. Quality assessment of sequencing data also appears to be somewhat less than complete: one group of authors expressed surprise on noticing that standard DNA alignment algorithms assume that sequences are accurate and do not allow for the systematic incorporation of quality parameters into sequence alignments [80]. Unless prompt action is taken, it is easy to see how NGS data will become as unreliable as those obtained from qPCR and microarray experiments, except that there will be vastly more of them and troubleshooting will be proportionately more difficult.
8. Hype and retractions
The peer-reviewed literature is widely perceived as comprising articles that have been carefully evaluated by experts in the field for methodological consistency, completeness and soundness as well as for the logical consistency of scientific claims based on experimental results. Whilst it is acknowledged that not every conclusion is correct and that subsequent new data can lead to a re-evaluation of those findings, the combination of rigorous peer-review and publication in reputable journals bestows on peer-reviewed papers the appearance of abidance, authenticity and authority.
Whatever the veracity of this perception, the route by which research findings are disseminated to a wider public is considerably less assured. An examination of press releases by high-profile medical journals found that these do not routinely highlight study limitations, disclose the role of industry funding and present findings perceived to be newsworthy in a way that may exaggerate their importance [81]. The somewhat unhelpful advice of Nature Genetics for a scientist who finds his work being thus glorified is to reduce the hype by explaining the implications of the work clearly “without using our field's rich jargon” [82]. Unselective press coverage is also apparent, as demonstrated by a study that followed up the work presented in 147 conference abstracts and reported in 252 news stories [83]. After three years 25% of abstracts remained unpublished, 25% were published in low impact factor journals and 50% in journals with high impact factors. Thirty-nine publications chosen for front-page coverage followed the same pattern, suggesting that media attention is undiscerning and not focused on quality research. Most of these reports would have been preliminary findings with tentative validity and indiscriminate press coverage is likely to create a misplaced impression of confidence and validity and is detrimental to engendering trust in research or individuals engaged in research. Interestingly, sometimes even researchers themselves cannot agree on the relevance of their own published work. It should not be unreasonable to expect that all authors contributing to a research publication agree on the interpretation of results and any conclusions that are put forward. Remarkably, that is not necessarily the case as revealed by the results of a small study of ten papers that appeared in The Lancet [84]. 
These indicated that the discussions did not reflect the opinions of all contributing authors and that the authors frequently disagreed about the importance and implications of their own findings and where their research was heading.
Sometimes, it is necessary for a published article to be retracted. For example, a high profile article that used publicly available data to identify signatures from microarray profiles of human cancer cell lines with known in vitro drug sensitivity or resistance to predict in vivo chemotherapeutic responses [85] had to be withdrawn [86] after clinical trials had started, because independent reanalysis of the data showed unintentional mislabelling and indexing errors that resulted in none of the original results being reproducible [87]. Retractions make up a minuscule percentage of all publications (871/>17 million between 1950 and 2007) [88], although the number of articles retracted per year has increased by between 11- and 19-fold from 2001 to 2010 [89]. Editors of high impact factor journals often use another device, “expressions of concern”, prior to retraction to question published data, with six journals accounting for 40% of such notifications [89]. Ironically, there is disagreement in the literature about the effects and subsequent impacts of retracted publications. It is not clear whether there is a correlation between journal impact factor and number of retractions, with one analysis showing none [90] and another reporting a strong correlation [91]. Certainly our own data show very clearly that there is a negative correlation between transparency of reporting and journal impact factor [28].
The most common reason for retraction is the uncovering of a scientific mistake, although there is increasing evidence of fabrication and data plagiarism [92], [93], [94]. Neither cause is usually cited as a reason for retraction [95], even though these papers represent a calculated effort to deceive [96]. Hence it is rather disconcerting that retractions may not be an effective means of removing fraudulent results from the literature [97]. An analysis of the impact of retracted articles in the biomedical literature from 1966 to 1997 found that even after publication of the retraction most of these articles continue to be cited as valid work with no mention of the retraction [92], with most citations implicitly or explicitly positive [98] even up to 24 years post retraction [99]. A more recent publication also found that a retraction did not affect the paper's citation rate within 12 months of retraction [100] and an investigation of the effect of rebuttals on seven high-profile original articles found that original articles were cited much more frequently than the rebuttals, with no reduced effect on annual citation numbers [101]. In contrast, other reports found that retraction significantly reduced subsequent citation [102], [103] and that this reduction extends to the authors’ other published work [104]. Remarkably, an unexpected 18% of authors self-cite retracted work post retraction (why would they do that?), but only 10% of those authors also cite the retraction notice [105]. Around 1% of the NIH budget between 1992 and 2012 (US$58 million) was spent on papers retracted due to misconduct, with each article accounting for a mean of $392,582 in direct costs [106].
Not surprisingly, there have been calls to institute more aggressive means of notification to the scientific community [107], yet the topic is rarely mentioned and publishers differ in their retraction policies, which can make it not immediately obvious that a paper has been retracted [108]. Although retrieval of a publication in PubMed also retrieves the retraction statement, authors often rely on other authors’ citations of that work and so inadvertently enhance that article's legitimacy.
9. Biological relevance
I would like to conclude with a brief mention of another reason for the low ratio of translation of research promise to real-life biomedical application. Much biomedical research is performed in a clinical and methodological vacuum [109] and research findings are frequently extrapolated from disease models that have little resemblance to the corresponding human condition [110]. Even though their unsuitability is occasionally recognised [111], [112], [113], [114], [115], [116], [117], [118], there is a need to acknowledge more universally and transparently the limitations in the clinical translation of non-human models in general to man [119]. No matter how transparent, well-designed, analysed and reported, if research results are obtained from inappropriate models they can never be translated into a valid human disease context.
10. Conclusions
Late in the day, editors of some of the major scientific journals have come to realise that they have been seriously negligent in their approach to overseeing the publication of reliable research results and have started to confront their responsibilities [120], [121], [122]. A series of similarly worded editorials appeared in Nature Structural Biology [123], Nature Genetics [124], Nature Neuroscience [125], Nature Immunology [126], Nature Medicine [127], Nature Methods [128] and Nature Cell Biology [129] and announced measures to “reduce irreproducibility”. These are a welcome, if long overdue, acknowledgement of the journals’ acquiescence to such vast quantities of inadequate reporting of experimental detail in much of the peer-reviewed literature.
However, the proof is in the pudding, and an analysis of ten random papers published in late 2014 in Nature-titled journals [130]–[139] reveals the astonishing fact that there is actually less technical information provided than in the past (Table 1). Every single publication uses a single, unvalidated reference gene for normalisation (all old favourites), all use the 2^−ΔΔCq method with no attempt to calculate PCR efficiency, and reporting of RNA quality control measures is non-existent. Yet for the ΔΔCq calculation to be valid, the amplification efficiencies of target and reference must be approximately equal [140]; if they are not, the PCR must be optimised [141]. What hope is there of ever changing anything if journal editors choose to ignore their own guidelines?
Table 1.
A search was carried out on the Nature web site (www.nature.com) for the term “reverse transcription real time PCR” and ten papers published in the second half of 2014 were selected at random. Materials and methods sections and supplementary information were examined for information on RT-qPCR protocols.
Reference | [130] | [131] | [132] | [133] | [134] | [135] | [136] | [137] | [138] | [139] |
---|---|---|---|---|---|---|---|---|---|---|
RNA integrity | No | No | No | No | No | No | No | No | No | No |
RNA purity | No | No | No | No | No | No | No | No | No | No |
RT conditions | No | No | No | No | No | No | No | No | No | No |
PCR efficiency | No | No | No | No | No | No | No | No | No | No |
Primer sequence | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
Reference genes | β-Actin | 18S rRNA | 18S rRNA | GAPDH | Rpl32 | GAPDH | Actin | HPRT | GAPDH | GAPDH |
Validated | No | No | No | No | No | No | No | No | No | No |
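The efficiency caveat above can be made concrete with a short sketch (the Cq values and the 1.85 target efficiency are hypothetical, chosen only to illustrate the size of the error): the 2^−ΔΔCq calculation assumes both target and reference assays double per cycle, whereas an efficiency-corrected ratio in the style of Pfaffl uses each assay's own measured amplification efficiency E.

```python
def fold_change_ddcq(cq_tgt_sample, cq_ref_sample, cq_tgt_control, cq_ref_control):
    """Classic 2^-ddCq: valid only if both assays amplify with E = 2."""
    ddcq = (cq_tgt_sample - cq_ref_sample) - (cq_tgt_control - cq_ref_control)
    return 2 ** -ddcq

def fold_change_corrected(cq_tgt_sample, cq_ref_sample, cq_tgt_control,
                          cq_ref_control, e_tgt=2.0, e_ref=2.0):
    """Efficiency-corrected ratio: each assay contributes its own E."""
    return (e_tgt ** (cq_tgt_control - cq_tgt_sample) /
            e_ref ** (cq_ref_control - cq_ref_sample))

# Hypothetical Cq values: target shifts by 3 cycles, reference is unchanged
naive = fold_change_ddcq(22.0, 20.0, 25.0, 20.0)                    # 8-fold
corrected = fold_change_corrected(22.0, 20.0, 25.0, 20.0,
                                  e_tgt=1.85, e_ref=2.0)            # ~6.3-fold
print(f"2^-ddCq: {naive:.1f}x, efficiency-corrected: {corrected:.1f}x")
```

With the target assay running at 85% efficiency (E = 1.85) rather than the assumed 100%, the uncorrected calculation overstates the fold change by roughly a quarter, which is why unvalidated efficiencies undermine the results in Table 1.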
In contrast, research published in Biomolecular Detection and Quantification (BDQ) will have to pass a careful double-blind peer-review process that will examine the validity of its molecular study design, measurement, data analysis and reporting. Our aim is to establish a reputation for unbiased publishing of studies that adhere to best practice guidelines and promote excellence in molecular measurement and its data analysis. Subject areas are deliberately broad to allow BDQ to serve as a repository for sharing key findings across what may otherwise be disparate specialties. It is hoped that BDQ will attract a wide range of critical, thoughtful, well-designed and excellent publications to help provide the impetus towards a more transparent and meaningful scientific literature.
Finally, the question “Modern biomedical research: an internally self-consistent universe with little contact with medical reality?” [110] should now be extended to include the addition “and based on an unacceptable level of technical proficiency”. It will be a long time before the many contradictions apparent in every area of the life sciences are addressed, never mind corrected [142]. But who is going to lead this effort, and how many vested interests will conspire to retain the status quo?
11. Statement
The investigation of the measles virus/autism publication was carried out for the MMR vaccine litigation at the High Court of Justice in London and at the US Vaccine Court. The author acted as an expert witness and was compensated by the solicitors acting for the principal defendants, SmithKline Beecham Plc and Smith Kline & French Laboratories Ltd., Merck & Co Inc. and Sanofi Pasteur MSD Ltd. The author also acted as an expert witness for the US Department of Justice and was remunerated for that work.
Acknowledgement
I am grateful to Greg Shipley for his helpful comments.
References
- 1.Contopoulos-Ioannidis D.G., Ntzani E., Ioannidis J.P. Translation of highly promising basic science research into clinical applications. Am J Med. 2003;114:477–484. doi: 10.1016/s0002-9343(03)00013-5.
- 2.Naci H., Ioannidis J.P. How good is evidence from clinical studies of drug effects and why might such evidence fail in the prediction of the clinical utility of drugs? Annu Rev Pharmacol Toxicol. 2015;55:169–189. doi: 10.1146/annurev-pharmtox-010814-124614.
- 3.Chalmers I., Bracken M.B., Djulbegovic B., Garattini S., Grant J., Gulmezoglu A.M. How to increase value and reduce waste when research priorities are set. Lancet. 2014;383:156–165. doi: 10.1016/S0140-6736(13)62229-1.
- 4.Ioannidis J.P., Greenland S., Hlatky M.A., Khoury M.J., Macleod M.R., Moher D. Increasing value and reducing waste in research design, conduct, and analysis. Lancet. 2014;383:166–175. doi: 10.1016/S0140-6736(13)62227-8.
- 5.Al-Shahi Salman R., Beller E., Kagan J., Hemminki E., Phillips R.S., Savulescu J. Increasing value and reducing waste in biomedical research regulation and management. Lancet. 2014;383:176–185. doi: 10.1016/S0140-6736(13)62297-7.
- 6.Chan A.W., Song F., Vickers A., Jefferson T., Dickersin K., Gotzsche P.C. Increasing value and reducing waste: addressing inaccessible research. Lancet. 2014;383:257–266. doi: 10.1016/S0140-6736(13)62296-5.
- 7.Glasziou P., Altman D.G., Bossuyt P., Boutron I., Clarke M., Julious S. Reducing waste from incomplete or unusable reports of biomedical research. Lancet. 2014;383:267–276. doi: 10.1016/S0140-6736(13)62228-X.
- 8.Macleod M.R., Michie S., Roberts I., Dirnagl U., Chalmers I., Ioannidis J.P. Biomedical research: increasing value, reducing waste. Lancet. 2014;383:101–104. doi: 10.1016/S0140-6736(13)62329-6.
- 9.Chalmers I., Glasziou P. Avoidable waste in the production and reporting of research evidence. Lancet. 2009;374:86–89. doi: 10.1016/S0140-6736(09)60329-9.
- 10.Ioannidis J.P. Why most published research findings are false. PLoS Med. 2005;2:e124. doi: 10.1371/journal.pmed.0020124.
- 11.Ioannidis J.P., Khoury M.J. Improving validation practices in omics research. Science. 2011;334:1230–1232. doi: 10.1126/science.1211811.
- 12.Ioannidis J.P. How to make more published research true. PLoS Med. 2014;11:e1001747. doi: 10.1371/journal.pmed.1001747.
- 13.Blalock E.M., Chen K.C., Stromberg A.J., Norris C.M., Kadish I., Kraner S.D. Harnessing the power of gene microarrays for the study of brain aging and Alzheimer's disease: statistical reliability and functional correlation. Ageing Res Rev. 2005;4:481–512. doi: 10.1016/j.arr.2005.06.006.
- 14.Mobley A., Linder S.K., Braeuer R., Ellis L.M., Zwelling L. A survey on data reproducibility in cancer research provides insights into our limited ability to translate findings from the laboratory to the clinic. PLOS ONE. 2013;8:e63221. doi: 10.1371/journal.pone.0063221.
- 15.Prinz F., Schlange T., Asadullah K. Believe it or not: how much can we rely on published data on potential drug targets? Nat Rev Drug Discov. 2011;10:712. doi: 10.1038/nrd3439-c1.
- 16.Begley C.G., Ellis L.M. Drug development: raise standards for preclinical cancer research. Nature. 2012;483:531–533. doi: 10.1038/483531a.
- 17.Kilkenny C., Parsons N., Kadyszewski E., Festing M.F., Cuthill I.C., Fry D. Survey of the quality of experimental design, statistical analysis and reporting of research using animals. PLoS ONE. 2009;4:e7824. doi: 10.1371/journal.pone.0007824.
- 18.Vasilevsky N.A., Brush M.H., Paddock H., Ponting L., Tripathy S.J., Larocca G.M. On the reproducibility of science: unique identification of research resources in the biomedical literature. PeerJ. 2013;1:e148. doi: 10.7717/peerj.148.
- 19.Vesterinen H.M., Egan K., Deister A., Schlattmann P., Macleod M.R., Dirnagl U. Systematic survey of the design, statistical analysis, and reporting of studies published in the 2008 volume of the Journal of Cerebral Blood Flow and Metabolism. J Cereb Blood Flow Metab. 2011;31:1064–1072. doi: 10.1038/jcbfm.2010.217.
- 20.Bustin S.A. A–Z of quantitative PCR. La Jolla, CA: IUL Press; 2004.
- 21.Bustin S.A. Real-time, fluorescence-based quantitative PCR: a snapshot of current procedures and preferences. Expert Rev Mol Diagn. 2005;5:493–498. doi: 10.1586/14737159.5.4.493.
- 22.Kubista M., Andrade J.M., Bengtsson M., Forootan A., Jonak J., Lind K. The real-time polymerase chain reaction. Mol Aspects Med. 2006;27:95–125. doi: 10.1016/j.mam.2005.12.007.
- 23.Bustin S.A. Real-time polymerase chain reaction – towards a more reliable, accurate and relevant assay. Eur Pharm Rev. 2008;6:19–27.
- 24.Bustin S.A. Real-time quantitative PCR – opportunities and pitfalls. Eur Pharm Rev. 2008;4:18–23.
- 25.Bustin S.A., Benes V., Garson J.A., Hellemans J., Huggett J., Kubista M. The MIQE guidelines: minimum information for publication of quantitative real-time PCR experiments. Clin Chem. 2009;55:611–622. doi: 10.1373/clinchem.2008.112797.
- 26.Bustin S.A., Beaulieu J.F., Huggett J., Jaggi R., Kibenge F.S., Olsvik P.A. MIQE precis: practical implementation of minimum standard guidelines for fluorescence-based quantitative real-time PCR experiments. BMC Mol Biol. 2010;11:74. doi: 10.1186/1471-2199-11-74.
- 27.Bustin S.A. Why the need for qPCR publication guidelines? The case for MIQE. Methods. 2010;50:217–226. doi: 10.1016/j.ymeth.2009.12.006.
- 28.Bustin S.A., Benes V., Garson J., Hellemans J., Huggett J., Kubista M. The need for transparency and good practices in the qPCR literature. Nat Methods. 2013;10:1063–1067. doi: 10.1038/nmeth.2697.
- 29.Kuo K.W., Leung M.F., Leung W.C. Intrinsic secondary structure of human TNFR-I mRNA influences the determination of gene expression by RT-PCR. Mol Cell Biochem. 1997;177:1–6. doi: 10.1023/a:1006862304381.
- 30.Buell G.N., Wickens M.P., Payvar F., Schimke R.T. Synthesis of full length cDNAs from four partially purified oviduct mRNAs. J Biol Chem. 1978;253:2471–2482.
- 31.Brooks E.M., Sheflin L.G., Spaulding S.W. Secondary structure in the 3′ UTR of EGF and the choice of reverse transcriptases affect the detection of message diversity by RT-PCR. Biotechniques. 1995;19:806–815.
- 32.Bustin S.A., Dorudi S. Molecular assessment of tumour stage and disease recurrence using PCR-based assays. Mol Med Today. 1998;4:389–396. doi: 10.1016/s1357-4310(98)01324-0.
- 33.Zhang J., Byrne C.D. Differential priming of RNA templates during cDNA synthesis markedly affects both accuracy and reproducibility of quantitative competitive reverse-transcriptase PCR. Biochem J. 1999;337:231–241.
- 34.Lekanne Deprez R.H., Fijnvandraat A.C., Ruijter J.M., Moorman A.F. Sensitivity and accuracy of quantitative real-time polymerase chain reaction using SYBR green I depends on cDNA synthesis conditions. Anal Biochem. 2002;307:63–69. doi: 10.1016/s0003-2697(02)00021-0.
- 35.Bustin S.A., Nolan T. Pitfalls of quantitative real-time reverse-transcription polymerase chain reaction. J Biomol Tech. 2004;15:155–166.
- 36.Stahlberg A., Hakansson J., Xian X., Semb H., Kubista M. Properties of the reverse transcription reaction in mRNA quantification. Clin Chem. 2004;50:509–515. doi: 10.1373/clinchem.2003.026161.
- 37.Stahlberg A., Kubista M., Pfaffl M. Comparison of reverse transcriptases in gene expression analysis. Clin Chem. 2004;50:1678–1680. doi: 10.1373/clinchem.2004.035469.
- 38.Sanders R., Mason D.J., Foy C.A., Huggett J.F. Evaluation of digital PCR for absolute RNA quantification. PLOS ONE. 2013;8:e75296. doi: 10.1371/journal.pone.0075296.
- 39.Tichopad A., Kitchen R., Riedmaier I., Becker C., Stahlberg A., Kubista M. Design and optimization of reverse-transcription quantitative PCR experiments. Clin Chem. 2009;55:1816–1823. doi: 10.1373/clinchem.2009.126201.
- 40.Linden J., Ranta J., Pohjanvirta R. Bayesian modeling of reproducibility and robustness of RNA reverse transcription and quantitative real-time polymerase chain reaction. Anal Biochem. 2012;428:81–91. doi: 10.1016/j.ab.2012.06.010.
- 41.Bustin S., Dhillon H.S., Kirvell S., Greenwood C., Parker M., Shipley G.L. Variability of the reverse transcription step: practical implications. Clin Chem. 2015;61:202–212. doi: 10.1373/clinchem.2014.230615.
- 42.Brazma A., Hingamp P., Quackenbush J., Sherlock G., Spellman P., Stoeckert C. Minimum information about a microarray experiment (MIAME) – toward standards for microarray data. Nat Genet. 2001;29:365–371. doi: 10.1038/ng1201-365.
- 43.Deutsch E.W., Ball C.A., Berman J.J., Bova G.S., Brazma A., Bumgarner R.E. Minimum information specification for in situ hybridization and immunohistochemistry experiments (MISFISHIE). Nat Biotechnol. 2008;26:305–312. doi: 10.1038/nbt1391.
- 44.Bustin S.A. Quantification of mRNA using real-time reverse transcription PCR (RT-PCR): trends and problems. J Mol Endocrinol. 2002;29:23–39. doi: 10.1677/jme.0.0290023.
- 45.Uhlmann V., Martin C.M., Sheils O., Pilkington L., Silva I., Killalea A. Potential viral pathogenic mechanism for new variant inflammatory bowel disease. Mol Pathol. 2002;55:84–90. doi: 10.1136/mp.55.2.84.
- 46.Afzal M.A., Ozoemena L.C., O'Hare A., Kidger K.A., Bentley M.L., Minor P.D. Absence of detectable measles virus genome sequence in blood of autistic children who have had their MMR vaccination during the routine childhood immunization schedule of UK. J Med Virol. 2006;78:623–630. doi: 10.1002/jmv.20585.
- 47.D'Souza Y., Fombonne E., Ward B.J. No evidence of persisting measles virus in peripheral blood mononuclear cells from children with autism spectrum disorder. Pediatrics. 2006;118:1664–1675. doi: 10.1542/peds.2006-1262.
- 48.D'Souza Y., Dionne S., Seidman E.G., Bitton A., Ward B.J. No evidence of persisting measles virus in the intestinal tissues of patients with inflammatory bowel disease. Gut. 2007;56:886–888. doi: 10.1136/gut.2006.119065.
- 49.Hornig M., Briese T., Buie T., Bauman M.L., Lauwers G., Siemetzki U. Lack of association between measles virus vaccine and autism with enteropathy: a case-control study. PLoS ONE. 2008;3:e3140. doi: 10.1371/journal.pone.0003140.
- 50.Bustin S.A. RT-qPCR and molecular diagnostics: no evidence for measles virus in the GI tract of autistic children. Eur Pharm Rev Dig. 2008;1:11–16.
- 51.Bustin S.A. Why there is no link between measles virus and autism. In: Fitzgerald M., editor. Recent advances in autism spectrum disorders, vol. I. Intech – Open Access Company; 2013. pp. 81–98.
- 52.Bustin S., Penning L.C. Improving the analysis of quantitative PCR data in veterinary research. Vet J. 2012;191:279–281. doi: 10.1016/j.tvjl.2011.06.044.
- 53.Bustin S. Transparency of reporting in molecular diagnostics. Int J Mol Sci. 2013;14:15878–15884. doi: 10.3390/ijms140815878.
- 54.Huggett J.F., Foy C.A., Benes V., Emslie K., Garson J.A., Haynes R. The digital MIQE guidelines: minimum information for publication of quantitative digital PCR experiments. Clin Chem. 2013;59:892–902. doi: 10.1373/clinchem.2013.206375.
- 55.Taylor S., Wakem M., Dijkman G., Alsarraj M., Nguyen M. A practical approach to RT-qPCR – publishing data that conform to the MIQE guidelines. Methods. 2010;50:S1–S5. doi: 10.1016/j.ymeth.2010.01.005.
- 56.Taylor S.C., Mrkusich E.M. The state of RT-quantitative PCR: firsthand observations of implementation of minimum information for the publication of quantitative real-time PCR experiments (MIQE). J Mol Microbiol Biotechnol. 2014;24:46–52. doi: 10.1159/000356189.
- 57.Garson J.A., Huggett J.F., Bustin S.A., Pfaffl M.W., Benes V., Vandesompele J. Unreliable real-time PCR analysis of human endogenous retrovirus-W (HERV-W) RNA expression and DNA copy number in multiple sclerosis. AIDS Res Hum Retroviruses. 2009;25:377–378 [author reply 379]. doi: 10.1089/aid.2008.0270.
- 58.Jacob F., Guertler R., Naim S., Nixdorf S., Fedier A., Hacker N.F. Careful selection of reference genes is required for reliable performance of RT-qPCR in human normal and cancer cell lines. PLOS ONE. 2013;8:e59180. doi: 10.1371/journal.pone.0059180.
- 59.Abdel Nour A.M., Azhar E., Damanhouri G., Bustin S.A. Five years MIQE guidelines: the case of the Arabian countries. PLOS ONE. 2014;9:e88266. doi: 10.1371/journal.pone.0088266.
- 60.Dijkstra J.R., van Kempen L.C., Nagtegaal I.D., Bustin S.A. Critical appraisal of quantitative PCR results in colorectal cancer research: can we rely on published qPCR results? Mol Oncol. 2014;8:813–888. doi: 10.1016/j.molonc.2013.12.016.
- 61.Ioannidis J.P. Expectations, validity, and reality in omics. J Clin Epidemiol. 2010;63:945–949. doi: 10.1016/j.jclinepi.2010.04.002.
- 62.Wacholder S., Chanock S., Garcia-Closas M., El Ghormli L., Rothman N. Assessing the probability that a positive report is false: an approach for molecular epidemiology studies. J Natl Cancer Inst. 2004;96:434–442. doi: 10.1093/jnci/djh075.
- 63.Schulte P.A. Validation of biologic markers for use in research on chronic fatigue syndrome. Rev Infect Dis. 1991;13(Suppl 1):S87–S89. doi: 10.1093/clinids/13.supplement_1.s87.
- 64.Miklos G.L., Maleszka R. Microarray reality checks in the context of a complex disease. Nat Biotechnol. 2004;22:615–621. doi: 10.1038/nbt965.
- 65.Henry N.L., Hayes D.F. Uses and abuses of tumor markers in the diagnosis, monitoring, and treatment of primary and metastatic breast cancer. Oncologist. 2006;11:541–552. doi: 10.1634/theoncologist.11-6-541.
- 66.Ransohoff D.F. How to improve reliability and efficiency of research about molecular markers: roles of phases, guidelines, and study design. J Clin Epidemiol. 2007;60:1205–1219. doi: 10.1016/j.jclinepi.2007.04.020.
- 67.Borst P., Wessels L. Do predictive signatures really predict response to cancer chemotherapy? Cell Cycle. 2010;9:4836–4840. doi: 10.4161/cc.9.24.14326.
- 68.Enkemann S.A. Standards affecting the consistency of gene expression arrays in clinical applications. Cancer Epidemiol Biomarkers Prev. 2010;19:1000–1003. doi: 10.1158/1055-9965.EPI-10-0044.
- 69.Kolker E., Ozdemir V., Martens L., Hancock W., Anderson G., Anderson N. Toward more transparent and reproducible omics studies through a common metadata checklist and data publications. OMICS. 2014;18:10–14. doi: 10.1089/omi.2013.0149.
- 70.Ioannidis J.P., Polyzos N.P., Trikalinos T.A. Selective discussion and transparency in microarray research findings for cancer outcomes. Eur J Cancer. 2007;43:1999–2010. doi: 10.1016/j.ejca.2007.05.019.
- 71.Ioannidis J.P., Allison D.B., Ball C.A., Coulibaly I., Cui X., Culhane A.C. Repeatability of published microarray gene expression analyses. Nat Genet. 2009;41:149–155. doi: 10.1038/ng.295.
- 72.Kyzas P.A., Denaxa-Kyza D., Ioannidis J.P. Quality of reporting of cancer prognostic marker studies: association with reported prognostic effect. J Natl Cancer Inst. 2007;99:236–243. doi: 10.1093/jnci/djk032.
- 73.Kyzas P.A., Denaxa-Kyza D., Ioannidis J.P. Almost all articles on cancer prognostic markers report statistically significant results. Eur J Cancer. 2007;43:2559–2579. doi: 10.1016/j.ejca.2007.08.030.
- 74.Seok J., Warren H.S., Cuenca A.G., Mindrinos M.N., Baker H.V., Xu W. Genomic responses in mouse models poorly mimic human inflammatory diseases. Proc Natl Acad Sci U S A. 2013;110:3507–3512. doi: 10.1073/pnas.1222878110.
- 75.Takao K., Miyakawa T. Genomic responses in mouse models greatly mimic human inflammatory diseases. Proc Natl Acad Sci U S A. 2014. doi: 10.1073/pnas.1401965111.
- 76.Ebrahim S., Sohani Z.N., Montoya L., Agarwal A., Thorlund K., Mills E.J. Reanalyses of randomized clinical trial data. J Am Med Assoc. 2014;312:1024–1032. doi: 10.1001/jama.2014.9646.
- 77.Dwan K., Altman D.G., Arnaiz J.A., Bloom J., Chan A.W., Cronin E. Systematic review of the empirical evidence of study publication bias and outcome reporting bias. PLoS ONE. 2008;3:e3081. doi: 10.1371/journal.pone.0003081.
- 78.Oshlack A., Wakefield M.J. Transcript length bias in RNA-seq data confounds systems biology. Biol Direct. 2009;4:14. doi: 10.1186/1745-6150-4-14.
- 79.McIntyre L.M., Lopiano K.K., Morse A.M., Amin V., Oberg A.L., Young L.J. RNA-seq: technical variability and sampling. BMC Genomics. 2011;12:293. doi: 10.1186/1471-2164-12-293.
- 80.Frith M.C., Wan R., Horton P. Incorporating sequence quality data into alignment improves DNA read mapping. Nucleic Acids Res. 2010;38:e100. doi: 10.1093/nar/gkq010.
- 81.Woloshin S., Schwartz L.M. Press releases: translating research into news. J Am Med Assoc. 2002;287:2856–2858. doi: 10.1001/jama.287.21.2856.
- 82.Anon. Don't feed the hype! Nat Genet. 2003;35:1. doi: 10.1038/ng0903-1.
- 83.Schwartz L.M., Woloshin S., Baczek L. Media coverage of scientific meetings: too much, too soon? J Am Med Assoc. 2002;287:2859–2863. doi: 10.1001/jama.287.21.2859.
- 84.Horton R. The hidden research paper. J Am Med Assoc. 2002;287:2775–2778. doi: 10.1001/jama.287.21.2775.
- 85.Potti A., Dressman H.K., Bild A., Riedel R.F., Chan G., Sayer R. Genomic signatures to guide the use of chemotherapeutics. Nat Med. 2006;12:1294–1300. doi: 10.1038/nm1491.
- 86.Potti A., Dressman H.K., Bild A., Riedel R.F., Chan G., Sayer R. Retraction: genomic signatures to guide the use of chemotherapeutics. Nat Med. 2011;17:135. doi: 10.1038/nm0111-135.
- 87.Coombes K.R., Wang J., Baggerly K.A. Microarrays: retracing steps. Nat Med. 2007;13:1276–1277 [author reply 1277]. doi: 10.1038/nm1107-1276b.
- 88.Cokol M., Ozbay F., Rodriguez-Esteban R. Retraction rates are on the rise. EMBO Rep. 2008;9:2. doi: 10.1038/sj.embor.7401143.
- 89.Grieneisen M.L., Zhang M. A comprehensive survey of retracted articles from the scholarly literature. PLOS ONE. 2012;7:e44118. doi: 10.1371/journal.pone.0044118.
- 90.Singh H.P., Mahendra A., Yadav B., Singh H., Arora N., Arora M. A comprehensive analysis of articles retracted between 2004 and 2013 from biomedical literature – a call for reforms. J Tradit Complement Med. 2014;4:136–139. doi: 10.4103/2225-4110.136264.
- 91.Fang F.C., Casadevall A. Retracted science and the retraction index. Infect Immun. 2011;79:3855–3859. doi: 10.1128/IAI.05661-11.
- 92.Budd J.M., Sievert M., Schultz T.R. Phenomena of retraction: reasons for retraction and citations to the publications. J Am Med Assoc. 1998;280:296–297. doi: 10.1001/jama.280.3.296.
- 93.Steen R.G. Retractions in the scientific literature: is the incidence of research fraud increasing? J Med Ethics. 2011;37:249–253. doi: 10.1136/jme.2010.040923.
- 94.Hettinger T.P. Research integrity: the experience of a doubting Thomas. Arch Immunol Ther Exp (Warsz). 2014;62:81–84. doi: 10.1007/s00005-014-0272-3.
- 95.Wager E., Williams P. Why and how do journals retract articles? An analysis of Medline retractions 1988–2008. J Med Ethics. 2011;37:567–570. doi: 10.1136/jme.2010.040964.
- 96.Steen R.G. Retractions in the scientific literature: do authors deliberately commit research fraud? J Med Ethics. 2011;37:113–117. doi: 10.1136/jme.2010.038125.
- 97.Whitely W.P., Rennie D., Hafner A.W. The scientific community's response to evidence of fraudulent publication. The Robert Slutsky case. J Am Med Assoc. 1994;272:170–173.
- 98.Budd J.M., Sievert M., Schultz T.R., Scoville C. Effects of article retraction on citation and practice in medicine. Bull Med Libr Assoc. 1999;87:437–443.
- 99.Korpela K.M. How long does it take for the scientific literature to purge itself of fraudulent material? The Breuning case revisited. Curr Med Res Opin. 2010;26:843–847. doi: 10.1185/03007991003603804.
- 100.Trikalinos N.A., Evangelou E., Ioannidis J.P. Falsified papers in high-impact journals were slow to retract and indistinguishable from nonfraudulent papers. J Clin Epidemiol. 2008;61:464–470. doi: 10.1016/j.jclinepi.2007.11.019.
- 101.Banobi J.A., Branch T.A., Hilborn R. Do rebuttals affect future science? Ecosphere. 2011;2:1–11.
- 102.Pfeifer M.P., Snodgrass G.L. The continued use of retracted, invalid scientific literature. J Am Med Assoc. 1990;263:1420–1423.
- 103.Furman J.L., Jensen K., Murray F. Governing knowledge in the scientific community: exploring the role of retractions in biomedicine. Res Policy. 2012;41:276–290.
- 104.Lu S.F., Jin G.Z., Uzzi B., Jones B. The retraction penalty: evidence from the Web of Science. Sci Rep. 2013;3:3146. doi: 10.1038/srep03146.
- 105.Madlock-Brown C.R., Eichmann D. The (lack of) impact of retraction on citation networks. Sci Eng Ethics. 2014. doi: 10.1007/s11948-014-9532-1.
- 106.Stern A.M., Casadevall A., Steen R.G., Fang F.C. Financial costs and personal consequences of research misconduct resulting in retracted publications. Elife. 2014;3:e02956. doi: 10.7554/eLife.02956.
- 107.Redman B.K., Yarandi H.N., Merz J.F. Empirical developments in retraction. J Med Ethics. 2008;34:807–809. doi: 10.1136/jme.2007.023069.
- 108.Drury N.E., Karamanou D.M. Citation of retracted articles: a call for vigilance [letter]. Ann Thorac Surg. 2009;87(2):670. doi: 10.1016/j.athoracsur.2008.07.108.
- 109.Ioannidis J.P. Evolution and translation of research findings: from bench to where? PLoS Clin Trials. 2006;1:e36. doi: 10.1371/journal.pctr.0010036.
- 110.Horrobin D.F. Modern biomedical research: an internally self-consistent universe with little contact with medical reality? Nat Rev Drug Discov. 2003;2:151–154. doi: 10.1038/nrd1012.
- 111.Michie H.R. The value of animal models in the development of new drugs for the treatment of the sepsis syndrome. J Antimicrob Chemother. 1998;41(Suppl A):47–49. doi: 10.1093/jac/41.suppl_1.47.
- 112.Larkin J.M., Porter C.D. Mice are unsuitable for modelling ABO discordance despite strain-specific A cross-reactive natural IgM. Br J Haematol. 2005;130:310–317. doi: 10.1111/j.1365-2141.2005.05609.x.
- 113.Dasgupta G., BenMohamed L. Of mice and not humans: how reliable are animal models for evaluation of herpes CD8(+)-T cell-epitopes-based immunotherapeutic vaccine candidates? Vaccine. 2011;29:5824–5836. doi: 10.1016/j.vaccine.2011.06.083.
- 114.Carcamo W.C., Nguyen C.Q. Advancement in the development of models for hepatitis C research. J Biomed Biotechnol. 2012;2012:346761. doi: 10.1155/2012/346761.
- 115.Choughule K.V., Barr J.T., Jones J.P. Evaluation of rhesus monkey and guinea pig hepatic cytosol fractions as models for human aldehyde oxidase. Drug Metab Dispos. 2013;41:1852–1858. doi: 10.1124/dmd.113.052985.
- 116.Kalaszczynska I., Ruminski S., Platek A.E., Bissenik I., Zakrzewski P., Noszczyk M. Substantial differences between human and ovine mesenchymal stem cells in response to osteogenic media: how to explain and how to manage? Biores Open Access. 2013;2:356–363. doi: 10.1089/biores.2013.0029.
- 117.Thomas R.C., Bath M.F., Stover C.M., Lambert D.G., Thompson J.P. Exploring LPS-induced sepsis in rats and mice as a model to study potential protective effects of the nociceptin/orphanin FQ system. Peptides. 2014;61:56–60. doi: 10.1016/j.peptides.2014.08.009.
- 118.Grubor-Bauk B., Mullick R., Das S., Gowans E., Yu W. Immunocompetent mouse models to evaluate intrahepatic T cell responses to HCV vaccines. Hum Vaccin Immunother. 2014:e34343. doi: 10.4161/hv.34343.
- 119.Osuchowski M.F., Remick D.G., Lederer J.A., Lang C.H., Aasen A.O., Aibiki M. Abandon the mouse research ship? Not just yet! Shock. 2014;41:463–475. doi: 10.1097/SHK.0000000000000153.
- 120.Anon. Announcement: reducing our irreproducibility. Nature. 2013;496:398.
- 121.Anon. Journals unite for reproducibility. Nature. 2014;515:7. doi: 10.1038/515007a.
- 122.McNutt M. Journals unite for reproducibility. Science. 2014;346:679. doi: 10.1126/science.aaa1724.
- 123.Anon. Raising standards. Nat Struct Mol Biol. 2013;20:533. doi: 10.1038/nsmb.2590.
- 124.Anon. Raising standards. Nat Genet. 2013;45:467. doi: 10.1038/ng.2621.
- 125.Anon. Raising standards. Nat Neurosci. 2013;16:517. doi: 10.1038/nn.3391.
- 126.Anon. Raising standards. Nat Immunol. 2013;14:415. doi: 10.1038/ni.2603.
- 127.Anon. Raising standards. Nat Med. 2013;19:508.
- 128.Anon. Enhancing reproducibility. Nat Methods. 2013;10:367. doi: 10.1038/nmeth.2471.
- 129.Anon. Raising reporting standards. Nat Cell Biol. 2013;15:443.
- 130.Zhang S.M., Zhu L.H., Chen H.Z., Zhang R., Zhang P., Jiang D.S. Interferon regulatory factor 9 is critical for neointima formation following vascular injury. Nat Commun. 2014;5:5160. doi: 10.1038/ncomms6160.
- 131.Masuda T., Iwamoto S., Yoshinaga R., Tozaki-Saitoh H., Nishiyama A., Mak T.W. Transcription factor IRF5 drives P2X4R+-reactive microglia gating neuropathic pain. Nat Commun. 2014;5:3771. doi: 10.1038/ncomms4771.
- 132.Noda S., Asano Y., Nishimura S., Taniguchi T., Fujiu K., Manabe I. Simultaneous downregulation of KLF5 and Fli1 is a key feature underlying systemic sclerosis. Nat Commun. 2014;5. doi: 10.1038/ncomms6797.
- 133.Simmini S., Bialecka M., Huch M., Kester L., van de Wetering M., Sato T. Transformation of intestinal stem cells into gastric stem cells on loss of transcription factor Cdx2. Nat Commun. 2014;5. doi: 10.1038/ncomms6728.
- 134.Vo N., Horii T., Yanai H., Yoshida H., Yamaguchi M. The Hippo pathway as a target of the Drosophila DRE/DREF transcriptional regulatory pathway. Sci Rep. 2014;4:7196. doi: 10.1038/srep07196.
- 135.Boulle F., Massart R., Stragier E., Paizanis E., Zaidan L., Marday S. Hippocampal and behavioral dysfunctions in a mouse model of environmental stress: normalization by agomelatine. Transl Psychiatry. 2014;4:e485. doi: 10.1038/tp.2014.125.
- 136.Zhang Z., Zhang H., Li B., Meng X., Wang J., Zhang Y. Berberine activates thermogenesis in white and brown adipose tissue. Nat Commun. 2014;5:5493. doi: 10.1038/ncomms6493.
- 137.Henze A.T., Garvalov B.K., Seidel S., Cuesta A.M., Ritter M., Filatova A. Loss of PHD3 allows tumours to overcome hypoxic growth inhibition and sustain proliferation through EGFR. Nat Commun. 2014;5:5582. doi: 10.1038/ncomms6582.
- 138.Qi B., Cong Q., Li P., Ma G., Guo X., Yeh J. Ablation of Tak1 in osteoclast progenitor leads to defects in skeletal growth and bone remodeling in mice. Sci Rep. 2014;4:7158. doi: 10.1038/srep07158.
- 139.Lee H., Lee J.K., Park M.H., Hong Y.R., Marti H.H., Kim H. Pathological roles of the VEGF/SphK pathway in Niemann-Pick type C neurons. Nat Commun. 2014;5:5514. doi: 10.1038/ncomms6514.
- 140.Livak K.J., Schmittgen T.D. Analysis of relative gene expression data using real-time quantitative PCR and the 2(-Delta Delta C(T)) method. Methods. 2001;25:402–408. doi: 10.1006/meth.2001.1262.
- 141.Schmittgen T.D., Livak K.J. Analyzing real-time PCR data by the comparative C(T) method. Nat Protoc. 2008;3:1101–1108. doi: 10.1038/nprot.2008.73.
- 142.Johnson G., Nour A.A., Nolan T., Huggett J., Bustin S. Minimum information necessary for quantitative real-time PCR experiments. Methods Mol Biol. 2014;1160:5–17. doi: 10.1007/978-1-4939-0733-5_2.