Plastic and Reconstructive Surgery Global Open. 2021 Sep 17;9(9):e3828. doi: 10.1097/GOX.0000000000003828

Discrepancies between Conference Abstracts and Published Manuscripts in Plastic Surgery Studies: A Retrospective Review

Alexander F Dagi, Gareth J Parry, Brian I Labow, Amir H Taghinia
PMCID: PMC8448048  PMID: 34549011

Abstract

Background:

Inconsistency in results and outcomes between presented abstracts and corresponding published articles can negatively affect clinical education and care. The objective of this study was to describe the frequency of clinically meaningful change in results and outcomes between abstracts presented at the American Association of Plastic Surgeons annual conference and the corresponding published articles, and to determine risk factors associated with discrepancies.

Methods:

All abstracts delivered as oral presentations at the American Association of Plastic Surgeons conference (2006–2016) were reviewed. Results and outcomes were compared with those in corresponding articles. We defined clinically meaningful discrepancy as any change in the directionality of an outcome, or a quantitative change in results exceeding 10%.

Results:

Four hundred eighty-six abstracts were identified. Of these, 63% (N = 305) advanced to publication. Of the published studies, 19% (N = 59) contained a discrepancy. In 85% of these (N = 50), discrepancies could not be explained by random variation. Changes in sample size were associated with heightened risk for a discrepancy (OR 10.38, 95% CI 5.16–20.86, P < 0.001). A decrease in sample size greater than 10% increased the likelihood of a discrepancy by 25-fold (OR 24.92, 95% CI 8.66–71.68, P < 0.001), whereas an increase in sample size greater than 10% increased the likelihood of a discrepancy by eight-fold (OR 8.36, CI 3.69–19.00, P < 0.001).

Conclusions:

Most discrepancies between abstracts and published articles were not due to random statistical variation. To mitigate the possible impact of unreliable abstracts, we recommend that abstracts be marked as preliminary, that authors indicate whether the sample size is final at the time of presentation, and that changes to previously reported results be indicated in final publications.

INTRODUCTION

Increasingly, conference abstracts from national surgical conferences are indexed on PubMed and maintained in the public domain. They have appeared as citations in medical textbooks and may serve as topics for teaching rounds.1 Conference abstracts offer a platform for sharing time-sensitive results, which may advance the treatment of illness and provide an opportunity for rapid peer feedback and further academic investment. It is critical to determine the reliability of such prepublication data, as unreliable results, if acted upon, may negatively impact clinical education, care, and resource expenditure.

Recent meta-analyses have sought to investigate the reliability of plastic surgery conference abstracts. Advancement to peer-reviewed publication has been documented to occur, at best, in 72% of studies,2,3 and at worst, in 20% of studies.4 Previous efforts to describe the frequency of discrepancies between abstracts and corresponding published articles in plastic surgery have been limited by overly broad inclusion criteria and subspecialty foci.5–7

The purpose of this study was to determine the frequency of clinically meaningful changes in results and outcomes between conference abstracts and corresponding published articles in the field of plastic surgery. We examined the extent to which such changes were statistically anomalous and explored correlations between discrepancies and study characteristics to identify associated factors.

METHODS

All scientific abstracts delivered as oral presentations at the American Association of Plastic Surgeons annual conference from 2006 to 2016 were tracked. These abstracts were identified from the archives of the conference proceedings (https://meeting.aaps1921.org/Archives/). IRB approval was not required by our institution for this study.

A search was conducted in January 2020 to identify a resultant published article for each abstract. The search was conducted via PubMed, which includes MEDLINE and PubMed Central. Searches relied on authors’ names, in combination and individually, in conjunction with title key words. Two authors independently reviewed all prospective abstract–article matches to confirm consistency of match. The year 2016 was chosen as the final year for this retrospective study to allow at least 3 years of follow-up from the time of presentation to article publication. Research reported in abstracts was deemed unpublished if no match was found after searching for each author’s name along with key words.

The following data were collected from each abstract and corresponding article: title, authorship, institutional affiliations, date of presentation, introduction, methods, results/outcomes, and conclusions, including all numerical results, figures, and tables.

Primary Outcome: Discrepancies

Discrepancy between abstract and published article was defined to include only potentially clinically relevant change in results and outcomes, as in Theman et al’s study: any categorical change in the directionality of an outcome (ie, positive to negative) or any double-digit change (ie, >10%) in the point estimate of a result or complication.7
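For concreteness, a minimal sketch of this definition as a check function is shown below, assuming the >10% threshold refers to relative change in the point estimate (an interpretation consistent with the flap-loss example in Table 1); the function and argument names are illustrative only.

```python
def is_discrepancy(abstract_value, article_value,
                   abstract_direction=None, article_direction=None):
    """Flag a potentially clinically meaningful discrepancy.

    Per the definition above: (1) any categorical change in the
    directionality of an outcome, or (2) a change of more than 10% in the
    point estimate of a result or complication.
    """
    # (1) Categorical change in directionality (e.g., "positive" -> "negative")
    if abstract_direction is not None and article_direction is not None:
        if abstract_direction != article_direction:
            return True
    # (2) Relative change in the point estimate exceeding 10%
    if abstract_value:
        relative_change = abs(article_value - abstract_value) / abs(abstract_value)
        return relative_change > 0.10
    return False

# Example from Table 1: flap loss of 23% in the abstract vs 15% in the article
print(is_discrepancy(0.23, 0.15))  # True (about a 35% relative change)
```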

We examined discrepancies to estimate whether they were consistent with random statistical variation. Specifically, in studies where the sample size decreased between abstract presentation and final article publication, it was assumed that new exclusion criteria had been applied and that discrepancies could not be due to random variation. In published studies where the sample size increased, it was assumed the authors had collected additional data in the time between abstract presentation and article publication. We applied the point estimate from the abstract sample to provide an expected value for the additional sample. If the abstract provided details of the uncertainty around the point estimate, we used this to calculate a 95% confidence interval around the expected value in the additional sample. Where a direct measure of uncertainty in the point estimate was not provided, it was calculated from the sample proportion, standard deviation, or standard error. We defined a discrepancy as consistent with random variation if the observed point estimate in the additional sample lay within the 95% confidence interval of the expected value, and as inconsistent with random variation if it lay outside that interval. For example, if an abstract had a sample size of 50 and the published study had a sample of 75, we identified whether the results in the additional sample of 25 were within a statistically predictable range (random variation) of the abstract estimate. (See figure, Supplemental Digital Content 1, which visualizes discrepancies and possible random variation. http://links.lww.com/PRSGO/B786.)
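As a rough illustration of this check for proportions, the sketch below applies the abstract rate to the additional patients and builds a Wald-type 95% confidence interval around the expected value. The interval formula and the 20%/30% rates in the example are assumptions for illustration, not figures from any reviewed study.

```python
from math import sqrt

def consistent_with_random_variation(p_abstract, n_abstract, n_article, k_article, z=1.96):
    """Check whether a changed proportion is explainable by random variation.

    The abstract's point estimate (p_abstract) is taken as the expected rate
    among the patients accrued between presentation and publication; a 95%
    confidence interval is built around that expectation, and the proportion
    actually observed in the additional patients is compared against it.
    """
    n_extra = n_article - n_abstract
    if n_extra <= 0:
        # A shrinking sample implies new exclusion criteria, not random variation
        return False
    events_in_abstract = p_abstract * n_abstract
    events_in_extra = k_article - events_in_abstract      # events among new patients
    p_extra_observed = events_in_extra / n_extra
    # Wald-type 95% CI for the expected proportion in the additional sample
    se = sqrt(p_abstract * (1 - p_abstract) / n_extra)
    return (p_abstract - z * se) <= p_extra_observed <= (p_abstract + z * se)

# Worked example mirroring the text: abstract n = 50, article n = 75.
# Assume (hypothetically) a 20% rate in the abstract and 30% in the article,
# i.e., 22.5 - 10 = 12.5 events among the 25 new patients (50% observed rate).
print(consistent_with_random_variation(0.20, 50, 75, 0.30 * 75))  # False
```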

Secondary Outcome: Factors Associated with Discrepancies

We explored the possible association of study discrepancies with (1) change in sample size between abstract presentation and article publication, (2) change in authorship, and (3) time to publication in months. Only discrepancies that could not be explained by random statistical variation were included in the correlational analysis. We grouped change in sample size from abstract to final publication as greater than 10% decline, less than 10% decline, no change, less than 10% increase, and greater than 10% increase. We defined authorship change as a change in the first and/or senior author. We grouped time to publication as less than 12 months, 12–24 months, and more than 24 months. Chi-square tests were used to explore the association between study discrepancy and each of the three factors, with odds ratios and 95% confidence intervals used to estimate the strength of association. We then ran a multivariate logistic regression with study discrepancy as a binary outcome and the three factors as independent variables, again reporting odds ratios and 95% confidence intervals.

All statistical tests were performed with SPSS, version 24.0 (IBM SPSS Statistics for Windows, Armonk, N.Y.). Statistical significance was defined by a P value less than 0.05.
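The analyses above were run in SPSS; purely as an illustration of the same workflow, a minimal Python sketch of the chi-square tests and the multivariate logistic regression might look like the following, using entirely simulated data and hypothetical column names.

```python
import numpy as np
import pandas as pd
from scipy.stats import chi2_contingency
import statsmodels.api as sm

np.random.seed(0)

# Entirely hypothetical study-level data; column names are illustrative only.
df = pd.DataFrame({
    "discrepancy": np.random.binomial(1, 0.2, 305),
    "sample_size_change": np.random.choice(
        ["none", "decline_gt10", "decline_lt10", "increase_lt10", "increase_gt10"], 305),
    "authorship_change": np.random.binomial(1, 0.18, 305),
    "months_to_publication": np.random.choice(["<12", "12-24", ">24"], 305),
})

# Chi-square test of association between discrepancy and one factor
table = pd.crosstab(df["sample_size_change"], df["discrepancy"])
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")

# Multivariate logistic regression: discrepancy as the binary outcome
X = pd.get_dummies(
    df[["sample_size_change", "authorship_change", "months_to_publication"]],
    drop_first=True).astype(float)
X = sm.add_constant(X)
fit = sm.Logit(df["discrepancy"], X).fit(disp=False)
print(pd.concat([np.exp(fit.params), np.exp(fit.conf_int())], axis=1))  # ORs and 95% CIs
```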

RESULTS

Four hundred eighty-six scientific abstracts were identified from the archives of oral presentations given at the annual conference of the American Association of Plastic Surgeons between 2006 and 2016. (See figure 2, Supplemental Digital Content 2, which displays the studies reviewed in the discrepancy analysis, presented at the American Association of Plastic Surgeons annual conference 2006–2016. http://links.lww.com/PRSGO/B786.)

Sixty-three percent (N = 305) of scientific abstracts advanced to publication in a peer-reviewed, PubMed-indexed journal (range 40.6% in 2007 to 73.8% in 2013). Median time to publication was 15 months (IQR 9–27). Nine studies were published before the conference occurred.

Discrepancies

Nineteen percent (N = 59) of published articles contained a discrepancy in the results when compared with the initial conference abstract (Table 1 provides descriptive examples). Of these, 5% (N = 3) had a change in the direction of a primary outcome. Separately, one article was retracted following publication for an unpublicized reason.

Table 1.

Descriptive Examples of Discrepancies between Abstracts and Final Publications

The flap loss rate dropped from 23% in the abstract to 15% in the article, while the sample size dropped from 165 patients to 157 patients.
In a study investigating complications following postmastectomy reconstruction, hypertension was not an independent risk factor for complication in the abstract (P > 0.05) and was an independent risk factor in the article (P < 0.05).
Good or excellent results were reported in 60% of patients who underwent primary reconstruction in the abstract, but in 100% of patients in the final article.
Microsurgery was not an independent risk factor for reoperative hematoma in the abstract, whereas it was an independent risk factor for reoperative hematoma in the final article.
Hypotensive anesthesia was originally reported not to reduce blood loss, whereas it was reported to reduce blood loss significantly in the final article.
A new technology was reported to significantly reduce hospital length-of-stay in the abstract, but it was reported to have no impact on hospital length-of-stay in the final article.
In a study analyzing the contributions of a plastic surgery department to a health care system, the average net revenue for primary inpatient admissions per relative value unit was $113 in the abstract versus $222 in the final article.

In six articles, a measure of uncertainty for the random variation analysis could not be derived. In 50 of the remaining cases, the change in result or outcome was inconsistent with random statistical variation.

Factors Associated with Discrepancies

Of the published studies, 34% (N = 103) had a change in sample size (Table 2). A decline in sample size between abstract presentation and article publication was found in 36% (N = 37) of studies with a sample size change.

Table 2.

Characteristics of Studies with Discrepancies in Results or Outcomes

Characteristic               Discrepancy (n/N)   %      OR Estimate   95% CI          P
Sample size*
  No change                  12/196              6.1    1.00 (ref)
  >10% decline               13/21               61.9   24.92         (8.66, 71.68)   <0.001
  <10% decline               6/16                37.5   9.20          (2.86, 29.60)   <0.001
  <10% increase              5/15                33.3   7.67          (2.26, 26.02)   0.001
  >10% increase              18/51               35.3   8.36          (3.69, 18.97)   <0.001
Authorship change
  No                         38/251              15.1   1.00 (ref)
  Yes                        18/54               33.3   1.60          (0.85, 3.01)    0.149
Time to publication (mo)
  <12                        20/129              15.5   1.00 (ref)
  12–24                      22/91               24.2   1.74          (0.88, 3.42)    0.109
  24+                        14/85               16.5   1.08          (0.51, 2.27)    0.850

*In six studies, it was not possible to determine whether sample size changed.

The odds of a discrepancy were 25-fold greater in studies with a decrease in sample size of more than 10% when compared with studies with no change in sample size (OR 24.92, 95% CI 8.66–71.68, P < 0.001). The odds of a discrepancy were next greatest when the sample size decreased by less than 10% (OR 9.20, 95% CI 2.86–29.60, P < 0.001), followed by an increase in sample size of more than 10% (OR 8.36, 95% CI 3.69–18.97, P < 0.001), and an increase in sample size of less than 10% (OR 7.67, 95% CI 2.26–26.02, P < 0.001). (See figure 3, Supplemental Digital Content 3, which displays the odds of a discrepancy due to nonrandom variation as a function of change in sample size. http://links.lww.com/PRSGO/B786.)
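Reading the denominators in Table 2 as category totals, these unadjusted odds ratios can be reproduced with the standard log (Woolf) method; the sketch below recovers the estimate for a >10% decline in sample size (OR 24.92, 95% CI 8.66–71.68) from the counts 13 of 21 versus 12 of 196 in the reference group.

```python
from math import exp, log, sqrt

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and 95% CI from a 2x2 table [[a, b], [c, d]] (Woolf/log method)."""
    or_ = (a * d) / (b * c)
    se_log_or = sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower = exp(log(or_) - z * se_log_or)
    upper = exp(log(or_) + z * se_log_or)
    return or_, lower, upper

# Table 2, ">10% decline" row: 13 of 21 studies with a discrepancy,
# compared against the "no change" reference group: 12 of 196 studies.
print(odds_ratio_ci(13, 21 - 13, 12, 196 - 12))  # ~ (24.92, 8.66, 71.68)
```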

Changes in first and/or senior authorship were more common in studies with discrepancies (OR 1.60, 95% CI 0.85–3.01); however, this relationship was not statistically significant (P = 0.149). Studies that took longer to publish (>24 months) were not significantly more likely to contain a discrepancy (OR 1.08, 95% CI 0.51–2.27, P = 0.85).

DISCUSSION

The purpose of this study was to determine the frequency of clinically meaningful changes in the results and outcomes of plastic surgery studies between abstract presentation and article publication. Preliminary reports of investigational results are critical to advancing medical education and clinical care. For this reason alone, the reliability of early reports matters a great deal.

Publication Rates

Sixty-three percent of abstracts in a 10-year sample from the American Association of Plastic Surgeons annual conference advanced to publication. Nineteen percent contained a double-digit change in a quantitative result, including three studies with a change in direction of the outcome.

Previous studies in the field of plastic surgery have examined rates of abstract advancement to publication, with widely varying results (range 20%–72%) from the American Association of Plastic Surgeons3,8,9; American Society for Aesthetic Plastic Surgery10; American Society of Plastic Surgeons3,9,11; British Association of Plastic, Reconstructive and Aesthetic Surgeons4; British Association of Plastic Surgeons12; Congress of the Korean Society of Plastic and Reconstructive Surgeons13; Canadian Society of Plastic Surgeons9; European Association of Plastic Surgeons8,14,15; Plastic Surgery Research Council3; and the Brazilian Congress of Plastic Surgery.5 These results, together with our own, demonstrate that a substantial proportion of abstracts from even the most selective international plastic surgery conferences will not advance to publication.

The same phenomenon also holds true in other academic medical fields.16 Lack of time, resources, incentives, or authorship agreement has been shown to prevent publication.17

Discrepancies in Results or Outcomes

The reliability of plastic surgery abstracts that do advance to publication has not been fully studied. Three previous articles described discrepancies in results or outcomes between conference abstracts and published articles in plastic surgery. Maisner et al and Theman et al focused specifically on reconstructive microsurgery and hand surgery.6,7 Denadai et al examined discrepancies associated with abstracts presented at the Brazilian Congress of Plastic Surgery.5 The interpretation of the findings of Maisner et al and Denadai et al is limited by their studies' overly broad inclusion criteria: a discrepancy is defined as "any change in the qualitative or quantitative data."5,6 A change of half a percent in a quantitative result would qualify as a discrepancy, even if not statistically unexpected or clinically meaningful. Maisner et al found discrepancies in 66% of cases, whereas Denadai et al found them in 20% of cases. Theman et al used more selective criteria, defining potentially clinically meaningful discrepancies with a 10% quantitative threshold.

None of these three studies examined the possibility that discrepancies in results might be explained by random statistical variation. There is a range of change that is statistically expected as a study progresses. This range is determined by the precision or uncertainty of the initial effect size, which is a consequence of the initial sample size. An uncertainty measure—such as a 95% confidence interval, standard deviation, or standard error—is essential to the interpretation and reporting of any effect, as well as to the prediction of statistically expectable change.18 Surprisingly, only one of the 59 abstracts containing a discrepancy in our analysis initially provided an uncertainty measure; it was necessary to calculate such measures retrospectively for the remainder. Without accounting for this uncertainty, the three studies cited may well have overestimated the frequency of meaningful discrepancies.

We sought to quantify this potential bias through a random variation analysis. Even with this approach, however, in only 5% of studies could the discrepancies be explained by random variation. In an additional 10% of cases, the data available were insufficient to determine the source of the discrepancy. In the overwhelming majority of cases (≥85%), structural or methodological changes were responsible for the discrepancy in results or outcomes.

As an example, in a study on perioperative complications the sample size increased from 880 patients to 884 patients between abstract presentation and article publication. Although hypertension was not predictive of the primary outcome originally (OR 1.5, 95% CI 0.9–2.6, P > 0.05), it became predictive of the primary outcome in the article (OR 2.3, 95% CI 1.4–3.5, P < 0.05). Irrespective of how the four new patients were distributed across the original cohorts of the 880-person sample (positive or negative outcome, hypertensive or nonhypertensive group), the new odds ratio was impossible to achieve. This change threw into question the accuracy of the initial data and statistical analysis, and suggested that there may have been an undisclosed modification in study methodology between abstract presentation and final article publication.
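A check of this kind can be made exhaustive: enumerate every way the handful of new patients could fall across the four cells of the 2x2 table and ask whether any allocation reaches the published odds ratio. The baseline counts below are hypothetical, chosen only to match an odds ratio of 1.5 among 880 patients, since the actual cell counts were not published.

```python
from itertools import product

# Hypothetical baseline 2x2 table consistent with the abstract's report
# (odds ratio 1.5 among 880 patients); the real cell counts were not published.
#                   complication   no complication
# hypertension           36              144
# no hypertension       100              600
baseline = (36, 144, 100, 600)  # a, b, c, d; OR = (a*d)/(b*c) = 1.5; n = 880

def odds_ratio(a, b, c, d):
    return (a * d) / (b * c)

# Enumerate every allocation of the 4 additional patients across the 4 cells
best = 0.0
for extra in product(range(5), repeat=4):
    if sum(extra) != 4:
        continue
    a, b, c, d = (cell + add for cell, add in zip(baseline, extra))
    best = max(best, odds_ratio(a, b, c, d))

print(f"Largest odds ratio reachable by adding 4 patients: {best:.2f}")
# ~1.67 for these counts, well short of the 2.3 reported in the article.
```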

Risk Factors Associated with Discrepancies

Meta-analyses have raised concerns about the reliability of peer-reviewed publications in plastic surgery.19–23 Similar concerns have been raised repeatedly throughout all fields of academic medicine, addressing bias and the validity, replicability, generalizability, interpretability, and redundancy of data.18,24–36

When and why do study results change? These questions have not previously been examined systematically in the plastic surgery literature. New exclusion criteria and changes in study leadership may be relevant. We were surprised to discover 37 instances in which the sample size declined between abstract presentation and article publication. Sample size changes were in fact categorically associated with discrepancies. A decline in sample size of more than 10% was a particularly strong risk factor for a discrepancy (OR 24.92, 95% CI 8.66–71.68, P < 0.001). Curiously, an increase in sample size was also a risk factor for a discrepancy, and rarely in a manner consistent with random variation, despite what one might expect from the continuation of a study. These associations may be used to caution readers and mitigate the possible impact of unreliable abstracts.

This review indicates a need to improve standards of initial study design and to increase incentives to bring abstract reports to article publication. Approximately 50% of scientific abstracts included in this review failed to advance to publication or underwent clinically meaningful changes in results or outcomes. Presentation of abstracts at professional conferences provides a forum for introducing new research and receiving feedback. Feedback received following conference presentation or peer review may correct for preexisting inadequacies in study design.37 This phenomenon may be construed positively, as a corrective check on the study, or negatively, as an indication of poor initial planning.38–42 Nevertheless, the frequency of discrepancies and the rate of nonpublication suggest that scientific abstracts presented at plastic surgery conferences should not be relied upon uncritically or used to alter standards of clinical care.

The archiving and indexing of conference abstracts may still be important for the reduction of redundancy and dissemination bias.43 Conference abstracts may also encourage the development of new areas of research.28 With that said, certain caveats apply. First, data must be explicitly marked as preliminary until they have achieved convincing and clinically relevant statistical significance. Second, because changes in sample size strongly correlate with material clinical and statistical changes in outcomes and results, we recommend that abstracts indicate whether sample size is final at the time of presentation and whether sample sizes have changed since previous publication or presentation. Finally, it would be reasonable to suggest that published studies indicate whether results or outcomes have changed from those reported in abstracts previously.

LIMITATIONS

The findings of our study may not apply to all plastic surgery conferences, fields, or research methodologies; further stratification would be productive. The use of a 10% threshold to define discrepancies may have led to the exclusion of additional clinically meaningful discrepancies. Smaller changes in results could equally have been due to random or nonrandom variation, as this determination depends on the initial uncertainty of the effect size rather than on the degree of change in the effect size. Finally, studies published in non-PubMed-indexed journals would not have been captured by our methodology.

CONCLUSIONS

Conference abstracts in plastic surgery cannot be relied upon categorically to provide precise or final results and outcomes for the purposes of clinical decision-making. Fifty percent of abstracts presented at the American Association of Plastic Surgeons annual conference in 2006–2016 failed to advance to publication or changed their results to a clinically meaningful degree. Changes in sample size correlated with discrepancies in results and outcomes, and most discrepancies between abstracts and articles could not be explained by random statistical variation. To lessen the chance that abstract results unduly impact clinical practice, we recommend plastic surgery conferences ensure that abstract data be explicitly marked as preliminary, that authors indicate whether the sample size is final at the time of presentation, and that published studies indicate whether results or outcomes have changed from those reported in abstracts previously.

Supplementary Material

gox-9-e3828-s001.pdf (124.6KB, pdf)

Footnotes

Published online 17 September 2021.

Disclosure: The authors have no financial interest in relation to the content of this article.

Related Digital Media are available in the full-text version of the article on www.PRSGlobalOpen.com.

REFERENCES

1. Bhandari M, Devereaux PJ, Guyatt GH, et al. An observational study of orthopaedic abstracts and subsequent full-text publications. J Bone Joint Surg Am. 2002;84:615–621.
2. Peake M, Rotatori RM, Ovalle F, et al. Publishing conversion rates and trends in abstracts presented at the American Association for Hand Surgery annual meeting: a five-year review. Hand. 2019;16:1–5.
3. Asaad M, Rajesh A, Tarabishi AS, et al. Do we publish what we present? A critical analysis of abstracts presented at three plastic surgery meetings. Plast Reconstr Surg. 2020;145:1555–1564.
4. Kain N, Mishra A, McArthur P. Are we still publishing our presented abstracts from the British Association of Plastic and Reconstructive Surgery (BAPRAS)? J Plast Reconstr Aesthet Surg. 2010;63:1572–1573.
5. Denadai R, Araujo GH, Pinho AS, et al. Discrepancies between plastic surgery meeting abstracts and subsequent full-length manuscript publications. Aesthetic Plast Surg. 2016;40:778–784.
6. Maisner RS, Ayyala HS, Agag RL. Abstract to publication in microsurgery: what are the discrepancies? J Reconstr Microsurg. 2020;36:577–582.
7. Theman TA, Labow BI, Taghinia A. Discrepancies between meeting abstracts and subsequent full text publications in hand surgery. J Hand Surg Am. 2014;39:1585–1590.e3.
8. Khorasani H, Lassen MH, Kuzon W, et al. Scientific impact of presentations from the EURAPS and the AAPS meetings: a 10-year review. J Plast Reconstr Aesthet Surg. 2017;70:31–36.
9. Gregory TN, Liu T, Machuk A, et al. What is the ultimate fate of presented abstracts? The conversion rates of presentations to publications over a five-year period from three North American plastic surgery meetings. Can J Plast Surg. 2012;20:33–36.
10. Williams S, Pirlamarla A, Rahal W, et al. How well do they convert? Trending ASAPS presentations to publication from 1995-2010. ASJOUR. 2017;37:NP15–NP19.
11. Sinno H, Izadpanah A, Izadpanah A, et al. Publication bias in abstracts presented to the annual scientific meeting of the American Society of Plastic Surgeons. Plast Reconstr Surg. 2011;128:106e–108e.
12. Oliver DW, Whitaker IS, Chohan DP. Publication rates for abstracts presented at the British Association of Plastic Surgeons meetings: how do we compare with other specialties? Br J Plast Surg. 2003;56:158–160.
13. Chung KJ, Lee JH, Kim YH, et al. How many presentations are published as full papers? Arch Plast Surg. 2012;39:238–243.
14. van der Steen LP, Hage JJ, Loonen MP, et al. Full publication of papers presented at the 1995 through 1999 European Association of Plastic Surgeons annual scientific meetings: a systemic bibliometric analysis. Plast Reconstr Surg. 2004;114:113–120.
15. Izadpanah A, Izadpanah A, Islur A, et al. Publication bias in plastic and reconstructive surgery: a retrospective review on 128 abstracts presented to the annual EURAPS meeting. Eur J Plast Surg. 2014;37:387–392.
16. Toma M, McAlister FA, Bialy L, et al. Transition from meeting abstract to full-length journal article for randomized controlled trials. JAMA. 2006;295:1281–1287.
17. Sprague S, Bhandari M, Devereaux PJ, et al. Barriers to full-text publication following presentation of abstracts at annual orthopaedic meetings. J Bone Joint Surg Am. 2003;85:158–163.
18. Glasziou P, Altman DG, Bossuyt P, et al. Reducing waste from incomplete or unusable reports of biomedical research. Lancet. 2014;383:267–276.
19. Freshwater MF. Laboratory animal research published in plastic surgery journals in 2014 has extensive waste: a systematic review. J Plast Reconstr Aesthet Surg. 2015;68:1485–1490.
20. Agha RA, Lee SY, Jeong KJ, et al. Reporting quality of observational studies in plastic surgery needs improvement: a systematic review. Ann Plast Surg. 2016;76:585–589.
21. Lee JH. Addressing the strengthening the reporting of observational studies in epidemiology (STROBE) statement in archives of plastic surgery reports. Arch Plast Surg. 2014;41:1–2.
22. Samargandi OA, Hasan H, Thoma A. Methodologic quality of systematic reviews published in the plastic and reconstructive surgery literature: a systematic review. Plast Reconstr Surg. 2016;137:225e–236e.
23. Ascha M, Ascha MS, Gatherwright J. The importance of reproducibility in plastic surgery research. Plast Reconstr Surg. 2019;144:242–248.
24. Lee W, Bindman J, Ford T, et al. Bias in psychiatric case-control studies: literature survey. Br J Psychiatry. 2007;190:204–209.
25. Tooth L, Ware R, Bain C, et al. Quality of reporting of observational longitudinal research. Am J Epidemiol. 2005;161:280–288.
26. Pocock SJ, Collier TJ, Dandreo KJ, et al. Issues in the reporting of epidemiological studies: a survey of recent practice. BMJ. 2004;329:883.
27. Altman DG. The scandal of poor medical research. BMJ. 1994;308:283–284.
28. Chalmers I, Glasziou P. Avoidable waste in the production and reporting of research evidence. Lancet. 2009;374:86–89.
29. Macleod MR, Michie S, Roberts I, et al. Biomedical research: increasing value, reducing waste. Lancet. 2014;383:101–104.
30. Lang TA, Lang T, Secic M. How to Report Statistics in Medicine: Annotated Guidelines for Authors, Editors, and Reviewers. Philadelphia, PA: American College of Physicians; 2006.
31. Chan AW, Hróbjartsson A, Haahr MT, et al. Empirical evidence for selective reporting of outcomes in randomized trials: comparison of protocols to published articles. JAMA. 2004;291:2457–2465.
32. Carp J. The secret lives of experiments: methods reporting in the fMRI literature. Neuroimage. 2012;63:289–300.
33. Chan AW, Song F, Vickers A, et al. Increasing value and reducing waste: addressing inaccessible research. Lancet. 2014;383:257–266.
34. Chavalarias D, Ioannidis JP. Science mapping analysis characterizes 235 biases in biomedical research. J Clin Epidemiol. 2010;63:1205–1215.
35. Halpern SD, Karlawish JH, Berlin JA. The continuing unethical conduct of underpowered clinical trials. JAMA. 2002;288:358–362.
36. Yank V, Rennie D, Bero LA. Financial ties and concordance between results and conclusions in meta-analyses: retrospective cohort study. BMJ. 2007;335:1202–1205.
37. Ioannidis JP, Greenland S, Hlatky MA, et al. Increasing value and reducing waste in research design, conduct, and analysis. Lancet. 2014;383:166–175.
38. Sully BG, Julious SA, Nicholl J. A reinvestigation of recruitment to randomised, controlled, multicenter trials: a review of trials funded by two UK funding agencies. Trials. 2013;14:166.
39. Editorial. Research integrity is much more than misconduct. Nature. 2019;570:5.
40. Dechartres A, Ravaud P. Better prioritization to increase research value and decrease waste. BMC Med. 2015;13:244.
41. García-Berthou E, Alcaraz C. Incongruence between test statistics and P values in medical papers. BMC Med Res Methodol. 2004;4:13.
42. Moher D, Weeks L, Ocampo M, et al. Describing reporting guidelines for health research: a systematic review. J Clin Epidemiol. 2011;64:718–742.
43. Schmucker CM, Blümle A, Schell LK, et al; OPEN consortium. Systematic review finds that study data not published in full text articles have unclear impact on meta-analyses results in medical research. PLoS One. 2017;12:e0176210.
