PLOS ONE. 2021 Jun 29;16(6):e0253713. doi: 10.1371/journal.pone.0253713

Publication speed in pharmacy practice journals: A comparative analysis

Antonio M Mendes 1, Fernanda S Tonin 1, Felipe F Mainka 1, Roberto Pontarolo 2, Fernando Fernandez-Llimos 3,4,*
Editor: Tim Mathes
PMCID: PMC8241115  PMID: 34185802

Abstract

Background

The scholarly publishing system relies on external peer review. However, the duration of the publication process is a major concern for authors and funding bodies.

Objective

To compare the duration of the publication process in pharmacy practice journals with that in other biomedical journals indexed in PubMed.

Methods

All the articles published from 2009 to 2018 by the 33 pharmacy practice journals identified in Mendes et al.’s study and indexed in PubMed were gathered as the study group. A comparison group was created through a random selection of 3,000 PubMed PMIDs for each year of the study period. Articles with publication dates outside the study period were excluded. Metadata for both groups of articles were imported from PubMed. The duration of the editorial process was measured with three periods: acceptance lag (days between ‘submission date’ and ‘acceptance date’), lead lag (days between ‘acceptance date’ and ‘online publication date’), and indexing lag (days between ‘online publication date’ and ‘entry date’). Null hypothesis significance tests and effect size measures were used to compare these periods between the two groups.

Results

The 33 pharmacy practice journals published 26,256 articles between 2009 and 2018. The comparison group random selection process resulted in a pool of 23,803 articles published in 5,622 different journals. The acceptance lag was 105 days (IQR 57–173) for pharmacy practice journals and 97 days (IQR 56–155) for the comparison group, a difference with a null effect size (Cohen’s d 0.081). The lead lag was 13 days (IQR 6–35) and 23 days (IQR 9–45) for pharmacy practice and comparison journals, respectively, a small effect. The indexing lag was 5 days (IQR 2–46) and 4 days (IQR 2–12) for pharmacy practice and comparison journals, respectively, also a small effect. A slight positive time trend was found in the pharmacy practice acceptance lag, while slight negative trends were found for the lead and indexing lags in both groups.

Conclusions

The duration of the publication process in pharmacy practice journals is similar to that of a general random sample of articles from all disciplines.

Introduction

Publishing articles is the most common way to disseminate the results of research activities. The old “publish or perish” culture is fully alive [1]. Publication times have always been a major concern for authors, both because a delay in publication delays the dissemination of their findings and because their careers and future funding depend on publications [2]. These delays are amplified because many articles are peer reviewed and rejected at least once before they are finally published [3].

Authors pressure editors to have their manuscripts published quickly, and funding bodies analyze the times between their investment (funding) and their profit (publication). The Australian National Health and Medical Research Council reported that among the 77 RCTs funded between 2008 and 2010, 72% had their main results published after 8 years, with a median delay of 7.1 years after funding (95% CI 6.3 to 7.6) [4]. The U.S. National Heart, Lung, and Blood Institute (NHLBI) reported that the results of 42% of the 232 RCTs funded between 2000 and 2011 had been published before January 2013, with a median time from funding to publication of 25 months [5]. In a secondary analysis, the NHLBI identified the amount funded and the use of intention-to-treat analyses as the main variables that influence the speed at which the results of the trials are published [6]. In a recent analysis of 1509 trials that were registered in a clinical trials registration database with a completion time between 2009 and 2013 and that had a German university medical center as a leading institution, only 39% of the trials had their results published “in a timely manner (<24 months after completion)”, with a median publication delay time between 28 and 36 months [7].

The editorial process is a sequence of steps whose final aim is to ensure the quality of the papers published. Fig 1 presents the different steps of the scholarly publication process and names the delays, or lags, following Lee et al.’s terminology [8]. Immediately after receiving a manuscript, editors face the responsibility of using the controversial measure of a “quick and brutal” desk rejection for potentially weak or irrelevant manuscripts [9]. If the submission is not desk rejected, the external peer review process starts, frequently involving two rounds. Peer review, which has been described as “crude and understudied, but indispensable”, is an essential part of evaluating publishability [10], although authors usually dislike the results of this process [11]. The duration of the editorial process is frequently criticized, but better alternatives have not been created [12]. Before establishing peer review as a requirement for all their manuscripts, even the journals with the strongest reputations were accused of biased selection [13]. During emergencies, some journals have reported relaxing their requirements, asking “peer reviewers to be especially vigilant in guarding against unreasonable demands for revision that are not essential to the main conclusions of a manuscript” [14,15].

Fig 1. Editorial process description with terms used in this study.


Trying to reduce publication times is not a new initiative. In fact, approximately 100 years ago, well-known scientists started using Letters to the Editor as a way to reduce the time to get their ideas published [16]. The advent of electronic publication seemed to solve the problem of publication delays but was soon revealed to be an incomplete solution [17]. Improving publication speed has become a major aim of journal editors [18]. However, a balance is needed between the urgency of reaching the publication goal and the reliability of a scholarly publishing system based on rigorous peer review, as recently demonstrated [19].

Pharmacy practice is a scientific discipline that faces the same situation as other scientific areas and shares the aim of having manuscripts published quickly. Rodriguez analyzed the time-to-index of three pharmacy practice journals between 2010 and 2011 and found a median time of 114 days (IQR 98–141) [20]. Irwin and Rackham compared the time-to-index of pharmacy journals (158 days; SD 58) with that of medicine journals (185 days; SD 96) and nursing journals (234 days; SD 107) [21]. In a later analysis of medical, nursing, and pharmacy journals entered in PubMed in 2012 and 2013, Rodriguez reported different results, with median times-to-index of 62 days (IQR 45–89), 187 days (IQR 148–255), and 280 days (IQR 134–400) for medical, pharmacy, and nursing journals, respectively [22]. However, to the best of our knowledge, no wide-scale comparison of publication times between pharmacy practice journals and other biomedical disciplines has been performed.

The aim of the study was to evaluate the duration of the publication process in pharmacy practice journals compared with the publication process duration in a random sample of articles indexed in PubMed.

Materials and methods

Data collection

A list of pharmacy practice journals was obtained from Mendes et al.’s study [23]. That study objectively classified 285 pharmacy journals into six clusters, namely, ’Cell Pharmacology’ (20 journals), ’Molecular Pharmacology’ (46 journals), ’Clinical Pharmacology’ (57 journals), ’Pharmacy Practice’ (67 journals), ’Pharmaceutics’ (35 journals), and ’Pharmaceutical Analysis’ (60 journals). Of the 67 pharmacy practice journals, 33 are indexed in the U.S. National Library of Medicine (NLM) PubMed (https://www.pubmed.org). In February 2019, the metadata of all the articles published in these 33 journals between 2009 and 2018 (ten years) were extracted from PubMed to create a pool of ‘pharmacy practice journal’ articles. Articles were retrieved from PubMed for each journal and year by combining the journal’s International Standard Serial Number (ISSN), with the [IS] field descriptor, and the publication year, with the [DP] field descriptor, through the ‘AND’ Boolean operator.
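This per-journal, per-year retrieval can be scripted against NCBI’s E-utilities rather than run manually in the PubMed web interface. A minimal sketch, our illustration rather than the authors’ tooling; the esearch endpoint and the [IS]/[DP] query syntax are as documented by NCBI, and the ISSN in the usage example is illustrative:

```python
import requests

ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def pmids_for_journal_year(issn: str, year: int) -> list:
    """Return PMIDs of the articles a journal published in a given year,
    using the same ISSN[IS] AND year[DP] query structure described above."""
    params = {
        "db": "pubmed",
        "term": f"{issn}[IS] AND {year}[DP]",
        "retmax": 10000,   # comfortably above any single journal-year output
        "retmode": "json",
    }
    response = requests.get(ESEARCH, params=params, timeout=30)
    response.raise_for_status()
    return response.json()["esearchresult"]["idlist"]

# Illustrative ISSN (Research in Social and Administrative Pharmacy)
print(len(pmids_for_journal_year("1551-7411", 2018)))
```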

To create a comparison group of randomly selected articles published in non-pharmacy practice biomedical journals, the first and last PMIDs [the name used by the National Library of Medicine for the PubMed Unique Identifier] of each year between 2009 and 2018 were identified in PubMed. PubMed was selected because it is the largest bibliographic database of biomedical literature. The required sample size was calculated after a preliminary analysis of 12,380 randomly selected articles extracted from PubMed, which showed a mean delay of 125 days (SD 98) between article reception and publication, with 43.4% of the articles providing data for this calculation. To detect a 10-day between-group mean difference in each year (less than 10% of the whole editorial process delay in the preliminary analysis) with an alpha error of 0.05 and a power of 80%, a required sample size of 1,509 articles was calculated using G*Power (University of Kiel, Kiel). Allowing for up to 50% of records with incomplete metadata or nonexistent PMIDs, a sample of 3,000 PMIDs per year was created using the Research Randomizer website (https://www.randomizer.org). To obtain each set of randomly generated numbers, the first and last PMIDs of each year were used as ‘number range’ limits, with the ‘remain unique’ option set and the ‘markers’ option off. Data from the comparison group selection process are available in S1 Table in S1 Appendix. In February 2019, the metadata of the articles indexed with the randomly generated PMIDs were extracted from PubMed to create the pool of comparison biomedical journal articles, using a metadata extraction process identical to that used for the pharmacy practice journals.
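The same draw can be reproduced without the Research Randomizer website. A minimal sketch, assuming the first and last PMIDs of a year delimit the sampling range; the PMID limits below are hypothetical (the actual per-year limits are in S1 Table in S1 Appendix), and the fixed seed is our choice for reproducibility:

```python
import random

# Hypothetical PMID range limits for one study year; the actual first and
# last PMIDs per year are reported in S1 Table in S1 Appendix.
first_pmid, last_pmid = 19_000_000, 19_900_000

rng = random.Random(2019)  # fixed seed so the draw is reproducible
sampled_pmids = rng.sample(range(first_pmid, last_pmid + 1), k=3000)
# sample() draws without replacement, mirroring the 'remain unique' option
```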

Articles were excluded from the analyses if their entry date [the date when the metadata were introduced in PubMed] fell within the study limits but their actual publication date was outside the study period.

Data process

PubMed records of all the articles in both groups were imported into EndNote X4 (Thomson Reuters, Toronto) and then exported to an Excel spreadsheet (Microsoft, Redmond). The submission date (when the journal received the manuscript) was obtained from the PubMed field PHST [received]; the acceptance date (when the journal communicated acceptance to the authors) from PHST [accepted]; the online publication date (when the journal made the article available on its webpage) from the field DEP; the entry date (when PubMed processed the metadata submitted by the publisher and made them available for searches) from the field EDAT; the publication language from the field LA; and the publication country from the field PL.
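In MEDLINE-format exports (the tagged-field format behind PubMed records and EndNote imports), these fields appear as tagged lines, e.g. “PHST- 2018/01/15 00:00 [received]”. A minimal parsing sketch, assuming this tag layout as we recall it; verify against an actual export before relying on it:

```python
import re
from datetime import datetime

def parse_dates(medline_record: str) -> dict:
    """Extract the editorial-process dates from one MEDLINE-format record."""
    dates = {}
    for line in medline_record.splitlines():
        # PHST lines carry the publication history, e.g.
        # "PHST- 2018/01/15 00:00 [received]"
        if m := re.match(r"PHST- (\d{4}/\d{2}/\d{2}).*\[(received|accepted)\]", line):
            dates[m.group(2)] = datetime.strptime(m.group(1), "%Y/%m/%d")
        # DEP holds the electronic (online) publication date as YYYYMMDD
        elif m := re.match(r"DEP - (\d{8})", line):
            dates["online"] = datetime.strptime(m.group(1), "%Y%m%d")
        # EDAT is the Entrez/entry date, e.g. "EDAT- 2018/03/02 06:00"
        elif m := re.match(r"EDAT- (\d{4}/\d{2}/\d{2})", line):
            dates["entry"] = datetime.strptime(m.group(1), "%Y/%m/%d")
    return dates
```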

The 2018 CiteScore, the journal’s percentile in the 2018 CiteScore distribution, and the Scopus Sub-Subject Area were obtained from the Scopus Sources database (https://www.scopus.com/sources). Each journal’s 2017 Impact Factor (IF) was obtained from the Journal Citation Reports, available through the Web of Science (https://jcr.clarivate.com). The business model of comparison group journals was classified as ‘open access’ if the publishing journal appeared in the Directory of Open Access Journals (DOAJ) list (https://doaj.org/csv) obtained in October 2019. For pharmacy practice journals, open access status was assigned by inspection of the journals’ websites.

Four lag times covering the different steps of the publication and indexing process were calculated: total publication lag (days between ‘submission date’ and ‘online publication date’), acceptance lag (days between ‘submission date’ and ‘acceptance date’), lead lag (days between ‘acceptance date’ and ‘online publication date’), and indexing lag (days between ‘online publication date’ and ‘entry date’). Fig 1 depicts these time intervals. We did not consider an additional existing lag, the cataloging lag (the time between the ‘entry date’ and the Medical Subject Heading (MeSH) allocation date [MHDA]), because it applies only to journals indexed in MEDLINE, not to all journals in PubMed.
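With the four dates in hand, each lag is a simple day difference. The sketch below builds on the hypothetical parse_dates helper above and, like the study, leaves a lag undefined when either of its dates is missing rather than imputing:

```python
def compute_lags(dates: dict) -> dict:
    """Lag times in days from the dates returned by parse_dates().
    A lag is left undefined when either of its dates is missing
    (no imputation, as in the study)."""
    def days(start, end):
        return (dates[end] - dates[start]).days

    lags = {}
    if "received" in dates and "accepted" in dates:
        lags["acceptance_lag"] = days("received", "accepted")
    if "accepted" in dates and "online" in dates:
        lags["lead_lag"] = days("accepted", "online")
    if "online" in dates and "entry" in dates:
        lags["indexing_lag"] = days("online", "entry")
    if "received" in dates and "online" in dates:
        lags["total_lag"] = days("received", "online")
    return lags
```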

Data analysis

Categorical variables were presented as absolute values and frequencies. For continuous variables, normality was assessed with the Kolmogorov-Smirnov test and visual inspection of the Q-Q plot. Because of the poor metadata reporting rate identified in the preliminary analyses, we chose not to impute missing values, given the potentially low reliability of any imputation. Associations between two categorical variables were tested with the chi-squared test and reported as odds ratios with 95% confidence intervals. Correlations between two non-normal variables were calculated with Spearman’s rho. Comparisons between two independent non-normally distributed variables were conducted with the Mann-Whitney test. To calculate effect size measures of the differences between the two groups, which are preferred to null hypothesis test results, the U-statistic was converted into Cohen’s d following recommendations from Cohen [24] and Fritz et al. [25], using the Psychometrica calculator (https://www.psychometrica.de/effect_size.html). Effect sizes were categorized as follows: <0.1 no effect, 0.1–0.4 small effect, 0.5–0.7 intermediate effect, and >0.7 large effect [24]. Data were analyzed using SPSS v20 (IBM, Armonk) and RStudio v1.2 (RStudio Inc., Boston). Multivariate linear regressions were also performed for the three lag times, using group, IF, CiteScore, and open access status as independent variables. The variance inflation factor (VIF) was used to estimate collinearity in the multivariate models.
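For readers who want to reproduce the comparison without the Psychometrica web calculator, a sketch of one standard conversion path (Mann-Whitney U → z via the normal approximation → effect size correlation r = z/√N → Cohen’s d, following Fritz et al. [25]); we assume, without having verified, that this matches the calculator’s internal method:

```python
import numpy as np
from scipy.stats import mannwhitneyu

def mann_whitney_cohens_d(x, y):
    """Mann-Whitney test with an effect size converted to Cohen's d
    via the normal approximation: U -> z -> r = z / sqrt(N) -> d."""
    u_stat, p_value = mannwhitneyu(x, y, alternative="two-sided")
    n1, n2 = len(x), len(y)
    mean_u = n1 * n2 / 2
    sd_u = np.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)  # ignores the tie correction
    z = (u_stat - mean_u) / sd_u
    r = z / np.sqrt(n1 + n2)                      # effect size correlation
    d = 2 * r / np.sqrt(1 - r**2)                 # r -> Cohen's d
    return p_value, abs(d)
```

Applying the study’s categorization, the returned d would then be read against the <0.1 / 0.1–0.4 / 0.5–0.7 / >0.7 cutoffs given above.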

Results

The 33 pharmacy practice journals indexed in PubMed published a total of 26,256 articles between 2009 and 2018. These articles were published in three languages, English (90.0%), Japanese (8.1%), and French (1.9%). The distribution of these articles per journal and year is available in Supporting information–S2 Table in S2 Appendix. The CiteScore was calculated by Scopus for 25 of the 33 pharmacy journals indexed in PubMed; the median CiteScore was 1.09 (interquartile range, IQR 0.80–1.73). Only 8 journals had an IF; the median IF was 2.094 (IQR 1.505–2.688).

The 3,000 PMIDs randomly selected per year for the study period resulted in a total of 25,272 existing PMIDs, which led to 23,803 articles for the comparison group after excluding those published outside of the study period (Table 1). These articles were published in 5,601 different journals with a median of 2 articles published per journal (IQR 1–5); PLoS ONE was the most prevalent journal, with 471 articles. These articles were published in 27 different languages, with English (22,565 articles; 94.8%) and Chinese (335 articles; 1.4%) being the most common languages. The journals publishing these articles were published in 75 countries, with the United States (9,775 articles), the United Kingdom (5,883 articles), and the Netherlands (1,604 articles) as the most common countries. The CiteScore was calculated for 4,856 of the 5,601 journals, and the median CiteScore was 2.06 (IQR 1.20–3.20). A total of 3,835 journals had an IF; the median IF was 2.396 (IQR 1.538–3.607).

Table 1. Articles selected for the study.

Publication year | Non-pharmacy practice | Pharmacy practice | Total
2009 | 1,554 | 1,637 | 3,191
2010 | 1,677 | 2,018 | 3,695
2011 | 1,866 | 2,103 | 3,969
2012 | 3,567 | 2,462 | 6,029
2013 | 2,628 | 2,560 | 5,188
2014 | 1,311 | 2,743 | 4,054
2015 | 1,376 | 3,024 | 4,400
2016 | 2,071 | 3,037 | 5,108
2017 | 3,556 | 3,312 | 6,868
2018 | 4,197 | 3,360 | 7,557
Total | 23,803 | 26,256 | 50,059

Reporting rates of the publication process dates were significantly lower in the pharmacy practice articles than in the comparison group, with odds ratios between 0.30 and 0.53 (Table 2). This lower reporting rate was mainly associated with pharmacy practice journals published in the U.S., U.K., and Japan; pharmacy practice journals published in France, the Netherlands, Saudi Arabia, Spain, and Switzerland had higher reporting rates than the journals from these countries in the comparison group (S3 Tables 1 to 4 in S3 Appendix). Among the pharmacy practice articles, 7,086 (27.0%) reported all three dates, compared with 9,917 (41.7%) of the comparison articles (OR 0.52; 95% CI 0.50–0.54).

Table 2. Frequencies of article processing dates reported.

Date reported | Pharmacy practice | Non-pharmacy practice | p-value* | Odds ratio (95% CI)
Submission date | 8,900 (33.9%) | 11,656 (49.0%) | <0.001 | 0.53 (0.52–0.56)
Acceptance date | 8,964 (34.1%) | 12,106 (50.8%) | <0.001 | 0.50 (0.48–0.52)
Online publication date | 9,469 (36.1%) | 15,557 (65.4%) | <0.001 | 0.30 (0.29–0.31)

*Chi-squared test. Data presented as number (percentage).

Owing to the large sample sizes, significant differences were found in all the lag times analyzed between the pharmacy practice group and the comparison group. However, the effect sizes of these differences ranged from no effect (Cohen’s d = 0.048) to a small effect (Cohen’s d = 0.496) (Table 3). The acceptance lag increased slightly over time in pharmacy practice journals (Spearman’s rho = 0.163; p<0.001), while the trend was nearly flat in the comparison group (rho = 0.045; p<0.001) (Fig 2). The lead lag decreased over time in both groups (rho = -0.230 in pharmacy practice journals and rho = -0.127 in comparison journals) (Fig 3). The indexing lag remained stable over time in both groups (Fig 4). Detailed data are provided in S4 Tables 1 to 4 in S4 Appendix.

Table 3. Lag times (days) calculated in the two groups.

Lag* | Pharmacy practice: n; median (IQR) | Non-pharmacy practice: n; median (IQR) | p-value** | Cohen’s d
Acceptance lag | 8,884; 105 (57–173) | 11,146; 97 (56–155) | <0.001 | 0.081
Lead lag | 7,100; 13 (6–35) | 10,559; 23 (9–45) | <0.001 | 0.335
Total lag | 7,086; 138 (79–217) | 9,899; 131 (82–197) | 0.002 | 0.048
Indexing lag | 9,189; 5 (2–46) | 14,266; 4 (2–12) | <0.001 | 0.496

*Kolmogorov-Smirnov test p<0.001 for all lags.

**Mann-Whitney test. IQR = interquartile range.

Fig 2. Violin plots of acceptance lag (time from submission to acceptance) in comparison (grey) and pharmacy practice (white) journals.


Y-axis presented in logarithmic scale; crossbars represent the median and the 25th and 75th percentiles.

Fig 3. Violin plots of lead lag (time from acceptance to online publication) in comparison (grey) and pharmacy practice (white) journals.


Y-axis presented in logarithmic scale; crossbars represent the median and the 25th and 75th percentiles.

Fig 4. Violin plots of indexing lag (time from online publication to PubMed indexing) in comparison (grey) and pharmacy practice (white) journals.


Y-axis presented in logarithmic scale; crossbars represent the median and the 25th and 75th percentiles.

When only the pharmacy practice journals were analyzed, important variations in the acceptance lag were found, ranging from medians of 46 days (IQR 27–80) in J Basic Clin Pharm and 48 days (IQR 32–71) in Pharmacy (Basel) to 242 days (IQR 182–304) in Int J Pharm Pract and 290 days (IQR 230–349) in Curr Pharm Teach Learn. A trend analysis of the acceptance lag showed a stable trend in the majority of the pharmacy practice journals during the study period; a few journals presented trends with significant correlations but with no or small effect sizes. Only one journal, Pharmacy (Basel), presented a moderate effect with a decrease in acceptance lag over time (rho = -0.436); this journal had the lowest acceptance lag among all the pharmacy practice journals in 2018, at 40 days (IQR 25–55). The complete data on processing lag times per year, with the corresponding violin plots for the pharmacy practice journals, are available in S5 Appendix. The lead lag presented a significant decreasing trend during the study period for almost all the pharmacy practice journals, with a large effect size (Spearman’s rho >0.7) only in Res Social Adm Pharm. The majority of the pharmacy practice journals also presented a significant decreasing trend in the indexing lag over time, but four journals presented high indexing lags, with medians greater than 100 days (Curr Pharm Teach Learn, Hosp Pharm, J Young Pharm, and Saudi Pharm J).

Compared with subscription journals, open access pharmacy practice journals presented a shorter acceptance lag (72 days, IQR 46–118 vs. 128 days, IQR 72–203; p<0.001; OR = 0.612), a shorter lead lag (10 days, IQR 6–26 vs. 14 days, IQR 7–37; p<0.001; OR = 0.172), and a longer indexing lag (92 days, IQR 14–276 vs. 5 days, IQR 2–25; p<0.001; OR = 0.734). These differences in lag times between open access and subscription journals were nonexistent in the comparison group, with acceptance lags of 100 days (IQR 58–160) vs. 98 days (IQR 57–154) (p = 0.056; OR = 0.037), lead lags of 23 days (IQR 11–42) vs. 22 days (IQR 9–46) (p = 0.607; OR = 0.010), and indexing lags of 4 days (IQR 2–18) vs. 3 days (IQR 2–6) (p<0.001; OR = 0.169), respectively.

The acceptance lag was slightly associated with the CiteScore in the pharmacy practice group (rho = -0.131; p<0.001) but not in the comparison group (p = 0.255). The impact factor also presented a slight inverse association with the acceptance lag in pharmacy practice journals (rho = -0.195; p<0.001) and in the comparison group (rho = -0.082; p<0.001); however, the effect sizes indicated no effect.

Multivariate analyses confirmed the associations found in the bivariate analyses for the three lag times (Table 4). Moderate VIFs were found for the CiteScore and the IF, but they were always below the commonly accepted cutoff of 5.

Table 4. Multivariate analyses for the three lag times.

Acceptance lag (R-square: 0.006; Durbin-Watson = 1.788)
Variable | Beta | 95% CI | p-value | VIF
Study group | -3.910 | -7.077 to -0.742 | 0.016 | 1.072
CiteScore (2018) | 1.000 | -0.267 to 2.268 | 0.122 | 2.991
Open access journal | -13.822 | -17.237 to -10.406 | <0.001 | 1.005
Journal Impact Factor | -1.861 | -2.653 to -1.069 | <0.001 | 3.041

Lead lag (R-square: 0.052; Durbin-Watson = 1.620)
Variable | Beta | 95% CI | p-value | VIF
Study group | -16.966 | -18.570 to -15.363 | <0.001 | 1.088
CiteScore (2018) | -5.779 | -6.436 to -5.121 | <0.001 | 3.447
Open access journal | -0.097 | -1.930 to 1.737 | 0.918 | 1.029
Journal Impact Factor | 2.795 | 2.339 to 3.251 | <0.001 | 3.550

Indexing lag (R-square: 0.152; Durbin-Watson = 0.550)
Variable | Beta | 95% CI | p-value | VIF
Study group | 41.721 | 39.374 to 44.069 | <0.001 | 1.057
CiteScore (2018) | -0.242 | -0.959 to 0.475 | 0.508 | 2.913
Open access journal | 63.494 | 60.803 to 66.184 | <0.001 | 1.018
Journal Impact Factor | 0.524 | 0.078 to 0.969 | 0.021 | 2.968

95% CI: 95% confidence interval; VIF: variance inflation factor.

Discussion

Our 10-year analysis of more than 26,000 articles published in pharmacy practice journals, compared with a random sample of more than 23,000 articles published in other journals over the same period, showed a very similar acceptance lag (time to accept a manuscript) in the two groups; however, pharmacy practice journals had a slightly shorter lead lag (time to make the accepted article available online) and a slightly longer indexing lag (time to index the article in PubMed) than the comparison group.

Our results showed that, once a manuscript is accepted, the article is published online in approximately 13 days in pharmacy practice journals and 23 days in other journals. An additional five days are needed to make the article available in PubMed. Both the lead and indexing lags, in pharmacy practice journals and in the generic comparison group, were shorter than those reported by Lee et al. for Korean medical journals [8]. These post-acceptance delays seem to matter less to authors demanding a faster editorial process because, at that point, they have already received the editor’s communication of acceptance [26].

Authors of articles both in pharmacy practice journals and in the comparison journals had to wait approximately 100 days to have their manuscripts accepted. This acceptance lag is similar to the 102 days reported by Lee et al. among Korean medical journals [8]. An acceptance lag of about 100 days was also reported in Himmelstein’s analysis, which demonstrated a fairly flat historical trend [27]; Himmelstein also reported an average acceptance lag of 100 days among Public Library of Science journals [28]. This period is the main reason authors complain about the slow editorial process, especially when they add the time spent in previous editorial processes that ended with negative feedback [26]. It is important to consider that this period of more than three months includes the time that editors and reviewers take to scrutinize the manuscript, the time authors need to modify the original text to address reviewers’ recommendations, and the time editors need to find a sufficient number of peer reviewers for the manuscript. The latter has become a major concern for journal editors, with two to four reviewers invited for every acceptance [29,30].

Several solutions have been suggested to reduce the acceptance lag, usually by reducing or eliminating the time devoted to the peer review stage, but their effectiveness has not yet been demonstrated [31]. Stern and O’Shea suggested several strategies to move from an “outdated publishing process” to an author-based decision process, in which authors, not editors, decide whether a manuscript should be published, and the “curation” process starts after the article’s publication [32]. This is the rationale of preprint servers: repositories where authors deposit their manuscripts, which are immediately indexed and made available to the general public with no prepublication peer review [33]. Scientific fields such as physics commonly use rapid publication systems to share ideas. However, the health sciences, and especially the clinical sciences, should approach these “pre-refereed research outputs” carefully [33]. Clinical decisions made by professionals and self-care decisions made by lay people are frequently based on published articles, which may pose health risks. For instance, the premature publication of a COVID-19 letter to the editor [34] obliged the European Society of Cardiology to immediately release a position statement criticizing that letter and recommending that patients and professionals maintain their angiotensin receptor blocker treatments. Additionally, the role of these unreviewed preprints in evidence generation through systematic reviews and meta-analyses has not been sufficiently elucidated.

In some scientific areas, peer reviewers are paid for their work, which leads to an approximately 20% reduction in review time, although the quality of those reviews has not been evaluated [35]. Paying reviewers would increase publishing costs, which would in turn increase subscription rates for authors’ institutions or article processing charges for authors. It seems more reasonable to devise strategies that increase invited reviewers’ acceptance rates by compensating them in other ways. The publishing system acknowledges three types of contributors to an article: authors, collaborators, and acknowledged contributors [36]. Although a good peer reviewer can have a substantial impact on the final version of an article, the peer reviewer’s contribution is only acknowledged with an email and voluntarily recorded in ad hoc registries, which are rarely considered as individual merit. Alternatives to this silent contribution range from including the reviewers’ names on the article’s webpage or first page to including them as part of the collective authorship of an editorial [37]. To increase the visibility and recognition of a peer reviewer’s contribution to a given article, bibliographic databases such as PubMed could create a third category of contributorship: the peer reviewer. PubMed currently differentiates authors (retrievable with the [AU] field descriptor) from collaborators (retrievable with the [IR] field descriptor) [38]. Creating a third field descriptor for peer reviewers could increase recognition of their contribution to the published article.

To be recognized as a peer reviewer, the review should not be performed anonymously. The debate regarding the pros and cons of open peer review is not new [39,40]. However, in recent times an increasing number of journals have adopted mandatory open peer review, arguing that anonymous comments can overstate weaknesses and unnecessarily destroy manuscripts that could be improved with constructive criticism [41]. More importantly, anonymous peer review permits a lack of transparency that favors peer review fraud. Fake peer review has been associated with both predatory publishers [42] and ethical journals [43]. In our analysis, we found some journals with extremely short acceptance lags: two journals had median acceptance lags below 50 days, and four journals had 25% of their articles accepted in less than 32 days. It does not seem feasible to select reviewers, collect their comments, receive the revised version of the manuscript from the authors, and evaluate that new version in such a short period.

Journals have their articles’ metadata available in PubMed (including MEDLINE and PubMed Central) only after passing a rigorous selection process run by the NLM. One would expect journals accepted in PubMed to have the highest quality standards in both their editorial process and the transparency of that process. However, in our study, approximately 73% of the pharmacy practice articles and 59% of the comparison group articles did not include the three editorial process dates among the metadata provided to PubMed. These poor reporting practices have been associated with potential underestimation of delays [2]. It is time to ask PubMed and other journal selection committees to require that publishers provide the dates of their editorial process as a means of ensuring publishing transparency.

One should acknowledge that the duration of the editorial process may differ across publication types. However, publication delay analyses usually avoid classifying publication types, for several reasons [8,27,28]. Journals use inconsistent terminology for publication types: in some journals, short reports include commentaries, while other journals publish these short reports as letters or research letters; systematic reviews and meta-analyses are considered original research papers in some journals and review articles in others. In addition to this inconsistent terminology, publication type indexing is also inconsistent. Journals indexed with metadata provided to MEDLINE can now classify their articles into 30 different publication types, whereas before February 2016 the NLM made only 6 publication types available to publishers; journals indexed with metadata provided to PubMed Central use that database’s ad hoc list of publication types. The default value of “Journal Article” is assigned to any article whose metadata set does not provide a valid publication type [44]. Consequently, limiting bibliometric analyses to articles indexed with specific publication types may be misleading. Our analysis tried to avoid this inaccurate selection process by using a large, probabilistically selected random sample of articles as the comparison group.

Although the “climbing upwards” number of existing journals has been blamed for the increasing number of review invitations [45], a recently published mathematical model demonstrated that the total number of reviewers required to publish one paper is inversely related to the acceptance rate, which should increase in parallel with the number of journals. This inverse relationship means that the higher the number of indexed journals, the lower the rejection rate and, consequently, the lower the total number of reviewers needed to publish a given number of articles [46].

However, educating authors about their important role as peer reviewers is probably the best mid-to-long-term strategy. If authors want their manuscripts published quickly, they must increase their own reviewer invitation acceptance rate.

Limitations

Our study has some limitations. Incomplete metadata indexing in PubMed may have influenced the results in unpredictable ways. We only analyzed the pharmacy practice journals identified in Mendes et al.’s study; although that study searched the major databases, it may not have identified all pharmacy journals. Our study included all the articles published by the journals during the study period, including editorials, which have shorter processing times; however, these contributions should not carry important relative weight in the pool of articles. We only obtained indexing data for PubMed, which may not generalize to other bibliographic databases. However, PubMed is the most comprehensive bibliographic database of biomedical literature, containing more than 32 million records.

Conclusion

A pool of articles published in pharmacy practice journals between 2009 and 2018 had acceptance lag times similar to those of a generic random sample of articles indexed in PubMed: approximately 100 days, with a slightly increasing time trend. Small differences in lead and indexing times do not differentiate the total duration of the editorial process of pharmacy practice journals from that of other journals.

Supporting information

S1 Appendix. Comparison group: Results of the PMIDs random selection.

(DOCX)

S2 Appendix. Articles published in the pharmacy practice journals between 2009 and 2018.

(DOCX)

S3 Appendix. Acceptance, lead, total, and indexing lag times for pharmacy practice journals (2009–2018).

(DOCX)

S4 Appendix. Acceptance, lead, total, and indexing lag times trends (2009–2018).

(DOCX)

S5 Appendix. Pharmacy practice journals’ data (violin plots with lines representing the 25th percentile, median, and 75th percentile).

(DOCX)


Declarations

Ethics approval. This study does not require any ethics approval.

Data Availability

Data available at https://doi.org/10.17605/OSF.IO/QY2AE.

Funding Statement

The authors received no specific funding for this work.

References

1. Brandon AN. "Publish or perish". Bull Med Libr Assoc. 1963;51:109–10.
2. Powell K. Does it take too long to publish research? Nature. 2016;530:148–51. doi: 10.1038/530148a
3. Huisman J, Smits J. Duration and quality of the peer review process: the author’s perspective. Scientometrics. 2017;113:633–50. doi: 10.1007/s11192-017-2310-5
4. Strand LB, Clarke P, Graves N, Barnett AG. Time to publication for publicly funded clinical trials in Australia: an observational study. BMJ Open. 2017;7:e012212. doi: 10.1136/bmjopen-2016-012212
5. Gordon D, Taddei-Peters W, Mascette A, Antman M, Kaufmann PG, Lauer MS. Publication of trials funded by the National Heart, Lung, and Blood Institute. N Engl J Med. 2013;369:1926–34. doi: 10.1056/NEJMsa1300237
6. Gordon D, Cooper-Arnold K, Lauer M. Publication Speed, Reporting Metrics, and Citation Impact of Cardiovascular Trials Supported by the National Heart, Lung, and Blood Institute. J Am Heart Assoc. 2015;4:e002292. doi: 10.1161/JAHA.115.002292
7. Wieschowski S, Riedel N, Wollmann K, Kahrass H, Muller-Ohlraun S, Schurmann C, et al. Result dissemination from clinical trials conducted at German university medical centers was delayed and incomplete. J Clin Epidemiol. 2019;115:37–45. doi: 10.1016/j.jclinepi.2019.06.002
8. Lee Y, Kim K. Publication Delay of Korean Medical Journals. J Korean Med Sci. 2017;32:1235–42. doi: 10.3346/jkms.2017.32.8.1235
9. Donato H, Marinho RT. Acta Medica Portuguesa and peer-review: quick and brutal! Acta Med Port. 2012;25:261–2.
10. Kassirer JP, Campion EW. Peer review. Crude and understudied, but indispensable. JAMA. 1994;272:96–7. doi: 10.1001/jama.272.2.96
11. Weber EJ, Katz PP, Waeckerle JF, Callaham ML. Author perception of peer review: impact of review quality and acceptance on satisfaction. JAMA. 2002;287:2790–3. doi: 10.1001/jama.287.21.2790
12. Burke JF, Nallamothu BK, Ho PM. The Review and Editorial Process at Circulation: Cardiovascular Quality and Outcomes: The Worst System, Except for All the Others. Circ Cardiovasc Qual Outcomes. 2017;10. doi: 10.1161/CIRCOUTCOMES.117.003853
13. Baldwin M. Credibility, Peer Review, and Nature, 1945–1990. Notes Rec R Soc Lond. 2015;69:337–52. doi: 10.1098/rsnr.2015.0029
14. Rubin EJ, Baden LR, Morrissey S, Campion EW. Medical Journals and the 2019-nCoV Outbreak. N Engl J Med. 2020;382:866. doi: 10.1056/NEJMe2001329
15. Ahima RS, Jackson S, Casadevall A, Semenza GL, Tomaselli G, Collins KL, Lieberman AP, Martin DM, Reddy P. Changing the editorial process at JCI and JCI Insight in response to the COVID-19 pandemic. J Clin Invest. 2020. doi: 10.1172/JCI138305
16. Baldwin M. ’Keeping in the race’: physics, publication speed and national publishing strategies in Nature, 1895–1939. Br J Hist Sci. 2014;47:257–79. doi: 10.1017/s0007087413000381
17. Ng KH. Exploring new frontiers of electronic publishing in biomedical science. Singapore Med J. 2009;50:230–4.
18. Rapp PR. Editor’s comment: Improved publication speed at Neurobiology of Aging. Neurobiol Aging. 2015;36:2009. doi: 10.1016/j.neurobiolaging.2015.03.009
19. Nature Index. COVID-19 research update: How peer review changed the conclusions of a coronavirus preprint. Available at: https://www.natureindex.com/news-blog/how-coronavirus-is-changing-research-practices-and-publishing. [Accessed Jul 13, 2020]
20. Rodriguez RW. Delay in indexing articles published in major pharmacy practice journals. Am J Health Syst Pharm. 2014;71:321–4. doi: 10.2146/ajhp130421
21. Irwin AN, Rackham D. Comparison of the time-to-indexing in PubMed between biomedical journals according to impact factor, discipline, and focus. Res Social Adm Pharm. 2017;13:389–93. doi: 10.1016/j.sapharm.2016.04.006
22. Rodriguez RW. Comparison of indexing times among articles from medical, nursing, and pharmacy journals. Am J Health Syst Pharm. 2016;73:569–75. doi: 10.2146/ajhp150319
23. Mendes AM, Tonin FS, Buzzi MF, Pontarolo R, Fernandez-Llimos F. Mapping pharmacy journals: A lexicographic analysis. Res Social Adm Pharm. 2019;15:1464–71. doi: 10.1016/j.sapharm.2019.01.011
24. Cohen J. Statistical power analysis for the behavioral sciences. Hillsdale, NJ: Erlbaum; 1988.
25. Fritz CO, Morris PE, Richler JJ. Effect size estimates: current use, calculations, and interpretation. J Exp Psychol Gen. 2012;141:2–18. doi: 10.1037/a0024338
26. Wallach JD, Egilman AC, Gopal AD, Swami N, Krumholz HM, Ross JS. Biomedical journal speed and efficiency: a cross-sectional pilot survey of author experiences. Res Integr Peer Rev. 2018;3:1. doi: 10.1186/s41073-017-0045-8
27. Satoshi Village. The history of publishing delays. Available at: https://blog.dhimmel.com/history-of-delays/. [Accessed Jul 13, 2020]
28. Himmelstein D. Publication delays at PLOS and 3,475 other journals. Available at: http://blog.dhimmel.com/plos-and-publishing-delays/. [Accessed Jul 13, 2020]
29. Didham RK, Leather SR, Basset Y. Don’t be a zero-sum reviewer. Insect Conserv Divers. 2017;10:1–4.
30. Fernandez-Llimos F. Peer review and publication delay. Pharm Pract (Granada). 2019;17:1502. doi: 10.18549/PharmPract.2019.1.1502
31. Kovanis M, Trinquart L, Ravaud P, Porcher R. Evaluating alternative systems of peer review: a large-scale agent-based modelling approach to scientific publication. Scientometrics. 2017;113:651–71. doi: 10.1007/s11192-017-2375-1
32. Stern BM, O’Shea EK. A proposal for the future of scientific publishing in the life sciences. PLoS Biol. 2019;17:e3000116. doi: 10.1371/journal.pbio.3000116
33. Chiarelli A, Johnson R, Pinfield S, Richens E. Preprints and Scholarly Communication: Adoption, Practices, Drivers and Barriers. F1000Res. 2019;8:971. doi: 10.12688/f1000research.19619.2
34. Fang L, Karakiulakis G, Roth M. Are patients with hypertension and diabetes mellitus at increased risk for COVID-19 infection? Lancet Respir Med. 2020;8:e21. doi: 10.1016/S2213-2600(20)30116-8
35. Thompson GD, Aradhyula SV, Frisvold G, Frisvold R. Does paying referees expedite reviews?: Results of a natural experiment. Southern Econ J. 2010;76:678–92.
36. ICMJE. Defining the Role of Authors and Contributors. Available at: http://www.icmje.org/recommendations/browse/roles-and-responsibilities/defining-the-role-of-authors-and-contributors.html. [Accessed Apr 16, 2020]
37. Fernandez-Llimos F. Scholarly publishing depends on peer reviewers. Pharm Pract (Granada). 2018;16:1236. doi: 10.18549/PharmPract.2018.01.1236
38. National Library of Medicine. Authorship in MEDLINE. Available at: https://www.nlm.nih.gov/bsd/policy/authorship.html. [Accessed Apr 16, 2020]
39. Groves T. Is open peer review the fairest system? Yes. BMJ. 2010;341:c6424. doi: 10.1136/bmj.c6424
40. Khan K. Is open peer review the fairest system? No. BMJ. 2010;341:c6425. doi: 10.1136/bmj.c6425
41. Walbot V. Are we training pit bulls to review our manuscripts? J Biol. 2009;8:24. doi: 10.1186/jbiol125
42. Bowman JD. Predatory publishing, questionable peer review, and fraudulent conferences. Am J Pharm Educ. 2014;78:176. doi: 10.5688/ajpe7810176
43. Hadi MA. Fake peer-review in research publication: revisiting research purpose and academic integrity. Int J Pharm Pract. 2016;24:309–10. doi: 10.1111/ijpp.12307
44. National Library of Medicine. XML Help for PubMed Data Providers. Available at: https://www.ncbi.nlm.nih.gov/books/NBK3828/#publisherhelp.PublicationType_O. [Accessed Jul 13, 2020]
45. Rohn J. Why I said no to peer review this summer. Nature. 2019;572:417. doi: 10.1038/d41586-019-02470-2
46. Fernandez-Llimos F, Salgado TM, Tonin FS. How many manuscripts should I peer review per year? Pharm Pract (Granada). 2020;18:1804. doi: 10.18549/PharmPract.2020.1.1804

Decision Letter 0

Tim Mathes

16 Mar 2021

PONE-D-21-03317

Publication speed in pharmacy practice journals: a comparative analysis

PLOS ONE

Dear Dr. Fernandez-Llimos,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Please submit your revised manuscript by Apr 24 2021 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols

We look forward to receiving your revised manuscript.

Kind regards,

Tim Mathes

Academic Editor

PLOS ONE

Journal Requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at

https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and

https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

2.Thank you for stating the following in the Competing Interests section:

"FFL and FST are editors of the journal Pharmacy Practice. The other authors have no conflict of interest."

Please confirm that this does not alter your adherence to all PLOS ONE policies on sharing data and materials, by including the following statement: "This does not alter our adherence to  PLOS ONE policies on sharing data and materials.” (as detailed online in our guide for authors http://journals.plos.org/plosone/s/competing-interests).  If there are restrictions on sharing of data and/or materials, please state these. Please note that we cannot proceed with consideration of your article until this information has been declared.

Please include your updated Competing Interests statement in your cover letter; we will change the online submission form on your behalf.

Please know it is PLOS ONE policy for corresponding authors to declare, on behalf of all authors, all potential competing interests for the purposes of transparency. PLOS defines a competing interest as anything that interferes with, or could reasonably be perceived as interfering with, the full and objective presentation, peer review, editorial decision-making, or publication of research or non-research articles submitted to one of the journals. Competing interests can be financial or non-financial, professional, or personal. Competing interests can arise in relationship to an organization or another person. Please follow this link to our website for more details on competing interests: http://journals.plos.org/plosone/s/competing-interests


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Partly

Reviewer #2: Partly

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

Reviewer #2: No

**********

3. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #2: Yes

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: The article “Publication speed in pharmacy practice journals: a comparative analysis” by Mendes et al evaluates the duration of the publication process in Pharmacy practice journals compared to other scientific disciplines.

The article provides valuable and significant information with a large sample size of articles and carefully analyzed data. In addition, the article describes some of the pain points of the publication process that add to delays, with peer review as the major process that may significantly add to delays.

The quantification of the acceptance, lead and indexing lags are useful and interesting contributions. Additionally, the comparison of the present results to previous studies such as the Rodriguez data is interesting and sets up a good framework for the need to carry out this particular investigation.

Table 3 with lag times presents useful data to compare the two groups and supports the above notions.

Also interesting is the open access data presented in this study. The magnitude of the indexing lag 92 days vs 5 days for pharmacy (which seems an incredibly large difference…can an 18.4-fold difference be easily explained?)...compared to 4 days vs 3 days in the control seems to ask for further study/data. It would be important to explain this further in the text and perhaps clarify the reasons.

However, the manuscript also has some significant drawbacks. Large sections of text seem opinion based, especially the discussion. The introduction and discussion need to be significantly curtailed and seem to digress significantly from the hard data presented. I would recommend removing large sections of text that are not addressed in the analyses.

During “emergency times”, COVID, the peer review process, pre-prints are all mentioned…all of which are important and currently topical items when exploring publication speed…however, the reader is left with wanting more granular data or at least some direct data that addresses these influencing factors and their effect on publication speed. At best, the current dataset is indirect regarding these discussions. The article should be significantly shortened.

In brief, I would support publication after a very significant reduction in length… perhaps as a short note.

Reviewer #2: Abstract: The phrase “duration of the publication process” seems somewhat ambiguous. This could encompass elements of manuscript preparation all the way to publication of the article itself. Since it seems the rest of the description here focuses on the period between submission date and acceptance date, a better term might relate to the peer review and publication process.

For the comparison group, I think it should be specified whether the comparison group excluded pharmacy practice journals (assuming it did), but should be referred to as “non-pharmacy practice journals” for clarity. Also, it would be helpful to know if these journals potentially included pharmacy journals that did not involve practice (eg, pharmacology, pharmacokinetics, basic science, etc). I think there needs to be some description of the comparison group, eg, broadly speaking the discipline of the journal related to medicine, nursing, public health, basic science, etc. The random selection process seems like it could include any possible article, which would be an extremely heterogeneous group of articles.

Intro, paragraph 2: The discussion of RCT publishing metrics is interesting, but I wonder if there is anything more relevant to pharmacy practice, as pharmacists typically are not performing RCTs, but rather observational or pre-post studies.

Intro paragraph 3: The critique of the peer review process in the context of this research might be strengthened by statements that relate more closely to the duration of this process, since that seems to be the main outcome measure of this study.

Intro, paragraph 4: The final sentence of this paragraph relates to one of my earlier comments regarding the composition of the comparator group. Here, the authors state they aim to compare pharmacy practice journals with “other biomedical disciplines”. The other biomedical disciplines that composed the comparator group should be described in the abstract.

Data collection, paragraph 2: The authors aim to identify a 10-day difference between groups in publication time. I think this selection of a clinically important difference requires justification. This seems like a very short period and is well below the SD and IQR of analyses of pharmacy practice journals themselves. I think this may be too short of a period, considering the need for NLM to distribute resources responsibly for indexing, and the lower likelihood that pharmacy practice journals will publish something practice-changing as compared to general medical journals. If anything, this will only leave the analysis with greater power rather than less, but the selection requires justification. I also think this paragraph needs disambiguation of the term “article reception”.

Data process, paragraph 1: I think the terms related to the Medline metadata require definition to assist the readers in knowing how they differ. For example, it may be easy for a reader to not appreciate the difference between acceptance date and entry date. Additionally, I think some more context is needed regarding the terminology of “indexing lag”. This term might be interpreted as “indexing” with MeSH headings (ie, MHDA data in MEDLINE), but it seems that the authors are using Entrez date (EDAT data in MEDLINE), which is generally speaking the online publication date.

Data analysis: I’m not entirely certain of the benefit of presenting a Cohen’s d in this case. It seems to complicate the interpretation somewhat as its calculation loses the units of the measure. I think it would be more beneficial to report a point estimate and confidence interval for the difference between groups. After having read the remainder of the results, the authors (appropriately) have evaluated the effect of other variables on the outcome of various time periods. With this in mind, I wonder if a multivariable model adjusting for the variables while predicting each outcome would be beneficial. Alternatively, I was almost anticipating some matching procedure between pharmacy practice journals and comparator journals to ensure they have some similar characteristics (eg, MeSH headings for comparative study, meta-analysis, etc) to try to answer if there is something related to the journal discipline that delays this process, all other things being equal. Lastly, since time is an element to this analysis, I wonder if a survival analysis is more appropriate. If there are any exclusions for articles that do not have one of the “outcome” variables, it presents risk for bias in the analysis of simply comparing median times.

Results, paragraph 2: The authors describe the languages of journals in the comparison group here, but do not do so for the pharmacy practice “exposure” group. I think this should be reported, as any differences between the groups in language could influence availability of peer reviewers or other processes in the duration of the process the authors are measuring.

Table 1: I’m not sure how beneficial columns 2 through 4 are in this table. I think the most helpful information is knowing the distribution of publication dates for each group over time. Eg, I do not think there is much that readers can take away from knowing the first PMID that was eligible for inclusion from each 1-year period.

Table 2: The content in this table has confused me a bit regarding the methodology. I see that not all articles in either group consistently reported important data for the three variables. So have the articles without this data been excluded from analyses? I don’t recall reading that in the methods, so if this is the case, I believe it should be specified, or otherwise clarified.

Paragraph following table 2: The information on time trend is interesting, but I think needs more interpretation for readers. The correlation coefficient seems difficult to interpret in this situation – eg, it may help to explain the average annual improvement in lag time for each group.

Table 3: Similar to my above comment, I think instead of reporting Cohen’s d, a point estimate and confidence interval for a difference would be more interpretable and helpful.

Paragraph following table 3: These trends within pharmacy journals are interesting, but I think their presentation needs more justification. In other words, why mention these specific journals? Did they have the shortest and longest acceptance lags, or were in the upper and lower deciles? Some type of criterion for presentation should be specified. It would also help to include results for each journal in tabular format (though in a supplemental table, I imagine), potentially ordered by impact factor or some other metric of readership. On this topic, I find it a shortcoming that some of the more widely read pharmacy practice journals (eg, AJHP, Pharmacotherapy) do not have their results reported to serve as a barometer for pharmacists who are likely most familiar with these.

Discussion: This section is rather long given the scope of the article. The first three paragraphs are relevant in providing additional context to the findings of this analysis. The remainder, while offering some interesting insight and potential solutions, can be tapered somewhat. I also get the impression here more than in other sections of the manuscript that the authors take most issue with delays during the peer review process. Based on the definitions of each lag time, it seems that acceptance lag may be most representative of the peer review process. The authors might consider restricting the analysis and presentation to this measure only to give the article a tighter focus.

Limitations: I think a major limitation that should be stated is the heterogeneous nature of the comparison group. While I think the comparison group should be modified with matching or other techniques as described above, I think it is difficult to compare pharmacy practice journals with essentially “everything else”. There are so many journals in PubMed that are not directly related to healthcare provision (eg, basic science) that it may make this comparison group somewhat unrepresentative.

**********

6. PLOS authors have the option to publish the peer review history of their article. If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: No

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

Attachment

Submitted filename: Mendes et al review_PLOS ONE.docx

PLoS One. 2021 Jun 29;16(6):e0253713. doi: 10.1371/journal.pone.0253713.r002

Author response to Decision Letter 0


16 Apr 2021

Reviewer #1:

The article “Publication speed in pharmacy practice journals: a comparative analysis” by Mendes et al evaluates the duration of the publication process in pharmacy practice journals compared with other scientific disciplines.

The article provides valuable and significant information with a large sample size of articles and carefully analyzed data. In addition, the article describes some of the pain points of the publication process that add to delays, with peer review as the major step that may contribute most significantly to them.

The quantification of the acceptance, lead, and indexing lags is a useful and interesting contribution. Additionally, the comparison of the present results to previous studies, such as the Rodriguez data, is interesting and sets up a good framework for the need to carry out this particular investigation.

Table 3 with lag times presents useful data to compare the two groups and supports the above notions.

Also interesting is the open access data presented in this study. The magnitude of the indexing lag, 92 days vs 5 days for pharmacy (which seems an incredibly large difference; can an 18.4-fold difference be easily explained?), compared with 4 days vs 3 days in the control group, seems to call for further study/data. It would be important to explain this further in the text and perhaps clarify the reasons.

However, the manuscript also has some significant drawbacks. Large sections of text seem opinion-based, especially the discussion. The introduction and discussion need to be significantly curtailed and seem to digress significantly from the hard data presented. I would recommend removing large sections of text that are not addressed in the analyses.

“Emergency times” during COVID, the peer review process, and pre-prints are all mentioned, all of which are important and currently topical items when exploring publication speed; however, the reader is left wanting more granular data, or at least some direct data, that addresses these influencing factors and their effect on publication speed. At best, the current dataset is indirect regarding these discussions. The article should be significantly shortened.

In brief, I would support publication after a very significant reduction in length… perhaps as a short note.

RE: Thank you very much for your kind and encouraging comments. We could not reduce the length of the paper because reviewer #2 asked for additional information in several comments.

Reviewer #2:

Abstract: The phrase “duration of the publication process” seems somewhat ambiguous. This could encompass elements of manuscript preparation all the way to publication of the article itself. Since it seems the rest of the description here focuses on the period between submission date and acceptance date, a better term might relate to the peer review and publication process.

RE: In fact, we analyzed the entire publication process, from submission to indexing. In the sub-section ‘Data process’, we explained the four different lag times we calculated: total publication lag, acceptance lag, lead lag, and indexing lag. The time between the submission date and the acceptance date is covered by the acceptance lag, with the remaining lag times covering other parts of the editorial process. Figure 1 depicts these time intervals.
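To make the four intervals concrete, here is a minimal sketch of the lag calculations in Python, assuming the four PubMed dates have already been parsed; the record layout and values are illustrative, not the authors’ actual code or data:

    from datetime import date

    # Hypothetical article record; field names are illustrative.
    article = {
        "received": date(2017, 3, 1),   # submission date
        "accepted": date(2017, 6, 14),  # acceptance date
        "online":   date(2017, 6, 27),  # online publication date
        "entrez":   date(2017, 7, 2),   # PubMed entry date (EDAT)
    }

    acceptance_lag = (article["accepted"] - article["received"]).days  # 105
    lead_lag       = (article["online"]   - article["accepted"]).days  # 13
    indexing_lag   = (article["entrez"]   - article["online"]).days    # 5
    total_lag      = (article["entrez"]   - article["received"]).days  # 123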

For the comparison group, I think it should be specified whether the comparison group excluded pharmacy practice journals (assuming it did), and that it should be referred to as “non-pharmacy practice journals” for clarity. Also, it would be helpful to know if these journals potentially included pharmacy journals that did not involve practice (eg, pharmacology, pharmacokinetics, basic science, etc). I think there needs to be some description of the comparison group, eg, broadly speaking the discipline of the journal related to medicine, nursing, public health, basic science, etc. The random selection process seems like it could include any possible article, which would be an extremely heterogeneous group of articles.

RE: Thank you very much for this comment. We double-checked all 23,888 articles that we had originally included in the comparison group and realized that a few were pharmacy practice journal articles. As they were a small group, the results were virtually unaffected by their deletion from the comparison group, but we have redone all the calculations. So, we can now ensure that both groups are mutually exclusive. We clarified this in the Methods section. Regarding the potential heterogeneity of this non-pharmacy practice group, we intentionally aimed to compare pharmacy practice journals with all non-pharmacy practice biomedical journals. In previous research (doi: 10.1016/j.sapharm.2016.01.003), we demonstrated that pharmacy is not a homogeneous scientific area, and we cannot infer that non-practice pharmacy journals perform closer to pharmacy practice journals than to other biomedical journals. Non-practice pharmacy journals probably do not constitute a category of their own, distinct from other biomedical areas (i.e., clinical, molecular, and cell pharmacology are areas shared by pharmacists and physicians).
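As an illustration of how such mutual exclusivity can be enforced during sampling, here is a sketch (not the authors’ actual script) of a yearly random PMID draw that filters out the pharmacy practice PMIDs first:

    import random

    def sample_comparison_pmids(pmid_pool_by_year, pharmacy_pmids,
                                n_per_year=3000, seed=42):
        """Draw a yearly random sample of PMIDs, excluding those that
        belong to the pharmacy practice study group."""
        rng = random.Random(seed)
        exclude = set(pharmacy_pmids)
        return {
            year: rng.sample([p for p in pool if p not in exclude], n_per_year)
            for year, pool in pmid_pool_by_year.items()
        }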

Intro, paragraph 2: The discussion of RCT publishing metrics is interesting, but I wonder if there is anything more relevant to pharmacy practice, as pharmacists typically are not performing RCTs, but rather observational or pre-post studies.

RE: As far as we know, this is the first study evaluating editorial process duration in pharmacy practice. So, we could not find any published data to frame the scene other than clinical trials. In paragraph 4 of the introduction, we mentioned the results of Rodriguez, which assessed one of the steps of the editorial process that we evaluated as a whole. Additionally, we cannot concur with the final statement of this comment, as per our previous research (doi: 10.1016/j.sapharm.2016.01.003).

Intro paragraph 3: The critique of the peer review process in the context of this research might be strengthened by statements that relate more closely to the duration of this process, since that seems to be the main outcome measure of this study.

RE: In intro paragraph 3 we have not criticized the peer review process. We are strongly supportive of peer review as a basic requirement for robust and reliable scholarly publishing. That is why we quoted the Kassirer sentence “crude and understudied, but indispensable” [10]. We devoted paragraphs 4, 5, and 6 of the discussion to commenting on potential improvements to the peer review process.

Intro, paragraph 4: The final sentence of this paragraph relates to one of my earlier comments regarding the composition of the comparator group. Here, the authors state they aim to compare pharmacy practice journals with “other biomedical disciplines”. The other biomedical disciplines that composed the comparator group should be described in the abstract.

RE: We reworded the sentence to clarify this in the abstract.

Data collection, paragraph 2: The authors aim to identify a 10-day difference between groups in publication time. I think this selection of a clinically important difference requires justification. This seems like a very short period and is well below the SD and IQR of analyses of pharmacy practice journals themselves. I think this may be too short of a period, considering the need for NLM to distribute resources responsibly for indexing, and the lower likelihood that pharmacy practice journals will publish something practice-changing as compared to general medical journals. If anything, this will only leave the analysis with greater power rather than less, but the selection requires justification. I also think this paragraph needs disambiguation of the term “article reception”.

RE: We added a sentence in the mentioned paragraph. We performed a preliminary analysis and found, as mentioned in the sample size paragraph, a “mean delay of 125 days (SD 98) between article reception and publication”, which represents the entire publication process lag. Ten days represent 8% of that time, which we considered an appropriate difference on which to base the sample size. We agree with the reviewer that the short period we selected increased the statistical power of our analyses, but we specifically needed that power to ascertain differences between the two groups’ lead lags (13 vs. 23 days) or indexing lags (5 vs. 4 days). However, we again cannot concur with the reviewer’s characterization of the pharmacy practice field in the statement that there is a “lower likelihood that pharmacy practice journals will publish something practice-changing as compared to general medical journals”. Pharmacy practice journals will surely not publish medical practice-changing papers, but they will publish more pharmacy practice-changing papers than medical journals.
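The response does not show the formula behind this sample size; as a plausible reconstruction, a standard two-sample normal-approximation calculation with the quoted SD of 98 days, a 10-day difference, a 0.05 alpha, and an assumed 80% power gives roughly 1,500 articles per group:

    import math
    from scipy.stats import norm

    def n_per_group(sd, delta, alpha=0.05, power=0.80):
        """Per-group sample size for a two-sample comparison of means
        (normal approximation); power level here is an assumption."""
        z_alpha = norm.ppf(1 - alpha / 2)
        z_beta = norm.ppf(power)
        return math.ceil(2 * ((z_alpha + z_beta) * sd / delta) ** 2)

    print(n_per_group(sd=98, delta=10))  # -> 1508 articles per group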

Data process, paragraph 1: I think the terms related to the Medline metadata require definition to assist the readers in knowing how they differ. For example, it may be easy for a reader to not appreciate the difference between acceptance date and entry date. Additionally, I think some more context is needed regarding the terminology of “indexing lag”. This term might be interpreted as “indexing” with MeSH headings (ie, MHDA data in MEDLINE), but it seems that the authors are using Entrez date (EDAT data in MEDLINE), which is generally speaking the online publication date.

RE: We added a short description of all the metadata retrieved from PubMed. We also added a sentence to clarify why we did not calculate the cataloging lag (the time to allocate MeSH terms).
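For context on where these metadata live: PubMed’s EFetch XML exposes the editorial history as PubMedPubDate elements whose PubStatus attribute distinguishes, among others, “received”, “accepted”, “entrez” (the EDAT), and “medline” (the MHDA). A sketch of retrieving them follows; it is an illustration, not the authors’ pipeline, and many records omit one or more dates:

    from datetime import date
    from urllib.request import urlopen
    import xml.etree.ElementTree as ET

    EFETCH = ("https://eutils.ncbi.nlm.nih.gov/entrez/eutils/efetch.fcgi"
              "?db=pubmed&retmode=xml&id={pmid}")

    def history_dates(pmid):
        """Return the PubMed history dates (received, accepted, entrez,
        pubmed, medline) available for one PMID."""
        tree = ET.parse(urlopen(EFETCH.format(pmid=pmid)))
        dates = {}
        for d in tree.iter("PubMedPubDate"):
            try:
                dates[d.attrib.get("PubStatus")] = date(
                    int(d.findtext("Year")),
                    int(d.findtext("Month")),
                    int(d.findtext("Day")))
            except (TypeError, ValueError):
                pass  # incomplete dates are common in older records
        return dates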

Data analysis: I’m not entirely certain of the benefit of presenting a Cohen’s d in this case. It seems to complicate the interpretation somewhat as its calculation loses the units of the measure. I think it would be more beneficial to report a point estimate and confidence interval for the difference between groups. After having read the remainder of the results, the authors (appropriately) have evaluated the effect of other variables on the outcome of various time periods. With this in mind, I wonder if a multivariable model adjusting for the variables while predicting each outcome would be beneficial. Alternatively, I was almost anticipating some matching procedure between pharmacy practice journals and comparator journals to ensure they have some similar characteristics (eg, MeSH headings for comparative study, meta-analysis, etc) to try to answer if there is something related to the journal discipline that delays this process, all other things being equal. Lastly, since time is an element to this analysis, I wonder if a survival analysis is more appropriate. If there are any exclusions for articles that do not have one of the “outcome” variables, it presents risk for bias in the analysis of simply comparing median times.

RE: We used Cohen’s d following ASA recommendations, instead of null hypothesis tests (mentioned in paragraph 4 of the results). We completely agree with the reviewer that many additional calculations could be made to find potential predictors of publication delay. In fact, we aim to continue this analysis with publication country as a predictor, which showed promising results in our preliminary analyses. In this piece of research, we aimed to make a descriptive analysis of pharmacy practice versus the other biomedical areas.
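For readers unfamiliar with the effect size measure, here is a minimal pooled-standard-deviation implementation of Cohen’s d; the authors do not specify their exact variant, so this is the standard formulation:

    import numpy as np

    def cohens_d(x, y):
        """Cohen's d for two samples, using a pooled standard deviation."""
        x, y = np.asarray(x, float), np.asarray(y, float)
        nx, ny = len(x), len(y)
        pooled_sd = np.sqrt(((nx - 1) * x.var(ddof=1) +
                             (ny - 1) * y.var(ddof=1)) / (nx + ny - 2))
        return (x.mean() - y.mean()) / pooled_sd

    # Usage, with hypothetical per-article lag times in days:
    # cohens_d(pharmacy_acceptance_lags, comparison_acceptance_lags)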

Results, paragraph 2: The authors describe the languages of journals in the comparison group here, but do not do so for the pharmacy practice “exposure” group. I think this should be reported, as any differences between the groups in language could influence availability of peer reviewers or other processes in the duration of the process the authors are measuring.

RE: Thank you. This was an unintentional omission that we have now corrected. As both groups have very similar percentages of English-language articles, language may not be a major influence on the acceptance lag difference.

Table 1: I’m not sure how beneficial columns 2 through 4 are in this table. I think the most helpful information is knowing the distribution of publication dates for each group over time. Eg, I do not think there is much that readers can take away from knowing the first PMID that was eligible for inclusion from each 1-year period.

RE: We agree that Table 1 provided excessive information for readers, so we deleted columns 2 to 4.

Table 2: The content in this table has confused me a bit regarding the methodology. I see that not all articles in either group consistently reported important data for the three variables. So have the articles without this data been excluded from analyses? I don’t recall reading that in the methods, so if this is the case, I believe it should be specified, or otherwise clarified.

RE: We made the sample size calculations considering the poor metadata reporting we found in the preliminary analyses (and we mentioned “with 43.4% of the articles providing data for this calculation”). We have now added a sentence in the Data Analysis section to clarify this further.

Paragraph following table 2: The information on time trend is interesting, but I think needs more interpretation for readers. The correlation coefficient seems difficult to interpret in this situation – eg, it may help to explain the average annual improvement in lag time for each group.

RE: We agree with the reviewer that correlation coefficients may not be easily understood by some readers. However, as the trends were not constant, which would complicate the calculation of average improvements, we preferred to add a new online appendix (Supporting information 4) providing medians and IQRs for each year.
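The per-year medians and IQRs in the new appendix can be derived directly from the article-level data; here is a pandas sketch, with a few synthetic rows standing in for the real dataset:

    import pandas as pd

    # Synthetic stand-in rows; the real data hold one row per article.
    df = pd.DataFrame({
        "group": ["pharmacy", "pharmacy", "comparison", "comparison"],
        "year": [2009, 2009, 2009, 2009],
        "acceptance_lag": [105, 130, 97, 80],
    })

    yearly = (df.groupby(["group", "year"])["acceptance_lag"]
                .quantile([0.25, 0.50, 0.75])
                .unstack())
    yearly.columns = ["Q1", "median", "Q3"]
    print(yearly)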

Table 3: Similar to my above comment, I think instead of reporting Cohen’s d, a point estimate and confidence interval for a difference would be more interpretable and helpful.

RE: We preferred to follow the American Statistical Association recommendations provided in the 2019 supplementary issue, especially Wasserstein & Lazar’s “Moving to a World Beyond ‘p < 0.05’”, and to provide an effect size measure showing the size of the difference between the two distributions.

Paragraph following table 3: These trends within pharmacy journals are interesting, but I think their presentation needs more justification. In other words, why mention these specific journals? Did they have the shortest and longest acceptance lags, or were in the upper and lower deciles? Some type of criterion for presentation should be specified. It would also help to include results for each journal in tabular format (though in a supplemental table, I imagine), potentially ordered by impact factor or some other metric of readership.

RE: We agree that we had not explained why we mentioned those journals. They are the extremes of the distributions. We have now reworded the sentence to clarify this. The results for each journal are provided in Supporting information S5.

On this topic, I find it a shortcoming that some of the more widely read pharmacy practice journals (eg, AJHP, Pharmacotherapy) do not have their results reported to serve as a barometer for pharmacists who are likely most familiar with these.

RE: The American Journal of Hospital Pharmacy, now called the American Journal of Health-System Pharmacy, is included in the analyses. Unfortunately, this journal is one of those not reporting editorial process dates in the metadata submitted to PubMed. Regarding Pharmacotherapy, we used an objective method to select the journals to analyze (Res Social Adm Pharm. 2019;15:1464-71). Pharmacotherapy appears as a clinical pharmacology journal, not a pharmacy practice journal, so it is not included in our analysis.

Discussion:

This section is rather long given the scope of the article. The first three paragraphs are relevant in providing additional context to the findings of this analysis. The remainder, while offering some interesting insight and potential solutions, can be tapered somewhat. I also get the impression here more than in other sections of the manuscript that the authors take most issue with delays during the peer review process. Based on the definitions of each lag time, it seems that acceptance lag may be most representative of the peer review process. The authors might consider restricting the analysis and presentation to this measure only to give the article a tighter focus.

RE: We aimed to evaluate all the lag times in the editorial process. We agree that the acceptance lag is the greatest delay in the process (as demonstrated by our results). Consequently, our discussion focused on the main cause of the duration of this lag time, the peer review process. Not by chance, the peer review process is receiving major attention in the literature, and many alternatives (some cited in our discussion) have been suggested to overcome this issue. We would appreciate it if the reviewer could be more specific in guiding us on what we should reduce in the discussion.

Limitations: I think a major limitation that should be stated is the heterogeneous nature of the comparison group. While I think the comparison group should be modified with matching or other techniques as described above, I think it is difficult to compare pharmacy practice journals with essentially “everything else”. There are so many journals in PubMed that are not directly related to healthcare provision (eg, basic science) that it may make this comparison group somewhat unrepresentative.

RE: We humbly disagree with the reviewer. Our aim was to compare the editorial process in pharmacy journals with what happens in other biomedical fields, not with specific fields. We added a sentence in the Methods section to explain why we selected PubMed as the source of the comparison group. We also added a justification to the existing limitation about the use of a random sample of PubMed articles.

Attachment

Submitted filename: 02Reply to reviewers.docx

Decision Letter 1

Tim Mathes

4 May 2021

PONE-D-21-03317R1

Publication speed in pharmacy practice journals: a comparative analysis

PLOS ONE

Dear Dr. Fernandez-Llimos,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

A premise to be considered for publication is a revision of the statistical analyses (see the comment of reviewer 2 regarding the statistical analyses). To facilitate interpretation, it would be better to perform the analysis on the original scale instead of using Cohen’s d. Furthermore, a multivariate analysis should be performed because it can be expected that the univariate analyses are confounded and consequently it is not possible to assess the impact of individual factors.

Please submit your revised manuscript by Jun 18 2021 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,

Tim Mathes

Academic Editor

PLOS ONE


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #1: All comments have been addressed

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: The authors have addressed the queries I made. Although they disagree with some of my comments, they have provided an explanation and have made some minor modifications that have improved the paper.

**********

7. PLOS authors have the option to publish the peer review history of their article. If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

PLoS One. 2021 Jun 29;16(6):e0253713. doi: 10.1371/journal.pone.0253713.r004

Author response to Decision Letter 1


5 Jun 2021

Response to the editor:

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

A premise to be considered for publication is a revision of the statistical analyses (see the comment of reviewer 2 regarding the statistical analyses). To facilitate interpretation, it would be better to perform the analysis on the original scale instead of using Cohen’s d. Furthermore, a multivariate analysis should be performed because it can be expected that the univariate analyses are confounded and consequently it is not possible to assess the impact of individual factors.

RE: Thank you very much for the opportunity to improve our manuscript. Following previous recommendations from reviewer #2, we have now included multivariate analyses, performing a multivariate linear regression for each of the three lag times. The results, presented in Table 4, confirm the findings reported with the bivariate analyses.

Table 4. Multivariate analyses for the three lag times

Acceptance lag (R-square: 0.006; Durbin-Watson = 1.788)

                            Beta      95% CI                p-value   VIF
    Study group            -3.910    -7.077 to -0.742        0.016    1.072
    CiteScore (2018)        1.000    -0.267 to 2.268         0.122    2.991
    Open Access journal?  -13.822   -17.237 to -10.406      <0.001    1.005
    Journal Impact Factor  -1.861    -2.653 to -1.069       <0.001    3.041

Lead lag (R-square: 0.052; Durbin-Watson = 1.620)

                            Beta      95% CI                p-value   VIF
    Study group           -16.966   -18.570 to -15.363      <0.001    1.088
    CiteScore (2018)       -5.779    -6.436 to -5.121       <0.001    3.447
    Open Access journal?   -0.097    -1.930 to 1.737         0.918    1.029
    Journal Impact Factor   2.795     2.339 to 3.251        <0.001    3.550

Indexing lag (R-square: 0.152; Durbin-Watson = 0.550)

                            Beta      95% CI                p-value   VIF
    Study group            41.721    39.374 to 44.069       <0.001    1.057
    CiteScore (2018)       -0.242    -0.959 to 0.475         0.508    2.913
    Open Access journal?   63.494    60.803 to 66.184       <0.001    1.018
    Journal Impact Factor   0.524     0.078 to 0.969         0.021    2.968

95% CI: 95% confidence interval; VIF: variance inflation factor
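For reference, a model of this form can be fitted with statsmodels; the sketch below runs on synthetic data, so its coefficients, Durbin-Watson statistic, and VIFs are illustrative only, and the variable names merely mirror the table:

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    from statsmodels.stats.stattools import durbin_watson
    from statsmodels.stats.outliers_influence import variance_inflation_factor

    # Synthetic stand-in data for the article-level predictors.
    rng = np.random.default_rng(0)
    n = 500
    X = pd.DataFrame({
        "study_group": rng.integers(0, 2, n),    # pharmacy practice vs comparison
        "citescore_2018": rng.gamma(2.0, 1.5, n),
        "open_access": rng.integers(0, 2, n),
        "impact_factor": rng.gamma(2.0, 1.2, n),
    })
    y = 100 - 4 * X["study_group"] - 14 * X["open_access"] + rng.normal(0, 90, n)

    Xc = sm.add_constant(X)
    fit = sm.OLS(y, Xc).fit()
    print(fit.params, fit.conf_int(), sep="\n")   # betas and 95% CIs
    print("Durbin-Watson:", durbin_watson(fit.resid))
    vifs = [variance_inflation_factor(Xc.values, i)
            for i in range(1, Xc.shape[1])]       # skip the constant
    print(dict(zip(X.columns, vifs)))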

Regarding the use of Cohen’s d in Table 3, the comment reviewer #2 provided in the previous round stated: “Table 3: Similar to my above comment, I think instead of reporting Cohen’s d, a point estimate and confidence interval for a difference would be more interpretable and helpful”. To address this comment, we included the median (as a point estimate) and the IQR (instead of the confidence interval, which we cannot use because we are presenting a dispersion measure of a non-normally distributed variable). Additionally, we presented the null hypothesis test that estimated the significance of the differences between the two groups under analysis. These two elements comply with the requirements of reviewer #2. In addition to these requested elements, and following the recommendations of the American Statistical Association, we insist on providing effect size measures of these differences, presented as Cohen’s d. The latter should be understood as additional information provided after the requested information.

Attachment

Submitted filename: Response to the editor.docx

Decision Letter 2

Tim Mathes

11 Jun 2021

Publication speed in pharmacy practice journals: a comparative analysis

PONE-D-21-03317R2

Dear Dr. Fernandez-Llimos,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements.

Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice for payment will follow shortly after the formal acceptance. To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,

Tim Mathes

Academic Editor

PLOS ONE

Additional Editor Comments (optional):

Reviewers' comments:

Acceptance letter

Tim Mathes

17 Jun 2021

PONE-D-21-03317R2

Publication speed in pharmacy practice journals: a comparative analysis

Dear Dr. Fernandez-Llimos:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.

If we can help with anything else, please email us at plosone@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,

PLOS ONE Editorial Office Staff

on behalf of

Dr. Tim Mathes

Academic Editor

PLOS ONE

Associated Data

    This section collects any data citations, data availability statements, or supplementary materials included in this article.

    Supplementary Materials

    S1 Appendix. Comparison group: Results of the PMIDs random selection.

    (DOCX)

    S2 Appendix. Articles published in the pharmacy practice journals between 2009 and 2018.

    (DOCX)

    S3 Appendix. Acceptance, lead, total, and indexing lag times for pharmacy practice journals (2009–2018).

    (DOCX)

    S4 Appendix. Acceptance, lead, total, and indexing lag times trends (2009–2018).

    (DOCX)

    S5 Appendix. Pharmacy practice journals’ data (violin plots with lines representing the 25th percentile, median, and 75th percentile).

    (DOCX)

    Attachment

    Submitted filename: Mendes et al review_PLOS ONE.docx

    Attachment

    Submitted filename: 02Reply to reviewers.docx

    Attachment

    Submitted filename: Response to the editor.docx

    Data Availability Statement

    Data available at https://doi.org/10.17605/OSF.IO/QY2AE.

