BMJ Open. 2019 Dec 19;9(12):e025511. doi: 10.1136/bmjopen-2018-025511

Do pharmacy intervention reports adequately describe their interventions? A template for intervention description and replication analysis of reports included in a systematic review

Mícheál de Barra 1,2, Claire Scott 2,3, Marie Johnston 2, M De Bruin 2,4, Neil Scott 2, Catriona Matheson 5, Christine Bond 2, Margaret Watson 2,6
PMCID: PMC6937059  PMID: 31862736

Abstract

Introduction

Scientific progress and translation of evidence into practice is impeded by poorly described interventions. The Template for Intervention Description and Replication (TIDieR) was developed to specify the minimal intervention elements that should be reported.

Objectives

(1) To assess the extent to which outpatient pharmacy interventions were adequately reported. (2) To examine the dimension(s) across which reporting quality varies. (3) To examine trial characteristics that predict better reporting.

Methods

The sample comprised 86 randomised controlled trials identified in a Cochrane review of the effectiveness of pharmacist interventions on patient health outcomes. Duplicate, independent application of a modified 15-item TIDieR checklist was undertaken to assess the intervention reporting. The reporting/non-reporting of TIDieR items was analysed with principal component analysis to evaluate the dimensionality of reporting quality and regression analyses to assess predictors of reporting quality.

Results

In total, 422 (40%) TIDieR items were fully reported, 395 (38%) were partially reported and 231 (22%) were not reported. A further 242 items were deemed not applicable to the specific trials. Reporting quality loaded on one component which accounted for 26% of the variance in TIDieR scores. More recent trials reported a slightly greater number of TIDieR items (0.07 (95% CI 0.02 to 0.13) additional TIDieR items per year of publication). Trials also reported 0.09 (95% CI 0.04 to 0.14) additional TIDieR items per unit increase in the impact factor (IF) of the journal in which the main report was published.

Conclusions

Most trials lacked adequate intervention reporting. This diminished the applied and scientific value of their research. The standard of intervention reporting is, however, gradually increasing and appears somewhat better in journals with higher IFs. The use of the TIDieR checklist to improve reporting could enhance the utility and replicability of trials, and reduce research waste.

Keywords: pharmacy/standards, checklist, clinical trials as topic/standards, reproducibility of results, research report/standards


Strengths and limitations of this study.

  • We examined reporting quality in 86 trials of pharmacy interventions using the Template for Intervention Description and Replication (TIDieR).

  • Multiple regression on TIDieR scores illuminated the predictors of good intervention reporting.

  • A principal component analysis was used to explore the dimensionality of the TIDieR.

  • Suboptimal inter-rater reliability of TIDieR assessments suggests some subjectivity in our assessments.

  • We made various assumptions (eg, journal impact factors are stable over time) to create predictor variables for the regression analyses.

Background

Effective and efficient healthcare depends on trials in which the benefits and harms of interventions are experimentally assessed.1 Underspecification of interventions hampers both implementation of evidence-based practice in healthcare settings and scientific progress.2–5 While additional detail and clarity could be derived, for example, by contacting study authors, this places additional and unnecessary burden on reviewers, researchers and other users of this information.2 6

If interventions are underspecified, methodological decisions and results can be difficult to understand, evaluate and synthesise. Likewise, replication of interventions will be impossible if basic intervention characteristics, such as the frequency of interaction between healthcare worker and patient, are not presented. Without clear descriptions of interventions, the similarities and differences between interventions will be obscured and this will hinder research synthesis in systematic reviews and meta-analyses. Thus, inadequate description of interventions is an important potential source of waste within biomedical research.7

What is adequate reporting of an intervention? Hoffmann et al3 sought to answer this question using recommended consensus procedures8 including literature searches, a Delphi procedure and a face-to-face consensus meeting. The outcome was a checklist of 12 items to be included in the description of interventions. This Template for Intervention Description and Replication (TIDieR) is included in the Equator portfolio of checklists and guides, which are intended to enhance the reporting of trials and research more broadly.9

TIDieR also provides a basis for evaluating reporting in the published literature. To our knowledge, three studies have evaluated interventions reported in randomised controlled trials (RCTs) using an adapted version of TIDieR.4–6 Abell et al6 found that just 8% (6/74) of exercise-based cardiac rehabilitation intervention reports described the core TIDieR elements deemed to be essential for replication, though this increased to 43% once additional sources were examined and the trial authors were contacted. Jones et al5 reported that 1% (1/100) of perioperative care interventions included all TIDieR items and that, on average, 43% of TIDieR items were omitted. In physiotherapy interventions evaluated by Yamato et al,4 23% (46/200) omitted more than half of TIDieR items.

Other studies of pharmacy interventions have used the Descriptive Elements of Pharmacist Intervention Characterisation Tool (DEPICT10). DEPICT is a checklist developed to identify key components of pharmacy interventions and is intended both as a writing guide for pharmacists seeking to describe their interventions in a clear and replicable manner, and as a tool for retrospective analysis of interventions in the published literature. Thus, the rationale and utility of TIDieR and DEPICT overlap substantially. Studies using DEPICT to evaluate reporting quality found that 59% of chronic kidney disease interventions11 and ‘most’ asthma trials12 were not implementable based on the available intervention descriptions.

The focus of our study was on interventions implemented by pharmacists in outpatient settings. In recent years, the pharmacist’s role has changed substantially in many countries, with a move away from the traditional function of medicine supply to more behavioural/clinical roles. Pharmacists contribute to the safe and effective use of medicines through the delivery of services such as medication review,13 14 adherence support and advice to prescribers as well as enhanced roles in public health.15 To maximise the efficient use of resources, service development should be informed by research evidence and this is reflected in the growing number of RCTs of pharmacist interventions.16 17 However, the value of these RCTs for policy and practice is dependent on the quality of reporting of the trial and complete descriptions of the interventions tested.

The first aim of our study was to evaluate the reporting of intervention descriptions in RCTs of pharmacy interventions using a modified TIDieR checklist. Our use of TIDieR rather than DEPICT enabled comparison with reporting quality in other health domains. A second aim was to examine how TIDieR items covaried, examining the dimensionality of TIDieR using a principal component analysis (PCA). This enabled us to analyse underlying patterns of TIDieR inclusion and exclusion by examining what items tend to co-occur or cluster in intervention descriptions. Such a pattern of covariation between TIDieR items would suggest that groups of individual items reflect underlying dimension(s) of reporting quality. In other words, the PCA enabled us to investigate whether there was a subset of items which trials tend to report generally well or generally poorly. Identifying these dimensions could be useful since different dimensions of reporting quality may have unique and potentially modifiable causes.

The final aim was to explore whether other article and journal characteristics predicted the completeness of intervention description. We aimed to examine whether reporting improved over time or was associated with other measures of quality, namely risk of bias (RoB). We explored the relationship between reporting quality and journal prestige, measured by the impact factor (IF). Although higher IF journals are often expected to publish ‘better’ science, it is unclear if the reporting quality is superior. We examined if trial size (ie, number of participants) was associated with completeness of intervention reporting. We also examined if reporting space predicts clearer reporting by examining (1) if trials described in multiple manuscripts reported interventions more clearly and (2) if reports published in journals with higher word limits reported interventions more clearly. The space-related predictor variables were added following reviewer recommendations. The other predictors were agreed by the author team before the results were known.

Methods

A protocol for the study has not been published elsewhere.

Trial report selection

Eighty-six published trial reports (online additional file 1A) were identified in an interim update of a Cochrane review of non-dispensing outpatient pharmacy services17 and provided the data source for our study. Non-dispensing interventions aim to improve patients' medication use (through, eg, education) or practitioner prescribing (through, eg, medication reviews). These trials were published between 1979 and 2015, inclusive, and the median year of publication was 2010. Sixty-six of the trials precede 2014, the publication year of the TIDieR checklist.3 The Cochrane review included RCTs which evaluated interventions to improve patient health in non-hospitalised patients through the use or cessation of medication and which were led or primarily delivered by a pharmacist. The search terms used to identify these trials are included in online additional file 1B (see also de Barra et al17) and a Preferred Reporting Items for Systematic Reviews and Meta-Analyses checklist is available in online additional file 2.

Supplementary data: bmjopen-2018-025511supp001.pdf (237.5 KB, pdf)

Supplementary data: bmjopen-2018-025511supp002.pdf (78.1 KB, pdf)

TIDieR reporting

The 12-item TIDieR checklist was adapted by subdividing several items prior to its application in this study. Note that these modifications are intended to facilitate TIDieR's use for the evaluation of published intervention descriptions and should not be construed as an attempt to modify TIDieR more generally. One item, ‘why: rationale of the intervention’, was split into two separate items. First, a behavioural rationale item was used to assess whether the authors presented any rationale or theory to justify behavioural components. For example, the introduction of a daily pill box might be justified by referring to studies which show that forgetfulness lowers adherence. Second, a clinical rationale item was used to assess if the pharmacological component of the intervention had been justified. The checklist item ‘who: provider training/experience’ was specified using three items: intervention-specific training described; qualification of provider; and experience of provider. Definitions were created to specify when items should be coded as ‘included’, ‘partially included’ or ‘not included’. Thus, the 12-item checklist was developed into a 15-item evaluation tool (box 1; scoring criteria provided in online additional file 1C). This tool was applied by two independent coders to assess the reporting of the interventions for the 86 trials.

Box 1. TIDieR checklist adapted for report evaluation.

Adapted TIDieR items

1. Brief name. Provides the name or a phrase that describes the intervention.

2a. Why (clinical). Describes clinical rationale, theory or goal of the elements essential to the intervention.

2b. Why (behavioural). Describes the behavioural rationale, theory or goal of the elements essential to the intervention.

3. What (materials). Describes any physical or informational materials used in the intervention.

4. What (procedures). Describe each of the procedures, activities and/or processes used in the intervention, including any enabling or support activities.

5a. Who (expertise). For each category of intervention provider, describes their expertise.

5b. Who (qualifications). For each category of intervention provider, describe their background.

5c. Who (training). For each category of intervention provider, describe specific training given.

6. How (delivery mode). Describe the modes of delivery (such as face-to-face or by some other mechanism, such as internet or telephone) of the intervention and whether it was provided individually or in a group.

7. Where (locations). Describe the type(s) of location(s) where the intervention occurred, including any necessary infrastructure or relevant features.

8. When and how much. Describe the number of times the intervention was delivered and over what period of time including the number of sessions, their schedule, and their duration, intensity or dose.

9. Tailoring. If the intervention was planned to be personalised, titrated or adapted, then describe what, why, when and how.

10. Modifications. If the intervention was modified during the course of the trial, describe the changes (what, why, when and how).

11. How well (planned). If intervention adherence or fidelity was assessed, describe how and by whom, and if any strategies were used to maintain or improve fidelity, describe them.

12. How well (actual). If intervention adherence or fidelity was assessed, describe the extent to which the intervention was delivered as planned.

See online additional file 1C for full scoring criteria.

TIDieR, Template for Intervention Description and Replication.

If reports made no mention of modification or fidelity/adherence assessment (items 10, 11 and 12 in box 1), we assumed that these reports described trials without modification or adherence assessment and coded the relevant items as ‘non-applicable’. These items were also excluded from composite scores (see Predicting TIDieR reporting rate below). It should be noted that the TIDieR checklist does not require authors to report on such modifications/adherence assessment if none occur.

The 15-item evaluation tool was iteratively developed by three authors and piloted on five papers. Once the tool was finalised, these papers were re-evaluated. Disagreements were resolved through discussion between the two coders and, where necessary, consultation with a third coder. The inter-rater reliability of TIDieR coding was examined using Cohen’s kappa with squared weights.18 TIDieR item presence and absence rates are presented descriptively.
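
To illustrate, a weighted kappa of this kind can be computed in R (the analyses in this study were run in R) along the following lines. The two rating vectors below are hypothetical and the irr package is one of several implementations of the statistic, so this is a sketch rather than the study's analysis code.

# Hypothetical TIDieR ratings from two coders for ten items
# (0 = not included, 1 = partially included, 2 = included)
library(irr)

coder_a <- c(2, 1, 0, 2, 2, 1, 0, 1, 2, 1)
coder_b <- c(2, 1, 1, 2, 1, 1, 0, 2, 2, 1)

# Squared (quadratic) weights respect the ordinal nature of the scale
kappa2(data.frame(coder_a, coder_b), weight = "squared")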

Dimensionality of TIDieR

To investigate the dimensions of variability in TIDieR, a PCA19 was conducted on a Spearman's correlation matrix of associations between TIDieR items. PCA assesses whether linear combinations of the TIDieR variables can summarise reporting quality using a smaller number of dimensions. The number of components was determined using a parallel analysis.20 No rotation was performed.
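
A minimal sketch of this procedure is given below; `tidier_items` is a hypothetical 86 x 12 matrix of item scores (one row per trial, columns named after the TIDieR items), and the paran and psych packages supply the parallel analysis and the unrotated PCA.

library(paran)
library(psych)

# Spearman correlation matrix of TIDieR item scores (0/1/2)
rho <- cor(tidier_items, method = "spearman", use = "pairwise.complete.obs")

# Parallel analysis (Horn, 1965) to decide how many components to retain
paran(tidier_items, iterations = 5000)

# Unrotated principal component analysis on the correlation matrix
pca <- principal(rho, nfactors = 1, rotate = "none", n.obs = nrow(tidier_items))
pca$loadings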

Predicting TIDieR reporting rate

To generate composite scores for the multivariate analysis, we summed the number of items reported and, separately, the number of items not reported. This avoided the assumption of equidistance between included, partially included and not included. Another composite score was generated from the PCA results. Items that loaded on only one component at 0.6 or above (a threshold of ‘practical significance’21) were identified. We then created a weighted summed score by adding the weights of TIDieR items which met this criterion. Items were weighted as follows: not included=0, partly included=1 and included=2. While this does assume equidistance, the psychometric/variable-reduction approach outlined here enables the identification and measurement of different dimensions of reporting quality.
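
Continuing the sketch above (hypothetical `tidier_items` matrix and `pca` object), the three composite scores could be derived roughly as follows; this is illustrative rather than the study's code.

# Counts of items fully reported and not reported (items 10-12 excluded upstream)
n_included     <- rowSums(tidier_items == 2)
n_not_included <- rowSums(tidier_items == 0)

# Weighted PCA-based score: sum of 0/1/2 scores over items loading at 0.6 or above
high_loading <- names(which(pca$loadings[, 1] >= 0.6))
pca_score    <- rowSums(tidier_items[, high_loading])

# Internal consistency of the PCA-derived score
psych::alpha(as.data.frame(tidier_items[, high_loading]))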

Journal IF was derived from Journal Citation Reports via the Web of Science citation indexing service.22 As in Dechartres et al,23 estimates were taken from the year 2016 rather than the year of publication. Ten IFs were unavailable because the journal had been discontinued. We therefore assumed that these journals were likely to have a low IF and hence imputed these 10 missing values with the minimum IF of the extant journals included. The analyses were repeated excluding these 10 to test the robustness of findings.

Similarly, contemporary journal word limits rather than word limits at publication year were used, as the latter could not feasibly be accessed. Where journals had no word limit, we used the maximum word limit found in the journals with a limit (10 000 words). Where the journal word limit could not be determined, we used the median limit (3500 words). Where journals had multiple word limits (eg, short article vs main article), we used the limit for main empirical articles.
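
A sketch of these substitution rules, assuming a hypothetical data frame `trials` with per-trial journal information (column names are illustrative), might read:

# Impact factor: discontinued journals get the minimum observed IF
trials$impact_factor[is.na(trials$impact_factor)] <-
  min(trials$impact_factor, na.rm = TRUE)

# Word limits: no limit -> maximum observed (10 000); undeterminable -> median (3500)
trials$word_limit[which(trials$no_word_limit)] <- 10000
trials$word_limit[is.na(trials$word_limit)]    <- 3500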

RoB was estimated using the Cochrane RoB tool which evaluates bias on seven criteria.24 Two authors independently applied the measure and resolved discrepancies through discussion with a third author. Unclear RoB and high RoB scores are the number of criteria which were graded as unclear RoB or high RoB, respectively (possible range for both variables: 0–7). For further information on RoB in these trials, see the associated Cochrane Review.17

A multivariate regression was performed on each of the dependent variables (number of TIDieR items reported, number of TIDieR items not reported, PCA score(s)). Given that items 10–12 in box 1 were typically non-applicable, these were excluded from calculation of the ‘TIDieR items reported’ and ‘TIDieR items not reported’ dependent variables. The following independent variables were included: (1) year of publication, to test for trend over time; (2) total sample size at baseline, to examine whether more complete reporting occurred with larger sample size; (3) IF, to examine whether trials published in more prestigious journals were better reported; (4) high RoB, to assess whether trials with other deficits were more likely to omit TIDieR items, (5) number of manuscripts, to assess if trials described in multiple manuscripts were better reported, and (6) word limit in publication journal, to assess if journal limits were associated with poorer reporting.
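
One of the three models could be specified as in the sketch below (hypothetical `trials` data frame containing the composite scores above merged with the journal-level predictors; variable names are illustrative).

# Number of TIDieR items fully reported, regressed on the six predictors
m_included <- lm(n_included ~ pub_year + sample_size_per_100 + impact_factor +
                   high_rob + n_manuscripts + word_limit_per_1000,
                 data = trials)

summary(m_included)     # coefficients and R-squared
confint(m_included)     # 95% CIs, as reported in table 2

# The same formula is refitted with n_not_included and pca_score as outcomes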

Correlation between different types of poor reporting

Additional correlation analyses focused on the relationship between TIDieR items included/excluded and unclear RoB. These were analysed in a correlation analysis rather than the regression equations because unclear RoB can also be considered to be an additional measure of reporting quality rather than a possible cause of poor reporting.
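
In R these correlations could be obtained as follows (same hypothetical `trials` data frame, with `unclear_rob` counting the criteria judged unclear); the default Pearson correlation is assumed here.

# Association between reporting completeness and unclear risk of bias judgements
cor.test(trials$n_included,     trials$unclear_rob)
cor.test(trials$pca_score,      trials$unclear_rob)
cor.test(trials$n_not_included, trials$unclear_rob)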

Patient and public involvement

We assumed that patients and the public favour complete reporting of interventions, since such reporting may both increase the quality of healthcare and decrease research waste and costs. Thus, there was no patient or public involvement in this study.

Results

Inter-rater reliability of TIDieR scoring

Cohen’s kappa with squared weights,18 which takes into account the ordinal nature of the data, was 0.5, which represents a fair/moderate level of agreement.25 When items scored as non-applicable were included as an additional category, kappa increased to 0.73. The differences between coders typically lay in deciding whether items were reported versus partially reported, or not reported versus partially reported.

TIDieR reporting

There were 1290 items that could have been reported within this review (86 trials × 15 items/trial), of which 242 were scored as non-applicable. Of the remaining 1048 items, 422 (40%) were fully reported, 395 (38%) were partially reported and 231 (22%) were not reported. No trial fully reported all 15 items.

The mean number of TIDieR items included in each trial was 4.83 (SD: 1.92), possible range 0 to 12. This mean score excludes items 10–12, which were typically rated as non-applicable. The mean number of TIDieR items not reported was 2.65 (SD: 1.55). As figure 1 illustrates, substantial differences in reporting frequency occurred between items as well as between trials.

Figure 1. Top panel: the proportion of TIDieR items not, partly or fully included in each of the 86 trial reports (each bar represents one trial). Bottom panel: the proportion of trial reports which report each of the 15 TIDieR items fully, partly or not at all (each bar represents one TIDieR item); three TIDieR items (13–15) were frequently scored non-applicable. TIDieR, Template for Intervention Description and Replication.

Dimensionality of TIDieR

A Spearman’s correlation plot (online additional file 1D) indicated some covariation in TIDieR items. Bartlett’s test of sphericity indicated that the data were suitable for PCA (χ2 (66)=165.77, p<0.001) and the Kaiser-Meyer-Olkin test26 suggested reasonable (0.75) sampling adequacy. A parallel analysis, executed with the paran package in R, indicated that one factor should be extracted. A scree plot, which includes the random data-derived eigenvalues produced by the parallel analysis, can be seen in online additional file 1D. The first four eigenvalues were 3.1, 1.32, 1.25 and 1.14. Table 1 shows the factor loadings. This component accounted for 26% of the variance in TIDieR scores.

Table 1.

Component loadings for principal component analysis. Items in bold form the component measure

Component loading
Brief_name 0.02
Why_clinical 0.47
Why_behavioural 0.60
What_materials 0.72
What_procedure 0.74
Who_experience −0.38
Who_qualifications 0.25
Who_training 0.01
How_delivery_mode 0.66
Where_locations 0.24
When_how_much 0.58
Tailoring 0.64

Scores (not included=0, partially=1, included=2) on the four items with loadings of 0.6 or higher were summed to produce a PCA-derived dependent variable with a Cronbach’s alpha of 0.72.

Predicting TIDieR reporting rate

In all regression analyses, residual plots (linearity and homoscedasticity), QQ plots (normal errors) and the Durbin-Watson test (correlated errors) indicated that the assumptions of linear regression analyses were met; see online additional file 1E. Note that regression analyses which excluded the 10 reports with imputed IFs showed similar results.
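
These checks correspond to standard diagnostics such as the following (using the illustrative model object `m_included` from the Methods sketch; the Durbin-Watson test is available in the car package).

plot(m_included, which = 1)         # residuals vs fitted: linearity, homoscedasticity
plot(m_included, which = 2)         # normal Q-Q plot of residuals
car::durbinWatsonTest(m_included)   # test for autocorrelated errors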

The multiple regression analyses indicated an increase of 0.07 (95% CI 0.02 to 0.13) TIDieR items reported per year. For each additional unit of IF, 0.09 (95% CI 0.04 to 0.14) extra TIDieR items were included. The pattern of association with TIDieR items not included and with PCA component scores was broadly similar, see table 2. The regression predicting the PCA component indicated that larger trials may be more poorly reported (beta=−0.08, 95% CI −0.14 to −0.02), though this pattern was not observed in the other two regressions. High RoB was not associated with TIDieR reporting.

Table 2.

Results of regression analyses predicting TIDieR scores

Number included Number not included PCA component 1
Constant 4.11 (2.88 to 5.34) 3.48 (2.43 to 4.53) 3.80 (2.63 to 4.97)
Year of publication 0.07 (0.02 to 0.13) −0.06 (−0.10 to –0.01) 0.04 (−0.01 to 0.10)
Sample size (per 100) −0.06 (−0.12 to 0.01) 0.05 (−0.01 to 0.10) −0.08 (−0.14 to −0.02)
Impact factor 0.09 (0.04 to 0.14) −0.02 (−0.07 to 0.02) 0.05 (−0.00 to 0.10)
Word limit (per 1000) 0.12 (−0.05 to 0.29) −0.02 (−0.17 to 0.12) 0.11 (−0.06 to 0.27)
Papers per trial 0.19 (−0.46 to 0.83) −0.45 (−1.00 to 0.10) 0.21 (−0.40 to 0.83)
High risk of bias −0.06 (−0.45 to 0.34) −0.12 (−0.46 to 0.21) 0.07 (−0.30 to 0.45)
R-squared 0.26 0.17 0.18
F 0.59 2.77 2.98
P value 0.00 0.02 0.01
N 86 86 86

Values in parentheses are 95% CIs for the regression coefficients.

TIDieR, Template for Intervention Description and Replication.

Correlation between different kinds of poor reporting

Trials with fewer uncertain RoB evaluations reported more TIDieR items (r=−0.24, p=0.03) and a higher TIDieR component score (r=−0.23, p=0.03). The relationship between uncertain RoB and TIDieR items not included was in the predicted direction but not statistically significant (r=0.15, p=0.15).

Discussion

These results indicate (1) that more than half of TIDieR items were not fully included in pharmacy trial reports, (2) that variability in TIDieR reporting can be captured by a single component and (3) that slight improvements are being achieved over time, and that trials with more complete reporting are more likely to be published in higher impact journals. Neither the number of manuscripts per trial nor journal word limits predicted reporting quality.

Frequency of reporting of TIDieR items

It is of concern that few intervention procedures were described with ‘sufficient detail for replication’ (a criterion for item 4, what: procedures). The interventions were complex and involved varied interactions with patients, yet the description of these interventions was typically brief, often comprising only a single paragraph or a few sentences.27–30

Procedural ambiguity was also apparent in low scores for item 9, tailoring. Interventions were typically tailored to the clinical, knowledge or behavioural situation of the patient, yet in most cases the nature of this tailoring was not made explicit as an ‘if-then’ rule. Rather, this was left to the judgement and clinical skills of the pharmacist. While this is likely to reflect daily practice, ambiguity could be attenuated by being explicit about the background, experience and training—including training evaluation—of those employing their clinical judgement. Such information would enable the reader to understand what clinical competencies are necessary to implement these behaviour change interventions in a comparable fashion.31 However, with few exceptions,32–34 trial reports were not fully explicit about the experience, qualifications and training of personnel.

The majority of trials included some mention of materials, but a detailed description of the package of, for example, questionnaires or educational booklets supplied to each pharmacy was typically lacking. While the frequency of interventions was generally described, the duration was often not made explicit.35–38 In their evaluation of cardiac rehabilitation interventions, Abell et al6 also found that session frequency was reported more often than session duration. The fidelity at which the intervention was delivered was rarely mentioned, perhaps because this was rarely assessed. Although not the focus of this study, attention to the fidelity of intervention delivery may be important in explaining variability in effect size. The behavioural rationale was reported less frequently than the clinical rationale. Providing such a rationale requires a reason or theory for the selection of the proposed intervention; frameworks such as COM-B (capability, opportunity, motivation and behaviour)39 or the theoretical domains framework40 might be useful for future justification and reporting. In addition, the links between the proposed theoretical mechanism and the selected intervention could be specified.

Component structure of TIDieR

In the present analysis, variation in quality of reporting was captured by a single dimension and this dimension accounted for approximately one-quarter of the variance in TIDieR scores. Items loading heavily on this component were the what, how and why (behavioural) items. That is to say, trials tended to report these items generally well or generally poorly. For reasons unclear to us, variance in reporting who executed the intervention was less well captured by this component. Future research on the dimensional structure of TIDieR may benefit from larger sample sizes. With this in mind, the dataset associated with this study has been placed online.41

What predicts reporting quality?

Trial reports published in higher impact journals tended to have better reporting quality. This trend is consistent with a recent analysis of unclear RoB evaluations in 20 920 RCTs.23 Similarly, other studies have found a more general association between IF and methodological quality.42 43

Our results suggest an improvement in reporting quality over time, a result consistent with other studies.23 44 The rate of improvement is, however, very slow (0.8 additional TIDieR items included per decade); time will tell if publication of the TIDieR checklist, as well as evaluations similar to our study, will enhance the pace of improvement. The negative correlation between the number of TIDieR items included and the number of unclear RoB evaluations suggests that trial reports with thorough intervention descriptions also tend to have well-described methodologies, and points to the convergent validity of our reporting quality measures.

Improving reporting quality

These results add to the growing evidence of incomplete intervention descriptions in the pharmacy literature10–12 45 and in other biomedical fields.3 We now suggest some practical steps that might be taken to improve the quality of reporting.

First, we suggest that trial authors use the TIDieR and DEPICT checklists when designing, planning and reporting their intervention. Often, the data extractors' initial impression that a report had been thoroughly written was disproved once the checklist had been applied. Checklists simplify the writing process and prevent errors, much as checklists have done in other medical and non-medical domains.46

While we found no evidence that word limits or papers-per-trial predict reporting quality, it would be hasty to conclude these limits are irrelevant. If word limits are prohibitive, appendices or additional online materials should be considered, although the longevity of such resources has been questioned. Hoffmann et al47 found that several trials had placed materials online but the resources had not been maintained and had become inaccessible. Services such as Figshare48 and the Open Science Foundation49 enable materials—including video/audio files—to be shared and cited.

There is evidence to suggest that the use of checklists during peer review enhances reporting quality.50 The quality of reporting is likely to increase if reviewers assess reports using the checklist and/or if authors are required to state that they have complied with checklist recommendations. There may also be a role for journal editors in making the TIDieR or DEPICT checklist a criterion for the evaluation of manuscripts. Indeed, evidence from RCTs suggests that introducing guidelines to evaluate papers increases reporting quality.50 Editors and publishers may also facilitate improvements by either excluding methods sections from the article word counts or by facilitating the dissemination of intervention descriptions in accessible appendices.

Limitations

Agreement between the coders was less than ideal; the coders sometimes found it difficult to identify rules that unambiguously distinguished included from partly included or not included. In their evaluation of physiotherapy interventions, Yamato et al4 similarly found agreement for many TIDieR items was suboptimal.

Unlike earlier studies,4 6 but as in Jones et al,5 we opted not to code control group ‘interventions’. Studies have demonstrated that intervention effects are a function of what happens in the control group, and thus interpretation of effect size depends on understanding what happens in control groups.51 Nevertheless, our study focused on the reporting of interventions and therefore examined intervention groups only.

We used contemporary journal word limits and IFs rather than limits/IFs at the date of manuscript publication. We also imputed missing IFs and word limits. These decisions probably increase the chance of underestimating effect sizes.

Conclusions

Most pharmacy trials reviewed here lacked adequate intervention reporting. This diminished the applied and scientific value of the research and may stymie improvements in patient health. The standard of intervention reporting is, however, gradually increasing and appears somewhat better in journals with higher IFs. The use of the TIDieR checklist to improve reporting could enhance the utility and replicability of trials, and reduce research waste.

Footnotes

Contributors: The idea of evaluating the trial reports using TIDieR comes from MJ; all authors contributed to the design. MidB and CS extracted the data, with support from MJ and MW. MidB performed the analysis with suggestions from MDB, MJ, NS, CB and CM. MidB wrote the manuscript with comments from all authors.

Funding: Funding was provided by the Chief Scientist Office, grant number CZH/4/1041. MidB was also funded by the Professor Roy Weir Career Development Fellowship. MW was funded by a Health Foundation Improvement Science Fellowship.

Competing interests: None declared.

Patient consent for publication: Not required.

Provenance and peer review: Not commissioned; externally peer reviewed.

Data availability statement: Data are available in a public, open access repository. Here is the Open Science Foundation link for the dataset: https://osf.io/a9mpw/

References

  • 1. Cochrane A. Effectiveness and efficiency: random reflections on health services. Cambridge University Press, 1972.
  • 2. Glasziou P, Meats E, Heneghan C, et al. What is missing from descriptions of treatment in trials and reviews? BMJ 2008;336:1472–4. doi:10.1136/bmj.39590.732037.47
  • 3. Hoffmann TC, Glasziou PP, Boutron I, et al. Better reporting of interventions: template for intervention description and replication (TIDieR) checklist and guide. BMJ 2014;348:g1687. doi:10.1136/bmj.g1687
  • 4. Yamato TP, Maher CG, Saragiotto BT, et al. How completely are physiotherapy interventions described in reports of randomised trials? Physiotherapy 2016;102:121–6. doi:10.1016/j.physio.2016.03.001
  • 5. Jones EL, Lees N, Martin G, et al. How well is quality improvement described in the perioperative care literature? A systematic review. Jt Comm J Qual Patient Saf 2016;42:196–AP10. doi:10.1016/S1553-7250(16)42025-8
  • 6. Abell B, Glasziou P, Hoffmann T. Reporting and replicating trials of exercise-based cardiac rehabilitation: do we know what the researchers actually did? Circ Cardiovasc Qual Outcomes 2015;8:187–94. doi:10.1161/CIRCOUTCOMES.114.001381
  • 7. Chalmers I, Glasziou P. Avoidable waste in the production and reporting of research evidence. Obstetrics & Gynecology 2009;114:1341–5. doi:10.1097/AOG.0b013e3181c3020d
  • 8. Moher D, Schulz KF, Simera I, et al. Guidance for developers of health research reporting guidelines. PLoS Med 2010;7:e1000217. doi:10.1371/journal.pmed.1000217
  • 9. Equator Network, 2017. Available: http://www.equator-network.org/
  • 10. Rotta I, Salgado TM, Felix DC, et al. Ensuring consistent reporting of clinical pharmacy services to enhance reproducibility in practice: an improved version of DEPICT. J Eval Clin Pract 2015;21:584–90. doi:10.1111/jep.12339
  • 11. Salgado TM, Correr CJ, Moles R, et al. Assessing the implementability of clinical pharmacist interventions in patients with chronic kidney disease: an analysis of systematic reviews. Ann Pharmacother 2013;47:1498–506. doi:10.1177/1060028013501802
  • 12. Crespo-Gonzalez C, Fernandez-Llimos F, Rotta I, et al. Characterization of pharmacists' interventions in asthma management: a systematic review. J Am Pharm Assoc 2018;58:210–9. doi:10.1016/j.japh.2017.12.009
  • 13. Lowrie R, Lloyd SM, McConnachie A, et al. A cluster randomised controlled trial of a pharmacist-led collaborative intervention to improve statin prescribing and attainment of cholesterol targets in primary care. PLoS One 2014;9:e113370. doi:10.1371/journal.pone.0113370
  • 14. Bruhn H, Bond CM, Elliott AM, et al. Pharmacist-led management of chronic pain in primary care: results from a randomised controlled exploratory trial. BMJ Open 2013;3:e002361. doi:10.1136/bmjopen-2012-002361
  • 15. Avery AJ, Rodgers S, Cantrill JA, et al. A pharmacist-led information technology intervention for medication errors (PINCER): a multicentre, cluster randomised, controlled trial and cost-effectiveness analysis. The Lancet 2012;379:1310–9. doi:10.1016/S0140-6736(11)61817-5
  • 16. Nkansah N, Mostovetsky O, Yu C, et al. Effect of outpatient pharmacists' non-dispensing roles on patient outcomes and prescribing patterns. Cochrane Database Syst Rev 2010. doi:10.1002/14651858.CD000336.pub2
  • 17. de Barra M, Scott CL, Scott NW, et al. Pharmacist services for non-hospitalised patients. Cochrane Database Syst Rev 2018. doi:10.1002/14651858.CD013102
  • 18. Cohen J. Weighted kappa: nominal scale agreement with provision for scaled disagreement or partial credit. Psychol Bull 1968;70:213–20. doi:10.1037/h0026256
  • 19. Abdi H, Williams LJ. Principal component analysis. Wiley Interdisciplinary Reviews: Computational Statistics 2010;2:433–59. doi:10.1002/wics.101
  • 20. Horn JL. A rationale and test for the number of factors in factor analysis. Psychometrika 1965;30:179–85. doi:10.1007/BF02289447
  • 21. Hair JF, Black WC, Babin BJ, et al. Multivariate data analysis. Upper Saddle River, NJ: Prentice Hall, 1998.
  • 22. Web of Science, 2017. Available: http://apps.webofknowledge.com
  • 23. Dechartres A, Trinquart L, Atal I, et al. Evolution of poor reporting and inadequate methods over time in 20 920 randomised controlled trials included in Cochrane reviews: research on research study. BMJ 2017;357:j2490. doi:10.1136/bmj.j2490
  • 24. Higgins JP, Green S. Cochrane handbook for systematic reviews of interventions. John Wiley & Sons, 2011.
  • 25. Fleiss JL, Levin B, Paik MC. The measurement of interrater agreement: statistical methods for rates and proportions. John Wiley & Sons, 1981.
  • 26. Kaiser HF. An index of factorial simplicity. Psychometrika 1974;39:31–6. doi:10.1007/BF02291575
  • 27. Schneiderhan ME, Shuster SM, Davey CS. Twelve-month prospective randomized study of pharmacists utilizing point-of-care testing for metabolic syndrome and related conditions in subjects prescribed antipsychotics. The Primary Care Companion for CNS Disorders 2014;16. doi:10.4088/PCC.14m01669
  • 28. Malone DC, Carter BL, Billups SJ, et al. Can clinical pharmacists affect SF-36 scores in veterans at high risk for medication-related problems? Med Care 2001;39:113–22. doi:10.1097/00005650-200102000-00002
  • 29. Choe HM, Mitrovich S, Dubay D, et al. Proactive case management of high risk patients with type 2 diabetes mellitus by a clinical pharmacist: a randomised controlled trial. American Journal of Managed Care 2005;11:253–5.
  • 30. Castejón AM, Calderón JL, Perez A, et al. A community-based pilot study of a diabetes pharmacist intervention in Latinos: impact on weight and hemoglobin A1c. J Health Care Poor Underserved 2014;24:48–60. doi:10.1353/hpu.2014.0003
  • 31. Dixon D, Johnstone M. What competences are required to deliver behaviour change interventions: development of a health behaviour change competency framework. In submission.
  • 32. Hirsch JD, Steers N, Adler DS, et al. Primary care-based, pharmacist-physician collaborative medication-therapy management of hypertension: a randomized, pragmatic trial. Clin Ther 2014;36:1244–54. doi:10.1016/j.clinthera.2014.06.030
  • 33. Finley PR, Rens HR, Pont JT, et al. Impact of a collaborative care model on depression in a primary care setting: a randomized controlled trial. Pharmacotherapy 2003;23:1175–85. doi:10.1592/phco.23.10.1175.32760
  • 34. Simpson SH, Majumdar SR, Tsuyuki RT, et al. Effect of adding pharmacists to primary care teams on blood pressure control in patients with type 2 diabetes: a randomized controlled trial. Diabetes Care 2011;34:20–6. doi:10.2337/dc10-1294
  • 35. Charrois TL, McAlister FA, Cooney D, et al. Improving hypertension management through pharmacist prescribing; the rural Alberta clinical trial in optimizing hypertension (rural RxACTION): trial design and methods. Implementation Science 2011;6. doi:10.1186/1748-5908-6-94
  • 36. Mahwi TO, Obied KA. Role of the pharmaceutical care in the management of patients with type 2 diabetes mellitus. Integr Pharm Res Pract 2013;4.
  • 37. López Cabezas C, Falces Salvador C, Cubí Quadrada D, et al. Randomized clinical trial of a postdischarge pharmaceutical care program vs regular follow-up in patients with heart failure. Farm Hosp 2006;30:328–42. doi:10.1016/S1130-6343(06)74004-1
  • 38. Lenander C, Elfsson B, Danielsson B, et al. Effects of a pharmacist-led structured medication review in primary care on drug-related problems and hospital admission rates: a randomized controlled trial. Scand J Prim Health Care 2014;32:180–6. doi:10.3109/02813432.2014.972062
  • 39. Michie S, van Stralen MM, West R. The behaviour change wheel: a new method for characterising and designing behaviour change interventions. Implementation Science 2011;6. doi:10.1186/1748-5908-6-42
  • 40. Michie S, Carey RN, Johnston M, et al. From theory-inspired to theory-based interventions: a protocol for developing and testing a methodology for linking behaviour change techniques to theoretical mechanisms of action. Annals of Behavioral Medicine 2018;52:501–12. doi:10.1007/s12160-016-9816-6
  • 41. de Barra M. Do pharmacy intervention reports adequately report their interventions? A TIDieR analysis, 2019.
  • 42. Lee KP, et al. Association of journal quality indicators with methodological quality of clinical research articles. JAMA 2002;287:2805–8. doi:10.1001/jama.287.21.2805
  • 43. Kuroki LM, Allsworth JE, Peipert JF. Methodology and analytic techniques used in clinical research: associations with journal impact factor. Obstet Gynecol 2009;114:877. doi:10.1097/AOG.0b013e3181b5c9e8
  • 44. Reveiz L, Chapman E, Asial S, et al. Risk of bias of randomized trials over time. J Clin Epidemiol 2015;68:1036–45. doi:10.1016/j.jclinepi.2014.06.001
  • 45. Kennie NR, Schuster BG, Einarson TR. Critical analysis of the pharmaceutical care research literature. Ann Pharmacother 1998;32:17–26. doi:10.1177/106002809803200101
  • 46. Gawande A. The checklist manifesto: how to get things right. Profile Books, 2010.
  • 47. Hoffmann TC, Erueti C, Glasziou PP. Poor description of non-pharmacological interventions: analysis of consecutive sample of randomised trials. BMJ 2013;347:f3755. doi:10.1136/bmj.f3755
  • 48. Figshare, 2017. Available: https://figshare.com
  • 49. Open Science Foundation. Available: https://www.osf.io
  • 50. Cobo E, Cortés J, Ribera JM, et al. Effect of using reporting guidelines during peer review on quality of final manuscripts submitted to a biomedical journal: masked randomised trial. BMJ 2011;343:d6783. doi:10.1136/bmj.d6783
  • 51. de Bruin M, Viechtbauer W, Hospers HJ, et al. Standard care quality determines treatment outcomes in control groups of HAART-adherence intervention studies: implications for the interpretation and comparison of intervention effects. Health Psychology 2009;28:668–74. doi:10.1037/a0015989
