Abstract
The aim of this study was to describe the research output and citation rates (academic impact) of public health dissemination and implementation research according to research design and study type.
A cross-sectional bibliographic study was undertaken in 2013. All original data-based studies and review articles focusing on dissemination and implementation research that had been published in 10 randomly selected public health journals in 2008 were audited. The electronic database ‘Scopus’ was used to calculate 5-year citation rates for all included publications.
Of the 1648 publications examined, 216 were original data-based research or literature reviews focusing on dissemination and implementation research. Of these, 72% were classified as descriptive/epidemiological, 26% as intervention and just 1.9% as measurement research. Cross-sectional studies were the most common study design (47%). Reviews, randomized trials, non-randomized trials and decision/cost-effectiveness studies each represented between 6 and 10% of all output.
Systematic reviews, randomized controlled trials and cohort studies were the most frequently cited study designs. The study suggests that publications that had the greatest academic impact (highest citation rates) made up only a small proportion of overall public health dissemination and implementation research output.
Keywords: Bibliographic, Public health, Research output, Implementation, Dissemination, Citation, Design
Highlights
- Only a minority of publications in public health journals are translation research.
- Guidance is limited for practitioners to translate research findings into practice.
- Greater investment in dissemination and implementation research is necessary.
1. Introduction
Bibliographic reviews of published studies have frequently been used to describe research activity and characterise the research that is undertaken (research output) (Milat et al., 2011, Weightman and Butler, 2011). Academic citation provides an objective measure of research use and academic impact (Weightman and Butler, 2011). Assessing the alignment of the type of research being published with measures of academic impact, such as citation, provides one source of information to help prioritise research investment. For example, studies in the health and medical literature have found that research designs at the top of evidence hierarchies, such as systematic reviews and randomized trials, are the most highly cited (Willis et al., 2011, Patsopoulos et al., 2005), yet typically represent only 4–6% of research output (Willis et al., 2011, Wolf and Williamson, 2009).
Implementation and dissemination research seeks to examine ways of moving research evidence, guidelines and best practice recommendations into health practice (Meslin et al., 2013). While bibliographic studies have been undertaken to describe public health research broadly (Milat et al., 2011), we are not aware of any previous studies that have examined research output or citation of implementation or dissemination research within the field. Research activity and citation of implementation and dissemination studies may differ from public health research generally. For example, randomized trials may be particularly difficult to conduct for dissemination and implementation interventions (Eccles et al., 2003), given that organisations (rather than individuals) (Nutbeam and Bauman, 2006) are often the required unit of allocation. Furthermore, compared to established disciplines within public health, emerging fields of science such as implementation and dissemination may need to focus on developing instruments to accurately measure and understand the determinants of implementation and dissemination processes before intervention development and testing can occur (Sanson-Fisher et al., 2008).
The aim of this study was to describe the research output and citation rates (academic impact) of public health dissemination and implementation research according to study type and research design.
2. Methods
2.1. Study sample
A cross-sectional bibliographic study was undertaken. Probability sampling from a comprehensive database of indexed public health journals was used to maximise the potential representativeness of the sample. Ten journals were selected to provide a sample of manuscripts large enough to enable comparisons between categories with sufficient precision. The ten journals were selected, using a computerised random number generator, from the English-language journals listed in the category “Public, Environmental and Occupational Health” in the 2008 Journal Citation Reports, an international database of indexed scholarly journals. In 2008 the database included 162 journals in this category. Publications were included if they were original data-based studies or review articles focusing on dissemination and implementation research and were published in 2008. The National Institutes of Health (NIH) and Institute of Medicine (IOM) endorsed description of translation stage 3 (‘T3’) research (Meslin et al., 2013, Glasgow et al., 2012) was used to define implementation and dissemination research. Such research investigates ways of moving evidence into health practice, and may include assessment of: evidence-practice gaps; barriers or enablers of policy or practice change; quality improvement initiatives; or effectiveness, implementation and dissemination intervention trials.
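Purely as an illustration of the sampling step described above (not part of the published methods, and with placeholder journal names), a simple random sample of 10 journals from a pool of 162 could be drawn as follows:

```python
import random

# Placeholder pool standing in for the 162 journals indexed in the 2008
# "Public, Environmental and Occupational Health" category.
JOURNAL_POOL = [f"Journal {i}" for i in range(1, 163)]

random.seed(2008)  # fixed seed so this sketch is reproducible
# Simple random sample of 10 journals, drawn without replacement
sampled_journals = random.sample(JOURNAL_POOL, k=10)
print(sampled_journals)
```

Sampling without replacement ensures no journal is audited twice, mirroring the use of a computerised random number generator in the study.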
2.2. Measures
Two reviewers (ES and SLY) independently assessed the titles, abstracts or full texts of all publications in the selected journals to determine eligibility, and extracted data. Differences between reviewers were resolved via consensus. Five-year citation rates, obtained from the ‘Scopus’ database in 2013, were used to assess the academic impact of all included publications. Scopus was selected because it provided a more comprehensive database of academic sources, with greater global coverage, than Web of Science; however, unlike Google Scholar, it does not index non-traditional online sources (Kulkarni et al., 2009). Included studies were classified as descriptive/epidemiological, measurement, or intervention research using previous definitions of such research, adapted to suit implementation and dissemination research (Milat et al., 2011, Sanson-Fisher et al., 2008). Research design descriptions from seminal methodological texts (Shadish et al., 2002, Mercer et al., 2007) were used to classify the research design of included studies as: i) systematic reviews/meta-analyses; ii) non-systematic reviews; iii) randomized controlled trials (RCTs); iv) non-randomized trials (including single-group before-and-after studies); v) cohort studies; vi) cross-sectional studies; vii) decision and/or cost-effectiveness studies; and viii) other research designs, such as case-control or case studies.
2.3. Statistical analysis
Analyses were conducted using SAS version 9.3 statistical software. The proportions of publications classified as each study type and research design were used to assess research output. Descriptive statistics with 95% confidence intervals were calculated for measures of research output and citation. Analysis of variance (ANOVA) was performed to assess differences in citation rate by research design and study type. Statistical tests were two-tailed with an alpha of 0.05.
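As an illustrative sketch of the one-way ANOVA step (the study used SAS; the per-paper citation counts below are hypothetical, since raw counts are not reported), the F statistic comparing mean citations across design groups can be computed from between-group and within-group sums of squares:

```python
from statistics import mean

# Hypothetical 5-year citation counts per paper, grouped by research design.
citations_by_design = {
    "systematic_review": [41, 18, 35, 27],
    "rct": [30, 12, 25, 22],
    "cross_sectional": [9, 14, 7, 16, 11],
}

def one_way_anova_f(groups):
    """One-way ANOVA F statistic for a list of independent samples."""
    all_values = [x for g in groups for x in g]
    grand_mean = mean(all_values)
    k = len(groups)         # number of groups
    n = len(all_values)     # total observations
    # Variation of group means around the grand mean
    ss_between = sum(len(g) * (mean(g) - grand_mean) ** 2 for g in groups)
    # Variation of observations around their own group mean
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    ms_between = ss_between / (k - 1)
    ms_within = ss_within / (n - k)
    return ms_between / ms_within

f_stat = one_way_anova_f(list(citations_by_design.values()))
print(f"ANOVA F statistic: {f_stat:.2f}")
```

In practice the F statistic would be compared against the F distribution with (k − 1, n − k) degrees of freedom to obtain the two-tailed p value reported in Table 2.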
3. Results
The selected journals, their number of volumes and issues for 2008, and their five-year impact factors as of 2013, can be seen in Table 1. The mean five-year impact factor of included journals (3.104) was higher than the mean five-year impact factor of all 162 journals listed in the Public, Environmental and Occupational Health category (1.608). Of the 1648 publications examined across the 10 randomly selected public health journals, 216 (13%) were data-based original research or literature reviews focusing on dissemination and implementation research. The burden of disease focus of the implementation and dissemination studies was primarily non-communicable disease (n = 105, 49%), followed by communicable disease (n = 73, 34%) and injury (n = 12, 5.6%).
Table 1.
| Journal name | Issues in 2008 | 5-year impact factor for 2013 | Included studies, n (%)ᵃ |
|---|---|---|---|
| American Journal of Preventive Medicine | Vol 34, Issues 1–6 (Supplement, Issues 3, 4, 6); Vol 35, Issues 1–6 (Supplement, Issues 1, 2, 3, 5, 6) | 5.092 | 36 (16.4) |
| American Journal of Public Health | Vol 98, Issues 1–12 (Supplement, Issue 9) | 4.997 | 36 (16.4) |
| Australian and New Zealand Journal of Public Health | Vol 32, Issues 1–6 | 1.835 | 10 (4.5) |
| BMC Public Health | Vol 8, Issues 1–12 | 2.781 | 82 (37.4) |
| European Journal of Public Health | Vol 18, Issues 1–6 (Supplement, Issue 1) | 2.743 | 13 (5.9) |
| International Journal of Public Health | Vol 53, Issues 1–6 | 2.605 | 4 (1.8) |
| Journal of Public Health (UK) | Vol 30, Issues 1–4 | 2.312 | 6 (4.1) |
| Journal of Public Health Policy | Vol 29, Issues 1–4 | 2.189 | 3 (1.3) |
| Preventive Medicine | Vol 46, Issues 1–6; Vol 47, Issues 1–6 (Supplement, Issue 1) | 3.917 | 17 (7.8) |
| Scandinavian Journal of Public Health | Vol 36, Issues 1–8 (Supplement, Issue 8) | 2.570 | 9 (4.1) |

ᵃ Proportion of all 216 studies included in the review.
Almost three quarters of the 216 eligible publications were classified as descriptive/epidemiological studies, only 26% as intervention, and just 1.9% as measurement research (Table 2). The most common study design was cross-sectional (47%). All other study designs each accounted for 6–10% of publications. Five-year citation rates differed significantly by research design but not by study type (Table 2). The most frequently cited study designs were systematic reviews, RCTs and cohort studies, while cross-sectional studies and non-randomized trials were the least frequently cited.
Table 2.
| | Research output, n | % | 95% CI | 5-year citationᵇ, mean | 95% CI | P value |
|---|---|---|---|---|---|---|
| **Study type** | | | | | | 0.1638 |
| Descriptive | 156 | 72.2 | 66.2–78.2 | 13.9 | 11.6–16.2 | |
| Measurement | 4 | 1.9 | 0.1–3.7 | 16.5 | −15.0 to 47.8 | |
| Intervention | 56 | 25.9 | 20.1–31.8 | 18.4 | 13.8–22.9 | |
| **Study design** | | | | | | 0.0017 |
| Systematic review/meta-analysis | 14 | 6.5 | 3.2–9.8 | 30.1 | 14.6–45.7 | |
| Non-systematic review | 15 | 6.9 | 3.5–10.3 | 15.3 | 8.7–21.8 | |
| Randomized controlled trial | 17 | 7.9 | 4.3–11.5 | 22.2 | 13.9–30.5 | |
| Non-randomized trial | 22 | 10.2 | 6.1–14.2 | 11.8 | 6.2–17.5 | |
| Cohort | 14 | 6.5 | 3.2–9.8 | 18.3 | 9.5–27.1 | |
| Cross-sectional | 101 | 46.8 | 40.1–53.4 | 12.5 | 10.0–14.9 | |
| Decision and cost-effectiveness | 13 | 6.0 | 2.8–9.2 | 13.7 | 7.0–20.4 | |
| Other study design | 20 | 9.3 | 5.4–13.1 | 14.0 | 5.8–22.1 | |

Note. P values represent statistical comparison of means.
10 randomly selected public health journals: American Journal of Preventive Medicine, American Journal of Public Health, Australian and New Zealand Journal of Public Health, BMC Public Health, European Journal of Public Health, International Journal of Public Health, Journal of Public Health (UK), Journal of Public Health Policy, Preventive Medicine, Scandinavian Journal of Public Health.
ᵇ 5-year citations, as recorded in 2013.
4. Discussion
To our knowledge, this is the first bibliographic study describing the research output and citation rates (academic impact) of public health dissemination and implementation research. The findings suggest that systematic reviews and meta-analyses, cohort studies and RCTs represent the most highly cited research designs in the field. Despite being infrequently cited, publications utilising cross-sectional designs accounted for almost half of all implementation and dissemination research output.
Despite concerns regarding the feasibility, practicality and appropriateness of RCTs for implementation and dissemination research (Nutbeam and Bauman, 2006), such designs appear to dominate intervention research in this field and were highly cited. These findings suggest that, notwithstanding the reported barriers to random assignment, randomized designs are both common and have considerable impact within the field. Furthermore, the study identified only four publications (1.9%) classified as measurement research, despite such studies being frequently cited. As measure development is particularly important for the development of robust scientific disciplines (Sanson-Fisher et al., 2008), further research investment in this area is likely to make an important contribution to the quality of implementation science.
The findings of this study should be considered in the context of a number of limitations. The study randomly sampled from journals categorised as “Public, Environmental and Occupational Health” in the Journal Citation Reports – Web of Science database. Public health implementation and dissemination research is published across a variety of specialist implementation, general medical and other journal categories. As such, the findings of this study may not be representative of all implementation and dissemination research in the field. Furthermore, we sampled papers published in 2008 in order to allow calculation of five-year citation rates; both research output and citation patterns may have since changed. Finally, citations represent an objective yet limited measure of the academic impact of published work (Weightman and Butler, 2011) and may not reflect the broader societal impacts of research (Bornmann, 2013). Notwithstanding these limitations, the study provides a useful characterisation of the field for researchers, funders and health practitioners interested in maximising the academic impact of such research.
5. Conclusions
Research describing implementation and dissemination interventions represents a small fraction of research published in public health journals. Greater investment in dissemination and implementation research, and in particular incentives for randomized trials and systematic reviews, may maximise the quality and impact of public health dissemination and implementation research.
Conflict of interest statement
The study was funded by a grant awarded by the Hunter Medical Research Institute with support from Hunter New England Population Health and the University of Newcastle. The authors declare that they have no conflict of interest.
Financial disclosure
No financial disclosures were reported by the authors of this paper.
Transparency document
Acknowledgments
The study was funded by a grant awarded by the Hunter Medical Research Institute with support from Hunter New England Population Health (G1300572) and the University of Newcastle.
Footnotes
The Transparency document associated with this article can be found in the online version.
References
- Bornmann L. What is societal impact of research and how can it be assessed? A literature survey. J. Am. Soc. Inf. Sci. Technol. 2013;64(2):217–233. [Google Scholar]
- Eccles M., Grimshaw J., Campbell M., Ramsay C. Research designs for studies evaluating the effectiveness of change and improvement strategies. Quality and Safety in Health Care. 2003;12(1):47–52. doi: 10.1136/qhc.12.1.47. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Glasgow R.E., Vinson C., Chambers D., Khoury M.J., Kaplan R.M., Hunter C. National Institutes of Health approaches to dissemination and implementation science: current and future directions. Am. J. Public Health. 2012;102(7):1274–1281. doi: 10.2105/AJPH.2012.300755. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Kulkarni A.V., Aziz B., Shams I., Busse J.W. Comparisons of citations in Web of Science, Scopus, and Google Scholar for articles published in general medical journals. JAMA. 2009;302(10):1092–1096. doi: 10.1001/jama.2009.1307. [DOI] [PubMed] [Google Scholar]
- Mercer S.L., DeVinney B.J., Fine L.J., Green L.W., Dougherty D. Study designs for effectiveness and translation research: identifying trade-offs. Am. J. Prev. Med. 2007;33(2):139–154. doi: 10.1016/j.amepre.2007.04.005. [DOI] [PubMed] [Google Scholar]
- Meslin E.M., Blasimme A., Cambon-Thomsen A. Mapping the translational science policy ‘valley of death’. Clin. Transl. Med. 2013;2(1):1–8. doi: 10.1186/2001-1326-2-14. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Milat A.J., Bauman A.E., Redman S., Curac N. Public health research outputs from efficacy to dissemination: a bibliometric analysis. BMC Public Health. 2011;11(1):934. doi: 10.1186/1471-2458-11-934. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Nutbeam D., Bauman A.E. McGraw-Hill; New York: 2006. Evaluation in a Nutshell: A Practical Guide to the Evaluation of Health Promotion Programs. [Google Scholar]
- Patsopoulos N.A., Analatos A.A., Ioannidis J.P. Relative citation impact of various study designs in the health sciences. JAMA. 2005;293(19):2362–2366. doi: 10.1001/jama.293.19.2362. [DOI] [PubMed] [Google Scholar]
- Sanson-Fisher R.W., Campbell E.M., Htun A.T., Bailey L.J., Millar C.J. We are what we do: research outputs of public health. Am. J. Prev. Med. 2008;35(4):380–385. doi: 10.1016/j.amepre.2008.06.039. [DOI] [PubMed] [Google Scholar]
- Shadish W.R., Cook T.D., Campbell D.T. Houghton Mifflin; Boston: 2002. Experimental and Quasi-experimental Designs for Generalized Causal Inference. [Google Scholar]
- Weightman A.L., Butler C.C. Using bibliometrics to define the quality of primary care research. BMJ. 2011;342 doi: 10.1136/bmj.d1083. [DOI] [PubMed] [Google Scholar]
- Willis D.L., Bahler C.D., Neuberger M.M., Dahm P. Predictors of citations in the urological literature. BJU Int. 2011;107(12):1876–1880. doi: 10.1111/j.1464-410X.2010.10028.x. [DOI] [PubMed] [Google Scholar]
- Wolf D.M., Williamson P.A. Impact factor and study design: the Academic Value of Published Research (AVaRes) score. Ann. R. Coll. Surg. Engl. 2009;91(1):71–73. doi: 10.1308/003588409X359222. [DOI] [PMC free article] [PubMed] [Google Scholar]