Abstract
Objective
The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement was developed as a guideline for reporting systematic reviews and meta-analyses. Despite its prevalent use in medicine and nursing, no studies have examined authors' perception of it. The purpose of this study was to explore how authors who published reviews, meta-analyses, or both in nursing journals perceive the PRISMA statement.
Design
Cross-sectional descriptive study.
Methods
An online survey was conducted among authors who published reviews, meta-analyses, or both in nursing journals between 2011 and 2017. The selected authors' email addresses were extracted from the PubMed database. A questionnaire with a 10-point Likert scale (1 = not important at all to 10 = very important) was developed to elicit their perceptions of both the PRISMA statement as a whole and the individual items therein.
Results
Invitations were sent to the 1960 valid email addresses identified, yielding 230 responses (response rate: 11.7%), of which 181 were complete (completion rate: 9.2%). The average perceived importance of the PRISMA statement was 8.66 (SD=1.40), while the perceived importance of the individual items ranged from 7.75 to 9.35. Six items were rated significantly higher than the overall rating, whereas one item was rated significantly lower.
Conclusion
Most respondents perceived the PRISMA statement as important. Items related to information sources, study selection, search-flow presentation, summary of findings, limitations and interpretation were deemed more important, while protocol registration was deemed less so.
Keywords: PRISMA, publication policy, quality of reporting, research reporting, systematic reviews
Strengths and limitations of this study
This pioneering study is the first to examine authors’ perception of the Preferred Reporting Items for Systematic Reviews and Meta-Analyses statement.
The sampling frame, generated from PubMed, covered most of the eligible subjects in nursing.
The response rate of the survey was relatively low.
Introduction
Systematic reviews and meta-analyses are essential tools for healthcare professionals in evaluating the effectiveness of existing medical interventions. Information synthesised from systematic reviews and meta-analyses is frequently used as the basis for the development and revision of clinical practice guidelines.1 The reliability, usefulness and scientific soundness of systematic reviews and meta-analyses depend on their methodologies and reporting quality. In this regard, journal editors have suggested that contributing researchers and the editorial boards of journals are jointly responsible for ensuring the high quality of systematic reviews and meta-analyses.2 It is the obligation of researchers to conduct and report their findings according to international standards and guidelines where possible, whereas it is the prerogative of journal editors and contributors to set stringent criteria and adhere to them when considering manuscripts for publication.
Several research-reporting guidelines are available for conducting and reporting various types of studies in health sciences, such as the Consolidated Standards of Reporting Trials3 for randomised controlled trials and the Strengthening the Reporting of Observational studies in Epidemiology4 for observational studies. For systematic reviews and meta-analyses of interventional studies, the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement5 is the most commonly used reporting guideline.
The PRISMA statement was developed in 2005 during a 3-day meeting in Canada by an assemblage of review authors, methodologists, clinicians, medical editors and consumers.5 A 27-item checklist in seven subsections was created through a consensual process informed by evidence.1 The PRISMA statement can be used by authors as a guideline to ensure the completeness of studies and to reduce reporting biases when conducting and reporting systematic reviews and meta-analyses. The statement can also be used by journal reviewers and editors to evaluate the reporting quality of manuscripts under consideration. Although it focuses on reporting systematic reviews and meta-analyses of randomised controlled trials, the PRISMA statement can also be used for systematic reviews and meta-analyses of other types of studies. The practical value of the PRISMA statement is demonstrated by its having been cited over 19 000 times up to July 2017.6
Several research studies have evaluated the methodological and reporting quality of systematic reviews and meta-analyses.7–11 For example, in terms of reporting quality, an average of 86.3% of systematic reviews published in gastroenterology and hepatology journals complied with the PRISMA guidelines,8 whereas only 57.1% of those published in nursing journals did so.9
As of February 2018, 177 academic journals have endorsed the PRISMA statement (http://www.prisma-statement.org/), reflecting their recommendation for research contributors to adhere to the PRISMA guidelines when conducting and reporting systematic reviews or meta-analyses.
Despite the sizeable number of citations of the PRISMA statement in academic articles over the years, adherence among researchers to the items in the PRISMA statement has been suboptimal. Nine PRISMA items were adhered to by fewer than 67% of the 2382 systematic reviews published after 2009.6 For systematic reviews published in nursing journals, the median adherence rate was lower than 60%.9 Currently, only three out of the 116 nursing journals endorse the PRISMA statement (http://www.prisma-statement.org/), namely Journal of Obstetric, Gynecologic, and Neonatal Nursing, Journal of the American Academy of Nurse Practitioners, and Nursing Research. Although journals such as the International Journal of Nursing Studies and Journal of Clinical Nursing do not formally endorse the PRISMA statement, they do recommend that contributors follow it when reporting their systematic reviews and meta-analyses. Therefore, it is important to examine authors' perception of the importance of the items in the PRISMA statement. To the best of our knowledge, no studies have examined authors' perception of PRISMA. Thus, the aim of this study is to address this gap by exploring how authors publishing in nursing journals perceive the importance of not only the PRISMA statement as a whole, but also the individual items therein.
Methods
Study design
A cross-sectional online survey was conducted to collect the perceptions of the PRISMA statement among authors who have published reviews or meta-analyses in nursing journals.
Participants
Authors who had published reviews, meta-analyses, or both in nursing journals between 2011 and 2017 were invited to participate in the online survey on their perception of the PRISMA statement.
Strategic sampling of participants
A total of 116 nursing journals were identified from the nursing category of the Journal Citation Reports, Science Edition 2016 version (https://clarivate.com/products/journal-citation-reports/). A search was conducted on the PubMed database for articles published in these 116 journals between 1 January 2011 and 15 December 2017 with 'review' or 'meta-analysis' in their titles. We used 'review' rather than 'systematic review' as the search term to be more inclusive because prior studies have indicated that some systematic reviews published in nursing journals might use other terms such as 'systematic literature review' in the title.12 A noteworthy difference between systematic reviews and traditional literature/narrative reviews is that the former requires predefined eligibility criteria, a systematic search strategy, quality assessment and synthesis of results, whereas the latter does not. We restricted the search to articles published from 2011 onwards to reduce the number of obsolete, and therefore invalid, email addresses. The PubMed query used in the database search is included in online supplementary file 1. A total of 3877 articles were identified in the search. Article summary records were retrieved and downloaded from the PubMed database in the Extensible Markup Language (XML) file format. A Python script was then written to process the XML file, extracting the PMID, article title, authors and their email addresses from each record into the Comma-Separated Values (CSV) format, as sketched below.
bmjopen-2018-026271supp001.pdf (68.2KB, pdf)
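The extraction script itself is not part of the published materials; the following is a minimal sketch of the kind of processing described above, assuming the article summaries were saved as a standard PubMed XML export (the file names pubmed_records.xml and author_emails.csv are illustrative only).

```python
import csv
import re
import xml.etree.ElementTree as ET

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

def extract_records(xml_path):
    """Yield (pmid, title, author, email) tuples from a PubMed XML export."""
    tree = ET.parse(xml_path)
    for article in tree.getroot().iter("PubmedArticle"):
        pmid = article.findtext(".//MedlineCitation/PMID", default="")
        title = article.findtext(".//Article/ArticleTitle", default="")
        for author in article.iter("Author"):
            name = " ".join(
                filter(None, [author.findtext("ForeName"), author.findtext("LastName")])
            )
            # Email addresses are usually embedded in the affiliation text.
            for aff in author.iter("Affiliation"):
                match = EMAIL_RE.search(aff.text or "")
                if match:
                    yield pmid, title, name, match.group(0).rstrip(".")

def write_csv(xml_path, csv_path):
    with open(csv_path, "w", newline="", encoding="utf-8") as handle:
        writer = csv.writer(handle)
        writer.writerow(["PMID", "Title", "Author", "Email"])
        writer.writerows(extract_records(xml_path))

if __name__ == "__main__":
    write_csv("pubmed_records.xml", "author_emails.csv")
```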
Sample size estimation
The authors' perceptions of the PRISMA statement and its individual items were measured with a 10-point Likert scale. According to the normal approximation, 6 SDs would cover approximately 99.7% of the data; the SD was thus approximated as 1.67 (10/6). To achieve a 95% CI with a margin of error of 0.2, 270 responses would be needed.13 Based on prior research,14 response rates for university staff and health educators are estimated to range from 10% to 20%. Assuming a conservative response rate of 10% for the eSurvey, we estimated that 2700 invitations would be needed.
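For readers who wish to verify the arithmetic, the calculation can be reproduced with the standard normal-approximation formula for estimating a mean, n = (z x SD / E)^2; the sketch below simply restates the numbers given above (the rounding up to 270 and the 10% response-rate assumption follow the text).

```python
import math

# Approximate the SD by the range rule: a 10-point scale spans roughly 6 SDs.
sd = 1.67                        # 10 / 6, as stated in the text
z = 1.96                         # critical value for a 95% CI
margin_of_error = 0.2

# Required sample size for estimating a mean: n = (z * sd / E)^2
n_required = (z * sd / margin_of_error) ** 2
print(round(n_required))         # ~268; the paper rounds this up to 270

# Invitations needed under an assumed 10% response rate.
response_rate = 0.10
print(math.ceil(270 / response_rate))   # 2700
```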
Questionnaire
The 37 items in the questionnaire concerned different aspects: five focused on the authors' demographic information; four on their experiences in conducting reviews and using the PRISMA guidelines; one on the overall importance of following the PRISMA guidelines in conducting and reporting systematic reviews, rated on a 10-point Likert scale (1 = not important at all to 10 = extremely important); and 27 on the perceived importance of each individual item in the seven sections of the PRISMA guidelines, rated on a 10-point Likert scale (1 = not important at all to 10 = very important). Open-ended questions were included in each subsection to gather qualitative data about the reasons for their ratings.
An electronic questionnaire was created using the eSurvey platform developed by the Information Technology (IT) department of the authors’ university.15 After pilot testing by the authors’ peers, a unique URL for the electronic questionnaire was generated. The questionnaire is attached as online supplementary file 2.
bmjopen-2018-026271supp002.pdf (554.1KB, pdf)
Data collection
Invitation emails, including a description of the study and the URL to the questionnaire, were sent to the target email addresses between 3 January 2018 and 7 January 2018. A reminder was sent on 17 January 2018. The survey was closed on 31 January 2018. Completed e-questionnaires were stored in the server of the IT department of the authors’ university.
Data analyses
Descriptive statistics, including frequencies and percentages, were used to summarise the results. A paired-sample t-test was used to examine differences between the overall rating and each individual item rating. The Bonferroni method was used to adjust the significance level for multiple comparisons. All analyses were conducted using IBM SPSS V.22.0 for Windows.16 Content analysis of the qualitative responses was conducted using NVivo V.11 for Windows.17 The open-ended responses were analysed through initial coding; codes with similar meanings were then grouped into the same category.18
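The statistical analyses were performed in SPSS and the qualitative coding in NVivo; purely as an illustration of the comparison described above, an equivalent paired-sample t-test with a Bonferroni-adjusted threshold could be scripted along the following lines (a sketch assuming the ratings sit in a data frame with one hypothetical column per item plus an 'overall' column; this is not the authors' actual analysis code).

```python
import pandas as pd
from scipy import stats

def compare_items_to_overall(df, overall_col="overall", alpha=0.05):
    """Paired-sample t-tests of each item rating against the overall rating,
    using a Bonferroni-adjusted significance threshold."""
    item_cols = [c for c in df.columns if c != overall_col]
    adjusted_alpha = alpha / len(item_cols)          # Bonferroni adjustment
    results = []
    for col in item_cols:
        paired = df[[col, overall_col]].dropna()     # complete pairs only
        t, p = stats.ttest_rel(paired[col], paired[overall_col])
        results.append(
            {"item": col, "mean": paired[col].mean(), "t": t, "p": p,
             "significant": p < adjusted_alpha}
        )
    return pd.DataFrame(results)

# Hypothetical usage: ratings.csv holds one column per PRISMA item plus 'overall'.
# print(compare_items_to_overall(pd.read_csv("ratings.csv")))
```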
Patient and public involvement
No patients or members of the public were involved in this study; the participants were authors who had published in nursing journals.
Results
A total of 2565 email addresses were identified from 1832 articles (of the 3877 articles identified in the PubMed search; the remaining articles did not include email addresses). After removal of duplicates and invalid addresses, 2310 distinct email addresses remained, and an email invitation was sent to each. Of these, invitations to 350 addresses proved undeliverable and bounced back, whereas the remaining 1960 were delivered successfully. A total of 230 authors attempted the questionnaire, 181 of whom completed it. Accordingly, the response rate was 11.7% (230/1960) and the completion rate was 9.2% (181/1960).
The respondents’ demographic information is summarised in table 1: 135 (74.6%) respondents were females and 138 (76.3%) were aged 41 or above. In terms of disciplines, 125 (69.1%) respondents specialised in nursing, followed by eight (4.4%) in public health and six (3.3%) in psychiatry.
Table 1.
Variables | n (%) |
Gender | |
Male | 46 (25.4) |
Female | 135 (74.6) |
Age | |
21–30 | 7 (3.9) |
31–40 | 36 (19.9) |
41–50 | 45 (24.9) |
51–60 | 62 (34.3) |
61 or above | 31 (17.1) |
Specialty | |
Nursing | 125 (69.1) |
Dentistry | 1 (0.6) |
Medicine | 1 (0.6) |
Microbiology | 1 (0.6) |
Obstetrics and gynaecology | 4 (2.2) |
Paediatrics | 5 (2.8) |
Pharmacology | 2 (1.1) |
Physiology | 2 (1.1) |
Psychiatry | 6 (3.3) |
Psychology | 2 (1.1) |
Public health | 8 (4.4) |
Surgery | 4 (2.2) |
Others | 20 (11.0) |
All of the 181 respondents knew what a systematic review was. Among them, 160 (88.4%) had published systematic reviews and 166 (91.7%) were aware of the PRISMA guidelines. The 166 respondents aware of the PRISMA guidelines were then asked to rate, on a 10-point Likert scale, the overall importance of following the PRISMA guidelines in conducting and reporting systematic reviews; an average score of 8.66 (SD=1.40) was reported (table 2). The respondents also rated the importance of each of the 27 items in the PRISMA guidelines, the results of which are shown in table 3. The mean scores ranged from 7.75 (Item 5) to 9.35 (Item 17) with a median of 8.73 (Item 21). The rating for Item 5 was significantly lower than the overall rating. Conversely, the ratings for six items from different sections were significantly higher than the overall rating, namely Items 7 and 9 from the Methods section, Item 17 from the Results section and Items 24, 25 and 26 from the Discussion section.
Table 2.
Question | n (%) |
Do you know what a systematic review is? | |
Yes | 181 (100.0%) |
No | 0 (0.0%) |
Have you published any systematic review before? | |
Yes | 160 (88.4%) |
No | 21 (11.6%) |
Are you aware of the PRISMA guidelines? | |
Yes | 166 (91.7%) |
No | 15 (8.3%) |
Do you follow the PRISMA guidelines when conducting and reporting your systematic review? | |
Yes | 140 (77.3%) |
No (not required by journals) | 10 (5.5%) |
No (other reasons) | 3 (1.7%) |
Not applicable (did not conduct any systematic reviews) | 13 (7.2%) |
No response | 15 (8.3%) |
Importance of following PRISMA guidelines in conducting and reporting systematic review (1–10), mean (SD) | 8.66 (1.40); 95% CI 8.45 to 8.88 |
PRISMA, Preferred Reporting Items for Systematic Reviews and Meta-Analyses.
Table 3.
Item | Mean (SD) | 95% CI | P value* |
Title | |||
1. Identify the report as a systematic review, meta-analysis, or both | 8.98 (1.58) | (8.73 to 9.22) | 0.015 |
Abstract | |||
2. Provide a structured summary including | 8.87 (1.59) | (8.62 to 9.11) | 0.051 |
Introduction | |||
3. Describe the rationale for the review | 8.81 (1.45) | (8.58 to 9.03) | 0.223 |
4. Provide an explicit statement of questions being addressed with reference to PICOS | 8.67 (1.61) | (8.42 to 8.92) | 0.962 |
Method | |||
5. Indicate if a review protocol exists | 7.75 (2.18) | (7.41 to 8.08) | <0.001† |
6. Specify study and report characteristics used as criteria for eligibility | 8.90 (1.44) | (8.68 to 9.12) | 0.022 |
7. Describe all information sources in the search and date last searched | 9.07 (1.26) | (8.87 to 9.26) | <0.001† |
8. Present full electronic search strategy for at least one database | 8.61 (1.73) | (8.34 to 8.87) | 0.690 |
9. State the process for selecting studies | 9.16 (1.30) | (8.96 to 9.36) | <0.001† |
10. Describe method of data extraction from reports and any processes for obtaining and confirming data from investigators | 8.81 (1.54) | (8.57 to 9.04) | 0.247 |
11. List and define all variables for which data were sought and any assumptions and simplifications made | 8.70 (1.49) | (8.47 to 8.93) | 0.748 |
12. Describe methods used for assessing risk of bias of individual studies and how this information is to be used in any data synthesis | 8.64 (1.64) | (8.39 to 8.89) | 0.833 |
13. State the principal summary measures | 8.58 (1.66) | (8.33 to 8.84) | 0.509 |
14. Describe the methods of handling data and combining results of studies | 8.87 (1.45) | (8.65 to 9.10) | 0.089 |
15. Specify any assessment of risk of bias that may affect the cumulative evidence | 8.71 (1.44) | (8.49 to 8.93) | 0.697 |
16. Describe methods of additional analyses | 8.57 (1.60) | (8.33 to 8.82) | 0.455 |
Results | |||
17. Give numbers of studies screened, assessed for eligibility, and included in the review, with reasons for exclusions at each stage, ideally with a flow diagram | 9.35 (1.00) | (9.20 to 9.50) | <0.001† |
18. For each study, present characteristics for which data were extracted and provide the citations | 9.01 (1.40) | (8.80 to 9.23) | 0.007 |
19. Present data on risk of bias of each study and, if available, any outcome level assessment | 8.45 (1.79) | (8.17 to 8.72) | 0.075 |
20. For all outcomes considered (benefits or harms), present, for each study: (1) simple summary data for each intervention group; (2) effect estimates and CIs, ideally with a forest plot | 8.52 (1.64) | (8.27 to 8.77) | 0.231 |
21. Present results of each meta-analysis done, including CIs and measures of consistency | 8.73 (1.50) | (8.51 to 8.96) | 0.556 |
22. Present results of any assessment of risk of bias across studies | 8.51 (1.65) | (8.25 to 8.76) | 0.202 |
23. Give results of additional analyses | 8.48 (1.59) | (8.24 to 8.73) | 0.101 |
Discussion | |||
24. Summarise the main findings including the strength of evidence for each main outcome; consider their relevance to key groups | 9.20 (1.03) | (9.05 to 9.36) | <0.001† |
25. Discuss limitations at study and outcome level, and at review-level. | 9.08 (1.30) | (8.89 to 9.28) | <0.001† |
26. Provide a general interpretation of the results in the context of other evidence, and implications for future research | 9.27 (0.99) | (9.11 to 9.42) | <0.001† |
Funding | |||
27. Describe sources of funding for the systematic review and other support; role of funders for the systematic review | 8.43 (2.04) | (8.12 to 8.75) | 0.149 |
*P values were computed using a paired-sample t-test comparing each item with the overall rating.
†Significant at the 5% level of significance after Bonferroni adjustment.
PRISMA, Preferred Reporting Items for Systematic Reviews and Meta-Analyses.
For the open-ended questions, the respondents were asked to share the reasons for their ratings for each section. From the 166 respondents, 62 valid open-ended responses were received. Their perceptions of the importance of the items in the seven sections of the PRISMA guidelines are summarised in online supplementary file 3.
bmjopen-2018-026271supp003.pdf (142.2KB, pdf)
When asked to explain the importance of Item 1 (Title), the prevailing view was that compliance with it would ensure that the title provided clear information about the study (32 codes) and helped readers locate the work (25 codes). Item 2 (Abstract) was likewise deemed important since a well-written abstract would help readers quickly ascertain the purpose of the paper (28 codes). Nonetheless, some respondents found it unnecessary to provide a registration number for the systematic review in the Abstract. Furthermore, the respondents believed that adhering to Item 3 (Introduction) was important as the Introduction would acquaint readers with the context of the study (12 codes), but some felt that the Population, Intervention, Comparison, Outcome (PICO) framework (Item 4, Introduction) was inflexible and had its limitations (12 codes). The PICO framework has been advocated for interventional studies.19 However, in nursing research there may be other types of systematic reviews, such as those of prevalence studies20 and of the psychometric properties of instruments.21 Therefore, the PICO framework may not be directly applicable in those cases.
The respondents also felt that abiding by Items 5–16 (Methods) was vital to ensuring the quality, rigour, and trustworthiness of the study (17 codes). However, a few respondents commented that not all items were applicable to some types of systematic reviews (five codes). For instance, one respondent opined that "the assessment of risk of bias, statement of risk ratio and explaining additional analyses depend on the study design … [For] a systematic review of cross-sectional surveys or a meta-synthesis I do not need this information" (Response 15).
When asked about the importance of Items 17–23 (Results), the respondents agreed that these items were critical to research reporting (11 codes), but remarked that not all of them could be complied with (13 codes) and that some might be less applicable to reviews that undertook narrative synthesis. They also regarded Items 24–26 (Discussion) as essential components of research reporting (nine codes), as these would inform readers of knowledge gaps, future practice and implications (14 codes). As for Item 27 (Funding), the respondents felt that it was vital in systematic reviews as it would reveal potential sources of bias (10 codes) and allow authors to declare any conflicts of interest.
Discussion
Most respondents felt that the PRISMA statement was important and reported a mean overall rating of 8.66 (SD=1.40). In terms of the individual items, all but Item 5 received an average score above 8.0, implying that most respondents perceived those items as important. Item 5 ('Indicate if a review protocol exists, if and where it can be accessed (eg, web address) and, if available, provide registration information including registration number') registered a mean score of 7.75, which is significantly lower than the overall rating.
Among published systematic reviews, compliance with Item 5 of the PRISMA statement has often been low. Panic et al8 reported that only four out of 90 systematic reviews (4.4%) published in gastroenterology and hepatology journals adhered to this item, while Tam et al9 reported that only two out of 74 (2.7%) systematic reviews in nursing journals did so. Sideri et al22 suggested that protocol registration of systematic reviews should be encouraged to improve the quality of published systematic reviews. A plausible explanation for the inadequate adherence and the comparatively low rating of this item lies in low awareness of the platforms for publishing or registering review protocols. Moreover, a protocol is not a prerequisite for publishing systematic reviews in most medical and nursing journals, whereas it is a requirement for publishing randomised controlled trials, as mandated by many journals.23
Six items were rated significantly higher than the overall rating: two from the Methods section, one from the Results section and three from the Discussion section. The three items from the Methods and Results sections are:
Item 7: 'Describe all information sources (eg, databases with dates of coverage, contact with study authors to identify additional studies) in the search and date last searched'.
Item 9: 'State the process for selecting studies (ie, screening, eligibility, inclusion in systematic review and, if applicable, inclusion in the meta-analysis)'.
Item 17: 'Give numbers of studies screened, those assessed for eligibility, and those included in the review, with reasons for exclusions at each stage, ideally with a flow diagram'.
These three items relate to the distinctive data collection and evaluation processes of systematic reviews,24 which constitute the major differences between systematic reviews and traditional literature reviews. When conducting a systematic review, the authors stipulate inclusion criteria for the review before the literature is selected, and they must demonstrate that these criteria are consistently adhered to.25 Therefore, a clear description of the search sources and the selection procedure is essential. A recent study reported that all systematic reviews published in nursing journals revealed the databases used and at least 85.1% provided the date last searched.12 Tam et al12 further reported that the rates of compliance with Items 7, 9 and 17 were 98.6%, 97.3% and 91.9%, respectively, among systematic reviews published in nursing journals.
The scores of all three items from the Discussion section, as well as the subtotal score for the section, were significantly higher than the overall score. These three items are:
Item 24: 'Summarise the main findings including the strength of evidence for each main outcome; consider their relevance to key groups (eg, healthcare providers, users and policy makers)'.
Item 25: 'Discuss limitations at the study and outcome levels (eg, risk of bias), and at the review level (eg, incomplete retrieval of identified research and reporting bias)'.
Item 26: 'Provide a general interpretation of the results in the context of other evidence, and implications for future research'.
The purpose of the Discussion is to summarise the findings in a research context and to explain their meaning and importance.26 Traditionally, the discussion serves to convince readers of the soundness of the authors' data interpretation and speculation, and has been deemed the most important part of a research article.27 For scientific articles, the discussion should include the principal findings, strengths and weaknesses, differences in results, the meaning of the study (such as possible mechanisms and implications for clinicians or policymakers), unanswered questions and future research.28 These points jointly constitute the content of the three items. This view is also reflected in some of the open-ended responses to this section of our survey, as exemplified by 'An essential component of reporting research', 'Informs knowledge gaps, future practice and implications' and 'Provides overall results'.
The current study is the first to elucidate how authors who have published systematic reviews and meta-analyses perceive the PRISMA statement. We attempted to include as participants all authors who had published systematic reviews, meta-analyses, or both in nursing journals between 2011 and 2017. The results indicate that most respondents perceived the items in the PRISMA statement as important, implying their general agreement that adhering to the statement when writing their manuscripts is beneficial. The advantages of such adherence include not only the establishment of a standard format but also the assured inclusion of all important information, the omission of which would diminish the usefulness of the reviews.1 5 Several limitations of the study are noteworthy. First, only 181 respondents completed the questionnaire, a completion rate of 9.2%, which limits the representativeness of the sample. Second, although all the email addresses were extracted from the included articles, they mainly belonged to the corresponding authors, who are usually the senior authors29; this might have introduced selection bias. Third, 350 out of 2310 (15.1%) email addresses were no longer valid at the time of the study. It has been reported that most nursing faculty members with doctoral degrees are in their early 50s and that the average retirement age for a nurse educator is 62.5 years30; some of the authors might therefore have retired. Finally, we did not attempt to search other sources for email addresses to increase the number of valid addresses.
Reporting guidelines are useful tools for authors, reviewers and editors to ensure the appropriateness of manuscript content. It has been advocated that these guidelines be introduced when teaching evidence-based practice.31 32 In this study, we found that authors publishing systematic reviews and meta-analyses in nursing journals deemed it important to follow the PRISMA statement when conducting and reporting their reviews. Future studies may focus on journal editors and peer reviewers to determine not only whether their views coincide with those of the authors of reviews and meta-analyses, but also whether they would formally endorse PRISMA in their journals.
Acknowledgments
We would like to thank all respondents of the survey.
Footnotes
Contributors: Study design: WWST, AT, BW and SYSG. Data collection: WWST, AT and BW. Data analysis: WWST and BW. Manuscript drafting: WWST, AT, BW and SYSG.
Funding: The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.
Competing interests: None declared.
Ethics approval: National University of Singapore Institutional Review Board (Ref No. S-17-342E).
Provenance and peer review: Not commissioned; externally peer reviewed.
Data sharing statement: This was an anonymous online survey; the data comprise the responses from the participants and have been grouped and presented in the manuscript.
Patient consent for publication: Not required.
References
1. Liberati A, Altman DG, Tetzlaff J, et al. The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate healthcare interventions: explanation and elaboration. BMJ 2009;339:b2700. doi:10.1136/bmj.b2700
2. Hale C, Griffiths P. Ensuring the reporting quality of publications in nursing journals: a shared responsibility? Int J Nurs Stud 2015;52:1025–8. doi:10.1016/j.ijnurstu.2015.02.009
3. Schulz KF, Altman DG, Moher D, et al. CONSORT 2010 statement: updated guidelines for reporting parallel group randomised trials. BMJ 2010;340:c332. doi:10.1136/bmj.c332
4. von Elm E, Altman DG, Egger M, et al. Strengthening the reporting of observational studies in epidemiology (STROBE) statement: guidelines for reporting observational studies. BMJ 2007;335:806–8. doi:10.1136/bmj.39335.541782.AD
5. Moher D, Liberati A, Tetzlaff J, et al. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. BMJ 2009;339:b2535. doi:10.1136/bmj.b2535
6. Page MJ, Moher D. Evaluations of the uptake and impact of the Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) Statement and extensions: a scoping review. Syst Rev 2017;6:263. doi:10.1186/s13643-017-0663-8
7. Fleming PS, Seehra J, Polychronopoulou A, et al. Cochrane and non-Cochrane systematic reviews in leading orthodontic journals: a quality paradigm? Eur J Orthod 2013;35:244–8. doi:10.1093/ejo/cjs016
8. Panic N, Leoncini E, de Belvis G, et al. Evaluation of the endorsement of the preferred reporting items for systematic reviews and meta-analysis (PRISMA) statement on the quality of published systematic review and meta-analyses. PLoS One 2013;8:e83138. doi:10.1371/journal.pone.0083138
9. Tam WW, Lo KK, Khalechelvam P. Endorsement of PRISMA statement and quality of systematic reviews and meta-analyses published in nursing journals: a cross-sectional study. BMJ Open 2017;7:e013905. doi:10.1136/bmjopen-2016-013905
10. Tao KM, Li XQ, Zhou QH, et al. From QUOROM to PRISMA: a survey of high-impact medical journals' instructions to authors and a review of systematic reviews in anesthesia literature. PLoS One 2011;6:e27611. doi:10.1371/journal.pone.0027611
11. Tunis AS, McInnes MD, Hanna R, et al. Association of study quality with completeness of reporting: have completeness of reporting and quality of systematic reviews and meta-analyses in major radiology journals changed since publication of the PRISMA statement? Radiology 2013;269:413–26. doi:10.1148/radiol.13130273
12. Tam WWS, Lo KKH, Khalechelvam P, et al. Is the information of systematic reviews published in nursing journals up-to-date? A cross-sectional study. BMC Med Res Methodol 2017;17:151. doi:10.1186/s12874-017-0432-3
13. Scheaffer RL. Elementary survey sampling. 7th edn. Boston, MA: Brooks/Cole, Cengage Learning, 2012.
14. Schonlau M, Fricker RD, Elliott MN. Conducting research surveys via e-mail and the web. Santa Monica, CA: RAND, 2002.
15. eSurvey [computer program]. Singapore: Information Technology Centre, National University of Singapore, 2010.
16. IBM SPSS Statistics for Windows [computer program]. Version 22.0. Armonk, NY: IBM Corp, 2013.
17. NVivo qualitative data analysis software [computer program]. Version 11. Australia: QSR International Pty Ltd, 2015.
18. Cavanagh S. Content analysis: concepts, methods and applications. Nurse Res 1997;4:5–13. doi:10.7748/nr1997.04.4.3.5.c5869
19. Higgins JPT, Green S, Cochrane Collaboration. Cochrane handbook for systematic reviews of interventions. Chichester, England; Hoboken, NJ: Wiley-Blackwell, 2008.
20. Tung YJ, Lo KKH, Ho RCM, et al. Prevalence of depression among nursing students: a systematic review and meta-analysis. Nurse Educ Today 2018;63:119–29. doi:10.1016/j.nedt.2018.01.009
21. Leung K, Trevena L, Waters D. Systematic review of instruments for measuring nurses' knowledge, skills and attitudes for evidence-based practice. J Adv Nurs 2014;70:2181–95. doi:10.1111/jan.12454
22. Sideri S, Papageorgiou SN, Eliades T. Registration in the international prospective register of systematic reviews (PROSPERO) of systematic review protocols was associated with increased review quality. J Clin Epidemiol 2018;100:103–10. doi:10.1016/j.jclinepi.2018.01.003
23. BMJ. Is the BMJ the right journal for my research article? London, UK: British Medical Journal, 2018.
24. Cooper HM. The integrative research review: a systematic approach. Beverly Hills, CA: Sage Publications, 1984.
25. Kowalczyk N, Truluck C. Literature reviews and systematic reviews: what is the difference? Radiol Technol 2013;85:4.
26. Kallestinova ED. How to write your first research paper. Yale J Biol Med 2011;84:10.
27. Borja A. 11 steps to structuring a science paper editors will take seriously. How to Prepare a Manuscript for International Journals, Part 2. Elsevier Connect, 2014. https://www.elsevier.com/connect/11-steps-to-structuring-a-science-paper-editors-will-take-seriously
28. Docherty M, Smith R. The case for structuring the discussion of scientific papers. BMJ 1999;318:1224–5. doi:10.1136/bmj.318.7193.1224
29. Bhattacharya S. Authorship issue explained. Indian J Plast Surg 2010;43:233–4. doi:10.4103/0970-0358.73482
30. The National Advisory Council on Nurse Education and Practice (NACNEP). The impact of the nursing faculty shortage on nurse education and practice. Ninth annual report, 2010. https://www.hrsa.gov/advisorycommittees/bhpradvisory/nacnep/Reports/ninthreport.pdf
31. Berquist TH. Evidence-based medicine and key reporting guidelines: should AJR adopt these approaches? AJR Am J Roentgenol 2016;207:927–8. doi:10.2214/AJR.16.17189
32. Swanson JA, Schmitz D, Chung KC. How to practice evidence-based medicine. Plast Reconstr Surg 2010;126:1–94. doi:10.1097/PRS.0b013e3181dc54ee