Introduction
Simulation has seen growing use in health care as a ‘tool, device and/or environment (that) mimics an aspect of clinical care’1 in order to improve health care provider performance, health care processes and, ultimately, patient outcomes.1–5 The use of simulation in health care has been accompanied by an expanding body of simulation-based research (SBR) addressing educational and clinical issues.6–15 Broadly speaking, SBR can be broken down into two categories: (1) research addressing the efficacy of simulation as a training methodology (ie, simulation-based education as the subject of research); and (2) research using simulation as an investigative methodology (ie, simulation as the environment for research).16 17 Many features of SBR overlap with traditional clinical or educational research. However, the use of simulation in research introduces a unique set of features that must be considered when designing the methodology, and reported when publishing the study.16–19
As has been shown in other fields of medicine,20 the quality of reporting in health professions education research is inconsistent and sometimes poor.1 11 21–23 Systematic reviews in medical education have quantitatively documented missing elements in the abstracts and main texts of published reports, with particular deficits in the reporting of study design, definitions of independent and dependent variables, and study limitations.21–23 In research specific to simulation for health care professions education, a systematic review noted many studies failing to ‘clearly describe the context, instructional design or outcomes’.1 Another study found that only 3% of studies incorporating debriefing in simulation education reported all the essential characteristics of debriefing.11 Failure to adequately describe the key elements of a research study impairs the efforts of editors, reviewers and readers to critically appraise strengths and weaknesses24 25 or apply and replicate findings.26 As such, incomplete reporting represents a limiting factor in the advancement of the field of simulation in health care.
Recognition of this problem in clinical research has led to the development of a growing number of reporting guidelines in medicine and other fields, including the Consolidated Standards of Reporting Trials (CONSORT) statement for randomised trials,27–30 the STrengthening the Reporting of OBservational studies in Epidemiology (STROBE) statement for observational studies31 32 and the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement,33–35 among more than 250 others.36 Transparent reporting of research allows readers to clearly identify and understand ‘what was planned, what was done, what was found, and what conclusions were drawn’.31 In addition to these statements, experts have encouraged37 and published extensions to existing statements that focus on specific methodological approaches38 39 or clinical fields.40 41 In this study, we aimed to develop reporting guidelines for SBR by creating extensions to the CONSORT and STROBE statements specific to the use of simulation in health care research. These reporting guidelines are meant to be used by authors submitting manuscripts involving SBR, and to assist editors and journal reviewers when assessing the suitability of simulation-based studies for publication.
Methods
The study protocol was reviewed by the Yale University Biomedical Institutional Review Board and was granted exempt status. We conducted a multistep consensus process based on previously described steps for developing health research reporting guidelines.42 These steps involved (1) developing a steering committee; (2) defining the scope of the reporting guidelines; (3) identifying a consensus panel; (4) generating a list of items for discussion; (5) conducting a consensus meeting; and (6) drafting reporting guidelines and an explanation and elaboration document.
Development of the steering committee
A steering committee was formed consisting of 12 members with expertise in simulation-based education and research, medical education research, study design, statistics, epidemiology and clinical medicine. The steering committee defined the scope of the reporting guidelines, identified participants for the consensus process, generated a premeeting survey, planned and conducted the consensus meeting and, ultimately, drafted and refined the final version of the reporting guidelines and the explanation and elaboration document.
Defining the scope of the reporting guidelines
To clarify the scope of the reporting guideline extensions, we defined simulation as encompassing a diverse range of products, including computer-based virtual reality simulators, high-fidelity and static manikins, plastic models and task trainers, live animals, inert animal products, human cadavers and standardised or simulated patients (ie, individuals trained to portray a patient). Our definition excluded research using computational simulation and mathematical modelling, as the guidelines were developed for research using human participants, either as learners or health care providers.1 The steering committee decided to create reporting guidelines encompassing two categories of SBR: (1) studies evaluating simulation for educational use; and (2) studies using simulation as an investigative methodology.16 We identified the CONSORT28 and STROBE31 32 statements as reflecting the current reporting standards in health care research and aimed to develop extensions of these two statements for quantitative SBR. The CONSORT statement and extensions were developed for randomised trials, and the STROBE statement and extensions were developed for observational studies (cohort, case-control and cross-sectional designs). Our guideline extensions are not intended for qualitative research, mixed-methods research or validation studies.
Identification of consensus panel participants
The steering committee aimed to identify a consensus group with a broad range of expertise in SBR, including experience in conducting single-centre and multicentre simulation-based studies; expertise in educational research, statistics, clinical epidemiology and research methodology; and varying clinical backgrounds. We invited the editors-in-chief and editorial board members of three health care simulation journals (Simulation in Healthcare, BMJ Simulation & Technology Enhanced Learning and Clinical Simulation in Nursing) and editorial board members of two medical education journals (Medical Education and Advances in Health Sciences Education). In total, 60 expert participants were invited to complete the online survey.
Generating a list of items for discussion
Prior to the consensus meeting, we surveyed the expert participants via a premeeting survey (http://www.surveymonkey.com) to identify items in the CONSORT and STROBE statements that required an extension for SBR. The survey included all items from the CONSORT and STROBE statements and was pilot tested among steering committee members before being posted online. Participants were asked to provide suggested wording for the items they identified as requiring an extension. Participants were also given the option of suggesting new simulation-specific items for the CONSORT and STROBE statements. On the basis of methods previously used to develop extensions to the CONSORT statement,40 we used a cut-off of endorsement by at least one-third of respondents to identify high-priority items for discussion during the consensus meeting.
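In practice, this cut-off is a simple proportion over respondents. The following minimal sketch (hypothetical item names and responses; the paper does not describe the survey analysis at this level of detail) illustrates the tally in Python:

```python
from collections import Counter

# Hypothetical responses: each respondent names the checklist items
# they believe require a simulation-specific extension.
responses = [
    {"CONSORT-5", "CONSORT-6", "STROBE-7"},
    {"CONSORT-5", "STROBE-7", "STROBE-8"},
    {"CONSORT-5", "CONSORT-11"},
]

endorsements = Counter(item for r in responses for item in r)
n_respondents = len(responses)

# Items endorsed by at least one-third of respondents become
# high-priority items for discussion at the consensus meeting.
high_priority = sorted(item for item, count in endorsements.items()
                       if count / n_respondents >= 1 / 3)
print(high_priority)
```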
Consensus meeting
A 5 h consensus conference was conducted in January 2015 in New Orleans, USA, during the annual International Network for Simulation-based Pediatric Innovation, Research and Education (INSPIRE) meeting. The initial 60 consensus panel participants were invited to attend the consensus conference, as were INSPIRE network members (ie, clinicians, researchers, educators, psychologists, statisticians and epidemiologists). The INSPIRE network is the world's largest health care simulation research network, with a proven track record of conducting rigorous simulation-based studies in health care.43–50
The results of the online survey were circulated to each member of the steering committee, who were then assigned to review specific items from the CONSORT and STROBE statements based on their expertise. The consensus meeting started with a brief didactic presentation reviewing the CONSORT and STROBE statements, followed by a description of the study objectives and consensus process. In small groups, each steering committee member led a discussion with four or five individuals tasked with determining whether a simulation-specific extension was required for their assigned items and, if so, with recommending wording for the extension. Consensus panel participants were evenly distributed among the small groups and specifically assigned to review items based on their area of expertise. High-priority items were discussed at length, but all other checklist items were also discussed in the small groups.
Following small group discussion, the recommended simulation-specific extensions for the CONSORT and STROBE statements were presented to the entire group of participants. Each proposed extension was discussed before recommended wording was established. Minutes from the small and large group discussions were used to inform the development of the explanation and elaboration document.42
Drafting reporting guidelines
The proposed extensions were circulated for comment among all meeting participants and the consensus panel participants who could not attend the meeting. The steering committee used the comments to further refine the extension items. To evaluate these items in practice, four members of the steering committee independently pilot tested the CONSORT and STROBE statements with simulation-specific extensions on two published SBR studies (one for each category of SBR), ensuring that one was a randomised trial and the other an observational study. Feedback from pilot testing informed further revisions. The final reporting guidelines with extensions were circulated to the steering committee one last time to ensure that the final product accurately represented the discussion during and after the consensus conference. An explanation and elaboration document was developed by the steering committee to provide further detail for each item requiring a simulation-specific extension.42
Results
Premeeting survey
Forty-five of the 60 invited participants (75%) completed the entire survey, and an additional 12 (20%) partially completed it. Of the 57 participants who responded to the survey, 17 were medical journal editors or editorial board members; 24 had advanced degrees (master's degree, PhD), of whom 16 held advanced degrees in medical education or educational psychology; 6 were nurses, 1 was a psychologist and 54 were physicians (representing anaesthesiology, critical care, emergency medicine, paediatrics and surgery). Of the three invited participants who did not respond to the survey, two were physicians and one was a scientist. The results of the survey are described in the online supplementary Digital Content (see supplementary table, Digital Content 1, Survey Responses).
Consensus meeting
In total, 35 consensus panel participants who completed the premeeting survey attended the consensus conference. An additional 30 attendees were INSPIRE network members. Of the 65 total attendees at the consensus conference, 12 were medical journal editors or editorial board members, 18 had advanced degrees (master's degree, PhD), 4 were nurses, 1 was a psychologist and 60 were physicians (representing anaesthesiology, critical care, emergency medicine, paediatrics and surgery).
Eleven simulation-specific extensions were recommended for the CONSORT statement: item 1 (title and abstract), item 2 (background), item 5 (interventions), item 6 (outcomes), item 11 (blinding), item 12 (statistical methods), item 15 (baseline data), item 17 (outcomes and estimation), item 20 (limitations), item 21 (generalisability) and item 25 (funding). Participants agreed on the importance of describing the rationale for and design of the simulation-based intervention. As many simulation-based studies use assessment tools as an outcome measure, participants thought it was important to report the unit of analysis and the evidence supporting the validity and reliability of the assessment tool(s), when available. In the Discussion section, participants thought it was important to describe the limitations of SBR and the generalisability of simulation-based outcomes to clinical outcomes (when applicable). Participants also agreed it was important to identify the simulator brand used in the study and whether any conflict of interest related to intellectual property existed among the investigators. The group did not feel that modifications to the CONSORT flow diagram were required for SBR. See table 1 for the CONSORT extensions for SBR.
Table 1.

Item | Item no | CONSORT description (randomised controlled trials) | Extension for simulation-based research |
---|---|---|---|
Title and abstract | 1a, 1b | 1a: Identification as a randomised trial in the title. 1b: Structured summary of trial design, methods, results and conclusions. | In abstract or key terms, the MeSH or searchable keyword term must have the word ‘simulation’ or ‘simulated’. |
Introduction | | | |
Background | 2a, 2b | 2a: Scientific background and explanation of rationale. 2b: Specific objectives or hypotheses. | Clarify whether simulation is the subject of research or the investigational method for research. |
Methods | | | |
Trial design | 3a, 3b | 3a: Description of trial design (such as parallel, factorial) including allocation ratio. 3b: Important changes to methods after trial commencement (such as eligibility criteria), with reasons. | |
Participants | 4a, 4b | 4a: Eligibility criteria for participants. 4b: Settings and locations where the data were collected. | |
Interventions | 5 | The interventions for each group with sufficient details to allow for replication, including how and when they were actually administered. | Describe the theoretical and/or conceptual rationale for the design of each intervention. Clearly describe all simulation-specific exposures, potential confounders and effect modifiers. |
Outcomes | 6a, 6b | 6a: Completely defined prespecified primary and secondary outcome measures, including how and when they were assessed. 6b: Any changes to trial outcomes after the trial started, with reasons. | In describing the details of methods of assessment, include (when applicable) the setting, instrument, simulator type and timing in relation to the intervention, along with any methods used to enhance the quality of measurements. Provide evidence to support the validity and reliability of assessment tools in this context (if available). |
Sample size/study size | 7a, 7b | 7a: How sample size was determined. 7b: When applicable, explanation of any interim analyses and stopping guidelines. | |
Randomisation: sequence generation | 8a, 8b | 8a: Method used to generate the random allocation sequence. 8b: Type of randomisation; details of any restriction (such as blocking and block size). | |
Randomisation: allocation concealment mechanism | 9 | Mechanism used to implement the random allocation sequence (such as sequentially numbered containers), describing any steps taken to conceal the sequence until interventions were assigned. | |
Randomisation: implementation | 10 | Who generated the random allocation sequence, who enrolled participants and who assigned participants to interventions. | |
Blinding (masking) | 11a, 11b | 11a: If done, who was blinded after assignment to interventions (eg, participants, care providers, those assessing outcomes) and how. 11b: If relevant, description of the similarity of interventions. | Describe strategies to decrease risk of bias when blinding is not possible. |
Statistical methods | 12a, 12b | 12a: Statistical methods used to compare groups for primary and secondary outcomes. 12b: Methods for additional analyses, such as subgroup analyses and adjusted analyses. | Clearly indicate the unit of analysis (eg, individual, team, system), identify repeated measures on subjects and describe how these issues were addressed. |
Results | | | |
Participant flow (a diagram is strongly recommended) | 13a, 13b | 13a: For each group, the numbers of participants who were randomly assigned, received intended treatment and were analysed for the primary outcome. 13b: For each group, losses and exclusions after randomisation, together with reasons. | |
Recruitment | 14a, 14b | 14a: Dates defining the periods of recruitment and follow-up. 14b: Why the trial ended or was stopped. | |
Baseline data | 15 | A table showing baseline demographic and clinical characteristics of each group. | In describing characteristics of study participants, include their prior experience with simulation and other relevant features as related to the intervention(s). |
Numbers analysed | 16 | For each group, number of participants (denominator) included in each analysis and whether analysis was by original assigned groups. | |
Outcomes and estimation | 17a, 17b | 17a: For each primary and secondary outcome, results for each group, and the estimated effect size and its precision (such as 95% CI). 17b: For binary outcomes, presentation of absolute and relative effect sizes is recommended. | For assessments involving more than one rater, inter-rater reliability should be reported. |
Ancillary analyses | 18 | Results of any other analyses performed, including subgroup analyses and adjusted analyses, distinguishing prespecified from exploratory. | |
Adverse events | 19 | All important harms or unintended effects in each group (for specific guidance, see CONSORT for harms). | |
Discussion | | | |
Limitations | 20 | Trial limitations, addressing sources of potential bias, imprecision and, if relevant, multiplicity of analyses. | Specifically discuss the limitations of simulation-based research. |
Generalisability | 21 | Generalisability (external validity) of the trial findings. | Describe the generalisability of simulation-based outcomes to patient-based outcomes (if applicable). |
Interpretation | 22 | Interpretation consistent with results, balancing benefits and harms and considering other relevant evidence. | |
Other information | | | |
Registration | 23 | Registration number and name of trial registry. | |
Protocol | 24 | Where the full trial protocol can be accessed, if available. | |
Funding | 25 | Sources of funding and other support (such as supply of drugs), role of funders. | List the simulator brand and whether any conflict of interest for intellectual property exists. |

CONSORT, Consolidated Standards of Reporting Trials; MeSH, medical subject heading.
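The unit-of-analysis extension for item 12 is commonly handled with models that account for repeated measures on the same subjects. As a minimal sketch only, assuming hypothetical long-format data and using a random intercept per participant via Python's statsmodels (the guidelines do not prescribe any particular analysis):

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: two performance scores per
# participant (one per scenario) in a two-arm simulation trial.
df = pd.DataFrame({
    "participant": ["p1", "p1", "p2", "p2", "p3", "p3",
                    "p4", "p4", "p5", "p5", "p6", "p6"],
    "group": ["sim"] * 6 + ["ctl"] * 6,
    "scenario": [1, 2] * 6,
    "score": [78, 84, 71, 80, 75, 82, 65, 69, 62, 70, 68, 71],
})

# A random intercept per participant keeps the individual as the
# stated unit of analysis while accounting for repeated measures.
model = smf.mixedlm("score ~ group + scenario", df, groups=df["participant"])
print(model.fit().summary())
```

Whatever model is chosen, the point of the extension is that both the unit of analysis and the handling of repeated measures are stated explicitly in the report.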
Ten extensions were drafted for the STROBE statement: item 1 (title and abstract), item 2 (background/rationale), item 7 (variables), item 8 (data sources/measurement), item 12 (statistical methods), item 14 (descriptive data), item 16 (main results), item 19 (limitations), item 21 (generalisability) and item 22 (funding). As with the CONSORT statement, emphasis was placed on the importance of describing all simulation-specific exposures, confounders and effect modifiers. The other STROBE extensions fell under categories similar to those proposed for the CONSORT statement. See table 2 for the STROBE extensions for SBR.
Table 2.

Item | Item no | STROBE description (observational studies) | Extension for simulation-based research |
---|---|---|---|
Title and abstract | 1a, 1b | 1a: Indicate the study's design with a commonly used term in the title or the abstract. 1b: Provide in the abstract an informative and balanced summary of what was done and what was found. | In abstract or key terms, the MeSH or searchable keyword term must have the word ‘simulation’ or ‘simulated’. |
Introduction | | | |
Background/rationale | 2 | Explain the scientific background and rationale for the investigation being reported. | Clarify whether simulation is the subject of research or the investigational method for research. |
Objectives | 3 | State specific objectives, including any prespecified hypotheses. | |
Methods | | | |
Study design | 4 | Present key elements of study design early in the paper. | |
Setting | 5 | Describe the setting, locations and relevant dates, including periods of recruitment, exposure, follow-up and data collection. | |
Participants | 6a, 6b, 6c | 6a: Cohort study: give the eligibility criteria, and the sources and methods of selection of participants; describe methods of follow-up. 6b: Case-control study: give the eligibility criteria, and the sources and methods of case ascertainment and control selection; give the rationale for the choice of cases and controls. 6c: Cross-sectional study: give the eligibility criteria, and the sources and methods of selection of participants. | |
Variables | 7 | Clearly define all outcomes, exposures, predictors, potential confounders and effect modifiers. Give diagnostic criteria, if applicable. | Describe the theoretical and/or conceptual rationale for the design of the intervention/exposure. Describe the intervention/exposure with sufficient detail to permit replication. Clearly describe all simulation-specific exposures, potential confounders and effect modifiers. |
Data sources/measurement | 8 | For each variable of interest, give sources of data and details of methods of assessment (measurement). Describe comparability of assessment methods if there is more than one group. | In describing the details of methods of assessment, include (when applicable) the setting, instrument, simulator type and timing in relation to the intervention, along with any methods used to enhance the quality of measurements. Provide evidence to support the validity and reliability of assessment tools in this context (if available). |
Bias | 9 | Describe any efforts to address potential sources of bias. | |
Study size | 10 | Explain how the study size was arrived at. | |
Quantitative variables | 11 | Explain how quantitative variables were handled in the analyses. If applicable, describe which groupings were chosen, and why. | |
Statistical methods | 12a, 12b, 12c, 12d, 12e | 12a: Describe all statistical methods, including those used to control for confounding. 12b: Describe any methods used to examine subgroups and interactions. 12c: Explain how missing data were addressed. 12d: Cohort study: if applicable, explain how loss to follow-up was addressed; case-control study: if applicable, explain how matching of cases and controls was addressed; cross-sectional study: if applicable, describe analytical methods taking account of sampling strategy. 12e: Describe any sensitivity analyses. | Clearly indicate the unit of analysis (eg, individual, team, system), identify repeated measures on subjects and describe how these issues were addressed. |
Results | | | |
Participants | 13a, 13b, 13c | 13a: Report numbers of individuals at each stage of the study (eg, numbers potentially eligible, examined for eligibility, confirmed eligible, included in the study, completing follow-up and analysed). 13b: Give reasons for non-participation at each stage. 13c: Consider use of a flow diagram. | |
Descriptive data | 14a, 14b, 14c | 14a: Give characteristics of study participants (eg, demographic, clinical, social) and information on exposures and potential confounders. 14b: Indicate number of participants with missing data for each variable of interest. 14c: Cohort study: summarise follow-up time (eg, average and total amount). | In describing characteristics of study participants, include their prior experience with simulation and other relevant features as related to the intervention(s). |
Outcome data | 15 | Cohort study: report numbers of outcome events or summary measures over time. Case-control study: report numbers in each exposure category or summary measures of exposure. Cross-sectional study: report numbers of outcome events or summary measures. | |
Main results | 16a, 16b, 16c | 16a: Give unadjusted estimates and, if applicable, confounder-adjusted estimates and their precision (eg, 95% CI); make clear which confounders were adjusted for and why they were included. 16b: Report category boundaries when continuous variables were categorised. 16c: If relevant, consider translating estimates of relative risk into absolute risk for a meaningful time period. | For assessments involving more than one rater, inter-rater reliability should be reported. |
Other analyses | 17 | Report other analyses done—eg, analyses of subgroups and interactions and sensitivity analyses. | |
Discussion | | | |
Key results | 18 | Summarise key results with reference to study objectives. | |
Limitations | 19 | Discuss limitations of the study, taking into account sources of potential bias or imprecision. Discuss direction and magnitude of any potential bias. | Specifically discuss the limitations of simulation-based research. |
Interpretation | 20 | Give a cautious overall interpretation of results considering objectives, limitations, multiplicity of analyses, results from similar studies and other relevant evidence. | |
Generalisability | 21 | Discuss the generalisability (external validity) of the study results. | Describe the generalisability of simulation-based outcomes to patient-based outcomes (if applicable). |
Other information | | | |
Funding | 22 | Give the source of funding and the role of the funders for the present study and, if applicable, for the original study on which the present article is based. | List the simulator brand and whether any conflict of interest for intellectual property exists. |

MeSH, medical subject heading; STROBE, STrengthening the Reporting of OBservational studies in Epidemiology.
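For the inter-rater reliability extensions (CONSORT item 17; STROBE item 16), one widely used statistic for categorical ratings is Cohen's kappa. A minimal sketch with hypothetical ratings follows (the guidelines ask only that inter-rater reliability be reported, not that any particular coefficient be used):

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical data: two raters scoring the same ten simulated
# performances on a dichotomous (pass = 1 / fail = 0) checklist item.
rater_a = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
rater_b = [1, 0, 0, 1, 0, 1, 1, 1, 1, 1]

# Cohen's kappa corrects the observed agreement for agreement
# expected by chance; values near 1 indicate strong reliability.
print(f"Cohen's kappa = {cohen_kappa_score(rater_a, rater_b):.2f}")
```

For continuous scores, an intraclass correlation coefficient would be the analogous choice.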
For the CONSORT and STROBE statements, extensive discussion occurred during the consensus meeting about the educational intervention and about controlling for simulation-specific variables that pose potential threats to the internal validity of simulation studies. A group of consensus panel participants with expertise in simulation-based education and instructional design used their knowledge of educational theory, existing educational research guidelines51 and systematic reviews of SBR1 5–8 11 to address this issue (table 3). Table 3 offers an additional checklist of key elements specific to SBR, linked to item 5 (interventions) of the CONSORT statement and item 7 (variables) of the STROBE statement, that should be reported for all simulation studies, for both the intervention and control groups (if applicable).
Table 3.

Elements* | Subelements† | Descriptor |
---|---|---|
Participant orientation | Orientation to the simulator | Describe how participants were oriented to the simulator (eg, method, content, duration). |
 | Orientation to the environment | Describe how participants were oriented to the environment (eg, method, content, duration). |
Simulator type16 | Simulator make and model | Describe the simulator make and model. |
 | Simulator functionality | Describe functionality and/or technical specifications that are relevant to the research question. Describe modifications, if any. Describe limitations of the simulator. |
Simulation environment16 | Location | Describe where the simulation was conducted (eg, in situ clinical environment, simulation centre, etc). |
 | Equipment | Describe the nature of the equipment available (eg, type, amount, location, size, etc). |
 | External stimuli | Describe any external stimuli (eg, background noise). |
Simulation event/scenario16 | Event description | Describe if the event was programmed and/or scripted (eg, orientation to event, scenario progression, triggers). If a scenario was used, the scenario script should be provided as an appendix. |
 | Learning objectives | List the learning objectives and describe how they were incorporated into the event. |
 | Group vs individual practice | Describe if the simulation was conducted in groups or as individuals. |
 | Use of adjuncts | Describe if adjuncts (eg, moulage, media, props) were used. |
 | Facilitator/operator characteristics | Describe experience (eg, clinical, educational), training (eg, fellowship, courses) and profession. |
 | Pilot testing | Describe if pilot testing was conducted (eg, number, duration, frequency). |
Actors/confederates/standardised/simulated patients16 | | Describe experience (eg, clinical, educational), training (eg, fellowship, courses), profession and gender. Describe various roles, including training, scripting, orientation and compliance with roles. |
Instructional design (for educational interventions)19 or exposure (for simulation as investigative methodology)16 | Duration | Describe the duration of the educational intervention. If the intervention involves more than one segment, describe the duration of each segment. |
 | Timing | Describe the timing of the educational intervention relative to the time when assessment/data collection occurs (eg, just-in-time training). |
 | Frequency/repetitions | Describe how many repetitions were permitted and/or the frequency of training (eg, deliberate practice). |
 | Clinical variation | Describe the variation in clinical context (eg, multiple different patient scenarios). |
 | Standards/assessment | Describe predefined standards for participant performance (eg, mastery learning) and how these standards were established. |
 | Adaptability of intervention | Describe how the training was responsive to individual learner needs (eg, individualised learning). |
 | Range of difficulty | Describe the variation in difficulty or complexity of the task. |
 | Non-simulation interventions and adjuncts | Describe all other non-simulation interventions (eg, lecture, small group discussion) or educational adjuncts (eg, educational video), how they were used, and when they were used relative to the simulation intervention. |
 | Integration | Describe how the intervention was integrated into the curriculum. |
Feedback and/or debriefing11 | Source | Describe the source of feedback (eg, computer, simulator, facilitator). |
 | Duration | Describe the amount of time spent. |
 | Facilitator presence | Describe if a facilitator was present (yes/no) and, if so, how many facilitators. |
 | Facilitator characteristics | Describe experience (eg, clinical, educational), training (eg, fellowship, courses), profession and gender. |
 | Content | Describe content (eg, teamwork, clinical, technical skills and/or inclusion of quantitative data). |
 | Structure/method | Describe the method of debriefing/feedback and the debriefing framework used (ie, phases). |
 | Timing | Describe when the feedback and/or debriefing was conducted relative to the simulation event (eg, terminal vs concurrent). |
 | Video | Describe if video was used (yes/no) and how it was used. |
 | Scripting | Describe if a script was used (yes/no) and provide script details as an appendix. |
*These elements may apply to the simulation intervention (eg, randomised controlled trial (RCT) or observational study with simulation as an educational intervention) or when simulation is the environment for research (eg, RCT or observational study using simulation as an investigative methodology). Elements should be described in sufficient detail to permit replication.
†Description required only if applicable.
In modelling the explanation and elaboration document after similar documents published in conjunction with other reporting guidelines,28 32 we provide a specific example for each item requiring a new extension, coupled with the background and rationale for including that information. We encourage readers to refer to the explanation and elaboration document for further detail about the nature and type of recommended reporting for each new extension (see text, online supplementary Digital Content 2, Explanation and Elaboration of the Simulation-Specific Extensions for the CONSORT and STROBE Statements).
Discussion
We have developed reporting guidelines for SBR by creating extensions to the CONSORT28 and STROBE31 statements. These new extensions were developed via a consensus-building process with multiple iterative steps involving an international group of experts with diverse backgrounds and expertise. By creating extensions to the CONSORT and STROBE statements that can be applied to studies in both categories of SBR, we have developed reporting guidelines that are applicable to the majority of studies involving simulation in health care research. To further assist authors in reporting SBR studies, we have published an explanation and elaboration document as an appendix that provides specific examples and details for each new simulation-specific extension to the CONSORT and STROBE statements.
The CONSORT and STROBE statements with accompanying SBR extensions are meant to serve as a guide to reporting. As with the original CONSORT and STROBE statements, the items are not meant to ‘prescribe the reporting… in a rigid format’; rather, the ‘order and format for presenting information depends on author preferences, journal style, and the traditions of the research field’.28 31 We encourage authors to refer to the explanation and elaboration document, which details the specific elements related to individual items that should be reported for SBR. The use of reporting guidelines can benefit various health care simulation stakeholders, including funders of SBR and those applying for funding (ie, use as a template for grant applications), educators (ie, use as a training tool) and students (ie, use to develop protocols for coursework or research).33 The application of these reporting guidelines will help to enhance the quality of reporting of quantitative SBR and assist journal reviewers and editors when assessing the strengths and weaknesses of simulation-based studies in health care.24 52 53 We encourage journals publishing SBR to consider endorsing the simulation-specific extensions for the CONSORT and STROBE statements and adding these to their ‘Instructions for Authors’.
SBR has several unique features that prompted us to develop simulation-specific extensions for the CONSORT and STROBE statements. First, a wide variety of simulators and simulation modalities are available for use in research.16 This, coupled with the plethora of instructional design features in simulation-based educational research, makes describing the simulation intervention a critically important component of any educational study involving simulation (table 3).6 8 19 Second, SBR provides the opportunity for the investigator to standardise the simulated environment and/or simulated patient condition. Such standardisation allows the investigator to account for many of the potential threats to internal validity associated with simulation, and clear reporting of standardisation strategies helps the reader understand how the independent variable was isolated (table 3).16 Third, many simulation studies capture outcomes from a variety of data sources (eg, observation, video review, simulator data capture). When assessment instruments are used (eg, expert raters assessing performance), it is imperative to discuss the psychometric properties of these instruments.5 Existing guidelines fall short in this regard, and these new guidelines help to address the issue. Finally, simulation-based studies assessing outcomes only in the simulated environment (eg, clinical performance) should attempt to provide evidence that findings in the simulated environment are a valid representation of performance in the real clinical environment.3 By doing so, authors help to convey the relevance and importance of their findings.
Limitations
Our consensus process has several limitations. Although 75% of invited participants completed the entire survey, a further 20% only partially completed it, which may have introduced selection bias; however, the survey represented only one step in our consensus-building process. We included a wide variety of experts in our consensus meeting, but many of them had a paediatric clinical background. We minimised this potential bias by ensuring that each breakout group had at least one expert participant with a background outside paediatrics. Furthermore, the principles of SBR are common across specialties and professions, and INSPIRE network members are internationally recognised leaders in SBR. We based our reporting guidelines on the CONSORT and STROBE guidelines developed by clinical researchers. Other guidelines could have been used as a starting point, such as the American Educational Research Association (AERA) standards published in 2006.54 Our logic was to start with reporting guidelines applicable to all types of research, giving us more flexibility in generating extensions for both types of SBR; cross-checking against the AERA standards did not reveal areas we might have missed. While we aimed to develop reporting guidelines for all types of SBR, we recognise that specific types of research may require new items or different extensions. For example, studies designed to evaluate the validity of simulation-based assessments have different reporting requirements; the STAndards for Reporting of Diagnostic accuracy (STARD) statement55 addresses these points, and a recent review operationalised these standards and applied them to SBR.56 Other reporting guidelines that might be amenable to simulation-specific extensions include the COnsolidated criteria for REporting Qualitative research (COREQ)57 and the Standards for QUality Improvement Reporting Excellence (SQUIRE)58 guidelines for reporting quality improvement studies. As the field of SBR grows, the simulation-specific extensions for the CONSORT and STROBE statements may need to be revised or refined. We encourage authors, reviewers and editors to visit our website (http://inspiresim.com/simreporting/) and provide feedback that will inform subsequent revisions to these reporting guidelines.
Conclusions
The unique features of SBR highlight the importance of clear and concise reporting that helps readers understand how simulation was used in the research. Poor and inconsistent reporting makes it difficult for readers to interpret results and replicate interventions, and thus makes it less likely that research will inform change that positively influences patient outcomes. Standardised reporting guidelines will serve as a guide for authors submitting manuscripts for publication and, in doing so, will draw attention to the important elements of SBR and ultimately improve the quality of future simulation studies.
Acknowledgments
The principal investigator AC had full access to all the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis. AC, DK, RM, TPC, VMN, EAH, JD-A, DAC, MP, JH and MA participated in study design, the consensus building process, and drafting and revising the manuscript and approving the final version of the manuscript for publication. YL, DM and ME contributed to interpretation of data, critically revising the manuscript for intellectual content and approving the final version of the manuscript. All authors agree to be accountable for all aspects of the work in ensuring that questions related to the accuracy of the work are appropriately resolved.
Footnotes
Collaborators: INSPIRE Network Reporting Guideline Working Group: Dylan Bould, MBChB, MRCP, FRCA, MEd, University of Ottawa, dylanbould@gmail.com; Ryan Brydges, PhD, University of Toronto, r.brydges@gmail.com; Michael Devita, MD, FCCM, FACP, Harlem Hospital Center, mdevita06@gmail.com; Jonathan Duff, MD, MEd, University of Alberta, jon.duff@albertahealthservices.ca; Sandeep Gangadharan, MD, Hofstra University School of Medicine, gangadsa@gmail.com; Sharon Griswold-Theodorson, MD, MPH, Drexel University College of Medicine, Sharon.griswold-theodorson@drexelmed.edu; Pam Jeffries, PhD, RN, FAAN, ANEF, George Washington University, pjeffries@gwu.edu; Lindsay Johnston, MD, Yale University School of Medicine, Johnston.lindsayc@gmail.com; Suzan Kardong-Edgren, PhD, RN, ANEF, CHSE, Robert Morris University, kardongedgren@rmu.edu; Arielle Levy, MD, MEd, University of Montreal, arielle.levy007@gmail.com; Lori Lioce, DNP, FNP-BC, CHSE, FAANP, The University of Alabama in Huntsville, lioceb@uah.edu; Marco Luchetti, MD, MSc, A. Manzoni General Hospital, m.luchetti@fastwebnet.it; Tensing Maa, MD, Ohio State University College of Medicine, tensing.maa@nationwidechildrens.org; William McGaghie, PhD, Northwestern University Feinberg School of Medicine, wmcgaghie@luc.edu; Taylor Sawyer, DO, MEd, University of Washington School of Medicine, tlsawyer@uw.edu; Dimitrios Stefanidis, MD, PhD, FACS, Carolinas HealthCare System, dimitrios.stefanidis@carolinashelathcare.org; Kathleen Ventre, MD, Children's Hospital Colorado, Kathleen.ventre@ucdenver.edu; Barbara Walsh, MD, University of Massachusetts School of Medicine, bwalshmd1@gmail.com; Mark Adler, MD, Feinberg School of Medicine, Northwestern University, m-adler@northwestern.edu; Linda Brown, MD, MSCE, Alpert Medical School of Brown University, lbrown8@lifespan.org; Aaron Calhoun, MD, University of Louisville, aaron.calhoun@louisville.edu; Aaron Donoghue, MD, MSCE, The Children's Hospital of Philadelphia, Donoghue@email.chop.edu; Tim Draycott, MD, FRCOG, Southmead Hospital, tdraycott@me.com; Walter Eppich, MD, MEd, Feinberg School of Medicine, Northwestern University, weppich@gmail.com; Marcie Gawel, MSN, BSN, MS, Yale University, Marcie.gawel@yale.edu; Stefan Gisin, MD, University Hospital Basel, Stefan.gisin@usb.ch; Lou Halamek, MD, Stanford University, halamek@stanford.edu; Rose Hatala, MD, MSc, University of British Columbia, rhatala@mac.com; Kim Leighton, PhD, RN, ANEF, DeVry Medical International's Institute for Research and Clinical Strategy, kleighton@devrygroup.com; Debra Nestel, PhD, Monash University, debra.nestel@monash.edu; Mary Patterson, MD, MEd, Cincinnati Children's Hospital, marydpatterson84@gmail.com; Jennifer Reid, MD, University of Washington School of Medicine, Jennifer.reid@seattlechildrens.org; Elizabeth Sinz, MD, FCCM, Penn State University College of Medicine, esinz@psu.edu; G. Ulufer Sivrikaya, MD, Antalya Training and Research Hospital, ulufers@gmail.com; Kimberly Stone, MD, MS, MA, University of Washington School of Medicine, Kimberly.stone@seattlechildrens.org; Anne Marie Monachino, MSN, RN, CPN, Children's Hospital of Philadelphia, monachino@email.chop.edu; Michaela Kolbe, PhD, University Hospital Zurich, mkolbe@ethz.ch; Vincent Grant, MD, FRCPC, University of Calgary, Vincent.grant@albertahealthservices.ca; Jack Boulet, PhD, Foundation for Advancement of International Medical Education and Research, jboulet@faimer.org; David Gaba, MD, Stanford University School of Medicine, gaba@stanford.edu; Peter Dieckmann, PhD, Dipl-Psych, Danish Institute for Medical Simulation, mail@peter-dieckmann.de; Jeffrey Groom, PhD, CRNA, Florida International University, groomj@flu.edu; Chris Kennedy, MD, University of Missouri Kansas City School of Medicine, ckennedy@cmh.edu; Ralf Krage, MD, DEAA, VU University Medical Center, r.krage@vumc.nl; Leah Mallory, MD, The Barbara Bush Children's Hospital at Maine Medical Center, mallol@mmc.org; Akira Nishisaki, MD, MSCE, The Children's Hospital of Philadelphia, nishisaki@email.chop.edu; Denis Oriot, MD, PhD, University Hospital of Poitiers, denis.oriot@gmail.com; Christine Park, MD, Feinberg School of Medicine, Northwestern University, csparkmd@gmail.com; Marcus Rall, MD, InPASS Institute for Patient Safety and Teamtraining, marcus.rall@inpass.de; Nick Sevdalis, PhD, King's College London, n.sevdalis@kcl.ac.uk; Nancy Tofil, MD, MEd, University of Alabama at Birmingham, ntofil@peds.uab.edu; Debra Weiner, MD, PhD, Boston Children's Hospital, debra.weiner@childrens.harvard.edu; John Zhong, MD, University of Texas Southwestern Medical Center, john.zhong@childrens.com; Donna Moro-Sutherland, MD, Baylor College of Medicine, donnamsutherland@gmail.com; Dalit Eyal, DO, St. Christopher's Hospital for Children, dalit.eyal@drexelmed.edu; Sujatha Thyagarajan, DCH, FRCPCH, PediSTARS India, sujadoc@gmail.com; Barbara Ferdman, MD, University of Rochester Medical Center, brferdman@gmail.com; Grace Arteaga, MD, FAAP, Mayo Clinic (Rochester), arteaga.grace@mayo.edu; Tonya Thompson, MD, MA, The University of Arkansas for Medical Sciences, thompsontonyam@uams.edu; Kim Rutherford, MD, St. Christopher's Hospital for Children, kim.rutherford@drexelmed.edu; Frank Overly, MD, Alpert Medical School of Brown University, foverly@lifespan.org; Jim Gerard, MD, Saint Louis University School of Medicine, gerardjm@slu.edu; Takanari Ikeyama, MD, Aichi Children's Health and Medical Center, taqnary@gmail.com; Angela Wratney, MD, MHSc, Children's National Medical Center, awratney@cnmc.org; Travis Whitfill, MPH, Yale University School of Medicine, travis.whitfill@yale.edu; Nnenna Chime, MD, MPH, Albert Einstein College of Medicine, nnenna_chime@yahoo.com; John Rice, PhD(c), US Department of the Navy (retired), john.rice@noboxes.org; Tobias Everett, MBChB, FRCA, The Hospital for Sick Children, tobias.everett@sickkids.ca; Wendy Van Ittersum, MD, Akron Children's Hospital, wenvan@gmail.com; Daniel Scherzer, MD, Nationwide Children's Hospital, Daniel.scherzer@nationwidechildrens.org; Elsa Vazquez Melendez, MD, FAAP, FACP, University of Illinois College of Medicine at Peoria, eluciav@uic.edu; Chris Kennedy, MD, University of Missouri Kansas City School of Medicine, ckennedy@cmh.edu; Waseem Ostwani, MD, University of Michigan Health System, waseem@med.umich.edu; Zia Bismilla, MD, MEd, The Hospital for Sick Children, zia.bismilla@sickkids.ca; Pavan Zaveri, MD, MEd, Children's National Health System, pzaveri@childrensnational.org; Anthony Scalzo, MD, FACMT, FAAP, FAACT, Saint Louis University School of Medicine, scalzoaj@slu.edu; Daniel Lemke, MD, Baylor College of Medicine, dslemke@texaschildrens.org; Cara Doughty, MD, MEd, Baylor College of Medicine, cbdought@texaschildrens.org; Modupe Awonuga, MD, MPH, MRCP(UK), FRCPCH, FAAP, Michigan State University, awonuga@msu.edu; Karambir Singh, MD, Johns Hopkins University School of Medicine, ksingh14@jhu.edu; Melinda Fiedor-Hamilton, MD, MSc, Children's Hospital of Pittsburgh, fiedml@ccm.upmc.edu
Competing interests: The authors would like to acknowledge the support of the Laerdal Foundation for Acute Medicine, who have previously provided infrastructure support for the INSPIRE network, and the Society for Simulation in Healthcare, who provided funding to cover the expenses of the consensus meeting. AC (study design, writing, editing and review of manuscript) is supported by KidSIM-ASPIRE Simulation Infrastructure Grant, Alberta Children's Hospital Foundation, Alberta Children's Hospital Research Institute and the Department of Pediatrics, University of Calgary; VMN (study design, writing, editing and review of manuscript) is supported by Endowed Chair, Critical Care Medicine, Children's Hospital of Philadelphia and the following research grants: AHRQ RO3HS021583; Nihon Kohden America Research Grant; NIH/NHLBI RO1HL114484; NIH U01 HL107681; NIH/NHLBI 1U01HL094345-01; and NIH/NINDS 5R01HL058669-10. DM (data interpretation, writing, editing and review of manuscript) is funded by a University Research Chair. Nick Sevdalis (collaborator) is funded by the National Institute for Health Research (NIHR) Collaboration for Leadership in Applied Health Research and Care South London at King's College Hospital NHS Foundation Trust. The views expressed are those of the author(s) and not necessarily those of the NHS, the NIHR or the Department of Health. Sevdalis delivers safety and team skills training on a consultancy basis to hospitals in the UK and internationally via the London Safety & Training Solutions Ltd. The other authors have no other financial or conflict of interest disclosures.
Provenance and peer review: Not commissioned; internally peer reviewed.
Contributor Information
for the International Network for Simulation-based Pediatric Innovation, Research and Education (INSPIRE) Reporting Guidelines Investigators:
Dylan Bould, Ryan Brydges, Michael Devita, Jonathan Duff, Sandeep Gangadharan, Sharon Griswold-Theodorson, Pam Jeffries, Lindsay Johnston, Suzan Kardong-Edgren, Arielle Levy, Lori Lioce, Marco Luchetti, Tensing Maa, William McGaghie, Taylor Sawyer, Dimitrios Stefanidis, Kathleen Ventre, Barbara Walsh, Mark Adler, Linda Brown, Aaron Calhoun, Aaron Donoghue, Tim Draycott, Walter Eppich, Marcie Gawel, Stefan Gisin, Lou Halamek, Rose Hatala, Kim Leighton, Debra Nestel, Mary Patterson, Jennifer Reid, Elizabeth Sinz, G. Ulufer Sivrikaya, Kimberly Stone, Anne Marie Monachino, Michaela Kolbe, Vincent Grant, Jack Boulet, David Gaba, Peter Dieckmann, Jeffrey Groom, Chris Kennedy, Ralf Krage, Leah Mallory, Akira Nishisaki, Denis Oriot, Christine Park, Marcus Rall, Nick Sevdalis, Nancy Tofil, Debra Weiner, John Zhong, Donna Moro-Sutherland, Dalit Eyal, Sujatha Thyagarajan, Barbara Ferdman, Grace Arteaga, Tonya Thompson, Kim Rutherford, Frank Overly, Jim Gerard, Takanari Ikeyama, Angela Wratney, Travis Whitfill, Nnenna Chime, John Rice, Tobias Everett, Wendy Van Ittersum, Daniel Scherzer, Elsa Vazquez Melendez, Chris Kennedy, Waseem Ostwani, Zia Bismilla, Pavan Zaveri, Anthony Scalzo, Daniel Lemke, Cara Doughty, Modupe Awonuga, Karambir Singh, and Melinda Fiedor-Hamilton
References
1. Cook DA, Hatala R, Brydges R, et al. Technology-enhanced simulation for health professions education: a systematic review and meta-analysis. JAMA 2011;306:978–88. doi:10.1001/jama.2011.1234
2. Zendejas B, Brydges R, Wang AT, et al. Patient outcomes in simulation-based medical education: a systematic review. J Gen Intern Med 2013;28:1078–89. doi:10.1007/s11606-012-2264-5
3. Brydges R, Hatala R, Zendejas B, et al. Linking simulation-based educational assessments and patient-related outcomes: a systematic review and meta-analysis. Acad Med 2015;90:246–56. doi:10.1097/ACM.0000000000000549
4. Cheng A, Grant V, Auerbach M. Using simulation to improve patient safety: dawn of a new era. JAMA Pediatr 2015;169:419–20. doi:10.1001/jamapediatrics.2014.3817
5. Cook DA. How much evidence does it take? A cumulative meta-analysis of outcomes of simulation-based education. Med Educ 2014;48:750–60. doi:10.1111/medu.12473
6. McGaghie WC, Issenberg SB, Petrusa ER, et al. A critical review of simulation-based medical education research: 2003–2009. Med Educ 2010;44:50–63. doi:10.1111/j.1365-2923.2009.03547.x
7. McGaghie WC, Issenberg SB, Cohen ER, et al. Translational educational research: a necessity for effective health-care improvement. Chest 2012;142:1097–103. doi:10.1378/chest.12-0148
8. Issenberg SB, McGaghie WC, Petrusa ER, et al. Features and uses of high-fidelity medical simulations that lead to effective learning: a BEME systematic review. Med Teach 2005;27:10–28. doi:10.1080/01421590500046924
9. Cheng A, Lockey A, Bhanji F, et al. The use of high-fidelity manikins for advanced life support training—a systematic review and meta-analysis. Resuscitation 2015;93:142–9. doi:10.1016/j.resuscitation.2015.04.004
10. Cheng A, Lang TR, Starr SR, et al. Technology-enhanced simulation and pediatric education: a meta-analysis. Pediatrics 2014;133:e1313–23. doi:10.1542/peds.2013-2139
11. Cheng A, Eppich W, Grant V, et al. Debriefing for technology-enhanced simulation: a systematic review and meta-analysis. Med Educ 2014;48:657–66. doi:10.1111/medu.12432
12. Ilgen JS, Sherbino J, Cook DA. Technology-enhanced simulation in emergency medicine: a systematic review and meta-analysis. Acad Emerg Med 2013;20:117–27. doi:10.1111/acem.12076
13. Lorello GR, Cook DA, Johnson RL, et al. Simulation-based training in anaesthesiology: a systematic review and meta-analysis. Br J Anaesth 2014;112:231–45. doi:10.1093/bja/aet414
14. Zendejas B, Brydges R, Hamstra SJ, et al. State of the evidence on simulation-based training for laparoscopic surgery: a systematic review. Ann Surg 2013;257:586–93. doi:10.1097/SLA.0b013e318288c40b
15. Dilaveri CA, Szostek JH, Wang AT, et al. Simulation training for breast and pelvic physical examination: a systematic review and meta-analysis. BJOG 2013;120:1171–82. doi:10.1111/1471-0528.12289
16. Cheng A, Auerbach M, Hunt EA, et al. Designing and conducting simulation-based research. Pediatrics 2014;133:1091–101. doi:10.1542/peds.2013-3267
17. LeBlanc VR, Manser T, Weinger MB, et al. The study of factors affecting human and systems performance in healthcare using simulation. Simul Healthc 2011;6(Suppl):S24–9. doi:10.1097/SIH.0b013e318229f5c8
18. Raemer D, Anderson M, Cheng A, et al. Research regarding debriefing as part of the learning process. Simul Healthc 2011;6(Suppl):S52–7. doi:10.1097/SIH.0b013e31822724d0
19. Cook DA, Hamstra SJ, Brydges R, et al. Comparative effectiveness of instructional design features in simulation-based education: systematic review and meta-analysis. Med Teach 2013;35:e867–98. doi:10.3109/0142159X.2012.714886
20. Glasziou P, Altman DG, Bossuyt P, et al. Reducing waste from incomplete or unusable reports of biomedical research. Lancet 2014;383:267–76. doi:10.1016/S0140-6736(13)62228-X
21. Cook DA, Beckman TJ, Bordage G. A systematic review of titles and abstracts of experimental studies in medical education: many informative elements missing. Med Educ 2007;41:1074–81. doi:10.1111/j.1365-2923.2007.02861.x
22. Cook DA, Beckman TJ, Bordage G. Quality of reporting of experimental studies in medical education: a systematic review. Med Educ 2007;41:737–45. doi:10.1111/j.1365-2923.2007.02777.x
23. Cook DA, Levinson AJ, Garside S. Method and reporting quality in health professions education research: a systematic review. Med Educ 2011;45:227–38. doi:10.1111/j.1365-2923.2010.03890.x
24. Jüni P, Altman DG, Egger M. Systematic reviews in health care: assessing the quality of controlled clinical trials. BMJ 2001;323:42–6. doi:10.1136/bmj.323.7303.42
25. Begg CB, Cho MK, Eastwood S, et al. Improving the quality of reporting of randomized controlled trials: the CONSORT statement. JAMA 1996;276:637–9. doi:10.1001/jama.1996.03540080059030
26. Begley CG, Ioannidis JP. Reproducibility in science: improving the standard for basic and preclinical research. Circ Res 2015;116:116–26. doi:10.1161/CIRCRESAHA.114.303819
27. Moher D, Schulz KF, Altman DG. The CONSORT statement: revised recommendations for improving the quality of reports of parallel-group randomised trials. Lancet 2001;357:1191–4. doi:10.1016/S0140-6736(00)04337-3
28. Moher D, Hopewell S, Schulz KF, et al. CONSORT 2010 explanation and elaboration: updated guidelines for reporting parallel group randomised trials. BMJ 2010;340:c869. doi:10.1136/bmj.c869
29. Moher D, Altman DG, Schulz KF, et al. Opportunities and challenges for improving the quality of reporting clinical research: CONSORT and beyond. CMAJ 2004;171:349–50. doi:10.1503/cmaj.1040031
30. Plint AC, Moher D, Morrison A, et al. Does the CONSORT checklist improve the quality of reports of randomised controlled trials? A systematic review. Med J Aust 2006;185:263–7.
31. von Elm E, Altman DG, Egger M, et al. The Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) statement: guidelines for reporting observational studies. Ann Intern Med 2007;147:573–7. doi:10.7326/0003-4819-147-8-200710160-00010
32. Vandenbroucke JP, von Elm E, Altman DG, et al.; STROBE Initiative. Strengthening the Reporting of Observational Studies in Epidemiology (STROBE): explanation and elaboration. PLoS Med 2007;4:e297. doi:10.1371/journal.pmed.0040297
33. Moher D, Shamseer L, Clarke M, et al. Preferred reporting items for systematic review and meta-analysis protocols (PRISMA-P) 2015 statement. Syst Rev 2015;4:1. doi:10.1186/2046-4053-4-1
34. Moher D, Liberati A, Tetzlaff J, et al. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. Ann Intern Med 2009;151:264–9. doi:10.7326/0003-4819-151-4-200908180-00135
35. Liberati A, Altman DG, Tetzlaff J, et al. The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate health care interventions: explanation and elaboration. Ann Intern Med 2009;151:W65–94. doi:10.7326/0003-4819-151-4-200908180-00136
36. Enhancing the Quality and Transparency of Health Research (EQUATOR) Network. Library for health research reporting. http://www.equator-network.org/library/ (accessed 28 May 2015).
37. Golub RM, Fontanarosa PB. Researchers, readers and reporting guidelines: writing between the lines. JAMA 2015;313:1625–6. doi:10.1001/jama.2015.3837
38. Campbell MK, Elbourne DR, Altman DG; CONSORT Group. CONSORT statement: extension to cluster randomised trials. BMJ 2004;328:702–8. doi:10.1136/bmj.328.7441.702
39. Piaggio G, Elbourne DR, Altman DG, et al.; CONSORT Group. Reporting of noninferiority and equivalence randomized trials: an extension of the CONSORT statement. JAMA 2006;295:1152–60. doi:10.1001/jama.295.10.1152
40. Boutron I, Moher D, Altman DG, et al. Methods and processes of the CONSORT Group: example of an extension for trials assessing nonpharmacologic treatments. Ann Intern Med 2008;148:W60–6.
41. Little J, Higgins JP, Ioannidis JP, et al. STrengthening the REporting of Genetic Association studies (STREGA)—an extension of the STROBE statement. Eur J Clin Invest 2009;39:247–66. doi:10.1111/j.1365-2362.2009.02125.x
42. Moher D, Schulz KF, Simera I, et al. Guidance for developers of health research reporting guidelines. PLoS Med 2010;7:e1000217. doi:10.1371/journal.pmed.1000217
43. Cheng A, Hunt EA, Donoghue A, et al. Examining pediatric resuscitation education using simulation and scripted debriefing: a multicenter randomized trial. JAMA Pediatr 2013;167:528–36. doi:10.1001/jamapediatrics.2013.1389
44. Cheng A, Brown LL, Duff JP, et al. Improving cardiopulmonary resuscitation with a CPR feedback device and refresher simulations (CPR CARES study): a randomized clinical trial. JAMA Pediatr 2015;169:137–44. doi:10.1001/jamapediatrics.2014.2616
45. Cheng A, Overly F, Kessler D, et al. Perception of CPR quality: influence of CPR feedback, just-in-time CPR training and provider role. Resuscitation 2015;87:44–50. doi:10.1016/j.resuscitation.2014.11.015
46. Kessler DO, Arteaga G, Ching K, et al. Interns' success with clinical procedures in infants after simulation training. Pediatrics 2013;131:e811–20. doi:10.1542/peds.2012-0607
47. Gerard JM, Kessler DO, Braun C, et al. Validation of global rating scale and checklist instruments for the infant lumbar puncture procedure. Simul Healthc 2013;8:148–54. doi:10.1097/SIH.0b013e3182802d34
48. Kessler D, Pusic M, Chang TP, et al. Impact of just-in-time and just-in-place simulation on intern success with infant lumbar puncture. Pediatrics 2015;135:e1237–46. doi:10.1542/peds.2014-1911
49. Chang TP, Kessler D, McAninch B, et al. Script concordance testing: assessing residents' clinical decision-making skills for infant lumbar puncture. Acad Med 2014;89:128–35. doi:10.1097/ACM.0000000000000059
50. Haubner LY, Barry JS, Johnston LC, et al. Neonatal intubation performance: room for improvement in tertiary neonatal intensive care units. Resuscitation 2013;84:1359–64. doi:10.1016/j.resuscitation.2013.03.014
51. Institute of Education Sciences, US Department of Education, and the National Science Foundation. Common guidelines for education research and development. http://www.nsf.gov/publications/pub_summ.jsp?ods_key=nsf13126 (accessed 10 Jan 2015).
52. Cobo E, Cortes J, Ribera JM, et al. Effect of using reporting guidelines during peer review on quality of final manuscripts submitted to a biomedical journal: masked randomised trial. BMJ 2011;343:d6783. doi:10.1136/bmj.d6783
53. Egger M, Schneider M, Davey Smith G. Spurious precision? Meta-analysis of observational studies. BMJ 1998;316:140–4. doi:10.1136/bmj.316.7125.140
54. American Educational Research Association. Standards for reporting on empirical social science research in AERA publications. Educ Res 2006;35:33–40.
55. Bossuyt PM, Reitsma JB, Bruns DE, et al.; Standards for Reporting of Diagnostic Accuracy. Towards complete and accurate reporting of studies of diagnostic accuracy: the STARD initiative. BMJ 2003;326:41–4. doi:10.1136/bmj.326.7379.41
56. Cook DA, Brydges R, Zendejas B, et al. Technology-enhanced simulation to assess health professionals: a systematic review of validity evidence, research methods, and reporting quality. Acad Med 2013;88:872–83. doi:10.1097/ACM.0b013e31828ffdcf
57. Tong A, Sainsbury P, Craig J. Consolidated criteria for reporting qualitative research (COREQ): a 32-item checklist for interviews and focus groups. Int J Qual Health Care 2007;19:349–57. doi:10.1093/intqhc/mzm042
58. Davidoff F, Batalden P, Stevens D, et al. Publication guidelines for improvement studies in health care: evolution of the SQUIRE project. Ann Intern Med 2008;149:670–6. doi:10.7326/0003-4819-149-9-200811040-00009