ABSTRACT
Aim:
The aim of this paper is to provide insights into conducting an implementation needs assessment using a case example in a less-research-intensive setting.
Design and methods:
In the case example, an implementation needs assessment was conducted, including (1) an environmental scan of the organization's website and preliminary discussions with key informants to learn about the implementation context, and (2) a formal analysis of the evidence–practice gap (use of sedation interruptions) deploying a chart audit methodology using legal electronic reports.
Results:
Our needs assessment was conducted over 5 months and demonstrated how environmental scans reveal valuable information that can inform the evidence–practice gap analysis. A well-designed gap analysis, using suitable indicators of best practice, can reveal compliance rates with local protocol recommendations, even with a small sample size. In our case, compliance with the prescribed practices for sedation interruptions ranged from 65% (n=53) to as high as 84% (n=69).
Conclusions:
Implementation needs assessments provide valuable information that can inform implementation planning. Such assessments should include an environmental scan to understand the local context and identify both current recommended best practices and local best practices for the intervention of interest. When addressing an evidence–practice gap, analyses should quantify the difference between local practice and desired best practice.
Impact:
The insights gained from the case example presented in this paper are likely transferrable to implementation research or studies conducted in similar, less-research-intensive settings.
Keywords: critical care, environmental scan, evidence–practice gap, implementation needs assessment, sedation interruption
BACKGROUND
There is increased attention and support for the incorporation of evidence into nursing practice by researchers, organizations, and professional bodies around the world (e.g., the Registered Nurses' Association of Ontario, the National Association of Countries, and JBI). According to one literature review, at least 60 associations around the world (from Canada, the United States, New Zealand, Finland, and Switzerland) were undertaking evidence-based activities.1 However, having access to the best available evidence in guidelines, evidence summaries, and systematic reviews is only one pillar of the process for translating knowledge into practice.2 Careful consideration of the needs and processes in the implementation setting is an important prerequisite to knowledge uptake. Planning for implementation in health care is paramount, as it enables the integration of the best available evidence tailored to the unique needs of a particular health care setting. By carefully strategizing the implementation process around local contexts and resources, health care providers can translate evidence-based practices into tangible improvements in patient care. Successful implementation planning ensures that health care interventions are not only effective but also sustainable, fostering long-term advancements in health care delivery on a global scale.
Well-developed, evidence-informed guidelines are readily available around the globe. However, despite the availability of high-quality clinical practice guidelines, patients still do not receive the recommended care, contributing to morbidity, mortality, and higher costs of care.3–5 The literature suggests that it takes an average of 17 years for evidence to make it into practice.6 "Evidence-to-practice" or "know–do" gaps can result in suboptimal care or a delay in the benefits associated with best practice, and may arise from unsuccessful implementation.7
Planning for implementation gives the researcher or clinician the opportunity to gather critical information to inform future steps and phases of the project. Gaining a sense of the context is a principal step in the pre-planning phase of any implementation project or study and needs to be done rigorously to provide meaningful information. To accomplish this step, Kitson and Straus8(p.100) describe an implementation needs assessment as "a systematic process for determining the size and nature of the gap between current and more desirable skills, attitudes, behaviors, and outcomes." Evidence–practice gaps are a nursing concern, and needs assessments are a feasible approach to understanding the nature and extent of these gaps. The strategies used to conduct a needs assessment depend on the purpose of the assessment, the type of data, and the resources available.8 An implementation needs assessment can be conducted as research and, when appropriate, may require Research Ethics Board approval; alternatively, it can be considered part of a quality improvement initiative and not undertaken as research. At the care provider level, chart audits, observational studies, competency assessments, and reflective practice are common approaches to assessing implementation needs. Little is known about the real-world challenges of conducting an implementation needs assessment, especially in smaller, less-research-intensive hospitals or social organizations.
OBJECTIVES
The objectives of this paper are to:
1. Describe the methods used in conducting an implementation needs assessment at a less-research-intensive tertiary hospital, and the needs assessment findings.
2. Highlight the value of conducting a needs assessment and the associated implications for undertaking future implementation research at the setting.
3. Describe the implications of our approach and provide advice for conducting implementation needs assessments in practice or as part of research.
THE CASE EXAMPLE: CONTEXT FOR AN IMPLEMENTATION NEEDS ASSESSMENT
Sedation interruptions (SI) are an evidence-informed intervention that has demonstrated positive outcomes for mechanically ventilated adults in critical care. Since the introduction of SI in the literature nearly 24 years ago,9 a gap has persisted between recommended practice and actual practice.10–13 SI are used to minimize the bioaccumulation of sedative medications and to facilitate proper neurological assessment, among other known benefits.14 SI are one of several bundled recommendations for mitigating post-intensive care syndrome, which can affect survivors of critical illness.15,16
According to a recent review, 11 guidelines offer 13 recommendations in favor of SI use in adult mechanically ventilated patients and two recommendations against SI use in patients with increased intracranial pressure.17 Despite evidence of important deficiencies in the methodological development of these guidelines, best practice is in favor of SI.17 Staff (nurses and physicians) in a critical care unit (CCU) in northeastern Ontario, Canada, identified adherence to SI as a topic of interest and suggested that a best practice implementation study might be warranted to improve SI use. To plan our implementation study, we selected a planned action framework and created a strategy for carrying out the first phase: conducting the implementation needs assessment.
METHODS
Guiding conceptual framework
The Knowledge to Action Framework (KTA) was derived from a synthesis of more than 30 planned action theories, frameworks, and models and is commonly used to guide implementation projects and studies.18–20 The KTA consists of two processes: knowledge creation and an action cycle.21,22 Similar to the JBI Evidence Implementation Framework,2 the action cycle of the KTA framework comprises seven phases that need to be addressed for the successful implementation of evidence into practice. The first phase is about identifying the clinical problem, selecting the evidence to assess the problem, and determining the know–do gap between current and best practice. The rationale for this phase is to understand the implementation environment or context (e.g., the nature of the setting: historical context, size, number, and types of providers and patients) and the nature and magnitude of the know–do gap (e.g., current practice and how it reflects (or not) best practice). The first phase promotes evidence-informed implementation planning and eventually, more successful implementation. Determining the know–do gap has some similarities to the planning phase of the Plan-Do-Study-Act, a commonly used tool in quality improvement activities.23 To supplement the KTA, we used a second planned action framework.
Our approach was also informed by the Implementation Roadmap,24 a more detailed planned action framework based on the KTA framework that comprises planning and implementation phases. These include (1) identifying and clarifying the issue; (2) building solutions and field-testing them; and (3) implementing, evaluating, and sustaining the changes. The two steps under Phase 1 correspond to the KTA action phase of determining the know–do gap. The process involves identifying the problem and the relevant best practices, including key indicators, and then gathering local evidence on the context and current practices. Understanding the context and current practices involves assessing the magnitude of the problem, determining current practices, and measuring the gap between evidence and practice.
Implementation needs assessment methodology
Our approach to conducting the implementation needs assessment started with an environmental scan of the hospital, gathering information specific to the CCU, followed by an evidence–practice gap analysis. Our specific objectives were to, first, conduct an environmental scan of the organization's website and hold preliminary discussions with key informants to learn about the implementation context; and second, conduct a formal analysis of the evidence–practice gap (use of SI) through a chart audit methodology using electronic medical records. See Figure 1 for the arrangement of content in this paper.
Figure 1.
Arrangement and clarity of content.
Methods for environmental scan
To better understand the context for the gap analysis study, we conducted an environmental scan. Environmental scans are gaining popularity in the health care sector for examining the current state of programs, services, or policies.25 They can include casual discussions, observations, analysis of organizational documents (e.g., policies and procedures), and secondary analysis of organizational databases (e.g., electronic health records or other micro-data storage systems). The purpose of our environmental scan was to understand and document the structure of the organization (hospital), the specific setting (the CCU), and those who work there. The environmental scan had two components, during which information was gathered to better understand implementation considerations: (1) the implementation setting, and (2) the setting's readiness for implementation.
Component 1: Understanding the implementation setting
Scanning the environment. We scanned the organization's website to identify documents that described the hospital context (e.g., number of beds, staffing arrangement, relationships with academic centers, etc.). We held preliminary informal discussions with key individuals. For example, we spoke with the clinical nurse educator in the CCU to better understand the internal staffing structure, with the chief nursing executive officer to elicit support for the ethical conduct of nursing research in the CCU, and with an electronic medical record (EMR) system analyst to become familiar with the documentation procedures in the CCU. We asked for relevant policies and procedures related to SI practice and for the CCU's standardized order set for mechanically ventilated patients. We also obtained examples of related electronic medical system documentation screens used by nurses and other multidisciplinary team members for charting SI.
Identifying best practices and determining indicators of best practice. As no review of SI best practices existed, we conducted a systematic review and appraisal of SI guidelines17 to determine best practice for the use of SI and how it compared with what was considered local best practice for SI. The Appraisal of Guidelines for Research and Evaluation (AGREE) II instrument26 was used to appraise the methodological development of the 11 identified guidelines containing 15 recommendations related to SI.17 This was useful for determining best practice from an international perspective rather than limiting the benchmark to one recommendation from one guideline. The review also provided the comparator needed to determine how closely the local CCU protocol reflected international recommendations. To determine what the hospital considered local best practice, we used the hospital's SI policy. The CCU had a standard order set, applied to all adult mechanically ventilated patients admitted to the CCU, that corresponded with a regional protocol (Sedation and Analgesia Protocol). Once the protocol is ordered by a physician, the nurse has the authority to implement the treatments and interventions included in the order.
We began to conceptualize indicators of SI use based on the standard order set for mechanically ventilated patients, as key performance metrics for evaluating the established protocol were absent locally and not provided in the guidelines. In accordance with the local protocol, if the patient was stable (e.g., blood pressure stable, no neuromuscular blockades in use, not difficult to ventilate), all sedation was to be turned off until the Richmond Agitation Sedation Scale score was between 0 and −1 (alert and calm, to not fully alert but with sustained awakening to voice). We reviewed the electronic medical system documentation templates to see what data were available and modified the proposed indicators to align with what we anticipated in the documentation screens.
Component 2: Understanding the setting's readiness for implementation
Because we chose to undertake the needs assessment as research rather than quality improvement, Research Ethics Board approval was required. We obtained the necessary research ethics approval documents from the organization's website and searched for specific documents regarding access to medical data for secondary research. We needed to contact representatives from the affiliated Research Ethics Board to understand more about the current processes and policies for conducting research at the organization.
Methods for evidence–practice gap analysis
The evidence–practice gap was determined using a retrospective chart audit to compare current practice against performance indicators developed for this study. The unit of analysis for measuring SI use was the number of days on which a patient was eligible for an SI. Eligibility was defined as having a physician's order for an SI and the absence of neuromuscular blockade and prone positioning. The definition of "stable" from the local protocol was also used to determine eligibility for SI.
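One way to operationalize this eligibility rule day by day is sketched below. This is a minimal illustration, not the study's abstraction logic: the function and field names are ours, and reading eligibility as requiring the absence of both neuromuscular blockade and prone positioning follows the appropriate non-use reasons listed with Table 3.

```python
# Minimal, illustrative day-level eligibility check; names are ours, not the
# study's abstraction tool. "stable" follows the local protocol's examples
# (blood pressure stable, no neuromuscular blockade, not difficult to ventilate).
def eligible_for_si(has_si_order: bool, neuromuscular_blockade: bool,
                    prone_positioning: bool, stable: bool) -> bool:
    """Return True if a patient-day qualifies as an eligible SI day."""
    return (has_si_order
            and not neuromuscular_blockade
            and not prone_positioning
            and stable)

# Example: ordered, stable, supine, no blockade -> eligible
print(eligible_for_si(True, False, False, True))  # True
```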
Chart audit setting and sample selection. We used a purposive sampling strategy to obtain charts from patients admitted to the CCU from October 2019 to January 2022 who were mechanically ventilated for at least 24 hours and were 18 years of age or older at the time of admission. Sample size was calculated using a benchmark for estimating the expected proportion of the population that would receive the clinical intervention of interest. In one large study, adherence with daily SI was reported for 72.2% of all eligible study days for an average patient.27 If the proportion of the population expected to have the characteristic is over 50%, the sample size calculation should be based on the proportion without the characteristic.28 The proportion of patients not receiving an SI was therefore estimated to be 27.8%. With a confidence interval of 0.20 and a confidence level of 95%, the sample needed was 81 patients' electronic charts. We requested that 200 unique patient charts be pulled to account for the possibility of further exclusions at the abstraction stage.29 The clinical records department generated a worklist of all the electronic charts that met the inclusion criteria from October 2019 (the launch of the new online EMR system) to the commencement of the chart audit in January 2022. We used this timeframe because a manual audit of paper charts was not feasible.
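For readers who want to reproduce this type of estimate, the sketch below applies the common normal-approximation formula for a proportion, n = z²p(1−p)/e². Reading the reported "confidence interval of 0.20" as a total interval width of 0.20 (a ±0.10 margin of error) is our assumption, and the calculator in reference 28 may apply different rounding or corrections, so a small deviation from the reported 81 charts is expected.

```python
import math

def sample_size_for_proportion(p: float, margin: float, z: float = 1.96) -> int:
    """Normal-approximation sample size for estimating a proportion.

    p      -- expected proportion with the characteristic (here, no SI)
    margin -- desired half-width of the confidence interval
    z      -- z-score for the confidence level (1.96 for 95%)
    """
    return math.ceil((z ** 2) * p * (1 - p) / margin ** 2)

# 27.8% of eligible days were expected NOT to receive an SI (100% - 72.2%,
# from Mehta et al.); the "confidence interval of 0.20" is read here as a
# +/-0.10 margin of error -- an assumption on our part.
print(sample_size_for_proportion(p=0.278, margin=0.10))  # 78; the study reports 81
```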
Chart audit data collection. For ease of data collection, the data abstraction tool was developed following the flow of information presented in the EMR documentation screens. We consulted an analyst to confirm that the information we were seeking would be available in the documentation screens. The reliability of the data abstraction tool was assessed independently by two reviewers on 5% (n=4 charts) of the total desired sample (n=81 charts).30 Minimal modifications to the instrument were made during the pilot. After the two independent reviewers achieved consensus on the 5% sample of charts, one reviewer conducted data abstraction from the remaining charts. In addition to the indicators for (1) adherence to the local protocol recommendation and (2) the evidence–practice gap, we collected data using a data minimization approach to protect patients' identities, including month and year of admission, admitting diagnosis, age in years, length of stay, and other demographic information that could be used to describe the population.
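The study assessed the abstraction tool through independent double abstraction and consensus on the pilot charts; no agreement statistic is reported. Teams wanting to quantify inter-rater agreement at this step often use Cohen's kappa; a minimal sketch follows, with hypothetical day-level codes rather than data from the study.

```python
from collections import Counter

def cohens_kappa(rater1: list, rater2: list) -> float:
    """Cohen's kappa for two raters assigning nominal codes to the same items."""
    assert len(rater1) == len(rater2) and rater1, "raters must code the same items"
    n = len(rater1)
    observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    counts1, counts2 = Counter(rater1), Counter(rater2)
    expected = sum(counts1[c] * counts2[c] for c in counts1.keys() | counts2.keys()) / n ** 2
    return (observed - expected) / (1 - expected)

# Hypothetical day-level SI codes from two pilot reviewers (not study data)
r1 = ["performed", "not_performed", "performed", "undocumented", "performed"]
r2 = ["performed", "not_performed", "performed", "performed", "performed"]
print(round(cohens_kappa(r1, r2), 2))  # 0.58
```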
Gap analysis. In the absence of established performance indicators, we developed two indicators describing (1) adherence to the local protocol recommendation and (2) the evidence–practice gap. Table 1 illustrates the indicators for analyzing the evidence–practice gap in the use of SI; a computational sketch of these indicators follows Table 1. Frequencies were tallied and the data were categorized into four distinct observations describing the use of SI: (1) appropriate use, (2) appropriate non-use, (3) overuse, and (4) underuse. The adherence indicator was the combined magnitude of appropriate use and appropriate non-use, while the evidence–practice gap was the combined magnitude of overuse and underuse. We also summarized the types and sources of information that we found useful for conducting a gap analysis.
Table 1.
Indicators for analyzing an evidence–practice gap in an intervention
| Indicators | Indicator calculations | Sub-indicators |
| 1. Adherence to the local protocol's recommendation | (A+B)/total × 100 | A. Appropriate use; B. Appropriate non-use |
| 2. Evidence–practice gap | (C+D)/total × 100 | C. Overuse; D. Underuse of SI |
A. Appropriate use: Sedation interruption is performed when indicated by the local policy.
B. Appropriate non-use: Sedation interruption is not performed when not indicated.
C. Overuse: Sedation interruption is performed when not indicated.
D. Underuse: Sedation interruption is not performed when indicated.
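To make Table 1 concrete, the sketch below classifies each eligible patient-day into categories A–E and computes both indicators; applied to the counts later reported in Table 3, it reproduces the study's lower-bound figures. It is a minimal illustration under our own naming assumptions, not the study's abstraction tool.

```python
from typing import Optional

def classify_day(si_performed: Optional[bool], si_indicated: bool) -> str:
    """Return the Table 1 category for one eligible patient-day.

    si_performed -- True/False per the chart; None if not documented
    si_indicated -- whether the local protocol indicated an SI that day
    """
    if si_performed is None:
        return "E"                           # failure to document
    if si_performed:
        return "A" if si_indicated else "C"  # appropriate use / overuse
    return "D" if si_indicated else "B"      # underuse / appropriate non-use

def indicators(categories: list) -> tuple:
    """Adherence = (A+B)/total*100; evidence-practice gap = (C+D)/total*100."""
    total = len(categories)
    adherence = 100 * sum(c in ("A", "B") for c in categories) / total
    gap = 100 * sum(c in ("C", "D") for c in categories) / total
    return adherence, gap

# Day-level counts observed in this case example (see Table 3):
# A=51, B=2, C=0, D=13, E=16 (undocumented days count toward the total only)
days = ["A"] * 51 + ["B"] * 2 + ["D"] * 13 + ["E"] * 16
adherence, gap = indicators(days)
print(f"adherence {adherence:.1f}%, gap {gap:.1f}%")  # adherence 64.6%, gap 15.9%
```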
FINDINGS FROM THE ENVIRONMENTAL SCAN
Component 1: Understanding the implementation setting
The environmental scan began in November 2021, approximately 3 months prior to the commencement of the chart audit (January to March 2022), with some overlap between its end and the beginning of the evidence–practice gap analysis. The environmental scan revealed valuable information about the clinical setting that informed the evidence–practice gap analysis, for example, the need to develop key performance indicators for SI. Table 2 displays a summary of the types and sources of information retrieved in the environmental scan and later used to plan the evidence–practice gap analysis. This information shaped the context of the implementation setting, which was not only important for interpreting the data but also for understanding the organization's administrative structure for approval of a future implementation study.
Table 2.
Types and sources of information for the environmental scan
| Category | Type of information | Access or source of information | Use of the information |
| Hospital operations | Published annual reports | Institution website (public domain) | Brief history of the organization; annual updates on operations; partnerships with other organizations; external and internal structures |
| | Policy and procedures | Access through permission from unit educator and administration | Clinician guidance and prescription for the intervention of interest; useful for developing key performance indicators of the practice intervention |
| | Electronic medical system | Access through permission from unit educator and administration | Templates of mandatory and optional multidisciplinary documentation for the intervention of interest |
| Research ethics | Research Ethics Board (REB) documentation | Institution website (public domain) | Research policy (general information); submission form; instructions for completing the REB research project; annual renewal form; amendment form; final report; participant adverse/unanticipated event notification form; research agreement |
| | Ministry of Health and Long-Term Care | www.health.gov.on.ca | Ontario's personal health information privacy legislation for the health sector (Health Sector Privacy Rules); the Personal Health Information Protection Act (PHIPA), the legislation governing the collection, use, and disclosure of personal health information for secondary purposes (see section 37(1), Permitted Use, sub-section 3) |
| | Government of Canada – Panel on Research Ethics | www.ethics.gc.ca | Tri-Council Policy Statement: Ethical Conduct for Research Involving Humans (TCPS 2) |
| | Information and Privacy Commissioner of Ontario | www.ipc.on.ca | Privacy-by-design approach to using personal health information in a way that is safe for individuals and, at the same time, amenable to high-quality research |
| Gatekeepers | Clinical nurse educator | | Internal resource for policies and procedures, documentation, and internal structure of the unit |
| | Clinical records manager or supervisor | Email; consider cc'ing the REB administrator | Signature is required for the REB application |
| | Privacy officer | Email; consider cc'ing the REB administrator and clinical records manager | Signature is required for the REB application |
| | Chief nursing executive (CNE) officer | | Signature strengthens the REB application but is not required |
| | Risk manager | Email; consider cc'ing the CNE officer | Internal resource for ethical conduct of research involving human participants |
Gatekeeper: An individual who is key for gaining approval and access to the personal health records for a chart audit. Secondary gatekeeper: An individual who is key for gaining access to information about the process and structure of the organization and unit of interest.
The environmental scan was valuable in setting up the SI evidence–practice gap analysis, as it gave the researchers insight into the key stakeholders and gatekeepers who would influence decision-making and access to secondary data. Stakeholders were engaged throughout the needs assessment, starting with the chief nursing executive officer, who approved the conduct of nursing-related research and expressed ongoing support for the advancement of SI knowledge in the CCU. The CCU clinical nurse educator agreed that SI was relevant to the unit and provided access to the local protocol. Engaging the clinical records department manager was essential for obtaining Research Ethics Board approval to conduct a retrospective chart audit. Key implementation setting information identified during the environmental scan included the level of care provided at the CCU; bed count; unit operations; staffing ratios for nurses, physician specialties, respiratory therapists, and pharmacists; and annual admission numbers.
Component 2: Understanding the setting's readiness for implementation
Informal discussions with senior leaders revealed that the hospital had experience with routine chart reviews for internal quality improvement initiatives; however, a formal retrospective chart audit had never been conducted at this facility by an external researcher. To access patient-level data, we needed research ethics approval from the hospital. The ethics application process took approximately 3 months from the day the application was submitted to receiving final approval for the chart audit.
Preparing for delays
Confidentiality agreements needed to be signed by anyone on the research team who may potentially view the raw data or the Medical Record Number (MRN) list. Identifying the individuals in advance of the application helped mitigate delays. Personal Health Information Protection Act (PHIPA) Research Agreements also needed to be signed by all principal and co-investigators. Completing this early was important because the same form had to be signed by the chief of staff prior to approval. Mandatory signatures were required from the clinical record department manager and the privacy officer prior to ethics approval.
In our case example, the chart audit took place between January and March 2022 (approximately 3 months), but we encountered several other delays. Staff turnover and internal processes can delay the preparation of worklists and access to records, and researchers should be prepared for such delays. An unanticipated delay occurred when we were given access to a different source of data than what had been approved by the Research Ethics Board (legal electronic reports rather than the EMR). This unanticipated change led to an approximately 4-week delay in the commencement of the chart audit and necessitated changes to the data collection method. It became apparent that data abstraction from this type of document would take considerably more time than abstraction directly from the EMR. The reports were a chronologically ordered account of multidisciplinary documentation (assessment order sets, ventilator settings, and vital signs, with narrative notes clustered at the end of the report), running from the first to the last day of admission, and were often between 100 and 700 pages long. As such, the team discussed options for moving forward in a feasible way.
Findings from the evidence–practice gap analysis
After spending over 3 weeks (approximately 32 hours) extracting data from 28 randomly selected charts, we decided that this number of charts would be adequate to gain a sense of the SI evidence–practice gap, even though it would not support conclusions with statistical certainty. The 28-chart sample satisfied the objectives of the practice gap analysis, as our intent was to gain a sense of the current state of SI use, confirming there was a gap that warranted a future implementation study.
A total of 156 records of mechanically ventilated patients admitted to the CCU between the launch of the new EMR system and the time of record retrieval were identified by the Clinical Records Department. After closer inspection of the records, 86 met the eligibility criteria; 51 patient records were excluded because of a ventilation time of less than 24 hours, transfer to another facility within 24 hours, death, orders not to perform SI, or the absence of a continuous infusion, and one record was unavailable.
Of the final abstracted sample of 28 patient records, the total number of eligible SI days was 82. The sample contained patients with a variety of admitting diagnoses and reasons for mechanical ventilation, including respiratory failure, pneumonia (including COVID-19), polysubstance overdose, and brain injury. Of the 82 eligible SI days, SI occurred in accordance with recommended practice on 51 days (adherent); SI appropriately did not occur on 2 days (adherent); SI was documented as not having occurred, without a documented reason, on 13 days (non-adherent); and on 16 days there was a failure to document SI (i.e., it was not documented whether it was done or not). Table 3 displays the results of the evidence–practice gap analysis; a short worked sketch of the scenario bounds follows Table 3. Depending on the scenario, adherence ranged from a low of 64.6% ((51+2)/82) to a high of 84% ((51+2+16)/82) when the 16 undocumented days were all assumed to represent appropriate use or appropriate non-use of SI. Likewise, non-adherence ranged from a low of 16% (13/82) to a high of 35% ((13+16)/82) of days when the 16 undocumented days were all assumed to represent overuse or underuse of SI. This left considerable room for improvement, justifying an implementation study to improve the use of SI.
Table 3.
Results of the evidence–practice gap analysis
| | Local best practice use | | |
| Sedation interruption | Adherent | Non-adherent | Total |
| Performed | A. Appropriate use (n=51): 62% (82% if all undocumented cases included) | C. Overuse (n=0): 0% (19.5% if all undocumented cases included) | 51 |
| Not performed | B. Appropriate non-use (n=2): 2.4% (22% if all undocumented cases included) | D. Underuse (n=13): 16% (35% if all undocumented cases included) | 15 |
| Failure to document | E. Failure to document: n=16 (20%) | | 16 |
| Total | n=53 (65%), or as high as n=69 (84%) if all failure-to-document cases included | n=13 (16%), or as high as n=29 (35%) if all failure-to-document cases included | 82 |
A. Appropriate use: Sedation interruption is performed when indicated by the local policy.
B. Appropriate non-use: Sedation interruption is not performed when not indicated.
C. Overuse: Sedation interruption is performed when not indicated.
D. Underuse: Sedation interruption is not performed when indicated.
E. Failure to document (undocumented cases): Not documented whether sedation interruption was done or not done.
The unit of analysis is days when a patient was eligible for sedation interruption. Eligibility was defined as having a physician's order for a sedation interruption and the absence of neuromuscular blockade and prone positioning. Appropriate reasons for non-use of SI include increasing intracranial pressure, prone positioning, or concurrent use of neuromuscular blockade.
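The adherence and non-adherence ranges reported above can be reproduced by attributing the 16 undocumented days entirely to one side or the other. A short worked sketch, using the counts from Table 3:

```python
# Best- and worst-case bounds, attributing the 16 undocumented days (E)
# entirely to the adherent or the non-adherent side. Counts from Table 3.
A, B, C, D, E = 51, 2, 0, 13, 16
total = A + B + C + D + E                      # 82 eligible SI days

adherence_low = 100 * (A + B) / total          # 64.6%: E counted as non-adherent
adherence_high = 100 * (A + B + E) / total     # 84.1%: E counted as adherent
gap_low = 100 * (C + D) / total                # 15.9%
gap_high = 100 * (C + D + E) / total           # 35.4%
print(f"{adherence_low:.1f}-{adherence_high:.1f}%  {gap_low:.1f}-{gap_high:.1f}%")
```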
We uncovered an unanticipated finding related to documentation that has implications for the future implementation project. For approximately 20% of the eligible days, there was no documentation about whether SI was performed. We also noticed that, at times, nurses were not selecting an approved reason for not performing an SI (n=11 occurrences), entering free text into the "other" category instead. None of these 11 documented reasons aligned with the rationale offered by the protocol for not performing SI, nor with the exclusionary criteria found in the literature.13,31
DISCUSSION
Value of an implementation needs assessment
The value of the implementation needs assessment was two-fold. First, the environmental scan provided valuable information about the study setting. At the hospital level, we gained a sense of the organizational structure and were able to identify relevant gatekeepers. Engagement of gatekeepers can be invaluable to the research process, influencing what data are accessed and how research activities are facilitated to completion. Furthermore, identifying individuals in different departments early in the process can ease the later work of engaging champions to oversee or promote best practices and to evaluate implementation activities.24,32
The unit-level information was necessary for the gap analysis study and, later, the interpretation of the data. Contextual details, such as the number of beds, staffing structure, and types of admissions, were important for situating the data during interpretation. For example, knowing the types of patients typically admitted to the unit helped the researchers recognize appropriate reasons for non-use of SI. A significant contextual finding from the environmental scan was that the unit was primarily run by an internist, while staff anesthetists managed the care of mechanically ventilated patients. This had implications for the continuity of care received by mechanically ventilated patients and, possibly, the degree of involvement with SI from the multidisciplinary team. Lack of teamwork and organizational support have frequently been cited as barriers to implementing evidence-informed guidelines in other settings.33,34 Second, identifying the CCU's internal policies related to SI was essential for understanding what was expected of clinicians working in this CCU in relation to SI practice. The local policy served as the source for establishing indicators, facilitating a comprehensive understanding of SI use within the unit.
Practice and research implications
The insights that we gained from the case example can likely be transferred to similar settings and can be leveraged by both researchers and practitioners preparing for an implementation study or project. We categorized the implementation needs assessment into three stages of work: (1) before you measure: understanding the environment for implementation; (2) what to measure: determining indicators to measure the gap; and (3) how to measure: conducting a gap analysis. Table 4 illustrates these stages of work and important questions to ask when preparing for an implementation needs assessment. An initial set of questions was obtained from Tetroe and Graham (2013) and additional questions were included based on our experiences with an implementation needs assessment in this case example.
Table 4.
Important questions to ask when preparing for an implementation needs assessment
| Questions about comparing actual and desired practice (Yes / No / Unsure) |
| 1. Before conducting an evidence–practice gap analysis, understand the context for implementation |
| • Is the setting where research is to take place a research-intensive hospital or other? |
| • Has the institution or agency granted permission in the past for external research? |
| • Does the institution or agency have existing corporate standard operating procedure (SOP) for secondary use and disclosure of electronic data? Does this SOP apply to research conducted by external researchers? |
| • Are there existing clinical record department policies or procedures developed for chart audits? Do they apply to external researchers? |
| • Have you contacted key stakeholders and are they willing to assist with the provision of documents/information/contacts? |
| • Do you have someone with the right experience and skill to assist you with the pilot of the chart audit? |
| • Have you gathered all the documentation relevant to applying for Research Ethics Board approval? (See Table 2) |
| 2. What to measure in the evidence–practice gap analysis: determining indicators for measuring the gap |
| • What is/are the research question(s) that you are trying to answer? |
| • What is the evidence from which the gap is being measured? |
| • Do you require a benchmark to compare your findings or to establish a threshold to know the extent of the gap? |
| • What are the indicators determining that the intervention/action is or is not occurring? Do they need to be created? |
| • ∗Do your indicators have sufficient impact to lead to improvements in care, if addressed? |
| • What will you measure and how will you measure it? Will this information answer your research question(s)? |
| • Will you have access to the information that you used to create the indicators to measure? |
| 3. How to measure the evidence–practice gap: considerations when conducting a gap analysis |
| • ∗How are you identifying a representative sample? |
| • How big does your sample need to be to get a sense of the extent and nature of the evidence–practice gap? |
| • Is your sample from the same medium? Will you have different types of documents in your sample that change over time? |
| • ∗How will you collect the information? |
| • Do you have a data extraction sheet developed that reflects the documents that you reviewed and created indicators for? |
| • Is the information that you need for data extraction available in the format that you anticipated? |
| • If your access to documentation has changed, is it still possible to abstract the indicators developed a priori? |
| • ∗How will you interpret the information? |
The initial set of questions was obtained from Tetroe and Graham, and additional questions were added based on our experience with the implementation needs assessment in this case example. Source: Harrison MB. Chapter 3.2: Adapting knowledge to local context. In: Straus SE, Tetroe J, Graham ID, editors. Knowledge translation in health care: moving from evidence to practice. 2nd ed. Wiley; 2013.
Stage 1. Before conducting an evidence–practice gap analysis: understanding the implementation context
It is important to understand the internal structure of the organization in which the proposed implementation is to take place, as well as the target setting for the implementation (in this case example, the CCU). Knowing how a department is run and who the gatekeepers are (e.g., the clinical records department manager and privacy officer) can affect access to information. Unanticipated findings can be revealed during the process that have implications for future work relevant to improving practice. For example, we encountered some delays and challenges undertaking the chart audit because the organization was less experienced with external researchers and was rightfully concerned about protecting personal information and abiding by privacy legislation. At the time of this implementation needs assessment, the hospital had not yet developed corporate Standard Operating Procedures (SOPs) for the secondary use and disclosure of electronic data, whereas larger research-intensive organizations often have SOPs governing access to and secondary use of the information contained in personal health records. This type of SOP outlines the criteria and process by which researchers with the appropriate authority can access personal health information for research purposes. We recommend that researchers ask in advance about all SOPs that might be relevant to conducting an implementation needs assessment, and be aware that processes for research may still be in development. This may be an opportunity for the researcher to connect the smaller hospital with the privacy officer at a larger research-intensive organization to share knowledge about how legislation and regulatory requirements can be incorporated into SOPs.
Stage 2. What to measure in the evidence–practice gap analysis: determining indicators for measuring the gap
Practitioners or researchers should begin by developing research questions that can generate actionable results, which is achievable when the question relates directly to observations of a particular clinical practice setting. Sometimes the research questions are not obvious at first; in our case, beginning the implementation needs assessment with an environmental scan gave the researchers a sense of where research efforts should be focused. Once the practice issue becomes clearer, and before embarking on a gap analysis, the researcher needs to identify the evidence to be implemented. A search should be conducted for relevant systematic reviews and meta-analyses of clinical practice guidelines related to the practice issue; if no such review is available in the literature, one can be conducted. Local protocols, clinical pathways, decision aids, and electronic documentation systems (relevant order sets and nursing documentation screens) can also serve as sources for understanding local best practice. Indicators of best practice use will need to be developed if they are not already available in the literature or from local policy. All possible permutations of best practice use should be considered when developing indicators to describe the magnitude and nature of the evidence–practice gap.
In our case, the evidence–practice gap was measured as the disparity between desired practice (appropriate use of SI), as set out in the protocol or policy, and current practice. Indicators of the gap needed to be developed to measure the magnitude and nature of adherence to the desired practice because existing practice guideline recommendations were not accompanied by indicators. Future practitioners or researchers need to carefully consider how to define adherence and non-adherence to the desired practice.
Stage 3. How to measure the evidence–practice gap: considerations when conducting a gap analysis
Sample size is important: for generalizable findings, the sample will need to be large enough to support statistical inference. Nonetheless, smaller samples can provide a sense of the current state of practice and inform future implementation studies or projects, sometimes serving as baseline data.35 It may not always be possible to access the full EMR, and alternative strategies will need to be developed to obtain the desired data. For example, to gain access to the EMR, we collaborated with clinical records department staff to obtain screenshots of the desired EMR documentation screens to answer our implementation needs assessment questions. If additional measures are used to obtain the needed data, ensure that the process is piloted. In our case, we needed to revise the data abstraction tool to match the information available in the legal electronic reports. Important findings were identified despite the smaller sample, including a rate of SI adherence comparable to that in larger studies.
We discovered that, at times, the reasons nurses documented for not performing SI did not align with the local protocol. Furthermore, the documented rationales often did not align with frequently cited reasons for not performing SI in randomized controlled trials13,27 or with the spontaneous awakening trial safety screen described by the Society of Critical Care Medicine. Importantly, it became evident during the gap analysis that there was an issue with accurate and timely documentation (College of Nurses of Ontario 2008 standards for documentation) that needed to be addressed by the site, ideally prior to any future implementation study.
Limitations and strengths
Our implementation needs assessment was designed to increase understanding of the context for a future SI implementation study, including whether the magnitude of the evidence–practice gap at a small tertiary hospital was sufficient to warrant an implementation study. This setting may not be reflective of all small tertiary hospitals; however, we believe that many of the lessons learned during this study can benefit other practitioners or researchers attempting to understand evidence–practice gaps. Generalizability of the findings from the needs assessment may be limited, but the lessons learned and the advice formulated through the needs assessment process are likely transferable. As with all environmental scans, understanding of the context was limited to the information that was available. Informal discussions with senior leaders and an analyst facilitated the development of relationships with key individuals in the organization, which likely helped us move forward with the gap analysis and possibly improved willingness to support our future implementation study.
Our chart audit was limited because of potential, and at times unavoidable, challenges associated with this methodology. These included incomplete or missing data in the medical record, records lacking specific information, difficulty in interpreting or verifying documented information, and variability in the quality of documentation among health care professionals.36 Although the small sample size limited capacity for more complex statistical analysis, the 28 reports and resulting 82 days used for analysis gave us useful and actionable information with implications for our future implementation study.
CONCLUSIONS
An implementation needs assessment is a practical approach to gathering key information for planning an implementation project or study. Our experience reveals that an implementation needs assessment should include an environmental scan to gain a holistic picture of the local context and some of the factors that might influence implementation. The lessons learned are particularly relevant to practitioners and researchers working with a center that has less research experience. The implementation needs assessment should be designed to identify the current state of recommended best practice for the intervention of interest in addition to what the local site considers best practice to be. Local best practice may or may not align with the international guidelines or literature, but knowing this is important for measuring the know–do gap.
Any evidence–practice gap analysis should be designed to quantify the magnitude of the gap between local practice and desired or best practice. An important aspect of our approach was the development of indicators used to measure the evidence–practice gap, as such indicators are often not included in either local policy or published best practice recommendations. The four distinct categories that describe the use of a practice are appropriate use, appropriate non-use, overuse, and underuse, each of which provides different, useful information from an implementation perspective. Indicators of evidence–practice gaps can be thought of as the magnitude of the combined overuse and underuse of a practice. Knowing the extent of overuse and underuse has implications for the selection of implementation strategies, as the factors that influence overuse of a practice may differ from those influencing its underuse. We found that access to information can be challenging at a smaller, less-research-intensive hospital. Nonetheless, we encourage implementation researchers and practitioners to work with clinical and organizational stakeholders to obtain as much data as possible, and to be aware that even smaller-than-desired samples can still generate meaningful insights. The insights from the case example presented in this paper are likely transferrable to implementation research or studies conducted in similar, less-research-intensive settings.
AUTHOR CONTRIBUTIONS
NDG, IDG, BV-W, LNP, DAF, and JES made substantial contributions to conception and design, or acquisition of data, or analysis and interpretation of data. NDG, IDG, BV-W, DAF, and JES were involved in drafting the manuscript or revising it critically for important intellectual content. NDG, IDG, BV-W, LNP, DAF, and JES gave final approval of the version to be published. NDG, IDG, BV-W, LNP, DAF, and JES agreed to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved.
DATA AVAILABILITY
Raw data are not available due to ethical restrictions. Contact the corresponding author for additional information as required.
ETHICAL CONSIDERATIONS
The Research Ethics Board (REB) of the research site approved this project (file no. 2108–026). The principal investigator of this study was a doctoral student conducting research under the auspices of the University of Ottawa and as such received ethics approval from the university REB prior to the start of this project (file no. H-12-21-7555). Patient consent was not required for this study as it entailed data collection and analysis of secondary, de-identified personal health information. No engagement with patients occurred in this study.
Footnotes
The authors declare no conflicts of interest.
Supplemental digital content is available for this article.
REFERENCES
1. Holleman G, Eliens A, Van Vliet M, Van Achterberg T. Promotion of evidence-based practice by professional nursing associations: literature review. J Adv Nurs 2006;53(6):702–709.
2. Porritt K, McArthur A, Lockwood C, Munn Z. JBI's approach to evidence implementation: a 7-phase process model to support and guide getting evidence into practice. JBI Evid Implement 2023;21(1):3–13.
3. Kahn JM. Bringing implementation science to the intensive care unit. Curr Opin Crit Care 2017;23(5):398–399.
4. Levy MM, Evans LE, Rhodes A. The Surviving Sepsis Campaign bundle: 2018 update. Crit Care Med 2018;46(6):997–1000.
5. Wood E, Ohlsen S, Ricketts T. What are the barriers and facilitators to implementing collaborative care for depression? A systematic review. J Affect Disord 2017;214:26–43.
6. Morris ZS, Wooding S, Grant J. The answer is 17 years, what is the question: understanding time lags in translational research. J R Soc Med 2011;104(12):510–520.
7. Grimshaw JM, Eccles MP, Lavis JN, Hill SJ, Squires JE. Knowledge translation of research findings. Implement Sci 2012;7(1):50.
8. Kitson AL, Straus S. Chapter 3.1: Identifying knowledge to action gaps. In: Knowledge translation in health care. 2nd ed. BMJ Books; 2013.
9. Kress JP, Pohlman AS, O'Connor MF, Hall JB. Daily interruption of sedative infusions in critically ill patients undergoing mechanical ventilation. N Engl J Med 2000;342(20):1471–1477.
10. Carrothers KM, Barr J, Spurlock B, Ridgely MS, Damberg CL, Ely EW. Contextual issues influencing implementation and outcomes associated with an integrated approach to managing pain, agitation, and delirium in adult ICUs. Crit Care Med 2013;41(9 Suppl 1):S128–S135.
11. Darby TJ. Improving protocol adherence in the intensive care unit [Doctor of Nursing Practice project]. Bradley University; 2020.
12. Smith SN. The impact of nurses' adherence to sedation vacations on ventilator-associated pneumonia prevention. Georgia State University; 2013.
13. Sneyers B, Laterre P-F, Perreault MM, Wouters D, Spinewine A. Current practices and barriers impairing physicians' and nurses' adherence to analgo-sedation recommendations in the intensive care unit: a national survey. Crit Care 2014;18(1):655.
14. Vagionas D, Vasileiadis I, Rovina N, Alevrakis E, Koutsoukou A, Koulouris N. Daily sedation interruption and mechanical ventilation weaning: a literature review. Anaesthesiol Intensive Ther 2019;51(5):380–389.
15. DAS-Taskforce 2015; Baron R, Binder A, Biniek R, Braune S, Buerkle H, et al. Evidence and consensus based guideline for the management of delirium, analgesia, and sedation in intensive care medicine. Revision 2015 (DAS-Guideline 2015) – short version. Ger Med Sci 2015;13:Doc19.
16. Ely EW. The ABCDEF bundle: science and philosophy of how ICU liberation serves patients and families. Crit Care Med 2017;45(2).
17. Graham ND, Graham ID, Vanderspank-Wright B, Varin MD, Nadalin Penno L, Fergusson DA, et al. A systematic review and critical appraisal of guidelines and their recommendations for sedation interruptions in adult mechanically ventilated patients. Aust Crit Care 2023;36(5):889–901.
18. Azer SA. The top-cited articles in medical education: a bibliometric analysis. Acad Med 2015;90(8):1147–1161.
19. Moore JL, Mbalilaki JA, Graham ID. Knowledge translation in physical medicine and rehabilitation: a citation analysis of the knowledge-to-action literature. Arch Phys Med Rehabil 2022;103(7):S256–S275.
20. Skolarus TA, Lehmann T, Tabak RG, Harris J, Lecy J, Sales AE. Assessing citation networks for dissemination and implementation research frameworks. Implement Sci 2017;12(1):97.
21. Graham ID, Logan J, Harrison MB, Straus SE, Tetroe J, Caswell W, et al. Lost in knowledge translation: time for a map? J Contin Educ Health Prof 2006;26(1):13–24.
22. Straus SE, Tetroe J, Graham ID. Knowledge translation in health care. 2nd ed. BMJ Books; 2013.
23. Christoff P. Running PDSA cycles. Curr Probl Pediatr Adolesc Health Care 2018;48(8):198–201.
24. Harrison MB, Graham ID. Knowledge translation in nursing and healthcare: a roadmap to evidence-informed practice. Wiley Blackwell; 2021.
25. Charlton P, Kean T, Liu RH, Nagel DA, Azar R, Doucet S, et al. Use of environmental scans in health services delivery research: a scoping review. BMJ Open 2021;11(11):e050284.
26. Brouwers MC, Kho ME, Browman GP, Burgers JS, Cluzeau F, Feder G, et al. AGREE II: advancing guideline development, reporting, and evaluation in health care. Prev Med 2010;51(5):421–424.
27. Mehta S, Burry L, Cook D, Fergusson D, Steinberg M, Granton J, et al. Daily sedation interruption in mechanically ventilated critically ill patients cared for with a sedation protocol: a randomized controlled trial. JAMA 2012;308(19):1985–1992.
28. Patient safety-quality improvement. Determining a statistically valid sample size. 2016.
29. Vassar M, Holzmann M. The retrospective chart review: important methodological considerations. J Educ Eval Health Prof 2013;10:12.
30. Allison JJ, Wall TC, Spettell CM, Calhoun J, Fargason CA Jr, Kobylinski RW, et al. The art and science of chart review. Jt Comm J Qual Improv 2000;26(3):115–136.
31. Society of Critical Care Medicine. ICU liberation bundle (A–F): implement the A–F elements of the ICU liberation bundle to improve outcomes while transforming culture [internet]. SCCM [cited 2022 Dec 1]. Available from: https://www.sccm.org/ICULiberation/ABCDEF-Bundles.
32. Santos WJ, Graham ID, Lalonde M, Demery Varin M, Squires JE. The effectiveness of champions in implementing innovations in health care: a systematic review. Implement Sci Commun 2022;3(1):80.
33. McArthur C, Bai Y, Hewston P, Giangregorio L, Straus S, Papaioannou A. Barriers and facilitators to implementing evidence-based guidelines in long-term care: a qualitative evidence synthesis. Implement Sci 2021;16(1):70.
34. Wolfensberger A, Meier MT, Clack L, Schreiber PW, Sax H. Preventing ventilator-associated pneumonia: a mixed-method study to find behavioral leverage for better protocol adherence. Infect Control Hosp Epidemiol 2018;39(10):1222–1229.
35. Etchells E, Ho M, Shojania KG. Value of small sample sizes in rapid-cycle quality improvement projects. BMJ Qual Saf 2016;25(3):202–206.
36. Gearing RE, Mian IA, Barber J, Ickowicz A. A methodology for conducting retrospective chart review research in child and adolescent psychiatry. J Can Acad Child Adolesc Psychiatry 2006;15(3):126–134.