Abstract
Objectives
A patient safety intervention was tested in a 33-ward randomised controlled trial. No statistically significant difference between intervention and control wards was found. We conducted a process evaluation of the trial and our aim in this paper is to understand staff engagement across the 17 intervention wards.
Design
Large qualitative process evaluation of the implementation of a patient safety intervention.
Setting and participants
National Health Service staff based on 17 acute hospital wards located at five hospital sites in the North of England.
Data
We concentrate on three sources here: (1) analysis of taped discussion between ward staff during action planning meetings; (2) facilitators’ field notes and (3) follow-up telephone interviews with staff focusing on whether action plans had been achieved. The analysis involved the use of pen portraits and adaptive theory.
Findings
First, there were palpable differences in the ways that the 17 ward teams engaged with the key components of the intervention. Five main engagement typologies were evident across the life course of the study: consistent, partial, increasing, decreasing and disengaged. Second, the intensity of support for the intervention at the level of the organisation does not predict the strength of engagement at the level of the individual ward team. Third, standardisation of facilitative processes by the research team does not ensure that ward staff implement the intervention in a standardised way.
Conclusions
A dilution of the intervention occurred during the trial because wards engaged with Patient Reporting and Action for a Safe Environment (PRASE) in divergent ways, despite the standardisation of key components. The facilitative processes were not sufficient to enable intervention wards to engage successfully with the PRASE components.
Keywords: Organisation of health services, Quality in health care, Qualitative research
Strengths and limitations of this study
We devised a process evaluation that had several robust qualitative data collection methods which complemented each other to build a comprehensive and holistic picture of how ward staff implemented the intervention.
Our approach allowed us to reveal how the differing ways in which staff teams engage with the intervention may impact on patient safety changes at a ward level.
Our novel analytic approach used pen portrait methodology in a different way from how it has previously been used in health services research, to document the journey of 17 wards interacting with an intervention over an 18-month period.
We have little understanding of whether the implementation of Patient Reporting and Action for a Safe Environment may have gained traction and fuelled subsequent ward-based change once the research team left the field.
The qualitative methods we chose were designed to capture a broad understanding of the contexts in which the intervention was implemented, but had we known a priori that engagement would be such a significant factor, we might have designed the process evaluation specifically to explore its influence.
Background
Measurement of patient safety has traditionally relied on information from staff, such as incident reports, or on recording harms such as falls or pressure sores. More recently, patients have been recognised as important detectors of patient safety problems and likened to the ‘smoke detectors’ of safety.1 There is an increasing recognition that hospitals need to find better ways to capture and respond to the concerns of patients regarding the quality and safety of their care.2–4 However, patients are rarely asked about structural or procedural aspects of care which may contribute towards failures in patient safety. The Yorkshire Quality and Safety group have developed a patient safety intervention called Patient Reporting and Action for a Safe Environment (PRASE). This intervention first elicits patient perceptions of how a ward is performing on a series of issues known to contribute towards patient safety incidents and, second, assists staff to interpret this feedback to aid service improvements. This paper provides an account of a qualitative process evaluation of a randomised controlled trial (RCT) in which PRASE was tested. PRASE was designed,5 6 tested for feasibility7 and trialled8 9 between 2010 and 2015. This period has been charted as an era of paradigm shift for patient safety research, in which the dominant ‘measure and manage’ orthodoxy has been enriched by approaches sensitive to setting and sociocultural/political influences.10 It therefore became essential for the process evaluation to capture the nuances involved in the implementation of PRASE.
Process evaluations have been used to explain suboptimum outcome effects, specifically whether there was a ‘fault’ with the intervention itself, with its key components or with delivery.11 Latterly, they are often concerned not only with adherence to original plans but also with broader issues such as unintended consequences or the strengths and weaknesses of the intervention itself.12 Some process evaluations have been able to identify a precise ‘pinch point’ or problem with an essential component of the intervention that caused it to fail. In a UK trial of peer-led HIV prevention for gay men in London, no effect was shown. A qualitative process evaluation13 revealed that the essential component of ‘peer educator’ had not played out as intended during the course of the trial, due to recruitment problems and the inability of peer educators to confidently communicate harm reduction messages to intended targets. Other process evaluations have been able to point to more general cultural or structural reasons why an intervention may not have succeeded. Dixon-Woods et al14 evaluated why a US-developed patient safety intervention—to decrease central line infections in intensive care units—struggled with implementation after it was transferred to a UK setting. A post hoc qualitative evaluation revealed multiple reasons for this, largely the result of cultural differences between the US and UK settings. It is clear from these examples that process evaluations can support the largely ‘experimental’ aim of RCTs by identifying specific ‘pinch points’ within the intervention itself, or within its context, which help to explain success or failure.
The components of the intervention have been reported in detail elsewhere,8 as have the results of the randomised controlled trial, which demonstrated no statistically significant effect between the intervention and control wards.9 A feasibility study was undertaken prior to commencement of the full RCT, and details of our logic model and moderating factors are reported in the feasibility write up.7 Here, we provide a synopsis of the intervention and the results of the trial so that the reader can view our process evaluation in context. Online supplementary appendix 1 provides a detailed summary of the cyclical activities and facilitative processes of the intervention that were trialled, alongside anticipated outcomes. Online supplementary appendix 2 describes the trial design and results. Briefly, this was a cyclical study with two phases of: (1) collecting feedback about safety from patients at the bedside; (2) collating these data for interpretation by ward staff; (3) ward staff action planning to improve patient safety and (4) action plans being implemented and monitored.
Methods
We conducted a robust process evaluation involving differing qualitative and quantitative methods8 which gathered comprehensive data about all 17 intervention wards. We drew on a published framework12 for designing process evaluations of cluster RCTs. The main a priori research question was: ‘where does the intervention work, how and why?’8 In this paper, we have chosen to focus on the ‘how’ and ‘why’ and present a detailed picture of how staff engaged with the intervention. We now apply our original research question to understand how and why the intervention did not work, given the intervention did not have a significant effect on outcomes. Six mixed methods were used in the wider process evaluation but due to the extent and depth of the data collected, we focused intensively on three qualitative methods for the purpose of this paper. The methods described below are those most pertinent to exploring how staff engaged with the intervention in the ways they did, and why. Data were collected between August 2013 and November 2014. National Health Service (NHS) ethical approval was granted in March 2013. LS, CM and JOH undertook methods 1 and 2. LS and CM conducted method 3. LS is a sociologist, JOH is a psychologist and CM has a background in sustainability. All were working as researchers on this study and educated to doctorate level in their respective fields.
In-depth analysis of taped discussion between ward staff
Action planning meetings (APMs) were digitally recorded for all 17 wards at both phases. At phase 2, one ward did not meet, so we considered the recordings of 33 APMs. These ranged in length from 27 to 80 min (average 43 min). Our examination focused on which areas of patient feedback staff chose to make action plans on and which areas they chose not to. We wrote detailed notes while listening to the voice files. We structured our notes under the headings: (1) issues seen as important where actions were made (and why); (2) issues seen as important where no actions were made (and why not); (3) issues dismissed and reasons for this; (4) comments made by staff about the PRASE process/study/team and (5) comments made by staff about the ward or hospital context.
Facilitators’ field notes
These notes were written shortly after the APM had finished and captured (1) implicit dynamics between staff, such as body language, tone of voice and other non-verbal cues, (2) environmental factors, such as descriptions of the physical space where the meeting was held and (3) facilitator’s overall impressions. Field notes were brief and gave a ‘snapshot’ of the meeting. There were three facilitators (LS, CM and JOH) across the 17 intervention wards, and each facilitator worked with the same wards across both phases of the study to ensure continuity. Field notes were also taken at key meetings and events held with Trust senior management personnel (particularly during set up and roll out of the study). These notes assisted in providing the research team with tacit knowledge of the culture of the site in which the intervention was being implemented.
Follow-up telephone interviews conducted with the APM lead
The purpose of these short, structured phone interviews was to ascertain whether action plans had been successfully implemented or not and why. They were conducted around 6 months after the APM, with the ‘PRASE lead’ for each ward. Each ward was responsible for nominating a named member of ward staff—who was part of the action planning meeting—to be the PRASE lead. This could be any member of the team but more often than not, the person who volunteered for the role was a senior nurse. A structured interview guide was used. Researchers had a proforma in front of them which contained details about each action plan per ward and they asked the PRASE lead whether each action plan had been implemented—yes, no or partially. Open questioning then continued to understand the factors surrounding this. Five additional questions were asked which focused on the PRASE lead’s opinion of the facilitative processes embedded in the study.
For the purpose of this paper, we wanted to understand qualitatively how wards had engaged (or not) with the PRASE intervention. Implementation fidelity (which captures adherence to specific intervention components) is reported quantitatively elsewhere.9 Engagement of staff with an intervention is closely aligned to, but distinct from, fidelity: it refers to the approaches and attitudes that staff take to the task in hand (implementing the different components of the intervention), rather than simply whether or not they delivered the task. We decided not to adopt a numerical or scale-based definition of ‘engagement’ whereby wards were classed in a binary fashion as either ‘engaged’ or ‘disengaged’ with the intervention. Instead, we undertook a nuanced analysis of staff approaches and attitudes to: conducting an action planning meeting, creating quality action plans and implementing these action plans. We explored ‘engagement’ as a concept that we define as the ‘depth’ and ‘nature’ of ward teams’ approaches and attitudes to the intervention.
Informed by this understanding of engagement, a synthesis of the above data sources provided us with a rich account of the ‘engagement trajectory’ of each ward, which was realised by creating a pen portrait of engagement. Pen portraits have been used previously in applied health research in fields as diverse as end of life care,15 enabling vulnerable older people to keep warm in their homes16 and sleeping practices among homeless drug users.17 Previously, they have provided a narrative account of a ‘typical’ participant in qualitative studies or served as an analytic aide-memoire. We used them in a slightly different manner, to document the ‘journey’ of the wards throughout the trial from the perspective of the researcher who had worked closely with those wards for over 18 months. There is a lack of methodological literature pertaining to the construction of a pen portrait, and this has been left to the discretion of individual research teams. We created a basic structure for the pen portraits which centred on writing a linear, longitudinal account of how each ward had engaged with the relevant key components of the intervention and the contextual factors which influenced this, ensuring that all three data sources were drawn on. We did not use an existing theory or framework to guide extraction of data for the pen portraits, as we wanted the findings to arise inductively from the data set. As staff engagement was our focus, we included as much material on this as possible (along with explanatory factors and necessary description). We excluded minutiae which did not add to the ‘big picture’ of the ward team’s engagement, to maintain a focused pen portrait. The pen portrait for Holly ward is shown in online supplementary appendix 3, with the prose annotated to illustrate how portraits were constructed from the three data sources outlined above.
Researchers took into account all the information contained within the pen portrait and attributed an overall ‘engagement trajectory’ label to each ward. LS wrote all pen portraits for Trusts A and B and CM for Trust C. We categorised the 17 different ward engagement trajectories into five main ‘engagement typologies’, which emerged from an analytical session centring mainly on consensus discussion between LS and CM.
We report three overarching themes in this paper, which are described in detail in the Findings section which follows. The above categorisation of engagement typologies led to the content of the first theme. Once we were confident of the findings of this first theme, we used the differences in engagement it detailed to progress to themes 2 and 3. To achieve this, we looked between and across the engagement trajectories of all 17 wards to understand how engagement with the intervention related to components of local implementation. We returned to the detail of the pen portraits to understand commonality and difference, and from this we developed the coding framework for themes 2 and 3. We then checked our assumptions by testing the data in the pen portraits against our initial coding framework. After minor adaptations, we coded the data in all 17 pen portraits. Overall, we used techniques derived from ‘adaptive theory’,18 which allows high level frameworks and conceptualisations, rather than descriptive themes, to emerge from data. Adaptive theory proposes a continual engagement between the arising empirical data and the arising theoretical interpretations of the research, working in a continuous cycle with each cycle generating new explorations.
Findings
We now set out to understand the ways in which the 17 intervention wards engaged with the intervention. We are interested in how a multiplicity of engagement styles could have made an already complex intervention hypercomplex in its implementation phase. This ‘hypercomplexity’ may have served to dilute key elements of the intervention. By ‘dilution’, we mean ‘non-standardisation’ within the intervention group, thereby reducing the potential for it to be meaningfully compared with the control group. We explore three high level themes which emerged from the data. First, we describe how there were palpable differences in the ways that ward teams engaged with the intervention. Next, we look at how support for the intervention at the level of the Trust does not indicate ward-level support. Lastly, we demonstrate that standardisation of facilitative processes by the research team does not ensure that this filters down to implementation standardisation by ward staff. All quotation extracts are taken from pen portrait notes and all wards have been given pseudonyms.
The same intervention can be interacted with in highly divergent ways
We were able to categorise the intervention wards into five main ‘engagement typologies’ (see online supplementary appendix 4). These were:
consistently engaged throughout (seven wards)
partially engaged throughout (four wards)
increasing engagement as trial progressed (two wards)
decreasing engagement as trial progressed (two wards)
disengaged throughout (two wards).
Consistently engaged
This represents the largest category of how wards chose to participate in the trial, with 7 of the 17 wards residing here. These wards were fully signed up to the ethos of listening to, and acting on, patient feedback. They took part in a high proportion of the key components of the cyclical activities and made quality action plans which were largely implemented in both phases. A quality action plan can be defined as one which seeks to address issues identified in the patient data and is realistic, relatively timely and likely to be achieved. Motivation to take part in the research was high, and motivation to improve patient safety was even higher.
Partially engaged
These four wards generally did everything asked of them by the research team and largely participated in intervention components but were sometimes lacklustre in their motivation towards improving patient safety. At times, it felt like action planning was just ‘going through the motions’. The ability of staff to implement action plans was mixed although this was sometimes due to external factors rather than inertia on the part of the ward staff themselves.
Increasing engagement
These two wards began their involvement with the trial in an ambivalent and—in the case of Maple ward—even hostile manner. However, as the study progressed and the ward staff began to understand what the research team were trying to achieve, engagement with the study solidified. The similarity between these two wards (despite being at different Trusts) is that the turning point for their engagement was attendance at the peer-centred midpoint meeting. This is reflected in Maple ward’s complete U-turn with implementation of quality action plans at phase two as compared with partial implementation of weak action plans at phase one.
Decreasing engagement
Conversely, another two wards engaged with the study relatively well at the beginning but, over time, slipped in their level of interest and involvement. Cherry ward was the only ward of the 17 that did not meet in an APM in phase two. The follow-up telephone interview revealed that the ward manager for Cherry did not believe the study was a priority. Oak ward had ambitious plans for their phase one action planning but had become dejected by the amount of time their plan was taking to come into effect. Subsequently, they declined to make an action plan in the phase two APM and appeared disengaged from the study.
Disengaged throughout
Although these two wards met in APMs for both phases of the trial, they were not interested in using the PRASE data to improve patient safety and viewed the study as a burden. However, the reasons for this response differed. Rowan were a low-performing ward whose ward manager preferred to concentrate on other initiatives rather than our research study. Elm ward were outwardly hostile to the ethos of the study, critical of the comments their patients had made to researchers and defensive of staff members. Despite agreeing to hold an APM, they consistently refused to make action plans.
Through an examination of these differing engagement trajectories, we can unpick where parts of the intervention may have led to divergent strategies for local implementation on a ward-by-ward basis. These findings from ‘on the ground’ implementation by ward teams directly contradict some of the core assumptions held by the research team at the outset of intervention development—namely, that by providing facilitative processes, wards would be able to implement in a uniform manner. It is this chasm between implementation expectations and reality to which we now turn our attention.
Trust-level support for an intervention does not predict the strength of ward-level engagement
A key assumption was that strong corporate, managerial-level support from the three participating Trusts would facilitate high levels of engagement by wards. However, an examination of the differing engagement trajectories, shown in online supplementary appendix 4, throws doubt on this assumption, and we can find little consistency in engagement style between wards at the same Trust. For instance, Trust A is a small district general hospital in a semirural, affluent area. This Trust prides itself on being a forward thinking, cohesive workplace, and senior management support for this intervention was exceptionally strong. However, at the level of the ward, we can see that the four intervention wards at Trust A are spread across four distinct engagement trajectories (consistent = Beech; increasing = Maple; decreasing = Oak; disengaged = Elm). Engagement trajectories for each of the other two Trusts also differed considerably by ward. The implication here is that corporate culture—and receptivity to patient feedback at the level of the organisation—is not a simple predictor of engagement at ward level.
Unpicking these differences further, we find that despite a uniform message about the importance of a multidisciplinary approach to the study, wards seem to have interpreted this differently. Oak ward convened a strong first multidisciplinary action planning group (APG) with representatives from nursing, allied health professionals and support staff. In contrast, on some wards PRASE remained led and implemented by just one or two nursing staff. For example, Maple’s first APG consisted of just the ward manager, and pen portrait notes illustrate why.
A very tense meeting held with just the ward manager who appeared overtly stressed and about to implode. It was clear at this first APM that the ward manager had not understood the purpose of the study and became upset by some of the negative comments which her patients had made in the report. It was a difficult APM to convene as the ward manager thought she had to solve everything by herself and this was partially reinforced by the fact that she had not invited any of her staff to the APM. (Maple, Trust A)
The research team never envisaged that the intervention would be taken on by just one or two members of ward staff, and this was actively discouraged throughout but still persisted in 5 wards at phase one and 6 wards at phase two, across the 17 intervention wards. It is difficult to suggest a clear reason why this happened but it was often related to:
Front-line issues, such as no staff available to be released from direct patient care to attend APG;
A minority of ward managers viewing the study as yet another patient safety initiative that they just needed to ‘get on with’;
A misunderstanding of the multidisciplinary nature of the intervention.
Of most interest is the paucity of medical staff involved in the intervention, with only four wards (all at Trust C) involving a medic. This was unforeseen at the outset and may have contributed to action plans that were narrower in scope than those generated by a strong multidisciplinary meeting. Even those wards who managed to convene a strong multidisciplinary APG in phase one were often not able to sustain this level of input going into phase two. Towards the end of the study, it was disappointing to see that PRASE had unwittingly become badged as a ‘nursing initiative’. Medical and allied health professional input declined over time, and the workload was disproportionately shouldered by individual ward managers (managerial nurses) who were, for the most part, already overloaded in their daily clinical roles.
Furthermore, an assumption was that a tight, coherent and, most importantly, consistent group of staff would engage with the intervention throughout the 14 months of staff involvement. In reality, staff movement around the NHS estate was high. This led to difficulties regarding ownership of action planning, with some staff reluctant to proceed with action plans devised by their predecessors and others not believing it was worth the effort to become involved in the study if they were moving on shortly. A few ward teams changed their personnel completely between phases one and two of the study due to managerial reorganisation.
A massive change in staffing took place around the latter part of Phase one with a new Ward Manager and 80%–85% change in ward staff. The second phone interview revealed that other ward initiatives were taking place…the whole PRASE process was never wholly embraced because of intense ward improvement work, and staff flux, taking place at the same time. (Chestnut ward, Trust C)
It was never anticipated that such wholesale change would take place at the level of the individual ward teams within the lifetime of the trial, and the intervention was unprepared for this. There was little formal capacity to continually reintroduce PRASE to new ward staff, so researchers had to perform this unexpected role on an ad hoc basis. Critically, this points to ownership of the intervention on the ground as a key factor in success. Engagement with the intervention becomes weak if it is passed around large numbers of different staff or if staff groups change on a dramatic scale.
Standardisation of facilitative processes by the research team does not necessarily ensure implementation standardisation by ward staff
A key intention of the facilitative processes was to ensure standardisation of implementation by ward staff. The process evaluation found that, in reality, these uniform training and facilitative processes resulted in little standardisation of approach to action planning regarding (1) the issues which staff chose to focus on or (2) whether the action plans were successfully implemented (or not). Our pen portraits point to three main issues that appear to underpin why:
Implementation of action plans was often related to buy-in and collegiate working with other departments, some of whom were not willing to spend time, resource and effort on an issue which was not their own;
Existing pan-Trust safety and quality campaigns were prioritised over and above PRASE, to differing degrees which variably helped or hindered PRASE intentions;
Success was often the result of a complex interplay between the personal will of the staff involved in the APG and whether the study fitted into current ward priorities.
The following pen portrait excerpt from Apple ward exemplifies the first issue, regarding buy-in from other departments. This ward received several negative comments from patients that pain relief was not being given in a timely manner. To address this, the APG decided they needed assistance from pharmacy, but this was not forthcoming and APG members were disappointed. This led to the contradictory position in phase two of engagement remaining very much present while the act of action planning itself became tokenistic.
This ward stayed engaged with the project the entire way through despite setbacks with their earliest action plan. The ward manager in particular clearly understood the purpose of the study and was sympathetic to receiving patient feedback. However, inertia may have crept in as their ‘outside the box’ thinking in phase one did not get any buy in from the pharmacy department. Action planning in phase two then became perfunctory even though engagement was still high. (Apple, Trust B)
The second issue, of other safety campaigns being prioritised above this study, relates to the capacity that ward staff have within their normal clinical roles to undertake improvement work. In several of the pen portraits, PRASE was described as ‘just one of many improvement initiatives which this ward are involved in’. Wards were under pressure to take part in hospital-wide initiatives that executive teams had deemed to be of most importance. While there was senior support for PRASE, it was not always significant in comparison with other initiatives. In some cases, the existence of other high-profile campaigns supported staff in achieving their PRASE action plans. Trust C launched a well-received ‘Hello my name is…’ patient experience campaign, tying into national acknowledgement of the need for staff to introduce themselves and communicate better with patients at the bedside. On the wards where PRASE feedback had also drawn attention to this need, staff were supported (through badges, awareness training and senior support) to respond.
However, the flip side of attention on more high-profile campaigns was that, for some wards, PRASE became sidelined. Associated with this was a feeling of patient safety and quality ‘fatigue’: the number of initiatives in this area was felt to be too great and therefore a burden on staff time.
I got the sense that the ward manager saw PRASE as just another audit which she needed to go through the motions of…At one point during phase two, she admitted that in phase one she did not see the value in the study as she thought it just replicated other patient experience measures her ward is involved in. However, now she appreciates how it is different from the other measures. Working out where PRASE fitted in with other initiatives seemed to be a big issue for this ward manager. (Pine Ward, Trust B)
One strong finding to emerge was the use of PRASE data to reinforce safety or quality issues which the ward staff knew about tacitly but did not have robust data on to report to senior management. This finding proved divisive. Some wards were pleased that the PRASE study reinforced staff opinion about the ward or validated, on a larger scale, the results of local audits. However, a minority of staff became irritated and instead viewed it as duplication.
Discussion
As introduced at the outset, some process evaluations are able to reveal specific ‘pinch points’ within the intervention itself,13 or within the overall setting in which it was applied,14 which help to explain why no effect was seen. The Consolidated Framework for Implementation Research identifies the ‘inner setting’ of an organisation as influential in whether or not implementation can be achieved, where attention must be paid to the domains of structural characteristics, networks and communications, culture and implementation climate.19 Our process evaluation found that this inner setting was so varied between wards within the intervention group that it led to a general ‘dilution’ of intervention implementation. We found striking differences between wards across all the above domains of the ‘inner setting’—stability of ward teams, quality of relationships between different wards, basic assumptions towards receiving patient feedback and a learning climate (or lack of one). Significantly, we saw changes to the ‘inner setting’ constructs over time. This in-depth analysis of what happened within the intervention group generates useful insights for the implementation of, and staff engagement with, patient safety initiatives, to which we now turn our attention.
The improvement of patient safety is already acknowledged as a cultural issue, dependent on factors such as teamwork, leadership and organisational processes operating at and between multiple levels.20 Navigating this territory—particularly the link between ‘sharp end’ ward safety initiatives and ‘blunt end’ corporate planning—has been documented as a necessary challenge. Initiatives which do not pay adequate attention in this regard are at best destined to fail and at worst may overburden already demotivated staff.21 The facilitative processes incorporated into PRASE were designed to help address this challenge. These processes arose from the findings of feasibility testing of the intervention,7 where our research team found that, for example, access to senior management at regular intervals and assistance in interpreting patient feedback were important factors which may support action planning. The assumption was that by providing these processes, staff would be better placed to successfully navigate complex organisational territory. It was anticipated that this would allow some uniformity among the intervention group. In actuality, the facilitative processes were not adequate to ensure any such uniformity.
Dixon-Woods et al22 developed an ex post theorisation of why impressive results were seen in the original Michigan intervention—to decrease central line infections—in the USA. Six reasons are proposed as to why the programme worked. Of particular applicability is the ‘creation of a networked community’, where ward teams came together to build rapport and support each other while identifying and resolving common barriers. Although part of the facilitative processes within the PRASE intervention aimed to attend to this need, it is likely that the community of wards involved in the study never reached the critical threshold of becoming an organic community who regularly reached out to each other. Further, specific leaders were targeted in the Michigan programme, including hospital executives and clinical team leaders. This involvement of leaders at differing levels of the organisation is theorised as being integral to the success of the programme. Conversely, we found that involving senior management and matrons prior to the start of the study, and then throughout its entirety, had minimal effect on strengthening engagement with the intervention on the ground by front-line ward staff. Questions regarding the ability of senior management to support consistency of intervention adoption throughout an organisation, and the processes required to enable this, were raised by our study and certainly warrant further exploration.
In this study, the relationships between different parts and levels of the organisation, from senior management to ward teams to individuals, were vital in achieving success. When these levels align well, as they appeared to do in one Trust—with respect to a culture shift around introducing and communicating with patients via an external patient experience campaign—it appears that much can be achieved. When they do not—for example, Apple ward, who did not get buy-in from the pharmacy department—staff at ward level can become frustrated and demotivated. We therefore question the capacity of an externally designed intervention, even one with significant resources and facilitative processes, to provide mechanisms that continually adapt to the organisational alignment between the sharp and blunt end at different institutions. The challenges revealed here concern deeper organisational culture, systems and processes that need longer term development.
Our findings support the growing understanding that emphasis in patient safety research must continue to shift from the measure and manage orthodoxy of data collection towards interpretation and process.10 In this research, the collection of patient feedback was the least problematic element. The complexity of what staff are being asked to do in interventions like PRASE (navigating multilayered organisational systems to implement improvements) requires much more consideration. In the broader but related policy area of patient experience, the overt emphasis and huge resource allocated to collecting patient experience data have not been matched by efforts to use the feedback and evaluate its impact on service improvement.23 There is increasing recognition that using data sources to change practice demands creativity and skill from staff, yet the tendency remains to present staff with data and expect change to happen as a result.24 Our intervention considered these issues a priori and facilitative processes were therefore built into the trial, yet they were not sufficiently robust to ensure standardised implementation across intervention wards.
One interpretation of PRASE could be that ‘it failed’ because no effect was shown between the intervention and control wards. We believe this is a simplistic view which does not take into account the wealth of positive benefits which patients and staff gained. First, it showed that patients are able to give feedback about the safety and quality of care and that they want to do this en masse (our consent rate was 85% of those patients approached by researchers, and 2400 patients were recruited to the study across five different hospital sites). Second, the process evaluation showed that most staff do believe the patient voice is important and that there is an imperative to listen to, and act on, this voice. Third, despite local struggles, most staff do want to action plan to improve their patients’ care. The majority of the wards were receptive to receiving patient feedback—it is when they tried to move improvement work forward that problems arose.25 An additional gain which some staff identified was the ability of the patient feedback to allow staff to understand not only the patient perspective but also patients’ priorities, and to visualise the ward environment and systems ‘through the eyes of the patient’.
Limitations
We cannot know whether positive attitudes towards patient involvement in patient safety have continued on the intervention wards after the trial was completed. Improvements that staff were working towards may have gained impetus since the research team left. Equally, involvement in the trial may have kick-started ideas which, although they did not come to fruition within its life span, may now be fuelling ward level action. Conversely, staff may have felt disempowered to enact improvement to the ward environment if their PRASE action plans had floundered. We have no means of measuring this ‘after effect’—either positive or negative—and little scope for knowing at what time point evidence of a more long-lasting effect might be captured.
Our pen portrait methodology is a culmination of all the sources of qualitative data collected and has inherent weaknesses. The methodology is still in its relative infancy in relation to the way in which we have utilised it here. We were careful to draw equally on all sources of data to build a comprehensive narrative of the engagement trajectory of each ward. However, a different analysis paying attention to fewer sources, or an unequal weighting of sources, may have pulled the narrative account in a slightly different direction. Further, it was difficult to categorise some wards firmly into their allocated engagement typology and arguably some could fit into several. Finally, the process evaluation methods were developed prior to the start of the trial, so the design was very open. We devised a loose structure to capture qualitative intelligence on key trial processes. If prior knowledge had existed that diversity of engagement within the intervention wards would be so significant, it may have been possible to target a particular process evaluation framework for analysing outcomes in relation to this diversity, such as the ‘diffusion of innovation model’.26 It is possible that utilisation of different methods may have provided other answers as to where different elements of the intervention worked, for whom and why.
Conclusion
Whereas previous process evaluations point towards specific pinch points or broader cultural issues to understand why an intervention showed no effect, this study points to an overall ‘dilution effect’ of the intervention. This was largely due to wards engaging with the intervention in highly divergent ways despite the standardisation of key components by the research team. The facilitative processes were inadequate to ensure full engagement across all wards in the study. A disconnect existed between senior management support for the study and how ward staff on the ground engaged with it locally. These findings assist in explaining why the trial saw no effect between intervention and control wards.
Acknowledgments
Thanks to Gemma Louch, Jane Heyhoe and Yvonne Birks for research assistance in conducting some of the telephone interviews.
Footnotes
Twitter: laurainsaltaire
Contributors: RL, GA and JW are grant holders of the programme grant. LS devised the methodology of the process evaluation and wrote the qualitative protocol, with assistance from CM. LS, CM and JOH collected all data. LS and CM analysed and interpreted all data, with intellectual input from GA. LS and CM cowrote the first draft of the paper. All authors edited the first draft of the paper. All authors read and approved the final manuscript.
Funding: This study was funded by the National Institute for Health Research (NIHR) under its programme Grants for Applied Research scheme (‘Improving patient safety through the involvement of patients’, RP-PG-0108-10049).
Disclaimer: The views expressed are those of the authors and not necessarily those of the National Health Service (NHS), the NIHR or the Department of Health.
Competing interests: None declared.
Patient consent: Obtained.
Ethics approval: The study was approved by South Yorkshire NHS Research Ethics Committee on 15 March 2013 (Ref: 13/YH/0077). All participants gave informed consent to take part in this study.
Provenance and peer review: Not commissioned; externally peer reviewed.
Data sharing statement: Unpublished data is not available.
References
1. Bacon N. A smoke-alarm for patient safety and healthcare quality. Neil Bacon Blog 2010. http://neilbacon.wordpress.com/2010/12/13/a-smoke-alarm-for-patient-safety-and-healthcare-quality/ (Retrieved June 2015).
2. Francis R. Report of the Mid Staffordshire NHS Foundation Trust Public Inquiry. London: The Stationery Office, 2013.
3. Berwick D. Improving the Safety of Patients in England. London: Department of Health, 2013.
4. Keogh B. Review into the Quality of Care and Treatment Provided by 14 Hospital Trusts in England: Overview Report. London: Department of Health, 2013.
5. Giles SJ, Lawton RJ, Din I, et al. Developing a patient measure of safety (PMOS). BMJ Qual Saf 2013;22:554–62. doi:10.1136/bmjqs-2012-000843
6. McEachan RR, Lawton RJ, O'Hara JK, et al. Developing a reliable and valid patient measure of safety in hospitals (PMOS): a validation study. BMJ Qual Saf 2014;23:565–73. doi:10.1136/bmjqs-2013-002312
7. O'Hara JK, Lawton RJ, Armitage G, et al. The patient reporting and action for a safe environment (PRASE) intervention: a feasibility study. BMC Health Serv Res 2016;16:676. doi:10.1186/s12913-016-1919-z
8. Sheard L, O'Hara J, Armitage G, et al. Evaluating the PRASE patient safety intervention—a multi-centre, cluster trial with a qualitative process evaluation: study protocol for a randomised controlled trial. Trials 2014;15:420. doi:10.1186/1745-6215-15-420
9. Lawton R, O'Hara JK, Sheard L, et al. Can patient involvement improve patient safety? A cluster randomised control trial of the Patient Reporting and Action for a Safe Environment (PRASE) intervention. BMJ Qual Saf 2017. doi:10.1136/bmjqs-2016-005570
10. Lamont T, Waring J. Safety lessons: shifting paradigms and new directions for patient safety research. J Health Serv Res Policy 2015;20:1–8. doi:10.1177/1355819614558340
11. Oakley A, Strange V, Bonell C, et al. Process evaluation in randomised controlled trials of complex interventions. BMJ 2006;332:413–6. doi:10.1136/bmj.332.7538.413
12. Grant A, Treweek S, Dreischulte T, et al. Process evaluations for cluster-randomised trials of complex interventions: a proposed framework for design and reporting. Trials 2013;14:15. doi:10.1186/1745-6215-14-15
13. Elford J, Sherr L, Bolding G, et al. Peer-led HIV prevention among gay men in London: process evaluation. AIDS Care 2002;14:351–60. doi:10.1080/09540120220123739
14. Dixon-Woods M, Leslie M, Tarrant C, et al. Explaining Matching Michigan: an ethnographic study of a patient safety program. Implement Sci 2013;8:70. doi:10.1186/1748-5908-8-70
15. Pleschberger S, Seymour JE, Payne S, et al. Interviews on end-of-life care with older people: reflections on six European studies. Qual Health Res 2011;21:1588–600. doi:10.1177/1049732311415286
16. Tod AM, Lusambili A, Homer C, et al. Understanding factors influencing vulnerable older people keeping warm and well in winter: a qualitative study using social marketing techniques. BMJ Open 2012;2:e000922. doi:10.1136/bmjopen-2012-000922
17. Nettleton S, Neale J, Stevenson C. Sleeping at the margins: a qualitative study of homeless drug users who stay in emergency hostels and shelters. Crit Public Health 2012;22:319–28. doi:10.1080/09581596.2012.657611
18. Layder D. Sociological Practice: Linking Theory and Research. London: Sage, 1998.
19. Damschroder LJ, Aron DC, Keith RE, et al. Fostering implementation of health services research findings into practice: a consolidated framework for advancing implementation science. Implement Sci 2009;4:50. doi:10.1186/1748-5908-4-50
20. Taylor SL, Dy S, Foy R, et al. What context features might be important determinants of the effectiveness of patient safety practice interventions? BMJ Qual Saf 2011;20:611–7. doi:10.1136/bmjqs.2010.049379
21. Dixon-Woods M, Baker R, Charles K, et al. Culture and behaviour in the English National Health Service: overview of lessons from a large multimethod study. BMJ Qual Saf 2014;23:106–15. doi:10.1136/bmjqs-2013-001947
22. Dixon-Woods M, Bosk CL, Aveling EL, et al. Explaining Michigan: developing an ex post theory of a quality improvement program. Milbank Q 2011;89:167–205. doi:10.1111/j.1468-0009.2011.00625.x
23. Coulter A, Locock L, Ziebland S, et al. Collecting data on patient experience is not enough: they must be used to improve care. BMJ 2014;348:g2225. doi:10.1136/bmj.g2225
24. Gkeredakis E, Swan J, Powell J, et al. Mind the gap: understanding utilisation of evidence and policy in health care management practice. J Health Organ Manag 2011;25:298–314. doi:10.1108/14777261111143545
25. Sheard L, Marsh C, O'Hara J, et al. The Patient Feedback Response Framework—understanding why UK hospital staff find it difficult to make improvements based on patient feedback: a qualitative study. Soc Sci Med 2017;178:19–27. doi:10.1016/j.socscimed.2017.02.005
26. McMullen H, Griffiths C, Leber W, et al. Explaining high and low performers in complex intervention trials: a new model based on diffusion of innovations theory. Trials 2015;16:242. doi:10.1186/s13063-015-0755-5