Journal of the Intensive Care Society

2018 Jun 12;19(2 Suppl):1–162. doi: 10.1177/1751143718772957

Intensive Care Society State of the Art 2017 Abstracts

PMCID: PMC6136119

Abstract

Selected abstracts were presented as orals within the conference programme and the remaining accepted abstracts were presented as electronic posters.

Abstracts selected as oral presentations

OP.001

Supporting the patient innovator: Developing a novel communication device for tracheostomy patients in the ICU

Fiona Howroyd1, Ruth Capewell1 and Charlotte Small1

1Queen Elizabeth Hospital, Birmingham, UK

Abstract

F Howroyd, R Capewell, C Small, D Buckley, L Buckley, V Shingari, C Qian, D McWilliams, C Dawson, C Snelson (2017)

His inability to communicate effectively whilst he had a tracheostomy on the Intensive Care Unit (ICU) had such a profound impact on Duncan Buckley and his wife, Lisa-Marie, that they developed a concept for a novel interactive communication device, called ICU CHAT. Together, they have been embedded within the multidisciplinary ICU research team at the Queen Elizabeth Hospital Birmingham (QEHB), supported by the Human Interface Technologies team from the University of Birmingham, and funded by the National Institute for Health Research Surgical Reconstruction and Microbiology Research Centre, to further develop their prototype for clinical trial.

The team followed a human-centred design process (1), with the following stages:

1. Literature review of Augmentative and Alternative Communication (AAC) devices for patients with a tracheostomy on the ICU.

2. Bench testing with ICU survivors, family members and staff.

3. Multidisciplinary development of a clinical trial protocol.

The review determined that although there are a range of AAC devices available on the market, there are practical limitations to their use. The current literature does not explore factors relating to device requirements and patient usability in the ICU.

Bench testing established the safety and appropriateness of system components by combining usability assessment with qualitative appraisal. Testing determined that both patients and staff preferred a tablet/laptop-sized screen as the visual display, attached to a mobile stand. The interface with the best usability ratings was the camera mouse: laptop software that tracks facial gestures to control an on-screen cursor.

The clinical trial protocol “Feasibility of the use of a novel interactive communication device, ICU-CHAT, for patients with tracheostomy on the ICU” has secured Health Research Authority permissions and is open to recruitment at QEHB.

This early translational research demonstrates the ability of clinical and academic teams to support potential patient innovators, whose reflections on their ICU experiences enable technology development to be truly patient-centred. The structured process used by the research group optimises the usability engineering process (2) and facilitates regulatory approval prior to rigorous evaluation in a clinical trial.

Bibliography

  • 1.BSI (2010) Ergonomics of human-system interaction – Part 210: Human-centred design for interactive systems (ISO 9241-210:2010). London: British Standards Institute.
  • 2.IEC (2007) IEC 62366: 2007 Medical devices–Application of usability engineering to medical devices. Geneva: International Electrotechnical Commission.

OP.002

The burden of early oliguria in critical illness

Neil Glassford1,2, Johan Mårtensson3, David Garmory1, Glenn Eastwood1,2, Michael Bailey2 and Rinaldo Bellomo1,4

1Department of Intensive Care Medicine, Austin Health, Melbourne, Australia

2ANZICS-RC, School of Public Health and Preventative Medicine, Monash University, Melbourne, Australia

3Section of Anaesthesia and Intensive Care Medicine, Department of Physiology and Pharmacology, Karolinska Institutet, Stockholm, Sweden

4School of Medicine, University of Melbourne, Melbourne, Australia

Abstract

Early oliguria (EO) during the first 24 h of intensive care unit (ICU) admission is more likely to be related to the reason for admission than to subsequent complications or therapeutic intervention, and so may be a more accurate predictor of outcome than later estimates of urine output (UO). Moreover, the accumulation of a number of discontinuous hours of EO may offer an earlier indication of renal dysfunction than the fixed periods common to modern definitions of acute kidney injury (AKI).

We used electronic patient record data, including hourly fluid balance information, to explore EO in terms of severity and duration, and to test whether patients accumulating hours of EO differ in demographics, process of care and outcomes from those who do not. We developed statistical models to assess the predictive value of EO.

We studied 1911 patients; 61.6% were male, with a median age of 65 (IQR 51.3–74.7) years and a median APACHE III score of 56 (IQR 42–72). Over the first 24 h of ICU admission, 1215 (63.6%) patients experienced EO, defined as a cumulative total of ≥4 hours of oliguria (urine output <30 ml/h). Of these, 191 (15.7%) were exposed to EO within 6 hours of admission. Patients in the EO group were more unwell, required significantly more interventions and had higher ICU and hospital mortality (Table 1). EO was independently associated with ICU and hospital mortality after comprehensive composite adjustment (adjusted OR per hour of EO 1.05, 95% CI 1.02–1.08, p = 0.003 and 1.04, 95% CI 1.02–1.07, p = 0.001 respectively).

We identified that EO accumulation is a common occurrence in patients admitted to the ICU for 24 h or more. The early burden of oliguria appears to be an important and novel additional predictor of outcome in critically ill patients.

OP.003

Long term consequences of acute kidney injury in survivors of critical illness – a national population-based cohort study

Steven Tominey1, Robert Lee2, Timothy S Walsh3,4 and Nazir Lone2,4

1University of Edinburgh Medical School, Edinburgh, UK

2Usher Institute of Population Health Sciences and Informatics, Edinburgh, UK

3MRC Centre for Inflammation Research, Edinburgh, UK

4University Department of Anaesthesia, Critical Care, and Pain Medicine, Edinburgh, UK

Abstract

Background: During periods of critical illness, acute kidney injury (AKI) is commonplace and in a proportion of patients renal replacement therapy (RRT) is required. Little research has been published on the long-term consequences of kidney injury in ICU survivors. We aimed to evaluate the association between AKI and both mortality and emergency hospital readmission at one year following discharge for a complete five-year cohort of Scottish ICU survivors. Additionally, we aimed to explore the causes of both outcomes.

Table 1.

Characteristics and outcomes of patients with early oliguria.

|                                 | Non-EO (n = 696)    | EO (n = 1215)       | p-value |
|---------------------------------|---------------------|---------------------|---------|
| Age, years                      | 62.95 (50.43–73.24) | 66.35 (51.47–76.01) | 0.004   |
| Male                            | 487 (69.97%)        | 691 (56.87%)        | <0.001  |
| APACHE 3 score                  | 54 (40–67)          | 58 (43–75)          | <0.001  |
| Surgical admission              | 319 (45.83%)        | 617 (50.78%)        | 0.08    |
| IPPV during admission           | 510 (73.28%)        | 872 (71.77%)        | 0.49    |
| IPPV at admission               | 481 (94.31%)        | 763 (87.5%)         | <0.001  |
| Duration of IPPV, hours         | 17.35 (8.82–49.25)  | 24.32 (11–82.03)    | <0.001  |
| Baseline creatinine, micromol/l | 80 (63–110)         | 78 (60–116)         | 0.33    |
| KDIGO AKI 3 at 24 h             | 12 (1.7%)           | 122 (10%)           | <0.001  |
| KDIGO AKI 3 at day 7            | 24 (3.5%)           | 172 (14.2%)         | <0.001  |
| CRRT during admission           | 14 (2%)             | 127 (10.5%)         | <0.001  |
| ICU mortality                   | 30 (4.31%)          | 111 (9.14%)         | <0.001  |
| Hospital mortality              | 51 (7.33%)          | 175 (14.42%)        | <0.001  |

Values are n (%) or median (IQR).

Methods: Participants were identified from the Scottish Intensive Care Society Audit Group database (01/01/2009–31/12/2013; n = 33,764) and linked to national hospital and death records. Those with end-stage renal disease requiring dialysis were excluded. The primary exposure was receipt of RRT; the secondary exposure was AKI derived using modified-RIFLE criteria. Outcomes were 1-year mortality and emergency hospital readmission. Exploratory analyses were undertaken to examine causes of mortality and first emergency readmission. Associations between potential risk factors and outcomes were estimated using univariable and multivariable Cox regression to adjust for potential confounders.

Results: Of 33,764 participants, 2,137 (6.33%) required RRT and 4,817 (13.96%) developed AKI defined by modified-RIFLE criteria. RRT was associated with increased crude 1-year mortality (10.90% vs 8.34%; HR = 1.33, 95% CI 1.16–1.52, p < 0.001), as was AKI (10.92% vs 8.38%; HR = 1.33, 95% CI 1.21–1.46, p < 0.001). After adjustment for potential confounders, these associations were no longer significant (RRT adjHR = 1.08, 95% CI 0.93–1.26, p = 0.297; AKI adjHR = 0.96, 95% CI 0.86–1.07, p = 0.456). However, both RRT (47.03% vs 40.30%; adjHR = 1.09, 95% CI 1.01–1.17, p = 0.022) and AKI (47.50% vs 40.02%; adjHR = 1.08, 95% CI 1.03–1.13, p = 0.003) were associated with an increased risk of 1-year emergency readmission, persisting after adjustment. The causes of death and emergency readmission differed in those receiving RRT compared to those not. Specifically, there were increases in renal and endocrine causes of mortality and readmission, particularly acute and chronic kidney disease, and diabetes mellitus. For 1-year emergency readmission, increased gastrointestinal causes were additionally associated with receipt of RRT.

Conclusions: This large, population-level cohort study has demonstrated that receipt of RRT is associated with a small increased risk of 1-year emergency readmission but not mortality. These findings will help clinicians and patients understand the long-term consequences associated with AKI and receipt of RRT during an ICU stay. The increased risk of readmission may indicate that hospital discharge policies need to be enhanced to reduce the risk of emergency readmission in patients receiving RRT.

Acknowledgements: Funding: Medical Research Scotland. We wish to thank SICSAG for providing data and the staff at participating hospitals.

OP.004

Rib fractures: Elderly patients receive lower standards of care than younger patients

Neil Roberts1, Emma Harrison1, James Butler1, Julia Gibb2, Rebecca Norman2, Jonathan Outlaw2, Jonathan Abeles2, Laura Shepherd1, Ruth Creamer1 and Ben Warrick1

1Royal Cornwall Hospitals Trust, Truro, UK

2University of Exeter Medical School, Truro, UK

Abstract

Background: Rib fractures represent a significant proportion of trauma seen in Emergency Departments. There is often associated lung injury with contusion or haemopneumothorax. Analgesia and respiratory support represent the cornerstones of management. There is an increasing frailty burden in healthcare. Elderly patients have previously been shown to receive poorer head injury care than younger patients. This audit examined whether this is the case with rib fractures too.

Methods: Retrospective audit of all adult patients with rib fractures from primary traumatic events who were admitted for active treatment to a district general hospital over a 6-month period (July–Dec 2015). Patients were identified through TARN, the WebPACS imaging system and the emergency department software database, cross-referenced, and then imaging and notes were reviewed. Demographics and characteristics of injury were recorded, along with markers of care such as level of trauma call, maximum imaging, critical care admission and advanced analgesia (epidural or patient-controlled analgesia) use, and outcomes including length of stay (LOS) and 30-day mortality.

Results: 43 patients were identified for inclusion after review of 2461 imaging reports and 58 sets of notes. 15 (35%) were not captured by TARN. 17 (40%) were low-energy mechanism (fall < 2 m). Median age was 67 (range 32–96), with a median of 5 fractures (range 1–22). 8 had flail chest (19%). Median hospital LOS was 6 days (range 1–25). Median Charlson Comorbidity Index was 4 (range 0–11). Median ISS peaked at 16.5 at age 51–60 but was still 9 at age 81–90 and 6 at 91–100. Overall 30-day mortality was 11.6%. Hospital trauma calls decreased from 75% in age group 51–60 to 0% at age 91–100, with an increase in ‘no trauma call’ from 25% to 100%. Full trauma CT decreased from 88% to 25% across these age groups, with 75% of those aged 91–100 having only chest XR as maximum imaging. Primary critical care admission decreased from 50% at age 61–70 to 0% at age 91–100. Elderly patients were less likely to receive advanced analgesia. Median LOS increased with age. 30-day mortality was 43% at age 81–90 and 50% at 91–100, with no other patients dying.

Conclusions: Elderly patients receive less aggressive, lower-standard care and have higher mortality and longer LOS despite less severe injuries. Earlier recognition of these injuries may facilitate improved care pathways and outcomes. Hospitals should have a lower threshold for hospital trauma call and trauma scan in elderly patients.

OP.005

Socioeconomic status is associated with 30-day mortality after injury: A cross-sectional analysis of national TARN data

Philip McHale1, Daniel Hungerford1, Tim Astles2 and Ben Morton2,3

1University of Liverpool, Liverpool, UK

2Aintree University Hospital NHS Foundation Trust, Liverpool, UK

3Liverpool School of Tropical Medicine, Liverpool, UK

Abstract

Introduction: The relationship between socioeconomic status and mortality is well defined across multiple different pathologies. However, the relationship between deprivation and survival from trauma is less clear. There is substantial evidence of a social gradient in injury risk but contradictory evidence of a gradient for mortality. To address this issue, we analysed this relationship using data from the Trauma Audit and Research Network (TARN).

Methods: We obtained TARN data for patients admitted to hospitals in England and Wales with trauma (TARN identifies patients through Hospital Episode Statistics) from January 2015 to January 2016. This dataset includes Injury Severity Score (ISS), demographics, four-digit postcode and 30-day mortality. Using mortality as a binary outcome, we performed multiple logistic regression with age group, sex and ISS (split into minor <15, major 15–24, and severe 25+). Additional analysis was performed, stratifying injuries into minor and major categories (using the ISS threshold of 15). National quintiles of deprivation were constructed from the four-digit postcode using area-weighted Lower Super Output Area Index of Multiple Deprivation (IMD) scores.

Results: There were 52,422 patients admitted to hospitals in England and Wales with trauma. Compared to patients from the least deprived quintile, those from the most deprived quintile were significantly more likely to die within 30 days. Other quintiles also showed significantly increased odds, although there was no clear social gradient. Additionally, increasing ISS was associated with increased mortality, as were increasing age and male sex. Stratified analysis showed that sex and IMD were not significantly associated with mortality for major injuries, but both were significant for minor injuries. The adjusted odds ratio for those in the most deprived areas was 1.39 compared to those from the least deprived (p < 0.001).

Conclusion: The results show that deprivation was related to mortality only in minor injuries. This is potentially because the strength of effect of injury severity in major trauma overwhelms any effect of deprivation (or other demographics) on mortality. Alternatively, the increased resources available for the care of major injuries may ameliorate the effect of demographics. Targeting older patients with minor trauma from more deprived backgrounds for preventative interventions, and considering clinical practice pathways (e.g. increased orthogeriatrician input), could potentially improve outcomes for these patients.

Oral presentation in session 39 of the Conference programme – Abstract ID: 0405

Agitation bubble contrast (ABC) – a novel sonographic sign for diagnosing free intraperitoneal gas in the presence of peritoneal free fluid

Emese Kinga Gaal1, Theophilus Samuels1 and Matyas Andorka1

1Surrey and Sussex Healthcare NHS Trust, Redhill, UK

Abstract

Introduction: Free intraperitoneal gas can be detected using either an erect chest radiograph or a computed tomography (CT) scan. Obtaining this imaging can be difficult and exposes a critically ill patient to the stresses of intra-hospital transfer.

Bedside sonography is an incredibly useful tool. By completing non-specialist training, such as the Core Ultrasound Skills in Intensive Care (CUSIC) accreditation, the operator can diagnose significant pathology at the bedside.

We present a novel sonographic sign that obviated the need to perform a radiological investigation for free intraperitoneal gas.

Methods: We describe a case involving a 59-year-old man admitted with severe hospital-acquired pneumonia. He deteriorated significantly, developing severe metabolic acidosis and increasing vasopressor requirements, and required invasive ventilation. On day 3 post admission he developed abdominal tenderness; an urgent surgical review requested a CT scan. During preparation for the CT transfer we performed a focused bedside ultrasound scan (Sonosite X-Porte, Fujifilm).

Results: Bedside ultrasonography confirmed hypovolaemia, left lower lobe pneumonia, and free fluid with gas artefact above the liver with an oscillating gas-fluid interface.

Differential diagnoses for the gas artefact were aerated lung at the anterior costophrenic angle, free peritoneal gas, or a gas-filled bowel loop; aerated lung was visible sliding cephalad to this gas artefact (1). We hypothesised that applying a ballottement technique to the right side of the abdomen, following appropriate analgesia, would generate bubbles in the free peritoneal fluid.

Swirling of bubbles in the peritoneal fluid confirmed the suspected free peritoneal gas, which would not have occurred had the gas artefact originated from a gas-filled bowel loop (Figure 1; permission to use image given by next of kin).

Discussion: The result was discussed with the surgical team, and the patient was taken directly to the operating theatre. Exploratory laparotomy confirmed free peritoneal fluid and found a duodenal perforation.

While radiological investigations may be superior to bedside sonography for detecting free air, by using this novel manoeuvre, which to the best of our knowledge has not previously been described, we avoided unnecessary delay and radiation exposure.

Conclusion: The agitation bubble contrast manoeuvre can be a useful tool to confirm free peritoneal gas sonographically in the presence of free abdominal fluid.

Reference

  • 1.Goudie A. Detection of intraperitoneal free gas by ultrasound. Australasian Journal of Ultrasound in Medicine 2013; 16: 56–61.

Abstracts selected for ePoster presentations

EP.001

Delayed donation after brain death: Concept and support level by Flemish donor coordinators

Karen Embo1, Willem Stockman2, Piet Lormans1 and Johan Froyman2

1AZ Delta, Roeselare, Belgium

2AZ Delta Roeselare, Roeselare, Belgium

Abstract

Objective: In the practice of organ donation, demand exceeds supply; therefore new techniques and concepts are being explored. Delayed DBD (donation after brain death) is a concept in which stabilising care is intentionally continued in a potential donor who has yet to reach the point of brain death. It concerns patients with evolving intracranial pathology in whom further therapy is futile and who are eligible for donation. By prolonging care, the harvesting procedure can be transformed from a DCD (donation after circulatory death) into a DBD procedure. The objective is to extend the donor pool, augment the organ-per-donor ratio and improve organ quality by lowering total ischaemia time.

Figure 1. A: fluid-gas interface (white arrow), B: bubbles after ABC manoeuvre (white arrows).

Delayed DBD was defined as a DBD procedure in which donor management (the time between establishing futility of further therapy and the declaration of brain death), exceeds 24 hours.

Of course, this can only be carried out with the full support of staff and the family of the potential donor.

Belgium is a member of Eurotransplant. Each participating hospital appoints two donor coordinators (one nurse, one physician) responsible for everything concerning organ donation. With our study we aimed to introduce the concept to all Flemish donor coordinators, measure support and identify difficulties.

Methods: An electronic survey was designed and distributed. The concept was explained, followed by a questionnaire to investigate support and concerns.

Results: 94 coordinators received the survey. The response rate was 38% (18 nurses, 18 physicians). 25 (69.4%) were overall supportive, 8 (22.2%) were neutral and 3 (8.3%) dismissive. A chi-square test showed no difference in support between nurses and physicians (p > 0.05). 21 (58.3%) thought this policy could be explained to the family of the donor; 4 (11.1%) did not. 28 (77.7%) found it defensible amongst their team; 5 (13.9%) did not. Most coordinators, 33/36 (91.7%), found the policy justifiable towards the donor; 1 (2.8%) did not. The cost of extra ICU admission days was not an issue for 23 coordinators (63.9%); 5 (13.9%) considered it a problem. 6 (16.7%) found the occupation of an ICU bed in view of other admissions problematic; 25 (69.4%) did not.

Conclusion: Most donor coordinators were supportive of the delayed DBD practice. Difficulties were mainly reported in explaining the policy towards the family.

Potential bias must be noted. Responders are possibly more engaged and might respond more enthusiastically to new concepts in organ donation.

EP.003

Intensive Care staff perceptions of palliative care delivery in their Intensive Care Units

Manon Lewis1

1St Georges University Hospitals NHS Foundation Trust, London, UK

Abstract

Background: End of life discussions occur daily in most intensive care units (ICUs). This work aims to explore perceptions of the delivery of end of life care (EoLC) in ICU, to identify staff members' feelings regarding palliation, and to determine whether EoLC is perceived to be delivered well.

Methods: 47 staff members from three adult intensive care units within one hospital (General, Neuro and Cardiothoracic ICU) participated via structured interviews. Clinical intensive care doctors and nurses of all grades and experience were included. Data were collected, then categorised into response themes to expose trends.

Results: 46 participants (98%) felt EoLC was part of ICUs responsibility, and 34 participants (72%) reported feeling comfortable and competent managing EoLC.

22 (46%) participants would seek advice from the hospital palliative care team (PCT).

34 participants (72%) believe EoLC is delivered well on their ICUs.

Discussion: The main theme demonstrated was the perceived delay in active withdrawal of treatment. Reasons for this include varying EoLC experience amongst doctors. Delays were also attributed to a perceived reluctance from seniors to withdraw care from patients whose families were against palliation, either for cultural or religious reasons. Although this unease was clear, there was no mention of litigation or complaints. Numerous nursing staff reported a lack of timely DNACPR form completion, clear ceilings of care and timely withdrawal plans.

Secondly, there was resistance amongst staff to reducing physiological monitoring during withdrawal. Multiple staff members suggested this reluctance was related to the stigma of the Liverpool Care Pathway. It was felt that, without numerical values, symptoms could not be adequately assessed. No member of staff made reference to the ICU end of life qualitative symptom chart, which is available on the unit.

Finally, there was discordance regarding referral to palliative medicine. Some staff members looked directly to the PCT to advise on EoLC, whereas others would seek guidance within ICU. This discordance may be related to a lack of evidence-based gold standards for EoLC in ICU.

Conclusion: Improved EoLC in ICU needs better guidance and education to empower senior decision making. Principles, rather than tick-box guidance, may resolve the unease related to end of life care and allow sufficient flexibility to individualise palliation of complex ICU patients.

This guidance should be based on a combination of palliative care and ICU withdrawal gold standards in order to achieve excellence in evidence-based, standardised care.

EP.004

The rule of 3s – three factors that triple the likelihood of families overriding first person consent for organ donation in the UK

James Morgan1, Paul Murphy2,3, Dale Gardiner3,4, Cathy Hopkinson5, Cathy Miller5 and Olive McGowan5

1Yorkshire Deanery, Leeds & Bradford School of Anaesthesia, UK

2Leeds Teaching Hospitals NHS Trust, Leeds, UK

3NHS Blood and Transplant, London, UK

4Nottingham Teaching Hospitals NHS Trust, Nottingham, UK

5NHS Blood and Transplant, Bristol, UK

Abstract

Between April 1st 2012 and March 31st 2015, 263 of the 2244 families in the UK whose loved ones had registered to donate organs for transplantation after their death on the NHS Organ Donor Register chose to override this decision, an override rate of 11.7%. Multivariable logistic regression analysis was applied to data relating to various aspects of the family approach in order to identify factors associated with such overrides. The factors associated with family overrides were failure to involve the Specialist Nurse for Organ Donation (SNOD) in the family approach (odds ratio [OR] 3.0), donation after circulatory death (OR 2.7) and ethnicity (OR 2.7). This adds to the body of data linking involvement of the SNOD in the family approach to improved UK consent rates, and suggests that there may be, from the perspective of the family, fundamental differences between donation after brainstem death and donation after circulatory death.

Keywords: Organ donation, donation after brainstem death, donation after circulatory death, organ donor register, organ transplantation

EP.005

Factors affecting organ donation status of University students: A cross-sectional study of a large University in the UK

Joe Alderman1,2,3 and Andrew Owen1,2

1College of Medical and Dental Sciences, University of Birmingham, Birmingham, UK

2Critical Care Unit, Queen Elizabeth Hospital, University Hospitals Birmingham NHS Foundation Trust, Birmingham, UK

3City Hospital Birmingham, Sandwell & West Birmingham Hospitals NHS Trust, Birmingham, UK

Abstract

Introduction: Nearly 500 people die per year in the UK awaiting an organ transplant; the current waiting list consists of over 6400 people. (1) Much progress has been made in recent years to improve donation rates, which have increased by 20% since 2011/12. (2) Despite recent successes, there remain significant problems with the availability of donor organs in the UK, particularly in the black and minority ethnic population, who make up 11.9% of the UK population but just 5.8% of the organ donation register. (3,4) Ethnic diversity and religion are linked, with far greater diversity in Muslim and Buddhist communities than among Christians or those with no religion. (5) This project aimed to understand the factors affecting University of Birmingham students' donor status, and to assess opinions regarding an ‘opt-out’ system for organ donation.

Methods: A questionnaire was distributed via email, social media and message boards to students at the University of Birmingham, UK from 2012–14, following an initial pilot phase. Demographic data and organ donor status were collected, and degree of agreement with specific statements was assessed using a five-point Likert scale. The results were analysed using Microsoft Excel and SPSS (v24; IBM). Likert data were presented as stratified, horizontally displaced bar graphs, with right-shifted red bars indicating disagreement and left-shifted green bars indicating agreement.

Results: Overall, 749 responses were returned. Of these, 498 (66%) were female. 477 (63.7%) of respondents were donors – substantially higher than the national average of 36%. (2) 203 (27.1%) were not donors, and 68 (9.2%) were unaware of their status. Subset analysis indicated that non-religious students were broadly likely to be donors (70.4%, n = 406), and that religious students taken as a group had a lower donor rate (55.2%, n = 179). Further analysis by religion was difficult with low numbers, but indicated variance in donation rates between respondents from different religions.


Over half of students volunteered their postcodes. Deprivation index correlated moderately strongly with donor propensity (Pearson R = 0.501; R2 = 0.251).


Overall, 74% of respondents would be in favour of an opt-out system, though this was reduced to 57% in those who were not organ donors.

Conclusion: This study, though limited, provides evidence of divergent opinions regarding organ donation between students based on their religious affiliation. Given the aforementioned deficit in donation amongst minority ethnic groups within the UK, further work is warranted to understand and overcome perceived barriers to donation.

References: Redacted. Available via email on request.

EP.006

Evaluating unintentional nasogastric tube displacement in critically ill patients

Helen Prescott1, Hayley Prior1, Alexander Sykes2, Eloise Shaw2, Tom Beadman2, Cara Valente2 and Gareth Gibbon1

1Nottingham University Hospitals NHS Trust, Nottingham, UK

2University of Nottingham Medical School, Nottingham, UK

Abstract

Introduction: The use of nasogastric tubes (NGTs) in critical care is routine. However, their dislodgement is not uncommon and risks patient harm. Within the Nottingham University Hospitals Critical Care (NUHCC) department there is concern about the frequency of unintentional NGT displacement (UND), yet there is no consensus as to the best method for securing NGTs. Adhesive dressings and nasal clips tend to be used first line, with looped systems reserved for patients deemed at increased risk of UND. In this study we sought to capture UND events in real time and to examine further the governance and management of NGTs within our critical care areas.

Methods: Patients within the NUHCC department were reviewed at multiple time points (typically 3 times per week) over a 15-week period and the following were recorded: presence or absence of an NGT; method for securing the NGT; incidence of UND during the preceding 48 hours; nasal pressure damage; and completion of the NGT care plan. For 6 weeks of the study, at the same time as data collection, nursing staff were quizzed about the types of patients that may benefit from a looped securing system. The number of adverse events captured by our study was compared with the Trust’s incident reporting system.

Results: On average, two thirds of our patients have an NGT in situ at any one time. Of the 211 patients with NGTs reviewed during the study period, 31 (15%) suffered at least one UND. 36 patients (17%) had a looped securing system for part of their admission. Just 3 (8%) suffered UND whilst the looped system was definitely known to be in place. E-mail reminders and posters helped increase knowledge amongst nursing colleagues about the types of patient that may benefit from a looped system, but had little effect on the percentage of NGT care plans completed (mean 73%). Our manual data collection system identified more episodes of UND than the Trust’s incident reporting system; the converse was true for pressure damage.

Conclusions: The majority of patients admitted to the NUHCC department have an NGT inserted, but at least 15% will suffer one or more UNDs. The incidence of UND in patients with looped systems may be lower, but numbers are small. Simple interventions improved knowledge amongst nursing colleagues of “at risk” patients but did not affect completion of NGT care plans. The Trust’s incident reporting system is well utilised for recording pressure damage but not UND.

EP.007

Sodium Administration in the ICU

Nick Tilbury1, Ian Lyons1 and Gareth Moncaster1

1Kings Mill Hospital, Mansfield, UK

Abstract

Introduction: Dysnatraemias are common in critically ill patients, both on admission (1) and during ICU stay (2), and are associated with increased morbidity and mortality (1,2). While particular care is taken with fluid balance, total sodium intake is often not considered, though it may be associated with respiratory dysfunction (3). We audited the total sodium intake of patients on our ITU.

Methods: Thirty patients were reviewed. Data were obtained from prescription charts, observation charts and patient notes for a 24-hour period within our intensive care unit. Data on the sodium content of medications were obtained from their summaries of product characteristics.

Results: Eleven of thirty patients (36.7%) received Level 2 care; the remainder received Level 3 care. Twenty-two patients (73.3%) received more than the WHO recommended daily intake of sodium (2 g). The mean total sodium intake was 2.8 mmol/kg (63.4 mg/kg). A significant proportion of this was due to intravenous medications, with a mean sodium dose of 1.2 mmol/kg; of this, 0.6 mmol/kg (52.1%) resulted from solvents administered with the medication. Oral medications accounted for an average of 0.33 mmol/kg. Ten of thirty patients (33.3%) were eating and drinking; sodium intake from this route could not be accurately quantified and was not analysed.
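The unit conversions behind these figures can be sanity-checked with a short script (a minimal illustrative sketch: the 70 kg body weight is an assumption, not study data; the molar mass of sodium, approximately 23 mg/mmol, is a textbook value):

```python
# Convert a per-kilogram sodium dose (mmol/kg) into a daily intake in grams,
# and compare it with the WHO recommended maximum of 2 g elemental sodium/day.
NA_MG_PER_MMOL = 23  # molar mass of sodium, ~22.99 mg/mmol

def daily_sodium_g(dose_mmol_per_kg: float, weight_kg: float) -> float:
    """Total daily elemental sodium in grams for a given body weight."""
    return dose_mmol_per_kg * weight_kg * NA_MG_PER_MMOL / 1000

# Mean intake reported above, applied to a hypothetical 70 kg patient.
intake = daily_sodium_g(2.8, 70)
print(f"Daily sodium: {intake:.1f} g (WHO recommendation: 2 g)")
```

For a 70 kg patient the reported mean of 2.8 mmol/kg corresponds to roughly 4.5 g of sodium per day, more than double the WHO recommendation.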

Conclusion: Total sodium intake is not routinely considered in the ICU, despite being excessive in many critically ill patients. Intravenous medications and their solvents account for 41.8% of the total sodium load in critically ill patients. We recommend that greater consideration be given to sodium administration in the critically ill, particularly 'occult' sources such as medications and their solvents. As we were unable to account for oral intake, the total sodium intake is likely to be higher still in many patients. Work is ongoing in our unit to improve the delivery and monitoring of sodium.

References

Funk GC, Lindner G, Druml W, et al. Incidence and prognosis of dysnatraemias on ICU admission. Intensive Care Medicine 2010; 36: 304–311.

Vandergheynst F, Sakr Y, Felleiter P, et al. Incidence and prognosis of dysnatraemias in critically ill patients: analysis of a large prevalence study. Eur J Clin Invest 2013; 43: 933–948.

Bihari S, Peake SL, Prakash S, et al. Sodium balance, not fluid balance, is associated with respiratory dysfunction in mechanically ventilated patients: a prospective multicentre study. Crit Care Resusc 2015; 17: 23–28.

EP.008

Nasogastric feed and propofol use in the nutrition of critically unwell patients: An acute trust experience

Sean Menezes1, Brigid Sharkey2 and Shibaji Saha2

1Colchester Hospital University NHS Foundation Trust, Colchester, UK

2Queen's Hospital – Barking, Havering and Redbridge University Hospitals NHS Trust, Romford, UK

Abstract

Nutrition for critically unwell patients remains a concern despite the improved outcomes seen with early feeding. In most critical care units (CCUs) there is a significant delay in initiating enteral nutrition, which worsens the patient's catabolic state and can lead to organ dysfunction. Additionally, the use of lipid-rich propofol as a sedative provides a high-fat caloric intake, potentially compounding suboptimal nutrition. We describe the nutritional support provided to our CCU patients, assessing the proportion of recommended daily calories and protein delivered in the acute period, and evaluate the role of propofol as a nutritional source.

Clinical notes were examined for patients admitted to an acute trust's general CCU over 14 days, including only those who were intubated and ventilated with no contraindications to nasogastric (NG) feeds. The volume and type of NG feed and the volume of propofol used over each 24-hour period were recorded, up to a maximum of seven days of admission. Daily calculated recommended energy intake (cREI) and protein intake (cRPI) were determined by the CCU dietician using pre-admission nutritional status and ideal body weight, with targets of 25–35 kcal/kg/day and 0.625–1.875 g of protein/kg/day, from which a required hourly rate of NG feed was derived.
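The target-setting step described above can be sketched as follows (illustrative only: the 1.0 kcal/mL feed energy density, the 70 kg ideal body weight and the 30 kcal/kg/day target are assumptions for the example, not the unit's actual dietetic calculation):

```python
# Derive an hourly NG feed rate from the per-kilogram energy target described
# above (25-35 kcal/kg/day, based on ideal body weight).
FEED_KCAL_PER_ML = 1.0  # assumed energy density of a standard enteral feed

def ng_feed_rate_ml_per_hr(ideal_weight_kg: float,
                           kcal_per_kg_day: float = 30.0) -> float:
    """Hourly feed rate needed to meet the daily energy target."""
    daily_kcal = ideal_weight_kg * kcal_per_kg_day
    daily_ml = daily_kcal / FEED_KCAL_PER_ML
    return daily_ml / 24

# Hypothetical 70 kg (ideal body weight) patient at 30 kcal/kg/day.
rate = ng_feed_rate_ml_per_hr(70)
print(f"Required NG feed rate: {rate:.1f} mL/hr")
```

Under these assumptions the 2,100 kcal daily target corresponds to a continuous rate of 87.5 mL/hr; any hours of interrupted feeding at a fixed hourly rate are simply lost, which is the shortfall the abstract goes on to quantify.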

Thirty-seven patients were identified (13 women, 24 men), accounting for 122 patient-days. There were 33 medical and 4 surgical patients, with a median age of 64.5 years (range 22–85.4). NG feeds provided an average of 50.9% of the daily cREI across all patients; the average daily provision of protein was only 47.3% of the daily cRPI. In both cases this was <15% on day 1 (D1) of admission but improved to roughly 60% by D7. Propofol provided an average of 36.1% of the daily caloric intake, contributing 68.7% of calories on D1 and falling to 29.1% by D7. Taking NG feeds and propofol together, our patients met 70.4% of the daily cREI overall: an average of 31.9% on D1, improving to 83.0% by D7.

Nutrition continues to be suboptimal within our CCU, reflected in the poor total calories and protein provided to our patients. This is worst on D1, owing to investigations requiring patient transfer and the lower priority given to initiating NG feed. Additionally, propofol continues to be a significant source of calories. We have explored different strategies and have determined that a 24-hour volume target would be better suited than our current practice.

EP.009

Improving the delivery of daily calorific targets via the enteral route in a critically ill patient population: A quality improvement cycle in a mixed surgical and medical intensive care unit in the United Kingdom

Brian Johnston1, Toseef Ahmed1, Zeyad Al-Moasseb1, Martin Habgood1, Sarah Clarke1 and Anton Krige1

1East Lancashire Hospitals Trust, Blackburn, UK

Abstract

Introduction: Enteral nutrition and adequate calorie intake have been associated with reduced infections and improved survival in critically ill patients. Despite this evidence, data suggest patients do not achieve their daily calorific requirement. This 'iatrogenic underfeeding' is thought to be widespread, with the CALORIES study revealing that only 10–30% of prescribed daily kcal was delivered to patients. Utilising quality improvement methodology, we aimed to deliver greater than 85% of prescribed kcal/day by transitioning from an hourly based enteral feeding protocol to a 24-hour volume based feeding protocol, starting feeding at a higher rate and increasing the permissible gastric residual volume from 250 ml to 300 ml.
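The difference between the two protocols can be illustrated with a short sketch (hypothetical numbers throughout; the catch-up logic shown is a simplification of the published PEPuP-style volume-based approach, with an assumed safety cap on the maximum rate):

```python
# Hourly-rate protocol: feed runs at a fixed rate, so any interruption is lost.
# 24-hour volume protocol: the remaining daily volume is redistributed over
# the remaining hours, so interruptions are caught up later in the day.
def volume_based_rate(daily_target_ml: float,
                      delivered_ml: float,
                      hours_remaining: float,
                      max_rate_ml_hr: float = 150.0) -> float:
    """Catch-up rate to deliver the rest of today's volume, capped for safety."""
    if hours_remaining <= 0:
        return 0.0
    rate = (daily_target_ml - delivered_ml) / hours_remaining
    return min(max(rate, 0.0), max_rate_ml_hr)

# Feed held for several hours (e.g. for a procedure): 1,800 mL daily target,
# 900 mL delivered so far, 10 hours of the day left. A fixed-rate protocol
# would stay at 75 mL/hr; the volume-based protocol rises to catch up.
print(volume_based_rate(1800, 900, 10))  # 90.0
```

The cap matters: after a long interruption the naive catch-up rate can exceed what is tolerated, so the residual volume is carried at the maximum permitted rate instead.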

Methods: Baseline data assessing the percentage of daily kcal delivered to ventilated patients via the enteral route were collected in December 2015 (cycle 1). Following presentation of the baseline data, new intervention guidelines were agreed based on the PEPuP protocol. Nurse champions were identified and made responsible for cascade training all nursing staff in the PEPuP protocol. Educational tools to help determine daily calorific requirement and the volume of feed required were provided. Repeat data were collected 6 months after intervention implementation (cycle 2), followed by two-weekly PDSA cycles between July 2016 and July 2017 (cycles 3 to 12).

Results: Ten and twelve patients were included in cycles 1 and 2 respectively, and five patients in each PDSA cycle (cycles 3 to 12). During cycle 1 the percentage of kcal achieved via enteral feeding was 25.1%. Following the intervention this increased to 82.6% (p < 0.001) in cycle 2. This significant increase was maintained throughout cycles 3 to 12, with patients meeting an average of 86.5% of daily kcal via enteral feeding, rising to 95.3% when calories from propofol were included. Episodes of gastric residual volumes >250 ml were not appreciably increased after switching to the volume based protocol.

Conclusion: Switching to a 24-hour volume based feeding regimen is a simple and cost-effective method of ensuring patients meet daily calorific targets. Through the use of quality improvement methodology, we demonstrated that this approach is achievable and sustainable. The success of this project has led to the adoption of the protocol in other ICUs in a regional critical care network. Future enhancements to the protocol will target additional protein supplementation and the institution of trophic feeding for patients who would traditionally be nil by mouth prior to instigation of enteral feeding.

EP.010

Immunonutrition for Acute Respiratory Distress Syndrome (ARDS) in Adults: A Cochrane Meta-Analysis

Victoria Burgess1, Ahilanandan Dushianthan1, Rebecca Cusack1 and Mike Grocott1

1University Hospital Southampton, Southampton, UK

Abstract

Background: Acute respiratory distress syndrome (ARDS) is an acute overwhelming systemic inflammatory process associated with significant morbidity and mortality. Pharmaconutrients as part of a feeding formula, or supplemented additionally, have been investigated to improve clinical outcome in critical illness and ARDS. The objective of this study is to evaluate the effect of immune moderating nutrition in patients with ARDS.

Method: We searched MEDLINE, Embase, CENTRAL, conference proceedings and trial registrations for appropriate studies up to March 2017. All randomised controlled trials (RCTs) of adult patients with ARDS and/or acute lung injury were included. Two authors independently assessed the quality of the studies and extracted data from the included trials. Quality of evidence and analytical methods were presented in accordance with Cochrane standards. All-cause mortality, duration of mechanical ventilation, ICU and hospital length of stay, new organ failures, and adverse reactions were assessed.

Results: Ten RCTs comprising 1020 patients were included. The immunonutrition interventions comprised omega-3 fatty acids (eicosapentaenoic acid, docosahexaenoic acid), gamma-linolenic acid (GLA), and antioxidants. Some studies had a high risk of bias, and the studies were heterogeneous, varying in the type and duration of intervention, calorific targets, and outcomes reported. For the primary outcome, all-cause mortality, there was no significant difference between groups (RR 0.79, P = 0.13; Figure 1); mortality in the immunonutrition and control groups was 23% and 28% respectively. There was a significant reduction in ventilator days (3 days, P = 0.0002) and ICU length of stay (2.5 days, P = 0.0009), and an improvement in PaO2/FiO2 ratio at days 4 and 7. There was no difference in ventilator-free or ICU-free days between groups, or in adverse events reported. These analyses were subject to significant statistical heterogeneity and the effect was sensitive to the analytical methods used.
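The shape of such a result can be illustrated with a crude, unweighted risk ratio calculation (the counts below are hypothetical, chosen only to match the 23% vs 28% mortality proportions; they do not reproduce the study-weighted pooled estimate of RR 0.79 from the meta-analysis):

```python
import math

def risk_ratio_ci(events_t, n_t, events_c, n_c, z=1.96):
    """Risk ratio with an approximate 95% CI via the log-normal method."""
    rr = (events_t / n_t) / (events_c / n_c)
    se = math.sqrt(1/events_t - 1/n_t + 1/events_c - 1/n_c)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

# Illustrative counts matching the 23% vs 28% mortality proportions above.
rr, lo, hi = risk_ratio_ci(23, 100, 28, 100)
print(f"RR {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")  # CI spans 1: not significant
```

A confidence interval that crosses 1 is the arithmetic counterpart of the non-significant P value reported for the primary outcome.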

Conclusion: This meta-analysis comprised 10 heterogeneous studies of varying quality examining the effect of omega-3 fatty acids, GLA and/or antioxidants in the ARDS population. The results suggest there is no long-term mortality benefit from this intervention. However, there may be improvements in duration of mechanical ventilation, ICU length of stay and oxygenation without any significant increase in serious adverse events. The quality of evidence is moderate for a number of reasons: some studies were unable to achieve calorific targets, significant dropouts were not accounted for in the intention-to-treat (ITT) analysis, and the type and duration of intervention varied.

Figure 1. Meta-analysis of the primary outcome of mortality.

EP.011

Serotonin syndrome as a side effect of antibiotic therapy on the intensive care unit

Will Watson1

1Wishaw General Hospital, Lanarkshire, UK

Abstract

Introduction: Serotonin syndrome is a neurological disorder caused by an excess of serotonin within the central nervous system (CNS). We present a case precipitated by the introduction of linezolid for the treatment of pneumonia.

Case report: A 61-year-old gentleman was admitted to our ICU with polytrauma. He underwent multiple operations for limb fractures and intra-abdominal sepsis due to bowel perforation, and had a tracheostomy to facilitate weaning from ventilation. Chest physiotherapy was limited by a dorsally angulated sternal fracture felt to be at risk of damaging his great vessels. He developed increasingly drug-resistant ventilator-associated pneumonia, as well as chronic metalwork infection in his limbs. He had been given a variety of antimicrobials, which had either been ineffective or, in the case of meropenem, had caused a reaction. After a further clinical and biochemical deterioration, he was commenced on linezolid. Prior to admission he had been on a fentanyl patch for chronic pain secondary to rheumatoid arthritis, supplemented with PRN oxycodone, and he was started on citalopram during his admission for low mood. Within 24 hours of commencing linezolid, he developed worsening agitation, hypertension, tachycardia, hyperreflexia, and clonus. The linezolid and citalopram were stopped, and he was treated with diazepam, cyproheptadine, sedation, and ventilation. His symptoms of serotonin syndrome improved over the next week, but he sadly succumbed to a further episode of sepsis.

Discussion: There have been reports in the literature of linezolid causing serotonin syndrome.1 The underlying mechanism is the monoamine oxidase inhibitor (MAOI) effect that linezolid exhibits, which decreases the breakdown of biogenic amines, leading to their accumulation within the CNS. In combination with proserotonergic agents, this can lead to clinical serotonin syndrome. Implicated drugs include antidepressants, dopamine agonists, opioids, and stimulants. The incidence in those taking an SSRI alongside linezolid is 3%.2 Symptoms can occur immediately but may take up to three weeks to appear.2 Immediate management involves supportive care, treating agitation with benzodiazepines, cooling, and consideration of serotonin antagonists such as chlorpromazine and cyproheptadine. Further management involves removal of the offending agents, with careful consideration of which drugs are clinically necessary in the circumstances.

References

1. Taylor JJ, Wilson JW, Estes LL. Linezolid and serotonergic drug interactions: a retrospective survey. Clin Infect Dis 2006; 43: 180–187.
2. Quinn DK, Stern TA. Linezolid and serotonin syndrome. Prim Care Companion J Clin Psychiatry 2009; 11: 353–356.

EP.012

Partial anomalous pulmonary venous return (PAPVR): A case report

Raghda Abed1, Guy Rousseau1 and Mark Meller1

1North Devon District Hospital, Barnstaple, UK

Abstract

Background: Partial anomalous pulmonary venous return (PAPVR) is a vascular anomaly in which some of the pulmonary veins connect to the right atrium, or one of its venous tributaries, rather than the left atrium. Approximately 10% are left sided. An isolated PAPVR is usually small, causes no haemodynamic compromise and rarely requires surgical correction. It is rarely seen in adults and is more commonly an incidental finding in asymptomatic patients undergoing pulmonary vascular studies for other indications.

Clinical case: A 61-year-old male presented with a one-month history of severe shortness of breath and chest pain. He had a background of rheumatoid arthritis.

Clinically the patient had bilateral crepitations and was hypoxic. A CTPA ruled out pulmonary embolism but showed ground-glass changes in both lungs. Pneumocystis carinii was grown from sputum culture. He continued to deteriorate and was transferred to ICU. Central venous cannulation of the right internal jugular vein (IJV) under ultrasound guidance was complicated by an accidental arterial puncture. Subsequent ultrasound-guided cannulation of the left IJV was uneventful; the lumen of the left IJV was noticeably more distended and easier to locate than the right.

The post-procedure CXR showed the CVC tracing a path along the left heart border. With the patient on 2 litres of oxygen via nasal cannulae, a blood sample taken from the CVC showed a pO2 of 37.2 kPa and pCO2 of 4.5 kPa, whereas a peripheral arterial sample taken simultaneously showed a pO2 of 9.07 kPa and pCO2 of 4.6 kPa. The transduced waveform from the central line had the appearance of a pulmonary artery trace.

The CT pulmonary angiogram (CTPA) performed to exclude pulmonary embolism a few days prior to CVC insertion was reviewed and, on closer inspection, revealed an anomalous left upper lobe pulmonary vein draining into the left subclavian vein.

Conclusion: In our case, the left IJV was more filled and easier to cannulate than the right, the oxygen saturation of the CVC blood sample was higher than that of the arterial line, and the waveform was similar to that of the pulmonary circulation. These findings, together with the radiological evidence of abnormal CVC placement, caused a certain amount of confusion as to the actual location of the CVC. We hope this report will highlight PAPVR as a rare but possible cause of apparent CVC "displacement" and anomalous venous blood gas results.

EP.013

Fatal Air Embolism during Elective Oesophagoscopy

Adnan Akram Bhatti1 and Priti Gandre1

1North Middlesex University Hospital, London, UK

Abstract

Introduction: Elective upper gastrointestinal endoscopy is a common day-stay procedure with minimal complication and mortality rates. We report a case of air embolism as a rare but catastrophic complication of this otherwise low-risk procedure.

Case summary: A 77-year-old man with past medical history of hypertension and oesophageal stricture came to the endoscopy unit for elective oesophageal dilatation as a day procedure. Sedation was administered at the start of the procedure with Fentanyl and Midazolam by the gastroenterologist as per the local protocol. Shortly after the start of endoscopy, the patient suffered a generalised tonic clonic seizure followed by asystolic cardiac arrest. Spontaneous circulation returned after three cycles of cardiopulmonary resuscitation. Brain and chest CT scans were performed within 60 minutes of the seizure which showed numerous unilateral cerebral air locules and surgical emphysema in the posterior mediastinum and the neck. A repeat endoscopy was performed to rule out oesophageal perforation as per advice from the tertiary centre. The patient did not regain consciousness and developed rapid deterioration with midline shift and coning within 24 hours of ICU admission. Brain stem death was confirmed the following day.

Discussion: Air embolism during endoscopy is caused by a direct communication between the pressurised air source of the endoscope and an exposed blood vessel in the gut. Rapid intravenous injection of a relatively small volume of air (up to 2 ml/kg) can cause acute haemodynamic collapse.

Initial management of air embolism includes high-flow oxygen, rapid intravenous fluids and urgently positioning the patient head-down (Trendelenburg) in the left lateral position. Aspiration of air can be attempted through a central venous catheter, if one is available. Bedside echocardiography can diagnose air embolus quickly and accurately. Hyperbaric oxygen therapy within 24 hours can help reduce the cerebral complications of air embolism, but centres providing this treatment are rare in the UK. Insufflation with carbon dioxide rather than air during endoscopy can reduce the fatality of air embolism.

The case also raises questions regarding the quality of sedation and monitoring in elective endoscopy procedures administered by non-anaesthetists. Guidelines on monitoring for sedation from the British Society of Gastroenterology differ from those of the Association of Anaesthetists of Great Britain and Ireland. Although uncommon complications such as air embolism cannot be completely avoided, prompt recognition and treatment of any complication is more likely if full monitoring and a practitioner qualified to deal with life-threatening complications are present throughout every procedure.


EP.014

Interventional Radiology for the Management of Delayed Massive Hepatic Haemorrhage due to HELLP Syndrome

Eleanor Damm1 and Nehal Patel1

1Royal Stoke University Hospital, Stoke-on-Trent, UK

Abstract

Introduction: The HELLP syndrome is a life-threatening pregnancy complication characterised by haemolysis, elevated liver enzymes, and low platelet count, occurring in 0.5%–0.9% of all pregnancies (1,2). About 70% of cases develop before delivery, the majority between the 27th and 37th gestational weeks; the remainder occur within 48 hours after delivery (2).

The HELLP syndrome usually occurs with pre-eclampsia; however, in 20% of cases there may be no evidence of pre-eclampsia before or during labour (3).

The case: A 32-year-old primipara at 37 + 1 weeks' gestation presented with a one-day history of constant upper abdominal pain, nausea, vomiting and reduced fetal movements. No visual disturbances or headaches were reported. She had had a low-risk pregnancy: SBP <110, DBP <75, and urine NAD throughout.

On presentation: BP 138/88, HR 78, CTG normal, Hb 139, platelets 54, ALT 284, ALP 220, bilirubin 21. HELLP syndrome was diagnosed. Her platelet count deteriorated rapidly over hours, necessitating a category 2 LSCS. Initially she recovered well, but she collapsed two days post-partum. An intra-abdominal bleed was suspected and the major obstetric haemorrhage protocol was activated. Once she was stabilised, CT angiography revealed a 21 × 5.7 × 19 cm sub-capsular hepatic haematoma with a normal post-partum uterus.

She successfully underwent emergency embolization of the right hepatic and both uterine arteries. She was admitted to ITU as a level 2 patient for pain management, blood pressure control, and ongoing transfusion requirements.

Discussion/Conclusion: HELLP remains difficult to diagnose clinically, as symptoms are often non-specific (4). Maternal and fetal morbidity and mortality remain high (5).

HELLP can present independent of pre-eclampsia and a high index of suspicion is required.

There is an expanding role for interventional radiology for management of massive obstetric haemorrhage.

References

1. Weinstein L. Syndrome of hemolysis, elevated liver enzymes, and low platelet count: A severe consequence of hypertension in pregnancy. American Journal of Obstetrics and Gynecology 1982; 142: 159–167.
2. Rath W, Faridi A, Dudenhausen J. HELLP syndrome. Journal of Perinatal Medicine 2000; 28: 249–260.
3. Haram K, Svendsen E, Abildgaard U. The HELLP syndrome: Clinical issues and management. A review. BMC Pregnancy and Childbirth 2009; 9: 8.
4. Weinstein L. It has been a great ride: The history of HELLP syndrome. American Journal of Obstetrics and Gynecology 2005; 193: 860–863.
5. Sibai BM, Ramadan MK, Usta I, et al. Maternal morbidity and mortality in 442 pregnancies with hemolysis, elevated liver enzymes, and low platelets (HELLP syndrome). American Journal of Obstetrics and Gynecology 1993; 169: 1000–1006.

EP.015

The Highs and Lows of Quetiapine Toxicity

Andrew Chamberlain1 and Joseph Carter1

1York Teaching Hospital NHS Foundation Trust, York, UK

Abstract

Background: Some studies have shown quetiapine to be relatively safe in overdose when compared to other antipsychotics. We argue the contrary, advocating early aggressive treatment of cardiac dysrhythmias and seizures.

Case Description: A 38-year-old female presented to the Emergency Department having ingested 16 g of quetiapine 4 hours previously. She required intubation and ventilation for reduced GCS and was admitted to Critical Care. Approximately 6 hours after admission, the patient developed several tachyarrhythmias, followed by a tonic-clonic seizure. She subsequently developed torsades de pointes, and advanced life support was commenced. Magnesium sulphate was administered, along with intravenous lipid emulsion (Intralipid® 20%) and 8.4% sodium bicarbonate.

Following 30 minutes of CPR, the patient had a return of spontaneous circulation. However, she was significantly hypotensive despite numerous adrenaline boluses. She was switched to a metaraminol infusion, and subsequently noradrenaline following the establishment of central venous access. After 2 hours, the patient’s noradrenaline requirement had reduced from 0.75 mcg.kg-1.min-1 to 0 mcg.kg-1.min-1.
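For context, a weight-based dose such as 0.75 mcg.kg-1.min-1 translates into a pump rate as follows (a minimal sketch: the 4 mg in 50 mL dilution and the 80 kg weight are illustrative assumptions, not details of this case):

```python
# Convert a noradrenaline dose in mcg/kg/min to an infusion pump rate in mL/hr:
# rate (mL/hr) = dose (mcg/kg/min) x weight (kg) x 60 / concentration (mcg/mL).
def infusion_rate_ml_hr(dose_mcg_kg_min: float,
                        weight_kg: float,
                        drug_mg: float = 4.0,
                        diluent_ml: float = 50.0) -> float:
    """Pump rate required to deliver a weight-based catecholamine dose."""
    conc_mcg_per_ml = drug_mg * 1000 / diluent_ml  # 4 mg/50 mL = 80 mcg/mL
    return dose_mcg_kg_min * weight_kg * 60 / conc_mcg_per_ml

# The peak requirement in this case, for a hypothetical 80 kg patient.
print(infusion_rate_ml_hr(0.75, 80))  # 45.0 (mL/hr)
```

At the assumed dilution, weaning from 0.75 to 0 mcg.kg-1.min-1 corresponds to titrating the pump from 45 mL/hr down to off.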

The patient was extubated the following day but, as a consequence of prolonged CPR, had suffered a number of rib fractures. Paradoxical chest movement impeded her ventilation and she required reintubation. Following treatment for a hospital acquired pneumonia, the patient had a percutaneous tracheostomy on day 15. She was successfully decannulated 9 days later.

Discussion: Quetiapine toxicity is one of many poisonings that require a generalised management approach, owing to the lack of a specific antidote. The use of lipid emulsion for quetiapine toxicity is not well established, and its general use in the management of drug poisoning appears to be sporadic. In this case, one could argue that lipid emulsion therapy should have been used at the first sign of toxicity, but it is not known whether this therapy alone would have altered the outcome.

Of particular difficulty during resuscitation was the apparent ineffectiveness of intravenous adrenaline. The most accepted mechanism is that beta stimulation can worsen hypotension in the setting of quetiapine-induced alpha blockade. As such, the early use of alpha agonists with minimal beta agonism, such as metaraminol and noradrenaline, is far more effective.

Despite the extended recovery phase, the combination of treatment given in this case ultimately resulted in successful resuscitation.

Conclusion: Lipid emulsion therapy can be used as a part of the management for quetiapine toxicity, but adrenaline should be avoided due to refractory hypotension. Clinicians should have a low threshold for admission to critical care.

EP.016

Human neutrophil function is rapidly impaired by complement C5a in a clinically relevant model of bacteraemia

Alex Wood1, Arlette Vassallo1, Klaus Okkenhaug2, A John Simpson3, Jonathon Scott3, Charlotte Summers1, Edwin Chilvers1 and Andrew Conway Morris1

1Department of Medicine, University of Cambridge, Cambridge, UK

2Signalling Programme, Babraham Institute, Cambridge, UK

3Institute of Cellular Medicine, Newcastle University, Newcastle, UK

Abstract

Introduction: Nosocomial infections commonly affect patients admitted to intensive care and are associated with worse outcomes. A key determinant of susceptibility to these infections is the recently identified syndrome of immune-cell failure. The mechanisms underpinning this deleterious process remain incompletely understood.1 Previously, we have demonstrated that the complement protein C5a (present at high concentrations in plasma from critically ill patients) impairs phagocytosis of yeast particles by healthy donor and patient neutrophils.2 In this study, we investigated the underlying mechanism, duration and preventability of C5a-induced neutrophil dysfunction in a clinically relevant in vitro model.

Methods: A new assay was developed to assess neutrophil function using small (<2 mL) volumes of blood, without the need for time-consuming and potentially cell-perturbing purification steps. Healthy human or murine samples were exposed to C5a or control, with subsequent or concomitant addition of pH-sensitive Staphylococcus aureus bioparticles. Phagocytosis and C5a receptor (C5aR) expression were quantified by flow cytometry with an Attune™ Nxt Acoustic Focusing Cytometer (Life Technologies, Paisley, UK). Selective small molecule inhibitors, soluble pro-inflammatory agents, and neutrophils from genetically modified mice were used to address our experimental questions.

Results: C5a rapidly reduced neutrophil phagocytosis of Staphylococcus aureus in human whole blood by 40% (p < 0.0001). Moreover, this phagocytic impairment increased over time post-C5a exposure. In contrast to C5a, LPS and platelet activating factor (PAF) pre-treatment increased phagocytosis (p < 0.01 and p < 0.05 respectively). When neutrophils phagocytosed Staphylococcus aureus prior to or alongside C5a exposure, phagocytosis was unimpaired. C5a, phagocytosis and soluble priming agents all reduced C5aR expression (p < 0.01), but only phagocytosis protected cells from C5a-induced phagocytic impairment. C5a-induced phagocytic impairment was PI3-kinase dependent in isolated neutrophils, but not in whole blood.

Conclusions: This study is the first to demonstrate the ability of C5a to rapidly induce a prolonged impairment of neutrophil phagocytosis in a whole blood model of bacteraemia. It also indicates phagocytosis has a protective effect against subsequent C5a-induced neutrophil dysfunction, which is unlikely to be mediated by cell-surface receptor downregulation. Finally, this study highlights differences in the pharmacological tractability of PI3-kinase enzymes between isolated and blood neutrophils, which may influence the efficacy of locally- or systemically-administered therapies.

References

  • 1.Conway Morris A, Anderson N, Brittan M, et al. Combined dysfunctions of immune cells predict nosocomial infection in critically ill patients. Br J Anaesth 2013; 111: 778–787. [DOI] [PubMed]
  • 2.Conway Morris A, Brittan M, Thomas S, et al. C5a-mediated neutrophil dysfunction is RhoA-dependent and predicts infection in critically ill patients. Blood 2011; 117: 5178–5188. [DOI] [PubMed]

EP.017

The whole blood phagocytosis assay: A near patient test to promote a personalised approach to immunomodulatory therapy in community acquired pneumonia

Jesus Reine1, Jamie Rylance1, Daneila Ferreira1, Robert Parker2, Ingeborg Welters3,4, Ben Morton1,2

1Liverpool School of Tropical Medicine, Liverpool, UK

2Aintree University Hospital NHS Foundation Trust, Liverpool, UK

3Royal Liverpool and Broadgreen University Hospitals NHS Trust, Liverpool, UK

4University of Liverpool, Liverpool, UK

Abstract

Introduction: Severe community-acquired pneumonia (CAP) results from a profound systemic inflammatory response to lung infection and causes high mortality. Immunomodulatory drugs offer the promise of improving outcomes independently of antimicrobial use; however, clinical trials of untargeted immunostimulatory agents have not demonstrated benefit, and experts now advocate personalised therapy. There are currently no clinical tests in use to measure immune function.

Our aim was to refine and validate a novel ‘whole blood phagocytosis assay’ to measure ex vivo neutrophil function in blood samples from CAP patients. This test could potentially measure neutrophil function to guide safe and effective administration of immunomodulatory therapies.

Methods: Clinical: A prospective case-control study was conducted at Aintree University Hospital and the Royal Liverpool University Hospital (REC: 15/NW/0869) from 07/2016 to 04/2017. We recruited healthy volunteers for assay refinement and three patient groups: 1) severe CAP admitted to critical care; 2) moderate CAP admitted to the medical ward; and 3) age-matched volunteers from outpatient clinics without acute inflammatory illness. CAP was diagnosed in line with British Thoracic Society definitions. Clinical severity and outcome data were collected prospectively.

Laboratory: A single citrated blood sample was taken within 48 hours of CAP diagnosis and processed within four hours. Whole blood was suspended and incubated with fluorochrome-labelled reporter beads, which fluoresce in response to phagolysosome function (Morton et al., Shock 2016). Subsequently, red blood cells were lysed, and neutrophil association with, and oxidation of, the reporter beads was measured by flow cytometry.

Results: The assay was refined using samples from 30 healthy volunteers to confirm optimal conditions for the standard operating procedure. Subsequently we recruited 46 patients (16 severe CAP, 15 moderate CAP and 15 controls). Patient demographics, severity and outcome data are displayed in Table 1. The proportion of neutrophils associated with reporter beads was significantly increased in moderate CAP compared with both severe CAP and control patients (Figure 1). There was also a signal toward increased oxidative burst (a bacterial killing mechanism) in moderate CAP patients.

Conclusions: We have demonstrated differential neutrophil function in unstimulated neutrophils between moderate-CAP, severe-CAP and control patients in this preliminary study. The potential advantages of this approach are: 1) direct measurement of neutrophil activity, 2) minimal sample processing, 3) reproducibility and 4) results available within 3 hrs of sampling. We plan to conduct a larger-scale evaluation study to determine whether the assay can stratify patients by outcome, and to prospectively test potential immunostimulatory agents (e.g. GM-CSF and unlicensed agents such as P4 peptide) prior to administration.

[Table 1 (image): patient demographics, severity and outcome data]

[Figure 1 (image): neutrophil association with reporter beads by patient group]

EP.018

Retention of CO2 Does Not Impact the Oxygen-Haemoglobin Dissociation Curve In ICU Patients

Nick Rosculet1, Romit Samanta1, Abishek Dixit1, S Harris2, N MacCallum2, David Brealey2, Peter Watkinson3, Andrew Jones4, S Ashworth5, Richard Beale4, Stephen Brett5, Duncan Young3, Mervyn Singer2, Ari Ercole1, Charlotte Summers1

1Department of Medicine, University of Cambridge School of Clinical Medicine, Cambridge, UK

2Bloomsbury Institute for Intensive Care Medicine, University College London, London, UK

3Critical Care Research Group (Kadoorie Centre), Nuffield Department of Clinical Neurosciences, Medical Sciences Division, Oxford University, Oxford, UK

4Department of Intensive Care, Guy's and St Thomas' NHS Foundation Trust, London, UK

5Centre for Perioperative Medicine and Critical Care Research, Imperial College Healthcare NHS Trust, London, UK

Abstract

Introduction: Since its initial description in 1904, the oxygen-haemoglobin dissociation curve (ODC) has been well described under physiological conditions1,2. However, the impact of pathology has been less well characterized, with most data arising from small clinical studies (<100 subjects) of anaesthetized adults or patients, or from experimentally induced hypoxaemia/hypercapnia. Routinely collected clinical data, including arterial blood gas analyses, are now available from many thousands of critically ill patients. We sought to investigate the impact of pCO2 on the ODC of critically ill adults, and hypothesized that pCO2 would not significantly alter the relationship between pO2 and haemoglobin saturation.

Methods: Data were extracted from the National Institute for Health Research Critical Care Health Informatics Collaborative (NIHR ccHIC). Statistical analysis was undertaken on 399,000 blood gases from 13,942 patients, using R version 3.4.0. After data cleaning, the predicted oxygen saturation for each arterial blood gas sample was calculated using both the Severinghaus1 and the Dash, Korman and Bassingthwaighte2 equations. Non-linear regression modelling was undertaken to construct ODCs based on both the predicted and observed data, to allow comparison. Observed data were stratified by pCO2 to investigate the influence of hypercapnia on the ODC.
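The first of these equations is compact enough to sketch. A minimal Python illustration follows (the study's actual analysis was in R and is not shown here; the function name and the kPa input convention are our own assumptions):

```python
# Hedged sketch (not the study's code): the Severinghaus (1979) equation
# for predicting haemoglobin O2 saturation from a measured arterial pO2.

def severinghaus_so2(po2_kpa: float) -> float:
    """Predicted O2 saturation (%) for an arterial pO2 given in kPa.

    Severinghaus: S = 1 / (23400 / (p^3 + 150*p) + 1), with p in mmHg.
    """
    p = po2_kpa * 7.50062  # convert kPa to mmHg
    return 100.0 / (23400.0 / (p ** 3 + 150.0 * p) + 1.0)

# A normal arterial pO2 of ~13 kPa predicts a saturation of roughly 97.6%,
# and the predicted p50 (50% saturation) falls near 3.6 kPa (~27 mmHg).
print(round(severinghaus_so2(13.0), 1))
```

Comparing saturations predicted this way against the observed co-oximetry values is the kind of predicted-versus-observed comparison the Methods describe.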

Results: No clinically significant impact of pCO2 on the relationship between pO2 and oxygen saturation was observed in samples obtained from critically ill adults (mean difference 0.35 kPa (SD = 0.2 kPa) for a given oxygen saturation). Interestingly, we did not observe “right shift” of the ODC in response to elevated arterial pCO2, and there was no impact of either acute (HCO3 < 28 mmol/L) or chronic (HCO3 > 28 mmol/L) hypercapnia on the relationship between haemoglobin saturation and pO2.

Figure 1 – The observed oxygen dissociation curve of critically ill patients with varying levels of hypercapnia.

Conclusions: These data suggest that the relationship between haemoglobin saturation and pO2 described by data from small scale studies may not reflect physiology observed in critically ill adults, and further that the right shift of the ODC reported in experimental hypercapnia, induced in healthy subjects, is not reproduced in the critically ill.

References

  • 1.Severinghaus JW. Simple, accurate equations for human blood O2 dissociation computations. J Appl Physiol Respir Environ Exerc Physiol 1979; 46: 599–602. [DOI] [PubMed]
  • 2.Dash RK, Korman B, Bassingthwaighte JB. Simple accurate mathematical models of blood HbO2 and HbCO2 dissociation curves at varied physiological conditions: evaluation and comparison with other models. Eur J Appl Physiol 2016; 116: 97–113. [DOI] [PMC free article] [PubMed]

EP.019

Predicting secondary infections using cell-surface markers of immune cell dysfunction: The INFECT study

Andrew Conway Morris1, Deepankar Datta2, Manu Shankar-Hari3, Jacqueline Stephen4, Christopher Weir4, Jillian Rennie2, Jean Antonelli4, Anthony Burpee5, Anthony Bateman6, Noel Warner7, Kevin Judge7, Jim Keenan7, Alice Wang7, K Alun Brown8, Sion Lewis8, Tracey Mare8, Alastair Roy9, Gillian Hulme10, Ian Dimmick10, Adriano G Rossi2, A John Simpson11 and Timothy S Walsh2

1Department of Medicine, University of Cambridge, Cambridge, UK

2Centre for Inflammation Research, University of Edinburgh, Edinburgh, UK

3Guys and St Thomas NHS Foundation Trust, London, UK

4Edinburgh Clinical Trials Unit, Usher Institute of Population Health Sciences and Informatics, University of Edinburgh, Edinburgh, UK

5Applied Cytometry, Sheffield, UK

6Western General Hospital, Edinburgh, UK

7BD Bioscience, San Jose, CA, USA

8Vascular Immunology Research Laboratory, Rayne Institute (King’s College London), St Thomas’ Hospital, London, UK

9Sunderland Royal Hospital, Sunderland, UK

10Flow Cytometry Core Facility Laboratory, Faculty of Medical Sciences, Centre for Life, Newcastle University, Newcastle, UK

11Institute of Cellular Medicine, Newcastle University, Newcastle, UK

Abstract

Background: Cellular immune dysfunctions are common in intensive care patients and predict a number of significant complications1. To enable the targeting of immunomodulatory treatments at the appropriate patients, clinically useable measures of dysfunction need to be developed. The objective of this study was to confirm the predictive value of cellular markers of immune dysfunction for secondary infection in a multi-centre context. These markers had previously been identified as potential predictors in a single-centre derivation study2.

Methods: A prospective, observational, cohort study was undertaken in four UK intensive care units. Blood samples were taken on alternate days to day 12 post-enrolment. Three cellular markers of immune cell dysfunction (neutrophil CD88, monocyte Human Leukocyte Antigen-DR (HLA-DR) and percentage of regulatory T-cells (Tregs)) were assayed on site using standardised flow cytometric measures. Patients were observed for the development of secondary infections using pre-defined, objective criteria.

Main Results: Data were available from 138 of 148 patients recruited. Reduced neutrophil CD88, reduced monocyte HLA-DR and elevated proportions of Tregs were all associated with subsequent development of infection, with positive likelihood ratios (95% CI) of 1.37 (1.02–1.82), 1.81 (1.28–2.57) and 1.60 (1.09–2.36) respectively. The burden of immune dysfunction predicted a progressive increase in risk of infection, from 14% for patients with no dysfunction to 59% for patients with dysfunction of all three markers. Modelling of the clinical use of these tests showed that they perform best between days 3 and 9 after ICU admission, and predict not only secondary infections but also prolonged ICU stay and duration of organ support.
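For readers unfamiliar with the metric, a positive likelihood ratio is sensitivity divided by (1 − specificity). A small Python sketch with invented counts (not the INFECT study data) illustrates the calculation:

```python
# Hedged sketch: deriving a positive likelihood ratio (LR+) from a 2x2
# table of marker result vs subsequent infection. The counts below are
# invented for illustration only.

def positive_lr(tp: int, fn: int, fp: int, tn: int) -> float:
    """LR+ = sensitivity / (1 - specificity)."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity / (1.0 - specificity)

# Illustrative counts: of 40 patients who later developed infection, 30
# were marker-positive; of 98 who did not, 42 were marker-positive.
print(round(positive_lr(tp=30, fn=10, fp=42, tn=56), 2))
```

An LR+ above 1 means a positive marker result makes subsequent infection more likely, which is the direction of effect reported for all three markers.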

Conclusions: This study confirms our previous findings that three cell surface markers can predict secondary infection, demonstrates the feasibility of standardized flow cytometry and presents a tool which can be used to stratify critically ill patients to enable targeting of immunomodulatory therapies.

References

  • 1.Hotchkiss RS, Monneret G, Payen D. Sepsis-induced immunosuppression: from cellular dysfunctions to immunotherapy. Nat Rev Immunol 2013; 13: 862–874. [DOI] [PMC free article] [PubMed]
  • 2.Conway Morris A, Anderson N, Brittan M, et al. Combined dysfunctions of immune cells predict nosocomial infection in critically ill patients. Br J Anaesth 2013; 111: 778–787. [DOI] [PubMed]

EP.020

Development of critical care rehabilitation guidelines in clinical practice: a quality improvement project

Sarah Elliott 1

1Medway NHS Foundation Trust, Gillingham, UK

Abstract

Introduction: Rehabilitation in critical care has the potential to restore lost function and improve quality of life on discharge, but patients are often viewed as too unstable to participate in physical rehabilitation. Following a physiotherapy service evaluation of the provision of critical care rehabilitation, a number of concerns were raised about our practice. It was identified that there was a need to standardise pathways for clinical decision making in early rehabilitation, so that interventions are safe, timely and consistent. The NICE guidelines (2009) and GPICS (2015) both advocate a structured rehabilitation programme that addresses both the physical and psychological needs of the patient by utilising standardised assessment and outcome measures.

Methodology: PDSA cycles were used as a method for quality improvement within this setting. After consideration of the literature, the participants identified the guidelines devised by Stiller et al (2007) as a protocol that could be trialled within clinical practice.

After trialling these guidelines, the participants felt they did not fully meet the needs of clinicians and patients at the Hospital. Therefore we developed our own local, evidence-based critical care rehabilitation guidelines, which incorporate core components from the existing literature. The participants suggested that our guidelines should be flexible, patient-centred and time-efficient, and presented as a user-friendly flow chart in order to standardise our approach to rehabilitation.

Results: The guidelines have been designed not as a formal protocol, but to highlight key factors that physiotherapists may weigh when clinically reasoning whether or not a patient is suitable for rehabilitation. Type and duration of exercise are considered, and the physiotherapist is prompted to review each therapeutic intervention and its impact before making future plans.

Discussion: The overwhelming reflection from physiotherapists on the use of the trialled rehabilitation guidelines was that they did not take into account the individual needs of the patient or the psychological benefit that exercise may bring. The project also highlighted a need to review the types and frequency of exercises, and the MDT’s understanding of the term rehabilitation, as this often caused conflict between physiotherapists and the wider MDT when deciding treatment plans.

Conclusion: Following this project, the participants concluded that in our clinical setting we were seeking to create Trust critical care rehabilitation guidelines that can act as a reference or teaching aid for all members of the MDT, and that may guide assessment, clinical decision making, patient-centred care, adherence to guidelines and options for rehabilitation.

EP.021

Outcomes for patients with Guillain Barre Syndrome transferred to a new weaning and long-term ventilation service in Liverpool, UK

Robert Parker1,2, Verity Ford1, Karen Ward1, Helen Ashcroft1, Nick Duffy1, Biswajit Chakrabarti1 and Robert Angus1

1Liverpool Sleep and Ventilation Centre, Aintree University Hospital, Liverpool, UK

2Critical Care Department, Aintree University Hospital, Liverpool, UK

Abstract

Introduction: In October 2010 the long-term ventilation service in Liverpool began providing support for non-spinal cord injury patients ventilated via a tracheostomy. As part of the Ventilation Centre (VIC) building work it became possible to look after stable tracheostomy ventilated patients away from ICU. A service has been established to assess, transfer and offer weaning for slow to wean patients from the North West and North Wales.

Methods: Prospective data collection was carried out by two of the authors (VF and RP) for the first five years. Outcomes were assessed regarding the underlying reason for failure to wean, weaning success, and follow-up to one year. Failure to wean was classified as neuromuscular disease, COPD, obesity, kyphoscoliosis and chest wall deformity, post-surgery, or other, enabling comparison with published UK data. Of the first 95 transfers for weaning, it was noted that 10 had Guillain Barre Syndrome as the main reason for weaning failure. This is a group traditionally felt to have a poor prognosis, and in whom the evidence is limited.

Results: The median age of the GBS patients was 58 years, and 50% were male. All were transferred from a general ICU; the median length of stay in the referring ICU prior to transfer was 69 days (range 16–265 days). The median VIC length of stay before discharge was 65.5 days (range 18–121 days), including discharge planning. Patients were profoundly weak on admission (mean MRC sum score 22/60). Despite this, all were weaned and decannulated; seven were discharged with long-term nocturnal NIV and three with no support. All those on NIV were initially issued cough-assist devices for home use. Two went directly home, and eight to rehabilitation. Nine were alive one year after VIC discharge and six were living in their own home. No patients had PEG feeding on discharge; all were orally fed. Whilst four patients were treated for pneumonia on the VIC, none required re-escalation of care to the general ICU.

Conclusions: Patients with GBS as the reason for weaning failure had spent longer on the referring ICU, and then spent longer on the VIC, compared with all-comers (median 69 vs 48 days and 65.5 vs 42 days respectively). Despite significant peripheral muscle weakness, a weaning approach based around nocturnal NIV, chest physiotherapy and physical rehabilitation can produce good outcomes in selected patients.

EP.022

Post critical care rehabilitation service review

Kirsten Mitchell1, Kirsti Bennett-Koster2 and Rebecca Coles-Gale1

1East Sussex Healthcare NHS Trust, Eastbourne, UK

2East Sussex Healthcare NHS Trust, Hastings, UK

Abstract

Introduction: NICE CG 83 ‘Rehabilitation after critical illness for adults’ highlights that 75% of patients admitted to critical care units survive to be discharged home, but are at increased risk of experiencing physical and/or non-physical problems. From April 2016 to April 2017, 909 patients were admitted to ESHT critical care. As the guidance states that rehabilitation may improve patient outcomes, we set up a follow-up clinic and ‘Post critical care rehabilitation group’ to address and evaluate this.

Aim: To improve patient outcomes post critical care by piloting new services, comprising a follow-up clinic and a weekly rehabilitation group run by specialist consultants, nurses, a clinical psychologist and physiotherapists.

Method: Patients were triaged by telephone to attend a follow-up clinic appointment. Of 87 patients reviewed in clinic, 50 were referred for a weekly, hour-long rehabilitation programme; 40 attended the group.

Patient outcome measures were taken on initial assessment and after 6 weeks, notably: 10 m timed walk test, 6 min timed walk test, distress thermometer, PAS amended for ICU, and patient feedback.

A weekly programme was individualised for each patient, working towards their personalised goals, commonly based on increasing strength, balance, endurance and confidence.

Results: Overall, all the outcome metrics demonstrated a clinically significant improvement in mean measurements from weeks 1 to 6.

10 m timed walk test: Mean improvement 23.3% ± 10.2

6 min timed walk test: Mean improvement 43.6% ± 39.9

Distress thermometer and PAS (amended for ICU) data are still being collected.

Eleven patients have returned to previous employment, with advice and support. Many others returned to hobbies such as farming, tractor shows, singing, football, swimming, walking, gardening and gym-based classes.

Patients increased in function and independence; further surgeries became possible, and patients were no longer falling.

Many patients reported less fatigue, pain and fear, with improved confidence, ability to express problems, a more positive mindset, and a feeling of being more normal.

Family interactions and sociability were more positive, and relatives were more confident about what patients could manage.

Patients were signposted to the patient support group, clinical psychologist and wellbeing support.

Not all completed the rehabilitation programme due to personal choice, hospital re-admission, terminal diagnosis, further investigations or death.

Conclusion: The ‘Post critical care follow-up clinic and rehabilitation group’ has demonstrated a positive impact on patients’ physical and non-physical abilities, including improvements in muscle strength, balance, fitness and co-ordination, and improved knowledge and skills to manage fatigue, frustrations and expectations.

Patients attending the rehabilitation group found it beneficial and would recommend it. We are looking to develop a therapy garden.

EP.023

An observational study of characteristics, outcome and survival for difficult to wean patients referred to a regional ventilation unit

Robert Parker1, Biswajit Chakrabarti1, Robert Angus1, Verity Ford1, Paul Plant1, Nick Duffy1, Ari Manuel1, Karen Ward1 and Helen Ashcroft1

1Liverpool Sleep and Ventilation Centre, Aintree University Hospital, Liverpool, UK

Abstract

Background: Most patients requiring invasive mechanical ventilation within a critical care unit will wean relatively quickly without the need for specialist weaning services. However, a small proportion struggle to wean off ventilatory support and consume a disproportionate number of bed days and resources. These patients benefit from specialist weaning services. This study examines the characteristics, outcomes and survival of difficult-to-wean patients admitted to a small regional ventilation unit.

Methods: Five years of data (October 2010 to October 2015). Patients were weaned by reducing daytime pressure support, optimising nocturnal support, and using early cuff deflation and speaking valves. NIV and high-flow oxygen were used deliberately to expedite weaning. Spontaneous daytime ventilation was secured before assessing the need for nocturnal support, and nocturnal NIV was established where needed before decannulation.

Results: There were 95 admissions (85 individuals, some with multiple admissions). The median age was 61.0 years, and 67.4% were male. The median length of stay on the referring ICU before transfer was 48 days. The most common reason for admission to ICU was respiratory failure (72.6%), of which 68.1% was due to pneumonia; thirteen patients (13.7%) were admitted to ICU after surgery. The most common primary underlying diagnoses were neuromuscular disease and COPD. A large proportion of admissions (84.2%) were discharged either off ventilatory support or using only non-invasive ventilation overnight (41%). Compared with survivors, patients who died (8.4%) were older (72.5 vs 61 years), had a longer admission to their initial ICU before transfer (65 vs 45 days), and had more significant ventilatory failure on transfer, as evidenced by higher pCO2 (7.58 vs 6.56 kPa) and bicarbonate (34.7 vs 30.7 mmol/L). Longer-term survival was good: 73.7% were alive at 6 months and 68.4% at 12 months after hospital discharge. Patients with COPD weaned the quickest: 95.2% were discharged alive, the majority (55%) required or wished for no ongoing ventilatory support, and where support was needed it was provided by nocturnal NIV in all cases. Ninety percent were discharged directly home and 85% were alive one year after discharge.

Conclusion: Patients did well, with most returning home; only a small proportion did not survive to discharge or required long-term invasive ventilation. NIV was used to support just under half of all patients, which supports the logic of placing a weaning facility within an experienced ventilation service.

EP.024

The critical care patient: improving staff engagement in the provision and completion of patient diaries to assist psychological recovery

Sarah Elliott1, Olivia Padfield2 and Aysa Veloso Costa2

1Medway NHS Foundation Trust, Gillingham, UK

2Kings College, London, UK

Abstract

Background: Patients surviving critical illness are at high risk of developing psychological problems after discharge, with as many as 10% developing symptoms of post-traumatic stress disorder (Wake & Kitchener, 2013), positively correlated with length of intensive care unit (ICU) stay. NICE recommends commencement of rehabilitation as soon as clinically possible in this group. Diaries have been shown to assist patients with fragmented delusional memories and difficulty recollecting their experience, and are hypothesized to work similarly to cognitive behavioural therapy. Factors including lack of awareness, time constraints and the non-compulsory nature of the diaries have led to inconsistent staff engagement with the patient diary system at Medway Maritime Hospital.

Aims: This project aimed to increase provision, consistency and overall multidisciplinary team (MDT) engagement with diaries for patients admitted to ICU for over 72 hours.

Methods: Plan-Do-Study-Act (PDSA) cycles were used as a method for quality improvement within this study. The three cycles trialled, respectively: adding reminders to the online patient note system (Metavision), providing education sessions and raising awareness, and introducing a bedside guidance document to facilitate entry completion. Data were collected from information inputted to Metavision; a total of 129 patients were sampled over 105 days, with 77 receiving diaries.

Results: The baseline average diary provision rate (26%) increased to 83% after the first PDSA cycle. During cycle two we saw a further increase to 100%, with a subsequent decrease to 75%; however, final changes saw a return to 100% by the end of cycle three. The frequency of daily entry completion also increased, and physiotherapists (engaged in cycles 1 and 2) and an occupational therapist (engaged in cycle 2) completed entries alongside nurses.

Discussion/conclusion: At three distinct data collection points, all patients admitted for over 72 hours received diaries, and an increased number and variety of the ICU MDT completed more regular diary entries. Although additional methods may be needed to ensure long-term sustainability, we hope to have implemented effective changes.

EP.025

Audit of the clinical activity and junior doctors working pattern at the Royal Preston Critical Care Unit

Ola Abbas1, Neha Singal2 and Shashi Chandrashekaraiah2

1Health Education North West, Manchester, UK

2Lancashire Teaching Hospitals NHS Trust, Preston, UK

Abstract

Background: GPICS (1) advocates a consultant-to-patient ratio not exceeding a range of 1:8 to 1:15 and an ICU resident-to-patient ratio not exceeding 1:8. However, this guidance does not stipulate the seniority or level of airway competence of the ICU resident doctor, but rather recommends adjusting rotas according to local experience.

Methods: A six-week review of clinical activity on a 28-bedded critical care unit at a tertiary teaching hospital. Data were collected prospectively via audit forms and confirmed against our electronic record system. This mainly entailed attending referrals, admissions, procedures, trips to radiology and discharges. We also surveyed the trainees’ views on staffing levels during the same period.

Results: Clinical activity was fairly evenly spread across the days of the week, but with notable peaks during evening and night shifts (5 pm to 8 am). Ninety-nine referrals were reviewed, with an average of 38 minutes per referral, all managed by senior ICU resident doctors. Admissions required on average 31 minutes to complete, not including time dedicated to required procedures, which also averaged 31 minutes. During this time the unit had 154 admissions requiring 159 procedures, all necessitating input from senior ICU resident doctors. Trips to radiology averaged 28 minutes per trip, with 28 trips in total.

Notably, in our survey 77% (13/18) of our junior doctor cohort advocated having three doctors, rather than two, to cover the unit during a night shift. A striking 94% of junior doctors thought there should be two registrars rostered for duty on a night shift alongside a more junior colleague. The evening (5 pm to 11 pm) was rated the busiest time of day, coinciding with the peaks of clinical activity noted in our audit.

Conclusion: Clinical activity on our unit peaked mainly during out-of-hours periods, and this activity required two senior ICU doctors, especially on night shifts. The data collected, together with feedback from the survey, allowed us to increase the number of junior doctors on night shifts to three, two of whom are senior doctors competent in airway management.

Auditing clinical activity and surveying trainees’ views echoes the GPICS standard of seeking local feedback from trainees when adjusting staffing levels.

Reference

  • 1.Guidelines for the Provision of Intensive Care Services. Edition 1, 2015.

EP.026

A prospective audit of the working locations of registrars covering the adult intensive care unit (ICU) in a tertiary referral NHS hospital

James Malycha1,2, Graham Barker3, Daniel Murphy1, Guy Ludbrook4, Duncan Young1,2 and Peter Watkinson1,2

1University of Oxford, Oxford, UK

2Critical Care Research Group (Kadoorie Centre), Nuffield Department of Clinical Neurosciences, Medical Sciences Division, Oxford University, Oxford, UK

3Oxford University Hospitals NHS Trust, Oxford, UK

4University of Adelaide, Adelaide, Australia

Abstract

Introduction and Background: The UK multi-agency Guidelines for the Provision of Intensive Care Services recommend deteriorating hospital ward patients should receive care from trained critical care outreach personnel. In most NHS hospitals, this involves a team of specialist clinicians led by an experienced ICU registrar. This involves work away from the ICU itself (off-unit work). Current guidelines stipulate that NHS hospitals must have appropriate ICU staffing to ensure safe on-unit and off-unit patient care. However, the amount and proportions of this work are not described in the literature. This audit quantified the on-unit and off-unit movements of ICU registrars within a tertiary NHS hospital.

Methods: Real-time Location Devices (RTLDs) are portable devices which communicate with WiFi network access points to determine their position and log the information to a database. This audit used T2 tags made by Aeroscout Enterprise Visibility Solutions (Stanley Healthcare, Swindon). The tags are small (35 grams), have a long battery life and provide a location update every 5 minutes. They were attached to 2 ICU ‘baton’ pagers (pagers passed on at handover between shifts) carried by senior and junior adult ICU registrars respectively. These pagers were linked to the current hospital Rapid Response System (RRS) and non-urgent referral system. The audit population was adult ICU registrars working in the John Radcliffe Hospital. Data were collected from April through July 2017.

Results: ICU registrars at the John Radcliffe Hospital spent 84.4% of their time in the ICU, 8.2% in the Emergency Department (ED) and 7.4% in places within the hospital other than the ICU and ED. During night shifts (9 pm – 8:30 am), time in the ICU dropped to 81.7%, with time in the ED increasing to 9.2%.

Discussion: Provision of Intensive Care Medicine in NHS hospitals increasingly involves ‘off-unit’ activity. This activity is important but time consuming and hence expensive. By quantifying this activity, informed decisions about staffing and expected workload can be made, especially for night shifts. This in turn should improve patient safety and staffing efficiency. The RTLDs provide a simple means to quantify time worked at different locations in a hospital.

EP.027

12-hour nursing shifts in critical care: a service evaluation

Ceri Battle1 and Paul Temblett1

1Ed Major Critical Care Unit, Swansea, United Kingdom

Abstract

Introduction: Extended nursing shifts of 12 hours or more have become increasingly popular in the hospital setting. A recent systematic review1 investigating the effect of critical care nursing shift length on outcomes concluded that results of studies comparing 8- versus 12-hour shifts were equivocal. The aim of this single-centre study was to investigate the impact of the introduction of 12-hour critical care nursing shifts on healthcare provider and patient care outcomes.

Methods: A single-centre, prospective service evaluation was completed over a two-year period in a large tertiary ICU in Wales. During the first year of data collection (March 2015 to February 2016), nursing staff worked the traditional 8-hour shift. In March 2016, the 12-hour shift was introduced. Patient care outcomes were assessed using the reported number of clinical incidents, and healthcare provider outcomes using levels of burn-out, sickness rates, personal injuries and staff training. Comparisons between the two data collection periods were completed using Student’s t test. Statistical analyses were performed using SPSS Version 22. Statistical significance was set at p < 0.05.
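The between-period comparison can be sketched as a pooled-variance Student's t statistic. The Python below is illustrative only, with invented numbers (the study used SPSS v22 on its own outcome measures):

```python
# Hedged sketch: a two-sample pooled-variance Student's t test of the kind
# described in the Methods. All data values here are invented.
from statistics import mean, variance

def students_t(a, b):
    """Two-sample t statistic with pooled variance (equal-variance form)."""
    na, nb = len(a), len(b)
    sp2 = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    return (mean(a) - mean(b)) / (sp2 * (1 / na + 1 / nb)) ** 0.5

eight_hour = [3.1, 2.8, 3.5, 3.0, 2.9, 3.2]    # e.g. monthly incident counts
twelve_hour = [3.0, 3.3, 2.7, 3.1, 2.8, 3.4]

t = students_t(eight_hour, twelve_hour)
# With df = 10, the two-tailed critical value at p = 0.05 is 2.228, so
# |t| < 2.228 corresponds to "no significant difference" at that threshold.
print(abs(t) < 2.228)
```

Comparing |t| against the critical value for the relevant degrees of freedom is equivalent to checking p < 0.05, the threshold the study used.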

Results: The analysis demonstrated no significant differences in clinical incidents, sickness rates, personal injuries or staff training between the two data collection periods. Response rates for the burn-out surveys were low, at 40% in the 8-hour data collection period and 42% in the 12-hour period. The burn-out analysis demonstrated that emotional exhaustion fell from a high to a moderate level between the 8- and 12-hour data collection periods (p < 0.05); depersonalisation fell, although it was low in both periods (p < 0.05); and personal accomplishment remained at a moderate level across the two periods.

Discussion: This study demonstrated no significant differences in any of the outcomes analysed (other than improvements in two components of burn-out) when comparing 8- and 12-hour shifts among critical care nursing staff working on a large ICU in Wales. A number of limitations in study design, in particular several potential confounders, may have influenced the results. For 12-hour shifts to work in practice, all nursing staff involved need to be willing to consider new methods of working, especially with regard to the delivery of training and communication methods.

Reference

  • 1.Estabrooks CA, et al. Effects of shift length on quality of patient care and health provider outcomes: systematic review. Qual Saf Health Care 2009; 18: 181–188. [DOI] [PubMed]

EP.028

The Role of Physiotherapy during a Major Incident: A Review of Six Patients Admitted To Critical Care with Smoke Inhalation Injury

Kirsty Jerrard1, Sian Evans1 and Charles Reilly1

1Kings College Hospital, London, UK

Abstract

Introduction: Smoke inhalation injury (without cutaneous burns) is a serious and life-threatening problem, often resulting in critical care admission due to the need for early intubation. Physiotherapy seeks to address the respiratory complications associated with thermal injury, including sputum retention, airway inflammation, airway obstruction, infection and hypoxaemia; however, there is limited evidence to substantiate this role.

Objectives: To review the physiotherapy role in a major incident and the management of smoke inhalation patients in a Non-burns Tertiary Trauma Centre.

Case description: Two adult (female, aged 37 and 55) and four paediatric (one male; aged 8, 10, 11 and 12) patients were admitted to critical care with respiratory complications due to smoke inhalation following a Major Incident, London 2017. All patients presented self-ventilating on oxygen therapy and ambulatory, but were electively intubated due to signs of respiratory distress. All patients had soot around their mouths and Grade 2–3 inhalation injuries on bronchoscopy, and unusually no cutaneous burns or other injuries.

Physiotherapy Management

The physiotherapy acute response included:

• Literature scoping, liaising with therapists at other trauma centres and specialist burns units, and accessing local inhalation protocols based on current best practice/expert opinion.

• Joint working between paediatric and adult physiotherapy teams

• Major incident contingency planning

The clinical management included:

• Initial and ongoing complex respiratory assessment

• Physiotherapy treatment techniques; positioning, manual/ventilator hyperinflation, manual techniques, suction and optimisation of humidification and mucolytic regime.

Results: Physiotherapy input and length of stay on critical care are summarised in Table 1. The patient cohort received a total of 98 physiotherapy contacts: 80 during daylight hours (08:30–16:30) and 18 overnight (occurring in the first 72 hours of admission). During their critical care admission all patients had negative sputum microbiology and a reduction in injury grade on repeat bronchoscopy.

The importance of specialist respiratory physiotherapy in this cohort is clearly demonstrated by the intensity of physiotherapy intervention and the need for overnight intervention. This required upskilling physiotherapists on the clinical management of this cohort and increasing physiotherapy provision overnight.

Conclusion: There is a role for physiotherapy provision during a major incident, as the data suggest that physiotherapy contributed to positive outcomes such as the absence of respiratory complications (including infection) and the limited number of days of intubation.

Table 1.

Physiotherapy and Critical care length of stay.

                                           Median  Range
Physiotherapy contacts per patient         16.5    11–23
Physiotherapy time per patient (minutes)   64      36–76
Length of critical care stay (days)        7.5     6–10
Number of days intubated                   5       4–8

EP.029

ICU nurses' job satisfaction, working hours and educational opportunities: Preliminary data from a multi-centre survey in a lower middle income country (LMIC) setting

Ashoka Abeynayake1, Abi Beane2,3,4, Nilmini Dullewa1, Chaturani Sigera2,5, Lalitha Pieris2, Summayah Rashan2, Pubudu De Silva2,5, Rashan Haniffa2,3,4,5

1Post Basic College of Nursing, Colombo, Sri Lanka

2Network for Improving Critical Care Systems and Training, Colombo, Sri Lanka

3Mahidol Oxford Research Unit, Bangkok, Thailand

4University of Oxford, Oxford, UK

5National Intensive Care Surveillance, Colombo, Sri Lanka

Abstract

Introduction: Job satisfaction, opportunities for development and the working environment (including working hours) of ICU nurses are increasingly linked to ICU mortality and morbidity. In Sri Lanka, an LMIC where approximately 1989 nurses serve nearly 100 ICUs, as few as 11.4% of nurses have undergone critical care training.

Aim: To describe working hours, job satisfaction, and support with career development in a cohort of ICU nurses in Sri Lanka.

Methods: Nurses who had recently completed a two-day practical critical care skills training programme were invited to participate in this anonymised survey. The third American Association of Critical-Care Nurses work environment standards survey was used to describe working hours, current job and overall career satisfaction (including future job plans) and support with continuing education.

Results: Eighty-eight of the 94 nurses invited to participate responded, representing 68 ICUs. Of these, 78 (88.64%) respondents held a nursing diploma, 9 (10.23%) an undergraduate degree and 1 (1.14%) an MSc.

Average weekly working hours were reported as 30–60 hours by 57.95% (n = 51), >61 hours by 36.36% (n = 32), and <30 hours by 5.68% (n = 5). All nurses worked shifts, with 51.69% (n = 46) working 24-hour and 4.49% (n = 4) working 48-hour duties.

Overall career and current role satisfaction are described in Table 1. Thirteen (15.29%) nurses would consider leaving their current role, and 23 (26.74%) reported recent resignations amongst colleagues. Support for continuing education is described in Table 2.

Conclusion: Nurses reported being satisfied with their jobs despite long working hours and limited academic opportunities. Work is ongoing to extend the survey to the wider ICU nursing population, to enable evaluation of the relationship between these factors and the processes and outcomes of patient care in ICUs.

Table 1.

Job satisfaction.

                           Very satisfied   Somewhat satisfied   Somewhat dissatisfied   Very dissatisfied
Career satisfaction        32.18% (n = 28)  63.22% (n = 55)      4.60% (n = 4)           0.00% (n = 0)
Current role satisfaction  21.42% (n = 21)  72.09% (n = 62)      3.49% (n = 3)           0.00% (n = 0)

EP.030

Early adoption of the RCP National Mortality Case Record Review Programme in a DGH Intensive Care Unit using a Structured Judgement Review tool

Rebecca Jones1, Jerome Mccann1, Jeff Little1 and Ravinda Sandu1

1Warrington Hospital, Warrington, UK

Abstract

The Royal College of Physicians (RCP) is recruiting ‘early adopter hospitals’ to participate in a National Mortality Case Record Review Programme, which aims to standardise how adult hospital mortality data are reviewed using a structured judgement review tool initially developed by the Improvement Academy [1]. This requires safety and quality judgements to be made over different phases of care [2]. We decided to adopt this tool to judge whether there were any avoidable level two/three deaths amongst patients with an ICNARC-predicted mortality of <20% on our ICU.

A review group of 3 ICU consultants and 2 junior doctors reviewed all deaths with an ICNARC score of less than 20% on our ICU over a six month period.

Each phase of care, including overall care, is judged, and the reviewer is then asked to deem whether the death was avoidable. The tool also provides free-text space for qualitative data. Nine patients met these criteria; one was excluded, so eight patients were reviewed.

All but one patient were surgical, and two of these were elective cases. Sepsis was the most common cause of death (5 patients). Overall scores were all 3 or above. It is recommended that any patient with an overall score of 1 or 2 has a secondary review within the hospital's clinical governance team; there were none in our cohort. Common themes requiring improvement were identified and we made 5 recommendations:

1. Post-op high risk patients to be kept on HDU/ICU for at least 48 hours

2. Any elective surgery deaths require level one investigation

3. A trust-wide audit into post-op complications is required

4. High-risk elective surgery requires discussion around ceilings of care

5. Ischaemic bowel cases require emergency MDT discussion

We did not record any avoidable deaths. The average overall score was 3.5.

This tool may become a standard across the UK, and more training will be required for it. The review provided good qualitative learning points that will be actioned this year. We recommend that other units use the SJR to review any deaths with a predicted mortality of less than 20%, as suggested by ICNARC [3]. We will also use this tool to review all perioperative deaths in our trust.

Table 2.

Support from employer for continuing education.

In-house training                                                Yes, n (%)
In-house education                                               70 (80.46)
Paid study leave                                                 54 (62.07)
Registration fees                                                18 (20.69)
Unpaid study leave                                               65 (76.47)
Travel reimbursement                                             40 (45.98)
No support                                                       14 (16.47)
Academic qualifications
Pays/reimburses initial examination fee                           7 (8.05)
Professional recognition                                         55 (63.22)
Salary increment                                                  5 (5.75)
Certification fee                                                 1 (1.15)
Pays registration fees for courses to prepare for examination     4 (4.60)
Unpaid study leave                                               24 (27.59)
Paid study leave                                                 19 (21.84)
No support                                                       45 (51.72)

EP.031

Influences on attendance of long-stay patients (>30 consecutive ICU days) at critical care follow-up clinic: a retrospective analysis in a single tertiary mixed medical and surgical ICU

Gioacchino Cracolici1, Monica Trivedi1 and Peter Featherstone1

1John Farman Intensive Care Unit, Cambridge University Hospitals NHS Trust, Cambridge, UK

Abstract

NICE guidelines recommend functional assessment of patients who have undergone a critical care stay 2–3 months following discharge, to ensure the physical and psychological sequelae of their admission are being addressed and rehabilitation is progressing. However, attendance is poor (around 50–60% in other literature), and lower amongst long-stay patients. Conversely, long-stay patients have been found to have poorer quality of life following discharge and higher rehabilitation needs, so they are one of the most important groups to engage in follow-up. We aimed to establish our follow-up rate amongst long-stay patients and to identify factors which may influence non-attendance.

We undertook a retrospective analysis of long-stay (>30 ICU days) patients (n = 52) from a 16-month period and identified those eligible to attend follow-up clinic (n = 35). We interrogated the notes of these patients to identify demographic factors which may influence attendance, and sent a survey to those who were invited but did not attend (n = 18). The overall attendance rate was 28%, and women were more likely to attend (69.2%) than men (8.3%). There was a suggestion of an association between length of stay and attendance (median 58.5 days amongst attenders vs 40.2 days), and of a reduced likelihood of attending amongst patients at the older and younger extremes of the population, although neither attained statistical significance. The survey achieved a response rate of 33%, though most who returned it (3/5) could not recall being invited to clinic.

Notes review of patients who did attend revealed psychological sequelae of admission in 60%, most commonly anxiety (40%), broadly in keeping with other literature. Notably, no clinic attenders reported symptoms suggestive of PTSD, where other research has suggested a prevalence of 15–20% amongst former ICU patients. 50% reported some degree of ongoing physical impairment as a result of their ICU stay, including paraesthesia, muscle weakness and impaired concentration; and although 60% felt their level of independence was reduced, only 10% required major assistance with day-to-day activities.

We recommend remodelling the ICU follow-up process to facilitate earlier engagement, with a robust process to ensure those who do not respond to clinic invitations are contacted to identify whether they are too well, or too unwell, to attend. Review of the notes of patients who did attend suggests these may represent a self-selecting group with milder impairment: whilst 70% reported either physical or psychological sequelae, rates of PTSD and significant physical impairment were lower than expected.

EP.032

Predictors of post-traumatic stress disorder following critical illness: A mixed methods study

Ceri Battle1, Karen James1, Thomas Bromfield1 and Paul Temblett1

1ABMU Health Board, Morriston Hospital, Swansea, UK

Abstract

Purpose: Post-traumatic stress disorder (PTSD) has been reported in survivors of critical illness. Systematic reviews report a median PTSD prevalence of 19%, but individual studies report that up to 64% of patients can suffer from PTSD symptoms following critical illness, leading to lower health-related quality of life.[1,2] The aim of this study was to investigate the predictors of PTSD in survivors of critical illness.

Materials and methods: Patients attending the ICU follow-up clinic completed the UK Post-Traumatic Stress Syndrome 14-Questions Inventory (UK-PTSS-14), and data were collected from their medical records. Predictors investigated included age, gender, APACHE II score, ICU length of stay, pre-illness psychopathology, delirium and benzodiazepine administration during the ICU stay, and delusional memories of the ICU stay following discharge.

Results: A total of 198 patients participated, with 54 (27%) suffering from PTSD. Median age was 64 years and 54% were female. The median number of mechanical ventilation days was three (IQR: 0–11). There were no differences in admission diagnosis between the PTSD and non-PTSD patients. On multivariable logistic regression, the significant predictors of PTSD were younger age, lower APACHE II score, pre-illness psychopathology and delirium during the ICU stay.

Conclusions: PTSD was reported in 27% of this cohort of critical care survivors attending an ICU follow-up clinic. The predictors of PTSD in this study concur with previous research; however, a lower APACHE II score has not previously been reported. These results should assist in identifying patients at high risk of PTSD on discharge from ICU.

References

  • 1. Davydow DS, Gifford JM, Desai SV, Needham DM, et al. Posttraumatic stress disorder in general intensive care unit survivors: A systematic review. Gen Hosp Psychiatry 2008; 30: 421–434.
  • 2. Griffiths J, Fortune G, Barber V, Young JD. The prevalence of post-traumatic stress disorder in survivors of ICU treatment: A systematic review. Intensive Care Med 2007; 33: 1506–1518.

EP.033

Predictors and long-term outcomes of patients admitted to a tertiary intensive care unit in a hyperacute stroke and neurosurgical centre after ischaemic and haemorrhagic stroke

Kerrie Aldridge1, Catherine Snelson1 and Claire Sutton1

1Queen Elizabeth Hospital, Birmingham, UK

Abstract

Introduction: Increasingly sophisticated therapies for stroke patients mean that rising numbers of patients require intensive care unit (ICU) management. Co-ordinated multi-disciplinary care has led to improved overall outcomes for stroke patients, but the impact of ICU admission is not well studied. We aimed to evaluate outcomes in a contemporaneous UK cohort of stroke patients admitted to ICU in a neurosurgical and hyperacute stroke centre, and to identify poor prognostic factors.

Methods: Patients admitted to ICU during a hospital admission for spontaneous intracerebral haemorrhage (ICH) or ischaemic stroke between August 2014 and 2016 were retrospectively identified from electronic records. Data on demographics, comorbidities, inpatient course and long-term outcome were collected.

Results: 87 patients were identified; median age was 52, and 57/87 (65.6%) had suffered ICH. Median length of stay was 7 days in ICU and 22 days in hospital. The main reasons for ICU admission were post-neurosurgical management (56/87, 64.3%), low GCS/seizures (19/87, 21.9%), and respiratory complications (5/87, 5.4%). Median Charlson comorbidity score was 0 (range 0–3).

68/87 (78.1%) of patients were mechanically ventilated for an average of 5 days; 31/87 (35.6%) required tracheostomy. 62/87 (71.3%) received acute neurosurgical/neuroradiological (NS/NR) interventions.

In total 61/87 (70%) survived to hospital discharge; of these, 29/61 (48%) returned to their own homes after rehabilitation (10% were discharged directly home). 21/61 (34.5%) patients were living independently at follow-up. Only 3/61 (4.9%) were discharged directly to a nursing home. 1-year survival was 63.2% (55/87).

There was no association between age, stroke type, comorbidities, GCS at presentation, admission SOFA score, or duration of mechanical ventilation and 1-year survival or independent living. Survivors were significantly more likely to have undergone acute NS/NR interventions (11/25 vs. 44/62, p = 0.02), though independent survivors were less likely to have undergone NS/NR intervention (13/21 vs. 36/40, p = 0.02).

Conclusion: One-year mortality rates in well-selected stroke patients admitted to tertiary ICUs are better than previously reported, and are comparable to other common ICU pathologies. Rates of discharge to independent living are low, but incompletely studied. None of the factors previously demonstrated to predict poorer outcome had any influence in this cohort, including age, GCS at presentation, ICH and need for mechanical ventilation. Only receipt of NS/NR intervention was associated with improved survival. Further research on larger modern cohorts of stroke patients is needed to aid selection of those who will benefit most from ICU admission, and to identify factors predicting good functional outcome.

EP.034

Clinical outcomes in critically ill patients with Interstitial Lung Disease (ILD)

Srikanth Chukkambotla1

1East Lancashire Hospitals Trust, Blackburn, UK

Abstract

Background: Patients with interstitial lung disease presenting with acute respiratory failure are often referred to intensive care for ventilatory support. The study objective was to report the clinical outcomes of patients admitted to the intensive care unit with known or previously undiagnosed interstitial lung disease.

Patients and methods: Patients were identified retrospectively using the electronic database between January 2010 and August 2017. The radiological diagnosis was verified using reports from CT chest scans.

Results: A total of twenty-eight patients were identified who had been admitted to the intensive care unit with respiratory failure and known or previously undiagnosed interstitial lung disease.

The unit mortality and hospital mortality rates were 57% and 78% respectively. Sixteen patients died in the intensive care unit, five died on the ward, and one patient was sent home on end-of-life care.

Thirteen patients received invasive mechanical ventilation, while fifteen had basic respiratory support (nasal high-flow oxygen, CPAP or non-invasive ventilation). Eleven of the thirteen who received mechanical ventilation and eleven of the fifteen treated with basic respiratory support did not survive.

Acute exacerbation of chronic interstitial lung disease was the predominant cause of acute respiratory failure. Other causes included acute interstitial pneumonitis of varied aetiology.

Discussion: Interstitial lung disease is a term that broadly describes a group of lung diseases predominantly affecting the interstitium, with varying degrees of fibrosis and inflammation. Idiopathic pulmonary fibrosis (IPF) is the most common variety. Acute exacerbations of IPF and other ILDs result in acute respiratory failure, and these patients are often referred to the intensive care unit for advanced respiratory support. However, the prognosis is poor and effective treatment is lacking. The ICU mortality and hospital mortality rates in this study were 57% and 78% respectively. The majority of patients suffered an acute exacerbation of IPF or another ILD.

Conclusion: The prognosis of patients with interstitial lung disease remains poor despite admission to the intensive care unit. Mechanical ventilation did not improve survival. Admission of these patients to the intensive care unit should be avoided unless there is an obvious reversible cause to treat.

Table 1.

Outcome of patients with ILD.

Study period                                    January 2010 – August 2017
Total number of patients                        28
Unit mortality                                  16 (57%)
6-month mortality                               22 (78%)
Patients receiving mechanical ventilation       13
Patients receiving basic respiratory support    15

EP.035

Predicting Fluid Responsiveness with Echocardiography in the Intensive Care Unit

David Slessor1, Emma Lane1 and Jonarthan Thevanayagam1

1Queen Alexandra Hospital, Portsmouth Hospitals NHS Trust., Portsmouth, UK

Abstract

Background: Deciding when to give a fluid challenge to a critically ill patient is one of the most important and difficult decisions physicians make. Hypovolaemia can lead to shock and multi-organ failure, whereas excess fluid administration is associated with increased mortality. Patients are given a fluid challenge with the aim of increasing their stroke volume (SV); a patient is defined as fluid responsive if their SV increases by 10–15% with a fluid challenge. Only 50% of haemodynamically unstable patients are fluid responders, so half of haemodynamically unstable patients will not benefit from a fluid challenge and will suffer the deleterious effects of excess fluid administration.

Echocardiography can be used to determine if a patient is likely to respond to a fluid challenge. Predictors of fluid responsiveness on echocardiography include an increase in SV > 10% following a passive leg raise; SV variation >14%; and IVC distensibility index >18%.
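These predictors are simple arithmetic on paired echocardiographic measurements. As a minimal illustrative sketch only (the function names and hard-coded cut-offs encode the figures quoted in this abstract, assuming common formulations of the indices; this is not clinical software):

```python
def ivc_distensibility_index(d_max_mm, d_min_mm):
    """IVC distensibility index (%) in a ventilated patient.

    One common formulation: (max - min) / min * 100, measured over
    a respiratory cycle. A value >18% suggests fluid responsiveness.
    """
    return (d_max_mm - d_min_mm) / d_min_mm * 100.0


def sv_change_after_plr(sv_before_ml, sv_after_ml):
    """Percentage change in stroke volume after a passive leg raise."""
    return (sv_after_ml - sv_before_ml) / sv_before_ml * 100.0


def likely_fluid_responder(plr_delta_pct=None, svv_pct=None, divc_pct=None):
    """Apply the thresholds quoted above: PLR SV increase >10%,
    SV variation >14%, IVC distensibility index >18%.
    Any one positive predictor flags a likely responder."""
    checks = []
    if plr_delta_pct is not None:
        checks.append(plr_delta_pct > 10.0)
    if svv_pct is not None:
        checks.append(svv_pct > 14.0)
    if divc_pct is not None:
        checks.append(divc_pct > 18.0)
    return any(checks)


# Example: IVC 21 mm at end-inspiration, 17 mm at end-expiration.
print(round(ivc_distensibility_index(21, 17), 1))  # 23.5
```

In this hypothetical example the distensibility index of ~23.5% exceeds the 18% threshold, so the patient would be flagged as a likely fluid responder.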

This project investigated how often echocardiograms performed in our ICU were used to assess fluid responsiveness.

Methods: A retrospective review was performed, by a single reviewer, of all echocardiograms performed in the ICU over a 6-month period. The reports were compared to the standards set by the British Society of Echocardiography for predicting fluid responsiveness.

Results: There were 118 echocardiograms performed. No patients had their inotrope or ventilatory status documented. One patient had an assessment of their response to a fluid challenge with a passive leg raise. 22% had the variation in IVC size documented; however, these were all in relation to how IVC variation could predict right atrial pressure rather than fluid responsiveness. No patients had their IVC distensibility index calculated. Of note, for 18% of patients it was documented that IVC variation could not be used because the patient was ventilated.

Conclusions: Predicting fluid responsiveness is an important component of managing critical illness. Echocardiography can be used to predict fluid responsiveness, yet despite echocardiograms being frequently performed on our ICU, the information gained is rarely used to guide fluid therapy. A number of staff were unaware that the IVC can provide useful information in ventilated patients and therefore did not report their findings.

Teaching and education should be carried out to train staff in the assessment of fluid responsiveness. A guide to critical care echo and fluid responsiveness has been developed to help staff assess fluid responsiveness and understand the results found.

EP.036

The Use of Left Ventricle Global Longitudinal Strain in Patients with Sepsis

David Slessor1, Emma Lane1 and Richard Crawley1

1Queen Alexandra Hospital, Portsmouth Hospitals NHS Trust, Portsmouth, UK

Abstract

Background: Sepsis-induced cardiomyopathy has been reported to occur in 14% of patients with sepsis. It results in myocardial depression that typically resolves after 7–10 days. Transthoracic echocardiographic (TTE) assessment by conventional parameters, such as left ventricular ejection fraction (LVEF), is typically used to define sepsis-induced cardiomyopathy. However, a recent meta-analysis demonstrated that a low LVEF was not associated with mortality in septic patients, and LVEF is often affected by changes in preload and afterload. Advanced echocardiographic techniques such as speckle tracking echocardiography and global longitudinal strain (GLS) have evolved for direct assessment of myocardial function and may be beneficial in the assessment of sepsis-induced cardiomyopathy.

Primary Objective: In critically ill patients with sepsis and septic shock, does GLS compared with LVEF identify more patients with myocardial dysfunction?

Methods: Over a 4-month period, GLS and LVEF were measured retrospectively from standard study imaging in patients admitted to the Intensive Care Unit with sepsis, as defined by the Sepsis-3 criteria. All images were acquired on GE S70 machines by a British Society of Echocardiography accredited sonographer.

Results: Thirteen patients met the inclusion criteria and had adequate imaging to measure LVEF and GLS. Three patients had a reduced LVEF and 10 a normal LVEF, whereas 9 patients had a reduced GLS and 4 a normal GLS; this was a statistically significant difference (p = 0.047). Of the 4 patients who had a preserved LVEF and normal GLS, 1 was on inotropes and 1 had cirrhosis, which is associated with a hyperdynamic circulation.

Conclusion: Critically ill patients with sepsis were significantly more likely to have reduced left ventricular systolic function as defined by a low GLS than by a reduced LVEF. These data suggest that GLS measured from standard TTE could have value in the septic patient for early detection of LV dysfunction. With the use of advanced techniques, sepsis-induced cardiomyopathy may be more commonly diagnosed, helping to identify patients who are critically ill from sepsis and to target treatment accordingly. Future studies should investigate whether a reduced GLS is associated with mortality.

EP.037

What is wrong with this heart?

Jennifer Hares1 and Nitin Arora2

1University Hospitals of Coventry and Warwickshire, Coventry, UK

2Good Hope Hospital, Sutton Coldfield, UK

Abstract

A 31-week pregnant multiparous lady presented with rapid onset shortness of breath.

She was well when she attended a routine antenatal appointment and was scheduled to return to labour suite the following day for management of her diabetes. The next day, she presented in significant respiratory distress. Her shortness of breath and cough had come on suddenly earlier that morning with no other symptoms.

A CTPA excluded a pulmonary embolism but showed bilateral consolidation and an enlarged heart. She was admitted to intensive care with a provisional diagnosis of community-acquired pneumonia.

She was treated with continuous positive airway pressure, then non-invasive ventilation, and intravenous antibiotics. Although she remained cardiovascularly stable, there was no significant improvement in her respiratory function overnight, and she still required a high fraction of inspired oxygen (FiO2).

A bedside transthoracic echocardiogram (TTE) showed a severely dilated left ventricle with global hypokinesia and an ejection fraction (LVEF) of 27%.

The decision was taken to perform a semi-elective lower segment caesarean section. She remained sedated and ventilated on the Intensive Care Unit for several days before she could be extubated. A follow-up TTE 2 weeks later showed little improvement in her LVEF.

The presumptive diagnosis of her heart failure (HF) was peripartum cardiomyopathy. However, her thyroid function was also found to be profoundly deranged: T3 and T4 levels were undetectable, with a TSH of 25 mIU/l that took days of high-dose IV replacement to correct.

Peripartum cardiomyopathy is a diagnosis of exclusion, defined as an idiopathic cardiomyopathy presenting with HF secondary to left ventricular systolic dysfunction towards the end of pregnancy or in the months following delivery, where no other cause of HF is found (Sliwa et al., 2010). In this case, however, an alternative cause of HF could be the culprit: hypothyroidism. Reduced T3 production is known to have a negative impact on cardiac function, and Tang et al. (2005) proposed that a low thyroid state causes cardiac atrophy with chamber dilatation, impaired myocardial blood flow, loss of arterioles, and severe systolic dysfunction.

EP.038

Cardiovascular Collapse in Early Pregnancy: Not the usual suspect!

Dave Robinson1 and Jag Pooni1

1New Cross Hospital, Wolverhampton, UK

Abstract

A 32-year-old primiparous lady presented for surgical termination of pregnancy at 14 weeks' gestation. She had a background of Systemic Lupus Erythematosus (SLE) in remission, not requiring any immunosuppression, hypothyroidism and fibromyalgia. A nurse-led, protocol-guided preoperative assessment was conducted, which did reveal some symptoms of exertional dyspnoea and exercise intolerance, but these were not fully appreciated or investigated and did not emerge during the anaesthetist's preoperative visit.

Following a routine induction with fentanyl, propofol and mask ventilation only, the patient was transferred to recovery after an uneventful procedure. In recovery the patient developed cardiovascular collapse requiring significant cardiorespiratory resuscitation. The patient was intubated and transferred to the nearest hospital with critical care facilities.

On arrival some cardiac stability had been achieved and a bedside echocardiogram revealed a dilated and hypertrophied right ventricle with a D-shaped septum and a Right Ventricular Systolic Pressure (RVSP) of 66 mmHg + Right Atrial Pressure (RAP). A diagnosis of a massive pulmonary embolism was considered highly likely and an urgent CT-Pulmonary Angiogram (CTPA) was organised. Immediate thrombolysis was not thought beneficial on the basis of the risk of postoperative bleeding and current cardiac stability.

The CTPA did not show any major emboli within the pulmonary vasculature, and following a multidisciplinary discussion it was felt that the most likely diagnosis was pre-existing pulmonary arterial hypertension exacerbated by anaesthesia and perioperative prostaglandin use.

A pulmonary artery catheter was floated, which revealed a Pulmonary Artery Pressure (PAP) of 80/50 mmHg. Intravenous iloprost was started along with nasogastric sildenafil, improving cardiac parameters and significantly lowering oxygen requirements over the next 24 hours, enough to allow safe extubation.

After transfer to a tertiary centre the iloprost was gradually weaned off and dual oral pulmonary vasodilator therapy with sildenafil and macitentan was started.

Pulmonary Arterial Hypertension (PAH) is rare, occurring in approximately 3 per million per year, but occurs more commonly in ‘at-risk’ groups. The right ventricle is extremely sensitive to pressure and volume overload, which is exacerbated during pregnancy and can be compounded by anything that increases pulmonary vascular resistance (PVR).

Exertional dyspnoea in a young patient is always suspicious and should be a red flag for further investigation. This case highlights a rare cause of cardiovascular collapse in pregnancy, which requires thorough physiological understanding and appropriate haemodynamic manipulation when general anaesthesia is considered.

EP.039

Predicting outcomes in patients with chronic liver disease admitted to Intensive Care: an Australian retrospective study

Kate Chatten1, Katharine Kline2, Nick Shackel2 and Heike Koelzow1

1Intensive Care, Royal Prince Alfred Hospital, Sydney, Australia

2Hepatology, Royal Prince Alfred Hospital, Sydney, Australia

Abstract

Introduction: Following Intensive Care Unit (ICU) admission with an acute illness on a background of chronic liver disease, it is unclear which are the best predictors of outcome, or whether regional practices influence survival.

Objectives: To establish outcomes in our patient population; to determine whether the Chronic Liver Failure Sequential Organ Failure Assessment (CLIF SOFA) and the Liver injury and Failure evaluation (LiFe) score predict mortality in our population; and to compare these scores with other established prognostic models.

Methods & settings: We undertook a retrospective analysis of ICU admissions of patients with chronic liver disease at Royal Prince Alfred Hospital, Sydney, between 2010 and 2012. Assessment included the Model for End-Stage Liver Disease (MELD), Acute Physiology and Chronic Health Evaluation II (APACHE II), CLIF SOFA and LiFe scores.

Results: There were 138 ICU admissions in 128 patients, with a median MELD score of 24 and a median APACHE II score of 24. 45% of patients died in hospital without liver transplant. Patients who survived to hospital discharge had 84% 1-year survival. There were significant differences in hospital mortality between grades of acute-on-chronic liver failure (from CLIF SOFA) and between LiFe risk groups. Areas under the receiver operating characteristic curves for the prognostic scores varied from 0.74 for Child-Pugh to 0.91 for the CLIF SOFA score on ICU day 3.
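The AUROC comparison above can be illustrated with a minimal rank-based computation (AUROC equals the probability that a randomly chosen non-survivor scores higher than a randomly chosen survivor). The scores and outcomes below are synthetic illustrations only, not the study's data:

```python
def auroc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney U formulation.

    labels: 1 = event (e.g. died in hospital), 0 = no event.
    Counts score pairs where the event case outranks the non-event
    case; ties receive half credit.
    """
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos) * len(neg))


# Synthetic cohort: first three patients died (label 1), four survived.
labels = [1, 1, 1, 0, 0, 0, 0]
good_score = [30, 26, 18, 20, 15, 12, 10]  # discriminates well
poor_score = [18, 25, 14, 22, 16, 20, 12]  # discriminates poorly

print(round(auroc(good_score, labels), 2))  # 0.92
print(round(auroc(poor_score, labels), 2))  # 0.58
```

A score whose AUROC approaches 1.0 separates survivors from non-survivors almost perfectly, while 0.5 is no better than chance; this is the sense in which CLIF SOFA on day 3 (0.91) outperformed Child-Pugh (0.74).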

Conclusions: CLIF SOFA and LiFe scores were good predictors of hospital mortality. Organ failure scores performed better than liver based scores. The best predictor of hospital mortality was the CLIF SOFA on day 3 in ICU.

EP.040

Chronic Liver Failure: The Forgotten Organ

Thomas O'Pray1,2, Maryam Crews1, Owen Jefferies1,2 and Peter Hampshire1

1Royal Liverpool and Broadgreen University Hospitals NHS Trust, Liverpool, UK

2Health Education North West, Manchester, UK

Abstract

Background and aims: The management of decompensated liver disease in critical care is challenging. Implementation of international guidance is complex, with many documents now out of date or providing contradictory advice. Recent NICE and Advancing Quality Alliance guidance statements have attempted to update and unify international standards, with success in non-critical care areas. Our project had two broad aims. Firstly, we wanted to assess local compliance with international guidance for the care of critically ill patients with decompensated liver failure in a large teaching hospital. Secondly, we wanted to formulate cohesive guidelines for management of decompensated disease in critical care.

Material and methods: We undertook a systematic audit of patients admitted to Critical Care with decompensated liver disease over a six-month period (n = 19). Compliance was measured against European Association for the Study of the Liver (EASL) and NICE clinical guidelines, over the first 7 days of critical care. The 5 audit themes were: diagnosis of liver disease, management of infection, care for encephalopathy, management of GI bleeding, and management of alcoholic hepatitis.

Results: Analysis demonstrated poor compliance across all 5 themes.

Diagnosis of liver disease: there was poor documentation of radiological or tissue diagnosis of liver disease (0%) and of disease staging (53%).

Management of infection: 2 of 10 patients treated for infection had ascitic taps, and none had ascitic drains. The presence or absence of ascites was not documented by critical care staff, so the number of patients with ascites is unknown, but is likely to be greater than 2 of 10.

Care for encephalopathy: provision of simple lactulose therapy was inadequate (80%), and only 60% of patients had serum ammonia levels measured. 40% of patients underwent CT brain, which is inadequate in a patient group demonstrating universally deteriorating neurology.

Management of GI bleeding: 66% of patients were prescribed terlipressin and only 33% vitamin K. Only 33% of patients had an appropriate transfusion target (70-80 g/L) documented.

Management of alcoholic hepatitis: only 20% of patients with a hepatitis severity score indicating steroid treatment would decrease mortality received steroids. 33% of patients received N-acetyl cysteine.

Conclusions: Current international guidelines are complex but, regardless, compliance with most standards was poor. A care bundle has been introduced, with medical staff training, to promote adherence to best-practice guidance. A supportive one-page quick reference guide has been developed. A re-audit will be undertaken.

EP.041

Predicting Short-term Mortality in Critically Unwell Patients with Cirrhosis using Prognostic Scoring Systems

Cath Huang1 and Michael Patterson2

1Newcastle upon Tyne Hospitals, Newcastle upon Tyne, UK

2University College London Hospitals NHS Foundation Trust, London, UK

Abstract

Background: Patients with chronic liver disease who become acutely unwell have poor short-term prognosis. This is particularly significant if extra-hepatic organ failure develops and critical care input is required, with hospital mortality rates between 63% and 100% reported. Consequently, there are mounting questions surrounding the effectiveness of critical care interventions in this cohort. Therefore, identification of reliable outcome predictors is merited to inform the decision-making process.

Aim: Our aim was to evaluate the prognostic value of liver-specific (Child-Pugh (CP), Model for End-Stage Liver Disease (MELD) and Modified Maddrey’s Discriminant Score (MDS)) and general ICU (Acute Physiology and Chronic Health Evaluation II (APACHE II) and Sequential Organ Failure Assessment (SOFA)) scoring systems in patients admitted to the ICU with decompensated chronic liver disease. The main outcome measure was ICU mortality.

Methods: Retrospective data was collected on 64 consecutive patients admitted to a tertiary ICU during a 2-year period with evidence of decompensated liver disease. Baseline demographic, clinical and laboratory data were recorded on day 1 of ITU admission and prognostic scores calculated from these. Differences were compared using Student’s t-test. Statistical analysis was undertaken using SPSS v24. Survival analysis with Cox regression was performed to evaluate the prognostic value of CP, MELD, MDS, APACHE II and SOFA.

Results: Full data was collected for 64 patients with decompensated liver disease over a 2-year period. 64% were male and the average age was 58 years. 31% (20/64) were admitted to ICU as a result of decompensated liver disease/GI bleed. 25% (16/64) were admitted due to sepsis of any source and subsequently found to have evidence of decompensation on transfer to ICU. 53% of patients died; non-survivors had statistically significantly higher APACHE II, MELD, SOFA and MDS scores (p < 0.05).

Cox regression analysis revealed that increasing MELD scores were significantly associated with an increasing hazard of death (p < 0.01). ROC curve analysis showed the AUC for both MELD and SOFA to be 0.78.

Sensitivity and specificity for mortality with a MELD score ≥20 were 74% and 67% respectively, with SOFA ≥14 having a sensitivity of 70% and a specificity of 67%.

Conclusion: Patients with decompensated liver disease admitted to the ITU have increased risk of death and this is associated with higher scores in liver-specific and general ITU prognostic scoring systems. MELD is a good predictor of poor prognostic outcome in the ITU setting with baseline scores being correlated with hazard of death.

EP.042

Pancreatitis: Patients and antibiotic patterns

Vallish Bhardwaj1, Harry Craven1, Sonia Hudson1, Wael Elamin1 and Jayachandran Radhakrishnan1

1General ICU – Broomfield Hospital – Mid Essex NHS Trust, Chelmsford, UK

Abstract

Introduction: Critical illness secondary to pancreatitis has a high mortality and morbidity. Infection is a common complication in patients with pancreatitis, but the severe acute inflammatory response often masks the signs of infection. Both the diagnosis of infections and the estimation of antimicrobial effectiveness are inadequate with conventional clinical and laboratory methods, and a systematic, evidence-based approach is necessary.

In this clinical audit we investigated the microbiological management of patients with acute pancreatitis.

Materials and Methods: We performed a retrospective data analysis of all patients admitted to the Broomfield Intensive Care Unit, a district general hospital with tertiary specialist services. All patients admitted with acute pancreatitis from April 2014 to April 2017 were included. Data was extracted from an electronic database. Simple summary statistics were used to summarize the data.

Results: Twenty-seven patients were admitted with acute pancreatitis in the 3-year study period. The mean age was 59.5 years (SD 15.33). The male-to-female ratio was 1.1:1. In 13 (48.1%) patients, pancreatitis was secondary to gallstone disease. Gallstone pancreatitis was more common in females (n = 9/13, 69.2% of all gallstone pancreatitis). Alcoholic pancreatitis was more common in males. Male patients were also more likely to have pancreatitis of unknown origin.

The average duration of stay in ICU was 8.8 days. Overall mortality was 25.9% (n = 7). Women had a higher mortality (30.8% vs 21.4%; n = 4 vs 3). Male patients were more likely to be discharged from the unit (66.7%). None of the male patients were transferred to another centre, compared with 15.4% (n = 2) of the females.

Cultures were predominantly negative. There was no clear pattern of antibiotic use in patients. Both the choice and duration of antibiotics were variable, even within diagnostic groups. Treatment with vancomycin was correlated with mortality (log odds −0.53, SE 0.18), but the relevance of this finding is unclear.

Conclusion: Pancreatitis is associated with high mortality especially in women. Diagnosis of infections in patients with pancreatitis is difficult. There is no consistency in the use of antibiotics.

A standardized approach to the diagnosis of infections and the use of antibiotics, supported by evidence-based guidelines, is necessary for effective antibiotic stewardship in these patients.

EP.044

A Dental Abscess Gone Bad: An Atypical Diagnostic Dilemma

Saba Iqbal1, Raluca Ene1 and Venkat Sundaram1

1Glan Clwyd Hospital, Rhyl, UK

Abstract

We present an interesting case encompassing an unusual complication of dental abscesses and a diagnostic dilemma in critical care.

Description: A 49-year-old hypertensive man developed a right sided mandibular dental abscess, later complicated by facial palsy. Two weeks after onset of symptoms, he presented to hospital with sepsis and was found to have right facial and vocal cord palsies. Emergency facial exploration for necrotising fasciitis revealed extensive necrosis of right sided facial muscles and facial nerve. These structures, along with overlying skin and auditory canal, were excised. The next day, he developed a right pupillary efferent defect. CT and MRI head revealed acute right temporal and cerebellar infarcts.

Twenty-four hours after the first debridement, planned surgical re-exploration demonstrated extension of necrosis. The area was debrided further. A day later, he was anticoagulated on suspicion of cavernous sinus thrombosis. He subsequently developed diabetes insipidus with a dilated left pupil. Repeat CT scan revealed extensive subarachnoid haemorrhage and extension of the cerebellar and temporal infarcts.

Departmental discussions were held concerning validity of brainstem death tests (BDT) in this case, given the efferent defect in one eye and an inability to access the excised right ear for caloric tests. Due to the uncertainty regarding validity of BDT, and the patient’s unsuitability for organ donation as a result of necrotising fasciitis, a unanimous decision was made to withdraw care without BDT.

There was subsequent correspondence between local clinical leads and the National Deputy Clinical Lead for Organ Donation. The consensus was that provided other stipulated conditions are met, BDT can be reliably performed in the presence of cranial nerve or tissue deficit. Ancillary tests may also be utilised, but have significant limitations.

Discussion: Dental abscesses may be complicated by necrotising fasciitis, culminating in unusual presentations to healthcare. Therefore, in these cases, signs such as facial palsy must be assessed meticulously.

Diagnosing death by neurological criteria can be made additionally challenging in the presence of cranial nerve or facial tissue deficit. BDT can be reliably performed in these cases; however, it is critical that this diagnosis is made safely, with due consideration of the clinical picture and clinician comfort.

Acknowledgments: Presented with written assent of the patient’s next-of-kin.


EP.045

A Case of Cerebral Malaria presenting at a District General Hospital in Scotland

Will Watson1 and Iain Lang1

1Wishaw General Hospital, Lanarkshire, UK

Abstract

Introduction: Imported malaria is rare, but most commonly occurs in the young and middle-aged.1 It most frequently presents with fever, malaise and jaundice around 2 weeks post-exposure. Severe malaria commonly requires admission to the critical care unit, and the supportive care required depends on the affected organs. We present a case of severe malaria with cerebral involvement, which was successfully treated after initial presentation to a DGH.

Case report: A 41-year-old gentleman presented to our Emergency Department 12 days after returning from a working trip to sub-Saharan Africa. He had been staying with friends, and described feeling unwell for a few days with viral symptoms, followed by a sudden deterioration in his conscious level and jaundice. On admission to hospital, his GCS was E4M4V1; he was biochemically jaundiced, with thrombocytopenia and an acute kidney injury. Rapid malaria testing demonstrated Plasmodium falciparum and P. ovale species, and the parasite load was 6% on subsequent blood film. He was treated with IV artesunate and required supportive care, including intubation when his GCS dropped to 7. He never required renal replacement therapy. He was transferred to the regional infectious diseases unit, where he remained on intensive care for 3 days before he was well enough to be moved to a ward. He was subsequently discharged home 11 days after presentation, having made a full neurological recovery.

Discussion: Prompt diagnosis and treatment of malaria is crucial in increasing the chances of survival. Severity of disease can be classified using the Health Protection Agency flow sheet.2 Severe cases, which are almost exclusively caused by Plasmodium falciparum, should be admitted to hospital and may require organ support on a critical care unit. Common manifestations of severe malaria are neurological (confusion, coma and seizures), respiratory, cardiovascular, hepatic (hypoglycaemia, jaundice) and renal. The mechanism of organ dysfunction is most commonly microvascular obstruction due to occlusion of capillary beds, leading to tissue hypoperfusion. Treatment consists of supportive care and specific antiparasitic therapy. IV quinine has been largely replaced by artesunate as the treatment of choice, although artesunate is at present only available on a named-patient basis in the UK.

References

EP.046

A Tale of Three Hot Cases: Management of the Acute Dysautonomias on Intensive Care

Samira Green1, Lydia Fletcher1, Lawrence Tham1 and Susan Jain1

1Homerton University Hospital, London, UK

Abstract

Serotonin syndrome (SS) and neuroleptic malignant syndrome (NMS) are two acute dysautonomias that often prove difficult to differentiate for the intensive care clinician (1, 2). Overlapping signs contribute to this, along with polypharmacy within the psychiatric cohort, preventing isolation of a single causative agent (1, 2). Three patients presented to our intensive care unit with acute dysautonomic symptoms within a five-week period. The first, a young man with a history of multiple antipsychotic agent use, demonstrated altered mental status, severe rigidity and pyrexia. He rapidly deteriorated post-admission, developing severe hyperpyrexia, and required multi-organ support including intubation, vasoactive drugs and renal replacement therapy. CT brain imaging and lumbar puncture results were negative. Despite active cooling measures, the patient sadly arrested around 24 hours following admission. The clinical picture was consistent with NMS but occurred acutely and included many features of SS. The second patient had a more chronic history of mixed antidepressant and olanzapine use, severe confusion, agitation and increased tone. He required early respiratory support and was slow to wean, with ongoing neurological sequelae to date. The third patient, with known bipolar disorder, presented with reduced conscious level, tachycardia, diaphoresis and clonus. He was receiving multiple antipsychotic medications. In all cases, acute dysautonomia was a diagnosis of exclusion.

Despite detailed examination of the history and signs elicited, in each patient the picture was consistent with NMS, SS or a possible overlap syndrome. In the acute stages, it is uncertain whether differentiation of these conditions is essential, with most guidance on initial management strongly stressing the need simply for best supportive management (3, 4). Specific medical management may be possible in the case of a clear differential, otherwise such treatments may in fact be detrimental (3, 4).

Early identification and consideration of acute dysautonomias as part of our differential diagnosis, alongside best supportive management may improve clinical outcomes for such patients (1, 4).

References

  • 1.Dosi R, Ambaliya A, Joshi H, Patell R. Serotonin syndrome versus neuroleptic malignant syndrome: a challenging clinical quandary. BMJ Case Reports 2014; 23: bcr2014204154.
  • 2.Buckley NA, Dawson AH, Isbister GK. Serotonin syndrome. BMJ 2014; 348: g1626.
  • 3.Boyer EW, Traub SJ, Grayzel J. Serotonin syndrome. In: UpToDate. Waltham, MA: UpToDate; 2010 (accessed 27 June 2017).
  • 4.Wijdicks E. Neuroleptic malignant syndrome. In: Aminoff M (ed) UpToDate. Waltham, MA: UpToDate (accessed 27 June 2017).

EP.047

Case series of airway myiasis with maggots in the intensive care unit

Steve Young1 and Michael Slattery2

1ABMU Health Board, Morriston Hospital, Swansea, UK

2ITU and Anaesthetics, Morriston Hospital, Swansea, UK

Abstract

Introduction: Airway myiasis is an extremely rare presentation, with only a handful of reported cases; we present two cases from an Intensive Care Unit in Swansea.

Case Series: Two patients in ITU were found to have myiasis of their airways. The first patient presented with severe diabetic ketoacidosis; about 7 days into admission she was found to have maggots infesting her ET tube (shown in a video). The maggots were in the tube wall itself, in a redundant part of the subglottic suction tube. The tube was removed the next day, but the maggots had to be individually removed from the mouth under direct laryngoscopy as they escaped from the tube. The maggots were found to be from the Calliphora sp. (bluebottle) or Lucilia sp. (greenbottle) fly. A week later, another patient on a different unit was found to have maggots coming out of her nares; she had presented with an exacerbation of COPD requiring intubation and ventilation. On further investigation she was found to have an infestation of the anterior nasal passageways with eggs and maggots; these were removed by ENT.

Discussion: These two patients had no contact with each other, and the only similarity was that both were intubated and ventilated in ITU. The source of the maggots has been the subject of speculation, the most likely explanation being a fly landing on the tube, presumably attracted by the smell of the mouth, and laying eggs in it. As the summer in Wales had been unusually hot, all the windows had been opened and flies had been seen in the units.

Airway myiasis has previously been reported only around tracheostomy sites, in three case reports; to our knowledge, it has never been reported in an endotracheal tube or the nose of an intubated patient.

EP.048

Bamboo, Breaks and BP: A case report on the pre-hospital recognition and management of neurogenic shock and ankylosing spondylitis

Shah Mizanur Rahman1,2, Kevin Letchford1,2, Freddie Haden-Brown2,3 and David Zideman1,2

1Thames Valley Air Ambulance, RAF Benson, UK

2South Central Ambulance Service NHS Trust, Thames Valley Region, UK

3Oxford Brookes University, Oxford, UK

Abstract

A 64-year-old man was propelled from his moped whilst parking it at <20 mph, head-butting a car bumper; after landing in a prone position he complained of chest pain and an inability to feel or move his legs. A paramedic crew was first on scene, with an IV cannula and monitoring placed. An enhanced care team (physician/paramedic) arrived, and a repeat primary survey identified:

<c> – no catastrophic haemorrhage

A – clear after removal of motorcycle helmet, patent, no injuries

c – bruising between the scapulae, no collar placed (or used subsequently)

B – left chest wall tenderness with mild bruising, poor inspiratory effort, SpO2 ∼92% on air, RR 20

C – no external haemorrhage, thready radial pulse at 67 bpm, pCRT 3 s, heart sounds normal, abdomen soft, no pelvic bruising or asymmetry, long bones intact, no apparent injury from abdomen to feet

D – GCS 15. PEARL 4 mm, moving upper limbs with good strength, a definite sensory level at ∼T6 with absent sensory and motor function below this. No priapism or incontinence noted.

E – Normothermic and euglycaemic, last ate the previous night. Has hypertension which is well controlled with amlodipine and ramipril. No allergies. Full recall of the events preceding and after time of injury. No medical precipitant of injury.

The team performed a coordinated log roll and administered IV paracetamol and 1 L of normal saline, whilst scooping the patient with minimal handling and transporting him to the nearest Major Trauma Centre. Boluses of ephedrine were administered en route to maintain a systolic BP of ≥90 mmHg. After handover, CT showed an unstable C6/7 fracture, a burst fracture of T4 and a Chance fracture of T12/L1, with a grossly spondylotic appearance to the spine. At follow-up (on step-down from ICU to the Spinal Rehabilitation Unit), the patient had had a number of immediate and delayed operations to decompress and fuse the spine. He was optimistic and engaging with rehabilitation, but wary of the poor prognosis from his injury, with minimal bilateral lower limb power.

Key Learning Points:

• Older patients are at higher risk of more significant injuries with lower speed/force mechanisms of injury, warranting a higher degree of suspicion.

• Spinal immobilisation is about preventing further injury through forces being exerted on the individual patient and their habitus, with gentle handling and careful packaging.

• Neurogenic shock should be treated along the same lines as neuroprotective measures e.g. maintain oxygenation, correct coagulation and promote perfusion.

EP.049

Catheter-related bloodstream infections arising from central venous catheters and their prevention in an acute critical care unit

Barbara Ribeiro1, James McCulloch2 and Mohamed Ramali2

1Queen's Hospital – Barking, Havering and Redbridge University Hospitals NHS Trust, Romford, UK

2Colchester Hospital University NHS Foundation Trust, Colchester, UK

Abstract

Catheter-related bloodstream infections (CRBSIs) are associated with increased hospital stays and significant mortality rates. CRBSIs are thought to occur in about 3% of catheterisations, but the rate can be as high as 16%. The Matching Michigan project showed that CRBSIs could be significantly reduced by a combination of technical and non-technical interventions. Paired blood cultures play a pivotal role when a CRBSI is suspected.

We have studied the level of documentation and care given to central venous catheters (CVC) and attempted to determine the frequency of CRBSIs in a Critical Care Unit (CCU) at an acute district general hospital.

We have performed a two-month retrospective study including patients with CVCs in our unit. The data was collected from nursing and medical notes. Microbiology results were obtained electronically. The results were presented to the CCU team at an audit meeting and the study was repeated with further CVC details to assess changes.

In the original study, 80 CVCs were recorded, with an average length of time in situ of 7.5 days. The right internal jugular vein was the most common site for line insertion in both versions of the study. In 81.2% of cases there was written documentation that a chest x-ray was requested to confirm the position of the line and exclude complications. 44% of the CVC tips were sent for microbiology and 12.5% of these had a significant positive result (>15 colonies detected). No definitive case of CRBSI was detected, but no paired blood cultures were sent.

Following our intervention, the repeat study included 32 CVCs that were in situ for an average of 6.9 days. A sticker recording CVC insertion details was used in 75% of cases. 72% of the CVC tips were sent for microbiology and 25% of these had a significant positive result. The average number of CVC lumens used was 3.8. The CCU team documented that paired blood cultures were to be taken on 2 occasions. There was no documented case of CRBSI.

Our results seem to suggest that there is still room for improvement regarding CVCs and their care. We have highlighted that daily monitoring, meticulous CVC care, and recording of the CVC insertion conditions are crucial to reduce line colonisation. The CCU team has also been encouraged to continue to send CVC tips for microbiology and paired blood cultures when suspecting a CRBSI.

EP.050

Surveillance of central venous catheter bloodstream infections in critical care units in England: Results from the sentinel study May 2016-April 2017

Sarah Gerver1,2, Miroslava Mihalkova1,2, Julian Bion2,3, Peter Wilson2,4 and Russell Hope1,2

1Public Health England, London, UK

2ICCQIP, National, UK

3University of Birmingham, Birmingham, UK

4University College London Hospital NHS Foundation Trust, London, UK

Abstract

Introduction: Bloodstream infections (BSI) from central venous catheters (CVC-BSI) in critically ill patients in intensive care units (ICUs) increase morbidity and mortality, have high economic impact and are potentially preventable.

Substantial reductions in CVC-BSI rates have previously been reported in England in a two-year study (2009-11). A key outcome was the need for a professionally-owned, standardised, national infection surveillance programme in ICUs. In 2011, the Infection in Critical Care Quality Improvement Programme (ICCQIP) was developed, representing a national collaboration of all professional organisations involved in adult, paediatric and neonatal intensive care, microbiology and infection control.

Here we present the results from the first year of the ICCQIP CVC-BSI surveillance programme.

Methods: An online data capture system (DCS) was launched in May 2016 to collect patient-level data on all positive blood cultures (PBCs) in participating ICUs and unit-level data on bed-days and CVC-days. NHS Trusts (hospitals under the same management) in England who had pre-registered their interest (n = 43) were invited to participate in the voluntary sentinel phase of the CVC-BSI surveillance programme. In November 2016, the invitation was extended to all NHS Trusts in England.

Results: Between 01/05/2016 and 30/04/2017, 100 of 152 NHS Trusts (n = 147 ICUs) in England registered on the DCS, of which 57 (84 ICUs) entered data (72 adult, 7 paediatric, 5 neonatal ICUs). Over the first year of surveillance, a total of 1,292, 72 and 53 PBCs were reported by adult, paediatric and neonatal ICUs, respectively. Of these, approximately half were coagulase-negative staphylococci (adult: 45.3%, paediatric: 56.9%, neonatal: 52.8%). Among PBCs, between 20% and 33% were defined as ICU-associated BSI (occurring >2 days after ICU admission) (adult: 433/1,292, paediatric: 19/72, neonatal: 11/53). Among adult and paediatric ICUs, just over a quarter of ICU-associated BSIs were reported as CVC-BSI (124/433, 28.6% and 5/19, 26.3%, respectively); this proportion was higher among neonatal ICUs (5/11, 45.5%). Overall, these equate to rates of 2.3, 1.0 and 1.5 per 1,000 ICU-CVC-days, respectively. However, there was wide variation in CVC-BSI rates between ICU types, particularly among adult ICUs (0–18.3 ICU-associated CVC-BSI per 1,000 ICU-CVC-days).

Discussion: The overall rates of microbiologically confirmed ICU-associated CVC-BSI are moderate across all age ranges; however, the difference in rates between units highlights the importance of a national standardised surveillance system for benchmarking and for determining the causes. With the surveillance scheme now out of its sentinel phase, barriers and facilitators to participation will be assessed in order to increase the number of Trusts in England providing data.

EP.051

Audit on antibiotic prescribing in QEQM Intensive Care Unit

Shadi Pishbin1, Dhir Gurung1 and Ana Alegria1

1Queen Elizabeth the Queen Mother Hospital, Margate, UK

Abstract

Background: Antibiotic resistance is increasing rapidly worldwide. With no significant new classes of antibiotics on the market since the late 1980s, minimising overuse and improper use is vital in helping to prevent further resistance from developing. To aid with this, there are NICE guidelines for the prescribing of antimicrobial agents, and we have more specific local trust guidelines with regard to both general and antimicrobial prescribing.

Purpose: The aim of this audit was to assess our compliance on the ICU with both trust and NICE guidelines on antimicrobial prescribing.

Methods: Data was collected prospectively over a six-month period (October 2015 to March 2016) using a data collection form. The drug charts of all patients on the unit during the investigators’ shifts were reviewed; 85 patients were included in the study. Patients were excluded if they were not currently on any antibiotic agents, or were on long-term antibiotic prophylaxis, anti-tuberculosis treatment, anti-retrovirals or PCP prophylaxis.

Results: Patient identification was documented on 100% of drug charts. A named consultant was documented on 66% of charts. The indication for antibiotics was documented on 84% of drug charts. In only 29% of cases was a stop or review date for the antibiotic prescription documented. Thirty-eight percent of antibiotics were prescribed without microbiologist advice or adherence to Trust policy. No patient had both a normal WCC and a normal CRP, indicating that there was some evidence of infection in every patient when antibiotics were started.

Discussion: Disappointingly, this audit has shown that we are not 100% compliant with either trust or NICE guidelines on antibiotic prescribing, other than in labelling drug charts correctly. We were particularly poor at documenting the stop or review date, with only 29% compliance, despite this being a requirement of both trust and NICE policies. This is an essential part of antibiotic prescribing which can help reduce the days of unnecessary antibiotic use, and therefore, in the longer term, help against growing antibiotic resistance. Looking at the inflammatory markers at the start of the course was a crude way of auditing the evidence of bacterial infection; however, it showed that we are compliant with this standard.

EP.052

Procalcitonin testing in critically ill patients: what supports the implementation and what we are up against

Igor Otahal1 and Peter Havalda1

1Hywel Dda UHB, Carmarthen, UK

Abstract

Introduction: Antimicrobial treatment is crucial in the management of septic patients. The relatively short half-life of procalcitonin (PCT), coupled with its virtual absence in healthy individuals and its specificity for mainly bacterial infections, gives it an advantage over other markers of bacterial infection. PCT has great potential to improve antibiotic stewardship. We introduced PCT testing as part of our effort to diagnose sepsis better and to achieve better control of antibiotic use.

Aim and Methods: In September 2015 we became the first in Wales to introduce PCT testing, in two Intensive Care Units (ICUs) within our Health Board. The local approval process required an extensive literature search and a pilot feasibility study. To support our business case we compiled a list of crucial recent studies and recommendations, and assessed the statements of NICE and other UK institutions regarding the use of PCT. We reviewed our use of PCT retrospectively using both the ICU system and the Laboratory Information Management System (LIMS).

Results: The literature evidence was sorted into PRO and CON categories, and a list of documents in each category, with brief comments, is presented. Every PCT test required discussion with a consultant, to achieve better demand control. Over 13 months we requested 1,597 tests on 407 patients admitted to the ICUs of both hospitals. The average number of tests per patient was 3.92, and the additional cost per patient was £16.10.

Conclusion: The evidence from the literature, together with our own results, supported the introduction of routine PCT testing in our ICUs. PCT helps to diagnose bacterial infections and to guide antibiotic therapy in our District General Hospital ICU setting. It has also shown educational potential for ICU staff and has triggered considerable interest amongst our microbiology colleagues. The relatively high cost of PCT tests led us to use them judiciously. Our results and experience enable us to support wider use of PCT testing.

References

  • 1.Dellinger RP, Levy MM, Rhodes A, et al. Surviving Sepsis Campaign: International guidelines for management of severe sepsis and septic shock, 2012. Intensive Care Med 2013; 39: 165–227.
  • 2.de Jong E, van Oers JA, Beishuizen A, et al. Efficacy and safety of procalcitonin guidance in reducing the duration of antibiotic treatment in critically ill patients: a randomised, controlled, open-label trial. Lancet Infect Dis 2016; 16: 819–827.

EP.053

Improving Access to Meningitis Chemoprophylaxis for Staff Working in the Intensive Care Unit

Emily Reynolds1, Chris Gough2, Wendy Fletcher1 and Rowan Hardy1

1Royal United Hospital, Bath, UK

2Bristol Royal Infirmary, Bristol, UK

Abstract

Although absolute numbers are low, health care workers are at increased risk of secondary infection from meningococcal disease. Staff working in intensive care units are at particular risk due to contact with respiratory droplets during intubation or suctioning, often when the diagnosis remains unclear or before 24 hours of antibiotics have been given.

The Health Protection Agency recommends that chemoprophylaxis (a stat dose of ciprofloxacin or a short course of rifampicin) should be administered to staff at risk of developing meningococcal disease following occupational exposure. In our DGH, prophylaxis was previously only available from Occupational Health, Monday to Friday 9 am–5 pm, or via the staff member’s own GP. This led to delays in accessing medication and anxiety for affected staff. We wanted to introduce a more streamlined route to meningococcal chemoprophylaxis, in the form of readily available medication stored on the ICU.

We surveyed staff who had required post-exposure prophylaxis on their experience in accessing prophylaxis and the advice given to them. Following this we reviewed the current literature and wrote a guideline for prescribing and supplying prophylaxis. We produced prophylaxis packs containing guidelines, prescribing checklist, prescription pad, the drug plus information leaflet, a letter to be sent to occupational health, and a meningitis advice sheet. The pack was approved by the pharmacy and microbiology departments and was stored in a locked drug cupboard on our ICU which any doctor could access in the event of staff requiring prophylaxis.

In the 6 months before introducing the packs, five members of staff had required chemoprophylaxis. Three found the process of accessing prophylaxis “difficult” and two found it “neither difficult nor easy”. Three accessed the medication 24 hours after exposure, one 12 hours after and one within 5 hours. Three stated that the advice they were given was either “confusing” or “very confusing”. Three were exposed outside Occupational Health opening hours, with two having to come in on days off.

We have improved access to meningitis post-exposure prophylaxis. Members of staff can now access the medication immediately after it has been prescribed by a doctor on ICU. By having the medication immediately available we anticipate a reduction in staff attending on days off or taking time out of the working day to attend occupational health and a reduction in anxiety associated with delay.

EP.054

Short and long-term outcomes in a diverse group of patients requiring prolonged weaning from mechanical ventilation

Kerrie Aldridge1, David McWilliams1, Camilla Dawson1, Rachel Allen1, Jennifer Williams1, Claire Storrie1 and Catherine Snelson1

1Queen Elizabeth Hospital, Birmingham, UK

Abstract

Introduction: Approximately 11% of critical care beds are occupied by patients requiring prolonged mechanical ventilation (PMV; >21 days in an otherwise stable patient) due to difficulty weaning. Outcomes in these patients are poor: only 30–70% are liberated from mechanical ventilation at discharge. In response, some advocate transfer of stable patients to specialised weaning units, but long-distance transfers can be challenging for patients and families, and may be precluded if patients require other specialist input. We aimed to assess patient outcomes following prolonged weaning in a tertiary UK intensive care unit (ICU).

Methods: Patients with an ICU stay >21 days between 2014–2016 were identified. Patients whose long length of stay (LOS) was primarily due to prolonged weaning were selected. Demographics, comorbidities, LOS, organ support, discharge and mortality data were collected.

Weaning was led by the duty intensivist, supported by a weekly multidisciplinary ward round and a structured individualised rehabilitation programme.

Results: 55 patients were identified; 42% were female, and average age was 69. The most common reasons for admission were cardiac surgery (22/55, 40%) and pneumonia/COPD (15/55, 27%). 2/55 (4%) patients had a neuromuscular condition.

On average, patients spent 48 days in ICU, requiring mechanical ventilation for 35 days. 44/55 (80%) patients survived to hospital discharge; 42/44 (95%) were liberated from mechanical ventilation.

During weaning, 22/55 (40%) patients developed further periods of instability requiring <7 days of additional organ support.

32/44 (72%) patients were eventually discharged home. Cancer patients were less likely to return home (p = 0.02). Age, comorbidities and LOS had no association with discharge destination.

Those who died before discharge were more likely to have respiratory comorbidities (45% versus 14%, p = 0.03). There was no association between 1-year mortality and any measured variables.

Conclusion: Good weaning outcomes can be achieved in non-specialist units, although the patient population differs from that of specialised weaning units. Only two patients required ongoing mechanical ventilation following discharge, and almost three-quarters of patients were discharged to their own homes. One-year survival rates are similar to those reported by specialised units.

Almost half the patients identified needed additional organ support during weaning; recovery for these patients is not a linear process. This study supports an ongoing role for non-specialised units in weaning patients requiring PMV, and highlights the success which can be achieved using a structured, multidisciplinary approach. Further research is needed to identify factors which predict weaning success and long-term mortality.

EP.055

Feasibility of Using Indirect Calorimetry during Physical Rehabilitation in Critically Ill Patients

Alexandra Curtis1,2,3, Luigi Camporota4, Stephen Harridge2, Bronwen Connolly1,2,5,6

1Lane Fox Clinical Respiratory Physiology Research Centre, London, UK

2Centre for Human and Aerospace Physiological Sciences, King’s College, London, UK

3Physiotherapy Department, Guys and St Thomas NHS Foundation Trust, London, UK

4Department of Critical Care, Guys and St Thomas NHS Foundation Trust, London, UK

5NIHR Biomedical Research Centre at Guy’s and St. Thomas’ NHS Foundation Trust and King’s College, London, UK

6Department of Physiotherapy, The University of Melbourne, Melbourne, Australia

Abstract

Background: Physical rehabilitation of patients in the intensive care unit (ICU) is recommended to ameliorate ICU-acquired weakness and enhance functional recovery. However, the optimal dose and physiological effects of standard rehabilitation practices have not been widely investigated. Indirect calorimetry (IC) is a method of estimating energy metabolism which could aid the individualisation of exercise interventions. We conducted a prospective, observational study examining the feasibility of using IC in patients receiving physical rehabilitation in the ICU, and present the eligibility and enrolment data.

Method: All patients admitted to a 30-bedded mixed medical/surgical ICU during the study period were screened daily for eligibility. Eligible patients had commenced physical rehabilitation and had been invasively mechanically ventilated for at least 48 hours. Exclusion criteria included clinical factors causing potential inaccuracies in IC measurement (e.g. high FiO2) or precluding disconnection from the ventilator (e.g. cardiorespiratory instability), and specialist rehabilitation populations (e.g. neurological injury).

IC measurement involved a commercially available ventilator with an IC cart (Carescape R860, GE Healthcare, US), allowing estimation of energy consumption through expired gas analysis. Measurements were performed at three time points: 1) 30 minutes prior to rehabilitation; 2) during the physiotherapist-led rehabilitation session; 3) the post-rehabilitation recovery period.

Results: 104 patients were admitted during the 5-week study period (859 bed-day screening occasions), of whom 14 (13.5%) were eligible for enrolment. Ineligibility on the inclusion criteria (748 occasions, 87.1%) was due to not receiving physiotherapy rehabilitation (249/748, 33.3%), an insufficient duration of mechanical ventilation (230/748, 30.7%), or both (269/748, 36.0%).

Of the 111 occasions on which patients met the inclusion criteria, exclusion criteria applied on 53, maintaining ineligibility. The number of applicable exclusion criteria on any occasion ranged from 1 to 5; by far the most frequent was renal replacement therapy within the preceding 24 hours (62.5%), with all other exclusion criteria each occurring on <10% of occasions.

The remaining 58 screening occasions related to 14 eligible patients. Four patients were enrolled and ultimately measured on six occasions. Reasons for non-enrolment included inability to obtain consent before the scheduled rehabilitation session, a change in planned clinical management after screening, and lack of researcher availability at weekends.
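The screening denominators reported above are internally consistent, as a short tally shows (a sketch; counts taken directly from the abstract):

```python
# Screening occasions over the 5-week study period
total_occasions = 859
no_physio, short_ventilation, both = 249, 230, 269  # inclusion-criteria failures
excluded = 53  # occasions on which exclusion criteria applied

failed_inclusion = no_physio + short_ventilation + both   # 748
met_inclusion = total_occasions - failed_inclusion        # 111
eligible_occasions = met_inclusion - excluded             # 58

print(failed_inclusion, round(100 * failed_inclusion / total_occasions, 1))  # 748 87.1
print(met_inclusion, eligible_occasions)                                     # 111 58
```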

Conclusion: Eligibility and enrolment data from this feasibility study demonstrate that only a small percentage of critically ill patients met the criteria for IC measurement, suggesting IC may have limited clinical applicability for informing physical rehabilitation in a general ICU population.

EP.056

The Introduction of a Physiotherapy Associate Practitioner role on Critical Care: An innovative service reorganisation to enhance delivery of seven day services and improve efficiency and quality of care

Clare Wade1, Helen Sanger1 and Catherine Baker1

1The Newcastle Hospitals NHS Foundation Trust, Newcastle upon Tyne, UK

Abstract

Introduction: Rehabilitation after Critical Illness (RaCI) and Enhanced Recovery after Surgery (ERAS) have been areas of increasing focus over the last decade. Physiotherapy is integral to the optimal delivery of both pathways. Historically, physiotherapy staffing on our ICU limited the ability to best deliver these services.

To address this, we developed a non-qualified Physiotherapy Associate Practitioner (PAP) role within ICU to carry a caseload of elective surgical patients. This increased the mobilisation of ERAS patients, and also released a specialist critical care physiotherapist to coordinate a multi-disciplinary RaCI service.

Aims:

• Improve outcomes for surgical patients.

• Provide more therapeutic time to patients with complex needs.

• Provide a more consistent physiotherapy service on the ICU, seven days a week.

• Establish a multidisciplinary team (MDT) RaCI service, with weekly outreach rounds and a monthly outpatient clinic.

Method: 1.8 WTE Band 4 PAPs were recruited on a pilot basis; both completed a comprehensive training and competency framework. Functional mobility outcome scores at discharge from ICU and weekend productivity statistics were collected for six months and compared with data from the six months before the pilot commenced. The senior physiotherapist co-ordinated the RaCI pathway, as per NICE CG83; this included involvement in each RaCI patient’s rehabilitation on the ICU, ward follow-up and outpatient review. Data were also collected to evaluate RaCI pathway delivery.

Results:

 Before pilot / At 6 months / Difference
Surgical patient reviews per weekend (mean): 1.45 / 17 / +15.55
ICU LOS for surgical patients (mean, hours): 82.5 / 74.6 / −7.9
ICU admissions: 326 / 363 / +37
Functional score at discharge from ICU (mean): 19.6 / 26.4 / +6.8
NICE CG83 compliance: 0% / >90% / >90%
Long-stay ICU patients, ward follow-up: 0 / 64 / +64
Long-stay ICU patients, outpatient follow-up: 0 / 18 / +18

Conclusion: We have described an innovative service change within the ICU physiotherapy team. This has allowed implementation of an enhanced mobility service 7 days a week. It has also facilitated the development of a RaCI service, to address the complex needs of long-term ICU patients. The recruitment of PAPs represents a relatively small financial investment, but one that has facilitated a restructure of the physiotherapy team, allowing optimisation of resource allocation to different ICU patient groups. Given the current financial constraints on NHS services, this project represents a unique and practical approach to achieving NHS England’s recommendation for safe, sustainable staffing with “the right staff, with the right skills, in the right place, at the right time” (National Quality Board 2016).

EP.057

The evolving role of occupational therapists in adult critical care: A mixed methods analysis

Naomi Algeo1,2 and Leanne Aitken1

1City, University of London, London, UK

2University College London Hospitals NHS Foundation Trust, London, UK

Abstract

Background: Rehabilitation has not traditionally played a role in the critical care pathway. A recent paradigm shift in patient care now advocates early physical and cognitive rehabilitation. The limited evidence on the impact of occupational therapy in critical care suggests that input, in conjunction with other disciplines, can result in a shorter duration of delirium, more ventilator-free days, and greater functional status at hospital discharge. Despite these promising outcomes, poor understanding of the role by the multidisciplinary team appears to affect service delivery, and the occupational therapy role in critical care in the UK remains poorly defined. The aims of this study were to explore:

1. The role of occupational therapists in adult critical care.

2. Perceived facilitators and barriers impacting on service delivery.

3. The potential future of the role in critical care.

Methods: A mixed-methods design based on role theory was used. Participants were recruited via the Royal College of Occupational Therapists’ Specialist Section for Trauma and Orthopaedics between March and May 2017. The role, perceived facilitators and barriers, and the potential future of the role were explored through a locally developed online questionnaire. With consent, a subset of participants undertook an additional semi-structured interview to explore the issues in detail. Descriptive statistics (frequencies and percentages) were generated in SPSS. Qualitative data were analysed using the seven-stage framework approach.

Results: Twelve occupational therapists with clinical experience in critical care were recruited, five of whom continued to the additional interview. Preliminary results indicate that occupational therapists have a role in upper limb function, seating/positioning, cognition, psychosocial sequelae and discharge planning in critical care. Facilitators of service delivery included role overlap, role motivation, role capabilities and role signs; barriers included role stress, role overload, role ambiguity and role capabilities. Although no new therapies were identified for the future, earlier intervention for a greater proportion of critical care patients, a stronger evidence base, raised awareness across the multidisciplinary team and beyond, and adequate staffing were envisaged as features of future development.

Conclusion: Occupational therapy has an evolving and diverse role in critical care. For the role to develop, occupational therapists identified the need for earlier input, increased awareness of the role, funding for adequate staffing, and a stronger evidence base.

EP.058

Supervised exercise rehabilitation in survivors of critical illness: A randomised controlled trial

Ceri Battle1, Karen James1, Paul Temblett1 and Hayley Hutchings2

1Ed Major Critical Care Unit, Swansea, UK

2Swansea University Medical School, Swansea, UK

Abstract

Background: Current evidence cannot yet determine the overall effect on functional exercise capacity of exercise-based interventions initiated after ICU discharge in survivors of critical illness. The primary aim of this study was to investigate the impact of a six-week supervised exercise programme on physical and psychological outcomes in survivors of critical care.

Methods: A single-centre randomised controlled trial, set in an outpatient department of a large university teaching hospital. The patient population comprised survivors of critical illness, aged 18 years or over, at three months post-hospital discharge. Patients were randomised to either a control group (usual care) or a treatment group (six-week supervised exercise programme). Twice-weekly exercise sessions were individualised to each patient’s functional status, as assessed at baseline, and included cardiopulmonary, balance and strengthening exercises. Outcome measures included the Six-Minute Walk Test, Berg Balance Scale, Hospital Anxiety and Depression Scale and grip strength. Data were analysed using mixed models.

Results: A total of 62 patients were enrolled. The only difference between the two groups in demographic and hospital data was that the treatment group had been mechanically ventilated for significantly more days. No significant differences were found for the Six-Minute Walk Test. Anxiety levels were significantly lower in the treatment group than in the control group at one year.

Conclusions: The largely negative results reported here support the findings of previous similar exercise programmes, indicating that further research is needed into appropriate interventions and outcome measures, target patient populations and the timing of such interventions post-hospital discharge.

EP.059

Predictive factors of mortality for primary pontine haemorrhage in an Asian population

S Surentheran1, Thangaraj Munusamy2, Ramez Kirollos2, Eugene Yang1 and Pang Boon Chuan1

1Division of Neurosurgery, Department of Surgery, Khoo Teck Puat Hospital, Singapore, Singapore

2Department of Neurosurgery, Cambridge University Hospitals NHS Foundation Trust, Cambridge, UK

Abstract

Objective: Primary pontine haemorrhage is the most devastating form of haemorrhagic stroke, accounting for about 10% of intracerebral haemorrhages, with an overall mortality rate of 40–50% reported in the literature. Various factors are reported to be associated with outcome, such as Glasgow Coma Scale score, clot location, clot volume, age and history of hypertension. In our study, we analysed the correlation between outcome and clinical and radiological parameters to determine the predictive factors and prognosis in primary pontine haemorrhage.

Methods: We retrospectively reviewed the clinical data of 47 patients admitted to Khoo Teck Puat Hospital, Singapore with a confirmed radiological and clinical diagnosis of primary pontine haemorrhage from 2009 to 2015. Patient demographics, Glasgow Coma Scale scores, clinical and radiological parameters and outcomes were recorded. Subsequently, predictive factors of mortality were identified by statistical analyses. We also analysed the correlation between acute blood pressure lowering and mortality.

Results: Of the 47 patients, 31 were men. The overall 30-day mortality rate was 25.5%. A positive predictive factor for 48-hour mortality was a mean systolic blood pressure of 160 mmHg or above in the first 48 hours of admission (Grade 2 and 3 hypertension). A positive predictive factor for 30-day mortality was a Glasgow Coma Scale score of 8 or less on arrival. Lowering of mean systolic blood pressure by 20% or more in the first 48 hours correlated with reductions in 48-hour and 30-day mortality.

Conclusion: The overall 30-day mortality rate of 25.5% for patients with primary pontine haemorrhage in our study population is better than that reported in the literature. We attribute this to acute reduction of mean systolic blood pressure by 20% or more in the first 48 hours of admission. Persistently raised mean systolic blood pressure in the first 48 hours and Glasgow Coma Scale score of 8 or less on arrival are positive predictors of mortality in primary pontine haemorrhage.


EP.060

Can Charlson Co-Morbidity Index Predict ICU Survival?

Laura Langton1, Jeff Little1 and Adam Old1

1Warrington and Halton Foundation Trust, Warrington, UK

Abstract

As populations age and emerging treatments become more effective, decisions on the admission of patients who would previously have been unlikely candidates for ICU are becoming increasingly open to scrutiny.

The Charlson Co-Morbidity Index (CCI) was developed to predict 1-year mortality among medical patients1. The scale assigns weighted scores to 17 co-morbidities ranging from diabetes to metastases.

Previous studies have compared the CCI with other scoring systems for predicting short- and long-term mortality following ICU admission. Quach et al2 in 2009 showed that the CCI does not perform as well as APACHE II in predicting hospital mortality. However, Christensen et al3 in 2011 showed that the CCI performed as well as the Simplified Acute Physiology Score (SAPS) and APACHE in predicting in-hospital, 30-day and 1-year mortality.

We aimed to determine whether the CCI was predictive of hospital mortality, and therefore whether this tool is useful when making admission decisions.

Method: We collected data on consecutive patients over 50 years old admitted for more than 24 hours to the ICU of a District General Hospital between November 2015 and February 2016. ICNARC score, APACHE II and CCI were calculated on admission, along with other baseline characteristics. The primary outcome was in-hospital mortality.

Results: Data were collected for 64 patients (37 male, 27 female), with a mean age of 70.0 years. For each scoring system, mean values were calculated for survivors and non-survivors and compared with a t-test. Neither ICNARC score, the number of regular medications on admission, nor age predicted in-hospital survival in this group. However, the APACHE II score and CCI appeared predictive of this outcome.

Conclusion: This is a small study of patients admitted to a DGH ICU. APACHE II score and CCI appear to be predictive of survival among patients aged 50 and over. Further work with a larger sample is necessary to confirm these findings, but the results suggest that the CCI may be a useful prognostic indicator.

Mortality on ICU / APACHE II score / ICNARC score / Charlson Co-morbidity Index / No. of regular medications on admission
Survived, mean (SD): 16.0 (5.7) / 18.8 (9.3) / 1.57 (1.49) / 5.7 (4.4)
Died, mean (SD): 5.7 (5.6) / 22.8 (9.1) / 2.8 (1.9) / 6.7 (2.7)
P value: 0.023 / 0.13 / 0.0094 / 0.40

EP.061

High News Score Is Associated With Very High Mortality and Is an Appropriate Trigger for Medical Emergency Team Activation

Meenakshi Agarwal1

1Southend University Hospital, Southend on Sea, UK

Abstract

Introduction: The National Early Warning Score (NEWS) was introduced in 2012 to help standardise recognition of, and response to, the deteriorating patient. Prior to the introduction of a proposed “deteriorating patient pathway”, we audited the mortality associated with a high NEWS in our hospital.

Method: All patients admitted to Southend University Hospital in May 2016 were reviewed. The 30-day mortality associated with each NEWS value above 4 was calculated.

Results: A total of 1543 patients with NEWS score more than 4 were identified.

The mortality associated with each NEWS Value was calculated as shown in Table 1.

NEWS Mortality (%)
5 32
6 41
7 47.5
8 55
9 56.5
10 60.2
11 75
12 70.9
13 76.9
14 88.9
15 66.7
16 100

A NEWS Score above 4 was associated with an overall mortality of 44.32%. A NEWS Score of 10 or above was associated with an average overall mortality of 76.7%.

Discussion: We have shown that, in our hospital, a high NEWS is associated with extremely high mortality, as has been previously described.2

From case reviews we have seen that many patients with a NEWS of 10 or above are not referred to intensive care until a cardiac arrest or peri-arrest call is made.

We feel that a NEWS of 10 or above is an appropriate trigger for activation of a Medical Emergency Team in patients who do not have existing treatment limitation decisions in place, and we plan to introduce this as part of a deteriorating patient pathway.

References

  • 1.Royal College of Physicians. National Early Warning Score (NEWS): Standardising the assessment of acute-illness severity in the NHS. London: RCP, 2012.
  • 2.Smith GB, Prytherch DR, Meredith P, et al. The ability of the National Early Warning Score (NEWS) to discriminate patients at risk of early cardiac arrest, unanticipated intensive care unit admission, and death. Resuscitation 2013; 84: 465–470.

EP.062

Post-operative nausea and vomiting after enhanced recovery cardiac surgery in the Cardiac Intensive Care Unit – Is it still a problem?

Timothy Snow1, Anne Campbell1 and Sibtain Anwar1

1Bart's Cardiac Centre, London, UK

Abstract

Despite our understanding of post-operative nausea and vomiting (PONV) risk factors suggesting that cardiac surgery should be low risk, previous studies report a surprisingly high, albeit wide-ranging, incidence. This may be partly explained by inter-study variations in anaesthetic practice, e.g. the use of propofol infusions. Enhanced recovery pathways aim to improve patient experience while enabling efficient use of healthcare resources, such as reducing Cardiac Intensive Care Unit (CICU) length of stay; minimising PONV is a key component of these pathways. Anecdotal evidence suggested the incidence of PONV in our CICUs remained high, so we wished to understand the current prevention strategies used and the incidence of PONV in our enhanced recovery cardiac surgery patients.

We undertook a week-long prospective audit on our single-centre tertiary referral CICUs. Enhanced recovery patients underwent anaesthesia, surgery and cardiac bypass as per surgical and anaesthetic preference. Following chest closure, patients were transferred sedated to the CICU on a propofol infusion. On arrival, demographic and intra-operative PONV risk factor data were collected. Once haemodynamics, clotting and temperature were optimised, sedation was stopped, the patient woken and extubated once following commands. For the first 24 hours of admission, data were collected on time to extubation, the first episode of nausea or vomiting, time from extubation to symptoms, and any rescue anti-emetic given.

19 patients were identified; patient characteristics are shown in Table 1. Only 1 (5%) patient received a prophylactic anti-emetic, while the incidence of nausea and vomiting was 13 (68%) and 8 (42%) respectively. All patients with PONV received ondansetron 4 mg as treatment.

On the basis of this audit, the incidence of PONV in the CICU following enhanced recovery cardiac surgery remains high, and few patients receive routine prophylaxis. We aim to investigate the benefit of routine prophylaxis in a further study.

Table 1.

Patient characteristics.

Variable Mean ± Standard Deviation or Number (%)
Age (years) 60 ± 11
Male 16 (84%)
Smoker
 No 10 (53%)
 Ex 6 (32%)
 Yes 3 (16%)
Previous PONV 0
Operation
 CABG 11 (58%)
 Valve repair/replacement 3 (16%)
 CABG and valve repair/replacement 2 (11%)
 Aortic root surgery 3 (16%)
Anaesthetic time (mins) 296 ± 74
Bypass time (mins) 96 ± 41
Intra-operative opioid used/amount
 Fentanyl (mcg) 13 (68%)/893 ± 223
 Morphine (mg) 2 (11%)/8.5 ± 0.7
 Both 2 (11%)/650 ± 212 & 10 ± 0
Intra-operative propofol infusion 18 (95%)
PCA
 Fentanyl 2 (11%)
 Morphine 17 (89%)
ITU intubation time (mins) 332 ± 138
Time from extubation to symptom onset (mins) 325 ± 261

EP.063

Using routine prophylactic ondansetron on the Cardiac Intensive Care Unit to prevent post-operative nausea and vomiting following enhanced recovery cardiac surgery

Timothy Snow1, Anne Campbell1 and Sibtain Anwar1

1Bart's Cardiac Centre, London, UK

Abstract

Minimising post-operative nausea and vomiting (PONV) is a key component of enhanced recovery. These pathways aim to improve patient experience while enabling efficient use of healthcare resources, including reducing Cardiac Intensive Care Unit (CICU) length of stay. Traditional scoring systems suggest cardiac surgery is low risk; however, previous studies and our own CICU audit indicate that the incidence of PONV is high. This study assessed whether routine administration of ondansetron would reduce the incidence of PONV in enhanced recovery patients.

We performed a sequential two-week prospective cohort study at our single-centre tertiary referral CICU. During the first week, enhanced recovery patients underwent surgery as standard, per the surgeon’s and anaesthetist’s preference, which excludes routine anti-emetic prophylaxis. Following chest closure, patients were transferred to intensive care on a propofol infusion. Baseline data collected on arrival included age, PONV risk factors (sex, previous PONV, smoking status and intra-operative opiates), operation type, duration of anaesthesia and bypass, PCA type and use of an intra-operative propofol infusion. Once haemodynamics, clotting and temperature were optimised, sedation was stopped, the patient allowed to wake and extubated once following commands. For the first 24 hours of admission, data were collected on time to extubation, the first episode of nausea or vomiting, time from extubation to symptoms, and any rescue anti-emetic given.

During the following week, patients underwent the same standard of intra-operative management; however, following admission to the ICU they were given 4 mg IV ondansetron once their temperature was above 36°C and they were ready for sedation to be weaned. They were then followed for 24 hours as above.

In the control group, 12/18 (67%) patients suffered nausea, of whom 8 (44% of the total, 67% of those with nausea) also suffered vomiting. In the ondansetron group, 6/19 (32%) suffered nausea and 4 (21%/66%) vomiting. The odds ratio (control versus ondansetron) was 4.33 for nausea (CI 1.09–17.17, p = 0.04) and 2.39 for vomiting (CI 0.56–10.22, p = 0.24), a relative risk reduction for nausea of 53%. Time between extubation and symptom onset increased from 305 ± 261 to 585 ± 115 minutes (p = 0.02). There were no significant inter-group differences in patient demographics or intra-operative variables. Ondansetron 4 mg IV was used as the rescue anti-emetic.
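The odds ratio and relative risk reduction for nausea quoted above can be reproduced from the raw 2×2 counts (a sketch; the confidence intervals and p-values are not recomputed here):

```python
# Nausea counts: control 12/18 affected, ondansetron 6/19 affected
a, b = 12, 18 - 12   # control: nausea, no nausea
c, d = 6, 19 - 6     # ondansetron: nausea, no nausea

odds_ratio = (a * d) / (b * c)           # cross-product ratio: (12*13)/(6*6)
risk_control = a / (a + b)               # 12/18
risk_treated = c / (c + d)               # 6/19
rrr = 1 - risk_treated / risk_control    # relative risk reduction

print(round(odds_ratio, 2))  # 4.33, as quoted
print(round(100 * rrr))      # 53 (%), as quoted
```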

This pilot study suggests that the simple addition of pre-extubation ondansetron is effective in reducing PONV following enhanced recovery cardiac surgery. Limitations include non-randomisation and the small sample size. Further study to confirm the role of prophylactic ondansetron in preventing PONV in this setting is warranted.

EP.064

Improving the care of patients with traumatic head injury

James Hayward1, Katherine Seal2 and Ian Littlejohn2

1Intensive Care Unit, Royal Surrey County Hospital, Guildford, UK

2Royal Sussex County Hospital, Brighton, UK

Abstract

The Royal Sussex County Hospital is the major trauma centre for the South East of England, and each month we care for numerous patients with traumatic head injuries. We have well-established, evidence-based guidelines for the management of these patients. We therefore performed a retrospective audit of 30 patients admitted to the ITU with head injury between 26/9/16 and 20/2/17, identified by interrogating our electronic records. The following data were collected from our ICU system (brackets indicate the audit target):

• PaO2 (>13 kPa)

• PaCO2 (4.5 – 5.0 kPa)

• ICP (<20 mmHg) and CPP (>60 mmHg)

• Ventilation mode (SIMV)

• Blood sugar (4.5 – 8.3 mmol/L)

• Core temperature (35 – 36°C)

• Use of anti-epileptic medications (tight seizure control)

• Propofol rate (<4 mg/kg/hr)

The results were as follows:

Only 16 patients were ventilated, of whom 63% were commenced on SIMV; on average, 70% of the first 72 hours was spent on SIMV. Whilst PaO2 control was good (77% of the first 72 hours had PaO2 > 13 kPa; 91% had PaO2 > 11 kPa), PaCO2 control was poor (47% of the first 72 hours had PaCO2 within range).

Only 14% of patients had temperatures controlled within the range 35–36°C in the first 72 hours; patients spent 47% of the first 72 hours with temperatures >37°C, and 50% of that time (25% overall) was spent >37.5°C.

Control of CPP > 60 mmHg and ICP < 20 mmHg was achieved in 90% of patients with ICP monitoring. Six patients received propofol at a rate of greater than 4 mg/kg/hr; 3 of those were also receiving concurrent midazolam, 2 had their propofol reduced promptly and 1 had no change recorded.

Blood glucose was controlled between 4.5 and 8.3 mmol/L for 81% of the first 72 hours, and in the remaining 19% glucose control was still relatively “tight”, rarely exceeding 12.0 mmol/L.

80% of patients on anti-epileptic medications had no documented seizures, but there was nothing documented to indicate whether the medications were prophylactic.

100% of these patients received levetiracetam and 4% also received phenytoin. In these patients we performed 90 CT head scans, 7 tracheostomies and 2 decompressive craniectomies.

Our review of this data suggests that we are not using the recommended ventilator settings and as a possible result of this our PaCO2 control is also poor.

Our temperature control was outside of our guideline targets, and upon review we have altered our target range to 36–37°C.

EP.065

Cervical spine immobilisation – standardised documentation

Fiona Christie1, Finbar O'Sullivan1 and Phil Munro1

1Queen Elizabeth University Hospital, Glasgow, UK

Abstract

Introduction: The Eastern Association for the Surgery of Trauma (EAST) guideline conditionally recommends cervical collar removal after a negative high-quality C-spine CT scan in an obtunded adult, blunt trauma patient. The NICE guideline on spinal injury assessment and management advises protecting the person’s cervical spine with manual in-line cervical immobilisation for any airway intervention and recording it accordingly.

Following the amalgamation of three hospitals there was wide variation in the clinical practice of cervical spine clearance in the obtunded trauma patient. The aim of our audit was to review documentation of cervical immobilisation for airway intervention and cervical collar procedure in the unconscious trauma patient. We compared documentation before and after the implementation of a hospital cervical spine collar clearance guideline.

Method: Retrospective review of all adult, major, blunt trauma patients with a decreased conscious level, admitted to the intensive care unit of a large city centre teaching hospital from May 2015 to January 2017. Major trauma was defined as an Injury Severity Score greater than 15.

We reviewed the medical admission notes from the emergency department and intensive care unit. We reviewed documentation of manual in-line cervical immobilisation during airway intervention and a documented plan for cervical spine collar clearance. We compared the practice prior to the cervical spine clearance guideline in December 2016, with the practice thereafter.

Results: Sixty-five patients were included, of whom 52% had manual in-line cervical immobilisation documented for airway intervention.

Before the implementation of the guidance in December 2016, only 21% of patients had a documented cervical spine clearance plan in the emergency department notes, with a further 11% having a plan documented on intensive care admission. Sixty-eight percent of patients had no documented cervical spine collar procedure. Following the guideline implementation, 37% of patients had a plan documented in the emergency department, with a further 26% documented in intensive care. Thirty-seven percent of patients had no documented cervical spine collar procedure.

Discussion: Accurate documentation is integral to best clinical practice and to safe patient care, particularly in transfer between departments. Our guideline enables the standardisation of clinical practice, reflecting the recommendations of the EAST guideline. Although our clinical practice and documentation appear to have improved following the guideline implementation, further improvement is still required.

EP.066

Traumatic brain injury and the level of arterial line transducer

Ken McGrattan1 and Neha Singal1

1Royal Preston Hospital, Preston, Lancs, UK

Abstract

Audit rationale: To reinforce recent guidelines for CPP monitoring, which recommend that the arterial line transducer be positioned at the level of the tragus. We reviewed our current practice of CPP monitoring in TBI patients, delivered a teaching session, and then re-audited.

Background: A joint position statement by NACCS and SBNS recommends that, in the management of traumatic brain injury, the arterial transducer used to estimate mean arterial pressure (MAP) for the calculation CPP = MAP – ICP should be positioned at the level of the tragus.
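The practical importance of transducer position can be illustrated numerically. A hedged sketch with illustrative values (not patient data), assuming a hydrostatic offset of roughly 0.74 mmHg per centimetre of fluid column (approximating blood as water):

```python
# Hydrostatic error when the arterial transducer is levelled at the heart
# rather than the tragus (illustrative values only).
MMHG_PER_CM = 0.74  # approx. pressure of a 1 cm fluid column, in mmHg

def cpp(map_at_transducer, icp, transducer_below_tragus_cm=0.0):
    """CPP = MAP - ICP, with MAP corrected to tragus level."""
    map_at_tragus = map_at_transducer - transducer_below_tragus_cm * MMHG_PER_CM
    return map_at_tragus - icp

# Transducer already at the tragus: no correction needed
print(cpp(90, 15))  # 75.0
# Transducer levelled at the heart, ~20 cm below the tragus in a head-up
# patient: the uncorrected calculation overestimates CPP by ~15 mmHg
print(round(cpp(90, 15, transducer_below_tragus_cm=20), 1))  # 60.2
```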

Method: We recorded the arterial line transducer level for all traumatic brain injury patients admitted to the critical care unit, Royal Preston Hospital, from 23/02/2017 to 22/03/2017, and re-audited from 26/07/2017 to 25/08/2017 after a teaching session.

Result: There were 26 observations in the first set and 70 in the second. In the first set, only one patient had the transducer levelled at the tragus and 24 at heart level. Re-audit showed 54 observations levelled at the tragus and 16 at heart level. On both occasions participants were confident that the level at which MAP was measured was correct. Initially, only 2 of the 25 staff looking after these patients were aware of the current guidelines; this improved to 66 of 70 participants.

Conclusion: In the second audit, 77.4% conformed with monitoring standards, in contrast to 4% prior to the teaching. Most staff were confident of their measurement on both occasions. There was a marked rise in awareness of the recent guidelines, from 8% to 94.3%.

Future Actions: Although more staff are now aware of the current guidelines, implementation remains incomplete. This can be strengthened by further teaching and educational activities, reinforced by updating the existing online resources in the hospital and putting up posters.

EP.067

Rib fractures: The challenge of creating a pathway of care at a district general hospital with no thoracic surgeon

Neil Roberts1, Emma Harrison1, James Butler1, Julia Gibb2, Rebecca Norman2, Jonathan Outlaw2, Jonathan Abeles2, Laura Shepherd1, Ruth Creamer1 and Ben Warrick1

1Royal Cornwall Hospitals Trust, Truro, UK

2University of Exeter Medical School, Truro, UK

Abstract

Background: Rib fractures represent a significant proportion of trauma seen in Emergency Departments. Analgesia and respiratory support represent the cornerstones of management. With ongoing centralisation of specialties such as cardiothoracics, specialisation of ‘general surgery’ into ‘gastrointestinal surgery’, limitations on critical care capacity, and deskilling of non-critical care nurses at management of epidurals and chest drains, identifying an appropriate route through the hospital for these patients is an ongoing challenge. This audit examined current practice with the aim of creating a unifying pathway.

Methods: Retrospective audit of all adult patients with rib fractures from primary traumatic events, admitted for active treatment to a district general hospital over a 6-month period (July-Dec 2015). Patients were identified through TARN, WebPACS imaging system and emergency department software database, cross-referenced then imaging and notes reviewed. Demographics, severity and characteristics of injury were recorded, along with pathway through the hospital, respiratory support and analgesic requirements, and outcomes including length of stay (LOS) and 30-day mortality.

Results: 43 patients included after review of 2461 imaging reports and 58 sets of notes. Median age 67 (range 32–96). Median of 5 fractures (range 1–22). 8 had flail chest (19%). Median hospital LOS 6 days (range 1–25). Median ICU LOS 3 days (range 1–7). 19 (56%) patients did not receive tertiary survey. 12 patients to Clinical Decisions Unit (28%), 11 to Critical Care (25%), 11 to Medicine (25%), 4 to general surgery (9%) and 5 to orthopaedics (12%). 3 patients deteriorated with pain and respiratory failure and required Critical Care admission. 34 patients met cardiothoracic referral criteria. 10 were discussed, with 3 undergoing transfer for rib fixation. No patients required invasive ventilation. 1 patient received non-invasive ventilation, with 3 patients receiving high-flow humidified nasal oxygen. 7 patients (16%) were treated for pneumonia, with a 29% 30-day mortality. Overall 30-day mortality was 11.6%. Epidural and PCA rates were much lower than indicated.

Conclusions: Patients with rib fractures follow varied pathways through the hospital. Poor tertiary survey rates increase likelihood of missed injury. There is significant under-referral to cardiothoracics, and significant under-analgesia. Implementing a pathway including early stratification to Critical Care according to a validated risk score, planned initial care on CDU by ED including performance of tertiary survey, patients not requiring surgical or Critical Care to be admitted under respiratory physicians with multidisciplinary trauma service support, and PCA training for CDU and respiratory ward, should address the problems identified in this audit.

EP.068

Additional computed tomography in major trauma patients

Fiona Christie1 and Finbar O'Sullivan1

1Queen Elizabeth University Hospital, Glasgow, UK

Abstract

Introduction: Standards of practice and guidance for trauma radiology in severely injured patients recommend head to thigh contrast-enhanced multi-detector computed tomography (MDCT).

Following the amalgamation of three hospitals and their variations in clinical practice, it was felt that major trauma patients were frequently subjected to additional imaging. The aim of our audit was to review the imaging these patients received both on admission and over the subsequent 72 hour period.

Methods: Retrospective review of all adult, major, blunt trauma patients admitted to the intensive care unit of a large city centre teaching hospital from May 2015 to January 2017. Major trauma was defined as an Injury Severity Score greater than 15.

We reviewed the medical notes and radiology reports of 81 patients. We compared initial imaging in the emergency department with the major trauma protocol, how frequently CT scans were repeated over the subsequent 72 hours, and the reasons for repeating these studies.

Results: Seventy-nine percent of patients had imaging in accordance with the trauma radiology protocol. Seven percent of patients had additional imaging on their admission sequence, including lower limbs, foot and angiography.

Forty-one percent of patients required additional imaging, with 44 subsequent CT scans completed within 72 hours of admission. Ten percent of patients attended for CT scanning three times within 48 hours.

The reasons documented for repeat imaging were: clinical deterioration in 71% of patients; image failure in 2%; and speciality request in 27%. Within the speciality request group, five CT scans were requested within 24 hours, four scans within 48 hours and a further three scans within 72 hours. The speciality request scans included facial bones, upper and lower limb imaging, of which 75% could potentially have been identified for targeted inclusion on their admission trauma scan.

Discussion: Admission imaging did not adhere to major trauma protocol in 21% of patients. Although the requirement for subsequent CT imaging was largely due to clinical deterioration, there was a significant proportion for speciality request. Repeat CT scans inevitably result in increased radiation exposure to patients as well as the important patient safety issue regarding multiple transfers in this potentially unstable patient group. In addition to the trauma CT on admission, we propose targeted scanning to incorporate some of the speciality requests and therefore reduce overall CT time and associated transfers.

EP.069

Optimal Estimation of Oxygenation Defect in Critically Ill Patients

Emma Chang1, Kenneth Baillie1,2, Gordon Drummond1 and Andrew Bretherick3

1Royal Infirmary of Edinburgh, Edinburgh, UK

2The Roslin Institute and Royal (Dick) School of Veterinary Studies, Edinburgh, UK

3Institute of Genetics and Molecular Medicine, University of Edinburgh, Edinburgh, UK

Abstract

Rationale: Existing measures of arterial oxygenation impairment by the lung vary misleadingly with extrinsic factors such as FIO2. This can lead to an underappreciation of disease severity in patients in clinical practice and misclassification in clinical studies. We proposed that an index based on oxygen content could provide a more reliable measure of pulmonary oxygenation.

Objectives: To assess whether a content-based index of oxygenation (estimated shunt fraction) can more accurately predict PaO2 following changes in FIO2 than current measures of oxygenation in critically ill patients.

Methods: We used a retrospective database of 100,219 arterial blood gas (ABG) analyses to calculate estimated shunt, PaO2/FIO2 (P/F) ratio and alveolar-arterial (A-a) difference in order to predict PaO2 following a change in FIO2. The median absolute difference for each index was used to represent the stability of each measure when inspired oxygen fraction was altered. In a prospective clinical study carried out between January and April 2017, we obtained 25 paired ABG samples before and after a research intervention to increase FIO2 by 0.3.
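The conventional indices compared above can be sketched as follows. Estimated shunt is omitted here because it requires the full oxygen-content calculation; the constants are the usual kPa-form alveolar gas equation assumptions (barometric pressure 101.3 kPa, water vapour 6.3 kPa, respiratory quotient 0.8), and the ABG values are illustrative:

```python
def pf_ratio(pao2_kpa, fio2):
    """PaO2/FiO2 (P/F) ratio in kPa."""
    return pao2_kpa / fio2

def aa_difference(pao2_kpa, paco2_kpa, fio2, pb=101.3, ph2o=6.3, r=0.8):
    """Alveolar-arterial O2 difference (kPa) via the alveolar gas equation."""
    alveolar_po2 = fio2 * (pb - ph2o) - paco2_kpa / r
    return alveolar_po2 - pao2_kpa

# Illustrative ABG: PaO2 10 kPa, PaCO2 4.8 kPa on FiO2 0.6
print(round(pf_ratio(10, 0.6), 1))          # 16.7
print(round(aa_difference(10, 4.8, 0.6), 1))  # 41.0
```

Both indices shift when FiO2 alone is changed, which is the instability the study quantifies.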

Measurements and Main Results: 4,940 retrospective ABG pairs were used. The median absolute differences (kPa) for P/F, estimated shunt and A-a gradient were 1.72, 1.54 and 5.65 respectively for the retrospective analysis, and 3.10, 4.26 and 13.19 in the intervention cohort. Estimated shunt provided significantly improved predictions in retrospective analysis (p = 1.01 × 10^-53) but no significant difference in the prospective data set (p = 0.55).

Conclusions: The estimated shunt equation is marginally more stable than the current reference standard, P/F ratio. The simplicity and stability of the P/F ratio make it the best choice for most clinical and research applications. In some research studies where classification errors are likely to affect study results, the estimated shunt fraction may be preferable.

EP.070

High Flow Nasal Oxygen (HFNO2) in Cardiothoracic Intensive Care Patients

Kate Jones 1,2,3

1St Georges University Hospitals NHS Foundation Trust, London, UK

2Papworth Hospital, Papworth, UK

3St George's University of London Medical School, London, UK

Abstract

Introduction: High flow nasal oxygen is a new form of non-invasive positive-pressure ventilation delivered via nasal cannulae. A humidifier, heated circuit and air/oxygen blender are combined with these cannulae to deliver gas via the nares. The aim of this cohort study was to determine the effectiveness of HFNO2 in practice in a Cardiothoracic Intensive Care Unit (CTICU).

Methods: This is a retrospective cohort study of patients managed in the CTICU who were commenced on HFNO2. Eighty-six patients were given HFNO2, 2 of whom were excluded from analysis due to unavailability of case records. Patients were analysed in four groups: those escalated to HFNO2 and those de-escalated to HFNO2, with each of these groups divided into treatment deemed successful (maintained on HFNO2) and treatment failure (requiring further respiratory support).

Results: Fifty-nine patients (70%) were stepped up to HFNO2 and 25 (30%) were stepped down. In the stepped-up group, 41 patients (69%) were managed successfully and 18 (31%) required further support. In the stepped-down group, 19 patients (76%) were managed successfully and 6 (24%) required further support. Safe maintenance of ventilatory support with HFNO2 was therefore observed for 71% of the patients included in this study. Of the 24 (29%) in whom HFNO2 was not adequate, 18 (75%) had been stepped up to HFNO2. There was a significant difference between patients with and without a history of pre-existing respiratory disease with regard to recorded success of HFNO2. In the stepped-up group, the PaO2:FiO2 ratio appeared predictive of outcome: 62% of the stepped-up failures had PaO2:FiO2 ratios below 100, compared to 38% of those who were successfully stepped up.

Conclusion: This study demonstrates that HFNO2 appears effective for some patients in the CTICU setting. Our data suggest that pre-existing respiratory disease and severe respiratory failure is associated with failure of this form of ventilatory support. Clinicians should consider these conditions when monitoring HFNO2 therapy and perhaps have a lower threshold for escalating patients with pre-existing respiratory disease or severe respiratory failure following commencement of this form of oxygen therapy.

EP.071

Lung protective mechanical ventilation for acute respiratory failure is not being implemented in UK clinical practice

Romit Samanta1, Abishek Dixit1, Steve Harris2, N MacCallum2, David Brealey2, Peter Watkinson3, Andrew Jones4, S Ashworth5, Richard Beale4, Stephen Brett5, Duncan Young3, Mervyn Singer6, Charlotte Summers1 and Ari Ercole1

1Department of Medicine, University of Cambridge School of Clinical Medicine, Cambridge, UK

2Bloomsbury Institute of Intensive Care Medicine, University College London, London, UK

3Nuffield Department of Clinical Neurosciences, University of Oxford, John Radcliffe Hospital, Oxford, UK

4Department of Intensive Care, Guy's and St Thomas' NHS Foundation Trust, St Thomas' Hospital, London, UK

5Centre for Perioperative Medicine and Critical Care Research, Imperial College Healthcare NHS Trust, London, UK

6Bloomsbury Institute for Intensive Care Medicine, University College London, London, UK

Abstract

Introduction: The benefits of lung protective ventilation have been replicated in multiple trials1,2. However, we suspected that adherence to this standard of care remained poor. Using the NIHR critical care Health Informatics Collaborative (ccHIC) database, we analysed data from 11 teaching hospital intensive care units (22,524 patient episodes) to investigate real-world clinical practice.

Methods: 1,248 patient episodes, where ventilation was continued for >48 hours, with 232,600 hours of concurrent mechanical ventilation and blood gas data were identified as suitable for analysis. Short gaps in ventilation (<6 hours) were imputed based on the median of nearest known values, and only the single longest period of ventilation from each patient episode was analysed.

Results: The median tidal volume received by patients was 7.3 mLkg-1PBW (IQR:5.7–9.3). Female patients, especially those with higher BMI (>30 kgm-2) consistently received higher tidal volumes than males. Patients with severe respiratory failure (PaO2:FiO2 <13 kPa) received a median tidal volume of 7.1 mLkg-1PBW, and had 71% ICU mortality (Table 1).
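The mLkg-1PBW figures rest on predicted body weight. A minimal sketch using the standard ARDSnet PBW formulas with illustrative heights (not study data), showing why the same absolute tidal volume yields a higher mLkg-1PBW in a shorter — and, on average, female — patient:

```python
def predicted_body_weight(height_cm, female):
    """ARDSnet predicted body weight (kg):
    male 50 + 0.91*(height - 152.4); female 45.5 + 0.91*(height - 152.4)."""
    base = 45.5 if female else 50.0
    return base + 0.91 * (height_cm - 152.4)

def tidal_volume_per_kg_pbw(vt_ml, height_cm, female):
    """Tidal volume normalised to predicted body weight (mL/kg PBW)."""
    return vt_ml / predicted_body_weight(height_cm, female)

# A fixed 500 mL tidal volume:
print(round(predicted_body_weight(175, female=False), 1))       # 70.6
print(round(tidal_volume_per_kg_pbw(500, 175, False), 1))       # 7.1 (175 cm male)
print(round(tidal_volume_per_kg_pbw(500, 160, True), 1))        # 9.5 (160 cm female)
```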

Patients with respiratory failure sufficient to qualify for recruitment into recent clinical trials (PaO2:FiO2 <20 kPa with PEEP >5 cmH2O), who were exposed to tidal volumes ≥12 mLkg-1PBW for longer than two hours, had significantly increased risk of ICU mortality (odds ratio = 2.89 [1.25–7.2]; p = 0.007); this was not observed for patients with PaO2:FiO2 >40 kPa (odds ratio = 0.91 [0.58–1.56], p = 0.91).

Conclusions: More than 15 years after the ARMA study1 demonstrated a mortality benefit from lung protective ventilation, we are still not implementing 6 mLkg-1PBW ventilation in clinical practice, and are exposing even patients with severe respiratory failure (PaO2:FiO2 <13 kPa) to higher than recommended tidal volumes, with females and higher-BMI patients at particular risk. We have also demonstrated that failure to institute lung protective ventilation in patients with a PaO2:FiO2 <20 kPa leads to increased ICU mortality, an effect not present in patients with PaO2:FiO2 >40 kPa.

References

  • 1.ARDS Network, Ventilation with lower tidal volumes as compared with traditional tidal volumes for ALI and ARDS. NEJM 2000; 342:1301–1308. [DOI] [PubMed]
  • 2.Futier E, et al. A trial of intraoperative low tidal volume ventilation in abdominal surgery. NEJM 2013; 369:428–437. [DOI] [PubMed]

EP.072

Prone Position Ventilation: A Quality Improvement Project

Sarah Wickenden1,2, Chris Gough1,2 and Sarah Hillman2

1Severn School of Anaesthetics, Bristol, UK

2Royal United Hospital, Bath, UK

Abstract

Introduction and Aims: Prone position ventilation can be used to optimise oxygenation and improve mortality outcomes in mechanically ventilated patients with severe hypoxic respiratory failure. Placing patients in this position is not without risk and as such there is often an element of apprehension surrounding the practical aspects of positioning a patient. A number of clinical incidents relating to complications of proning, including patient blindness, had occurred, and it was evident that a lack of staff education had contributed to these.

Tea-trolley teaching sessions are a method of delivering teaching at the bedside, by taking the teaching to the staff. The method has previously been used successfully for teaching practical skills (e.g. fibreoptic intubation) and focused information (e.g. managing paediatric anaphylaxis). We utilised this novel teaching method in a new environment – a critical care unit – to give a 7 minute snap-shot of key information to staff. This was provided in parallel with publicising a new guideline and creating a proning checklist.

Method: A brief (7 minute) presentation was prepared and delivered in tea-trolley style. The information delivered covered the most important aspects of a new prone position ventilation guideline, and was focused on maintaining patient safety and preventing complications. It was delivered opportunistically to those on shift during their normal working hours. This enabled broad coverage of ITU staff and ensured multidisciplinary participation. Feedback forms were collected during the same shift.

Results: The tea-trolley teaching was delivered to 67 staff during a two month period. 85% felt they would be confident proning a patient following the session compared to 39% at the start (marked as agree or strongly agree on the feedback form). 99% agreed the training would improve safety and 99% also recommended other units provide similar training.

Conclusion and Discussion: This project has demonstrated a large improvement in confidence in placing a ventilated patient prone, as well as awareness of complications. Staff felt the main aim of reducing risk to patients had been achieved. We have subsequently supplemented this teaching with practical proning sessions.

Other clinicians may wish to consider utilising the tea-trolley teaching method to deliver teaching on this, and other topics, in their own critical care units.

EP.073

Improving the Safety of Nebuliser Prescription in Patients at Risk of Type 2 Respiratory Failure

Daniel Wise1, Andrew Fyfe1 and Robert Torrance1

1Aintree hospital, Liverpool, UK

Abstract

Aim: Patients at risk of type 2 respiratory failure require careful management of oxygen therapy. NICE suggests “The driving gas for nebulised therapy should always be specified in the prescription.” An adverse event highlighted that nebulisers were being driven by oxygen rather than air in at-risk patients, leading to hypercapnia and type 2 respiratory failure, so we set out to audit the reasons for this. Our aim was to make the nebuliser prescription compliant with the NICE COPD guideline 2010, with the ultimate aim of reducing adverse outcomes.

Methods: Three areas for intervention were identified. Education of nurses, education of junior doctors, prescription change on electronic prescribing system (EPS).

Data were collected before intervention to assess whether the prescriber had specified a gas to drive the nebuliser on admission, or at any other time during the hospital admission episode, in patients coded as having type 2 respiratory failure. A new protocol was added to the EPS which required the prescriber to specify the gas with which the nebuliser would be driven. Data were collected after the education intervention and again after the implementation of changes to the EPS, to assess whether any benefit was derived from either of these changes. A chi-squared test was applied to compare the results from each audit.

Results: In the initial audit of 25 patients, 4 patients had a documented gas driver on admission (15.4%) and 4 had documentation at some point during the admission (15.4%).

Education: After the education cycle, prescription documentation was re-audited. Of 17 patients, 3 had documentation on admission (17.6%) and 5 had documentation at some point during the admission (29.4%). Compared to the initial data there were no significant differences in admission documentation (p = 0.888) or documentation throughout admission (p = 0.298).

EPS: Following change to EPS, a further re-audit was undertaken. Of 16 patients, 14 patients had documentation on admission (87.5%), 14 had documentation during the admission (87.5%).

Compared to the second audit there were significant differences in admission documentation (p < 0.001) and documentation during admission (p < 0.001).
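A minimal sketch of the 2×2 chi-squared comparison, using the admission-documentation counts reported above for the education audit (3/17) versus the post-EPS audit (14/16); the exact test variant the authors used is not stated, so no continuity correction is assumed:

```python
import math

def chi2_2x2(a, b, c, d):
    """Pearson chi-squared (1 df, no continuity correction) for a 2x2 table,
    with the p-value from the chi-squared(1) survival function."""
    n = a + b + c + d
    chi2 = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    p = math.erfc(math.sqrt(chi2 / 2))  # P(chi2(1) > x) = erfc(sqrt(x/2))
    return chi2, p

# Admission documentation: 3/17 after education vs 14/16 after the EPS change
chi2, p = chi2_2x2(3, 14, 14, 2)
print(round(chi2, 2), p < 0.001)  # 16.1 True
```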

Conclusion: The education intervention did not appear to improve the prescription of nebulisers in type 2 respiratory failure. There was an improvement after prescription change. From a human factors point of view, the original EPS was an example of a latent error. This change should improve the safety of nebuliser prescribing within the trust.

EP.074

Head injury or something else?

Nathalie Graham1 and Radha Sundaram1

1Royal Alexandra Hospital, Paisley, Glasgow, UK

Abstract

History: A 20-month-old boy was brought in to the emergency department (ED), described as “not himself” by his parents. He was described as sleepy and less responsive than normal since being witnessed to fall and bump his head at nursery the previous day. His parents had had to prop him up to eat the night before as he was so drowsy, and he was no better in the morning. He had no past medical history, had been a normal delivery and was up to date with immunisations.

The best GCS recorded in the ED was 6 (E1 V2 M3). He was noted to be relatively hypertensive and bradycardic, with equal and reactive pupils, no focal neurological deficit and bilateral downgoing plantar reflexes. He was apyrexial. He was intubated in the ED prior to CT transfer. The initial suspected diagnosis was traumatic intracranial bleed.

Investigation: The CT showed generalised ventricular dilatation, most marked in the right lateral ventricle, with possible congenital absence of the corpus callosum. There was some midline shift to the left (maximally 11 mm). There was no intra- or extra-axial haemorrhage, no evidence of space occupying lesion and no evidence of skull fracture. On further questioning the patient's parents recalled further history – the child had probably been unwell for a week, with progressive drowsiness, 2 episodes of vomiting and coryzal symptoms.

The patient was transferred to a tertiary centre for joint neurosurgery and paediatric intensive care. A ventricular drain was inserted that day to decompress the acute hydrocephalus. CSF samples showed no white cells, no bacteria and culture was negative. A diagnosis of lateral ventricle choroid plexus cyst was eventually made; this was thought to be congenital.

This was a subtle history; however, the initially incorrect suspicion did not change the course of treatment for this child.

Discussion: Congenital choroid plexus cysts occur in around 2% of pregnancies and can often be detected antenatally. The majority of cases are not associated with other congenital abnormalities; however, they can be linked with trisomy 18 or 21, Klinefelter syndrome and Aicardi syndrome (congenital absence of the corpus callosum – as seen in this child). The cysts are generally seen at the level of the lateral ventricles and are insignificant in most cases. They often resolve by the 3rd trimester of pregnancy, and acute hydrocephalus is a known but rare complication.

EP.075

A rare case of Antineutrophil Cytoplasmic Antibody-Associated Vasculitis (AAV) with hypocomplementaemia: Presenting with bowel ischaemia without major signs of other organ involvement

Jonathan Pang1, Kingshuk Das2, Shun Ying Ho3, Rishab Bassi2 and Jain Rajesh2

1Queens Hospital Romford, London, UK

2Queen Hospital Romford, Romford, UK

3Newham Hospital, Newham, UK

Abstract

Introduction: AAV is a group of life-threatening multisystem autoimmune diseases characterised by necrotising inflammation of small to medium vessels. Patients with the most severe forms of AAV requiring intensive care are commonly admitted with lung manifestations including alveolar haemorrhage, renal involvement requiring renal replacement therapy, and thrombotic microangiopathies.

Case Summary: We describe a 16-year-old British Indian male who presented with 5 days of abdominal pain and diarrhoea. During his admission, an OGD showed severe oesophagitis and necrosis. On day 5 of his admission, he deteriorated significantly with a metabolic acidosis and hypotension.

CT (abdomen): thickened sigmoid colon wall 1.5 cm, dilated colon up to 5 cm. Intra-operative findings: necrotic bowel (infarcted sigmoid colon, patchy infarction of the rest of the colon with mesenteric lymphadenopathy) requiring subtotal colectomy. Post-operatively, he developed thrombocytopenia with coagulopathy consistent with DIC. He was managed supportively with FFP, platelets and tranexamic acid.

Blood film: few fragments of RBC

Blood investigations: LDH 2000 units/litre; immunology screen: C-ANCA positive, PR3 11 (reference range: 2); MPO, GBM and ANA negative; low C3 and unrecordable C4. Normal renal function; ADAMTS13 negative. CXR: pulmonary congestion. Transthoracic echocardiogram: normal.

Bowel histology: variable, extensive haemorrhagic necrosis ranging from mucosa to near full thickness, with multiple foci of regenerative mucosal change consistent with previous ischaemic damage. A probable underlying cause for the bowel infarction is provided by the finding of widespread leukocytoclastic vasculitis associated with thrombophlebitis involving intermediate-calibre veins with a thin muscle wall. An occasional artery is affected by thrombosis, but the arteries are essentially spared. The findings fit with a predominantly haemorrhagic/venous pattern of necrosis and could be a result of abnormal immune complex deposition. No granulomata or eosinophilic infiltrate were seen.

The gastroenterology team reviewed the histological diagnosis and ruled out inflammatory bowel disease (IBD), taking the view that significant bowel ischaemia is not a common feature of IBD.

He was diagnosed with AAV and immunosuppressed with pulsed methylprednisolone and intravenous immunoglobulin therapy.

Conclusion: Although AAV often presents with lung or renal involvement, unexplained bowel ischaemia in a young patient should prompt clinicians to consider AAV, as prompt instigation of immunosuppressive drugs to induce remission is critical for patient prognosis.

Hypocomplementaemia in ANCA-associated systemic vasculitis appears to carry a higher incidence of organ involvement than the usual normocomplementaemic forms of vasculitis. Therefore, early assessment of complement levels could aid prognostication in these life-threatening vasculitides.

EP.076

A case of raised anion gap metabolic acidosis: confirmed ethylene glycol poisoning with successful organ donation

Gemma Summons1, Priyan Odedra1 and Mervyn Singer1

1University College London Hospitals NHS Foundation Trust, London, UK

Abstract

Introduction: Our patient presented with suspected overdose and a high anion gap metabolic acidosis. It was confirmed that he had consumed ethylene glycol but despite treatment with fomepizole and continuous veno-venous haemofiltration (CVVHF), brain stem death was confirmed. He donated his heart valves, liver and tissues.

Description: A 47 year old man was found disorientated and vomiting, with a note expressing suicidal intent, but no evidence to suggest the nature of the overdose. He was last seen 24 hours previously, had no medical or substance history, and was not registered with a GP. He was seen at 4am with a Glasgow Coma Score (GCS) of 14, a systolic blood pressure of 220 mmHg, and a metabolic acidosis (pH 7.01) with an unrecordable lactate. His GCS deteriorated to 3; he was intubated and transferred to UCLH Critical Care following a CT of the head, abdomen and pelvis to exclude ischaemic colitis; this revealed only epididymitis. Paracetamol and salicylate levels were normal.

By 5am he remained in metabolic acidosis but with a normal lactate: pH 7.13, pO2 29.2 kPa on 40% FiO2, pCO2 3 kPa, HCO3 7.5 mmol/L, Na 151, K 4.4, Cl 107, lactate 3 (all mmol/L).

CVVHF, sodium bicarbonate, electrolyte replacement and vasopressors were commenced. At 9am it was noted that there was a significant anion gap acidosis of 40.9 mmol/L. Ethylene glycol or methanol toxicity was suspected, so laboratory tests (including urine calcium oxalate crystals and serum osmolar gap) were sent and fomepizole was requested from pharmacy.
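The reported gap is consistent with the potassium-inclusive anion gap formula applied to the 5am values; a minimal sketch (all values in mmol/L):

```python
def anion_gap(na, k, cl, hco3):
    """Anion gap = (Na+ + K+) - (Cl- + HCO3-), all values in mmol/L."""
    return (na + k) - (cl + hco3)

# 5am values from the case: Na 151, K 4.4, Cl 107, HCO3 7.5
gap = anion_gap(na=151, k=4.4, cl=107, hco3=7.5)
print(round(gap, 1))  # 40.9 -- markedly raised
```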

He became progressively acidotic, with a lactate of 14 and malignant hypertension despite GTN and antihypertensives. At 5pm ethylene glycol poisoning (385 mg/L) was confirmed, with calcium oxalate crystals present. Retinal damage, renal failure and cerebral oedema are likely at levels >200 mg/L. We then treated with fomepizole, but he developed sequentially fixed, dilated pupils and ceased spontaneous ventilation. CT head confirmed cerebral oedema and subarachnoid haemorrhage. Brain stem death was confirmed the following day. He was on the organ donor register and, with his family's consent, donated his heart valves, liver and tissues.

Lessons:

1) Consider ethylene glycol poisoning with a raised anion gap acidosis, as this is an early sign.

2) Ethylene glycol poisoning can cause a falsely elevated lactate (see the emergency department ABG), as some analysers cannot differentiate between glycolate and lactate (Pernet et al, 2009).

3) Consider early fomepizole treatment if ethylene glycol is suspected, plus sodium bicarbonate and CVVHF if acidaemic.

4) Organ donation may be possible even with a late presentation.

EP.077

High anion gap metabolic acidosis associated with paracetamol and flucloxacillin – an underdiagnosed problem?

Christopher Holt1 and Andrew Hitchings2

1St Georges University Hospitals NHS Foundation Trust, London, UK

2St Georges University of London, London, UK

Abstract

A case report of pyroglutamic acidosis associated with paracetamol and flucloxacillin – a significant adverse drug reaction (ADR) with commonly prescribed medication that is likely underdiagnosed and under-reported.

Pyroglutamic acidosis is a recognised cause of high anion gap metabolic acidosis with over 60 case reports to date. Many of the risk factors for pyroglutamic acidosis are reflected in our inpatient populations and include malnutrition, infection, liver failure, renal failure, diabetes and certain medications at therapeutic doses, including paracetamol and flucloxacillin.

We describe the case of a 65 year old man originally admitted for spinal surgery who developed a wound infection requiring prolonged flucloxacillin therapy, on a background of chronic kidney disease, type 2 diabetes mellitus and chronic back pain managed with long-term paracetamol. The case highlights the potential underdiagnosis of pyroglutamic acidosis due to a lack of readily available diagnostic testing for urinary organic acid levels and a lack of awareness of pyroglutamic acidosis as a differential diagnosis for unexplained high anion gap metabolic acidosis.

Considering the widespread use of these drugs and the relatively common risk factors associated with this condition, we propose that a large number of patients may never be diagnosed owing to a lack of awareness and investigation. Given the severity of this ADR at therapeutic doses, we believe that diagnostic testing for urinary organic acid levels should be more readily available. This would encourage investigation and diagnosis of pyroglutamic acidosis and, in turn, give a more representative picture of the prevalence and significance of this ADR in practice.

In reporting this case we hope to highlight this complication as a cause of unexplained high anion gap metabolic acidosis and encourage clinicians to consider closer monitoring of patients at risk. We would encourage pharmacovigilance and reporting of this complication, as well as further research into its occurrence.

EP.078

The Impact of Frailty on Critical Care Unit Outcome and Treatment Intensity in a District General Hospital

Laura MacNally1, Nyan Soe1 and Ramesh Ananth Manohar1

1Scunthorpe General Hospital, Scunthorpe, UK

Abstract

Introduction: Frailty may be defined as a multi-dimensional syndrome characterised by the loss of physical and cognitive reserve that predisposes to the accumulation of deficits and increased vulnerability to adverse events. This phenomenon is most often described in the elderly, in whom it has been demonstrated to have an association with adverse outcomes following critical care admission, but it is also increasingly recognised in younger adults. We have determined the prevalence of frailty and its impact on outcome and treatment intensity among all adults admitted to our critical care unit.

Methodology: We prospectively enrolled all planned and unplanned adult admissions to the Critical Care Unit of our hospital over a period of 3 months. All were assessed for frailty by the unit nursing staff following admission. Frailty assessment was undertaken using the Dalhousie Clinical Frailty Scale and, although no formal blinding was implemented, clinicians were unaware of the scores given. Patients were deemed frail if allocated a score greater than 4 on the Clinical Frailty Scale (CFS). The main outcome measure was on-unit mortality. Secondary outcome measures included duration of unit stay and treatment intensity. We also considered limitation of care/DNACPR decision making.

Results: 98 patients were admitted and assessed according to the CFS in the described time period and allocated scores ranged from 1–9. The overall prevalence of frailty was 40.8%. Frail patients were more likely to be female (57.5%), had a higher average age (66.1 vs 57.9 years) and had a greater average number of comorbidities (2.68 vs 1.60). On-unit mortality was greater amongst frail patients at 17.5% vs 10.3% for non-frail patients, and average duration of critical care unit stay was also greater at 9.76 days vs 3.72 days. Invasive ventilation occurred more frequently in frail patients (42.5% vs 34.5%) and both haemodynamic support and Renal Replacement Therapy occurred more frequently in the frail population (55% vs 36% and 15% vs 10.3% respectively). Despite this, limitation of care decisions and DNACPR orders were made more frequently for frail patients (25% vs 5.17% and 40% vs 15.5% respectively).

Conclusions: Frailty is common in patients admitted to the Critical Care Unit of our hospital and is associated with increased on-unit mortality despite the greater treatment intensity provided. Diagnosis of frailty may assist in the implementation and communication of appropriate care plans, along with management of survivorship expectations in critically ill patients.

EP.079

Using a modified CriSTAL scoring system identifies factors associated with a poor outcome after admission to Critical care

Kieran Jankowski1 and Daniele Bryden2

1The University of Sheffield, Sheffield, UK

2Sheffield Teaching Hospitals, Sheffield, UK

Abstract

Introduction: The growing prevalence of frail and/or elderly individuals referred for higher levels of medical care is projected to create demand for intensive care beyond current capacity. Identifying those with the best chances of survival reduces the risk of subjecting people to treatments of no benefit or of potential harm, and maximises the potential for recovery. Generic prognostic scoring systems for use at the point of referral to critical care are needed to inform these considerations and provide information to patients.

Methods: A retrospective epidemiological database analysis of admission and outcome data from 1000 patients aged 70 years and above admitted to a single centre was used to assess the prognostic ability of a chronic health indicator scoring system adapted from the previously published CriSTAL criteria (1). Initial analysis of 500 patients using a basic model (Cr1) allowed identification and weighting of the variables significantly associated with mortality. The model was then streamlined and modified to create a second model (Cr2), whose prognostic utility was assessed in a separate cohort of 500 randomly selected patients.

Results: Mean age was 77.50 years (±5.83 SD; range 70–101 years). The Cr1 model identified 8 variables clinically important in predicting mortality (P ≤ 0.1) independent of admission diagnosis: MI <6 months, abnormal ECG, congestive cardiac failure (NYHA ≥ 2), chronic obstructive pulmonary disease, chronic liver disease, metastatic cancer, hospital stay ≥5 days preceding ICU admission, and frailty (CFS ≥ 4). After streamlining, each significant variable was weighted to create the Cr2 model. ROC curve analysis indicated both models had significant calibration and moderate predictive capability; however, the alterations made to Cr1 resulted in an improved predictive model (AUC: Cr1 = 0.67; Cr2 = 0.72).

Conclusion: Eight statistically significant variables predictive of mortality were identified, including two (an abnormal ECG, and a hospital stay of ≥5 days preceding ICU admission) not previously identified as significant. The Cr2 model showed consistent improvements on Cr1 in all tests and exhibited AUC results competitive with other ICU prognostic models not designed for use at the point of admission. However, the relative lack of sensitivity and specificity in the ROC curve results means neither model can provide definitive outcome predictions. The presence of these significant variables in an elderly patient could be used to inform clinical discussions with patients and relatives as to likely survival from ICU treatments.

EP.080

Does the Clinical Frailty Scale aid prognostication in ICU?

Claudia Porteous1, Laura Langton1, Jeff Little1 and Adam Old1

1Warrington Hospital, Warrington, UK

Abstract

Background: In the current climate, with our ageing population, there is increasing demand for critical care1,5 without a matched increase in resources; critical care must therefore be used optimally. Elderly patients are a heterogeneous cohort without a robust evidence base for prognostication2. Previous studies have shown that adjustment for age and co-morbidities cannot precisely assess mortality and morbidity1. Alternatively, it has been suggested that a formal quantification of frailty, defined as a multi-dimensional syndrome characterised by the loss of physical and cognitive reserve that leads to increased vulnerability to adverse events3, may enable objective assessment of this complex patient group4.

Methods: This was a single-centre, prospective study that included all patients aged over fifty admitted to critical care in a district general hospital in North West England over a three-month period from November 2015 to February 2016. A total of sixty-eight patients were recruited. A clinical frailty score (CFS), as developed by Rockwood et al3, was calculated for each patient; ICNARC and APACHE scores were also calculated. The primary outcome measured was in-hospital mortality; length of stay was the secondary outcome.

Results: Patients were classified into two groups, high frailty index, a CFS score of five and above, or low frailty index, a CFS score of below five. Two thirds of patients had a low frailty index and one third a high index. Mortality was 35% in those with a high degree of frailty, compared to 24% in less frail patients. Average length of stay was 21.8 days in the less frail cohort and 25.4 days in the high frailty index group.

Interpretation: This small single-centre study adds to the body of evidence suggesting that frailty is prevalent in patients requiring assessment for admission to critical care1,4, and that assessing at-risk patients for frailty may be of use for prognostication, and may thus inform escalation decisions1,4,7.

3. Rockwood K, Song X, Macknight C, et al. A global measure of fitness and frailty in elderly people. CMAJ 2005; 173: 489–495.

EP.081

Difficult Decisions: Predicting Outcomes in Elderly Patients with Community-Acquired Pneumonia on a District General ICU

Sara Bonfield1, Matthew Holland1 and Michael Spivey1

1Royal Cornwall Hospitals Trust, Truro, UK

Abstract

Background:

1.2–10% of adult hospital admissions with CAP require ICU

Mortality in this group is >30%

60% of these deaths are in patients over 84

On our ICU:

CAP is one of the most common primary diagnoses

Mortality for CAP in patients ≥80 is 57.4% at unit discharge and 70.2% at hospital discharge.

Difficult decisions are frequently made regarding the likely benefit of admitting elderly, co-morbid patients to ICU, where beds are finite and therapy can be traumatic. We reviewed the cohort of patients ≥80 treated for CAP on our unit, looking for prognostic indicators, with a view to developing a validated decision-making aid.

Methods: Electronic databases were searched for patients ≥80 admitted to our unit with CAP since 2012. Hospital-acquired pneumonia and repatriations were excluded. 47 cases were identified and data relating to the point of admission was collected from electronic records. Data collected included demographic, clinical and physiological parameters and laboratory results.

Statistical analysis was performed using SPSS. All data were investigated with univariate binary logistic regression modelling (UBLR), to evaluate their relationship with outcome at hospital discharge. Variables with P values <0.059 underwent multivariate binary logistic regression modelling (MBLR).

Results: Using UBLR, presence of chest X-ray (CXR) changes in ≥3 lobes or bilaterally correlated significantly with mortality (P = 0.032).

ICNARC score and albumin <34 closely approached significant correlation (P = 0.051 and 0.050 respectively), and albumin <24 correlated significantly when examined with Chi-squared test (P = 0.0298).

No patients with albumin <24 survived.

Presence of CXR changes in ≥3 lobes or bilaterally plus albumin <34 correlated significantly with mortality, P = 0.0044.

Conclusions: In this retrospective audit of a small sample in one ICU, we found that the presence of CXR changes in ≥3 lobes/bilaterally appears well correlated with mortality, particularly in combination with hypoalbuminaemia. This is in keeping with recent work by others, which has found that both variables are strong mortality indicators. The significance of our findings is likely to improve with a higher-powered study, especially with regard to albumin and ICNARC score, which closely approached significance. Furthermore, as hypoalbuminaemia worsens, stronger correlation may be found with greater patient numbers.

These findings are not strong enough to support admission decision-making; but we plan to continue this work prospectively, increase our study population and look for more significant results, which it may be possible to combine in a decision-making tool.

EP.082

The impact of frailty in an elderly population admitted to the intensive care unit

Irene Francis1, Dev Katarey1, Julian Cumberworth1, Rebecca Gray1 and Owen Boyd1

1Brighton & Sussex University Hospitals NHS Trust, Brighton, UK

Abstract

Introduction: There is heterogeneous evidence regarding the impact of pre-morbid frailty on outcomes after admission to an intensive care unit (ICU). The aim of this study is to determine the impact of pre-morbid frailty on patient outcomes following discharge from ICU in a local elderly population.

Methods: This retrospective cohort study included all patients ≥75 years old admitted to the ICU from July 1st 2015 to June 30th 2016. Frailty was defined as requiring a significant amount of assistance with activities of daily living, as referenced by the Clinical Frailty/Rockwood score. Patients excluded from analysis were those with out-of-hospital cardiac arrests and those admitted following elective surgery. Each patient’s critical illness was risk-stratified by the Acute Physiology and Chronic Health Evaluation (APACHE) II score. The primary outcomes were mortality in the ICU and one-year mortality; secondary outcomes were length of stay and days on organ support. Data were analysed using the chi-squared test, Student’s t-test, and Kaplan–Meier survival analysis as appropriate. Preliminary results are reported.

Results: To date, data on 85 eligible patients, of whom 60 were non-frail and 25 were frail, have been analysed. The non-frail and frail groups were similar in age (83.2 yrs vs. 83.5 yrs, p = 0.820), gender (male 61% vs. 56%, p = 0.808), BMI (25.9 vs. 26.4, p = 0.731), and admission APACHE II score (19.4 vs. 19.4, p = 0.972). Though not significant, the prevalence of dementia was higher in those considered frail (5.4% vs. 17.4%, p = 0.087), as expected. Whilst in the ICU, there was no significant difference in days on invasive ventilation (5.6 vs. 2.6, p = 0.145) or ICU length of stay (6.2 vs. 4.9, p = 0.230) between non-frail and frail patients, although frail patients required significantly fewer days on inotropic support (4.2 vs. 1.8, p = 0.035). There was no significant difference in mortality in ICU (11.7% vs. 4.0%, p = 0.270) or at one year (41.8% vs. 54.5%, p = 0.311; Figure 1).

Conclusion: Preliminary data suggest that, in a population over 75 years of age, frailty is not associated with increased mortality in the ICU. Days on inotropic support were actually fewer than in a similar non-frail population. These results challenge the current dogma that frail patients are less likely to survive an admission to the ICU.

EP.083

Outcomes in critically ill octogenarians and older: A clinically relevant tool to inform admission decision making

Ged Dempsey1, Lauren McGarey1, Edward Benison1, Ben Morton1,2

1Aintree University Hospital NHS Foundation Trust, Liverpool, UK

2Liverpool School of Tropical Medicine, Liverpool, UK

Abstract

Introduction: Increasing numbers of elderly patients (≥80 years old) are presenting as emergencies to critical care. Such patients frequently have significant pre-existing co-morbidities and frailty, with poor critical care outcomes (Flaatten ICM 2017). In this context, critical care admission requires careful consideration. However, the existing literature frequently explores hospital mortality, ignoring longer-term outcomes.

Methods: A retrospective observational study of elderly patients (≥80 years) with unplanned admissions to the critical care unit at Aintree University Hospital (January 2010 – December 2016). We collected clinical information on acute physiology and co-morbidities (Functional Co-morbidity Score). We recorded hospital discharge status and 12-month survival data. HRA approval reference: 220258.

Results: We identified 705 emergency elderly admissions (mean age 84 years, SD 3.2) during the study period. There were 354 (50.2%) female patients; survival was 60.9% at hospital discharge and 37.6% at one year. The APACHE II score was a fair predictor (AUCROC 0.705) of hospital mortality but a poor predictor (AUCROC 0.668) of one-year mortality. We constructed a stepwise multivariate logistic regression model to identify clinically pertinent predictors of hospital (Table 1, AUCROC 0.754) and one-year mortality (AUCROC 0.671) when making admission decisions.

Conclusions: The APACHE II score was a fair predictor of hospital mortality and a poor predictor of mortality at one year. Simple clinical tools calculated at the point of admission performed better than the APACHE II score in predicting hospital mortality but not one-year mortality. This is likely due to multiple additional factors that impact longer-term mortality for hospital survivors.

Table 1.

Multivariate logistic regression analysis to determine admission predictors of in hospital mortality in elderly patients admitted to critical care in emergency. Output constructed using factors that were statistically significant in univariate-analyses for hospital mortality. Med/Surg diagnosis: patient with a primary medical or surgical problem (medical associated with worse outcomes). Deprivation score: index of multiple deprivation. Number of observations = 616, pseudo R2 = 0.150. Std. Coef. = standardised coefficient (beta value). Std. Err. = standard error, 95% C.I. = 95% confidence interval.

Variable Coef. Std. Err. 95% C.I. P value
P/F ratio −0.004 0.001 −0.006 – −0.002 <0.001
Lowest GCS −0.105 0.030 −0.164 – −0.046 0.001
Lowest pH −4.248 1.047 −6.299 – −2.196 <0.001
Lactate 0.147 0.039 0.071 – 0.223 <0.001
Med/Surg diagnosis −0.373 0.185 −0.736 – −0.010 0.044
Deprivation score 0.091 0.033 0.026 – 0.156 0.006
Constant 32.164 7.635 17.65 – 47.58 <0.001
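As an illustration of how the coefficients in Table 1 yield a risk estimate, they can be combined via the standard logistic function. This is a sketch only: the patient values below are hypothetical, and the 0/1 coding assumed for the Med/Surg variable is not stated in the abstract.

```python
import math

# Coefficients from Table 1 (in-hospital mortality model).
COEF = {
    "pf_ratio": -0.004,    # P/F ratio
    "lowest_gcs": -0.105,  # lowest GCS
    "lowest_ph": -4.248,   # lowest pH
    "lactate": 0.147,
    "med_surg": -0.373,    # med/surg diagnosis (0/1 coding assumed)
    "deprivation": 0.091,  # index of multiple deprivation
}
CONSTANT = 32.164

def predicted_mortality(**values):
    """Logistic model: p = 1 / (1 + exp(-(constant + sum(coef * x))))."""
    z = CONSTANT + sum(COEF[name] * x for name, x in values.items())
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical patient (illustrative values only):
p = predicted_mortality(pf_ratio=200, lowest_gcs=10, lowest_ph=7.2,
                        lactate=4, med_surg=1, deprivation=5)
```

Consistent with the coefficient signs, a higher lactate or deprivation score raises the estimate, while a higher P/F ratio, GCS or pH lowers it.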

EP.084

Using social media to improve education in critical care

Neil Brain1 and Alistair Meikle2

1Queen Elizabeth University Hospital, Glasgow, UK

2Crosshouse Hospital, Kilmarnock, UK

Abstract

The challenge: Providing an interesting, up-to-date program of educational meetings based on the FICM curriculum. We have around 15 trainees, based over two sites, with up to 7 on-call on any day. Due to these constraints, our previous education program was suffering from poor attendance and mediocre feedback.

The solution: The two key elements of our new teaching program are a flipped-classroom style and the use of a social media app. We have based each session on one of the top 30 cases from the FICM curriculum. The standard format for each session was a case presentation and a journal club review of a relevant article, produced and delivered by a trainee, facilitated by an ICU consultant, senior trainee or consultant from another specialty, and video-conferenced between the two sites. Each term there was a session on diagnostics (such as imaging or ECGs) and a simulation session. We used a social media platform called Slack to help instigate these changes. This allowed us to form a private message board, send direct messages, share files and, importantly, send notifications directly to mobile devices. The app is accessible via mobile phone, tablet or desktop computer. This has meant that each week there can be discussion of the case and articles presented, allowing everyone to take part, including those not working that day or busy with on-call or theatre commitments. Relevant exam-based resources are shared each week, including MCQs and SAQs.

The result: We think that these changes have been an overwhelming success. Attendance at teaching sessions has significantly increased and feedback has been universally positive. We now have trainees volunteering to do more sessions than we have asked of them. It has been easier to get consultants from other specialties to give their time to facilitate sessions, as we are not asking them to prepare a presentation. We have also found that trainees from other specialties have heard about our teaching and attended sessions relevant to their training program. We now have over 30 active users on our Slack team, with over 1000 impressions some weeks.

The future: With each new wave of trainees joining us we are seeing growth in use of the app and degree of interaction and our own anaesthetic department has now adopted the format for their teaching activities.

EP.085

Virtual Reality! Experiential simulation improves emergency physiotherapy skills, improves MDT communication and reduces anxiety

Nikki Webster1

1St Georges University Hospitals NHS Foundation Trust, London, UK

Abstract

Aims & Objectives: To establish whether experiential simulation is beneficial in enabling physiotherapists to gain advanced respiratory competencies and manage high-risk, deteriorating patients in a safe and supportive learning environment.

Methodology: A one-day course was developed with colleagues from the St Georges Advanced Patient Simulator Learning Centre and comprised the gaining of clinical competencies for high-risk respiratory physiotherapy equipment, followed by clinical scenarios utilising advanced simulation to hone clinical reasoning and improve MDT collaboration. Staff experienced deteriorating-patient scenarios in a real-time simulated environment that adapted depending on the actions of the individual. Audiovisual recording allowed scenarios to be peer-reviewed, enabling clinicians to visualise their interactions, facilitating reflection and embedding learning. The course is designed on a rolling basis, ensuring that both new and established staff can gain and maintain competencies through annual training sessions.

Results: In eight months, fifty physiotherapists have undergone emergency skills training using simulation. All gained respiratory competency on the day, saving over 900 clinical hours of bedside training: a cost saving of £23,000. Evaluation of the training was enabled through the use of post-training questionnaires. 88% of staff who completed the training were junior staff members with little or no experience of on-call work. All were successfully treating acute respiratory patients as part of their on-call duties within 6 weeks of utilising the innovation.

100% rated the experience positively. A reduction in anxiety in pressured environments was cited most frequently as a key learning outcome (67%), followed by improved communication with the MDT (58%). 42% cited improved knowledge/skills or improved safety awareness/patient monitoring. 33% reported improvements in their ability to assess and 17% felt more confident in selecting appropriate treatment modalities. Taking staff out of the high-pressure environment not only improves their learning experience but also reduces anxiety experienced in real-life pressured environments. Communication often deteriorates under pressure, and an improved ability to communicate effectively with the MDT has been an indirect but welcome outcome of the training.

Conclusion and Actions: Utilising experiential simulation to train non-respiratory physiotherapists in emergency care is effective and offers many advantages over traditional patient-bedside training for this staff group. It offers a practical method of exposing staff to pressured environments and improving clinical reasoning and skills in a safe environment. It is cost effective, saves significant resources and is easily transferrable to other Trusts. In addition, it is well received by staff, reduces anxiety and improves MDT communication.

EP.086

Improving intraosseous access skills through ‘point-of-care’ teaching

Emma Phillips1

1Victoria Hospital Kirkcaldy, Kirkcaldy, UK

Abstract

Introduction: With hospitals becoming ever busier, finding time to deliver high quality teaching can be challenging, and often takes staff away from clinical areas or requires commitment during days off. Point-of-care diagnostic tests are becoming a popular way to make judicious use of time, and applying this idea to medical education seems logical. The aim of this project was to use 'point-of-care' teaching as a method to improve knowledge and confidence in intraosseous (IO) access in staff working in ICU and the cardiac arrest team. Obtaining IO access is crucial during emergencies where intravenous access has failed, therefore education on this is essential.

Methods: A ten minute teaching programme was designed on IO access including indications, contraindications and complications, method for insertion, and opportunity for practice of insertion. Participants were recruited during their shift (in ICU or as part of the cardiac arrest team) and teaching was delivered in the workplace. Pre and post-teaching questionnaires were completed assessing knowledge of IO access, confidence on insertion (on a ten-point rating scale), and qualitative feedback. Results were explored using a spreadsheet and Chi-square analysis.

Results: Teaching was delivered to 52 staff (19 nursing, 33 medical), who had a wide range of clinical experience (0–22 years). 38% had undergone previous training on IO access; however, none had ever inserted an IO device. A significant improvement in knowledge was demonstrated following teaching, with the rate of correct responses increasing from 38% to 93% (p < 0.01). Confidence levels also improved, from a median score of 1 pre-teaching (range 0–8) to a median score of 7 post-teaching (range 4–10). All participants said they would recommend the teaching session to a peer. Qualitative feedback indicated that this method of teaching was convenient and met the desired aims.
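The chi-squared comparison of pre- and post-teaching correct-response rates can be sketched as a 2x2 Pearson test. The counts below are hypothetical, chosen only to approximate the reported 38% and 93% rates; the actual denominators are not given in the abstract.

```python
def chi2_2x2(a, b, c, d):
    """Pearson chi-squared statistic for a 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Hypothetical counts: correct vs incorrect responses before and after teaching.
pre_correct, pre_incorrect = 20, 32    # ~38% correct
post_correct, post_incorrect = 48, 4   # ~92% correct

stat = chi2_2x2(pre_correct, pre_incorrect, post_correct, post_incorrect)
# For 1 degree of freedom, the critical value for p < 0.01 is 6.63.
print(stat > 6.63)  # True for these counts
```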

Discussion: The pre-teaching questionnaire demonstrated a lack of knowledge and confidence in IO access, with teaching leading to an improvement in these. This should lead to improved patient outcomes if difficult IV access in an emergency occurs. The method of ‘point-of-care’ teaching was widely accepted, and has major advantages such as being able to deliver succinct teaching during working hours. It could be carried out for other clinical skills for a range of staff. Limitations include that retention of knowledge was not studied, and further work may include a follow-up questionnaire to assess this.

EP.087

Design and implementation of a high-quality clinical skills workshop for post-anaesthetic and critical care nursing staff

Helen Westall1 and Rita Russai1

1London North West Healthcare NHS Trust, London, UK

Abstract

Background: The Royal College of Nursing requires each registered nurse to revalidate every 3 years. Nurses should complete 35 hours of continuing professional development, of which 20 should be participatory.

Aims: To develop a relevant and effective day-long course to update recovery staff in the key clinical skills essential to recognising and managing common medical and anaesthetic emergencies in post-operative patients.

Method:

• 21 post-anaesthetic and critical care unit (PACU) nurses attended

• Pre- and post-course questionnaire

• 3 × 40 minute lectures delivered by anaesthetic consultants

• 1 × 40 minute lecture delivered by a senior staff nurse

• 3.5 hours devoted to small group simulation and clinical skill workshops

Results: Of 21 attendees, 10 had been a PACU nurse for less than 6 months and 10 had over 10 years’ experience. 62% had been on an educational course within the past 6 months though 1 had not attended a course in over 5 years.

67% of nurses hoped that the course would act as a ‘refresher’ and bring their clinical skills ‘up-to-date’. The remainder were seeking to improve their knowledge and skills in airway (15%), pain (5%) or team working (5%).

100% were already aware of the trust’s major haemorrhage protocol and 86% were aware of the updated sepsis guidelines.

All candidates considered the course to be ‘good’ or ‘very good’ at meeting their educational objectives, matching their own learning style, being relevant and maintaining their interest.

When asked about their learning outcomes, 38% described a global enhancement of their learning and 62% reported improvement of specific clinical skills.

Conclusions: The course received excellent feedback despite significant disparity in previous nursing experience. This may be because candidates had similar learning objectives, assisted by attendees and faculty originating from the same working environment, promoting a greater understanding of normal operating conditions and realistic, believable scenarios.

Although the majority of attendees had good awareness of trust guidelines on sepsis and major haemorrhage, the lectures and workshops were still well received, suggesting that awareness of protocols does not necessarily translate into confidence in using them.

It is likely that a mixture of lectures, tutorials, workshops and simulation helped keep the course content accessible and interesting to a variety of learning styles and abilities.

Overall, the course was well received and fulfilled its educational objectives. The faculty look forward to delivering the course to other anaesthetic and PACU nurses in the future.

EP.088

The use of an online platform to improve communication between the local transfer (critical care) network and trainee anaesthetists

Emma Temple1 and Stephen John1

1Nottingham University Hospitals NHS Trust, Nottingham, UK

Abstract

Introduction: The Association of Anaesthetists of Great Britain and Ireland (AAGBI) recommends: ‘All doctors and other personnel undertaking transfers should have the appropriate competencies, qualifications and experience’ and that completion of a course is ‘highly desirable’. Following two surveys of trainees’ training experiences and confidence in undertaking solo transfers, we addressed the concerns raised by creating an online platform for transfer training.

Methods:

November 2015 survey: Transfer training experience, trainee confidence in undertaking level 2/3 transfers.

February 2017 survey: Ease of access to training, policies and feedback on network transfer incidents.

Free text comments invited.

Results: November 2015: n = 89. 89% of trainees were formally transfer trained. 100% of specialty trainees had undertaken solo intra-hospital and 93% inter-hospital transfers. 70% of core trainees had undertaken solo intra-hospital and 45% inter-hospital transfers. Trainees rated their own confidence (1 = not at all, 10 = very). Core trainees were less confident with inter- than intra-hospital transfers. Specialty trainees were confident with both (see Figure 1).

February 2017: n = 63. The local course had been attended by 54% of trainees. 38% stated it was “difficult” or “very difficult” to access booking information. Despite undertaking solo transfers, 24% were not trained. Only 25% rated communication from the network as average or above (see Figure 2), and 95% reported receiving no feedback from any critical incidents occurring on network transfers.

Action: We used this feedback to create an online platform with 3 primary aims:

1. Developing a communication link between the local transfer network and trainees: a central point of reference for contacts, curricula, up-to-date policies and equipment changes.

2. Improving access to information on transfer training and regional courses.

3. A dedicated forum for feedback and reflection upon critical incidents occurring on network transfers.

There are plans to develop the platform further with online training packages, videos and equipment updates.

Figure 1: Trainee confidence ratings (scale 1–10) for intra and inter hospital transfers

EP.089

An analysis of sleep in Intensive Care

Moses Chidowe1 and Kieron Rooney2

1The University of Bristol Medical School, Bristol, UK

2ICU Consultant – University Hospitals Bristol, Bristol, UK

Abstract

Introduction: Patients on ICU suffer from disturbed sleep and abnormal sleep architecture with deleterious consequences. Formal sleep assessment is time-consuming and requires expertise to interpret. There is an urgent need for interventions to improve duration and quality of sleep, but to demonstrate efficacy, simple and reliable methods of assessment are required. The Richards-Campbell Sleep Questionnaire (RCSQ) is a sleep questionnaire validated for use on ICU. We conducted a quality improvement project to evaluate the difference between patients’ and nurses’ perception of patients’ sleep using the RCSQ.

Methods: The RCSQ was completed by night-shift nurses at the end of their shifts, based on their interpretation of their patients’ sleep. The patients’ RCSQs were completed during the morning ward round, either independently or with assistance. Exclusion criteria for patients were a GCS < 13 and refusal to participate; there were no exclusion criteria for nursing staff. The questionnaire assessed the quality and quantity of sleep across six domains using a visual analogue scale (VAS) of 0–10, where 0 represents the worst sleep quality.

Results: The project was conducted over 9 days. After unavoidable exclusions and loss to follow-up, there were a total of 17 paired patient-nurse questionnaires (Table 1). Overall sleep quality on the ICU during this study was average; however, there was marked variability between individual patients’ perceptions of sleep. A paired t-test of nurse and patient results showed no significant difference between patients’ and nurses’ perceptions of sleep (p = 0.78).
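The comparison above rests on a paired t-test across the 17 matched patient-nurse questionnaires. As a minimal sketch of the underlying statistic (the scores below are illustrative, not the study data; obtaining the p-value additionally requires the t-distribution, e.g. via scipy):

```python
import math
import statistics

def paired_t(a, b):
    """Paired t statistic and degrees of freedom for two matched samples.

    t = mean(d) / (sd(d) / sqrt(n)), where d are the pairwise differences.
    """
    diffs = [x - y for x, y in zip(a, b)]
    n = len(diffs)
    t = statistics.mean(diffs) / (statistics.stdev(diffs) / math.sqrt(n))
    return t, n - 1

# Illustrative RCSQ-style VAS scores (0-10), not taken from the abstract.
patients = [6.0, 4.5, 7.0, 3.0, 5.5]
nurses = [5.5, 5.0, 6.5, 3.5, 6.0]
t, df = paired_t(patients, nurses)
```

A t statistic near zero, as in the study's p = 0.78 result, indicates that the paired differences average out to roughly nothing.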


Conclusion: Simple, reliable methods for assessing sleep are pivotal to assessing the impact of interventions to improve sleep on the ICU. Our results suggest that bedside nurses can reliably assess patient sleep quality using a simple validated questionnaire. Including the RCSQ in ICU clinical observations will allow nurses to act as assessors of patients’ sleep and will aid in the management of sleep disturbance on the ICU.

Table 1.

Numbers of participants, excluded participants, and patients lost to follow up.

Patients’ Questionnaires Nurses’ Questionnaires
Empty beds 37 37
GCS < 13 68 68
Refused 25 0
Loss to follow up 0 46
Completed 59 38
Paired questionnaires 17 17

EP.090

Barnsley Hospital ICU departmental review: “Improving daily appraisal and documentation of treatment limitations in the ICU, a novel approach”

James Wright1, Nicola Johnson2 and Sughrat Siddiqui3

1CT3 Anaesthetics – Barnsley District General Hospital, Barnsley, UK

2CT2 Anaesthetics – Barnsley District General Hospital, Barnsley, UK

3ICU Consultant – Barnsley District General Hospital, Barnsley, UK

Abstract

Background: Many factors are involved in making the decision to limit intensive care treatment, with the paramount aim being to provide patient comfort and dignity. There should be full discussion and consensus among the clinical multidisciplinary team, the patient (wherever possible) and their family.

According to Intensive Care Society guidelines:

‘Patients admitted to the ICU require a clear management plan encompassing interventional limits; this requires regular review and updating. It should be regarded as a formal ICU procedure subject to the same preparation, thought and consent as for any other aspect of care.’

Rationale: The Barnsley ICU department had recently implemented a daily patient review proforma, including a section on treatment limitations. Initial departmental observations were that this section was poorly utilised and that communication errors were occurring, especially at handover. We therefore reviewed ICS/RCoA/AAGBI treatment limitation standards and identified that the ergonomics of the proforma might be detrimental to achieving them. Consequently, a departmental performance review was undertaken.

Aims: To determine whether:

• Treatment limitations (including ‘no limitations’) had been established and recorded on admission

• Treatment limitations had been discussed with family members

• Treatment limitations had been considered and recorded within every daily review

and to identify the circumstances leading to incomplete or absent documentation.

Method: Prospective data collection involved daily analysis of each patient’s proforma following the consultant-led ward round.

During 2016–17, 123 different patients spanning 666 separate daily reviews were analysed over three audit cycles.

The review was partially blinded: departmental clinicians were not informed and supervisory consultant data were excluded.

Results: A closed-loop audit involving education, recommendations and subsequent re-review found departmental failings in meeting the ICS/RCoA/AAGBI 100% completion standards.

Daily treatment limitation documentation: 51% (Feb–Apr 2016) vs 55% (Aug–Sept 2016) completion.

Treatment limitation discussion with family members: 65% (Feb–Apr 2016) vs 74% (Aug–Sept 2016).

Second data set analysis showed minimal improvement; therefore a redesigned proforma was implemented, including a simple tick box treatment limitations section.

To aid compliance this section was repositioned adjacent to the consultant signature box.

After 3 months of circulation, re-review showed 87% daily documentation and 94% discussion with family (Jun–Jul 2017).

Conclusion: After adopting an ergonomic, human factors approach involving repeated education and redesign of the documentation proforma, the department displayed a marked improvement in compliance. The ergonomically designed proforma now prompts long-term departmental adherence to standards and provides robust handover information, thereby improving patient care.

EP.091

Explanation of Brainstem death and testing. A service improvement project

Huw Twamley1, Jacqueline Baldwin1, Eleanor Baker1

1Lancashire Teaching Hospitals NHS Trust, Preston, UK

Abstract

Background: In the UK between April 2016 and April 2017, 1775 patients had a potential diagnosis of brainstem death. It is well documented that relatives of brainstem-dead patients have a poor understanding of brainstem death and brainstem death testing. This lack of understanding has been shown to impact relatives’ decision-making when healthcare professionals seek their consent for organ donation.

There is no standard format and little research focusing on what information should be included, and the language which should be used, when explaining brainstem death and brainstem testing to relatives. We conducted a service improvement project on the critical care unit (CrCU) at Royal Preston Hospital (RPH) to address this unexplored topic.

Method: 14 critical care consultants were asked to explain brainstem death and brainstem testing during audio-recorded interviews using a standardised scenario in which a medical student played the role of a relative of a patient with potential brainstem death.

A patient and public (lay) group was used to evaluate and provide feedback on the consultants’ explanations and the main themes arising from them.

Results: Analysis showed marked variation between the consultants’ explanations, with little common language used. The lay discussion group provided positive and negative feedback on the main themes which emerged, with emphasis on the language used, the length of explanation and ease of understanding. Guidance for consultants incorporating this feedback was produced, and an example explanation was also formulated.

Conclusion: Using lay representatives to advise on consultants’ explanations of brainstem death allowed the development of guidance for consultants on the CrCU to refer to when explaining brainstem death and brainstem testing to relatives. This is the first step towards optimising and standardising these explanations, and creates the potential both to improve relatives’ understanding and to have a positive impact on their decision-making for organ donation in the future.

EP.092

A pilot study to look at the impact of a short visit from a Pets as Therapy dog to the Critical Care Unit

Emma Jackson1, Catherine Ashton1 and Jason Cupitt1

1Blackpool Teaching Hospitals NHS Foundation Trust, Blackpool, UK

Abstract

Background: Therapy pets have been shown to reduce anxiety, pain and fear, as well as improve patients’ responsiveness to their surroundings.(1) We carried out a pilot study to see whether it was feasible for critical care patients to receive pets as therapy and to assess what impact this novel form of therapy had on their well-being.

Methods: We carried out a pilot study involving level 2 patients on a 14-bed critical care unit. Patients were selected for a 15-minute visit from a therapy dog and consented for the visit. Exclusion criteria were animal allergy and animal phobia. Patients were asked a series of questions to assess the psychological impact of the dog visit before and after the therapy period, with further assessment 4 weeks later.

Results: Over the 4-week period, 11 patients were recruited. Of these, 1 subsequently died and another was unavailable for follow-up. None were CAM-ICU positive at the time of the visit. Of the patients recruited, 36% were current pet owners.

At the initial visit 100% of patients reported that they enjoyed the visit. The psychological scores (2) showed a reduction in anxiety, distress, sadness and hopelessness. There was no change in disorientation scores.

On subsequent follow up, all of the patients felt that the therapy visit was beneficial to their recovery; 78% reported that it helped normalise the critical care environment; 89% reported that it helped to re-orientate them to the non-medical world; and 100% felt that regular visits from the therapy dog would have been beneficial to their recovery.

Patients reported, in their own words, that it made them feel ‘over the moon’, that it ‘showed me life was still ongoing outside the hospital’ and that ‘it made me feel good for the first time in a long time’.

Nursing staff reported improved communication with their patients and perceived an improvement in their patients’ mood following the visit. The impact on the nursing staff was also positive.

Conclusion: We showed that a visit from a therapy dog on the critical care unit improved the mood of all our patients and was an important event during their recovery. We also showed we could implement a Pets as Therapy programme without disruption to the daily care of our patients.

EP.093

The use of the confusion assessment method (CAM-ICU) to reduce incidence of delirium in intensive care and improve outcomes: A systematic review

Nicola Denny1,2 and Nicola Ashby1

1Division of Health Science, University of Nottingham, Nottingham, UK

2Sherwood Forest NHS Foundation Trust, Sutton in Ashfield, UK

Abstract

Background: Delirium is a form of acute brain dysfunction in critically ill patients. Several tools to aid the detection of delirium have been developed for use in intensive care over the years. The Confusion Assessment Method for the Intensive Care Unit (CAM-ICU) is one of the more frequently used tools for detecting and diagnosing delirium. The objective of this study is to evaluate the current evidence for the CAM-ICU to determine whether incidence of delirium can be reduced.

Methods: A modified systematic review was conducted to identify randomised controlled trials (RCTs) of the CAM-ICU reporting on incidence of delirium and improved outcomes. Ovid, Embase, CINAHL and PubMed database searches were performed to identify RCTs published in English between 2010 and 2017, involving adult ICU patients and comparing the CAM-ICU to standard care. The Cochrane Central Register of Controlled Trials and the University of York Centre for Reviews and Dissemination were also searched. The CASP critical appraisal tool was used to assess the methodological quality of the studies.

Results: Two RCTs were included in the final review. Incidence of delirium was reduced in the intervention groups in both studies (34% and 20%, compared with 36% and 33% respectively in the control groups). Duration of delirium, duration of ventilation, length of ICU stay and in-hospital mortality at 30 days were not significantly affected. It was not possible to perform a meta-analysis of the included studies.

Conclusions: Despite its effectiveness in diagnosing delirium, the CAM-ICU has no measurable benefit in reducing incidence or improving outcomes when compared to standard care. However, the CAM-ICU may be an effective tool in reducing incidence of delirium when used as part of a care bundle approach alongside other delirium-reducing interventions. Further research into individual delirium-reducing interventions and specialised delirium prevention protocols is needed to determine their effectiveness in reducing incidence of delirium and improving outcomes for patients in intensive care.

EP.094

Transfer of ventilated critically ill patients by Advanced Critical Care Practitioners

Gavin Denton1, Nitin Arora1, Andrew Choyce1 and Anita Jones1

1Critical care, Heartlands Hospital, Heart of England Foundation Trust, Birmingham, UK

Abstract

Introduction: The Heart of England Foundation Trust has one of the largest teams of advanced critical care practitioners (ACCP) in the United Kingdom (UK). The critical care service operates across three hospitals. Both intra- and inter-hospital transfers are generally performed by ACCPs trained in transferring critically ill patients. This audit records the types of transfer and associated adverse events for transfers undertaken by ACCPs in the trust.

Methods: A web based anonymised electronic form was devised. Data were submitted shortly after transfer of mechanically ventilated critical care patients. All internal and external transfers by ACCPs were recorded.

Results: Between December 2016 and July 2017, 195 transfers of critically ill patients were recorded. Most were independent transfers by ACCPs (n = 177, 90.8%); patients requiring invasive mechanical ventilation comprised 79.5% of cases (n = 155). Of these 155 transfers, 32.9% (n = 51) were inter-hospital. Imaging was the most common reason for intra-hospital transfer (n = 82, 78.8%). Tertiary centre transfers, mainly for neurosurgical intervention, comprised 29.4% (n = 15) of inter-hospital transfers. In 91.6% (n = 142) of transfers of patients requiring invasive mechanical ventilation, no adverse events were reported. Hypotension was the most frequently recorded adverse event (n = 6, 3.9%). There were no extubations or airway events during the audit period.

Discussion: ACCPs in our service meet the Intensive Care Society training guidelines for staff performing transfers of Level 3 patients. The ACCP service uses dedicated transfer trolleys and secure equipment. It is our policy that there is a period of evaluation of current problems and a period of stabilisation prior to transfer. Fanara reviewed the transportation literature and identified a rate of 4–9% for serious adverse events during transfer of critically ill patients.1 One UK audit found that only 34% of doctors had training in critical care transfers, compared to 100% of ACCPs in this study.2

Conclusions: An ACCP-delivered transfer service for critically ill patients, adhering to Intensive Care Society transfer guidelines, recorded few adverse events compared with the limited published literature.

References

  • 1. Fanara B, Manzon C, Barbot O, et al. Recommendations for the intra-hospital transport of critically ill patients. Crit Care 2010; 14: R87.
  • 2. Easby J, Clarke F, Bonner S, et al. Secondary inter-hospital transfers of critically ill patients: completing the audit cycle. British Journal of Anaesthesia 2002; 89: 354.

EP.095

Improving transfer safety for critically ill patients at Kings College Hospital, London

Robert Jesty1, Aidan Devlin1, Jenny Townsend1, Beth Davies1

1Kings College Hospital, London, UK

Abstract

Intra-hospital transfers (e.g. to the CT/MRI/angiography suite) for Level 2 and Level 3 inpatients on critical care units at Kings are carried out by airway-trained doctors accompanied by nursing staff. Currently no pre-transfer checklist exists, and we felt that developing one could improve transfer safety by ensuring staff can be confident that the equipment and medication required for the transfer are available. There was also no clear system for how regularly, and by whom, the contents of the transfer bag were checked; this was carried out on an ad hoc basis.

We surveyed 33 members of the medical and nursing staff working across critical care units at Kings College Hospital to evaluate whether they felt the safety of transfers could be improved. 73% of those surveyed were not aware of how often the transfer bags were checked; only 49% stated that if equipment was needed quickly it would be easy to find, with the other 51% describing it as ‘difficult’ or ‘very difficult’; and 97% thought a checklist to go through prior to each transfer would improve safety. A snapshot inspection of a number of the transfer bags found no checking system in place and some expired items.

The issue surrounding checking of the transfer bags was rectified immediately, with regular checks introduced. We have developed a short checklist to be completed prior to intra-hospital transfers, so that staff who do not regularly work together, or who are new to the trust, can be confident that all essential equipment and drugs are in date and available, and know their whereabouts if required in an emergency. The checklist will be implemented shortly and its use will be re-audited in the coming months.

EP.096

A Modified FAST HUG BID Checklist Improves Rates of Family Engagement, Pressure Ulcer Prevention, and Sodium Level Control in the Trauma Intensive Care Unit

Gustav Strandvik1, Suresh Arumugam1, Abdelaziz Hammo1

1Hamad General Hospital, Doha, Qatar

Abstract

In 2005, JL Vincent published an ICU checklist aimed at reinforcing bundles of care in the Intensive Care Unit (ICU).1 The elements were Feeding, Analgesia, Sedation, Thromboembolic prophylaxis, Head of bed elevation, stress Ulcer prevention, and Glucose control (FAST HUG). In 2009, WL Vincent updated the mnemonic to include Bowel, Indwelling catheters, and Drug de-escalation (FAST HUG BID).2 We proposed that adjusting the checklist further to reflect the specific challenges facing Trauma ICU patients would result in improvements in safety processes.

Using a modified Delphi method, we identified three issues which could be addressed by an adjusted checklist: 1) pressure area/skin integrity at risk, 2) sodium level control, and 3) family engagement. The updated checklist added Skin integrity, Family engagement and Sodium level control, to make FFASSSTHUG BID. Use of a pocket-sized printout of the adjusted checklist was encouraged. A two-way questionnaire assessed doctor and nurse awareness of the new elements pre- and post-implementation.

There was significant improvement in self-reported satisfaction amongst nursing staff, with greatest improvement being perceived in Pressure area awareness. Rates of being Very Happy with their assessment of Pressure areas improved from 5% to 43%. Nursing staff felt that doctors were more aware of the three issues, particularly Sodium level control.

Doctors felt that their appreciation of all areas improved, but especially Family engagement. Doctors indicated that nurses had improved in their appreciation of the issues in all three domains.

A cross-referencing question indicated that doctors felt that the Modified FASTHUG BID checklist improved rates of Pressure area and Sodium control, but only 80% felt it improved rates of Family engagement.

Sixty-three percent of nurses felt that rates of Pressure area were improved by the new checklist, 96% felt that Family engagement levels were improved, and 83% felt that Sodium level control was better.

Due to cervical collars and enforced bedrest, rates of pressure area problems with skin breakdown may be high in trauma. Sodium levels are very important in traumatic brain injury and spinal cord injury, as they can be affected by, and influence the outcome of, these conditions. Due to the nature of trauma populations, families are often not aware of the accident or injuries until well into the patient’s TICU journey.

Modifying an Intensive Care checklist for trauma patients results in significant improvements in perceived error rates/missed management opportunities for this vulnerable patient group and their loved ones.

EP.097

Evaluating the safety profile of ACCP delivered Central Venous Catheterisation

Gavin Denton1, Andrew Simmons1, Sarah Quinton1, Sean Munnelly1 and Lindsay Green1

1Critical care, Heartlands Hospital, Heart of England Foundation Trust, Birmingham, UK

Abstract

Introduction: The Heart of England Foundation Trust has one of the largest teams of advanced critical care practitioners (ACCP) in the UK. Central venous catheterisation (CVC) is one of the core skills provided by ACCPs. Supervision and teaching of junior doctors in performing CVC also forms part of the ACCP role. The aim of this audit was to describe the ACCP contribution to delivering and supervising CVC insertion in critical care.

Methods: A web based anonymised electronic form was devised. Data were submitted shortly after insertion. All CVCs by the critical care service between December 2016 and July 2017 were recorded. These included insertion in critical care, interventional radiology, wards and the emergency department.

Results: Between December 2016 and July 2017, 222 CVC attempts were recorded. These were undertaken by ACCPs (n = 131, 59.0%), registrars (n = 38, 17.1%) and junior doctors (n = 50, 22.5%). ACCPs supervised 82.0% (n = 41) of junior doctor CVC insertions. The overall first-pass success rate was 83.3% (n = 185), with an ACCP first-pass success rate of 93.1% (n = 122). The overall complication rate was 5.4% (n = 12).

Discussion: In the UK critical care environment, CVC placement has traditionally been the purview of doctors. The bulk of CVCs in this audit were undertaken by ACCPs. A UK audit in critical care identified a complication rate of 7%.1 A complication rate of 5% for CVCs inserted by ACCPs is well within this published baseline for adverse events. With the introduction of new roles into an existing service, there are always concerns about meeting the training needs of junior doctors; we have identified that ACCPs also make a significant contribution to their supervision and training. A previous audit in our service showed a 35% increase in the number of CVC insertions by junior doctors since the introduction of ACCPs.

Conclusions: Most critical care CVCs in the audit location were inserted by ACCPs. The safety profile is comparable to published data. Supervision of junior doctor CVC insertion was generally provided by ACCPs.

Reference

  • 1. Lathey RK, Jackson RE, Bodenham A, Harper D, Patle; Anaesthetic Audit and Research Matrix of Yorkshire (AARMY). A multicentre snapshot study of the incidence of serious procedural complications secondary to central venous catheterisation. Anaesthesia 2017; 72: 328–334.

EP.098

ICU hits the NEWS: Early Warning Score documentation within Critical Care

Ryan Jaswal1, Erin Cooper1, Lucy Davidson1, Shahd Gali2, Harry Gething1, Justyna Lunkiewicz2, Amy McCallum1, Ramanish Ravishankar1, Oliver Vick1, Gregor Mcneill2

1University of Edinburgh Medical School, Edinburgh, UK

2Royal Infirmary of Edinburgh, Edinburgh, UK

Abstract

Introduction: The National Early Warning Score (NEWS) is the most commonly used track-and-trigger system on general wards within UK hospitals. It assesses 6 simple physiological parameters: respiratory rate, oxygen saturations, temperature, systolic blood pressure, pulse rate and level of consciousness. NEWS was introduced at the Royal Infirmary of Edinburgh in April 2016, and completion of NEWS observations has become standard practice in Critical Care once a patient is deemed fit for transfer to the general wards. It is key that NEWS observations are accurately recorded in Critical Care to ensure patient care is transitioned safely at the time of ward transfer. The aim of our audit was to assess the percentage of NEWS charts that had been filled out incorrectly by Critical Care staff at the Royal Infirmary of Edinburgh. Following the results, a 3-part intervention was introduced and a re-audit carried out.

Methods: Completed NEWS observation charts were audited for a period of 6 weeks pre intervention and post intervention. The intervention involved:

1. Critical Care nursing staff undertaking an online education package on NEWS chart completion.

2. Highlighting the importance of NEWS chart completion within a variety of unit safety forums.

3. Incorporation of NEWS scoring system into Critical Care discharge documentation.

Results: First audit: 6 week period (February – March 2017).

27/48 (56.3%) patients discharged from Critical Care had incorrectly completed NEWS charts.

Re-audit: 6 week period (June – July 2017).

6/22 (27.3%) patients discharged from Critical Care had incorrectly completed NEWS charts.

Discussion: The initial audit shows that human error plays a significant role in the misreporting of NEWS scores. However, the re-audit shows that this can be mitigated by providing Critical Care nurses with appropriate training on how to document information correctly on a NEWS chart, and by educating them on how NEWS scores influence healthcare decisions. To reduce human error further, an electronic recording system may be of use. Following this project, the Critical Care NEWS error rate will be incorporated into the local Critical Care Quality Indicator reporting tool.

EP.099

Ventilator-associated pneumonia (VAP) following trauma: A review of risk factors and outcomes amongst patients admitted to a UK major trauma centre

Deborah Kerr1, Kris Bauchmueller1, Louise Nugent1 and Andy Temple1

1Sheffield Teaching Hospitals, Sheffield, UK

Abstract

Introduction: It has been reported that trauma patients have a higher incidence of VAP than non-trauma ICU admissions. We aimed to identify risk factors for developing VAP following trauma, and assess its impact on key outcomes in a UK major trauma centre.

Methods: Retrospective observational review of ventilated adult trauma patients admitted to our critical care unit between January 2015 and December 2016.

We identified all trauma patients intubated prior to ICU admission from the Trauma Audit and Research Network database. Using this alongside our electronic patient record (Metavision), we retrieved data on patient characteristics, injury severity scores (ISS), duration of ventilation, hospital mortality and length of stay.

Statistical analysis was performed using SPSS. Data are expressed as mean (standard deviation, SD) or median (range) as appropriate.

Results: One hundred and seventy trauma patients were admitted during the study period (71.5% male), with a median age of 44.7 years (range 17–86). All sustained multiple injuries, the most severely affected body regions being the head (61.7%) and chest (10%). All patients were placed on a ventilator care bundle in line with established unit practice.

Twelve patients developed VAP, with four having two episodes during the same admission. ISS scores were significantly higher in the VAP group; however, no other significant differences were observed in age, gender or admission GCS between those who did or did not develop VAP. Outcomes are described in Table 1.

The VAP rate amongst our trauma cohort was 12.86 per 1000 ventilator days, considerably higher than the overall VAP rate for our ICU during the same period (7.25 per 1000 ventilator days).
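The rate quoted above is the standard episodes-per-1000-ventilator-days calculation. A minimal sketch follows; the counts in the example are hypothetical, not taken from this cohort:

```python
def vap_rate_per_1000(episodes: int, ventilator_days: float) -> float:
    """VAP rate expressed as episodes per 1000 ventilator days."""
    return 1000 * episodes / ventilator_days

# Hypothetical counts for illustration only:
print(vap_rate_per_1000(10, 2000))  # 5.0 episodes per 1000 ventilator days
```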

Conclusions: We did not identify a link between VAP and mortality in our trauma cohort. The association with increased duration of ventilation and length of stay provides an incentive for targeted surveillance and prevention strategies in this vulnerable group. Antibiotic prophylaxis may warrant consideration, particularly for those with high ISS scores, although robust evidence to support this is currently lacking.

Table 1.

Injury severity scores and outcomes in trauma patients with or without VAP. Values are mean (SD) unless otherwise stated.

VAP (n = 12) No VAP (n = 158) p-value
Injury severity score (ISS) 40.1 (16.6) 27.6 (13.0) 0.009
Duration of mechanical ventilation (days) 19.9 (12.7) 6.4 (7.0) <0.001
ICU length of stay (days) 39.9 (25.2) 12.4 (15.4) <0.001
Hospital length of stay (days) 79.4 (71.4) 27.2 (40.1) <0.001
Survival to hospital discharge (%) 91.7 77.8 0.260

EP.100

Major Trauma assessment, give us a second!

Sam Ford1 and Paul Ferris1

1Salford Royal Foundation Trust, Salford, UK

Abstract

Introduction: Salford Royal Hospital is the lead provider of major trauma services within Greater Manchester. Between 1st January and 31st March 2017, 366 patients presented with major trauma (injury severity score greater than 16). A key component of trauma care is a thorough secondary survey following the primary survey and initial resuscitation. This audit aimed to examine whether a secondary survey was being documented and where it was being performed.

Methods: An electronic proforma is used to aid secondary survey documentation at the trust. A single auditor retrospectively reviewed a randomised sample of 100 case notes from the 366 patients who presented between January and March 2017. The place of the secondary survey and the method of documentation were noted. If the patient was admitted to a critical care environment, the time to secondary survey and the removal of contact lenses and tampons were also noted.

Results: Mean age was 59.5 years; 84% of patients arrived by ambulance, with a mean arrival time of 13:07. 12% of patients eventually identified by the Trauma Audit and Research Network as major trauma were admitted to general medicine. 34% of patients were admitted to the trauma assessment unit and 14% to critical care. 47% had no documented secondary survey; of the 53% of patients with a documented secondary survey, only 18% were documented using the secondary survey proforma. 25% of secondary surveys took place on the ward and 20% in the emergency department. The mean time to secondary survey in critical care was 24 hours. Documentation of contact lenses and tampons in eligible patients by medical staff was poor.

Discussion: The demographics of major trauma seen at Salford Royal reflect the national trend towards a more elderly population. Many of the patients are being assessed on appropriate trauma wards by staff who regularly care for trauma patients. The level of documentation of secondary surveys is disappointing despite the existence of an electronic proforma. As a result of this audit, a new electronic flag is being created to allow the trauma co-ordinators to prompt clinicians to perform and document a secondary survey. This will be incorporated into the daily trauma meeting within the trust. A further electronic solution that automatically links major trauma patients' notes to a repeat secondary survey document is being explored. This will be followed by a repeat audit in autumn 2017.

EP.101

Identification and management of tension pneumothorax in the supine trauma patient: a single-centre retrospective analysis

Amit Adlakha1, Anna Goose1 and Douglas Wilkinson1,2

1John Radcliffe Hospital, Oxford, UK

2Nuffield Department of Clinical Neurosciences, University of Oxford, John Radcliffe Hospital, Oxford, UK

Abstract

Introduction: Plain chest x-ray identification of pneumothorax, and of the subtle signs of tension, is notoriously difficult in the supine trauma patient, yet it informs the appropriate and timely management that prevents cardiorespiratory arrest and facilitates safe transfer to CT. We analysed the identification of tension pneumothoraces, and the clinical features of these cases, at our major trauma centre.

Methods: Trauma patients with pneumothoraces were retrospectively identified from trauma CT scans on the InSightWeb™ radiology system. Two clinicians (one emergency medicine and one intensive care registrar) independently recorded the presence of specific radiographic features of supine pneumothorax and measures of mediastinal shift. Inter-observer agreement was calculated using the kappa coefficient. Alongside CT reports and vital signs, an observational analysis of the ED management of these cases was performed.

Results: We identified 150 cases of traumatic pneumothorax. Four patients had a chest drain inserted prior to any imaging due to tension pneumothorax. Of 15 cases that had a chest drain inserted after a chest x-ray showing mediastinal shift, 7 had demonstrated evidence of hypotension prior to the x-ray. 13 cases had evidence of mediastinal shift on x-ray but proceeded to CT prior to chest drain insertion; 9 of these had documented episodes of hypotension. Of the 32 with chest x-ray evidence of tension, 19 were intubated for transfer, and 5 of these had no chest drain in place. Seven patients had signs of tension on CT but not on chest x-ray, though notably, none of these were hypotensive.

Forty-nine patients with non-tension pneumothoraces required chest drains, 6 of whom were intubated for CT scan, all with chest drain insertion prior to transfer. Sixty-two patients with pneumothoraces had no intervention.

We found the horizontal distance from the most lateral point of the cardiac border to the spinous process to be the most informative measurement. A right:left cardiac ratio cut-off of <1:4 or >1:1.5 for right and left pneumothoraces respectively captured all incidences of clinical tension pneumothorax in our cohort.
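One possible reading of this cut-off, sketched below, is to compute the right:left ratio of the lateral cardiac-border distances and flag ratios below 1:4 (i.e. <0.25) for right-sided pneumothoraces, or above 1:1.5 (i.e. >0.67) for left-sided ones. The function name, decimal thresholds and argument conventions are our own illustration, not the authors' implementation:

```python
def flags_tension(right_mm: float, left_mm: float, pneumothorax_side: str) -> bool:
    """Apply the reported right:left cardiac ratio cut-offs (illustrative only).

    right_mm / left_mm: horizontal distances from the most lateral point of
    each cardiac border to the spinous process on the supine chest x-ray.
    """
    ratio = right_mm / left_mm
    if pneumothorax_side == "right":
        return ratio < 1 / 4      # heart pushed left: right-border distance shrinks
    if pneumothorax_side == "left":
        return ratio > 1 / 1.5    # heart pushed right: right-border distance grows
    raise ValueError("pneumothorax_side must be 'right' or 'left'")
```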

Conclusion: This retrospective study highlights the difficulty of diagnosing tension pneumothorax in supine trauma patients in the ED. Notably, a significant proportion of patients are transferred to CT with undrained pneumothoraces and chest x-ray evidence of mediastinal shift, sometimes with hypotension and occasionally with positive pressure ventilation. Additionally, CT-reported tension pneumothoraces not identified on the initial radiograph do not appear to manifest clinically, though it is unclear whether the radiology opinion prompts timely chest drain insertion that averts cardiovascular collapse.

EP.102

Reduced Mortality with Standardised Care in Chest Trauma

Hannah Hagan1 and Ed Scarth1

1Torbay Hospital, Torquay, UK

Abstract

Introduction: Patients with chest trauma are a notoriously difficult group of patients to identify and manage. The mainstays of treatment are adequate analgesia and physiotherapy. Mortality can be over 30% in patients with co-morbidities or multiple fractures1, often as a result of suboptimal ventilation and the subsequent onset of pneumonia2. The use of chest trauma guidelines has been shown to reduce morbidity and mortality3. In our Trust, there was until recently no defined pathway for the management of patients with chest trauma.

Objectives: To introduce standardised care to all patients admitted with chest trauma.

To reduce mortality, morbidity and length of stay for patients admitted with chest trauma.

Methods: A retrospective case note review of all patients admitted with chest trauma (defined as patients with a discharge diagnosis of rib fractures, traumatic pneumothorax, traumatic haemothorax or lung contusion) over a six-month period was undertaken. This looked at a number of variables including length of stay, duration of invasive respiratory support, surgical intervention and mortality. Alongside this, a new pathway was designed. After a three-month trial of the new pathway, all the case notes from that period were reviewed against the same criteria.

Results: The sample size pre-intervention was 43 patients, and 20 post-intervention, with similar demographics.

Following implementation of the new pathway there was a reduction in all-cause mortality of 21%, with a similar reduction in admissions to the intensive care unit.

Conclusions: The data support the implementation of a standardised care pathway to improve outcomes in chest trauma. One hypothesis is that this is related to the increased number of patients being seen by physiotherapists and the Acute Pain Service, which may allow for improved ventilation; this cannot be confirmed.

References

  • 1.Battle CE, Hutchings H and Evans PA. Risk factors that predict mortality in patients with blunt chest wall trauma: a systematic review and meta-analysis. Injury 2012; 43: 8–17. [DOI] [PubMed]
  • 2.Bergeron E, Lavoie A, Clas D, et al. Elderly patients with rib fractures are at greater risk of death and pneumonia. Journal of Trauma-Injury, Infection and Critical Care 2003; 54: 478–485. [DOI] [PubMed]
  • 3.Todd S, McNally M, Holcomb J, et al. A Multidisciplinary Clinical Pathway Decreases Rib Fracture-Associated Infectious Morbidity and Mortality in High-Risk Trauma Patients. The American Journal of Surgery 2006; 192: 806–811. [DOI] [PubMed]

EP.103

Rib fractures: The challenge of implementing an analgesic protocol in a district general hospital with no thoracic surgical ward

Neil Roberts1, Emma Harrison1, James Butler1, Julia Gibb2, Rebecca Norman2, Jonathan Outlaw2, Jonathan Abeles2, Laura Shepherd1, Ruth Creamer1 and Ben Warrick1

1Royal Cornwall Hospitals Trust, Truro, UK

2University of Exeter Medical School, Truro, UK

Abstract

Background: Rib fractures are frequently seen in Emergency Departments (ED). These patients present a unique challenge at hospitals with no thoracic surgeon: they suffer respiratory complications that general surgeons are not comfortable managing, but require advanced analgesia that respiratory teams are not comfortable managing. At our hospital, thoracic epidurals are currently managed on Critical Care, and patient-controlled analgesia (PCA) only on Critical Care or surgical wards. A reactive approach currently decides the analgesic method. This audit examined current practice against a proposed protocol using a validated 'Chest Injury Score' to prospectively stratify analgesia based on risk of deterioration.

Methods: Retrospective audit of adult patients with rib fractures from trauma, admitted for active treatment to a district general hospital over a 6-month period. Patients were identified through TARN, the WebPACS imaging system and the ED software database, cross-referenced, and then imaging and notes were reviewed. Demographics, severity and characteristics of injury were recorded, along with pathway through hospital, respiratory support and analgesic requirements, and outcomes including length of stay (LOS) and 30-day mortality.

Results: 43 patients identified after review of 2461 imaging reports and 58 sets of notes. Median age 67 (range 32–96). Median 5 fractures (range 1–22). 40 (93%) had unilateral injury. Median hospital LOS 6 days (range 1–25), with median ICU LOS 3 days (range 1–7). 30-day mortality was 11.6%. 12 patients went to Clinical Decisions Unit (28%), 11 Critical Care (25%), 11 Medicine (25%), 4 general surgery (9%) and 5 to orthopaedics (12%). 3 patients deteriorated with poorly controlled pain and respiratory failure on the ward and required Critical Care admission. 10 patients (23%) received Acute Pain service referrals, 5 then received advanced interventions. 2 patients received epidurals, 16 received PCA opioid. 20 patients had chest injury score > 20, indicative of need for epidural under the new pathway, with 18 scoring 11–20 indicative of need for PCA. 14 (33%) patients had contraindications to neuraxial analgesia (spinal fracture, intracranial bleed, anticoagulation).

Conclusions: At our hospital, this group of patients is significantly under-analgesed. Changing the current ‘reactive’ analgesic protocol to one based around a risk score will lead to significant increases in advanced analgesia and require significant resource investment. The base ward outside Critical Care will need to be PCA competent in order to deliver this service. Given the high prevalence of contraindications to epidural, and predominantly unilateral injury pattern, techniques such as serratus anterior or paravertebral catheters may play a role in the future.

EP.104

Bringing the needle to the patient’s door: implementing a Pre-Hospital Emergency Medicine (PHEM) Sepsis Pack delivered by the ambulance service

Shah Mizanur Rahman1,2 and Zulfi Ahmed3,4

1Oxford University Hospitals NHS Trust, Oxford, UK

2South Central Ambulance Service NHS Trust, Thames Valley Region, UK

3Buckinghamshire Healthcare NHS Trust, Buckinghamshire, UK

4Thames Valley Air Ambulance, RAF Benson, UK

Abstract

The Enhanced Care Response Unit (ECRU) is a joint Physician and Paramedic Response Unit tasked by South Central Ambulance Service (SCAS) across the Thames Valley region. This project is coordinating an antimicrobial policy and sepsis pack across the region so that patients given a clinical or working diagnosis of sepsis by ECRU can have point-of-care broad spectrum antibiotics, blood cultures and blood gas analysis, as well as aggressive fluid resuscitation and fluid balance monitoring, initiated at the earliest practicable time, before and during transport to definitive care in the Emergency Department.

This gamut of sepsis care is now standard practice in the Emergency Department and secondary care. Traditionally, the role for SCAS clinicians has been in the recognition of these patients and blue light (emergency) transfer to the nearest appropriate Emergency Department, with fluid and oxygen administration in keeping with their skills set and formulary as defined by the Joint Royal Colleges Ambulance Liaison Committee (JRCALC).

In short, the interventions to be given/performed (outside of the paramedic/JRCALC scope of practice) are:

1. Taking blood cultures and vacutainer samples

2. Intravenous antibiotic administration e.g. co-amoxiclav, gentamicin, tazocin and meropenem

3. Aggressive fluid bolus administration (up to 30 mL/kg c.f. usual 500–1500 mL)

4. Interpretation of point of care blood gas analysis

5. Insertion of urinary catheter (where feasible) to instigate strict fluid balance measurement

These can be done before or during the emergency transfer to definitive care. This means that upon arrival at the ED, the patient has already had the gold standard interventions at the roadside, and all that is necessary is processing of the samples taken before antibiotic administration, further history and examination and ancillary investigations such as radiological imaging and usual hospital care.

We are currently liaising between all five constituent NHS hospital trusts for a joint PHEM antimicrobial policy and standard operating procedure (SOP). Point-of-care blood gas analysers and cartridges, as well as the antibiotic formulary and catheterisation kit, have already been acquired. Once this PHEM SOP has been established, we look forward to presenting our findings on the implementation process for others to learn from and use locally in the FOAM spirit, and to demonstrating the delivery of this enhanced care standard.

EP.105

A Survey Exploring the Effects of Small Group Teaching Strategy on the Intensive Care Nurse's Knowledge about Sepsis

Marika Nemeckova1 and Ben Messer1

1The Newcastle Hospitals NHS Foundation Trust, Newcastle upon Tyne, UK

Abstract

Background: Sepsis is a serious complication of infection in which the body attacks its own tissues and organs. It is a time critical condition that can be difficult to recognise due to its variable clinical presentation. If nursing staff are not aware of the early signs and symptoms, it can delay treatment, and progress to organ failure which decreases the chance of survival. Therefore, there is a need for adopting an education strategy to improve the awareness of this potentially life-threatening condition.

Aim: The primary objective was to advance nurses’ understanding of sepsis. It involved education about evidence-based recommendations such as the screening tool Red Flag Sepsis designed to improve the early recognition of sepsis. The education also focussed on the Sepsis Six, a clinical care bundle which has been shown to reduce the risk of mortality from sepsis when delivered within the first hour of diagnosis.

Methods: The survey was conducted over a 3-month interval. A pre-education questionnaire was undertaken to assess nurses' knowledge of sepsis. Following the questionnaire, education was delivered to all the nurses who took part, in the form of a sepsis presentation. The teaching was provided during nursing shifts at the bedside as small group teaching. Following delivery of the sepsis presentation, the nurses repeated the initial questionnaire.

Results: Fifteen nurses undertook the pre- and post-education survey. A score out of a maximum of 43 was calculated. Data were tested for normality using the Shapiro-Wilk test; both pre-education and post-education questionnaire data were normally distributed.

Pre-education questionnaire Post-education questionnaire
Mean 21.47 29.21
Standard Deviation 5.15 4.77

Means were compared using a paired samples t-test. There was a significant difference between mean scores pre-education and post-education (p < 0.001). The results of this survey have shown an improvement in the nurses’ knowledge about sepsis in the second questionnaire following the evidence-based Sepsis presentation.
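The comparison described above is a standard paired-samples t-test on the per-nurse score differences. A minimal standard-library sketch follows; the scores in the example are invented for illustration, not the study data:

```python
import math
import statistics

def paired_t(pre, post):
    """Paired-samples t statistic and degrees of freedom for matched scores."""
    diffs = [b - a for a, b in zip(pre, post)]   # per-subject change
    n = len(diffs)
    se = statistics.stdev(diffs) / math.sqrt(n)  # standard error of mean difference
    return statistics.mean(diffs) / se, n - 1    # t statistic, degrees of freedom

# Invented example scores; the resulting t is compared against the critical
# value for the returned degrees of freedom to obtain a p-value.
t, df = paired_t([20, 25, 18, 22, 30], [29, 31, 27, 30, 34])
```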

Conclusions: We have shown the effectiveness of small group teaching in improving the knowledge of nursing staff about sepsis. To ensure that the improvements become sustainable, it is imperative to provide resources to continue educating healthcare professionals about sepsis. Although providing education for nurses will require a certain amount of time and effort, this investment can be justified when consideration is given to the potential reduction in sepsis mortality and the costs to the NHS.

EP.106

qSOFA and Lactate for early sepsis in a hospital ward setting

Patrick Smith1, Aine McCurry1, Anya Sheltawy1 and Jerome Mccann1

1Warrington Hospital, Warrington, UK

Abstract

Introduction: Timely detection and recognition of the signs and symptoms of sepsis saves lives. At present, no diagnostic investigation detects sepsis conclusively. A recent study in the Journal of Critical Care found that serum lactate and modified SOFA (mSOFA) were independent predictors of death within 24 hours of A&E admission in patients with severe sepsis. In Warrington District General Hospital we looked at lactate and mortality in admissions to Critical Care from 2009 to 2017; our data showed a positive correlation between lactate and mortality. The role of lactate as a predictor of adverse outcomes is well established and, as these two studies suggest, lactate could be combined with predictive scoring systems to identify those patients likely to experience an adverse outcome from sepsis. These approaches have proved effective in Critical Care and A&E settings. We set out to investigate whether lactate and qSOFA could be used in the setting of acute emergencies on the wards of Warrington Hospital to predict those patients likely to experience an adverse outcome.
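For reference, the qSOFA score used here is the published Sepsis-3 bedside score: one point each for respiratory rate ≥ 22/min, systolic blood pressure ≤ 100 mmHg, and altered mentation (GCS < 15). A minimal sketch (the function name and use of GCS to represent mentation are our own conventions):

```python
def qsofa(resp_rate: int, systolic_bp: int, gcs: int) -> int:
    """qSOFA (Sepsis-3): one point per criterion met, range 0-3."""
    return int(resp_rate >= 22) + int(systolic_bp <= 100) + int(gcs < 15)

# A score >= 2 is the conventional threshold for increased risk of poor outcome
print(qsofa(24, 95, 14))  # 3
```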

Methods: Warrington Acute Care Team Nurses completed a data sheet for every Medical Emergency Team (MET) call attended, detailing the number of Red Flags each patient triggered. Each MET call was correlated with clinical documentation from the MET team Doctors, recording NEWS, Red Flags and where possible, calculating a qSOFA score. Each MET call was correlated with the relevant investigations pertaining to the patient’s assessment on the given occasion to obtain a lactate.

Results: 40 MET calls in total were studied in the period of November 2016 to May 2017 from a range of wards in Warrington Hospital. Our study found that 27.5% of patients who triggered a MET call for possible sepsis died during their admission and that 25% of patients with a raised lactate died during their admission. Of the patients who died, 63.6% had a qSOFA score greater than 1.

Conclusion: Our study has shown that in the setting of acute ward MET calls for patients with possible sepsis, lactate in combination with qSOFA is a useful tool for predicting those patients who are likely to experience an adverse outcome. These findings are in keeping with previous evidence from Critical Care and A&E and suggest that lactate and qSOFA could be used in ward settings to predict adverse outcomes until specific diagnostic investigations for sepsis become available.

EP.107

An audit of the sepsis six in a trauma hospital's Emergency Department and Intensive Care Unit

Sabrina Costa1 and Shankara Nagaraja1

1Aintree hospital, Liverpool, UK

Abstract

Background: Rivers et al. compared standard therapy with six hours of early goal-directed therapy in patients admitted to an emergency department (ED) with sepsis, before admission to ICU. The study concluded that early goal-directed therapy provides significant benefits with respect to outcome and survival in patients with severe sepsis and septic shock: in-hospital mortality was reduced from 46.5% in the standard therapy arm to 30.5% in the early goal-directed therapy arm (p = 0.009). This study forms the basis for the development of the sepsis six.

Objectives: We aimed to carry out an audit evaluating the fulfilment of the sepsis six within the ED and ITU of AUH, in order to ensure best practice and favourable outcomes.

Methods: All patients admitted to ITU with sepsis from March 2016 to September 2016 were included. A proforma was developed to collect raw data from 39 patients. Data were obtained using in-hospital patient databases. Information from the proformas was entered into an Excel spreadsheet and the desired data calculated using Excel formulas.

Standard: 80% was the agreed standard for administering oxygen, antibiotics and fluids within 1 hour, taking 2 sets of blood cultures and measuring a lactate level.

Results: Antibiotics were recorded as given in 31 patients (74.49%). 9 of these patients were given antibiotics in <1 hour (31.03%). 94.12% had a recorded lactate level measured. 27 patients had documentation for blood cultures being taken (67.23%). Documentation of 2 sets being taken was recorded in 9 patients (34.62%). 41.38% (12 patients) were administered fluids in under 1 hour. The time taken from presentation to refer to ITU ranged from 0.1 to 7.1 hours; average time taken was 2.79 hours. Once referred to ITU 73.3% were reviewed in <1 hour. The average time taken from ITU referral to ITU admission was 3.02 hours.

Conclusions and implications: Overall, the steps included in the sepsis six are being conducted in the ED, yet time constraints and poor documentation are preventing these actions from being completed in a timely manner. Additional data collected from this audit show that ITU is acting in a timely manner.

Sepsis packs including equipment required to successfully evaluate the sepsis six such as blood culture bottles have been introduced. A re-audit is intended in October to evaluate these changes.

EP.108

Variation of care pathways for identifying patients with suspected sepsis across UK NHS Hospital Trusts

Emmanouela Kampouraki1, Aaron Jesuthasan2, Joy Allen3, Michael Power3, Ben Messer4 and Sara Graziadio3

1Institute of Cellular Medicine, Newcastle University, Newcastle, UK

2Faculty of Medical Sciences Newcastle University, Newcastle upon Tyne, UK

3Diagnostic Evidence Cooperative, Newcastle upon Tyne, UK

4Royal Victoria Infirmary, Newcastle upon Tyne, UK

Abstract

Introduction: Sepsis is a complex syndrome and remains a significant concern for clinicians and public health. In 2016, the definitions of sepsis and septic shock were refined, and NICE guidelines (NG51) were published to guide their identification and management.

Hospitals and Trusts have adopted the guidelines to different extents, resulting in a variation of local protocols for identifying and managing sepsis. The aim of the analysis presented was to explore similarities and differences between sepsis care pathways used within the North East of England, and compare these with the NICE guidelines.

Methodology: Guidelines were collected from the UK Sepsis Trust as well as four local NHS Trusts: Gateshead Health, Newcastle upon Tyne Hospitals, North Cumbria University Hospitals and South Tees Hospitals. The procedures used to identify and manage sepsis were compared between the different sources as well as with the NICE guidelines. A schematic representation of the care pathways for each source was drawn to facilitate comparisons in practice.

Ongoing work to collect guidelines from other areas of the UK is being undertaken to explore potential variation outside of the region.

Results: “Red flag sepsis” criteria for the identification of sepsis were used by three of the five sources (60%): Newcastle upon Tyne, North Cumbria Hospitals and the UK Sepsis Trust. NICE guidelines recommended similar clinical criteria, but a lower threshold to start treatment and request medical review. The importance of localising infection to modify antibiotic therapy was mentioned in the guidelines of four of the five sources (80%): Gateshead, North Cumbria, South Tees Hospitals and the UK Sepsis Trust. NICE guidelines also recommended investigating the source of infection to manage sepsis appropriately. Involvement of the critical care team was incorporated within the pathways of four of the five sources (80%): Gateshead, North Cumbria, South Tees Hospitals and the UK Sepsis Trust. This is in accordance with the NICE guidelines, which encouraged senior medical review.

Conclusion: The guidelines for identifying and managing sepsis were similar between all sites. They were mostly concordant with NICE recommendations, although higher thresholds for commencing treatment were evident. Further evaluation of variation in protocols and foreseen delays in the current management of sepsis is being conducted via dissemination of a survey to clinicians in the UK. These results are preliminary and will be integrated with information obtained from guidelines of Trusts outside of the region and results from the disseminated survey in the upcoming months.

EP.109

Aciclovir: An Innocuous Drug?

Philip Harrington1 and Chandresh Patel1

1The Dudley Group NHS Foundation Trust, Dudley, UK

Abstract

Aciclovir is a nucleoside antiviral agent with numerous indications, being active against herpes zoster, varicella zoster and herpes simplex in both skin and central nervous system infections [1]. Given its broad spectrum of antiviral cover, it is common practice to commence patients on treatment dose aciclovir when suspecting meningo-encephalitis.

A 50-year-old, fit and active male was admitted with a one-week history of altered behaviour. The only past medical history known was a vague psychiatric illness. Severe unmanageable agitation required sedation, intubation and ventilation in the intensive care unit within hours of arrival. He received broad spectrum antibiotics and antivirals to cover the possibility of a central nervous system infection. Following trust guidelines, the patient received treatment dose aciclovir as the broad spectrum antiviral. Initial blood gas analysis revealed a metabolic acidosis with pH 6.9 and lactate 12. Laboratory blood tests, computed tomography head scan and cerebrospinal fluid analysis were unremarkable in the first instance. After 48 hours of admission the patient developed an acute kidney injury. On day five of admission dialysis was considered as the patient's urea had risen to 18.9 mmol/L and creatinine to 563 µmol/L (baseline urea 4.8 mmol/L and creatinine 103 µmol/L). All other causes of this acute decline were excluded. Following discussion with microbiology colleagues, and on balance of risk, aciclovir was stopped. The patient's renal function returned to baseline and clinical improvement was seen. Further investigation unearthed a history of psychiatric illnesses. The patient was discharged home with a diagnosis of an acute psychotic illness.

This case highlights that, whilst reasonably widely used, aciclovir is not an innocuous drug. A significant iatrogenic renal injury was caused as a side effect of empirical therapy. Aciclovir predominantly undergoes renal excretion, and it is recognised that a crystal uropathy may ensue, causing a significant decline in renal function [2], often with resolution on stopping the drug. Whilst organic conditions are often highest on a differential diagnosis list, it is important to consider the possibility of psychiatric illness mimicking organic pathology, particularly when considering potentially toxic medications. Continued empirical use of aciclovir in the absence of positive test results warrants caution.

References

  • 1.Joint Formulary Committee. British National Formulary. 73rd ed. London: BMJ Group and Pharmaceutical Press, 2017.
  • 2.Whitley RJ, Alford CA, Hirsch MS, et al. Vidarabine versus acyclovir therapy in herpes simplex encephalitis. New England Journal of Medicine 1986; 314: 144–149. [DOI] [PubMed]

EP.110

Management of profound hyponatraemia in a patient requiring renal replacement therapy for acute kidney injury

Danielle Eusuf1 and Indy Kapila1

1University Hospital of South Manchester, Manchester, UK

Abstract

Hyponatraemia is the most common electrolyte abnormality in hospital. Profound hyponatraemia (Na <125 mmol/L) [1] can be life-threatening, complicated by cerebral oedema, seizures and cardiac arrest. However, rapid correction carries its own risk of central pontine myelinolysis.

Acute kidney injury (AKI) is also common in ICU, with an incidence of 10–25% in critical illness [2]; 3–5% of these patients develop severe AKI needing renal replacement therapy (RRT) [3].

RRT can cause rapid correction of electrolytes, which can be problematic in the management of chronic profound hyponatraemia. We present a case highlighting the difficulties with managing a patient who had a profound symptomatic acute on chronic hyponatraemia, with a severe AKI requiring RRT.

Mr X, a 43-year-old male, was admitted with a 1-week history of diarrhoea and vomiting. He presented clinically hypovolaemic, with symptomatic profound hyponatraemia (Na 102 mmol/L) and an AKI (urea 37.8, creat 968). He had a history of alcohol excess with poor nutritional status, so it was unclear how chronic his hyponatraemic state was, and there were concerns that rapid correction of the hyponatraemia could precipitate central pontine myelinolysis.

After seeking advice from our tertiary renal centre, we were advised not to commence RRT until his serum sodium was at least 127 mmol/L, as RRT would otherwise correct the sodium too quickly. We commenced 1.8% hypertonic saline, aiming to increase his sodium by 8–10 mmol/L in the first 24 hours. His sodium climbed slowly over the next 48 hours; however, his renal function continued to deteriorate. We commenced oral sodium bicarbonate to control his metabolic acidosis. However, after 5 days of medical management, he deteriorated clinically with worsening acidosis, fluid overload and pulmonary oedema, requiring urgent RRT. His sodium had by this point corrected to 127 mmol/L, so he was commenced on RRT (continuous haemodiafiltration). His sodium climbed rapidly from 127 to 139 mmol/L over 3 days, with no adverse clinical effect. His renal function recovered completely and he was discharged from ICU 14 days after admission.

We aim to address the interesting questions this case highlighted regarding how we manage hyponatraemic patients who require urgent RRT. In this case we were able to manage the patient medically with hypertonic saline, sodium bicarbonate and non-invasive ventilation until his sodium had corrected to a satisfactory level. But what if this had not been possible and he had needed urgent RRT whilst severely hyponatraemic?
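The correction limit discussed in this case can be expressed as a simple calculation; a minimal illustrative sketch (not part of the case report), assuming the commonly cited ceiling of 8–10 mmol/L per 24 hours for chronic hyponatraemia:

```python
# Illustrative check of an observed sodium correction against a maximum
# correction rate (here 10 mmol/L per 24 h, the upper end of the 8-10 mmol/L
# target used in chronic hyponatraemia to reduce the risk of demyelination).

def correction_rate_ok(na_start, na_now, hours_elapsed, max_per_24h=10.0):
    """Return True if the correction, scaled to 24 h, is within the limit."""
    if hours_elapsed <= 0:
        raise ValueError("hours_elapsed must be positive")
    rate_per_24h = (na_now - na_start) * 24.0 / hours_elapsed
    return rate_per_24h <= max_per_24h

# The medical-management phase: 102 -> 127 mmol/L over roughly 5 days
print(correction_rate_ok(102, 127, hours_elapsed=120))  # ~5 mmol/L per 24 h
# The RRT phase: 127 -> 139 mmol/L over 3 days
print(correction_rate_ok(127, 139, hours_elapsed=72))   # ~4 mmol/L per 24 h
```

Both phases of this patient's correction, averaged over their duration, fall within the stated target, consistent with the absence of adverse clinical effect.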

EP.111

CRRT management. A retrospective audit

Katharine Elliott1, Anders Hulme1, James Moloney1, Justas Mazunaitis1 and Ian Roberto Carrasco Barber1

1BHRUT NHS Trust, Romford, UK

Abstract

Introduction: Continuous renal replacement therapy (CRRT) is a life-saving treatment in acutely unwell patients. Indications include: acid-base/electrolyte imbalance, fluid/toxin removal, poisoning and uraemia with associated complications, as well as replacement in dialysis patients with CKD requiring ITU admission. Although many patients with AKI recover kidney function sufficiently to be independent of RRT, discontinuation of RRT in AKI has received little attention in the literature.

Aims:

• Review the current CRRT management in our ITU

• Design a standardised CRRT protocol based on the results

• Implement a protocol subsequently

Background:

• RRT is discontinued either because: i) intrinsic kidney function has recovered to the point that it is adequate to meet patient needs, or ii) RRT is no longer consistent with the goals of care (KDIGO).

• In CRRT, continuous solute clearance of 25–35 ml/min will stabilise serum markers after 48 hours.

Method:

• Review actual practice in Critical Care at Queen’s Hospital

• Retrospective review of 52 sets of notes from patients having received CRRT in 2015

Results:

1. Most patients required between 1 and 3 filters; however, one patient required 22.

2. The most common cause for discontinuation of filtration was filter clot (55%). No documentation was found in the notes about the management of early filter clotting (clotting within the first 12 hours of filter use).

3. Over-filtration was common: baseline creatinine was regularly surpassed, with a failure to adjust the effluent dose.

4. The KDIGO Clinical Practice Guidelines suggest placing the vascath preferably in the right internal jugular (RIJ) vein and avoiding subclavian access. We found 47% of lines were RIJ, 40% LIJ, 12% femoral and 1% left subclavian.

5. Documentation of a prescription for CRRT was found in 18% of cases, i.e. 21 of a total of 117 cases. Equally, the indication for CRRT was seldom documented.

6. Filter life with citrate anticoagulation was double that with heparin/Flolan (35 vs 17). Documentation errors were common, with unclear timings for filter changes.

Conclusions:

• Need for clearer documentation as to why CRRT was started, why the RIJV was not used and why the filter was withheld or stopped.

• Over-filtration is common, i.e. baseline creatinine is achieved yet filter flow is continued.

• CRRT should be prescribed, with a daily review of the effluent dose.

Recommendations:

• Teaching for both nursing staff and junior doctors to emphasise the points raised in the conclusions.

• Rewrite the protocol to include flow adjustment once the indication is controlled.

• Improve filter documentation from medical and nursing perspective.
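The daily effluent-dose review recommended above can be sketched as a simple check; a hypothetical illustration (the weight, rate and KDIGO target of 20–25 ml/kg/h are not taken from the audit data):

```python
# Illustrative daily check of a CRRT effluent dose against the KDIGO-
# recommended delivered dose of 20-25 ml/kg/h. Example figures are invented.

def effluent_dose_ml_kg_h(effluent_rate_ml_h, weight_kg):
    """Effluent dose normalised to body weight (ml/kg/h)."""
    return effluent_rate_ml_h / weight_kg

def within_kdigo_target(dose_ml_kg_h, low=20.0, high=25.0):
    """True if the delivered dose sits inside the recommended band."""
    return low <= dose_ml_kg_h <= high

# e.g. an 80 kg patient with a total effluent rate of 1800 ml/h
dose = effluent_dose_ml_kg_h(effluent_rate_ml_h=1800, weight_kg=80)
print(dose, within_kdigo_target(dose))  # 22.5 ml/kg/h, within target
```

A check of this kind, repeated daily, would flag both under-dosing and the over-filtration pattern described in the results.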

EP.112

Have you ‘Pick’ed up everything? A unique case of flash pulmonary oedema

Raluca Ene1, Venkat Sundaram1 and Saba Iqbal1

1Glan Clwyd Hospital, Rhyl, UK

Abstract

Case history: An 86 year old lady was admitted to our hospital with a lower respiratory tract infection, hyponatraemia and decompensated heart failure. She had a complex medical history, with a background of treatment-resistant hypertension, ischaemic heart disease, peripheral vascular disease, congestive heart failure, type 2 diabetes and chronic kidney disease. On the ward she was treated for both chest infection and cardiac failure. She developed new-onset atrial fibrillation (AF) and was on a substantial dose of bisoprolol. By day 4 on the ward the patient was chatting and mobilising. At around 3 am the next morning she complained of sudden-onset shortness of breath. Her oxygen saturations were 70% on 15 litres/min oxygen, her blood pressure was markedly elevated at 160/108 mmHg and she was in fast AF (170/min). During assessment by the critical care team she deteriorated very quickly into cardiac arrest. After successful CPR and intubation, florid pulmonary oedema was noticed. The patient was transferred to intensive care. The admission echocardiogram showed mild left ventricular (LV) systolic and diastolic dysfunction, an ejection fraction of 45–55% and a pulmonary artery pressure of 50 mmHg. In less than 24 hours she was extubated onto CPAP, but she desaturated again a few hours later. The subsequent chest X-ray showed recurrence of pulmonary oedema. Reviewing the patient's history, we found evidence of left-sided renal artery stenosis (RAS) on an old MRI. Since the pulmonary oedema was disproportionate to the degree of LV impairment, we decided to reinvestigate the degree of RAS. The repeat CT angiogram showed a 60–80% stenosis of the left renal artery. She had a renal artery stent placed the next day. Her pulmonary oedema was much more responsive to treatment after the intervention and she was discharged to the ward a few days later.

Discussion: Flash pulmonary oedema (FPO) is a general term used to describe a particularly dramatic form of acute decompensated heart failure. The unique entity of bilateral RAS with FPO was termed "Pickering Syndrome". FPO can also, rarely, occur in unilateral RAS. The mechanism by which RAS causes pulmonary oedema is not well understood, but the renin-angiotensin system appears to be responsible for the renovascular hypertension and FPO. Labile hypertension, progressive renal failure and FPO are strong indications for revascularisation therapy in RAS.

EP.113

Acute Kidney Injury awareness among intensivists: Survey in Alexandria University hospital (Egypt)

Ashraf Roshdy1,2 and Ferial Youssef3

1General ICU – Broomfield Hospital – Mid Essex NHS Trust, Chelmsford, UK

2Critical Care Department – Alexandria University, Alexandria, Egypt

3Pediatric Nephrology Unit – Alexandria University Children Hospital, Alexandria, Egypt

Abstract

Introduction: Despite its association with morbidity and mortality in intensive care units (ICU), reports show that only 50% of patients with acute kidney injury (AKI) received adequate care, and in about 30% the AKI was avoidable.(1,2) This mandates improving doctors' awareness and implementing education programmes for early AKI management, in order to limit mortality.(1)

Objectives: To assess medical staff knowledge and awareness of AKI in a university hospital.

Methodology: A paper-based survey of 30 questions administered to intensivists in Alexandria University hospital (Egypt).

Results: Thirty-nine doctors completed the survey (11 consultants, 6 specialists and 22 residents), with a mean ICU experience of 6.5 years. Most doctors (28/39) had not read the KDIGO guidelines. The RIFLE criteria were most commonly used for diagnosis (23/39). Most doctors (27/39) considered urine output the most important AKI marker. Renal replacement therapy (RRT) was started upon detection of hyperkalaemia or end-organ effects (44% and 36% respectively), with little weight given to biochemical markers (urea and creatinine). All doctors except one considered a fluid challenge (70% for all AKI patients, 28% only if fluid responsive). About half considered an early diuretic trial in AKI (mean ± SD duration 14.4 ± 14.3 hours, range 4–72 hours). Furosemide was the diuretic of choice, with doses ranging between 20 and 2000 mg/day and a mean ± SD dose of 433 ± 585 mg. Renal-dose dopamine was used by 80% of doctors, with nearly a quarter (9/39) using it more than 50% of the time. Thirty-four doctors (87%) sought a nephrologist's opinion upon AKI recognition. Most doctors (29/39) ordered ultrasound upon diagnosis of AKI, but seldom urinary electrolytes (22/39 requested them in less than 25% of cases). Concerning RRT, haemo(dia)filtration was preferred by 51%, haemodialysis by 40%, and 8% found all modalities equal. Diuretics were used by 40% to promote renal recovery.

Conclusions: Medical practice and knowledge regarding AKI vary and in many instances are not based on best evidence. Guiding protocols and learning programmes should be developed, implemented and then tested for better results.

Grant Acknowledgement: None

References

  • 1.Stewart J, Findlay G, Smith N, et al. Adding insult to injury: a review of the care of patients who died in hospital with a primary diagnosis of acute kidney injury (acute renal failure). London: National Confidential Enquiry into Patient Outcome and Death, 2009.
  • 2.Prescott AM, Lewington A, O'Donoghue D. Acute kidney injury: top ten tips. Clin Med 2012; 12: 328–332.

EP.115

Winning at Weaning – empowering ICU nurses to autonomously support the respiratory weaning process through implementation of a guideline

Leanne Franklin1, Helen Kenneth1, Joanne Harkcom1, Maria Salvador1, Maria Baldovino1, Louise Waters1, Sally Scott1, Louise Dodd1 and Matthew Sames1

1Buckinghamshire Healthcare NHS Trust, Buckinghamshire, UK

Buckinghamshire NHS Trust – Stoke Mandeville & Wycombe Intensive Care Units, UK

Abstract

Background: Prolonged weaning from mechanical ventilation is a recognised problem challenging many Intensive Care Units and is associated with complications such as ventilator-acquired pneumonia (VAP), increased mortality and higher overall healthcare costs. Success rates and outcomes vary, but early weaning adopting a nurse-driven and/or protocolised approach can accelerate the process, subsequently reducing total ventilator days. Vast inconsistencies in approach prompted the creation of our multi-disciplinary respiratory weaning group, which consisted of nurses, doctors and physiotherapists.

Objective: The group's aim was to empower the bedside nurse to autonomously support the weaning process by designing a guideline aiding a successful and consistent approach, thus improving patient outcomes and overall experience.

Methods: Data were initially gathered through an extensive literature search, which underpinned the guideline. Three simple flow diagrams were designed, categorised by tube type and duration, plus supplementary individualised holistic goal documents ensuring patient-specific care. Data were collected pre- and post-implementation from the nurses and consultants by questionnaire. A bedside audit reviewed compliance post-implementation.

Results: Prior to implementation of the guideline, only 50% of patients had a written weaning plan. Following implementation, 71% of nurses felt the guideline empowered them and 81% were confident in progressing or stopping based on the readiness-to-wean criteria. 97% of nurses found the holistic goals beneficial and 93% had weekly holistic goals set. Comparison of ICNARC data (for the periods before and after) revealed a 1.5 day reduction in ventilator days post-implementation.

Conclusion: Many variables create difficulties in directly attributing implementation of the guideline to the reduction in ventilator days. However, its use has empowered the nurses and resulted in fewer inconsistencies.

EP.116

A multidisciplinary approach to lung protective ventilation in a mixed District General Hospital Intensive Care Unit

James Goddin1, Martin Cole1 and Mehdi Raza1

1Luton and Dunstable NHS Foundation Trust, Luton, UK

Abstract

It has been widely accepted, following the ARDSNet trials and subsequent research, that lung protective ventilation strategies are the gold standard for patients with ARDS on Intensive Care Units. New evidence and meta-analyses suggest that adopting these strategies in the wider ICU and peri-operative environment leads to improved patient outcomes. Some studies suggest that up to 24% of patients with normal lungs, ventilated for 2 or more days in an ICU setting, will develop ALI/ARDS. We therefore decided to look at how "well" we ventilated our ICU patients in a District General Hospital with a mixed ICU patient population.

We performed a retrospective audit of all patients over 16 years old who were admitted to our ICU and ventilated for more than 24 hours on a mandatory ventilator mode. We collected data from 50 patients, representing more than 1000 ventilator hours, admitted between December 2015 and March 2016. Data collected included patient demographics, type of admission, ventilator mode, ventilatory parameters (mean tidal volume, time spent over-ventilated, mean PEEP, peak pressures) and arterial blood gas results. We also identified patients labelled as having ARDS within this cohort for sub-group analysis.

The majority of patients were over-ventilated during the first 24 hours of their ICU admission: on average by 30%, and for greater than 16 hours of day 1. Patients were ventilated with a mean tidal volume of 8 ml/kg. PEEP and peak pressure were mostly within recommended limits, but there was minimal use of permissive hypercapnia in the ARDS group, and a number of patients (37% of all patients) had peak pressures >30 cmH2O.

A number of recommendations and actions were undertaken following this audit. These included measuring all patients on admission to our ICU to calculate ideal body weight, and consultant-led review of tidal volume targets and ventilation strategy on daily ICU ward rounds. A teaching programme was developed for delivery to new starters in ICM and to ICU nursing staff. Furthermore, an MDT Ventilation Working Group has been established to push forward good practice, and tidal volume targets have been integrated into the Unit's daily safety briefing. A re-audit was undertaken during 2016 following these interventions. Mean tidal volumes decreased, there was less ventilation over 10 ml/kg, and recognition of extreme over-ventilation improved. Mean percentage over-ventilation improved from 30% to 19%, with 59% fewer patients with peak pressures >30 cmH2O than previously.
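The ideal-body-weight step recommended above follows the standard ARDSNet (Devine) formulae; a minimal sketch, with a hypothetical example patient (height and sex are illustrative, not audit data):

```python
# Predicted (ideal) body weight per the standard ARDSNet formulae, and the
# corresponding 6-8 ml/kg lung-protective tidal volume band.

def predicted_body_weight_kg(height_cm, male):
    """Devine formula: 50 kg (male) / 45.5 kg (female) + 0.91 per cm over 152.4 cm."""
    base = 50.0 if male else 45.5
    return base + 0.91 * (height_cm - 152.4)

def tidal_volume_range_ml(pbw_kg, low_ml_kg=6.0, high_ml_kg=8.0):
    """Target tidal volume band in ml for a given predicted body weight."""
    return pbw_kg * low_ml_kg, pbw_kg * high_ml_kg

# e.g. a 175 cm male: PBW ~70.6 kg, so roughly 423-565 ml
pbw = predicted_body_weight_kg(175, male=True)
lo, hi = tidal_volume_range_ml(pbw)
print(round(pbw, 1), round(lo), round(hi))
```

Basing the tidal volume band on predicted rather than actual body weight is the point of the admission measurement: actual weight systematically over-estimates lung size in heavier patients.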

EP.117

Airway Pressure Release Ventilation (APRV) in a District General Intensive Care Unit- An Audit of our practice

Runruedee Hoontanee1, Esme Lewis1 and Michael Reay1

1The Dudley Group NHS Foundation Trust, Dudley, UK

Abstract

APRV is a pressure-limited, time-cycled, inverse-ratio ventilatory mode allowing the patient to breathe throughout the ventilatory cycle. APRV is used as a rescue or secondary mode for patients with poor compliance, such as in ARDS.

The principle is an open lung, minimising ventilator-induced lung injury (VILI) and maintaining haemodynamic stability in patients with non-compliant lungs. Unlike PEEP (an expiratory flow resistor), the release phase of APRV increases expiratory flow rates. This offers a protective advantage whilst recruiting lung units and optimising functional residual capacity.

We are a DGH and ventilate approximately 220 patients per year. Our unit has been using the APRV mode since 2012. This is an audit of our practice over the last 4 years.

Methods: Our electronic charting system, Philips ICIP, was interrogated, and patients on APRV mode for over a 6 hour period between 2012 and 2016 were extracted. Demographics, baseline data such as admitting APACHE score and PaO2/FiO2 ratios, setup parameters, weaning, complications (namely ventilator-induced lung injury (VILI)) and ICU outcomes were recorded.

Results: 31 patients with mild to moderate ARDS (PaO2/FiO2 ratios ranging 14–26) were ventilated with APRV. Median duration of APRV ventilation was 52 hours (range 6–326 hours). There was wide variance in initial setup and wean plans.

7 patients (22.6%) had complications; 4 (12.9%) were serious, such as pneumothorax or severe surgical emphysema. Patients with complications had longer ventilator days and ITU lengths of stay (8 & 10 vs 15 & 20) (Fig 1).

Fewer than 50% of patients had a demonstrable wean. Patients with no demonstrable wean [RR 2.2 (p = 0.209, 95% CI 0.63–7.97)], and those with tidal volumes over the ARDSnet criterion of 6 ml/kg for over 60% of their time on APRV ventilation [RR = 4.5 (p = 0.042, 95% CI 1.04–19.68)], were observed to have a higher risk of complications.
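Relative risks like those quoted above are derived from a 2×2 table of exposure (e.g. no demonstrable wean) against outcome (complication); a minimal sketch with invented counts, since the abstract does not report the underlying table:

```python
# Relative risk from a 2x2 table: risk in the exposed group divided by risk
# in the unexposed group. Counts below are illustrative only, not audit data.

def relative_risk(events_exposed, n_exposed, events_unexposed, n_unexposed):
    """RR = (a / n1) / (c / n2) for events a, c in groups of size n1, n2."""
    risk_exposed = events_exposed / n_exposed
    risk_unexposed = events_unexposed / n_unexposed
    return risk_exposed / risk_unexposed

# e.g. 5 complications among 15 exposed vs 2 among 16 unexposed
print(round(relative_risk(5, 15, 2, 16), 2))  # 2.67
```

With the small group sizes seen in this audit, the accompanying confidence intervals are wide, which is why an RR of 2.2 can still span 1 (p = 0.209) while 4.5 narrowly excludes it.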

Conclusions: The limitations of this audit were principally its small study population and the heterogeneity of the underlying severity of pneumonitis; these factors would almost certainly have influenced patient outcomes. However, our audit does highlight the importance of weaning and of incorporating a protective ventilation strategy to reduce VILI in this high-risk patient group. We have since introduced a guideline for APRV ventilation and have a training package in place.

EP.118

EZPAP – a new adjunct for respiratory physiotherapy?

Sarah Elliott1

1Medway NHS Foundation Trust, Gillingham, UK

Abstract

Introduction: EZPAP is a hand-held positive pressure device that amplifies an input flow of air, providing a larger flow and volume with less effort than an unsupported inspiration, as well as positive expiratory pressure on expiration. It is marketed as an adjunct to respiratory physiotherapy to increase lung volume, prevent atelectasis, clear secretions and improve gas exchange. However, there is a lack of published research analysing the use of EZPAP for respiratory physiotherapy.

Methodology: Twenty-five patients who used the EZPAP device were audited within a District General Hospital (DGH). Data were collected on: diagnosis, respiratory physiotherapy problem and rationale for treatment, number of treatments and outcome of intervention. Physiological parameters such as respiratory rate, SaO2, oxygen demand, auscultation and palpation findings were recorded subjectively. Additionally, qualitative data from the patients and physiotherapists on their experience of using the device were collected, and physiotherapists were asked why they chose EZPAP over other standard respiratory physiotherapy techniques.

Results: EZPAP was effective in increasing lung volume, as assessed by auscultation and palpation, in post-abdominal-surgery patients. It was effective in clearing secretions in fatigued patients with ineffective coughs; in preventing possible atelectasis in patients with neurovascular disorders or limited by bed rest, who did not require positive pressure ventilation to maintain their respiratory status; and as a positive pressure treatment to improve gas exchange in patients with pneumonia. It has a place in all medical directorates, including critical care and paediatrics, and can be used for both acute and long-term conditions.

Conclusions: EZPAP is an easy-to-use and effective physiotherapy adjunct for treating acute and chronic respiratory conditions in adult and paediatric patients within a DGH, with high compliance with treatment, and should be considered as an additional adjunct to standard respiratory physiotherapy techniques and devices. Further investigation is required into its use as a home or community therapy option.

EP.119

Minimum standards of clinical practice for physiotherapists working in critical care settings in United Kingdom: A modified Delphi technique

Paul Twose1,2, Una Jones2 and Gareth Cornell3

1Cardiff and Vale University Health Board, Cardiff, UK

2School of Healthcare Sciences, Cardiff University, Cardiff, UK

3Sheffield Teaching Hospitals NHS Foundation Trust, Sheffield, UK

Abstract

Introduction: Across the United Kingdom, physiotherapy for critical care patients is provided 24 hours a day, 7 days per week. To achieve this, the service relies on non-respiratory specialists of varying clinical grade and experience. There is a national drive to standardise the knowledge and skills of non-specialist physiotherapists managing patients within critical care. Such standardisation will support training, reduce variability in clinical practice and clarify on-call competencies.

The aim of this study was to explore the minimum standards of clinical practice required by physiotherapists working in critical care within the UK.

Methods: A modified Delphi technique using a questionnaire developed in Australia and New Zealand (Skinner et al. 2016) was used to obtain consensus on the minimum standards of physiotherapy clinical practice in critical care units in the UK. Ethical approval was obtained from Cardiff University.

Participants were recruited via a variety of sources including ICS, ACPRC and CSP, and were eligible if they had at least 3 years in a senior physiotherapy position working in critical care.

The questionnaire, originally containing 214 items, was completed over three rounds using Bristol Online Surveys over a 6-month period. Items with no consensus were carried into later rounds, along with any new items suggested.

Results: 114 physiotherapists, from 142 eligible, participated in the first round (103 clinicians, 11 academics) of the study. Physiotherapists were recruited from across England (96), Wales (8), Scotland (7), Northern Ireland (1) and Jersey (1). Rounds 2 and 3 were completed by 102 (92/10) and 92 (82/10) physiotherapists respectively.

The results for each round and end results are shown in figure 1.

Figure 1: Items reaching consensus during each round

          Number of Items   Essential   Not Essential   No Consensus
Round 1         214             91            58              65
Round 2          73             11            13              49
Round 3          51              5            12              34
Total           224            107            83              34

In total, 224 items were included: 107 were deemed essential as a minimum standard of clinical practice; 83 were not essential and consensus was not reached for 34 items.

Analysis/Conclusion: This study identified 107-items of knowledge and skills that are essential as a minimum standard for clinical practice by physiotherapists working in UK critical care units.

The findings of this study require dissemination which may support training programmes in both higher education and the health service to reduce variability in clinical practice. Further work is needed to compare the findings with those from Australia and New Zealand.

EP.120

Exploring the Intensive Care Medicine (ICM) & Acute Internal Medicine (AIM) interface through trainee-led multi-disciplinary procedural skills and high-fidelity simulation training – PILOT PROJECT

Andrew Achilleos1 and Nick Murch2

1University Hospital Lewisham, London, UK

2Royal Free London NHS Foundation Trust, London, UK

Abstract

Introduction: Multi-disciplinary simulation-based medical training is an established teaching methodology for developing technical and non-technical skills, yet exposure to simulation is often single-specialty. Furthermore, senior-doctor-led simulation may not evolve rapidly alongside the requirements of current trainees. We explored peer-led simulation training combining higher trainees in ICM and AIM.

Objectives: Despite ICM and AIM interfacing in the care of acutely ill patients, combined training opportunities are scarce in higher training and many interactions are in the midst of emergent and stressful situations. We sought to provide a supportive learning environment where procedural skills common to both specialties could be learned and developed together, followed by high-fidelity simulation that called on the expertise of both specialties working in teams to tackle complex clinical problems.

Methods: A cohort of 8 trainees (4 AIM, 4 ICM) participated in this course. Procedural skills were demonstrated by a faculty of trainees in a range of acute and medical specialties prior to the course participants practicing with peer-led supervision on simulated models. All procedures chosen appear on the eportfolio curriculum for both specialties. This was followed by high-fidelity multi-disciplinary simulation involving teams of AIM and ICM trainees. Pre- and post-course questionnaires were collected to gauge trainees’ confidence in performing a variety of procedures, their experience of working with colleagues in other acute specialties and of multi-disciplinary simulation training.

Results: 75% of trainees reported difficulties in obtaining procedural competencies, due predominantly to the increase in procedures being performed by radiology and other specialties (75%) and poor access to simulated training with models (50%). Confidence in performing all procedures increased, including those with which all trainees had significant experience, such as central venous catheters (8.5→8.9/10), and those none had ever performed before, such as Sengstaken-Blakemore tubes (2.5→6.4/10). 100% of trainees stated that multi-disciplinary simulation was a superior learning experience compared with single-specialty simulation.

Discussion: This course demonstrated high trainee satisfaction and improved trainees' confidence with a range of procedural skills. Trainee-led courses may allow better tailoring towards desirable and hard-to-achieve skills and provide a flat hierarchy for learning. We propose the expansion of multi-disciplinary simulation as a means of improving the training and working interface between acute specialties.

EP.121

A new simulated experiential learning initiative: Improving trainee confidence and encouraging mental modelling

Helen Cronshaw1, Laura Vincent1, Claire Colebourn1, Tracy Pearse1 and Helen Surgenor1

1John Radcliffe Hospital, Oxford, UK

Abstract

Simulation is an increasingly utilised and effective teaching method for postgraduate medical training. There is evidence that in-situ simulation training, delivered within the actual clinical environment, provides a more realistic learning experience, and can be delivered more cost-effectively and reliably within the constraints of rotas and working hours restrictions.

Our junior Intensive Care Medicine (ICM) trainees have expressed a low level of confidence in managing acute, complex clinical cases at presentation, particularly where there is diagnostic uncertainty. In response to this identified training need, we created a mobile simulation scenario in which a small group of trainees were required to manage the care of 'an acutely unwell young adult at the front door'. They each took the lead at a consecutive stage in the care pathway, from the initial presentation in the Emergency Department (ED), to the transfer to the CT scanner and then on to the Intensive Care Unit (ICU). At each stage the leading trainee was required to co-ordinate the multi-disciplinary team (MDT) and direct the ongoing acute management of the patient, whilst simultaneously recognising and treating a 'critical incident'. This was followed by an MDT feedback discussion facilitated by the trainers, in which participants could discuss their clinical management and decision-making, and also describe their positive and negative emotional experiences and how these impacted on their learning.

The trainees had, on average, 1.5 years of ICM training and 8 experiences of being team leader prior to taking part. Initial results suggest trainees found the scenario highly complex, and most found the feedback session the most useful aspect. The nurses who took part were a mixture of ED and ICU staff with a range of 1–20 years' experience. The nurses found it particularly useful to follow the patient journey and to experience the other roles of MDT members in different environments.

We collected data on levels of confidence surrounding initial assessment/management, intra-hospital transfer, critical incident recognition and treatment, multi-tasking (doctors only), leading the team (doctors only), dealing with remote environments and the feedback discussion. Confidence remained the same or increased for each aspect, and participants either 'agreed' or 'strongly agreed' that it was a useful learning experience and would recommend the initiative to others.

We would like to share details of our pilot project at this early development stage, discuss our initial findings, and outline our plans to develop this into part of our formal postgraduate training programme.

EP.122

The role of Hi Fidelity simulation in designing emergency airway management algorithms: The experience of the UK National Tracheostomy Safety Project

Hannah Donaldson1, Gareth Hughes1,2,3, Catherine Doherty2,4, John Moore4, Lucy Bates2,5, Dougal Atkinson4 and Brendan McGrath1,2,3

1University Hospital South Manchester NHS Foundation Trust, Manchester, UK

2University of Manchester, Manchester, UK

3Manchester Academic Health Sciences Centre, Manchester, UK

4Central Manchester NHS Foundation Trust, Manchester, UK

5Royal Bolton Hospital, Bolton, UK

Abstract

Introduction: Medical simulation has an established role in teaching airway emergency algorithms1 but its utility in algorithm development has not been studied. Our aim was to use ‘high-fidelity’ simulation to help construct an emergency tracheostomy management algorithm, challenging the established ‘expert consensus’ process.

Methods: Following a pre-brief, volunteer multidisciplinary staff followed different algorithms for managing simulated tracheostomy emergencies. Four standardised scenarios and initial algorithms were adapted from anonymously reported critical incidents. The simulators (SimMan™ Essential, Laerdal, and MetiMan™, CAE) deteriorated physiologically at pre-programmed intervals, prompting actions from participants. Draft algorithms were amended if observers noted prolonged desaturation (SpO2 <90% for >5 min), inaction (>1 min), algorithm deviation, or failure to achieve key learning objectives (removal of a blocked tube). No further revisions were made once two consecutive participants managed the scenario successfully, with the next scenario in the sequence used subsequently.

Results: Thirty-two experienced staff participated. Session outcomes included:

1. Structuring actions into clearly identified sections, allowing responders to work faster and more confidently.

2. Colour coding sections of the algorithm as this helped to visually group together a sequence of actions.

3. Re-ordering elements. Candidates worked faster if early steps reflected their usual practice.

4. Removing elements. The instruction to re-inflate the pilot balloon of the tracheostomy cuff was removed, as this led to faster completion of the scenarios and less pausing.

Discussion: The role of high-fidelity simulation was key in developing our algorithms. The high-fidelity simulators recreated identical clinical situations, allowing different management strategies to be attempted and assessing the algorithm rather than the participant. Following this process, near-final algorithms were sent for peer review to specialist societies2. Interestingly, no major changes were suggested that had not already been considered and evaluated in the simulation testing. It is unknown whether more detailed revisions would have arisen if early algorithms had been distributed for expert peer review.

We conclude that medical simulation has an important role in the development and refinement of airway emergency management algorithms.

References

  • 1.Lucisano KE, Talbot LA. Simulation training for advanced airway management for anesthesia and other healthcare providers: a systematic review. AANA J 2012; 80: 25–31.
  • 2.McGrath BA, Bates L, Atkinson D, et al. Multidisciplinary guidelines for the management of tracheostomy and laryngectomy airway emergencies. Anaesthesia 2012; 67: 1025–1041.

EP.123

Illuminating feedback manikin use during Basic Life Support Refresher training – a pragmatic pilot and feasibility study comparing video and instructor led training with three manikin types

Alexandra Finch1, Liam Cato1, Saskia Van Dijk1, Maria-Eleni Zioupos1, Emma Hardy1, Maira McKinlay1, Joe Alderman1,2,3, Jonathan Hulme1,3 and Andrew Owen1,2

1College of Medical and Dental Sciences, University of Birmingham, Birmingham, UK

2Critical Care Unit, Queen Elizabeth Hospital, University Hospitals Birmingham NHS Foundation Trust, Birmingham, UK

3City Hospital Birmingham, Sandwell & West Birmingham Hospitals NHS Trust, Birmingham, UK

Abstract

Introduction: Survival following out-of-hospital cardiac arrest in the UK occurs, at best, in just 12% of cases.(1) Early, good-quality bystander cardiopulmonary resuscitation (CPR) and prompt defibrillation optimise the chance of survival without disability.(2,3) Since 1995 the University of Birmingham, UK, has trained all its healthcare students to competency in Basic Life Support (BLS),(4) though available evidence suggests that skill decay occurs rapidly.(5–8)

This pilot study assessed BLS skill retention three years post-training, and feasibility of various refresher training models, including instructor-led vs video-led refresher training, and use of Laerdal Little Anne CPR training manikins vs Brayden Illuminating CPR manikins. (9)

Method: Twelve volunteer medical students from the University of Birmingham were randomised to three intervention arms: a) Instructor-led, standardised BLS training using Little Anne CPR training manikins (Laerdal); b) Instructor-led, standardised BLS training using Brayden Illuminating CPR manikins; c) Video-led BLS training, using the British Heart Foundation’s “Call, Push, Rescue” video and Laerdal Mini Anne inflatable manikins. Teaching time in every arm was 40 minutes.

Baseline measurement of CPR quality was obtained using a Laerdal Resusci®Anne SkillReporter™, and again post refresher training. Qualitative feedback was collected via questionnaire.

Results: Baseline CPR performance was better than expected with 5/12 participants achieving standards recommended by the European Resuscitation Council guidelines prior to refresher training. Following refresher training every candidate delivered compressions at the correct rate, though compression depth and ventilation quality remained imperfect. Hands-off time improved most following refresher training in participants trained using Brayden Illuminating CPR manikins – 1.25s reduction.

All Brayden users reported illuminating feedback was a useful addition, though some participants reported confusion about what precisely the illumination represented, commenting:

“I liked the lights and the positive feedback when the forehead lit up”

“Did not immediately understand the flashing neck lights”

Video-led refresher training improved participants' hands-off time and compression rate, but not their ventilations.

All participants reported greater confidence in BLS ability and knowledge post refresher session, with self-reported competence increasing by an average of 29%.

Conclusion: In this limited pilot study, refresher training improved chest compression rate and hands off time, and self-reported confidence in CPR knowledge and performance. Though our small sample size makes detailed analysis impossible, there are potential differences when manikin and training methods are varied. This pilot data will guide a larger trial to map long-term skill retention and compare divergent models of refresher training.

References: Redacted. Available via email on request

EP.124

Turn Down the Volume

Romit Samanta1, Ronel Talker2, Vidette Wong2, Abishek Dixit1, Ari Ercole1 and Charlotte Summers3

1Cambridge University Department of Anaesthesia, Cambridge, UK

2John Farman Intensive Care Unit, Cambridge University Hospitals NHS Trust, Cambridge, UK

3Department of Medicine, University of Cambridge School of Clinical Medicine, Cambridge, UK

Abstract

Reducing iatrogenic harm from mechanical ventilation by lung protective ventilation (LPV) remains an enduring priority in ARDS management. Poor practices, however, remain widespread1, even in major clinical trials.2 Changing the behaviour of clinical staff, despite the supportive evidence, is a perpetual challenge on the ICU.

Methods: We utilised high-resolution, ‘big’ data as a tool to drive improvements in ventilation practices. The electronic patient record (EPR) at Cambridge University Hospitals (Epic Systems, Madison, Wisconsin, USA) routinely collects physiological and ventilation data, which we analysed between October 2014 and January 2016 to produce data on 685 patients over 212,326 ventilation hours. The median tidal volume that patients received was 7.4 ml kg−1 predicted body weight (PBW), with female and obese patients at particularly high risk of receiving higher tidal volumes.
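Tidal volumes here are indexed to predicted body weight, which is derived from height and sex rather than actual weight. The abstract does not state which formula the EPR tooling used; the sketch below uses the widely cited ARDSNet height-based formula as an assumption:

```python
def predicted_body_weight(height_cm: float, sex: str) -> float:
    """Predicted body weight (kg) from height and sex (ARDSNet formula).

    Males:   50.0 + 0.91 * (height_cm - 152.4)
    Females: 45.5 + 0.91 * (height_cm - 152.4)
    """
    base = 50.0 if sex == "male" else 45.5
    return base + 0.91 * (height_cm - 152.4)


def tidal_volume_per_kg_pbw(tidal_volume_ml: float, height_cm: float, sex: str) -> float:
    """Express a delivered tidal volume in ml per kg of predicted body weight."""
    return tidal_volume_ml / predicted_body_weight(height_cm, sex)
```

For a 175 cm male, PBW is about 70.6 kg, so a 520 ml breath is roughly 7.4 ml kg−1 PBW; the same breath delivered to a 160 cm female patient is nearly 10 ml kg−1 PBW, consistent with the risk pattern for female patients reported above.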

To improve on these results, we used a combination of collaborative, reporting and educational interventions on the general intensive care unit (22 beds) at Cambridge University Hospitals:

• Presentation to and seeking solutions from the multidisciplinary team

• Creating tools within Epic to flag patients receiving high tidal volumes

• Changing the line of visibility of weight-based ventilation data within Epic

• Empowering FY1 (newly qualified) doctors to question ventilator settings in flagged patients

• Laminate reference guides on each ventilator

• Opportunistic education of nursing team members

Results: Following a month of embedding the above behaviours, we collected hourly ventilation data for all mechanically ventilated patients in the general ICU over a two-week period (2,479 hours in 22 patients). Results were analysed by logistic regression using R (version 3.3.2). There was a significant reduction in tidal volume (7.4 to 7.1 ml kg−1 PBW, p < 0.001), which was more marked in male patients (7.2 to 6.4 ml kg−1 PBW, p < 0.001). See Table 1.

Discussion: Resistance to change is an inherent feature of human behaviour, but the introduction of disruptive technologies presents an opportunity to develop tools and improve care. The ability of staff to self-monitor their practice, without having to perform calculations and without the threat of top-down diktat, alters the dynamic of behaviour change. Reducing the cognitive effort required of busy clinical staff to determine the ideal target tidal volume might also help improve compliance with LPV. Junior medical staff demonstrating new ways of utilising an EPR might also be perceived as a less threatening method of educating peers, compared with didactic teaching or prescriptive approaches.

EP.125

Noise levels in Adult Critical Care Unit (CCU) and Achieving a Better Sleeping Pattern for Patients

Barbara Ribeiro1, James McCulloch2, Anca Ostas2 and Mohamed Ramali2

1Queen's Hospital – Barking, Havering and Redbridge University Hospitals NHS Trust, Romford, UK

2Colchester Hospital University NHS Foundation Trust, Colchester, UK

Abstract

Noise interferes with patient’s sleep patterns, ultimately leading to poor outcomes in the critical care setting. Lack of sleep has negative effects on physiology as well as detrimental psychological consequences, as sleep deprivation is linked to Critical Care Unit (CCU) delirium. Noise also affects staff members’ wellbeing and productivity. Machine alarms and staff conversations are amongst the most disturbing noises heard during the night. The World Health Organisation (WHO) recommends hospital noise levels should be on average 35 decibels (dB) during the day and 30 dB at night.

We aimed to assess the sound levels in the CCU at an acute district general hospital and compare them with the WHO guidance.

This was a retrospective study of the noise levels in the CCU over 30 days with a recording device rotating between three locations in the unit, recording 24 hours/day. The results were measured in dB using the Sound Ear Pro II® and its own software.

The average noise level recorded was 46.5 dB with a background noise of 45.0 dB within the CCU. A peak was recorded at 61.8 dB. The majority of the noise was created during the day with the main peaks between 07:00–09:00, 12:00–13:00 and 19:00–20:00. This was thought to coincide with the major daily staff activities. The period of 07:00–09:00 coincides with morning handover (for both doctors and nurses), patient hygiene and doctors’ ward round. This was followed by the 12:00–13:00 period, coinciding with Microbiology ward round and continuing patient care and physiotherapy sessions. Finally, the last period (19:00–20:00) was in keeping with evening handover (again including both doctors and nurses) and the doctors’ evening ward round.

Our work shows that the noise levels in our CCU were consistently above the limits recommended by the WHO. It is important to keep in mind that the decibel is a logarithmic unit: each 10 dB increase represents a tenfold increase in sound intensity and is perceived as roughly a doubling of loudness, so 60 dB sounds about twice as loud as 50 dB and four times as loud as 40 dB. Behavioural changes have been suggested in order to reduce noise levels in our CCU, including reducing the volume of discussions, using the doctors’ room for handover (especially in the evening) and adjusting equipment settings to match each patient’s physiological state. Also, some studies suggest the use of earplugs may help prevent critical care associated delirium. Finally, other works suggest that achieving WHO standards in CCUs across the UK is virtually impossible.
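The logarithmic relationship can be made concrete: sound intensity ratios follow 10^(ΔdB/10), while perceived loudness is commonly approximated as doubling every 10 dB. A small illustrative sketch (the 46.5 dB average and the 30 dB WHO night-time limit are taken from the text above):

```python
def intensity_ratio(db_a: float, db_b: float) -> float:
    """Ratio of sound intensities for two levels in dB: 10 ** (difference / 10)."""
    return 10 ** ((db_a - db_b) / 10)


def loudness_ratio(db_a: float, db_b: float) -> float:
    """Approximate perceived-loudness ratio: loudness roughly doubles per 10 dB."""
    return 2 ** ((db_a - db_b) / 10)


# 60 dB carries 100x the sound intensity of 40 dB, but sounds only ~4x as loud.
```

By this approximation, the recorded average of 46.5 dB carries roughly 45 times the sound intensity of the WHO night-time limit of 30 dB, and sounds about three times as loud.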

EP.126

Introduction of a Sedation Management Tool on Sedation Holds and Spontaneous Breathing Trials

Laura Coleman1, James Hanison1, Catherine Applewhite1, Sobia Ghani1 and Natalie Fowler1

1Critical Care Department, Manchester Royal Infirmary, Manchester, UK

Abstract

Introduction: It is known that daily interruption of sedation reduces the duration of mechanical ventilation without compromising patient comfort or safety, and that spontaneous breathing trials reduce the duration of mechanical ventilation in acute respiratory failure. However, sedation holds and spontaneous breathing trials are not always performed.

Method: A baseline audit in January 2017 of 20 intensive care beds over 7 days demonstrated that only 36% of eligible opportunities for sedation holds were taken, and no spontaneous breathing trials were conducted despite 33 suitable opportunities. Implementation of change occurred through development of a steering group focussing on education, with a daily prompt for medical and nursing staff, incorporating sedation holds and spontaneous breathing trials into daily practice with safe simultaneous patient sedation holds.

Results: Daily audit of sedation holds and spontaneous breathing trials (SBTs) has demonstrated that, over one month, 83% of patients deemed suitable received an appropriate sedation hold and 30% of patients were extubated following a successful sedation hold and SBT.

EP.127

A Matter of Restraint – A Qualitative Survey On The Application Of Physical Restraints in Welsh Intensive Care Units

Craig Beaton1, Sonya Daniel1

1Adult Critical Care Unit, University Hospital Of Wales, Cardiff, UK

Abstract

Physical Restraint is “any action or procedure that prevents a person's free body movement and/or access to his/her body by the use of any method, attached or adjacent to a person's body that they cannot control or remove easily”. In the intensive care setting physical restraint is most commonly used to prevent harm and reduce chemical sedation in delirious patients.

Guidance from the Faculty of Intensive Care Medicine and the Intensive Care Society, issued following the judgement in Ferreira v HM Coroner of Inner South London [2017] EWCA, states that restraint may be in a patient’s best interests. The Mental Capacity Act 2005 states that restraint may be used if believed necessary to prevent harm to the patient, provided it is a proportionate response.

This survey evaluated current practice surrounding physical restraint on intensive care in Wales.

Method: We undertook a telephone survey of all fifteen units in Wales covering the existence of guidelines, mental capacity assessment of patients, types of restraints used, authorisation, review and documentation procedures.

Results: Respondents in two units were unaware of specific guidelines; twelve units had guidelines of which five were due for review; one unit reported a policy to never use physical restraints.

All respondents stressed clear documentation of capacity, best interests decision making and discussion with the family at the earliest point possible. Two units used checklists and had information packs for relatives.

With regard to frequency of use: A single unit reported never using restraints while all others stated that they were used 'rarely'. On clarification this ranged from 'less than once a month' to 'not in 14 years'. Mitts were the most commonly used device (13/15), followed by wrist ties (5/15), and elbow extenders (2/15).

Authorisation was by a consultant alone in five units, a consultant with a senior nurse in seven units and a nurse alone in two units.

All units stated a clear preference for chemical over physical restraint.

Discussion: Physical restraint remains a contentious issue. Our survey revealed that physical restraint is infrequently used. There were variations in practice surrounding the existence of guidelines; methods of restraint used; and authorisation of restraint. Evidence of good practice included the use of cognitive aids and provision of information packs for relatives.

Further work could include: expanding the survey to involve the entire United Kingdom; developing a physical restraint prescription and a national guideline in keeping with recent developments in case law.

EP.128

Long-term prognostic implications of intensive care admission following lung resection

David Finn1, Philip McCall2,3, Alistair Macfie2, John Kinsella3 and Benjamin Shelley2,3

1Glasgow Royal Infirmary, Glasgow, UK

2Golden Jubilee National Hospital, Clydebank, UK

3Academic Unit of Anaesthesia, Pain and Critical Care Medicine, Glasgow, UK

Abstract

Introduction: Each year in the U.K. 8500 patients undergo lung resection and although in-hospital mortality is low, there are high rates of peri-operative complications. In previous work we demonstrated an unplanned Intensive Care Unit (ICU) admission rate of 2.6%. Little is known about the prognosis of patients following discharge from ICU in this patient group. We sought to assess the prognostic implication of ICU admission on long-term survival following lung resection.

Methods: We performed a secondary analysis of a retrospective cohort study of patients undergoing lung resection over a two-year period at our institution. Research ethics committee approval was not sought as this study was considered a service evaluation. Approval for data sharing was obtained from the Caldicott guardian. ICU admission was defined as those patients needing invasive positive pressure ventilation and/or renal replacement therapy. Date and primary cause of death were obtained from National Services Scotland two years following study conclusion. Kaplan-Meier survival analysis, with log-rank testing, was performed to determine any difference in survival distributions.

Results: 1169 patients underwent lung resection during the study period. Thirty patients (2.6%) were admitted to ICU, with a unit mortality in this group of 26.7%. Two patients (0.2%) died before hospital discharge in the non-ICU group. Median (IQR) follow-up was 1041 (838, 1258) days. Comparison was made between those patients who survived ICU admission and those not admitted to ICU. There were 346 deaths in total, 337/1137 (29.7%) in the non-ICU group and 9/22 (40.9%) in the ICU group. There was no difference in survival distribution of those patients who survived ICU admission and those who were not admitted to ICU (p = 0.290, Fig 1).

Discussion: This study suggests that patients who survive ICU admission following lung resection have long-term survival similar to those not admitted to ICU. This contrasts with the general ICU population, in whom mortality remains elevated after discharge from hospital. Although labour intensive, ICU admission in this population would be supported by the low mortality and favourable survival after discharge. Unfortunately, the study size did not allow comprehensive multivariable assessment of other factors that may impact long-term survival following lung resection. Larger studies are required to assess these factors further.

Fig 1.

Survival distributions of patients who survived ICU (dashed line) and those not admitted to ICU (continuous line) with vertical dashes indicating censored patients.

EP.129

Who should diagnose death in critical care?

Alex Gatehouse1, Sadie Diamond-Fox1, Jill Kelly1, Peter Hutchinson1 and Matthew Faulds1

1The Newcastle Hospitals NHS Foundation Trust, Newcastle upon Tyne, UK

Abstract

Death is defined as ‘the irreversible loss of the capacity for consciousness with the irreversible loss of the capacity to breathe’. There is clear societal and professional demand for a timely and reliable means of diagnosing death, with no legal requirement for this to be performed by a doctor. Guidance regarding the practice of diagnosis of death is conflicting and traditional teaching to medical trainees may be variable.

We performed a retrospective review of the documentation of expected deaths within the Adult Critical Care Units in our Trust. Deaths followed by organ donation were excluded. Deaths from a two-month period were audited against the Academy of Medical Royal Colleges (AoMRC) code of practice for the diagnosis of death. In addition, a national electronic survey of Advanced Critical Care Practitioners (ACCPs) was conducted to ascertain their involvement in and understanding of verification of death.

Diagnosis of death was conducted by a doctor in all 41 sets of notes. Duration of assessment ranged from 1–5 minutes, with 41% stating 5 minutes and no time documented at all in 15%. In all cases cardiorespiratory observation was documented but neurological assessment was variable.

Documentation of date and time of diagnosis of death exceeded 90%. Additional information recorded included verifying patient identification (32%), location (12%), the cause of death (15%) and persons present at the time of death (<25%). The signature, name and grade of the doctor was recorded in over 90% of the notes, but only half detailed their GMC number and 12% a contact number.

The ACCP survey received 21 responses, with half diagnosing death within their clinical role, the majority being expected deaths (78%). Duration of assessment ranged from 1–10 minutes, with 50% stating 5 minutes. Again, neurological assessment reported was variable.

Our data suggest that our critical care units do not consistently meet AoMRC standards in terms of duration, recommended criteria and documentation. Our ACCPs do not currently verify death, but nationally this is not the case. To allow standardisation of the diagnosis of death, we have developed a proforma to aid robust completion of this important task by all critical care residents, including appropriately trained ACCPs.

Table 1.

Documentation of Neurological Assessment.

Physiological observation: documentation rate
Response to pain: 76%
• central: 27%
• peripheral: 12%
• not specified: 37%
Pupils: 98%
Corneal reflex: 22%

Table 2.

ACCP survey documentation of neurological assessment.

Physiological observation: documentation rate
Neurology:
• Response to central pain: 79%
• Pupils: 93%
• Corneal reflex: 36%

EP.130

Audit into Sedation Hold practice within the George Eliot Hospital ITU – compliance with Local and international Guidelines

Christopher Ball-nossa1, Mudassar Aslam2 and Sam George2

1Coventry and Warwickshire Partnership Trust, Coventry, UK

2George Eliot Hospital Foundation Trust, Nuneaton, UK

Abstract

The ICU provides an expensive and specialised service which is reserved for critically ill patients. It is therefore essential that the service is efficient.

Any patient requiring invasive mechanical ventilation is pharmacologically sedated in the ICU in order to minimise the stress response and the psychological sequelae associated with it. Studies have demonstrated that daily interruption of continuous sedative infusions reduces the mean duration of mechanical ventilation and length of stay in the ICU. They have also shown that this practice reduces the incidence of complications attributed to over-sedation, such as ventilator-associated pneumonia and post-sedation delirium. As a result, the American College of Critical Care Medicine (ACCM) produced guidelines for sedation practice which include protocols for sedation holds and optimal sedation scores. This audit examines whether these guidelines are being adhered to at a local level.

This audit retrospectively sampled 30 mechanically ventilated patients, spanning a period from June 2016 to January 2017, taking data from their ITU charts and radiology records. Patient outcomes and compliance with trust protocols and ACCM guidelines were then recorded.

Of all the patients sampled, only 17% received daily sedation holds. When those not eligible according to ACCM criteria were excluded (half of the patients sampled), only 33% had daily interruptions. Those having regular interruptions saw their duration of mechanical ventilation and length of stay in ITU reduced by over 50%. The sedation hold group also had fewer complications: the incidence was 0% compared with 20% in those not receiving daily sedation holds. In total, 40% of patients had the reason for not having a sedation hold documented on their ITU charts. Sedation scores were documented for 100% of patients, of whom 57% had their sedation titrated according to their RASS score.

The findings revealed only patchy compliance with local and ACCM guidelines. Lack of daily sedation interruption was associated with longer durations of ventilation, longer stays in ITU and a greater number of complications. Revising local practices to align with up-to-date evidence-based guidelines, followed by education of the responsible staff, is essential to improving uptake and patient outcomes. Improving compliance with the guidelines will reduce the burden on ITU and the costs to the trust.

EP.131

A comparison of the haemodynamic effects of 500 ml boluses of Hartmann’s solution and 4% human albumin solution in post-cardiac surgery ICU patients: an observational study

Anthony Wilson1, Salvatore Lucio Cutuli1, Luca Lucchetta1, Paolo Ancona1, Eduardo Osawa1, Mark Kubicki1, Maria Cronhjort1, Suvi Vaara1, Johan Mårtensson1, Glenn Eastwood1, Neil Glassford1 and Rinaldo Bellomo1

1Department of Intensive Care Medicine, Austin Health, Melbourne, Australia

Abstract

Introduction: Fluid bolus therapy (FBT) is a common treatment for circulatory failure in the ICU. There is growing evidence that the haemodynamic effects of crystalloid boluses may be modest and short-lived. Albumin solutions may offer a more sustained benefit.

Objectives: To compare the magnitude and duration of the haemodynamic changes associated with 500 ml boluses of Hartmann’s solution and 4% albumin solution administered to post-operative cardiac surgical patients.

Methods: Prospective, observational study performed in a tertiary, university-affiliated hospital. Post-operative, ventilated, cardiac surgical patients who were prescribed FBT were included. Data logging software was used to collect physiologic parameters for 60 minutes post-FBT. Haemodynamic confounders were recorded and censored for.

Results: Of 61 patients, 39 (20 Hartmann’s, 19 albumin) had over 10 minutes of confounder-free observation post FBT and were included in the analysis. Table 1 describes the characteristics of the FBT:

Table 2 displays the observed changes in mean arterial blood pressure (MAP):

Changes in cardiac index (CI) with FBT were available in 35 patients (18 Hartmann’s, 17 4% albumin). Fluid responders were defined by a stroke volume increase ≥15% from baseline at any point following commencement of FBT. Table 3 displays the changes in CI in the 24 responders:

Conclusions: FBT with 4% albumin solution yielded a more sustained MAP response at 30 minutes than 500 ml of Hartmann’s solution, but the result did not reach statistical significance. More data are needed to confirm or refute this trend. The peak changes in MAP, CI and duration of CI response were similar for both groups. This information may be useful to clinicians deciding on the best strategy to treat haemodynamic instability in this patient population.

Table 1.

Fluid bolus therapy received. Medians [IQR].

Hartmann’s (n = 20) 4% albumin (n = 19)
Duration of FBT (min) 8.5 [6.8–13.0] 8.0 [5.0–13.0]
Volume of FBT (ml/kg) 6.3 [5.8–7.0] 5.9 [5.1–6.3]
Duration of confounder free observation post bolus (min) 33 [21–28] 21 [16–39]

Table 2.

Changes in MAP with FBT. Medians [IQR]. *12 patients +9 patients.

Hartmann’s (n = 20) 4% albumin (n = 19)
Baseline MAP (mmHg) 67.0 [62.9–70.3] 65.3 [63.1–70.8] p = 0.69
ΔMAP (peak) (mmHg) 12.7 [7.2–20.4] 13.3 [10.6–15.9] p = 0.69
ΔMAP 30mins post FBT (mmHg) 2.2 [−1.7–6.0]* 7.9 [5.2–9.7]+ p = 0.07

Table 3.

Changes in CI with FBT. Medians [IQR]. #7 patients.

Hartmann’s (n = 13) 4% albumin (n = 11)
Baseline CI (L/min/m2) 2.39 [2.24–2.67] 2.31 [2.11–2.65] p = 0.95
ΔCI (peak) (L/min/m2) 0.56 [0.31–0.68] 0.56 [0.45–0.80] p = 0.39
ΔCI (15–30 mins post FBT) (L/min/m2) 0.19 [0.16–0.44]# 0.36 [0.24-0.39]# p = 0.46

EP.132

Non Cardiogenic Pulmonary Oedema Secondary to a Therapeutic Dose of Amlodipine requiring Intensive Care Unit Admission

Eleanor Damm1, Nikki Faulkner1 and Ingi Elsayed1

1Royal Stoke University Hospital, Stoke-on-Trent, UK

Abstract

Introduction: Up to 70% of patients receiving amlodipine experience peripheral oedema (1). Cases of non-cardiogenic pulmonary oedema have been described following amlodipine overdose. We describe a case of pulmonary oedema secondary to a therapeutic dose of amlodipine requiring intensive care admission.

Case: A 53-year-old female with a background of hypertension, angina and insulin-dependent diabetes presented in complete heart block with hyponatraemia. A permanent pacemaker was fitted, and amlodipine and demeclocycline were initiated. An echocardiogram demonstrated preserved LV systolic function with an EF of 70% and mild LV diastolic dysfunction.

A month later she presented with acute pulmonary and peripheral oedema. Serum calcium was 1.02 mmol/L.

Amlodipine was stopped and treatment was commenced with intravenous nitrates and diuresis with intravenous furosemide. The hypocalcaemia was corrected.

Due to escalating oxygen requirements the patient was admitted to critical care for non-invasive positive pressure ventilation. The hypocalcaemia was attributed to demeclocycline, which was subsequently stopped. Her pulmonary oedema resolved and she returned to her baseline pre-admission function.

Discussion: Amlodipine is a calcium channel blocker commonly prescribed for the treatment of hypertension and angina. Its antihypertensive effect is due to direct relaxation effect on vascular smooth muscle.

The aetiology of amlodipine-induced peripheral oedema is decreased resistance within the arteriolar system without a corresponding change in the venous system, causing an increase in hydrostatic pressure within the pre-capillary circulation and fluid shift into the interstitial space (2). Similarly, in amlodipine overdose this has been the postulated pathophysiology of pulmonary oedema (3), in addition to a myocardial depressant effect caused by blockade of L-type calcium channels.

Treatment in this case did not require vasopressor therapy, induction of a hyperinsulinaemic state or cardiac pacing, as described elsewhere in the management of amlodipine overdose (4). However, the patient was already paced with a PPM and received insulin therapy, which may have had a protective effect.

Conclusions: Non-cardiogenic pulmonary oedema has been observed following commencement of amlodipine for the treatment of hypertension, requiring critical care admission.

References

  • 1.Haria M, Wagstaff A. Erratum to: Amlodipine. A reappraisal of its pharmacological properties and therapeutic use in cardiovascular disease. Drugs 1995; 50: 896–896. [DOI] [PubMed]
  • 2.Sica D. Calcium Channel Blocker-Related Peripheral Edema: Can It Be Resolved? The Journal of Clinical Hypertension 2003; 5: 291–295. [DOI] [PMC free article] [PubMed]
  • 3.Hirachan A, et al. Amlodipine overdose with hypotension and noncardiogenic pulmonary edema. Nepalese Heart Journal 2016; 13: 27–29.
  • 4.Upreti V, et al. Shock due to amlodipine overdose. Indian Journal of Critical Care Medicine 2013; 17: 375–377. [DOI] [PMC free article] [PubMed]

EP.133

An operational definition of ventilator-associated pneumonia for automated disease identification in electronic health records

Finn Catling1,2, Steve Harris1,2, Niall MacCallum1,2, David Brealey1,2, Ari Ercole3, Peter Watkinson4, Andrew Jones5, Duncan Young4, Richard Beale6, Simon Ashworth7, Stephen Brett8 and Mervyn Singer1,2

1Bloomsbury Institute for Intensive Care Medicine, University College London, London, UK

2Critical Care, University College London Hospitals NHS Foundation Trust, London, UK

3Division of Anaesthesia, Department of Medicine, Cambridge University, Cambridge, UK

4Critical Care Research Group (Kadoorie Centre), Nuffield Department of Clinical Neurosciences, Medical Sciences Division, Oxford University, Oxford, UK

5Critical Care, Guy’s and St. Thomas’ NHS Foundation Trust, London, UK

6Division of Asthma, Allergy and Lung Biology, King’s College, London, UK

7Critical Care, St. Mary’s Hospital, Imperial College Healthcare NHS Trust, London, UK

8Critical Care, Hammersmith Hospital, Imperial College Healthcare NHS Trust, London, UK

Abstract

Background: Ventilator-associated pneumonia (VAP) often presents ambiguously and is treated empirically. Definitions of VAP vary widely across the literature, reflecting this ambiguity. Clinical coding of VAP is consequently unreliable, and discouraged in institutions using VAP incidence as a quality indicator.[1,2] An alternative, more sensitive definition of VAP would allow automated identification in electronic health records (EHRs) and improve consistency.

Methods: We developed an operational definition of VAP using the NIHR Critical Care Health Informatics Collaborative dataset, which comprises demographic and longitudinal data from 15556 patients admitted in 2014–2016 across 10 UK-based critical care units. 5184 patients with an admission diagnosis of pneumonia, or a planned admission following elective surgery, were excluded. Reflecting common clinical practice, our definition is based on a triad of invasive ventilation for >24 hours plus respiratory deterioration plus antibiotic escalation. Antibiotic escalation was characterized using an established ranking system.[3] We identified a cohort of patients with VAP, and a sub-cohort who required at least 7 days' invasive ventilation and escalated antibiotics following diagnosis. We report key demographics, process measures and outcome measures for each cohort. As part of our analysis, we developed a suite of software tools which quantify data quality in EHRs, handle missing data, and identify patients with VAP in a robust, automated and reproducible manner.
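The triad above lends itself to a rule-based check. A minimal sketch, assuming hypothetical per-patient summary fields (the actual dataset schema and the deterioration and antibiotic-escalation logic in the authors' software suite are not described here):

```python
from dataclasses import dataclass


@dataclass
class VentilationEpisode:
    # Hypothetical, simplified per-patient summary fields.
    hours_ventilated: float          # duration of invasive ventilation
    respiratory_deterioration: bool  # e.g. worsening oxygenation after intubation
    antibiotic_escalated: bool       # escalation per an agreed antibiotic ranking


def meets_vap_definition(ep: VentilationEpisode) -> bool:
    """Operational VAP triad: >24 h invasive ventilation, plus respiratory
    deterioration, plus antibiotic escalation."""
    return (ep.hours_ventilated > 24
            and ep.respiratory_deterioration
            and ep.antibiotic_escalated)
```

Encoding the definition as an explicit predicate like this is what makes the identification automated and reproducible: the same rule can be re-run unchanged over any EHR extract with these fields.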

Results: VAP was identified in 1085 patients (10.5%), which is comparable to estimates from modern epidemiological studies.[4] VAP requiring prolonged treatment occurred in 268 (2.6%). Compared with patients without VAP, VAP was associated with a higher maximum SOFA score in the first 24 hours of admission (median 6 vs. 8, CIs 6–6 vs. 8–9), prolonged ICU admission (median 3.0 vs. 13.9 days, CIs 3.0–3.1 vs. 13.0–14.9) and higher ICU mortality (14.3% vs. 26.8%, CIs 13.6–15.0% vs. 24.2–29.5%). ICU mortality was highest (32.7%, CIs 27.0–38.4%) in the sub-cohort requiring prolonged treatment for VAP. 6893 (66.5%) of all patients in this study underwent invasive ventilation.

Conclusions: Our software suite and definition of VAP allow automated identification of a previously-elusive patient cohort. Our analysis offers insight into this cohort as a whole, and into the important subgroup of patients requiring prolonged treatment for VAP.

References

  • 1.Skrupky LP, McConnell K, Dallas J, et al. A comparison of ventilator-associated pneumonia rates as identified according to the National Healthcare Safety Network and American College of Chest Physicians criteria. Crit Care Med 2012; 40: 281–284. [DOI] [PubMed]
  • 2.Drees M, Hausman S, Rogers A, et al. Underestimating the impact of ventilator-associated pneumonia by use of surveillance data. Infect Control Hosp Epidemiol 2010; 31: 650–652. [DOI] [PubMed]
  • 3.Braykov NP, Morgan DJ, Schweizer ML, et al. Assessment of empirical antibiotic therapy optimisation in six hospitals: an observational cohort study. Lancet Infect Dis 2014; 14: 1220–1227. [DOI] [PMC free article] [PubMed]
  • 4.Kalanuria AA, Ziai W and Mirski M. Ventilator-associated pneumonia in the ICU. Crit Care 2014; 18: 208. [DOI] [PMC free article] [PubMed]

EP.134

Heart Rate Variability in Critical Care: Validation of a new device

Christopher Macrow1 and Tom Lawton1

1Bradford Teaching Hospital Foundation Trust, Bradford, UK

Abstract

Aims:

• To validate an innovative, newly engineered Heart Rate Variability (HRV) monitor against a current standard.

• To support a research proposal to investigate the clinical use of HRV in the critical/intensive care setting.

Background: The heart does not beat with metronomic regularity; there are changes in the R-R interval with ventilation, baroreflexes and environmental factors1. HRV measures this and is thought to predominantly represent the parasympathetic component of autonomic neural regulation of the heart.

A high HRV is linked to good health and a high level of fitness, whilst decreased HRV is linked to stress, fatigue, an increased risk of disease, mortality and decreased regulatory capacity2.

A decreased HRV has been observed in patients with cardiac failure and is linked to a worse prognosis post-MI.3 It has also been seen in patients with cirrhosis, where it has prognostic value and predicts mortality.4 HRV in sepsis correlates with both diagnosis and prognosis.5

Automated HRV measurement has been problematic in the hospital setting.6 We have created a device that can calculate HRV from a 3-lead ECG.

Methods: To validate the device, HRV was measured in healthy volunteers at rest, using the new device and a Polar heart rate monitor7 simultaneously over a one minute period. Values were read from the device’s screen at the end of the recording period. Recordings of the R-R intervals were also taken for retrospective comparison. Metrics recorded were:

• Heart rate

• SDNN

○ Standard deviation of normal R-R intervals in milliseconds8

• 20 x ln(RMSSD), reported as “HRV”

○ RMSSD is the root mean squared successive difference of normal R-R intervals. This metric is already well used in sports science.
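Both metrics can be computed directly from a list of R-R intervals. A minimal sketch follows; the population standard deviation is assumed for SDNN, and a real pipeline would first exclude ectopic beats:

```python
import math

def sdnn(rr_ms):
    """Standard deviation of normal R-R intervals, in milliseconds."""
    mean = sum(rr_ms) / len(rr_ms)
    return math.sqrt(sum((x - mean) ** 2 for x in rr_ms) / len(rr_ms))

def hrv_score(rr_ms):
    """The abstract's "HRV" metric: 20 x ln(RMSSD), where RMSSD is the
    root mean squared successive difference of normal R-R intervals."""
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    rmssd = math.sqrt(sum(d * d for d in diffs) / len(diffs))
    return 20 * math.log(rmssd)
```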

Results: Data were collected on 20 participants (male n = 12); the modal age band was 26–30, with 1 participant standing, 2 lying and 17 sitting. Mean SDNN was 47 ms; mean “HRV” was 67.

Pearson's correlation coefficients between the two devices for the SDNN and “HRV” scores were 0.9975 and 0.9868 respectively.

“HRV” 95% limits of agreement are −3.4 to +3.3 (i.e. ±5% of the true value) and for SDNN the 95% limits of agreement are −3.0 to +3.1 (i.e. ±6% of the true value).

The device therefore agrees closely with standard HRV measurement.
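For illustration, the 95% limits of agreement quoted above follow the standard Bland-Altman calculation (mean difference ± 1.96 × SD of the paired differences); a minimal sketch:

```python
import math

def limits_of_agreement(a, b):
    """Bland-Altman 95% limits of agreement for paired measurements."""
    diffs = [x - y for x, y in zip(a, b)]
    mean = sum(diffs) / len(diffs)
    # sample standard deviation of the differences
    sd = math.sqrt(sum((d - mean) ** 2 for d in diffs) / (len(diffs) - 1))
    return mean - 1.96 * sd, mean + 1.96 * sd
```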

Clinical implications:

We hope to use the validated device to determine:

• If HRV can be used to prognosticate on the intensive care unit

• If HRV can be used as a predictor of successful weaning

EP.135

Towards automated clinical coding in critical care

Finn Catling1, George Spithourakis1 and Sebastian Riedel1

1Department of Computer Science, University College London, London, UK

Abstract

Background: Clinical coding of critical care admissions is essential in order for hospital trusts to gain insight into the care they deliver, to participate in national quality improvement projects and to be appropriately remunerated. These codes are typically derived from free-text notes, as these record the narrative of each patient’s admission. Manual clinical coding is expensive, time-consuming, error-prone and unstandardised between trusts [1]. A system performing automated clinical coding would have great potential to save resources, and real-time availability of codes would improve oversight of patient care and accelerate research, for example by improving recruitment to clinical trials. Automated coding is made challenging by the idiosyncrasies of clinical text and the large number of disease codes, many of which occur rarely.

Methods: Our study uses the MIMIC-III dataset, which comprises data from critical care admissions at an American hospital between 2001 and 2012. We used a statistical model to represent the history of presenting complaint from 55177 free-text discharge summaries, and a second model to represent all ICD-9-CM codes, then combined the representations to predict the primary ICD-9-CM code for each admission. Term frequency-inverse document frequency (TF-IDF) and recurrent neural network (RNN) text representations were compared. We also compared learning each code representation atomically with learning representations of each node in the ICD-9-CM tree structure and representing the code as the mean of its node representations. 70% of the dataset was used for model training, 10% for parameter optimisation and the remaining 20% to evaluate model performance.
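As a toy illustration of the tree-composition idea, a code's representation can be taken as the mean of the embeddings of the nodes on its root-to-leaf path. The node names and two-dimensional vectors below are invented for clarity; the real model learns high-dimensional embeddings during training.

```python
# Hypothetical embedding table: root -> chapter -> block -> code.
EMBED = {
    "ICD9":    [0.0, 0.0],
    "460-519": [1.0, 0.0],  # chapter: diseases of the respiratory system
    "480-488": [0.0, 1.0],  # block: pneumonia and influenza
    "486":     [1.0, 1.0],  # code: pneumonia, organism unspecified
}

def code_embedding(path):
    """Compose a code's representation as the component-wise mean of the
    embeddings of its ancestor nodes (including the code itself)."""
    vecs = [EMBED[node] for node in path]
    return [sum(component) / len(vecs) for component in zip(*vecs)]

emb = code_embedding(["ICD9", "460-519", "480-488", "486"])
```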

Results: RNN text representation improved prediction accuracy of the 19 ICD-9-CM chapter labels to 70.23% from 69.35% using TF-IDF. Composing the code representations from the ICD-9-CM nodes increased prediction accuracy of the 17561 ICD-9-CM codes to 33.15% versus 29.54% for atomic representations.

Conclusions: Our study demonstrates that modern neural network architectures can improve representation of medical text, and that structured medical knowledge (the ICD-9-CM tree) can be incorporated into statistical models and produce performance improvements. Learning good representations of the 17561 ICD-9-CM codes from a comparatively-small dataset is very challenging, explaining the modest accuracy of some of our results. We expect that model performance will improve significantly when a larger dataset is available for model training.

Reference

  • 1.O’Malley et al. Measuring diagnoses: ICD code accuracy. Health Serv Res. 2005;40: 1620–1639. [DOI] [PMC free article] [PubMed]

EP.136

Introduction to the Weaning Dashboard project: Evaluating the paper-based charts and developing the specification for an electronic system to enhance understanding of weaning practice and process on the Intensive Care Unit

Charlotte Small1,2, Philip Pemberton2, Fiona Howroyd1,2, David McWilliams2, Karl Hewson3, Ben Crundwell3, Annabel Forbes-Cockell3, Olga Passet3 and Catherine Snelson2

1University of Birmingham, Birmingham, UK

2Critical Care Unit, Queen Elizabeth Hospital, University Hospitals Birmingham NHS Foundation Trust, Birmingham, UK

3Cambridge Design Partnership, Cambridge, UK

Abstract

The prescription and recording of patient progress during weaning from mechanical ventilation is often documented using a paper “weaning chart.” The critical care research group at the Queen Elizabeth Hospital Birmingham are working in collaboration with Cambridge Design Partnership, experts in medical technology design, to develop a digital weaning chart, or “dashboard.”

The project aim was to develop a system that presents clinicians with patient breathing performance, both realtime and historical, to improve understanding of individual progress and provide data for cohort analysis and modelling.

The project objectives were to:

1. Determine the design components and usage of the current paper system.

2. Determine the user requirements of the new interactive system.

3. Demonstrate potential design elements to a group of clinical, nursing and patient stakeholders.

The project methodology combined Human Centred Design (BSI 2010), engineering usability (IEC 2007) and development of complex intervention (Craig, Dieppe et al. 2008) processes, enabling precise definitions of user requirements and future integration of the device into a clinical trial.

It was determined that the paper based weaning chart fails to fulfil its purpose due to:

1. Incomplete information, as pens and paper are not always to hand.

2. Inconsistent use of colours

3. Lack of space for additional information

4. Goal section not being used properly

Assimilation of data gathered from patient, family member, clinical and nursing staff stakeholder interviews and focus groups, clinical observations and knowledge of ventilator characteristics produced a journey map. This concept illustrated the context of use and identified the main user groups by capturing their goals, tasks, needs and potential risks that might arise from the interaction with a digital weaning chart.

Subsequently, a number of wire-frame concept demonstrators were produced then evaluated by the aforementioned stakeholders. Themes arising from feedback included:

1. Access to historical ventilator data would be valuable.

2. The physical appearance of the patient plays a key role in the current treatment approach.

3. Continuity of care is difficult, with constantly changing rotas of nurses and doctors.

4. The clinicians’ ideal goal is to be able to isolate how specific settings impact a patient’s parameters.

5. It is difficult to understand the full picture of what a patient has experienced.

The next phase of the research will be a clinical trial of the prototype device to:

1. Identify trends in setting weaning goals.

2. Understand the frequency and magnitude of changes to ventilator settings during weaning.

3. Understand how additional data provided by the dashboard aids clinical decisions.

EP.137

Age-dependent changes in the autonomic nervous system during anesthesia: A non-linear, geometrical analysis using heart rate variability as a measure of autonomic function

Maddalena Ardissino1, Nicoletta Nicolaou1 and Marcela P Vizcaychipi1

1Imperial College London, London, UK

Abstract

Objectives: Heart rate variability (HRV) is a powerful means of non-invasively assessing autonomic nervous system (ANS) function in real-time. The aim of this multicentre study was to determine the potential use of HRV monitoring as a real-time measure of autonomic function in patients undergoing propofol anaesthesia, and to assess whether HRV can be used to predict the risk of complications following anaesthesia.

Methods: The datasets used for the analysis included recordings of cardiovascular parameters in 18 ‘young’ (aged <45 years) and seven ‘old’ (aged >65 years) patients, from the induction of anaesthesia until after extubation. These were recorded at Chelsea and Westminster and King’s College London hospitals. HRV was then extracted from HR values using Poincaré quantification code in Matlab. We plotted Poincaré plots of HRV for each patient and quantified them by measuring the width of the distribution along the identity line (SD2) and perpendicular to it (SD1).

Results: Before the induction of anaesthesia, young patients showed greater HRV than old patients (SD2: p = 0.0003). Propofol decreased HRV in the former (SD1: p = 0.019; SD2: p = 0.0002) but not in the latter (SD1: p = 0.21; SD2: p = 0.84). There was a positive correlation between the old patients’ resting HRV and their mean arterial pressure (MAP) during surgery. We measured HRV trends by estimating variability over several different time periods, from 5 minutes down to 10 seconds, to assess the real-time applicability of the measure.

Conclusions: Young patients show greater HRV, and therefore autonomic tone, than old patients at rest. Propofol-induced anaesthesia significantly decreases HRV (indicating autonomic suppression in real-time) in young but not old patients. Good resolution was obtained with time windows as short as 20 seconds, showing that HRV can be quantified in almost real-time. Low HRV at rest correlated with low MAP during surgery, suggesting a potential future role for baseline HRV in pre-surgical assessment of the risk of hypotension.

EP.138

Antimicrobial resistant isolates from patients who require emergency laparotomy – an urgent need to standardise practice

Emma Fadden1, Natasha Hettiarachchi1, Jenifer Mason2, Nikhil Misra1, Ben Morton1,3

1Aintree University Hospital NHS Foundation Trust, Liverpool, UK

2Liverpool Clinical Laboratories, Liverpool, UK

3Liverpool School of Tropical Medicine, Liverpool, UK

Abstract

Background: Patients who require emergency laparotomy typically receive antibiotics, either empirically as part of the preoperative pathway when there are overt signs of sepsis, or as prophylaxis prior to surgery. With the increasing prevalence of community-acquired ESBL and CPE strains worldwide, currently recommended empirical antibiotic regimes could become obsolete and implementation of antimicrobial stewardship is paramount. Failure to send appropriate samples for culture may lead to suboptimal treatment and may be a contributing factor in development of antimicrobial resistance.

Aims:

1. To assess microbiology results from patients requiring emergency laparotomy

2. To determine antimicrobial resistance patterns in isolated organisms

Methods: A retrospective search of National Emergency Laparotomy Audit data for Aintree University Hospital NHS Foundation Trust from 2014–15 was carried out. Microbiological samples received by the laboratory within 48 hours of laparotomy were reviewed according to sample sent (blood or abdominal), organism cultured and the presence of antimicrobial resistance.

Results: A database search identified 202 patients who underwent de novo laparotomy. Blood cultures were sent in 35 cases (17.3%), an abdominal sample in 19 cases (9.4%) and both in four cases (1.98%). Blood cultures were positive in 22.9% of cases (8/35). Abdominal samples were positive in 63.2% of cases (12/19). Resistance to amoxicillin was present in nine isolated potentially pathogenic organisms, and to Tazocin in four. The accompanying table shows beta-lactam resistance patterns according to organism cultured (all organisms were sensitive to a beta-lactam in combination with gentamicin).

Conclusions: Samples were sent infrequently for microbiology from patients requiring emergency laparotomy. We found resistant organisms in samples from patients who would ordinarily receive empirical antibiotics for community acquired abdominal infections. This lack of sampling reduces our ability to guide antimicrobial therapy, particularly if a patient becomes unwell in the days after surgery (e.g. is it a leak or inadequate antimicrobial therapy?).

Recommendations:

1. Standardise antimicrobial sampling during emergency laparotomy to inform later care

2. Collaborate with microbiology to incorporate resistance data into empirical antibiotic therapy guidelines

3. Repeat audit with agreed recommendations in place

EP.139

The vasoplegia index is a measure of therapeutic response and outcome in patients with septic shock: An analysis of the Critical Care Health Informatics Collaborative (CC-HIC) database

Simon Lambden1, Abishek Dixit2, S Harris3, N MacCallum3, David Brealey3, J Hetherington4, Sinan Shi4, David Pérez-Suárez4, Peter Watkinson5, Andrew Jones6, S Ashworth7, S Brett8, Richard Beale6, Duncan Young5, Mervyn Singer4, C Summers9 and Ari Ercole2

1Department of Medicine, Cambridge University, Cambridge, UK

2Division of Anaesthesia, Department of Medicine, Cambridge University, Cambridge, UK

3UCLH NHS Foundation Trust, London, UK

4UCL, London, UK

5University of Oxford, Oxford, UK

6Guy’s and St. Thomas’ NHS Foundation Trust, London, UK

7Critical Care, St. Mary’s Hospital, Imperial College Healthcare NHS Trust, London, UK

8Critical Care, Hammersmith Hospital, Imperial College Healthcare NHS Trust, London, UK

9Department of Medicine, University of Cambridge, Cambridge, UK

Abstract

Background: Vascular dysfunction is one of the hallmarks of septic shock and leads to more than 40,000 deaths annually in the UK alone. Exploring the role of vascular dysfunction in the pathogenesis of sepsis is critical to understanding the syndrome and improving patient outcomes. This study describes the development of the vasoplegia index (VI) as a marker of vascular dysfunction in shock, and on a population level, explores the impact of vasoplegia on outcome, and the role of adding systemic corticosteroids to norepinephrine therapy in the treatment of septic shock in intensive care patients.

Methods: A population-based modelling approach was applied to data obtained from the Critical Care Health Informatics Collaborative (CC-HIC) longitudinal database, which contains data from eleven intensive care units in five academic health centres in the United Kingdom. The CC-HIC database holds data from 22,524 patient episodes collected between 2015 and 2016. Of these, 1,579 patient episodes in which infection was coded as a reason for ICU admission included at least one paired record of MAP and noradrenaline dose, yielding 121,620 data points for analysis. The VI (noradrenaline dose/MAP) was calculated for each recorded pair, and the relationship with survival status determined. In addition, data regarding the use and timing of hydrocortisone administration were collected for each patient episode.
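The index itself is a simple ratio. A minimal sketch, assuming noradrenaline dose in µg/kg/min and MAP in mmHg (the abstract does not state the units used):

```python
def vasoplegia_index(noradrenaline_dose, map_mmHg):
    """VI = noradrenaline dose / mean arterial pressure, per the Methods."""
    if map_mmHg <= 0:
        raise ValueError("MAP must be positive")
    return noradrenaline_dose / map_mmHg
```

A higher VI indicates more noradrenaline required per unit of achieved pressure, i.e. greater vasoplegia.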

Results: The relationship between VI and noradrenaline dose was determined using a linear mixed effects model with quadratic terms. The VI demonstrated a non-linear response to increasing doses of noradrenaline (p < 0.0001). Non-survivors had a significantly greater degree of vasoplegia throughout the dose range of noradrenaline (p < 0.0001).

Of the patient episodes explored in the CC-HIC data analysis, 470 (29.7%) received hydrocortisone during administration of noradrenaline. Intensive care mortality was 47.0% in the hydrocortisone group and 21.0% in the noradrenaline-only group (p < 0.0001). Patients received hydrocortisone a median (IQR) of 6.0 (1.0–21.25) hours after initiation of noradrenaline, and at a median (IQR) dose in the first hour of 0.30 (0.13–0.53) µg/kg/min. Administration of hydrocortisone to the whole population had a statistically significant effect on the degree of vasoplegia (p < 0.0001); however, the size of the effect was not clinically important.

Discussion: In ICU patients coded as having infection, the response to noradrenaline therapy, as determined by the vasoplegia index, differs throughout the dose range between survivors and non-survivors. The addition of corticosteroids to noradrenaline therapy in ICU patients confers a statistically significant, but clinically unimportant, improvement in vasoplegia.

EP.138 - Table.

Organism Amoxicillin Sensitive Amoxicillin Resistant Tazocin Sensitive Tazocin Resistant
Enterococcus faecium 0 1 0 1
Escherichia coli 2 6 5 2
Lactose fermenting coliform 0 1 0 1
Saccharomyces 1 0 1 0
Streptococcus milleri 1 0 1 0
Klebsiella pneumoniae 0 1 - -
Total 4 9 7 4

EP.140

Augmented Passive Immunotherapy with P4 Peptide improves neutrophil function in paediatric patients admitted to intensive care with severe sepsis

Jesus Reine1, Jamie Rylance1, Laura Thompson2, Daniela Ferreira1, Stephen Gordon3, Enitan Carrol2, Matthew Peak4, Aras Kadioglu2, Ben Morton1,5

1Liverpool School of Tropical Medicine, Liverpool, UK

2University of Liverpool, Liverpool, UK

3Malawi-Liverpool-Wellcome Trust Clinical Research Programme, Blantyre, Malawi

4Alder Hey Children's NHS Foundation Trust, Liverpool, UK

5Aintree University Hospital NHS Foundation Trust, Liverpool, UK

Abstract

Introduction: Antimicrobial resistance threatens to undermine treatment of severe infection; new therapeutic strategies are urgently needed. Previous work demonstrates that P4 peptide increases phagocytic activity and shows potential as a therapeutic strategy (1). Our aim was to determine P4 activity in a target paediatric population admitted to critical care with severe infection.

Methods: We prospectively recruited paediatric patients with sepsis (n = 10) and compared them with healthy children (general anaesthesia for minor procedures, n = 10). Blood samples were taken within 48 hrs of sepsis diagnosis and transferred to LSTM for analysis (15/NW/0869). We employed a flow cytometric assay that exposes whole blood to standardised IgG-labelled beads to report neutrophil association and quantify respiratory burst. This assay requires minimal blood volumes, a prerequisite in children at risk of iatrogenic anaemia.

Results: Ten children had severe sepsis (eight mechanically ventilated and requiring cardiovascular support at time of sampling). P4 peptide significantly increased neutrophil association with and oxidation of the reporter beads in both critical care and healthy populations (Figure 1).

Conclusions: We applied an innovative neutrophil function assay to a paediatric population with severe sepsis for the first time. We corroborated previous results demonstrating that P4 peptide augments neutrophil activity in adult patients (2). We now plan to explore our minimal volume assay in paediatric rheumatology and oncology to assess neutrophil function and potential therapeutic targets.

[Figure 1]

References

  • 1.Morton B, Pennington SH, Gordon SB. Immunomodulatory adjuvant therapy in severe community-acquired pneumonia. Expert Rev Respir Med 2014; 8: 587–596. [DOI] [PubMed]
  • 2.Morton B, Mitsi E, Pennington SH, Reine J, Wright AD, Parker R, et al. Augmented Passive Immunotherapy with P4 Peptide Improves Phagocyte Activity in Severe Sepsis. Shock 2016; 46: 635–641. [DOI] [PubMed]

EP.141

The diagnostic clinical utility of implementing non-culture based pathogen detection in a sepsis care pathway

Emily Wade1, Jonathan Bannard-Smith2, Tim Felton3, Emma Davies2, Nicolas Rey de Castro2, Andrew Turner2 and Paul Dark1

1University of Manchester, Manchester, UK

2Central Manchester University Hospitals NHS Foundation Trust, Manchester, UK

3University Hospital South Manchester NHS Foundation Trust, Manchester, UK

Abstract

Introduction & aims: Sepsis is a major cause of morbidity and mortality. Survival is improved by early diagnosis and timely, appropriate treatment. Blood culture is the established method for identifying causative pathogens, but results can take 24–72 hours. Empirical broad-spectrum antimicrobial therapy given pending blood culture results increases the risk of antimicrobial resistance and is associated with poor outcomes. Rapid identification of the causative pathogen could improve outcomes. Here, we assessed the utility of polymerase chain reaction and electrospray ionisation-mass spectrometry (PCR/ESI-MS) in the identification of bloodstream infections in adult patients in critical care.

Methods: A prospective observational study of 41 patients admitted with suspected sepsis to the critical care units at two university teaching hospitals (Manchester Royal Infirmary and University Hospital South Manchester). All patients had routine blood cultures taken, plus an additional 5 ml of blood for analysis using PCR/ESI-MS technology (IRIDICA Assay: Ibis Biosciences, Illinois, USA). We compared result reporting times, diagnostic accuracy with blood culture as the reference standard, and any influence on clinical care, which we defined as confirmation, change or de-escalation of antimicrobial therapy.

Results: IRIDICA provided quicker results than conventional blood cultures (median 25.87 hours versus 49.07, P = 0.012). The IRIDICA results were received before blood culture in 70.7% (29/41) of patients and influenced clinical care in 31.7% (13/41). The overall sensitivity and specificity of IRIDICA compared to blood culture were 100% (95% CI: 39.6–100.0) and 70.27% (95% CI: 52.8–83.6) respectively. The negative predictive value of IRIDICA was 100% (95% CI: 84.0–100.0). If antibiotics had been stopped in all patients with a negative IRIDICA result, patients would have received antibiotics for 5.63 fewer days.
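The reported accuracy figures follow from a standard 2×2 table with blood culture as the reference standard; the counts in this sketch are illustrative, not the study's data:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity and negative predictive value from a
    2x2 confusion table (index test vs. reference standard)."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "npv": tn / (tn + fn),
    }

m = diagnostic_metrics(tp=8, fp=3, fn=2, tn=30)  # invented counts
```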

Conclusion and Implications: In patients with suspected sepsis, IRIDICA provided quicker diagnostic information than routine blood culture, and this information was acted on by clinicians for 1 in 3 patients. Further research is needed to assess the barriers to clinical utilisation and cost-effectiveness before this type of technology can be more widely implemented.

EP.142

Are we failing patients with acute kidney injury?

Philippa Hardy1, Timothy Hill1

1East & North Herts NHS Trust, Stevenage, UK

Abstract

Acute kidney injury (AKI) is defined as a rapid deterioration in renal function with an increase in serum creatinine of ≥26 micromol/L, a ≥50% rise in serum creatinine from baseline, or >6 hours of reduced urine output (<0.5 ml/kg/hour). It is associated with increased patient mortality and costs the NHS more than £500 million/year. The NCEPOD study in 2009 found that 17% of AKI complications were avoidable with adequate investigation and regular monitoring. East & North Herts NHS Trust issued AKI assessment and management guidelines in 2015, recommending initial blood tests including FBC, U&E, LFT, CRP, HCO3-, Ca, PO4- and lactate, with daily U&E monitoring until the AKI had resolved.

We audited 487 patients flagged as having an AKI on BIMsEPR during their admission in the month of March. Patients admitted for less than 24 hours, those with an eGFR <10 and patients without a true AKI after blood result analysis were excluded, leaving a total of 110 patients. Of these 110 patients, 0% had all the recommended blood tests, 95.5% were missing a lactate level, 91.8% did not have a bicarbonate and only 55.5% had daily U&Es.

Limitations of the study include the absence of ABG results, which do not appear on the online systems and may include lactate and bicarbonate values. Recommendations for future practice include junior doctor education and the inclusion of an ‘AKI’ button on the blood ordering software ICE, which would automatically request all the bloods advised in the trust guideline.
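The diagnostic thresholds in the definition above can be written as a simple screening function; this is an illustrative sketch, not the trust's actual flagging logic:

```python
def meets_aki_criteria(creat_now, creat_baseline, hours_low_urine_output):
    """AKI screen per the abstract's definition: creatinine rise of
    >=26 micromol/L, or a >=50% rise from baseline, or more than
    6 hours of urine output below 0.5 ml/kg/hour."""
    rise = creat_now - creat_baseline
    return (rise >= 26
            or (creat_baseline > 0 and creat_now >= 1.5 * creat_baseline)
            or hours_low_urine_output > 6)
```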

EP.143

Neutrophil-lymphocyte ratio – is the full blood count a warning sign in prevention of acute kidney injury? A literature review

Dan Drayton1, Robert Palin1, Luke McMenamin1,2 and Andrew Lewington1,2

1Leeds Teaching Hospitals NHS Trust, Leeds, UK

2School of Medicine, University of Leeds, Leeds, UK

Abstract

Background: Acute kidney injury (AKI) is a rapid loss of renal function that is seen in over half of critically unwell patients admitted to intensive care beds. Traditionally, AKI is diagnosed using urine output or biochemical markers such as urea and creatinine. The neutrophil-lymphocyte ratio (NLR) is a cheap and easily calculated biomarker with proven utility in detecting early inflammatory changes in a range of acute medical and surgical conditions. It may offer undiscovered predictive value in the patient with AKI.

Methods: A systematic literature search of Ovid:Medline, EMBASE, CINAHL and Google scholar was performed to identify all relevant literature existing in this area. Search terms “neutrophil lymphocyte ratio” and “acute kidney injury” were used as well as some associated MeSH terms.

Results: A total of 39 papers were found and filtered according to the relevance of title and abstract. Seven papers, two letters to the editor and two conference abstracts were relevant to this literature review. The full text was not available in two instances. Multiple studies demonstrated an association between NLR and AKI. Eight authors used multivariate analysis, which demonstrated NLR as an independent risk factor for development of AKI. Furthermore, receiver operating characteristic (ROC) curves were used by five authors to determine the optimal cut-off for NLR as a predictive biomarker, increasing the utility of NLR in clinical settings. Other studies include NLR as one element of a predictive model for AKI. In one paper, NLR measured immediately after surgery and on post-operative day one was an independent risk factor for AKI; however, the authors recognised that it was a retrospective study, allowing little control of bias and confounders. One consistent limitation observed across the literature was small sample size (limited number of patients with AKI). Furthermore, the narrow inclusion criteria in each of these studies limit the generalisability of the findings. The literature recognises that other biomarkers (such as cystatin C and interleukin-18) are more sensitive than creatinine, but they are also more expensive than creatinine and NLR.

Conclusion: NLR is associated with AKI and may offer a cheap and easily calculable serum biomarker which could be used to predict AKI. Only one of the papers we found studied patients in the intensive care unit. This is an area which warrants further study.

EP.144

Metabolic control with a novel citrate anticoagulation protocol for continuous venovenous haemofiltration

Oliver Meller-Herbert1, Matt Rowe1 and Matt Thomas1

1Southmead Hospital, Bristol, UK

Abstract

Background: Regional citrate anticoagulation (RCA) for renal replacement therapy (RRT) is recommended by KDIGO.1 Our local protocol changed in March 2016 to continuous venovenous haemofiltration (CVVH) using RCA with the Prismaflex system (Baxter UK). This protocol was developed in-house to match previous experience with RRT modalities and differs from previously published protocols2 that used continuous venovenous haemodiafiltration (CVVHDF).

We use Prismocitrate 18/0 for anticoagulation and pre-filter replacement and Prismocal B22 or 0.9% sodium chloride for pre- and post-filter replacement. We report our first post-implementation quality assurance assessment.

Objectives:

1. Document metabolic control (calcium and acid-base balance)

2. Document filter life and cost per filter-hour

Methods: Ten patients receiving RRT after the protocol implementation date were identified retrospectively from the Intensive Care Unit database (WardWatcher, http://www.sicsag.scot.nhs.uk/Data/WardWatcher.html). Costing data were obtained from pharmacy, biochemistry and procurement, with prices current in August 2016. Data were extracted from paper records by two investigators using a standard form and analysed with Microsoft Excel.

Results: The sample of ten patients represented 47 filter runs and 1683 hours of RRT.

The metabolic control values (given as mean +/− standard deviation) are:

– Filter ionised calcium 0.32 (0.07) mmol.L−1 (protocol target 0.25–0.45 mmol.L−1)

– Patient ionised calcium 1.10 (0.1) mmol.L−1 (protocol target 0.9–1.2 mmol.L−1)

– Patient standard bicarbonate 24.61 (3.55) mmol.L−1 (protocol target 22–32 mmol.L−1)

These values were in range for a high proportion of time during RRT:

– Filter ionised calcium: 82.7%

– Patient ionised calcium: 92.4%

– Patient standard bicarbonate: 92.4%
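The time-in-range figures are simple proportions of measurements falling within the protocol targets; a minimal sketch:

```python
def fraction_in_range(values, low, high):
    """Proportion of measurements within a target range (inclusive)."""
    return sum(low <= v <= high for v in values) / len(values)

# e.g. filter ionised calcium against the 0.25-0.45 mmol/L target
# (the measurement values here are invented for illustration):
share = fraction_in_range([0.32, 0.30, 0.50, 0.41], 0.25, 0.45)
```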

The mean filter life was 36 hours, with a failure of anticoagulation recorded as the reason for termination in 28% of filter runs. The cost per filter hour was £10.96.

Conclusion: Use of a locally developed RCA protocol for CVVH produces acceptable metabolic control and filter life.

References

  • 1.Kidney Disease: Improving Global Outcomes (KDIGO) Acute Kidney Injury Work Group. KDIGO clinical practice guideline for acute kidney injury. Kidney Int Suppl 2012; 2: 1–138.
  • 2.Jacobs R, Honoré PM, Bagshaw SM, et al. Citrate formulation determines filter lifespan during continuous veno-venous hemofiltration: a prospective cohort study. Blood Purif 2015; 40: 194–202. [DOI] [PubMed]

EP.145

The Clock's Ticking: Timely Admission and Discharge from Critical Care

Nia Jones1 and Babu Muthuswamy2

1School of Medicine, Cardiff University, Cardiff, UK

2Consultant Anaesthetist and Intensivist, Royal Gwent Hospital, Wales, Newport, UK

Abstract

Introduction: Critical care units in hospitals throughout the UK are experiencing an increase in demand, with no real increase in capacity. They are being forced to run above the recommended level of capacity, resulting in delayed admissions of critically ill patients to the critical care unit, both from the resuscitation area in the emergency department (ED resus) and other areas of the hospital. Prompt discharge of patients from critical care would help free up beds so that the sickest patients receive the right care at the right time. Delayed transfers of care (DToCs) are an ongoing national issue. This audit focuses on length of admission to, and discharge time from, critical care in Royal Gwent Hospital (RGH), Newport, UK. This encompasses the intensive care unit and high dependency unit.

Method: Over a period of 5 months, January 1, 2017 to May 31, 2017, data on critical care delayed admissions (from non-theatre settings) and delayed discharges were collected using the Intensive Care National Audit & Research Centre (ICNARC) database, WardWatcher and scanned handwritten in-patient notes from electronic hospital records (eNotes).

Results: During this 5-month period, a total of 112 patients were admitted to the unit from non-theatre hospital settings with available eNotes. Seventy-one per cent of these patients waited longer than 1 hour, usually considered the acceptable maximum time for admission to critical care. Patients referred from ED resus had a 72% chance of a delay longer than 1 hour, and delays in admission from ED resus averaged 2 hours 18 minutes. A total of 401 patients were discharged alive from critical care during the same period; 63% waited longer than the 4 hours stipulated by national guidelines.

Conclusion: This audit showed that delayed admissions to critical care occur commonly in the RGH, including from ED resus, which has a limited number of bays. Delayed transfers from critical care are also very common, and much higher than government targets. This affects patient flow through critical care, as well as possibly impacting on patient outcomes and rehabilitation. Lastly, both delayed admissions and discharges result in significant wastage of NHS resources.

EP.146

Surveys of Future Consultant Posts in the North West of England 2014 & 2017

Ola Abbas1, Shashi Chandrashekaraiah2 and Ken McGrattan2

1Health Education North West, Manchester, UK

2Lancashire Teaching Hospitals NHS Trust, Preston, UK

Abstract

Introduction: Expansion of critical care services has resulted in growing demand for consultants in intensive care. The introduction of the single CCT and of dual accreditation with specialities other than anaesthesia will diversify and complicate the working patterns of future intensive care consultants. In 2015 the Centre for Workforce Intelligence forecast that demand for intensivists would rise by an average of 4.7% per year.1

We conducted these surveys to evaluate the projected number of intensivists required within the region and to assess whether increasing training numbers is justifiable.

Methods: Surveys were distributed via SurveyMonkey® across 16 intensive care units. Questions covered several aspects from the size of the unit and expansion plans over the next 5 years to current staffing levels and projected future needs due to expansion or consultant retirement.

The first survey, in 2014, yielded a response rate of 100%; the 2017 survey yielded 87.5%.

Results: In both surveys, 9 of the 16 units stated that they had an adequate number of consultants to fully staff their rotas; however, 9 units also stated that they needed up to 13 additional consultants between them to maintain staffing levels.

Similar numbers of retiring consultants were reported in both surveys: at least 31 and 29 respectively.

In both surveys, 8 units stated an intention to expand their service. To support this expansion, an estimated 26 consultants were required in 2014 and 22–28 in 2017. Service re-organisation was not perceived to be associated with redundancy.

Between the two surveys, the estimated number of consultants required to meet rota changes fell from 40 to 14, and the estimated number required to cover service demand fell from 100 to 88. This fall may be explained by the lower response rate to the second survey.

Fourteen units (87.5%) advocated increasing training posts to support the high demand for consultants.

Conclusions: Our surveys highlight the workforce requirements across the region and aid further recruitment planning. The data have allowed us to confidently increase the number of North West posts from 12 to 16 in 2014 and further to 20 in 2016.

Due to the complex nature and flexibility of service delivery between ICM and other specialities, further workforce reviews should be undertaken in the next two to three years as the ICM speciality grows.

Reference

  • 1.Centre for Workforce Intelligence. In-depth review of the anaesthetics and intensive care medicine workforce. London: Centre for Workforce Intelligence, 2015.

EP.147

Critical and Acute Care Unmet Need: A Network-wide Approach to Establish Patient Acuity and Levels of Care beyond the Boundaries of Adult Critical Care Units

Lesley Durham1, Julie Platten2, Isabel Gonzalez3, David Cressey4, Jan Malone1 and Sarah Gray2

1North of England Critical Care Network, North Shields, UK

2North of England Critical Care Network, Stockton, UK

3James Cook University Hospital, Middlesbrough, UK

4Freeman Hospital, Newcastle, UK

Abstract

Background: At a national level, Critical Care capacity is under pressure and acutely unwell patients are being managed in ward areas with Critical Care Outreach support. There remains a crucial lack of information regarding the true burden of critical care need at both local and national levels. The Critical Care Stakeholder Forum[1] recommended that critical care capacity on general wards should be evaluated at a local level. The North of England Critical Care Network (NoECCN) has undertaken an annual “Level of Care Point Prevalence Survey” of all adult in-patient ward areas, to identify the burden of critical and acute adult care need in the Network's hospital wards. A total of 37,200 in-patients have been assessed and their level of care recorded.

Objective: To identify the true burden of acute and critical care need outwith the designated critical care areas within each participating hospital.

Methods: An annual single-day snapshot Level of Care Point Prevalence survey (facilitated by the NoECCN Outreach Group), recording a single-point level of care for all adult in-patients in each hospital.

The definition of ‘level of care’ was taken from the Intensive Care Society's ‘Levels of Critical Care for Adult Patients’.[2] These definitions were expanded by the Outreach Group to include Level 2a patients: those outwith a designated Critical Care Unit for whom escalation would be appropriate.

Results: Table 1 demonstrates that patient acuity is high, with approximately 1 in 3 ward patients requiring Level 1 care. Table 2 identifies significant numbers of Level 2a patients outwith a designated Critical Care Unit and demonstrates a shortfall in the number of Level 2 beds across the NoECCN.

Table 1

graphic file with name 10.1177_1751143718772957-img5.jpg

Table 2

graphic file with name 10.1177_1751143718772957-img6.jpg

Conclusion and Recommendations: The need for the different levels of care has remained reasonably constant, adding some face validity to the findings. Ward-based patient acuity is high and an unmet need for Level 2 care has been identified, demonstrating the need for 24/7 Critical Care Outreach Teams, which have been described as the ‘safety engines of the hospital’.

The development of ‘enhanced care areas’ to meet the requirements of patients needing care between Level 1 and Level 2 may help improve efficiency and patient safety. Further investment in Level 2 care is required in some NoECCN Trusts.

References

  • 1.Department of Health. Quality Critical Care: Beyond ‘Comprehensive Critical Care’: a report by the Critical Care Stakeholder Forum. London: DoH, 2005.
  • 2.Intensive Care Society. Levels of Critical Care for Adult Patients. London: Intensive Care Society, 2009.

EP.148

Cheshire and Merseyside Critical Care Network Ventilator Associated Pneumonia (VAP) Audit: Phase Two

Diane Murray1, Alison Hall1 and Kevin Sim2

1Royal Liverpool and Broadgreen University Hospitals NHS Trust, Liverpool, UK

2St Helens & Knowsley NHS Trust, Liverpool, UK

Abstract

Introduction: The 2015 Guidelines for the Provision of Intensive Care Services (GPICS) recommend that ICUs should have standardized systems in place to monitor Ventilator Associated Pneumonia (VAP) rates. In 2015 we devised a mnemonic-based VAP definition, based on the American Centers for Disease Control and Prevention guidelines, and conducted a three-month pilot audit at four units across the Merseyside region (Whiston, Warrington, Leighton and Wirral). We found that our VAP rate was significantly lower than that quoted in the literature.

Objectives: Our objective for the second phase of the audit was to test our VAP definition and data collection sheet on a larger sample size.

Methods: Inclusion criteria comprised a period of more than 48 hours of mechanical ventilation with stable or reducing FiO2 and PEEP requirements. Exclusion criteria included non-infective causes for increased ventilator support and/or a current VAP.

Following stabilization, a VAP was determined by:

Ventilator settings (one or more of)

• An increase in FiO2 or PEEP to achieve designated targets for the patient, sustained over a 24-hour period

Associated features (one or more of)

• New WCC <4.0 or >12.0 × 10⁹/L, or increasing/decreasing from baseline

• New temperature change

• Positive microbiology (taken after 48 hr of mechanical ventilation and within 24 hr of an increase in ventilator settings) for pulmonary infection (e.g. blood culture, BAL samples (direct or indirect), endotracheal aspirate) consistent with VAP

Pneumonia (one or more of)

• Evidence of new consolidation on CXR/CT (Not required in patients with pre-existing ARDS)

We conducted an audit of three units across the Merseyside region (Chester, Royal Liverpool & Macclesfield). A data collection sheet was filled out each day.
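For illustration only, the three-part decision rule above can be sketched in code (a hypothetical encoding; the function and field names are our own and were not part of the audit tool), together with the rate per 100 included ventilator days used when reporting unit VAP rates:

```python
def meets_vap_criteria(increased_vent_support: bool,
                       associated_features: dict,
                       new_consolidation: bool,
                       pre_existing_ards: bool = False) -> bool:
    """Hypothetical encoding of the audit's VAP definition: a sustained
    increase in ventilator support AND at least one associated feature
    AND radiological pneumonia (waived in pre-existing ARDS)."""
    any_feature = any(associated_features.values())  # one or more of WCC/temperature/microbiology
    pneumonia = new_consolidation or pre_existing_ards
    return increased_vent_support and any_feature and pneumonia


def vap_rate(vap_episodes: int, included_vent_days: int) -> float:
    """VAP episodes per 100 included ventilator days, as a percentage."""
    return 100 * vap_episodes / included_vent_days
```

For example, a unit with a hypothetical 12 VAP episodes over 1431 included ventilator days would have a rate of roughly 0.84%.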

Results: In total, we audited 2466 ventilator days, with 1431 ventilator days meeting the inclusion criteria.

ICU VAP Rate
Chester 0.82%
Royal Liverpool 1.08%
Macclesfield 0%

Conclusions: We continued to find that our VAP rate was significantly lower than that quoted in the literature, but our definition and daily data collection sheet standardized data collection across the region. We had initially planned to continue the audit indefinitely with continuous daily data collection; however, given the low VAP rate detected in the region, questions have been raised as to whether this is clinically beneficial or cost-effective. We are now considering a single snapshot audit each year to identify VAP rates.

EP.149

Identification of barriers and facilitators to information exchange about a deteriorating patient’s escalation of care: A Human Factors Analysis

Julie Darbyshire1, Marta Wronikowska1, Verity Westgate1, James Malycha1, Peter Watkinson1, Duncan Young1 and Jody Ede1

1University of Oxford, Oxford, UK

Abstract

Introduction and Background: Failure to identify physiological deterioration in hospitalised patients is a recurring theme in the acute care literature. It can result in increased Intensive Care Unit (ICU) admissions, length of hospital stay and mortality. Unacceptable time-delays exist between physiological deterioration and referral to the ICU team. Despite standardisation of care, there continue to be Failure to Rescue (FTR) events resulting from factors such as poor understanding of illness severity, clinical staff's limited situational awareness and information barriers. Little is known about the ‘work as done’ aspects of escalation of care, or the workarounds and facilitators within this process. Qualitative research has identified contextual factors affecting FTR, but there is little Human Factors research framing exploration of the deteriorating patient's escalation of care.

Methods: A thematic analysis is underway exploring information barriers and facilitators affecting deteriorating patients’ escalation of care. Applied Cognitive Task Analysis (ACTA) interviews (n = 19) and surveys were conducted with purposive samples of clinical staff from ward and ICU areas. The Systems Engineering Initiative for Patient Safety 2.0 (SEIPS) model was used to frame the analysis, referring to nine key elements: People, Tasks, Technology, Tools, Organization, Environment, Configuration, Engagement and Adaptation.

Results: Themes emerging from the preliminary analysis of the ACTA interviews include barriers and facilitators in individual communication and in Electronic Information Systems (EIS) during the escalation of patient care. Perceived facilitators of individual information exchange are leadership, self-monitoring during decisions, combining data with situational information, systematic information gathering and early recognition of a deviation from the norm. EIS facilitators were described as human-centred design, accessibility, appropriate levels of information and intuitive technology design.

Barriers to individual information exchange are unpredictable information trends, negative emotions, poor situational awareness and information misrepresentation. Barriers to EIS information exchange are fragmented data and incompatible systems. We expect further themes to emerge, with survey data offering perceptions of both FTR events and ‘Work as done’.

Discussion: Human factors analysis can contribute to the understanding of FTR events and successful information transfers or ‘Work as done’.

Disclaimer: This abstract presents independent research commissioned by the Health Innovation Challenge Fund (HICF-R9-524; WT-103703/Z/14/Z), a parallel funding partnership between the Department of Health and Wellcome Trust. The views expressed in this publication are those of the author(s) and not necessarily those of the Department of Health or Wellcome Trust.

EP.150

Influence of ward round order on critically ill patient outcomes

Steve Evans1, Jai Darvall1, Alexandra Gorelik2 and Rinaldo Bellomo1

1Royal Melbourne Hospital, Melbourne, Australia

2Melbourne EpiCentre, Melbourne, Australia

Abstract

Medical ward rounds are a ubiquitous feature of patient care worldwide, including in the intensive care unit. Past studies have identified beneficial attributes of the ICU ward round such as ward round checklists, and also negative factors, including interruptions and new patient admissions during the round. More recently, “decision fatigue”, with resulting decreased ability to recognise and resist inappropriate decision making, has been shown to be important in influencing medical behaviour in the clinical setting. We performed a retrospective observational study in a tertiary metropolitan ICU, to assess the relationship between the order that patients were seen on the ward round and outcomes including ICU length of stay (LOS). 681 of 2094 total patients admitted in 2014 occupied either the first or last three spaces in the ICU ward round, without having moved beds during admission. There were no differences in the primary outcome, ICU LOS, median (IQR) 50 (23-102) hours for the first three patients seen vs. 51 (25-110) hours for the last three patients, p = 0.594. No differences were found in any secondary outcomes (hospital LOS, ICU mortality or duration of mechanical ventilation). We conclude that patient position on the ICU ward round does not affect relevant outcomes.

Key words (MeSH): Intensive Care Units; Decision Making; Checklist; Teaching Rounds

EP.151

A Quality Assurance Survey to Evaluate the Perceptions of the Multi-disciplinary team on the Role of the Advanced Critical Care Practitioner (ACCP)

Janice Thomas1 and Susan Colling1

1Newcastle upon Tyne Hospitals, Newcastle upon Tyne, UK

Abstract

Multifactorial issues have resulted in the current, and predicted long-term, decrease in medical staff numbers within Critical Care. As a result, many Critical Care Units have developed the ACCP role in order to fill these gaps and maintain high-quality care within this speciality.

This was a single-centre Quality Assurance Survey across two teaching hospitals, evaluating perceptions of the ACCP role across four adult speciality Critical Care Units. The survey had three aims:

• To assess whether all currently qualified ACCPs were working to the expectations of the role described by the Faculty of Intensive Care Medicine (FICM)

• To evaluate whether the introduction of the ACCP role had a negative impact on the training of medical trainees.

• To identify concerns or improvements that could be addressed.

A total of 610 surveys were sent out through SurveyMonkey to the majority of staff who predominantly worked across the four Critical Care Units. A total of 221 replies were received, equating to 34%. Dividing respondents into professional groups gave an average group-specific response rate of 54%.

The survey was divided into sections 1–8. The sections were as follows:

1. Personal Attributes

2. Communication

3. Management

4. Decision Making

5. Support for Colleagues

6. Clinical

7. Non-Medical Prescribing

8. General

Sections 1–7 covered the expectations of the role as defined by FICM. Section 7 covered all aspects of non-medical prescribing.

graphic file with name 10.1177_1751143718772957-img7.jpg

Over the seven sections covering FICM expectations, an average of 86% of respondents agreed or strongly agreed that ACCPs met all expectations. For Non-Medical Prescribing, an average of 80% of respondents agreed or strongly agreed that ACCPs prescribe safely. Only 5% of respondents thought that ACCPs interfered with medical trainees’ training, and 86% felt that ACCPs enhanced overall patient care. However, the perception of the ACCP role still appeared unclear in some areas, highlighting the need for further clarification.

EP.152

Human factors training for medical students: Exploring student perception and how to promote a better understanding

Chris Allen1, Jasdeep Bhogal1, Shaan Hyder2 and Suveer Singh1

1Undergraduate Department, Clinical Learning and Development, Chelsea and Westminster Hospital, Imperial College School of Medicine, London, UK

2Postgraduate Department, Clinical Learning and Development, Chelsea and Westminster Hospital, Imperial College School of Medicine, London, UK

Abstract

Background and Purpose: Human factors training addresses the interaction between humans and the system in which they work. It is increasingly used in medicine to enhance patient safety and reduce the risk of medical error.1 It has been shown to change the practice of those who undertake it, with direct benefit to patients.2 The role of human factors training in undergraduate medical education is still developing, and there is currently no formal human factors training at our institution. This study aims to identify the attitudes of medical students towards human factors training, and how a training programme can affect students’ preparedness for, and approach to, the clinical situations required in their careers as doctors, with the aim of setting up a regular teaching programme.

Methodology: We sent out a questionnaire assessing students’ understanding of, and attitudes towards, human factors, and its perceived relevance to their careers. We sampled two groups from Imperial College School of Medicine: Year 3 and Year 5. Following this, we carried out a session highlighting areas of human factors training and how these can be applied to students’ own practice. On completion of the session, a similar post-intervention questionnaire was completed to gauge the change in students’ attitudes and perceptions. We also compared the two year groups to determine how perceived value and relevance relate to level of training, and when the most effective time to expose students to human factors training may be.

Results: The initial questionnaire showed that students’ perception of human factors training is limited, particularly in terms of its relevance to their own practice. The session improved understanding and prompted reflection on how important human factors training is to the students.

Discussion and Conclusions: Human factors is an umbrella term covering a large number of concepts, and deciding on an appropriate method of teaching it has been identified as difficult.3,4 Using a variety of teaching methods, and allowing students from different year groups to attend, will enable a better understanding of the importance human factors plays in medical students’ careers. Ensuring they have an appreciation of this, and of how they can apply this knowledge to their practice, is key to the success of this training programme and ultimately to reducing risk to patients.

EP.153

How do working environments compare between Intensive Care Medicine and Acute Internal Medicine – multicentre and national feedback

Greg Packer1, Felicity Evison1 and Julian Bion1

1Queen Elizabeth Hospital, Birmingham, UK

Abstract

Aims: Intensive Care Medicine (ICM) and Acute Internal Medicine (AIM) both involve a high throughput of acutely unwell patients in a 24-hour service. The Patient Engagement and Reflective Learning (PEARL) project is gathering multiple streams of data across four UK hospitals to inform workshops iteratively developing a reflective learning toolkit, encouraging the use of patient feedback to improve ICM and AIM services. To understand existing working environments, trainee feedback was obtained from the 2016 General Medical Council (GMC) Trainee Survey.

Method: The GMC survey is distributed to all trainee doctors, with mandatory completion. We obtained data for trainee doctors working in ICM/AIM at the Queen Elizabeth Hospital (QEHB) and Heartlands Hospital (BHH) in Birmingham, and the Royal Victoria Infirmary (RVI) and Freeman Hospital (FH) in Newcastle. The data were available in a 5-item Likert scale format and were analysed using Kruskal-Wallis and Fisher’s exact tests.

Results: AIM trainees rate their daytime workload more heavily than ICM trainees at QEHB (p = 0.001) and nationally (p < 0.001). AIM trainees rate their night-time workload more heavily than ICM trainees at RVI (p = 0.02) and nationally (p = 0.01).

AIM trainees at all sites and nationally are significantly more likely to report working beyond rostered hours than ICM trainees (p < 0.05). Nationally, AIM trainees are significantly more likely to report feeling short of sleep at work than ICM trainees (p < 0.05).

AIM trainees at BHH and nationally were less likely than ICM trainees to agree that trainee doctors are always treated fairly (p < 0.05). AIM trainees nationally, at BHH and at RVI are less likely to find their working environment supportive than ICM trainees (all p < 0.01); the opposite is true at QEHB (p = 0.04).

ICM trainees at QEHB were less likely than AIM trainees to agree that senior colleagues would be open to their opinion (p = 0.01); the opposite is the case at BHH (p = 0.04), RVI (p < 0.01) and nationally (p < 0.01).

Nationally AIM trainees have less confidence in intra-specialty handovers than ICM trainees (p < 0.05). ICM trainees have less confidence in inter-specialty handover arrangements than AIM trainees (p < 0.05 nationally and at all sites except BHH).

Conclusion: These data demonstrate that comparing local and national trainee feedback can identify possible areas for improvement in local working environments. The fundamental weakness is the small numbers involved (1626 AIM and 756 ICM trainees nationally, but as few as 6 individuals in a specialty at particular sites); findings can therefore only be used to provoke further exploration and discussion.

EP.154

Factors affecting ITU refusal

Rahul Dimber1 and Clare Stapleton1

1Frimley Healthcare Foundation Trust, Wexham Park Hospital, Slough, UK

Abstract

Background: The decision to admit a patient to ITU should be based on the likely benefit we can provide to that patient while not imposing excessive burden. We are managing a finite resource, and as clinicians we are therefore obliged to use it responsibly. In the UK there are 3 ITU beds per 100,000 population, compared with 25 ITU beds per 100,000 population in Germany.

Aim: To identify factors that affect decisions to admit patients to Intensive Care in a UK District General Hospital Intensive Care Unit (ICU).

Methods: This was a pilot prospective observational study over a 9-month period. Data were collected on randomly selected days during the study period, based on previous research.1,2 For each referral for consideration of ITU admission we noted the following: time of referral, seniority of staff making the referral, seniority of staff taking the referral call, location of the patient, capacity on ITU, method of communication, seniority of decision making and reason for refusal.

Results: A total of 45 referrals were included; 64% were refused admission to ICU. The acceptance rate was 82.76% for referrals made between 8 pm and 8 am, against 17.24% for referrals made between 8 am and 8 pm, with an odds ratio of 0.34 (CI 0.18–0.6, p = 0.01) in favour of admission for referrals made out of hours. The factors most likely to influence the decision to admit were availability of ITU beds, out-of-hours referral, referral by a senior physician, review by senior ITU staff and functional status; DNACPR status did not influence the decision to refuse ITU admission.
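For context, an odds ratio and confidence interval of the kind reported here are derived from a 2 × 2 table of exposure (e.g. out-of-hours referral) against outcome (admission). A minimal sketch of that calculation, using illustrative counts rather than the audit's actual data:

```python
import math

def odds_ratio_with_ci(a: int, b: int, c: int, d: int, z: float = 1.96):
    """Odds ratio and Wald 95% confidence interval for a 2x2 table:
    a = exposed & admitted,   b = exposed & refused,
    c = unexposed & admitted, d = unexposed & refused."""
    odds_ratio = (a * d) / (b * c)
    # Standard error of log(OR) by the Wald method
    se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)
    lower = math.exp(math.log(odds_ratio) - z * se_log_or)
    upper = math.exp(math.log(odds_ratio) + z * se_log_or)
    return odds_ratio, lower, upper

# Illustrative counts only (not the audit's data):
or_, lo, hi = odds_ratio_with_ci(10, 5, 4, 8)
```

An odds ratio below 1 with a confidence interval excluding 1 would indicate the exposure is associated with a lower odds of the outcome, which is why the interval reported alongside the p-value matters when interpreting such results.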

Conclusion: The likelihood of admission to ITU is influenced by factors external to the patient’s severity of illness and premorbid health. Awareness of these factors helps us to be fair and just in our decisions to admit or refuse patients.

We intend to perform a larger observation in 3 centres within our network.

References

  • 1.Garrouste-Orgeas M, Montuclard L, Timsit JF, et al. Predictor of intensive care unit refusal in French intensive care units: a multiple centre study. Critical Care Medicine 2005; 33: 750–755. [DOI] [PubMed]
  • 2.Garrouste-Orgeas M, Montuclard L, Timsit JF, et al. Triaging patients to the ICU: a pilot study of factors influencing admission decisions and patient outcomes. Intensive Care Medicine 2003; 29: 774–781. [DOI] [PubMed]

EP.155

Detecting delirium in critical care; collaboration, buy-in and junior doctor leadership

Aoife Abbey1, Nitin Arora1 and Satish Kumar1

1Heart of England NHS Trust, Birmingham, UK

Abstract

Delirium in critical care contributes not only to immediate morbidity and mortality, but may also have long-term effects on cognitive outcomes (1). At a large district general hospital (DGH) and its smaller sister DGH, we looked at the detection of delirium across both the intensive care and high dependency units.

Data for 150 patients were collected prospectively on dependency, sedation score, whether or not CAM-ICU had been carried out, and whether the patient had access to glasses and hearing aid(s). The expected standard was that CAM-ICU is carried out as part of nursing checks at least once a day and recorded on the patient chart. We expected an incidence of delirium in the region of 30–40%.

The dependencies of the 150 audited patients were 52% level 3 and 48% level 2. Of these patients, 35% had CAM-ICU carried out; the apparent incidence of delirium was 6% across all patients and 19% in those who had been screened appropriately. Site-specific results showed that our larger site performed significantly worse than our sister site (24% of screens carried out, compared with 65%; p < 0.0001).

Following these results, we carried out informal individual interviews with ten nurses at our larger site. The strongest theme that emerged was a feeling that CAM-ICU screening was not a priority for medical staff. It was felt that even if a patient screened positive, medical staff would not ‘do anything’ unless a patient was highly agitated.

We decided to focus action initially on our poorest-performing site, with the primary goal of signalling unequivocal ‘buy-in’ from medical staff. The junior-led ‘delirium walk-around’ was created, with the perceived secondary benefit of creating an opportunity for juniors to show leadership in a classically senior-led environment. This initiative comprises a junior doctor collating pre-determined data within 24 hours of admission (Figure 1) and conducting a walk-around three times a week. The walk-around involves visiting every bed space and collaborating with the nurse on an action plan for CAM-ICU-positive patients. The plan focuses on both pharmacological and non-pharmacological interventions and includes discussion with physiotherapy and relatives.

After 8 weeks of this intervention, a re-audit will be carried out and the pilot project extended across our sites accordingly.

graphic file with name 10.1177_1751143718772957-img8.jpg

Reference

  • 1.Girard TD, Jackson JC, Pandharipande PP, et al. Delirium as a predictor of long-term cognitive impairment in survivors of critical illness. Crit Care Med 2010; 38: 1513–1520. [DOI] [PMC free article] [PubMed]

EP.156

A multidisciplinary team audit of sedation stewardship to introduce education and sedation discipline in a tertiary referral intensive care unit

Benjamin Arnold1, Jo Steele1, Lucy Powell1 and Ramprasad Matsa1

1Royal Stoke University Hospital, Stoke-on-Trent, UK

Abstract

Background and Aim: Sedation in the intensive care unit is often unavoidable and benefits patients in many ways. However, over-sedation has deleterious consequences, including an increased incidence of delirium, increased length of stay, neuromuscular weakness and a significant impact on long-term cognitive function. Sedation stewardship is therefore pivotal to improving the quality of care for critically ill patients. To introduce such stewardship, we performed a baseline audit to assess whether all ICU patients had a documented target RASS (Richmond Agitation and Sedation Score) determined at the time of sedation prescription, the corresponding nursing compliance with the target RASS, and the reasons for non-compliance.

Methodology and data collection: This prospective audit was conducted on a 36-bedded Intensive Care Unit in a tertiary referral University Hospital over a period of 2 months. Each patient’s level of sedation was compared with that prescribed, and individual nursing interviews were undertaken to assess the reasons for non-compliance. All patients who were prescribed an intravenous infusion of a sedative agent were included in the audit; no patients fulfilling these criteria were excluded.

Results and outcomes: Of the forty-three data sets reviewed, 28% (n = 12) showed a difference between the targeted and actual RASS. Of these, 42% (n = 5) had no clinical reason for non-compliance with the prescribed RASS. Individual interviews with nursing staff identified the following reasons for non-compliance:

• Lack of knowledge

• Lack of confidence

• Prescription changes were made by the doctors without appropriate communication to the nursing staff

• A clinical decision was taken by the bedside nurse for clinical reasons (e.g. ventilator dyssynchrony, agitation).

The results of the audit were discussed with the established Critical Care Rehabilitation team, who found it useful to adopt a number of modifications to their education programme. The audit also serves as a baseline against which to assess improvement in the knowledge gaps of medical and nursing staff. Moreover, the targeted sedation protocol has been incorporated into the electronic patient record system, and a further audit is planned.

Conclusion: This audit provides a baseline from which to establish well-defined sedation management for critically ill patients. Following the educational interventions and their implementation by the Rehabilitation team, outcomes should improve, demonstrated by hard measures such as a decreased incidence of delirium, shorter duration of mechanical ventilation, reduced length of stay and perhaps decreased mortality.

EP.157

Reduction in Network Point Prevalence of Delirium following introduction of Delirium Reduction Working Group

Jessica Davis1, Rebecca McIntyre1, Karen Berry1, Danny Conway1, Tony Thomas1 and James Hanison1

1Greater Manchester Critical Care Network Delirium Steering Group, Manchester, UK

Abstract

Introduction: Delirium is commonly found in critically ill patients, with the incidence being quoted as somewhere between 15 and 80%.1 The presence of delirium is an independent predictor of mortality with studies showing a 2.5 fold increase in short term mortality and a 3.2 fold increase in six-month mortality in mechanically ventilated patients.2 There is also an association with increased hospital and ICU stay, increased cost of care and long term cognitive impairment.1

Methods: Following recommendations from our patient and carer forum, in April 2015 the Greater Manchester Critical Care Network formed a Delirium Reduction Working Group with the aim of reducing the incidence of delirium across the network. The group was made up of doctors, nurses and other health professionals from critical care units across our network. The group agreed a set of network standards for the prevention, identification and management of delirium. They focused on the implementation of a delirium care bundle, sharing best practice, educating staff and providing guidance on the management of delirium. The delirium care bundle addressed regular screening, sedation holds, availability of sensory aids and clocks, mobilisation plans, pain assessments and availability of sleep packs. Audit was performed quarterly against these standards, and results were fed back at the Delirium Reduction Working Group meetings.

Results: The first audit, in May 2016, showed a delirium point prevalence of 28%; the most recent audit, in March 2017, showed a reduction to 13%. Six of the eight units showed a reduction in point prevalence, the largest being from 50% to 8%. There has also been an increase in the number of trusts meeting the network target of two CAM-ICU screens in 24 hours for each patient, from 43% in the first audit to 54% in the last. The graph below demonstrates the decrease in delirium throughout the region.

[Graph: decrease in delirium point prevalence throughout the region]

Conclusion: This project demonstrates that a regional multifaceted delirium reduction programme is highly effective. Future work will focus on e-learning modules for health care professionals, the inclusion of sleep packs in every bed space, and work around sleep disturbance and reduction of noise in critical care.

References

  • 1. Borthwick M, Bourne R, Craig M, et al. Detection, Prevention and Treatment of Delirium in Critically Ill Patients. Leicester: United Kingdom Clinical Pharmacy Association, 2006.
  • 2. Cavallazzi R, Saad M, Marik P. Delirium in the ICU: an overview. Annals of Intensive Care Medicine 2012; 2: 49.

EP.158

The Ongoing Prevalence of Delirium in a Large Critical Care Unit in a Tertiary Referral Teaching Hospital

Yasmin Milner1, Roseita Carroll2, Lucinda Gabriel2, Ruth Wan2, Kathryn Griffiths2 and Kyra Dingli2

1Guy's St Thomas' NHS Trust, London, UK

2Department of Critical Care, Guys and St Thomas NHS foundation Trust, London, UK

Abstract

Introduction: Delirium is defined as an acute, fluctuating deterioration of mental state. It is one of the most common forms of organ dysfunction in the critically ill, with a prevalence of 50–80%. Despite its high prevalence, association with morbidity and contribution to prolonged hospital stay, it remains significantly under-recognised.

Methods: We conducted a prospective re-audit of an adult ICU in the same NHS Trust over a two-week period in July 2017 to complete the audit cycle. Delirium was assessed using the Confusion Assessment Method for the Intensive Care Unit (CAM-ICU) and medical chart review (MCR). Inclusion criteria for the audit were: age >18 years and length of admission >72 hrs. Exclusion criteria were: language difficulties and pre-existing cognitive or hearing impairment. After exclusions, 68 patients were included. We also documented basic demographic data, sedation scores and the use of drugs known to have a potential psychotropic effect.

Results: We recorded data for 340 patient days across 68 patients. The prevalence of delirium, based on a positive CAM-ICU, was 13% (n = 14), corresponding to 4.1% of total patient days. Of the patients suffering from delirium, 88% were male, with a median age of 62 years. 50% of the patient days that scored a positive CAM-ICU were deemed delirium negative on MCR. A CAM-ICU was not documented for 40% of patient days. 14.2% of patient days with a positive CAM-ICU had a RASS score of <1, indicating probable hypoactive delirium. 79% of patient days with a positive CAM-ICU received psychotropic and/or sedative pharmacotherapy, versus 56% of patient days without a positive CAM-ICU.

Conclusions: Compared with the results of our first audit, conducted in 2016, the prevalence of delirium appears reduced, from 18% to 13%. However, the percentage of total patient days on which a CAM-ICU was not performed has increased, from 32% to 40%. This introduces uncertainty in interpreting our results, and additional exploration of the barriers preventing CAM-ICU assessment is warranted. CAM-ICU has previously been found to be a very useful tool in the diagnosis of delirium, with a sensitivity of 80% and a specificity of 95.9%. The fact that potentially 50% of patient days with delirium could have been missed on MCR alone emphasizes the value of using a screening tool for this clinically important condition.

EP.159

Delirium Education Package: a pragmatic and multidisciplinary quality improvement project implementing both delirium screening and management in the Intensive Care Unit

Ahmed Al-Hindawi1, Leigh J Paxton1, Kirat Panesar1, Martin Shao Foong Chong1, Marcela P Vizcaychipi1 and Linsey E Christie1

1Chelsea and Westminster Hospital, London, UK

Abstract

Introduction: Delirium is commonly seen in the Intensive Care Unit (ICU) with many well-documented consequences relating to patient experience, long-term cognitive impairment, length of stay and mortality.

Methods: In our ICU, the Confusion Assessment Method for the ICU (CAM-ICU) was not embedded. Indeed, a 7-day audit demonstrated it was recorded at any point in only 3% of patients. A multidisciplinary team was formed to improve awareness, detection and management. To gauge colleagues’ knowledge of delirium, a questionnaire was employed. A Delirium Education Package, taking approximately 45 minutes to deliver, was designed. This multi-faceted package combined an educational presentation with a CAM-ICU video demonstration, performing CAM-ICU with a colleague and then with a patient. The presentation included an introduction to delirium, management of delirium (brain care bundle and environment), and CAM-ICU and its components.

Two weeks of multidisciplinary “train the trainer” sessions incorporated trainers completing the education package themselves. In April 2017, we ran “Delirium week” where the package was delivered to colleagues day and night. A certificate and badge were awarded on completion and trainers also received certificates for sessions they led. Keep Calm and CAM-ICU posters were displayed with pictures to reinforce the various components of CAM-ICU.

Similarly, the team enhanced the ICU environment: for example, clocks showing the date and time positioned where each patient could see them, improved eye mask and ear plug availability, noise awareness and reduction, and delirium information leaflets for relatives.

Patients with delirium received a gold wristband to raise awareness for all staff caring for the patient, a CAM-ICU positive sticker was placed on the ICU chart and the brain care bundle was implemented.

Results: By the end of delirium week, 88% (59/67) of nursing staff and 79% (19/24) of doctors had completed the education package. We trained 5 ICU physiotherapists, our ICU pharmacist, 26 students (medical, nursing and ODP) and 2 hospital directors: a total of 112 staff. Continuing to use Plan-Do-Study-Act methodology, we conducted a 7-day re-audit (July 2017) demonstrating CAM-ICU was recorded at least once in 46% of patients and more than once in 12.5%. Whilst staff performed CAM-ICU, it was often not recorded. Delivery of the education package has continued and documentation has been emphasised. The next delirium week will focus on the management triad (gold wristbands, CAM-ICU positive chart stickers and brain care bundle).

Conclusion: A multi-faceted delirium education package can improve identification and management of patients with delirium in ICU.

EP.160

A service evaluation: the effectiveness of the Critical Care Rehabilitation Group on patients’ physical and psychological outcomes

Sophie Horton1 and Lisa Davies1

1Countess of Chester Hospital, Chester, UK

Abstract

Background: NICE guidelines (CG83) advocate that, following critical illness, patients should have an individualised, structured rehabilitation programme, regular functional assessments and a review of their rehabilitation needs 2–3 months post discharge from hospital.

A weekly Critical Care rehabilitation group was developed for these patients following discharge to optimise physical function and to offer psychological support. Patients are given structured exercise programmes to focus on areas for development along with regular functional reassessments. The group provides patients and relatives the opportunity for support from peers and the therapy team. The group is led by the same Critical Care rehabilitation team that follows patients throughout their whole hospital journey from Critical Care, ward discharge and follow up class to provide continuity of care.

The aim of the group is to support and facilitate patients to improve their quality of life post Critical Illness.

Aims: Analyse the physical and psychological impact on patients who have attended the Critical Care rehabilitation group to evaluate its effectiveness.

Method: Data was collected retrospectively from 2014–2017. The 6 Minute Walk Test and 1 Minute Sit to Stand test were used as outcome measures for physical health. The Hospital Anxiety and Depression Scale (HADS) and Short Form 36 Health Survey (SF36) were used as an outcome measure for psychological health. Evaluation forms were also completed following the group.

Results: The physical and psychological impact on patients was analysed.

Change from initial session to final session:

• 6 Minute Walk Test: 42% increase

• 1 Minute Sit to Stand: 53% increase

• HADS: 53% increase

• SF36: 100% of patients improved in Physical Functioning; 50% improved in Emotional Wellbeing (50% no change); 67% improved in Social Functioning, Pain and General Health (33% no change)

Evaluation forms were analysed and common themes identified which included:

• The group helped patients improve their strength, balance and mobility

• The group interaction with people who had been through similar experiences was beneficial

• Being part of a group improved patients’ motivation

• The opportunity to discuss matters with the therapy team helped to eliminate worries, provided support and increased patients’ confidence

Conclusion: The Critical Care rehabilitation group had a positive impact on both patients’ physical and psychological outcomes. Patients and their relatives reported how beneficial they found the group, not only in improving function but also in the peer and emotional support it provided.

EP.161

A retrospective cohort study investigating prevalence and nature of frightening memories in adult ICU survivors and their association with ICU exposures

Sarah Train1, Janice Rattray2, Jean Antonelli3, Jacqueline Stephen3, Christopher Weir3 and Timothy S Walsh4

1University of Edinburgh Medical School, Edinburgh, UK

2School of Nursing and Health Sciences, University of Dundee, Dundee, UK

3Edinburgh Clinical Trials Unit, Usher Institute of Population Health Sciences and Informatics, University of Edinburgh, Edinburgh, UK

4Anaesthetics, Critical Care and Pain Medicine, University of Edinburgh, Edinburgh, UK

Abstract

Background: The psychological impact of ICU survival cannot be overlooked, with significant implications for quality of life. Recalling frightening memories shortly after discharge is an important risk factor for long-term outcomes including Post Traumatic Stress Disorder (PTSD). However, understanding is limited regarding prevalence and causality of frightening memories.

Method: A retrospective cohort study of a recent cluster randomised trial (DESIST)(1) was undertaken to assess prevalence of frightening memories plus association with ICU exposures in 517 Scottish, adult ICU survivors. Administered post-ICU discharge (median follow-up: 7 days), the Intensive Care Experience Questionnaire (ICE-Q)(2) frightening experiences sub-score (FESS), and the Impact of Event Scale-Revised (IES-R)(3) total score (PTSD symptom measure), were co-primary outcomes. Cut-off scores for significant frightening memories were: FESS >18 (scale 6–35); IES-R >35 (scale 0–88).

Exposures included ICU, mechanical ventilation and hospital durations; sedation depth, agitation, APACHE II score, gender and case type. Correlation was assessed between the two primary outcomes, and between each exposure and outcome. Multivariate association testing was planned to confirm suspected significant relationships (correlation coefficient >0.2). Additionally, correlation between total memory of ICU and frightening experiences was assessed.

Results: The median scores were: FESS 15 (IQR: 11–21) and IES-R 19 (IQR: 7–36). Substantial frightening memories were experienced by 34.4% of participants (measured using FESS) and 25.1% (using IES-R). A strong positive correlation between the FESS and IES-R scores (Spearman’s rho = .616, p < .001) demonstrates that the two measures are strongly related. No potential risk factor showed strong association with either primary outcome.

Decreasing total memory of ICU correlated negatively with both outcomes (FESS: r = −.180, p < .001; IES-R: r = −.132, p < .004); no memory of ICU was associated with the lowest median outcome scores.

Comment: Frightening experiences were common as assessed by both measures. Critical illness survivors are therefore at high risk, and care-givers need to be conscious of this. FESS includes concepts of fear, helplessness and death; IES-R assesses traumatic responses. Therefore, both emotional and trauma symptoms are experienced early post-discharge. Further investigation into why absence of ICU recall relates to fewer frightening experiences is warranted. Routine follow-up using the ICE-Q or IES-R could enable patients at greatest risk of longer-term psychological outcomes to be identified.

Acknowledgement: Special thanks to the DESIST contributors(1). Funded by BJA/RCoA John Snow Award.

References

  • 1. Walsh TS, Kydonaki K, Antonelli J, et al. Staff education, regular sedation and analgesia quality feedback, and a sedation monitoring technology for improving sedation and analgesia quality for critically ill, mechanically ventilated patients: a cluster randomised trial. Lancet Respir Med 2016; 4: 807–817.
  • 2. Rattray J, Johnston M and Wildsmith JA. The intensive care experience: development of the ICE questionnaire. J Adv Nurs 2004; 47: 64–73.
  • 3. Weiss DS and Marmar CR. The impact of event scale – revised. In: Wilson JP, Keane TM (eds). Assessing psychological trauma and PTSD. New York: Guilford Press, 2007, pp. 219–238.

EP.162

An Evaluation of the Provision of Occupational Therapy in a Post Critical Care Follow-up Clinic in the UK

Penelope Firshman1, Nicole Walmsley2, Andrew Slack1, Joel Meyer1 and Bronwen Connolly3

1Critical Care, Guy’s and St. Thomas’ NHS Foundation Trust, London, UK

2Guy’s and St. Thomas’ NHS Foundation Trust, London, UK

3Lane Fox Research Unit, Guy's and St Thomas' NHS Foundation Trust, London, UK

Abstract

Introduction: Critical care admission can lead to cognitive, psychological and functional impairments1,2,3 known as post intensive care syndrome (PICS)4. PICS impacts negatively on care needs, employment status and family income5. National Institute for Clinical Excellence (NICE, 2009) Guideline 83 recommends follow-up at 2–3 months following hospital discharge7,8,9. This has driven the expansion of post critical care clinic (PCCC) follow-up services designed to identify and address aspects of PICS.

Occupational therapy (OT) focuses on facilitating recovery and overcoming barriers that prevent people from participating in meaningful occupations. Intervention is provided when physical and non-physical morbidities affect self-care, leisure and productivity. Despite this, UK critical care units have limited OT provision, with only 5.5% of clinics including an OT10.

Guy's and St Thomas’ Foundation NHS Trust PCCC has been running fortnightly since September 2015. Patients who were mechanically ventilated for 72 hours or more, or received extra-corporeal membrane oxygenation (ECMO), are invited to attend a comprehensive multidisciplinary face-to-face assessment at 8–12 weeks following hospital discharge. OT was incorporated as a core component and is now embedded in this fully-commissioned service (NICE, 2017).

A 12 month prospective evaluation was completed to describe the role and contribution of OT to ameliorate PICS in our PCCC.

Aim: To evaluate the contribution of OT in PCCC in addressing PICS.

Method: All patients attending the PCCC between September 2016 and August 2017 were assessed by an OT. Data was collected prospectively by two clinic OTs.

Numbers of:

• Patients seen by OT

• Patients who benefited from OT assessment

• Patients requiring OT intervention in clinic

• Patients requiring onward referrals

Types of:

• OT intervention

• Onward referral

Percentages are calculated and presented.

Results: A total of 77 new patients attended 24 clinics during the 12-month study period. 71 were seen by OT (92%). 76% required OT advice in the clinic or onward referral.

59% required advice, including returning to work, grading tasks, fatigue management, environmental adaptation, cognitive strategies, finances and driving.

46% were referred to community rehabilitation, outpatient therapy, falls clinic, memory clinic or social services.

During the same time period, 8 patients received follow-up by OT. 100% required OT input: 25% during clinic and 75% via onward referral.

Conclusion: Most patients attending PCCC required OT for aspects of PICS. OT provided unique insight into how impairments affect everyday occupations. OT can particularly assist with reducing care needs and return to work. These findings are relevant to providers and commissioners.

EP.163

Investigating the psychological impact of surviving critical illness

Yasmin Milner1, Anna Janssen2, Mervi Pitkanen2, Rachel Spurr2, Andrew Slack2 and Joel Meyer2

1Guy’s and St. Thomas’ NHS Foundation Trust, London, UK

2Department of Critical Care, Guys and St Thomas NHS foundation Trust, London, UK

Abstract

Introduction: Surviving critical illness is an exceptional stressor and can leave patients with persisting psychological symptoms that significantly impair their long-term quality of life. The aim of this study was to identify aspects of the critical care admission that correlate with more adverse psychological outcomes, so as to highlight potentially modifiable risk factors and/or the subset of patients most at risk.

Methods: This is a retrospective case series cohort study of post critical care clinic (PCCC) attendees at a tertiary referral teaching hospital over a 2-year period. Patients mechanically ventilated for ≥72 hours who subsequently attended our outpatient multidisciplinary PCCC were eligible (n = 101). Demographic and clinical data were obtained from critical care and PCCC clinical records. Psychological health at 3 months post hospital discharge was assessed using standardized questionnaires: Patients Health Questionnaire (PHQ-9), the Generalised Anxiety Disorder 7 (GAD-7) and the Post-Traumatic Stress Syndrome 14-Questions Inventory (PTSS-14).

Results: According to the questionnaires used, 41% of patients suffered from depression, 27% from anxiety and 32% from Post-Traumatic Stress Disorder (PTSD). Furthermore, 23% of patients suffered from all three disorders simultaneously. Features of the critical care admission that correlated with worse psychological outcomes included: age at admission ≤55 years (RR = 2.2, 95% CI 1.01–4.98, p = 0.04), female gender (RR = 2.24, 95% CI 1.01–4.98, p = 0.04), delirium during critical illness (RR = 4.2, 95% CI 1.33–13.18, p = 0.01), extra-corporeal membrane oxygenation (RR = 0.4, 95% CI 1.63–3.51, p = 0.02) and presence of tracheostomy (RR = 0.2, 95% CI 0.09–0.41, p < 0.0001). However, length of critical care stay, APACHE-II score on admission, duration of other organ system support (renal replacement therapy or advanced respiratory support), and presence of acute kidney injury were not related to significant long-term psychological morbidity (p > 0.05 for all).

Conclusions: We propose that certain features of an ICU admission, or demographic characteristics, may indicate a subset of critical illness survivors most at risk of severe adverse psychological morbidity. Furthermore, we find that psychological symptoms frequently persist 3 months after discharge from hospital. These encompass multiple domains, resulting in significant overlapping psychological co-morbidity. Therefore, early interventions for the most at-risk ICU survivors have the potential to significantly improve post-ICU recovery, quality of life and social recuperation.

EP.164

Psychological Evaluation of an ICU

Anya Sheltawy1, Aine McCurry1, Patrick Smith1

1Warrington Hospital, Warrington, UK

Abstract

Psychological problems, particularly delirium, are common in ICU, arising from a combination of critical illness, drugs and the difficult environment. The Intensive care psychological assessment tool (IPAT) is a validated tool used in Level 3 ICU patients to identify those who may be at risk of PTSD, particularly if they score over 7. Little has been published on its use in Level 2 and post-operative ICU patients. We wanted an overall psychological evaluation of the ICU, and therefore carried out an audit of all patients discharged from the unit using the IPAT scoring system. We wanted to see whether we could predict psychological outcomes post ICU, and enable identification of an at-risk cohort on whom to focus support post discharge.

Method: The IPAT screening tool was used for all discharged patients for a 3 month window from April 2017 to June 2017 inclusive. The scores were collated, and the outcomes were analysed, grouping patients into Level 2, Level 3, and elective surgical admissions. We looked at the outcomes for each question, and also the overall score.

Results: We found that the overall average score for patients admitted to Warrington ICU/HDU was 5.05/20, below the threshold predicting adverse psychological outcomes. The most frequently reported negative outcome was disturbed sleep, affecting 57.9% of patients. Level 3 patients had higher scores on average (5.14/20) than Level 2 (4.56/20) and elective patients (4.7/20).

Conclusions: The average score of patients was lower than the threshold for identifying risk of PTSD. Somewhat predictably, Level 3 patients scored more highly than Level 2 and elective patients. The areas most highlighted in the IPAT outcomes were sleep, feeling sad and feeling tense. This will enable us to introduce interventions targeted at monitoring mood and promoting sleep. There is good evidence for interventions that improve sleep, and we hope to introduce these by formulating a sleep promotion bundle, for example promoting earplugs and eye masks. We would also recommend that all units use the IPAT tool to help identify patients requiring psychological intervention and to provide evidence of the overall psychological state of their unit.

EP.165

The Epidemiology of Early Fluid Bolus Therapy

Neil Glassford1,2, Johan Mårtensson3, David Garmory1, Glenn Eastwood1,2, Michael Bailey2 and Rinaldo Bellomo1,4

1Department of Intensive Care Medicine, Austin Health, Melbourne, Australia

2ANZICS-RC, School of Public Health and Preventative Medicine, Monash University, Melbourne, Australia

3Section of Anaesthesia and Intensive Care Medicine, Department of Physiology and Pharmacology, Karolinska Institutet, Stockholm, Sweden

4School of Medicine, University of Melbourne, Melbourne, Australia

Abstract

Intravenous fluid bolus therapy (FBT) is a common intervention in the intensive care unit, but there is limited detailed information regarding its epidemiology. We aimed to perform a detailed investigation of FBT over the course of ICU admission in a large cohort of critically ill patients, using electronically recorded data with relevant statistical analysis.

We identified 2075 patients admitted to the ICU for more than 24 hours. Of these, 60.7% were male, with a median age of 64.6 (interquartile range (IQR): 50.5–74.6) years and a median APACHE III score of 57 (IQR: 43–73) (Table 1). Overall, 68.9% received FBT within 24 hours of ICU admission, and FBT accounted for 62% of the fluid administered over the first 24 h of ICU admission in these FBT-positive patients. Over the first 24 h of admission, a cumulative mean of 1555.1 ± 1385.3 ml of FBT fluid was delivered per patient in the FBT-positive group, over a mean of 3.3 ± 2.3 episodes of FBT (Table 2). There were significant differences in ventilatory support. There were no significant differences in duration of ICU or hospital admission, or in ICU or hospital mortality (Table 1).

FBT is a pervasive intervention in the ICU which occurs early in the course of admission, is typically repeated, accounts for a large percentage of all fluid administered in ICU, and is associated with invasive mechanical ventilation. These data provide background epidemiological information for interventional studies.

Table 1.

Demographic and outcome characteristics by FBT status.

FBT negative FBT positive p-value
646 (31.1%) 1429 (68.9%)
Age, years 61.3 (46.24–73.16) 65.93 (53.05–75.24) <0.001
Male sex 363 (56.2%) 897 (62.8%) 0.01
APACHE III score 55 (39–71) 56 (43–72) 0.07
As an emergency 546 (84.5%) 871 (61%) <0.001
Surgical 186 (28.8%) 818 (57.2%)
IPPV during admission 372 (57.6%) 1068 (74.7%) <0.001
Duration of IPPV, hours 35.5 (13.6–107.5) 18.5 (9.4–59.1) <0.001
NIV during admission 99 (15.3%) 128 (9%) <0.001
CRRT during admission 54 (8.4%) 99 (6.9%) 0.28
ICU Mortality 45 (7%) 104 (7.3%) 0.95
Hospital Mortality 80 (12.4%) 159 (11.1%) 0.42

Table 2.

FBT over first 24 h.

Time Episodes of FBT Number of discrete FBT Volume of FBT, ml Proportion of IV Fluid Volume as FBT# Proportion of FBT Volume by Fluid Type&
FBT 3.3 ± 2.3 6.4 ± 5.6 1555.1 ± 1385.3 0.62
Crystalloid FBT 1.1 ± 1.8 2.9 ± 4.7 730.1 ± 1171.7 0.29 0.47
Colloid FBT 2 ± 2.1 3 ± 3.5 707.5 ± 830.1 0.28 0.45

EP.166

Association between the Canadian Study of Health and Ageing (CSHA) Clinical Frailty Score and Outcomes from Critical Care

Ruth De Las Casas1, Deanne Bell1, Catherine Bounds1 and Alex Trimmings1

1Eastern Sussex Hospitals Trust, Hastings, UK

Abstract

Introduction: Age is a risk factor for poor outcomes in critical care. In addition, there is increasing interest in frailty as a marker of physiological condition. Studies show frailty is a predictor of adverse outcomes following surgery, but there is little data on its impact in critical care. ICNARC plan to add the seven-point CSHA Clinical Frailty Scale, which measures frailty and categorises patients between ‘very fit’ and ‘severely frail’, to their dataset. Here we investigate the relationship between CSHA frailty scale, age, and outcomes in critical care.

Methods: The study was conducted at a two-site Trust with 19 critical care beds. The CSHA Frailty Scale was added to the electronic admission form and completed by the admitting doctor. After one year, data were collected and the Frailty Score (FS) correlated with death before leaving ICU, death before leaving hospital, and ICU length of stay (LOS). Data were then divided into two age groups, >65 and 18–64 years. FS followed a similar distribution in both groups (Fig. 1).

Results: A FS was documented for 547 patients (59% of admissions).

There was an association between FS and death on ICU. 10% of those with FS 1 died on ICU, compared to 36% of those with FS 7 (Fig. 2, Blue).

In patients with FS 1–6, mortality on ICU was approximately 10% higher in the >65 group than in the 18–64 group. However, in those with FS 7, the >65 group did much worse, with 62% dying on ICU compared with 0% of those aged 18–64 (Fig. 2).

In-hospital mortality showed similar trends. Mortality in patients with FS 1–6 was comparable between age groups. However, in those with FS 7, the >65 group again did much worse, with 69% dying before leaving hospital compared with 22% of the 18–64 group (Fig. 3).

FS was also associated with longer ICU LOS, and those with FS 7 aged >65 stayed much longer than those with FS 7 aged 18–64 (Fig. 4).

Conclusion: There is an association between high FS and poor outcomes, with all measured outcomes worse in the severely frail. Those rated as severely frail and aged >65 have much worse outcomes than severely frail patients <65, suggesting frailty is a more useful prognosticator in the older patient group.

Figure 1. Distribution of Frailty Scores.

Figure 2. Mortality on ICU.

Figure 3. In-hospital mortality.

Figure 4. ICU LOS.

EP.167

Experiences of intensive care patients following transition to a ward as expressed in online discussion forums: A qualitative study

Louise Albrich1, Jos Latour2 and Mary Hickson2

1Yeovil District Hospital NHS Foundation Trust, Yeovil, UK

2Plymouth University, Plymouth, UK

Abstract

Background: Survivors of the Intensive Care Unit (ICU) may experience physical and psychological effects including weakness, anxiety, depression and post-traumatic stress disorder. Awareness of post-intensive care syndrome is improving, but healthcare professionals still lack true insight into how ICU survivors experience their recovery and care.

Aim: To explore the experiences of ICU survivors when transferred from the ICU to the ward as expressed in online discussion forums.

Methods: An internet search identified the healthunlocked.com website, supported by the charity ICUSteps, with 25 discussion threads under the critical care subsection. The discussion thread named ‘how people felt when they were transferred from ICU, why and what affected it’ was selected and examined for phrases of meaning, which were assigned codes and descriptors and then grouped into themes and subthemes. NVivo v23 software facilitated thematic analysis and anonymity was maintained. Ethical approval was not required because the data were in the public domain.

Results: The two main themes are Vulnerability and Support, each with two subthemes, as well as an overarching theme of Empowerment (figure 1).

[Figure 1: themes and subthemes]

Vulnerability has the subtheme of Despair and Helplessness, with patients feeling confused and traumatised by the unexpected after-effects (‘nobody told myself or my husband anything about the after effects of being a survivor of ICU’) as well as physically dependent through their illness (‘horrified to discover I was unable to even pull myself up in bed’). The subtheme of Abandonment was evident through emotional ward moves and influenced by nurses’ time to care (‘leaving me laid in bed wondering how I’d get over to the table to eat’). Support, the other main theme, depends on the subthemes of Support from Family and the patient’s level of Trust in Care. Trust depends on care needs being noted (‘I was met with blank stares and a “so what” attitude’) and the level of respect and compassion shown (‘told me I was faking it’), which is born of knowledge and insight. The overarching theme of Empowerment runs as a thread through all themes and subthemes.

Conclusion: The disempowering after-effects of critical illness are evident, and patients rely on healthcare professionals to develop a deeper understanding that enables listening with empathy. Rather than organising care solely around surviving illness, patients' distress could be reduced by preparing them for the transitional journey from the ICU to the ward and focusing on details that may better empower them.

EP.168

Preference for peripheral? Exploring the use of peripheral metaraminol infusions

Aaron Madhok1, Madelena Stauss1, Shams Abdelraheem1 and John Moore1

1Central Manchester NHS Foundation Trust, Manchester, UK

Abstract

Background: Although push-dose metaraminol is widely used in acute care settings, it is less commonly used as a peripheral infusion. Data from clinical trials supporting its use as an infusion are limited, and many hospitals do not have a policy regarding its use.

Methodology: A SurveyMonkey questionnaire was designed to assess clinicians' views on their use of peripheral metaraminol infusions outside of theatre. The survey consisted of eleven questions and was aimed at consultants and registrars working in anaesthetics, intensive care, emergency medicine or general medicine. It was distributed initially via Twitter, and then directly to the anaesthetic and intensive care departments in the North West region. The collection period was two weeks between July and August 2017.

Results: In total there were 454 responses. After removal of 51 incomplete responses, there were 403 complete surveys from respondents spanning anaesthetics (n = 208), intensive care (n = 180), emergency medicine (n = 13) and medicine (n = 2). Of those, 102 respondents stated they did not use metaraminol infusions, with 301 stating they used it peripherally (n = 190), centrally (n = 2) or both (n = 109). 51.6% of respondents stated their trust did not have a policy for peripheral metaraminol infusions. Bridging until gaining central venous access was the commonest reason for using peripheral infusions (n = 269), whilst avoiding central access in those with a limitation of care in place (n = 150) and use whilst transferring patients (n = 123) were also common indications. 31% of respondents agreed with the statement that ‘peripheral metaraminol infusions should be used more instead of gaining central access for other vasopressor administration’, whereas 30.8% were undecided and 29.5% disagreed. Only 5 respondents identified safety concerns as a reason for not using peripheral metaraminol infusions. Whilst 13 respondents reported an experience of extravasation injury with tissue damage, 208 had not experienced any side effects. Life-threatening bradycardia (n = 1) and cardiac arrest (n = 1) were also noted. Of the 15 respondents who noted these serious adverse effects, 10 disagreed with the statement that ‘peripheral metaraminol infusions should be used more instead of gaining central access for other vasopressor administration.’

Conclusion: Although data from clinical trials supporting the use of peripheral metaraminol infusions is limited, our survey shows that it is widely used by both intensivists and anaesthetists, with a low incidence of life-threatening adverse effects. Every trust should have a policy regarding its use, and ideally a randomised controlled trial should be performed to further assess its efficacy and safety.

EP.169

Portsmouth Intensive Care Emergency Anaesthetic Drug Pack (EADP)

Keith Ritchie1, Joseph Tooley1, Caroline Cawkill1 and Steve Mathieu1

1Department of Critical Care, Queen Alexandra Hospital, Portsmouth, Portsmouth, UK

Abstract

Emergency situations commonly occur in a large District General Hospital such as Portsmouth, often requiring timely resuscitation, emergency airway management and ventilatory support. Previously, anaesthetic drugs needed to be sourced from several places (the controlled drugs cupboard, pharmacy store and medical refrigerator) and then diluted prior to use. This could lead to delays, especially outside of critical care where distances can be greater, and to drug errors, as drugs are prepared in pressured, time-critical situations. To reduce these inherent delays and risks of drug preparation error, we have developed Emergency Anaesthetic Drug Packs (EADPs), which are immediately available to critical care staff and include pre-diluted controlled drugs.

Our Emergency Anaesthetic Drug Packs are designed to provide:

1) Safe and rapid access to the emergency drugs required for resuscitation, stabilisation and intubation

2) Reduced risk of drug errors through the use of pre-filled syringes

3) A robust framework for the provision, storage and use of these drugs within current controlled drugs legislation and local guidelines.

EP.170

Diagnostic Accuracy of Serum Proadrenomedullin versus Sofa Score in Prediction of Mortality in Critically Ill Septic Patients

Tamer Abdallah1, Sherif Ahmed1, Lamia Kandil2 and Muhammed Gabr1

1Critical Care Department, Alexandria University, Alexandria, Egypt

2Department of Pharmacology and Toxicology, Faculty of Pharmacy, Pharos University, Alexandria, Egypt

Abstract

The use of novel sepsis biomarkers has increased in recent years. However, their prognostic value with respect to illness severity has not been explored. In this work, we examined the ability of mid-regional proadrenomedullin (MR-proADM) to predict mortality in septic patients with different degrees of organ failure, compared with the Sequential Organ Failure Assessment (SOFA) score.

This was a cross-sectional study enrolling 100 sepsis and septic shock patients admitted to Alexandria Medical University Hospital Intensive Care Units (ICUs). Serum pro-adrenomedullin was collected during the first 24 hours of diagnosis of sepsis or septic shock and compared with the first-day SOFA score as the gold standard for prediction of mortality in critically ill septic patients. The accuracy of serum pro-adrenomedullin for mortality was determined by area under the receiver operating characteristic curve (AUROC) analysis.

The study enrolled 100 patients with sepsis and septic shock. The pro-adrenomedullin AUC was 0.914 (p < 0.001), with a best cut-off point of 7.1, sensitivity of 82.76%, specificity of 85.71% and positive predictive value of 88.9%. For the day-1 SOFA score, the AUC was 0.968 (p < 0.001), with a best cut-off point of 9, sensitivity of 84.48%, specificity of 95.24% and positive predictive value of 96.1%. On pairwise comparison of serum pro-adrenomedullin versus SOFA score, the p value was 0.053, with no significant difference.
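The reported test characteristics can be cross-checked with a short sketch. The 2×2 counts below are back-calculated assumptions consistent with n = 100 and the reported percentages, not figures taken from the study itself.

```python
# Assumed 2x2 table for the pro-adrenomedullin cut-off: 58 non-survivors
# and 42 survivors (back-calculated; illustrative only).
tp, fn, tn, fp = 48, 10, 36, 6

sensitivity = tp / (tp + fn)   # 48/58
specificity = tn / (tn + fp)   # 36/42
ppv = tp / (tp + fp)           # 48/54

print(f"sensitivity {sensitivity:.2%}, specificity {specificity:.2%}, PPV {ppv:.1%}")
# -> sensitivity 82.76%, specificity 85.71%, PPV 88.9%
```

These assumed counts reproduce the abstract's reported sensitivity, specificity and positive predictive value exactly.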

Serum pro-adrenomedullin can be used as a predictor of mortality in critically ill septic patients during the first 24 hours of diagnosis of sepsis or septic shock.

EP.171

Retrospective analysis of positive blood cultures taken from patients on the intensive care unit

Rosaline Chisholm 1

1University College London Hospitals NHS Foundation Trust, London, UK

Abstract

Unit-acquired infections affect around 8% of patients in the intensive care unit (ICU) and are associated with longer ICU length of stay, increased healthcare costs and a 3-fold increase in hospital mortality. An audit conducted by the Intensive Care National Audit & Research Centre (ICNARC) in 2016 highlighted University College London Hospital as an outlier, with a high rate of unit-acquired bloodstream infections compared with other adult ICUs in the country. This retrospective analysis of positive blood cultures taken from patients on the unit seeks to evaluate current levels of infection and identify any potentially modifiable causes of the higher infection rate seen.

Methods: ICIP software was used to derive a list of all positive blood cultures in patients admitted to the ICU from April to June 2017. The clinical notes were reviewed and information collected on length of stay, co-morbidities, immune state, central venous catheter (CVC) use, the organism cultured and the clinical interpretation of the culture. Definitions for unit-acquired bloodstream infection, catheter-related infection and catheter-associated infection were those used by the European Centre for Disease Prevention and Control and are in keeping with those in the ICNARC audit.

Results: Twenty-four patients on the ICU had positive blood cultures over the 3-month period but only 8 met the full ICNARC criteria as having a unit-acquired bloodstream infection. Four of these patients had a known haematological malignancy and a history of recent immunosuppressive therapy associated with a neutropenia around the time of the positive culture. Three patients were thought to have developed sepsis secondary to CVCs.

Discussion: As a national centre for haematological malignancy, many admissions to the ICU are significantly immunosuppressed and as half of the cases of unit-acquired infection occurred within this high-risk group, it is logical to conclude that the hospital’s patient mix may go some way to explaining the higher infection rate seen. Of the 3 CVC related infections, 1 involved a femoral line which had been left in situ longer than the 7 days recommended by trust guidelines, suggesting the need for improved education for healthcare staff on CVC care and duration of use. Several of the cultures were dismissed as contaminants indicating a role for improved blood culture technique in reducing the number of false-positive cultures. The trust is currently in the process of introducing blood culture kits to facilitate this.

EP.172

Audit of HIV testing and Retrospective Cohort analysis of HIV positive Critical Care Patients

Masseh Yakubi1, James Pennington2

1Clinical Fellow – Intensive Care – Royal London Hospital – Barts Health NHS Trust, London, UK

2ICU Consultant – Royal London Hospital – Barts Health NHS Trust, London, UK

Abstract

Background: National and international guidelines support opt-out HIV testing where local prevalence exceeds 2 per 1000 individuals. Local guidelines at Barts Health suggest HIV testing for all Critical Care Admissions.1

Methods: Snapshot audit and re-audit of all inpatients in Adult Critical Care Unit, followed by retrospective cohort analysis of all patients who were HIV positive admitted to Critical Care between Jan 2015 and December 2016.

Results: On a snapshot audit of 31 inpatients, 68% had been tested for HIV. Following reinforcement of the test's importance, discussion with nursing staff and the display of posters, this had increased to 94% on re-audit.

There was a cohort of 100 HIV positive patients admitted to The Royal London CCU during the 2 years studied, with a mean age at admission of 49 years; 79% of patients were male. The mean APACHE II mortality estimate was 20.87%, with a total ITU mortality of 17% and a further 3% mortality prior to discharge from hospital. Mean critical care length of stay was 6 days. The most common reasons for admission were: respiratory related 31%, surgical 13%, sepsis 12%, gastroenterology related 9%, and neurological 9%.

Discussion: Our audit shows an increase in the percentage of patients tested for HIV following our intervention, once the audit loop was closed. The most common causes for HIV positive patients to be admitted to critical care were related to respiratory pathology, predominantly pneumonia. With further work, it would be interesting to identify the link between CD4 count and viral load and mortality, and to compare findings at The Royal London Hospital with other centres.

Reference

EP.173

Improving antibiotic administration in Sepsis at Weston General Hospital using a Patient Group Directive (PGD): A Quality Improvement Project

James Peters1, David Crossley2, Alek Kumar2, Hannah Crofton1 and Claire Dudley2

1Severn School of Anaesthetics, Bristol, UK

2Weston General Hospital, Weston, UK

Abstract

Background: Sepsis is a common and potentially life threatening condition. Despite this, national enquiries have consistently demonstrated failings in identifying and initiating treatment for the condition. This quality improvement project aimed to improve the recognition of patients identified as high risk, and their treatment with intravenous antibiotics within one hour of emergency department (ED) admission, at Weston General Hospital (WGH), UK, using a patient group directive (PGD).

Method: Patient notes were identified using the Health and Social Care Information Centre ‘sepsis’ codes A40 and A41.

Patients under age 18, neutropaenic, or pregnant were excluded. The remaining notes were reviewed to identify whether ‘red flag’ markers were present at the time of triage. These were defined as: hypotension (systolic BP ≤ 90 mmHg), raised lactate (≥2 mmol/L), tachypnoea (≥25/minute), hypoxia (≤91%), plasma glucose ≥7.7 mmol/L in the absence of diabetes, reduced conscious level (responding only to voice), or purpuric rash. If identified, the time to administration of intravenous antibiotics was recorded.
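The red flag screen above can be expressed as a simple predicate. This is an illustrative sketch only: the field names are hypothetical, while the thresholds are taken from the abstract.

```python
# Hedged sketch of the 'red flag' criteria; observation keys are
# assumptions for illustration, thresholds come from the abstract.
def has_red_flag(obs):
    """True if any red-flag marker is present at triage."""
    return any([
        obs.get("systolic_bp_mmhg", 999) <= 90,           # hypotension
        obs.get("lactate_mmol_l", 0) >= 2,                # raised lactate
        obs.get("resp_rate_per_min", 0) >= 25,            # tachypnoea
        obs.get("spo2_percent", 100) <= 91,               # hypoxia
        obs.get("glucose_mmol_l", 0) >= 7.7
            and not obs.get("diabetic", False),           # hyperglycaemia without diabetes
        obs.get("responds_only_to_voice", False),         # reduced conscious level
        obs.get("purpuric_rash", False),
    ])

print(has_red_flag({"systolic_bp_mmhg": 85}))                     # True
print(has_red_flag({"lactate_mmol_l": 1.4, "spo2_percent": 96}))  # False
```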

From May 2016, a series of interventions was introduced at WGH ED. In order, these were: an awareness campaign at handover; creation of a PGD enabling senior nursing staff to independently issue broad spectrum antibiotics (tazocin, or levofloxacin if penicillin allergic) for patients fulfilling red flag criteria; and advertising monthly performance on a run chart within the ED.

Results: In our baseline audit (n = 27), 12 patients (44%) had red flag features at triage, of whom only one (8%) was administered antibiotics within one hour. Following our interventions, the proportion of patients identified as high risk and treated promptly with intravenous antibiotics consistently increased. At the end of our study period, in February 2017 (n = 17), seven of the eight patients presenting with red flags received antibiotics within one hour (87%).

Conclusion: Introduction of a PGD, alongside an education campaign, has significantly improved the identification and treatment of patients with high risk sepsis at WGH. Interestingly, despite advocating the use of tazocin, there has been little increase in the amount issued. The introduction of a PGD appears to have raised nursing awareness, expanded nurses' independent action and prompted more rapid assessment by emergency clinicians. This has resulted in faster and more focused antibiotic administration. As ED attendances continue to rise nationally, introducing safety measures such as a sepsis PGD warrants consideration.

EP.174

Making a difference: The Use of Critical Care Networks to Drive Quality Improvement in High Risk Sepsis Management on Intensive Care Units

Paul Dean1 and Claire Horsfield1

1Lancashire and South Cumbria Critical Care Network, Lancashire, UK

Abstract

Low tidal volume (6 ml/kg) ventilation and careful fluid management are well recognised cornerstones of sepsis management.1,2 The challenges of delivering low tidal volume ventilation are well reported.3,4 Using the Lancashire and South Cumbria Critical Care Network and quality improvement methodology, we established a group to drive improvement in Intensive Care sepsis management.

Initial data collection demonstrated a number of challenges: differing patient populations, ventilation strategies, modes and practices, necessitating a pragmatic approach across the network. The network collected data and fed it back to units for use in local QI approaches. Alongside this, differing rates of acceptance of the need for change, of embedded QI knowledge and of adoption of strategies for improvement created differences in rates of improvement.

The network asked for set tidal volume in those receiving volume controlled or volume limited (pressure controlled) ventilation, and actual tidal volume in those receiving pressure controlled ventilation. In order to generate sufficient numbers, it was accepted that all actively ventilated patients (excluding neuroscience patients) could be included in the data collection. Graphical displays were then returned to units for use in local QI work.

The group acknowledged that a greater than 10% body weight fluid gain is associated with increased mortality5 and as such should serve as an initial measure. A number of challenges were also identified here: weighing patients on admission; whether admission weight actually correlates with their “normal” body weight; whether, and by how much, to compensate for insensible losses; and measuring loss through drains and open abdomens. Again we needed to take a pragmatic approach. Once again the data were graphically returned to units for local QI projects.

We have shown that a collaborative peer approach to QI in sepsis management raises awareness and inspires individuals, and that through the provision of QI education and simple measurement strategies, networks can collectively drive improvement. Whilst determining definitions is challenging, we would advocate consideration of whole country approaches to data collection to drive QI in similar ways to existing national clinical QI projects.6,7

EP.175

Delivered dose of dialysis in a novel citrate CVVH protocol

Matt Thomas1, Nabila Chowdhury1 and Tim Hooper1

1North Bristol NHS Trust, Bristol, UK

Abstract

North Bristol NHS Trust (NBT) uses citrate anticoagulation for continuous renal replacement therapy (CVVH; Baxter Prismaflex). The local protocol differs from those published for Prismocitrate 18/0 in terms of replacement fluid composition and amount of pre-dilution.1,2 The dose of dialysis delivered using this protocol was unknown. We used two methods to determine the dose of dialysis.

Our first objective was to determine the dose of dialysis and compare this with the protocol target (KDIGO recommendations).3 Our second objective was to compare the dose of dialysis calculated by the Baxter method (using a dedicated and freely available app)4 with the NBT method correcting for pre-dilution.5

Patients receiving renal replacement therapy in a 6 week period in June and August 2017 were sampled. Data analysis was performed in Excel 2010 (Microsoft Corporation) and is presented as mean ( ± standard deviation) for continuous variables. A one sample t-test was used to test the hypothesis that there was no difference in calculated dialysis dose using the Baxter and NBT methods.

A total of 268 data sets from 14 patients were obtained. The mean dose of dialysis calculated by the Baxter app was 30.5 (±6.0) ml/kg/hr. The mean dose of dialysis calculated using the protocol adjustment for pre-dilution was 23.5 (±4.5) ml/kg/hr. The mean difference between the two calculated doses was 7.0 (±2.1) ml/kg/hr, which was significantly different from zero (p < 0.0001).
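The pre-dilution correction at the heart of this comparison can be sketched as follows. This is a generic illustration of correcting an effluent-based dose for pre-dilution, not the NBT protocol's exact method, and all flow values in the example are assumptions for illustration.

```python
# Illustrative sketch: correcting a CRRT effluent-based dose for
# pre-dilution. Pre-dilution fluid dilutes the blood before the filter,
# so only a fraction of the effluent represents true plasma clearance:
#     dilution factor = Qplasma / (Qplasma + Qpre)
def delivered_dose_ml_kg_hr(effluent_ml_hr, blood_flow_ml_min, haematocrit,
                            predilution_ml_hr, weight_kg):
    plasma_flow_ml_hr = blood_flow_ml_min * 60 * (1 - haematocrit)
    dilution_factor = plasma_flow_ml_hr / (plasma_flow_ml_hr + predilution_ml_hr)
    return effluent_ml_hr * dilution_factor / weight_kg

# Hypothetical example: 150 ml/min blood flow, haematocrit 0.30,
# 1400 ml/hr pre-dilution, 2500 ml/hr effluent, 80 kg patient.
uncorrected = 2500 / 80   # naive effluent-based dose, ~31.3 ml/kg/hr
corrected = delivered_dose_ml_kg_hr(2500, 150, 0.30, 1400, 80)
print(f"uncorrected {uncorrected:.1f} ml/kg/hr, corrected {corrected:.1f} ml/kg/hr")
```

As in the abstract, the uncorrected figure overstates the delivered dose; the correction always reduces it, because the dilution factor is below one whenever pre-dilution fluid is running.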

The mean dose of dialysis using the NBT citrate CVVH protocol is 23.5 ml/kg/hr, within the target range of 20–25 ml/kg/hr. The dose calculated by the Baxter app overestimates the dose fully corrected for pre-dilution; the difference of 7.0 ml/kg/hr is clinically (and statistically) significant. Calculated dialysis dose should be corrected for pre-dilution to avoid inadvertent under-dosing.

References

  • 1.Jacobs R, Honore P, Bagshaw S, et al. Citrate formulation determines filter lifespan during continuous veno-venous hemofiltration: a prospective cohort study. Blood Purification 2015; 40: 194–202. [DOI] [PubMed]
  • 2.Morabito S, Pistolesi V, Tritapepe L, et al. Continuous veno-venous hemofiltration using a phosphate-containing replacement fluid in the setting of regional citrate anticoagulation. International Journal of Artificial Organs 2013; 36: 845–852. [DOI] [PubMed]
  • 3.KDIGO Clinical Practice Guideline for Acute Kidney Injury. Kidney International Supplements 2012; 2: 1–138.
  • 4.CRRT Dose Calculator for Android. Baxter Healthcare Corporation (accessed 1 August 2017).
  • 5.Neri M, Villa G, Garzotto F et al. for the Nomenclature Standardization Initiative (NSI) alliance. Nomenclature for renal replacement therapy in acute kidney injury: basic principles. Critical Care 2016; 20: 318. [DOI] [PMC free article] [PubMed]

EP.176

The incidence of citrate toxicity when using regional citrate anticoagulation in renal replacement therapy

Simon Hill1, Elaine Creighton1 and Edward Walter1

1Intensive Care Unit, Royal Surrey County Hospital, Guildford, UK

Abstract

Regional citrate anticoagulation (RCA) is recommended in patients requiring continuous renal replacement therapy (CRRT), but there is concern about citrate accumulation and toxicity in patients with liver dysfunction. Despite post-filter infusion of calcium, citrate enters the systemic circulation and is metabolised to bicarbonate via the Krebs cycle. There is a risk that this pathway is impaired in liver dysfunction, with subsequent citrate accumulation, characterised by a reduction in ionised calcium and a high total:ionised calcium ratio, requiring escalating calcium doses to maintain normocalcaemia.

The risk, however, may be lower than perceived (less than 3% in some studies), and lactic acidosis from anaerobic cellular respiration may be a more important indicator of impaired citrate metabolism than liver dysfunction per se.

Our local results suggest that the risk of citrate toxicity may be very low. We have been using RCA since 2013; 63 patients have received RCA for CRRT in the past 18 months, a total of 349 days, during which 272 calcium ratio calculations were made. Using bilirubin concentrations of less than 2 mg/dl, between 2 and 7 mg/dl, and greater than 7 mg/dl to differentiate no, mild and severe liver dysfunction, as in a previous study, 45 patients had no liver dysfunction, 14 had mild, and 4 had severe liver dysfunction. The same CVVHDF protocol was used in all patients. In only one patient (1.6%) was the total to ionised calcium ratio greater than 2.4. This patient had multi-organ dysfunction syndrome from sepsis with decompensated fulminant hepatic failure, a peak bilirubin concentration of 36.9 mg/dl and a lactataemia of 6.0 mmol/l. A threshold of 2.5 is often used as an indication of citrate toxicity; a threshold of 2.4 has a specificity of 99.2% for mortality, suggesting that citrate toxicity below this level is very rare. However, it has a sensitivity of only 55.6%, and in our patient other indirect features of citrate toxicity (reducing ionised calcium levels, escalating calcium compensation requirements, and significant acid-base disturbance) were not present, making the diagnosis of citrate toxicity uncertain.

Our results support the hypothesis that RCA is generally safe, and that citrate toxicity is rare, even in patients with severe liver dysfunction. We consider lactataemia a more significant risk factor for toxicity than liver dysfunction.

EP.177

Adapting Renal Replacement Therapy utilising Regional Citrate Anti-Coagulation to Treat a Metabolic Acidosis

James Williams 1

1Bristol Royal Infirmary, Bristol, UK

Abstract

Introduction/Background: Up to 50% of patients admitted to Intensive Care suffer an acute kidney injury (AKI). Of these, over 20% will receive renal replacement therapy (RRT). Metabolic acidosis is a known complication of AKI.

Regional citrate anti-coagulation therapy (RCA) utilises citrate as an effective anti-clotting agent in the extracorporeal circuit and has been recommended as the standard anti-coagulation for RRT. Through networking, we determined that our current protocol could be adapted to exploit the bicarbonate-generating potential of citrate to treat a metabolic acidosis, and we identified which changes should be made to the haemofilter. An additional challenge was to create a simple algorithm for staff to follow.

Aims:

1. To adapt our standard RCA protocol to safely treat a metabolic acidosis, whilst maintaining adequate haemofiltration.

2. To develop a clear and simple algorithm to facilitate correct use.

Methods: A metabolic acidosis protocol (Appendix 1) was created to incorporate the evidence based recommendations. In addition, a new monitoring guideline (Appendix 2) was developed to identify to staff when to sample blood and how to respond to the results. Inclusion criteria for the appropriate utilisation of the adapted protocol were incorporated (Appendix 1).

Effects on metabolic acidosis were identified through a review of patients’ electronic records.

Results: Over a 6 month period, 48 patients received RRT during their stay on ICU. Of these, 21 (44%) had developed a metabolic acidosis prior to RRT. Thirty-nine per cent of these patients went on to the new metabolic protocol, with around 88% experiencing resolution of their acidosis. One patient was taken off the acidosis protocol due to suspected citrate accumulation and received conventional RRT instead.

Due to the risk of citrate accumulation, 38% of patients with a metabolic acidosis received alternative therapy because of known severe liver impairment. For the remaining 23% with metabolic acidosis, it is unclear why they were not treated under the protocol; possible explanations include consultant preference and knowledge deficit.

Importantly staff reported the protocol to be clear and easy to follow.

Discussion/Conclusion: To our knowledge, this is the first published data on the treatment of metabolic acidosis for this mode of RCA. It incorporates very simple changes to the RRT protocol and has been successful in resolving acidosis.

Although not appropriate for all patients, we would recommend this protocol to clinicians treating patients with a metabolic acidosis without a known severe liver impairment.

Appendix 1:

graphic file with name 10.1177_1751143718772957-img11.jpg

Appendix 2:

graphic file with name 10.1177_1751143718772957-img12.jpg

EP.178

Managing sodium disorders in patients requiring renal replacement therapy: Improving patient safety

Vikram Malhotra 1

1Luton and Dunstable NHS Foundation Trust, Luton, UK

Abstract

Sodium disorders are extremely common, and are associated with increased morbidity. Rapid correction of serum sodium concentration can have serious consequences.

In chronic hyponatraemia (i.e. taking >48 hours to develop), brain cells adapt to protect against cerebral oedema. Rapid correction then leads to hypertonicity: brain cells rapidly shrink, which can cause permanent demyelination of pontine neurones.

Osmotic demyelination syndrome can present several days after the sodium has been corrected; patients at greater risk of deterioration include those suffering from malnutrition, alcoholism and liver failure.

Symptoms of osmotic demyelination syndrome include:

• Spastic quadriparesis/paraparesis

• Lethargy

• Dysarthria

• Mutism

• Pseudobulbar palsy

• Ataxia

• Altered behaviour

• Dystonia/parkinsonism

Similarly, brain cells will adapt to chronic hypernatraemia after 1–2 days. Acute hypernatraemia can lead to cerebral dehydration and shrinkage of brain cells. Rapid reduction of sodium will reduce the osmolality and could lead to seizures.

During renal replacement therapy, the patient’s serum electrolytes will trend towards the concentration within the replacement dialysis fluid, and therefore knowledge of the concentrations within these bags is essential. The rate of change will be affected by the concentration difference and the rate of filtration/dialysis.

An audit was conducted in our institution following the admission, over a 3-month period, of several patients who were hyponatraemic and required renal replacement therapy. With our standard protocols, every patient had their serum sodium corrected faster than recommended, although fortunately none came to harm.

The audit highlighted widespread lack of knowledge amongst doctors and nursing staff. A guideline was created to minimise the risks associated with rapid correction. This was distributed amongst the critical care staff as education and presented at a departmental meeting.

Monitoring in a critical care environment, with correction limited to 8–10 mmol/L/24 hours, was considered paramount for good clinical care.

For hyponatraemic patients, graded amounts of sterile water for irrigation were added in a sterile manner to the replacement fluid in order to limit a rapid rise in sodium concentration.

For hypernatraemic patients, graded amounts of 30% sodium chloride were used.

Tables were provided within the guidelines containing instructions for the amount of sterile water or 30% sodium chloride to be added to achieve specific concentrations.
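The dilution arithmetic behind such tables can be sketched directly. This is an illustrative calculation only; the bag volume and concentrations below are assumptions, not the values from the trust's guideline tables.

```python
# Hedged sketch: effect of adding sterile water to a replacement-fluid bag
# on its sodium concentration. Example figures are hypothetical.
def diluted_sodium(bag_na_mmol_l, bag_volume_ml, water_added_ml):
    """Sodium concentration (mmol/L) after adding sterile water to the bag."""
    na_mmol = bag_na_mmol_l * bag_volume_ml / 1000.0       # total sodium in bag
    total_volume_l = (bag_volume_ml + water_added_ml) / 1000.0
    return na_mmol / total_volume_l

# Hypothetical 5000 ml bag at 140 mmol/L with 500 ml sterile water added:
print(f"{diluted_sodium(140, 5000, 500):.1f} mmol/L")  # -> 127.3 mmol/L
```

The same mass-balance logic, run in reverse, gives the volume of water (or of 30% sodium chloride, for hypernatraemic patients) needed to reach a chosen target concentration.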

Nurse training was conducted through the intensive care practice development nurse.

Following introduction of the new guidelines, a re-audit was completed to ensure a safe knowledge base was achieved amongst responsible staff.

EP.179

Do calcium levels from the dialysis access line reflect systemic levels, to avoid placing a second vascular line?

Lydia Gabriel1, Elaine Creighton1, Nick Dawson1, Edward Walter1

1Intensive Care Unit, Royal Surrey County Hospital, Guildford, UK

Abstract

Background: Patients on continuous renal replacement therapy (RRT) require anticoagulation to prevent clotting of blood in the extracorporeal circuit. Citrate anticoagulation prolongs circuit life and causes less bleeding compared with heparin. Citrate chelates calcium, requiring accurate monitoring of the ionised calcium (ionCa2+) concentration within the circuit. A calcium infusion distal to the dialysis chamber prevents anticoagulated blood from re-entering the systemic circulation. The infusion rate depends on the systemic ionCa2+ concentration, which is kept within the normal range (1.0–1.2 mmol/l). Patients therefore must have another form of invasive access (a central venous or arterial catheter), in addition to the dialysis catheter for RRT, to monitor ionCa2+. Additional lines have associated morbidity and may not be required for anything other than calcium measurements.

This study aimed to determine whether ionCa2+ in the access limb of the dialysis catheter (before the citrate infusion) accurately reflects systemic ionCa2+, negating the need for a second form of invasive line in some patients.

Method: A prospective observational study was carried out in the intensive care unit (ICU) of a district general hospital in England from May to June 2017. Patients on RRT were managed according to the agreed departmental protocol, but at the same time as the systemic ionCa2+ sample was drawn from the central venous or arterial line, a sample was also taken from the dialysis catheter, proximal to the citrate infusion. Samples were analysed concurrently and recorded, but only the systemic ionCa2+ was used to direct patient treatment.

Results: 13 patients underwent RRT during the study period, with full data obtained for all patients. The mean (standard deviation) systemic ionCa2+ was 1.07 (0.11) mmol/l and the mean (SD) of the catheter levels was 1.02 (0.11) mmol/l. 12 patients (92%) had calcium levels from the dialysis catheter access point within 0.08 mmol/l of the systemic calcium level. However, in two patients (15%) the difference was such that it would have resulted in a different rate of calcium infusion than with the systemic value. Data collection is ongoing.
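One way to extend this paired comparison is a Bland-Altman-style summary of bias and limits of agreement; this is a sketch of that technique, not the study's analysis, and the paired values below are hypothetical, not the study data.

```python
# Sketch: bias and 95% limits of agreement for paired catheter vs systemic
# ionised calcium measurements. All values here are hypothetical.
import statistics

systemic = [1.10, 1.05, 0.98, 1.12, 1.07, 1.20, 1.01]
catheter = [1.06, 1.02, 0.95, 1.05, 1.03, 1.14, 0.99]

diffs = [s - c for s, c in zip(systemic, catheter)]
bias = statistics.mean(diffs)                 # mean systemic - catheter difference
sd = statistics.stdev(diffs)
loa = (bias - 1.96 * sd, bias + 1.96 * sd)    # 95% limits of agreement

print(f"bias {bias:.3f} mmol/l, limits of agreement {loa[0]:.3f} to {loa[1]:.3f}")
```

If the limits of agreement sit within the margin that would change the calcium infusion rate, catheter sampling could substitute for a dedicated line; the two outlying patients in the study show why the limits, not just the mean, matter.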

Conclusions: These data suggest that in most cases, calcium levels in the dialysis catheter, sampled prior to the infusion of citrate, closely reflect systemic ionCa2+. In the two patients whose levels fell into different treatment ranges, the differences were substantial (0.17 and 0.44 mmol/l).

While most of the levels are similar, on occasion, the difference between the catheter and systemic level is large enough that the catheter level cannot be used, and the systemic level is still required.

EP.180

Enhanced Economy and Efficiency of Phlebotomy Practice on the Surgical High Dependency Unit – a Service Improvement Project. Queen’s Medical Centre, Nottingham

Reema Patel1, David Sperry1, Francine Burns1 and Kenneth Bolger1

1Nottingham University Hospitals NHS Trust, Nottingham, UK

Abstract

Background: Annually, the Surgical High Dependency Unit admits 1650 patients with an average length of stay of 3.4 days. Standards of practice require doctors to determine the blood tests needed following daily patient reviews; however, failure to follow this protocol results in excessive empirical phlebotomy for every patient on a daily basis, with unnecessary costs to the hospital and NHS. It has also been demonstrated that this results in excessive blood sampling in already critically ill patients.

Aims:

1. Assess existing practice regarding phlebotomy including numbers of tests being processed per day, volume of blood drawn and personnel responsible for blood requests

2. Evaluate costs associated with current practice

3. Implement a novel, cost effective practice based on pre-determined blood test regimes for specific cohorts of patients

Methods:

1. Data collected from patient notes over two 3-day periods, pre and 5 weeks post implementation

2. Data obtained on number of individual full blood counts (FBC), Renal Profiles (U&E), Liver Profiles (LFT), Coagulation Profile, Bone Profile and Magnesium; blood volumes calculated based on the minimal volumes required for laboratory analysis

3. Costings assigned based on laboratory prices per test

4. Personnel correctly following given practice identified

5. Comparison of practice pre and post implementation of novel regime was undertaken

Results: Pre-implementation data (53 patient bed days)

1. Phlebotomy requests documented only three times yet tests performed on 53 occasions

2. Total number of blood tests = 300

3. Cost of blood tests per patient per day = £7.10

4. Estimated annual cost of phlebotomy on Surgical High Dependency Unit = £39,831.00

5. Estimated 20 ml of blood drawn per patient per day

Post-implementation data (49 patient bed days)

1. Nursing compliance 79.50%

2. Total number of blood tests = 172 (42.70% reduction)

3. Cost of blood tests per patient per day = £4.84 (31.83% reduction)

4. Estimated annual cost on Surgical High Dependency Unit = £27,152.40 (31.84% reduction)

5. Estimated 12.5 ml of blood drawn per patient per day (37.50% reduction)

Conclusions: By changing standards of practice and implementing a more patient-tailored phlebotomy regime, we have demonstrated a reduction in the volume of blood taken per patient per day. Moreover, we have identified significant potential for cost savings to the trust. Further recommendations include striving for 100% compliance through ongoing education and subsequent implementation in the Adult Intensive Care Unit.
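The percentage reductions quoted above can be recomputed directly from the pre- and post-implementation figures; a minimal illustrative check (not part of the original analysis):

```python
# Illustrative re-check of the reported pre/post-implementation reductions.
def pct_reduction(before, after):
    """Percentage reduction from 'before' to 'after'."""
    return (before - after) / before * 100

tests = pct_reduction(300, 172)    # total blood tests
cost = pct_reduction(7.10, 4.84)   # cost per patient per day (GBP)
volume = pct_reduction(20, 12.5)   # blood drawn per patient per day (ml)

print(round(tests, 1), round(cost, 1), round(volume, 1))  # 42.7 31.8 37.5
```

These match the reported 42.70%, 31.83% and 37.50% reductions to one decimal place.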

EP.181

Reduction in Phlebotomy Related Blood Loss in Critical Care: a quality improvement initiative

Paul Caddell1, Una St Ledger1, Victoria Watts1, Gemma Craven1, Amanda Scappaticci1, Terry Coogan1, Aine McCartney1, Geraldine Turner1 and Deirdre Donaghy1

1Belfast HSC Trust, Belfast, UK

Abstract

Background: Habitual intensive care unit (ICU) phlebotomy practices have been reported to lead to iatrogenic patient anaemia and corrective blood transfusions, and transfusions are associated with negative patient outcomes. Using a multidisciplinary quality improvement approach, we sought to permanently reduce the volume of blood lost through arterial line sampling by 30%.

Methods: Adopting improvement methodologies, this project was conducted in the 25-bedded regional ICU (RICU) in Northern Ireland (NI) from October 2016 to July 2017 in several phases: (1) Pre-implementation fact finding: data from staff questionnaires (44), observations of practice (10) and test frequency analysis established current practices and variations (Table 1). This concentrated efforts on reducing the volume of blood drawn per patient per day for arterial blood gas (ABG), full blood count (FBC) and urea and electrolyte (U&E) sampling, and on reducing variation in the pre-sample waste volume and ABG volume (Table 1); (2) Implementation of pre-heparinised low-volume ABG syringes and a volume-specific (2.5 ml waste and 0.5 ml sample) standardised operating protocol; (3) Implementation of small sample bottles for FBC (2 ml) and U&E (3 ml). Improvements were supported by a staff awareness programme and instruction posters; (4) Post-implementation staff feedback questionnaires.

Results: An overall 36.7% (16.14 ml) average reduction in the volume of blood sampled collectively for ABG, FBC and U&E per patient per day was achieved post-implementation (a reduction of 51.7% (13.49 ml) if pre-sample waste data are excluded). The standardised approach eliminated variation in practice. Improvements included reduced risks to patients and staff, and reduced loss of analyser cartridges from implementation of pre-heparinised syringes. Staff perceived the new system as safe and easy to use.

Conclusions: The saved volume is equivalent to one unit of blood every 20 days. Next steps include extending the new protocol and shared learning to other ICUs and clinical areas in our healthcare Trust and to the NI Critical Care Network. We will explore arterial line systems that eliminate dead space volume, investigate improvements in rationale-based decision-making for ABG sampling, and improve engagement with electronic ordering systems. This project is part of a larger Trust Phlebotomy Reduction Project.
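The headline figures can be cross-checked against the Table 1 baseline of 43.97 ml/patient/day; a minimal sketch (the ~320 ml red cell unit volume is our illustrative assumption, not stated in the abstract):

```python
# Cross-check of the reported reductions against the Table 1 baseline.
BASELINE_TOTAL = 43.97   # ml/patient/day for ABG, FBC and U&E (Table 1)
WASTE = 17.9             # ml/patient/day of pre-sample waste (Table 1)

saved_total = 16.14        # ml/patient/day saved post-implementation
saved_excl_waste = 13.49   # ml/patient/day saved, excluding waste

pct_total = saved_total / BASELINE_TOTAL * 100                # ~36.7%
pct_excl = saved_excl_waste / (BASELINE_TOTAL - WASTE) * 100  # ~51.7%

# "One unit of blood every 20 days", assuming ~320 ml per unit.
days_per_unit = 320 / saved_total                             # ~20 days
```

Both percentages reproduce the reported 36.7% and 51.7% figures, and the assumed unit volume is consistent with the one-unit-per-20-days claim.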

EP.182

Arterial line mis-sampling fatalities: A simulation study to determine staff error recognition

Vikesh Patel1, Catherine Peutherer1, Peter Young2, Maryanne Mariyaselvam2

1Department of Medicine, University of Cambridge, Cambridge, UK

2The Queen Elizabeth Hospital NHS Foundation Trust, Kings Lynn, UK

Abstract

Background: Dextrose or dextrose/saline infusion bags can be accidentally used for priming arterial transducer sets instead of the intended saline. Dextrose 5% contains approximately 300 mmol/l of glucose, so even minimal contamination of the arterial blood gas sample can produce an excessively high, incorrect blood glucose reading. The clinician can be misled down a treatment pathway of inappropriate insulin administration, resulting in undetected hypoglycaemia and patient harm. In 2008 the National Patient Safety Agency highlighted this error and the resultant fatalities, and instructed Trusts to introduce preventative safety measures [1]. A near miss occurred in our hospital in 2016, when a patient was admitted to the ICU from the operating theatre with an arterial transducer set primed with 5% dextrose. A root cause analysis led to educational alerts and reminders being issued to staff. Six months after the incident, we conducted a simulation study to determine whether staff would recognise and correct the error.
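The magnitude of the artefact is easy to show with a worked contamination calculation (a hypothetical sketch: the 5% contamination fraction and the true glucose of 6 mmol/l are assumed for illustration; 5% dextrose is taken as 50 g/l of glucose):

```python
# Apparent glucose reading when a fraction of the sample volume is
# dextrose flush solution rather than blood (illustrative sketch).
GLUCOSE_MW = 180.16                       # g/mol, molecular weight of glucose
DEXTROSE_5PC = 50 / GLUCOSE_MW * 1000     # 5% dextrose ~ 278 mmol/l glucose

def apparent_glucose(true_mmol_l, contamination_fraction):
    """Volume-weighted mix of blood glucose and dextrose flush glucose."""
    return (1 - contamination_fraction) * true_mmol_l \
        + contamination_fraction * DEXTROSE_5PC

# Even 5% contamination of a normoglycaemic sample (6 mmol/l) produces a
# reading suggesting severe hyperglycaemia (~19.6 mmol/l).
reading = apparent_glucose(6.0, 0.05)
```

This is why a trivially small admixture of flush fluid can drive the spurious, persistently raised glucose readings used in the simulation scenario.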

Methods: Following institutional approval, we conducted a simulation study asking participants (10 doctors and 10 nurses who regularly undertake arterial sampling and interpretation) to manage the care of a monitored ‘patient’ mannequin with a tracheal tube and arterial, central and peripheral cannulae. The patient had multiple infusions including insulin. A 5% dextrose/0.9% saline bag was mistakenly placed as the arterial prime/flush solution, with its labelling visible through a transparent pressure bag at 300 mmHg. Alongside multiple standard clinical interventions, participants were asked to demonstrate blood gas sampling, interpretation and corrective therapeutic measures when persistently raised blood glucose levels were detected. The hospital protocol for insulin administration was provided upon request. Failure to identify the error, with repeated escalation of the insulin infusion rate following three sequential raised glucose readings, was taken as an adverse patient outcome.

Results: The dextrose priming/flush solution was detected by 10% (1/10 nurses and 1/10 doctors) of participants. The remaining 90% inappropriately administered insulin therapy to likely fatal levels. Notably, one participant identified the possible error, checked the flush bag, misread the labelling and continued escalating insulin therapy.

Conclusion: Despite a national alert and a local near miss with resulting local alerts, this simulation study showed that most clinicians will miss the arterial glucose error once it has occurred. Our study shows the need for safety solutions to be introduced and used in the NHS. For rare errors such as these, high-reliability industries have engineered safety solutions to prevent such mistakes.

Table 1.

Pre-Implementation Fact Finding Results.

Sample            Average frequency/patient/day   Average volume/sample   Average volume/patient/day   Volume variation
ABG               6.1                             1.9 ml                  11.59 ml                     1.5–7 ml per sample
FBC               1.33                            4 ml                    5.32 ml                      Nil
U&E               2.29                            4 ml                    9.16 ml                      Nil
Pre-sample waste  6.1                             2.94 ml                 17.9 ml                      2–10 ml per sample
Average blood loss per patient/day for ABG, FBC, U&E: 43.97 ml

EP.183

Implementing Target Range Oxygen in Critical Care (TROCC); A pilot quality improvement study

Ronan O'Driscoll1, Timothy Fudge2, Diana Chiu2, Rosie Heartshorne2, Martha Pearson2, Jill Bentley2, Jenna Cardell3, Lisa Porritt2, Hayley Millar4 and Paul Dark2,5

1Salford Royal NHS Foundation Trust, Salford, UK

2Salford Royal Foundation Trust, Salford, UK

3Royal Bolton Hospital, Bolton, UK

4Pennine Acute NHS Trust, Oldham, UK

5University of Manchester, Manchester, UK

Abstract

There is a growing awareness of the potential harm (including mortality) caused by excessive oxygen use within specific patient populations presenting to the Intensive Care Unit. Despite guidance on the prescribing and titration of oxygen therapy, there remains much potential for variability in practice.

We present the findings of our pilot quality improvement study to reduce variability and improve the key process measures of:

• Appropriate oxygen prescribing within a target saturation range (85% of the 54 baseline patients had a formal prescription);

• Proportion of oxygen saturations within the target range (82% of baseline SpO2 values were within the target range; 4 of 6 patients with a target of 88–92% were at least 2% above this).

Critically, we focus on the perceptions and attitudes of the multi-disciplinary team during the implementation process to learn and inform further changes in practice. A baseline survey found that 76% of 33 ICU staff (16 doctors, 7 nurses, 9 physiotherapists, 1 ACCP) felt that slightly too much oxygen is used.

We aim to fully describe our change package and present the follow-up findings at the ICS SoA 2017 meeting, as a basis to inform the design of a systematic cluster-randomised implementation study using a stepped-wedge design to disseminate best practice amongst the Critical Care community.

EP.184

Management of diabetic ketoacidosis in a district general intensive care unit – what could be improved?

Nathalie Graham1 and Radha Sundaram1

1Royal Alexandra Hospital, Paisley, Glasgow, UK

Abstract

Objective: Diabetic ketoacidosis (DKA) can complicate hospital admissions and increase morbidity in diabetic patients. Poorly managed DKA can result in fluxes of sodium and glucose, which can lead to longer hospital stays and adverse outcomes. We aimed to investigate our management of DKA in a 7-bed district general hospital intensive care unit (ICU).

Methods: We retrospectively reviewed the medical admission notes and ICU nursing charts of all patients admitted with DKA over a 20 month period (August 2015- April 2017). We collated data on several outcomes in order to ascertain how successfully we are managing DKA, and which outcomes could be improved upon.

Results: All our patients (10/10) followed the hospital DKA protocol. Every patient had sodium fluxes outwith the normal range (135-145 mmol/l). The average Na high was 153 (range 138–168) and average Na low 135 (range 122–148). Every patient's rate of fall of glucose was monitored. 10/10 (100%) experienced hyperglycaemia (>12 mmol/l) – average glucose high 28.4 (range 14.3–58). 2/9 (22%) of patients experienced hypoglycaemia (<4 mmol/l) – average glucose low was 5.4 (range 1.9–7.3). The rate of fall of blood ketones was only recorded in 1/10 patients (10%). NG feeding was commenced in all DKA patients by day 2 of admission (1/10 (10%) pre-ICU, 8/10 (80%) on day 1 of admission and 1/10 (10%) on day 2). Diabetes team input was gained in 3/10 (30 %) of patients, and a further one patient (10%) had paediatric input into their diabetic management. The timing of the recommencement of normal insulin regimes was recorded in 9/10 patients. Of 9 patients, 7 restarted their normal insulin post-discharge from ICU (78%). One patient re-started on day 7 on ICU consultant advice and one patient re-started on day 11 on the advice of the hospital diabetes team.

Conclusions: We complied well with the hospital DKA protocol. NG feeding commenced early in all patients with DKA. Our recording of blood ketones was universally poor and this could be improved upon. The hospital diabetes team were used variably in order to assist DKA management; we should make them aware of patients earlier. Our recommencement of patients' normal insulin regimes was very poor – this should be improved upon in order to prevent errors made in handover of patients and to prevent unnecessary delays in restarting basal insulin. We aim to adjust our current DKA protocol in order to make it more relevant to critical care patients.

EP.185

Audit of hospital-wide Outreach response to automated VitalPAC™ alerts from chest medical versus cardiology and surgical patients in a UK cardiothoracic hospital

Ivett Blaskovics1, Jonathan Lonsdale1, Judith Machiwenyika1, James Clayton1, Kiran Salaunkey1 and Jonathan Mackay1

1Papworth Hospital, Papworth, UK

Abstract

Background: Electronic observation charts and early warning scoring (EWS) were introduced to Papworth Hospital in 2013. Papworth has two geographically distinct groups of wards: surgical and cardiology patients at the ‘acute end’ of the hospital, and patients with chronic chest disease in the Chest Medical Unit (CMU). From 2015, Outreach received automated alerts via VitalPAC Doctor™ for high-risk (EWS = 4) and critical-risk (EWS ≥ 5) scores. Although CMU accounted for <10% of cardiorespiratory arrests and unscheduled ICU admissions, it generated >50% of automated alerts to Outreach, raising concern that care was being diverted from the acutely unwell to the chronically sick.

Aim of Audit: Compare response to automatic alerts from CMU vs ‘Cardiology & Surgery’ wards.

Methods: Detailed analysis of case notes and the VitalPAC log by an ICU Clinical Fellow and a Practice Development Lead (both independent of the Outreach team) to evaluate the outcome of automatic iPod alerts triggered by high- or critical-risk EWS. Study population: 30 ‘chest medical’ and 30 ‘surgical or cardiology’ patients.

Entry Criteria – Two scores of ≥4 at least one hour apart, to exclude transient self-limiting problems or inaccurate scores due to incorrect data entry (fat fingers).

Patient Selection – Patients randomly selected by audit department during study period June to November 2015.

Exclusions – ICU, HDU and Day Wards.

Results:

                                 Cardiology & Surgery   CMU
Total Patients                   n = 30                 n = 30
High-Risk (MEWS 4)               n = 20                 n = 20
Critical-Risk (MEWS ≥ 5)         n = 10                 n = 10
Automated Alerts                 139                    378
Documented Reviews               79 (57% alerts)        102 (27% alerts)
Interventions                    51 (65% reviews)       46 (45% reviews)
Alerts per intervention ratio    2.7                    8.4
ICU Admissions                   7                      2
Alerts per ICU admission ratio   19.9                   189
High- and critical-risk results combined in interests of simplicity. Intervention defined as either ordering investigation or change in therapy.

Conclusions: There was almost a ten-fold difference in automated alerts per ICU admission between the groups. Our CMU ratio of 189 is very similar to the previously published 220 Outreach calls per ICU admission from chest medicine at Broadgreen, Liverpool.1 There is consensus that standard EWS is too sensitive in chronic respiratory patients. Automated alerts from CMU have been suspended pending introduction of a less sensitive Chronic Respiratory Early Warning Score (CREWS). In the interim, ward nurses phone Outreach when there is any cause for concern.

Reference

  • 1. Finnamore H, et al. Should there be a respiratory-specific Modified Early Warning Score? doi:10.1136/thoraxjnl-2013-204457.437.

EP.186

Decision making in the context of ‘big data’: Using the Delphi process to spot the needle in the haystack

Verity Westgate1, Julie Darbyshire1 and Peter Watkinson1,2

1University of Oxford, Oxford, UK

2Oxford University Hospitals NHS Trust, Oxford, UK

Abstract

Background: Early recognition of hospital patients at risk of severe reversible deterioration is a key goal for the NHS (Department of Health, 2012; NHS England, 2012). The Hospital Alerting Via Electronic Noticeboard project is developing a system enabling continuous risk assessment in all hospital patients, predicting those at risk of requiring intensive care admission.

It is now possible to include any data electronically captured during routine care in an early warning score. The challenge is to identify the variables with meaningful predictive power.

We used a mixed-methods approach to examine a variety of data sources. The core decision-making method was a Delphi process.

Methods: The Delphi process is based on the principle that the opinion of a group is more valid than that of an individual (Keeley, 2011). For the first round, initial results from a literature search (Malycha, 2017) were circulated to a panel of experts, who were invited to confirm or reject the proposed variables as important predictors of deterioration. Panel members were also asked to suggest additional variables. During the second round, panel members were asked to indicate which variables they felt should not be included. A final face-to-face meeting consolidated a list of variables for the development team to include in the first iteration of the risk prediction score.

Results: Sixteen variables were identified from the literature. Seven responses (100% return) from round one increased this to sixty-five possible clinical indicators of deterioration in the hospital patient. After responses had been received from round two (70% return), twelve variables were discounted by at least one expert. There was insufficient consensus to remove any single variable at this stage. After the meeting six variables were discounted; seventeen further were proposed and agreed. Seventy-six variables were put forward into the initial score.
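The variable counts in the Results reconcile as a simple tally (a minimal bookkeeping sketch of the figures above):

```python
# Bookkeeping of candidate variables across the Delphi rounds.
from_literature = 16                       # variables from the literature search
after_round_one = 65                       # after round-one expert suggestions
discounted_at_meeting = 6                  # removed at the face-to-face meeting
added_at_meeting = 17                      # further proposed and agreed
into_initial_score = after_round_one - discounted_at_meeting + added_at_meeting
print(into_initial_score)                  # 76 variables in the initial score
```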

Discussion: Using the Delphi method enables a wider range of opinions and evidence-based values to be considered and triangulated to achieve a more comprehensive consensus. This decision-making process included opportunities for opinions to be given both in isolation and developed through multi-disciplinary conversation (Kitzinger, 2006).

This abstract presents independent research commissioned by the Health Innovation Challenge Fund (HICF-R9-524; WT-103703/Z/14/Z), a parallel funding partnership between the Department of Health and Wellcome Trust. The views expressed in this publication are those of the author(s) and not necessarily those of the Department of Health or Wellcome Trust.

EP.187

The variability in documented response to automated electronic early warning score alerts in deteriorating ward patients: is there an equally electronic solution?

Sam Prince1, Richard Cox2, Sarah Ingleby2, Jonathan Bannard-Smith2

1University of Manchester, Manchester, UK

2Central Manchester University Hospitals NHS Foundation Trust, Manchester, UK

Abstract

Introduction: Rapid response systems aim to identify deteriorating patients in the hospital setting (the afferent limb), before prompting an appropriate and timely clinical response (the efferent limb). Automated electronic track and trigger systems add reliability and robustness to the afferent limb. We have introduced such a system and have found it to improve clinician attendance at the bedside in a large university hospital. Capturing data from the efferent limb concerning the timeliness and nature of clinician response along with immediate patient outcomes is more challenging.

Study Objectives: A previous audit of responses to electronic alerts revealed a response rate of 87% of patients, but documentation of these responses occurred for only 52% of patients. We implemented a technological solution to allow clinical responders to submit real-time data and documentation in the patient’s medical notes via a web-based portal.

Methods: Following implementation and staff training on the response tab we repeated a covert 7-day audit in June 2017. We collected data on clinical responses from critical care staff, their timeliness and whether they were documented in the patient’s medical record. We also collected data on patient outcomes at 24 hours following their alert.

Results: There were 33 alerts for 17 individual patients during the audit period. Sixteen alerts, relating to 14 individual patients, received a response from critical care, giving a comparable per-patient response rate of 82% (14/17). The median response time was 79 minutes. 100% of responses were documented, and all documentation occurred via the electronic tool rather than the hand-written case notes. Repeat alerts were common, with 18 originating from just 3 patients. No patients were admitted to critical care within 24 hours of their alert; however, 4 (23.5%) had a new initiation of palliative care.

Conclusions: The introduction of an electronic documentation tab to our rapid response system revealed a sustained clinician response rate and a significant improvement in response documentation in patients’ medical records. There appeared a preference for electronic documentation compared to handwritten notes. A switch to palliative care was common. The lack of subsequent admissions to critical care along with the high number of repeat alerts could be viewed as an excessive source of noise. Further innovations such as assigning an automated shelf life to clinical responses could help to address this issue.

EP.188

Humidified High Flow Nasal Oxygen delivery (Optiflow™) on the Ward is safe and effective in reducing admissions to Level 2 care

Darius Zeinali1,2, Tim Furniss1,2, Tina Joy2 and Laura Langton2

1Health Education North West, Manchester, UK

2Warrington and Halton Foundation Trust, Warrington, UK

Abstract

Introduction: High-flow nasal cannula oxygen therapy (HFNC) via Optiflow™ (Fisher & Paykel Healthcare) has been increasingly used in our critical care unit for management of hypoxia. Benefits of HFNC have been well described elsewhere1 however there has been debate as to whether HFNC can be safely used on general wards2. We report a trial of HFNC on our MAU and respiratory wards as an alternative to critical care admission.

Methods: Optiflow™ was introduced for a four week trial, led by the respiratory physiotherapy team with critical care outreach input. Nurse training was implemented and escalation plans were agreed for each patient on commencement of HFNC. Patients had daily medical and outreach nursing review. Routine observations were completed using the NEWS chart with escalation based on NEWS score.

Results: During the trial period, ten patients were commenced on HFNC for hypoxia, with a median duration of therapy of three days. Three patients deteriorated despite HFNC and further escalation was deemed inappropriate. The remaining patients were weaned from HFNC successfully; no trial patients required escalation of care. No adverse events were recorded. Since the trial, nine patients have been admitted to HDU solely for HFNC therapy over a two-month period. These patients could potentially have been managed on the ward had HFNC been available.

Discussion: Concerns regarding HFNC use on wards may arise from the lack of evidence of safety in these settings. There are concerns that, particularly in patients with severe hypoxaemia, adverse events may result in patient harm. There is some evidence that in patients who 'fail' on HFNC, intubation may be delayed, with higher mortality and worse outcomes.3 Our limited trial demonstrated, however, that with appropriate observation, training and frequent input from critical care outreach, HFNC can be introduced safely onto the general ward. There will remain a subset of patients for whom admission to HDU for closer observation on HFNC is desirable. As we look to permanently introduce HFNC on our wards, robust protocols governing patient selection, monitoring and escalation will be essential to maintain patient safety.

References

  • 1. Ashraf-Kashani R, Kumar R. High-flow nasal oxygen therapy. BJA Education 2016; 17: 57–62.
  • 2. Spoletini G, Alotaibi M, Blasi F, et al. Heated humidified high-flow nasal oxygen in adults: mechanisms of action and clinical implications. Chest 2015; 148: 253–261.
  • 3. Kang BJ, Koh Y, Lim CM, et al. Failure of high-flow nasal cannula therapy may delay intubation and increase mortality. Intensive Care Med 2015; 41: 623–632.

EP.189

Improving Medical/Surgical Emergency Team (MET/SET) call effectiveness: How can we improve decision making, documentation and communication between teams?

Charlotte Long1, Sarah Crabtree2 and Kate Murray3

1Western Sussex Hospitals NHS Trust, Worthing, UK

2St Georges University Hospitals NHS Foundation Trust, London, UK

3East Sussex Healthcare NHS Trust, Hastings, UK

Abstract

Medical Emergency Team (MET) and Surgical Emergency Team (SET) calls have been used in both District General Hospitals of an acute NHS trust to alert the relevant teams to a deteriorating patient, in order to initiate prompt assessment and treatment, escalation of care where necessary, or decisions regarding appropriate ceilings of care. The Surgical or Medical registrar is required to attend, as well as the Critical Care Outreach Nurse and the Anaesthetic Registrar or Senior House Officer covering Intensive Care. Although these calls are put out regularly, inconsistencies have been noted in the team members attending and the documentation recorded by each relevant team, and at times there is a lack of clear evidence regarding decisions surrounding appropriate escalation plans.

We conducted a prospective audit over a one-month period, with data from 85 MET & SET calls collected using a pro forma completed by the attending Critical Care Outreach Nurse. This allowed us to identify which team members attended, whether medical assessments, critical care opinions and escalation plans were clearly documented, and whether “Not for Resuscitation” forms were considered or completed if it was agreed that CPR would not be appropriate. Data were collated using Microsoft Excel.

MET & SET calls were well attended by Critical Care Outreach, with an Outreach nurse present at 99% of calls. 93% were attended by a medical or surgical registrar and 85% of all calls were attended by an anaesthetist. Documentation was variable: medical assessments were documented for 86% of calls, whereas critical care opinions were documented at only 67% of all MET & SET calls. Escalation plans were either unclear or not documented at 35% of all calls. This is clearly an important area to address, both in terms of optimal patient care and for improving communication with ward teams regarding those patients for whom escalation to critical care would be appropriate, and under which circumstances, or indeed where discussion about resuscitation status should be considered.

We propose a simple intervention to improve documentation at MET/SET calls in the form of a distinctive yellow pro forma on an A5 sized sticker that can be placed in patient notes. This serves as a prompt for MET/SET members to document clear decisions regarding appropriate escalation plans for that patient. It also provides clear communication to the patient’s responsible medical/surgical team that there has been a clinical deterioration, especially if occurring out of hours.

EP.190

Correlation between pRBC transfusion and reintubation rate in ICU patients

Aristeidis Vakalos1 and Konstantinos Pagioulas1

1ICU, Xanthi General Hospital, Xanthi, Greece

Abstract

Introduction: pRBC transfusion is not risk free: it is associated with allergic reactions, lung injury, infectious disease and immunosuppression in recipients, while the cost of blood screening and storage is high. It is also associated with circulatory overload in recipients, which can increase the risk of reintubation.

Objectives: The aim of our retrospective observational study was to test the hypothesis that a correlation exists between pRBC transfusion and reintubation rate in our mixed medical and surgical ICU in a community hospital.

Materials and methods: From January 2006 to December 2014, 692 patients were admitted to our ICU: mean age 65.1 years, mean length of ICU stay (LOS) 13.7 days, mean duration of mechanical ventilation per ventilated patient 11.83 days, mean APACHE II score on admission 21.3, predicted mortality 39.2%, actual mortality 31.50%, standardised mortality ratio (SMR) 0.80. From our database we extracted the reintubation rate (episodes per ‰ ventilation days) and the following values for 2006 to 2014: pRBC units cross-matched (c-m) and transfused (tran): total, per patient, per hospitalisation day (HD), per patient under mechanical ventilation (pt V), per ventilation day (VD), and the ratio of pRBC cross-matched to transfused. Using linear correlation, we calculated the slope, correlation coefficient (r) and coefficient of determination (r2), and by linear regression with ANOVA we obtained the p value for reintubation rate against each pRBC index.

Results: Correlation between reintubation rate and pRBC transfusion and cross matched indexes.

pRBC index        Slope     r         r2       S. Error   p value
Total c-m         -0.6400   -0.8638   0.7461   1.5810     0.0057
Total tran        -4.4600   -0.8523   0.7265   1.1170     0.0072
c-m per pt        -0.0234   -0.2782   0.0774   0.0329     0.5046
Tran per pt       -0.0668   -0.2570   0.0660   0.0259     0.5389
c-m per H.D.      -0.0020   -0.4201   0.1765   0.0017     0.3001
Tran per H.D.     -0.0014   -0.3566   0.1272   0.0015     0.3859
c-m per pt V      -0.0317   -0.3569   0.1274   0.0338     0.3854
Tran per pt V     -0.0225   -0.3197   0.1022   0.0273     0.4401
c-m per V. day    -0.0025   -0.3391   0.1150   0.0028     0.4112
Tran per V. day   -0.0018   -0.3153   0.0993   0.0022     0.4469
c-m over tran     -0.0012   -0.0852   0.0072   0.0060     0.8409

Conclusion: According to our data, no statistically significant correlation was detected between reintubation rate and the per-patient, per-day or cross-matched-over-transfused pRBC indexes. On the other hand, a statistically significant, strong, negative linear correlation was detected between reintubation rate and the total amount of pRBC cross-matched and transfused. Our data suggest that even though pRBC transfused per patient did not correlate with reintubation rate, some pRBC recipients need special monitoring and management to avoid weaning failure and reintubation.
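The slope, r and r2 values tabulated in the Results can be obtained from an ordinary least-squares fit; a minimal pure-Python sketch (the data below are synthetic and purely illustrative, not the study data, and the ANOVA-derived p value is omitted):

```python
import math

def linfit(x, y):
    """Least-squares slope, Pearson r and r^2 for paired samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    syy = sum((yi - my) ** 2 for yi in y)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    r = sxy / math.sqrt(sxx * syy)
    return slope, r, r * r

# Synthetic example: a perfectly linear negative relationship between a
# transfusion index and reintubation rate gives a negative slope and r = -1.
index = [10, 20, 30, 40, 50]
rate = [-0.64 * xi + 40 for xi in index]   # slope chosen for illustration
slope, r, r2 = linfit(index, rate)
```

On real, noisy yearly aggregates such a fit yields |r| < 1, as in the table above, with significance then assessed separately.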

EP.191

Hypernatraemia in the Critically Ill

Andrew Laurie1, Natalie Hills1 and Paul Gamble1

1Aberdeen Royal Infirmary, Aberdeen, UK

Abstract

Hypernatraemia, a common finding in the critically ill, is a potentially serious complication that can be associated with a deficit of free body water. As the majority of intensive care patients are unable to regulate their own fluid intake due to sedation, it has been suggested that this is predominantly an iatrogenic complication. We aimed to investigate the number of patients who developed hypernatraemia, and in particular the surrounding clinical circumstances, recognition, subsequent management and mortality rate.

Methods: The study centre was a 16 bed general adult Intensive Care Unit (ICU) in Scotland. Using WardWatcher, all patients admitted for >48 hours during the calendar year 2015 were identified. Within this cohort those patients with serum sodium values ≥150 mmol.l−1 were identified and their medical records analysed to extract: date of death if applicable, management of hypernatraemia and use of diuretics. The degree of respiratory failure (as measured by PaO2/FiO2 ratio) and degree of concurrent renal dysfunction (as measured by serum creatinine and urea) were also noted, on the hypothesis that these factors may have significantly influenced fluid management.

Results: Two hundred and forty-two admissions were identified, of which 10 were readmissions. We were unable to access the electronic records of eight patients and these were excluded from data analysis. Eighty-two patients (35%) were identified as having hypernatraemia during their ICU stay, including two (0.8%) who were admitted with a sodium ≥155 mmol.l−1. The 90-day mortality in this group was 45.12%, compared with 36.91% in non-hypernatraemic patients admitted in the same period. Thirty-six patients (43.90%) had severe hypernatraemia, defined as a value >155 mmol.l−1. Only a minority of hypernatraemic patients had a PaO2/FiO2 ratio <200 mmHg, and the majority had a creatinine <100 µmol.l−1.

Discussion: The data showed that a significant proportion of patients admitted to ICU in 2015 developed hypernatraemia, with differing approaches to management thereafter. Hypernatraemia appeared to be associated with a higher mortality rate (45.12% versus 36.91%) in this study, although the sample size was small. It remains open to debate whether developing a high serum sodium is secondary to underlying disease processes or reflects current practices in the management of the critically ill, sedated patient, particularly with respect to fluid status. Evidently there is scope for improvement in the recognition and management of patients with serum sodium disturbances, and this will be re-evaluated in the study centre in the future.

EP.192

Managing fluid balance in critical care: The role of ROSE and its impact on prognosis in critically ill patients

Jonathan Finnity1, Aditya Kuravi1, Asad Naqvi1, Gaurav Gupta1

1Walsall Manor Hospital, Walsall, UK

Abstract

Introduction: Evacuation, or de-resuscitation, is a growing concept in fluid optimisation in critical care. The association between positive fluid balance, fluid overload and increased morbidity and mortality is well established: excessive positive fluid balance carries risks such as cardiac failure and ARDS, higher cumulative balances are associated with higher mortality, and an overall even balance improves outcome. The ROSE concept describes the phases of fluid management in critical illness. The first three phases (Resuscitation (R), Optimisation (O) and Stabilisation (S)) represent progression from fluid boluses, to fluid titration, to maintenance fluids. The final phase, Evacuation (E), also known as ‘goal-directed fluid removal’ or ‘de-resuscitation’, describes the active removal of fluid using diuretics, renal replacement therapy and the discontinuation of invasive therapies, aiming for a net negative balance and thus avoiding fluid overload. Furthermore, evidence shows that a negative fluid balance at day 5 is associated with a 70% improvement in survival, exemplifying the importance of a fluid evacuation phase.

Methodology and results: A retrospective audit of 79 patients over one month. Fluid balance, severity of ARDS, haematocrit and APACHE II score were assessed to evaluate our practice. The incidence of severe ARDS was lowest in patients with a neutral cumulative fluid balance (2.4% in those with a balance of −2L to +2L), whereas those with a more negative (<−2L) or more positive (>+2L) net balance showed incidences of severe ARDS of 28.5% and 25.9% respectively. Mean fluid balance in patients with severe ARDS was +6.46L (range +21.3L to −3L), compared with +0.22L (range +6.8L to −8.6L) and +3.34L (range +20.4L to −1.8L) in patients with moderate and mild ARDS respectively. Overall, a higher haematocrit (mean 0.32, range 0.2–0.42) was associated with a higher incidence of ARDS; we believe this reflects attempts to optimise V/Q matching through aggressive fluid offloading. 83% of patients with an APACHE II score of >25 had a positive cumulative fluid balance (mean +1.53L, range +3.2L to −0.5L), while the equivalent figure was 71% in those with a score of ≤25 (mean +2.43L, range −8.6L to +21.3L).

Conclusion: A higher fluid balance was associated with a higher incidence of severe ARDS and higher APACHE II scores. Although attempts were made to optimise V/Q matching, our findings demonstrate the difficulty of adhering to the ROSE strategy described above, highlighting the need for increased awareness amongst healthcare staff to improve outcomes.

EP.193

Sodium Glucose Co-Transporter 2 (SGLT2) Inhibitors: How well do we know the side effect profile in ITU? Should current peri-operative diabetes guidelines be challenged?

Jonathan Pang1, Raghib Qamar Malik1, Shun Ying Ho2, Shibaji Saha1 and Nauman Hussain1

1Queen's Hospital, Romford, UK

2NHS Newham, Newham, UK

Abstract

Introduction: Diabetes is a common chronic metabolic disorder. The effective management of diabetes in the peri- and post-operative period of any surgical patient is central to every clinician's practice.

Current AAGBI guidelines on the peri-operative management of diabetes recommend that SGLT2 inhibitors be omitted while the patient is fasting. If a variable rate intravenous insulin infusion (VRIII) is being used, the SGLT2 inhibitor should be withheld until the patient is eating and drinking normally.

Case Summary: We present the case of a 42-year-old Asian woman with a background of hypertension, TIA, coeliac disease and type 2 diabetes, admitted for an elective hysterectomy for menorrhagia. Intra-operatively, the surgery was prolonged due to adhesions; no other complications were reported.

On day 1 post-operatively she was recovering well, eating, drinking and mobilising, and her oral medications were restarted.

On the evening of day 2 post-operatively, she became unwell and progressively dyspnoeic, tachypnoeic and tachycardic, with a severe metabolic acidosis (pH 6.89, HCO3 2 mmol.l−1, BE −22.5) demonstrated on ABG; a urine dip was positive for ketones, with a capillary glucose of 20 mmol.l−1. She was transferred to ITU peri-arrest, requiring intravenous bicarbonate, and a CT of the abdomen showed a collection. She was treated as diabetic ketoacidosis with fixed-rate insulin and 0.9% saline with potassium as per standard protocol.

Between days 3 and 5 post-operatively, she developed recurrent ketosis each time her fixed-rate insulin was weaned, despite long-acting subcutaneous insulin.

On review of her medications, canagliflozin, an SGLT2 inhibitor, was identified as the offending agent.

Canagliflozin leads to ketosis by increasing hepatic ketogenesis, a risk that has been highlighted in an MHRA alert.

The offending drug was stopped, and the recurrent ketosis on weaning of the fixed-rate insulin resolved. She was stepped down from critical care and discharged to the ward.

Conclusion: Although SGLT2 inhibitor-induced ketosis is rare, the diagnosis should be considered.

The recommended timing for restarting SGLT2 inhibitors should be revisited by the AAGBI in light of the MHRA alert and cases such as this. With the marketing of newer agents in this class there is an increased need for clinicians to stay up to date with guidance on their appropriate management in the post-operative period; as this case highlights, patients who are eating and drinking normally can still develop a life-threatening complication.

EP.194

Gender Correlation of patients on CIWA Protocol

Ammar Malik1, Asif Abdul Hameed1, Nicholas Ghionni1, Ana Maheshwari1, Aasim Mohammed1, Blanca Iriarte1, Komal Magsi2 and Dominic Valentino III1

1Mercy Catholic Medical Center, Pennsylvania, USA

2Stony Brook University, New York, USA

Abstract

Introduction: The literature has shown that benzodiazepines affect males and females differently for various reasons. We analyzed males and females on the CIWA (Clinical Institute Withdrawal Assessment for Alcohol) protocol and studied their outcomes, benzodiazepine dosage, ICU length of stay (LOS) and hospital LOS.

Methods: A retrospective observational study was conducted in an adult ICU at an urban academic community hospital, with a total population of 56 patients. The groups were divided by gender: male (n = 46) and female (n = 10). The subsets were compared on mean benzodiazepine dose, ICU LOS and hospital LOS.

Results: The mean benzodiazepine dose was higher in the female population than in the male population (64 mg vs 28 mg, p = 0.05). The mean ICU LOS was also higher in the female population (12 vs 4 days, p = 0.05), as was the mean hospital LOS (13 vs 8 days, p = 0.05).

Conclusions: There was a significant difference in the amount of benzodiazepine administered to the female patient population, which appears to have affected their ICU and hospital LOS. The results are not entirely conclusive, due in part to the disproportionate gender representation and the overall small number of patients.

EP.195

Creating a Portable High-Fidelity ECMO Mannequin for Developing an ECMO assisted CardioPulmonary Resuscitation (eCPR) Programme

Christopher Tomlinson1, Alex Rosenberg1, Bradley Pates1 and Abayalingam Abayalingam1

1Royal Brompton & Harefield NHS Foundation Trust, London, UK

Abstract

Background: eCPR is a complex process requiring multiple team members to conduct technical procedures in a coordinated, time-critical fashion. Simulation has an increasingly well recognised role in medical training, particularly in the field of critical care, allowing non-technical skills to be honed and logistical challenges to be identified and solutions sought in a safe environment. Despite the availability of commercial simulation mannequins for specific tasks in the eCPR process, e.g. ECMO cannulation, we were unable to identify a suitable and affordable product able to accommodate all stages of the resuscitation.

Objectives: We sought to design a high-fidelity mannequin to facilitate eCPR simulation encompassing advanced airway management, ventilation and chest compressions alongside ultrasound-guided cannulation and establishment of peripheral VA-ECMO. The mannequin was required to be portable to pilot a prehospital eCPR programme allowing sequential simulation and transfer from helicopter to trolley to cath lab and ICU.

Methods: To minimise cost and simplify construction we adopted a pragmatic approach, modifying a broken Laerdal SimMan® to provide airway management and chest compression capability. A circulatory system was created using rubber tubing, a loop of which was passed down the mannequin's right leg. A window over the right groin allowed the tubing to be encased in ballistics gel, simulating soft tissue and skin and permitting ultrasound-guided cannulation of the femoral artery and vein. The two ends of the rubber tubing were connected to a two-litre reservoir bag filled with fake blood, forming a closed circuit which was then pressurised with a standard pressure bag. Once cannulated, this allows a primed ECMO circuit to be attached and the mannequin to be perfused by the ECMO pump. As the mannequin was broken there was no interface to the simulation software; this was circumvented by using the virtual mannequin setting in the SimMan software to provide real-time monitoring.

Results: To date we have conducted several multidisciplinary eCPR simulations with members of the medical, nursing, perfusion and prehospital teams. A brief survey conducted to evaluate the mannequin’s performance showed the 24 respondents unanimously ‘strongly agreed’ that the mannequin was believable, increased their confidence in eCPR and was a valuable learning experience.

Conclusion: Our DIY mannequin fulfilled our objectives of facilitating high-fidelity eCPR simulation as evidenced by unanimously positive feedback. To enable repeated simulations we plan to 3D print bespoke legs, incorporating a user-replaceable femoral vessels module, allowing the mannequin to be rapidly reset.

EP.196

Increased incidence of aspiration pneumonia following new Cardiac Arrest guidelines?

Matthew Cadd1, Marek Parkola1 and Mohsan Mallick1

1Hull and East Yorkshire Hospitals NHS Trust, Kingston Upon Hull, UK

Abstract

Background: Following the publication of the 2010 ALS (Advanced Life Support) guidelines there was an increased emphasis on minimally interrupted chest compressions, and a reduced emphasis on early endotracheal intubation, during cardiopulmonary resuscitation (CPR). A service evaluation in critical care was performed to establish whether the incidence of aspiration pneumonia increased in patients receiving CPR.

Method: A retrospective analysis of patients admitted following CPR to the Hull Royal Infirmary critical care unit was performed. The data were divided into two groups: pre-2010 (before the change in ALS resuscitation guidelines) and post-2010. Aside from evaluating cardiac arrest management, we investigated variables such as sputum cultures, radiological changes and inflammatory markers consistent with aspiration pneumonia. Data were also gathered regarding the presence of an endotracheal tube at the time of CPR commencement.

Results: 118 patients’ medical records were analysed; with 58 patients in the pre-2010, and 60 in the post-2010 groups respectively. Results have been tabulated, with relevant significance values calculated using the chi-square method.

                           Pre-2010   Post-2010   Significance
Positive sputum culture        6          22       p < 0.01
Radiological changes          34          27       p = 0.139
Received ETT                  33          38       p = 0.475
Mean white cell count       24.8        20.6
Mean CRP                     109          91

Conclusion: This project has demonstrated a potentially increased risk and incidence of aspiration pneumonia since the change in ALS guidelines in 2010. Whilst many factors contribute to the risk of aspiration pneumonia, this study raises the question of whether further research may identify a link between the 2010 changes to the ALS guidelines and an increased incidence of aspiration pneumonia.
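The significance values above can be reproduced from the tabulated counts. As an illustrative sketch (assuming the uncorrected Pearson chi-square was used, which is consistent with the reported p = 0.139 for radiological changes), the statistic and p-value for a 2×2 table with one degree of freedom can be computed in pure Python:

```python
import math

def chi_square_2x2(a, b, c, d):
    """Pearson chi-square (no Yates correction) for the 2x2 table
    [[a, b], [c, d]]; returns (statistic, p-value) with 1 degree of freedom."""
    n = a + b + c + d
    chi2 = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    # For 1 degree of freedom, P(X > chi2) = erfc(sqrt(chi2 / 2))
    p = math.erfc(math.sqrt(chi2 / 2))
    return chi2, p

# Positive sputum culture: 6/58 pre-2010 vs 22/60 post-2010
chi2, p = chi_square_2x2(6, 58 - 6, 22, 60 - 22)
print(f"sputum culture: chi2={chi2:.2f}, p={p:.4f}")   # p < 0.01

# Radiological changes: 34/58 vs 27/60
chi2, p = chi_square_2x2(34, 58 - 34, 27, 60 - 27)
print(f"radiology: chi2={chi2:.2f}, p={p:.3f}")        # not significant
```

Note that applying Yates' continuity correction, as many statistical packages do by default for 2×2 tables, would give slightly different values without changing the conclusions.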

EP.197

A retrospective audit of post cardiac arrest temperature management in a DGH Intensive Care Unit (ICU)

Laura Pocock1, Thomas Pratt1, Steve Fry1 and James Nicholson1

1Western Sussex Hospitals NHS Trust, Worthing, UK

Abstract

Introduction: Neurological outcomes following cardiac arrest (CA) have consistently been shown to improve with control of hyperthermia. Although the optimum target temperature and duration of control remain unclear, the International Liaison Committee on Resuscitation recommends targeted temperature management for at least 24 hours in all patients after return of spontaneous circulation.1

As part of a review of our post-cardiac arrest management, we audited the temperature management of patients admitted to our Intensive Care Unit.

Methods: Data for all cardiac arrests was collected retrospectively for a two-year period from April 2015 until June 2017. All information was taken from the ICIP clinical information system (Philips Healthcare, Guildford, UK) and the hospital’s computerised record systems (SemaHelix, Atos, Bezons, France and Evolve, Kainos, Belfast, UK). Data was analysed using Microsoft® Excel® (Microsoft Corporation, Redmond, USA).

Results: We identified 89 patients admitted following cardiac arrest, with a mean (SD) age of 65 (15) years. 55 (62%) patients were male, and just under half (44, 49%) were admitted following an out-of-hospital cardiac arrest (OOHCA). 32 (36%) patients survived to hospital discharge. Two-thirds of survivors were admitted following an in-hospital cardiac arrest.

Non-shockable rhythms occurred most frequently as the presenting rhythm (57, 64%). These patients had poorer outcomes when compared with those presenting with shockable rhythms.

Temperature measurements recorded during the first three days of admission showed that targeted temperature control was achieved in only half of patients. The proportion of time spent in the target temperature range varied between 50.5% and 80.2%, and target temperatures were most consistently adhered to during the second day of admission. Of note, the lowest daily temperatures were seen in the non-survivor group.

Conclusions: Although the rate of survival to hospital discharge in patients admitted to our ICU reflects national data, our audit highlighted considerable room for improvement in the attainment and maintenance of targeted temperature control. It also identified scope to formalise a unit protocol to standardise temperature monitoring and the equipment used to facilitate these specified temperature ranges post arrest.

Reference

Donnino et al. Temperature Management After Cardiac Arrest: An Advisory Statement by the Advanced Life Support Task Force of the International Liaison Committee on Resuscitation and the American Heart Association Emergency Cardiovascular Care Committee and the Council on Cardiopulmonary, Critical Care, Perioperative and Resuscitation. Circulation, 2015; 132(25): 2448–56.

EP.198

From Defib to Debrief – Hot Debriefing Following Cardiac Arrest

Emily Reynolds1, Bethan Williams1, Siobhan Perry2, Gavin Richards3 and Belen Espina1

1Royal United Hospital, Bath, UK

2Great Western Hospital, Swindon, UK

3North Bristol NHS Trust, Bristol, UK

Abstract

Ward-based cardiac arrests are often stressful and, despite training, some staff may feel ill-prepared. Debriefing after cardiac arrest can reduce psychological harm, improve team performance and possibly patient outcomes, and is recommended by the Resuscitation Council. It can take place immediately post event (hot debriefing) or some time afterwards (cold debriefing).

Following a difficult cardiac arrest at our DGH we decided to implement a quality improvement project of hot debriefing after ward-based cardiac arrests. At the time no debriefing routinely took place. Hot debriefing was chosen because immediately following cardiac arrest all the team are generally present. Cold debriefing is potentially difficult due to shift patterns and ward duties. We aimed for 50% of cardiac arrests to be debriefed by June 2017.

We surveyed 20 doctors regarding their experience and expectations of debriefing. A flow diagram was designed with guidance on holding hot debriefs which was incorporated into the cardiac arrest documentation on every arrest trolley.

We educated members of the crash team with teaching sessions and simulated cardiac arrests. Only cardiac arrests where a 2222 call was made, the crash team attended and chest compressions/defibrillation occurred were included.

15/20 doctors surveyed had attended a previous cardiac arrest. Only 1 doctor reported attending a formal debrief, although 17 had debriefed informally with colleagues. 90% felt that hot debriefing would be helpful, although time limitations were a concern, and 50% said that keeping staff together was difficult. Most respondents felt that both clinical and emotional aspects should be discussed in a debrief (90% and 85% respectively); 60% felt technical skills should be covered.

2 months of baseline data collection showed no debriefs occurring. During our 4-month intervention cycle, crash teams attended 22 cardiac arrests. 1 in 5 arrests was followed by a hot debrief (figure 1), with very positive feedback from members of the team. The main barriers to debriefing were lack of availability of the resuscitation team (further crash calls or the need to stabilise the patient following ROSC) and lack of awareness.

Conclusion:

• The rationale for debriefing following a cardiac arrest was strongly supported by staff.

• 20% of all cardiac arrests are now followed by a hot debrief.

• Main barriers remain time constraints and availability of members of resuscitation team.

• Further education and training are necessary to achieve sustained improvement.

Figure 1: Incidence of cardiac arrest calls versus hot debriefs per month, Jan–June 2017, and significant training interventions.

EP.199

Factors Influencing the Outcome of In-hospital Cardiac Arrests

Nicholas Womack 1

1University of Manchester, Manchester, UK

Abstract

Background: The latest statistics from the National Cardiac Arrest Audit (NCAA) give an incidence of in-hospital cardiac arrest of 1.3 per 1000 admissions, with a survival-to-discharge rate of 20.1%. To maintain survival rates, the Chain of Survival must be adhered to at all times through application of the latest Advanced Life Support (ALS) guidelines. Foundation year doctors hold a key role in the identification and management of cardiac arrests, and their adherence to the Chain of Survival is therefore key.

Objectives:

1. To compare the incidence and survival rates of in-hospital cardiac arrests at Manchester Royal Infirmary to the national average.

2. To describe factors influencing the outcome of in-hospital cardiac arrests.

3. To assess the exposure of Foundation year doctors to cardiac arrests and how this affects their confidence with ALS.

Methods: A retrospective cohort of adult patients who sustained an in-hospital cardiac arrest at Manchester Royal Infirmary between the 1st of April 2016 and the 31st of March 2017 was assessed. A wide variety of factors was examined for an association with a decreased likelihood of survival to discharge, using Fisher’s Exact Test for significance. An online survey was distributed via email to Foundation year 1 & 2 doctors to assess the number of cardiac arrests they had attended in the past year and their self-reported confidence with ALS. The correlation between exposure and confidence was then explored using Spearman’s Correlation Coefficient.

Results: From 127 cardiac arrests (1.0 per 1000 admissions), survival to hospital discharge was 19.7%. Advancing age (p = 0.006), longer resuscitation time (p < 0.001) and rhythms other than ventricular fibrillation (p < 0.001) were associated with a poorer outcome. Out of the 26 responders, Foundation year doctors averaged a total of 4.2 cardiac arrests attended annually, although exposure varied considerably between responders. Higher levels of cardiac arrests attended were correlated (coefficient = +0.44) to higher levels of confidence in managing cardiac arrests (p = 0.026).

Conclusions: The incidence of in-hospital cardiac arrests for Manchester Royal Infirmary is lower than the national average, and survival rates are similar. Initial rhythm, age and resuscitation time all have an influence on the likelihood of survival. There is a positive correlation between the number of arrests doctors attend and their confidence in management. The large variation in exposure to cardiac arrests between Foundation year doctors suggests that some may benefit from steps to increase their experience of ALS.
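Spearman's correlation coefficient, as used above to relate arrest exposure to confidence, is simply the Pearson correlation computed on ranks, with tied values given their average rank. A minimal sketch on hypothetical data (the numbers below are invented for illustration; the survey's own responses are not reproduced here, and no significance test is included):

```python
def average_ranks(values):
    """Rank values 1..n, assigning tied values their average rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # extend j over a run of tied values
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        for k in range(i, j + 1):
            ranks[order[k]] = (i + j) / 2 + 1  # 1-based average rank
        i = j + 1
    return ranks

def spearman_rho(x, y):
    """Spearman's rank correlation: the Pearson correlation of the ranks."""
    rx, ry = average_ranks(x), average_ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    var_x = sum((a - mx) ** 2 for a in rx)
    var_y = sum((b - my) ** 2 for b in ry)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical data: arrests attended vs self-reported confidence (1-5)
arrests = [0, 1, 2, 3, 5, 8, 10]
confidence = [2, 2, 3, 3, 4, 4, 5]
print(round(spearman_rho(arrests, confidence), 2))  # strongly positive (0.97)
```

Because the coefficient depends only on ranks, it captures the monotonic relationship between exposure and confidence without assuming the two are linearly related.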

EP.200

Outcomes in haematological malignancy; four-year retrospective in a District General Hospital

KM Wilkinson1, Peter Hart1, Matthew Tinker1 and Brendan Sloan1

1Department of Critical Care, Pinderfields Hospital, Mid Yorkshire Hospitals NHS Trust, Wakefield, UK

Abstract

Introduction: Haematological malignancy has historically been associated with poor outcomes in critical care. We conducted this review in order to inform decisions about admission and escalation of care.

Methods: Case notes were reviewed in all diagnoses of haematological malignancy admitted to our unit between 2012 and May 2017. Only patients whose malignancy was relevant to admission were included. Demographic data was collected, along with the reason for admission, provision of organ support, documented treatment limitations and outcomes (unit and hospital mortality).

Results: 75 patients were identified. The median age was 66 and the mean APACHE II score was 25. Overall ICU and hospital mortality were 42% and 62% respectively. Mortality increased with age; patients over 75 had mortality rates of 57% and 85% respectively. Lymphoma, AML and myeloma were the most common diagnoses, and pneumonia, acute kidney injury (AKI) and sepsis the most common reasons for admission. 50% of patients had level 2 documented as a ceiling of care. Diagnoses, organ support and associated mortality are shown in table 1.

Conclusions: This data corroborates several recent publications in demonstrating that outcomes in haematological malignancy are better than previously thought.

Mortality tallies closely with previous analyses from our unit. However, proportionally more patients received level 2 care. This may reflect documented ceilings of care and suggests these ceilings were appropriate.

Interestingly, some groups did significantly better than expected, including AML and myeloma, despite both groups having a mean APACHE II score of 27. The AML patients all presented with infection, most of them neutropenic. It may be relevant that neutropenia itself was not found to predict poor outcome.

RRT, previously associated with poor outcomes in haematological malignancy, was here associated with comparatively good outcomes. This may reflect the fact that 8/9 patients receiving RRT had a diagnosis of myeloma; the aetiology of AKI may be different in this group.

Certain combinations still carry a grave prognosis. These include haematological malignancy in old age, the presence of two or more organ failures, and the need for IPPV in respiratory failure.

Table 1.

Associated mortality.

Variable (number) ICU mortality (%) Hospital mortality (%)
Lymphoma (26) 50 73
Myeloma (21) 38 52
AML (15) 33 46
Pneumonia (40) 45 68
AKI (35) 54 77
Sepsis (31) 52 68
Neutropenic sepsis (19) 47 68
Neutropenia (32) 40 63
Inotropes (42) 52 71
NIV (32) 53 75
IPPV (13) 54 77
IPPV for respiratory failure (11) 55 82
RRT (9) 33 44
>1 organ failure (39) 62 82

EP.201

Stratification of critically ill cancer patients

Katherine Murphy1 and Phil Haji-Michael2

1University of Manchester, Manchester, UK

2The Christie, Manchester, UK

Abstract

Patients with cancer are increasingly admitted to the intensive care unit, with wide ranging outcomes previously reported.

Our study focussed on a subset of these patients, unplanned medical ICU admissions in patients diagnosed with a solid tumour, thereby reducing heterogeneity and producing more meaningful results. The aim was to identify factors apparent within the first 12 hours of admission that could best act as prognostic indicators in this group.

Retrospective study of all solid tumour patients with unplanned medical admissions to a single tertiary critical care unit between 2009 and 2015.

408 patients were included (median age 63 years, 55.4% male). Survival to ICU discharge, hospital discharge and 1 year was 73.3%, 58.1% and 26.2%, respectively. On multivariate analysis, the factors associated with significantly worse 1-year survival were age (Exp(B) 1.011), presence of metastases (Exp(B) 1.793), GCS ≤7 (Exp(B) 7.207), pancreatic tumour (Exp(B) 5.724), lung tumour (Exp(B) 3.942) and pneumonia (Exp(B) 5.391). Compared with the predictors of ICU survival, variation was seen only in total organ support, with more organ support directly correlated with increased mortality at this interval. ICNARC was superior to APACHE II with regard to ICU mortality probability, though both discriminated poorly (AUC 0.678 and 0.590, respectively).

Our results reiterate that survival in this group is better than previously thought. Therefore, cancer alone should not preclude ICU admission; rather, we should consider the patient's likelihood of surviving the acute illness in order to stratify treatment groups.
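The AUC values quoted for ICNARC and APACHE II measure how well each score's predicted mortality discriminates non-survivors from survivors; an AUC of 0.5 is no better than chance. As a minimal sketch of the computation using the Mann–Whitney rank formulation (the predicted risks and outcomes below are invented for illustration, not taken from the study):

```python
def auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney rank formulation.
    scores: predicted mortality risks; labels: 1 = died, 0 = survived."""
    # Assign average ranks to the scores, handling ties
    order = sorted(range(len(scores)), key=lambda i: scores[i])
    ranks = [0.0] * len(scores)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and scores[order[j + 1]] == scores[order[i]]:
            j += 1
        for k in range(i, j + 1):
            ranks[order[k]] = (i + j) / 2 + 1
        i = j + 1
    pos = [r for r, lab in zip(ranks, labels) if lab == 1]
    n_pos, n_neg = len(pos), len(labels) - len(pos)
    return (sum(pos) - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

# Hypothetical predicted risks and outcomes (1 = died)
scores = [0.9, 0.8, 0.7, 0.6, 0.55, 0.5, 0.4, 0.3, 0.2, 0.1]
labels = [1,   1,   0,   1,   0,    1,   0,   0,   1,   0]
print(round(auc(scores, labels), 2))  # 0.72
```

Equivalently, the AUC is the probability that a randomly chosen non-survivor received a higher predicted risk than a randomly chosen survivor, which is why values near 0.6 (as reported for APACHE II here) indicate poor discrimination.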

EP.202

HIV testing for patients with bacterial pneumonia admitted to a DGH ITU

Paul Flinders1 and Andrew Drummond1

1Pennine Acute NHS Trust, Oldham, UK

Abstract

Introduction: Pharmaceutical advances mean that Human Immunodeficiency Virus (HIV) is now a treatable condition, with outcomes comparable to other chronic conditions such as type 2 diabetes; indeed, many people diagnosed with the virus remain well on treatment. Despite this, a significant number of seropositive people are unaware of their infection, resulting in delayed treatment and a risk of onward transmission. Late diagnosis is a significant cause of HIV-related morbidity and mortality.

Critical care services see many patients diagnosed with 'indicator' conditions for which British Human Immunodeficiency Virus Association (BHIVA) guidance suggests HIV should be excluded as a differential diagnosis. One such indicator condition is bacterial pneumonia.

Methodology: Consecutive admissions for bacterial pneumonia to a DGH ITU, over a 7 month period between September 2016 and March 2017, were reviewed to check if HIV testing had been undertaken. If testing had been completed, the length of time elapsed from admission to testing and test outcome were recorded.

Results: 26 patients were admitted to ITU within the time frame with bacterial pneumonia. A total of 34% (n = 9) were tested for HIV post admission to ITU. A further 8% (n = 2) had been tested prior to ITU admission on the ward. Amongst those patients not tested there were no documented refusals in the notes. The mean length of time from admission to testing was 4.5 days, with the maximum length of time to testing being 13 days. One test undertaken pre-ITU admission was the only positive test.

Discussion: In total, around 60% of admissions to ITU with bacterial pneumonia were not tested for HIV, and no documented test offer was made. Given that many of these patients are intubated and ventilated, it is likely that testing is not being considered by the treating team, as opposed to the team failing to document a genuine patient test refusal. This audit did not establish whether testing was recommended to the GP on discharge.

In addition, with an average time to testing of four and a half days, testing is clearly not automatically undertaken on admission to ITU and may be being considered when the patient does not respond to treatment adequately. BHIVA suggest this approach is unnecessary, as any diagnosis of bacterial pneumonia warrants testing, not just those cases failing to respond to treatment.

EP.203

Screening for Pregnancy in Critical Care: A Regional Survey of Practice in the North of England

Monica Jackson1, Emily Frostick1 and Sarah Platt1

1Royal Victoria Infirmary, Newcastle upon Tyne, UK

Abstract

Failing to identify pregnancy in critical care patients may have significant clinical and legal consequences. We audited our practice and found a concerning number of emergency admissions of females of childbearing age had not been screened. We therefore changed our local guidelines to test all of these women on their admission blood tests. Ethical approval was sought and gained, as this practice does not involve a formal consent process. Repeated audit has shown a dramatic improvement in our rates of screening. This survey evaluated what methods are used across the region to ensure patients are appropriately tested for pregnancy and what consent process is expected.

Methods: A four question survey was sent to 12 Intensive Care Units (ICUs) across the North of England. Each question had a choice of responses to select, and a free text option. We asked whether the ICU had a policy to screen all emergency admissions of females of childbearing potential for pregnancy, details of methods they used to identify which patients should be tested, and whether they consented patients with capacity prior to testing.

Results: Responses were received from 9 out of 12 ICUs. They demonstrated that only 1 ICU routinely screened all emergency admissions of females of childbearing age for pregnancy. All others required staff to identify the need for testing on a case-by-case basis. When doing this, most (6 out of 9) said they would use only age to inform their decision to test, but some also used aspects of the history such as last menstrual period (LMP) or the patient’s perceived risk of the likelihood of pregnancy. Only 1 ICU did not consent patients with capacity, with the rest divided between always and sometimes obtaining formal consent.

Discussion: Our audit demonstrated that case-by-case identification of patients requiring pregnancy testing was unreliable. Automatic admission testing reduces the chances of undiagnosed pregnancy but this survey shows that it is not widely used, possibly due to concerns over testing without formal consent. Clinicians may be falsely reassured by aspects of the patient history regarding risk of pregnancy, yet this is still used to inform the decision whether or not to screen. We recommend ICU teams audit their local practice of pregnancy screening and consider routine admission testing to reduce the chances of missed diagnosis in the critically ill.

EP.204

The provision of theatre access for emergency surgical patients requiring an immediate intervention at an acute NHS trust and its impact on survival and hospital stay

Sean Menezes1, Barbara Ribeiro2 and Shamim Haider1

1Colchester Hospital University NHS Foundation Trust, Colchester, UK

2Queen's Hospital – Barking, Havering and Redbridge University Hospitals NHS Trust, Romford, UK

Abstract

Early intervention for surgical patients requiring an emergency laparotomy has been linked to improved outcomes. The National Emergency Laparotomy Audit (NELA) and the National Confidential Enquiry into Patient Outcome and Death (NCEPOD) set out guideline time frames within which emergency surgical patients should enter the theatre complex (TC) once a decision to operate is made. For patients requiring immediate surgery, this should be less than 2 hours (h). Our aim was to determine whether our care of these high-risk surgical patients met this standard, and its impact on survival and length of stay (LOS).

All entries in the NELA database were retrospectively examined for the 2015 and 2016 calendar years. Only those with an NCEPOD intervention classification of “Immediate” were included. Patient demographics (age, sex, and ASA grade) were collected from the NELA database, as were the times of the decision to operate and entry into the TC. The trust’s electronic patient records system was used to determine the post-operative level of care, survival to discharge, and the LOS.

A total of 348 emergency laparotomies/laparoscopy-equivalents were performed at our trust in 2015 and 2016. Only 37 patients met our criteria of an “Immediate” need for surgery. The median age was 71 (range: 32–86) and 40.5% were men. The mode ASA grade was 4 and the pre-operative POSSUM predicted mortality was 38.4% (range: 1.5–98.5%). Only 70.3% of the patients entered the TC within 2 h of the decision to operate, with a median time of 1.6 h (range: 0.25–59.5 h). Of those meeting the standard, 80.8% were admitted to critical care post-operatively, 15.4% went to the wards, and 3.8% did not survive the procedure. Only 61.5% of these patients survived to discharge, with a median LOS of 26.6 days (range: 5.8–114.5 days). Of those not meeting the standard, 81.8% were admitted to critical care and 18.2% to the wards. Of this group, 90.9% survived to discharge, with a median LOS of 24.8 days (range: 3.5–120 days).

Our trust has improvements to make to ensure optimal care for patients requiring immediate surgical intervention, as only 70.3% met the standard. However, these patients all remain critically unwell, and only 61.5% of those who met the standard survived to discharge. The trust has developed an emergency theatre standard operating procedure, and improvements in access to theatres will subsequently be re-evaluated.

EP.205

Improving post-operative care for the high-risk, emergency surgical patient

Sean Menezes1, Barbara Ribeiro2 and Shamim Haider1

1Colchester Hospital University NHS Foundation Trust, Colchester, UK

2Queen's Hospital – Barking, Havering and Redbridge University Hospitals NHS Trust, Romford, UK

Abstract

High-risk, emergency surgical patients have improved outcomes when cared for in a critical care unit (CCU). The National Emergency Laparotomy Audit (NELA) recommends that all patients with a POSSUM predicted mortality risk (PPMR) >10% be admitted to the CCU. Our acute trust has taken many steps (structured briefings, flyers, and local education and governance meetings) to improve our CCU admission rate for high-risk patients since the introduction of NELA. Our aim was to determine whether our interventions have led to better engagement with the CCU, and their impact on survival to discharge, length of stay (LOS), and 30-day mortality.

The NELA database was retrospectively studied for two 6-month periods: January to June 2014 (early group; EG), corresponding to the start of the database, and the latest period with complete data, July to December 2016 (late group; LG). Only patients with a pre-operative PPMR > 10% were included. Patient demographics were collected from the NELA database. The trust’s electronic patient records system was used to determine the post-operative level of care, survival to discharge, and CCU and hospital LOS.

In the EG, there were 98 emergency procedures, of which 48 met our criteria. In the LG, we identified 54 patients out of 108 that fulfilled our criteria. The median age of the EG was 74.0 (range: 30.2–93.7) and 50% were men. Similarly, the median age for the LG was 73.0 but 55.6% were men. The mode ASA in both groups was 3. The EG had a median pre-operative PPMR of 17.0% (range: 2.4–99.5%) while the LG’s median PPMR was 19.2% (range: 1.3–75.6%). In the EG, only 56.3% of patients were admitted to CCU but this increased to 85.2% in the LG. The median CCU-LOS was 3.0 days (d) for both groups. For the EG, 81.5% of those admitted to CCU survived to discharge with a median hospital LOS of 17.9d (range: 4.8–78.3d). In the LG, 79.6% of those admitted to CCU survived to discharge with a median LOS of 14.0d (range: 1.1–119.6d). The 30-day mortality rate for the EG was 14.6% but was 22.2% in the LG.

Our acute trust has made significant improvements in ensuring an appropriate level of care for our high-risk patients. Our department’s interventions have been successful in increasing engagement with the CCU. The lack of improvement in mortality rates in the LG requires further examination.

EP.206

Does the use of post-operative Enhanced Care Areas increase 30-day mortality in high-risk patients undergoing emergency abdominal surgery?

James Selby1, Stefan Gurney1 and Tamsin Rope1

1London North West Healthcare NHS Trust, London, UK

Abstract

Aims: National guidelines suggest patients undergoing emergency abdominal surgery with a calculated mortality risk >10% using the P-POSSUM score (1) should be admitted to critical care post-operatively (2). Limited critical care bed capacity has led to the creation of post-operative enhanced care areas (ECA). We performed a retrospective audit to examine the local utilisation of the ECA in high-risk patients and its impact on 30-day mortality.

Methods: Local NELA data were obtained for the period January 2015 to October 2016. The post-operative P-POSSUM score was used to stratify patients according to predicted mortality. 30-day mortality and median P-POSSUM score were compared between high-risk patients according to post-operative destination. The P-POSSUM score was then combined with 30-day mortality to give an observed/expected (O/E) mortality ratio, using methodology available on the NELA website (3).
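The O/E ratio combines each patient’s predicted risk with the observed outcome: the observed number of deaths is divided by the expected number, taken as the sum of the individual predicted mortality probabilities. This is the standard construction (the NELA methodology may differ in detail); a minimal sketch in Python, using illustrative patient data rather than audit data:

```python
def oe_mortality_ratio(predicted_risks, died):
    """Observed/expected mortality ratio: observed deaths divided by the
    expected count, i.e. the sum of individual predicted probabilities."""
    observed = sum(died)             # died[i] is 1 if patient i died, else 0
    expected = sum(predicted_risks)  # predicted_risks[i] is a probability in [0, 1]
    return observed / expected

# Illustrative five-patient cohort (not the audit's data):
risks = [0.52, 0.23, 0.15, 0.40, 0.20]   # P-POSSUM-style predicted risks
outcomes = [1, 0, 0, 1, 0]               # 2 observed deaths
print(round(oe_mortality_ratio(risks, outcomes), 2))  # 2 / 1.50 = 1.33
```

A ratio below 1 indicates fewer deaths than predicted, as reported for both groups in the results.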

Results: 294 patients underwent emergency abdominal surgery during the audit period. 132/294 patients (45%) had a mortality risk >10%, compared to 41% nationally (3). 60/132 high-risk patients were admitted to critical care and 71/132 to the ECA. Critical care patients had a significantly higher 30-day mortality than those admitted to the ECA (38% vs. 11%, p < 0.005, Fisher’s exact test), and a significantly higher median P-POSSUM score (52 vs. 23, p < 0.001, Mann-Whitney test). The O/E mortality ratio was 0.36 in the ECA and 0.65 in critical care.

Conclusion: Locally, the majority of high-risk patients were admitted to the ECA post-operatively. The ECA group had a significantly lower median P-POSSUM score and mortality rate compared to the critical care group. The O/E mortality ratio was low in both patient groups, particularly in the ECA group. Taken together, this would suggest clinicians are appropriately identifying the highest risk patients for critical care and that using ECAs have not compromised 30-day mortality. Whilst we are reassured by this data, we accept that mortality is a crude indicator of quality of care and in future work we aim to examine the impact on length of stay and patient morbidity.

References

  • 1.Prytherch DR, et al. POSSUM and Portsmouth POSSUM for predicting mortality. Physiological and Operative Severity Score for the enUmeration of Mortality and morbidity. Br J Surg 1998; 85: 1217–1220. [DOI] [PubMed]
  • 2.The Higher Risk General Surgical Patient: towards improved care for a forgotten group. London: Royal College of Surgeons of England.
  • 3.NELA.org.uk.

EP.207

ICNARC score versus P-POSSUM score for predicting mortality in emergency laparotomies

Samantha Fryer1 and Jerome Mccann1

1Warrington Hospital, Warrington, UK

Abstract

Introduction: The National Emergency Laparotomy Audit (NELA) examines the delivery of emergency bowel surgery in hospitals within England and Wales.

The second NELA report (2016) commented on the pre-operative P-POSSUM predicted mortality versus the actual observed mortality. It concluded that at low scores (0–15%), the score proved useful in determining the need for consultant presence in theatre and the need for post-operative Intensive Care Unit (ICU) admission. At higher scores the correlation of POSSUM with mortality was poor, and it should therefore not be used.

The ICNARC score is well validated for predicting mortality in ICU patients. It is similar to POSSUM, although POSSUM is weighted more towards surgical findings (e.g. malignancy) and ICNARC more towards physiological parameters (e.g. lactate). We speculated that the ICNARC score might correlate better with mortality in this emergency laparotomy cohort than POSSUM.

Method: From the NELA database we identified patients who underwent emergency laparotomies in 2014–2016 and were admitted to the ICU post-operatively. We identified the 30-day mortality of 166 patients, and collected ICNARC, pre-operative and post-operative POSSUM scores for these patients.

Results: We found a significant correlation between ICNARC and both POSSUM scores.

ICNARC correlates with pre-operative POSSUM with a Pearson coefficient of 0.525 (p = 0.006; p < 0.01), and with post-operative POSSUM with a Pearson coefficient of 0.609 (p = 0.001; p < 0.01).

ICNARC therefore correlates more closely with post-operative POSSUM than pre-operative POSSUM (Pearson coefficient 0.609 vs. 0.525).
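The Pearson coefficients reported here can be computed directly from the paired scores. A minimal sketch with made-up score pairs (not the study data):

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson product-moment correlation between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Illustrative paired severity scores (not study data):
icnarc = [12, 18, 25, 31, 40]
possum = [10, 20, 22, 35, 38]
print(round(pearson_r(icnarc, possum), 3))
```

Coefficients near 1 indicate the two scoring systems rank patients similarly, which is the pattern the study reports.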

Conclusions: The estimate of risk of death provided by P-POSSUM is reasonably accurate below around 15%. However, above this level, P-POSSUM tends to overestimate risk.

P-POSSUM is still useful for identifying patients who need resources such as critical care. However, we urge caution in relying on P-POSSUM to guide clinical decision making at high levels of predicted mortality, as it overestimates the risk of death by a factor of approximately two.

Post-operative P-POSSUM predicted mortality has a closer correlation with the ICNARC score than pre-operative.

We had hoped that ICNARC would be more useful at the higher range of predicted mortality. Our study showed close correlation with P-POSSUM, and we can therefore conclude that it has the same limitations as P-POSSUM.

EP.208

Ensuring theatre access for patients requiring urgent surgical intervention at an acute district general hospital

Barbara Ribeiro1, Sean Menezes2 and Shamim Haider2

1Queen's Hospital – Barking, Havering and Redbridge University Hospitals NHS Trust, Romford, UK

2Colchester Hospital University NHS Foundation Trust, Colchester, UK

Abstract

According to the latest National Emergency Laparotomy Audit (NELA) report (2016), more than 30,000 patients undergo an emergency laparotomy each year in NHS hospitals in England and Wales. The NELA has established guidance recommending specific timelines for surgical patients to enter theatres once the surgical team has made a decision to operate, aiming to provide high quality care. For patients requiring an urgent intervention, the time between decision and procedure should be between 2 and 6 hours (category 2A).

We examined whether this group of emergency surgical patients admitted to an acute district general hospital had their surgical interventions within the timeline recommended by NELA and its impact on the hospital length of stay (LOS) and survival.

We retrospectively examined all patient entries from the 2016 NELA database; only those classified as “Urgent” (category 2A) under the NCEPOD (National Confidential Enquiry into Patient Outcome and Death) classification were included. Patient demographics were collected from NELA. The surgical details and timing, level of care after surgery, LOS and survival were obtained from NELA and the hospital electronic records.

In 2016, 187 emergency laparotomies/laparoscopy-equivalents were performed at our trust. Of these, 83 (44.4%) were classified as “urgent” per NCEPOD and were included. The median age was 71 years (range 28–93) and there were 43 men (51.8%) and 40 women (48.2%). The mode ASA grade was 3. For this group of patients, the pre-operative POSSUM predicted mortality was 8.6% (0.8–75.5%). The median waiting time to enter the emergency theatre was 2.5 hours (0–22.5 hours). 86.7% of cases entered the emergency theatre within 6 hours of the decision to operate. Of these, 55.6% were admitted to critical care post-operatively, with the rest admitted to the wards. 88.9% of these patients were discharged from hospital, with a median LOS of 11.4 days (0.7–53.5 days). For the group who waited longer than 6 hours for their urgent procedure (13.3%), 54.5% were admitted to critical care. Of this group, 90.9% survived to discharge, with a median LOS of 15.5 days.

Our results show that our trust has been successful in meeting the NELA guidance, ensuring that the majority of patients requiring urgent surgical intervention received the appropriate level of care within the target timeframe. Those not meeting the guidance had a longer hospital LOS. Post-operative monitoring in critical care was common, which may be reflected in the high survival rates.

EP.209

Making Intubation Safer Together – a cross specialty program to improve the safety of out of theatre intubation

Andrew Claxton1, Temilola Erinle1, Hayley Millar1, Aine Keating1, Gareth Hardy2, Clare Bayliss1, Oliver Pratt1, Paul Ferris1, Sarah Stibbards1 and Daniel Horner1

1Salford Royal Foundation Trust, Salford, UK

2Blackpool Teaching Hospitals Foundation Trust, Blackpool, UK

Abstract

Introduction: Endotracheal intubation outside the operating theatre is a frequent occurrence. There is increasing recognition that this procedure carries a high complication and adverse event rate (1,2). Despite national recommendations (1), the cross specialty nature of this issue often leads to poor governance and limited quality/safety assurance.

We sought to identify the frequency and baseline safety of this procedure at a large major trauma and neurosciences centre. This information was used to introduce a targeted improvement bundle with the aim of reducing procedural complications and embedding excellence.

Method: A single centre, phased, prospective service evaluation of all emergency airway events occurring outside of operating theatres during 2015 to 2017.

We initially performed a prospective paper-based data collection exercise using the ANZEDA registry airway template for all relevant endotracheal intubation events. We then introduced a rolling programme of online education, in-situ high fidelity simulation, multi-specialty debrief and divisional governance updates. Several QI measures were introduced, including standardised electronic data collection templates (based on the ANZEDA template), per-patient drug labels, standardised drug boxes, kit layout and prefilled syringes.

A further service evaluation was conducted after a year of intervention to assess uptake, impact and comparative outcomes.

Results: The initial service evaluation in 2015 collated data on 79 intubation episodes over 8 weeks (1.3/day). The ED/ICU split was 39%/61%. The first pass success rate was 81.7%. The overall operator-documented complication rate was 40.5%.

Following the intervention period, a subsequent service evaluation was conducted over 3 months in 2017. Data were collated from 123 intubation episodes (1.4/day). The ED/ICU split was 65%/35%. The first pass success rate was 89.4%. The overall operator-documented complication rate was 19.5%. Complication rates were equal across the ED and ICU, but with differing key themes (hypotension and cardiovascular collapse versus desaturation/hypoxia, respectively). On subgroup analysis, complication rates were significantly lower across all areas of airway management.

Evaluations of the interventions in isolation were strongly positive. The standardised electronic reporting form was used in the second phase to record 70% of intubation episodes, implying good uptake across the trust. The rolling programme of high fidelity in-situ simulation received consistently excellent feedback.

Conclusion: Following a series of interventions, we have seen an improved first pass success rate and significant reduction in complication rates for emergency intubation across all clinical areas. Furthermore, we have established an ongoing governance program in order to embed this change and develop excellence within the future service.

EP.210

Cricoid Pressure in Rapid Sequence Induction on the Intensive Care Unit; a Pilot Teaching Project for ICU Nurses

Harry Wadman1, Matthew Day1

1Great Western Hospital, Swindon, UK

Abstract

During emergency intubations on the Intensive Care Unit (ICU), a rapid sequence induction (RSI) is often used. During RSI, cricoid pressure is applied to minimise the risk of passive regurgitation of gastric contents and aspiration of those contents into the bronchial tree. We found that many of the nurses on the ICU were not confident in applying cricoid pressure during RSI; in particular, they were unsure of the anatomy of the cricoid cartilage and of the correct pressure needed. We therefore decided to run a pilot teaching project on the ICU at the Great Western Hospital. We designed pre-teaching and post-teaching questionnaires, each with 6 MCQs on different aspects of cricoid pressure, and also asked the nurses to rate their confidence in applying cricoid pressure before and after the teaching session. We then ran 20-minute teaching sessions on the ICU, with 2–3 nurses per session, comprising an introductory theory-based discussion followed by use of a medical model neck to illustrate the anatomy of the cricoid cartilage and surrounding structures. The model neck also allowed the nurses to practise applying cricoid pressure in a non-emergency situation. The pre-teaching questionnaire demonstrated a gap in understanding of when, why and how cricoid pressure is used, with a mean score of 3.95 out of 6 (n = 18). More importantly, the mean score for “confidence in applying cricoid pressure” was only 3.0 out of 5 (n = 18). The nurses completed the post-teaching questionnaire 2 weeks after the sessions; this showed an improvement in the mean MCQ score to 4.95 out of 6 (n = 16) and in confidence to 4.38 out of 5 (n = 16). All of the nurses said they found the teaching useful, and all would like more teaching delivered in a similar style.
This pilot teaching project has shown that understanding of the use of cricoid pressure in RSI among ICU nursing staff can be improved using small hands-on teaching sessions and equipment that most units will already have. It also dramatically increased the nursing staff’s confidence in applying cricoid pressure. We hope to incorporate it into the nursing staff induction on the ICU, thereby continuing its contribution to patient safety and addressing an essential patient safety and medico-legal aspect of airway management.

EP.211

Awake Double Lumen Endotracheal Tube Intubation in an ICU Patient with Difficult Airway and Bronchopleural Fistula

Qing Yuan Goh1 and Andrew Kong1

1Department of Anaesthesia, Singapore General Hospital, Singapore, Singapore

Abstract

Our patient was an elderly gentleman who had undergone an open hepatic sectionectomy for hepatocellular carcinoma. His operation was complicated by an infected biloma and a right pleural effusion with right lung abscess. After percutaneous drainage of these collections, he was admitted to the ICU with septic shock and ARDS, and required invasive mechanical ventilation. He developed a right bronchopleural fistula (BPF), which persisted despite weaning him from ventilation.

He had previously documented difficult tracheal intubations (Grade 4 with direct laryngoscopy, and Grade 2b with a BURP manoeuvre on another occasion).

Five days after leaving the ICU, he developed increasing respiratory distress due to left pneumonia and the right BPF.

On re-admission to the ICU, he was tachypnoeic but fully conscious and cooperative. While continuing high flow oxygen via facemask, his upper airway was anaesthetised with 10% lignocaine sprays (5–6 aliquots) and a single transtracheal injection of 3 ml of 2% lignocaine. Direct laryngoscopy was performed with the patient in the semi-recumbent position. He tolerated this inspection well without any sedation. A grade 3 larynx was visualised. Laryngoscopy repeated with a McGrath video-laryngoscope afforded a clear view of the vocal cords. After spraying lignocaine onto the vocal cords, a 37Fr left-sided double-lumen tube (DLT) was inserted and its position confirmed by fibreoptic bronchoscopy. The left lung was isolated and airway pressure release ventilation (APRV) started on that side. With spontaneous breathing maintained, supplemental oxygen (8 L/min) was insufflated via a 10F airway suction catheter through the tracheal lumen.

ABG values improved and APRV to the left lung was carried out for 30 hours before CO2 retention and respiratory acidosis required mechanical ventilation to be directed to both lungs. Pressure support ventilation with CPAP permitted further weaning. After another 6 hours, the patient was extubated to a 50% Venturi mask. The next day he was discharged to the ward where his clinical improvement continued despite developing a left loculated pneumothorax, requiring temporary drainage. He was subsequently discharged home on 2L/min nasal O2, with the right chest tube connected to a Heimlich valve and the biloma drain in-situ. At outpatient review, he only required oxygen supplementation on exertion.

Awake intubation should be considered in a critically-ill patient especially with a difficult airway. The use of a DLT allows independent lung ventilation. In our patient, this permitted alveolar recruitment in one lung, a vital component of treating ARDS, and by preserving spontaneous breathing, we also minimised the deleterious effects of positive pressure ventilation on the contralateral BPF.

EP.212

Laryngoscopy Bed Space Charts on ICU

Beth McElroy1, Chris Gough1 and Tim Cook1

1Royal United Hospital, Bath, Bath, UK

Abstract

Introduction: The NAP4 study found that the degree of harm sustained by patients as a result of airway complications, such as accidental extubation, was significantly higher in the ICU than in other areas of the hospital, despite only 20% of all airway incidents occurring in that setting. Re-intubation was commonly difficult and rescue measures unsuccessful. It was recommended that units develop a system to identify patients at risk of an airway event in order to reduce possible harm. We developed a bed space chart to easily identify patients at risk of an airway event.

Method: We sought to ascertain whether ICU staff could quickly recognise patients with ‘difficult airways’ by identifying their laryngoscopy grade. We asked 30 staff members (15 non-clinicians, 10 clinicians with airway skills and 5 clinicians without airway skills) to find this information. We then developed an airway card for each bed space displaying this information. As a subsequent measure of improvement, we audited compliance with completion of these airway cards at 2, 4, 6 and 10 weeks after implementation. Their use was promoted through bulletins at handover and through posters around the unit detailing their importance.

Results: The mean time taken to identify the intubation grade was 90.2 seconds. Non-clinicians took longer to find this information (134.3 seconds) than clinicians without airway skills (52.2 seconds) and clinicians with airway skills (42.7 seconds). Once the airway card was implemented, compliance was 66%, 57%, 83% and 71% at 2, 4, 6 and 10 weeks respectively.

Discussion: In the event of an unexpected extubation on the ICU, it would potentially fall to non-clinical staff to review the notes for airway information whilst the clinician made preparations to manage the threatened airway. We found that non-clinical staff are less familiar with where to find this information, as it is not their role to intubate or to document it. Having information regarding previous laryngoscopy grade immediately available could prove vital in an emergency.

Compliance with completion of the charts varied and was likely influenced by the rotation of doctors. Doctors reported that they found the charts helpful particularly in the event of planned manoeuvres such as proning or during laryngoscopically assisted insertion of nasogastric tubes. The presence of the charts increased awareness of and education about what constitutes a difficult airway on the unit.

EP.213

Can Advanced Critical Care Practitioners provide safe advanced airway management?

Gavin Denton1, Nitin Arora1, Marion Palmer1, Simon Giles1 and Daniel Higgins1

1Critical care, Heartlands Hospital, Heart of England Foundation Trust, Birmingham, UK

Abstract

Introduction: Advanced airway management (AAM) may be delivered by Advanced Critical Care Practitioners (ACCP) across the hospital, in the context of a consultant led/supervised support structure. There are limited data on the provision of AAM by ACCPs in the UK. The aim of this audit is to describe tracheal intubations and associated adverse events undertaken by ACCPs at a large NHS Trust.

Method: We included consecutive tracheal intubations outside of the operating theatre between December 2016 and July 2017. Both rapid sequence inductions (RSI) and intubations during cardiac arrest were included. Data were prospectively collected using a web-based anonymised form. Descriptive statistics were applied in the analysis of data.

Results: The audit period recorded 241 intubations. Most were RSI intubations (n = 204, 84.6%); the remainder were intra-arrest intubations (n = 37, 15.3%). The majority of cases were intubated by ACCPs (n = 174, 72.2%): 77% by qualified ACCPs (n = 134) and 23% by trainees (n = 40). For the 144 RSIs performed by ACCPs, first pass success (FPS) was 89.6% (n = 129). RSI was performed more frequently by qualified ACCPs than trainee ACCPs (107 vs. 37 events), and the FPS rate differed between groups (qualified n = 98, 91.6% vs. trainee n = 29, 78.4%).

There were no adverse events during the majority of the 144 ACCP-delivered RSIs (n = 119, 82.6%). Twenty-five adverse events were observed: hypoxia (n = 12, 8.3%) and hypotension (n = 10, 6.9%) were the most common, followed by recognised oesophageal intubation (n = 4, 2.7%) and cardiac arrest (n = 3, 2.1%). There were no unrecognised oesophageal intubations.

Discussion: The Difficult Airway Society recognises that repeated, failed attempts at laryngoscopy increase the risk of adverse events. A meta-analysis of FPS for RSI in the emergency department considered 84% the minimum standard.1 The largest published UK data sets on RSI in the critically ill identified FPS of 82% for emergency medicine doctors and 92% for anaesthetists.2
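Comparisons against the 84% benchmark use point estimates; a confidence interval around a first-pass success rate indicates how precise such an estimate is. A minimal sketch using the Wilson score interval (our illustration, not a method used in the audit):

```python
from math import sqrt

def wilson_interval(successes, n, z=1.96):
    """Wilson score interval for a binomial proportion (95% by default)."""
    p = successes / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return centre - half, centre + half

# Reported first-pass success for ACCP-delivered RSIs: 129 of 144 (~89.6%)
low, high = wilson_interval(129, 144)
print(round(low, 3), round(high, 3))
```

With n = 144, the interval spans roughly ten percentage points, so benchmark comparisons on samples of this size carry appreciable uncertainty.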

Conclusion: In this audit of ACCP-delivered tracheal intubation, qualified ACCPs working within a framework of appropriate clinical supervision were found to perform tracheal intubation to a standard comparable to that of doctors working in intensive care or emergency medicine.

References

  • 1.Park L, Zeng I, Brainard A. Systematic review and meta-analysis of first-pass success rates in emergency department intubation: Creating a benchmark for emergency airway care. Emergency Medicine Australasia 2016; 29: 40–47. [DOI] [PubMed]
  • 2.Stevenson AGM, Graham CA, Hall R, et al. Tracheal intubation in the emergency department: the Scottish district hospital perspective. Emerg Med J 2007; 24: 394–397. [DOI] [PMC free article] [PubMed]

EP.214

Preventing Chest Drain Guidewire Retention using the WireSafe: A pilot clinical simulation RCT

Claire Malcolm1, Caroline Jarman1, Blessy Babu1, Rqia Zouitene1, Peter Young1 and Maryanne Mariyaselvam1

1The Queen Elizabeth Hospital NHS Foundation Trust, Kings Lynn, UK

Abstract

Background: Retained objects are considered an unacceptable and preventable never event in the NHS. Training, two-person procedures and rigorous documentation have not stemmed the rising incidence.1 In 2017, the Royal College of Emergency Medicine issued a warning about Seldinger guidewire procedures, which are responsible for 50% of never events in the emergency department.2 The WireSafe is being implemented through an NHS England programme3 for central venous catheter insertion as an engineered solution, forcing recognition of the error and removal of the guidewire at a critical moment whilst not impacting adversely on procedure performance. The WireSafe is a box containing the stitch, forceps and dressings, with a simple lock mechanism that requires the guidewire to open it. Pictorial instructions are printed on the underside. Forced-error simulation, as used in other high risk industries, is now used in healthcare.1 We tested the hypothesis that the WireSafe, in a forced-error simulation study, would be efficacious in preventing guidewire retention when used by naive clinicians.

Methods: With IRB approval and written consent, doctors experienced in independent chest drain placement were randomised to standard practice or WireSafe groups. All were presented with a scenario whereby a colleague who had inserted the Seldinger chest drain had been called away to an emergency. The manikin model had the chest drain inserted with the guidewire left intra-luminally, visible through the transparent tubing and just retrievable with forceps from the hub. Participants were asked to safely complete the procedure, stitching, securing, dressing and attaching to an underwater seal drain. No guidewire was present on the trolley or in the sharps bin.

Results: The WireSafe prevented guidewire retention: guidewires were retrieved in 0% of the standard group vs. 100% of the WireSafe group (n = 10, p < 0.05, Fisher’s exact test). In the control group, all participants completed suturing, dressing and drain connection, and returned the patient to the ward with the guidewire in situ. In the WireSafe group, participants attempted to secure the drain in place; being unable to access the equipment prompted a search for the guidewire, leading to the realisation of its intra-luminal location. All wires were safely removed by forceps, or by clamping and drain removal. All participants were able to use the WireSafe without training. A structured questionnaire indicated the WireSafe improved practice in terms of guidewire safety, convenience, and sharps/wire disposal safety.

Conclusions: The WireSafe was 100% successful in preventing the never event of chest drain guidewire retention alongside facilitating fixation and sharps disposal.

EP.215

Building ICU Teams: The Five Dysfunctions of a Team

Rachel Quinn1 and Ellen Martyn1

1Warrington Hospital, Warrington, UK

Abstract

Introduction: In his book “The Five Dysfunctions of a Team”, Patrick Lencioni describes what must be overcome to make a team work. His five dysfunctions are absence of trust, fear of conflict, lack of commitment, avoidance of accountability and inattention to results.

We thought we could use his framework to assess the effectiveness of three clinical teams within the Warrington Intensive Care Unit, to see if we could identify areas where we could improve performance.

Methods: The three teams were: 7 ICU consultants, 8 Band 7 sisters, and 5 members of our Critical Care Outreach team, the Acute Care Team (ACT).

A questionnaire associated with the book includes 38 statements focusing on the five dysfunctions. Each team member scored them from 1–5 depending on how strongly they agreed with the statements.

Results: The ICU nurses scored 4.2 (high), the consultants 3.55 (medium) and the ACT 4.05 (high). We also scored the five dysfunctions separately: commitment, trust, results and conflict all scored high, but accountability scored only medium.

Discussion: In his book, Patrick Lencioni makes suggestions for improving poor accountability, chiefly by confronting peers about problems in their respective areas of responsibility. Since this review, the ICU staff have addressed the issue of accountability. We have developed seven multi-professional teams, each including doctors and nurses:

Patient Safety and Risk assessment Team, Infection Control team, Quality Team, Education/Simulation, Performance, Equipment, Patient Experience/Rehabilitation

Team leaders from each group will meet up on a quarterly basis at a new business meeting, set a strategy with goals and standards, and review results.

We hope this more business-like approach will address the shortfall uncovered by our team assessment.

In the Francis Report a lack of accountability was highlighted as a problem in a poorly performing NHS hospital. “The five dysfunctions of a team” could be used as a tool to assess accountability within any NHS organisation and we would recommend this approach to other ICUs.

EP.216

Extending the use of the arterial transducer set using the non-injectable connector

Emily Hodges1, Julie Allen1, Paul Mallet1, Mark Blunt1, Peter Young1 and Maryanne Mariyaselvam1

1The Queen Elizabeth Hospital NHS Foundation Trust, Kings Lynn, UK

Abstract

Introduction: Manufacturers and the Royal College of Nursing recommend that arterial transducer sets should be changed every 4 days,1 and NICE supports application of an antimicrobial dressing at the arterial cannula site for 7 days.2 Changing sets requires a concurrent dressing change, risking blood contamination, cannula loss and infection, and increasing nursing time and the cost of disposables. The non-injectable arterial connector (NIC) attaches to the two luer hubs (sampling and transducer) of a standard arterial set, prevents accidental injection and bacterial contamination, and creates a closed arterial sampling system.3 Using NIC connectors, we microbiologically examined arterial giving sets to determine the contamination pattern during extended use.

Methods: Following institutional approval, we aseptically collected arterial transducer sets (Codan, Forstinning, Germany) from critically ill patients after clinical use. All sets were continually protected during clinical use by NIC connectors on both luer hubs. For analysis, the external arterial line was sterilised with alcohol between the arterial cannula connector and the sampling port, then cut with sterile scissors. The saline within the transducer set (10 ml) was collected and an aliquot (0.2 ml) plated onto growth media and incubated for 48 hr. Bacterial contamination was defined as >3 colony-forming units (CFU), the laboratory standard. This study was conducted alongside an economic cost analysis.

Results: Eight samples were collected from arterial sets used for 1–3 days and thirteen from sets used for 4–7 days. No contamination was found in either the standard or the extended-use samples. Analysis of annual usage found that arterial transducer set use would fall from 1144 to 584, saving £9,749 per year. Additionally, assuming each set change takes two nurses 5 minutes, this represents an annual time saving of 97 hours.
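As a back-of-envelope check on the stated savings, the sketch below uses only the figures quoted above; the abstract's 97-hour figure may include overheads beyond the bare 5-minute changes, so the simple product comes out slightly lower:

```python
# Figures quoted in the abstract: annual set usage falls from 1144 to 584,
# saving £9,749; each set change is assumed to take two nurses 5 minutes.
sets_avoided = 1144 - 584                   # 560 fewer set changes per year
implied_cost_per_set = 9749 / sets_avoided  # ~£17.41 consumable cost per change
nurse_minutes_saved = sets_avoided * 5 * 2  # 5 minutes each for 2 nurses
nurse_hours_saved = nurse_minutes_saved / 60  # roughly 93 nurse-hours
```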

Conclusion: Our preliminary findings suggest that it may be clinically acceptable to extend the use of an arterial transducer set to at least 7 days when using the NIC at the sampling and transducer ports, thereby improving patient care and providing cost savings. Adoption of this practice across the NHS would equate to an estimated £4m annual saving.

EP.217

Improving airway management on the intensive care unit using a combined clinical and human factors approach

Steven Gill1, Thearina De Beer1, Umair Ansari1,2, Jeena Velzen3, Sarah Atkinson3, Alexandra Lang3, Giulia Miles2, James Shilston1, Reema Patel1 and Bryn Baxendale1,2

1Nottingham University Hospitals NHS Trust, Nottingham, UK

2Trent Simulation and Clinical Skills Centre, Nottingham, UK

3Human Factors Research Group, University of Nottingham, Nottingham, UK

Abstract

Introduction: The risk to patients on the ICU from complications following airway interventions is well documented in NAP4, amongst other studies. We noted locally that this risk may be exacerbated by a regularly rotating cohort of doctors in training with varying airway skills, and a large nursing team of mostly junior nurses with limited experience of assisting with airway interventions. We sought to better understand current clinical and non-clinical practice and to re-design critical care airway training.

Methods: A team with clinical, educational and human factors expertise undertook a multi-faceted investigation. A hierarchical task analysis was used to define the process of managing an unplanned extubation and resecuring the airway in accordance with national guidance. This was used to review input from the "shop floor team", gathered via structured interviews conducted and analysed by the University of Nottingham Human Factors Research Group. A questionnaire appraisal of staff knowledge of airway equipment was also completed. The final information-gathering exercise was a simulated airway emergency conducted on all three of our main critical care units. These emergencies were managed by the clinical team on duty using the standard airway equipment available. Video recordings were reviewed by clinical and human factors experts.

Results: Common factors were identified during each phase of the project. A strong reliance on senior clinical staff with advanced airway skills to both perform technical tasks and lead the clinical team seemed key to successful outcomes; however, the risk of task fixation and cognitive overload was high in these individuals. A varied understanding of airway equipment was noted, in particular amongst junior members of both medical and nursing staff, impacting the assistance provided to the airway practitioner. Recognition and allocation of the different key roles was important; when roles were not fulfilled, duplication of, or distraction from, tasks was common. Effective communication was essential, with examples of good practice and areas for improvement noted. Ergonomic difficulties relating to management of people, equipment and the clinical environment were common during the simulation exercises.

Outcomes: A modular training program covering recognition of airway problems, "airway first aid" and calling for help, definitive airway management, airway management assistance and team leadership has been designed using a mastery learning model and is being implemented within our critical care units, incorporating pre-existing learning opportunities. All modules include both the clinical and non-clinical learning points from our study.

EP.218

Large radio-occult pneumothorax identified by physiotherapist-initiated lung ultrasound – a case report

Alex Hemsley 1

1Newcastle upon Tyne Hospitals, Newcastle upon Tyne, UK

Abstract

Background: Point of care lung ultrasound (LUS) is increasingly being utilised to aid diagnosis, with many areas, including A&E, now using it routinely. More recently, LUS has started being utilised within the critical care environment (1). Physiotherapists are key members of the critical care MDT, providing specialist respiratory assessment and treatment to critical care patients. This case review highlights the advantages of physiotherapists performing LUS to guide physiotherapy assessment and treatment on critical care.

Methods: A patient admitted to the critical care unit following polytrauma was failing to wean, with high ventilation pressures and oxygenation requirement (SIMV, TV 550, PEEP +10 and high FiO2). A detailed assessment by the physiotherapist identified reduced air entry at the left base; however, repeat chest x-ray was clear. Despite optimising ventilation, positioning and recruitment, the patient failed to improve. The physiotherapist therefore conducted a LUS, which indicated a left pneumothorax (lack of pleural sliding and barcode sign on M-mode at the left upper anterior, mid and posterolateral points). A repeat chest x-ray was completed, which again did not demonstrate a pneumothorax.

Results: The next day on the ward round, the patient was due to have a head CT; given the LUS result and the lack of progress in weaning to spontaneous ventilation and reducing O2 requirement, a CT chest was also completed. The CT demonstrated a large left-sided pneumothorax. A chest drain was inserted. The patient then weaned to spontaneous ventilation and was discharged to the ward 18 days later.

Conclusion: This case study demonstrates the benefit of physiotherapists utilising lung ultrasound in addition to traditional assessment tools. It also highlights that reliance on a single imaging modality within critical care may lead to an inaccurate diagnosis. This is supported by a study comparing chest x-ray, LUS and CT, which demonstrated that LUS had better diagnostic performance than chest x-ray in the critical care population (2).

References

  • 1. Volpicelli G, Elbarbary M, Blaivas M, et al. International evidence-based recommendations for point-of-care lung ultrasound. Intensive Care Medicine 2012; 38: 577–591.
  • 2. Xirouchaki N, Magkanas E, Vaporidi K, et al. Lung ultrasound in critically ill patients: comparison with bedside chest radiography. Intensive Care Medicine 2011; 37: 1488–1493.

EP.219

ICU physician-delivered point of care renal tract ultrasound in Acute Kidney Injury

Prashant Parulekar1, Ed Neil-Gallacher1 and Alex Harrison1

1Royal Sussex County Hospital, Brighton, UK

Abstract

Background: Acute kidney injury (AKI) has an incidence of 57% in critically ill patients.1 Ultrasound is a first-line investigation to help determine the cause of AKI. Basic renal tract imaging forms part of the syllabus for Core Ultrasound in Intensive Care (CUSIC). Focused ultrasound of the kidneys by intensivists to assess for hydronephrosis can potentially improve patient outcomes through quicker diagnosis and intervention, reduce the need to transfer critically ill patients to the ultrasound department, and reduce the workload of the radiology department. We performed a feasibility study of bedside renal tract ultrasound, performed by intensivists in critically ill patients, with assessment for the presence of hydronephrosis.

Methods: Patients with AKI admitted to a tertiary critical care unit were prospectively scanned by one of two critical care physicians. Patients were scanned in whichever position they were currently being nursed in ICU, using a standard critical care ultrasound machine (SonoSite EDGE). Longitudinal and transverse views of both kidneys were obtained and archived, plus pelvic views if evidence of free fluid or a distended bladder were seen. All images were reviewed by a Radiologist for adequacy and accuracy of findings. Hydronephrosis, any other significant abnormalities and duration of scan were documented.

Results: 30 patients were included in the study. Adequate views of both kidneys were possible in 27 patients (90%). The reasons for inadequate views were drains/dressings from surgery in 2 patients and pain in 1 patient. No images deemed adequate by the scanning intensivist were rejected as inadequate by the reviewing Radiologist (who was blinded to the prior interpretation). Only 1 patient had features of hydronephrosis. 6 patients had abnormalities (or inadequate views) noted on bedside imaging sufficient to prompt formal scanning by a Radiologist. Median scan duration was 7 minutes.

Discussion: This study shows that ICU physicians with appropriate training and experience can perform point of care renal tract ultrasound to the standard required for CUSIC. The majority of ICU patients can be scanned in bed with no change in their position, and adequate views obtained of both kidneys to exclude hydronephrosis.

This study supports teaching Intensive Care trainees to perform focused renal ultrasound in AKI. We aim to study this further, and determine the experience required to achieve competency. The low incidence of hydronephrosis noted suggests that trainees performing only a small number of scans may not see a case of hydronephrosis.

EP.220

Focused Bedside Cardiac Ultrasonography – which views are we reliably obtaining and what is the quality?

James Ward1, Theophilus Samuels1 and Matyas Andorka1

1Surrey and Sussex Healthcare NHS Trust, Redhill, UK

Abstract

Introduction: Point-of-care ultrasound allows clinicians to perform focused scans to obtain further information that can help with clinical decisions. Focused Intensive Care Echocardiography (FICE) is beneficial in critically ill patients, especially when formal BSE-accredited trans-thoracic echocardiography is not immediately available; in our department we have two clinicians who are FICE mentors.

ICU patients present inherent difficulties in acquiring acceptable cardiac sonographic views. Our centre conducted a retrospective observational study to determine which views were achieved most often, along with their quality, and to compare our findings with those of a recent study that used high-fidelity simulation to demonstrate that the subcostal view was achieved the quickest (Bowcock et al.).

Methods: Our unit uses the X-Porte ultrasound device (Sonosite, UK) for bedside ultrasonography. During the period June–August 2017, we recorded the scans using our purpose-designed website (https://www.easy-icusonography.com). The data included basic information on haemodynamics, the views obtained and the required questions for the FICE protocol.

Results: Of a total of 27 scans, 3 (11.1%) were rated good, 14 (51.9%) acceptable and 10 (37%) poor. The views successfully obtained were: 27 (100%) sub-costal (SC), 22 (81.5%) parasternal long axis (PLAX), 21 (77.8%) apical 4-chamber (A4Ch) and 19 (70.4%) parasternal short axis (PSAX) views (see table 1).

Discussion: A recent study from a quaternary centre reported that 52/60 (86.6%) of views were good or adequate,1 greater than in our department (63%). The sub-costal view proved the most reliably available view in our department, which not only fits well with its recommended use in the cardiac arrest algorithm but is also the quickest for trainees to obtain, as shown in the Bowcock et al study. A similar study showed that participants with >50 scans also achieved the SC view 100% of the time.2,3

Conclusion: Despite small numbers, we have shown that in our department the quality of FICE scans is comparable to larger centres, and that the subcostal view is the most frequently acquired.

Table 1.

View Percentage Achieved (%)
PLAX 81.5
PSAX 70.4
A4Ch 77.8
SC 100
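The percentages in Table 1 follow directly from the raw view counts over the 27 scans; a quick reproduction:

```python
# View-success counts out of 27 scans, as reported in the abstract.
counts = {"SC": 27, "PLAX": 22, "A4Ch": 21, "PSAX": 19}
total_scans = 27
percent_achieved = {view: round(100 * n / total_scans, 1)
                    for view, n in counts.items()}
# → {'SC': 100.0, 'PLAX': 81.5, 'A4Ch': 77.8, 'PSAX': 70.4}
```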

EP.221

Lung ultrasound can optimise the sequence of medical and physiotherapy treatments during a lung “whiteout”: A case report

Simon Hayward1 and Lisa Hayward1

1Blackpool Teaching Hospitals NHS Foundation Trust, Blackpool, UK

Abstract

Introduction: Lung ultrasound (LUS) is gaining popularity as a bedside diagnostic technique amongst respiratory physiotherapists, but the current body of evidence is in its infancy. Chest physiotherapy has the potential to positively influence the removal of secretions following sputum retention and whole-lung collapse. However, this can prove difficult due to the low sensitivity and specificity of auscultation and anterior-posterior chest radiography (CXR) in identifying the causes of respiratory compromise in patients on critical care.

The aim of this report is to highlight the potential impact of LUS use by physiotherapists within the multidisciplinary team.

Case Presentation: A 63 year old patient underwent surgery for valve repair and a triple coronary artery bypass graft. Unfortunately, they sustained a cerebral artery infarct causing a dense left hemiplegia and an ineffective cough. Regular nasal pharyngeal suction and the use of a cough assist machine to remove secretions became necessary.

A rise in oxygen requirements prompted a CXR which showed a complete left hemi-thorax opacity often termed a “whiteout”. Physiotherapy was requested to aid sputum removal and lung recruitment.

Prior to physiotherapy treatment, the author (SH) performed a LUS assessment, which showed a large left-sided pleural effusion. This new information was relayed to the medical team, who decided to forego physiotherapy and perform a bronchoscopy, under the impression the collapse was still due to a sputum plug. Bronchoscopy showed no significant sputum plug. A second LUS scan showed no change in the effusion. Further team discussion prompted insertion of an intercostal chest drain, which drained 3500 ml within an hour.

A third LUS showed an absence of effusion and a return to abnormal lung aeration with multiple “B-lines”. Physiotherapy treatment was now initiated using intermittent positive pressure breathing. After a single treatment a fourth and final LUS scan showed a return to A-line presentation, with lung sliding, consistent with optimal aeration. A repeat CXR confirmed resolution of the “whiteout”.

Discussion: A “whiteout” can be caused by a significant pleural effusion even when sputum retention is suspected. When patients are referred for a physiotherapy opinion LUS can be used to highlight pathologies not amenable to physiotherapy treatment making it an effective tool to distinguish between collapse and pleural effusion. It seems reasonable to address the pleural effusion first and then implement physiotherapy treatment. LUS will also allow rapid repeated imaging to assess whether treatments have been successful immediately after being administered.

EP.222

Determination of End Point of Fluid Resuscitation Using Simplified Lung Ultrasound Protocol in Septic Shock Patients

Rana Ibrahim1, Amr Dahroug1 and Tayseer Zaytoun1

1Critical Care Department, Alexandria University, Alexandria, Egypt

Abstract

Background: Fluid administration is a cornerstone of treating septic shock. Dynamic measures are preferred over static ones owing to their ease of use and non-invasiveness. Thoracic ultrasound is now recommended as a guide for the intensivist; one of its many applications is the assessment of extravascular lung water by viewing and quantifying B-lines.

Objectives: The aim of this work is to assess the role of simplified lung ultrasound protocol in identifying the end point of fluid resuscitation in patients with septic shock.

Methods: This is a single-blinded randomized controlled trial enrolling 80 septic shock patients admitted to Alexandria Main University Hospital ICUs. Patients were randomly allocated into two groups: the first group received fluid resuscitation according to early goal directed therapy (EGDT) with a target central venous pressure of 8–12 mmHg, while the second group received fluids guided by the simplified lung ultrasound protocol until reaching a score of 16. The diagnostic potential of ultrasound as an end point of fluid resuscitation was determined by area under the receiver operating characteristic curve (AUROC) analysis.

Results: The ultrasound score at day 1 showed a moderate positive correlation with CVP readings (Spearman's r = 0.50, P = 0.001). The ultrasound group had a better SOFA score (day 1, p = 0.012), lower 7-day mortality (p = 0.039), less hypoxia (day 1, p = 0.022) and less pulmonary edema (p = 0.026) than the EGDT group. A cut-off score of >10 (sensitivity 84.21%, specificity 90.48%) was identified, above which fluid resuscitation should be terminated (AUROC 0.818, P = 0.001).
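To illustrate how a sensitivity/specificity pair at a given score cut-off is derived: the patient-level data are not given in the abstract, so the scores and labels below are invented purely for demonstration (here "positive" means resuscitation should stop, and a patient is test-positive when their ultrasound score exceeds the cut-off):

```python
def sens_spec(scores, labels, cutoff):
    """Sensitivity and specificity of the rule 'score > cutoff' against
    binary outcome labels (1 = positive, 0 = negative)."""
    tp = sum(s > cutoff and y == 1 for s, y in zip(scores, labels))
    fn = sum(s <= cutoff and y == 1 for s, y in zip(scores, labels))
    tn = sum(s <= cutoff and y == 0 for s, y in zip(scores, labels))
    fp = sum(s > cutoff and y == 0 for s, y in zip(scores, labels))
    return tp / (tp + fn), tn / (tn + fp)

# Synthetic example only: ten hypothetical ultrasound scores and outcomes.
scores = [6, 8, 9, 11, 12, 13, 7, 14, 10, 15]
labels = [0, 0, 0, 1, 1, 1, 0, 1, 0, 1]
sensitivity, specificity = sens_spec(scores, labels, cutoff=10)
```

Sweeping the cut-off over all observed scores and plotting sensitivity against 1 − specificity is what yields the AUROC reported above.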

Conclusion: The simplified lung ultrasound protocol is easy to accomplish, non-invasive, and saves time and effort, with better mortality and fewer complications than other resuscitation tools.

Recommendations: The simplified lung ultrasound protocol, with a cut-off score of >10, can be used to determine the end point of fluid resuscitation in septic shock patients.

Keywords: End point of fluid resuscitation, sepsis, septic shock, ultrasound.

EP.223

An Extra Patient on the ward round… In-situ simulation to improve MDT performance in ICU

Katie Allan1 and Kaushik Bhowmick1

1West Suffolk NHS Foundation Trust, Bury St Edmunds, UK

Abstract

Background: Simulation is a popular teaching method and common educational tool for healthcare professionals. Widespread use of simulation for training is limited by interruption to clinical duties, high cost and the inability to recreate the nuances of real-life environments.1 'In-situ', 'point-of-care' or 'mobile' simulation is the new paradigm because it is carried out within the actual clinical area, with people assuming their normal roles on a normal working day.

Aim:

• To develop an in-situ simulation program within ICU (Intensive Care Unit) for a multi- professional team using clinical and non-clinical situations

• To improve patient safety by focusing on non-technical skills or ‘human factors’ which are situational awareness, decision making, task management, team working, communication skills and calmness under pressure

• To focus on specific incidents or find out any latent error and act accordingly

Methods: A simulated ICU bed space was recreated using our Laerdal SimMan Essential, supplemented by real ICU equipment at the West Suffolk Hospital. The simulated patient is seen consecutively on the ward round, and notes and parameters are reviewed as usual. Actors are used as additional features in certain scenarios. Doctors, nurses and allied healthcare professionals manage the simulated patient through realistic events on 2–3 consecutive days in a week as part of their ward round. After receiving a comprehensive handover, nurses conduct a clinical assessment and relay any priorities to the medical team on their arrival, or earlier if the clinical situation demands. The team then reaches a management plan or carries out any acute intervention. Following this, a structured teaching debrief is conducted using the FAST-PAGE model.2

Results: The scenarios reflect the usual ICU case mix, including clinical crisis situations and everyday maintenance issues during a patient's ICU journey. Evaluation so far suggests the program is well received by all participants. Participants' feedback highlights improved human factors awareness and greater confidence in handling critical clinical situations.

Conclusion: Multi-disciplinary in-situ simulation in ICU is achievable and can deliver high-value programs that meet clear educational goals and bring quality improvement into the routine working environment. More innovation is needed in the current climate of stretched resources, not only to increase the frequency of such programs but also to introduce in-situ simulation into other clinical areas.

Reference

  • 1. Weinstock PH, et al. Simulation at the point of care: reduced-cost, in-situ training via a mobile cart. Paediatric Critical Care Medicine 2005; 6: 635–41.

EP.224

Identifying Latent Threats Using Insitu Simulation on the Adult ICU

Louise Ma1, Kylie Bunting1, Erika Avery1, Jennifer Fawcett1 and Alice Carter1

1UCLH NHS Foundation Trust, London, UK

Abstract

Introduction: Many ICUs already include high fidelity simulation as part of their regular curricula. Running in-situ simulation scenarios also has the added benefit of providing staff the opportunity to identify latent threats, thus increasing patient safety and reducing the risk of serious untoward incidents in the future. In-situ simulation scenarios were introduced to the ICU at UCLH in 2016 for latent threat identification purposes.

Method: Monthly in-situ simulation sessions were run within the adult ICU at UCLH. Participants consisted of one junior doctor, one senior doctor, one junior nurse, and one senior nurse. An ICU doctor and an ICU practice development nurse debriefed scenarios, whilst another ICU nurse facilitated them. Clinical scenarios were based on serious untoward incidents that had either occurred within the Trust or had been reported in national audit reports. All participants were asked to complete a human factors questionnaire after each session. Facilitators then completed a latent threats review covering the crisis resource management issues below:

• Task factors

• Communication factors

• Team factors

• Individual factors

• Education and training factors

• Equipment and resource factors

• Working conditions factors

• Organisational and strategic factors

Results: In total, six simulation sessions took place between May and October 2016. Several latent threats were noted in three or more sessions:

Communication factors:

• Team members did not use ‘closed loop’ communication.

Team factors:

• Instructions issued by team leader, but only followed by one team member.

• Team did not notify ICU consultant on call with regard to clinical incidents.

• Junior nurse notified change in observations to junior doctor, but there was delay in escalating information to team leader.

Individual factors:

• Lack of situational awareness

○ Unaware that O2 saturations were dropping when trying to review initial incident.

○ Fixated on visualising airway, but had no one preparing for endotracheal intubation.

○ No one was observing the observation monitor during emergency intubation.

○ Did not use EtCO2 monitor during intubation.

Conclusions: All identified latent threats were debriefed with participants. Results were presented to all ICU staff. Highlighted latent threats were then used to support the teaching of ICU doctors and nurses in their weekly teaching sessions. The aim is that, through continued reflection on CRM skills observed during in-situ simulations, all latent threats can eventually be eliminated.

EP.225

Paediatric In-Situ Simulation on Adult Intensive Therapy Unit

Mike Dickinson1, Mark Hatch1, Andrew Henson1, Liz Gooderham1 and Jason Cupitt1

1Blackpool Teaching Hospitals NHS Foundation Trust, Blackpool, UK

Abstract

Introduction: In situ simulation is conducted in the clinical area, using the available staff, resources and equipment. It can be used to reinforce clinical team training, incorporate human factors such as leadership, situational awareness, decision making and teamwork, and help to identify latent safety threats and systems errors (Patterson et al., 2013). Simulation also offers the opportunity to formally debrief members of the multiprofessional team, something that occurs rarely in practice and has been shown to be the most important element in providing effective learning in simulation-based education.

Aims: Critically ill children are occasionally managed in the adult Intensive Therapy Unit (ITU) for stabilisation and initial treatment, prior to transfer to specialist paediatric centres. The aim was to run a paediatric in-situ simulation on the adult ITU with the multiprofessional team, using the environment, equipment, systems and protocols currently in place to deal with such an emergency, leading to improvements in patient safety (Fent et al., 2015).

Method: The poster summarises the in-situ simulation of a child with meningococcal septicaemia, and how it led to improvements in practice.

Findings: The following learning points were identified by the team:

• Team role allocation: team lead, airway and breathing, fluids, drugs, disability/exposure, scribe.

• Cognitive aids: WETFlAG, CrashCall, B@EASE Rapid Sequence Induction Checklist – not immediately available or visible to the team.

• Paediatric emergency trolley: overloaded with equipment, leading to delay with preparation of airway equipment.

• Communication: communication breakdown due to stress and cognitive load.

• Psychological support for parents/carers: delayed due to fidelity error.

• The team found the experience interesting: some loved it, some felt like crying, some felt it was a test.

Conclusion:

This led to the following changes:

• Allocation of staff roles – effective team working

• WETFlAG and CrashCall calculations visible on the wall or a whiteboard.

• Ergonomics of paediatric emergency trolley to be reviewed.

• Emphasise closed loop communication, SBAR handover

• Incorporate Human Factors into staff training

• Review and adjust systems & protocols for paediatrics

References

  • 1. Fent G, Blythe J, Farooq O, et al. In situ simulation as a tool for patient safety: a systematic review identifying how it is used and its effectiveness. BMJ Simulation and Technology Enhanced Learning 2017; 29: 83–88.
  • 2. Patterson MD, Geis GL, Falcone RA, et al. In situ simulation: detection of safety threats and teamwork training in a high risk emergency department. British Medical Journal Quality & Safety 2013; 22: 468–477.

EP.226

Pilot Insitu Simulation on the Adult ICU at UCH

Louise Ma1, Erika Avery1, Kylie Bunting1, Jennifer Fawcett1 and Alice Carter1

1UCLH NHS Foundation Trust, London, UK

Abstract

Introduction: Multidisciplinary in-situ simulation sessions are a useful way to identify areas for improvement and gaps in training within an existing working ICU, and further cement the ICU team's crisis resource management (CRM) skills in real time. This pilot study tested the feasibility of in-situ simulation sessions within the adult ICU of a busy tertiary teaching hospital.

Method: Each scenario had learning points based on curricula from the Faculty of Intensive Care Medicine and the Southbank University BSc Course on Professional Practice in Critical Care for Nurses. Participants in each in-situ session consisted of one senior doctor, one junior doctor, one senior nurse, and one junior nurse. Each session was facilitated and debriefed by two ICU practice development nurses and another doctor. CRM questionnaire forms were collected from participants.

Results: The in-situ simulation sessions occurred once a month between May and October 2016. Each session lasted 40–60 mins (including debrief). 20 participant feedback forms were collected.

All participants strongly agreed that:

• all members of the team were appropriately involved and participated in the activity.

• the clinical scenario was realistic.

• this simulation will impact positively on their team working skills.

• this simulation will impact positively on their overall practice.

Participants on average either disagreed or strongly disagreed that:

• the leader maintained appropriate balance between authority and team member participation.

• team members used closed loop communication at all times.

• disagreement or conflicts amongst team members were addressed without loss of situation awareness.

All feedback was very supportive of having such in-situ simulation sessions in the ICU. Some comments extracted below:

• “Useful learning points re: task delegation, top communication. Well run, like real life – great, thanks.”

• “Very helpful opportunity to practice realistic scenario… I felt quite animated in my role in this scenario, however still found this helpful as I have encountered airway issues on the ward and struggled with my role then.”

• “Loved it! Great learning experience. I really liked having a debrief.”

Conclusion: In-situ simulation sessions within the adult ICU at UCLH received very positive feedback from both participants and staff. The sessions highlighted that improving teamwork and communication, in particular, was critical to improving CRM skills, and pointed out what needed to be done to make such improvements. In-situ simulation in the ICU should and will continue on a regular basis.

EP.227

Could the use of frailty scoring be an efficient risk stratification tool in Intensive Care Units in District General Hospitals?

Samantha Harmer1 and Hannah Chin1

1Milton Keynes University Hospital, Milton Keynes, UK

Abstract

Frailty scoring could provide a rapid assessment of predicted outcomes in surgical patients admitted to Intensive Care Units. A review of 85 patients admitted to a District General Hospital Intensive Care Unit demonstrated a statistically significant difference between non-frail (score 1–4) and frail (score >4) patients with regard to age, P-POSSUM physiology scores, APACHE II scores and survival to hospital discharge. There was no significant difference in predicted P-POSSUM mortality, level three days or hospital length of stay between the frail and non-frail groups. Whilst it is accepted that being elderly does not equate to being frail, patients considered to be frail were less likely to leave hospital alive. Calculating frailty scores is time efficient, is based on information already gathered and does not require extra training or expertise. Intensivists often informally assess frailty when considering admission to the unit, and the Rockwood frailty score could quantify those considerations in numerical form. This review showed that surgical patients with frailty scores of more than four were at a statistically significant (p = 0.001) increased risk of mortality compared with those with low frailty scores. Recognising this increased risk could allow clinicians to use frailty scores in end-of-life discussions with patients and relatives.
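The dichotomisation described above can be sketched as a small stratification function (a minimal illustration; the 1–4 versus >4 cut-off and the 1–9 Rockwood Clinical Frailty Scale range are from the abstract and standard usage, while the function name and labels are ours):

```python
def stratify_frailty(rockwood_score: int) -> str:
    """Dichotomise a Rockwood Clinical Frailty Scale score using the
    cut-off reported in the review: 1-4 non-frail, >4 frail."""
    if not 1 <= rockwood_score <= 9:
        raise ValueError("Rockwood CFS scores range from 1 to 9")
    return "frail" if rockwood_score > 4 else "non-frail"

print(stratify_frailty(3))  # non-frail
print(stratify_frailty(6))  # frail
```

Such a function makes the informal bedside judgement reproducible and auditable, which is the abstract's central argument for routine scoring.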

EP.228

Mortality in patients intubated in the Medical High Dependency Unit

Duncan Philp1 and Andrew John Clarkin1

1Aberdeen Royal Infirmary, Aberdeen, UK

Abstract

The Medical High Dependency Unit (MHDU) in Aberdeen Royal Infirmary provides level two organ support including high flow nasal oxygen (HFNO), continuous positive airway pressure (CPAP) and non-invasive ventilation (NIV). It is also geographically remote from the Intensive Care Unit (ICU). We hypothesized that this ability to provide significant respiratory support may delay referral to ICU and invasive ventilation, and that delayed referral may necessitate intubation in MHDU prior to transfer with the attendant risks.

All patients admitted to ICU from MHDU between January 2015 and December 2016 were identified using the WardWatcher database. Patients who were not invasively ventilated were excluded. WardWatcher and the electronic patient record were then used to determine the number of days from hospital admission to ICU admission, location of intubation, and the in-hospital mortality.

There were 121 patients admitted from MHDU to ICU, of whom 39 did not require intubation, leaving 82 patients. Of these, 27% were intubated in MHDU prior to transfer. There was a statistically significant increase in mortality associated with intubation in MHDU (p = 0.03).

                                 Intubated in MHDU   Intubated in ICU   All Admissions (n = 121)
Total intubated                  22                  60                 82
Survived to discharge (%)        7 (32%)             35 (58%)           65 (54%)
Hospital mortality (%)           15 (68%)            25 (42%)           56 (46%)
Admitted to ICU on MHDU day 1    6 (27%)             17 (28%)           30 (25%)
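The survival difference in the table above can be checked with a Pearson chi-squared test on the 2×2 survival counts (an illustrative re-analysis in plain Python; the abstract does not state which test produced p = 0.03, so this is a consistency check, with 3.841 being the 5% critical value at one degree of freedom):

```python
def chi_squared_2x2(a, b, c, d):
    """Pearson chi-squared statistic (no continuity correction)
    for a 2x2 contingency table [[a, b], [c, d]]."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Survived / died among patients intubated in MHDU (7/15) vs in ICU (35/25)
chi2 = chi_squared_2x2(7, 15, 35, 25)
print(round(chi2, 2))  # 4.53, which exceeds 3.841, i.e. p < 0.05
```

The statistic of about 4.5 corresponds to p ≈ 0.03, in keeping with the reported result.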

These results show an increased hospital mortality for patients intubated in MHDU. There are several possible explanations including speed of patient deterioration, physiology at the time of intubation (and so complications during intubation), or delay in intubation and invasive ventilation. Our data also showed that a quarter of the ICU admissions from MHDU were on the first day of their hospital stay. This suggests inappropriate admission to MHDU rather than ICU.

Improvement is required to address avoidable intubation in MHDU and incorrect patient placement. All patients who require intubation in MHDU will be discussed at the MHDU morbidity & mortality meeting to identify and address contributing factors. Patients admitted to ICU within 24 hours of their MHDU admission will be similarly reviewed. Education in both units is required to highlight these issues, and a collaborative approach is needed to ensure timely ICU admission. After these changes, re-audit will quantify the resulting improvement.

EP.229

Geriatric Intensive Care Unit: Mapping trends and predicting outcomes for elderly patients escalated to ITU

Rochelle Velho1,2, Fayaz Baba1, Govindan Raghuraman1 and Nitin Arora1

1Critical Care, Good Hope Hospital, Heart of England NHS Trust, Birmingham, UK

2University of Warwick, Warwick, UK

Abstract

Background: The demographic changes on the Intensive Care Unit (ICU) reflect an ageing population at both national and international level. The evidence base is contentious with regard to outcomes of elderly (>80 years) ICU admissions. On the one hand, elderly ICU admissions have shown increased survival post-elective surgery; on the other, the evidence base is sparse for elderly general medical ICU admissions, a cohort that may have higher ICU and hospital mortality rates than elective surgical admissions. The primary objective of this study was to map trends in APACHE II scores and ICU mortality across age groups, using ICNARC data (2007 to 2017) from a single centre.

Methods: A retrospective analysis was conducted using ICNARC data for adult patients (n = 11682) admitted to a single Level 2 (8 beds) and Level 3 (12 beds) centre in the West Midlands (Birmingham Heartlands Hospital, Heart of England NHS Trust) between October 2007 and May 2017. The 10-year data set was divided into two groups, October 2007 to September 2012 (n = 5274) and October 2012 to May 2017 (n = 6408), to facilitate comparison of patient characteristics (age, sex, APACHE II) and trends in outcomes (length of stay, mortality). GraphPad PRISM software was used to undertake one-way ANOVAs with Bonferroni post tests to assess the significance of trends.

Results: The average age was 61.8 years and 63.7 years for the two time periods respectively. APACHE II scores for the two time periods were not significantly different (p > 0.05) for all age groups. Level 3 mortality for patients >79 years was significantly greater in the 2007 to 2012 period than in the 2012 to 2017 period (p < 0.05).

Conclusion: APACHE II scores were not significantly different for the two time periods; however the mortality of the elderly cohort was significantly greater in the earlier time period. Therefore, perhaps frailty needs to be factored into current prognostication scores for both surgical and medical patients rather than age in isolation. The overall recommendation of this study is to conduct a prospective multicentre study on the accuracy of frailty indices and outcomes for ITU patients in all age groups.

EP.231

Predictors of Unit Outcome in Patients with Ruptured Abdominal Aortic Aneurysm

Callum Kaye 1

1Aberdeen Royal Infirmary, Aberdeen, UK

Abstract

Introduction: Ruptured abdominal aortic aneurysms (rAAA) require rapid decision-making, with very few predictive models available to support the decision to operate. The majority of such patients are admitted to the ICU; however, there remains no validated tool to support risk prediction in the ICU. This project aimed to identify risk factors, identifiable on admission to the ICU, associated with increased unit mortality in the Aberdeen Royal Infirmary General ICU.

Methods: The Aberdeen Scottish Intensive Care Society Audit Group (SICSAG) dataset was interrogated using the WardWatcher system to identify patients presenting with rAAA. Variables forming the initial SICSAG dataset and age were recorded, along with unit discharge outcome. The variables underwent bivariate correlation analysis and those with the greatest degree of correlation underwent binary logistic regression, with relative risks being calculated for each variable and combination models.

Results: A total of 164 patients were reviewed, with a mean age of 73.3 years and overall mortality of 27.62%. The leading variables associated with increased mortality were decreasing pH, decreasing haemoglobin (Hb) and cardiopulmonary resuscitation (CPR) within the previous 24 hours. Using cut-offs of pH < 7.2 and Hb < 9 g/dL, relative risks of mortality were calculated as 3.31 (p < 0.001) and 2.28 (p < 0.001), respectively. The relative risk for mortality in patients receiving CPR in the previous 24 hours was 2.91 (p < 0.001).

When looking at predictive models, there was a relative risk for mortality of 1.12 (P < 0.001), 3.11 (P < 0.001) and 3.73 (P < 0.001) in patients with 1, 2 or 3 of the above factors respectively.
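The relative risks quoted above follow the standard 2×2 formula RR = (a/(a+b)) / (c/(c+d)). A minimal sketch, noting that the counts in the example are hypothetical since the abstract reports only the resulting ratios:

```python
def relative_risk(events_exposed, n_exposed, events_unexposed, n_unexposed):
    """Relative risk: risk of the outcome in the exposed group
    divided by the risk in the unexposed group."""
    return (events_exposed / n_exposed) / (events_unexposed / n_unexposed)

# Hypothetical illustration: 10/20 deaths with pH < 7.2 vs 10/50 without
print(relative_risk(10, 20, 10, 50))  # 2.5
```

Combining binary risk factors into a count, as the study does, then amounts to computing one such ratio per stratum (1, 2 or 3 factors present) against the factor-free group.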

Discussion: Although care needs to be taken when applying prediction scores at a patient level, Hb < 9 g/dL, pH < 7.2 and the requirement for CPR in the previous 24 hours can be used to identify patients at higher risk of death. Variables such as age, vasoactive medication use, renal function and potassium had little predictive effect.

Importantly, this dataset only included patients who were admitted to the ICU and underwent SICSAG scoring, so will exclude those admitted for palliative care or who died within 8 hours of admission. Furthermore, this only looked at the SICSAG dataset, and not other factors which may predict mortality, such as open or closed repair.

EP.232

Transcribing errors between two electronic prescribing systems during step down from the High Dependency Unit (HDU)

Richard Di Palma 1

1Royal Brompton & Harefield NHS Foundation Trust, London, UK

Abstract

Background: The adoption of electronic prescribing (EP) systems in UK hospitals is rising with the government’s target for a paperless NHS by 2020. At our Trust there are two different EP systems: ICCA® (IntelliSpace Critical Care and Anaesthesia, Philips®) used in critical care and HDU areas, and MedChart® (CSC®) used in level 1 ward areas. When patients are stepped down from HDU the medications have to be manually transcribed from ICCA® to MedChart®. Trust incident reports (Datix®) and personal experience during clinical screening have highlighted transcribing errors between the two interfaces leading to potential patient harm.

Objectives:

• To quantify the number and type of transcribing errors occurring during step down

• To assess the error rate between specialties and prescriber types

Method: Data were collected for all patients discharged to a level 1 ward from HDU over 30 weekdays between 20th February and 12th April 2017. ICCA® and MedChart® prescriptions were compared at the point of discharge and any unintentional discrepancies and omissions were recorded.

Results:

The most frequent discrepancy errors were duplication, drug no longer prescribed, and incorrect dose (all n = 8); the most commonly omitted drugs were levobupivacaine PVB, enoxaparin, aspirin and Oramorph (all n = 3).

Conclusions: Transcribing between two different EP systems results in a substantial error rate. The error rate was highest in Thoracic Surgery, although a higher proportion of drug charts with at least one error was seen in Cardiac Surgery. Amongst prescriber types, the highest error rate was seen with the HDU SHO (although based on just 6 patients) and the lowest with the specialist nurse prescribers. Reasons for these errors include reusing the pre-op chart, unfamiliarity with the two prescribing systems, prescribers not reviewing the whole ICCA® drug chart (e.g. missing continuous infusions), mis-selection of a drug on MedChart®, and time pressures.

Table 1.

Comparison of error rates between specialties.

                                      Total    Cardiac Surgery   Thoracic Surgery   Cardiology
No. patients                          127      82                37                 8
No. drugs prescribed                  1534     1086              379                69
No. errors                            67       48                19                 0
Error rate                            4.37%    4.42%             5.01%              0%
Drug charts with at least one error   34%      38%               32%                0%

Table 2.

Comparison of error rates between prescriber types.

                                      Total    Team SHO   HDU SHO   Specialist nurse prescribing practitioners   SpR
No. patients                          127      65         6         43                                           13
No. drugs prescribed                  1534     766        65        587                                          116
No. errors                            67       38         4         20                                           5
Error rate                            4.37%    4.96%      6.15%     3.41%                                        4.31%
Drug charts with at least one error   34%      46%        25%       28%                                          23%
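The error rates in both tables follow directly from the counts (errors divided by drugs prescribed). A quick verification sketch using the figures reported above:

```python
def error_rate(errors: int, drugs: int) -> str:
    """Express transcription errors as a percentage of drugs prescribed,
    to two decimal places as in the audit tables."""
    return f"{errors / drugs:.2%}"

print(error_rate(67, 1534))  # 4.37%  (overall)
print(error_rate(48, 1086))  # 4.42%  (Cardiac Surgery)
print(error_rate(19, 379))   # 5.01%  (Thoracic Surgery)
```

All three reproduce the tabulated rates, confirming the tables are internally consistent.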

EP.233

Enteral versus parenteral route for drugs on the ICU – are we doing it right?

Andrew Pearson1 and Lyndsay Cheater1

1Countess of Chester Hospital, Chester, UK

Abstract

Our critical care unit, in a busy DGH, has 15 level 2/3 beds. We identified that patients are often given medication intravenously (IV) without consideration of the enteral route (oral/nasogastric tube). It is important to rationalise medication routes to prevent delay in CVP line removal (with its associated risks) and unnecessary repeated peripheral cannulation in patients who often have poor IV access. Enteral administration is also associated with fewer errors than parenteral administration (1). We looked at how many medications we give IV that could be given enterally.

One auditor carried out four separate "snapshot" reviews of all the drug charts in early 2017, each more than one week apart. Data collected included basic demographics, the type and route of regular and prn medication, and the presence or absence of a functioning and accessible gastrointestinal tract.

44 prescriptions were reviewed from 34 patients; 17 were male and 17 female. Ten patients were audited on two separate occasions. The age range of the patients was 27–88 years, and length of stay at the time of audit ranged from 1 to 39 days.

At the time of audit, 34 patients had a functioning gastrointestinal tract. The 10 excluded were either vomiting, had unconfirmed NG tube positions, were not established on enteral feed, were considered too coagulopathic for NG tube placement, or had recently been extubated and not yet had a swallow assessment.

A total of 693 drugs were prescribed: 60% enterally, 40% intravenously. Intravenous medications were classified as "appropriate" (indication for parenteral administration or non-functioning GIT; 185/279), "inappropriate" (no indication for parenteral administration in the presence of a functioning GIT; 40/279) or "duplicate" (prescribed by multiple routes; 50/279). Four medications were not classifiable (all antibiotics with >90% oral bioavailability, which would require consultant microbiologist discussion before step-down). Other studies have found high rates of IV antibiotic use when the enteral route is available (2).
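As a consistency check, the intravenous categories above sum to the IV total, and the enteral/IV split matches the reported 60%/40% (a plain-arithmetic sketch using the abstract's counts):

```python
total_drugs = 693
appropriate, inappropriate, duplicate, unclassifiable = 185, 40, 50, 4

# The four IV categories should sum to the IV denominator of 279
iv_total = appropriate + inappropriate + duplicate + unclassifiable
enteral_total = total_drugs - iv_total

print(iv_total)                              # 279
print(f"{iv_total / total_drugs:.0%}")       # 40%
print(f"{enteral_total / total_drugs:.0%}")  # 60%
```

The counts tally exactly with the reported route split.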

The most common medications administered inappropriately were electrolytes (magnesium sulphate 12, potassium chloride 4), despite evidence that the enteral route for electrolyte administration is just as effective (3), and analgesics (paracetamol 8, tramadol 3, others 3).

As a result, practice has changed: enteral electrolyte supplementation has been added to the electrolyte order set, which until this point contained only IV electrolytes. A re-audit is planned once this change has been implemented.

EP.234

Improving IV Fluid Prescribing

Abdul-Rahman Gomaa1 and Jonathan Wilkinson1

1Northampton General Hospital, Northampton, UK

Abstract

Introduction: Intravenous (IV) fluids are among the most commonly prescribed day-to-day drugs. They have their indications, benefits, side-effects and complications. Evidence suggests that they are rarely prescribed correctly despite the presence of NICE guidelines. This is thought to be due to lack of knowledge and experience, placing patients at increased risk of harm and incurring unnecessary costs to the Trust.

Objective: To ensure that all IV fluid prescriptions are safe, appropriate and adhere to NICE guidance by January 2018.

Methods: Review and improve the process of “IV fluid prescribing” via three simultaneous approaches. Teaching sessions were delivered to all junior doctors to improve knowledge and awareness of appropriate IV fluid prescribing and promote familiarity with the current NICE guidelines.

A point-of-care aide-memoire containing a summary of the information needed for correct prescription was printed and distributed.

Using serial Plan-Do-Study-Act (PDSA) cycles, a novel “IV fluid bundle” was developed and trialled on five wards (three surgical, two medical). The aim of the bundle was to ensure that patients were clinically reviewed.

These interventions were monitored using weekly point-prevalence audits of the IV fluid bundles. Parameters measured were the incidence of deranged U&Es and AKI, and the number of days between the latest U&Es and the patient's IV fluid prescription.

Results: With 50% uptake, outcomes were significantly improved. Of the patients on the IV fluid bundle, 100% had a documented review of both fluid status and balance. The incidence of deranged U&Es decreased from 48% to 35% and AKI from 14% to 10%. The average number of days between the latest U&Es and a fluid prescription decreased from 2.2 days to 1.0 day.

Conclusion: Prescribing IV fluids is a complex task. It is an area of clinical practice that requires significant improvement both locally and nationally. The project included carefully structured interventions geared towards tackling the confounding issues.

Changing prescribing habits is an extremely challenging goal for many reasons. Introducing a change that is clear and simple minimises its impact on existing workflows; the design of a simple IV fluid bundle ensured minimal interference.

Since commencing the project, we have seen an improvement in the knowledge base around IV fluid safety and a clear improvement in the prescription of IV fluids in our Trust. We anticipate that further improvements will be achieved once the bundle has been incorporated into the hospital’s electronic prescribing system.

EP.235

Electronic prescribing on the ICU – from frustration to fruition, the implementation of the junior clinician led 'ePrescribing admissions powerplan'

Christopher Smith1, Rebecca Harper1 and Kat Eigener1

1Homerton University Hospital, London, UK

Abstract

Background: It had been noted that newly admitted ITU patients frequently had medications missing from their drug chart following the withdrawal of paper ITU charts and the introduction of electronic prescribing, causing frustration for staff and compromising quality of care for patients.

Aims: Data collection, intervention and assessment of effect were instigated by junior doctors to implement an 'in-house' and timely solution to the problem.

Methodology: Quality improvement methodology was used to identify the problem and implement change. Regular review and measurement with test cycles were carried out with consultation from a range of staff within the department to produce a ‘ground-up’ solution involving nurses, junior doctors, consultants, pharmacists and ICT.

Quantifying the problem: Initial data revealed that medicines required immediately by nursing staff were missing in over three-quarters of cases reviewed, including venous thromboembolism prophylaxis, 'VAP' bundles, inotropes, fluids (including transducer flush bags) and electrolyte replacement. In addition, a survey of ICU staff showed that 66% felt that clinically important medications were omitted on initial clerking (demonstrated either by nurses having to ask for them to be prescribed or doctors being asked to prescribe them).

Designing the solution: Initially, staff doubted that an ICT solution could be found; however, it became apparent that solutions already implemented in other departments could be adapted and tested to create an 'admissions powerplan', effectively a checklist of pre-prescribed medications that can be selected and adjusted when patients are admitted. A test domain was set up with the new powerplan, and quality control and testing ran for one month, involving senior clinicians and the head of pharmacy to ensure safety. The first admissions powerplan was accessible from the live electronic patient record from June 2016 and positive feedback was received immediately. Constructive feedback was used to amend the system, and a snapshot audit and questionnaire revealed that clinicians were asked to prescribe missing medications far less frequently, and that the powerplan had been used in over 80% of new ITU admissions.

Conclusion: From frustration to fruition – a junior doctor instigated project led to a significant change in ICT systems within just a few months, improving prescribing in the ICU and involving team members in improving their working environment and quality of patient care. We'd like to share our methodology and enthusiasm here and inspire other units that clinician-led ICT change is possible!

EP.236

Intravenous crystalloid fluid prescribing in a British ICU pre- and post- local policy change following NICE CG174: Cost implications

Julian Cumberworth1, Irene Francis1, Oyin Close2, Jessica West1 and Owen Boyd1

1Brighton & Sussex University Hospitals NHS Trust, Brighton, UK

2Royal Brompton & Harefield NHS Foundation Trust, London, UK

Abstract

Introduction: In December 2013, NICE published a clinical guideline (CG174) concerning intravenous fluid usage in adult hospital inpatients. The publication of CG174 prompted a revision of local intravenous fluid therapy guidelines within our NHS Trust. In particular, a prominent role for 0.18% NaCl with 4% glucose was advocated for routine maintenance. This was therefore made more widely available within our department.

Here, we present total quantities and costs of intravenous crystalloid fluids prescribed over six month periods pre- and post- local guideline revision, in our NHS Trust ICU and HDU.

Methods: Data were collected retrospectively from pharmacy records of the periods from November 2013 to April 2014, and November 2014 to April 2015. Quantities and costs of IV crystalloid solutions prescribed were calculated over the two periods. Here, data are grouped into six month blocks for clarity.

Results: The first period corresponded to 5107 patient bed days and the second to 5866. Results (Table 1) showed a 49% reduction in the total quantity of Hartmann's solution prescribed (3890 L to 2000 L) between the periods, with a corresponding increase in use of 0.18% NaCl with 4% glucose. Use of 1 L 0.9% NaCl with KCl halved, whilst use without KCl was comparable. Total crystalloid cost decreased by £2808 between the periods (£18485 to £15677), representing a 15% saving.
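The reported saving and the Hartmann's reduction can be reproduced from the period totals (a simple arithmetic check on the figures above):

```python
cost_pre, cost_post = 18485, 15677  # six-month crystalloid spend, GBP
saving = cost_pre - cost_post
saving_pct = 100 * saving / cost_pre

print(saving)             # 2808
print(round(saving_pct))  # 15

# Hartmann's solution: 3890 L pre-change vs 2000 L post-change
hartmanns_reduction = 100 * (3890 - 2000) / 3890
print(round(hartmanns_reduction))  # 49
```

Both the £2808 (15%) cost saving and the 49% Hartmann's reduction match the reported results.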

Discussion: The introduction of 0.18% NaCl with 4% glucose for maintenance may relate to reduced usage of Hartmann’s solution, 5% glucose (without KCl) and 0.9% NaCl (1L preparation with KCl).

Overall cost reduction was primarily accounted for by reduced use of ‘concentrated’ 40 mmol KCl in 100 ml NaCl. This is considerably more expensive than the other crystalloids described. Total administration of potassium (chiefly via 0.18% NaCl with 4% glucose and 40 mmol KCl) substantially increased; this could relate to greater emphasis on adequate potassium provision in guidelines.

Table 1.

Quantities of intravenous crystalloid fluid solutions prescribed in ICU/HDU pre- (November 2013-April 2014) and post- (November 2014-April 2015) a change in local fluid policy.

Volumes (L)                            No added KCl   20 mmol KCl   40 mmol KCl
1 L 0.18% NaCl with 4% glucose   Pre   0              0             12
                                 Post  1314           80            840
1 L 5% glucose                   Pre   402            0             30
                                 Post  229.5          10            131
1 L 0.9% NaCl                    Pre   2347.5         20            340
                                 Post  2103           10            170
100 ml 0.9% NaCl                 Pre   n/a            n/a           245
                                 Post  n/a            n/a           193
Hartmann's solution              Pre   3890
                                 Post  2000
Total                            Pre   7286.5
                                 Post  7080.5

EP.237

The forgotten tube? A quality improvement project assessing the effectiveness of the intensive care team to assess the position of the endotracheal tube on chest x-ray and correct misplacement

Christopher Taylor1, Laura Massey1, Timothy Ogilvie1 and Thearina De Beer1

1Nottingham University Hospitals NHS Trust, Nottingham, UK

Abstract

Aim: Assess how effectively the medical team on a tertiary trauma and neurosurgical intensive care unit assessed the position of the endotracheal tube (ETT) on chest x-ray (CXR).

Background: Correct positioning of the ETT is integral to optimising invasive ventilation strategies. The available literature suggests a misplacement rate of 15%. Accidental extubations increase ICU and hospital length of stay as well as rates of ventilator-associated pneumonia. NAP4 highlighted a higher proportion of airway complications on ICU, often occurring out of hours, stressing the imperative to ensure the ETT is in the right place. Physicians are poor at predicting a misplaced ETT by clinical examination or length at the teeth. CXR therefore remains the least invasive and most common method of checking ETT position.

Methods: Data were collected from 100 intubations over a 3-month period from November 2016 to January 2017. CXRs were assessed and compared to the documented assessment in the medical notes. The level of agreement with the project authors was recorded, along with subsequent management and complications in patients with a misplaced ETT on CXR. Other data, including sex, grade of reviewer and need for re-intubation, were recorded. The ETT was considered misplaced if less than 2 cm or greater than 5 cm from the carina.
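The audit's placement criterion can be sketched as a small classifier (the function and labels are ours; the 2–5 cm window is the criterion stated above, reading a tip close to the carina as "too low" and one far from it as "too high"):

```python
def classify_ett(tip_to_carina_cm: float) -> str:
    """Classify ETT position from the tip-to-carina distance on CXR,
    using the audit's 2-5 cm acceptable window."""
    if tip_to_carina_cm < 2:
        return "too low"
    if tip_to_carina_cm > 5:
        return "too high"
    return "correctly placed"

print(classify_ett(1.0))  # too low
print(classify_ett(3.5))  # correctly placed
print(classify_ett(6.5))  # too high
```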

Results: 28% of ETTs were misplaced: 17 too high and 11 too low (including 2 endobronchial). All but one of the low ETTs were re-positioned. In contrast, only 4/17 high ETTs were re-positioned; of the 13 not re-positioned, 4 patients (31%) required re-intubation due to cuff herniation or unplanned extubation. No documentation of ETT position occurred in 34 intubations.

Conclusion: The project revealed a significant incidence of malpositioned ETTs and a shortcoming in timely, accurate assessment and documentation of ETT position on CXR. There was also evidence of subsequent complications with potential for patient harm, such as unrecognised endobronchial intubation or cuff herniation/tube displacement requiring re-intubation. Low ETT position was more readily recognised than high, possibly because of a clear landmark for a low position, yet more complications resulted from high ETTs. Consequently, a teaching session was incorporated into induction for new junior staff to emphasise the importance of assessing ETT position. This improved confidence in CXR interpretation among trainees (see figure). Re-audit is ongoing.

[Figure: trainee-reported confidence in CXR interpretation of ETT position before and after the teaching session]

EP.238

On The Right Trach Yet? The South West of England multicentre tracheostomy service audit

Agnieszka Skorko1, Robert Goss2, Richard Innes3, Christopher Newell4 and Sanjoy Shah1

1Bristol Royal Infirmary, Bristol, UK

2Royal Devon and Exeter Hospital, Exeter, UK

3Musgrove Park Hospital, Taunton, UK

4Southmead Hospital, Bristol, UK

Abstract

Introduction: The use of temporary tracheostomies is commonplace on intensive care units. However, this is not a risk-free procedure. Following the findings of NAP4, in 2014 NCEPOD published ‘On The Right Trach’ recommendations aiming to improve safety.

Methods: A six week, trainee-led quality improvement project assessing tracheostomy insertion and care was carried out on 12 (out of 14) ICUs within the South West of England, auditing practice against the NCEPOD recommendations.

Results: Of patients admitted to ICU during the audit period, 10% (93/925) underwent tracheostomy insertion after a median stay of nine days; tracheostomies remained in situ for a median of 10 days. 75% (70/93) were inserted percutaneously, ranging from 30% to 100% by unit.

A WHO-style checklist was used in 54% (38/70) of percutaneous insertions, a consent form in 67% (47/70), and bronchoscopy in 96% (67/70). Checklists were used consistently at four units and consent forms at eight.

Following tracheostomy, 57% (50/88) of patients were decannulated on ICU, 33% (29/88) were discharged with a tracheostomy and 10% (9/88) died on ICU, 4/9 following withdrawal of life-sustaining therapy. No deaths were tracheostomy related.

Immediate complications occurred following 23% (21/93) of insertions, most commonly minor bleeding (62%, 13/21). Rates of complications varied by unit, from 0% to 62.5%.

A tracheostomy-related complication occurred in 32% (30/93) of patients at some point during the ICU stay. 19% (18/93) of tracheostomies needed changing at some stage and 50% (9/18) of these were unplanned. For all these findings, variation by unit was wide.

From a governance perspective, all units had a difficult airway trolley and immediate access to video laryngoscopy. Of the 12 units, three did not have a training programme or core competencies for tracheostomy care, as recommended by NCEPOD, and two did not have a protocol for the management of displaced/dislodged tracheostomies.

Conclusion: In the South West of England, we found wide variation in practice around the insertion and ongoing care of patients who have a tracheostomy inserted on ICU. Complication rates were surprisingly high and varied by unit.

We will address the findings of this audit by undertaking a regional study day to understand how quality of care can be improved. This project will be repeated nationally to see if the degrees of variation and complications are prevalent in other regions and if so, how these may be mitigated.

EP.239

Failed Extubation Rates on the ICU-An Audit of Practice

Samantha Jones1 and Anthony Parsons1

1St Peter's Hospital, Chertsey, UK

Abstract

Introduction: Extubation on the ICU should not be overlooked. The need for reintubation is associated with increased ICU length of stay, rate of pneumonia and mortality. However, continuing mechanical ventilation for too long has economic and health implications, such as the risk of critical illness neuropathy. A large cohort study in the USA demonstrated a 10% reintubation rate; this was the standard against which our own extubation failure rate was audited and assessed.

Method: A retrospective audit was completed looking at all intubated patients admitted to a DGH ICU from June to October 2016. Further analysis of extubated patients was performed, to look at how many required reintubation and whether this could have been predicted using the ICU’s protocols on ‘readiness to wean’ and extubation. Following this, a review of the literature looked at best practice guidelines for extubation on ICU.

Results: The audit found a failed extubation rate of just 5.4%. Almost 70% of patients were extubated on the ICU; 83% of these were extubated onto facemask oxygen and 17% onto NIV. Of those extubated conventionally, 6.5% required rescue NIV, with a total of 3.2% failed extubations in the conventional group. 15.7% of those extubated onto NIV required re-intubation. All of the failures were predictable using the unit guidelines.

Conclusion: The audit demonstrated a low rate of failed extubation on the ICU; however, a number of limitations were identified. Firstly, were our guidelines up to date, given that consultants made the decision to extubate without the criteria being met? Secondly, spontaneous breathing trials, which are known to speed extubation compared with other weaning strategies, were not used. Finally, no formal consideration was given as to which patients should be extubated onto NIV, although there is evidence on which groups are deemed high risk. Should this be incorporated into guidelines?

As a result of the audit, the guidelines have been re-written into a single 'Readiness for Extubation' document, encompassing the most up-to-date predictors of successful extubation and guidance on who should be considered high risk and extubated onto NIV. This will be re-audited over the next 12 months.

EP.240

Prediction of tracheostomy in critically ill patients: A systematic review

Neil Glassford1,2, Andrew Casamento1,3, Bronwyn Beebee4 and Rinaldo Bellomo1,5

1Department of Intensive Care Medicine, Austin Health, Melbourne, Australia

2ANZICS-RC, School of Public Health and Preventative Medicine, Monash University, Melbourne, Australia

3Department of Intensive Care Medicine, The Northern Hospital, Northern Health, Melbourne, Australia

4Department of Emergency Medicine, Austin Health, Melbourne, Australia

5School of Medicine, University of Melbourne, Melbourne, Australia

Abstract

Tracheostomy is performed in more than 10% of critically unwell patients who require prolonged mechanical ventilation in the Intensive Care Unit (ICU). There is little evidence to guide patient selection for and timing of tracheostomy insertion. In randomised studies of tracheostomy timing, the timing of the tracheostomy was randomised, while the perceived need for tracheostomy remained clinician driven. Given that the percentage of patients in the ‘late’ tracheostomy groups who actually receive, and therefore presumably require, a tracheostomy may be as low as 26%, it is clear that such judgement is imperfect. We systematically reviewed the literature to ascertain whether useful prediction rules or validated scores to predict eventual tracheostomy can be identified to better inform patient care.

Two investigators searched and screened three electronic databases to identify all studies of any design evaluating potential predictors of tracheostomy in mechanically ventilated ICU patients. Bias was assessed using the Quality in Prognosis Study tool.

Of 106 potentially relevant studies, we included 10 observational studies recruiting a total of 119,287 mechanically ventilated patients; 11% received a tracheostomy. Six studies were in trauma populations, only one in an undifferentiated ICU population. A variety of different factors in a variety of populations were independently associated with subsequent tracheostomy. These could be collected into three groups: patient factors, such as age and comorbidities; diagnostic factors, such as injury or illness severity; and intervention factors, such as re-intubation, craniotomy, cardiopulmonary bypass, or laparotomy. Profound clinical and methodological heterogeneity prevented meaningful meta-analysis. More predictors were present within the first 48 h of admission in trauma populations than non-trauma (89.7% vs 57.5%). Two studies reported predictive scores for subsequent tracheostomy, but neither was subsequently validated, nor is either likely to demonstrate external validity.

This is the first systematic review of clinical factors predictive of tracheostomy. Our use of an objective outcome and a validated tool limits the introduction of bias. The exclusion of studies reporting only independent predictors of prolonged ventilation is methodologically correct in the absence of a generally accepted definition for such ventilation. We identified a number of factors in observational studies of varying quality that predict tracheostomy in a population dominated by trauma patients, and failed to identify any validated predictive models for the requirement of tracheostomy within 48 hours of ICU admission. The effectiveness of, and optimal timing for, tracheostomy is currently difficult to assess.

EP.241

The association between utilisation of respiratory physiotherapy adjuncts post-extubation and extubation outcome: a retrospective review

Gabriella Cork1,2, Harriet Shannon2 and Leyla Osman3

1Guys and St Thomas NHS Foundation Trust, London, UK

2University College London, London, UK

3Guy’s and St. Thomas’ NHS Foundation Trust, London, UK

Abstract

Extubation failure is a common problem in the adult ICU, affecting up to 25% of patients who are extubated (Cavallone & Vannucci, 2013). Extubation failure is associated with increased mortality and ICU length of stay (Menon et al, 2012). Respiratory physiotherapists have a role in post-extubation care, particularly with regard to augmentation of airway clearance. Adjuncts such as mechanical in-exsufflation (MI:E) and non-invasive ventilation (NIV) can be used post-extubation to support patients at high risk of extubation failure (Bach et al, 2010).

The aim was to investigate the association between utilisation of physiotherapy adjuncts post-extubation and extubation outcome.

A retrospective electronic health record review was undertaken. Ethical approval was waived as the project was locally registered as a service evaluation. All extubation events during a three month period in the adult ICU of a large UK teaching hospital were included. Palliative extubations were excluded. Physiotherapy adjuncts recorded were intermittent positive pressure breathing (IPPB), MI:E, nasopharyngeal suction and bridge-NIV (immediately post-extubation) or rescue-NIV. Extubation failure was defined as reintubation up to one week following extubation.

One hundred and thirty-four extubation events occurred, 36 (27%) of which required a physiotherapy adjunct. Extubation failure rates for all extubation events and those requiring adjuncts were 25% and 44% respectively. See Table 1 for the association between adjunct requirement and extubation failure. For patients requiring airway clearance adjuncts (MI:E and/or nasopharyngeal suction), time to reintubation was significantly longer at 3 days (IQR 1–4), compared with those not requiring these adjuncts, 0.75 days (IQR 0.5–1) (p = 0.04).

The utilisation of physiotherapy adjuncts post-extubation was associated with a higher likelihood of extubation failure. Mechanical in-exsufflation and nasopharyngeal suction were associated with an increased time to reintubation, suggesting that prolonged use of such techniques may delay necessary reintubation. The critical care multi-disciplinary team should consider prolonged post-extubation requirement for physiotherapy adjuncts as a potential indicator for reintubation.

Table 1.

Association of physiotherapy adjuncts and extubation outcome.

Adjunct Total (n = 134) Extubation Success (rate) Extubation Failure (rate) Odds Ratio (of failure)
Any 36 (27%) 20 (55%) 16 (45%) 3.8 (CI 1.6-8.8; p = 0.01*)
IPPB 16 (12%) 10 (63%) 6 (37%) 2.0 (CI 0.67-6; p = 0.203)
MI:E 16 (12%) 5 (31%) 11 (69%) 9.6 (CI 3-30; p < 0.0001*)
NP suction 14 (10%) 3 (21%) 11 (79%) 16 (CI 4-63; p < 0.0001*)
Rescue NIV 7 (5%) 2 (29%) 5 (71%) 8.4 (CI 1.6-46; p = 0.01*)
Bridge NIV 8 (6%) 7 (88%) 1 (12%) 0.4 (CI 0.05-3.4; p = 0.608)
* Statistically significant, p < 0.05.
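As a cross-check, the ‘Any’ adjunct odds ratio in Table 1 can be reproduced from the counts. The no-adjunct column is not reported directly; it is inferred here from the overall 25% failure rate, so this is an illustrative sketch rather than the authors' calculation:

```python
import math

# Reconstructed 2x2 table for "any adjunct" vs extubation failure.
# Adjunct counts come from Table 1; the no-adjunct counts (assumed) are
# inferred from the 25% overall failure rate across 134 events.
adj_fail, adj_succ = 16, 20
no_adj_fail, no_adj_succ = 17, 81  # 98 events without an adjunct

odds_ratio = (adj_fail * no_adj_succ) / (adj_succ * no_adj_fail)

# 95% CI on log(OR) via the standard Woolf (log) method
se = math.sqrt(1/adj_fail + 1/adj_succ + 1/no_adj_fail + 1/no_adj_succ)
ci_low = math.exp(math.log(odds_ratio) - 1.96 * se)
ci_high = math.exp(math.log(odds_ratio) + 1.96 * se)

print(f"OR = {odds_ratio:.1f} (95% CI {ci_low:.1f}-{ci_high:.1f})")
# → OR = 3.8 (95% CI 1.6-8.8), matching the first row of Table 1
```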

EP.242

Using intelligent data from an electronic warning score system to drive patient safety improvements, better decision making and develop a culture of improvement

Kate Murray1, Charlotte Long2

1East Sussex Healthcare NHS Trust, Hastings, UK

2Western Sussex Hospitals NHS Trust, Worthing, UK

Abstract

VitalPAC electronic warning score software was introduced to an Acute Trust comprising 2 hospitals and a total of 800 beds, which serves a population of 500,000. It has been a significant factor in improving patient safety and facilitating smarter working with a reduced workforce. The software allows real-time, bedside capture of patient assessments and provides immediate calculation of National Early Warning Scores (NEWS). In turn, guidance is given regarding when patient observations should be repeated and when to escalate care. Data can be viewed from remote locations via the hospital intranet, allowing proactive intervention by specialists. VitalPAC was introduced rapidly by a small team over the summer of 2014, and there was immediate engagement with the technology. Vocal champions were found on every ward and a clear patient safety message for staff was developed. Outreach championed the project and proactively used VitalPAC.
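The kind of escalation logic such software encodes can be sketched as below. This is a simplified illustration loosely based on published NEWS guidance; the Trust's actual VitalPAC rules are not reported in the abstract:

```python
def news_monitoring_interval(news_total: int, any_single_3: bool = False) -> str:
    """Illustrative mapping from an aggregate NEWS to a minimum observation
    frequency, loosely following published NEWS escalation guidance.
    Local policies (such as those embedded in VitalPAC) may differ."""
    if news_total >= 7:
        return "continuous monitoring; emergency response"
    if news_total >= 5 or any_single_3:
        return "at least hourly; urgent clinical review"
    if news_total >= 1:
        return "at least 4-6 hourly"
    return "minimum 12 hourly"

# A score of 0 needs only routine observations; any single parameter
# scoring 3 triggers hourly observations even if the total is low.
news_monitoring_interval(0)                    # routine monitoring
news_monitoring_interval(4, any_single_3=True) # escalated monitoring
```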

The main improvements seen following the introduction of VitalPAC were 99.98% of patients having complete sets of observations and NEWS scores being 100% accurate Trust-wide. 94% of observations were taken on time and improvements were seen in the “track and trigger” rate when escalation of care was appropriate. This was helped by having an embedded escalation policy and clear guidance on the handheld devices used by ward staff recording patient observations. There has also been an overall downward trend in the rates of inpatient cardiac arrests. The use of VitalPAC has also led to a change in practice of Critical Care Outreach teams, with a proactive approach to seeking out the deteriorating patient on the ward, providing a rapid response and, where appropriate, escalating to the Critical Care Medical team. Furthermore, VitalPAC is now used by Critical Care teams to recognise deteriorating patients early, thus allowing earlier ICU admission, and to review patient progress remotely for those who have been discharged from ICU or reviewed on the ward but at the time not considered in need of ICU admission. It is also used by medical teams and the hospital at night team to identify the sickest patients, and therefore prioritise them for early review. Finally, VitalPAC data has been used to drive improvements in performance, by providing feedback to each ward about key metrics, such as the percentage of observations taken on time. It has allowed focused education sessions to be delivered where necessary and for examples of good practice to be shared between wards.

EP.243

Designing and implementing a learning from excellence project into a busy adult intensive care unit setting

Zoe Campbell 1

1Royal Sussex County Hospital, Brighton, UK

Abstract

Introduction: Learning from excellence is a system of reporting episodes of good practice or when things worked well. It is both the opposite of and complementary to traditional incident reporting systems; it enables us to show appreciation to staff for their good work, and to learn from their excellent practice.

Historically in the NHS we have focussed on learning from errors, which is important, but can have a significant emotional impact on members of staff. A more balanced approach, which learning from excellence aims to provide, teaches resilience, enables us to identify what works well, and hopefully allows us to repeat this behaviour in the future.

Excellence is difficult to define, but we know when we see it. There can be a tendency in the NHS for us to think staff are ‘just doing their job’, but this ordinary excellence needs to be recognised and learned from.

This project is looking at how such a system to learn from excellence can be implemented in a busy adult intensive care setting.

Method: Stakeholders were questioned as to whether they would find it useful to implement a learning from excellence system in our busy adult ICU in Brighton.

This will take the form of a simple paper reporting system which could be completed by any member of staff, relatives and patients. The reports will be anonymised and then shared on bulletin boards and in monthly team meetings.

Results: The reports will then be analysed using appreciative enquiry. This process has a sound theoretical basis, and the 4D model (discover, dream, design, deliver) has been used in projects involving the US Navy and the supermarket giant Walmart. Appreciative enquiry has repeatedly been shown to produce transformational change in large organisations, by analysing what already happens and imagining how that behaviour can become widespread.

Once analysed, common themes will be identified, and ideas about how we achieve excellence and subsequently make staff feel appreciated for the excellent work they do can be shared.

Conclusions: Setting up such a system in a busy intensive care unit has its own challenges; aside from engaging stakeholders, there is also a historical naivety with regard to the merit of learning from positive behaviours rather than from mistakes. Along with learning about the excellent work staff do on a daily basis, this project will also help to motivate staff and build team morale.

EP.244

Enabling sustainable practice change in the complex environment of critical care

Anna Herbert1, Rachel Howarth1, Jacqueline Baldwin1 and Thomas Owen1

1Lancashire Teaching Hospitals NHS Trust, Preston, UK

Abstract

Introduction: Research provides the evidence base for best practice in healthcare. However, research in critical care is a challenging undertaking. The heterogeneous patient population and the critical nature of patients’ illnesses, with a high incidence of multiple overlapping illnesses, multi-organ failure and mortality, make it a complex task. Patient diversity and the unpredictability of their illness, along with significant ethical considerations, have led to difficulties in obtaining positive research outcomes and implementing evidence-based practice.

Implementation is not an inevitable consequence of the discovery of evidence-based innovations, with approximately 50% of evidence-based practices never reaching widespread and sustained clinical use. This qualitative study explored which factors are required to implement and sustain change in the complex healthcare environment of critical care medicine.

Methods: Clinical staff across the multi-disciplinary team were invited to participate in semi-structured focus groups and individual interviews to discuss the barriers and enablers to sustainable practice change within critical care. A total of 20 participants from 2 focus groups and 9 interviews provided the data for the study. Audio recordings of the discussions were fully transcribed for analysis.

Results: Thematic analysis of the data generated 5 themes that directly affect the ability to implement and embed practice changes, specific to the context of the CrCu. These can be grouped into 2 core concepts of collaborative MDT working and education, training and review of practice, as illustrated in figures 1 and 2.

These 5 themes can be viewed as supporting structures for change. Their utility and effectiveness needs to be fully optimised to support the process of embedding change in clinical practice. The study data reflects an inextricable link and interdependence between the themes and concepts, as illustrated in figure 3:

Conclusions: This qualitative study highlights 5 inextricable supporting structures required to enable the implementation and embedding of change into clinical practice within the complex environment of critical care. The results can be used to create a process model for change, optimising and utilising these structures, specific to the needs of individual departments.

Figure 1. Broad concept, themes and subthemes.

Figure 2. Broad concept, themes and subthemes.

Figure 3. Inextricable supporting structures to embedding change in practice in the CrCu.

EP.245

Developing sustainable Systems-Based Training in Trauma Intensive Care in Myanmar

Julia Neely1, Tom Bashford2, Poppy Aldam2, Dhupal Patel2, Lauren Ward1, Jacqueline Wilson3, Naing Naing Linn4, Ronan O'Leary1, Thinn Thinn Thlaing4, Mu Mu Naing4 and Rowan Burnstein1

1Cambridge University Hospitals NHS Foundation Trust, Cambridge, UK

2Division of Anaesthesia, Department of Medicine, Cambridge University, Cambridge, UK

3Cambridge University Department of Anaesthesia, Cambridge, UK

4University Medical School 1, Yangon, Myanmar

Abstract

Traumatic injury is the major cause of intensive care admission in Myanmar. Cambridge University Hospitals NHS Trust (CUH) has been working with Yangon General Hospital (YGH), Myanmar, to improve the care of trauma patients in Yangon. Coordinated through the CUH Global Health Partnership Program, Addenbrooke’s Abroad, this has involved multiple disciplines including pathology, physiotherapy, orthopaedic surgery, nursing, and intensive care & anaesthesia.

The core question we sought to address from an intensive care perspective was how best to support the delivery of trauma intensive care in a safe, ethical, and effective way. Over a series of visits we developed and evaluated a three-day intensive care training program (DelTICa: Delivering Trauma Intensive Care) which has been integrated into the postgraduate curriculum for intensive care training at University of Medicine 1, Yangon. The program draws on general principles of intensive care embedded in a systems-based framework with specific application to the management of trauma patients.

In developing the program our guiding principles have been:

• Improving trauma care requires a multidisciplinary systems approach.

• Partnerships benefit from shared leadership; two-way exchange, formal and informal teaching, clinical mentoring and academic collaboration.

• Sustainable development of systems and education requires local leadership and local solutions.

Following an initial scoping visit we have run the program three times in the last 2 years, serially evaluating and developing the program to meet local needs. We have introduced novel, small group, multidisciplinary teaching, involving local senior doctors as an increasing presence on the faculty each visit. An external evaluation of the Partnership supported ongoing development of the course.

Specific challenges have been ensuring the course impacts on the delivery of sustainable changes in the workplace, and supporting local faculty to deliver the course independently. Incorporating nursing and physiotherapy as part of the international faculty, as well as attendees, has been challenging but successful.

Next steps involve: transferring delivery and evolution of the course to local faculty so it can be maintained independently, developing global health fellowship program exchanges within anaesthesia and intensive care, and supporting the development of academic collaboration between CUH and YGH.

EP.246

Mapping the Provision of Intensive Care in Myanmar

Tom Bashford1, Katie Macdonald2, Dhupal Patel1, Naing Naing Linn3, Kyi Kyi San3, Thinn Laing3, Mu Mu Naing3, Zaw Wai Soe3 and Rowan Burnstein4

1Division of Anaesthesia, Department of Medicine, Cambridge University, Cambridge, UK

2Kings College Hospital, London, UK

3University Medical School 1, Yangon, Myanmar

4Cambridge University Hospitals NHS Foundation Trust, Cambridge, UK

Abstract

The need to develop intensive care services parallels the progression of health care delivery in low and middle income countries (LMICs). In the last 5 years Myanmar has undergone a period of rapid expansion in health care provision, but the burden of both critical illness, and critical care provision, remains difficult to quantify. Several accounts have been published attempting to survey these in LMICs as recognition of the value of critical care has grown.

We were invited to undertake a national survey of adult intensive care provision in Myanmar. The work presented many challenges, but two fundamental issues were:

1. There is no internationally agreed definition of what constitutes an intensive care unit.

2. There is also no validated means of assessing intensive care delivery in LMICs.

We developed a tool based on previously published surveys in LMICs and the available national standards for intensive care provision, working with local clinicians to ensure local relevance. The tool surveys multiple domains including facilities, access to equipment, drugs and consumables, workforce, medical education, and governance processes.

We located 17 adult intensive care units across three cities in Myanmar (Yangon, Mandalay and Nay Pyi Taw) but could not locate any intensive care units outside these areas. The survey was undertaken in English, using local and UK translators as necessary. Initially, a paper survey was distributed, followed by a visit from members of both the UK and Myanmar team.

We found 95 level 3 intensive care beds across the 17 hospitals (Myanmar’s population is approximately 61 million). All units delivered a mixture of level 2/3 care, with all beds equipped to deliver level 3 care with the exception of renal support, which was virtually absent. Our workforce assessment showed nurse-to-patient ratios of 1:3 for level 3 patients as the norm.
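For context, the survey numbers imply a very low national bed density; a back-of-envelope sketch using the approximate population figure:

```python
# Level 3 ICU bed density implied by the survey (approximate figures).
level3_beds = 95
population = 61_000_000  # approximate national population of Myanmar

beds_per_100k = level3_beds / population * 100_000
print(f"{beds_per_100k:.2f}")  # ~0.16 level 3 beds per 100,000 population
```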

Ongoing medical education, including training in intensive care medicine, and resuscitation training, especially for nursing staff, was limited. Specialist medical advice (microbiology, radiology) and AHP support (dieticians, SLT, physiotherapy) within intensive care was also very limited. The presence of guidelines, governance structures, and IT support was minimal and rarely formalized. Critical care outreach was undeveloped with referrals to ICU usually made by phone call.

Our survey tool provides a means for assessing the development of ICU provision both at an individual hospital level and nationally, with the next step to develop and deliver a national ICU plan informed by its data.

EP.247

Comparing rates of hyperthermia amongst cardiac arrest survivors, pre and post the introduction of Targeted Temperature Management (TTM)

Andrew Stewart1 and Andy Temple1

1Sheffield Teaching Hospitals, Sheffield, UK

Abstract

Sheffield Teaching Hospitals is a tertiary centre accepting high numbers of critical care admissions for comatose survivors following cardiac arrest. Therapeutic manipulation of body temperature has long been recognised as an important element of post cardiac arrest care. In light of the TTM trial and updated guidelines published by the International Liaison Committee on Resuscitation (ILCOR), clinical focus has shifted from offering therapeutic hypothermia (TH) to focusing more on the stringent prevention of hyperthermia. At our institution, a newly updated clinical guideline sets the standard of care for the post cardiac arrest patient. Recent audit data revealed higher rates of hyperthermia with a target set at 36 °C. Thus, our current strategy is to lower body temperature to 34–35 °C.

We compared audit data both pre and post the introduction of TTM, coinciding with changes made to local management guidelines. Across this time period, patients were cooled using a Laerdal Medicool® device. Data were collected electronically via Metavision (IMD software).

In the initial audit, data was pooled from 2010 to 2012 during which 69 patients received TH following cardiac arrest. The second audit included patients admitted from January 2016 to January 2017. Over this time period, 67 patients received TTM.

Group | % Patients with temperature >37.5 °C within first 72 hours | % Survival to critical care discharge
TH data (2010–2012) | 77 | 55
TTM data (2016–2017) | 48 | 53
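The claim that TTM-era patients were significantly less likely to develop hyperthermia can be checked with a two-proportion z-test. The counts below are reconstructed from the rounded percentages (77% of 69 and 48% of 67), so the sketch is illustrative rather than a reproduction of the authors' analysis:

```python
import math

# Two-proportion z-test for hyperthermia rates across the two audit eras.
# Counts are reconstructed from rounded percentages, so approximate only.
x1, n1 = 53, 69  # TH era: patients >37.5 C within 72 h
x2, n2 = 32, 67  # TTM era

p1, p2 = x1 / n1, x2 / n2
p_pool = (x1 + x2) / (n1 + n2)
se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
z = (p1 - p2) / se
p_two_sided = math.erfc(abs(z) / math.sqrt(2))  # normal-tail p-value
# z ≈ 3.5, p < 0.001: consistent with a significant difference in
# hyperthermia, while the small survival difference (55% vs 53%) is not.
```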

Patients in receipt of TTM were significantly less likely to develop hyperthermia during the first 72 hours following return of circulation. Despite a marked difference between groups, this did not produce a significant difference in mortality measured at critical care discharge.

Adopting a modest temperature target may result in fewer patients developing hyperthermia during the first 72 hours following return of circulation. This may lead to more favourable neurological outcomes. Rationale for this finding may be explained by considering both physiology and medical intervention. At modest induced hypothermic states, one may expect lesser influence of the body’s homeostatic mechanisms acting to restore normothermia. Secondly, during the rewarming phase following TH, it is perhaps more likely that active warming methods were used by clinical staff leading to higher rates of overshoot. Further research is required to establish whether this is a consistent trend, and to determine if a reduction in the incidence of hyperthermia correlates with improved neurological outcomes.

EP.248

Are we too relaxed with Targeted Temperature Management in out of hospital cardiac arrest patients?

Zhihong Yao1, Aaron Madhok1 and Bernard Foex1

1Critical Care Department, Manchester Royal Infirmary, Manchester, UK

Abstract

Background: In the UK, emergency medical services attend around 28,000 out of hospital cardiac arrests (OOHCA). Targeted Temperature Management (TTM) improves outcome in patients with a cardiac origin of OOHCA. This audit evaluated whether OOHCA patients admitted to the Intensive Care Unit (ICU) received correct TTM.

Methodology: Data for patients admitted with OOHCA were collected retrospectively between January 2016 and March 2017. Patients’ demographics, time of return of spontaneous circulation (ROSC), causes of cardiac arrest, cardiovascular intervention, first 24-hour temperature management and 12-month outcomes were collected. Patients with a presumed cardiac origin of OOHCA, ROSC <60 minutes, intubated and ventilated, were deemed suitable for TTM.

Results: A total of 43 (21 male, 12 female) OOHCA patients were included. 74% (N = 32) of OOHCA patients were of cardiac origin and 26% (N = 11) were due to other causes. The average age was 60 years (range 20–81). 62% (N = 27) of patients had downtime <15 minutes, 28% (N = 12) had downtime between 15–60 minutes and 9% (N = 4; cardiac = 2 and non-cardiac = 2) had downtime >60 minutes.

After applying the exclusion criteria, 29 patients were found to meet the criteria for TTM. The exclusions included non-cardiac origin of OOHCA (N = 11), ROSC >60 min (N = 2), and un-witnessed asystole (N = 1). In the inclusion group, 31% (N = 9) of patients had their temperature kept <36 °C. In 69% (N = 20) the temperature exceeded 36 °C at least once during the initial 24 hours. Patients kept <36 °C had better outcomes: full recovery (78% versus 55%) and a lower rate of hypoxic encephalopathy (0% versus 25%) when compared to the non-compliant group. The characteristics of age, ROSC time and cardiovascular intervention were similar between both groups.

In the non-cardiac origin OOHCA group (N = 11), patients had a higher rate of death (64% vs 21%) when compared to the cardiac origin OOHCA group. The non-cardiac causes included drug toxicity (N = 2), food aspiration (N = 3), pulmonary embolism (N = 1), cerebral vascular disease (N = 1), infective endocarditis (N = 1), sepsis (N = 2) and anaemia (N = 1).

Conclusion: Our study suggests that a cardiac origin of OOHCA is associated with better outcome compared to non-cardiac origin. Patients managed according to the TTM protocol had improved outcome, which is consistent with the literature. Sub-optimal compliance with TTM was largely due to late application of cooling methods. TTM education and a new protocol have been developed to start cooling measures once central temperature rises above 35 °C.

EP.249

The Fewer the Better? Low-Dose Thrombolysis Is Effective in Acute Venous Thromboembolism: A Pilot Study

Geoffrey Horlait1, Pierre Bulpa1 and Patrick Evrard1

1CHU UcL Namur, Yvoir, Belgium

Abstract

Background: To determine whether prolonged low-dose urokinase could be an alternative in acute venous thromboembolism (VTE) disease when standard thrombolysis is contraindicated.

Methods: We conducted a prospective monocentric study using retrospectively collected data from 35 patients with a confirmed diagnosis of venous thrombosis (VT) of a large venous trunk (superior or inferior vena cava, portal vein or mesenteric trunks) without guideline-directed indications for fibrinolysis, or pulmonary embolism (PE) with contraindication to conventional thrombolytic therapy. Urokinase was started at a dose of 500 units/kilogram/hour (U/kg/h) as a continuous infusion, increased by 500 U/kg/h every 24 hours. The primary outcome was resolution of the obstruction or improvement of symptoms. The main safety outcomes were major bleeding and discharge from ICU.
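The escalating infusion protocol described above can be sketched as follows; the 70 kg weight in the usage lines is a hypothetical example, not a figure from the study:

```python
def urokinase_rate(weight_kg: float, hours_elapsed: int) -> float:
    """Continuous infusion rate (U/h) under the escalating protocol:
    start at 500 U/kg/h and step up by 500 U/kg/h every full 24 hours."""
    per_kg = 500 * (1 + hours_elapsed // 24)
    return per_kg * weight_kg

# Hypothetical 70 kg patient:
urokinase_rate(70, 0)   # → 35000 U/h during the first 24 hours
urokinase_rate(70, 30)  # → 70000 U/h on the second day
```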

Results: Efficacy, defined by disappearance of the pathological image or clinical improvement, was observed in 24 patients (77.1%): 90% in the PE group (95% CI 0.55 to 1) and 72% in the VT group (95% CI 0.51 to 0.88). The duration of lysis varied from 4 to 8 days in the PE group and 1 to 9 days in the VT group (mean = 7 days). Side effects, mainly hemorrhage (epistaxis, puncture-site hematoma), occurred in 5 patients, but only 1 required a transfusion (5.7%) for intra-abdominal hemorrhage. The dose had to be reduced for side effects in 22.9%. Death or hemodynamic decompensation occurred in 1 patient (2.9%).

Conclusion: Prolonged thrombolysis with low-dose urokinase could be a safe alternative in acute VTE disease when recommended thrombolytic dosages are contraindicated.

EP.250

Adequacy of Venous Thromboembolism prophylaxis in Intensive Care Unit patients

Kathleen Karauda1, Kunal Waghmare1 and Maria Maccaroni1

1Basildon and Thurrock University Hospital, Basildon, UK

Abstract

Introduction: Venous thromboembolism (VTE) is a medical condition that includes pulmonary embolism (PE) and deep vein thrombosis (DVT). Up to 25,000 patients die in hospital every year due to VTE. For this reason, VTE has been recognised as a clinical priority by the National Quality Board, NICE and the Secretary of State.

It can affect every hospitalised patient, particularly patients admitted to the Intensive Care Unit (ICU), and is one of the major causes of mortality and morbidity. The incidence of VTE increases without prophylaxis and is even higher in patients with risk factors. Various guidelines and recommendations have been made for VTE prophylaxis based on risk stratification, but none of them has been extensively validated.

Aims:

To evaluate 1) current VTE prophylaxis regime followed in our ICU,

2) appropriate assessment and documentation of risk factors,

3) written justification and appropriate dosing of VTE prophylaxis.

Methods:

50 patients admitted to ICU over the next 2 months will be selected randomly and evaluated for 3 important issues:

a) whether or not the patients are receiving any form of VTE prophylaxis,

b) whether or not the risk stratification was documented,

c) whether or not each patient is receiving adequate VTE prophylaxis based on risk stratification.

All the data will be collected on data collection proforma and will be statistically evaluated.

Results: The obtained data will be evaluated and the results will be presented in trust audit meetings and other ICU meetings. Appropriate recommendations will be made based on our findings.

Conclusion: Ideally all the patients should have VTE prophylaxis, but whether the dosing was based on risk stratification needs to be assessed. Also correct documentation is very important. Timely risk assessment saves lives. Everyone has a role in VTE prevention – consultants, junior doctors, nurses, midwives, pharmacists, ward clerks … It's a whole system approach.


EP.251

Stress-Induced Hyperglycaemia and Glycaemic Control in Non-Diabetics: Is good glucose control in non-diabetics on ITU associated with better outcomes?

Zoe McMillan1 and Michael Reay2

1University of Birmingham Medical School, Birmingham, UK

2The Dudley Group NHS Foundation Trust, Dudley, UK

Abstract

Introduction: Stress-induced hyperglycaemia (SIH) in non-diabetic patients is a well-recognised marker of disease severity in the ITU, and is associated with poor patient outcomes [1]. Whether SIH is a protective response or a dangerous side-effect is poorly understood. Glucose targets for SIH remain to be clarified [2].

Audit standards:

The primary aim of this audit was to determine the glucose control of non-diabetic patients.

The expected standard was a glucose target of <10 mmol/l during their ITU treatment.

A secondary aim was to compare outcomes associated with good or poor glucose control.

Method: Data were extracted from our electronic charting system (ICIP, Philips ICM) to include all non-diabetic patients on ITU who required glycaemic control with insulin between 2011 and 2016. A previous audit suggested good compliance with glucose monitoring.

StatStrip glucose readings, insulin doses, patient demographics (age, gender, weight), APACHE II scores and patient outcomes (ITU/hospital mortality, ITU/hospital length of stay (LOS) and ITU organ support days (OSD)) were obtained.

Patients were classified as having good control if their StatStrip readings were ≤10 mmol/l for at least two-thirds of their ITU stay.

Results: Patient demographics were comparable in both groups.

68% of non-diabetic patients had good glucose control throughout their ITU stay; 52% had good glucose control whilst on insulin.

Non-diabetic patients on insulin with good glucose control had a demonstrable reduction in ICU and hospital mortality (RR = 0.44, p = 0.007 and RR = 0.50, p = 0.003 respectively). Insulin dose and duration of administration did not seem to influence outcomes.

Conclusion: Although the mechanisms underlying SIH are poorly understood, it seems to be a reliable indicator of disease severity [2]. Current guidelines advise maintaining blood glucose levels under 10 mmol/l.

Our audit suggests that glucose control in non-diabetics on insulin is achieved two-thirds of the time, with better outcomes than in patients with poor control. However, poor control may indicate insulin resistance, which possibly predicts a poor outcome.

References

  • 1.Marik P, Bellomo R. Stress hyperglycemia: an essential survival response! Critical Care 2013; 17: 305. [DOI] [PMC free article] [PubMed]
  • 2.Hsu C. Glycaemic control in critically ill patients. World Journal of Critical Care Medicine 2012; 1: 31–39. [DOI] [PMC free article] [PubMed]

EP.252

Investigating Standards of Delirium Assessment

Peter Harding1, Kieron Rooney2 and John Bell3

1The University of Bristol Medical School, Bristol, UK

2Consultant in Intensive Care Medicine, Bristol Royal Infirmary, University Hospitals Bristol NHS Foundation Trust, Bristol, UK

3Senior Nurse, General ICU, Bristol Royal Infirmary, University Hospitals Bristol NHS Foundation Trust, Bristol, UK

Abstract

Introduction: Delirium during an intensive care unit (ICU) admission is very common with an incidence of up to 65% in ventilated patients (1). Detection of ICU delirium is important as it is an independent predictor of mortality, increased hospital length of stay, increased cost of care and development of cognitive impairment; however, it is frequently missed, particularly in patients with the hypoactive subtype of delirium. The Confusion Assessment Method for the Intensive Care Unit (CAM-ICU) is a tool that has been developed to assist clinicians in the detection of delirium in ICU patients (2). In the general ICU of University Hospitals Bristol (UH Bristol) NHS Foundation Trust, patients are expected to receive at least one CAM-ICU assessment in a 12-hour period (≥1 per nursing shift). Assessments are usually performed by the bedside nurse.

Following several years of an iterative delirium improvement programme, the overall delirium rate on UH Bristol ICU is 15% (unpublished data); however, the validity of these results depends on the correct application of the CAM-ICU.

This project was undertaken as part of an ongoing quality improvement project to reduce delirium in patients on ICU at UH Bristol. It had two aims. Firstly, to establish whether CAM-ICU assessments were taking place on the unit according to the local standard. Secondly, to observe whether documentation of CAM-ICU assessments contained evidence of errors in the performance of assessments compared to the training and guidance provided in the literature.

Method: The electronic clinical information system (CIS) was retrospectively examined for all patients present on ICU during June 2017. The number of 12-hour periods in which patients received ≥1 CAM-ICU assessment and the number of errors made in CAM-ICU documentation were counted, excluding patients who met the criteria of ‘unable-to-assess’.

Results: 125 patients passed through the general ICU between 1st and 30th June 2017, and 1034 12-hour periods were reviewed. At least one CAM-ICU assessment was performed during 85% of 12-hour periods. There were discrepancies in the documentation of CAM-ICU assessments in 13% of 12-hour periods.

Conclusion: Compared to the expected standard, the compliance in this unit was good, reflecting a level of compliance seen in other studies. This gives confidence that the reduced delirium rates seen are a true reflection of improved practice.

EP.253

Direct Discharge Home from Critical Care: A safety audit

Siyao Xing1, Jayachandran Radhakrishnan2, Kevin Kiff2, Jake Collins2 and Ashraf Roshdy2,3

1Watford General Hospital, Watford, Hertfordshire, UK

2General ICU – Broomfield Hospital – Mid Essex NHS Trust, Chelmsford, UK

3Critical Care Department – Alexandria University, Alexandria, Egypt

Abstract

Introduction: Acute trusts across the NHS are facing severe bed pressures, with bed occupancy rates consistently above 90% in some trusts. Disruption of patient flow is especially problematic in the UK, where the critical care bed-to-population ratio is one of the lowest in the developed world. The number of patients discharged directly to home from critical care has increased as a consequence. In this retrospective, observational study we examined the safety of direct discharges from a critical care unit that has been particularly susceptible to this problem.

Materials and Methods: All patients who were discharged directly to home from the Broomfield Critical Care Unit, from January 2014 to December 2016, were included in the study.

Admission to any part of the hospital within 28 days of discharge from the critical care unit was considered to be a readmission. Simple statistical measures were used to compare patients discharged to the ward (DW) and directly discharged home (DD).

Results: The proportion of DD patients approximately doubled every year in the study period (3.7%, 8.8%, 15.7% from 2014 to 2016). While the number of admissions increased approximately linearly, the number of discharges to the ward showed a steady decrease.

When compared to conventional discharges, DD patients tended to be younger (mean age 46.3 years (SD 19 years) compared to 64.9 years (SD 17 years)) and to have a shorter critical care stay (median of 2 compared to 4 days). Patients discharged home were commonly admitted with diabetic ketoacidosis (DKA, 35%), drug overdose (12%) and seizures (8%). One patient, readmitted with DKA, died in the three-year period.

Readmission rates for DD patients were similar throughout the study period and comparable to those for DW patients (Table 1).

Conclusions: The proportion of patients discharged directly to home is increasing. However, these discharges occur in a relatively healthy subset of younger patients with a small group of pathologies. In this patient subset, direct discharge is not associated with increased readmission rates or excess mortality.

Acknowledgments: We would like to thank Olalekan Agboluaje (Information analyst), Barbara O'Leary (ICNARC team) and Dr. Keith Gunasekara in Broomfield hospital for their help in data provision.

Table 1.

Readmission rates from different patient discharge destinations.

Discharge Area 2014 2015 2016
Hospital 8.26% 7.87% 8.39%
Home 7.69% 8.33% 8.78%

EP.254

Improving the Availability of Dedicated Bedside Stethoscopes in the Critical Care Units of a UK University Teaching Hospital

Li Lin Hong1, Laura Massey1, Ruth Pettit1, Ashley Verghese1 and Gareth Gibbon1

1Nottingham University Hospitals NHS Trust, Nottingham, UK

Abstract

Background: It is accepted that dedicated bedside stethoscopes (DBS) should be immediately available for the recognition of life-threatening complications in critically ill patients and to reduce the risk of pathogen cross-transmission. Anecdotal reports suggested that DBS were rarely available in our critical care service. The aim of this Quality Improvement Project was to explore the problem, agree multi-disciplinary consensus solutions and introduce interventions that would lead to improvement.

Methods: A questionnaire was circulated to nurses, doctors and physiotherapists working in the critical care department at the Queen’s Medical Centre. Responses were reviewed by a multi-disciplinary taskforce (MDT) which subsequently defined standards and goals and coordinated interventions.

Results: Our survey received 110 responses. Forty-seven percent of respondents stated that they perceived a DBS was unavailable more than half the time. When available, 67% of nurses reported that the DBS would be missing by the end of the shift. Over half of doctors and physiotherapists had difficulty locating DBS. When DBS were unavailable, 63% used personal stethoscopes, and the remainder used a bedside stethoscope dedicated to a different patient. However, only 53% reported cleaning non-DBS stethoscopes with antimicrobial wipes before and after each patient encounter. The main factors contributing to the unavailability of DBS included the lack of a standardised location to keep stethoscopes, misplaced stethoscopes, insufficient quantity and poor-quality stethoscopes. Almost all respondents (95%) felt that the lack of DBS was a significant problem which could hinder the delivery of safe care. Pre-survey, 75.1% of beds were found to have a DBS. Our MDT agreed to standardise the location of DBS when not in use and to introduce a campaign to spread this message across the team with a variety of interventions. We agreed standards that over 90% of beds should have a DBS, over 90% of DBS should be kept in the standard location when not in use, and 90% of staff should know where this location was. We can report an increase in the average number of DBS-compliant beds from a pre-survey rate of 75.1% to our most recent measurement of 90%.

Discussion: We believe that simple, inexpensive and targeted interventions have already led to a demonstrable improvement in DBS availability in our critical care units. At time of submission, data collection is on-going to evaluate for sustained improvement in the availability of DBS and staff compliance in returning DBS to the appropriate location. We hope to present more data in December.

EP.255

The Effect of Delayed Discharge on Patient Related Outcomes

Callum Kaye1 and James Forrester1

1Aberdeen Royal Infirmary, Aberdeen, UK

Abstract

Introduction: Discharge from critical care to the general ward within 4 hours of the decision to step down care is a standard contained in the Intensive Care Society (ICS) Guidelines for the Provision of Intensive Care Services (GPICS) and is a standard reviewed in the Scottish Intensive Care Society Audit Group (SICSAG) report. Although there are obvious implications for the unit, little is known about the impact of delayed discharge on the individual patients.

Methods: The local SICSAG dataset for Aberdeen Royal Infirmary General ICU was reviewed from 1st January 2016 to 27th August 2017 to identify baseline characteristics, discharge delay reason, unit and hospital outcome and any healthcare associated infections (HAIs) diagnosed during their ICU stay.

Results: A total of 831 cases were reviewed, with complete data for 613 patients who were not delayed and 184 whose discharge was delayed by greater than 4 hours. The two groups were similar based on APACHE II score and predicted mortality. Patients with a delayed discharge had a 4.3-day longer stay from ICU to hospital discharge and a 4.3% increase in hospital mortality. Six- and twelve-month survival was similar between the groups, as seen in the table below.

Not delayed (n = 613) Delayed (n = 184)
 n % n %
Survival at Hospital Discharge 567 88.60% 171 92.90%
Survival at 6 months 567 92.50% 171 92.90%
Survival at 12 months 567 92.50% 171 92.90%
Readmissions 31 5.10% 8 4.35%

HAIs were diagnosed in 2% of the not-delayed patients and 2.72% of the delayed patients, with a greater proportion of the HAIs in the not-delayed patients being bloodstream infections. All the infections in the delayed patients were pneumonias.

Discussion: Although a delayed discharge from ICU is suboptimal for unit and wider hospital resources, it is reassuring that there is no obvious impact on important patient-level outcomes, such as mortality and HAIs. The difference in LOS between delayed and non-delayed patients suggests there may be a delay in specialty-specific rehabilitation, and further work should look at whether in-reach from such services will improve this outcome. This study does not consider patient feedback or any psychological distress associated with being in a critical care setting; these should be considered in future work.

EP.256

Development of the ACCP role across the North West of England

Emma Parkin1, Alexandra Larkin1, Annabella Gloster2 and Jonathan Goodall1

1Salford Royal Critical Care, Salford, UK

2Salford University, Salford, UK

Abstract

Introduction: The ACCP role has developed substantially over the past few years and continues to evolve. Within the North West there were advanced practitioners working within critical care, but only three with FICM associate membership. With increasing service demand, strategies for future workforce planning and the successful implementation of a small number of ACCPs across the North West region, the need to develop this role was identified.

Method: A steering group, consisting of critical care network lead nurses, medical and non-medical consultants, ACCPs and the programme lead of the MSc in Advanced Practice at the University of Salford, was formed to work collaboratively to explore ACCP training in line with FICM accreditation.

Initial work focused on integrating the competencies from the FICM curriculum with the work-based Masters at Salford University. This was achieved by a small sub-group comprising a Faculty member, a nurse consultant in critical care and the programme lead. A universal job description and memorandum of understanding were developed for both trainee and qualified ACCPs across the Greater Manchester and Lancashire and Cumbria critical care networks.

Whilst identifying service needs and establishing trainee posts across the various networks, the steering group discussed the need for regional education, support and networking for trainee and qualified ACCPs.

Conclusion: Following this collaboration there are now 16 trainee ACCPs at the University of Salford, training to work in a variety of critical care settings and drawn from both physiotherapy and nursing backgrounds. On qualification, owing to the work undertaken by the collaborative, this cohort will have achieved the required standards stipulated by the FICM curriculum.

Education and networking are improving across the North West with the aid of Facebook and Twitter pages, and regular networking days have been established.

The university has developed an opportunity for existing ACCPs holding honorary lecturer contracts within the University to be involved in the ongoing education and development of the trainee ACCPs. The establishment of, and ongoing commitment to, the ACCP role in the North West will be further supported through steering group objectives to future-proof the role and support longevity and ongoing career progression for ACCPs.

Abstracts selected for New Generation presentations

NG.001

hocuspocus.org.uk: A web-based platform enabling critical care ultrasound accreditation via remote mentoring

Jonathan Bedford1, George Reid2 and Ben Attwood2

1Oxford University Hospitals NHS Trust, Oxford, UK

2South Warwickshire NHS Foundation Trust, Warwick, UK

Abstract

Point-of-care ultrasound (POCUS) is increasingly recognised as a valuable adjunct to patient care. Trainees in intensive care medicine are expected to accredit in focused intensive care echocardiography, but the availability of trained mentors and logistical/geographical factors make this difficult within the required time constraints. As a result, many trainees who are enthusiastic about POCUS find it difficult to achieve accreditation.

We present a secure, web-based, multi-user system which mitigates many of these difficulties and allows for clinical mentorship to take place without geographical barriers, and at a time convenient for the participants. Hocuspocus.org.uk (Hands On, Cloud Uploading Service for Point Of Care Ultrasound) represents a significant advance over the original process using shared spreadsheets[1].

Once the trainee and mentor have both created an account on the site and assigned their mentor/mentee relationship, trainees can enter the details required by the FICE and/or CUSIC schemes for each of their scans. Scan data is copied from the ultrasound device in the standardised DICOM file format. The scan clips are uploaded to an external, third-party online DICOM file-sharing service which is free to access (www.dicomlibrary.org). The scans are automatically fully anonymised at the point of upload. Each scan is then made available via hyperlink from the relevant case record on HOCUS POCUS: the website does not store the scan data itself.

At present, the database holds 17 FICE, 16 CUSIC (pleural) and 6 CUSIC (abdominal) scans, with this number expected to increase significantly as more users register for an account.

A survey of users demonstrated that 83% of the respondents strongly agreed that HOCUS POCUS is easier to use than paper forms, with 100% of respondents agreeing that the site is well-designed and intuitive to use. Reflecting their unequal distribution across hospitals, two thirds of respondents agreed that there are problems matching mentors with trainees, with the remaining third reporting no problems locally. All respondents agreed that HOCUS POCUS will make it easier for trainees to achieve accreditation.

HOCUSPOCUS.org.uk is still in its infancy and requires further testing to ensure it meets the needs of its users and the accreditation schemes. We welcome any feedback and/or criticism.

[1] "HOCUS POCUS: Hands-On Cloud Uploading Service for Point Of Care UltraSound", poster presentation reference 0084, ICSSOA 2016.

NG.002

A case of aortic dissection and pericardial tamponade: Where FICE has taken us

Dominic Moor1 and Theophilus Samuels1

1Surrey and Sussex Healthcare NHS Trust, Redhill, UK

Abstract

Introduction: The dissemination of FICE training is an exciting development in intensive care training. The ability to deliver round-the-clock, rapid, targeted assessment at the bedside in the hyperacute setting is incredibly valuable; however, it raises new challenges in terms of how we deal with our findings.

Case Report: We present a case and echo images of pericardial tamponade leading to loss of consciousness, severe hypotension and near cardiac arrest in a district general hospital.

The patient, a previously fit and well 76-year-old female, had attended one evening complaining of paroxysmal chest pain radiating down her left arm. An unremarkable ECG suggested a likely NSTEMI; dual antiplatelet therapy and fondaparinux were started in line with our local protocols.

At 0300 the following morning, on the Acute Medical Unit (AMU), the patient was noted to be agitated and dyspnoeic.

At the peri-arrest call the ICU registrar began to perform a FICE protocol scan and immediately noted a large pericardial effusion. At that stage it was also noted that the patient had distended neck veins despite equal and vesicular breath sounds.

A pericardial drain was inserted by the ICM trainee using a Seldinger technique via a subcostal approach. CT revealed a Type A dissection.

Urgent transfer to a cardiac centre facilitated receipt of a proximal aortic arch replacement and the patient made a full recovery to discharge from hospital.

We are convinced that FICE training was responsible for saving this patient's life.

Discussion: The dissemination of FICE has meant that interventions that were almost solely the domain of cardiologists are now much more likely to become the responsibility of intensivists and of the allied acute specialties that are embracing point-of-care echo.

We believe that we have reached a point where FICE should form a core part of the mandatory FICM training curriculum. Indeed, there is also a strong case for this to be extended to the curricula of the allied acute specialties of Anaesthesia, Emergency Medicine and Acute Internal Medicine. Given that the principal advantage of FICE is rapid delivery at the bedside, we will miss out on a huge part of its potential yield if we do not ensure that all of our specialty trainees are appropriately skilled to deliver this service.

NG.003

What becomes of the broken hearted? Critical care outcomes in cardiomyopathy

Zoe Riddell1, Rinesh Parmar1 and Michael Reay1

1The Dudley Group NHS Foundation Trust, Dudley, UK

Abstract

Introduction: Patients with cardiomyopathy admitted to district general hospital critical care units constitute a unique patient population about which we know little. Cardiomyopathy encompasses a range of disorders of myocardial dysfunction; according to the World Health Organization (1), they are classified as intrinsic or extrinsic. The authors’ experience of a critical care admission following a cardiac arrest in a patient with cardiomyopathy prompted this review.

Method: We reviewed the case records of all patients admitted to our critical care unit, with cardiomyopathy or myocarditis as a primary or secondary diagnosis, between 2009 and 2017. Median APACHE II score, cardiac indices, vasopressor support, organ support days, and length of critical care stay were reviewed. These values were compared to the general critical care population.

Results: 15 patients were identified with cardiomyopathy or myocarditis. The median age was 43 years and 40% were female. The comparator group comprised 423 critical care admissions. 12 patients in the study group survived; of these, 10 (83%) had stress-related cardiac dysfunction.

The mean APACHE II scores were comparable between the two groups. Organ support days were higher in the comparator group, while cardiovascular support was similar. The study population had a shorter critical care length of stay than the general critical care population (2.0 vs 4.03 days).

The mortality in the study group appeared lower than that of the general population (20% vs 23.76%, p = 0.744).

Conclusion: In our patient cohort, those with cardiomyopathy tended to be younger, with similar disease severity to the general critical care population. This is reflected by the fact that most admissions with cardiomyopathy as a primary cause were probably stress related, as opposed to having an underlying myocardial pathology, which explains why their overall organ support was similar to that of the general population with a shorter critical care length of stay. Small numbers limit our observational cohort; however, it does shed some light on the outcomes of this unique subpopulation of critical care admissions.

Reference

  • 1.Richardson P, McKenna W, Bristow M, et al. Report of the 1995 World Health Organization/International Society and Federation of Cardiology Task Force on the Definition and Classification of cardiomyopathies. Circulation 1996; 93: 841–842. [DOI] [PubMed]

NG.004

Review of outcomes of patients admitted to a District General Intensive Care Unit with a diagnosis of Guillain-Barre Syndrome over 5 years, including comparison with Intensive Care National Audit and Research Centre data

Naomi Jennings1, Helen Jones2 and Michael Spivey1

1Royal Cornwall Hospitals Trust, Truro, UK

2Plymouth Hospital NHS Trust, Plymouth, UK

Abstract

Introduction: Guillain-Barre Syndrome (GBS) is an acute inflammatory, immune-mediated polyneuropathy. 33% of patients will require admission to intensive care for monitoring and 30% will require mechanical ventilation, often with prolonged respiratory failure. Patients are usually alert and interactive with staff and relatives, which has a significant impact on units.

Objectives:

1. To review the presentation, management and outcomes of patients admitted to the Royal Cornwall Hospital Trust (RCHT) intensive care unit with GBS between 2012 and 2017.

2. To compare outcomes with national outcomes using the Intensive Care National Audit and Research Centre (ICNARC) database.

Methods: A retrospective review using paper and electronic case notes, Careview technology, Maxims software and PACS programmes. Data were summarized in a Microsoft Excel spreadsheet.

Review of the summary ICNARC data for patients categorized with a primary or secondary reason for ICU admission as GBS between January 2012 and December 2016.

Results: 1,446 patients (0.2% of total admissions) with GBS were identified nationally, including 13 admissions to RCHT.

Mean age was 58.5 years, range 24 to 79.

10 patients required mechanical ventilation. Mean advanced respiratory support days at RCHT were 29.5 compared to 17.3 nationally, with a local range of 7–160 days and a median of 26.5 days. Length of stay was 26.5 days compared to 22.6 nationally.

1 patient died of an intracranial haemorrhage. 7 patients developed VAP, 2 developed thromboembolic disease, 8 had autonomic dysfunction and 6 described neuropathic pain.

Currently 5 patients are living at home, and 1 died after 4 years at home. 3 remain ICU inpatients, 1 remains in rehabilitation and 1 was repatriated.

Discussion: 7 patients had serological evidence of previous infection, although the presumed trigger infections may be unrelated. RCHT had a higher proportion of Campylobacter and hepatitis E, which are associated with a worse prognosis.

The population characteristics of patients admitted to RCHT are analogous to national statistics. Of note are the increased length of stay and increased number of advanced respiratory days. This may be due to small population numbers with outliers requiring prolonged ventilation. Shorter hospital length of stay prior to ICU admission may suggest a rapid disease progression or delayed hospital presentation.

Conclusion: The incidence of ICU admission with GBS has increased over the last 5 years. The length of stay correlates with increased complications. 87% achieve hospital discharge.

Table 1.

Identified pathogens within population.

Pathogen Trigger Number of patients
Campylobacter IgM 3
CMV IgG 1
EBV IgG 1
CMV IgM, EBV IgG 1
Hepatitis E IgG 1
Unidentified 5

NG.005

Communication under pressure: improving clinical interoperability and communication between the Royal Air Force Critical Care Air Support Teams and US Air Force Critical Care Air Transport Team

David Hall1 and Mark Howley1

1Tactical Medical Wing, Royal Air Force, UK

Abstract

International aeromedical evacuation of critically ill or injured UK military personnel and entitled civilians is the responsibility of the Royal Air Force (RAF) Critical Care Air Support Team (CCAST). A similar role is performed within the US military by the US Air Force Critical Care Air Transport Team (CCATT). Despite their similar responsibilities, the organisation, equipment and clinical protocols utilised by the two teams differ in several important areas. With future military operations and training exercises increasingly likely to involve the UK and US as coalition partners, the need for CCAST and CCATT to work together clinically is likely to increase. Experience during recent operations in Afghanistan has demonstrated that accurate and timely handover of clinical care between UK and US teams, particularly in time-pressured situations or austere locations, represents a significant clinical risk and is an area suitable for quality improvement.

Exercise COMBINED JOINT ATLANTIC SERPENT was a recent major exercise involving CCAST and CCATT training together to improve critical-care communication and clinical interoperability, particularly with regard to cross-loading critically injured patients from UK to USAF aircraft. During this exercise, an iterative approach was taken to developing a clinical and communication toolkit to ensure safe clinical and operational handover between RAF and USAF clinical teams.

We describe the development and testing of this checklist over the course of several joint simulated missions, its incorporation into routine use for the transfer of critically ill patients, and lessons identified when care of complex critically ill patients is transferred between teams from different countries and health systems. We suggest these lessons may be applicable for UK critical care units receiving patients not just repatriated from overseas, but also from other UK centres with which they do not have a regular referral relationship.


Articles from Journal of the Intensive Care Society are provided here courtesy of SAGE Publications
