Footnotes
- Innovator Partner: 3M
- Collaborator Partner: Ethicon
- Contributor Partners: Medtronic, Pendopharm, MD Financial Management and Olympus
Procedural simulation has been shown to enhance early endoscopy training. In this proof-of-concept study, we aimed to show that a first-person shooter (FPS) video game played with a novel, in-house designed, modified endoscope controller shares similar constructs with real-life endoscopy.
A non-functioning colonoscope was fastened to a wooden platform and suspended over a sensor connected to a computer. Customized software translated the colonoscope’s movements into computer input. Participants completed the first three levels of the FPS video game Portal (Valve Corporation), first using conventional mouse and keyboard controls and then using the novel endoscope controller. Twelve expert endoscopists and 12 surgical residents with minimal endoscopy experience participated. Participants were evaluated on completion time, number of button presses, and hand motion analysis.
Experts outperformed novices for time to study completion (expert 944 s, novice 1515 s; p = 0.006) and number of hand movements (expert 1263.1, novice 2052.6; p = 0.004) using the novel colonoscope controller. There was no difference in number of button presses or total path length travelled. Self-reported number of past endoscopies was moderately linearly correlated with time to game completion (r = −0.493, p = 0.020) and total hand movements (r = −0.462, p = 0.030). Novices and experts did not statistically differ while using the conventional video game controls.
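The moderate linear correlations reported above are Pearson coefficients, which can be computed directly from paired observations. A minimal sketch follows; the per-participant values are hypothetical (the abstract reports only the summary r values), chosen to illustrate the negative relationship between past endoscopy volume and completion time:

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical data: more past endoscopies -> faster game completion
endoscopies = [0, 5, 50, 200, 500, 1000]
completion_time_s = [1600, 1500, 1300, 1100, 1000, 900]
r = pearson_r(endoscopies, completion_time_s)  # negative, as in the study
```

A significance test on r (the reported p values) additionally requires the t distribution, e.g. via `scipy.stats.pearsonr`.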
Experts outperformed novices using the endoscope controller but not the conventional game controller with respect to economy of movement and completion time. This result suggests that our endoscope-controlled video game shares similar constructs with real-life endoscopy and serves as a first step toward creating a more enjoyable and cheaper alternative to commercially available endoscopy simulators.
The Aboriginal population is known to have a higher prevalence of diabetes and other chronic diseases than non-Aboriginal populations in the same geographic location. It is also presumed that outcomes are worse for Aboriginal patients who develop diabetic complications such as diabetic foot ulcers (DFUs). We aimed to systematically review the literature comparing outcomes of DFUs in Aboriginal and non-Aboriginal populations. The primary outcome measure was major amputation rate.
PubMed, Cochrane, Embase and CINAHL were searched from inception to October 2018. Inclusion criteria were as follows: studies comparing the outcomes of patients with DFUs in Aboriginal and non-Aboriginal populations, studying patient populations limited to Canada, the United States, Australia and New Zealand. Studies were excluded if they were non-English or reported on patients under age 18 years. Risk of bias was assessed using the ROBINS-I tool. RevMan 5.3 software was used for data analysis with effect measures reported as odds ratio with 95% confidence intervals. The I2 statistic was calculated to quantify heterogeneity.
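The effect measure and heterogeneity statistic named in the methods can be sketched directly: an odds ratio with a Woolf-type 95% confidence interval from a 2×2 table, and Higgins' I² derived from Cochran's Q. The counts below are illustrative only, not data from the review:

```python
import math

def odds_ratio_ci(a, b, c, d):
    """Odds ratio and Woolf 95% CI from a 2x2 table:
    a/b = events/non-events in group 1, c/d in group 2."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, lo, hi

def i_squared(q, df):
    """Higgins I^2 (%) from Cochran's Q and degrees of freedom."""
    return max(0.0, (q - df) / q) * 100 if q > 0 else 0.0

# Hypothetical table: 10/90 amputations in one group, 5/95 in the other
or_, lo, hi = odds_ratio_ci(10, 90, 5, 95)
```

RevMan applies the same log-scale arithmetic internally; the sketch simply makes the calculation explicit.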
Six cohort studies were included totalling 244 769 patients with DFUs (2586 Aboriginal, 242 183 non-Aboriginal). Studies were set in Australia, Canada and the United States and included both inpatient and outpatient settings. Four studies were deemed to have moderate risk of bias, and 2 had serious risk of bias. The Aboriginal population was found to have a higher rate of major amputation (odds ratio 1.85, 95% confidence interval 1.04–3.31). With the exception of minor amputation rate, all other outcomes were worse in the Aboriginal population.
There is a paucity of studies comparing DFU outcomes in the Aboriginal and non-Aboriginal populations. Analysis of available studies supports the conclusion that outcomes are worse in the Aboriginal population, particularly the major amputation rate. This review highlights the need for higher quality data collection prospectively detailing risk factors that contribute to poor outcomes.
Growing evidence suggests poorer outcomes for black patients undergoing surgery. The objective of this study was to determine whether black race is associated with worse short-term postoperative morbidity and mortality compared with white race in a contemporary, cross-specialty matched cohort.
A retrospective analysis was conducted comprising all patients undergoing surgery in the National Surgical Quality Improvement Program (NSQIP) data set between 2012 and 2018. One-to-one coarsened exact matching was conducted between black and white patients. The primary outcome was rate of 30-day morbidity and mortality.
After 1:1 matching, 615 118 patients were identified. Black race was associated with increased rate of all-cause morbidity (odds ratio [OR] 1.10, 95% confidence interval [CI] 1.08–1.13, p < 0.001) and mortality (OR 1.15, 95% CI 1.01–1.31, p = 0.039). Black race was associated with increased risk of reintubation (OR 1.33, 95% CI 1.21–1.48, p < 0.001), pulmonary embolism (OR 1.55, 95% CI 1.40–1.71, p < 0.001), failure to wean from ventilator for more than 48 hours (OR 1.14, 95% CI 1.02–1.29, p < 0.001), progressive renal insufficiency (OR 1.63, 95% CI 1.43–1.86, p < 0.001), acute renal failure (OR 1.39, 95% CI 1.16–1.66, p < 0.001), cardiac arrest (OR 1.47, 95% CI 1.24–1.76, p < 0.001), bleeding requiring transfusion (OR 1.39, 95% CI 1.34–1.43, p < 0.001), deep vein thrombosis or thrombophlebitis (OR 1.24, 95% CI 1.14–1.35, p < 0.001) and sepsis or septic shock (OR 1.09, 95% CI 1.03–1.15, p < 0.001). Black patients were more likely to have a readmission (OR 1.12, 95% CI 1.10–1.16, p < 0.001) and discharge to a rehabilitation centre (OR 1.73, 95% CI 1.66–1.80, p < 0.001) or facility other than home (OR 1.20, 95% CI 1.16–1.23, p < 0.001).
This contemporary matched analysis demonstrates an association with increased morbidity, mortality and readmissions for black patients across surgical procedures and specialties.
There is a paucity of evidence surrounding the issue of delays on the day of surgery with respect to both causes and consequences. We sought to determine whether patients whose operations started late were at increased risk of postoperative complications.
We conducted a retrospective cohort study of 1420 first-of-the-day common general surgical procedures, dividing these into on-time start (OTS) and late start (LS) cases. Our primary outcomes were minor and major complication rates; our secondary objective was to identify factors predicting LS. Groups were compared using univariable and multivariable analyses.
The LS rate was 55.3%. On univariable analysis, LS cases had higher rates of major and minor complications (7.3% v. 3.5%, p = 0.002; 3.8% v. 1.6%, p = 0.011). On multivariable analysis, LS was not associated with increased odds of any complications. Minor complications were predicted by operative duration (odds ratio [OR] 1.005, p = 0.002), female sex (OR 1.78, p = 0.037) and undergoing an ileostomy closure procedure (OR 10.60, p < 0.001), and they were reduced in patients undergoing surgery on Wednesdays (OR 0.38, p = 0.023). Major complications were predicted by operative duration (OR 1.007, p = 0.004) and American Society of Anesthesiologists (ASA) class (OR 6.73, p = 0.016). Multivariable analysis using LS as an outcome identified that anesthesia time (OR 1.35, p < 0.001), insulin-dependent diabetes (OR 1.91, p = 0.016) and dyspnea upon moderate exertion (OR 2.52, p = 0.002) were predictive of LS. Most cases in our study started late.
Although this has substantial efficiency and economic costs, it is not associated with adverse patient outcomes. This topic remains incompletely described. Further research is needed to improve efficiency and patient experience by investigating the causes of operative delays.
The COVID-19 pandemic has strained health care resources the world over, requiring health care providers to make resource allocation decisions under extraordinary pressures. A year later, our understanding of COVID-19 has advanced, but our process for making ethical decisions surrounding resource allocation has not. During the first wave of the pandemic, our institution uniformly ramped down clinical activity to accommodate the anticipated demands of COVID-19, resulting in resource waste and inefficiency. In preparation for the second wave of the pandemic, we sought to make such ramp-down decisions more prudently and ethically.
We report the development of a tool that can be used to make fair and ethical decisions in times of resource scarcity. We formed an interprofessional team to develop and use this tool to ensure that the diverse range of stakeholder perspectives were represented in this development process. This team, called the Clinical Activity Recovery Team (CART), established institutional objectives that were combined with well-established procedural values, substantive ethical principles and decision-making criteria using the accountability for reasonableness ethical framework. The result of this is a stepwise, semi-quantitative, ethical decision tool that can be applied to proposed solutions to problems of resource allocation to reach a fair and ethically defensible decision. This ethical decision tool can be applied in almost all conceivable contexts and scales, from the multi-institution level to the provider level, and indeed this is how it is applied at our institution. As the second wave of the COVID-19 pandemic strains health care resources, this tool can help clinical leaders to make fair decisions.
Surgical program directors (PDs) have been identified as being at high risk for emotional exhaustion and burnout. Consequent PD turnover and discontinuity in leadership can affect faculty and trainee success and well-being and the stability of residency programs. Prior studies have documented factors contributing to nonsurgical PD burnout; however, rates of early attrition and contributing factors in surgical PDs have not been investigated. This study examined factors affecting surgical PD satisfaction, stressors and areas where institutions can improve PD support.
A national cross-sectional study of PDs was performed across all accredited surgical subspecialties. Domains assessed via a web-based survey included PD demographic characteristics and compensation, availability of administrative support for programs, satisfaction with the PD role and factors contributing to PD challenges and burnout.
Sixty percent of eligible surgical PDs (81/134) responded to the survey, representing 12 surgical specialties. Substantial heterogeneity was seen in tenure, compensation models and administrative support. All respondents exceeded their protected time for the PD position, and 66% received less than 0.8 full-time equivalent of administrative support. About one-third of respondents (36%) were satisfied with their overall compensation for the position, while 43% were unhappy with compensation models. Most respondents (70%) enjoyed the PD role, specifically relationships with trainees and the ability to shape the education of future surgeons. Stressors included insufficient administrative support, especially around resident remediation, and inadequate compensation, with 37% of PDs considering leaving the post prematurely.
The majority of surgical PDs enjoy the role. However, intersecting factors such as disproportionate time demands, lack of administrative support and inadequate compensation for the role contribute to substantial stress and risk of early attrition. Systematic culture change to support PDs via better defined structural processes and sufficient resources is needed to keep these educators engaged and improve both PDs’ and trainees’ experiences.
Oncoplastic surgery (OPS) enhances breast-conserving surgery with improved oncologic and cosmetic outcomes; however, access to OPS varies greatly across Canada. The objective of this study was to evaluate the outcomes of breast surgery after the introduction of OPS in a community hospital.
All breast oncology surgery cases performed in 2 time periods before (2015–2016) and after (2018–2019) the introduction of OPS by a single surgeon in a community hospital were retrospectively reviewed. Patients undergoing surgery before and after OPS introduction were compared.
Twenty patients underwent breast-conserving surgery in the post-OPS period and 12 patients in the pre-OPS period. Post-OPS patients had more comorbidities (mean Charlson Comorbidity Index score 1.90 v. 1.08, p = 0.024), longer operative time (mean 61.0 v. 46.4 min, p = 0.026) and greater specimen mass (107.4 v. 33.0 g, p = 0.037). Completion mastectomy rate was lower in the post-OPS group (0 v. 4, p = 0.014). Post-OPS breast conservation increased from 30% to 50% (n = 8/27 v. n = 20/40, p = 0.048) and positive margin rate decreased from 25% to 10% (p = 0.26). The post-OPS group had a slightly higher readmission rate for postoperative hematoma (10% v. 0%, p = 0.29).
Introducing OPS in a regional centre resulted in improved oncologic outcomes and increased breast conservation. Surgeons adapting these techniques should be aware of the increased operative time required and the potential risk of postoperative hematoma. This work suggests OPS can be performed safely in the community setting with appropriate training and improve outcomes in breast surgery for patients in smaller centres.
Surgeons are expected to thrive in multidisciplinary teams. While accreditation bodies have included leadership as a core competency for all clinicians, there remains a general lack of definition and strategy to achieve that objective. This paper aimed to systematically review the literature on leadership development programs (LDPs) for surgical residents.
The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines were used to search for studies on LDPs for surgical residents. We examined the setting, frequency, content, teaching methods and learning outcomes of each program. The Kirkpatrick effectiveness and Best Evidence Medical Education (BEME) scales were used to assess curriculum effectiveness and quality, respectively. Relevant Accreditation Council for Graduate Medical Education and Royal College of Physicians and Surgeons of Canada learning outcomes were cross-referenced with the content of each LDP.
Nine studies were included in the final analysis. The majority of LDPs were delivered in a didactic (n = 8), classroom (n = 7), longitudinal format (n = 5). The most common topics included leadership theory (n = 8) and team-building techniques (n = 5). Learning outcomes included an improved understanding of leadership (n = 4), communication skills (n = 3) and team-building and management (n = 3). The overall effectiveness of each program was low, with 6 studies having a Kirkpatrick score of 1 out of 4, indicating a change only in learners’ attitudes. The highest BEME score, achieved by 5 of the programs, was 3 out of 5, indicating that their conclusions can probably be based on the results. Only 3 of the studies placed their learning outcomes in the context of competencies outlined by national accreditation committees.
The current body of literature on leadership curricula for surgical residents is heterogeneous and limited in effectiveness and quality. Future programs need to be rooted in leadership theory and national accreditation competencies, with a focus on deliberate practice, to adequately prepare today’s residents to become tomorrow’s surgeon-leaders.
Adequate pain control is crucial for successful recovery after thyroid and parathyroid surgery. Effective postoperative pain control can shorten hospital stay, improve postoperative outcomes, decrease morbidity and improve the overall patient experience. Traditionally, opioids have been the mainstay of postoperative analgesia after thyroid and parathyroid surgeries. However, the use of opioids has been linked to an increased incidence of postoperative complications such as ventilatory depression, sedation and postoperative nausea and vomiting (PONV), which can contribute to delayed discharge, as well as lead to opioid dependence. The aim of this meta-analysis was to evaluate the body of evidence investigating the postoperative use of non-opioid analgesic drugs and techniques after thyroid and parathyroid operations.
A comprehensive systematic literature review via Medline, Embase, Web of Science and the Cochrane Central Register of Controlled Trials from inception until Dec. 26, 2020, was conducted, followed by meta-analysis. Abstract and full-text screening, data extraction and quality assessment were independently conducted by 2 investigators. Odds ratios (ORs), mean differences (MDs) and 95% confidence intervals were calculated using RevMan 5.3.
Sixty-seven randomized controlled trials were identified from 486 unique publications screened. Pooled MDs and 95% confidence intervals (CIs) for pain scores were higher for the control group at 24 hours postoperatively both at rest (MD −0.65, 95% CI −0.92 to −0.37) and with swallowing (MD −0.77, 95% CI −1.37 to −0.16). These differences were statistically significant. Furthermore, the pooled MD and 95% CI for postoperative analgesic requirements was lower in the intervention group (−1.38, 95% CI −1.86 to −0.90). The pooled OR for the incidence of PONV was 0.67 (95% CI 0.48 to 0.94), and for the incidence of hematoma it was 0.69 (95% CI 0.28 to 1.65).
Non-opioid analgesia was found to be superior to the control for pain control in patients undergoing thyroid and parathyroid operations, with no significant difference in complications.
There is a lack of consensus on the management of ductal gallstone disease, with some surgeons recommending endoscopic retrograde cholangiopancreatography (ERCP) before cholecystectomy and others suggesting cholecystectomy with possible intraoperative cholangiography first. Guidelines regarding optimal timing and utility of ERCP in relation to cholecystectomy for gallstone disease are unclear. This study aimed to compare outcomes of patients who underwent ERCP followed by cholecystectomy with those of patients who underwent cholecystectomy first for treatment of ductal gallstone disease.
This retrospective cohort study included patients who underwent cholecystectomy at 1 community hospital during fiscal years 2016–2019. Patients who had a previous emergency visit within 6 weeks of cholecystectomy and received a diagnosis of cholecystitis, choledocholithiasis or gallstone pancreatitis were included. Outcomes were compared between those who received ERCP first and those who underwent cholecystectomy first.
In fiscal years 2016–2019, 205 patients underwent cholecystectomy for diagnoses of acute cholecystitis, choledocholithiasis and gallstone pancreatitis. Fifty-eight met the study inclusion criteria (37 underwent ERCP before cholecystectomy; 21 underwent cholecystectomy first). Those in the cholecystectomy-first group had fewer comorbid conditions (1.2 v. 2.2, p = 0.014) and were more likely to have a diagnosis of acute cholecystitis (38% v. 5%, p = 0.002). Rates of intraoperative complications (0% v. 10.8%, p = 0.12) and postoperative complications (14.3% v. 8.3%, p = 0.47) were similar for the cholecystectomy-first and ERCP-first groups, respectively, and operative time was similar (73 v. 68 min, p = 0.28) for patients with and without intraoperative cholangiography, respectively. Patients who underwent cholecystectomy first had a shorter length of stay (5.3 v. 7.4 d, p = 0.04). Nine (43%) ERCP procedures were avoided in the cholecystectomy-first group, and there was a 32% (12/37) nontherapeutic ERCP rate in the ERCP-first group.
Cholecystectomy first in patients with ductal gallstone disease is associated with complication rates and operative times similar to those for patients who undergo ERCP first, and it may decrease length of stay and the number of procedures performed in these patients.
Red blood cell (RBC) transfusions are common in surgery and associated with widespread interpractitioner variability despite adjustment for case mix. Evidence-based recommendations guiding RBC transfusion in the operative setting are limited. The objective of this work was to carry out a systematic review and meta-analysis of randomized controlled trials (RCTs) comparing intraoperative RBC transfusion strategies to determine their impact on postoperative morbidity, mortality and blood product use.
The search strategy was adapted from a previous Cochrane review. Electronic databases were searched from January 2016 to February 2021. Included studies from the previous Cochrane review were considered for eligibility from before 2016. RCTs comparing intraoperative transfusion strategies were considered for inclusion. Co-primary outcomes were 30-day mortality and morbidity. Secondary outcomes included intraoperative and perioperative RBC transfusion. Meta-analysis was carried out using random-effects models.
Fourteen trials (8641 patients) were included. One cardiac surgery trial accounted for 56% of patients. There was no difference in 30-day mortality (relative risk [RR] 0.96, 95% confidence interval [CI] 0.71–1.29) or in pooled postoperative morbidity between restrictive and liberal protocols. Two trials reported significantly worse composite outcomes with restrictive triggers. Intraoperative (RR 0.53, 95% CI 0.43–0.64) and perioperative (RR 0.70, 95% CI 0.62–0.79) blood transfusions were significantly lower in the restrictive group than in the liberal group.
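Random-effects pooling of relative risks, as in the meta-analysis above, is commonly done on the log scale with the DerSimonian-Laird between-trial variance estimate (one standard choice; the review's exact model is not specified beyond "random-effects"). A sketch with illustrative trial-level inputs:

```python
import math

def dersimonian_laird(log_rrs, variances):
    """Random-effects pooling of per-trial log relative risks using
    the DerSimonian-Laird estimate of between-trial variance tau^2."""
    w = [1 / v for v in variances]                      # fixed-effect weights
    fixed = sum(wi * y for wi, y in zip(w, log_rrs)) / sum(w)
    q = sum(wi * (y - fixed) ** 2 for wi, y in zip(w, log_rrs))  # Cochran's Q
    df = len(log_rrs) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                       # truncated at zero
    w_re = [1 / (v + tau2) for v in variances]          # random-effects weights
    pooled = sum(wi * y for wi, y in zip(w_re, log_rrs)) / sum(w_re)
    se = math.sqrt(1 / sum(w_re))
    rr = math.exp(pooled)
    ci = (math.exp(pooled - 1.96 * se), math.exp(pooled + 1.96 * se))
    return rr, ci

# Illustrative trials only (not the review's data)
rr, ci = dersimonian_laird(
    [math.log(0.5), math.log(0.55), math.log(0.6)],
    [0.01, 0.02, 0.015],
)
```

When the trials agree (Q below its degrees of freedom), tau² truncates to zero and the model collapses to the fixed-effect inverse-variance estimate.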
Perioperative restrictive transfusion strategies decreased intraoperative and perioperative RBC transfusion and were not associated with increased morbidity and mortality in 12 of 14 trials. Given trial design heterogeneity and generalizability limitations, uncertainty remains regarding the safety of the broad application of restrictive transfusion triggers in the operating room. Trials specifically designed to address intraoperative transfusions are urgently needed.
Older patients are increasingly presenting for surgical evaluation. Although age is independently associated with morbidity, mortality and adverse postoperative outcomes, much of the variation may be attributable purely to frailty. Preoperative assessments and prehabilitation could serve to evaluate and optimize physical, psychological, functional and social issues to improve outcomes. Our study’s aim was to evaluate whether institution of a Frail Elderly Pre-Operative Assessment Clinic (FEPAC) affected outcomes for elderly patients having elective surgery.
We conducted a retrospective case–control study of patients older than 70 years of age scheduled for elective surgery at a tertiary hospital from Apr. 1, 2018, to Dec. 31, 2019. “Cases” were patients deemed frail assessed at FEPAC. “Controls” were patients not assessed at FEPAC, matched to cases by age, surgery and comorbidities. Patient demographic characteristics, disease characteristics, and clinical and surgical outcomes were collected. Comparisons between the cases and controls with a p value less than 0.05 were considered statistically significant.
Ninety participants were included in the analysis: 30 cases and 60 controls. The two groups did not differ in age, sex, American Society of Anesthesiologists classification or the proportion of patients receiving a new ostomy. The control group was less frail (p = 0.04). There was no statistically significant difference in the incidence of moderate-severity complications. The mean length of stay (LOS) was shorter among the cases, although the difference was not statistically significant. After FEPAC assessment, 3 patients decided to forgo surgery. Two patients in the cases group and 6 in the control group required increased care upon discharge.
FEPAC patients had a shorter LOS and were less likely to require increased care upon discharge than controls, although the differences were not statistically significant. A FEPAC discussion about goals allowed 3 patients to reevaluate their desire for surgery. The study results are limited by the small sample size; future studies with larger groups are warranted.
Postoperative ileus (POI) remains a common complication following bowel resection. Selective opioid antagonists have been increasingly studied as prophylactic pharmaceutical aids to reduce rates of POI. The aim of this study was to evaluate the impact of selective opioid antagonists on return of bowel function following bowel resection.
Medline, Embase and Central were systematically searched. Articles were included if they compared the incidence of POI or length of stay (LOS) or both in patients receiving and not receiving selective opioid antagonists following elective bowel resection. A pairwise meta-analysis using inverse variance random effects was performed.
From 636 citations, 30 studies with 45 051 patients receiving selective opioid antagonists (51.3% female, mean age 60.9 yr) and 55 071 patients not receiving selective opioid antagonists (51.2% female, mean age 61.1 yr) were included. Patients receiving selective opioid antagonists had a significantly lower rate of POI (10.1% v. 13.8%, relative risk [RR] 0.68, 95% confidence interval [CI] 0.63–0.75, p < 0.01). Selective opioid antagonists also significantly reduced LOS (standard mean difference [SMD] −1.08, 95% CI −1.47 to −0.69, p < 0.01), readmission (RR 0.94, 95% CI 0.89 to 0.99, p = 0.03) and 30-day morbidity (RR 0.85, 95% CI 0.79 to 0.90, p < 0.01). Improvements in LOS, readmission rate and morbidity were not significant when analysis was limited to laparoscopic surgery. There was no significant difference in inpatient health care costs (SMD −0.33, 95% CI −0.71 to 0.04, p = 0.08).
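The headline relative risk above corresponds to a simple ratio of event rates, with a 95% confidence interval obtained on the log scale. A sketch using illustrative 2×2 counts chosen to approximate the reported rates (10.1% v. 13.8%), not the review's actual arm sizes:

```python
import math

def relative_risk(e1, n1, e0, n0):
    """RR and 95% CI (log method) from event counts:
    e1/n1 events in the intervention arm, e0/n0 in the control arm."""
    rr = (e1 / n1) / (e0 / n0)
    se = math.sqrt(1 / e1 - 1 / n1 + 1 / e0 - 1 / n0)  # SE of log(RR)
    lo = math.exp(math.log(rr) - 1.96 * se)
    hi = math.exp(math.log(rr) + 1.96 * se)
    return rr, lo, hi

# Illustrative counts approximating the pooled POI rates
rr, lo, hi = relative_risk(101, 1000, 138, 1000)
```

The pooled estimate in the review combines such per-study ratios with inverse-variance random-effects weights rather than collapsing the raw counts, which is why the illustrative RR here differs somewhat from the reported 0.68.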
The rate of POI decreases with the use of selective opioid antagonists in patients undergoing bowel resection. Selective opioid antagonists also improve LOS, rates of readmission and 30-day morbidity for patients undergoing open bowel resection. The addition of these medications to enhanced recovery after surgery protocols should be considered.
Bile duct injury sustained during laparoscopic cholecystectomy is associated with high morbidity and mortality and can be a devastating complication for a general surgeon. We introduce a novel individualized surgical coaching program for surgeons who have recently experienced a bile duct injury in laparoscopic cholecystectomy. We aim to explore the perception of coaching among these surgeons and to assess surgeons’ experiences in the coaching program.
Six practising general surgeons who had experienced a bile duct injury during an emergency laparoscopic cholecystectomy were approached by a hepatopancreatobiliary surgeon for a 1-on-1 coaching session. Preceding the session, videos of complicated gallbladder surgery with a step-by-step approach were sent to the participants. The session itself focused on debriefing the index case where the injury occurred and discussing strategies for safe laparoscopic cholecystectomy. The pilot program ran from March to November 2020. Exit interviews were then conducted. Themes covering perception of surgical training, perception of complications and experience in the coaching program were explored.
Surgeons were generally accepting of the coaching program, especially when the goals aligned with their self-identified areas of development. One-on-one sessions with a local expert in the area and the use of video feedback created a unique and interactive coaching opportunity. Coaching was found to be valuable in helping surgeons regain confidence after a complication. All 6 surgeons maintained a relationship with the coach following the session and would recommend it to a friend or colleague. Maintaining a collegial, nonjudgmental relationship is critical in establishing a positive coaching experience.
An individualized surgical coaching program creates a unique opportunity for professional development and may help promote safe laparoscopic cholecystectomy. Formalizing and expanding coaching initiatives in this context are key in the broader effort to improve safety in laparoscopic cholecystectomy.
Median arcuate ligament syndrome is a result of the compression of the celiac artery and plexus by the median arcuate ligament. The pathophysiology is incompletely understood; however, it is thought to be related to both ischemic and neuropathic mechanisms. Preoperative workup includes a multidisciplinary assessment (i.e., gastroenterology, vascular surgery, general surgery, dietician, psychiatrist), careful history and physical examination, and imaging (i.e., computed tomography [CT] angiography, duplex ultrasonography). If the diagnosis is suspected, a trial of a celiac plexus block is attempted, and if a good response is achieved, a laparoscopic median arcuate ligament release can be considered.
This video presents the case of an 18-year-old female with a 1-year history of worsening appetite, generalized abdominal pain and weight loss. Her CT angiogram demonstrated compression of the celiac artery, and a celiac plexus block substantially improved her symptoms. As such, she was consented for a laparoscopic median arcuate ligament release.
The patient was placed in the split leg position with arms tucked and padded. Ports were placed periumbilically, as well as in the right and left upper quadrants. A Nathanson liver retractor was positioned for exposure. Dissection was initiated by division of the pars flaccida. The left gastric vessels were identified, isolated and retracted medially. The diaphragmatic fibers overlying the aorta just proximal to the celiac take-off were identified and divided. The anterior abdominal aorta was cleared. The celiac trunk was eventually identified. Celiac ganglion fibers were lysed. The diaphragmatic fibers compressing the celiac trunk were divided and complete celiac artery release was achieved.
Operative time was 1 hour and 33 minutes. There was minimal blood loss and no complications. The patient was discharged home on postoperative day 1. YouTube video link: https://www.youtube.com/watch?v=ULVDYZICBw
This surgical education video demonstrates the major operative steps of a right retroperitoneoscopic adrenalectomy. The video addresses the key learning points of surgical considerations of right and left adrenal vein anatomy in addition to good intraoperative communication with the operating room team.
Optimal prone patient positioning and port site placement are discussed. The first major step is entering the Gerota fascia widely, from the paraspinous muscle medially to the peritoneum laterally, using an advanced energy sealing device. The posterior attachments of the perinephric and periadrenal fat are bluntly swept anteriorly, bringing the peritoneum into view laterally, the paraspinous muscle medially and the apex of the retroperitoneum superiorly.
The superior pole of the kidney is then mobilized from the adrenal gland with the dissection starting from the anterolateral surface of the kidney and carried lateral to medial toward the renal hilum. The next step is identifying and dividing the adrenal arterial blood supply using the energy sealing device. In a right adrenalectomy, the inferior vena cava (IVC) is identified and the right adrenal vein is isolated and divided with the energy device where it drains directly into the IVC.
In left adrenalectomy, the left adrenal vein drains into the left renal vein, meeting with the inferior phrenic vein before draining into the IVC. The adrenal gland is dissected from surrounding tissues using blunt and sharp dissection. The specimen is then extracted and ports are closed after ensuring good hemostasis. YouTube video link: https://www.youtube.com/watch?v=uHd1MaI25gk
A Zenker diverticulum is a false diverticulum through a weakly reinforced tissue plane between the oblique fibers of the thyropharyngeus muscle and the transverse fibers of the cricopharyngeus muscle. Asymptomatic patients with a diverticulum less than 1 cm do not require any further management. Patients with symptoms or with a diverticulum larger than 1 cm warrant surgical consideration. The classic surgical approach was through an open anterolateral neck incision. Although endoscopic management was first proposed in 1917, it was not extensively studied until the 1980s. Since this time, the endoscopic approach has been developed such that compared with the open approach it substantially reduces operative time, time to return to diet, postoperative length of stay and overall complications. There is, however, an increased risk of recurrence with the endoscopic approach.
This video abstract demonstrates endoscopic management of a large left posterolateral Zenker diverticulum that was causing recurrent episodes of aspiration in a 94-year-old woman. In this video, the common wall between the esophagus and the diverticulum is clearly identified and divided. Visualization of the cricopharyngeus muscle fibers is maintained throughout the dissection so as not to perforate the esophageal adventitia. The dissection was carried to 0.5 cm proximal to the blind end of the diverticulum, creating an adequate outflow tract between the diverticulum and the esophageal lumen. Operative time was 21 minutes, and the patient was discharged home on postoperative day 1, after a barium swallow showed no evidence of leak. YouTube video link: https://www.youtube.com/watch?v=JSWETkBu6I0
Pheochromocytomas are rare tumours that require complex perioperative management. Despite recent consensus guidelines, there is substantial variability in the management of pheochromocytomas. Our study characterizes the current state of perioperative pheochromocytoma management by Canadian surgeons.
A 23-item online survey was sent to Canadian surgeons who perform adrenalectomies for pheochromocytoma. We assessed personal and institutional practices, preoperative blockade preferences, preadmission rates, perioperative intensive care utilization and other management patterns.
The national response rate was 53%. Surgeons from 9 provinces responded; the majority were general surgeons (70%). The median number of adrenalectomies for pheochromocytoma performed per surgeon per year was 3 (range 1–12). Reviewing pheochromocytoma patients at a multidisciplinary tumour board was not routine practice (12% of respondents) and only 42% consistently referred patients for genetic testing. Preoperative α- and β-blockade was managed by endocrinology alone at roughly half of respondents' institutions (54%), with the remainder employing a multidisciplinary approach. Preferred blockade regimens varied widely. Half of respondents admitted their pheochromocytoma patients to hospital before the day of surgery. Postoperatively, 12% of respondents routinely admitted their patients to the intensive care unit (ICU) for monitoring on the basis of personal preference or institutional convention. Surgeon case volume, years in practice, physician responsible for preoperative blockade and preadmission were not associated with postoperative ICU admission.
Perioperative surgeon management of patients undergoing adrenalectomy for pheochromocytoma is highly variable across Canada. Notably, less than half of respondents routinely refer patients for genetic testing, which should be offered to all pheochromocytoma patients according to the most recent practice guidelines. Currently, surgeon preference and institutional convention are large driving forces behind preoperative admission and routine postoperative ICU admission, despite a lack of evidence to support this practice. Future direct comparative studies are necessary to evaluate the relative safety and cost of these approaches.
Red blood cell (RBC) transfusions are associated with morbidity, mortality and cancer recurrence following gastrointestinal cancer surgery. These patients are nevertheless often transfused, and despite recent evidence and practice guidelines informing transfusion practice, substantial variation in transfusion persists among individual physicians and institutions. We examined the association of surgeon and hospital variation in RBC transfusion practice with postoperative major morbidity and mortality in patients undergoing gastrointestinal cancer surgery.
We performed a population-based cohort study of patients who underwent elective gastrointestinal cancer resection between 2007 and 2019. Indirect standardization using hierarchical logistic regression was used to estimate case-mix-adjusted perioperative RBC transfusion rates and 90-day major morbidity or mortality rates for each surgeon and hospital. Linear regression estimated the association between transfusion and postoperative morbidity or mortality rates.
A total of 59 964 patients (median age 69 yr, 43.2% female, 75.8% colorectal resections), 616 surgeons and 81 hospitals were included. A total of 18.0% of patients received transfusions. Adjusted RBC transfusion rates ranged from 7.4% to 36.4% for surgeons and 8.4% to 30.0% for hospitals. Across surgeons, we observed a 3.8% increase in major morbidity for every 10% increase in transfusion rates (β = 3.8, 95% confidence interval [CI] 1.4–6.2), whereas hospital-level transfusion rates had no significant association with morbidity. With regard to mortality, we observed a 0.3% increase for every 10% increase in transfusion rates for both surgeons (β = 0.34, 95% CI 0.22–0.46) and hospitals (β = 0.27, 95% CI 0.02–0.52).
We observed a wide variation in surgeon and hospital transfusion practices that were associated with patient outcomes. After adjusting for patient case-mix, postoperative major morbidity and mortality were less frequent among those surgeons and hospitals who transfused fewer patients. Individual transfusion practices are a potentially modifiable process of care that should be targeted by quality improvement programs, including monitoring and reporting, with a goal of standardizing perioperative transfusion practices and improving patient outcomes.
Perioperative blood transfusions have been associated with increased morbidity and poorer oncologic outcomes for numerous surgical procedures. However, this issue is understudied among patients with gastroesophageal malignancies. The objective was to clarify the risk factors and impact of perioperative transfusions on quality of life and surgical and oncologic outcomes among patients undergoing gastric and esophageal cancer surgery.
Patients undergoing curative-intent resections for gastroesophageal cancers between 2010 and 2018 were included. Perioperative blood transfusion was defined as any transfusion within 24 hours preoperatively, during surgery or the primary postoperative hospital admission period. Patient and tumour characteristics, surgical and oncological outcomes and quality of life were compared.
A total of 435 patients were included. Perioperative transfusions occurred in 184 (42%). Anemia, blood loss, female sex, open surgical approach and operative time emerged as independent risk factors for transfusions. Factors found to be independently associated with overall survival were neoadjuvant therapy, tumour size and stage, major complications and mortality. Transfusions did not independently affect overall survival, disease-free survival or quality of life.
Perioperative transfusions did not affect oncologic outcomes or quality of life among patients undergoing curative-intent surgery for gastroesophageal cancers.
Long-term functional outcomes are central to older adults’ decision-making regarding cancer treatments. While frailty is known to affect short-term postoperative outcomes, its impact on long-term functional decline after cancer surgery is unknown. We examined the association between frailty and remaining alive and at home after cancer surgery among older adults.
In this population-based study, we included adults 70 years of age and older who had a cancer diagnosis and underwent resection (2007–2017). The probability of remaining alive and at home (i.e., not admitted to a nursing home) in the 5 years after cancer resection was evaluated with Kaplan–Meier methods. Extended Cox regression with time-varying effects examined the association between frailty and remaining alive and at home.
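The Kaplan–Meier (product-limit) estimate used here multiplies, at each event time, the conditional probability of surviving that time among those still at risk. A minimal sketch, using hypothetical follow-up data (months) rather than the study's cohort:

```python
def kaplan_meier(times, events):
    """Product-limit estimate of survival S(t) at each distinct event time.

    times: follow-up durations; events: 1 = event observed, 0 = censored.
    Returns (time, S(t)) pairs at times where at least one event occurred.
    """
    pairs = sorted(zip(times, events))
    at_risk = len(pairs)
    curve, s, i = [], 1.0, 0
    while i < len(pairs):
        t = pairs[i][0]
        deaths = leaving = 0
        # group all subjects (events and censorings) tied at time t
        while i < len(pairs) and pairs[i][0] == t:
            deaths += pairs[i][1]
            leaving += 1
            i += 1
        if deaths:
            s *= 1 - deaths / at_risk  # multiply by conditional survival at t
            curve.append((t, s))
        at_risk -= leaving
    return curve

# Hypothetical data: events at months 1, 2, 3; one censoring at month 2.
print(kaplan_meier([1, 2, 2, 3], [1, 1, 0, 1]))  # survival drops at each event time
```

Censored subjects leave the risk set without reducing the survival estimate, which is what distinguishes this from a naive proportion-alive calculation.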
Of 82 037 patients, 6443 (7.8%) had preoperative frailty. With median follow-up of 47 months (interquartile range 23–81), patients with frailty had a significantly lower probability of remaining alive and at home 5 years after cancer surgery (39.1%, 95% confidence interval [CI] 37.8%–40.4%) compared with those without frailty (62.5%, 95% CI 62.1%–63.9%). After adjustment, frailty remained associated with increased hazards of not remaining alive and at home. This increase was highest 31 to 90 days after surgery (hazard ratio [HR] 2.00, 95% CI 1.78–2.24) and remained significantly elevated beyond 1 year after surgery (HR 1.56, 95% CI 1.48–1.64). This pattern was observed across cancer sites, including breast and melanoma.
Preoperative frailty was independently associated with a decreased probability of remaining alive and at home after cancer surgery among older adults. This relationship persisted over time for all cancer types, beyond short-term mortality and the initial postoperative period. Frailty should be assessed in all candidates for cancer surgery, and these data should be used when counselling, selecting and preparing patients for surgery.
The COVID-19 pandemic has been an uncertain, challenging time that has placed numerous strains on the Canadian health care system. A crucial yet sometimes overlooked aspect of this has been the mental health impact on health care workers. Surgeons, in particular, have faced unique stressors because of the cancellation of elective procedures, uncertainty regarding future management of urgent patient conditions such as oncologic operative procedures, and fear of infectious transmission to family and friends because of exposure from aerosol-generating procedures. The purpose of this study was to evaluate the impact of the initial phase of the COVID-19 pandemic on general surgeons’ mental health across British Columbia, Canada.
An online survey was distributed to BC general surgeons to gather demographic and mental health data related to the pandemic period, including 2 validated burnout and psychological distress tools, the abbreviated Maslach Burnout Inventory (aMBI) and the Kessler Psychological Distress Scale (K10).
Sixty-three of 198 surgeons (32%) across BC responded to the survey; 44% and 59% felt that the pandemic negatively affected their job performance and personal relationships outside the hospital, respectively. In addition, 64% felt more stress or anxiety because of decreased access to operating rooms. From the aMBI results, 33% of surgeons felt emotionally exhausted from work, and the average K10 score was consistent with moderate psychological distress.
The COVID-19 pandemic has negatively affected general surgeons’ mental health across BC, both professionally and personally. This should be acknowledged by hospital leaders with specific efforts to mitigate the short- and long-term impacts on surgeons’ well-being.
Endoscopic retrograde cholangiopancreatography (ERCP) is commonly performed to treat biliary and pancreatic diseases. Current guidelines suggest anticoagulation should be held before high-risk endoscopic procedures, including ERCP. The American Society of Gastroenterology has extended this to include venous thromboembolism (VTE) prophylaxis. We aimed to identify available literature regarding post-ERCP bleeding risk from VTE prophylaxis.
We searched Medline and the Cochrane Central Register of Controlled Trials indexed from inception through Feb. 13, 2020, for studies of adult patients where bleeding risk after ERCP was assessed. We scored studies on quality of reporting, internal and external validity, and study power; combined scores determined the overall quality.
Our search identified 441 titles, and 38 articles were reviewed in full. A total of 5 relevant titles were identified: 1 conference abstract and 4 primary articles. Only 1 result, a conference abstract, specifically addressed the risk of bleeding associated with VTE prophylaxis in inpatient ERCP. This study included 135 patients, 20 of whom continued VTE prophylaxis. There was no statistically significant increased risk of bleeding for patients who remained on VTE prophylaxis. The 4 articles evaluated the use of unfractionated heparin and low molecular weight heparin in the prevention of post-ERCP pancreatitis; bleeding complications were variably reported as secondary outcomes in these papers, with variable results.
This systematic review demonstrates the lack of high-quality evidence regarding the periprocedural use of VTE prophylaxis. The only study designed to evaluate bleeding risk was an abstract with small numbers. Despite this, the current recommendation is to suspend its use in the periprocedural period, potentially leaving acutely ill patients susceptible to VTE complications. Further high-quality studies are required to address this clinical question and to direct further recommendations.
Laparoscopic subtotal cholecystectomy (LSC) is accepted as a successful method for aborting an episode of severe cholecystitis when the inflammation is such that safe dissection and removal are not possible. What remains less clear in the literature is the frequency of complications following LSC and reintervention rates, particularly as they relate to the surgical strategy used.
This study sought to identify the reported complication rates for LSC and how they differ between the 2 main surgical strategies: nonremoval of the back wall (fenestrating) and closure of the gallbladder remnant (reconstituting). Studies published from 1993 to December 2019 were eligible, and 32 studies were included in the analysis. Outcomes of interest were complication rates, including specific biliary complications and general surgical complications, in relation to surgical strategy. Additional outcomes analyzed included reintervention rates after surgery, including endoscopic retrograde cholangiopancreatography (ERCP), percutaneous drainage and reoperation.
The most common postoperative complications included bile leak and retained stones. Rates of severe complications, including common bile duct injuries, were very low. A meta-analysis was performed to assess the likelihood of postoperative complications with fenestrating compared with reconstituting procedures. Most complications were more likely with fenestrating than with reconstituting procedures, including bile leak (odds ratio [OR] 2.13, 95% confidence interval [CI] 1.67–2.73) and retained stones (OR 2.10, 95% CI 1.29–3.41). Rates of postoperative ERCP were much higher with fenestrating procedures (OR 4.55, 95% CI 3.30–6.29).
LSC is a safe procedure with low complication rates for both reconstituting and fenestrating strategies. On statistical analysis, fenestrating LSC was associated with more frequent bile leaks and retained stones than reconstituting LSC, and correspondingly higher rates of postoperative intervention, including ERCP and repeat surgery.
Modern surgery crucially relies on teamwork between surgeons and assistants. The science of teamwork has been studied extensively, although specific objective methodologies such as shared pupil dilation have received less attention than subjective methods. In this study, we investigated team members’ shared pupil dilations as a surrogate for surgical team performance during a simulated laparoscopic procedure.
Fourteen subjects formed dyad teams to perform a simulated laparoscopic object transportation task. Both team members’ pupil dilation and eye gaze were tracked simultaneously during the procedure. Video analysis was used to identify key movement landmarks for subtask segmentation to facilitate data analysis. Each team’s performance was classified into 1 of 3 levels according to task completion time and accuracy (number of object drops). The coefficient of determination (R2) was used to quantify the similarity between the 2 team members’ pupil diameter traces. A mixed-design analysis of variance was conducted to explore how team performance level and task type related to joint pupil dilation.
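For a simple linear regression of one trace on the other, R2 equals the squared Pearson correlation, so the similarity metric can be sketched in a few lines; the traces below are hypothetical, not the study's recordings:

```python
from math import sqrt

def pupil_similarity(trace_a, trace_b):
    """R^2 between two pupil diameter traces: for a simple linear
    regression of one trace on the other, R^2 equals the squared
    Pearson correlation coefficient."""
    n = len(trace_a)
    mean_a = sum(trace_a) / n
    mean_b = sum(trace_b) / n
    cov = sum((a - mean_a) * (b - mean_b) for a, b in zip(trace_a, trace_b))
    var_a = sum((a - mean_a) ** 2 for a in trace_a)
    var_b = sum((b - mean_b) ** 2 for b in trace_b)
    r = cov / sqrt(var_a * var_b)
    return r ** 2

# Hypothetical traces (mm): trace_2 tracks trace_1 with a constant offset,
# so the two are perfectly linearly related and R^2 is 1.
trace_1 = [3.1, 3.4, 3.9, 4.2, 3.8, 3.5]
trace_2 = [3.0, 3.3, 3.8, 4.1, 3.7, 3.4]
print(round(pupil_similarity(trace_1, trace_2), 3))
```

Because R2 is invariant to offset and scale, it captures whether the two traces rise and fall together, not whether their absolute diameters match, which is the sense of "synchronization" measured here.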
The results showed that pupil dilations of higher performance teams were more synchronized, with significantly higher similarities (R2) in pupil dilation patterns between team members than those of lower performance teams (0.36 ± 0.22 v. 0.21 ± 0.14, p < 0.001).
The degree of pupil dilation synchronization among teams reflected differences in performance levels while executing simulated laparoscopic tasks; this demonstrates the potential of joint pupil dilation as an objective indicator of surgical teamwork performance.
Peer coaching has been associated with higher rates of practice changes than traditional learning modalities but is underutilized globally. The purpose of this study was to explore international surgeons’ perspectives and attitudes about peer coaching.
Survey questions, developed by the research team on the basis of previous studies, focused on needs for, perceived benefits of and barriers to peer coaching participation among surgeons in practice. Practising surgeons in general surgery and related subspecialties globally were eligible to participate. Recruitment was done electronically using select surgical society mailing lists and snowballing (i.e., respondents were invited to further disseminate the survey). Responses were collected between June 1 and Aug. 31, 2020.
A total of 521 responses were collected. The majority of participants practised in North America (260, 50%) with the remaining respondents hailing from Asia (82, 16%), Europe (32, 6%), South America (22, 4%), Africa (16, 3%) and Oceania (6, 1%). Duration of practice was evenly distributed across 4 intervals (0–5 yr, 6–15 yr, 16–25 yr, > 25 yr). Respondents most frequently identified as general surgeons (283, 66%), and 390 (75%) were male. Awareness of peer coaching was reported by 275 (53%), yet 197 (45%) never sought formal feedback from peers. The majority of respondents (372, 84%) would be willing to participate in a peer coaching program, with monthly interactions the most desirable frequency reported (174, 46%). In-person coaching in the operating room was preferred over remote or virtual interactions (360, 86%). Few respondents (44, 10%) indicated that they would accept coaching from someone unknown to them. Desirable program characteristics included personalized goal setting (285, 68%), confidential feedback (267, 63%) and the option to choose one’s own coach (205, 49%). The most commonly cited potential barrier to participation was logistical constraints (339, 65%).
Internationally, surgeons infrequently seek formal peer feedback but would be willing to participate in peer coaching. This survey identified preferences and potential barriers that could be used to guide future coaching program design.
Literature reports a decrease in health care–seeking behaviours during the COVID-19 pandemic. Given that emergency general surgery (GS) conditions are associated with a high risk of morbidity and mortality if untreated, the objective of this study was to quantify the impact of the COVID-19 pandemic on rates of emergency department (ED) utilization for GS conditions.
This study involved the analysis of an institutional database. We identified adult patients presenting to the ED in a network of 3 teaching hospitals during the early stages of the pandemic (Mar. 13 – May 13, 2020) and a prepandemic period (Mar. 13 – May 13, 2019). Patients with GS conditions (American Association for the Surgery of Trauma definition) were included. Comparison between the 2 time periods was conducted using the Poisson exact test.
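The Poisson exact test for two counts over equal observation windows can be carried out by conditioning on the total: under the null hypothesis of equal rates, the first count follows a Binomial(total, 0.5) distribution. A minimal sketch, applied to the presentation counts reported in the results (352 prepandemic v. 259 pandemic):

```python
from math import comb

def poisson_exact_test(count_a, count_b):
    """Exact two-sided test that two Poisson counts with equal
    observation windows share the same rate. Conditional on the
    total, count_a ~ Binomial(total, 0.5) under the null; the
    p-value sums the probabilities of all outcomes no more likely
    than the one observed."""
    n = count_a + count_b
    pmf = [comb(n, k) * 0.5 ** n for k in range(n + 1)]
    observed = pmf[count_a]
    # small tolerance guards against float ties at the observed outcome
    return sum(p for p in pmf if p <= observed * (1 + 1e-12))

# ED presentations with GS conditions: 352 prepandemic v. 259 pandemic.
print(poisson_exact_test(352, 259) < 0.001)  # significant, as reported
```

Conditioning on the total removes the unknown common rate from the problem, which is what makes the test "exact" rather than reliant on a normal approximation.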
During the prepandemic period, 352 patients presented to the ED with a GS diagnosis versus 259 patients during the pandemic (26% reduction, p < 0.001). There was a significant reduction in the number of patients presenting with appendicitis, pancreatitis, bowel obstruction and colorectal diseases during the pandemic (p < 0.05) but not biliary disease or perforated viscus (p > 0.05). Overall, 111 patients were managed surgically (32% of presentations) in the prepandemic period compared with 80 patients during the pandemic (31% of presentations). This represents a 28% reduction in the number of operations (p = 0.001) but not in the proportion of patients managed surgically (p = 0.93). There was no significant difference in the number of appendectomies (38 v. 26, p = 0.63) and cholecystectomies (22 v. 18, p = 0.64). Of 140 patients (55%) tested for COVID-19, 2 tested positive.
Our findings suggest a decrease in the health care–seeking behaviour of adults with GS conditions in the early stages of the COVID-19 pandemic. Given the preserved rates of operative management, the decrease in the absolute number of operations is probably associated with the decrease in presentation to the ED compared with prioritization of conservative management.
The need to expand the health care capacity for patients with COVID-19 required a shutdown of elective surgery in our region. To mitigate the negative effects of subsequent waves of the pandemic on surgical education, a thorough understanding of the magnitude of its impact is necessary. The objective of this study was to estimate the impact of the first wave of the COVID-19 pandemic on the participation of general surgery trainees in operative procedures.
This study is a retrospective review of all emergency and elective general surgery procedures performed at the 3 sites of an academic health network. Cases performed during the pandemic period (2020) were compared with those performed in the prepandemic period (2019). Procedures performed by a general surgeon involving adult patients were included. Operative exposure was defined as (1) the total number of trainees present in the operating room (OR) and (2) the total time (h) spent in the OR by all trainees. The impact was estimated as a percentage of baseline (2019) and significance was tested using the exact Poisson test.
During the first wave of the pandemic, residents’ attendance in the OR decreased from 1328 cases to 914 cases (68.9% of baseline, p < 0.001) and total time in the OR decreased from 4000.4 to 2957.3 hours (73.9% of baseline, p < 0.001), with junior and senior residents similarly affected. Exposure to minimally invasive surgery and bariatric cases was most negatively affected (52.9% and 52.0% of baseline). However, exposure to emergency surgery cases and surgical oncology cases was relatively preserved (91.0% and 90.0% of baseline).
The first wave of the COVID-19 pandemic reduced operative exposure by 30% for general surgery trainees. The pattern of this impact reflects institutional policies of prioritizing oncology and emergency surgeries. This knowledge can be used to design rotations or remediation activities to mitigate the impact of the pandemic on surgical training.
It is unknown whether attainment of a master’s or PhD degree is associated with greater research productivity for general surgeons. This project aimed to determine whether possession of a graduate degree correlates with greater research productivity among Canadian academic surgeons.
A list of general surgeons located at the largest hospital associated with each English-language residency program accredited by the Royal College of Physicians and Surgeons of Canada was obtained. Staff surgeons active between 2013 and 2018 were included. Surgeons were classified separately by degree (none, master’s, PhD) and professorship (assistant, associate, full) status, and surgeons’ publications from January 2013 to December 2018 were identified through Scopus. Variables of interest were annual number of publications, median number of citations per article, median CiteScore of journal of publication, and author’s h-index and R-index. Kruskal–Wallis testing and the Dunn multiple comparisons test were used to assess significance.
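Of the author-level metrics above, the h-index has a simple operational definition: the largest h such that the author has h papers with at least h citations each. A minimal sketch with a hypothetical citation record:

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank  # the paper at this rank still clears the bar
        else:
            break
    return h

# Hypothetical record: the 4 most-cited papers each have >= 4 citations,
# but the 5th has only 3, so h = 4.
print(h_index([10, 8, 5, 4, 3]))  # → 4
```

Sorting in descending order means the metric is insensitive to a long tail of rarely cited papers, which is why it is often paired with complements such as the R-index.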
A total of 3591 publications from 187 surgeons were identified. Seventy-eight surgeons (41.7%) had no graduate degree, 84 (44.9%) had master’s degrees and 25 (13.4%) had PhDs. Surgeons with graduate degrees had more publications per year, higher CiteScore values, more citations per paper and greater h-index and R-index values than those without graduate degrees. When master’s and PhD degree status were compared, nonsignificant trends suggested higher values in all domains among PhD holders. With respect to professorship status, 77 surgeons (41.8%) were assistant professors, 63 (34.2%) were associate professors and 44 (23.9%) were full professors. Three surgeons were excluded from this analysis as they had no professorship status. Full professors had significantly more publications per year and greater h-index and R-index values than both the assistant and associate professor groups.
Canadian surgeons with graduate degrees or higher professorship status have greater research productivity than those without. Surgeons with PhDs trended toward greater research productivity when compared with those holding master’s degrees.
Laparoscopic endoscopic cooperative surgery (LECS) is a procedure for the management of luminal gastrointestinal lesions. It was developed in Japan from necessity, on the basis of the established complication rate of endoscopic submucosal dissection (ESD) and the inability to resect lesions arising from or attached to the muscularis propria. LECS in North America is in its early phase of development and establishment. Here we present a video of an open LECS.
Our patient was a 67-year-old woman who presented to an outside institution with vague abdominal pain. A gastroscopy revealed a subepithelial gastric lesion high on the lesser curve. Computed tomography of the abdomen identified a 2.5-cm mass, probably a gastrointestinal stromal tumour (GIST). The patient was referred for endoscopic evaluation and management.
The patient underwent a repeat gastroscopy and endoscopic ultrasonography with fine-needle biopsy at our institution. Pathology confirmed the diagnosis of a low-grade GIST. The patient was discussed at multidisciplinary case conference and an open LECS was offered. The video presentation shows the main steps of the procedure.
Japanese data and our early experience suggest that LECS is a safe, feasible and effective procedure for gastrointestinal luminal lesions. It is a great addition to the armamentarium of both surgery and endoscopy programs as it offers a collaborative and minimally invasive approach to lesions that are not amenable to ESD and would require more extensive surgical resections. It appears promising for luminal lesions in challenging anatomical locations to minimize resection and preserve organ function. Future studies are needed to determine its long-term functional and oncologic outcomes and its place in the management of gastrointestinal lesions in the North American patient population. YouTube video link: https://www.youtube.com/watch?v=2g0eQQ9UMk0
In response to the COVID-19 pandemic, a state of public health emergency was declared in March 2020. Health care resource allocation and delivery were modified. We aimed to evaluate the early effect of the COVID-19 pandemic on acute care surgical outcomes in Canada.
Acute care general surgery cases from 2 academic hospitals in a Canadian province were included in this retrospective observational cohort study. Cases occurring 30 days prior (Feb. 10, 2020, to Mar. 9, 2020) to the pandemic and 30 days after initiation of the pandemic (Mar. 23, 2020, to Apr. 20, 2020) were included. Primary outcomes of interest were postoperative length of stay and 30-day mortality. Negative binomial and logistic regression models informed by a directed acyclic graph were developed to investigate outcome effects.
Our study included 257 patients. During the pandemic period, patients presented with higher rates of ischemic bowel related to small bowel obstruction and with greater white blood cell counts. Emergency department wait times and interhospital transfers also decreased. Adjusted models found no difference in length of stay or 30-day mortality between cohorts. Subgroup analysis of cases completed under COVID-19 precautions showed unadjusted increases in intensive care unit admission (odds ratio [OR] 6.34, p = 0.006), surgical morbidity (OR 4.46, p = 0.017) and quick Sequential Organ Failure Assessment (qSOFA) score (OR 13.1, p < 0.001).
Effective surgical care is possible during global pandemics. We did not observe a meaningful effect related to increased morbidity, mortality or length of stay during the first wave of the COVID-19 pandemic. “Lockdown” effects associated with increased mortality, morbidity or a decrease in surgical volume have been observed worldwide. Our study reflects the effect of COVID-19 public health measures in a public health system operating within capacity limits. Understanding how pandemics indirectly influence health outcomes is important to optimize pandemic protocols and communications.
The overprescription of opioids to surgical patients is recognized as an important driving force behind the opioid crisis. However, the value of prescribing opioids following postoperative discharge is still uncertain. This study investigated the feasibility of conducting a full-scale randomized controlled trial (RCT) to assess the comparative effectiveness of opioid analgesia (OA) versus opioid-free analgesia (OFA) after outpatient general surgery.
This pragmatic, assessor-blind, pilot RCT included adult patients undergoing outpatient breast and abdominal procedures at 2 tertiary hospitals (targeted n = 80). Patients were randomly assigned 1:1 to receive OA (around-the-clock nonopioids plus opioid tablets for breakthrough pain) or OFA (around-the-clock nonopioids, with dose escalation or the addition of other nonopioid drugs for breakthrough pain). Primary outcomes were a priori RCT feasibility criteria: more than 70% of screened patients meet eligibility criteria, more than 50% of eligible patients are randomly assigned to a study group and more than 80% of patients randomly assigned to a study group complete follow-up. Other outcomes included 7-day pain severity and interference (Brief Pain Inventory), analgesic intake and adverse events. Data were analyzed using descriptive statistics and exploratory effect estimates.
Of 224 patients assessed for eligibility (January–September 2020), 164 met the inclusion criteria (73%) and 93 consented to be randomly assigned to a study group (57%). Twelve patients were excluded before randomization and 5 were excluded after randomization. Seventy-six patients (39 OA and 37 OFA) were included in the intention-to-treat analysis (mean age 55.5 yr, 66% female, 53% abdominal surgery [48% laparoscopic], 47% breast surgery [50% sentinel node biopsy, 19% axillary dissection]). All patients completed the 7-day follow-up (100%). Pain intensity and interference were comparable between the groups. Twenty-three OA patients (59%) did not take opioids. One OFA patient (3%) received an opioid prescription. Common adverse events were nausea (OA 21% v. OFA 16%), vomiting (8% v. 3%) and constipation (41% v. 32%).
This pilot study supports the feasibility of a robust, adequately powered RCT, which has the potential to contribute practice-changing evidence to mitigate postoperative opioid overprescribing after outpatient general surgery.
Tyrosine kinase inhibitor (TKI) therapy and immunotherapy targeting programmed cell death protein-1 (PD-1), programmed cell death ligand-1 (PD-L1) or cytotoxic T-lymphocyte-associated protein 4 (CTLA-4), alone or as combination PD-1(L1)/CTLA-4 therapy, are currently being used in the neoadjuvant setting to treat several solid tumours. There are concerns about the safety profile of neoadjuvant immunotherapy and, subsequently, possible delays to surgery. This review summarizes the pooled proportion of patients who underwent surgery following neoadjuvant therapy and the pooled proportion of adverse events.
Medline, Central and Embase databases were searched for single-arm or randomized controlled trials of neoadjuvant TKI, PD-1/L1 or CTLA-4 immunotherapy published through January 2021. The pooled proportion of patients who completed planned resection and the pooled proportion of adverse events were estimated using a random-effects model, with the inverse variance method used to weight each trial. Statistical heterogeneity was assessed using the I2 statistic and the χ2 test.
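Inverse-variance random-effects pooling weights each study's proportion by the inverse of its total variance, where the between-study component τ2 is commonly obtained with the DerSimonian–Laird estimator. This is an illustrative sketch with hypothetical study data, not the review's actual analysis:

```python
from math import sqrt

def pooled_proportion(events, totals):
    """Inverse-variance pooled proportion with DerSimonian-Laird
    random effects. Assumes raw proportions with variance p(1-p)/n;
    in practice, proportions of exactly 0 or 1 need a continuity
    correction or a transformation (e.g., Freeman-Tukey)."""
    props = [e / n for e, n in zip(events, totals)]
    variances = [p * (1 - p) / n for p, n in zip(props, totals)]
    w = [1 / v for v in variances]
    fixed = sum(wi * pi for wi, pi in zip(w, props)) / sum(w)
    # Cochran's Q and the DL estimate of between-study variance tau^2
    q = sum(wi * (pi - fixed) ** 2 for wi, pi in zip(w, props))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(props) - 1)) / c)
    # re-weight with total (within + between) variance
    w_re = [1 / (v + tau2) for v in variances]
    pooled = sum(wi * pi for wi, pi in zip(w_re, props)) / sum(w_re)
    se = sqrt(1 / sum(w_re))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

# Hypothetical trials: patients completing planned resection / enrolled.
estimate, (low, high) = pooled_proportion([28, 45, 19], [30, 50, 20])
print(round(estimate, 3), (round(low, 3), round(high, 3)))
```

When the studies are homogeneous (Q below its degrees of freedom), τ2 truncates to 0 and the estimate reduces to the fixed-effect inverse-variance average.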
Seventeen relevant studies with a total of 535 patients were included for analysis. This included patients receiving neoadjuvant TKI therapy (n = 148), PD-1/L1 immunotherapy (n = 234) or combination PD-1/CTLA-4 therapy (n = 153). The types of tumours in the included studies were renal cell carcinoma (8 studies), bladder carcinoma (3 studies), non–small cell lung cancer (3 studies), hepatocellular carcinoma (2 studies) and colon cancer (1 study). The pooled proportion of patients who completed planned surgery was 94% (95% confidence interval [CI] 91%–97%). The pooled rate of clinically relevant grade 3 or 4 adverse events reported after neoadjuvant therapy was 23% (95% CI 14%–32%). The incidence of partial radiologic response after neoadjuvant therapy was 22% (95% CI 4%–40%).
The use of either neoadjuvant TKI, PD-1/L1 or PD-1(L1)/CTLA-4 combination therapy before surgery for solid tumours rarely delays surgical resection, presents an adequate safety profile and can lead to partial radiologic response for some patients.
Operating room recording, via video, audio and sensor-based recordings, is increasingly common. Yet, surgical data science is a new field without clear guidelines. The purpose of this study is to examine existing published studies of surgical recording modalities to determine which are available for use in the operating room, as a first step toward developing unified standards for this field. Medline, Embase, Central and PubMed databases were systematically searched for articles describing modalities of data collection in the operating room. Search terms included “video-audio media,” “bio-sensing techniques,” “sound,” “movement,” “operating rooms” and others. Title, abstract and full-text screening were completed to identify relevant articles.
Descriptive statistical analysis was performed for included studies. Of 3756 citations, 91 studies met the inclusion criteria. These studies described 10 unique data collection modalities for 17 different purposes in the operating room. Data modalities included video, audio, kinematic and eye-tracking, among others. Data collection purposes described included surgical trainee assessment, surgical error, surgical team communication and operating room efficiency.
Effective data collection and utilization in the operating room are imperative for the provision of superior surgical care. The future operating room landscape undoubtedly includes multiple modalities of data collection for a plethora of purposes.
This review acts as a foundation for employing operating room data in a way that leads to meaningful benefit for patient care.
Low neighbourhood socioeconomic status (N-SES) is associated with increased injury incidence. We examined the relationship between N-SES and nonaccidental trauma (NAT), defined as blunt and penetrating assaults, during the COVID-19 pandemic in a major Canadian city.
Our institutional trauma registry data were retrospectively analyzed for all severely injured (Injury Severity Score [ISS] > 12) adult (> 18 yr) patients admitted with NAT between January 2016 and September 2020. The period from Jan. 1, 2016, to Feb. 28, 2020, was defined as pre-COVID-19, and the period from Mar. 1 to Sept. 30, 2020, was defined as COVID-19. Addresses of patients sustaining NAT were geocoded and matched to 15 local geographic areas (LGAs), which were divided into quintiles on the basis of median household income. Changes in proportions of NAT during COVID-19 were mapped to LGAs using geospatial analysis software. Correlation between income quintile and NAT was assessed using the Spearman rho.
In this study, 439 patients sustained NAT: 386 (88%) were injured during the pre-COVID-19 period, and 53 (12%) were injured during the COVID-19 period. Six of 15 LGAs experienced increases in NAT during the COVID-19 period, 2 of which were statistically significant. There was a strong and significant inverse correlation between income quintile and NAT (Spearman rho correlation coefficient −0.55, p = 0.035) in the pre-COVID-19 time frame. This correlation became more pronounced during the COVID-19 period (Spearman rho correlation coefficient −0.60, p = 0.018).
Low N-SES was associated with increased incidence of NAT during the COVID-19 pandemic, highlighting the impact of income disparity on an already vulnerable population. Further work is needed to determine whether other trauma-related health outcomes due to disparities in N-SES have been amplified during the pandemic.
Gastropleural fistula is rare and most commonly occurs as a surgical complication. It has also been reported in malignancy, trauma, perforated gastric ulcer and complex pulmonary infections, and it often presents with empyema. Initial management addresses intrathoracic sepsis and includes antibiotics and chest drainage. Although conservative treatment has occasionally been successful, definitive treatment usually requires surgical management. Surgical access can be achieved via the thorax or the abdomen, either by minimally invasive or open techniques. Decisions regarding surgical timing and approach can be complex.
We present the case of a 77-year-old woman with lymphoma of the gastric fundus having eroded through the diaphragm during the course of chemotherapy, resulting in a transdiaphragmatic gastropleural fistula and empyema. Once the thoracic sepsis was adequately controlled and after a thorough preoperative work-up confirming the absence of residual tumour, the fistula was approached laparoscopically. Operative steps included taking down the fistula, a wedge resection of the involved portion of the fundus, debridement and primary repair of the diaphragm with bio-mesh reinforcement, and partial fundoplication covering all staple lines.
The surgery was well tolerated, and the patient resumed an oral diet and chemotherapy on postoperative days 6 and 14, respectively. She remained disease free at 16 months.
Transdiaphragmatic gastropleural fistulas are rare. Assuming adequate control of intrathoracic sepsis, the fistula may be successfully addressed laparoscopically. In our patient, minimally invasive repair of the fistula allowed for prompt recovery and swift resumption of life-saving chemotherapy.
The COVID-19 pandemic necessitated a rapid uptake of video-based interviewing for selection processes in health care. A scoping review was conducted to consolidate the available literature on the benefits and limitations of video-based interviews and to understand the perceived barriers associated with transitioning away from face-to-face interviews.
A search strategy, developed in concert with an academic health sciences librarian, was run on Ovid Medline, Embase, PsycINFO and Cochrane Central. The search was performed on Mar. 31, 2020, and updated on Feb. 21, 2021. Studies that implemented and evaluated the impact of video-based interviewing in health care were included in our study. Review articles, editorials and studies that did not implement a video-based interview were excluded.
Forty-three studies were included in our scoping review, of which 17 were conference abstracts and 26 were peer-reviewed manuscripts. The risk of bias was moderate or high in most studies. Both financial costs and opportunity costs were reported to be improved with video-based interviewing, while environmental costs were not well explored. Technical limitations, which were not prevalent, were easily managed during the interview process. There were limited studies that evaluated how nonverbal cues are interpreted in a video-based format. Overall, video-based interviews were well received by both applicants and interviewers, although most participants still reported a preference for face-to-face interviews.
While video-based interviewing has become necessary during the COVID-19 era, there are benefits from a financial, opportunistic and environmental point of view that argue for its continued use even after the pandemic. Despite its successful implementation with minimal technical issues, a preference still remains for face-to-face interviews. Reasons for this preference are not clear from the available literature. Future studies on the role of nonverbal communication during the video-based interview process are important to better understand how video-based interviews can be optimized.
The use of indocyanine green fluorescence angiography (ICG-FA) in colorectal surgery to assess tissue perfusion intraoperatively has been gaining popularity, with the goal of reducing anastomotic leak rate. The economic impact of this intervention is not yet understood. We performed a cost analysis of the routine use of ICG-FA in colorectal surgery from the hospital payer perspective.
A decision analysis model was developed for patients undergoing colorectal resections with and without the use of ICG-FA for assessment of anastomotic perfusion. Incorporated in the model were the costs of ICG-FA technology and anastomotic leaks, and anastomotic leak rates in each scenario. Conservative estimates of direct costs of surgical intervention and anastomotic leakage were obtained from a population-based analysis. Effectiveness of ICG-FA was calculated from the odds ratio published in a recent meta-analysis.
Routine use of ICG-FA for colorectal surgery was found to yield a cost saving of $241.20 (2.5%) per case when the analysis was performed using the following data: leak rate of 8.6% (range 1.2%–21.2%) without ICG-FA, odds ratio of 0.46 (95% confidence interval [CI] 0.34–0.62) for reduction of leakage with ICG-FA, cost of ICG-FA of $250 per surgery and cost of anastomotic leak not requiring reoperation of $10 024.43 (95% CI $9424.87–$10 618.49).
In 1-way sensitivity analyses, routine use of ICG-FA was cost-saving as long as the cost of an anastomotic leak is more than $5616.29, the cost of ICG-FA is less than $642.44 per use, the anastomotic leak rate is higher than 4.8% or the odds ratio for reduction of leak with ICG-FA is less than 0.69.
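The core base-case arithmetic can be sketched as follows. This is a deliberate simplification, not the published decision model (which also accounts for leaks requiring reoperation and other pathways), so it does not reproduce the $241.20 figure exactly; it only shows how the odds ratio and costs combine:

```python
def leak_rate_with_or(base_rate, odds_ratio):
    """Convert a baseline risk plus an odds ratio into the treated-arm risk."""
    odds = base_rate / (1 - base_rate) * odds_ratio
    return odds / (1 + odds)


def expected_saving(base_leak_rate, odds_ratio, icg_cost, leak_cost):
    """Expected per-case cost without ICG-FA minus cost with ICG-FA."""
    treated_rate = leak_rate_with_or(base_leak_rate, odds_ratio)
    cost_without = base_leak_rate * leak_cost
    cost_with = treated_rate * leak_cost + icg_cost
    return cost_without - cost_with


# Published base-case inputs: 8.6% leak rate, OR 0.46,
# $250 per ICG-FA use, $10 024.43 per conservatively managed leak
saving = expected_saving(0.086, 0.46, 250.0, 10024.43)
```

With these inputs the simplified model already shows a positive per-case saving (roughly $200), consistent in direction with the published $241.20 result; the 1-way sensitivity thresholds above correspond to the input values at which this saving crosses zero.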
Routine use of ICG-FA in colorectal surgery was found to be a cost-saving measure from a payer’s perspective using conservative base case values derived from recent publications. However, the overall quality of the available evidence is low. There is a need for prospective, randomized controlled trials.
The transition to competency-based surgical training mandates changes to residency workload. Both actual composition and the perception of resident workload should be considerations in program design.
A web-based survey was distributed to faculty and residents in a Canadian general surgery residency program. Questions pertained to the perception of current resident workload (as a percentage of daytime workload), optimal resident workload and the educational value (linear scale 0–10) of various components of resident workload. Workload was divided into 5 task categories. Responses were compared with a 660-hour resident workload observation data set.
Seventeen residents and 16 faculty completed the survey (74% and 67% participation). Residents perceived that substantially less time was allocated to educational activities than faculty did (median 10% [interquartile range (IQR) 5%–10%] v. 15% [IQR 10%–20%], p < 0.001, observed 7.7%). There were no significant differences in perception of time allocated to direct patient care (DPC), indirect patient care (IPC), downtime or transit tasks. Both groups underestimated the amount of time spent on IPC (resident perception 20% [IQR 12%–30%], faculty 18% [IQR 10%–29%], observed 32%). When asked to describe an ideal workload, residents felt that more time should be spent on DPC tasks (60% [IQR 50%–70%] v. 50% [IQR 40%–60%], p = 0.037, observed 44%) and less on IPC tasks (10% [IQR 10%–14%] v. 10% [IQR 10%–25%], p = 0.037, observed 31%) than faculty did. Residents and faculty agreed on the high value of DPC and educational activities and the low value of downtime and transit tasks. Residents rated IPC tasks as less valuable than faculty did (4 [IQR 2–6] v. 7 [IQR 6–8.2], p < 0.001).
Surgical residents and faculty agree on the importance of dedicated educational activities, but faculty overestimate education as a proportion of workload. Both groups underestimate IPC, which faculty perceive as of greater value than residents. This information can guide resident training program design and be used to bridge gaps between resident and faculty perceptions of resident workload.
Up to 85% of health care providers experience loss or illness in themselves or a loved one by the time they complete training. These experiences can intensify both grief and empathy when they later encounter illness in the professional setting as providers of care. The range of formal training that addresses these experiences is unknown, and thus this study aimed to explore interventions that teach health care providers and trainees about personal illness experiences.
A scoping review was conducted using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses for Scoping Reviews (PRISMA-ScR) guidelines. Three bibliographic databases (Medline, PsycINFO, ERIC) were searched using the keywords “illness,” “personal” and “education” and their related terms. Title, abstract and full-text screening were conducted by 2 authors in duplicate to identify studies that described an intervention with a component of teaching or learning in personal experiences with illness or loss (e.g., in self, family, friends). Data were extracted and described narratively and graphically.
The search yielded 4023 articles, of which 13 studies were included. Only 2 studies were published after 2010 and 9 were from the United States. Personal illness was most frequently taught for reflection in the context of palliative care curricula (54%). Only 2 studies’ primary purpose was to teach about coping with grief related to personal experiences. No studies described training on how to support colleagues or trainees facing personal illness or loss. Interventions most often targeted medical students (54%) and were most commonly delivered via group discussion (100%). Reported findings included improved coping skills, decreased stress and better ability to support bereaving patients.
Specific education on personal experiences of loss or illness in health care providers is limited despite the prevalence of these experiences and their unique impact on providers’ ability to cope. Future curricula could equip health care providers and trainees with coping strategies that address the challenges of personal illness experiences while enabling improved resilience and patient care.
Suturing is a skill that surgical trainees must master. Owing to the COVID-19 pandemic, medical schools needed to transition from in-person to virtual avenues of teaching, including procedural skills. Our primary objective was to determine whether virtual video-based feedback is no worse than in-person feedback in improving novice medical students’ suturing skills.
Fifty-four medical students were randomly assigned either to an experimental arm in which they received remote-recorded feedback (RRF) or remote-live feedback (RLF) or to a control arm in which they received in-person feedback (control). There were 18 participants in each group. Participants first learned to suture via an online module then recorded themselves performing a standardized suturing task at home. Customized feedback was then provided by a surgical resident who received standardized training for this project.
RRF participants received a feedback video, RLF participants received live feedback over Zoom and control participants received feedback in person. Participants then recorded another video of the same suturing task. Prefeedback and postfeedback suturing performances were scored by blinded assessors using the University of Bergen Suturing Skills Assessment Tool (UBAT). Our primary outcome measure was the score difference between prefeedback and postfeedback videos. The RLF and RRF groups were compared for statistically significant differences using a 2-tailed paired t test.
Twenty-seven participants (median age 22) were included in the interim analyses. Postfeedback UBAT scores were not significantly different between groups (70.34 [interquartile range (IQR) −25.41 to 127.34] v. 11.34 [IQR −99.66 to 78.34], p > 0.05), with a higher score indicating better performance. Although there was a trend toward RLF demonstrating greater improvement (32 [IQR −10 to 59.25] v. 25 [IQR −63 to 77], p > 0.05), this was not statistically significant.
Thus far, there has been no significant difference between groups in prefeedback scores, postfeedback scores or score difference. Future steps include analyzing videos for all participants and comparison with the control arm.
Robotic surgery has rapidly been integrated into the Canadian health care system despite limited evidence demonstrating its clinical benefit. Our objectives were to describe secular trends and patient- and system-level determinants of the receipt of robotic rather than open or laparoscopic surgery.
This population-based retrospective cohort study included adult patients who, between 2009 and 2018 in Ontario, Canada, underwent 1 of the 4 commonly performed robotic procedures: radical prostatectomy, total hysterectomy, thoracic lobectomy or partial nephrectomy. For each procedure, patients were categorized on the basis of surgical approach: robotic versus open or laparoscopic. Multivariable regression models characterized the trend of robotic surgery use and the association of patient and systems characteristics with the surgical approach received.
A total of 24 741 patients had radical prostatectomy, 75 473 total hysterectomy, 18 252 thoracic lobectomy and 4608 partial nephrectomy. After adjusting for age, sex, comorbidities, disease stage (if applicable), socioeconomic status, rurality, teaching hospital status, physician volume and years in practice, the overall rate of robotic surgery increased by 24% annually in Ontario: 13% annually for robotic radical prostatectomy, 9% for robotic total hysterectomy, 26% for thoracic lobectomy and 26% for partial nephrectomy. There were some variations between procedures in the association between patient- or system-level characteristics and the receipt of robotic surgery. Generally, across all procedures, patients with a lower comorbidity burden and earlier disease stage (when applicable) were more likely to receive robotic surgery than open or laparoscopic surgery. In addition, high-volume, early- to mid-career surgeons at teaching hospitals were more likely to provide robotic surgery.
Use of robotic surgery has substantially increased in Ontario and is offered selectively. Further study of the real-world clinical outcomes and health care costs of this shift in practice is needed.
Parastomal hernia (PSH) is the most common complication of stoma formation. The safety and efficacy of prophylactically placing mesh to prevent PSH remain controversial. To address this question, we examined the incidence of clinical and radiologic PSH when using prophylactic preperitoneal mesh.
We performed a retrospective, single-centre cohort study that included all patients with permanent stoma creation between 2015 and 2018. Patients were divided into 2 groups according to whether parastomal prophylactic mesh (PPM) was used or not.
During the study period, 185 patients had permanent stoma creation, 144 with colostomies and 41 with ileostomies. PPM was placed in 79 patients. There was no difference in the need for early surgical reintervention (p = 0.652) or readmission to hospital (p = 0.31) for stoma-related complications in patients with mesh versus patients without mesh. Similarly, there was no difference in operative time (p = 0.78) or in length of hospital stay (p = 0.81). No patients experienced infection of the mesh or required prosthesis removal. There was a lower incidence rate of PSH in the PPM group in patients with permanent colostomy (adjusted hazard ratio [HR] 0.50, 95% confidence interval [CI] 0.28–0.89, p = 0.018). In contrast, a higher incidence rate of PSH was observed in patients with ileostomy and PPM (adjusted HR 5.92, 95% CI 1.07–32.65, p = 0.041).
Parastomal prophylactic mesh placement is a safe and effective approach to reducing the incidence of PSH in patients requiring a permanent colostomy. However, mesh may increase the rate of PSH after permanent ileostomy.
Right laparoscopic hemicolectomy can be performed with intracorporeal or extracorporeal anastomosis. It is not clear which technique is best. This study aimed to evaluate the impact of both surgical techniques on perioperative safety and the postoperative course.
We conducted a retrospective cohort study from 2015 to 2019 in a tertiary colorectal surgery centre. All patients who had an elective right laparoscopic hemicolectomy were divided into 2 groups according to the anastomosis technique used, being either intracorporeal (IA) or extracorporeal (EA).
A total of 285 patients were included, with intracorporeal anastomosis performed in 64 patients (22%). At 30 days after surgery, there was no difference in anastomotic leak (IA group 0% v. EA group 2%; p = 0.59), bleeding (3% v. 3%; p = 1.00) or intra-abdominal abscess (0% v. 0%; p = 1.00). Mean operative time was longer in the IA group (160 ± 31 min v. 138 ± 42 min; p < 0.001). Median time to first flatus was longer in the IA group (p = 0.049), with a trend toward prolonged ileus (p = 0.07). We noted a trend toward more incisional hernias in the EA group, with a hazard ratio (HR) of 7.128 (95% confidence interval [CI] 0.956–53.165; p = 0.06). No difference was observed in intraoperative complications, operative blood loss, nasogastric tube insertion, time to first bowel movement, length of stay, and surgical reoperation or readmission to hospital. Anastomosis technique had no influence on recurrence in multivariate analysis.
For right laparoscopic hemicolectomy, both anastomotic techniques are safe, with a low incidence of anastomotic complications when performed by experienced surgeons. Intracorporeal anastomosis may be associated with a lower incidence of incisional hernia.
Emergency department (ED) visits have been increasing, and emergency general surgery (EGS) patients represent nearly 10% of these visits. EGS patients are at a substantially higher risk for perioperative morbidity and mortality compared with elective general surgery patients. The acute care surgery (ACS) model has been shown to improve EGS patient outcomes and cost-effectiveness. A recent systematic review demonstrated extensive heterogeneity in the structure of ACS models worldwide. The objective of this study was to describe the current landscape of ACS models in academic centres across Canada.
An online questionnaire was sent to the 17 Canadian academic centres. The lead ACS physicians from each institution completed the questionnaire describing the structure of their ACS models.
Fifteen institutions responded, all of which reported having ACS models. A total of 29 sites with ACS models were described. All were in teaching hospitals with resident coverage, and most (18 [62%]) had dedicated allied health care staff. A minority of sites (3 [10%]) required the most responsible physician to be ACS or trauma fellowship trained. The staff surgeon was free from any elective duties while covering ACS at 18 (62%) sites. Only 6 (21%) sites had protected ACS operating room (OR) time on all 5 weekdays, while 14 (48%) sites had some protected OR time during the week, with the rest of the time being shared with other surgical specialties. Nine (31%) sites had no protected ACS OR time. Only 2 (7%) sites reported a mandate to conduct ACS research, while 13 (45%) found ACS research difficult because of lack of resources.
Large variations were seen in the structure of ACS models in Canadian academic centres. The components of ACS models that are most important to patient outcomes remain poorly defined. Defining the necessary cornerstones of ACS models may benefit the Canadian health care system.
Emergency general surgery (EGS) patients represent a high-risk subset of general surgery patients, responsible for only 11% of operations, but 47% of deaths. There is substantial interhospital variability in the way care is delivered to EGS patients and no consensus regarding the optimal model. We aimed to characterize the structures and processes responsible for delivery of EGS care. All Ontario hospitals were contacted and those offering adult acute care were surveyed. Responses were collected between August 2019 and July 2020.
Our survey response rate was 96% (109/114). A third (n = 37, 34%) of hospitals have EGS models of care. The vast majority have been established since 2007, with 12 of 37 (32%) programs initiated within the last 5 years. The majority (34/37) of the hospitals with EGS models are large institutions (> 100 beds) that would be predicted to have increased resources. However, even when we compared similarly sized hospitals, those with EGS models had increased staffing (clinical associates [16% v. 3%, p = 0.03], nurse practitioners and physician assistants [32% v. 4%, p < 0.01]), diagnostic and interventional equipment (24/7 access to computed tomography [89% v. 64%, p = 0.05], interventional radiology [84% v. 39%, p < 0.01], endoscopy [95% v. 64%, p = 0.02] and endoscopic retrograde cholangiopancreatography [74% v. 39%, p = 0.02]) and dedicated operating room (OR) time (59% v. 0%, p < 0.01). The median amount of dedicated OR time was 16 hours per week (interquartile range 12–22.8 h).
The structures and processes relevant to the care of EGS patients are highly variable among Ontario’s hospitals. As expected, large academic hospitals tend to have more resources; however, even when hospitals of similar size and academic status are compared, hospitals with EGS models of care have more staff, are more likely to have dedicated OR time and have increased access to computed tomographic scanners, interventional radiology, endoscopy and endoscopic retrograde cholangiopancreatography than similar-sized hospitals without formal EGS models.
Virtual learning has been integrated into medical education as an alternative to live sessions during the COVID-19 pandemic. However, its effectiveness for teaching trauma resuscitation has not been validated. Although small-group sessions are an effective pedagogic model in person, less is known about how they translate to online learning in clerkship.
Medical students attended a 2-day virtual trauma conference organized by student interest groups at McMaster University and promoted on social media. The event included 9 interactive presentations by physicians in 5 specialties, followed by virtual small-group case discussions. A best-match algorithm assigned students to preferred small-group sessions. Participants completed anonymous pre- and post-conference knowledge tests and feedback questionnaires. Results were analyzed using paired t tests and descriptive content analysis.
A total of 360 students from 17 medical schools in 5 countries registered for the conference. A peak of 167 simultaneous connections during presentations was recorded, and 68 participants attended small-group discussions. A total of 131 students (36%) completed the pretest, with a mean score of 3.4 out of 10 (standard deviation [SD] 2.04). Eighty-six students (24%) completed the posttest, with a mean score of 6.3 out of 10 (SD 2.3, p < 0.001). Paired t-test analysis revealed significant improvement in the mean score by 2.7 out of 10 (SD 2.3, 95% confidence interval 2.17–3.23, p < 0.001). No significant correlation was found between performance and either years of education or school attended. In total, 95% of participants agreed the platform was effective and 78% indicated it was helpful preparation for clerkship. The response rate for feedback forms for small-group sessions was 59% (40/68), and 93% of participants rated small-group discussions as effective.
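The paired t-test analysis reported above compares each student's pretest and posttest scores. As a minimal sketch (illustrative only, not the conference's analysis code; scores in the test are made up, and a full analysis would also derive a p value from the t distribution):

```python
import math


def paired_t(pre, post):
    """Paired t statistic for matched pre/post scores.

    Returns (t, degrees of freedom); t = mean difference divided by
    the standard error of the differences.
    """
    d = [b - a for a, b in zip(pre, post)]  # per-subject improvement
    n = len(d)
    mean = sum(d) / n
    # Sample variance of the differences (n - 1 denominator)
    var = sum((x - mean) ** 2 for x in d) / (n - 1)
    se = math.sqrt(var / n)
    return mean / se, n - 1
```

Because each subject serves as their own control, within-subject variability is removed from the denominator, which is why the paired design can detect the roughly 2.7-point mean improvement with a relatively modest sample.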
With high participant satisfaction and improved posttest results, this virtual model for trauma education at the medical student level is an effective adjunct to the clerkship curriculum. This study has important implications for the future design and implementation of international virtual conferences.
The National Surgical Quality Improvement Program (NSQIP) tracks postoperative outcomes and benchmarks to drive local quality improvement programs. The NSQIP Surgical Risk Calculator (SRC) was developed to provide patient-level risk estimates to guide informed consent; discharge to nursing or rehabilitation facility (DNRF) was added as a patient-centred outcome measure. Emergency general surgery (EGS) is a known risk factor for increased postoperative complications. Our objective was to apply the SRC to our EGS NSQIP cohort to evaluate the accuracy of patient-centred outcomes as metrics to support patient and surgeon decision-making.
A retrospective audit of our NSQIP data was performed. EGS cases from the July 2019 to June 2020 validated NSQIP submission were included. Standard NSQIP demographic data, clinical data and outcomes were extracted. The SRC was applied to each patient using collected database variables. Predicted risks for postoperative complications, return to operating room (RTOR), mortality, DNRF, length of stay (LOS) and readmission within 30 days were compared with observed data. Basic frequencies were calculated. Patient-centred outcomes (LOS, DNRF) were compared using standard t tests.
Overall, 170 patients were included. Average age was 55.4 (standard deviation [SD] 20.5) years and 52.4% of the patients were female. Most patients were American Society of Anesthesiologists (ASA) class 2 (72, 42.4%) or 3 (68, 40.0%). Average body mass index was 28.6 (SD 6). The SRC adequately predicted risk of any complication (16.2% predicted v. 16.5% observed), surgical site infection (5.8% v. 6.5%), RTOR (4.2% v. 5.3%), mortality (3.0% v. 2.4%) and readmission (8.1% v. 5.9%). The SRC underperformed for patient-centred outcomes. Predicted LOS was significantly lower than observed LOS (5.6 v. 10.7 d, p < 0.001). The predicted DNRF rate was much higher than observed (11.1% v. 1.8%, p < 0.001).
The NSQIP SRC adequately predicted postoperative complications and mortality in our EGS cohort. However, it performed poorly in predicting patient-centred outcomes (LOS, DNRF). In the Canadian health care context, caution should be used when including risk of DNRF in informed consent discussions with EGS patients.
Opioid abuse continues to be an endemic social and medical issue with substantial morbidity and mortality. Our group has previously demonstrated substantial reduction in opioid prescription following outpatient surgery without affecting patient-reported pain scores in the context of intensive patient and physician intervention.
Data on patient-reported pain control as well as opioid prescription and use following elective outpatient open hernia repair or laparoscopic cholecystectomy were collected prospectively 1 year after the completion of a division-wide intervention (Post-I). The control group data were collected during the active intervention (Int) phase of the previous study, which included emphasis on co-analgesia with acetaminophen and nonsteroidal anti-inflammatory drugs, reduced opioid prescriptions, and patient education.
A total of 192 patients were assessed in the intervention phase, with 128 patients assessed 1 year after the completion of the STOP study. There were no significant differences in the proportion of patients rating their pain control as good or very good (84.7% Int v. 78.9% Post-I, p = 0.23), but mean postoperative pain scores were significantly greater 1 year after the intervention (2.1 [standard deviation (SD) 1.7] Int v. 4.1 [SD 1.0] Post-I, p < 0.001). Mean total oral morphine equivalents prescribed decreased after intervention (78 [SD 70] Int v. 50 [SD 3] Post-I, p < 0.001). There was an increase in the proportion of prescriptions filled 1 year after intervention (44.8% Int v. 67.2% Post-I, p = 0.03).
Following opioid prescription reduction interventions, there was a sustained reduction in narcotic prescribing and sustained compliance with recommendations. There may have been a decrease in patient education and expectation management, resulting in worsened postoperative pain scores. The majority of patients still described good overall pain control.
Transanal endoscopic microsurgery (TEM) is a minimally invasive procedure that allows for full-thickness local excision of adenomas and select early rectal adenocarcinomas. Despite its clear advantages over other modalities, TEM is not uniformly utilized across Canada. We designed an anonymous survey to assess potential barriers to TEM referral.
The survey was in an online, predominantly multiple-choice format, distributed to endoscopists across Canada electronically.
This was a survey using population-based sampling. In total, 199 endoscopists completed our survey, including 62 (31.3%) gastroenterologists and 136 (68.7%) surgeons. Referring endoscopists comprised 147 of the 199 respondents (73.9%), while 52 of the 199 respondents (26.1%) perform TEM or transanal minimally invasive surgery (TAMIS) or both. For patients with either clear or unclear indications for TEM, 30 of 146 (27.4%) and 64 of 146 (43.8%) referring endoscopists had a low referral rate, respectively. On univariate analysis, factors associated with low referral rate included lack of confidence with indications for TEM (odds ratio [OR] 9.94, 95% confidence interval [CI] 3.15–31.40, p < 0.001), poor understanding regarding the advantages of TEM (OR 11.26, 95% CI 3.83–33.07, p < 0.001), low comfort with referring (OR 183.7, 95% CI 21.9–1537.5, p < 0.001) and lack of access to TEM (OR 14.91, 95% CI 5.39–41.26, p < 0.001). Gastroenterologists were more likely to have a low referral rate than surgeons (OR 2.76, 95% CI 1.30–5.83, p < 0.01). On multivariable analysis, low comfort with referring remained independently associated with low referral rate (OR 109.5, 95% CI 7.3–1626.8, p < 0.001). Provinces with a population of less than 1 million and nonacademic practice settings were each independently associated with lower TEM accessibility (OR 0.269, 95% CI 0.110–0.658, p < 0.01; OR 0.326, 95% CI 0.141–0.755, p < 0.01). Among the surgeons trained to perform TEM and TAMIS, the greatest perceived barrier to referral was a lack of educational resources for referring physicians (23/58, 37.9%).
Many patients who are potentially eligible for TEM are not being referred for consideration. An educational gap regarding indications, lack of comfort among referring physicians and geographic inaccessibility are among the greatest perceived barriers to referral.
Northern Alberta represents a geographically complex region encompassing over 460 000 km². This poses substantial challenges for prehospital care of rural trauma patients. We performed a geospatial analysis to understand the geographic locations of rural trauma patients and compare those who were directly transported to a level 1 or level 2 trauma centre (Direct) with those who were initially managed in a smaller hospital and then transferred to a trauma centre (Staged).
We performed a retrospective review of our trauma registry for all severely injured (Injury Severity Score [ISS] > 12) adult (≥ 18 yr) patients injured outside of Edmonton and admitted to a trauma centre between Jan. 1, 2016, and June 30, 2020. Clinical demographic characteristics, location, injury type and mode of transportation were compared between the Direct and Staged groups. Our primary outcome was in-hospital mortality.
We included 2462 patients in the study: 910 in the Direct group and 1552 in the Staged group. Patients in the Direct group were more frequently injured in motor vehicle or motorcycle collisions than patients in the Staged group (p < 0.001) and had a higher ISS (20 v. 18, p < 0.05) despite having similar prehospital vital signs. Patients injured within a 60-minute drive of a trauma centre were more likely to be transported directly: 546 patients (60%) in the Direct group versus 729 (47%) in the Staged group were injured within 60 minutes’ driving time of a trauma centre, and 14% of patients were injured more than 3 hours’ driving time from a trauma centre. Significantly more patients in the Direct group required operative intervention after arrival at the trauma centre (4.3% v. 2.1%, p < 0.001). In-hospital mortality was also higher in the Direct group than in the Staged group (12% v. 8%, p < 0.001).
Northern Alberta represents a complex landscape with a substantial number of rural patients who are injured far from a trauma centre. Higher mortality was observed in those transported directly to a trauma centre and may be accounted for by known weaknesses in this methodology. Future work will examine the impact of field triage and prehospital transportation within this geographically diverse population.
Blunt chest trauma (BCT) may compromise respiratory effort because of pain and lead to substantial morbidity and mortality. Utilization of bedside incentive spirometry (IS) encourages patients to recoup atelectatic lungs and may identify patients at an increased risk of pulmonary complications due to BCT. We explored the feasibility of implementing an IS protocol in a trauma ward.
A protocol to measure inspiratory reserve volumes (IRV) using a bedside flow-oriented incentive spirometer (DHD CliniFLO, Smiths Medical Inc.) was developed in October 2018. Training was provided to nursing staff in a trauma ward at a tertiary care trauma centre to measure IRV. Between Nov. 1, 2018, and Mar. 12, 2019, we conducted a prospective observational study on all adult (≥ 18 yr) patients admitted to the trauma service and measured IRV. We examined the rate of implementation and utilization of IS in the ward and conducted biweekly sessions with nursing staff to obtain feedback.
Within 3 weeks of implementing the protocol, 90% of patients had bedside spirometers. Measurement of IRV at least once per shift increased from 25% to 90% at 17 weeks after implementation of the IS protocol. Feedback at 12 months after implementation demonstrated that use of IS was feasible, measurement of IRV did not add to nursing care workload, and IS was perceived to reduce complications and improve patient outcomes. During the study period, we measured IRV in 55 trauma patients: median IRV values on postadmission days 1, 2 and 3 were 800 mL, 1000 mL and 1200 mL, respectively.
Implementation of an IS protocol is feasible without overburdening nursing workload and is perceived to have substantial value in the care of trauma patients. Future work will correlate IRV with pulmonary complications in BCT and aim to modulate IRV in BCT with different analgesia regimens.
Impostor phenomenon (IP) characterizes the feeling of extreme self-doubt despite consistently positive feedback. This study explored the relationships between IP, burnout and anxiety in Canadian resident physicians. We hypothesized that impostorism is substantially associated with burnout and anxiety.
Anonymous surveys were emailed to 1434 residents in family medicine (FM), pediatric medicine (PM), anesthesiology (AN) and general surgery (GS) programs across Canada. The Clance Impostor Phenomenon Scale (CIPS), Maslach Burnout Inventory–Human Services Survey (MBI-HSS) and General Anxiety Disorder-7 (GAD-7) scales were used to assess the prevalence of IP, burnout and anxiety, respectively.
A total of 269 residents responded to the survey (FM 24.9%, PM 33.1%, AN 20.4%, GS 21.6%). IP was identified in 62.7% of all participants, and the average IP score was 66.4 (standard deviation 14.4), which corresponded with “frequent feelings of impostorism.” Women were more likely to score positive for IP (relative risk [RR] 1.27, p = 0.02). FM, PM and AN residents were all more likely to be burned out than GS residents (26.7%–31.7% v. 10.0%, p = 0.02). Those who did not feel “well-supported” were more likely to score positive for IP (RR 1.05–1.50, p < 0.01), burnout (RR 1.87–3.26, p < 0.01) and anxiety (RR 1.34–2.19, p = 0.03). Scoring positive for IP was an independent risk factor for both burnout (RR 1.82, p = 0.02) and anxiety (RR 3.64, p < 0.01). Scoring positive for burnout, in turn, was an independent risk factor for anxiety (RR 2.65, p < 0.01). Increasing scores on the CIPS were associated with increasing scores on both the MBI-HSS and GAD-7 (p < 0.01).
IP appears to be a universally prevalent phenomenon experienced by residents of all specialties surveyed and probably contributes to the development of both burnout and anxiety symptoms. These results underline the focus educators must place on providing support to residents throughout their education, particularly to those in vulnerable groups, to alleviate threats to resident well-being.
Canadian surgical residency programs are designed to prepare trainees to develop into competent and safe surgeons. Multiple innovative teaching models and strategies have been reported to support prospective surgeons in their training. However, common constraints to teaching time include pressures for efficiency, complexity of cases, complexity of technologies, and shorter work weeks, among others. Accordingly, a substantial number of trainees have reported concerns with regard to confidence surrounding preparedness for independent practice. Consequently, there is a need for a more structured and deliberate approach to surgical knowledge and skill acquisition. The purpose of this study is to characterize the current status of perioperative and intraoperative education and experience at a large single centre.
A literature review was performed, and experts were consulted to design a comprehensive survey to capture respondents’ demographic information and their perceptions of perioperative briefing practices, notably setting of objectives, review of indications and imaging, technical manoeuvres, procedural pitfalls, intraoperative decision-making, as well as perceptions of autonomy, independence and influence on performance. The survey included a mixture of Likert-scale responses, multiple-choice questions and free-text response questions. A request for participation was sent to all surgical residents and surgical staff of the Department of Surgery. Summary statistics and thematic analysis were used to analyze the quantitative and free-text data, respectively. The results of this study will provide valuable perspectives and support the design of an efficient tool to facilitate perioperative review of evidence and surgical steps to improve perioperative knowledge and skill among surgical trainees.
Personal protective equipment (PPE) guidelines serve to protect health care providers and patients from harmful biohazards. With the rise of COVID-19, many institutions have mandated strictly enforced endoscopic PPE guidelines. We currently do not know how practitioners perceive these mandates or how they will influence their practice in the long term. We aimed to study the PPE practices among endoscopists across Canada and compare their perceived differences in practice between the pre- and post-pandemic eras.
A 74-item questionnaire was emailed from June 2020 to September 2020 to all members of the Canadian Association of Gastroenterology and the Canadian Association of General Surgeons through monthly newsletters. The survey was created by expert consensus and distributed using REDCap. Survey questions collected basic demographic characteristics of Canadian endoscopists and differences between PPE practices before and after the COVID-19 pandemic.
A total of 77 respondents completed the survey, with the majority of respondents aged 40–49 years (34 [44.2%]) and identifying as gastroenterologists (54 [70.1%]). There was an even split in terms of sex (38 women [49.4%], 39 men [50.6%]). In the prepandemic era, most endoscopists wore gowns (91.0%–93.9%) and all wore gloves (100%); however, only a minority wore surgical masks (20.9%–31.3%), N95 respirators (1.5%–3.2%), face shields (13.4%–33.9%), eye protection (13.4%–21.3%) or hair protection (11.1%–12.5%). In the postpandemic era, endoscopists reported a plan to dramatically change their prepandemic practices and adopt current PPE mandates. Overall, the top 3 PPE changes endoscopists reported implementing were increasing routine use of surgical masks (50.6%–61.0%), face shields (32.5%–46.8%) and hair protection (32.5%–36.4%). Endoscopists also reported a plan to change gowns more frequently (13.0%–19.5%).
The COVID-19 pandemic has changed the attitudes of many endoscopists regarding future PPE use in routine endoscopy. Ongoing studies comparing the rates of transmission of hospital-acquired infections in the setting of endoscopy are needed to develop a new postpandemic PPE consensus.
Colonic diverticular disease is a common entity among the general population, commonly discovered incidentally on imaging or colonoscopy. Spontaneous colocutaneous fistula (CCF) from diverticular disease without history of surgery or percutaneous drainage is exceedingly rare.
We report an unusual presentation of spontaneous CCF from perforated diverticular disease presenting as necrotizing fasciitis of the thigh in a healthy 59-year-old woman with an 8-month history of chronic sciatic pain following an episode of acute abdominal pain. This case is the first to our knowledge in which the intra-abdominal process was immediately recognized and operative management of the thigh and intra-abdominal disease occurred concurrently.
Subcutaneous emphysema of the thigh with sepsis should prompt consideration of a gastrointestinal cause. Rapid recognition is important, and initial concurrent surgical débridement and intra-abdominal disease control offer the best chance of a successful outcome.
Indigenous health has increasingly become a focus in improving Canadian health care. The Indigenous population is known to experience disparities including higher rates of chronic disease and lower life expectancy. However, disparities in surgical outcomes are less well described. The purpose of this systematic review is to identify available studies comparing surgical outcomes among Indigenous and non-Indigenous populations in Canada to better characterize research in this area and the presence of any disparities.
A systematic review was conducted in accordance with Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines using Medline and Embase. All studies reporting comparative surgical outcomes for both Indigenous and non-Indigenous populations in Canada were identified from database inception to February 2021. Indigenous populations included those described as Indigenous, Aboriginal, Native or North American Indian, Eskimo, Inuit, First Nations and Métis. Noncomparative studies, review articles, case reports and studies reporting only nonsurgical outcomes were excluded.
Eighteen studies were included. Five studies related to general surgery, 5 to urology, 4 to vascular surgery, 2 to orthopedic surgery and 1 each to cardiac and thoracic surgery. Thirteen studies reported disparities, with Indigenous patients experiencing worse rates across 11 separate postoperative complication types. Eight of these were major complication types as determined using the Clavien–Dindo classification of surgical complications (grades III–IV). Areas of disparity included rates of graft rejection following solid organ transplantation, postoperative infection risk including septicemia, and postoperative all-cause mortality. One study reported a better outcome in Indigenous patients for phantom limb pain following lower extremity amputation.
Further comparative studies across other surgical disciplines and analyses of barriers to accessing care can help provide a broader picture of existing disparities and inform approaches to address these gaps.
Fundoplication and medical management are current mainstays for the management of Barrett esophagus (BE). However, our understanding of the differences in outcomes between these 2 treatments is limited. The aim of this study was to perform a systematic review and meta-analysis to evaluate the efficacy of these interventions on BE disease regression and progression.
A comprehensive search in Medline, Embase, Scopus, Web of Science and Cochrane Library databases was performed on Feb. 22, 2021. Inclusion criteria were studies with both medical and surgical management comparators, BE diagnosis before treatment, patients aged 18 years and older and studies with more than 5 patients. Primary outcomes of interest included evaluating changes in histopathologic BE regression and disease progression between interventions. Meta-analysis was performed using a Mantel–Haenszel random-effects model (RevMan 5.4.1).
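As an illustrative aside on the random-effects pooling described above, the sketch below shows a minimal DerSimonian–Laird pooling of odds ratios from 2 × 2 tables. It is for didactic purposes only and is not the RevMan 5.4.1 implementation used in the study; the function name and input format are assumptions.

```python
import math

def pooled_or_random_effects(studies):
    """Pool 2x2 tables (a, b, c, d) = (events, non-events) in the
    intervention and control arms into a random-effects odds ratio.
    Returns (pooled OR, lower 95% CI, upper 95% CI)."""
    ys, vs = [], []
    for a, b, c, d in studies:
        if 0 in (a, b, c, d):                       # continuity correction
            a, b, c, d = a + 0.5, b + 0.5, c + 0.5, d + 0.5
        ys.append(math.log((a * d) / (b * c)))      # log odds ratio
        vs.append(1/a + 1/b + 1/c + 1/d)            # its variance
    w = [1/v for v in vs]                           # fixed-effect weights
    ybar = sum(wi * yi for wi, yi in zip(w, ys)) / sum(w)
    q = sum(wi * (yi - ybar) ** 2 for wi, yi in zip(w, ys))   # Cochran's Q
    c_dl = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(ys) - 1)) / c_dl)     # between-study variance
    wr = [1/(v + tau2) for v in vs]                 # random-effects weights
    mu = sum(wi * yi for wi, yi in zip(wr, ys)) / sum(wr)
    se = math.sqrt(1 / sum(wr))
    return math.exp(mu), math.exp(mu - 1.96 * se), math.exp(mu + 1.96 * se)
```

When the studies are homogeneous the between-study variance estimate collapses to zero and the result coincides with inverse-variance fixed-effect pooling.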
A total of 7231 studies were retrieved after the initial search, with 9 studies (1 randomized trial, 7 prospective cohorts, 1 retrospective cohort) meeting the final inclusion criteria. Across the included studies, 890 patients (65%) received medical management while 470 (35%) received surgical management. Medical management included proton pump inhibitors (n = 807, 91%, 6 studies), H2-receptor blockers (n = 40, 4%, 3 studies) and combination therapy (n = 43, 5%, 1 study). Nissen fundoplication was the most commonly performed type of fundoplication (n = 265, 93%). Median length of follow-up ranged from 18 to 72 months. Meta-analysis revealed that, compared with medical therapy, fundoplication was associated with improved histopathologic regression of metaplasia and low-grade dysplasia (odds ratio [OR] 4.38, 95% confidence interval [CI] 2.28–8.42, p < 0.001) and with reduced disease progression to dysplasia or adenocarcinoma (OR 0.34, 95% CI 0.12–0.96, p = 0.04).
Fundoplication is superior to medical therapy, with improved odds of histopathologic BE disease regression and reduced odds of disease progression. Additional randomized trials that directly compare medical management and surgical intervention are required to delineate the optimal delivery and timing of these interventions.
Quality surgical education during residency training is central to graduating competent general surgeons. Surgical education goals can be divided into knowledge, skills and judgment acquisition. Traditionally, knowledge acquisition has been delivered through scheduled didactic sessions and access to textbooks, journal articles and online resources. However, the COVID-19 pandemic has shifted knowledge delivery methods to exclusively online content and virtual teaching. There has also been a shift in service expectations for both residents and educators, making it challenging to balance the demands of delivering and consuming high-quality educational content. Another neglected aspect of surgical education is equitable access to quality, resource-specific content for colleagues in resource-limited settings.
The project aims to capture the current experience of residents, program directors and administrators in structured residency programs using a qualitative approach and to create and implement a comprehensive online curriculum designed for general surgery trainees in both high- and low-resource settings.
A structured interview was developed by educational experts to capture the lived experience of the interviewees. This is being administered to general surgery residency programs in Canada and through the College of Surgeons of East, Central and Southern Africa (COSECSA). The interviews are recorded, transcribed and coded to blind the participants’ information and institution. A grounded theory approach to content analysis will be used to summarize interviewees’ experiences and present the results as quantitative and qualitative data. The interviews and qualitative analysis will be performed by separate research team members.
In parallel with the qualitative study, a novel online general surgery platform based on the Royal College of Physicians and Surgeons of Canada objectives is being developed and implemented. Data from the qualitative analysis will be used to inform the design and content of the platform. A description of the novel platform and preliminary usage data will be presented along with the qualitative analysis.
Hemorrhage is a leading cause of death in trauma. Hemorrhage control includes surgical and endovascular techniques, with use of topical hemostatics playing an increasing role. The defensive slime produced by hagfish (a deep-sea animal) forms within milliseconds and entraps up to 26 000 times its weight in water within a fine mesh-like network. Our project aims to determine if hagfish slime can be applied as a novel innovative hemostatic “clot.”
Research ethics approval was obtained and a standardized protocol for extraction was developed. Hagfish were anesthetized and electrical stimulation was used to extract slime exudate. The exudate was placed in a buffer solution and tested with normal saline, seawater, blood and varying concentrations of calcium chloride. Initial “clot” samples were evaluated with direct observation to determine viability. Additional viscosity testing using thromboelastography and rheometry was attempted.
Thromboelastography and rheometry testing was unsuccessful because of the speed of reaction and the volume limitations of the testing devices. One hundred milligrams of slime exudate in combination with either normal saline or seawater created a total volume of 100 mL of “clot.” An experimental set-up was used to mix the slime exudate, solution and calcium and to suspend the formed “clot” above a beaker on a scale. The weight of the solution returning to the beaker over time was used as a proxy for “clot” formation and deterioration. Increasing the amount of calcium added did not increase the amount of slime formed. Tests conducted with blood demonstrated successful creation of a “clot” that was observed to be cohesive and stable for approximately 30 minutes. The “clot” appeared more viscous than the slime created with normal saline or seawater.
Initial results show that slime can be formed with solutions isotonic to human plasma. The durability of the slime “clot” created with blood may be due to the clotting cascade interacting with the slime exudate, but further exploration is needed.
There are well-established provincial guidelines surrounding target wait times for patients diagnosed with cancer. Wait 1 is the time from initial referral to a patient’s first surgical oncology appointment. Wait 2 is the time from the decision to operate to the actual operation. During the first wave of the COVID-19 pandemic in March 2020, elective operations decreased and the majority of in-person appointments were cancelled or changed to telephone appointments. Oncologic operations were allowed to continue; however, routine screening temporarily stopped. The objective of this study is to determine the effect of the COVID-19 pandemic on overall case counts, Wait 1 and Wait 2.
All patients diagnosed with cancer and referred for surgical management at our centre from Mar. 15 to June 30, 2019, were compared with those in the same time period in 2020.
In 2020, there was a significant decrease in new cases, from 666 to 588 (p < 0.001). Surgical specialties varied greatly in how much their case volume was affected. Significantly more patients met the target time for Wait 1 in 2020 (89.4%) than in 2019 (85.0%) (p = 0.019). However, significantly fewer patients met the target time for Wait 2 in 2020 (71.6%) than in 2019 (88.4%) (p < 0.001). Again, the ability to meet target times for both Wait 1 and Wait 2 varied by surgical specialty. The decrease in overall case counts from 2019 to 2020 (11.7%) was less than the provincial decrease in cases (23.2%) over the same time period.
While overall case volumes dropped, the decrease was not universal across surgical specialties. Further, the majority of subspecialties improved in Wait 1, while the majority had a lengthening of Wait 2. It remains to be seen whether individual patient outcomes will suffer because of the first wave of the COVID-19 pandemic.
Same-day surgery is an increasingly utilized and cost-effective strategy to manage common surgical conditions. However, many institutions limit ambulatory surgical services to only healthy individuals. There is also a paucity of data on the safety of same-day discharge among high-risk patients. This study aims to determine whether same-day discharge is associated with higher major morbidity and readmission rates compared with overnight stay among high-risk general surgery patients.
This is a retrospective cohort study using data from the National Surgical Quality Improvement Program from 2005 to 2017. Patients with an American Society of Anesthesiologists (ASA) class of 3 or greater undergoing general surgical procedures amenable to same-day discharge were identified. Primary and secondary outcomes were major morbidity and readmission at 30 days. A multivariable mixed-effects logistic regression model was used to estimate the adjusted effect of same-day discharge.
Of 191 050 cases, 137 175 patients (72%) were discharged on the same day. At 30 days, major morbidity was 1.0%, readmission 2.2% and mortality less than 0.1%. The adjusted odds ratio of same-day discharge was 0.59 (95% confidence interval [CI] 0.54–0.64, p < 0.001) for major morbidity and 0.75 (95% CI 0.71–0.80, p < 0.001) for readmission. Significant risk factors for morbidity and readmission included nonindependent functional status, ascites, renal failure and disseminated cancer.
Major morbidity and readmission rates are low among this large sample of high-risk general surgery patients undergoing common ambulatory procedures. Same-day discharge was not associated with increased adverse events and could be considered in most high-risk patients after uncomplicated surgery.
The 6-variable Codman score is a validated tool designed to generate risk-adjusted benchmarking of general surgery outcomes. We sought to externally validate the Codman score in colorectal surgery.
The American College of Surgeons National Surgical Quality Improvement Program (ACS NSQIP) and colectomy targeted data set of 2018 were merged. A Codman score was assigned to every patient. The primary outcome was in-hospital mortality and the secondary outcome was morbidity at 30 days. Logistic regression analyses were performed using the Codman score and the ACS NSQIP mortality and morbidity algorithms as independent variables. The predictive performance (area under the curve, AUC) of the Codman score and these algorithms was compared.
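For readers unfamiliar with how AUC comparisons of this kind are computed, discrimination can be obtained directly from predicted risks via the Mann–Whitney formulation. The sketch below is illustrative only; the function name and inputs are assumptions, not the tooling used in this study.

```python
def auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney formulation:
    the probability that a randomly chosen positive case receives a
    higher predicted score than a randomly chosen negative case
    (ties count as 0.5)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

An AUC of 0.5 corresponds to chance-level discrimination, while values above 0.9, as reported here, indicate excellent separation of patients who died from those who survived.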
A total of 40 589 patients were included and a Codman score was generated for 40 557 (99.9%) patients. The median Codman score was 3 (interquartile range 1–4). A total of 883 (2.2%) patients died and 8081 (19.9%) experienced any morbidity at 30 days. Each component of the score had a significant association with mortality (p < 0.001). For every 1-point increase in Codman score there was a stepwise increase in mortality and morbidity rates and an exponential increase in the odds of mortality (p < 0.001). To predict mortality, the Codman score had an AUC of 0.92 (95% confidence interval [CI] 0.91–0.93), compared with 0.93 (95% CI 0.92–0.94) for the NSQIP mortality score. The Codman prediction rules were highly correlated with the ACS NSQIP mortality (0.88) and morbidity (0.75) prediction rules. When a Codman cut-off score of 7 was applied, 382 of 883 (43.3%) unexpected failures and 2224 of 39 706 (5.6%) unexpected successes were identified for the morbidity and mortality conference.
We propose that the 6-variable Codman score is an efficient and actionable method to drive quality improvement in colorectal surgery. Unlike the ACS NSQIP algorithms, which use over 150 variables in their calculation and are available only to ACS NSQIP participant sites, the Codman score can be implemented broadly.
Limited data exist on the role of diverting loop ileostomy and colonic lavage (DLI) for fulminant Clostridioides difficile colitis (FCDC) compared with total abdominal colectomy with end ileostomy (TAC). Prospective surgical trials are difficult to conduct given the rarity of this condition and the changing epidemiology of the disease. Our objective was to compare outcomes of TAC and DLI for patients with FCDC.
After institutional review board approval, a multicentre retrospective chart review was conducted at 11 institutions in North America for patients who underwent a TAC or a DLI for FCDC (2010–2018). Demographic, preoperative, intraoperative and postoperative data were collected. Main outcome measures were overall 30- and 90-day postoperative mortality, postoperative major morbidity, gastrointestinal restoration rates and recurrence rates.
Of 178 patients, 144 (80.9%) had a TAC and 34 (19.1%) had a DLI. Patients in both groups were similar with regard to demographic characteristics and preoperative markers of disease severity. Overall 30- and 90-day postoperative mortality were 23.5% versus 37.1% (p = 0.10) and 29.4% versus 46.2% (p = 0.104) for DLI compared with TAC, respectively. Multivariate logistic regression identified age (odds ratio [OR] 1.07, 95% confidence interval [CI] 1.03–1.12), preoperative vasopressor use (OR 5.39, 95% CI 1.96–14.83) and creatinine level (OR 1.34, 95% CI 1.07–1.69) as independent predictors of mortality. The incidence of postoperative major morbidity was high in both groups; however, DLI was associated with significantly decreased odds of major morbidity (OR 0.33, 95% CI 0.11–0.97). Gastrointestinal continuity was established more often and earlier in patients who underwent a DLI, and recurrence of C. difficile infection after DLI and after gastrointestinal restoration was low.
DLI is a comparable alternative to TAC with regard to overall 30- and 90-day postoperative mortality, and offers the advantages of decreased morbidity and greater gastrointestinal restoration rates. DLI can be considered an alternative to TAC in select patients.
Emergency general surgery (EGS) conditions cause a disproportionately high degree of morbidity and mortality among surgical patients. The objective of our study was to generate estimates of the potential access to EGS care within various driving distances for Ontario residents in each geographic region.
Institutional details were collected using a survey of all hospitals offering urgent and emergent general surgical care in Ontario (n = 114). The locations of these hospitals were mapped using geographic information systems (GIS), and land catchment areas were modelled for 30-, 45-, 60- and 90-minute travel times using the 2019 Ontario road network. Population data were reported on the basis of the 2016 census blocks, which are the smallest geographic units by which to report population. Travel distances were determined between the geographic centroid of the census block and the closest hospital. Results were stratified for certain characteristics offered by the institution, such as EGS care available (n = 114), dedicated EGS service model (n = 36), 24/7 emergency department (ED) (n = 97) and 24/7 operating room (OR) capabilities (n = 76).
Nearly all (96%, n = 12 933 892) of the Ontario population lives within 30 minutes’ driving time to a hospital that provides care to EGS patients, and 93% (n = 12 471 908) can access a hospital with 24/7 OR capabilities in this same time frame. Around-the-clock ED availability within 30 minutes is potentially accessible for 93% (n = 12 471 908) of the population. However, only 76% (n = 10 220 018) live within 30 minutes of a facility with an EGS model of care. Substantial regional disparities are present and appear to particularly affect remote communities.
Most Ontario residents live within a 30-minute driving distance of a hospital able to provide emergent or urgent general surgical care, with slightly fewer able to access 24/7 operative facilities. Despite this, striking differences persist in access for remote and rural areas of the province.
Immersive virtual reality (iVR) simulators provide accessible, low-cost, realistic adjuncts in time and financially constrained training programs. With increasing utilization of this technology, its effect on global skill acquisition should be clarified. This systematic review examines the current literature on the effectiveness of iVR for surgical skills training.
A literature search was performed on Medline, Embase, Central, Web of Science and PsycINFO for primary studies published between Jan. 1, 2000, and May 13, 2020, on the use of iVR to develop technical surgical skills. Two reviewers independently screened titles, abstracts and full texts, extracted data and assessed the quality and strength of the evidence using the Medical Education Research Study Quality Instrument (MERSQI) and the Grading of Recommendations Assessment, Development and Evaluation (GRADE) framework. Results were qualitatively synthesized, and descriptive statistics were calculated. Where possible, standardized mean differences (SMDs) were calculated using a random effects model.
The literature search yielded 8939 citations, with 13 articles included for qualitative synthesis. Immersive VR-trained groups performed 18% to 43% faster on procedural time to completion compared with controls (pooled SMD −1.11, 95% confidence interval [CI] −1.66 to −0.57, I² = 1%, p < 0.001). Two of 4 controlled trials that used task-specific checklists found that iVR groups completed significantly more steps than controls after the intervention. Four studies reported on implant placement accuracy. In 2 studies by Xin and colleagues, iVR groups placed significantly more successful grade I/A pedicle screws than controls (89.6% v. 60.4%, and 69.6% v. 55.4%). The mean MERSQI score was 11.88 ± 1.60.
Immersive VR incorporation into surgical training programs is supported by high-quality, albeit heterogeneous, studies demonstrating improved procedural times, task completion and accuracy, positive user ratings and cost-effectiveness.
The Canada Lymph Node Score (CLNS) uses 4 sonographic criteria to predict the risk of malignancy in lymph nodes during endobronchial ultrasonography (EBUS) and may play a role in identifying targets for biopsy or rebiopsy during invasive mediastinal staging for lung cancer. However, the CLNS has not yet been independently, prospectively validated in routine clinical practice.
The CLNS scores for each lymph node biopsied during EBUS were prospectively captured for 1 year (2019). The CLNS and the presence of malignancy in each lymph node were compared. Univariate binary logistic regression was completed for each ultrasonographic feature, and a multivariate logistic regression model and receiver operating characteristic curve analysis were performed.
CLNS score and diagnostic pathology results were available for 367 lymph nodes. The indication for EBUS was suspected lung malignancy in 355 (91.7%). Incidence of malignancy increased with increasing CLNS score. Scores of 3 or higher were significantly associated with malignancy (p < 0.001, specificity 84.4%, positive likelihood ratio 4.0). Receiver operating characteristic area under the curve was 0.76, indicating a good ability of the model to predict presence or absence of malignancy. Nodes scoring less than 2 that were also negative on computed tomography and positron emission tomography were malignant in 10.1% (specificity 75.2%, negative predictive value 91.3%).
This study provides independent real-world validation data for the Canada Lymph Node Score, employed in routine clinical practice by a group of general thoracic surgeons in 189 unselected, consecutive patients. The CLNS correlates with the presence or absence of malignancy in thoracic lymph nodes and could serve as an important adjunct to currently available methods of invasive and noninvasive mediastinal staging. The CLNS may be most helpful to select which nondiagnostic nodes require rebiopsy. Use of a combination of low CLNS score and negative conventional radiology to obviate the need for any initial biopsy remains to be studied in prospective trials.
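The positive likelihood ratio reported for CLNS scores of 3 or higher follows directly from sensitivity and specificity. The sketch below is illustrative only: the study reports specificity 84.4% and LR+ 4.0, so the sensitivity used here (about 62.4%) is the implied value, not a figure taken from the abstract.

```python
def lr_positive(sensitivity, specificity):
    """Positive likelihood ratio: how much a positive test raises the odds of disease.
    LR+ = sensitivity / (1 - specificity)."""
    return sensitivity / (1.0 - specificity)

# Specificity 84.4% and LR+ 4.0 are reported; sensitivity ~62.4% is the implied value.
print(round(lr_positive(0.624, 0.844), 1))  # -> 4.0
```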
Venous thromboembolism (VTE) is a major cause of morbidity and mortality in surgical patients. Surgery for esophageal cancer carries a particularly high risk of VTE. This study identifies the risk factors and associated mortality of thrombotic complications among patients undergoing surgery for esophageal cancer.
All patients in the Canadian province of Ontario undergoing esophageal cancer surgery from 2007 to 2017 with follow-up until 2018 were identified. Logistic regression identified VTE risk factors at 90 days and 1 year postoperatively. A flexible parametric survival analysis was used to compare mortality and survival up to 5 years after surgery for patients with and without a postoperative VTE.
Overall, 9876 patients with esophageal cancer were identified, of whom 25.7% underwent surgery. VTE incidence at 90 days and 1 year postoperatively was 4.1% and 6.3%, respectively. Patient factors including age, sex, performance status and comorbidities were not associated with VTE risk. Surgical approach (open v. minimally invasive) was also not associated with VTE risk. Adenocarcinoma was strongly associated with VTE risk compared with squamous cell carcinoma (odds ratio [OR] 2.82, p = 0.006). VTE risk increased with advanced disease (for stage III or IV, OR 4.01, p < 0.001). The highest risk of VTE development was during the first 6 months after surgery, and it plateaued thereafter. Postoperative VTE development was associated with decreased survival at 1 and 5 years, regardless of disease stage (hazard ratio 1.65, p < 0.001).
Patients with esophageal cancer with postoperative VTE have worse long-term survival than those without thrombotic complications. Adenocarcinoma and advanced disease stage carry an increased VTE risk. Extended VTE prophylaxis should be considered in high-risk patients to mitigate the excess mortality associated with thrombotic events.
Venous thromboembolism (VTE) is a substantial cause of morbidity and mortality in patients with cancer who undergo surgery. Among these patients, the subset requiring thoracic surgery have an increased VTE risk because of inherent technical and disease-specific factors. Although other surgical specialties have adopted postdischarge extended VTE prophylaxis, evidence for this practice is scarce in thoracic surgery. This study aims to identify VTE risk factors and associated mortality among patients with lung cancer who undergo surgery.
Using administrative databases, all patients in the province of Ontario who underwent lung cancer surgery from 2007 to 2017, with follow-up until 2018, were identified. Logistic regression was used to identify VTE risk factors at 90 days and 1 year postoperatively. A flexible parametric survival analysis compared mortality and survival up to 5 years after surgery between patients with and without a VTE.
Of 65 513 patients diagnosed with lung cancer in Ontario, 12 626 (19.3%) underwent surgical resection. VTE incidence at 90 days and 1 year postoperatively was 1.3% and 2.7%, respectively. Open and more extensive resections carried an increased VTE risk, with pneumonectomy conferring the highest risk (odds ratio 3.01 at 90 d and 2.36 at 1 yr, p < 0.001). Disease stage was associated with VTE risk; stage III and IV disease carried a 3.19 and 4.97 times higher risk, respectively, compared with stage I (p < 0.001). Frailty conferred a 1.43 times higher risk (p = 0.002). For patients with a postoperative VTE, the hazard ratio for mortality was 2.01 compared with patients without a thrombotic event (p < 0.001). Patients suffering a VTE had substantially reduced 5-year survival (p < 0.001).
Patients undergoing pneumonectomy or open resections and those with advanced stage disease have an increased risk of VTE. Patients suffering a thrombotic complication have an increased risk of mortality and decreased 5-year survival. Accordingly, extended prophylaxis should be considered in patients undergoing high-risk operations to reduce the mortality associated with VTE events.
Failure to rescue (FTR), or the inability to prevent death after a complication, is an important quality measure in thoracic surgery. The association between patient frailty and FTR is not well established in patients undergoing esophagectomy. Our objective was to evaluate the association of the validated 5-factor modified frailty index (5-mFI) with FTR after esophagectomy.
A retrospective cohort analysis was conducted using the American College of Surgeons National Surgical Quality Improvement Program (ACS NSQIP) esophagectomy targeted data set. Adult patients undergoing esophagectomy with gastric conduit between 2016 and 2018 were identified. Among those, patients with at least 1 major postoperative complication were further divided into the FTR group. The 5-mFI was the primary exposure. Outcome measures included 30-day morbidity and mortality; the latter was used to define FTR after complications. A subgroup analysis was conducted in patients with an anastomotic leak. Univariate and multivariable logistic regression models were used to assess the independent association of frailty with FTR.
A total of 997 patients were included in the main analysis, 71 of whom were included in the FTR group (7.1%). The 5-mFI was not significantly associated with FTR in either the univariate (odds ratio [OR] 1.34, 95% confidence interval [CI] 0.99–1.80, p = 0.05) or the multivariable model (OR 1.12, 95% CI 0.60–2.09, p = 0.72). A total of 310 patients experienced an anastomotic leak, of whom 18 died within 30 days (FTR 5.8%). The 5-mFI was significantly associated with FTR in the multivariable model (OR 7.23, 95% CI 1.06–49.19, p = 0.04), which controlled for other known risk factors such as age, surgeon specialty, body mass index, smoking status, steroid use, American Society of Anesthesiologists (ASA) class, operative approach and receipt of chemoradiotherapy.
Although the 5-mFI was not associated with FTR in patients with at least 1 major complication after esophagectomy, it was independently associated with FTR in patients who had an anastomotic leak after esophagectomy. This may help identify patients who could benefit from targeted prehabilitation interventions before esophagectomy.
Triple normal lymph nodes (LNs), appearing benign on computed tomography, positron emission tomography and endobronchial ultrasonography (EBUS), have a less than 6% probability of malignancy. We hypothesized that targeted sampling (TS), which omits biopsy of triple normal LNs during EBUS, is not an inferior staging strategy to systematic sampling (SS) of all LNs.
A prospective feasibility randomized controlled trial was conducted from May 2019 to March 2020 at a tertiary care centre to decide on the progression to a pan-Canadian trial comparing these 2 sampling methods. Patients with cN0–N1 non–small cell lung cancer (NSCLC) undergoing EBUS were randomly assigned in a 1:1 ratio to TS or SS. LNs in the TS arm were then crossed over to receive SS. Progression criteria included recruitment rate (70% minimum), procedure length (no significant increase for TS) and incidence of missed nodal metastasis (MNM, < 6%). The Mann–Whitney U test and McNemar test on paired proportions were used for statistical comparisons, with significance set at p less than 0.05. Pathology specimens were the gold standard.
The progression criterion of 70% recruitment rate was achieved early, triggering a planned early stoppage of the trial. Nineteen patients were allocated to each arm. The median procedure length for TS was significantly shorter than for SS (3.07 min, 95% confidence interval [CI] 2.33–5.52 v. 19.07 min, 95% CI 15.34–20.05; p < 0.001). After crossover analysis, 5.45% of LNs in the TS arm were upstaged from N0 to N2, but this MNM incidence was below the 6% threshold. During surgical resection, the MNM incidence from N0 to N2 was 0% for both TS and SS arms.
The progression criteria for a pan-Canadian, noninferiority crossover trial comparing TS with SS were met, thus warranting the advancement of such a trial. If noninferior in its diagnosis, TS may improve mediastinal staging of NSCLC by shortening the time to appropriate treatment and reducing patient discomfort.
The purpose of gastric ischemic conditioning (GIC) is to reduce anastomotic complications after esophagectomy; however, there is no clear consensus on the efficacy, modality and timing of this procedure. To address this gap in the literature, we conducted a systematic review and meta-analysis of the effect of GIC on postoperative outcomes, with the literature search performed on Feb. 5, 2020. The primary outcome was the incidence of anastomotic leaks; secondary outcomes included mortality, gastric conduit ischemia and anastomotic strictures.
Databases included Medline, Embase, Scopus, Web of Science and Cochrane Library, and inclusion criteria included human participants undergoing esophagectomy with gastric conduit reconstruction, age 18 years and older, sample size of 5 or greater and GIC performed before esophagectomy. A meta-analysis using RevMan 5.4.1 was performed using a Mantel–Haenszel fixed-effects model.
After standardized review of abstracts and full-text articles, 23 studies were included in the final analysis. A total of 2203 patients, the majority of whom were male (n = 1515, 69.8%) with a mean age of 63 (standard deviation [SD] 7.1) years, were included. GIC was performed in 1178 (53.5%) patients with a mean time from GIC to surgery of 28 (SD 37.3) days. Surgical complications included 47 deaths (2.2% GIC v. 2.1% control), 215 anastomotic leaks (7.1% GIC v. 13.1% control), 30 ischemic conduits (0.9% GIC v. 1.9% control) and 212 anastomotic strictures (3.0% GIC v. 17.7% control). Meta-analysis revealed significant improvements in anastomotic leaks (odds ratio [OR] 0.67, 95% confidence interval [CI] 0.46–0.97, p = 0.03) and anastomotic strictures (OR 0.48, 95% CI 0.29–0.80, p = 0.005) in patients undergoing preoperative GIC.
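The Mantel–Haenszel fixed-effects pooling used in this meta-analysis can be sketched for 2×2 event tables. The two tables below are hypothetical and do not correspond to any included study; they only show the shape of the calculation.

```python
def mh_pooled_or(tables):
    """Mantel–Haenszel fixed-effect pooled odds ratio.
    Each table is (a, b, c, d): events/non-events in the treated arm,
    then events/non-events in the control arm."""
    num = sum(a * d / (a + b + c + d) for a, b, c, d in tables)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in tables)
    return num / den

# Two hypothetical studies (leaks / no leaks, GIC arm vs. control arm)
tables = [(5, 45, 10, 40), (7, 93, 14, 86)]
print(round(mh_pooled_or(tables), 3))  # pooled OR below 1 favours GIC
```

RevMan applies the same weighting scheme; a full analysis would also report a confidence interval and heterogeneity statistics, which are omitted here for brevity.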
GIC is a promising technique that may reduce postoperative anastomotic leaks and stricture formation after esophagectomy. Prospective data exploring the differing treatment modalities and timing of esophagectomy, as well as the target population that would benefit most from this technique, are needed before definitive conclusions can be drawn.
Preconditioning before surgery can lower postoperative complication rates and hence length of stay (LOS) in hospital. We therefore designed Move For Surgery (MFS), a home-based preoperative preconditioning program for thoracic surgery patients involving aerobic exercise tracked with wearable technology and deep breathing exercises, and compared it with the usual preoperative standard of care (control) in a prospective randomized controlled trial.
Patients undergoing resection for early-stage non–small cell lung cancer were preoperatively enrolled and randomly assigned to either the MFS or control group in a 1:1 allocation ratio. Those in the MFS group were provided with a wearable activity tracker and a booklet describing various aerobic and deep breathing exercises and nutritional and smoking cessation tips, and they underwent the intervention, whereas those in the control group underwent usual preoperative care. Daily step count, sleep cycle and calories burned were synced and tracked remotely. Daily step goals were set by increasing the participants’ baseline step count by 10% each week until the day of surgery. Participants were encouraged and motivated to reach their daily step goal by automatic reminders through the tracker. Participants completed the EQ-5D-5L health-related quality of life instrument at baseline and on the day of surgery. Continuous variables were compared using the Student t test, and categorical variables were compared using the χ2 test, with a level of significance of p less than 0.05.
Of the 117 patients screened, 87.2% (102/117) were eligible and 93.1% (95/102) completed the trial. The median age was 68 (range 45–87) years and 57.9% (55/95) were women. The mean predicted forced expiratory volume in 1 second and diffusing capacity for carbon monoxide were 88.9% (standard deviation [SD] 17.0%) and 77.1% (SD 17.6%), respectively. LOS in hospital after surgery for the MFS and control groups was 2.67 (SD 1.61) and 4.44 (SD 3.48) days (p = 0.002), respectively. Significant improvement was seen in the overall health component of the EQ-5D-5L from before MFS (69.38 [SD 17.11]) to after (79.60 [SD 11.63]; p < 0.001).
MFS significantly shortened LOS in hospital after surgery when compared with usual preoperative care and resulted in improved patient-reported quality of life.
Hospital readmission after lung cancer surgery is associated with worsened overall survival. This retrospective cohort study examined whether readmission to the index hospital was associated with improved outcomes in comparison to non-index hospital readmission.
Patients undergoing lung cancer resection between 2012 and 2019 were identified from a prospectively maintained institutional database at a tertiary thoracic centre in Ontario. Patient demographic variables, operative parameters and postoperative complications were recorded. The primary outcomes were readmission rate and number of repeated readmissions across index and non-index hospitals. Overall survival was examined as a secondary outcome and survival differences were examined using a log-rank test. Cox regression identified the association between non-index readmission and all-cause mortality. Logistic regression analyzed factors associated with readmissions at 90 days postoperatively.
Over the study period, 3615 patients were identified, of whom 311 (9%) required readmission, with 173 of those (56%) being readmitted to the index hospital. Older patients were more likely to be admitted to non-index hospitals (p = 0.018), and patients presenting to non-index hospitals were more likely to require multiple readmissions than patients presenting to the index hospital (odds ratio 2.0, 95% confidence interval [CI] 1.1–3.6, p = 0.024). Importantly, patients readmitted to non-index hospitals had a worse overall survival (adjusted hazard ratio 1.9, 95% CI 1.2–2.9, p = 0.006).
This study found that non-index hospital readmission after lung cancer resection is associated with an increased risk of multiple readmission and a decreased overall survival. These results highlight the importance of close clinical follow-up in patients after thoracic surgery and emphasize the expertise and specialization required in their postoperative care.
The standard of care for stage I non–small cell lung cancer (NSCLC) is surgical resection. Stereotactic ablative radiotherapy (SABR) plays an important role in the management of early NSCLC in patients who are poor operative candidates or, more recently during the COVID-19 pandemic, as a bridge to surgery when operating room access is limited. The impact of preoperative SABR on surgical resection has not been extensively explored in terms of length of hospital stay (LOS) and difficulty of surgical resection (DSR). Our unique published prospective MISSILE study afforded the opportunity to examine this.
LOS and perioperative outcomes were assessed for patients with stage I NSCLC who received preoperative SABR and subsequent surgical resection (RS) within 10 weeks and compared with a similar cohort who underwent surgery alone (S) from 2014 to 2017 using a propensity-score matched analysis. DSR was assessed on the basis of operative time, blood transfusions, conversion rates (CR) and increased sublobar to lobar resection (SL).
Forty patients in the RS cohort were compared with 168 patients in the S cohort. Univariable and multivariable logistic regression models were generated as a comparison for all patients (n = 208). LOS was similar between the cohorts (mean 5.2 [standard deviation (SD) 4.7] d v. 4.3 [SD 2.2] d, p = 0.90). There were no differences between cohorts for blood transfusions (0% v. 0%), mean operative time (2.4 [SD 1.0] h v. 2.5 [SD 1.2] h, p = 0.60), conversion rates (21.9% v. 18.8%, p = 0.76) or increased SL (9.4% v. 0%, p = 0.24). Three patients who received radiotherapy did not proceed to surgery, 1 because of concerns of radiation pneumonitis.
Preoperative SABR in patients with stage I NSCLC does not have a significant impact on the DSR and LOS.
Robotic-assisted segmentectomy is emerging as a standard operation for early-stage non–small cell lung cancer. Near-infrared fluorescence (NIF) mapping with indocyanine green (ICG) dye facilitates the identification of the true intersegmental plane of resection in the lung, which is often different from the plane predicted by the operating surgeon. We hypothesized that as surgeons gain more experience in segmentectomy, the measured distance between the true plane and the predicted plane will approach 0.
This study is a phase 2, single-arm, prospective trial in patients undergoing robotic-assisted segmentectomy for lung tumours less than 3 cm. At the time of surgery, the predicted intersegmental plane was identified by consensus between 2 thoracic surgeons before ICG injection. Then, the true plane was mapped by intravenous ICG injection. The average distance between the true plane and the predicted plane was recorded and compared across temporal tertiles. The first tertile comprised 30 participants, which is the required number of cases to attain proficiency with the robotic platform, and the remaining participants were divided equally for the second and third tertiles. A Kruskal–Wallis test was used to compare differences between tertiles.
A total of 170 consecutive patients were enrolled from October 2016 to January 2021. Median age was 68 (range 35–85) years, and 43.5% (74/170) were men. The planned intervention with ICG was completed in 59.4% (101/170) of the participants, and intersegmental plane visualization was achieved in 90.1% (91/101). The mean measured distance between the true plane and the predicted plane was 19.4 (standard deviation [SD] 4.2) mm in the first tertile, 2.18 (SD 2.6) mm in the second and 2.48 (SD 1.5) mm in the third (p < 0.001). Locally estimated scatterplot smoothing revealed that this distance approaches 0 as the surgeon performs more cases.
In robotic-assisted segmental resection for lung cancer, the added value of NIF mapping diminishes with surgeon experience.
Current strategies to ensure maintenance of competency for practising surgeons typically consist of self-reported credits or knowledge-based recertification examinations or both. There has been a recent call for formal competency-based medical education (CBME) assessments as part of continuing professional development (CPD).
A questionnaire was developed to examine attitudes toward CBME in CPD for practising surgeons. Items addressed experience with CBME, support for CBME in CPD, considerations for implementation and anticipated impacts of implementation. Email questionnaires were distributed to all members of the Canadian Association of Thoracic Surgeons (CATS) (n = 138). Questionnaire responses informed development of semistructured individual interviews of practising thoracic surgeons. Grounded theory analysis by 3 independent raters was used to analyze the qualitative interview data.
Responses were received from 58 surgeons (response rate 42%). Only 9 (15.5%) had undergone assessment of competence while in practice. There was moderate support for assessment of surgeons’ technical skills (50.9%) or decision-making (56.6%). Support was highest for a mechanism to flag surgeons in need of a focused competence assessment (83.0%). There was a diversity of opinion regarding the optimal timing of assessment and the ideal body to be responsible, with the highest support for assessments primarily for older surgeons (52.8%) and for CATS to play a role (56.6%). Eight surgeons participated in interviews. Interviews identified a range of benefits of CBME in CPD but also several challenges to implementation, including the need for fair, data-driven assessments, taking into account patient outcomes.
A lack of enthusiastic support for CBME in CPD among practising surgeons could present a barrier to implementation. However, surgeons do foresee potential benefits for patients, surgeons and the health care system. By listening to surgeons’ concerns and recommendations, an effective CBME strategy may be devised that would be embraced by surgeons and allow for improved patient safety and surgeon performance.
This study aimed to examine functional and long-term health care dependency outcomes of stereotactic body radiotherapy (SBRT) compared with surgery for older adults with stage I non–small cell lung cancer (NSCLC).
We conducted a propensity-score matched analysis of adults older than 70 years of age with stage I NSCLC treated with surgery or SBRT from January 2010 to December 2017. Home care utilization, probability of being alive and at home, and 1-year days at home were compared using Andersen–Gill, piecewise Cox and negative binomial regression models, respectively. E-value methods assessed presence of residual confounding.
Of 3699 included patients, 1129 had SBRT and 2570 had surgery (72% video-assisted thoracoscopic surgery, 71% lobectomy). A total of 1016 per group were matched. Median follow-up was 38 months (interquartile range [IQR] 20–61 mo). SBRT was associated with a higher risk of home care utilization (hazard ratio [HR] 1.75, 95% confidence interval [CI] 1.37–2.23) than surgery, with a consistently greater proportion of SBRT patients requiring home care in the 6 months to 5 years following treatment. Surgery was associated with a higher probability of being alive and at home within 6–12 months (HR 1.87, 95% CI 1.33–2.64), 2–3 years (HR 2.31, 95% CI 1.64–3.24) and 4–5 years (HR 1.79, 95% CI 1.11–2.91) after treatment, on piecewise regression. SBRT patients had significantly fewer days at home (median 352 d, IQR 346–355 d) than those who underwent surgery (median 358 d, IQR 351–360 d) over 1 year following treatment (rate ratio 1.01, 95% CI 1.01–1.02). The findings persisted in stratified analyses for frail and nonfrail patients, although the association was smaller in frail patients. E-values indicated it was unlikely that the observed estimates could be explained by unmeasured confounders.
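The E-value screen mentioned above has a simple closed form (VanderWeele and Ding): the minimum strength of association, on the risk-ratio scale, that an unmeasured confounder would need with both treatment and outcome to fully explain away an observed effect. Treating the reported hazard ratio as an approximate risk ratio is itself an assumption, so the number below is illustrative.

```python
import math

def e_value(rr):
    """E-value for an observed risk ratio rr > 1 (invert estimates below 1 first):
    E = rr + sqrt(rr * (rr - 1))."""
    return rr + math.sqrt(rr * (rr - 1.0))

# For the reported home-care hazard ratio of 1.75 (treated as an approximate RR):
print(round(e_value(1.75), 2))  # -> 2.9
```

An E-value of about 2.9 means a confounder associated with both SBRT receipt and home care use by a risk ratio of roughly 2.9 each would be needed to nullify the estimate, which is why the authors judged residual confounding unlikely.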
SBRT was associated with higher home care utilization and a lower probability of being alive and at home than surgery for older adults with stage I NSCLC. These are important patient-centred outcomes for counselling and shared decision-making for older adults with early lung cancer.
While neoadjuvant therapy followed by esophagectomy is the standard of care for locally advanced esophageal cancer, the role of adjuvant therapy is uncertain. This review aims to analyze patients with esophageal cancer who previously underwent neoadjuvant therapy followed by curative resection to determine whether additional adjuvant therapy is associated with improved survival outcomes.
Medline, Embase and Central databases were searched up to August 2020 for studies comparing patients with esophageal cancer who underwent neoadjuvant therapy and curative resection with and without adjuvant therapy. The primary outcome was overall survival (OS), and secondary outcomes were disease-free survival (DFS), locoregional recurrence and distant recurrence at 1 and 5 years. Random effects meta-analysis was conducted where appropriate. Grading of Recommendations, Assessment, Development and Evaluation (GRADE) was used to assess certainty of evidence.
Ten studies involving 6462 patients were included. A total of 6162 (95.4%) patients received adjuvant chemotherapy, whereas 296 (4.6%) patients underwent either adjuvant radiotherapy or chemoradiotherapy. When compared with patients who received neoadjuvant therapy and esophagectomy alone, adjuvant therapy groups experienced a significant decrease in 1-year mortality by 48% (risk ratio [RR] 0.52, 95% confidence interval [CI] 0.41–0.65, p < 0.001, moderate certainty). This reduction in mortality was consistent at 5-year follow-up (RR 0.91, 95% CI 0.87–0.96, p < 0.001, moderate certainty), and for the pathologically node-positive subgroup. While adjuvant therapy presented no benefit for the T0–2 subgroups, patients with T3–4 disease experienced a significant reduction in mortality with the addition of adjuvant therapy at both 1 year (RR 0.51, 95% CI 0.41–0.63, p < 0.001, moderate certainty) and 5 years (RR 0.91, 95% CI 0.85–0.97, p = 0.005, moderate certainty). Owing to incomplete reporting, the added benefit of adjuvant therapy was uncertain regarding DFS, locoregional recurrence and distant recurrence.
Adjuvant therapy after neoadjuvant treatment and curative esophagectomy provides improved OS at 1 and 5 years, but the benefit for DFS and locoregional and distant recurrence was uncertain owing to limited reporting of these outcomes.
The incidence of thoracic empyema has been steadily rising in many parts of the world, including Canada. This study aimed to determine which population characteristics are contributing to the increasing incidence in empyema.
Linked population data registries were used to identify patients aged 18 years or older with a hospital admission diagnosis of thoracic empyema from Jan. 1, 1996, through Dec. 31, 2018. Baseline patient characteristics were collected. Incidence rates were calculated with stratification for patient age (< 50, 50–70, > 70 yr) and sex. Linear regression analysis was performed on the prevalence of asthma, chronic obstructive pulmonary disease (COPD), diabetes, alcohol use disorder and other substance use disorders against incidence of empyema. Rate correction was performed to standardize the units of the intercepts calculated.
The sharpest increase in incidence rates was seen in male patients and those over the age of 70 years. The incidence rate in male patients increased from approximately 8.0 to 12.5 per 100 000 (3.0–5.0 per 100 000 in female patients) over the study period. Patients over the age of 70 years experienced an increase in incidence rate from approximately 16.0 to 25.0 per 100 000. No change was observed in the cohorts of patients aged less than 50 years and 50–70 years: 2.5 and 5.0 per 100 000, respectively. All diseases that were analyzed demonstrated a positive correlation with incidence of empyema (p < 0.001). Alcohol use disorders and COPD contributed the most to rising incidence rates with intercepts of 1.56 and 1.55, respectively. This was followed by other substance use disorders at 0.87. Asthma and diabetes were the least contributory at 0.61 and 0.60, respectively.
To our knowledge, this is the first study to examine the impact of population prevalence of different factors on empyema incidence. Stronger preventive health policy targeting modifiable factors such as alcohol, tobacco and other substance use disorders may help curb the rising incidence of thoracic empyema.
Staging of lung cancer requires biopsy of mediastinal lymph nodes (LNs) by endobronchial ultrasound transbronchial needle aspiration (EBUS-TBNA). As many as 40% of the results are inconclusive, causing repeated biopsies and treatment delays. Elastography is a novel technology that produces a colour map to visualize tissue elasticity, with stiffer tissue depicted in blue and softer tissue in red. Elastography may be a useful adjunct for targeting EBUS-TBNA biopsies to reduce inconclusive results. However, this new technology requires evaluation and standardization.
The primary aim is to determine the optimal stiffness colour threshold that most accurately distinguishes malignant from benign LNs using elastographic images, analyzed by a deep neural artificial intelligence (AI) network. The secondary aim is to determine the stiffness area ratio (SAR) cut-off, on the basis of this optimal stiffness colour threshold.
Thirty-one LNs were imaged using EBUS-elastography and analyzed by a trained deep neural AI algorithm, which segmented the LNs and identified the percentage of the LN area above each of 9 different stiffness colour thresholds. The optimal stiffness colour threshold was selected by comparing the area under the curve (AUC), and the SAR cut-off was derived from the receiver operating characteristic (ROC) curve. Results were compared with the gold standard of pathology results.
The optimal colour stiffness threshold was defined as level 60 on the 0–255 colour scale with an AUC of 0.891. A SAR cut-off of 0.4959 was determined, with a sensitivity of 92.3%, specificity of 76.5% and overall accuracy of 83.3% in predicting LN malignancy. Benign LNs had a mean SAR of 0.37, whereas the malignant LNs had a mean SAR of 0.67 (p = 0.0002). An optimal stiffness colour threshold was determined in this pilot study, which was then used to determine the SAR cut-off in our sample. Future larger scale studies are necessary to validate these results.
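The threshold-selection analysis described above can be sketched as follows. This is an illustrative reconstruction only, not the study's actual pipeline: the function name and the toy SAR values and labels are ours, and the abstract does not specify how the cut-off was chosen, so Youden's J is assumed here as a standard criterion.

```python
# Illustrative sketch: given stiffness area ratios (SAR) and pathology
# labels, compute the AUC and pick the SAR cut-off maximizing Youden's J.

def auc_and_best_cutoff(sars, malignant):
    """Return (AUC, SAR cut-off maximizing Youden's J = sens + spec - 1)."""
    pos = [s for s, m in zip(sars, malignant) if m]
    neg = [s for s, m in zip(sars, malignant) if not m]
    # AUC as the Mann-Whitney probability P(SAR_malignant > SAR_benign),
    # counting ties as one half.
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    auc = wins / (len(pos) * len(neg))
    best_j, best_cut = -1.0, None
    for cut in sorted(set(sars)):  # candidate cut-offs: observed SAR values
        sens = sum(p >= cut for p in pos) / len(pos)
        spec = sum(n < cut for n in neg) / len(neg)
        if sens + spec - 1 > best_j:
            best_j, best_cut = sens + spec - 1, cut
    return auc, best_cut

# Perfectly separable toy data: AUC 1.0, cut-off at the smallest malignant SAR
auc, cut = auc_and_best_cutoff(
    [0.2, 0.3, 0.35, 0.45, 0.5, 0.6, 0.7, 0.8],
    [0, 0, 0, 0, 1, 1, 1, 1],
)
```

On real data the classes overlap, so the AUC falls below 1 and the chosen cut-off trades sensitivity against specificity, as in the reported 92.3%/76.5% split.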
The prominence of enhanced recovery after surgery (ERAS) pathways of care being adopted in thoracic surgery has required a turnover of dynamic guidelines. This study aims to assess the role of sequential compression devices (SCD) in the prevention of symptomatic or asymptomatic venous thromboembolisms (VTEs) such as deep vein thrombosis (DVT) and pulmonary embolism (PE) in patients undergoing thoracic surgery who follow an enhanced recovery after thoracic surgery (ERATS) pathway of care.
Data were collected prospectively on 200 patients who underwent elective oncologic thoracic surgery between 2018 and 2020 with non-SCD (NSCD) and SCD as the cohorts. Data collection included 30-day follow-up and Caprini and Charlson Comorbidity Index (CCI) scores. All patients followed a standardized ERATS protocol. Quality of care provided by SCDs was evaluated by the incidence and severity of postoperative symptomatic and asymptomatic VTEs. The Fisher exact test and t test were implemented with significance set at p less than 0.05.
The mean age of patients was 67.16 (standard deviation [SD] 9.56) years, with 35.5% male and 64.5% female. Comorbidities were comparable between cohorts. The Caprini scores (6.96 [SD 1.32] and 6.92 [SD 1.51], p = 0.84) and the CCI scores (3.92 [SD 2.23] and 4.14 [SD 2.61], p = 0.52) for SCD and NSCD, respectively, did not differ. Furthermore, length of stay (LOS, in days) remained consistent across cohorts. However, 3 patients within the SCD cohort developed postoperative PEs with prolonged LOS, despite conservative Caprini and CCI scores, raising concern over the prophylactic value of SCDs. These patients received dalteparin (Fragmin) or apixaban, with resolution of the PE, in accordance with the ERATS protocol.
SCDs are a common form of embolic prophylaxis. Pathways such as ERATS stress early mobilization, smoking cessation and an exercise program alongside anticoagulants, which may render the addition of SCDs ineffective. Omitting SCD use is a step toward improving surgical care, improving outcomes and decreasing costs.
Prolonged air leak after major lung resection is a substantial factor affecting hospital length of stay (LOS). Traditional dissection within the fissure may result in prolonged air leak, particularly with incomplete fissures. Fissureless video-assisted thoracoscopic surgery (VATS) lobectomy has the potential advantage of minimizing this risk through the stapler transection of lung parenchyma. The objective of this study was to assess the impact of the fissureless (FL) fissure-last technique in VATS lobectomy, particularly relating to air leak and LOS.
A review of a prospectively collected database on patients who underwent VATS lobectomy at a single centre between December 2008 and January 2021 was conducted.
A total of 375 and 238 patients underwent the fissureless and traditional approach, respectively. The groups did not differ with regard to sex, age, body mass index, lobar distribution and pathologic TNM stage (8th edition of the cancer staging manual of the American Joint Committee on Cancer). Median length of stay did not differ significantly between the 2 groups (5 v. 5 d, p > 0.05). A greater incidence of prolonged air leak (> 5 d) was seen in the traditional group, reaching significance (13.9% traditional v. 8.8% FL, p < 0.05). Shorter mean operative time was observed in the traditional group (127.0 v. 165.9 min, p < 0.05), with overall complication rates, in-hospital mortality and 90-day mortality being similar between the groups. Overall 5-year survival was comparable between the groups (71.5% FL v. 73.5% traditional, p > 0.05).
We present the largest Canadian series to date comparing VATS lobectomy outcomes between the fissureless and traditional fissure techniques. Our results highlight that routine adoption of a fissureless approach can result in a decreased rate of prolonged air leaks; however, this may not translate into improved LOS.
Lung adenocarcinoma (LUAD) is the predominant histologic subtype of primary lung cancer. Although driver mutations in the Kirsten rat sarcoma 2 viral oncogene homolog (K-RAS) oncogene occur in a quarter of LUAD cases, this alteration portends a poor prognosis and lacks dedicated therapeutics. The C-X-C motif chemokine receptor 2 (CXCR2) mediates neutrophil egress from bone marrow. Neutrophils are thought to exert immunosuppressive effects in the tumour immune microenvironment (TIME) of lung cancer. We found that CXCR2 ligand expression is preferentially upregulated in K-RAS-driven LUAD. K-RAS is a commonly tested oncogene during the diagnostic work-up of LUAD and may serve as a surrogate marker for the utility of CXCR2 inhibition either alone or in combination with other active systemic therapies.
We utilized the PRECOG data set of cancer gene expression and survival outcome data to show that CXCR2 expression is at least 18-fold greater in neutrophils than in other immune cells in LUAD. Overexpression of 8 of the 9 known CXCR2 ligands in LUAD correlates with poorer survival outcomes (p < 0.05) (PRECOG). In addition, high neutrophil infiltration is associated with the poorest survival compared with other immune infiltrates (p < 0.001) (PRECOG). Infiltrating neutrophils in a 100-patient LUAD tissue microarray were associated with worse overall survival (p < 0.05). Neutrophil migration to K-RAS-, EGFR-, ALK- and ROS1-driven LUAD cell lines in microfluidic devices modelling the CXCR2 axis ex vivo was highest in K-RAS-driven LUAD. CXCR2 inhibition reduced neutrophil migration only in K-RAS-driven LUAD (p < 0.05). K-RAS comutational status with TP53 affects the neutrophil recruitment capabilities of LUAD cell lines, which is also reversible via CXCR2 inhibition (p < 0.05).
K-RAS-driven LUAD is a promising candidate for effective inhibition of neutrophil migration and may potentiate existing systemic therapies.
The advent and widespread adoption of competency-based education increased the need for objective and valid assessment tools (ATs) of operative skill. However, commonly used ATs are generic, limiting their ability to assess and provide feedback for proficiency in specialized procedures. Video-assisted thoracoscopic lobectomy (VATSL) is a complex procedure requiring knowledge and skills not captured by generic ATs. Therefore, the objective was to identify and evaluate VATSL ATs assessing surgical proficiency.
A systematic literature search was performed using Embase, Google Scholar, Ovid Medline, PubMed and Web of Science (1990–2020). Conference abstracts, letters, commentaries, ATs assessing bronchoscopy, open surgery, pneumonectomy, wedge resections and ATs not evaluating technical skills were excluded.
Of 523 unique publications, 4 were included. One article described the development of an 8-item assessment tool (VATSAT) by iterative Delphi consensus with 31 experts but did not describe any validation. VATSAT was then modified by the same group to permit assessments of performance on a virtual reality simulator (mVATSAT). mVATSAT had high intraclass correlation coefficients for single (0.78, p < 0.001) and average (0.91, p < 0.001) measures and high reliability scores (G coefficient 0.79, Pearson coefficient 0.70, p < 0.001), but the proposed cut score had high false passing (29%) and false failing (43%) rates compared with performance predicted by clinical experience. The third tool (TCAT-ARC) comprised 35 items scored on a 5-point Likert scale. TCAT-ARC was developed by iterative expert consensus and tested in simulated and clinical environments. TCAT-ARC had high discriminatory ability to differentiate novices from experts (Cronbach α 0.93, interobserver reliability 0.73, correlation with objective structured assessment of technical skills 0.68). However, a threshold score indicative of competence was not determined. The fourth tool used an error score to assess performance on a porcine simulator but did not provide details regarding development or validation.
This review identified 4 VATSL ATs evaluating technical skills: 1 has validity evidence from the clinical environment and none have a reliable competence threshold score. Further study is needed to refine VATSL ATs.
Chest tube management after lung resection is variable. The Enhanced Recovery after Surgery (ERAS) Society and the European Society of Thoracic Surgeons recently set out recommendations regarding postoperative chest tube management. The degree to which these guidelines are adopted is unknown; a national survey was conducted to better understand current practice.
An online survey consisting of 15 questions queried the surgeon’s operative volume, postoperative chest tube management and the surgeon’s awareness and attitude toward ERAS recommendations. The survey was emailed to members of the Canadian Association of Thoracic Surgeons with 3 monthly email reminders. Survey responses were anonymous. Continuous and categorical variables were analyzed using the Student t test and Pearson χ2 test, respectively. We considered p less than 0.05 to be significant.
Sixty surgeons responded (44% response rate, 60/145). Most surgeons placed a single chest tube in both open (75%, 45/60) and minimally invasive (MIS) (93%, 56/60) lobectomies. Digital drainage systems were used by 50% (30/60) of respondents. A quarter of respondents (15/59) reported removing chest tubes regardless of drainage output. Compared with those who based removal on drainage output, there were no significant differences in surgeons’ years in practice (13 [standard deviation (SD) 10] v. 18 [SD 11], p = 0.13), number of lobectomies performed annually (59 [SD 28] v. 61 [SD 39], p = 0.89) and proportion of lobectomies performed by MIS (85% [SD 25%] v. 78% [SD 25%], p = 0.30). The practice was independent of whether the surgeon was aware of ERAS guidelines (χ²(1) = 0.96, p = 0.33). Drainage output thresholds for removal varied widely; 86% (37/43) used a volume lower than the 450 mL per 24-hour threshold set out by ERAS.
This study demonstrated ongoing varied practice among Canadian thoracic surgeons with regard to postoperative chest tube management, particularly around chest tube removal. Recognizing this variability is important for understanding barriers to guideline adoption and has identified areas needing further research to develop best practice.
Insertion of a feeding jejunostomy tube after esophagectomy has been challenged by shifting support for early recovery after surgery protocols. However, the difference in mortality and postoperative outcomes between esophagectomy with and without feeding jejunostomy is unclear. We performed a systematic review and meta-analysis to evaluate the impact of jejunostomy tubes on esophagectomy outcomes in patients with esophageal cancer.
Medline, Embase and Central databases were searched through December 2020 for studies that compared esophagectomy with and without jejunostomy tubes in patients with esophageal cancer. Primary outcomes were 30-day all-cause mortality, readmission and incidence of anastomotic leak. Secondary outcomes were postesophagectomy complications (sepsis, pneumonia and chyle leakage) and length of stay. Random-effects pairwise meta-analysis was used to compare the 2 groups, and the quality of studies was assessed with the Newcastle–Ottawa Scale and the Cochrane risk-of-bias tool.
Twelve articles (2 randomized controlled trials, 10 retrospective studies) were included for analysis. The meta-analyses of these 12 studies, enrolling 36 284 participants, demonstrated significantly lower 30-day all-cause mortality in the jejunostomy tube group (risk ratio [RR] 1.53, 95% confidence interval [CI] 1.37 to 1.70, p < 0.001; I² = 0%, p = 0.80). However, patients with and without a jejunostomy tube showed no difference with regard to rate of readmission (RR 0.97, 95% CI 0.92 to 1.02, p = 0.2), incidence of anastomotic leakage (RR 0.88, 95% CI 0.61 to 1.28, p = 0.5), sepsis (RR 1.2, 95% CI 0.96 to 1.50, p = 0.11), pneumonia (RR 0.88, 95% CI 0.75 to 1.03, p = 0.11), chyle leakage (RR 1.05, 95% CI 0.34 to 3.27, p = 0.14) or length of stay (mean difference −0.22, 95% CI −1.34 to 0.89, p = 0.69).
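The pooled risk ratios above come from a random-effects model; the usual machinery is DerSimonian-Laird inverse-variance pooling of log risk ratios, sketched below. The function name and the study counts in the test are ours, not data from this meta-analysis.

```python
import math

# Hedged sketch of a DerSimonian-Laird random-effects pooled risk ratio,
# the standard approach behind "random-effects pairwise meta-analysis"
# reporting an RR, 95% CI and I^2 heterogeneity statistic.

def pooled_rr_dersimonian_laird(studies):
    """studies: (events_a, total_a, events_b, total_b) per study.
    Returns (pooled RR, (95% CI lower, 95% CI upper))."""
    logs, variances = [], []
    for e1, n1, e0, n0 in studies:
        logs.append(math.log((e1 / n1) / (e0 / n0)))
        # variance of the log RR (delta method)
        variances.append(1 / e1 - 1 / n1 + 1 / e0 - 1 / n0)
    w = [1 / v for v in variances]
    fixed = sum(wi * yi for wi, yi in zip(w, logs)) / sum(w)
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, logs))  # Cochran's Q
    c = sum(w) - sum(wi * wi for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(logs) - 1)) / c)  # between-study variance
    w_re = [1 / (v + tau2) for v in variances]
    mu = sum(wi * yi for wi, yi in zip(w_re, logs)) / sum(w_re)
    se = math.sqrt(1 / sum(w_re))
    return math.exp(mu), (math.exp(mu - 1.96 * se), math.exp(mu + 1.96 * se))
```

With I² = 0%, as reported here, the between-study variance estimate is zero and the random-effects pooled RR coincides with the fixed-effect result.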
Insertion of a feeding jejunostomy tube after esophagectomy may lead to lower 30-day all-cause mortality while showing no difference in common postesophagectomy complications. Routine insertion of a jejunostomy tube could be considered at the time of preoperative planning for patients with esophageal cancer.
Primary spontaneous pneumothorax is managed initially with observation and chest tube placement, followed by surgical intervention in select cases. With little currently published evidence, the role of surgical pleurodesis or pleurectomy to reduce primary spontaneous pneumothorax recurrence remains unclear. This study compares the recurrence rates of primary spontaneous pneumothorax following bullectomy alone versus bullectomy with pleurodesis or pleurectomy.
A retrospective review was performed at a quaternary hospital for all patients undergoing surgery for primary spontaneous pneumothorax between June 2006 and December 2018. Patient demographic characteristics, disease severity, operative technique and time between initial surgery and recurrence were recorded. Standard statistical techniques were used for univariable and multivariable analyses.
Of 222 total included patients, 28 required a second surgery: 4 (1.8%) for prolonged air leak and 24 (10.8%) for recurrent pneumothorax. The median time from first to second surgery was 363 days and 35.7% of recurrences did not present until after 2 years. Age, sex, smoking, year of initial surgery, disease severity and surgical technique did not significantly affect recurrence rate on univariable analysis. On multivariable analysis, the odds ratios of recurrence for bullectomy with mechanical pleurodesis or pleurectomy were 0.82 and 0.15, respectively (p = 0.22), compared with bullectomy alone. Combined bullectomy, pleurectomy and pleurodesis had the lowest recurrence rate (0/18, 0%).
Bullectomy with pleurectomy and pleurodesis demonstrated a 0% recurrence rate for the treatment of primary spontaneous pneumothorax in this study. Statistical significance was not achieved in univariable or multivariable analyses comparing recurrence rates for the surgical approaches. A multicentre randomized controlled trial with longer follow-up than previously performed is needed to confirm these preliminary findings and optimize surgical management of primary spontaneous pneumothorax.
Chest wall resection (CWR) is a complicated treatment modality with substantial morbidity. We sought to report our experience with CWR to determine the frequency and severity of postoperative complications and to identify factors that may be associated with surgical outcomes.
The medical records of all patients undergoing CWR at The Ottawa Hospital from January of 2011 to December 2019 were retrospectively reviewed. For the group analysis, χ2 tests were applied to categorical variables and t tests to continuous variables. Factors predicting binary dependent variables were analyzed using logistic regression.
A total of 96 CWRs were retrospectively reviewed. The indications for surgery included malignant tumours in 43 patients (44%), benign tumours in 31 (32%) and infections or other indications in 24 (24%). Forty-nine resections (50%) were followed by reconstruction, comprising 36 soft tissue coverage procedures and 25 skeletal reconstructions using prostheses (20 flexible, 5 rigid). A multidisciplinary approach was applied in 36% of the procedures. The complication rate was 30%, with wound-related complications accounting for 67% of these. Risk analysis demonstrated an association of both rib resection and chest wall reconstruction with postoperative complications. Comparison of the different reconstructive materials and methods of soft tissue transfer demonstrated no association between either the type of material or the method of transfer and poorer outcomes.
This single-institution retrospective study showed that the reconstructive technique did not affect the patient’s outcome, and most of the complications were wound related. Additional studies with larger sample sizes are necessary to further explore the most important factors associated with the high morbidity of CWR and how to improve patient outcomes.
Primary mediastinal nonseminomatous germ cell tumours (PMNSGCTs) are rare and associated with poor prognosis. There is a paucity of data in the literature, with the largest case series reporting roughly 100 patients after more than 25 years of follow-up. We present our 15-year experience of PMNSGCTs at a quaternary referral centre.
A retrospective cohort reflecting all patients who underwent surgery at our centre from 2006 to 2020 was identified. Demographic characteristics, details surrounding neoadjuvant chemotherapy, surgical intervention, postoperative course, recurrence and serum tumour markers (STM) were collected. In addition to descriptive statistics, survival analysis was performed on several factors.
Thirty patients underwent resection for primary mediastinal germ cell tumours during the study period. Metastases were present in 57% of patients at diagnosis, and all patients received preoperative chemotherapy. Perioperative morbidity and mortality were 23% and 0%, respectively. Surgical access was variable (sternotomy 60%, clamshell 30%, thoracotomy 10%). Median follow-up was 10.4 months, with 57% of patients alive at end of follow-up. Forty percent of patients developed evidence of recurrence during the follow-up period, with a median time to recurrence of 1.4 months. Survival analyses were conducted on presence of metastases, histologic subtype, postchemotherapy histology, history of venous thromboembolism, postchemotherapy tumour markers and type of chemotherapy. Postchemotherapy histology was associated with survival, with a significant difference seen on the basis of the degree of residual tumour (p = 0.003), and there was a trend toward improved survival for patients with normal postchemotherapy tumour markers (p = 0.06).
PMNSGCTs are rare and challenging to effectively treat. Our series demonstrates that complex surgical care can be provided with low morbidity and long-term outcomes consistent with those reported in the literature. Recurrent disease, when present, tended to occur early following surgery. Persistent disease and elevated tumour markers following chemotherapy carry a poor prognosis with significant reduction in survival.
Minimally invasive anatomical lung resection can be performed via robotic-assisted thoracic surgery (RATS) or video-assisted thoracoscopic surgery (VATS). Appropriate nodal dissection improves pathologic staging, directs postoperative adjuvant treatment and might affect survival. Current evidence comparing these methods with regard to adequacy of nodal staging is inconclusive.
We conducted a single-centre retrospective analysis of prospectively collected data. All patients (n = 867) undergoing VATS or RATS anatomical lung resections for non–small cell lung cancer (NSCLC) between April 2014 and April 2020 were included. Propensity score matching was performed. Primary outcomes were number and stations of lymph nodes examined and nodal upstaging. Secondary outcomes included margins (R0, R1, R2) and peri- and post-operative complications.
After matching, 754 patients (484 VATS and 270 RATS; mean age 68.3 yr [standard deviation 8.7 yr], 58.8% female) were included. More lymph nodes were dissected in the robotic group (median 7 [range 0–23]) than in the VATS group (median 4 [range 0–27], p < 0.001). Multiple imputed linear regression adjusting for the propensity score revealed a mean difference of 2.65 (95% confidence interval 2.10–3.23, p < 0.001). However, there was no difference in nodal upstaging between RATS (10%) and VATS (10.5%, p = nonsignificant [NS]). There were no differences between the groups when comparing length of stay (p = 0.91), margins (p = 0.72), intraoperative complications (p = 0.32), in-hospital mortality (p = 1.00) and postoperative complications (p = NS for all parameters).
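The propensity score matching step can be sketched as follows. The abstract does not state which matching algorithm was used; greedy 1:1 nearest-neighbour matching within a caliper is one common choice, and the IDs, scores and caliper below are invented for illustration.

```python
# Hypothetical sketch of greedy 1:1 nearest-neighbour propensity matching
# within a caliper (one common implementation; not necessarily the study's).

def greedy_caliper_match(treated, control, caliper=0.05):
    """treated/control: {unit_id: propensity score}. Returns 1:1 matched pairs."""
    pairs, unused = [], dict(control)
    # Match treated units in descending score order: extreme scores are
    # typically hardest to match, so they get first pick of controls.
    for t_id, t_ps in sorted(treated.items(), key=lambda kv: kv[1], reverse=True):
        if not unused:
            break
        c_id = min(unused, key=lambda c: abs(unused[c] - t_ps))
        if abs(unused[c_id] - t_ps) <= caliper:
            pairs.append((t_id, c_id))
            del unused[c_id]  # match without replacement
    return pairs
```

Note that a 1:1 scheme like this would leave equal group sizes; the 484:270 split reported here implies a different (for example variable-ratio or weighting-based) adjustment was used alongside the score.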
While the robotic approach resulted in a higher number of dissected nodes, it did not translate into higher upstaging and there were no short-term differences between the RATS and VATS groups. A long-term analysis is needed to further characterize potential differences between those 2 minimally invasive approaches.
Mediastinal mass resection and thymectomy are complex and related operations that are core aspects of competency for a general thoracic surgeon. This study aimed to design a combined competency-based medical education assessment instrument for mediastinal mass resection and thymectomy.
A draft instrument was designed by a process of logical analysis by 3 expert thoracic surgeons with expertise in mediastinal surgery. The instrument was then assessed and refined using a modified Delphi process. The Delphi questionnaire was distributed to all members of the Canadian Association of Thoracic Surgeons (n = 132) in 2018–2019. Any items that did not reach inclusion or exclusion thresholds were further reviewed in successive Delphi rounds. The total number of rounds was predetermined at 2 or 3, with a minimum level of consistency of responses determined as a Cronbach α of at least 0.8.
A first version of the competency assessment instrument was developed iteratively over 5 drafts, and it ultimately included 42 steps in 6 categories. This version was reviewed in the first round of the Delphi process, completed by 58 respondents (response rate 43.9%). Respondents represented nearly all Canadian provinces, with a wide range of clinical experience and a high rate of involvement in resident education. A total of 3 rounds of Delphi review were performed. The Cronbach α for the final round was 0.83. Ultimately, 29 items were retained from the original instrument, 2 items were modified and 3 new items were added. The final instrument has 34 steps in 5 categories.
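The consistency statistic reported for the final Delphi round, Cronbach's α, can be sketched as below. The toy ratings in the test are invented; only the formula itself is standard.

```python
# Sketch of Cronbach's alpha for a set of Delphi item ratings:
# alpha = k/(k-1) * (1 - sum of item variances / variance of total scores).

def cronbach_alpha(ratings):
    """ratings: one list of item scores per respondent (all the same length)."""
    k = len(ratings[0])  # number of items

    def var(xs):  # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_vars = [var([r[i] for r in ratings]) for i in range(k)]
    total_var = var([sum(r) for r in ratings])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)
```

Perfectly consistent responses give α = 1; the study's predetermined acceptance threshold of α ≥ 0.8 was met at 0.83.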
A nationwide consensus was established on the key components of assessing competence to perform mediastinal mass resection and thymectomy. The Thoracic Competency Assessment Tool–Anterior Mediastinum (TCAT-AM) could be used to guide competency-based assessments of thoracic surgeons and trainees, although further evidence of validity should be collected.
Venous thromboembolism (VTE) is a common, potentially preventable postesophagectomy complication, but the true incidence, associated risk factors and impact on patient outcomes are unknown in this high-risk population.
Patients undergoing oncologic esophagectomy in 8 international tertiary care centres between November 2017 and March 2020 were enrolled in a prospective cohort study. All patients received guideline-concordant thromboprophylaxis until hospital discharge and underwent perioperative bilateral lower extremity venous Doppler ultrasonography (DUS), chest computed tomography with pulmonary embolus protocol (CT-PE) and DUS at 30 and 90 days postoperatively, and DUS again at 60 days. D-dimer levels were measured at each interval, and patients were followed for 6 months. The primary outcome was VTE incidence up to 180 days postoperatively. Secondary outcomes were the impact of VTE on survival and patient- and surgery-related risk factors for VTE.
The study enrolled 187 patients, with 140 (74.8%) completing the study. Median age was 64 years, 160 patients (85.6%) were male, 82% (n = 153) had adenocarcinoma histology and 135 (72.2%) received neoadjuvant chemoradiation. There were 34 VTE events (24.3%, 95% confidence interval [CI] 17.4%–32.3%): deep vein thrombosis (DVT) in 64.7% of cases (22/34) and pulmonary embolism (PE) in 35.3% (12/34). Median time to VTE was 7.5 (interquartile range 3–27.3) days. Six-month mortality was 17.6% (6/34) for patients with VTE and 2.1% (3/106) for those without (odds ratio 7.74, 95% CI 1.7–31.3). VTE occurred across all study intervals, with 14 of 34 events diagnosed during the baseline hospital stay and the remainder continuing to accrue up to 6 months. No patient-related factors, disease-related factors or postoperative complication–related factors were associated with VTE development. D-dimer levels and Caprini score (98.1% had a score of ≥ 5 at baseline) did not correlate with VTE risk. Most patients had mild, nonspecific symptoms at the time of VTE diagnosis.
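The reported incidence of 34 events among 140 patients (24.3%, 95% CI 17.4%–32.3%) is a binomial proportion with an interval estimate; the abstract does not name the interval method, so the sketch below uses a Wilson score interval, one standard choice that lands close to the reported bounds.

```python
import math

# Wilson score confidence interval for a binomial proportion; assumed method,
# since the abstract does not state how its 95% CI was computed.

def wilson_ci(events, n, z=1.96):
    """Return the (lower, upper) Wilson 95% CI for events/n."""
    p = events / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return centre - half, centre + half
```

For 34/140 this gives roughly 17.9%–32.0%, consistent with the published 17.4%–32.3% up to choice of interval method.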
VTE events are common after esophagectomy, frequently occurring perioperatively and up to 6 months postoperatively, and they may be associated with a significantly worse prognosis. Further studies are needed to determine the potential benefit of pre- and peri-operative VTE screening and extended, postdischarge thromboprophylaxis.
Esophageal cancer (EC) is often diagnosed at an incurable stage, when treatment focuses on palliation and quality of life. The objectives of this study were to analyze the symptom trajectory and severity in patients with late-stage EC in the 6 months before death, with a view to fine-tuning palliative treatments at end of life.
We conducted a retrospective, descriptive cohort study to assess symptom scores in EC patients at end of life from January 2009 to September 2016. Symptom scores were derived from Edmonton Symptom Assessment System (ESAS) scores that are collected during outpatient clinic visits at regional cancer centres across Ontario. These scores are stored in Cancer Care Ontario’s symptom management reporting database. Our primary outcome was reporting of a moderate-to-severe symptom score for any symptom in the 6 months before death.
A total of 4074 patients were included and of these, 2668 (65%) patients completed 1 or more ESAS assessments in the 6 months before death. The most frequently reported moderate-to-severe symptoms were tiredness (84%), poor well-being (81%) and lack of appetite (79%), and a higher proportion of patients began reporting these symptoms and their progression as they approached the end of life. Furthermore, symptom profiles did not vary on the basis of the palliative therapies administered. Finally, multivariate analysis determined specific covariates associated with a higher likelihood of reporting symptoms as moderate to severe, including age over 60 years, female sex, urban residence, immigrant status, being 3–4 months from death and having a high number of comorbidities. In patients with EC within the last 6 months of life, temporal proximity to death was the most important predictor of poor control of symptoms, with a very high proportion of patients reporting a high symptom burden. Multidisciplinary teams can utilize symptom screening scores to identify the appropriate services and therapies to better support patients with EC.
The management of centrally located and locally advanced lung tumours has evolved substantially. Bronchovascular sleeve resections are increasingly described as acceptable alternative treatment options to pneumonectomy, as the latter is associated with high rates of morbidity and mortality. However, controversies surrounding pulmonary artery (PA) reconstructions remain, given that these procedures can be technically challenging and accompanied by their own short- and long-term risk profile. This study aims to review postoperative and oncologic outcomes for patients with lung cancer undergoing PA reconstructions.
Major literature databases (Medline, Embase, Cochrane) and clinical trial registries (ClinicalTrials.gov, International Clinical Trials Registry Platform) were systematically searched, from 2010 to present, for randomized controlled trials and retrospective studies reporting major complication, mortality and 5-year survival rates for PA reconstruction (angioplasty, patch or sleeve) with lobar resection (simple or bronchovascular sleeve). References were searched for additional studies. Risk of bias was assessed for quality appraisal.
After initial screening for duplicates and relevance, 48 full-text articles were assessed for eligibility and 14 studies were included in the final synthesis. On average, PA reconstructions led to postoperative morbidity in 42.2% of cases, most commonly arrhythmias and prolonged air leaks, with major complication rates (bleeding, thrombosis, prolonged mechanical ventilation) of 10.5% and 30-day mortality rates of 3% or lower. No intraoperative mortality was reported, and studies suggested that a variety of PA reconstruction techniques were successfully and safely performed in lieu of a pneumonectomy. Most studies, however, had small sample sizes, limited length of follow-up and highly variable 5-year oncologic outcome reporting.
This evidence, although retrospective, supports the safety of PA reconstructions in selected patients with centrally located lung tumours that would have historically required pneumonectomies. Long-term oncologic outcomes with PA reconstructions remain uncertain. Our findings justify prospective multi-institutional data collection for these rare procedures to improve our understanding of when they are appropriate and how they compare with pneumonectomy as alternative approaches.
Surgical failure in patients who have undergone laparoscopic paraesophageal hernia (PEH) repair remains a challenge in thoracic surgery. This single-centre study sought to measure the recurrence rate of PEH after laparoscopic repair and to assess the relationship between the surgical technique used and the recurrence.
Prospectively collected data on patients who underwent laparoscopic PEH repair at The Ottawa Hospital from January 2008 to March 2020 were included. Hernia recurrence was assessed postoperatively with either an upper intestinal contrast study or computed tomography, performed every 5 years or if symptoms developed. Large recurrence was defined as radiologic herniation greater than 2 cm and partial recurrence as herniation less than 2 cm.
A total of 186 patients (mean age 66 yr, range 37–90 yr) were analyzed. Crural closure was performed with horizontal pledgeted mattress sutures in 83 (55%) patients and reinforced with mesh in 82 (44%) patients. Antireflux surgery was performed in 97% of patients (36% Dor fundoplication, 61% Nissen fundoplication). Median follow-up was 3 years (range 1–8.5 yr). The overall recurrence rate was 17%, including partial recurrence in 9% of patients and large recurrence in 8%; 2 patients required reoperation. The median time to recurrence was 13 months. There was a nonsignificant trend toward decreased large recurrence with pledgeted repair compared with no pledgets (3.6% v. 11.6%, p = 0.08), with no change in overall recurrence (20.5% v. 14.6%, p = 0.29). Addition of mesh was associated with a statistically significant reduction in overall recurrence rate (11.0% v. 22.1%, p = 0.04), with no difference in large recurrence (6.1% v. 9.6%, p = 0.3). A nonsignificant trend toward decreased large recurrence was associated with Dor fundoplication versus Nissen fundoplication (3.0% v. 11.4%, p = 0.08), with no change in overall recurrence (p = 0.30).
While the interpretation of these results is limited by the nonrandomized selection, laparoscopic PEH repair may be associated with a relatively low incidence of recurrence, and the type of crural reinforcement may play a role in preventing recurrence.
The practice of esophageal surgery has a complex history, with a wide range of surgical approaches reported and a high incidence of perioperative morbidity. Enhanced recovery programs, particularly in colorectal surgery, have demonstrated improved perioperative outcomes, reduced length of stay and earlier return to regular activity. There is increasing evidence for the implementation of similar programs in esophageal surgery, and the Canadian Association of Thoracic Surgeons (CATS) has endorsed best practice recommendations for enhanced recovery after surgery (ERAS) after esophageal surgery. This study sought to understand the current self-reported implementation of ERAS strategies by Canadian surgeons with regard to esophagectomy.
A cross-sectional survey of Canadian thoracic surgeons was created and distributed via an online survey platform. The survey was distributed via the CATS newsletter from April to July 2020. Questions focused on implementation of minimally invasive techniques, nutritional assessment and management, and perioperative care.
There were 74 respondents of a total of 136 members, for a response rate of 54.4%. A total of 40.3% of surgeons stated that their unit utilized a standard ERAS pathway, 68.3% reported using minimally invasive techniques, 39.1% reported selective avoidance of enteral feeding tubes, 63.9% reported early (postoperative day 2) removal of nasogastric drains, 36.1% avoided routine contrast esophagography before initiating an oral diet and 50.8% reported early institution of an oral fluid diet. Multimodal analgesia was utilized (81.7% reported first-line use of epidural or paravertebral analgesia, 93.4% routine acetaminophen and 72.6% individualized use of nonsteroidal anti-inflammatory drugs). A total of 90.2% reported early mobilization of patients with a standardized approach.
Self-reported implementation of formal ERAS protocols for esophageal surgery in Canada is relatively low. However, most surgeons report implementation of some of the principles of enhanced recovery as part of their usual postoperative care. The sharing of structured care pathways among surgeons along with further reporting of their clinical benefit will be important in increasing their implementation.
Surgical management of esophageal cancer continues to be a substantial clinical challenge and we considered the following question: What are the trends in early esophagectomy mortality related to the surgical approach utilized?
We studied all esophagectomies performed at our hospital for esophageal (including esophagogastric junction) carcinoma over the past 18 years. We report the patient mortality rates at 30 days (30DM) and 90 days (90DM) in 3 periods: P1 (2003–2008), P2 (2009–2014) and P3 (2015–2020). We report mortality rates stratified by surgical approach: transhiatal (TH) versus transthoracic (TT), where TT includes Ivor–Lewis, McKeown (3-hole), thoracoabdominal and minimally invasive (with thoracoscopy) esophagectomy.
Between 2003 and 2020, we performed 403 esophagectomies: 179 TH (44.4%) and 224 TT (55.6%). Of the 120 esophagectomies in P1, 91 (75.8%) were TH and 29 (24.2%) were TT. Of the 135 esophagectomies in P2, 60 (44.4%) were TH and 75 (55.6%) were TT. Of the 148 esophagectomies in P3, 28 (18.9%) were TH and 120 (81.1%) were TT. For TH esophagectomies, in P1 30DM was 0 of 91 (0.0%) and 90DM was 5 of 91 (5.5%), in P2 30DM was 0 of 60 (0.0%) and 90DM was 0 of 60 (0.0%) and in P3 30DM was 1 of 28 (3.6%) and 90DM was 2 of 28 (7.1%). For TT esophagectomies, in P1 30DM was 4 of 29 (13.8%) and 90DM was 6 of 29 (20.7%), in P2 30DM was 3 of 75 (4.0%) and 90DM was 9 of 75 (12.0%) and in P3 30DM was 2 of 120 (1.7%) and 90DM was 7 of 120 (5.8%).
It is gratifying that the improved TT mortality rate (in period P3, 1.7% 30DM and 5.8% 90DM) was seen at the same time that we evolved our practice from 24.2% TT in P1 to 81.1% TT in P3. The specific differences in the characteristics of patients who underwent TH versus TT and how these differences guide surgical approach need further study.
Objective reporting of postoperative adverse events (AEs) following laparoscopic surgery for paraesophageal hernia (PEH) is essential for continuous quality assessment of surgical care. There are limited data regarding postoperative AEs and length of stay (LOS) following laparoscopic PEH repair, as well as the impact (if any) of antireflux technique (Dor v. Nissen) and use of mesh cruroplasty.
All patients undergoing laparoscopic PEH repair (January 2008 to April 2020) underwent prospective monitoring of AE incidence and severity. AEs were categorized by the Thoracic Morbidity & Mortality Classification System (based on Clavien–Dindo schema) recorded during 30 days of postoperative recovery.
Ninety-nine patients were included (mean age 66.3 [standard deviation 11.9] yr, 26% male, 74% female). Twenty-eight patients (28%) experienced at least 1 AE, comprising grades I (17%), II (50%), IIIA (21%) and IIIB (11%) AEs; there were no grade IV or V AEs. Ninety-four subjects underwent Dor (71.3%) or Nissen (28.7%) laparoscopic fundoplication. Atrial arrhythmias (n = 6), myocardial ischemia (n = 3) and infections (n = 3) were the most frequent AEs. Nine percent of patients diagnosed with PEH had a prolonged length of stay, and 1% required readmission. No difference in AE rates was associated with the choice of fundoplication or use of mesh. Dor fundoplication was associated with a trend toward reduced median LOS (2 [interquartile range (IQR) 2–4] d, as compared with 3 [IQR 3–4] d with Nissen, p = 0.15). Ninety-four patients underwent crural repair with mesh (31.9%) or no mesh (68.1%), with a median LOS of 3 (IQR 2–4) days and 2 (IQR 2–4) days, respectively (p = 0.5).
Relevant to future patients and surgeons, this study highlights that more than a quarter of patients undergoing laparoscopic PEH repair will experience a postoperative AE, about 10% will have a prolonged LOS and 1% will require readmission, with no substantial impact based on type of fundoplication or mesh cruroplasty.
Fundoplication is routinely performed during laparoscopic paraesophageal hernia (PEH) repair; however, the optimal degree of fundoplication remains controversial. While short-term outcomes of Dor and Nissen fundoplication are well documented, the durability of postoperative symptom control over time is less understood. The objective of this study is to compare postoperative dysphagia, heartburn, regurgitation, odynophagia and quality of life (QoL) over time after Dor or Nissen fundoplication in patients undergoing laparoscopic PEH repair.
Symptom data of consenting patients who underwent laparoscopic PEH repair with Nissen or Dor fundoplication at a single centre were prospectively collected using validated questionnaires, recording use of antacid therapy and rating symptoms in 5 primary domains (dysphagia, heartburn, odynophagia, regurgitation and QoL). Symptom ratings were recorded preoperatively, then 4 weeks, 6 months and annually after the operation.
A total of 84 patients (58 Dor, 26 Nissen) had a mean follow-up of 37.1 months (standard deviation [SD] 13.6 mo). All symptoms and QoL markedly improved after surgery and remained stable throughout follow-up. There was a trend toward higher dysphagia 4 weeks postoperatively with Nissen (p = 0.11), with no differences afterward. There were slight increases in heartburn with Dor at 6–12 months (1.9 [SD 0.3] v. 0.9 [SD 0.3], p = 0.03), with no differences otherwise. No differences in regurgitation were observed between groups. Slightly increased postoperative odynophagia was found with Dor versus Nissen at 2–3 years (3.0 [SD 0.4] v. 1.4 [SD 0.4], p = 0.02) and 4–7 years of follow-up (2.7 [SD 0.3] v. 1.2 [SD 0.4], p = 0.03). QoL remained consistently stable in the postoperative period in both groups.
While the interpretation of these data is limited by the nonrandomized surgeon selection, the requirement for patient consent and the single-centre experience, the results demonstrate the efficacy of both laparoscopic Dor and Nissen fundoplication for the treatment of PEH. Given the minor differences observed, multicentre randomized controlled trials are required to definitively determine the optimal antireflux procedure following laparoscopic PEH repair.
Robotic-assisted thoracic surgery (RTS) has been demonstrated to be safe and effective but is associated with high capital and operating costs that are not reimbursed by many third-party payers, including the Canadian government, limiting patient access to this platform. We hypothesized that Canadian patients who had received RTS on research or philanthropic funds would have been willing to pay out of pocket to gain access to this technology.
This was a retrospective, cross-sectional, observational study within the RTS population at the highest-volume institution for RTS in Canada. Patients who had undergone RTS from January 2014 to July 2020 were invited to participate by telephone and asked to complete a survey about demographic characteristics, degree of satisfaction with RTS and willingness to contribute to the cost of their RTS in the absence of research or philanthropic funds.
Of the 547 eligible participants, 75.1% (411/547) completed the survey. Mean age at surgery was 65.44 (standard deviation 10.27) years, and 58.6% (241/411) were female. On a scale of 1 (poor) to 10 (excellent), 85.9% (353/411) stated that their overall experience with RTS was 8 or above. With regard to postoperative experience, 88.8% (365/411) and 86.1% (354/411) were satisfied or very satisfied during their hospital admission and with their recovery after discharge, respectively. Of the respondents, 70.6% (290/411) stated that they would have paid the $2000 supplement to government health care coverage to have access to RTS. Factors found to be significantly associated with participants’ willingness to pay were postsecondary education (p < 0.001), annual income of $60 000 or more (p = 0.034), private insurance coverage (p = 0.011), overall experience with RTS rated as 8 or above (p < 0.001) and postoperative experience after discharge rated as satisfying or very satisfying (p = 0.004).
Many Canadian patients who had experienced RTS would have been willing to pay out of pocket to have access to this technology. Patients are recognized as important stakeholders in health care policy, and this study provides important insights into the conversation about robotic surgery funding.
A reliable noninvasive biomarker of response to immunotherapy in patients with lung cancer remains elusive and represents a major unmet need. Tumoural and peritumoural immune infiltration is known to associate with response to immune checkpoint inhibitors. Radiomic analysis of pretreatment medical imaging converts medical images into high-dimensional quantitative data amenable to deep learning artificial intelligence analytical algorithms. This study aims to create a radiomic-based signature that predicts immune infiltration patterns of early-stage lung cancer to assist with clinical decision-making and tailor cancer-directed therapy accordingly. Supervised machine learning (ML) and several deep learning (DL) architectures will be employed to create a predictive model of the lung and tumour immune microenvironments (TIME) of corresponding regions of interest.
A cohort of 110 patients who underwent surgical resection for lung adenocarcinoma (LUAD) between 2014 and 2020 was identified. The cohort consisted of 60% female patients, diagnosed at a median age of 67 (range 61–74) years. At the time of diagnosis, 73% of patients had stage I, 19% stage II and 8% stage III disease. Preoperative computed tomographic scans were collected, deidentified and contoured for tumour core, tumour–lung interface and normal adjacent lung volumes from the same lobe. Immune infiltrates in corresponding regions of interest from these same patients were assessed by multiplex immunofluorescence microscopy on a tissue microarray constructed for this purpose. Antibodies against CD8, CD4, FoxP3, CD68, H3Cit, NE and DAPI were employed, and multiplexing was accomplished with the Opal system.
Radiomic signatures have promise to provide a reliable noninvasive way to predict the tumour immune microenvironment in lung cancer. If successful, these strategies may improve treatment assignment and help improve clinical outcomes.
Pyloric drainage procedures are frequently performed during esophagectomy to reduce delayed gastric conduit emptying secondary to vagus nerve division. Conduit distension from pyloric obstruction can potentially result in postoperative aspiration pneumonia and anastomotic leak. Pyloromyotomy and pyloroplasty are the most frequently performed surgical pyloric drainage procedures. Intraoperative pyloric Botox injection has been utilized as an alternative to surgical pyloric drainage procedures; however, rates of repeat endoscopic intervention for pyloric stenosis following surgery have been reported to be as high as 40%. We sought to evaluate our centre’s experience utilizing intraoperative pyloric Botox injection during esophagectomy.
Retrospective chart review of all patients who underwent esophagectomy at a single university centre from January 2018 to December 2020 was performed.
A total of 71 patients underwent intraoperative pyloric Botox injection during esophagectomy. Esophagectomy was performed primarily for malignancy (n = 65), with the remainder for Boerhaave syndrome (n = 5) and refractory achalasia (n = 1). Surgeries performed were Ivor–Lewis esophagectomy (minimally invasive esophagectomy n = 11, hybrid laparoscopic abdomen n = 34, open n = 21) and McKeown esophagectomy (hybrid thoracoscopic chest n = 2, open n = 3). Sixteen patients required endoscopic pyloric dilation following surgery (16/71, 22%). Regarding major adverse events in patients requiring pyloric dilation, 1 patient required high-flow oxygen for aspiration pneumonia and another required intensive care admission and reoperation for anastomotic leak. The majority of patients required endoscopic dilation for failure to tolerate a post-esophagectomy diet.
We demonstrate a high rate of endoscopic intervention for pyloric stenosis when intraoperative pyloric Botox injection is utilized during esophagectomy.
Endobronchial ultrasound (EBUS) features such as nodal size, margins, central hilar structure and necrosis have high accuracy for predicting lymph node (LN) malignancy. However, their clinical application remains limited because of high operator dependency. We hypothesized that an artificial intelligence convolutional neural network (CNN) is capable of accurately identifying and predicting nodal malignancy.
This was a 2-phase study. In the derivation phase, retrospective EBUS images were segmented twice for LN features by a blinded experienced endosonographer and used as controls in 5-fold cross-validation training. The CNN’s custom pipelines calculated Canada Lymph Node Score (CLNS) features, and LNs were assigned a malignancy score. In the validation phase, prospective EBUS images were collected to test the algorithm. Logistic regression, the c-statistic and receiver operating characteristic curves were used to determine the CNN’s performance and its ability to discriminate malignancy. Pathologic specimens were used as the gold standard.
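The discrimination measure described above can be sketched in a few lines. This is a minimal illustration of the c-statistic (equivalent to the area under the ROC curve), assuming per-node malignancy scores paired with binary pathology labels; the scores and labels shown are illustrative, not the study's data.

```python
def c_statistic(scores, labels):
    """Concordance statistic: the probability that a randomly chosen
    malignant node (label 1) receives a higher score than a randomly
    chosen benign node (label 0); ties count as 0.5."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Illustrative malignancy scores with pathology labels (1 = malignant):
print(c_statistic([0.9, 0.2, 0.8, 0.3], [1, 0, 0, 1]))  # → 0.75
```

A value of 0.5 indicates no discrimination and 1.0 perfect discrimination; the study's validation value of 0.75 therefore reflects good separation of malignant from benign nodes.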
In total, 298 LNs (16.4% malignant) from 140 patients were used for derivation and 108 LNs (29.8% malignant) from 47 patients for validation. In the derivation cohort, the CNN was able to predict malignant LNs with an accuracy of 73.8% (95% confidence interval [CI] 68.4%–78.7%), a specificity of 84.3% (95% CI 79.2%–88.6%) and a negative predictive value (NPV) of 84.4% (95% CI 82.5%–86.2%). In the validation cohort, the CNN possessed a diagnostic accuracy of 72.87% (95% CI 63.46%–80.98%), a specificity of 90.79% (95% CI 81.9%–96.22%) and NPV of 75.92% (95% CI 71.51%–79.85%). The CNN showed a high diagnostic discrimination for a CLNS of 2 or greater in the validation sample (c-statistic 0.75, 95% CI 0.65–0.85).
The CNN identified malignant LNs with high specificity and NPV. It shows promise in ruling out mediastinal metastasis when biopsy is not possible or inconclusive. Future work with a larger data set is required to refine the algorithm before clinical trials.
The effect of postoperative adverse events (AEs) on patient outcomes such as comorbidity, length of stay (LOS) and readmissions to hospital is not completely understood. This study examined the severity of AEs from a high-volume thoracic surgery centre and its effect on the patient postoperative LOS and readmissions to hospital.
This study includes patients who underwent an elective lung resection between September 2018 and January 2020. AEs were graded using the Ottawa Thoracic Morbidity & Mortality System for classifying thoracic surgical complications, which is based on the Clavien–Dindo classification. The AEs were grouped as no AEs, 1 or more minor AEs and 1 or more major AEs. Patients who met the criteria but died were not included in the study (n = 9, 1.8%). The effects of AEs on patient LOS and readmissions were examined using survival analysis and logistic regression, respectively, while adjusting for other demographic and clinical variables.
Among 488 patients who underwent lung surgery (wedge resection n = 100, segmentectomy n = 51, lobectomy n = 310, bilobectomy n = 10, pneumonectomy n = 17) for either primary (n = 440) or secondary (n = 48) lung cancers, 179 (36.7%) had no AEs, 264 (54.1%) had 1 or more minor AEs and 45 (9.2%) had 1 or more major AEs. Overall, the median LOS was 3 days, which varied significantly between AE groups: 2, 4 and 8 days among the no, minor and major AE groups, respectively. In addition, type of surgery, renal disease and American Society of Anesthesiologists (ASA) score were significant predictors of LOS. Overall, 58 patients (11.9%) were readmitted, and readmission rates also varied significantly (p = 0.016) between AE groups. No other variable significantly predicted patient readmission.
Overall, postoperative AEs significantly affected the postoperative LOS and readmission rates. The major AEs increased the LOS and readmission rate more than the minor AEs.
Adverse events (AEs) following thoracic surgery place considerable strain on patients, care providers and an overburdened health care system. AEs are common (e.g., they occur in 60% of patients who undergo esophagectomy and 30% of patients who undergo lobectomy), and no matter how minor, AEs negatively affect length of stay, patient experience and hospital costs. However, a rigorous evaluation of the economic impact of thoracic surgical AEs remains lacking; such an evaluation is required to understand the cost-saving potential of formal quality improvement programs.
We performed a systematic search of studies using Medline, Embase and the Cochrane Library. Search criteria included adult patients who underwent a thoracic surgical procedure and studies reporting estimated costs of postoperative AEs. Studies published after 1999 were included without language restrictions. Two reviewers independently screened eligible studies using predefined criteria, extracted data using a standardized template and synthesized results using a descriptive approach. All costs were adjusted to 2021 US dollars to facilitate comparisons.
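The cost-normalization step described above can be sketched as a standard consumer price index (CPI) ratio adjustment; the index values in the example below are placeholders for illustration, not the actual deflators used in the review.

```python
def to_2021_usd(cost, cpi_source_year, cpi_2021):
    """Inflate a historical cost to 2021 US dollars using the ratio of
    consumer price indices between the source year and 2021."""
    return cost * cpi_2021 / cpi_source_year

# Placeholder CPI values, for illustration only:
adjusted = to_2021_usd(10_000.0, cpi_source_year=218.1, cpi_2021=271.0)
print(round(adjusted, 2))
```

The same ratio is applied to every reported cost so that estimates drawn from different years and studies become directly comparable.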
A total of 3349 abstracts were identified, of which 21 studies met the inclusion criteria. Most studies were conducted in the United States (10/21) and evaluated the impact on hospital expenditures (18/21). Forty-four procedure-specific mean AE costs were characterized. The most commonly described AE-related costs were anastomotic leak (mean US$21 369, range US$6176–US$37 397) and pneumonia (mean US$18 599, range US$2608–US$34 591) following esophagectomy, and prolonged air leak (mean US$2565, range US$571–US$3573), respiratory failure (mean US$19 786, range US$11 841–US$37 812), pneumonia (mean US$15 362, range US$2542–US$28 183) and arrhythmia (mean US$6835, range US$5833–US$8659) following lobectomy.
Our systematic search demonstrates that the costs associated with postoperative AEs following thoracic surgery are substantial and variable. Quantifying the costs of AEs will focus attention upon the importance of remediable care processes, helping translate a reduction of AEs into cost savings for hospitals. Further analyses and generation of models capable of identifying patients at risk for specific AEs will be vital for mobilizing health care resources to improve clinical outcomes and reduce costs.
Increasing surgeon or hospital volume may improve outcomes following laparoscopic paraesophageal hernia repair (LPEHR). This study evaluated the association between surgeon and hospital volume and LPEHR outcomes in a Canadian health network.
A retrospective study evaluating patients who underwent LPEHR at 2 hospitals between 2012 and 2017 was performed. Surgeons performing more than 5 cases per year and hospitals performing more than 20 cases per year were categorized as high volume (HV). Descriptive statistics were used to compare patient demographic characteristics, surgical characteristics, operative factors, postoperative complications and mortality between HV and low-volume (LV) surgeons and hospitals.
Of 211 patients included in the study, 152 LPEHRs were performed by HV surgeons and 59 by LV surgeons. LV surgeons were more likely to perform emergency or recurrent hernia repairs. The rate of conversion to open surgery was 3.9% among HV surgeons and 6.8% among LV surgeons (p = 0.38). LV surgeons had an increased rate of postoperative complications (15.2% v. 5.9%, p = 0.029). Among the 152 LPEHRs performed by HV surgeons, 104 were performed at an HV hospital and 48 at an LV hospital. We found no difference in postoperative complications, hospital length of stay or postoperative mortality between centres.
LV surgeons performed more emergency and recurrent cases and had higher complication and conversion rates. However, we found similar outcomes between HV and LV hospitals among patients undergoing LPEHR by HV surgeons. Our results suggest that surgeon volume is more important than hospital volume in achieving better outcomes after LPEHR. Larger multicentre studies are needed to suggest clinical practice changes.
Esophagectomy remains an integral part of cure for patients with esophageal cancer. However, the operation can be a source of substantial morbidity and mortality. As such, careful patient selection is required to achieve optimal outcomes. Previously, an 11-factor modified frailty index (mFI) from the American College of Surgeons National Surgical Quality Improvement Program (ACS-NSQIP) demonstrated correlation between frailty and adverse esophagectomy outcomes; however, the factors have since been reduced to 5. Investigation is needed into whether the 5-factor mFI (mFI-5) retains the same predictive value.
Patients undergoing esophagectomy from 2016 to 2018 were identified using the targeted NSQIP Participant User File. An mFI-5 was used to determine the correlation between frailty and post-esophagectomy morbidity and mortality.
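Scoring with the mFI-5 reduces to counting binary comorbidity indicators. The sketch below assumes the five NSQIP-derived factors commonly cited for the mFI-5 (diabetes, hypertension requiring medication, severe COPD or recent pneumonia, congestive heart failure and dependent functional status); the field names are illustrative, not NSQIP variable names.

```python
# Assumed mFI-5 components; labels here are illustrative.
MFI5_FACTORS = (
    "diabetes",              # insulin- or non-insulin-dependent
    "hypertension_on_meds",  # hypertension requiring medication
    "copd_or_pneumonia",     # severe COPD or recent pneumonia
    "chf",                   # congestive heart failure
    "dependent_status",      # partially or totally dependent
)

def mfi5(patient):
    """Count how many of the five binary factors are present
    in a patient record (dict of factor -> bool)."""
    return sum(1 for f in MFI5_FACTORS if patient.get(f, False))

patient = {"diabetes": True, "hypertension_on_meds": True}
print(mfi5(patient))  # → 2
```

With only five binary factors the score is bounded at 5, which is why the cohort's observed maximum of 3 still spans most of the clinically relevant range.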
A total of 3274 patients were included in the analysis. No patients had an mFI-5 score higher than 3 out of 5. Mortality rates for mFI-5 scores of 0, 1, 2 and 3 were 1.5%, 3%, 3.6% and 5.2%, respectively. Clavien–Dindo 4 complication rates for mFI-5 scores of 0, 1, 2 and 3 were 11.3%, 16.5%, 20.6% and 25.9%, respectively. Multivariate logistic regression analyses controlling for age, sex, body mass index, American Society of Anesthesiologists (ASA) class, operative time, pathology and neoadjuvant therapy showed increasing frailty score was a significant predictor of Clavien–Dindo 4 complications (p = 0.01) but not mortality (p = 0.13). Associations were also noted between frailty score and hospital stay longer than 30 days, reoperation and sepsis. Increasing frailty score was particularly associated with pulmonary complications, including reintubation, prolonged ventilation for more than 48 hours and pneumonia. Twelve propensity score matching scenarios were generated using 8 covariables comparing mFI-5 scores of 0 to mFI-5 scores of 3, selecting the scenario with the best balance. The mortality rate was higher in frail patients, but the difference did not reach significance (2.3% v. 5.2%, p = 0.37).
The 5-factor mFI score significantly predicts morbidity but not mortality after esophagectomy. Additional research is needed before it can be recommended for the selection of patients being considered for esophagectomy.
Patients undergoing thoracic surgery experience high rates of complications. A knowledge gap exists as to whether targeted pretreatment interventions can reduce complications. Health-related quality of life (HRQOL) could be used as a prognosticator for this purpose. The objectives of this study were to determine the perioperative trajectory of HRQOL and the association between pretreatment HRQOL and postoperative complications.
A retrospective cohort study was performed using prospectively collected data. Consecutive patients seen at a tertiary thoracic clinic between January 2018 and January 2019 underwent longitudinal assessment of HRQOL using self-reported EuroQol 5-Dimension (EQ-5D) questionnaire scores. Postoperative complications were assessed using the Ottawa Thoracic Morbidity & Mortality System, prospectively with double adjudication. Multivariable logistic regression (generalized estimating equations with robust estimator) was used to determine the association between pretreatment EQ-5D scores and incidence of postoperative complications.
Among 515 patients analyzed, mean EQ-5D visual analogue scale (VAS) scores did not change significantly between clinic visits (p > 0.1), and their time trend showed no discernible pattern. Changes in HRQOL dimension scores were seen only in patients’ usual activities. During the first 2 clinic visits, the majority of patients had no problems with usual activities; by the third and fourth clinic visits, the majority had at least some deficits in usual activities. Pretreatment EQ-5D VAS score was independently associated with the incidence of postoperative complications in the multivariable analysis (p = 0.01).
There are no obvious trends in the trajectory of perioperative HRQOL for patients undergoing thoracic surgery. Extended analysis should be conducted to identify subsets of patients who may experience worse HRQOL over time. This may be based on common phenotypes. Pretreatment HRQOL is associated with postoperative complications in patients undergoing thoracic surgery. Assessments of pretreatment HRQOL could be used to identify patients at higher risk for postoperative complications for whom targeted supportive interventions could be implemented.
A Canadian province regionalized thoracic surgery to designated centres to provide high-volume care for patients undergoing esophageal cancer resection. The objective of this study was to assess variation in treatment patterns and outcomes across thoracic centres and to compare their performance with that of nonthoracic centres.
A retrospective, population-based cohort study (2002–2014) was conducted in a Canadian province (population 13.6 million). Adults with resected esophageal cancer were identified through the Presto database. Case mix, use of neoadjuvant therapy, surgical outcomes (lymph node yield and margin rates) and survival were described across thoracic centres. Multivariable regression was used to estimate the effect of having surgery at a regionalized thoracic surgery centre on perioperative (in-hospital and 90 days after discharge) mortality and long-term survival, adjusting for case mix.
Of 3880 patients meeting the study criteria, 2213 had pathology data available and were included in the analysis. Average age was 64 years, 85.7% had adenocarcinoma, 50.2% were at stage pT3 and 38.4% were at stage pN0. Most (82.6%) had surgery at 1 of 15 thoracic centres. Across thoracic centres, rates of neoadjuvant therapy varied from 16.4% to 81.6%, positive margin rates varied from 8.2% to 29.6%, median lymph node harvest varied from 7 to 20 nodes, perioperative mortality varied from 2.6% to 20.5% and 2-year survival varied from 48% to 80%. There was a trend toward reduced perioperative mortality, but no difference in long-term survival, with having surgery at a thoracic centre.
Even at designated thoracic centres, there is substantial variability in treatment patterns, surgical outcomes and survival. Looking beyond centre volume and translating best practices from high-performing hospitals to other hospitals may improve patient outcomes.
We aimed to determine whether pretreatment health-related quality of life (HRQOL) predicts short-term survival in patients with stage I–IV esophageal cancer.
A prospective cohort study was performed of consecutive patients with esophageal cancer at a tertiary thoracic centre. Patients with 6-month survival data who completed Functional Assessment of Cancer Therapy–Esophageal (FACT-E) questionnaires before treatment were included. FACT-E is a validated HRQOL measure incorporating the Esophageal Cancer Subscale (ECS). Univariate analysis was performed using the Fisher exact test and analysis of variance. Multivariable logistic regression analysis was performed to assess 6-month survival. Discriminant analysis and receiver operating characteristic (ROC) analysis were performed to assess discrimination of survival at 6 months.
In total, 132 patients had 6-month survival data. The majority were male (76.5%) with adenocarcinoma (88.6%) presenting at clinical stage III or IV (78.8%). Patients with stage III disease were treated with neoadjuvant therapy, predominantly chemoradiation. Twenty-nine patients had died by 6-month follow-up. On univariate analysis, lower clinical stage (p < 0.001), lower Eastern Cooperative Oncology Group (ECOG) performance status (p = 0.02) and higher body mass index were associated with survival at 6 months. Although the overall FACT-E score was not associated with survival, a higher ECS score was significantly associated with being alive at 6 months (p = 0.02). No other pretreatment variables were associated with 6-month survival. Multivariable analysis showed that lower clinical stage, lower ECOG performance status and higher ECS were independently associated with being alive at 6 months. Without inclusion of clinical stage, ROC analysis showed that the model had excellent discrimination, with an area under the curve (AUC) of 0.80. Addition of clinical stage increased discrimination to an AUC of 0.88.
Pretreatment HRQOL demonstrates excellent discrimination of 6-month survival in patients with stage I–IV esophageal cancer; discrimination is augmented by, but not dependent on, incorporation of clinical stage. The ECS increased predictive power more than the full FACT-E, suggesting it may prove a more parsimonious prognostic tool, warranting further investigation. These data may be used as an adjunct to short-term risk assessment and shared decision-making before treatment.
Ivor–Lewis esophagectomy (ILE) involves a laparotomy and a right thoracotomy, both of which are associated with severe postoperative pain. Currently, the accepted “gold standard” for postoperative analgesia after thoracotomies and upper abdominal incisions is the thoracic epidural. A systematic review showed that paravertebral catheters (PVCs) were equivalent to epidural analgesia for post-thoracotomy pain control and were associated with less nausea, vomiting and hypotension. To our knowledge, the use of the paravertebral catheter in open Ivor–Lewis esophagectomy has not been formally studied.
We performed a retrospective chart review of patients who underwent open ILE from 2012 to 2018 at our institution, with local ethics board approval. A total of 96 patients underwent open ILE: 44 received a PVC and 52 received an epidural. Our primary outcome was the area under the curve of pain scores over the first 48 hours after surgery, computed using the trapezoid method.
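The primary outcome computation can be sketched directly: the trapezoid method approximates the area under the pain-score curve by summing trapezoids between adjacent assessments. The assessment times and scores below are illustrative, not patient data.

```python
def pain_auc(times_h, scores):
    """Area under the pain-score curve by the trapezoid rule:
    sum of 0.5 * (y1 + y2) * (t2 - t1) over adjacent assessments."""
    points = list(zip(times_h, scores))
    return sum(0.5 * (y1 + y2) * (t2 - t1)
               for (t1, y1), (t2, y2) in zip(points, points[1:]))

# Illustrative 0-48 h assessments (hour, 0-10 pain score):
print(pain_auc([0, 24, 48], [6, 4, 2]))  # → 192.0
```

Summarizing each patient's serial scores as a single area makes the 48-hour pain burden of the two analgesia groups directly comparable.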
The PVC group was noninferior and statistically equivalent to the epidural group using a noninferiority margin of 2 on the pain scale. For secondary outcomes, the highest pain score was noninferior and equivalent between the PVC and epidural groups (t[90] = 1.53, p = 0.13). Total opioid consumption in the PVC group was significantly less than in the epidural group. The PVC group had a quicker time to ambulation and a similar length of stay. Our study showed a higher incidence of pruritus in the epidural group. The incidence of nausea, somnolence and hypotension was similar.
Our retrospective study continues to challenge the role of epidurals as the gold standard of pain control after thoracotomy and upper midline abdominal incision. A surgeon-placed paravertebral catheter inserted under direct vision may be safer than a thoracic epidural and may have fewer side effects. Further prospective studies with larger populations are needed to better compare the two modalities.
Enhanced recovery after surgery (ERAS) protocols allow safe early discharge home. One barrier to early discharge home after lung resection is the need to return to normal or adequate oxygenation. We assessed time to normal oxygenation after lung surgery and determined factors associated with early return to normal oxygenation.
A prospective cohort study was performed at a tertiary Canadian hospital between 2019 and 2021, including patients undergoing 1-lung ventilation and planned pulmonary resection. Pulse oximetry was assessed after 5 minutes on room air (RA), every 4 hours after surgery. Return to normal oxygenation was defined as the ability to maintain oxygen saturation greater than 90% after 5 minutes on RA. Univariable (Fisher exact tests, t tests) and multivariable logistic regression analyses were performed to assess factors associated with the ability to reach normal oxygenation within 4 hours postoperatively.
A total of 139 patients were included, of whom 52.5% (n = 73) were male; 51.8% (n = 72) of the patients returned to normal oxygenation by 4 hours postoperatively. By 24 hours, 93.5% (n = 130) of patients had returned to normal oxygenation levels. On univariable analysis, higher preoperative percent predicted forced expiratory volume in 1 second (FEV1) was associated with achieving normal oxygenation by 4 hours postoperatively. Multivariable modelling was able to significantly predict those able to reach normal oxygenation within 4 hours postoperatively (omnibus test p < 0.001), with good fit (Hosmer–Lemeshow test, p = 0.92). On multivariable analysis, the following were significantly associated with return to normal oxygenation by 4 hours postoperatively: higher preoperative FEV1 (adjusted odds ratio [aOR] 1.03, 95% confidence interval [CI] 1.01–1.06, p = 0.006), lower body mass index (aOR 0.91, 95% CI 0.85–0.98, p = 0.02) and younger age (aOR 0.95, 95% CI 0.91–0.99, p = 0.01).
With routine and early measurement of room oxygenation after lung surgery, approximately 51.8% and 93.5% of patients will have returned to normal oxygenation by 4 hours and 24 hours, respectively. We identified factors associated with return to normal oxygenation by 4 hours. This may help to inform and facilitate ERAS protocols and hospital resource allocation.
One-lung ventilation (OLV) in thoracic surgery can increase the risk of ventilator-induced lung injury (VILI). We aimed to characterize the immunologic impact of OLV and of surgical trauma, assessing both the local inflammatory response to lung surgery and OLV and the systemic response.
In a prospective study of 12 high-risk patients undergoing thoracic surgery requiring OLV, arterial plasma and bronchoalveolar lavage fluid (BALF) from both lungs were collected before and after OLV. Multiplex enzyme-linked immunosorbent assay (ELISA) was used to measure the change in concentration of 61 biomarkers. The Mann–Whitney U test was used to assess significance, with an unadjusted threshold of p less than 0.05.
Eighty-three percent (n = 10) of patients underwent video-assisted wedge resections for interstitial lung disease; 75% (n = 8) were female, and the average age was 57 (standard deviation 19) years. In the ventilated lung, 2 growth factors, 3 anti-inflammatory cytokines and 13 inflammatory markers increased significantly during ventilation. Of these, anti-inflammatory IL-10 increased the most (959-fold), followed by IL-4 (945-fold) and IL-2 (938-fold). Twelve biomarkers increased significantly in the ventilated lung in comparison with the operated lung. All except IL-10 were inflammatory markers. IL-7 and IL-15 decreased slightly in the operated lung while increasing 19- and 176-fold, respectively, in the ventilated lung.
Lung surgery induces different inflammatory consequences in the operated and ventilated lungs. The ventilated lung produces a more homogeneous inflammatory profile across patients than the operated lung, and this profile may represent a conserved target for pharmacologic interventions to reduce ventilator-induced lung injury and pulmonary complications.
Anastomotic leaks are a feared complication of colorectal surgery. Indocyanine green (ICG) angiography enables intraoperative assessment of anastomotic perfusion. This study aimed to perform a comprehensive review and meta-analysis evaluating the role of ICG in improving postoperative colorectal outcomes.
A systematic search of the Medline, Embase, Scopus and Web of Science databases was conducted in July 2020. Studies were reviewed and data extracted independently by 2 reviewers following Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. Inclusion criteria were ICG use in colorectal surgery, studies with comparison groups and studies with more than 5 patients. Abstracts, non-English and pediatric studies were excluded. The primary outcome was anastomotic leak; secondary outcomes included rates of anastomotic revision, ureteric injury, reoperation, length of stay and operative time.
Overall, 1586 studies were retrieved, with 25 meeting the inclusion criteria. Included studies comprised 5949 patients, 47.7% of whom received ICG. The ICG and non-ICG cohorts were similar with respect to age (62.9 v. 63.1 yr), sex (43.0% v. 41.4% female), smoking status (22.2% v. 24.5% smokers) and diabetes (13.8% v. 13.8%). Factors including anastomotic distance from the anal verge (6.5 cm v. 6.8 cm) and anastomotic technique (78.7% v. 74.8% stapled) were also comparable. ICG was associated with reduced odds of anastomotic leak (odds ratio [OR] 0.41, 95% confidence interval [CI] 0.32–0.53, p < 0.001) and reduced reoperation for anastomotic leak (OR 0.64, 95% CI 0.43–0.95, p = 0.03), with the most pronounced effect in the rectal anastomosis subgroup (OR 0.31, 95% CI 0.21–0.44, p < 0.001). Length of stay, diversion and operative time did not differ, while ureteric injury and long-term outcomes could not be evaluated because of lack of reporting.
ICG presents a promising strategy to address anastomotic leaks following colorectal surgery and appears most beneficial for rectal anastomoses. Randomized controlled trials evaluating ICG dosing and techniques are needed, as are studies characterizing long-term outcomes.
Repeat preoperative endoscopy is common for patients with colorectal neoplasms. This can result in treatment delays and risks of colonoscopy-related complications. Repeat preoperative endoscopy has been attributed to poor communication between endoscopists and surgeons. In January 2019, mandatory electronic synoptic reporting for endoscopy was implemented to include elements consistent with quality indicators proposed in national guidelines. The aim of the present study is to assess whether the repeat preoperative endoscopy rate for colorectal lesions changed following synoptic report implementation.
A retrospective review was performed of all patients who underwent elective surgical resection for colorectal neoplasms from January 2007 to June 2020 at a tertiary hospital in Canada. Patients whose index endoscopy was documented via synoptic report were compared with those whose endoscopy was documented via narrative report. Primary outcomes were rates of repeat preoperative endoscopy and inclusion of colonoscopy quality indicators (e.g., photodocumentation, tattoo placement and bowel preparation score). A total of 1429 patients who underwent elective colorectal resection for colorectal cancers or polyps between January 2007 and June 2020 were included. Of these, 115 had index endoscopies recorded via synoptic report and 1314 via narrative report. The repeat preoperative endoscopy rate was 29.1% (95% confidence interval [CI] 26.6%–31.6%) after endoscopies documented by narrative report and 25.2% (95% CI 17.6%–34.2%) after those documented by synoptic report. Patients whose index endoscopies were performed by a practitioner other than their operating surgeon had a re-endoscopy rate of 36.0% (95% CI 32.8%–39.3%) after narrative report and 38.8% (95% CI 27.1%–51.5%) after synoptic report. Rates of tattoo placement, photodocumentation and reporting of bowel preparation quality were all significantly higher with synoptic reports (p ≤ 0.003).
Endoscopy synoptic reports based on current guidelines were not associated with a decrease in rates of repeat preoperative endoscopy at a high-volume colorectal cancer centre. Future study should examine synoptic report contents for this purpose and make necessary modifications.
Traditional narrative operative reports have historically been of poor quality. Synoptic operative reporting has been utilized as an effective and efficient communication tool. For patients with rectal cancer, synoptic reports are required for pathology, radiology and major oncologic resections but have never previously been developed for transanal endoscopic surgery (TES). The objective of this study was to develop consensus-derived quality indicators (QIs) for TES reports.
An online Delphi protocol was used. Colorectal surgeons and other key physician stakeholders across Canada were recruited to participate via a secure web-based platform. Delphi participants were asked to submit potential QIs according to 6 reporting themes proposed by the study authors on the basis of a thorough literature review. The initial QIs were recirculated to participants and rated on 9-point Likert scales. Scores of 70% or greater denoted consensus for inclusion, and scores of 30% or less denoted exclusion. Elements scoring 30% to 70% were recirculated in a subsequent runoff round to generate the final list.
Fifteen physicians consented to participate, including 7 academic and 2 community colorectal surgeons, a surgical oncologist, a general surgeon with expertise in synoptic operative reporting, 2 gastrointestinal pathologists, an abdominal radiologist and a radiation oncologist. Round 1 achieved 100% (15/15) response and identified 79 potential QIs for consideration. Round 2 had an 87% (13/15) response, with 61 of the 79 proposed items reaching consensus for inclusion. Round 3 achieved a 93% (14/15) response. Sixty-seven items reached final inclusion.
This study is the first to establish multidisciplinary, consensus-derived QIs for TES reports. This will allow generation of a synoptic reporting template to improve perioperative communication for these patients.
Colonoscopy is the standard of care for diagnosis and evaluation of colorectal cancers before surgery. However, varied practices and heterogeneous documentation affect communication between endoscopists and surgeons and can hamper surgical planning. The objective of this study was to develop recommendations for the use of standardized localization and reporting practices for colorectal lesions identified at lower endoscopy.
A systematic review of existing endoscopy guidelines as well as a thorough narrative review of the overall endoscopy literature were performed to identify existing practices recommended globally. Colorectal surgeons and gastroenterologists from across Canada who had demonstrated leadership in endoscopy, managed large endoscopy programs, produced high-impact publications in the field of endoscopy or participated in the development of endoscopy guidelines were selected to participate. Using an online Delphi process, experts were prompted to propose standard practices to facilitate documentation and localization of colorectal lesions at endoscopy and to minimize the need for repeat procedures. These initial statements were recirculated to participants and rated on a 9-point Likert scale. Statements with median scores of 1–3 were excluded, those with scores of 7–9 were included and those with scores of 4–6 were recirculated. In the final round, inclusion of recirculated items was decided by majority vote (> 50%).
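The median-based decision rule described above can be sketched directly. How fractional medians (e.g., 6.5 with an even number of raters) are handled is an assumption, as the abstract does not specify it:

```python
import statistics

def delphi_disposition(ratings):
    """Classify one Delphi statement from its 9-point Likert ratings.

    Per the rules above: median 7-9 -> include, median 1-3 -> exclude,
    median 4-6 -> recirculate to the final majority-vote round.
    Fractional medians between bands fall to "recirculate" here
    (an assumption; the source does not define this case).
    """
    med = statistics.median(ratings)
    if med >= 7:
        return "include"
    if med <= 3:
        return "exclude"
    return "recirculate"

# Hypothetical ratings from a panel would be passed per statement:
disposition = delphi_disposition([7, 8, 8, 9])  # "include"
```

In practice one such call would be made per proposed statement, with the "recirculate" set carried into the next voting round.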
In total, 129 of 197 statements achieved consensus after 3 rounds of voting by 23 experts. There was greater than 90% participation in each round. Specific recommendations varied according to lesion location in the cecum, colon or rectum and whether referral for surgical or advanced endoscopic resection was planned. Recommendations were provided for appropriate documentation, indications, location and method of tattoo placement, as well as photographs and real-time 3D scope configuration device use.
Best practices to optimize endoscopic lesion localization and communication are not addressed in previous guidelines. This national consensus between colorectal surgeons and gastroenterologists in Canada will provide a framework for efficient and effective colorectal lesion localization.
Black patients are disproportionately affected by colorectal cancer, with respect to both incidence and mortality. Studies accounting for patient- and community-level factors that contribute to such disparities are lacking. Our objective was to determine whether black race, compared with white race, is associated with worse survival in colon cancer, while accounting for socioeconomic and clinical factors.
A retrospective analysis was performed of black and white patients with nonmetastatic colon cancer in the Surveillance, Epidemiology, and End Results (SEER) cancer registry between 2008 and 2016. Multivariable Cox regression analysis and propensity-score matching (PSM) were performed.
A total of 100 083 patients were identified: 15 155 black patients and 84 928 white patients. Median follow-up was 38 months (interquartile range 15–67 mo). Black patients were more likely to lack health insurance and to reside in counties with low household income, high unemployment and lower high school completion rates. Black race was associated with poorer unadjusted 5-year cancer-specific survival (79.4% v. 82.4%, p < 0.001). After multivariable adjustment, black race was associated with greater 5-year cancer-specific mortality (hazard ratio [HR] 1.19, 95% confidence interval [CI] 1.13–1.25, p < 0.001) and overall mortality (HR 1.12, 95% CI 1.08–1.16, p < 0.001). Mortality was higher for black patients across stages: stage I (HR 1.08, 95% CI 1.08–1.09), stage II (HR 1.06, 95% CI 1.06–1.07) and stage III (HR 1.03, 95% CI 1.03–1.04). PSM identified 27 640 patients; black race was associated with worse 5-year overall survival (67.5% v. 70.2%, p = 0.003) and cancer-specific survival (79.4% v. 82.3%, p < 0.001).
This population-based, propensity-score matched analysis demonstrates poorer overall survival and cancer-specific survival in black patients undergoing surgery for nonmetastatic colon cancer.
Fecal immunochemical testing (FIT) is an accepted form of colorectal cancer (CRC) screening and is recommended for adults up to the age of 75 years in Canada. Many individuals over 75 years of age continue to receive FIT despite being outside accepted guidelines. The aim of this study was to determine whether patients aged 75 years and older with screen-detected neoplasms demonstrated improved outcomes and survival compared with patients with non-screen-detected neoplasms.
A retrospective population-based cohort study was performed. We identified an aggregated patient cohort aged 75 years and older with a diagnosis of advanced colorectal neoplasia (ACN) from November 2013 to November 2019, as well as patients aged 75 years and older who underwent FIT within these dates. The proportion of screen-detected ACNs was calculated. Surgical intervention, hospital length of stay, postoperative mortality and overall survival (OS) were analyzed.
Between November 2013 and November 2019, 3980 patients aged 75 years and older were diagnosed with ACNs; 798 (20%) were screen detected. Patients with screen-detected ACN were more likely to be diagnosed at an earlier stage and were twice as likely to undergo surgery (odds ratio 2.12, p < 0.001). Patients with screen-detected ACN had a 36% lower hazard of death (hazard ratio 0.64, p < 0.001) than patients with non-screen-detected ACN, after adjustment for other variables such as age, Charlson Comorbidity Index (CCI), stage and surgical intervention. The survival benefit of screen detection persisted across most age and CCI groups.
Screen detection of ACN in patients over 75 years of age is associated with improved overall survival when controlling for other potential confounders. Compared with their counterparts with non-screen-detected ACN, these patients have earlier-stage disease and are more likely to undergo surgical intervention with improved outcomes, irrespective of age and CCI. These data may support screening for appropriately selected patients who would otherwise fall outside of current guidelines.
Loop ileostomies (LI) are used for temporary fecal diversion to protect downstream colorectal anastomoses. Timing of reversal depends on the indication for the ileostomy, and the standard operative approach has been open. Recently, however, with the rising prevalence of obesity and increasing competency with minimally invasive surgical approaches, laparoscopic LI reversal has been employed and studied. The aim of this systematic review and meta-analysis was to compare laparoscopic and open LI reversal.
Medline, Embase and Central were systematically searched. Articles were included if they compared rates of postoperative morbidity, length of stay (LOS) or both in patients undergoing laparoscopic and open LI reversal. Pairwise meta-analyses using inverse variance random effects were performed.
From 410 citations, 4 studies with 205 patients undergoing laparoscopic LI reversal (44.9% female, mean age 55.7 yr) and 198 patients undergoing open LI reversal (42.9% female, mean age 54.6 yr) were included. Patients in the laparoscopic reversal group had a significantly shorter LOS (4.3 d v. 5.2 d, mean difference [MD] −0.65, 95% confidence interval [CI] −1.20 to −0.11, p = 0.02). Laparoscopic and open LI reversal were comparable in rates of postoperative morbidity, aside from a significant decrease in the rate of superficial surgical site infection (sSSI) with the use of laparoscopy (1.2% v. 6.7%, relative risk [RR] 0.26, 95% CI 0.07 to 0.96, p = 0.04). Operative time was significantly longer in the laparoscopic LI reversal group (MD 15.30, 95% CI 8.83 to 21.77, p < 0.001). Observational studies were uniformly at moderate risk of bias, and the included randomized controlled trial was at low risk of bias.
Patients undergoing laparoscopic LI reversal have a decreased postoperative LOS and sSSI compared with patients undergoing open reversal. This study is limited by the number of included studies. Further prospective study comparing laparoscopic and open LI reversal is required.
In rectal cancer surgery, larger mesorectal fat area has been shown to correlate with increased intraoperative difficulty. Prior studies were mostly in Asian populations with average body mass indices (BMIs) less than 25 kg/m2. This study aimed to define the relationship between radiologic variables on pelvic magnetic resonance imaging (MRI) and intraoperative difficulty in a North American population.
This is a single-centre retrospective cohort study analyzing all patients who underwent low anterior resection (LAR) or transanal total mesorectal excision (TaTME) for stage I–III rectal adenocarcinoma from January 2015 until December 2019. Eleven pelvic MRI measures were defined a priori according to previous literature and measured in each of the included patients. Operative time in minutes and intraoperative blood loss in millilitres were utilized as the primary indicators of intraoperative difficulty.
Eighty-three patients (39.8% female, mean age 62.4 [standard deviation (SD) 11.6] yr) met the inclusion criteria. The mean BMI of included patients was 29.4 (SD 6.2) kg/m2. Mean operative times were 227.2 (SD 65.1) minutes and 340.6 (SD 78.7) minutes for LARs and TaTMEs, respectively. On multivariable analysis including patient, tumour and MRI factors, increasing posterior mesorectal thickness was significantly associated with increased operative time (p = 0.04). Every 1-cm increase in posterior mesorectal thickness correlated with an increase of 26 minutes and 6 seconds in operative time. None of the MRI measurements correlated strongly with BMI.
As the number of obese patients with rectal cancer continues to grow, strategies aimed at optimizing their surgical management are paramount. While increasing BMI is an important preoperative risk factor, the present study identifies posterior mesorectal thickness on MRI as a reliable and easily measurable parameter to help predict operative difficulty. Ultimately, this may serve as an indicator of which patients would benefit most from preoperative resources aimed at optimizing operative conditions and postoperative recovery.
Recent evidence suggests that the gut microbiota may affect colonic healing after surgery and may cause or prevent anastomotic leak. Short-chain fatty acids (SCFAs), including butyrate, are beneficial bacterial metabolites that are produced as a result of bacterial fermentation of dietary oligosaccharides in the colon. They are known to play an important role in the maintenance of colonic homeostasis and health. As such, SCFAs, specifically butyrate, may enhance anastomotic healing.
Our objective was to promote anastomotic healing and prevent leak in a mouse anastomotic leak model by increasing the bacterial production of butyrate in the colon via dietary supplementation with oligosaccharides, namely inulin and galacto-oligosaccharides (GOS).
Mice were fed diets supplemented with 10% wt/wt inulin or GOS, or with cellulose as a control, for a period of 2 weeks. They were then subjected to a proximal colonic anastomosis under general anesthesia. Mice were followed after surgery and were sacrificed on postoperative day 6. Anastomotic healing was assessed both macroscopically and microscopically. The preservation of the extracellular matrix at the wound site and the integrity of the gut barrier were also evaluated.
Diets supplemented with inulin or GOS increased the levels of SCFAs, namely butyrate, in the colon. When compared with cellulose, diets supplemented with oligosaccharides improved macroscopic and microscopic anastomotic healing. They were also associated with a higher collagen concentration at the colonic wound site and lower bacterial translocation after surgery.
Oligosaccharides appear to enhance anastomotic healing and prevent anastomotic leak after colonic surgery in our mouse model. This beneficial effect was associated with enhanced collagenization of the wound and an improved gut barrier.
Identifying patients at high risk of colorectal cancer is essential to the provision of surgical care. At our site, the Facilitated Access to Surgical Treatment (FAST) program triages patients with rectal bleeding as high risk or low risk using a 10-item questionnaire and subjective classification by the referring physician. Our objective was to assess the performance of this system in predicting the presence of cancer.
All patients referred for rectal bleeding between Feb. 1, 2016, and Dec. 31, 2018, were considered. Patients with duplicate referrals or previously diagnosed colorectal cancer were excluded. Follow-up was completed on Apr. 1, 2020. Primary outcomes included incidence of cancer and time from referral to diagnosis. Secondary outcomes included the predictive accuracy of specific symptoms for colorectal cancer.
A total of 1846 patients were included (mean age 51.8 yr), of whom 582 (31.5%) were identified as high risk using the triage system. Of the 37 (2%) patients diagnosed with cancer, 29 (78.4%) patients were classified as high risk. High-risk patients had a higher incidence of cancer (5.0% v. 0.6%, p < 0.001) and shorter wait times for endoscopy (median 201 [interquartile range (IQR) 77–383] v. 292 [IQR 190–448] d, p < 0.001) when compared with low-risk patients. Patients identified as high risk by the referring doctor were 22.3 times more likely to have a cancer than low-risk patients (20.8% v. 0.9%, p < 0.001). Patients identified as high risk by the questionnaire were 3.5 times more likely to have a cancer than low-risk patients (2.7% v. 0.8%, p = 0.012). The most predictive symptoms for cancer were black blood (positive predictive value [PPV] 5.56%), more frequent bowel movements (PPV 3.16%) and blood mixed inside the stool (PPV 3.02%).
The FAST system identified the majority of patients with rectal bleeding who had cancer; however, several patients with cancer were inaccurately classified as low risk. We will continue to refine this system.
The purpose of this study was to describe postoperative bowel dysfunction after restorative proctectomy and to identify factors associated with its development, using real-world population-level data.
Adult (≥ 18 yr) patients who underwent restorative proctectomy for rectal cancer between April 1998 and November 2018 were identified from the Hospital Episode Statistics (HES) database and were linked to the Clinical Practice Research Datalink (CPRD) for postoperative follow-up. All patients had to be ostomy-free at cohort entry. The primary outcome was bowel dysfunction, defined according to relevant symptom-based read codes and medication prescription product codes recorded during follow-up visits. A Cox proportional hazards model was used to identify factors associated with postoperative bowel dysfunction, adjusting for relevant covariates.
In total, 2197 patients were included. The median age was 70.0 (interquartile range [IQR] 62.0–77.0) years, the majority (59.2%) of patients were male and nearly two-thirds (60.1%) had mid–low rectal tumours. After a median follow-up of 51.6 (IQR 24.0–90.0) months, 592 (26.9%) patients had a clinical encounter for a bowel symptom, and 690 (31.4%) patients received a bowel medication prescription; 217 patients (31.4% of medicated patients, or 9.9% of the entire cohort) received 10 or more prescriptions. The primary outcome, bowel dysfunction, was identified in 620 (28.2%) patients. Risk factors for postoperative bowel dysfunction included extremes of age (< 40 yr: adjusted hazard ratio [aHR] 2.35, 95% confidence interval [CI] 1.18–4.65; 70–79 yr: aHR 1.25, 95% CI 1.03–1.52), radiotherapy (aHR 1.94, 95% CI 1.56–2.42), distal tumours (aHR 1.62, 95% CI 1.34–1.94), history of diverting ostomy (aHR 1.58, 95% CI 1.33–1.89) and anastomotic leak (aHR 1.48, 95% CI 1.06–2.05). A minimally invasive surgical approach was protective against postoperative bowel dysfunction (aHR 0.68, 95% CI 0.53–0.86).
Bowel dysfunction is common following rectal cancer surgery, and several patient-, disease- and treatment-level factors were associated with its development.
Previous meta-analyses have compared observational therapy (OT) with antibiotics for acute uncomplicated diverticulitis (AUD); however, the lack of clinically relevant margins has limited their conclusions. The purpose of this study was to determine whether OT is noninferior to antibiotics for AUD.
Medline, Embase and Cochrane were systematically searched by 2 independent reviewers to identify comparative studies of OT versus antibiotics for AUD. Noninferiority margins (NIM) for each outcome were based on a previous Delphi consensus process including 50 patients and 55 physicians: persistent diverticulitis (NIM 4.0%), progression to complicated diverticulitis (NIM 3.0%) and time to recovery (NIM 5 d). Risk differences (RD) and mean differences (MD) were pooled using random-effects meta-analysis. As is customary with noninferiority analyses, 1-sided 90% confidence intervals and Z tests were used to determine noninferiority. A sensitivity analysis was performed excluding patients determined post hoc to have had complicated diverticulitis.
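The noninferiority decision rests on whether the upper bound of the 1-sided 90% CI of the pooled risk difference stays below the margin. A minimal sketch of that logic; the standard error used in the example is back-calculated for illustration, not a value reported in the abstract:

```python
import math

def noninferiority_test(rd, se, nim):
    """One-sided noninferiority Z test for a pooled risk difference.

    rd:  pooled risk difference (treatment minus control; higher = worse)
    se:  standard error of the pooled estimate
    nim: noninferiority margin, on the same scale as rd
    Noninferiority is declared when the upper bound of the one-sided
    90% confidence interval lies below the margin (H0: rd >= nim).
    """
    z90 = 1.2816                            # one-sided 90% critical value
    upper = rd + z90 * se                   # upper CI bound
    z = (rd - nim) / se                     # test statistic vs. the margin
    p = 0.5 * math.erfc(-z / math.sqrt(2))  # Phi(z): one-sided p value
    return upper < nim, p

# Persistent diverticulitis: RD -0.39%, NIM 4.0% (se here is assumed)
noninferior, p = noninferiority_test(-0.0039, 0.022, 0.04)
```

The same check applies to each outcome, with a mean difference and its margin substituted for the risk difference where the outcome is continuous (e.g., time to recovery).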
Nine studies (3 randomized controlled trials, 6 observational studies) met the inclusion criteria: OT (n = 2011) versus antibiotics (n = 1144). Eight studies were European publications and 1 was from New Zealand. The years of accrual spanned from 2000 to 2019. OT was noninferior to antibiotics regarding the risk of persistent diverticulitis (pooled RD −0.39%, 90% confidence interval [CI] −3.22% to 2.44%, NIM 4.0%, NI p value < 0.001; I2 66%), progression to complicated diverticulitis (pooled RD −0.030%, 90% CI −0.99% to 0.92%, NIM 3.0%, NI p value < 0.001; I2 0%) and time to recovery (pooled MD 2.0 d, 90% CI 1.38–2.62 d, NIM 5 d, NI p value < 0.001; I2 not applicable). On sensitivity analysis, OT remained noninferior for each outcome. When stratified by study design, OT also remained noninferior for each outcome among randomized controlled trials only.
According to clinically relevant NIMs, OT was noninferior to antibiotics for the treatment of AUD with regard to persistent diverticulitis, progression to complicated diverticulitis and time to reach full recovery.
Patients with stage I anal squamous cell carcinoma (SCC) have been underrepresented in landmark trials showing superiority of chemoradiotherapy (CRT) over radiotherapy alone (RT) for definitive treatment. This review aimed to elucidate whether definitive treatment with RT versus CRT is associated with differences in survival and treatment-related toxicity outcomes in patients with stage I anal SCC.
Medline, Embase and Central were searched as of November 2020 to identify studies comparing outcomes of RT versus CRT for nonoperative treatment of patients with stage I anal SCC. The primary outcomes were 5-year overall survival (OS) and 5-year disease-free survival (DFS). The secondary outcome was treatment-related toxicities. A pairwise meta-analysis was performed using an inverse-variance random-effects model.
From 2174 citations, 5 retrospective studies with 415 patients treated with RT and 3784 patients treated with CRT were included. Patients treated with CRT had an increased 5-year OS (relative risk [RR] 1.18, 95% confidence interval [CI] 1.10–1.26, p < 0.001, I2 0%) but no significant difference in 5-year DFS (RR 1.01, 95% CI 0.92–1.11, p = 0.87, I2 0%). Treatment-related toxicities could not be meta-analyzed because of heterogeneity. Limited data from individual studies suggested an increased frequency of select toxicities with CRT.
RT may be an appropriate alternative to CRT for patients with stage I anal SCC who may be unable to tolerate chemotherapy-related toxicity; however, CRT remains the gold standard. Larger prospective studies comparing strategies for this select patient population are needed to clarify whether treatment can be de-escalated.
Historically, the Hartmann procedure (HP) has been the operation of choice for diverticulitis in the emergency setting. However, recent evidence has demonstrated the safety of primary anastomosis (PA). The purpose of this study was to evaluate the trends of, and factors associated with, HP compared with PA in emergency surgery for diverticulitis.
Using the National Inpatient Sample database, we identified adult patients 18 years of age and older who underwent emergency surgery for diverticulitis (HP or PA) between January 1993 and October 2015 using International Classification of Diseases, 9th Revision (ICD-9) codes. Patients with inflammatory bowel disease or gastrointestinal cancer or who underwent elective surgery were excluded. Of 90 815 patients who met the inclusion criteria, 49 259 (54.2%) had an HP and 41 556 (45.8%) had a primary anastomosis (PA). Median age was 62 years (interquartile range [IQR] 50–73). Patients who had an HP were more likely to be older (median 63 [IQR 51–75] yr v. 60 [IQR 48–72] yr, p < 0.001) and to have a major or extreme severity of illness index (69.5% v. 41.4%, p < 0.001). The overall likelihood of HP did not differ significantly over time: HP comprised 53.5% of included cases in 1993–1995 and 54.3% of cases in 2013–2015 (p < 0.001). On multiple logistic regression, age (reference: 18–44 yr; 45–54 yr: odds ratio [OR] 1.13, 95% confidence interval [CI] 1.07–1.19; ≥ 75 yr: OR 1.39, 95% CI 1.29–1.49) and severity of illness (moderate illness: OR 1.81, 95% CI 1.69–1.93; major or extreme illness: OR 4.80, 95% CI 4.51–5.12) were independent predictors of HP.
Despite early data demonstrating its safety compared with the Hartmann procedure, the use of sigmoid colectomy with primary anastomosis for diverticulitis in the emergency setting has remained stable over 2 decades. Future research should aim to identify a possible change in practice with the publication of more recent randomized controlled trials.
Traditionally, reversal of neuromuscular blocking agents following the completion of surgery was achieved with cholinesterase inhibitors. Recently, sugammadex has been increasingly relied upon. Sugammadex is a cyclodextrin molecule that rapidly reverses steroidal neuromuscular blocking drugs. Its use following colorectal surgery has become more common, and while the rapidity of reversal is undoubtedly improved, whether sugammadex affects clinical postoperative outcomes is unknown. This systematic review and meta-analysis aims to compare postoperative outcomes in patients receiving sugammadex with those receiving a control during colorectal surgery.
Medline, Embase and Central were systematically searched from database inception to December 2020. Articles were included if they compared sugammadex with a control (e.g., neostigmine, pyridostigmine, placebo) in patients undergoing colorectal surgery in terms of postoperative morbidity or length of stay (LOS) or both. Pairwise meta-analyses using inverse variance random-effects models were performed. Risk of bias was assessed using the RoB 2 tool for randomized trials and ROBINS-I for nonrandomized studies.
From 269 citations identified, 5 studies with 535 patients receiving sugammadex (45.8% female, mean age 64.4 yr) and 569 patients receiving a traditional neuromuscular blockade reversal agent (45.0% female, mean age 64.3 yr) were included. All included studies were appraised to be of low or moderate risk of bias. There was no significant difference in LOS between the 2 groups (mean difference −0.01, 95% confidence interval [CI] −0.27 to 0.25, p = 0.95). The risk of adverse respiratory events postoperatively was similar between the 2 groups (risk ratio 1.33, 95% CI 0.81 to 2.19, p = 0.25).
There are no current data to suggest an improvement in postoperative outcomes with the use of sugammadex in patients undergoing colorectal surgery. This study is limited by the number of included studies. Further prospective study comparing sugammadex and traditional neuromuscular blockade reversal agents in colorectal surgery is required.
Sexual dysfunction is an important sequela of rectal cancer treatment and has important implications for patients’ quality of life. However, patients often lack information about sexual dysfunction associated with rectal cancer treatment. Given that sexual health and satisfaction are important determinants of quality of life, optimizing patient education in this area has the potential to improve rectal cancer survivorship care. This study aims to ascertain patients’ needs and expectations for information on sexual dysfunction after rectal cancer treatment.
After institutional ethics board approval, a qualitative study was conducted using semistructured interviews of both rectal cancer survivors (n = 10, 50% female, age 48–64 yr) and colorectal surgeons (n = 6, 83% female). Data analysis was performed using thematic analysis. Transcribed interviews were coded independently by 2 researchers using MAXQDA software and the themes identified were refined iteratively on the basis of continuing discussions with all investigators.
Patient interviews revealed 4 overarching themes: (1) patients have limited knowledge about symptoms of sexual dysfunction, (2) patients receive inadequate information from the medical team regarding sexual dysfunction, (3) patients want to receive information about sexual dysfunction in different formats, and (4) patients want this information before the start of treatment. Surgeon interviews revealed 4 overarching themes: (1) surgeons face challenges in informing patients about sexual dysfunction, (2) discussion of sexual dysfunction depends on the complexity of the patient's condition and persona, (3) surgeons think patients should receive high-quality information on sexual dysfunction, and (4) this information should be provided both before and after treatment.
Patients with rectal cancer receive limited information about sexual dysfunction associated with treatment, and as a result, their needs and expectations are inadequately met. Our study supports the development of high-quality, patient-centred material on sexual dysfunction after rectal cancer treatment that could facilitate communication, improve physicians' and patients' satisfaction, and help patients to gain better knowledge of sexual dysfunction.
As a result of the pandemic, resident selection was forced to move to a video-based process in 2020. Over the next few years, training programs will have to choose to either go back to face-to-face interviews or continue with video-based platforms. In an attempt to help programs with their decision, this project aimed to document the experience of program directors and applicants undergoing a video-based selection process.
Applicants and program directors involved in the 2020 Canadian Colorectal Fellowship Match were invited to participate in this qualitative study. All colorectal fellowship programs carried out their selection process as per their protocol. Structured phone interviews of participants were completed before and after the fellowship interviews. The perspectives of applicants and program directors were extracted using directed content analysis. Six program directors and 5 applicants participated in this study.
Before the interview, program directors described the potential role of the interview as an opportunity to get a sense of how applicants react under stress and who they are as individuals. In programs where most applicants do electives, program directors admitted that the interview had minimal impact on the rank list. Applicants used the interview to assess the culture of the program and to ensure that their aspirations fit well with the program. When asked after the interview, applicants found the process useful. Both applicants and program directors mentioned multiple benefits of this interview format, including reduced financial and opportunity costs and environmental benefits. The most commonly mentioned shortcoming of the video-based format was the loss of informal interactions.
Considering the multiple advantages of the video-based interviews documented by this project and other sources, identifying ways to further optimize this process should be of interest. Looking for alternative opportunities for informal interactions may serve to refine the video-based interview process.
The COVID-19 pandemic required strict prioritization of health care resources, resulting in at least a partial shutdown of endoscopy in many health care centres. This study aims to quantify the impact of colonoscopy shutdown on colorectal cancer detection and screening.
The endoscopy database at an academic tertiary care centre was queried for all colonoscopies performed from March to June 2020, corresponding to the first wave of the pandemic, and colonoscopies performed in March to June 2019, a nonpandemic (NP) period. The indications, cancer and adenoma detection rates, and prioritization of urgent procedures were compared between the 2 periods.
In the NP period, 2515 colonoscopies were performed, while only 462 were performed during the pandemic period, an 82% reduction. Surveillance colonoscopies in high-risk patients were reduced from 848 to 114, an 87% reduction. Only 17 initial screening colonoscopies in high-risk patients were completed compared with 303 in the NP period, a 94% reduction. The proportion of colonoscopies performed for urgent indications increased from 4.7% to 27.1% (p < 0.001), and the proportion performed for inpatients increased from 0.5% to 13.6% (p < 0.001). In the NP period, 44 (1.7%) patients were diagnosed with cancers and adenomas were removed in 766 patients (30.5%), whereas during the pandemic period, 18 (3.9%) cancers were diagnosed and 142 patients (30.7%) had adenomas. On multivariate regression, the pandemic period was independently associated with increased cancer detection (odds ratio [OR] 2.12, 95% confidence interval [CI] 1.18–3.80) and urgent colonoscopy (OR 6.58, 95% CI 4.81–9.09) but was not associated with adenoma detection (OR 1.06, 95% CI 0.84–1.35).
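The percentage reductions quoted above can be verified from the reported counts; a minimal sketch (counts are those reported in this abstract):

```python
def pct_reduction(before: int, after: int) -> float:
    """Percentage reduction from a baseline count to a later count."""
    return 100 * (before - after) / before

# Counts reported for the nonpandemic (2019) v. pandemic (2020) periods.
print(round(pct_reduction(2515, 462)))  # all colonoscopies → 82
print(round(pct_reduction(848, 114)))   # surveillance, high-risk patients → 87
print(round(pct_reduction(303, 17)))    # initial screening, high-risk patients → 94
```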
The restriction of colonoscopies resulted in a significant reduction in screening and surveillance in high-risk patients, as well as a significant reduction in cancers found. Future studies will determine the ramifications of the decrease in surveillance and adenoma detection on patients whose colonoscopies were delayed because of the pandemic.
Opioid use disorder (OUD) has been associated with an increased risk of complications, longer length of stay and higher costs in many surgical procedures. This study aims to determine its impact on the outcomes of colorectal resections.
Using the National Inpatient Sample database, all admissions for colorectal resections between 2000 and 2016 were identified using International Classification of Diseases, 9th Revision and 10th Revision (ICD-9/10) codes. Patients with a history of OUD were similarly identified. The outcomes of interest were mortality, postoperative complications, length of stay and cost of admission. Patient and disease factors associated with OUD were identified in univariate linear regression. Multivariable logistic and linear regression models were used to study the association of OUD and the outcomes of interest.
Of 1 599 767 admissions for colorectal resections, 5953 (0.37%) patients had an OUD. Patients with OUD were younger (56.0 [standard deviation (SD) 16.1] v. 62.2 [SD 17.1] yr, p < 0.01), more likely to be smokers (27.6% v. 21.6%, p < 0.001) and overweight or obese (10.9% v. 8.6%, p < 0.001), and less likely to be undergoing elective surgery (33.1% v. 49.1%, p < 0.001). On multivariate logistic regression, OUD was significantly associated with both overall operative complications (odds ratio [OR] 1.54, 95% confidence interval [CI] 1.34–1.77) and anastomotic leak (OR 1.29, 95% CI 1.08–1.55). However, OUD did not have a significant impact on mortality (OR 0.90, 95% CI 0.64–1.25). On multivariable linear regression, OUD was also associated with an increased cost of $26 365.59 (95% CI $26 365.50–$26 365.67) and increased length of stay of 2.60 days (95% CI 2.54–2.65).
OUD represents a significant predictor of overall complications, anastomotic leak, increased cost and length of stay in major colorectal surgery. This patient group should receive dedicated preoperative optimization and careful postoperative monitoring.
Patients with rectal cancer (RC) may have unrealistic expectations of good anorectal function after neoadjuvant radiation and low anterior resection (LAR) surgery. The goals of this study are to assess patient expectations of bowel, urinary and sexual function after RC treatments and whether an education video changed expectations.
Thirty-four patients with RC undergoing neoadjuvant chemoradiation followed by LAR were assessed. Following consultation with a surgeon, but before surgery, a questionnaire assessing expectations of treatment was administered. Patients then watched an educational video that provided information on treatments and functional limitations after treatments. The video included patient testimonies. The questionnaire was administered again after patients watched the video, and changes in expectations were analyzed. Scoring values ranged from 1, indicating poor function, to 5, indicating normal function.
On average, patient scores for control of bowel movements, rectal gas, wearing a pad, rushing to the toilet, altering diet and use of medication indicated an expectation of sometimes problematic function, but with a wide range of scores, from occasionally problematic to good function (scores 1 to 5). Similarly, urination and sexual function scores indicated an expectation of sometimes problematic function, again with a wide range of scores (1 to 5). Expectations of altered bowel function, dietary restriction and urinary and sexual function changed in 47% to 71% of patients after watching the education video. The education video was scored as helpful or very helpful by 85% of patients.
Patients have widely varying expectations of problematic control of bowel function, urination and sexual function following RC treatments, which could depend on individual patient factors such as age, general health, tumour location, anastomotic height and preoperative function. An education video changed expectations of functional limitations after treatments in a majority of patients. Further education modalities for patients and surgeons may provide more uniform expectations of functional limitations after RC treatments.
Robotic-assisted surgery offers technical advantages in the pelvis. However, cost concerns have limited the implementation of robotic colorectal surgery in Canada. Our study describes the early experience of a robotic colorectal surgery program started in November 2018 at a Canadian tertiary centre.
A prospective database was used to perform a case review from program inception to January 2021 on patients with rectal and rectosigmoid adenocarcinoma. Demographic, procedural and tumour characteristics were collected. Short-term clinicopathologic outcomes were analyzed.
Sixty-two patients were included, of whom 45 were male (72.6%). Mean age was 63.2 years (standard deviation [SD] 9.9 yr), mean body mass index was 29.3 (SD 6.0) and median Charlson Comorbidity Index score was 4 (range 2–11). Fifty-six (90.3%) patients had rectal adenocarcinoma and the remaining 6 patients (9.7%) had rectosigmoid adenocarcinoma. Mean tumour distance from the sphincter was 5.1 cm (SD 3.6 cm). Common procedures were low anterior resection (41 patients, 66.1%; mean operating time [OT] 303 [SD 73.0] min) and abdominoperineal resection (13 patients, 21.0%; mean OT 345 [SD 114.5] min). Median hospital stay was 5 days (range 1–34 d). Thirty-day complication, readmission and mortality rates were 4.8%, 3.2% and 0%, respectively. A median of 15 nodes were retrieved (range 5–37). R0 resection was achieved in all cases and no conversions occurred. Total mesorectal excision was graded as complete, nearly complete and incomplete in 41 cases (73.2%), 14 cases (25.0%) and 1 case (1.8%), respectively. A significant decrease in OT was noted between the first and last 10 cases (mean 348 v. 291 min, p = 0.02).
Implementation of robotic rectal and rectosigmoid cancer resection in a Canadian centre is feasible with comparable clinicopathologic outcomes. There is a significant decrease in OT as surgeon experience increases. Future research will compare robotic versus laparoscopic rectal cancer outcomes and analyze cost-effectiveness.
Supportive interventions for rectal cancer survivors with low anterior resection syndrome (LARS) following restorative proctectomy are lacking. To improve patient experience through education and support, we developed an online patient-centred application on LARS (eLARS) for rectal cancer survivors. The aim of this study was to evaluate the feasibility of eLARS usage in rectal cancer survivors and gain insight into participants’ experiences with LARS and the eLARS application.
This was a mixed methods pilot study, which included a feasibility and qualitative analysis. Convenience sampling was used to recruit rectal cancer survivors from a single institution who underwent restorative proctectomy. Study participants gained access to eLARS for 2 months. The primary outcome was feasibility, defined as 75% of study participants using the application at least 4 times per month. Semistructured interviews were performed with patients following the study period. Thematic analysis was used to analyze the interviews.
Our sample included 8 rectal cancer survivors, 5 females and 3 males, with a median age of 58.5 (interquartile range 56.5–64.5) years. Most participants (75%) were at more than 1 year after restorative proctectomy. Five participants had major and 1 had minor LARS. The majority (75%) of participants used the application at least 4 times per month during the study period. Our thematic analysis revealed 2 challenges and 2 corresponding solutions. Participants felt that they lacked access to credible information and emotional support around the time of ileostomy reversal, and they found that the eLARS application addressed these challenges through the educational module and the patient discussion forum, respectively. All 7 participants who answered the satisfaction questionnaire were “mostly” or “very” satisfied with the app and would recommend it to other rectal cancer survivors with LARS.
Our study revealed challenges faced by rectal cancer survivors living with LARS. eLARS is a feasible educational and supportive intervention for rectal cancer survivors that has the potential to enhance patient experiences.
Colonic emergencies remain a major life-threatening condition associated with high morbidity and mortality rates. Many factors have been reported as significant risk factors for poor postoperative outcomes. Unlike elective colorectal surgical procedures, a large portion of emergency colorectal surgical procedures are performed by noncolorectal surgeons. The impact of specialization on the outcome of emergency colorectal surgery has not yet been well described. Our objective was to evaluate the impact of surgeon specialization on the outcomes of emergency colorectal surgeries.
This was a retrospective cohort study conducted at King Abdulaziz Medical City in Riyadh. Patients undergoing emergency colorectal surgery were identified and grouped according to the specialty of the primary surgeon: colorectal surgeon or noncolorectal surgeon. Outcomes included 30-day mortality, length of stay, intensive care unit (ICU) stay, 30-day complications and reoperation. Bivariate and multivariate regression analyses were used to assess the association between the surgeons' specialty and outcomes.
Of 219 included patients who underwent surgery between 2008 and 2020, there were 126 men (57.5%) and 93 women (42.5%). The most common procedure performed by colorectal surgeons was left hemicolectomy (n = 45, 67.2%), while the most common procedure performed by noncolorectal surgeons was right hemicolectomy (n = 26, 51%). The most common reason for surgery was malignant pathologies (n = 129, 58.9%). Patients whose surgeries were performed by a colorectal surgeon had a significant decrease in 30-day mortality (odds ratio [OR] 0.23, 95% confidence interval [CI] 0.065–0.834). Reoperation also decreased in this group (OR 0.413, 95% CI 0.179–0.956). In addition, both hospital length of stay and ICU length of stay decreased in the colorectal group compared with the noncolorectal group (OR 0.636, 95% CI 0.465–0.869, and OR 0.385, 95% CI 0.235–0.63, respectively).
Specialization in colorectal surgery has a significant influence on morbidity and mortality after emergency operations. These findings may help improve emergency services and remodel referral systems in institutions.
Internationally, cancer resections are increasingly being performed in patients over the age of 80 years. Understanding the outcomes for these patients is paramount to providing informed and accurate patient care. Our institution serves a population of 100 000 across 35 000 km2, in a region that has among the highest rates of colorectal cancer in the world. We reviewed the clinical outcomes for patients over the age of 80 years undergoing elective colonic resection for malignancy in a regional hospital in New Zealand.
We conducted a review of patients 80 years of age or older who underwent elective colonic resection for malignancy from January 2016 to January 2021. Baseline characteristics were gathered, including Charlson Comorbidity Index score to assess comorbidity. Thirty-day mortality, length of stay, intensive care unit stay and discharge disposition were the primary outcomes. Multivariate regression was used to assess the baseline factors that were predictors of 30-day mortality.
Sixty-four patients were identified (mean age 83.8 yr); 32 were women. The mean length of stay was 12.4 (standard deviation [SD] 9.6) days, with a mean critical care unit (CCU) stay of 1.9 (SD 2.3) days. Twenty-four percent of patients were discharged to a higher level of care than their baseline or to an inpatient rehabilitation ward. The 30-day mortality rate was 4.7% (3/64). Of the baseline characteristics, Charlson Comorbidity Index score was statistically significantly associated with mortality (p = 0.042).
Patients over 80 years of age have acceptable postoperative outcomes in keeping with international standards, although they may have a longer length of stay in hospital and higher likelihood of requiring rehabilitation or decreased independence. Careful discussion is warranted for selected patients, particularly those with comorbidities, but age alone need not be prohibitory in this population.
Perianal sepsis in Crohn disease (CD) fistulae is managed with antibiotics, surgical drainage and a noncutting seton if a trans-sphincteric tract is identified. Optimal management following seton placement remains to be determined. We aimed to assess the success rates of subsequent curative surgery, seton removal or long-term indwelling seton in patients with or without CD.
A retrospective cohort study of consecutive patients with perianal fistula treated with noncutting seton between 2010 and 2019 was undertaken. Patients included 83 with and 94 without CD. Initial control of symptomatic perianal infection with seton (improvement of pain, no fever, decreased purulence, and no dependence on antibiotics), subsequent healing (closure of external openings or minimally symptomatic fistula with little-to-no drainage upon gentle compression) and reintervention rates were compared.
Overall, 177 patients, 61% male and 83% with complex fistulae, were followed for a median of 23 months (interquartile range 11–40). Immunomodulatory treatment was used in 90% of patients with CD after seton. The probability of achieving initial control of perianal infection with seton was 92.9% in patients with CD and 96.7% in those without CD (p = 0.11). Subsequently, fistula healing for patients with and without CD, respectively, was achieved in 64% and 86% after curative surgery (n = 84, p = 0.1), 49% and 71% after seton removal (n = 57, p = 0.21) and 58% and 50% with long-term seton management (n = 36, p = 0.72). The overall probability of reintervention for recurrence during the follow-up was 83% in patients with CD versus 53% in those without CD (p = 0.002). The 3 management strategies had statistically comparable healing and reintervention outcomes in patients with CD (p = 0.5). CD, supralevator extension and diabetes were predictors of worse outcomes.
Curative surgery resulted in satisfactory fistula healing in the majority of patients without CD but had poor results in the few patients with CD who underwent curative surgery. Reintervention for recurring perianal sepsis is frequent in CD and can be managed using long-term indwelling setons.
Early ileostomy closure (EIC), 2 weeks or less from creation, is a relatively new practice that has been shown to be safe, feasible and cost-effective in Europe. To our knowledge, this is not routine practice, nor has it been studied, in North America. We sought patient and surgeon opinions about EIC.
This was a mixed-methods, cross-sectional study. Rectal cancer survivors from a single institution who underwent restorative proctectomy with diverting loop ileostomy (DLI) and subsequent reversal within 4 years were included. North American surgeons with high rectal cancer volumes (> 20 cases/yr) were included. Structured (for patients) and semistructured (for surgeons) interviews were performed. Grounded theory was used for thematic analysis.
Thirty-nine patients were interviewed (mean age 65 [standard deviation 12] yr, 48% female). DLI reversal occurred after a median of 7.3 months (interquartile range 4.6–10.6) and 49% (n = 19) of patients found it difficult or very difficult to live with their DLI. Important advantages of EIC perceived by patients were improved quality of life and quicker return to normal function, whereas the greatest disadvantage was that 2 operations in 2 weeks would be too taxing. The majority of patients (69%, n = 27) would have chosen EIC had it been offered. Surgeon interviews (n = 14) revealed 4 overarching themes: (1) barriers to implementing EIC exist including logistical difficulty, concerns for patient outcomes and resistance to change; (2) it is only appropriate for motivated patients with an uncomplicated perioperative course; (3) implementation will require a strategic approach including multidisciplinary and institutional buy-in; and (4) there are many benefits to EIC including mitigating stoma-related complications and financial burden. The majority of surgeons (n = 12, 86%) said they would definitely participate in a randomized controlled trial (RCT).
Although EIC has been proven safe, it has not yet been implemented in North America. Both patients and surgeons are interested in further exploring this approach and believe it warrants a North American RCT to motivate a change in practice.
Increased length of hospital stay (LOS) is associated with health care overutilization and adverse patient outcomes. Even though Crohn disease (CD) is often managed surgically, the existing evidence for the impact of CD on LOS after surgery is sparse. Right hemicolectomy (RC) and ileocecal resection (IC) are commonly performed to treat CD. We aimed to investigate the independent effect of CD on LOS after these surgeries.
Using provincial administrative data from 2008 to 2019, we conducted a retrospective cohort study on patients undergoing laparoscopic RC or IC without conversion, diversion, lysis of adhesions or postoperative complications (to minimize surgery-related heterogeneity). The independent effect of CD on LOS was assessed using multivariable regression, adjusting for patient, provider and technical factors. To confirm the independence of this effect from those of postoperative complications and surgery-related factors, the analysis was repeated on a cohort including all patients undergoing RC or IC, further controlling for laparoscopic surgery, conversion to laparotomy and postoperative complications.
A total of 11 500 patients undergoing laparoscopic RC or IC were analyzed. The mean LOS was 4.8 days. In these patients undergoing laparoscopic RC or IC with no diversion, conversion, lysis of adhesions or postoperative complications, CD was independently associated with increased LOS (relative risk [RR] 1.31, 95% confidence interval [CI] 1.24–1.39, p < 0.001), translating into a 1.5-day average increase in LOS. Similar effects were seen in the model containing all RC and IC and adjusting for complications and surgery-related factors (RR 1.27, 95% CI 1.22–1.32, p < 0.001).
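The stated 1.5-day average increase is consistent with applying the adjusted relative risk to the cohort's mean LOS; a back-of-envelope sketch (not the authors' regression model itself):

```python
mean_los = 4.8   # days; overall mean LOS reported above
rr_cd = 1.31     # adjusted relative risk of longer LOS with Crohn disease

# A 31% relative increase applied to the mean LOS gives the absolute increase.
extra_days = mean_los * (rr_cd - 1)
print(f"{extra_days:.1f}")  # → 1.5
```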
CD was strongly associated with increased postoperative LOS, independent of patient factors, surgeon and hospital factors, or technical factors related to the surgery. This increase in LOS was also independent of postoperative complications. Further investigation into the mechanisms underlying increased LOS in CD is needed, so as to reduce LOS after surgery in this patient population.
Although short stay (≤ 1 d) protocols exist for diverting loop ileostomy (DLI) closure, this practice is not widespread. The aim of this study was to identify patient and procedural factors associated with short-stay DLI closure and to study the morbidity of short-stay DLI closure, specifically related to readmission rates.
Adults (aged ≥ 18 yr) who underwent an elective DLI closure between 2012 and 2018 were identified from the American College of Surgeons National Surgical Quality Improvement Program (ACS-NSQIP). Short-stay DLI closure was defined as a postoperative stay of 1 day or less. Patients were grouped on the basis of length of stay after DLI closure (≤ 1 d v. > 1 d). Demographic, clinic, pathologic and operative factors were compared. Multivariable logistic regression was used to identify factors that were independently associated with a short stay, as well as readmission, postoperative major morbidity and mortality.
Of the 26 363 patients who underwent DLI closure, 1056 (4.0%) had a short postoperative stay (≤ 1 d). On crude analysis, short-stay patients were younger, were more likely to be male and white, had procedures with a shorter operative time and had fewer comorbidities. Short-stay patients had lower rates of surgical site infections and major postoperative morbidity. No difference was found in 30-day readmission and mortality rates. On multiple logistic regression, independent predictors of short stay were younger age, shorter operative time and the absence of comorbidities. A short stay was not associated with readmission or 30-day mortality on multiple regression. Finally, short stay was negatively associated with postoperative major morbidity and surgical site infection.
Short-stay (≤ 1 d) DLI closure can be implemented with younger, healthier patients undergoing shorter operations. A short-stay DLI closure in these patients is safe and is not associated with increased readmission and complication rates.
Traditionally, the perineal repair of choice for full thickness rectal prolapse has been the Altemeier procedure, a perineal proctosigmoidectomy with a handsewn anastomosis. A recently described variant of this procedure combines the resection and anastomosis into 1 step by means of linear and transverse stapling. There are few published data comparing the characteristics and outcomes of these 2 approaches.
This retrospective review, conducted at 2 Canadian academic hospitals, compared surgical outcomes and costs between the perineal stapled prolapse resection and the Altemeier procedure. All patients undergoing perineal stapled prolapse resection (n = 25) or the Altemeier procedure (n = 19) between 2015 and 2019 were included.
The perineal stapled prolapse resection group was significantly older than the Altemeier group (81 yr, 95% confidence interval [CI] 70–92 yr v. 74 yr, 95% CI 63–85 yr; p = 0.047), with a lower body mass index (21.4, 95% CI 17.7–25.1 v. 24.4, 95% CI 18.5–30.3; p = 0.042) and equivalent American Society of Anesthesiologists score (2.84, 95% CI 2.09–3.59 v. 2.68, 95% CI 1.93–3.43; p = 0.49). The operative time for perineal stapled prolapse resection was significantly less (30.3 min, 95% CI 16.3–44.3 min v. 67 min, 95% CI 43–91 min; p < 0.001), as were the operative costs. Recurrence (28% v. 37%; p = 0.53) and complication rates were equivalent.
Perineal stapled prolapse resection is a safe, efficient and effective approach to perineal proctosigmoidectomy, with surgical outcomes comparable to those associated with the Altemeier procedure, but with a significant reduction in operative time and cost.
Mental health and substance use disorders have been independently associated with increased risk of mortality and morbidity, as well as health care resource utilization after surgery. The aim of this study is to understand the prevalence of mental health and substance use disorders in patients with rectal cancer who are admitted for proctectomy and examine their association with 90-day postoperative complications and readmissions.
This was a retrospective cohort study using a representative sample of admissions and discharges from hospitals in the United States captured in the Nationwide Readmissions Database. Adult patients admitted with a primary diagnosis of rectal neoplasm who underwent surgical resection between 2010 and 2017 were included. The main outcome measures were 90-day readmission and postoperative complication rates.
In total, 61 416 patients were included. The prevalence of mental health and substance use disorders in patients with rectal cancer showed a significant rise (R2 0.88) on linear regression. Patients with known mental health and substance use disorders had higher rates of 90-day readmissions (30.6% v. 24.8%, p < 0.05), postoperative complications (52.7% v. 44.7%, p < 0.05) and mean length of stay on the index admission (9.2 v. 7.7 d, p < 0.05). Overall 90-day readmission rate was 25.7%, with the most common reasons being renal injury (34.3%), infection (23.9%) and ileus–bowel obstruction (19.7%). On multiple logistic regression, mental health and substance use disorders were independently associated with 90-day readmission (odds ratio [OR] 1.16, 95% confidence interval [CI] 1.11–1.22) and postoperative complications (OR 1.21, 95% CI 1.15–1.27). Independent predictors of developing complications included all studied mental health and substance use disorders: mood disorder (OR 1.14, 95% CI 1.06–1.22), anxiety disorder (OR 1.08, 95% CI 1.01–1.15), schizophrenia (OR 1.30, 95% CI 1.06–1.60), opioid use disorder (OR 1.50, 95% CI 1.03–2.17), nonopioid substance use disorder (OR 1.50, 95% CI 1.23–1.83) and alcohol use disorder (OR 1.80, 95% CI 1.05–1.33).
We observed increasing trends in the prevalence of mental health and substance use disorders in patients with rectal cancer. These comorbidities are associated with increased morbidity and health care costs. Future studies evaluating the effectiveness of perioperative support interventions for this subset of patients with rectal cancer may identify practical solutions to improve surgical outcomes.
Implementation of enhanced recovery pathways in colorectal surgery has been effective in reducing health care costs and improving patient satisfaction. The objectives of this study were to describe trends in early postoperative discharge; to explore potential consequences of premature early discharge in terms of patient outcomes and resource utilization; and to identify predictors of bounce-back readmission following early discharge.
This was a retrospective cohort study using the Nationwide Readmissions Database. Adult patients admitted with a primary colorectal neoplasm who underwent colectomy or proctectomy between 2010 and 2017 were identified. The exposure of interest was early (≤ 3 d from surgery) discharge. The main outcome measures were 30-day readmissions and postoperative complication rates.
Of 342 242 patients, 51 977 (15.2%) were discharged early. The proportion of early discharges significantly increased (R2 0.94), from 9.9% in 2010 to 23.4% in 2017, while their readmission rates remained unchanged (7.3% [standard deviation 0.5%]). Compared with patients who suffered complications during the index admission, those who were discharged early and bounced back (readmitted within 7 d of discharge) with complications had higher rates of anastomotic leak (38.4% v. 27.5%, p < 0.001), bleeding (5.3% v. 4.4%, p = 0.007) and percutaneous drainage (7.8% v. 4.3%, p < 0.001). Experiencing complications during bounce-back readmission after early discharge, rather than during the index admission, was an independent predictor of longer overall length of stay (β 0.04, p < 0.001) and higher cost of hospital admission (β 0.03, p < 0.001) on multiple linear regression. Factors independently associated with bounce-back readmission following early discharge were male sex (odds ratio [OR] 1.47, 95% confidence interval [CI] 1.33–1.63), open surgery (OR 1.37, 95% CI 1.23–1.52), stoma creation (OR 1.51, 95% CI 1.22–1.87) and comorbidities including hypertension (OR 1.15, 95% CI 1.03–1.28), chronic pulmonary disease (OR 1.50, 95% CI 1.29–1.74) and renal failure (OR 1.24, 95% CI 1.01–1.53).
Early postoperative discharge of patients with colorectal cancer is increasing despite a lack of improvement in readmission rates. Discharging patients prematurely could lead to readmissions with critical complications related to surgery and increased resource utilization.
In patients undergoing elective colectomy, preoperative administration of oral antibiotic bowel preparation (OABP) alone has been shown to reduce complications, but its utility in emergency resections is less clear. This study compares the perioperative morbidity of patients who received OABP before emergency resection with that of those who did not.
After institutional review board approval, the American College of Surgeons National Surgical Quality Improvement Program (NSQIP) database was queried for all patients who underwent emergency colectomy between 2012 and 2018. Those receiving mechanical bowel preparation or unable to ingest OABP (i.e., ventilated, obstructed, septic shock or moribund) were excluded. Outcomes of interest were surgical site infection (SSI), leak, ileus and major morbidity (defined as organ space or deep SSI, dehiscence, sepsis, septic shock, reintubation, reoperation, myocardial infarction, cardiac arrest, acute renal failure, pneumonia, deep vein thrombosis or urinary infection). One-to-one optimal propensity score matching was performed with potential confounders selected a priori including age, American Society of Anesthesiologists (ASA) class, smoking, diabetes, chronic obstructive pulmonary disease, weight loss (> 10%), immunosuppression, chemotherapy, anemia, preoperative sepsis, indication for surgery, emergency indication, surgical approach, type of resection, stoma creation and wound class. Logistic regression was then performed.
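As a rough illustration of the matching step described above, the sketch below pairs each OABP patient with the closest non-OABP patient on a precomputed propensity score. Note this is a simplified greedy nearest-neighbour pairing, not the optimal matching the study used, and the patient IDs and scores are hypothetical.

```python
# Illustrative 1:1 propensity-score matching (greedy nearest-neighbour).
# The study used optimal matching; this simplified sketch pairs each
# treated (OABP) patient with the closest unmatched control by score.
# All IDs and scores below are hypothetical.

def match_one_to_one(treated, control):
    """treated/control: lists of (patient_id, propensity_score) pairs."""
    available = dict(control)              # id -> score, still unmatched
    pairs = []
    for pid, score in sorted(treated, key=lambda t: t[1]):
        if not available:
            break
        # closest unmatched control by absolute score distance
        best = min(available, key=lambda c: abs(available[c] - score))
        pairs.append((pid, best))
        del available[best]
    return pairs

oabp = [("T1", 0.32), ("T2", 0.55), ("T3", 0.70)]
no_oabp = [("C1", 0.30), ("C2", 0.52), ("C3", 0.71), ("C4", 0.90)]
print(match_one_to_one(oabp, no_oabp))     # [('T1', 'C1'), ('T2', 'C2'), ('T3', 'C3')]
```

After pairing, covariate balance would be re-checked and the outcome regression run on the matched sample, as in the study.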
Of 15 988 patients who met the inclusion criteria, 592 received OABP before emergency colorectal resection. After matching, 1184 patients remained (592 in each arm), with balanced preoperative characteristics on univariate analysis. Postoperatively, those who received OABP had decreased organ space SSI (8.9% v. 14.9%, p = 0.006), but no differences in overall SSI (16.9% v. 19.6%, p = 0.49), leak (2.5% v. 3.4%, p = 0.493), ileus (26.2% v. 29.9%, p = 0.174) or NSQIP major morbidity (40.5% v. 42.7%, p = 0.48). On multivariate logistic regression including propensity score, the reduction in organ space SSI with OABP persisted (odds ratio 0.60, 95% confidence interval 0.417–0.852).
In the emergency setting, administration of OABP may contribute to reduced organ space SSIs.
Robotic surgery for colorectal pathology has gained interest as it can overcome technical challenges and limitations of traditional laparoscopic surgery. Concerns regarding costs have limited its use in Canada. The objective of this paper was to assess the impact of robotic surgery on short-term outcomes, minimally invasive surgery (MIS) utilization and costs in a tertiary care centre in Canada.
Consecutive patients undergoing sigmoid resection, low anterior resection or abdominoperineal resection were included. Outcomes, utilization and costs were compared between the prerobotic and postrobotic phases. Univariate and multivariate analyses were used for these comparisons.
A total of 295 patients were included in the analysis (145 in the prerobotic phase and 150 in the postrobotic phase). Characteristics, diagnosis and type of resection were similar between groups. Robotic implementation resulted in higher rates of successful MIS (i.e., attempt at MIS without conversion) (85% v. 47%, p < 0.001), shorter mean length of stay (4.7 d v. 8.4 d, p < 0.001) and similar mean operative times (3.9 h v. 3.9 h, p = 0.93). Emergency department visits were fewer in the robotic phase (24% v. 34%, p = 0.04), with no difference in readmission, anastomotic leak or unplanned reoperation. Mean hospital costs (including operative costs, index admission, emergency department visits and readmission) were similar between phases (robotic phase −$1453, 95% confidence interval [CI] −$3974 to $1068, p = 0.25). Regression analysis adjusting for age, sex, obesity, American Society of Anesthesiologists (ASA) class and procedure showed similar findings (robotic phase −$657, 95% CI −$3038 to $1724, p = 0.59).
Implementation of a robotic colorectal surgery program in a tertiary care centre in Canada resulted in improved clinical outcomes, without a significant increase in the cost of care. Although this study is from a single institution, we have demonstrated that robotic colorectal surgery is feasible and can be cost effective in the right setting.
As life expectancy increases, health care professionals are faced with the decision to continue offering screening and surveillance colonoscopies for fit elderly patients. Canadian guidelines currently advise that screening end at age 75 years. The purpose of this study was to characterize the adenoma detection rate (ADR), identify predictors of adenoma detection and study adverse outcomes of screening and surveillance colonoscopies in this population.
A retrospective cohort study was performed using a tertiary care endoscopy database. The study included patients aged 75–79 years at the time of screening or surveillance colonoscopy. Patients with inflammatory bowel disease or genetic syndromes and those undergoing diagnostic colonoscopies were excluded. Patient demographic characteristics, clinical characteristics, incidence and histology of polyps, colonoscopy quality indicators, and outcomes after colonoscopy were analyzed. Predictors of ADR were assessed by multiple logistic regression.
Of 1174 patients who underwent screening (26.0%) or surveillance (74.0%) colonoscopies, 652 (55.5%) had a polypectomy. Mean age was 76.6 (standard deviation [SD] 1.3) years, 49.6% were female, and mean body mass index was 26.5 (SD 5.4) kg/m2. Of 1434 polypectomies performed, 75.2% were adenomas. Adenomas were similarly located in the right (43.3%) and the left (46.3%) colon, with only a minority in the rectum (6.4%). Detection rates of polyps, adenomas, advanced neoplasia and colorectal cancer (CRC) were 55.5%, 45.4%, 7.2% and 1.9%, respectively. Only 2.1% had a poor preparation, 98% were complete colonoscopies and there were no postprocedure complications. On multiple logistic regression, no predictive factors of adenoma detection were identified. A total of 22 (1.9%) patients were diagnosed with CRC; of these, 20 (90.9%) patients underwent surgery, while 3 (13.6%) were treated with chemotherapy, radiation or both. There were no major postoperative complications or mortalities.
The ADR in adults aged 75–79 years undergoing screening or surveillance colonoscopies is high. Colonoscopies in this age group did not result in adverse outcomes and thus can be selectively offered.
This study aims to investigate trends in emergency department (ED) visits and hospital admissions and to determine predictors of admission from the ED for patients with a primary diagnosis of acute uncomplicated diverticulitis (AUD).
Weighted ED visits for a primary diagnosis of AUD (based on International Classification of Diseases, 9th Revision and 10th Revision, [ICD-9 and ICD-10] codes) were captured from 2006 to 2017 using the Nationwide Emergency Department Sample (NEDS). A χ2 test of independence was used to compare patient and hospital characteristics between admitted and nonadmitted patients. A multiple logistic regression was used to determine odds of admission.
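For readers unfamiliar with how the odds ratios reported below are derived, a logistic-regression coefficient β converts to OR = exp(β), with 95% CI given by exp(β ± 1.96·SE). A minimal sketch (the β and SE values are hypothetical, not the study's estimates):

```python
# How reported odds ratios and 95% CIs relate to logistic-regression
# coefficients: OR = exp(beta), CI = exp(beta +/- z * SE) with z = 1.96
# for a 95% interval. The beta and SE below are hypothetical.
import math

def odds_ratio_ci(beta, se, z=1.96):
    """Return (OR, CI lower bound, CI upper bound)."""
    return (math.exp(beta),
            math.exp(beta - z * se),
            math.exp(beta + z * se))

or_, lo, hi = odds_ratio_ci(beta=0.262, se=0.05)
print(round(or_, 2), round(lo, 2), round(hi, 2))   # 1.3 1.18 1.43
```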
In 2006, a total of 237 445 patients presented to the ED with AUD, of whom 131 758 (55.5%) were admitted. In 2017, 392 316 patients presented to the ED with AUD, of whom 99 775 (25.4%) were admitted. This represents an overall 65.2% increase in the number of cases but a 30.1% decrease in the proportion admitted. In addition to year of presentation (odds ratio [OR] 0.85, 95% confidence interval [CI] 0.85–0.85), age (18–44 yr: ref; 45–64 yr: OR 0.90, 95% CI 0.89–0.92; 65–84 yr: OR 1.10, 95% CI 1.07–1.12; > 85 yr: OR 2.36, 95% CI 2.23–2.44), insurance status (uninsured or self-insured: ref; private insurance: OR 1.21, 95% CI 1.19–1.24; Medicare: OR 1.26, 95% CI 1.23–1.30; Medicaid: OR 1.15, 95% CI 1.11–1.18; other: OR 1.45, 95% CI 1.40–1.50), number of comorbidities (0: ref; 1: OR 1.80, 95% CI 1.75–1.84; 2: OR 2.93, 95% CI 2.86–3.00; ≥ 3: OR 10.73, 95% CI 10.52–10.95), hospital location (metropolitan: OR 1.31, 95% CI 1.29–1.33) and hospital region (west: ref; northeast: OR 1.89, 95% CI 1.86–1.93; midwest: OR 1.27, 95% CI 1.25–1.29; south: OR 1.36, 95% CI 1.59–1.39) were strongly associated with admission.
While the number of cases of AUD presenting to the ED continues to rise, the proportion of patients admitted is declining. In addition to year of presentation, older age, insurance status and hospital characteristics are strongly associated with admission.
The aim of this study is to assess the association between obesity and complicated acute diverticulitis.
We obtained data from the Nationwide Inpatient Sample database (2005–2016) for all admissions of adult patients with an acute episode of diverticulitis. Patient and disease factors as well as comorbidities were detailed using International Classification of Diseases, 9th Revision and 10th Revision (ICD-9 and ICD-10) codes. Exclusion criteria were elective admissions for surgery for diverticulitis, malignant colon neoplasms and inflammatory bowel disease. Obesity (ICD-9 278.00, ICD-10 E66.9) was the exposure of interest, while complicated diverticulitis (including obstruction, perforation, peritonitis, abscess or percutaneous drain placement without a surgical procedure) was the primary outcome. Secondary outcomes included length of stay (LOS) and cost of hospital admission. Crude and adjusted logistic and linear regressions were used to study the association of obesity and the outcomes of interest.
Of 585 401 admissions for acute diverticulitis, 134 314 (22.6%) were for complicated diverticulitis. Compared with patients with uncomplicated diverticulitis, patients who had complicated diverticulitis were more likely to be obese (10.9% v. 9.9%, p < 0.001), to be male (62.9% v. 57.9%, p < 0.001), to be smokers (16.8% v. 13.5%, p < 0.001), to have hypertension (51.5% v. 42.5%, p < 0.001) and to have depression (12.4% v. 8.6%, p < 0.001). On crude analysis, obese patients admitted had longer LOS (5.2 [standard deviation (SD) 5.0] v. 4.9 [SD 5.1] d, p < 0.001) and higher costs associated with their admission ($35 330 [SD $50 066.87] v. $28 965.34 [SD $42 314.80]) compared with nonobese patients. After accounting for age, sex, smoking, immunosuppression and diverse comorbidities, obesity remained independently associated with admission for complicated diverticulitis (odds ratio [OR] 1.21, 95% confidence interval [CI] 1.18–1.23). Obesity was independently associated with a small increase in LOS of 0.35 (95% CI 0.31–0.39) days and increased cost of $5602.77 (95% CI $5251–$5953.59) for admission for acute diverticulitis.
Obesity is significantly associated with a complicated episode of acute diverticulitis and increased cost of hospital admission.
Total mesorectal excision is known to be the treatment of choice in low rectal cancer. Colorectal anastomoses are associated with high anastomotic leak rates; the literature reports an incidence as high as 20% for very low anastomoses. Many factors can explain these anastomotic leaks; among the major ones are technical factors such as inadequate perfusion of the proximal bowel. Therefore, new technologies are needed, especially to assess this perfusion. Indocyanine green fluorescence angiography (FA) has already shown great promise.
The objective of this project is to evaluate whether the use of indocyanine green modifies the intraoperative management of anastomosis for cancers located within 15 cm of the anal margin. The secondary objective is to estimate the incidence of postoperative anastomotic leaks when this technology is used.
A prospective cohort study with a before-and-after design was conducted from August 2019 to September 2020 at the Centre hospitalier universitaire de Québec. The 2 groups consisted of patients who had surgery before and after the implementation of the Pinpoint technology, respectively. A total of 113 patients in each group were included. The mean level of the anastomosis was similar in each group (4.10 [standard deviation (SD) 2.45] v. 4.48 [SD 2.52]). The use of indocyanine green had an impact on intraoperative management; surgeons changed the site of specimen transection in 10.6% of cases (12/113), without anastomotic leaks in these 12 patients. The incidence of anastomotic leak at 30 days showed a relative decrease of 53.3% with the use of FA (15/113 [13.3%] v. 7/113 [6.2%], p = 0.11).
In patients with very low rectal cancers, FA seems to be effective in reducing early postoperative anastomotic leaks. Its implementation would therefore help surgeons assess the quality of bowel perfusion.
Following surgery for fibrostenotic ileocolic Crohn disease, patient age has been identified as a risk factor for endoscopic and clinical disease recurrence. However, limited information exists on the impact of age on surgical recurrence. The aim of our study was to determine whether increasing age at primary resection was associated with a decreased risk of surgical recurrence in patients with fibrostenotic ileocolic Crohn disease.
A prospectively maintained inflammatory bowel disease database was used to identify 262 patients who underwent resection for ileocolic Crohn disease between 2000 and 2016. Supplemental patient data were gathered by retrospective chart review. Overall and age-stratified surgical recurrence rates were estimated with the Kaplan–Meier method.
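The Kaplan–Meier estimate behind the age-stratified recurrence curves is the product-limit formula S(t) = Π(1 − dᵢ/nᵢ) over event times tᵢ ≤ t, where dᵢ is the number of recurrences and nᵢ the number still at risk at tᵢ. A minimal sketch with hypothetical follow-up times (event = 1 for surgical recurrence, 0 for censoring):

```python
# Minimal Kaplan-Meier (product-limit) estimator, as used for the
# age-stratified surgical recurrence curves. The times and event
# flags below are hypothetical illustrations, not study data.

def kaplan_meier(times, events):
    """Return [(t, S(t))] at each distinct event time."""
    data = sorted(zip(times, events))
    surv = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        deaths = sum(1 for tt, e in data if tt == t and e == 1)
        n_at_risk = sum(1 for tt, _ in data if tt >= t)
        if deaths:
            surv *= 1 - deaths / n_at_risk
            curve.append((t, surv))
        i += sum(1 for tt, _ in data if tt == t)  # skip past all ties at t
    return curve

# 5 hypothetical patients: recurrences at t = 2, 3, 5; censored at 3 and 8
print(kaplan_meier([2, 3, 3, 5, 8], [1, 0, 1, 1, 0]))
```

The curve drops only at event times; censored patients simply leave the risk set, which is why the stratified curves could be compared with the log-rank test.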
Of the 262 patients treated with ileocolic resection, 75 had primary surgery for fibrostenotic ileocolic disease: 69% female, 24% smokers and 13% Montreal A1 (diagnosed at age < 17 yr). Median time from diagnosis to primary surgery was 2.9 years (standard deviation [SD] 0.7 yr). At the time of primary surgery, 30 patients (40%) were younger than 30 years of age, 32 (43%) were 30–49 years of age and 13 (17%) were 50 years of age or older. Following primary surgery, prophylactic biologic therapy was used in 83%, 91% and 77% of patients, respectively. Median follow-up time after surgery was 13.8 years (SD 0.5 yr). Overall surgical recurrence rates at 5 and 10 years were 12% and 26%, respectively. Stratified by age, surgical recurrence rates at 10 years were 14%, 45% and 8% in patients younger than 30 years of age, 30–49 years of age and 50 years of age or older (p = 0.007, log-rank test).
Following primary resection for fibrostenotic ileocolic Crohn disease, the overall surgical recurrence rate was 26% at 10 years. When stratified by age, patients 30–49 years of age had a significantly higher risk of surgical recurrence than either younger or older patients. These results may indicate the need for an age-specific postoperative management strategy in patients with fibrostenotic ileocolic Crohn disease.
This study aims to explore patients’ preferences for receiving information about low anterior resection syndrome (LARS) at the time of their cancer diagnosis.
We conducted a qualitative study on rectal cancer survivors’ preferences regarding the timing, format and source of the LARS discussion. On the basis of predefined criteria, we invited 11 rectal cancer survivors who had undergone restorative proctectomy (RP) and were living with minor or major LARS to participate. Semistructured telephone interviews with open-ended questions were conducted by a trained researcher, transcribed verbatim then coded by 2 researchers independently using MAXQDA software. Grounded theory was applied to analyze the interviews.
Participants were a median of 3.8 (interquartile range [IQR] 1.1–9.3) years after RP, 7 (64%) were male, median age was 70 (IQR 48–84) years and 7 (64%) had major LARS. Participants had varying opinions regarding the timing and approach for receiving information about LARS. Some preferred receiving information at the time of diagnosis because they wanted to (1) be aware of complications, (2) participate in decision-making or (3) feel more comfortable during treatment. Others were not keen to know about LARS at the time of diagnosis because they (1) trusted their physician would make the right decision, (2) did not think knowing more would change the outcome, (3) were preoccupied with other concerns or (4) believed that knowing more would lead to increased fear. In terms of the format and source of LARS information, some preferred receiving information in person or via telephone while others preferred documentation to be provided as a hard copy or online.
Patients who underwent RP for rectal cancer had varying preferences in terms of the timing and format of the LARS discussion. Our findings emphasize the need to tailor the discussion according to patient preferences in terms of when and how it should be held.
Racial disparities in breast cancer are well established. However, there is a paucity of contemporary studies assessing the interaction of complex patient, socioeconomic and community factors on breast cancer care. The objective of this study was to determine the influence of race, ethnicity and socioeconomic status (SES) on disease presentation, access to care and survival in breast cancer.
A retrospective analysis was performed of non-Hispanic black (NHB), non-Hispanic white (NHW) and Hispanic patients with nonmetastatic breast cancer in the Surveillance, Epidemiology, and End Results (SEER) cancer registry between 2007 and 2016. Primary exposures were race or ethnicity and SES. Outcome measures included disease presentation, access to surgery and survival. Multivariable binary logistic regression and Cox regression analyses were conducted.
A total of 382 975 patients were identified: 289 074 (75.5%) NHW, 45 821 (12.0%) NHB and 48 080 (12.6%) Hispanic. On multivariate analysis, NHB (odds ratio [OR] 1.18, 95% confidence interval [CI] 1.15–1.20) and Hispanic (OR 1.20, 95% CI 1.17–1.22) patients were more likely to present with higher stage (stage II or III) disease than NHW patients. There was increased likelihood of not undergoing primary resection and breast reconstruction for NHB (OR 1.56, 95% CI 1.49–1.65, and OR 1.07, 95% CI 1.03–1.11) and Hispanic patients (OR 1.41, 95% CI 1.34–1.48, and OR 1.60, 95% CI 1.54–1.66, respectively) compared with NHW patients. NHB patients had increased hazard for all-cause mortality (hazard ratio [HR] 1.13, 95% CI 1.10–1.16) and breast cancer–specific mortality (HR 1.20, 95% CI 1.16–1.24). All-cause mortality increased across SES categories (lower SES: HR 1.33, 95% CI 1.30–1.37; middle SES: HR 1.20, 95% CI 1.17–1.23).
This large population-based analysis of patients with breast cancer confirms worse disease presentation, access to surgical therapy and survival across racial, ethnic and socioeconomic factors. These disparities were compounded for patients from minority populations across worsening SES and insurance coverage.
Clinical trials have shown that palliative chemotherapy (PC) improves survival in patients with incurable esophageal and gastric cancer; however, outcomes achieved in routine practice are unknown. We describe treatment patterns and outcomes among patients treated in the general population of Ontario, Canada.
The Ontario Cancer Registry was used to identify patients with esophageal or gastric cancer from 2007 to 2016 and data were linked to other health administrative databases. Patients who received curative-intent surgery or radiotherapy were excluded. Factors associated with receipt of PC were determined using logistic regression. First-line PC regimens were categorized and trends over time were reported. Survival was determined using the Kaplan–Meier method.
The cohort included 9848 patients; 22% (2207/9848) received PC. Patients receiving PC were younger (mean age 63 v. 74 yr, p < 0.0001) and more likely to be male (71% v. 65%, p < 0.001). Thirty-seven percent of patients who did not receive PC saw a medical oncologist in consultation. Over the study period, utilization of PC increased (from 11% in 2007 to 19% in 2016, p < 0.001) while the proportion of patients receiving triplet regimens decreased (65% in 2007 to 56% in 2016, p = 0.04). In the PC group, median overall and cancer-specific survival from treatment initiation was 7.2 months.
One-fifth of patients with incurable esophageal and gastric cancer in the general population receive PC. Median survival of patients treated in routine practice is inferior to that in clinical trials. Only one-third of patients not treated with PC had consultation with a medical oncologist. Further work is necessary to understand the low utilization of PC and medical oncology consultation in this patient population.
Frailty is a state of decreased physiologic reserve, characterized by a loss of resiliency in the face of acute stress. We evaluated frailty as a predictor of postoperative complications following pancreaticoduodenectomy using a validated modified frailty index (mFI).
A retrospective cohort study of consecutive patients undergoing pancreaticoduodenectomy (2011–2020) at a single institution was conducted. An mFI consisting of 11 variables adapted for the National Surgical Quality Improvement Program database from the Canadian Study of Health and Aging frailty index was used. Patients were stratified into 2 groups: high mFI (≥ 0.27) and low mFI (< 0.27). The effect of mFI on postoperative complications (Clavien–Dindo classification) and mortality was evaluated using multiple logistic regression and expressed as odds ratio (OR) and 95% confidence interval (CI).
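Conceptually, the mFI is simply the fraction of the 11 frailty variables present, so the 0.27 cut-off corresponds to 3 or more items. A minimal sketch of the scoring and group assignment (item names are illustrative of the NSQIP-adapted index, not an exact reproduction of the study's variable list):

```python
# Sketch of an 11-variable modified frailty index (mFI): the score is
# the number of frailty items present divided by 11, and patients with
# mFI >= 0.27 (i.e., 3 or more items) fall into the high-mFI group.
# Item names are illustrative, not the study's exact definitions.

MFI_ITEMS = [
    "diabetes", "hypertension_on_meds", "chf", "mi_history",
    "cardiac_problem", "copd_or_pneumonia", "pvd", "tia_or_cva",
    "cva_with_deficit", "impaired_sensorium",
    "dependent_functional_status",
]

def mfi_score(patient):
    """patient: dict mapping item name -> bool (missing items count as absent)."""
    present = sum(bool(patient.get(item)) for item in MFI_ITEMS)
    return present / len(MFI_ITEMS)

def mfi_group(patient, cutoff=0.27):
    return "high" if mfi_score(patient) >= cutoff else "low"

p = {"diabetes": True, "hypertension_on_meds": True, "copd_or_pneumonia": True}
print(round(mfi_score(p), 3), mfi_group(p))   # 3/11 = 0.273 -> high
```

In the study, this binary group assignment (and the continuous score) then entered the multiple logistic regression as a predictor of complications.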
Among 554 pancreaticoduodenectomies, there were 64 (12%) patients with a high mFI. Patients with a low mFI were significantly younger (67 v. 72 yr, p < 0.001). The high and low mFI groups had similar baseline and operative characteristics, including the proportion of pancreatic adenocarcinoma (45% v. 45%, p = 0.98), hard pancreatic texture (39% v. 32%, p = 0.51), operating room time (370 v. 368 min, p = 0.63) and drain placement (72% v. 75%, p = 0.64), respectively. The high mFI group had a significantly longer median length of hospital stay (11 v. 8 d, p = 0.016), a higher rate of intensive care unit admission (73% v. 43%, p < 0.001) and higher 90-day mortality (11% v. 4.1%, p = 0.017). By multivariate analysis, mFI score was an independent predictor for the development of any type of postoperative complications (OR 1.44, 95% CI 1.02–2.10) and major complications (OR 1.44, 95% CI 1.05–1.98).
The mFI predicts postoperative outcomes following pancreaticoduodenectomy and can be used as a risk stratification tool for patients being considered for surgery.
Synoptic operative electronic reporting is used as an operative template in multiple provinces for cancer surgery. From the data collected in these templates, surgeons receive feedback including annual total number of cancer cases reported and quarterly reports comparing their practice with provincial aggregates. Currently, data are reported to surgeons on 38 quality indicators of practice previously identified by Canadian Partnership Against Cancer (CPAC) national experts for 5 tumour sites (breast, thyroid, colon, rectum and ovary). Tumour site surgical leads review practice variation to identify areas of intervention for improvement. Interventions include review of data, update of synoptic operative templates, directed communication about variation, educational updates, provision of surgeon-specific data and tumour site meetings.
In 2019, 90 surgeons prospectively collected data via 4352 synoptic operative reports for the 5 tumour sites. Surgical tumour site leads reviewed data on 3–17 quality indicators. Areas of significant variation were identified by tumour site for quality improvement. For example, in breast cancer, the percentage of procedures where more than 1 sentinel lymph node was assessed was 62% (657/1059 cases) before and 67.7% (723/1068 cases) after intervention. In thyroid cancer, preoperative lymph node assessment by ultrasound was performed in 66.7% (65/102) of procedures before and 89.6% (90/125) after intervention. There was a 5.7% improvement in cases where more than 1 sentinel lymph node was assessed for breast cancer patients and a 23% increase in preoperative ultrasound lymph node assessment in patients with suspicion for thyroid malignancy.
Synoptic operative reporting identifies areas of variation between individual surgeons’ practices. Specific interventions can influence practice toward guideline and national expert recommendations.
Basal-like breast cancer (BLBC) is an aggressive breast cancer subtype with poor prognosis and no known targeted therapies. Although cancer research has traditionally focused on cancer cells in isolation, nonmalignant cells in the tumour microenvironment (TME) also contribute to the growth and metastasis of cancer. A strong correlation exists between tumour-associated macrophage (TAM) infiltration and poor prognosis in BLBC. We hypothesize that the Hedgehog signalling pathway is responsible for not only proliferation and progression of BLBC but also shaping the TME.
In a mouse tumour-derived cell line encompassing the loss of BRCA1 and p53 in the presence of k14Cre (termed KBP cells), Hedgehog intracellular signalling components GLI1 and GLI2 were knocked down or inactivated using siRNA, doxycycline-inducible shRNA or the GLI1/GLI2 inhibitor GANT61. Proliferation and cancer cell stemness were analyzed in vitro. In vivo, KBP cells were injected into immunocompetent mice and tumour growth, tumour weight and immune infiltrates were examined in the presence or absence of GANT61.
GANT61 decreased the proliferation of KBP cells in vitro. Knockdown of GLI1 decreased the number of cancer stem cells (CSCs) using both siRNA and inducible shRNAs. As well, inhibition of GLI1 and GLI2 using GANT61 substantially decreased CSCs. In vivo, GANT61 treatment of KBP allografts resulted in decreased tumour growth, tumour volume and TAM infiltration. In addition, GANT61 treatment resulted in increased CD4+, CD8+ and activated CD8+ T-cells.
Hedgehog signalling components GLI1 and GLI2 are important for BLBC cell proliferation, with GLI1 also important in CSC promotion. Inhibition of GLI1 and GLI2 resulted in decreased tumour growth and converted an immunologically “cold” tumour into a “hot” tumour by decreasing TAMs and increasing CD4+ and CD8+ T-cells. Therefore, therapies that target GLI1 and GLI2 or downstream signalling could be used as novel treatments for BLBC.
Oncoplastic surgery aims to maintain quality of life by pre-empting and mitigating breast asymmetry while not compromising oncologic effectiveness. Although it is growing in popularity in North America, many patients still do not have access to these techniques. This study demonstrates the implementation of an effective oncoplastic surgical practice in a community hospital in Canada and shows low rates of perioperative complications as well as high levels of patient-reported outcome measures.
We conducted a retrospective chart review of patients diagnosed with breast cancer treated with level 1 and level 2 oncoplastic techniques by a single breast surgeon. Patient demographic characteristics, tumour characteristics, procedure types and clinical outcomes were collected. Patient satisfaction was assessed with the Breast-Q questionnaire administered preoperatively as well as 3 months and 9 months postoperatively.
Oncoplastic breast conservation surgery was performed in 340 patients from 2017 to 2019. The average size of the breast lesion was 1.8 cm, with 96 patients having lesions 2–5 cm in size and 10 patients having tumours greater than 5 cm in size. Thirty (8.8%) patients experienced a complication requiring intervention. Margin revisions were required in 21.8% of patients, which decreased to 18% after the implementation of the new margin consensus guidelines. The completion mastectomy rate was 4.7%. Breast-Q scores increased across breast satisfaction, process of care, psychosocial, physical and sexual satisfaction domains postoperatively.
This study demonstrates the feasibility of an oncoplastic breast surgery practice in a busy community hospital in Canada. This adds to the growing body of North American data on the clinical and oncologic safety of these techniques and introduces the idea of collecting patient-reported outcome measures in a Canadian population. We hope that this will serve to aid in the recruitment of oncoplastic-trained surgeons to both teaching and community hospitals and enable these techniques to become the standard of care in North America.
Atypical ductal hyperplasia (ADH) is a benign epithelial proliferative lesion with histologic features resembling those seen in low-grade ductal carcinoma in situ (DCIS). Surgical excision of the biopsy site is the standard management approach. The objective of this study was to determine the upgrade rate from ADH on stereotactic breast biopsies to DCIS or invasive carcinoma (IC) in our institution. We also sought to identify clinical, pathologic and radiologic predictive factors associated with risk of upgrade.
Clinical charts, mammograms and pathology reports were reviewed for all women with a stereotactic breast biopsy showing ADH and subsequent surgery at our institution between 2008 and 2018. When available, mammograms were re-reviewed by a radiologist for this study.
A total of 295 biopsies were analyzed in 290 patients. The mean age was 56 years. The upgrade rate was 10.5%: 7.5% DCIS and 3.1% IC. Mammograms were reviewed by a radiologist for 161 patients from 2013 to 2018. In this subset of patients, the rate of upgrade was 8.7% (4.35% DCIS and 4.35% IC). A statistically significant difference in the largest size of the microcalcification clusters on mammogram was observed between the upgraded and the nonupgraded subgroups (14.2 mm v. 8.9 mm, p = 0.03).
The evaluation of the largest size of the microcalcification clusters on mammogram as a cut-off feature could be considered to choose between an observational versus a surgical approach. This large series provides contemporary data to assist informed decision-making regarding the treatment of our patients.
Oncoplastic breast reduction (OBR) allows breast conservation surgery (BCS) to be combined with breast reduction for select patients. The objective of this study was to measure time to first adjuvant treatment (AT) in patients who undergo OBR and to evaluate whether initiation dates conformed to conventional post-BCS treatment windows for radiation (RT), chemotherapy and endocrine management.
Institutional and university ethics boards approved this retrospective review, which included all patients receiving OBR from April 2009 to April 2020. Consecutive patients were identified from operative slates. Data were extracted from a prospectively maintained database and surgeons’ electronic medical records. The relative start date (RST) of AT was calculated as the time elapsed between the OBR date and the earliest start date or the first day after resolution of delays due to medical reasons or patient preference.
Of 5504 new breast cancer cases, 81 underwent OBR. Patients who underwent OBR had unilateral (n = 79) or bilateral (n = 1) breast cancer or malignant phyllodes tumour (n = 1); had bilateral (n = 73) or unilateral (n = 8) OBR; and had OBR as a first surgery (n = 69) or during margin re-excision after BCS (n = 12). Additional surgery after OBR was required by 7 patients for margin revision (n = 6) or sentinel node biopsy (n = 1), while 7 had completion mastectomy. No patients required reoperation for débridement or hematoma evacuation. In total, 72 (88.9%) patients received AT: 36 started with radiation, 19 with chemotherapy and 17 with endocrine management. RST averaged 9.4 weeks for radiation, 7.0 weeks for chemotherapy, 8.0 weeks for endocrine management and 8.4 weeks for any AT. Among patients receiving AT, 70 (97.2%) initiated AT by week 16, and 100% of patients who received chemotherapy initiated AT by week 12.
The average time to first adjuvant treatment conformed to local recommendations for chemotherapy (aim for 6–8 wk after ablative surgery), radiation (optimally within 10 wk of surgery but no detriment to delay up to 20 wk) and endocrine therapy (no absolute upper limit to the window for initiation).
Young women with breast cancer (YWBC) have unique survivorship needs because of life stage, and interventions to address these are limited. We aimed to understand the unmet needs of YWBC to develop a tailored self-management tool to improve breast cancer experience and psychosocial–sexual outcomes for YWBC in the long term.
We conducted semistructured survivor and clinical interviews. Purposive maximum variation sampling was used. Inclusion criteria were women 40 years of age or younger at diagnosis, who had stage 0–IV disease, with a minimum of 1 year after diagnosis and active treatment complete. Interviews were recorded and transcribed, using collaborative group immersion to analyze data and identify emerging themes.
Thirty-four participants were interviewed from 10 centres across 7 Canadian provinces. Participant-reported demographic characteristics were as follows: 18% were members of a “visible minority,” 9% were “born outside Canada,” 7% were “Indigenous” and 54% reported a household income at or below the Canadian average. Thirty-six percent received neoadjuvant chemotherapy, 47% underwent mastectomy with reconstruction and 41% underwent contralateral prophylactic mastectomy. The YWBC who were interviewed reported psychological, self-identity, fertility and sexual health needs, which have a substantial impact on intimacy in relationships. Partner fear and anxiety were described, without professional or social support measures. Education on psychology and sexual health, along with peer mentorship, was suggested to improve self-support.
We have identified unique psychosocial–sexual needs among this young cohort of women. We will target these through a novel and pragmatic self-management tool, to be used across Canada, aiming to improve YWBC’s experience and long-term morbidity.
Variability exists in postoperative pain management for patients undergoing breast surgery. We describe opioid prescribing practices, opioid consumption and pain experience for patients undergoing outpatient breast surgery at a single institution.
Patients 18 years of age and older undergoing outpatient breast surgery without reconstruction were prospectively enrolled from September 2020 to February 2021. Patients were excluded if they were admitted to hospital postoperatively. A telephone survey was conducted between postoperative days 5 and 7. Prescribed and consumed oral morphine equivalents (OME), pain levels and satisfaction with pain management were compared according to surgery type (excisional biopsy [E], partial mastectomy with or without axillary intervention [PM], simple mastectomy with or without axillary intervention [SM]) using χ2, analysis of variance and t tests. We considered p values less than 0.05 to be statistically significant.
Fifty-three patients were included, 43 (81%) of whom had malignant disease. Patients underwent E (n = 3, 6%), PM (n = 44, 83%) and SM (n = 6, 11%); 2 patients (4%) underwent bilateral procedures. Similar proportions of patients reported tolerable pain levels (E 67%, PM 66%, SM 67%) and satisfaction with pain management (E 100%, PM 93%, SM 100%). Thirty-eight patients (72%) received a prescription for an opioid (PM n = 33, 87%; SM n = 5, 13%); of these, 30 patients (79%) filled the prescription. Mean OME prescribed was not statistically different on the basis of surgery type (PM 30.6 [standard deviation (SD) 15.2]; SM 32.1 [SD 21.7]; p = 0.86). Of those prescribed opioids, 60% in each group consumed them, and mean OME consumed did not differ significantly (PM 8.2 [SD 15.7]; SM 8.8 [SD 8.5]; p = 0.94). Although most patients (21, 72%) knew to dispose of unused pills at a pharmacy or doctor’s office, 8 (28%) did not know or described an unaccepted method of disposal.
Most patients undergoing outpatient breast surgery report tolerable pain levels and satisfaction with pain management. The amount of OME consumed by patients is smaller than what is prescribed, suggesting that it may be appropriate to prescribe fewer OMEs.
Literature on rectal anastomosis and diverting ileostomy in patients treated with hyperthermic intraperitoneal chemotherapy (HIPEC) is scarce. The objectives of this study were to evaluate the safety of rectal anastomoses, with and without fecal diversion, during cytoreductive surgery (CRS) and HIPEC, and to assess the morbidity of fecal diversion when performed.
From January 2012 to January 2020, patients with peritoneal metastases who underwent CRS and HIPEC that required a rectal anastomosis were included in this single-hospital retrospective chart review.
Eighty-four patients were included, of whom 29 had a diverting loop ileostomy. The rectal anastomotic leak (AL) rate for the series was 8.3%. Factors associated with AL were male sex (p = 0.031) and increased body mass index (p < 0.001). Diverting loop ileostomy was associated with a significant decrease in clinically significant rectal AL (0% v. 12.7%, p = 0.045). However, the 90-day readmission rate was higher in this group (37.9% v. 10.9%, p = 0.003). Stoma reversal surgery was performed for all patients, but 3 patients experienced AL (10.7%).
This study suggests that creation of a diverting loop ileostomy may be an effective method to prevent symptomatic rectal AL following CRS with HIPEC. However, it is also associated with an increased readmission rate and increased risk of AL following reversal surgery.
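The diversion effect on leak rate can be checked with counts back-calculated from the reported percentages; a minimal sketch (the counts, and the one-sided test, are our assumptions — the abstract does not state which test the authors used):

```python
from scipy.stats import fisher_exact

# Counts back-calculated from the reported rates -- an assumption:
# 0/29 clinically significant rectal ALs with diversion,
# ~7/55 (12.7%) without diversion.
table = [[0, 29],   # diverted:     AL, no AL
         [7, 48]]   # not diverted: AL, no AL

# One-sided Fisher exact test: are leaks less frequent with diversion?
_, p_one_sided = fisher_exact(table, alternative="less")
```

Under these assumed counts the one-sided p is about 0.045, consistent with the value reported in the abstract.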
In March 2020, guidelines recommended that breast cancer (BC) centres suspend nonemergent surgeries because of the COVID-19 pandemic, including delaying surgeries for estrogen-receptor positive (ER+) BCs with neoadjuvant endocrine therapy (NET). The objective of this study was to evaluate the oncologic outcomes of patients with BC who were affected by these guidelines.
All female patients with stage I–II BC receiving NET during the COVID-19 pandemic at 2 referral-based academic BC centres were prospectively identified. Clinical and pathologic data were collected. Patients were matched to a historical cohort of patients with stage I–II ER+ BC treated with upfront surgery within 35 days between 2010 and 2013. Primary outcomes were pathologic upstaging compared with clinical staging before NET initiation.
Thirty patients who received NET and 200 randomly selected historical patients were matched on the basis of preoperative tumour grade and pathologic features. After matching, 29 patients who received NET and 53 patients who received upfront surgery remained. Median age was 65 years in the NET group and 66 years in the upfront surgery group (p = 0.33). Most patients (79.3% and 81.1%) had invasive ductal carcinoma with a median clinical tumour size of 0.9 cm versus 1.7 cm (p = 0.005). Median time to surgery was 73 days in the NET group and 27 days in the upfront surgery group (p < 0.001). Twenty-three patients who received NET (82.2%) had the same or lower pT stage compared with 36 (67.9%) patients who received upfront surgery (p = 0.31). Only 3 (10.3%) patients who received NET had an increase in pN stage compared with 16 (30.2%) control patients (p = 0.06). Overall, 3 (10.3%) patients who received NET had a stage increase on final pathology versus 15 (28.3%) in the control group (p = 0.16).
Despite experiencing delays to surgery that were over 2.5 times longer, patients with ER+ BC receiving NET did not experience significantly different pathologic upstaging during the COVID-19 pandemic. These findings support the use of NET in stage I–II ER+ BC if substantial delays in surgery are projected.
Two-thirds of patients who undergo breast cancer surgery receive opioids and 2%–4% will continue to use them after 3 months. The November 2020 American Society of Breast Surgeons consensus statement advised the use of routine co-analgesia and the reduction of opioid prescriptions in breast oncologic surgery. To inform future institutional guidelines, the objective of this study was to determine baseline opioid prescribing patterns in a single high-volume specialist-referral breast cancer centre. We hypothesized that opioid prescribing practices varied between procedures and operating surgeons.
We performed a retrospective analysis of all women undergoing breast cancer surgery (any stage) between September and December 2019. Opioid prescriptions at discharge were converted to milligram equivalents of morphine (MEM). Patients of a surgeon who performed only 2 surgeries were excluded, as was a patient who received over 2000 MEM. The primary outcome of interest was MEM at discharge. Multiple linear regression was used to identify independent risk factors for increased MEM.
A total of 137 patients met the inclusion criteria, of whom 67.1% underwent partial mastectomy. Median age was 62 years (interquartile range [IQR] 51–71). All but 1 patient (99.3%) received an opioid prescription at discharge, with a median MEM of 112.5 (IQR 75–150). A total of 81.8% were prescribed co-analgesia. The prescriber was a trainee in 42.2% of cases. Five patients (3.7%) required an opioid renewal. On multivariate analysis, patients undergoing total mastectomy or axillary lymph node dissection or both were at increased risk of receiving more MEM (β = 22, p = 0.04, and β = 32, p = 0.03, respectively). However, the factor with the greatest association with MEM was the operating surgeon (β = 99, p < 0.001). A resident prescriber was not associated with an increase in MEM prescribed.
In a tertiary care centre, the operating surgeon has the greatest influence on opioid prescribing practices. These findings support the need for a standardized approach to optimize prescribing and reduce opioid-related harms after oncologic breast surgery.
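The conversion to morphine milligram equivalents described in the methods can be sketched as below; the equianalgesic factors shown are commonly cited oral conversion values and are an assumption, since the abstract does not list the ones actually used:

```python
# Commonly cited oral morphine-equivalent conversion factors --
# illustrative assumptions, not necessarily those used in the study.
MEM_FACTORS = {
    "morphine": 1.0,
    "oxycodone": 1.5,
    "hydromorphone": 5.0,
    "codeine": 0.15,
    "tramadol": 0.1,
}

def to_mem(drug: str, dose_mg: float, n_tablets: int) -> float:
    """Total morphine milligram equivalents for a discharge prescription."""
    return MEM_FACTORS[drug] * dose_mg * n_tablets

# e.g. 15 tablets of oxycodone 5 mg -> 112.5 MEM,
# matching the reported median discharge prescription
total = to_mem("oxycodone", 5, 15)
```

Summing such totals per patient is what allows prescriptions of different drugs to be compared on a single scale.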
Oncoplastic breast reduction (OBR) combines oncologic and plastic surgery principles in breast conservation. The objective of this study was to evaluate complications and patient-reported outcomes after OBR.
A retrospective review including all patients who received OBR from April 2009 to April 2020 was conducted with institutional and university ethics boards’ approval. Data were extracted from a prospectively maintained database and surgical electronic medical records. Risk factors for any complication were evaluated by univariate logistic regression analysis. Significance level was set at p values less than 0.05. Postoperative patient well-being and satisfaction were evaluated with the validated BREAST-Q 2.0 questionnaire. Rasch-transformed scores from 0 (worst) to 100 (best) were calculated from each scale following validated BREAST-Q conversion tables.
Eighty-one patients had OBR. Postsurgical complications experienced by 19 patients included localized hematoma (n = 3), seroma (n = 4), partial nipple necrosis (n = 2), limited wound dehiscence (n = 6), delayed healing (n = 3) and fat necrosis (n = 1). No patients had surgical complications requiring hospital admission, débridement reoperation or hematoma evacuation. The BREAST-Q questionnaire was completed after OBR by 37 patients, who responded with favourable scores for psychosocial (84.7), sexual (69.0) and physical (80.6) well-being, satisfaction with their breasts (83.6), radiation oncologists (88.8), surgeons (94.3), medical staff (94.2) and office staff (95.8).
All breast complications were managed with local wound care and none required repeat procedures. Patients who underwent OBR reported a high degree of satisfaction with their physicians and medical and office staff. Increasing American Society of Anesthesiologists (ASA) score and ipsilateral and contralateral specimen weight were significantly correlated with increased odds for having any complication.
Given current concern about overtreatment of breast cancer, surgical quality indicators (QIs), including the breast conserving surgery (BCS) rate, have been published by European and American breast cancer societies. Our purpose was to calculate QIs for BCS rate, determine compliance with American and European standards and assess reasons for mastectomy to identify opportunities to de-escalate surgery.
Patients having surgery for unilateral primary cancer from 2013 to 2017 were identified using our institution’s database. QIs for BCS rates were calculated. The reasons for mastectomy were prospectively collected and verified by chart review. Where appropriate, Poisson regression models were constructed for statistical analyses using R.
A total of 3076 patients underwent breast cancer surgery and 2311 met the inclusion criteria. Our BCS rate for invasive cancer less than 3 cm was 77%; for invasive cancer less than 2 cm it was 84%; and for ductal carcinoma in situ (DCIS) less than 2 cm it was 85%. The rate of patients having a single operation was 89% for invasive cancer and 80% for DCIS. Despite early recognition of the low BCS rates, there was no statistically significant change in BCS rates over the 5-year period, but there was a reduction in contralateral prophylactic mastectomies (CPM) from 32% in 2013 to 17% in 2017 (p < 0.001). Among patients having an initial mastectomy, 72% were medically necessary and 28% were by patient choice. Trend analysis of tumour size and medical need for mastectomy indicated that 80% of patients at our centre would be eligible for BCS with a tumour cut-off of 2.5 cm.
Our institution met American but not European QI standards for BCS rates. We identified a high number of medically necessary mastectomies, potentially indicating a difference in patient demographic characteristics compared with Europe. Our results support the understanding that BCS rates are influenced by multiple factors and are challenging to compare across jurisdictions. CPM rates may offer a more actionable opportunity to de-escalate surgery for breast cancer.
The COVID-19 pandemic has seen major shifts in the delivery of health care across the world, including the rapid adoption of telemedicine. Here we present a survey of patients’ experience with telemedicine in the treatment of breast cancer.
A questionnaire aimed at assessing patient satisfaction with telemedicine was distributed to all patients who underwent surgery at our centre for breast cancer or benign or high-risk lesions with surgery follow-up dates between Oct. 13 and Dec. 31, 2020. Surveys were conducted via phone or at in-person follow-up appointments.
A total of 123 of 172 (72%) eligible patients completed the survey; 85% of these patients enjoyed their telemedicine consultation, 93% found that there was enough time for dialogue and to ask questions, 66% would choose to have a telemedicine consultation again, 79% would recommend telemedicine at our centre to a friend or family member and 92% found Zoom easy to use. When asked if they preferred a telemedicine initial consultation over an in-person one, 28% of patients agreed. When patients were analyzed according to their home address, those more than 10 km away from our centre preferred telemedicine over in-person appointments (37%) more often than those who lived less than 10 km away (23%).
Patients report a high level of satisfaction with telemedicine. It may be worthwhile to continue telemedicine beyond the pandemic era, owing to its convenience, efficiency and low cost while keeping patients, physicians and office staff safe. It also may be more useful in large geographic areas such as our province to increase access to care.
Even small-volume peritoneal metastases (PM) portend limited survival in patients with gastric cancer (GCa), and strategies to prevent and more effectively treat PM are needed. Existing models are limited in recapitulating key elements of the peritoneal metastatic cascade. To explore the underlying cellular and molecular mechanisms of PM, we have developed an ex vivo human peritoneal explant model.
With research ethics board approval and informed consent, fresh peritoneal tissue samples were obtained from patients undergoing abdominal surgery and cultured in sterile medium at 37°C, 5% CO2. Peritoneal explants were suspended, mesothelial layer down but without direct contact, above a monolayer of red fluorescent stained AGS human GCa cells for 24 hours, then washed and cultured for a further 5 days. Preservation of tissue architecture and cell viability were confirmed. Implantation and invasion of GCa cells within the explant were examined using hematoxylin and eosin staining and real-time confocal fluorescence microscopy.
Superficial implantation of AGS GCa cells within the mesothelial surface was readily detected, and colonies expanded over 5 days. To investigate the sensitivity of the model to altered GCa cellular adhesion, we stably transfected AGS cells with CDH1, restoring the E-cadherin that they otherwise lack. This markedly suppressed implantation. Invasion of AGS cells into the submesothelial mesenchymal layer was similarly inhibited by wildtype CDH1 restoration.
Here we show that this ex vivo human peritoneal explant model is responsive to manipulation of genetic and environmental factors that regulate peritoneal implantation and invasion by GCa cells, with reproducible results. In the future, we will precisely target critical steps in the peritoneal metastatic cascade, with the goal of improving the quantity and quality of life in patients with GCa.
The objective of this study is to determine patient participation rates and insights related to patient-reported outcomes following breast cancer (BC) surgery.
Female patients with unilateral, nonrecurrent BC were assessed for satisfaction and quality of life via the BREAST-Q questionnaire. Scores were standardized (maximum 100). Participants received questionnaires (web-based or paper) preoperatively and 2–6 weeks after surgery. Enrolment was between June 2020 and March 2021.
Fifty-nine patients were eligible, of whom 44 (75%) participated. Mean age was 55 years among participants and 68 years among those who declined. Thirty-four patients completed Q1 (58%) and 22 completed Q2 (50%). Among participating patients, completion of Q1 and Q2 was 86% and 60%, respectively, for those aged 25–39 years; 77% and 78% for those aged 40–69 years; and 71% and 83% for those aged above 70 years. Patients undergoing mastectomy (M) had the highest preoperative psychosocial well-being and satisfaction with breasts scores (83 and 71, respectively), while the lowest scores were in mastectomy with reconstruction (MR) (62 and 58, respectively). Patients undergoing M and breast conservation surgery (BCS) had higher preoperative sexual well-being scores than those undergoing MR (62 v. 45). Postoperatively, patients undergoing M scored higher than those undergoing BCS in psychosocial well-being (93 v. 71), sexual well-being (96 v. 59) and satisfaction with breasts (91 v. 70), with patients undergoing MR scoring lowest (40, 21 and 47, respectively). Satisfaction with information scores were higher in those undergoing BCS than MR (71 v. 49).
Preliminary results suggest that patient compliance with the BREAST-Q questionnaire is suboptimal and may require reassessment. Patients undergoing M had the most favourable results, while the score of those undergoing BCS was slightly lower but was consistent with other reports in the literature. Patients undergoing MR had unique findings, including low pre- and post-operative scores, along with satisfaction with information. This may be an area for future quality improvement and education to ensure realistic expectations about outcomes. The significance of these results is limited by small numbers and the need for multivariate analysis.
Venous invasion (VI) is an under-reported prognostic factor in colorectal cancer (CRC). Staining for elastin may facilitate the accurate detection of VI and minimize interobserver variability. We evaluated the impact of routine elastin staining on VI detection in resected CRC and its relationship to oncologic outcomes at a tertiary centre that performs a high volume of CRC resections.
This is a single-institution observational study of consecutive patients who underwent resection of primary CRC between March 2011 and April 2013. Patients were dichotomized into 2 cohorts: resection during the year before (preimplementation; n = 145) and the year after (postimplementation; n = 128) the implementation of routine elastin staining. All resection specimens were independently assessed by expert pathology reviewers (with elastin staining) who were blinded to outcomes. Recurrence-free survival (RFS) was estimated by the Kaplan–Meier method, and group differences were calculated using the log-rank test.
Routine elastin staining resulted in an increased VI detection rate from 21% to 45% (odds ratio [OR] 3.1, 95% confidence interval 1.8–5.3, p < 0.001). Expert pathology review of all cases revealed that a large proportion of VI was missed during the original assessment (VI−[original]/VI+[review]), with a higher miss rate during the preimplementation (48%) compared with the postimplementation (22%, p = 0.007) period. Elastin-detected VI was strongly associated with RFS (p = 0.003), in contrast to VI detected with hematoxylin and eosin (p = 0.05). Similar discriminatory superiority was observed for risk of hematogenous metastases (OR 11.5 v. 3.7). Interestingly, the missed VI group (VI−[original]/VI+[review]) had comparable RFS to the VI+[original]/VI+[review] group in the preimplementation period, whereas RFS was comparable with the VI−[original]/VI−[review] group in the postimplementation period.
Routine elastin staining enhances VI detection and its ability to stratify oncologic risk in CRC, and it should be implemented for the evaluation of CRC resection specimens. Incorporation of this parameter into standard CRC staging may help inform clinical decision-making regarding adjuvant therapy and intensity of follow-up.
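The reported effect of routine staining on VI detection can be approximately reproduced from the detection rates alone; a minimal sketch, with counts back-calculated from the reported percentages (an assumption) and a standard Wald interval on the log odds ratio:

```python
import math

# Counts back-calculated from the reported detection rates --
# an assumption, since the abstract gives percentages only.
pre_vi, pre_n = 30, 145    # ~21% VI detected before routine elastin staining
post_vi, post_n = 58, 128  # ~45% VI detected after

a, b = post_vi, post_n - post_vi   # post: VI+, VI-
c, d = pre_vi, pre_n - pre_vi      # pre:  VI+, VI-

odds_ratio = (a * d) / (b * c)
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)  # Wald SE of log(OR)
lo = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
hi = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
```

With these assumed counts the calculation gives an OR of about 3.2 (95% CI roughly 1.9–5.4), in line with the reported OR 3.1 (95% CI 1.8–5.3).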
Neoplastic processes cause distinct changes to the body’s metabolism, creating unique patterns in volatile organic compounds (VOCs) being produced. Unique VOC profiles are diagnostic for certain cancers. Exhaled breath VOC analysis is a potential point-of-care cancer screening tool.
An exhaled breath VOC analyzer was developed with ASDevices and Spira Innovation patented technology. Proof-of-principle experiments established detection limits and the sensitivity and specificity of the Epd sensing technology. A case–control trial is underway to validate the device using 25 exhaled VOCs previously shown to be diagnostic of colon cancer. VOCs aspirated from the colon during colonoscopy in patients with known colon cancer will also be sampled and compared with the VOC profile of exhaled breath. The ability to reliably detect colon cancer will be evaluated by receiver operating characteristic curve analysis followed by logistic regression analysis.
Using representative exhaled VOCs, proof-of-principle testing confirmed the sensitivity and selectivity of the sensing technology based on a proprietary gas chromatography (GC) method from eTrace Medical (patent pending). Protocol validation included testing of ASDevices technologies (pretreatment and concentrator, iMov GC platform, uInProve GC valve and Epd sensing technologies). Targeted molecules (sulfurs, benzene, toluene, xylene) were contained in an air matrix to simulate exhaled breath. This validation system demonstrated that the device can reliably measure targeted molecules at levels within 10 ppb of their nominal value. The detection limit, based on 3× the signal-to-noise ratio, was between 200 and 700 ppt. The system’s sensitivity can be further adapted via sample concentration and pretreatment.
There is strong evidence supporting the use of exhaled VOCs to accurately detect colon cancer. We propose a proof-of-concept case–control study to determine the sensitivity and specificity of exhaled breath analysis with a novel point-of-care tool for the diagnosis of colon cancer, using previously described VOC breathprints.
Radical cystectomy (RC) is the standard of care for bladder cancer. Despite a high risk of morbidity and mortality, adverse events associated with RC have remained relatively unchanged for almost a decade. This study investigates the effect of a multimodal clinical pathway on morbidity and mortality among patients undergoing RC.
Our institution adopted the American College of Surgeons National Surgical Quality Improvement Program (NSQIP) in 2015 and a multimodal clinical pathway for RC was implemented in February 2016 to reduce adverse surgical events. The NSQIP database (2015–2019) was queried for RC cases at our institution. Demographic characteristics, medical comorbidities and intraoperative characteristics were collected to establish a baseline. Primary outcomes included length of stay (LOS) and 30-day postoperative outcomes (i.e., complications, readmission, reoperation and death). Pre- and postpathway groups were compared using odds ratios for postoperative outcomes and independent t tests for LOS.
Two hundred and sixty-one patients (35 prepathway and 226 postpathway) were included. There were no statistically significant differences between groups at baseline. Implementing a clinical pathway was associated with a shorter LOS (10.20 [standard deviation (SD) 6.72] d v. 14.20 [SD 8.40] d, p < 0.01) and 63% fewer postoperative complications (odds ratio [OR] 0.37, 95% confidence interval [CI] 0.18–0.78, p < 0.01). Complications included infections, blood transfusions, pulmonary embolism, deep vein thrombosis, unplanned intubation, renal failure, cerebrovascular accidents and cardiac arrest, among several others. There were no significant differences in the occurrence of 30-day readmission (OR 0.65, 95% CI 0.29–1.59, p = 0.33), reoperation (p = 0.55, Fisher exact test) or death (p = 0.81, Fisher exact test).
This study demonstrates that a multimodal clinical pathway significantly reduces LOS and postoperative complications among patients undergoing RC. Although quality improvement has traditionally focused on postoperative interventions, clinical pathways that include preoperative medical optimization and intraoperative management can synergistically reduce the morbidity associated with major surgery.
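The LOS comparison can be reproduced from the published summary statistics alone; a minimal sketch, assuming a pooled-variance (Student) t test, which matches the "independent t tests" described in the methods:

```python
from scipy.stats import ttest_ind_from_stats

# Group means, SDs and sizes as reported in the abstract.
t, p = ttest_ind_from_stats(
    mean1=10.20, std1=6.72, nobs1=226,  # postpathway LOS (days)
    mean2=14.20, std2=8.40, nobs2=35,   # prepathway LOS (days)
    equal_var=True,                     # pooled-variance assumption (ours)
)
```

Under this assumption the test statistic is about −3.2 with p < 0.01, consistent with the reported result.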
Treatment-related infertility is an important cause of distress in young women with breast cancer (YWBC) that is preventable by fertility preservation (FP) before initiating therapy. This study assesses FP service use by YWBC in Quebec, Canada.
Administrative claims for women aged 40 years and under diagnosed between Apr. 1, 2012, and Mar. 31, 2018, were identified using Quebec’s universal health services database (Régie de l’assurance maladie du Québec, RAMQ). Access to and use of FP services were ascertained by identifying claims for a visit with an obstetrician–gynecologist within 90 days of diagnosis, followed by claims for ovarian stimulation, ovule harvesting or artificial insemination. Patient-, disease- and treatment-related predictors were estimated using logistic regression.
A total of 1350 YWBC were treated during the study period. Mean age was 35 (standard deviation 3.9) years, 69.3% had a Charlson Comorbidity Index score of 0, 77.6% were urban residents and 36.9% were considered socioeconomically disadvantaged. Stage distribution was 0.96%, 67.5% and 31.6% for stages 0, 1–2 and 3, respectively. In total, 50.4% had a mastectomy, 40.6% received chemotherapy, 70.4% received radiotherapy and 20.4% initiated antiestrogen therapy; 97% had a surgeon as the most responsible physician. A total of 333 YWBC consulted an obstetrician–gynecologist within 90 days of diagnosis and 155 subsequently received FP services. Use of FP was negatively associated with increasing age (odds ratio [OR] 0.819, 95% confidence interval [CI] 0.782–0.858) and increasing levels of social deprivation (OR 0.656, 95% CI 0.384–1.118; OR 0.555, 95% CI 0.317–0.973), and positively associated with receipt of chemotherapy (OR 1.739, 95% CI 1.104–2.740).
Only 24.6% of eligible YWBC in Quebec accessed FP specialists. Of these, 46.5% chose to move forward with FP. These findings raise important questions about how to optimize access to FP expertise.
In 2016, a multipronged provincial pathway was implemented across 13 hospitals to improve the mastectomy perioperative care experience, while maintaining high-quality care. While the pathway successfully increased same-day mastectomy rates from 1.7% to 59.2%, the rate of postoperative emergency department (ED) visits remained high at 20.9% in spite of focused interventions at the patient and provider levels to enhance perioperative support. Our study sought to investigate potential factors associated with high postoperative ED visits following mastectomies.
Data were collected using the Discharge Abstract Database and the National Ambulatory Care Reporting System databases. Eligible patients included all women over 18 years of age who underwent a mastectomy province-wide between 2004 and 2020. Patient demographic characteristics including age, socioeconomic status, Charlson Comorbidity Index score and treatment variables such as location and date of surgery, surgery type and ED visit times and reason were collected. The primary outcome of interest was unplanned ED visits within 30 days of mastectomy. Univariate and multivariate analyses were performed to identify independent predictors of unplanned postoperative ED visits.
A total of 19 974 patients had a mastectomy during the study period. Of these, 4590 (23%) had an ED visit, of which 4% were admitted. The most common causes of ED visits were incision issues, infection, hematoma or seroma, and pain. Independent factors associated with ED visits were increasing age, overnight-stay mastectomy, reconstruction, comorbidities, depression and rural residence. There was a slight decrease in ED visits after implementation of the perioperative pathway (20.9% v. 23.7%), but this decrease was not statistically significant.
Postoperative ED visits remain high despite initiation of a province-wide surgical pathway in 2016 that emphasizes patient education and improved perioperative care and supports. Currently, the majority of ED visits are manageable in nonemergent settings and do not result in readmission. Targeted interventions and patient education strategies may further decrease unplanned ED visits.
In breast cancer, clinicians aim to improve survival while patients value quality of life. We aim to delineate the impact of patient, tumour and treatment factors on psychosocial outcomes after treatment.
A prospective cohort of women with stage I–III breast cancer was recruited at an academic cancer centre between 2014 and 2017. Validated questionnaires (BREAST-Q, Impact of Event Scale, Hospital Anxiety and Depression Scale) were completed preoperatively and 6 and 12 months after surgery. Change in psychosocial scores over time by surgical procedure was assessed using linear mixed models. Predictors of psychosocial outcomes at 12 months were assessed using multivariable linear regression models. We considered p values less than 0.05 to be significant.
A total of 413 women underwent unilateral lumpectomy (UL, 48%), unilateral mastectomy (UM, 36%) and bilateral mastectomy (BM, 16%).
Over time, women having UL had the highest scores in terms of breast satisfaction, psychosocial well-being and sexual well-being (all p < 0.01), with no difference between women having UM versus BM. Age was inversely related to distress (p < 0.01), psychosocial well-being (p < 0.01) and physical well-being (p = 0.001). Radiotherapy was associated with worse breast satisfaction, psychosocial well-being and physical well-being (all p < 0.01), while chemotherapy was associated with worse sexual well-being (p = 0.04). Women with a pathologic complete response had less anxiety than those with stage I disease (p = 0.03). Women with triple-negative disease had worse satisfaction (p = 0.03), distress (p = 0.01), anxiety (p < 0.01) and psychosocial well-being (p = 0.047) than those with HR+/HER2− disease. Surgical procedure was a significant predictor of breast satisfaction and psychosocial, physical and sexual well-being (all p < 0.01). Income level predicted satisfaction and physical well-being (p = 0.01). Ethnicity (p < 0.01) and education level (p = 0.04) predicted distress scores.
Psychosocial functioning after breast cancer is influenced by an interplay between patient, tumour and treatment factors. Delineating these influences identifies factors that may be modifiable through treatment de-escalation and psychosocial support.
We aim to delineate the relationship between pathologic complete response (pCR) in the breast and axilla, stratified by receptor subtype, in women with breast cancer.
We performed a retrospective cohort study of women with invasive breast cancer between 2014 and 2019 at an academic cancer centre who received neoadjuvant chemotherapy (NAC) and surgery. Clinical and pathologic stage, receptor status and surgical management were abstracted from medical charts. Breast pCR was defined as no evidence of invasive disease (ypT0/Tis). Regarding axillary pCR, micrometastases were considered as residual disease while isolated tumour cells were considered negative. Women were stratified into hormone receptor–positive (HR+) and HER2-negative (HER2−), HR+/HER2+, HR−/HER2+, and HR−/HER2− (triple negative) groups. The Fisher exact test was used to compare groups, with p values less than 0.05 considered significant.
Among 374 women, 114 (30.4%) achieved breast pCR as follows: HR+/HER2− (11/123, 8.9%), HR+/HER2+ (21/80, 26.3%), HR−/HER2+ (43/64, 67.1%) and triple negative (39/107, 36.4%) (p < 0.001). Among women with involved lymph nodes confirmed by fine-needle aspiration biopsy (FNAB) before NAC, axillary pCR rates were as follows: HR+/HER2− (10/110, 9.1%), HR+/HER2+ (23/51, 45.1%), HR−/HER2+ (35/42, 83.3%) and triple negative (33/54, 61.1%) (p < 0.001). In women achieving breast pCR with a positive pre-NAC axillary FNAB, rates of associated axillary pCR were as follows: HR+/HER2− (3/9, 33.3%), HR+/HER2+ (10/12, 83.3%), HR−/HER2+ (24/27, 88.9%) and triple negative (20/22, 90.9%) (p = 0.004). Conversely, in women with a positive axillary FNAB achieving axillary pCR, rates of associated breast pCR were as follows: HR+/HER2− (3/10, 30.0%), HR+/HER2+ (10/23, 43.4%), HR−/HER2+ (24/35, 68.6%) and triple negative (20/33, 60.6%) (p = 0.08).
Breast pCR is a strong predictor of axillary pCR, while axillary pCR is a modest predictor of breast pCR, for HER2-positive and triple-negative disease. There is a poor relationship between breast and axillary pCR for HR-positive disease. These data may inform the success of future surgical de-escalation for HER2-positive and triple-negative disease.
Preoperative biliary bacterial colonization (bacterobilia) is considered a risk factor for infectious complications after pancreaticoduodenectomy (PD). The aim of this study was to investigate the role of the microbiome grown from bile cultures taken at PD in the development of post-PD complications.
In a retrospective study of 162 consecutive patients undergoing PD with intraoperative bile sampling (2008–2018 inclusive), bile cultures were analyzed and sensitivities compared with preanesthetic antibiotics. Thirty-day postsurgery infectious complications were assessed with regard to bile culture growth, speciation of organism and sensitivity to antibiotics.
Bacterobilia was present in 136 patients (84.0%), with the most common organisms being Enterococcus (38, 27.9%), Streptococcus (21, 15.4%) and Klebsiella (19, 14.0%). The majority of patients were administered cefazolin as the preanesthetic antibiotic (135, 83.3%). Only 24 bile cultures (17.6%) grew bacteria sensitive to the preanesthetic antibiotics. Patients with bacterobilia had significantly higher rates of major complication (Clavien–Dindo 3 or 4) than patients without (33.1% v. 7.7%, p = 0.017), as well as higher rates of surgical-site or deep-space infection (SSI–DSI, 56.9% v. 26.9%, p = 0.010). Compared with patients who had sterile cultures, those whose cultures grew Enterococcus species had higher rates of major complications (31.6% v. 7.7%, p = 0.050), SSI–DSI (65.8% v. 26.9%, p = 0.005) and organ space infection (OSI) (31.6% v. 7.7%, p = 0.050). Streptococcus species were associated with a longer length of stay (median 15.0 v. 11.0 d, p = 0.031) and higher rates of major complications (38.1% v. 7.7%, p = 0.028), SSI–DSI (76.2% v. 26.9%, p = 0.002) and intensive care unit admission (19.0% v. 0%, p = 0.034).
Positive bile cultures taken at PD were associated with a higher incidence of major complications and SSI–DSI. Efforts to reduce rates of bacterobilia, such as limitation of biliary instrumentation, should be considered. Preanesthetic antibiotics did not cover the majority of the bacteria cultured in positive bile cultures, suggesting that preanesthetic antibiotics covering Enterococcus and Streptococcus should also be considered.
Patients with hepatobiliary (HPB) disease are at substantial risk of preoperative frailty; however, studies assessing preventive prehabilitation are limited. This systematic review and meta-analysis aims to evaluate outcomes for patients with HPB disease treated with exercise prehabilitation compared with standard care.
A comprehensive search of Medline, Embase, Scopus and Web of Science was conducted, with studies selected and data extracted following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. Included studies evaluated adult patients with HPB disease, had more than 5 patients and provided 7 or more days of exercise prehabilitation. The primary outcome was length of stay (LOS); secondary outcomes included mortality, complications, physical performance and psychosocial evaluations.
We screened 1548 studies and included 6 studies evaluating 957 patients; of those, 56.0% underwent exercise prehabilitation and 44.0% were controls. Prehabilitation was associated with a 5.20-day LOS reduction (95% confidence interval [CI] −9.89 to −0.51, I2 94%, p = 0.03). Postoperative complications (odds ratio [OR] 0.68, 95% CI 0.36 to 1.25, I2 48%, p = 0.22), major complications (OR 0.83; 95% CI 0.60 to 1.14, I2 0%, p = 0.24) and mortality (OR 0.67, 95% CI 0.17 to 2.70, I2 0%, p = 0.57) were similar. Subgroup analysis demonstrated that those undergoing pancreatic surgery had a nonsignificant 15.77-day LOS reduction (95% CI −38.1 to 6.57, I2 95%, p = 0.17) and those undergoing hepatic resection had a nonsignificant 0.29-day increase (95% CI −3.69 to 4.27, I2 80%, p = 0.89). Reduced sarcopenia and improved quality of life were recognized with prehabilitation.
Exercise prehabilitation may reduce morbidity following HPB surgery. Studies with well-defined exercise regimens are needed to optimize exercise prehabilitation outcomes.
Perihilar cholangiocarcinoma (PHC) is a rare malignancy arising at the biliary confluence. Given the lack of effective systemic therapy, margin-negative (R0) resection remains the most important prognostic indicator for long-term survival. The objective of this study is to review the impact of intraoperative revision of positive biliary margins in PHC on oncologic outcomes.
Electronic databases were searched from inception to October 2020. Studies comparing 3 types of patients undergoing resection of PHC with intraoperative frozen section of the proximal or distal bile ducts or both were identified: those who were margin negative (R0), those with an initially positive margin who had revised negative margins (R1R0) and those with a persistently positive margin with or without revision of a positive margin (R1). The primary outcome was overall survival (OS). Secondary outcomes included postoperative morbidity. Meta-analysis was carried out using random-effects models.
A total of 409 studies were screened. Ten retrospective observational studies reporting on 1955 patients were included. Patients undergoing successful revision of a positive proximal or distal bile duct margin or both (R1R0) had similar OS to those with a primary margin-negative resection (R0) (hazard ratio [HR] 0.93, 95% confidence interval [CI] 0.72–1.19, p = 0.56) and significantly better OS than patients with a final bile duct margin that was positive (R1) (HR 0.52, 95% CI 0.34–0.79, p = 0.002). There was no increase in the rate of postoperative complications associated with additional resection, although postoperative morbidity was poorly reported.
The current review supports routine intraoperative biliary margin evaluation during resection of PHC with revision if technically feasible.
Prolonged warm ischemia times in kidney transplantation (KT) have been associated with worse outcomes. During organ procurement, and after ice removal, kidneys located in the retroperitoneum are at risk of rewarming during the time taken to retrieve other abdominal and thoracic organs. The purpose of this study is to evaluate the impact of prolonged kidney procurement (PKP) time on kidney transplantation outcomes.
A total of 145 patients were reviewed. We defined PKP as more than 65 minutes from aortic cross-clamp to final organ extraction, versus standard procurement (SP) time (65 min or less).
No statistically significant differences in outcomes were seen when we compared kidney-only (KOP) versus multiorgan procurements, even though KOP organs had a higher incidence of Kidney Donor Profile Index (KDPI) greater than 50% (p < 0.01). However, when kidney procurement took more than 65 minutes, there was a higher rate of 3-month graft loss (6.6% v. 0%, p < 0.01), a higher incidence of de novo donor-specific antibody formation (10% v. 0.9%, p < 0.01) and an inferior 5-year graft survival of 90% versus 97.4% (p = 0.03). Left kidneys had an average ischemia time 10 minutes longer than that of right grafts, and their 5-year survival was significantly lower than that of their mate organs (p = 0.03).
Procurement time, when longer than 65 minutes, is an important and potentially modifiable factor that influences not only early but also long-term graft survival. Preventive measures to reduce the exposure of kidney procurements to rewarming may influence long-term outcomes.
Balancing the risks of venous thromboembolism (VTE) with bleeding following hepatectomy is challenging. VTE risk following hepatectomy specifically for colorectal cancer (CRC) metastases is unclear. Furthermore, there are limited data on the effect of VTE on survival in these patients. Our aims were to identify the risk of VTE development in routine clinical practice among patients with resected CRC liver metastases, the factors associated with this, and its impact on survival.
We conducted a population-based retrospective cohort study of patients undergoing hepatectomy for CRC metastases between 2002 and 2009 using linked administrative health care databases. Multivariable logistic regression was used to estimate the association between patient, disease and perioperative factors and VTE risk at 30 and 90 days after surgery. Cox proportional hazards regression was used to estimate the association between VTE and adjusted cancer-specific (CSS) and overall survival (OS).
A total of 1310 patients were included, with a mean age of 62.7 (standard deviation 10.8) years; 62% were male. A total of 51% had 1 metastatic deposit. Major hepatectomy occurred in 64%. VTE occurred in 4% within 90 days of admission for liver resection. Only longer length of stay was associated with VTE development (odds ratio 6.88, 95% confidence interval 2.57–18.43, p < 0.001 for 15–21 d v. 0–7 d). Burden of metastatic disease (based on the size of the largest lesion and the number of lesions), receipt of perioperative chemotherapy and receipt of blood transfusion were not associated with increased risk of VTE development. Thirty-eight percent of VTEs were diagnosed after discharge. VTE was not associated with inferior CSS or OS, even after adjustment for several covariates.
Risk of VTE development in this population is similar to the risk in patients undergoing hepatectomy for other indications and to the risk following other cancer site resections where postoperative extended VTE prophylaxis is currently recommended. The number of VTEs occurring after discharge suggests there may be a role for extended VTE prophylaxis.
Laparoscopic liver resection (LLR) is associated with fewer complications, shorter hospital stays and reduced costs relative to open resection while offering equivalent oncologic benefits. As these patients often receive perioperative red blood cell (RBC) transfusions, understanding whether the resection approach affects the risk of transfusion provides opportunities to optimize outcomes. We examined the association between surgical resection approach and perioperative RBC transfusion use among patients who have undergone elective liver resection for gastrointestinal cancer.
We performed a population-based retrospective cohort study of adults undergoing elective hepatectomy for gastrointestinal cancer (2007–2019). Modified Poisson regression was used to estimate the relative risk (RR) of perioperative RBC transfusion and of secondary outcomes (perioperative bleeding, 90-day readmission, major morbidity and mortality) associated with liver resection approach, either laparoscopic or open. Estimates were adjusted for patient, procedure, surgeon and hospital factors.
Of 5902 patients who underwent hepatectomy (mean age 63 yr, 39.3% female, enterocolorectal cancer 64.1%, major resection 50.7%), 18.4% underwent LLR. After adjustment, we observed a 42% reduced risk of perioperative RBC transfusion among patients who underwent LLR (RR 0.58, 95% confidence interval [CI] 0.42–0.80). This was similarly observed among minor and major hepatectomy subgroups. LLR was also associated with reduced risks of perioperative bleeding (RR 0.70, 95% CI 0.55–0.91), 90-day readmission (RR 0.83, 95% CI 0.74–0.91) and major morbidity (RR 0.70, 95% CI 0.64–0.78). Sensitivity analyses adjusting for preoperative anemia and restricting to patients without perioperative bleeding showed similar associations.
LLR was associated with lower risks of perioperative RBC transfusion, bleeding, major morbidity and readmission compared with open resection after adjusting for patient case mix, procedural complexity, and surgeon and hospital experience. By reducing perioperative transfusion, LLR can help improve patient outcomes and reduce demand on scarce blood resources. Among other factors, surgeons should consider the planned surgical resection approach when counselling, preparing and optimizing patients for surgery.
Laparoscopic liver resection (LLR) offers equivalent oncologic outcomes to open resection while reducing complications, hospital stays and costs in selected patients. However, the constraints of laparoscopy along with the inherent technical challenges of liver resection have slowed LLR uptake. To understand how experience supports LLR uptake, we examined the association between surgeons’ liver resection volume and the use of LLR for gastrointestinal cancer.
We identified patients who underwent elective liver resection for gastrointestinal cancer (2007–2019) within a health system with regionalized hepatobiliary surgical services. Surgeons’ annual liver resection volume, defined using data from 2 years before patients’ index surgery, was dichotomized into low (< 30) and high (≥ 30) volume informed by restricted cubic splines (RCS). We examined the association between surgeons’ annual resection volume and LLR with RCS and modified Poisson regression adjusting for patient, procedure and surgeon factors.
Seventy-four surgeons performed 5133 liver resections (median patient age 64 yr; 38.7% female), 17.7% of which were performed laparoscopically. High-volume surgeons cared for 37.3% of patients. Low-volume surgeons performed a median of 18 annual resections (interquartile range [IQR] 12.5–30), whereas high-volume surgeons performed 42 (IQR 36–52.5). High-volume surgeons were more likely to utilize LLR (23.6% v. 17.7%, p < 0.001). After adjustment for patient and surgeon factors, high-volume surgeons remained independently associated with the use of laparoscopy (adjusted relative risk 1.30, 95% confidence interval [CI] 1.14–1.48) and increasing surgeon volume associated with higher LLR probability.
Patients cared for by high-volume liver surgeons were 30% more likely to receive LLR over open surgery relative to those cared for by low-volume surgeons after adjustment for patient and surgeon characteristics. This indicates that the use of LLR differs on the basis of surgeons’ experience with liver resection. These data are important to direct efforts aimed at optimizing the appropriate use and increasing the uptake of LLR, supporting the improvement of patient outcomes.
The objective of this study was to evaluate the expression of the immunosuppressive receptor TIGIT and its ligands, as well as the resulting inhibition of tumour infiltrating lymphocytes (TILs) in comparison with PD-1, in colorectal liver metastases (CRLM).
We extracted total RNA from 52 CRLM and conducted deep RNA sequencing. We performed flow cytometry analysis for protein expression on paired tissue suspensions from 18 patients: CRLM, adjacent normal liver and peripheral blood mononuclear cells. We measured IFN-γ secretion (ELISA) and performed tumour lysis assays (Incucyte) of TILs with and without TIGIT and/or PD-1 monoclonal antibodies.
By gene expression analyses of CRLM, we found higher expression of TIGIT and its ligands PVR/PVRL2 compared with other immune checkpoints such as PD-1 and PD-L1/PD-L2. By flow cytometry, TIGIT ligands were expressed by nonimmune cells at higher levels than PD-1 ligands (PVR 34.6% [standard deviation (SD) 6.8%]; PVRL2 10.9% [SD 2.8%] v. PD-L1 0.5% [SD 0.2%]; PD-L2 1.0% [SD 0.5%]; p < 0.001). In TILs, TIGIT was significantly overexpressed in activated (CD25+) CD4+ (74.8% [SD 3.0%]) and CD8+ (68.7% [SD 8.4%]) compared with resting (CD25neg) T lymphocytes, in contrast with PD-1, which was expressed at a steady state. In vitro, tumour coculture with autologous T cells in the presence of a TIGIT blocking antibody resulted in higher IFN-γ secretion and increased tumour cell lysis.
The TIGIT/PVR immune suppressive axis appears to be biologically relevant to CRLM, according to expression and functional data. These results support targeting TIGIT in future immune checkpoint blockade studies in patients with CRLM.
Postoperative pancreatic fistula (POPF) remains a common complication following pancreaticoduodenectomy (PD), with rates of clinically relevant (CR) POPF of 10%–15%. This risk is higher in subgroups of patients, which can be predicted by the fistula risk score.
This study aims to examine the rates of POPF and perioperative morbidity for high-risk patients undergoing PD where the following strategies have been employed: (1) external pancreatic stent, (2) perioperative hydrocortisone, (3) inner invaginating layer and (4) Blumgart outer layer. Patients undergoing PD in 2020 who received all 4 strategies to reduce POPF were included. Patients were selected to undergo such interventions at their surgeon’s discretion when they were suspected to have an elevated risk of CR-POPF. The following outcomes were reported: CR-POPF, surgical site infection, percutaneous drain insertion, postoperative complication (Clavien–Dindo grade III or higher) within 30 days, length of stay, readmission within 30 days and reoperation within 30 days.
A total of 14 patients underwent all 4 interventions. The median age was 64 (interquartile range [IQR] 57–69) years and the median body mass index was 30 (IQR 23–32). The median fistula risk score was 6 (IQR 7–9). Four patients were high risk according to the fistula risk score, six were intermediate risk and four were low risk. One patient (7.1%) experienced a CR-POPF; this patient had a high fistula risk score. For comparison, the weighted average of the expected CR-POPF rate was 15.4% and the rate at this institution in 2020 was 13%.
Interventions to decrease the rate of CR-POPF following PD are important but challenging. A combined approach of utilizing and studying multiple effective strategies shows promise in reducing the CR-POPF rate for patients with elevated risk of CR-POPF. Further research is required to demonstrate an improved rate of CR-POPF in these patients when compared with similar controls.
Synchronous resection of primary colorectal tumours and liver metastases is commonly performed. The totally laparoscopic approach to synchronous resection has recently gained traction. However, more research is needed to determine whether laparoscopic synchronous resection has improved outcomes compared with the open approach.
A retrospective cohort study using data from the American College of Surgeons (ACS) National Surgical Quality Improvement Program (NSQIP) database was performed comparing totally open versus totally laparoscopic synchronous colorectal and liver resections. Multivariable logistic regression was performed to control for age, sex, American Society of Anesthesiologists (ASA) score and extent of liver resection. The primary study outcomes were mortality and major morbidity at 30 days. Secondary outcomes were average length of stay, readmission and postoperative rates of bile leak and liver failure.
A total of 918 open and 80 laparoscopic cases were identified. Major morbidity occurred in 8.8% of laparoscopic cases, compared with 25.2% of open cases (p = 0.001). Mortalities were low in each group, with 2 cases among the laparoscopic group and 13 in the open group (2.5% v. 1.4%, p = 0.34). Median length of stay was 5 days for laparoscopic cases compared with 7 days for open cases (p < 0.001). Rates of readmission (15% v. 19.8%, p = 0.50), bile leak (1.3% v. 0.9%, p = 0.56) and liver failure (3.8% v. 6.4%, p = 0.47) were similar between groups. On logistic regression, the open approach was associated with a significantly increased rate of major postoperative morbidity (odds ratio 3.4, 95% confidence interval 1.5–7.6, p = 0.002).
Although the total number of cases remains low, totally laparoscopic synchronous resection for metastatic colorectal cancer is associated with reduced major morbidity and shorter length of stay compared with the totally open approach, with similar rates of postoperative bile leak and liver failure.
Despite high recurrence rates following liver resection for colorectal cancer liver metastasis (CRLM), there is a lack of robust evidence to guide clinical management in this setting. We aimed to define prognostic factors for survival in patients with recurrence following liver resection for CRLM. We also determined overall survival (OS) and disease-free survival (DFS) in patients who undergo curative treatment at the time of first recurrence following initial liver resection.
This was a multi-institutional retrospective cohort study of consecutive patients who had disease recurrence following their first liver resection for CRLM. Recurrence was defined as metastatic disease following curative-intent therapy for the preceding recurrence. Multivariable Cox regression models were used to identify poor prognostic factors associated with survival. Using recursive partitioning on significant prognostic factors, patients were classified into low- and high-risk groups and compared using Kaplan–Meier curves and the log-rank test.
There were 471 patients included in this study. Prognostic factors associated with survival were (i) time to first recurrence less than 8.5 months after liver resection (hazard ratio [HR] 4.60, 95% confidence interval [CI] 2.46–8.59) and (ii) presence of extrahepatic disease at first liver resection (HR 2.95, 95% CI 1.30–6.71). The high-risk group (i.e., patients with time to recurrence < 8.5 mo or extrahepatic disease or both) had median OS of 40.5 months (95% CI 34.0–45.7) versus 64.7 months (95% CI 57.9–72.3) in the low-risk group. The mean DFS and OS following curative surgery at the time of first recurrence were 17.4 months (95% CI 15.2–19.5) and 53.2 months (95% CI 50.5–55.8), respectively.
Extrahepatic disease at the time of first liver resection as well as short time to recurrence were identified as poor prognostic factors for patients with recurrence following liver resection. These results can guide tailoring of therapy to low- and high-risk patients.
Minimally invasive approaches represent the most recent innovation in pancreatic surgery. Laparoscopic distal pancreatectomy is now a standard of care. However, the laparoscopic Whipple procedure still faces critical limitations to its applicability, such as an extremely long learning curve associated with high complication rates, especially in its early portion. The most critical step is the pancreatic reconstruction, as its complication profile, notably pancreatic anastomotic leak, is the Achilles heel of its feasibility.
Our efforts were put into finding the most feasible, safe and highly reproducible technique for this part of the surgery. As a result, we developed our laparoscopic modified version of the Blumgart pancreatojejunostomy with external stenting.
As one of the few centres in Canada routinely performing laparoscopic pancreaticoduodenectomy (PD), London Health Sciences Centre (LHSC) is in a unique position to describe the outcomes associated with this novel technique.
Data were collected prospectively from patients undergoing laparoscopic PD for elective indications between 2018 and 2020, a period encompassing early utilization of the technique. Data were reported using descriptive statistics. Primary outcomes included postoperative pancreatic fistula (POPF) rate, length of stay (LOS), in-hospital complications, 30-day morbidity and mortality, 30-day readmission and 30-day reoperation. Secondary outcomes included R0 resection and time to recurrence. Rates of and reasons for conversion to open surgery were assessed. National Surgical Quality Improvement Program (NSQIP) institutional and collaborative data were used as comparison.
A total of 127 patients underwent PD between January 2018 and November 2020. Seventeen of the procedures were completed laparoscopically or were laparoscopic assisted; 41% (n = 7) of the laparoscopic cases were converted to open. Mean operative time was 6 hours 28 minutes. Fifty-three percent (n = 9) experienced Clavien–Dindo grade III or higher postoperative complications. Median LOS was 10 days. POPF occurred in 17.6% (n = 3), and all were International Study Group of Pancreatic Fistula (ISGPF) grade B. Thirty-five percent (n = 6) required 30-day readmission, and 18% (n = 3) required reoperation. There was one 30-day mortality, due to out-of-hospital cardiac arrest, with no evidence of intra-abdominal complication on autopsy. R0 resection margins were achieved in all specimens. No patients had early recurrence. Median follow-up was 92 days (range 15–584 d).
Although some cases required conversion to open surgery owing to patient factors, conversions that either were planned or were due to difficult dissection occurred earlier in the adoption of this technique, indicating the presence of a technical learning curve. Postoperative morbidity, readmission and reoperation rates remained higher than NSQIP-reported rates, which may be attributable to early introduction of this approach. Oncologic resection was achieved in all cases. Future directions include a propensity-matched comparison with open PD and ongoing analyses as the technique matures.
This study examines the effects of neoadjuvant chemotherapy (NAC) in patients with borderline resectable pancreatic ductal adenocarcinoma (PDAC) compared with those who underwent upfront pancreaticoduodenectomy.
Borderline resectable tumours, retrieved from an institutional registry covering 2007 to 2020, were defined by American Hepato-Pancreato-Biliary Association (AHPBA) criteria for borderline resectability or a carbohydrate antigen 19-9 (CA19-9) value greater than 100, or both. The primary outcome was overall survival (OS) at 1 and 3 years. Secondary outcomes included margin status, recurrence, length of stay (LOS) and 30-day morbidity. Analyses were based on intention to treat.
A total of 87 patients with borderline resectable PDAC were identified. Forty underwent NAC and 46 underwent upfront surgery. Median pretreatment CA19-9 corrected for bilirubin was lower in the NAC group (10.6 v. 125, p = 0.03). Seventy percent (28) undergoing NAC proceeded to surgery. Of the operative patients, 86% (24) of the NAC group and 62% (28) of the upfront surgery group had resectable disease. One-year OS was 70% (28) with NAC and 39% (18) in the upfront surgery group (p < 0.01). Three-year OS was 43% (17) with NAC and 4% (2) with upfront surgery (p < 0.001). Median OS was 12.6 months with NAC and 10 months with upfront surgery. Median follow-up was 337.5 days (range 31–2314 d). In the operative NAC group, median survival was 20 months, 1-year OS was 85% and 3-year OS was 21%. The groups had comparable median time to recurrence (276.5 v. 260.5 d, p = 0.61), LOS (10 v. 11 d, p = 0.11) and 30-day morbidity (36% v. 36%, p = 0.36). The NAC group had lower rates of R1 margins (8% v. 32%, p = 0.04) and lymph node metastasis (44% v. 66%, p = 0.1).
These findings indicate an OS benefit for patients with borderline resectable PDAC undergoing NAC. Patients undergoing NAC had improved resection rates and R0 margin rates, with a trend toward improved lymph node status, compared with patients undergoing upfront surgery.
Liver resection for malignant indications can lead to prolonged hospital admission. A recent statement from the Society of Surgical Oncology (SSO) has advised that thermal ablation (TA) for colorectal liver metastasis (CRLM) be considered during times of resource contraction in the COVID-19 pandemic. The purpose of this study is to evaluate the impact of broader use of TA and telemedicine during the pandemic.
We retrospectively reviewed consecutive patients undergoing TA with or without liver resection during the COVID-19 pandemic (since March 2020, with a minimum follow-up of 6 mo) and compared them with patients who underwent similar liver procedures in the era immediately preceding the pandemic. Primary outcomes, including health care resource utilization (length of stay and readmission), complications and oncologic adequacy of treatment, were analyzed. Cox proportional hazards modelling was used for risk adjustment and to identify predictors of the primary outcomes.
Forty-two patients undergoing TA for CRLM were identified. The median age was 62.5 (range 32–84) years and 54.8% (n = 23) were women. Ten percent (n = 4) had combined colorectal resection with liver ablation. All patients in the COVID-19 era (n = 11) had at least 1 telemedicine consultation preoperatively and all were reviewed in a virtual liver multidisciplinary tumour board. In the pre-COVID-19 cohort, 45.2% (n = 14) underwent a major liver resection (MLR) in combination with TA, whereas in the COVID-19 cohort 27.3% (n = 3) underwent a combined MLR and TA. Length of stay in the COVID-19 era was 1.7 (range 1–5) days, with no readmissions when TA was the primary procedure. Across both groups, the median number of ablated lesions was 2 (range 1–10), and the median size was 12.8 mm (range 4–35 mm). Only 2 (4.8%) patients experienced ablation-related complications (both Clavien–Dindo grade II). Thirty-day imaging follow-up demonstrated complete response to ablation in 83.3% (n = 35) of patients and partial response in 16.7% (n = 7). Liver recurrence within 6 months occurred in 30.9% (n = 13). Of these, 6 recurred in the ablation site and 7 in a different hepatic location. Twenty-nine percent of patients with liver recurrence underwent reablation; 23.8% (n = 6) of patients had distant recurrence independently of the ablation. KRAS mutation was the only predictor of overall recurrence (odds ratio 6.12, 95% confidence interval 1.3–29.7, p = 0.024).
TA for CRLM is safe and effective, and it reduced health care resource utilization during the pandemic. Complication rates and oncologic adequacy of treatment were favourable, even in instances of multiple ablations (> 5), compared with hepatic resection. KRAS mutation status was the dominant predictor of TA treatment failure, suggesting that either larger ablation margins or hepatic resection be employed in these patients. There were no unintended consequences of the SSO guidelines for the treatment of CRLM during the COVID-19 pandemic.
The aim of this study was to calculate the cumulative costs, mean costs and incremental cost-effectiveness of a liver transplant program that utilizes normothermic machine perfusion (NMP) in concert with static cold storage (SCS) compared with one that uses SCS alone (control).
A Markov model compared approaches (NMP v. control) using a 1-year cycle length over a 5-year time horizon from the public health care payer perspective. Primary microcosting data (Can$ 2020) from a single centre retrospective trial were applied along with EQ-5D utility values from literature sources. Transition probabilities were deduced using local transplant data and supplemented by literature values for sensitivity analysis. A scenario and probabilistic sensitivity analysis (PSA) was conducted.
The NMP approach was both cost-saving and cost-effective in comparison with the control approach, which was dominated. The cumulative costs for NMP were $5.57 billion and for the control they were $6.39 billion. The mean cost of NMP was $557 450.01 and the mean cost of the control was $634 106.42. The NMP program had a greater incremental quality-adjusted life years (QALYs) gain over 5 years, which was estimated to be 3.48, versus 3.17 for the control. The results remained robust in the scenario analysis. In PSA, NMP was cost-effective 63% of the time at the conventional willingness-to-pay threshold of $50 000.
The addition of NMP to a liver transplant program results in greater QALY gains and is both cost-saving and cost-effective from the public health care payer perspective.
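The dominance finding follows directly from the reported means: NMP is both less costly and more effective, so no incremental cost-effectiveness ratio needs to be weighed against a threshold. A minimal sketch of that arithmetic, using only the per-patient figures reported above (the underlying Markov transition probabilities and utilities are not reproduced here):

```python
# Back-of-the-envelope dominance check using the reported per-patient
# mean costs (Can$) and 5-year QALYs for each strategy.
mean_cost_nmp, mean_cost_scs = 557_450.01, 634_106.42
qaly_nmp, qaly_scs = 3.48, 3.17

delta_cost = mean_cost_nmp - mean_cost_scs  # negative => NMP is cheaper
delta_qaly = qaly_nmp - qaly_scs            # positive => NMP gains QALYs

# A strategy that is both cheaper and more effective "dominates" the
# comparator; an ICER (delta_cost / delta_qaly) would be negative and
# is conventionally not reported in this situation.
dominant = delta_cost < 0 and delta_qaly > 0
print(f"ΔCost = {delta_cost:,.2f}, ΔQALY = {delta_qaly:.2f}, dominant: {dominant}")
```

Here the incremental cost is about −$76,656 per patient with a gain of about 0.31 QALYs, which is why the control arm is described as dominated.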
Ampullary carcinomas (AC) account for approximately 6%–7% of all periampullary cancers. AC generally presents at earlier stages owing to the rapid development of obstructive jaundice. Because of this earlier presentation and its tumour biology, AC demonstrates the longest median overall survival (OS) among periampullary tumours. Several retrospective studies and propensity-matched cohort studies have shown a benefit of adjuvant therapy for advanced-stage disease. The aim of this study was to investigate any benefit of adjuvant therapy in early-stage AC (ES-AC).
A retrospective review was performed including patients who underwent pancreatoduodenectomy for AC between 2006 and 2020. Univariable and multivariable Cox proportional hazards analyses were performed, with survival presented using the Kaplan–Meier method and the log-rank test.
Overall, 111 patients were identified who underwent resection for AC. A total of 78 patients received adjuvant therapy while 33 patients were treated with resection alone. There was no significant difference in median overall survival (OS) for ES-AC (stage ≤ 2b) in patients who did or did not receive adjuvant therapy (60.4 mo v. 57.1 mo, respectively). Patients with lymph node metastasis and those whose disease was stage 2b or higher were more likely to receive adjuvant therapy, reflecting their higher stage of disease. Patients with AC of the pancreatobiliary subtype (PBS) had significantly higher rates of lymph node metastasis than those with AC of the intestinal subtype (IS) (p = 0.046). As well, patients with AC of the PBS had higher rates of disease recurrence than those with AC of the IS (34.8% v. 20.0%, p = 0.010).
Among patients with ES-AC, there was no significant difference in OS between those who did or did not receive adjuvant therapy. Patients with AC of the PBS had a significantly higher incidence of lymph node metastasis and a higher incidence of disease recurrence in comparison with patients with AC of the IS. Tumour characteristics such as lymph node metastases and PBS portend a worse prognosis, and adjuvant therapy should be considered in those patients.
Emerging evidence demonstrates the impact of the gut microbiome in carcinogenesis and treatment response in various types of cancers including gastrointestinal and pancreaticobiliary malignancies. However, limited data exist on the impact of the biliary microbiome on the therapeutic efficacy of neoadjuvant chemotherapy (NAC) in pancreatic ductal adenocarcinoma (PDAC).
A single-institution, retrospective analysis was performed; all patients with PDAC who underwent pancreaticoduodenectomy after NAC with intraoperative bile cultures between 2010 and September 2020 were included. The Fisher exact test was used to assess the association between bile flora composition and tumour regression score (TRS) on pathologic specimens. Human pancreatic cancer cell line PANC-1 was treated with 1 μM gemcitabine in the presence of patient-derived bile samples in a 1:50–1:200 dilution in vitro. Cellular activity was measured using the CellTiter-Blue assay after 24 hours of drug treatment. Relative cellular activities with or without gemcitabine were calculated for each bile sample.
A total of 23 patients were included. Overall, 10 patients had colonization with aerobes alone, 11 with aerobes and anaerobes, and 2 with no growth on bile cultures. Eight patients had a TRS of 0–1 and 15 had a TRS of 2–3. Patients who had colonization with anaerobes were more likely to have a TRS of 0–1 than those without anaerobes (64% v. 8%, p = 0.009). Pancreatic cancer cells were more sensitive to gemcitabine in the presence of bile colonized with anaerobes than without anaerobes in vitro.
This study suggests an improved NAC response in the presence of anaerobes in the biliary tract, supported by improved sensitivity to gemcitabine in vitro. Further research is needed to identify the specific anaerobic bacterial strains responsible for this phenomenon and to dissect the molecular mechanism of the observed enhanced chemosensitivity.
The Metroticket project produced prognostic calculators for patients undergoing liver transplant (LT) for hepatocellular carcinoma (HCC). Radiology-based and pathology-based calculators predict 5-year HCC-specific and overall survival, respectively. Our objective was to evaluate how viable tumour burden at explant affects the predictive capability of the Metroticket model.
A retrospective cohort analysis of HCC LT patients from 1996 to 2019 was conducted. Locoregional therapy (LRT) data, radiographic parameters and explant pathology findings including tumour viability were collected. Metroticket predicted survival was calculated. Tumour response to LRT was assessed. Radiographic total tumour volume (TTV) and explant total viable tumour volume (TVV) were correlated. HCC-specific survival of these subgroups was assessed using Kaplan–Meier curves and compared via log-rank testing. Finally, predicted versus observed survival was compared by patient subgroups.
Eighty patients were included. TTV and TVV correlated strongly (Pearson r = 0.98, p < 0.01), with imaging overestimating TVV by 42.1%. There was no significant difference in HCC-specific survival if patients underwent LRT (p = 0.50), regardless of tumour response (p = 0.85), nor if tumours were viable (p = 0.10), regardless of viable tumour burden (p = 0.74). Similarly, the presence of microvascular invasion (p = 0.73) or satellitosis (p = 0.99) did not influence HCC-specific survival. Observed 5-year overall survival was significantly lower than predicted by the Metroticket model for patients with viable tumours (66.3% v. 61.8%, p = 0.03). This finding was amplified in patients without vascular invasion (78.9% v. 66.7%, p < 0.01). Patients with viable tumours and presence of microvascular invasion demonstrated overall survival significantly greater than predicted by the Metroticket model (60.6% v. 51.7%, p < 0.01).
In our study, HCC-specific survival does not appear to be affected by LRT or the burden of viable tumours. However, tumour viability does appear to influence the predictive capability of the Metroticket model, varying with the presence of microvascular invasion. Integrating tumour viability in the Metroticket model may augment its efficacy.
Margin-negative (R0) resection is the strongest positive prognostic factor in perihilar cholangiocarcinoma (PHC). Owing to its anatomic location, the caudate lobe is frequently involved in PHC. The objective of this review is to examine the impact of concomitant caudate lobe resection (CLR) in addition to liver lobectomy and bile duct resection in patients with PHC.
Medline, Embase and Cochrane databases were systematically reviewed from inception to September 2019 to identify studies comparing patients undergoing surgical resection with or without CLR for treatment of PHC. Outcomes included the proportion of patients achieving R0 resection, overall survival (OS) and postoperative morbidity.
A total of 771 studies were screened. Six observational studies reporting on 917 patients were included. Patients undergoing CLR had a higher likelihood of R0 resection (odds ratio [OR] 5.63, 95% confidence interval [CI] 2.41–13.16) and improved OS (hazard ratio [HR] 0.60, 95% CI 0.47–0.77) compared with patients who did not. CLR did not increase the risk of postoperative morbidity (OR 1.05, 95% CI 0.66–1.65).
Given the higher likelihood of R0 resection, improved OS and no apparent increase in perioperative morbidity, this review supports routine caudate lobectomy in the surgical management of PHC. These results should be interpreted with caution given the lack of high-quality prospective data.
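For readers who want to sanity-check pooled ratio estimates like those above, the log-scale standard error can be back-calculated from a reported 95% confidence interval, a standard meta-analysis identity. This check is illustrative only and is not part of the review's methods; it uses the pooled R0 odds ratio reported above:

```python
import math

def se_from_ci(lower, upper, z=1.96):
    """Recover the log-scale standard error of a ratio from its 95% CI:
    SE = (ln(upper) - ln(lower)) / (2 * 1.96)."""
    return (math.log(upper) - math.log(lower)) / (2 * z)

# Pooled odds ratio for R0 resection with caudate lobe resection
# (point estimate 5.63, 95% CI 2.41-13.16, from the text above).
or_r0, lo, hi = 5.63, 2.41, 13.16
se = se_from_ci(lo, hi)

# Rebuilding the CI from the point estimate should reproduce the
# reported bounds if the estimate and interval are internally consistent.
recon_lo = math.exp(math.log(or_r0) - 1.96 * se)
recon_hi = math.exp(math.log(or_r0) + 1.96 * se)
print(f"SE(log OR) = {se:.3f}, reconstructed CI ({recon_lo:.2f}, {recon_hi:.2f})")
```

Applied to the pooled OR of 5.63 (2.41–13.16), the reconstructed bounds match the reported interval to two decimal places, suggesting the estimate is internally consistent.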
Few reports have evaluated prognostic modelling studies of tools used for surgical decision-making. This systematic review aimed to describe and critically appraise studies that have developed or validated multivariable prognostic models for postoperative liver decompensation following partial hepatectomy.
This study was designed using the CHARMS checklist. Following a comprehensive literature search, 2 reviewers independently screened candidate references for inclusion and abstracted relevant study details. Studies were excluded if their objective was predictor-finding, if they assessed only the prognostic value of a single factor (unless adding to a pre-existing multivariable model), if they had an inapplicable analytic purpose (e.g., multivariable modelling not aimed at prognostication, development of novel statistical methods), if their outcome(s) did not include a postoperative liver decompensation event, or if they were a duplicate study not initially screened out. Qualitative assessment was performed using the PROBAST tool.
We identified 36 prognostic modelling studies; 25 focused on development only, 3 developed and validated models and 8 validated pre-existing models. None compared routine use of a prognostic model against standard clinical practice. Most studies used single-institution, retrospective cohort designs, predominantly conducted in Eastern populations. In total, 15 different outcome definitions for postoperative liver decompensation events were used. Statistical concerns surrounding model overfitting, performance assessment and internal validation led to high risk of bias for all studies.
Current prognostic models for postoperative liver decompensation following partial hepatectomy may not be valid for routine clinical use because of design and methodologic concerns. Landmark resources and reporting guidelines such as the TRIPOD statement may assist researchers, and model impact assessment studies represent opportunities for future research.
Appropriate patient selection for liver resection in hepatocellular carcinoma (HCC) is critical to mitigate major liver-related postoperative complications. Currently, no standard prognostic tool exists to predict the risk of postoperative liver decompensation events (POLDEs) following partial hepatectomy in patients with cirrhosis and hepatocellular carcinoma. The objective of this study was to identify independent preoperative predictors of POLDEs, for future development of prognostic tools to improve surgical decision-making.
This was a population-based, retrospective cohort study of patients with cirrhosis and incident HCC between 2007 and 2017, identified using administrative health data from Ontario, Canada. The occurrence of a POLDE (jaundice, ascites, bleeding varices, portal hypertension, hepatorenal syndrome, hepatic encephalopathy or hepatic failure) or death within 2 years from surgery was described. Multivariable Cox regression identified independent predictors of POLDE-free survival, as well as cause-specific hazards for POLDEs and death.
Among 611 patients with cirrhosis and HCC who underwent liver resection, 160 (26.2%) experienced at least 1 POLDE and 189 (30.9%) died within 2 years of surgery. The presence of diabetes, major liver resection and previous nonmalignant decompensation were independent predictors of decreased POLDE-free survival. In contrast, a hepatitis B cirrhosis cause was an independent predictor of improved POLDE-free survival. Except for extent of resection, the same risk factors were associated with POLDEs in cause-specific analysis. Only age and history of previous nonmalignant decompensation were independent predictors of death.
In patients with cirrhosis undergoing resection for HCC, patient- and disease-related factors are associated with POLDEs and POLDE-free survival. These factors can be used both to inform clinical practice and for the development of preoperative prognostic tools, which may lead to improved outcomes in this population.
Preoperative bile duct stenting is increasingly being used to administer neoadjuvant chemotherapy in patients with pancreatic cancer. This study aimed to characterize bacterial colonization after bile duct stenting and to analyze its association with the severity of pancreatic fistulas and oncologic outcomes in patients undergoing a Whipple pancreaticoduodenectomy (PD).
We conducted a single-centre retrospective cohort study of prospectively collected data in patients who underwent PD for cancer between January 2012 and December 2018. Number, type and duration of preoperative bile duct stenting, antibiotic prophylaxis, results of perioperative bile culture, frequency and severity of pancreatic fistulas and survival were analyzed. Proportions were compared with the χ2 test and survival was analyzed with Kaplan–Meier curves and the log-rank test.
A total of 368 patients were included in the analysis. Preoperative biliary stenting was done on 302 (83%) patients, 134 (44%) of whom required more than 1 biliary procedure. The median time between bile duct stenting and PD was 59 days. Bacterial growth was found in 236 of the 289 (82%) bile cultures; 197 (68%) were polymicrobial and 203 (70%) showed resistance to tested antibiotics. Enterobacter was detected in 84 patients (29%) and was resistant to piperacillin–tazobactam, cefoxitin and cefazolin in 10%, 50% and 100% of cultures, respectively. Enterobacter was more frequently found in patients presenting with clinically significant (grade B or C) pancreatic fistulas than in those with biochemical leaks (13/28 [46%] v. 6/34 [17%], p = 0.026). The number of preoperative bile duct procedures, the duration of preoperative stenting, the presence or characteristics of bacteriobilia and the use of antibiotic prophylaxis not covering resistant germs were not significantly associated with the frequency of pancreatic fistula or oncologic outcomes.
Bile colonization with Enterobacter was overrepresented in patients who developed clinically significant pancreatic fistulas after PD, with up to 10% resistance to piperacillin–tazobactam.
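The key comparison above (Enterobacter in 13/28 patients with grade B/C fistulas v. 6/34 with biochemical leaks) can be reproduced with a plain Pearson chi-square statistic on the 2×2 table. The sketch below uses only the counts reported above and omits any continuity correction, so it approximates rather than replicates the abstract's test:

```python
def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic (no continuity correction) for the
    2x2 table [[a, b], [c, d]]: sum of (observed - expected)^2 / expected."""
    n = a + b + c + d
    row1, row2 = a + b, c + d
    col1, col2 = a + c, b + d
    stat = 0.0
    for obs, row, col in ((a, row1, col1), (b, row1, col2),
                          (c, row2, col1), (d, row2, col2)):
        expected = row * col / n
        stat += (obs - expected) ** 2 / expected
    return stat

# Rows: grade B/C fistula v. biochemical leak;
# columns: Enterobacter-positive v. -negative bile cultures.
stat = chi_square_2x2(13, 15, 6, 28)
print(f"chi-square = {stat:.2f} (critical value 3.84 at p = 0.05, df = 1)")
```

The statistic comes out near 6.0, above the 3.84 critical value, consistent with the significant association reported (p = 0.026).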
Minimally invasive abdominal wall reconstruction continues to evolve with technological advancement. This video describes a useful method to improve visualization and surgeon ergonomics during robotic retromuscular ventral hernia repair (RRVHR) by inverting the camera image. The technique of camera inversion is demonstrated in a stepwise fashion, with reassignment of surgical instruments and arms to facilitate dissection. Retrorectus Rives–Stoppa style incisional hernia repair with mesh is demonstrated using the da Vinci Xi platform.
The inverted image provides a useful alternative perspective that decreases surgeon fatigue and complements ergonomic console design. Using this technique, a normalized view of the surgical field is restored, mimicking open surgery. Hernia defect closure is facilitated by the inversion, which also assists in safe management of needle and tissue. Inverting the camera may enhance economy of movement in dissection and identification of key landmarks. Hernia defects and diastasis may be clearly delineated, repaired and reinforced with mesh in an efficient, ergonomic process that enhances surgeon comfort and skill. YouTube video link: https://www.youtube.com/watch?v=jjBh2x2wV78
Abdominal hernia repair is a common general surgery procedure. Massive hernias that result in a substantial loss of domain represent a technical challenge for surgeons. In 1947, the Argentinian surgeon Moreno described the progressive preoperative pneumoperitoneum (PPP) technique as an initial step to treat these hernias. The objective of this study was to review the effectiveness of the PPP in our centre. Secondary objectives were to review morbidity and mortality.
This is a retrospective single-centre study of patients who underwent PPP between 2013 and 2020. Data were retrospectively collected from medical charts and included demographic characteristics, insufflation technique, surgical intervention, complications and length of stay.
Thirty-four patients were admitted for PPP. The mean age was 62 years. The average body mass index was 32.2 kg/m2. The 2 most common preoperative comorbidities were hypertension (26 patients, 76.4%) and dyslipidemia (23 patients, 67.6%). In terms of physical status, 19 patients were American Society of Anesthesiologists (ASA) class II (56%), and 14 patients were ASA class III (44%). Most hernias (91%) were incisional in nature. The average hernia defect size was 13 cm (range 6–22 cm). Complications during the insufflation period, per the Clavien–Dindo classification system, were as follows: 2 grade III, 1 grade IV and 1 grade V. Complications following hernia repair were as follows: 7 grade III and 3 grade IV. Of the initial 34 patients, 32 underwent hernia repair. All hernias were reduced, and full closure of the fascia was possible in 20 patients (62.5%). Average length of stay was 50 days total (18 days after hernia repair).
PPP is an effective technique in the management of giant hernias. However, it is not without complications in this highly morbid patient population. Appropriate thromboprophylaxis and heightened surveillance for possible perioperative respiratory complications are essential.
Parastomal hernia (PSH) is a frequent complication of stoma creation during colorectal surgery. Radiologic classification systems have been proposed for PSH, but they are primarily used for research. Our objective was to determine if PSH radiologic classification at diagnosis could predict the need for surgical repair during follow-up.
In this retrospective cohort study, we reviewed 705 postoperative computed tomographic (CT) scans from 154 patients with permanent stoma creation from 2015 to 2018. Patients were included for analysis if a primary PSH was diagnosed on any examination. PSH were classified according to the European Hernia Society (EHS) and Moreno-Matias (MM) classification systems.
The incidence of radiologic PSH was 41% (63/154) after a median radiologic follow-up of 19.2 months (interquartile range 10.9–32.9 mo). Surgical repair was required in 17 of 62 patients with a primary PSH. There was no significant correlation between PSH classification and surgical hernia repair for either the EHS (p = 0.56) or MM classification systems (p = 0.35) in a univariate analysis. However, in a multivariate analysis, the type of PSH according to the EHS classification was significantly correlated with PSH repair during follow-up (p = 0.02). Type III PSH were associated with a lower incidence of surgical hernia repair compared with type I, with a hazard ratio of 0.01 (95% confidence interval 0.00–0.20). A similar correlation was not seen using the MM classification system (p = 0.10).
EHS classification of PSH was significantly correlated with the need for surgical repair during short-term follow-up. Prospective studies are required to establish a potential role in patient care.
This retrospective cohort study compares 2 different methods of hernia defect closure: an open hybrid approach versus a laparoscopically sutured closure for laparoscopic incisional ventral hernia repair. Currently there is no consensus regarding the ideal surgical approach to an incisional hernia measuring 2–10 cm.
We identified consecutive patients who underwent incisional hernia repair at 2 centres, North York General Hospital and Humber River Hospital, between 2015 and 2020. Patients were grouped according to whether hybrid fascial closure or totally laparoscopic closure was performed. The hybrid approach involved open adhesiolysis, hernia sac resection and primary fascial closure followed by laparoscopic mesh placement without transfascial fixation sutures. The laparoscopic approach involved defect closure by intracorporeal suturing and transfascial fixation of the mesh. Both techniques included laparoscopically placed intraperitoneal mesh with 5 cm of overlap using a tacking device. Age, sex, body mass index (BMI), comorbidities and hernia size were analyzed. Primary outcomes included surgical site infection (SSI), other wound complications including seroma and hematoma, length of hospital stay, pain reported at follow-up appointment and hernia recurrence.
We identified 164 patients who underwent incisional hernia repair at the study centres between 2015 and 2020. Postoperative pain, surgical site infections and seromas did not differ between the totally laparoscopic and hybrid groups. The recurrence rates were 5.8% and 6.8% for the laparoscopic and hybrid groups, respectively; the difference was not statistically significant. The time to recurrence was 15 months (range 8–12 mo) in the laparoscopic group and 7 months (range 6–36 mo) in the hybrid group, and this difference was also not statistically significant. The hernia defect size and BMI were significantly higher in the hybrid group, without increased wound complications.
These results suggest that open defect closure is a safe alternative to totally laparoscopic closure and can be a helpful alternative in patients with higher BMI and larger hernia defects. The hybrid approach involves a less technically demanding fascial closure method combined with the benefits of laparoscopy for wide mesh overlap.
Hypoalbuminemia (HA) may be an important and underinvestigated predictor of serious complications and mortality among patients who undergo bariatric surgery. The aims of this study were therefore to (1) determine the prevalence of HA and clinical characteristics of HA among patients who undergo bariatric surgery, (2) compare complication rates among patients who undergo bariatric surgery with low and normal presurgery serum albumin levels and (3) determine the influence of HA on postoperative complications and 30-day mortality among patients who undergo bariatric surgery.
Data were extracted from the Metabolic and Bariatric Surgery Accreditation and Quality Improvement Program (MBSAQIP) data registry from 2015 to 2018. All primary Roux-en-Y gastric bypass and sleeve gastrectomy procedures were included, while prior revisional surgeries and emergency surgeries were excluded. A presurgical serum albumin level of 3.5 g/dL or less was used to organize the patient population into HA and control cohorts. Bivariate analysis and multivariable logistic regression modelling were used.
Of 590 971 patients, 42 618 (7.2%) were identified as having serum albumin levels of 3.5 g/dL or less. Patients with HA were younger (44.5 [standard deviation (SD) 12.0] yr v. 44.0 [SD 11.9] yr, p < 0.001), had increased body mass index (48.5 [SD 9.0] kg/m2 v. 45.1 [SD 7.7] kg/m2, p < 0.001) and had a lower baseline functional status (1.6% v. 1.0% dependent or partially dependent, p < 0.001). Patients with HA had more anastomotic leaks (0.5% v. 0.4%, p = 0.017), deep surgical site infections (0.4% v. 0.2%, p < 0.001) and composite serious complications (4.4% v. 3.3%, p < 0.001). At 30 days after surgery, complications including need for reintervention (1.6% v. 1.2%, p < 0.001), readmission (4.8% v. 3.7%, p < 0.0001) and mortality (0.14% v. 0.09%, p = 0.001) were all more prevalent among patients with HA. After functional status, HA was the strongest modifiable predictor of serious complications but was not predictive of 30-day mortality.
We identified HA as one of the strongest modifiable predictors of serious complications. Adoption of strategies to identify and improve preoperative serum albumin levels may reduce serious complications among patients who undergo elective bariatric surgery.
Laparoscopic adjustable gastric banding (LAGB) is an effective weight loss procedure with low initial operative morbidity. However, specific complications such as gastric erosion of the band can occur in around 3.4% of patients (at 15-year follow-up in some studies) and require emergent surgical removal.
We present the case of a 72-year-old woman who had undergone LAGB 8 years earlier and suffered a rare complication of jejunal obstruction associated with acute pancreatitis secondary to band migration. After relevant investigations, including blood tests, abdominal radiography and computed tomography, the band was removed during laparoscopic surgery. We demonstrate the surgical technique in a video of the procedure. Relevant steps include abdominal exploration and adhesiolysis, cutting of the LAGB tubing, longitudinal enterotomy at the site of obstruction, LAGB removal, Heineke–Mikulicz closure (2 planes), intraoperative gastroscopy with leak test, drain placement and removal of the subcutaneous port. The postoperative period was free of complications, as was subsequent follow-up. The patient, whose body mass index was 42, declined any additional bariatric procedure.
Acute pancreatitis should be considered a very rare complication of LAGB migration. Since these complications can be severe, endoscopic or laparoscopic intervention should be performed promptly to remove the LAGB. No bariatric guideline establishes endoscopic follow-up to prevent or treat complications in asymptomatic patients. However, in an acute context, owing to the risk of intestinal perforation with endoscopic procedures, laparoscopic exploration remains the standard of care, as exemplified in the video. This procedure can be done by any general surgeon with adequate laparoscopic skills. YouTube video link: https://www.youtube.com/watch?v=acW0fG5b5NE
Bariatric surgery (BSX) is known to induce substantial weight loss and improve weight-related comorbidities. However, some people do not achieve successful weight loss outcomes. Research has shown that visceral adipose tissue (VAT) characteristics may contribute to post-BSX success.
The aim of this study was to determine if VAT gene expression can determine metabolic success 12 months after BSX. We compared VAT between those whose (1) insulin resistance (IR) improved versus persisted, (2) metabolic syndrome improved versus persisted and (3) weight loss was adequate (≥ 20% total body weight) versus inadequate. VAT was collected from patients at the time of BSX; gene expression was assessed by reverse transcription polymerase chain reaction, including markers of thermogenic capacity, inflammation, fibrosis, adipokines and others. Biochemical and anthropometric information was collected before BSX. Groups were compared using the Kruskal–Wallis test followed by Wilcoxon ranked sum, or χ2 and Fisher exact test. A p value less than 0.05 was considered significant.
Data were collected for 126 patients, of whom 85 (67%) had IR (homeostatic model assessment [HOMA] ≥ 2.73) at baseline. Those with persistent IR (n = 21) 12 months after BSX had higher baseline VAT fibrotic gene expression and inflammatory marker expression than those whose IR resolved after surgery (n = 64). Thirty-six patients (29%) had metabolic syndrome at the time of BSX; the 9 patients with persistent metabolic syndrome at 12 months had higher baseline VAT expression of browning markers, adipokines and fibrotic markers than those whose metabolic syndrome resolved. Finally, the 16 patients (13%) who had inadequate weight loss at 12 months showed higher expression of inflammatory and fibrotic markers than those who achieved adequate weight loss (n = 110, 87%). Overall, there was a significant difference in the expression pattern of intraoperative VAT samples from patients with varied postsurgical outcomes.
VAT characteristics may relate to development of metabolic comorbidities and improvement following BSX. This may help predict which patients may have favorable postoperative outcomes.
The general management for chronic kidney disease (CKD) includes treating reversible causes, including obesity, which may be both a driver and comorbidity for CKD. Bariatric surgery, being the most effective method of achieving substantial and durable weight loss, has been shown to reduce the likelihood of CKD progression and improve kidney function in observational studies. An updated systematic review and meta-analysis on the role of bariatric surgery in obese patients with CKD was warranted.
We searched Embase, Medline and Central for eligible studies reporting on CKD and kidney function outcomes in patients with at least stage 3 CKD, before and after bariatric surgery with comparison to a medical intervention control if available. Mean difference (MD), confidence intervals (CI) and relative risk (RR) were calculated for the described outcomes. Risk of bias was assessed with the Newcastle–Ottawa risk of bias score.
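The inverse-variance logic behind pooling per-study mean differences into an overall MD with a 95% CI can be sketched as follows. This is a minimal fixed-effect illustration with hypothetical per-study values, not the review's actual data or its (unstated) pooling model:

```python
import math

# Hypothetical per-study estimates: (mean difference in eGFR, standard error).
# These values are illustrative only, not taken from the included studies.
studies = [(10.5, 2.0), (14.0, 3.1), (9.2, 2.5)]

# Inverse-variance weights: w_i = 1 / SE_i^2 — more precise studies count more.
weights = [1 / se**2 for _, se in studies]

# Pooled MD is the weighted mean; its SE is sqrt(1 / sum of weights).
pooled_md = sum(w * md for (md, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

# 95% CI from the normal approximation.
ci_low = pooled_md - 1.96 * pooled_se
ci_high = pooled_md + 1.96 * pooled_se
print(f"MD = {pooled_md:.2f} (95% CI {ci_low:.2f} to {ci_high:.2f})")
```

A random-effects model (as is typical when I2 is high, e.g. the 90% reported below for glomerular filtration rate) would additionally widen the weights by an estimated between-study variance.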
Nineteen studies were included for synthesis. Bariatric surgery significantly improved glomerular filtration rate, with a MD of 11.79 (95% CI 5.07 to 18.51, I2 90%) mL/min per 1.73 m2, and significantly reduced serum creatinine, with a MD of −0.24 (95% CI −0.39 to −0.21, I2 0%). There was no significant difference in the RR of having CKD of greater than stage 3 after bariatric surgery (RR 1.13, 95% CI 0.83 to 2.07, I2 13%), but there was a reduced likelihood of having a urinary albumin-to-creatinine ratio greater than 30 mg/L (RR 3.03, 95% CI 1.44 to 6.40, I2 91%). Eleven of the 19 studies were of good quality.
Bariatric surgery may be associated with improved kidney function through reduction of body mass index and is a safe treatment option for patients with CKD. However, future studies with more robust reporting are required to determine the efficacy of bariatric surgery for the treatment of CKD.
Discharging patients on postoperative day 1 has become common practice in bariatric surgery in recent years because of the increasing prevalence of severe obesity. However, limited data are available to identify which patients require a longer hospital stay after surgery. The aim of this study was to build a model to predict prolonged hospital stay following bariatric surgery.
A retrospective chart review was performed between January 2012 and October 2017. Patients’ age, sex, preoperative weight, body mass index (BMI), medication (narcotics, gabapentin and anticoagulant) and comorbidities (hypertension, coronary artery disease, type 1 or 2 diabetes mellitus [T1DM, T2DM], chronic obstructive pulmonary disease, asthma, obstructive sleep apnea [OSA], chronic kidney disease [CKD], gastroesophageal reflux disease [GERD], dyslipidemia and chronic pain) and delayed outcome were recorded. Prediction analysis was performed. Cross-validation (CV) was used to assess the generalizability of the model. CV error was used to choose the best predictor set.
A total of 539 charts were reviewed. The best-performing model predicted prolonged hospital stay with a combined CV error of 0.1545. The CV errors as each additional variable was added were asthma (0.1590), weight (0.1578), GERD (0.1574), gabapentin (0.1567), BMI (0.1561), anticoagulation (0.1552), chronic pain (0.1550), OSA (0.1547), dyslipidemia (0.1547), CKD (0.1547) and T2DM (0.1545).
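Choosing a predictor set by cross-validation error, as described above, can be sketched with a greedy forward-selection loop. Everything below — the synthetic patients, the feature names and the toy "any risk factor present" rule — is an illustrative stand-in, not the study's actual data or classifier:

```python
import random

random.seed(42)
FEATURES = ["bmi_over_50", "osa", "gerd", "asthma"]  # hypothetical predictors

def make_patient():
    # Synthetic patient: binary risk factors; delayed discharge is made more
    # likely by high BMI or OSA (an invented generating rule, for illustration).
    x = {f: random.random() < p for f, p in zip(FEATURES, [0.3, 0.4, 0.3, 0.2])}
    p_delay = 0.1 + 0.35 * x["bmi_over_50"] + 0.25 * x["osa"]
    return x, random.random() < p_delay

data = [make_patient() for _ in range(500)]

def cv_error(feature_set, k=5):
    """Mean misclassification rate over k folds of a toy rule that predicts
    delayed discharge if any selected risk factor is present. The rule needs
    no training, so the folds only partition the evaluation data."""
    folds = [data[i::k] for i in range(k)]
    errs = []
    for fold in folds:
        predict = lambda x: any(x[f] for f in feature_set)
        errs.append(sum(predict(x) != y for x, y in fold) / len(fold))
    return sum(errs) / k

# Forward selection: greedily add whichever feature lowers CV error most,
# stopping when no addition improves it — i.e., keep the predictor set
# with the lowest cross-validation error.
chosen, best = [], cv_error([])
while True:
    candidates = [(cv_error(chosen + [f]), f)
                  for f in FEATURES if f not in chosen]
    if not candidates:
        break
    err, feat = min(candidates)
    if err >= best:
        break
    chosen.append(feat)
    best = err
print("selected:", chosen, "CV error:", round(best, 4))
```

A real model would fit a classifier (e.g. logistic regression) on the training folds inside `cv_error`; the selection loop itself is unchanged.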
A prediction model including preoperative BMI, comorbidities and medications is generated from this study. It could be applied to predict prolonged patient stay following bariatric surgery. It has the potential to reduce costs, increase quality of care and meet the growing demand for bariatric surgery.
Obesity and type 2 diabetes mellitus (T2DM) are growing global health concerns and Canada’s Indigenous population is at higher lifetime risk of both. Obesity increases the risk for insulin resistance, T2DM, cardiovascular disease and all-cause mortality. Bariatric surgery is an effective method for improvement or cure of all obesity-related comorbidities, including T2DM. The objective of this scoping review was to interrogate the literature and explore the experiences and outcomes of Indigenous people undergoing bariatric surgery.
Using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) extension for scoping reviews (PRISMA-ScR) guidelines, we conducted a search of Medline, Scopus, CINAHL and Embase. Two independent reviewers identified all studies exploring the experiences and outcomes of Indigenous patients undergoing bariatric surgery. Included quantitative and qualitative data were evaluated using the Grading of Recommendations, Assessment, Development and Evaluations (GRADE) and Critical Appraisal Skills Programme (CASP) approaches, respectively.
A total of 92 articles were returned in our search. Thirteen articles were included in our analysis: 4 qualitative studies and 9 quantitative studies. Substantial heterogeneity precluded pooled analysis. Analysis of quantitative data revealed that Indigenous patients underwent fewer bariatric procedures, had poorer clinic attendance (both preoperative and postoperative), similar weight loss outcomes and slightly higher postoperative complication rates. Qualitative data analysis revealed that obese Indigenous patients have a strong desire to improve their health and quality of life with bariatric surgery. Family is a very important support mechanism and motivator for pursuing bariatric surgery; however, nonsurgeon bariatric pathway supports for Indigenous patients are lacking.
There is a paucity of literature examining the experiences or outcomes of Indigenous patients undergoing bariatric surgery. Existing literature appears to suggest inequity in access to bariatric surgery for Indigenous patients despite strong motivators for pursuing surgery. To identify and address the gaps in access and health outcomes for Indigenous peoples, more research needs to be conducted in this area.
Shortening of the common channel (CC) is 1 option to promote further weight loss after unsuccessful Roux-en-Y gastric bypass (RYGB), although there is no established length. This systematic review aims to characterize the optimal length of the common channel in revisional RYGB.
The Medline, Embase, Web of Science, PubMed, Cochrane Library and CINAHL databases were searched up to April 2020. Studies were included if shortening of the CC of RYGB was performed for insufficient weight loss or weight regain when measurements of the new CC were included. The primary outcome was weight loss on the basis of identified CC length groups. Secondary outcomes included rates of 30-day complications, protein calorie malnutrition (PCM) and need for revisional surgery.
Seventeen single-arm studies including 496 patients were included in this review and grouped on the basis of a short CC length (100–120 cm, 8 studies) versus long CC length (120–300 cm, 9 studies) after revisional RYGB. For both groups, body mass index (BMI) decreased from 43.6 and 43.2 before revision to 35.7 and 34.8 at 12 months after revision, respectively. The absolute percentage changes in BMI were 19.9% and 21.5%. The 30-day complication incidence was 37% for short CC (95% confidence interval [CI] 22%–54%, I2 85%) and 26% for long CC (95% CI 12%–35%, I2 76%), while PCM incidence was 23% (95% CI 12%–35%, I2 76%) versus 26% (95% CI 13%–42%, I2 79%) and the incidence of reoperation for lengthening the CC was 12% (95% CI 6%–20%, I2 60%) versus 13% (95% CI 6%–23%, I2 55%).
This systematic review comparing CC length in revisional RYGB found no differences between short and long CC length in terms of BMI changes, rates of PCM or rate of reoperation for relengthening the CC. Shortening the CC to under 120 cm was associated with a slightly higher 30-day complication rate and may not be necessary. Although the pooled population for this study is substantial, larger double-arm studies with direct outcome comparison based on CC length are required for a definitive answer.
Internal hernias after Roux-en-Y gastric bypass pose a potential challenge in diagnosis and in surgical management. Considering the rising rates of bariatric surgery in Canada, combined with the fact that roughly half of those undergoing these procedures are women of reproductive age, complications from internal hernias during pregnancy are likely to become more common. Surgery in this setting is associated with a high risk of conversion from laparoscopy to laparotomy.
In this video we describe the management of a 35-year-old woman presenting with clinical and radiologic signs consistent with internal hernia at 34 weeks’ gestation. We demonstrate that the laparoscopic approach to intraoperative diagnosis and repair of internal hernias is technically feasible even during the late stages of pregnancy. YouTube video link: https://www.youtube.com/watch?v=xQf52WZS0Ik
With the growing prevalence of bariatric procedures performed worldwide, it is important to understand the timing of postoperative complications following bariatric surgery and the differences that may exist between procedures.
This retrospective study was conducted using the Metabolic and Bariatric Surgery Accreditation and Quality Improvement Program (MBSAQIP) data registry from 2017 to 2018. All patients with primary elective Roux-en-Y gastric bypass (RYGB) and sleeve gastrectomy (SG) procedures were included. The primary outcome was to characterize the timing of postoperative complications for RYGB and SG. Bivariate analysis was conducted using the χ2 test or independent t tests for parametric data, and Mann–Whitney U tests for nonparametric continuous data.
A total of 316 314 patients were identified, with 237 066 (74.9%) in the SG cohort and 79 248 (25.1%) in the RYGB cohort. Early complications included myocardial infarction (4.7 [standard deviation (SD) 6.4] d), cardiac arrest (6.4 [SD 8.5] d), pneumonia (6.9 [SD 6.9] d), progressive renal insufficiency (8.1 [SD 8.1] d) and acute renal failure (8.2 [SD 7.6] d). Late complications included Clostridioides difficile infection (11.3 [SD 7.8] d), organ space infections (11.7 [SD 7.9] d), deep incisional infections (12.4 [SD 6.6] d), superficial incisional infections (13.2 [SD 6.9] d) and urinary tract infections (14.0 [SD 8.4] d). Patients who underwent SG were more likely to be diagnosed later than those who underwent RYGB with regard to superficial incisional infections (14.0 [SD 7.4] d v. 12.5 [SD 6.3] d, p = 0.002), organ space infections (12.6 [SD 7.8] d v. 10.8 [SD 7.9] d, p = 0.001), acute renal failure (9.3 [SD 8.1] d v. 6.8 [SD 6.8] d, p = 0.03) and pulmonary embolism (13.7 [SD 7.5] d v. 11.3 [SD 8.0] d, p = 0.003). No significant difference in timing was observed for any other complication by procedures.
This study provides the first characterization of the timing of postoperative complications following bariatric surgery. We demonstrate that significant differences in timing exist between complications and that these differences also vary by surgical procedure. Understanding the course of bariatric surgical complications will enable providers to optimize perioperative care by helping overcome delays in diagnosis and management.
This study describes the landscape of bariatric surgery in Canada, including procedural and technical variation.
An online survey was included in the Canadian Association of General Surgeons (CAGS) newsletter.
Twenty-three respondents indicated they performed bariatric surgery, representing 29.5% of the estimated 78 surgeons with bariatric practices in Canada. Sixteen (69.6%) practised in an academic setting, 6 (26.1%) practised in the community and 1 (4.3%) practised in the private sector. The majority of bariatric surgeons were fellowship trained. Academic surgeons were more likely to have bariatric surgery as their main practice. Most surgeons performed more than the minimum 100 lifetime stapling procedures (91.3%) and minimum 25 yearly stapling procedures (87.0%) required for American Society for Metabolic and Bariatric Surgery (ASMBS) bariatric certification. The most commonly performed bariatric procedure was the Roux-en-Y gastric bypass (RYGB), performed by 21 surgeons (91.3%). Sleeve gastrectomy (SG) was the next most commonly performed procedure. Only 3 surgeons (13.0%) performed duodenal switches. No one performed adjustable gastric banding. Perioperative work-up was similar across practice types. For RYGB, the majority of surgeons used a bougie to size the gastric pouch (n = 18, 78.3%), with most using a 30- to 40-Fr bougie (n = 16, 69.6%). A combined stapled and hand-sewn gastrojejunostomy was the most common anastomotic technique (n = 16, 69.6%). The majority routinely performed a leak test (n = 22, 95.7%). For SG, most surgeons used a bougie to size the sleeve (n = 22, 95.7%), with the majority using a 30- to 40-Fr bougie (n = 19, 82.6%). The majority did not routinely oversew the staple line (n = 22, 95.7%) and did not routinely perform intraoperative gastroscopy (n = 19, 82.6%). Approximately half routinely performed a leak test.
While SG is now the most common bariatric surgery performed worldwide, the gold-standard RYGB remains the most common bariatric surgery performed in Canada. Surgeons performing bariatric surgery in Canada had similar training, experience and consistent practice patterns.
Surgical stapling devices are used in a wide variety of surgical procedures, particularly in minimally invasive and bariatric surgery. However, these staplers can fail. Stapler failure can lead to anything from minor inconvenience to major morbidity and mortality. Although surgical stapler failure seems to be rare, the true incidence is not known. Stapler failure can occur because of user error or primary malfunction from faulty manufacturing. This review was performed to identify the incidence, types and consequences of primary stapler failure.
The Medline, Embase and PubMed databases were searched for articles discussing surgical stapler failure. Thirty-four articles were selected that described the incidence or consequences of primary stapler failure. A narrative synthesis was performed.
The incidence of stapler failure ranges from 0.022% to 2.3% on the basis of observational studies in the literature. This may represent underreporting. Recent changes to and declassification of the US Food and Drug Administration adverse event reporting databases suggest the incidence is higher than previously estimated, particularly after reporting exemptions were lifted. The most frequent malfunctions include stapler misfire and locking. Consequences range from inconvenient to catastrophic, particularly for vascular staple misfires. Mortality and major morbidity have been described.
Surgical stapling devices have been widely utilized for decades and have revolutionized surgical technology. Surgeons are increasingly reliant on such technological innovations. Despite the low incidence of stapler failure, surgeons are likely to encounter this problem in clinical practice given the wide use of these devices. It is important for the surgeon to be aware of this and to be able to manage this potentially devastating complication. This video demonstrates 6 of the common stapler and user malfunctions and how to correct these problems. YouTube video link: https://www.youtube.com/watch?v=totmND4pwgI
Postsurgical anatomy poses unique challenges to the management of choledocholithiasis after Roux-en-Y gastric bypass (RYGB). This study compares the efficacy and safety of various strategies used in its management.
Studies reporting on the management of choledocholithiasis in patients after RYGB and including at least 5 patients were analyzed by systematic review and meta-analysis. The primary outcome was successful stone clearance. Secondary outcomes of interest included procedure duration, length of hospital stay and adverse events.
Forty studies involving 566 patients were included in the final analysis. The mean age of included patients was 52.6 (standard deviation [SD] 6.24) years, 78.2% (SD 14.2%) were female and the average body mass index was 31.4 (SD 7.86). Procedures represented in these studies included laparoscopic-assisted transgastric endoscopic retrograde cholangiopancreatography (LAERCP) (n = 26 studies, 381 patients), endoscopic ultrasound (EUS)-directed transgastric ERCP (EDGE) (n = 5 studies, 81 patients), laparoscopic common bile duct exploration (LCBDE) (n = 3 studies, 37 patients), balloon-assisted endoscopy (BAE) (n = 3 studies, 34 patients), intragastric single-port ERCP (IGS ERCP) (n = 2 studies, 14 patients), EUS-guided intrahepatic puncture with antegrade clearance (EGHAC) (n = 2 studies, 12 patients) and rendezvous guidewire–assisted ERCP (n = 1 study, 7 patients). High rates of successful stone clearance were observed with LAERCP (92.1%, 95% confidence interval [CI] 88.6–94.6, p < 0.001), EDGE (95.6%, 95% CI 88.0–96.4, p < 0.001), IGS ERCP (92.9%, 95% CI 66.3–99.1, p = 0.009) and LCBDE (96.1%, 95% CI 82.9–99.2, p < 0.001). Lower rates of stone clearance were observed with BAE (61.5%, 95% CI 44.3–76.3, p = 0.19) and EGHAC (74.0%, 95% CI 42.9–91.5, p = 0.12). Relative to EDGE, LAERCP was associated with a longer procedure time (133.1 min v. 67.4 min) but lower complication rates (12.8% v. 24.3%).
LAERCP and EDGE have high rates of success in the management of choledocholithiasis after RYGB. LAERCP has fewer complications but is associated with a longer procedure time. BAE has lower success rates than both LAERCP and EDGE.