Cancer Medicine. 2020 May 31;9(14):5143–5154. doi: 10.1002/cam4.3026

A measure of case complexity for streamlining workflow in multidisciplinary tumor boards: Mixed methods development and early validation of the MeDiC tool

Tayana Soukup 1,, Abigail Morbi 2,4, Benjamin W Lamb 3, Tasha AK Gandamihardja 5, Katy Hogben 6, Katia Noyes 7, Ted A Skolarus 8, Ara Darzi 4, Nick Sevdalis 1, James SA Green 1,9
PMCID: PMC7367630  PMID: 32476281

Abstract

Background and Objective

There is increasing emphasis in cancer care globally for care to be reviewed and managed by multidisciplinary teams (ie, in tumor boards). Evidence and recommendations suggest that the complexity of each patient case needs to be considered as care is planned; however, no tool currently exists for cancer teams to do so. We report the development and early validation of such a tool.

Methods

We used a mixed‐methods approach involving psychometric evaluation and expert review to develop the Measure of case‐Discussion Complexity (MeDiC) between May 2014 and November 2016. The study ran in six phases and included ethnographic interviews, observations, surveys, feasibility and reliability testing, expert consensus, and multiple expert‐team reviews.

Results

Phase 1: case complexity factors identified through literature review and expert interviews; Phase 2: 51 factors subjected to iterative review and content validation by nine cancer teams across four England Trusts, with nine further items identified; Phase 3: 60 items subjected to expert review and distilled to the most relevant; Phase 4: item weighting and further content validation through a national UK survey; Phases 5 and 6: excellent interassessor reliability between clinical and nonclinical observers, and adequate validity on 903 video case discussions achieved. A final set of 27 factors measuring clinical and logistical complexity was integrated into MeDiC.

Conclusions

MeDiC is an evidence‐based and expert‐driven tool that gauges the complexity of cancer cases. MeDiC may be used as a clinical quality assurance and screening tool for tumor board consideration through case selection and prioritization.

Keywords: case complexity, decision making, multidisciplinary team meetings, optimization, streamlining, tumor boards, workload


There is increasing emphasis in cancer care globally for care to be reviewed and managed by multidisciplinary teams (ie, in tumor boards). Evidence and recommendations suggest that the complexity of each patient case needs to be considered as care is planned; however, no tool currently exists for cancer teams to do so. We report the development and early validation of such a tool. We used a mixed‐methods approach involving psychometric evaluation and expert review to develop the Measure of case‐Discussion Complexity (MeDiC).


1. INTRODUCTION

A multidisciplinary approach to cancer diagnosis and treatment appears to be the most effective means of addressing the complex needs of patients with cancer. 1 , 2 , 3 , 4 , 5 In fact, since 1995, the United Kingdom's (UK) National Institute for Health and Care Excellence 4 has recommended that every newly diagnosed cancer case be discussed in a weekly multidisciplinary tumor board meeting (MTB; also known as a cancer multidisciplinary team meeting or MDT in the UK) to improve the consistency and quality of cancer care. These meetings give a range of specialists the opportunity to review each cancer patient's case (eg, history, imaging, comorbidities, and psychosocial issues) and to contribute their expert input to the formulation of treatment plans, thus optimizing care and improving patient outcomes. 1 , 2 , 3 , 4 , 5

However, under pressure from increasing cancer incidence, 6 , 7 an aging population, financial strains on the healthcare system, 8 and specialist shortages, 9 more and more patients need to be reviewed by MTBs, leaving shorter discussion times and raising quality and safety concerns. 10 Moreover, cancer provider time 11 , 12 and the extensive preparation required of radiologists and pathologists 13 are hugely expensive, inadvertently exacerbating financial pressures. 11 , 12 , 13 To address these concerns, Cancer Research UK, 10 the UK National Cancer Advisory Group, 5 and NHS England and NHS Improvement 14 recently highlighted prioritization of complex cancer cases as an important safety and quality improvement strategy for MTBs. This is in line with the implementation of MTBs in the United States, where typically only the most complex (ie, not all) patients are reviewed by a MTB to plan their care. 15 Furthermore, clinicians report case complexity as a key determinant of inconsistent MTB decision‐making. 16

Yet, what constitutes a “complex” cancer case and factors contributing to case complexity remain unclear. 14 Clinically, case complexity might refer to specific patient characteristics (eg, prior surgery) or cancer features that lead to prolonged MTB review that makes formulating a treatment plan challenging. 5 , 10 , 16 From a health policy perspective, 5 , 10 , 14 health systems are encouraged to streamline their MTB processes using validated tools to prioritize cancer case workload, ultimately routing cancer cases efficiently through MTBs based on complexity. 5 , 9 , 10 It thus follows that some way of gauging complexity in a valid and reliable manner is necessary.

This study aims to address this need. We report development and initial validation of an evidence‐based and expert‐derived tool for use by cancer MTBs to safely assess complexity of a cancer patient's case and facilitate efficiency in planning treatment—the Measure of case‐Discussion Complexity (MeDiC) tool.

2. METHODS

2.1. Study design and setting

MeDiC was developed over a 30‐month period (May 2014 to November 2016). It was trialed through six rigorous phases and a mixed‐methods approach, including ethnographic interviews and observations, two rounds of national surveys, two rounds of feasibility and reliability testing on video recorded meetings, expert consensus, and multiple expert team reviews (Figure 1).

Figure 1.

Figure 1

Development of the Measure of Case‐Discussion Complexity (MeDiC) Tool for multidisciplinary tumor board team meetings. Reprinted with permission from Soukup, 2017. 17

Data were collected from hospitals across England (Phases 1‐3 and 5‐6) and from the entire UK (Phase 4). Eligibility was defined as MTB members who regularly attend weekly care planning meetings. The research was approved by corresponding local and national institutional review boards prior to data collection.

We summarize the purpose and methods for each phase of the MeDiC tool development and validation below.

2.2. Patient and public involvement

Patients and the public were not involved in this study.

2.3. Tool development and validation phases

2.3.1. Phase 1. Exploratory ethnographic interviews and observations: Complexity item identification (May‐November 2014)

We conducted interviews with cancer specialists to better understand what constitutes a complex case for their MTBs. We selected participants using opportunistic sampling and asked a single open‐ended question: “What factors in your opinion contribute to case‐discussion complexity in MTBs?” We recorded each factor put forward by the interviewees.

2.3.2. Phase 2. Content validation survey for case complexity items (November 2014‐December 2015)

We validated Phase 1 factors with a larger sample of cancer specialists in order to determine whether they adequately represented all facets of complexity. To do so, we compiled the factors into a survey (paper and electronic version). We asked participants to rate each complexity factor on a 1‐5 Likert scale (1 = very simple case, rapid MTB review; 5 = very complex case, in‐depth MTB review). We also asked for additional factors adding to case complexity. We used the National Institute for Health Research's Clinical Research Network Portfolio to invite all hospitals with MTBs in England to participate. Hospitals opting into the study distributed the survey to their MTBs through local research support teams.

2.3.3. Phase 3. Expert review: Preliminary content validation of complexity items (February 2016)

We held a 2‐hour virtual conference with expert cancer specialists (BWL & JG: attending urologic cancer surgeons; TG and KH: attending breast oncoplastic surgeons) and an expert in surgical safety and psychometrics (NS). The conference aimed to determine inclusion of factors from Phases 1 and 2 into our tool. A list of complexity factors ranked based on their item‐content validity indices (see below) was provided to the experts allowing them to evaluate the candidate factors. The experts rated each factor as “include,” “exclude,” or “equivocal.” Scoring was done via consensus 18 , 19 : all four cancer specialists had to agree for an item to be retained. 20 , 21

2.3.4. Phase 4. National survey: Item weighting and national content validation (March‐June 2016)

In collaboration with Cancer Research UK (one of the largest cancer support charitable foundations, which funds research, service provision, and workforce development), 10 we conducted a national survey. The aim was to determine each individual factor's weight in terms of how much each contributes to case complexity; and to further establish content validity of the complexity factors that emerged from Phase 3 expert consensus.

2.3.5. Phases 5 and 6. Video recordings of MTBs: feasibility, reliability, and validity testing (September 2015‐November 2016)

We first assessed the feasibility of scoring the MeDiC tool and the reliability between assessors on video‐recorded MTBs. We video recorded 12 weeks of breast, colorectal, and gynecological MTBs and used the first two meetings from each cancer team for this phase. We then refined MeDiC by clarifying the wording and scoring anchors (Phase 5). We further assessed the feasibility of scoring the MeDiC items and the reliability between assessors on the remaining 30 video‐recorded MTBs (Phase 6).

All cancer cases reviewed at these MTBs were scored by a clinical research fellow (AM) and a research psychologist (TS) with over 5 years of expertise assessing cancer MTBs. This determined feasibility of scoring the individual items for each patient, and the reliability between the two assessors. After an initial training session (to calibrate the assessors), MeDiC factors were separately scored using a checklist principle by each assessor and annotated to provide justification for each assigned score.

We categorized the factors included in MeDiC into three domains for scoring. For clinical complexity, factor weights (the mean respondent ratings from Phase 4) were added when calculating the overall clinical complexity score. For logistical problems, the number of occurrences within each patient review was counted, and their sum constitutes the logistical complexity score. We calculated the overall complexity score for each patient case by adding the clinical and logistical scores. The scores, as well as the feasibility and usability of the tool, were discussed by the assessors over two in‐depth data review sessions.
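The scoring scheme just described can be summarized in a short sketch: clinical items contribute their survey‐derived weights when present, logistical problems are counted, and the two sums add to the overall score. This is our own illustration; the item names and weights below are hypothetical stand‐ins, not the validated MeDiC items.

```python
# Illustrative sketch of the MeDiC scoring scheme described in the text.
# Item names and weights are hypothetical stand-ins, not the validated tool.

def medic_score(clinical_items, weights, logistical_occurrences):
    """clinical_items: item name -> bool (factor present for this case);
    weights: item name -> weight (mean rating from the survey phase);
    logistical_occurrences: count of logistical problems in the review."""
    clinical = sum(weights[item] for item, present in clinical_items.items() if present)
    return {"clinical": clinical,
            "logistical": logistical_occurrences,
            "total": clinical + logistical_occurrences}

# A case with metastases (weight 1) and a significant comorbidity (weight 3),
# plus two logistical problems observed during the review:
case = {"mets": True, "significant_comorbidity": True, "recurrence": False}
weights = {"mets": 1, "significant_comorbidity": 3, "recurrence": 1}
print(medic_score(case, weights, logistical_occurrences=2))
# -> {'clinical': 4, 'logistical': 2, 'total': 6}
```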

2.4. Data analyses

Our overarching hypothesis was that more complex cancer cases, as scored by MeDiC, would take objectively longer time for the MTB to review and reach a treatment recommendation. We briefly describe our core endpoint analyses (validity, review length, reliability, and complexity level scoring) with full details available in Supplemental File 1.

2.4.1. Validity analyses

We measured content validity of complexity factors included into MeDiC using a widely used measure, the item‐content validity index (I‐CVI). 20 , 21 This index takes both the expert rating and number of respondents into consideration. We used I‐CVI ranges to guide our selection of complexity factors for retention, revision, or deletion in different phases. We further validated MeDiC using correlations between individual factors and the time spent reviewing a case at the MTB, defined as length of time (minutes:seconds) between start and end of each patient's case review. We also used the overall complexity score (ie, item‐total correlation). We reported partial correlations controlling for tumor type for continuous variables, and point‐biserial correlations for associations between continuous and dichotomous factors.
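As an illustration of how I‐CVI‐based triage of candidate items might be computed (our own sketch: the retain and exclude thresholds mirror those reported later in the Results, and the middle "revise" band reflects the retention/revision/deletion options mentioned above; the helper names are hypothetical):

```python
# Rough illustration of I-CVI-based triage of candidate items. Thresholds
# mirror those reported in the Results (full agreement retained;
# I-CVI < 0.67 excluded); helper names and the "revise" band are our own.

def i_cvi(ratings):
    """Proportion of experts endorsing an item.

    ratings: list of expert verdicts, eg "include", "exclude", "equivocal".
    """
    return sum(1 for r in ratings if r == "include") / len(ratings)

def triage(ratings, retain_at=1.0, drop_below=0.67):
    score = i_cvi(ratings)
    if score >= retain_at:
        return "retain"
    if score < drop_below:
        return "exclude"
    return "revise"

print(triage(["include"] * 4))                                 # -> retain
print(triage(["include", "include", "exclude", "equivocal"]))  # -> exclude
```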

2.4.2. Reliability analyses

We assessed reliability between the two MeDiC assessors (AM and TS) using intraclass correlation coefficients (ICC) for continuous variables and Kappa for categorical items, with generally accepted reliability coefficients of 0.70 and above. 22 Cronbach's alpha was calculated to assess internal consistency. We also applied the Cronbach's alpha coefficient as a psychometric criterion for deciding whether a complexity factor should be removed during tool development.
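For reference, Cronbach's alpha relates the sum of per‐item score variances to the variance of the total score. The minimal sketch below is our own illustration of the formula (the study's analyses were run in SPSS):

```python
# Minimal sketch of Cronbach's alpha (internal consistency). Illustration
# only; the study's analyses were run in SPSS.
from statistics import pvariance

def cronbach_alpha(item_scores):
    """item_scores: one list per item, each holding that item's score per case."""
    k = len(item_scores)
    sum_item_var = sum(pvariance(scores) for scores in item_scores)
    totals = [sum(case) for case in zip(*item_scores)]  # total score per case
    return (k / (k - 1)) * (1 - sum_item_var / pvariance(totals))

# Two perfectly consistent items give alpha = 1.0
print(cronbach_alpha([[1, 2, 3], [1, 2, 3]]))  # -> 1.0
```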

2.4.3. Complexity levels scoring

We determined complexity levels using percentiles and quartile values as cut‐off points. For validation purposes, we then used Kruskal‐Wallis H and Mann‐Whitney U tests to analyze differences in MTB review time length across different levels of case complexity.
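The resulting mapping from a total score to a complexity level might look like the sketch below, which uses the overall‐dataset cut‐offs later reported in Table 3 (low ≤1, moderate 2‐3, high 4‐6, very high ≥7) as defaults. These cut‐offs are illustrative; as the Discussion notes, they are unlikely to be uniform across tumor types.

```python
# Sketch of mapping a total MeDiC score to a complexity level, using the
# quartile cut-offs reported for the overall dataset in Table 3 as
# illustrative defaults (low <= 1, moderate 2-3, high 4-6, very high >= 7).

def complexity_level(total_score, cutoffs=(1, 3, 6)):
    low, moderate, high = cutoffs
    if total_score <= low:
        return "low"
    if total_score <= moderate:
        return "moderate"
    if total_score <= high:
        return "high"
    return "very high"

print([complexity_level(s) for s in (0, 2, 5, 9)])
# -> ['low', 'moderate', 'high', 'very high']
```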

We used bootstrapping with stratified sampling and tumor type as a stratification variable throughout the analyses. 23 All analyses were carried out using SPSS® version 20.0 with significance set at P < .05.
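A stratified bootstrap of this kind resamples cases with replacement within each stratum (here, tumor type), so that every resample preserves the stratum sizes of the original dataset. The sketch below is our own illustration; the study's analyses used SPSS.

```python
# Sketch of a stratified bootstrap with tumor type as the stratification
# variable: each resample draws with replacement within each stratum,
# preserving stratum sizes. Illustrative only; the study used SPSS.
import random

def stratified_bootstrap(cases, strata_key, n_resamples, seed=0):
    rng = random.Random(seed)
    strata = {}
    for case in cases:
        strata.setdefault(case[strata_key], []).append(case)
    for _ in range(n_resamples):
        resample = []
        for group in strata.values():
            resample.extend(rng.choices(group, k=len(group)))  # within-stratum draw
        yield resample

cases = [{"tumor": "breast"}] * 3 + [{"tumor": "colorectal"}] * 2
for resample in stratified_bootstrap(cases, "tumor", n_resamples=2):
    # every resample keeps 3 breast and 2 colorectal cases
    print(sum(1 for c in resample if c["tumor"] == "breast"))  # -> 3
```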

3. RESULTS

In Phase 1, we conducted 15 interviews with cancer specialists from three hospitals in England, including surgeons (n = 7), oncologists (n = 2), cancer nurses (n = 2), physicians (n = 2), a radiologist (n = 1), and a pathologist (n = 1) across lung, breast, urology, head and neck, and colorectal cancers. These specialists identified 51 complexity factors, which were grouped into themes covering pathology, patient, and treatment factors (Table S2).

In Phase 2, we compiled the Phase 1 factors into a survey for cancer specialists. Four National Health Service Trusts in England, comprising nine MTBs (breast, brain, lung, colorectal, gestational trophoblastic disease, head and neck, skin, urology, and hem‐oncology), participated. The response rate was 48% (52/108), including oncologists (n = 17), cancer nurses (n = 11), surgeons (n = 8), radiologists (n = 8), physicians (n = 6), and pathologists (n = 2). Nine new complexity factors were suggested by survey respondents, totaling 60 items for potential MeDiC inclusion at this point of the research (Table S2).

In Phase 3, a virtual conference with four surgical oncology experts and one safety/psychometrics expert was held for content validation of all 60 complexity factors, listed as potential MeDiC items. Of the initial 60 items, 39 were excluded (I‐CVI < 0.67) and 21 received full agreement (I‐CVI = 1). The experts further recommended that six items be merged due to shared meaning (ie, cognitive with mental health comorbidities, immunocompromised with significant physical comorbidities, and treatment toxicity with contraindications to standard treatment), and that seven be grouped under the logistical issues domain, totaling 10 items for potential MeDiC inclusion at this point of the study (Table S2).

Phase 4 incorporated the complexity items (N = 10) with full agreement into a national Cancer Research UK survey 9 to determine their weight, that is, the level of complexity each factor adds. We received 973 responses (the denominator for the survey is unknown, hence a response rate cannot be computed; the absolute N of respondents was comparable to a recent UK national cancer specialist survey, which had 1141 respondents) 24 from surgeons, oncologists, radiologists, pathologists, physicians, and cancer nurses from 10 different cancer specialties, including breast, urology, lung, colorectal, head and neck, skin, upper gastrointestinal, gynecology, hematology, and brain, across Scotland, Northern Ireland, Wales, and England, resulting in full UK coverage. We used the mean ratings for each of the 10 complexity factors (items 1‐10 in Table S3) as weights in scoring and determining the overall complexity of cases in the subsequent Phases 5 and 6. Ten new items were proposed for inclusion by respondents (items 11‐20 in Table S3) and retained for further evaluation with the original items, totaling 20 items for potential MeDiC inclusion at this point of the study.

In Phase 5, we conducted preliminary feasibility assessment of the MeDiC scoring and psychometric analyses on a smaller sample of five video‐recorded MTBs including 81 patient reviews (18 breast, 34 colorectal, and 29 gynecological). We determined I‐CVIs, frequencies, and associations with case review length during the meetings for each patient (Table S4), along with reliability and validity testing (Table S5).

The feasibility testing revealed several issues. The items "poor performance status," "mental health comorbidity," and "socioeconomic issues" did not apply to any cases (see Table S4), meaning their validity was not assessable and warranting further testing on a bigger sample. Similarly, the items "guidelines do not account for patients' situation," "treatment failure," and "lifestyle risks" applied to only one case each. Nonetheless, guidelines do not account for patients' situation (02:46 vs 05:15 (minutes:seconds) case review duration) and treatment failure/toxicity and contraindications (02:46 vs 04:09 (minutes:seconds) case review duration) were associated with a nearly twofold increase in review length by the MTBs, suggesting a potentially good proxy for complexity (see Table S4).

Additionally, feasibility testing revealed that some of the more basic indicators of pathology were not captured in MeDiC, yet these are needed to improve the sensitivity of the tool. Therefore, seven new items were added: whether the tumor is a confirmed malignant cancer, whether it is large and has metastasized, whether it is an advanced cancer, whether it has an invasive component or nodal involvement, and whether there is residual tumor left, either because of incomplete excision or because of an incomplete biological response to treatment (see items 20‐27 in Table S4). Although it may seem counterintuitive to have malignancy as a stand‐alone indicator of complexity, we found it a necessary starting point in scoring, especially since some MTBs also discuss benign or suspicious cases. For example, a case that is malignant but has none of the other variables will be simple or of low complexity in comparison to a case that is not only malignant but also advanced and potentially unresectable, or incompletely excised. Hence it is the combination of different factors that determines overall complexity.

In terms of reliability (Table S5), all cases were double rated by the clinical and nonclinical researchers with ICCs higher than the generally accepted 0.70. 21 The Cronbach's alpha measuring internal consistency of MeDiC scoring was good at 0.77. Hence 27 complexity items were brought into the final study phase.

Finally, Phase 6 included MeDiC scoring of these 27 items and analysis on a large sample of 30 video‐recorded MTBs, which reviewed and managed 822 patients (241 breast, 185 colorectal, and 396 gynecological). 25 Interassessor scoring reliability on a subsample of 136 cases (17% of total) was good with Kappa statistics per item showing a minimum coefficient of 0.53 and maximum of 1.00. Disagreements (n = 15) were due to missing elements of the case review due to recording lapses. Cronbach's alpha measuring internal consistency was good at 0.70.

The final list of MeDiC factors with their reliability coefficients, frequency counts, and correlation coefficients with, first, total complexity score and, second, case review duration is shown in Table 1. We color‐coded items using a "traffic‐light" system as a visual guide to how well they measure complexity: green represents a good measure, amber a fair one, and red a poor one (the latter being candidates for removal). Our sensitivity analysis across tumors was broadly similar to the data presented in Table 1, with the exception of five discrepancies detailed in Supplemental File 6.

Table 1.

Final list of complexity factors used in the second and final phase of feasibility and reliability testing

Columns (left to right): No. | Complexity factor | Item weighting | Assessor reliability: Kappa | Item reliability: Cronbach's Alpha if item removed a | Item frequency: Count; % | Correlation e with item total d : r (unadjusted); % variability explained; r (adjusted c ); % variability explained | Correlation e with case review duration: r; % variability explained
Pathology
1 Malignancy 1 1.000 0.668 433 54 0.57** 33 .47** 22 .27** 7
2 Invasive component 1 0.984 0.669 253 31 0.56** 31 .46** 21 .23** 5
3 Multiple cancers (incl. multiple primaries) 1 1.000 0.688 68 8 0.38** 14 .32** 1 .27** 7
4 Increased size (T3, T4) 1 0.974 0.684 80 10 0.42** 18 .36** 13 .16** 3
5 Nodes affected 1 1.000 0.678 103 13 0.49** 24 .40** 16 .27** 7
6 Mets (local or distant) 1 1.000 0.667 111 14 0.57** 33 .51** 26 .31** 10
7 Advanced stage, progressive 1 1.000 0.676 99 12 0.50** 25 .42** 17 .29** 8
8 Unusual or rare tumor type 4 0.953 0.697 34 4 0.25 ** 6 .37 ** 13 .14 ** 2
9 Residual tumor 1 0.881 0.699 47 6 0.21** 5 .18** 3 .16** 3
10 Recurrence 1 1.000 0.694 39 6 0.28** 8 .27** 7 .26** 7
Patient factors
11 Previous history of cancer 1 1.000 0.697 89 11 0.30 ** 9 .33 ** 11 .14 ** 2
12 Previous oncological treatment 1 1.000 0.685 46 6 0.41** 17 .46** 20 .28** 8
13 Significant surgical history 3 1.000 0.692 83 10 0.34** 11 .49** 21 .17** 3
14 Significant physical comorbidity (incl. poor PS g ) 3 0.983 0.696 178 22 0.33** 11 .50** 25 .17** 3
15 Mental health and cognitive comorbidity 4 1.000 0.700 13 2 0.15** 2 .26** 7 .05 0
16 Socioeconomic issues 3 1.000 0.701 3 0 0.13** 2 .14** 2 .06 0
17 Patient choice and family opinion* 1 1.000 0.701 62 8 0.23** 5 .21** 5 .11** 1
18 Lifestyle risks 3 1.000 0.702 7 1 0.06 0 .08* 1 −.01 0
Treatment factors
19 Diagnostic uncertainty and inconclusiveness of diagnostic tests 1 0.941 0.693 105 13 0.39** 16 .34** 12 .29** 8
20 Unusual anatomy/distribution of tumor 1 1.000 0.691 37 5 0.33** 11 .33** 11 .24** 6
21 Conflict of opinions about treatment options 4 0.905 0.703 46 6 0.17 ** 3 .32 ** 10 .29 ** 8
22 Further tests and patient assessment needed 1 1.000 0.708 231 29 0.46 ** 21 .27 ** 7 .27 ** 7
23 Treatment toxicity and contraindications 3 1.000 0.699 5 0 0.17** 3 .21** 4 .09* 1
24 Further input needed from other specialties 1 0.978 0.701 116 14 0.27** 7 .26** 7 .18** 3
25 Pathway does not account for patient's specific situation 4 0.702 1 0 0.01 0 .03 0 .02 0
26 Trial eligibility 1 1.000 0.701 3 0 0.10* 1 .09* 1 .02 0
27 Logistical complexity (occurrences per discussion) f 0.953 b 42 0.19 ** 13 .34 ** 12
Total clinical complexity (sum of items 1 to 26) 0.911 b .55 ** 30
Total complexity (sum of clinical and logistical scores) 0.879 b 0.98 ** 97   .59 ** 35

N = 822 discussions (15 missing cases). Green = good measure of complexity. Amber = fair. Red = weak and could be removed. Boldface = values changed as a result of item weighting. Percentage values have been rounded to the nearest integer for ease of reading. MeDiC Copyright 2017 © Soukup Sevdalis Green under CC‐BY‐NC‐ND terms. Reprinted with permission from Soukup, 2017. 17

a. Cronbach's alpha is 0.701.

b. Intraclass correlation coefficient (ICC).

c. Adjusted for item weighting.

d. Clinical complexity total.

e. Point‐biserial correlation coefficients for items 1‐26; partial correlation controlling for tumor type for discussion time, clinical, logistical, and overall complexity.

f. R between total and logistical complexity is 0.36** (12.6% of total variance explained).

g. Performance status.

* P < .05.

** P < .01.

We further validated MeDiC scores against a practical and objective measure of case complexity: the duration of MTB review needed for a management plan to be agreed for each presented case. For this analysis, we categorized complexity into four levels using three quartile medians with bias‐corrected standard errors and confidence intervals (Table 3). We then applied the overall complexity levels to individual MTBs. The four complexity levels were significantly distinct, with gradual increases in mean time spent reviewing each patient (χ²(3) = 309.67, P < .001), providing further evidence that MeDiC captures underlying case complexity. As shown in Table 3, 43% of colorectal cases fell within the top 25% of the data, that is, within the very high complexity range. In contrast, the gynecological MTB had the smallest frequency of high/very high complexity cases, with low to moderately complex cases most prevalent. The breast cancer team had a more balanced spread of patient complexity. Summary statistics of case complexity across participating MTBs as assessed by MeDiC are shown in Table 2.

Table 2.

Summary statistics for the total MeDiC score across tumor boards and overall dataset

Cancer team N Mean (SD) Median (IQR) Minimum, maximum Logistical problems
Count %
Breast 241 4 (4) 3 (4) 0, 18 84 29
Gynecological 396 3 (4) 2 (3) 0, 26 134 48
Colorectal 185 6 (4) 6 (5) 0, 19 121 23
Total 822 4 (4) 3 (5) 0, 26 339 41

SD = standard deviation. IQR = interquartile range. % is a percentage of observed cases where logistical problems were present. MeDiC total score range is 0–26, with higher scores indicating higher case complexity. Reprinted with permission from Soukup, 2017. 17

Table 3.

Complexity levels and mean case review time durations across tumor boards and overall dataset (all tumor boards)

  25th percentile 50th percentile 75th percentile
Complexity score: ≤1 2‐3 4‐6 ≥7
Complexity level: Low Moderate High Very High
Breast cancer tumor boards
% of case reviews 33% 23% 24% 20%
Mean review time (MM:SS) 00:52 02:06 02:38 05:06
Median review time (MM:SS) 00:36 02:03 02:28 04:28
Range review time (MM:SS) 02:21 03:59 05:13 09:16
N of case reviews 79 56 58 48
Gynecological cancer tumor boards
% of case reviews 33% 34% 18% 15%
Mean review time (MM:SS) 01:28 02:11 03:13 04:38
Median review time (MM:SS) 01:15 02:00 03:00 04:00
Range review time (MM:SS) 05:09 10:59 07:35 14:10
N of case reviews 130 135 72 59
Colorectal cancer tumor boards
% of case reviews 8% 20% 30% 43%
Mean review time (MM:SS) 01:01 02:09 02:34 04:08
Median review time (MM:SS) 01:11 02:07 02:20 03:17
Range review time (MM:SS) 02:09 04:44 06:04 13:47
N of case reviews 14 37 55 79
Overall dataset (all tumor boards)
% of case reviews 27% 28% 23% 23%
Mean review time (MM:SS) 01:13 02:10 02:50 04:32
Median review time (MM:SS) 01:06 02:00 02:27 04:05
Range review time (MM:SS) 05:09 11:02 07:47 14:00
N of case reviews 223 228 185 186

Categories are based on quartile median values from overall dataset bootstrapped on 5000 stratified samples with tumor type as a stratification variable. Median (upper and lower bias corrected confidence intervals) for the 25th percentile was 1 (1.14‐1.56), for the 50th percentile was 3 (2.99‐3.46), and for the 75th percentile 6 (5.64‐6.57). In red are values that represent highest scores. Reprinted with permission from Soukup, 2017. 17

4. DISCUSSION

To the best of our knowledge, this study offers the first tool to assess the clinical complexity of a cancer patient managed in a tumor board setting. Through a rigorous multiphase research process, with expert input from cancer specialists, we produced the MeDiC tool with evidence of reliability and validity in its scores, feasibility in utilization, and correlation with length of time a case review takes across different MTBs. Our analysis confirmed the hypothesis that the cases that obtain higher MeDiC scores take significantly longer time to discuss and make a treatment plan for within MTBs—thus validating the underlying complexity dimension that MeDiC is intended to capture.

We see numerous ways in which MeDiC can be used by MTBs. In health systems where only a select set of cancer patients is brought to a MTB for review, as is the case in the United States, MeDiC offers a standardized way to select, document, and report which cases go forward for MTB review. We propose that in such systems, cases could be selected based on complexity, with the cut‐off determined by individual MTBs. Less complex cases could then be treated according to well‐defined guidelines and evidence‐based protocols agreed on by the entire MTB. Although being selective is the de facto approach in the United States, where institutional cancer accreditation (eg, by the American College of Surgeons Commission on Cancer) 28 may rely on presenting a proportion of cancer patients at a cancer conference, there is typically no systematic distinction based on case complexity. Institutions could therefore meet the accreditation measure without consistently reviewing those most likely to benefit (ie, complex patients) and without documenting that they have done so. Less complex cases could instead be managed through the MTB chair and ratified by team members, with quality assurance maintained through specialist review of pathology and radiology investigations and regular audit of recommended treatment options. This would redistribute MTB work toward cases with greater clinical need, as illustrated in Figure 2.

Figure 2.

Figure 2

Example of how Measure of case‐Discussion Complexity (MeDiC) could be used to streamline workload

Furthermore, in health systems where MTBs are mandatory, for example in the United Kingdom, current policy discussions reveal concerns regarding the sustainability of such uniform application of MTBs; 26 and with the guidance on streamlining published by NHS England, 14 the mandate for discussing all cancer cases no longer exists. Clinically, there is an imbalance whereby more complex patients are squeezed for time because of the very long case lists the MTB has to review. Indeed, prior research by our team has found that the median duration of a case review by a MTB is 2 minutes, which means that some patients are "discussed" in an even shorter timeframe. 10 , 27 , 28 Based on both our clinical and research experience, these very briefly reviewed cases would score low on MeDiC; that is, they represent the least complex patients. In healthcare systems such as the UK's, MeDiC offers the opportunity to screen the highest scoring (ie, most complex) patients into full MTB review, allowing the least complex patients to be managed according to well‐defined guidelines and standard pathways and ratified at the MTB.

Finally, in health systems where MTBs are not applied, MeDiC allows for a phased introduction of this approach, without overloading system resources: selecting patients according to how clinically and logistically complex they are, thus allowing such systems to experiment with setting up their cancer management pathways in a gradual manner—that is, by being selective regarding which patients they bring to the MTB's attention.

Based on our finding of variation in case complexity across tumor types, one MTB implementation design is unlikely to fit all situations. 26 MTBs based at different centers with different case mixes will have different requirements; for example, a tertiary referral cancer center will by definition deal with the most complex cases, whether regionally or nationally. The MeDiC tool allows cancer centers to (re‐)design their care processes safely, with adequate governance, enabling patients to be streamlined through the MTB framework effectively. Regardless of how MeDiC is used, it is important that complex cases be reviewed at the beginning of a cancer conference, when teams are fresh, to prevent the cognitive fatigue shown to impact decision‐making. 27 , 28 , 29 Evidence‐based cognitive‐behavioral strategies (eg, a short break mid‐meeting) should be implemented if meetings are particularly long (>1 hour) to prevent performance detriments. 28

This study has limitations. First, MeDiC was developed and tested within the UK's fully MTB‐driven cancer care system. Further testing is needed in other settings where tumor boards are not mandatory and information regarding clinical complexity may be dispersed across systems and harder to assimilate. Second, MeDiC was tested in real time during MTBs. For most teams, however, the tool should be used to help with meeting preparation and streamlining. 14 , 26 The tool could be completed to generate a score at the point when the decision to present a patient is made, with subsequent case selection or ordering of cases on the MTB's agenda (starting with the most complex patients). Third, the expert review team consisted predominantly of surgeons and psychologists, hence insights from other specialists might be lacking. Nonetheless, all factors included in MeDiC were reviewed by a diverse national range of cancer specialists, adding credibility and validity to the tool.

Further research on MeDiC should explore how cut‐off scores could be established for each tumor or MTB type—we cannot assume that these will be uniform for all. Clinically, further sensitivity analyses need to be carried out to ensure that MeDiC does not miss complex patients. From a cancer policy point of view, implementation of MeDiC and study of its impact on patient selection, efficiency, and costs of care will allow health systems policy makers to determine how MTB‐driven care can be optimally implemented to enhance the quality and efficiency of cancer care delivery.

5. CONCLUSION

MeDiC offers an evidence‐based and expert user‐informed tool that allows cancer teams and systems to select and/or streamline their patient caseload for optimal treatment planning and management.

ACKNOWLEDGEMENTS

The financial support for this study was provided entirely by the United Kingdom's (UK) National Institute for Health Research (NIHR) via the Imperial Patient Safety Translational Research Centre at Imperial College London. NS’ research is supported by the NIHR Applied Research Collaboration (ARC) South London at King’s College Hospital NHS Foundation Trust. NS is a member of King’s Improvement Science, which offers co‐funding to the NIHR ARC South London and comprises a specialist team of improvement scientists and senior researchers based at King’s College London. Its work is funded by King’s Health Partners (Guy’s and St Thomas’ NHS Foundation Trust, King’s College Hospital NHS Foundation Trust, King’s College London and South London and Maudsley NHS Foundation Trust), Guy’s and St Thomas’ Charity and the Maudsley Charity. The views expressed in this publication are those of the authors and not necessarily those of the NIHR, the charities, or the Department of Health and Social Care. The funding agreement ensured the authors' independence in designing the study, interpreting data, writing, and publishing the report.

CONFLICT OF INTEREST

BL and TS received funding for training cancer multidisciplinary teams in the assessment and quality improvement methods in the United Kingdom. TS serves as a consultant to F. Hoffmann‐La Roche Ltd Diagnostics providing advisory research services in relation to innovations for multidisciplinary tumor boards. NS is the Director of London Safety & Training Solutions Ltd, which provides teamworking, patient safety, and improvement skills training and advice on a consultancy basis to hospitals and training programs in the United Kingdom and internationally. JG is a Director of Green Cross Medical Ltd that developed MDT FIT for use by National Health Service Cancer Teams in the UK. The other authors have no conflicts of interest to report.

AUTHOR CONTRIBUTION

Tayana Soukup, James SA Green, and Nick Sevdalis have made substantial contribution to the conception and design of the study. Tayana Soukup, Abigail Morbi, Benjamin W Lamb, James SA Green, Nick Sevdalis, Tasha Gandamihardja, and Katy Hogben have made substantial contribution to the acquisition of data, analysis, and interpretation of data. All authors have been involved in drafting the manuscript and revising it critically for important intellectual content; have given final approval of the version to be published; and have agreed to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved.

Supporting information

Supplementary Material

Soukup T, Morbi A, Lamb BW, et al. A measure of case complexity for streamlining workflow in multidisciplinary tumor boards: Mixed methods development and early validation of the MeDiC tool. Cancer Med. 2020;9:5143–5154. 10.1002/cam4.3026

Nick Sevdalis and James S. A. Green have contributed equally and share the senior author position on this paper.

Funding information

United Kingdom's National Institute for Health Research (NIHR); NIHR Applied Research Collaboration (ARC) South London; King’s Health Partners; Guy’s and St Thomas’ Charity; Maudsley Charity.

DATA AVAILABILITY STATEMENT

The copyright of the MeDiC tool and items rests with Soukup, Sevdalis, and Green under the Creative Commons Attribution Non‐Commercial Non‐Derivative (CC‐BY‐NC‐ND) License terms, and the final version of the tool with item description can be obtained from Soukup on request. The anonymized data set supporting the final phase of tool development is available on Zenodo, a research data repository, also under the CC‐BY‐NC‐ND License terms. Researchers are free to reuse the MeDiC tool and the data set on the condition that they attribute it, that they do not use it for commercial purposes, and that they do not alter it. For any reuse or redistribution, researchers must make clear to others the license terms of this work, and reference the MeDiC tool and the data set accordingly (for an example of referencing a data set, please, refer to reference number [25]).

REFERENCES

  • 1. Department of Health . Manual for cancer services. London: Department of Health; 2004. [Google Scholar]
  • 2. National Cancer Action Team . The characteristics of an effective multidisciplinary team (MDT). London: National Cancer Action Team; 2010. [Google Scholar]
  • 3. Fennell ML, Das IP, Clauser S, Petrelli N, Salner A. The organization of multidisciplinary care teams: modelling internal and external influences on cancer care quality. J Natl Cancer Inst Monogr. 2010;40:72‐80. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 4. National Institute for Health and Care Excellence . Improving Outcomes Guidance (IOG). https://www.nice.org.uk/guidance/published?type=csg. Accessed February 04, 2019.
  • 5. Independent Cancer Taskforce . Achieving world‐class cancer outcomes: a strategy for England 2015–2020. Independent Cancer Taskforce; 2015. Available from: tinyurl.com/taskforce‐strategy. Accessed February 21, 2015.
  • 6. Mistry M, Parkin DM, Ahmad AS, Sasieni P. Cancer incidence in the UK: projections to the year 2030. Br J Cancer. 2011;105:1795‐1803. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 7. World Health Organization . World Cancer Report 2014. France: International Agency for Research on Cancer; 2014. [Google Scholar]
  • 8. NHS England . Everyone counts: planning for patients 2014/2015 to 2018/2019. London: NHS England; 2014. [Google Scholar]
  • 9. NHS Improvement . Evidence from NHS Improvement on clinical staff shortages: a workforce analysis. London: NHS Improvement; 2016. [Google Scholar]
  • 10. Cancer Research UK . Improving the effectiveness of multidisciplinary team meetings in cancer services. London: Cancer Research UK; 2017. [Google Scholar]
  • 11. Fosker CJ, Dodwell D. The cost of the MDT. Br Med J. 2010;340:c951. [PubMed] [Google Scholar]
  • 12. Taylor C, Shewbridge A, Harris J, Green JS. Benefits of multidisciplinary teamwork in the management of breast cancer. J Breast Cancer. 2013;5:79‐85. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 13. Kane B, Luz S, O'Briain DS, McDermott R. Multidisciplinary team meetings and their impact on workflow in radiology and pathology departments. BMC Med. 2007;5:15. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 14. NHS England and NHS Improvement . Streamlining Multi‐Disciplinary Team Meetings – Guidance for Cancer Alliances. London: NHS England and NHS Improvement; 2020. [Google Scholar]
  • 15. Prabhu Das I, Baker M, Altice C, Castro KM, Brandys B, Mitchell SA. Outcomes of multidisciplinary treatment planning in US cancer care settings. Cancer. 2018;124(18):3656‐3667. [DOI] [PubMed] [Google Scholar]
  • 16. Kidger J, Murdoch J, Donovan JL, Blazeby JM. Clinical decision‐making in a multidisciplinary gynaecological cancer team: a qualitative study. BJOG: Int J Obstet Gynaecol. 2009;116(4):511‐517. [DOI] [PubMed] [Google Scholar]
  • 17. Soukup T. Socio‐cognitive factors that affect decision‐making in cancer multidisciplinary team meetings [PhD Thesis; Clinical Medicine Research]. London, UK: Imperial College London; 2017. [Google Scholar]
  • 18. Hsu C‐C, Sandford BA. The Delphi technique: making sense of consensus. Pract Asses Res Eval. 2007;12(10). Available from: http://pareonline.net/getvn.asp?v=12&n=10. Accessed September 01, 2014. [Google Scholar]
  • 19. Dalkey N, Helmer O. An experimental application of the Delphi Method to the use of experts. Manag Sci. 1963;9:458‐467. [Google Scholar]
  • 20. Polit DF, Beck CT, Owen SV. Is the CVI an acceptable indicator of content validity? Appraisal and recommendations. Res Nurs Health. 2007;30:459‐467. [DOI] [PubMed] [Google Scholar]
  • 21. Polit DF, Beck CT. The content validity index: are you sure you know what's being reported? Critique and recommendations. Res Nurs Health. 2006;29:489‐497. [DOI] [PubMed] [Google Scholar]
  • 22. Hull L, Arora S, Symons NR, et al. Delphi Expert Consensus Panel . Training faculty in nontechnical skill assessment: national guidelines on program requirements. Ann Surg. 2013;258(2):370‐375. [DOI] [PubMed] [Google Scholar]
  • 23. Wright DB, London K, Field AP. Using bootstrap estimation and the plug‐in principle for clinical psychology data. J Exp Psychopathol. 2011;2(2):252‐272. [Google Scholar]
  • 24. Lamb BW, Sevdalis N, Taylor C, Vincent C, Green JSA. Multidisciplinary team working across different tumour types: analysis of a national survey. Ann Oncol. 2012;23:1293‐1300. [DOI] [PubMed] [Google Scholar]
  • 25. Soukup T. Complexity of patient‐discussions in cancer meetings. Zenodo [data set]. 2017. 10.5281/zenodo.582279. [DOI] [Google Scholar]
  • 26. Soukup T, Lamb B, Sevdalis N, Green JSA. Streamlining cancer multidisciplinary team meetings: challenges and solutions. Br J Hosp Med. 2020. 10.12968/hmed.2020.0024 [DOI] [PubMed] [Google Scholar]
  • 27. Lamb BW, Sevdalis N, Benn J, Vincent C, Green JS. Multidisciplinary cancer team meeting structure and treatment decisions: a prospective correlational study. Ann Surg Oncol. 2013;20:715‐722. [DOI] [PubMed] [Google Scholar]
  • 28. Soukup T, Gandamihardja T, McInerney S, Green JSA, Sevdalis N. Do multidisciplinary cancer care teams suffer decision‐making fatigue? An observational, longitudinal team improvement study. BMJ Open. 2019;9:e027303. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 29. Soukup T, Lamb BW, Weigl M, Green JSA, Sevdalis N. An integrated literature review of time‐on‐task effects with a pragmatic framework for understanding and improving decision‐making in multidisciplinary oncology team meetings. Front Psychol. 2019;10:1245. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 30. American College of Surgeons . Commission on Cancer. Available from: https://www.facs.org/quality‐programs/cancer/coc. Accessed March 25, 2019.
