BMJ Open. 2020 Feb 18;10(2):e034269. doi: 10.1136/bmjopen-2019-034269

Evaluating follow-up and complexity in cancer clinical trials (EFACCT): an eDelphi study of research professionals’ perspectives

Helene Markham Jones 1,2, Ffion Curtis 1, Graham Law 3, Christopher Bridle 4, Dorothy Boyle 5, Tanweer Ahmed 2
PMCID: PMC7045255  PMID: 32075839

Abstract

Objectives

To evaluate patient follow-up and complexity in cancer clinical trial delivery, using consensus methods to: (1) identify research professionals’ priorities, (2) understand localised challenges, (3) define study complexity and workloads supporting the development of a trial rating and complexity assessment tool (TRACAT).

Design

A classic eDelphi completed in three rounds, conducted as the launch study to a multiphase national project (evaluating follow-up and complexity in cancer clinical trials).

Setting

Multicentre online survey involving professionals at National Health Service secondary care hospital sites in Scotland and England, varying in scale, geographical location and patient populations.

Participants

Principal investigators at 13 hospitals across nine clinical research networks recruited 33 participants using pre-defined eligibility criteria to form a multidisciplinary panel.

Main outcome measures

Statements achieving a consensus level of 70% on a 7-point Likert-type scale and ranked trial rating indicators (TRIs) developed by research professionals.

Results

The panel developed 75 consensus statements illustrating factors contributing to complexity, follow-up intensity and operational performance in trial delivery, and specified 14 ranked TRIs. Seven open questions in the first qualitative round generated 531 individual statements. Iterative survey rounds achieved return rates of 82%, 82% and 93%.

Conclusions

Clinical trials operate within a dynamic, complex healthcare and innovation system where rapid scientific advances present opportunities and challenges for delivery organisations and professionals. Panellists highlighted cultural and organisational factors limiting the profession’s potential to support growing trial complexity and patient follow-up. Enhanced communication, interoperability, funding and capacity have emerged as key priorities. Future operational models should test dialectic Singerian-based approaches respecting open dialogue and shared values. Research capacity building should prioritise innovative, collaborative approaches embedding validated review and evaluation models to understand changing operational needs and challenges. TRACAT provides a mechanism for continual knowledge assimilation to improve decision-making.

Keywords: cancer research, follow-up, Delphi methods, protocol complexity, workforce planning, Singerian Inquiry


Strengths and limitations of this study

  • The multimodal study design developed consensus-defined trial rating and complexity indicators to support objective analysis of cancer research delivery adaptable to operational evaluation in other therapeutic areas and global settings.

  • Qualitative aspects provide in-depth contextual evidence through the ‘voices’ of patient-facing professionals, articulating human and social aspects of research.

  • This study is the first, to our knowledge, to present a Delphi methodology adopting a Singerian approach involving research professionals, in a consensus process which is holistic and dialectical.

  • The study involved key stakeholders from a wide geographic base reflecting a heterogeneous sample of clinical trial professionals.

  • Participants were limited to research professionals delivering studies at National Health Service sites in Scotland and England. Future research is planned involving a wider demographic to include sponsors, funders, networks and policymakers.

Introduction

Clinical trial delivery in hospital settings is crucial in advancing cancer care and treatment options with evidence indicating sustained commitment to research enhances performance and patient outcomes.1 Cancer research has evolved rapidly in recent years, with innovations in immunotherapy and precision medicine increasingly prioritised in healthcare policy. The National Health Service (NHS) has published ambitions to accelerate innovation, outlining a framework for rapid adoption of next generation treatments offering personalised, stratified care and follow-up models.2 3

The ability to translate scientific, laboratory advances in cancer research into clinical and patient benefit through clinical trials is a critical requirement for healthcare providers, as cancer incidence and patient populations continue to grow.4

Realising these translational benefits is increasingly challenging for sites as cancer clinical trial complexity grows,5 with niche designs and stratified treatments affecting research delivery costs and resources. Cancer research is an interdisciplinary enterprise advancing patient care and therapeutic benefits through a collaborative research pathway involving scientific, translational and clinical research trials. As trials evolve to study rare diseases, wide-ranging cancers and molecular sub-types, delivery complexity and workloads grow in tandem. Intricate protocols, narrow selection criteria, high data demands and extended safety, efficacy and outcome monitoring6 7 are stretching staff and site capabilities.

A predicted 70% increase in cancer incidence8 within 20 years, combined with improving survival rates, follow-up demands and funding pressures, necessitates operational review of trial designs and implementation frameworks to articulate impacts on sites, patients and professionals. Systematic, structured evaluation of research delivery in secondary care (hospital) settings is limited, with little current empirical study of trial complexity, follow-up impacts, workloads, institutional dynamics or operational processes across complex healthcare institutions such as the NHS. In-depth review is a paramount priority for the healthcare industry to comprehend variables contributing to service pressures, identify changing stakeholder needs and facilitate evidence-based commissioning of services through appropriately aligned funding and support models.

Delivering research in the era of precision medicine is intense and complex, a clinical reality strongly evidenced in international literature.9 Analysis of operational delivery involving key delivery stakeholders has predominantly operated at regional levels, limiting global relevance, and has not yet led to transformative models.10 Lyddiard et al 11 undertook a UK collaborative study to develop a workload measurement tool but excluded investigator and pharmacist roles, anticipating challenges in collating accurate workload data. Further research recommended qualitative evaluation of workload and complexity alongside development of trial rating models using experts whose advice is ‘fundamental to the weighting and scoring’.12 However, within healthcare applications and systems development there is a persistent lack of dialogue with ‘users and implementers of technology for data capture’.13 Operational evaluation including assessment of technologies, training solutions, capacity planning and research delivery models should involve subject-matter experts capable of providing grounded knowledge and insight. The significant complexity gap and incremental patient follow-up activity require external recognition. Currently there is no national analysis of follow-up or protocol complexity workloads to understand fluctuating operational and resource demands at local, regional and national levels. Systematic rating of trial attributes in real time and over study lifetimes will create longitudinal data sets enabling evidence-based cost attribution and funding decisions to enhance research capacity and productivity. The extant literature underlines a need for broad, cyclical and continual analysis of research advancements and disease burdens to anticipate future demands for resources, as well as facilitating sustainable growth, productivity and improvements in patient care.

Enabling research growth necessitates structured workforce planning; yet there is poor application of this crucial management function across the NHS.14 To build capacity, manage increasingly complex trials and support patient-centred care, research organisations, funders and policymakers need to evaluate current delivery and performance management models, seek interdisciplinary stakeholder feedback and consider adopting creative, design-thinking approaches with reflective and critical capabilities.15 Research into Singerian organisational models has shown that holistic and dialectic approaches to understanding context-related challenges support process improvement and knowledge generation. Organisations cultivating positive communication with well-integrated systems are associated with improved performance and healthcare outcomes.16 Holistic, collaborative team environments promote valued attributes of respect, creativity and knowledge sharing.17

Aims

Cancer research forms part of a complex collaboration between scientists, clinical research professionals and patients. Evaluation of patient follow-up in cancer clinical trials, and of the nature of complexity in its many forms, needs to be grounded in the experiences and challenges of research professionals implementing and delivering cancer clinical trials in hospital settings. In this study we aimed to contribute to existing knowledge of translational cancer research, to support acceleration of laboratory advances for patient benefit, by engaging research professionals in a democratic, systemic evaluation of cancer clinical trial research delivery. We sought multidisciplinary perspectives to: (1) identify research professionals’ priorities, (2) understand localised challenges, (3) define study complexities and workloads supporting the development of a trial rating and complexity assessment tool (TRACAT). This study adopted a holistic, consensus-based design engaging patient-facing clinical trial professionals in developing grounded, contextual knowledge of trial implementation and end-user input into the development of TRACAT, which will function as an operational decision-support tool, as well as highlighting views, perceptions and priorities for their professional field.

Methods

Study design and approach

To facilitate a detailed systems evaluation sensitive to the multi-faceted nature of cancer research delivery, a multimodal study was developed. The design reflects the Singerian-Churchmanian model of inquiring systems (SCIS) valuing ethics and community knowledge in complexity evaluation and decision-making.18 The adopted design combining the Delphi technique with a Singerian approach followed an initial scoping review covering subject, policy and methodological literature. The review identified key challenges for the profession directing the overall research and initial survey design. A democratic approach was needed, recognising multiple perspectives combined with individual knowledge and experience, to form a comprehensive understanding of the complexities of the systems and networks in which research professionals operate, through a dialectical group consensus process, a Singerian Delphi. SCIS provide a framework and meta-method approach to generating actionable knowledge, capable of addressing wicked, complex problems and ‘sensemaking in complex, multifaceted, subjective’19 contexts.

Delphi technique

The Delphi technique is widely used in healthcare to gain insight from front-line experts knowledgeable within specific fields.20 It provides practical applications in consensus development, prioritisation, forecasting, policy development and investigation of multi-faceted issues.20–22 We adopted the method to elicit expert opinion in developing a comprehensive rubric of research delivery variables and in the analysis of complex problems within a group.23 Healthcare and research delivery operate within complex adaptive systems with diverse and multifarious units, processes and interactions. Analysis of complexity concepts provides an explanatory, sensemaking device to interpret ‘phenomena in diverse applications’24 which are dynamic, emergent and entwined. The professionals recruited to the panel performed an ethical role, as their observations and engagement in identifying trial-rating attributes contribute to designing an evaluation tool for operational decision-making and strategic planning. The design of technical applications or models for strategic evaluation or decision support and inclusion criteria for measurement or quantitative judgements should be based on input from ‘experts’ in the field (patients and professionals), the users and benefactors of ‘human-centred automation'.13 17 23 For this reason, the research commences with a Delphi designed from a Singerian inquiring system perspective, drawing ethics and heuristics into the development of an information system and model.25 This Singerian-oriented Delphi aimed to incorporate diverse knowledge, experience and ideologies of multiple stakeholders, disciplines and personality types26 to form a prismatic view of cancer research delivery sensitive to its evolving, multi-faceted and complex nature.27

Sampling procedure

A purposive selection process recruited NHS secondary care (hospital) sites from a wide geographic base in the UK. This supported formation of an ‘expert’ panel of professionals, knowledgeable in delivering research at teaching, acute or district general hospitals providing services to rural and metropolitan patient populations. Site characteristic diversity, based on scale and nature of operations and patient populations, aimed for a heterogeneous sample minimising bias and facilitating expression of a range of perspectives. To achieve a target sample (n=20) researchers planned to recruit between 22 and 30 participants. While this is a relatively small sample, the key consideration in selecting a Delphi panel is the knowledge and expertise of participants in relation to the research. The interdisciplinary nature of research and delivery roles required a range of professionals to form an expert panel. A smaller sample size is effective when panellists are similarly knowledgeable and expert in the field of study.28

Recruitment procedure

Principal investigators at sites approached potential participants based on their knowledge and experience within cancer research delivery. Pre-defined eligibility criteria stipulated that professionals should have 18 months’ experience in a secondary care setting within a research delivery or support role, currently or within the past 18 months.

Materials and survey design

The three-round eDelphi took place online between January and August 2018 using Qualtrics software. Participant information sheets described the iterative process, commencing with open questions in round 1 and moving to structured questions in subsequent rounds. The anonymised design meant participants’ identities were unknown to other panellists, a key benefit of the technique.29 Anonymity facilitates free and open expression, removing the potential for domination by senior or influential colleagues and the bias that can arise when participants submit to peer pressure within an open group.30 References to roles within individual textual responses were removed, protecting participants’ anonymity and preventing role seniority from influencing consensus development. Consenting participants received an invitation and link to the online questionnaire. Detailed instructions guided panellists throughout, with individual feedback provided between rounds. Experts were encouraged to complete surveys as fully as possible to facilitate comprehension of perspectives, priorities and levels of consensus and support reliability of results. Optional free-text comments at the end of each question section and survey encouraged dialogue, reflection and refinement of observations. The roles of participants and their ethical contribution were detailed in the study information sheets and documents provided to participants who consented to join the ‘expert panel’.

First round survey

Panellists provided their definitions, perceptions and suggestions to seven open questions shown in table 1. The broad nature of questions aimed to generate rich responses iteratively testing inter-connection of phenomena between categories. Individual responses were analysed in NVivo with responses coded thematically. Similar themes were condensed into the initial 201 group statements with care taken to retain as much of participants’ intended meaning as possible. Participants were advised that themes suggested by the panel would be developed as trial rating indicators (TRIs) as part of the TRACAT tool to support workforce and capacity planning.

Table 1.

First round open questions

Q1. Follow-up definition The term ‘follow-up’ in clinical trials can have different interpretations dependent on the role of the researcher. Please provide your definition of the term ‘follow-up’ in relation to cancer clinical trials.
Q2. Barriers and burdens Please describe the phenomena you encounter in your role within cancer clinical research, which you perceive as barriers or burdens to effective trial implementation and delivery. Please feel free to list as many issues or concepts as you wish. These could relate to local, departmental or regional factors as well as cultural, resource and study design elements.
Q3. Complexity Please provide your analysis of complexity in terms of delivering cancer clinical trials. This could include the complex nature of the disease or interactions involved in managing the treatment and care pathway for a cancer patient participating in a clinical trial. Please feel free to suggest as many themes as you wish.
Q4. Capacity factors Please describe factors affecting your capacity to support and deliver cancer clinical trials within the NHS. These can be elements relative to your specific role, organisation or more global factors. Please list as many considerations as you wish.
Q5. Top priorities Please suggest your top three strategic priorities for the future delivery of cancer clinical trials in the NHS.
Q6. Effective practice Please provide your views on existing elements of cancer clinical research practice within the NHS, which contribute to or demonstrate efficient trial delivery and practice.
Q7. Additional considerations Please add any additional elements you feel should be considered by the Delphi panel in relation to reviewing the operational delivery, follow-up and complexity of cancer clinical trials.

NHS, National Health Service.

Second round survey

Panel-developed statements were circulated alongside a 7-point Likert-type scale ranging from strongly disagree (1) to strongly agree (7), for participants to confirm their level of agreement with each question category statement. A new survey section (question 8) asked panellists to rank TRACAT categories from lowest priority (1) to highest priority (7) as factors to include as TRIs and complexity indicators. To form the initial TRI categories, first round responses were coded in NVivo and ranked by frequency of themes.

Third round survey

Panellists received the previous round’s results showing the percentage level of agreement and median response to each statement alongside their own selection. Panellists were asked to review initial responses in light of levels of agreement and either revise or leave their original selection unchanged, following reflection on wider perspectives. Participants were encouraged to comment on their reasoning when changing a response by more than two scale points away from consensus or from their original selection. Final round panellists received a summary report of consensus statements and ranked TRACAT categories.

Data analysis

The qualitative data from the open round were content analysed and coded thematically in NVivo using a framework approach to create the initial complexity categories in question 8. The statements relative to each individual question category are shown in table 1. A second stage of hand coding was performed to validate the initial analysis. Quantitative analysis of the second and third round Likert-type scale responses was performed using SPSS V.22.0. Summary statistics reported to panellists described the frequency of responses to statements (percentage level) and the median (measure of central tendency). In addition, the IQR was used as a measure of dispersion in analysing stability of responses and the move towards consensus, in order to decide on the final survey iteration.

Consensus level and validity

Consensus was defined as 70% of panellists rating a statement the same on the 7-point Likert-type scale, a recognised level of agreement.31 Instructions advised participants that a convergence of opinion and the agreed consensus measure would determine the stopping point for the study. Items achieving frequency consensus and median strength of agreement contribute to future questionnaire and interview designs.
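To make the consensus rule and summary statistics concrete, the sketch below (in Python; the study itself used SPSS V.22.0) illustrates how the agreement percentage, median and IQR for a single statement could be computed, assuming ratings are stored as integers from 1 to 7. The function name and example ratings are illustrative assumptions, not part of the study's analysis code.

```python
import statistics

CONSENSUS_THRESHOLD = 0.70  # 70% of panellists giving the same rating

def summarise_statement(ratings):
    """Summarise one statement's 7-point Likert ratings (integers 1-7).

    Returns the modal rating, the share of panellists selecting it,
    whether the consensus threshold is met, the median (central tendency)
    and the IQR (dispersion, used to judge stability between rounds).
    """
    n = len(ratings)
    modal = max(set(ratings), key=ratings.count)
    share = ratings.count(modal) / n
    q1, _, q3 = statistics.quantiles(ratings, n=4)  # quartile cut points
    return {
        "modal_rating": modal,
        "agreement_pct": round(100 * share, 1),
        "consensus": share >= CONSENSUS_THRESHOLD,
        "median": statistics.median(ratings),
        "iqr": q3 - q1,
    }

# Hypothetical ratings from a 25-member panel for one statement
example = [7] * 23 + [6, 5]
print(summarise_statement(example))
# {'modal_rating': 7, 'agreement_pct': 92.0, 'consensus': True, 'median': 7, 'iqr': 0.0}
```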

Patient and public involvement

A patient advisory group reviewed the study design prior to submission to the Health Research Authority (HRA) and research ethics review, with revisions made following their recommendations. Panellists received a final consensus report and other stakeholders had the option to receive results by a preferred method: print, email, Qualtrics or the evaluating follow-up and complexity in cancer clinical trials (EFACCT) website; https://efacct.com/.

Results

The target sample (n=20) was exceeded, with 33 professionals from 13 hospitals and nine local research networks consenting to join the expert multidisciplinary panel. Forty-four potential participants were approached, with 11 professionals declining due to limited capacity or availability to complete the surveys. The summary demographics and return rates are shown in table 2. Twenty-five research professionals completed the three-round process, an increase of 25% on the initial planned sample, compensating for a 24% participant dropout rate. Regular communication with panel members encouraged retention, but the robust return rates and continued commitment also suggest the study’s importance in providing a platform to elucidate role-specific experiences and challenges. The number of panel statements generated in the opening round within each question category is detailed in table 3, alongside the percentage of statements achieving consensus by each category and round.

Table 2.

Participant demographics and response rates by round

Characteristic Round 1 Round 2 Round 3
n % n % n %
Gender
 Male 4 14.81 4 14.81 3 12.00
 Female 22 81.48 22 81.48 21 84.00
 Other 1 3.70 1 3.70 1 4.00
Age
 25–34 3 11.11 3 11.11 2 8.00
 35–44 9 33.33 9 33.33 9 36.00
 45–54 10 37.04 10 37.04 9 36.00
 55–64 5 18.52 5 18.52 5 20.00
Years in clinical research
 Between 2 and 5 years 8 29.63 9* 33.33 9 36.00
 Between 5 and 10 years 11 40.74 11 40.74 9 36.00
 More than 10 years 8 29.63 7 25.93 7 28.00
Role
 Research and development manager 4 14.81 3 11.11 3 12.00
 Research nurse 8 29.63 9 33.33 8 32.00
 Research nurse manager 2 7.41 2 7.41 2 8.00
 CI, PI or co-investigator 3 11.11 3 11.11 3 12.00
 Data manager 2 7.41 2 7.41 2 8.00
 Clinical/senior clinical trials practitioner 3 11.11 3 11.11 2 8.00
 Finance business partner 1 3.70 1 3.70 1 4.00
 Research nurse and PI 1 3.70 1 3.70 1 4.00
 Research support officer 1 3.70 1 3.70 1 4.00
 Research radiographer 1 3.70 1 3.70 1 4.00
 Research pharmacy technician 1 3.70 1 3.70 1 4.00
Total participants 27 27 25

*One participant joined the study in round 2.

CI, chief investigator; PI, principal investigator.

Table 3.

Consensus statements by question category and round

Question category Consensus statements (n) Consensus within category (%) Statements in category (n) Total panel statements (%)
Round 2 performance
 Q1. Follow-up definition 1 25.00 4 0.50
 Q2. Barriers and burdens 6 13.04 46 2.99
 Q3. Complexity 1 2.86 35 0.50
 Q4. Capacity factors 1 2.17 46 0.50
 Q5. Top priorities 2 5.88 34 1.00
 Q6. Effective practice 4 15.38 26 1.99
 Q7. Additional Delphi considerations 0 0.00 10 0.00
 Round 2 totals 15 201 7.46
Round 3 performance
 Q1. Follow-up definition 1 25.00 4 0.47
 Q2. Barriers and burdens 21 45.65 46 9.81
 Q3. Complexity 10 28.57 35 4.67
 Q4. Capacity factors 9 19.57 46 4.21
 Q5. Top priorities 23 67.65 34 10.75
 Q6. Effective practice 9 34.62 26 4.21
 Q7. Additional Delphi considerations 1 4.30 23 0.47
 Round 3 totals 75 214 35.05

Round 1 survey results

Round 1 achieved a return rate of 81.82% with 27 participants completing the initial qualitative survey and demographic information. Open question responses were comprehensive leading to the generation of 531 individual statements, analysed and condensed into 201 group statements.

Round 2 survey results

Round 2 achieved the same return rate, with 15 statements reaching consensus (7.46% of total statements). One participant joined the panel for the quantitative survey rounds and had the option to provide individual feedback through free-text comments, in line with all other participants.

Round 3 survey results

Twenty-five panellists returned the final survey, a return rate of 92.59%. This round included 13 additional statements generated from free-text responses. Table 3 details the 75 statements reaching consensus. In addition, 14 TRIs were identified, with four achieving a median rating of 7 (highest priority) and the remaining items rated 6 or 6.5. Non-responders to round 2 were not included in the third circulation. Based on the group’s move towards consensus, the third survey formed the final round.

Summary of panel responses and discourse

The results provide detailed insights into factors contributing to complexity, follow-up intensity and resource impacts for sites. The researchers chose to retain the broad nature of participant statements following data collection of the initial qualitative open round. As a criterion of the Singerian Delphi, professional panellists needed to witness the diversity, depth and richness of colleague responses, and the complexity of problems in social settings. In retaining detailed statements, the full nature of participants’ sentiments is expressed, allowing the Delphi panel the opportunity to reflect on broader perspectives, concepts and nuances of meaning. Characterising a Singerian inquiring approach, the Delphi study served as a process for adding to ‘substantive knowledge’ and ‘participants’ knowledge of themselves’ through a group reflective process.23 Participant feedback was encouraged throughout, supporting the concept of the Delphi as a self-reflective and collective decision-making process, whereby there is a move towards consensus, or a participant’s conscious informed choice to revise their opinion or personal philosophy based on wider perspectives of peer group experiences. Panellists described changes in their perspectives stemming from a new understanding of ‘how things may be’ in different contexts or ‘in light of more recent experiences and discussion’. Other feedback illustrated the influence of changing circumstances and experiences on perceptions and sensitivities during the course of the study, leading to reflection and adjustment of initial views and recognition of the subjective nature of issues. Statements achieving the highest levels of agreement are detailed under each question category. Online supplement 1 presents the full list of panel consensus statements.

Supplementary data

bmjopen-2019-034269supp001.pdf (146.2KB, pdf)

Follow-up definition

Participants provided personal definitions of ‘follow-up’ in relation to cancer clinical trial delivery. Responses highlighted diverse interpretations with 56% of panellists defining follow-up as activities relating to any or multiple protocol stages (including active and post-treatment phases) while 44% identified follow-up as occurring solely post-active treatment.

Panellists confirmed their level of agreement to summarised definitions of follow-up created from individual interpretations to form three core categories: (1) any trial stage, (2) multiple stages, (3) post-active treatment. An additional question in round 2 asked panellists to consider the need for a nationally agreed definition supporting research delivery. Panel-developed definitions did not reach consensus but 92% of professionals strongly agreed on a need for a nationally agreed definition of the term and its sub-types (table 4).

Table 4.

Q1 Follow-up definition consensus statement

Statement Median response Consensus level (%)
1.4 NIHR/nationally agreed definition of follow-up: A nationally agreed definition of the term 'follow-up' and/or types of 'follow-up' in relation to research delivery in the NHS should be published by the NIHR so that all clinical research professionals, allied professions and associated bodies conform to a standard terminology and parameters. Strongly agree (7) 92

NHS, National Health Service; NIHR, National Institute for Health Research.

Barriers and burdens

In round 1 the panel described phenomena encountered in their roles within research and elements perceived as barriers or burdens to effective practice. This category reached high levels of agreement with 21 statements achieving consensus, the highest of which called for an ‘effective and consistently validated funding and support model’, recognising increased levels of complexity within cancer clinical trials and associated workloads. Panellists agreed strongly (92% consensus) that the funding of research delivery does not ‘accurately reflect the requirements, time and effort of sites’ representing a risk for NHS organisations in delivering effective research with inadequate resources and staffing levels (table 5).

Table 5.

Q2 Barriers and burdens—top consensus statements

Statement Median Consensus level (%)
2.19 Trial sites are under constant pressure to open trials with expectations to recruit high numbers of trial participants to increasingly complex and higher intensity trials treating patients with rare cancers while being faced with reduced resources. Budgetary constraints and outdated payment terms that do not accurately reflect the requirements, time and effort of sites represent a high risk to NHS organisations where audited and reduce the capacity to maintain effective trial delivery and meet patient needs through inadequate staffing levels. The NIHR needs to acknowledge the increased complexity of cancer trials, the workload impact in coordination and management, augmented lab work and data management demands and comprehend the nature of academic and commercial trials and their associated pressures on research delivery sites and staff through the development of an effective and consistently validated funding and support model. Strongly agree (7) 92
2.35 The management of patient follow-up in cancer studies is a key factor affecting site capacity and ability to implement, recruit to and deliver effective research. Follow-up visits for cancer patients and research studies can continue for many years and often until death. Patients may also transfer from other hospitals for follow-up care, which has an impact on the research staff and capacity at site. Follow-up data are essential to the outcomes of research studies but the NIHR research delivery model focuses on and supports recruitment but not follow-up activities. With continual pressure to open studies to gain accruals the ability of teams to manage existing numbers of patients in follow-up is compromised leading to missed timelines, patient visits and missing data, which could be extremely detrimental to follow-up studies and invalidate results of the trial. These burdens and issues are not recognised within research delivery. Strongly agree (7) 88
2.13 PI oversight and involvement are lacking at times in certain tumour sites, studies or hospital locations, particularly for multi-site trusts where the PI works from one centre, leaving research nurses feeling unsupported. When new studies are set up it is important to ensure there is a clear understanding of roles and responsibilities of the research team so that workloads can be accurately assessed. PIs should be aware that they could delegate tasks according to GCP but retain overall responsibility for the study beyond the treatment elements and need to maintain involvement in patient follow-up and review. Strongly agree (7) 88
2.4 Support and retention of research professionals, nurses and specialist roles as well as the provision of sufficiently skilled resource should be the focus of the NIHR and trusts to ensure safe and efficient research environments and reduce excessive workloads. Staff turnover, changes, sickness and absence all have a significant impact on research implementation and delivery at sites. Strongly agree (7) 84
2.23 Protocols and study documentation supplied to assess capacity and capability do not show the impact of eCRFs or the full extent of information and demographic data required. High data demands and the management of sponsor data queries are a significant and time-consuming administrative burden for sites. Difficulties in communication or slow responses can lead to extended or additional work for sites especially where a sponsor's representative does not comprehend the problems in obtaining retrospective information or understand the nature of certain data issues. Strongly agree (7) 84

eCRF, Electronic case report form; GCP, Good clinical practice; NHS, National Health Service; NIHR, National Institute for Health Research; PI, principal investigator.

Analysis of complexity

The highest level of consensus within the study was reached in this category, with 96% of professionals strongly agreeing that growing protocol burden adds to operational complexity (table 6). Ten statements in this domain reached consensus, 60% of which had a consensus level of over 80%. A further 11 statements in this group fell within 10% of the consensus threshold, sharing agreement levels of over 60% between panellists.

Table 6.

Q3 Analysis of complexity—top consensus statements

Statement Median Consensus level (%)
3.21 Cancer clinical trial protocols have varying degrees of complexity but the burden of protocol procedures is growing which adds to the complexity of implementing and delivering studies, with incremental levels of training (eg, 450 training slides on a five arm study with strict guidelines) and increased volumes of tests, questionnaires, visits, assessments and more detailed data requirements. Strongly agree (7) 96
3.1 Cancer is no longer one diagnosis but a complex range of conditions with many subgroups. Cancer clinical research complexity is growing as trials now study a wide range of cancers, rare tumours, haematological malignancies and molecular sub-types with treatments becoming precise, targeted and having more options at each stage of the cancer journey. Trials may now only be suitable for a subgroup of the cancer population such as lymphoma, which has more than 70 sub-types. Sites need to have a greater number of trials open to ensure patients have the opportunity to participate, but each trial will recruit a smaller number of patients adding to the complexity of delivering research. Strongly agree (7) 92
3.17 Managing the communication and coordination of clinical trial appointments, procedures and diagnostics, for example, mammography, ECHO, ECGs, clip insertion, CT scans, bone marrow and surgical/specialist procedures are pressurised and complicated when liaising with multidisciplinary teams and support services to meet protocol specific time frames or treatment windows. Aligning a study with the 2-week wait or fitting it into a surgical pathway isn't always possible due to operational problems and capacity issues. Strongly agree (7) 88
3.6 The clinical trial phase is a key determinant in study complexity with earlier phase studies typically more complex, requiring lots of visits, extra tests or PK analysis. Early phase clinical trials frequently need input from other departments for example, ophthalmology or dermatology requiring collaboration to arrange time and appointments. Studies involving overnight stays can be hard to organise due to bed and resource capacity. Admitting patients for trial monitoring can be hard to justify and negotiate when beds are full. Later stage studies such as phase 3 may include standard of care but complexity is added due to the larger volume of patients required and lengthy follow-up. Strongly agree (7) 88
3.16 Protocol designs that involve short timelines and windows for procedures are more complex and logistically challenging for sites to deliver when trying to schedule registration, randomisation, assessments and treatment around the availability of NHS resources, especially where there is little flexibility from the sponsor. It can be difficult when a patient is excluded from a trial because of scan timings or initial bloods not having been taken by other clinicians who saw the patient first at diagnosis, but not as part of a trial. Additional complexities arise from late diagnostics where a patient comes to the centre late. Strongly agree (7) 80

ECHO, Echocardiogram; NHS, National Health Service; PK, Pharmacokinetics.

Factors affecting capacity

In round 1 the panel described factors affecting their capacity to support and deliver cancer trials. Nine statements reached consensus with the highest item level of agreement (88%) alluding to organisational inadequacies in communication, collaboration and integration across services, impeding the effectiveness of trial delivery (table 7).

Table 7.

Q4 Factors affecting capacity—top consensus statements

Statement Median Consensus level (%)
4.2 Effective communication is the golden thread, which ensures an organisation can work effectively. The lack of integration, communication and collaboration across hospital sites and departments impacts trial delivery. Strongly agree (7) 88
4.4 Inadequate resources and facilities affect the capacity of research staff to conduct their jobs to the standards expected. Strongly agree (7) 88
4.3 Inadequate staffing levels make it difficult for teams to meet the demands of current trials and to run as efficiently and effectively as possible. Strongly agree (7) 84
4.45 Protocols, which are overly complicated, do not realistically work with hospital systems or have been written in such a way that they are hard to interpret impact capacity and efficiency. Studies with well-written protocols that consider the practicalities of trial delivery are much easier for sites to run. Strongly agree (7) 84
4.46 The increasing complexity of new cancer trials and protocols can be challenging for sites to deliver and therefore detailed feasibility is essential, but the implications of running the study is not always apparent at the outset as frequent or unnecessary amendments can impact the capacity of the team as the study progresses. Strongly agree (7) 84

Strategic priorities

The largest number of consensus statements by category related to strategic priorities with 23 items reaching an agreement level of 76% or higher. Five statements shared panel consensus of 88% in terms of their priority for research delivery, four of which related to social aspects of operations: cognition, collaboration and communication (table 8).

Table 8.

Q5 Top strategic priorities—top consensus statements

Statement Median Consensus level (%)
5.13 Decision makers at national and local levels require a greater level of understanding of the constraints, resource and capacity issues and the priorities for research delivery and funding in the NHS. Strongly agree (7) 88
5.2 Development of biomarkers for predicting suitability and response to treatment and early diagnosis techniques. Strongly agree (7) 88
5.20 Promote cultural change and education to raise the profile of research and highlight the importance of clinical trials in the provision of cancer care within the NHS. Strongly agree (7) 88
5.22 Ensure development of strong working relationships and rapport between research teams and supporting departments. Strongly agree (7) 88
5.6 Improve collaboration and communication between trusts and organisations (including non-NHS care providers such as hospices) to ensure patient care and choice are prioritised and all are given the opportunity to participate in research, where desired and appropriate. Strongly agree (7) 88

NHS, National Health Service.

Effective research practice

Panellists provided views on existing elements of cancer clinical research practice in the NHS they felt contributed to or demonstrated efficient trial delivery and practice. Statements achieving consensus and a median response of strongly agree in this category related to human-centred elements of research delivery with seven statements reaching 80% agreement levels or above (table 9).

Table 9.

Q6 Effective research practice—top consensus statements

Statement Median Consensus level (%)
6.17 Good communication skills and effective patient relationships help participants understand the trials and what participation will mean for them. Strongly agree (7) 88
6.2 Well run, established departments and research teams who receive regular training are efficient, proactive, flexible to change and demonstrate a wealth of knowledge and excellence in clinical trial delivery. Strongly agree (7) 84
6.14 Principal investigators who proactively support and engage with the research team are available to provide advice when required, maintain oversight on their trials, including follow-up visits and discussion of treatment plans, ensure that trials are run effectively and safely in their research area. Strongly agree (7) 80
6.18 Effective practice is demonstrated by dedicated staff who are willing to go above and beyond to recruit and support patients in clinical trials. Caring and skilled research professionals who treat patients as individuals and not just as a recruitment figure are appreciated by patients who value their support, and continue on the trial for follow-up visits and are less likely to withdraw from studies. Strongly agree (7) 80
6.21 The provision of dedicated teams and specialists for specific cancer disease areas/sites within trial units enhances research delivery and staff knowledge in their specialty, in contrast to stretching resources across multiple specialisms. Strongly agree (7) 80

Additional Delphi considerations

A final broad category provided participants the opportunity to suggest additional items for panel consideration. Existing categories incorporated related statements but themes which were new, unique or covered multiple areas were presented in section 7. Free-text responses provided by panellists generated 23 statements with one achieving consensus (table 10).

Table 10.

Q7 Additional Delphi considerations—consensus statements

Statement Median Consensus level (%)
7.3 Supporting the primary endpoints of clinical trials should be the main goal of the NIHR and follow-up should be appropriately funded to achieve this. Strongly agree (7) 72

NIHR, National Institute for Health Research.

TRACAT—trial rating and complexity assessment tool

First round statements were coded thematically within NVivo, creating a matrix of codes which were quantified by frequency of themes to form the initial trial complexity analytical categories of question 8. The 14 TRIs (complexity scoring statements) were prioritised by panellists from lowest priority (1) to highest priority (7). The panel ranking of the TRIs, which will be used to develop the TRACAT tool, is detailed in table 11.

Table 11.

Trial Rating Indicators (TRIs) priority rankings

Rank Q no TRI category Priority (%) Median (scale: 1=lowest priority to 7=highest priority)
1 8.2 Protocol procedures—treatments, interventions, tests, samples and their volumes, frequencies and timelines. 72 7
2 8.1 Resource demands—feasibility and personnel impact. 72 7
3 8.7 Investigational treatment complexity—drug administration, novel therapy/drug, toxicity and risk, treatment windows and timelines. 64 7
4 8.5 Follow-up and visit requirements—type, frequency and duration. 60 7
5 8.3 Data management, administration and monitoring—sponsor defined requirements. 48 6.5
6 8.4 Support department involvement and outsourcing—support services (trust/external), for example, RECIST reporting, QA procedures, specialist skills, facilities, equipment, central review or sub-contracted requirements. 48 6
7 8.8 Clinical efficacy and safety—clinical pharmacology and pharmacokinetics requirements. 44 6
8 8.11 Patient management—patient monitoring, safety, reporting or complex patient pathways. 44 6
9 8.12 Patient selection—patient identification, screening, eligibility criteria and consent process. 36 6
10 8.6 Cancer disease complexity, patient population and health status 32 6
11 8.13 Trial phase and design—randomisation process, multiple treatment arms, blinding, study phase 28 6
12 8.10 Recruitment potential—recruitment feasibility and target potential by disease and study type. 24 6
13 8.14 Technology and training—sponsor defined requirements for study. 24 6
14 8.9 Protocol variations—protocol amendments, study extensions and ancillary/sub-studies. 16 6

QA, Quality assurance; RECIST, Response Evaluation Criteria in Solid Tumours; TRIs, trial rating indicators.

Discussion

Overview of main findings

The Delphi’s primary aim was to evaluate cancer clinical research delivery with a focus on patient follow-up and complexity from a multidisciplinary perspective. The study provides in-depth insights of professionals working at the forefront of cancer clinical trial delivery, identifying priorities, concerns and indicators of research complexities. Consensus and priority factors developed by expert panellists illustrate tensions and pressures within the profession. The main findings are discussed in relation to the key objectives across the eight inter-related survey categories with cross-over themes.

Evaluating follow-up and complexity

Follow-up definition: Patient follow-up in cancer clinical trials is a key factor affecting capacity to deliver research, requiring an ostensive definition to ensure support models for its effective management develop from a clarified and equitable stance. The meaning participants attached to follow-up varied significantly, which has implications for operational review. Implementation of a funding model acknowledging resource implications in patient follow-up management reached consensus as a strategic priority. Panellists strongly agreed that managing follow-up was a key factor affecting capacity, calling for recognition of the challenges faced and intimating the National Institute for Health Research (NIHR) recruitment-focused delivery model does not support follow-up. The group expressed a view that follow-up data are essential to successful trial outcomes but felt under pressure to open new studies to gain accruals, with a detrimental effect on their ability to support existing patients.

Barriers and burdens: A common thread running through statements on barriers and burdens within research was an expression of sites being under pressure, with perceptions of high expectations and demands placed on staff while faced with reduced resources. Communication issues, both internally and externally, were a common theme and perceived as a barrier to effective research. Concerns also related to sponsor documentation and inadequacy of information to accurately assess capacity and capability, or determine the full impact of delivering a study, in terms of its associated workloads and administrative burden. High levels of agreement between panellists conveyed a sense of feeling unsupported, noting that principal investigator oversight and involvement can be lacking at times, and recommending a clear understanding of roles and responsibilities alongside accurate assessment of workloads.

Analysis of complexity: In addition to incremental interventions, tests and procedures within evolving study designs, the panel highlighted factors relating to the nature of cancer as a complex disease. Wide-ranging sub-types and niche patient populations combined with variations in health status and support needs of patients add to research complexity. While trial phase is a recognised contributor to complexity, participants frequently cited short timelines and visit windows for protocol procedures as being problematic, particularly in terms of aligning sponsor requirements to site capacity, treatment pathways and the coordination of procedures, multidisciplinary teams and support services.

Factors affecting capacity: Strong consensus existed between research professionals with regard to capacity factors. Inadequacies in staffing levels, funding, resources and facilities featured alongside constraints relating to overly complicated protocols designed without due consideration for practicalities of research delivery. Frequent amendments to trials also affected ongoing capacity reflecting uncertainty within research delivery which cannot always be predicted at site feasibility.

Strategic priorities: Participants strongly agreed on strategic priorities relating to culture, education and collaborative relationships, all social aspects of research delivery. A patient-focussed priority reached an 88% consensus on the requirement to develop biomarkers for prediction of suitability and response to treatment and early diagnosis. The panel came to the same level of consensus in respect of national and organisational recognition of the challenges faced by professionals and sites. A group perspective illustrated the need for local and national leaders to develop greater understanding of the ‘constraints, resource and capacity issues and the priorities for research delivery and funding in the NHS’. The high levels of consensus relating to environment, culture, education, resources and investment delineate the needs of a profession within an evolving healthcare system, providing a strong focus for the NIHR and policymakers and impetus for further dialogue and review.

Effective research practice: Themes of open communication, staff commitment and dedication, well-trained and informed staff and strong collaborative teamwork all achieved high levels of consensus between the Delphi panellists. These skill sets within the profession allow sites and research staff to share best practices, retain staff and contribute to efficient trial delivery despite current challenges and resource limitations.

Additional Delphi considerations: The one statement achieving consensus in this category called for appropriate follow-up funding to support the primary endpoints of clinical trials.

TRACAT: A key outcome of the study is the ranking of TRIs to develop TRACAT, a system-based tool facilitating the accurate mapping and monitoring of factors determining study intensity, workload and resource impact on trial centres. The trial complexity rating will be applied to studies to support sites in feasibility assessment and map any changes to workloads or complexity during study life-cycles. Key stakeholder knowledge is vital in developing operational evaluation models and panellists had an important study role in prioritising and ranking TRIs and recommending additional factors for consideration. Through the assignment of a trial rating and complexity score linked to monitoring of interventions, visits, follow-up and patient volumes, TRACAT provides workload and capacity analysis at individual, site, regional and national levels. The aim is to create an objective trial rating and portfolio management tool capable of integrating with existing data systems, to monitor real-time activity linked to complexity, increasing the value and structure of data for strategic and operational decision-making. Enhanced knowledge of trial complexity and acuity will support forecasting and capacity planning to optimise resource allocation in line with research objectives and patient needs.
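As an illustration only, the sketch below shows one way ranked TRIs could be combined into a normalised trial complexity score. The category subset (the first six TRIs from table 11), the weights, the per-indicator rating scale and the function are hypothetical assumptions for demonstration and do not represent the TRACAT algorithm, which remains under development.

```python
# Hypothetical illustration of combining ranked TRIs into a composite score;
# category names follow table 11, but weights and scale are assumptions.
TRI_WEIGHTS = {
    "protocol_procedures": 7,        # rank 1, median priority 7
    "resource_demands": 7,           # rank 2
    "investigational_treatment": 7,  # rank 3
    "follow_up_and_visits": 7,       # rank 4
    "data_management": 6.5,          # rank 5
    "support_departments": 6,        # rank 6
}

def complexity_score(ratings):
    """Weighted sum of per-indicator ratings (assumed scale 1 = low burden,
    5 = high burden), normalised to 0-100 so scores are comparable across trials."""
    max_possible = sum(w * 5 for w in TRI_WEIGHTS.values())
    raw = sum(TRI_WEIGHTS[k] * ratings.get(k, 0) for k in TRI_WEIGHTS)
    return round(100 * raw / max_possible, 1)

# Example: a trial rated heavy on protocol procedures and follow-up
print(complexity_score({
    "protocol_procedures": 5, "resource_demands": 4,
    "investigational_treatment": 4, "follow_up_and_visits": 5,
    "data_management": 3, "support_departments": 2,
}))  # -> 77.8 (illustrative value only)
```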

Strategic opportunities for clinical research delivery: The study identified shortfalls at local and national levels, relating to effective communication and shared comprehension of needs and priorities for research, which provide an immediate opportunity for service improvements through better engagement across networks, organisations and disciplines. Strategic opportunities exist for trusts, local research networks, the NIHR and NHS to work collaboratively to develop specialist services and support models, built on shared understanding and structured operational evaluation, to increase patient ‘accessibility, choice and participation in clinical trials’. To improve research quality and safety it is essential healthcare providers promote open and honest cultures focusing on improvement.32 Professionals and organisations alike need to embrace dialectic approaches where mutual respect, innovation and communication can thrive. Iterative dialogue with research professionals to understand critical values and perceptions, relevant to local contexts, is vital in identifying effective strategic models and measures to improve operational delivery.33 There is no national workforce planning for research delivery and NHS global activities for workforce modelling are fragmented.34 As research advances and organisations grow, they face increasing challenges and complexities. Dynamic, fluctuating and evolving environments call for greater understanding of context-specific challenges. This study highlights the current realities of research delivery, emphasising the importance of dialogue and shared decision-making in developing effective strategies and common goals, respecting mutual understanding.

Evaluating research delivery and performance: Analysing and measuring performance and quality in evolving professions and organisations is challenging. Richardson et al 17 argue that an organisation’s measurement of information decreases in value as the organisation grows and faces greater complexity. Evaluation of operational performance and monitoring of success need to take into account not only objective measures but also understand and value qualitative evidence to indicate progress or success, especially where complexity of operational elements is a dominant characteristic. Regular evaluative research of the state and nature of the clinical research delivery industry in the UK should be an ethical requirement of the NHS, NIHR and their partners. There is a moral obligation for researchers to ensure that the work they undertake and the resource allocated to perform these activities provide value, efficiency in service and participant benefit.

Singerian inquiry in operational review: An effective evaluation of trial delivery requires a systems approach engaging multidisciplinary professionals from a wide range of geographical locations, networks and trusts in a collective critique covering multiple realms. Collaborative research cultures supporting enhanced data structuring and synthesis can ‘significantly shorten the time gap between clinical research results to better clinical care decisions’.35 The nuances and complexities of cancer research delivery necessitated a study design involving a critical analysis of strategies, processes and technologies through a collation and synthesis of prismatic perspectives and experiential data. This study supports a systems-based approach to developing effective research capacity planning and performs an ethical role in the review of current NHS research delivery with the intent of improving performance and patient experience. An adaptive NHS research delivery framework capable of analysing and monitoring research capacity and operational models in real-time and over time would enhance knowledge and support strategic planning. This study contributes in-depth qualitative review into operational aspects of clinical trials by engaging key stakeholders in defining variables relating to service pressures as well as highlighting best practices.

Relation to existing research: Our findings support the existing body of research documenting increasing pressures on sites linked to protocol complexity. Growing patient populations, bespoke therapies and extended follow-up pose challenges for existing NHS strategies with resources and research professionals under increasing pressure. The ability to grow research capacity is limited in systems where performance measures do not adequately assess complexity and context or support ‘tailored research capacity-building interventions’.33 Clinical research operational delivery exists within a complex adaptive system faced with growing challenges, one that Britnall argues ‘requires us to think, work and collaborate in different ways’.34 Outdated, hierarchical management styles36 and cognitive dissonance are fuelling a healthcare staffing crisis and stifling innovation through their alienation of experienced, knowledgeable and creative professionals. Britnall discusses the following four key domains where improvement and investment enhance productivity: workforce health and well-being, skills development, technological efficiencies and effective innovation.34 Findings of our study reinforce the need for strategic focus in these domains.

Strengths and limitations

A strength of the study is its holistic, dialectical, consensus-based design, which, as far as we are aware, represents the first use of a Singerian Delphi in cancer research evaluation. Qualitative aspects of the design provided in-depth grounded knowledge through the ‘voices’ of clinical trial professionals, articulating human and social aspects of research delivery. The study also developed consensus-defined TRIs and complexity indicators to support objective analysis of cancer research delivery, adaptable to other therapeutic areas and global settings.

Given the exploratory nature of the study in developing a Singerian-focused qualitative Delphi, the resulting data sets were lengthy and expressive. The causal relationships within the data sets were not fully analysed during the implementation of the Delphi study. The EFACCT Delphi findings contribute to the development of grounded theory as part of a wider national project being conducted by the research team. This democratic study developed new knowledge in defining areas of importance to research delivery stakeholders and forms part of an iterative research programme to evaluate and support operational delivery, focusing on follow-up and complexity.

Participants were limited to patient-facing professionals delivering studies at NHS sites in Scotland and England and did not include representatives from the Clinical Research Network. The results therefore reflect the perspectives of professionals conducting the delivery elements of cancer research at trial sites. This provides a strong understanding of priorities in the clinical setting, but enhanced knowledge covering the full gamut of roles within the industry is required. This Delphi forms part of a programme of study, with future research planned involving a wider demographic to include sponsors, funders, networks and policymakers.

Implications for practice

The results point to operational fragmentation and organisational disconnect, with conflicting priorities limiting the profession’s ability to manage growing complexities and pressures. The evidence suggests that the current operating model is not sustainable for NHS sites. Statements achieving the highest level of consensus among Delphi panellists outlined growing protocol and procedural burden, calling on the NIHR to acknowledge increased complexities in cancer clinical trials and the associated pressures for sites. High levels of consensus relating to operational challenges in research are relevant to wider global settings, and the concepts should be tested in other therapeutic areas. Additional recommendations included the requirement for a nationally agreed definition of follow-up and an effective, consistently validated funding and support model.

The research design considered the suitability of the Singerian approach within the Delphi method in relation to answering the main research question. A Singerian Delphi can serve multiple purposes and answer complex and broad questions in a single study. Our approach demonstrates a pragmatic application of the Singerian Delphi through an engagement with multiple perspectives to develop collaborative knowledge37 and a recognition of diversity and complexity in understanding separate realities. Retrospectively, based on the resultant data and reflection, the Singerian approach has emerged as a potential theoretical lens to apply in future research investigating operational management within healthcare organisations.

Conclusions

Cancer clinical research delivery forms part of a complex system which is in perpetual flux and ill-suited to linear, determinate operational models and processes. Disease, humans and operational networks, each complex in their own right, continually transpose, synthesise and evolve, requiring a prismatic perspective and an adaptive, systems-thinking approach to comprehend them and to design effective, sustainable, human-centred research delivery solutions.

In summary, our findings indicate that, in order to support patient access to clinical trials, meet national research ambitions and keep pace with scientific advances in cancer research, a delivery model cognisant of complex and diverse contextual challenges is required. To deliver quality research, the holistic needs of patients and professionals alike must be supported. Further research into operational efficacy should consider testing dialectic models based on the Singerian approach. While the study applied the Singerian approach as a Delphi methodology, it has emerged as a highly appropriate lens for understanding and managing the dynamic and evolving field of cancer clinical research as a whole.

Acknowledgments

The authors wish to acknowledge the invaluable contribution of principal investigators and staff at United Lincolnshire Hospitals Trust, Edinburgh Cancer Centre, Dumfries & Galloway Royal Infirmary, Clatterbridge Cancer Centre, University College London Hospitals NHS Foundation Trust, Royal Devon & Exeter NHS Foundation Trust, Harrogate & District NHS Foundation Trust, Derby Teaching Hospitals NHS Foundation Trust, University Hospitals Coventry & Warwickshire, Aintree University Hospitals NHS Foundation Trust, Lancashire Teaching Hospital NHS Foundation Trust, North Bristol NHS Trust, and Poole Hospital NHS Foundation Trust.

Footnotes

Contributors: HMJ was responsible for the study design, data acquisition and analysis and led on manuscript preparation. FC contributed to manuscript preparation, review and revision. GL provided statistical review. FC, GL, CB and TA were responsible for academic and intellectual review of the study design, protocol and manuscript. DB has provided clinical oversight. All authors have read and reviewed the final manuscript.

Funding: United Lincolnshire Hospitals NHS Trust (through cancer charitable funds) and the University of Lincoln funded a PhD studentship leading to the study. University of Lincoln as sponsor provided academic and research governance direction.

Competing interests: None declared.

Patient consent for publication: Not required.

Ethics approval: The study was approved by the East Midlands—Derby Research Ethics Committee (reference: 17/EM/0292) and the University of Lincoln School of Health and Social Care Ethics Committee. All participants taking part in the Delphi study provided informed consent.

Provenance and peer review: Not commissioned; externally peer reviewed.

Data availability statement: Data are available upon reasonable request. Anonymised data will be available on request from the corresponding author.

References

1. Downing A, Morris EJ, Corrigan N, et al. High hospital research participation and improved colorectal cancer survival outcomes: a population-based study. Gut 2017;66:89–96. 10.1136/gutjnl-2015-311308
2. NHS England. Improving outcomes through personalised medicine, 2016. Available: https://www.england.nhs.uk/wp-content/uploads/2016/09/improving-outcomes-personalised-medicine.pdf [Accessed 16 Jun 2019].
3. NHS. The NHS long term plan, 2019. Available: https://www.longtermplan.nhs.uk/online-version/ [Accessed 12 Aug 2019].
4. Bray F, Ferlay J, Soerjomataram I, et al. Global cancer statistics 2018: GLOBOCAN estimates of incidence and mortality worldwide for 36 cancers in 185 countries. CA Cancer J Clin 2018;68:394–424. 10.3322/caac.21492
5. Malik L, Lu D. Increasing complexity in oncology phase I clinical trials. Invest New Drugs 2019;37:519–23. 10.1007/s10637-018-0699-1
6. National Cancer Policy Forum. Policy issues in the clinical development and use of immunotherapy for cancer treatment: proceedings of a workshop. National Academies Press (US), 2016. Available: https://www.ncbi.nlm.nih.gov/books/NBK396430/ [Accessed 01 Aug 2019].
7. Gilbert MR, Rubinstein L, Lesser G. Creating clinical trial designs that incorporate clinical outcome assessments. Neuro Oncol 2016;18:ii21–5. 10.1093/neuonc/nov254
8. Stewart BW, Wild CP. World cancer report 2014. IARC Publications, 2014. Available: https://publications.iarc.fr/Non-Series-Publications/World-Cancer-Reports/World-Cancer-Report-2014 [Accessed 12 Jul 2019].
9. Janiaud P, Serghiou S, Ioannidis JPA, et al. New clinical trial designs in the era of precision medicine: an overview of definitions, strengths, weaknesses, and current use in oncology. Cancer Treat Rev 2019;73:20–30. 10.1016/j.ctrv.2018.12.003
10. Getz KA, Kaitin KI. Open innovation: the new face of pharmaceutical research and development. Expert Rev Clin Pharmacol 2012;5:481–3. 10.1586/ecp.12.44
11. Lyddiard J, Briggs J, Berridge J, et al. A workload measurement. Appl Clin Trials 2010 [Accessed 12 Jul 2019].
12. Briggs J, Lyddiard J, Coffey M, et al. When to say ‘Yes’ in the NHS: development of a complexity scoring system & management tool. CR Focus 2011;22:19–22.
13. Batchelor J. The assumptions of data capture. J Clin Stud 2017;9:46–7.
14. Alderwick H, Dixon J. The NHS long term plan. BMJ 2019;364:l84. 10.1136/bmj.l84
15. Paquet G, Ragan T. Through the Detox prism: exploring organisational failures and design responses. Ottawa: Invenire Books, 2012: 119–23.
16. Vaughn VM, Saint S, Krein SL, et al. Characteristics of healthcare organisations struggling to improve quality: results from a systematic review of qualitative studies. BMJ Qual Saf 2019;28:74–84. 10.1136/bmjqs-2017-007573
17. Richardson SM, Courtney JF, Paradice DB. An assessment of the Singerian inquiring organizational model: cases from academia and the utility industry. Inf Syst Front 2001;3:49–62. 10.1023/A:1011449620792
18. Haynes JD. Internet management issues: a global perspective. London: IGI Global, 2012: 167–9.
19. Paul D. Addressing complex decision problems in distributed environments. In: van Gigch JP, ed. Wisdom, knowledge, and management. C. West Churchman and related works series, vol 2. New York, NY: Springer, 2006: 85–6.
20. Akins RB, Tolson H, Cole BR. Stability of response characteristics of a Delphi panel: application of bootstrap data expansion. BMC Med Res Methodol 2005;5:37. 10.1186/1471-2288-5-37
21. Kennedy HP. Enhancing Delphi research: methods and results. J Adv Nurs 2004;45:504–11. 10.1046/j.1365-2648.2003.02933.x
22. Critcher C, Gladstone B. Utilizing the Delphi technique in policy discussion: a case study of a privatized utility in Britain. Public Adm 1998;76:431–49. 10.1111/1467-9299.00110
23. Linstone HA, Turoff M. The Delphi method: techniques and applications. Newark, NJ: New Jersey Institute of Technology, 2002: 167–9. Available: https://web.njit.edu/~turoff/pubs/delphibook.pdf [Accessed 11 Sep 2018].
24. Courtney J, Merali Y, Paradice D, et al. On the study of complexity in information systems. IJITSA 2008;1:37–48. 10.4018/jitsa.2008010103
25. Mozuni M, Jonas W. An introduction to the morphological Delphi method for design: a tool for future-oriented design research. She Ji: The Journal of Design, Economics, and Innovation 2017;3:303–18.
26. Mitroff II, Williams J, Rathswohl E. Dialectical inquiring systems: a new methodology for information science. J Am Soc Inf Sci 1972;23:365–78. 10.1002/asi.4630230606
27. Saukko P. Doing research in cultural studies: an introduction to classical and new methodological approaches. London: SAGE Publications, 2003: 25–7.
28. Keeney S, McKenna H, Hasson F. The Delphi technique in nursing and health research. 1st edn. Oxford, UK: Wiley, 2011: 53.
29. Hsu C, Sandford BA. The Delphi technique: making sense of consensus. Prac Ass Res Eval 2007;12:1–8.
30. Keeney S, Hasson F, McKenna HP. A critical review of the Delphi technique as a research methodology for nursing. Int J Nurs Stud 2001;38:195–200. 10.1016/S0020-7489(00)00044-4
31. Vernon W. The Delphi technique: a review. Int J Ther Rehabil 2009;16:69–76. 10.12968/ijtr.2009.16.2.38892
32. HRA. UK policy framework for health and social care research v3.3, 2017. Available: https://www.hra.nhs.uk/planning-and-improving-research [Accessed 12 Aug 2019].
33. Edwards N, Kaseje D, Kahwa E. Building and evaluating research capacity in healthcare systems: case studies and innovative models. Cape Town: UCT Press, 2016: 31.
34. Britnell M. Human: solving the global workforce crisis in healthcare. London: Oxford University Press, 2019.
35. Batchelor J. Strength through collaboration. Int Clin Trials 2017;9:24–6.
36. West M. The NHS crisis of caring for staff: what do we need to do? The King’s Fund, 2019. Available: https://www.kingsfund.org.uk/blog/2019/03/nhs-crisis-caring [Accessed 06 Sep 2019].
37. Greenhalgh T, Jackson C, Shaw S, et al. Achieving research impact through co-creation in community-based health services: literature review and case study. Milbank Q 2016;94:392–429. 10.1111/1468-0009.12197
