PLOS ONE. 2019 Dec 10;14(12):e0226081. doi: 10.1371/journal.pone.0226081

Using evidence when planning for trial recruitment: An international perspective from time-poor trialists

Heidi R Gardner 1,*, Shaun Treweek 1, Katie Gillies 1
Editor: Kathleen Finlayson2
PMCID: PMC6903711  PMID: 31821373

Abstract

Introduction

Recruiting participants to trials is challenging. To date, research has focussed on improving recruitment once the trial is underway, rather than planning strategies to support it, e.g. developing trial information leaflets together with people like those to be recruited. We explored whether people involved with participant recruitment have explicit planning strategies; if so, how these are developed, and if not, what prevents effective planning.

Methods

Design: Individual qualitative semi-structured interviews. Data were analysed using a Framework approach, and themes linked through comparison of data within and across stakeholder groups.

Participants: 23 international trialists (UK, Canada, South Africa, Italy, the Netherlands): 11 self-identifying as ‘Designers’ (those who design recruitment methods) and 12 self-identifying as ‘Recruiters’ (those who recruit participants). Interviewees’ recruitment experience spanned diverse interventions and clinical areas.

Setting: Primary, secondary and tertiary-care sites involved in trials, academic institutions, and contract research organisations supporting pharmaceutical companies.

Results

To varying degrees, respondents had prospective strategies for recruitment. These were seldom based on rigorous evidence.

When describing their recruitment planning experiences, interviewees identified a range of influences that they believe impacted success:

  • The timing of recruitment strategy development relative to the trial start date, and who is responsible for recruitment planning.

  • The methods used to develop trialists’ recruitment strategy design and implementation skills, and when these skills are gained (i.e. before the trial or throughout).

  • The perceived barriers and facilitators to successful recruitment planning; and how trialists modify practice when recruitment is poor.

Conclusions

Respondents from all countries considered limited time and disproportionate approvals processes as major challenges to recruitment planning. Poor planning is a mistake that trialists live with throughout the trial. The experiences of our participants suggest that effective recruitment requires strategies to increase the time for trial planning, as well as access to easily implementable evidence-based strategies.

Introduction

Patient participants are central to the success of randomised controlled trials. Poor participant recruitment threatens the completion of trials, and where poorly-recruiting trials stagger on without being closed prematurely, it threatens the utility of their results [1]. Researchers have therefore tried to improve the evidence-base for identifying effective recruitment interventions. Currently, few successful strategies have been robustly evaluated, and fewer still are generalisable to a wide range of trial populations and settings [2]. With so little evidence to inform recruitment planning, it is not surprising that trialists around the world struggle to ensure that their trials successfully recruit to target [2,3].

Exploratory research aiming to understand recruitment difficulties has largely centred on the perceptions and experiences of participants who have taken part in trials, or of individuals who have declined to take part [4–9]. Whilst these studies usefully illuminate barriers and facilitators for prospective participants, research into trialists’ views and experiences of recruitment is needed to ensure that methodologists can design trials that are acceptable to both participants and trialists. With this in mind, more recent research has focussed on what the process of recruitment is like for trial teams [10–16]. Several studies have highlighted the process of recruitment from the perspective of the recruiter [11], suggesting that many recruiters find it difficult to combine research with their clinical roles [12–16, 17], and that this combination of responsibilities can create significant emotional challenges arising from the conflicting priorities of research and clinical care [15]. These studies have provided a more complete picture of the recruitment process and have highlighted the importance of recruitment barriers at various levels: building a research-friendly culture within healthcare systems affects recruiters as individuals, while significant system-level barriers arise from conducting research within the healthcare system itself [10].

This study adds a layer of knowledge to the evidence exploring trial recruitment. Most work to date has investigated what is happening in trials that are already ongoing [10, 17, 18]. However, there is merit in taking a step back and exploring how trialists make the decisions that lead to the design and implementation of recruitment processes: the process of recruitment planning. This study used semi-structured qualitative interviews to explore how trialists plan for participant recruitment. A key part of that is identifying whether or not trialists use evidence when planning their recruitment strategies, and exploring the perceived barriers and facilitators to the use of this evidence. We also wanted to explore if and how trialists’ experiences contrasted with the level of planning that funders and approvals bodies are perceived to require before a trial begins, and how recruitment techniques may or may not change when participants are not being recruited at forecasted levels.

Methods

Ethics statement

This study was approved by the University of Aberdeen College of Life Sciences and Medicine College Ethics Review Board (CERB/2016/6/1382). Signed informed consent was obtained from all participants, which included consent for anonymised quotes from their interviews to be included in publications and presentations resulting from this work.

Study design

Our qualitative interview study included two groups of participants: Recruiters (frontline staff involved in identifying and contacting potential participants and working to actively recruit them, such as Research Nurses, GPs and Clinicians), and Designers (people who can influence the strategies used for recruitment, such as Trial Managers, Principal Investigators and Chief Investigators).

Setting

We focussed on Phase 3 pragmatic effectiveness trials conducted within primary, secondary or tertiary care settings, whilst also working to include trials conducted within the community, and trials funded by industry that often take place at purpose-built sites (e.g. contract research organisation sites that are used only for research).

Participants were invited through a range of networks: Trial Forge collaborators (at the time of the study this encompassed 12 Trials Units, two of which were outside the UK), the MRC Trial Methodology Hubs, the UK CRC Trials Units, the UK Clinical Research Networks, the UK Trial Managers’ Network, and other relevant networks (e.g. the Association of Clinical Research Professionals and the Institute of Clinical Research). Prospective participants were asked to respond by email to express interest. Interested participants were sent written study information, including an information leaflet and a consent form, and were provided with an opportunity to ask questions before making their decision. Participants were given the choice of face-to-face, telephone or Skype interview.

Sample size

We aimed to recruit a minimum of 20 participants. Research has shown that with relatively consistent participant groups, data saturation can occur within the first twelve interviews, although overarching themes and concepts can be present after as few as six interviews [19,20]. In the trials field it is relatively common for Designers to have earlier-career experience of roles associated with the Recruiter category, so we anticipated some consistency between the two groups. We did not formally assess whether adequate thematic saturation had occurred [21].

Sampling

We purposively selected participants based on their trial portfolios, aiming to include a diverse range of funders (e.g. private, public, third sector), clinical specialities, and intervention types (e.g. investigational medicinal product, licensed drug, surgical technique, medical device, and behavioural interventions (lifestyle change)).

Research team and data collection

A conversational session within the Centre for Healthcare Randomised Trials’ monthly Trials Group meeting at the University of Aberdeen guided the development of topic guides. We invited 12 participants to the meeting, involving a mix of stakeholder groups from within the Unit; Trial Managers, Data Coordinators and Programmers were all represented. HRG chaired the session and facilitated discussion on the general theme of recruitment planning, and what participants wanted to know about how other trial teams plan their recruitment.

The topic guide (Supporting information S1 and S2 Files) developed was then employed in individual semi-structured interviews conducted by HRG. At the start of each interview, participants were encouraged to discuss their practical experiences of recruiting participants to trials, and their perspective on the recruitment process. Interviews were conversational yet supported by the topic guide to ensure that key issues were covered. The topic guide was refined throughout the study, and field notes were taken after each interview to assist analysis and interpretation.

Analysis

Data were analysed using the Framework method, a qualitative approach suited to this study given its specific research questions, pre-designed sample, a-priori issues and deducible themes [22]. The Framework method has been used widely and successfully in applied health services research [10, 23–26].

All interviews were audio recorded, transcribed verbatim and anonymised before analysis. HRG coded the first four transcripts using an open coding approach to develop a working analytical framework. KG independently reviewed a 10% sample of transcripts. Coding and themes were discussed by the team (HRG, KG and ST) to agree the analytical framework that would be applied to all interviews. We elected to categorise themes into areas of the trial timeline. HRG applied the analytical framework to all transcripts and used NVivo to generate a framework matrix to facilitate comparison of data first within, and then across stakeholder groups (Recruiters and Designers). Following analysis, we selected relevant quotes representing pertinent themes to illustrate study findings. As per our ethics statement, all participant quotes presented here have been anonymised to protect confidentiality.

Results

Participants

Twenty-five trialists from the Recruiter and Designer stakeholder groups were invited; 23 were interviewed, with interviews lasting between 32 and 77 minutes (median: 58 minutes).

Key characteristics of the study sample are presented in Table 1.

Table 1. Characteristics of interviewees.

Interviewee characteristics                                  Recruiter  Designer
Stakeholder group (n)                                               11        12
Location
  UK                                                                 7        11
  Canada                                                             0         1
  The Netherlands                                                    1         0
  Italy                                                              1         0
  South Africa                                                       2         0
Gender
  Male                                                               1         3
  Female                                                            10         9
Age (years)
  30 and under                                                       6         1
  31–50                                                              4         6
  51 and above                                                       1         5
Experience of working in clinical trials
  Less than 10 years                                                 7         4
  10 years or more                                                   4         8
Trials background*
  Public                                                             8         8
  Private                                                            5         3
  Third sector                                                       1         1
Involvement with clinical research networks and speciality groups±
  Scottish Primary Care Clinical Research Network                    3         2
  Scottish Cancer Clinical Research Network                          2         2
  Scottish Stroke Clinical Research Network                          4         3
  English Clinical Research Network                                  1         5
  Scottish Musculoskeletal Speciality Group                          0         1
  UKCRC Registered Clinical Trials Unit Network                      8         7

Note: * and ± provide brief information on the types of trials and clinical research networks that participants have experience with. Several participants had experience in more than one of these sub-categories (e.g. both public and private trials, and/or contact with more than one clinical research network), hence figures may total more than the number of participants in each stakeholder group.

When asked to describe their experiences with trial recruitment planning, interviewees identified a range of influences which they considered to have positively or negatively affected the success of trial recruitment. As the following data show, the points at which planning affected the success (or not) of trial recruitment interventions and/or strategies were varied. Participants from both Recruiter and Designer stakeholder groups discussed an assortment of influences throughout the recruitment pathway and beyond, reflecting their diverse professional backgrounds and trial experiences. Principal Investigators and Chief Investigators had a mix of experience; many had previous clinical experience, and others had backgrounds in nursing, midwifery and allied health professions.

These influences encompass: pre-trial recruitment planning, the extent to which recruitment strategies and interventions should be planned before participant recruitment begins, and who is responsible for recruitment planning; the methods and/or resources that trialists use to develop their recruitment intervention/strategy design and implementation skills, and when those skills are developed (i.e. before a trial begins, or learning as the trial is ongoing); perceived barriers and facilitators to successful recruitment planning; and how trialists modify their practice when recruitment is lower than anticipated.

Responsibility for recruitment planning

Both Recruiters and Designers recognised the importance of planning when it came to recruitment strategies. Unsurprisingly, Designers perceived this planning to be a task within their job specification, and one that they understood to be a key component of successful recruitment once the trial had started. On the whole, they perceived recruitment planning to be a useful investment of time, but clearly felt that time was scarce.

“When you start off in research you are told you should spend two thirds of your time thinking about the project and one third of your time doing it, but you don’t bother with that. You just get started! Spend 5% on preparation, then spend a huge amount of time undoing all the mistakes that you’ve made” (Professor and Research Director–Designer, UK, Participant 6).

Detailed planning of recruitment methods during trial design stages

The time invested in planning and working-up a trial before funding is awarded was a significant point of discussion across both stakeholder groups.

When asked to consider recruitment planning in terms of the detail required during grant writing, both Designers and Recruiters perceived that specific details about planned recruitment methods had not historically been necessary for grant applications. However, the need for detailed planning of how potential participants will be recruited was thought to be increasing when applying for trial grants; this was an issue referenced several times by participants with experience of applying to UK-based funders.

“So I haven’t seen whole sections of grant applications that are devoted to that [recruitment], although I think some of the funders like [UK funding body], and others, are starting to indicate that information about that is required. So, I think that the requirements of researchers have not been particularly stringent when it comes to specifying feasibility and recruitment, and likely recruitment potential.” (Clinician–Recruiter, UK, Participant 4).

“Different funders ask for different amounts [of detail]. It’s gradually getting more that they ask for.” (Principal Investigator and Clinician–Designer, UK, Participant 2).

However, a couple of experienced Designers perceived this to be a task that should be done before grant application submission, regardless of whether it was a requirement for funding, chiefly because it may increase the chances of funding success:

“I advise that they do write all those [planned recruitment methods] into the application, so it really gives the funders confidence that we’ve really thought about it clearly.” (Trial Manager–Designer, UK, Participant 3).

Once grant applications have been submitted and funds awarded, funders were perceived to take a hands-off approach to recruitment, only providing direct involvement if targets were not being met.

“Once we have a plan in place, they will simply monitor that we’re achieving our targets.” (Principal Investigator and Clinician–Designer, UK, Participant 2).

All interview participants agreed that at least some information on how to recruit participants should be included within the protocol, but with regards to level of detail there was a diverse range of experiences among participants. Recruiters tended to perceive the required level of detail as limited, and that the Trial Manager and the research team running the study would produce a practical plan for these processes based on the fundamental detail held within the protocol. In other words, a document separate from the protocol on ‘how to’ recruit.

“I would say not always very much [detail]. It’s pretty much you know, we will recruit by… I think in our protocol it might have said, “We’re going to use these two methods” [referring to two methods discussed previously: mail outs from GPs, and putting posters up in clinics involved with the trial]. But the actual finer point of how that’s going to happen isn’t listed … it’s up to the Trial Manager and the research team to come up with that.” (Specialist Research Nurse–Recruiter, UK, Participant 5).

Designers, however, suggested that writing the protocol for recruitment activity required more foresight.

“So a couple of times we’ve had to do amendments to the protocol to incorporate new recruitment strategies. So, I mean I now, having had several years’ experience, then I advise that they do write all those into the grant application so it really gives the funders confidence that we’ve really thought about it clearly and that there’s you know, we’ve got contingency plans.” (Trial Manager–Designer, UK, Participant 3).

Designers with significant trial experience (10 years or more) explained how they purposefully build imprecision into the protocol to circumvent the need for future ethical amendments. Interestingly, the imprecision they built into their protocols all related to projected recruitment figures rather than to the development of recruitment strategies.

“You don’t really want to put the target for randomisation is 3,000 patients, because if you go to 3,001 patients some regulators will ask you to put in an amendment and go back to ethics. So, we would always write, “Randomise at least 3,000 patients” or, “Around 3,000 patients in at least 100 centres”. So, you put in imprecision in your protocol which we discovered that most ethics committees and sponsors and other regulators don’t notice.” (Principal Investigator and Clinician–Designer, UK, Participant 2).

Use of empirical and experiential evidence in recruitment planning

Only one participant explicitly mentioned using empirical evidence from the Cochrane recruitment review (‘Strategies to improve recruitment to randomised controlled trials’) to inform their decisions about the content of recruitment plans:

“I look at what’s been effective before, so for example, I’m aware of the Cochrane Reviews about trials, and about the evidence about what improves recruitment, so I know that if you provide money to people, they want shorter questionnaires–we’ve looked at all that evidence and we do try to take that on board.” (Professor and Chief Investigator–Designer, UK, Participant 16).

Commonly, methods for recruitment planning tended to have been fostered and refined as a result of experience from working on multiple trials. Both Recruiters and Designers felt that there were advantages in being aware of what works for others (i.e. experiential evidence) with regard to recruitment methods used in similar trials. Each of the Designers interviewed had developed their own methods of planning for recruitment. Largely these centred on use of experiential evidence from skilled colleagues, as illustrated by the following quote:

“Basically we do a lot of canvassing of opinion, getting some expert advice [earlier in the interview participant referred to a number of ‘experts’ by name–all experienced (10 years or more) trialists].” (Trial Manager–Designer, UK, Participant 8).

When participants were asked what they would do if recruitment was not going well, experiential evidence was discussed at length. Designers explained that the first step would usually be to connect with colleagues with experience in similar types of trials, to assess existing strategies and work together to think of new strategies that may be appropriate:

“Inevitably when you are faced with a challenging situation about recruiting there tend to be people that you would go to, so for example, someone like… if I was recruiting in Primary Care, someone like [trialist’s name] would be someone that I would have a word with.” (Professor and Chief Investigator–Designer, UK, Participant 16).

Perceived barriers and facilitators to successful recruitment planning

Effective communication and learning from others

Effective communication was a common theme across both stakeholder groups, with participants highlighting the need for effective communication of experiential evidence specifically:

“Talk to them about why it’s not working because I just think that communication is the only way that you’re going to get anywhere.” (Specialist Research Nurse–Recruiter, UK, Participant 11).

There was also an observation that, in the experience of Designers, teams that successfully recruit to target are likely to be more communicative, and as a result more resilient, than less communicative teams. One experienced Designer explained:

“You have to have an explicit communication strategy built into that, appropriate to the… whatever the study is. Some of it is just factual communication, but some of it is more the social thing, making a team cohesive and resilient whenever the problems arise.” (Principal Investigator and Clinician–Designer, Canada, Participant 9).

Time as a limiting factor for recruitment planning

Both Recruiters and Designers perceived the scarcity of time to be a notable issue that could lead to stress or frustration in the run up to the opening of recruiting sites.

“It can be very stressful, especially having a particular you know, “This is your… you must recruit by this point.” And also it depends you know, on some trials there’s an awful lot to do before you even get to that point.” (Specialist Research Nurse–Recruiter, UK, Participant 5).

The time needed to plan and set up a trial is often dictated by funding bodies and the interval between confirmation of funding and the start of the grant; this issue is therefore largely out of the trial team’s control, and can result in staff working against the clock to obtain approvals so that recruitment can start on time. This was illustrated neatly by a participant who said:

“I think the only other thing that is frustrating is when you do get a grant for a trial or you’re putting in for a grant for a trial, there’s often not enough time allocated for planning and setting up. So it’s always a bit of a mad rush you know.” (Trial Manager–Designer, UK, Participant 8).

Inconsistencies with research approvals and governance procedures

Research approvals and governance was a topic covered by all participants. There was a feeling of frustration with regards to how the approvals process is implemented in the UK; particularly surrounding the level of inconsistency observed when working across multiple sites and/or health boards. Inconsistencies were viewed as a point of significant irritation, something that could impact on the quality of recruitment planning and standardisation of recruitment methods used across sites, but not necessarily something that Recruiters and Designers could confidently say was a reason for poor recruitment. These irregularities in decision-making by regulators and ethics committees tended to involve subtle changes to the wording of documentation or method of approach.

“I also think that there is still too much inconsistency between committees in how they may make decisions, and there is also a problem at the committee level of them feeling that they always need to adjust patient information leaflets, or consent forms to, as it were, justify their regulatory position.” (Clinician–Recruiter, UK, Participant 4).

Perceptions of the recruitment process

Throughout the interviews, the majority of participants voiced opinions on the recruitment process more generally. Initially, their thoughts did not appear directly linked to recruitment planning, but on further investigation these experiences provide an insight into the environment that many trialists are working in. Pressure was a topic covered frequently by both Recruiters and Designers. Largely, Recruiters had experienced, or heard of colleagues experiencing, ‘a lot of pressure’ to recruit. These stakeholders expressed that their efforts were “constantly never good enough. Never good enough. It was constantly, ‘You need to get people in the door’ to the point where you had to go and deliver leaflets to social places to try and get people in.” (Specialist Research Nurse–Recruiter, UK, Participant 5), with pressure coming from funders in the majority of cases. Designers viewed this pressure from an alternative perspective, showing empathy for the Recruiters and a willingness to support them when appropriate.

Although the international sample was small (5/23 participants were based outside of the UK), the experiences referenced by both international and UK-based participants were relatively similar, particularly when it came to the way that recruitment strategies are developed and implemented. The main difference between international and UK-based experiences arose during interviews with trialists working in countries with a substantially different healthcare system. For example, one of the interviewees working in South Africa explained how trials were often easier to recruit to when they were being conducted within the public healthcare system. In these cases patients that are unable to afford private healthcare are more likely to consider trial participation, as they are given the chance to see healthcare professionals more quickly.

Discussion

This study explored experiences and perceptions of the process of planning recruitment strategies for trials. Our findings provide insight into the way that trialists plan (or not) their recruitment strategies, how these skills are developed over time, and the potential barriers and facilitators that impact on the effectiveness of any planning that is undertaken.

The vast majority of qualitative studies focussed on trial recruitment to date have investigated barriers and facilitators to effective recruitment from the perspective of prospective participants, and the majority of work looking at recruitment from the trialists’ standpoint has centred on parts of the active recruitment process; i.e. when trialists approach potential participants to talk to them about trial participation. Whilst these studies have generated useful data on the recruitment process, so far the literature has lacked exploration of the planning that precedes it.

Communication between trial stakeholders

All of our study participants believed communication to be an important component of recruitment success. Communication involves layers of conversations and information exchange between various trial stakeholders; where communication falters recruitment is expected to stumble too.

Poor communication is also an issue between the trial team and external stakeholders. Both Recruiters and Designers referenced their frustration at the lack of consistency in approvals and governance procedures across multiple sites, explaining that variations impacted recruitment planning and the standardisation of methods, though they could not confidently say that this caused poor recruitment. These experiences are echoed throughout the literature, though most reports are written by frustrated researchers and represent only single applications [27–39].

Managing expectations

Open channels of communication have the potential to facilitate productive working environments [40], and this extends to the management and communication of realistic expectations. Our findings highlight frustration with unrealistic forecasted recruitment figures, a scenario so common that there is significant literature on the topic. Lasagna’s Law [41, 42] and Muench’s Third Law [43] state, respectively, that ‘the number of patients available for entering a trial falls markedly at study initiation and rises markedly after study completion’ [42] and ‘the number of cases promised in any clinical study must be divided by a factor of at least 10’ [43]. Recent qualitative work focussed on projection of recruitment numbers in primary care provides additional insight into factors that contribute to poor forecasting [44]. Optimistic figures have been attributed to inappropriately ‘anchoring’ estimates by focussing on positive past experiences [45], failing to consider significant differences between studies, and the ‘Lake Wobegon Effect’, where individuals overestimate their achievements relative to the average [46]. Unsurprisingly, this pressure particularly affects members of the Recruiter stakeholder group, who are tasked with speaking to potential participants and recruiting to the trial. Designers often express empathy for Recruiters, but the problem can be avoided if predicted recruitment figures are intentionally less optimistic, and therefore more dependable, from the beginning of the trial.

Clarity and culture change

Both Recruiters and Designers considered the content of grant applications when asked about recruitment planning. These discussions centred on the contrast between the level of detail that funders require, and what trialists feel is necessary for them to get to grips with the process of recruitment, i.e. the operationalisation of recruitment. Our interviewees explained that, overwhelmingly, trial teams focus on the ‘what’ of recruitment, i.e. how many participants are required for the trial to answer its research question and over what time period, rather than the ‘how’, i.e. what strategies should be used, where and when. The experiences of our participants suggest that this level of detail is largely accepted by funders ‘as long as it sounds like you [trial teams] know what you’re doing’. Funders have a key role here. In the UK, public and patient involvement (PPI) is now an established part of the funding landscape, with funders requiring detail on how PPI will be used throughout a trial. A similar change could take place for recruitment planning: funders and reviewers should consider the operationalisation of recruitment activity.

Generation and implementation of recruitment evidence

The process of culture change is slow, but given that the Cochrane recruitment review was originally published in 2007 [47], a lack of awareness, or of use, of the most relevant source of systematically reviewed evidence on participant recruitment more than a decade later is worrying. Just one participant explicitly referenced the Cochrane recruitment review without being prompted to do so, and the financial incentives that individual referred to are actually part of the Cochrane retention review [2]. How the results of systematic reviews of trial methods are disseminated needs to improve, because trialists are largely unaware of them at present.

This is perhaps not surprising: at 185 pages long, the Cochrane recruitment review (for example) does not present itself as an efficient way of gathering new knowledge quickly. As one interviewee who had heard of the Cochrane recruitment review put it, they “had been meaning to read it” but had not found time to do so.

Findings from linked work [48] suggest that trialists want to see information presented in a layered format. This information should begin with a simple explanation of the intervention, followed by bite-sized chunks about its impact on recruitment and the level of certainty in the evidence. Once trialists have an outline of the intervention, they then want to be able to make a decision about accessing further details, rather than being confronted with all of the information from the start. Our work suggests that the context in which the intervention has been tested (participant population, trial intervention and study location), and what information we still do not have about the intervention (e.g. other contexts, cost, potentially negative implications), are priorities for trialists [48].

Looking beyond improved recruitment rates

This project aimed to provide researchers with evidence about if and how the recruitment process is planned. We hope that this information, along with the significant body of additional work that has been published or is currently being planned, conducted, and disseminated, can ultimately be used to improve trial recruitment figures. We focus on the issue of low participant recruitment not because we believe solving it will resolve every issue that a clinical trial may have, but because it is an avoidable problem that too many trials encounter. As Whitham and colleagues make clear, there are other things that trialists need to stay on top of for a trial to be a success [49].

The potential for selection/participant bias [50] is a problem that we need to be aware of both when considering the participants we recruit [51] and when considering whether they are retained until the end of the trial. Planning inclusive recruitment processes that give the general population of patients to whom the trial’s results may apply the opportunity to participate is imperative if trials are to provide useful results.

Strengths and limitations

A significant strength of this study was the diversity of interviewees: Specialist Research Nurses, Research Managers, Investigators, Trial Managers, Clinicians and Clinical Trial Educators from the UK, South Africa, Italy, the Netherlands and Canada explained how they plan for participant recruitment. Our findings reflect the experiences of trialists working in various environments, both with and without a trials unit, and across disease areas; these experiences were consistent, suggesting that our findings will apply to trialists beyond the immediate study population.

One of our recruitment methods was to approach people through the existing list of Trial Forge collaborators; this produced an engaged sample: only two people approached declined to take part. It is likely that study participants had a pre-existing interest in trial recruitment, potentially signifying existing knowledge of recruitment methods. That said, our findings demonstrate that even those aware of trial methods research and Trial Forge struggle with recruitment and are largely unaware of the existing evidence base.

Implications for practice

To make the most of time spent planning recruitment strategies, it is important that stakeholder groups communicate effectively. Regulators and other approvals bodies should be working with the research community to ensure that the burden of governance processes does not overshadow the research that they intend to facilitate. Trialists and funders need to communicate so that sufficient time is available between confirmation of funding and the start of the trial, allowing trialists to work with colleagues to produce recruitment methods that are sufficiently thought through. Potential changes to funding could include a staged approach to funding release for the sole purpose of planning. One option would be for the outline stages of trial grant calls to be shorter pitches that, if successful, award smaller funding pots to employ someone to work up the operational detail of the full grant proposal over a period of around three months. Following that dedicated planning period, the proposal could be reviewed by the funder, who would release funds for the full-scale trial only once the planning process has been approved.

Less complex operationalised change may come from simply ensuring that trial teams are aware of the existing evidence around recruitment interventions, and that there is an expectation on the part of funders for evidence-based interventions to be used where possible. Initiatives such as Trial Forge (of which this work is a part) and the UK MRC-NIHR Trials Methodology Research Partnership (https://www.methodologyhubs.mrc.ac.uk/about/tmrp/) are working to strengthen connections with funders to ensure that rigorously evaluated interventions are implemented where possible, and/or that embedded recruitment studies (‘SWATs’) are introduced into trials to plug gaps in the existing evidence base [3, 52]. That said, the issues associated with planning recruitment strategies are multifaceted and require multi-stakeholder collaboration.

Conclusions

This study has highlighted the complexity of planning trial recruitment strategies. The work involved can be lengthy and is often rushed as a result of time pressure. It is important that trialists, regulators, and funders recognise this process as an essential part of a trial’s workload, and that as a community we seek to alleviate the barriers to, and enhance the facilitators of, effective recruitment planning. When trialists experience poor recruitment, they tend to implement multiple strategies based on experiential evidence, meaning that robust empirical evidence is rarely generated. The problem is not that there is nowhere to publish manuscripts covering these topics (Trials, PLOS ONE, BMJ Open, and the Journal of Clinical Epidemiology have all published trials methodology research); the problem is that trialists are time-poor and therefore struggle to find the time to test the methods they implement. Where robust evidence does exist, we must work to ensure that trialists have unobstructed access to, know about, and use these rigorously evaluated strategies. Often such strategies are shared at relevant conferences, such as the biennial International Clinical Trials Methodology Conference (UK-based) and the annual Society for Clinical Trials Meeting, but as these events require in-person attendance, dissemination and expertise sharing are limited to those who can attend. Video recording or remote attendance using online conferencing platforms would be one way to improve the accessibility of these events, thereby facilitating knowledge exchange and the sharing of expertise with trialists around the world.

If and when evidence-based approaches have been exhausted, we need to encourage implementation of embedded studies that effectively generate evidence that is useful to the wider trials community. There is a significant body of literature on survey methodology covering topics such as incentives and questionnaire length/content [53, 54, 55], which may act as a source of inspiration for strategies not yet evaluated in the trial recruitment sphere. In addition, the Northern Ireland Hub for Trials Methodology Research holds the SWAT repository [56], which provides details (including outcome measures and analysis plans) of ongoing SWATs that can be implemented by trial teams. Once multiple trials have tested the same SWAT, the results can be pooled to shed light on whether and how the intervention operates across trials in a variety of contexts.

Ultimately, we must add structure to the process of recruitment planning; currently, trialists rely on experienced colleagues and experiential evidence, which may be useful in the short term but does not offer a sustainable route to long-term evidence-sharing. Recruitment planning needs to become an integral part of the trial planning process, and encouraging opportunities to operationalise the recruitment plan will go some way towards starting that process.

Supporting information

S1 File. Topic guide for designers.

(DOCX)

S2 File. Topic guide for recruiters.

(DOCX)

Acknowledgments

The authors would like to thank all interviewees who took part in this study, as well as all of the trial staff who assisted with identifying potential participants. The authors would also like to thank the trial staff from Aberdeen’s Centre for Healthcare Randomised Trials who took part in the initial session that led to the development of the topic guides used throughout this study.

Data Availability

All relevant data are within the paper and its Supporting Information files.

Funding Statement

This study was funded by the Chief Scientist Office of Scotland’s Health Improvement, Protection and Services Research Committee (project reference HIPS/16/07 - https://www.cso.scot.nhs.uk/outputs/cso-funded-research/hips16/). HRG was supported by a scholarship from Aberdeen Development Trust which funded her PhD fees and stipend, KG was supported by an MRC Methodology Research Fellowship (MR/L01193X/1), and ST was supported by core funding from the University of Aberdeen. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

References

  • 1.Thoma A, Farrokhyar F, McKnight L, Bhandari M: Practical tips for surgical research: how to optimize patient recruitment. Canadian Journal of Surgery 2010, 53(3):205–210. [PMC free article] [PubMed] [Google Scholar]
  • 2.Treweek S, Mitchell E, Pitkethly M, Cook J, Kjeldstrom M, Taskila T, et al. : Strategies to improve recruitment to randomised controlled trials. Cochrane Database of Systematic Reviews 2010, Issue 1. [DOI] [PubMed] [Google Scholar]
  • 3.Treweek S, Bevan S, Bower P, Campbell M, Christie J, Clarke M, et al. : Trial Forge Guidance 1: what is a Study Within A Trial (SWAT)? Trials [Electronic Resource] 2018, 19(139). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 4.Madsen SM, Mirza MR, Holm S, Hilsted KL, Kampmann K, Riis P: Attitudes towards clinical research amongst participants and nonparticipants. J Intern Med 2002, 251(2):156–168. 10.1046/j.1365-2796.2002.00949.x [DOI] [PubMed] [Google Scholar]
  • 5.McCann S, Campbell M, Entwistle V: Recruitment to clinical trials: a meta-ethnographic synthesis of studies of reasons for participation. Journal of Health Services & Research Policy 2013, 18(4):233–241. [DOI] [PubMed] [Google Scholar]
  • 6.Hughes-Morley A, Young B, Hempel RJ, Russell IT, Waheed W, Bower P: What can we learn from trial decliners about improving recruitment? Qualitative study. Trials [Electronic Resource] 2016, 17(1):494. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 7.Moorcraft SY, Marriott C, Peckitt C, Cunningham D, Chau I, Starling N, et al. : Patients' willingness to participate in clinical trials and their views on aspects of cancer research: results of a prospective patient survey. Trials [Electronic Resource] 2016, 17:17. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 8.Carandang L, Goldsack JC, Sonnad SS: Key issues for elderly patients contemplating clinical trial participation. J Women Aging 2016, 28(5):412–417. 10.1080/08952841.2015.1018046 [DOI] [PubMed] [Google Scholar]
  • 9.Zvonareva O, Kutishenko N, Kulikov E, Martsevich S: Risks and benefits of trial participation: A qualitative study of participants' perspectives in Russia. Clinical Trials 2015, 12(6):646–653. 10.1177/1740774515589592 [DOI] [PubMed] [Google Scholar]
  • 10.Skea ZC, Treweek S, Gillies K: 'It's trying to manage the work': a qualitative evaluation of recruitment processes within a UK multicentre trial. BMJ Open 2017, 7(8):e016475 10.1136/bmjopen-2017-016475 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 11.Elliott D, Husbands S, Hamdy FC, Holmberg L, Donovan JL: Understanding and Improving Recruitment to Randomised Controlled Trials: Qualitative Research Approaches. European Urology 2017, 72(5):789–798. 10.1016/j.eururo.2017.04.036 [DOI] [PubMed] [Google Scholar]
  • 12.Bill-Axelson A, Christensson A, Carlsson M, Norlen BJ, Holmberg L: Experiences of randomization: interviews with patients and clinicians in the SPCG-IV trial. Scandinavian Journal of Urology & Nephrology 2008, 42(4):358–363. [DOI] [PubMed] [Google Scholar]
  • 13.Donovan JL, Paramasivan S, de Salis I, Toerien M: Clear obstacles and hidden challenges: understanding recruiter perspectives in six pragmatic randomised controlled trials. Trials [Electronic Resource] 2014, 15:5. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 14.Donovan JL, de Salis I, Toerien M, Paramasivan S, Hamdy FC, Blazeby JM: The intellectual challenges and emotional consequences of equipoise contributed to the fragility of recruitment in six randomized controlled trials. J Clin Epidemiol 2014, 67(8):912–920. 10.1016/j.jclinepi.2014.03.010 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 15.Lawton J, Kirkham J, White D, Rankin D, Cooper C, Heller S: Uncovering the emotional aspects of working on a clinical trial: a qualitative study of the experiences and views of staff involved in a type 1 diabetes trial. Trials [Electronic Resource] 2015, 16:3. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 16.Potter S, Mills N, Cawthorn SJ, Donovan J, Blazeby JM: Time to be BRAVE: is educating surgeons the key to unlocking the potential of randomised clinical trials in surgery? A qualitative study. Trials [Electronic Resource] 2014, 15:80. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 17.Donovan JL, Rooshenas L, Jepson M, Elliott D, Wade J, Avery K, et al. : Optimising recruitment and informed consent in randomised controlled trials: the development and implementation of the Quintet Recruitment Intervention (QRI). Trials [Electronic Resource] 2016, 17(1):283. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 18.Mills N, Gaunt D, Blazeby JM, Elliott D, Husbands S, Holding P, et al. : Training health professionals to recruit into challenging randomized controlled trials improved confidence: the development of the QuinteT RCT Recruitment Training Intervention. J Clin Epidemiol 2017. November 27. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 19.Guest G, Bunce A, Johnson L : How Many Interviews Are Enough? An Experiment with Data Saturation and Variability. Family Health International 2006, 18(1). [Google Scholar]
  • 20.Hennink MM, Kaiser BN, Marconi VC: Code Saturation Versus Meaning Saturation: How Many Interviews Are Enough?. Qual Health Res 2017, 27(4):591–608. 10.1177/1049732316665344 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 21.Francis JJ, Johnston M, Robertson C, Glidewell L, Entwistle V, Eccles MP, et al. : What is an adequate sample size? Operationalising data saturation for theory-based interview studies. Psychol Health 2010, 25(10):1229–1245. 10.1080/08870440903194015 [DOI] [PubMed] [Google Scholar]
  • 22.Srivastava A, Thomson SB : Framework Analysis: A Qualitative Methodology for Applied Policy Research. Journal of Administration & Governance 2009, 4(2). [Google Scholar]
  • 23.Gale NK, Heath G, Cameron E, Rashid S, Redwood S: Using the framework method for the analysis of qualitative data in multi-disciplinary health research. BMC Medical Research Methodology 2013, 13:117 10.1186/1471-2288-13-117 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 24.Murtagh J, Dixey R, Rudolf M: A qualitative investigation into the levers and barriers to weight loss in children: opinions of obese children. Arch Dis Child 2006, 91(11):920–923. 10.1136/adc.2005.085712 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 25.Elkington H, White P, Addington-Hall J, Higgs R, Pettinari C: The last year of life of COPD: a qualitative study of symptoms and services. Respir Med 2004, 98(5):439–445. 10.1016/j.rmed.2003.11.006 [DOI] [PubMed] [Google Scholar]
  • 26.Ayatollahi H, Bath PA, Goodacre S: Factors influencing the use of IT in the emergency department: a qualitative study. Health Informatics Journal 2010, 16(3):189–200. 10.1177/1460458210377480 [DOI] [PubMed] [Google Scholar]
  • 27.Elwyn G, Seagrove A, Thorne K, Yee Cheung W : Ethics and research governance in a multicentre study: add 150 days to your study protocol. BMJ 2005, 330:847 10.1136/bmj.330.7495.847 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 28.Tully J, Ninis N, Booy R, Viner R: The new system of review by multicentre research ethics committees: prospective study. BMJ 2000, 320(7243):1179–1182. 10.1136/bmj.320.7243.1179 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 29.Ahmed AH, Nicholson KG: Delays and diversity in the practice of local research ethics committees. J Med Ethics 1996, 22(5):263–266. 10.1136/jme.22.5.263 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 30.Lux A, Edwards S, Osborne J: Responses of local ethics committees to a study with approval from a multicentre research ethics committee. BMJ 2000, 320:1182–1183. 10.1136/bmj.320.7243.1182 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 31.Jones A, Bamford B: The other face of research governance. BMJ 2004, 329:280–281. 10.1136/bmj.329.7460.280 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 32.Torgerson D, Dumville J: Ethics review in research: Research governance also delays research. BMJ 2004, 328:710. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 33.Galbraith N, Hawley C, De-Souza V: Research governance: research governance approval is putting people off research. BMJ 2006, 332:238. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 34.Angell EL, Jackson CJ, Ashcroft RE, Bryman A, Windridge K, Dixon-Woods M: Is 'inconsistency' in research ethics committee decision-making really a problem? An empirical investigation and reflection. Clinical Ethics 2007, 2:92–99. [Google Scholar]
  • 35.Greenhalgh T, Robert G, Macfarlane F, Bate P, Kyriakidou O: Diffusion of innovations in service organizations: Systematic review and recommendations. The Milbank Quarterly 2004, 82(4):581–629. 10.1111/j.0887-378X.2004.00325.x [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 36.Badgett RG, Pugh MJV: Comment on "Diffusion of innovations in service organizations: systematic review and recommendations". Milbank Q 2005, 83(1):177–178. 10.1111/j.0887-378X.2005.340_1.x [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 37.Kameda T, Ohtsubo Y, Masanoi T: Centrality in sociocognitive networks and social influence: an illustration in a group decision-making context. J Pers Soc Psychol 1997, 73:296–309. [Google Scholar]
  • 38.Shaw S, Boynton PM, Greenhalgh T: Research governance: where did it come from, what does it mean? J R Soc Med 2005, 98(11):496–502. 10.1258/jrsm.98.11.496 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 39.Duley L, Gillman A, Duggan M, Belson S, Knox J, McDonald A, et al. : What are the main inefficiencies in trial conduct: a survey of UKCRC registered clinical trials units in the UK. Trials [Electronic Resource] 2018, 19(1):15. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 40.Honeycutt A: Maximising the employee productivity factor. International Journal of Manpower, 10(4):24–27. [Google Scholar]
  • 41.Gorringe JAL: Initial preparations for clinical trials In Principles and Practice of Clinical Trials. Edited by Harris EL, Fitzgerald JD. Edinburgh: Churchill Livingstone; 1970:41–46. [Google Scholar]
  • 42.Lasagna L: The pharmaceutical revolution forty years later. Rev Farmacol Clin Exp 1984, 1:157–161. [Google Scholar]
  • 43.Bearman JE, Loewenson RB, Gullen WH: Muench's postulates, laws, and corollaries (biometrics note No.4): Bethesda (MD), USA: Office of Biometry and Epidemiology, National Eye Institute, National Institutes of Health; 1974. [Google Scholar]
  • 44.White D, Hind D: Projection of participant recruitment to primary care research: a qualitative study. Trials [Electronic Resource] 2015, 16:473 10.1186/s13063-015-1002-9 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 45.Tversky A, Kahneman D: Judgment under Uncertainty: Heuristics and Biases. Science 1974, 185(4157):1124–1131. 10.1126/science.185.4157.1124 [DOI] [PubMed] [Google Scholar]
  • 46.Kruger J: Lake Wobegon be gone! The "below-average effect" and the egocentric nature of comparative ability judgments. Journal of Personality & Social Psychology 1999, 77(2):221–232. [DOI] [PubMed] [Google Scholar]
  • 47.Mapstone J, Elbourne D, Roberts IG: Strategies to improve recruitment to research studies. Cochrane Database of Systematic Reviews, 2007, Issue 2 Art. No.: MR000013. 10.1002/14651858.MR000013.pub3 [DOI] [PubMed] [Google Scholar]
  • 48.Gardner HR: Making clinical trials more efficient: consolidating, communicating and improving knowledge of participant recruitment interventions. Unpublished PhD thesis, 2018, University of Aberdeen.
  • 49.Whitham D, Turzanski J, Bradshaw L, Clarke M, Culliford L, Duley L, et al. : Development of a standardised set of metrics for monitoring site performance in multicentre randomised trials: a Delphi study. Trials 2018, 19:557 10.1186/s13063-018-2940-9 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 50.Smith S, Noble H: Bias in research. Evidence Based Nursing 2014, 17(4):100–101. 10.1136/eb-2014-101946 [DOI] [PubMed] [Google Scholar]
  • 51.Hofer A, Hummer M, Huber R, Kurz M, Walch T, Fleischhacker WW: Selection bias in clinical trials with antipsychotics. J Clin Psychopharmacol 2000, 20(6):699–702. 10.1097/00004714-200012000-00019 [DOI] [PubMed] [Google Scholar]
  • 52.Clarke M, Savage G, Maguire L, McAneney H: The SWAT (study within a trial) programme; embedding trials to improve the methodological design and conduct of future research. Trials [Electronic Resource] 2015, 16 (Suppl 2):209. [Google Scholar]
  • 53.Sahlqvist S, Song Y, Bull F, Adams F, Preston J, Ogilvie D and the iConnect consortium: Effect of questionnaire length, personalisation and reminder type on response rate to a complex postal survey: a randomised controlled trial. BMC Medical Research Methodology 2011, 11(62). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 54.Harrison S, Henderson J, Alderdice F, Quigley MA: Methods to increase response rates to a population-based maternity survey: a comparison of two pilot studies. BMC Medical Research Methodology 2019, 19(65). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 55.Edwards PJ, Roberts I, Clarke MJ, Diguiseppi C, Wentz R, Kwan I, et al. : Methods to increase response to postal and electronic questionnaires. Cochrane Database of Systematic Reviews 2009, (3):MR000008 10.1002/14651858.MR000008.pub4 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 56.The Northern Ireland Hub for Trials Methodology Research: SWAT Repository Store. Available from: https://www.qub.ac.uk/sites/TheNorthernIrelandNetworkforTrialsMethodologyResearch/SWATSWARInformation/Repositories/SWATStore/ [last accessed 26 September 2019]

Decision Letter 0

Kathleen Finlayson

9 Sep 2019

PONE-D-19-17368

Using evidence when planning for trial recruitment: An international perspective from time-poor trial recruiters and designers

PLOS ONE

Dear Dr Gardner,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

==============================

In particular, the reviewers would like further information to justify the study, with some background on recruitment models, and discussion of the potential risk of bias that may occur with recruitment difficulties. Further information on the possible sources of uncertainty in recruitment would also benefit readers. Please also carefully address the comment on the risk of disclosure of identity from some comments; further details are below.

==============================

We would appreciate receiving your revised manuscript by Oct 24 2019 11:59PM. When you are ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter.

To enhance the reproducibility of your results, we recommend that if applicable you deposit your laboratory protocols in protocols.io, where a protocol can be assigned its own identifier (DOI) such that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). This letter should be uploaded as separate file and labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. This file should be uploaded as separate file and labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. This file should be uploaded as separate file and labeled 'Manuscript'.

Please note while forming your response, if your article is accepted, you may have the opportunity to make the peer review history publicly available. The record will include editor decision letters (with reviews) and your responses to reviewer comments. If eligible, we will contact you to opt in or out.

We look forward to receiving your revised manuscript.

Kind regards,

Kathleen Finlayson

Academic Editor

PLOS ONE

Journal Requirements:

When submitting your revision, we need you to address these additional requirements.

Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at

http://www.journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and http://www.journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

1. Please include a copy of the interview guide used in the study, in both the original language and English, as Supporting Information, or include a citation if it has been published previously.

2. Please amend either the title on the online submission form (via Edit Submission) or the title in the manuscript so that they are identical.

3. Please amend your manuscript to include your abstract after the title page.

Additional Editor Comments (if provided):

Thank you for submitting your manuscript for review. This is an interesting topic, and as such, I encourage you to consider the extensive feedback provided from the reviewers below.


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Partly

Reviewer #2: Yes

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

Reviewer #2: Yes

**********

3. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #2: Yes

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: This manuscript describes interviews with individuals involved with clinical trial recruitment - the planning and design phase. The rationale provided is a bit weak. It does not reference recruitment models. The methods are adequately described; missing is the typical length of the interview. Results are presented with sufficient supportive quotes provided. The discussion is wide ranging and includes areas not addressed within the interviews/data/results. It sounds a bit 'preachy' in places and reads as if the authors had conclusions in mind (e.g. use of evidence to inform selected strategies; its time we create a culture change). The discussion includes items that might better be presented in results and could be significantly shortened.

A typo in the Results section: The encompass should be They encompass

Reviewer #2: This manuscript presents the results of a qualitative study of professionals who run clinical trial recruitment efforts. There were 23 interviews conducted. The interviews were semi-structured, in-depth interviews. The results were analyzed using the Framework method. Topics addressed include: who is responsible for planning recruitment, detailed planning procedures, using empirical evidence, barriers to planning, and stress on those responsible for recruitment. Each topic is addressed with quotations from those who were interviewed. In general, a lack of resources (especially time) makes it difficult for those planning recruitment to have sufficiently detailed plans to respond to issues in recruitment. Recruitment often lags behind expected results. Recruiters respond with ad hoc solutions, but the work is stressful as a result.

This paper presents interesting results. It raises important questions about how clinical trials are typically run. I am a survey methodologist by training. As such, it is hard for me to evaluate this paper with respect to other papers on a similar topic. However, as a survey methodologist, I see a number of points of overlap with concerns in our field. There are also some differences. Expanding the paper in several areas would reinforce the importance of the results.

Major Comments

1. As with recruitment to clinical trials, surveys are facing rising nonresponse rates. This has been true since the 1990s and an area of concern. This manuscript argues that the major threat is to the power of studies. Clinical trials with recruitment difficulties might not meet their recruitment targets and, therefore, would be underpowered for drawing conclusions about treatment effectiveness. While this certainly is a risk, in the survey context, we have become more concerned with how this difficulty might lead to bias in estimates. This manuscript doesn't really discuss the potential for bias. It would be useful to review the pertinent literature on this question for clinical trials. I found this citation:

Hofer, A., M. Hummer, R. Huber, M. Kurz, T. Walch and W. W. Fleischhacker (2000). "Selection Bias in Clinical Trials with Antipsychotics." Journal of Clinical Psychopharmacology 20(6): 699-702.

As I am not an expert on the risk of bias in recruitment to clinical trials, I would like to hear more about this from a clinical trial perspective.

This raised a larger question that the field of survey methodology has been confronting for several years: what is quality in this context? The authors seem to suggest that meeting targets is the sole measure of quality. Are there other relevant measures? In survey methodology, we have developed measures related to the risk of nonresponse bias, but clinical trials might require other measures. This issue may be larger than this paper.

2. A second area of overlap between the two fields is the problem of uncertainty in the design. In surveys, the ability to predict how survey production will go has been greatly hampered by the general trend of declining response rates. What are the reasons for the uncertainty in recruitment to clinical trials? The paper quotes one of the interviewees as saying "the number of cases promised in any clinical study must be divided by a factor of at least 10." Is this due to a changing environment or is it because the designs in proposals to funders are overly optimistic? Some description of the sources of this uncertainty would be helpful.

In surveys, a technique known as responsive survey design was developed that planned for uncertainty. It's akin to risk management. Design changes are pre-planned and then triggered when indicators (such as response rates) meet (or don't meet) specified targets. I provide a key citation:

Groves, R. M. and S. G. Heeringa (2006). "Responsive design for household surveys: tools for actively controlling survey errors and costs." Journal of the Royal Statistical Society: Series A (Statistics in Society) 169(3): 439-457.

3. As an outsider, it seems that an important question for this field is how to systematize a body of knowledge. Should there be more academic research in the area of methods of recruitment to clinical trials? Could new journals be organized? Is there sufficient professional training available? Do people working in this area have conferences where they can share their knowledge? All of these would be useful and might help stimulate more systematic research on this topic and adoption of new techniques in practice. The conclusion of the paper makes some very general suggestions. "Trial Forge" is mentioned as one program aimed at improving design and implementation of studies. It would be helpful to make some more specific recommendations.

Survey methodology does have a fair amount of published research on recruitment methods and data quality. This research is clustered in a few journals that are associated with the profession (e.g. Public Opinion Quarterly). The role of questionnaire length is one such topic. The use of incentives is the subject of a huge amount of research. Some of this might be helpful for persons running recruitment to clinical trials.

4. General comment on disclosure risk. You have a relatively small sample. I worry that the information in Table 1, plus the comments with descriptive tag lines, could lead to identification of some of the participants. I can't say for sure, but wonder if someone in this community could identify colleagues and determine which quotes were theirs. For example, in Table 1, I can see that there is one person from a particular location. With that knowledge, and knowing who works there, could I figure out who some of the quotes were from?

Minor Comments

1. Sample size. I'm not convinced by a concept of "data saturation" stated in such a general way. I think the sample size should be a function of the thing being studied. For this research, I'm not very worried about justifying the sample size.

2. Organization of paper. The heads and subheads are confusing. It's difficult to see which sections are embedded in which.


6. PLOS authors have the option to publish the peer review history of their article. If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: Yes: James Wagner


While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email us at figures@plos.org. Please note that Supporting Information files do not need this step.

PLoS One. 2019 Dec 10;14(12):e0226081. doi: 10.1371/journal.pone.0226081.r002

Author response to Decision Letter 0


17 Oct 2019

Title: Using evidence when planning for trial recruitment: An international perspective from time-poor trialists

Authors: Heidi Gardner, Shaun Treweek, Katie Gillies

Manuscript ID: PONE-D-19-17368

11th October 2019

Dear Editor,

Thank you very much for considering our manuscript for publication in PLOS One. We appreciate the reviewer’s considered responses and are pleased to have been able to strengthen the manuscript by incorporating their points into our revised submission.

The list below addresses each point raised by the reviewers in turn and identifies where new information to address the comment is included in the revised article. We have provided an amended manuscript with the changes tracked.

I look forward to hearing from you.

Best wishes,

Heidi Gardner

Editor comments:

1. Please include a copy of the interview guide used in the study, in both the original language and English, as Supporting Information, or include a citation if it has been published previously.

We have added in interview guides used for both stakeholder groups (Recruiters and Designers), as S1 and S2 files.

2. Please amend either the title on the online submission form (via Edit Submission) or the title in the manuscript so that they are identical.

Thank you for highlighting this. We have changed the title on the manuscript to ensure that it matches up with the online submission form.

3. Please amend your manuscript to include your abstract after the title page.

Amended as requested.

Reviewer comments:

Reviewer #1: This manuscript describes interviews with individuals involved with clinical trial recruitment - the planning and design phase. The rationale provided is a bit weak. It does not reference recruitment models.

With this project we aimed to explore if and how evidence is used in the process of recruitment planning by those tasked with designing or implementing recruitment strategies. We did not want to impose models on participants but rather see what they themselves raised. As it was, participants did not highlight specific recruitment models in their interviews and we therefore do not draw attention to details of recruitment models in the introduction and rationale of the manuscript. In addition, there is a very limited number of robustly evaluated strategies to support participant recruitment (ST and HG lead two systematic reviews of recruitment strategy evaluations), and we made the conscious decision not to draw attention to models and strategies that are not backed by robust evidence. As we found in our interviews, even where there is evidence to support a recruitment approach, our interviewees did not look for it.

The methods are adequately described; missing is the typical length of the interview.

We have provided details regarding interview length at the beginning of the Results section, “Twenty-five trialists from the Recruiter and Designer stakeholder groups were invited to secure 23 interviews which lasted between 32 and 77 minutes (median: 58 minutes).” We believe this is a sufficient level of detail for the manuscript.

Results are presented with sufficient supportive quotes provided.

The discussion is wide ranging and includes areas not addressed within the interviews/data/results. It sounds a bit 'preachy' in places and reads as if the authors had conclusions in mind (e.g. use of evidence to inform selected strategies; it’s time we create a culture change). The discussion includes items that might better be presented in results and could be significantly shortened.

We have shortened the Discussion as suggested from 7 pages double-spaced to just under 6. Half a page of this is a response to Reviewer 2 comment #1. We have also removed some of the parts that could be called 'preachy', and avoided duplicating material that already appears in the Results.

A typo in the Results section: The encompass should be They encompass

Amended as requested.

Reviewer #2:

Major Comments

1. As with recruitment to clinical trials, surveys are facing rising nonresponse rates. This has been true since the 1990s and an area of concern. This manuscript argues that the major threat is to the power of studies. Clinical trials with recruitment difficulties might not meet their recruitment targets and, therefore, would be underpowered for drawing conclusions about treatment effectiveness. While this certainly is a risk, in the survey context, we have become more concerned with how this difficulty might lead to bias in estimates. This manuscript doesn't really discuss the potential for bias. It would be useful to review the pertinent literature on this question for clinical trials. I found this citation:

Hofer, A., M. Hummer, R. Huber, M. Kurz, T. Walch and W. W. Fleischhacker (2000). "Selection Bias in Clinical Trials with Antipsychotics." Journal of Clinical Psychopharmacology 20(6): 699-702.

As I am not an expert on the risk of bias in recruitment to clinical trials, I would like to hear more about this from a clinical trial perspective.

The reviewer raises an interesting and important point. This is not strictly within the scope of our project, but we have added a section on ‘Looking beyond improved recruitment rates’ to the Discussion section to ensure that the reader is aware of this issue:

This project aimed to provide researchers with evidence about if and how the recruitment process is planned. We hope that this information, along with a significant body of additional work that has been planned, conducted and published, can be used to ultimately improve trial recruitment figures. We focus on the issue of low participant recruitment not because we believe it will solve every issue that a clinical trial may have, but because it is an avoidable issue that too many trials encounter.

The potential for selection/participant bias [49] is a problem that we need to be aware of when considering the participants that we recruit [50], and whether they are retained until the end of the trial. Planning inclusive recruitment processes that give the general population of patients to whom the trial’s results may apply the opportunity to participate is imperative to ensuring that trials provide useful results.

This raised a larger question that the field of survey methodology has been confronting for several years: what is quality in this context? The authors seem to suggest that meeting targets is the sole measure of quality. Are there other relevant measures? In survey methodology, we have developed measures related to the risk of nonresponse bias, but clinical trials might require other measures. This issue may be larger than this paper.

We agree with the issue raised by the reviewer – there is certainly a larger question regarding the concept of quality, and how it can be measured. As the reviewer highlights, this issue is larger than this paper, but we have added the ‘Looking beyond improved recruitment rates’ part of the Discussion section which highlights the point raised in the comment above and provides a comment on applicability of trial results.

2. A second area of overlap between the two fields is the problem of uncertainty in the design. In surveys, the ability to predict how survey production will go has been greatly hampered by the general trend of declining response rates. What are the reasons for the uncertainty in recruitment to clinical trials? The paper quotes one of the interviewees as saying, "the number of cases promised in any clinical study must be divided by a factor of at least 10." Is this due to a changing environment or is it because the designs in proposals to funders are overly optimistic? Some description of the sources of this uncertainty would be helpful.

In surveys, a technique known as responsive survey design was developed that planned for uncertainty. It's akin to risk management. Design changes are pre-planned and then triggered when indicators (such as response rates) meet (or don't meet) specified targets. I provide a key citation:

Groves, R. M. and S. G. Heeringa (2006). "Responsive design for household surveys: tools for actively controlling survey errors and costs." Journal of the Royal Statistical Society: Series A (Statistics in Society) 169(3): 439-457.

The quote that the reviewer highlights is from Muench’s Third Law which is referenced within the text. That section (Managing Expectations within the Discussion) then goes on to describe why these overly optimistic figures are so often used when forecasting recruitment rates – inappropriately anchoring estimates by focussing on positive past experiences, failing to consider significant differences between studies, and the ‘Lake Wobegon Effect’ where individuals overestimate their achievements relative to the average.

3. As an outsider, it seems that an important question for this field is how to systematize a body of knowledge. Should there be more academic research in the area of methods of recruitment to clinical trials? Could new journals be organized? Is there sufficient professional training available? Do people working in this area have conferences where they can share their knowledge? All of these would be useful and might help stimulate more systematic research on this topic and adoption of new techniques in practice. The conclusion of the paper makes some very general suggestions. "Trial Forge" is mentioned as one program aimed at improving design and implementation of studies. It would be helpful to make some more specific recommendations.

Hearing the perspective of this reviewer, “an outsider”, is incredibly useful here. There are clusters of journals and conferences that we have not mentioned as we incorrectly assumed that readers would know about them. We have now added specific details of these resources to the Conclusion section in order to strengthen our recommendations:

This study has highlighted the complexity of planning trial recruitment strategies. The work involved can be lengthy and is often rushed as a result of time pressure. It is important that trialists, regulators, and funders recognise this process as an essential part of a trial’s workload, and that as a community we seek to alleviate the barriers and enhance the facilitators to effective recruitment planning. When trialists experience poor recruitment, they tend to implement multiple strategies based on experiential evidence, meaning that robust empirical evidence is rarely generated. The problem is not that there is nowhere for manuscripts covering these kinds of topics to be published (Trials, PLOS One, BMJ Open, and the Journal of Clinical Epidemiology have all published trials methodology research); the problem is that trialists are time-poor and therefore struggle to find the time to test the methods that they are implementing. Where robust evidence does exist, we must all work to ensure that trialists know about, have unobstructed access to, and use strategies that have been rigorously evaluated and shown to improve recruitment figures. Often these strategies are shared at relevant conferences such as the biennial International Clinical Trials Methodology Conference (UK-based) and the annual Society for Clinical Trials Meeting, but as these events require in-person attendance, dissemination and expertise sharing are limited to those who can attend. Video recording, or remote attendance using online conferencing tools, would be one way to improve the accessibility of these events, thereby facilitating knowledge exchange and sharing of expertise with trialists around the world.

Survey methodology does have a fair amount of published research on recruitment methods and data quality. This research is clustered in a few journals that are associated with the profession (e.g. Public Opinion Quarterly). The role of questionnaire length is one such topic. The use of incentives is the subject of a huge amount of research. Some of this might be helpful for persons running recruitment to clinical trials.

As above, we agree that this needs to be made clear. We have added the following text to the Conclusion section:

If and when evidence-based approaches have been exhausted, we need to encourage implementation of embedded studies that effectively generate evidence that is useful to the wider trials community. There is a significant body of literature on survey methodology covering topics such as incentives and questionnaire length/content, which may act as a source of inspiration for evaluating strategies that have not yet been tested in the trial recruitment sphere. In addition, the Northern Ireland Hub for Trials Methodology Research holds the SWAT repository [52], which provides details (including outcome measures and analysis plans) of ongoing SWATs that can be implemented by trial teams. Once multiple trials have tested the same SWAT, the results can be pooled to shed light on if and how the intervention operates across trials in a variety of contexts.

4. General comment on disclosure risk. You have a relatively small sample. I worry that the information in Table 1, plus the comments with descriptive tag lines, could lead to identification of some of the participants. I can't say for sure, but wonder if someone in this community could identify colleagues and determine which quotes were theirs. For example, in Table 1, I can see that there is one person from a particular location. With that knowledge, and knowing who works there, could I figure out who some of the quotes were from?

We appreciate the reviewer’s concern regarding potential for identification. During the consent process we explained how quotes may be used, and participants were given the opportunity for their quotes to be omitted from any publications or presentations resulting from this work. Only 4 of the 23 interviewees chose not to give consent for their anonymised quotes to be used; the remaining 19 participants were happy for their quotes to be used. We have adhered to their wishes.

In addition, it is common for individuals in the trials community (particularly those in our Designer stakeholder group) to hold multiple roles, e.g. Principal Investigator and Clinician (Participant 2), or Professor and Chief Investigator (Participant 16). This, combined with our relatively small sample size, would make it very difficult for someone in this community to identify colleagues. We have chosen to keep the descriptive tag lines as they are (role, stakeholder group, location, participant number), as each of these descriptors adds a layer of context to the quote being presented; removing location details, for example, would limit that context and reduce its value for the reader.

Minor Comments

1. Sample size. I'm not convinced by a concept of "data saturation" stated in such a general way. I think the sample size should be a function of the thing being studied. For this research, I'm not very worried about justifying the sample size.

We agree. We have chosen to keep the ‘Sample size’ section in the manuscript for the benefit of readers that may be less familiar with qualitative research and/or may disagree with this perspective.

2. Organization of paper. The heads and subheads are confusing. It's difficult to see which sections are embedded in which.

We have amended the headings and subheadings to reflect PLOS guidelines.

Attachment

Submitted filename: Response to Reviewers.docx

Decision Letter 1

Kathleen Finlayson

20 Nov 2019

Using evidence when planning for trial recruitment: An international perspective from time-poor trialists

PONE-D-19-17368R1

Dear Dr. Gardner,

We are pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it complies with all outstanding technical requirements.

Within one week, you will receive an e-mail containing information on the amendments required prior to publication. When all required modifications have been addressed, you will receive a formal acceptance letter and your manuscript will proceed to our production department and be scheduled for publication.

Shortly after the formal acceptance letter is sent, an invoice for payment will follow. To ensure an efficient production and billing process, please log into Editorial Manager at https://www.editorialmanager.com/pone/, click the "Update My Information" link at the top of the page, and update your user information. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to enable them to help maximize its impact. If they will be preparing press materials for this manuscript, you must inform our press team as soon as possible and no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

With kind regards,

Kathleen Finlayson

Academic Editor

PLOS ONE


Acceptance letter

Kathleen Finlayson

27 Nov 2019

PONE-D-19-17368R1

Using evidence when planning for trial recruitment: An international perspective from time-poor trialists

Dear Dr. Gardner:

I am pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please notify them about your upcoming paper at this point, to enable them to help maximize its impact. If they will be preparing press materials for this manuscript, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.

For any other questions or concerns, please email plosone@plos.org.

Thank you for submitting your work to PLOS ONE.

With kind regards,

PLOS ONE Editorial Office Staff

on behalf of

Dr. Kathleen Finlayson

Academic Editor

PLOS ONE

Associated Data

    This section collects any data citations, data availability statements, or supplementary materials included in this article.

    Supplementary Materials

    S1 File. Topic guide for designers.

    (DOCX)

    S2 File. Topic guide for recruiters.

    (DOCX)


    Data Availability Statement

    All relevant data are within the paper and its Supporting Information files.

