Abstract
Problem
Recruitment to randomised trials is often difficult, and many important trials are not mounted because recruitment is thought to be “impossible.”
Design
Controversial ProtecT (prostate testing for cancer and treatment) trial embedded within qualitative research.
Background and setting
Screening for prostate cancer is hotly debated, and evidence from trials about the effectiveness of treatments (surgery, radiotherapy, and monitoring) is lacking. Mounting a treatment trial is controversial because of past failures and concerns that differences in the complications of treatment, but not in survival, make randomisation unacceptable to patients and clinicians, particularly for a trial including monitoring.
Strategy for change
In-depth interviews explored interpretation of study information. Audiotape recordings of recruitment appointments enabled scrutiny of content and presentation of study information by recruiters. Initial qualitative findings showed that recruiters had difficulty discussing equipoise and presenting treatments equally; they unknowingly used terminology that was misinterpreted by participants. Findings were used to determine changes to content and presentation of information.
Effects of change
Changes to the order of presenting treatments encouraged emphasis on equivalence, misinterpreted terms were avoided, the non-radical arm was redefined, and randomisation and clinical equipoise were presented more convincingly. The randomisation rate increased from 40% to 70%, all treatments became acceptable, and the three arm trial became the preferred design.
Lessons learnt
Changes to information and presentation resulted in efficient recruitment acceptable to patients and clinicians. Embedding this controversial trial within qualitative research improved recruitment. Such methods probably have wider applicability and may enable even the most difficult evaluative questions to be tackled.
Background
The randomised controlled trial is the widely acknowledged design of choice for evaluating the effectiveness of medical and surgical interventions,1 but recruitment is often much lower than anticipated.2–4 The methodological literature is almost exclusively statistical and epidemiological, and very little of it is concerned with the conduct of trials or the particular demands that they place on trialists and participants. Problems with mounting surgical trials are well known,5 and systematic reviews have identified a range of barriers for clinicians and patients.2,6 Nested studies within ongoing trials could help to elucidate recruitment difficulties.6
The ProtecT (prostate testing for cancer and treatment) feasibility study provided such an opportunity. The study was controversial; although consensus existed that a trial of treatment was urgently needed, intense debate continued about whether it could be mounted. This was because of the differences in complications of treatment (but not in survival) between radical surgery, radiotherapy, and monitoring, and the evidence from previous failures, including a Medical Research Council trial (PR06) and small scale attempts to randomise.7,8
In the ProtecT study, men aged 50-69 were invited to a nurse led clinic in general practice, where they were given detailed information about the implications of testing for prostate specific antigen, uncertainties about treatments, and the need for a treatment trial. If the men consented, blood was taken for prostate specific antigen testing. Participants with abnormal results were invited to undergo further diagnostic testing. Men diagnosed with localised prostate cancer were randomised in a nested trial of recruitment strategies to see a nurse or urologist for an “information” appointment. The men were given details about the treatments and the need for a randomised trial and were asked to consent to randomisation to either a three arm (surgery, radiotherapy, monitoring) or a two arm (surgery, radiotherapy) trial. If they refused randomisation, a patient led preference for treatment was agreed. A multicentre research ethics committee gave ethical approval.
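For readers who find the recruitment pathway easier to follow as pseudocode, the outline below restates the sequence of decision points just described. It is a minimal sketch only: the class, function, and field names are our own, clinical criteria (eligibility, what counts as an abnormal prostate specific antigen result) are omitted, and simple random allocation is assumed purely for illustration; it is not the study's actual allocation procedure.

```python
# Illustrative outline of the ProtecT recruitment pathway described in the text.
# Names, boolean fields, and the use of simple random allocation are assumptions
# made for this sketch; the study's real eligibility criteria and allocation
# procedures are not reproduced here.
import random
from dataclasses import dataclass

@dataclass
class Man:
    consents_to_psa_test: bool
    abnormal_psa_result: bool
    localised_cancer_diagnosed: bool
    consents_to_randomisation: bool
    preferred_treatment: str = ""

def recruit(man: Man, three_arm: bool = True) -> str:
    """Follow one participant through the pathway and report the outcome."""
    if not man.consents_to_psa_test:
        return "declined PSA testing at nurse led clinic"
    if not man.abnormal_psa_result:
        return "normal PSA result: no further testing"
    if not man.localised_cancer_diagnosed:
        return "further diagnostic testing: no localised cancer found"

    # Nested trial of recruitment strategies: the "information" appointment
    # is delivered by either a nurse or a urologist.
    recruiter = random.choice(["nurse", "urologist"])

    if man.consents_to_randomisation:
        arms = (["surgery", "radiotherapy", "monitoring"] if three_arm
                else ["surgery", "radiotherapy"])
        return f"randomised by {recruiter} to {random.choice(arms)}"
    # If randomisation is refused, a patient led treatment preference is agreed.
    return f"preference agreed with {recruiter}: {man.preferred_treatment}"

print(recruit(Man(True, True, True, True)))
```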
Strategy for change
We used qualitative research methods to investigate the process of recruitment:
In-depth interviews with men after receipt of prostate specific antigen results and diagnosis—to elicit interpretations of study information and experiences of the study, including treatment preferences (LB with JD)
Detailed examination of pairs of audiotaped recruitment (“information”) appointments and follow up interviews to examine the delivery of information by recruiters and its interpretation by patients (NM, MS, JD, AJ)
Detailed examination of other information appointments (all were routinely audiotaped) to investigate reasons for different levels of recruitment between centres and over time (JD).
All interviews were semistructured and carried out by using a checklist of topics to ensure that the same areas were covered but allowing issues to emerge that were of importance to the men themselves. Interviews and information appointments were audiotaped and fully transcribed. We analysed the data by using the methods of “constant comparison,” in which transcripts are scrutinised for similar themes and then examined in detail within themes.9,10
We used early findings to devise presentation strategies, which were implemented initially in one centre. We reproduced the findings and recommendations for changes to the content and presentation of information in three documents and circulated them to recruiters in June, October, and November 2000, and we developed a training programme and delivered it to recruiters. JD evaluated the impact of the documents and training by listening to subsequent information appointments. Recruitment (consent to randomisation and acceptance of allocation) was calculated regularly.
Effects of change
The rate of consent to randomisation changed over time as the findings from the qualitative research were introduced through the circulation of documents and delivery of training (table), increasing from 30-40% in May 2000 to 70% by May 2001. The findings from the qualitative research had an impact on the conduct of the trial in four major ways.
(1) Organisation of study information
Study information was based on the results of the team's systematic review of the literature,11 and treatments were presented in a standard order: surgery, then radiotherapy, and finally monitoring. Recordings of information appointments and patient interviews in the early part of the study showed clearly that the treatments were not presented or interpreted equally. Surgery and radiotherapy were portrayed in detail as aggressive, curative treatments, whereas monitoring was portrayed only briefly as “watchful waiting” (box 1). Recruiters were asked to present the treatments in a different order: (1) monitoring, (2) surgery, and (3) radiotherapy, describing their advantages and disadvantages in equivalent detail.
Box 1. How treatments were presented
(2) Terminology used in study information
Patients may interpret trial and clinical terminology differently than intended.12,13 For example, “trial” was sometimes interpreted as monitoring (“try and see”), and recruiters sometimes assumed that patients had refused randomisation when they were really questioning monitoring. Also, the phrase intended to reflect evidence of good 10 year survival (“the majority of men with prostate cancer will be alive 10 years later”) was interpreted as an (unexpected) suggestion that some might be dead in 10 years. Recruiters were thus asked to replace “trial” with “study” and to present survival in terms of “most men with prostate cancer live long lives even with the disease.”
(3) Specification and presentation of the non-radical arm
It was quickly apparent that the non-radical treatment option caused difficulties for patients and recruiters. “Conservative monitoring” was meant to emphasise regular review and lack of radical intervention. Recruiters often called it “watchful waiting,” but patients interpreted this as “no treatment,” as if clinicians would “watch while I die” (box 1a).
In June 2000 (document 1) the non-radical arm was renamed “monitoring” and redefined to involve three monthly or six monthly prostate specific antigen tests, with intervention if required or requested. Recruiters emphasised the slow growing nature of most prostate cancers and presented monitoring first. Men were clearly informed that the risk with monitoring was that future radical treatment might not be possible if the tumour progressed or the patient was no longer fit enough for it. An immediate impact was seen, with some patients accepting monitoring, but scrutiny of information appointments showed that some recruiters continued to present it as “inactive” compared with the radical treatments (box 1b).
Documents 2 and 3 included examples of “good” and “not so good” presentation of information and renamed the non-radical arm “active monitoring,” emphasising scrutiny of regular prostate specific antigen results so that radical treatments could remain an option for men who wanted them if the cancer progressed. Recruiting staff were then able to express confidence in this treatment option (box 2).
Box 2. Presentation of “active monitoring”
(4) Presentation of randomisation and clinical equipoise
Recruiters and patients also had difficulty with randomisation and clinical equipoise. Each document contained guidance on this. We found it necessary to emphasise that recruiters must be genuinely uncertain about the best treatment, believe the patient to be suitable for all three treatments, and be confident in these beliefs. Patients commonly expressed lay views that cancer should be removed, told stories of friends or relatives who had died of advanced disease, or brought media information that was often biased in favour of radical treatments. Recruiters were encouraged to elicit these views and then discuss differences with ProtecT study information, explain that randomisation offered a way of resolving the dilemma of treatment choice, attempt randomisation before the end of the information appointment, and inform patients that they could have time to consider whether the allocated treatment was acceptable.
Lessons learnt
Qualitative research methods are increasingly included in health services research, conventionally to help in the interpretation of quantitative results or understanding of trials.12,14,15 In the ProtecT feasibility study we inverted the normal relations between these methods and embedded the randomised trial within the qualitative study. We showed that the integration of qualitative research methods allowed us to understand the recruitment process and elucidate the changes necessary to the content and delivery of information to maximise recruitment and ensure effective and efficient conduct of the trial. The routine recording of information appointments was crucial: the content and method of delivery of the information provided the context within which the men's interpretations of the information could be set.
The qualitative research illuminated four ways in which study information was having a negative impact on the study. Some of the issues raised were simple, such as reordering the presentation of treatments and avoiding terms that had particular and unanticipated meanings for patients. These “simple” issues would probably not have become apparent without the qualitative research. “Watchful waiting,” for example, is commonly used to describe a non-interventionist treatment. In lay terms, this conveys an impression of wilful neglect, in which the disease is watched and everyone waits for an event—death. It was only when the non-radical arm was redefined as “active monitoring” that patients and clinicians gained confidence in it as a legitimate option. Whether the term is more acceptable in other countries, such as the United States, needs investigation.
Other issues that emerged were more complex. It has been shown elsewhere that patients have difficulty with randomisation.15–17 In this study most men could recall and understand randomisation, but they often found it difficult to accept. Equipoise was particularly difficult but has received remarkably little examination in the literature. We found it essential that recruiting staff were able to express confidently that men were eligible for all three treatments, that the most effective treatment was unknown, that a trial was urgently needed, and that randomisation could provide a plausible way of reaching a decision. If recruiters gave any indication that they were not completely committed to these aspects, patients would question randomisation, often using subtle and sophisticated reasoning that surprised some recruiters.
Although our intention was to maximise both recruitment and informed consent, changes to the content and delivery of information could potentially be used to coerce patients and artificially inflate randomisation rates. One outcome might then be to increase dropouts, but, as the table shows, the proportion who accepted the treatment allocation remained similar throughout the study. We are currently exploring reasons for rejection of allocation. The process of verbally presenting study information and obtaining written consent is not usually tape recorded or available for later scrutiny as it was here. Recruitment and informed consent in other trials may therefore not have been maximised because of differing interpretations by patients and researchers. Although these methods carry a danger of coercion, our findings indicate that the study became more ethical over time, as participants received unambiguous information that allowed them to make an accurately informed decision about whether to accept randomisation. Many men rejecting randomisation early on had received unbalanced information open to misinterpretation.
The controversial nature of the study and the extreme differences between the treatment arms might limit the generalisability of the findings to other randomised trials. However, controversial trials attempting to tackle difficult or “impossible” questions could be the very studies that need to benefit from the qualitative evaluation used here. Indeed, the extreme nature of the treatment choices illuminated issues that were very difficult and encouraged patients to be explicit about their interpretations. The plausibility of these findings suggests that these methods could have a role in improving the efficiency and conduct of trials in general.
The findings also support the contention that the conduct of trials is not straightforward. The concepts inherent in trials, particularly randomisation and equipoise, are complex and difficult and place particular demands on participants and recruiters. Better training and information for these groups may help, but this study suggests that qualitative methods need to be used in feasibility phases in order to understand recruitment to particular trials.
Health services research is a developing tradition, in which different disciplines and paradigms are brought together to tackle health related questions. Combining different approaches can be difficult, but the ProtecT study brought together the qualitative traditions of sociology and anthropology, epidemiological and statistical disciplines informing randomised trial design, and academic urology and nursing. The method of the study contravened conventional approaches by being driven not by the randomised trial design but by the qualitative research. Effectively, the ProtecT feasibility study embedded the randomised trial within the qualitative research and followed a sociological iterative approach. Thus qualitative research methods applied in combination with open minded clinicians and flexible or innovative trial designs may enable even the most difficult evaluative questions to be tackled and have substantial impacts even on apparently routine and uncontroversial trials.
Key learning points
Recruitment to randomised controlled trials is often problematic, potentially threatening the power and external validity of trials and wasting resources
Embedding the controversial ProtecT randomised trial within qualitative research allowed detailed investigation of the presentation of study information by recruiters and its interpretation by participants
Changes to the content and delivery of study information increased recruitment rates from 40% to 70%
The embedding of randomised controlled trials in qualitative research may enable even the most difficult evaluative questions to be tackled and could have substantial impacts on recruitment to apparently routine trials
Table. Consent to randomisation in the ProtecT study over time (patients with data available on final treatment decision). Values are numbers (percentages)

| Date | Eligible | Consent to randomisation | Accept allocation* |
|---|---|---|---|
| October 1999 to May 2000 | 30 | 30-40% | 60-70% |
| Document 1† circulated (changes to order of presentation; avoidance of “trial,” “watchful waiting”; positive presentation of “monitoring” as involving regular tests; re-emphasis that patients eligible for all treatments; presentation of randomisation as reasonable way to reach treatment decision) | | | |
| August 2000 | 45 | 23 (51) | 18 (78) |
| Document 2† circulated (re-emphasis of monitoring as regular testing and review with possibility of radical treatment if disease localised; importance of eliciting and challenging patients' views if at odds with evidence; re-emphasis of no compulsion to accept allocation) | | | |
| November 2000 | 67 | 39 (58) | 30 (77) |
| Document 3† circulated (“good” and “not so good” examples of presentation of information to facilitate equal presentation of treatments) | | | |
| January 2001 | 83 | 51 (61) | 38 (75) |
| Intensive training programme (re-emphasis and role playing for centres 2 and 3 about equal presentation of treatments, challenging patients' views, need for randomised trial, and randomisation as reasonable method of treatment choice; non-radical arm named “active monitoring”) | | | |
| May 2001 | 155 | 108 (70)‡ | 76 (70) |

*Denominator is men consenting to randomisation.

†Documents available from authors.

‡95% confidence interval 62% to 77% with exact binomial method.
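As an aid to reproducing the figures in the final row and its footnote, the sketch below recomputes the May 2001 consent and acceptance rates and the exact (Clopper-Pearson) binomial confidence interval for 108/155. The use of Python with SciPy is our assumption; the paper does not report which software produced the published interval.

```python
# Recompute the May 2001 figures from the table: 108 of 155 eligible men
# consented to randomisation (70%), and 76 of those 108 accepted their
# allocation (70%). The footnoted 95% confidence interval uses the exact
# (Clopper-Pearson) binomial method; SciPy is assumed here for illustration.
from scipy.stats import beta

def exact_binomial_ci(k: int, n: int, alpha: float = 0.05) -> tuple[float, float]:
    """Clopper-Pearson (exact binomial) interval for k successes out of n trials."""
    lower = 0.0 if k == 0 else beta.ppf(alpha / 2, k, n - k + 1)
    upper = 1.0 if k == n else beta.ppf(1 - alpha / 2, k + 1, n - k)
    return lower, upper

consented, eligible, accepted = 108, 155, 76
low, high = exact_binomial_ci(consented, eligible)
print(f"Consent to randomisation: {consented / eligible:.0%}")  # 70%
print(f"Accepted allocation:      {accepted / consented:.0%}")  # 70%
print(f"95% CI for consent rate:  {low:.0%} to {high:.0%}")     # approx 62% to 77%
```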
Acknowledgments
Members of the ProtecT Study Group are John Anderson, Miranda Benney, Sally Burton, Daniel Dedman, Ingrid Emmerson, David Gillatt, John Goepel, Louise Goodwin, John Graham, David Gunnell, Helen Harris, Barbara Hattrick, Peter Holding, David Jewell, Clare Kennedy, Sue Kilner, Peter Kirkbride, J Athene Lane, Hing Leung, Teresa Mewes, Steven Oliver, Jon Oxley, Ian Pedley, Philip Powell, Mary Robinson, Liz Salter, Mark Sidaway, Carol Torrington, Lyn Wilkinson, and Andrea Wilson.
Footnotes
Funding: The research was funded jointly by the UK NHS research and development health technology assessment programme and the MRC health services research collaboration. Support for the ProtecT study also came from the South West NHS research and development directorate. The department of social medicine of the University of Bristol is the lead centre of the MRC health services research collaboration.
Competing interests: None declared.
References
1. Altman DG. Better reporting of randomised controlled trials: the CONSORT statement. BMJ 1996;313:570–571. doi: 10.1136/bmj.313.7057.570.
2. Lovato L, Hill K, Hertert S, Hunninghake D, Probstfield J. Recruitment for controlled clinical trials: literature summary and annotated bibliography. Control Clin Trials 1997;18:328–357. doi: 10.1016/s0197-2456(96)00236-x.
3. Tognoni G, Alli C, Avanzini F, Bettelli G, Colombo F, Corso R, et al. Randomised clinical trials in general practice: lessons from a failure. BMJ 1991;303:969–971. doi: 10.1136/bmj.303.6808.969.
4. Pringle M, Churchill R. Randomised controlled trials in general practice: gold standard or fool's gold? BMJ 1995;311:1382–1383. doi: 10.1136/bmj.311.7017.1382.
5. Baum M. Reflections on randomised controlled trials in surgery. Lancet 1999;353:6–8. doi: 10.1016/s0140-6736(99)90220-9.
6. Ross S, Grant A, Counsell C, Gillespie W, Russell I, Prescott R. Barriers to participation in randomised controlled trials: a systematic review. J Clin Epidemiol 1999;52:1143–1156. doi: 10.1016/s0895-4356(99)00141-9.
7. O'Reilly P, Martin L, Collins G. Few patients with prostate cancer are willing to be randomised to treatment [letter]. BMJ 1999;318:1556. doi: 10.1136/bmj.318.7197.1556a.
8. Livesey J, Cowan R, Brown C, Clarke N, Logue P, Lyons J, et al. Trial of randomisation between radical prostatectomy and radiotherapy in early prostate cancer. Clin Oncol 2000;12:63.
9. Glaser B, Strauss A. The discovery of grounded theory. Chicago: Aldine; 1967.
10. Ritchie J, Spencer L. Qualitative data analysis for applied policy research. In: Bryman A, Burgess R, eds. Analysing qualitative data. London: Routledge; 1994.
11. Selley S, Donovan JL, Faulkner A, Coast J, Gillatt D. Diagnosis, management and screening of early localised prostate cancer: a systematic review. Health Technol Assess 1997;1(2):1–96.
12. Featherstone K, Donovan J. Random allocation or allocation at random? Patients' perspectives of participation in a randomised controlled trial. BMJ 1999;317:1177–1180. doi: 10.1136/bmj.317.7167.1177.
13. Donovan JL, Blake D. “Just a touch of arthritis, doctor?” Qualitative study of interpretation of reassurance among patients attending rheumatology clinics. BMJ 2000;320:541–544. doi: 10.1136/bmj.320.7234.541.
14. Mays N, Pope C. Qualitative research in health care. London: BMJ; 1996.
15. Snowdon C, Garcia J, Elbourne D. Making sense of randomisation: responses of parents of critically ill babies to random allocation of treatment in a clinical trial. Soc Sci Med 1997;45:1337–1355. doi: 10.1016/s0277-9536(97)00063-4.
16. Roberson N. Clinical trial participation: viewpoints from racial/ethnic groups. Cancer 1994;74:2687–2691. doi: 10.1002/1097-0142(19941101)74:9+<2687::aid-cncr2820741817>3.0.co;2-b.
17. Featherstone K, Donovan JL. “Why don't they just tell me straight, why allocate it?” The struggle to make sense of participating in a randomised controlled trial. Soc Sci Med 2002;55:709–719. doi: 10.1016/s0277-9536(01)00197-6.