Abstract
Many women are diagnosed with breast cancer, and while the survival of this cohort has improved, their likelihood of developing treatment-related chronic conditions is considerable. Over the last 10 years, our research group has developed and refined a whole-of-lifestyle intervention, the Women's Wellness after Cancer Program (WWACP), for women who have finished treatment for primarily breast and gynaecological cancers. Culturally specific iterations of this program were recently completed with younger breast cancer survivors (aged <50 years) living in Australia, New Zealand/Aotearoa and Hong Kong.
Over the last decade, various approaches have been used to trial the WWACP, mostly randomised controlled trials. While this methodology is considered the gold standard to determine efficacy in health and medical research, its limitations in our interventional research are apparent. In this opinion article, we discuss these limitations as well as alternative options for the appropriate testing of behavioural studies in women treated for cancer. We also discuss how the contribution of informed consumer advocates and participant consumers has influenced changes to our study designs.
Keywords: Nursing, Oncology, Breast cancer, Gynaecological cancer, Lifestyle intervention, Randomised controlled trial, Behavioural intervention, Study design
Highlights
- Lessons learned from a decade of the Women's Wellness after Cancer Program trials.
- Study design and outcomes interpreted by consumers (advocates and participants).
- Quantitative RCT methodology not ideal for assessing health behaviour interventions.
- A mixed method, non-RCT approach to evaluate intervention efficacy is suggested.
- Otherwise, non-randomised pre-post, A/B or waitlist control test designs acceptable.
Randomised controlled trials (RCTs) are the 'gold standard' to determine efficacy in health and medical research [[1], [2], [3]] and often inform clinical practice guidelines. However, our research group has identified a significant disconnect between the quantitative outcomes generated by RCTs and participants' reported qualitative experiences within our studies of health behaviour interventions. Over the last decade our research group, which created the Women's Wellness after Cancer Program (WWACP), has been dedicated to improving quality of life (QoL) outcomes for women after their cancer treatment [[4], [5], [6], [7], [8], [9], [10], [11], [12], [13], [14], [15], [16], [17]]. Our multidisciplinary e-health lifestyle interventions target modifiable behaviours over a 12-week period. These include physical activity, diet, alcohol reduction, stress, sleep, menopausal symptoms, and sexual wellbeing, with an overall aim of reducing chronic disease risk and cancer recurrence in this population [5]. The WWACP was originally designed for women following treatment for breast, gynaecological, and/or haematological cancers. We have also undertaken several iterations of the WWACP research program with women with different chronic conditions (e.g., diabetes, heart disease, stroke and irritable bowel syndrome) and in different countries. As a result of consistent participant feedback, we have modified our study designs over time to make it easier for women to participate in our studies. Refinements include the adoption of consumer-driven (advocates and participants) and consumer-approved outcomes and outcome measures (e.g., body image, sexuality), adaptation of the intervention to accommodate cultural and language sensitivities, reduced survey burden, and a shift in data collection from paper-based to automated electronic delivery.
Despite these modifications, it is consistently apparent that the ‘gold standard’ RCT design underpinning all our studies (repeatedly described as such in the literature [[1], [2], [3]]) is not the best fit for our research problem or our research cohorts. Specifically, when a quantitative RCT is the selected design for a tailored health behaviour intervention that addresses multiple lifestyle factors, dissonance between outcomes and their interpretation becomes apparent, and this dissonance is attributable to the nature of the quantitative randomised controlled design. Combined formal and informal input from consumers and the research intervention team further highlighted the need to adapt the trial format itself, not just the program and/or intervention. In this paper, we discuss the ways in which the strictures of the RCT affect our study outcomes and offer potential solutions.
The primary issue we encounter is the misalignment between outcomes reported quantitatively and the experiences of women in the studies reported qualitatively. Behavioural interventions such as the WWACP provide participants with knowledge, techniques and strategies designed to promote beneficial health behaviour and thereby reduce chronic disease risk. However, as cancer treatment affects every individual differently in every domain of health (i.e., physiologically, psychologically, and socially) and has different ramifications in the short and long terms, our intervention is always carefully tailored to the needs, health goals and contexts of each woman. It is therefore inherently difficult to standardise outcome measures for an intervention that is not one-size-fits-all. This is particularly difficult when intervention fidelity (i.e., the extent to which delivery of an intervention adheres to the protocol or program model originally developed) is essential for an effective RCT [18]. For instance, one of the primary outcome measures across all WWACP projects is waist circumference, measured against an 80-cm target as per World Health Organization recommendations [19], and accepted by funding bodies as a measurable and desirable outcome with which to power a study. Waist circumference reflects abdominal fat and is a good indicator of health risk. Yet the standardised 80-cm waist measurement is not safe for some women after their cancer treatment, nor is it an outcome that every participant desires or sets out to change within the intervention. In addition, some participants commence additional (and allowable) treatments during the study period, such as aromatase inhibitors, which can increase menopausal symptoms and exacerbate increases in waist circumference [20]. This factor, which is independent of the intervention and not experienced by every participant, could have a significant effect on mean within-group waist circumference.
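The masking effect of such a treatment-related covariate on a group mean can be illustrated with a simple simulation. The sketch below uses entirely hypothetical numbers (not WWACP data): most simulated participants reduce waist circumference, but a subset commencing aromatase inhibitors gains girth independently of the intervention, pulling the group mean towards zero.

```python
import random

def mean_change(n=100, responder_change=-2.0, ai_change=3.0,
                ai_fraction=0.3, seed=42):
    """Simulate mean within-group waist-circumference change (cm).

    Hypothetical illustration only: most participants lose
    ``responder_change`` cm, while an ``ai_fraction`` subset on
    aromatase inhibitors gains ``ai_change`` cm for reasons
    independent of the intervention.
    """
    rng = random.Random(seed)
    changes = []
    for _ in range(n):
        if rng.random() < ai_fraction:
            changes.append(ai_change + rng.gauss(0, 1))   # covariate subgroup
        else:
            changes.append(responder_change + rng.gauss(0, 1))  # responders
    return sum(changes) / len(changes)

# With 30% of participants on aromatase inhibitors, the group mean
# change sits near zero even though most individuals improved.
print(round(mean_change(), 2))
```

In other words, a null mean result on the pre-registered quantitative outcome can coexist with a clear benefit for the majority of participants, which is precisely the dissonance described above.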
Therefore, despite no change, or even an increase, in objective measures such as waist circumference, the intervention might otherwise be deemed successful on the basis of self-reported improvements in other areas such as menopausal symptom reduction, sleep quality, or overall QoL. Similar behavioural interventions in women who are obese, or who have chronic pain, substance abuse issues, or depression, report parallel issues in their quantitative results due to the individualised approach necessary to address the specific needs and evolving status of each person [21], which is also difficult (if not impossible) to standardise within RCT principles.
Planner and colleagues [22] highlighted that RCTs often do not consider the participant's individual imperatives, which should be a key factor when trialling a behavioural lifestyle intervention that addresses multiple, usually intertwined, lifestyle factors. In the WWACP, formalised participant focus group discussions are generally undertaken post-intervention. These provide significant insights into understanding the lived experiences of each woman and the impact of the intervention as a whole, compared to the narrower lens of pre-selected quantitative outcome measurements. Mannell and Davis [23] argue that qualitative methods have been underutilised in health research. Specifically, researchers are reluctant to utilise qualitative methods due to the perception of diminished robustness and associated publishing bias, which has resulted in these methods being omitted following pre-trial phases.
As articulated by Deaton and Cartwright [24], the RCT design does not guarantee an accurate assessment of average treatment effects, nor does it diminish the need to consider the influence of covariates. Further, these authors highlight the importance of combining methods to build scientific knowledge. There are several plausible solutions to rectify this problem via a change to the methodological design. When appropriately implemented, a mixed method research design combining quantitative (e.g., RCT, quasi-experimental) and qualitative methods (e.g., individual or focus group interviews) can significantly enhance the depth of study findings [25]. Furthermore, implementing suggested changes (identified during qualitative analyses) can overcome the limitations of an RCT by assessing the individual and contextual factors which determine the potential transferability of an intervention into clinical practice [25]. We believe that actioning consumer input is the key to implementing appropriate study methodologies to assess behavioural interventions.
In recent years, the importance of consumers in research has become apparent, with many funding bodies, such as Australia's National Health and Medical Research Council (NHMRC), now requiring consumer input into study design, governance and outcome interpretation [26]. The WWACP team has worked with consumers extensively over the last decade to refine both the WWACP intervention and the study methodologies used to assess it. For example, previous participants are quoted as saying “[I] Was in control [group] so no benefit this far …” and “Being a participant in the control group was difficult and I had to withdraw. There needs to be better support after active treatment has finished.” As these quotes demonstrate, controlling who does and does not receive the intervention has a negative impact on participants (and on study retention). As clinical researchers working in clinical spaces with women with complex needs, it is our duty to provide them with the best possible, scientifically credible interventions [27]. From our previous study iterations, we know the WWACP intervention improves QoL for women after breast and gynaecological cancer treatment. When participants are randomly allocated to the control group, they receive what is considered ‘standard care’ (i.e., expected routine follow-up from their treating healthcare team). Are we then depriving them of the best possible outcomes [28]? To address our duty of care towards our control participants, we have always offered access to the intervention resource materials at study completion. Yet there are methodologically robust alternatives.
Although individuals are made aware of the chance of allocation to a control group with no intervention (in the case of our studies, a 50% chance), this threatens the integrity of the research process: once women realise they will not receive the intervention, they often withdraw from the study altogether before longitudinal assessments can be completed. In this scenario, other study designs, such as non-randomised pre-post or waitlist-controlled designs, could be more ethical [28]. We have also advocated for (and received competitive funding for) several single-arm studies of the intervention, using more contemporary methods such as interrupted time series, which ensure all participants receive the intervention or act as their own controls. These non-randomised studies [29] are perceived to be more ethical as each participant receives the individualised intervention. Regarding the WWACP, this is especially relevant as management of lifestyle factors is recommended for all individuals who have finished treatment for cancer [30]. Alternatively, if randomisation is still desired by the research team for cogent methodological reasons, waitlist controlling is implemented. In the future, and where the research question warrants it, A/B testing methodology will also be employed. A/B testing involves the random allocation of participants to separate groups, with the participants in each group receiving a different version of the intervention [31]. In the case of the WWACP and subsequent iterations, one group could receive the lifestyle intervention materials and concurrent nurse-led virtual consultations, whilst the other group receives only the lifestyle intervention materials (similar to, though more comprehensive than, standard care). This would ensure each participant is offered at least part of the intervention whilst enabling comparisons of intervention efficacy.
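The A/B allocation described above can be sketched as follows; this is a minimal, hypothetical illustration (the arm labels and the seeded allocator are ours, not a WWACP implementation). Every participant receives the lifestyle materials, and randomisation determines only whether nurse-led virtual consultations are added.

```python
import random

def allocate_ab(participant_ids, seed=2024):
    """Randomly allocate participants to two active arms (A/B testing).

    Arm A: lifestyle materials plus nurse-led virtual consultations.
    Arm B: lifestyle materials only.
    Unlike a no-intervention control design, both arms receive at
    least part of the intervention.
    """
    rng = random.Random(seed)  # fixed seed makes the allocation auditable
    ids = list(participant_ids)
    rng.shuffle(ids)
    half = len(ids) // 2
    return {
        "A: materials + consultations": sorted(ids[:half]),
        "B: materials only": sorted(ids[half:]),
    }

groups = allocate_ab(range(1, 21))
for arm, members in groups.items():
    print(arm, members)
```

Because both arms are active, the comparison estimates the incremental benefit of the consultations rather than the effect of the program against nothing, which is the trade-off the design accepts in exchange for its ethical advantages.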
However, alternative designs, such as those incorporating waitlist controls, could introduce new challenges related to data and its interpretation that would need to be addressed (e.g. unbalanced stages, potential biasing of the true effect of different active treatments [32,33]). While these designs might appear to offer a solution, they are not a guaranteed fix for obtaining a robust assessment of the effectiveness of the behavioural intervention.
In conclusion, the limitations of RCTs, particularly their misalignment with real-world outcomes, highlight the need for more person- and condition-relevant evaluation methods. We have found that RCTs often fail to capture the complexity of individual experiences, potentially leading to interventions that might not translate effectively outside controlled settings. In our opinion, while RCTs provide valuable data, they often overlook the nuances of consumer needs and preferences, which are essential for ensuring practical relevance. By incorporating consumer input, trial designs can be refined to align interventions with real-world contexts, fostering more holistic and effective solutions. For multi-faceted behavioural interventions, we have observed that a mixed method approach offers a more comprehensive evaluation of both efficacy and implementation potential. A non-randomised pre-post study design might be more ethical, ensuring maximal intervention delivery and participant benefit. If randomisation is required, A/B or waitlist control designs can help address ethical concerns. Together with consumer feedback, these methods provide a more balanced framework for assessing interventions.
CRediT authorship contribution statement
Sarah M. Balaam: Writing – review & editing, Writing – original draft, Data curation, Conceptualization. Alexandra L. McCarthy: Writing – review & editing, Conceptualization. Natalie K. Vear: Writing – review & editing, Writing – original draft, Conceptualization. Mackenzie J. Petie: Writing – review & editing, Conceptualization. Debra J. Anderson: Writing – review & editing, Conceptualization. Janine P. Porter-Steele: Writing – review & editing, Writing – original draft, Data curation, Conceptualization.
Consent to participate
Informed consent was obtained from all individual consumer participants included in the study from which quotes were sourced.
Ethics approval
Participant quotes used for this paper were sourced from a study which was performed in line with the principles of the Declaration of Helsinki. Approval was granted by the UnitingCare Health Human Research Ethics Committee (Date February 05, 2021/No 202103).
Funding sources
This work was supported by the Wesley Research Institute grant (Grant number ID2020CR02). The funding body was not involved in the study design; in the collection, analysis, and interpretation of data; in the writing of the report; or in the decision to submit this article for publication.
Declaration of competing interest
The authors declare the following financial interests/personal relationships which may be considered as potential competing interests: Janine P. Porter-Steele reports financial support was provided by the Wesley Research Institute. The remaining authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Acknowledgements
We would like to thank the participants in our studies. They provided wisdom and insights that enable us to develop and improve the Women's Wellness after Cancer Program (WWACP). We would also like to thank our funding provider Wesley Research Institute and Monica Stephan for her contributions to and assistance with the background literature for this paper.
Contributor Information
Sarah M. Balaam, Email: s.balaam@griffith.edu.au.
Alexandra L. McCarthy, Email: sandie.mccarthy@griffith.edu.au.
Natalie K. Vear, Email: n.vear@uq.edu.au.
Mackenzie J. Petie, Email: m.petie@uq.net.au.
Debra J. Anderson, Email: debra.anderson@uts.edu.au.
Janine P. Porter-Steele, Email: j.porter-steele@griffith.edu.au.
References
- 1. Hariton E., Locascio J.J. Randomised controlled trials – the gold standard for effectiveness research. BJOG An Int. J. Obstet. Gynaecol. 2018;125(13):1716. doi: 10.1111/1471-0528.15199.
- 2. Sibbald B., Roland M. Understanding controlled trials: why are randomised controlled trials important? BMJ. 1998;316(7126):201. doi: 10.1136/bmj.316.7126.201.
- 3. Hariton E., Locascio J.J. Randomised controlled trials – the gold standard for effectiveness research: study design: randomised controlled trials. BJOG. 2018;125(13):1716. doi: 10.1111/1471-0528.15199.
- 4. Anderson D.J., et al. Facilitating lifestyle changes to manage menopausal symptoms in women with breast cancer: a randomized controlled pilot trial of the Pink Women's Wellness Program. Menopause. 2015;22(9):937–945. doi: 10.1097/GME.0000000000000421.
- 5. Anderson D., et al. The Women's Wellness after Cancer Program: a multisite, single-blinded, randomised controlled trial protocol. BMC Cancer. 2017;17(1):98. doi: 10.1186/s12885-017-3088-9.
- 6. Arneil M., et al. Physical activity and cognitive changes in younger women after breast cancer treatment. BMJ Support. Palliat. Care. 2020;10:122–125. doi: 10.1136/bmjspcare-2019-001876.
- 7. Bailey T.G., et al. Physical activity and menopausal symptoms in women who have received menopause-inducing cancer treatments: results from the Women's Wellness after Cancer Program. Menopause. 2020;28(2):142–149. doi: 10.1097/GME.0000000000001677.
- 8. Balaam S., et al. Alcohol and breast cancer: results from the Women's Wellness after Cancer Program randomized controlled trial. Cancer Nurs. 2021. doi: 10.1097/NCC.0000000000000956. Publish Ahead of Print.
- 9. Chan D.N.S., et al. Cultural adaptation of the Younger Women's Wellness after Cancer Program for younger Chinese women with breast cancer: a pilot randomized controlled trial. Cancer Nurs. 2023. doi: 10.1097/NCC.0000000000001210.
- 10. Kieseker G.A., et al. A psychometric evaluation of the Female Sexual Function Index in women treated for breast cancer. Cancer Med. 2022;11:1511–1523. doi: 10.1002/cam4.4516.
- 11. Seib C., et al. Exposure to stress across the life course and its association with anxiety and depressive symptoms: results from the Australian Women's Wellness after Cancer Program (WWACP). Maturitas. 2017;105:107–112. doi: 10.1016/j.maturitas.2017.05.011.
- 12. Seib C., et al. Menopausal symptom clusters and their correlates in women with and without a history of breast cancer: a pooled data analysis from the Women's Wellness Research Program. Menopause. 2017;24(6):624–634. doi: 10.1097/GME.0000000000000810.
- 13. Seib C., et al. Life stress and symptoms of anxiety and depression in women after cancer: the mediating effect of stress appraisal and coping. Psycho Oncol. 2018;27(7):1787–1794. doi: 10.1002/pon.4728.
- 14. Seib C., et al. Promoting healthy lifestyle changes to improve health-related quality of life in women after cancer: results from the Australian Women's Wellness after Cancer Program (WWACP). Maturitas. 2019;124:149.
- 15. Seib C., et al. Improving health-related quality of life in women with breast, blood, and gynaecological cancer with an eHealth-enabled 12-week lifestyle intervention: the Women's Wellness after Cancer Program randomised controlled trial. BMC Cancer. 2022;22(1):747. doi: 10.1186/s12885-022-09797-6.
- 16. Seib C., et al. Determining the psychometric properties of the Greene Climacteric Scale (GCS) in women previously treated for breast cancer: a pooled analysis of data from the Women's Wellness after Cancer Programs. Maturitas. 2022;161:65–71. doi: 10.1016/j.maturitas.2022.02.003.
- 17. Sharples K., et al. Protocol of trans-Tasman feasibility randomised controlled trial of the Younger Women's Wellness after Breast Cancer (YWWACP) lifestyle intervention. Pilot and Feasibility Studies. 2022;8(1):165. doi: 10.1186/s40814-022-01114-z.
- 18. Toomey E., et al. Focusing on fidelity: narrative review and recommendations for improving intervention fidelity within trials of health behaviour change interventions. Health Psychol Behav Med. 2020;8(1):132–151. doi: 10.1080/21642850.2020.1738935.
- 19. World Health Organization. Waist Circumference and Waist–Hip Ratio: Report of a WHO Expert Consultation. 2011.
- 20. Barone I., et al. Obesity and endocrine therapy resistance in breast cancer: mechanistic insights and perspectives. Obes. Rev. 2022;23(2). doi: 10.1111/obr.13358.
- 21. Almirall D., et al. Introduction to SMART designs for the development of adaptive interventions: with application to weight loss research. Transl Behav Med. 2014;4(3):260–274. doi: 10.1007/s13142-014-0265-0.
- 22. Planner C., et al. Trials need participants but not their feedback? A scoping review of published papers on the measurement of participant experience of taking part in clinical trials. Trials. 2019;20(1):381. doi: 10.1186/s13063-019-3444-y.
- 23. Mannell J., Davis K. Evaluating complex health interventions with randomized controlled trials: how do we improve the use of qualitative methods? Qual. Health Res. 2019;29(5):623–631. doi: 10.1177/1049732319831032.
- 24. Deaton A., Cartwright N. Understanding and misunderstanding randomized controlled trials. Soc. Sci. Med. 2018;210:2–21. doi: 10.1016/j.socscimed.2017.12.005.
- 25. Fàbregues S., et al. Use of mixed methods research in intervention studies to increase young people's interest in STEM: a systematic methodological review. Front. Psychol. 2022;13. doi: 10.3389/fpsyg.2022.956300.
- 26. National Health and Medical Research Council. Guidelines for guidelines: consumer involvement. 2018. Available from: https://nhmrc.gov.au/guidelinesforguidelines/plan/consumer-involvement [cited 10 June 2024].
- 27. Nursing and Midwifery Board of Australia. Code of Conduct for Nurses. Australian Health Practitioner Regulation Agency; 2019.
- 28. Ioannidis J.P.A. Randomized controlled trials: often flawed, mostly useless, clearly indispensable: a commentary on Deaton and Cartwright. Soc. Sci. Med. 2018;210:53–56. doi: 10.1016/j.socscimed.2018.04.029.
- 29. Reeves B.C., Deeks J.J., Higgins J.P.T., Shea B., Tugwell P., Wells G.A. Chapter 24: including non-randomized studies on intervention effects. In: Higgins J.P.T., Thomas J., Chandler J., Cumpston M., Li T., Page M.J., Welch V.A., editors. Cochrane Handbook for Systematic Reviews of Interventions. Cochrane; 2023 (updated August 2023).
- 30. Vardy J.L., et al. Clinical Oncology Society of Australia position statement on cancer survivorship care. Aust J Gen Pract. 2019;48(12):833–836. doi: 10.31128/AJGP-07-19-4999.
- 31. New South Wales Government. How to Test whether Your Behaviour Change Intervention Works. n.d.
- 32. Sima A.P., Stromberg K.A., Kreutzer J.S. An adaptive method for assigning clinical trials wait-times for controls. Contemp Clin Trials Commun. 2021;21. doi: 10.1016/j.conctc.2021.100727.
- 33. Faltinsen E., et al. Control interventions in randomised trials among people with mental health disorders. Cochrane Database Syst. Rev. 2022;(4). doi: 10.1002/14651858.MR000050.pub2.
