Abstract
Issue
Chronic disease is a growing problem, affecting approximately half of all Australian adults. In response to increasing calls for action on chronic disease, the My health for life program was created to improve the health of individuals at high risk of developing preventable chronic disease. The preventive health program is multi‐modal and cross‐culturally tailored, and contains complex social marketing, community engagement, risk assessment and health promotion components. A multi‐component evaluation framework is therefore essential to understand the effectiveness of the My health for life program. This brief report details the evaluation.
Methods
The evaluation uses a non‐randomised, longitudinal design with repeated measures, combining observational, program goal‐based and pretest‐posttest design features to assess the program, its specific modalities and its adaptations. To ensure timely and credible evaluation, several evaluation and implementation frameworks and methods are drawn upon. Quantitative and qualitative methods collect an array of program data at differing levels to assess the processes, outcomes and impacts of My health for life.
Discussion
The implemented evaluation framework has allowed measurement of: (i) process impacts including uptake, retention and attrition, participant satisfaction, fidelity and program stakeholder engagement and (ii) outcomes relating to individual participant level changes in health behaviours.
So what?
This evaluation is an example of an integrated evaluation approach in a large, successful preventive health program. Findings from the evaluation will ultimately inform the applicability and transferability of the program, and guide policy makers, stakeholders and other health professionals in preventive health practice.
Keywords: chronic disease, evaluation, health behaviours, modifiable lifestyle factors, prevention, program evaluation
1. INTRODUCTION
The burden of chronic disease is increasing globally, 1 and there is an urgent need to arrest and, over time, reverse this trend. 2 Australian statistics are consistent with global figures: chronic disease affects around half of all Australians, accounted for 37% of all hospitalisations in 2015‐16 and contributed 61% of the total burden of disease in 2011. 3 The growing prevalence of chronic disease, together with increases in modifiable risk factors, amongst Australians highlights the magnitude of this public health issue. 3 To address these challenges, national 4 and state 5 government frameworks have prioritised action on chronic disease. In response, the My health for life program was developed using a co‐design process 6 and informed by the chronic disease prevention programs FIN‐D2D (Finland) and Life! (Australia). 7 My health for life is a state‐wide, government‐funded initiative that aims to identify individuals at high risk of developing chronic disease, in particular type 2 diabetes, stroke and heart disease, and to offer a program that supports them in adopting and maintaining positive lifestyle changes to manage their risk factors. 7 My health for life is a multi‐component preventive health program involving integrated social marketing, community engagement and health promotion inputs; it is further described elsewhere. 7
Evaluation is critical for developing an evidence base for complex interventions, where there is seldom a clear causal chain. 8 This requires assessment of intervention implementation, outcomes and effectiveness to inform refinements that increase impact. 9 This report outlines the plans to evaluate the processes, outcomes and impacts of the My health for life program in one Australian setting representing diverse epidemiological and social contexts.
2. METHODS
2.1. My health for life evaluation design
The objectives of the evaluation are to (i) assess participant, service and system impacts and outcomes, (ii) provide ongoing insight and feedback on key success indicators and key areas for improvement, (iii) provide evidence on chronic disease prevention programs to inform policy and practice in Queensland and (iv) contribute to the evidence base on the implementation of large‐scale chronic disease prevention programs. A pragmatic mixed‐methods (quantitative and qualitative) approach underpins the evaluation. The quantitative component is a non‐randomised, longitudinal design with repeated measures, adopting observational, program goal‐based and pretest‐posttest design features. A goal‐based design, with predetermined program goals set by the funding body, provides the standards against which the program is assessed. 10 Longitudinal, repeated measures are incorporated to track progress towards specific program objectives. 10 A qualitative approach describes the contexts in which the program is delivered and explores contextual factors that may influence delivery, impact and outcomes.
The evaluation is underpinned by a whole‐system approach and examines three system levels of the program: (i) the overall program at the macro level; (ii) the site/region at the meso level and (iii) the individual participant at the micro level. Impact and outcome evaluation will investigate overall program effectiveness, quality, community outcomes and the potential for future scale‐up, informing system changes to processes and resource integration that support the program's sustainability and institutionalisation as "business as usual", using an evidence‐informed approach. The program will be evaluated in terms of: effectiveness (health behaviour changes, benefits of and barriers to behaviour change) and implementation (dose delivered and received, fidelity, recruitment) at the micro level; program context at the meso level; and potential for program sustainability at the organisation and funder level (macro level).
Process evaluation follows a continuous improvement model based around the circular process of planning, organising, implementing, evaluating and refining. The process evaluation is continuous and occurs throughout the lifetime of the program and its delivery, enabling feedback to the program team on key success indicators and pinpointing areas where improvements can be made. This feedback loop provides opportunities for innovative solutions to be developed and implemented.
2.2. Evaluation and implementation frameworks
The evaluation approach is guided by evaluation frameworks and conceptual models to strengthen the evaluation questions and analysis. The use of translational research frameworks and implementation models adds rigour to the evaluation of real‐world programs, while considering key factors of both implementation and impact. The My health for life evaluation is predominantly guided by the RE‐AIM framework. 11 The framework ensures that critical constructs for evaluating public health impact at the individual and service levels are included: Reach (the number, proportion and representativeness of individuals who are willing to participate in a given initiative), Effectiveness (the impact of the intervention), Adoption (the number, proportion and representativeness of settings and intervention agents who initiate the program), Implementation (the consistency and cost of delivering the intervention) and Maintenance (individual impacts after completion of the program and the extent to which a program becomes institutionalised or part of routine organisational practice). 10
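To make these constructs tangible, the sketch below shows one way the RE‐AIM dimensions could be organised as a simple reporting structure. It is illustrative only: the construct definitions follow the framework, but the indicator names are hypothetical and are not the program's actual reporting variables.

```python
from dataclasses import dataclass

@dataclass
class ReAimDimension:
    """One RE-AIM construct, its system level, and example reporting indicators."""
    name: str
    level: str            # micro, meso or macro system level
    indicators: list[str]

# Hypothetical indicator names, for illustration only.
RE_AIM = [
    ReAimDimension("Reach", "meso",
                   ["n_risk_assessments_completed", "n_enrolled",
                    "representativeness_vs_target_population"]),
    ReAimDimension("Effectiveness", "micro",
                   ["change_fruit_veg_serves", "change_physical_activity_mins"]),
    ReAimDimension("Adoption", "meso",
                   ["n_delivery_sites", "n_trained_facilitators"]),
    ReAimDimension("Implementation", "meso",
                   ["pct_sessions_delivered_to_protocol", "cost_per_participant"]),
    ReAimDimension("Maintenance", "micro/macro",
                   ["pct_behaviour_change_sustained_12_months"]),
]

for dim in RE_AIM:
    print(f"{dim.name} ({dim.level}): {', '.join(dim.indicators)}")
```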
Two additional models enable a more comprehensive implementation evaluation. The Conceptual Model of Implementation Research 12 highlights that implementation efforts require both an empirically tested intervention and an implementation strategy, or set of strategies, to translate the evidence into practice. 12 This model is used to assess implementation outcomes such as feasibility, fidelity, penetration, acceptability, sustainability, uptake and costs. 12
The Consolidated Framework for Implementation Research (CFIR) 13 comprises five key elements of implementation: intervention characteristics, outer setting, inner setting, characteristics of individuals and process; these are used to evaluate the implementation and sustainability of the intervention. 13 The CFIR will enable implementation evaluation of the specific settings in which the program is implemented and delivered. Table 1 describes the key evaluative constructs of these frameworks and how each is applied to the evaluation in areas relating to process, impact and outcome evaluation. Instruments used for evaluating these areas are discussed in the following section.
TABLE 1.
Evaluation methodological frameworks and their application
Framework | Construct (system level) | Details | Evaluative area
---|---|---|---
RE‐AIM | Reach (meso) | Reach into the target population | Process
 | Effectiveness (micro) | Effectiveness or efficacy (impact) of an intervention on outcomes | Impact
 | Adoption (meso) | Adoption of the intervention by target settings, institutions and staff | Process
 | Implementation (meso) | Implementation consistency and cost of delivery of the intervention | Process
 | Maintenance (micro, macro) | Maintenance of intervention effects in individuals and settings over time | Outcome
Conceptual Model of Implementation Research – implementation outcomes | Feasibility (meso) | The extent to which the program or practice can be successfully used or carried out within a setting | Process
 | Fidelity (micro) | The degree to which a program or practice was delivered as intended | Process
 | Penetration (meso) | The integration of an intervention within a service setting | Process
 | Acceptability (meso) | The perception among stakeholders that a program or practice is agreeable, palatable or satisfactory | Process
 | Sustainability (macro) | The extent to which a newly implemented intervention is maintained or institutionalised into "business as usual" for a service setting | Process
 | Uptake (meso) | The intentional initial decision or action to take on or try an intervention/program | Process
 | Costs (macro) | The true cost of an implementation effort, including the costs of the intervention, the implementation strategy used and the location of service delivery | Process
Consolidated Framework for Implementation Research (CFIR) | Intervention characteristics (meso) | Complexity, cost, perceived quality | Process
 | Outer setting (micro) | Participant needs and resources, cosmopolitanism, peer pressure, external policies and incentives | Process
 | Inner setting (macro) | Policies and incentives, population unmet needs | Process
 | Characteristics of individuals (micro) | Leadership, readiness, learning climate, self‐efficacy, knowledge and beliefs, stage of change | Process
 | Process (meso) | Engagement, planning, reflecting and evaluating | Process
2.3. Evaluation instruments and data
A risk assessment survey developed by the My health for life program is administered prior to baseline to determine risk of chronic disease and establish program eligibility. The risk assessment survey is completed online via the My health for life website or in person with the risk assessment team. The survey collects basic socio‐demographic information including age, gender, location, Aboriginal or Torres Strait Islander status and ethnicity, together with data to establish risk of chronic disease, diabetes 14 and cardiovascular disease. 15 In addition, a further survey is conducted with each participant at baseline (T1), session 5 (T2), session 6 (T3) and 12 months from baseline (T4), enabling longitudinal analysis (see Table 2). The program survey includes instruments that measure changes in behaviour, self‐efficacy and health‐related quality of life (see Table 2). To ensure a rigorous evaluation, the tools used to assess individual‐level outcomes of My health for life are developed from previously validated instruments (see Table 2). Over the first 3 years of the program, data will be collected from a minimum of 10 000 participants at three time points (T1, T2 and T3).
TABLE 2.
Outcome measures, instruments and time points of the evaluation
Measure | Risk assessment (T0; 0 wk) | Session 1 (T1; baseline, 0‐6 wk) | Session 3 (10 wk) | Session 5 (T2; 14 wk) | Session 6 (T3; 26 wk) | Maintenance (T4; 52 wk)
---|---|---|---|---|---|---
My health for life Risk Assessment (total risk score) | X | | | | |
Risk of diabetes^a | X | | | | | X
Risk of cardiovascular disease^b | X | | | | |
Socio‐demographics incl. age, gender, SES, CALD and Aboriginal and Torres Strait Islander status | X | | | | |
Anthropometry incl. height, weight, waist circumference^c | | X | X | X | X | X
Dietary habits incl. daily fruit/vegetable, sugar‐sweetened beverage and takeaway intake^d | | X | | X | X | X
Alcohol and tobacco use^e | | X | | X | X | X
Physical activity^f | | X | | X | X | X
HRQoL^g | | X | | X | X | X
SERVQUAL^h | | | | | X |
Knowledge of health behaviours, intention and confidence, and support | | X | | X | X |
Primary data collected by program facilitators (coaches). Measured using: ^a adapted AUSDRISK items, 14 ^b Absolute CVD Risk, 15 ^c WHO STEPS Surveillance Manual protocol and standards, 16 ^d Australian Bureau of Statistics National Health Survey items, 17 ^e National Drug Strategy Household Survey items, 18 ^f Active Australia Survey, 19 ^g CDC HRQoL measure 20 and ^h SERVQUAL scale. 21
Abbreviations: SES, socio‐economic status; CALD, culturally and linguistically diverse; HRQoL, health‐related quality of life.
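As a concrete illustration of the eligibility step described above, the following sketch applies the published AUSDRISK high‐risk cut‐point (a score of 12 or more indicating high risk of developing type 2 diabetes within 5 years 14) to a completed risk assessment. The function and example values are ours; the program's adapted risk assessment items and full eligibility rules are not reproduced here.

```python
def is_high_risk(ausdrisk_score: int, cutpoint: int = 12) -> bool:
    """Apply the published AUSDRISK high-risk cut-point: a score of 12 or
    more indicates high risk of developing type 2 diabetes within 5 years
    (Chen et al. 2010). Illustrative only: the program's adapted risk
    assessment combines further items and eligibility criteria."""
    return ausdrisk_score >= cutpoint

# Example: a participant scoring 15 is flagged as high risk on the
# diabetes criterion; one scoring 9 is not.
print(is_high_risk(15))  # True
print(is_high_risk(9))   # False
```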
Survey instruments are administered by program facilitators: in person, using pen and paper, by face‐to‐face group coaches, and via telephone by telephone health coaches. Pen‐and‐paper surveys are entered into the survey data tool (developed specifically for the program) by the telephone health coaches. For the purposes of the evaluation, all program participant data are de‐identified before being provided as secondary data to the evaluation team by the program team.
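The de‐identification step can be illustrated in general terms. The sketch below shows a common technique, dropping direct identifiers and replacing the participant ID with a salted one‐way hash so records remain linkable across time points; it is an assumption‐laden illustration of the principle, with hypothetical column names, not the program's actual data transfer procedure.

```python
import hashlib

import pandas as pd

# Hypothetical raw survey extract; all column names are illustrative.
raw = pd.DataFrame({
    "participant_id": ["P001", "P002"],
    "name": ["Jane Citizen", "John Smith"],
    "phone": ["0400 000 000", "0400 000 001"],
    "timepoint": ["T1", "T1"],
    "fruit_veg_serves": [3, 5],
})

SALT = "program-held-secret"  # retained by the program team, never shared

def deidentify(df: pd.DataFrame) -> pd.DataFrame:
    """Drop direct identifiers and replace the participant ID with a
    salted SHA-256 hash, so records stay linkable across time points
    without being re-identifiable by the evaluation team."""
    out = df.drop(columns=["name", "phone"])
    out["participant_key"] = out.pop("participant_id").map(
        lambda pid: hashlib.sha256((SALT + pid).encode()).hexdigest()[:12]
    )
    return out

print(deidentify(raw))
```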
While program evaluation and data collection are ongoing, longer‐term impacts will be assessed through the program's effects on outcomes 6 months or more after the last intervention contact. Specifically, a sub‐sample of participants (n = 520) will undertake a follow‐up "Maintenance Survey" on diabetes risk, modifiable lifestyle behaviours and quality of life 6 months post program completion (T4). These data will be used to determine long‐term intervention effects on healthy behaviours and risk reduction.
The process evaluation determines whether program activities have been implemented as intended and is assessed using a combination of primary and secondary data. As shown above in Table 1, qualitative and quantitative data are collected across a variety of program areas and levels from participants, facilitators/telephone health coaches, service providers and stakeholders on program uptake, retention and attrition, satisfaction, community engagement and social marketing, networking and partnerships and program governance.
At the participant level, the evaluation team have access only to secondary, non‐identifiable data, provided by the My health for life program in accordance with program data transfer regulations. Because the program is ongoing, data collection and analysis are continual, with outcome and process evaluation reported annually. Ongoing analysis and reporting provide continual feedback to the program team, enabling improvements to program delivery. Overall, a broad mixed‐methods approach is taken to data analysis. Data are cleaned and entered into one master data set using the current version of SPSS software. Descriptive information is tabulated across evaluation domains, and frequency counts and percentages are reported across the RE‐AIM indicators. Inferential statistics and repeated‐measures analyses (within and between groups) are used to determine effectiveness.
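The outcome analyses are conducted in SPSS; as an illustration of the equivalent logic, the minimal Python sketch below assumes a long‐format, de‐identified data set (hypothetical column names) and shows descriptive tabulation by time point followed by a within‐group paired comparison of baseline (T1) and end of program (T3).

```python
import pandas as pd
from scipy import stats

# Hypothetical long-format extract: one row per participant per time point.
data = pd.DataFrame({
    "participant_key": ["a", "a", "b", "b", "c", "c"],
    "timepoint":       ["T1", "T3", "T1", "T3", "T1", "T3"],
    "fruit_veg_serves": [2.0, 4.0, 3.0, 3.5, 1.0, 2.5],
})

# Descriptive statistics tabulated by time point.
print(data.groupby("timepoint")["fruit_veg_serves"]
          .agg(["count", "mean", "std"]))

# Within-group repeated measures: pivot to wide format so each participant
# contributes a matched baseline/end-of-program pair, then a paired t-test.
wide = data.pivot(index="participant_key", columns="timepoint",
                  values="fruit_veg_serves").dropna()
t_stat, p_value = stats.ttest_rel(wide["T1"], wide["T3"])
print(f"paired t = {t_stat:.2f}, p = {p_value:.3f}")
```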
3. DISCUSSION
This report provides a worked example of how to embed an evaluation in the design and implementation of a complex intervention or program, examples of which are scarce in the current literature. 22 It responds to calls for guidance on how to proceed in evaluation studies and the decision‐making surrounding them. 23 The main strength of the process evaluation design adopted in this study is the efficient use of continuous data collection and analysis, enabling ongoing feedback to the program team and continuous quality improvement across the program's lifecycle. Further strengths are the efficient use of extant data collection tools and processes embedded in program delivery, and the use of several data collection methods, which provides the flexibility to capture diverse perspectives on program implementation. Moreover, the qualitative approach provides access to nuanced information about the processes, priorities and constraints of the program context that is generally not elicited by a quantitative approach. Although the program evaluation has a number of strengths, it is not without limitations. First, there is no comparison group, meaning that only contribution, rather than attribution, can be claimed when discussing the effectiveness of the program. Second, the evaluation relies on self‐report data from participants rather than objective measures of behaviour. Third, the evaluation examines one context, which may limit generalisability to other contexts and settings.
Additionally, the evaluation will create a baseline that the program team and service providers can use to monitor progress in the future. The evaluation will contribute to building long‐term administrative data that can help to explain changes in outputs and outcomes over time, including behaviour change. Examining all three levels of the system allows the evaluators to identify the mechanisms in the system that can be leveraged for greater engagement and program success.
The evaluation will provide vital insights into what contributes to the program's outcomes and impacts, while generating new knowledge on its implementation and scale‐up. At the individual level, the program will benefit participants by helping them understand their chronic disease risk and the options available for action. At the community level, outcomes of this research have the potential to reduce individual risk factors for chronic disease. If found effective, the My health for life program can advance knowledge and understanding of the effect of multiple health behaviour change interventions in modifying lifestyle risk factors in adults. The program can potentially be adapted and tailored for other age groups or different at‐risk populations, and applied in a variety of community or corporate settings, to address the growing burden of chronic disease. This research will inform policy, practice and investment decisions regarding how to optimally meet the needs of Queenslanders at risk of chronic disease.
DISCLOSURES
The authors declare that they have no conflict of interests.
Funding information
The My health for life program is an initiative funded by the Queensland Government through Health and Wellbeing Queensland. The evaluation of the program was awarded via a tender process, with funding provided to Griffith University by the My health for life program lead agency, Diabetes Queensland. The My health for life program was developed and is delivered by the Healthier Queensland Alliance.
ACKNOWLEDGEMENTS
The Evaluation Project is an initiative of the Queensland Government. Funding was awarded to the My health for life program from Queensland Government ‐ Queensland Health (Preventive Health Branch). We would like to acknowledge the My health for life participants, My health for life program facilitators and service providers and the My health for life project team. Open access publishing facilitated by Griffith University, as part of the Wiley ‐ Griffith University agreement via the Council of Australian University Librarians.
Parkinson J, McDonald N, Seib C, Moriarty S, Anderson D. A multi‐component evaluation framework of a state‐wide preventive health program: My health for life. Health Promot J Austral. 2022;33(S1):271–7. 10.1002/hpja.591
Handling editor: Sarah Ireland
REFERENCES
1. Bertram MY, Sweeny K, Lauer JA, Chisholm D, Sheehan P, Rasmussen B, et al. Investing in non‐communicable diseases: an estimation of the return on investment for prevention and treatment services. Lancet. 2018;391(10134):2071–8.
2. Nugent R, Bertram MY, Jan S, Niessen LW, Sassi F, Jamison DT, et al. Investing in non‐communicable disease prevention and management to advance the Sustainable Development Goals. Lancet. 2018;391(10134):2029–35.
3. Australian Institute of Health and Welfare. Australia's health 2018: in brief. Cat. no. AUS 222. 2018.
4. Australian Health Ministers' Advisory Council. National strategic framework for chronic conditions. Canberra: Australian Government; 2017.
5. Queensland Health. My health, Queensland's future: advancing health 2026. Queensland Health; 2016. Available from: https://www.health.qld.gov.au/__data/assets/pdf_file/0025/441655/vision‐strat‐healthy‐qld.pdf
6. My Health for Life. My health for life Caboolture concept proof. 2016. Available from: https://www.myhealthforlife.com.au/wp‐content/uploads/2021/11/Concept_Proof_Final_Report_MH4L_Griffith_Uni_Dec2016.pdf
7. My Health for Life. My health for life: an innovative, evidence‐based preventative health program for tackling chronic disease in Queensland ‐ program design and delivery overview. 2019. Available from: https://www.myhealthforlife.com.au/wp‐content/uploads/2021/11/Program_design___delivery_overview.pdf
8. Ling T. Evaluating complex and unfolding interventions in real time. Evaluation. 2012;18(1):79–91.
9. Medical Research Council. Developing and evaluating complex interventions. UK: Medical Research Council; 2019. Available from: https://mrc.ukri.org/documents/pdf/complex‐interventions‐guidance/
10. U.S. Department of Health and Human Services, Centers for Disease Control and Prevention. Introduction to program evaluation for public health programs: a self‐study guide. Atlanta, GA: Centers for Disease Control and Prevention; 2011. Available from: https://stacks.cdc.gov/view/cdc/26245
11. Glasgow RE, Vogt TM, Boles SM. Evaluating the public health impact of health promotion interventions: the RE‐AIM framework. Am J Public Health. 1999;89(9):1322–7.
12. Proctor EK, Landsverk J, Aarons G, Chambers D, Glisson C, Mittman B, et al. Implementation research in mental health services: an emerging science with conceptual, methodological, and training challenges. Adm Policy Ment Health. 2009;36(1):24–34.
13. Damschroder LJ, Aron DC, Keith RE, Kirsh SR, Alexander JA, Lowery JC. Fostering implementation of health services research findings into practice: a consolidated framework for advancing implementation science. Implement Sci. 2009;4:50.
14. Chen L, Magliano DJ, Balkau B, Colagiuri S, Zimmet PZ, Tonkin AM, et al. AUSDRISK: an Australian Type 2 Diabetes Risk Assessment Tool based on demographic, lifestyle and simple anthropometric measures. Med J Aust. 2010;192(4):197–202.
15. National Vascular Disease Prevention Alliance (NVDPA). Guidelines for the management of absolute cardiovascular disease risk. Australia: National Stroke Foundation; 2012.
16. World Health Organization. WHO STEPS Surveillance Manual: the WHO STEPwise approach to noncommunicable disease risk factor surveillance. Geneva: World Health Organization; 2017. Available from: https://www.who.int/teams/noncommunicable‐diseases/surveillance/systems‐tools/steps/manuals
17. Australian Bureau of Statistics. National Health Survey 2014–15 questionnaire. Canberra: Australian Bureau of Statistics; 2015. Available from: https://www.abs.gov.au/AUSSTATS/abs@.nsf/DetailsPage/4364.0.55.0012014‐15
18. Australian Institute of Health and Welfare. 2016 National Drug Strategy Household Survey. Australia: Australian Institute of Health and Welfare; 2016. Available from: https://www.aihw.gov.au/getmedia/15db8c15‐7062‐4cde‐bfa4‐3c2079f30af3/21028a.pdf.aspx?inline=trues
19. Australian Institute of Health and Welfare. The Active Australia Survey: a guide and manual for implementation, analysis and reporting. Australia: Australian Institute of Health and Welfare; 2003. Available from: https://www.aihw.gov.au/getmedia/ff25c134‐5df2‐45ba‐b4e1‐6c214ed157e6/aas.pdf
20. Centers for Disease Control and Prevention. Measuring healthy days. Atlanta, GA: Centers for Disease Control and Prevention; 2000. Available from: https://www.cdc.gov/hrqol/pdfs/mhd.pdf
21. Parasuraman A, Berry L, Zeithaml V. Refinement and reassessment of the SERVQUAL scale. J Retail. 1991;67(4):420–50.
22. O'Hara BJ, Bauman AE, Eakin EG, King L, Haas M, Allman‐Farinelli M, et al. Evaluation framework for translational research: case study of Australia's Get Healthy Information and Coaching Service®. Health Promot Pract. 2013;14(3):380–9.
23. Hallingberg B, Turley R, Segrott J, Wight D, Craig P, Moore L, et al. Exploratory studies to decide whether and how to proceed with full‐scale evaluations of public health interventions: a systematic review of guidance. Pilot Feasibility Stud. 2018;4(1):1–12.