Summary.
- What is already known on this subject
  ○ Evidence of effective health programs is very important; however, relatively little is generated within rural contexts. Clinicians and health program managers are often well placed to lead evaluation activities but may not be familiar with evaluation methods.
- What this paper adds
  ○ This paper provides an overview of a simple step-by-step process for planning health program evaluation. It demonstrates a method for developing data collection and reporting plans from a program logic document.
There is a lack of evidence generated by health program evaluation in rural contexts [1, 2]. Evaluation of health programs is important for determining outcomes and impact [3], providing evidence for policy and funding decisions [4], and driving continuous improvement of health programs that address inequalities in rural health outcomes [5, 6]. There are several challenges to healthcare evaluation in rural settings [1, 2, 4, 5, 6]. These can include resource limitations, such as shortages of healthcare professionals, facilities, infrastructure, and funding, as well as geographical barriers, such as vast distances and transportation issues, which may impede stakeholder engagement and data collection. Program evaluation must be tailored to the unique cultural and social contexts of rural communities. For example, qualitative data from interviews, video and audio recordings, and written feedback can provide useful evidence of outcomes and experiences of receiving care but may not be deemed appropriate in some communities. The views and support of Elders and other community leaders can guide the design and implementation of culturally safe and welcoming program evaluation activity. While a detailed explanation of these issues and management strategies is beyond the scope of this article, suggestions for approaching some of them are noted, and further guidance on managing them is available in the literature.
Given these challenges, rural clinicians and health program managers are often well placed to lead the evaluation of health programs. However, they may not be familiar with evaluation methods. This article provides an overview of a simple step‐by‐step process for planning health program evaluation and demonstrates a method for developing data collection and reporting plans from a program logic document.
1. Practical Steps in Planning Health Program Evaluation
A review of the literature shows that several frameworks and guidance documents have been developed to support health program evaluation. For example, the Centers for Disease Control and Prevention (CDC) Program Evaluation Framework provides a thorough foundation with its validated model [7, 8]. However, those who are new to health program evaluation may find additional explanation of how to operationalise the evaluation planning process useful. The following is an outline of a stepped process for planning a health program evaluation, with a rural-based Health Care Navigator program as an example.
1. Understand the expectations of key stakeholders. Identify expectations separately in relation to (1) the activities, outputs, outcomes and impact of the health program, and (2) the deliverables of the evaluation process (e.g., evaluation report content and structure, video vignettes). Stakeholders can include clients/patients and their families and carers, who may have expectations regarding program activities and outcomes. Additionally, funders and policy makers may specify key performance indicators (KPIs), outputs and outcomes. Other stakeholders who can contribute their expectations include Elders and other community leaders, organisation and program managers, and the staff members who will deliver the service. Careful planning at this stage can help to maximise cultural safety and enable the engagement of key stakeholders from the beginning of the evaluation process.
2. Develop a program logic. This describes how the program/initiative is intended to work by articulating the theory of change and demonstrating the evidence of need, then linking expected inputs, activities, outputs, and outcomes. A useful guide is provided by NSW Health [9]. A brief example of a Program Logic for a rural Health Care Navigator program is provided in Figure 1 (Step 2).
3. Develop a data collection plan. Two separate tables can be used to operationalise a Data Collection Plan from a Program Logic. Figure 1 provides a simple worked example of how these tables were used in planning an evaluation of the example Health Care Navigator program. The first table is a worksheet (see Figure 1, Step 3a). This worksheet displays the key elements identified from the Program Logic in the rows (e.g., Outputs, Outcomes: short term, medium term and long term). Step 3a shows the following information in columns:
a. What information should be collected (e.g., outputs and other program KPIs, outcome measures, client/patient experience and satisfaction surveys [10]).
b. How information will be collected. In this example, demographic and clinical information will be collated from a client/patient record system, and specific measures will be used, including Goal Attainment Scales, the Camberwell Assessment of Need/Partnership Health (CAN/PIH) tools and Patient Reported Experience Measures (PREMs). Clinical outcomes will include average haemoglobin A1c (HbA1c) for people with diabetes and girth for people who presented with cardiovascular disease (CVD)/overweight. The Kessler K5/K10 and EQ-5D-5L scales will be completed to measure mental wellbeing and quality of life. Interviews are planned with consenting clients/patients and carers. Where a small sample and limited statistical power prevent statistical analyses, case summaries or vignettes can present individuals' stories of the outcomes and experiences they achieved through the program via video or text/photos. Overall, methods must be feasible, acceptable and ethical for clients/patients and staff. Data must be stored securely and in a format that enables later analysis (e.g., password-protected file storage, data in a spreadsheet).
c. From Whom and Where the information will be sourced. In this example, information will be sought from clients/patients, carers and the client/patient record system. Program staff will support data collection. In planning other evaluations in some rural contexts, it can be useful to seek data from other stakeholders, including other family members, staff, other service workers, and community members.
d. When will each piece of information be collected, and by when? In this example, the timepoints for data collection are provided, or the report in which the data will be collated is named.
The second table is the final Data Collection Plan (Figure 1, Step 3b). This presents the information generated in the first worksheet in a clean format that can be implemented by the evaluation team. The Data Collection Plan (3b) table lists each of the evaluation/outcome measures and methods as rows, while the columns identify Who will provide the information (e.g., clients/patients, caregivers), Who will collect the information (e.g., a particular program staff member), When the information will be collected (e.g., due-by dates) and a brief explanation of What information is being collected (e.g., what the tool measures).
FIGURE 1.
From program logic to data collection plan—Brief example worksheets for a Health Care Navigator program. Consider additional presentation formats (e.g., Video vignettes, case vignettes/examples, photos, reflective journals, presentations…).
Note. PHN = Primary Health Network, HCN = Health Care Navigator, HbA1c = haemoglobin A1C/glycated haemoglobin, Girth cm = Waist circumference centimeters, CAN/PIH = Camberwell Assessment of Need/Partnership Health, PREM = Patient reported experience measure, DE/GP = Diabetes Educator/General Practitioner, EP/GP = Exercise Physiologist/General Practitioner, OOS = Occasions of service, QOL = Quality of life.
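For teams that keep their Data Collection Plan in a spreadsheet or simple script, the Step 3b table can also be held as structured records and checked for completeness before rollout. The sketch below is illustrative only: the measure names mirror the worked example, while the field values (sources, collectors, due dates) are hypothetical placeholders, not the program's actual plan.

```python
# A Data Collection Plan (as in Step 3b) held as structured records,
# with a small check that every row answers Who/When/What before the
# plan is handed to the evaluation team. Values are hypothetical.

plan = [
    {"measure": "HbA1c", "source": "client/patient record system",
     "collector": "program staff", "due": "baseline and follow-up",
     "what": "Glycated haemoglobin for clients with diabetes"},
    {"measure": "PREM", "source": "clients/patients",
     "collector": "program staff", "due": "program exit",
     "what": "Patient-reported experience of care"},
]

REQUIRED_FIELDS = {"measure", "source", "collector", "due", "what"}

def missing_fields(plan):
    """Return (measure, missing-field) pairs for incomplete rows."""
    gaps = []
    for row in plan:
        for field in sorted(REQUIRED_FIELDS - set(row)):
            gaps.append((row.get("measure", "?"), field))
    return gaps

print(missing_fields(plan))  # -> [] means every row is fully specified
```

An empty result confirms the plan table answers all four questions (Who provides, Who collects, When, What) for every measure; any gaps are surfaced before data collection begins rather than discovered at analysis time.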
4. Implement the data collection plan. Problems with the collection of data can result in an inability to complete a program evaluation. Methods for enabling success include collecting data at natural points in the client/patient journey or other operational processes, recruiting reliable data collectors, piloting data collection early to identify and manage problems, using central and secure data storage, keeping up with data entry, and actively monitoring and managing data collection.
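Actively monitoring data collection can be as simple as regularly comparing what has been entered against what was planned. The sketch below, with invented client identifiers and measure names, flags outstanding items so they can be chased before follow-up windows close; it is one possible approach, not the program's actual tooling.

```python
# Monitoring sketch: given the measures expected for each client and the
# measures actually entered so far, list what is still outstanding.
# Client IDs and measure names are invented for illustration.

expected = {"HbA1c", "K10", "PREM"}

entered = {
    "client-001": {"HbA1c", "K10", "PREM"},  # complete
    "client-002": {"HbA1c"},                 # K10 and PREM outstanding
}

def outstanding(entered, expected):
    """Map each client to the measures not yet entered for them."""
    return {cid: sorted(expected - done)
            for cid, done in entered.items()
            if expected - done}

print(outstanding(entered, expected))
# -> {'client-002': ['K10', 'PREM']}
```

Running a check like this at each timepoint turns "keeping up with data entry" into a concrete, repeatable task rather than a hope.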
5. Analyse data. Depending on the focus of the evaluation and the key questions, the information collected may require sophisticated analysis. For example, specific analysis may be required to understand whether changes in a client's/patient's functioning before, after and at follow-up were statistically and/or clinically significant. Thematic analysis may be required to understand the experiences of clients/carers/staff from interview data. As such, expert assistance may be required to complete elements of the analysis and may be accessed from Public Health Units, universities or companies offering these services.
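As a minimal sketch of the kind of before/after comparison described above, the following computes a paired t statistic on hypothetical HbA1c values (%) for a small group; the data are invented for illustration. With very small samples, the case summaries or vignettes suggested earlier may be more appropriate than significance testing, and interpretation of any real analysis should involve the expert assistance noted above.

```python
# Paired before/after comparison on hypothetical HbA1c (%) values.
# A positive difference (before - after) represents improvement.
import math
from statistics import mean, stdev

before = [8.9, 9.4, 8.1, 10.2, 9.0, 8.6]
after  = [8.2, 8.8, 7.9, 9.1, 8.4, 8.3]

diffs = [b - a for b, a in zip(before, after)]  # per-client change
n = len(diffs)

# Paired t statistic: mean change divided by its standard error.
t = mean(diffs) / (stdev(diffs) / math.sqrt(n))

print(f"mean change = {mean(diffs):.2f}, t({n - 1}) = {t:.2f}")
```

Clinical significance is a separate question from statistical significance: even a statistically significant mean change should be compared against an accepted minimally important difference for the measure.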
6. Report findings. Developing a clear reporting plan can help to confirm the data to be collected and the analyses required, and to determine the content and structure of reports. Preparing this plan before commencing data collection can be helpful. Figure 1 (Step 6) shows an example Reporting Plan for the Health Care Navigator program.
The example Reporting Plan is presented in a table with rows containing a description of the program, followed by Inputs, Activities, Outputs, and Outcomes (short, medium, and long term), as per a program logic. A second column lists the information to present in the program evaluation report. In this example, key information about the program, its development and implementation, and its achievements, including the outputs and outcomes listed in the Step 3b Data Collection Plan, is noted.
2. Reflections and Conclusions
Evaluation of health programs is recognised as essential for providing evidence that can ultimately address inequality of health outcomes in rural communities [4, 5, 6]. However, this evidence is lacking for many rural health services [1, 2]. Our paper shares a simple introduction to operationalising a data collection and reporting plan from a program logic. This method has been successfully implemented across several rural-based health and wellbeing programs, including a multidisciplinary paediatric outreach clinic, a peer navigator program, diabetes mentoring, Sense rugby (a rugby skills program for children with disability), and a school-based reading program. It may be useful to leaders in rural health who are considering program evaluation. Future research will evaluate and further develop this method with a focus on rural health programs.
Author Contributions
Matt Thomas: conceptualization, writing – original draft, methodology. Justine Summers: writing – review and editing. Sherryn Honeywood: writing – review and editing, methodology. Alyssa Fitzgerald: methodology, writing – review and editing. Donna Ambler: methodology, writing – review and editing. Kylie Falciani: methodology, writing – review and editing. Amanda Cook: methodology, writing – review and editing. Kelly Smith: methodology, writing – review and editing. Dean Bright: methodology, writing – review and editing. Michelle Lindsay: methodology, writing – review and editing. Catherine Sanford: methodology, writing – review and editing.
Conflicts of Interest
The authors declare no conflicts of interest.
Acknowledgements
We would like to acknowledge James Knight, Sally Hegvold and Raechel Nimmo (Communications Team, Marathon Health) for their assistance in formatting the content of this paper. We are grateful to the Managers, Clinical and Team Leads and staff across Marathon Health programs for their commitment and effort in implementing program evaluation and continuous improvement activities. Finally, we would like to express our gratitude for the engagement and support of the many individuals and organisations who have partnered with us to support program evaluation and the development of our capabilities in this area. Open access publishing facilitated by Charles Sturt University, as part of the Wiley ‐ Charles Sturt University agreement via the Council of Australian University Librarians.
Thomas M., Summers J., Honeywood S., et al., “Steps for Planning Health Program Evaluations: From Program Logic to Data Collection and Reporting Plans,” Australian Journal of Rural Health 33, no. 3 (2025): e70068, 10.1111/ajr.70068.
Data Availability Statement
The authors have nothing to report.
References
- 1. Kavanagh B. E., Mc Namara K. P., Bolton P., Dennis C., and Versace V. L., “Building Research Capacity at a Rural Place‐Based Community Service Organisation in Southwest Victoria, Australia,” Australian Journal of Rural Health 32 (2024): 1068–1071.
- 2. Moran A., Haines H., Raschke N., et al., “Mind the Gap: Is It Time to Invest in Embedded Researchers in Regional, Rural and Remote Health Services to Address Health Outcome Discrepancies for Those Living in Rural, Remote and Regional Areas?,” Australian Journal of Primary Health 25, no. 2 (2019): 104–107.
- 3. Nundy S., Cooper L. A., and Mate K. S., “The Quintuple Aim for Health Care Improvement: A New Imperative to Advance Health Equity,” Journal of the American Medical Association 327, no. 6 (2022): 521–522.
- 4. Alston L., Bourke L., Nichols M., and Allender S., “Responsibility for Evidence‐Based Policy in Cardiovascular Disease in Rural Communities: Implications for Persistent Rural Health Inequalities,” Australian Health Review 44, no. 4 (2020): 527–534.
- 5. Flavel J., Kedzior S. G., Isaac V., Cameron D., and Baum F., “Regional Health Inequalities in Australia and Social Determinants of Health: Analysis of Trends and Distribution by Remoteness,” Rural and Remote Health 24, no. 1 (2024): 1.
- 6. Bourke S. L., Harper C., Johnson E., et al., “Health Care Experiences in Rural, Remote, and Metropolitan Areas of Australia,” Online Journal of Rural Nursing and Health Care 21, no. 1 (2021): 67–84.
- 7. Kidder D. P., Fierro L. A., Luna E., et al., “CDC Program Evaluation Framework, 2024,” Morbidity and Mortality Weekly Report 73, no. 6 (2024): 1–37, 10.15585/mmwr.rr7306a1.
- 8. Milstein B., Wetterhall S., and CDC Evaluation Working Group, “A Framework Featuring Steps and Standards for Program Evaluation,” Health Promotion Practice 1, no. 3 (2000): 221–228.
- 9. Centre for Epidemiology and Evidence, “Developing and Using Program Logic: A Guide,” in Evidence and Evaluation Guidance Series, Population and Public Health Division (NSW Ministry of Health, 2023), https://www.health.nsw.gov.au/research/Publications/developing‐program‐logic.pdf.
- 10. Gilmore K. J., Corazza I., Coletta L., and Allin S., “The Uses of Patient Reported Experience Measures in Health Systems: A Systematic Narrative Review,” Health Policy 128 (2023): 1.