Abstract
Introduction
Patient reported experience measures (PREMs) are important tools to include the voices of patients in healthcare provision. Children have a right to be included in decisions made about their care. A self-reported, locally validated and standardised paediatric PREM (pPREM) does not exist for use in Australian healthcare settings. Further, existing pPREMs are rarely codesigned with children or developed to be completed by children themselves. This study aims to validate a pPREM that will be completed by children within Australian healthcare settings.
Methods and analysis
This study will involve three subphases, engaging children aged 6–11 years old who have had a hospital admission in the past 3 months. First, up to 25 children will participate in cognitive interviews to pilot test pPREM items. Using feedback from the interviews, population testing will occur with about 180 children at six Australian hospitals to determine the validity, reliability and feasibility of the pPREM. The study’s implementation process will be evaluated through interviews with approximately 25–30 clinicians, managers and other stakeholders.
Keywords: Child Health, Patient Rights
WHAT IS ALREADY KNOWN ON THIS TOPIC
Paediatric patient reported experience measures (pPREMs) are an important part of patient-centred care.
There are no self-reported, freely available and validated pPREMs for use with Australian children in which the items have been codesigned by children.
WHAT THIS STUDY ADDS
This study will result in a freely accessible, validated, self-completed pPREM tool for Australian children aged 6–11 years.
This study will be conducted at six hospitals across Australia to validate the pPREM in diverse contexts.
HOW THIS STUDY MIGHT AFFECT RESEARCH, PRACTICE OR POLICY
Real-time feedback from pPREMs can help healthcare professionals to provide more responsive and tailored healthcare for children.
By involving children directly in the evaluation of care, paediatric hospitals can improve communication, patient satisfaction and overall service quality.
Introduction
Patient reported experience measures (PREMs) are standardised questionnaires that capture the perspectives of patients regarding the care they receive, providing valuable insights that can drive improvements in healthcare quality and service delivery.1 2 While PREMs have become increasingly common in adult populations, their application in paediatric care remains in its infancy. Traditional paediatric PREMs (pPREMs) often rely on staff to assist children or on proxy reporting, where parents or caregivers complete the measures on behalf of the child.3–5 While these approaches can provide important information, they may not fully capture the unique experiences, preferences and emotional responses of children themselves.6 7 As children mature, their ability to reflect on and articulate their healthcare experiences grows, highlighting the need for self-completed pPREMs that are developmentally appropriate and empower young patients to share their views directly.8 9 Existing pPREMs can also be lengthy and require a subscription, limiting their accessibility for clinicians and children.10 Further, existing pPREMs are often not validated with children with disabilities or from diverse cultural backgrounds.11 12 In addition, only a few pPREMs have included children in the development process.11
While pPREM development initiatives are occurring in other countries, there is currently no self-completed pPREM tool that has been codesigned by children and validated in Australia.13 14 The Needs of Children Questionnaire (NCQ) is an important contribution to the literature, developed in the Australian and New Zealand hospital context, including priority populations.15 This measure used data from a meta-synthesis of primary research to inform the original item generation, with input from children and young people in subsequent item testing.15 The NCQ methodology used staff to assist younger children, aged 5–10 years, in completing the tool.15 It is anticipated that a pPREM tool validated and tested in the Australian hospital setting will increase use of pPREMs across Australia. The development of a national tool will promote consistency in data collection, facilitate the sharing of outcomes across healthcare services and ensure a more unified approach to improving paediatric care.16 17 Additionally, it will strengthen adherence to the rights of children, ensuring they are heard and their experiences are integral to the ongoing development of healthcare services.18 19 With the support of the Australian Commission on Safety and Quality in Health Care, Children’s Healthcare Australasia (CHA) and Starlight Children’s Foundation have developed a programme to codesign, develop and validate a pPREM in Australian healthcare settings.20 This programme involves contributions from various stakeholders, including consumers, clinicians and academics. One phase of this work has been completed. In phase 1, children from Australian hospitals were interviewed to determine what is important to them in hospital. Findings showed that children wanted to be involved in decisions, to have positive relationships and interactions with staff and to have a comfortable hospital environment.21 Next, items that aligned with children’s perspectives were gathered from previous measures, and stakeholders rated the importance of items using an iterative process over four rounds. Stakeholders completed anonymised online surveys to rate items and provided suggestions for improving the items, such as providing alternate wording. The domains that were expressed by children and agreed on by stakeholders are outlined in figure 1.21 The pPREM that was drafted in phase 1 requires validation, which will be the focus of this study protocol.
Figure 1. Paediatric patient reported experience measures domains resulting from phase 1.
The aim of this study is to validate a pPREM tool that will be completed by children. The specific objective of the proposed study is to measure the validity, reliability and feasibility of the developed pPREM.
Methods and analysis
Phase 2: pilot testing and validation trial
Based on the items generated in phase 1, phase 2 will involve three subphases: (2a) pilot testing items to determine whether items are acceptable and understandable, (2b) a validation trial to determine the validity, reliability and fidelity of the pPREM and (2c) implementation evaluation and feedback from clinicians and managers from implementation trial sites (figure 2). This research is occurring on behalf of an organisation that encompasses all children’s healthcare facilities across Australia and aims to implement the product of phase 2 across the country.
Figure 2. Three subphases of the paediatric patient reported experience measure validation process.
Phase 2a
Study design
Phase 2a will involve pilot testing the content validity of the pPREM by pretesting items with children to determine whether the items are appropriate, as has been done in similar studies.11
Study setting
This study will occur at four hospitals across Australia, including metropolitan, regional and rural communities.
Participants
Flyers will be provided to families by hospital staff. Children will be eligible to participate if they are aged 6–11 years and have had a hospital admission in the previous 3 months. Up to 25 children will participate, an adequate number to reach data saturation on the pPREM items.22 Participants will include children from various cultural backgrounds and children with disabilities. While children from culturally diverse backgrounds or children with disabilities will not be specifically targeted in recruitment, the allocation of recruitment settings seeks to ensure their inclusion.
Procedure
Through cognitive interviews, children will be presented with draft items and will respond to a series of questions related to their comprehension and interpretation of each item.23 Participants will be invited to read each item aloud, explain their understanding of the item and identify any parts that lack clarity. If children are unable to read the items, the interviewer will use a read-aloud technique. Children will be asked to respond to the item as written and to provide a rating verbally. Next, children will be asked, ‘What does [item] mean to you?’ This question is recommended by the International Society for Pharmacoeconomics and Outcomes Research, which provides guidelines on cognitive interviews, and has been used in previous research testing a mock patient reported outcome measure with children aged 6–7 years.24 Next, follow-up questions will be asked (eg, ‘Can you explain how you chose that answer?’; ‘Was anything confusing about that?’; ‘Are there any words that you did not know?’). Areas identified as unclear will be probed in detail and participants will be asked to suggest improvements. In addition, children will be asked whether items are relevant to their experience in the hospital to verify that the items are important to them (eg, ‘Does this sentence apply to you and relate to your experience in the hospital?’). Children will also provide feedback on the response options of the pPREM, such as whether they prefer a rating system using stars or smiley faces. Interviews will be audio recorded and transcribed verbatim.
Data analysis
Responses will be qualitatively analysed using framework analysis through NVivo V.12.25 Using an iterative approach, pPREM items will be refined and improved to enhance their validity and reliability. Ongoing analyses will occur between interviews and items will be modified in accordance with child feedback. The research team will meet several times to discuss the analyses and decide on what modifications will be made. Following this process, a final set of items will be tested in phase 2b.
Phase 2b
Study design
Phase 2b will include population testing of the pPREM to determine the validity, reliability, fidelity and consistency of the tool.
Study setting
This study will occur at six hospitals across Australia, including metropolitan, regional and rural communities.
Participants
Recruitment will occur through flyers, posters on wards and hospital social media advertisements. Participants will be children aged 6–11 years, including those with disabilities and from diverse cultural backgrounds, who have had one or more hospital admissions over the past 3 months. Sample size recommendations for factor analysis and item parameter estimation suggest that these analyses benefit from relatively large sample sizes.26 27 While various guidelines exist, a common recommendation for factor analysis is to have a minimum of 10 observations per variable.28 29 Assuming the draft pPREM tested in phase 2b includes 18 items, the minimum required sample size would be 180 participants, with equal representation across the participating sites.
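As a worked example of this calculation (a minimal sketch; the 18-item count and 10:1 observations-per-item rule are taken from the text above, and equal allocation across the six sites is an assumption):

```python
# Minimum sample size for factor analysis using a 10-observations-per-item rule of thumb.
N_ITEMS = 18        # anticipated number of draft pPREM items
OBS_PER_ITEM = 10   # commonly cited minimum observations per variable
N_SITES = 6         # participating hospitals

min_sample = N_ITEMS * OBS_PER_ITEM   # 180 participants overall
per_site = min_sample / N_SITES       # 30 participants per site if equally allocated
print(f"Minimum sample: {min_sample}; per site: {per_site:.0f}")
```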
Procedure
Children will complete the pPREM online using Research Electronic Data Capture (REDCap). It is estimated that the pPREM will include approximately 18 items and take roughly 10–20 min to complete. The rating system used for the items will depend on the preferred rating system identified in phase 2a. For example, a star system may be used in which children give 1, 2, 3, 4 or 5 stars for each item, with one star being the lowest rating (meaning the worst or ‘never’) and five stars the highest (meaning the best or ‘always’).
Data analysis
To assess reliability, a test-retest design will be used with an interval of 2 weeks.30 Approximately 40 participants will be randomly selected to complete the pPREM an additional time.31 Because a number of agreements may arise by chance alone, chance-corrected agreement will be assessed using Cohen’s kappa coefficients (κ values). The following interpretation will be applied to the coefficients: modest (0.21–0.40); moderate (0.41–0.60); satisfactory (0.61–0.80) and almost perfect (0.81–1.00).32 Consistency of data at the interval level will be evaluated by computing intraclass correlation coefficients (ICCs) (two-way mixed models; absolute agreement). ICCs ≥0.70 will be interpreted as optimal.30
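A minimal analysis sketch for the test-retest statistics described above, using scikit-learn for Cohen’s kappa and the pingouin package for ICCs (the file name, column names and data layout are illustrative assumptions, not part of the protocol):

```python
import pandas as pd
from sklearn.metrics import cohen_kappa_score  # chance-corrected agreement per item
import pingouin as pg                          # intraclass correlation coefficients

# Hypothetical long-format data: one row per child per occasion, one column per item score.
retest = pd.read_csv("pprem_test_retest.csv")  # columns: child_id, occasion (1 or 2), item_1 ... item_18
wide = retest.pivot(index="child_id", columns="occasion")

# Item-level kappa between the two administrations, interpreted with the Altman bands above.
for item in [c for c in retest.columns if c.startswith("item_")]:
    kappa = cohen_kappa_score(wide[(item, 1)], wide[(item, 2)])
    print(f"{item}: kappa = {kappa:.2f}")

# ICC for the total score; pingouin returns the standard ICC forms, and the form matching
# the protocol's two-way mixed, absolute-agreement specification is selected from this table.
retest["total"] = retest.filter(like="item_").sum(axis=1)
icc = pg.intraclass_corr(data=retest, targets="child_id", raters="occasion", ratings="total")
print(icc[["Type", "ICC", "CI95%"]])
```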
Item analysis: prior to assessing the scale’s psychometric properties, a comprehensive item analysis will be conducted. This will involve examining:
Missing data: the proportion of missing data for each item will be calculated to identify potential issues with item comprehension, sensitivity or relevance.
Distribution of item responses: for each item, the distribution of responses across the rating scale will be analysed to verify whether the items capture a range of experiences rather than showing ceiling or floor effects. Items that show highly skewed distributions will be flagged for further consideration and potential revision or elimination, in conjunction with theoretical considerations from phase 1.
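A brief pandas sketch of the item analysis described above (the file name, column names and flagging thresholds are hypothetical illustrations only):

```python
import pandas as pd

responses = pd.read_csv("pprem_phase2b_responses.csv")  # one row per child, columns item_1 ... item_18
items = responses.filter(like="item_")

# Missing data: proportion of unanswered responses per item.
missing = items.isna().mean().rename("prop_missing")

# Distribution of responses: frequency of each rating (eg, 1-5 stars) and skewness,
# used to flag possible ceiling or floor effects.
distributions = items.apply(lambda col: col.value_counts(normalize=True).sort_index())
skewness = items.skew().rename("skew")

summary = pd.concat([missing, skewness], axis=1)
flagged = summary[(summary["prop_missing"] > 0.10) | (summary["skew"].abs() > 2)]  # illustrative cut-offs
print(distributions)
print(summary)
print("Items flagged for review:\n", flagged)
```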
Construct validity is essential for determining the overall validity of an assessment tool. Given that the items were developed based on a theory-informed conceptual framework in phase 1, a confirmatory factor analysis (CFA) will be conducted to determine whether the data from the items match the expected factor structure based on theory.33 Robust maximum likelihood estimation with a promax rotation will be used.34 The fit of the model will be assessed using the χ2 statistic, the ratio of χ2 to the degrees of freedom (χ2/df), the comparative fit index (CFI), the goodness-of-fit index (GFI), the adjusted GFI (AGFI) and the root mean square error of approximation (RMSEA). The following thresholds will be considered to indicate reasonable fit: χ2/df <5,32 35 CFI, GFI and AGFI ≥0.8,33 36 and RMSEA <0.08.34 37 If the initial CFA models do not achieve adequate fit, we will carefully review modification indices and theoretical relevance to consider potential removal of items or respecification of the factor structure, if needed.
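A minimal CFA sketch using the semopy package in Python (the three factors and item assignments shown are illustrative placeholders loosely based on the phase 1 domains, not the final model; semopy’s default maximum likelihood estimator is used here, so robust estimation would require different settings or software):

```python
import pandas as pd
from semopy import Model, calc_stats

data = pd.read_csv("pprem_phase2b_responses.csv")  # hypothetical file; columns item_1 ... item_18

# Illustrative measurement model in lavaan-style syntax: items assigned to placeholder domains.
model_desc = """
involvement   =~ item_1 + item_2 + item_3 + item_4 + item_5 + item_6
relationships =~ item_7 + item_8 + item_9 + item_10 + item_11 + item_12
environment   =~ item_13 + item_14 + item_15 + item_16 + item_17 + item_18
"""

model = Model(model_desc)
model.fit(data)

# calc_stats returns chi-square, degrees of freedom, CFI, GFI, AGFI, RMSEA and other indices,
# which can be compared against the thresholds stated in the protocol.
stats = calc_stats(model)
print(stats[["DoF", "chi2", "CFI", "GFI", "AGFI", "RMSEA"]])
fit_ok = (stats["chi2"].iloc[0] / stats["DoF"].iloc[0] < 5) and (stats["RMSEA"].iloc[0] < 0.08)
print("Reasonable fit on chi2/df and RMSEA:", fit_ok)
```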
To determine convergent validity, Spearman’s rank correlation coefficient will be used to examine the strength of correlation between the pPREM and another questionnaire.
Feasibility of the pPREM will be determined by calculating the mean and SD of the time required to complete the pPREM tool, along with the percentage of completed and unanswered responses.
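A short sketch of the convergent validity and feasibility calculations described above (the comparison questionnaire score, completion-time field and file name are hypothetical placeholders):

```python
import pandas as pd
from scipy.stats import spearmanr

df = pd.read_csv("pprem_phase2b_responses.csv")  # hypothetical export incl. completion times and a comparator score

# Convergent validity: Spearman's rank correlation between the pPREM total score and
# the total score of the comparison questionnaire (placeholder column name).
df["pprem_total"] = df.filter(like="item_").sum(axis=1)
rho, p_value = spearmanr(df["pprem_total"], df["comparator_total"], nan_policy="omit")
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")

# Feasibility: mean and SD of completion time, and completeness of responses.
completion_minutes = df["completion_time_seconds"] / 60
print(f"Completion time: mean = {completion_minutes.mean():.1f} min, SD = {completion_minutes.std():.1f} min")

items = df.filter(like="item_")
pct_complete = items.notna().all(axis=1).mean() * 100  # % of children answering every item
pct_unanswered = items.isna().mean().mean() * 100      # overall % of unanswered item responses
print(f"Fully completed: {pct_complete:.1f}%; unanswered responses: {pct_unanswered:.1f}%")
```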
Phase 2c
Study design
Phase 2c aims to understand the barriers to and facilitators of fully implementing the pPREM in Australian hospitals, using the Consolidated Framework for Implementation Research (CFIR).38 This stage will focus on engaging clinicians to implement the pPREM in their local jurisdictions.
Study setting
This study will involve stakeholders working at six hospitals and other healthcare organisations across Australia, representing metropolitan, regional and rural communities.
Participants
Participants will be clinicians and managers from the participating sites and other relevant stakeholders. Research on pPREMs recommends including staff in the implementation process.2 Surveys or interviews will be completed by approximately 25–30 participants.
Procedure
Feedback will be sought using semistructured interviews and surveys regarding the perceived barriers and facilitators to implementing the pPREM in hospitals. The interviews and surveys will be guided by the CFIR. Specifically, questions will focus on the implementation indicators of acceptability, adoption, appropriateness, feasibility, fidelity, coverage, cost and sustainability (eg, understanding whether stakeholders perceive the tool as relevant and useful). The findings will be presented at a stakeholder workshop with people not previously involved in pPREM development to gauge levels of interest and participation, expectations of benefits and added value to service quality, and opportunities for improvement before finalisation of the tool for scale-up. The plan is to implement the product of phase 2 across Australia, using the findings of phase 2c to refine and support that implementation process.
Patient and public involvement
JN (consumer researcher) and a consumer representative will provide a patient perspective for the overall programme of research. Consumers were involved from the planning stage and are part of the governance structure for the project. Consumer involvement is continuous throughout the project, from development to dissemination. The pPREM items were developed using a codesign process.21 Further, JN will contribute to data analysis and interpretation, considering the relevance of the findings to children and the broader community.
Steering committee
A steering committee for the CHA pPREM Development Project will oversee the proposed study. Members of the committee include over 30 professionals with a variety of relevant clinical, industry and research backgrounds and individuals with lived experience of childhood hospitalisation. The committee will meet quarterly to discuss the project.
Ethics and dissemination
Ethics approval has been provided by the South West Sydney Local Health District Human Research Ethics Committee (HREC) (2024/ETH00904).
Our study results will be published in a peer-reviewed journal. This study will also result in a freely accessible pPREM tool for use in Australian hospitals. Following this study, roundtable discussions will be held to gather input from all key stakeholders, including those involved in phase 2, as well as decision-makers and policymakers from state and territory governments and key services expected to implement the tool. This process will incorporate their feedback before finalising the pPREM and collaborating with relevant parties (eg, the New South Wales Agency for Clinical Innovation) to develop a dissemination plan. Once developed and validated, the pPREM will be shared with paediatric services across Australia. The aim is to secure endorsement and encourage widespread adoption of the pPREM in children’s hospitals and paediatric units, supported by state and territory health jurisdictions.
The real-time feedback provided by pPREMs will allow healthcare providers to identify strengths and weaknesses in their services, fostering a more responsive and tailored approach to care. By involving children directly in the evaluation of care, hospitals can improve communication, patient satisfaction and overall service quality. The introduction of a validated pPREM will also provide a tool for policymakers to incorporate the voices of children into service improvement and healthcare policy. It ensures that policies are not just designed based on adult-centric models but are grounded in the real, lived experiences of children and young people, supporting the broader objective of child rights in healthcare.
Footnotes
Funding: The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.
Patient consent for publication: Not applicable.
Provenance and peer review: Not commissioned; externally peer reviewed.
Patient and public involvement: Patients and/or the public were involved in the design, conduct, reporting or dissemination plans of this research. Refer to the Methods section for further details.
Ethics approval: This study was approved by the South West Sydney Local Health District Human Research Ethics Committee (2024/ETH00904).
References
- 1. Beattie M, Murphy DJ, Atherton I, et al. Instruments to measure patient experience of healthcare quality in hospitals: a systematic review. Syst Rev. 2015;4:97. doi: 10.1186/s13643-015-0089-0.
- 2. McCabe E, Rabi S, Bele S, et al. Factors affecting implementation of patient-reported outcome and experience measures in a pediatric health system. J Patient Rep Outcomes. 2023;7:24. doi: 10.1186/s41687-023-00563-1.
- 3. Ali H, Fatemi Y, Cole A, et al. Listening to the Voice of the Hospitalized Child: Comparing Children’s Experiences to Their Parents. Children (Basel). 2022;9:1820. doi: 10.3390/children9121820.
- 4. Felnhofer A, Goreis A, Bussek T, et al. Evaluating Parents’ and Children’s Assessments of Competence, Health Related Quality of Life and Illness Perception. J Child Fam Stud. 2019;28:2690–9. doi: 10.1007/s10826-019-01449-x.
- 5. Bartholdson C, Broström E, Iversen MD, et al. Patient-Reported Experience Measures in Pediatric Healthcare-A Rapid Evidence Assessment. J Patient Exp. 2024;11:23743735241290481. doi: 10.1177/23743735241290481.
- 6. Toomey SL, Elliott MN, Zaslavsky AM, et al. Variation in Family Experience of Pediatric Inpatient Care As Measured by Child HCAHPS. Pediatrics. 2017;139:e20163372. doi: 10.1542/peds.2016-3372.
- 7. Nafees Z, O’Neill S, Dimmer A, et al. Child- and Proxy-reported Differences in Patient-reported Outcome and Experience Measures in Pediatric Surgery: Systematic Review and Meta-analysis. J Pediatr Surg. 2025;60:162172. doi: 10.1016/j.jpedsurg.2025.162172.
- 8. Ali H, Cole A, Sienkiewicz A, et al. Collecting child-patient feedback: A systematic review on the patient-reported outcome measures for hospitalized children. Patient Exp J. 2020;7:58–70. doi: 10.35680/2372-0247.1420.
- 9. Lindeke L, Nakai M, Johnson L. Capturing children’s voices for quality improvement. MCN Am J Matern Child Nurs. 2006;31:290–5. doi: 10.1097/00005721-200609000-00005.
- 10. Wheat H, Horrell J, Valderas JM, et al. Can practitioners use patient reported measures to enhance person centred coordinated care in practice? A qualitative study. Health Qual Life Outcomes. 2018;16:223. doi: 10.1186/s12955-018-1045-1.
- 11. Wray J, Hobden S, Knibbs S, et al. Hearing the voices of children and young people to develop and test a patient-reported experience measure in a specialist paediatric setting. Arch Dis Child. 2018;103:272–9. doi: 10.1136/archdischild-2017-313032.
- 12. Mimmo L, Woolfenden S, Travaglia J, et al. Codesigning patient experience measures for and with children and young people with intellectual disability: a study protocol. BMJ Open. 2021;11:e050973. doi: 10.1136/bmjopen-2021-050973.
- 13. Hybschmann J, Sørensen JL, Thestrup J, et al. MyHospitalVoice - a digital tool co-created with children and adolescents that captures patient-reported experience measures: a study protocol. Res Involv Engagem. 2024;10:49. doi: 10.1186/s40900-024-00582-2.
- 14. Wray J, Russell J, Gibson F, et al. The Forgotten Voices: Enabling Children and Young People With Intellectual Disability to Express Their Views on Their Inpatient Hospital Experience. Health Expect. 2025;28:e70168. doi: 10.1111/hex.70168.
- 15. Foster M, Whitehead L, Arabiat D. Development and validation of the needs of children questionnaire: An instrument to measure children’s self-reported needs in hospital. J Adv Nurs. 2019;75:2246–58. doi: 10.1111/jan.14099.
- 16. Jamieson Gilmore K, Corazza I, Coletta L, et al. The uses of Patient Reported Experience Measures in health systems: A systematic narrative review. Health Policy. 2023;128:1–10. doi: 10.1016/j.healthpol.2022.07.008.
- 17. Bull C, Teede H, Watson D, et al. Selecting and Implementing Patient-Reported Outcome and Experience Measures to Assess Health System Performance. JAMA Health Forum. 2022;3:e220326. doi: 10.1001/jamahealthforum.2022.0326.
- 18. Bele S, Teela L, Zhang M, et al. Use of Patient-Reported Experience Measures in Pediatric Care: A Systematic Review. Front Pediatr. 2021;9:753536. doi: 10.3389/fped.2021.753536.
- 19. Children’s Healthcare Australasia, Association for the Wellbeing of Children in Healthcare. Charter of children’s and young people’s rights in healthcare services in Australia. Australia: Ronald McDonald House Charities; 2011.
- 20. White L. Rights of children and young people in health care. J Paediatr Child Health. 2020;56:499–501. doi: 10.1111/jpc.14802.
- 21. Barr KR, Nikolovski J, White L, et al. Co-developing a Paediatric Patient Reported Experience Measure: The Perspectives of Children and Young People. Patient Exp J. 2024;11:64–72. doi: 10.35680/2372-0247.1924.
- 22. Braun V, Clarke V. To saturate or not to saturate? Questioning data saturation as a useful concept for thematic analysis and sample-size rationales. Qualitative Research in Sport, Exercise and Health. 2021;13:201–16. doi: 10.1080/2159676X.2019.1704846.
- 23. Lenzner T, Hadler P, Neuert C. An experimental test of the effectiveness of cognitive interviewing in pretesting questionnaires. Qual Quant. 2023;57:3199–217. doi: 10.1007/s11135-022-01489-4.
- 24. Gale V, Powell PA, Carlton J. Young children (6-7 years) can meaningfully participate in cognitive interviews assessing comprehensibility in health-related quality of life domains: a qualitative study. Qual Life Res. 2025;34:1633–46. doi: 10.1007/s11136-025-03940-z.
- 25. Gale NK, Heath G, Cameron E, et al. Using the framework method for the analysis of qualitative data in multi-disciplinary health research. BMC Med Res Methodol. 2013;13:117. doi: 10.1186/1471-2288-13-117.
- 26. Jiang S, Wang C, Weiss DJ. Sample Size Requirements for Estimation of Item Parameters in the Multidimensional Graded Response Model. Front Psychol. 2016;7:109. doi: 10.3389/fpsyg.2016.00109.
- 27. Comrey AL, Lee HB. A first course in factor analysis. 2nd ed. Psychology Press; 1992.
- 28. Hair Jr JF, Black WC, Babin BJ. Multivariate data analysis. 2010. p. 785.
- 29. Prudon P. Confirmatory Factor Analysis as a Tool in Research Using Questionnaires: A Critique. Comprehensive Psychology. 2015;4:03. doi: 10.2466/03.CP.4.10.
- 30. Mokkink LB, Terwee CB, Knol DL, et al. The COSMIN checklist for evaluating the methodological quality of studies on measurement properties: a clarification of its content. BMC Med Res Methodol. 2010;10:22. doi: 10.1186/1471-2288-10-22.
- 31. Giraudeau B, Mary JY. Planning a reproducibility study: how many subjects and how many replicates per subject for an expected width of the 95 per cent confidence interval of the intraclass correlation coefficient. Stat Med. 2001;20:3205–14. doi: 10.1002/sim.935.
- 32. Altman DG. Practical statistics for medical research. Chapman and Hall/CRC; 1990.
- 33. Mulaik SA, Millsap RE. Doing the Four-Step Right. Structural Equation Modeling: A Multidisciplinary Journal. 2000;7:36–73. doi: 10.1207/S15328007SEM0701_02.
- 34. Maydeu-Olivares A, D’Zurilla TJ. The Factor Structure of the Problem Solving Inventory. Eur J Psychol Assess. 1997;13:206–15. doi: 10.1027/1015-5759.13.3.206.
- 35. Tinsley HEA, Brown SD. Handbook of applied multivariate statistics and mathematical modeling. 1st ed. San Diego: Academic Press; 2000.
- 36. Rhee E, Uleman JS, Lee HK. Variations in collectivism and individualism by ingroup and culture: Confirmatory factor analysis. J Pers Soc Psychol. 1996;71:1037–54. doi: 10.1037/0022-3514.71.5.1037.
- 37. Kenny DA, Kaniskan B, McCoach DB. The Performance of RMSEA in Models With Small Degrees of Freedom. Sociological Methods & Research. 2015;44:486–507. doi: 10.1177/0049124114543236.
- 38. Skolarus TA, Lehmann T, Tabak RG, et al. Assessing citation networks for dissemination and implementation research frameworks. Implement Sci. 2017;12:97. doi: 10.1186/s13012-017-0628-2.