Table 2.
Prioritization step | Measurement domains | Measurement subdomains | Specific prioritization criteria and their application | Scoring mechanism to prioritize QMs |
---|---|---|---|---|
Round 1: Using a set of criteria applied to all measures | All | All | The following criteria were applied to all QMs from all MDs and their MSDs • Relevance: Is the QM specific to the QS of interest? • Actionability: Can the data collected for the measure guide clear and rapid QI actions and changes at the relevant health system level? • Feasibility: Are the data needed for the QM likely to be available and accessible, or can they be obtained without substantial resource investments (time, human and financial resources), either now or in the future? • Validity: Does the measure truly measure what it purports to measure (face validity)? • Reliability*: Are the results of the measure reproducible irrespective of who makes the measurement, from which data source, or when it is made? • Clarity/specificity*: Is the measure described in clear and unambiguous terms? | • Each of these criteria was scored on a 5-point scale with a minimum score of 1 and a maximum score of 5. Thus, the minimum total score possible for each QM at this stage was 4 (minimum score of 1 × 4 criteria = 4) and the maximum possible was 20 (maximum score of 5 × 4 criteria = 20). A predefined cut-off score of 16 was used, which was the median of scores across all relevant QMs; a QM was considered for the next round of prioritization only if it had a score ≥ 16
Round 2: Using additional criteria for specific MDs and MSDs to select catalogue QMs | MD-1 | MSD-1 | This criterion was applied only to QMs under MSD-1 • Importance (A): How important is the input in delivering high-impact, evidence-based paediatric care interventions and achieving good care outcomes? | • The importance criterion allowed for prioritization of various input measures for high-impact clinical interventions. However, different types of input measures are not equally important for the provision of evidence-based care. For example, availability of an antibiotic for a child with severe pneumonia may be more important for the care outcome than availability of an operational guideline or job aid. To minimize subjectivity, different weights were applied to different types of inputs based on their relative importance in the provision of evidence-based care • The minimum score per QM was 1 and the maximum was 5. The cut-off score was set at 4, which was the median of all scores across all relevant QMs. A QM was prioritized further if it had a score ≥ 4
 | MD-1 | MSD-2 | These criteria were applied only to QMs under MSD-2 • Importance (B): How much does the clinical condition/content area measured by the QM contribute to mortality or disease burden in specific settings? • Strength of evidence base: How strong is the evidence linking the clinical process to the care outcome? • Coverage: How many children receive, or could receive, the clinical intervention that the QM measures? | • The minimum score per QM was 3 and the maximum was 15 (maximum score of 5 × 3 criteria = 15). The cut-off score was set at 13, which was the median of all scores across all relevant QMs. Thus, a QM was prioritized further if it had a score ≥ 13
 | MD-1 | MSD-3 | This criterion was applied only to QMs under MSD-3 • Importance (C): Because the criteria "coverage" and "impact" apply only to clinical interventions and are not relevant to care outcomes, importance was the only criterion used to prioritize clinical outcomes | • The minimum score per QM was 1 and the maximum was 5. The cut-off score was set at 4, which was the median of all scores across relevant QMs. Thus, a QM was prioritized further if it had a score ≥ 4
 | MD-2 | MD-2 | This criterion was applied to all QMs under MD-2 • Importance (D): How important is the specific cross-cutting facility-level input to improving care processes or health or family-centered outcomes? | • The minimum score per QM was 1 and the maximum was 5. The cut-off score was set at 4, which was the median of all scores across relevant QMs. Thus, a QM was prioritized further if it had a score ≥ 4
 | MD-3 | MD-3 | This criterion was applied to all QMs under MD-3 • Importance (E): Does the corresponding standard support the following key principles: 1) the willingness and ability of patients and families to participate in care; 2) measurement of patient-reported outcomes; 3) the principle of no harm; and 4) patients' rights? | • Weights (scaled to 100%) were used to prioritize QMs around child- and family-centered practices/experience of care: measures the ability of patients and families to participate in care (30%); measures a patient-reported outcome (20%); is built upon the principle of no harm (30%); and addresses patients' rights (20%) • The summary weighted score for each measure was then calculated. The minimum score for this domain was 1 and the maximum was 5. The cut-off score was set at 2.5, which was the median of all scores across relevant QMs. Thus, a QM was prioritized further if it had a score ≥ 2.5
Round 3: Using additional criteria to select core indicators | All | All | These criteria were applied to all selected catalogue QMs • Usefulness: Does the measure focus on performance of the system at the population level and, once aggregated, is it useful to different stakeholders to guide decisions and changes, especially at national and global levels? • Impact: Is the measure sensitive to QoC interventions, assessing the highest impact of QoC intervention(s) on national and global child health priorities (25)? • Comparability: Is the measure aligned to the greatest extent possible with standardized and validated global childcare indicators/monitoring frameworks and/or comparable across countries and regions (26)? | • Each of these criteria was scored on a 5-point scale with a minimum score of 1 and a maximum score of 5, giving a minimum total score of 3 and a maximum of 15 per indicator. The cut-off score was set at 13, which was the median of all scores across relevant indicators. Thus, a catalogue indicator was prioritized as core if it had a score ≥ 13
* In addition to the 4 criteria applied to all MDs and their MSDs, we scored each measure against two additional criteria (reliability and clarity). All measures were scored and color-coded against reliability and clarity using a 5-point scale with a minimum score of 1 and a maximum score of 5 (see Additional file 2). To avoid losing important measures that were not fully defined at the initial stage, the scoring results for these two criteria were not included in the prioritization algorithm. Instead, they informed the indicator development process, further refining the definitions and data collection methods of the selected indicators
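The scoring arithmetic summarized in Table 2 (summing 1–5 ratings across the criteria of a given round, retaining measures at or above a median-based cut-off, and, for MD-3, computing a weighted summary score) can be expressed compactly. The following is a minimal illustrative sketch only, not the tool used in the study; the measure names, example scores, and function names are hypothetical.

```python
# Illustrative sketch of the Table 2 scoring logic; all example data are hypothetical.
from statistics import median

def total_score(criterion_scores):
    """Sum of 1-5 ratings across the criteria applied in a given round."""
    return sum(criterion_scores)

def apply_cut_off(measure_scores):
    """Keep only measures whose total meets the median-based cut-off."""
    totals = {qm: total_score(s) for qm, s in measure_scores.items()}
    cut_off = median(totals.values())
    return {qm: t for qm, t in totals.items() if t >= cut_off}

# Round 2 (MD-3) weighting, scaled to 100%: participation in care,
# patient-reported outcome, no harm, patients' rights.
MD3_WEIGHTS = (0.30, 0.20, 0.30, 0.20)

def weighted_score(criterion_scores):
    """Summary weighted score, staying on the 1-5 scale."""
    return sum(w * s for w, s in zip(MD3_WEIGHTS, criterion_scores))

if __name__ == "__main__":
    # Round 1 example: four criteria (relevance, actionability, feasibility, validity)
    round1 = {
        "QM-A": (5, 4, 4, 5),  # total 18
        "QM-B": (3, 3, 4, 4),  # total 14
        "QM-C": (4, 4, 4, 4),  # total 16 = median, so retained
    }
    print(apply_cut_off(round1))          # {'QM-A': 18, 'QM-C': 16}
    print(weighted_score((5, 3, 4, 2)))   # 0.3*5 + 0.2*3 + 0.3*4 + 0.2*2 = 3.7
```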