Table C1.
| | Description | Question | Coding |
|---|---|---|---|
| | ID | Unique study identification # | Study, for example, AQC001 |
| | Paper | Surname/year of first author of paper for effect size data extraction | Open answer |
| Research methods: study design and risk of bias | Design type | What type of study design is used? | 1 = RCT (random assignment to households/individuals) or quasi-RCT; 2 = Cluster-RCT (quasi-RCT); 3 = Natural experiment: randomised or as-if randomised; 4 = Natural experiment: regression discontinuity (RD); 5 = CBA (nonrandomised assignment with treatment and contemporaneous comparison group, baseline and endline data collection), individual repeated measurement; 6 = CBA pseudo panel (repeated measurement for groups but different individuals); 7 = Interrupted time series (with or without contemporaneous control group); 8 = Panel data, but no baseline (pretest); 9 = Comparison group with endline data only |
| | Methods used for analysis | Which methods are used to control for selection bias and confounding? | 1 = Statistical matching (PSM, CEM, covariate matching); 2 = Difference-in-differences (DID) estimation methods; 3 = IV regression (two-stage least squares or bivariate probit); 4 = Heckman selection model; 5 = Fixed effects regression; 6 = Covariate-adjusted estimation; 7 = Propensity-weighted regression; 8 = Comparison of means; 9 = Other |
| | Design and analysis method description | Briefly describe the study design and analysis method undertaken by the authors | Open answer |
| | Unit of analysis | Is the unit of analysis in cluster allocation addressed in the standard error calculation (RCT and NRS)? | 1 = Yes; 2 = No; 3 = Not reported/unclear; 4 = Not applicable |
| | Method used to address differences between UoA and unit of data collection | Briefly describe methods used to adjust standard errors to account for correlation of observations within clusters (e.g., cluster-robust standard errors reported) | Open answer |
| | Type of comparison group | Indicate type of comparison group | 1 = No intervention (service delivery as usual); 2 = Other PITA intervention; 3 = Pipeline (wait-list) control (still service delivery as usual) |
| | Assignment mechanism | (1) Mechanism of assignment: was the allocation or identification mechanism random or as good as random? | 1 = Yes; 2 = Probably yes; 3 = Probably no; 4 = No; 8 = No information/unclear |
| | Assignment justification | Justification for coding decision (include a brief summary of the justification for the rating, mentioning your response to all subquestions; cite relevant pages) | Open answer |
| | Confounding | Group equivalence: was the method of analysis executed adequately to ensure comparability of groups throughout the study and prevent confounding? | 1 = Yes; 2 = Probably yes; 3 = Probably no; 4 = No; 8 = No information/unclear |
| | Confounding justification | Justification for coding decision (include a brief summary of the justification for the rating, mentioning your response to all subquestions; cite relevant pages) | Open answer |
| | Selection bias | Was any differential selection into or out of the study (attrition bias) adequately resolved? | 1 = Yes; 2 = Probably yes; 3 = Probably no; 4 = No; 8 = No information/unclear |
| | Selection bias justification | Justification for coding decision (include a brief summary of the justification for the rating, mentioning your response to all subquestions; cite relevant pages) | Open answer |
| | Spill-overs, cross-overs and contamination | (2) Spill-overs, cross-overs and contamination: was the study adequately protected against spill-overs, cross-overs and contamination? | 1 = Yes; 2 = Probably yes; 3 = Probably no; 4 = No; 8 = No information/unclear |
| | Spill-overs justification | Justification for coding decision (include a brief summary of the justification for the rating, mentioning your response to all subquestions; cite relevant pages) | Open answer |
| | Motivation bias | Was the process of being observed free from motivation bias (e.g., Hawthorne effects)? | 1 = Yes; 2 = Probably yes; 3 = Probably no; 4 = No; 8 = No information/unclear |
| | Motivation justification | Justification for coding decision (include a brief summary of the justification for the rating, mentioning your response to all subquestions; cite relevant pages) | Open answer |
| | Outcome reporting | (3) Outcome reporting: was the study free from selective outcome reporting? | 1 = Yes; 2 = Probably yes; 3 = Probably no; 4 = No; 8 = No information/unclear |
| | Outcome reporting justification | Justification for coding decision (include a brief summary of the justification for the rating, mentioning your response to all subquestions; cite relevant pages) | Open answer |
| | Analysis reporting | (4) Analysis reporting: was the study free from selective analysis reporting? | 1 = Yes; 2 = Probably yes; 3 = Probably no; 4 = No; 8 = No information/unclear |
| | Analysis reporting justification | Justification for coding decision (include a brief summary of the justification for the rating, mentioning your response to all subquestions; cite relevant pages) | Open answer |
| | Performance bias | (5) Performance bias: was the process of being observed free from motivation bias? | 1 = Yes; 2 = Probably yes; 3 = Probably no; 4 = No; 8 = No information/unclear |
| | Performance bias justification | Justification for coding decision (include a brief summary of the justification for the rating, mentioning your response to all subquestions; cite relevant pages) | Open answer |
| | Other bias | (6) Other risks of bias: is the study free from other sources of bias, including around measurement of the intervention? | 1 = Yes; 2 = Probably yes; 3 = Probably no; 4 = No; 8 = No information/unclear |
| | Other bias justification | Justification for coding decision (include a brief summary of the justification for the rating, mentioning your response to all subquestions; cite relevant pages) | Open answer |
| | Blinded participants | Blinding of participants? | 1 = Yes; 2 = No; 9 = N/A |
| | Blinded observers | Blinding of outcome assessors? | 1 = Yes; 2 = No; 9 = N/A |
| | Blinded analysts | Blinding of data analysts? | 1 = Yes; 2 = No; 9 = N/A |
| | Method used to blind | Describe method(s) used to blind | Open answer (including a description of the method of placebo control) |
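
For teams that store extracted data electronically, the shared coding scales above translate naturally into a machine-readable codebook. The sketch below is a minimal, hypothetical Python illustration of how a single coded study record could be stored; the class, field names, and example values are assumptions made here for illustration only and are not part of the published coding tool.

```python
# Minimal, hypothetical sketch (not part of the coding tool): all class, field and
# example values below are assumptions chosen purely to illustrate the codes in Table C1.
from dataclasses import dataclass
from enum import IntEnum
from typing import List, Optional


class RoBRating(IntEnum):
    """Shared rating scale used by most risk-of-bias items in Table C1."""
    YES = 1
    PROBABLY_YES = 2
    PROBABLY_NO = 3
    NO = 4
    NO_INFORMATION = 8  # no information/unclear


@dataclass
class StudyCoding:
    """One coded study; only a representative subset of Table C1 items is shown."""
    study_id: str                    # unique study identification #, e.g., "AQC001"
    paper: str                       # surname/year of first author
    design_type: int                 # 1-9, per the design-type codes in Table C1
    analysis_methods: List[int]      # 1-9, per the analysis-method codes (several may apply)
    assignment_mechanism: RoBRating  # item (1): mechanism of assignment
    assignment_justification: str
    spillovers: RoBRating            # item (2): spill-overs, cross-overs and contamination
    outcome_reporting: RoBRating     # item (3): selective outcome reporting
    blinded_participants: Optional[bool] = None  # 1 = Yes -> True, 2 = No -> False, 9 = N/A -> None


# Purely illustrative record with placeholder values, showing how the codes map onto fields.
example = StudyCoding(
    study_id="AQC001",
    paper="<first author surname>/<year>",
    design_type=1,                              # 1 = RCT
    analysis_methods=[2, 5],                    # 2 = DID; 5 = fixed effects regression
    assignment_mechanism=RoBRating.YES,
    assignment_justification="<brief summary of rating, citing relevant pages>",
    spillovers=RoBRating.PROBABLY_YES,
    outcome_reporting=RoBRating.NO_INFORMATION,
    blinded_participants=None,                  # 9 = N/A
)
```

Recording the 1/2/3/4/8 ratings as an enumeration rather than free text makes it straightforward to tabulate risk-of-bias judgements across studies at the synthesis stage.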