Front Pediatr. 2022 Apr 28;10:877345. doi: 10.3389/fped.2022.877345

Table 1.

Studies included in the review and their methodological quality assessment according to the American Academy for Cerebral Palsy and Developmental Medicine (AACPDM)^a.

Study | Research design | Level of evidence^b | AACPDM conduct questions^c (1^d 2^d 3 4^d 5 6^d 7^d) | Quality score | Quality summary
Ulrich et al. (32) | RCT | II | Yes No Yes No Yes No Yes | 4 | Moderate
Ulrich et al. (33) | RT | II | Yes Yes Yes No Yes Yes Yes | 6 | Strong
Angulo-Barrosso et al. (34) | RCT | II | Yes Yes Yes No Yes Yes Yes | 6 | Strong
Campbell et al. (30) | CCT | II | Yes No No Yes No Yes Yes | 4 | Moderate
Lee and Samson (35) | Cohort study | V | Yes No Yes No No Yes No | 3 | Weak
Schlittler et al. (36) | CCT | II | Yes No No No Yes No No | 2 | Weak
Kolobe and Fagg (31) | Cohort study | V | Yes No Yes Yes Yes No No | 4 | Moderate
Wentz (37) | CCT | II | Yes Yes Yes No Yes Yes No | 5 | Moderate
Cameron et al. (38) | RCT | II | Yes No No Yes Yes Yes Yes | 5 | Strong
Ustad et al. (39) | Cohort study | V | Yes No No Yes No No No | 2 | Weak

Articles are ordered according to the type of training.

^a Criteria for methodological quality assessment according to the AACPDM (revision 1.2) (28), with adjustments for the current study in italics.

^b Level of evidence from Sackett et al. (21).

^c AACPDM conduct questions:

1: Were inclusion and exclusion criteria of the study population well-described and followed? Both inclusion and exclusion criteria need to be met to score “yes”.

2: Were the intervention and comparison conditions well-described, and was there adherence to the intervention assignment? Both parts of the question need to be met to score “yes.” Adherence to intervention implies that adherence was assessed in a systematic way (questionnaire, video) and that >65% of the planned intervention was achieved. The cut-off of 65% adherence was an arbitrary one based on common sense; it meant that about two-thirds of the planned intervention had been achieved.

3: Were the measures used clearly described, valid and reliable for measuring the outcomes of interest?

4: Was the outcome assessor unaware of the intervention status of the participants (i.e., was it explicitly described that the assessors were masked)?

5: Did the authors conduct and report appropriate statistical evaluation: that is, did they perform proper statistics and did they include a power calculation (the latter did not need to result in the demonstration of group sizes allowing for adequate power)? Both parts of the question need to be met to score “yes”.

6: Were dropout/loss to follow-up after start of the intervention reported and <20%? For two-group designs, was dropout balanced? Note that dropouts due to death are excluded from the dropout calculation.

7: Considering the potential within the study design, were appropriate methods for controlling confounding variables and limiting potential biases used? Studies with groups of n < 10 at the end of the intervention, either because they started with small groups or because attrition resulted in groups with fewer than 10 participants, are assigned “no,” as the small number precludes multivariable statistics to control for confounders.

Methodological quality is judged, according to the AACPDM criteria, as strong (“yes” score on ≥6 questions), moderate (score of 4 or 5), or weak (score ≤3); a minimal scoring sketch illustrating this rule follows these notes.

^d Criteria that address the risk of bias within studies. RCT, randomized controlled trial.
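Footnote c defines a simple arithmetic rule: count the “yes” answers to the seven conduct questions and classify methodological quality as strong (≥6), moderate (4 or 5), or weak (≤3). The following minimal Python sketch is not taken from the article; the helper names quality_score and quality_summary are assumptions introduced here, and the example answers are transcribed from the Ulrich et al. (33) row of Table 1.

```python
# Illustrative sketch (not from the source article): tally the seven AACPDM
# conduct-question answers and classify methodological quality as described
# in footnote c (strong: >=6 "yes"; moderate: 4 or 5; weak: <=3).

def quality_score(answers):
    """Count the number of 'yes' answers to the seven conduct questions."""
    if len(answers) != 7:
        raise ValueError("Expected answers to all seven AACPDM conduct questions")
    return sum(1 for a in answers if a.strip().lower() == "yes")

def quality_summary(score):
    """Map a 0-7 quality score to the AACPDM quality summary."""
    if score >= 6:
        return "Strong"
    if score >= 4:
        return "Moderate"
    return "Weak"

if __name__ == "__main__":
    # Example row from Table 1: Ulrich et al. (33), answers to questions 1-7.
    ulrich_et_al_33 = ["Yes", "Yes", "Yes", "No", "Yes", "Yes", "Yes"]
    score = quality_score(ulrich_et_al_33)
    print(score, quality_summary(score))  # -> 6 Strong
```

Run as written, the example prints 6 Strong, matching the score and summary reported for that row of Table 1.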