EFSA Journal. 2019 Sep 4;17(9):e05778. doi: 10.2903/j.efsa.2019.5778
Question Rating Explanation for expert judgement

1. Did the study design or analysis account for important confounding?

Key question

++

There is direct evidence that appropriate adjustments or explicit considerations were made for the potential confounders in the final analyses through the study design (e.g. matching, restriction) and/or through the use of statistical models to reduce research‐specific bias including standardisation, adjustment in multivariate model, stratification, propensity scoring, or other methods that were appropriately justified

NOTE: Acceptable consideration of appropriate adjustment factors includes cases when the factor is not included in the final adjustment model because: i) there was evidence indicating that a factor did not need to be included as a confounder (e.g. the author conducted analyses that indicated it did not need to be included; study restricted to males only) OR ii) it is deemed that not considering the factor would not bias the result

AND

There is direct evidence that confounders were assessed using reliable methods

+

There is indirect evidence that appropriate adjustments were made, OR it is deemed that not considering or only considering a partial list of confounders in the final analyses would not substantially bias results

AND

There is evidence (direct or indirect) that confounders were assessed using reliable methods, OR it is deemed that the methods used would not appreciably bias results (i.e., the authors justified the validity of the methods from previously published research)

NR

There is insufficient information provided about the distribution of potential confounders (record ‘NR’ as basis for answer)

OR

There is insufficient information provided about the methods used to assess confounders (record ‘NR’ as basis for answer)

–

There is indirect evidence that the distribution of potential confounders differed between the groups and was not appropriately adjusted for in the final analyses

– –

There is direct evidence that the distribution of confounders differed between the groups, confounding occurred but was not adjusted for in the final analyses

OR

There is direct evidence that confounders were assessed using non‐reliable methods
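Question 1 accepts, among other approaches, adjustment for confounders in a multivariable statistical model. As a purely illustrative aid (not part of the appraisal protocol), the following Python sketch contrasts a crude and a covariate-adjusted regression on simulated data; every variable name and the simulated effect sizes are assumptions made for the example.

```python
# Hypothetical illustration of "adjustment in a multivariable model" (Question 1).
# Simulated data; variable names (sodium_mg, sbp, age, sex, bmi) are invented for the example.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 1000
age = rng.uniform(20, 70, n)
sex = rng.integers(0, 2, n)                      # 0 = female, 1 = male
bmi = rng.normal(26, 4, n)
# Sodium intake partly determined by the confounders (age, sex, BMI)
sodium_mg = 2000 + 15 * age + 300 * sex + 20 * bmi + rng.normal(0, 400, n)
# Systolic blood pressure depends on sodium AND on the same confounders
sbp = 90 + 0.005 * sodium_mg + 0.4 * age + 2 * sex + 0.8 * bmi + rng.normal(0, 8, n)

df = pd.DataFrame({"sodium_mg": sodium_mg, "sbp": sbp, "age": age, "sex": sex, "bmi": bmi})

crude = smf.ols("sbp ~ sodium_mg", data=df).fit()
adjusted = smf.ols("sbp ~ sodium_mg + age + sex + bmi", data=df).fit()

# The crude coefficient absorbs part of the confounders' effect;
# the adjusted coefficient is closer to the simulated true value (0.005 mmHg per mg).
print("crude:   ", round(crude.params["sodium_mg"], 4))
print("adjusted:", round(adjusted.params["sodium_mg"], 4))
```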

2. Were outcome data complete without attrition or exclusion from analysis?

++

There is direct evidence that loss of subjects (i.e. incomplete outcome data) was adequately addressed and reasons were documented when human subjects were lost or removed from a study

NOTE: Acceptable handling of subject attrition includes: very few missing outcome data AND reasons for missing subjects unlikely to be related to outcome (for survival data, censoring unlikely to be introducing bias) AND missing outcome data balanced in numbers across study groups, with similar reasons for missing data across groups (i.e. unlikely to be related to exposure)

+

There is indirect evidence that loss of subjects (i.e. incomplete outcome data) was adequately addressed and reasons were documented when human subjects were removed from a study

OR

It is deemed that the proportion lost to follow‐up would not appreciably bias results, due to the similarity between the characteristics of subjects lost to follow‐up and study participants. Generally, the higher the ratio of participants with missing data to participants with events, the greater potential there is for bias. For studies with a long duration of follow‐up, some withdrawals for such reasons are inevitable

–

There is indirect evidence that loss of subjects (i.e. incomplete outcome data) was unacceptably large and not adequately addressed

– –

There is direct evidence that loss of subjects (i.e. incomplete outcome data) was unacceptably large and not adequately addressed. Unacceptable handling of subject attrition includes: reason for missing outcome data likely to be related to true outcome, with either imbalance in numbers or reasons for missing data across study groups (i.e. likely to be related to the exposure)
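Question 2 hinges on whether missing outcome data are few, balanced across groups, and missing for comparable reasons. The sketch below is a minimal, hypothetical way of tabulating that information with pandas; the column names and data are invented for illustration and do not reflect any appraised study.

```python
# Hypothetical check of attrition balance (Question 2): proportion of missing
# outcome data and reasons for loss to follow-up, tabulated by exposure group.
import pandas as pd

# Invented example data: 'outcome' is NaN when the subject was lost to follow-up.
df = pd.DataFrame({
    "group": ["low_na"] * 5 + ["high_na"] * 5,
    "outcome": [140, None, 132, 128, 150, 135, 129, None, None, 142],
    "dropout_reason": [None, "moved away", None, None, None,
                       None, None, "adverse event", "moved away", None],
})

# Share of subjects with missing outcome data in each group
missing_by_group = df["outcome"].isna().groupby(df["group"]).mean()
print(missing_by_group)

# Reasons for missingness by group; a large imbalance in numbers or in reasons
# (e.g. losses related to the exposure) would point towards high risk of bias
print(pd.crosstab(df.loc[df["outcome"].isna(), "group"],
                  df.loc[df["outcome"].isna(), "dropout_reason"]))
```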

3. Can we be confident in the exposure characterisation?

Key question

++

Sodium intake was assessed through multiple 24‐h urinary collections (in a ‘reasonably short time‐frame’)

AND

There is direct evidence that quality assurance measures were in place for the collection of 24‐h urine (e.g. first and last void at the clinic; careful instructions of the participants) OR incomplete collections were excluded on the basis of any method (e.g. PABA, creatinine, self‐reported, volume…)

+

Sodium intake was assessed through a single 24‐h urinary collection

AND

There is direct evidence that quality assurance measures were in place for the collection of 24‐h urine (e.g. first and last void at the clinic; careful instructions of the participants) OR incomplete collections were excluded on the basis of any method (e.g. PABA, creatinine, self‐reported, volume…)

NR

There is insufficient information provided about the method of exposure assessment

–

There is indirect evidence that the exposure (including compliance with the treatment, if applicable) was assessed using poorly validated methods (e.g. FFQs, spot urine etc.)

OR

There is no evidence that quality assurance measures were in place for the collection of 24‐h urine (single or multiple) AND no measures were taken to exclude incomplete samples

– –

There is direct evidence that the exposure (including compliance with the treatment, if applicable) was assessed using poorly validated methods (e.g. FFQs, spot urine etc.)

OR

There is direct evidence for systematic error in the exposure characterisation (exposure misclassification)
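The ‘+’ and ‘++’ ratings for Question 3 require either quality assurance during the 24‐h urine collection or exclusion of incomplete collections (e.g. by PABA recovery, creatinine excretion or urine volume). The sketch below illustrates one hypothetical creatinine- and volume-based completeness screen; the cut-off values are common rules of thumb used here purely as assumptions, not the criteria of any specific appraised study.

```python
# Hypothetical screen for incomplete 24-h urine collections (Question 3).
# Thresholds below are illustrative assumptions only; real studies would justify
# their own completeness criteria (e.g. PABA recovery, creatinine, volume).
import pandas as pd

MIN_VOLUME_ML = 500            # assumed minimum plausible 24-h urine volume
CREATININE_RANGE = {           # assumed plausible creatinine excretion, mg/kg body weight/day
    "M": (20.0, 25.0),
    "F": (15.0, 20.0),
}

def collection_complete(row) -> bool:
    """Flag a 24-h collection as plausibly complete, based on volume and creatinine."""
    lo, hi = CREATININE_RANGE[row["sex"]]
    creat_per_kg = row["creatinine_mg"] / row["weight_kg"]
    return row["volume_ml"] >= MIN_VOLUME_ML and lo <= creat_per_kg <= hi

urine = pd.DataFrame({
    "sex": ["M", "F", "M"],
    "weight_kg": [80, 65, 90],
    "volume_ml": [1800, 420, 2100],          # 420 mL -> likely incomplete
    "creatinine_mg": [1800, 1200, 1300],     # 1300/90 ≈ 14.4 -> below assumed male range
    "sodium_mmol": [160, 140, 95],
})

urine["complete"] = urine.apply(collection_complete, axis=1)
analysed = urine[urine["complete"]]          # keep only plausibly complete collections
print(analysed[["sodium_mmol"]])
```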

4. Can we be confident in the outcome assessment?

Key question

++

There is direct evidence that the outcome was assessed using well‐established methods
+

There is indirect evidence that the outcome was assessed using acceptable methods (i.e. deemed valid and reliable but not the gold standard)

OR

It is deemed that the outcome assessment methods used would not appreciably bias results

NR

There is insufficient information provided about the method of measurement

–

There is indirect evidence that the outcome assessment method is an unacceptable method

– –

There is direct evidence that the outcome assessment method is an unacceptable method
5. Were all measured outcomes reported?

++

There is direct evidence that all of the study's measured outcomes (primary and secondary) outlined in the protocol, methods, abstract, and/or introduction (that are relevant for the evaluation) have been reported. This would include outcomes reported with sufficient detail to be included in meta‐analysis or fully tabulated during data extraction, and analyses had been planned in advance
+

There is indirect evidence that all of the study's measured outcomes (primary and secondary) outlined in the methods, abstract, and/or introduction (that are relevant for the evaluation) have been reported

OR

Analyses that had not been planned in advance (i.e. retrospective unplanned subgroup analyses) are clearly indicated as such, and it is deemed that the unplanned analyses were appropriate and selective reporting would not appreciably bias results (e.g. appropriate analyses of an unexpected effect). This would include outcomes reported with insufficient detail such as only reporting that results were statistically significant (or not)

NR

There is insufficient information provided about selective outcome reporting

–

There is indirect evidence that all of the study's measured outcomes (primary and secondary) outlined in the methods, abstract, and/or introduction (that are relevant for the evaluation) have not been reported

OR

There is indirect evidence that unplanned analyses were included that may appreciably bias results

– –

There is direct evidence that all of the study's measured outcomes (primary and secondary) outlined in the methods, abstract, and/or introduction (that are relevant for the evaluation) have not been reported. In addition to not reporting outcomes, this would include reporting outcomes based on composite score without individual outcome components, or outcomes reported using measurements, analysis methods or subsets of the data (e.g. subscales) that were not pre‐specified, or reporting outcomes not pre‐specified, or that unplanned analyses were included that would appreciably bias results
6. Were the statistical methods applied appropriate?

++

There is direct evidence that the statistical analysis was appropriate

+

There is indirect evidence that the statistical analysis was appropriate

NR

There is insufficient information provided about the statistical analysis

–

There is indirect evidence that the statistical analysis was not appropriate

– –

There is direct evidence that the statistical analysis was not appropriate

++: Definitely low RoB; +: Probably low RoB; NR: Not Reported; –: Probably high RoB; – –: Definitely high RoB.