EFSA Journal. 2023 Jan 20;21(1):e07704. doi: 10.2903/j.efsa.2023.7704
Question | Rating | Explanation for expert judgement

1. Was administered dose or exposure level adequately randomised?

Key question

+ There is direct or indirect evidence that subjects were allocated to any study group (or intervention sequence for cross‐over studies) including controls using a method with a random component (including authors state that allocation was random, without description of the method used). Acceptable methods of randomisation include: referring to a random number table, using a computer random number generator, coin tossing, shuffling cards or envelopes, throwing dice, or drawing of lots. Restricted randomisation (e.g., blocked randomisation) to ensure particular allocation ratios will be considered low risk of bias. Similarly, stratified randomisation and minimisation approaches that attempt to minimise imbalance between groups on important prognostic factors (e.g., body weight) will be considered acceptable

+ There is indirect evidence that subjects were allocated to study groups (or intervention sequence for cross‐over studies) using a method with a random component (i.e., authors state that allocation was random, without description of the method used)

OR

it is deemed that allocation without a clearly random component during the study would not appreciably bias results (e.g. cross‐over studies with no or unlikely carry‐over effects)

NR: There is insufficient information provided about how subjects (or clusters) were allocated to study groups

− There is indirect evidence that subjects were allocated to study groups using a method with a non‐random component

NOTE: Non‐random allocation methods may be systematic but have the potential to allow participants or researchers to anticipate the allocation to study groups. Such “quasi‐random” methods include alternation, assignment based on date of birth, case record number, or date of presentation to study.

NR: There is insufficient information provided about how subjects were allocated to study groups (or intervention sequence for cross‐over studies)

− There is direct evidence that subjects were allocated to study groups (or intervention sequence for cross‐over studies) using a non‐random method, including judgement of the clinician, preference of the participant, the results of a laboratory test or a series of tests, or availability of the intervention.
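The acceptable methods listed above (e.g. a computer random number generator with blocked randomisation to ensure particular allocation ratios) can be sketched as follows. This is a hypothetical illustration, not part of the appraisal tool; the function name and group labels are assumptions.

```python
import random

def block_randomise(n_subjects, groups=("intervention", "control"), seed=None):
    """Blocked randomisation: within each block, the group sequence is a
    random permutation, so the allocation ratio stays balanced while the
    assignment of any individual subject remains unpredictable."""
    rng = random.Random(seed)  # computer random number generator
    sequence = []
    while len(sequence) < n_subjects:
        block = list(groups)
        rng.shuffle(block)  # the random component, applied per block
        sequence.extend(block)
    return sequence[:n_subjects]
```

With two groups and an even number of subjects, this guarantees a 1:1 allocation ratio, which is why the rubric treats restricted (blocked) randomisation as low risk of bias.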
2. Was allocation to study groups adequately concealed?

+ There is direct evidence that at the time of recruitment the research personnel and subjects did not know what study group subjects were allocated to, and it is unlikely that they could have broken the blinding of allocation until after assignment was complete and irrevocable. Acceptable methods used to ensure allocation concealment include central allocation (including telephone, web‐based and pharmacy‐controlled randomisation); sequentially numbered drug containers of identical appearance; sequentially numbered, opaque, sealed envelopes; or equivalent methods

+ There is indirect evidence that the research personnel and subjects did not know what study group subjects were allocated to and it is unlikely that they could have broken the blinding of allocation until after recruitment was complete and irrevocable

OR

It is deemed that lack of adequate allocation concealment would not appreciably bias results (e.g. cross‐over studies where all subjects receive all the study treatments)

NR: There is insufficient information provided about allocation to study groups.

− There is indirect evidence that at the time of recruitment it was possible for the research personnel and subjects to know what study group subjects were allocated to (or treatment sequence for cross‐over studies), or it is likely that they could have broken the blinding of allocation before recruitment was complete and irrevocable

NOTE: Inadequate methods include using an open random allocation schedule (e.g., a list of random numbers); assignment envelopes used without appropriate safeguards (e.g., if envelopes were unsealed or nonopaque or not sequentially numbered); alternation or rotation; date of birth; case record number; or any other explicitly unconcealed procedure.

− There is direct evidence that at the time of recruitment it was possible for the research personnel and subjects to know what study group subjects were allocated to, or it is likely that they could have broken the blinding of allocation before recruitment was complete and irrevocable
3. Were the research personnel and human subjects blinded to the study group during the study?

+ There is direct evidence that the subjects and research personnel were adequately blinded to study group, AND it is unlikely that they could have broken the blinding during the study. Methods used to ensure blinding include central allocation; sequentially numbered drug containers of identical appearance; sequentially numbered, opaque, sealed envelopes; or equivalent methods

+ There is indirect evidence that the research personnel and subjects were adequately blinded to study group, AND it is unlikely that they could have broken the blinding during the study

OR

it is deemed that lack of adequate blinding during the study would not appreciably bias results (this would depend on the outcome).

NR: There is insufficient information provided about blinding to study group during the study

− There is indirect evidence that it was possible for research personnel or subjects to infer the study group

NOTE: Inadequate methods include using an open random allocation schedule (e.g., a list of random numbers), assignment envelopes used without appropriate safeguards, alternation or rotation; date of birth; case record number; or any other explicitly unconcealed procedure

− There is direct evidence for lack of adequate blinding of the study group, including no blinding or incomplete blinding of research personnel and subjects. For some treatments, such as behavioural interventions, allocation to study groups cannot be concealed
4. Were outcome data complete without attrition or exclusion from analysis?

+ There is direct evidence that there was no loss of subjects during the study and outcome data were complete,

OR

loss of subjects (i.e., incomplete outcome data) was adequately addressed (a) and reasons were documented when human subjects were removed from a study or analyses. Review authors should be confident that the participants included in the analysis are exactly those who were randomised into the trial. Acceptable handling of subject attrition includes: very little missing outcome data (e.g. < 10% in each group); reasons for missing subjects unlikely to be related to outcome; missing outcome data balanced in numbers across study groups, with similar reasons for missing data across groups,

OR

analyses (such as intention‐to‐treat analysis) in which missing data have been imputed using appropriate methods (ensuring that the characteristics of subjects lost to follow‐up or with unavailable records are described in an identical way and are not significantly different from those of the study participants).

NOTE: Participants randomised but subsequently found not to be eligible need not always be considered as having missing outcome data.

+ There is indirect evidence that loss of subjects (i.e., incomplete outcome data) was adequately addressed and reasons were documented when human subjects were removed from a study,

OR

it is deemed that the proportion lost to follow‐up would not appreciably bias results (e.g. < 20% in each group). This would include reports of no statistically significant differences between the characteristics of subjects lost to follow‐up or with unavailable records and those of the study participants. Generally, the higher the ratio of participants with missing data to participants with events, the greater the potential for bias. For studies with a long duration of follow‐up, some withdrawals for such reasons are inevitable.

NR: There is insufficient information provided about numbers of subjects lost to follow‐up

− There is indirect evidence that loss of subjects (i.e., incomplete outcome data) was unacceptably large (e.g. > 20% in each group) and not adequately addressed.

− There is direct evidence that loss of subjects (i.e., incomplete outcome data) was unacceptably large and not adequately addressed. Unacceptable handling of subject attrition includes: reason for missing outcome data likely to be related to true outcome, with either imbalance in numbers or reasons for missing data across study groups; or potentially inappropriate application of imputation.
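The illustrative numeric thresholds in this question (< 10% attrition per group for direct evidence of low RoB, < 20% as tolerable, larger losses as high RoB) can be sketched as a small helper. The function name, group mapping, and return labels are assumptions for illustration; the rubric also requires that reasons for missingness be documented and balanced across groups, which a count alone cannot establish.

```python
def attrition_rating(randomised, analysed):
    """Classify the worst per-group attrition against the rubric's
    illustrative thresholds. `randomised` and `analysed` map each group
    name to its subject count at randomisation and at analysis."""
    worst = max(
        (randomised[g] - analysed[g]) / randomised[g] for g in randomised
    )
    if worst < 0.10:
        return "+"             # very little missing outcome data
    if worst < 0.20:
        return "+ (indirect)"  # deemed unlikely to appreciably bias results
    return "-"                 # unacceptably large loss, unless adequately addressed
```

For example, losing 5 of 100 subjects in each group (5%) falls in the first band, while losing 15 of 100 in the worst group (15%) falls in the second.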

5. Can we be confident in the exposure characterisation?

Key question

+ There is direct evidence that the exposure was adequately assessed, i.e. the sugar content of the intervention (and control) foods and/or beverages was measured during the study by e.g. food analysis AND there is direct evidence that the exposure was consistently administered (i.e., with the same method and time‐frame) across treatment groups (e.g., administration of study foods or diets was supervised; compliance was assessed)

+ There is indirect evidence that the exposure was adequately assessed, i.e. the sugar content of the intervention (and control) foods and/or beverages was not measured but rather e.g. calculated from food composition tables, provided by the food manufacturer, or calculated from the ingredients list;

AND there is indirect evidence that exposure was consistently administered (i.e., with the same method and time‐frame) across treatment groups (e.g. administration of study foods or diets was not supervised but study products were provided by the investigators and compliance was assessed using food records, return of unconsumed foods, or a similar method).

NR: There is insufficient information provided about the validity of the exposure assessment method

− There is indirect evidence that the exposure was assessed using poorly validated methods (e.g. study products or diets were not provided by the investigators and compliance was not checked)

− There is direct evidence that the exposure was assessed using poorly validated methods

OR

There is direct evidence of poor compliance with the intervention

6. Can we be confident in the outcome assessment?

Key question

+

There is direct evidence that the outcome was assessed using well‐established methods (e.g., the “gold standard”). Such methods will depend on the outcome, but may include: objectively measured with diagnostic methods, measured by trained interviewers, obtained from registries

AND subjects had been followed for the same length of time in all study groups,

AND there is direct evidence that the outcome assessors (including study subjects, if outcomes were self‐reported) were adequately blinded to the study group, and it is unlikely that they could have broken the blinding prior to reporting outcomes.

+ There is indirect evidence that the outcome was assessed using acceptable methods (i.e., deemed valid and reliable but not the gold standard). Such methods will depend on the outcome, but may include: proxy reporting of outcomes, mining data collected for other purposes

AND

subjects had been followed for the same length of time on average in all study groups (or, if not, this has been accounted for using appropriate statistical approaches), OR it is deemed that the outcome assessment methods used would not appreciably bias results (e.g. when there is no information about the method but standard measurements are most likely, e.g. blood lipids, body weight in a research setting),

AND

there is indirect evidence that the outcome assessors (including study subjects, if outcomes were self‐reported) were adequately blinded to the study group, and it is unlikely that they could have broken the blinding prior to reporting outcomes, OR it is deemed that lack of adequate blinding of outcome assessors would not appreciably bias results, which is more likely to apply to objective outcome measures.

NR: There is insufficient information provided about blinding of outcome assessors OR there is no information about the outcome assessment method

− There is indirect evidence that the outcome assessment method is an insensitive instrument (e.g., a questionnaire used to assess outcomes with no information on validation),

OR

the length of follow up differed by study group,

OR

there is indirect evidence that it was possible for outcome assessors (including study subjects if outcomes were self‐reported) to infer the study group prior to reporting outcomes AND it is deemed that the outcome assessment methods used could appreciably bias results

− There is direct evidence that the outcome assessment method is an insensitive instrument,

OR

the length of follow up differed by study group,

OR

there is direct evidence for lack of adequate blinding of outcome assessors (including study subjects if outcomes were self‐reported), including no blinding or incomplete blinding AND it is deemed that the outcome assessment method could have biased the results

7. Were there no other potential threats to internal validity (e.g. statistical methods were appropriate and researchers adhered to the study protocol)?

+

There is direct evidence that variables, other than the exposure and outcome, did not differ between groups during the course of the intervention in a way that could bias results/For cross‐over trials: there is direct evidence of no carry‐over effects,

AND

there is no evidence of differences in baseline characteristics between groups.

+ There is indirect evidence that variables, other than the exposure and outcome, did not differ between groups during the course of the intervention in a way that could bias results/For cross‐over trials: there is indirect evidence of no carry‐over effects (e.g. presence of a sufficient washout period) AND there is no evidence of differences in baseline characteristics between groups,

OR

there is evidence that reported variables differed between groups at baseline/For cross‐over trials: no washout period AND it is deemed that these differences (or absence of washout for cross‐over trials) would not appreciably bias results (no concern or adequately addressed by analysis)

NR: There is no information about baseline characteristics by group (for parallel studies)

There is no information on whether variables, other than the exposure and outcome, that could bias the results differed between groups during the course of the intervention/For cross‐over trials: no washout period

AND

there is indirect evidence that variables, other than the exposure and outcome, may have differed between groups during the course of the intervention in a way that could bias results/ For cross‐over trials: indirect evidence of carry‐over effects

− There is evidence that variables, other than the exposure and outcome, differed between groups during the intervention/For cross‐over trials: direct evidence of carry‐over effects

AND It is deemed that these differences appreciably biased results (there is concern e.g. not adequately addressed by analysis)

OR

there is evidence that reported variables differed between groups at baseline

AND it is deemed that these differences appreciably biased results (e.g. not adequately addressed by analysis)

+: Low RoB; NR: not reported; −: high RoB.

(a) This will depend on the context in which the assessment is performed. For safety assessments, the interest lies in particular in the population group who followed the protocol and consumed the intervention accordingly (per‐protocol (PP) population), while for efficacy assessments the intention‐to‐treat population is of greater importance. This item needs to be judged on a case‐by‐case basis, as other types of information supplied in the publication could also be used in the assessment (e.g. considerations on the type of missingness of the data or sensitivity analyses presented in the publication).