Published in final edited form as: Neurotoxicol Teratol. 2011 Feb 18;33(3):354–359. doi: 10.1016/j.ntt.2011.01.004

Table 2.

Proposed Guidance for Evaluating Neurodevelopmental Environmental Epidemiology Studies: Harmonization of Neurodevelopmental Environmental Epidemiology Studies (HONEES)

Rate each item: Yes / No / Unclear
Sampling & Participants
1.* Were participant selection criteria clearly described?
2.# Were there clearly defined groups of participants, similar in all important ways other than exposure to the chemical? (e.g., IQ scores, SES, age)
3.* Were the participants representative of the population to whom results would be generalized in practice?
4.* Were withdrawals from the study explained? (e.g., flow diagram, or other accounting)
Assessment Procedures
5.# Were exposures and clinical outcomes measured in the same ways in both groups? (i.e., was the assessment of outcomes either objective or blinded to exposure?)
6.# Was the follow-up of study participants sufficiently long for the outcome to occur?
7.# Did the follow-up have an acceptable level of participant retention to avoid bias?
8. Did the whole sample (or a random selection of the sample) receive testing using a “gold standard” assessment tool?
9.* Did participants receive the same assessments regardless of degree of exposure to the toxicant?
10.* Was the method for exposure measurement described in enough detail to permit replication or application to new cases?
11.* Is the assessment tool likely to correctly measure or classify the target construct? (adequate diagnostic sensitivity and accuracy may be particularly important for neurodevelopmental studies)
12.* Was the neurodevelopmental assessment procedure appropriate?
a.* Were methods described in sufficient detail to permit replication?
b.* Was the test appropriate for the ages at which it was used?
c.* Did the protocol avoid burden or fatigue effects that might invalidate results? (e.g., two hours might be the upper limit of length for neurodevelopmental assessments of young children; babies need to be in an optimal state before testing can begin)
d.* Are there normative data for comparison, outside of the study of environmental chemicals?
e.* Was the administration valid, or were there major departures from standardization? (e.g., poor training, use outside of age norms, or a translated version without supporting psychometric data. The study must demonstrate evidence of good training, administration, and scoring, and of checks for continued administration validity across the length of the study, e.g., by review of videotapes or other similar measures)
13.* Was exposure status determined without knowledge of the results of the neurodevelopmental assessment?
14.* Were the neurodevelopmental assessment results interpreted without knowledge of the exposure status? (e.g., blinding of assessment administration and scoring staff; computerized administration)
15.* Was the contextual and supporting information used to interpret the test similar in the research protocol versus standard clinical practice?
16.* Were uninterpretable/intermediate test results reported? (e.g., treatment of missing data, and reporting of borderline or midrange scores versus only extreme groups)
Interpretation and Causal Inference
17.# Do the results of the study fulfill some of the methodological tests for inferring causation?
a.# Is it clear that the exposure preceded the onset of the outcome?
b.# Is there a dose-response gradient?
c.# Is there any positive evidence from other “dechallenge-rechallenge” studies?
d.# Is the association consistent from study to study?
e.# Does the association make biological sense?
* Adapted from Whiting et al., 2003.

# Adapted from Straus et al., 2005.