
Table 1.

Proposed explanations of fadeout as a methodological artefact.

Artefactual Explanations

Explanation: Misleading effect size reporting
Description: Changes in standardized effect sizes over time can be misleading, particularly when variance on the underlying construct increases with age (see the effect-size sketch after this table).
Why likely insufficient: Fadeout has been observed across a variety of measures, scaling decisions, constructs, and age ranges, and effects sometimes reverse in sign.

Explanation: Publication bias
Description: If follow-up assessments are more likely to be conducted for evaluations showing larger end-of-treatment impacts, then the end-of-treatment impacts in studies with follow-up assessments would be positively selected on sampling error and thus upwardly biased (see the selection sketch after this table).
Why likely insufficient: Publication bias can make fadeout look more or less severe, and fadeout is observable in quasi-experimental designs for which all outcome waves were collected before analysis.

Part-Artefactual Explanations

Explanation: Over-alignment
Description: Initial over-alignment between treatments and outcome measures creates a spuriously large estimate of end-of-treatment impacts.
Why likely insufficient: Fadeout has been observed for combinations of broad treatments and measures (including measures other than cognitive tests) for which a strong degree of alignment is unlikely.

Explanation: Multidimensionality
Description: Interventions may meaningfully affect some psychological attribute, but longer-run impacts are misestimated because a different construct is measured at follow-up.
Why likely insufficient: Fadeout has been observed on outcomes other than psychological attributes that have straightforward interpretations, such as employment and earnings.
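The effect-size point in the first row can be made concrete with a small numerical sketch. The numbers below are hypothetical and are not drawn from the article; they simply show that a constant raw-score advantage shrinks in standardized (Cohen's d) units when the outcome's standard deviation grows with age.

```python
# A minimal sketch with hypothetical numbers: the same raw gain looks smaller
# in standardized units once the spread of the outcome widens with age.
raw_gain = 5.0                # constant raw-score advantage for treated children
sd_end_of_treatment = 10.0    # SD of the outcome at end of treatment
sd_follow_up = 20.0           # SD of the same construct at follow-up

d_end = raw_gain / sd_end_of_treatment     # Cohen's d = raw difference / SD
d_follow_up = raw_gain / sd_follow_up

print(f"End-of-treatment effect size: d = {d_end:.2f}")        # 0.50
print(f"Follow-up effect size:        d = {d_follow_up:.2f}")  # 0.25, apparent fadeout
```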
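The publication-bias row describes selection on sampling error. The simulation below is a sketch under assumed values (the true effect, standard error, and selection threshold are hypothetical); it shows that fielding follow-up waves only for studies with large end-of-treatment estimates inflates the end-of-treatment mean, while the follow-up mean regresses toward the true effect, producing apparent fadeout.

```python
import numpy as np

rng = np.random.default_rng(0)

true_effect = 0.20   # assumed true standardized impact, identical at both waves
se = 0.10            # assumed sampling error of each study's estimate
n_studies = 100_000

# Each study's estimate = true effect + independent sampling error at each wave.
end_of_treatment = true_effect + rng.normal(0.0, se, n_studies)
follow_up = true_effect + rng.normal(0.0, se, n_studies)

# Follow-up is conducted only when the end-of-treatment estimate looks large.
followed = end_of_treatment > 0.30

print(f"End-of-treatment mean among followed-up studies: {end_of_treatment[followed].mean():.3f}")
print(f"Follow-up mean among the same studies:           {follow_up[followed].mean():.3f}")
# The first mean is pushed well above 0.20 by selection on sampling error;
# the second sits near 0.20, so the gap looks like fadeout.
```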