Table 3. Bias in imaging experiments.
| Type of bias | Examples in imaging experiments | Strategies |
| --- | --- | --- |
| Selection bias | • Scanning samples for fields of view that “look good” or “worked” based on subjective or undefined criteria (also confirmation bias) | • Use microscope automation to select fields of view, or scan the entire well (first sketch below) |
| | • Choosing to image only the brightest cells/samples (e.g., highest expression level) | • Include all data in the analysis, or define criteria for discarding a dataset before collecting the data |
| | • Including only data from experiments that “worked” in the analysis or publication | |
| Confirmation bias | • Adjusting the analysis strategy based on the direction the results are heading | • Validate the analysis strategy on known samples/controls ahead of time |
| | • Choosing analysis parameters that yield the desired or expected results, rather than parameters validated with known samples | • Perform the analysis blind (blinding sketch below) |
| | • P-hacking (Head et al., 2015) | |
| | • Choosing cells or parts of a sample that “make sense” based on the anticipated outcome | |
| Observer bias/experimenter effects | • Spending more time focusing by eye (and therefore photobleaching) on one condition than on the others | • Perform acquisition and analysis blind |
| | • Drawing subjective conclusions from visual inspection of images rather than from quantitative measurements | • Base conclusions on quantitative measurements rather than qualitative visual impressions: measure length/width/aspect ratio, count objects, measure intensity, etc. (measurement sketch below) |
| Asymmetric attention bias/disconfirmation bias | • Performing image corrections only when a result seems wrong or unexpected | • Identify sources of error, validate corrections, and apply them equally to all conditions and experiments (flat-field sketch below) |
See Lazic (2016), Nuzzo (2015), Nickerson (1998), and Munafò et al. (2017) for more about bias and additional references.
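Several of the strategies above can be enforced in software rather than left to judgment. For the selection-bias row, fields of view can be chosen by the computer rather than by eye: the sketch below samples stage positions uniformly at random within a circular well. This is a minimal Python illustration, not tied to any particular microscope API; the well geometry, the micrometer units, and the `random_fields` helper are hypothetical, and in practice the positions would be handed to the acquisition software.

```python
import random

def random_fields(well_center_um, well_radius_um, n_fields, seed=0):
    """Return n_fields random (x, y) stage positions inside a circular well."""
    rng = random.Random(seed)          # fixed seed makes the selection reproducible
    cx, cy = well_center_um
    positions = []
    while len(positions) < n_fields:
        # Rejection sampling: draw uniformly from the bounding square and
        # keep only the points that fall inside the circular well.
        x = cx + rng.uniform(-well_radius_um, well_radius_um)
        y = cy + rng.uniform(-well_radius_um, well_radius_um)
        if (x - cx) ** 2 + (y - cy) ** 2 <= well_radius_um ** 2:
            positions.append((x, y))
    return positions

# Example: 10 fields in a 6.35 mm diameter well centered at the stage origin.
for x, y in random_fields((0.0, 0.0), 3175.0, 10):
    print(f"move stage to ({x:.1f}, {y:.1f}) um and acquire")
```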
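For the “perform acquisition and analysis blind” strategy, blinding an existing dataset can be as simple as copying the images to coded filenames and keeping the code-to-condition key away from the analyst until all measurements are complete. A minimal sketch, assuming images are stored as .tif files under a source folder; the paths and the `blind_images` helper are hypothetical.

```python
import csv
import random
import shutil
from pathlib import Path

def blind_images(src_dir, dst_dir, key_path, seed=None):
    """Copy every .tif under src_dir to dst_dir under an uninformative code name."""
    src, dst = Path(src_dir), Path(dst_dir)
    dst.mkdir(parents=True, exist_ok=True)
    files = sorted(src.rglob("*.tif"))   # deterministic starting order
    random.Random(seed).shuffle(files)   # shuffle so codes carry no order information
    with open(key_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["code", "original"])
        for i, path in enumerate(files):
            code = f"img_{i:04d}.tif"    # reveals nothing about the condition
            shutil.copy2(path, dst / code)
            writer.writerow([code, str(path)])

# blind_images("raw/", "blinded/", "unblinding_key.csv")
# Analyze everything in blinded/; open the key only after measurements are done.
```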
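For replacing qualitative visual impressions with quantitative measurements, the sketch below segments objects in a single-channel image and reports the quantities named in the table (length/width, aspect ratio, count, mean intensity) using scikit-image. The Otsu threshold is only a placeholder: per the confirmation-bias row, the segmentation strategy should itself be validated on known samples/controls before the real data are analyzed.

```python
import numpy as np
from skimage import filters, measure

def measure_objects(image):
    """Threshold, label, and measure objects in a 2D grayscale image."""
    mask = image > filters.threshold_otsu(image)   # placeholder segmentation
    labels = measure.label(mask)
    results = []
    for p in measure.regionprops(labels, intensity_image=image):
        results.append({
            "label": p.label,
            "length_px": p.major_axis_length,
            "width_px": p.minor_axis_length,
            "aspect_ratio": p.major_axis_length / max(p.minor_axis_length, 1e-9),
            "mean_intensity": p.mean_intensity,
        })
    return results

# Tiny synthetic example; replace with a real image (e.g., skimage.io.imread).
img = np.zeros((64, 64))
img[10:30, 10:20] = 1.0                            # one bright rectangular "cell"
print(len(measure_objects(img)), "object(s):", measure_objects(img))
```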
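Finally, for the asymmetric-attention row, a correction should be defined once and applied to every image, not only to the conditions whose results looked wrong. The sketch below applies the standard flat-field model, corrected = (raw − dark) / normalized(flat − dark), identically across a whole dataset; the synthetic arrays stand in for real dark-frame and flat-field calibration images.

```python
import numpy as np

def flat_field_correct(raw, dark, flat):
    """Correct uneven illumination: (raw - dark) / normalized (flat - dark)."""
    shading = flat.astype(float) - dark
    shading /= shading.mean()            # mean 1.0, so overall intensity is preserved
    return (raw.astype(float) - dark) / np.maximum(shading, 1e-6)

# Synthetic demo: the SAME reference frames correct all images, regardless of
# condition and regardless of whether the uncorrected result "looked wrong".
dark = np.full((32, 32), 100.0)                        # camera offset
gradient = np.linspace(0.5, 1.5, 32)[None, :]          # uneven illumination profile
flat = dark + 1000.0 * gradient
images = {"control": dark + 500.0 * gradient,          # hypothetical conditions
          "treated": dark + 800.0 * gradient}
for name, raw in images.items():
    corrected = flat_field_correct(raw, dark, flat)
    print(name, f"mean={corrected.mean():.1f}")        # flat after correction
```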