J Vis. 2012 Oct 23;12(11):19. doi: 10.1167/12.11.19

Table 1.

Summary of categorization time predictions using visual search data. Notes: For each image, we calculated its within-category similarity (CRT) and between-category similarity (NRT) from visual search data. We then fit a model in which categorization time is a linear combination of NRT and CRT. The resulting correlations and model coefficients are shown below. Asterisks indicate the statistical significance of the correlation (*p < 0.05, **p < 0.005, ***p < 0.0005, ****p < 0.00005).


                                                                             Model coefficients
Task                                 Item type    Correlation between       RT = a × NRT + b × CRT + c
                                                  categorization & search   a        b        c
------------------------------------------------------------------------------------------------------
Animal/dog/Labrador in canonical     All items    0.85****                  0.16     −0.04    0.61
  views (Experiment 1)               Category     0.91****
                                     Noncategory  0.85****
Animals in oblique and profile       All items    0.72****
  views (Experiment 2)               Category     0.52*                     0.08     −0.02    0.62
                                     Noncategory  0.56*                     0.06     0.02     0.61
Vehicles (Experiment 3)              All items    0.71****
                                     Category     0.43*                     0.17     −0.05    0.63
                                     Noncategory  0.48*                     0.10     0.11     0.59
Tools (Experiment 4)                 All items    0.62****
                                     Category     0.70***                   0.58     −0.13    0.44
                                     Noncategory  0.45*                     0.20     −0.07    0.73
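As a minimal sketch of how such a fit can be computed, the snippet below estimates the model RT = a × NRT + b × CRT + c by ordinary least squares and reports the correlation between predicted and observed categorization times. The data values and variable names here are hypothetical illustrations, not the paper's actual measurements or analysis code.

```python
import numpy as np

# Hypothetical per-image values; in the paper these come from visual search data.
nrt = np.array([0.62, 0.71, 0.58, 0.66])  # between-category similarity (NRT, s)
crt = np.array([0.91, 0.84, 0.95, 0.88])  # within-category similarity (CRT, s)
rt  = np.array([0.68, 0.73, 0.64, 0.70])  # observed categorization times (s)

# Design matrix for RT = a*NRT + b*CRT + c, solved by ordinary least squares.
X = np.column_stack([nrt, crt, np.ones_like(nrt)])
(a, b, c), *_ = np.linalg.lstsq(X, rt, rcond=None)

# Correlation between model predictions and observed categorization times.
predicted = X @ np.array([a, b, c])
r = np.corrcoef(predicted, rt)[0, 1]
print(f"a = {a:.2f}, b = {b:.2f}, c = {c:.2f}, r = {r:.2f}")
```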