Front. Comput. Neurosci. 2014 Sep 9;8:109. doi: 10.3389/fncom.2014.00109

Table 2. Mapping of various popular recognition algorithms to the canonical architecture of Figure 5.

| Method | Stage 1: Units | Stage 1: Saliency | Stage 1: Templates | Stage 2: Units | Stage 2: Assignment | Stage 2: Pooling |
|---|---|---|---|---|---|---|
| HMAX | Filter responses | — | Random | LU | Soft | Max |
| MPE | Filter responses | — | GMM | CPU | Soft | Sum |
| NBNN | SIFT | Bottom-up | Training set | CPU | Hard | Sum |
| SPMK | SIFT | Bottom-up | Codebook | PU | Hard | Sum |
| HGMM | SIFT | Bottom-up | GMM | CPU | Soft | Sum |
| Sparse SPMK | SIFT | Bottom-up | Sparse dictionary | PP | Soft | Max |
| LSN | SL | Top-down | — | — | — | — |
| HDSN | DS | Top-down | Random | DS | — | Sum |

HMAX: (Serre et al., 2007), MPE: (Carneiro et al., 2007), NBNN: (Boiman et al., 2008), SPMK: (Lazebnik et al., 2006), HGMM: (Zhou et al., 2009), Sparse SPMK: (Yang et al., 2009), LSN: (Elazary and Itti, 2010).
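The Assignment and Pooling columns of Table 2 distinguish two recurring design choices in these recognition pipelines: whether each local feature is committed to a single template (hard) or distributed over all templates (soft), and whether per-feature responses are aggregated by summation (histogram-like, as in SPMK/NBNN) or by taking the maximum (as in HMAX). The sketch below is an illustrative numpy toy, not any of the cited implementations; the function names, the softmax-over-distances form of soft assignment, and the `beta` parameter are assumptions for exposition.

```python
import numpy as np

def assign(features, templates, mode="soft", beta=1.0):
    """Weight each feature against a template set.

    Hard assignment gives a one-hot weight to the nearest template;
    soft assignment spreads weight via a softmax over negative squared
    distances (beta is an assumed inverse-temperature parameter).
    """
    # Pairwise squared Euclidean distances: (n_features, n_templates)
    d2 = ((features[:, None, :] - templates[None, :, :]) ** 2).sum(axis=2)
    if mode == "hard":
        w = np.zeros_like(d2)
        w[np.arange(len(features)), d2.argmin(axis=1)] = 1.0
    else:
        logits = -beta * d2
        logits -= logits.max(axis=1, keepdims=True)  # numerical stability
        w = np.exp(logits)
        w /= w.sum(axis=1, keepdims=True)  # each row sums to 1
    return w

def pool(weights, mode="sum"):
    """Aggregate per-feature weights into one descriptor per template."""
    return weights.sum(axis=0) if mode == "sum" else weights.max(axis=0)

rng = np.random.default_rng(0)
feats = rng.normal(size=(50, 8))   # toy local features (e.g., SIFT-like)
temps = rng.normal(size=(10, 8))   # toy template set (e.g., a codebook)

# Hard assignment + sum pooling yields a visual-word histogram
hist = pool(assign(feats, temps, mode="hard"), mode="sum")
# Soft assignment + max pooling yields an HMAX-style max response
resp = pool(assign(feats, temps, mode="soft"), mode="max")
```

With hard assignment and sum pooling, the result is a count histogram over templates; swapping in soft/max changes only the two functions' `mode` arguments, which is the sense in which Table 2 treats these methods as points in one design space.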