Table 9. Comparison with relevant existing person ReID methods on Market-1501 and MSMT17, where '–' indicates that the corresponding result is not reported.
The best performance is shown in bold.
| Category | Methods | Market-1501 mAP (%) | Market-1501 Rank-1 (%) | MSMT17 mAP (%) | MSMT17 Rank-1 (%) |
|---|---|---|---|---|---|
| Stripe-based | RGA-SC (Zhang et al., 2020) | 88.1 | 95.8 | – | – |
| | PCB+RPP (Sun et al., 2018) | 81.6 | 93.8 | 40.4 | 68.2 |
| | MGN (Wang et al., 2018) | 86.9 | 95.7 | 52.1 | 76.9 |
| Extra semantic-based | GASM (He & Liu, 2020) | 84.7 | 95.3 | 52.5 | 79.5 |
| | SPReID (Kalayeh et al., 2018) | 81.3 | 92.5 | – | – |
| | AANet (Tay, Roy & Yap, 2019) | 83.4 | 93.9 | – | – |
| | P²-Net (Guo et al., 2019) | 85.6 | 95.2 | – | – |
| | HOReID (Wang et al., 2020) | 84.9 | 94.2 | – | – |
| CNN-based general attention | IANet (Hou et al., 2019) | 83.1 | 94.4 | 46.8 | 75.5 |
| | SCSN (Chen et al., 2020) | 88.5 | 95.7 | 58.5 | 83.8 |
| | ABD-Net (Chen et al., 2019) | 88.3 | 95.6 | 60.8 | 82.3 |
| | BAT-net (Fang et al., 2019) | 85.5 | 94.1 | 56.8 | 79.5 |
| Vision Transformer-based | DAT (Zhang et al., 2021) | 89.5 | 95.6 | 61.2 | 82.3 |
| | TransReID (He et al., 2021) | 88.2 | 95.0 | 64.9 | 83.3 |
| | PAT (Li et al., 2021) | 88.0 | 95.4 | – | – |
| | AAformer (Zhu et al., 2021) | 87.7 | 95.4 | 62.2 | 83.1 |
| | TCCNet (Ours) | **90.4** | **96.1** | **66.9** | **84.5** |
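The mAP and Rank-1 numbers above follow the standard ReID evaluation protocol: each query image is ranked against the gallery by feature distance, Rank-1 measures whether the nearest gallery image shares the query identity, and mAP averages the per-query average precision over all true matches. The following is a minimal NumPy sketch of that computation, not the authors' evaluation code; it assumes every query has at least one correct gallery match and omits the usual exclusion of same-camera, same-identity gallery entries.

```python
import numpy as np

def rank1_and_map(dist, query_ids, gallery_ids):
    """Compute Rank-1 accuracy and mAP from a (num_query, num_gallery)
    distance matrix; query_ids and gallery_ids are NumPy integer arrays
    of person identity labels."""
    num_query = dist.shape[0]
    rank1_hits = 0
    average_precisions = []
    for i in range(num_query):
        order = np.argsort(dist[i])                   # gallery sorted by ascending distance
        matches = gallery_ids[order] == query_ids[i]  # boolean relevance at each rank
        rank1_hits += int(matches[0])                 # top-1 match has the right identity?
        hit_positions = np.flatnonzero(matches)       # 0-based ranks of all true matches
        # Precision at each true-match rank, averaged over true matches = AP for this query
        # (assumes at least one true match per query).
        precisions = np.arange(1, len(hit_positions) + 1) / (hit_positions + 1)
        average_precisions.append(precisions.mean())
    return rank1_hits / num_query, float(np.mean(average_precisions))
```

In the published Market-1501 and MSMT17 protocols, gallery entries sharing both the identity and the camera of the query are filtered out before ranking, which this sketch leaves out for brevity.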