2019 May 15;7:102. doi: 10.3389/fbioe.2019.00102

Table 4.

Unsupervised domain adaptation for TCGA (w/o UP) → UP and TCGA → RCINJ using color normalization and adversarial adaptation.

Method                                               TCGA (w/o UP) → UP   TCGA → RCINJ
Baseline                                             54.3                 56.3
Macenko (Macenko et al., 2009), 1-Ensemble           65.7 ± 11.9          51.3 ± 6.1
Macenko (Macenko et al., 2009), 2-Ensemble           70.0 ± 5.9           53.8 ± 8.5
Macenko (Macenko et al., 2009), 5-Ensemble           72.3 ± 3.8           55.0 ± 7.3
Macenko (Macenko et al., 2009), 10-Ensemble          72.6 ± 2.3           55.0 ± 4.7
SPCN (Vahadane et al., 2016), 1-Ensemble             70.0 ± 7.3           56.3 ± 13.4
SPCN (Vahadane et al., 2016), 2-Ensemble             71.7 ± 6.7           55.0 ± 15.3
SPCN (Vahadane et al., 2016), 5-Ensemble             72.9 ± 2.6           55.6 ± 9.8
SPCN (Vahadane et al., 2016), 10-Ensemble            73.4 ± 1.8           54.4 ± 8.4
Color augmentation (Liu et al., 2017)                74.5                 56.3
Generate-to-Adapt (Sankaranarayanan et al., 2018)    71.7                 62.5
La only                                              71.4 ± 1.1           62.5 ± 2.5
Lt                                                   77.1 ± 1.1           75.0 ± 2.5

The classification accuracy of two color normalization methods, Macenko (Macenko et al., 2009) and SPCN (Vahadane et al., 2016), with different numbers of ensembles, and of the target network trained with the adversarial loss only (La) and with the adversarial and Siamese losses together (Lt), is shown for the two adaptation settings. We also compare our approach with color augmentation (Liu et al., 2017). Our proposed approach outperforms the other state-of-the-art method (Sankaranarayanan et al., 2018) on the unsupervised adaptation task.
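The combined objective behind the Lt rows can be sketched as the adversarial loss plus a weighted Siamese term over paired embeddings. The contrastive formulation below, the margin, and the weighting lam are illustrative assumptions for this sketch, not the paper's exact definitions.

```python
import numpy as np

def siamese_loss(emb_a, emb_b, same=True, margin=1.0):
    """Contrastive Siamese term (illustrative assumption).

    Pulls embeddings of matching pairs together; pushes
    non-matching pairs apart up to `margin`.
    """
    d = np.linalg.norm(emb_a - emb_b)
    return d ** 2 if same else max(0.0, margin - d) ** 2

def total_loss(l_adv, emb_src, emb_tgt, same, lam=1.0):
    """L_t = L_a + lam * L_siamese (weighting lam is an assumption)."""
    return l_adv + lam * siamese_loss(emb_src, emb_tgt, same=same)
```

For identical embeddings of a matching pair the Siamese term vanishes and Lt reduces to the adversarial loss alone, which is the La-only setting in the table.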