Table 2.
Test errors under the Entropy, BLEU-l, and SF-BLEU-l loss functions, l = 1, ..., 4 (standard errors in parentheses), for various generators, based on 20 random partitions of the UCI sentence categorization text corpus. Here “Separate RNN”, “Indirect”, and “Direct” denote the separate RNN, indirect, and direct generators based on the RNN-LSTM architecture, “Direct-GPT2” denotes the direct generator based on the RNN-GPT architecture, and “Coupled” denotes the coupled generator, while “Indirect-label” and “Coupled-label” refer to generation without unlabeled data.
**Y: categorical label**

| Method | Entropy | BLEU1-loss | BLEU2-loss | BLEU3-loss | BLEU4-loss |
|---|---|---|---|---|---|
| Separate RNN | 9.317(.040) | 0.895(.010) | 0.926(.008) | 0.954(.007) | 0.971(.005) |
| Indirect | 7.424(.049) | 0.768(.003) | 0.854(.002) | 0.885(.002) | 0.914(.002) |
| Indirect-label | 8.839(.060) | 0.831(.008) | 0.878(.005) | 0.899(.004) | 0.923(.003) |
| Direct | 9.537(.054) | 0.823(.008) | 0.872(.005) | 0.895(.005) | 0.919(.004) |
| Direct-GPT2 | 8.684(.051) | 0.900(.006) | 0.954(.002) | 0.970(.001) | 0.981(.001) |
| Coupled | 7.424(.049) | 0.768(.003) | 0.854(.002) | 0.885(.002) | 0.914(.002) |
| Coupled-label | 8.644(.050) | 0.880(.008) | 0.932(.008) | 0.949(.007) | 0.963(.006) |

| Method | SF-BLEU1-loss | SF-BLEU2-loss | SF-BLEU3-loss | SF-BLEU4-loss |
|---|---|---|---|---|
| Separate RNN | 0.076(.010) | 0.208(.027) | 0.271(.036) | 0.303(.043) |
| Indirect | 0.105(.006) | 0.296(.009) | 0.416(.012) | 0.502(.013) |
| Indirect-label | 0.138(.008) | 0.363(.022) | 0.472(.029) | 0.545(.036) |
| Direct | 0.139(.006) | 0.372(.019) | 0.487(.026) | 0.561(.032) |
| Direct-GPT2 | 0.053(.006) | 0.159(.019) | 0.255(.031) | 0.320(.040) |
| Coupled | 0.105(.006) | 0.296(.009) | 0.416(.012) | 0.502(.013) |
| Coupled-label | 0.082(.011) | 0.233(.028) | 0.342(.038) | 0.417(.045) |
**Y: continuous label based on Doc2Vec [23, 24]**

| Method | Entropy | BLEU1-loss | BLEU2-loss | BLEU3-loss | BLEU4-loss |
|---|---|---|---|---|---|
| Indirect | 7.641(.036) | 0.768(.005) | 0.851(.003) | 0.883(.003) | 0.912(.003) |
| Indirect-label | 8.512(.041) | 0.912(.010) | 0.937(.008) | 0.949(.007) | 0.960(.005) |
| Direct | 9.102(.050) | 0.916(.010) | 0.939(.007) | 0.950(.005) | 0.961(.004) |
| Coupled | 7.641(.036) | 0.768(.005) | 0.851(.003) | 0.883(.003) | 0.912(.003) |
| Coupled-label | 8.512(.041) | 0.912(.010) | 0.937(.008) | 0.949(.007) | 0.960(.005) |

| Method | SF-BLEU1-loss | SF-BLEU2-loss | SF-BLEU3-loss | SF-BLEU4-loss |
|---|---|---|---|---|
| Indirect | 0.097(.005) | 0.261(.008) | 0.361(.010) | 0.440(.012) |
| Indirect-label | 0.064(.010) | 0.165(.026) | 0.211(.035) | 0.232(.040) |
| Direct | 0.079(.014) | 0.202(.037) | 0.252(.046) | 0.271(.050) |
| Coupled | 0.097(.005) | 0.261(.008) | 0.361(.010) | 0.440(.012) |
| Coupled-label | 0.064(.010) | 0.165(.026) | 0.211(.035) | 0.232(.040) |
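
For readers unfamiliar with the BLEU-based columns, the sketch below (not taken from the paper) illustrates one plausible reading of the BLEU-l loss: one minus the corpus-level BLEU score with uniform weights over n-gram orders 1 through l. This reading, the smoothing choice, and the toy data are assumptions; the SF-BLEU-l columns are not reproduced, since their exact definition is not given in this excerpt.

```python
# Minimal sketch, assuming BLEU-l loss = 1 - corpus-level BLEU with uniform
# weights over n-gram orders 1..l. Not the paper's implementation.
from nltk.translate.bleu_score import corpus_bleu, SmoothingFunction


def bleu_l_loss(references, hypotheses, l):
    """references: one list of reference token lists per hypothesis;
    hypotheses: list of generated token lists; l: maximum n-gram order (1..4)."""
    weights = tuple(1.0 / l for _ in range(l))
    score = corpus_bleu(
        references,
        hypotheses,
        weights=weights,
        smoothing_function=SmoothingFunction().method1,  # avoid zero scores on small samples
    )
    return 1.0 - score


# Toy usage with hypothetical generated and reference sentences.
refs = [[["the", "model", "generates", "a", "sentence"]]]
hyps = [["the", "model", "generates", "text"]]
print([round(bleu_l_loss(refs, hyps, l), 3) for l in (1, 2, 3, 4)])
```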