JAMIA Open. 2020 Apr 13;3(2):146–150. doi: 10.1093/jamiaopen/ooaa010

Table 1. A proposed categorization of the space of domain adaptation algorithms

| Source shares | Target has | Target shares | Best methods |
| --- | --- | --- | --- |
| Labeled text | Labeled text | | Neural feature augmentation5; Parameter transfer7–9; Prior knowledge10; Instance weighting and selection11,12 |
| Labeled text | Raw text | | Neural feature correspondence learning14; Re-training embeddings19; Bootstrapping20,21; Adversarial training22; Auto-encoders16–18 |
| Labeled features | Labeled text | | Feature augmentation6 |
| Labeled features | Raw text | | Feature correspondence learning13–15 |
| Trained models | Labeled text | | Fine-tuning23,24; Adaptive off-the-shelf25 |
| Trained models | Raw text | | Online self-training21 |
| Trained models | Raw text | Models | Pseudo in-domain data selection26 |

Notes: It is assumed that labeled data are always available in the source domain. “Source shares” describes what the source site is able to share with the target site. “Target has” describes what data are available at the target site. “Target shares” describes what the target site is able to share with the source site. “Best methods” names the types of methods suited to each configuration, with citations to examples of such work.
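
The “feature augmentation” entries in the table refer to the classic trick of copying each feature into a shared version plus a domain-specific version, so that a single linear model can learn both general and domain-specific weights. A minimal sketch in Python with NumPy (the function name and array layout are illustrative, not taken from the cited work):

```python
import numpy as np

def augment(X, domain):
    """Feature augmentation sketch: map each row x to
    [x, x, 0] for source examples and [x, 0, x] for target
    examples, so the first block carries shared weights and
    the other two blocks carry domain-specific weights."""
    zeros = np.zeros_like(X)
    if domain == "source":
        return np.hstack([X, X, zeros])
    if domain == "target":
        return np.hstack([X, zeros, X])
    raise ValueError("domain must be 'source' or 'target'")
```

Any off-the-shelf linear classifier trained on the augmented matrices then implicitly decides, per feature, how much weight is shared across domains versus specific to one of them.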
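
The “instance weighting and selection” row covers methods that reweight source examples so the source distribution better matches the target. A toy sketch of density-ratio weighting, using diagonal Gaussian density estimates (an illustrative simplification, not the specific cited method):

```python
import numpy as np

def importance_weights(X_source, X_target):
    """Instance-weighting sketch: weight each source example by an
    estimated density ratio p_target(x) / p_source(x). Densities are
    approximated here with diagonal Gaussian fits, a toy assumption."""
    def log_density(X, mu, var):
        # Sum of per-dimension Gaussian log-densities
        return (-0.5 * ((X - mu) ** 2 / var + np.log(2 * np.pi * var))).sum(axis=1)

    mu_s, var_s = X_source.mean(axis=0), X_source.var(axis=0) + 1e-6
    mu_t, var_t = X_target.mean(axis=0), X_target.var(axis=0) + 1e-6
    w = np.exp(log_density(X_source, mu_t, var_t) - log_density(X_source, mu_s, var_s))
    return w / w.mean()  # normalize so the average weight is 1
```

Source examples that look like target data receive large weights; a model trained with these as sample weights is biased toward the target domain. Selection methods simply keep the top-weighted examples instead of reweighting.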