Sci Rep. 2022 Jan 20;12:1040. doi: 10.1038/s41598-021-04590-0

Table 1.

Selected works, in no particular order, illustrating the principal approaches to including domain knowledge in deep neural networks. For each work referenced here, we indicate the type of learner using the following acronyms: Multilayer Perceptron (MLP), Convolutional Neural Network (CNN), Recurrent Neural Network (RNN), Graph Neural Network (GNN), and Adaptive Resonance Theory-based Network Map (ARTMAP); DNN refers to a DNN structure that depends on the intended task. We use 'MLP' to represent any neural network with a layered structure that may or may not be fully connected. RNN also covers sequence models built from Long Short-Term Memory (LSTM) or Gated Recurrent Unit (GRU) cells.

Principal approach    Work (reference)        Type of learner
--------------------  ----------------------  ----------------
Transforming Data     DRM [24,25]             MLP
                      CILP++ [28]             MLP
                      R-GCN [46]              GNN
                      KGCN [61]               GNN
                      KBRD [49]               GNN
                      DG-RNN [44]             RNN
                      DreamCoder [32]         DNN
                      Gated-K-BERT [38]       Transformer
                      VEGNN [5]               GNN
                      BotGNN [6]              GNN
                      KRISP [45]              GNN, Transformer
Transforming Loss     IPKFL [78]              CNN
                      ILBKRME [30]            MLP
                      HDNNLR [64]             CNN, RNN
                      SBR [68]                MLP
                      SBR [69]                CNN
                      DL2 [66]                CNN
                      Semantic Loss [63]      CNN
                      LENSR [91]              GNN
                      DANN [67]               MLP
                      PC-LSTM [72]            RNN
                      DomiKnowS [73]          DNN*
                      MultiplexNet [74]       MLP, CNN
                      Analogy Model [75]      RNN
Transforming Model    KBANN [86]              MLP
                      Cascade-ARTMAP [89]     ARTMAP
                      CIL2P [29]              RNN
                      DeepProbLog [93]        CNN
                      LRNN [101]              MLP
                      TensorLog [103]         MLP
                      Domain-Aware BERT [82]  Transformer
                      NeuralLog [104]         MLP
                      DeepStochLog [94]       DNN*
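The "Transforming Loss" rows above group methods that fold domain knowledge into the training objective as an extra penalty term. The following is a minimal, hypothetical sketch of that general idea only, assuming a single binary prediction and a one-directional rule ("if the rule's antecedent holds, the positive class should be predicted"); it is not the formulation of any specific work cited in the table, and the function names are illustrative.

```python
import math

def cross_entropy(p, y):
    # Standard binary cross-entropy for one predicted probability p
    # against label y in {0, 1}; eps guards against log(0).
    eps = 1e-12
    return -(y * math.log(p + eps) + (1 - y) * math.log(1 - p + eps))

def rule_penalty(p, rule_holds):
    # Domain-knowledge term: when the rule's antecedent holds, any
    # probability mass placed on the negative class is penalized.
    return (1.0 - p) if rule_holds else 0.0

def knowledge_loss(p, y, rule_holds, lam=0.5):
    # Knowledge-informed objective: data-fit term plus a weighted
    # constraint term, with lam trading off the two.
    return cross_entropy(p, y) + lam * rule_penalty(p, rule_holds)
```

For a fixed prediction, the loss is larger when the rule applies and is only partially satisfied, so gradient descent on `knowledge_loss` pushes the model toward rule-consistent outputs even where labels are noisy or scarce.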