
Table 1.

The performance of four different generators with different numbers of neurons in the hidden layers, for the pre-training and fine-tuning processes

Methods                  Hidden     Pre-trained Model                          Fine-tuned Model
                         Neurons    Validity  Accuracy  Novelty  Uniqueness    Validity  Accuracy  Novelty  Uniqueness
Graph Transformer         512       100.0%    99.3%     99.9%    99.4%         100.0%    99.2%     68.9%    82.9%
Sequential Transformer    128        91.8%    62.4%     90.2%    92.5%          94.5%    80.5%      8.6%    24.3%
                          256        94.2%    69.3%     89.3%    91.4%          98.8%    89.5%      9.2%    26.6%
                          512        96.7%    74.0%     89.1%    91.8%          99.3%    92.7%      8.9%    28.9%
                         1024        97.1%    77.9%     89.5%    91.4%          99.4%    94.3%      8.2%    32.9%
LSTM-BASE                 128        87.1%    38.7%     83.2%    84.0%          85.2%    53.1%      9.9%    26.8%
                          256        91.4%    48.8%     89.0%    91.2%          94.5%    75.8%      5.8%    21.2%
                          512        93.9%    52.4%     84.3%    89.1%          98.7%    81.6%      3.9%    19.2%
                         1024        95.7%    57.0%     79.6%    87.5%          99.6%    90.2%      2.1%    18.1%
LSTM + ATTN               128        89.8%    57.0%     84.2%    85.0%          85.2%    64.8%     14.2%    27.8%
                          256        92.6%    68.4%     87.1%    89.5%          94.9%    80.5%      8.9%    22.4%
                          512        94.3%    72.8%     85.3%    89.7%          96.9%    85.9%      6.3%    20.7%
                         1024        96.0%    75.0%     80.7%    89.4%          99.1%    92.9%      4.2%    20.2%
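
For readers unfamiliar with the column metrics: Validity, Uniqueness, and Novelty are conventionally computed from the sampled molecules roughly as below. This is a minimal RDKit-based sketch assuming the standard SMILES-level definitions (valid = parseable by RDKit; unique = distinct canonical SMILES among valid samples; novel = unique molecules absent from the training set); the paper's exact definitions, and its Accuracy column (not reproduced here), may differ, and the function name generation_metrics is illustrative.

    from rdkit import Chem

    def generation_metrics(generated_smiles, training_smiles):
        # Validity: fraction of generated strings RDKit can parse.
        mols = [Chem.MolFromSmiles(s) for s in generated_smiles]
        valid = [m for m in mols if m is not None]
        validity = len(valid) / len(generated_smiles)

        # Canonical SMILES so different spellings of one molecule compare equal.
        canonical = {Chem.MolToSmiles(m) for m in valid}

        # Uniqueness: distinct molecules among the valid samples.
        uniqueness = len(canonical) / len(valid) if valid else 0.0

        # Novelty: unique molecules not seen in the training set.
        train = {Chem.MolToSmiles(m)
                 for m in (Chem.MolFromSmiles(s) for s in training_smiles)
                 if m is not None}
        novelty = len(canonical - train) / len(canonical) if canonical else 0.0

        return validity, uniqueness, novelty

Under these definitions, the low fine-tuned Novelty values in the table would indicate that most unique molecules sampled after fine-tuning already appear in the fine-tuning set, while the Graph Transformer retains substantially more novel output.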