
Table 8. Biot’s equations—inverse modeling: PINN performance as function of noise.

This table shows the average percentage errors of θ1, θ2, θ3, θ4, and θ5 for different numbers of training data Ntr corrupted by different noise levels (ϵ). Here, the neural network architecture is kept fixed at 6 layers and 20 neurons per layer. These results are an average over 10 realizations.

              Noise (ϵ)
      Ntr      0%     1%     5%    10%
θ1   1000    0.91   6.40   7.15  29.55
     1500    0.98   4.64   5.70  14.86
     2000    0.67   2.17   5.26  15.95
     2500    0.52   3.87   5.33  12.24
     5000    0.30   1.39   3.32   7.60
θ2   1000    1.48  10.26  12.30  17.50
     1500    0.84   6.95   5.44  14.53
     2000    1.37   4.38   6.73  13.48
     2500    0.80   3.95   2.91   6.31
     5000    0.51   1.97   2.90   3.14
θ3   1000    1.19   2.85   5.06   8.10
     1500    0.87   1.32   2.52   4.80
     2000    0.58   1.35   2.49   4.11
     2500    0.11   0.38   1.54   4.12
     5000    0.10   0.24   1.72   2.36
θ4   1000    0.46   4.70   8.35  13.75
     1500    0.43   3.01   5.22   8.53
     2000    0.23   2.99   4.12  10.08
     2500    0.24   0.64   5.88   9.34
     5000    0.20   0.45   1.62   6.53
θ5   1000    3.46   9.22  16.15  24.52
     1500    2.70   5.64  14.16  19.75
     2000    0.44   3.76  11.56  17.55
     2500    0.66   2.23   6.82   7.09
     5000    0.24   1.28   4.26   5.91

As the noise level (ϵ) increases, the error increases, as expected, while increasing the number of training data Ntr generally reduces it.
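
For context, the sketch below illustrates the kind of experiment summarized in the table: training data corrupted by a relative noise level ϵ, and parameter errors averaged over 10 realizations. This is a minimal illustration, not the authors' code; the functions corrupt, percent_error, and the placeholder train_pinn call, as well as the stand-in parameter values, are assumptions introduced here for clarity.

import numpy as np

def corrupt(u_clean, eps, rng):
    """Add zero-mean Gaussian noise of relative magnitude eps (e.g. 0.05 for 5%)."""
    return u_clean + eps * np.std(u_clean) * rng.standard_normal(u_clean.shape)

def percent_error(theta_est, theta_true):
    """Percentage error |theta_est - theta_true| / |theta_true| * 100."""
    return 100.0 * np.abs(theta_est - theta_true) / np.abs(theta_true)

# Average the errors over 10 noise realizations, as reported in the table.
rng = np.random.default_rng(0)
theta_true = np.ones(5)                  # placeholder true parameters theta_1..theta_5
errors = []
for _ in range(10):
    # u_train = corrupt(u_clean, eps=0.05, rng=rng)   # hypothetical: 5% noise on Ntr samples
    # theta_est = train_pinn(u_train)                 # hypothetical PINN inversion step
    theta_est = theta_true * (1 + 0.01 * rng.standard_normal(5))  # stand-in estimate
    errors.append(percent_error(theta_est, theta_true))
print(np.mean(errors, axis=0))           # average percentage error per parameter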