Table 3.
Comparison of PINNs using different robustness strategies to solve the 1D Burgers' equation. Introducing error into the initial condition causes a significant increase in MSE for the standard PINN. GP-smoothing reduces the MSE to nearly as low as that of the PINN with no error. SGP-smoothing is also effective at reducing error and uses fewer inducing points (IPs). Results quoted for L1 and L2 regularization are the best observed over choices of the regularization coefficient.
| Model | MSE | 
|---|---|
| PINN (no error) | 0.0116 | 
| PINN (σ = 0.5) | 0.1982 | 
| PINN (σ = 0.1, L1 regularization) | 0.0392 | 
| PINN (σ = 0.1, L2 regularization) | 0.0293 | 
| PINN (σ = 0.5, Cole-Hopf regularizer) | 0.1125 | 
| cPINN-2 (no error) | 0.0161 | 
| cPINN-2 (σ = 0.5, no smoothing) | 0.0834 | 
| cPINN-2 (σ = 0.5, Cole-Hopf regularizer) | 0.0891 | 
| cPINN-3 (no error) | 2.782 | 
| cPINN-3 (σ = 0.5, no smoothing) | 0.0854 | 
| cPINN-3 (σ = 0.5, Cole-Hopf regularizer) | 0.0329 | 
| UQ-PINN [20] | 0.1248 | 
| GP-smoothed PINN (σ = 0.5, 50 IPs) | 0.0384 | 
| SGP-smoothed PINN (σ = 0.5, 41 IPs) | 0.0080 |
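The GP-smoothing idea behind the last two rows can be illustrated with a minimal sketch: fit a Gaussian process to the noisy initial-condition samples and hand the posterior mean to the PINN in place of the raw data. This is not the paper's implementation; it uses scikit-learn's exact `GaussianProcessRegressor` (the sparse, inducing-point SGP variant requires a different library), and the initial condition u₀(x) = −sin(πx), the noise level σ = 0.5, and the kernel hyperparameters are all illustrative assumptions.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

# Noisy samples of an assumed Burgers' initial condition u0(x) = -sin(pi x)
x = np.linspace(-1.0, 1.0, 100).reshape(-1, 1)
u0_clean = -np.sin(np.pi * x).ravel()
u0_noisy = u0_clean + rng.normal(0.0, 0.5, size=u0_clean.shape)  # sigma = 0.5

# RBF kernel models the smooth signal; WhiteKernel absorbs the observation noise.
# Length scale and noise level here are illustrative starting values,
# refined by marginal-likelihood optimization inside fit().
kernel = RBF(length_scale=0.3) + WhiteKernel(noise_level=0.25)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gp.fit(x, u0_noisy)

# The posterior mean is the smoothed initial condition fed to the PINN.
u0_smooth = gp.predict(x)

mse_noisy = np.mean((u0_noisy - u0_clean) ** 2)
mse_smooth = np.mean((u0_smooth - u0_clean) ** 2)
print(f"MSE of noisy IC:    {mse_noisy:.4f}")
print(f"MSE of smoothed IC: {mse_smooth:.4f}")
```

The smoothed curve is what the downstream PINN would fit its initial-condition loss against; the SGP variant in the table plays the same role but summarizes the data through a small set of inducing points rather than conditioning on every sample.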