2009 Sep 25;36(10):4810–4818. doi: 10.1118/1.3213517

Table 2.

Comparison of the relative performance gain from regularization and the loss due to overfitting for the ANN training methods in the simulation studies.

| Training set | Quantity | No regularization | Noise injection | Weight decay | BANN | Early stopping |
|---|---|---|---|---|---|---|
| 50 training cases | Average gain^a or loss^b [95% CI] | −0.036 [−0.034, −0.039] | 0.032 [0.028, 0.035] | 0.019 [0.015, 0.023] | — | 0.017 [0.015, 0.020] |
| | Percent recovery [95% CI] | — | 87% [80%, 94%] | 53% [44%, 61%] | — | 48% [43%, 52%] |
| 50 training cases, complex ANNs^c | Average gain^a or loss^b [95% CI] | −0.071 [−0.065, −0.077] | 0.064 [0.058, 0.070] | 0.051 [0.043, 0.058] | — | 0.054 [0.048, 0.061] |
| | Percent recovery [95% CI] | — | 90% [84%, 96%] | 71% [63%, 79%] | — | 76% [69%, 83%] |
| 100 training cases | Average gain^a or loss^b [95% CI] | −0.019 [−0.017, −0.021] | 0.023 [0.021, 0.025] | 0.021 [0.020, 0.023] | — | 0.008 [0.006, 0.009] |
| | Percent recovery [95% CI] | — | 121% [114%, 129%] | 114% [107%, 120%] | — | 40% [35%, 45%] |
| 200 training cases | Average gain^a or loss^b [95% CI] | −0.006 [−0.005, −0.008] | 0.011 [0.009, 0.013] | 0.009 [0.008, 0.011] | 0.012 [0.010, 0.014] | 0.002 [0.001, 0.003] |
| | Percent recovery [95% CI] | — | 176% [148%, 204%] | 146% [126%, 166%] | 194% [165%, 223%] | 31% [15%, 47%] |
^a Gain = the difference in the AUC value between ANNs trained with regularization and ANNs trained without regularization at the 485th training iteration (1485th training iteration for the more complex ANNs).

^b Loss = the difference between the maximum AUC value and the AUC value at the 485th training iteration (1485th training iteration for the more complex ANNs) for ANNs trained without regularization.

^c These ANNs had 20 hidden nodes and were trained for 1500 training iterations, whereas all other ANNs had 6 hidden nodes and were trained for 500 training iterations. The results were calculated at the 1485th and 485th training iterations, respectively.
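The arithmetic behind the table can be made concrete with a short sketch. Footnotes a and b define gain and loss from AUC-vs-iteration curves, and percent recovery is, on this reading, the gain expressed as a percentage of the loss (which is why values above 100% appear when regularization gains more than overfitting costs). The function names and the toy AUC sequences below are illustrative assumptions, not from the study.

```python
# Sketch of the gain/loss/percent-recovery arithmetic implied by the
# table footnotes. AUC values here are hypothetical placeholders.

def gain(auc_reg, auc_noreg, it):
    """Footnote a: AUC of the regularized ANN minus AUC of the
    unregularized ANN, both evaluated at training iteration `it`."""
    return auc_reg[it] - auc_noreg[it]

def loss(auc_noreg, it):
    """Footnote b: maximum AUC of the unregularized ANN minus its
    AUC at training iteration `it` (the cost of overfitting)."""
    return max(auc_noreg) - auc_noreg[it]

def percent_recovery(g, l):
    """Gain as a percentage of the overfitting loss; can exceed 100%."""
    return 100.0 * g / l

if __name__ == "__main__":
    # Toy curves: without regularization the AUC peaks and then declines;
    # with regularization it largely holds its peak value.
    auc_noreg = [0.80, 0.85, 0.814]   # loss at iteration 2: 0.85 - 0.814
    auc_reg = [0.80, 0.85, 0.846]     # gain at iteration 2: 0.846 - 0.814
    g = gain(auc_reg, auc_noreg, 2)
    l = loss(auc_noreg, 2)
    print(f"gain={g:.3f}, loss={l:.3f}, recovery={percent_recovery(g, l):.0f}%")
```

With these toy numbers the gain (0.032) and loss (0.036) mirror the magnitudes in the 50-training-case row, giving a recovery just under 90%.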