
Fig. 4. Model uncertainty.


a Model disagreement due to model uncertainty, measured as the average prediction variance across the top k = 5 models, versus MAE performance, both plotted in log space. From this, we see that higher model disagreement correlates with worse metric performance. For the best-fit line, R² = 0.539, f(x) = 4.39x + 3.37. b A rejection diagram showing the percentage of dates on which a prediction is made, after thresholding on model disagreement due to model uncertainty, versus the MAPE performance on those dates. From this, we see that better average metric performance (on the days for which a forecast is released) can be achieved by withholding forecasts on days with higher model disagreement. Thus, the reliability of the forecasting system can be improved through model-uncertainty thresholding. For the best-fit line, R² = 0.941, f(x) = 2.18x + 9.50.
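
The uncertainty-thresholding procedure described for panel b is simple to reproduce. The following is a minimal sketch, not the authors' implementation: `preds`, `actuals`, and the synthetic data are hypothetical placeholders, disagreement is taken to be the across-model prediction variance per date (as in the caption), and each date is treated as a single point.

```python
import numpy as np

# Hypothetical inputs: preds holds point forecasts from the top k = 5 models
# with shape (k_models, n_dates); actuals holds the observed values.
rng = np.random.default_rng(0)
preds = rng.normal(100.0, 10.0, size=(5, 200))
actuals = rng.normal(100.0, 10.0, size=200)

# Panel a: disagreement = prediction variance across the k models, per date,
# compared against the absolute error of the ensemble-mean forecast.
disagreement = preds.var(axis=0)                  # across-model variance, per date
abs_error = np.abs(preds.mean(axis=0) - actuals)  # per-date absolute error

# Best-fit line in log space, as in the caption. (The slope/intercept values
# quoted there come from the paper's data, not from this synthetic example.)
slope, intercept = np.polyfit(np.log(disagreement), np.log(abs_error), deg=1)

# Panel b: rejection curve. Sweep a disagreement threshold, withhold forecasts
# on high-disagreement dates, and report (i) the percentage of dates on which
# a forecast is still issued and (ii) the MAPE on those retained dates.
for q in [1.0, 0.9, 0.75, 0.5]:
    keep = disagreement <= np.quantile(disagreement, q)
    pct_forecast = 100.0 * keep.mean()
    mape = 100.0 * np.mean(
        np.abs(preds.mean(axis=0)[keep] - actuals[keep]) / np.abs(actuals[keep])
    )
    print(f"{pct_forecast:5.1f}% of dates kept -> MAPE {mape:.2f}%")
```

Sweeping the threshold over quantiles of the disagreement distribution traces out the rejection curve: as more high-disagreement dates are withheld, the error on the remaining dates should fall if disagreement is a useful uncertainty signal, which is the trend the fitted line in panel b quantifies.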