Computational Intelligence and Neuroscience
. 2015 Dec 30;2016:3868519. doi: 10.1155/2016/3868519

Forecasting SPEI and SPI Drought Indices Using the Integrated Artificial Neural Networks

Petr Maca 1,*, Pavel Pech 1
PMCID: PMC4736223  PMID: 26880875

Abstract

The presented paper compares forecasts of drought indices produced by two different artificial neural network models. The first model is the feedforward multilayer perceptron, sANN; the second is the integrated neural network model, hANN. The analyzed drought indices are the standardized precipitation index (SPI) and the standardized precipitation evapotranspiration index (SPEI), derived for the period 1948–2002 on two US catchments. The meteorological and hydrological data were obtained from the MOPEX experiment. Both neural network models were trained with the adaptive version of differential evolution, JADE. The comparison of the models was based on six model performance measures. The results of the drought index forecasts, summarized by the values of four model performance indices, show that the integrated neural network model was superior to the feedforward multilayer perceptron with one hidden layer of neurons.

1. Introduction

Droughts are natural disasters and extreme climate events with large impacts on the economy, agriculture, water resources, tourism, and ecosystems. Reviews of significant drought events, covering their impacts, description, mitigation, and propagation in time, are presented in detail in [1–3].

Drought indices are essential tools for describing the severity of drought events. They are mainly represented as time series and are used in drought modeling and forecasting [4]. The intercomparison of different drought indices, together with the development of forecasting tools, has been studied in a large number of research works [5–9].

The recent development of artificial neural networks (ANN) has had a significant impact on the application of these techniques to the forecasting of drought indices. ANN models are mostly nonlinear, data-driven, black-box modeling techniques. Empirical studies confirm that the multilayer perceptron (MLP) trained by the backpropagation algorithm is one of the most frequently studied ANN models, since it is a universal approximator [10–15].

An important direction of ANN research in water resources is the development and application of hybrid and integrated neural network models [16–19]. For example, Shamseldin and O'Connor [20] used the feedforward MLP for updating outflow forecasts. Huo et al. [16] developed two versions of integrated ANN models and successfully applied them to monthly outflow forecasting. The first version uses the outputs of several MLP models as inputs to a final MLP; the second aggregates the outputs of several MLP models into one input to the final MLP.

The main aims of the presented paper are to develop and apply several models of integrated neural networks for the forecasting of drought indices and to compare the integrated ANN models with currently known MLP-based models. The rest of the paper is organized as follows. Section 2 describes the drought indices, the architecture of the tested neural network models, the model performance measures, the training method, and the datasets. Section 3 presents the results and discussion. Section 4 concludes the paper.

2. Material and Methods

2.1. Drought Indices

The studied droughts were described using two drought indices: the standardized precipitation index, SPI [21–23], and the standardized precipitation evapotranspiration index, SPEI [24, 25].

The SPI index is based on the evaluation of precipitation data. The precipitation data are fitted with a selected probability distribution, which is then standardized using the normal distribution with zero mean and unit standard deviation. The SPI is often classified as a meteorological drought index [11], and it is also used for the assessment of agricultural and hydrological droughts [21].

The estimation of SPI consists of determining the probability distribution of the analyzed precipitation data, calculating the probabilities of the measured precipitation values from the cumulative distribution function of the fitted distribution, and applying the inverse distribution function of the standard normal distribution to those probabilities [21, 22].
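
The three steps above can be sketched in a few lines of Python (an illustration under stated assumptions: a maximum-likelihood Gamma fit via SciPy stands in for the unbiased probability weighted moments used in the paper, and the precipitation series is synthetic):

```python
import numpy as np
from scipy import stats

def spi(precip):
    """Standardized precipitation index of a 1-D array of monthly totals."""
    precip = np.asarray(precip, dtype=float)
    # Step 1: fit a probability distribution (two-parameter Gamma, location 0).
    shape, _, scale = stats.gamma.fit(precip, floc=0)
    # Step 2: probabilities from the fitted cumulative distribution function.
    probs = stats.gamma.cdf(precip, shape, loc=0, scale=scale)
    # Step 3: inverse of the standard normal CDF gives zero mean, unit sd.
    return stats.norm.ppf(probs)

rng = np.random.default_rng(0)
sample = rng.gamma(shape=2.0, scale=50.0, size=360)  # synthetic monthly precipitation
index = spi(sample)
```

The resulting series is approximately standard normal, so negative values mark drier-than-usual months.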

The SPEI drought index is based on precipitation and potential evapotranspiration data. The potential evapotranspiration is mostly derived from temperature data. The SPEI index is expressed using the differences between precipitation and potential evapotranspiration. Its calculation technically follows the derivation of the SPI index; the only difference is that the time series of the abovementioned differences is used instead of the precipitation time series [24, 25].

The SPI and SPEI drought indices were estimated using an R package [26]. The probability distribution of the SPEI was expressed using the three-parameter log-logistic distribution; the SPI probability distribution was calculated using the Gamma distribution. The parameters were identified using the method of unbiased probability weighted moments [24, 25].

The SPI indicates the severity of droughts. The SPI values split the range into extremely dry (SPI ≤ −2), severely dry (−2 < SPI ≤ −1.5), moderately dry (−1.5 < SPI ≤ −1), and near neutral conditions (−1.0 < SPI ≤ 1.0) [22, 23].
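
The thresholds translate directly into a classification helper (an illustrative sketch; the label for SPI > 1, not covered by the list above, is our assumption):

```python
def spi_category(value):
    """Drought category of a single SPI value, following the thresholds above."""
    if value <= -2.0:
        return "extremely dry"
    if value <= -1.5:
        return "severely dry"
    if value <= -1.0:
        return "moderately dry"
    if value <= 1.0:
        return "near neutral"
    return "wet"  # assumption: values above 1 are not classified in the text
```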

2.2. sANN Model

The architecture of the first analyzed neural network model was based on the feedforward multilayer perceptron with one hidden layer of neurons, sANN (the single ANN model; see Figure 1). This type of neural network architecture has already been applied to drought index forecasting [10, 11, 27].

Figure 1. The scheme of the tested ANN architectures: sANN is based on a single MLP; hANN is the integrated ANN.

The sANN model has the following mathematical formula:

DI_f = v_0^{out} + \sum_{j=1}^{N_{hd}} v_j^{out} \, f\!\left( v_{0j}^{hd} + \sum_{i=1}^{N_{in}} v_{ji}^{hd} x_i \right), (1)

where DI_f is the network output, that is, the drought index forecast for a given time interval, x_i is the network input for input layer neuron i, normalized on the interval (0, 1), N_{in} is the number of MLP inputs, v_{ji}^{hd} is the weight of the connection between input i and hidden layer neuron j, f(·) is the activation function of all hidden layer neurons, N_{hd} is the number of hidden neurons, v_j^{out} is the weight of the connection between hidden neuron j and the output neuron, and v_{0j}^{hd} and v_0^{out} are the biases of the neurons [14, 28, 29].

The activation function of the hidden layer neurons was the RootSig [30, 31]. Its form is

y(a) = \frac{a}{1 + \sqrt{1 + a^{2}}}, (2)

with a = \sum_{i=1}^{N_{in}} v_{ji}^{hd} x_i + v_{0j}^{hd}.

Since the neuron weights of the sANN are unknown real parameters, their values were estimated using training algorithms together with the calibration and validation datasets of the analyzed drought index time series. The number of hidden layer neurons was selected according to current experience with drought index forecasting using ANN models [10, 11, 27]. The presented analysis focused on testing three sets of ANN models with different numbers of hidden layer neurons, N_{hd} = 4, 6, 8.
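
A minimal sketch of the sANN forward pass of (1) with the RootSig activation of (2); the weight shapes and random values below are illustrative placeholders, not trained parameters:

```python
import numpy as np

def rootsig(a):
    """RootSig activation: a / (1 + sqrt(1 + a^2)), bounded in (-1, 1)."""
    return a / (1.0 + np.sqrt(1.0 + a * a))

def sann_forward(x, w_hidden, b_hidden, w_out, b_out):
    """One-hidden-layer MLP: x (N_in,), w_hidden (N_hd, N_in), w_out (N_hd,)."""
    hidden = rootsig(w_hidden @ x + b_hidden)  # hidden-layer activations
    return b_out + w_out @ hidden              # scalar drought-index forecast DI_f

rng = np.random.default_rng(1)
n_in, n_hd = 3, 4                              # e.g. the 3-4-1 architecture
x = rng.uniform(0.0, 1.0, n_in)                # inputs normalized to (0, 1)
y = sann_forward(x,
                 rng.normal(size=(n_hd, n_in)), rng.normal(size=n_hd),
                 rng.normal(size=n_hd), rng.normal())
```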

2.3. hANN Model

The newly proposed hANN integrates five MLP (sANN) models; Figure 1 shows its scheme. The hANN is formed from two layers of sANN models. The first layer consists of four sANNs; the second layer is formed by one sANN. The outputs of the first layer of sANNs are the inputs of the sANN in the second layer. The final forecast of the selected drought index time series is the output of the last MLP. The tested hANN architecture was based on the integrated neural network model of Huo et al. [16].

The main enhancement lies in the inputs of the last MLP model. These inputs are the outputs of four sANN models trained according to different neural network performance statistics: MSE, dMSE, tPI, and CI (see Section 2.4). The suggested approach combines the specific aspects of training sANNs with different performance indices in one hybrid neural network. The last MLP acts as an error correction model, or static updating model, of the drought index forecast [20, 32].

The unknown parameters of the hANN are the real values of all sANN weights. The values of the weights were estimated using a global optimization algorithm and the calibration and validation datasets. We tested only those hANN models in which all five sANNs had the same number of hidden layer neurons. The training of the hANN is explained in Section 2.5. The analyzed hANN models used the same input sets for the four sANNs in the first layer of the hANN.
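
Structurally, the hANN is a composition of five such perceptrons; the sketch below (illustrative names, untrained random weights) shows how the four first-layer forecasts become the inputs of the final sANN:

```python
import numpy as np

def rootsig(a):
    return a / (1.0 + np.sqrt(1.0 + a * a))

def mlp(x, params):
    """One-hidden-layer perceptron; params = (w_hidden, b_hidden, w_out, b_out)."""
    w_h, b_h, w_o, b_o = params
    return b_o + w_o @ rootsig(w_h @ x + b_h)

def hann_forward(x, first_layer_params, final_params):
    # Forecasts of the four sANNs (trained on MSE, dMSE, tPI, CI in the paper)
    # are stacked and fed to the final, error-correcting sANN.
    stage1 = np.array([mlp(x, p) for p in first_layer_params])
    return mlp(stage1, final_params)

rng = np.random.default_rng(2)
n_in, n_hd = 3, 4
make = lambda n: (rng.normal(size=(n_hd, n)), rng.normal(size=n_hd),
                  rng.normal(size=n_hd), rng.normal())
forecast = hann_forward(rng.uniform(0.0, 1.0, n_in),
                        [make(n_in) for _ in range(4)], make(4))
```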

2.4. The Performance of ANN Models

The evaluation of the ANN simulations of drought index time series on the training and validation datasets was based on the following statistics [33–35].

Mean Absolute Error (MAE)

MAE = \frac{1}{n} \sum_{t=1}^{n} \left| DI_o[t] - DI_f[t] \right|. (3)

Mean Squared Error (MSE)

MSE = \frac{1}{n} \sum_{t=1}^{n} \left( DI_o[t] - DI_f[t] \right)^2. (4)

Mean Squared Error in Derivatives (dMSE)

dMSE = \frac{1}{n-1} \sum_{t=2}^{n} \left( dDI_o[t] - dDI_f[t] \right)^2. (5)

Nash-Sutcliffe (NS) Efficiency

NS = 1 - \frac{\sum_{t=1}^{n} \left( DI_o[t] - DI_f[t] \right)^2}{\sum_{t=1}^{n} \left( DI_o[t] - \overline{DI_o} \right)^2}. (6)

Transformed Persistency Index (tPI)

tPI = \frac{\sum_{t=1}^{n} \left( DI_o[t] - DI_f[t] \right)^2}{\sum_{t=1}^{n} \left( DI_o[t] - DI_o[t - \mathrm{LAG}] \right)^2}. (7)

Persistency Index (PI)

PI = 1 - tPI. (8)

Combined Index (CI)

CI = 0.85\, tPI + 0.15\, dMSE. (9)

Persistency Index 2 (PI2)

PI2 = 1 - \frac{\sum_{t=1}^{n} \left( DI_o[t] - DI_{ANN1}[t] \right)^2}{\sum_{t=1}^{n} \left( DI_o[t] - DI_{ANN2}[t] \right)^2}. (10)

Here n is the total number of time intervals to be predicted, \overline{DI_o} is the average of the observed drought index DI_o, dDI_o[t] = DI_o[t] − DI_o[t − 1] and dDI_f[t] = DI_f[t] − DI_f[t − 1], and LAG is the time shift defining the last observed drought index DI_o[t − LAG]; LAG is equal to two in the presented analysis. PI2 was applied to compare the forecasts DI_{ANN1} and DI_{ANN2} made by two different neural network models.
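
The statistics (3) to (9) are straightforward to transcribe; a sketch with LAG = 2 as in the paper (note that the naive-model sums in tPI can only run from t = LAG + 1):

```python
import numpy as np

def metrics(obs, fc, lag=2):
    """Performance statistics (3)-(9) for observed and forecasted index series."""
    obs = np.asarray(obs, dtype=float)
    fc = np.asarray(fc, dtype=float)
    mae = np.mean(np.abs(obs - fc))                                    # (3)
    mse = np.mean((obs - fc) ** 2)                                     # (4)
    dmse = np.mean((np.diff(obs) - np.diff(fc)) ** 2)                  # (5)
    ns = 1.0 - np.sum((obs - fc) ** 2) / np.sum((obs - obs.mean()) ** 2)  # (6)
    # (7): forecast errors relative to the naive model DI_o[t - LAG]
    tpi = np.sum((obs[lag:] - fc[lag:]) ** 2) / np.sum((obs[lag:] - obs[:-lag]) ** 2)
    pi = 1.0 - tpi                                                     # (8)
    ci = 0.85 * tpi + 0.15 * dmse                                      # (9)
    return {"MAE": mae, "MSE": mse, "dMSE": dmse, "NS": ns,
            "tPI": tpi, "PI": pi, "CI": ci}
```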

2.5. The ANN Training Method

The training of the tested sANNs was based on solving an inverse problem using a global optimization algorithm. The values of the sANN parameters were found by minimizing the performance indices MSE, dMSE, tPI, and CI, estimated on the time series of the analyzed drought indices. All sANNs were trained in batch mode. Only single objective optimization methods were used [13, 36].

The training of the hANN consisted of two steps. The first step was the training of the four first-layer sANN models, each trained using one of the four main objective functions: MSE, dMSE, tPI, and CI. The second step was the training of the last sANN; this fifth sANN was trained using one of the four objective functions and the global optimization algorithm in batch mode. The training of one hANN thus amounted to solving five single objective optimization problems [16].

The adaptive differential evolution, JADE [37], was applied as the main global optimization algorithm. JADE is an adaptive version of differential evolution, which was originally developed by Storn and Price [38]. It is a nature-inspired heuristic whose optimization process iterates over a population of models, each population member represented by the vector of its parameters. Differential evolution combines mutation, crossover, and selection operators [39, 40].

The used adaptive mutation operator has the following formula:

v_k^{hd,out}(i) = v_k^{hd,out}(i-1) + F_i \left( v_{p\text{-}best}^{hd,out} - v_k^{hd,out}(i-1) \right) + F_i \left( v_{r1}^{hd,out}(i-1) - v_{r2}^{hd,out}(i-1) \right). (11)

The value of the ANN weight v_k^{hd,out} is changed during the ith iteration using the v_{p-best}^{hd,out} parameter, which is randomly selected from the top p% of models in the population; v_{r1}^{hd,out} and v_{r2}^{hd,out} are the weights of randomly selected models from the population, and F_i is the mutation factor, which is adaptively adjusted using the Cauchy distribution. The top models of a population are those with the best values of the analyzed objective function in a given generation. The binomial crossover operator is controlled by the crossover probability CR_i, which is automatically updated using the normal distribution. A detailed explanation of the JADE parameter adaptation, together with the selection of v_{p-best}^{hd,out}, is presented in the work of Zhang and Sanderson [37].
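
The mutation step (11) can be sketched as follows (a simplified illustration: JADE's external archive and the full adaptation of F_i and CR_i are omitted, and the Cauchy location 0.5 is a placeholder):

```python
import numpy as np

def jade_mutation(pop, fitness, p=0.45, rng=None):
    """DE/current-to-pbest/1 mutation over a population of weight vectors."""
    if rng is None:
        rng = np.random.default_rng()
    n = len(pop)
    order = np.argsort(fitness)                  # best (lowest objective) first
    top = order[: max(1, int(round(p * n)))]     # indices of the top p% models
    mutants = np.empty_like(pop)
    for k in range(n):
        # Cauchy-perturbed mutation factor (location 0.5 is a placeholder).
        f_k = abs(rng.standard_cauchy() * 0.1 + 0.5)
        pbest = pop[rng.choice(top)]             # random member of the top p%
        r1, r2 = rng.choice(n, size=2, replace=False)
        mutants[k] = pop[k] + f_k * (pbest - pop[k]) + f_k * (pop[r1] - pop[r2])
    return mutants

rng = np.random.default_rng(3)
pop = rng.normal(size=(20, 5))                   # 20 candidate weight vectors
mut = jade_mutation(pop, fitness=rng.uniform(size=20), rng=rng)
```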

2.6. The Dataset Description

For the neural network prediction of the drought indices, we used data obtained from two watersheds. The data were part of the large dataset prepared within the framework of the MOPEX experiment [41, 42]. The MOPEX dataset provides benchmark hydrological and meteorological data, which have been explored in a large number of environmentally oriented studies [43–47].

The first basin was the Leaf River near Collins, Mississippi (area 1924.36 km², USGS ID 02472000), and the second was the Santa Ysabel Creek near Ramona, California (290.07 km², USGS ID 11025500); both catchments are located in the USA. The original daily records were aggregated to the monthly time scale. We used the records from the period 1948–2002; the calibration dataset was formed from the period 1948–1975, and the validation dataset consisted of the records from the period 1975–2002. The standard length of the analyzed benchmark dataset was used in the presented study [42].

Table 1 shows the inputs of the tested neural network models on both catchments. The forecasted output was DI[t] for both the SPI and SPEI drought indices. dT was the monthly mean of the differences between the daily maximum and minimum temperatures.

Table 1.

The inputs for all tested neural network models.

Input set Number of inputs Input variables
3-N_hd-1 3 DI[t − i] for i = 2, 3, 4
6-N_hd-1 6 DI[t − i] and dT[t − i] for i = 2, 3, 4
9-N_hd-1 9 DI[t − i] for i = 2, 3, …, 10

Although several automatic linear and nonlinear input selection procedures for ANN models have been derived, the applied input selection procedure was iterative. Estimates of the cross-correlation and autocorrelation of the input time series were used to decide on the final tested input sets [48–50]. Since ANN models are capable of capturing the nonlinearities between input and output data, we compared ANN simulations obtained with several combinations of input variables with different memories. Table 1 presents the final list of tested inputs.
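
The autocorrelation screening can be illustrated on a synthetic series (the AR(1) toy record and its coefficient 0.8 are assumptions standing in for a real SPI or SPEI series):

```python
import numpy as np

def autocorr(x, max_lag):
    """Sample autocorrelation of a 1-D series at lags 1..max_lag."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    denom = np.sum(x * x)
    return [np.sum(x[lag:] * x[:-lag]) / denom for lag in range(1, max_lag + 1)]

rng = np.random.default_rng(4)
series = np.zeros(600)
for t in range(1, 600):                    # AR(1) toy series with coefficient 0.8
    series[t] = 0.8 * series[t - 1] + rng.normal()
acf = autocorr(series, max_lag=10)         # candidate memory lags i = 1..10
```

Lags with high autocorrelation are natural candidates for the DI[t − i] inputs of Table 1.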

A nonlinear transformation was applied to all ANN datasets. Its form was

D_{trans} = 1 - \exp\left( -\gamma \left( D_{orig} + 1.2 \left| \min(D_{orig}) \right| \right) \right), (12)

with original data D_{orig}, transformed data D_{trans}, and the minimum of the untransformed data min(D_{orig}). This nonlinear transformation emphasizes the low values, which are connected to severe drought events.
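
A sketch of the transformation, assuming the shift term is 1.2|min(D_orig)| so that the argument of the exponential stays positive (the sign and absolute value are our reading of the source), with the gamma = 0.15 used in Section 3:

```python
import numpy as np

def transform(d, gamma=0.15):
    """Nonlinear transformation emphasizing low (severe-drought) index values.

    Assumption: the shift uses |min(d)|; the garbled source does not show
    the sign explicitly."""
    d = np.asarray(d, dtype=float)
    return 1.0 - np.exp(-gamma * (d + 1.2 * abs(d.min())))

spi_like = np.array([-3.0, -2.0, -1.0, 0.0, 1.0, 2.0, 3.0])
t = transform(spi_like)
```

Under this reading the mapping is monotone increasing, with the largest spacing at the low (drought) end of the range.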

3. Results and Discussion

In our experiment, we analyzed three sANN architectures for each input set, formed by three different numbers of hidden neurons, N_{hd} = 4, 6, 8. All sANN and hANN models were calibrated 25 times using the four objective functions (MSE, tPI, CI, and dMSE) and the JADE optimizer. All five sANN models in one hANN had the same value of N_{hd}. In total, we tested 1800 sANN models and 1800 hANN models.

The initial settings of the JADE hyperparameters were the same for all ANN model runs. The population consisted of 20 × the number of weights in the sANN or hANN; the v_{p-best}^{hd,out} was randomly selected from the 45% best models of the population, where the best models of a given generation were those ranked first by the values of the objective function. The number of generations was 40. The selected values balanced exploration and exploitation during the search process and helped to avoid premature convergence of the population of models. The hyperparameter γ of (12) was set to 0.15.

The results of the ANN models trained using the dMSE are omitted from our presentation, since these models provided the worst results. However, the outputs generated by the sANNs trained with the dMSE served as inputs to the last sANN in all tested hANN models.

Since the Persistency Index (PI) is sensitive to the timing error of a forecast and enables the comparison of the drought index simulations with the naive model, formed by the last known value of the drought index [33], we selected it as the main reference index.

3.1. The Forecast of SPI Index

The medians of the model performance indices for the SPI forecast are presented in Tables 2, 3, and 4. The medians were calculated for each set of 25 simulation runs. The SPI results are in several respects similar to those of the SPEI forecast.

Table 2.

The medians of performance metrics for MLP architecture 3-N hd-1 on SPI forecast for sANN_TA and hANN_TA; TA shows the training error function, and the best models are marked with bold fonts.

Calibration period Validation period
MAE MSE dMSE NS PI MAE MSE dMSE NS PI
Leaf River
3-4-1
 sANN_MSE 3.88E − 01 2.56E − 01 1.85E − 01 7.45E − 01 −7.57E − 02 4.40E − 01 3.05E − 01 1.68E − 01 6.36E − 01 −1.55E − 01
  sANN_tPI 3.90E − 01 2.57E − 01 1.90E − 01 7.45E − 01 −7.72E − 02 4.32E − 01 2.99E − 01 1.72E − 01 6.44E − 01 −1.32E − 01
 sANN_PID 3.86E − 01 2.50E − 01 1.77E − 01 7.51E − 01 −4.95E − 02 4.31E − 01 2.93E − 01 1.67E − 01 6.50E − 01 −1.12E − 01
 hANN_MSE 3.66E − 01 2.27E − 01 1.55E − 01 7.38E − 01 4.80E − 02 4.00E − 01 2.55E − 01 1.47E − 01 6.40E − 01 3.35E − 02
 hANN_tPI 3.67E − 01 2.26E − 01 1.50E − 01 7.44E − 01 5.15E − 02 3.96E − 01 2.49E − 01 1.44E − 01 6.27E − 01 5.49E − 02
 hANN_PID 3.64E − 01 2.25E − 01 1.69E − 01 7.39E − 01 5.55E − 02 3.94E − 01 2.48E − 01 1.52E − 01 6.20E − 01 6.15E − 02
3-6-1
 sANN_MSE 4.08E − 01 2.71E − 01 2.02E − 01 7.31E − 01 −1.36E − 01 4.24E − 01 2.83E − 01 1.69E − 01 6.62E − 01 −7.47E − 02
  sANN_tPI 3.81E − 01 2.49E − 01 1.90E − 01 7.53E − 01 −4.34E − 02 4.21E − 01 2.81E − 01 1.74E − 01 6.65E − 01 −6.39E − 02
 sANN_PID 3.84E − 01 2.47E − 01 1.85E − 01 7.55E − 01 −3.59E − 02 4.24E − 01 2.85E − 01 1.69E − 01 6.60E − 01 −7.91E − 02
 hANN_MSE 3.64E − 01 2.23E − 01 1.62E − 01 7.51E − 01 6.39E − 02 3.88E − 01 2.42E − 01 1.54E − 01 6.30E − 01 8.24E − 02
  hANN_tPI 3.63E − 01 2.23E − 01 1.48E − 01 7.46E − 01 6.32E − 02 3.86E − 01 2.34E − 01 1.45E − 01 6.25E − 01 1.14E − 01
 hANN_PID 3.64E − 01 2.24E − 01 1.59E − 01 7.51E − 01 5.94E − 02 3.89E − 01 2.39E − 01 1.54E − 01 6.33E − 01 9.33E − 02
3-8-1
 sANN_MSE 3.74E − 01 2.32E − 01 2.07E − 01 7.60E − 01 2.71e − 02 4.10E − 01 2.66E − 01 1.84E − 01 6.83E − 01 −6.75E − 03
  sANN_tPI 3.76E − 01 2.40E − 01 2.08E − 01 7.61E − 01 −7.60E − 03 4.14E − 01 2.73E − 01 1.85E − 01 6.75E − 01 −3.40E − 02
 sANN_PID 3.78E − 01 2.43E − 01 2.02E − 01 7.59E − 01 −1.83E − 02 4.15E − 01 2.74E − 01 1.80E − 01 6.74E − 01 −3.70E − 02
 hANN_MSE 3.63E − 01 2.23E − 01 1.71E − 01 7.61E − 01 6.56E − 02 3.93E − 01 2.43E − 01 1.56E − 01 6.59E − 01 8.02E − 02
  hANN_tPI 3.64E − 01 2.24E − 01 1.67E − 01 7.62E − 01 5.93E − 02 3.83E − 01 2.36E − 01 1.55E − 01 6.54E − 01 1.05E − 01
 hANN_PID 3.63E − 01 2.25E − 01 1.75E − 01 7.63E − 01 5.71E − 02 3.85E − 01 2.38E − 01 1.60E − 01 6.67E − 01 9.64E − 02
Santa Ysabel Creek
3-4-1
 sANN_MSE 3.41E − 01 2.28E − 01 1.74E − 01 5.48E − 01 3.68E − 02 4.75E − 01 3.83E − 01 1.99E − 01 6.66E − 01 −2.88E − 01
  sANN_tPI 3.50E − 01 2.35E − 01 1.61E − 01 5.34E − 01 7.74E − 03 4.36E − 01 3.52E − 01 1.85E − 01 6.93E − 01 −1.83E − 01
 sANN_PID 3.45E − 01 2.30E − 01 1.67E − 01 5.43E − 01 2.75E − 02 4.75E − 01 3.85E − 01 2.01E − 01 6.64E − 01 −2.95E − 01
 hANN_MSE 3.03E − 01 2.10E − 01 1.41E − 01 5.29E − 01 1.13E − 01 3.59E − 01 2.86E − 01 1.61E − 01 5.64E − 01 3.75E − 02
  hANN_tPI 3.09E − 01 2.09E − 01 1.36E − 01 5.17E − 01 1.16E − 01 3.57E − 01 2.83E − 01 1.62E − 01 6.30E − 01 4.73E − 02
 hANN_PID 2.97E − 01 2.08E − 01 1.44E − 01 5.36E − 01 1.20E − 01 3.38E − 01 2.84E − 01 1.65E − 01 5.58E − 01 4.39E − 02
3-6-1
 sANN_MSE 3.42E − 01 2.29E − 01 1.67E − 01 5.46E − 01 3.31E − 02 4.49E − 01 3.57E − 01 1.89E − 01 6.89E − 01 −2.00E − 01
  sANN_tPI 3.39E − 01 2.24E − 01 1.88E − 01 5.55E − 01 5.23E − 02 4.50E − 01 3.48E − 01 2.18E − 01 6.96E − 01 −1.70E − 01
 sANN_PID 3.30E − 01 2.22E − 01 1.73E − 01 5.60E − 01 6.21E − 02 4.34E − 01 3.45E − 01 2.01E − 01 6.99E − 01 −1.60E − 01
 hANN_MSE 3.02E − 01 2.07E − 01 1.53E − 01 5.54E − 01 1.26E − 01 3.43E − 01 2.78E − 01 1.76E − 01 6.53E − 01 6.31E − 02
  hANN_tPI 3.06E − 01 2.06E − 01 1.50E − 01 5.58E − 01 1.28E − 01 3.55E − 01 2.78E − 01 1.76E − 01 6.20E − 01 6.58E − 02
 hANN_PID 3.04E − 01 2.08E − 01 1.59E − 01 5.56E − 01 1.22E − 01 3.55E − 01 2.76E − 01 1.82E − 01 6.39E − 01 7.16E − 02
3-8-1
 sANN_MSE 3.44E − 01 2.23E − 01 1.87E − 01 5.57E − 01 5.68E − 02 4.50E − 01 3.58E − 01 2.16E − 01 6.88E − 01 −2.04E − 01
  sANN_tPI 3.30E − 01 2.24E − 01 1.89E − 01 5.56E − 01 5.42E − 02 4.19E − 01 3.30E − 01 2.21E − 01 7.12E − 01 −1.11E − 01
 sANN_PID 3.27E − 01 2.22E − 01 1.90E − 01 5.60E − 01 6.32E − 02 4.22E − 01 3.27E − 01 2.23E − 01 7.14E − 01 −1.00E − 01
 hANN_MSE 3.04E − 01 2.06E − 01 1.68E − 01 5.62E − 01 1.29E − 01 3.67E − 01 2.85E − 01 1.93E − 01 6.42E − 01 4.24E − 02
  hANN_tPI 3.02E − 01 2.06E − 01 1.57E − 01 5.63E − 01 1.31E − 01 3.54E − 01 2.77E − 01 1.80E − 01 6.30E − 01 6.66E − 02
 hANN_PID 3.07E − 01 2.06E − 01 1.55E − 01 5.54E − 01 1.29E − 01 3.82E − 01 2.91E − 01 1.81E − 01 6.36E − 01 1.99E − 02

Table 3.

The medians of performance metrics for MLP architecture 6-N hd-1 on SPI forecast for sANN_TA and hANN_TA; TA shows the training error function, and the best models are marked with bold fonts.

Calibration period Validation period
MAE MSE dMSE NS PI MAE MSE dMSE NS PI
Leaf River
6-4-1
 sANN_MSE 5.76E − 01 5.25E − 01 2.71E − 01 4.78E − 01 −1.20E + 00 5.92E − 01 5.43E − 01 2.70E − 01 3.52E − 01 −1.06E + 00
 sANN_tPI 5.91E − 01 5.53E − 01 2.92E − 01 4.50E − 01 −1.32E + 00 6.18E − 01 6.09E − 01 3.02E − 01 2.74E − 01 −1.31E + 00
 sANN_PID 5.78E − 01 5.28E − 01 2.53E − 01 4.75E − 01 −1.22E + 00 6.20E − 01 5.94E − 01 2.40E − 01 2.91E − 01 −1.25E + 00
 hANN_MSE 4.02E − 01 2.81E − 01 1.50E − 01 6.13E − 01 −1.79E − 01 4.46E − 01 3.24E − 01 1.53E − 01 4.32E − 01 −2.30E − 01
 hANN_dMSE 2.81E + 00 8.90E + 00 1.10E − 01 −6.17E + 03 −3.63E + 01 3.19E + 00 1.10E + 01 1.15E − 01 −7.32E + 03 −4.08E + 01
 hANN_tPI 3.91E − 01 2.53E − 01 1.48E − 01 6.09E − 01 −6.22E − 02 4.39E − 01 3.06E − 01 1.50E − 01 4.27E − 01 −1.59E − 01
 hANN_PID 3.99E − 01 2.61E − 01 1.46E − 01 5.98E − 01 −9.73E − 02 4.45E − 01 3.08E − 01 1.51E − 01 3.89E − 01 −1.66E − 01
6-6-1
 sANN_MSE 5.66E − 01 5.12E − 01 2.31E − 01 4.91E − 01 −1.15E + 00 6.22E − 01 5.84E − 01 2.38E − 01 3.03E − 01 −1.22E + 00
  sANN_tPI 5.90E − 01 5.51E − 01 2.81E − 01 4.52E − 01 −1.31E + 00 6.22E − 01 5.77E − 01 2.74E − 01 3.11E − 01 −1.19E + 00
 sANN_PID 5.74E − 01 5.33E − 01 2.39E − 01 4.70E − 01 −1.24E + 00 6.27E − 01 5.99E − 01 2.33E − 01 2.85E − 01 −1.27E + 00
 hANN_MSE 3.95E − 01 2.62E − 01 1.41E − 01 6.16E − 01 −1.01E − 01 4.21E − 01 2.74E − 01 1.54E − 01 4.36E − 01 −4.04E − 02
  hANN_tPI 3.91E − 01 2.57E − 01 1.36E − 01 6.05E − 01 −7.68E − 02 4.19E − 01 2.70E − 01 1.40E − 01 4.24E − 01 −2.43E − 02
 hANN_PID 3.86E − 01 2.55E − 01 1.47E − 01 6.04E − 01 −7.19E − 02 4.20E − 01 2.87E − 01 1.55E − 01 3.98E − 01 −8.81E − 02
6-8-1
 sANN_MSE 5.99E − 01 5.72E − 01 2.90E − 01 4.31E − 01 −1.40E + 00 6.40E − 01 6.21E − 01 3.06E − 01 2.59E − 01 −1.35E + 00
  sANN_tPI 6.16E − 01 5.80E − 01 3.24E − 01 4.24E − 01 −1.43E + 00 6.27E − 01 6.35E − 01 3.22E − 01 2.43E − 01 −1.41E + 00
 sANN_PID 6.17E − 01 5.87E − 01 3.60E − 01 4.16E − 01 −1.46E + 00 6.39E − 01 6.16E − 01 3.26E − 01 2.65E − 01 −1.33E + 00
 hANN_MSE 3.91E − 01 2.56E − 01 1.51E − 01 5.07E − 01 −7.55E − 02 3.99E − 01 2.51E − 01 1.44E − 01 2.86E − 01 4.80E − 02
  hANN_tPI 3.83E − 01 2.47E − 01 1.46E − 01 5.11E − 01 −3.68E − 02 4.30E − 01 2.90E − 01 1.49E − 01 2.57E − 01 −9.82E − 02
 hANN_PID 3.86E − 01 2.47E − 01 1.55E − 01 5.11E − 01 −3.72E − 02 3.98E − 01 2.49E − 01 1.56E − 01 3.12E − 01 5.66E − 02
Santa Ysabel Creek
6-4-1
 sANN_MSE 5.00E − 01 4.04E − 01 2.01E − 01 1.97E − 01 −7.10E − 01 7.08E − 01 7.81E − 01 2.15E − 01 3.18E − 01 −1.63E + 00
  sANN_tPI 5.24E − 01 4.27E − 01 1.95E − 01 1.53E − 01 −8.05E − 01 7.09E − 01 7.75E − 01 2.06E − 01 3.23E − 01 −1.61E + 00
 sANN_PID 5.28E − 01 4.51E − 01 2.19E − 01 1.04E − 01 −9.09E − 01 7.73E − 01 9.23E − 01 2.25E − 01 1.94E − 01 −2.10E + 00
 hANN_MSE 3.50E − 01 2.42E − 01 1.24E − 01 2.01E − 01 −2.44E − 02 4.09E − 01 3.28E − 01 1.47E − 01 −7.21E − 02 −1.03E − 01
  hANN_tPI 3.26E − 01 2.26E − 01 1.25E − 01 2.15E − 01 4.59E − 02 4.15E − 01 3.31E − 01 1.42E − 01 1.26E − 01 −1.13E − 01
 hANN_PID 3.45E − 01 2.30E − 01 1.24E − 01 1.64E − 01 2.83E − 02 4.26E − 01 3.53E − 01 1.46E − 01 −1.38E − 01 −1.89E − 01
6-6-1
 sANN_MSE 5.11E − 01 4.22E − 01 1.96E − 01 1.63E − 01 −7.82E − 01 6.17E − 01 6.03E − 01 2.21E − 01 4.73E − 01 −1.03E + 00
  sANN_tPI 5.43E − 01 4.67E − 01 3.03E − 01 7.31E − 02 −9.75E − 01 7.55E − 01 8.69E − 01 3.11E − 01 2.41E − 01 −1.93E + 00
 sANN_PID 5.22E − 01 4.26E − 01 2.44E − 01 1.54E − 01 −8.01E − 01 6.86E − 01 7.11E − 01 2.57E − 01 3.79E − 01 −1.39E + 00
 hANN_MSE 3.42E − 01 2.22E − 01 1.26E − 01 3.35E − 01 6.10E − 02 4.08E − 01 3.31E − 01 1.51E − 01 3.30E − 01 −1.13E − 01
  hANN_tPI 3.03E − 01 2.21E − 01 1.17E − 01 3.18E − 01 6.40E − 02 3.58E − 01 2.94E − 01 1.41E − 01 2.65E − 01 1.06E − 02
 hANN_PID 3.36E − 01 2.19E − 01 1.18E − 01 3.06E − 01 7.34E − 02 4.01E − 01 3.07E − 01 1.43E − 01 2.84E − 01 −3.36E − 02
6-8-1
 sANN_MSE 5.15E − 01 4.38E − 01 2.78E − 01 1.32E − 01 −8.50E − 01 6.17E − 01 5.94E − 01 2.92E − 01 4.81E − 01 −9.99E − 01
  sANN_tPI 5.24E − 01 4.45E − 01 2.72E − 01 1.17E − 01 −8.81E − 01 6.21E − 01 6.44E − 01 2.91E − 01 4.38E − 01 −1.17E + 00
 sANN_PID 5.47E − 01 4.93E − 01 2.72E − 01 2.24E − 02 −1.08E + 00 6.73E − 01 7.15E − 01 2.84E − 01 3.76E − 01 −1.40E + 00
 hANN_MSE 3.36E − 01 2.18E − 01 1.40E − 01 2.67E − 01 7.84E − 02 4.23E − 01 3.33E − 01 1.61E − 01 3.22E − 01 −1.20E − 01
  hANN_tPI 3.33E − 01 2.25E − 01 1.37E − 01 2.74E − 01 5.07E − 02 4.04E − 01 3.28E − 01 1.53E − 01 3.65E − 01 −1.04E − 01
 hANN_PID 3.31E − 01 2.31E − 01 1.41E − 01 2.50E − 01 2.30E − 02 3.75E − 01 3.11E − 01 1.68E − 01 2.03E − 01 −4.73E − 02

Table 4.

The medians of performance metrics for MLP architecture 9-N hd-1 on SPI forecast for sANN_TA and hANN_TA; TA shows the training error function, and the best models are marked with bold fonts.

Calibration period Validation period
MAE MSE dMSE NS PI MAE MSE dMSE NS PI
Leaf River
9-4-1
  sANN_MSE 3.77E − 01 2.35E − 01 1.71E − 01 7.51E − 01 7.42E − 01 4.54E − 01 3.38E − 01 1.89E − 01 6.05E − 01 7.31E − 01
  sANN_tPI 3.79E − 01 2.36E − 01 1.98E − 01 7.50E − 01 7.41E − 01 4.60E − 01 3.44E − 01 2.16E − 01 5.98E − 01 7.26E − 01
 sANN_PID 3.86E − 01 2.45E − 01 1.52E − 01 7.41E − 01 7.32E − 01 4.62E − 01 3.50E − 01 1.75E − 01 5.91E − 01 7.22E − 01
 hANN_MSE 3.46E − 01 2.01E − 01 1.54E − 01 7.61E − 01 7.80E − 01 4.09E − 01 2.73E − 01 1.62E − 01 5.91E − 01 7.83E − 01
  hANN_tPI 3.44E − 01 2.02E − 01 1.53E − 01 7.66E − 01 3.86E − 02 4.16E − 01 2.83E − 01 1.62E − 01 5.79E − 01 7.75E − 01
 hANN_PID 3.47E − 01 2.01E − 01 1.36E − 01 7.57E − 01 7.79E − 01 4.08E − 01 2.74E − 01 1.57E − 01 5.75E − 01 7.82E − 01
9-6-1
 sANN_MSE 3.75E − 01 2.29E − 01 2.00E − 01 7.57E − 01 7.49E − 01 4.44E − 01 3.23E − 01 2.14E − 01 6.23E − 01 7.43E − 01
  sANN_tPI 3.72E − 01 2.28E − 01 1.88E − 01 7.59E − 01 7.50E − 01 4.45E − 01 3.21E − 01 2.08E − 01 6.25E − 01 7.45E − 01
 sANN_PID 3.82E − 01 2.45E − 01 1.63E − 01 7.41E − 01 7.32E − 01 4.69E − 01 3.63E − 01 1.86E − 01 5.76E − 01 7.11E − 01
 hANN_MSE 3.42E − 01 1.98E − 01 1.57E − 01 7.68E − 01 7.84E − 01 4.08E − 01 2.79E − 01 1.69E − 01 5.88E − 01 7.79E − 01
  hANN_tPI 3.44E − 01 1.99E − 01 1.59E − 01 7.70E − 01 7.82E − 01 4.03E − 01 2.68E − 01 1.74E − 01 6.03E − 01 7.87E − 01
 hANN_PID 3.46E − 01 2.02E − 01 1.44E − 01 7.53E − 01 7.79E − 01 4.10E − 01 2.73E − 01 1.50E − 01 5.87E − 01 7.83E − 01
9-8-1
 sANN_MSE 3.72E − 01 2.29E − 01 1.91E − 01 7.57E − 01 7.49E − 01 4.41E − 01 3.19E − 01 2.15E − 01 6.27E − 01 7.46E − 01
  sANN_tPI 3.79E − 01 2.38E − 01 1.96E − 01 7.48E − 01 7.39E − 01 4.54E − 01 3.36E − 01 2.18E − 01 6.07E − 01 7.33E − 01
 sANN_PID 3.78E − 01 2.44E − 01 1.65E − 01 7.42E − 01 7.33E − 01 4.66E − 01 3.50E − 01 1.85E − 01 5.91E − 01 7.22E − 01
 hANN_MSE 3.45E − 01 1.96E − 01 1.57E − 01 7.75E − 01 7.85E − 01 4.04E − 01 2.69E − 01 1.71E − 01 6.07E − 01 7.86E − 01
  hANN_tPI 3.45E − 01 1.97E − 01 1.64E − 01 7.75E − 01 7.84E − 01 4.08E − 01 2.74E − 01 1.71E − 01 5.94E − 01 7.82E − 01
 hANN_PID 3.44E − 01 1.98E − 01 1.49E − 01 7.66E − 01 7.83E − 01 4.07E − 01 2.77E − 01 1.63E − 01 6.04E − 01 7.80E − 01
Santa Ysabel Creek
9-4-1
 sANN_MSE 3.97E − 01 2.88E − 01 2.27E − 01 5.56E − 01 7.64E − 01 5.37E − 01 4.50E − 01 2.12E − 01 6.25E − 01 6.45E − 01
  sANN_tPI 4.05E − 01 2.90E − 01 2.33E − 01 5.53E − 01 7.62E − 01 5.59E − 01 4.77E − 01 2.20E − 01 6.02E − 01 6.23E − 01
 sANN_PID 4.17E − 01 3.07E − 01 1.87E − 01 5.27E − 01 7.48E − 01 6.39E − 01 5.99E − 01 1.74E − 01 5.01E − 01 5.27E − 01
 hANN_MSE 3.57E − 01 2.52E − 01 1.98E − 01 5.75E − 01 7.93E − 01 4.61E − 01 3.49E − 01 1.81E − 01 5.07E − 01 7.25E − 01
  hANN_tPI 3.56E − 01 2.48E − 01 1.90E − 01 5.77E − 01 7.96E − 01 4.52E − 01 3.36E − 01 1.75E − 01 5.50E − 01 7.34E − 01
 hANN_PID 3.67E − 01 2.62E − 01 1.76E − 01 5.51E − 01 7.85E − 01 5.04E − 01 3.97E − 01 1.65E − 01 4.70E − 01 6.87E − 01
9-6-1
 sANN_MSE 4.06E − 01 2.88E − 01 2.43E − 01 5.56E − 01 7.64E − 01 5.42E − 01 4.55E − 01 2.20E − 01 6.20E − 01 6.41E − 01
  sANN_tPI 3.98E − 01 2.85E − 01 2.59E − 01 5.61E − 01 7.66E − 01 5.43E − 01 4.48E − 01 2.44E − 01 6.26E − 01 6.46E − 01
 sANN_PID 4.22E − 01 3.07E − 01 1.95E − 01 5.26E − 01 7.48E − 01 6.36E − 01 6.02E − 01 1.81E − 01 4.98E − 01 5.25E − 01
 hANN_MSE 3.56E − 01 2.48E − 01 1.96E − 01 5.78E − 01 7.96E − 01 4.73E − 01 3.59E − 01 1.83E − 01 5.77E − 01 7.16E − 01
  hANN_tPI 3.58E − 01 2.49E − 01 2.02E − 01 5.74E − 01 7.96E − 01 4.46E − 01 3.34E − 01 1.86E − 01 5.72E − 01 7.36E − 01
 hANN_PID 3.75E − 01 2.60E − 01 1.75E − 01 5.23E − 01 7.87E − 01 5.35E − 01 4.34E − 01 1.61E − 01 4.41E − 01 6.57E − 01
9-8-1
 sANN_MSE 4.08E − 01 2.91E − 01 2.68E − 01 5.52E − 01 7.61E − 01 5.16E − 01 4.13E − 01 2.41E − 01 6.56E − 01 6.74E − 01
  sANN_tPI 4.04E − 01 2.90E − 01 2.49E − 01 5.54E − 01 7.62E − 01 5.38E − 01 4.38E − 01 2.39E − 01 6.34E − 01 6.54E − 01
 sANN_PID 4.24E − 01 3.13E − 01 1.99E − 01 5.18E − 01 7.43E − 01 6.34E − 01 6.00E − 01 1.85E − 01 4.99E − 01 5.26E − 01
 hANN_MSE 3.54E − 01 2.46E − 01 1.90E − 01 5.74E − 01 7.98E − 01 4.41E − 01 3.55E − 01 1.86E − 01 5.47E − 01 7.20E − 01
  hANN_tPI 3.56E − 01 2.49E − 01 2.01E − 01 5.78E − 01 7.95E − 01 4.42E − 01 3.31E − 01 1.94E − 01 5.28E − 01 7.38E − 01
 hANN_PID 3.66E − 01 2.61E − 01 1.75E − 01 5.44E − 01 7.86E − 01 5.34E − 01 4.34E − 01 1.61E − 01 4.76E − 01 6.57E − 01

When comparing the hANN results with those of the sANN formed from a single multilayer perceptron, the hANN was superior in terms of the medians of MAE, dMSE, MSE, and PI in the calibration period for both catchments.

hANN models with nine inputs provided better SPI forecasts, according to the medians of the PI index, than models with six and three inputs. Similar recommendations on input datasets were confirmed in [10, 51].

The best model according to the medians of the PI values was the hANN with the 9-8-1 architecture and the last sANN trained by the MSE on the Santa Ysabel Creek calibration dataset (PI = 0.80). The best median of the Persistency Index on a validation dataset (PI = 0.79) was obtained from the hANN forecast on the Leaf River dataset with the 9-6-1 architecture and the tPI as the optimization criterion of the last sANN (see Table 4).

Figures 2 and 3 show the SPI forecasts in calibration and validation, respectively, obtained using the hANN models with the highest medians of the Persistency Index.

Figure 2. The calibration results of the SPI forecast using hANN 9-8-1 trained on MSE: the blue line shows observations, the green line the forecasted SPI medians, and the dotted lines the simulation error bounds.

Figure 3. The validation results of the SPI forecast using hANN (Leaf River: 9-6-1 trained on PI; Santa Ysabel Creek: 9-8-1 trained on PI): the blue line shows observations, the green lines the forecasted SPI medians, and the dotted lines the simulation error bounds.

Results of the PI2 index for the models with 9-N hd-1 architecture show that the 9-8-1 architecture was superior for SPI forecasts on the validation datasets of both analyzed catchments (see Table 5). The comparison of calibration results shows that the 9-6-1 architecture had the highest PI2 values on the Santa Ysabel Creek dataset, while 9-8-1 had the highest values on the Leaf River calibration dataset.

Table 5.

The values of PI2 on architectures 9-N hd-1 for SPI forecast with hANN.

Calibration period Validation period
9-4-1 9-6-1 9-8-1 9-4-1 9-6-1 9-8-1
Leaf River
9-4-1 0.00E + 00 −1.52E − 02 −2.91E − 02 0.00E + 00 −6.31E − 03 −3.64E − 02
9-6-1 1.50E − 02 0.00E + 00 −1.37E − 02 6.27E − 03 0.00E + 00 −2.99E − 02
9-8-1 2.83E − 02 1.35E − 02 0.00E + 00 3.51E − 02 2.90E − 02 0.00E + 00
Santa Ysabel
9-4-1 0.00E + 00 −6.92E − 03 −4.77E − 03 0.00E + 00 2.13E − 02 −1.83E − 03
9-6-1 6.88E − 03 0.00E + 00 2.13E − 03 −2.18E − 02 0.00E + 00 −2.36E − 02
9-8-1 4.75E − 03 −2.14E − 03 0.00E + 00 1.82E − 03 2.31E − 02 0.00E + 00

Table 6 shows the results of the PI2 index calculated from the results of hANN with 9-N hd-1 architecture. The PI2 values enable us to compare the tested ANN models according to the optimization function applied to the training of the last MLP. The hANN models with the last sANN trained on tPI were superior to hANN with the last sANN trained using MSE or CI on the calibration and validation Leaf River datasets.

Table 6.

The values of PI2 on training functions for SPI forecast with hANN.

Calibration period Validation period
MSE tPI PID MSE tPI PID
Leaf River
MSE 0.00E + 00 −2.94E − 03 1.64E − 02 0.00E + 00 −5.25E − 03 2.44E − 02
tPI 2.94E − 03 0.00E + 00 1.93E − 02 5.22E − 03 0.00E + 00 2.95E − 02
PID −1.67E − 02 −1.97E − 02 0.00E + 00 −2.50E − 02 −3.04E − 02 0.00E + 00
Santa Ysabel
MSE 0.00E + 00 −1.07E − 03 5.61E − 02 0.00E + 00 1.51E − 03 2.07E − 01
tPI 1.06E − 03 0.00E + 00 5.71E − 02 −1.51E − 03 0.00E + 00 2.06E − 01
PID −5.94E − 02 −6.06E − 02 0.00E + 00 −2.61E − 01 −2.59E − 01 0.00E + 00

The Santa Ysabel Creek datasets show that, according to PI2, the best results were obtained when tPI was used to calibrate the last sANN of the hANN models. The PI2 values for the validation dataset show that hANN with the last sANN trained using MSE provided better simulation results than the remaining hANN models (see Table 6).

3.2. The Forecast of SPEI Index

The mean performance of the neural network models was summarized using the medians of the model evaluation metrics, obtained from 25 runs on each basin for each ANN model architecture. Tables 7, 8, and 9 show the evaluations of the SPEI forecasts using the medians of MAE, MSE, dMSE, NS, and PI.

Table 7.

The medians of performance metrics for MLP architecture 3-N hd-1 on SPEI forecast for sANN_TA and hANN_TA; TA shows the training error function, and the best models are marked with bold fonts.

Calibration period Validation period
MAE MSE dMSE NS PI MAE MSE dMSE NS PI
Leaf River
3-4-1
 sANN_MSE 3.76E − 01 2.36E − 01 1.63E − 01 7.62E − 01 −6.99E − 02 4.43E − 01 3.10E − 01 1.72E − 01 6.43E − 01 −1.28E − 01
  sANN_tPI 3.80E − 01 2.46E − 01 1.71E − 01 7.52E − 01 −1.15E − 01 4.47E − 01 3.13E − 01 1.74E − 01 6.40E − 01 −1.37E − 01
 sANN_PID 3.77E − 01 2.35E − 01 1.62E − 01 7.64E − 01 −6.34E − 02 4.46E − 01 3.12E − 01 1.76E − 01 6.41E − 01 −1.35E − 01
 hANN_MSE 3.53E − 01 2.12E − 01 1.44E − 01 7.60E − 01 3.98E − 02 3.98E − 01 2.53E − 01 1.54E − 01 6.13E − 01 7.90E − 02
  hANN_tPI 3.54E − 01 2.13E − 01 1.46E − 01 7.51E − 01 3.62E − 02 4.05E − 01 2.58E − 01 1.53E − 01 6.02E − 01 6.09E − 02
 hANN_PID 3.51E − 01 2.10E − 01 1.43E − 01 7.48E − 01 5.07E − 02 4.05E − 01 2.61E − 01 1.52E − 01 5.91E − 01 5.05E − 02
3-6-1
 sANN_MSE 3.79E − 01 2.36E − 01 1.79E − 01 7.62E − 01 −7.07E − 02 4.37E − 01 3.00E − 01 1.86E − 01 6.55E − 01 −9.08E − 02
  sANN_tPI 3.69E − 01 2.27E − 01 1.78E − 01 7.72E − 01 −2.66E − 02 4.30E − 01 2.91E − 01 1.83E − 01 6.65E − 01 −5.78E − 02
 sANN_PID 3.70E − 01 2.32E − 01 1.68E − 01 7.67E − 01 −4.87E − 02 4.33E − 01 2.94E − 01 1.71E − 01 6.62E − 01 −6.89E − 02
 hANN_MSE 3.52E − 01 2.08E − 01 1.47E − 01 7.64E − 01 5.87E − 02 3.98E − 01 2.52E − 01 1.55E − 01 6.46E − 01 8.21E − 02
  hANN_tPI 3.51E − 01 2.08E − 01 1.53E − 01 7.62E − 01 5.94E − 02 3.97E − 01 2.53E − 01 1.60E − 01 6.34E − 01 8.07E − 02
 hANN_PID 3.55E − 01 2.09E − 01 1.52E − 01 7.63E − 01 5.41E − 02 3.95E − 01 2.53E − 01 1.60E − 01 6.30E − 01 8.12E − 02
3-8-1
 sANN_MSE 3.71E − 01 2.24E − 01 2.02E − 01 7.75E − 01 −1.34E − 02 4.20E − 01 2.78E − 01 2.08E − 01 6.80E − 01 −1.18E − 02
  sANN_tPI 3.72E − 01 2.28E − 01 1.75E − 01 7.71E − 01 −3.12E − 02 4.24E − 01 2.86E − 01 1.83E − 01 6.71E − 01 −3.81E − 02
 sANN_PID 3.71E − 01 2.27E − 01 1.99E − 01 7.72E − 01 −2.74E − 02 4.22E − 01 2.86E − 01 2.01E − 01 6.71E − 01 −3.94E − 02
 hANN_MSE 3.51E − 01 2.09E − 01 1.57E − 01 7.69E − 01 5.25E − 02 3.95E − 01 2.48E − 01 1.66E − 01 6.59E − 01 9.83E − 02
  hANN_tPI 3.50E − 01 2.08E − 01 1.64E − 01 7.73E − 01 5.96E − 02 3.98E − 01 2.52E − 01 1.68E − 01 6.55E − 01 8.28E − 02
 hANN_PID 3.50E − 01 2.07E − 01 1.62E − 01 7.68E − 01 6.29E − 02 3.90E − 01 2.46E − 01 1.73E − 01 6.43E − 01 1.04E − 01
Santa Ysabel Creek
3-4-1
 sANN_MSE 3.97E − 01 2.93E − 01 2.25E − 01 5.42E − 01 3.03E − 02 4.64E − 01 3.55E − 01 2.02E − 01 6.94E − 01 −1.87E − 01
  sANN_tPI 3.88E − 01 2.93E − 01 2.13E − 01 5.42E − 01 3.08E − 02 4.66E − 01 3.62E − 01 1.95E − 01 6.88E − 01 −2.11E − 01
 sANN_PID 4.04E − 01 2.93E − 01 2.16E − 01 5.42E − 01 3.01E − 02 5.18E − 01 4.03E − 01 1.99E − 01 6.53E − 01 −3.48E − 01
 hANN_MSE 3.50E − 01 2.66E − 01 1.89E − 01 5.45E − 01 1.22E − 01 3.62E − 01 2.86E − 01 1.71E − 01 6.26E − 01 4.41E − 02
  hANN_tPI 3.44E − 01 2.63E − 01 1.95E − 01 5.28E − 01 1.30E − 01 3.71E − 01 2.81E − 01 1.79E − 01 6.39E − 01 5.93E − 02
 hANN_PID 3.54E − 01 2.65E − 01 1.80E − 01 5.39E − 01 1.25E − 01 3.76E − 01 2.84E − 01 1.65E − 01 5.70E − 01 5.03E − 02
3-6-1
 sANN_MSE 3.95E − 01 2.84E − 01 2.29E − 01 5.56E − 01 6.01E − 02 4.90E − 01 3.82E − 01 2.06E − 01 6.71E − 01 −2.77E − 01
  sANN_tPI 3.83E − 01 2.87E − 01 2.31E − 01 5.52E − 01 5.14E − 02 4.65E − 01 3.48E − 01 2.08E − 01 7.01E − 01 −1.63E − 01
 sANN_PID 3.89E − 01 2.85E − 01 2.20E − 01 5.55E − 01 5.65E − 02 4.63E − 01 3.48E − 01 1.91E − 01 7.00E − 01 −1.64E − 01
 hANN_MSE 3.47E − 01 2.66E − 01 1.93E − 01 5.59E − 01 1.19E − 01 3.80E − 01 2.89E − 01 1.77E − 01 6.51E − 01 3.15E − 02
  hANN_tPI 3.51E − 01 2.64E − 01 1.87E − 01 5.50E − 01 1.27E − 01 3.62E − 01 2.77E − 01 1.76E − 01 6.52E − 01 7.21E − 02
 hANN_PID 3.52E − 01 2.64E − 01 1.93E − 01 5.52E − 01 1.27E − 01 3.62E − 01 2.78E − 01 1.76E − 01 6.59E − 01 7.05E − 02
3-8-1
 sANN_MSE 3.80E − 01 2.83E − 01 2.39E − 01 5.58E − 01 6.39E − 02 4.51E − 01 3.37E − 01 2.17E − 01 7.10E − 01 −1.28E − 01
  sANN_tPI 3.92E − 01 2.85E − 01 2.26E − 01 5.54E − 01 5.63E − 02 4.77E − 01 3.68E − 01 2.09E − 01 6.83E − 01 −2.30E − 01
 sANN_PID 3.82E − 01 2.83E − 01 2.39E − 01 5.58E − 01 6.37E − 02 4.32E − 01 3.30E − 01 2.20E − 01 7.16E − 01 −1.04E − 01
 hANN_MSE 3.49E − 01 2.64E − 01 2.00E − 01 5.60E − 01 1.26E − 01 3.91E − 01 2.90E − 01 1.80E − 01 6.51E − 01 3.04E − 02
  hANN_tPI 3.46E − 01 2.64E − 01 2.01E − 01 5.57E − 01 1.27E − 01 3.72E − 01 2.82E − 01 1.83E − 01 6.75E − 01 5.50E − 02
 hANN_PID 3.39E − 01 2.63E − 01 2.08E − 01 5.63E − 01 1.29E − 01 3.58E − 01 2.81E − 01 1.90E − 01 6.53E − 01 6.12E − 02

Table 8.

The medians of performance metrics for MLP architecture 6-N hd-1 on SPEI forecast for sANN_TA and hANN_TA; TA shows the training error function, and the best models are marked with bold fonts.

Calibration period Validation period
MAE MSE dMSE NS PI MAE MSE dMSE NS PI
Leaf River
6-4-1
 sANN_MSE 5.99E − 01 5.66E − 01 2.41E − 01 4.31E − 01 −1.56E + 00 6.40E − 01 6.13E − 01 2.61E − 01 2.95E − 01 −1.23E + 00
  sANN_tPI 5.76E − 01 5.32E − 01 2.34E − 01 4.65E − 01 −1.41E + 00 6.34E − 01 6.14E − 01 2.49E − 01 2.94E − 01 −1.23E + 00
 sANN_PID 5.83E − 01 5.24E − 01 2.24E − 01 4.73E − 01 −1.37E + 00 6.22E − 01 5.81E − 01 2.36E − 01 3.32E − 01 −1.11E + 00
 hANN_MSE 3.78E − 01 2.41E − 01 1.38E − 01 5.47E − 01 −9.35E − 02 4.47E − 01 3.19E − 01 1.51E − 01 3.94E − 01 −1.59E − 01
  hANN_tPI 3.92E − 01 2.56E − 01 1.26E − 01 5.75E − 01 −1.60E − 01 4.42E − 01 3.08E − 01 1.44E − 01 3.56E − 01 −1.19E − 01
 hANN_PID 3.84E − 01 2.52E − 01 1.32E − 01 5.56E − 01 −1.41E − 01 4.45E − 01 3.16E − 01 1.51E − 01 3.51E − 01 −1.49E − 01
6-6-1
 sANN_MSE 5.77E − 01 5.12E − 01 2.28E − 01 4.85E − 01 −1.32E + 00 6.45E − 01 6.44E − 01 2.29E − 01 2.59E − 01 −1.34E + 00
  sANN_tPI 5.82E − 01 5.21E − 01 2.68E − 01 4.76E − 01 −1.36E + 00 6.27E − 01 5.96E − 01 2.94E − 01 3.14E − 01 −1.17E + 00
 sANN_PID 6.06E − 01 5.68E − 01 3.26E − 01 4.29E − 01 −1.57E + 00 6.56E − 01 6.61E − 01 3.58E − 01 2.39E − 01 −1.40E + 00
 hANN_MSE 3.81E − 01 2.43E − 01 1.26E − 01 5.90E − 01 −1.00E − 01 4.36E − 01 3.05E − 01 1.45E − 01 3.82E − 01 −1.09E − 01
  hANN_tPI 3.90E − 01 2.42E − 01 1.48E − 01 6.16E − 01 −9.60E − 02 4.17E − 01 2.78E − 01 1.68E − 01 3.80E − 01 −9.50E − 03
 hANN_PID 3.87E − 01 2.49E − 01 1.39E − 01 5.49E − 01 −1.26E − 01 4.20E − 01 2.86E − 01 1.56E − 01 2.55E − 01 −4.12E − 02
6-8-1
 sANN_MSE 6.13E − 01 5.89E − 01 3.39E − 01 4.07E − 01 −1.67E + 00 6.79E − 01 6.81E − 01 3.51E − 01 2.16E − 01 −1.48E + 00
  sANN_tPI 6.01E − 01 5.56E − 01 3.29E − 01 4.40E − 01 −1.52E + 00 6.37E − 01 6.17E − 01 3.31E − 01 2.90E − 01 −1.24E + 00
 sANN_PID 6.35E − 01 6.00E − 01 2.93E − 01 3.96E − 01 −1.72E + 00 6.56E − 01 6.88E − 01 3.24E − 01 2.08E − 01 −1.50E + 00
 hANN_MSE 3.70E − 01 2.35E − 01 1.38E − 01 5.80E − 01 −6.30E − 02 4.23E − 01 2.79E − 01 1.56E − 01 3.97E − 01 −1.53E − 02
  hANN_tPI 3.70E − 01 2.29E − 01 1.35E − 01 6.12E − 01 −3.49E − 02 4.16E − 01 2.75E − 01 1.54E − 01 3.64E − 01 6.22E − 04
 hANN_PID 3.78E − 01 2.41E − 01 1.32E − 01 6.34E − 01 −8.97E − 02 4.23E − 01 2.81E − 01 1.49E − 01 4.34E − 01 −2.32E − 02
Santa Ysabel Creek
6-4-1
 sANN_MSE 5.62E − 01 5.01E − 01 2.22E − 01 2.18E − 01 −6.57E − 01 6.73E − 01 6.78E − 01 1.92E − 01 4.16E − 01 −1.27E + 00
  sANN_tPI 5.64E − 01 5.11E − 01 2.89E − 01 2.03E − 01 −6.88E − 01 6.83E − 01 7.80E − 01 2.65E − 01 3.28E − 01 −1.61E + 00
 sANN_PID 5.56E − 01 4.86E − 01 2.18E − 01 2.42E − 01 −6.05E − 01 7.12E − 01 7.41E − 01 2.00E − 01 3.62E − 01 −1.48E + 00
 hANN_MSE 3.78E − 01 2.95E − 01 1.75E − 01 2.71E − 01 2.47E − 02 4.33E − 01 3.43E − 01 1.60E − 01 2.43E − 01 −1.46E − 01
  hANN_tPI 4.12E − 01 3.14E − 01 1.61E − 01 2.24E − 01 −3.79E − 02 4.31E − 01 3.52E − 01 1.46E − 01 2.79E − 02 −1.79E − 01
 hANN_PID 4.08E − 01 3.05E − 01 1.73E − 01 2.33E − 01 −6.59E − 03 4.36E − 01 3.38E − 01 1.56E − 01 3.06E − 01 −1.30E − 01
6-6-1
 sANN_MSE 5.31E − 01 4.85E − 01 2.89E − 01 2.43E − 01 −6.04E − 01 5.88E − 01 5.57E − 01 2.60E − 01 5.20E − 01 −8.65E − 01
  sANN_tPI 5.44E − 01 4.87E − 01 2.48E − 01 2.40E − 01 −6.11E − 01 5.88E − 01 5.52E − 01 2.36E − 01 5.25E − 01 −8.45E − 01
 sANN_PID 5.42E − 01 5.01E − 01 2.58E − 01 2.18E − 01 −6.56E − 01 6.57E − 01 6.37E − 01 2.34E − 01 4.51E − 01 −1.13E + 00
 hANN_MSE 3.67E − 01 2.78E − 01 1.66E − 01 3.97E − 01 8.27E − 02 3.80E − 01 3.18E − 01 1.54E − 01 4.69E − 01 −6.28E − 02
  hANN_tPI 3.81E − 01 2.80E − 01 1.73E − 01 3.89E − 01 7.60E − 02 4.14E − 01 3.28E − 01 1.59E − 01 4.93E − 01 −9.72E − 02
 hANN_PID 3.71E − 01 2.76E − 01 1.66E − 01 3.87E − 01 8.82E − 02 4.18E − 01 3.15E − 01 1.54E − 01 4.61E − 01 −5.42E − 02
6-8-1
 sANN_MSE 5.97E − 01 5.78E − 01 2.60E − 01 9.81E − 02 −9.10E − 01 7.22E − 01 8.26E − 01 2.35E − 01 2.88E − 01 −1.77E + 00
  sANN_tPI 5.60E − 01 5.37E − 01 3.04E − 01 1.62E − 01 −7.75E − 01 6.42E − 01 6.24E − 01 2.78E − 01 4.62E − 01 −1.09E + 00
 sANN_PID 5.52E − 01 5.21E − 01 2.79E − 01 1.86E − 01 −7.24E − 01 6.86E − 01 7.00E − 01 2.39E − 01 3.97E − 01 −1.34E + 00
 hANN_MSE 3.87E − 01 2.92E − 01 1.64E − 01 3.49E − 01 3.36E − 02 4.49E − 01 3.43E − 01 1.54E − 01 3.95E − 01 −1.47E − 01
  hANN_tPI 3.75E − 01 2.85E − 01 1.73E − 01 3.69E − 01 5.67E − 02 4.18E − 01 3.27E − 01 1.54E − 01 4.79E − 01 −9.51E − 02
 hANN_PID 3.79E − 01 2.94E − 01 1.72E − 01 3.49E − 01 2.75E − 02 3.89E − 01 3.12E − 01 1.58E − 01 4.21E − 01 −4.46E − 02

Table 9.

The medians of performance metrics for MLP architecture 9-N hd-1 on SPEI forecast for sANN_TA and hANN_TA; TA shows the training error function, and the best models are marked with bold fonts.

Calibration period Validation period
MAE MSE dMSE NS PI MAE MSE dMSE NS PI
Leaf River
9-4-1
  sANN_MSE 3.75E − 01 2.36E − 01 1.97E − 01 7.50E − 01 7.41E − 01 4.53E − 01 3.34E − 01 2.07E − 01 6.09E − 01 7.35E − 01
  sANN_tPI 3.77E − 01 2.32E − 01 1.84E − 01 7.54E − 01 7.45E − 01 4.52E − 01 3.46E − 01 2.11E − 01 5.96E − 01 7.26E − 01
 sANN_PID 3.81E − 01 2.40E − 01 1.63E − 01 7.46E − 01 7.37E − 01 4.55E − 01 3.43E − 01 1.76E − 01 5.99E − 01 7.28E − 01
 hANN_MSE 3.47E − 01 2.00E − 01 1.54E − 01 7.63E − 01 7.81E − 01 4.04E − 01 2.66E − 01 1.57E − 01 5.91E − 01 7.89E − 01
  hANN_tPI 3.45E − 01 2.02E − 01 1.56E − 01 7.62E − 01 7.78E − 01 4.07E − 01 2.73E − 01 1.71E − 01 6.00E − 01 7.84E − 01
 hANN_PID 3.48E − 01 2.05E − 01 1.42E − 01 7.48E − 01 7.75E − 01 4.18E − 01 2.92E − 01 1.51E − 01 5.82E − 01 7.68E − 01
9-6-1
 sANN_MSE 3.80E − 01 2.32E − 01 2.11E − 01 7.54E − 01 7.45E − 01 4.41E − 01 3.16E − 01 2.21E − 01 6.31E − 01 7.50E − 01
  sANN_tPI 3.74E − 01 2.30E − 01 1.93E − 01 7.56E − 01 7.48E − 01 4.51E − 01 3.31E − 01 2.10E − 01 6.13E − 01 7.37E − 01
 sANN_PID 3.84E − 01 2.45E − 01 1.60E − 01 7.40E − 01 7.31E − 01 4.69E − 01 3.55E − 01 1.77E − 01 5.85E − 01 7.18E − 01
 hANN_MSE 3.44E − 01 2.01E − 01 1.60E − 01 7.70E − 01 7.79E − 01 4.09E − 01 2.76E − 01 1.74E − 01 6.12E − 01 7.81E − 01
  hANN_tPI 3.42E − 01 2.01E − 01 1.51E − 01 7.72E − 01 7.80E − 01 4.04E − 01 2.61E − 01 1.62E − 01 6.13E − 01 7.93E − 01
 hANN_PID 3.45E − 01 2.02E − 01 1.45E − 01 7.64E − 01 7.78E − 01 4.15E − 01 2.87E − 01 1.58E − 01 6.14E − 01 7.72E − 01
9-8-1
 sANN_MSE 3.72E − 01 2.35E − 01 2.01E − 01 7.52E − 01 7.43E − 01 4.45E − 01 3.16E − 01 2.01E − 01 6.31E − 01 7.50E − 01
  sANN_tPI 3.75E − 01 2.35E − 01 1.92E − 01 7.51E − 01 7.42E − 01 4.54E − 01 3.31E − 01 2.07E − 01 6.13E − 01 7.38E − 01
 sANN_PID 3.76E − 01 2.38E − 01 1.75E − 01 7.48E − 01 7.39E − 01 4.65E − 01 3.49E − 01 1.87E − 01 5.92E − 01 7.23E − 01
 hANN_MSE 3.43E − 01 1.98E − 01 1.56E − 01 7.70E − 01 7.83E − 01 3.95E − 01 2.62E − 01 1.68E − 01 6.32E − 01 7.92E − 01
  hANN_tPI 3.44E − 01 1.94E − 01 1.62E − 01 7.67E − 01 7.87E − 01 3.94E − 01 2.61E − 01 1.68E − 01 6.05E − 01 7.93E − 01
 hANN_PID 3.45E − 01 2.00E − 01 1.46E − 01 7.58E − 01 7.81E − 01 4.14E − 01 2.85E − 01 1.55E − 01 6.02E − 01 7.74E − 01
Santa Ysabel Creek
9-4-1
 sANN_MSE 4.04E − 01 2.90E − 01 2.27E − 01 5.52E − 01 7.61E − 01 5.73E − 01 4.90E − 01 2.05E − 01 5.91E − 01 6.14E − 01
  sANN_tPI 4.07E − 01 2.94E − 01 2.30E − 01 5.47E − 01 7.59E − 01 5.51E − 01 4.53E − 01 2.14E − 01 6.21E − 01 6.43E − 01
 sANN_PID 4.17E − 01 3.11E − 01 1.91E − 01 5.20E − 01 7.44E − 01 6.38E − 01 6.01E − 01 1.80E − 01 4.98E − 01 5.26E − 01
 hANN_MSE 3.65E − 01 2.53E − 01 1.89E − 01 5.77E − 01 7.92E − 01 4.99E − 01 3.83E − 01 1.77E − 01 5.40E − 01 6.98E − 01
  hANN_tPI 3.59E − 01 2.56E − 01 1.92E − 01 5.80E − 01 7.90E − 01 4.63E − 01 3.52E − 01 1.81E − 01 5.43E − 01 7.22E − 01
 hANN_PID 3.73E − 01 2.64E − 01 1.72E − 01 5.47E − 01 7.83E − 01 5.27E − 01 4.22E − 01 1.61E − 01 4.88E − 01 6.67E − 01
9-6-1
 sANN_MSE 4.05E − 01 2.91E − 01 2.40E − 01 5.51E − 01 7.61E − 01 5.69E − 01 4.91E − 01 2.27E − 01 5.89E − 01 6.12E − 01
  sANN_tPI 3.99E − 01 2.88E − 01 2.46E − 01 5.57E − 01 7.64E − 01 5.52E − 01 4.64E − 01 2.23E − 01 6.12E − 01 6.34E − 01
 sANN_PID 4.15E − 01 3.00E − 01 1.90E − 01 5.37E − 01 7.53E − 01 5.91E − 01 5.26E − 01 1.79E − 01 5.61E − 01 5.85E − 01
 hANN_MSE 3.55E − 01 2.49E − 01 1.93E − 01 5.70E − 01 7.95E − 01 4.28E − 01 3.28E − 01 1.85E − 01 5.95E − 01 7.41E − 01
  hANN_tPI 3.53E − 01 2.50E − 01 1.93E − 01 5.68E − 01 7.94E − 01 4.42E − 01 3.34E − 01 1.83E − 01 5.64E − 01 7.36E − 01
 hANN_PID 3.72E − 01 2.59E − 01 1.75E − 01 5.51E − 01 7.87E − 01 5.15E − 01 4.09E − 01 1.67E − 01 4.45E − 01 6.77E − 01
9-8-1
 sANN_MSE 4.08E − 01 2.92E − 01 2.66E − 01 5.49E − 01 7.60E − 01 5.65E − 01 4.82E − 01 2.39E − 01 5.97E − 01 6.20E − 01
  sANN_tPI 4.03E − 01 2.88E − 01 2.54E − 01 5.57E − 01 7.64E − 01 5.51E − 01 4.60E − 01 2.29E − 01 6.16E − 01 6.37E − 01
 sANN_PID 4.21E − 01 3.11E − 01 1.95E − 01 5.21E − 01 7.44E − 01 6.30E − 01 5.91E − 01 1.83E − 01 5.06E − 01 5.34E − 01
 hANN_MSE 3.55E − 01 2.48E − 01 2.04E − 01 5.76E − 01 7.96E − 01 4.51E − 01 3.41E − 01 1.89E − 01 4.98E − 01 7.31E − 01
  hANN_tPI 3.51E − 01 2.48E − 01 2.06E − 01 5.79E − 01 7.96E − 01 4.37E − 01 3.30E − 01 1.96E − 01 5.49E − 01 7.40E − 01
 hANN_PID 3.65E − 01 2.57E − 01 1.78E − 01 5.32E − 01 7.89E − 01 5.06E − 01 3.96E − 01 1.69E − 01 3.30E − 01 6.88E − 01

The integrated hANN models were superior to the single multilayer perceptron models sANN in terms of the best median values of the performance indices MAE, MSE, dMSE, and PI for both catchments. One exception can be found in the Leaf River dataset: the NS of the single 3-8-1 MLP trained using MSE on the calibration and validation periods. However, this sANN model with the highest NS values did not produce the highest PI values (see the calibration period for 3-8-1 hANN and 3-8-1 sANN for Leaf River in Table 7).

hANN models with nine SPEI inputs were superior to ANN models with three and six inputs. Simulation results from hANN models with three inputs were, in terms of PI, superior to hANN models with six inputs in the first layer of sANNs. Incorporating further information into the ANN inputs did not improve the SPEI forecast.
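The three input sets differ only in how many lagged monthly index values are presented to the networks. A sketch of how such a lagged design matrix can be built from a drought-index series is shown below; the function name is hypothetical, and the actual input construction is described in the paper's methods section.

```python
import numpy as np

def lagged_inputs(series, n_lags):
    """Build (X, y) for one-step-ahead forecasting: each row of X holds
    the n_lags previous monthly index values, y is the next value."""
    s = np.asarray(series, dtype=float)
    X = np.column_stack([s[i:len(s) - n_lags + i] for i in range(n_lags)])
    y = s[n_lags:]
    return X, y
```

With n_lags = 9 this yields the nine-input set that performed best here; n_lags = 3 and 6 give the smaller sets.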

The hANN models with 9-8-1 and 9-6-1 sANN architectures trained on the tPI index were superior in terms of PI on the calibration results for both catchments; the calibration results of the best hANN models trained on tPI are shown in Figure 4. The best validation results according to the medians of PI were obtained for hANN with sANN architectures 9-8-1 and 9-6-1 trained on tPI and MSE for both basins; the corresponding time series are shown in Figure 5.

Figure 4.

The calibration results of SPEI forecast using hANN: the blue line: observations, the salmon line: forecasted SPEI medians, and the dotted lines: simulation error bounds.

Figure 5.

The validation results of SPEI forecast using hANN: the blue line: observations, the salmon line: forecasted SPEI medians, and the dotted lines: simulation error bounds.

When comparing hANN architectures with nine inputs, the best models according to PI2 were those with six hidden layer neurons on the Leaf River calibration dataset, while on the validation data the best PI2 values were obtained from hANN models with eight hidden layer neurons. On the Santa Ysabel datasets, the models with the best PI2 indices were hANN with eight hidden layer neurons for calibration, while hANN models with six hidden neurons were superior on the validation dataset (see Table 10).

Table 10.

The values of PI2 on architectures 9-N hd-1 on SPEI forecast with hANN.

Calibration period Validation period
9-4-1 9-6-1 9-8-1 9-4-1 9-6-1 9-8-1
Leaf River
9-4-1 0.00E + 00 −2.26E − 02 −2.00E − 02 0.00E + 00 −1.62E − 02 −3.61E − 02
9-6-1 2.21E − 02 0.00E + 00 2.51E − 03 1.59E − 02 0.00E + 00 −1.97E − 02
9-8-1 1.96E − 02 −2.51E − 03 0.00E + 00 3.49E − 02 1.93E − 02 0.00E + 00
Santa Ysabel
9-4-1 0.00E + 00 −1.36E − 02 −1.56E − 02 0.00E + 00 −6.57E − 02 −9.33E − 03
9-6-1 1.34E − 02 0.00E + 00 −1.99E − 03 6.17E − 02 0.00E + 00 5.29E − 02
9-8-1 1.54E − 02 1.98E − 03 0.00E + 00 9.24E − 03 −5.59E − 02 0.00E + 00

Table 11 shows the influence of the different optimization functions on the calibration and validation of hANN models with nine inputs. On the Leaf River dataset, the optimization based on MSE provided better hANN models than the optimizations using tPI and CI. On the Santa Ysabel datasets, the tPI optimization function yielded hANN models with better PI2 values. Note that the differences between the PI2 values for tPI and MSE are very small.

Table 11.

The values of PI2 on training functions for SPEI forecast with hANN.

Calibration period Validation period
MSE tPI PID MSE tPI PID
Leaf River
MSE 0.00E + 00 4.17E − 04 2.31E − 02 0.00E + 00 7.66E − 04 4.48E − 02
tPI −4.17E − 04 0.00E + 00 2.27E − 02 −7.66E − 04 0.00E + 00 4.40E − 02
PID −2.36E − 02 −2.32E − 02 0.00E + 00 −4.69E − 02 −4.61E − 02 0.00E + 00
Santa Ysabel
MSE 0.00E + 00 −1.89E − 03 5.14E − 02 0.00E + 00 −1.73E − 02 1.81E − 01
tPI 1.89E − 03 0.00E + 00 5.32E − 02 1.70E − 02 0.00E + 00 1.95E − 01
PID −5.42E − 02 −5.62E − 02 0.00E + 00 −2.22E − 01 −2.43E − 01 0.00E + 00

3.3. Discussion

The results of our computational experiment show high similarity between the values of the SPI and SPEI drought indices: the correlation coefficients between the SPEI and SPI values were 0.98 for the Leaf River dataset and 0.99 for the Santa Ysabel dataset. The small differences between the two drought indices reflect the fact that temperature trends were not apparent in either analyzed dataset [24, 25].
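The reported similarity can be checked with a plain Pearson correlation between the two monthly index series; the function and variable names below are hypothetical.

```python
import numpy as np

def index_similarity(spi, spei):
    """Pearson correlation coefficient between two drought-index series."""
    return float(np.corrcoef(spi, spei)[0, 1])
```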

We used the single multilayer perceptron model as the main benchmark for hANN. This model has demonstrated its simulation abilities and has been compared with other forecasting techniques, for example, ARIMA models [11, 13, 27].

The comparison of sANN and hANN clearly confirms the findings of Shamseldin and O'Connor [20] and Goswami et al. [32] and shows the benefits of the newly tested neural network model. Updating the simulated values of the drought indices using the additional MLP was in all cases successful in terms of improving the SPEI and SPI drought forecasts according to the PI values. Similar overall benefits of integrated neural network models were confirmed by [16] on simulations of monthly flows.

Our computational exercise also confirms that the hANN drought forecasts improve on the time shift error [11, 34, 35]. The tested hybrid neural network models decreased the overall time shift error in terms of dMSE values. However, the hANN models trained on CI, which was designed to correct the time shift error, did not show significant improvements over the hANN trained on MSE or tPI.
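A forecast that lags the observations by one step can score a deceptively good MSE; comparing the month-to-month changes of the two series exposes this. The sketch below assumes dMSE is the mean squared error of the first differences, which is a common way to penalize time-shifted forecasts; the exact definition is given in the paper's methods section.

```python
import numpy as np

def dmse(obs, sim):
    """MSE computed on first differences: a time-shifted forecast scores
    poorly here even when its ordinary MSE looks acceptable."""
    obs = np.asarray(obs, dtype=float)
    sim = np.asarray(sim, dtype=float)
    return float(np.mean((np.diff(obs) - np.diff(sim)) ** 2))
```

Note that a forecast with a constant offset but identical dynamics scores dMSE = 0, so dMSE isolates shape and timing errors from bias.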

The increased accuracy of the drought index forecast was also influenced by the higher number of model parameters. The simplest sANN with 3 inputs and 4 hidden layer neurons had 21 parameters, while the hANN with 9 inputs and 8 hidden layer neurons had 405 parameters. The high number of parameters may limit the application of the hANN model in cases where more parsimonious models with similar simulation performance are available.
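The parameter counts above follow from the usual weight-and-bias arithmetic of a fully connected perceptron: an n_in-n_hid-1 network has (n_in + 1)·n_hid + (n_hid + 1) parameters. The quick check below reproduces the quoted figures; the decomposition of the 405-parameter hANN into four 9-8-1 first-stage networks plus a 4-8-1 integrating network is our inference from the totals, not a statement from the paper.

```python
def mlp_params(n_in, n_hid, n_out=1):
    """Number of weights and biases in a fully connected n_in-n_hid-n_out MLP."""
    return (n_in + 1) * n_hid + (n_hid + 1) * n_out

print(mlp_params(3, 4))                         # simplest sANN 3-4-1 -> 21
print(4 * mlp_params(9, 8) + mlp_params(4, 8))  # assumed hANN split -> 405
```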

4. Conclusions

We analyzed the forecast of two drought indices, SPEI and SPI, using two types of neural network models. The first model was a feedforward neural network with three layers of neurons. The second one integrates the drought forecasts from five single multilayer perceptrons, trained on four different performance measures, into a hybrid integrated neural network.

The SPEI and SPI neural network forecasts were based on data from the period 1948–2002 from two US watersheds, collected under the MOPEX framework.

When evaluating the ANN models' performance, four of the five model performance indices show that the hybrid ANN models were superior to the single MLP models.

When comparing three different input sets for the SPEI and SPI forecasts, the input sets with nine lagged monthly values of the SPEI and SPI indices were superior. Adding other types of inputs did not improve the neural network forecasts.

The tested hANN and sANN models were trained using adaptive differential evolution. This nature-inspired global optimization algorithm was capable of successfully training the neural network models. The optimization was based on four functional relationships describing model performance: the MSE, dMSE, tPI, and CI indices. The worst training results were obtained with ANN models trained on dMSE.

When comparing hANN models according to the number of neurons in the hidden layer, two neural network architectures, 9-6-1 and 9-8-1, generated the highest PI2 values on the SPEI and SPI forecasts. When evaluating the influence of the different optimization functions on hANN performance using PI2, the tPI and MSE performance functions were superior to dMSE and CI.

Although the SPEI and SPI indices both use precipitation data and show a high degree of similarity, the best predictions were obtained using different combinations of neural network model and training criteria.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

References

  • 1.Dai A. Drought under global warming: a review. Wiley Interdisciplinary Reviews: Climate Change. 2011;2:45–65. [Google Scholar]
  • 2.Cook E. R., Meko D. M., Stahle D. W., Cleaveland M. K. Drought reconstructions for the continental United States. Journal of Climate. 1999;12(4):1145–1163. doi: 10.1175/1520-0442(1999)012<1145:DRFTCU>2.0.CO;2. [DOI] [Google Scholar]
  • 3.Dai A. Increasing drought under global warming in observations and models. Nature Climate Change. 2013;3(1):52–58. doi: 10.1038/nclimate1633. [DOI] [Google Scholar]
  • 4.Mishra A. K., Singh V. P. Drought modeling—a review. Journal of Hydrology. 2011;403(1-2):157–175. doi: 10.1016/j.jhydrol.2011.03.049. [DOI] [Google Scholar]
  • 5.Ntale H. K., Gan T. Y. Drought indices and their application to East Africa. International Journal of Climatology. 2003;23(11):1335–1357. doi: 10.1002/joc.931. [DOI] [Google Scholar]
  • 6.Morid S., Smakhtin V., Moghaddasi M. Comparison of seven meteorological indices for drought monitoring in Iran. International Journal of Climatology. 2006;26(7):971–985. doi: 10.1002/joc.1264. [DOI] [Google Scholar]
  • 7.Wu H., Hayes M. J., Weiss A., Hu Q. An evaluation of the standardized precipitation index, the China-Z index and the statistical Z-score. International Journal of Climatology. 2001;21(6):745–758. doi: 10.1002/joc.658. [DOI] [Google Scholar]
  • 8.Keyantash J., Dracup J. A. The quantification of drought: an evaluation of drought indices. Bulletin of the American Meteorological Society. 2002;83(8):1167–1180. doi: 10.1175/1520-0477(2002)083<1191:TQODAE>2.3.CO;2. [DOI] [Google Scholar]
  • 9.Haslinger K., Koffler D., Schöner W., Laaha G. Exploring the link between meteorological drought and streamflow: effects of climate-catchment interaction. Water Resources Research. 2014;50(3):2468–2487. doi: 10.1002/2013wr015051. [DOI] [Google Scholar]
  • 10.Morid S., Smakhtin V., Bagherzadeh K. Drought forecasting using artificial neural networks and time series of drought indices. International Journal of Climatology. 2007;27(15):2103–2111. doi: 10.1002/joc.1498. [DOI] [Google Scholar]
  • 11.Belayneh A., Adamowski J., Khalil B., Ozga-Zielinski B. Long-term SPI drought forecasting in the Awash River Basin in Ethiopia using wavelet neural networks and wavelet support vector regression models. Journal of Hydrology. 2014;508:418–429. doi: 10.1016/j.jhydrol.2013.10.052. [DOI] [Google Scholar]
  • 12.Ochoa-Rivera J. C. Prospecting droughts with stochastic artificial neural networks. Journal of Hydrology. 2008;352(1-2):174–180. doi: 10.1016/j.jhydrol.2008.01.006. [DOI] [Google Scholar]
  • 13.Mishra A. K., Desai V. R. Drought forecasting using feed-forward recursive neural network. Ecological Modelling. 2006;198(1-2):127–138. doi: 10.1016/j.ecolmodel.2006.04.017. [DOI] [Google Scholar]
  • 14.Hornik K., Stinchcombe M., White H. Multilayer feedforward networks are universal approximators. Neural Networks. 1989;2(5):359–366. doi: 10.1016/0893-6080(89)90020-8. [DOI] [Google Scholar]
  • 15.Rumelhart D. E., Hinton G. E., Williams R. J. Learning representations by back-propagating errors. Nature. 1986;323(6088):533–536. doi: 10.1038/323533a0. [DOI] [Google Scholar]
  • 16.Huo Z., Feng S., Kang S., Huang G., Wang F., Guo P. Integrated neural networks for monthly river flow estimation in arid inland basin of Northwest China. Journal of Hydrology. 2012;420-421:159–170. doi: 10.1016/j.jhydrol.2011.11.054. [DOI] [Google Scholar]
  • 17.Pektaş A. O., Cigizoglu H. K. ANN hybrid model versus ARIMA and ARIMAX models of runoff coefficient. Journal of Hydrology. 2013;500:21–36. doi: 10.1016/j.jhydrol.2013.07.020. [DOI] [Google Scholar]
  • 18.Nourani V., Hosseini Baghanam A., Adamowski J., Kisi O. Applications of hybrid wavelet-artificial Intelligence models in hydrology: a review. Journal of Hydrology. 2014;514:358–377. doi: 10.1016/j.jhydrol.2014.03.057. [DOI] [Google Scholar]
  • 19.Wang W., Gelder P. H. A. J. M. V., Vrijling J. K., Ma J. Forecasting daily streamflow using hybrid ANN models. Journal of Hydrology. 2006;324(1–4):383–399. doi: 10.1016/j.jhydrol.2005.09.032. [DOI] [Google Scholar]
  • 20.Shamseldin A. Y., O'Connor K. M. A non-linear neural network technique for updating of river flow forecasts. Hydrology and Earth System Sciences. 2001;5(4):577–598. doi: 10.5194/hess-5-577-2001. [DOI] [Google Scholar]
  • 21.Hayes M. J., Svoboda M. D., Wilhite D. A., Vanyarkho O. V. Monitoring the 1996 drought using the standardized precipitation index. Bulletin of the American Meteorological Society. 1999;80(3):429–438. doi: 10.1175/1520-0477(1999)080<0429:MTDUTS>2.0.CO;2. [DOI] [Google Scholar]
  • 22.Guttman N. B. Comparing the palmer drought index and the standardized precipitation index. Journal of the American Water Resources Association. 1998;34(1):113–121. doi: 10.1111/j.1752-1688.1998.tb05964.x. [DOI] [Google Scholar]
  • 23.Cancelliere A., Mauro G. D., Bonaccorso B., Rossi G. Drought forecasting using the standardized precipitation index. Water Resources Management. 2007;21(5):801–819. doi: 10.1007/s11269-006-9062-y. [DOI] [Google Scholar]
24. Vicente-Serrano S. M., Beguería S., López-Moreno J. I. A multiscalar drought index sensitive to global warming: the standardized precipitation evapotranspiration index. Journal of Climate. 2010;23(7):1696–1718. doi: 10.1175/2009jcli2909.1.
25. Beguería S., Vicente-Serrano S. M., Reig F., Latorre B. Standardized precipitation evapotranspiration index (SPEI) revisited: parameter fitting, evapotranspiration models, tools, datasets and drought monitoring. International Journal of Climatology. 2014;34(10):3001–3023. doi: 10.1002/joc.3887.
26. Beguería S., Vicente-Serrano S. M. SPEI: Calculation of the Standardised Precipitation—Evapotranspiration Index. R package version 1.6.
27. Kim T.-W., Valdés J. B. Nonlinear model for drought forecasting based on a conjunction of wavelet transforms and neural networks. Journal of Hydrologic Engineering. 2003;8(6):319–328. doi: 10.1061/(ASCE)1084-0699(2003)8:6(319).
28. Bishop C. M. Neural Networks for Pattern Recognition. New York, NY, USA: Oxford University Press; 1995.
29. Reed R. D., Marks R. J. Neural Smithing: Supervised Learning in Feedforward Artificial Neural Networks. Cambridge, Mass, USA: MIT Press; 1998.
30. Gomes G. S. D. S., Ludermir T. B., Lima L. M. M. R. Comparison of new activation functions in neural network for forecasting financial time series. Neural Computing & Applications. 2011;20(3):417–439.
31. Maca P., Pech P., Pavlasek J. Comparing the selected transfer functions and local optimization methods for neural network flood runoff forecast. Mathematical Problems in Engineering. 2014;2014: Article ID 782351, 10 pages. doi: 10.1155/2014/782351.
32. Goswami M., O'Connor K. M., Bhattarai K. P., Shamseldin A. Y. Assessing the performance of eight real-time updating models and procedures for the Brosna River. Hydrology and Earth System Sciences. 2005;9(4):394–411. doi: 10.5194/hess-9-394-2005.
33. Kitanidis P. K., Bras R. L. Real-time forecasting with a conceptual hydrologic model: 2. Applications and results. Water Resources Research. 1980;16(6):1034–1044. doi: 10.1029/wr016i006p01034.
34. Dawson C. W., Abrahart R. J., See L. M. HydroTest: a web-based toolbox of evaluation metrics for the standardised assessment of hydrological forecasts. Environmental Modelling & Software. 2007;22(7):1034–1052. doi: 10.1016/j.envsoft.2006.06.008.
35. Dawson C. W., Abrahart R. J., See L. M. HydroTest: further development of a web resource for the standardised assessment of hydrological models. Environmental Modelling & Software. 2010;25(11):1481–1482. doi: 10.1016/j.envsoft.2009.01.001.
36. Piotrowski A. P., Napiorkowski J. J. Optimizing neural networks for river flow forecasting—evolutionary computation methods versus the Levenberg-Marquardt approach. Journal of Hydrology. 2011;407(1–4):12–27. doi: 10.1016/j.jhydrol.2011.06.019.
37. Zhang J., Sanderson A. JADE: adaptive differential evolution with optional external archive. IEEE Transactions on Evolutionary Computation. 2009;13(5):945–958.
38. Storn R., Price K. Differential evolution—a simple and efficient heuristic for global optimization over continuous spaces. Journal of Global Optimization. 1997;11(4):341–359. doi: 10.1023/a:1008202821328.
39. Price K. V., Storn R. M., Lampinen J. A. Differential Evolution—A Practical Approach to Global Optimization. Berlin, Germany: Springer; 2006. (Natural Computing).
40. Das S., Suganthan P. N. Differential evolution: a survey of the state-of-the-art. IEEE Transactions on Evolutionary Computation. 2011;15(1):4–31. doi: 10.1109/tevc.2010.2059031.
41. Duan Q., Schaake J., Andréassian V., et al. Model Parameter Estimation Experiment (MOPEX): an overview of science strategy and major results from the second and third workshops. Journal of Hydrology. 2006;320(1-2):3–17. doi: 10.1016/j.jhydrol.2005.07.031.
42. Schaake J., Duan Q., Andréassian V., Franks S., Hall A., Leavesley G. The model parameter estimation experiment (MOPEX). Journal of Hydrology. 2006;320(1-2):1–2. doi: 10.1016/j.jhydrol.2005.07.054.
43. Ao T., Ishidaira H., Takeuchi K., et al. Relating BTOPMC model parameters to physical features of MOPEX basins. Journal of Hydrology. 2006;320(1-2):84–102. doi: 10.1016/j.jhydrol.2005.07.006.
44. Ye A., Duan Q., Yuan X., Wood E. F., Schaake J. Hydrologic post-processing of MOPEX streamflow simulations. Journal of Hydrology. 2014;508:147–156. doi: 10.1016/j.jhydrol.2013.10.055.
45. Chen J., Brissette F. P., Chaumont D., Braun M. Performance and uncertainty evaluation of empirical downscaling methods in quantifying the climate change impacts on hydrology over two North American river basins. Journal of Hydrology. 2013;479:200–214. doi: 10.1016/j.jhydrol.2012.11.062.
46. Troch P. A., Martinez G. F., Pauwels V. R. N., et al. Climate and vegetation water use efficiency at catchment scales. Hydrological Processes. 2009;23(16):2409–2414. doi: 10.1002/hyp.7358.
47. Huang M., Hou Z., Leung L. R., et al. Uncertainty analysis of runoff simulations and parameter identifiability in the community land model: evidence from MOPEX basins. Journal of Hydrometeorology. 2013;14(6):1754–1772. doi: 10.1175/jhm-d-12-0138.1.
48. Mutlu E., Chaubey I., Hexmoor H., Bajwa S. G. Comparison of artificial neural network models for hydrologic predictions at multiple gauging stations in an agricultural watershed. Hydrological Processes. 2008;22(26):5097–5106. doi: 10.1002/hyp.7136.
49. May R. J., Maier H. R., Dandy G. C., Fernando T. M. K. G. Non-linear variable selection for artificial neural networks using partial mutual information. Environmental Modelling & Software. 2008;23(10-11):1312–1326. doi: 10.1016/j.envsoft.2008.03.007.
50. May R. J., Dandy G. C., Maier H. R., Nixon J. B. Application of partial mutual information variable selection to ANN forecasting of water quality in water distribution systems. Environmental Modelling & Software. 2008;23(10-11):1289–1299. doi: 10.1016/j.envsoft.2008.03.008.
51. Bacanli U. G., Firat M., Dikbas F. Adaptive Neuro-Fuzzy inference system for drought forecasting. Stochastic Environmental Research and Risk Assessment. 2009;23(8):1143–1154. doi: 10.1007/s00477-008-0288-5.

Articles from Computational Intelligence and Neuroscience are provided here courtesy of Wiley