Abstract
Over the years, the Weather Research and Forecasting Model (WRF) has been gaining popularity as a low-cost alternative source of data for wind resource assessments. This paper investigates the impact of selected time control and nudging options on wind simulations in WRF. We conducted 15 numerical experiments, combining 5 simulation run times and 3 options for disabling nudging in the Planetary Boundary Layer (PBL) in WRF. Hourly wind speed and direction predictions were compared with actual measurements at 40 m, 50 m and 60 m a.g.l. From our results, we recommend that, for optimum performance, the method of disabling nudging in the PBL should be chosen with simulation run times in mind. For wind simulations in our study area, run times of up to 2 days with nudging disabled below 1600 m give the best wind speed predictions. However, disabling nudging below the model-calculated PBL height offers more consistent results and produces relatively less prediction error with longer run times.
Keywords: Atmospheric science, Environmental science
1. Introduction
Assessment of wind resources is key to the successful development of wind power on a commercial scale. Data for such assessments have traditionally come from mast-mounted instruments. However, in recent times, Numerical Weather Prediction (NWP) models such as the Weather Research and Forecasting Model (WRF) have been gaining popularity as low-cost alternative sources of data for these assessments. Like most numerical models, WRF offers a wide range of options that must be combined to form the model configurations with which the model is run. Model configurations are key contributors to model performance.
Among the options that have been found to significantly affect model performance in wind simulations with WRF are the Time Control and Nudging options [1, 2]. The Time Control options are used to specify, among other things, the simulation integration time or run time (basically the length of the period simulated by the model), as well as the time intervals between the lateral boundary condition inputs and between simulation output files. Nudging (Newtonian relaxation) is one of the options in the Four-Dimensional Data Assimilation (FDDA) system of WRF. The FDDA system comprises options for keeping simulations close to gridded analyses values and/or observed values (actual measurements) over the simulation run time. The former is often referred to as grid or analysis nudging, while the latter is termed observational nudging. Analysis nudging options in WRF include the nudging coefficients for the variables to be nudged; whether nudging should be applied at all vertical levels in the simulation domain; and, if not, which levels it should be disabled for, and how. In this paper, we focus on the options of integration (run) time and the vertical levels for which nudging should be disabled. The combined effect of these two parameters has been found to improve model performance in wind simulations by reducing model divergence and error accumulation [2].
NWP models tend to diverge and accumulate approximation errors after running for some time, and these problems worsen with increasing simulation run times [2, 3]. With the Time Control options alone, these errors can be reduced for simulations covering long periods. This can be achieved by performing relatively short segmented simulations that together cover the desired (longer) period. However, using this option (of shorter segmented runs) requires more time and computing resources. This is because, in line with best practices, simulation run times in WRF must incorporate a model "spin-up" time (the average time it takes for the model to adequately develop mesoscale processes). Model outputs from this spin-up time are not considered true representations of the state of the atmosphere and so are often discarded [3, 4, 5, 6, 7, 8, 9]. Using shorter run times for simulations in studies covering longer periods requires more of such model spin-up periods, which in turn requires that extra time and computing resources be spent on running simulations. For example, for a study that covers a period of one year and uses 12 hours of spin-up time per simulation, the total number of extra days that must be run and discarded as model spin-up is presented in Table 1. Therefore, though using shorter run times might improve wind predictions by WRF, it also increases the time and computational needs of studies and assessments. In addition, for study designs that are computationally expensive and span long study periods, the use of short run times might not necessarily be worth the improvement in model performance. Nonetheless, the option has been used, often in combination with grid nudging, in model configurations for sensitivity studies and model performance assessments of WRF for wind simulations [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13].
Some of the run times (excluding model spin-up times) that have been used in wind simulation studies include 12 hours [11], 1 day [3, 14], 2 days [2, 8], 7 days [7], 9 days [6], and 30 or 31 days [5, 12].
Table 1.
Effect of different run times on a study covering 1 year (assuming 12 hours model spin up time).
| Run time (excluding model spin-up time) | Number of simulations required to cover study period | Total number of extra days required as spin up time | Extra time required for simulations |
|---|---|---|---|
| 1 day | 365 | 182.5 days | 50% |
| 2 days | 183 | 91.5 days | 25% |
| 7 days | 53 | 26.5 days | 8% |
| 31 days | 12 | 6 days | 2% |
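The overheads in Table 1 follow from simple arithmetic: each segmented run adds one 12-hour spin-up period that is later discarded. A minimal sketch (the helper `spinup_overhead` is ours, not from the paper):

```python
import math

def spinup_overhead(run_days, study_days=365, spinup_hours=12):
    """Return (simulations needed, total discarded spin-up days, extra time %)
    for segmented runs of `run_days` covering a study period of `study_days`."""
    n_runs = math.ceil(study_days / run_days)          # segments to cover the period
    extra_days = n_runs * spinup_hours / 24            # discarded spin-up, in days
    extra_pct = 100 * extra_days / study_days          # overhead relative to study period
    return n_runs, extra_days, extra_pct

for rd in (1, 2, 7, 31):
    print(rd, spinup_overhead(rd))
```

Running this reproduces the rows of Table 1 (e.g. 1-day runs need 365 simulations and 182.5 extra days, a 50% overhead).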
It is common practice in sensitivity studies of WRF for wind simulations to apply nudging at heights above the Planetary Boundary Layer (PBL) only (in other words, to disable nudging within the PBL). This is done with the intention of allowing mesoscale processes in the PBL to develop freely [11, 12, 13]. There are 3 ways by which analysis nudging within the PBL can be disabled in WRF [1, 15]:

- i. by specifying the height (in model vertical levels) above which nudging should be applied (or below which nudging should be disabled);
- ii. by letting the model apply nudging above the model-calculated height of the PBL (the accuracy of this model-calculated PBL height depends on the PBL parameterisation scheme used in the model configuration);
- iii. by letting the model choose the higher of a specified height and the model-calculated PBL height, and apply nudging above whichever is chosen (i.e. the higher).
The diversity of options available in the WRF model has always presented the challenge of identifying an optimum configuration for simulations, as model performance has been found to depend on many factors, including availability of computing resources, variables of interest, model configuration and prevailing climatic conditions. In addition, interactions between different options in WRF can sometimes be non-linear, which calls for different possible combinations of options to be tested to determine an optimum configuration. It has been the practice to determine the optimum configuration for a variable of interest with sensitivity studies, which comparatively assess the effects of different model configurations on model performance.
Carvalho et al. [2] found that wind speed predictions by WRF are better with a model configuration that includes the grid nudging option combined with a run time of 2 days instead of 30 days. However, this study did not examine the possible effects of other integration times, the different methods of applying nudging, or the possible interaction of the two options, on model performance. Ohsawa et al. [1] found that applying nudging above PBL heights calculated by the Mellor-Yamada-Janjic (MYJ) PBL scheme produces better results than applying nudging above a fixed height (of 1000 m) for offshore wind simulations in Japan. However, these options were not tested with different run times, nor was the third option of disabling nudging in the PBL tested. In addition, given the strong sensitivity PBL schemes often exhibit to the different climatic conditions that pertain in different seasons and at different locations, the generalisability of the findings of Ohsawa et al. might be limited.
Against this background, in this study, we investigate sensitivity of winds (in an area of good wind energy potential in Ghana [16]), to different combinations of five simulation run times and three methods of disabling nudging in WRF. The accuracies of these different combinations are compared with actual field (ground) measurements at a selected location in the south-eastern part of Ghana (Section 2.1). The aim of this study is to recommend combinations of run time and nudging options that are suitable for model configurations for wind simulations with WRF in Ghana.
2. Methods
2.1. Study area and measured data
The study area, as depicted in Fig. 1, stretches approximately between longitudes 0° and 1° East, and latitudes 4.5° and 6° North, and covers the eastern coastal plains of Ghana. This area was identified as having some of the best wind energy potential in Ghana in a study conducted in 2002 [16]. The Energy Commission of Ghana (EC) has conducted mast measurement exercises at selected sites in the area in the past. The observational (measured) data for this study are from one of the masts in one such measurement exercise, at Anloga (Lat. 5.79 °N, Long. 0.91 °E), a town along the coast in the study area. The data comprise hourly averages of wind speeds for December 2013, measured at heights of 40, 50, and 60 m above ground level, and wind directions, measured at 50 and 60 m.
Fig. 1.
Map of Ghana showing its international borders and the study area (in yellow and red respectively).
2.2. Model and domain configuration
The simulation domains comprised three (3) one-way nested domains of resolutions 27 km, 9 km, and 3 km. The horizontal resolutions were chosen to achieve a recommended nesting ratio of 3 [17, 18]. The final horizontal resolution of 3 km was chosen because it has been found to be optimum for wind simulations in WRF. Increasing the final resolution beyond this value was found not to significantly improve model performance, despite being more computationally expensive [2, 5]. The outermost domain covers Ghana and its neighbouring countries as well as parts of the Sahel deserts to the north and the sea to the south of the country. Domain 2 covers the lower half of the country, and parts of its neighbouring countries to the east. Domain 3 covers the high wind energy potential coastal plains of South-East Ghana. Most of the EC's wind energy measurement masts as well as the sites of some planned wind farms in Ghana, are located in this domain (Domain 3). These domains are depicted in Fig. 2.
Fig. 2.
Simulation domains.
The model configuration is summarised in Table 2. The Mercator geographical projection was used, as recommended by Wang et al. [19]. The Mellor-Yamada Nakanishi and Niino Level 3 (MYNN3) PBL scheme was chosen for this study based on the results of a preliminary evaluation of PBL parameterisation schemes for this area. Following recommendations on vertical level references for tests [19] and nested simulations in WRF [20], 40 vertical levels (automatically set for a pressure of 5000 Pa at the top of the model) were chosen for all domains. Cumulus parameterisation was turned off for domain 3, as the horizontal resolution in this domain is considered fine enough to adequately resolve cumulus processes [2, 21]. All simulations were initialised with the NCEP FNL Operational Model Global Tropospheric Analyses (NCEP GFS-FNL) dataset at 1° horizontal, 26 mandatory pressure levels, and 6-hourly temporal resolution [22]. The same dataset was used for the analysis nudging.
Table 2.
Model configuration for all experiments.
| Model version | Advanced research WRF v3.8.1 | ||
|---|---|---|---|
| Driving data | NCEP FNL Operational Model Global Tropospheric Analyses at 1-degree spatial, 26 pressure levels, and 6 hourly temporal resolutions | ||
| Land use data | 24-category USGS | ||
| Geographical projection scheme | Mercator | ||
| Vertical resolution | 40 levels | ||
| Horizontal resolution (km) | 27 | 9 | 3 |
| Domain size (grid points) | 91 × 103 | 82 × 94 | 64 × 55 |
| Parameterization schemes: | |||
| Cloud microphysics (MP) | Eta microphysics (ETA) scheme | ||
| Long-wave radiation (LW-Rad) | Rapid Radiative Transfer Model scheme (RRTMG version) | ||
| Short-wave radiation (SW-Rad) | Dudhia scheme | ||
| Surface layer (SL) | Nakanishi and Niino (MYNN) surface layer scheme | ||
| Land surface model (LSM) | Noah Land Surface Model | ||
| Planetary boundary layer (PBL) | Mellor-Yamada Nakanishi and Niino Level 3 (MYNN3) | ||
| Cumulus | Kain-Fritsch scheme (turned off for domain 3) | ||
2.3. Experimental design
Fifteen (15) numerical experiments were performed, testing all possible combinations of the five selected run times (1 day, 2 days, 7 days, 14 days, 31 days) and the three methods of disabling nudging in the PBL. Except for the 14 days integration time, all the integration times tested in this study have been used in previous sensitivity studies on wind simulations in WRF: 1 day [3, 14], 2 days [2, 8], 7 days [7], 9 days [6], and 30 or 31 days [5, 12]. Each experiment involved the simulation of the entire month of December 2013. For the five experiments in which the height below which nudging must be disabled had to be specified, 10 model vertical levels (corresponding to approximately 1600 m above sea level (asl) in our vertical grid configuration, whose lowest level is at approximately 56 m asl) were specified. This number of levels is typically specified in model configurations similar to the one used in this study (i.e. pressure at the top of the model = 5000 Pa, vertical levels automatically set) [15, 19]. The number of simulations and other details of each experiment are presented in Table 3. All simulations were run with a spin-up time of 12 hours.
Table 3.
Experimental Design (the run times specified exclude the spin-up time).
| No. | Experiment designation | Simulation run time | Levels above which grid nudging should be applied (or below which nudging should be disabled) | Number of simulations |
|---|---|---|---|---|
| 1 | 1 day_N-10l | 1 day | 10 model vertical levels | 31 |
| 2 | 2 days_N-10l | 2 days | 10 model vertical levels | 16 |
| 3 | 7 days_N-10l | 7 days | 10 model vertical levels | 5 |
| 4 | 14 days_N-10l | 14 days | 10 model vertical levels | 3 |
| 5 | 31 days_N-10l | 31 days | 10 model vertical levels | 1 |
| 6 | 1 day_N-pblh | 1 day | Model-calculated PBL height | 31 |
| 7 | 2 days_N-pblh | 2 days | Model-calculated PBL height | 16 |
| 8 | 7 days_N-pblh | 7 days | Model-calculated PBL height | 5 |
| 9 | 14 days_N-pblh | 14 days | Model-calculated PBL height | 3 |
| 10 | 31 days_N-pblh | 31 days | Model-calculated PBL height | 1 |
| 11 | 1 day_N-a | 1 day | The higher of 10 model vertical levels and PBL height | 31 |
| 12 | 2 days_N-a | 2 days | The higher of 10 model vertical levels and PBL height | 16 |
| 13 | 7 days_N-a | 7 days | The higher of 10 model vertical levels and PBL height | 5 |
| 14 | 14 days_N-a | 14 days | The higher of 10 model vertical levels and PBL height | 3 |
| 15 | 31 days_N-a | 31 days | The higher of 10 model vertical levels and PBL height | 1 |
2.4. Post-processing of wind data from WRF output files
A position (specified as latitude (i), longitude (j), and vertical level (k)) on the WRF grid corresponds to a cell [23]. Surface wind speeds and directions for a position in WRF were calculated from the U (x-component) and V (y-component) winds. As illustrated in Fig. 3, U winds are at the centres of the left and right faces of the cell, while V winds are at the centres of the front and back faces [23]. The observation data for verification comprise only surface winds (winds in the horizontal plane). Therefore, hourly simulated surface winds for a cell were calculated (for every hour) as:
$$WS_{i,j,k}=\sqrt{\left(\frac{U_{i,j,k}+U_{i+1,j,k}}{2}\right)^{2}+\left(\frac{V_{i,j,k}+V_{i,j+1,k}}{2}\right)^{2}} \tag{1}$$
Fig. 3.
Grid cell in WRF.
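The destaggering described above (averaging the two U faces and the two V faces of a cell before taking the magnitude) can be sketched as follows; `cell_wind_speed` is a hypothetical helper operating on one vertical level, not the paper's actual code:

```python
import numpy as np

def cell_wind_speed(u_stag, v_stag):
    """Horizontal wind speed at mass (cell-centre) points from staggered winds.

    u_stag: array of shape (ny, nx+1), x-wind on the left/right cell faces
    v_stag: array of shape (ny+1, nx), y-wind on the front/back cell faces
    """
    u = 0.5 * (u_stag[:, :-1] + u_stag[:, 1:])   # average the two U faces of each cell
    v = 0.5 * (v_stag[:-1, :] + v_stag[1:, :])   # average the two V faces of each cell
    return np.sqrt(u**2 + v**2)
```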
Direction was determined from the same U and V averages for each timestep using the four-quadrant inverse tangent function. Winds were not rotated in this study because, with the Mercator projection used here, the model grid aligns with earth coordinates, so rotation of the winds is not needed [24].
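A common way to apply the four-quadrant inverse tangent for meteorological wind direction (the direction the wind blows from, in degrees clockwise from north) is sketched below; the convention shown is standard, but the paper's exact implementation is not given:

```python
import numpy as np

def wind_direction(u, v):
    """Meteorological wind direction (degrees FROM which the wind blows)
    from cell-centre u and v components, via the four-quadrant arctangent."""
    return (270.0 - np.degrees(np.arctan2(v, u))) % 360.0
```

For example, a purely northerly wind (u = 0, v = -1) gives 0°, and an easterly wind (u = -1, v = 0) gives 90°.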
The wind speeds at heights of analysis (40 m, 50 m, and 60 m) were linearly interpolated from wind speeds for the levels immediately below and above them. While linear interpolation might not be the best approach to obtain the wind speeds for the heights of analysis, we believe this should not significantly affect the relative performance of the options being tested, once the same approach is used in processing the data for all the options tested in the study.
For the interpolation, the vertical levels in WRF were converted to height above ground level (in m) as [19]:

$$h=\frac{PH+PHB}{g}-HGT \tag{2}$$

where PH is the perturbation geopotential (m²/s²), PHB is the base-state geopotential (m²/s²), g is the acceleration due to gravity (m/s²), and HGT represents the terrain height (m) [19]. Values for PH, PHB and HGT were all WRF simulation outputs.
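Eq. (2) and the linear interpolation to the analysis heights can be sketched as follows. The helpers are hypothetical; in practice PH, PHB and HGT would be read from the WRF output files:

```python
G = 9.81  # acceleration due to gravity, m/s^2

def height_agl(ph, phb, hgt):
    """Height of a model level above ground level (Eq. 2): (PH + PHB)/g - HGT."""
    return (ph + phb) / G - hgt

def interp_to_height(ws_below, ws_above, z_below, z_above, z_target):
    """Linearly interpolate wind speed to an analysis height (e.g. 40/50/60 m)
    from the model levels immediately below and above it."""
    frac = (z_target - z_below) / (z_above - z_below)
    return ws_below + frac * (ws_above - ws_below)
```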
2.5. Statistical metrics for validation
Hourly predictions of wind speeds and directions were compared with ground data measured hourly at heights of 40, 50, and 60 m above ground level. The relative performances of the configuration options were evaluated using the comparative Prediction Skill Score measure [25], calculated from the sum of scaled (unity normalised) values of the following statistical metrics of simulated wind speeds and directions:

- i. the Root Mean Square Error (RMSE),
- ii. the Mean Error (ME),
- iii. the Standard Error (STDE), and
- iv. the Correlation Coefficient (CC) between simulated and measured data.
The RMSE is a measure of the difference between simulated and measured values and is calculated as:
$$\mathrm{RMSE}=\sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(x_{sim,i}-x_{obs,i}\right)^{2}} \tag{3}$$

where $x_{sim,i}$ and $x_{obs,i}$ are the simulated and observed values respectively, and N is the number of data points.
The ME, like the RMSE, is a measure of error, but most importantly, helps determine whether the model was over-predicting or under-predicting winds. It was calculated as:
$$\mathrm{ME}=\frac{1}{N}\sum_{i=1}^{N}\left(x_{sim,i}-x_{obs,i}\right) \tag{4}$$
The STDE gives an indication of how spread out predictions are from the predicted mean. A smaller standard error is preferred. STDE is calculated from the RMSE and ME as:
$$\mathrm{STDE}=\sqrt{\mathrm{RMSE}^{2}-\mathrm{ME}^{2}} \tag{5}$$
The linear dependence of simulated and measured wind speeds was assessed with the Pearson Correlation Coefficient (CC). It was calculated as [26]:

$$CC=\frac{\sum_{i=1}^{N}\left(x_{sim,i}-\overline{x_{sim}}\right)\left(x_{obs,i}-\overline{x_{obs}}\right)}{\sqrt{\sum_{i=1}^{N}\left(x_{sim,i}-\overline{x_{sim}}\right)^{2}}\,\sqrt{\sum_{i=1}^{N}\left(x_{obs,i}-\overline{x_{obs}}\right)^{2}}} \tag{6}$$

where $\overline{x_{sim}}$ and $\overline{x_{obs}}$ are the means of the simulated and observed wind speeds respectively.
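The four verification metrics can be computed together from paired hourly series; a minimal sketch (the helper name is ours):

```python
import numpy as np

def verification_metrics(sim, obs):
    """Return (RMSE, ME, STDE, CC) for paired simulated and observed series."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    err = sim - obs
    rmse = np.sqrt(np.mean(err**2))        # root mean square error
    me = np.mean(err)                      # mean (bias) error; sign shows over/under-prediction
    stde = np.sqrt(rmse**2 - me**2)        # standard error, from RMSE and ME
    cc = np.corrcoef(sim, obs)[0, 1]       # Pearson correlation coefficient
    return rmse, me, stde, cc
```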
For angles, the prediction error at each timestep was calculated as [11]:

$$\Delta\theta_{i}=\begin{cases}\theta_{sim,i}-\theta_{obs,i}-360^{\circ}, & \text{if }\ \theta_{sim,i}-\theta_{obs,i}>180^{\circ}\\[2pt] \theta_{sim,i}-\theta_{obs,i}+360^{\circ}, & \text{if }\ \theta_{sim,i}-\theta_{obs,i}<-180^{\circ}\\[2pt] \theta_{sim,i}-\theta_{obs,i}, & \text{otherwise}\end{cases} \tag{7}$$
Angular means were calculated according to circular statistics principles [27], and the correlation between simulated and measured angles was determined with a Circular Correlation Coefficient [27].
Metrics were scaled (normalised) according to [25] as:

$$X_{scaled}=\frac{X-X_{min}}{X_{max}-X_{min}} \tag{8}$$

where $X_{max}$ and $X_{min}$ refer to the maximum and minimum values of the metric (RMSE, STDE, ME, or CC) being scaled. Scaling is such that $0 \le X_{scaled} \le 1$.
The Prediction Skill Score was then calculated as:

$$\mathrm{Skill\ Score}=\left(1-\mathrm{RMSE}_{scaled}\right)+\left(1-\mathrm{STDE}_{scaled}\right)+\left(1-\left|\mathrm{ME}\right|_{scaled}\right)+CC_{scaled} \tag{9}$$

such that $0 \le \mathrm{Skill\ Score} \le 4$.
The scheme with the highest Skill Score was ranked as the best scheme and vice versa.
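A sketch of the scaling and skill score computation as we read it from [25] (the exact combination of terms should be checked against that reference; the helper names are ours): error metrics are min-max scaled so that lower is better, CC so that higher is better, and the four contributions are summed per configuration.

```python
import numpy as np

def skill_scores(rmse, me, stde, cc):
    """Prediction skill score for each configuration (Eqs. 8-9).
    Inputs are arrays of one metric value per configuration; output is in [0, 4]."""
    def scale(x):
        # min-max (unity) normalisation across configurations
        x = np.asarray(x, float)
        rng = x.max() - x.min()
        return (x - x.min()) / rng if rng else np.zeros_like(x)
    return ((1 - scale(rmse)) + (1 - scale(stde))
            + (1 - scale(np.abs(me))) + scale(cc))
```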
The fitted Weibull probability curves for measured and predicted data from the tested configurations were also compared. The Weibull distribution is widely used to represent wind speed distributions for wind energy applications, primarily because it fits wind data well. Its probability function is [28]:

$$f(WS)=\frac{k}{c}\left(\frac{WS}{c}\right)^{k-1}\exp\left[-\left(\frac{WS}{c}\right)^{k}\right] \tag{11}$$
where f(WS) is the probability of observing wind speed WS, k is the dimensionless Weibull shape parameter, and c is the Weibull scale parameter. The two parameters can be determined as follows [28]:

$$k=\left(\frac{\sigma}{\overline{WS}}\right)^{-1.086} \tag{12}$$

$$c=\frac{\overline{WS}}{\Gamma\left(1+\frac{1}{k}\right)} \tag{13}$$

where $\sigma$ and $\overline{WS}$ (m/s) are the standard deviation and the average, respectively, of the wind speed data.
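Eqs. (12) and (13) can be sketched as a small helper (hypothetical, not the paper's code; `math.gamma` provides the Gamma function):

```python
import math

def weibull_params(mean_ws, std_ws):
    """Weibull shape k and scale c (m/s) from the sample mean and standard
    deviation of wind speed (Eqs. 12-13)."""
    k = (std_ws / mean_ws) ** -1.086          # shape parameter (dimensionless)
    c = mean_ws / math.gamma(1 + 1 / k)       # scale parameter (m/s)
    return k, c
```

With the observed mean of 5.9 m/s and a standard deviation near 1.8 m/s, this yields k and c close to the observation row of Table 5.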
Effects of the tested model configurations on wind power estimation were evaluated by comparing the Average Wind Power Flux estimated with measured and predicted data. The Average Wind Power Flux, assuming a rotor swept area of unity, was estimated as [12]:

$$P=\frac{1}{2N}\rho\sum_{i=1}^{N}WS_{i}^{3} \tag{14}$$

where $WS_{i}$ is the wind speed and $\rho$ the air density. Due to a lack of access to local air density measurements, a value of 1.156 kg m−3, based on the findings of a study in the Caribbean, was assumed [29] (on the assumption that the Caribbean has a climate similar to Ghana's). In addition, estimates of local air density with data (average temperature, relative humidity and pressure) from selected online sources [30, 31, 32] were found to average approximately 1.160 kg m−3, producing less than 1% difference in the Average Power Flux estimated with the air density value from the Caribbean study.
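Eq. (14) reduces to a one-line helper (hypothetical name, unit rotor area assumed):

```python
def mean_power_flux(ws, rho=1.156):
    """Average wind power flux (W/m^2) from hourly wind speeds (Eq. 14),
    with rho in kg/m^3 and a rotor swept area of unity."""
    return 0.5 * rho * sum(w**3 for w in ws) / len(ws)
```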
3. Results and discussion
Fig. 4 shows the observed and predicted average wind speeds at 60 m for December 2013. It can be observed from the figure that, generally, the shorter run times gave better predictions, with the 1 day runs giving the best. We also see that average wind speed predictions for experiments conducted with nudging above 10 model vertical levels (the N-10l group) and those conducted with nudging above the higher of 10 vertical levels or the model-calculated PBL height (the N-a group) are approximately the same. However, in contrast to the trend in the N-10l and N-a groups, average wind speed predictions for experiments from the N-pblh group (in which nudging was disabled below the model-calculated height of the PBL), though better with 1 day and 2 days run times, become almost constant with run times of 7 days or more. The better performance of this approach to disabling nudging for longer run times (7 days or more in this case) can also be seen from the figure.
Fig. 4.
Average wind speeds at 60 m a.g.l from experiments grouped by Nudging options tested.
The better predictions with the shorter run times can be attributed to the relatively lower model divergence and error accumulation that shorter run times achieve [2]. We observe this in Fig. 5, which shows plots of daily average wind speeds (in December 2013) for the 5 run times tested. We also observe that the deviations of the predicted wind speeds from the observed wind speeds appear more pronounced in the plots for run times of 7, 14, and 31 days in Fig. 5(c), (d), and (e) respectively, pointing to relatively larger model divergence and error accumulation in experiments with these run times.
Fig. 5.
Average wind speeds at 60 m a.g.l from experiments conducted with run times of; (a) 1 day, (b) 2 days, (c) 7 days, (d) 14 days, (e) 31 days.
The similarity in results between the N-10l and N-a groups of experiments mentioned earlier can again be seen in the plots in Fig. 5: there is little deviation between the daily average wind speed plots for experiments from the two groups, irrespective of simulation run time. This similarity suggests that the calculated PBL height was either often close to, or less than, the height at vertical level 10 (approximately 1600 m for our experimental setup). From Fig. 6, which depicts the daily maximum PBL heights at the location where our observational data was taken, we find the latter to be the case. This means that, in the N-a group experiments, nudging was effectively disabled in the same manner as in the N-10l experiments: below the lowest 10 vertical levels.
Fig. 6.
Daily average PBL heights from N-pblh group of experiments.
The more or less constant predictions of average wind speeds in the 7, 14 and 31 days experiments from the N-pblh group can be explained by the relatively small deviation between the predicted wind speeds from these 3 experiments, as can be seen in Fig. 7(b). This result can also be taken as an indication of the consistency of this method of disabling nudging in the PBL in simulations with run times of 7 days or more. A possible contributing factor to the better performance of the N-pblh experiments might be the ability of that method to determine more appropriately the levels to nudge. With the N-10l experiments, levels above an assumed constant PBL height (of 1600 m) are nudged. However, this might not be ideal, as the PBL height tends to vary (as can be seen in Fig. 6), falling within the lowest 1000 m–3000 m of the atmosphere depending on the amount of ground friction and turbulent mixing present [14, 33].
Fig. 7.
Average wind speeds at 60 m a.g.l from experiments conducted with Nudging disabled; (a) below 10 vertical levels, (b) below the model-calculated PBL height, (c) below the higher of 10 vertical levels or PBL height.
3.1. Statistical metrics and prediction skill
The trends observed above are confirmed by the statistical metrics presented in Table 4. Generally, experiments with shorter run times had better RMSE, ME, and STDE than those with longer run times from the same experimental groups. The better consistency of disabling nudging below the model-calculated PBL height is seen in the comparatively smaller variation in RMSEs and MEs for experiments from the N-pblh group. In addition, experiments from this group record some of the lowest STDEs. Low STDEs can be taken as an indication of consistency in model performance [2, 12]. The greater model divergence and error accumulation associated with longer run times is also seen in the greater RMSEs and MEs for experiments with run times of 7 days or more.
Table 4.
Statistical metrics for wind speed predictions at 60 m.
| Average wind speeds (m/s) | RMSE (m/s) | ME (m/s) | STDE (m/s) | CC | Prediction skill score | |
|---|---|---|---|---|---|---|
| Observation | 5.9 | |||||
| 1 day_N-10l | 5.7 | 1.23 | −0.22 | 1.21 | 0.7 | 3.5 |
| 1 day_N-pblh | 5.3 | 1.25 | −0.61 | 1.10 | 0.8 | 3.4 |
| 1 day_N-a | 5.7 | 1.25 | −0.25 | 1.22 | 0.7 | 3.4 |
| 2 days_N-10l | 5.4 | 1.40 | −0.52 | 1.30 | 0.7 | 2.8 |
| 2 days_N-pblh | 5.2 | 1.30 | −0.67 | 1.12 | 0.8 | 3.3 |
| 2 days_N-a | 5.4 | 1.40 | −0.55 | 1.29 | 0.7 | 2.8 |
| 7 days_N-10l | 4.8 | 1.86 | −1.10 | 1.49 | 0.6 | 1.2 |
| 7 days_N-pblh | 5.1 | 1.38 | −0.76 | 1.15 | 0.8 | 3.0 |
| 7 days_N-a | 4.8 | 1.89 | −1.16 | 1.49 | 0.6 | 1.1 |
| 14 days_N-10l | 4.6 | 1.98 | −1.29 | 1.50 | 0.6 | 0.8 |
| 14 days_N-pblh | 5.1 | 1.42 | −0.82 | 1.16 | 0.8 | 2.9 |
| 14 days_N-a | 4.6 | 2.01 | −1.31 | 1.52 | 0.6 | 0.7 |
| 31 days_N-10l | 4.5 | 1.98 | −1.44 | 1.37 | 0.7 | 1.1 |
| 31 days_N-pblh | 5.1 | 1.48 | −0.81 | 1.24 | 0.7 | 2.6 |
| 31 days_N-a | 4.5 | 2.04 | −1.45 | 1.43 | 0.6 | 0.8 |
Combining the metrics into prediction skill scores, we find that configurations with the shorter run times generally record the highest skill scores, as depicted in Fig. 8. In addition, disabling nudging below 1600 m (model vertical level 10) is best for 1 day and 2 days runs only. For the other run times tested, disabling nudging below the model-calculated PBL height offers better performance; generally, it also offers the most consistent performance. Full results on the metrics and skill scores are presented in Annex-I.
Fig. 8.
Speed prediction skill scores at 60 m for configurations tested.
3.2. Effect on wind power estimates
Fig. 9 depicts the Weibull probability distribution plots for observed data and data predicted with seven of the configuration options tested. Plots for the N-a group of experiments are not included because they are very similar to those of the N-10l group. Moreover, for the 7, 14, and 31 days run times, only experiments from the N-pblh group are plotted because they ranked better than those from the other groups. It can be seen from the figure that, apart from the 1 day_N-10l configuration, all the other configurations overestimate the occurrence of lower speeds and underestimate that of higher speeds, with the deviation at lower speeds greatest for configurations with longer run times. Therefore, as can be observed in Fig. 10, the shorter run times generally gave better estimations of the power flux. Though there are deviations with the 1 day_N-10l configuration as well, they are relatively small, and this configuration predicts the higher speeds better. The more accurate estimations by the 1 day_N-10l configuration resulted in a relatively better average wind speed prediction error of 4%, and a wind power estimation error of 16%, as can be seen from Table 5.
Fig. 9.
Weibull P.D.F plots for data at 60 m from observations and seven of the options tested.
Fig. 10.
Wind power estimates from observed and predicted data at 60 m.
Table 5.
Percentage error of estimated average wind speed and energy estimates at 60 m.
| Average wind speeds (m/s) | Wind speed prediction error (%) | Shape factor, k | Scale factor, c | Estimated power flux (W/m2) | Power flux estimation error (%) | |
|---|---|---|---|---|---|---|
| Observation | 5.9 | 3.68 | 6.56 | 152 | ||
| 1 day_N-10l | 5.7 | 3.7 | 4.50 | 6.25 | 127 | 16 |
| 1 day_N-pblh | 5.3 | 10.3 | 4.15 | 5.85 | 105 | 31 |
| 1 day_N-a | 5.7 | 4.3 | 4.39 | 6.22 | 125 | 17 |
| 2 days_N-10l | 5.4 | 8.9 | 4.09 | 5.95 | 111 | 27 |
| 2 days_N-pblh | 5.2 | 11.3 | 4.07 | 5.79 | 102 | 33 |
| 2 days_N-a | 5.4 | 9.3 | 3.98 | 5.93 | 110 | 27 |
| 7 days_N-10l | 4.8 | 18.6 | 3.78 | 5.33 | 81 | 46 |
| 7 days_N-pblh | 5.1 | 12.9 | 3.80 | 5.70 | 99 | 35 |
| 7 days_N-a | 4.8 | 19.6 | 3.62 | 5.28 | 80 | 48 |
| 14 days_N-10l | 4.6 | 21.9 | 3.45 | 5.14 | 74 | 51 |
| 14 days_N-pblh | 5.1 | 13.8 | 3.67 | 5.66 | 97 | 36 |
| 14 days_N-a | 4.6 | 22.2 | 3.42 | 5.13 | 74 | 51 |
| 31 days_N-10l | 4.5 | 24.3 | 3.30 | 4.99 | 69 | 54 |
| 31 days_N-pblh | 5.1 | 13.7 | 3.45 | 5.68 | 100 | 34 |
| 31 days_N-a | 4.5 | 24.5 | 3.20 | 4.99 | 70 | 54 |
For the nudging options tested, disabling nudging below the model-calculated PBL height (experiments from the N-pblh group) produced more consistent results, with errors ranging from approximately 10 to 13.7% of observed average wind speeds, and 31–34% of power flux estimates from observed data (see Table 5). In contrast, though disabling nudging below 1600 m (model vertical level 10) gave the best result with the 1 day run time, it produced a wider error range as the run time was increased to 31 days: from approximately 4 to 24% of observed average wind speeds, and 16–54% of energy estimates from observed data. The same trend is found for experiments from the N-a group.
The trends discussed above are also observed in the shape and scale parameters for the fitted data, also presented in Table 5.
3.3. Wind direction predictions
We found no significant difference in the direction predictions of the various configuration options tested. Observed wind directions for this period were mostly from the North-East and the East. All the model configurations tested predicted the dominant direction to be North-East. This can be seen in Fig. 11, which shows wind roses for observed and predicted wind directions at a height of 60 m. From the statistical metrics for direction prediction at 60 m, presented in Table 6, our analysis suggests 1 day_N-pblh as the best ranked configuration. We also observe some of the trends in speed predictions being repeated in the direction predictions: the shorter run time options generally give better direction prediction skill scores, and the N-pblh group of experiments exhibits the most consistent performance compared to the other groups. This can be explained by the fact that wind directions were calculated from the same U and V components as the wind speed predictions. However, all the options tested predict the same dominant wind direction (North-East) for this site in December 2013.
Fig. 11.
Wind roses of observation data and scheme predictions at a height of 60 m.
Table 6.
Statistical metrics for wind direction predictions at 60 m.
| Configuration | Average wind direction (degrees) | RMSE (degrees) | ME (degrees) | STDE (degrees) | CircC | Prediction skill score |
|---|---|---|---|---|---|---|
| Observation | 55.8 | |||||
| 1 day_N-10l | 39.6 | 47.3 | −25.9 | 39.6 | 0.4 | 2.6 |
| 1 day_N-pblh | 37.5 | 46.7 | −27.6 | 37.7 | 0.5 | 3.2 |
| 1 day_N-a | 39.4 | 47.2 | −26.9 | 38.8 | 0.4 | 2.9 |
| 2 days_N-10l | 37.7 | 47.6 | −25.1 | 40.5 | 0.5 | 2.4 |
| 2 days_N-pblh | 39.7 | 48.9 | −27.0 | 40.7 | 0.4 | 2.6 |
| 2 days_N-a | 38.9 | 47.1 | −25.5 | 39.6 | 0.5 | 2.6 |
| 7 days_N-10l | 41.0 | 56.6 | −26.9 | 49.8 | 0.5 | 1.7 |
| 7 days_N-pblh | 42.7 | 47.8 | −26.0 | 40.1 | 0.4 | 2.5 |
| 7 days_N-a | 38.7 | 56.2 | −27.4 | 49.1 | 0.5 | 1.9 |
| 14 days_N-10l | 40.4 | 61.0 | −27.5 | 54.4 | 0.5 | 1.2 |
| 14 days_N-pblh | 42.8 | 48.1 | −25.6 | 40.8 | 0.4 | 2.3 |
| 14 days_N-a | 37.0 | 59.8 | −28.0 | 52.9 | 0.5 | 1.5 |
| 31 days_N-10l | 44.2 | 62.0 | −29.0 | 54.8 | 0.5 | 1.5 |
| 31 days_N-pblh | 43.3 | 47.8 | −24.9 | 40.8 | 0.4 | 2.3 |
| 31 days_N-a | 50.5 | 59.0 | −24.6 | 53.6 | 0.4 | 0.7 |
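The direction metrics in Table 6 (RMSE, ME, STDE) require circular arithmetic, since for example 350° and 10° differ by 20°, not 340°. Below is a minimal Python sketch of the standard conversion from wind components (u, v) to meteorological direction and of a wrapped direction RMSE; the function names are ours, and this is not the study's actual analysis code (which used the R `circular` package [27]).

```python
import numpy as np

def met_direction(u, v):
    """Meteorological wind direction in degrees (direction the wind blows FROM),
    from zonal (u) and meridional (v) wind components."""
    return (270.0 - np.degrees(np.arctan2(v, u))) % 360.0

def angular_diff(pred, obs):
    """Smallest signed angular difference in degrees, wrapped to (-180, 180]."""
    return (np.asarray(pred) - np.asarray(obs) + 180.0) % 360.0 - 180.0

def direction_rmse(pred, obs):
    """RMSE of direction predictions using wrapped differences."""
    return float(np.sqrt(np.mean(angular_diff(pred, obs) ** 2)))

# An easterly wind (u = -5, v = 0) blows from 90 degrees:
assert met_direction(-5.0, 0.0) == 90.0
# Wrap-around: 350 vs 10 degrees differ by 20 degrees, not 340:
print(direction_rmse([350.0], [10.0]))  # 20.0
```

Without wrapping, errors near the 0°/360° boundary would be grossly overstated, which matters here because the observed directions cluster in the North-East quadrant.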
3.4. Effect of elevation (height) on findings
We found no significant changes in the ranking of the configurations tested in this study when the analysis was repeated for the other heights at which we had observational data (i.e. 50 m and 40 m). Configuration rankings remained the same, albeit with marginal drops in prediction skill scores for direction prediction, as can be seen from Fig. 12(b).
Fig. 12.
Skill scores for (a) speed prediction and (b) direction prediction (values are scores at 60 m height).
We also found no changes in the trends observed in the power estimated with data from the configurations. Estimated power dropped at the lower heights, owing to the lower wind speeds there. Full results on the estimated power for the lower heights can be found in Annex-I.
4. Conclusion
This study investigated the effects of combining simulation run times of varying lengths with 3 different methods of disabling nudging in the PBL on wind speed and direction predictions by the WRF model. The effects of the configuration options on power estimated from the predicted data were also examined. Five selected run times were each combined with three methods of disabling grid nudging within the PBL in WRF simulations.
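In WRF, these methods of restricting grid nudging are controlled through the &fdda section of namelist.input (cf. [15]). The fragment below is an illustrative sketch, not this study's exact configuration: the fixed-level method (N-10l) roughly corresponds to `if_zfac_uv` with `k_zfac_uv`, and the PBL-height method (N-pblh) to `if_no_pbl_nudging_uv`; in practice one of the two would be used, not both.

```fortran
&fdda
 grid_fdda            = 1,                  ! enable analysis (grid) nudging
 gfdda_inname         = "wrffdda_d<domain>",
 gfdda_interval_m     = 360,                ! minutes between analysis times (illustrative)
 guv                  = 0.0003,             ! nudging coefficient for u and v (illustrative)
 if_no_pbl_nudging_uv = 1,                  ! no wind nudging below the model-calculated PBL height
 if_zfac_uv           = 1,                  ! alternatively, limit wind nudging by model level ...
 k_zfac_uv            = 10,                 ! ... with no wind nudging below model level 10
/
```

Analogous switches exist for temperature and moisture (`if_no_pbl_nudging_t`, `if_no_pbl_nudging_q`, `k_zfac_t`, `k_zfac_q`); this study's experiments varied only how the wind nudging levels were restricted.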
We found that shorter simulation run times generally offer better model performance than longer ones. Consistent with the findings of Carvalho et al. [2], 2-day runs outperformed 30-day runs when combined with an appropriate method of disabling grid nudging in the PBL. In this study, the 2-day runs reduced the average wind speed prediction error for the study area from 24% of observed wind speeds to less than 10%, and the error in wind power flux estimated with predicted data from 54% of estimates with observed data to 27%. The results improved further when the run time was reduced to 1 day: the average wind speed prediction error dropped to less than 4% of the observed average, and the error in power estimated with predicted data dropped to 16% of estimates from observed data. Furthermore, our results suggest that, where longer run times must be used (because of the time or computing resources that shorter run times demand), prediction error can be reduced by choosing an appropriate method of applying grid nudging. In line with the findings of Ohsawa et al. [1], disabling nudging below the model-calculated PBL height rather than below a fixed height reduces the average prediction error, possibly because this approach better determines the levels at which winds should be nudged. In this study, it reduced the speed prediction error from 24% to 14%, and the error in power flux estimated with the predicted data from 54% to 34%. It must be noted that the performance margins of this approach (disabling below the model-calculated PBL height) might differ with different PBL schemes, as the model's estimate of the PBL height depends on the PBL scheme used for the simulations [14, 33].
In addition, this study did not examine the sensitivity of these options to seasonal variations in conditions, which often affect model performance.
Based on our results, we recommend that, for optimum model performance, grid nudging options be chosen with run times in mind. For our study area (and perhaps other areas with similar terrain and climatic conditions in Ghana and the West African sub-region), model configurations with shorter run times of 1 or 2 days, combined with grid nudging applied only above a height of 1600 m, give the best average wind speed predictions and are therefore recommended for wind simulations. For longer run times (of 7 days or more), however, the more consistently performing approach of disabling nudging below the model-calculated PBL height gives better results and is recommended instead.
Declarations
Author contribution statement
Denis E.K. Dzebre: Conceived and designed the experiments; Performed the experiments; Analyzed and interpreted the data; Contributed reagents, materials, analysis tools or data; Wrote the paper.
Akwasi A. Acheampong: Conceived and designed the experiments.
Joshua Ampofo: Conceived and designed the experiments.
Muyiwa S. Adaramola: Conceived and designed the experiments; Contributed reagents, materials, analysis tools or data.
Funding statement
This work was supported by the Energy and Petroleum (EnPe) Project of the Norwegian Agency for Development Cooperation (Norad) and by the UPERCRET-KNUST Program, which supported Denis Edem Kwame Dzebre with a PhD scholarship.
Competing interest statement
The authors declare no conflict of interest.
Additional information
No additional information is available for this paper.
Contributor Information
Denis E.K. Dzebre, Email: dekdzebre.coe@knust.edu.gh.
Muyiwa S. Adaramola, Email: muyiwa.adaramola@nmbu.no.
Appendix A. Supplementary data
The following is the supplementary data related to this article:
References
- 1.Ohsawa T. Wind Europe Summit. Hamburg Messe; Hamburg, Germany: 2016. Investigation of WRF configuration for offshore wind resource maps in Japan. [Google Scholar]
- 2.Carvalho D. A sensitivity study of the WRF model in wind simulation for an area of high wind energy. Environ. Model. Softw. 2012;33:23–34. [Google Scholar]
- 3.Chadee X., Seegobin N., Clarke R. Optimizing the Weather Research and Forecasting (WRF) model for mapping the near-surface wind resources over the Southernmost Caribbean Islands of Trinidad and Tobago. Energies. 2017;10:931. [Google Scholar]
- 4.Fernández-González S. Sensitivity analysis of the WRF model: wind-resource assessment for complex terrain. J. Appl. Meteorol. Climatol. 2017 [Google Scholar]
- 5.Giannakopoulou E.-M., Nhili R. WRF model methodology for offshore wind energy applications. Adv. Meteorol. 2014 [Google Scholar]
- 6.Giannaros T.M., Melas D., Ziomas I. Performance evaluation of the Weather Research and Forecasting (WRF) model for assessing wind resource in Greece. Renew. Energy. 2017;102:190–198. [Google Scholar]
- 7.Ji-Hang L., Zhen-Hai G., Hui-Jun W. Analysis of wind power assessment based on the WRF model. Atmos. Ocean. Sci. Lett. 2014;7:126–131. [Google Scholar]
- 8.Mohammadpour Penchah M., Malakooti H., Satkin M. Evaluation of planetary boundary layer simulations for wind resource study in east of Iran. Renew. Energy. 2017;111:1–10. [Google Scholar]
- 9.Mughal M.O. Wind modelling, validation and sensitivity study using Weather Research and Forecasting model in complex terrain. Environ. Model. Softw. 2017;90:107–125. [Google Scholar]
- 10.Surussavadee C., Wu W. 2015 IEEE International Geoscience and Remote Sensing Symposium. IGARSS); 2015. Evaluation of WRF planetary boundary layer schemes for high-resolution wind simulations in Northeastern Thailand; pp. 3949–3952. [Google Scholar]
- 11.Surussavadee C. 2017 8th International Renewable Energy Congress (IREC) 2017. Evaluation of WRF near-surface wind simulations in tropics employing different planetary boundary layer schemes; pp. 1–4. [Google Scholar]
- 12.Carvalho D. WRF wind simulation and wind energy production estimates forced by different reanalyses: comparison with observed data for Portugal. Appl. Energy. 2014;117:116–126. [Google Scholar]
- 13.Santos-Alamillos F.J. Analysis of WRF model wind estimate sensitivity to physics parameterization choice and terrain representation in Andalusia (Southern Spain) J. Appl. Meteorol. Climatol. 2013;52:1592–1609. [Google Scholar]
- 14.Banks R.F. Sensitivity of boundary-layer variables to PBL schemes in the WRF model based on surface meteorological observations, lidar, and radiosondes during the HygrA-CD campaign. Atmos. Res. 2016;176–177:185–201. [Google Scholar]
- 15.University Corporation for Atmospheric Research (UCAR) 2018. Steps to Run Analysis Nudging in WRF-ARW. Available from: http://www2.mmm.ucar.edu/wrf/users/wrfv3.1/How_to_run_grid_fdda.html [cited 2018 August 16]. [Google Scholar]
- 16.Essandoh E.O., Osei E.Y., Adam F.W. Prospects of wind power generation in Ghana. Int. J. Mech. Eng. Technol. 2014;5(10):156–179. [Google Scholar]
- 17.Gill D., Pyle M. 2012. Nesting in WRF. Presentation. [Google Scholar]
- 18.Duda M. 2016. Running the WRF Preprocessing System. Presentation. [Google Scholar]
- 19.Wang W., Bruyère C., Duda M., Dudhia J., Gill D., Kavulich M., Keene K., Chen M., Lin H.-C., Michalakes J., Rizvi S., Zhang X., Berner J., Ha S., Fossell K. 2016. ARW Version 3 Modeling System User's Guide. [Google Scholar]
- 20.Wang W. 2017. Considerations for Designing a Numerical Experiment. [Google Scholar]
- 21.Skamarock W.C. A description of the advanced research WRF version 3. Tech. Note. 2008:1–96. [Google Scholar]
- 22.NCEP FNL Operational Model Global Tropospheric Analyses, Continuing from July 1999. 2000. [Google Scholar]
- 23.Mandel J., Beezley J., Kochanski A. Coupled atmosphere-wildland fire modeling with WRF 3.3 and SFIRE 2011. Geosci. Model Dev. 2011;4:591–610. [Google Scholar]
- 24.Ovens D. How to Properly Rotate WRF Winds to Earth-Relative Coordinates Using Python, GEMPAK, and NCL. Available from: http://www-k12.atmos.washington.edu/∼ovens/wrfwinds.html [cited 2019 January 18].
- 25.Dudhia J. WAM); 2017. Evaluation of Weather Research and Forecasting (WRF) Model Physics in Simulating West African Monsoon. [Google Scholar]
- 26.Statistical Computation Laboratory. Computing the Pearson Correlation Coefficient. Available from: http://www.stat.wmich.edu/s216/book/node122.html [cited 2018 May 27].
- 27.Agostinelli C., Lund U. 2013. R Package ‘Circular’: Circular Statistics (Version 0.4-7) [Google Scholar]
- 28.Adaramola M.S., Agelin-Chaab M., Paul S.S. Assessment of wind power generation along the coast of Ghana. Energy Convers. Manag. 2014;77:61–69. [Google Scholar]
- 29.Chadee X.T., Clarke R.M. Air density climate of two Caribbean tropical islands and relevance to wind power. ISRN Renew. Energy. 2013;2013:7. [Google Scholar]
- 30.Cedar Lake Ventures Inc. 2018. Average Weather in Accra, Ghana, Year Round – Weather Spark. Available from: https://weatherspark.com/y/42322/Average-Weather-in-Accra-Ghana-Year-Round [cited 2018 June 7]. [Google Scholar]
- 31.Climate and Average Monthly Weather in Accra, Ghana. World Weather and Climate Information; Accra, Ghana: 2019. Available from: https://weather-and-climate.com/average-monthly-Rainfall-Temperature-Sunshine [cited 2019 January 24]. [Google Scholar]
- 32.The World Bank Group 2019. Climate Change Knowledge Portal. Available from: http://sdwebx.worldbank.org/climateportal/index.cfm?page=country_historical_climate&ThisRegion=Africa&ThisCCode=GHA [cited 2019 January 24]. [Google Scholar]
- 33.Boadh R. Sensitivity of PBL schemes of the WRF-ARW model in simulating the boundary layer flow parameters for its application to air pollution dispersion modeling over a tropical station. Atmósfera. 2016;29:61–81. [Google Scholar]