Article

Quantifying the Performances of the Semi-Distributed Hydrologic Model in Parallel Computing—A Case Study

1 Texas A&M AgriLife Research (Texas A&M University System), P.O. Box 1658, Vernon, TX 76384, USA
2 Department of Soil and Water Systems, University of Idaho, 322 E. Front St, Boise, ID 83702, USA
* Author to whom correspondence should be addressed.
Water 2019, 11(4), 823; https://doi.org/10.3390/w11040823
Submission received: 21 March 2019 / Revised: 15 April 2019 / Accepted: 16 April 2019 / Published: 19 April 2019
(This article belongs to the Section Hydrology)

Abstract

This research examines how parallel computing can improve the performance of hydrological simulations under different calibration schemes (SCOs). The results show that parallel computing can save up to 90% of execution time while achieving an 81% improvement in simulation performance. Basic statistics, including (1) the index of agreement (D), (2) the coefficient of determination (R2), (3) the root mean square error (RMSE), and (4) the percentage of bias (PBIAS), are used to evaluate simulation performance after model calibration in computer parallelism. Once the best calibration scheme is selected, additional efforts are made to improve model performance at the selected calibration target points, while the Rescaled Adjusted Partial Sums (RAPS) method is used to evaluate the trend in annual streamflow. The quantitative result of reducing execution time by 86% on average indicates that parallel computing is another avenue to advance hydrologic simulations at the urban-rural interface, such as the Boise River Watershed, Idaho. Therefore, this research provides useful insights for hydrologists designing and setting up their own hydrological modeling exercises using the cost-effective parallel computing described in this case study.

1. Introduction

A hydrologic model is commonly used to simulate real-world problems in many water-related fields, including hydrological, ecological, biological, and environmental studies [1,2,3,4]. Recent advances in data-intensive products, such as the North American Land Data Assimilation System (NLDAS) and NEXt Generation RADar (NEXRAD), enable hydrologists to better characterize hydrological processes at higher spatial and temporal resolutions [5,6,7,8]. However, calibrating models with these data-intensive inputs remains a daunting task for hydrologists.
Because observed datasets are often insufficient, computer simulation is a typical approach to characterize hydrological processes and to enhance hydrological simulations based on physical and conceptual parameters. In general, hydrologists calibrate their models with a selected set of key parameters to keep simulations efficient in terms of cost and time [9,10,11,12,13,14]. However, model performance is then constrained by the number of parameter sets used, which does not necessarily ensure that the selected model best characterizes hydrological processes in a complex watershed. Computer parallelism is therefore a way to enhance simulation performance when many parameters are considered for adjustment in hydrological modeling settings. Thanks to computer parallelism, computational modeling has advanced rapidly [15,16,17]. Although computer speed and capacity improve over time, model calibration time is still challenging for many practitioners [18].
There are two typical approaches to parallelize hydrological simulations. First, a parallel algorithm with parallel threads (e.g., multiple cores) within a single computer is one approach [19,20,21,22,23]. The other approach is implementation of a parallel algorithm in connection with multiple machines [24,25]. Although several studies have been conducted for parallelizing model calibrations to reduce execution time and effort using multiple threads in a single machine [26,27,28], few studies focus on quantifying how multiple machines associated with cluster-based computing architecture can improve model performances in the field of hydrology. Moreover, computer parallelism on a cluster-based framework has not been fully implemented to find optimal parameters for hydrological simulations, especially Hydrological Simulation Program–Fortran (HSPF) modeling settings. Therefore, this research explores how computer parallelism can be implemented to evaluate the enhancement of hydrological simulations using HSPF so that hydrologists can apply it to their own applications.
Figure 1 shows a flowchart of computer parallelism for calibrating HSPF in a Linux cluster framework. A small Linux cluster system (sLCS) is first built with one master node and six slave nodes. Next, climate data and geographical information are used to create sub-watersheds using a built-in delineation tool in BASINS 4 software [29], and the climate data are then routed into HSPF to generate streamflow. Once the simulated streamflow is generated by HSPF, calibration and validation exercises in computer parallelism are conducted to evaluate how well HSPF characterizes hydrological responses associated with the climate and land-use/land-cover (LULC) profiles in the study area.
Four different calibration schemes (described later) are used to determine optimal calibration scenarios in computer parallelism. BEOPEST, a parallel version of PEST (a model-independent parameter optimization program) [30], is used to calibrate HSPF with 14 cores using a message passing interface (MPI) in sLCS, and all simulation outcomes associated with these schemes are then evaluated based on the performance criteria listed in Appendix A. These criteria include: (1) the index of agreement (D), (2) the coefficient of determination (R2), (3) the root mean square error (RMSE), and (4) the percentage of bias (PBIAS). Once the best calibration scheme is selected, additional efforts are made to improve model performance at the selected calibration target points, while the Rescaled Adjusted Partial Sums (RAPS) method is used to evaluate the trend in annual streamflow. The results indicate that hydrologic simulation using BEOPEST and HSPF in the sLCS environment is a way to improve model performance, especially when many parameters in a complex watershed are used for model calibration exercises.

2. Study Area and Data

The Boise River Watershed (BRW) is selected as the study area (Figure 2). As a tributary of the Snake River system, the BRW plays a key role in providing water to the Boise metropolitan area, including Boise, Nampa, Meridian, and Caldwell. The drainage area of the basin is about 10,619 km2, and the river, with a mainstream length of 164 km, flows into the Snake River near Parma. More than 40 percent of Idaho residents live in this basin, and 60 percent of those people reside around the floodplain. The main physiographic characteristic of the BRW is that a greater proportion of precipitation falls as snow at higher elevations. This makes high flows driven by snowmelt relatively predictable, and a localized flood event is therefore often observed during late spring and early summer.
Simulating streamflow with HSPF requires primary input datasets, including precipitation, temperature, and potential evapotranspiration (PET). Phase 2 of the North American Land Data Assimilation System (NLDAS-2) data were used as climate forcing because a series of required climate data (e.g., precipitation, temperature, downward solar radiation, downward longwave radiation, wind speed, specific humidity, surface pressure, potential evapotranspiration, and others) were available. These datasets were on a one-eighth-degree grid and were used for the simulation period from 1 January 1979 to 31 December 2015 (37 years) at hourly time-steps. Note that the NLDAS-2 data have been examined against observed data products in several studies [31,32,33,34,35].
The derived climate data from NLDAS-2 were then converted to the watershed data management (WDM) format to be used as inputs for HSPF. However, there were a few issues with converting the data from NLDAS-2 to WDM using conventional tools, which required significant time and effort for all 112 grid points in the BRW. Since the existing WDM utility tool could not import a large volume of forcing data (roughly 30 MB per single file), an R script [36] was used to extract the forcing data from NLDAS-2 into a WDM file. The SARA Time Series Utility [37] was then used to create a complete set of WDM files.
A 30 m resolution digital elevation model (DEM) provided by the U.S. Geological Survey (USGS) was used to delineate watersheds and to determine flow directions in the BASINS 4.1 modeling platform [29]. The National Hydrography Dataset (NHD) and the DEM were then used to characterize stream routing processes at a functional sub-watershed scale (1:100,000). A total of six observed streamflow stations were selected as calibration target points (TPs), including three points above reservoirs (where no major diversion is found), two points below reservoirs, and one point at the watershed outlet (see Figure 2). Additionally, land-use/land-cover (LULC) data for the year 2011 [38] were used to classify land segments, such as urban, agricultural land, forest land, water/wetland, shrub land, grass land, and barren/mining land (Figure 3). Model calibration and validation efforts were then made for 1 January 1999 to 31 December 2015 (17 years) and 1 January 1979 to 31 December 2000 (22 years), respectively. Note that there was a missing period from 1 October 1997 to December 2000 at calibration target point six (TP6), so the period from 1 January 1979 to 30 September 1997 was used for validation at this point.

3. Methodology

3.1. Small Linux Cluster System (sLCS)

A small Linux cluster system (sLCS) was designed and built for this study using a multi-node Beowulf cluster, a portable computer cluster compatible with various computer architectures [39]. Beowulf is a local-memory machine that passes messages between master and slave nodes over a local Ethernet network (TCP/IP), so it can support open multi-processing (OpenMP) [40], the message passing interface (MPI) [41], and compute unified device architecture (CUDA) parallelism [42]. A main advantage of the sLCS is that it is easy to use and cost-effective for building high-performance computing for a small research group at a university and/or a small business, since it costs less than $3000 (e.g., 6 × VIA CN10000 with 2 GHz CPU, 1 GB of RAM, 500 GB of hard disk, 1 Gbps Ethernet card). The sLCS is composed of one master node and six slave nodes, which are controlled by the master node and linked to each other via TCP/IP. For this study, 22 cores (8 cores in the master node, 4 cores in one slave node, and 2 cores in each of the remaining five slave nodes) were available for parallelism during model calibration. More specifically, a laptop was used as the master node, while the slave nodes were connected to each other via TCP/IP as shown in Figure 4. The primary roles of the master node were: (1) to provide resources for running software, (2) to exchange model parameters with the slave nodes, and (3) to save and display simulation results. Note that the 64-bit version of Ubuntu [43] was used as the operating system (OS).

3.2. System Setup

A diskless sLCS in the Beowulf framework was developed on the Ubuntu Operating System (OS) [44]. Slave nodes with no hard disk were connected through an Ethernet network hub to the master node, which controlled, supervised, and monitored the slave nodes. The MPI library was used to coordinate multiple processes in a distributed memory environment. For communication, the Secure Shell (SSH) protocol was used along with the Ubuntu-based diskless remote boot system (UDRB) to manage the cluster nodes. The UDRB installed on the master node thus provided a diskless environment in which the slave nodes accessed local hardware. A wireless network (WiFi) was used for the master node to access the internet, while all connections between the master and slave nodes were made through network cards.

3.3. Hydrologic Simulation Program–Fortran (HSPF)

HSPF is a process-based, river basin-scale, semi-distributed model for hydrologic simulations [45]. The model simulates the impact of land management and/or climate change on water, sediment, and water quality in large and complex watersheds. In addition, HSPF has been used to simulate water quality and quantity at various basin scales and locations (e.g., urban, agricultural, and mountain areas) where complicated water issues are intertwined between states and/or countries. HSPF consists of three main modules (PERLND, IMPLND, and RCHRES) and additional optional utility modules. Each module has different state variables representing water quality and hydrological processes [45]. Additional compilation effort was needed to make HSPF compatible with the Linux environment so that the calibration processes could be parallelized using BEOPEST in sLCS.

3.4. Time-Series Processor (TSPROC)

The general time-series processor (TSPROC) is an interface that supports seamless data exchange between input and output for optimal parameterization in hydrologic simulations. TSPROC generates the key input file for the parameter estimation (PEST) program, which minimizes model biases and estimation errors formulated in a user-specified objective function. To fully implement TSPROC in sLCS, compilation of TSPROC was also required (the compilation process is not shown in this paper) because the current version of TSPROC is compiled for Windows only.

3.5. BEO-Parameter Estimation (BEOPEST)

PEST, a model-independent nonlinear parameter estimation and optimization tool developed by [30], was used to assist with data interpretation, model calibration, and predictive analysis. PEST uses an iterative gradient-based optimization technique, linearizing the nonlinear problem by repeatedly computing the Jacobian matrix of sensitivities of model observations to parameters. Parameter estimation in PEST is accomplished using the Gauss–Marquardt–Levenberg (GML) algorithm to minimize a user-defined objective function (e.g., the sum of squared differences between simulated and observed values). BEOPEST is a tool to mitigate the computational burden and implement parallelism in PEST [46]. BEOPEST was installed on the master node and communicated with the slaves without any additional physical file exchanges, using two communication mechanisms: the Transmission Control Protocol/Internet Protocol (TCP/IP) and MPI. Through TCP/IP, BEOPEST and MPI were used to run HSPF via data exchanges in a diskless Linux cluster environment such as sLCS. As a library source, the OpenMPI library was installed to compile a parallel code fully workable in sLCS. Since BEOPEST in sLCS is cost-effective and powerful, it is highly recommended for executing model calibration in computer parallelism at an affordable cost for a small research group.
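PEST's GML implementation is far more elaborate (adaptive Marquardt lambda, parameter bounds, observation weighting), but the core update it performs each iteration can be sketched in a few lines. The toy exponential model, function names, and settings below are illustrative assumptions, not part of PEST or HSPF:

```python
import numpy as np

def gml_step(params, simulate, observed, lam=1e-3, eps=1e-6):
    """One Gauss-Marquardt-Levenberg update: build a finite-difference
    Jacobian of the model around the current parameters, then solve the
    damped normal equations for the parameter upgrade vector."""
    base = simulate(params)
    r = observed - base                                  # residual vector
    J = np.empty((r.size, params.size))
    for j in range(params.size):                         # forward differences
        dp = np.zeros_like(params)
        dp[j] = eps
        J[:, j] = (simulate(params + dp) - base) / eps
    A = J.T @ J + lam * np.eye(params.size)              # Marquardt damping
    return params + np.linalg.solve(A, J.T @ r)

# Toy "model": y = a * exp(b * t), calibrated against synthetic observations
t = np.linspace(0.0, 1.0, 20)
model = lambda q: q[0] * np.exp(q[1] * t)
obs = model(np.array([2.0, 1.5]))                        # true a = 2.0, b = 1.5
p = np.array([1.0, 1.0])                                 # initial guess
for _ in range(50):
    p = gml_step(p, model, obs)
print(p)  # approaches the true values [2.0, 1.5]
```

In BEOPEST, the expensive part of this loop, the repeated model runs needed to fill the Jacobian columns, is exactly what gets distributed across the slave nodes.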

3.6. Streamflow Calibration Schemes

BASINS Technical Note 6 [47] provides guidance on hydrologic and hydraulic parameters for HSPF, including parameter definitions, units, and acceptable ranges. Table 1 lists the parameter names, units, initial values, and ranges for streamflow calibration. Four different calibration schemes were used for the simulation period, and the first two years (1 January 1999 to 31 December 2000) were selected as a warm-up period to reduce the sensitivity of the model results to the assumed initial conditions (see Table 1).
Specifically, schemes (SCOs) 1 and 2 were designed to calibrate the whole basin with different parameter sets. Scheme 1 (SCO1) used the 7 model parameters provided by [9] because these parameters are commonly used for model calibration regardless of the watershed. Scheme 2 (SCO2) used 16 model parameters (see Table 1), while scheme 3 (SCO3) and scheme 4 (SCO4) were designed to calibrate the model with different parameter sets for the 6 independent sub-watersheds shown in Figure 5. Thus, SCO3 used 7 model parameters for each of the 6 sub-watersheds (7 model parameters × 6 sub-watersheds = 42 parameters in total) and SCO4 used 16 model parameters for each of the 6 sub-watersheds (16 model parameters × 6 sub-watersheds = 96 parameters in total). For each calibration scheme, computer parallelism was applied to evaluate its performance using BEOPEST in sLCS.
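The parameter totals for the four schemes follow directly from this description; a short sketch (the scheme table below is a restatement of the text, not study code) reproduces the arithmetic:

```python
SUB_WATERSHEDS = 6  # independent sub-watersheds in Figure 5

# (number of parameters, calibrated per sub-watershed?)
schemes = {
    "SCO1": (7, False),   # common parameters, whole basin
    "SCO2": (16, False),  # extended parameters, whole basin
    "SCO3": (7, True),    # common parameters per sub-watershed
    "SCO4": (16, True),   # extended parameters per sub-watershed
}

def total_parameters(n_params, per_subwatershed):
    """Total adjustable parameters a scheme presents to BEOPEST."""
    return n_params * (SUB_WATERSHEDS if per_subwatershed else 1)

for name, (n, per_sub) in schemes.items():
    print(name, total_parameters(n, per_sub))  # SCO3 -> 42, SCO4 -> 96
```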

3.7. Performance Measures

3.7.1. Performance Measures for Parallel Computing

To evaluate parallel performance, various criteria, including run time, time reduction, speedup, efficiency, and scalability, were considered; however, program run time (PT), percentage of time reduction (PP), parallel speedup (PS), and parallel efficiency (PE) were selected for the sake of convenience. Parallel speedup (PS) is defined as the degree of true time reduction between a serial computation and a parallel computation, and this measure indicates the relative improvement of model performance during calibration. The notion of PS follows Amdahl's law [49], which gives the theoretical maximum parallel speedup when multiple processors are used. It is denoted as:
PS = T_s / T_p
where T_s is the execution time of the serial computation on a single processor core and T_p is the execution time of the parallel application on p processors.
Parallel efficiency (PE) is another way to measure the effectiveness of multiple processors. Under ideal conditions in computer parallelism, all cores are used with maximum efficiency and PE equals 1. Although PE varies depending on the number of cores used, PE lies between 0 and 1 in real-world applications due to the interference of physical components associated with load balancing, limited hardware capacity, network connections, and other physical constraints. PE is denoted as:
PE = PS / p
where PS is the parallel speedup defined above and p is the number of processor cores used.
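Both measures, together with the percentage of time reduction (PP) reported later in the results, are simple ratios. A minimal sketch using the standard definitions (PS = T_s/T_p, PE = PS/p) with hypothetical timings, not the study's measured values:

```python
def parallel_speedup(t_serial, t_parallel):
    """PS = Ts / Tp: ratio of serial to parallel execution time."""
    return t_serial / t_parallel

def parallel_efficiency(t_serial, t_parallel, n_cores):
    """PE = PS / p: speedup normalized by the core count (ideal value 1)."""
    return parallel_speedup(t_serial, t_parallel) / n_cores

def time_reduction_pct(t_serial, t_parallel):
    """PP: percentage of execution time saved by the parallel run."""
    return 100.0 * (t_serial - t_parallel) / t_serial

# Hypothetical example: a 100 h serial calibration finishing in 10 h on 14 cores
print(parallel_speedup(100.0, 10.0))         # 10.0
print(parallel_efficiency(100.0, 10.0, 14))  # ~0.71, below the ideal 1.0
print(time_reduction_pct(100.0, 10.0))       # 90.0
```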

3.7.2. Performance Measures for HSPF Simulations

Six typical performance measures, including the index of agreement (D), coefficient of determination (R2), Nash–Sutcliffe efficiency (NSE), root mean square error (RMSE), RMSE-observations standard deviation ratio (RSR), and percentage of bias (PBIAS), were selected to evaluate how well HSPF simulated streamflow compared with the observed streamflow at the BRW. D addresses the insensitivity of correlation-based measures to variances [50] and ranges from 0.0 to 1.0. R2 is the degree of collinearity between the observed and simulated values and also ranges from 0.0 to 1.0. Note that higher values of D and R2 indicate better agreement between the simulated and observed data. Typically, model performance is deemed acceptable if the R2 value is greater than 0.5 [51,52]. The NSE is the percentage of the observed variance explained by the model and serves as the efficiency criterion for model verification [53]. It ranges from minus infinity to 1.0, with higher values indicating better agreement between the observed and simulated data. If the NSE value is greater than zero, the model is deemed a better predictor than the mean of the observed data. The RMSE is an absolute error measure that quantifies error in the units of the variable. It measures the difference between the simulated and observed data; the individual differences are called residuals, and the RMSE aggregates them into a single measure of predictive power. A lower RMSE value indicates better model performance, and a value of zero indicates a perfect fit. The RSR is the RMSE standardized by the observed standard deviation; it incorporates both an error index and the additional information recommended by [54]. A lower RSR value (e.g., close to zero) indicates better model performance. The PBIAS determines the average tendency of the simulated values to be larger or smaller than their observed counterparts [55]. A value of zero is optimal; positive values indicate underestimation bias, while negative values indicate overestimation bias in the simulated results relative to the observed values.
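All six measures can be computed directly from paired observed and simulated series. A compact sketch following the formulas in Appendix A (the sample flow values are hypothetical):

```python
import numpy as np

def fit_metrics(obs, sim):
    """Compute the six goodness-of-fit measures used in the paper."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    o_bar = obs.mean()
    d = 1.0 - np.sum((obs - sim) ** 2) / np.sum(
        (np.abs(sim - o_bar) + np.abs(obs - o_bar)) ** 2)   # index of agreement
    r2 = np.corrcoef(obs, sim)[0, 1] ** 2                    # coefficient of determination
    nse = 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - o_bar) ** 2)
    rmse = np.sqrt(np.mean((obs - sim) ** 2))
    rsr = rmse / obs.std(ddof=0)                             # RMSE / obs. std. deviation
    pbias = 100.0 * np.sum(obs - sim) / np.sum(obs)          # positive = underestimation
    return {"D": d, "R2": r2, "NSE": nse, "RMSE": rmse, "RSR": rsr, "PBIAS": pbias}

# A perfect simulation scores D = R2 = NSE = 1 and RMSE = RSR = PBIAS = 0
q = [10.0, 20.0, 15.0, 30.0, 25.0]
print(fit_metrics(q, q))
```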

3.7.3. Streamflow Analysis in Time Series

The Rescaled Adjusted Partial Sums (RAPS) method [56] was used to detect and quantify trend fluctuations in the simulated streamflow at the watershed outlet. This method highlights small systematic changes and variability in the time series. Note that trends, data clustering, irregular fluctuations, and periodicities in the time series can be represented by the RAPS visualization. The RAPS is calculated using the equation below:
RAPS_k = \sum_{t=1}^{k} (Y_t - \bar{Y}) / S_Y
where \bar{Y} is the mean of the entire series, S_Y is the standard deviation of the entire series, k (k = 1, 2, 3, 4, …, n) is the counter limit of the summation for the k-th year, and n is the number of values in the time series.
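The RAPS series is simply the cumulative sum of standardized departures from the long-term mean. A minimal sketch with hypothetical annual flows:

```python
import numpy as np

def raps(series):
    """RAPS_k = sum over t <= k of (Y_t - Ybar) / S_Y: the cumulative sum
    of standardized departures from the mean of the whole record."""
    y = np.asarray(series, dtype=float)
    return np.cumsum((y - y.mean()) / y.std(ddof=0))

# Hypothetical annual flows (m3/s): a step increase after year 4 produces the
# characteristic V-shaped RAPS curve, with the turning point at the change
flows = [20.0, 22.0, 21.0, 19.0, 30.0, 32.0, 31.0, 33.0]
print(np.round(raps(flows), 2))
```

By construction the final RAPS value is zero, so regime changes appear as sustained rises or falls within the curve rather than in its endpoint.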

4. Results

4.1. Parallel Performance

Parallel performance in sLCS is evaluated for the four calibration schemes (SCO1–SCO4) using BEOPEST. Figure 6 shows the total program (calibration) run time (PT), percentage of time reduction (PP), parallel speedup (PS), and parallel efficiency (PE) by the number of core processes for each SCO. As expected, PT decreases as the number of cores increases. Note that the PT of SCO4 is about ten times longer than that of SCO1 when a single core is used with the seven parameters (not shown in this paper). When model calibrations are conducted using two to eight cores, PT gradually decreases, and no distinct improvement is observed at nine cores and above. The PP shows a similar pattern, in the sense that calibration with multiple cores saves time. The reductions in total calibration time (PP) reach 76%, 89%, 89%, and 90% for SCO1, SCO2, SCO3, and SCO4, respectively.
Based on the values of PS after calibration in parallelism, SCO1 does not gain as much benefit as the other schemes. The values of PS for SCO2 and SCO3 gradually increase as the number of cores increases, while those for SCO4 are the highest. The loss of speedup is due to communication overhead as more processor nodes are added to the sLCS. In principle, once the number of parallel jobs is set, each core simultaneously reads and writes files on the hard disk via TCP/IP. For this reason, the speedup will not reach 14 even if 14 cores are fully used, due to network constraints. Similarly, PE is most likely below its ideal value of one because of system overhead associated with physical constraints (e.g., network bandwidth and/or throughput between cores). The results show that SCO2 has the lowest PE and SCO4 the highest. Overall, SCO4 has the best parallel performance of all calibration schemes, regardless of the number of cores. This implies that BEOPEST in the sLCS setting works well, especially when many hydrological parameters need to be calibrated across multiple sub-watersheds.

4.2. HSPF Model Performance

In addition to the computer parallelism aspect, HSPF performance is also evaluated to determine how well the simulated streamflow matches the historical data at the selected calibration target points (TPs). Table 2 compares the model performance criteria for SCO1–SCO4. SCO1 and SCO2 are first compared to see how the different numbers of parameters affect model performance. The results show that the R2, D, NSE, and PBIAS of SCO2 are higher than those of SCO1, while the RMSE and RSR of SCO2 are lower than those of SCO1. It seems that SCO2 better reflects the volume variation of streamflow because more parameters are used. HSPF performance after calibration clearly improves over the no-calibration option based on the performance criteria (see Table 2). Overall, SCO4 is the best, with higher NSE and D and lower RSR, RMSE, and PBIAS than any other scheme, including the no-calibration option. As such, SCO4 is selected for additional effort to calibrate the interior calibration target points (TP1–TP5) in the BRW.

4.3. Results of Calibrated and Validated Streamflow Using SCO4

SCO4 is now employed to calibrate all six TPs and Table 3 shows the final set of calibrated model parameter values at each calibration target point. Note that the same model parameters are initially assigned to six sub-watersheds, but the optimal parameter values differ from each of the others after calibration. Table 4 lists the statistical results after model calibration and validation using SCO4 at all six TPs.
Figure 7 shows hydrograph comparisons between the calibrated and validated simulation results using SCO4 at all calibration target points (TP1–TP6). The results indicate adequate performance over both the calibration and validation periods. The timing and magnitude of peak flows match well between the simulated and observed flows at TPs 1, 2, 3, and 4 during the calibration period. However, the magnitudes of the peaks at TPs 5 and 6 show somewhat different results due to the large reservoir diversion nearby. The values of D between the simulated and observed streamflows at all TPs during the calibration and validation periods range from 0.84 to 0.95 and 0.82 to 0.95, respectively, which is a satisfactory performance, as shown in Table 5.

4.4. Results of Streamflow Analysis Using SCO4

Figure 8a shows the time series of the simulated annual streamflow using SCO4 for 1981–2015 at the watershed outlet. In general, annual streamflow shows a negative trend, with minimum, mean, and maximum flows of 12.24 m3/s, 31.89 m3/s, and 59.14 m3/s, respectively, while a positive trend is also observed when subsets of the annual streamflow are used (Figure 8b). The simulation periods are therefore divided into five subsets for visual inspection: 1981–1986 (Sub 1), 1987–1994 (Sub 2), 1995–1998 (Sub 3), 1999–2005 (Sub 4), and 2006–2015 (Sub 5). The trend lines for the respective time windows are then generated as shown in Figure 8c.
Our simulation results show reliable model performance based on the evaluation criteria. However, it is difficult to determine exact model performance, since the statistical criteria analyzed provide different performance ratings, from very good to fair, depending on the criterion selected. Therefore, integrated criteria of model performance, such as the ideal point error (IPE) metric [59,60,61,62] or the standardized ranking performance index (sRPI) [63], could be additional measures for more robust model performance evaluation in a future study.

5. Conclusions

Computer parallelism is a useful tool to reduce computation time. Hydrologic model calibration in parallel computing can provide various opportunities to improve model performance, yet its quantitative performance has not been reported in the hydrology community. To quantify these performances in hydrological simulations, four different calibration schemes are employed, and BEOPEST is used as a tool to parallelize the HSPF model in sLCS. Performance measures for parallelism, including program run time (PT), percentage of time reduction (PP), parallel speedup (PS), and parallel efficiency (PE), are used along with evaluation criteria for the hydrological simulations: the index of agreement (D), coefficient of determination (R2), Nash–Sutcliffe efficiency (NSE), root mean square error (RMSE), RMSE-observations standard deviation ratio (RSR), and percentage of bias (PBIAS).
The results show that the total PT during calibration is reduced by 76%, 89%, 89%, and 90% for SCO1, SCO2, SCO3, and SCO4, respectively, when 14 cores are used instead of a single core. Additionally, SCO4 outperforms the other schemes based on the performance measures described earlier. As such, SCO4 is used for further analysis to improve model performance at all calibration target points (TP1–TP6). After model calibration based on SCO4, annual streamflow trends are also examined using RAPS for the subsets (Sub 1 through Sub 5) of the annual streamflow in different time windows.
We conclude that computer parallelism, with many parameters at multiple sub-watersheds, can help hydrologists improve hydrological simulations. In addition, the proposed method shows great potential for reliable water quality and quantity simulations once large reservoir and irrigation components are fully incorporated into HSPF. The proposed case study is therefore a good example for hydrologists applying computer parallelism using sLCS to their own applications, including but not limited to streamflow, physical and conceptual hydrologic, and environmental simulations in a changing global environment.

Author Contributions

J.J.K. applied HSPF model in computer parallelism, coding, and analysis; and is the primary author on the manuscript. J.R. proposed the study and contributed to conceptualizing the project, interpreting the processes in general as J.J.K.’s advisor.

Funding

This research was supported in part by the National Institute of Food and Agriculture, U.S. Department of Agriculture (USDA), under ID01507 and by the Idaho State Board of Education (ISBOE) through the IGEM program. Any opinions, findings, conclusions, or recommendations expressed in this publication are those of the authors and do not necessarily reflect the views of the USDA or the ISBOE.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

D = 1.0 - \frac{\sum_{i=1}^{N} (Q_{O_i} - Q_{S_i})^2}{\sum_{i=1}^{N} \left( |Q_{S_i} - \bar{Q}_{O}| + |Q_{O_i} - \bar{Q}_{O}| \right)^2}
where D is the index of agreement, reflecting the insensitivity of correlation-based measures to variances (Willmott, 1984).
R^2 = \left( \frac{N \sum_{i=1}^{N} Q_{O_i} Q_{S_i} - \left( \sum_{i=1}^{N} Q_{O_i} \right) \left( \sum_{i=1}^{N} Q_{S_i} \right)}{\sqrt{N \sum_{i=1}^{N} Q_{O_i}^2 - \left( \sum_{i=1}^{N} Q_{O_i} \right)^2} \sqrt{N \sum_{i=1}^{N} Q_{S_i}^2 - \left( \sum_{i=1}^{N} Q_{S_i} \right)^2}} \right)^2
NSE = 1.0 - \frac{\sum_{i=1}^{N} (Q_{O_i} - Q_{S_i})^2}{\sum_{i=1}^{N} (Q_{O_i} - \bar{Q}_{O})^2}
RMSE = \left[ \frac{1}{N} \sum_{i=1}^{N} (Q_{O_i} - Q_{S_i})^2 \right]^{0.5}
RSR = \frac{RMSE}{STDEV_{obs}} = \frac{\sqrt{\sum_{i=1}^{N} (Q_{O_i} - Q_{S_i})^2}}{\sqrt{\sum_{i=1}^{N} (Q_{O_i} - \bar{Q}_{O})^2}}
PBIAS = \frac{\sum_{i=1}^{N} (Q_{O_i} - Q_{S_i})}{\sum_{i=1}^{N} Q_{O_i}} \times 100
where Q_{O_i} and Q_{S_i} are the observed and simulated streamflow at time step i, respectively; \bar{Q}_{O} and \bar{Q}_{S} are the mean observed and simulated streamflow over the simulation period; and N is the total number of values within the simulation period.

References

  1. Borah, D.; Bera, M. Watershed-scale hydrologic and nonpoint-source pollution models: Review of mathematical bases. Trans. ASAE 2003, 46, 1553. [Google Scholar] [CrossRef]
  2. Howarth, R.W.; Billen, G.; Swaney, D.; Townsend, A.; Jaworski, N.; Lajtha, K.; Jordan, T. Regional Nitrogen Budgets and Riverine N & P Fluxes for the Drainages to the North Atlantic Ocean: Natural and Human Influences Nitrogen Cycling in the North Atlantic Ocean and Its Watersheds; Springer: Dordrecht, The Netherlands, 1996; pp. 75–139. [Google Scholar]
  3. Wang, K.; Shen, Z.A. GPU-based parallel genetic algorithm for generating daily activity plans. IEEE Trans. Intell. Transp. Syst. 2012, 13, 1474–1480. [Google Scholar] [CrossRef]
  4. Wu, K.; Xu, Y.J. Evaluation of the applicability of the SWAT model for coastal watersheds in southeastern Louisiana. J. Am. Water Resour. Assoc. 2006, 42, 1247. [Google Scholar] [CrossRef]
  5. Smith, M.B.; Koren, V.; Zhang, Z.; Zhang, Y.; Reed, S.M.; Cui, Z.; Anderson, E.A. Results of the DMIP 2 Oklahoma experiments. J. Hydrol. 2012, 418, 17–48. [Google Scholar] [CrossRef] [Green Version]
  6. Lerat, J.; Perrin, C.; Andréassian, V.; Loumagne, C.; Ribstein, P. Towards robust methods to couple lumped rainfall–runoff models and hydraulic models: A sensitivity analysis on the Illinois River. J. Hydrol. 2012, 418, 123–135. [Google Scholar] [CrossRef] [Green Version]
  7. Nan, Z.; Wang, S.; Liang, X.; Adams, T.E.; Teng, W.; Liang, Y. Analysis of spatial similarities between NEXRAD and NLDAS precipitation data products. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2010, 3, 371–385. [Google Scholar] [CrossRef]
  8. Ryu, J.H. Application of HSPF to the Distributed Model Intercomparison Project: Case Study. J. Am. Water Resour. Assoc. 2009, 14, 847–857. [Google Scholar] [CrossRef]
  9. Kim, J.J.; Ryu, J.H. Threshold of basin discretization levels for hspf simulations with nexrad inputs. J. Hydrol. Eng. 2013, 19, 1401–1412. [Google Scholar] [CrossRef]
  10. Gallagher, M.; Doherty, J. Parameter estimation and uncertainty analysis for a watershed model. Environ. Model. Softw. 2007, 22, 1000–1020. [Google Scholar] [CrossRef]
  11. Xu, Z.; Godrej, A.N.; Grizzard, T.J. The hydrological calibration and validation of a complexly-linked watershed–reservoir model for the Occoquan watershed, Virginia. J. Hydrol. 2007, 345, 167–183. [Google Scholar] [CrossRef]
  12. Göncü, S.; Albek, E. Modeling climate change effects on streams and reservoirs with HSPF. Water Resour. Manag. 2010, 24, 707–726. [Google Scholar] [CrossRef]
  13. Xie, H.; Lian, Y. Uncertainty-based evaluation and comparison of SWAT and HSPF applications to the Illinois River Basin. J. Hydrol. 2007, 481, 119–131. [Google Scholar] [CrossRef]
  14. Seong, C.; Her, Y.; Benham, B.L. Automatic calibration tool for Hydrologic Simulation Program-FORTRAN using a shuffled complex evolution algorithm. Water 2015, 7, 503–527. [Google Scholar] [CrossRef]
  15. Jordi, A.; Wang, D.-P. sbPOM: A parallel implementation of Princenton Ocean Model. Environ. Model. Softw. 2012, 38, 59–61. [Google Scholar] [CrossRef] [Green Version]
  16. Wang, G.; Wu, B.; Li, T. Digital yellow river model. J. Hydro-Environ. Res. 2007, 1, 1–11. [Google Scholar] [CrossRef]
  17. Zhao, G.; Bryan, B.A.; King, D.; Luo, Z.; Wang, E.; Bende-Michl, U.; Yu, Q. Large-scale, high-resolution agricultural systems modeling using a hybrid approach combining grid computing and parallel processing. Environ. Model. Softw. 2013, 41, 231–238. [Google Scholar] [CrossRef]
  18. Zhang, X.; Srinivasan, R.; Zhao, K.; Liew, M.V. Evaluation of global optimization algorithms for parameter calibration of a computationally intensive hydrologic model. Hydrol. Process. 2009, 23, 430–441. [Google Scholar] [CrossRef]
  19. Glotić, A.; Kitak, P.; Pihler, J.; Tičar, I. Parallel self-adaptive differential evolution algorithm for solving short-term hydro scheduling problem. IEEE Trans. Power Syst. 2014, 29, 2347–2358. [Google Scholar] [CrossRef]
  20. Li, T.; Wang, G.; Chen, J.; Wang, H. Dynamic parallelization of hydrological model simulations. Environ. Model. Softw. 2011, 26, 1736–1746. [Google Scholar] [CrossRef] [Green Version]
  21. Li, X.; Wei, J.; Li, T.; Wang, G.; Yeh, W.W.-G. A parallel dynamic programming algorithm for multi-reservoir system optimization. Adv. Water Resour. 2014, 67, 1–15. [Google Scholar] [CrossRef]
  22. Wu, Y.; Li, T.; Sun, L.; Chen, J. Parallelization of a hydrological model using the message passing interface. Environ. Model. Softw. 2013, 43, 124–132. [Google Scholar] [CrossRef]
  23. Zhang, X.; Beeson, P.; Link, R.; Manowitz, D.; Izaurralde, R.C.; Sadeghi, A.; Arnold, J.G. Efficient multi-objective calibration of a computationally intensive hydrologic model with parallel computing software in Python. Environ. Model. Softw. 2013, 46, 208–218. [Google Scholar] [CrossRef]
  24. Lecca, G.; Petitdidier, M.; Hluchy, L.; Ivanovic, M.; Kussul, N.; Ray, N.; Thieron, V. Grid computing technology for hydrological applications. J. Hydrol. 2011, 403, 186–199. [Google Scholar] [CrossRef] [Green Version]
  25. Kalyanapu, A.J.; Shankar, S.; Pardyjak, E.R.; Judi, D.R.; Burian, S.J. Assessment of GPU computational enhancement to a 2D flood model. Environ. Model. Softw. 2011, 26, 1009–1016. [Google Scholar] [CrossRef]
  26. Gorgan, D.; Bacu, V.; Mihon, D.; Rodila, D.; Abbaspour, K.; Rouholahnejad, E. Grid based calibration of SWAT hydrological models. Nat. Hazards Earth Syst. Sci. 2012, 12, 2411–2423. [Google Scholar] [CrossRef] [Green Version]
  27. Rouholahnejad, E.; Abbaspour, K.C.; Vejdani, M.; Srinivasan, R.; Schulin, R.; Lehmann, A. A parallelization framework for calibration of hydrological models. Environ. Model. Softw. 2012, 31, 28–36. [Google Scholar] [CrossRef]
  28. Yalew, S.; van Griensven, A.; Ray, N.; Kokoszkiewicz, L.; Betrie, G.D. Distributed computation of large scale SWAT models on the Grid. Environ. Model. Softw. 2013, 41, 223–230. [Google Scholar] [CrossRef]
  29. Environmental Protection Agency (EPA, US). Better Assessment Science Integrating Point & Non-Point Sources (BAINS). 2018. Available online: https://www.epa.gov/exposure-assessment-models/basins (accessed on 1 January 2018).
  30. Doherty, J.; Skahill, B.E. An advanced regularization methodology for use in watershed model calibration. J. Hydrol. 2006, 327, 564–577. [Google Scholar] [CrossRef]
  31. Cosgrove, B.A.; Lohmann, D.; Mitchell, K.E.; Houser, P.R.; Wood, E.F.; Schaake, J.C.; Duan, Q. Real-time and retrospective forcing in the North American Land Data Assimilation System (NLDAS) project. J. Geophys. Res. Atmos. 2003, 108. [Google Scholar] [CrossRef] [Green Version]
  32. Pinker, R.T.; Tarpley, J.D.; Laszlo, I.; Mitchell, K.E.; Houser, P.R.; Wood, E.F.; Cosgrove, B.A. Surface radiation budgets in support of the GEWEX Continental-Scale International Project (GCIP) and the GEWEX Americas Prediction Project (GAPP), including the North American Land Data Assimilation System (NLDAS) project. J. Geophys. Res. Atmos. 2003, 108. [Google Scholar] [CrossRef] [Green Version]
  33. Luo, L.; Robock, A.; Mitchell, K.E.; Houser, P.R.; Wood, E.F.; Schaake, J.C.; Sheffield, J. Validation of the North American land data assimilation system (NLDAS) retrospective forcing over the southern Great Plains. J. Geophys. Res. Atmos. 2003, 108. [Google Scholar] [CrossRef]
  34. Xia, Y.; Mitchell, K.; Ek, M.; Cosgrove, B.; Sheffield, J.; Luo, L.; Livneh, B. Continental-scale water and energy flux analysis and validation for North American Land Data Assimilation System project phase 2 (NLDAS-2): 2. Validation of model-simulated streamflow. J. Geophys. Res. Atmos. 2012, 117. [Google Scholar] [CrossRef] [Green Version]
  35. Xia, Y.; Mitchell, K.; Ek, M.; Sheffield, J.; Cosgrove, B.; Wood, E.; Meng, J. Continental-scale water and energy flux analysis and validation for the North American Land Data Assimilation System project phase 2 (NLDAS-2): 1. Intercomparison and application of model products. J. Geophys. Res. Atmos. 2012, 117. [Google Scholar] [CrossRef] [Green Version]
  36. R Core Team. R: Language and Environmental for Statistical Computing; R Foundation for Statistical Computing: Vienna, Austria, 2017. [Google Scholar]
  37. Aqua Terra. SARA Timeseries Utility. 2006. Available online: http://www.aquaterra.com/resources/downloads/saratsutility.php (accessed on 15 January 2017).
  38. Homer, C.G.; Dewitz, J.A.; Yang, L.; Jin, S.; Danielson, P.; Xian, G.; Coulston, J.; Herold, N.D.; Wickhan, J.D.; Megown, K. Completion of the 2011 National Land Cover Database for the conterminous United States-Representing a decade of land cover change information. Photogramm. Eng. Remote Sens. 2015, 81, 345–354. [Google Scholar]
  39. Brown, R.G. Engineering a Beowulf-Style Compute Cluster; Duke University Physics Department: Durham, NC, USA, 2004. [Google Scholar]
  40. Silberschatz, A.; Galvin, P.B.; Gagne, G. Operating System Concepts, 9th ed.; Willy: Hoboken, NJ, USA, 2013; pp. 181–182. [Google Scholar]
  41. Garbriel, E.; Fagg, G.E.; Bosilica, G.; Angskun, T.; Dongarra, J.J.; Squyres, J.M.; Sahay, V.; Kambadur, P.; Barrett, B.; Lumsdaine, A.; et al. OpenMPI: Goals, Concept, and Design of a Next Generation MPI Implementation. Recent Advances in Parallel Virtual Machine and Message Passing Interface; Springer: Berlin/Heidelberg, Germany, 2004. [Google Scholar]
  42. NVIDIA Corporation. NVIDIA CUDA Programming Guide (Version 2.0); NVIDIA Corporatio: Santa Clara, CA, USA, 2008. [Google Scholar]
  43. Canonical Group Ltd. Ubuntu for Desktop. 2016. Available online: https://www.ubuntu.com/desktop (accessed on 1 March 2016).
  44. Ubuntu Server Guide 2016. Available online: https://help.ubuntu.com/lts/serverguide/serverguide.pdf (accessed on 1 March 2016).
  45. Bicknell, B.; Imhoff, J.; Kittle, J., Jr.; Jobes, T.; Donigian, A., Jr. Hydrologic Simulation Program-Fortran (HSPF) User’s Manual for Version 12; US Environmental Protection Agency, National Exposure Research Laboratory: Athens, GA, USA, 2001.
  46. Schreüder, W.A. Running BeoPEST. In Proceedings of the PEST Conference, Potomac, MD, USA, 2–4 November 2009; pp. 228–240. [Google Scholar]
  47. U.S. EPA. BASINS Technical Note 6: Estimating Hydrology and Hydraulic Parameters for HSPF; EPA: Cincinnati, OH, USA, 2000; EPA-823-R00-012.
  48. United States Environmental Protection Agency (U.S. EPA). User’s Manual: Better Assessment Science Integrating Point and Nonpoint Sources (BASINS); EPA: Cincinnati, OH, USA, 2013; EPA-823-B-13-001.
  49. Rodgers, D.P. Improvements in multiprocessor system design. ACM SIGARCH Comput. Arch. News Arch. 1985, 23, 225–231. [Google Scholar] [CrossRef]
  50. Willmott, C.J. On the evaluation of model performance in physical geography. In Spatial Statistics and Models; Gaile, G.L., Willmott, C.J., Eds.; D. Reidel.: Norwell, MA, USA, 1984; pp. 443–460. [Google Scholar]
  51. Santhi, C.; Arnold, J.G.; Williams, J.R.; Dugas, W.A.; Srinivasan, R.; Hauck, L.M. Validation of the SWAT model on a large river basin with point and nonpoint sources. J. Am. Water Resour. Assoc 2001, 37, 1169–1188. [Google Scholar] [CrossRef]
  52. Van Liew, M.W.; Arnold, J.G.; Garbrecht, J.D. Hydrologic simulation on agricultural watersheds: Choosing between two models. Trans. ASAE 2003, 46, 1539–1551. [Google Scholar] [CrossRef]
  53. Nash, J.E.; Sutcliffe, J.V. River flow forecasting through conceptual models part I—A discussion of principles. J. Hydrol. 1970, 10, 282–290. [Google Scholar] [CrossRef]
  54. Legates, D.R.; McCabe, G.J. Evaluating the use of “goodness-of-fit” measures in hydrologic and hydroclimatic model validation. Water Resour. Res 1999, 35, 233–241. [Google Scholar] [CrossRef]
  55. Gupta, H.V.; Sorooshian, S.; Yapo, P.O. Status of automatic calibration for hydrologic models: Comparison with multilevel expert calibration. J. Hydrol. Eng. 1999, 4, 135–143. [Google Scholar] [CrossRef]
  56. Garbrecht, J.; Fernandez, G.P. Visualization of trends and fluctuations in climate records. Water Resour. Bull. 1994, 30, 297–306. [Google Scholar] [CrossRef]
  57. Donigian, A., Jr. HSPF Training Workshop Handbook and CD: Lecture #19; Calibration and Verification Issues; Environmental Protection Agency: Washington, DC, USA, 2000; Slide #L19-22.
  58. Moriasi, D.N.; Arnold, J.G.; Van Liew, M.W.; Bingner, R.L.; Harmel, R.D.; Gowda, P.H. Model evaluation guidelines for systematic quantification. Trans. ASABE 2007, 50, 885–900. [Google Scholar] [CrossRef]
  59. Dawson, G.W.; Mount, N.J.; Abrahart, R.J.; Shamseldin, A.Y. Ideal point error for model assessment in data-driven river flow forecasting. Hydrol. Earth Syst. Sci. 2012, 16, 3046–3060. [Google Scholar] [CrossRef]
  60. Domfnguez, E.; Dawson, G.W.; Ramirez, A.; Abrahart, R.J. The search for orthogonal hydrological modeling metrics: A case study of 20 monitoring stations in Colombia. J. Hydroinformat. 2011, 13, 397–413. [Google Scholar]
  61. Elshorbagy, A.; Corzo, G.; Srinivasulu, S.; Solomatine, D.P. Experimental investigation of the predictive capabilities of data driven modeling techniques in hydrology—Part 1: Concepts and methodology. Hydrol. Earth Syst. Sci. 2010, 14, 1931–1941. [Google Scholar] [CrossRef]
  62. Elshorbagy, A.; Corzo, G.; Srinivasulu, S.; Solomatine, D.P. Experimental investigation of the predictive capabilities of data driven modeling techniques in hydrology—Part 2: Application. Hydrol. Earth Syst. Sci. 2010, 14, 1943–1961. [Google Scholar] [CrossRef]
  63. Aschonitis, V.G.; Lekakis, K.; Tziachris, P.; Doulgeris, G.; Papadopoulos, F.; Papadopouos, A. A ranking system for comparing models’ performance combining multiple statistical criteria and scenarios: The case of reference evapotranspiration models. Environ. Modeling Softw. 2019, 114, 98–111. [Google Scholar] [CrossRef]
Figure 1. The procedural diagram of Hydrological Simulation Program–Fortran (HSPF) simulations in parallel computing.
Figure 2. The study area, the Boise River Watershed, adapted from [9].
Figure 3. Land use and land cover in the study area, the Boise River Watershed.
Figure 4. System specification of the small Linux cluster system (sLCS).
Figure 5. Four different calibration schemes (SCOs) in multiple sub-watersheds.
Figure 6. Parallel performance measures, including (a) total calibration time (PT), (b) percentage of time reduction (PP), (c) parallel speedup (PS), and (d) parallel efficiency (PE) in the sLCS environment.
Figure 7. Hydrograph comparison at the calibration target points (TP1 (a), TP2 (b), TP3 (c), TP4 (d), TP5 (e), TP6 (f)) between the simulated flows (before and after calibration) and the observed flows for the validation period (1981–2000) and the calibration period (2001–2015) using HSPF in sLCS.
Figure 8. Time series of (a) annual streamflow, (b) the Rescaled Adjusted Partial Sums (RAPS) for annual streamflow, and (c) annual streamflow with trend lines for subsets of the simulation period, 1981–2015.
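The RAPS series plotted in Figure 8b is the running sum of standardized departures of annual streamflow from its period-of-record mean, following Garbrecht and Fernandez [56]. A minimal sketch, assuming that standard definition (the function name is illustrative):

```python
import numpy as np

def raps(annual_flow):
    """Rescaled Adjusted Partial Sums: cumulative sum of the annual
    departures from the long-term mean, rescaled by the standard
    deviation, which makes subtle trend reversals visible."""
    y = np.asarray(annual_flow, dtype=float)
    return np.cumsum((y - y.mean()) / y.std())
```

By construction the partial sums return to approximately zero at the end of the record, and sustained positive or negative slopes in the RAPS curve indicate wet or dry sub-periods in the annual series.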
Table 1. Initial values and range of values for streamflow parameters for the HSPF model.

Parameter | Definition | Units | Initial Value | Typical Range 1 | Possible Range 1,2
AGWETP | Fraction of remaining potential evapotranspiration from active groundwater | None | 0 | 0.0–0.05 | 0–1.0
AGWRC * | Base groundwater recession rate | None | 0.98 | 0.92–0.99 | 0.82–0.999
BASETP * | Fraction of potential evapotranspiration from baseflow | None | 0.02 | 0.0–0.05 | 0–1.0
CEPSC | Interception storage capacity | mm | 2.54 | 0.76–5.08 | 0.25–254
DEEPFR | Fraction of groundwater inflow to deep recharge | None | 0.1 | 0.0–0.2 | 0.0–1.0
INFILT * | Infiltration rate | mm/hour | 4.06 | 0.25–6.35 | 0.03–12.70
INTFW | Interflow inflow parameter | None | 2.0 | 1.0–3.0 | 0.0–10.0
IRC * | Interflow recession parameter | 1/day | 0.5 | 0.5–0.7 | 0.1–0.9
KVARY | Variable groundwater recession flow | 1/mm | 0 | 0.0–76.2 | 0.0–127.0
LZETP | Lower zone evapotranspiration parameter | None | 0 | 0.0–0.7 | 0.1–0.9
LSUR | Length of the assumed overland flow | m | 152.4 | 60.96–152.4 | 30.48–304.8
LZSN * | Lower zone nominal soil moisture storage | mm | 152.4, 165.1 | 76.2–203.2 | 50.8–381.0
NSUR | Manning's roughness for overland flow | None | 0.2 | 0.03–0.1 | 0.01–1.0
SLSUR * | Slope of overland flow plane | None | 0.001 | 0.30–1.52 | 0.0001–304.8
UZSN * | Upper zone nominal soil moisture storage | mm | 28.7 | 2.54–25.40 | 0.25–254.0
INFEXP | Exponent in infiltration equation | None | 2.0 | 2.0–2.0 | 1.0–3.0
* indicates the model parameters that were used by [9]. 1 BASINS Technical Note 6 [47]. 2 HSPF Version 12.4 User's Manual [48].
Table 2. The model performance comparison of calibrated monthly streamflow by calibration schemes at the calibration target point 6 (watershed outlet).

Criteria | OBS | No Cal | SCO1 | SCO2 | SCO3 | SCO4
Daily mean streamflow (m3/s) | 30.95 | 27.01 | 24.21 | 24.04 | 25.55 | 27.44
D | - | 0.66 | 0.50 | 0.60 | 0.54 | 0.74
R2 | - | 0.30 | 0.41 | 0.45 | 0.41 | 0.37
NSE | - | 0.17 | 0.24 | 0.31 | 0.28 | 0.34
RMSE (m3/s) | - | 28.87 | 27.64 | 26.30 | 26.89 | 25.76
RSR | - | 0.91 | 0.87 | 0.83 | 0.85 | 0.81
PBIAS (%) | - | 12.74 | 21.78 | 22.32 | 17.46 | 11.33
Table 3. The final set of calibrated model parameters at six calibration target points.

Parameter | Units | TP1 | TP2 | TP3 | TP4 | TP5 | TP6
AGWETP | None | 0.0001 | 0.0004 | 0.0003 | 0.0001 | 0.0126 | 0.0025
AGWRC | None | 0.9796 | 0.9813 | 0.9808 | 0.9164 | 0.9782 | 0.9401
BASETP | None | 0.2256 | 0.0864 | 0.1000 | 0.0362 | 0.1867 | 0.2000
CEPSC | mm | 12.70 | 12.70 | 2.58 | 12.70 | 8.51 | 12.70
DEEPFR | None | 0.5000 | 0.5000 | 0.1016 | 0.5000 | 0.3352 | 0.5000
INFILT | mm/hour | 1.58 | 4.49 | 2.08 | 0.03 | 4.41 | 1.38
INTFW | None | 0.7209 | 0.4337 | 0.8227 | 0.1173 | 0.4422 | 0.1932
IRC | 1/day | 0.9000 | 0.8562 | 0.9000 | 0.9000 | 0.8569 | 0.4687
KVARY | 1/mm | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 3.72
LZETP | None | 0.5000 | 0.5264 | 0.9648 | 0.2698 | 0.1000 | 0.1000
LSUR | m | 12.05 | 35.05 | 28.81 | 238.91 | 64.05 | 243.84
LZSN | mm | 50.80, 58.13 | 50.80, 158.23 | 50.8, 242.73 | 68.45, 63.62 | 56.81, 156.96 | 50.80, 71.39
NSUR | None | 0.1317 | 0.1533 | 0.1260 | 0.9989 | 0.2825 | 0.0218
SLSUR | None | 4.5068 | 0.2193 | 0.3623 | 0.0033 | 1.2819 | 0.1635
UZSN | mm | 65.99 | 5.34 | 2.54 | 195.64 | 2.68 | 62.73
INFEXP | None | 2.00 | 2.00 | 2.00 | 2.00 | 2.0 | 1.00
Table 4. Statistical results after calibration and validation using SCO4 at the six calibration target points (TP1–TP6) in the BRW.

TP | Period | Daily Mean Streamflow (m3/s) | R2 | D | RSR | NSE | RMSE | PBIAS (%)
TP1 | Cal | 5.02 (5.90) | 0.82 | 0.95 | 0.45 | 0.79 | 3.44 | 14.88
TP1 | Val | 7.40 (8.09) | 0.76 | 0.92 | 0.60 | 0.64 | 6.07 | 8.60
TP2 | Cal | 15.15 (17.04) | 0.85 | 0.93 | 0.41 | 0.83 | 8.43 | 11.09
TP2 | Val | 17.65 (21.27) | 0.78 | 0.93 | 0.49 | 0.76 | 13.31 | 17.03
TP3 | Cal | 32.63 (30.95) | 0.84 | 0.95 | 0.41 | 0.83 | 13.57 | 6.83
TP3 | Val | 35.40 (34.41) | 0.82 | 0.95 | 0.45 | 0.80 | 16.50 | −2.88
TP4 | Cal | 22.75 (21.56) | 0.58 | 0.86 | 0.59 | 0.66 | 14.63 | −5.48
TP4 | Val | 25.18 (27.71) | 0.54 | 0.85 | 0.77 | 0.40 | 19.67 | 9.10
TP5 | Cal | 44.94 (61.29) | 0.59 | 0.84 | 0.70 | 0.51 | 39.17 | 26.67
TP5 | Val | 55.19 (78.69) | 0.65 | 0.86 | 0.69 | 0.53 | 47.21 | 29.86
TP6 | Cal | 28.26 (30.95) | 0.59 | 0.87 | 0.66 | 0.57 | 20.89 | 8.63
TP6 | Val | 36.03 (48.97) | 0.60 | 0.82 | 0.69 | 0.52 | 34.50 | 26.43
Values in parentheses indicate observed daily mean streamflow.
Table 5. General model performance rating for recommended statistics at monthly time steps.

Performance Rating | R2 1 | RSR 2 | NSE 2 | PBIAS 1
Very good | 0.85 < R2 | 0.00 ≤ RSR ≤ 0.50 | 0.75 < NSE | PBIAS < ±10
Good | 0.75 < R2 ≤ 0.85 | 0.50 < RSR ≤ 0.60 | 0.65 < NSE ≤ 0.75 | ±10 ≤ PBIAS < ±15
Fair | 0.65 < R2 ≤ 0.75 | 0.60 < RSR ≤ 0.70 | 0.50 < NSE ≤ 0.65 | ±15 ≤ PBIAS < ±25
Poor | R2 ≤ 0.65 | RSR > 0.70 | NSE ≤ 0.50 | PBIAS ≥ ±25
Note that the values of 1 and 2 are adopted from [57] and [58], respectively.
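When screening many calibration target points, the rating thresholds above can be applied programmatically. The sketch below encodes the RSR and PBIAS columns of the rating scheme; the function names are illustrative, and the boundary handling (which side each threshold falls on) follows the inequalities as tabulated.

```python
def rate_rsr(rsr):
    """Classify monthly-step model performance by RSR."""
    if rsr <= 0.50:
        return "Very good"
    if rsr <= 0.60:
        return "Good"
    if rsr <= 0.70:
        return "Fair"
    return "Poor"

def rate_pbias(pbias):
    """Classify monthly-step model performance by the magnitude
    of PBIAS, in percent (sign indicates over/underestimation)."""
    magnitude = abs(pbias)
    if magnitude < 10:
        return "Very good"
    if magnitude < 15:
        return "Good"
    if magnitude < 25:
        return "Fair"
    return "Poor"
```

For example, the SCO4 column of Table 2 (RSR = 0.81, PBIAS = 11.33%) would rate "Poor" on RSR but "Good" on PBIAS, which illustrates why multiple statistics are reported side by side.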

Share and Cite

Kim, J.; Ryu, J.H. Quantifying the Performances of the Semi-Distributed Hydrologic Model in Parallel Computing—A Case Study. Water 2019, 11, 823. https://doi.org/10.3390/w11040823
