The L’Esteron au Broc [La Clave] is a catchment of 442 km² for which measured streamflows are available from 1999 to 2018, but with missing data (e.g. for the year 2014).

The exercise consists in using the hydro-climatic data available for the catchment and a rainfall-runoff model to reconstruct the missing data through hydrological simulation (see following figure). To do this, we must first ensure, through a calibration-evaluation procedure, that the model has a sufficient level of performance to carry out this reconstruction exercise.

This work will be performed in four steps:

- Manual calibration of the rainfall-runoff model (over the so-called “calibration” period);
- Automatic calibration of the rainfall-runoff model (over the “calibration” period);
- Evaluation of the obtained parameter sets (over the “evaluation” period);
- Hydrological reconstruction by rainfall-runoff modelling (over the “simulation” period).

This section aims to define the calibration and simulation conditions of the hydrological model (parameter calibration period, model initialization period, calibration criterion, etc.).

You will use the GR2M model (Mouelhi et al. 2006). It is a conceptual and lumped rainfall-runoff model, operating at a monthly time step and relying on two parameters. It requires as input continuous time series of monthly precipitation and potential evapotranspiration.

This model is easy to use thanks to the **airGRteaching** package (Delaigue et al. 2023, 2018), developed for the R software by the Catchment Hydrology research group of the HYCAR research unit (INRAE, France).

The time series of observed precipitation, PE and streamflow can be easily formatted using the `PrepGR()` function. A rainfall-runoff simulation can be performed with the `SimGR()` function, and a parameter calibration with the `CalGR()` function.

GR2M, like many conceptual rainfall-runoff models, consists of conceptual reservoirs whose initial levels at the beginning of the simulation are unknown and cannot be estimated by measurement. These initial levels must therefore be chosen arbitrarily (or based on an a priori), which can induce large model errors when the assumed initial conditions deviate from what they should be given the previous climatic conditions. To limit these errors, a warm-up period is generally considered. This period, which precedes the simulation period, is used to ensure that the reservoir levels become independent of the initial conditions. It must therefore be longer than the memory the catchment has of previous climatic conditions. In many catchments, this memory does not exceed one year, but in others with a multi-year behavior, for example because of groundwater, several years may be necessary to initialize the model. During the warm-up period, the model errors are not used in the calculation of the performance criteria. This means that it is not necessary to have observed streamflow data over the warm-up period: only climate data are needed.
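The role of the warm-up period can be illustrated with a minimal base-R sketch (this is not GR2M; the reservoir model, the depletion coefficient `k` and the initial levels are purely illustrative): a single linear reservoir is run from two very different initial levels under the same forcing, and after about two years the two trajectories are indistinguishable.

```r
# Minimal sketch (base R, not GR2M): a linear reservoir with storage update
# S[t] = (1 - k) * S[t-1] + P[t], run from two different initial levels to
# show how a warm-up period erases the arbitrary initial condition
run_reservoir <- function(S0, P, k = 0.5) {
  S <- numeric(length(P))
  for (t in seq_along(P)) {
    S0 <- (1 - k) * S0 + P[t]  # outflow k * S, inflow P
    S[t] <- S0
  }
  S
}

set.seed(1)
P <- runif(36, min = 0, max = 100)         # 36 months of synthetic rainfall
S_empty <- run_reservoir(S0 = 0,   P = P)  # start from an empty store
S_full  <- run_reservoir(S0 = 500, P = P)  # start from a full store

# After a 24-month warm-up, the two runs are practically identical
abs(S_empty[25] - S_full[25])
```

The difference between the two runs decays geometrically (it is halved at each time step here), which is why a warm-up period longer than the catchment memory makes the simulation insensitive to the chosen initial levels.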

In this exercise, a warm-up period of 24 months will be considered, starting in January 2003 and ending in December 2004. The next time step (January 2005) will be the first time step of the model calibration.

The period to be considered for the calibration of the GR2M parameters on the L’Esteron au Broc [La Clave] catchment starts in January 2005 and ends in December 2010.

The calibration criterion to be considered in this exercise is the Nash and Sutcliffe criterion (Nash and Sutcliffe 1970), noted \(NSE\) hereafter (see following equation). This criterion is widely used in hydrological modelling.

The NSE criterion, bounded between \(-\infty\) and \(1\), quantifies the performance of a model in a relative way, by comparing a series of simulated streamflows with a so-called “naive” model, here the average of the observed streamflows (i.e. a series made up, at each time step, of the average of the observed streamflows). Thus, an NSE value equal to 1 indicates perfect agreement between the observed and simulated streamflow series (which is never the case), whereas an NSE value lower than 0 means that the simulation considered performs worse than the “naive” model. The calculation of \(NSE\) is detailed in the following equation, where \(Q_{obs,t}\) is the observed streamflow at time step \(t\), \(Q_{sim,t}\) is the simulated streamflow at time step \(t\), \(\overline{Q_{obs}}\) is the average of the observed streamflows, and \(n\) is the number of observations:

\[\begin{equation} NSE = 1-\frac{\sum_{t=1}^{n}(Q_{obs,t}-Q_{sim,t})^{2}}{\sum_{t=1}^{n}(Q_{obs,t}-\overline{Q_{obs}})^{2}} \end{equation}\]
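Translated into code, the equation above amounts to a few lines of R. The `nse()` helper below is purely illustrative (in this exercise the criterion is computed internally by airGRteaching via the `EffCrit` and `CalCrit` arguments); the streamflow values are toy data.

```r
# Illustrative NSE computation (base R); airGRteaching computes this
# internally when the "NSE" criterion is requested
nse <- function(q_obs, q_sim) {
  ok <- !is.na(q_obs) & !is.na(q_sim)  # ignore missing time steps
  1 - sum((q_obs[ok] - q_sim[ok])^2) / sum((q_obs[ok] - mean(q_obs[ok]))^2)
}

q_obs <- c(1.2, 0.8, 2.5, 4.1, 1.9)     # toy observed streamflows
nse(q_obs, q_obs)                        # perfect simulation: NSE = 1
nse(q_obs, rep(mean(q_obs), 5))          # "naive" mean model: NSE = 0
```

The two checks at the end recover the two reference points discussed above: a perfect simulation scores 1, and the “naive” mean model scores exactly 0.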

The elements necessary for the calculation of the calibration criterion have to be set as arguments of the `CalGR()` function.

This manual calibration task, which can be tedious (but very formative) and requires a certain expertise, consists of testing several sets of GR2M parameters and analyzing the quality of the simulations produced over the calibration period. In this exercise, a maximum of 10 parameter sets will be tested. The first set to be tested is composed of the median values of the GR2M parameters, defined by Mouelhi et al. (2006) after numerous calibrations of the model on different catchments. These values, as well as the associated bounds of variation, are:

- X1: 380 [mm] (140 [mm] \(\leq\) X1 \(\leq\) 2640 [mm]);
- X2: 0.92 [-] (0.21 [-] \(\leq\) X2 \(\leq\) 1.31 [-]).

Automatic parameter estimation relies on a search algorithm that explores the parameter space. This algorithm automatically generates parameter sets, tests them, and generates new ones according to the performance of those already tested, until it converges to an optimum. The algorithm developed by Michel (1991) will be used in this exercise.
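As an illustration (this is not the Michel (1991) algorithm itself), the two-stage strategy reported by `CalGR()`, a coarse grid screening followed by a local search, can be sketched in base R on a toy two-parameter objective; the quadratic surface, its optimum and the step sizes are all arbitrary stand-ins.

```r
# Simplified illustration (base R) of a two-stage search:
# (1) coarse grid screening, (2) local search around the best grid point.
# The quadratic "objective" is a toy stand-in for the NSE of a model run.
objective <- function(p) -(((p[1] - 640) / 1000)^2 + (p[2] - 0.93)^2)

# 1) Grid screening over a coarse parameter grid
grid <- expand.grid(X1 = c(200, 600, 1000), X2 = c(0.5, 0.9, 1.3))
best <- unlist(grid[which.max(apply(grid, 1, objective)), ])

# 2) Naive local search: move by fixed steps while the criterion improves
step <- c(50, 0.05)
p <- unname(best)
repeat {
  cand <- rbind(p + c(step[1], 0), p - c(step[1], 0),
                p + c(0, step[2]), p - c(0, step[2]))
  vals <- apply(cand, 1, objective)
  if (max(vals) <= objective(p)) break
  p <- cand[which.max(vals), ]
}
p  # ends close to the toy optimum (X1 = 640, X2 = 0.93)
```

The grid stage avoids starting the local search in a poor region of the parameter space; the local stage then refines the estimate until no neighboring candidate improves the criterion.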

The evaluation period is a period over which a model previously calibrated on another period is applied. This is a classic way of using a model, which is thus confronted with unknown situations. The independence of the evaluation period from the calibration period ensures that the model does not benefit from already known information. The objective of this test is to evaluate whether the model is able to maintain, under new climatic conditions, the same level of performance (and therefore error) as during calibration. If so, we can consider that the model parameters depend little on the conditions of the calibration period and therefore that the model is transposable to different conditions (we say that it is robust). If not, we must understand the causes of this decrease in performance.

Over this evaluation period, we can evaluate the model performance using the same criterion as the one used for calibration, but we can also complete the analysis using other criteria.
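As an example of a complementary criterion, the relative volume bias is a simple check of whether the model over- or underestimates the total streamflow volume. The `bias()` helper below is an illustrative base-R function applied to toy data, not an airGRteaching function.

```r
# Illustrative complementary criterion (base R): relative volume bias,
# i.e. the simulated volume error as a fraction of the observed volume
bias <- function(q_obs, q_sim) {
  ok <- !is.na(q_obs) & !is.na(q_sim)  # ignore missing time steps
  sum(q_sim[ok] - q_obs[ok]) / sum(q_obs[ok])
}

bias(c(10, 20, 30), c(12, 18, 33))  # positive value: volume overestimation
```

Unlike the NSE, which is dominated by errors on high flows, such a volume criterion reveals systematic over- or underestimation that can remain hidden behind a good NSE score.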

In this exercise, the evaluation period will start in January 2011 and end in December 2018, preceded by a warm-up period starting in January 2009 and ending in December 2010.

The final aim of this exercise is to reconstruct the streamflows of the catchment for the months without measurements, using simulated streamflows. In order to have a single simulation covering the entire study period, this simulation will start in January 2001 and end in December 2018, with a warm-up period of 24 months starting in January 1999 and ending in December 2000.

The data set available for the rainfall-runoff modelling consists of:

- a time series of daily total precipitation (liquid + solid) [mm/day] (`Ptot`);
- a time series of daily potential evapotranspiration calculated with the Oudin et al. (2005) formula [mm/day] (`Evap`);
- a time series of daily streamflows expressed as a specific discharge [mm/day] (`Qmmd`).

The daily time series can be aggregated to a monthly time step using the `SeriesAggreg()` function.

The following command lines read the data necessary for the calibration of the GR2M rainfall-runoff model and define the temporal periods considered (warm-up, calibration and evaluation periods):

```
# Catchment data loading
library(airGRdatasets)
data("Y643401001", package = "airGRdatasets")
# Catchment metadata
str(Y643401001$Meta)
```

```
## List of 4
## $ Code:List of 2
## ..$ H3: chr "Y643401001"
## ..$ H2: chr "Y6434010"
## $ Name: chr "L'Esteron au Broc [La Clave]"
## $ Coor:List of 2
## ..$ X: num 7.16
## ..$ Y: num 43.8
## $ Area: num 442
```

```
# Observed daily time series
ts_obs_d <- Y643401001$TS
# Summary of the time series
summary(ts_obs_d)
```

```
## Date Ptot Temp Evap
## Min. :1999-01-01 Min. : 0.000 Min. :-7.20 Min. :0.000
## 1st Qu.:2004-01-01 1st Qu.: 0.000 1st Qu.: 5.20 1st Qu.:0.600
## Median :2008-12-31 Median : 0.000 Median :10.40 Median :1.700
## Mean :2008-12-31 Mean : 2.934 Mean :10.69 Mean :1.989
## 3rd Qu.:2013-12-31 3rd Qu.: 1.200 3rd Qu.:16.30 3rd Qu.:3.300
## Max. :2018-12-31 Max. :146.300 Max. :27.20 Max. :5.300
##
## Qls Qmmd
## Min. : 790 Min. : 0.154
## 1st Qu.: 1830 1st Qu.: 0.357
## Median : 3100 Median : 0.605
## Mean : 6303 Mean : 1.231
## 3rd Qu.: 6990 3rd Qu.: 1.365
## Max. :167000 Max. :32.611
## NA's :136 NA's :136
```

```
# Calibration period
per_cal_wup <- c("2003-01-01", "2004-12-01")
per_cal_run <- c("2005-01-01", "2010-12-01")
# Evaluation period
per_eva_wup <- c("2009-01-01", "2010-12-01")
per_eva_run <- c("2011-01-01", "2018-12-01")
# Simulation period
per_sim_wup <- c("1999-01-01", "2000-12-01")
per_sim_run <- c("2001-01-01", "2018-12-01")
```

The following command lines prepare the available data for use by GR2M, using the `SeriesAggreg()` and `PrepGR()` functions.

```
# Monthly aggregation of observed daily time series
ts_obs_m <- SeriesAggreg(ts_obs_d[, c("Date", "Ptot", "Evap", "Qmmd", "Temp")],
                         Format = "%Y%m",
                         ConvertFun = c("sum", "sum", "sum", "mean"))
```
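As a cross-check of what this monthly aggregation does, the monthly sum of a daily series can be reproduced with base R only. The snippet below uses toy data and `aggregate()`; `SeriesAggreg()` additionally handles the date column and per-column conversion functions.

```r
# Illustrative cross-check (base R, toy data): monthly sums of a daily
# series, i.e. the "sum" case of SeriesAggreg(..., Format = "%Y%m")
dates <- seq(as.Date("1999-01-01"), as.Date("1999-03-31"), by = "day")
ptot  <- rep(2, length(dates))   # 2 mm of rainfall every day
month <- format(dates, "%Y%m")   # monthly grouping key, e.g. "199901"
ptot_m <- aggregate(ptot, by = list(Month = month), FUN = sum)
ptot_m  # January: 62 mm, February: 56 mm, March: 62 mm
```

Note that streamflows expressed in mm/day are also summed (not averaged) to obtain mm/month, whereas a variable such as temperature would be averaged, hence the `ConvertFun` vector above.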

```
# Data processing for GR2M
prep <- PrepGR(DatesR = ts_obs_m$Date,
               Precip = ts_obs_m$Ptot,
               PotEvap = ts_obs_m$Evap,
               Qobs = ts_obs_m$Qmmd,
               HydroModel = "GR2M",
               CemaNeige = FALSE)
```

The following command lines illustrate the steps required to perform a rainfall-runoff simulation using a given set of parameters (here the set of default values of GR2M) and to calculate the NSE criterion associated with this simulation.

```
# Parameter set to test
i_param_gr2m <- c(X1 = 380, X2 = 0.92)
# Simulation over the calibration period
i_sim_manu <- SimGR(PrepGR = prep,
                    Param = i_param_gr2m,
                    EffCrit = "NSE",
                    WupPer = per_cal_wup,
                    SimPer = per_cal_run,
                    verbose = TRUE)
```

`## Crit. NSE[Q] = 0.8133`

```
# Calibration criterion
GetCrit(i_sim_manu)
```

```
## NSE[Q]
## 0.8132615
```

```
# Graphical assessment of the calibration performance
plot(i_sim_manu)
```

The first set of parameters tested (default GR2M values) gives an NSE criterion of 0.813, which is a good overall performance. The preceding figure allows us to compare the resulting simulation with the observed streamflows.

Now it’s your turn to play! The game consists in testing different values of the GR2M parameters, performing a simulation and calculating the NSE criterion for each tested set, in order to identify the set that seems optimal. For each parameter, plot the evolution of the “objective” function versus the parameter value. The analysis of the simulated hydrographs, coupled with the comparison of NSE values, will help guide the modification of the parameter values. To do so, you can embed the previous code in a loop: at each iteration you test a new parameter set and compute the corresponding criterion, which allows you to find the “best” parameter set.
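The structure of such a loop can be sketched in base R as follows. Here `eval_nse()` is a toy stand-in for the `SimGR()` + `GetCrit()` pair used above (its optimum at X1 = 640 is arbitrary and only serves the illustration); in practice you would call those two functions inside the loop, with `prep` and the period vectors defined earlier.

```r
# Sketch of the manual calibration loop (base R). eval_nse() mimics an
# NSE surface with an arbitrary optimum at X1 = 640, for illustration only.
eval_nse <- function(param) 0.85 - ((param[["X1"]] - 640) / 1000)^2

# Candidate parameter sets to test (X2 kept at its median value here)
param_sets <- lapply(c(200, 380, 500, 640, 800, 1200),
                     function(x1) c(X1 = x1, X2 = 0.92))

# Evaluate each candidate and keep the best one
crits <- sapply(param_sets, eval_nse)
best  <- param_sets[[which.max(crits)]]
best  # the candidate closest to the toy optimum
```

Plotting `crits` against the tested X1 values gives exactly the “objective function versus parameter value” curve requested above, and suggests where to place the next candidates.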

The following command lines allow the GR2M model to be calibrated over the so-called “calibration” period.

```
# Calibration
cal_auto <- CalGR(PrepGR = prep,
                  CalCrit = "NSE",
                  WupPer = per_cal_wup,
                  CalPer = per_cal_run,
                  verbose = TRUE)
```

```
## Grid-Screening in progress (0% 20% 40% 60% 80% 100%)
## Screening completed (9 runs)
## Param = 347.234, 0.907
## Crit. NSE[Q] = 0.8022
## Steepest-descent local search in progress
## Calibration completed (17 iterations, 72 runs)
## Param = 639.061, 0.930
## Crit. NSE[Q] = 0.8558
```

```
# Graphical assessment of the calibration performance
plot(cal_auto)
```

```
# Get parameter values
param_cal <- GetParam(cal_auto)
param_cal
```

`## [1] 639.0611 0.9300`

```
# Get criterion value
GetCrit(cal_auto)
```

```
## NSE[Q]
## 0.8558419
```

The two parameters and the value of the calibration criterion (\(NSE\)) obtained after the automatic calibration procedure are:

- X1 = 639 [mm]
- X2 = 0.93 [-]
- \(NSE\) = 0.856 [-]

The performance obtained in calibration is considered good, with a criterion \(NSE\) equal to 0.856.

The following command lines store in the same table the observed streamflows and the streamflows simulated with the parameter set obtained by automatic calibration, in order to compare them.

```
# Combination of observed and simulated streamflow time series on the calibration period
ts_cal <- as.data.frame(cal_auto)
# Combination of observed and simulated streamflow time series on the entire period
ts_cal_all <- merge(x = ts_obs_m[, "Date", drop = FALSE], y = ts_cal,
                    by.x = "Date", by.y = "Dates",
                    all.x = TRUE)
```

The following figure shows the observed and simulated streamflow series over the calibration period.

The following command lines use the parameter set obtained by automatic calibration to carry out a simulation over the evaluation period (2011-2018) and to calculate the \(NSE\) score associated with this simulation. This score constitutes the evaluation performance of the model.

```
# Simulation over the evaluation period
eva <- SimGR(PrepGR = prep,
             Param = param_cal,
             WupPer = per_eva_wup,
             SimPer = per_eva_run,
             EffCrit = "NSE",
             verbose = FALSE)
# Get the criterion value
GetCrit(eva)
```

```
## NSE[Q]
## 0.8642077
```

```
# Combination of observed and simulated streamflow time series
ts_eva <- as.data.frame(eva)
```

The performance obtained by the model in evaluation is 0.864.

The following figure represents the observed and simulated streamflow series over the evaluation period.

The following command lines use the parameters obtained by automatic calibration to simulate the streamflow over the entire period studied.

```
# Simulation over the entire period
sim <- SimGR(PrepGR = prep,
             Param = param_cal,
             WupPer = per_sim_wup,
             SimPer = per_sim_run,
             EffCrit = "NSE",
             verbose = FALSE)
```

```
# Get the criterion value
GetCrit(sim)
```

```
## NSE[Q]
## 0.84484
```

```
# Combination of observed and simulated streamflow time series
ts_sim <- as.data.frame(sim)
```

The results of the final simulation are shown in the following figure.

Delaigue, O., L. Coron, P. Brigode, and G. Thirel. 2023. *airGRteaching: Teaching Hydrological Modelling with GR (Shiny Interface Included)*. R package. https://doi.org/10.15454/W0SSKT.

Delaigue, O., G. Thirel, L. Coron, and P. Brigode. 2018. “airGR and airGRteaching: Two Open-Source Tools for Rainfall-Runoff Modeling and Teaching Hydrology.” In *HIC 2018. 13th International Conference on Hydroinformatics*, edited by Goffredo La Loggia, Gabriele Freni, Valeria Puleo, and Mauro De Marchis, 3:541–48. EPiC Series in Engineering. EasyChair. https://doi.org/10.29007/qsqj.

Michel, C. 1991. *Hydrologie Appliquée aux Petits Bassins Ruraux*. Cemagref, Antony.

Mouelhi, S., C. Michel, C. Perrin, and V. Andréassian. 2006. “Stepwise Development of a Two-Parameter Monthly Water Balance Model.” *Journal of Hydrology* 318 (1–4): 200–214. https://doi.org/10.1016/j.jhydrol.2005.06.014.

Nash, J. E., and J. V. Sutcliffe. 1970. “River Flow Forecasting Through Conceptual Models Part I – A Discussion of Principles.” *Journal of Hydrology* 10 (3): 282–90. https://doi.org/10.1016/0022-1694(70)90255-6.

Oudin, L., F. Hervieu, C. Michel, C. Perrin, V. Andréassian, F. Anctil, and C. Loumagne. 2005. “Which Potential Evapotranspiration Input for a Lumped Rainfall–Runoff Model? Part 2 – Towards a Simple and Efficient Potential Evapotranspiration Model for Rainfall–Runoff Modelling.” *Journal of Hydrology* 303 (1–4): 290–306. https://doi.org/10.1016/j.jhydrol.2004.08.026.