Fit a univariate extreme value distribution function (e.g., GEV, GP, PP, Gumbel, or Exponential) to data, possibly with covariates in the parameters.
fevd(x, data, threshold = NULL, threshold.fun = ~1, location.fun = ~1,
scale.fun = ~1, shape.fun = ~1, use.phi = FALSE,
type = c("GEV", "GP", "PP", "Gumbel", "Exponential"),
method = c("MLE", "GMLE", "Bayesian", "Lmoments"), initial = NULL,
span, units = NULL, time.units = "days", period.basis = "year",
na.action = na.fail, optim.args = NULL, priorFun = NULL,
priorParams = NULL, proposalFun = NULL, proposalParams = NULL,
iter = 9999, weights = 1, blocks = NULL, verbose = FALSE)
# S3 method for fevd
plot(x, type = c("primary", "probprob", "qq", "qq2",
"Zplot", "hist", "density", "rl", "trace"),
rperiods = c(2, 5, 10, 20, 50, 80, 100, 120, 200, 250, 300, 500, 800),
a = 0, hist.args = NULL, density.args = NULL, d = NULL, ...)
# S3 method for fevd.bayesian
plot(x, type = c("primary", "probprob", "qq", "qq2",
"Zplot", "hist", "density", "rl", "trace"),
rperiods = c(2, 5, 10, 20, 50, 80, 100, 120, 200, 250, 300, 500, 800),
a = 0, hist.args = NULL, density.args = NULL, burn.in = 499, d = NULL, ...)
# S3 method for fevd.lmoments
plot(x, type = c("primary", "probprob", "qq", "qq2",
"Zplot", "hist", "density", "rl", "trace"),
rperiods = c(2, 5, 10, 20, 50, 80, 100, 120, 200, 250, 300, 500, 800),
a = 0, hist.args = NULL, density.args = NULL, d = NULL, ...)
# S3 method for fevd.mle
plot(x, type = c("primary", "probprob", "qq", "qq2",
"Zplot", "hist", "density", "rl", "trace"),
rperiods = c(2, 5, 10, 20, 50, 80, 100, 120, 200, 250, 300, 500, 800),
a = 0, hist.args = NULL, density.args = NULL, period = "year",
prange = NULL, d = NULL, ...)
# S3 method for fevd
print(x, ...)
# S3 method for fevd
summary(object, ...)
# S3 method for fevd.bayesian
summary(object, FUN = "mean", burn.in = 499, ...)
# S3 method for fevd.lmoments
summary(object, ...)
# S3 method for fevd.mle
summary(object, ...)
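As a quick orientation before the argument and value details below, a minimal, hedged sketch of the basic call: it simulates 50 block maxima from Gaussian noise (purely illustrative data) and fits a stationary GEV with the default MLE method.

```r
library(extRemes)

set.seed(42)
## 50 "annual" maxima, each the maximum of 365 standard normal draws
z <- apply(matrix(rnorm(365 * 50), nrow = 365), 2, max)

fit <- fevd(z)    # defaults: type = "GEV", method = "MLE"
fit               # print method: estimates, negative log-likelihood, AIC/BIC
summary(fit)      # summary method for class "fevd.mle"
```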
fevd: A list object of class “fevd” is returned with components:
the function call. Used as a default for titles in plots and output printed to the screen (e.g., by summary
).
character vector giving the names of the arguments x and data. This is used by the datagrabber method function, as well as for some plot labeling.
Not all of these are included, and which ones are included depends on how the data are passed to fevd. These may be character strings, vectors or data frames, depending again on the original function call. They are used by datagrabber in order to obtain the original data. If x is a column of data, then x.fun is a formula specifying which column. Also, if x is a formula, then x.fun is this self-same formula.
logical; is the argument x
a column of the argument data
(TRUE) or not (FALSE)?
character string naming which estimation method was used for the fit. Same as the method argument.
character string naming which EVD was fit to the data. Same as type
argument.
character string naming the period for return periods. This is used in plot labeling and naming, e.g., output from ci
. Same as the argument passed in.
character string naming the data units; used for plot labeling. Not present if not passed in by the user.
A list object giving the values of the threshold and parameter function arguments, as well as a logical stating whether the log of the scale parameter is used or not. This is present even if it does not make sense (i.e., for L-moments estimation) because it is used by the is.fixedfevd
function for determining whether or not a model is stationary.
logicals stating whether the named parameter is constant (TRUE) or not (FALSE). Currently not used, but could be useful. Present even for L-moments estimation, although it does not really make sense in that context (it will always be TRUE).
the length of the data to which the model is fit.
For POT models only, this is the estimated number of periods (usually years) and the number of points per period, both estimated using time.units.
character naming the function used for handling missing values.
If L-moments are used, then this is a simple vector of length equal to the number of parameters of the model. For MLE/GMLE, this is a list object giving the output from optim
(in particular, the par component is a vector giving the parameter estimates). For Bayesian estimation, this is a matrix whose rows give the results for each iteration of the MCMC simulations. The columns give the parameters, as well as an additional last column that gives the number of parameters that were updated at each iteration.
These are only present for GMLE and Bayesian estimation methods. They give the prior df and optional arguments used in the estimation.
These are only present for Bayesian estimation, and they give the proposal function used along with optional arguments.
If the estimation method is “Bayesian”, then this component is a matrix whose first several columns give 0 or 1 for each parameter at each iteration, where 0 indicates that the parameter was not updated and 1 that it was. The first row is NA for these columns. The last two columns give the likelihood and prior values for the current parameter values.
matrix whose first n columns give a one or zero depending on whether the parameter was updated or not, resp. The last two columns give the log-likelihood and prior values associated with the parameters of that sample.
Present only if blocks is supplied as an argument. Will contain the input information and computed block-level information for use in post-processing the model object, in particular block-wise design matrices.
print: Does not return anything. Information is printed to the screen.
summary: Depending on the estimation method, either a numeric vector giving the parameter estimates (“Lmoments”) or a list object (all other estimation methods) is returned invisibly, and if silent is FALSE (default), then summary information is printed to the screen. List components may include:
numeric vectors giving the parameter and standard error estimates, resp.
matrix giving the parameter covariances.
number giving the value of the negative log-likelihood.
numbers giving the Akaike Information Criterion (AIC, Akaike, 1974), the Bayesian Information Criterion (BIC, Schwarz, 1978), and/or the Deviance Information Criterion, resp.
fevd: x can be a numeric vector, the name of a column of data, or a formula giving the data to which the EVD is to be fit. In the latter two cases, the data argument must be specified and must have appropriately named columns.
plot and print method functions: any list object returned by fevd.
A list object of class “fevd” as returned by fevd.
A data frame object with named columns giving the data to be fit, as well as any data necessary for modeling non-stationarity through the threshold and/or any of the parameters.
numeric (single or vector). If fitting a peak over threshold (POT) model (i.e., type
= “PP”, “GP”, “Exponential”) this is the threshold over which (non-inclusive) data (or excesses) are used to estimate the parameters of the distribution function. If the length is greater than 1, then the length must be equal to either the length of x
(or number of rows of data
) or to the number of unique arguments in threshold.fun
.
formula describing a model for the thresholds using columns from data
. Any valid formula will work. data
must be supplied if this argument is anything other than ~ 1. Not for use with method
“Lmoments”.
formula describing a model for each parameter using columns from data
. data
must be supplied if any of these arguments are anything other than ~ 1.
logical; should the log of the scale parameter be used in the numerical optimization (for method
“MLE”, “GMLE” and “Bayesian” only)? For the ML and GML estimation, this may make things more stable for some data.
fevd: character stating which EVD to fit. Default is to fit the generalized extreme value (GEV) distribution function (df).
plot method function: character describing which plot(s) is (are) desired. Default is “primary”, which makes a 2 by 2 panel of plots including the QQ plot of the data quantiles against the fitted model quantiles (type
“qq”), a QQ plot (“qq2”) of quantiles from model-simulated data against the data, a density plot of the data along with the model fitted density (type
“density”) and a return level plot (type
“rl”). In the case of a stationary (fixed) model, the return level plot will show return levels calculated for return periods given by return.period
, along with associated CIs (calculated using default method arguments, depending on the estimation method used in the fit). For non-stationary models, the data are plotted as a line along with associated effective return levels for return periods of 2, 20 and 100 years (unless return.period is specified by the user to other values). Other possible values for type
include “hist”, which is similar to “density”, but shows the histogram for the data and “trace”, which is not used for L-moment fits. In the case of MLE/GMLE, the trace yields a panel of plots that show the negative log-likelihood and gradient negative log-likelihood (note that the MLE gradient is currently used even for GMLE) for each of the estimated parameter(s); allowing one parameter to vary according to prange
, while the others remain fixed at their estimated values. In the case of Bayesian estimation, the “trace” option creates a panel of plots showing the posterior df and MCMC trace for each parameter.
fevd: character naming which type of estimation method to use. Default is to use maximum likelihood estimation (MLE).
A list object with any named parameter component giving the initial value estimates for starting the numerical optimization (MLE/GMLE) or the MCMC iterations (Bayesian). In the case of MLE/GMLE, it is best to obtain a good initial guess, and in the Bayesian case, it is perhaps better to choose poor initial estimates. If NULL (default), then L-moments estimates and estimates based on Gumbel moments will be calculated, and whichever yields the lowest negative log-likelihood is used. In the case of type “PP”, an additional MLE/GMLE estimate is made for the generalized Pareto (GP) df, and the parameters are converted to those of the point process (PP) model. Again, the initial estimates yielding the lowest negative log-likelihood value are used for the initial guess.
single numeric giving the number of years (or other desired temporal unit) in the data set. Only used for POT models, and only important in the estimation for the PP model, but important for subsequent estimates of return levels for any POT model. If missing, it will be calculated using information from time.units
.
(optional) character giving the units of the data, which if given may be used subsequently (e.g., on plot axis labels, etc.).
character string that must be one of “hours”, “minutes”, “seconds”, “days”, “months”, “years”, “m/hour”, “m/minute”, “m/second”, “m/day”, “m/month”, or “m/year”; where m is a number. If span
is missing, then this argument is used in determining the value of span
. It is also returned with the output and used subsequently for plot labelling, etc.
character string giving the units for the period. Used only for plot labelling and naming output vectors from some of the method functions (e.g., for establishing what the period represents for the return period).
numeric vector giving the return period(s) for which it is desired to calculate the corresponding return levels.
character string naming the units for the return period.
The first burn.in
values are thrown out before calculating anything from the MCMC sample.
when plotting empirical probabilities and the like, the function ppoints is called, to which this argument a is passed.
numeric determining how to scale the rate parameter for the point process. If NULL, the function will attempt to scale based on the values of period.basis
and time.units
, the first of which must be “year” and the second of which must be one of “days”, “months”, “years”, “hours”, “minutes” or “seconds”. If these conditions do not hold, then d should be specified; otherwise, it is not necessary.
named list object containing arguments to the density
and hist
functions, respectively.
function to be called to handle missing values. Generally, this should remain at the default (na.fail), and the user should take care to impute missing values in an appropriate manner, as mishandling them may have serious consequences for the results.
A list with named components matching exactly any arguments that the user wishes to specify to optim
, which is used only for MLE and GMLE methods. By default, the “BFGS” method is used along with grlevd
for the gradient argument. Generally, the grlevd
function is used for the gr
option unless the user specifies otherwise, or the optimization method does not take gradient information.
character naming a prior df to use for methods GMLE and Bayesian. The default for GMLE (not including Gumbel or Exponential types) is to use the one suggested by Martins and Stedinger (2000, 2001) on the shape parameter; a beta df on -0.5 to 0.5 with parameters p
and q
. Must take x
as its first argument for method
“GMLE”. Optional arguments for the default function are p
and q
(see details section).
The default for Bayesian estimation is to use normal distribution functions. For Bayesian estimation, this function must take theta
as its first argument.
Note: if this argument is not NULL and method
is set to “MLE”, it will be changed to “GMLE”.
named list containing any prior df parameters (where the list names are the same as the function argument names). Default for GMLE (assuming the default function is used) is to use q
= 6 and p
= 9. Note that in the Martins and Stedinger (2000, 2001) papers, they use a different EVD parametrization than is used here, such that a positive shape parameter gives the upper bounded distribution instead of the heavy-tail one (as employed here). To be consistent with these papers, p
and q
are reversed inside the code so that they have the same interpretation as in the papers.
Default for Bayesian estimation is to use ML estimates for the means of each parameter (may be changed using m
, which must be a vector of same length as the number of parameters to be estimated (i.e., if using the default prior df)) and a standard deviation of 10 for all other parameters (again, if using the default prior df, may be changed using v
, which must be a vector of length equal to the number of parameters).
For Bayesian estimation only, this is a character naming a function used to generate proposal parameters at each iteration of the MCMC. If NULL (default), a random walk chain is used whereby if theta.i is the current value of the parameter, the proposed new parameter theta.star is given by theta.i + z, where z is drawn at random from a normal df.
A named list object describing any optional arguments to the proposalFun
function. All functions must take argument p
, which must be a vector of the parameters, and ind
, which is used to identify which parameter is to be proposed. The default proposalFun
function takes additional arguments mean
and sd
, which must be vectors of length equal to the number of parameters in the model (default is to use zero for the mean of z for every parameter and 0.1 for its standard deviation).
Used only for Bayesian estimation, this is the number of MCMC iterations to do.
numeric of length 1 or n giving weights to be applied in the likelihood calculations (e.g., if there are data points to be weighted more/less heavily than others).
An optional list containing information required to fit point process models in a computationally-efficient manner by using only the exceedances and not the observations below the threshold(s). See details for further information.
character string naming a function to use to estimate the parameters from the MCMC sample. The function is applied to each column of the results
component of the returned fevd
object.
logical; should progress information be printed to the screen? If TRUE, for MLE/GMLE, the argument trace
will be set to 6 in the call to optim
.
matrix whose columns are numeric vectors of length two for each parameter in the model giving the parameter range over which trace plots should be made. Default is to use either +/- 2 * std. err. of the parameter (first choice) or, if the standard error cannot be calculated, then +/- 2 * log2(abs(parameter)). Typically, these values seem to work very well for these plots.
Not used by most functions here. Optional arguments to plot
for the various plot
method functions.
In the case of the summary
method functions, the logical argument silent
may be passed to suppress (if TRUE) printing any information to the screen.
Eric Gilleland
See text books on extreme value analysis (EVA) for more on univariate EVA (e.g., Coles, 2001 and Reiss and Thomas, 2007 give fairly accessible introductions to the topic for most audiences; and Beirlant et al., 2004, de Haan and Ferreira, 2006, as well as Reiss and Thomas, 2007 give more complete theoretical treatments). The extreme value distributions (EVDs) have theoretical support for analyzing extreme values of a process. In particular, the generalized extreme value (GEV) df is appropriate for modeling block maxima (for large blocks, such as annual maxima), and the generalized Pareto (GP) df is appropriate for modeling threshold excesses (i.e., x - u | x > u, for u a high threshold).
The GEV df is given by
Pr(X <= x) = G(x) = exp[-(1 + shape*(x - location)/scale)^(-1/shape)]
for 1 + shape*(x - location)/scale > 0 and scale > 0. If the shape parameter is zero, then the df is defined by continuity and simplifies to
G(x) = exp(-exp(-(x - location)/scale)).
The GEV df is often called a family of distribution functions because it encompasses the three types of EVDs: Gumbel (shape = 0, light tail), Frechet (shape > 0, heavy tail) and the reverse Weibull (shape < 0, bounded upper tail at location - scale/shape). It was first found by R. von Mises (1936) and also independently noted later by meteorologist A. F. Jenkinson (1955). It enjoys theoretical support for modeling maxima taken over large blocks of a series of data.
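The effect of the shape parameter on the upper tail can be sketched with the quantile function qevd from this package (assuming its defaults loc = 0 and scale = 1):

```r
library(extRemes)

## 0.99 quantile of the standard GEV (loc = 0, scale = 1) for each type:
qevd(0.99, shape = 0)      # Gumbel: light upper tail
qevd(0.99, shape = 0.3)    # Frechet type: heavier tail, so a larger quantile
qevd(0.99, shape = -0.3)   # reverse Weibull: bounded above at loc - scale/shape
```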
The generalized Pareto df is given by (Pickands, 1975)
Pr(X <= x) = F(x) = 1 - [1 + shape*(x - threshold)/scale]^(-1/shape)
where 1 + shape*(x - threshold)/scale > 0, scale > 0, and x > threshold. If shape = 0, then the GP df is defined by continuity and becomes
F(x) = 1 - exp(-(x - threshold)/scale).
There is an approximate relationship between the GEV and GP distribution functions where the GP df is approximately the tail df for the GEV df. In particular, the scale parameter of the GP is a function of the threshold (denote it scale.u), and is equivalent to scale + shape*(threshold - location) where scale, shape and location are parameters from the “equivalent” GEV df. Similar to the GEV df, the shape parameter determines the tail behavior, where shape = 0 gives rise to the exponential df (light tail), shape > 0 the Pareto df (heavy tail) and shape < 0 the Beta df (bounded upper tail at location - scale.u/shape). Theoretical justification supports the use of the GP df family for modeling excesses over a high threshold (i.e., y = x - threshold). It is assumed here that x
, q
describe x (not y = x - threshold). Similarly, the random draws are y + threshold.
If interest is in minima or deficits under a low threshold, all of the above applies to the negative of the data (e.g., - max(-X_1,...,-X_n) = min(X_1, ..., X_n)) and fevd
can be used so long as the user first negates the data, and subsequently realizes that the return levels (and location parameter) given will be the negative of the desired return levels (and location parameter), etc.
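The negation trick above can be sketched as follows (simulated data are purely illustrative; return.level is assumed to behave as documented elsewhere in this package):

```r
library(extRemes)

set.seed(1)
## 40 block minima of standard normal "daily" data
x <- apply(matrix(rnorm(365 * 40), nrow = 365), 2, min)

fit <- fevd(-x)                                # fit the GEV to the negated series
rl <- return.level(fit, return.period = 100)
-rl     # negate to recover the 100-period return level for the minima
```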
The study of extremes often involves a paucity of data, and for small sample sizes, L-moments may give better estimates than competing methods, but penalized MLE (cf. Coles and Dixon, 1999; Martins and Stedinger, 2000; 2001) may give better estimates than the L-moments for such samples. Martins and Stedinger (2000; 2001) use the terminology generalized MLE, which is also used here.
Non-stationary models:
The current code does not allow for non-stationary models with L-moments estimation.
For MLE/GMLE (see El Adlouni et al 2007 for using GMLE in fitting models whose parameters vary) and Bayesian estimation, linear models for the parameters may be fit using formulas, in which case the data
argument must be supplied. Specifically, the models allowed for a set of covariates, y, are:
location(y) = mu0 + mu1 * f1(y) + mu2 * f2(y) + ...
scale(y) = sig0 + sig1 * g1(y) + sig2 * g2(y) + ...
log(scale(y)) = phi(y) = phi0 + phi1 * g1(y) + phi2 * g2(y) + ...
shape(y) = xi0 + xi1 * h1(y) + xi2 * h2(y) + ...
For non-stationary fitting it is recommended that the covariates within the generalized linear models are (at least approximately) centered and scaled (see examples below). It is generally ill-advised to include covariates in the shape parameter, but there are situations where it makes sense.
Non-stationary modeling is accomplished with fevd
by using formulas via the arguments: threshold.fun
, location.fun
, scale.fun
and shape.fun
. See examples to see how to do this.
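A hedged sketch of a non-stationary fit (the data frame, column names and parameter values here are made up for illustration):

```r
library(extRemes)

set.seed(7)
d <- data.frame(year = 1:60)
## synthetic annual maxima with a linear trend in the location parameter
d$z <- 30 + 0.05 * d$year + revd(60, loc = 0, scale = 2, shape = -0.1)

## linear trend in the location, with the covariate centred and scaled
fit <- fevd(z, data = d, location.fun = ~ scale(year))
```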
Initial Value Estimates:
In the case of MLE/GMLE, it can be very important to get good initial estimates (e.g., see the examples below). fevd
attempts to find such estimates, but it is also possible for the user to supply their own initial estimates as a list object using the initial
argument, whereby the components of the list are named according to which parameter(s) they are associated with. In particular, if the model is non-stationary, with covariates in the location (e.g., mu(t) = mu0 + mu1 * t), then initial
may have a component named “location” that may contain either a single number (in which case, by default, the initial value for mu1 will be zero) or a vector of length two giving initial values for mu0 and mu1.
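Continuing a hypothetical trend-in-location example, initial values might be supplied as follows (all numbers illustrative):

```r
library(extRemes)

set.seed(7)
d <- data.frame(year = 1:60)
d$z <- 30 + 0.05 * d$year + revd(60, loc = 0, scale = 2, shape = -0.1)

## mu(t) = mu0 + mu1 * t: give both starting values for the location,
## or a single number to start mu1 at zero
fit <- fevd(z, data = d, location.fun = ~ year,
            initial = list(location = c(30, 0.05), scale = 2, shape = -0.1))
```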
For Bayesian estimation, it is good practice to try several starting values at different points to make sure the initial values do not affect the outcome. However, if initial values are not passed in, the MLEs are used (which probably is not a good thing to do, but is more likely to yield good results).
For MLE/GMLE, two (in the case of PP, three) initial estimates are calculated along with their associated likelihood values. The initial estimates that yield the highest likelihood are used. These methods are:
1. L-moment estimates.
2. Let m = mean(xdat) and s = sqrt(6 * var(xdat)) / pi. Then the initial values assigned for the location parameter when either initial
is NULL or the location component of initial
is NULL, are m - 0.57722 * s. When initial
or the scale component of initial
is NULL, the initial value for the scale parameter is taken to be s, and when initial
or its shape component is NULL, the initial value for the shape parameter is taken to be 1e-8 (because these initial estimates are moment-based estimates for the Gumbel df, so the initial value is taken to be near zero).
3. In the case of PP, which is often the most difficult model to fit, MLEs are obtained for a GP model, and the resulting parameter estimates are converted to those of the approximately equivalent PP model.
In the case of a non-stationary model, if the default initial estimates are used, then the intercept term for each parameter is given the initial estimate, and all other parameters are set to zero initially. The exception is in the case of PP model fitting where the MLE from the GP fits are used, in which case, these parameter estimates may be non-zero.
The generalized MLE (GMLE) method:
This method places a penalty (or prior df) on the shape parameter to help ensure a better fit. The procedure is nearly identical to MLE, except that the likelihood, L, is multiplied by the prior df, p(shape); because the negative log-likelihood is used, the effect is that of subtracting this term. Currently, this package does not supply a function to calculate the gradient for the GMLE case, so in particular the trace plot is not the trace of the actual negative log-likelihood (or gradient thereof) used in the estimation.
Bayesian Estimation:
It is possible to give your own prior and proposal distribution functions using the appropriate arguments listed above in the arguments section. At each iteration of the chain, the parameters are updated one at a time in random order. The default method uses a random walk chain for the proposal and normal distributions for the parameters.
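A hedged sketch of a Bayesian fit with the default priors and proposals (iter kept small here purely for speed; simulated data are illustrative):

```r
library(extRemes)

set.seed(99)
z <- apply(matrix(rnorm(365 * 50), nrow = 365), 2, max)

## default normal priors and random-walk proposals
bfit <- fevd(z, method = "Bayesian", iter = 2000)
plot(bfit, type = "trace")   # posterior densities above the MCMC traces
```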
Plotting output:
plot: The plot method function will take information from the fevd
output and make any of various useful plots. The default, regardless of estimation method, is to produce a 2 by 2 panel of plots giving some common diagnostic plots. Possible types (determined by the type
argument) include:
1. “primary” (default): yields the 2 by 2 panel of plots given by 3, 4, 8 and 9 below.
2. “probprob”: Model probabilities against empirical probabilities (obtained from the ppoints
function). A good fit should yield a straight one-to-one line of points. In the case of a non-stationary model, the data are first transformed to either the Gumbel (block maxima models) or exponential (POT models) scale, and plotted against probabilities from these standardized distribution functions. In the case of a PP model, the parameters are first converted to those of the approximately equivalent GP df, and are plotted against the empirical data threshold excesses probabilities.
3. “qq”: Empirical quantiles against model quantiles. Again, a good fit will yield a straight one-to-one line of points. Generally, the qq-plot is preferred to the probability plot in 1 above. As in 2, for the non-stationary case, data are first transformed and plotted against quantiles from the standardized distributions. Also as in 2 above, in the case of the PP model, parameters are converted to those of the GP df and quantiles are from threshold excesses of the data.
4. “qq2”: Similar to 3, except that data are first simulated from the fitted model, and then a qq-plot of the simulated values against the data (using the function qqplot from this self-same package) is made, which also yields confidence bands. Note that for a good-fitting model, this should again yield a straight one-to-one line of points, but generally it will not be as “well-behaved” as the plot in 3. The one-to-one line and a regression line fitting the quantiles are also shown. In the case of a non-stationary model, simulations are obtained by simulating from an appropriate standardized EVD, re-ordered to follow the same ordering as the data to which the model was fit, and then back-transformed using the covariates from data
and the parameter estimates to put the simulated sample back on the original scale of the data. The PP model is handled analogously as in 2 and 3 above.
5. and 6. “Zplot”: These are for PP model fits only and are based on Smith and Shively (1995). The Z plot is a diagnostic for determining whether or not the random variable, Zk, defined as the (possibly non-homogeneous) Poisson intensity parameter(s) integrated from exceedance time k - 1 to exceedance time k (beginning the series with k = 1), is independent and exponentially distributed with mean 1.
For the Z plot, it is necessary to scale the Poisson intensity parameter appropriately. For example, if the data are given on a daily time scale with an annual period basis, then this parameter should be divided by, for example, 365.25. From the fitted fevd
object, the function will try to account for the correct scaling based on the two components “period.basis” and “time.units”. The former currently must be “year” and the latter must be one of “days”, “months”, “years”, “hours”, “minutes” or “seconds”. If none of these are valid for your specific data (e.g., if an annual basis is not desired), then use the d
argument to explicitly specify the correct scaling.
7. “hist”: A histogram of the data is made, and the model density is shown with a blue dashed line. In the case of non-stationary models, the data are first transformed to an appropriate standardized EVD scale, and the model density line is for the self-same standardized EVD. Currently, this does not work for non-stationary POT models.
8. “density”: Same as 7, but the kernel density (using function density
) for the data is plotted instead of the histogram. In the case of the PP model, block maxima of the data are calculated and the density of these block maxima are compared to the PP in terms of the equivalent GEV df. If the model is non-stationary GEV, then the transformed data (to a stationary Gumbel df) are used. If the model is a non-stationary POT model, then currently this option is not available.
9. “rl”: Return level plot. This is done on the log-scale for the abscissa in order that the type of EVD can be discerned from the shape (i.e., heavy tail distributions are concave, light tailed distributions are straight lines, and bounded upper-tailed distributions are convex, asymptoting at the upper bound). 95 percent CIs are also shown (gray dashed lines). In the case of non-stationary models, the data are plotted as a line, and the “effective” return levels (by default the 2-period (i.e., the median), 20-period and 100-period are used; period is usually annual) are also shown (see, e.g., Gilleland and Katz, 2011). In the case of the PP model, the equivalent GEV df (stationary model) is assumed and data points are block maxima, where the blocks are determined from information passed in the call to fevd
. In particular, the span
argument (which, if not passed by the user, will have been determined by fevd
using time.units
along with the number of points per year (which is estimated from time.units
) are used to find the blocks over which the maxima are taken. For the non-stationary case, the equivalent GP df is assumed and parameters are converted. This helps facilitate a more meaningful plot, e.g., in the presence of a non-constant threshold, but otherwise constant parameters.
10. “trace”: In each of cases (b) and (c) below, a 2 by np panel of plots is created, where np is the number of parameters in the model.
(a) L-moments: Not available for the L-moments estimation.
(b) For MLE/GMLE, the likelihood traces are shown for each parameter of the model, whereby all but one parameter is held fixed at the MLE/GMLE values, and the negative log-likelihood is graphed for varying values of the parameter of interest. Note that this differs greatly from the profile likelihood (see, e.g., profliker
) where the likelihood is maximized over the remaining parameters. The gradient negative log-likelihoods are also shown for each parameter. These plots may be useful in diagnosing any fitting problems that might arise in practice. For ease of interpretation, the gradients are shown directly below the likelihoods for each parameter.
(c) For Bayesian estimation, the usual trace plots are shown with a gray vertical dashed line showing where the burn.in
value lies, and a gray dashed horizontal line through the posterior mean. However, the posterior densities are also displayed for each parameter directly above the usual trace plots. It is not currently planned to allow for adding the prior densities to the posterior density graphs, as this can be easily implemented by the user, but is more difficult to do generally.
As with ci and distill, only plot need be called by the user. The appropriate choice of the other functions is automatically determined from the fevd fitted object.
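For example (simulated data; a hedged sketch of the plot and ci calls):

```r
library(extRemes)

set.seed(42)
z <- apply(matrix(rnorm(365 * 50), nrow = 365), 2, max)
fit <- fevd(z)

plot(fit)                                         # "primary": 2 by 2 diagnostic panel
plot(fit, type = "rl", rperiods = c(2, 20, 100))  # return level plot
ci(fit, return.period = c(2, 20, 100))            # CIs for the same return periods
```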
Note that when blocks are provided to fevd, certain plots that require the full set of observations (including non-exceedances) cannot be produced.
Summaries and Printing:
summary and print method functions are available, and give different information depending on the estimation method used. However, in each case, the parameter estimates are printed to the screen. summary
returns some useful output (similar to distill, but in a list format). The print
method function need not be called explicitly; simply typing the name of the fevd
fitted object and pressing return executes the command (see examples below). The deviance information criterion (DIC) is calculated for the Bayesian estimation method as DIC = D(mean(theta)) + 2 * pd, where pd = mean(D(theta)) - D(mean(theta)), and D(theta) = -2 * log-likelihood evaluated at the parameter values given by theta. The means are taken over the posterior MCMC sample. The default method for estimating the parameter values from the MCMC sample is to take the mean of the sample (after removing the first burn.in samples). A good alternative is to set the FUN
argument to “postmode” in order to obtain the posterior mode of the sample.
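The DIC formula above can be sketched in a few lines of base R. The toy example below uses a normal-mean model with a stand-in "posterior sample" rather than extRemes code, purely to make the arithmetic of pd and DIC concrete; all names and numbers are illustrative.

```r
## Toy illustration of DIC = D(mean(theta)) + 2 * pd, where
## pd = mean(D(theta)) - D(mean(theta)) and D is the deviance.
set.seed(1)
y <- rnorm(50, mean = 2)                    # observed data (toy)
theta <- rnorm(4000, mean(y), 1/sqrt(50))   # stand-in MCMC sample of the mean

Dev <- function(m) -2 * sum(dnorm(y, mean = m, log = TRUE))  # D(theta)
Dbar <- mean(sapply(theta, Dev))   # mean(D(theta)) over the posterior sample
Dhat <- Dev(mean(theta))           # D(mean(theta))
pd   <- Dbar - Dhat                # effective number of parameters
DIC  <- Dhat + 2 * pd
```

Here pd comes out close to one, as expected for a single free parameter.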
Using Blocks to Reduce Computation in PP Fitting:
If blocks
is supplied, the user should
provide only the exceedances and not all of the data values. For
stationary models, the list should contain a component called
nBlocks
indicating the number of observations within a block, where
blocks are defined in a manner analogous to that used in GEV
models. For nonstationary models, the list may contain one or more of
several components. For nonstationary models with covariates, the list
should contain a data
component analogous to the data
argument,
providing values for the blocks. If the threshold varies, the list
should contain a threshold
component that is analogous to the
threshold
argument. If some of the observations within any block are
missing (assuming missing at random or missing completely at random),
the list should contain a proportionMissing
component that is a
vector with one value per block indicating the proportion of
observations missing for the block. To weight the blocks, the list can
contain a weights
component with one value per block. Warning: to
properly analyze nonstationary models, any covariates, thresholds, and
weights must be constant within each block.
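The structure of such a list might look as follows for a nonstationary fit with a single covariate; every name and value below is hypothetical (the covariate `year`, the threshold of 100, and the object `exceed` holding only the exceedances are illustrative placeholders, not extRemes defaults).

```r
## Hypothetical `blocks` list for a nonstationary PP fit with one
## covariate value per block; all names/values are illustrative.
blocks <- list(
  data = data.frame(year = 1:30),   # covariate values, one row per block
  threshold = rep(100, 30),         # threshold, constant within each block
  proportionMissing = rep(0, 30),   # fraction of missing obs per block
  weights = rep(1, 30)              # optional block weights
)
## `exceed` would hold only the threshold exceedances, not all data:
## fit <- fevd(exceed, threshold = 100, location.fun = ~ year,
##             type = "PP", blocks = blocks)
```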
References:
Akaike, H. (1974). A new look at the statistical model identification. IEEE Transactions on Automatic Control, 19, 716--723.
Beirlant, J., Goegebeur, Y., Teugels, J. and Segers, J. (2004). Statistics of Extremes: Theory and Applications. Chichester, West Sussex, England, UK: Wiley, ISBN 9780471976479, 522pp.
Coles, S. G. (2001). An introduction to statistical modeling of extreme values, London: Springer-Verlag.
Coles, S. G. and Dixon, M. J. (1999). Likelihood-based inference for extreme value models. Extremes, 2 (1), 5--23.
El Adlouni, S., Ouarda, T. B. M. J., Zhang, X., Roy, R., and Bobee, B. (2007). Generalized maximum likelihood estimators for the nonstationary generalized extreme value model. Water Resources Research, 43, W03410, doi: 10.1029/2005WR004545, 13 pp.
Gilleland, E. and Katz, R. W. (2011). New software to analyze how extremes change over time. Eos, 11 January, 92, (2), 13--14.
de Haan, L. and Ferreira, A. (2006). Extreme Value Theory: An Introduction. New York, NY, USA: Springer, 288pp.
Jenkinson, A. F. (1955). The frequency distribution of the annual maximum (or minimum) of meteorological elements. Quart. J. R. Met. Soc., 81, 158--171.
Martins, E. S. and Stedinger, J. R. (2000). Generalized maximum likelihood extreme value quantile estimators for hydrologic data. Water Resources Research, 36 (3), 737--744.
Martins, E. S. and Stedinger, J. R. (2001). Generalized maximum likelihood Pareto-Poisson estimators for partial duration series. Water Resources Research, 37 (10), 2551--2557.
Pickands, J. (1975). Statistical inference using extreme order statistics. Annals of Statistics, 3, 119--131.
Reiss, R.-D. and Thomas, M. (2007). Statistical Analysis of Extreme Values: with applications to insurance, finance, hydrology and other fields. Birkhauser, 530pp., 3rd edition.
Schwarz, G. E. (1978). Estimating the dimension of a model. Annals of Statistics, 6, 461--464.
Smith, R. L. and Shively, T. S. (1995). A point process approach to modeling trends in tropospheric ozone. Atmospheric Environment, 29, 3489--3499.
von Mises, R. (1936). La distribution de la plus grande de n valeurs, Rev. Math. Union Interbalcanique 1, 141--160.
See Also:
ci.fevd
for obtaining parameter and return level confidence intervals.
distill.fevd
for stripping out a vector of parameter estimates, and perhaps other pertinent information, from an fevd object.
For functions to find the density, probability df, and quantiles of, and to simulate data from, an EV df, see:
devd
, pevd
, qevd
, revd
For functions to find the probability df of, and simulate random data from, a model fitted by fevd
, see:
pextRemes
, rextRemes
For functions to determine if the extreme data are independent or not, see:
extremalindex
, atdf
For functions to help choose a threshold, see: threshrange.plot
, mrlplot
To decluster stationary dependent extremes, see: decluster
For more on formulas in R, see: formula
To grab the parameters of a fitted fevd
model, see: findpars
To calculate the parameter covariance, see:
optimHess
, parcov.fevd
To see more about the extRemes method functions described here, see: ci
and distill
To calculate effective return levels and CI's for MLE and Bayesian estimation of non-stationary models, see ci.rl.ns.fevd.bayesian
, ci.rl.ns.fevd.mle
and return.level
To obtain the original data set from a fitted fevd
object, use: datagrabber
To calculate the profile likelihood, see: profliker
To test the statistical significance of nested models with additional parameters, see: lr.test
To find effective return levels for non-stationary models, see: erlevd
To determine if an fevd
object is stationary or not, use: is.fixedfevd
and check.constant
For more about the plots created for fevd
fitted objects, see:
ppoints
, density
, hist
, qqplot
For general numerical optimization in R, see: optim