Usage

gev(llocation = "identity", lscale = "loge", lshape = "logoff",
    elocation = list(), escale = list(),
    eshape = if (lshape == "logoff") list(offset = 0.5) else
             if (lshape == "elogit") list(min = -0.5, max = 0.5) else list(),
    percentiles = c(95, 99), iscale = NULL, ishape = NULL,
    method.init = 1, gshape = c(-0.45, 0.45), tolshape0 = 0.001,
    giveWarning = TRUE, zero = 3)
egev(llocation = "identity", lscale = "loge", lshape = "logoff",
     elocation = list(), escale = list(),
     eshape = if (lshape == "logoff") list(offset = 0.5) else
              if (lshape == "elogit") list(min = -0.5, max = 0.5) else list(),
     percentiles = c(95, 99), iscale = NULL, ishape = NULL,
     method.init = 1, gshape = c(-0.45, 0.45), tolshape0 = 0.001,
     giveWarning = TRUE, zero = 3)
Arguments

llocation, lscale, lshape: Parameter link functions for the location parameter $\mu$, the scale parameter $\sigma$ and the shape parameter $\xi$. See Links for more choices.

elocation, escale, eshape: List. Extra argument for each of the links. See Links for general information.

percentiles: Numeric vector of percentiles used for the fitted values; values should be between 0 and 100. However, if percentiles=NULL, then the mean $\mu + \sigma (\Gamma(1-\xi)-1) / \xi$ is returned, and this is only defined if $\xi < 1$.

iscale, ishape: Numeric. Initial values for $\sigma$ and $\xi$. A NULL means a value is computed internally. The argument ishape is more important than the other two because the initial values of the other parameters are obtained from the initial $\xi$. If a failure to converge occurs, try assigning ishape some value; see also gshape.

method.init: Initialization method, either 1 or 2. Method 1 chooses a good initial $\xi$ by a grid search over gshape; Method 2 is similar to the method of moments. If both methods fail, try using ishape.

gshape: Numeric, of length 2. Range of $\xi$ values over which the grid search for an initial $\xi$ is performed. Used only if method.init equals 1.

tolshape0: Positive numeric. Passed into dgev when computing the log-likelihood; estimates of $\xi$ closer to zero than this value are treated as zero (the Gumbel limit).

giveWarning: Logical. If TRUE, a warning is issued whenever $\xi$ is treated as zero (see tolshape0).

zero: An integer-valued vector specifying which linear/additive predictors are modelled as intercepts only. The values must be from the set {1, 2, 3}, corresponding respectively to $\mu$, $\sigma$, $\xi$. If zero=NULL then all linear/additive predictors are modelled as functions of the explanatory variables. The default, zero = 3, models the shape parameter as intercept-only.

Value

An object of class "vglmff" (see vglmff-class). The object is used by modelling functions such as vglm and vgam.
Warning

Currently, if an estimate of $\xi$ is too close to zero then an error will occur for gev() with multivariate responses. In general, egev() is more reliable than gev().

Fitting the GEV by maximum likelihood estimation can be numerically fraught. If $1 + \xi (y-\mu)/\sigma \leq 0$ then some crude evasive action is taken, but the estimation process can still fail. This is particularly the case if vgam with s is used; then smoothing is best done with vglm with regression splines (bs or ns), because vglm implements half-stepsizing whereas vgam does not (half-stepsizing helps handle the problem of straying outside the parameter space).
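As a sketch of the support condition just described (not from the original page; it assumes the oxtemp data and that Coef() labels the parameters "location", "scale" and "shape"), one can verify that every observation satisfies $1 + \xi (y-\mu)/\sigma > 0$ at the fitted parameter values:

library(VGAM)
fit.chk = vglm(maxtemp ~ 1, egev, oxtemp)
cf = Coef(fit.chk)
# TRUE if all observations lie inside the fitted support
with(oxtemp, all(1 + cf["shape"] * (maxtemp - cf["location"]) / cf["scale"] > 0))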
Details

For the GEV distribution, the $k$th moment about the mean exists if $\xi < 1/k$. Provided they exist, the mean and variance are given by $\mu + \sigma \{\Gamma(1-\xi) - 1\} / \xi$ and $\sigma^2 \{\Gamma(1-2\xi) - \Gamma^2(1-\xi)\} / \xi^2$ respectively, where $\Gamma$ is the gamma function.
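These moment formulas translate directly into R; a small sketch (the helper names are ours, not part of VGAM):

# mean exists for xi < 1, variance for xi < 1/2 (both with xi != 0)
gev.mean = function(mu, sigma, xi) mu + sigma * (gamma(1 - xi) - 1) / xi
gev.var  = function(sigma, xi) sigma^2 * (gamma(1 - 2*xi) - gamma(1 - xi)^2) / xi^2
gev.mean(0, 1, 0.2)  # mean of a GEV(0, 1, 0.2)
gev.var(1, 0.2)      # its variance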
Smith (1985) established that when $\xi > -0.5$, the maximum likelihood estimators are completely regular. To have some control over the estimated $\xi$, try using lshape="logoff" with eshape=list(offset=0.5), say, or lshape="elogit" with eshape=list(min=-0.5, max=0.5), say.
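For example (a sketch following the argument style in the usage above, and assuming the oxtemp data), the logoff link with offset 0.5 forces $\xi > -0.5$, while elogit confines $\xi$ to (min, max):

library(VGAM)
fit.off = vglm(maxtemp ~ 1, egev(lshape = "logoff",
               eshape = list(offset = 0.5)), oxtemp)
fit.eli = vglm(maxtemp ~ 1, egev(lshape = "elogit",
               eshape = list(min = -0.5, max = 0.5)), oxtemp)
Coef(fit.off)["shape"]  # constrained to exceed -0.5
Coef(fit.eli)["shape"]  # constrained to lie in (-0.5, 0.5)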
References

Tawn, J. A. (1988) An extreme-value theory model for dependent observations. Journal of Hydrology, 101, 227--250.

Prescott, P. and Walden, A. T. (1980) Maximum likelihood estimation of the parameters of the generalized extreme-value distribution. Biometrika, 67, 723--724.

Smith, R. L. (1985) Maximum likelihood estimation in a class of nonregular cases. Biometrika, 72, 67--90.
See Also

rgev, gumbel, egumbel, guplot, rlplot.egev, gpd, elogit, oxtemp, venice.

Examples

# Multivariate example
fit1 = vgam(cbind(r1, r2) ~ s(year, df = 3), gev(zero = 2:3), venice, trace = TRUE)
coef(fit1, matrix = TRUE)
head(fitted(fit1))
par(mfrow = c(1, 2), las = 1)
plot(fit1, se = TRUE, lcol = "blue", scol = "forestgreen",
     main = "Fitted mu(year) function (centered)", cex.main = 0.8)
with(venice, matplot(year, cbind(r1, r2), ylab = "Sea level (cm)", col = 1:2,
     main = "Highest 2 annual sea levels", cex.main = 0.8))
with(venice, lines(year, fitted(fit1)[, 1], lty = "dashed", col = "blue"))
legend("topleft", lty = "dashed", col = "blue", "Fitted 95 percentile")
# Univariate example
(fit = vglm(maxtemp ~ 1, egev, oxtemp, trace = TRUE))
head(fitted(fit))             # fitted 95 and 99 percentiles
coef(fit, matrix = TRUE)
Coef(fit)                     # estimates on the original parameter scale
vcov(fit)
vcov(fit, untransform = TRUE)
sqrt(diag(vcov(fit)))         # approximate standard errors
rlplot(fit)                   # return level plot
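A simulation sanity check can be added along these lines (a sketch, not part of the original examples): draw data with known parameters via rgev and confirm that egev recovers them approximately.

set.seed(123)
ysim = rgev(n = 500, location = 10, scale = 2, shape = 0.1)
fit.sim = vglm(ysim ~ 1, egev, trace = TRUE)
Coef(fit.sim)  # should be close to (10, 2, 0.1)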