If x contains any missing (NA), undefined (NaN), or infinite (Inf, -Inf)
values, they will be removed prior to performing the estimation.
Let \(\underline{x} = (x_1, x_2, \ldots, x_n)\) be a vector of
\(n\) observations from an extreme value distribution with
parameters location=\(\eta\) and scale=\(\theta\).
Estimation
Maximum Likelihood Estimation (method="mle")
The maximum likelihood estimators (mle's) of \(\eta\) and \(\theta\) are
the solutions of the simultaneous equations (Forbes et al., 2011):
$$\hat{\eta}_{mle} = -\hat{\theta}_{mle} \, \log\left[\frac{1}{n} \sum_{i=1}^{n} \exp\left(\frac{-x_i}{\hat{\theta}_{mle}}\right)\right]$$
$$\hat{\theta}_{mle} = \bar{x} - \frac{\sum_{i=1}^{n} x_i \exp(\frac{-x_i}{\hat{\theta}_{mle}})}{\sum_{i=1}^{n} \exp(\frac{-x_i}{\hat{\theta}_{mle}})}$$
where
$$\bar{x} = \frac{1}{n} \sum_{i=1}^n x_i$$.
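These equations have no closed-form solution, but the second can be iterated
as a fixed point for \(\hat{\theta}_{mle}\), after which \(\hat{\eta}_{mle}\)
follows directly from the first. A minimal R sketch of this scheme is given
below (the function name eevd_mle and the simple fixed-point iteration are
illustrative assumptions, not necessarily the solver any particular
implementation uses):

    eevd_mle <- function(x, tol = 1e-8, maxit = 100) {
      x <- x[is.finite(x)]             # drop NA, NaN, Inf, -Inf values
      theta <- sqrt(6) * sd(x) / pi    # method-of-moments starting value
      for (i in seq_len(maxit)) {
        w <- exp(-x / theta)           # weights exp(-x_i / theta)
        theta.new <- mean(x) - sum(x * w) / sum(w)
        done <- abs(theta.new - theta) < tol
        theta <- theta.new
        if (done) break
      }
      eta <- -theta * log(mean(exp(-x / theta)))
      c(location = eta, scale = theta)
    }

For example, eevd_mle(10 - 2 * log(-log(runif(100)))) recovers estimates
close to location=10 and scale=2 for a simulated largest-extreme-value sample.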
Method of Moments Estimation (method="mme")
The method of moments estimators (mme's) of \(\eta\) and \(\theta\) are
given by (Johnson et al., 1995, p.27):
$$\hat{\eta}_{mme} = \bar{x} - \epsilon \hat{\theta}_{mme}$$
$$\hat{\theta}_{mme} = \frac{\sqrt{6}}{\pi} s_m$$
where \(\epsilon\) denotes Euler's constant and
\(s_m\) denotes the square root of the method of moments estimator of variance:
$$s_m^2 = \frac{1}{n} \sum_{i=1}^n (x_i - \bar{x})^2$$
Method of Moments Estimators Based on the Unbiased Estimator of Variance (method="mmue")
These estimators are the same as the method of moments estimators except that
the method of moments estimator of variance is replaced with the unbiased estimator
of variance:
$$s^2 = \frac{1}{n-1} \sum_{i=1}^n (x_i - \bar{x})^2$$
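A minimal R sketch covering both versions follows (eevd_mme is a hypothetical
name; the unbiased argument switches between \(s_m^2\) and \(s^2\)):

    eevd_mme <- function(x, unbiased = FALSE) {
      x <- x[is.finite(x)]
      n <- length(x)
      s2 <- if (unbiased) var(x) else (n - 1) / n * var(x)  # s^2 vs. s_m^2
      theta <- sqrt(6 * s2) / pi
      eta <- mean(x) - 0.5772157 * theta  # 0.5772157 = Euler's constant
      c(location = eta, scale = theta)
    }

Note that R's var() already divides by \(n-1\), so the mme version rescales
it by \((n-1)/n\).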
Probability-Weighted Moments Estimation (method="pwme")
Greenwood et al. (1979) show that the relationship between the distribution
parameters \(\eta\) and \(\theta\) and the probability-weighted moments
is given by:
$$\eta = M(1, 0, 0) - \epsilon \theta$$
$$\theta = \frac{M(1, 0, 0) - 2M(1, 0, 1)}{\log(2)}$$
where \(M(i, j, k)\) denotes the \(ijk\)'th probability-weighted moment and
\(\epsilon\) denotes Euler's constant.
The probability-weighted moment estimators (pwme's) of \(\eta\) and
\(\theta\) are computed by simply replacing the \(M(i,j,k)\)'s in the
above two equations with estimates of the \(M(i,j,k)\)'s (and for the
estimate of \(\eta\), replacing \(\theta\) with its estimated value).
See the help file for pwMoment for more information on how to
estimate the \(M(i,j,k)\)'s. Also, see Landwehr et al. (1979) for an example
of this method of estimation using the unbiased (U-statistic type)
probability-weighted moment estimators. Hosking et al. (1985) note that this
method of estimation using the U-statistic type probability-weighted moments
is equivalent to Downton's (1966) linear estimates with linear coefficients.
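As a concrete illustration, the following R sketch computes the pwme's from
the unbiased (U-statistic type) estimates of \(M(1,0,0)\) and \(M(1,0,1)\)
(eevd_pwme is a hypothetical name; see pwMoment for the moment estimators
themselves, including the plotting-position variants):

    eevd_pwme <- function(x) {
      x <- sort(x[is.finite(x)])   # order statistics x_(1) <= ... <= x_(n)
      n <- length(x)
      b0 <- mean(x)                # unbiased estimate of M(1,0,0)
      b1 <- mean(x * (n - seq_len(n)) / (n - 1))  # unbiased M(1,0,1)
      theta <- (b0 - 2 * b1) / log(2)
      eta <- b0 - 0.5772157 * theta  # Euler's constant
      c(location = eta, scale = theta)
    }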
Confidence Intervals
When ci=TRUE, an approximate \((1-\alpha)100\%\) confidence interval
for \(\eta\) can be constructed assuming the estimator of
\(\eta\) is approximately normally distributed. A two-sided confidence
interval is constructed as:
$$[\hat{\eta} - t(n-1, 1-\alpha/2) \hat{\sigma}_{\hat{\eta}}, \, \hat{\eta} + t(n-1, 1-\alpha/2) \hat{\sigma}_{\hat{\eta}}]$$
where \(t(\nu, p)\) is the \(p\)'th quantile of
Student's t-distribution with
\(\nu\) degrees of freedom, and the quantity
$$\hat{\sigma}_{\hat{\eta}}$$
denotes the estimated asymptotic standard deviation of the estimator of \(\eta\).
Similarly, a two-sided confidence interval for \(\theta\) is constructed as:
$$[\hat{\theta} - t(n-1, 1-\alpha/2) \hat{\sigma}_{\hat{\theta}}, \, \hat{\theta} + t(n-1, 1-\alpha/2) \hat{\sigma}_{\hat{\theta}}]$$
One-sided confidence intervals for \(\eta\) and \(\theta\) are computed in
a similar fashion.
Maximum Likelihood (method="mle")
Downton (1966) shows that the estimated asymptotic variances of the mle's of
\(\eta\) and \(\theta\) are given by:
$$\hat{\sigma}_{\hat{\eta}_{mle}}^2 = \frac{\hat{\theta}_{mle}^2}{n} \left[1 + \frac{6(1 - \epsilon)^2}{\pi^2}\right] = \frac{1.10867 \, \hat{\theta}_{mle}^2}{n}$$
$$\hat{\sigma}_{\hat{\theta}_{mle}}^2 = \frac{6}{\pi^2} \frac{\hat{\theta}_{mle}^2}{n} = \frac{0.60793 \, \hat{\theta}_{mle}^2}{n}$$
where \(\epsilon\) denotes Euler's constant.
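Putting the pieces together, a two-sided confidence interval based on the
mle's could be sketched in R as follows (eevd_mle_ci is hypothetical and
reuses the eevd_mle sketch above; the constants 1.10867 and 0.60793 are those
from the two equations just given):

    eevd_mle_ci <- function(x, conf.level = 0.95) {
      x <- x[is.finite(x)]
      n <- length(x)
      est <- eevd_mle(x)               # c(location, scale)
      se <- sqrt(c(1.10867, 0.60793) * est["scale"]^2 / n)
      tq <- qt(1 - (1 - conf.level) / 2, df = n - 1)
      cbind(lower = est - tq * se, upper = est + tq * se)
    }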
Method of Moments (method="mme" or method="mmue")
Tiago de Oliveira (1963) and Johnson et al. (1995, p.27) show that the
estimated asymptotic variances of the mme's of \(\eta\) and \(\theta\)
are given by:
$$\hat{\sigma}_{\hat{\eta}_{mme}}^2 = \frac{\hat{\theta}_{mme}^2}{n} \left[\frac{\pi^2}{6} + \frac{\epsilon^2}{4}(\beta_2 - 1) - \frac{\pi \epsilon \sqrt{\beta_1}}{\sqrt{6}}\right] = \frac{1.1678 \, \hat{\theta}_{mme}^2}{n}$$
$$\hat{\sigma}_{\hat{\theta}_{mme}}^2 = \frac{\hat{\theta}_{mme}^2}{n} \frac{(\beta_2 - 1)}{4} = \frac{1.1 \, \hat{\theta}_{mme}^2}{n}$$
where the quantities
$$\sqrt{\beta_1}, \; \beta_2$$
denote the skewness and kurtosis of the distribution, and \(\epsilon\)
denotes Euler's constant.
The estimated asymptotic variances of the mmue's of \(\eta\) and \(\theta\)
are the same, except replace the mme of \(\theta\) in the above equations with
the mmue of \(\theta\).
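In code, the corresponding standard errors could be sketched as
(se_eevd_mme is hypothetical; the constants come from the two equations above):

    se_eevd_mme <- function(theta.hat, n) {
      c(location = sqrt(1.1678 * theta.hat^2 / n),
        scale    = sqrt(1.1    * theta.hat^2 / n))
    }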
Probability-Weighted Moments (method="pwme")
As stated above, Hosking et al. (1985) note that this method of estimation using
the U-statistic type probability-weighted moments is equivalent to
Downton's (1966) linear estimates with linear coefficients. Downton (1966)
provides exact values of the variances of the estimates of location and scale
parameters for the smallest extreme value distribution. For the largest extreme
value distribution, the formula for the estimate of scale is the same, but the
formula for the estimate of location must be modified. Thus, Downton's (1966)
equation (3.4) is modified to:
$$\hat{\eta}_{pwme} = \frac{(n-1)\log(2) + (n+1)\epsilon}{n(n-1)\log(2)} v - \frac{2 \epsilon}{n(n-1)\log(2)} w$$
where \(\epsilon\) denotes Euler's constant, and
\(v\) and \(w\) are defined in Downton (1966, p.8). Using
Downton's (1966) equations (3.9)-(3.12), the exact variance of the pwme of
\(\eta\) can be derived. Note that when method="pwme" and
pwme.method="plotting.position", these variances are only asymptotically
correct.