The sensitivity curve (\(SC\)) is a means to assess how sensitive a particular statistic \(T_{n+1}\) for a sample of size \(n\) is to an additional sample value \(x\) to be included. For the implementation by this function, the statistic \(T\) is a specific quantile \(x(F)\) of interest set by a nonexceedance probability \(F\). The \(SC\) is $$SC_{n+1}(x \mid F) = (n+1)(T_{n+1} - T_n)\mbox{,}$$ where \(T_n\) represents the statistic for the sample of size \(n\). The notation here follows that of Hampel (1974, p. 384) concerning \(n\) and \(n+1\).
sentiv.curve(f, x, method=c("bootstrap", "polynomial", "none"),
data=NULL, para=NULL, ...)
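For orientation, the following sketch first calls the function as in the Usage above and then evaluates the definition \(SC_{n+1}(x \mid F) = (n+1)(T_{n+1} - T_n)\) by hand for a single candidate value. The sample, the GEV parent, and the grid of candidate values are illustrative assumptions only, and the by-hand refit mirrors the "none"-style computation described under Details.

library(lmomco)
X    <- c(116, 310, 150, 205, 147, 412, 198, 173, 177, 263)  # hypothetical sample
para <- lmom2par(lmoms(X), type="gev")  # GEV parent fitted by the method of L-moments
f    <- 0.99                            # nonexceedance probability of interest
xs   <- seq(100, 600, by=25)            # candidate "one more" values

SV <- sentiv.curve(f, xs, data=X, para=para, method="bootstrap")
str(SV)                                 # inspect the returned list (see Value)

n    <- length(X)
Tn   <- qlmomco(f, para)                                    # T_n(F) from the parent
Tnp1 <- qlmomco(f, lmom2par(lmoms(c(X, 350)), type="gev"))  # refit with x = 350 added
SC   <- (n + 1)*(Tnp1 - Tn)                                 # SC_{n+1}(x | F)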
An R list is returned.
The value for \(SC(x) = (n+1)(T_{n+1} - T_n)\).
The percent-change sensitivity curve computed by \(SC^{(\%)}(x) = 100\times (T_{n+1} - T_n)/T_n\).
The values for \(T_{n+1} = T_n + SC(x)/(n+1)\).
The value (singular) for \(T_n\), which was estimated according to method.
The curve potentially passes through zero depending on the values for \(x\). The color is set to distinguish between negatives and positives so that the user could use the absolute value of the curve on logarithmic scales and use the color to distinguish the original negatives.
The values for the internal variable EX.
An attribute identifying the computational source of the sensitivity curve: “sentiv.curve”.
The nonexceedance probability \(F\) of the quantile for which the sensitivity of its estimation is needed. If a vector is given, only the first value is used and a warning is issued.
The \(x\) values, each representing a potential additional value to be added to the original data.
A vector of mandatory sample data values. These will either be (1) converted to the expectations of the order statistics by exact analytical expressions or by simulation (backup plan), (2) converted to Bernstein (or similar) polynomial estimates, or (3) treated as if the provided values themselves were the order statistic expectations.
A character variable determining how the statistics \(T\) are computed (see Details).
A distribution parameter list from a function such as vec2par or lmom2par.
Additional arguments to pass either to the lmoms.bootbarvar or the dat2bernqua function.
W.H. Asquith
The main features of this function involve how the statistics are computed and are controlled by the method argument. Three different approaches are provided.
Bootstrap: Arguments data and para are mandatory. If bootstrap is requested, then the distribution type set by the type attribute in para is used along with the method of L-moments for \(T(F)\) estimation. The \(T_n(F)\) is directly computed from the distribution in para. Then, for each x, the \(T_{n+1}(F)\) is computed by lmoms, lmom2par, and the distribution type. The sample so fed to lmoms is denoted as c(EX, x).
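The per-value refit just described can be sketched as follows, continuing the objects (X, para, f, xs) from the sketch after the Usage section. The helper name Tnp1.at is hypothetical, not part of the package, and EX is a placeholder here for the internal variable discussed later.

Tnp1.at <- function(x, EX, f, para) {    # hypothetical helper, not in lmomco
  lmr <- lmoms(c(EX, x))                 # sample L-moments of the augmented sample
  fit <- lmom2par(lmr, type=para$type)   # refit the same distribution type as para
  qlmomco(f, fit)                        # T_{n+1}(F) for this candidate x
}
EX <- sort(X)   # placeholder; the bootstrap path actually uses order statistic expectations
Tn <- qlmomco(f, para)                   # T_n(F) directly from para
SC.boot <- sapply(xs, function(x) (length(EX) + 1)*(Tnp1.at(x, EX, f, para) - Tn))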
Polynomial: Argument data is mandatory and para is not used. If polynomial is requested, then the Bernstein polynomial (likely) from dat2bernqua is used. The \(T_n(F)\) is computed from the data sample. Then, for each x, the \(T_{n+1}(F)\) also is computed by dat2bernqua, but the sample so fed to dat2bernqua is denoted as c(EX, x).
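With the same placeholder objects as above, the polynomial path differs only in using dat2bernqua() for both quantile estimates; this is a sketch, not the function's internal code.

Tn.poly <- dat2bernqua(f, X)             # T_n(F) from the data sample alone
SC.poly <- sapply(xs, function(x)
             (length(X) + 1)*(dat2bernqua(f, c(EX, x)) - Tn.poly))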
None: Arguments data and para are mandatory. If none is requested, then the distribution type set by the type attribute in para is used along with the method of L-moments. The \(T_n(F)\) is directly computed from the distribution in para. Then, for each x, the \(T_{n+1}(F)\) is computed by lmoms, lmom2par, and the distribution type. The sample so fed to lmoms is denoted as c(EX, x).
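For method="none" the per-value refit is the same as in the bootstrap sketch; the distinction, as discussed next, is only that EX is the sorted sample itself.

EX.none <- sort(X)                       # for method="none", EX is the sorted data
SC.none <- sapply(xs, function(x)
             (length(EX.none) + 1)*(Tnp1.at(x, EX.none, f, para) - qlmomco(f, para)))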
The internal variable EX now requires discussion. If method=none, then the data are sorted and set into the internal variable EX. Conversely, if method=bootstrap or method=polynomial, then EX will contain the expectations of the order statistics from lmoms.bootbarvar.
Lastly, the Weibull plotting positions are used for the probability values of the data as provided by the pp function. Evidently, if method is either bootstrap or polynomial, then a “stylized sensitivity curve” would be created (David, 1981, p. 165) because the expectations of the sample order statistics, and not the sample order statistics themselves (the sorted sample), are used.
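For the bootstrap and polynomial paths the package obtains EX from lmoms.bootbarvar(); the simulation below is only an illustrative stand-in for those order statistic expectations, and the pp() call shows the Weibull plotting positions mentioned above (continuing the objects from the earlier sketches).

set.seed(1)
n      <- length(X)
EX.sim <- rowMeans(replicate(5000, sort(rlmomco(n, para))))  # simulated E[X_(i:n)], i = 1, ..., n
PP     <- pp(X)                                              # Weibull plotting positions i/(n+1)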
David, H.A., 1981, Order statistics: John Wiley, New York.
Hampel, F.R., 1974, The influence curve and its role in robust estimation: Journal of the American Statistical Association, v. 69, no. 346, pp. 383--393.
expect.max.ostat