
kader (version 0.0.8)

minimize_MSEHat: Minimization of Estimated MSE

Description

Minimization of the estimated MSE as function of \(\sigma\) in four steps.

Usage

minimize_MSEHat(VarHat.scaled, BiasHat.squared, sigma, Ai, Bj, h, K, fnx,
  ticker = FALSE, plot = FALSE, ...)

Arguments

VarHat.scaled

Vector of estimates of the scaled variance (for values of \(\sigma\) in sigma).

BiasHat.squared

Vector of estimates of the squared bias (for values of \(\sigma\) in sigma).

sigma

Numeric vector \((\sigma_1, \ldots, \sigma_s)\) with \(s \ge 1\).

Ai

Numeric vector expecting \((x_0 - X_1, \ldots, x_0 - X_n) / h\), where (usually) \(x_0\) is the point at which the density is to be estimated for the data \(X_1, \ldots, X_n\) with \(h = n^{-1/5}\).

Bj

Numeric vector expecting \((-J(1/n), \ldots, -J(n/n))\) in case of the rank transformation method, but \((\hat{\theta} - X_1, \ldots, \hat{\theta} - X_n)\) in case of the non-robust Srihera-Stute method. (Note that this is the same as argument Bj of adaptive_fnhat!)

h

Numeric scalar, where (usually) \(h = n^{-1/5}\).

K

Kernel function with vectorized input and output.

fnx

\(f_n(x_0) =\) mean(K(Ai))/h, where (usually) \(h = n^{-1/5}\).

ticker

Logical; determines if a 'ticker' documents the iteration progress through sigma. Defaults to FALSE.

plot

Should graphical output be produced? Defaults to FALSE.

...

Currently ignored.

Value

A list with components sigma.adap, msehat.min and discr.min.smaller whose meanings are as follows:

sigma.adap Found minimizer of the MSE estimator.
msehat.min Found minimum of the MSE estimator.
discr.min.smaller TRUE if and only if the minimum found on the discrete \(\sigma\)-grid is smaller than the numerically determined one, i.e., if the grid minimizer was kept (see Step 4 in Details).
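For illustration (assuming the return value is stored in a hypothetical object named res), the components are accessed in the usual way:

res$sigma.adap          # adaptive value of sigma
res$msehat.min          # estimated MSE at sigma.adap
res$discr.min.smaller   # TRUE if the discrete minimum was kept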

Details

Step 1: Determine the first (= smallest) maximizer of VarHat.scaled (!) on the grid in sigma.

Step 2: Determine the first (= smallest) minimizer of the estimated MSE on the part of the \(\sigma\)-grid that lies left of the first maximizer of VarHat.scaled.

Step 3: Determine a range around the minimizer found so far on the discrete grid within which a finer search for the "true" minimum is continued using numerical minimization.

Step 4: Check whether the numerically determined minimum is indeed better, i.e., smaller than the discrete one; if not, keep the discrete one.
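In outline, this strategy can be sketched as follows (a minimal illustration only, not the package's actual implementation; msehat() is a hypothetical function that returns the MSE estimate for a given value of \(\sigma\)):

MSEHat <- msehat(sigma)                      # MSE estimates on the sigma-grid
i.max  <- which.max(VarHat.scaled)           # Step 1: first maximizer of VarHat.scaled
i.min  <- which.min(MSEHat[seq_len(i.max)])  # Step 2: first minimizer up to that point
lo <- sigma[max(i.min - 1, 1)]               # Step 3: bracket around the discrete minimizer
hi <- sigma[min(i.min + 1, length(sigma))]
opt <- stats::optimize(msehat, lower = lo, upper = hi)
if (opt$objective < MSEHat[i.min]) {         # Step 4: keep the smaller of the two minima
  list(sigma.adap = opt$minimum, msehat.min = opt$objective, discr.min.smaller = FALSE)
} else {
  list(sigma.adap = sigma[i.min], msehat.min = MSEHat[i.min], discr.min.smaller = TRUE)
}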

Examples

require(stats)

set.seed(2017);     n <- 100;     Xdata <- sort(rnorm(n))
x0 <- 1;      Sigma <- seq(0.01, 10, length = 11)

h <- n^(-1/5)
Ai <- (x0 - Xdata)/h
fnx0 <- mean(dnorm(Ai)) / h   # Parzen-Rosenblatt estimator at x0.

 # For non-robust method:
Bj <- mean(Xdata) - Xdata
## For rank transformation-based method (requires sorted data):
# Bj <- -J_admissible(1:n / n)   # rank trafo

BV <- kader:::bias_AND_scaledvar(sigma = Sigma, Ai = Ai, Bj = Bj,
  h = h, K = dnorm, fnx = fnx0, ticker = TRUE)

kader:::minimize_MSEHat(VarHat.scaled = BV$VarHat.scaled,
  BiasHat.squared = (BV$BiasHat)^2, sigma = Sigma, Ai = Ai, Bj = Bj,
  h = h, K = dnorm, fnx = fnx0, ticker = TRUE, plot = FALSE)

