ddalpha (version 1.3.16)

depth.potential: Calculate Potential of the Data

Description

Calculate the potential of the points w.r.t. a multivariate data set. The potential is the kernel-estimated density multiplied by the prior probability of a class. Unlike data depths, a density estimate measures how much mass is located around a given point.

Usage

depth.potential(x, data, pretransform = "1Mom", 
                kernel = "GKernel", kernel.bandwidth = NULL, mah.parMcd = 0.75)

Value

Numerical vector of potentials, one for each row in x; or one potential value if x is a numerical vector.

Arguments

x

Matrix of objects (a numerical vector is treated as a single object) whose potential is to be calculated; each row contains a \(d\)-variate point. Must have the same dimension (number of columns) as data.

data

Matrix of data where each row contains a \(d\)-variate point, w.r.t. which the depth is to be calculated.

pretransform

The method of data scaling.

NULL to use the original data,

1Mom or NMom for scaling using data moments,

1MCD or NMCD for scaling using robust data moments (Minimum Covariance Determinant, MCD).

kernel

"EDKernel" for the kernel of type 1/(1+kernel.bandwidth*EuclidianDistance2(x, y)),

"GKernel" [default and recommended] for the simple Gaussian kernel,

"EKernel" exponential kernel: exp(-kernel.bandwidth*EuclidianDistance(x, y)),

"VarGKernel" variable Gaussian kernel, where kernel.bandwidth is proportional to the depth.zonoid of a point.

kernel.bandwidth

The single bandwidth parameter of the kernel. If NULL, Scott's rule of thumb is used.

mah.parMcd

The value of the argument alpha for the function covMcd; used when pretransform = "1MCD" or "NMCD".
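
The kernel forms listed for the kernel argument can be written out directly. The following stand-alone sketch (plain R, not part of ddalpha) spells them out: h plays the role of kernel.bandwidth, and the Gaussian line is only one common parametrisation, since the exact internal form is not stated here.

# squared and plain Euclidean distance between two d-variate points
dist2 <- function(x, y) sum((x - y)^2)
dist1 <- function(x, y) sqrt(dist2(x, y))
# "EDKernel": 1 / (1 + h * squared Euclidean distance)
ed.kernel <- function(x, y, h) 1 / (1 + h * dist2(x, y))
# "EKernel": exponential kernel exp(-h * Euclidean distance)
e.kernel  <- function(x, y, h) exp(-h * dist1(x, y))
# "GKernel" (assumed form): Gaussian in the squared distance; how
# kernel.bandwidth enters internally may differ
g.kernel  <- function(x, y, h) exp(-h * dist2(x, y))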

Details

The potential is the kernel-estimated density multiplied by the prior probability of a class. The kernel bandwidth matrix is decomposed into two parts: one describes the shape of the data, the other the width of the kernel. The first part is used to transform the data by means of its moments, while the second serves as the bandwidth parameter of the kernel and is tuned to achieve the best separation. For details see Pokotylo and Mosler (2015).
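
As an informal illustration of the above (not the package's internal code), the potential with a Gaussian kernel and equal class priors behaves like an ordinary kernel density estimate of the class. The sketch below computes such an estimate by hand for a single query point; the normalisation constants and the exact way kernel.bandwidth enters inside ddalpha may differ, so only the qualitative behaviour is comparable with depth.potential.

library(MASS)                 # mvrnorm, to sample a toy data set
set.seed(1)
data <- mvrnorm(200, rep(0, 2), diag(2))
x0 <- c(0.5, 0.5)             # query point
h  <- 0.5                     # plays the role of kernel.bandwidth (illustrative value)
# hand-made, unnormalised Gaussian-kernel density estimate at x0
mean(exp(-h * rowSums(sweep(data, 2, x0)^2)))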

References

Aizerman, M.A., Braverman, E.M., and Rozonoer, L.I. (1970). The Method of Potential Functions in the Theory of Machine Learning. Nauka (Moscow).

Pokotylo, O. and Mosler, K. (2015). Classification with the pot-pot plot. Mimeo.

See Also

depth.halfspace for calculation of the Tukey depth.

depth.Mahalanobis for calculation of Mahalanobis depth.

depth.projection for calculation of projection depth.

depth.simplicial for calculation of simplicial depth.

depth.simplicialVolume for calculation of simplicial volume depth.

depth.spatial for calculation of spatial depth.

depth.zonoid for calculation of zonoid depth.

Examples

library(ddalpha)   # provides depth.potential
library(MASS)      # provides mvrnorm

# 3-dimensional normal distribution
data <- mvrnorm(200, rep(0, 3), 
                matrix(c(1, 0, 0,
                         0, 2, 0, 
                         0, 0, 1),
                       nrow = 3))
x <- mvrnorm(10, rep(1, 3), 
             matrix(c(1, 0, 0,
                      0, 1, 0, 
                      0, 0, 1),
                    nrow = 3))

# potential with rule of thumb bandwidth
pot <- depth.potential(x, data)
cat("Potentials: ", pot, "\n")

# potential with bandwidth = 0.1
pot <- depth.potential(x, data, kernel.bandwidth = 0.1)
cat("Potentials: ", pot, "\n")

# potential with robust MCD scaling
pot <- depth.potential(x, data, kernel.bandwidth = 0.1, 
                      pretransform = "NMCD", mah.parMcd = 0.6)
cat("Potentials: ", pot, "\n")
