
KRLS (version 1.0-0)

krls: Kernel-based Regularized Least Squares (KRLS)

Description

Function implements Kernel-based Regularized Least Squares (KRLS), a machine learning method described in Hainmueller and Hazlett (2014) that allows users to solve regression and classification problems without manual specification search or strong functional form assumptions. KRLS finds the best-fitting function by minimizing a Tikhonov regularization problem with a squared loss, using Gaussian kernels as radial basis functions. Because it learns the functional form from the data, KRLS reduces misspecification bias, yet it still allows for interpretability and inference in ways similar to ordinary regression models. In particular, KRLS provides closed-form estimates for the predicted values, variances, and the pointwise partial derivatives that characterize the marginal effects of each independent variable at each data point in the covariate space. The distribution of pointwise marginal effects can be used to examine effect heterogeneity and/or interactions.
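A minimal sketch of this workflow on simulated data (variable names are illustrative; see the Examples section for complete demonstrations):

set.seed(123)
X <- cbind(x1 = rnorm(100), x2 = rbinom(100, size = 1, prob = .5))
y <- X[,1]*X[,2] + rnorm(100)          # interaction between a continuous and a binary predictor
fit <- krls(X = X, y = y)
summary(fit)                           # average marginal effects / first differences
apply(fit$derivatives, 2, quantile)    # distribution of pointwise effects (heterogeneity)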

Usage

krls(X = NULL, y = NULL, whichkernel = "gaussian", lambda = NULL,
  sigma = NULL, derivative = TRUE, binary = TRUE, vcov = TRUE,
  print.level = 1, L = NULL, U = NULL, tol = NULL, eigtrunc = NULL)

Arguments

X

N by D numeric data matrix that contains the values of D predictor variables for \(i=1,\ldots,N\) observations. The matrix may not contain missing values or constants. Note that no intercept is required, since the function operates on demeaned data and subtracting the mean of y is equivalent to including an (unpenalized) intercept in the model.

y

N by 1 numeric matrix or vector that contains the values of the response variable for all observations. This vector may not contain missing values.

whichkernel

String that specifies which kernel should be used. Must be one of "gaussian", "linear", "poly1", "poly2", "poly3", or "poly4" (see details). Default is "gaussian".

lambda

A positive scalar that specifies the \(\lambda\) parameter for the regularizer (see details). It governs the tradeoff between model fit and complexity. By default, this parameter is chosen by minimizing the sum of the squared leave-one-out errors.

sigma

A positive scalar that specifies the bandwidth of the Gaussian kernel (see gausskernel for details). By default, the bandwidth is set equal to D (the number of dimensions), which typically yields a reasonable scaling of the distances between observations in the standardized data used for fitting.

derivative

Logical that specifies whether pointwise partial derivatives should be computed. Currently, derivatives are only implemented for the Gaussian kernel.

binary

Logical that specifies whether first-differences instead of pointwise partial derivatives should be computed for binary predictors. Ignored unless derivative=TRUE.

vcov

Logical that specifies whether the variance-covariance matrices for the choice coefficients c and the fitted values should be computed. Note that derivative=TRUE requires vcov=TRUE.

print.level

Positive integer that determines the level of printing. Set to 0 for no printing and 2 for more printing.

L

Non-negative scalar that determines the lower bound of the search window for the leave-one-out optimization to find \(\lambda\). Default is NULL which means that the lower bound is found by using an algorithm outlined in lambdasearch.

U

Positive scalar that determines the upper bound of the search window for the leave-one-out optimization to find \(\lambda\). Default is NULL which means that the upper bound is found by using an algorithm outlined in lambdasearch.

tol

Positive scalar that determines the tolerance used in the optimization routine used to find \(\lambda\). Default is NULL, which means that convergence is achieved when the difference in the sum of squared leave-one-out errors between the i-th and the (i+1)-th iteration is less than N * 10^-3.

eigtrunc

Positive scalar that determines how much the eigenvalues should be truncated when finding the upper bound of the search window in the algorithm outlined in lambdasearch. If eigtrunc is set to 10^-6, only eigenvalues that are at least 10^-6 times as large as the first are kept. Default is eigtrunc=NULL, which means no truncation is used.
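The defaults can be overridden explicitly. A sketch of such a call (the argument values are illustrative only; X and y are as in the sketch above):

fit <- krls(X = X, y = y,
            whichkernel = "gaussian",
            sigma = 2*ncol(X),         # wider bandwidth than the default D
            lambda = NULL,             # still chosen by leave-one-out optimization
            derivative = TRUE, vcov = TRUE,
            print.level = 0)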

Value

A list object of class krls with the following elements:

K

N by N matrix of pairwise kernel distances between observations.

coeffs

N by 1 vector of choice coefficients c.

Le

scalar with sum of squared leave-one-out errors.

fitted

N by 1 vector of fitted values.

X

original N by D predictor data matrix.

y

original N by 1 matrix of values of the outcome variable.

sigma

scalar with value of bandwidth, \(\sigma^2\), used for the Gaussian kernel.

lambda

scalar with value of regularization parameter, \(\lambda\), used (user specified or based on leave-one-out cross-validation).

R2

scalar with the value of R-squared.

vcov.c

N by N variance covariance matrix for choice coefficients (NULL unless vcov=TRUE is specified).

vcov.fitted

N by N variance covariance matrix for fitted values (NULL unless vcov=TRUE is specified).

derivatives

N by D matrix of pointwise partial derivatives based on the Gaussian kernel (NULL unless derivative=TRUE is specified). If binary=TRUE is specified, first differences are returned for binary predictors.

avgderivatives

1 by D matrix of average derivatives based on the Gaussian kernel (NULL unless derivative=TRUE is specified). If binary=TRUE is specified, average first differences are returned for binary predictors.

var.avgderivatives

1 by D matrix of variances for the average derivatives based on the Gaussian kernel (NULL unless derivative=TRUE is specified). If binary=TRUE is specified, variances for the average first differences are returned for binary predictors.

binaryindicator

1 by D matrix that indicates for each predictor whether it is treated as binary or not (evaluates to FALSE unless binary=TRUE is specified and a predictor is recognized as binary).
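A sketch of inspecting these components (assuming fit is a krls object estimated with the default derivative=TRUE and vcov=TRUE):

fit$R2                          # in-sample R-squared
head(fit$fitted)                # fitted values
fit$avgderivatives              # average pointwise derivatives (or first differences)
sqrt(fit$var.avgderivatives)    # their standard errors
sqrt(diag(fit$vcov.fitted))     # pointwise standard errors of the fitted values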

Details

krls implements the Kernel-based Regularized Least Squares (KRLS) estimator as described in Hainmueller and Hazlett (2014). Please consult this reference for any details.

Kernel-based Regularized Least Squares (KRLS) arises as a Tikhonov minimization problem with a squared loss. Assume we have data of the form \(y_i,\,x_i\), where i indexes observations, \(y_i \in R\) is the outcome and \(x_i \in R^D\) is a D-dimensional vector of predictor values. Then KRLS searches over a space of functions \(H\) and chooses the best-fitting function \(f\) according to the rule:

$$argmin_{f \in H} \sum_i^N (y_i - f(x_i))^2 + \lambda ||f||^2_H$$

where \((y_i - f(x_i))^2\) is a loss function that computes how `wrong' the function is at each observation i, and \(||f||^2_H\) is the regularizer that measures the complexity of the function according to the \(L_2\) norm \(||f||^2 = \int f(x)^2 dx\). \(\lambda\) is the scalar regularization parameter that governs the tradeoff between model fit and complexity. By default, \(\lambda\) is chosen by minimizing the sum of the squared leave-one-out errors, but it can also be specified by the user via the lambda argument to implement other approaches.
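A sketch comparing the default leave-one-out choice of \(\lambda\) with a user-specified value (the fixed value is illustrative; X and y are as in the earlier sketch):

fit.loo   <- krls(X = X, y = y)                # lambda chosen by leave-one-out optimization
fit.fixed <- krls(X = X, y = y, lambda = .5)   # user-specified lambda
c(loo = fit.loo$lambda, fixed = fit.fixed$lambda)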

Under fairly general conditions, the function that minimizes the regularized loss within the hypothesis space established by the choice of a (positive semidefinite) kernel function \(k(x_i,x_j)\) is of the form

$$f(x_j)= \sum_i^N c_i k(x_i,x_j)$$

where the kernel function \(k(x_i,x_j)\) measures the distance between two observations \(x_i\) and \(x_j\), and \(c_i\) is the choice coefficient for each observation \(i\). Let \(K\) be the \(N\) by \(N\) kernel matrix with all pairwise distances \(K_{ij}=k(x_i,x_j)\) and \(c\) the \(N\) by \(1\) vector of choice coefficients for all observations; then, in matrix notation, the fitted values are \(y=Kc\).

Accordingly, the krls function solves the following minimization problem

$$argmin_{c \in R^N} \; (y - Kc)'(y-Kc)+ \lambda c'Kc$$

which is convex in \(c\) and solved by \(c=(K +\lambda I)^{-1} y\), where \(I\) is the identity matrix. Note that this linear solution provides a flexible fitted response surface that typically reduces misspecification bias because it can learn a wide range of nonlinear and/or nonadditive functions of the predictors.
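A minimal sketch of this closed-form solution (it omits the scaling of X and y that krls performs internally; K is a precomputed N by N kernel matrix, y the outcome vector, and lambda a chosen regularization parameter; the function name is illustrative):

krls.coeffs <- function(K, y, lambda) {
  N <- nrow(K)
  solve(K + lambda*diag(N), y)    # c = (K + lambda*I)^{-1} y
}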

If vcov=TRUE is specified, krls also computes the variance-covariance matrices for the choice coefficients \(c\) and the fitted values \(y=Kc\), based on a variance estimator developed in Hainmueller and Hazlett (2014). Note that both matrices are N by N, so this increases memory use and computing time.

By default, krls uses the Gaussian kernel (whichkernel = "gaussian") given by

$$k(x_i,x_j)=\exp\left(\frac{-|| x_i - x_j ||^2}{\sigma^2}\right)$$

where \(||x_i - x_j||\) is the Euclidean distance. The kernel bandwidth \(\sigma^2\) is set to \(D\), the number of dimensions, by default, but the user can also specify other values using the sigma argument to implement other approaches.
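A sketch of this kernel matrix computed with base R (the function name is illustrative; krls itself operates on standardized data and uses sigma2 = D by default):

gauss.kernel <- function(X, sigma2 = ncol(X)) {
  D2 <- as.matrix(dist(X))^2      # squared Euclidean distances ||x_i - x_j||^2
  exp(-D2/sigma2)                 # k(x_i, x_j) = exp(-||x_i - x_j||^2 / sigma^2)
}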

If derivative=TRUE is specified, krls also computes the pointwise partial derivatives of the fitted function with respect to each predictor, using the estimators developed in Hainmueller and Hazlett (2014). These can be used to examine the marginal effects of each predictor and how these marginal effects vary across the covariate space. Average derivatives and their variances are also computed. Note that the derivative=TRUE option results in increased computing time and is only supported for the Gaussian kernel, i.e., when whichkernel = "gaussian". Also, derivative=TRUE requires vcov=TRUE.

If binary=TRUE is also specified, the function will identify binary predictors and return first differences for these predictors instead of partial derivatives. First differences are computed going from the minimum to the maximum value of each binary predictor. Note that first differences are more appropriate to summarize the effects for binary predictors (see Hainmueller and Hazlett (2014) for details).
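A sketch of how this surfaces in the output (assuming fit was estimated with the defaults on data that include a binary predictor, as in the earlier sketch):

fit$binaryindicator             # which predictors were treated as binary
fit$avgderivatives              # average first differences for the binary columns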

A few other kernels are also implemented, but derivatives are currently not supported for these: "linear": \(k(x_i,x_j)=x_i'x_j\), "poly1", "poly2", "poly3", "poly4" are polynomial kernels based on \(k(x_i,x_j)=(x_i'x_j +1)^p\) where \(p\) is the order.
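For example, a second-order polynomial kernel can be requested as follows (a sketch; derivative=FALSE because derivatives are only available for the Gaussian kernel, X and y as in the earlier sketch):

fit.poly <- krls(X = X, y = y, whichkernel = "poly2", derivative = FALSE)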

References

Jeremy Ferwerda, Jens Hainmueller, Chad J. Hazlett (2017). Kernel-Based Regularized Least Squares in R (KRLS) and Stata (krls). Journal of Statistical Software, 79(3), 1-26. doi:10.18637/jss.v079.i03

Hainmueller, J. and Hazlett, C. (2014). Kernel Regularized Least Squares: Reducing Misspecification Bias with a Flexible and Interpretable Machine Learning Approach. Political Analysis, 22(2), 143-168.

Rifkin, R. (2002). Everything Old is New Again: A Fresh Look at Historical Approaches in Machine Learning. PhD thesis, MIT.

Evgeniou, T., Pontil, M., and Poggio, T. (2000). Regularization networks and support vector machines. Advances In Computational Mathematics, 13(1):1-50.

Schoelkopf, B., Herbrich, R. and Smola, A.J. (2001) A generalized representer theorem. In 14th Annual Conference on Computational Learning Theory, pages 416-426.

Kimeldorf, G.S. and Wahba, G. (1971). Some results on Tchebycheffian spline functions. Journal of Mathematical Analysis and Applications, 33:82-95.

See Also

predict.krls for fitted values and predictions. summary.krls for summary of the fit. plot.krls for plots of the fit.

Examples

# Linear example
# set up data
N <- 200
x1 <- rnorm(N)
x2 <- rbinom(N,size=1,prob=.2)
y <- x1 + .5*x2 + rnorm(N,0,.15)
X <- cbind(x1,x2)
# fit model
krlsout <- krls(X=X,y=y)
# summarize marginal effects and contribution of each variable
summary(krlsout)
# plot marginal effects and conditional expectation plots
plot(krlsout)


# non-linear example
# set up data
N <- 200
x1 <- rnorm(N)
x2 <- rbinom(N,size=1,prob=.2)
y <- x1^3 + .5*x2 + rnorm(N,0,.15)
X <- cbind(x1,x2)

# fit model
krlsout <- krls(X=X,y=y)
# summarize marginal effects and contribution of each variable
summary(krlsout)
# plot marginal effects and conditional expectation plots
plot(krlsout)

## 2D example:
# predictor data
X <- matrix(seq(-3,3,.1))
# true function
Ytrue <- sin(X)
# add noise 
Y     <- sin(X) + rnorm(length(X),sd=.3)
# approximate function using KRLS
out <- krls(y=Y,X=X)
# get fitted values and ses
fit <- predict(out,newdata=X,se.fit=TRUE)
# results
par(mfrow=c(2,1))
plot(y=Ytrue,x=X,type="l",col="red",ylim=c(-1.2,1.2),lwd=2,main="f(x)")
points(y=fit$fit,X,col="blue",pch=19)
arrows(y1=fit$fit+1.96*fit$se.fit,
       y0=fit$fit-1.96*fit$se.fit,
       x1=X,x0=X,col="blue",length=0)
legend("bottomright",legend=c("true f(x)=sin(x)","KRLS fitted f(x)"),
       lty=c(1,NA),pch=c(NA,19),lwd=c(2,NA),col=c("red","blue"),cex=.8)

plot(y=cos(X),x=X,type="l",col="red",ylim=c(-1.2,1.2),lwd=2,main="df(x)/dx")
points(y=out$derivatives,X,col="blue",pch=19)

legend("bottomright",legend=c("true df(x)/dx=cos(x)","KRLS fitted df(x)/dx"),
       lty=c(1,NA),pch=c(NA,19),lwd=c(2,NA),col=c("red","blue"),cex=.8)

## 3D example
# plot true function
par(mfrow=c(1,2))
f<-function(x1,x2){ sin(x1)*cos(x2)}
x1 <- x2 <- seq(0,2*pi,.2)
z  <- outer(x1,x2,f)
persp(x1, x2, z,theta=30,main="true f(x1,x2)=sin(x1)cos(x2)")
# approximate function with KRLS
# data and outcomes
X <- cbind(sample(x1,200,replace=TRUE),sample(x2,200,replace=TRUE))
y   <- f(X[,1],X[,2])+ runif(nrow(X))
# fit surface
krlsout <- krls(X=X,y=y)
# plot fitted surface
ff  <- function(x1i,x2i,krlsout){predict(object=krlsout,newdata=cbind(x1i,x2i))$fit}
z   <- outer(x1,x2,ff,krlsout=krlsout)
persp(x1, x2, z,theta=30,main="KRLS fitted f(x1,x2)")

