DTRlearn (version 1.3)

Olearning_Single: Improved single stage O-learning with cross validation

Description

Improved outcome weighted learning. The function first takes residuals of the outcome, then uses cross validation to choose the best tuning parameters for wsvm, and returns the O-learning model fitted with those parameters. Improving on Zhao et al. (2012), the improved outcome weighted learning first removes the main effect by regression; the weights are the absolute values of the residuals. More details can be found in Liu et al. (2015).
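The residual step can be sketched as follows. This is an illustration of the idea, not the package source: under pentype='LSE' the main effect of H on the outcome is regressed out by ordinary least squares, the absolute residuals become the weights for the weighted SVM, and the sign of each residual flips the treatment label. All data below are simulated for illustration.

```r
set.seed(1)
n <- 100; p <- 5
H  <- matrix(rnorm(n * p), n, p)           # n by p feature matrix
A  <- sample(c(-1, 1), n, replace = TRUE)  # treatment assignments
R2 <- rnorm(n, mean = H[, 1])              # outcome, larger is better

main_fit <- lm(R2 ~ H)          # regress out the main effect ('LSE' case)
res      <- residuals(main_fit)
weights  <- abs(res)            # |residual| used as wsvm weights
labels   <- A * sign(res)       # label flipped when the residual is negative
```

With pentype='lasso' the same step would use a lasso regression (via cv.glmnet) in place of lm.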

Usage

Olearning_Single(H, A, R2, pi = rep(1, n), pentype = "lasso", kernel = "linear",
  sigma = c(0.03, 0.05, 0.07), clinear = 2^(-2:2), m = 4, e = 1e-05)

Arguments

H

an n by p matrix, where n is the sample size and p is the number of feature variables.

A

a vector of n entries coded 1 and -1 for the treatment assignments.

R2

a vector of outcomes; larger values are more desirable.

pi

a vector of randomization probabilities \(P(A|X)\), or the estimated observed probabilities.

pentype

the type of regression used to take residuals; 'lasso' (the default) uses lasso regression, and 'LSE' uses ordinary least squares regression.

kernel

the kernel function for the weighted SVM; can be 'linear' (the default) or 'rbf' (radial basis kernel). When 'rbf' is specified, the sigma parameter of the radial basis kernel can also be specified.

sigma

a grid of the tuning parameter sigma for the 'rbf' kernel, searched by cross validation; when kernel='rbf', the default is c(0.03, 0.05, 0.07).

clinear

a grid of the tuning parameter C, searched by cross validation; the default is 2^(-2:2). C is the tuning parameter as defined in wsvm.

m

the number of folds of cross validation for choosing the tuning parameters C and sigma. If pentype='lasso', m is also the number of folds used by cv.glmnet in the residual step.

e

the rounding tolerance used when computing the bias in wsvm.

Value

It returns the model estimated by wsvm with the best tuning parameters picked by cross validation. If kernel='linear' is specified, it returns an object of class 'linearcl', a list including the following elements:

alpha1

the scaled solution of the dual problem: \(alpha1_i = \alpha_i A_i wR_i\)

bias

the intercept \(\beta_0\) in \(f(X)=\beta_0+X\beta\).

fit

a vector of fitted values \(\hat{f}(x)\) on the training data: \(fit = bias + X\beta = bias + X X' alpha1\).

beta

the coefficients \(\beta\) of the linear SVM, \(f(X) = bias + X\beta\).
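Given these returned pieces, the linear decision rule can be evaluated on new data as the sign of \(bias + X\beta\). The sketch below mocks a fitted 'linearcl' object with illustrative values (not output from the package); only the field names bias and beta follow the Value section above.

```r
# Mocked 'linearcl' fit: illustrative numbers, not a real model output
model <- list(bias = 0.2, beta = c(0.5, -0.3))

# Two new subjects, one per row
Xnew <- matrix(c(1, 2,
                 -1, 0.5), nrow = 2, byrow = TRUE)

f         <- drop(model$bias + Xnew %*% model$beta)  # decision values f(X)
treatment <- sign(f)                                 # recommended treatment in {-1, 1}
```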

If kernel='rbf' is specified, it returns an object of class 'rbfcl', a list including the following elements:

alpha1

the scaled solution of the dual problem: \(alpha1_i = \alpha_i A_i wR_i\), so that \(X\beta = K(X,X)\,alpha1\)

bias

the intercept \(\beta_0\) in \(f(X)=\beta_0+h(X)\beta\).

fit

a vector of fitted values \(\hat{f}(x)\) on the training data: \(fit = \beta_0 + h(X)\beta = bias + K(X,X)\,alpha1\).

Sigma

the bandwidth parameter of the rbf kernel.

X

the matrix of training feature variables.
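For the rbf case, the returned alpha1, bias, Sigma, and X are everything needed to evaluate \(f(x) = bias + \sum_i alpha1_i K(x, X_i)\) at a new point. A minimal sketch, assuming the Gaussian kernel \(K(x, x') = \exp(-\sigma \|x - x'\|^2)\) (the kernlab convention) and mocked illustrative values in place of a real fit:

```r
# Evaluate the rbf decision function from the returned pieces
rbf_fit <- function(xnew, X, alpha1, bias, sigma) {
  # kernel values K(xnew, X_i) for each training row
  k <- apply(X, 1, function(xi) exp(-sigma * sum((xnew - xi)^2)))
  bias + sum(k * alpha1)
}

X      <- rbind(c(0, 0), c(1, 1))  # mocked training features
alpha1 <- c(0.7, -0.4)             # mocked scaled dual solution
val    <- rbf_fit(c(0, 0), X, alpha1, bias = 0.1, sigma = 0.05)
```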

References

Liu et al. (2015). Under double-blinded review.

Zhao, Y., Zeng, D., Rush, A. J., & Kosorok, M. R. (2012). Estimating individualized treatment rules using outcome weighted learning. Journal of the American Statistical Association, 107(499), 1106-1118.

See Also

Olearning; wsvm

Examples

# NOT RUN {
n_cluster <- 5
pinfo <- 10    # number of informative features
pnoise <- 10   # number of noise features
n_sample <- 50
set.seed(3)
example <- make_classification(n_cluster, pinfo, pnoise, n_sample)
pi <- list()
pi[[2]] <- pi[[1]] <- rep(1, n_sample)
modelrbf <- Olearning_Single(example$X, example$A, example$R,
                             kernel = 'rbf', m = 3, sigma = 0.05)
modellinear <- Olearning_Single(example$X, example$A, example$R)
# }