
fastcox (version 1.1.1)

cocktail: Fits the regularization paths for the elastic net penalized Cox's model

Description

Fits a regularization path for the elastic net penalized Cox's model at a sequence of regularization parameters lambda.

Usage

cocktail(x,y,d,
	nlambda=100,
	lambda.min=ifelse(nobs<nvars,1e-2,1e-4),
	lambda=NULL,
	alpha=1,
	pf=rep(1,nvars),
	exclude,
	dfmax=nvars+1,
	pmax=min(dfmax*1.2,nvars),
	standardize=FALSE,
	eps=1e-6,
	maxit=1e4)

Arguments

x
matrix of predictors, of dimension $N \times p$; each row is an observation vector.
y
vector of survival times for the Cox model. Currently tied failure times are not supported.
d
censor status with 1 if died and 0 if right censored.
nlambda
the number of lambda values - default is 100.
lambda.min
given as a fraction of lambda.max, the smallest value of lambda for which all coefficients are zero. The default depends on the relationship between $N$ (the number of rows in the matrix of predictors) and $p$ (the number of predictors): if $N > p$, the default is 0.0001, close to zero; if $N < p$, the default is 0.01.
lambda
a user supplied lambda sequence. Typically, by leaving this option unspecified, users can have the program compute its own lambda sequence based on nlambda and lambda.min. Supplying a value of lambda overrides this. It is better to supply a decreasing sequence of lambda values than a single (small) value; if the supplied sequence is not decreasing, the program sorts it in decreasing order automatically.
alpha
The elastic net mixing parameter, with $0 < \alpha \le 1$. alpha=1 is the lasso penalty; see Details for the full penalty definition. Default is 1.
pf
separate penalty weights can be applied to each coefficient of $\beta$ to allow differential shrinkage. Can be 0 for some variables, which implies no shrinkage and results in that variable always being included in the model. Default is 1 for all variables; see the sketch following these arguments.
exclude
indices of variables to be excluded from the model. Default is none. Equivalent to an infinite penalty factor.
dfmax
limit the maximum number of variables in the model. Useful for very large $p$, if a partial path is desired. Default is $p+1$.
pmax
limit the maximum number of variables ever to be nonzero. For example, once a variable enters the model it is counted only once, no matter how many times it exits or re-enters the model along the path. Default is min(dfmax*1.2,p).
standardize
logical flag for variable standardization, prior to fitting the model sequence. If TRUE, the x matrix is normalized such that the sum of squares of each column satisfies $\sum^N_{i=1}x_{ij}^2/N=1$. Note that x is always centered (i.e. $\sum^N_{i=1}x_{ij}=0$) no matter whether standardize is TRUE or FALSE; the coefficients are always returned on the original scale. Default is FALSE.
eps
convergence threshold for coordinate majorization descent. Each inner coordinate majorization descent loop continues until the relative change in any coefficient (i.e. $\max_j|\beta_j^{new}-\beta_j^{old}|^2$) is less than eps. Default value is 1e-6.
maxit
maximum number of outer-loop iterations allowed at fixed lambda value. Default is 1e4. If models do not converge, consider increasing maxit.
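
For example, pf and exclude can be combined for differential shrinkage. The following is a minimal sketch; the choice of unpenalized and excluded variables below is purely hypothetical:

data(FHT)
wts <- rep(1, ncol(FHT$x))
wts[1:2] <- 0   # hypothetical choice: leave the first two predictors unpenalized
m <- cocktail(x=FHT$x, y=FHT$y, d=FHT$status,
	alpha=0.5, pf=wts, exclude=c(5,6))   # variables 5 and 6 never enter the model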

Value

  • An object with S3 class cocktail, with the following components:
  • call: the call that produced this object.
  • beta: a $p \times length(lambda)$ matrix of coefficients, stored as a sparse matrix (class dgCMatrix, the standard class for sparse numeric matrices in the Matrix package). Use as.matrix() to convert it to an ordinary dense matrix.
  • lambda: the actual sequence of lambda values used.
  • df: the number of nonzero coefficients for each value of lambda.
  • dim: the dimensions of the coefficient matrix.
  • npasses: the total number of iterations (of the innermost loop) summed over all lambda values.
  • jerr: error flag for warnings and errors; 0 if no error.
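
For example, the returned components can be inspected as follows (a minimal sketch using the FHT data shipped with the package):

data(FHT)
m1 <- cocktail(x=FHT$x, y=FHT$y, d=FHT$status, alpha=0.5)
class(m1$beta)            # "dgCMatrix" (sparse)
b <- as.matrix(m1$beta)   # ordinary dense p x length(lambda) matrix
m1$df                     # number of nonzero coefficients at each lambda
m1$jerr                   # 0 if no error occurred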

Details

The algorithm estimates $\beta$ from the observed data by minimizing the elastic net penalized negative log partial likelihood of Cox's model: $$\hat{\beta}=\arg\min_{\beta}\{-loglik(Data,\beta)+\lambda P(\beta)\}.$$ It can compute estimates at a fine grid of $\lambda$ values, so that a data-driven optimal $\lambda$ can be picked for fitting a 'best' final model. The penalty is a combination of the $\ell_1$ and $\ell_2$ penalties: $$P(\beta)=\frac{1-\alpha}{2}||\beta||_2^2+\alpha||\beta||_1.$$ alpha=1 gives the lasso penalty. For reasons of computing speed, if models are not converging or are running slowly, consider increasing eps, decreasing nlambda, or increasing lambda.min before increasing maxit.
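
As a quick check on the penalty definition, the following sketch computes $P(\beta)$ directly (a plain R illustration, not part of the package API):

# elastic net penalty: P(beta) = (1-alpha)/2 * ||beta||_2^2 + alpha * ||beta||_1
enet.penalty <- function(beta, alpha) {
	(1 - alpha) / 2 * sum(beta^2) + alpha * sum(abs(beta))
}
enet.penalty(c(0.5, -1, 0), alpha=1)     # pure lasso: sum of absolute values = 1.5
enet.penalty(c(0.5, -1, 0), alpha=0.5)   # mixes the ridge and lasso terms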

FAQ:

Question: I am not sure how we are optimizing alpha. I can get the optimal lambda for each value of alpha, but how do I select the optimal alpha?

Answer: cv.cocktail only finds the optimal lambda for a fixed alpha. So to choose a good alpha you need to run cross-validation on a grid of alpha values, say (0.1, 0.3, 0.6, 0.9, 1), and choose the one that corresponds to the lowest cross-validated error (predicted deviance).
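
A minimal sketch of that grid search, assuming cv.cocktail exposes cvm and lambda.min components in the usual glmnet-style fashion:

data(FHT)
alphas <- c(0.1, 0.3, 0.6, 0.9, 1)
cvs <- lapply(alphas, function(a)
	cv.cocktail(x=FHT$x, y=FHT$y, d=FHT$status, alpha=a))
errs <- sapply(cvs, function(cv) min(cv$cvm))   # lowest cross-validated error per alpha
best <- which.min(errs)
alphas[best]              # chosen alpha
cvs[[best]]$lambda.min    # corresponding optimal lambda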

Question: I understand you are referring to minimizing the quantity cv.cocktail$cvm, the mean 'cross-validated error', to optimize alpha and lambda as you did in your implementation. However, I don't know the equation for this error, and it is not given in your paper either. Do you mind explaining what it is?

Answer: We first define the log partial likelihood for the Cox model. Let $\hat{\beta}^{[k]}$ be the estimate fitted with the $k$-th fold excluded, and define the log partial likelihood as $$L(Data,\hat{\beta}^{[k]})=\sum_{s=1}^{S}\left[x_{i_{s}}^{T}\hat{\beta}^{[k]}-\log\Big(\sum_{i\in R_{s}}\exp(x_{i}^{T}\hat{\beta}^{[k]})\Big)\right],$$ where $i_s$ indexes the $s$-th observed failure and $R_s$ is the corresponding risk set. The log partial likelihood deviance of the $k$-th fold is then $$D(Data,k)=-2\,L(Data,\hat{\beta}^{[k]}).$$ The measurement we actually use for cross-validation is the difference between the deviance evaluated on the full dataset and the deviance evaluated on the dataset with the $k$-th fold excluded: $$CV\mbox{-}ERR[k]=D(Data[full],k)-D(Data[k^{th}\,\,fold\,\,excluded],k).$$
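
A hedged sketch of this log partial likelihood in plain R (coxpl is a hypothetical helper, not part of fastcox; it assumes no tied failure times, as required by cocktail):

# L(Data, beta) = sum over observed failures s of
#   x_{i_s}' beta - log( sum_{i in R_s} exp(x_i' beta) )
coxpl <- function(time, status, x, beta) {
	eta <- drop(x %*% beta)             # linear predictors x_i' beta
	ll <- 0
	for (s in which(status == 1)) {     # observed failures only
		risk <- time >= time[s]         # risk set R_s at the s-th failure time
		ll <- ll + eta[s] - log(sum(exp(eta[risk])))
	}
	ll
}
# deviance for fold k: D(Data, k) = -2 * coxpl(time, status, x, beta.hat.k)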

References

Yang, Y. and Zou, H. (2012), "A Cocktail Algorithm for Solving The Elastic Net Penalized Cox's Regression in High Dimensions", Statistics and Its Interface. http://code.google.com/p/fastcox/

See Also

plot.cocktail

Examples

data(FHT)
# fit the elastic net penalized Cox model with mixing parameter alpha=0.5
m1 <- cocktail(x=FHT$x, y=FHT$y, d=FHT$status, alpha=0.5)
# predictions for the first 5 observations at two values of lambda
predict(m1, newx=FHT$x[1:5,], s=c(0.01,0.005))
# indices of the nonzero coefficients at each lambda
predict(m1, type="nonzero")
# plot the coefficient paths
plot(m1)
