mpath (version 0.4-2.26)

nclreg_fit: Internal function to fit a nonconvex-loss-based robust linear model with regularization

Description

Fit a linear model via a penalized nonconvex loss function. The regularization path is computed for the lasso (or elastic net), scad (or snet), and mcp (or mnet) penalties at a grid of values of the regularization parameter lambda.

Usage

nclreg_fit(x, y, weights, offset, rfamily=c("clossR", "closs", "gloss", "qloss"), 
           s=NULL, fk=NULL, iter=10, reltol=1e-5, 
           penalty=c("enet","mnet","snet"), nlambda=100,lambda=NULL, 
           type.path=c("active", "nonactive", "onestep"), decreasing=FALSE, 
           lambda.min.ratio=ifelse(nobs < nvars, 0.05, 0.001), ...)

Value

An object with S3 class "nclreg" for the various types of models.

call

the call that produced the model fit

b0

Intercept sequence of length length(lambda)

beta

A nvars x length(lambda) matrix of coefficients.

lambda

The actual sequence of lambda values used

decreasing

a logical value indicating whether lambda is an increasing sequence; used to determine the direction of the regularization path, either from lambda_max to a (potentially modified) lambda_min or vice versa when type.init="bst" or "heu".

Arguments

x

input matrix, of dimension nobs x nvars; each row is an observation vector.

y

response variable. Quantitative for rfamily="clossR" and -1/1 for classification.

weights

observation weights. Can be total counts if responses are proportion matrices. Default is 1 for each observation

offset

this can be used to specify an a priori known component to be included in the linear predictor during fitting. This should be NULL or a numeric vector of length equal to the number of cases. Currently only one offset term can be included in the formula.

rfamily

Response type and relevant loss functions (see above)

s

nonconvex loss tuning parameter for robust regression and classification. Smaller values of s are more robust to outliers with rfamily="closs", while larger values are more robust with rfamily="clossR", "gloss", "qloss".
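What makes such nonconvex losses robust is that they are bounded, so a single outlier contributes at most a fixed amount to the objective. As an illustrative sketch only (a generic bounded exponential-type loss, not the exact clossR definition from the mpath sources):

```r
## Illustration only: a generic bounded ("redescending") loss of the kind
## used for robust regression; NOT the exact clossR definition in mpath.
bounded_loss <- function(r, s) 1 - exp(-r^2 / (2 * s^2))

r <- c(0.1, 1, 100)           # small, moderate, and outlying residuals
squared <- r^2 / 2            # classical least-squares contribution
robust  <- bounded_loss(r, s = 1)

## An outlier with r = 100 contributes 5000 to the squared loss but at
## most 1 to the bounded loss, which caps its influence on the fit.
print(squared)
print(robust)
```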

fk

predicted values at an iteration in the MM algorithm

nlambda

The number of lambda values; the default is 100. The sequence may be truncated before nlambda values are reached if a close-to-saturated model is fitted. See also satu.

lambda

by default, the algorithm provides its own sequence of regularization values; alternatively, a user-supplied lambda sequence can be given.

type.path

solution path strategy. If type.path="active", cycle through only the active set at the next value of an increasing lambda sequence. If type.path="nonactive", keep no active set and cycle through all predictor variables for each element of the lambda sequence. If type.path="onestep", update for a single element of lambda in each MM iteration, the last element if decreasing=FALSE or the first element if decreasing=TRUE, and iterate until the predictions converge; then fit a solution path along the lambda sequence.

lambda.min.ratio

Smallest value for lambda, as a fraction of lambda.max, the (data-derived) entry value (i.e., the smallest value for which all coefficients except the intercept are zero). Note that there is no closed-form formula for lambda.max. The default of lambda.min.ratio depends on the sample size nobs relative to the number of variables nvars: if nobs > nvars, the default is 0.001, close to zero; if nobs < nvars, the default is 0.05.
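A typical way such a path is laid out, shown here as a sketch of the log-spaced grid convention common to penalized-regression packages (lambda.max = 2 is a placeholder, since there is no closed-form formula for it):

```r
## Sketch of a log-spaced lambda path running from lambda.max down to
## lambda.max * lambda.min.ratio.
nobs <- 100; nvars <- 10; nlambda <- 100
lambda.min.ratio <- ifelse(nobs < nvars, 0.05, 0.001)
lambda.max <- 2   # placeholder value for illustration

lambda <- exp(seq(log(lambda.max),
                  log(lambda.max * lambda.min.ratio),
                  length.out = nlambda))
head(lambda)      # a decreasing sequence starting at lambda.max
```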

alpha

The \(L_2\) penalty mixing parameter, with \(0 \le alpha\le 1\). alpha=1 gives the lasso (mcp, scad) penalty; alpha=0 gives the ridge penalty. However, if alpha=0, lambda values must be supplied by the user.
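As an illustration of how the mixing works, the usual elastic-net penalty for a single coefficient can be written out in a few lines of R (a sketch of the standard enet convention, not code taken from mpath):

```r
## Standard elastic-net penalty for one coefficient b: a mix of the
## L1 (lasso) and L2 (ridge) parts controlled by alpha.
enet_penalty <- function(b, lambda, alpha) {
  lambda * (alpha * abs(b) + (1 - alpha) * b^2 / 2)
}

enet_penalty(2, lambda = 0.5, alpha = 1)  # pure lasso: 0.5 * |2| = 1
enet_penalty(2, lambda = 0.5, alpha = 0)  # pure ridge: 0.5 * 2^2 / 2 = 1
```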

gamma

The tuning parameter of the snet or mnet penalty.

standardize

logical value for x variable standardization, prior to fitting the model sequence. The coefficients are always returned on the original scale. Default is TRUE.

intercept

logical value: if TRUE (default), intercept(s) are fitted; otherwise, intercept(s) are set to zero

penalty.factor

A number that multiplies lambda for each variable, to allow differential shrinkage of coefficients. Can be 0 for some variables, which implies no shrinkage, so that the variable is always included in the model. Default is the same shrinkage for all variables.

type.init

a method to determine the initial values. If type.init="ncl", use an intercept-only model as the initial parameter and run the nclreg regularization path forward from lambda_max to lambda_min. If type.init="heu", use heuristic initial parameters and run the path backward or forward, depending on decreasing, between lambda_min and lambda_max. If type.init="bst", run a boosting model with bst in package bst (controlled by mstop.init and nu.init) and run the path backward or forward depending on decreasing.

mstop.init

an integer giving the number of boosting iterations when type.init="bst"

nu.init

a small number (between 0 and 1) defining the step size or shrinkage parameter when type.init="bst".

decreasing

only used if lambda=NULL; a logical value determining the direction of the regularization path, either from lambda_max to a (potentially modified) lambda_min or vice versa when type.init="bst" or "heu". Since this is a nonconvex optimization, different estimates can be produced for the same lambda depending on decreasing, because the choice of decreasing picks different starting values.

iter

number of iterations in the MM algorithm

maxit

Within each MM algorithm iteration, maximum number of coordinate descent iterations for each lambda value; default is 1000.

reltol

convergence criterion for the MM algorithm

eps

If a coefficient is less than eps in magnitude, it is reported as 0

epscycle

If nlambda > 1 and the relative change in loss values between two consecutive lambda values exceeds epscycle, re-estimate the parameters in an effort to avoid getting trapped in a local optimum.

thresh

Convergence threshold for coordinate descent. Default value is 1e-6.

penalty

Type of regularization

trace

If TRUE, fitting progress is reported

Author

Zhu Wang <zwang145@uthsc.edu>

Details

The sequence of robust models implied by lambda is fit by majorization-minimization along with coordinate descent. Note that the objective function is $$weights*loss + \lambda*penalty,$$ if standardize=FALSE and $$\frac{weights}{\sum(weights)}*loss + \lambda*penalty,$$ if standardize=TRUE.
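The coordinate-descent step inside each MM iteration can be sketched as a soft-thresholding update. The following is an illustrative update for the enet penalty on standardized predictors, using the standard coordinate-descent convention rather than mpath's actual implementation; update_beta_j is a hypothetical helper named here for exposition:

```r
## Soft-thresholding operator used by coordinate descent for the
## lasso/elastic-net penalty.
soft <- function(z, t) sign(z) * pmax(abs(z) - t, 0)

## One enet coordinate update for predictor j, given the current
## residual r and coefficients beta; assumes each column of x is
## standardized so that mean(x[, j]^2) = 1 (illustration only).
update_beta_j <- function(x, r, beta, j, lambda, alpha) {
  z <- mean(x[, j] * r) + beta[j]            # partial residual correlation
  soft(z, lambda * alpha) / (1 + lambda * (1 - alpha))
}
```

A large lambda thresholds the coefficient to exactly zero, which is how the path produces sparse models near lambda.max.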

References

Zhu Wang (2021), MM for Penalized Estimation, TEST, doi:10.1007/s11749-021-00770-2

See Also

nclreg