sparseGAM (version 1.0)

grpreg.nb: Group-regularized Negative Binomial Regression

Description

This function implements group-regularized negative binomial regression with a known size parameter \(\alpha\) and the log link. In negative binomial regression, we assume that \(y_i \sim NB(\alpha, \mu_i)\), where

$$f(y_i | \alpha, \mu_i) = \frac{\Gamma(y_i+\alpha)}{y_i! \, \Gamma(\alpha)} \left(\frac{\mu_i}{\mu_i+\alpha}\right)^{y_i} \left(\frac{\alpha}{\mu_i +\alpha}\right)^{\alpha}, \quad y_i = 0, 1, 2, \ldots$$

Then \(E(y_i) = \mu_i\), and we relate \(\mu_i\) to a set of \(p\) covariates \(x_i\) through the log link,

$$\log(\mu_i) = \beta_0 + x_i^T \beta, \quad i=1, \ldots, n.$$
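As a quick sanity check on this parameterization, the density above coincides with R's built-in dnbinom when called with size equal to \(\alpha\) and mu equal to \(\mu_i\). The snippet below is purely illustrative and does not use sparseGAM itself.

# Check: the NB density above matches dnbinom(y, size = alpha, mu = mu)
alpha = 2; mu = 3; y = 0:10
f = gamma(y + alpha) / (factorial(y) * gamma(alpha)) *
  (mu / (mu + alpha))^y * (alpha / (mu + alpha))^alpha
all.equal(f, dnbinom(y, size = alpha, mu = mu))   # TRUE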

If the covariates in each \(x_i\) are grouped according to known groups \(g=1, ..., G\), then this function can estimate some of the \(G\) groups of coefficients as exactly zero, depending on the amount of regularization.

Our implementation of group-regularized negative binomial regression is based on the least squares approximation approach of Wang and Leng (2007); hence, the function does not allow the total number of covariates \(p\) to exceed the sample size \(n\).

Usage

grpreg.nb(y, X, X.test, groups, nb.size=1, penalty=c("gLASSO","gSCAD","gMCP"),
          weights, taper, nlambda=100, lambda, max.iter=10000, tol=1e-4)

Arguments

y

\(n \times 1\) vector of responses for training data.

X

\(n \times p\) design matrix for training data, where the \(j\)th column of X corresponds to the \(j\)th overall covariate.

X.test

\(n_{test} \times p\) design matrix for test data to calculate predictions. X.test must have the same number of columns as X, but not necessarily the same number of rows. If no test data are provided, or if in-sample predictions are desired, the function automatically sets X.test=X so that in-sample predictions are returned.

groups

\(p\)-dimensional vector of group labels. The \(j\)th entry in groups should contain either the group number or the factor level name that the \(j\)th covariate belongs to. groups must be either an integer vector or a factor.

nb.size

known size parameter \(\alpha\) in \(NB(\alpha,\mu_i)\) distribution for the responses. Default is nb.size=1.

penalty

group regularization method to use on the groups of coefficients. The options are "gLASSO", "gSCAD", and "gMCP". To implement negative binomial regression with the SSGL penalty, use the SSGL function.

weights

group-specific, nonnegative weights for the penalty. Default is to use the square roots of the group sizes (see the sketches following this argument list).

taper

tapering term \(\gamma\) in group SCAD and group MCP controlling how rapidly the penalty tapers off. Default is taper=4 for group SCAD and taper=3 for group MCP. Ignored if "gLASSO" is specified as the penalty.

nlambda

number of regularization parameters \(L\). Default is nlambda=100.

lambda

grid of \(L\) regularization parameters. The user may specify either a scalar or a vector. If not provided, the program chooses the grid automatically (see the sketches following this argument list).

max.iter

maximum number of iterations in the algorithm. Default is max.iter=10000.

tol

convergence threshold for the algorithm. Default is tol=1e-4.
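As an illustration of the default for weights, the snippet below computes square roots of the group sizes from a groups vector. This is a sketch of the documented default, not code taken from the package.

## Default penalty weights: square roots of the group sizes
groups = factor(c("A","A","A","B","B","C"))
weights = as.numeric(sqrt(table(groups)))   # c(sqrt(3), sqrt(2), 1)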
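Likewise, if you would rather supply your own grid for lambda than rely on the automatic choice, a log-spaced grid in descending order is a natural option. The endpoints below are arbitrary illustration values, not package defaults.

## A user-supplied grid of L = 50 log-spaced regularization parameters
lambda = exp(seq(log(1), log(0.001), length.out = 50))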

Value

The function returns a list containing the following components:

lambda

\(L \times 1\) vector of regularization parameters lambda used to fit the model. lambda is displayed in descending order.

beta0

\(L \times 1\) vector of estimated intercepts. The \(k\)th entry in beta0 corresponds to the \(k\)th regularization parameter in lambda.

beta

\(p \times L\) matrix of estimated regression coefficients. The \(k\)th column in beta corresponds to the \(k\)th regularization parameter in lambda.

mu.pred

\(n_{test} \times L\) matrix of predicted mean response values \(\mu_{test} = E(Y_{test})\) based on the test data in X.test (or training data X if no argument was specified for X.test). The \(k\)th column in mu.pred corresponds to the predictions for the \(k\)th regularization parameter in lambda.

classifications

\(G \times L\) matrix of classifications, where \(G\) is the number of groups. An entry of "1" indicates that the group was classified as nonzero, and an entry of "0" indicates that the group was classified as zero. The \(k\)th column of classifications corresponds to the \(k\)th regularization parameter in lambda (see the sketch after this list).

loss

\(L \times 1\) vector of the negative log-likelihoods of the fitted models. The \(k\)th entry in loss corresponds to the \(k\)th regularization parameter in lambda.
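To list the groups selected at a particular regularization parameter, index a column of classifications directly. The sketch below assumes a fitted object nb.mod as in the Examples; the column index k is arbitrary.

## Groups classified as nonzero at the kth regularization parameter
k = 10
which(nb.mod$classifications[, k] == 1)   # named indices if the rows carry group labels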
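Since loss records the negative log-likelihood of every model on the path, plotting it against lambda gives a quick picture of how fit changes as the penalty is relaxed. Again, nb.mod is assumed to be a fitted object as in the Examples; this is an illustration, not a model-selection rule.

## Negative log-likelihood along the regularization path
plot(nb.mod$lambda, nb.mod$loss, type = "l", log = "x",
     xlab = "lambda", ylab = "negative log-likelihood")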

References

Breheny, P. and Huang, J. (2015). "Group descent algorithms for nonconvex penalized linear and logistic regression models with grouped predictors." Statistics and Computing, 25:173-187.

Wang, H. and Leng, C. (2007). "Unified LASSO estimation by least squares approximation." Journal of the American Statistical Association, 102:1039-1048.

Examples

## Generate training data
set.seed(1234)
X = matrix(runif(100*16), nrow=100) 
n = dim(X)[1]
groups = c("A","A","A","B","B","B","C","C","D","E","E","F","G","H","H","H")
groups = as.factor(groups)
true.beta = c(-2,2,2,0,0,0,0,0,0,1.5,-1.5,0,0,-2,2,2)
  
## Generate count responses from negative binomial regression
eta = X %*% true.beta                  # linear predictor (no intercept in this example)
y = rnbinom(n, size=1, mu=exp(eta))
  
## Generate test data
n.test = 50
X.test = matrix(runif(n.test*16), nrow=n.test)
  
## Fit negative binomial regression models with the group SCAD penalty
nb.mod = grpreg.nb(y, X, X.test, groups, penalty="gSCAD")
  
## Tuning parameters used to fit models 
nb.mod$lambda
  
## Predicted n.test-dimensional vectors mu=E(Y.test) based on test data, X.test.
## The kth column of 'mu.pred' corresponds to the kth entry in 'lambda'.
nb.mod$mu.pred

## Classifications of the 8 groups. The kth column of 'classifications'
## corresponds to the kth entry in 'lambda'.
nb.mod$classifications