
refund.wave (version 0.1)

wnet: Generalized elastic net in the wavelet domain

Description

Performs generalized linear scalar-on-function or scalar-on-image regression in the wavelet domain, using the (naive) elastic net.
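
In outline, the method applies a discrete wavelet transform to each predictor signal, fits an elastic-net-penalized GLM to the retained wavelet coefficients, and maps the estimate back to the original domain. The sketch below illustrates that idea by hand for a 1D Gaussian response, calling wavethresh and glmnet directly; it is a conceptual illustration on simulated data, not the implementation used by wnet.

library(wavethresh)
library(glmnet)

set.seed(1)
n <- 50; d <- 64                               # d must be a power of 2
x <- matrix(rnorm(n * d), n, d)                # toy 1D functional predictors
y <- rowMeans(x[, 20:30]) + rnorm(n, sd = 0.1) # toy scalar responses

# 1. discrete wavelet transform of each signal (periodic boundary handling)
W <- t(apply(x, 1, function(sig) {
  wdec <- wd(sig, filter.number = 10, family = "DaubLeAsymm", bc = "periodic")
  c(accessC(wdec, level = 0), wdec$D)          # scaling + all detail coefficients
}))

# 2. elastic net in the wavelet domain (alpha = 1 gives the lasso)
cvfit <- cv.glmnet(W, y, alpha = 1)

# 3. wnet would invert the transform to recover the coefficient function;
#    here we only inspect the wavelet-domain fit
head(as.matrix(coef(cvfit, s = "lambda.min")))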

Usage

wnet(y, xfuncs, covt = NULL, min.scale = 0, nfeatures = NULL, alpha = 1,
     lambda = NULL, standardize = FALSE, pen.covt = FALSE, filter.number = 10,
     wavelet.family = "DaubLeAsymm", family = "gaussian", nfold = 5, nsplit = 1,
     store.cv = FALSE, store.glmnet = FALSE, seed = NULL, ...)

Arguments

y
scalar outcome vector.
xfuncs
functional predictors. For 1D predictors, an $n \times d$ matrix of signals, where $n$ is the length of y and $d$ is the number of sites at which each signal is observed. For 2D predictors, an $n \times d \times d$ array comprising $n$ images of dimension $d \times d$. For 3D predictors, an $n \times d \times d \times d$ array comprising $n$ images of dimension $d \times d \times d$. Note that $d$ must be a power of 2.
covt
covariates, if any: an $n$-row matrix, or a vector of length $n$.
min.scale
either a scalar, or a vector of candidate values to be compared. Used to control the coarseness level of the wavelet decomposition. Possible values are $0, 1, \dots, \log_2(d) - 1$ (see the short check following this argument list).
nfeatures
number(s) of features, i.e. wavelet coefficients, to retain for prediction of y: either a scalar, or a vector of values to be compared.
alpha
elastic net mixing parameter, used by glmnet: either a scalar, or a vector of values to be compared. alpha=1 gives the lasso penalty, while alpha=0 yields the ridge penalty.
lambda
a vector of candidate regularization parameter values. If not supplied, glmnet automatically generates a sequence of candidate values.
standardize
logical, passed to glmnet: should the predictor variables be standardized? Defaults to FALSE, but either way, the coefficients are returned on the original scale.
pen.covt
logical: should the scalar covariates be penalized? If FALSE (the default), penalization is suppressed by setting the appropriate components of penalty.factor to 0 in the call to glmnet.
filter.number
passed to wd, imwd, or wd3D, in the wavethresh package, to select the smoothness of the wavelets.
wavelet.family
family of wavelets: passed to wd, imwd, or wd3D.
family
generalized linear model family. The current version supports "gaussian" (the default) and "binomial".
nfold
the number of validation sets ("folds") into which the data are divided.
nsplit
number of splits into nfold validation sets; CV is computed by averaging over these splits.
store.cv
logical: should the output include a CV result table?
store.glmnet
logical: should the output include the fitted glmnet?
seed
the seed for random data division. If seed = NULL, a random seed is used.
...
other arguments passed to glmnet.
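
The dimension requirement on xfuncs and the admissible min.scale values can be verified as in the following illustrative snippet (not part of the package interface):

x1d <- matrix(rnorm(60 * 256), nrow = 60)   # n = 60 signals observed at d = 256 sites
d <- ncol(x1d)
stopifnot(log2(d) %% 1 == 0)                # d must be a power of 2

# admissible coarseness levels: 0, 1, ..., log2(d) - 1
min.scale.grid <- 0:(log2(d) - 1)
min.scale.grid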

Value

An object of class "wnet", which is a list with the following components:
glmnet
if store.glmnet = TRUE, the object returned by glmnet.
fitted.value
the fitted values.
coef.param
parametric coefficient estimates, for the scalar covariates.
fhat
coefficient function estimate.
Rsq
coefficient of determination.
lambda.table
array giving the candidate lambda values, chosen automatically by glmnet, for each combination of the other tuning parameters.
tuning.params
a $2\times 4$ table giving the indices and values of min.scale, nfeatures, alpha and lambda that minimize the CV. For example, if alpha=c(0,0.5,1) is specified and the CV-minimizing tuning parameter combination takes alpha to be the 2nd of these values, then the third column of the table is c(2, 0.5).
cv.table
if store.cv = TRUE, a table giving the CV criterion for each combination of min.scale, nfeatures, alpha and lambda. Otherwise, just the minimized CV criterion.
se.cv
if store.cv = TRUE, the standard error of the CV estimate for each combination of min.scale, nfeatures, alpha and lambda.
family
generalized linear model family.
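
For example, with a fit along the lines of gas.wnet2 in the Examples section, the components above can be inspected as follows (a usage sketch based on the documented component names):

data(gasoline)
fit <- wnet(gasoline$octane, xfuncs = gasoline$NIR[, 1:256],
            nfeatures = 20, min.scale = 0:3, alpha = c(0.9, 1))
fit$tuning.params   # indices and values of the CV-minimizing tuning parameters
fit$Rsq             # coefficient of determination
str(fit$fhat)       # coefficient function estimate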

Details

This function supports only the standard discrete wavelet transform (see argument type in wd) with periodic boundary handling (see argument bc in wd).

For 2D predictors, setting min.scale=1 will lead to an error, due to a technical detail regarding imwd. Please contact the authors if a workaround is needed.
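
For reference, the decomposition these restrictions refer to can be reproduced directly with wavethresh; the snippet below (illustrative only) applies the standard DWT to a 1D signal with periodic boundary handling and confirms exact reconstruction.

library(wavethresh)
sig  <- rnorm(256)                             # length must be a power of 2
wdec <- wd(sig, filter.number = 10, family = "DaubLeAsymm",
           type = "wavelet", bc = "periodic")  # the only settings wnet supports
max(abs(wr(wdec) - sig))                       # inverse transform recovers the signal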

References

Zhao, Y., Ogden, R. T., and Reiss, P. T. (2012). Wavelet-based LASSO in functional linear regression. Journal of Computational and Graphical Statistics, 21(3), 600--617.

Zhao, Y., Chen, H., and Ogden, R. T. Wavelet-based Adaptive LASSO and screening approaches in functional linear regression. Journal of Computational and Graphical Statistics, to appear.

See Also

wcr, wnet.perm

Examples

## Not run:
### 1D functional predictor example ###

data(gasoline)

# input a single value of each tuning parameter
gas.wnet1 <- wnet(gasoline$octane, xfuncs = gasoline$NIR[, 1:256],
                  nfeatures = 20, min.scale = 0, alpha = 1)
gas.wpcr1 <- wcr(gasoline$octane, xfuncs = gasoline$NIR[, 1:256], min.scale = 0,
                 nfeatures = 20, ncomp = 15)
gas.wpls1 <- wcr(gasoline$octane, xfuncs = gasoline$NIR[, 1:256], min.scale = 0,
                 nfeatures = 20, ncomp = 15, method = "pls")
plot(gas.wnet1)
plot(gas.wpcr1)
plot(gas.wpls1)

# input vectors of candidate tuning parameter values
gas.wnet2 <- wnet(gasoline$octane, xfuncs = gasoline$NIR[, 1:256],
                  nfeatures = 20, min.scale = 0:3, alpha = c(0.9, 1))
gas.wpcr2 <- wcr(gasoline$octane, xfuncs = gasoline$NIR[, 1:256], min.scale = 0:3,
                 nfeatures = c(16, 18, 20), ncomp = 10:15)
gas.wpls2 <- wcr(gasoline$octane, xfuncs = gasoline$NIR[, 1:256], min.scale = 0:3,
                 nfeatures = c(16, 18, 20), ncomp = 10:15, method = "pls")
plot(gas.wnet2)
plot(gas.wpcr2)
plot(gas.wpls2)

### 2D functional predictor example ###

n <- 200; d <- 64

# Create the true coefficient function
ftrue <- matrix(0, d, d)
ftrue[40:46, 34:38] <- 1

# Generate random functional predictors and scalar responses
ii <- array(rnorm(n * d^2), dim = c(n, d, d))
iimat <- ii; dim(iimat) <- c(n, d^2)
yy <- iimat %*% as.vector(ftrue) + rnorm(n, sd = 0.3)

mm.wnet <- wnet(yy, xfuncs = ii, min.scale = 4, alpha = 1)

mm.wpls <- wcr(yy, xfuncs = ii, min.scale = 4, nfeatures = 20, ncomp = 6,
               method = "pls")

plot(mm.wnet)
plot(mm.wpls)

### 3D functional predictor example ###

n <- 200; d <- 16

# Create the true coefficient function
ftrue <- array(0, dim = rep(d, 3))
ftrue[10:16, 12:15, 4:8] <- 1

# Generate random functional predictors and scalar responses
ii <- array(rnorm(n * d^3), dim = c(n, rep(d, 3)))
iimat <- ii; dim(iimat) <- c(n, d^3)
yy <- iimat %*% as.vector(ftrue) + rnorm(n, sd = 0.3)

mmm.wnet <- wnet(yy, xfuncs = ii, min.scale = 2, alpha = 1)

mmm.wpls <- wcr(yy, xfuncs = ii, min.scale = 2, nfeatures = 20, ncomp = 6,
                method = "pls")
plot(mmm.wnet)
plot(mmm.wpls)
## End(Not run)
