These functions provide the density and random generation for the Bayesian LASSO prior distribution.
dlasso(x, sigma, tau, lambda, a=1, b=1, log=FALSE)
rlasso(n, sigma, tau, lambda, a=1, b=1)
This is a location vector of length \(J\) at which to evaluate the density.
This is the number of observations, which must be a positive integer of length 1.
This is a positive-only scalar hyperparameter \(\sigma\), which is also the residual standard deviation.
This is a positive-only vector of hyperparameters, \(\tau\), of length \(J\) regarding local sparsity.
This is a positive-only scalar hyperhyperparameter, \(\lambda\), of global sparsity.
These are positive-only scalar hyperhyperhyperparameters for the gamma-distributed \(\lambda\).
Logical. If log=TRUE, then the logarithm of the density is returned.
dlasso gives the density, and rlasso generates random deviates.
Application: Multivariate Scale Mixture
Density: \(p(\theta) \sim \mathcal{N}_J(0, \sigma^2 \mathrm{diag}(\tau^2)) \left(\frac{1}{\sigma^2}\right) \mathcal{EXP}\left(\frac{\lambda^2}{2}\right) \mathcal{G}(a,b)\)
Inventor: Park and Casella (2008)
Notation 1: \(\theta \sim \mathcal{LASSO}(\sigma, \tau, \lambda, a, b)\)
Notation 2: \(p(\theta) = \mathcal{LASSO}(\theta | \sigma, \tau, \lambda, a, b)\)
Parameter 1: hyperparameter global scale \(\sigma > 0\)
Parameter 2: hyperparameter local scale \(\tau > 0\)
Parameter 3: hyperhyperparameter global scale \(\lambda > 0\)
Parameter 4: hyperhyperhyperparameter scale \(a > 0\)
Parameter 5: hyperhyperhyperparameter scale \(b > 0\)
Mean: \(E(\theta) = 0\)
Variance:
Mode: \(mode(\theta) = 0\)
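The joint density above can be read as the following conditional hierarchy (a sketch for exposition; the per-coefficient index \(j = 1, \ldots, J\) is added here and is not part of the package's notation):
\(\theta_j | \sigma^2, \tau_j^2 \sim \mathcal{N}(0, \sigma^2 \tau_j^2)\), for \(j = 1, \ldots, J\)
\(\tau_j^2 | \lambda \sim \mathcal{EXP}(\frac{\lambda^2}{2})\)
\(\lambda \sim \mathcal{G}(a, b)\)
\(p(\sigma^2) \propto \frac{1}{\sigma^2}\)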
The Bayesian LASSO distribution (Park and Casella, 2008) is a heavy-tailed mixture distribution that can be considered a variance mixture, and it belongs to the family of multivariate scale mixtures of normals.
The LASSO distribution was proposed as a prior distribution, as a Bayesian version of the frequentist LASSO, introduced by Tibshirani (1996). It is applied as a shrinkage prior in the presence of sparsity for \(J\) regression effects. LASSO priors are most appropriate in large-dimensional models where dimension reduction is necessary to avoid overly complex models that predict poorly.
The Bayesian LASSO results in regression effects that are a compromise between regression effects in the frequentist LASSO and ridge regression. The Bayesian LASSO applies more shrinkage to weak regression effects than ridge regression.
The Bayesian LASSO is an alternative to horseshoe regression and ridge regression.
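As an illustration of this use as a shrinkage prior, the following sketch (an assumption for illustration, not taken from the package examples; the data and variable names are hypothetical) uses dlasso as the log-prior for \(J\) regression effects beta in a hand-coded log-posterior:
library(LaplacesDemon)
set.seed(1)
J <- 10; N <- 50
X <- matrix(rnorm(N * J), N, J)      # hypothetical design matrix
y <- rnorm(N)                        # hypothetical response
beta <- rnorm(J)                     # regression effects to be shrunk
sigma <- rhalfcauchy(1, 5)           # residual standard deviation
tau <- rhalfcauchy(J, 5)             # local sparsity, one per effect
lambda <- rhalfcauchy(1, 5)          # global sparsity
LL <- sum(dnorm(y, X %*% beta, sigma, log=TRUE))            # log-likelihood
LP <- LL + sum(dlasso(beta, sigma, tau, lambda, log=TRUE))  # add LASSO log-prior
LP
In a full LaplacesDemon model specification, such a log-prior term would typically enter the model's log-posterior alongside the log-likelihood.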
Park, T. and Casella, G. (2008). "The Bayesian Lasso". Journal of the American Statistical Association, 103, p. 672--680.
Tibshirani, R. (1996). "Regression Shrinkage and Selection via the Lasso". Journal of the Royal Statistical Society, Series B, 58, p. 267--288.
library(LaplacesDemon)
x <- rnorm(100)                    # points at which to evaluate the density
sigma <- rhalfcauchy(1, 5)         # residual standard deviation (scalar)
tau <- rhalfcauchy(100, 5)         # local sparsity hyperparameters (length J = 100)
lambda <- rhalfcauchy(1, 5)        # global sparsity hyperparameter (scalar)
x <- dlasso(x, sigma, tau, lambda, log=TRUE)   # log-density
x <- rlasso(length(tau), sigma, tau, lambda)   # random deviates