leaps() performs an exhaustive search for the best subsets of the
variables in x for predicting y in linear regression, using an efficient
branch-and-bound algorithm. It is a compatibility wrapper for
regsubsets, which does the same thing better.
Since the algorithm returns a best model of each size, the results do not depend on a penalty model for model size: it doesn't make any difference whether you want to use AIC, BIC, CIC, DIC, ...
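As a sketch of this point (assuming the leaps package is installed): the best subset of each size minimizes the residual sum of squares for that size, so the same subsets are selected whichever statistic is requested; only the reported statistic differs.

```r
library(leaps)

set.seed(42)
x <- matrix(rnorm(100), ncol = 4)
y <- rnorm(25)

# Best single model of each size under two different statistics
out_cp <- leaps(x, y, method = "Cp", nbest = 1)
out_r2 <- leaps(x, y, method = "adjr2", nbest = 1)

# The selected subsets agree; the method argument only changes
# which statistic is reported alongside them
all(out_cp$which == out_r2$which)
```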
Usage

leaps(x=, y=, wt=rep(1, NROW(x)), int=TRUE,
      method=c("Cp", "adjr2", "r2"), nbest=10,
      names=NULL, df=NROW(x), strictly.compatible=TRUE)
Value

A list with components:

which: logical matrix. Each row can be used to select the columns of x in the respective model.
size: number of variables, including the intercept if any, in the model.
cp (or adjr2 or r2): the value of the chosen model selection statistic for each model.
label: vector of names for the columns of x.
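A brief sketch of how a row of the which matrix can be used (assuming the leaps package is available): index the columns of x with it to refit the corresponding model.

```r
library(leaps)

set.seed(1)
x <- matrix(rnorm(100), ncol = 4)
y <- rnorm(25)

out <- leaps(x, y, nbest = 1)

# size counts the intercept, so the two-predictor model has size 3
keep <- out$which[out$size == 3, ]
fit  <- lm(y ~ x[, keep])
coef(fit)
```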
Arguments

x: a matrix of predictors.
y: a response vector.
wt: optional weight vector.
int: add an intercept to the model.
method: calculate Cp, adjusted R-squared, or R-squared.
nbest: number of subsets of each size to report.
names: vector of names for the columns of x.
df: total degrees of freedom to use instead of nrow(x) in calculating Cp and adjusted R-squared.
strictly.compatible: implement misfeatures of leaps() in S.
References

Alan Miller, "Subset Selection in Regression", Chapman & Hall.
See Also

regsubsets, regsubsets.formula, regsubsets.default
Examples

x <- matrix(rnorm(100), ncol = 4)
y <- rnorm(25)
leaps(x, y)