
Laurae (version 0.0.0.9001)

FeatureLookup: The Non-Linear Feature Engineering Assistant

Description

This function is a major helper for feature engineering, assuming your variables are already well conditioned for 2-way or deeper interactions and you are looking for non-linear relationships. It uses a decision tree (Classification and Regression Trees) and supports factor, integer, and numeric variables.
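As a rough mental model, the function behaves like a heavily regularized rpart fit whose split points are meant to be read, not deployed. Below is a minimal sketch of that idea using rpart directly; the synthetic data, variable names, and control values are illustrative assumptions, not the package's internals.

library(rpart)

# Illustrative sketch: a shallow CART fit whose split points
# expose a non-linear x1/x2 interaction in synthetic data.
set.seed(0)
df <- data.frame(x1 = runif(1000), x2 = runif(1000))
df$y <- as.numeric(df$x1 > 0.5 & df$x2 > 0.5) + rnorm(1000, sd = 0.1)

model <- rpart(y ~ ., data = df, method = "anova",
               control = rpart.control(maxdepth = 3, minsplit = 20,
                                       minbucket = 7, cp = 0.01))
print(model)  # the reported splits recover the x1 > 0.5 / x2 > 0.5 rule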

Usage

FeatureLookup(data, label, ban = NULL, antiban = FALSE, type = "auto",
  split = "information", folds = 5, seed = 0, verbose = TRUE,
  plots = TRUE, max_depth = 4, min_split = max(20, nrow(data)/1000),
  min_bucket = round(min_split/3), min_improve = 0.01,
  competing_splits = 2, surrogate_search = 5, surrogate_type = 2,
  surrogate_style = 0)

Arguments

data
Type: data.frame (preferred) or data.table. Your data, preferably a data.frame; a data.table should also work.
label
Type: vector. Your labels.
ban
Type: character or numeric vector. The names (or column numbers) of variables to ban from the decision tree. Defaults to NULL, which means no variables are banned (all variables are potentially used for the decision tree).
antiban
Type: boolean. Whether the ban should be inverted: if TRUE, the ban becomes a selection, and all variables not listed in ban are banned instead. Defaults to FALSE.
type
Type: character. The type of problem to solve: classification ("class"), regression ("anova"), count ("poisson"), or survival ("exp"). Defaults to "auto", which attempts to infer the base type of model to create (classification or regression) using simple heuristics.
split
Type: character. If a classification task is requested (type = "class"), the split must be set to either "gini" (Gini index) or "information" (Information Gain) as the splitting rule. Defaults to "information", as it is less biased than "gini" with respect to variable cardinality.
folds
Type: integer or list of vectors. The folds to use for cross-validation. If you intend to reuse the same folds repeatedly, it is preferable to provide your own list of folds (a minimal sketch for building one follows this argument list). A numeric vector matching the length of label is also valid. Defaults to 5.
seed
Type: integer. The random seed applied to the decision tree and to fold generation (if required). Defaults to 0.
verbose
Type: boolean. Whether to print debug information about the model. For each node, a maximum of competing_splits + surrogate_search rows will be printed. Defaults to TRUE.
plots
Type: boolean. Whether to plot debug information about the model. If using knitr / R Markdown, two plots are printed: the complexity plot and the decision tree. Without knitr / R Markdown, the plots are drawn in sequence, so make sure you look at both. Defaults to TRUE.
max_depth
Type: numeric. The maximum depth of the decision tree. Do not set it to a large value if the intent is analysis, as interpretability degrades quickly with depth. Defaults to 4. Any value greater than 30 will cause issues on 32-bit operating systems due to the underlying C code.
min_split
Type: integer. The minimum number of observations that must exist in a node for a split to be attempted. If this number is not reached in a node, the node is kept but any further splits are cancelled. Keep it large to avoid overfitting. Defaults to max(20, nrow(data) / 1000), i.e. the maximum of 20 and 0.1% of the number of observations.
min_bucket
Type: integer. The minimum number of observations in a leaf. A split that would produce a leaf with fewer observations than this is rejected. Defaults to round(min_split / 3), which by default means at least 7 observations, or approximately 0.033% of the number of observations.
min_improve
Type: numeric. The minimum fitting improvement required to create a node (the complexity parameter in Classification and Regression Trees). For regression, a split is created and kept only if it increases R-squared by at least min_improve. For classification, the purity (from Gini or Information Gain) must increase by at least min_improve. Defaults to 0.01.
competing_splits
Type: numeric. The number of best splitting rules retained per split. When using verbose = TRUE, up to competing_splits rules are printed per node, provided enough adequate candidates exist (instead of only one splitting rule). This allows the user to look up further details. Defaults to 2.
surrogate_search
Type: numeric. The number of surrogate splits to search for. A greater number means more surrogates are searched for, at the cost of increased computation time. They are also printed when verbose = TRUE. Defaults to 5.
surrogate_type
Type: numeric. Controls how surrogates are used, with three possible values. If set to 0, surrogates are only displayed, and an observation missing the primary split variable is not sent further down the tree. If set to 1, surrogates are used, in order, to split observations missing the primary variable; if all surrogates are missing too, the observation is not split. If set to 2, when all surrogates are missing, the observation is sent in the majority direction (the Breiman et al. behavior). Sparse frames should preferably use 2, as it handles missing values best; it is the default. Set to 0 if you need to ignore missing values as much as possible.
surrogate_style
Type: numeric. Controls the selection of the best surrogate, with two possible values. If set to 0, the total number of correct classifications over all observations is used to rank a potential surrogate, which penalizes variables with many missing values. If set to 1, the percentage correct computed over only the non-missing values of the surrogate is used, so missing values are removed from the computation. Defaults to 0. Set to 1 if you need missing values to weigh as little as possible.
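As mentioned for the folds argument, a reproducible set of folds can be built directly in base R. A minimal sketch, assuming folds expects one vector of held-out observation indices per fold (my_folds is a hypothetical name):

# Build 5 reproducible folds as a list of index vectors.
set.seed(11111)
n <- length(label)  # 'label' stands for your label vector
fold_id <- sample(rep(seq_len(5), length.out = n))
my_folds <- split(seq_len(n), fold_id)

# FeatureLookup(data, label, folds = my_folds, seed = 11111)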

Value

The fitted rpart model.
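Because the return value is a plain rpart object, all of rpart's usual inspection helpers apply to it. A quick sketch, where model is a placeholder name for the returned object:

# model <- FeatureLookup(data, label)
# printcp(model)   # complexity table, one row per split
# plotcp(model)    # cross-validated complexity plot
# summary(model)   # competing and surrogate splits for each node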

Details

To use this function properly, you should set max_depth to a very small value (such as 3) to ensure interpretability. Moreover, if you have a sparse frame (with a lot of missing values), it is important to keep an eye on surrogate_type and surrogate_style, as they dictate how split points are made in the presence of missing values. The default values handle them appropriately. However, if your intent is to penalize missing values (for instance, if missing values are anomalies), changing their values to 0 and 1 respectively is recommended.
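If the Laurae package is unavailable, the "penalize missing values" setting described above can be approximated with rpart directly. A hedged sketch, assuming surrogate_type and surrogate_style map onto rpart.control's usesurrogate and surrogatestyle parameters:

library(rpart)

# usesurrogate = 0: observations missing the primary split variable
# stop at that node instead of descending through a surrogate.
# surrogatestyle = 1: surrogates are ranked by percent correct over
# their non-missing values.
ctrl <- rpart.control(maxdepth = 3, usesurrogate = 0, surrogatestyle = 1)
# model <- rpart(label ~ ., data = data, control = ctrl)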

Examples

## Not run:
# An example of a heavily regularized decision tree
# Settings are intentionally difficult enough for a decision tree
# This way, only great split points are reported
FeatureLookup(data,
              label,
              ban = c("CAR", "TOBACCO"),
              antiban = FALSE,
              type = "anova",
              folds = 20,
              seed = 11111,
              verbose = TRUE,
              plots = TRUE,
              max_depth = 3,
              min_split = 1000,
              min_bucket = 200,
              min_improve = 0.10,
              competing_splits = 10,
              surrogate_search = 10,
              surrogate_type = 2,
              surrogate_style = 0)
## End(Not run)
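For a variant that runs as-is, the same call can be pointed at a built-in dataset. This is a hedged sketch: mtcars replaces the user's data, mpg serves as the label, gear and carb stand in for the hypothetical CAR and TOBACCO columns, and the regularization is relaxed to suit the small dataset.

# A self-contained variant on the built-in mtcars dataset
FeatureLookup(data = mtcars[, setdiff(names(mtcars), "mpg")],
              label = mtcars$mpg,
              ban = c("gear", "carb"),
              type = "anova",
              folds = 5,
              seed = 11111,
              max_depth = 3,
              min_split = 5,
              min_bucket = 2,
              min_improve = 0.01)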
