
Laurae (version 0.0.0.9001)

interactive.eda_tree: Interactive Dashboard for the Non-Linear Feature Engineering Assistant

Description

This function is a massive helper for feature engineering, assuming your variables are already well conditioned for 2-way or deeper interactions and you are looking for non-linear relationships. It fits a decision tree (Classification and Regression Trees) and supports factor, integer, and numeric variables.

Usage

interactive.eda_tree(data, label = "!!!!! SELECT ME !!!!!", ban = NULL,
  antiban = "Yes", type = "auto", split = "information", folds = 5,
  seed = 0, verbose = TRUE, plots = TRUE, max_depth = 4,
  min_split = max(20, nrow(data)/1000), min_bucket = round(min_split/3),
  min_improve = 0.01, competing_splits = 2, surrogate_search = 5,
  surrogate_type = 2, surrogate_style = 0, tree_back = "red",
  gain_back = "red", rules_back = "red", details_back = "red",
  f_back = "red", side_width = 300, tree_height = 580,
  gain_height = 200)

Arguments

data
Type: data.frame (preferred) or data.table. Your data; a data.frame is preferred, but a data.table should also work.
label
Type: character. The name of the label feature in the data. Defaults to "!!!!! SELECT ME !!!!!".
ban
Type: vector of characters or of numerics. The names (or column numbers) of variables to be banned from the decision tree. Defaults to NULL, which means no variables are banned (all variables are potentially used for the decision tree).
antiban
Type: character. Whether the banned variable selection should be inverted: if "Yes", the ban turns into a selection, which bans all variables other than those listed in ban. Defaults to "Yes".
type
Type: character. The type of problem to solve. Either classification ("class"), regression ("anova"), count ("poisson"), or survival ("exp"). Defaults to "auto", which attempts to detect the base model type (classification / regression) to create using simple heuristics.
split
Type: character. If a classification task has been requested (type = "class"), then the split must be set to either "gini" (for the Gini index) or "information" (for Information Gain) as the splitting rule. Defaults to "information", as it is less biased than "gini" with respect to feature cardinality.
folds
Type: integer or character. The folds to use for cross-validation. If you intend to keep the same folds over and over, it is preferable to provide your own folds (for instance, the name of a variable holding fold assignments). A numeric vector matching the length of the label is also valid (see the sketch after this argument list).
seed
Type: integer. The random seed applied to the decision tree and the fold generation (if required).
verbose
Type: boolean. Whether to print debug information about the model. For each node, a maximum of competing_splits + surrogate_search rows will be printed. Defaults to TRUE.
plots
Type: boolean. Whether to plot debug information about the model. If using knitr / Rmarkdown, you will have two plots printed: the complexity plot, and the decision tree. Without knitr / Rmarkdown, make sure you look at both. Defaults to TRUE.
max_depth
Type: numeric. The maximum depth of the decision tree. Do not set it to a large value if the intent is interpretable analysis. Defaults to 4. Any value greater than 30 will cause issues on 32-bit operating systems due to the underlying C code.
min_split
Type: integer. The minimum number of observations in a node to allow a split to be made. If this number is not reached in a node, the node is kept but any other potential splits are cancelled. Keep it large to avoid overfitting. Defaults to max(20, nrow(data) / 1000), which is the maximum between 20 and the 0.1% of the number of observations.
min_bucket
Type: integer. The minimum number of observations in a leaf. If this number is not reached in a leaf, the leaf is destroyed. Defaults to round(min_split/3), which by default means at least 7 observations, or approximately 0.033% of the number of observations.
min_improve
Type: numeric. The minimum fitting improvement to create a node (complexity parameter in Classification and Regression Trees). For regression, the requirement for a leaf to be created and kept is an R-squared increase by at least min_improve. For classification, the purity (issued from Gini or Information Gain) must increase by at least min_improve.
competing_splits
Type: numeric. The number of best splitting rules retained per split. When verbose = TRUE, each node will have up to competing_splits rules printed (instead of only one splitting rule), if they are adequate. This allows the user to look up more details. Defaults to 2.
surrogate_search
Type: numeric. The number of surrogate splits to look for. A greater number means more surrogates will be looked for, but increased computation time is required. They are also printed when verbose = TRUE. Defaults to 5.
surrogate_type
Type: numeric. Controls the use of surrogate splits, with three possible values. If set to 0, surrogates are only displayed, and observations with a missing primary split variable are not split further. If set to 1, surrogates are used to split observations missing the primary split variable, but an observation missing all surrogates is not split. If set to 2, an observation missing all surrogates is sent in the majority direction (Breiman's rule). Sparse frames should preferably use 2. The default is 2, as it handles missing values best. Set it to 0 if you need to ignore missing values as much as possible.
surrogate_style
Type: numeric. Controls the selection of the best surrogate, with two values. If set to 1, missing values are removed before computing the correctness of the surrogate (percentage of correct classifications over non-missing observations). If set to 0, missing values are not removed and the correctness is computed over all observations. Defaults to 0. Set it to 1 if you need to ignore missing values as much as possible.
tree_back
Type: character. A background color character for the tree plot. Defaults to "red".
gain_back
Type: character. A background color character for the gain plot. Defaults to "red".
rules_back
Type: character. A background color character for the rules. Defaults to "red".
details_back
Type: character. A background color character for the details. Defaults to "red".
f_back
Type: character. A background color character for the header. Defaults to "red".
side_width
Type: numeric. The width of the sidebar containing variable names. Defaults to 300.
tree_height
Type: numeric. The maximum height for the tree plot. Defaults to 580 vertical pixels, which fits nicely on Full HD screens.
gain_height
Type: numeric. The maximum height for the gain plot. Defaults to 200 vertical pixels, which fits nicely on Full HD screens.
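
As an illustration of the folds, min_split, and min_bucket arguments above, the following sketch builds a reusable fold vector and reproduces the default node-size arithmetic on the faithful dataset. The fold construction shown here is only an example of a valid input, not the function's internal fold-generation scheme.

library(datasets)
data(faithful)

# A numeric fold vector the same length as the label column:
# 5 folds assigned at random, with a fixed seed for reproducibility.
set.seed(0)
my_folds <- sample(rep(1:5, length.out = nrow(faithful)))

# Default node-size heuristics: max(20, 0.1% of rows) observations to allow
# a split, and about a third of that per leaf.
default_min_split  <- max(20, nrow(faithful) / 1000)   # 20 for faithful (272 rows)
default_min_bucket <- round(default_min_split / 3)     # 7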

Value

The fitted rpart model.
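
Since the return value is a standard rpart object, the usual rpart tooling applies to it afterwards. A minimal sketch follows; the call to interactive.eda_tree() is commented out because it opens an interactive dashboard, and the label column "waiting" is only a hypothetical choice for the faithful dataset.

# library(rpart)
# model <- interactive.eda_tree(data = faithful, label = "waiting")
# printcp(model)   # cross-validated complexity table
# summary(model)   # splits, surrogate splits, variable importance
# predict(model)   # fitted values from the returned rpart model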

Details

To use this function properly, you need to set max_depth to a very small value (like 3). This ensures interpretability. Moreover, if you have a sparse frame (with a lot of missing values), it is important to keep an eye on surrogate_type and surrogate_style, as they dictate whether a split point is made depending on the missing values. The default values handle them appropriately. However, if your intent is to penalize missing values (for instance if missing values are anomalies), changing their values to 0 and 1 respectively is recommended (see the sketch after the color lists below). The colors (tree_back, gain_back, rules_back, details_back) allowed are the following:
red, yellow, aqua, blue, light-blue, green, navy, teal, olive, lime, orange, fuchsia, purple, maroon, black.
The colors allowed for the header (f_back) are the following: blue, black, purple, green, red, yellow.
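
For the sparse-frame case discussed above, a hedged sketch of how the surrogate arguments could be changed to penalize missing values; my_sparse_frame and "target" are hypothetical placeholders, and all other arguments keep their defaults.

# model <- interactive.eda_tree(data = my_sparse_frame,  # hypothetical sparse data.frame
#                               label = "target",        # hypothetical label column
#                               surrogate_type = 0,      # do not use surrogates for missing values
#                               surrogate_style = 1)     # score surrogates over non-missing rows only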

Examples

## Not run: ------------------------------------
# library(shiny)
# library(shinydashboard)
# library(rpart)
# library(rpart.plot)
# library(partykit)
# library(datasets)
# data(faithful)
# interactive.eda_tree(data = faithful,
#                      label = "!!!!! SELECT ME !!!!!",
#                      ban = NULL,
#                      antiban = "Yes",
#                      type = "auto",
#                      split = "information",
#                      folds = 5,
#                      seed = 0,
#                      verbose = TRUE,
#                      plots = TRUE,
#                      max_depth = 4,
#                      min_split = max(20, nrow(faithful)/1000),
#                      min_bucket = round(max(20, nrow(faithful)/1000)/3),
#                      min_improve = 0.01,
#                      competing_splits = 2,
#                      surrogate_search = 5,
#                      surrogate_type = 2,
#                      surrogate_style = 0,
#                      tree_back = "red",
#                      gain_back = "red",
#                      rules_back = "red",
#                      details_back = "red",
#                      f_back = "red",
#                      side_width = 300,
#                      tree_height = 580,
#                      gain_height = 200)
## ---------------------------------------------
