Builds a Random Forest model on an H2OFrame.

Usage
h2o.randomForest(
  x,
  y,
  training_frame,
  model_id = NULL,
  validation_frame = NULL,
  nfolds = 0,
  keep_cross_validation_models = TRUE,
  keep_cross_validation_predictions = FALSE,
  keep_cross_validation_fold_assignment = FALSE,
  score_each_iteration = FALSE,
  score_tree_interval = 0,
  fold_assignment = c("AUTO", "Random", "Modulo", "Stratified"),
  fold_column = NULL,
  ignore_const_cols = TRUE,
  offset_column = NULL,
  weights_column = NULL,
  balance_classes = FALSE,
  class_sampling_factors = NULL,
  max_after_balance_size = 5,
  ntrees = 50,
  max_depth = 20,
  min_rows = 1,
  nbins = 20,
  nbins_top_level = 1024,
  nbins_cats = 1024,
  r2_stopping = Inf,
  stopping_rounds = 0,
  stopping_metric = c("AUTO", "deviance", "logloss", "MSE", "RMSE", "MAE", "RMSLE",
    "AUC", "AUCPR", "lift_top_group", "misclassification", "mean_per_class_error",
    "custom", "custom_increasing"),
  stopping_tolerance = 0.001,
  max_runtime_secs = 0,
  seed = -1,
  build_tree_one_node = FALSE,
  mtries = -1,
  sample_rate = 0.632,
  sample_rate_per_class = NULL,
  binomial_double_trees = FALSE,
  checkpoint = NULL,
  col_sample_rate_change_per_level = 1,
  col_sample_rate_per_tree = 1,
  min_split_improvement = 1e-05,
  histogram_type = c("AUTO", "UniformAdaptive", "Random", "QuantilesGlobal",
    "RoundRobin", "UniformRobust"),
  categorical_encoding = c("AUTO", "Enum", "OneHotInternal", "OneHotExplicit",
    "Binary", "Eigen", "LabelEncoder", "SortByResponse", "EnumLimited"),
  calibrate_model = FALSE,
  calibration_frame = NULL,
  calibration_method = c("AUTO", "PlattScaling", "IsotonicRegression"),
  distribution = c("AUTO", "bernoulli", "multinomial", "gaussian", "poisson", "gamma",
    "tweedie", "laplace", "quantile", "huber"),
  custom_metric_func = NULL,
  export_checkpoints_dir = NULL,
  check_constant_response = TRUE,
  gainslift_bins = -1,
  auc_type = c("AUTO", "NONE", "MACRO_OVR", "WEIGHTED_OVR", "MACRO_OVO",
    "WEIGHTED_OVO"),
  verbose = FALSE
)
Value

Creates an H2OModel object of the right type.
Arguments

x: (Optional) A vector containing the names or indices of the predictor variables to use in building the model. If x is missing, then all columns except y are used.
y: The name or column index of the response variable in the data. The response must be either a numeric or a categorical/factor variable. If the response is numeric, a regression model is trained; otherwise a classification model is trained.
training_frame: Id of the training data frame.
model_id: Destination id for this model; auto-generated if not specified.
validation_frame: Id of the validation data frame.
nfolds: Number of folds for K-fold cross-validation (0 to disable or >= 2). Defaults to 0.
keep_cross_validation_models: Logical. Whether to keep the cross-validation models. Defaults to TRUE.
keep_cross_validation_predictions: Logical. Whether to keep the predictions of the cross-validation models. Defaults to FALSE.
keep_cross_validation_fold_assignment: Logical. Whether to keep the cross-validation fold assignment. Defaults to FALSE.
score_each_iteration: Logical. Whether to score during each iteration of model training. Defaults to FALSE.
score_tree_interval: Score the model after every so many trees. Disabled if set to 0. Defaults to 0.
fold_assignment: Cross-validation fold assignment scheme, used if fold_column is not specified. The 'Stratified' option stratifies the folds based on the response variable for classification problems. Must be one of: "AUTO", "Random", "Modulo", "Stratified". Defaults to AUTO.
fold_column: Column with the cross-validation fold index assignment per observation.
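As a sketch of how the cross-validation arguments fit together (assuming library(h2o), a running cluster, and the cars frame, predictors, and response defined in the Examples section below):

# 5-fold CV, stratified on the (categorical) response; keep the per-fold
# holdout predictions so they can be reused, e.g. for stacked ensembles
drf_cv <- h2o.randomForest(x = predictors, y = response,
                           training_frame = cars,
                           nfolds = 5,
                           fold_assignment = "Stratified",
                           keep_cross_validation_predictions = TRUE,
                           seed = 1234)
cv_models <- h2o.cross_validation_models(drf_cv)       # the 5 fold models
cv_preds  <- h2o.cross_validation_predictions(drf_cv)  # their holdout predictions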
ignore_const_cols: Logical. Ignore constant columns. Defaults to TRUE.
offset_column: Offset column. This argument is deprecated and has no use for Random Forest.
weights_column: Column with observation weights. Giving some observation a weight of zero is equivalent to excluding it from the dataset; giving an observation a relative weight of 2 is equivalent to repeating that row twice. Negative weights are not allowed. Note: Weights are per-row observation weights and do not increase the size of the data frame. This is typically the number of times a row is repeated, but non-integer values are supported as well. During training, rows with higher weights matter more, due to the larger loss-function pre-factor. If you set weight = 0 for a row, the returned prediction for that row is zero, which is incorrect; to get accurate predictions, remove all rows with weight == 0.
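A minimal weights_column sketch, again assuming the cars frame from the Examples section; the row_w column and the weighting rule are purely illustrative:

# Hypothetically give pre-1975 cars twice the weight of later cars
cars["row_w"] <- h2o.ifelse(cars["year"] < 75, 2, 1)
drf_wt <- h2o.randomForest(x = predictors, y = response,
                           training_frame = cars,
                           weights_column = "row_w",  # per-row weights, not a predictor
                           seed = 1234)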
balance_classes: Logical. Balance training data class counts via over/under-sampling (for imbalanced data). Defaults to FALSE.
class_sampling_factors: Desired over/under-sampling ratios per class (in lexicographic order). If not specified, sampling factors will be automatically computed to obtain class balance during training. Requires balance_classes.
max_after_balance_size: Maximum relative size of the training data after balancing class counts (can be less than 1.0). Requires balance_classes. Defaults to 5.0.
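If the two classes of economy_20mpg were heavily imbalanced, rebalancing might look like the sketch below; the sampling factors and size cap are illustrative, not recommendations:

drf_bal <- h2o.randomForest(x = predictors, y = response,
                            training_frame = cars,
                            balance_classes = TRUE,
                            # optional explicit per-class ratios, lexicographic class order
                            class_sampling_factors = c(0.5, 1.5),
                            # cap the rebalanced data at 3x the original size
                            max_after_balance_size = 3,
                            seed = 1234)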
ntrees: Number of trees. Defaults to 50.
max_depth: Maximum tree depth (0 for unlimited). Defaults to 20.
min_rows: Fewest allowed (weighted) observations in a leaf. Defaults to 1.
nbins: For numerical columns (real/int), build a histogram of (at least) this many bins, then split at the best point. Defaults to 20.
nbins_top_level: For numerical columns (real/int), build a histogram of (at most) this many bins at the root level, then decrease by a factor of two per level. Defaults to 1024.
nbins_cats: For categorical columns (factors), build a histogram of this many bins, then split at the best point. Higher values can lead to more overfitting. Defaults to 1024.
r2_stopping: r2_stopping is no longer supported and will be ignored if set; please use stopping_rounds, stopping_metric, and stopping_tolerance instead. Previous versions of H2O would stop making trees when the R^2 metric equaled or exceeded this value. Defaults to 1.797693135e+308.
stopping_rounds: Early stopping based on convergence of stopping_metric. Stop if the simple moving average of length k of the stopping_metric does not improve for k (= stopping_rounds) scoring events (0 to disable). Defaults to 0.
stopping_metric: Metric to use for early stopping (AUTO: logloss for classification, deviance for regression, and anomaly_score for Isolation Forest). Note that custom and custom_increasing can only be used in GBM and DRF with the Python client. Must be one of: "AUTO", "deviance", "logloss", "MSE", "RMSE", "MAE", "RMSLE", "AUC", "AUCPR", "lift_top_group", "misclassification", "mean_per_class_error", "custom", "custom_increasing". Defaults to AUTO.
stopping_tolerance: Relative tolerance for the metric-based stopping criterion (stop if the relative improvement is not at least this much). Defaults to 0.001.
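A sketch of metric-based early stopping against a held-out validation frame (the split ratio and thresholds are illustrative):

splits <- h2o.splitFrame(cars, ratios = 0.8, seed = 1234)
drf_es <- h2o.randomForest(x = predictors, y = response,
                           training_frame = splits[[1]],
                           validation_frame = splits[[2]],
                           ntrees = 500,             # upper bound; stopping may end earlier
                           score_tree_interval = 5,  # score every 5 trees
                           stopping_rounds = 3,      # moving average over 3 scoring events
                           stopping_metric = "AUC",
                           stopping_tolerance = 1e-3,
                           seed = 1234)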
max_runtime_secs: Maximum allowed runtime in seconds for model training. Use 0 to disable. Defaults to 0.
seed: Seed for random numbers (affects certain parts of the algo that are stochastic and those might or might not be enabled by default). Defaults to -1 (time-based random number).
build_tree_one_node: Logical. Run on one node only; no network overhead but fewer CPUs used. Suitable for small datasets. Defaults to FALSE.
mtries: Number of variables randomly sampled as candidates at each split. If set to -1, defaults to sqrt(p) for classification and p/3 for regression (where p is the number of predictors). Defaults to -1.
sample_rate: Row sample rate per tree (from 0.0 to 1.0). Defaults to 0.632.
sample_rate_per_class: A list of row sample rates per class (relative fraction for each class, from 0.0 to 1.0), for each tree.
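A sketch combining the per-split and per-tree sampling controls (the values are illustrative, assuming the 5 predictors from the Examples section):

drf_samp <- h2o.randomForest(x = predictors, y = response,
                             training_frame = cars,
                             mtries = 3,         # try 3 of the 5 predictors per split
                             sample_rate = 0.8,  # each tree sees 80% of the rows
                             # or, per class (lexicographic order) instead of sample_rate:
                             # sample_rate_per_class = c(0.5, 1.0),
                             seed = 1234)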
binomial_double_trees: Logical. For binary classification: build 2x as many trees (one per class); can lead to higher accuracy. Defaults to FALSE.
checkpoint: Model checkpoint to resume training with.
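A hedged sketch of checkpointing: train 50 trees, then resume the same model out to 100 trees (the other tree parameters must stay identical between the two calls):

drf_50 <- h2o.randomForest(x = predictors, y = response,
                           training_frame = cars,
                           ntrees = 50, seed = 1234,
                           model_id = "drf_cars_50")
drf_100 <- h2o.randomForest(x = predictors, y = response,
                            training_frame = cars,
                            ntrees = 100,  # must exceed the checkpointed ntrees
                            seed = 1234,
                            checkpoint = "drf_cars_50")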
col_sample_rate_change_per_level: Relative change of the column sampling rate for every level (must be > 0.0 and <= 2.0). Defaults to 1.
col_sample_rate_per_tree: Column sample rate per tree (from 0.0 to 1.0). Defaults to 1.
min_split_improvement: Minimum relative improvement in squared error reduction for a split to happen. Defaults to 1e-05.
histogram_type: What type of histogram to use for finding optimal split points. Must be one of: "AUTO", "UniformAdaptive", "Random", "QuantilesGlobal", "RoundRobin", "UniformRobust". Defaults to AUTO.
categorical_encoding: Encoding scheme for categorical features. Must be one of: "AUTO", "Enum", "OneHotInternal", "OneHotExplicit", "Binary", "Eigen", "LabelEncoder", "SortByResponse", "EnumLimited". Defaults to AUTO.
calibrate_model: Logical. Use Platt Scaling (default) or Isotonic Regression to calculate calibrated class probabilities. Calibration can provide more accurate estimates of class probabilities. Defaults to FALSE.
calibration_frame: Data for model calibration.
calibration_method: Calibration method to use. Must be one of: "AUTO", "PlattScaling", "IsotonicRegression". Defaults to AUTO.
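A sketch of Platt Scaling calibration on a separate calibration frame (the three-way split is illustrative; the response must be binary, as economy_20mpg is):

splits <- h2o.splitFrame(cars, ratios = c(0.7, 0.15), seed = 1234)
drf_cal <- h2o.randomForest(x = predictors, y = response,
                            training_frame = splits[[1]],
                            calibrate_model = TRUE,
                            calibration_frame = splits[[2]],
                            calibration_method = "PlattScaling",
                            seed = 1234)
# Predictions on splits[[3]] should then carry extra calibrated
# probability columns (cal_p0, cal_p1) alongside p0 and p1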
distribution: Distribution. This argument is deprecated and has no use for Random Forest.
custom_metric_func: Reference to a custom evaluation function, format: `language:keyName=funcName`.
export_checkpoints_dir: Automatically export generated models to this directory.
check_constant_response: Logical. Check if the response column is a constant value. If enabled, an exception is thrown if the response column is constant. If disabled, the model trains regardless of whether the response column is constant. Defaults to TRUE.
gainslift_bins: Gains/Lift table number of bins. 0 means disabled. The default value of -1 means automatic binning. Defaults to -1.
auc_type: Set the default multinomial AUC type. Must be one of: "AUTO", "NONE", "MACRO_OVR", "WEIGHTED_OVR", "MACRO_OVO", "WEIGHTED_OVO". Defaults to AUTO.
verbose: Logical. Print scoring history to the console (metrics per tree). Defaults to FALSE.
See Also

predict.H2OModel for prediction.
Examples

if (FALSE) {
library(h2o)
h2o.init()

# Import the cars dataset
f <- "https://s3.amazonaws.com/h2o-public-test-data/smalldata/junit/cars_20mpg.csv"
cars <- h2o.importFile(f)

# Set predictors and response; set response as a factor
cars["economy_20mpg"] <- as.factor(cars["economy_20mpg"])
predictors <- c("displacement", "power", "weight", "acceleration", "year")
response <- "economy_20mpg"

# Train the DRF model
cars_drf <- h2o.randomForest(x = predictors, y = response,
                             training_frame = cars, nfolds = 5,
                             seed = 1234)
}
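As a follow-on to the example above, predictions and metrics come from predict.H2OModel and h2o.performance (a minimal sketch):

if (FALSE) {
# Aggregated cross-validation metrics for the model trained above
h2o.performance(cars_drf, xval = TRUE)

# Score a frame with the same columns (here, the training frame itself)
pred <- h2o.predict(cars_drf, newdata = cars)
head(pred)  # predict, p0, p1
}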