h2o (version 3.44.0.3)

h2o.adaBoost: Build an AdaBoost model

Description

Builds an AdaBoost model on an H2OFrame.

Usage

h2o.adaBoost(
  x,
  y,
  training_frame,
  model_id = NULL,
  ignore_const_cols = TRUE,
  categorical_encoding = c("AUTO", "Enum", "OneHotInternal", "OneHotExplicit", "Binary",
    "Eigen", "LabelEncoder", "SortByResponse", "EnumLimited"),
  weights_column = NULL,
  nlearners = 50,
  weak_learner = c("AUTO", "DRF", "GLM", "GBM", "DEEP_LEARNING"),
  learn_rate = 0.5,
  weak_learner_params = NULL,
  seed = -1
)

Value

Creates an H2OModel object of the appropriate type.

Arguments

x

(Optional) A vector containing the names or indices of the predictor variables to use in building the model. If x is missing, then all columns except y are used.

y

The name or column index of the response variable in the data. The response must be either a numeric or a categorical/factor variable. If the response is numeric, then a regression model will be trained, otherwise it will train a classification model.

training_frame

Id of the training data frame.

model_id

Destination id for this model; auto-generated if not specified.

ignore_const_cols

Logical. Ignore constant columns. Defaults to TRUE.

categorical_encoding

Encoding scheme for categorical features. Must be one of: "AUTO", "Enum", "OneHotInternal", "OneHotExplicit", "Binary", "Eigen", "LabelEncoder", "SortByResponse", "EnumLimited". Defaults to AUTO.

weights_column

Column with observation weights. Giving an observation a weight of zero is equivalent to excluding it from the dataset; giving an observation a relative weight of 2 is equivalent to repeating that row twice. Negative weights are not allowed. Note: weights are per-row observation weights and do not increase the size of the data frame. A weight is typically the number of times a row is repeated, but non-integer values are also supported. During training, rows with higher weights matter more because of the larger loss-function pre-factor. If you set weight = 0 for a row, the prediction returned for that row is zero, which is incorrect; to get accurate predictions, remove all rows with weight == 0.
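
A minimal sketch of supplying a weights column (the "weight" column and the weighting rule are illustrative, not part of the prostate dataset; data, predictors, and response are assumed to be set up as in the Examples below):

# Hypothetical example: up-weight rows with PSA above 20, then train with weights_column
data["weight"] <- h2o.ifelse(data["PSA"] > 20, 2, 1)
m <- h2o.adaBoost(x = predictors, y = response, training_frame = data,
                  weights_column = "weight", seed = 1234)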

nlearners

Number of AdaBoost weak learners. Defaults to 50.

weak_learner

Choose a weak learner type. Must be one of: "AUTO", "DRF", "GLM", "GBM", "DEEP_LEARNING". Defaults to AUTO, which currently means DRF.

learn_rate

Learning rate (from 0.0 to 1.0). Defaults to 0.5.

weak_learner_params

Customized parameters for the weak_learner algorithm, e.g. list(ntrees = 3, max_depth = 2, histogram_type = 'UniformAdaptive').
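
A short sketch of selecting a weak learner and passing its parameters (the parameter values are illustrative; data, predictors, and response are assumed to be set up as in the Examples below):

# Use single-split GBM trees (decision stumps) as the weak learner
m <- h2o.adaBoost(x = predictors, y = response, training_frame = data,
                  weak_learner = "GBM",
                  weak_learner_params = list(ntrees = 1, max_depth = 1),
                  seed = 1234)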

seed

Seed for random numbers (affects certain parts of the algorithm that are stochastic; those parts might or might not be enabled by default). Defaults to -1 (time-based random number).

See Also

predict.H2OModel for prediction

Examples

if (FALSE) {
library(h2o)
h2o.init()

# Import the prostate dataset
f <- "https://s3.amazonaws.com/h2o-public-test-data/smalldata/prostate/prostate.csv"
data <- h2o.importFile(f)

# Set predictors and response; set response as a factor
data["CAPSULE"] <- as.factor(data["CAPSULE"])
predictors <- c("AGE","RACE","DPROS","DCAPS","PSA","VOL","GLEASON")
response <- "CAPSULE"

# Train the AdaBoost model
h2o_adaboost <- h2o.adaBoost(x = predictors, y = response, training_frame = data, seed = 1234)
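
# Score the trained model; the training frame is reused here for illustration
pred <- h2o.predict(h2o_adaboost, newdata = data)
head(pred)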
}