grf (version 0.9.5)

quantile_forest: Quantile forest

Description

Trains a regression forest that can be used to estimate quantiles of the conditional distribution of Y given X = x.

Usage

quantile_forest(X, Y, quantiles = c(0.1, 0.5, 0.9),
  regression.splitting = FALSE, sample.fraction = 0.5, mtry = NULL,
  num.trees = 2000, num.threads = NULL, min.node.size = NULL,
  seed = NULL, alpha = 0.05, honesty = TRUE)

Arguments

X

The covariates used in the quantile regression.

Y

The outcome.

quantiles

Vector of quantiles used to calibrate the forest.

regression.splitting

Whether to use regression splits when growing the forest, instead of the specialized quantile-based splits that are used by default. Setting this flag to TRUE corresponds to the approach to quantile forests from Meinshausen (2006).

sample.fraction

Fraction of the data used to build each tree. Note: If honesty is used, these subsamples will further be cut in half (see the sketch after the argument list).

mtry

Number of variables tried for each split.

num.trees

Number of trees grown in the forest. Note: Getting accurate confidence intervals generally requires more trees than getting accurate predictions.

num.threads

Number of threads used in training. If set to NULL, the software automatically selects an appropriate number of threads.

min.node.size

A target for the minimum number of observations in each tree leaf. Note that nodes with size smaller than min.node.size can occur, as in the original randomForest package.

seed

The seed for the C++ random number generator.

alpha

A tuning parameter that controls the maximum imbalance of a split.

honesty

Whether or not honest splitting (i.e., sub-sample splitting) should be used.
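
The note on sample.fraction and honesty above can be made concrete with a small arithmetic sketch (the exact subsampling mechanics are internal to grf and may differ slightly across versions): with the defaults sample.fraction = 0.5 and honesty = TRUE, each tree sees roughly half of the data, and that subsample is split again so that about a quarter of the observations choose the splits while another quarter fills the leaves.

n <- 1000                           # total number of training observations
sample.fraction <- 0.5              # default subsample fraction per tree
subsample <- sample.fraction * n    # ~500 observations drawn for each tree
splitting <- floor(subsample / 2)   # ~250 used to choose splits (honesty = TRUE)
estimation <- subsample - splitting # ~250 used to populate the honest leaves

# With honesty disabled, the full subsample is used for splitting:
# quantile_forest(X, Y, honesty = FALSE)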

Value

A trained quantile forest object.
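
A brief sketch of how the returned object is typically used (assuming a forest q.forest trained as in the Examples below; the internal structure of the forest object and of the prediction output is version-dependent, so it is inspected here rather than assumed):

class(q.forest)                    # S3 class of the trained forest
q.hat <- predict(q.forest, X.test) # one estimate per row of X.test and per quantile
str(q.hat)                         # inspect the prediction output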

Examples

# NOT RUN {
# Generate data.
n = 50; p = 10
X = matrix(rnorm(n*p), n, p)
X.test = matrix(0, 101, p)
X.test[,1] = seq(-2, 2, length.out = 101)
Y = X[,1] * rnorm(n)

# Train a quantile forest.
q.forest = quantile_forest(X, Y, quantiles=c(0.1, 0.5, 0.9))

# Make predictions.
q.hat = predict(q.forest, X.test)

# Make predictions for different quantiles than those used in training.
q.hat = predict(q.forest, X.test, quantiles=c(0.1, 0.9))

# Train a quantile forest using regression splitting instead of quantile-based
# splits, emulating the approach in Meinshausen (2006).
meins.forest = quantile_forest(X, Y, regression.splitting=TRUE)

# Make predictions for the desired quantiles.
q.hat = predict(meins.forest, X.test, quantiles=c(0.1, 0.5, 0.9))
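
# (Added sketch, not part of the original example.) Visualize the estimated
# quantile curves along the first covariate. Depending on the grf version,
# predict() may return the quantile estimates as a plain matrix or as a list
# with a $predictions element, so both cases are handled here.
preds <- if (is.list(q.hat)) q.hat$predictions else q.hat
plot(X.test[, 1], preds[, 2], type = "l", ylim = range(preds),
     xlab = "X1", ylab = "Estimated quantiles of Y")
lines(X.test[, 1], preds[, 1], lty = 2)   # 0.1 quantile
lines(X.test[, 1], preds[, 3], lty = 2)   # 0.9 quantile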
# }
