Various parameters that control aspects of the `ctree' fit.
ctree_control(teststat = c("quadratic", "maximum"),
splitstat = c("quadratic", "maximum"),
splittest = FALSE,
testtype = c("Bonferroni", "MonteCarlo", "Univariate", "Teststatistic"),
pargs = GenzBretz(),
nmax = c(yx = Inf, z = Inf), alpha = 0.05, mincriterion = 1 - alpha,
logmincriterion = log(mincriterion), minsplit = 20L, minbucket = 7L,
minprob = 0.01, stump = FALSE, maxvar = Inf, lookahead = FALSE,
MIA = FALSE, nresample = 9999L,
tol = sqrt(.Machine$double.eps), maxsurrogate = 0L, numsurrogate = FALSE,
mtry = Inf, maxdepth = Inf,
multiway = FALSE, splittry = 2L, intersplit = FALSE, majority = FALSE,
caseweights = TRUE, applyfun = NULL, cores = NULL, saveinfo = TRUE,
update = NULL, splitflavour = c("ctree", "exhaustive"))
A list.
teststat: a character specifying the type of the test statistic to be applied for variable selection.
splitstat: a character specifying the type of the test statistic to be applied for splitpoint selection. Prior to version 1.2-0, only maximum was implemented.
splittest: a logical changing linear (the default FALSE) to maximally selected statistics for variable selection. Currently requires testtype = "MonteCarlo".
testtype: a character specifying how to compute the distribution of the test statistic. The first three options use p-values as the criterion; Teststatistic uses the raw statistic as the criterion. Bonferroni and Univariate relate to p-values from the asymptotic distribution (adjusted or unadjusted). Bonferroni-adjusted Monte-Carlo p-values are computed when both Bonferroni and MonteCarlo are given (see the first example after this argument list).
pargs: control parameters for the computation of multivariate normal probabilities, see GenzBretz.
nmax: an integer of length two defining the number of bins each variable (in the response yx and the partitioning variables z) is divided into prior to tree building. The default Inf does not apply any binning. Highly experimental, use at your own risk.
alpha: a double, the significance level for variable selection.
mincriterion: the value of the test statistic or 1 - p-value that must be exceeded in order to implement a split.
logmincriterion: the value of the test statistic or 1 - p-value, on the log scale, that must be exceeded in order to implement a split.
minsplit: the minimum sum of weights in a node in order to be considered for splitting.
minbucket: the minimum sum of weights in a terminal node.
minprob: the proportion of observations needed to establish a terminal node.
stump: a logical determining whether a stump (a tree with a maximum of three nodes only) is to be computed.
maxvar: the maximum number of variables the tree is allowed to split in.
lookahead: a logical determining whether a split is implemented only after checking if tests in both daughter nodes can be performed.
MIA: a logical determining the treatment of NA as a category in splits, see Twala et al. (2008).
nresample: the number of permutations for testtype = "MonteCarlo".
tol: tolerance for zero variances.
maxsurrogate: the number of surrogate splits to evaluate.
numsurrogate: a logical for backward compatibility with party. If TRUE, only variables that are at least ordered (i.e., ordered or numeric) are considered for surrogate splits.
mtry: the number of input variables randomly sampled as candidates at each node for random-forest-like algorithms. The default mtry = Inf means that no random selection takes place. If ctree_control is used in cforest, this argument is ignored.
maxdepth: the maximum depth of the tree. The default maxdepth = Inf means that no restrictions are applied to tree sizes.
multiway: a logical indicating if multiway splits for all factor levels are implemented for unordered factors.
splittry: the number of variables that are inspected for admissible splits if the best split does not meet the sample size constraints.
intersplit: a logical indicating if splits in numeric variables are simply x <= a (the default) or interpolated as x <= (a + b) / 2. The latter feature is experimental, see Galili and Meilijson (2016).
majority: if FALSE (the default), observations which cannot be classified to a daughter node because of missing information are randomly assigned (following the node distribution). If TRUE, they go with the majority (the default in the first implementation, ctree in package party).
caseweights: a logical interpreting weights as case weights.
applyfun: an optional lapply-style function with arguments function(X, FUN, ...). It is used for computing the variable selection criterion. The default is to use the basic lapply function unless the cores argument is specified (see below). If ctree_control is used in cforest, this argument is ignored.
cores: numeric. If set to an integer, applyfun is set to mclapply with the desired number of cores. If ctree_control is used in cforest, this argument is ignored.
saveinfo: logical. Store information about the variable selection procedure in the info slot of each partynode.
update: logical. If TRUE, the data transformation is updated in every node. The default always was, and still is, not to update unless ytrafo is a function.
splitflavour: use exhaustive search over splits instead of maximally selected statistics (ctree). This feature may change.
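As a minimal sketch of how these arguments fit together (partykit and the built-in airquality data are assumed; the argument values are illustrative, not defaults), Bonferroni-adjusted Monte-Carlo p-values are requested by giving both options to testtype:

library("partykit")
airq <- subset(airquality, !is.na(Ozone))
## Bonferroni-adjusted Monte-Carlo p-values: both options given at once;
## 999 permutations and a tighter significance level (illustrative values)
ct <- ctree(Ozone ~ ., data = airq,
            control = ctree_control(testtype = c("Bonferroni", "MonteCarlo"),
                                    nresample = 999L, alpha = 0.01))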
The arguments teststat, testtype and mincriterion determine how the global null hypothesis of independence between all input variables and the response is tested (see ctree). The variable with the most extreme p-value or test statistic is selected for splitting. If this is not possible due to the sample size constraints explained in the next paragraph, up to splittry other variables are inspected for possible splits.
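For instance, a minimal sketch of selecting variables by the raw statistic rather than a p-value (the cutoff 2.5 is an arbitrary illustrative value, not a recommendation):

library("partykit")
airq <- subset(airquality, !is.na(Ozone))
## raw maximum-type statistic as criterion; mincriterion then acts as a
## cutoff on the statistic itself rather than on 1 - p-value
ct_raw <- ctree(Ozone ~ ., data = airq,
                control = ctree_control(teststat = "maximum",
                                        testtype = "Teststatistic",
                                        mincriterion = 2.5))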
A split is established when all of the following criteria are met:
1) the sum of the weights in the current node is larger than minsplit,
2) a fraction of the sum of weights of more than minprob will be contained in all daughter nodes,
3) the sum of the weights in all daughter nodes exceeds minbucket, and
4) the depth of the tree is smaller than maxdepth.
This avoids pathological splits deep down the tree.
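A sketch with all four constraints set at once (the values are chosen only for illustration):

library("partykit")
airq <- subset(airquality, !is.na(Ozone))
## nodes with a weight sum below 40 are not split; every daughter must
## receive at least 15 observations and more than 10% of the node's
## weights; splitting stops at depth 2
ct_small <- ctree(Ozone ~ ., data = airq,
                  control = ctree_control(minsplit = 40L, minbucket = 15L,
                                          minprob = 0.1, maxdepth = 2L))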
When stump = TRUE, a tree with at most two terminal nodes is computed.
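For example (iris is used only for illustration):

library("partykit")
## a stump: a single split, hence at most two terminal nodes
st <- ctree(Species ~ ., data = iris,
            control = ctree_control(stump = TRUE))
print(st)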
The argument mtry set to a finite value means that a random-forest-like `variable selection', i.e., a random selection of mtry input variables, is performed in each node.
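A sketch (set.seed makes the random screening reproducible; mtry = 2 is an illustrative choice for the four iris predictors):

library("partykit")
set.seed(290875)
## only two randomly chosen inputs are screened in each node
ct_mtry <- ctree(Species ~ ., data = iris,
                 control = ctree_control(mtry = 2L))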
In each inner node, maxsurrogate surrogate splits are computed (regardless of any missing values in the learning sample). Factors in test samples whose levels were empty in the learning sample are treated as missing when computing predictions (in contrast to ctree in package party). Note also the different behaviour of majority in the two implementations.
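A sketch with surrogate splits enabled (airquality contains missing values in Solar.R, which makes surrogates relevant; maxsurrogate = 2 and majority = TRUE are illustrative choices):

library("partykit")
airq <- subset(airquality, !is.na(Ozone))
## up to two surrogate splits per inner node guide observations whose
## primary split variable is missing; majority = TRUE mimics party
ct_sur <- ctree(Ozone ~ Solar.R + Wind + Temp, data = airq,
                control = ctree_control(maxsurrogate = 2L, majority = TRUE))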
B. E. T. H. Twala, M. C. Jones, and D. J. Hand (2008), Good Methods for Coping with Missing Data in Decision Trees, Pattern Recognition Letters, 29(7), 950--956.
T. Galili and I. Meilijson (2016), Splitting Matters: How Monotone Transformation of Predictor Variables May Improve the Predictions of Decision Tree Models, arXiv preprint, https://arxiv.org/abs/1611.04561.