policytree (version 1.1.1)

policytree-package: policytree: Policy Learning via Doubly Robust Empirical Welfare Maximization over Trees

Description

A package for learning optimal policies via doubly robust empirical welfare maximization over trees. Many practical policy applications require interpretable predictions. For example, a drug prescription guide that follows a simple two-question Yes/No checklist can be encoded as a depth-2 decision tree (e.g., does the patient have a heart condition, etc.). This package implements the multi-action doubly robust approach of Zhou et al. (2018) for the case where the policies of interest belong to the class of depth-k decision trees.
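In essence, given an n x K matrix of doubly robust reward estimates Gamma (one column per action), the package searches the class Pi of depth-k decision trees for the policy that maximizes the empirical welfare. A sketch of the objective, following Zhou et al. (2018):

\hat{\pi} = \arg\max_{\pi \in \Pi} \frac{1}{n} \sum_{i=1}^{n} \Gamma_{i,\, \pi(X_i)}

where \pi(X_i) is the action the tree assigns to a unit with covariates X_i.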

Examples

library(policytree)

# Multi-action policy learning.
n <- 250
p <- 10
X <- matrix(rnorm(n * p), n, p)
W <- as.factor(sample(c("A", "B", "C"), n, replace = TRUE))
Y <- X[, 1] + X[, 2] * (W == "B") + X[, 3] * (W == "C") + runif(n)
multi.forest <- grf::multi_arm_causal_forest(X, Y, W)

# Compute doubly robust reward estimates.
Gamma.matrix <- double_robust_scores(multi.forest)
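# double_robust_scores returns an n x K matrix, one column per treatment
# arm; entry [i, k] estimates the reward from assigning arm k to unit i.
head(Gamma.matrix)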

# Fit a depth 2 tree on a random training subset.
train <- sample(1:n, 200)
opt.tree <- policy_tree(X[train, ], Gamma.matrix[train, ], depth = 2)
opt.tree

# Predict treatment on held out data.
predict(opt.tree, X[-train, ])
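
A possible follow-up, as a sketch (plotting the tree assumes the suggested DiagrammeR package is installed):

# Tabulate the actions the tree recommends on the held-out data.
table(predict(opt.tree, X[-train, ]))

# Plot the fitted depth-2 tree (requires the DiagrammeR package).
plot(opt.tree)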
