weightit
allows for the easy generation of balancing weights using a variety of available methods for binary, continuous, and multi-category treatments. Many of these methods exist in other packages, which weightit
calls; these packages must be installed to use the desired method. Also included are print
and summary
methods for examining the output.
weightit(formula,
data = NULL,
method = "ps",
estimand = "ATE",
stabilize = FALSE,
focal = NULL,
by = NULL,
s.weights = NULL,
ps = NULL,
moments = 1,
int = FALSE,
subclass = NULL,
missing = NULL,
verbose = FALSE,
include.obj = FALSE,
...)

# S3 method for weightit
print(x, ...)
a formula with a treatment variable on the left hand side and the covariates to be balanced on the right hand side. See glm
for more details. Interactions and functions of covariates are allowed.
an optional data set in the form of a data frame that contains the variables in formula
.
a string of length 1 containing the name of the method that will be used to estimate weights. See Details below for allowable options. The default is "ps"
for propensity score weighting.
the desired estimand. For binary and multi-category treatments, can be "ATE", "ATT", "ATC", or, for some methods, "ATO" or "ATM". The default for both treatment types is "ATE". This argument is ignored for continuous treatments. See the individual pages for each method for more information on which estimands are allowed with each method and what literature to read to interpret these estimands.
logical
; whether or not to stabilize the weights. For the methods that involve estimating propensity scores, this involves multiplying each unit's weight by the proportion of units in their treatment group. For the "ebal"
method, this involves using ebalance.trim
to reduce the variance of the weights. Default is FALSE
.
when multi-category treatments are used and the "ATT" is requested, which group to consider the "treated" or focal group. This group will not be weighted, and the other groups will be weighted to be more like the focal group. If specified, estimand
will automatically be set to "ATT"
.
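A call using focal might look like the following sketch, assuming cobalt's lalonde data, whose race variable has levels "black", "hispan", and "white"; the choice of "black" as the focal group is illustrative only:

```r
library("WeightIt")
library("cobalt")
data("lalonde", package = "cobalt")

# Weight the "hispan" and "white" groups to resemble the focal "black"
# group; estimand is set to "ATT" automatically when focal is supplied
W <- weightit(race ~ age + educ + married + re74, data = lalonde,
              method = "ps", focal = "black")
summary(W)
```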
a string containing the name of the variable in data
for which weighting is to be done within categories or a one-sided formula with the stratifying variable on the right-hand side. For example, if by = "gender"
or by = ~ gender
, weights will be generated separately within each level of the variable "gender"
. The argument used to be called exact
, which will still work but with a message. Only one by
variable is allowed; to stratify by multiple variables simultaneously, create a new variable that is a full cross of those variables using interaction
.
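For example, a full cross of two stratifying variables can be created with interaction(); a minimal sketch, assuming cobalt's lalonde data:

```r
library("WeightIt")
library("cobalt")
data("lalonde", package = "cobalt")

# Create a single factor crossing married and nodegree, then estimate
# weights separately within each of its levels
lalonde$married.nodegree <- interaction(lalonde$married, lalonde$nodegree)
W <- weightit(treat ~ age + educ + re74, data = lalonde,
              method = "ps", estimand = "ATT", by = ~ married.nodegree)
```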
A vector of sampling weights or the name of a variable in data
that contains sampling weights. These can also be matching weights if weighting is to be used on matched data.
A vector of propensity scores or the name of a variable in data
containing propensity scores. If not NULL
, method
is ignored, and the propensity scores will be used to create weights. formula
must include the treatment variable in data
, but the listed covariates will play no role in the weight estimation. Using ps
is similar to calling get_w_from_ps
directly, but produces a full weightit
object rather than just producing weights.
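Supplying externally estimated propensity scores might look like this sketch, assuming cobalt's lalonde data:

```r
library("WeightIt")
library("cobalt")
data("lalonde", package = "cobalt")

# Estimate propensity scores outside weightit(), then supply them via ps;
# method is ignored and the covariates in formula play no role
fit <- glm(treat ~ age + educ + married + re74,
           data = lalonde, family = binomial)
lalonde$p.score <- fitted(fit)
W <- weightit(treat ~ age + educ + married + re74, data = lalonde,
              ps = "p.score", estimand = "ATT")
```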
numeric
; for some methods, the greatest power of each covariate to be balanced. For example, if moments = 3
, for each non-categorical covariate, the covariate, its square, and its cube will be balanced. This argument is ignored for other methods; to balance powers of the covariates, appropriate functions must be entered in formula
. See the specific methods help pages for information on whether they accept moments
.
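For a method that accepts moments, such as entropy balancing, the two calls below should target the same balance conditions (a sketch, assuming cobalt's lalonde data):

```r
library("WeightIt")
library("cobalt")
data("lalonde", package = "cobalt")

# moments = 2 balances each non-categorical covariate and its square...
W1 <- weightit(treat ~ age + educ, data = lalonde,
               method = "ebal", moments = 2)

# ...which is equivalent to entering the squared terms in the formula
W2 <- weightit(treat ~ age + I(age^2) + educ + I(educ^2),
               data = lalonde, method = "ebal")
```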
logical
; for some methods, whether first-order interactions of the covariates are to be balanced. This argument is ignored for other methods; to balance interactions between the variables, appropriate functions must be entered in formula
. See the specific methods help pages for information on whether they accept int
.
numeric
; the number of subclasses to use for computing weights using marginal mean weighting with subclasses (MMWS). If NULL
, standard inverse probability weights (and their extensions) will be computed; if a number greater than 1, subclasses will be formed and weights will be computed based on subclass membership. Attempting to set a non-NULL
value for methods that don't compute a propensity score will result in an error; see each method's help page for information on whether MMWS weights are compatible with the method. See get_w_from_ps
for details and references.
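Requesting MMWS weights for a propensity score method might look like the following sketch (assuming cobalt's lalonde data; the number of subclasses is illustrative only):

```r
library("WeightIt")
library("cobalt")
data("lalonde", package = "cobalt")

# Form 10 subclasses from the estimated propensity score and compute
# weights from subclass membership rather than from the scores themselves
W <- weightit(treat ~ age + educ + married + re74, data = lalonde,
              method = "ps", estimand = "ATT", subclass = 10)
summary(W)
```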
character
; how missing data should be handled. The options and defaults depend on the method
used. Ignored if no missing data is present. Note that multiple imputation outperforms all of the missingness methods available in weightit and should probably be used instead. See the MatchThem package for the use of weightit
with multiply imputed data.
whether to print additional information output by the fitting function.
whether to include in the output any fit objects created in the process of estimating the weights. For example, with method = "ps"
, the glm
objects containing the propensity score model will be included. See Details for information on what object will be included if TRUE
.
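Retrieving the fit object might look like the following sketch (assuming cobalt's lalonde data; the name of the output element holding the fit object is assumed here to be obj):

```r
library("WeightIt")
library("cobalt")
data("lalonde", package = "cobalt")

# With method = "ps", the stored fit object is the glm for the
# propensity score model
W <- weightit(treat ~ age + educ + re74, data = lalonde,
              method = "ps", include.obj = TRUE)
summary(W$obj)  # element name "obj" assumed; see Details
```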
other arguments for functions called by weightit
that control aspects of fitting that are not covered by the above arguments. See Details.
a weightit
object; the output of a call to weightit
.
A weightit
object with the following elements:
The estimated weights, one for each unit.
The values of the treatment variable.
The covariates used in the fitting. Only includes the raw covariates, which may have been altered in the fitting process.
The estimand requested.
The weight estimation method specified.
The estimated or provided propensity scores. Estimated propensity scores are returned for binary treatments and only when method
is "ps"
, "gbm"
, "cbps"
, or "super"
.
The provided sampling weights.
The focal variable if the ATT was requested with a multi-category treatment.
A data.frame containing the by
variable when specified.
When include.obj = TRUE
, the fit object.
The primary purpose of weightit
is as a dispatcher to other functions in other packages that perform the estimation of balancing weights. These functions are identified by a name, which is used in method
to request them. Each method has some slight distinctions in how it is called, but in general, simply entering the method will cause weightit
to generate the weights correctly using the function. To use each method, the package containing the function must be installed, or else an error will appear. Below are the methods allowed and links to pages containing more information about them, including additional arguments and outputs (e.g., when include.obj = TRUE
) and how missing values are treated.
"ps"
- Propensity score weighting using generalized linear models.
"gbm"
- Propensity score weighting using generalized boosted modeling.
"cbps"
- Covariate Balancing Propensity Score weighting.
"npcbps"
- Non-parametric Covariate Balancing Propensity Score weighting.
"ebal"
- Entropy balancing.
"ebcw"
- Empirical balancing calibration weighting.
"optweight"
- Optimization-based weighting.
"super"
- Propensity score weighting using SuperLearner.
"user-defined"
- Weighting using a user-defined weighting function.
weightitMSM
for estimating weights with sequential (i.e., longitudinal) treatments for use in estimating marginal structural models (MSMs).
library("cobalt")
data("lalonde", package = "cobalt")
#Balancing covariates between treatment groups (binary)
(W1 <- weightit(treat ~ age + educ + married +
nodegree + re74, data = lalonde,
method = "ps", estimand = "ATT"))
summary(W1)
bal.tab(W1)
#Balancing covariates with respect to race (multi-category)
(W2 <- weightit(race ~ age + educ + married +
nodegree + re74, data = lalonde,
method = "ebal", estimand = "ATE"))
summary(W2)
bal.tab(W2)
#Balancing covariates with respect to re75 (continuous)
(W3 <- weightit(re75 ~ age + educ + married +
nodegree + re74, data = lalonde,
method = "cbps", over = FALSE))
summary(W3)
bal.tab(W3)