Augment accepts a model object and a dataset and adds information about each observation in the dataset. Most commonly, this includes predicted values in the .fitted column, residuals in the .resid column, and standard errors for the fitted values in a .se.fit column. New columns always begin with a . prefix to avoid overwriting columns in the original dataset.
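For instance, a minimal sketch using the quantile regression fit from the examples below (the exact set of dot-prefixed columns depends on the augment method):

library(broom)
library(quantreg)
fit <- rq(stack.loss ~ stack.x, tau = .5)
# returns the original rows plus dot-prefixed columns such as .resid and .fitted
augment(fit)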
Users may pass data to augment via either the data argument or the newdata argument. If the user passes data to the data argument, it must be exactly the data that was used to fit the model object. Pass datasets to newdata to augment data that was not used during model fitting. This still requires that at least all predictor variable columns used to fit the model are present. If the original outcome variable used to fit the model is not included in newdata, then no .resid column will be included in the output.
Augment will often behave differently depending on whether data or newdata is given. This is because there is often information associated with training observations (such as influence or related measures) that is not meaningfully defined for new observations.

For convenience, many augment methods provide default data arguments, so that augment(fit) will return the augmented training data. In these cases, augment tries to reconstruct the original data based on the model object with varying degrees of success.
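As a sketch of these call patterns (assuming the stackloss fit used in the examples below; new_obs is a hypothetical data frame holding only the predictor columns, so no .resid column is expected in that output):

library(broom)
library(quantreg)
fit <- rq(stack.loss ~ stack.x, tau = .5)
# augment the training data; with the default data argument this is
# equivalent to augment(fit)
augment(fit, data = model.frame(fit))
# augment new observations that contain only the predictors
new_obs <- model.frame(fit)[1:3, ]
new_obs$stack.loss <- NULL
augment(fit, newdata = new_obs)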
The augmented dataset is always returned as a tibble::tibble with the same number of rows as the passed dataset. This means that the passed data must be coercible to a tibble. If a predictor enters the model as part of a matrix of covariates, such as when the model formula uses splines::ns(), stats::poly(), or survival::Surv(), it is represented as a matrix column.
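For example, a sketch of how a spline basis ends up as a single matrix column in the augmented tibble (the ns() term and 3 degrees of freedom are arbitrary choices for illustration):

library(broom)
library(quantreg)
library(splines)
fit <- rq(stack.loss ~ ns(Air.Flow, 3), tau = .5, data = stackloss)
# the ns(Air.Flow, 3) basis is carried along as one matrix column
augment(fit)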
We are in the process of defining behaviors for models fit with various na.action arguments, but make no guarantees about behavior when data is missing at this time.
# S3 method for rq
augment(x, data = model.frame(x), newdata = NULL, ...)
A tibble::tibble() with columns:

.fitted: Fitted or predicted value.
.resid: The difference between observed and fitted values.
.tau: Quantile.
x: An rq object returned from quantreg::rq().
data: A base::data.frame or tibble::tibble() containing the original data that was used to produce the object x. Defaults to stats::model.frame(x) so that augment(my_fit) returns the augmented original data. Do not pass new data to the data argument. Augment will report information such as influence and Cook's distance for data passed to the data argument. These measures are only defined for the original training data.
newdata: A base::data.frame() or tibble::tibble() containing all the original predictors used to create x. Defaults to NULL, indicating that nothing has been passed to newdata. If newdata is specified, the data argument will be ignored.
...: Arguments passed on to quantreg::predict.rq:

object: an object of class rq, rqs, or rq.process produced by rq.

interval: type of interval desired: default is 'none'. When set to 'confidence' the function returns a matrix of predictions with point predictions for each of the 'newdata' points as well as lower and upper confidence limits.

level: coverage probability for the 'confidence' intervals.

type: For predict.rq, the method for 'confidence' intervals, if desired. If 'percentile' then one of the bootstrap methods is used to generate percentile intervals for each prediction; if 'direct' then a version of the Portnoy and Zhou (1998) method is used; otherwise an estimated covariance matrix for the parameter estimates is used. Further arguments to determine the choice of bootstrap method or covariance matrix estimate can be passed via the ... argument. For predict.rqs and predict.rq.process when stepfun = TRUE, type is "Qhat", "Fhat" or "fhat" depending on whether the user would like estimates of the conditional quantile, distribution or density functions, respectively. As noted below, the two former estimates can be monotonized with the function rearrange. When the "fhat" option is invoked, a list of conditional density functions is returned based on Silverman's adaptive kernel method as implemented in akj and approxfun.

na.action: function determining what should be done with missing values in 'newdata'. The default is to predict 'NA'.
Depending on the arguments passed on to predict.rq via ..., a confidence interval is also calculated on the fitted values, resulting in columns .lower and .upper. Does not provide confidence intervals when data is specified via the newdata argument.
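As a sketch of requesting such an interval (interval and level are simply forwarded to quantreg::predict.rq, so the exact behaviour follows that function):

library(broom)
library(quantreg)
fit <- rq(stack.loss ~ stack.x, tau = .5)
# expect .lower and .upper columns alongside .fitted in the output
augment(fit, interval = "confidence", level = 0.95)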
See also: augment(), quantreg::rq(), quantreg::predict.rq()

Other quantreg tidiers: augment.nlrq(), augment.rqs(), glance.nlrq(), glance.rq(), tidy.nlrq(), tidy.rqs(), tidy.rq()
# load modeling library and data
library(quantreg)
data(stackloss)
# median (l1) regression fit for the stackloss data.
mod1 <- rq(stack.loss ~ stack.x, .5)
# weighted sample median
mod2 <- rq(rnorm(50) ~ 1, weights = runif(50))
# summarize model fit with tidiers
tidy(mod1)
glance(mod1)
augment(mod1)
tidy(mod2)
glance(mod2)
augment(mod2)
# varying tau to generate an rqs object
mod3 <- rq(stack.loss ~ stack.x, tau = c(.25, .5))
tidy(mod3)
augment(mod3)
# glance cannot handle rqs objects like `mod3`--use a purrr
# `map`-based workflow instead
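# A sketch of one such map-based workflow (assumes purrr and dplyr are
# installed): fit one single-tau model per quantile, then row-bind the
# one-row glances
library(purrr)
fits <- map(c(.25, .5), function(tau) rq(stack.loss ~ stack.x, tau = tau))
map_dfr(fits, glance)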