loess(formula, data, weights, subset, na.action, model = FALSE,
span = 0.75, enp.target, degree = 2,
parametric = FALSE, drop.square = FALSE, normalize = TRUE,
family = c("gaussian", "symmetric"),
method = c("loess", "model.frame"),
control = loess.control(...), ...)
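As a minimal sketch of a call with the default settings (span = 0.75, degree = 2, family = "gaussian"), using the built-in cars data set that also appears in the examples below; the exact summary output is not reproduced here:

# fit with the defaults and inspect the result
fit <- loess(dist ~ speed, data = cars)
class(fit)    # "loess"
summary(fit)  # reports the equivalent number of parameters, among other things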
Arguments:

data: an optional data frame, list or environment (or object coercible by as.data.frame to a data frame) containing the variables in the model. If not found in data, the variables are taken from environment(formula), typically the environment from which loess is called.

na.action: the action to be taken with missing values in the response or predictors. The default is given by getOption("na.action").

enp.target: an alternative way to specify span, as the approximate equivalent number of parameters to be used.

drop.square: for fits with more than one predictor and degree = 2, should the quadratic term be dropped for particular predictors? Terms are specified in the same way as for parametric.

family: if "gaussian" fitting is by least-squares, and if "symmetric" a re-descending M estimator is used with Tukey's biweight function. Can be abbreviated.

control: control parameters: see loess.control.

...: control parameters can also be supplied directly (if control is not specified).
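As an illustration of how several of these arguments combine, here is a hedged sketch on the built-in airquality data; the particular choices (enp.target = 8, the dropped quadratic term, the robust family) are arbitrary demonstration values, not recommendations from this documentation:

aq <- na.omit(airquality)
# smoothness given as an approximate equivalent number of parameters instead of span
fit1 <- loess(Ozone ~ Wind * Temp, data = aq, enp.target = 8)
# robust (re-descending M estimator) fit, dropping the quadratic term in Wind
fit2 <- loess(Ozone ~ Wind * Temp, data = aq,
              family = "symmetric", drop.square = "Wind")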
Value:

An object of class "loess".

Details:

Fitting is done locally: for the fit at a point, the fit is made using points in a neighbourhood of that point, weighted by their distance from it. The size of the neighbourhood is controlled by \(\alpha\) (set by span or enp.target). For \(\alpha < 1\), the neighbourhood includes proportion \(\alpha\) of the points, and these have tricubic weighting (proportional to \((1 - \mathrm{(dist/maxdist)}^3)^3\)). For \(\alpha > 1\), all points are used, with the ‘maximum distance’ assumed to be \(\alpha^{1/p}\) times the actual maximum distance for \(p\) explanatory variables.
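The tricubic weighting above can be written out directly; the following function is only a sketch illustrating the formula, not code taken from loess itself:

# tricube weights for points at distance d from the point being fitted,
# where maxdist is the largest distance within the chosen neighbourhood
tricube <- function(d, maxdist) {
  w <- (1 - (d / maxdist)^3)^3
  ifelse(d < maxdist, w, 0)  # points outside the neighbourhood get weight zero
}
tricube(c(0, 0.5, 0.9), maxdist = 1)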
For the default family, fitting is by (weighted) least squares. For family = "symmetric" a few iterations of an M-estimation procedure with Tukey's biweight are used. Be aware that as the initial value is the least-squares fit, this need not be a very resistant fit.

It can be important to tune the control list to achieve acceptable speed. See loess.control for details.
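As a hedged sketch of such tuning (the simulated data and the chosen control setting are illustrative only):

set.seed(1)
n <- 5000
x <- runif(n)
d <- data.frame(x = x, y = sin(2 * pi * x) + rnorm(n, sd = 0.2))
# the approximate trace computation is intended for larger fits
fit.big <- loess(y ~ x, data = d,
                 control = loess.control(trace.hat = "approximate"))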
See Also:

loess.control,
predict.loess
. lowess
, the ancestor of loess
(with
different defaults!).

Examples:

cars.lo <- loess(dist ~ speed, cars)
predict(cars.lo, data.frame(speed = seq(5, 30, 1)), se = TRUE)
# to allow extrapolation
cars.lo2 <- loess(dist ~ speed, cars,
control = loess.control(surface = "direct"))
predict(cars.lo2, data.frame(speed = seq(5, 30, 1)), se = TRUE)
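As a further hedged sketch of the family argument described above, contaminating one response value in cars to compare the least-squares and robust fits (the outlier and the comparison are illustrative additions, not part of the original examples):

cars2 <- cars
cars2$dist[10] <- 200   # artificial outlier
cars.ls  <- loess(dist ~ speed, cars2)                        # default: least squares
cars.rob <- loess(dist ~ speed, cars2, family = "symmetric")  # Tukey biweight M-estimation
predict(cars.ls,  data.frame(speed = 15))
predict(cars.rob, data.frame(speed = 15))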