
stats (version 3.3)

lm.fit: Fitter Functions for Linear Models

Description

These are the basic computing engines called by lm used to fit linear models. These should usually not be used directly, except by experienced users. .lm.fit() is a bare-bones wrapper to the innermost QR-based C code, on which glm.fit and lsfit are based as well, for even more experienced users.
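
The relationship to lm can be seen in a minimal sketch (toy data, not part of this page): supplying a design matrix that already contains an intercept column should reproduce the coefficients of the formula interface.

## hypothetical illustration -- data made up for this sketch
x <- cbind(Intercept = 1, x1 = rnorm(10))
y <- rnorm(10)
all.equal(unname(lm.fit(x, y)$coefficients),
          unname(coef(lm(y ~ x[, "x1"]))))   # should be TRUE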

Usage

lm.fit (x, y, offset = NULL, method = "qr", tol = 1e-7,
        singular.ok = TRUE, ...)

lm.wfit(x, y, w, offset = NULL, method = "qr", tol = 1e-7,
        singular.ok = TRUE, ...)

.lm.fit(x, y, tol = 1e-7)

Arguments

x
design matrix of dimension n * p.
y
vector of observations of length n, or a matrix with n rows.
w
vector of weights (length n) to be used in the fitting process for the wfit functions. Weighted least squares is used with weights w, i.e., sum(w * e^2) is minimized.
offset
numeric of length n. This can be used to specify an a priori known component to be included in the linear predictor during fitting; a short sketch follows the argument list below.
method
currently, only method = "qr" is supported.
tol
tolerance for the qr decomposition. Default is 1e-7.
singular.ok
logical. If FALSE, a singular model is an error.
...
currently disregarded.
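
To illustrate w and offset, a hedged sketch with made-up data (not part of this page): the offset is a known term subtracted from the response before fitting, and the weights enter the criterion as sum(w * e^2).

## hypothetical illustration of the offset and w arguments
set.seed(1)
x <- cbind(1, rnorm(20)) ; y <- rnorm(20)
off <- rep(0.5, 20)    # a priori known component of the predictor
w   <- runif(20)       # positive case weights
f1 <- lm.fit (x, y, offset = off)
f2 <- lm.wfit(x, y, w, offset = off)
## the offset is equivalent to fitting y - off directly:
all.equal(f1$coefficients, lm.fit(x, y - off)$coefficients)
## the weighted fit is equivalent to rescaling rows by sqrt(w):
all.equal(unname(f2$coefficients),
          coef(.lm.fit(x * sqrt(w), (y - off) * sqrt(w))))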

Value

lm.fit and lm.wfit return a list with components:

  • coefficients: p vector
  • residuals: n vector or matrix
  • fitted.values: n vector or matrix
  • effects: n vector of orthogonal single-df effects. The first rank of them correspond to non-aliased coefficients, and are named accordingly.
  • weights: n vector --- only for the *wfit* functions.
  • rank: integer, giving the rank
  • df.residual: degrees of freedom of residuals
  • qr: the QR decomposition, see qr.

Fits without any columns or non-zero weights do not have the effects and qr components.

.lm.fit() returns a subset of the above, with the qr part unwrapped, plus a logical component pivoted indicating whether the underlying QR algorithm did pivot.
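
A short sketch of accessing these components (toy data assumed, not from this page):

## hypothetical illustration of the returned components
x <- cbind(1, 1:5) ; y <- c(1, 3, 2, 5, 4)
f <- lm.fit(x, y)
f$coefficients    # p vector of estimates
f$rank            # 2 here: both columns are non-aliased
f$df.residual     # n - rank = 3
b <- .lm.fit(x, y)
b$pivoted         # logical: did the underlying QR algorithm pivot?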

See Also

lm which you should use for linear least squares regression, unless you know better.

Examples

require(utils)
set.seed(129)

n <- 7 ; p <- 2
X <- matrix(rnorm(n * p), n, p) # no intercept!
y <- rnorm(n)
w <- rnorm(n)^2

str(lmw <- lm.wfit(x = X, y = y, w = w))

str(lm. <- lm.fit (x = X, y = y))
## These are the same calculations at C level, but a parallel BLAS
## might not do them the same way twice, and it seems serial MKL does not.
lm.. <- .lm.fit(X,y)
lm.w <- .lm.fit(X*sqrt(w), y*sqrt(w))
id <- function(x, y) all.equal(x, y, tolerance = 1e-15, scale = 1)
stopifnot(id(unname(lm.$coef), lm..$coef),
          id(unname(lmw$coef), lm.w$coef))
if(require("microbenchmark")) {
  mb <- microbenchmark(lm(y~X), lm.fit(X,y), .lm.fit(X,y))
  print(mb)
  boxplot(mb, notch=TRUE)
}
