lrm.fit(x, y, offset=0, initial, est, maxit=12, eps=.025, tol=1e-7, trace=FALSE, penalty.matrix=NULL, weights=NULL, normwt=FALSE, scale=FALSE)
Arguments

est
vector of column numbers of x to fit in the model (default is all columns of x). Specifying est=c(1,2,5) causes columns 1, 2, and 5 to have parameters estimated. The score vector u and covariance matrix var can be used to obtain score statistics for other columns.
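As a sketch of using est together with the returned score vector (this assumes the rms package is installed; the data and variable names below are invented for illustration, not taken from the original text):

```r
# Hypothetical illustration with simulated data
if (requireNamespace("rms", quietly = TRUE)) {
  set.seed(1)
  x <- cbind(a = rnorm(100), b = rnorm(100), c = rnorm(100))
  y <- rbinom(100, 1, plogis(x[, "a"]))
  f <- rms::lrm.fit(x, y, est = c(1, 3))  # estimate columns 1 and 3 only
  f$u   # score vector, usable for score statistics on the omitted column
}
```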
maxit
maximum number of iterations (default is 12). Specifying maxit=1 causes logist to compute statistics at initial estimates.
eps
difference in -2 log likelihood for declaring convergence (default is .025). If the -2 log likelihood gets worse by eps/10 while the maximum absolute first derivative of -2 log likelihood is below 1e-9, convergence is still declared. This handles the case where the initial estimates are MLEs, to prevent endless step-halving.
trace
set to TRUE to print the -2 log likelihood, step-halving fraction, change in -2 log likelihood, maximum absolute value of the first derivative, and the vector of first derivatives at each iteration.
penalty.matrix
penalty matrix; see lrm.

weights
vector (same length as y) of possibly fractional case weights.
normwt
set to TRUE to scale weights so they sum to the length of y; useful for sample surveys, as opposed to the default of frequency weighting.
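The rescaling done when normwt=TRUE can be sketched in base R (a toy example; w and y below are invented, and this is not the rms internals):

```r
y <- c(0, 1, 1, 0, 1)
w <- c(0.5, 2, 1, 1.5, 0.25)        # possibly fractional case weights
w_norm <- w * (length(y) / sum(w))  # rescaled so the weights sum to length(y)
all.equal(sum(w_norm), length(y))   # TRUE
```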
scale
set to TRUE to subtract column means and divide by column standard deviations of x before fitting, and to back-solve for the un-normalized covariance matrix and regression coefficients. This can sometimes make the model converge for very large sample sizes where, for example, spline or polynomial component variables create scaling problems leading to loss of precision when accumulating sums of squares and crossproducts.

See Also

lrm, glm, matinv, solvet, cr.setup, gIndex
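The back-solving that scale=TRUE performs can be sketched with base R's glm (an illustration of the algebra only, not the rms internals; the data are simulated):

```r
set.seed(2)
x <- cbind(x1 = rnorm(300, 100, 25), x2 = rnorm(300, 5, 2))
y <- rbinom(300, 1, plogis(0.02 * x[, "x1"] - 0.5 * x[, "x2"]))
z  <- scale(x)                       # subtract column means, divide by SDs
fz <- glm(y ~ z, family = binomial)  # fit on the standardized columns
s  <- attr(z, "scaled:scale")
m  <- attr(z, "scaled:center")
b  <- coef(fz)[-1] / s                         # back-solved slopes
a  <- coef(fz)[1] - sum(coef(fz)[-1] * m / s)  # back-solved intercept
# c(a, b) agrees with coef(glm(y ~ x, family = binomial)) up to numerical error
```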
Examples

#Fit an additive logistic model containing numeric predictors age,
#blood.pressure, and sex, assumed to be already properly coded and
#transformed

# fit <- lrm.fit(cbind(age, blood.pressure, sex), death)
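A self-contained variant of the example above, with simulated data (assumes the rms package is installed; the simulation and coefficient values are invented for illustration):

```r
if (requireNamespace("rms", quietly = TRUE)) {
  set.seed(3)
  n <- 200
  age            <- rnorm(n, 50, 10)
  blood.pressure <- rnorm(n, 120, 15)
  sex            <- rbinom(n, 1, 0.5)
  death          <- rbinom(n, 1, plogis(-1 + 0.03 * (age - 50)))
  fit <- rms::lrm.fit(cbind(age, blood.pressure, sex), death)
  fit$coefficients  # intercept followed by one slope per column of x
}
```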