Maximum likelihood estimation of spatial simultaneous autoregressive “SAC/SARAR” models of the form:
$$y = \rho W_1 y + X \beta + u, \quad u = \lambda W_2 u + \varepsilon$$
where \(\rho\) and \(\lambda\) are found first by nlminb or optim(), and \(\beta\) and the other parameters subsequently by generalized least squares.
sacsarlm(formula, data = list(), listw, listw2 = NULL, na.action, type="sac",
method = "eigen", quiet = NULL, zero.policy = NULL, tol.solve = 1e-10,
llprof=NULL, interval1=NULL, interval2=NULL, trs1=NULL, trs2=NULL,
control = list())
A list object of class sarlm
“sac”
lag simultaneous autoregressive lag coefficient
error simultaneous autoregressive error coefficient
GLS coefficient estimates
asymptotic standard errors if ase=TRUE, otherwise approximate numerical Hessian-based values
TRUE if method=eigen
log likelihood value at computed optimum
GLS residual variance
sum of squared GLS errors
number of parameters estimated
Log likelihood of the non-spatial linear model
AIC of the non-spatial linear model
the method used to calculate the Jacobian
the call used to create this object
GLS residuals
model matrix of the GLS model
response of the GLS model
response of the linear model for \(\rho=0\)
model matrix of the linear model for \(\rho=0\)
object returned from numerical optimisation
starting parameter values for final optimization, either given or found by trial point evaluation
if default input pars, optimal objective function values at trial points
Difference between residuals and response variable
Not used yet
if ase=TRUE, the asymptotic standard error of \(\rho\), otherwise approximate numerical Hessian-based value
if ase=TRUE, the asymptotic standard error of \(\lambda\)
the asymptotic coefficient covariance matrix for (s2, rho, lambda, B)
zero.policy for this model
the aliased explanatory variables (if any)
Log-likelihood of the null linear model
the numerical Hessian-based coefficient covariance matrix for (rho, lambda, B) if computed
asymptotic coefficient covariance matrix
FALSE
processing timings
(possibly) named vector of excluded or omitted observations if non-default na.action argument used
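As a quick illustration of the returned object, the sketch below fits the model to the oldcol example data (also used in the Examples section) and reads off a few of the components documented above; the component names (type, rho, lambda, LL) are taken from this Value list.

```r
## Sketch: inspecting components of the returned sarlm object.
## Assumes the spdep/spatialreg oldcol example data are available.
library(spdep)
data(oldcol)
fit <- sacsarlm(CRIME ~ INC + HOVAL, data = COL.OLD,
                listw = nb2listw(COL.nb, style = "W"))
fit$type                 # "sac"
c(fit$rho, fit$lambda)   # simultaneous autoregressive lag and error coefficients
fit$LL                   # log likelihood value at the computed optimum
```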
default NULL, then set to (method != "eigen") internally; use fdHess from nlme to compute an approximate Hessian using finite differences when using sparse matrix methods; used to make a coefficient covariance matrix when the number of observations is large; may be turned off to save resources if need be
default FALSE; logical value passed to qr in the SSE log likelihood function
default 2; used for preparing the Cholesky decompositions for updating in the Jacobian function
default 5; highest power of the approximating polynomial for the Chebyshev approximation
default 16; number of random variates
default 30; number of products of random variates matrix and spatial weights matrix
default FALSE, using a simplicial decomposition for the sparse Cholesky decomposition; if TRUE, use a supernodal decomposition
default “nlminb”, may be set to “L-BFGS-B” to use box-constrained optimisation in optim
default list(), a control list to pass to nlminb or optim
starting values of \(\rho\) and \(\lambda\); default NULL, for which five trial starting values spanning the lower/upper range are tried and the best selected
default integer 4L, four trial points; if not default value, nine trial points
default NULL; may be used to pass pre-computed vectors of eigenvalues
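As a sketch of how the Jacobian-related arguments fit together, the example below avoids the dense eigenvalue computation by choosing a sparse method and passing pre-computed trace vectors through trs1 and trs2. It assumes the oldcol example data and that "MC" is among the supported sparse-method choices; trW() is the trace helper also used in the Examples section.

```r
## Sketch: sparse-method Jacobian with pre-computed traces (trs1/trs2)
## instead of the default dense eigenvalue method ("eigen").
library(spdep)
data(oldcol)
lw <- nb2listw(COL.nb, style = "W")
W <- as(lw, "CsparseMatrix")
tr <- trW(W, type = "mult")        # traces of powers of W
m <- sacsarlm(CRIME ~ INC + HOVAL, data = COL.OLD, listw = lw,
              method = "MC", trs1 = tr, trs2 = tr)
```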
Because numerical optimisation is used to find the values of lambda and rho, care needs to be taken. It has been found that the surface of the 2D likelihood function often forms a “banana trench” from (low rho, high lambda) through (high rho, high lambda) to (high rho, low lambda) values. In addition, the banana sometimes has optima towards both ends, one local, the other global, and consequently the choice of the starting point for the final optimization becomes crucial. The default approach is to use neither (0, 0) as a starting point, nor the (rho, lambda) values from gstsls, which lie in a central part of the “trench”, but rather four trial values at (low rho, high lambda), (0, 0), (high rho, high lambda), and (high rho, low lambda), and to use the best of these start points for the final optimization. Optionally, nine points spanning the whole (lower, upper) space can be used.
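The starting-point search described above can be steered through the control list. The sketch below uses the pars and npars entries as documented in the arguments; the numeric starting values are illustrative only, not recommendations.

```r
## Sketch: overriding the default trial-point search for (rho, lambda).
library(spdep)
data(oldcol)
lw <- nb2listw(COL.nb, style = "W")
## supply explicit starting values for rho and lambda directly
## (c(0.1, 0.1) is an arbitrary illustrative choice) ...
m1 <- sacsarlm(CRIME ~ INC + HOVAL, data = COL.OLD, listw = lw,
               control = list(pars = c(0.1, 0.1)))
## ... or keep the search but span the whole space with nine trial points
m2 <- sacsarlm(CRIME ~ INC + HOVAL, data = COL.OLD, listw = lw,
               control = list(npars = 9L))
```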
Anselin, L. 1988 Spatial econometrics: methods and models. (Dordrecht: Kluwer); LeSage J and RK Pace (2009) Introduction to Spatial Econometrics. CRC Press, Boca Raton.
Roger Bivand, Gianfranco Piras (2015). Comparing Implementations of Estimation Methods for Spatial Econometrics. Journal of Statistical Software, 63(18), 1-36. http://www.jstatsoft.org/v63/i18/.
Bivand, R. S., Hauke, J., and Kossowski, T. (2013). Computing the Jacobian in Gaussian spatial autoregressive models: An illustrated comparison of available methods. Geographical Analysis, 45(2), 150-179.
lm, lagsarlm, errorsarlm, summary.sarlm, eigenw, impacts.sarlm
data(oldcol)
COL.sacW.eig <- sacsarlm(CRIME ~ INC + HOVAL, data=COL.OLD,
nb2listw(COL.nb, style="W"))
summary(COL.sacW.eig, correlation=TRUE)
W <- as(nb2listw(COL.nb, style="W"), "CsparseMatrix")
trMatc <- trW(W, type="mult")
summary(impacts(COL.sacW.eig, tr=trMatc, R=2000), zstats=TRUE, short=TRUE)
COL.msacW.eig <- sacsarlm(CRIME ~ INC + HOVAL, data=COL.OLD,
nb2listw(COL.nb, style="W"), type="sacmixed")
summary(COL.msacW.eig, correlation=TRUE)
summary(impacts(COL.msacW.eig, tr=trMatc, R=2000), zstats=TRUE, short=TRUE)