
spdep (version 0.8-1)

sacsarlm: Spatial simultaneous autoregressive SAC model estimation

Description

Maximum likelihood estimation of spatial simultaneous autoregressive “SAC/SARAR” models of the form:

$$y = \rho W_1 y + X \beta + u, \quad u = \lambda W_2 u + \varepsilon$$

where \(\rho\) and \(\lambda\) are found first by nlminb() or optim(), and \(\beta\) and the other parameters subsequently by generalized least squares.
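
For given \(\rho\) and \(\lambda\), the GLS step amounts to ordinary least squares on transformed variables; as a sketch of the standard SAC/SARAR algebra (not quoted from the package sources):

$$\hat{\beta} = (X^{*\prime} X^{*})^{-1} X^{*\prime} y^{*}, \qquad X^{*} = (I - \lambda W_2) X, \quad y^{*} = (I - \lambda W_2)(I - \rho W_1) y$$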

Usage

sacsarlm(formula, data = list(), listw, listw2 = NULL, na.action, Durbin, type,
 method = "eigen", quiet = NULL, zero.policy = NULL, tol.solve = 1e-10,
 llprof=NULL, interval1=NULL, interval2=NULL, trs1=NULL, trs2=NULL,
 control = list())
spBreg_sac(formula, data = list(), listw, listw2=NULL, na.action, 
    Durbin, type, zero.policy=NULL, control=list())
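
If listw2 is not given, the same spatial weights are used for both the lag and the error process. A minimal sketch with distinct weights objects (COL.OLD and COL.nb as in the Examples below; the binary-style second weights object and the object names are illustrative only):

library(spdep)
data(oldcol)
listwW <- nb2listw(COL.nb, style="W")  # row-standardised weights for the lag process
listwB <- nb2listw(COL.nb, style="B")  # binary weights for the error process
fit_sac2 <- sacsarlm(CRIME ~ INC + HOVAL, data=COL.OLD, listw=listwW,
    listw2=listwB)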

Value

A list object of class sarlm

type

“sac” or “sacmixed”

dvars

vector of length 2, numbers of columns in X and WX; if Durbin is given as a formula, the formula as attribute “f”, the indices of the included Wx as “inds”, and indices of added zero Wx coefficients as “zero_fill”

rho

simultaneous autoregressive lag coefficient

lambda

simultaneous autoregressive error coefficient

coefficients

GLS coefficient estimates

rest.se

asymptotic standard errors if ase=TRUE, otherwise approximate numerical Hessian-based values

ase

TRUE if method="eigen"

LL

log likelihood value at computed optimum

s2

GLS residual variance

SSE

sum of squared GLS errors

parameters

number of parameters estimated

logLik_lm.model

Log likelihood of the non-spatial linear model

AIC_lm.model

AIC of the non-spatial linear model

method

the method used to calculate the Jacobian

call

the call used to create this object

residuals

GLS residuals

tarX

model matrix of the GLS model

tary

response of the GLS model

y

response of the linear model for \(\rho=0\)

X

model matrix of the linear model for \(\rho=0\)

opt

object returned from numerical optimisation

pars

starting parameter values for final optimization, either given or found by trial point evaluation

mxs

if the default pars input is used, the optimal objective function values at the trial points

fitted.values

Difference between the response variable and the residuals

se.fit

Not used yet

rho.se

if ase=TRUE, the asymptotic standard error of \(\rho\), otherwise approximate numerical Hessian-based value

lambda.se

if ase=TRUE, the asymptotic standard error of \(\lambda\)

resvar

the asymptotic coefficient covariance matrix for (s2, rho, lambda, B)

zero.policy

zero.policy for this model

aliased

the aliased explanatory variables (if any)

LLNullLlm

Log-likelihood of the null linear model

fdHess

the numerical Hessian-based coefficient covariance matrix for (rho, lambda, B) if computed

optimHess

FALSE

timings

processing timings

na.action

(possibly) named vector of excluded or omitted observations if non-default na.action argument used

Control arguments

fdHess:

default NULL, then set internally to (method != "eigen"); if TRUE, an approximate Hessian is computed by finite differences using fdHess from nlme when sparse matrix methods are used, to provide a coefficient covariance matrix when the number of observations is large; may be turned off to save resources if need be

LAPACK:

default FALSE; logical value passed to qr in the SSE log likelihood function

Imult:

default 2; used for preparing the Cholesky decompositions for updating in the Jacobian function

cheb_q:

default 5; highest power of the approximating polynomial for the Chebyshev approximation

MC_p:

default 16; number of random variates

MC_m:

default 30; number of products of random variates matrix and spatial weights matrix

super:

default FALSE, using a simplicial decomposition for the sparse Cholesky decomposition; if TRUE, a supernodal decomposition is used

opt_method:

default “nlminb”, may be set to “L-BFGS-B” to use box-constrained optimisation in optim

opt_control:

default list(), a control list to pass to nlminb or optim

pars:

default NULL, the starting values of \(\rho\) and \(\lambda\); if NULL, five trial starting values spanning the lower/upper range are tried and the best selected

npars:

default integer 4L, four trial points; if not default value, nine trial points

pre_eig1, pre_eig2:

default NULL; may be used to pass pre-computed vectors of eigenvalues
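
As a sketch of how these control arguments combine (Columbus data as in the Examples below; the optimiser switch and pre-computed eigenvalues are illustrative choices, not defaults):

library(spdep)
data(oldcol)
listw <- nb2listw(COL.nb, style="W")
ev <- eigenw(listw)
# pass pre-computed eigenvalues for both weights objects and use
# box-constrained optim instead of the default nlminb
fit_ctl <- sacsarlm(CRIME ~ INC + HOVAL, data=COL.OLD, listw,
    control=list(pre_eig1=ev, pre_eig2=ev, opt_method="L-BFGS-B",
        opt_control=list(maxit=500)))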

Extra Bayesian control arguments

ldet_method

default “SE_classic”; equivalent to the method argument in lagsarlm

interval1

default c(-1, 1); used unmodified or set internally by jacobianSetup

interval2

default c(-1, 1); used unmodified or set internally by jacobianSetup

ndraw

default 2500L; integer total number of draws

nomit

default 500L; integer total number of omitted burn-in draws

thin

default 1L; integer thinning proportion

detval1

default NULL; not yet in use, precomputed matrix of log determinants

detval2

default NULL; not yet in use, precomputed matrix of log determinants

prior

a list with the following components:

Tbeta

default NULL; values of the betas variance-covariance matrix, set to diag(k)*1e+12 if NULL

c_beta

default NULL; values of the betas set to 0 if NULL

lambda

default 0.5; value of the error autoregressive coefficient \(\lambda\)

rho

default 0.5; value of the lag autoregressive coefficient \(\rho\)

sige

default 1; value of the residual variance

nu

default 0; informative Gamma(nu,d0) prior on sige

d0

default 0; informative Gamma(nu,d0) prior on sige

a1

default 1.01; parameter for beta(a1,a2) prior on rho

a2

default 1.01; parameter for beta(a1,a2) prior on rho

cc1

default 0.2; initial tuning parameter for M-H sampling

cc2

default 0.2; initial tuning parameter for M-H sampling
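
A sketch of how the sampler and prior settings fit together (Columbus data as in the Examples below; the prior values shown simply restate the defaults, and the object name is illustrative):

library(spdep)
data(oldcol)
listw <- nb2listw(COL.nb, style="W")
set.seed(1)
# Bayesian SAC fit with explicit sampler settings and (default-valued) priors
fit_B <- spBreg_sac(CRIME ~ INC + HOVAL, data=COL.OLD, listw,
    Durbin=FALSE,
    control=list(ndraw=2500L, nomit=500L, thin=1L,
        prior=list(nu=0, d0=0, a1=1.01, a2=1.01)))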

Details

Because numerical optimisation is used to find the values of lambda and rho, care needs to be taken. It has been found that the surface of the 2D likelihood function often forms a “banana trench” from (low rho, high lambda) through (high rho, high lambda) to (high rho, low lambda) values. In addition, the banana sometimes has optima towards both ends, one local, the other global, so the choice of the starting point for the final optimization becomes crucial. The default approach is not to use just (0, 0) as a starting point, nor the (rho, lambda) values from gstsls, which lie in a central part of the “trench”, but rather four trial points at (low rho, high lambda), (0, 0), (high rho, high lambda), and (high rho, low lambda), using the best of these as the start point for the final optimization. Optionally, nine points spanning the whole (lower, upper) space can be used.
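
A sketch of overriding the default trial-point search with explicit starting values (COL.OLD and listw as in the Examples below; the c(rho, lambda) ordering follows the description of pars above, and the numerical values and object names are illustrative only):

# supply explicit starting values for the final optimization
fit_start <- sacsarlm(CRIME ~ INC + HOVAL, data=COL.OLD, listw,
    control=list(pars=c(0.4, 0.2)))
# or keep the default search but request the wider nine-point grid
fit_grid <- sacsarlm(CRIME ~ INC + HOVAL, data=COL.OLD, listw,
    control=list(npars=9L))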

References

Anselin, L. (1988) Spatial Econometrics: Methods and Models. Dordrecht: Kluwer.

LeSage, J. and Pace, R. K. (2009) Introduction to Spatial Econometrics. Boca Raton: CRC Press.

Bivand, R. S. and Piras, G. (2015) Comparing implementations of estimation methods for spatial econometrics. Journal of Statistical Software, 63(18), 1-36. https://www.jstatsoft.org/v63/i18/.

Bivand, R. S., Hauke, J. and Kossowski, T. (2013) Computing the Jacobian in Gaussian spatial autoregressive models: an illustrated comparison of available methods. Geographical Analysis, 45(2), 150-179.

See Also

lm, lagsarlm, errorsarlm, summary.sarlm, eigenw, impacts.sarlm

Examples

# NOT RUN {
data(oldcol)
listw <- nb2listw(COL.nb, style="W")
ev <- eigenw(listw)
COL.sacW.eig <- sacsarlm(CRIME ~ INC + HOVAL, data=COL.OLD, listw,
 control=list(pre_eig1=ev, pre_eig2=ev))
summary(COL.sacW.eig)
W <- as(listw, "CsparseMatrix")
trMatc <- trW(W, type="mult")
set.seed(1)
summary(impacts(COL.sacW.eig, tr=trMatc, R=2000), zstats=TRUE, short=TRUE)
# }
# NOT RUN {
if (require(coda, quietly=TRUE)) {
set.seed(1)
COL.sacW.B0 <- spBreg_sac(CRIME ~ INC + HOVAL, data=COL.OLD, listw,
 Durbin=FALSE, control=list(ndraw=1500L, nomit=500L))
print(summary(COL.sacW.B0))
print(summary(impacts(COL.sacW.B0, tr=trMatc), zstats=TRUE, short=TRUE))
set.seed(1)
COL.sacW.B1 <- spBreg_sac(CRIME ~ INC + HOVAL, data=COL.OLD, listw,
 Durbin=TRUE, control=list(ndraw=1500L, nomit=500L))
print(summary(COL.sacW.B1))
print(summary(impacts(COL.sacW.B1, tr=trMatc), zstats=TRUE, short=TRUE))
}
# }
# NOT RUN {
COL.msacW.eig <- sacsarlm(CRIME ~ INC + HOVAL, data=COL.OLD, listw,
 type="sacmixed", control=list(pre_eig1=ev, pre_eig2=ev))
summary(COL.msacW.eig)
set.seed(1)
summary(impacts(COL.msacW.eig, tr=trMatc, R=2000), zstats=TRUE, short=TRUE)
COL.msacW1.eig <- sacsarlm(CRIME ~ INC + HOVAL, data=COL.OLD, listw,
 Durbin=TRUE, control=list(pre_eig1=ev, pre_eig2=ev))
summary(COL.msacW1.eig)
set.seed(1)
summary(impacts(COL.msacW1.eig, tr=trMatc, R=2000), zstats=TRUE, short=TRUE)
COL.msacW2.eig <- sacsarlm(CRIME ~ DISCBD + INC + HOVAL, data=COL.OLD, 
 listw, Durbin= ~ INC, control=list(pre_eig1=ev, pre_eig2=ev))
summary(COL.msacW2.eig)
summary(impacts(COL.msacW2.eig, tr=trMatc, R=2000), zstats=TRUE, short=TRUE)
# }
