These functions perform \(\chi^2\) tests or Monte Carlo tests
of goodness-of-fit for a point process model, based on quadrat counts.
The function quadrat.test is generic, with methods for point patterns (class "ppp"), split point patterns (class "splitppp"), point process models (class "ppm") and quadrat count tables (class "quadratcount").
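
For example, the available methods can be listed directly (a minimal sketch, assuming the spatstat package is attached):

    library(spatstat)
    # list the S3 methods registered for the generic
    methods(quadrat.test)
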
If X is a point pattern, we test the null hypothesis that the data pattern is a realisation of Complete Spatial Randomness (the uniform Poisson point process). Marks in the point pattern are ignored. (If lambda is given, then the null hypothesis is the Poisson process with intensity lambda.)
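
For example (a sketch, assuming spatstat is attached; the intensity function passed to lambda is purely illustrative, and lambda is assumed here to accept a function(x,y)):

    library(spatstat)
    # test the built-in cells pattern against CSR, using a 3 x 3 grid of quadrats
    quadrat.test(cells, nx = 3, ny = 3)
    # test against an inhomogeneous Poisson null with a specified intensity
    lam <- function(x, y) { 80 * exp(-2 * x) }   # hypothetical intensity function
    quadrat.test(cells, nx = 3, ny = 3, lambda = lam)
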
If X is a split point pattern, then for each of the component point patterns (taken separately) we test the null hypothesis of Complete Spatial Randomness. See quadrat.test.splitppp for documentation.
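
For example, a marked point pattern can be split by mark and each component tested separately (a sketch using the built-in amacrine data):

    library(spatstat)
    # split() on a marked pattern gives one component pattern per mark level
    Y <- split(amacrine)
    # one test of CSR per component; extra arguments are passed to quadrat.test.ppp
    quadrat.test(Y, nx = 3, ny = 2)
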
If X is a fitted point process model, then it should be a Poisson point process model. The data to which this model was fitted are extracted from the model object, and are treated as the data point pattern for the test. We test the null hypothesis that the data pattern is a realisation of the (inhomogeneous) Poisson point process specified by X.
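
A sketch of the fitted-model case, assuming the model-formula interface of ppm:

    library(spatstat)
    # fit an inhomogeneous Poisson model with log-linear intensity in the x coordinate
    fit <- ppm(bei ~ x)
    # goodness-of-fit test of the fitted model, based on a 4 x 2 grid of quadrats
    quadrat.test(fit, nx = 4, ny = 2)
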
In all cases, the window of observation is divided into tiles, and the number of data points in each tile is counted, as described in quadratcount. The quadrats are rectangular by default, or may be regions of arbitrary shape specified by the argument tess.
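
For example, the counts themselves can be inspected with quadratcount, and irregularly shaped quadrats can be supplied via tess (hexagonal tiles from hextess are an assumption here):

    library(spatstat)
    # counts in a 4 x 4 grid of rectangular quadrats
    quadratcount(cells, nx = 4, ny = 4)
    # quadrats of arbitrary shape: a hexagonal tessellation of the window
    H <- hextess(Window(cells), 0.2)   # hexagons of side 0.2 (illustrative choice)
    quadrat.test(cells, tess = H)
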
The expected number of points in each quadrat is also calculated,
as determined by CSR (in the first case) or by the fitted model
(in the second case).
Then the Pearson \(X^2\) statistic
$$
X^2 = \sum \frac{(\text{observed} - \text{expected})^2}{\text{expected}}
$$
is computed.
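
Under CSR with equal-area rectangular quadrats, the expected count is the same in every tile, so the statistic can be reproduced by hand (a sketch of the arithmetic, not of the internal implementation):

    library(spatstat)
    observed <- as.vector(quadratcount(cells, nx = 3, ny = 3))
    # under CSR, expected counts are proportional to tile area; here all tiles are equal
    expected <- rep(npoints(cells) / length(observed), length(observed))
    X2 <- sum((observed - expected)^2 / expected)
    X2
    quadrat.test(cells, nx = 3, ny = 3)$statistic   # should agree with X2
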
If method="Chisq"
then a \(\chi^2\) test of
goodness-of-fit is performed by comparing the test statistic
to the \(\chi^2\) distribution
with \(m-k\) degrees of freedom, where m
is the number of
quadrats and \(k\) is the number of fitted parameters
(equal to 1 for quadrat.test.ppp
). The default is to
compute the two-sided \(p\)-value, so that the test will
be declared significant if \(X^2\) is either very large or very
small. One-sided \(p\)-values can be obtained by specifying the
alternative
. An important requirement of the
\(\chi^2\) test is that the expected counts in each quadrat
be greater than 5.
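
For example (the values "regular" and "clustered" are assumed here to name the one-sided alternatives):

    library(spatstat)
    # two-sided chi-squared test (the default)
    quadrat.test(cells, nx = 3, ny = 3, method = "Chisq")
    # one-sided alternatives: clustering inflates X^2, regularity deflates it
    quadrat.test(cells, nx = 3, ny = 3, alternative = "clustered")
    quadrat.test(cells, nx = 3, ny = 3, alternative = "regular")
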
If method="MonteCarlo"
then a Monte Carlo test is performed,
obviating the need for all expected counts to be at least 5. In the
Monte Carlo test, nsim
random point patterns are generated
from the null hypothesis (either CSR or the fitted point process
model). The Pearson \(X^2\) statistic is computed as above.
The \(p\)-value is determined by comparing the \(X^2\)
statistic for the observed point pattern, with the values obtained
from the simulations. Again the default is to
compute the two-sided \(p\)-value.
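
For example, a Monte Carlo test with a finer grid of quadrats, where some expected counts would be too small for the \(\chi^2\) approximation (a sketch):

    library(spatstat)
    # Monte Carlo test based on 999 simulated CSR patterns
    quadrat.test(cells, nx = 5, ny = 5, method = "MonteCarlo", nsim = 999)
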
If conditional is TRUE then the simulated samples are
generated from the multinomial distribution with the number of “trials”
equal to the number of observed points and the vector of probabilities
equal to the expected counts divided by the sum of the expected counts.
Otherwise the simulated samples are independent Poisson counts, with
means equal to the expected counts.
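
For example, to simulate unconditionally, so that the simulated counts are independent Poisson rather than multinomial:

    library(spatstat)
    quadrat.test(cells, nx = 5, ny = 5, method = "MonteCarlo",
                 conditional = FALSE, nsim = 999)
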
If the argument CR is given, then instead of the Pearson \(X^2\) statistic, the Cressie-Read (1984) power divergence test statistic
$$
2nI = \frac{2}{\lambda(\lambda+1)}
\sum_i X_i \left[ \left( \frac{X_i}{E_i} \right)^\lambda - 1 \right]
$$
is computed, where \(X_i\) is the \(i\)th observed count and \(E_i\) is the corresponding expected count, and the exponent \(\lambda\) is equal to CR.
The value CR=1 gives the Pearson \(X^2\) statistic; CR=0 gives the likelihood ratio test statistic \(G^2\); CR=-1/2 gives the Freeman-Tukey statistic \(T^2\); CR=-1 gives the modified likelihood ratio test statistic \(GM^2\); and CR=-2 gives Neyman's modified statistic \(NM^2\).
In all cases the asymptotic distribution of this test statistic is
the same \(\chi^2\) distribution as above.
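
For example, members of the power divergence family other than Pearson's \(X^2\) can be selected by setting CR (a sketch):

    library(spatstat)
    # likelihood ratio (G^2) version of the quadrat test
    quadrat.test(cells, nx = 3, ny = 3, CR = 0)
    # Freeman-Tukey (T^2) version
    quadrat.test(cells, nx = 3, ny = 3, CR = -1/2)
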
The return value is an object of class "htest". Printing the object gives comprehensible output about the outcome of the test. The return value also belongs to the special class "quadrat.test". Plotting the object will display the quadrats, annotated by their observed and expected counts and the Pearson residuals. See the examples.
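
A sketch of inspecting and plotting the returned object (the add argument to the plot method is an assumption here):

    library(spatstat)
    tst <- quadrat.test(cells, nx = 3, ny = 3)
    tst            # printing gives a readable summary of the test
    tst$p.value    # components of an "htest" object can be extracted
    # display the quadrats, annotated by observed and expected counts
    # and Pearson residuals, over the data pattern
    plot(cells, pch = "+")
    plot(tst, add = TRUE)
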