
Usage

negbinomial(lmu = "loge", lk = "loge", emu = list(), ek = list(),
            imu = NULL, ik = NULL, quantile.probs = 0.75,
            nsimEIM = 100, cutoff = 0.995,
            Maxiter = 5000, deviance.arg = FALSE, method.init = 1,
            parallel = FALSE, shrinkage.init = 0.95, zero = -2)
Arguments

lmu, lk: Link functions applied to the $\mu$ and $k$ parameters. See Links for more choices. Note that the $k$ parameter is the size argument of rnbinom.

emu, ek: List. Extra argument for each of the links. See earg in Links for general information.

imu, ik: Optional initial values for the mean and $k$. If failure to converge occurs, try different values (and/or use method.init). For an $S$-column response, ik can be of length $S$. A value NULL means an initial value is computed internally.

quantile.probs: Passed into the probs argument of quantile when method.init = 3 to obtain an initial value for the mean.

nsimEIM: See CommonVGAMffArguments for more information and the notes below.

deviance.arg: Logical. If TRUE, the deviance function is attached to the object. Under ordinary circumstances, it should be left alone because it really assumes the index parameter is at the maximum likelihood estimate. Consequently, one cannot use that criterion to minimize within the IRLS algorithm.

method.init: An integer with value 1 or 2 or 3 which specifies the initialization method for the $\mu$ parameter. If failure to converge occurs try another value and/or else specify a value for shrinkage.init.

parallel, shrinkage.init, zero: See CommonVGAMffArguments for more information. Setting parallel = TRUE is useful in order to get something similar to quasipoissonff. With the default zero = -2, each $k$ (after lk is applied) is modelled as a single unknown number, i.e., intercept-only.
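As a minimal sketch of how some of these initialization arguments fit together (a hedged illustration: the data are simulated and the argument values are arbitrary, not recommendations):

library(VGAM)
set.seed(1)
adata <- data.frame(x = runif(100))
adata <- transform(adata, y = rnbinom(100, mu = exp(1 + x), size = 2))
# Use the third initialization method (initial mean taken from the
# 75th percentile of the response) and supply a starting value for k:
fit.a <- vglm(y ~ x, negbinomial(method.init = 3, quantile.probs = 0.75, ik = 2),
              adata, trace = TRUE)
coef(fit.a, matrix = TRUE)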
Value

An object of class "vglmff" (see vglmff-class). The object is used by modelling functions such as vglm, rrvglm and vgam.
Warning

If the data are Poisson or close to Poisson (so that $k$ is very large), numerical problems may occur; in that case it may be better to use poissonff or quasipoissonff.
This function is fragile; the maximum likelihood estimate of the index parameter is fraught (see Lawless, 1987). In general, quasipoissonff is more robust than this function. Other alternatives are to fit an NB-1 or an RR-NB model; see Yee (2010). Assigning values to the ik argument may lead to a local solution; smaller values are preferred over large values when using this argument.

Yet to do: write a family function which uses the method of moments estimator for $k$.
Details

The fitted.values slot of the object contains the estimated value of the $\mu$ parameter, i.e., of the mean $E(Y)$. It is common for some authors to use $\alpha = 1/k$ as the ancillary or heterogeneity parameter, so common alternatives for lk are nloge and reciprocal.
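For instance, a minimal sketch (simulated data; object names are illustrative) of working with $\alpha$ directly by modelling $-\log(k) = \log(\alpha)$ via the nloge link:

set.seed(10)
hdata <- data.frame(x = runif(200))
hdata <- transform(hdata, y = rnbinom(200, mu = exp(2 + x), size = 2))
fit.h <- vglm(y ~ x, negbinomial(lk = "nloge"), hdata, trace = TRUE)
coef(fit.h, matrix = TRUE)  # Second column is nloge(k) = log(1/k) = log(alpha)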
The negative binomial distribution can be coerced into the classical GLM framework, with one of the parameters being of interest and the other treated as a nuisance/scale parameter (this is implemented in the MASS library). This negbinomial family function treats both parameters on the same footing, and estimates them both by full maximum likelihood estimation. Simulated Fisher scoring is employed as the default (see the nsimEIM argument).

The parameters $\mu$ and $k$ are independent (diagonal EIM), but the confidence region for $k$ is extremely skewed, so that its standard error is often of no practical use. The parameter $1/k$ has been used as a measure of aggregation.
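For reference, the parametrization being estimated is the standard mu/size form used by rnbinom (stated here as a known fact about that parametrization, not quoted from this page):

$$
P(Y = y) = \frac{\Gamma(y + k)}{\Gamma(k)\, y!}
\left( \frac{\mu}{\mu + k} \right)^{y}
\left( \frac{k}{\mu + k} \right)^{k},
\qquad y = 0, 1, 2, \ldots,
$$

so that $E(Y) = \mu$ and $\mathrm{Var}(Y) = \mu + \mu^{2}/k$; as $k \to \infty$ the Poisson($\mu$) is obtained.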
This family function handles a multivariate response, in which case each column of the matrix can be regarded as a different species, say. The default, zero = -2, means that all species have a $k$ equalling a (different) intercept only.
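A minimal sketch (simulated two-species response; all object names are illustrative) showing the effect of the default zero = -2:

set.seed(123)
spdata <- data.frame(x2 = runif(300))
spdata <- transform(spdata,
                    y1 = rnbinom(300, mu = exp(2 + x2), size = exp(0)),
                    y2 = rnbinom(300, mu = exp(1 - x2), size = exp(1)))
fit.sp <- vglm(cbind(y1, y2) ~ x2, negbinomial, spdata)
constraints(fit.sp)          # x2 is absent from the log(k) linear predictors
coef(fit.sp, matrix = TRUE)  # The x2 row has zeros in the log(k) columns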
References

Bliss, C. and Fisher, R. A. (1953) Fitting the negative binomial distribution to biological data. Biometrics 9, 174--200.

Hilbe, J. M. (2007) Negative Binomial Regression. Cambridge: Cambridge University Press.

Lawless, J. F. (1987) Negative binomial and mixed Poisson regression. The Canadian Journal of Statistics 15, 209--225.

Yee, T. W. (2010) Two-parameter reduced-rank vector generalized linear models. In preparation.
See Also

quasipoissonff, poissonff, zinegbinomial, posnegbinomial, invbinomial, rnbinom, nbolf, rrvglm, cao, cqo, CommonVGAMffArguments.

Examples

# Example 1: apple tree data
appletree <- data.frame(y = 0:7, w = c(70, 38, 17, 10, 9, 3, 2, 1))
fit <- vglm(y ~ 1, negbinomial, appletree, weights = w)
summary(fit)
coef(fit, matrix = TRUE)
Coef(fit)
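As a hedged cross-check of Example 1, the same intercept-only model can be fitted with the separate MASS package mentioned in the Details above; glm.nb treats $k$ as a nuisance parameter, so closely matching (though not necessarily identical) answers are expected:

library(MASS)
fit.mass <- glm.nb(y ~ 1, data = appletree, weights = w)
coef(fit.mass)  # Intercept is log(mu); cf. Coef(fit) above
fit.mass$theta  # Estimate of k (called theta in MASS)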
# Example 2: simulated data with multivariate response
ndata <- data.frame(x = runif(nn <- 500))
ndata <- transform(ndata, y1 = rnbinom(nn, mu = exp(3+x), size = exp(1)),
                          y2 = rnbinom(nn, mu = exp(2-x), size = exp(0)))
fit1 <- vglm(cbind(y1, y2) ~ x, negbinomial, ndata, trace = TRUE)
coef(fit1, matrix = TRUE)
# Example 3: large counts so definitely use the nsimEIM argument
ndata <- transform(ndata, y3 = rnbinom(nn, mu = exp(12+x), size = exp(1)))
with(ndata, range(y3)) # Large counts
fit2 <- vglm(y3 ~ x, negbinomial(nsimEIM = 100), ndata, trace = TRUE)
coef(fit2, matrix = TRUE)
# Example 4: an NB-1 model to estimate a negative binomial with Var(Y) = phi0 * mu
nn <- 1000 # Number of observations
phi0 <- 10 # Specify this; should be greater than unity
delta0 <- 1 / (phi0 - 1)
mydata <- data.frame(x2 = runif(nn), x3 = runif(nn))
mydata <- transform(mydata, mu = exp(2 + 3 * x2 + 0 * x3))
# With size = delta0 * mu, Var(Y) = mu + mu^2/(delta0*mu) = (1 + 1/delta0) * mu = phi0 * mu
mydata <- transform(mydata, y3 = rnbinom(nn, mu = mu, size = delta0 * mu))
plot(y3 ~ x2, data = mydata, pch = "+", col = "blue",
     main = paste("Var(Y) = ", phi0, " * mu", sep = ""), las = 1)
nb1 <- vglm(y3 ~ x2 + x3, negbinomial(parallel = TRUE, zero = NULL),
            mydata, trace = TRUE)
# Extracting out some quantities:
cnb1 <- coef(nb1, matrix = TRUE)
mydiff <- (cnb1["(Intercept)", "log(k)"] - cnb1["(Intercept)", "log(mu)"])
delta0.hat <- exp(mydiff)
(phi.hat <- 1 + 1 / delta0.hat) # MLE of phi
summary(nb1)
# Obtain a 95 percent confidence interval for phi0:
myvec <- rbind(-1, 1, 0, 0)
(se.mydiff <- sqrt(t(myvec) %*% vcov(nb1) %*% myvec))
ci.mydiff <- mydiff + c(-1.96, 1.96) * se.mydiff
ci.delta0 <- ci.exp.mydiff <- exp(ci.mydiff)
(ci.phi0 <- 1 + 1 / rev(ci.delta0)) # The 95 percent conf. interval for phi0
confint_nb1(nb1) # Quick way to get it
summary(glm(y3 ~ x2 + x3, quasipoisson, mydata))$disper # cf. moment estimator