VGAM (version 1.1-12)

summary.drrvglm: Summarizing Reduced Rank Vector Generalized Linear Model (RR-VGLM) and Doubly constrained RR-VGLM Fits

Description

These functions are all methods for class "drrvglm" or "summary.drrvglm" objects, or for class "rrvglm" or "summary.rrvglm" objects.

Usage

# S3 method for drrvglm
summary(object, correlation = FALSE, dispersion = NULL,
    digits = NULL, numerical = TRUE, h.step = 0.005, omit123 = FALSE,
    omit13 = FALSE, fixA = FALSE, presid = FALSE,
    signif.stars = getOption("show.signif.stars"),
    nopredictors = FALSE, eval0 = TRUE, ...)
# S3 method for summary.drrvglm
show(x, digits = NULL,
    quote = TRUE, prefix = "", signif.stars = NULL)
# S3 method for rrvglm
summary(object, correlation = FALSE, dispersion = NULL,
    digits = NULL, numerical = TRUE, h.step = 0.005, omit123 = FALSE,
    omit13 = FALSE, fixA = TRUE, presid = FALSE,
    signif.stars = getOption("show.signif.stars"),
    nopredictors = FALSE, upgrade = FALSE, ...)
# S3 method for summary.rrvglm
show(x, digits = NULL,
    quote = TRUE, prefix = "", signif.stars = NULL)

Value

summary.drrvglm returns an object of class "summary.drrvglm"; similarly, summary.rrvglm returns an object of class "summary.rrvglm".

Arguments

object

an object of class "drrvglm" or "rrvglm", a result of a call to rrvglm.

x

an object of class "summary.drrvglm" or "summary.rrvglm", a result of a call to summary.drrvglm or summary.rrvglm.

dispersion

Used mainly for GLMs. This argument is not really implemented in VGAM, so it should not be used.

correlation

See summaryvglm.

digits

See summaryvglm.

signif.stars

See summaryvglm.

presid, quote

See summaryvglm.

nopredictors

See summaryvglm.

upgrade

Logical. Upgrade the object to class "drrvglm"? Treating the object as a DRR-VGLM has advantages, since that framework is larger. The code for ordinary RR-VGLMs was written a long time ago, so it is a good idea to check that both approaches give the same answer.
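
For example, the two summaries can be compared directly (a minimal sketch; fit.rr is a hypothetical "rrvglm" object returned by rrvglm()):

    summary(fit.rr)                  # Treated as an ordinary RR-VGLM
    summary(fit.rr, upgrade = TRUE)  # Upgraded to the DRR-VGLM framework
    # The two sets of SEs should agree closely if all is well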

numerical

Logical; use a finite-difference approximation for the partial derivatives? If FALSE then theoretical formulas are used instead (however, this option may no longer be implemented).

h.step

Numeric, positive and close to 0. If numerical then this is the forward step for each finite difference approximation. That is, it plays the role of \(h\) in \((f(x+h)-f(x))/h\) for some function \(f\). If the overall variance-covariance matrix is not positive-definite, varying this argument can make a difference, e.g., increasing it to 0.01 is recommended.
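
For instance, a coarser step can be tried when the default fails (a hedged sketch; fit is a hypothetical "rrvglm" or "drrvglm" object):

    summary(fit)                 # Default forward-difference step, h.step = 0.005
    summary(fit, h.step = 0.01)  # A coarser step sometimes yields a
                                 # positive-definite V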

fixA

Logical, if TRUE then the largest block matrix is for B1 and C, else it is for A and B1. This should not make any difference because both estimates of B1 should be extremely similar, including the SEs.

omit13

Logical, if TRUE then the (1,3) block matrix is set to O. That is, A and C are assumed to be asymptotically uncorrelated. Setting this to TRUE is an option when V (see below) is not positive-definite. If this fails, another option that is often better is to set omit123 = TRUE.

omit123

Logical. If TRUE then two block matrices are set to O (blocks (1,2) and (1,3), or blocks (1,3) and (2,3), depending on fixA). This will almost surely result in an overall variance-covariance matrix that is positive-definite; however, the SEs will be biased. This argument is more extreme than omit13.
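
A possible fallback sequence is sketched below (fit is a hypothetical fitted object; the approximations become progressively cruder):

    summary(fit)                  # Full V; may fail to be positive-definite
    summary(fit, omit13 = TRUE)   # Zero out the (1,3) block only
    summary(fit, omit123 = TRUE)  # Zero out two blocks; the SEs will be biased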

prefix

See summaryvglm.

eval0

Logical. Check if V is positive-definite? That is, all its eigenvalues are positive.

...

Logical argument check.2 might work here. If TRUE then some quantities are printed out, for checking and debugging.
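
For example (a sketch; fit is a hypothetical fitted object, and check.2 is simply passed through ...):

    summary(fit, check.2 = TRUE)  # May print extra quantities for debugging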

Author

T. W. Yee.

Warning

DRR-VGLMs are a recent development, so it will take some time for things to be totally ironed out. RR-VGLMs were developed a long time ago and are better established; however, they have only recently been documented here.

Details

Most of this document concerns DRR-VGLMs, but it applies equally well to RR-VGLMs as a special case.

The overall variance-covariance matrix

The overall variance-covariance matrix (called V below) is computed. Since the parameters comprise the elements of the matrices A, B1 and C (called here block matrices 1, 2, 3 respectively), and an alternating algorithm is used for estimation, there are two overlapping submodels that are fitted within an IRLS algorithm. These have blocks 1 and 2, and blocks 2 and 3, so that B1 is common to both. They are combined into one large overall variance-covariance matrix. Argument fixA specifies which submodel the B1 block is taken from. Block (1,3) is the most difficult to compute, and numerical approximations based on first derivatives are used for it by default.
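
The assembled V is what the usual accessor returns; here is a minimal sketch of inspecting it (fit is a hypothetical "drrvglm" object, and vcov() is assumed to dispatch to vcovdrrvglm, listed under See Also):

    V <- vcov(fit)   # The overall variance-covariance matrix
    eigen(V, symmetric = TRUE, only.values = TRUE)$values  # All should be positive
    sqrt(diag(V))    # The SEs reported by summary()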

Sometimes the computed V is not positive-definite. If so, then the standard errors will be NA. To avoid this problem, try varying h.step or refitting the model with a different Index.corner. Arguments omit13 and omit123 can also be used to give approximate answers. If V is not positive-definite then this may indicate that the model does not fit the data very well, e.g., Rank is not a good value. There are potentially many reasons why the model may be ill-conditioned. Try several options and set trace = TRUE to monitor convergence---this is informative about how well the model and data agree.
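
A refit along these lines might look like the following hedged sketch (ymat, mydata and the covariates are hypothetical; Index.corner and trace are arguments of rrvglm.control()):

    fit2 <- rrvglm(ymat ~ x1 + x2 + x3 + x4, poissonff, data = mydata,
                   control = rrvglm.control(Rank = 1, Index.corner = 2,
                                            trace = TRUE))
    summary(fit2, h.step = 0.01)  # Also try a coarser finite-difference step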

How can one fit an ordinary RR-VGLM as a DRR-VGLM? If one uses corner constraints (the default) then one should input H.A as a list containing Rank diag(M) matrices---one for each column of A. Since Corner = TRUE by default, object@H.A.alt has certain columns deleted due to the corner constraints. In contrast, object@H.A.thy is the H.A that was inputted. FYI, the suffix alt indicates the alternating algorithm, while the suffix thy stands for theory.
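
To see the effect of the corner constraints on the constraint matrices for A, compare the two slots directly (a sketch; dfit is a hypothetical "drrvglm" object, such as dfit1 in the Examples below):

    dfit@H.A.thy  # H.A as inputted: one matrix per column of A
    dfit@H.A.alt  # The same matrices with columns deleted by the corner constraints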

References

Chapter 5 of: Yee, T. W. (2015). Vector Generalized Linear and Additive Models: With an Implementation in R. New York, USA: Springer. Sections 5.2.2 and 5.3 are particularly relevant.

See Also

rrvglm, rrvglm.control, vcovdrrvglm, CM.free, summaryvglm, summary.rrvglm-class, summary.drrvglm-class.

Examples

library(VGAM)

if (FALSE) {  # Fit a rank-1 RR-VGLM as a DRR-VGLM.
set.seed(1); n <- 1000; S <- 6  # S must be even
myrank <- 1
rdata <- data.frame(x1 = runif(n), x2 = runif(n),
           x3 = runif(n), x4 = runif(n))
dval <- ncol(rdata)  # Number of covariates
# Involves x1, x2, ... a rank-1 model:
ymatrix <- with(rdata,
  matrix(rpois(n*S, exp(3 + x1 - 0.5*x2)), n, S))
H.C <- vector("list", dval)  # Ordinary "rrvglm"
for (i in 1:dval) H.C[[i]] <- CM.free(myrank)
names(H.C) <- paste0("x", 1:dval)
H.A <- list(CM.free(S))  # rank-1

rfit1 <- rrvglm(ymatrix ~ x1 + x2 + x3 + x4,
           poissonff, rdata, trace = TRUE)
class(rfit1)
dfit1 <- rrvglm(ymatrix ~ x1 + x2 + x3 + x4,
           poissonff, rdata, trace = TRUE,
           H.A = H.A,    # drrvglm
           H.C = H.C)    # drrvglm
class(dfit1)
Coef(rfit1)  # The RR-VGLM is the same as
Coef(dfit1)  # the DRR-VGLM.
max(abs(predict(rfit1) - predict(dfit1)))  # 0
abs(logLik(rfit1) - logLik(dfit1))  # 0
summary(rfit1)
summary(dfit1)
}
