The following formula is used for VGLMs:
  \(-2 \mbox{log-likelihood} + 2 \mbox{trace}(V K)\),
  where \(V\) is the inverse of the EIM from the fitted model,
  and \(K\) is the sum of the outer products of the score vectors.
  Both \(V\) and \(K\) are order-\(p.VLM\) matrices.
  Here, \(V\) equals vcov(object),
  and \(K\) is computed by summing the outer products of the
  score vectors, which are obtained by multiplying the output of
  the deriv slot by the large VLM model matrix.
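  In symbols, writing \(U_i\) for the score vector of the \(i\)th
  observation (a notation introduced here only for illustration),
  \[ K = \sum_{i=1}^{n} U_i U_i^{\top}, \]
  so that the penalty term is \(2 \mbox{trace}(V K)\).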
  Hence, for the vast majority of models,
  the penalty is computed at the MLE and is empirical in nature.
  Theoretically, if the fitted model is the true model, then the
  TIC equals the AIC asymptotically, since the information matrix
  equality implies that \(\mbox{trace}(V K)\) converges to the
  number of parameters.
  When there are prior weights \(a_i\), the score vectors are divided
  by the square root of these weights,
  because \( (a_i U_i / \sqrt{a_i})^2 = a_i U_i^2 \).
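  The computation above can be sketched numerically for a toy case
  outside of VGAM. The following Python/NumPy snippet (the names, the
  seed, and the i.i.d. Poisson setting are all illustrative assumptions,
  not VGAM code) computes the TIC for a one-parameter Poisson model,
  where \(V\) and \(K\) collapse to scalars:

```python
import numpy as np

rng = np.random.default_rng(1)
y = rng.poisson(lam=3.0, size=500)
n = y.size

# MLE of the Poisson mean
lam_hat = y.mean()

# Log-likelihood up to the additive constant -sum(log(y_i!)),
# which cancels when comparing models fitted to the same data
loglik = np.sum(y * np.log(lam_hat) - lam_hat)

# V: inverse of the expected information at the MLE
# (a scalar here, since the EIM is n / lam_hat)
V = lam_hat / n

# K: sum of outer products of the per-observation scores
U = y / lam_hat - 1.0          # score of observation i
K = np.sum(U * U)

TIC = -2.0 * loglik + 2.0 * V * K
AIC = -2.0 * loglik + 2.0 * 1.0  # one estimated parameter

# Since the model is correctly specified, trace(VK) should be near 1,
# so TIC and AIC should nearly agree
print(round(V * K, 3), round(TIC - AIC, 3))
```

  Because the fitted model is the true model in this sketch, the
  empirical penalty \(\mbox{trace}(V K)\) is close to the parameter
  count, mirroring the asymptotic equality of TIC and AIC noted above.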
This code relies on the log-likelihood being defined, and computable,
  for the object.
  When comparing fitted objects, the smaller the TIC, the better the fit.
  The log-likelihood, and hence the TIC, is only defined up to an
  additive constant.
Currently,
  any estimated scale parameter (in GLM parlance) is ignored:
  its value is treated as unity.
  Also, this function is currently written only for vglm objects,
  and not for vgam, rrvglm, etc., objects.