Dynamic predictions for the longitudinal data sub-model based on an
observed measurement history for the longitudinal outcomes of a new subject
are based on either a first-order approximation or Monte Carlo simulation
approach, both of which are described in Rizopoulos (2011). Namely, given
that the subject was last observed at time t, we calculate the
conditional expectation of each longitudinal outcome at time u as
$$E[y_k(u) | T \ge t, y, \theta] \approx x_k^T(u)\beta_k +
z_k^T(u)\hat{b}_k,$$
where \(T\) is the failure time for the new subject, and \(y\) is the
stacked vector of longitudinal measurements up to time t.
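As a numerical illustration, the conditional expectation above is just the sum of two inner products. The sketch below (in Python, not the package's R code) uses invented design vectors and parameter estimates for a single outcome k; none of the values come from a fitted model:

```python
import numpy as np

# All values below are assumed for illustration only.
x_u = np.array([1.0, 2.5])        # fixed-effects design x(u): intercept, time u = 2.5
z_u = np.array([1.0, 2.5])        # random-effects design z(u)
beta_k = np.array([10.0, -0.5])   # assumed fixed-effect estimates for outcome k
b_hat_k = np.array([0.8, 0.1])    # assumed subject-specific random-effects estimate

# E[y_k(u) | T >= t, y, theta] ~ x(u)' beta_k + z(u)' b_hat_k
y_pred = x_u @ beta_k + z_u @ b_hat_k
```

With these assumed values the prediction is simply the linear predictor evaluated at time u; in the simulated approach below, the same expression is re-evaluated for each draw of \(\theta\) and \(b\).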
First-order predictions
For type = "first-order", \(\hat{b}\) is the mode of the posterior
distribution of the random effects given by
$$\hat{b} = {\arg \max}_b f(b | y, T \ge t; \theta).$$
The predictions are based on plugging in \(\theta = \hat{\theta}\), which
is extracted from the mjoint object.
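The posterior mode can be sketched numerically. The toy example below assumes a Gaussian measurement model with Gaussian random effects, so the negative log-posterior is a simple quadratic; the design matrix Z, covariance D, and residual variance are all invented for illustration, and scipy's general-purpose optimiser stands in for whatever the package uses internally:

```python
import numpy as np
from scipy.optimize import minimize

# Toy setup: y = Z b + noise, b ~ N(0, D). All values assumed for illustration.
Z = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]])  # random-effects design, 3 visits
y = np.array([2.0, 2.4, 3.1])                        # observed measurements up to time t
D = np.eye(2)                                        # random-effects covariance (assumed)
sigma2 = 0.25                                        # residual variance (assumed)

def neg_log_posterior(b):
    # -log f(b | y; theta) up to a constant: Gaussian likelihood plus Gaussian prior
    resid = y - Z @ b
    return 0.5 * resid @ resid / sigma2 + 0.5 * b @ np.linalg.solve(D, b)

# Posterior mode: b_hat = argmax_b f(b | y; theta) = argmin_b -log f(b | y; theta)
b_hat = minimize(neg_log_posterior, x0=np.zeros(2), method="BFGS").x
```

In this Gaussian toy case the mode coincides with the closed-form BLUP, \((Z^T Z/\sigma^2 + D^{-1})^{-1} Z^T y/\sigma^2\), which makes the optimiser easy to check.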
Monte Carlo simulation predictions
For type = "simulated", \(\theta\) is drawn from a multivariate
normal distribution with mean vector \(\hat{\theta}\) and variance-covariance
matrix both extracted from the fitted mjoint object via the
coef() and vcov() functions. \(\hat{b}\) is drawn from
the posterior distribution of the random effects
$$f(b | y, T \ge t; \theta)$$
by means of a Metropolis-Hastings algorithm with an independent multivariate
non-central t-distribution proposal, with
non-centrality parameter \(\hat{b}\) from the first-order prediction and
variance-covariance matrix equal to scale \(\times\) the inverse
of the negative Hessian of the posterior distribution. The choice of
scale can be used to tune the acceptance rate of the
Metropolis-Hastings sampler. This simulation algorithm is iterated M
times, at each iteration calculating the conditional expectation of each
longitudinal outcome at time u.
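The Metropolis-Hastings step above can be sketched as an independence sampler. Everything in this sketch is illustrative, not joineRML's implementation: the target log-posterior is replaced by a Gaussian centred at an assumed mode b_hat, and Sigma, scale, and the proposal degrees of freedom are invented values:

```python
import numpy as np
from scipy.stats import multivariate_t

rng = np.random.default_rng(1)

# Assumed quantities from a first-order fit (illustrative values only):
b_hat = np.array([1.68, 0.68])          # posterior mode (non-centrality parameter)
Sigma = np.array([[0.05, -0.02],
                  [-0.02, 0.04]])       # inverse of the negative Hessian at the mode
scale = 1.6                             # tuning constant for the proposal spread

def log_posterior(b):
    # Stand-in for log f(b | y, T >= t; theta): a Gaussian centred at b_hat
    d = b - b_hat
    return -0.5 * d @ np.linalg.solve(Sigma, d)

# Independent multivariate t proposal with covariance scale * Sigma
proposal = multivariate_t(loc=b_hat, shape=scale * Sigma, df=4)

b_cur, accepted, draws = b_hat.copy(), 0, []
for _ in range(2000):
    b_new = proposal.rvs(random_state=rng)
    # Independence sampler: the proposal density enters the acceptance ratio
    log_alpha = (log_posterior(b_new) - log_posterior(b_cur)
                 + proposal.logpdf(b_cur) - proposal.logpdf(b_new))
    if np.log(rng.uniform()) < log_alpha:
        b_cur, accepted = b_new, accepted + 1
    draws.append(b_cur)
acc_rate = accepted / 2000
```

Increasing scale widens the proposal and lowers the acceptance rate, which is exactly the tuning role the text describes; in the full algorithm each retained draw of \(b\) (together with a fresh draw of \(\theta\)) would be plugged into the linear predictor to give one simulated prediction.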