Calculates the log-likelihood as well as the hazard, cumulative hazard, and survival functions over a user-supplied vector of time values, based on a BSGW model object.
# S3 method for bsgw
predict(object, newdata=NULL, tvec=NULL, burnin=object$control$burnin, ncores=1, ...)
# S3 method for predict.bsgw
summary(object, idx=1:length(object$median$survreg.scale), burnin=object$burnin, pval=0.05
, popmean=identical(idx,1:length(object$median$survreg.scale)), make.plot=TRUE, ...)
The function predict.bsgw returns an object of class "predict.bsgw" with the following fields:
Actual vector of time values (if any) used for prediction.
Same as the burnin argument supplied in the call to predict.bsgw.
List of median values for predicted entities. Currently, only medians for loglike and survreg.scale are produced. See 'Details' for explanation.
List of MCMC samples for predicted entities. Elements include h (hazard function), H (cumulative hazard function), S (survival function), survreg.scale (inverse of the shape parameter in rweibull), and loglike (model log-likelihood). All functions are evaluated over the time values specified in tvec.
Kaplan-Meier fit of the data used for prediction (if the data contains response fields).
The function summary.predict.bsgw returns a list with the following fields:
A list of lower-bound values for h, H, S, hr (hazard ratio of the idx[2] observation to the idx[1] observation), and S.diff (survival probability of idx[2] minus that of idx[1]). The last two are only included if length(idx)==2.
List of median values for the same entities described in lower.
List of upper-bound values for the same entities described in lower.
Lower-bound/median/upper-bound values for the population average of survival probability.
Kaplan-Meier fit associated with the prediction object (if available).
For predict.bsgw, an object of class "bsgw", usually the result of a call to bsgw; for summary.predict.bsgw, an object of class "predict.bsgw", usually the result of a call to predict.bsgw.
An optional data frame in which to look for variables with which to predict. If omitted, the fitted values (training set) are used.
An optional vector of time values over which time-dependent entities (hazard, cumulative hazard, survival) will be predicted. If omitted, only the time-independent entities (currently only log-likelihood) are calculated. If a single integer is provided for tvec, it is interpreted as the number of time points, equally spaced from 0 to object$tmax: tvec <- seq(from=0.0, to=object$tmax, length.out=tvec).
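The integer shorthand described above can be sketched in base R as follows; tmax below is a hypothetical stand-in for object$tmax (the largest observed time in the training data):

```r
# Hypothetical stand-in for object$tmax
tmax <- 1200

# Passing a single integer, e.g. tvec=5, is interpreted as
# "5 equally-spaced time points from 0 to tmax"
tvec <- 5
if (length(tvec) == 1 && tvec == round(tvec)) {
  tvec <- seq(from = 0.0, to = tmax, length.out = tvec)
}
tvec  # 0, 300, 600, 900, 1200
```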
Number of samples to discard from the beginning of each MCMC chain before calculating median value(s) for time-independent entities.
Number of cores to use for parallel prediction.
Further arguments to be passed to/from other methods.
Index of observations (rows of newdata or training data) for which to generate summary statistics. Default is the entire data.
Desired p-value, based on which lower/upper bounds are calculated. Default is 0.05.
Whether population averages must be calculated or not. By default, population averages are only calculated when the entire data is included in prediction.
Whether population mean and other plots must be created or not.
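The documentation does not spell out how pval maps to the reported bounds, but the common convention for MCMC output (assumed here) is that the lower and upper bounds are the pval/2 and 1 - pval/2 sample quantiles, with the median as the point estimate. A base-R sketch on a hypothetical vector of MCMC samples:

```r
# Hypothetical MCMC samples of a survival probability at one time point
set.seed(1)
smp <- rbeta(1000, shape1 = 8, shape2 = 2)

pval <- 0.05
lower <- quantile(smp, probs = pval / 2)      # 2.5% quantile
med   <- quantile(smp, probs = 0.5)           # median
upper <- quantile(smp, probs = 1 - pval / 2)  # 97.5% quantile
c(lower, med, upper)
```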
Alireza S. Mahani, Mansour T.A. Sharabiani
The time-dependent predicted objects (except loglike) are three-dimensional arrays of size nsmp x nt x nobs, where nsmp = number of MCMC samples, nt = number of time values in tvec, and nobs = number of rows in newdata. Therefore, even for modest data sizes, these objects can occupy large chunks of memory. For example, for nsmp=1000, nt=100, nobs=1000, the three objects h, H, S have a total size of 2.2GB. Since applying quantile to these arrays is time-consuming (as needed for calculation of medians and lower/upper bounds), we have left such summaries out of the scope of the predict function. Users can instead apply summary to the prediction object to obtain summary statistics.

During cross-validation-based selection of the shrinkage parameter lambda, there is no need to supply tvec since we only need the log-likelihood value. This significantly speeds up the parameter-tuning process. The function summary.predict.bsgw allows the user to calculate summary statistics for a subset of (or all) the data, if desired. This approach is in line with the overall philosophy of delaying data summarization until necessary, to avoid unnecessary loss of accuracy due to premature blending of information contained in individual samples.
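The memory figure quoted above follows from a quick back-of-the-envelope calculation: each of h, H, S is an nsmp x nt x nobs array of 8-byte double-precision values:

```r
nsmp <- 1000; nt <- 100; nobs <- 1000

# Size of one nsmp x nt x nobs array of 8-byte doubles, in GiB
bytes.per.array <- nsmp * nt * nobs * 8
gib.per.array <- bytes.per.array / 2^30  # ~0.75 GiB each

# Three such arrays (h, H, S) together
total.gib <- 3 * gib.per.array
round(total.gib, 1)  # 2.2
```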
library("bsgw")
library("survival")
data(ovarian)
est <- bsgw(Surv(futime, fustat) ~ ecog.ps + rx, ovarian
, control=bsgw.control(iter=400, nskip=100))
pred <- predict(est, tvec=100)
predsumm <- summary(pred, idx=1:10)