Simulate a single adaptive clinical trial with a time-to-event endpoint
survival_adapt(
  hazard_treatment,
  hazard_control = NULL,
  cutpoints = 0,
  N_total,
  lambda = 0.3,
  lambda_time = 0,
  interim_look = NULL,
  end_of_study,
  prior = c(0.1, 0.1),
  block = 2,
  rand_ratio = c(1, 1),
  prop_loss = 0,
  alternative = "greater",
  h0 = 0,
  Fn = 0.05,
  Sn = 0.9,
  prob_ha = 0.95,
  N_impute = 10,
  N_mcmc = 10,
  method = "logrank",
  imputed_final = FALSE,
  debug = FALSE
)
hazard_treatment: vector. Constant hazard rates under the treatment arm.
hazard_control: vector. Constant hazard rates under the control arm.
cutpoints: vector. Times at which the baseline hazard changes. Default is cutpoints = 0, which corresponds to a simple (non-piecewise) exponential model.
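For illustration, a piecewise model whose hazard changes at time 12 could be specified as follows (the rates shown are hypothetical):

# Hypothetical piecewise specification: the hazard changes at t = 12, so
# each arm supplies one rate per interval, (0, 12] and (12, Inf)
cutpoints <- c(0, 12)
hazard_treatment <- c(0.010, 0.005)
hazard_control <- c(0.020, 0.010)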
N_total: integer. Maximum allowable sample size.
lambda: vector. Enrollment rates across simulated enrollment times. See enrollment for more details.
lambda_time: vector. Enrollment time(s) at which the enrollment rates change. Must be the same length as lambda. See enrollment for more details.
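For example, a hypothetical accrual ramp-up of 0.3 subjects per time unit for the first 6 time units and 0.7 thereafter could be specified as:

# Hypothetical enrollment ramp-up (lambda_time matches lambda in length)
lambda <- c(0.3, 0.7)
lambda_time <- c(0, 6)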
interim_look: vector. Sample size for each interim look. Note: the maximum sample size should not be included.
end_of_study: scalar. Length of the study; i.e. the time at which the endpoint will be evaluated.
prior: vector. The prior distributions for the piecewise hazard rate parameters are each \(\textrm{Gamma}(a_0, b_0)\), with specified (known) hyperparameters \(a_0\) and \(b_0\). The default non-informative prior distribution is \(\textrm{Gamma}(0.1, 0.1)\), which is specified by setting prior = c(0.1, 0.1).
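As a point of reference, the Gamma prior is conjugate for a constant-hazard (exponential) model, so within each interval the posterior is available in closed form. A minimal sketch of the update, using a hypothetical event count and exposure time:

# Conjugate update for one hazard interval: prior Gamma(a0, b0) with
# d events over total exposure T gives posterior Gamma(a0 + d, b0 + T)
a0 <- 0.1; b0 <- 0.1   # prior = c(0.1, 0.1)
d <- 25                # hypothetical number of events
T_exp <- 1800          # hypothetical total follow-up time
posterior_draws <- rgamma(1000, shape = a0 + d, rate = b0 + T_exp)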
block: scalar. Block size for generating the randomization schedule.
rand_ratio: vector. Randomization allocation for the ratio of control to treatment. Integer values mapping the size of the block. See randomization for more details.
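For example, block = 3 with rand_ratio = c(1, 2) would allocate 1 control and 2 treatment subjects per block. A minimal sketch of how such a blocked schedule could be generated (illustrative only, not the package's internal routine):

# Illustrative blocked schedule: each block of 3 holds 1 control ("C")
# and 2 treatment ("T") assignments in random order
make_block <- function() sample(c(rep("C", 1), rep("T", 2)))
schedule <- unlist(replicate(4, make_block(), simplify = FALSE))  # 4 blocks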
prop_loss: scalar. Overall proportion of subjects lost to follow-up. Defaults to zero.
alternative: character. The string specifying the alternative hypothesis; must be one of "greater" (default), "less", or "two.sided".
h0: scalar. Null hypothesis value of \(p_\textrm{treatment} - p_\textrm{control}\) when method = "bayes". Default is h0 = 0. The argument is ignored when method = "logrank" or method = "cox"; in those cases the usual test of non-equal hazards is assumed.
Fn: vector of [0, 1] values. Each element is the probability threshold to stop at the \(i\)-th look early for futility. If there are no interim looks (i.e. interim_look = NULL), then Fn is not used in the simulations or analysis. The length of Fn should be the same as interim_look, else the values are recycled.
Sn: vector of [0, 1] values. Each element is the probability threshold to stop at the \(i\)-th look early for expected success. If there are no interim looks (i.e. interim_look = NULL), then Sn is not used in the simulations or analysis. The length of Sn should be the same as interim_look, else the values are recycled.
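For example, per-look thresholds for a design with two interim looks could be supplied as follows (values hypothetical):

# Hypothetical thresholds for two interim looks at n = 300 and n = 400
interim_look <- c(300, 400)
Fn <- c(0.05, 0.10)  # futility threshold at each look
Sn <- c(0.90, 0.90)  # expected success threshold at each look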
prob_ha: scalar in [0, 1]. Probability threshold for the alternative hypothesis.
N_impute: integer. Number of imputations for Monte Carlo simulation of missing data.
N_mcmc: integer. Number of samples to draw from the posterior distribution when using a Bayesian test (method = "bayes").
method: character. For an imputed data set (or the final data set after follow-up is complete), whether the analysis should be a log-rank test (method = "logrank"), a Cox proportional hazards regression model Wald test (method = "cox"), or a fully Bayesian analysis (method = "bayes"). See the Details section.
imputed_final: logical. Should the final analysis (after all subjects have been followed up to the study end) be based on imputed outcomes for subjects who were LTFU (i.e. right-censored with time < end_of_study)? Default is imputed_final = FALSE, meaning the final analysis incorporates the right-censoring; setting it to TRUE bases the final analysis on imputed outcomes.
debug: logical. If TRUE, can be used to debug aspects of the code, including producing Kaplan-Meier graphs at each step of the algorithm. Default is debug = FALSE.
A data frame containing some input parameters (arguments) as well as statistics from the analysis, including:
N_treatment: integer. The number of patients enrolled in the treatment arm for each simulation.
N_control: integer. The number of patients enrolled in the control arm for each simulation.
est_interim: scalar. The treatment effect estimated at the time of the interim analysis. Note this is not actually used in the final analysis.
est_final: scalar. The treatment effect estimated at the final analysis. The final analysis occurs either when the maximum sample size is reached and follow-up is complete, or when an interim analysis triggers early stopping of enrollment/accrual and follow-up for those subjects is complete.
post_prob_ha: scalar. The corresponding posterior probability from the final analysis. If imputed_final is TRUE, this is calculated as the posterior probability of efficacy (or equivalent, depending on how alternative and h0 were specified) for each imputed final analysis dataset, and then averaged over the N_impute imputations. If method = "logrank", post_prob_ha is calculated in the same fashion, but the value represents \(1 - P\), where \(P\) denotes the frequentist \(P\)-value.
stop_futility: integer. A binary indicator of whether the trial was stopped early for futility.
stop_expected_success: integer. A binary indicator of whether the trial was stopped early for expected success.
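Since each call simulates a single trial, operating characteristics (e.g. power and early stopping rates) are typically estimated by repeating the call and summarizing the returned data frames. A minimal sketch, assuming the one-row results can be row-bound; the settings mirror the example below and the replicate count is deliberately small:

# Sketch: estimate operating characteristics over repeated simulations
sims <- do.call(rbind, replicate(
  20,
  survival_adapt(
    hazard_treatment = -log(0.85) / 36,
    hazard_control = -log(0.7) / 36,
    N_total = 600, lambda = 20, interim_look = 400,
    end_of_study = 36, N_impute = 10, N_mcmc = 10,
    method = "logrank"),
  simplify = FALSE))
mean(sims$post_prob_ha > 0.95)         # empirical power at this threshold
mean(sims$stop_expected_success == 1)  # early stopping rate for success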
Implements the Goldilocks design method described in Broglio et al. (2014). At each interim analysis, two probabilities are computed:
The posterior predictive probability of eventual success. This is calculated as the proportion of imputed datasets at the current sample size that would go on to be successful at the specified threshold. At each interim analysis it is compared to the corresponding element of Sn, and if it exceeds the threshold, accrual/enrollment is suspended and the outstanding follow-up is allowed to complete before conducting the pre-specified final analysis.
The posterior predictive probability of final success. This is calculated as the proportion of imputed datasets at the maximum sample size that would go on to be successful. Similar to above, it is compared to the corresponding element of Fn, and if it is less than the threshold, accrual/enrollment is suspended and the trial is terminated. Typically this would be a binding decision. If it is not a binding decision, then one should also explore the simulations with Fn = 0.
Hence, at each interim look, three decisions are possible (sketched below):
Stop for expected success
Stop for futility
Continue to enroll new subjects or, if at the maximum sample size, proceed to the final analysis.
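A minimal sketch of this decision logic at a single interim look, where ppp_current and ppp_max are placeholders for the two posterior predictive probabilities described above:

# Illustrative decision rule at one interim look (values hypothetical)
ppp_current <- 0.93; ppp_max <- 0.40  # predictive probabilities (see above)
Sn_i <- 0.90; Fn_i <- 0.05            # thresholds at this look
if (ppp_current > Sn_i) {
  decision <- "stop accrual: expected success"
} else if (ppp_max < Fn_i) {
  decision <- "stop trial: futility"
} else {
  decision <- "continue enrolling"
}
decision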
At each interim (and final) analysis, one of the following methods can be used:
Log-rank test (method = "logrank").
Each (imputed) dataset with both treatment and control arms can be compared using a standard log-rank test. The output is a P-value, and there is no treatment effect reported. The function returns \(1 - P\), which is reported in post_prob_ha. Whilst not a posterior probability, it can be contrasted in the same manner. For example, if the success threshold is \(P < 0.05\), then one requires post_prob_ha \(> 0.95\). The reason for this is to enable simple switching between Bayesian and frequentist paradigms for analysis.
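As a point of reference, a minimal sketch of computing \(1 - P\) with the standard log-rank test from the survival package, on hypothetical data:

# Sketch: 1 - P from a standard log-rank test (survival package)
library(survival)
set.seed(1)
dat <- data.frame(
  time = rexp(100, rate = rep(c(0.01, 0.02), each = 50)),
  event = 1,
  arm = rep(c("treatment", "control"), each = 50))
lr <- survdiff(Surv(time, event) ~ arm, data = dat)
1 - pchisq(lr$chisq, df = 1, lower.tail = FALSE)  # comparable to post_prob_ha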
Cox proportional hazards regression Wald test (method = "cox").
Similar to the log-rank test, a P-value is calculated based on a two-sided test. However, for consistency, \(1 - P\) is reported in post_prob_ha. Whilst not a posterior probability, it can be contrasted in the same manner. For example, if the success threshold is \(P < 0.05\), then one requires post_prob_ha \(> 0.95\).
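Similarly, a minimal sketch of \(1 - P\) from a Cox model Wald test, again on hypothetical data:

# Sketch: 1 - P from a Cox proportional hazards Wald test
library(survival)
set.seed(1)
dat <- data.frame(
  time = rexp(100, rate = rep(c(0.01, 0.02), each = 50)),
  event = 1,
  arm = rep(c("treatment", "control"), each = 50))
fit <- coxph(Surv(time, event) ~ arm, data = dat)
1 - summary(fit)$coefficients[, "Pr(>|z|)"]  # comparable to post_prob_ha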
Bayesian absolute difference (method = "bayes").
Each imputed dataset is used to update the conjugate Gamma prior (defined by prior), yielding a posterior distribution for the piecewise exponential rate parameters. In turn, the posterior distribution of the cumulative incidence function (\(1 - S(t)\), where \(S(t)\) is the survival function) evaluated at time end_of_study is calculated. For a single-arm study, this summarizes the treatment effect; for a two-arm study, the independent posteriors are used to estimate the posterior distribution of the difference. A posterior probability is calculated according to the specification of the test type (alternative) and the value of the null hypothesis (h0).
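A minimal sketch of this calculation for the simple (non-piecewise) exponential case with two arms, testing alternative = "less" with h0 = 0 (all inputs hypothetical):

# Sketch: posterior of the difference in cumulative incidence at
# end_of_study under a simple exponential model (hypothetical inputs)
a0 <- 0.1; b0 <- 0.1; end_of_study <- 36
d_t <- 20; T_t <- 2500  # treatment arm: events, total exposure
d_c <- 35; T_c <- 2300  # control arm: events, total exposure
lam_t <- rgamma(10000, shape = a0 + d_t, rate = b0 + T_t)
lam_c <- rgamma(10000, shape = a0 + d_c, rate = b0 + T_c)
F_t <- 1 - exp(-lam_t * end_of_study)  # cumulative incidence, treatment
F_c <- 1 - exp(-lam_c * end_of_study)  # cumulative incidence, control
mean((F_t - F_c) < 0)  # posterior probability for alternative = "less", h0 = 0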
Imputed final analysis (imputed_final).
The overall final analysis conducted after accrual is suspended and follow-up is complete can be analyzed on imputed datasets or on the non-imputed dataset. Since the imputations/predictions used during the interim analyses assume all subjects are imputed (since loss to follow-up is not yet known), it would seem most appropriate to conduct the final analysis in the same manner, especially if loss to follow-up rates are appreciable. Note, this only applies to subjects who are right-censored due to loss to follow-up, which we assume is a non-informative process. This can be used with any method.
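For intuition, a sketch of how one LTFU (right-censored) subject could be imputed under an exponential model, using the memoryless property (the rate draw is hypothetical):

# Sketch: impute an event time for a subject censored (LTFU) at t = 10
censor_time <- 10; end_of_study <- 36
lambda_draw <- 0.02  # hypothetical posterior draw of the hazard rate
t_imputed <- censor_time + rexp(1, rate = lambda_draw)  # memoryless extension
event <- as.integer(t_imputed <= end_of_study)  # event if before study end,
time <- min(t_imputed, end_of_study)            # else censored at study end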
Broglio KR, Connor JT, Berry SM. Not too big, not too small: a Goldilocks approach to sample size selection. Journal of Biopharmaceutical Statistics, 2014; 24(3): 685–705.
## Not run:
# RCT with exponential hazard (no piecewise breaks)
# Note: the number of imputations is small to enable this example to run
# quickly on CRAN tests. In practice, much larger values are needed.
survival_adapt(
  hazard_treatment = -log(0.85) / 36,
  hazard_control = -log(0.7) / 36,
  cutpoints = 0,
  N_total = 600,
  lambda = 20,
  lambda_time = 0,
  interim_look = 400,
  end_of_study = 36,
  prior = c(0.1, 0.1),
  block = 2,
  rand_ratio = c(1, 1),
  prop_loss = 0.30,
  alternative = "less",
  h0 = 0,
  Fn = 0.05,
  Sn = 0.9,
  prob_ha = 0.975,
  N_impute = 10,
  N_mcmc = 10,
  method = "bayes")
## End(Not run)