Simulate multiple clinical trials with fixed input parameters, and tidily extract the relevant data to generate operating characteristics.
sim_trials(
  hazard_treatment,
  hazard_control = NULL,
  cutpoints = 0,
  N_total,
  lambda = 0.3,
  lambda_time = 0,
  interim_look = NULL,
  end_of_study,
  prior = c(0.1, 0.1),
  block = 2,
  rand_ratio = c(1, 1),
  prop_loss = 0,
  alternative = "two.sided",
  h0 = 0,
  Fn = 0.1,
  Sn = 0.9,
  prob_ha = 0.95,
  N_impute = 10,
  N_mcmc = 10,
  N_trials = 10,
  method = "logrank",
  imputed_final = FALSE,
  ncores = 1L
)
hazard_treatment: vector. Constant hazard rates under the treatment arm.
hazard_control: vector. Constant hazard rates under the control arm.
cutpoints: vector. Times at which the baseline hazard changes. Default is cutpoints = 0, which corresponds to a simple (non-piecewise) exponential model.
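To illustrate how a hazard vector pairs with cutpoints, the small helper below (hypothetical, not part of the package) evaluates the implied piecewise-constant hazard at given times:

```r
# Hypothetical helper for illustration only: with cutpoints = c(0, 12),
# the hazard is 0.02 on [0, 12) and 0.01 from t = 12 onward.
piecewise_hazard <- function(t, hazard, cutpoints) {
  # findInterval() returns which piece each time t falls into
  hazard[findInterval(t, cutpoints)]
}
piecewise_hazard(c(6, 24), hazard = c(0.02, 0.01), cutpoints = c(0, 12))
#> [1] 0.02 0.01
```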
N_total: integer. Maximum allowable sample size.
lambda: vector. Enrollment rates across simulated enrollment times. See enrollment for more details.
lambda_time: vector. Enrollment time(s) at which the enrollment rates change. Must be the same length as lambda. See enrollment for more details.
interim_look: vector. Sample size at each interim look. Note: the maximum sample size should not be included.
end_of_study: scalar. Length of the study, i.e. the time at which the endpoint will be evaluated.
prior: vector. The prior distributions for the piecewise hazard rate parameters are each \(Gamma(a_0, b_0)\), with specified (known) hyperparameters \(a_0\) and \(b_0\). The default non-informative prior distribution is Gamma(0.1, 0.1), which is specified by setting prior = c(0.1, 0.1).
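As a quick sanity check on how diffuse the default prior is, the moments of a Gamma(a_0, b_0) distribution follow directly from its parameterization (sketch in base R):

```r
# The default Gamma(0.1, 0.1) prior on each hazard-rate parameter is
# diffuse: mean a0/b0 = 1, variance a0/b0^2 = 10.
a0 <- 0.1
b0 <- 0.1
c(mean = a0 / b0, variance = a0 / b0^2)
# mean = 1, variance = 10
```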
block: scalar. Block size for generating the randomization schedule.
rand_ratio: vector. Randomization allocation ratio of control to treatment; integer values mapping the size of the block. See randomization for more details.
prop_loss: scalar. Overall proportion of subjects lost to follow-up. Defaults to zero.
alternative: character. The string specifying the alternative hypothesis; must be one of "greater" (default), "less", or "two.sided".
h0: scalar. Null hypothesis value of \(p_{\textrm{treatment}} - p_{\textrm{control}}\) when method = "bayes". Default is h0 = 0. The argument is ignored when method = "logrank" or method = "cox"; in those cases the usual test of non-equal hazards is assumed.
Fn: vector of [0, 1] values. Each element is the probability threshold for stopping early for futility at the \(i\)-th look. If there are no interim looks (i.e. interim_look = NULL), then Fn is not used in the simulations or analysis. The length of Fn should be the same as that of interim_look; otherwise the values are recycled.
Sn: vector of [0, 1] values. Each element is the probability threshold for stopping early for expected success at the \(i\)-th look. If there are no interim looks (i.e. interim_look = NULL), then Sn is not used in the simulations or analysis. The length of Sn should be the same as that of interim_look; otherwise the values are recycled.
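The recycling of scalar thresholds can be sketched in base R; a single value is simply reused at every interim look:

```r
# Sketch: recycle a scalar futility threshold across two interim looks,
# mirroring how Fn (and Sn) are matched to interim_look.
interim_look <- c(400, 500)
Fn <- 0.05
rep_len(Fn, length(interim_look))
#> [1] 0.05 0.05
```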
prob_ha: scalar in [0, 1]. Probability threshold for the alternative hypothesis.
N_impute: integer. Number of imputations for the Monte Carlo simulation of missing data.
N_mcmc: integer. Number of samples to draw from the posterior distribution when using a Bayesian test (method = "bayes").
N_trials: integer. Number of trials to simulate.
method: character. For an imputed data set (or the final data set after follow-up is complete), whether the analysis should be a log-rank test (method = "logrank"), a Cox proportional hazards regression model Wald test (method = "cox"), or a fully Bayesian analysis (method = "bayes"). See the Details section.
imputed_final: logical. Should the final analysis (after all subjects have been followed up to the study end) be based on imputed outcomes for subjects who were lost to follow-up (i.e. right-censored with time < end_of_study)? Default is TRUE. Setting it to FALSE means that the final analysis incorporates right-censoring.
ncores: integer. Number of cores to use for parallel processing.
Data frame with one row per simulated trial and columns for key summary statistics. See survival_adapt for details of what is returned in each row.
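Operating characteristics are then just means and proportions over the rows of this data frame. The sketch below uses made-up rows and illustrative column names only (the actual columns are documented in survival_adapt):

```r
# Toy stand-in for sim_trials() output; the column names here are
# illustrative, NOT the package's actual return columns.
oc <- data.frame(N_enrolled    = c(480, 600),
                 stop_futility = c(1, 0))
c(mean_N        = mean(oc$N_enrolled),     # expected sample size
  futility_rate = mean(oc$stop_futility))  # proportion stopped early
```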
This is essentially a wrapper for survival_adapt, which is run repeatedly for a number of independent trials (all with the same input design parameters and treatment effect).
To use multiple cores (where available), increase the ncores argument from its default of 1. Note: on Windows machines, it is not possible to use the mclapply function with ncores \(> 1\).
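A defensive way to choose ncores, given that forking via mclapply is unavailable on Windows, is sketched below using the base parallel package:

```r
library(parallel)

# Fall back to a single core on Windows, where mclapply cannot fork;
# otherwise leave one core free for the operating system.
n_cores <- if (.Platform$OS.type == "windows") {
  1L
} else {
  max(1L, detectCores() - 1L)
}
```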
# NOT RUN {
hc <- prop_to_haz(c(0.20, 0.30), c(0, 12), 36)
ht <- prop_to_haz(c(0.05, 0.15), c(0, 12), 36)
out <- sim_trials(
  hazard_treatment = ht,
  hazard_control = hc,
  cutpoints = c(0, 12),
  N_total = 600,
  lambda = 20,
  lambda_time = 0,
  interim_look = c(400, 500),
  end_of_study = 36,
  prior = c(0.1, 0.1),
  block = 2,
  rand_ratio = c(1, 1),
  prop_loss = 0.30,
  alternative = "two.sided",
  h0 = 0,
  Fn = 0.05,
  Sn = 0.9,
  prob_ha = 0.975,
  N_impute = 5,
  N_mcmc = 5,
  N_trials = 2,
  method = "logrank",
  ncores = 1)
# }