PowerUpR (version 0.1.3)

optimal.bcra4r3: Model 4.5: COSA Solver for 4-Level Random Effects Blocked Cluster Random Assignment Designs, Treatment at Level 3

Description

optimal.bcra4r3 finds constrained optimal sample allocation (COSA) solutions for four-level designs where level 3 units are randomly assigned to treatment and control groups within level 4 units (random blocks). COSA solutions can be found in the following forms: (i) under budgetary constraints given marginal costs per unit, (ii) under power constraints given marginal costs per unit, (iii) under MDES constraints given marginal costs per unit, and (iv) under sample size constraints for one or more levels along with any of options (i), (ii), or (iii).

Usage

optimal.bcra4r3(cn, cJ, cK, cL, cost=NULL, n=NULL, J=NULL, K=NULL, L=NULL,
                 power=.80, mdes=.25, alpha=.05, two.tail=TRUE,
                 nJKL0=c(10,10,10,10), ncase=10, gm=2,
                 constrain="cost", optimizer="auglag_cobyla",
                 rho4, rho3, rho2, omega4,
                 P=.50, g4=0, RT42=0, R32=0, R22=0, R12=0)

Arguments

cn
marginal cost per level 1 unit.
cJ
marginal cost per level 2 unit.
cK
marginal cost per level 3 unit.
cL
marginal cost per level 4 unit.
cost
total cost or budget.
n
harmonic mean of level 1 units across level 2 units (or simple average).
J
harmonic mean of level 2 units across level 3 units (or simple average).
K
harmonic mean of level 3 units across level 4 units (or simple average).
L
number of level 4 units.
power
statistical power (1 - type II error).
mdes
minimum detectable effect size.
alpha
probability of type I error.
two.tail
logical; TRUE for two-tailed hypothesis testing, FALSE for one-tailed hypothesis testing.
nJKL0
vector of length four specifying starting values for the level 1, level 2, level 3, and level 4 sample sizes.
ncase
number of cases to show in the output.
gm
grid multiplier to increase the range of sample size search for each level.
constrain
one of the following can be constrained at a specified cost or value: "cost", "power", or "mdes".
optimizer
algorithm to find optimal sample sizes given total cost, power, or MDES. Available algorithms: "auglag_cobyla", "auglag_lbfgs", "auglag_mma", or "auglag_slsqp".
rho2
proportion of variance in the outcome explained by level 2 units.
rho3
proportion of variance in the outcome explained by level 3 units.
rho4
proportion of variance in the outcome explained by level 4 units.
omega4
treatment effect heterogeneity as ratio of treatment effect variance among level 4 units to the residual variance at level 4.
P
average proportion of level 3 units randomly assigned to treatment within level 4 units.
g4
number of covariates at level 4.
R12
proportion of level 1 variance in the outcome explained by level 1 covariates.
R22
proportion of level 2 variance in the outcome explained by level 2 covariates.
R32
proportion of level 3 variance in the outcome explained by level 3 covariates.
RT42
proportion of treatment effect variance among level 4 units explained by level 4 covariates.

Value

fun
function name.
par
list of parameters used in the function.
nloptr
list of nloptr log and output.
round.optim
solution after rounding. MDES is calculated at the specified power (default .80), and power is calculated at the specified MDES (default .25).
integer.optim
best integer solutions around round.optim solution. MDES is calculated at the specified power (default .80), and power is calculated at the specified MDES (default .25).

Details

Constrained optimal sample allocation (COSA; Hedges &amp; Borenstein, 2014; Raudenbush, 1997; Raudenbush &amp; Liu, 2000) problems are solved using the nloptr package (Ypma, 2014), an R (R Core Team, 2016) implementation of NLopt (Johnson, n.d.). More specifically, the Augmented Lagrangian method is used for global optimization (AUGLAG; Birgin &amp; Martinez, 2008; Conn, Gould, &amp; Toint, 1991) in conjunction with one of the following local optimization algorithms: Constrained Optimization by Linear Approximations (COBYLA; Powell, 1994), Low-Storage BFGS (LBFGS; Liu &amp; Nocedal, 1989), Method of Moving Asymptotes (MMA; Svanberg, 2002), or Sequential Least-Squares Quadratic Programming (SLSQP; Kraft, 1988). See http://ab-initio.mit.edu/wiki/index.php/NLopt_Algorithms for a brief summary. nloptr returns non-integer sample sizes, and rounding them may produce cost, power, or MDES values different from what was specified; a better solution is therefore approximated by brute force around the rounded solution. If the constrained value (cost, power, or MDES) in the output deviates from what was specified, increasing the grid multiplier (gm) often solves the problem. More cases can be printed by increasing ncase. Further definitions of the design parameters can be found in Dong &amp; Maynard (2013).
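The advice above about gm and ncase can be illustrated with a hypothetical call (a sketch, not from the package documentation; it assumes PowerUpR is installed, and the design-parameter values are made up for illustration):

```r
# Illustrative sketch: constrain power rather than cost, and widen
# the brute-force search. Assumes the PowerUpR package is installed;
# all parameter values here are hypothetical.
library(PowerUpR)

# Find the least costly allocation that attains power = .80 given the
# marginal costs. If the power reported in the output drifts from .80
# after rounding, a larger grid multiplier (gm) and more printed
# candidates (ncase) usually recover a closer integer solution.
optimal.bcra4r3(cn=1, cJ=10, cK=100, cL=1000,
                power=.80, constrain="power",
                gm=4, ncase=20,
                rho4=.10, rho3=.15, rho2=.20,
                omega4=.50)
```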

References

Birgin, E. G., &amp; Martinez, J. M. (2008). Improving ultimate convergence of an augmented Lagrangian method. Optimization Methods and Software, 23(2), 177-195.
Conn, A. R., Gould, N. I. M., &amp; Toint, P. L. (1991). A globally convergent augmented Lagrangian algorithm for optimization with general constraints and simple bounds. SIAM Journal on Numerical Analysis, 28(2), 545-572.
Dong, N., &amp; Maynard, R. A. (2013). PowerUp!: A tool for calculating minimum detectable effect sizes and minimum required sample sizes for experimental and quasi-experimental design studies. Journal of Research on Educational Effectiveness, 6(1), 24-67.
Hedges, L. V., &amp; Borenstein, M. (2014). Conditional optimal design in three- and four-level experiments. Journal of Educational and Behavioral Statistics, 39(4), 257-281.
Hedges, L., &amp; Rhoads, C. (2009). Statistical power analysis in education research (NCSER 2010-3006). Washington, DC: National Center for Special Education Research, Institute of Education Sciences, U.S. Department of Education. Available at http://ies.ed.gov/ncser/.
Johnson, S. G. (n.d.). The NLopt nonlinear-optimization package. Available at http://ab-initio.mit.edu/nlopt.
Kraft, D. (1988). A software package for sequential quadratic programming. Oberpfaffenhofen, Germany: DFVLR.
Liu, D. C., &amp; Nocedal, J. (1989). On the limited memory BFGS method for large scale optimization. Mathematical Programming, 45(1-3), 503-528.
Powell, M. J. (1994). A direct search optimization method that models the objective and constraint functions by linear interpolation. In Advances in Optimization and Numerical Analysis (pp. 51-67). Springer Netherlands.
R Core Team (2016). R: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria. https://www.R-project.org.
Raudenbush, S. W. (1997). Statistical analysis and optimal design for cluster randomized trials. Psychological Methods, 2, 173-185.
Raudenbush, S. W., &amp; Liu, X. (2000). Statistical power and optimal design for multisite trials. Psychological Methods, 5, 199-213.
Svanberg, K. (2002). A class of globally convergent optimization methods based on conservative convex separable approximations. SIAM Journal on Optimization, 12(2), 555-573.
Ypma, J. (2014). nloptr: R interface to NLopt. R package version 1.0.4. Available at https://cran.r-project.org/package=nloptr.

See Also

mdes.bcra4r3, power.bcra4r3, mrss.bcra4r3

Examples

## Not run:
optimal.bcra4r3(cn=1, cJ=10, cK=100, cL=1000, cost=75600,
                constrain="cost",
                rho4=.10, rho3=.15, rho2=.20,
                omega4=.50)
## End(Not run)
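A hypothetical variant of the example above (a sketch assuming PowerUpR is installed; the fixed sample sizes are made up for illustration) shows option (iv) from the Description, where some levels are held at fixed sample sizes:

```r
# Illustrative sketch, not from the package documentation: hold the
# level 1 and level 2 sample sizes fixed (n and J) and let the solver
# allocate level 3 and level 4 units under the same budget constraint.
# Assumes the PowerUpR package is installed; values are hypothetical.
library(PowerUpR)

optimal.bcra4r3(cn=1, cJ=10, cK=100, cL=1000, cost=75600,
                n=10, J=3,
                constrain="cost",
                rho4=.10, rho3=.15, rho2=.20,
                omega4=.50)
```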
