powerlmm (version 0.4.0)

summary.plcp_sim: Summarize the results from a simulation of a single study design-object

Description

Summarize the results from a simulation of a single study design-object

Usage

# S3 method for plcp_sim
summary(object, model = NULL, alpha = 0.05,
  para = NULL, ...)

# S3 method for plcp_sim_formula_compare
summary(object, model = NULL, alpha = 0.05,
  model_selection = NULL, LRT_alpha = 0.1, para = NULL, ...)

Arguments

object

A simulate.plcp-object

model

Indicates which model should be returned. The default is NULL, which returns results from all model formulas. Can also be a character string matching the names used in sim_formula_compare.

alpha

Indicates the significance level. The default is 0.05 (two-tailed); one-tailed tests are not yet implemented.

para

Selects a parameter to return. The default is NULL, which returns all parameters. If multiple model formulas are compared, a named list can be used to specify different parameters per model.

...

Currently not used

model_selection

Indicates whether the summary should be based on an LRT model selection strategy. The default is NULL, which returns all models; if "FW" or "BW", a forward or backward model selection strategy is used, see Details.

LRT_alpha

Indicates the alpha level used for the LRT model comparisons.
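
A typical call might look like the sketch below; the design values, formula names, and parameter name are illustrative assumptions, not taken from the package documentation:

library(powerlmm)

# A small two-level design (values are illustrative)
p <- study_parameters(n1 = 11, n2 = 40,
                      icc_pre_subject = 0.5,
                      var_ratio = 0.02,
                      cohend = -0.5)

# Compare a random intercept-only model with a random slope model
f <- sim_formula_compare(
  "m0" = sim_formula("y ~ time * treatment + (1 | subject)"),
  "m1" = sim_formula("y ~ time * treatment + (1 + time | subject)")
)

res <- simulate(p, formula = f, nsim = 100, satterthwaite = TRUE, cores = 1)

summary(res)                                           # all model formulas
summary(res, model = "m1")                             # a single model by name
summary(res, model = "m1", para = "time:treatment")    # a single parameter
summary(res, model_selection = "FW", LRT_alpha = 0.1)  # LRT-based selection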

Value

An object of class plcp_sim_summary. It contains the following output:

  • parameter is the name of the coefficient

  • M_est is the mean of the estimates taken over all the simulations.

  • M_se is the mean estimated standard error taken over all the simulations.

  • SD_est is the empirical standard error; i.e. the standard deviation of the distribution of the generated estimates.

  • power is the empirical power of the Wald Z test, i.e. the proportion of simulated p-values < alpha.

  • power_satt is the empirical power of the Wald t test using Satterthwaite's degrees of freedom approximation.

  • satt_NA is the proportion of Satterthwaite's approximations that failed.

  • prop_zero is the proportion of the simulated estimates that are zero; only shown for random effects.
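
To make these definitions concrete, here is a minimal sketch (plain R, not powerlmm code) of how the quantities could be computed from a hypothetical data frame of per-replication estimates:

# Hypothetical per-replication results for one fixed effect:
# 'est' = estimate, 'se' = estimated standard error
sims <- data.frame(est = rnorm(1000, mean = -0.5, sd = 0.1),
                   se  = runif(1000, 0.08, 0.12))
sims$p <- 2 * pnorm(-abs(sims$est / sims$se))  # Wald Z p-values

alpha <- 0.05
c(M_est  = mean(sims$est),        # mean of the estimates
  M_se   = mean(sims$se),         # mean of the estimated SEs
  SD_est = sd(sims$est),          # empirical SE of the estimates
  power  = mean(sims$p < alpha))  # proportion of p-values < alpha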

Details

Model selection

It is possible to summarize the performance of a data-driven model selection strategy based on the formulas used in the simulation (see sim_formula_compare). The two model selection strategies are:

  • FW: Forward selection of the models. Starts with the first model formula and compares it with the next formula. Continues until the test of M_i vs. M_i+1 is non-significant, and then picks M_i. Thus, if three models are compared and the comparison of M_1 vs. M_2 is non-significant, M_3 will not be tested and M_1 is the winning model.

  • BW: Backward selection of the models. Starts with the last model formula and compares it with the previous formula. Continues until the test of M_i vs. M_i-1 is significant, or until all adjacent formulas have been compared. Thus, if three models are compared and the comparison of M_3 vs. M_2 is non-significant, M_2 vs. M_1 will be tested; M_2 will be picked if that test is significant, and M_1 if not.

The model comparison is performed using a likelihood ratio test based on the REML criterion. Hence, it is assumed that you are comparing models with the same fixed effects, and that one of the models is a reduced version of the other (nested models). The LRT is done as a post-processing step, so the model_selection option will not re-run the simulation. This also means that different alpha levels for the LRTs can be investigated without re-running the simulation.
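
As a rough illustration of the two strategies (a hedged sketch, not the package's internal implementation), consider a vector of LRT p-values for the adjacent comparisons M_1 vs. M_2, M_2 vs. M_3, and so on:

# p_lrt[i] is the LRT p-value for comparing model i with model i + 1
select_model <- function(p_lrt, strategy = c("FW", "BW"), LRT_alpha = 0.1) {
  strategy <- match.arg(strategy)
  k <- length(p_lrt) + 1            # number of candidate models
  if (strategy == "FW") {
    # start at M_1; move forward while the larger model is a significant improvement
    i <- 1
    while (i < k && p_lrt[i] < LRT_alpha) i <- i + 1
  } else {
    # start at M_k; move backward while dropping terms is not a significant loss
    i <- k
    while (i > 1 && p_lrt[i - 1] >= LRT_alpha) i <- i - 1
  }
  i  # index of the selected model
}

select_model(c(0.30, 0.01), strategy = "FW")  # M_1 vs. M_2 non-significant: picks M_1
select_model(c(0.30, 0.01), strategy = "BW")  # M_3 vs. M_2 significant: keeps M_3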

Data transformation

If the data have been transformed using sim_formula(data_transform = ...), then the true parameter values (thetas) shown in the summary will most likely no longer apply. Hence, relative bias and CI coverage will be in relation to the original model. However, the empirical estimates are summarized correctly, enabling the investigation of power and Type I errors using arbitrary transformations.
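
For example, a longitudinal design could be analyzed as a posttest-only comparison. The sketch below assumes the transform_to_posttest helper and sim_formula's data_transform and test arguments behave as described in the package vignettes:

library(powerlmm)
p <- study_parameters(n1 = 11, n2 = 40,
                      icc_pre_subject = 0.5,
                      var_ratio = 0.02,
                      cohend = -0.5)

# Keep only the last time point and fit a cross-sectional model
f_post <- sim_formula("y ~ treatment",
                      data_transform = transform_to_posttest,
                      test = "treatment")

res_post <- simulate(p, formula = f_post, nsim = 100, cores = 1)

# 'treatment' is summarized on the transformed (posttest) scale
summary(res_post)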