effectsize (version 0.2.0)

cohens_f: Effect size for ANOVA

Description

Functions to compute effect size measures for ANOVAs, such as Eta, Omega, and Epsilon squared (or their partialled versions), representing an estimate of how much of the variance in the response variable is accounted for by the explanatory variables.

Usage

cohens_f(model)

epsilon_squared(model, partial = TRUE)

eta_squared_adj(model, partial = TRUE)

eta_squared(model, partial = TRUE, ci = NULL, iterations = 1000, ...)

omega_squared(model, partial = TRUE, ci = NULL, iterations = 1000)

Arguments

model

A model or ANOVA object.

partial

If TRUE, return partial indices.

ci

Confidence Interval (CI) level when computed via bootstrap.

iterations

Number of bootstrap iterations.

...

Arguments passed to or from other methods.

Value

A data frame containing the effect size values.

Details

Omega Squared

Omega squared is considered a less biased alternative to eta squared, especially when sample sizes are small (Albers & Lakens, 2018). Field (2013) suggests the following interpretation heuristics:

  • Omega Squared = 0 - 0.01: Very small

  • Omega Squared = 0.01 - 0.06: Small

  • Omega Squared = 0.06 - 0.14: Medium

  • Omega Squared > 0.14: Large
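
As a rough sketch of what the index measures (not the package's implementation), omega squared for a one-way ANOVA can be computed by hand from the ANOVA table; the formula assumed here is the common one, (SS_effect - df_effect * MS_error) / (SS_total + MS_error):

```r
# Manual omega squared for a one-way ANOVA (illustrative sketch only;
# the formula below is the standard textbook one, not the package source)
model <- aov(Sepal.Length ~ Species, data = iris)
tab <- summary(model)[[1]]
ss_eff <- tab[1, "Sum Sq"]   # effect sum of squares
df_eff <- tab[1, "Df"]       # effect degrees of freedom
ms_err <- tab[2, "Mean Sq"]  # residual mean square
ss_tot <- sum(tab[, "Sum Sq"])
omega2 <- (ss_eff - df_eff * ms_err) / (ss_tot + ms_err)
round(omega2, 3)  # about 0.612, "Large" by the heuristics above
```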

Epsilon Squared

It is one of the least common measures of effect size: omega squared and eta squared are used more frequently. Although it has a different name and an apparently different formula, this index is equivalent to the adjusted R2 (Allen, 2017, p. 382).
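
The equivalence with adjusted R2 can be checked directly; the epsilon squared formula assumed here is (SS_effect - df_effect * MS_error) / SS_total:

```r
# Epsilon squared computed by hand vs. summary.lm()'s adjusted R^2
# (illustrative sketch; the epsilon squared formula is assumed, not
# taken from the package source)
fit <- lm(Sepal.Length ~ Species, data = iris)
tab <- anova(fit)
eps2 <- (tab[1, "Sum Sq"] - tab[1, "Df"] * tab[2, "Mean Sq"]) /
  sum(tab[, "Sum Sq"])
c(epsilon_squared = eps2, adj_r_squared = summary(fit)$adj.r.squared)
```

For this one-predictor model the two values agree to machine precision.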

Cohen's f

Cohen's f statistic is one appropriate effect size index for a one-way analysis of variance (ANOVA). Cohen's f can take on values between zero, when the population means are all equal, and an indefinitely large number, as the standard deviation of the means increases relative to the average standard deviation within each group. Cohen suggested that values of 0.10, 0.25, and 0.40 represent small, medium, and large effect sizes, respectively.
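
Cohen's f is commonly related to eta squared by the conversion f = sqrt(eta2 / (1 - eta2)); assuming that standard relationship (it is not stated in this page), his benchmarks line up with eta squared values of roughly 0.01, 0.06, and 0.14:

```r
# Cohen's f from eta squared via the standard conversion
# f = sqrt(eta2 / (1 - eta2)) (assumed relationship, shown
# against Cohen's small/medium/large benchmarks)
eta2 <- c(small = 0.0099, medium = 0.0588, large = 0.1379)
f <- sqrt(eta2 / (1 - eta2))
round(f, 2)  # approximately 0.10, 0.25, 0.40
```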

References

  • Albers, C., & Lakens, D. (2018). When power analyses based on pilot data are biased: Inaccurate effect size estimators and follow-up bias. Journal of Experimental Social Psychology, 74, 187-195.

  • Allen, R. (2017). Statistics and Experimental Design for Psychologists: A Model Comparison Approach. World Scientific Publishing Company.

  • Field, A. (2013). Discovering Statistics Using IBM SPSS Statistics. Sage.

  • Kelley, K. (2007). Methods for the behavioral, educational, and social sciences: An R package. Behavior Research Methods, 39(4), 979-984.

  • Kelley, T. (1935). An unbiased correlation ratio measure. Proceedings of the National Academy of Sciences, 21(9), 554-559.

The computation of CIs is based on the implementations by Stanley (2018) in the apaTables package and Kelley (2007) in the MBESS package. All credit goes to them.

Examples

# NOT RUN {
library(effectsize)

df <- iris
df$Sepal.Big <- ifelse(df$Sepal.Width >= 3, "Yes", "No")

model <- aov(Sepal.Length ~ Sepal.Big, data = df)
omega_squared(model)
eta_squared(model)
epsilon_squared(model)
cohens_f(model)

model <- anova(lm(Sepal.Length ~ Sepal.Big * Species, data = df))
omega_squared(model)
eta_squared(model)
epsilon_squared(model)
# }
# NOT RUN {
# Doesn't work for now
model <- aov(Sepal.Length ~ Sepal.Big + Error(Species), data = df)
omega_squared(model)
eta_squared(model)
epsilon_squared(model)
# }