- data
A data frame encoding the data used in the analysis. Can be missing if covs and nobs are supplied.
- type
The type of model used. See description.
- sigma
Only used when type = "cov"
. Either "full"
to estimate every element freely, "diag"
to only include diagonal elements, or a matrix of the dimensions node x node with 0 encoding a fixed to zero element, 1 encoding a free to estimate element, and higher integers encoding equality constrains. For multiple groups, this argument can be a list or array with each element/slice encoding such a matrix.
- kappa
Only used when type = "prec"
. Either "full"
to estimate every element freely, "diag"
to only include diagonal elements, or a matrix of the dimensions node x node with 0 encoding a fixed to zero element, 1 encoding a free to estimate element, and higher integers encoding equality constrains. For multiple groups, this argument can be a list or array with each element/slice encoding such a matrix.
- omega
Only used when type = "ggm"
. Either "full"
to estimate every element freely, "zero"
to set all elements to zero, or a matrix of the dimensions node x node with 0 encoding a fixed to zero element, 1 encoding a free to estimate element, and higher integers encoding equality constrains. For multiple groups, this argument can be a list or array with each element/slice encoding such a matrix.
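For example, a minimal sketch of this 0/1/2 encoding, assuming the psychonetrics package is loaded; the data frame mydata, its variables A, B and C, and the specific structure are invented here for illustration:

```r
library("psychonetrics")

# Hypothetical example data: three Likert-type variables A, B and C.
set.seed(1)
mydata <- data.frame(A = sample(1:5, 200, replace = TRUE),
                     B = sample(1:5, 200, replace = TRUE),
                     C = sample(1:5, 200, replace = TRUE))

# 0 = element fixed to zero, 1 = freely estimated element,
# 2 = elements sharing this label are constrained to be equal.
omega_structure <- matrix(c(
  0, 1, 2,
  1, 0, 2,
  2, 2, 0
), nrow = 3, ncol = 3, byrow = TRUE)

mod <- ggm(mydata, omega = omega_structure, vars = c("A", "B", "C"))
```

Here the two off-diagonal elements labelled 2 are estimated as a single shared parameter, while the element labelled 0 fixes that edge to zero.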
- lowertri
Only used when type = "chol"
. Either "full"
to estimate every element freely, "diag"
to only include diagonal elements, or a matrix of the dimensions node x node with 0 encoding a fixed to zero element, 1 encoding a free to estimate element, and higher integers encoding equality constrains. For multiple groups, this argument can be a list or array with each element/slice encoding such a matrix.
- delta
Only used when type = "ggm"
. Either "diag"
or "zero"
(not recommended), or a matrix of the dimensions node x node with 0 encoding a fixed to zero element, 1 encoding a free to estimate element, and higher integers encoding equality constrains. For multiple groups, this argument can be a list or array with each element/slice encoding such a matrix.
- rho
Only used when type = "cor"
. Either "full"
to estimate every element freely, "zero"
to set all elements to zero, or a matrix of the dimensions node x node with 0 encoding a fixed to zero element, 1 encoding a free to estimate element, and higher integers encoding equality constrains. For multiple groups, this argument can be a list or array with each element/slice encoding such a matrix.
- SD
Only used when type = "cor"
. Either "diag"
or "zero"
(not recommended), or a matrix of the dimensions node x node with 0 encoding a fixed to zero element, 1 encoding a free to estimate element, and higher integers encoding equality constrains. For multiple groups, this argument can be a list or array with each element/slice encoding such a matrix.
- mu
Optional vector encoding the mean structure. Set elements to 0 to indicate means fixed to zero, 1 to indicate freely estimated means, and higher integers to indicate equality constraints. For multiple groups, this argument can be a list or array with each element/column encoding such a vector.
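A corresponding sketch for the mean-structure vector, reusing the hypothetical mydata from the sketch under omega:

```r
# 0 = mean fixed to zero, 1 = freely estimated mean,
# 2 = means sharing this label are constrained to be equal.
mu_structure <- c(0, 2, 2)

mod_mu <- varcov(mydata, type = "ggm", vars = c("A", "B", "C"),
                 mu = mu_structure)
```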
- tau
Optional list encoding the thresholds per variable.
- vars
An optional character vector encoding the variables used in the analysis. Must equal names of the dataset in data.
- groups
An optional string indicating the name of the group variable in data.
- covs
A sample variance-covariance matrix, or a list/array of such matrices for multiple groups. Make sure the covtype argument is set correctly for the type of covariances used.
- means
A vector of sample means, or a list/matrix containing such vectors for multiple groups.
- nobs
The number of observations used in covs and means, or a vector of such numbers of observations for multiple groups.
- covtype
If 'covs' is used, this is the type of covariance (maximum likelihood or unbiased) the input covariance matrix represents. Set to "ML" for maximum likelihood estimates (denominator n) and "UB" for unbiased estimates (denominator n-1). The default will try to detect the type used by investigating which is most likely to result from integer-valued datasets.
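A sketch of supplying summary statistics through covs, nobs and covtype instead of raw data, reusing the hypothetical mydata from above:

```r
S <- cov(mydata[, c("A", "B", "C")])  # cov() divides by n - 1 (unbiased)

mod_cov <- varcov(covs = S,
                  nobs = nrow(mydata),
                  covtype = "UB",  # matches the n - 1 denominator of cov()
                  type = "ggm")
```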
- missing
How should missingness be handled in computing the sample covariances and number of observations when data is used? Can be "listwise" for listwise deletion, or "pairwise" for pairwise deletion.
- equal
A character vector indicating which matrices should be constrained equal across groups.
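A sketch of a multi-group specification using groups and equal; the grouping variable "group" is invented here, and passing "omega" to equal assumes that matrix name is accepted as written:

```r
# Add a hypothetical grouping variable to the example data:
mydata$group <- rep(c("g1", "g2"), length.out = nrow(mydata))

mod_mg <- ggm(mydata,
              vars = c("A", "B", "C"),
              groups = "group",   # grouping variable in the data
              equal = "omega")    # hold the network structure equal across groups
```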
- baseline_saturated
A logical indicating if the baseline and saturated model should be included. Mostly used internally; manual use is not recommended.
- estimator
The estimator to be used. Currently implemented are "ML" for maximum likelihood estimation, "FIML" for full-information maximum likelihood estimation, "ULS" for unweighted least squares estimation, "WLS" for weighted least squares estimation, and "DWLS" for diagonally weighted least squares estimation.
- optimizer
The optimizer to be used. Can be one of "nlminb" (the default R nlminb function), "ucminf" (from the optimr package), and the C++ based optimizers "cpp_L-BFGS-B", "cpp_BFGS", "cpp_CG", "cpp_SANN", and "cpp_Nelder-Mead". The C++ optimizers are faster but slightly less stable. Defaults to "nlminb".
- storedata
Logical, should the raw data be stored? Needed for bootstrapping (see bootstrap).
- standardize
Which standardization method should be used? "none" (default) for no standardization, "z" for z-scores, and "quantile" for a non-parametric transformation to the quantiles of the marginal standard normal distribution.
- WLS.W
Optional WLS weights matrix.
- sampleStats
An optional sample statistics object. Mostly used internally.
- verbose
Logical, should progress be printed to the console?
- ordered
A vector of strings indicating the variables that are ordered categorical, or set to TRUE to model all variables as ordered categorical.
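A sketch of declaring ordered-categorical variables, again using the hypothetical mydata (whose integer-valued items make this plausible); which estimator is then appropriate is not shown here:

```r
mod_ord <- ggm(mydata,
               vars = c("A", "B", "C"),
               ordered = c("A", "B"))  # treat A and B as ordered categorical
```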
- meanstructure
Logical, should the mean structure be modeled explicitly?
- corinput
Logical, is the input a correlation matrix?
- fullFIML
Logical, should row-wise FIML be used? Not recommended!
- bootstrap
Should the data be bootstrapped? If TRUE, the data are resampled and a bootstrap sample is created; the resulting bootstrapped models must be aggregated using aggregate_bootstraps. Can be TRUE or FALSE. Can also be "nonparametric" (which sets boot_sub = 1 and boot_resample = TRUE) or "case" (which sets boot_sub = 0.75 and boot_resample = FALSE).
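A sketch of the bootstrap workflow referenced above; the number of replications, the use of runmodel(), and the positional call to aggregate_bootstraps are assumptions made for illustration:

```r
# Fit the model on the original sample:
mod_full <- runmodel(ggm(mydata, vars = c("A", "B", "C")))

# Refit on 100 case-drop bootstrap samples
# (bootstrap = "case" implies boot_sub = 0.75 and boot_resample = FALSE):
boot_fits <- lapply(seq_len(100), function(i) {
  runmodel(ggm(mydata, vars = c("A", "B", "C"), bootstrap = "case"))
})

# Aggregate the bootstrapped models:
boot_res <- aggregate_bootstraps(mod_full, boot_fits)
```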
- boot_sub
Proportion of cases to be subsampled (round(boot_sub * N)).
- boot_resample
Logical, should the bootstrap be with replacement (TRUE) or without replacement (FALSE)?
- ...
Arguments sent to varcov.