- data
A data frame encoding the data used in the analysis. Can be missing if covs and nobs are supplied.
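As a minimal sketch (assuming these are the arguments of the Ising() model-family constructor in the psychonetrics package), a small simulated binary dataset can be passed via data; the data and variable names below are purely hypothetical:

```r
# Minimal sketch, assuming the Ising() constructor from psychonetrics.
library(psychonetrics)
library(dplyr)

# Hypothetical, simulated data frame of three 0/1 items
set.seed(1)
mydata <- data.frame(
  item1 = rbinom(500, 1, 0.5),
  item2 = rbinom(500, 1, 0.4),
  item3 = rbinom(500, 1, 0.6)
)

model <- Ising(mydata, vars = c("item1", "item2", "item3")) %>%
  runmodel()
```

Later sketches in this section reuse this hypothetical mydata.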
- omega
The network structure. Either "full" to estimate every element freely, "zero" to set all elements to zero, or a matrix of dimensions nNode x nNode with 0 encoding an element fixed to zero, 1 encoding a freely estimated element, and higher integers encoding equality constraints. For multiple groups, this argument can be a list or array with each element/slice encoding such a matrix.
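For example, a hand-built structure matrix for three nodes could look as follows (a sketch; the diagonal is kept at zero since the network contains no self-loops, and the two elements sharing the integer 2 are constrained to be equal):

```r
# 0 = fixed to zero, 1 = freely estimated, 2 = two edges constrained equal
omega_structure <- matrix(c(
  0, 1, 2,
  1, 0, 2,
  2, 2, 0
), nrow = 3, ncol = 3, byrow = TRUE)

# model <- Ising(mydata, omega = omega_structure,
#                vars = c("item1", "item2", "item3"))
```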
- tau
Optional vector encoding the threshold/intercept structure. Set elements to 0 to indicate thresholds fixed to zero, 1 to indicate freely estimated thresholds, and higher integers to indicate equality constraints. For multiple groups, this argument can be a list or array with each element/column encoding such a vector.
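A sketch of such a threshold structure for three nodes (hypothetical choices):

```r
# Threshold 1 fixed to zero; thresholds 2 and 3 constrained to be equal
tau_structure <- c(0, 2, 2)

# model <- Ising(mydata, tau = tau_structure,
#                vars = c("item1", "item2", "item3"))
```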
- beta
Optional scalar encoding the inverse temperature. Set to 1 to indicate a freely estimated beta parameter, and use higher integers to indicate equality constraints. For multiple groups, this argument can be a list or array with each element encoding such a scalar.
- vars
An optional character vector encoding the variables used in the analysis. Must correspond to column names of the dataset in data.
- groups
An optional string indicating the name of the grouping variable in data, used to perform a multi-group analysis.
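For example (a sketch, again assuming the Ising() constructor; the grouping column "group" is hypothetical and added to the simulated data from above):

```r
# Hypothetical grouping variable
mydata$group <- sample(c("A", "B"), nrow(mydata), replace = TRUE)

model_mg <- Ising(mydata, vars = c("item1", "item2", "item3"),
                  groups = "group") %>%
  runmodel()
```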
- covs
A sample variance-covariance matrix, or a list/array of such matrices for multiple groups. Make sure the covtype argument is set correctly to the type of covariances used (see the sketch after the responses entry below).
- means
A vector of sample means, or a list/matrix containing such vectors for multiple groups.
- nobs
The number of observations used in covs and means, or a vector of such numbers of observations for multiple groups.
- covtype
If 'covs' is used, this is the type of covariance (maximum likelihood or unbiased) the input covariance matrix represents. Set to "ML" for maximum likelihood estimates (denominator n) and "UB" for unbiased estimates (denominator n-1). By default, the type is determined by checking which is most likely to result from integer-valued data.
- responses
A vector encoding the two dichotomous response options used (e.g., c(-1, 1) or c(0, 1)). Only needed when 'covs' is used.
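A sketch of summary-statistics input (all values are hypothetical but consistent with three 0/1 items with means 0.5, 0.4, and 0.6; the Ising() constructor is again assumed):

```r
# Hypothetical ML covariance matrix (denominator n), means, and sample size
S <- matrix(c(
  0.25, 0.05, 0.04,
  0.05, 0.24, 0.06,
  0.04, 0.06, 0.24
), nrow = 3, ncol = 3)
colnames(S) <- rownames(S) <- c("item1", "item2", "item3")
m <- c(item1 = 0.5, item2 = 0.4, item3 = 0.6)
N <- 500

model_cov <- Ising(covs = S, means = m, nobs = N,
                   covtype = "ML", responses = c(0, 1),
                   vars = c("item1", "item2", "item3"))
```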
- missing
How missingness should be handled when computing the sample covariances and number of observations from data. Can be "listwise" for listwise deletion, or "pairwise" for pairwise deletion. NOT RECOMMENDED TO BE USED YET IN THE ISING MODEL.
- equal
A character vector indicating which matrices should be constrained equal across groups.
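For example, the following sketch constrains the network to be equal across the groups of the multi-group model above (assuming "omega" is the relevant matrix name for this model family):

```r
model_eq <- Ising(mydata, vars = c("item1", "item2", "item3"),
                  groups = "group", equal = "omega") %>%
  runmodel()
```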
- baseline_saturated
A logical indicating if the baseline and saturated models should be included. Mostly used internally; NOT recommended to be set manually.
- estimator
The estimator to be used. Currently implemented are "ML" for maximum likelihood estimation, "FIML" for full-information maximum likelihood estimation, "ULS" for unweighted least squares estimation, "WLS" for weighted least squares estimation, and "DWLS" for diagonally weighted least squares estimation. Only ML estimation is currently supported for the Ising model.
- optimizer
The optimizer to be used. Can be one of "nlminb" (the default R nlminb function), "ucminf" (from the optimr package), and the C++ based optimizers "cpp_L-BFGS-B", "cpp_BFGS", "cpp_CG", "cpp_SANN", and "cpp_Nelder-Mead". The C++ optimizers are faster but slightly less stable. Defaults to "nlminb".
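For example (a sketch; switching optimizers only changes how the same model is estimated):

```r
model_fast <- Ising(mydata, vars = c("item1", "item2", "item3"),
                    optimizer = "cpp_L-BFGS-B") %>%
  runmodel()
```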
- storedata
Logical, should the raw data be stored? Needed for bootstrapping (see bootstrap).
- WLS.W
Optional WLS weights matrix. CURRENTLY NOT USED.
- sampleStats
An optional sample statistics object. Mostly used internally.
- identify
Logical, should the model be identified?
- verbose
Logical, should messages be printed?
- maxNodes
The maximum number of nodes allowed in the analysis. This function will stop with an error if more nodes are used (it is not recommended to set this higher).
- min_sum
The minimum sum score that is artificially possible in the dataset. Defaults to -Inf. Set this only if you know a lower sum score is not possible in the data, for example due to selection bias.
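For example, if respondents were only included when they endorsed at least one item, the minimum attainable sum score is 1 (a sketch that mimics such selection on the simulated data):

```r
# Keep only respondents endorsing at least one item (mimicking selection bias),
# then tell the model that sum scores below 1 cannot occur.
mydata_sel <- mydata[rowSums(mydata[, c("item1", "item2", "item3")]) >= 1, ]

model_sel <- Ising(mydata_sel, vars = c("item1", "item2", "item3"),
                   min_sum = 1) %>%
  runmodel()
```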
- bootstrap
Should the data be bootstrapped? If TRUE, the data are resampled and a bootstrap sample is created. These must be aggregated using aggregate_bootstraps! Can be TRUE or FALSE. Can also be "nonparametric" (which sets boot_sub = 1 and boot_resample = TRUE) or "case" (which sets boot_sub = 0.75 and boot_resample = FALSE).
- boot_sub
Proportion of cases to be subsampled (round(boot_sub * N)).
- boot_resample
Logical, should the bootstrap be with replacement (TRUE) or without replacement (FALSE)?
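A sketch of a bootstrap workflow (the aggregate_bootstraps() call below is an assumption about its interface; see its help page for the exact arguments):

```r
# Fit to the original sample
fit <- Ising(mydata, vars = c("item1", "item2", "item3")) %>% runmodel()

# Fit to 100 bootstrap samples
boots <- lapply(1:100, function(i)
  Ising(mydata, vars = c("item1", "item2", "item3"),
        bootstrap = TRUE, storedata = TRUE) %>%
    runmodel()
)

# Aggregate the bootstrap results (assumed interface)
agg <- aggregate_bootstraps(fit, boots)
```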