Easy-to-use, efficient, flexible and scalable statistical tools. The bigstatsr package provides and uses Filebacked Big Matrices (FBMs) via memory-mapping. It provides, for instance, matrix operations, Principal Component Analysis, sparse linear supervised models, utility functions, and more <doi:10.1093/bioinformatics/bty185>.
An FBM.
An FBM.code256.
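A minimal sketch of creating the two matrix types above, assuming the bigstatsr package is installed; the dimensions and code values here are illustrative, not from the original documentation:

```r
library(bigstatsr)

# A 10 x 5 Filebacked Big Matrix of doubles, initialized to 0.
# The data lives in a file on disk and is accessed via memory-mapping.
X <- FBM(10, 5, init = 0)
X[1, 1] <- 2.5

# An FBM.code256 stores raw bytes (0..255) that are decoded on the fly
# through a 256-value code; here bytes 0, 1, 2 map to values 0, 1, 2.
code <- rep(NA_real_, 256)
code[1:3] <- c(0, 1, 2)
X2 <- FBM.code256(10, 5, code = code, init = 0)
```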
Vector of responses, corresponding to ind.train.
Vector of responses, corresponding to ind.train. Must contain only 0s and 1s.
An optional vector of the row indices used for the training part. If not specified, all rows are used. Don't use negative indices.
An optional vector of the row indices that are used. If not specified, all rows are used. Don't use negative indices.
An optional vector of the column indices that are used. If not specified, all columns are used. Don't use negative indices.
Maximum number of columns read at once. Default uses block_size.
Number of cores used. Default doesn't use parallelism. You may use nb_cores.
A function that returns a named list of mean and sd for every column, used to scale each element as follows: $$\frac{X_{i,j} - mean_j}{sd_j}.$$ Default doesn't use any scaling.
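A minimal sketch of a custom scaling function matching the contract above (a named list of per-column mean and sd); `my_scaling` is a hypothetical name, and it relies on bigstatsr's `big_colstats()`, which returns per-column sums and variances:

```r
library(bigstatsr)

# Hypothetical custom scaling function: given the FBM and the row/column
# indices in use, return the per-column mean and sd so that each element
# is scaled as (X[i, j] - mean_j) / sd_j.
my_scaling <- function(X, ind.row = rows_along(X), ind.col = cols_along(X)) {
  stats <- big_colstats(X, ind.row, ind.col)  # column sums and variances
  list(mean = stats$sum / length(ind.row), sd = sqrt(stats$var))
}

X <- FBM(4, 2, init = c(1, 2, 3, 4, 10, 20, 30, 40))
sc <- my_scaling(X)
sc$mean  # per-column means: 2.5 and 25
```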
Matrix of covariables to be added in each model to correct for confounders (e.g. the scores of PCA), corresponding to ind.train. Default is NULL and corresponds to only adding an intercept to each model.
Matrix of covariables to be added in each model to correct for confounders (e.g. the scores of PCA), corresponding to ind.row. Default is NULL and corresponds to only adding an intercept to each model.
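A sketch of passing such a covariable matrix, assuming bigstatsr's `big_univLinReg()`; the simulated data and the choice of two covariables are illustrative only:

```r
library(bigstatsr)
set.seed(1)

# Simulated data: 100 observations, 5 variables, a continuous response.
X <- FBM(100, 5, init = rnorm(500))
y <- rnorm(100)

# Two covariables (e.g. the first two PC scores) to adjust for;
# an intercept is always added to each model.
covar <- matrix(rnorm(200), ncol = 2)

# Column-wise linear regression of y on each column of X,
# correcting for the covariables.
res <- big_univLinReg(X, y.train = y, covar.train = covar)
head(res)
```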
Large matrix computations (crossprods) are made block-wise and won't be parallelized, so that the size of these blocks does not have to be reduced. Instead, you may use Microsoft R Open (or another optimized BLAS) to accelerate these block matrix computations.