Wrappers for read_fst
and write_fst
from the fst package, but with different defaults. For data import, always return a data.table.
For data export, always compress the data to the smallest size.
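As a rough sketch of how these defaults differ from fst's own (the file name is illustrative, and the wrapper functions are assumed here to come from the tidyfst package):

library(fst)        # provides read_fst()/write_fst() with the original defaults
library(tidyfst)    # assumed source of import_fst()/export_fst()

write_fst(iris, "iris_defaults.fst")    # fst default: compress = 50
class(read_fst("iris_defaults.fst"))    # "data.frame"

export_fst(iris, "iris_defaults.fst")   # wrapper default: compress = 100
class(import_fst("iris_defaults.fst"))  # "data.table" "data.frame"

unlink("iris_defaults.fst")             # remove the illustrative file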
export_fst(x, path, compress = 100, uniform_encoding = TRUE)

import_fst(
  path,
  columns = NULL,
  from = 1,
  to = NULL,
  as.data.table = TRUE,
  old_format = FALSE
)
`import_fst` returns a data.table with the selected columns and rows. `export_fst` writes `x` to a `fst` file and invisibly returns `x` (so you can use this function in a pipeline).
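Because `export_fst` returns its input invisibly, it can sit in the middle of a pipeline. A minimal sketch (the file name is illustrative, the wrappers are assumed to be attached, e.g. via tidyfst, and the base `|>` pipe requires R >= 4.1):

iris |>
  export_fst("iris_snapshot.fst") |>   # writes the file, then passes iris along
  head()

unlink("iris_snapshot.fst")            # remove the illustrative file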
`x`: a data frame to write to disk
`path`: path to fst file
`compress`: value in the range 0 to 100, indicating the amount of compression to use. Lower values mean larger file sizes. Consistent with the defaults above, `export_fst` sets the compression to 100 (maximum compression, smallest file size).
`uniform_encoding`: If `TRUE`, all character vectors will be assumed to have elements with equal encoding. The encoding (latin1, UTF8 or native) of the first non-NA element will be used as the encoding for the whole column. This will be a correct assumption for most use cases. If `uniform_encoding` is set to `FALSE`, no such assumption will be made and all elements will be converted to the same encoding. The latter is a relatively expensive operation and will reduce write performance for character columns.
`columns`: Column names to read. The default is to read all columns.
`from`: Read data starting from this row number.
`to`: Read data up until this row number. The default is to read to the last row of the stored dataset. (Column and row selection are illustrated in the sketch after this argument list.)
`as.data.table`: If `TRUE`, the result will be returned as a data.table object. Any keys set on dataset `x` before writing will be retained. This allows for storage of sorted datasets. This option requires the data.table package to be installed.
`old_format`: must be `FALSE`; the old fst file format is deprecated and can only be read and converted with fst package versions 0.8.0 to 0.8.10.
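A minimal sketch of these arguments in use (the file name is illustrative; data.table and the wrapper functions, assumed here to come from tidyfst, must be attached):

library(data.table)
library(tidyfst)    # assumed source of import_fst()/export_fst()

dt <- as.data.table(iris)
setkey(dt, Species)                      # key set before writing
export_fst(dt, "iris_keyed.fst")         # written with compress = 100 by default

full_dt <- import_fst("iris_keyed.fst")
key(full_dt)                             # "Species": the key is retained

# Read only two columns and rows 51 to 100 of the stored data
subset_dt <- import_fst("iris_keyed.fst",
                        columns = c("Sepal.Length", "Species"),
                        from = 51, to = 100)

unlink("iris_keyed.fst")                 # remove the illustrative file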
See also: `read_fst` and `write_fst` from the fst package.
export_fst(iris, "iris_fst_test.fst")
iris_dt <- import_fst("iris_fst_test.fst")
iris_dt
unlink("iris_fst_test.fst")