crunch (version 1.30.4)

exportDataset: Export a dataset to a file

Description

This function allows you to write a CrunchDataset to a local .csv or SPSS .sav file; experimental parquet export is also supported.

Usage

exportDataset(
  dataset,
  file,
  format = c("csv", "spss", "parquet"),
  categorical = c("name", "id"),
  na = NULL,
  varlabel = c("name", "description"),
  include.hidden = FALSE,
  ...
)

# S4 method for CrunchDataset
write.csv(x, ...)

Value

Invisibly, file.

Arguments

dataset

CrunchDataset, which may have been subsetted with a filter expression on the rows and a selection of variables on the columns.

file

character local filename to write to

format

character export format: currently supported values are "csv" and "spss" (and experimental support for "parquet").
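For example, a minimal sketch of requesting an SPSS export instead of the default CSV (ds stands for a CrunchDataset you have already loaded; the filename is arbitrary):

# Write an SPSS .sav file instead of the default CSV
sav_file <- exportDataset(ds, "data.sav", format = "spss")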

categorical

character: export categorical values to CSV as category "name" (default) or "id". Ignored by the SPSS exporter.
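For example, to write category ids rather than category names into the CSV (a sketch; ds is an existing CrunchDataset):

# Export categorical values as numeric category ids
csv_file <- exportDataset(ds, "data-ids.csv", categorical = "id")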

na

Similar to the argument in utils::write.table(), 'na' lets you control how missing values are written into the CSV file. Supported values are listed below, followed by a short usage sketch:

  1. NULL, the default, which means that categorical variables will have the category name or id as the value, and numeric, text, and datetime variables will have the missing reason string;

  2. A string to use for missing values;

  3. "", which means that empty cells will be written for missing values for all types.

varlabel

For SPSS export, which Crunch metadata field should be used as variable labels? Default is "name", but "description" is another valid value.
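For example, to label SPSS variables with their Crunch descriptions instead of their names (a sketch):

# Use variable descriptions as SPSS variable labels
exportDataset(ds, "data.sav", format = "spss", varlabel = "description")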

include.hidden

logical: should hidden variables be included? (default: FALSE)
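For example (a sketch):

# Also export variables that are hidden in the dataset
exportDataset(ds, "data-with-hidden.csv", include.hidden = TRUE)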

...

additional options. See the API documentation. Currently supported boolean options include 'include_personal' for personal variables (default: FALSE) and 'prefix_subvariables' for SPSS format, which controls whether to include the array variable's name in each of its subvariables' "varlabels" (default: FALSE).
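For example, a sketch of passing these options through ... (the option names are the two listed above):

# Include personal variables in a CSV export
exportDataset(ds, "data-personal.csv", include_personal = TRUE)

# Prefix each subvariable's SPSS label with its array variable's name
exportDataset(ds, "data.sav", format = "spss", prefix_subvariables = TRUE)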

x

(for write.csv) CrunchDataset, which may have been subsetted with a filter expression on the rows and a selection of variables on the columns.
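For example, write.csv() on a CrunchDataset dispatches to the CSV exporter, so a filtered, variable-selected dataset can be written directly (a sketch; the variable names age and gender are hypothetical):

# Export only rows where age > 30 and only two variables
write.csv(ds[ds$age > 30, c("age", "gender")], file = "subset.csv")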

Examples

if (FALSE) {
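# ds is an existing CrunchDataset, e.g. loaded with loadDataset()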
csv_file <- exportDataset(ds, "data.csv")
data <- read.csv(csv_file)

# parquet will likely read more quickly and be a smaller download size
parquet_file <- exportDataset(ds, "data.parquet")
# data <- arrow::read_parquet(parquet_file) # The arrow package can read parquet files
}
