Copy an R data.frame to Spark, and return a reference to the generated Spark DataFrame as a tbl_spark. The returned object will act as a dplyr-compatible interface to the underlying Spark table.
# S3 method for spark_connection
copy_to(
dest,
df,
name = spark_table_name(substitute(df)),
overwrite = FALSE,
memory = TRUE,
repartition = 0L,
...
)
A tbl_spark, representing a dplyr-compatible interface to a Spark DataFrame.
dest: A spark_connection.

df: An R data.frame.

name: The name to assign to the copied table in Spark.

overwrite: Boolean; overwrite a pre-existing table with the same name, if one already exists?

memory: Boolean; should the table be cached into memory?

repartition: The number of partitions to use when distributing the table across the Spark cluster. The default (0) can be used to avoid partitioning.

...: Optional arguments; currently unused.
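For illustration, a minimal sketch of copying a data.frame to Spark and querying it with dplyr. It assumes a local Spark installation is available via spark_connect(master = "local"); the table name "mtcars_tbl" is an arbitrary choice for this example:

```r
library(sparklyr)
library(dplyr)

# Connect to a local Spark instance (assumes Spark is installed locally)
sc <- spark_connect(master = "local")

# Copy the built-in mtcars data.frame to Spark; overwrite any existing
# table of the same name, and cache the result in memory (the default)
mtcars_tbl <- copy_to(sc, mtcars, name = "mtcars_tbl", overwrite = TRUE)

# The returned tbl_spark works with dplyr verbs; the computation is
# pushed down to Spark, and collect() brings the result back into R
mtcars_tbl %>%
  group_by(cyl) %>%
  summarise(avg_mpg = mean(mpg, na.rm = TRUE)) %>%
  collect()

spark_disconnect(sc)
```

Because the copied table is cached in memory by default (memory = TRUE), repeated queries against mtcars_tbl avoid re-reading the data; set memory = FALSE for large tables that are only scanned once.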