
SparkR (version 3.1.2)

checkpoint: Checkpoint a SparkDataFrame

Description

Returns a checkpointed version of this SparkDataFrame. Checkpointing can be used to truncate the logical plan, which is especially useful in iterative algorithms where the plan may grow exponentially. The data will be saved to files inside the checkpoint directory set with setCheckpointDir().
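
A minimal sketch of the iterative pattern described above, assuming a running Spark session, a writable checkpoint directory at /tmp/spark-checkpoints, and the mtcars example data (the derived column names are illustrative only):

library(SparkR)
sparkR.session()
setCheckpointDir("/tmp/spark-checkpoints")

df <- createDataFrame(mtcars)

# Each iteration extends the logical plan; checkpointing every few
# iterations truncates the lineage so the plan does not keep growing.
for (i in 1:10) {
  df <- withColumn(df, paste0("step_", i), df$mpg * i)
  if (i %% 5 == 0) {
    df <- checkpoint(df)
  }
}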

Usage

checkpoint(x, eager = TRUE)

# S4 method for SparkDataFrame
checkpoint(x, eager = TRUE)

Arguments

x

A SparkDataFrame

eager

whether to checkpoint this SparkDataFrame immediately
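
As a rough sketch of the eager argument (assuming df is an existing SparkDataFrame and a checkpoint directory has already been set): with eager = TRUE (the default) the checkpoint is written as soon as checkpoint() is called, whereas with eager = FALSE the write is deferred until the SparkDataFrame is first computed.

# Lazy checkpoint: nothing is written yet.
df2 <- checkpoint(df, eager = FALSE)

# The checkpoint is materialized the first time df2 is computed,
# e.g. by an action such as count() or collect().
count(df2)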

Value

A new checkpointed SparkDataFrame

See Also

setCheckpointDir

Other SparkDataFrame functions: SparkDataFrame-class, agg(), alias(), arrange(), as.data.frame(), attach,SparkDataFrame-method, broadcast(), cache(), coalesce(), collect(), colnames(), coltypes(), createOrReplaceTempView(), crossJoin(), cube(), dapplyCollect(), dapply(), describe(), dim(), distinct(), dropDuplicates(), dropna(), drop(), dtypes(), exceptAll(), except(), explain(), filter(), first(), gapplyCollect(), gapply(), getNumPartitions(), group_by(), head(), hint(), histogram(), insertInto(), intersectAll(), intersect(), isLocal(), isStreaming(), join(), limit(), localCheckpoint(), merge(), mutate(), ncol(), nrow(), persist(), printSchema(), randomSplit(), rbind(), rename(), repartitionByRange(), repartition(), rollup(), sample(), saveAsTable(), schema(), selectExpr(), select(), showDF(), show(), storageLevel(), str(), subset(), summary(), take(), toJSON(), unionAll(), unionByName(), union(), unpersist(), withColumn(), withWatermark(), with(), write.df(), write.jdbc(), write.json(), write.orc(), write.parquet(), write.stream(), write.text()

Examples

# NOT RUN {
setCheckpointDir("/checkpoint")
df <- createDataFrame(mtcars)  # any SparkDataFrame will do
df <- checkpoint(df)
# }
