
SparkR (version 3.1.2)

rbind: Union two or more SparkDataFrames

Description

Union two or more SparkDataFrames by row. As in R's rbind, this method requires that the input SparkDataFrames have the same column names.
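
For illustration, a minimal sketch of row-binding two SparkDataFrames with matching column names (the toy columns name and age are hypothetical, not part of this page):

sparkR.session()
# Both inputs must share the same column names
df1 <- createDataFrame(data.frame(name = c("Alice", "Bob"), age = c(30, 25)))
df2 <- createDataFrame(data.frame(name = "Carol", age = 41))
combined <- rbind(df1, df2)
head(combined)   # 3 rows, same schema as the inputs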

Usage

rbind(..., deparse.level = 1)

# S4 method for SparkDataFrame
rbind(x, ..., deparse.level = 1)

Arguments

...

additional SparkDataFrame(s).

deparse.level

currently not used (put here to match the signature of the base implementation).

x

a SparkDataFrame.

Value

A SparkDataFrame containing the result of the union.

Details

Note: This does not remove duplicate rows across the input SparkDataFrames.
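
As a hedged sketch of this behavior (the toy column id is hypothetical): duplicate rows survive the union and can be dropped explicitly afterwards, for example with distinct():

df <- createDataFrame(data.frame(id = c(1, 2)))
both <- rbind(df, df)        # 4 rows: duplicates from both inputs are kept
deduped <- distinct(both)    # 2 rows: duplicates removed explicitly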

See Also

union(), unionByName()

Other SparkDataFrame functions: SparkDataFrame-class, agg(), alias(), arrange(), as.data.frame(), attach,SparkDataFrame-method, broadcast(), cache(), checkpoint(), coalesce(), collect(), colnames(), coltypes(), createOrReplaceTempView(), crossJoin(), cube(), dapplyCollect(), dapply(), describe(), dim(), distinct(), dropDuplicates(), dropna(), drop(), dtypes(), exceptAll(), except(), explain(), filter(), first(), gapplyCollect(), gapply(), getNumPartitions(), group_by(), head(), hint(), histogram(), insertInto(), intersectAll(), intersect(), isLocal(), isStreaming(), join(), limit(), localCheckpoint(), merge(), mutate(), ncol(), nrow(), persist(), printSchema(), randomSplit(), rename(), repartitionByRange(), repartition(), rollup(), sample(), saveAsTable(), schema(), selectExpr(), select(), showDF(), show(), storageLevel(), str(), subset(), summary(), take(), toJSON(), unionAll(), unionByName(), union(), unpersist(), withColumn(), withWatermark(), with(), write.df(), write.jdbc(), write.json(), write.orc(), write.parquet(), write.stream(), write.text()

Examples

sparkR.session()
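# df, df2, df3 and df4 are assumed to be existing SparkDataFrames with identical column names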
unions <- rbind(df, df2, df3, df4)
