dropna, na.omit - Returns a new SparkDataFrame omitting rows with null values.
fillna - Replaces null values.
dropna(x, how = c("any", "all"), minNonNulls = NULL, cols = NULL)
na.omit(object, ...)
fillna(x, value, cols = NULL)
# S4 method for SparkDataFrame
dropna(x, how = c("any", "all"),
minNonNulls = NULL, cols = NULL)
# S4 method for SparkDataFrame
na.omit(object, how = c("any", "all"),
minNonNulls = NULL, cols = NULL)
# S4 method for SparkDataFrame
fillna(x, value, cols = NULL)
x: a SparkDataFrame.
how: "any" or "all". If "any", drop a row if it contains any nulls; if "all", drop a row only if all its values are null. If minNonNulls is specified, how is ignored.
minNonNulls: if specified, drop rows that have fewer than minNonNulls non-null values. This overrides the how parameter.
cols: optional list of column names to consider. In fillna, columns specified in cols that do not have a matching data type are ignored. For example, if value is a character and cols contains a non-character column, then the non-character column is simply ignored (see the illustrative sketch below).
object: a SparkDataFrame.
...: further arguments to be passed to or from other methods.
value: value to replace null values with. Should be an integer, numeric, character or named list. If value is a named list, then cols is ignored and value must be a mapping from column name (character) to replacement value. The replacement value must be an integer, numeric or character.
A SparkDataFrame.
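A minimal sketch of the cols type-matching rule and the named-list form of value, assuming a SparkDataFrame with a numeric column age and a character column name (both column names are illustrative):

df <- createDataFrame(data.frame(age = c(NA, 30), name = c("Alice", NA)))
# value is a character, so the numeric column age is ignored even though it is listed in cols
fillna(df, "unknown", cols = c("age", "name"))
# a named list supplies a per-column replacement value; cols is ignored in this form
fillna(df, list("age" = 20, "name" = "unknown"))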
Other SparkDataFrame functions: SparkDataFrame-class,
agg, alias,
arrange, as.data.frame,
attach,SparkDataFrame-method,
broadcast, cache,
checkpoint, coalesce,
collect, colnames,
coltypes,
createOrReplaceTempView,
crossJoin, cube,
dapplyCollect, dapply,
describe, dim,
distinct, dropDuplicates,
drop, dtypes,
except, explain,
filter, first,
gapplyCollect, gapply,
getNumPartitions, group_by,
head, hint,
histogram, insertInto,
intersect, isLocal,
isStreaming, join,
limit, localCheckpoint,
merge, mutate,
ncol, nrow,
persist, printSchema,
randomSplit, rbind,
registerTempTable, rename,
repartition, rollup,
sample, saveAsTable,
schema, selectExpr,
select, showDF,
show, storageLevel,
str, subset,
summary, take,
toJSON, unionByName,
union, unpersist,
withColumn, withWatermark,
with, write.df,
write.jdbc, write.json,
write.orc, write.parquet,
write.stream, write.text
# NOT RUN {
sparkR.session()
path <- "path/to/file.json"
df <- read.json(path)
dropna(df)
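# Hedged variations, assuming df contains columns "age" and "name" with some null values:
dropna(df, how = "all")               # drop a row only if every value in it is null
dropna(df, minNonNulls = 2)           # keep rows with at least 2 non-null values; how is ignored
dropna(df, cols = c("age", "name"))   # consider only the listed columns when checking for nulls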
# }
# NOT RUN {
sparkR.session()
path <- "path/to/file.json"
df <- read.json(path)
fillna(df, 1)
fillna(df, list("age" = 20, "name" = "unknown"))
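# Hedged variation: fill nulls only in selected columns via cols; the numeric
# "age" column is assumed to exist, as in the named-list example above.
fillna(df, 0, cols = c("age"))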
# }