Usage

read.csv.sql(file, sql = "select * from file", header = TRUE, sep = ",",
    row.names, eol, skip, filter, nrows, field.types, comment.char,
    colClasses, dbname = tempfile(), drv = "SQLite", ...)

read.csv2.sql(file, sql = "select * from file", header = TRUE, sep = ";",
    row.names, eol, skip, filter, nrows, field.types, comment.char,
    colClasses, dbname = tempfile(), drv = "SQLite", ...)
Arguments

file: A file name or a URL (beginning with http:// or ftp://). If the filter argument is used and no file is to be input to the filter, then file can be omitted, NULL, NA or "".

sql: A character string holding an SQL statement. The table representing the file should be referred to as file.

header: As in read.csv.

sep: As in read.csv.

row.names: As in read.csv.

eol: The character that ends a line.

skip: The number of lines of the input file to skip.

filter: If specified, a shell/batch command through which the input file is piped before it is read. For read.csv2.sql the default on non-Windows systems is the command tr , . which translates all commas in the file to dots. (A corresponding default filter is used on Windows.)

nrows: The number of rows used to determine the column types. Using -1 causes all rows to be used for determining column types. This argument is rarely needed.

field.types: An optional list, named by column, giving the database types of the columns.

comment.char: If specified, this character and anything following it on a line of the input is ignored.

colClasses: As in read.csv.

dbname: As in sqldf, except that the default is tempfile(). Specifying NULL will put the database in memory, which may improve speed but will limit the size of the database to the available memory.

drv: Currently the only database supported by read.csv.sql and read.csv2.sql is SQLite. Note that the H2 database has a built-in SQL function, CSVREAD, which can be used in place of read.csv.sql.

...: Passed to sqldf.
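As noted for the file and dbname arguments above, the input can come entirely from the filter command and the intermediate database can be kept in memory. The following is a minimal sketch, assuming a gzip command is available on the search path and using a hypothetical compressed file data.csv.gz:

library(sqldf)

# the filter produces the CSV text, so the file argument is omitted;
# dbname = NULL keeps the temporary database in memory
# (data.csv.gz is a hypothetical file; gzip is assumed to be installed)
dat <- read.csv.sql(sql = "select * from file",
                    filter = "gzip -dc data.csv.gz",
                    dbname = NULL)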
Details

These functions use facilities of SQLite to read the file, which are intended for speed and therefore are not as flexible as in R. For example, SQLite does not recognize quoted fields as special but will regard the quotes as part of the field. See the sqldf help for more information.
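If quotes do need to be removed, the filter argument can strip them before the file reaches SQLite. A minimal sketch, assuming a non-Windows system with the tr utility available and a hypothetical file quoted.csv:

# delete double quotes from the input, since quoted fields are not
# treated specially (quoted.csv is a hypothetical file)
dat <- read.csv.sql("quoted.csv", sql = "select * from file",
                    filter = "tr -d '\"'")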
read.csv2.sql is like read.csv.sql except that the default sep is ";" and the default filter translates all commas in the file to decimal points (i.e. to dots).
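For example, a European-style file with semicolon separators and decimal commas can be read directly. A minimal sketch using a hypothetical file euro.csv:

# sep = ";" and the comma-to-dot filter are the defaults here
dat2 <- read.csv2.sql("euro.csv", sql = "select * from file")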
On Windows, if the filter argument is used and Rtools is detected in the registry, then the Rtools bin directory is added to the search path, facilitating use of those tools without explicitly setting the path.

Examples

# might need to specify eol= too, depending on your system
write.csv(iris, "iris.csv", quote = FALSE, row.names = FALSE)
iris2 <- read.csv.sql("iris.csv",
sql = "select * from file where Species = 'setosa' ")