sqlSave(channel, dat, tablename = NULL, append = FALSE, rownames = TRUE,
        colnames = FALSE, verbose = FALSE, oldstyle = FALSE,
        safer = TRUE, addPK = FALSE, fast = TRUE, test = TRUE,
        nastring = NULL)

sqlUpdate(channel, dat, tablename = NULL, index = NULL,
          verbose = FALSE, test = FALSE, nastring = NULL, fast = TRUE)
channel: connection handle as returned by odbcConnect.

rownames: either logical or character. If logical, should the row names be saved as the first column rownames in the table? If character, the column name under which to save the rownames.

oldstyle: logical. Do not use sqlTypeInfo to choose the types of columns when a table has to be created? If true, create all columns as varchar(255).

safer: logical. If false, allow sqlSave to attempt to delete all the rows of an existing table, or to drop it.

nastring: character string to be used when writing NAs to the database. The default, NULL, attempts to write a missing value as a database null.
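For illustration, a call combining these arguments might look like the sketch below (the DSN "test" and the table name "mytab" are placeholders, not part of this documentation):

library(RODBC)
channel <- odbcConnect("test")            # assumes a DSN named "test" is configured
df <- data.frame(x = c(1.5, NA, 3), y = c("a", "b", NA))
sqlSave(channel, df, tablename = "mytab",
        rownames = "id",                  # store the row names in a column "id"
        nastring = NULL,                  # write NAs as database nulls (the default)
        oldstyle = FALSE)                 # use sqlTypeInfo() to choose column types
odbcClose(channel)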
sqlSave saves the data frame dat in the table tablename. The table name is taken from tablename if given, or otherwise from the name of the data frame passed as dat.
If the table exists and has the appropriate structure
it is used, or else it is created anew.
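For example (a minimal sketch, assuming an open connection channel to a suitable DSN; the table names are hypothetical):

dat <- data.frame(a = 1:3, b = letters[1:3])
sqlSave(channel, dat)                               # table name taken from the data frame: "dat"
sqlSave(channel, dat, tablename = "dat2")           # explicit table name
sqlSave(channel, dat, tablename = "dat2", append = TRUE)   # add rows to the existing table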
If rownames = TRUE the first column of the table will be the row labels with column name rownames: rownames can also be a string giving the desired column name (see the example below). colnames = TRUE copies the column names into row 1 of the table. This is intended for cases where case conversion alters the original column names and it is desired that they are retained. Note that there are drawbacks to this approach: it presupposes that the rows will be returned in the correct order, which is not always valid, and it will also cause numeric columns to be returned as factors.

Argument addPK = TRUE causes the row names to be marked as a primary key. This is usually a good idea, and may allow updates to be done. However, some DBMSs (e.g. Access) do not support primary keys, and the PostgreSQL ODBC driver generates internal memory corruption if this option is used.
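A sketch of these options together (the table name "arrests" is hypothetical, and channel is assumed to be an open connection):

# store the row names in a column called "id" and mark it as the primary key;
# omit addPK on DBMSs that do not support it (e.g. Access)
sqlSave(channel, USArrests, tablename = "arrests",
        rownames = "id", addPK = TRUE)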
WARNING: sqlSave(safer = FALSE)
uses the 'great white shark'
method of testing tables (bite it and see). The logic will
unceremoniously DROP the table and create it anew with its own choice of
column types in its attempt to find a writable solution.
test = TRUE
will not necessarily predict this behaviour.
Attempting to write indexed columns or writing to pseudo-columns are
less obvious causes of failed writes followed by a DROP. If your table
structure is precious to you, back it up.
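One way to avoid relying on safer = FALSE is to clear or drop an existing table explicitly before rewriting it, so nothing is destroyed implicitly. A sketch, reusing the hypothetical "arrests" table from above:

sqlClear(channel, "arrests")                        # delete all rows but keep the structure
sqlSave(channel, USArrests, tablename = "arrests",
        rownames = "id", append = TRUE)             # refill the existing table
# or start completely afresh:
# sqlDrop(channel, "arrests")
# sqlSave(channel, USArrests, tablename = "arrests", rownames = "id")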
sqlUpdate updates the table where the rows already exist. The data frame dat should have columns with names that map to (some of) the columns in the table. It also needs to contain the column(s) specified by index, which together identify the rows to be updated. If index = NULL, the function tries to identify such rows. First it looks for a primary key in the data frame, then for the column(s) that the database regards as optimal for defining a row uniquely (these are returned by sqlColumns(..., special = TRUE)). If there is no such column the rownames are used, provided they are stored as column "rownames" in the table.
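For instance, with the hypothetical "arrests" table above (row names stored in column "id"), two rows could be updated by naming that column as the index:

upd <- data.frame(id = c("Alabama", "Alaska"), Murder = c(13.5, 10.2))
sqlUpdate(channel, upd, tablename = "arrests", index = "id")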
The value of nastring
is used for all the columns and no
attempt is made to check if the column is nullable. For all but the
simplest applications it will be better to prepare a data frame with
non-null missing values already substituted.
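A sketch of such a substitution (the replacement codes -999 and "unknown" are arbitrary choices for illustration, not recommendations):

dat <- data.frame(a = c(1, NA, 3), b = c("x", NA, "z"))
dat$a[is.na(dat$a)] <- -999            # numeric sentinel value
dat$b[is.na(dat$b)] <- "unknown"       # explicit character code
sqlSave(channel, dat, tablename = "dat_clean")      # hypothetical table name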
See also sqlFetch, sqlQuery, odbcConnect, odbcGetInfo.
library(RODBC)
data(USArrests)
channel <- odbcConnect("test", "", "")   # DSN "test"; userId and password
# save the data frame, storing the row names in a column called "State"
sqlSave(channel, USArrests, verbose = TRUE, rownames = "State")
sqlFetch(channel, "USArrests", rownames = "State")   # get the lot back
# update Alabama's Assault figure via the row-name column
foo <- cbind(State = row.names(USArrests), USArrests)[1:3, c(1, 3)]
foo[1, 2] <- 222
sqlUpdate(channel, foo, "USArrests")
sqlFetch(channel, "USArrests", rownames = "State", max = 5)   # check the first few rows
sqlDrop(channel, "USArrests")   # clean up
odbcClose(channel)