Deprecated: please use api-job and api-perform instead.
insert_extract_job(
project,
dataset,
table,
destination_uris,
compression = "NONE",
destination_format = "NEWLINE_DELIMITED_JSON",
...,
print_header = TRUE,
billing = project
)
Returns a job resource list, as documented at https://cloud.google.com/bigquery/docs/reference/rest/v2/jobs.
project, dataset: Project and dataset identifiers.
table: Name of the table to extract.
destination_uris: Fully qualified Google Storage URL. For large extracts you may need to specify a wild-card, since a single export file is limited to 1 GB.
compression: Compression type ("NONE", "GZIP").
destination_format: Destination format ("CSV", "AVRO", or "NEWLINE_DELIMITED_JSON").
...: Additional arguments passed on to the underlying API call. snake_case names are automatically converted to camelCase.
print_header: Include a row of column headers in the results?
billing: Project ID to use for billing.
Other jobs: get_job(), insert_query_job(), insert_upload_job(), wait_for()
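A minimal usage sketch, assuming the bigrquery package is attached and valid Google Cloud credentials are available. The project, dataset, table, and bucket names are placeholders, not real resources:

```r
library(bigrquery)

# Extract a table to Google Cloud Storage as gzipped CSV.
# "my-project", "my_dataset", "my_table", and "my-bucket" are hypothetical.
# The wild-card in the URI lets the export span multiple files if it
# exceeds the per-file size limit.
job <- insert_extract_job(
  project = "my-project",
  dataset = "my_dataset",
  table = "my_table",
  destination_uris = "gs://my-bucket/my_table-*.csv.gz",
  compression = "GZIP",
  destination_format = "CSV"
)

# Block until the extract job completes, then inspect the job resource.
job <- wait_for(job)
```

Note that insert_extract_job() only submits the job; pairing it with wait_for() (listed under "Other jobs" above) is how you wait for the extract to finish.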