Description:

    Please use api-job instead.
Usage:

    insert_extract_job(project, dataset, table, destination_uris,
      compression = "NONE", destination_format = "NEWLINE_DELIMITED_JSON",
      ..., print_header = TRUE, billing = project)
Arguments:

    project, dataset
        Project and dataset identifiers.

    table
        Name of the table to extract.

    destination_uris
        Fully qualified Google Cloud Storage URL. For large extracts
        you may need to specify a wild-card, since exports larger than
        1 GB must be split across multiple files (see the sketch
        following this list).

    compression
        Compression type ("NONE" or "GZIP").

    destination_format
        Destination format ("CSV", "AVRO", or "NEWLINE_DELIMITED_JSON").

    ...
        Additional arguments passed on to the underlying API call.
        snake_case names are automatically converted to camelCase.

    print_header
        Include a row of column headers in the exported results?

    billing
        Project ID to use for billing.
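A brief sketch of a wild-carded destination URI; the bucket and path
below are hypothetical:

    # Hypothetical bucket; the "*" lets BigQuery shard a large extract
    # across multiple files, each at most 1 GB
    destination_uris <- "gs://my-bucket/extracts/my_table-*.json"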
Value:

    A job resource list, as documented at
    https://cloud.google.com/bigquery/docs/reference/v2/jobs.
See Also:

    Other jobs: get_job, insert_query_job, insert_upload_job, wait_for
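Examples:

A minimal sketch of extracting a table to Cloud Storage and waiting for
the job to finish; all identifiers (project, dataset, table, bucket)
are hypothetical, and wait_for is the companion helper listed above.

    ## Not run:
    job <- insert_extract_job(
      project = "my-project",
      dataset = "my_dataset",
      table = "my_table",
      destination_uris = "gs://my-bucket/my_table-*.json",
      destination_format = "NEWLINE_DELIMITED_JSON"
    )
    # Poll until the extract job completes (or fails)
    job <- wait_for(job)
    ## End(Not run)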