Defines the public endpoint for the AWS Glue service.
glue()
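The client accepts an optional config list for credentials, region, and endpoint settings. A minimal construction sketch (the profile name and region below are placeholder assumptions, not defaults from this page):

# Construct a Glue client; the profile and region are placeholders.
svc <- glue(
  config = list(
    credentials = list(profile = "default"),
    region = "us-east-1"
  )
)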
Operation | Description |
--- | --- |
batch_create_partition | Creates one or more partitions in a batch operation |
batch_delete_connection | Deletes a list of connection definitions from the Data Catalog |
batch_delete_partition | Deletes one or more partitions in a batch operation |
batch_delete_table | Deletes multiple tables at once |
batch_delete_table_version | Deletes a specified batch of versions of a table |
batch_get_crawlers | Returns a list of resource metadata for a given list of crawler names |
batch_get_dev_endpoints | Returns a list of resource metadata for a given list of DevEndpoint names |
batch_get_jobs | Returns a list of resource metadata for a given list of job names |
batch_get_partition | Retrieves partitions in a batch request |
batch_get_triggers | Returns a list of resource metadata for a given list of trigger names |
batch_get_workflows | Returns a list of resource metadata for a given list of workflow names |
batch_stop_job_run | Stops one or more job runs for a specified job definition |
create_classifier | Creates a classifier in the user's account |
create_connection | Creates a connection definition in the Data Catalog |
create_crawler | Creates a new crawler with specified targets, role, configuration, and optional schedule |
create_database | Creates a new database in a Data Catalog |
create_dev_endpoint | Creates a new DevEndpoint |
create_job | Creates a new job definition |
create_partition | Creates a new partition |
create_script | Transforms a directed acyclic graph (DAG) into code |
create_security_configuration | Creates a new security configuration |
create_table | Creates a new table definition in the Data Catalog |
create_trigger | Creates a new trigger |
create_user_defined_function | Creates a new function definition in the Data Catalog |
create_workflow | Creates a new workflow |
delete_classifier | Removes a classifier from the Data Catalog |
delete_connection | Deletes a connection from the Data Catalog |
delete_crawler | Removes a specified crawler from the AWS Glue Data Catalog, unless the crawler state is RUNNING |
delete_database | Removes a specified Database from a Data Catalog |
delete_dev_endpoint | Deletes a specified DevEndpoint |
delete_job | Deletes a specified job definition |
delete_partition | Deletes a specified partition |
delete_resource_policy | Deletes a specified policy |
delete_security_configuration | Deletes a specified security configuration |
delete_table | Removes a table definition from the Data Catalog |
delete_table_version | Deletes a specified version of a table |
delete_trigger | Deletes a specified trigger |
delete_user_defined_function | Deletes an existing function definition from the Data Catalog |
delete_workflow | Deletes a workflow |
get_catalog_import_status | Retrieves the status of a migration operation |
get_classifier | Retrieves a classifier by name |
get_classifiers | Lists all classifier objects in the Data Catalog |
get_connection | Retrieves a connection definition from the Data Catalog |
get_connections | Retrieves a list of connection definitions from the Data Catalog |
get_crawler | Retrieves metadata for a specified crawler |
get_crawler_metrics | Retrieves metrics about specified crawlers |
get_crawlers | Retrieves metadata for all crawlers defined in the customer account |
get_data_catalog_encryption_settings | Retrieves the security configuration for a specified catalog |
get_database | Retrieves the definition of a specified database |
get_databases | Retrieves all Databases defined in a given Data Catalog |
get_dataflow_graph | Transforms a Python script into a directed acyclic graph (DAG) |
get_dev_endpoint | Retrieves information about a specified DevEndpoint |
get_dev_endpoints | Retrieves all the DevEndpoints in this AWS account |
get_job | Retrieves an existing job definition |
get_job_run | Retrieves the metadata for a given job run |
get_job_runs | Retrieves metadata for all runs of a given job definition |
get_jobs | Retrieves all current job definitions |
get_mapping | Creates mappings |
get_partition | Retrieves information about a specified partition |
get_partitions | Retrieves information about the partitions in a table |
get_plan | Gets code to perform a specified mapping |
get_resource_policy | Retrieves a specified resource policy |
get_security_configuration | Retrieves a specified security configuration |
get_security_configurations | Retrieves a list of all security configurations |
get_table | Retrieves the Table definition in a Data Catalog for a specified table |
get_table_version | Retrieves a specified version of a table |
get_table_versions | Retrieves a list of strings that identify available versions of a specified table |
get_tables | Retrieves the definitions of some or all of the tables in a given Database |
get_tags | Retrieves a list of tags associated with a resource |
get_trigger | Retrieves the definition of a trigger |
get_triggers | Gets all the triggers associated with a job |
get_user_defined_function | Retrieves a specified function definition from the Data Catalog |
get_user_defined_functions | Retrieves multiple function definitions from the Data Catalog |
get_workflow | Retrieves resource metadata for a workflow |
get_workflow_run | Retrieves the metadata for a given workflow run |
get_workflow_run_properties | Retrieves the workflow run properties which were set during the run |
get_workflow_runs | Retrieves metadata for all runs of a given workflow |
import_catalog_to_glue | Imports an existing Athena Data Catalog to AWS Glue |
list_crawlers | Retrieves the names of all crawler resources in this AWS account, or the resources with the specified tag |
list_dev_endpoints | Retrieves the names of all DevEndpoint resources in this AWS account, or the resources with the specified tag |
list_jobs | Retrieves the names of all job resources in this AWS account, or the resources with the specified tag |
list_triggers | Retrieves the names of all trigger resources in this AWS account, or the resources with the specified tag |
list_workflows | Lists names of workflows created in the account |
put_data_catalog_encryption_settings | Sets the security configuration for a specified catalog |
put_resource_policy | Sets the Data Catalog resource policy for access control |
put_workflow_run_properties | Puts the specified workflow run properties for the given workflow run |
reset_job_bookmark | Resets a bookmark entry |
start_crawler | Starts a crawl using the specified crawler, regardless of what is scheduled |
start_crawler_schedule | Changes the schedule state of the specified crawler to SCHEDULED, unless the crawler is already running or the schedule state is already SCHEDULED |
start_job_run | Starts a job run using a job definition |
start_trigger | Starts an existing trigger |
start_workflow_run | Starts a new run of the specified workflow |
stop_crawler | If the specified crawler is running, stops the crawl |
stop_crawler_schedule | Sets the schedule state of the specified crawler to NOT_SCHEDULED, but does not stop the crawler if it is already running |
stop_trigger | Stops a specified trigger |
tag_resource | Adds tags to a resource |
untag_resource | Removes tags from a resource |
update_classifier | Modifies an existing classifier (a GrokClassifier, an XMLClassifier, a JsonClassifier, or a CsvClassifier, depending on which field is present) |
update_connection | Updates a connection definition in the Data Catalog |
update_crawler | Updates a crawler |
update_crawler_schedule | Updates the schedule of a crawler using a cron expression |
update_database | Updates an existing database definition in a Data Catalog |
update_dev_endpoint | Updates a specified DevEndpoint |
update_job | Updates an existing job definition |
update_partition | Updates a partition |
update_table | Updates a metadata table in the Data Catalog |
update_trigger | Updates a trigger definition |
update_user_defined_function | Updates an existing function definition in the Data Catalog |
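As a sketch of how the catalog operations above compose: create a database, define a crawler over an S3 path, and start it. The role ARN, bucket path, and resource names are placeholder assumptions, not values from this documentation.

# Placeholder names throughout; substitute your own role, bucket, and names.
svc <- glue()

svc$create_database(
  DatabaseInput = list(
    Name = "example_db"
  )
)

svc$create_crawler(
  Name = "example-crawler",
  Role = "arn:aws:iam::123456789012:role/GlueCrawlerRole",
  DatabaseName = "example_db",
  Targets = list(
    S3Targets = list(
      list(Path = "s3://example-bucket/data/")
    )
  )
)

svc$start_crawler(Name = "example-crawler")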
# NOT RUN {
svc <- glue()
# The database, table, and partition value below are placeholders.
svc$batch_create_partition(
  DatabaseName = "example_db",
  TableName = "example_table",
  PartitionInputList = list(
    list(Values = list("2021-01-01"))
  )
)
# }
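The created partition can then be read back with get_partition; the names below match the placeholder values used in the example above.

# NOT RUN {
svc$get_partition(
  DatabaseName = "example_db",
  TableName = "example_table",
  PartitionValues = list("2021-01-01")
)
# }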