Neptune Data API
The Amazon Neptune data API provides SDK support for more than 40 of Neptune's data operations, including data loading, query execution, data inquiry, and machine learning. It supports the Gremlin and openCypher query languages, and is available in all SDK languages. It automatically signs API requests and greatly simplifies integrating Neptune into your applications.
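A minimal sketch of constructing a client and running a Gremlin query. This assumes the `paws` (or `paws.database`) package is installed and that the environment variable `NEPTUNE_ENDPOINT` (a name chosen here for illustration) holds your cluster's data endpoint URL; the Region shown is also an assumption.

```r
library(paws)  # or library(paws.database)

# e.g. "https://my-cluster.cluster-abc123.us-east-1.neptune.amazonaws.com:8182"
svc <- neptunedata(
  endpoint = Sys.getenv("NEPTUNE_ENDPOINT"),
  region = "us-east-1"  # use your cluster's Region
)

# Run a Gremlin traversal; the request is signed automatically.
resp <- svc$execute_gremlin_query(
  gremlinQuery = "g.V().limit(5).valueMap(true)"
)
str(resp$result)
```

Because the client signs requests with your AWS credentials, this only works from an environment that can reach the Neptune cluster's VPC endpoint.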
neptunedata(
config = list(),
credentials = list(),
endpoint = NULL,
region = NULL
)
A client for the service. You can call the service's operations using syntax like `svc$operation(...)`, where `svc` is the name you've assigned to the client. The available operations are listed in the Operations section.

Optional configuration of credentials, endpoint, and/or region.
credentials:
  creds:
    access_key_id: AWS access key ID
    secret_access_key: AWS secret access key
    session_token: AWS temporary session token
  profile: The name of a profile to use. If not given, then the default profile is used.
  anonymous: Set anonymous credentials.
endpoint: The complete URL to use for the constructed client.
region: The AWS Region used in instantiating the client.
close_connection: Immediately close all HTTP connections.
timeout: The time in seconds until a timeout exception is thrown when attempting to make a connection. The default is 60 seconds.
s3_force_path_style: Set this to true to force the request to use path-style addressing, i.e. http://s3.amazonaws.com/BUCKET/KEY.
sts_regional_endpoint: Set the STS regional endpoint resolver to regional or legacy. See https://docs.aws.amazon.com/sdkref/latest/guide/feature-sts-regionalized-endpoints.html
Optional credentials shorthand for the config parameter.
creds:
access_key_id: AWS access key ID
secret_access_key: AWS secret access key
session_token: AWS temporary session token
profile: The name of a profile to use. If not given, then the default profile is used.
anonymous: Set anonymous credentials.
Optional shorthand for the complete URL to use for the constructed client.
Optional shorthand for the AWS Region used in instantiating the client.
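The shorthand parameters make simple setups more compact than the full config list. A sketch using a named profile and Region (both values here are placeholders):

```r
if (FALSE) {
  # Pass credentials, endpoint, and region directly instead of via config.
  svc <- neptunedata(
    credentials = list(profile = "my-profile"),  # placeholder profile name
    region = "us-east-1"
  )
}
```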
svc <- neptunedata(
config = list(
credentials = list(
creds = list(
access_key_id = "string",
secret_access_key = "string",
session_token = "string"
),
profile = "string",
anonymous = "logical"
),
endpoint = "string",
region = "string",
close_connection = "logical",
timeout = "numeric",
s3_force_path_style = "logical",
sts_regional_endpoint = "string"
),
credentials = list(
creds = list(
access_key_id = "string",
secret_access_key = "string",
session_token = "string"
),
profile = "string",
anonymous = "logical"
),
endpoint = "string",
region = "string"
)
cancel_gremlin_query | Cancels a Gremlin query |
cancel_loader_job | Cancels a specified load job |
cancel_ml_data_processing_job | Cancels a Neptune ML data processing job |
cancel_ml_model_training_job | Cancels a Neptune ML model training job |
cancel_ml_model_transform_job | Cancels a specified model transform job |
cancel_open_cypher_query | Cancels a specified openCypher query |
create_ml_endpoint | Creates a new Neptune ML inference endpoint that lets you query one specific model that the model-training process constructed |
delete_ml_endpoint | Cancels the creation of a Neptune ML inference endpoint |
delete_propertygraph_statistics | Deletes statistics for Gremlin and openCypher (property graph) data |
delete_sparql_statistics | Deletes SPARQL statistics |
execute_fast_reset | The fast reset REST API lets you reset a Neptune graph quickly and easily, removing all of its data |
execute_gremlin_explain_query | Executes a Gremlin Explain query |
execute_gremlin_profile_query | Executes a Gremlin Profile query, which runs a specified traversal, collects various metrics about the run, and produces a profile report as output |
execute_gremlin_query | Executes a Gremlin query |
execute_open_cypher_explain_query | Executes an openCypher explain request |
execute_open_cypher_query | Executes an openCypher query |
get_engine_status | Retrieves the status of the graph database on the host |
get_gremlin_query_status | Gets the status of a specified Gremlin query |
get_loader_job_status | Gets status information about a specified load job |
get_ml_data_processing_job | Retrieves information about a specified data processing job |
get_ml_endpoint | Retrieves details about an inference endpoint |
get_ml_model_training_job | Retrieves information about a Neptune ML model training job |
get_ml_model_transform_job | Gets information about a specified model transform job |
get_open_cypher_query_status | Retrieves the status of a specified openCypher query |
get_propertygraph_statistics | Gets property graph statistics (Gremlin and openCypher) |
get_propertygraph_stream | Gets a stream for a property graph |
get_propertygraph_summary | Gets a graph summary for a property graph |
get_rdf_graph_summary | Gets a graph summary for an RDF graph |
get_sparql_statistics | Gets RDF statistics (SPARQL) |
get_sparql_stream | Gets a stream for an RDF graph |
list_gremlin_queries | Lists active Gremlin queries |
list_loader_jobs | Retrieves a list of the loadIds for all active loader jobs |
list_ml_data_processing_jobs | Returns a list of Neptune ML data processing jobs |
list_ml_endpoints | Lists existing inference endpoints |
list_ml_model_training_jobs | Lists Neptune ML model-training jobs |
list_ml_model_transform_jobs | Returns a list of model transform job IDs |
list_open_cypher_queries | Lists active openCypher queries |
manage_propertygraph_statistics | Manages the generation and use of property graph statistics |
manage_sparql_statistics | Manages the generation and use of RDF graph statistics |
start_loader_job | Starts a Neptune bulk loader job to load data from an Amazon S3 bucket into a Neptune DB instance |
start_ml_data_processing_job | Creates a new Neptune ML data processing job for processing the graph data exported from Neptune for training |
start_ml_model_training_job | Creates a new Neptune ML model training job |
start_ml_model_transform_job | Creates a new model transform job |
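A sketch of the bulk-load workflow using `start_loader_job` and `get_loader_job_status`. The S3 URI, IAM role ARN, and Region below are placeholders, and the assumption is that the returned payload carries the `loadId` used for polling; check the response structure against the Neptune loader documentation for your setup.

```r
if (FALSE) {
  # Kick off a bulk load of CSV data from S3 into the cluster.
  load <- svc$start_loader_job(
    source = "s3://my-bucket/neptune-data/",                       # placeholder
    format = "csv",
    iamRoleArn = "arn:aws:iam::123456789012:role/NeptuneLoadFromS3",  # placeholder
    region = "us-east-1"
  )

  # Poll the job with the loadId returned by start_loader_job.
  svc$get_loader_job_status(loadId = load$payload$loadId)
}
```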
if (FALSE) {
svc <- neptunedata()
svc$cancel_gremlin_query(
  queryId = "string"
)
}