Runs a prediction over a saved model file, a web API, or a graph object.
Usage
predict_savedmodel(instances, model, ...)
Arguments
instances
A list of prediction instances to be passed as input tensors
to the service. Even for single predictions, a list with one entry is expected.
model
The model as a local path, a REST URL, or a graph object.
A local path can be exported using export_savedmodel(), a REST URL
can be created using serve_savedmodel(), and a graph object loaded using
load_savedmodel().
A type parameter can be specified to explicitly choose the type of model
performing the prediction. Valid values are export, webapi, and
graph.
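As a sketch of the three model types, the calls below assume a model previously exported to a local directory, a REST endpoint started with serve_savedmodel(), and a graph loaded with load_savedmodel(); the paths and URL shown are hypothetical placeholders, not values defined by the package:

```r
library(tfdeploy)

# A single prediction instance; even one prediction is wrapped in a list
instances <- list(rep(9, 784))

# 1. Local SavedModel directory created with export_savedmodel()
#    (path is a placeholder)
predict_savedmodel(instances, "saved/my-model", type = "export")

# 2. REST endpoint started with serve_savedmodel()
#    (URL is a placeholder)
predict_savedmodel(instances, "http://localhost:8089/serving_default/predict",
                   type = "webapi")

# 3. Graph object loaded with load_savedmodel()
graph <- load_savedmodel("saved/my-model")
predict_savedmodel(instances, graph, type = "graph")
```

When type is omitted, the function infers the model type from the class of the model argument.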
Examples
# NOT RUN
# Perform a prediction based on an existing model
tfdeploy::predict_savedmodel(
  list(rep(9, 784)),
  system.file("models/tensorflow-mnist", package = "tfdeploy")
)