Performs a prediction using a SavedModel model already loaded using load_savedmodel().
# S3 method for graph_prediction
predict_savedmodel(instances, model, sess,
signature_name = "serving_default", ...)
instances: A list of prediction instances to be passed as input tensors to the service. Even for single predictions, a list with one entry is expected.
model: The model as a local path, a REST URL, or a graph object. A local path can be exported using export_savedmodel(), a REST URL can be created using serve_savedmodel(), and a graph object can be loaded using load_savedmodel(). A type parameter can be specified to explicitly choose the type of model performing the prediction. Valid values are export, webapi, and graph.
sess: The active TensorFlow session.
signature_name: The named entry point to use in the model for prediction.
...: See predict_savedmodel.export_prediction(), predict_savedmodel.graph_prediction(), and predict_savedmodel.webapi_prediction() for additional options.
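A minimal usage sketch for the graph method, assuming a SavedModel was previously exported to a directory named "saved-model/" (a hypothetical path) with a "serving_default" signature taking an input tensor named images:

```r
library(tfdeploy)
library(tensorflow)

# Open a session and load the exported SavedModel into it;
# load_savedmodel() returns a graph object usable as the model argument.
sess <- tf$Session()
graph <- load_savedmodel(sess, "saved-model/")

# Predict for a single instance; note the input is still wrapped in a
# one-entry list. The "images" name and input shape are assumptions
# about the exported signature, not part of the API.
predict_savedmodel(
  list(list(images = rep(0, 784))),
  model = graph,
  sess = sess,
  signature_name = "serving_default"
)
```

Passing a local export path or a REST URL as model instead dispatches to the export or webapi method, where sess is not required.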