Create a TF SavedModel artifact for inference (e.g. via TF-Serving).
Note: This can currently only be used with the TensorFlow or JAX backends.
This method lets you export a model to a lightweight SavedModel artifact that contains the model's forward pass only (its call() method) and can be served via e.g. TF-Serving. The forward pass is registered under the name serve() (see example below).

The original code of the model (including any custom layers you may have used) is no longer necessary to reload the artifact -- it is entirely standalone.
# S3 method for keras.src.models.model.Model
export_savedmodel(object, export_dir_base, ...)
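A minimal sketch of this call, assuming keras3 is attached and a TensorFlow backend is configured; the model architecture and output path are illustrative only:

library(keras3)

# Build a small model (built immediately because input_shape is supplied)
model <- keras_model_sequential(input_shape = c(8)) |>
  layer_dense(units = 16, activation = "relu") |>
  layer_dense(units = 1)

# Export the forward pass as a standalone SavedModel artifact
model |> tensorflow::export_savedmodel("path/to/location")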
This is called primarily for the side effect of exporting object. The first argument, object, is also returned, invisibly, to enable usage with the pipe.
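Because the model is returned invisibly, the export call can sit inside a pipe chain; a small sketch (assuming a keras model object already exists):

model |>
  tensorflow::export_savedmodel("path/to/location") |>
  summary()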
object: A keras model.

export_dir_base: string, file path where to save the artifact.

...: For forward/backward compatibility.
# Create the artifact
model |> tensorflow::export_savedmodel("path/to/location")

# Later, in a different process/environment...
library(tensorflow)
reloaded_artifact <- tf$saved_model$load("path/to/location")
predictions <- reloaded_artifact$serve(input_data)
# see tfdeploy::serve_savedmodel() for serving a model over a local web api.
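The comment above refers to the tfdeploy package; a hedged sketch of that workflow (assuming tfdeploy is installed and the artifact was exported as shown):

# Serve the exported artifact over a local web API
tfdeploy::serve_savedmodel("path/to/location")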
Other saving and loading functions:
layer_tfsm()
load_model()
load_model_weights()
register_keras_serializable()
save_model()
save_model_config()
save_model_weights()
with_custom_object_scope()