It follows: f(x) = x for x > theta, f(x) = 0 otherwise.
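As a plain R sketch of this formula (the function f and the theta value here are illustrative, not part of the package API):

f <- function(x, theta = 1) ifelse(x > theta, x, 0)
f(c(-1, 0.5, 1, 2))
#> [1] 0 0 0 2

Note that values at or below theta (including theta itself) are zeroed out; only values strictly greater than theta pass through unchanged.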
layer_activation_thresholded_relu(
  object,
  theta = 1,
  input_shape = NULL,
  batch_input_shape = NULL,
  batch_size = NULL,
  dtype = NULL,
  name = NULL,
  trainable = NULL,
  weights = NULL
)
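A minimal usage sketch, assuming the keras R package is loaded; the layer sizes here are arbitrary:

library(keras)

# Compose with a Sequential model; the activation thresholds the dense outputs.
model <- keras_model_sequential() %>%
  layer_dense(units = 32, input_shape = c(16)) %>%
  layer_activation_thresholded_relu(theta = 1.0)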
object: What to compose the new Layer instance with. Typically a Sequential model or a Tensor (e.g., as returned by layer_input()). The return value depends on object. If object is:
- missing or NULL, the Layer instance is returned.
- a Sequential model, the model with an additional layer is returned.
- a Tensor, the output tensor from layer_instance(object) is returned.
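For instance, composing with a Tensor returns the output tensor, which can then be passed to keras_model() (a sketch with arbitrary shapes):

inputs <- layer_input(shape = c(32))
# Returns the output tensor, per the Tensor case above.
outputs <- layer_activation_thresholded_relu(inputs, theta = 1.0)
model <- keras_model(inputs, outputs)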
theta: float >= 0. Threshold location of activation.

input_shape: Input shape (list of integers; does not include the samples axis). Required when using this layer as the first layer in a model.

batch_input_shape: Shapes, including the batch size. For instance, batch_input_shape = c(10, 32) indicates that the expected input will be batches of 10 32-dimensional vectors; batch_input_shape = list(NULL, 32) indicates batches of an arbitrary number of 32-dimensional vectors (see the sketch after this list).

batch_size: Fixed batch size for layer.

dtype: The data type expected by the input, as a string (float32, float64, int32, ...).

name: An optional name string for the layer. It should be unique in a model (do not reuse the same name twice) and is autogenerated if not provided.

trainable: Whether the layer weights will be updated during training.

weights: Initial weights for layer.
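As referenced above, a sketch of supplying the shape when this is the first layer in a model (the shape values are arbitrary):

# Fixed input shape, samples axis omitted:
model <- keras_model_sequential() %>%
  layer_activation_thresholded_relu(theta = 1.0, input_shape = c(32))

# Equivalent with an explicit (unconstrained) batch dimension:
model2 <- keras_model_sequential() %>%
  layer_activation_thresholded_relu(theta = 1.0, batch_input_shape = list(NULL, 32))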
Reference: Zero-bias autoencoders and the benefits of co-adapting features (Konda et al., 2015).
Other activation layers: layer_activation_elu(), layer_activation_leaky_relu(), layer_activation_parametric_relu(), layer_activation_relu(), layer_activation_selu(), layer_activation_softmax(), layer_activation()