It follows: f(x) = x for x > theta, f(x) = 0 otherwise.
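As an illustration of the element-wise rule above, here is a minimal sketch in plain R (the helper name thresholded_relu is hypothetical, not part of the package):

thresholded_relu <- function(x, theta = 1) {
  # Pass values strictly above theta through unchanged; zero out the rest
  ifelse(x > theta, x, 0)
}
thresholded_relu(c(-2, 0.5, 1, 3))  # returns 0 0 0 3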
layer_activation_thresholded_relu(object, theta = 1,
  input_shape = NULL, batch_input_shape = NULL, batch_size = NULL,
  dtype = NULL, name = NULL, trainable = NULL, weights = NULL)
object: Model or layer object.

theta: float >= 0. Threshold location of activation.

input_shape: Input shape (list of integers, does not include the samples axis), required when using this layer as the first layer in a model.

batch_input_shape: Shape, including the batch size. For instance, batch_input_shape = c(10, 32) indicates that the expected input will be batches of 10 32-dimensional vectors; batch_input_shape = list(NULL, 32) indicates batches of an arbitrary number of 32-dimensional vectors.

batch_size: Fixed batch size for the layer.

dtype: The data type expected by the input, as a string ("float32", "float64", "int32", ...).

name: An optional name string for the layer. Should be unique in a model (do not reuse the same name twice); it will be autogenerated if not provided.

trainable: Whether the layer weights will be updated during training.

weights: Initial weights for the layer.
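A short usage sketch, assuming the keras R package is attached and a backend is available (the layer sizes here are illustrative only):

library(keras)

model <- keras_model_sequential() %>%
  layer_dense(units = 32, input_shape = c(784)) %>%
  # Zero out any activation that does not exceed theta
  layer_activation_thresholded_relu(theta = 1.0, name = "thresholded_relu") %>%
  layer_dense(units = 10, activation = "softmax")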
Reference: Konda, Memisevic and Krueger (2015), Zero-bias autoencoders and the benefits of co-adapting features (arXiv:1402.3337).
Other activation layers: layer_activation_elu, layer_activation_leaky_relu, layer_activation_parametric_relu, layer_activation_relu, layer_activation_selu, layer_activation_softmax, layer_activation