Alpha Dropout is a variant of dropout that keeps the mean and variance of its inputs at their original values, in order to preserve the self-normalizing property even after the dropout is applied.
layer_alpha_dropout(
object,
rate,
noise_shape = NULL,
seed = NULL,
input_shape = NULL,
batch_input_shape = NULL,
batch_size = NULL,
dtype = NULL,
name = NULL,
trainable = NULL,
weights = NULL
)
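For context, here is a minimal sketch of the layer in use, inserted between dense SELU layers as is typical for self-normalizing networks (the unit counts, dropout rate, and input size are illustrative assumptions, not defaults):

library(keras)

# Alpha dropout between SELU-activated dense layers; lecun_normal
# initialization is the usual pairing for self-normalizing networks.
model <- keras_model_sequential() %>%
  layer_dense(units = 64, activation = "selu",
              kernel_initializer = "lecun_normal",
              input_shape = 32) %>%
  layer_alpha_dropout(rate = 0.1) %>%
  layer_dense(units = 10, activation = "softmax")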
Arguments

object: What to call the new Layer instance with. Typically a keras Model, another Layer, or a tf.Tensor/KerasTensor. If object is missing, the Layer instance is returned; otherwise, layer(object) is returned.
rate: float, drop probability (as with layer_dropout()). The multiplicative noise will have standard deviation sqrt(rate / (1 - rate)); see the worked example after this argument list.
noise_shape: Shape of the random noise mask applied to the input. If NULL, the noise shape is inferred from the shape of the input.
seed: An integer to use as random seed.
input_shape: Dimensionality of the input (integer) not including the samples axis. This argument is required when using this layer as the first layer in a model.
batch_input_shape: Shape, including the batch size. For instance, batch_input_shape = c(10, 32) indicates that the expected input will be batches of 10 32-dimensional vectors; batch_input_shape = list(NULL, 32) indicates batches of an arbitrary number of 32-dimensional vectors.
batch_size: Fixed batch size for the layer.
dtype: The data type expected by the input, as a string ("float32", "float64", "int32", ...).
name: An optional name string for the layer. Should be unique within a model (do not reuse the same name twice); it will be autogenerated if not provided.
trainable: Whether the layer weights will be updated during training.
weights: Initial weights for the layer.
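As a quick sanity check on the rate argument above, the standard deviation of the multiplicative noise can be computed directly in R (a worked example; the 0.25 rate is an arbitrary choice):

rate <- 0.25
sqrt(rate / (1 - rate))
#> [1] 0.5773503   # std dev of the multiplicative noise for this rate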
Input shape: Arbitrary. Use the keyword argument input_shape (list of integers, does not include the samples axis) when using this layer as the first layer in a model.
Output shape: Same shape as input.
Alpha Dropout fits well with Scaled Exponential Linear Units (SELU) because it randomly sets activations to the negative saturation value of the activation rather than to zero.
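One way to observe the mean/variance preservation empirically is to apply a standalone layer instance to unit-variance data in training mode (a sketch; calling the returned layer instance with training = TRUE is an assumption about direct invocation, and exact values vary with the random seed):

library(keras)

x <- k_random_normal(shape = c(10000L, 16L))  # inputs with mean ~0, sd ~1
drop <- layer_alpha_dropout(rate = 0.2)       # object missing: returns the layer
y <- drop(x, training = TRUE)                 # training = TRUE enables the noise

k_mean(y)  # stays near 0
k_std(y)   # stays near 1, unlike standard dropout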
Other noise layers: layer_gaussian_dropout(), layer_gaussian_noise()