Implements the operation: output = activation(dot(input, kernel) + bias), where activation is the element-wise activation function passed as the activation argument, kernel is a weights matrix created by the layer, and bias is a bias vector created by the layer (only applicable if use_bias is TRUE). Note: if the input to the layer has a rank greater than 2, then it is flattened prior to the initial dot product with kernel.
layer_dense(
object,
units,
activation = NULL,
use_bias = TRUE,
kernel_initializer = "glorot_uniform",
bias_initializer = "zeros",
kernel_regularizer = NULL,
bias_regularizer = NULL,
activity_regularizer = NULL,
kernel_constraint = NULL,
bias_constraint = NULL,
input_shape = NULL,
batch_input_shape = NULL,
batch_size = NULL,
dtype = NULL,
name = NULL,
trainable = NULL,
weights = NULL
)
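A minimal sketch of typical usage (an assumption for illustration: the keras R package is attached and a backend is installed; the layer sizes and activations are arbitrary):

library(keras)

# Two stacked dense layers; the first declares input_shape because it
# is the first layer in the model.
model <- keras_model_sequential() %>%
  layer_dense(units = 64, activation = "relu", input_shape = c(32)) %>%
  layer_dense(units = 10, activation = "softmax")

summary(model)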
object: Model or layer object.
units: Positive integer, dimensionality of the output space.
activation: Name of activation function to use. If you don't specify anything, no activation is applied (i.e., "linear" activation: a(x) = x).
use_bias: Whether the layer uses a bias vector.
kernel_initializer: Initializer for the kernel weights matrix.
bias_initializer: Initializer for the bias vector.
kernel_regularizer: Regularizer function applied to the kernel weights matrix (see the sketch following this argument list).
bias_regularizer: Regularizer function applied to the bias vector.
activity_regularizer: Regularizer function applied to the output of the layer (its "activation").
kernel_constraint: Constraint function applied to the kernel weights matrix.
bias_constraint: Constraint function applied to the bias vector.
input_shape: Dimensionality of the input (integer) not including the samples axis. This argument is required when using this layer as the first layer in a model.
batch_input_shape: Shape, including the batch size. For instance, batch_input_shape = c(10, 32) indicates that the expected input will be batches of 10 32-dimensional vectors; batch_input_shape = list(NULL, 32) indicates batches of an arbitrary number of 32-dimensional vectors.
batch_size: Fixed batch size for the layer.
dtype: The data type expected by the input, as a string (float32, float64, int32, ...).
name: An optional name string for the layer. Should be unique in a model (do not reuse the same name twice). It will be autogenerated if it isn't provided.
trainable: Whether the layer weights will be updated during training.
weights: Initial weights for the layer.
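As a sketch of how the initializer, regularizer, and constraint arguments are supplied (the helpers initializer_he_normal(), regularizer_l2(), and constraint_maxnorm() come from the keras R package; the penalty of 0.001 and max-norm of 2 are arbitrary values chosen for illustration):

library(keras)

model <- keras_model_sequential() %>%
  layer_dense(
    units = 64,
    activation = "relu",
    input_shape = c(20),
    kernel_initializer = initializer_he_normal(),          # replaces the default "glorot_uniform"
    kernel_regularizer = regularizer_l2(l = 0.001),         # L2 penalty on the kernel weights
    kernel_constraint = constraint_maxnorm(max_value = 2)   # cap the norm of the kernel weights
  ) %>%
  layer_dense(units = 1)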
Input shape: nD tensor with shape (batch_size, ..., input_dim). The most common situation would be a 2D input with shape (batch_size, input_dim).

Output shape: nD tensor with shape (batch_size, ..., units). For instance, for a 2D input with shape (batch_size, input_dim), the output would have shape (batch_size, units).
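A small sketch of the shape rule for the common 2D case (the input_dim of 16 and units of 4 are arbitrary; summary() reports the per-layer output shapes):

library(keras)

model <- keras_model_sequential() %>%
  layer_dense(units = 4, input_shape = c(16))

# Input shape (NULL, 16) -> output shape (NULL, 4),
# where NULL is the unspecified batch dimension.
summary(model)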
Other core layers: layer_activation(), layer_activity_regularization(), layer_attention(), layer_dense_features(), layer_dropout(), layer_flatten(), layer_input(), layer_lambda(), layer_masking(), layer_permute(), layer_repeat_vector(), layer_reshape()