layer_locally_connected_1d() works similarly to layer_conv_1d(), except that weights are unshared, that is, a different set of filters is applied at each different patch of the input.
layer_locally_connected_1d(
  object,
  filters,
  kernel_size,
  strides = 1L,
  padding = "valid",
  data_format = NULL,
  activation = NULL,
  use_bias = TRUE,
  kernel_initializer = "glorot_uniform",
  bias_initializer = "zeros",
  kernel_regularizer = NULL,
  bias_regularizer = NULL,
  activity_regularizer = NULL,
  kernel_constraint = NULL,
  bias_constraint = NULL,
  implementation = 1L,
  batch_size = NULL,
  name = NULL,
  trainable = NULL,
  weights = NULL
)
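For illustration, a hedged sketch of composing the layer via layer_input() and keras_model() (the 10-step, 32-feature input shape and the layer sizes are assumptions made for this example, not values prescribed by this page):

library(keras)

# 10-step sequences with 32 features per step (illustrative shape)
inputs <- layer_input(shape = c(10, 32))

# an unshared 1D convolution followed by a small dense head
outputs <- inputs %>%
  layer_locally_connected_1d(filters = 64, kernel_size = 3, activation = "relu") %>%
  layer_flatten() %>%
  layer_dense(units = 1)

model <- keras_model(inputs, outputs)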
object: What to compose the new Layer instance with. Typically a Sequential model or a Tensor (e.g., as returned by layer_input()). The return value depends on object. If object is:
- missing or NULL, the Layer instance is returned.
- a Sequential model, the model with an additional layer is returned.
- a Tensor, the output tensor from layer_instance(object) is returned.
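A minimal sketch of these composition modes (filter and kernel sizes are illustrative):

# called on a Tensor: the transformed output tensor is returned
x <- layer_input(shape = c(10, 32))
y <- layer_locally_connected_1d(x, filters = 16, kernel_size = 3)

# called with object missing: a standalone Layer instance is returned,
# which can be composed with a model or tensor later
lc <- layer_locally_connected_1d(filters = 16, kernel_size = 3)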
filters: Integer, the dimensionality of the output space (i.e. the number of output filters in the convolution).
kernel_size: An integer or list of a single integer, specifying the length of the 1D convolution window.
strides: An integer or list of a single integer, specifying the stride length of the convolution. Specifying any stride value != 1 is incompatible with specifying any dilation_rate value != 1.
padding: Currently only supports "valid" (case-insensitive). "same" may be supported in the future.
data_format: A string, one of channels_last (default) or channels_first. The ordering of the dimensions in the inputs. channels_last corresponds to inputs with shape (batch, length, channels) while channels_first corresponds to inputs with shape (batch, channels, length). It defaults to the image_data_format value found in your Keras config file at ~/.keras/keras.json. If you never set it, then it will be "channels_last".
activation: Activation function to use. If you don't specify anything, no activation is applied (i.e. "linear" activation: a(x) = x).
use_bias: Boolean, whether the layer uses a bias vector.
kernel_initializer: Initializer for the kernel weights matrix.
bias_initializer: Initializer for the bias vector.
kernel_regularizer: Regularizer function applied to the kernel weights matrix.
bias_regularizer: Regularizer function applied to the bias vector.
activity_regularizer: Regularizer function applied to the output of the layer (its "activation").
kernel_constraint: Constraint function applied to the kernel matrix.
bias_constraint: Constraint function applied to the bias vector.
implementation: Either 1, 2, or 3.
- 1 loops over input spatial locations to perform the forward pass. It is memory-efficient but performs a lot of (small) ops.
- 2 stores layer weights in a dense but sparsely-populated 2D matrix and implements the forward pass as a single matrix-multiply. It uses a lot of RAM but performs few (large) ops.
- 3 stores layer weights in a sparse tensor and implements the forward pass as a single sparse matrix-multiply.
How to choose: 1 for large, dense models; 2 for small models; 3 for large, sparse models, where "large" stands for large input/output activations (i.e. many filters, input_filters, large input_size, output_size) and "sparse" stands for few connections between inputs and outputs, i.e. a small ratio filters * input_filters * kernel_size / (input_size * strides), where inputs to and outputs of the layer are assumed to have shapes (input_size, input_filters) and (output_size, filters) respectively. It is recommended to benchmark each in the setting of interest to pick the most efficient one (in terms of speed and memory usage); a sketch follows the argument list below. Correct choice of implementation can lead to dramatic speed improvements (e.g. 50X), potentially at the expense of RAM. Also, only padding = "valid" is supported by implementation = 1.
batch_size: Fixed batch size for the layer.
name: An optional name string for the layer. Should be unique in a model (do not reuse the same name twice). It will be autogenerated if it isn't provided.
trainable: Whether the layer weights will be updated during training.
weights: Initial weights for the layer.
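As a hedged sketch of the benchmarking suggested for the implementation argument (shapes, sizes, and the batch of random data are illustrative assumptions; actual timings depend on your backend and hardware):

inp <- layer_input(shape = c(128, 16))

build_model <- function(impl) {
  out <- layer_locally_connected_1d(inp, filters = 32, kernel_size = 3,
                                    implementation = impl)
  keras_model(inp, out)
}

m1 <- build_model(1L)   # implementation 1: many small ops, low memory
m2 <- build_model(2L)   # implementation 2: one large matrix-multiply, more RAM

x <- array(rnorm(64 * 128 * 16), dim = c(64, 128, 16))
system.time(predict(m1, x))
system.time(predict(m2, x))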
Input shape: 3D tensor with shape (batch_size, steps, input_dim).
Output shape: 3D tensor with shape (batch_size, new_steps, filters). The steps value might have changed due to padding or strides.
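For example (illustrative shapes), with 10 input steps, kernel_size = 3, strides = 1 and padding = "valid", new_steps is 10 - 3 + 1 = 8:

inp <- layer_input(shape = c(10, 32))
out <- layer_locally_connected_1d(inp, filters = 4, kernel_size = 3)
model <- keras_model(inp, out)
summary(model)   # reported output shape should be (None, 8, 4)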
Other locally connected layers:
layer_locally_connected_2d()