Depthwise convolution is a type of convolution in which each input channel is convolved with a different kernel (called a depthwise kernel). You can understand depthwise convolution as the first step in a depthwise separable convolution.

It is implemented via the following steps:

1. Split the input into individual channels.
2. Convolve each channel with an individual depthwise kernel with depth_multiplier output channels.
3. Concatenate the convolved outputs along the channels axis.

Unlike a regular 2D convolution, depthwise convolution does not mix information across different input channels.

The depth_multiplier argument determines how many filters are applied to one input channel. As such, it controls the number of output channels generated per input channel in the depthwise step.
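A minimal sketch of the channel arithmetic, using this package's random_uniform() and shape() helpers (the last comment assumes Keras' (kernel_height, kernel_width, channels, depth_multiplier) layout for the depthwise kernel, stored as the layer's first weight):

x <- random_uniform(c(4, 10, 10, 12))   # 12 input channels
dw <- layer_depthwise_conv_2d(kernel_size = 3, depth_multiplier = 2)
y <- dw(x)
shape(y)                 # channels axis: 12 * 2 = 24
## shape(4, 8, 8, 24)
shape(dw$weights[[1]])   # one 3 x 3 kernel pair per input channel
## shape(3, 3, 12, 2)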
layer_depthwise_conv_2d(
object,
kernel_size,
strides = list(1L, 1L),
padding = "valid",
depth_multiplier = 1L,
data_format = NULL,
dilation_rate = list(1L, 1L),
activation = NULL,
use_bias = TRUE,
depthwise_initializer = "glorot_uniform",
bias_initializer = "zeros",
depthwise_regularizer = NULL,
bias_regularizer = NULL,
activity_regularizer = NULL,
depthwise_constraint = NULL,
bias_constraint = NULL,
...
)
A 4D tensor representing activation(depthwise_conv2d(inputs, kernel) + bias).
object: Object to compose the layer with. A tensor, array, or sequential model.

kernel_size: int or list of 2 integers, specifying the size of the depthwise convolution window.

strides: int or list of 2 integers, specifying the stride length of the depthwise convolution. strides > 1 is incompatible with dilation_rate > 1.

padding: string, either "valid" or "same" (case-insensitive). "valid" means no padding. "same" results in padding evenly to the left/right or up/down of the input. When padding="same" and strides=1, the output has the same size as the input. (See the sketch after this argument list.)

depth_multiplier: The number of depthwise convolution output channels for each input channel. The total number of depthwise convolution output channels will be equal to input_channel * depth_multiplier.

data_format: string, either "channels_last" or "channels_first". The ordering of the dimensions in the inputs. "channels_last" corresponds to inputs with shape (batch, height, width, channels) while "channels_first" corresponds to inputs with shape (batch, channels, height, width). It defaults to the image_data_format value found in your Keras config file at ~/.keras/keras.json. If you never set it, then it will be "channels_last".

dilation_rate: int or list of 2 integers, specifying the dilation rate to use for dilated convolution.

activation: Activation function. If NULL, no activation is applied.

use_bias: bool, if TRUE, bias will be added to the output.

depthwise_initializer: Initializer for the convolution kernel. If NULL, the default initializer ("glorot_uniform") will be used.

bias_initializer: Initializer for the bias vector. If NULL, the default initializer ("zeros") will be used.

depthwise_regularizer: Optional regularizer for the convolution kernel.

bias_regularizer: Optional regularizer for the bias vector.

activity_regularizer: Optional regularizer function for the output.

depthwise_constraint: Optional projection function to be applied to the kernel after being updated by an Optimizer (e.g. used to implement norm constraints or value constraints for layer weights). The function must take as input the unprojected variable and must return the projected variable (which must have the same shape). Constraints are not safe to use when doing asynchronous distributed training.

bias_constraint: Optional projection function to be applied to the bias after being updated by an Optimizer.

...: For forward/backward compatibility.
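As a sketch of how padding and strides affect the output (shapes in the comments assume a 10 x 10 input and the default depth_multiplier = 1):

x <- random_uniform(c(1, 10, 10, 3))
shape(x |> layer_depthwise_conv_2d(kernel_size = 3, padding = "valid"))
## shape(1, 8, 8, 3)
shape(x |> layer_depthwise_conv_2d(kernel_size = 3, padding = "same"))
## shape(1, 10, 10, 3)
shape(x |> layer_depthwise_conv_2d(kernel_size = 3, strides = 2, padding = "same"))
## shape(1, 5, 5, 3)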
If data_format="channels_last"
:
A 4D tensor with shape: (batch_size, height, width, channels)
If data_format="channels_first"
:
A 4D tensor with shape: (batch_size, channels, height, width)
If data_format="channels_last"
:
A 4D tensor with shape:
(batch_size, new_height, new_width, channels * depth_multiplier)
If data_format="channels_first"
:
A 4D tensor with shape:
(batch_size, channels * depth_multiplier, new_height, new_width)
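A quick numeric check of the output-shape rule (a sketch: with the default padding="valid" and unit strides, new_height = height - kernel_size + 1, and likewise for width):

x <- random_uniform(c(2, 10, 10, 3))
y <- x |> layer_depthwise_conv_2d(kernel_size = 3, depth_multiplier = 4)
shape(y)
## shape(2, 8, 8, 12)
# spatial: 10 - 3 + 1 = 8; channels: 3 * 4 = 12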
ValueError: when both strides > 1 and dilation_rate > 1.
x <- random_uniform(c(4, 10, 10, 12))
y <- x |> layer_depthwise_conv_2d(kernel_size = 3, activation = 'relu')
shape(y)
## shape(4, 8, 8, 12)
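For contrast with a regular convolution (a sketch, not part of the original example): mapping the same 12 channels to 12 output channels with a regular 3 x 3 convolution takes 3 * 3 * 12 * 12 = 1296 kernel weights, while the depthwise layer needs only 3 * 3 * 12 = 108, since each input channel is convolved with its own kernel. The count_params() helper is assumed here to report the weight counts.

dw <- layer_depthwise_conv_2d(kernel_size = 3, use_bias = FALSE)
cv <- layer_conv_2d(filters = 12, kernel_size = 3, use_bias = FALSE)
invisible(dw(x))  # build both layers on the 12-channel input above
invisible(cv(x))
count_params(dw)
## 108
count_params(cv)
## 1296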
Other convolutional layers:
layer_conv_1d()
layer_conv_1d_transpose()
layer_conv_2d()
layer_conv_2d_transpose()
layer_conv_3d()
layer_conv_3d_transpose()
layer_depthwise_conv_1d()
layer_separable_conv_1d()
layer_separable_conv_2d()
Other layers:
Layer()
layer_activation()
layer_activation_elu()
layer_activation_leaky_relu()
layer_activation_parametric_relu()
layer_activation_relu()
layer_activation_softmax()
layer_activity_regularization()
layer_add()
layer_additive_attention()
layer_alpha_dropout()
layer_attention()
layer_auto_contrast()
layer_average()
layer_average_pooling_1d()
layer_average_pooling_2d()
layer_average_pooling_3d()
layer_batch_normalization()
layer_bidirectional()
layer_category_encoding()
layer_center_crop()
layer_concatenate()
layer_conv_1d()
layer_conv_1d_transpose()
layer_conv_2d()
layer_conv_2d_transpose()
layer_conv_3d()
layer_conv_3d_transpose()
layer_conv_lstm_1d()
layer_conv_lstm_2d()
layer_conv_lstm_3d()
layer_cropping_1d()
layer_cropping_2d()
layer_cropping_3d()
layer_dense()
layer_depthwise_conv_1d()
layer_discretization()
layer_dot()
layer_dropout()
layer_einsum_dense()
layer_embedding()
layer_equalization()
layer_feature_space()
layer_flatten()
layer_flax_module_wrapper()
layer_gaussian_dropout()
layer_gaussian_noise()
layer_global_average_pooling_1d()
layer_global_average_pooling_2d()
layer_global_average_pooling_3d()
layer_global_max_pooling_1d()
layer_global_max_pooling_2d()
layer_global_max_pooling_3d()
layer_group_normalization()
layer_group_query_attention()
layer_gru()
layer_hashed_crossing()
layer_hashing()
layer_identity()
layer_integer_lookup()
layer_jax_model_wrapper()
layer_lambda()
layer_layer_normalization()
layer_lstm()
layer_masking()
layer_max_num_bounding_boxes()
layer_max_pooling_1d()
layer_max_pooling_2d()
layer_max_pooling_3d()
layer_maximum()
layer_mel_spectrogram()
layer_minimum()
layer_mix_up()
layer_multi_head_attention()
layer_multiply()
layer_normalization()
layer_permute()
layer_rand_augment()
layer_random_brightness()
layer_random_color_degeneration()
layer_random_color_jitter()
layer_random_contrast()
layer_random_crop()
layer_random_flip()
layer_random_grayscale()
layer_random_hue()
layer_random_posterization()
layer_random_rotation()
layer_random_saturation()
layer_random_sharpness()
layer_random_shear()
layer_random_translation()
layer_random_zoom()
layer_repeat_vector()
layer_rescaling()
layer_reshape()
layer_resizing()
layer_rnn()
layer_separable_conv_1d()
layer_separable_conv_2d()
layer_simple_rnn()
layer_solarization()
layer_spatial_dropout_1d()
layer_spatial_dropout_2d()
layer_spatial_dropout_3d()
layer_spectral_normalization()
layer_stft_spectrogram()
layer_string_lookup()
layer_subtract()
layer_text_vectorization()
layer_tfsm()
layer_time_distributed()
layer_torch_module_wrapper()
layer_unit_normalization()
layer_upsampling_1d()
layer_upsampling_2d()
layer_upsampling_3d()
layer_zero_padding_1d()
layer_zero_padding_2d()
layer_zero_padding_3d()
rnn_cell_gru()
rnn_cell_lstm()
rnn_cell_simple()
rnn_cells_stack()