This is an implementation of grouped-query attention as introduced by
Ainslie et al., 2023. Here, num_key_value_heads denotes the number of groups:
setting num_key_value_heads to 1 is equivalent to multi-query attention, and
when num_key_value_heads is equal to num_query_heads it is equivalent to
multi-head attention.
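For example, with eight query heads, the special cases above can be expressed
directly through num_key_value_heads (a minimal sketch; head_dim and the head
counts are illustrative assumptions, not values prescribed by the layer):

# Multi-query attention: one shared key/value head
layer_group_query_attention(head_dim = 64, num_query_heads = 8, num_key_value_heads = 1)
# Grouped-query attention: 8 query heads shared across 2 key/value groups
layer_group_query_attention(head_dim = 64, num_query_heads = 8, num_key_value_heads = 2)
# Multi-head attention: one key/value head per query head
layer_group_query_attention(head_dim = 64, num_query_heads = 8, num_key_value_heads = 8)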
This layer first projects the query, key, and value tensors. Then, key and
value are repeated to match the number of heads of query. The query is then
scaled and dot-producted with the key tensors, and the results are softmaxed
to obtain attention probabilities. The value tensors are then interpolated by
these probabilities and concatenated back into a single tensor.
layer_group_query_attention(
object,
head_dim,
num_query_heads,
num_key_value_heads,
dropout = 0,
use_bias = TRUE,
flash_attention = NULL,
kernel_initializer = "glorot_uniform",
bias_initializer = "zeros",
kernel_regularizer = NULL,
bias_regularizer = NULL,
activity_regularizer = NULL,
kernel_constraint = NULL,
bias_constraint = NULL,
seed = NULL,
...
)
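A minimal end-to-end sketch (assuming keras3 is attached; the shapes and
hyperparameters below are illustrative assumptions, not requirements of the
layer):

library(keras3)

layer <- layer_group_query_attention(
  head_dim = 32,
  num_query_heads = 8,
  num_key_value_heads = 2
)

# (batch_dim, target_seq_len, feature_dim) and (batch_dim, source_seq_len, feature_dim)
query <- random_normal(c(2, 10, 64))
value <- random_normal(c(2, 16, 64))

# key defaults to value when omitted
output <- layer(query, value)
# output has shape (2, 10, 64): the query's feature_dim is preserved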
attention_output: Result of the computation, of shape
(batch_dim, target_seq_len, feature_dim), where target_seq_len is the target
sequence length and feature_dim is the last dimension of the query input.
attention_scores: (Optional) attention coefficients of shape
(batch_dim, num_query_heads, target_seq_len, source_seq_len).
object: Object to compose the layer with. A tensor, array, or sequential model.
head_dim: Size of each attention head.
num_query_heads: Number of query attention heads.
num_key_value_heads: Number of key and value attention heads.
dropout: Dropout probability.
use_bias: Boolean, whether the dense layers use bias vectors/matrices.
flash_attention: If NULL, the layer attempts to use flash attention for faster
and more memory-efficient attention computations when possible. This behavior
can be configured using config_enable_flash_attention() or
config_disable_flash_attention() (a short sketch follows this argument list).
kernel_initializer: Initializer for dense layer kernels.
bias_initializer: Initializer for dense layer biases.
kernel_regularizer: Regularizer for dense layer kernels.
bias_regularizer: Regularizer for dense layer biases.
activity_regularizer: Regularizer for dense layer activity.
kernel_constraint: Constraint for dense layer kernels.
bias_constraint: Constraint for dense layer biases.
seed: Optional integer to seed the dropout layer.
...: For forward/backward compatibility.
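If you prefer to control flash attention globally rather than through the
flash_attention argument, the config helpers referenced above can be used
(a brief sketch):

# Opt in to flash attention when the backend supports it ...
config_enable_flash_attention()
# ... or force the standard attention computation
config_disable_flash_attention()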
query: Query tensor of shape (batch_dim, target_seq_len, feature_dim), where
batch_dim is the batch size, target_seq_len is the length of the target
sequence, and feature_dim is the dimension of the features.
value: Value tensor of shape (batch_dim, source_seq_len, feature_dim), where
batch_dim is the batch size, source_seq_len is the length of the source
sequence, and feature_dim is the dimension of the features.
key: Optional key tensor of shape (batch_dim, source_seq_len, feature_dim).
If not given, value will be used for both key and value, which is the most
common case.
attention_mask: A boolean mask of shape
(batch_dim, target_seq_len, source_seq_len) that prevents attention to
certain positions. The boolean mask specifies which query elements can attend
to which key elements, where 1 indicates attention and 0 indicates no
attention. Broadcasting can happen for the missing batch dimensions and the
head dimension.
return_attention_scores: A boolean indicating whether the output should be
(attention_output, attention_scores) if TRUE, or attention_output if FALSE.
Defaults to FALSE.
training: Python boolean indicating whether the layer should behave in
training mode (adding dropout) or in inference mode (no dropout). Falls back
to the training mode of the parent layer/model, or FALSE (inference) if there
is no parent layer.
use_causal_mask: A boolean indicating whether to apply a causal mask to
prevent tokens from attending to future tokens (e.g., used in a decoder
Transformer).
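Continuing the sketch above (self-attention on query, with illustrative
shapes), the call arguments can be combined as follows; indexing the result
as a two-element list is an assumption about how the
(attention_output, attention_scores) pair surfaces in R:

res <- layer(
  query, query,
  return_attention_scores = TRUE,
  use_causal_mask = TRUE
)
attention_output <- res[[1]]  # (batch_dim, target_seq_len, feature_dim)
attention_scores <- res[[2]]  # (batch_dim, num_query_heads, target_seq_len, target_seq_len)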
Other attention layers:
layer_additive_attention()
layer_attention()
layer_multi_head_attention()
Other layers:
Layer()
layer_activation()
layer_activation_elu()
layer_activation_leaky_relu()
layer_activation_parametric_relu()
layer_activation_relu()
layer_activation_softmax()
layer_activity_regularization()
layer_add()
layer_additive_attention()
layer_alpha_dropout()
layer_attention()
layer_auto_contrast()
layer_average()
layer_average_pooling_1d()
layer_average_pooling_2d()
layer_average_pooling_3d()
layer_batch_normalization()
layer_bidirectional()
layer_category_encoding()
layer_center_crop()
layer_concatenate()
layer_conv_1d()
layer_conv_1d_transpose()
layer_conv_2d()
layer_conv_2d_transpose()
layer_conv_3d()
layer_conv_3d_transpose()
layer_conv_lstm_1d()
layer_conv_lstm_2d()
layer_conv_lstm_3d()
layer_cropping_1d()
layer_cropping_2d()
layer_cropping_3d()
layer_dense()
layer_depthwise_conv_1d()
layer_depthwise_conv_2d()
layer_discretization()
layer_dot()
layer_dropout()
layer_einsum_dense()
layer_embedding()
layer_equalization()
layer_feature_space()
layer_flatten()
layer_flax_module_wrapper()
layer_gaussian_dropout()
layer_gaussian_noise()
layer_global_average_pooling_1d()
layer_global_average_pooling_2d()
layer_global_average_pooling_3d()
layer_global_max_pooling_1d()
layer_global_max_pooling_2d()
layer_global_max_pooling_3d()
layer_group_normalization()
layer_gru()
layer_hashed_crossing()
layer_hashing()
layer_identity()
layer_integer_lookup()
layer_jax_model_wrapper()
layer_lambda()
layer_layer_normalization()
layer_lstm()
layer_masking()
layer_max_num_bounding_boxes()
layer_max_pooling_1d()
layer_max_pooling_2d()
layer_max_pooling_3d()
layer_maximum()
layer_mel_spectrogram()
layer_minimum()
layer_mix_up()
layer_multi_head_attention()
layer_multiply()
layer_normalization()
layer_permute()
layer_rand_augment()
layer_random_brightness()
layer_random_color_degeneration()
layer_random_color_jitter()
layer_random_contrast()
layer_random_crop()
layer_random_flip()
layer_random_grayscale()
layer_random_hue()
layer_random_posterization()
layer_random_rotation()
layer_random_saturation()
layer_random_sharpness()
layer_random_shear()
layer_random_translation()
layer_random_zoom()
layer_repeat_vector()
layer_rescaling()
layer_reshape()
layer_resizing()
layer_rnn()
layer_separable_conv_1d()
layer_separable_conv_2d()
layer_simple_rnn()
layer_solarization()
layer_spatial_dropout_1d()
layer_spatial_dropout_2d()
layer_spatial_dropout_3d()
layer_spectral_normalization()
layer_stft_spectrogram()
layer_string_lookup()
layer_subtract()
layer_text_vectorization()
layer_tfsm()
layer_time_distributed()
layer_torch_module_wrapper()
layer_unit_normalization()
layer_upsampling_1d()
layer_upsampling_2d()
layer_upsampling_3d()
layer_zero_padding_1d()
layer_zero_padding_2d()
layer_zero_padding_3d()
rnn_cell_gru()
rnn_cell_lstm()
rnn_cell_simple()
rnn_cells_stack()