
torch (version 0.11.0)

nn_conv_transpose1d: ConvTranspose1D

Description

Applies a 1D transposed convolution operator over an input signal composed of several input planes.

Usage

nn_conv_transpose1d(
  in_channels,
  out_channels,
  kernel_size,
  stride = 1,
  padding = 0,
  output_padding = 0,
  groups = 1,
  bias = TRUE,
  dilation = 1,
  padding_mode = "zeros"
)

Arguments

in_channels

(int): Number of channels in the input image

out_channels

(int): Number of channels produced by the convolution

kernel_size

(int or tuple): Size of the convolving kernel

stride

(int or tuple, optional): Stride of the convolution. Default: 1

padding

(int or tuple, optional): Implicit zero-padding of dilation * (kernel_size - 1) - padding points will be added to both sides of the input. Default: 0

output_padding

(int or tuple, optional): Additional size added to one side of the output shape. Default: 0

groups

(int, optional): Number of blocked connections from input channels to output channels. Default: 1

bias

(bool, optional): If TRUE, adds a learnable bias to the output. Default: TRUE

dilation

(int or tuple, optional): Spacing between kernel elements. Default: 1

padding_mode

(string, optional): 'zeros', 'reflect', 'replicate' or 'circular'. Default: 'zeros'

Shape

  • Input: (N, C_in, L_in)

  • Output: (N, C_out, L_out) where L_out = (L_in - 1) × stride - 2 × padding + dilation × (kernel_size - 1) + output_padding + 1
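The output-length formula above can be checked with a small helper (a plain-Python sketch for illustration; the function name is mine, not part of any torch API):

```python
def conv_transpose1d_out_len(l_in, kernel_size, stride=1, padding=0,
                             output_padding=0, dilation=1):
    # L_out = (L_in - 1) * stride - 2 * padding
    #         + dilation * (kernel_size - 1) + output_padding + 1
    return ((l_in - 1) * stride - 2 * padding
            + dilation * (kernel_size - 1) + output_padding + 1)

# With the defaults, a transposed convolution grows the input by
# kernel_size - 1 points:
print(conv_transpose1d_out_len(2, kernel_size=2))            # 3
# With stride > 1 the output grows roughly stride-fold:
print(conv_transpose1d_out_len(10, kernel_size=3, stride=2)) # 21
```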

Attributes

  • weight (Tensor): the learnable weights of the module of shape (in_channels, out_channels / groups, kernel_size). The values of these weights are sampled from U(-√k, √k) where k = groups / (C_out * kernel_size)

  • bias (Tensor): the learnable bias of the module of shape (out_channels). If bias is TRUE, then the values of these weights are sampled from U(-√k, √k) where k = groups / (C_out * kernel_size)
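For a concrete feel for the initialization bound √k, it can be computed directly (plain Python; the numbers match the module in the Examples section, with 16 output channels and kernel size 2):

```python
import math

groups, c_out, kernel_size = 1, 16, 2
k = groups / (c_out * kernel_size)  # k = 1/32 = 0.03125
bound = math.sqrt(k)                # weights and bias ~ U(-bound, bound)
print(round(bound, 4))              # 0.1768
```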

Details

This module can be seen as the gradient of Conv1d with respect to its input. It is also known as a fractionally-strided convolution or a deconvolution (although it is not an actual deconvolution operation).

  • stride controls the stride for the cross-correlation.

  • padding controls the amount of implicit zero-padding on both sides for dilation * (kernel_size - 1) - padding number of points. See note below for details.

  • output_padding controls the additional size added to one side of the output shape. See note below for details.

  • dilation controls the spacing between the kernel points; also known as the à trous algorithm. It is harder to describe in words, but visualizations of dilated convolutions illustrate what dilation does.

  • groups controls the connections between inputs and outputs. in_channels and out_channels must both be divisible by groups. For example,

    • At groups=1, all inputs are convolved to all outputs.

    • At groups=2, the operation becomes equivalent to having two conv layers side by side, each seeing half the input channels, and producing half the output channels, and both subsequently concatenated.

    • At groups = in_channels, each input channel is convolved with its own set of filters (of size out_channels / in_channels).
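The effect of groups on the weight tensor can be illustrated by computing its shape (a plain-Python sketch; the helper name is hypothetical, not part of any torch API):

```python
def conv_transpose1d_weight_shape(in_channels, out_channels,
                                  kernel_size, groups=1):
    # The weight tensor has shape
    # (in_channels, out_channels / groups, kernel_size),
    # so increasing groups shrinks the per-group filter bank.
    assert in_channels % groups == 0 and out_channels % groups == 0
    return (in_channels, out_channels // groups, kernel_size)

print(conv_transpose1d_weight_shape(4, 8, 3))            # (4, 8, 3)
print(conv_transpose1d_weight_shape(4, 8, 3, groups=2))  # (4, 4, 3)
```

With groups = 2 each half of the input channels is connected only to half of the output channels, halving the parameter count.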

Examples

if (torch_is_installed()) {
  # transposed conv: 32 input channels, 16 output channels, kernel size 2
  m <- nn_conv_transpose1d(32, 16, 2)
  # batch of 10 signals, 32 channels, length 2
  input <- torch_randn(10, 32, 2)
  # output shape (10, 16, 3): L_out = (2 - 1)*1 - 2*0 + 1*(2 - 1) + 0 + 1 = 3
  output <- m(input)
}
