Conv2d
Usage:

torch_conv2d(input, weight, bias = list(), stride = 1L, padding = 0L, dilation = 1L, groups = 1L)

Arguments:

input: input tensor of shape \((\mbox{minibatch} , \mbox{in\_channels} , iH , iW)\)
weight: filters of shape \((\mbox{out\_channels} , \frac{\mbox{in\_channels}}{\mbox{groups}} , kH , kW)\)
bias: optional bias tensor of shape \((\mbox{out\_channels})\). Default: NULL
stride: the stride of the convolving kernel. Can be a single number or a tuple (sH, sW). Default: 1
padding: implicit paddings on both sides of the input. Can be a single number or a tuple (padH, padW). Default: 0
dilation: the spacing between kernel elements. Can be a single number or a tuple (dH, dW). Default: 1
groups: split input into groups, \(\mbox{in\_channels}\) should be divisible by the number of groups. Default: 1
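As an illustration added here (not one of the package's own examples), the sketch below builds a grouped convolution by hand, assuming the torch R package is installed; the tensor sizes are arbitrary, and the point is that weight's second dimension must equal in_channels / groups.

if (torch_is_installed()) {
  # input: minibatch = 1, in_channels = 4, 9 x 9 spatial size
  input <- torch_randn(c(1, 4, 9, 9))
  # weight for groups = 2: out_channels = 6, in_channels / groups = 2, 3 x 3 kernel
  weight <- torch_randn(c(6, 2, 3, 3))
  out <- torch_conv2d(input, weight, stride = 2, padding = 1, groups = 2)
  out$shape  # 1 6 5 5
}

The 9 x 9 input becomes 5 x 5 because stride = 2 with a 3 x 3 kernel and padding = 1 keeps every other position along each spatial dimension.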
Applies a 2D convolution over an input image composed of several input planes.
See nn_conv2d() for details and output shape.
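For quick reference, writing \(oH\) for the output height, the output size follows the standard convolution arithmetic documented for nn_conv2d():

\(oH = \left\lfloor \frac{iH + 2 \times \mbox{padding} - \mbox{dilation} \times (kH - 1) - 1}{\mbox{stride}} + 1 \right\rfloor\)

and analogously for the output width using \(iW\), \(kW\), and the corresponding stride, padding, and dilation values.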
Examples:

if (torch_is_installed()) {
  # With square kernels and equal stride
  filters <- torch_randn(c(8, 4, 3, 3))
  inputs <- torch_randn(c(1, 4, 5, 5))
  nnf_conv2d(inputs, filters, padding = 1)
}
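A further sketch, added here rather than taken from the original examples, shows per-dimension stride and padding passed as length-2 vectors, as the (sH, sW) and (padH, padW) forms in the argument descriptions allow:

if (torch_is_installed()) {
  # non-square kernel: kH = 3, kW = 5
  filters <- torch_randn(c(8, 4, 3, 5))
  inputs <- torch_randn(c(1, 4, 16, 16))
  # stride 2 along height only; padding 1 along height, 2 along width
  out <- nnf_conv2d(inputs, filters, stride = c(2, 1), padding = c(1, 2))
  out$shape  # 1 8 8 16, matching the output-size formula above per dimension
}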