
Applies Batch Normalization over a 4D input (a mini-batch of 2D inputs with an additional channel dimension) as described in the paper Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift.
nn_batch_norm2d(
  num_features,
  eps = 1e-05,
  momentum = 0.1,
  affine = TRUE,
  track_running_stats = TRUE
)
num_features: the number of features C from an expected input of size (N, C, H, W).

eps: a value added to the denominator for numerical stability. Default: 1e-5.

momentum: the value used for the running_mean and running_var computation. Can be set to NULL for a cumulative moving average (i.e. simple average). Default: 0.1.

affine: a boolean value that, when set to TRUE, gives this module learnable affine parameters. Default: TRUE.

track_running_stats: a boolean value that, when set to TRUE, makes this module track the running mean and variance; when set to FALSE, the module does not track such statistics and uses batch statistics instead in both training and eval modes if the running mean and variance are NULL. Default: TRUE.
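To make the momentum semantics concrete, here is a minimal base-R sketch of the two update rules the momentum argument selects between. The helper names update_running and cumulative_mean are hypothetical, purely for illustration, and not part of the torch API; the exponential-moving-average form shown is the conventional running-estimate update for batch normalization.

```r
# Hypothetical sketch of the running-estimate update rule used when
# momentum is a number: running <- (1 - momentum) * running + momentum * batch
update_running <- function(running, batch_value, momentum = 0.1) {
  (1 - momentum) * running + momentum * batch_value
}

# When momentum is NULL, a cumulative (simple) average is kept instead
cumulative_mean <- function(values) cumsum(values) / seq_along(values)

# Feed three batch statistics through the exponential update
r <- 0
for (v in c(1, 2, 3)) r <- update_running(r, v)
r                       # exponential moving average after three batches
cumulative_mean(c(1, 2, 3))  # simple running averages: 1, 1.5, 2
```

Note how the exponential form weights recent batches more heavily, while the cumulative form weights all batches equally.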
Input: (N, C, H, W)
Output: (N, C, H, W) (same shape as input)
The mean and standard-deviation are calculated per-dimension over the mini-batches:

y = (x - E[x]) / sqrt(Var[x] + eps) * gamma + beta

gamma and beta are learnable parameter vectors of size C (where C is the input size). By default, the elements of gamma are set to 1 and the elements of beta are set to 0. The standard-deviation is calculated via the biased estimator, equivalent to torch_var(input, unbiased = FALSE).
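The per-channel normalization above can be checked in plain base R, without torch. This sketch assumes gamma = 1 and beta = 0 (the defaults) and lays the input out as an (N, C, H, W) array; the variable names are illustrative only.

```r
set.seed(1)
# Small 4D input: N = 2, C = 3, H = 4, W = 4
x <- array(rnorm(2 * 3 * 4 * 4), dim = c(2, 3, 4, 4))
eps <- 1e-5

# Normalize each channel c over the N, H, W dimensions
y <- x
for (c in seq_len(dim(x)[2])) {
  xc <- x[, c, , ]
  mu <- mean(xc)
  v  <- mean((xc - mu)^2)  # biased variance, like torch_var(unbiased = FALSE)
  y[, c, , ] <- (xc - mu) / sqrt(v + eps)
}

# Each normalized channel now has (approximately) zero mean and unit variance
mean(y[, 1, , ])
```

The variance of each normalized channel is slightly below 1 because of the eps term in the denominator.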
Also by default, during training this layer keeps running estimates of its
computed mean and variance, which are then used for normalization during
evaluation. The running estimates are kept with a default momentum
of 0.1.
If track_running_stats is set to FALSE, this layer then does not keep running estimates, and batch statistics are instead used during evaluation time as well.
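The train/eval behavior described above can be sketched as follows. This is not part of the documented example; it is a minimal illustration that assumes a working torch installation and uses only the module's standard $train() and $eval() mode switches.

```r
library(torch)

if (torch_is_installed()) {
  m <- nn_batch_norm2d(3)

  # Training mode: batch statistics are used and running estimates are updated
  m$train()
  invisible(m(torch_randn(8, 3, 5, 5)))

  # Eval mode: the running estimates collected above are used for normalization
  m$eval()
  out <- m(torch_randn(8, 3, 5, 5))

  # With track_running_stats = FALSE, batch statistics are used in both modes
  m2 <- nn_batch_norm2d(3, track_running_stats = FALSE)
  m2$eval()
  out2 <- m2(torch_randn(8, 3, 5, 5))
}
```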
if (torch_is_installed()) {
  # With Learnable Parameters
  m <- nn_batch_norm2d(100)
  # Without Learnable Parameters
  m <- nn_batch_norm2d(100, affine = FALSE)
  input <- torch_randn(20, 100, 35, 45)
  output <- m(input)
}