Performs \(L_p\) normalization of inputs over the specified dimension.
nnf_normalize(input, p = 2, dim = 2, eps = 1e-12, out = NULL)
input: input tensor of any shape
p: (float) the exponent value in the norm formulation. Default: 2
dim: (int) the dimension to reduce. Default: 2
eps: (float) small value to avoid division by zero. Default: 1e-12
out: (Tensor, optional) the output tensor. If out is used, this operation won't be differentiable.
For a tensor input
of sizes \((n_0, ..., n_{dim}, ..., n_k)\), each
\(n_{dim}\)-element vector \(v\) along dimension dim
is transformed as
$$ v = \frac{v}{\max(\Vert v \Vert_p, \epsilon)}. $$
With the default arguments, it uses the Euclidean norm over vectors along dimension \(2\) for normalization.
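A minimal usage sketch, assuming the torch R package (which provides nnf_normalize) is attached; torch_randn, torch_sum, and torch_sqrt are used only to build and verify an illustrative tensor:

library(torch)

x <- torch_randn(3, 4)                 # a 3 x 4 tensor of random values
y <- nnf_normalize(x, p = 2, dim = 2)  # scale each 4-element row vector to unit L2 norm

torch_sqrt(torch_sum(y^2, dim = 2))    # row norms of the result, all close to 1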