oneflow.nn.functional.normalize

oneflow.nn.functional.normalize(input: Tensor, p: float = 2.0, dim: int = 0, epsilon: float = 1e-12) → Tensor

Performs \(L_p\) normalization of the input over the specified dimension.

For an input tensor of size \((n_0, ..., n_{dim}, ..., n_k)\), each \(n_{dim}\)-element vector \(v\) along dimension dim is transformed as:

\[v = \frac{v}{\max(\lVert v \rVert_p, \epsilon)}.\]

With the default arguments it uses the Euclidean norm for normalization, over vectors along dimension \(0\).

Note that when input.shape[dim] == 1, the gradient computed for the input tensor may differ across frameworks.
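The formula above can be sketched in plain Python without OneFlow; `lp_normalize` is a hypothetical helper written only to illustrate the \(v / \max(\lVert v \rVert_p, \epsilon)\) transform, not part of the OneFlow API:

```python
def lp_normalize(v, p=2.0, eps=1e-12):
    # ||v||_p = (sum |x|^p)^(1/p); eps bounds the denominator from below,
    # matching v / max(||v||_p, eps) in the formula above.
    norm = sum(abs(x) ** p for x in v) ** (1.0 / p)
    return [x / max(norm, eps) for x in v]

# Normalizing the columns of [[1, 2], [3, 4]] (dim=0 in the example below):
col0 = lp_normalize([1.0, 3.0])  # ~[0.3162, 0.9487]
col1 = lp_normalize([2.0, 4.0])  # ~[0.4472, 0.8944]
```

The same helper reproduces the row-wise (dim=1) results when applied to `[1.0, 2.0]` and `[3.0, 4.0]`.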

Parameters
  • input (oneflow.Tensor) – input tensor of any shape

  • p (float) – the exponent value in the norm formulation. Default: 2.0

  • dim (int) – the dimension along which to normalize. Default: 0

  • epsilon (float) – small value to avoid division by zero. Default: 1e-12

For example:

>>> import oneflow as flow
>>> x = flow.tensor([[1, 2], [3, 4]], dtype=flow.float32)
>>> out = flow.nn.functional.normalize(x, 2, 0)
>>> out
tensor([[0.3162, 0.4472],
        [0.9487, 0.8944]], dtype=oneflow.float32)
>>> out = flow.nn.functional.normalize(x, 2, 1)
>>> out
tensor([[0.4472, 0.8944],
        [0.6000, 0.8000]], dtype=oneflow.float32)