oneflow.nn.init

oneflow.nn.init.calculate_gain(nonlinearity, param=None)

Returns the recommended gain value for the given nonlinearity function.

The interface is consistent with PyTorch. The documentation is referenced from: https://pytorch.org/docs/1.10/nn.init.html.

Parameters
  • nonlinearity – the non-linear function (nn.functional name)

  • param – optional parameter for the non-linear function
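Examples

The example below assumes the same recommended gains as PyTorch's calculate_gain (e.g. \(\sqrt{2}\) for 'relu'):

>>> from oneflow import nn
>>> gain = nn.init.calculate_gain('leaky_relu', 0.2)  # sqrt(2 / (1 + 0.2 ** 2)) ≈ 1.386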
oneflow.nn.init.uniform_(tensor, a=0.0, b=1.0)

Fills the input Tensor with values drawn from the uniform distribution \(\mathcal{U}(a, b)\).

The interface is consistent with PyTorch. The documentation is referenced from: https://pytorch.org/docs/1.10/nn.init.html.

Parameters
  • tensor – an n-dimensional oneflow.Tensor

  • a – the lower bound of the uniform distribution

  • b – the upper bound of the uniform distribution

Examples

>>> import oneflow as flow
>>> from oneflow import nn
>>> w = flow.empty(3, 5)
>>> nn.init.uniform_(w)
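The bounds can be verified directly (a minimal sketch; Tensor.min, Tensor.max, and Tensor.item are assumed to behave as in PyTorch):

>>> w = flow.empty(3, 5)
>>> nn.init.uniform_(w, a=-1.0, b=1.0)
>>> assert w.min().item() >= -1.0 and w.max().item() <= 1.0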
oneflow.nn.init.normal_(tensor, mean=0.0, std=1.0)

Fills the input Tensor with values drawn from the normal distribution \(\mathcal{N}(\text{mean}, \text{std}^2)\).

The interface is consistent with PyTorch. The documentation is referenced from: https://pytorch.org/docs/1.10/nn.init.html.

Parameters
  • tensor – an n-dimensional oneflow.Tensor

  • mean – the mean of the normal distribution

  • std – the standard deviation of the normal distribution

Examples

>>> w = flow.empty(3, 5)
>>> nn.init.normal_(w)
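For a large tensor, the sample statistics should land close to the requested values (a loose sanity check, assuming Tensor.mean and Tensor.std mirror their PyTorch counterparts):

>>> w = flow.empty(10000)
>>> nn.init.normal_(w, mean=2.0, std=0.5)
>>> assert abs(w.mean().item() - 2.0) < 0.1
>>> assert abs(w.std().item() - 0.5) < 0.1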
oneflow.nn.init.constant_(tensor, val)

Fills the input Tensor with the value \(\text{val}\).

The interface is consistent with PyTorch. The documentation is referenced from: https://pytorch.org/docs/1.10/nn.init.html.

Parameters
  • tensor – an n-dimensional oneflow.Tensor

  • val – the value to fill the tensor with

Examples

>>> w = flow.empty(3, 5)
>>> nn.init.constant_(w, 0.3)
oneflow.nn.init.ones_(tensor)

Fills the input Tensor with the scalar value 1.

The interface is consistent with PyTorch. The documentation is referenced from: https://pytorch.org/docs/1.10/nn.init.html.

Parameters
  • tensor – an n-dimensional oneflow.Tensor

Examples

>>> w = flow.empty(3, 5)
>>> nn.init.ones_(w)
oneflow.nn.init.zeros_(tensor)

Fills the input Tensor with the scalar value 0.

The interface is consistent with PyTorch. The documentation is referenced from: https://pytorch.org/docs/1.10/nn.init.html.

Parameters
  • tensor – an n-dimensional oneflow.Tensor

Examples

>>> w = flow.empty(3, 5)
>>> nn.init.zeros_(w)
oneflow.nn.init.xavier_uniform_(tensor, gain=1.0, *, data_format='NCHW')

Fills the input Tensor with values according to the method described in Understanding the difficulty of training deep feedforward neural networks - Glorot, X. & Bengio, Y. (2010), using a uniform distribution. The resulting tensor will have values sampled from \(\mathcal{U}(-a, a)\) where

\[a = \text{gain} \times \sqrt{\frac{6}{\text{fan_in} + \text{fan_out}}}\]

The interface is consistent with PyTorch. The documentation is referenced from: https://pytorch.org/docs/1.10/nn.init.html.

Also known as Glorot initialization.

Parameters
  • tensor – an n-dimensional oneflow.Tensor

  • gain – an optional scaling factor

Examples

>>> w = flow.empty(3, 5)
>>> nn.init.xavier_uniform_(w, gain=nn.init.calculate_gain('relu'))
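A quick numerical check of the bound (a sketch assuming the PyTorch fan convention, under which a 2-D tensor of shape (3, 5) has fan_in = 5 and fan_out = 3):

>>> import math
>>> w = flow.empty(3, 5)
>>> nn.init.xavier_uniform_(w)  # gain = 1.0
>>> bound = math.sqrt(6.0 / (5 + 3))  # ≈ 0.866
>>> assert w.abs().max().item() <= bound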
oneflow.nn.init.xavier_normal_(tensor, gain=1.0, *, data_format='NCHW')

Fills the input Tensor with values according to the method described in Understanding the difficulty of training deep feedforward neural networks - Glorot, X. & Bengio, Y. (2010), using a normal distribution. The resulting tensor will have values sampled from \(\mathcal{N}(0, \text{std}^2)\) where

\[\text{std} = \text{gain} \times \sqrt{\frac{2}{\text{fan_in} + \text{fan_out}}}\]

The interface is consistent with PyTorch. The documentation is referenced from: https://pytorch.org/docs/1.10/nn.init.html.

Also known as Glorot initialization.

Parameters
  • tensor – an n-dimensional oneflow.Tensor

  • gain – an optional scaling factor

Examples

>>> w = flow.empty(3, 5)
>>> nn.init.xavier_normal_(w)
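The std implied by the formula can be computed by hand (illustrative only; fan_in = 5 and fan_out = 3 for a (3, 5) tensor):

>>> import math
>>> std = math.sqrt(2.0 / (5 + 3))  # = 0.5 with gain = 1.0
>>> # xavier_normal_ then samples each element from N(0, std ** 2)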
oneflow.nn.init.kaiming_uniform_(tensor, a=0, mode='fan_in', nonlinearity='leaky_relu', *, data_format='NCHW')

Fills the input Tensor with values according to the method described in Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification - He, K. et al. (2015), using a uniform distribution. The resulting tensor will have values sampled from \(\mathcal{U}(-\text{bound}, \text{bound})\) where

\[\text{bound} = \text{gain} \times \sqrt{\frac{3}{\text{fan_mode}}}\]

The interface is consistent with PyTorch. The documentation is referenced from: https://pytorch.org/docs/1.10/nn.init.html.

Also known as He initialization.

Parameters
  • tensor – an n-dimensional oneflow.Tensor

  • a – the negative slope of the rectifier used after this layer (only used with 'leaky_relu')

  • mode – either 'fan_in' (default) or 'fan_out'. Choosing 'fan_in' preserves the magnitude of the variance of the weights in the forward pass. Choosing 'fan_out' preserves the magnitudes in the backwards pass.

  • nonlinearity – the non-linear function (nn.functional name), recommended to use only with 'relu' or 'leaky_relu' (default).

Examples

>>> w = flow.empty(3, 5)
>>> nn.init.kaiming_uniform_(w, mode='fan_in', nonlinearity='relu')
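Combining the formula with the recommended gain for 'relu' (\(\sqrt{2}\)) gives the sampling bound (a worked sketch; fan_in = 5 for a (3, 5) tensor under mode='fan_in'):

>>> import math
>>> w = flow.empty(3, 5)
>>> nn.init.kaiming_uniform_(w, mode='fan_in', nonlinearity='relu')
>>> bound = math.sqrt(2.0) * math.sqrt(3.0 / 5)  # gain * sqrt(3 / fan_in) ≈ 1.095
>>> assert w.abs().max().item() <= bound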
oneflow.nn.init.kaiming_normal_(tensor, a=0, mode='fan_in', nonlinearity='leaky_relu', *, data_format='NCHW')

Fills the input Tensor with values according to the method described in Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification - He, K. et al. (2015), using a normal distribution. The resulting tensor will have values sampled from \(\mathcal{N}(0, \text{std}^2)\) where

\[\text{std} = \frac{\text{gain}}{\sqrt{\text{fan_mode}}}\]

The interface is consistent with PyTorch. The documentation is referenced from: https://pytorch.org/docs/1.10/nn.init.html.

Also known as He initialization.

Parameters
  • tensor – an n-dimensional oneflow.Tensor

  • a – the negative slope of the rectifier used after this layer (only used with 'leaky_relu')

  • mode – either 'fan_in' (default) or 'fan_out'. Choosing 'fan_in' preserves the magnitude of the variance of the weights in the forward pass. Choosing 'fan_out' preserves the magnitudes in the backwards pass.

  • nonlinearity – the non-linear function (nn.functional name), recommended to use only with 'relu' or 'leaky_relu' (default).

Examples

>>> w = flow.empty(3, 5)
>>> nn.init.kaiming_normal_(w, mode='fan_out', nonlinearity='relu')
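With mode='fan_out', the fan of a (3, 5) tensor is 3, so the formula gives (illustrative only):

>>> import math
>>> std = math.sqrt(2.0) / math.sqrt(3)  # gain('relu') / sqrt(fan_out) ≈ 0.816
>>> # kaiming_normal_ then samples each element from N(0, std ** 2)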
oneflow.nn.init.trunc_normal_(tensor, mean=0.0, std=1.0, a=-2.0, b=2.0)

Fills the input Tensor with values drawn from a truncated normal distribution. The values are effectively drawn from the normal distribution \(\mathcal{N}(\text{mean}, \text{std}^2)\), with values outside \([a, b]\) redrawn until they fall within the bounds.

The interface is consistent with PyTorch. The documentation is referenced from: https://pytorch.org/docs/1.10/nn.init.html.

Parameters
  • tensor – an n-dimensional oneflow.Tensor

  • mean – the mean of the normal distribution

  • std – the standard deviation of the normal distribution

  • a – the minimum cutoff value

  • b – the maximum cutoff value
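Examples

The default cutoffs can be checked directly (a minimal sketch, assuming PyTorch-consistent semantics):

>>> w = flow.empty(3, 5)
>>> nn.init.trunc_normal_(w)
>>> assert w.min().item() >= -2.0 and w.max().item() <= 2.0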
oneflow.nn.init.orthogonal_(tensor, gain=1.0)

Fills the input Tensor with a (semi) orthogonal matrix, as described in Exact solutions to the nonlinear dynamics of learning in deep linear neural networks - Saxe, A. et al. (2013). The input tensor must have at least 2 dimensions, and for tensors with more than 2 dimensions the trailing dimensions are flattened.

The interface is consistent with PyTorch. The documentation is referenced from: https://pytorch.org/docs/1.10/nn.init.html.

Parameters
  • tensor – an n-dimensional oneflow.Tensor, where \(n \geq 2\)

  • gain – optional scaling factor

Examples

>>> w = flow.empty(3, 5)
>>> nn.init.orthogonal_(w)
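The (semi) orthogonality can be verified numerically (a sketch assuming flow.matmul, flow.eye, and flow.allclose mirror their PyTorch counterparts; for a 3×5 matrix the rows are orthonormal, so \(W W^\top = I\)):

>>> w = flow.empty(3, 5)
>>> nn.init.orthogonal_(w)
>>> assert flow.allclose(flow.matmul(w, w.transpose(0, 1)), flow.eye(3), atol=1e-5)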