oneflow.nn

Operators for neural networks

oneflow.nn.BCELoss(input: oneflow_api.BlobDesc, target: oneflow_api.BlobDesc, weight: remote_blob_util.BlobDef = None, reduction: str = 'mean', name: Optional[str] = None) → oneflow_api.BlobDesc

This operator computes the binary cross entropy loss.

The equation is:

if reduction = “none”:

\[out = -(Target_i*log(Input_i) + (1-Target_i)*log(1-Input_i))\]

if reduction = “mean”:

\[out = -\frac{1}{n}\sum_{i=1}^n(Target_i*log(Input_i) + (1-Target_i)*log(1-Input_i))\]

if reduction = “sum”:

\[out = -\sum_{i=1}^n(Target_i*log(Input_i) + (1-Target_i)*log(1-Input_i))\]

For example:

import oneflow as flow
import oneflow.typing as tp
import numpy as np


@flow.global_function()
def bce_loss_job(input: tp.Numpy.Placeholder(shape=(2, 3)),
                        target: tp.Numpy.Placeholder(shape=(2, 3)),
                        weight: tp.Numpy.Placeholder(shape=(2, 3)))->tp.Numpy:
    sigmoid_input = flow.math.sigmoid(input)
    return flow.nn.BCELoss(sigmoid_input, target, weight, reduction='mean')


np_input = np.array([[1.2, 0.2, -0.3],
                     [0.7, 0.6, -2]]).astype(np.float32)

np_target = np.array([[0, 1, 0],
                      [1, 0, 1]]).astype(np.float32)

np_weight = np.array([[2, 2, 2],
                      [2, 2, 2]]).astype(np.float32)

out = bce_loss_job(np_input, np_target, np_weight)

# output [2.0611262]
Parameters
  • input (oneflow_api.BlobDesc) – The input Blob.

  • target (oneflow_api.BlobDesc) – The target value.

  • weight (remote_blob_util, optional) – The manual rescaling weight applied to the loss. Defaults to None; when None, the weight is treated as 1.

  • reduction (str, optional) – The reduce type, it can be one of “none”, “mean”, “sum”. Defaults to “mean”.

  • name (Optional[str], optional) – The name for the operation. Defaults to None.

Attention

The input values must be in the range (0, 1); otherwise, the loss function may return nan.

Returns

The result Blob.

Return type

oneflow_api.BlobDesc

oneflow.nn.BCEWithLogitsLoss(input: oneflow_api.BlobDesc, target: oneflow_api.BlobDesc, weight: remote_blob_util.BlobDef = None, pos_weight: remote_blob_util.BlobDef = None, reduction: str = 'mean', name: Optional[str] = None) → oneflow_api.BlobDesc

This operator combines the Sigmoid and BCELoss together. For numerical stability, it applies a mathematical reformulation rather than composing a Sigmoid layer with BCELoss.

The equation is:

if reduction = “none”:

\[out = -weight*[Pos\_weight*y*log\sigma({x}) + (1-y)*log(1-\sigma(x))]\]

if reduction = “mean”:

\[out = -\frac{weight}{n}\sum_{i=1}^n[Pos\_weight*y*log\sigma({x}) + (1-y)*log(1-\sigma(x))]\]

if reduction = “sum”:

\[out = -weight*\sum_{i=1}^n[Pos\_weight*y*log\sigma({x}) + (1-y)*log(1-\sigma(x))]\]

For example:

import oneflow as flow
import oneflow.typing as tp
import numpy as np


@flow.global_function()
def bce_with_logits_loss_job(input: tp.Numpy.Placeholder(shape=(2, 3)),
                             target: tp.Numpy.Placeholder(shape=(2, 3)),
                             weight: tp.Numpy.Placeholder(shape=(2, 3)),
                             pos_weight: tp.Numpy.Placeholder(shape=(3, )))->tp.Numpy:
    return flow.nn.BCEWithLogitsLoss(input, target, weight, pos_weight, reduction='mean')


np_input = np.array([[1.2, 0.2, -0.3],
                     [0.7, 0.6, -2]]).astype(np.float32)

np_target = np.array([[0, 1, 0],
                      [1, 0, 1]]).astype(np.float32)

np_weight = np.array([[2, 2, 2],
                      [2, 2, 2]]).astype(np.float32)

np_pos_weight = np.array([1.2, 1.3, 1.4]).astype(np.float32)

out = bce_with_logits_loss_job(np_input, np_target, np_weight, np_pos_weight)

# output [2.4314096]
Parameters
  • input (oneflow_api.BlobDesc) – The input Tensor.

  • target (oneflow_api.BlobDesc) – The target Tensor.

  • weight (remote_blob_util, optional) – The manual rescaling weight to the loss. Defaults to None.

  • pos_weight (remote_blob_util, optional) – The manual rescaling weight to the positive examples. Defaults to None.

  • reduction (str, optional) – The reduce type, it can be one of “none”, “mean”, “sum”. Defaults to “mean”.

  • name (Optional[str], optional) – The name for the operation. Defaults to None.

Returns

The result Blob.

Return type

oneflow_api.BlobDesc

oneflow.nn.GroupNorm(x: oneflow_api.BlobDesc, num_groups: int = 32, eps: float = 1e-05, affine: bool = True, name: Optional[str] = None) → oneflow_api.BlobDesc

Applies Group Normalization over an N-D input (N >= 3).

Parameters
  • x (oneflow_api.BlobDesc) – input tensor with shape (N, C, ∗), where C means the number of channels.

  • num_groups (int) – The number of groups to separate the channels into. Default: 32.

  • eps (float) – A value added to the denominator for numerical stability. Default: 1e-5.

  • affine (bool) – A boolean value that when set to True, this module has learnable affine parameters, initialized the same way as done for batch normalization. Default: True.

  • name (Optional[str], optional) – Name of this op.

Returns

The normalized input tensor.

Return type

oneflow_api.BlobDesc

For example:

import oneflow as flow
import numpy as np
import oneflow.typing as tp

@flow.global_function()
def group_norm_Job(x: tp.Numpy.Placeholder((4, 4, 32, 32))
) -> tp.Numpy:
    group_norm = flow.nn.GroupNorm(
        x,
        num_groups=2,
        eps=1e-5,
        affine=True,
    )
    return group_norm

x = np.random.random(size=(4, 4, 32, 32)).astype(np.float32)
out = group_norm_Job(x)
oneflow.nn.InstanceNorm1d(x: oneflow_api.BlobDesc, eps: float = 1e-05, affine: bool = True, name: Optional[str] = None) → oneflow_api.BlobDesc

Applies Instance Normalization over a 3D input.

Parameters
  • x (oneflow_api.BlobDesc) – 3D input tensor with NCL data layout.

  • eps (float) – A value added to the denominator for numerical stability. Default: 1e-5.

  • affine (bool) – A boolean value that when set to True, this module has learnable affine parameters, initialized the same way as done for batch normalization. Default: True.

  • name (Optional[str], optional) – Name of this op.

Returns

The normalized input tensor.

Return type

oneflow_api.BlobDesc

For example:

import oneflow as flow
import numpy as np
import oneflow.typing as tp

@flow.global_function()
def instance_norm_Job(x: tp.Numpy.Placeholder((4, 2, 32))
) -> tp.Numpy:
    instance_norm = flow.nn.InstanceNorm1d(
        x,
        eps=1e-5,
        affine=True,
    )
    return instance_norm

x = np.random.random(size=(4, 2, 32)).astype(np.float32)
out = instance_norm_Job(x)
oneflow.nn.InstanceNorm2d(x: oneflow_api.BlobDesc, eps: float = 1e-05, affine: bool = True, name: Optional[str] = None) → oneflow_api.BlobDesc

Applies Instance Normalization over a 4D input.

Parameters
  • x (oneflow_api.BlobDesc) – 4D input tensor with NCHW data layout.

  • eps (float) – A value added to the denominator for numerical stability. Default: 1e-5.

  • affine (bool) – A boolean value that when set to True, this module has learnable affine parameters, initialized the same way as done for batch normalization. Default: True.

  • name (Optional[str], optional) – Name of this op.

Returns

The normalized input tensor.

Return type

oneflow_api.BlobDesc

For example:

import oneflow as flow
import numpy as np
import oneflow.typing as tp

@flow.global_function()
def instance_norm_Job(x: tp.Numpy.Placeholder((4, 2, 32, 32))
) -> tp.Numpy:
    instance_norm = flow.nn.InstanceNorm2d(
        x,
        eps=1e-5,
        affine=True,
    )
    return instance_norm

x = np.random.random(size=(4, 2, 32, 32)).astype(np.float32)
out = instance_norm_Job(x)
oneflow.nn.InstanceNorm3d(x: oneflow_api.BlobDesc, eps: float = 1e-05, affine: bool = True, name: Optional[str] = None) → oneflow_api.BlobDesc

Applies Instance Normalization over a 5D input.

Parameters
  • x (oneflow_api.BlobDesc) – 5D input tensor with NCDHW data layout.

  • eps (float) – A value added to the denominator for numerical stability. Default: 1e-5.

  • affine (bool) – A boolean value that when set to True, this module has learnable affine parameters, initialized the same way as done for batch normalization. Default: True.

  • name (Optional[str], optional) – Name of this op.

Returns

The normalized input tensor.

Return type

oneflow_api.BlobDesc

For example:

import oneflow as flow
import numpy as np
import oneflow.typing as tp

@flow.global_function()
def instance_norm_Job(x: tp.Numpy.Placeholder((4, 2, 32, 32, 32))
) -> tp.Numpy:
    instance_norm = flow.nn.InstanceNorm3d(
        x,
        eps=1e-5,
        affine=True,
    )
    return instance_norm

x = np.random.random(size=(4, 2, 32, 32, 32)).astype(np.float32)
out = instance_norm_Job(x)
oneflow.nn.KLDivLoss(input: oneflow_api.BlobDesc, target: oneflow_api.BlobDesc, log_target: bool = False, reduction: str = 'mean', name: Optional[str] = None) → oneflow_api.BlobDesc

This operator computes the Kullback-Leibler divergence loss.

The equation is:

If \(log\_target = True\):

\[loss = e^{target}*(target-input)\]

If \(log\_target = False\):

\[loss = target*(log(target)-input)\]

Attention

When log_target = False, the elements of the loss are set to 0 where the corresponding elements of target are less than 0.

For example:

import oneflow as flow
import oneflow.typing as tp
import numpy as np


@flow.global_function()
def of_kldivloss(input: tp.Numpy.Placeholder(shape=(3, 3)),
                target: tp.Numpy.Placeholder(shape=(3, 3))) -> tp.Numpy:
    return flow.nn.KLDivLoss(input, target, log_target=False, reduction='none')


input = np.array([[0.1, 0.2, 0.7],
            [0.8, 0.9, 0.5],
            [0.5, 0.15, 0.35]]).astype(np.float32)
target = np.array([[0.3, 0.1, 0.6],
            [-0.3, 0.4, 0.4],
            [0.35, 0.25, 0.4]]).astype(np.float32)

out = of_kldivloss(input, target)

# output [[-0.39119187 -0.25025854 -0.7264954 ]
#         [ 0.         -0.72651625 -0.56651634]
#         [-0.54243773 -0.3840736  -0.5065163 ]]
Parameters
  • input (oneflow_api.BlobDesc) – The input tensor.

  • target (oneflow_api.BlobDesc) – The target tensor.

  • log_target (bool, optional) – Whether the target is passed in the log space. Defaults to False.

  • reduction (str, optional) – The reduce type, it can be one of “none”, “mean”, “sum”. Defaults to “mean”.

  • name (Optional[str], optional) – The name for the operation. Defaults to None.

Returns

The result tensor.

Return type

oneflow_api.BlobDesc

oneflow.nn.L1Loss(input: oneflow_api.BlobDesc, target: oneflow_api.BlobDesc, reduction: str = 'mean', name: Optional[str] = None) → oneflow_api.BlobDesc

This operator computes the L1 Loss between each element in input and target.

The equation is:

if reduction = “none”:

\[output = |Target - Input|\]

if reduction = “mean”:

\[output = \frac{1}{n}\sum_{i=1}^n|Target_i - Input_i|\]

if reduction = “sum”:

\[output = \sum_{i=1}^n|Target_i - Input_i|\]
Parameters
  • input (oneflow_api.BlobDesc) – The input Blob.

  • target (oneflow_api.BlobDesc) – The target value.

  • reduction (str) – The reduce type, it can be one of “none”, “mean”, “sum”. Defaults to “mean”.

  • name (Optional[str], optional) – The name for the operation. Defaults to None.

Returns

The result Blob.

Return type

oneflow_api.BlobDesc

For example:

Example 1:

import oneflow as flow
import oneflow.typing as tp
import numpy as np


@flow.global_function()
def l1_job(x: tp.Numpy.Placeholder(shape=(3, 3)),
        y: tp.Numpy.Placeholder(shape=(3, 3))) -> tp.Numpy:
    out = flow.nn.L1Loss(x, y, reduction="mean", name="l1")

    return out


input = np.array([[1, 1, 1], [2, 2, 2], [7, 7, 7]]).astype(np.float32)
target = np.array([[4, 4, 4], [4, 4, 4], [4, 4, 4]]).astype(np.float32)

out = l1_job(input, target)

# output [2.6666667]

Example 2:

import oneflow as flow
import oneflow.typing as tp
import numpy as np


@flow.global_function()
def l1_job(x: tp.Numpy.Placeholder(shape=(3, 3)),
        y: tp.Numpy.Placeholder(shape=(3, 3))) -> tp.Numpy:
    out = flow.nn.L1Loss(x, y, reduction="sum", name="l1")

    return out


input = np.array([[1, 1, 1], [2, 2, 2], [7, 7, 7]]).astype(np.float32)
target = np.array([[4, 4, 4], [4, 4, 4], [4, 4, 4]]).astype(np.float32)

out = l1_job(input, target)

# output [24.]
oneflow.nn.MSELoss(input: oneflow_api.BlobDesc, target: oneflow_api.BlobDesc, reduction: str = 'mean', name: Optional[str] = None) → oneflow_api.BlobDesc

This operator computes the mean squared error between each element in input and target.

The equation is:

if reduction = “none”:

\[out = (Target_i - Input_i)^2\]

if reduction = “mean”:

\[out = \frac{1}{n}\sum_{i=1}^n(Target_i - Input_i)^2\]

if reduction = “sum”:

\[out = \sum_{i=1}^n(Target_i - Input_i)^2\]
Parameters
  • input (oneflow_api.BlobDesc) – The input Blob.

  • target (oneflow_api.BlobDesc) – The target value.

  • reduction (str) – The reduce type, it can be one of “none”, “mean”, “sum”. Defaults to “mean”.

  • name (Optional[str], optional) – The name for the operation. Defaults to None.

Returns

The result Blob.

Return type

oneflow_api.BlobDesc

For example:

Example 1:

import oneflow as flow
import oneflow.typing as tp
import numpy as np


@flow.global_function()
def mseloss_job(input: tp.Numpy.Placeholder(shape=(3, 3)),
                target: tp.Numpy.Placeholder(shape=(3, 3)))->tp.Numpy:
    out = flow.nn.MSELoss(input, target, reduction="mean")
    return out

input = np.array([[1, 1, 1], [2, 2, 2], [7, 7, 7]]).astype(np.float32)
target = np.array([[4, 4, 4], [4, 4, 4], [4, 4, 4]]).astype(np.float32)

out = mseloss_job(input, target)

# output [7.3333335]

Example 2:

import oneflow as flow
import oneflow.typing as tp
import numpy as np


@flow.global_function()
def mseloss_job(input: tp.Numpy.Placeholder(shape=(3, 3)),
                target: tp.Numpy.Placeholder(shape=(3, 3)))->tp.Numpy:
    out = flow.nn.MSELoss(input, target, reduction="sum")
    return out

input = np.array([[1, 1, 1], [2, 2, 2], [7, 7, 7]]).astype(np.float32)
target = np.array([[4, 4, 4], [4, 4, 4], [4, 4, 4]]).astype(np.float32)

out = mseloss_job(input, target)

# output [66.]
oneflow.nn.MarginRankingLoss(input1: oneflow_api.BlobDesc, input2: oneflow_api.BlobDesc, target: oneflow_api.BlobDesc, margin: float = 0.0, reduction: str = 'mean', name: Optional[str] = None) → oneflow_api.BlobDesc

This operator computes the Margin Ranking loss.

The equation is:

if reduction = “none”:

\[out = \max\ (0, -y*(x_1-x_2)+margin)\]

if reduction = “mean”:

\[out = \frac{1}{n}\sum_{i=1}^n\max\ (0, -y*(x_1-x_2)+margin)\]

if reduction = “sum”:

\[out = \sum_{i=1}^n\max\ (0, -y*(x_1-x_2)+margin)\]

For example:

import oneflow as flow
import oneflow.typing as tp
import numpy as np


@flow.global_function()
def margin_ranking_loss_job(input1: tp.Numpy.Placeholder(shape=(3, 3)),
                            input2: tp.Numpy.Placeholder(shape=(3, 3)),
                            target: tp.Numpy.Placeholder(shape=(3, 3)))->tp.Numpy:
    out = flow.nn.MarginRankingLoss(input1, input2, target, margin=1.0)
    return out

np_input1 = np.array([[1, 2, 3],
                    [4, 5, 6],
                    [7, 8, 9]]).astype(np.float32)
np_input2 = np.array([[2, 2, 2],
                    [2, 2, 2],
                    [2, 2, 2]]).astype(np.float32)
np_target = np.array([[3, 3, 3],
                    [3, 3, 3],
                    [3, 3, 3]]).astype(np.float32)

out = margin_ranking_loss_job(np_input1, np_input2, np_target)

# output [0.5555556]
Parameters
  • input1 (oneflow_api.BlobDesc) – The ranking score of input1 Blob.

  • input2 (oneflow_api.BlobDesc) – The ranking score of input2 Blob.

  • target (oneflow_api.BlobDesc) – The target Blob.

  • margin (float) – The margin value. Defaults to 0.0.

  • reduction (str, optional) – The reduce type, it can be one of “none”, “mean”, “sum”. Defaults to “mean”.

  • name (Optional[str], optional) – The name for the operation. Defaults to None.

Returns

The result Blob.

Return type

oneflow_api.BlobDesc

class oneflow.nn.Module(name=None)
__init__(name=None)

Initialize self. See help(type(self)) for accurate signature.

property call_seq_no
forward(*args)
property module_name
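
For example (a minimal usage sketch, not taken from the original docs: it subclasses Module, overrides forward, and calls forward directly inside a job function; the ReLU body is purely illustrative):

import oneflow as flow
import oneflow.typing as tp
import numpy as np


class MyRelu(flow.nn.Module):
    def __init__(self, name=None):
        super().__init__(name=name)

    def forward(self, x):
        # The body of forward is user-defined; ReLU is used here only as an illustration.
        return flow.nn.relu(x)


@flow.global_function()
def module_job(x: tp.Numpy.Placeholder(shape=(3, )))->tp.Numpy:
    return MyRelu().forward(x)


x = np.array([-1.0, 0.0, 2.0]).astype(np.float32)
out = module_job(x)

# expected output [0. 0. 2.]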
oneflow.nn.PixelShuffle(input: oneflow_api.BlobDesc, upscale_factor: int, name: Optional[str] = None) → oneflow_api.BlobDesc

This operator performs the pixel shuffle: an input of shape (B, C*r*r, H, W) is rearranged to shape (B, C, H*r, W*r). It can be used to implement sub-pixel convolution.

For example:

import oneflow as flow
import oneflow.typing as tp
import numpy as np


@flow.global_function()
def PixelShuffleJob(input: tp.Numpy.Placeholder(shape=(3, 4, 2, 2), dtype=flow.float32))->tp.Numpy:
    out = flow.nn.PixelShuffle(input, upscale_factor=2)

    return out

input = np.random.uniform(size=(3, 4, 2, 2)).astype(np.float32)
out = PixelShuffleJob(input)

# out.shape (3, 1, 4, 4)
Parameters
  • input (oneflow_api.BlobDesc) – The input Blob.

  • upscale_factor (int) – The upscale factor.

  • name (Optional[str], optional) – The name for the operation. Defaults to None.

Returns

The result Blob.

Return type

oneflow_api.BlobDesc

oneflow.nn.PixelShufflev2(input: oneflow_api.BlobDesc, h_upscale_factor: int, w_upscale_factor: int, name: Optional[str] = None) → oneflow_api.BlobDesc

This operator is similar to oneflow.nn.PixelShuffle. The difference is that in oneflow.nn.PixelShuffle, the upscale factors of height and width are the same, while in oneflow.nn.PixelShufflev2 you can set different upscale factors for height and width.

Parameters
  • input (oneflow_api.BlobDesc) – The input Blob.

  • h_upscale_factor (int) – The upscale factor of height.

  • w_upscale_factor (int) – The upscale factor of width.

  • name (Optional[str], optional) – The name for the operation. Defaults to None.

For example:

import oneflow as flow
import oneflow.typing as tp
import numpy as np


@flow.global_function()
def PixelShufflev2Job(input: tp.Numpy.Placeholder(shape=(3, 16, 2, 4), dtype=flow.float32))->tp.Numpy:
    out = flow.nn.PixelShufflev2(input, h_upscale_factor=2, w_upscale_factor=4)

    return out

input = np.random.uniform(size=(3, 16, 2, 4)).astype(np.float32)
out = PixelShufflev2Job(input)

# out.shape (3, 2, 4, 16)
Returns

The result Blob.

Return type

oneflow_api.BlobDesc

oneflow.nn.TripletMarginLoss(anchor: oneflow_api.BlobDesc, positive: oneflow_api.BlobDesc, negative: oneflow_api.BlobDesc, margin: float = 1.0, p: float = 2.0, eps: float = 1e-06, swap: bool = False, reduction: str = 'mean', name: Optional[str] = None) → oneflow_api.BlobDesc

This operator computes the Triplet Margin Loss.

The equation is:

if reduction = “none”:

\[output = \max\{\left\lVert a_i - p_i \right\rVert_p - \left\lVert a_i - n_i \right\rVert_p + {\rm margin}, 0\}\]

if reduction = “mean”:

\[output = \frac{1}{n}\sum_{i=1}^n\max\{\left\lVert a_i - p_i \right\rVert_p - \left\lVert a_i - n_i \right\rVert_p + {\rm margin}, 0\}\]

if reduction = “sum”:

\[output = \sum_{i=1}^n\max\{\left\lVert a_i - p_i \right\rVert_p - \left\lVert a_i - n_i \right\rVert_p + {\rm margin}, 0\}\]

For example:

import oneflow as flow
import oneflow.typing as tp
import numpy as np


@flow.global_function()
def triplet_loss_job(anchor: tp.Numpy.Placeholder(shape=(3, 3)),
                    pos: tp.Numpy.Placeholder(shape=(3, 3)),
                    neg: tp.Numpy.Placeholder(shape=(3, 3)))->tp.Numpy:
    out = flow.nn.TripletMarginLoss(anchor, pos, neg, margin=1.0, p=2.0)
    return out

np_anchor = np.array([[1, 2, 3],
                    [4, 5, 6],
                    [7, 8, 9]]).astype(np.float32)
np_pos = np.array([[2, 2, 2],
                [2, 2, 2],
                [2, 2, 2]]).astype(np.float32)
np_neg = np.array([[3, 3, 3],
                [3, 3, 3],
                [3, 3, 3]]).astype(np.float32)

out = triplet_loss_job(np_anchor, np_pos, np_neg)

# output [1.8449262]
Parameters
  • anchor (oneflow_api.BlobDesc) – The anchor Blob.

  • positive (oneflow_api.BlobDesc) – The positive sample Blob.

  • negative (oneflow_api.BlobDesc) – The negative sample Blob.

  • margin (float, optional) – The margin value. Defaults to 1.0.

  • p (float, optional) – The norm degree for computing distance. Defaults to 2.0.

  • eps (float, optional) – A small value use in norm computation. Defaults to 1e-6.

  • swap (bool, optional) – Whether to swap the distance. For more details you can check the paper Learning shallow convolutional feature descriptors with triplet losses. Defaults to False.

  • reduction (str, optional) – The reduce type, it can be one of “none”, “mean”, “sum”. Defaults to “mean”.

  • name (Optional[str], optional) – The name for the operation. Defaults to None.

Returns

The result Blob.

Return type

oneflow_api.BlobDesc

oneflow.nn.avg_pool1d(input: oneflow_api.BlobDesc, ksize: Union[int, Sequence[int]], strides: Union[int, Sequence[int]], padding: Union[str, Sequence[Sequence[int]]], data_format: str = 'NCW', name: Optional[str] = None) → oneflow_api.BlobDesc

Performs the average pooling on the input Blob.

Parameters
  • input (oneflow_api.BlobDesc) – A 3-D Blob of the format specified by data_format.

  • ksize (Union[int, Sequence[int]]) – An int or list of ints that has length 1 or 3. The size of the window for each dimension of the input Blob.

  • strides (Union[int, Sequence[int]]) – An int or list of ints that has length 1 or 3. The stride of the sliding window for each dimension of the input Blob.

  • padding (str) – ‘VALID’ or ‘SAME’.

  • data_format (str, optional) – ‘NWC’ or ‘NCW’. Defaults to ‘NCW’.

  • name (Optional[str], optional) – This operator’s name(optional). Defaults to None.

Raises

NotImplementedError – TODO: fix cuDNN bugs in pooling_1d

Returns

A Blob of format specified by data_format. The average pooled output Blob.

Return type

oneflow_api.BlobDesc
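
For example (a hedged sketch of the intended call, not from the original docs; note the Raises entry above, so this may raise NotImplementedError):

import oneflow as flow
import numpy as np
import oneflow.typing as tp


@flow.global_function()
def avgpool1d_Job(x: tp.Numpy.Placeholder((1, 32, 128))
) -> tp.Numpy:
    # NCW layout: batch 1, 32 channels, width 128.
    pool_out = flow.nn.avg_pool1d(
        input=x,
        ksize=3,
        strides=2,
        padding='SAME',
        data_format='NCW'
    )

    return pool_out


x = np.random.randn(1, 32, 128).astype(np.float32)
out = avgpool1d_Job(x)

# expected out.shape (1, 32, 64), if the 1d pooling path is implemented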

oneflow.nn.avg_pool2d(input: oneflow_api.BlobDesc, ksize: Union[int, Tuple[int, int]], strides: Union[int, Tuple[int, int]], padding: Union[str, Tuple[Tuple[int, int], Tuple[int, int], Tuple[int, int], Tuple[int, int]]], data_format: str = 'NCHW', ceil_mode: bool = False, name: Optional[str] = None) → oneflow_api.BlobDesc

Performs the 2d-average pooling on the input.

Parameters
  • input (oneflow_api.BlobDesc) – A 4-D Blob of the format specified by data_format.

  • ksize (Union[int, IntPair]) – An int or list of ints that has length 1, 2. The size of the window for each dimension of the input Blob.

  • strides (Union[int, IntPair]) – An int or list of ints that has length 1, 2. The stride of the sliding window for each dimension of the input Blob.

  • padding (str) – ‘VALID’ or ‘SAME’ or ‘SAME_LOWER’ or ‘SAME_UPPER’ or Tuple[IntPair, IntPair, IntPair, IntPair]. The padding algorithm.

  • data_format (str, optional) – ‘NHWC’ or ‘NCHW’. Defaults to “NCHW”.

  • name (Optional[str], optional) – This operator’s name(optional). Defaults to None.

Returns

A Blob with the same type as ‘value’. The average pooled output Blob.

Return type

oneflow_api.BlobDesc

For example:

import oneflow as flow
import numpy as np
import oneflow.typing as tp


@flow.global_function()
def avgpool2d_Job(x: tp.Numpy.Placeholder((1, 32, 128, 128))
) -> tp.Numpy:
    pool_out = flow.nn.avg_pool2d(
        input=x,
        ksize=3,
        strides=2,
        padding='SAME',
        data_format='NCHW'
    )

    return pool_out


x = np.random.randn(1, 32, 128, 128).astype(np.float32)
out = avgpool2d_Job(x)

# out.shape (1, 32, 64, 64)
oneflow.nn.avg_pool3d(input: oneflow_api.BlobDesc, ksize: Union[int, Sequence[int]], strides: Union[int, Sequence[int]], padding: Union[str, Sequence[Sequence[int]]], data_format: str = 'NCDHW', ceil_mode: bool = False, name: Optional[str] = None) → oneflow_api.BlobDesc

Performs the 3d-average pooling on the input.

Parameters
  • input (oneflow_api.BlobDesc) – A 5-D Blob of the format specified by data_format.

  • ksize (Union[int, Sequence[int]]) – An int or list of ints that has length 1, 3 or 5. The size of the window for each dimension of the input Blob.

  • strides (Union[int, Sequence[int]]) – An int or list of ints that has length 1, 3 or 5. The stride of the sliding window for each dimension of the input Blob.

  • padding (str) – ‘VALID’ or ‘SAME’ or ‘SAME_LOWER’ or ‘SAME_UPPER’ or Sequence[Sequence[int]].

  • data_format (str, optional) – ‘NDHWC’ or ‘NCDHW’. Defaults to “NCDHW”.

  • name (Optional[str], optional) – This operator’s name (optional). Defaults to None.

Returns

A Blob with the same type as value. The average pooled output Blob.

Return type

oneflow_api.BlobDesc

For example:

import oneflow as flow
import numpy as np
import oneflow.typing as tp


@flow.global_function()
def avgpool3d_Job(x: tp.Numpy.Placeholder((1, 32, 10, 128, 128))
) -> tp.Numpy:
    pool_out = flow.nn.avg_pool3d(
        input=x,
        ksize=3,
        strides=2,
        padding='SAME',
        data_format='NCDHW'
    )

    return pool_out


x = np.random.randn(1, 32, 10, 128, 128).astype(np.float32)
out = avgpool3d_Job(x)

# out.shape (1, 32, 5, 64, 64)
oneflow.nn.batch_normalization(x: oneflow_api.BlobDesc, mean: oneflow_api.BlobDesc, variance: oneflow_api.BlobDesc, offset: Optional[oneflow_api.BlobDesc] = None, scale: Optional[oneflow_api.BlobDesc] = None, variance_epsilon: Optional[float] = 1e-05, axis: int = 1, name: Optional[str] = None) → oneflow_api.BlobDesc

This op does not fully align with tf.nn.batch_normalization.

The mean, variance, offset and scale must all be 1-D. Users need to set axis to 1 for the NCHW data format.

Parameters
  • x (oneflow_api.BlobDesc) – Input Blob of arbitrary dimensionality.

  • mean (oneflow_api.BlobDesc) – A 1D mean Blob.

  • variance (oneflow_api.BlobDesc) – A 1D variance Blob.

  • offset (Optional[oneflow_api.BlobDesc]) – A 1-D offset Blob (often denoted β in equations), or None. If present, it will be added to the normalized Blob.

  • scale (Optional[oneflow_api.BlobDesc]) – A 1-D scale Blob (often denoted γ in equations), or None. If present, the scale is applied to the normalized Blob.

  • variance_epsilon (float) – A small float number to avoid dividing by 0.

  • axis (int, optional) – 1 for ‘NCHW’ data format. Defaults to 1.

  • name (Optional[str], optional) – This operator’s name.

Returns

the normalized, scaled, offset Blob.

Return type

oneflow_api.BlobDesc

Note

This API is more flexible. If you are new to OneFlow, it is recommended to use oneflow.layers.batch_normalization instead.

For example:

import oneflow as flow
import numpy as np
import oneflow.typing as tp


@flow.global_function()
def batch_norm_Job(x: tp.Numpy.Placeholder((1, 5))
) -> tp.Numpy:
    bn_mean, bn_variance = flow.nn.moments(x, axes=[1])
    batch_norm = flow.nn.batch_normalization(
        x,
        mean=bn_mean,
        variance=bn_variance,
        axis=0
    )
    return batch_norm


x = np.array([[1, 2, 3, 4, 5]]).astype(np.float32)
out = batch_norm_Job(x)

# out [[-1.41421  -0.707105  0.        0.707105  1.41421 ]]
oneflow.nn.bias_add(value: oneflow_api.BlobDesc, bias: oneflow_api.BlobDesc, data_format: Optional[str] = None, name: Optional[str] = None) → oneflow_api.BlobDesc

This operator adds a bias to Blob.

Parameters
  • value (oneflow_api.BlobDesc) – A Blob.

  • bias (oneflow_api.BlobDesc) – A 1-D Blob with size matching the channel dimension of value. And has the same type as value unless value is a quantized type.

  • data_format (Optional[str], optional) – A string. ‘N…C’ or ‘NC…’. Defaults to None.

  • name (Optional[str], optional) – This operator’s name. Defaults to None.

Raises

ValueError – ValueError if data format is unrecognized, if value has less than two dimensions with ‘N..C’/None data_format or value has less than three dimensions with ‘NC..’ data_format, if bias is a vector, or if the size of bias does not match the size of the channel dimension of value.

Returns

A Blob with the same type as value.

Return type

oneflow_api.BlobDesc

For example:

import oneflow as flow
import numpy as np
import oneflow.typing as tp


@flow.global_function()
def bias_add_Job(x: tp.Numpy.Placeholder((1, 64, 128, 128))
) -> tp.Numpy:
    bias_initializer = flow.truncated_normal(0.1)
    bias_regularizer = flow.regularizers.l2(0.0005)
    bias = flow.get_variable(
            "Add_bias",
            shape=(64,),
            initializer=bias_initializer,
            regularizer=bias_regularizer,
        )
    bias_out = flow.nn.bias_add(x, bias)
    return bias_out


x = np.random.randn(1, 64, 128, 128).astype(np.float32)
out = bias_add_Job(x)

# out.shape (1, 64, 128, 128)
oneflow.nn.compat_conv2d(input: oneflow_api.BlobDesc, filters: oneflow_api.BlobDesc, strides: Union[int, Sequence[int]], padding: str, data_format: str = 'NCHW', dilations: Union[int, Sequence[int], None] = None, groups: int = 1, name: Optional[str] = None) → oneflow_api.BlobDesc

Computes a 2-D convolution given input and 4-D filters Blob.

Parameters
  • input (oneflow_api.BlobDesc) – A Blob of rank at least 4.

  • filters (oneflow_api.BlobDesc) – A Blob with the same type as input and has the shape [out_channels, in_channels//groups, filter_height, filter_width] for NCHW, or [out_channels, filter_height, filter_width, in_channels//groups] for NHWC

  • strides (Union[int, Sequence[int]]) – An int or list of ints that has length 1, or 2. The stride of the sliding window for each dimension of input.

  • padding (str) – “SAME” or “VALID” indicating the type of padding algorithm to use, or a list indicating the explicit paddings at the start and end of each dimension.

  • data_format (str, optional) – “NHWC” or “NCHW”. Defaults to “NCHW”.

  • dilations (Optional[Union[int, Sequence[int]]], optional) – The dilation factor for each dimension of input. Defaults to None.

  • groups (int, optional) – int value greater than 0. Defaults to 1.

  • name (Optional[str], optional) – This operator’s name. Defaults to None.

Raises
  • ValueError – strides must be an int or a list.

  • ValueError – data_format must be “NHWC” or “NCHW”.

  • ValueError – dilations length must be 2 when passed as a list.

  • ValueError – dilations must be an int or a list.

  • ValueError – the NHWC data_format does not support groups > 1.

  • ValueError – invalid data_format.

  • ValueError – padding must be “SAME” or “VALID”.

Returns

A Blob with the same type as input and the same outer batch shape.

Return type

oneflow_api.BlobDesc

For example:

import oneflow as flow
import numpy as np
import oneflow.typing as tp


def conv2d(input, filters, kernel_size, strides, padding, name):
    input_shape = input.shape
    weight_initializer = flow.truncated_normal(0.1)
    weight_regularizer = flow.regularizers.l2(0.0005)
    weight_shape = (filters,
                    input_shape[1],
                    kernel_size[0],
                    kernel_size[1])

    weight = flow.get_variable(
        name + "-weight",
        shape=weight_shape,
        initializer=weight_initializer,
        regularizer=weight_regularizer,
    )
    return flow.nn.compat_conv2d(input, weight, strides, padding, name=name)


@flow.global_function()
def conv2d_Job(x: tp.Numpy.Placeholder((1, 64, 32, 32))
) -> tp.Numpy:
    conv = conv2d(x,
                filters=128,
                kernel_size=[3, 3],
                strides=2,
                padding='SAME',
                name="Convlayer")
    return conv


x = np.random.randn(1, 64, 32, 32).astype(np.float32)
out = conv2d_Job(x)

# out.shape (1, 128, 16, 16)
oneflow.nn.conv1d(input: oneflow_api.BlobDesc, filters: oneflow_api.BlobDesc, strides: Union[int, Tuple[int]], padding: Union[str, Tuple[Tuple[int, int], Tuple[int, int], Tuple[int, int]]], data_format: str = 'NCW', dilations: Union[int, Tuple[int], None] = None, groups: int = 1, name: Optional[str] = None) → oneflow_api.BlobDesc

1D convolution layer.

Parameters
  • input (oneflow_api.BlobDesc) – A 3D input Blob. [batch_num, channel, width]

  • filters (oneflow_api.BlobDesc) – A Blob with the same type as input and has the shape [out_channels, in_channels//groups, filter_width] for NCW, or [out_channels, filter_width, in_channels//groups] for NWC

  • strides (Union[int, Tuple[int]]) – An int or list of ints that has length 1. The stride of the sliding window for each dimension of input.

  • padding (Union[str, Tuple[IntPair, IntPair, IntPair]]) – padding: string “SAME” or “SAME_LOWER” or “SAME_UPPER” or “VALID” or Tuple[IntPair, IntPair, IntPair] indicating the type of padding algorithm to use, or a list indicating the explicit paddings at the start and end of each dimension.

  • data_format (str, optional) – “NWC” or “NCW”. Defaults to “NCW”.

  • dilations (Optional[Union[int, Tuple[int]]], optional) – An int or list of ints that has length 1. The dilation factor for each dimension of input. Defaults to None.

  • groups (int, optional) – int value greater than 0. Defaults to 1.

  • name (Optional[str], optional) – This operator’s name. Defaults to None.

Raises
  • ValueError – strides must be an int or a list.

  • ValueError – padding must be “SAME” or “SAME_LOWER” or “SAME_UPPER” or “VALID” or Tuple[IntPair, IntPair, IntPair].

  • ValueError – data_format must be “NWC” or “NCW”.

  • ValueError – dilations must be an int or a list.

  • ValueError – invalid data_format.

  • ValueError – the NWC data_format does not support groups > 1.

  • ValueError – invalid data_format.

Returns

A Blob with the same type as input and the same outer batch shape.

Return type

oneflow_api.BlobDesc

Note

This API is more flexible. If you are new to OneFlow, it is recommended to use oneflow.layers.conv1d instead.

For example:

import oneflow as flow
import numpy as np
import oneflow.typing as tp


def conv1d(input, filters, kernel_size, strides, padding, name):
    input_shape = input.shape
    weight_initializer = flow.truncated_normal(0.1)
    weight_regularizer = flow.regularizers.l2(0.0005)
    weight_shape = (filters,
                    input_shape[1],
                    kernel_size)

    weight = flow.get_variable(
        name + "-weight",
        shape=weight_shape,
        initializer=weight_initializer,
        regularizer=weight_regularizer,
    )
    return flow.nn.conv1d(input, weight, strides, padding, name=name)


@flow.global_function()
def conv1d_Job(x: tp.Numpy.Placeholder((1, 64, 32))
) -> tp.Numpy:
    conv = conv1d(x,
                filters=32,
                kernel_size=3,
                strides=1,
                padding='SAME',
                name="Convlayer")
    return conv


x = np.random.randn(1, 64, 32).astype(np.float32)
out = conv1d_Job(x)

# out.shape (1, 32, 32)
oneflow.nn.conv2d(input: oneflow_api.BlobDesc, filters: oneflow_api.BlobDesc, strides: Union[int, Tuple[int, int]], padding: Union[str, Tuple[Tuple[int, int], Tuple[int, int], Tuple[int, int], Tuple[int, int]]], data_format: str = 'NCHW', dilations: Union[int, Tuple[int, int], None] = None, groups: int = 1, name: Optional[str] = None) → oneflow_api.BlobDesc

2D convolution layer.

Parameters
  • input (oneflow_api.BlobDesc) – A 4D input Blob. [batch_num, channel, height, width]

  • filters (oneflow_api.BlobDesc) – A Blob with the same type as input and has the shape [out_channels, in_channels//groups, filter_height, filter_width] for NCHW, or [out_channels, filter_height, filter_width, in_channels//groups] for NHWC

  • strides (Union[int, IntPair]) – An int or list of ints that has length 2. The stride of the sliding window for each dimension of input.

  • padding (Union[str, Tuple[IntPair, IntPair, IntPair, IntPair]]) – padding: string “SAME” or “SAME_LOWER” or “SAME_UPPER” or “VALID” or Tuple[IntPair, IntPair, IntPair, IntPair] indicating the type of padding algorithm to use, or a list indicating the explicit paddings at the start and end of each dimension.

  • data_format (str, optional) – “NHWC” or “NCHW”. Defaults to “NCHW”.

  • dilations (Optional[Union[int, IntPair]], optional) – An int or list of ints that has length 2. The dilation factor for each dimension of input. Defaults to None.

  • groups (int, optional) – int value greater than 0. Defaults to 1.

  • name (Optional[str], optional) – This operator’s name. Defaults to None.

Raises
  • ValueError – strides must be an int or a list.

  • ValueError – padding must be “SAME” or “SAME_LOWER” or “SAME_UPPER” or “VALID” or Tuple[IntPair, IntPair, IntPair, IntPair].

  • ValueError – data_format must be “NHWC” or “NCHW”.

  • ValueError – dilations must be an int or a list.

  • ValueError – invalid data_format.

  • ValueError – the NHWC data_format does not support groups > 1.

  • ValueError – invalid data_format.

Returns

A Blob with the same type as input and the same outer batch shape.

Return type

oneflow_api.BlobDesc

Note

This API is more flexible. If you are new to OneFlow, it is recommended to use oneflow.layers.conv2d instead.

For example:

import oneflow as flow
import numpy as np
import oneflow.typing as tp


def conv2d(input, filters, kernel_size, strides, padding, name):
    input_shape = input.shape
    weight_initializer = flow.truncated_normal(0.1)
    weight_regularizer = flow.regularizers.l2(0.0005)
    weight_shape = (filters,
                    input_shape[1],
                    kernel_size[0],
                    kernel_size[1])

    weight = flow.get_variable(
        name + "-weight",
        shape=weight_shape,
        initializer=weight_initializer,
        regularizer=weight_regularizer,
    )
    return flow.nn.conv2d(input, weight, strides, padding, name=name)


@flow.global_function()
def conv2d_Job(x: tp.Numpy.Placeholder((1, 64, 32, 32))
) -> tp.Numpy:
    conv = conv2d(x,
                filters=128,
                kernel_size=[3, 3],
                strides=2,
                padding='SAME',
                name="Convlayer")
    return conv


x = np.random.randn(1, 64, 32, 32).astype(np.float32)
out = conv2d_Job(x)

# out.shape (1, 128, 16, 16)
oneflow.nn.conv2d_transpose(value: Optional[oneflow_api.BlobDesc] = None, filter: Optional[oneflow_api.BlobDesc] = None, output_shape: Tuple[int, int, int, int] = None, strides: Union[int, Sequence[int], None] = None, padding: str = 'VALID', data_format: str = 'NCHW', name: Optional[str] = None, input: Optional[oneflow_api.BlobDesc] = None, filters: Optional[oneflow_api.BlobDesc] = None, dilations: Union[int, Sequence[int], None] = None) → oneflow_api.BlobDesc

2d transposed convolution.

Parameters
  • value (Optional[oneflow_api.BlobDesc], optional) – 4-d Blob. Defaults to None.

  • filter (Optional[oneflow_api.BlobDesc], optional) – Filter of transposed convolution, usually a variable. Defaults to None.

  • output_shape (Tuple[int, int, int, int]) – A tuple of 4 ints representing the output shape of the deconvolution op. Defaults to None.

  • strides (Optional[Union[int, Sequence[int]]], optional) – int or int list. Defaults to None.

  • padding (str, optional) – ‘VALID’ or ‘SAME’. Defaults to “VALID”.

  • data_format (str, optional) – ‘NHWC’ or ‘NCHW’. Defaults to “NCHW”.

  • name (Optional[str], optional) – This operator’s name(optional). Defaults to None.

  • input (Optional[oneflow_api.BlobDesc], optional) – Alias for value. Defaults to None.

  • filters (Optional[oneflow_api.BlobDesc], optional) – Alias for filter. Defaults to None.

  • dilations (Optional[Union[int, Sequence[int]]], optional) – The dilation factor for each dimension of input. Defaults to None.

Raises
  • ValueError – shapes of filter and input must match.

  • ValueError – dilations must be an int or a list.

  • ValueError – data_format must be “NHWC” or “NCHW”.

  • ValueError – padding must be “SAME” or “VALID”.

Returns

A Blob with the same type as value.

Return type

oneflow_api.BlobDesc

For example:

import oneflow as flow
import numpy as np
import oneflow.typing as tp


def deconv2d(input, filters, kernel_size, strides, padding, name):
    input_shape = input.shape
    weight_initializer = flow.truncated_normal(0.1)
    weight_regularizer = flow.regularizers.l2(0.0005)
    weight_shape = (filters,
                    input_shape[1],
                    kernel_size[0],
                    kernel_size[1])

    weight = flow.get_variable(
        name + "-weight",
        shape=weight_shape,
        initializer=weight_initializer,
        regularizer=weight_regularizer,
    )
    return flow.nn.conv2d_transpose(value=input,
                                    output_shape=(1, 32, 64, 64),
                                    filter=weight,
                                    strides=strides,
                                    padding=padding,
                                    name=name)


@flow.global_function()
def deconv2d_Job(x: tp.Numpy.Placeholder((1, 32, 32, 32),)
) -> tp.Numpy:
    deconv = deconv2d(x,
                    filters=32,
                    kernel_size=[3, 3],
                    strides=2,
                    padding='SAME',
                    name="Convlayer")
    return deconv


x = np.random.randn(1, 32, 32, 32).astype(np.float32)
out = deconv2d_Job(x)

# out.shape (1, 32, 64, 64)
oneflow.nn.conv3d(input: oneflow_api.BlobDesc, filters: oneflow_api.BlobDesc, strides: Union[int, Sequence[int]], padding: Union[str, Tuple[Tuple[int, int], Tuple[int, int], Tuple[int, int], Tuple[int, int], Tuple[int, int]]], data_format: str = 'NCDHW', dilations: Union[int, Sequence[int], None] = None, groups: int = 1, name: Optional[str] = None) → oneflow_api.BlobDesc

3D convolution layer.

Parameters
  • input (oneflow_api.BlobDesc) – A 5D input Blob. [batch_num, channel, depth, height, width]

  • filters (oneflow_api.BlobDesc) – A Blob with the same type as input and has the shape [out_channels, in_channels//groups, filter_depth, filter_height, filter_width] for NCDHW, or [out_channels, filter_depth, filter_height, filter_width, in_channels//groups] for NDHWC

  • strides (Union[int, Sequence[int]]) – An int or list of ints that has length 3. The stride of the sliding window for each dimension of input.

  • padding (Union[str, Tuple[IntPair, IntPair, IntPair, IntPair, IntPair]]) – padding: string “SAME” or “SAME_LOWER” or “SAME_UPPER” or “VALID” or Tuple[IntPair, IntPair, IntPair, IntPair, IntPair] indicating the type of padding algorithm to use, or a list indicating the explicit paddings at the start and end of each dimension.

  • data_format (str, optional) – “NDHWC” or “NCDHW”. Defaults to “NCDHW”.

  • dilations (Optional[Union[int, Sequence[int]]], optional) – An int or list of ints that has length 3. The dilation factor for each dimension of input. Defaults to None.

  • groups (int, optional) – int value greater than 0. Defaults to 1.

  • name (Optional[str], optional) – This operator’s name. Defaults to None.

Raises
  • ValueError – strides must be an int or a list.

  • ValueError – padding must be “SAME” or “SAME_LOWER” or “SAME_UPPER” or “VALID” or Tuple[IntPair, IntPair, IntPair, IntPair, IntPair].

  • ValueError – data_format must be “NDHWC” or “NCDHW”.

  • ValueError – dilations must be an int or a list.

  • ValueError – invalid data_format.

  • ValueError – the NDHWC data_format does not support groups > 1.

  • ValueError – invalid data_format.

Returns

A Blob with the same type as input and the same outer batch shape.

Return type

oneflow_api.BlobDesc

Note

This API is more flexible. If you are new to OneFlow, it is recommended to use oneflow.layers.conv3d instead.

For example:

import oneflow as flow
import numpy as np
import oneflow.typing as tp


def conv3d(input, filters, kernel_size, strides, padding, name):
    input_shape = input.shape
    weight_initializer = flow.truncated_normal(0.1)
    weight_regularizer = flow.regularizers.l2(0.0005)
    weight_shape = (filters,
                    input_shape[1],
                    kernel_size[0],
                    kernel_size[1],
                    kernel_size[2])

    weight = flow.get_variable(
        name + "-weight",
        shape=weight_shape,
        initializer=weight_initializer,
        regularizer=weight_regularizer,
    )
    return flow.nn.conv3d(input, weight, strides, padding, name=name)


@flow.global_function()
def conv3d_Job(x: tp.Numpy.Placeholder((1, 64, 10, 16, 16))
) -> tp.Numpy:
    conv = conv3d(x,
                filters=128,
                kernel_size=[3, 3, 3],
                strides=1,
                padding='SAME',
                name="Convlayer")
    return conv


x = np.random.randn(1, 64, 10, 16, 16).astype(np.float32)
out = conv3d_Job(x)

# out.shape (1, 128, 10, 16, 16)
oneflow.nn.dropout(x: oneflow_api.BlobDesc, rate: float, noise_shape: Optional[oneflow_api.BlobDesc] = None, seed: Optional[int] = None, name: Optional[str] = None) → oneflow_api.BlobDesc

To prevent overfitting, this operator randomly sets elements to zero.

Parameters
  • x (oneflow_api.BlobDesc) – A floating point Blob.

  • rate (float) – A scalar float. The probability that each element is dropped.

  • noise_shape (Optional[oneflow_api.BlobDesc], optional) – A 1-D Blob representing the shape for randomly generated keep/drop flags. Defaults to None.

  • seed (Optional[int], optional) – Optional int value. Defaults to None.

  • name (Optional[str], optional) – This operator’s name(optional). Defaults to None.

Returns

A Blob of the same shape of x.

Return type

oneflow_api.BlobDesc

Raises

ValueError – If rate is not in [0, 1) or if x is not a floating point Blob. rate = 1 is not allowed.

For example:

import oneflow as flow


def lenet(data, train=False):
    initializer = flow.truncated_normal(0.1)
    conv1 = flow.layers.conv2d(
        data,
        32,
        5,
        padding="SAME",
        activation=flow.nn.relu,
        name="conv1",
        kernel_initializer=initializer,
    )
    pool1 = flow.nn.max_pool2d(
        conv1, ksize=2, strides=2, padding="SAME", name="pool1", data_format="NCHW"
    )
    conv2 = flow.layers.conv2d(
        pool1,
        64,
        5,
        padding="SAME",
        activation=flow.nn.relu,
        name="conv2",
        kernel_initializer=initializer,
    )
    pool2 = flow.nn.max_pool2d(
        conv2, ksize=2, strides=2, padding="SAME", name="pool2", data_format="NCHW"
    )
    reshape = flow.reshape(pool2, [pool2.shape[0], -1])
    hidden = flow.layers.dense(
        reshape,
        512,
        activation=flow.nn.relu,
        kernel_initializer=initializer,
        name="dense1",
    )
    if train:
        hidden = flow.nn.dropout(hidden, rate=0.5, name="dropout")

    return flow.layers.dense(hidden, 10, kernel_initializer=initializer, name="dense2")
oneflow.nn.elu(x: oneflow_api.BlobDesc, alpha: float = 1.0, name: Optional[str] = None) → oneflow_api.BlobDesc

The ELU activation.

The formula is:

\[\begin{split}\text{ELU}(x) = \begin{cases} x & \text{ if } x \gt 0 \\ \alpha*(exp(x)-1) & \text{ if } x \le 0 \\ \end{cases}\end{split}\]

For example:

import oneflow as flow
import oneflow.typing as tp
import numpy as np


@flow.global_function()
def elu_job(x: tp.Numpy.Placeholder(shape=(3, )))->tp.Numpy:
    return flow.nn.elu(x, alpha=1.0)


x = np.array([-3.5, 1, 3.5]).astype(np.float32)
out = elu_job(x)

# output [-0.9698026  1.         3.5      ]
Parameters
  • x (oneflow_api.BlobDesc) – The input Tensor.

  • alpha (float, optional) – The alpha value for the ELU formula. Defaults to 1.0.

  • name (Optional[str], optional) – The name for the operator. Defaults to None.

Returns

The activated Tensor.

Return type

oneflow_api.BlobDesc

oneflow.nn.fused_scale_tril(x: oneflow_api.BlobDesc, diagonal: int = 0, fill_value: Union[int, float] = 0, scale: Union[int, float] = 1, name: Optional[str] = None) → oneflow_api.BlobDesc
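
No description is generated for this operator. A hedged usage sketch follows; the assumed semantics (elements on and below diagonal are kept and multiplied by scale, elements above it are replaced with fill_value) should be verified against the implementation:

import oneflow as flow
import oneflow.typing as tp
import numpy as np


@flow.global_function()
def fused_scale_tril_job(x: tp.Numpy.Placeholder(shape=(3, 3)))->tp.Numpy:
    # Assumption: keep the lower triangle (relative to diagonal), scale it by 2,
    # and fill the remaining elements with 0.
    return flow.nn.fused_scale_tril(x, diagonal=0, fill_value=0, scale=2)


x = np.arange(9).reshape(3, 3).astype(np.float32)
out = fused_scale_tril_job(x)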
oneflow.nn.hardsigmoid(x: oneflow_api.BlobDesc, name: Optional[str] = None) → oneflow_api.BlobDesc

The Hardsigmoid activation.

The formula is:

\[\begin{split}\text{Hardsigmoid}(x) = \begin{cases} 0 & \text{ if } x \le -3 \\ 1 & \text{ if } x \ge +3 \\ \frac{x}{6} + \frac{1}{2} & \text{ otherwise } \\ \end{cases}\end{split}\]

For example:

import oneflow as flow
import oneflow.typing as tp
import numpy as np


@flow.global_function()
def hardsigmoid_job(x: tp.Numpy.Placeholder(shape=(3, )))->tp.Numpy:
    out = flow.nn.hardsigmoid(x)

    return out


x = np.array([-3.1, 0, 3.3]).astype(np.float32)
out = hardsigmoid_job(x)

# output [0.  0.5 1. ]
Parameters
  • x (oneflow_api.BlobDesc) – The input Tensor.

  • name (Optional[str], optional) – The name for the operation. Defaults to None.

Returns

The activated Tensor.

Return type

oneflow_api.BlobDesc

oneflow.nn.hardswish(x: oneflow_api.BlobDesc, name: Optional[str] = None) → oneflow_api.BlobDesc

The Hardswish activation.

The formula is:

\[\begin{split}\text{Hardswish}(x) = \begin{cases} 0 & \text{ if } x \le -3 \\ x & \text{ if } x \ge +3 \\ x*(x+3)/6 & \text{ otherwise } \\ \end{cases}\end{split}\]

For example:

import oneflow as flow
import oneflow.typing as tp
import numpy as np


@flow.global_function()
def hardswish_job(x: tp.Numpy.Placeholder(shape=(3, )))->tp.Numpy:
    return flow.nn.hardswish(x)


x = np.array([-3.5, 1, 3.5]).astype(np.float32)
out = hardswish_job(x)

# output [0.        0.6666667 3.5      ]
Parameters
  • x (oneflow_api.BlobDesc) – The input Tensor.

  • name (Optional[str], optional) – The name for the operation. Defaults to None.

Returns

The activated Tensor.

Return type

oneflow_api.BlobDesc

oneflow.nn.hardtanh(x: oneflow_api.BlobDesc, min_val: float = -1.0, max_val: float = 1.0, name: Optional[str] = None) → oneflow_api.BlobDesc

The Hardtanh activation.

The equation is:

\[\begin{split}\text{HardTanh}(x) = \begin{cases} max\_val & \text{ if } x > max\_val \\ min\_val & \text{ if } x < min\_val \\ x & \text{ otherwise } \\ \end{cases}\end{split}\]

For example:

import oneflow as flow
import oneflow.typing as tp
import numpy as np


@flow.global_function()
def hardtanh_job(x: tp.Numpy.Placeholder(shape=(2, 3)))->tp.Numpy:
    return flow.nn.hardtanh(x, min_val=-1.25, max_val=1.2)


x = np.array([[-1.5, -1.1, 0.6],
            [1.2, 1.3, 1.5]]).astype(np.float32)
out = hardtanh_job(x)

# output [[-1.25 -1.1   0.6 ]
#         [ 1.2   1.2   1.2 ]]
Parameters
  • x (oneflow_api.BlobDesc) – The input Tensor.

  • min_val (float, optional) – The minimum value of the linear region range. Defaults to -1.

  • max_val (float, optional) – The maximum value of the linear region range. Defaults to 1.

  • name (Optional[str], optional) – The name for the operation. Defaults to None.

Returns

The activated tensor.

Return type

oneflow_api.BlobDesc

oneflow.nn.layer_norm(inputs: oneflow_api.BlobDesc, gamma: Optional[oneflow_api.BlobDesc] = None, beta: Optional[oneflow_api.BlobDesc] = None, begin_norm_axis: int = 1, begin_params_axis: int = -1, epsilon: float = 1e-05, name: Optional[str] = None) → oneflow_api.BlobDesc

Layer Normalization.

Parameters
  • inputs (oneflow_api.BlobDesc) – Input Blob.

  • gamma (Optional[oneflow_api.BlobDesc]) – The scale parameter Blob. Defaults to None.

  • beta (Optional[oneflow_api.BlobDesc]) – The shift parameter Blob. Defaults to None.

  • begin_norm_axis (int, optional) – An integer specifying the first axis to normalize over. Defaults to 1.

  • begin_params_axis (int, optional) – An integer specifying the axis at which the scale and shift parameters begin. Defaults to -1.

  • epsilon (float, optional) – A small float is added to avoid division by zero. Defaults to 1e-5.

  • name (Optional[str], optional) – This operator’s name. Defaults to None.

Returns

A normalized Blob with same shape of input.

Return type

oneflow_api.BlobDesc

For example:

import oneflow as flow
import numpy as np
import oneflow.typing as tp


@flow.global_function()
def layer_norm_Job(x: tp.Numpy.Placeholder((1, 64, 128, 128))
) -> tp.Numpy:
    layer_norm = flow.nn.layer_norm(
        x,
        name="LayerNorm1"
    )
    return layer_norm


x = np.random.randn(1, 64, 128, 128).astype(np.float32)
out = layer_norm_Job(x)

# out.shape (1, 64, 128, 128)
oneflow.nn.leaky_relu(x: oneflow_api.BlobDesc, alpha: float = 0.2, name: Optional[str] = None) → oneflow_api.BlobDesc

Leaky ReLU activation.

\[out = max(x, alpha*x)\]
Parameters
  • x (oneflow_api.BlobDesc) – A Blob representing preactivation values.

  • alpha (float, optional) – Slope of the activation function at x < 0 with float type. Default value is 0.2.

  • name (Optional[str], optional) – This operator’s name(optional). Defaults to None.

Returns

The activation Blob.

Return type

oneflow_api.BlobDesc

For example:

import oneflow as flow
import numpy as np
import oneflow.typing as tp


@flow.global_function()
def leaky_relu_Job(x: tp.Numpy.Placeholder((5, ),)
) -> tp.Numpy:
    leaky_relu = flow.nn.leaky_relu(x, alpha=0.2)

    return leaky_relu


x = np.array([-10, -5, 0, 5, 10]).astype(np.float32)
out = leaky_relu_Job(x)

# out [-2. -1.  0.  5. 10.]
oneflow.nn.logsoftmax(logits: oneflow_api.BlobDesc, axis: Optional[int] = None, name: Optional[str] = None) → oneflow_api.BlobDesc

Computes logsoftmax activations.

For each element, we apply:

\[\text{LogSoftmax}(x_i) = \log\left(\frac{e^{x_i}}{\sum_{j} e^{x_j}}\right)\]
Parameters
  • logits (oneflow_api.BlobDesc) – A non-empty Blob.

  • axis (Optional[int], optional) – The dimension logsoftmax would be performed on. Defaults to None.

  • name (Optional[str], optional) – This operator’s name(optional). Defaults to None.

Returns

A Blob has the same type and shape as logits.

Return type

oneflow_api.BlobDesc

Raises

InvalidArgumentError – if logits is empty or axis is beyond the last dimension of logits.

For example:

import oneflow as flow
import numpy as np
import oneflow.typing as tp


@flow.global_function()
def logsoftmax_Job(x: tp.Numpy.Placeholder((1, 5))
) -> tp.Numpy:
    logsoftmax_out = flow.nn.logsoftmax(x, axis=1)
    return logsoftmax_out


x = np.array([[1, 2, 1, 5, 4]]).astype(np.float32)
out = logsoftmax_Job(x)

# out [[-4.374523  -3.3745232 -4.374523  -0.3745232 -1.374523 ]]
oneflow.nn.max_pool1d(input: oneflow_api.BlobDesc, ksize: Union[int, Sequence[int]], strides: Union[int, Sequence[int]], padding: Union[str, Sequence[Sequence[int]]], data_format: str = 'NWC', name: Optional[str] = None) → oneflow_api.BlobDesc

Performs the 1d-max pooling on the input.

Parameters
  • input (oneflow_api.BlobDesc) – A 3-D Blob of the format specified by data_format.

  • ksize (Union[int, Sequence[int]]) – An int or list of ints that has length 1 or 3. The size of the window for each dimension of the input Blob.

  • strides (Union[int, Sequence[int]]) – An int or list of ints that has length 1 or 3. The stride of the sliding window for each dimension of the input Blob.

  • padding (str) – ‘VALID’ or ‘SAME’. The padding algorithm.

  • data_format (str, optional) – An optional string from: ‘NWC’, ‘NCW’. Defaults to ‘NWC’.

  • name (Optional[str], optional) – This operator’s name (optional). Defaults to None.

Raises

NotImplementedError – TODO: fix cuDNN bugs in pooling_1d

Returns

A Blob of format specified by data_format. The max pooled output Blob.

Return type

oneflow_api.BlobDesc
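
For example (a hedged sketch of the intended call, not from the original docs; note the Raises entry above, so this may raise NotImplementedError):

import oneflow as flow
import numpy as np
import oneflow.typing as tp


@flow.global_function()
def maxpool1d_Job(x: tp.Numpy.Placeholder((1, 128, 32))
) -> tp.Numpy:
    # NWC layout: batch 1, width 128, 32 channels.
    pool_out = flow.nn.max_pool1d(
        input=x,
        ksize=3,
        strides=2,
        padding='SAME',
        data_format='NWC'
    )

    return pool_out


x = np.random.randn(1, 128, 32).astype(np.float32)
out = maxpool1d_Job(x)

# expected out.shape (1, 64, 32), if the 1d pooling path is implemented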

oneflow.nn.max_pool2d(input: oneflow_api.BlobDesc, ksize: Union[int, Tuple[int, int]], strides: Union[int, Tuple[int, int]], padding: Union[str, Tuple[Tuple[int, int], Tuple[int, int], Tuple[int, int], Tuple[int, int]]], data_format: str = 'NCHW', ceil_mode: bool = False, name: Optional[str] = None) → oneflow_api.BlobDesc

Performs the 2d-max pooling on the input Blob.

Parameters
  • input (oneflow_api.BlobDesc) – A 4-D Blob of the format specified by data_format.

  • ksize (Union[int, IntPair]) – An int or list of ints that has length 1, 2. The size of the window for each dimension of the input Blob.

  • strides (Union[int, IntPair]) – An int or list of ints that has length 1, 2. The stride of the sliding window for each dimension of the input Blob.

  • padding (str) – ‘VALID’ or ‘SAME’ or ‘SAME_LOWER’ or ‘SAME_UPPER’ or Tuple[IntPair, IntPair, IntPair, IntPair]. The padding algorithm.

  • data_format (str, optional) – ‘NHWC’, ‘NCHW’ or ‘NCHW_VECT_C’. Defaults to “NCHW”.

  • name (Optional[str], optional) – This operator’s name(optional). Defaults to None.

Returns

A Blob of format specified by data_format. The max pooled output Blob.

Return type

oneflow_api.BlobDesc

For example:

import oneflow as flow
import numpy as np
import oneflow.typing as tp


@flow.global_function()
def maxpool2d_Job(x: tp.Numpy.Placeholder((1, 32, 128, 128))
) -> tp.Numpy:
    pool_out = flow.nn.max_pool2d(
        input=x,
        ksize=3,
        strides=2,
        padding='SAME',
        data_format='NCHW'
    )

    return pool_out


x = np.random.randn(1, 32, 128, 128).astype(np.float32)
out = maxpool2d_Job(x)

# out.shape (1, 32, 64, 64)
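
With 'SAME' padding, each pooled spatial dimension has size ceil(input_size / stride), which is why the 128x128 input above becomes 64x64. A small sketch of that shape arithmetic (not part of the oneflow API):

import math

input_size = 128
stride = 2
# 'SAME' padding pads the input so the output covers every position:
# output_size = ceil(input_size / stride)
output_size = math.ceil(input_size / stride)  # 64, matching out.shape (1, 32, 64, 64)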
oneflow.nn.max_pool3d(input: oneflow_api.BlobDesc, ksize: Union[int, Sequence[int]], strides: Union[int, Sequence[int]], padding: Union[str, Sequence[Sequence[int]]], data_format: str = 'NCDHW', ceil_mode: bool = False, name: Optional[str] = None) → oneflow_api.BlobDesc

Performs the 3d-max pooling on the input.

Parameters
  • input (oneflow_api.BlobDesc) – A 5-D Blob of the format specified by data_format.

  • ksize (Union[int, Sequence[int]]) – An int or list of ints that has length 1, 3 or 5. The size of the window for each dimension of the input Blob.

  • strides (Union[int, Sequence[int]]) – An int or list of ints that has length 1, 3 or 5. The stride of the sliding window for each dimension of the input Blob.

  • padding (str) – ‘VALID’ or ‘SAME’ or ‘SAME_LOWER’ or ‘SAME_UPPER’ or Sequence[Sequence[int]]. The padding algorithm.

  • data_format (str, optional) – “NDHWC” or “NCDHW”. Defaults to “NCDHW”.

  • name (Optional[str], optional) – This operator’s name (optional). Defaults to None.

Returns

A Blob of format specified by data_format. The max pooled output Blob.

Return type

oneflow_api.BlobDesc

For example:

import oneflow as flow
import numpy as np
import oneflow.typing as tp


@flow.global_function()
def maxpool3d_Job(x: tp.Numpy.Placeholder((1, 32, 10, 128, 128))
) -> tp.Numpy:
    pool_out = flow.nn.max_pool3d(
        input=x,
        ksize=3,
        strides=2,
        padding='SAME',
        data_format='NCDHW'
    )

    return pool_out


x = np.random.randn(1, 32, 10, 128, 128).astype(np.float32)
out = maxpool3d_Job(x)

# out.shape (1, 32, 5, 64, 64)
oneflow.nn.mish(x: oneflow_api.BlobDesc, name: Optional[str] = None) → oneflow_api.BlobDesc

The Mish activation function.

The equation is:

\[out = x*tanh(ln(1+e^x))\]

For example:

import oneflow as flow
import oneflow.typing as tp
import numpy as np


@flow.global_function()
def mish_job(x: tp.Numpy.Placeholder(shape=(5, )))->tp.Numpy:
    return flow.nn.mish(x)


x = np.array([-0.5, 0, 0.5, 1.0, 1.5]).astype(np.float32)
out = mish_job(x)
Parameters
  • x (oneflow_api.BlobDesc) – The input Blob.

  • name (Optional[str], optional) – The name for the operation. Defaults to None.

Returns

The result Blob.

Return type

oneflow_api.BlobDesc
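
For checking results against the equation above, Mish can be written directly in NumPy; np_mish is just an illustrative helper name, not part of oneflow:

import numpy as np

def np_mish(x):
    # mish(x) = x * tanh(ln(1 + e^x)), i.e. x * tanh(softplus(x))
    return x * np.tanh(np.log1p(np.exp(x)))

x = np.array([-0.5, 0, 0.5, 1.0, 1.5]).astype(np.float32)
# np_mish(x) should closely match the output of mish_job(x)
print(np_mish(x))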

oneflow.nn.moments(x: oneflow_api.BlobDesc, axes: List[int], keepdims: Optional[bool] = False, name: Optional[str] = None) → oneflow_api.BlobDesc

This operator computes the mean and variance of the input Blob.

Parameters
  • x (oneflow_api.BlobDesc) – A Blob

  • axes (List) – Array of ints. Axes along which to compute the mean and variance.

  • keepdims (bool, optional) – Whether to keep the same dimensionality as the input x. Defaults to False.

  • name (str, optional) – The operator’s name. Defaults to None.

Returns

Two Blobs, mean and variance.

Return type

remote_blob

For example:

import oneflow as flow
import numpy as np
import oneflow.typing as tp
from typing import Tuple


@flow.global_function()
def moments_Job(x: tp.Numpy.Placeholder((5,))
) -> Tuple[tp.Numpy, tp.Numpy]:
    return flow.nn.moments(x, axes=[0])


x = np.array([1, 2, 3, 4, 5]).astype(np.float32)
mean, variance = moments_Job(x)

# mean: [3.]
# variance: [2.]
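
The result can be verified with NumPy's mean and population variance (a cross-check, not the operator itself):

import numpy as np

x = np.array([1, 2, 3, 4, 5]).astype(np.float32)
print(np.mean(x))  # 3.0
print(np.var(x))   # 2.0 (population variance, dividing by N)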
oneflow.nn.random_mask_like(like: oneflow_api.BlobDesc, rate: float, seed: Optional[int] = None, noise_shape: Optional[Sequence] = None, name: Optional[str] = None) → oneflow_api.BlobDesc

Generates a random mask Blob with the same shape as like.

Parameters
  • like (oneflow_api.BlobDesc) – A Blob.

  • rate (float) – A float value for the probability that each element is dropped.

  • seed (Optional[int], optional) – An optional int value for the random seed. Defaults to None.

  • noise_shape (Optional[Sequence], optional) – Optional. A 1-D Blob representing the shape for randomly generated keep/drop flags. Defaults to None.

  • name (Optional[str], optional) – This operator’s name (optional). Defaults to None.

Returns

A random mask Blob with the same shape as like.

Return type

oneflow_api.BlobDesc

Raises

ValueError – If rate is not in [0, 1). Rate=1 is not allowed.

For example:

import oneflow as flow
import numpy as np
import oneflow.typing as tp


@flow.global_function()
def random_mask_like_Job(like: tp.Numpy.Placeholder((5, 5), dtype=flow.float32)
) -> tp.Numpy:

    return flow.nn.random_mask_like(like=like,
                                    rate=0.5)


like = np.ones(shape=(5, 5)).astype(np.float32)
random_mask = random_mask_like_Job(like)

# out [[0 0 0 0 0]
#      [1 1 1 0 0]
#      [1 0 1 1 0]
#      [0 0 0 0 1]
#      [1 0 1 1 1]]
oneflow.nn.relu(x: oneflow_api.BlobDesc, name: Optional[str] = None) → oneflow_api.BlobDesc

ReLU activation.

The equation is:

\[out = max(x, 0)\]
Parameters
  • x (oneflow_api.BlobDesc) – Input Blob

  • name (Optional[str], optional) – The name for the operation. Defaults to None.

Returns

An activated Blob.

Return type

oneflow_api.BlobDesc

For example:

import oneflow as flow
import numpy as np
import oneflow.typing as tp

@flow.global_function()
def reluJob(x: tp.Numpy.Placeholder((3, ))
)->tp.Numpy:
    return flow.nn.relu(x)

x = np.array([-1, 0, 5]).astype(np.float32)
out = reluJob(x)

# out [0., 0., 5.]
oneflow.nn.relu6(x: oneflow_api.BlobDesc, name: Optional[str] = None) → oneflow_api.BlobDesc

ReLU6 activation; it clips values to the range [0, 6].

The equation is:

\[\begin{split}\text{Relu6}(x) = \begin{cases} 6 & \text{ if } x > 6 \\ 0 & \text{ if } x < 0 \\ x & \text{ otherwise } \\ \end{cases}\end{split}\]

For example:

import oneflow as flow
import oneflow.typing as tp
import numpy as np


@flow.global_function()
def relu6_job(x: tp.Numpy.Placeholder(shape=(2, 3)))->tp.Numpy:
    return flow.nn.relu6(x)

x = np.array([[-1, -0.5, 0.0],
              [0.5, 6.0, 7]]).astype(np.float32)

out = relu6_job(x)

# output [[0.  0.  0. ]
#         [0.5 6.  6. ]]
Parameters
  • x (oneflow_api.BlobDesc) – The input Blob.

  • name (Optional[str], optional) – The name for the operation. Defaults to None.

Returns

The activated Blob.

Return type

oneflow_api.BlobDesc
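
The piecewise definition above is equivalent to clipping the input to the interval [0, 6], so the example output can be reproduced with NumPy (reference only):

import numpy as np

x = np.array([[-1, -0.5, 0.0],
              [0.5, 6.0, 7]]).astype(np.float32)
# relu6(x) == clip(x, 0, 6) element-wise
print(np.clip(x, 0, 6))
# [[0.  0.  0. ]
#  [0.5 6.  6. ]]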

oneflow.nn.sigmoid_cross_entropy_with_logits(labels: oneflow_api.BlobDesc, logits: oneflow_api.BlobDesc, name: Optional[str] = None) → oneflow_api.BlobDesc

Computes sigmoid cross entropy given logits.

Parameters
  • labels (oneflow_api.BlobDesc) – A Blob of the same type and shape as logits.

  • logits (oneflow_api.BlobDesc) – A Blob of type float.

  • name (Optional[str], optional) – This operator’s name (optional). Defaults to None.

Returns

A Blob of the same shape as logits with the componentwise logistic losses.

Return type

oneflow_api.BlobDesc

Raises

ValueError – If logits and labels do not have the same shape.

For example:

import oneflow as flow
import numpy as np
import oneflow.typing as tp


@flow.global_function()
def sigmoid_cross_entropy_Job(input: tp.Numpy.Placeholder((3, 2), dtype=flow.float32),
                            labels: tp.Numpy.Placeholder((3, 2), dtype=flow.float32)
) -> tp.Numpy:
    loss = flow.nn.sigmoid_cross_entropy_with_logits(labels=labels,
                                                    logits=input)
    return loss


x = np.array([[4, 1],
            [3, 2],
            [1, 5]]).astype(np.float32)
labels = np.array([[0.7, 0.3],
                [0.4, 0.6],
                [0.2, 0.8]]).astype(np.float32)
loss = sigmoid_cross_entropy_Job(x, labels)

# out [[0.612735   0.90472794]
#      [0.89778364 0.6990613 ]
#      [0.97783387 0.51372755]]
oneflow.nn.softmax(logits: oneflow_api.BlobDesc, axis: Optional[int] = None, name: Optional[str] = None) → oneflow_api.BlobDesc

Computes softmax activations.

For each element, we apply:

\[S_i = \frac{e^{x_i}}{\sum_j e^{x_j}}\]
Parameters
  • logits (oneflow_api.BlobDesc) – A non-empty Blob.

  • axis (Optional[int], optional) – The dimension softmax would be performed on. Defaults to None.

  • name (Optional[str], optional) – This operator’s name (optional). Defaults to None.

Returns

A Blob with the same type and shape as logits.

Return type

oneflow_api.BlobDesc

Raises

InvalidArgumentError – if logits is empty or axis is beyond the last dimension of logits.

For example:

import oneflow as flow
import numpy as np
import oneflow.typing as tp


@flow.global_function()
def softmax_Job(x: tp.Numpy.Placeholder((1, 5))
) -> tp.Numpy:
    softmax_out = flow.nn.softmax(x, axis=1)

    return softmax_out


x = np.array([[1, 2, 1, 5, 4]]).astype(np.float32)
out = softmax_Job(x)

# out [[0.01259415 0.03423444 0.01259415 0.68761706 0.2529602 ]]
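
The probabilities above follow directly from the definition; below is a NumPy sketch with the usual max-subtraction for numerical stability (a reference computation, not the kernel):

import numpy as np

x = np.array([[1, 2, 1, 5, 4]]).astype(np.float32)
# shift by the row maximum before exponentiating to avoid overflow
e = np.exp(x - np.max(x, axis=1, keepdims=True))
np_softmax = e / np.sum(e, axis=1, keepdims=True)

# np_softmax is close to [[0.0126 0.0342 0.0126 0.6876 0.2530]]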
oneflow.nn.softmax_cross_entropy_with_logits(labels: oneflow_api.BlobDesc, logits: oneflow_api.BlobDesc, name: Optional[str] = None) → oneflow_api.BlobDesc

Computes softmax cross entropy between logits and labels.

Parameters
  • labels (oneflow_api.BlobDesc) – Each vector along the class dimension should hold a valid probability distribution.

  • logits (oneflow_api.BlobDesc) – Per-label activations, typically a linear output. logits has the same shape and dtype as labels.

  • name (Optional[str], optional) – This operator’s name (optional). Defaults to None.

Returns

A Blob that contains the softmax cross entropy loss. Its type is the same as logits and its shape is the same as labels except that it does not have the last dimension of labels.

Return type

oneflow_api.BlobDesc

For example:

import oneflow as flow
import numpy as np
import oneflow.typing as tp


@flow.global_function()
def softmax_cross_entropy_Job(input: tp.Numpy.Placeholder((3, 3), dtype=flow.float32),
                            labels: tp.Numpy.Placeholder((3, 3), dtype=flow.float32)
) -> tp.Numpy:
    loss = flow.nn.softmax_cross_entropy_with_logits(labels=labels,
                                                    logits=input)
    return loss


x = np.array([[4, 1, 2],
            [3, 2, 3],
            [1, 5, 10]]).astype(np.float32)
labels = np.array([[0.9, 0.05, 0.05],
                [0.3, 0.4, 0.3],
                [0.8, 0.1, 0.1]]).astype(np.float32)
loss = softmax_cross_entropy_Job(x, labels)

# out [0.73441553 1.1240788  1.4488925 ]
oneflow.nn.softmax_grad(y: oneflow_api.BlobDesc, dy: oneflow_api.BlobDesc, axis: Optional[int] = None, name: Optional[str] = None) → oneflow_api.BlobDesc

Computes gradient of softmax activations.

Parameters
  • y (oneflow_api.BlobDesc) – A Blob representing the softmax of x.

  • dy (oneflow_api.BlobDesc) – gradient of y.

  • axis (Optional[int], optional) – The dimension softmax would be performed on. Defaults to None.

  • name (Optional[str], optional) – This operator’s name (optional).

Returns

A Blob representing the gradient of x.

Return type

oneflow_api.BlobDesc
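
For reference, the gradient of a softmax output y with respect to its input is dx = y * (dy - sum(dy * y)) along the softmax axis. The NumPy sketch below illustrates that relation; it is an illustration of the math, not necessarily the exact kernel behind softmax_grad:

import numpy as np

def np_softmax_grad(y, dy, axis=-1):
    # dx_i = y_i * (dy_i - sum_j(dy_j * y_j)) along the softmax axis
    s = np.sum(dy * y, axis=axis, keepdims=True)
    return y * (dy - s)

# y plays the role of a softmax output, dy of the incoming gradient
y = np.array([[0.1, 0.2, 0.7]]).astype(np.float32)
dy = np.array([[1.0, 0.0, 0.0]]).astype(np.float32)
print(np_softmax_grad(y, dy, axis=1))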

oneflow.nn.sparse_cross_entropy(labels: oneflow_api.BlobDesc, prediction: oneflow_api.BlobDesc, name: Optional[str] = None) → oneflow_api.BlobDesc

Computes sparse cross entropy

Parameters
  • labels (oneflow_api.BlobDesc) – A Blob of shape [d_0, d_1, …, d_{r-1}] (where r is rank of labels and result). Each entry in labels must be an index in [0, num_classes).

  • prediction (oneflow_api.BlobDesc) – A Blob whose rank is equal to the rank of labels plus one.

  • name (Optional[str], optional) – This operator’s name (optional). Defaults to None.

Returns

A Blob of the same shape as labels.

Return type

oneflow_api.BlobDesc

Note

The labels data type should be oneflow.int32.

For example:

import oneflow as flow
import numpy as np
import oneflow.typing as tp


@flow.global_function()
def sparse_cross_entropy_Job(input: tp.Numpy.Placeholder((5, 2), dtype=flow.float32),
                            labels: tp.Numpy.Placeholder((5,), dtype=flow.int32)
) -> tp.Numpy:
    loss = flow.nn.sparse_cross_entropy(labels=labels,
                                        prediction=input)
    return loss


x = np.array([[0.3, 0.7],
            [0.4, 0.6],
            [0.5, 0.5],
            [0.1, 0.9],
            [0.2, 0.8]]).astype(np.float32)
labels = np.array([0, 1, 1, 0, 1]).astype(np.int32)
loss = sparse_cross_entropy_Job(x, labels)

# out [1.2039728  0.5108256  0.6931472  2.3025851  0.22314353]
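
As the output suggests, sparse_cross_entropy appears to take probabilities directly (no softmax is applied), so each loss entry is simply -log of the predicted probability at the label index. A NumPy cross-check:

import numpy as np

x = np.array([[0.3, 0.7],
              [0.4, 0.6],
              [0.5, 0.5],
              [0.1, 0.9],
              [0.2, 0.8]]).astype(np.float32)
labels = np.array([0, 1, 1, 0, 1]).astype(np.int32)
# gather the probability of the labeled class per row, then take -log
np_loss = -np.log(x[np.arange(len(labels)), labels])

# np_loss is close to [1.2040 0.5108 0.6931 2.3026 0.2231]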
oneflow.nn.sparse_softmax_cross_entropy_with_logits(labels: oneflow_api.BlobDesc, logits: oneflow_api.BlobDesc, name: Optional[str] = None) → oneflow_api.BlobDesc

Computes sparse softmax cross entropy between logits and labels.

Parameters
  • labels (oneflow_api.BlobDesc) – Blob of shape [d_0, d_1, …, d_{r-1}] (where r is rank of labels and result). Each entry in labels must be an index in [0, num_classes).

  • logits (oneflow_api.BlobDesc) – Unscaled log probabilities of shape [d_0, d_1, …, d_{r-1},num_classes].

  • name (Optional[str], optional) – This operator’s name (optional). Defaults to None.

Raises

ValueError – If logits are scalars (need to have rank >= 1) or if the rank of the labels is not equal to the rank of the logits minus one.

Returns

A Blob of the same shape as labels and of the same type as logits with the softmax cross entropy loss.

Return type

oneflow_api.BlobDesc

Note

The labels data type should be oneflow.int32.

For example:

import oneflow as flow
import numpy as np
import oneflow.typing as tp


@flow.global_function()
def sparse_softmax_cross_entropy_Job(input: tp.Numpy.Placeholder((3, 3), dtype=flow.float32),
                                     labels: tp.Numpy.Placeholder((3, ), dtype=flow.int32)
) -> tp.Numpy:
    loss = flow.nn.sparse_softmax_cross_entropy_with_logits(labels=labels,
                                                            logits=input)
    return loss


x = np.array([[4, 1, 2],
            [3, 2, 3],
            [1, 5, 10]]).astype(np.float32)
labels = np.array([0, 1, 2]).astype(np.int32)
loss = sparse_softmax_cross_entropy_Job(x, labels)

# out [0.65784633 1.2842525  0.5557927 ]
oneflow.nn.swish(x: oneflow_api.BlobDesc, beta: float = 1.0, name: Optional[str] = None) → oneflow_api.BlobDesc

The Swish activation function.

The equation is:

\[out = x * sigmoid(\beta*x)\]

For example:

import oneflow as flow
import oneflow.typing as tp
import numpy as np


@flow.global_function()
def swish_job(x: tp.Numpy.Placeholder(shape=(5, )))->tp.Numpy:
    return flow.nn.swish(x)


x = np.array([-0.5, 0, 0.5, 1, 1.5]).astype(np.float32)
out = swish_job(x)

# output [-0.18877034  0.          0.31122968  0.7310586   1.2263618 ]
Parameters
  • x (oneflow_api.BlobDesc) – The input Blob.

  • beta (float, optional) – The smooth factor. Defaults to 1.0.

  • name (Optional[str], optional) – The name for the operation. Defaults to None.

Returns

The result Blob.

Return type

oneflow_api.BlobDesc
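
The swish output above can likewise be checked against out = x * sigmoid(beta * x) with NumPy; np_swish is just an illustrative helper name:

import numpy as np

def np_swish(x, beta=1.0):
    # swish(x) = x * sigmoid(beta * x)
    sigmoid = 1.0 / (1.0 + np.exp(-beta * x))
    return x * sigmoid

x = np.array([-0.5, 0, 0.5, 1, 1.5]).astype(np.float32)
# close to [-0.1888 0. 0.3112 0.7311 1.2264]
print(np_swish(x))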

oneflow.nn.torch_conv2d_transpose(value=None, filter=None, output_padding=None, strides=None, padding_needed=None, data_format='NCHW', name=None, input=None, filters=None, dilations=None)
oneflow.nn.tril(x: oneflow_api.BlobDesc, diagonal: int = 0, fill_value: Union[int, float] = 0, name: Optional[str] = None) → oneflow_api.BlobDesc

Computes the lower triangle of a matrix.

Parameters
  • x (oneflow_api.BlobDesc) – Input Blob.

  • diagonal (int) – Diagonal offset. When diagonal > 0, the diagonal is shifted upward; otherwise, it is shifted downward. Defaults to 0.

  • fill_value (Union[int, float]) – The value filled into the upper triangle. Defaults to 0.

  • name (Optional[str], optional) – The name for the operation. Defaults to None.

Attention

The dimension of x must be greater than or equal to 2.

Returns

The lower triangle Blob of the input.

Return type

oneflow_api.BlobDesc

For example:

import oneflow as flow
import numpy as np
import oneflow.typing as tp


@flow.global_function()
def tril_Job(x: tp.Numpy.Placeholder((4, 4))
)->tp.Numpy:
    return flow.nn.tril(x, 0)


x = np.array([[1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4]],
              dtype=np.float32)
out = tril_Job(x)

# output [[1, 0, 0, 0],
#         [1, 2, 0, 0],
#         [1, 2, 3, 0],
#         [1, 2, 3, 4]]
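
With the default fill_value of 0, the result matches NumPy's np.tril with the same diagonal offset (a cross-check, not the oneflow operator itself):

import numpy as np

x = np.array([[1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4]],
             dtype=np.float32)
# keep the lower triangle at diagonal offset 0, zero the rest
print(np.tril(x, k=0))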