oneflow.experimental¶
Experimental features¶
- oneflow.experimental.nn.ReLU(inplace: bool = False)¶ Applies the rectified linear unit function element-wise:
\(\text{ReLU}(x) = (x)^+ = \max(0, x)\)
- Parameters
inplace – can optionally do the operation in-place. Default: False
- Shape:
Input: \((N, *)\) where * means any number of additional dimensions
Output: \((N, *)\), same shape as the input
For example:
>>> import oneflow.experimental as flow
>>> import numpy as np
>>> flow.enable_eager_execution()
>>> relu = flow.nn.ReLU()
>>> ndarr = np.asarray([1, -2, 3])
>>> x = flow.Tensor(ndarr)
>>> relu(x)
tensor([1., 0., 3.], dtype=oneflow.float32)
- oneflow.experimental.nn.ReLU6(inplace: bool = False)¶ Applies the element-wise function:
\[\begin{split}\text{ReLU6}(x) = \begin{cases} 6 & \text{ if } x > 6 \\ 0 & \text{ if } x < 0 \\ x & \text{ otherwise } \\ \end{cases}\end{split}\]
- Parameters
inplace – can optionally do the operation in-place. Default: False
- Shape:
Input: \((N, *)\) where * means any number of additional dimensions
Output: \((N, *)\), same shape as the input
For example:
>>> import numpy as np
>>> import oneflow.experimental as flow
>>> flow.enable_eager_execution()
>>> x = np.array([-0.5, 0, 0.5]).astype(np.float32)
>>> input = flow.Tensor(x)
>>> relu6 = flow.nn.ReLU6()
>>> out = relu6(input)
>>> out
tensor([0. , 0. , 0.5], dtype=oneflow.float32)
- oneflow.experimental.nn.LeakyReLU(negative_slope: float = 0.01, inplace: bool = False)¶ Applies the element-wise function:
\[\begin{split}\text{LeakyReLU}(x) = \begin{cases} x, & \text{ if } x \geq 0 \\ \text{negative_slope} \times x, & \text{ otherwise } \end{cases}\end{split}\]
- Parameters
negative_slope – Controls the angle of the negative slope. Default: 1e-2
inplace – can optionally do the operation in-place. Default: False
- Shape:
Input: \((N, *)\) where * means any number of additional dimensions
Output: \((N, *)\), same shape as the input
For example:
>>> import numpy as np
>>> import oneflow.experimental as flow
>>> flow.enable_eager_execution()
>>> m = flow.nn.LeakyReLU(0.1)
>>> arr = np.array([0.2, 0.3, 3.0, 4.0])
>>> x = flow.Tensor(arr)
>>> out = m(x)
>>> out
tensor([0.2, 0.3, 3. , 4. ], dtype=oneflow.float32)
- oneflow.experimental.nn.Tanh()¶ This operator computes the hyperbolic tangent of the input Tensor.
The equation is:
\[out = \frac{e^x-e^{-x}}{e^x+e^{-x}}\]
- Parameters
x (oneflow.Tensor) – A Tensor
- Returns
The result Tensor
- Return type
oneflow.Tensor
For example:
>>> import numpy as np
>>> import oneflow.experimental as flow
>>> flow.enable_eager_execution()
>>> x = np.array([-1, 0, 1]).astype(np.float32)
>>> input = flow.Tensor(x)
>>> tanh = flow.nn.Tanh()
>>> out = tanh(input)
>>> out
tensor([-0.7616, 0. , 0.7616], dtype=oneflow.float32)
- oneflow.experimental.tanh(x)¶ This operator computes the hyperbolic tangent of the input Tensor.
The equation is:
\[out = \frac{e^x-e^{-x}}{e^x+e^{-x}}\]
- Parameters
x (oneflow.Tensor) – A Tensor
- Returns
The result Tensor
- Return type
oneflow.Tensor
For example:
>>> import numpy as np
>>> import oneflow.experimental as flow
>>> flow.enable_eager_execution()
>>> x = np.array([-1, 0, 1]).astype(np.float32)
>>> input = flow.Tensor(x)
>>> out = flow.tanh(input)
>>> out
tensor([-0.7616, 0. , 0.7616], dtype=oneflow.float32)
- oneflow.experimental.Tensor.tanh(x)¶ This operator computes the hyperbolic tangent of the input Tensor.
The equation is:
\[out = \frac{e^x-e^{-x}}{e^x+e^{-x}}\]
- Parameters
x (oneflow.Tensor) – A Tensor
- Returns
The result Tensor
- Return type
oneflow.Tensor
For example:
>>> import numpy as np
>>> import oneflow.experimental as flow
>>> flow.enable_eager_execution()
>>> x = np.array([-1, 0, 1]).astype(np.float32)
>>> input = flow.Tensor(x)
>>> out = input.tanh()
>>> out
tensor([-0.7616, 0. , 0.7616], dtype=oneflow.float32)
- oneflow.experimental.asin(input)¶ Returns a new tensor with the arcsine of the elements of input.
\[\text{out}_{i} = \sin^{-1}(\text{input}_{i})\]
- Parameters
input (Tensor) – the input tensor.
For example:
>>> import oneflow.experimental as flow
>>> import numpy as np
>>> flow.enable_eager_execution()
>>> input = flow.Tensor(np.array([-0.5, 0.8, 1.0, -0.8]), dtype=flow.float32)
>>> output = flow.asin(input)
>>> output.shape
flow.Size([4])
>>> output
tensor([-0.5236, 0.9273, 1.5708, -0.9273], dtype=oneflow.float32)
>>> input1 = flow.Tensor(np.array([[0.8, 1.0], [-0.6, -1.0]]), dtype=flow.float32)
>>> output1 = input1.asin()
>>> output1.shape
flow.Size([2, 2])
>>> output1
tensor([[ 0.9273, 1.5708],
        [-0.6435, -1.5708]], dtype=oneflow.float32)
- oneflow.experimental.Tensor.asin(input)¶
- oneflow.experimental.arcsin(input)¶ Alias for oneflow.experimental.asin()
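For example (a short sketch; the values mirror the asin example above, since this is an alias):
>>> import numpy as np
>>> import oneflow.experimental as flow
>>> flow.enable_eager_execution()
>>> flow.arcsin(flow.Tensor(np.array([-0.5, 0.5]), dtype=flow.float32))
tensor([-0.5236, 0.5236], dtype=oneflow.float32)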
- oneflow.experimental.Tensor.arcsin(input)¶
- oneflow.experimental.asinh(input)¶ Returns a new tensor with the inverse hyperbolic sine of the elements of input.
\[\text{out}_{i} = \sinh^{-1}(\text{input}_{i})\]
- Parameters
input (Tensor) – the input tensor.
For example:
>>> import oneflow.experimental as flow
>>> import numpy as np
>>> flow.enable_eager_execution()
>>> input = flow.Tensor(np.array([2, 3, 4]), dtype=flow.float32)
>>> output = flow.asinh(input)
>>> output.shape
flow.Size([3])
>>> output
tensor([1.4436, 1.8184, 2.0947], dtype=oneflow.float32)
>>> input1 = flow.Tensor(np.array([[-1, 0, -0.4], [5, 7, 0.8]]), dtype=flow.float32)
>>> output1 = input1.asinh()
>>> output1.shape
flow.Size([2, 3])
>>> output1
tensor([[-0.8814, 0. , -0.39 ],
        [ 2.3124, 2.6441, 0.7327]], dtype=oneflow.float32)
- oneflow.experimental.Tensor.asinh(input)¶
- oneflow.experimental.arcsinh(input)¶ Alias for oneflow.experimental.asinh()
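For example (a short sketch; the values mirror the asinh example above, since this is an alias):
>>> import numpy as np
>>> import oneflow.experimental as flow
>>> flow.enable_eager_execution()
>>> flow.arcsinh(flow.Tensor(np.array([-1.0, 1.0]), dtype=flow.float32))
tensor([-0.8814, 0.8814], dtype=oneflow.float32)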
- oneflow.experimental.Tensor.arcsinh(input)¶
- oneflow.experimental.nn.ELU(alpha: float = 1.0, inplace: bool = False)¶ Applies the element-wise function:
\[\begin{split}\text{ELU}(x) = \begin{cases} x & \text{ if } x \gt 0 \\ \alpha*(\exp(x)-1) & \text{ if } x \le 0 \\ \end{cases}\end{split}\]
- Parameters
alpha – the \(\alpha\) value for the ELU formulation. Default: 1.0
inplace – can optionally do the operation in-place. Default: False
- Shape:
Input: \((N, *)\) where * means any number of additional dimensions
Output: \((N, *)\), same shape as the input
For example:
>>> import numpy as np
>>> import oneflow.experimental as flow
>>> flow.enable_eager_execution()
>>> x = np.array([-0.5, 0, 0.5]).astype(np.float32)
>>> input = flow.Tensor(x)
>>> elu = flow.nn.ELU()
>>> out = elu(input)
>>> out
tensor([-0.3935, 0. , 0.5 ], dtype=oneflow.float32)
- oneflow.experimental.nn.GELU()¶ Gelu activation operator.
The equation is:
\[out = 0.5 * x * (1 + \tanh(\sqrt{\frac{2}{\pi}} * (x + 0.044715x^{3})))\]
- Parameters
x (oneflow.Tensor) – Input Tensor
- Returns
A Tensor.
- Return type
oneflow.Tensor
For example:
>>> import numpy as np
>>> import oneflow.experimental as flow
>>> flow.enable_eager_execution()
>>> x = np.array([-0.5, 0, 0.5]).astype(np.float32)
>>> input = flow.Tensor(x)
>>> gelu = flow.nn.GELU()
>>> out = gelu(input)
>>> out
tensor([-0.1543, 0. , 0.3457], dtype=oneflow.float32)
- oneflow.experimental.gelu(x)¶ Gelu activation operator.
The equation is:
\[out = 0.5 * x * (1 + \tanh(\sqrt{\frac{2}{\pi}} * (x + 0.044715x^{3})))\]
- Parameters
x (oneflow.Tensor) – Input Tensor
- Returns
A Tensor.
- Return type
oneflow.Tensor
For example:
>>> import numpy as np
>>> import oneflow.experimental as flow
>>> flow.enable_eager_execution()
>>> x = np.array([-0.5, 0, 0.5]).astype(np.float32)
>>> input = flow.Tensor(x)
>>> out = flow.gelu(input)
>>> out
tensor([-0.1543, 0. , 0.3457], dtype=oneflow.float32)
- oneflow.experimental.Tensor.gelu(x)¶ Gelu activation operator.
The equation is:
\[out = 0.5 * x * (1 + \tanh(\sqrt{\frac{2}{\pi}} * (x + 0.044715x^{3})))\]
- Parameters
x (oneflow.Tensor) – Input Tensor
- Returns
A Tensor.
- Return type
oneflow.Tensor
For example:
>>> import numpy as np
>>> import oneflow.experimental as flow
>>> flow.enable_eager_execution()
>>> x = np.array([-0.5, 0, 0.5]).astype(np.float32)
>>> input = flow.Tensor(x)
>>> out = input.gelu()
>>> out
tensor([-0.1543, 0. , 0.3457], dtype=oneflow.float32)
- oneflow.experimental.nn.Sigmoid()¶ Applies the element-wise function:
\[\text{Sigmoid}(x) = \sigma(x) = \frac{1}{1 + \exp(-x)}\]
- Shape:
Input: \((N, *)\) where * means any number of additional dimensions
Output: \((N, *)\), same shape as the input
For example:
>>> import numpy as np
>>> import oneflow.experimental as flow
>>> flow.enable_eager_execution()
>>> x = flow.Tensor(np.array([0.81733328, 0.43621480, 0.10351428]))
>>> m = flow.nn.Sigmoid()
>>> out = m(x)
>>> out
tensor([0.6937, 0.6074, 0.5259], dtype=oneflow.float32)
- oneflow.experimental.sigmoid(x)¶ Applies the element-wise function:
\[\text{Sigmoid}(x) = \sigma(x) = \frac{1}{1 + \exp(-x)}\]
- Shape:
Input: \((N, *)\) where * means any number of additional dimensions
Output: \((N, *)\), same shape as the input
For example:
>>> import numpy as np
>>> import oneflow.experimental as flow
>>> flow.enable_eager_execution()
>>> x = flow.Tensor(np.array([0.81733328, 0.43621480, 0.10351428]))
>>> out = flow.sigmoid(x)
>>> out
tensor([0.6937, 0.6074, 0.5259], dtype=oneflow.float32)
- oneflow.experimental.Tensor.sigmoid(x)¶ Applies the element-wise function:
\[\text{Sigmoid}(x) = \sigma(x) = \frac{1}{1 + \exp(-x)}\]
- Shape:
Input: \((N, *)\) where * means any number of additional dimensions
Output: \((N, *)\), same shape as the input
For example:
>>> import numpy as np
>>> import oneflow.experimental as flow
>>> flow.enable_eager_execution()
>>> x = flow.Tensor(np.array([0.81733328, 0.43621480, 0.10351428]))
>>> out = x.sigmoid()
>>> out
tensor([0.6937, 0.6074, 0.5259], dtype=oneflow.float32)
- oneflow.experimental.nn.Hardsigmoid(inplace: bool = False)¶ Applies the element-wise function:
\[\begin{split}\text{Hardsigmoid}(x) = \begin{cases} 0 & \text{ if } x \le -3 \\ 1 & \text{ if } x \ge +3 \\ \frac{x}{6} + \frac{1}{2} & \text{ otherwise } \\ \end{cases}\end{split}\]
- Parameters
inplace – can optionally do the operation in-place. Default: False
- Shape:
Input: \((N, *)\) where * means any number of additional dimensions
Output: \((N, *)\), same shape as the input
For example:
>>> import numpy as np
>>> import oneflow.experimental as flow
>>> flow.enable_eager_execution()
>>> x = np.array([-0.5, 0, 0.5]).astype(np.float32)
>>> input = flow.Tensor(x)
>>> hardsigmoid = flow.nn.Hardsigmoid()
>>> out = hardsigmoid(input)
>>> out
tensor([0.4167, 0.5 , 0.5833], dtype=oneflow.float32)
- oneflow.experimental.softmax(tensor, dim=None)¶ Applies the Softmax function to an n-dimensional input Tensor, rescaling it so that the elements of the n-dimensional output Tensor lie in the range [0,1] and sum to 1.
Softmax is defined as:
\[\text{Softmax}(x_{i}) = \frac{\exp(x_i)}{\sum_j \exp(x_j)}\]
When the input Tensor is a sparse tensor, the unspecified values are treated as -inf.
- Shape:
Input: \((*)\) where * means any number of additional dimensions
Output: \((*)\), same shape as the input
- Returns
a Tensor of the same dimension and shape as the input with values in the range [0, 1]
- Parameters
dim (int) – A dimension along which Softmax will be computed (so every slice along dim will sum to 1).
For example:
>>> import numpy as np
>>> import oneflow.experimental as flow
>>> flow.enable_eager_execution()
>>> m = flow.nn.Softmax(dim=2)
>>> x = flow.Tensor(
...     np.array(
...         [[[-0.46716809, 0.40112534, 0.61984003],
...           [-1.31244969, -0.42528763, 1.47953856]]]
...     )
... )
>>> out = m(x)
>>> out
tensor([[[0.1575, 0.3754, 0.4671],
         [0.0507, 0.123 , 0.8263]]], dtype=oneflow.float32)
- oneflow.experimental.Tensor.softmax(tensor, dim=None)¶ Applies the Softmax function to an n-dimensional input Tensor, rescaling it so that the elements of the n-dimensional output Tensor lie in the range [0,1] and sum to 1.
Softmax is defined as:
\[\text{Softmax}(x_{i}) = \frac{\exp(x_i)}{\sum_j \exp(x_j)}\]
When the input Tensor is a sparse tensor, the unspecified values are treated as -inf.
- Shape:
Input: \((*)\) where * means any number of additional dimensions
Output: \((*)\), same shape as the input
- Returns
a Tensor of the same dimension and shape as the input with values in the range [0, 1]
- Parameters
dim (int) – A dimension along which Softmax will be computed (so every slice along dim will sum to 1).
For example:
>>> import numpy as np
>>> import oneflow.experimental as flow
>>> flow.enable_eager_execution()
>>> m = flow.nn.Softmax(dim=2)
>>> x = flow.Tensor(
...     np.array(
...         [[[-0.46716809, 0.40112534, 0.61984003],
...           [-1.31244969, -0.42528763, 1.47953856]]]
...     )
... )
>>> out = m(x)
>>> out
tensor([[[0.1575, 0.3754, 0.4671],
         [0.0507, 0.123 , 0.8263]]], dtype=oneflow.float32)
- oneflow.experimental.nn.LogSigmoid()¶ Applies the element-wise function:
\[\text{LogSigmoid}(x) = \log\left(\frac{ 1 }{ 1 + \exp(-x)}\right)\]
- Shape:
Input: \((N, *)\) where * means any number of additional dimensions
Output: \((N, *)\), same shape as the input
For example:
>>> import numpy as np
>>> import oneflow.experimental as flow
>>> flow.enable_eager_execution()
>>> x = np.array([-0.5, 0, 0.5]).astype(np.float32)
>>> input = flow.Tensor(x)
>>> logsigmoid = flow.nn.LogSigmoid()
>>> out = logsigmoid(input)
>>> out
tensor([-0.9741, -0.6931, -0.4741], dtype=oneflow.float32)
- oneflow.experimental.nn.Softplus(beta: int = 1, threshold: int = 20)¶ Applies the element-wise function:
\[\text{Softplus}(x) = \frac{1}{\beta} * \log(1 + \exp(\beta * x))\]
SoftPlus is a smooth approximation to the ReLU function and can be used to constrain the output of a machine to always be positive.
For numerical stability the implementation reverts to the linear function when \(input \times \beta > threshold\).
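As a sketch of that reversion (assuming the default threshold of 20 documented below; note that log(1 + exp(25)) is numerically 25 at the printed precision either way):
>>> import numpy as np
>>> import oneflow.experimental as flow
>>> flow.enable_eager_execution()
>>> softplus = flow.nn.Softplus()
>>> softplus(flow.Tensor(np.array([25.0], dtype=np.float32)))
tensor([25.], dtype=oneflow.float32)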
- Parameters
beta – the \(\beta\) value for the Softplus formulation. Default: 1
threshold – values above this revert to a linear function. Default: 20
- Shape:
Input: \((N, *)\) where * means any number of additional dimensions
Output: \((N, *)\), same shape as the input
For example:
>>> import numpy as np
>>> import oneflow.experimental as flow
>>> flow.enable_eager_execution()
>>> x = np.array([-0.5, 0, 0.5]).astype(np.float32)
>>> input = flow.Tensor(x)
>>> softplus = flow.nn.Softplus()
>>> out = softplus(input)
>>> out
tensor([0.4741, 0.6931, 0.9741], dtype=oneflow.float32)
- oneflow.experimental.nn.LogSoftmax(dim: Optional[int] = 1)¶ Applies the \(\log(\text{Softmax}(x))\) function to an n-dimensional input Tensor. The LogSoftmax formulation can be simplified as:
\[\text{LogSoftmax}(x_{i}) = \log\left(\frac{\exp(x_i) }{ \sum_j \exp(x_j)} \right)\]
- Parameters
dim (int) – A dimension along which LogSoftmax will be computed.
- Shape:
Input: \((N, *)\) where * means any number of additional dimensions
Output: \((N, *)\), same shape as the input
For example:
>>> import numpy as np
>>> import oneflow.experimental as flow
>>> flow.enable_eager_execution()
>>> m = flow.nn.LogSoftmax(dim=1)
>>> x = flow.Tensor(
...     np.array(
...         [[ 0.4296, -1.1957, 2.5463],
...          [ 1.2552, -1.5747, 0.6923]]
...     )
... )
>>> out = m(x)
>>> out
tensor([[-2.2513, -3.8766, -0.1346],
        [-0.4877, -3.3176, -1.0506]], dtype=oneflow.float32)
- oneflow.experimental.arange(start: int = 0, end: int = None, step: int = 1, dtype: oneflow._oneflow_internal.dtype = oneflow.int64, device: Union[str, oneflow.device] = 'cpu', requires_grad: bool = False)¶ Returns a 1-D tensor of size \(\left\lceil \frac{\text{end} - \text{start}}{\text{step}} \right\rceil\) with values from start to end, taken with common difference step. step is the gap between two adjacent values in the tensor.
\[\text{out}_{i+1} = \text{out}_i + \text{step}.\]
- Parameters
start (int) – the starting value for the set of points. Default: 0.
end (int) – the ending value for the set of points
step (int) – the gap between each pair of adjacent points. Default: 1.
- Keyword Arguments
dtype (flow.dtype, optional) – If dtype is not given, the dtype is inferred to be flow.int64.
device (flow.device, optional) – the desired device of returned tensor. Default: if None, uses the current device for the default tensor.
requires_grad (bool, optional) – If autograd should record operations on the returned tensor. Default: False.
For example:
>>> import oneflow.experimental as flow
>>> flow.enable_eager_execution()
>>> y = flow.arange(0, 5)
>>> y
tensor([0, 1, 2, 3, 4], dtype=oneflow.int64)
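A non-default step, as a quick sketch continuing the session above (the length follows the size formula given for this function):
>>> flow.arange(0, 10, 2)
tensor([0, 2, 4, 6, 8], dtype=oneflow.int64)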
- oneflow.experimental.argwhere(x, dtype: Optional[oneflow._oneflow_internal.dtype] = None)¶ This operator finds the indices of input Tensor x elements that are non-zero.
It returns a list in which each element is a coordinate of a non-zero element in x.
- Parameters
x (oneflow.Tensor) – The input Tensor.
dtype (Optional[flow.dtype], optional) – The data type of output. Defaults to None.
- Returns
The result Tensor.
- Return type
oneflow.Tensor
For example:
>>> import numpy as np
>>> import oneflow.experimental as flow
>>> flow.enable_eager_execution()
>>> x = np.array([[0, 1, 0],
...               [2, 0, 2]]).astype(np.float32)
>>> input = flow.Tensor(x)
>>> output = flow.argwhere(input)
>>> output
tensor([[0, 1],
        [1, 0],
        [1, 2]], dtype=oneflow.int32)
- oneflow.experimental.Tensor.argwhere() → Tensor¶
- oneflow.experimental.argmax(input, dim: int = None, keepdim: bool = False)¶ Computes the index of the largest value of a Tensor along the specified dimension.
- Parameters
input (oneflow.Tensor) – Input Tensor
dim (int, optional) – dimension to be calculated. Defaults to the last dim (-1)
keepdim (bool, optional) – whether the output tensor has dim retained or not. Ignored if dim=None.
- Returns
A Tensor (dtype=int32) containing the index with the largest value of input
- Return type
oneflow.Tensor
For example:
>>> import numpy as np
>>> import oneflow.experimental as flow
>>> flow.enable_eager_execution()
>>> x = np.array([[1, 3, 8, 7, 2],
...               [1, 9, 4, 3, 2]], dtype=np.float32)
>>> out = flow.argmax(flow.Tensor(x))
>>> out
tensor([6], dtype=oneflow.int32)
>>> out = flow.argmax(flow.Tensor(x), dim=1)
>>> out
tensor([2, 1], dtype=oneflow.int32)
- oneflow.experimental.Tensor.argmax(input, dim: int = None, keepdim: bool = False)¶ Computes the index of the largest value of a Tensor along the specified dimension.
- Parameters
input (oneflow.Tensor) – Input Tensor
dim (int, optional) – dimension to be calculated. Defaults to the last dim (-1)
keepdim (bool, optional) – whether the output tensor has dim retained or not. Ignored if dim=None.
- Returns
A Tensor (dtype=int32) containing the index with the largest value of input
- Return type
oneflow.Tensor
For example:
>>> import numpy as np
>>> import oneflow.experimental as flow
>>> flow.enable_eager_execution()
>>> x = np.array([[1, 3, 8, 7, 2],
...               [1, 9, 4, 3, 2]], dtype=np.float32)
>>> input = flow.Tensor(x)
>>> out = input.argmax()
>>> out
tensor([6], dtype=oneflow.int32)
>>> out = input.argmax(dim=1)
>>> out
tensor([2, 1], dtype=oneflow.int32)
- oneflow.experimental.nn.BatchNorm1d(num_features, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)¶ Applies Batch Normalization over a 2D or 3D input (a mini-batch of 1D inputs with optional additional channel dimension) as described in the paper Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift.
\[y = \frac{x - \mathrm{E}[x]}{\sqrt{\mathrm{Var}[x] + \epsilon}} * \gamma + \beta\]
The mean and standard-deviation are calculated per-dimension over the mini-batches and \(\gamma\) and \(\beta\) are learnable parameter vectors of size C (where C is the input size). By default, the elements of \(\gamma\) are set to 1 and the elements of \(\beta\) are set to 0. The standard-deviation is calculated via the biased estimator, equivalent to torch.var(input, unbiased=False).
Also by default, during training this layer keeps running estimates of its computed mean and variance, which are then used for normalization during evaluation. The running estimates are kept with a default momentum of 0.1.
If track_running_stats is set to False, this layer then does not keep running estimates, and batch statistics are instead used during evaluation time as well.
Note
This momentum argument is different from one used in optimizer classes and the conventional notion of momentum. Mathematically, the update rule for running statistics here is \(\hat{x}_\text{new} = (1 - \text{momentum}) \times \hat{x} + \text{momentum} \times x_t\), where \(\hat{x}\) is the estimated statistic and \(x_t\) is the new observed value.
Because the Batch Normalization is done over the C dimension, computing statistics on (N, L) slices, it’s common terminology to call this Temporal Batch Normalization.
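As a concrete instance of the update rule in the note above: with momentum = 0.1, a running estimate \(\hat{x} = 2.0\), and a new observed batch statistic \(x_t = 4.0\), the updated estimate is \((1 - 0.1) \times 2.0 + 0.1 \times 4.0 = 2.2\).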
- Parameters
num_features – \(C\) from an expected input of size \((N, C, L)\) or \(L\) from input of size \((N, L)\)
eps – a value added to the denominator for numerical stability. Default: 1e-5
momentum – the value used for the running_mean and running_var computation. Can be set to None for cumulative moving average (i.e. simple average). Default: 0.1
affine – a boolean value that when set to True, this module has learnable affine parameters. Default: True
track_running_stats – a boolean value that when set to True, this module tracks the running mean and variance, and when set to False, this module does not track such statistics and initializes statistics buffers running_mean and running_var as None. When these buffers are None, this module always uses batch statistics in both training and eval modes. Default: True
- Shape:
Input: \((N, C)\) or \((N, C, L)\)
Output: \((N, C)\) or \((N, C, L)\) (same shape as input)
For example:
>>> import oneflow.experimental as flow
>>> import numpy as np
>>> flow.enable_eager_execution()
>>> x = flow.Tensor(np.random.randn(20, 100))
>>> m = flow.nn.BatchNorm1d(100)
>>> y = m(x)
- oneflow.experimental.nn.BatchNorm2d(num_features, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)¶ Applies Batch Normalization over a 4D input (a mini-batch of 2D inputs with additional channel dimension) as described in the paper Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift.
\[y = \frac{x - \mathrm{E}[x]}{ \sqrt{\mathrm{Var}[x] + \epsilon}} * \gamma + \beta\]
The mean and standard-deviation are calculated per-dimension over the mini-batches and \(\gamma\) and \(\beta\) are learnable parameter vectors of size C (where C is the input size). By default, the elements of \(\gamma\) are set to 1 and the elements of \(\beta\) are set to 0. The standard-deviation is calculated via the biased estimator, equivalent to torch.var(input, unbiased=False).
Also by default, during training this layer keeps running estimates of its computed mean and variance, which are then used for normalization during evaluation. The running estimates are kept with a default momentum of 0.1.
If track_running_stats is set to False, this layer then does not keep running estimates, and batch statistics are instead used during evaluation time as well.
Note
This momentum argument is different from one used in optimizer classes and the conventional notion of momentum. Mathematically, the update rule for running statistics here is \(\hat{x}_\text{new} = (1 - \text{momentum}) \times \hat{x} + \text{momentum} \times x_t\), where \(\hat{x}\) is the estimated statistic and \(x_t\) is the new observed value.
Because the Batch Normalization is done over the C dimension, computing statistics on (N, H, W) slices, it’s common terminology to call this Spatial Batch Normalization.
- Parameters
num_features – \(C\) from an expected input of size \((N, C, H, W)\)
eps – a value added to the denominator for numerical stability. Default: 1e-5
momentum – the value used for the running_mean and running_var computation. Can be set to None for cumulative moving average (i.e. simple average). Default: 0.1
affine – a boolean value that when set to True, this module has learnable affine parameters. Default: True
track_running_stats – a boolean value that when set to True, this module tracks the running mean and variance, and when set to False, this module does not track such statistics and initializes statistics buffers running_mean and running_var as None. When these buffers are None, this module always uses batch statistics in both training and eval modes. Default: True
- Shape:
Input: \((N, C, H, W)\)
Output: \((N, C, H, W)\) (same shape as input)
For example:
>>> import oneflow.experimental as flow
>>> import numpy as np
>>> flow.enable_eager_execution()
>>> x = flow.Tensor(np.random.randn(4, 2, 8, 3))
>>> m = flow.nn.BatchNorm2d(num_features=2, eps=1e-5, momentum=0.1)
>>> y = m(x)
- oneflow.experimental.nn.LayerNorm(normalized_shape: Union[int, Tuple[int], oneflow.Size], eps: float = 1e-05, elementwise_affine: bool = True) → None¶ Applies Layer Normalization over a mini-batch of inputs as described in the paper Layer Normalization
\[y = \frac{x - \mathrm{E}[x]}{ \sqrt{\mathrm{Var}[x] + \epsilon}} * \gamma + \beta\]
The mean and standard-deviation are calculated separately over the last certain number of dimensions, which have to be of the shape specified by normalized_shape. \(\gamma\) and \(\beta\) are learnable affine transform parameters of normalized_shape if elementwise_affine is True. The standard-deviation is calculated via the biased estimator.
Note
Unlike Batch Normalization and Instance Normalization, which apply scalar scale and bias for each entire channel/plane with the affine option, Layer Normalization applies per-element scale and bias with elementwise_affine.
This layer uses statistics computed from input data in both training and evaluation modes.
- Parameters
normalized_shape (int or list or oneflow.Size) – input shape from an expected input of size
\[[* \times \text{normalized_shape}[0] \times \text{normalized_shape}[1] \times \ldots \times \text{normalized_shape}[-1]]\]
If a single integer is used, it is treated as a singleton list, and this module will normalize over the last dimension, which is expected to be of that specific size.
eps – a value added to the denominator for numerical stability. Default: 1e-5
elementwise_affine – a boolean value that when set to True, this module has learnable per-element affine parameters initialized to ones (for weights) and zeros (for biases). Default: True.
- Shape:
Input: \((N, *)\)
Output: \((N, *)\) (same shape as input)
For example:
>>> import numpy as np
>>> import oneflow.experimental as flow
>>> flow.enable_eager_execution()
>>> input_arr = np.array(
...     [
...         [
...             [[-0.16046895, -1.03667831], [-0.34974465, 0.26505867]],
...             [[-1.24111986, -0.53806001], [1.72426331, 0.43572459]],
...         ],
...         [
...             [[-0.77390957, -0.42610624], [0.16398858, -1.35760343]],
...             [[1.07541728, 0.11008703], [0.26361224, -0.48663723]],
...         ],
...     ],
...     dtype=np.float32,
... )
>>> x = flow.Tensor(input_arr)
>>> m = flow.nn.LayerNorm(2)
>>> y = m(x).numpy()
>>> y
array([[[[ 0.99997395, -0.99997395],
         [-0.999947  ,  0.999947  ]],
        [[-0.9999596 ,  0.9999594 ],
         [ 0.999988  , -0.999988  ]]],
       [[[-0.9998343 ,  0.9998341 ],
         [ 0.9999914 , -0.9999914 ]],
        [[ 0.99997866, -0.99997866],
         [ 0.9999646 , -0.9999646 ]]]], dtype=float32)
- oneflow.experimental.cast(x, dtype)¶ This operation takes the input tensor x and casts it to the given dtype.
- Parameters
x (oneflow.Tensor) – A Tensor
dtype (flow.dtype) – Data type of the output tensor
- Returns
A Tensor with the specified dtype.
- Return type
oneflow.Tensor
For example:
>>> import oneflow.experimental as flow
>>> import numpy as np
>>> flow.enable_eager_execution()
>>> np_arr = np.random.randn(2, 3, 4, 5).astype(np.float32)
>>> input = flow.Tensor(np_arr, dtype=flow.float32)
>>> output = flow.cast(input, flow.int8)
>>> np.array_equal(output.numpy(), np_arr.astype(np.int8))
True
- oneflow.experimental.Tensor.cast(x, dtype)¶ This operation takes the input tensor x and casts it to the given dtype.
- Parameters
x (oneflow.Tensor) – A Tensor
dtype (flow.dtype) – Data type of the output tensor
- Returns
A Tensor with the specified dtype.
- Return type
oneflow.Tensor
For example:
>>> import oneflow.experimental as flow
>>> import numpy as np
>>> flow.enable_eager_execution()
>>> np_arr = np.random.randn(2, 3, 4, 5).astype(np.float32)
>>> input = flow.Tensor(np_arr, dtype=flow.float32)
>>> output = input.cast(flow.int8)
>>> np.array_equal(output.numpy(), np_arr.astype(np.int8))
True
- oneflow.experimental.cat(inputs, dim=0)¶ Concatenates two or more Tensors along the specified dimension.
Analogous to numpy.concatenate
- Parameters
inputs – a list of Tensors
dim – an int; the dimension along which to concatenate
- Returns
A Tensor
For example:
>>> import oneflow.experimental as flow
>>> flow.enable_eager_execution()
>>> import numpy as np
>>> input1 = flow.Tensor(np.random.randn(2, 6, 5, 3), dtype=flow.float32)
>>> input2 = flow.Tensor(np.random.randn(2, 6, 5, 3), dtype=flow.float32)
>>> input3 = flow.Tensor(np.random.randn(2, 6, 5, 3), dtype=flow.float32)
>>> out = flow.cat([input1, input2, input3], dim=1)
>>> out.shape
flow.Size([2, 18, 5, 3])
- oneflow.experimental.ones(size: Union[int, Tuple[int, ...], oneflow.Size], dtype: Optional[oneflow._oneflow_internal.dtype] = None, device: Union[oneflow.device, str, None] = None, requires_grad: bool = False)¶ Returns a tensor filled with the scalar value 1, with the shape defined by the variable argument size.
- Parameters
size (an integer or tuple of integer values) – a variable number of arguments or a collection like a list or tuple.
dtype (flow.dtype, optional) – the desired data type of the returned tensor.
device (flow.device, optional) – if None, uses the current device for the default tensor type
requires_grad (bool, optional) – If autograd should record operations on the returned tensor. Default: False.
For example:
>>> import oneflow.experimental as flow
>>> flow.enable_eager_execution()
>>> y = flow.ones(5)
>>> y
tensor([1., 1., 1., 1., 1.], dtype=oneflow.float32)
- oneflow.experimental.zeros(size: Union[int, Tuple[int, ...], oneflow.Size], dtype: Optional[oneflow._oneflow_internal.dtype] = None, device: Union[oneflow.device, str, None] = None, requires_grad: bool = False)¶ Returns a tensor filled with the scalar value 0, with the shape defined by the variable argument size.
- Parameters
size (an integer or tuple of integer values) – a variable number of arguments or a collection like a list or tuple.
dtype (flow.dtype, optional) – the desired data type of the returned tensor.
device (flow.device, optional) – if None, uses the current device for the default tensor type
requires_grad (bool, optional) – If autograd should record operations on the returned tensor. Default: False.
For example:
>>> import oneflow.experimental as flow
>>> flow.enable_eager_execution()
>>> y = flow.zeros(5)
>>> y
tensor([0., 0., 0., 0., 0.], dtype=oneflow.float32)
- oneflow.experimental.zeros_like(other)¶ Returns a tensor filled with the scalar value 0, with the same size as other. flow.zeros_like(input) is equivalent to flow.zeros(input.shape, dtype=input.dtype)
- Parameters
other (Tensor) – The size of other determines the size of the output tensor.
For example:
>>> import oneflow.experimental as flow
>>> import numpy as np
>>> flow.enable_eager_execution()
>>> x = flow.Tensor(np.random.rand(5))
>>> y = flow.zeros_like(x)
>>> y
tensor([0., 0., 0., 0., 0.], dtype=oneflow.float32)
- oneflow.experimental.ones_like(other)¶ Returns a tensor filled with the scalar value 1, with the same size as other. flow.ones_like(input) is equivalent to flow.ones(input.shape, dtype=input.dtype)
- Parameters
other (Tensor) – The size of other determines the size of the output tensor.
For example:
>>> import oneflow.experimental as flow
>>> import numpy as np
>>> flow.enable_eager_execution()
>>> x = flow.Tensor(np.random.rand(5))
>>> y = flow.ones_like(x)
>>> y
tensor([1., 1., 1., 1., 1.], dtype=oneflow.float32)
- oneflow.experimental.nn.Module()¶
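The base class that the modules in this section derive from. A minimal subclassing sketch (assuming the usual __init__/forward convention used by the built-in modules; MyModel is a hypothetical name, not part of the API):
>>> import oneflow.experimental as flow
>>> flow.enable_eager_execution()
>>> class MyModel(flow.nn.Module):  # hypothetical user-defined module
...     def __init__(self):
...         super().__init__()
...         self.linear = flow.nn.Linear(4, 2)  # registered as a sub-module
...     def forward(self, x):
...         return self.linear(x)
...
>>> m = MyModel()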
- oneflow.experimental.nn.Parameter(data, requires_grad=True)¶
- oneflow.experimental.nn.Sequential(*args: Any)¶ A sequential container. Modules will be added to it in the order they are passed in the constructor. Alternatively, an ordered dict of modules can also be passed in.
To make it easier to understand, here is a small example:
>>> import oneflow.experimental.nn as nn
>>> from collections import OrderedDict
>>> nn.Sequential(nn.Conv2d(1,20,5), nn.ReLU(), nn.Conv2d(20,64,5), nn.ReLU())
Sequential(
  (0): Conv2d(1, 20, kernel_size=(5, 5), stride=(1, 1))
  (1): ReLU()
  (2): Conv2d(20, 64, kernel_size=(5, 5), stride=(1, 1))
  (3): ReLU()
)
>>> nn.Sequential(OrderedDict([
...     ('conv1', nn.Conv2d(1,20,5)),
...     ('relu1', nn.ReLU()),
...     ('conv2', nn.Conv2d(20,64,5)),
...     ('relu2', nn.ReLU())
... ]))
Sequential(
  (conv1): Conv2d(1, 20, kernel_size=(5, 5), stride=(1, 1))
  (relu1): ReLU()
  (conv2): Conv2d(20, 64, kernel_size=(5, 5), stride=(1, 1))
  (relu2): ReLU()
)
- oneflow.experimental.nn.ParameterList(parameters: Optional[Iterable[Parameter]] = None) → None¶
- oneflow.experimental.nn.ParameterDict(parameters: Optional[Mapping[str, Parameter]] = None) → None¶
- oneflow.experimental.nn.ModuleList(modules: Optional[Iterable[oneflow.python.nn.module.Module]] = None) → None¶
- oneflow.experimental.nn.ModuleDict(modules: Optional[Mapping[str, oneflow.python.nn.module.Module]] = None) → None¶
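A brief sketch of these containers (an assumption that they register sub-modules the way Sequential does; the sizes are illustrative):
>>> import oneflow.experimental as flow
>>> flow.enable_eager_execution()
>>> blocks = flow.nn.ModuleList([flow.nn.Linear(8, 8) for _ in range(3)])  # three registered Linear layers
>>> heads = flow.nn.ModuleDict({'cls': flow.nn.Linear(8, 2)})  # sub-modules keyed by name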
- oneflow.experimental.nn.Conv2d(in_channels: int, out_channels: int, kernel_size: Union[int, Tuple[int, int]], stride: Union[int, Tuple[int, int]] = 1, padding: Union[int, Tuple[int, int]] = 0, dilation: Union[int, Tuple[int, int]] = 1, groups: int = 1, bias: bool = True, padding_mode: str = 'zeros')¶ The interface is consistent with PyTorch. The documentation is referenced from: https://pytorch.org/docs/master/generated/torch.nn.Conv2d.html#conv2d
Applies a 2D convolution over an input signal composed of several input planes.
In the simplest case, the output value of the layer with input size \((N, C_{\text{in}}, H, W)\) and output \((N, C_{\text{out}}, H_{\text{out}}, W_{\text{out}})\) can be precisely described as:
\[\text{out}(N_i, C_{\text{out}_j}) = \text{bias}(C_{\text{out}_j}) + \sum_{k = 0}^{C_{\text{in}} - 1} \text{weight}(C_{\text{out}_j}, k) \star \text{input}(N_i, k)\]where \(\star\) is the valid 2D cross-correlation operator, \(N\) is a batch size, \(C\) denotes a number of channels, \(H\) is a height of input planes in pixels, and \(W\) is width in pixels.
stride controls the stride for the cross-correlation, a single number or a tuple.
padding controls the amount of implicit padding on both sides for padding number of points for each dimension.
dilation controls the spacing between the kernel points; also known as the à trous algorithm. It is harder to describe, but this link has a nice visualization of what dilation does.
groups controls the connections between inputs and outputs. in_channels and out_channels must both be divisible by groups. For example,
At groups=1, all inputs are convolved to all outputs.
At groups=2, the operation becomes equivalent to having two conv layers side by side, each seeing half the input channels and producing half the output channels, and both subsequently concatenated.
At groups=in_channels, each input channel is convolved with its own set of filters (of size \(\frac{\text{out_channels}}{\text{in_channels}}\)).
The parameters kernel_size, stride, padding, dilation can either be:
a single int – in which case the same value is used for the height and width dimension
a tuple of two ints – in which case, the first int is used for the height dimension, and the second int for the width dimension
Note
When groups == in_channels and out_channels == K * in_channels, where K is a positive integer, this operation is also known as a “depthwise convolution”.
In other words, for an input of size \((N, C_{in}, L_{in})\), a depthwise convolution with a depthwise multiplier K can be performed with the arguments \((C_\text{in}=C_\text{in}, C_\text{out}=C_\text{in} \times \text{K}, ..., \text{groups}=C_\text{in})\).
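A sketch of that depthwise configuration with hypothetical sizes (here K = 2, so out_channels = 2 × in_channels and groups = in_channels; the output height/width follow the shape formulas below):
>>> import numpy as np
>>> import oneflow.experimental as flow
>>> flow.enable_eager_execution()
>>> depthwise = flow.nn.Conv2d(16, 32, kernel_size=3, groups=16)  # hypothetical sizes
>>> x = flow.Tensor(np.random.randn(1, 16, 28, 28))
>>> depthwise(x).shape
flow.Size([1, 32, 26, 26])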
- Parameters
in_channels (int) – Number of channels in the input image
out_channels (int) – Number of channels produced by the convolution
kernel_size (int or tuple) – Size of the convolving kernel
stride (int or tuple, optional) – Stride of the convolution. Default: 1
padding (int or tuple, optional) – Zero-padding added to both sides of the input. Default: 0
padding_mode (string, optional) – 'zeros', 'reflect', 'replicate' or 'circular'. Default: 'zeros'
dilation (int or tuple, optional) – Spacing between kernel elements. Default: 1
groups (int, optional) – Number of blocked connections from input channels to output channels. Default: 1
bias (bool, optional) – If True, adds a learnable bias to the output. Default: True
- Shape:
Input: \((N, C_{in}, H_{in}, W_{in})\)
Output: \((N, C_{out}, H_{out}, W_{out})\) where
\[H_{out} = \left\lfloor\frac{H_{in} + 2 \times \text{padding}[0] - \text{dilation}[0] \times (\text{kernel_size}[0] - 1) - 1}{\text{stride}[0]} + 1\right\rfloor\]\[W_{out} = \left\lfloor\frac{W_{in} + 2 \times \text{padding}[1] - \text{dilation}[1] \times (\text{kernel_size}[1] - 1) - 1}{\text{stride}[1]} + 1\right\rfloor\]
- Attr:
- weight (Tensor): the learnable weights of the module of shape \((\text{out_channels}, \frac{\text{in_channels}}{\text{groups}},\) \(\text{kernel_size[0]}, \text{kernel_size[1]})\). The values of these weights are sampled from \(\mathcal{U}(-\sqrt{k}, \sqrt{k})\) where \(k = \frac{groups}{C_\text{in} * \prod_{i=0}^{1}\text{kernel_size}[i]}\)
- bias (Tensor): the learnable bias of the module of shape (out_channels). If bias is True, then the values of these weights are sampled from \(\mathcal{U}(-\sqrt{k}, \sqrt{k})\) where \(k = \frac{groups}{C_\text{in} * \prod_{i=0}^{1}\text{kernel_size}[i]}\)
For example:
>>> import numpy as np
>>> import oneflow.experimental as flow
>>> import oneflow.experimental.nn as nn
>>> flow.enable_eager_execution()
>>> arr = np.random.randn(20, 16, 50, 100)
>>> input = flow.Tensor(arr)
>>> m = nn.Conv2d(16, 33, (3, 5), stride=(2, 1), padding=(4, 2), dilation=(3, 1))
>>> output = m(input)
- oneflow.experimental.nn.Dropout(p: float = 0.5, inplace: bool = False, generator=None)¶ During training, randomly zeroes some of the elements of the input tensor with probability p using samples from a Bernoulli distribution. Each channel will be zeroed out independently on every forward call.
This has proven to be an effective technique for regularization and preventing the co-adaptation of neurons as described in the paper “Improving neural networks by preventing co-adaptation of feature detectors”.
Furthermore, the outputs are scaled by a factor of \(\frac{1}{1-p}\) during training. This means that during evaluation the module simply computes an identity function.
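A sketch of that scaling with a non-zero p (the outputs are random, so nothing is asserted; the comment describes the assumed pattern, not recorded output):
>>> import numpy as np
>>> import oneflow.experimental as flow
>>> flow.enable_eager_execution()
>>> m = flow.nn.Dropout(p=0.5)
>>> x = flow.Tensor(np.ones((3, 4), dtype=np.float32))
>>> y = m(x)  # in training, each entry is 0 with probability 0.5, else scaled to 1/(1-0.5) = 2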
- Parameters
p – probability of an element to be zeroed. Default: 0.5
inplace – If set to True, will do this operation in-place. Default: False
- Shape:
Input: \((*)\). Input can be of any shape
Output: \((*)\). Output is of the same shape as input
For example:
>>> import numpy as np
>>> import oneflow.experimental as flow
>>> flow.enable_eager_execution()
>>> m = flow.nn.Dropout(p=0)
>>> arr = np.array(
...     [
...         [-0.7797, 0.2264, 0.2458, 0.4163],
...         [0.4299, 0.3626, -0.4892, 0.4141],
...         [-1.4115, 1.2183, -0.5503, 0.6520],
...     ]
... )
>>> x = flow.Tensor(arr)
>>> y = m(x)
>>> y
tensor([[-0.7797, 0.2264, 0.2458, 0.4163],
        ...
        [-1.4115, 1.2183, -0.5503, 0.652 ]], dtype=oneflow.float32)
- oneflow.experimental.eq(input, other)¶ Computes element-wise equality. The second argument can be a number or a tensor whose shape is broadcastable with the first argument.
- Parameters
input (oneflow.Tensor) – the tensor to compare
other (oneflow.Tensor, float or int) – the target to compare
- Returns
A boolean tensor that is True where input is equal to other and False elsewhere
For example:
>>> import oneflow.experimental as flow
>>> import numpy as np
>>> flow.enable_eager_execution()
>>> input = flow.Tensor(np.array([2, 3, 4, 5]), dtype=flow.float32)
>>> other = flow.Tensor(np.array([2, 3, 4, 1]), dtype=flow.float32)
>>> y = flow.eq(input, other)
>>> y
tensor([1, 1, 1, 0], dtype=oneflow.int8)
- oneflow.experimental.to(input, *args, **kwargs)¶ Performs Tensor dtype and/or device conversion. A flow.dtype and flow.device are inferred from the arguments of input.to(*args, **kwargs).
Note
If the input Tensor already has the correct flow.dtype and flow.device, then input is returned. Otherwise, the returned tensor is a copy of input with the desired flow.dtype and flow.device.
- Parameters
input (oneflow.Tensor) – An input tensor.
*args (oneflow.Tensor or oneflow.device or oneflow.dtype) – Positional arguments
**kwargs (oneflow.device or oneflow.dtype) – Key-value arguments
- Returns
A Tensor.
- Return type
oneflow.Tensor
For example:
>>> import numpy as np
>>> import oneflow.experimental as flow
>>> flow.enable_eager_execution()
>>> arr = np.random.randint(1, 9, size=(1, 2, 3, 4))
>>> input = flow.Tensor(arr)
>>> output = input.to(dtype=flow.float32)
>>> np.array_equal(arr.astype(np.float32), output.numpy())
True
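Device conversion goes through the same call; a sketch continuing the session, assuming a device string such as 'cpu' is accepted (as in the arange signature above):
>>> output = input.to("cpu")  # assumed: places the tensor on the CPU device, dtype unchanged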
- oneflow.experimental.Tensor.to(input, *args, **kwargs)¶ Performs Tensor dtype and/or device conversion. A flow.dtype and flow.device are inferred from the arguments of input.to(*args, **kwargs).
Note
If the input Tensor already has the correct flow.dtype and flow.device, then input is returned. Otherwise, the returned tensor is a copy of input with the desired flow.dtype and flow.device.
- Parameters
input (oneflow.Tensor) – An input tensor.
*args (oneflow.Tensor or oneflow.device or oneflow.dtype) – Positional arguments
**kwargs (oneflow.device or oneflow.dtype) – Key-value arguments
- Returns
A Tensor.
- Return type
oneflow.Tensor
For example:
>>> import numpy as np
>>> import oneflow.experimental as flow
>>> flow.enable_eager_execution()
>>> arr = np.random.randint(1, 9, size=(1, 2, 3, 4))
>>> input = flow.Tensor(arr)
>>> output = input.to(dtype=flow.float32)
>>> np.array_equal(arr.astype(np.float32), output.numpy())
True
- oneflow.experimental.equal(input, other)¶ Computes element-wise equality. The second argument can be a number or a tensor whose shape is broadcastable with the first argument.
- Parameters
input (oneflow.Tensor) – the tensor to compare
other (oneflow.Tensor, float or int) – the target to compare
- Returns
A boolean tensor that is True where input is equal to other and False elsewhere
For example:
>>> import oneflow.experimental as flow
>>> import numpy as np
>>> flow.enable_eager_execution()
>>> input = flow.Tensor(np.array([2, 3, 4, 5]), dtype=flow.float32)
>>> other = flow.Tensor(np.array([2, 3, 4, 1]), dtype=flow.float32)
>>> y = flow.equal(input, other)
>>> y
tensor([1, 1, 1, 0], dtype=oneflow.int8)
- oneflow.experimental.Tensor.eq(input, other)¶ Computes element-wise equality. The second argument can be a number or a tensor whose shape is broadcastable with the first argument.
- Parameters
input (oneflow.Tensor) – the tensor to compare
other (oneflow.Tensor, float or int) – the target to compare
- Returns
A boolean tensor that is True where input is equal to other and False elsewhere
For example:
>>> import oneflow.experimental as flow
>>> import numpy as np
>>> flow.enable_eager_execution()
>>> input = flow.Tensor(np.array([2, 3, 4, 5]), dtype=flow.float32)
>>> other = flow.Tensor(np.array([2, 3, 4, 1]), dtype=flow.float32)
>>> y = input.eq(other)
>>> y
tensor([1, 1, 1, 0], dtype=oneflow.int8)
- oneflow.experimental.exp(x)¶ This operator computes the element-wise exponential of the input Tensor.
The equation is:
\[out = e^x\]
- Parameters
x (oneflow.Tensor) – A Tensor
- Returns
The result Tensor
- Return type
oneflow.Tensor
For example:
>>> import numpy as np
>>> import oneflow.experimental as flow
>>> flow.enable_eager_execution()
>>> x = flow.Tensor(np.array([1, 2, 3]).astype(np.float32))
>>> y = x.exp()
>>> y
tensor([ 2.7183, 7.3891, 20.0855], dtype=oneflow.float32)
- oneflow.experimental.Tensor.exp(x)¶ This operator computes the element-wise exponential of the input Tensor.
The equation is:
\[out = e^x\]
- Parameters
x (oneflow.Tensor) – A Tensor
- Returns
The result Tensor
- Return type
oneflow.Tensor
For example:
>>> import numpy as np
>>> import oneflow.experimental as flow
>>> flow.enable_eager_execution()
>>> x = flow.Tensor(np.array([1, 2, 3]).astype(np.float32))
>>> y = x.exp()
>>> y
tensor([ 2.7183, 7.3891, 20.0855], dtype=oneflow.float32)
- oneflow.experimental.expand(x, *sizes)¶ This operator expands the input tensor to a larger size.
Passing -1 as the size for a dimension means not changing the size of that dimension.
The tensor can also be expanded to a larger number of dimensions, and the new ones will be appended at the front.
For the new dimensions, the size cannot be set to -1.
- Parameters
x (oneflow.Tensor) – The input Tensor.
*sizes (flow.Size or int) – The desired expanded size.
- Returns
The result Tensor.
- Return type
oneflow.Tensor
For example:
>>> import oneflow.experimental as flow
>>> import numpy as np
>>> flow.enable_eager_execution()
>>> x = np.array([[[[0, 1]],
...                [[2, 3]],
...                [[4, 5]]]]).astype(np.int32)
>>> input = flow.Tensor(x)
>>> out = input.expand(1, 3, 2, 2)
>>> out.shape
flow.Size([1, 3, 2, 2])
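Per the note above, -1 leaves a dimension's size unchanged; a quick sketch continuing the session:
>>> input.expand(-1, 3, 2, 2).shape
flow.Size([1, 3, 2, 2])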
- oneflow.experimental.Tensor.expand(x, *sizes)¶ This operator expands the input tensor to a larger size.
Passing -1 as the size for a dimension means not changing the size of that dimension.
The tensor can also be expanded to a larger number of dimensions, and the new ones will be appended at the front.
For the new dimensions, the size cannot be set to -1.
- Parameters
x (oneflow.Tensor) – The input Tensor.
*sizes (flow.Size or int) – The desired expanded size.
- Returns
The result Tensor.
- Return type
oneflow.Tensor
For example:
>>> import oneflow.experimental as flow
>>> import numpy as np
>>> flow.enable_eager_execution()
>>> x = np.array([[[[0, 1]],
...                [[2, 3]],
...                [[4, 5]]]]).astype(np.int32)
>>> input = flow.Tensor(x)
>>> out = input.expand(1, 3, 2, 2)
>>> out.shape
flow.Size([1, 3, 2, 2])
- oneflow.experimental.nn.Flatten(start_dim: int = 1, end_dim: int = -1) → None¶ Flattens a contiguous range of dims into a tensor. For use with: nn.Sequential.
- Parameters
start_dim – first dim to flatten (default = 1).
end_dim – last dim to flatten (default = -1).
For example:
>>> import oneflow.experimental as flow
>>> flow.enable_eager_execution()
>>> input = flow.Tensor(32, 1, 5, 5)
>>> m = flow.nn.Flatten()
>>> output = m(input)
>>> output.size()
flow.Size([32, 25])
- oneflow.experimental.flatten(input, start_dim: int = 0, end_dim: int = -1)¶ Flattens a contiguous range of dims into a tensor.
- Parameters
start_dim – first dim to flatten (default = 0).
end_dim – last dim to flatten (default = -1).
For example:
>>> import numpy as np
>>> import oneflow.experimental as flow
>>> flow.enable_eager_execution()
>>> input = flow.Tensor(32, 1, 5, 5)
>>> output = flow.flatten(input, start_dim=1)
>>> output.size()
flow.Size([32, 25])
- oneflow.experimental.Tensor.flatten(input, start_dim: int = 0, end_dim: int = -1)¶ Flattens a contiguous range of dims into a tensor.
- Parameters
start_dim – first dim to flatten (default = 0).
end_dim – last dim to flatten (default = -1).
For example:
>>> import numpy as np
>>> import oneflow.experimental as flow
>>> flow.enable_eager_execution()
>>> input = flow.Tensor(32, 1, 5, 5)
>>> output = input.flatten(start_dim=1)
>>> output.size()
flow.Size([32, 25])
- oneflow.experimental.gt(x, y)¶ Returns the truth value of \(x > y\) element-wise.
- Parameters
x (oneflow.Tensor) – A Tensor
y (oneflow.Tensor) – A Tensor
- Returns
A Tensor with int8 type.
- Return type
oneflow.Tensor
For example:
>>> import numpy as np
>>> import oneflow.experimental as flow
>>> flow.enable_eager_execution()
>>> input1 = flow.Tensor(np.random.randn(2, 6, 5, 3), dtype=flow.float32)
>>> input2 = flow.Tensor(np.random.randn(2, 6, 5, 3), dtype=flow.float32)
>>> out = flow.gt(input1, input2).shape
>>> out
flow.Size([2, 6, 5, 3])
- oneflow.experimental.Tensor.gt() → Tensor¶
- oneflow.experimental.lt(x, y)¶ Returns the truth value of \(x < y\) element-wise.
- Parameters
x (oneflow.Tensor) – A Tensor
y (oneflow.Tensor) – A Tensor
- Returns
A Tensor with int8 type.
- Return type
oneflow.Tensor
For example:
>>> import numpy as np
>>> import oneflow.experimental as flow
>>> flow.enable_eager_execution()
>>> input1 = flow.Tensor(np.array([1, 2, 3]).astype(np.float32), dtype=flow.float32)
>>> input2 = flow.Tensor(np.array([1, 2, 4]).astype(np.float32), dtype=flow.float32)
>>> out = flow.lt(input1, input2)
>>> out
tensor([0, 0, 1], dtype=oneflow.int8)
- oneflow.experimental.Tensor.lt(x, y)¶ Returns the truth value of \(x < y\) element-wise.
- Parameters
x (oneflow.Tensor) – A Tensor
y (oneflow.Tensor) – A Tensor
- Returns
A Tensor with int8 type.
- Return type
oneflow.Tensor
For example:
>>> import numpy as np
>>> import oneflow.experimental as flow
>>> flow.enable_eager_execution()
>>> input1 = flow.Tensor(np.array([1, 2, 3]).astype(np.float32), dtype=flow.float32)
>>> input2 = flow.Tensor(np.array([1, 2, 4]).astype(np.float32), dtype=flow.float32)
>>> out = input1.lt(input2)
>>> out
tensor([0, 0, 1], dtype=oneflow.int8)
- oneflow.experimental.nn.Identity(*args, **kwargs)¶ A placeholder identity operator that is argument-insensitive.
- Parameters
args – any argument (unused)
kwargs – any keyword argument (unused)
For example:
>>> import numpy as np
>>> import oneflow.experimental as flow
>>> flow.enable_eager_execution()
>>> m = flow.nn.Identity()
>>> input = flow.Tensor(np.random.rand(2, 3, 4, 5))
>>> output = m(input)  # output is identical to input
- oneflow.experimental.nn.Linear(in_features: int, out_features: int, bias: bool = True) → None¶ Applies a linear transformation to the incoming data: \(y = xA^T + b\)
- Parameters
in_features – size of each input sample
out_features – size of each output sample
bias – If set to False, the layer will not learn an additive bias. Default: True
- Shape:
Input: \((N, *, H_{in})\) where \(*\) means any number of additional dimensions and \(H_{in} = {in\_features}\)
Output: \((N, *, H_{out})\) where all but the last dimension are the same shape as the input and \(H_{out} = {out\_features}\).
- Attr:
weight: the learnable weights of the module of shape \(({out\_features}, {in\_features})\). The values are initialized from \(\mathcal{U}(-\sqrt{k}, \sqrt{k})\), where \((k = 1 / {in\_features})\)
bias: the learnable bias of the module of shape \(({out\_features})\). If bias is True, the values are initialized from \(\mathcal{U}(-\sqrt{k}, \sqrt{k})\) where \((k = 1 / {in\_features})\)
For example:
>>> import numpy as np
>>> import oneflow.experimental as flow
>>> flow.enable_eager_execution()
>>> m = flow.nn.Linear(20, 30, False)
>>> input = flow.Tensor(np.random.randn(128, 20))
>>> output = m(input)
>>> output.size()
flow.Size([128, 30])
- oneflow.experimental.nn.CrossEntropyLoss(weight=None, ignore_index: Optional[int] = None, reduction: Optional[str] = 'mean') → None¶ This criterion combines LogSoftmax and NLLLoss in one single class.
It is useful when training a classification problem with C classes.
The input is expected to contain raw, unnormalized scores for each class.
input has to be a Tensor of size either \((minibatch, C)\) or \((minibatch, C, d_1, d_2, ..., d_K)\) with \(K \geq 1\) for the K-dimensional case (described later).
This criterion expects a class index in the range \([0, C-1]\) as the target for each value of a 1D tensor of size minibatch;
The loss can be described as:
\[\text{loss}(x, class) = -\log\left(\frac{\exp(x[class])}{\sum_j \exp(x[j])}\right) = -x[class] + \log\left(\sum_j \exp(x[j])\right)\]
Can also be used for higher dimension inputs, such as 2D images, by providing an input of size \((minibatch, C, d_1, d_2, ..., d_K)\) with \(K \geq 1\), where \(K\) is the number of dimensions, and a target of appropriate shape (see below).
- Parameters
reduction (string, optional) – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the weighted mean of the output is taken, 'sum': the output will be summed. Default: 'mean'
For example:
>>> import oneflow.experimental as flow
>>> import numpy as np
>>> flow.enable_eager_execution()
>>> input = flow.Tensor(
...     [[-0.1664078, -1.7256707, -0.14690138],
...      [-0.21474946, 0.53737473, 0.99684894],
...      [-1.135804, -0.50371903, 0.7645404]], dtype=flow.float32)
>>> target = flow.Tensor(np.array([0, 1, 2]), dtype=flow.int32)
>>> out = flow.nn.CrossEntropyLoss(reduction="none")(input, target)
>>> out
tensor([0.802 , 1.1167, 0.3583], dtype=oneflow.float32)
>>> out_sum = flow.nn.CrossEntropyLoss(reduction="sum")(input, target)
>>> out_sum
tensor([2.2769], dtype=oneflow.float32)
>>> out_mean = flow.nn.CrossEntropyLoss(reduction="mean")(input, target)
>>> out_mean
tensor([0.759], dtype=oneflow.float32)
- oneflow.experimental.nn.NLLLoss(weight=None, ignore_index: int = None, reduction: str = 'mean') → None¶ The negative log likelihood loss. It is useful to train a classification problem with C classes.
The input given through a forward call is expected to contain log-probabilities of each class. input has to be a Tensor of size either \((minibatch, C)\) or \((minibatch, C, d_1, d_2, ..., d_K)\) with \(K \geq 1\) for the K-dimensional case (described later).
Obtaining log-probabilities in a neural network is easily achieved by adding a LogSoftmax layer in the last layer of your network. You may use CrossEntropyLoss instead, if you prefer not to add an extra layer.
The target that this loss expects should be a class index in the range \([0, C-1]\) where C = number of classes;
The unreduced (i.e. with reduction set to 'none') loss can be described as:
\[\ell(x, y) = L = \{l_1,\dots,l_N\}^\top, \quad l_n = - w_{y_n} x_{n,y_n}, \quad w_{c} = \mathbb{1},\]
where \(x\) is the input, \(y\) is the target, \(w\) is the weight, and \(N\) is the batch size. If reduction is not 'none' (default 'mean'), then
\[\begin{split}\ell(x, y) = \begin{cases} \sum_{n=1}^N \frac{1}{N} l_n, & \text{if reduction} = \text{`mean';}\\ \sum_{n=1}^N l_n, & \text{if reduction} = \text{`sum'.} \end{cases}\end{split}\]
Can also be used for higher dimension inputs, such as 2D images, by providing an input of size \((minibatch, C, d_1, d_2, ..., d_K)\) with \(K \geq 1\), where \(K\) is the number of dimensions, and a target of appropriate shape (see below). In the case of images, it computes NLL loss per-pixel.
- Parameters
reduction (string, optional) – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the weighted mean of the output is taken, 'sum': the output will be summed. Default: 'mean'
For example:
>>> import oneflow.experimental as flow
>>> flow.enable_eager_execution()
>>> import numpy as np
>>> input = flow.Tensor(
...     [[-0.1664078, -1.7256707, -0.14690138],
...      [-0.21474946, 0.53737473, 0.99684894],
...      [-1.135804, -0.50371903, 0.7645404]], dtype=flow.float32)
>>> target = flow.Tensor(np.array([0, 1, 2]), dtype=flow.int32)
>>> m = flow.nn.NLLLoss(reduction="none")
>>> out = m(input, target)
>>> out
tensor([ 0.1664, -0.5374, -0.7645], dtype=oneflow.float32)
>>> m = flow.nn.NLLLoss(reduction="sum")
>>> out = m(input, target)
>>> out
tensor([-1.1355], dtype=oneflow.float32)
>>> m = flow.nn.NLLLoss(reduction="mean")
>>> out = m(input, target)
>>> out
tensor([-0.3785], dtype=oneflow.float32)
- oneflow.experimental.masked_fill(input, mask, value)¶ Fills elements of self tensor with value where mask is True. The shape of mask must be broadcastable with the shape of the underlying tensor.
- Parameters
mask (BoolTensor) – the boolean mask
value (float) – the value to fill in with
For example:
>>> import oneflow.experimental as flow
>>> import numpy as np
>>> flow.enable_eager_execution()
>>> in_arr = np.array(
...     [[[-0.13169311, 0.97277078, 1.23305363, 1.56752789],
...       [-1.51954275, 1.87629473, -0.53301206, 0.53006478],
...       [-1.38244183, -2.63448052, 1.30845795, -0.67144869]],
...      [[ 0.41502161, 0.14452418, 0.38968 , -1.76905653],
...       [ 0.34675095, -0.7050969 , -0.7647731 , -0.73233418],
...       [-1.90089858, 0.01262963, 0.74693893, 0.57132389]]]
... )
>>> fill_value = 8.7654321  # random value, e.g. -1e9 or 3.1415
>>> input = flow.Tensor(in_arr, dtype=flow.float32)
>>> mask = flow.Tensor((in_arr > 0).astype(np.int8), dtype=flow.int)
>>> output = flow.masked_fill(input, mask, fill_value)
# tensor([[[-0.1317, 8.7654, 8.7654, 8.7654],
#          [-1.5195, 8.7654, -0.533 , 8.7654],
#          [-1.3824, -2.6345, 8.7654, -0.6714]],
#         [[ 8.7654, 8.7654, 8.7654, -1.7691],
#          [ 8.7654, -0.7051, -0.7648, -0.7323],
#          [-1.9009, 8.7654, 8.7654, 8.7654]]], dtype=oneflow.float32)
- oneflow.experimental.Tensor.masked_fill(input, mask, value)¶ Fills elements of self tensor with value where mask is True. The shape of mask must be broadcastable with the shape of the underlying tensor.
- Parameters
mask (BoolTensor) – the boolean mask
value (float) – the value to fill in with
For example:
>>> import oneflow.experimental as flow
>>> import numpy as np
>>> flow.enable_eager_execution()
>>> in_arr = np.array(
...     [[[-0.13169311, 0.97277078, 1.23305363, 1.56752789],
...       [-1.51954275, 1.87629473, -0.53301206, 0.53006478],
...       [-1.38244183, -2.63448052, 1.30845795, -0.67144869]],
...      [[ 0.41502161, 0.14452418, 0.38968 , -1.76905653],
...       [ 0.34675095, -0.7050969 , -0.7647731 , -0.73233418],
...       [-1.90089858, 0.01262963, 0.74693893, 0.57132389]]]
... )
>>> fill_value = 8.7654321  # random value, e.g. -1e9 or 3.1415
>>> input = flow.Tensor(in_arr, dtype=flow.float32)
>>> mask = flow.Tensor((in_arr > 0).astype(np.int8), dtype=flow.int)
>>> output = input.masked_fill(mask, fill_value)
# tensor([[[-0.1317, 8.7654, 8.7654, 8.7654],
#          [-1.5195, 8.7654, -0.533 , 8.7654],
#          [-1.3824, -2.6345, 8.7654, -0.6714]],
#         [[ 8.7654, 8.7654, 8.7654, -1.7691],
#          [ 8.7654, -0.7051, -0.7648, -0.7323],
#          [-1.9009, 8.7654, 8.7654, 8.7654]]], dtype=oneflow.float32)
-
oneflow.experimental.
sum
(input, dim=None, keepdim=False)¶ Computes the sum of the elements of the input tensor along the given axis. If the axis is None, the sum of all elements will be calculated.
For example:
>>> import oneflow.experimental as flow >>> flow.enable_eager_execution() >>> input = flow.Tensor([[1, 2, 3], [4, 5, 6]]) >>> flow.sum(input) tensor([21.], dtype=oneflow.float32) >>> flow.sum(input, dim=0) tensor([5., 7., 9.], dtype=oneflow.float32) >>> flow.sum(input, dim=1) tensor([ 6., 15.], dtype=oneflow.float32)
-
oneflow.experimental.Tensor.
sum
(input, dim=None, keepdim=False)¶ Computes the sum of the elements of the input tensor along the given axis. If the axis is None, the sum of all elements will be calculated.
For example:
>>> import oneflow.experimental as flow >>> flow.enable_eager_execution() >>> input = flow.Tensor([[1, 2, 3], [4, 5, 6]]) >>> flow.sum(input) tensor([21.], dtype=oneflow.float32) >>> flow.sum(input, dim=0) tensor([5., 7., 9.], dtype=oneflow.float32) >>> flow.sum(input, dim=1) tensor([ 6., 15.], dtype=oneflow.float32)
-
oneflow.experimental.
mul
(input, other)¶ Computes the element-wise multiplication of input by other; scalar and broadcast promotion are supported.
The formula is:
\[out = input \times other\]For example:
>>> import numpy as np >>> import oneflow.experimental as flow >>> flow.enable_eager_execution() # element-wise multiply >>> input = flow.Tensor(np.random.randn(2,3)) >>> other = flow.Tensor(np.random.randn(2,3)) >>> out = flow.mul(input,other).numpy() >>> out.shape (2, 3) # scalar multiply >>> input = 5 >>> other = flow.Tensor(np.random.randn(2,3)) >>> out = flow.mul(input,other).numpy() >>> out.shape (2, 3) # broadcast multiply >>> input = flow.Tensor(np.random.randn(1,1)) >>> other = flow.Tensor(np.random.randn(2,3)) >>> out = flow.mul(input,other).numpy() >>> out.shape (2, 3)
-
oneflow.experimental.Tensor.
mul
(input, other)¶ Computes the element-wise multiplication of input by other; scalar and broadcast promotion are supported.
The formula is:
\[out = input \times other\]For example:
>>> import numpy as np >>> import oneflow.experimental as flow >>> flow.enable_eager_execution() # element-wise multiply >>> input = flow.Tensor(np.random.randn(2,3)) >>> other = flow.Tensor(np.random.randn(2,3)) >>> out = flow.mul(input,other).numpy() >>> out.shape (2, 3) # scalar multiply >>> input = 5 >>> other = flow.Tensor(np.random.randn(2,3)) >>> out = flow.mul(input,other).numpy() >>> out.shape (2, 3) # broadcast multiply >>> input = flow.Tensor(np.random.randn(1,1)) >>> other = flow.Tensor(np.random.randn(2,3)) >>> out = flow.mul(input,other).numpy() >>> out.shape (2, 3)
-
oneflow.experimental.
mean
(input, dim=None, keepdim=False)¶ Computes the mean of the elements of the input tensor along the given axis. If the axis is None, the mean of all elements will be calculated.
For example:
>>> import oneflow.experimental as flow >>> flow.enable_eager_execution() >>> input = flow.Tensor([[1, 2, 3], [4, 5, 6]]) >>> flow.mean(input) tensor([3.5], dtype=oneflow.float32) >>> flow.mean(input, dim=0) tensor([2.5, 3.5, 4.5], dtype=oneflow.float32) >>> flow.mean(input, dim=1) tensor([2., 5.], dtype=oneflow.float32)
-
oneflow.experimental.Tensor.
mean
(input, dim=None, keepdim=False)¶ Computes the mean of the elements of the input tensor along the given axis. If the axis is None, the mean of all elements will be calculated.
For example:
>>> import oneflow.experimental as flow >>> flow.enable_eager_execution() >>> input = flow.Tensor([[1, 2, 3], [4, 5, 6]]) >>> flow.mean(input) tensor([3.5], dtype=oneflow.float32) >>> flow.mean(input, dim=0) tensor([2.5, 3.5, 4.5], dtype=oneflow.float32) >>> flow.mean(input, dim=1) tensor([2., 5.], dtype=oneflow.float32)
-
oneflow.experimental.
sub
(input, other)¶ Computes the element-wise subtraction of input by other; scalar and broadcast promotion are supported. The formula is:
\[out = input - other\]For example:
>>> import numpy as np >>> import oneflow.experimental as flow >>> flow.enable_eager_execution() # element-wise subtract >>> input = flow.Tensor(np.random.randn(2,3)) >>> other = flow.Tensor(np.random.randn(2,3)) >>> out = flow.sub(input,other).numpy() >>> out.shape (2, 3) # scalar subtract >>> input = 5 >>> other = flow.Tensor(np.random.randn(2,3)) >>> out = flow.sub(input,other).numpy() >>> out.shape (2, 3) # broadcast subtract >>> input = flow.Tensor(np.random.randn(1,1)) >>> other = flow.Tensor(np.random.randn(2,3)) >>> out = flow.sub(input,other).numpy() >>> out.shape (2, 3)
-
oneflow.experimental.
var
(input, dim=None, keepdim=False)¶ Returns the variance of each row of the input tensor in the given dimension dim.
If keepdim is True, the output tensor is of the same size as input except in the dimension(s) dim where it is of size 1. Otherwise, dim is squeezed (see flow.squeeze()), resulting in the output tensor having 1 (or len(dim)) fewer dimension(s).
- Parameters
input (Tensor) – the input tensor.
dim (int or tuple of ints) – the dimension or dimensions to reduce. Defaults to None.
keepdim (bool, optional) – whether the output tensor has dim retained or not. Defaults to False.
- Returns
The result of variance on the specified axis of input Tensor
- Return type
For example:
>>> import numpy as np >>> import oneflow.experimental as flow >>> flow.enable_eager_execution() >>> np_arr = np.random.randn(2,3,4,5) >>> input = flow.Tensor(np_arr) >>> output = flow.var(input, 1, True)
-
oneflow.experimental.Tensor.
var
(input, dim=None, keepdim=False)¶ Returns the variance of each row of the input tensor in the given dimension dim.
If keepdim is True, the output tensor is of the same size as input except in the dimension(s) dim where it is of size 1. Otherwise, dim is squeezed (see flow.squeeze()), resulting in the output tensor having 1 (or len(dim)) fewer dimension(s).
- Parameters
input (Tensor) – the input tensor.
dim (int or tuple of ints) – the dimension or dimensions to reduce. Defaults to None.
keepdim (bool, optional) – whether the output tensor has dim retained or not. Defaults to False.
- Returns
The result of variance on the specified axis of input Tensor
- Return type
For example:
>>> import numpy as np >>> import oneflow.experimental as flow >>> flow.enable_eager_execution() >>> np_arr = np.random.randn(2,3,4,5) >>> input = flow.Tensor(np_arr) >>> output = flow.var(input, 1, True)
-
oneflow.experimental.Tensor.
sub
(input, other)¶ Computes the element-wise subtraction of input by other; scalar and broadcast promotion are supported. The formula is:
\[out = input - other\]For example:
>>> import numpy as np >>> import oneflow.experimental as flow >>> flow.enable_eager_execution() # element-wise subtract >>> input = flow.Tensor(np.random.randn(2,3)) >>> other = flow.Tensor(np.random.randn(2,3)) >>> out = flow.sub(input,other).numpy() >>> out.shape (2, 3) # scalar subtract >>> input = 5 >>> other = flow.Tensor(np.random.randn(2,3)) >>> out = flow.sub(input,other).numpy() >>> out.shape (2, 3) # broadcast subtract >>> input = flow.Tensor(np.random.randn(1,1)) >>> other = flow.Tensor(np.random.randn(2,3)) >>> out = flow.sub(input,other).numpy() >>> out.shape (2, 3)
-
oneflow.experimental.
div
(input, other)¶ Computes the element-wise division of input by other; scalar and broadcast promotion are supported. The formula is:
\[out = \frac{input}{other}\]- Parameters
input (Union[int, float, flow.Tensor]) – input.
other (Union[int, float, flow.Tensor]) – other.
For example:
>>> import numpy as np >>> import oneflow.experimental as flow >>> flow.enable_eager_execution() # element-wise divide >>> input = flow.Tensor(np.random.randn(2,3)) >>> other = flow.Tensor(np.random.randn(2,3)) >>> out = flow.div(input,other).numpy() >>> out.shape (2, 3) # scalar divide >>> input = 5 >>> other = flow.Tensor(np.random.randn(2,3)) >>> out = flow.div(input,other).numpy() >>> out.shape (2, 3) # broadcast divide >>> input = flow.Tensor(np.random.randn(1,1)) >>> other = flow.Tensor(np.random.randn(2,3)) >>> out = flow.div(input,other).numpy() >>> out.shape (2, 3)
-
oneflow.experimental.Tensor.
div
(input, other)¶ Computes the element-wise division of input by other; scalar and broadcast promotion are supported. The formula is:
\[out = \frac{input}{other}\]- Parameters
input (Union[int, float, flow.Tensor]) – input.
other (Union[int, float, flow.Tensor]) – other.
For example:
>>> import numpy as np >>> import oneflow.experimental as flow >>> flow.enable_eager_execution() # element-wise divide >>> input = flow.Tensor(np.random.randn(2,3)) >>> other = flow.Tensor(np.random.randn(2,3)) >>> out = flow.div(input,other).numpy() >>> out.shape (2, 3) # scalar divide >>> input = 5 >>> other = flow.Tensor(np.random.randn(2,3)) >>> out = flow.div(input,other).numpy() >>> out.shape (2, 3) # broadcast divide >>> input = flow.Tensor(np.random.randn(1,1)) >>> other = flow.Tensor(np.random.randn(2,3)) >>> out = flow.div(input,other).numpy() >>> out.shape (2, 3)
-
oneflow.experimental.
reciprocal
(x)¶ Computes the safe reciprocal of x. If x is zero, the reciprocal will also be set to zero.
For example:
>>> import numpy as np >>> import oneflow.experimental as flow >>> flow.enable_eager_execution() >>> x = flow.Tensor(np.array([[1, 2, 3], [4, 5, 6]])) >>> out = flow.reciprocal(x) >>> out.numpy() array([[1. , 0.5 , 0.33333334], [0.25 , 0.2 , 0.16666667]], dtype=float32)
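To illustrate the safe-zero behavior described above, a minimal sketch (input values chosen for illustration):
>>> import numpy as np >>> import oneflow.experimental as flow >>> flow.enable_eager_execution() >>> x = flow.Tensor(np.array([0., 2., 4.])) # zero stays zero instead of producing inf >>> flow.reciprocal(x).numpy() array([0.  , 0.5 , 0.25], dtype=float32)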
-
oneflow.experimental.Tensor.
reciprocal
(x)¶ Computes the safe reciprocal of x. If x is zero, the reciprocal will also be set to zero.
For example:
>>> import numpy as np >>> import oneflow.experimental as flow >>> flow.enable_eager_execution() >>> x = flow.Tensor(np.array([[1, 2, 3], [4, 5, 6]])) >>> out = flow.reciprocal(x) >>> out.numpy() array([[1. , 0.5 , 0.33333334], [0.25 , 0.2 , 0.16666667]], dtype=float32)
-
oneflow.experimental.
add
(x, y)¶ Computes the element-wise addition of x and y; scalar and broadcast promotion are supported. The formula is:
\[out = x + y\]For example:
>>> import numpy as np >>> import oneflow.experimental as flow >>> flow.enable_eager_execution() # element-wise add >>> x = flow.Tensor(np.random.randn(2,3)) >>> y = flow.Tensor(np.random.randn(2,3)) >>> out = flow.add(x, y).numpy() >>> out.shape (2, 3) # scalar add >>> x = 5 >>> y = flow.Tensor(np.random.randn(2,3)) >>> out = flow.add(x, y).numpy() >>> out.shape (2, 3) # broadcast add >>> x = flow.Tensor(np.random.randn(1,1)) >>> y = flow.Tensor(np.random.randn(2,3)) >>> out = flow.add(x, y).numpy() >>> out.shape (2, 3)
-
oneflow.experimental.Tensor.
add
(x, y)¶ Computes the element-wise addition of x and y; scalar and broadcast promotion are supported. The formula is:
\[out = x + y\]For example:
>>> import numpy as np >>> import oneflow.experimental as flow >>> flow.enable_eager_execution() # element-wise add >>> x = flow.Tensor(np.random.randn(2,3)) >>> y = flow.Tensor(np.random.randn(2,3)) >>> out = flow.add(x, y).numpy() >>> out.shape (2, 3) # scalar add >>> x = 5 >>> y = flow.Tensor(np.random.randn(2,3)) >>> out = flow.add(x, y).numpy() >>> out.shape (2, 3) # broadcast add >>> x = flow.Tensor(np.random.randn(1,1)) >>> y = flow.Tensor(np.random.randn(2,3)) >>> out = flow.add(x, y).numpy() >>> out.shape (2, 3)
-
oneflow.experimental.
sin
(tensor)¶ Returns a new tensor with the sine of the elements of input.
\[\text{out}_{i} = \sin(\text{input}_{i})\]- Parameters
input (Tensor) – the input tensor.
For example:
>>> import oneflow.experimental as flow >>> import numpy as np >>> flow.enable_eager_execution() >>> x1 = flow.Tensor(np.array([-0.5461, 0.1347, -2.7266, -0.2746]).astype(np.float32)) >>> out1 = flow.sin(x1) >>> out1 tensor([-0.5194, 0.1343, -0.4032, -0.2712], dtype=oneflow.float32) >>> x2 = flow.Tensor(np.array([-1.4, 2.6, 3.7]).astype(np.float32),device=flow.device('cuda')) >>> out2 = flow.sin(x2) >>> out2 tensor([-0.9854, 0.5155, -0.5298], device='cuda:0', dtype=oneflow.float32)
-
oneflow.experimental.Tensor.
sin
() → Tensor¶
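For example, a minimal sketch of the tensor-method form (the input values and expected outputs mirror the flow.sin example above):
>>> import oneflow.experimental as flow >>> import numpy as np >>> flow.enable_eager_execution() >>> x = flow.Tensor(np.array([-0.5461, 0.1347, -2.7266, -0.2746]).astype(np.float32)) >>> x.sin() tensor([-0.5194, 0.1343, -0.4032, -0.2712], dtype=oneflow.float32)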
-
oneflow.experimental.
cos
(tensor)¶ Returns a new tensor with the cosine of the elements of input.
\[\text{out}_{i} = \cos(\text{input}_{i})\]- Parameters
input (Tensor) – the input tensor.
For example:
>>> import oneflow.experimental as flow >>> import numpy as np >>> flow.enable_eager_execution() >>> arr = np.array([1.4309, 1.2706, -0.8562, 0.9796]) >>> input = flow.Tensor(arr, dtype=flow.float32) >>> output = flow.cos(input).numpy()
-
oneflow.experimental.Tensor.
cos
(tensor)¶ Returns a new tensor with the cosine of the elements of input.
\[\text{out}_{i} = \cos(\text{input}_{i})\]- Parameters
input (Tensor) – the input tensor.
For example:
>>> import oneflow.experimental as flow >>> import numpy as np >>> flow.enable_eager_execution() >>> arr = np.array([1.4309, 1.2706, -0.8562, 0.9796]) >>> input = flow.Tensor(arr, dtype=flow.float32) >>> output = flow.cos(input).numpy()
-
oneflow.experimental.
log
(tensor)¶ Returns a new tensor with the natural logarithm of the elements of input.
\[y_{i} = \log_{e} (x_{i})\]- Parameters
input (Tensor) – the input tensor.
For example:
>>> import oneflow.experimental as flow >>> import numpy as np >>> flow.enable_eager_execution() >>> arr = np.random.randn(2, 3, 4, 5) >>> input = flow.Tensor(arr, dtype=flow.float32) >>> output = flow.log(input)
-
oneflow.experimental.Tensor.
log
(tensor)¶ Returns a new tensor with the natural logarithm of the elements of input.
\[y_{i} = \log_{e} (x_{i})\]- Parameters
input (Tensor) – the input tensor.
For example:
>>> import oneflow.experimental as flow >>> import numpy as np >>> flow.enable_eager_execution() >>> arr = np.random.randn(2, 3, 4, 5) >>> input = flow.Tensor(arr, dtype=flow.float32) >>> output = flow.log(input)
-
oneflow.experimental.
sqrt
(input)¶ Returns a new tensor with the square-root of the elements of input.
\[\text{out}_{i} = \sqrt{\text{input}_{i}}\]- Parameters
input – the input tensor.
>>> import oneflow.experimental as flow >>> import numpy as np >>> flow.enable_eager_execution() >>> arr = np.array([1.0, 2.0, 3.0]) >>> input = flow.Tensor(arr) >>> output = flow.sqrt(input).numpy() >>> output array([1. , 1.4142135, 1.7320508], dtype=float32)
-
oneflow.experimental.Tensor.
sqrt
(input)¶ Returns a new tensor with the square-root of the elements of input.
\[\text{out}_{i} = \sqrt{\text{input}_{i}}\]- Parameters
input – the input tensor.
>>> import oneflow.experimental as flow >>> import numpy as np >>> flow.enable_eager_execution() >>> arr = np.array([1.0, 2.0, 3.0]) >>> input = flow.Tensor(arr) >>> output = flow.sqrt(input).numpy() >>> output array([1. , 1.4142135, 1.7320508], dtype=float32)
-
oneflow.experimental.
square
(input)¶ Returns a new tensor with the square of the elements of input.
\[\text{out}_{i} = (\text{input}_{i})^2\]- Parameters
input – the input tensor.
>>> import oneflow.experimental as flow >>> import numpy as np >>> flow.enable_eager_execution() >>> arr = np.array([1.0, 2.0, 3.0]) >>> input = flow.Tensor(arr) >>> output = flow.square(input).numpy() >>> output array([1., 4., 9.], dtype=float32)
-
oneflow.experimental.Tensor.
square
(input)¶ Returns a new tensor with the square of the elements of input.
\[\text{out}_{i} = (\text{input}_{i})^2\]- Parameters
input – the input tensor.
>>> import oneflow.experimental as flow >>> import numpy as np >>> flow.enable_eager_execution() >>> arr = np.array([1.0, 2.0, 3.0]) >>> input = flow.Tensor(arr) >>> output = flow.square(input).numpy() >>> output array([1., 4., 9.], dtype=float32)
-
oneflow.experimental.
std
(tensor, dim, unbiased=True, keepdim=False)¶ Returns the standard-deviation of each row of the input tensor in the dimension dim. If dim is a list of dimensions, reduce over all of them.
If keepdim is True, the output tensor is of the same size as input except in the dimension(s) dim where it is of size 1. Otherwise, dim is squeezed, resulting in the output tensor having 1 (or len(dim)) fewer dimension(s).
If unbiased is False, then the standard-deviation will be calculated via the biased estimator. Otherwise, Bessel’s correction will be used.
- Parameters
input (Tensor) – the input tensor.
dim (int or tuple of ints) – the dimension or dimensions to reduce.
unbiased (bool) – whether to use the unbiased estimation or not
keepdim (bool) – whether the output tensor has dim retained or not.
For example:
>>> import oneflow.experimental as flow >>> import numpy as np >>> flow.enable_eager_execution() >>> arr = np.array([1.0, 2.0, 3.0]) >>> input = flow.Tensor(arr) >>> output = flow.std(input, dim=0).numpy() >>> output array([0.8164968], dtype=float32)
-
oneflow.experimental.Tensor.
std
(tensor, dim, unbiased=True, keepdim=False)¶ Returns the standard-deviation of each row of the input tensor in the dimension dim. If dim is a list of dimensions, reduce over all of them.
If keepdim is True, the output tensor is of the same size as input except in the dimension(s) dim where it is of size 1. Otherwise, dim is squeezed, resulting in the output tensor having 1 (or len(dim)) fewer dimension(s).
If unbiased is False, then the standard-deviation will be calculated via the biased estimator. Otherwise, Bessel’s correction will be used.
- Parameters
input (Tensor) – the input tensor.
dim (int or tuple of ints) – the dimension or dimensions to reduce.
unbiased (bool) – whether to use the unbiased estimation or not
keepdim (bool) – whether the output tensor has dim retained or not.
For example:
>>> import oneflow.experimental as flow >>> import numpy as np >>> flow.enable_eager_execution() >>> arr = np.array([1.0, 2.0, 3.0]) >>> input = flow.Tensor(arr) >>> output = flow.std(input, dim=0).numpy() >>> output array([0.8164968], dtype=float32)
-
oneflow.experimental.
pow
(tensor, exponent)¶ - Takes the power of each element in input with exponent and returns a tensor with the result. Exponent can be either a single float number, a single int number, or a tensor with the same shape as input.
When exponent is a scalar value, the operation applied is:
\[\text{out}_i = x_i ^ \text{exponent}\]-
When exponent is a tensor, the operation applied is:
\[\text{out}_i = x_i ^ {\text{exponent}_i}\]- Args:
input (Tensor): the input tensor.
exponent (int, float, Tensor): the exponent.
- Returns:
Tensor: The result of the power operation on the input Tensor
For example:
>>> import oneflow.experimental as flow >>> import numpy as np >>> flow.enable_eager_execution() >>> x = flow.Tensor(np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])) >>> out = flow.pow(x, 2).numpy() >>> out array([ 1., 4., 9., 16., 25., 36.], dtype=float32) >>> x = flow.Tensor(np.array([1.0, 2.0, 3.0, 4.0])) >>> y = flow.Tensor(np.array([1.0, 2.0, 3.0, 4.0])) >>> out = flow.pow(x, y).numpy() >>> out array([ 1., 4., 27., 256.], dtype=float32)
-
oneflow.experimental.Tensor.
pow
(tensor, exponent)¶ - Takes the power of each element in input with exponent and returns a tensor with the result. Exponent can be either a single float number, a single int number, or a tensor with the same shape as input.
When exponent is a scalar value, the operation applied is:
\[\text{out}_i = x_i ^ \text{exponent}\]-
When exponent is a tensor, the operation applied is:
\[\text{out}_i = x_i ^ {\text{exponent}_i}\]- Args:
input (Tensor): the input tensor.
exponent (int, float, Tensor): the exponent.
- Returns:
Tensor: The result of the power operation on the input Tensor
For example:
>>> import oneflow.experimental as flow >>> import numpy as np >>> flow.enable_eager_execution() >>> x = flow.Tensor(np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])) >>> out = flow.pow(x, 2).numpy() >>> out array([ 1., 4., 9., 16., 25., 36.], dtype=float32) >>> x = flow.Tensor(np.array([1.0, 2.0, 3.0, 4.0])) >>> y = flow.Tensor(np.array([1.0, 2.0, 3.0, 4.0])) >>> out = flow.pow(x, y).numpy() >>> out array([ 1., 4., 27., 256.], dtype=float32)
-
oneflow.experimental.
cosh
(tensor)¶ Returns a new tensor with the hyperbolic cosine of the elements of input.
\[\text{out}_{i} = \cosh(\text{input}_{i})\]- Parameters
input (Tensor) – the input tensor.
For example:
>>> import numpy as np >>> import oneflow.experimental as flow >>> flow.enable_eager_execution() >>> arr = np.array([ 0.1632, 1.1835, -0.6979, -0.7325]) >>> input = flow.Tensor(arr, dtype=flow.float32) >>> output = flow.cosh(input).numpy() >>> output array([1.0133467, 1.7859949, 1.2535787, 1.2804903], dtype=float32)
-
oneflow.experimental.Tensor.
cosh
(tensor)¶ Returns a new tensor with the hyperbolic cosine of the elements of input.
\[\text{out}_{i} = \cosh(\text{input}_{i})\]- Parameters
input (Tensor) – the input tensor.
For example:
>>> import numpy as np >>> import oneflow.experimental as flow >>> flow.enable_eager_execution() >>> arr = np.array([ 0.1632, 1.1835, -0.6979, -0.7325]) >>> input = flow.Tensor(arr, dtype=flow.float32) >>> output = flow.cosh(input).numpy() >>> output array([1.0133467, 1.7859949, 1.2535787, 1.2804903], dtype=float32)
-
oneflow.experimental.
matmul
(input, other)¶ This operator applies matrix multiplication to two Tensors.
- Parameters
a (oneflow.Tensor) – A Tensor
b (oneflow.Tensor) – A Tensor
- Returns
The result Tensor
- Return type
For example:
>>> import oneflow.experimental as flow >>> import numpy as np >>> flow.enable_eager_execution() >>> input1 = flow.Tensor(np.random.randn(2, 6), dtype=flow.float32) >>> input2 = flow.Tensor(np.random.randn(6, 5), dtype=flow.float32) >>> of_out = flow.matmul(input1, input2) >>> of_out.shape flow.Size([2, 5])
-
oneflow.experimental.Tensor.
matmul
(input, other)¶ This operator applies matrix multiplication to two Tensors.
- Parameters
a (oneflow.Tensor) – A Tensor
b (oneflow.Tensor) – A Tensor
- Returns
The result Tensor
- Return type
For example:
>>> import oneflow.experimental as flow >>> import numpy as np >>> flow.enable_eager_execution() >>> input1 = flow.Tensor(np.random.randn(2, 6), dtype=flow.float32) >>> input2 = flow.Tensor(np.random.randn(6, 5), dtype=flow.float32) >>> of_out = flow.matmul(input1, input2) >>> of_out.shape flow.Size([2, 5])
-
oneflow.experimental.
negative
(x)¶ This operator computes the negative value of Tensor.
- Parameters
x (oneflow.Tensor) – A Tensor
- Returns
The result Tensor
- Return type
For example:
>>> import numpy as np >>> import oneflow.experimental as flow >>> flow.enable_eager_execution() >>> input = flow.Tensor( ... np.array([1.0, -1.0, 2.3]).astype(np.float32), dtype=flow.float32 ... ) >>> out = flow.negative(input) >>> out tensor([-1. , 1. , -2.3], dtype=oneflow.float32)
-
oneflow.experimental.
neg
(x)¶ This operator computes the negative value of Tensor.
- Parameters
x (oneflow.Tensor) – A Tensor
- Returns
The result Tensor
- Return type
For example:
>>> import numpy as np >>> import oneflow.experimental as flow >>> flow.enable_eager_execution() >>> input = flow.Tensor( ... np.array([1.0, -1.0, 2.3]).astype(np.float32), dtype=flow.float32 ... ) >>> out = flow.negative(input) >>> out tensor([-1. , 1. , -2.3], dtype=oneflow.float32)
-
oneflow.experimental.Tensor.
negative
(x)¶ This operator computes the negative value of Tensor.
- Parameters
x (oneflow.Tensor) – A Tensor
- Returns
The result Tensor
- Return type
For example:
>>> import numpy as np >>> import oneflow.experimental as flow >>> flow.enable_eager_execution() >>> input = flow.Tensor( ... np.array([1.0, -1.0, 2.3]).astype(np.float32), dtype=flow.float32 ... ) >>> out = flow.negative(input) >>> out tensor([-1. , 1. , -2.3], dtype=oneflow.float32)
-
oneflow.experimental.nn.
LayerNorm
(normalized_shape: Union[int, Tuple[int], oneflow.Size], eps: float = 1e-05, elementwise_affine: bool = True) → None Applies Layer Normalization over a mini-batch of inputs as described in the paper Layer Normalization
\[y = \frac{x - \mathrm{E}[x]}{ \sqrt{\mathrm{Var}[x] + \epsilon}} * \gamma + \beta\]The mean and standard-deviation are calculated separately over the last certain number of dimensions which have to be of the shape specified by normalized_shape. \(\gamma\) and \(\beta\) are learnable affine transform parameters of normalized_shape if elementwise_affine is True. The standard-deviation is calculated via the biased estimator.
Note
Unlike Batch Normalization and Instance Normalization, which apply scalar scale and bias for each entire channel/plane with the affine option, Layer Normalization applies per-element scale and bias with elementwise_affine.
This layer uses statistics computed from input data in both training and evaluation modes.
- Parameters
normalized_shape (int or list or oneflow.Size) –
input shape from an expected input of size
\[[* \times \text{normalized_shape}[0] \times \text{normalized_shape}[1] \times \ldots \times \text{normalized_shape}[-1]]\]If a single integer is used, it is treated as a singleton list, and this module will
normalize over the last dimension which is expected to be of that specific size.
eps – a value added to the denominator for numerical stability. Default: 1e-5
elementwise_affine – a boolean value that when set to True, this module has learnable per-element affine parameters initialized to ones (for weights) and zeros (for biases). Default: True.
- Shape:
Input: \((N, *)\)
Output: \((N, *)\) (same shape as input)
For example:
>>> import numpy as np >>> import oneflow.experimental as flow >>> flow.enable_eager_execution() >>> input_arr = np.array( ... [ ... [ ... [[-0.16046895, -1.03667831], [-0.34974465, 0.26505867]], ... [[-1.24111986, -0.53806001], [1.72426331, 0.43572459]], ... ], ... [ ... [[-0.77390957, -0.42610624], [0.16398858, -1.35760343]], ... [[1.07541728, 0.11008703], [0.26361224, -0.48663723]], ... ], ... ], ... dtype=np.float32, ... ) >>> x = flow.Tensor(input_arr) >>> m = flow.nn.LayerNorm(2) >>> y = m(x).numpy() >>> y array([[[[ 0.99997395, -0.99997395], [-0.999947 , 0.999947 ]], [[-0.9999596 , 0.9999594 ], [ 0.999988 , -0.999988 ]]], [[[-0.9998343 , 0.9998341 ], [ 0.9999914 , -0.9999914 ]], [[ 0.99997866, -0.99997866], [ 0.9999646 , -0.9999646 ]]]], dtype=float32)
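As noted above, normalized_shape may also be given as a list; a shape-only sketch, assuming the list form normalizes over the last len(normalized_shape) dimensions:
>>> import numpy as np >>> import oneflow.experimental as flow >>> flow.enable_eager_execution() >>> x = flow.Tensor(np.random.randn(2, 2, 2, 2), dtype=flow.float32) >>> m = flow.nn.LayerNorm([2, 2]) # normalize over the last two dims >>> m(x).shape flow.Size([2, 2, 2, 2])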
-
oneflow.experimental.nn.
AvgPool2d
(kernel_size: Union[int, Tuple[int, int]], stride: Union[int, Tuple[int, int], None] = None, padding: Union[int, Tuple[int, int]] = 0, ceil_mode: bool = False, count_include_pad: Optional[bool] = None, divisor_override: Optional[int] = None)¶ Performs the 2d-average pooling on the input.
In the simplest case, the output value of the layer with input size \((N, C, H, W)\), output \((N, C, H_{out}, W_{out})\) and kernel_size \((kH, kW)\) can be precisely described as:
\[out(N_i, C_j, h, w) = \frac{1}{kH * kW} \sum_{m=0}^{kH-1} \sum_{n=0}^{kW-1} input(N_i, C_j, stride[0] \times h + m, stride[1] \times w + n)\]- Parameters
kernel_size (Union[int, Tuple[int, int]]) – An int or list of ints that has length 1, 2. The size of the window for each dimension of the input Tensor.
stride (Union[int, Tuple[int, int]]) – An int or list of ints that has length 1, 2. The stride of the sliding window for each dimension of the input Tensor.
padding (Tuple[int, int]) – An int or list of ints that has length 1, 2. Implicit zero padding to be added on both sides.
ceil_mode (bool, defaults to False) – When True, will use ceil instead of floor to compute the output shape.
For example:
>>> import oneflow.experimental as flow >>> import numpy as np >>> flow.enable_eager_execution() >>> of_avgpool2d = flow.nn.AvgPool2d( ... kernel_size=(3, 2), padding=0, stride=(2, 1), ... ) >>> x = flow.Tensor(np.random.randn(1, 1, 10, 10)) >>> of_y = of_avgpool2d(x) >>> of_y.shape flow.Size([1, 1, 4, 9])
-
oneflow.experimental.nn.
MaxPool2d
(kernel_size: Union[int, Tuple[int, int]], stride: Union[int, Tuple[int, int], None] = None, padding: Union[int, Tuple[int, int]] = 0, dilation: Union[int, Tuple[int, int]] = 1, return_indices: bool = False, ceil_mode: bool = False)¶ The interface is consistent with PyTorch. The documentation is referenced from: https://pytorch.org/docs/stable/generated/torch.nn.MaxPool2d.html#torch.nn.MaxPool2d
Applies a 2D max pooling over an input signal composed of several input planes.
In the simplest case, the output value of the layer with input size \((N, C, H, W)\), output \((N, C, H_{out}, W_{out})\) and kernel_size \((kH, kW)\) can be precisely described as:
\[\begin{split}\begin{aligned} out(N_i, C_j, h, w) ={} & \max_{m=0, \ldots, kH-1} \max_{n=0, \ldots, kW-1} \\ & \text{input}(N_i, C_j, \text{stride[0]} \times h + m, \text{stride[1]} \times w + n) \end{aligned}\end{split}\]
If padding is non-zero, then the input is implicitly padded with the minimum value on both sides for padding number of points. dilation controls the spacing between the kernel points. It is harder to describe, but this link has a nice visualization of what dilation does.
Note
When ceil_mode=True, sliding windows are allowed to go off-bounds if they start within the left padding or the input. Sliding windows that would start in the right padded region are ignored.
- The parameters kernel_size, stride, padding, dilation can either be:
a single int – in which case the same value is used for the height and width dimension
a tuple of two ints – in which case, the first int is used for the height dimension, and the second int for the width dimension
- Parameters
kernel_size – the size of the window to take a max over
stride – the stride of the window. Default value is kernel_size
padding – implicit minimum value padding to be added on both sides
dilation – a parameter that controls the stride of elements in the window
return_indices – if True, will return the max indices along with the outputs. Useful for torch.nn.MaxUnpool2d later
ceil_mode – when True, will use ceil instead of floor to compute the output shape
- Shape:
Input: \((N, C, H_{in}, W_{in})\)
Output: \((N, C, H_{out}, W_{out})\), where
\[H_{out} = \left\lfloor\frac{H_{in} + 2 * \text{padding[0]} - \text{dilation[0]} \times (\text{kernel_size[0]} - 1) - 1}{\text{stride[0]}} + 1\right\rfloor\]\[W_{out} = \left\lfloor\frac{W_{in} + 2 * \text{padding[1]} - \text{dilation[1]} \times (\text{kernel_size[1]} - 1) - 1}{\text{stride[1]}} + 1\right\rfloor\]
For example:
>>> import oneflow.experimental as flow >>> import numpy as np >>> flow.enable_eager_execution() >>> kernel_size, stride, padding = (3, 4), (1, 1), (1, 2) >>> m = flow.nn.MaxPool2d(kernel_size, stride, padding) >>> np.random.seed(0) >>> x = flow.Tensor(np.random.rand(1, 1, 5, 3)) >>> y = m(x) >>> y tensor([[[[0.7152, 0.7152, 0.7152, 0.7152], ... [0.9256, 0.9256, 0.9256, 0.9256]]]], dtype=oneflow.float32) >>> kernel_size, stride, padding = (2, 4), (4, 5), (1, 2) >>> m = flow.nn.MaxPool2d(kernel_size, stride, padding) >>> x = flow.Tensor(np.random.randn(9, 7, 32, 20)) >>> y = m(x) >>> y.shape flow.Size([9, 7, 9, 5])
-
oneflow.experimental.
repeat
(x, sizes)¶ This operator repeats the input tensor to a larger size along the specified dimensions.
- Parameters
x (oneflow.Tensor) – The input Tensor.
sizes (Sequence[int]) – The number of times to repeat this tensor along each dimension
- Returns
The result Tensor.
- Return type
For example:
>>> import oneflow.experimental as flow >>> import numpy as np >>> flow.enable_eager_execution() >>> x = np.array([[[[0, 1]], ... [[2, 3]], ... [[4, 5]]]]).astype(np.int32) >>> input = flow.Tensor(x) >>> out = input.repeat(sizes=(1, 1, 2, 2)) >>> out.shape flow.Size([1, 3, 2, 4])
-
oneflow.experimental.Tensor.
repeat
(x, sizes)¶ This operator repeats the input tensor to a larger size along the specified dimensions.
- Parameters
x (oneflow.Tensor) – The input Tensor.
sizes (Sequence[int]) – The number of times to repeat this tensor along each dimension
- Returns
The result Tensor.
- Return type
For example:
>>> import oneflow.experimental as flow >>> import numpy as np >>> flow.enable_eager_execution() >>> x = np.array([[[[0, 1]], ... [[2, 3]], ... [[4, 5]]]]).astype(np.int32) >>> input = flow.Tensor(x) >>> out = input.repeat(sizes=(1, 1, 2, 2)) >>> out.shape flow.Size([1, 3, 2, 4])
-
oneflow.experimental.
reshape
(x, shape: Sequence[int] = None)¶ This operator reshapes a Tensor.
We can set one dimension in shape to -1; the operator will infer the complete shape.
- Parameters
x – A Tensor.
shape – Shape of the output tensor.
- Returns
A Tensor with the same type as x.
For example:
>>> import numpy as np >>> import oneflow.experimental as flow >>> flow.enable_eager_execution() >>> x = np.array( ... [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12], [13, 14, 15, 16]] ... ).astype(np.float32) >>> input = flow.Tensor(x) >>> y = flow.reshape(input, shape=[2, 2, 2, -1]).shape >>> y flow.Size([2, 2, 2, 2])
-
oneflow.experimental.Tensor.
reshape
(x, shape: Sequence[int] = None)¶ This operator reshapes a Tensor.
We can set one dimension in shape to -1; the operator will infer the complete shape.
- Parameters
x – A Tensor.
shape – Shape of the output tensor.
- Returns
A Tensor with the same type as x.
For example:
>>> import numpy as np >>> import oneflow.experimental as flow >>> flow.enable_eager_execution() >>> x = np.array( ... [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12], [13, 14, 15, 16]] ... ).astype(np.float32) >>> input = flow.Tensor(x) >>> y = flow.reshape(input, shape=[2, 2, 2, -1]).shape >>> y flow.Size([2, 2, 2, 2])
-
oneflow.experimental.
squeeze
(input, dim: Optional[Sequence[int]] = None)¶ This operator removes the specified dimensions of size 1 from the input Tensor. If dim is not specified, this operator will remove all dimensions of size 1 from the input Tensor.
The number of elements in the returned tensor is the same as in the input Tensor.
- Parameters
input (oneflow.Tensor) – The input Tensor.
dim (Optional[Sequence[int]]) – The dim. Defaults to None.
- Returns
The result Tensor.
- Return type
For example:
>>> import oneflow.experimental as flow >>> import numpy as np >>> flow.enable_eager_execution() >>> input = flow.Tensor(np.array([[[[1, 1, 1]]]]).astype(np.int32)) >>> out = flow.squeeze(input, dim=[1, 2]).shape >>> out flow.Size([1, 3])
-
oneflow.experimental.Tensor.
squeeze
(input, dim: Optional[Sequence[int]] = None)¶ This operator removes the specified dimensions of size 1 from the input Tensor. If dim is not specified, this operator will remove all dimensions of size 1 from the input Tensor.
The number of elements in the returned tensor is the same as in the input Tensor.
- Parameters
input (oneflow.Tensor) – The input Tensor.
dim (Optional[Sequence[int]]) – The dim. Defaults to None.
- Returns
The result Tensor.
- Return type
For example:
>>> import oneflow.experimental as flow >>> import numpy as np >>> flow.enable_eager_execution() >>> input = flow.Tensor(np.array([[[[1, 1, 1]]]]).astype(np.int32)) >>> out = flow.squeeze(input, dim=[1, 2]).shape >>> out flow.Size([1, 3])
-
oneflow.experimental.
transpose
(tensor, dim0, dim1)¶ Returns a tensor that is a transposed version of input. The given dimensions dim0 and dim1 are swapped.
The resulting out tensor shares its underlying storage with the input tensor, so changing the content of one would change the content of the other.
- Parameters
tensor (oneflow.Tensor) – The input tensor.
dim0 (int) – the first dimension to be transposed.
dim1 (int) – the second dimension to be transposed.
- Returns
A transposed tensor.
- Return type
For example:
>>> import numpy as np >>> import oneflow.experimental as flow >>> flow.enable_eager_execution() >>> input = flow.Tensor(np.random.randn(2, 6, 5, 3), dtype=flow.float32) >>> out = flow.transpose(input, 0, 1).shape >>> out flow.Size([6, 2, 5, 3])
-
oneflow.experimental.Tensor.
transpose
(tensor, dim0, dim1)¶ Returns a tensor that is a transposed version of input. The given dimensions dim0 and dim1 are swapped.
The resulting out tensor shares its underlying storage with the input tensor, so changing the content of one would change the content of the other.
- Parameters
tensor (oneflow.Tensor) – The input tensor.
dim0 (int) – the first dimension to be transposed.
dim1 (int) – the second dimension to be transposed.
- Returns
A transposed tensor.
- Return type
For example:
>>> import numpy as np >>> import oneflow.experimental as flow >>> flow.enable_eager_execution() >>> input = flow.Tensor(np.random.randn(2, 6, 5, 3), dtype=flow.float32) >>> out = flow.transpose(input, 0, 1).shape >>> out flow.Size([6, 2, 5, 3])
-
oneflow.experimental.
unsqueeze
(input, dim)¶ Returns a new tensor with a dimension of size one inserted at the specified position.
The returned tensor shares the same underlying data with this tensor.
A dim value within the range [-input.ndimension() - 1, input.ndimension() + 1) can be used. Negative dim will correspond to unsqueeze() applied at dim = dim + input.ndimension() + 1.
- Parameters
input (Tensor) – the input tensor.
dim (int) – the index at which to insert the singleton dimension
For example:
>>> import numpy as np >>> import oneflow.experimental as flow >>> flow.enable_eager_execution() >>> x = flow.Tensor(np.random.rand(2, 3, 4)) >>> y = x.unsqueeze(2) >>> y.shape flow.Size([2, 3, 1, 4])
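A brief sketch of the negative-dim rule above: for a 3-dimensional input, dim = -1 resolves to dim + input.ndimension() + 1 = 3, so the new axis is appended last:
>>> import numpy as np >>> import oneflow.experimental as flow >>> flow.enable_eager_execution() >>> x = flow.Tensor(np.random.rand(2, 3, 4)) >>> x.unsqueeze(-1).shape flow.Size([2, 3, 4, 1])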
-
oneflow.experimental.Tensor.
unsqueeze
(input, dim)¶ Returns a new tensor with a dimension of size one inserted at the specified position.
The returned tensor shares the same underlying data with this tensor.
A dim value within the range [-input.ndimension() - 1, input.ndimension() + 1) can be used. Negative dim will correspond to unsqueeze() applied at dim = dim + input.ndimension() + 1.
- Parameters
input (Tensor) – the input tensor.
dim (int) – the index at which to insert the singleton dimension
For example:
>>> import numpy as np >>> import oneflow.experimental as flow >>> flow.enable_eager_execution() >>> x = flow.Tensor(np.random.rand(2, 3, 4)) >>> y = x.unsqueeze(2) >>> y.shape flow.Size([2, 3, 1, 4])
-
oneflow.experimental.
where
(condition, x, y)¶ Return a tensor of elements selected from either x or y, depending on condition. If the element in condition is larger than 0, it will take the x element, else it will take the y element.
Note
The tensors condition, x, y must be broadcastable.
- Parameters
condition (Tensor) – when an element is larger than 0, the corresponding element of x is selected, otherwise the element of y
x (Tensor) – values selected at indices where condition is larger than 0
y (Tensor) – values selected at indices where condition is not larger than 0
- Returns
A tensor of shape equal to the broadcasted shape of condition, x, y
- Return type
For example:
>>> import numpy as np >>> import oneflow.experimental as flow >>> flow.enable_eager_execution() >>> x = flow.Tensor( ... np.array([[-0.4620, 0.3139], [0.3898, -0.7197], [0.0478, -0.1657]]), ... dtype=flow.float32, ... ) >>> y = flow.Tensor(np.ones(shape=(3, 2)), dtype=flow.float32) >>> condition = flow.Tensor(np.array([[0, 1], [1, 0], [1, 0]]), dtype=flow.int32) >>> out = condition.where(x, y) >>> out tensor([[1. , 0.3139], ... [0.0478, 1. ]], dtype=oneflow.float32)
-
oneflow.experimental.Tensor.
where
(condition, x, y)¶ Return a tensor of elements selected from either x or y, depending on condition. If the element in condition is larger than 0, it will take the x element, else it will take the y element.
Note
The tensors condition, x, y must be broadcastable.
- Parameters
condition (Tensor) – when an element is larger than 0, the corresponding element of x is selected, otherwise the element of y
x (Tensor) – values selected at indices where condition is larger than 0
y (Tensor) – values selected at indices where condition is not larger than 0
- Returns
A tensor of shape equal to the broadcasted shape of condition, x, y
- Return type
For example:
>>> import numpy as np >>> import oneflow.experimental as flow >>> flow.enable_eager_execution() >>> x = flow.Tensor( ... np.array([[-0.4620, 0.3139], [0.3898, -0.7197], [0.0478, -0.1657]]), ... dtype=flow.float32, ... ) >>> y = flow.Tensor(np.ones(shape=(3, 2)), dtype=flow.float32) >>> condition = flow.Tensor(np.array([[0, 1], [1, 0], [1, 0]]), dtype=flow.int32) >>> out = condition.where(x, y) >>> out tensor([[1. , 0.3139], ... [0.0478, 1. ]], dtype=oneflow.float32)
-
oneflow.experimental.
gather
(input, index, dim=0, sparse_grad=False)¶ Gathers values along an axis specified by dim.
For a 3-D tensor the output is specified by:
out[i][j][k] = input[index[i][j][k]][j][k] # if dim == 0 out[i][j][k] = input[i][index[i][j][k]][k] # if dim == 1 out[i][j][k] = input[i][j][index[i][j][k]] # if dim == 2
input and index must have the same number of dimensions. It is also required that index.size(d) <= input.size(d) for all dimensions d != dim. out will have the same shape as index. Note that input and index do not broadcast against each other.
- Parameters
input (Tensor) – the source tensor
dim (int) – the axis along which to index
index (LongTensor) – the indices of elements to gather
For example:
>>> import oneflow.experimental as flow >>> import numpy as np >>> flow.enable_eager_execution() >>> input = np.random.randn(3, 4, 3, 5) >>> index = np.random.choice(np.arange(3), size=180, replace=True).reshape((3, 4, 3, 5)) >>> output = flow.gather(flow.Tensor(input), flow.Tensor(index, dtype=flow.int), dim=1) >>> output.shape flow.Size([3, 4, 3, 5])
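To make the indexing rule concrete, a small 2-D sketch with values chosen for illustration: with dim=0, out[i][j] = input[index[i][j]][j].
>>> import oneflow.experimental as flow >>> import numpy as np >>> flow.enable_eager_execution() >>> inp = flow.Tensor(np.array([[1., 2.], [3., 4.]])) >>> idx = flow.Tensor(np.array([[0, 1], [1, 0]]), dtype=flow.int) >>> flow.gather(inp, idx, dim=0).numpy() # e.g. out[0][1] = input[idx[0][1]][1] = input[1][1] = 4 array([[1., 4.], [3., 2.]], dtype=float32)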
-
oneflow.experimental.Tensor.
gather
(input, index, dim=0, sparse_grad=False)¶ Gathers values along an axis specified by dim.
For a 3-D tensor the output is specified by:
out[i][j][k] = input[index[i][j][k]][j][k] # if dim == 0 out[i][j][k] = input[i][index[i][j][k]][k] # if dim == 1 out[i][j][k] = input[i][j][index[i][j][k]] # if dim == 2
input and index must have the same number of dimensions. It is also required that index.size(d) <= input.size(d) for all dimensions d != dim. out will have the same shape as index. Note that input and index do not broadcast against each other.
- Parameters
input (Tensor) – the source tensor
dim (int) – the axis along which to index
index (LongTensor) – the indices of elements to gather
For example:
>>> import oneflow.experimental as flow >>> import numpy as np >>> flow.enable_eager_execution() >>> input = np.random.randn(3, 4, 3, 5) >>> index = np.random.choice(np.arange(3), size=180, replace=True).reshape((3, 4, 3, 5)) >>> output = flow.gather(flow.Tensor(input), flow.Tensor(index, dtype=flow.int), dim=1) >>> output.shape flow.Size([3, 4, 3, 5])
-
oneflow.experimental.nn.
Embedding
(num_embeddings: int, embedding_dim: int, padding_idx: Optional[int] = None, max_norm: Optional[float] = None, norm_type: Optional[float] = None, scale_grad_by_freq: bool = False, sparse: bool = False, _weight: Optional[oneflow.Tensor] = None)¶ A simple lookup table that stores embeddings of a fixed dictionary and size.
This module is often used to store word embeddings and retrieve them using indices. The input to the module is a list of indices, and the output is the corresponding word embeddings.
- Parameters
num_embeddings (int) – size of the dictionary of embeddings
embedding_dim (int) – the size of each embedding vector
padding_idx (int, optional) – If specified, the entries at padding_idx do not contribute to the gradient; therefore, the embedding vector at padding_idx is not updated during training, i.e. it remains as a fixed “pad”. For a newly constructed Embedding, the embedding vector at padding_idx will default to all zeros, but can be updated to another value to be used as the padding vector.
For example:
>>> import numpy as np >>> import oneflow.experimental as flow >>> flow.enable_eager_execution() >>> indices = flow.Tensor([[1, 2, 4, 5], [4, 3, 2, 9]], dtype=flow.int) >>> m = flow.nn.Embedding(10, 3) >>> y = m(indices)
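A minimal sketch of the padding_idx option described above, assuming (per the description) that the embedding row at padding_idx is initialized to zeros; only the output shape is shown:
>>> import numpy as np >>> import oneflow.experimental as flow >>> flow.enable_eager_execution() >>> m = flow.nn.Embedding(10, 3, padding_idx=0) # row 0 acts as the fixed "pad" >>> indices = flow.Tensor([[0, 2, 0, 5]], dtype=flow.int) >>> y = m(indices) >>> y.shape flow.Size([1, 4, 3])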
-
oneflow.experimental.Tensor.
permute
(tensor, *dims)¶ Returns a view of the original tensor with its dimensions permuted.
- Parameters
*dims (int...) – The desired ordering of dimensions
For example:
>>> import numpy as np >>> import oneflow.experimental as flow >>> flow.enable_eager_execution() >>> input = flow.Tensor(np.random.randn(2, 6, 5, 3), dtype=flow.float32) >>> out = input.permute(1, 0, 2, 3).shape >>> out flow.Size([6, 2, 5, 3])
-
oneflow.experimental.nn.
Hardswish
(inplace: bool = False)¶ Applies the hardswish function, element-wise, as described in the paper: Searching for MobileNetV3.
\[\begin{split}\text{Hardswish}(x) = \begin{cases} 0 & \text{ if } x \le -3 \\ x & \text{ if } x \ge +3 \\ x*(x+3)/6 & \text{ otherwise } \\ \end{cases}\end{split}\]- Parameters
inplace – can optionally do the operation in-place. Default:
False
- Shape:
Input: \((N, *)\) where * means, any number of additional dimensions
Output: \((N, *)\), same shape as the input
>>> import numpy as np >>> import oneflow.experimental as flow >>> flow.enable_eager_execution() >>> x = np.array([-0.5, 0, 0.5]).astype(np.float32) >>> input = flow.Tensor(x) >>> hardswish = flow.nn.Hardswish() >>> out = hardswish(input) >>> out tensor([-0.2083, 0. , 0.2917], dtype=oneflow.float32)
-
oneflow.experimental.nn.
PReLU
(num_parameters: int = 1, init: float = 0.25) → None¶ Applies the element-wise function:
\[PReLU(x) = \max(0,x) + a * \min(0,x)\]Here \(a\) is a learnable parameter. When called without arguments, nn.PReLU() uses a single parameter \(a\) across all input channels. If called with nn.PReLU(nChannels), a separate \(a\) is used for each input channel.
Note
weight decay should not be used when learning \(a\) for good performance.
Note
Channel dim is the 2nd dim of input. When input has dims < 2, then there is no channel dim and the number of channels = 1.
- Parameters
num_parameters (int) – number of \(a\) to learn. Although it takes an int as input, only two values are legitimate: 1, or the number of channels of the input. Default: 1
init (float) – the initial value of \(a\). Default: 0.25
- Shape:
Input: \((N, *)\) where * means, any number of additional dimensions
Output: \((N, *)\), same shape as the input
- Attr:
weight (Tensor): the learnable weights of shape (num_parameters).
>>> import numpy as np >>> import oneflow.experimental as flow >>> flow.enable_eager_execution() >>> m = flow.nn.PReLU() >>> input = flow.Tensor(np.asarray([[[[1, -2], [3, 4]]]]), dtype=flow.float32) >>> print(m(input).numpy()) [[[[ 1. -0.5] [ 3. 4. ]]]]
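A shape-only sketch of the per-channel form mentioned above, assuming num_parameters matches the channel dimension of the input:
>>> import numpy as np >>> import oneflow.experimental as flow >>> flow.enable_eager_execution() >>> m = flow.nn.PReLU(num_parameters=4, init=0.1) # one learnable a per channel >>> x = flow.Tensor(np.random.randn(1, 4, 5, 5), dtype=flow.float32) >>> m(x).shape flow.Size([1, 4, 5, 5])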
-
oneflow.experimental.nn.
Hardtanh
(min_val: float = -1, max_val: float = 1, inplace: bool = False, min_value: Optional[float] = None, max_value: Optional[float] = None)¶ Applies the HardTanh function element-wise
HardTanh is defined as:
\[\begin{split}\text{HardTanh}(x) = \begin{cases} 1 & \text{ if } x > 1 \\ -1 & \text{ if } x < -1 \\ x & \text{ otherwise } \\ \end{cases}\end{split}\]The range of the linear region \([-1, 1]\) can be adjusted using min_val and max_val.
- Parameters
min_val – minimum value of the linear region range. Default: -1
max_val – maximum value of the linear region range. Default: 1
inplace – can optionally do the operation in-place. Default: False
Keyword arguments min_value and max_value have been deprecated in favor of min_val and max_val.
- Shape:
Input: \((N, *)\) where * means, any number of additional dimensions
Output: \((N, *)\), same shape as the input
For example:
>>> import numpy as np >>> import oneflow.experimental as flow >>> flow.enable_eager_execution() >>> m = flow.nn.Hardtanh() >>> arr = np.array([0.2, 0.3, 3.0, 4.0]) >>> x = flow.Tensor(arr) >>> out = m(x) >>> out tensor([0.2, 0.3, 1. , 1. ], dtype=oneflow.float32)
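A minimal sketch of adjusting the linear region with min_val and max_val (values chosen for illustration):
>>> import numpy as np >>> import oneflow.experimental as flow >>> flow.enable_eager_execution() >>> m = flow.nn.Hardtanh(min_val=-2.0, max_val=2.0) # widen the linear region to [-2, 2] >>> x = flow.Tensor(np.array([-3.0, 0.5, 3.0])) >>> m(x).numpy() array([-2. ,  0.5,  2. ], dtype=float32)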
-
oneflow.experimental.nn.
Upsample
(size: Union[int, Tuple[int, ...], None] = None, scale_factor: Union[float, Tuple[float, ...], None] = None, mode: str = 'nearest', align_corners: Optional[bool] = None)¶ The interface is consistent with PyTorch.
The documentation is referenced from: https://pytorch.org/docs/1.9.0/_modules/torch/nn/modules/upsampling.html#Upsample
Upsamples a given multi-channel 1D (temporal), 2D (spatial) or 3D (volumetric) data.
The input data is assumed to be of the form minibatch x channels x [optional depth] x [optional height] x width. Hence, for spatial inputs, we expect a 4D Tensor and for volumetric inputs, we expect a 5D Tensor.
The algorithms available for upsampling are nearest neighbor and linear, bilinear, bicubic and trilinear for 3D, 4D and 5D input Tensor, respectively.
One can either give a scale_factor or the target output size to calculate the output size. (You cannot give both, as it is ambiguous.)
- Parameters
size (int or Tuple[int] or Tuple[int, int] or Tuple[int, int, int], optional) – output spatial sizes
scale_factor (float or Tuple[float] or Tuple[float, float] or Tuple[float, float, float], optional) – multiplier for spatial size. Has to match input size if it is a tuple.
mode (str, optional) – the upsampling algorithm: one of 'nearest', 'linear', 'bilinear', 'bicubic' and 'trilinear'. Default: 'nearest'
align_corners (bool, optional) – if True, the corner pixels of the input and output tensors are aligned, and thus preserving the values at those pixels. This only has effect when mode is 'linear', 'bilinear', or 'trilinear'. Default: False
- Shape:
Input: \((N, C, W_{in})\), \((N, C, H_{in}, W_{in})\) or \((N, C, D_{in}, H_{in}, W_{in})\)
Output: \((N, C, W_{out})\), \((N, C, H_{out}, W_{out})\) or \((N, C, D_{out}, H_{out}, W_{out})\), where
\[D_{out} = \left\lfloor D_{in} \times \text{scale_factor} \right\rfloor\]\[H_{out} = \left\lfloor H_{in} \times \text{scale_factor} \right\rfloor\]\[W_{out} = \left\lfloor W_{in} \times \text{scale_factor} \right\rfloor\]Warning
With align_corners = True, the linearly interpolating modes (linear, bilinear, bicubic, and trilinear) don’t proportionally align the output and input pixels, and thus the output values can depend on the input size. This was the default behavior for these modes up to version 0.3.1. Since then, the default behavior is align_corners = False. See below for concrete examples on how this affects the outputs.
Note
If you want downsampling/general resizing, you should use interpolate().
For example:
>>> import numpy as np >>> import oneflow.experimental as flow >>> flow.enable_eager_execution() >>> input = flow.Tensor(np.arange(1, 5).reshape((1, 1, 2, 2)), dtype=flow.float32) >>> input = input.to("cuda") >>> m = flow.nn.Upsample(scale_factor=2.0, mode="nearest") >>> output = m(input) >>> output tensor([[[[1., 1., 2., 2.], ... [3., 3., 4., 4.]]]], device='cuda:0', dtype=oneflow.float32)
-
oneflow.experimental.nn.
UpsamplingNearest2d
(size: Optional[Tuple[int, int]] = None, scale_factor: Optional[Tuple[float, float]] = None) → None¶ Applies a 2D nearest neighbor upsampling to an input signal composed of several input channels.
To specify the scale, it takes either the size or the scale_factor as its constructor argument.
When size is given, it is the output size of the image (h, w).
- Parameters
size (int or Tuple[int, int], optional) – output spatial sizes
scale_factor (float or Tuple[float, float], optional) – multiplier for spatial size.
Warning
This class is deprecated in favor of interpolate().
- Shape:
Input: \((N, C, H_{in}, W_{in})\)
Output: \((N, C, H_{out}, W_{out})\) where
\[H_{out} = \left\lfloor H_{in} \times \text{scale_factor} \right\rfloor\]\[W_{out} = \left\lfloor W_{in} \times \text{scale_factor} \right\rfloor\]For example:
>>> import numpy as np >>> import oneflow.experimental as flow >>> flow.enable_eager_execution() >>> input = flow.Tensor(np.arange(1, 5).reshape((1, 1, 2, 2)), dtype=flow.float32) >>> input = input.to("cuda") >>> m = flow.nn.UpsamplingNearest2d(scale_factor=2.0) >>> output = m(input) >>> output tensor([[[[1., 1., 2., 2.], ... [3., 3., 4., 4.]]]], device='cuda:0', dtype=oneflow.float32)
-
oneflow.experimental.nn.
UpsamplingBilinear2d
(size: Optional[Tuple[int, int]] = None, scale_factor: Optional[Tuple[float, float]] = None) → None¶ Applies a 2D bilinear upsampling to an input signal composed of several input channels.
To specify the scale, it takes either the size or the scale_factor as its constructor argument.
When size is given, it is the output size of the image (h, w).
- Parameters
size (int or Tuple[int, int], optional) – output spatial sizes
scale_factor (float or Tuple[float, float], optional) – multiplier for spatial size.
Warning
This class is deprecated in favor of interpolate(). It is equivalent to nn.functional.interpolate(..., mode='bilinear', align_corners=True).
- Shape:
Input: \((N, C, H_{in}, W_{in})\)
Output: \((N, C, H_{out}, W_{out})\) where
\[H_{out} = \left\lfloor H_{in} \times \text{scale_factor} \right\rfloor\]\[W_{out} = \left\lfloor W_{in} \times \text{scale_factor} \right\rfloor\]For example:
>>> import numpy as np >>> import oneflow.experimental as flow >>> flow.enable_eager_execution() >>> input = flow.Tensor(np.arange(1, 5).reshape((1, 1, 2, 2)), dtype=flow.float32) >>> input = input.to("cuda") >>> m = flow.nn.UpsamplingBilinear2d(scale_factor=2.0) >>> output = m(input) >>> output tensor([[[[1. , 1.3333, 1.6667, 2. ], ... [3. , 3.3333, 3.6667, 4. ]]]], device='cuda:0', dtype=oneflow.float32)