oneflow.Tensor

OneFlow Tensor Class

class oneflow.Tensor
property T

Is this Tensor with its dimensions reversed.

If n is the number of dimensions in x, x.T is equivalent to x.permute(n-1, n-2, …, 0).
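
For example, a small sketch of the dimension reversal:

>>> import oneflow as flow
>>> x = flow.ones(2, 3, 4)
>>> x.T.shape
oneflow.Size([4, 3, 2])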

abs()

See oneflow.abs()

acos()

See oneflow.acos()

acosh()

See oneflow.acosh()

add(other, *, alpha=1)

See oneflow.add()

add_(other, *, alpha=1)

In-place version of oneflow.Tensor.add().

addcmul()

See oneflow.addcmul()

addcmul_()

In-place version of oneflow.Tensor.addcmul().

addmm(mat1, mat2, alpha=1, beta=1)

See oneflow.addmm()

amax()

See oneflow.amax()

amin()

See oneflow.amin()

arccos()

See oneflow.arccos()

arccosh()

See oneflow.arccosh()

arcsin()

See oneflow.arcsin()

arcsinh()

See oneflow.arcsinh()

arctan()

See oneflow.arctan()

arctanh()

See oneflow.arctanh()

argmax()

See oneflow.argmax()

argmin()

See oneflow.argmin()

argsort(dim=None, descending=None)

See oneflow.argsort()

argwhere()

See oneflow.argwhere()

asin()

See oneflow.asin()

asinh()

See oneflow.asinh()

atan()

See oneflow.atan()

atan2()

See oneflow.atan2()

atanh()

See oneflow.atanh()

backward(gradient=None, retain_graph=False, create_graph=False)

The interface is consistent with PyTorch. The documentation is referenced from: https://pytorch.org/docs/1.10/generated/torch.Tensor.backward.html.

Computes the gradient of current tensor w.r.t. graph leaves.

The graph is differentiated using the chain rule. If the tensor is non-scalar (i.e. its data has more than one element) and requires gradient, the function additionally requires specifying gradient. It should be a tensor of matching type and location, that contains the gradient of the differentiated function w.r.t. self.

This function accumulates gradients in the leaves - you might need to zero .grad attributes or set them to None before calling it. See Default gradient layouts for details on the memory layout of accumulated gradients.

Note

If you run any forward ops, create gradient, and/or call backward in a user-specified CUDA stream context, see Stream semantics of backward passes.

Note

When inputs are provided and a given input is not a leaf, the current implementation will call its grad_fn (though it is not strictly needed to get these gradients). It is an implementation detail on which the user should not rely. See https://github.com/pytorch/pytorch/pull/60521#issuecomment-867061780 for more details.

Parameters
  • gradient (Tensor or None) – Gradient w.r.t. the tensor. If it is a tensor, it will be automatically converted to a Tensor that does not require grad unless create_graph is True. None values can be specified for scalar Tensors or ones that don’t require grad. If a None value would be acceptable then this argument is optional.

  • retain_graph (bool, optional) – If False, the graph used to compute the grads will be freed. Note that in nearly all cases setting this option to True is not needed and often can be worked around in a much more efficient way. Defaults to the value of create_graph.

  • create_graph (bool, optional) – If True, graph of the derivative will be constructed, allowing to compute higher order derivative products. Defaults to False.
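
For example, a minimal sketch of a non-scalar backward (flow.ones_like is assumed here to build the upstream gradient):

>>> import oneflow as flow
>>> x = flow.tensor([2.0, 3.0], requires_grad=True)
>>> y = x * x
>>> # y has more than one element, so an explicit gradient of matching shape is required
>>> y.backward(gradient=flow.ones_like(y))
>>> x.grad
tensor([4., 6.], dtype=oneflow.float32)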

bmm()

See oneflow.bmm()

byte()

self.byte() is equivalent to self.to(oneflow.uint8). See oneflow.Tensor.to()
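
For example:

>>> import oneflow as flow
>>> x = flow.tensor([1, 2], dtype=flow.int32)
>>> x.byte().dtype
oneflow.uint8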

cast()

See oneflow.cast()

ceil()

See oneflow.ceil()

chunk()

See oneflow.chunk()

clamp()

See oneflow.clamp().

clamp_()

In-place version of oneflow.Tensor.clamp().

clip()

Alias for oneflow.Tensor.clamp().

clip_()

Alias for oneflow.Tensor.clamp_().

clone()

See oneflow.clone()

copy_(other: Union[oneflow.Tensor, numpy.ndarray])

The interface is consistent with PyTorch.

Tensor.copy_(src, non_blocking=False) → Tensor

Copies the elements from src into self tensor and returns self.

The src tensor must be broadcastable with the self tensor. It may be of a different data type or reside on a different device.

Parameters
  • src (Tensor) – the source tensor to copy from

  • non_blocking (bool) – if True and this copy is between CPU and GPU, the copy may occur asynchronously with respect to the host. For other cases, this argument has no effect.
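
For example, a minimal sketch in which src broadcasts across self:

>>> import oneflow as flow
>>> x = flow.zeros(2, 3)
>>> src = flow.tensor([1., 2., 3.])
>>> y = x.copy_(src)  # copies in place and returns self
>>> x
tensor([[1., 2., 3.],
        [1., 2., 3.]], dtype=oneflow.float32)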

cos()

See oneflow.cos()

cosh()

See oneflow.cosh()

cpu()

Returns a copy of this object in CPU memory. If this object is already in CPU memory and on the correct device, then no copy is performed and the original object is returned.

For example:

>>> import oneflow as flow

>>> input = flow.tensor([1, 2, 3, 4, 5], device=flow.device("cuda"))
>>> output = input.cpu()
>>> output.device
device(type='cpu', index=0)
cuda()

Returns a copy of this object in CUDA memory. If this object is already in CUDA memory and on the correct device, then no copy is performed and the original object is returned.

Parameters

device (flow.device) – The destination GPU device. Defaults to the current CUDA device.

For example:

>>> import oneflow as flow

>>> input = flow.Tensor([1, 2, 3, 4, 5])
>>> output = input.cuda()
>>> output.device
device(type='cuda', index=0)
data
detach()
device

The documentation is referenced from: https://pytorch.org/docs/1.10/generated/torch.Tensor.device.html.

Is the oneflow.device on which this Tensor resides. This attribute is invalid for global tensors.

diag()

See oneflow.diag()

diagonal()

See oneflow.diagonal()

dim()

Tensor.dim() → int

Returns the number of dimensions of self tensor.
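
For example:

>>> import oneflow as flow
>>> x = flow.ones(2, 3, 4)
>>> x.dim()
3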

div()

See oneflow.div()

div_(value) → Tensor

In-place version of oneflow.Tensor.div().

dot()

See oneflow.dot()

double()

Tensor.double() is equivalent to Tensor.to(flow.float64). See to().

Parameters

input (Tensor) – the input tensor.

For example:

>>> import oneflow as flow
>>> import numpy as np

>>> input = flow.tensor(np.random.randn(1, 2, 3), dtype=flow.int)
>>> input = input.double()
>>> input.dtype
oneflow.float64
dtype
element_size()

Tensor.element_size() → int

Returns the size in bytes of an individual element.
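
For example:

>>> import oneflow as flow
>>> flow.tensor([1., 2.], dtype=flow.float64).element_size()
8
>>> flow.tensor([1, 2], dtype=flow.int8).element_size()
1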

eq(other)

See oneflow.eq()

erf() → Tensor

See oneflow.erf()

erfc() → Tensor

See oneflow.erfc()

erfinv()

See oneflow.erfinv()

erfinv_()

In-place version of oneflow.erfinv()

exp()

See oneflow.exp()

expand() → Tensor

See oneflow.expand()

expand_as(other) → Tensor

Expand this tensor to the same size as other. self.expand_as(other) is equivalent to self.expand(other.size()).

Please see expand() for more information about expand.

Parameters

other (oneflow.Tensor) – The result tensor has the same size as other.
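
For example:

>>> import oneflow as flow
>>> x = flow.tensor([[1.], [2.]])
>>> other = flow.ones(2, 3)
>>> x.expand_as(other).shape
oneflow.Size([2, 3])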

expm1()

See oneflow.expm1()

fill_(value)

Tensor.fill_(value) → Tensor

Fills self tensor with the specified value.
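
For example, a minimal sketch:

>>> import oneflow as flow
>>> x = flow.ones(2, 2)
>>> x.fill_(5.)
tensor([[5., 5.],
        [5., 5.]], dtype=oneflow.float32)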

flatten()

See oneflow.flatten()

flip(dims)

See oneflow.flip()

float()

Tensor.float() is equivalent to Tensor.to(flow.float32). See to().

Parameters

input (Tensor) – the input tensor.

For example:

>>> import oneflow as flow
>>> import numpy as np

>>> input = flow.tensor(np.random.randn(1, 2, 3), dtype=flow.int)
>>> input = input.float()
>>> input.dtype
oneflow.float32
floor()

See oneflow.floor()

floor_()

In-place version of oneflow.floor()

fmod(other) → Tensor

See oneflow.fmod()

gather(dim, index) → Tensor

See oneflow.gather()

ge()

See oneflow.ge()

gelu()

See oneflow.gelu()

get_device() → Device ordinal (Integer)

For CUDA tensors, this function returns the device ordinal of the GPU on which the tensor resides. For CPU tensors, an error is thrown.

global_to_global(placement=None, sbp=None, *, grad_sbp=None, check_meta=False) → Tensor

Performs Tensor placement and/or sbp conversion.

Note

This tensor must be a global tensor.

At least one of placement and sbp is required.

If placement and sbp are both the same as this tensor's own placement and sbp, then this tensor itself is returned.

Parameters
  • placement (flow.placement, optional) – the desired placement of returned global tensor. Default: None

  • sbp (flow.sbp.sbp or tuple of flow.sbp.sbp, optional) – the desired sbp of returned global tensor. Default: None

Keyword Arguments
  • grad_sbp (flow.sbp.sbp or tuple of flow.sbp.sbp, optional) – manually specify the sbp of this tensor’s grad tensor in the backward pass. If None, the grad tensor sbp will be inferred automatically. Default: None

  • check_meta (bool, optional) – indicates whether to check meta information. If set to True, check the consistency of the input meta information (placement and sbp) on each rank. Default: False

>>> # Run on 2 ranks respectively
>>> import oneflow as flow
>>> input = flow.tensor([0., 1.], dtype=flow.float32, placement=flow.placement("cpu", ranks=[0, 1]), sbp=[flow.sbp.broadcast]) 
>>> output = input.global_to_global(placement=flow.placement("cpu", ranks=[0, 1]), sbp=[flow.sbp.split(0)]) 
>>> print(output.size()) 
>>> print(output) 
>>> # results on rank 0
oneflow.Size([2])
tensor([0., 1.], placement=oneflow.placement(type="cpu", ranks=[0, 1]), sbp=(oneflow.sbp.split(dim=0),), dtype=oneflow.float32)
>>> # results on rank 1
oneflow.Size([2])
tensor([0., 1.], placement=oneflow.placement(type="cpu", ranks=[0, 1]), sbp=(oneflow.sbp.split(dim=0),), dtype=oneflow.float32)
grad

Return the gradient calculated by autograd functions. This property is None by default.

grad_fn

Return the function that created this tensor if its requires_grad is True.

gt()

See oneflow.gt()

half()

self.half() is equivalent to self.to(dtype=oneflow.float16).

See oneflow.Tensor.to()
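
For example:

>>> import oneflow as flow
>>> x = flow.tensor([1., 2.], dtype=flow.float32)
>>> x.half().dtype
oneflow.float16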

in_top_k(targets, predictions, k) → Tensor

See oneflow.in_top_k()

index_select(dim, index) → Tensor

See oneflow.index_select()

int()

Tensor.int() is equivalent to Tensor.to(flow.int32). See to().

Parameters

input (Tensor) – the input tensor.

For example:

>>> import oneflow as flow
>>> import numpy as np

>>> input = flow.tensor(np.random.randn(1, 2, 3), dtype=flow.float32)
>>> input = input.int()
>>> input.dtype
oneflow.int32
is_contiguous() → bool

Returns True if self tensor is contiguous in memory.
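
For example, a minimal sketch (this assumes transpose() returns a non-contiguous view, as in PyTorch):

>>> import oneflow as flow
>>> x = flow.ones(2, 3)
>>> x.is_contiguous()
True
>>> x.transpose(0, 1).is_contiguous()
False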

is_cuda

Is True if the Tensor is stored on the GPU, False otherwise.

is_floating_point()

See oneflow.is_floating_point()

is_global

Return whether this Tensor is a global tensor.

is_lazy

Return whether this Tensor is a lazy tensor.

is_leaf

Compatible with PyTorch.

All Tensors that have requires_grad set to False will be leaf Tensors by convention.

Tensors that have requires_grad set to True will be leaf Tensors if they were created by source operations.

Only leaf Tensors will have their grad populated during a call to backward(). To get grad populated for non-leaf Tensors, you can use retain_grad().

For example:

>>> import oneflow as flow
>>> a = flow.rand(10, requires_grad=False)
>>> a.is_leaf
True
>>> a = flow.rand(10, requires_grad=True)
>>> a.is_leaf
True
>>> b = a.cuda()
>>> b.is_leaf
False
>>> c = a + 2
>>> c.is_leaf
False
is_pinned() → bool

Returns True if this tensor resides in pinned memory.

item()

Returns the value of this tensor as a standard Python number. This only works for tensors with one element. For other cases, see tolist().

This operation is not differentiable.

Parameters

input (Tensor) – the input tensor.

For example:

>>> import oneflow as flow
>>> x = flow.tensor([1.0])
>>> x.item()
1.0
le()

See oneflow.le()

local_to_global(placement=None, sbp=None, *, check_meta=True) → Tensor

Creates a global tensor from a local tensor.

Note

This tensor must be a local tensor.

Both placement and sbp are required.

The returned global tensor takes this tensor as its local component in the current rank.

Usually there is no data communication, but when sbp is oneflow.sbp.broadcast, the data on rank 0 will be broadcast to the other ranks.

Parameters
  • placement (flow.placement, optional) – the desired placement of returned global tensor. Default: None

  • sbp (flow.sbp.sbp or tuple of flow.sbp.sbp, optional) – the desired sbp of returned global tensor. Default: None

Keyword Arguments

check_meta (bool, optional) – indicates whether to check meta information when creating a global tensor from a local tensor. It can only be set to False when the shape and dtype of the input local tensor on each rank are the same. If set to False, the execution of local_to_global can be accelerated. Default: True

>>> # Run on 2 ranks respectively
>>> import oneflow as flow
>>> input = flow.tensor([0., 1.], dtype=flow.float32) 
>>> output = input.local_to_global(placement=flow.placement("cpu", ranks=[0, 1]), sbp=[flow.sbp.split(0)], check_meta=False) 
>>> print(output.size()) 
>>> print(output) 
>>> # results on rank 0
oneflow.Size([4])
tensor([0., 1., 0., 1.], placement=oneflow.placement(type="cpu", ranks=[0, 1]), sbp=(oneflow.sbp.split(dim=0),), dtype=oneflow.float32)
>>> # results on rank 1
oneflow.Size([4])
tensor([0., 1., 0., 1.], placement=oneflow.placement(type="cpu", ranks=[0, 1]), sbp=(oneflow.sbp.split(dim=0),), dtype=oneflow.float32)
log()

See oneflow.log()

log1p()

See oneflow.log1p()

logical_and() → Tensor

See oneflow.logical_and()

logical_not() → Tensor

See oneflow.logical_not()

logical_or() → Tensor

See oneflow.logical_or()

logical_xor() → Tensor

See oneflow.logical_xor()

long()

Tensor.long() is equivalent to Tensor.to(flow.int64). See to().

Parameters

input (Tensor) – the input tensor.

For example:

>>> import oneflow as flow
>>> import numpy as np

>>> input = flow.tensor(np.random.randn(1, 2, 3), dtype=flow.float32)
>>> input = input.long()
>>> input.dtype
oneflow.int64
lt()

See oneflow.lt()

masked_fill()

See oneflow.masked_fill()

masked_select(mask)

See oneflow.masked_select()

matmul()

See oneflow.matmul()

max(dim, index) → Tensor

See oneflow.max()

mean(dim=None, keepdim=False) → Tensor

See oneflow.mean()

min(dim, index) → Tensor

See oneflow.min()

mish()

See oneflow.mish()

mm(mat2)

See oneflow.mm()

mul(value) → Tensor

See oneflow.mul()

mul_(value) → Tensor

In-place version of oneflow.Tensor.mul().

mv(vec)

See oneflow.mv()

narrow()

See oneflow.narrow()

property ndim

See oneflow.Tensor.dim()

ndimension()
ne()

See oneflow.ne()

negative()

See oneflow.negative()

nelement()

Tensor.nelement() → int

Alias for numel()
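
For example:

>>> import oneflow as flow
>>> flow.ones(2, 3).nelement()
6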

new_empty(*size, dtype=None, device=None, placement=None, sbp=None, requires_grad=False) → Tensor

Returns a Tensor of size size filled with uninitialized data. By default, the returned Tensor has the same flow.dtype and flow.device as this tensor.

Parameters
  • size (int...) – a list, tuple, or flow.Size of integers defining the shape of the output tensor.

  • dtype (flow.dtype, optional) – the desired type of returned tensor. Default: if None, same flow.dtype as this tensor.

  • device (flow.device, optional) – the desired device of returned tensor. Default: if None, same flow.device as this tensor.

  • placement (flow.placement, optional) – the desired placement of returned global tensor. Default: if None, the returned tensor is local one using the argument device.

  • sbp (flow.sbp.sbp or tuple of flow.sbp.sbp, optional) – the desired sbp descriptor of returned global tensor. Default: if None, the returned tensor is local one using the argument device.

  • requires_grad (bool, optional) – If autograd should record operations on the returned tensor. Default: False.

For example:

>>> import oneflow as flow

>>> x = flow.ones(())
>>> y = x.new_empty((2, 2))
>>> y.shape
oneflow.Size([2, 2])
new_ones() → Tensor

See oneflow.new_ones()

new_zeros(size=None, dtype=None, device=None, placement=None, sbp=None, requires_grad=False) → Tensor

Returns a Tensor of size size filled with 0. By default, the returned Tensor has the same flow.dtype and flow.device as this tensor.

Parameters
  • size (int...) – a list, tuple, or flow.Size of integers defining the shape of the output tensor.

  • dtype (flow.dtype, optional) – the desired type of returned tensor. Default: if None, same flow.dtype as this tensor.

  • device (flow.device, optional) – the desired device of returned tensor. Default: if None, same flow.device as this tensor.

  • placement (flow.placement, optional) – the desired placement of returned global tensor. Default: if None, the returned tensor is local one using the argument device.

  • sbp (flow.sbp.sbp or tuple of flow.sbp.sbp, optional) – the desired sbp descriptor of returned global tensor. Default: if None, the returned tensor is local one using the argument device.

  • requires_grad (bool, optional) – If autograd should record operations on the returned tensor. Default: False.

For example:

>>> import numpy as np
>>> import oneflow as flow

>>> x = flow.Tensor(np.ones((1, 2, 3)))
>>> y = x.new_zeros((2, 2))
>>> y
tensor([[0., 0.],
        [0., 0.]], dtype=oneflow.float32)
nms(scores, iou_threshold: float)

See oneflow.nms()

nonzero(input, as_tuple=False) → Tensor

See oneflow.nonzero()

norm(p='fro', dim=None, keepdim=False, dtype=None) → Tensor

See oneflow.norm().

normal_(mean=0, std=1, *, generator=None) → Tensor

Fills self tensor with elements sampled from the normal distribution parameterized by mean and std.
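
For example, a minimal sketch (values are random, so only the shape is shown):

>>> import oneflow as flow
>>> x = flow.zeros(2, 3)
>>> y = x.normal_(mean=0.0, std=1.0)  # fills x in place and returns it
>>> y.shape
oneflow.Size([2, 3])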

numel()

See oneflow.numel()

numpy()

Tensor.numpy() → numpy.ndarray

Returns self tensor as a NumPy ndarray. This tensor and the returned ndarray share the same underlying storage. Changes to self tensor will be reflected in the ndarray and vice versa.
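
For example, a minimal sketch of the shared storage for a CPU tensor:

>>> import oneflow as flow
>>> t = flow.ones(2)
>>> a = t.numpy()
>>> a[0] = 5  # writing through the ndarray is visible in the tensor
>>> t
tensor([5., 1.], dtype=oneflow.float32)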

permute()

See oneflow.permute()

pin_memory() → Tensor

Copies the tensor to pinned memory, if it’s not already pinned.

placement

Is the oneflow.placement on which this Tensor resides. This attribute is invalid for local tensors.

pow()

See oneflow.pow()

prod(dim=None, keepdim=False) → Tensor

See oneflow.prod()

reciprocal()

See oneflow.reciprocal()

register_hook(hook)

Registers a backward hook.

The hook will be called every time a gradient with respect to the Tensor is computed. The hook should have the following signature:

hook(grad) -> Tensor or None

The hook should not modify its argument, but it can optionally return a new gradient which will be used in place of grad.

For example:

>>> import oneflow as flow
>>> x = flow.ones(5, requires_grad=True)
>>> def hook(grad):
...     return grad * 2
>>> x.register_hook(hook)
>>> y = x * 2
>>> y.sum().backward()
>>> x.grad
tensor([4., 4., 4., 4., 4.], dtype=oneflow.float32)
relu()

See oneflow.relu()

repeat(*size) → Tensor

See oneflow.repeat()

repeat_interleave(repeats, dim=None, *, output_size=None) → Tensor

See oneflow.repeat_interleave()

requires_grad

Compatible with PyTorch.

Is True if gradients need to be computed for this Tensor, False otherwise.

requires_grad_(requires_grad=True) → Tensor

Compatible with PyTorch.

Parameters

requires_grad (bool) – Change the requires_grad flag for this Tensor. Default is True.

For example:

>>> import oneflow as flow
>>> a = flow.rand(10, requires_grad=False)
>>> a.requires_grad
False
>>> a = a.requires_grad_(requires_grad=True)
>>> a.requires_grad
True
reshape()

See oneflow.reshape()

retain_grad()

Compatible with PyTorch.

Enables this Tensor to have its grad populated during backward(). This is a no-op for leaf tensors.
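
For example, a minimal sketch:

>>> import oneflow as flow
>>> x = flow.ones(3, requires_grad=True)
>>> y = x * 2            # y is a non-leaf tensor
>>> y.retain_grad()      # ask autograd to keep y.grad
>>> y.sum().backward()
>>> y.grad
tensor([1., 1., 1.], dtype=oneflow.float32)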

roll()

See oneflow.roll()

round()

See oneflow.round()

rsqrt()

See oneflow.rsqrt()

sbp

Is the oneflow.sbp describing how the data of this global tensor is distributed. This attribute is invalid for local tensors.

selu()

See oneflow.selu()

shape
sigmoid()

See oneflow.sigmoid()

sign()

See oneflow.sign()

silu()

See oneflow.silu()

sin() → Tensor

See oneflow.sin()

sin_()

In-place version of oneflow.sin()

sinh()

See oneflow.sinh()

size()

The interface is consistent with PyTorch.

Returns the size of the self tensor. If dim is not specified, the returned value is a oneflow.Size, a subclass of tuple. If dim is specified, returns an int holding the size of that dimension.

Parameters

dim (int, optional) – The dimension for which to retrieve the size.
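
For example:

>>> import oneflow as flow
>>> x = flow.ones(2, 3)
>>> x.size()
oneflow.Size([2, 3])
>>> x.size(1)
3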

softmax()

See oneflow.softmax()

softplus()

See oneflow.softplus()

softsign()

See oneflow.softsign()

sort(dim: int = -1, descending: bool = False)

See oneflow.sort()

split(split_size_or_sections=None, dim=0)

See oneflow.split()

sqrt()

See oneflow.sqrt()

square()

See oneflow.square()

squeeze()

See oneflow.squeeze()

std()

See oneflow.std()

storage_offset() → int

Returns self tensor’s offset in the underlying storage in terms of number of storage elements (not bytes).

Example:

>>> import oneflow as flow
>>> x = flow.tensor([1, 2, 3, 4, 5])
>>> x.storage_offset()
0
stride()
sub(other)

See oneflow.sub()

sub_(value) → Tensor

In-place version of oneflow.Tensor.sub().

sum(dim=None, keepdim=False) → Tensor

See oneflow.sum()

swapaxes()

See oneflow.swapaxes()

swapdims()

See oneflow.swapdims()

t()

Tensor.t() → Tensor

See oneflow.t()

tan()

See oneflow.tan()

tanh()

See oneflow.tanh()

tile(*dims) → Tensor

See oneflow.tile()

to(*args, **kwargs)
Performs Tensor dtype and/or device conversion.

A flow.dtype and flow.device are inferred from the arguments of input.to(*args, **kwargs).

Note

If the input Tensor already has the correct flow.dtype and flow.device, then input is returned. Otherwise, the returned tensor is a copy of input with the desired flow.dtype and flow.device.

Returns

A Tensor.

Return type

oneflow.Tensor

For example:

>>> import numpy as np
>>> import oneflow as flow

>>> arr = np.random.randint(1, 9, size=(1, 2, 3, 4))
>>> input = flow.Tensor(arr)
>>> output = input.to(dtype=flow.float32)
>>> np.array_equal(arr.astype(np.float32), output.numpy())
True
to_consistent(*args, **kwargs)

This interface is no longer available, please use oneflow.Tensor.to_global() instead.

to_global(placement=None, sbp=None, **kwargs) → Tensor

Creates a global tensor if this tensor is a local tensor, otherwise performs Tensor placement and/or sbp conversion.

Note

This tensor can be a local tensor or a global tensor.

  • For local tensor

    Both placement and sbp are required.

    The returned global tensor takes this tensor as its local component in the current rank.

    Usually there is no data communication, but when sbp is oneflow.sbp.broadcast, the data on rank 0 will be broadcast to the other ranks.

  • For global tensor

    At least one of placement and sbp is required.

    If placement and sbp are both the same as this tensor’s own placement and sbp, then this tensor itself is returned.

Parameters
  • placement (flow.placement, optional) – the desired placement of returned global tensor. Default: None

  • sbp (flow.sbp.sbp or tuple of flow.sbp.sbp, optional) – the desired sbp of returned global tensor. Default: None

Keyword Arguments
  • grad_sbp (flow.sbp.sbp or tuple of flow.sbp.sbp, optional) – manually specify the sbp of this tensor’s grad tensor in the backward pass. If None, the grad tensor sbp will be inferred automatically. It is only used if this tensor is a global tensor. Default: None

  • check_meta (bool, optional) – indicates whether to check meta information. If set to True, check the input meta information on each rank. Default: True if this tensor is a local tensor, False if this tensor is a global tensor

For local tensor:

>>> # Run on 2 ranks respectively
>>> import oneflow as flow
>>> input = flow.tensor([0., 1.], dtype=flow.float32) 
>>> output = input.to_global(placement=flow.placement("cpu", ranks=[0, 1]), sbp=[flow.sbp.split(0)], check_meta=False) 
>>> print(output.size()) 
>>> print(output) 
>>> # results on rank 0
oneflow.Size([4])
tensor([0., 1., 0., 1.], placement=oneflow.placement(type="cpu", ranks=[0, 1]), sbp=(oneflow.sbp.split(dim=0),), dtype=oneflow.float32)
>>> # results on rank 1
oneflow.Size([4])
tensor([0., 1., 0., 1.], placement=oneflow.placement(type="cpu", ranks=[0, 1]), sbp=(oneflow.sbp.split(dim=0),), dtype=oneflow.float32)

For global tensor:

>>> # Run on 2 ranks respectively
>>> import oneflow as flow
>>> input = flow.tensor([0., 1.], dtype=flow.float32, placement=flow.placement("cpu", ranks=[0, 1]), sbp=[flow.sbp.broadcast]) 
>>> output = input.to_global(placement=flow.placement("cpu", ranks=[0, 1]), sbp=[flow.sbp.split(0)]) 
>>> print(output.size()) 
>>> print(output) 
>>> # results on rank 0
oneflow.Size([2])
tensor([0., 1.], placement=oneflow.placement(type="cpu", ranks=[0, 1]), sbp=(oneflow.sbp.split(dim=0),), dtype=oneflow.float32)
>>> # results on rank 1
oneflow.Size([2])
tensor([0., 1.], placement=oneflow.placement(type="cpu", ranks=[0, 1]), sbp=(oneflow.sbp.split(dim=0),), dtype=oneflow.float32)
to_local() → Tensor

Returns the local component of this global tensor in the current rank.

Note

This tensor should be a global tensor, and it returns an empty tensor if there is no local component in the current rank.

No copy occurs in this operation.

For example:

>>> # Run on 2 ranks respectively
>>> import oneflow as flow
>>> x = flow.tensor([0., 1.], dtype=flow.float32, placement=flow.placement("cpu", ranks=[0, 1]), sbp=[flow.sbp.split(0)]) 
>>> y = x.to_local() 
>>> print(y.size()) 
>>> print(y) 
>>> # results on rank 0
oneflow.Size([1])
tensor([0.], dtype=oneflow.float32)
>>> # results on rank 1
oneflow.Size([1])
tensor([1.], dtype=oneflow.float32)
tolist()

Returns the tensor as a (nested) list. For scalars, a standard Python number is returned, just like with item(). Tensors are automatically moved to the CPU first if necessary.

This operation is not differentiable.

Parameters

input (Tensor) – the input tensor.

For example:

>>> import oneflow as flow
>>> input = flow.tensor([[1,2,3], [4,5,6]])
>>> input.tolist()
[[1, 2, 3], [4, 5, 6]]
topk(k, dim: Optional[int] = None, largest: bool = True, sorted: bool = True)

See oneflow.topk()

transpose()

See oneflow.transpose()

tril()

See oneflow.tril()

triu()

See oneflow.triu()

type()
Returns the type if dtype is not provided, else casts this object to the specified type.

If this is already of the correct type, no copy is performed and the original object is returned.

Parameters
  • dtype (oneflow.dtype or oneflow.tensortype or string, optional) – The desired type.

  • non_blocking (bool) – (Not Implemented yet) If True, and the source is in pinned memory and destination is on the GPU or vice versa, the copy is performed asynchronously with respect to the host. Otherwise, the argument has no effect.

For example:

>>> import oneflow as flow
>>> a = flow.tensor([1, 2], dtype=flow.float32)
>>> a.type()
'oneflow.FloatTensor'
>>> a.type(flow.int8)  # dtype input
tensor([1, 2], dtype=oneflow.int8)
>>> a.type(flow.cuda.DoubleTensor)  # tensortype input
tensor([1., 2.], device='cuda:0', dtype=oneflow.float64)
>>> a.type("oneflow.HalfTensor")  # string input
tensor([1., 2.], dtype=oneflow.float16)
type_as(target)
Returns this tensor cast to the type of the given tensor.

This is a no-op if the tensor is already of the correct type.

Parameters
  • input (Tensor) – the input tensor.

  • target (Tensor) – the tensor which has the desired type.

For example:

>>> import oneflow as flow
>>> import numpy as np

>>> input = flow.tensor(np.random.randn(1, 2, 3), dtype=flow.float32)
>>> target = flow.tensor(np.random.randn(4, 5, 6), dtype = flow.int32)
>>> input = input.type_as(target)
>>> input.dtype
oneflow.int32
unbind()

See oneflow.unbind()

unfold()

The interface is consistent with PyTorch. The documentation is referenced from: https://pytorch.org/docs/1.10/generated/torch.Tensor.unfold.html.

Returns a view of the original tensor which contains all slices of size size from self tensor in the dimension dimension.

Step between two slices is given by step.

If sizedim is the size of dimension dimension for self, the size of dimension dimension in the returned tensor will be (sizedim - size) / step + 1.

An additional dimension of size size is appended in the returned tensor.

Parameters
  • dimension (int) – dimension in which unfolding happens

  • size (int) – the size of each slice that is unfolded

  • step (int) – the step between each slice

For example:

>>> import numpy as np
>>> import oneflow as flow

>>> x = flow.arange(1, 8)
>>> x
tensor([1, 2, 3, 4, 5, 6, 7], dtype=oneflow.int64)
>>> x.unfold(0, 2, 1)
tensor([[1, 2],
        [2, 3],
        [3, 4],
        [4, 5],
        [5, 6],
        [6, 7]], dtype=oneflow.int64)
>>> x.unfold(0, 2, 2)
tensor([[1, 2],
        [3, 4],
        [5, 6]], dtype=oneflow.int64)
uniform_(a=0, b=1)

Tensor.uniform_(from=0, to=1) → Tensor

Fills self tensor with numbers sampled from the continuous uniform distribution:

\[P(x) = \frac{1}{\text{to} - \text{from}}\]
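
For example, a minimal sketch (values are random, so only the range is checked; Tensor.all() and elementwise comparisons are assumed to be available, as in PyTorch):

>>> import oneflow as flow
>>> x = flow.zeros(2, 3)
>>> y = x.uniform_(0, 1)  # fills x in place and returns it
>>> (y >= 0).all().item()
True
>>> (y < 1).all().item()
True
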
unsqueeze()

See oneflow.unsqueeze()

var()

See oneflow.var()

view()

The interface is consistent with PyTorch. The documentation is referenced from: https://pytorch.org/docs/1.10/generated/torch.Tensor.view.html.

Returns a new tensor with the same data as the self tensor but of a different shape.

The returned tensor shares the same data and must have the same number of elements, but may have a different size. For a tensor to be viewed, the new view size must be compatible with its original size and stride, i.e., each new view dimension must either be a subspace of an original dimension, or only span across original dimensions \(d, d+1, \dots, d+k\) that satisfy the following contiguity-like condition that \(\forall i = d, \dots, d+k-1\),

\[\text{stride}[i] = \text{stride}[i+1] \times \text{size}[i+1]\]

Otherwise, it will not be possible to view self tensor as shape without copying it (e.g., via contiguous()). When it is unclear whether a view() can be performed, it is advisable to use reshape(), which returns a view if the shapes are compatible, and copies (equivalent to calling contiguous()) otherwise.

Parameters
  • input – A Tensor.

  • *shape – flow.Size or int…

Returns

A Tensor with the same type as input.

For example:

>>> import numpy as np
>>> import oneflow as flow

>>> x = np.array(
...    [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12], [13, 14, 15, 16]]
... ).astype(np.float32)
>>> input = flow.Tensor(x)

>>> y = input.view(2, 2, 2, -1).numpy().shape
>>> y
(2, 2, 2, 2)
view_as(other) → Tensor

Views this tensor as the same size as other. self.view_as(other) is equivalent to self.view(other.size()).

Please see view() for more information about view.

Parameters

other (oneflow.Tensor) – The result tensor has the same size as other.
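
For example:

>>> import oneflow as flow
>>> x = flow.arange(6)
>>> other = flow.ones(2, 3)
>>> x.view_as(other).shape
oneflow.Size([2, 3])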

where(x=None, y=None)

See oneflow.where()

zero_() → Tensor

Fills self tensor with zeros.
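
For example, a minimal sketch:

>>> import oneflow as flow
>>> x = flow.ones(2, 2)
>>> x.zero_()
tensor([[0., 0.],
        [0., 0.]], dtype=oneflow.float32)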