oneflow.Tensor.global_to_global

Tensor.global_to_global(placement=None, sbp=None, *, grad_sbp=None, check_meta=False, copy=False) → Tensor

Performs Tensor placement and/or sbp conversion.

Note

This tensor must be a global tensor.

At least one of placement and sbp is required.

If both placement and sbp are the same as this tensor’s own placement and sbp, this tensor itself is returned.

Parameters
  • placement (flow.placement, optional) – the desired placement of returned global tensor. Default: None

  • sbp (flow.sbp.sbp or tuple of flow.sbp.sbp, optional) – the desired sbp of returned global tensor. Default: None

Keyword Arguments
  • grad_sbp (flow.sbp.sbp or tuple of flow.sbp.sbp, optional) – manually specify the sbp of this tensor’s grad tensor in the backward pass. If None, the grad tensor’s sbp will be inferred automatically. Default: None

  • check_meta (bool, optional) – indicates whether to check meta information. If set to True, the consistency of the input meta information (placement and sbp) is checked across ranks. Default: False

  • copy (bool, optional) – if set to True, a new tensor is created even when this tensor already matches the desired placement and sbp. Default: False

>>> # Run on 2 ranks respectively
>>> import oneflow as flow
>>> input = flow.tensor([0., 1.], dtype=flow.float32, placement=flow.placement("cpu", ranks=[0, 1]), sbp=[flow.sbp.broadcast]) 
>>> output = input.global_to_global(placement=flow.placement("cpu", ranks=[0, 1]), sbp=[flow.sbp.split(0)]) 
>>> print(output.size()) 
>>> print(output) 
>>> # results on rank 0
oneflow.Size([2])
tensor([0., 1.], placement=oneflow.placement(type="cpu", ranks=[0, 1]), sbp=(oneflow.sbp.split(dim=0),), dtype=oneflow.float32)
>>> # results on rank 1
oneflow.Size([2])
tensor([0., 1.], placement=oneflow.placement(type="cpu", ranks=[0, 1]), sbp=(oneflow.sbp.split(dim=0),), dtype=oneflow.float32)