oneflow.Tensor.offload

Tensor.offload()

Transfers tensor data from GPU memory to host (CPU) memory. If the tensor already resides in host (CPU) memory, the operation does nothing and emits a warning. Note that this operation only changes the storage of the tensor; the tensor id does not change.

Note

This operation applies to both local and global oneflow tensors.

Use together with oneflow.Tensor.load() and oneflow.Tensor.is_offloaded(): load() is the opposite of offload(), moving the data back to GPU memory, and is_offloaded() returns a boolean indicating whether the tensor's data currently resides in CPU memory.

In addition, offloading the elements of oneflow.nn.Module.parameters() is supported.

For example:

>>> import oneflow as flow
>>> import numpy as np

>>> # local tensor
>>> x = flow.tensor(np.random.randn(1024, 1024, 100), dtype=flow.float32, device=flow.device("cuda"))
>>> before_id = id(x)
>>> x.offload() # Move the Tensor from the GPU to the CPU
>>> after_id = id(x)
>>> after_id == before_id
True
>>> x.is_offloaded()
True
>>> x.load() # Move the Tensor from the CPU to the GPU
>>> x.is_offloaded()
False

>>> # global tensor
>>> # Run on 2 ranks respectively
>>> placement = flow.placement("cuda", ranks=[0, 1])
>>> sbp = flow.sbp.broadcast
>>> x = flow.randn(1024, 1024, 100, dtype=flow.float32, placement=placement, sbp=sbp)
>>> before_id = id(x)
>>> x.offload()
>>> after_id = id(x)
>>> print(after_id == before_id)
>>> print(x.is_offloaded())
>>> x.load()
>>> print(x.is_offloaded())
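
The parameter-offloading support mentioned above can be sketched as follows. This is a minimal sketch, assuming a CUDA device is available; nn.Linear is used only as an illustrative module, and expected outputs are omitted.

>>> import oneflow as flow
>>> import oneflow.nn as nn

>>> # offload every parameter of a module to host memory
>>> m = nn.Linear(128, 128).to("cuda")
>>> for p in m.parameters():
...     p.offload()  # parameter storage moves to the CPU; tensor ids are unchanged
...
>>> # load the parameters back to the GPU before using the module again
>>> for p in m.parameters():
...     p.load()
...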