oneflow.scope

Scope

class oneflow.scope.DistributeConsistentStrategy

Create a scope in consistent view. All operators within the scope will be automatically parallelized among different accelerators for the best performance and the least data transfer.

Usage:

with oneflow.scope.consistent_view():
    ...
__init__()

Initialize self. See help(type(self)) for accurate signature.

class oneflow.scope.DistributeMirroredStrategy

Create a scope in mirrored view. All operators within the scope will be mirrored among different accelerators.

Usage:

with oneflow.scope.mirrored_view():
    ...
__init__()

Initialize self. See help(type(self)) for accurate signature.

oneflow.scope.consistent_view

alias of oneflow.python.framework.distribute.DistributeConsistentStrategy

oneflow.scope.consistent_view_enabled() → bool
Returns

True if the consistent strategy is enabled in the context where this function is called.

Return type

bool

oneflow.scope.mirrored_view

alias of oneflow.python.framework.distribute.DistributeMirroredStrategy

oneflow.scope.mirrored_view_enabled() → bool
Returns

True if the mirrored strategy is enabled in the context where this function is called.

Return type

bool

oneflow.scope.namespace(name: str) → None

Create a namespace. All variables within the namespace will have a prefix [SCOPE NAME]-. This is for convenience only and has no other effect on the system.

Usage:

with oneflow.scope.namespace("scope1"):
    ...
    with oneflow.scope.namespace("scope2"):
        ...
Parameters

name – Name of this namespace
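
A rough sketch of the prefixing behavior described above, assuming nested namespaces each contribute their own "[SCOPE NAME]-" prefix. The helper names here are hypothetical, for illustration only:

```python
from contextlib import contextmanager

_name_stack = []  # active namespace names, outermost first

@contextmanager
def namespace(name):
    # Hypothetical re-implementation of the prefixing behavior; not
    # OneFlow's actual scope machinery.
    _name_stack.append(name)
    try:
        yield
    finally:
        _name_stack.pop()

def qualified(var_name):
    # Each enclosing namespace contributes a "<SCOPE NAME>-" prefix.
    return "".join(n + "-" for n in _name_stack) + var_name

with namespace("scope1"):
    with namespace("scope2"):
        print(qualified("weight"))  # scope1-scope2-weight
```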

oneflow.scope.placement(device_tag: str, machine_device_ids: str) → oneflow.python.framework.placement_context.PlacementScope

Create a scope. All ops within the scope will run on the device(s) specified by “device_tag” and “machine_device_ids”.

Parameters
  • device_tag (str) – Device tag, “cpu” or “gpu” only

  • machine_device_ids (str) – String or list of strings specifying which machine(s) and device(s) to use. Each entry has the format “<NODE INDEX>:<DEVICE START INDEX>-<DEVICE END INDEX>”. For example, “0:0” means device 0 of machine 0, and “1:4-6” means devices 4, 5, and 6 of machine 1.

Returns

Placement scope

Return type

placement_ctx.DevicePriorPlacementScope

For example:

If you run the program on a single machine, you can assign the specified device like this:

with flow.scope.placement("gpu", "0:0"):
    logits = lenet(images, train=False)
    loss = flow.nn.sparse_softmax_cross_entropy_with_logits(labels, logits, name="softmax_loss")
    flow.losses.add_loss(loss)

Or, if you run a distributed program, you can assign the specified devices like this:

# configure machines ids, ips, etc.
with flow.scope.placement("gpu", ['0:0-7', '1:0-7']):
    logits = lenet(images, train=False)
    loss = flow.nn.sparse_softmax_cross_entropy_with_logits(labels, logits, name="softmax_loss")
    flow.losses.add_loss(loss)
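
The machine_device_ids format is simple to parse. As an illustration, the hypothetical helper below (not part of OneFlow) expands each entry into explicit (machine, device) pairs, following the examples above:

```python
def parse_machine_device_ids(machine_device_ids):
    # Accept a single entry string or a list of entries, mirroring the
    # examples above ("0:0", or ["0:0-7", "1:0-7"]).
    if isinstance(machine_device_ids, str):
        machine_device_ids = [machine_device_ids]
    pairs = []
    for entry in machine_device_ids:
        node, _, devices = entry.partition(":")
        if "-" in devices:
            start, end = devices.split("-")
            device_range = range(int(start), int(end) + 1)  # end is inclusive
        else:
            device_range = [int(devices)]
        pairs.extend((int(node), d) for d in device_range)
    return pairs

print(parse_machine_device_ids("0:0"))      # [(0, 0)]
print(parse_machine_device_ids(["1:4-6"]))  # [(1, 4), (1, 5), (1, 6)]
```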