oneflow
Copyright 2020 The OneFlow Authors. All rights reserved.
Licensed under the Apache License, Version 2.0 (the “License”); you may not use this file except in compliance with the License. You may obtain a copy of the License at
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an “AS IS” BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
class oneflow.ConfigProto

    DESCRIPTOR = <google.protobuf.pyext._message.MessageDescriptor object>

    io_conf – Field oneflow.ConfigProto.io_conf
    load_lib_path – Field oneflow.ConfigProto.load_lib_path
    profiler_conf – Field oneflow.ConfigProto.profiler_conf
    resource – Field oneflow.ConfigProto.resource
    session_id – Field oneflow.ConfigProto.session_id
class oneflow.DeprecatedFixedTensorDef(*args, **kwargs)

    __init__(*args, **kwargs)
        Initialize self. See help(type(self)) for accurate signature.
class oneflow.DeprecatedMirroredTensorDef(*args, **kwargs)

    __init__(*args, **kwargs)
        Initialize self. See help(type(self)) for accurate signature.
class oneflow.DeprecatedTensorListDef(*args, **kwargs)

    __init__(*args, **kwargs)
        Initialize self. See help(type(self)) for accurate signature.
oneflow.FixedTensorDef
    alias of oneflow.python.framework.input_blob_def.DeprecatedFixedTensorDef
class oneflow.FunctionConfig

    OneFlow function's configurations.

    __init__() → None
        Initialize self. See help(type(self)) for accurate signature.
    property all_reduce_fp16
    property all_reduce_group_min_mbyte
    property all_reduce_group_num
    property all_reduce_group_size_warmup
    property all_reduce_lazy_ratio
    property allow_cpu_return_op
    property concurrency_width
    property cudnn_buf_limit_mbyte
    property cudnn_conv_enable_pseudo_half
    property cudnn_conv_enable_true_half
    property cudnn_conv_force_bwd_data_algo
    property cudnn_conv_force_bwd_filter_algo
    property cudnn_conv_force_fwd_algo
    property cudnn_conv_heuristic_search_algo
    property cudnn_conv_use_deterministic_algo_only
    property default_data_type
    property default_distribute_strategy
    property default_initializer_conf
    property default_logical_view
    property default_placement_scope
    property disable_all_reduce_sequence
    property do_parallel_cast_before_widening_type_cast
    property enable_all_reduce_group
    property enable_auto_mixed_precision
    property enable_cudnn
    property enable_cudnn_conv_pseudo_half
    property enable_cudnn_fused_normalization_add_relu
    property enable_float_compute_for_half_gemm
    property enable_fuse_add_to_output
    property enable_fuse_cast_scale
    property enable_fuse_model_update_ops
    property enable_gradients_stats_aggregation
    property enable_inplace
    property enable_inplace_in_reduce_struct
    property enable_keep_header_only
    property enable_nccl
    property enable_non_distributed_optimizer
    property enable_qat
    property enable_quantization_aware_training
    property enable_reused_mem
    property enable_true_half_config_when_conv
    property exp_run_conf
    property indexed_slices_optimizer_conf
    property non_distributed_optimizer_group_size_mbyte
    property optimizer_placement_optimization_mode
    property optimizer_placement_optimization_threshold
    property prune_cast_to_static_shape_ops
    property prune_parallel_cast_ops
    property qat
    property static_mem_alloc_algo_white_list
    property static_mem_alloc_policy_white_list
    property tensorrt
    property train
    property use_boxing_v2
    property use_memory_allocation_algorithm_v2
    property use_nccl_inter_node_communication
    property use_tensorrt
    property use_xla_jit
class oneflow.JobConfigProto

    DESCRIPTOR = <google.protobuf.pyext._message.MessageDescriptor object>

    class FlagName2flagValueEntry

        DESCRIPTOR = <google.protobuf.pyext._message.MessageDescriptor object>

        key – Field oneflow.JobConfigProto.FlagName2flagValueEntry.key
        value – Field oneflow.JobConfigProto.FlagName2flagValueEntry.value

    concurrency_width – Field oneflow.JobConfigProto.concurrency_width
    cudnn_buf_limit_mbyte – Field oneflow.JobConfigProto.cudnn_buf_limit_mbyte
    cudnn_conv_enable_pseudo_half – Field oneflow.JobConfigProto.cudnn_conv_enable_pseudo_half
    cudnn_conv_force_bwd_data_algo – Field oneflow.JobConfigProto.cudnn_conv_force_bwd_data_algo
    cudnn_conv_force_bwd_filter_algo – Field oneflow.JobConfigProto.cudnn_conv_force_bwd_filter_algo
    cudnn_conv_force_fwd_algo – Field oneflow.JobConfigProto.cudnn_conv_force_fwd_algo
    cudnn_conv_heuristic_search_algo – Field oneflow.JobConfigProto.cudnn_conv_heuristic_search_algo
    cudnn_conv_use_deterministic_algo_only – Field oneflow.JobConfigProto.cudnn_conv_use_deterministic_algo_only
    default_data_type – Field oneflow.JobConfigProto.default_data_type
    default_initialize_with_snapshot_path – Field oneflow.JobConfigProto.default_initialize_with_snapshot_path
    default_initializer_conf – Field oneflow.JobConfigProto.default_initializer_conf
    do_parallel_cast_before_widening_type_cast – Field oneflow.JobConfigProto.do_parallel_cast_before_widening_type_cast
    enable_auto_mixed_precision – Field oneflow.JobConfigProto.enable_auto_mixed_precision
    enable_cudnn – Field oneflow.JobConfigProto.enable_cudnn
    enable_cudnn_fused_normalization_add_relu – Field oneflow.JobConfigProto.enable_cudnn_fused_normalization_add_relu
    enable_float_compute_for_half_gemm – Field oneflow.JobConfigProto.enable_float_compute_for_half_gemm
    enable_fuse_add_to_output – Field oneflow.JobConfigProto.enable_fuse_add_to_output
    enable_fuse_cast_scale – Field oneflow.JobConfigProto.enable_fuse_cast_scale
    enable_fuse_model_update_ops – Field oneflow.JobConfigProto.enable_fuse_model_update_ops
    enable_gradients_stats_aggregation – Field oneflow.JobConfigProto.enable_gradients_stats_aggregation
    enable_inplace – Field oneflow.JobConfigProto.enable_inplace
    enable_inplace_in_reduce_struct – Field oneflow.JobConfigProto.enable_inplace_in_reduce_struct
    enable_keep_header_only – Field oneflow.JobConfigProto.enable_keep_header_only
    enable_quantization_aware_training – Field oneflow.JobConfigProto.enable_quantization_aware_training
    enable_reuse_mem – Field oneflow.JobConfigProto.enable_reuse_mem
    exp_run_conf – Field oneflow.JobConfigProto.exp_run_conf
    flag_name2flag_value – Field oneflow.JobConfigProto.flag_name2flag_value
    indexed_slices_optimizer_conf – Field oneflow.JobConfigProto.indexed_slices_optimizer_conf
    job_name – Field oneflow.JobConfigProto.job_name
    logical_object_id – Field oneflow.JobConfigProto.logical_object_id
    memory_allocation_algorithm_conf – Field oneflow.JobConfigProto.memory_allocation_algorithm_conf
    optimizer_placement_optimization_mode – Field oneflow.JobConfigProto.optimizer_placement_optimization_mode
    optimizer_placement_optimization_threshold – Field oneflow.JobConfigProto.optimizer_placement_optimization_threshold
    predict_conf – Field oneflow.JobConfigProto.predict_conf
    prune_cast_to_static_shape_ops – Field oneflow.JobConfigProto.prune_cast_to_static_shape_ops
    prune_parallel_cast_ops – Field oneflow.JobConfigProto.prune_parallel_cast_ops
    qat_config – Field oneflow.JobConfigProto.qat_config
    total_batch_num – Field oneflow.JobConfigProto.total_batch_num
    train_conf – Field oneflow.JobConfigProto.train_conf
    use_memory_allocation_algorithm_v2 – Field oneflow.JobConfigProto.use_memory_allocation_algorithm_v2
    xrt_config – Field oneflow.JobConfigProto.xrt_config
oneflow.MirroredTensorDef
    alias of oneflow.python.framework.input_blob_def.DeprecatedMirroredTensorDef

oneflow.MirroredTensorListDef
    alias of oneflow.python.framework.input_blob_def.DeprecatedTensorListDef
oneflow.acc(one: oneflow_api.BlobDesc, max_acc_num: int, name: Optional[str] = None) → oneflow_api.BlobDesc
oneflow.amp_white_identity(x: oneflow_api.BlobDesc, name: Optional[str] = None) → oneflow_api.BlobDesc
oneflow.argsort(input: oneflow_api.BlobDesc, axis: int = -1, direction: str = 'ASCENDING', name: Optional[str] = None) → oneflow_api.BlobDesc

    This operator sorts the input Blob at the specified axis and returns the indices of the sorted Blob.

    Parameters:
        input (oneflow_api.BlobDesc) – A Blob
        axis (int, optional) – The dimension to be sorted. Defaults to the last dimension (-1).
        direction (str, optional) – The direction in which to sort the Blob values. If the direction is "ASCENDING", the input is sorted in ascending order; otherwise it is sorted in descending order. Defaults to "ASCENDING".
        name (Optional[str], optional) – The name for the operation. Defaults to None.

    Returns:
        The indices of the sorted Blob

    Return type:
        oneflow_api.BlobDesc

    For example:

        import oneflow as flow
        import numpy as np
        import oneflow.typing as tp

        @flow.global_function()
        def argsort_Job(x: tp.Numpy.Placeholder((5, ))) -> tp.Numpy:
            return flow.argsort(input=x)

        x = np.array([10, 2, 9, 3, 7]).astype("float32")
        out = argsort_Job(x)

        # out [1 3 4 2 0]
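The example above uses the lazy `@flow.global_function` API; the sorting semantics themselves can be checked with plain NumPy, where `np.argsort` is the analogue (this equivalence is an illustration, not part of the OneFlow API):

```python
import numpy as np

x = np.array([10, 2, 9, 3, 7], dtype=np.float32)

# direction="ASCENDING": the i-th output index points at the i-th smallest value
asc = np.argsort(x)    # [1 3 4 2 0], matching the example above

# direction="DESCENDING" corresponds to an ascending argsort of the negated input
desc = np.argsort(-x)
```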
oneflow.argwhere(condition: oneflow_api.BlobDesc, dtype: Optional[oneflow.python.framework.dtype.dtype] = None, name: Optional[str] = None) → oneflow_api.BlobDesc

    This operator finds the indices of the elements in the input Blob condition that are non-zero. It returns a list; each element in the output is a coordinate that points to a non-zero element in condition.

    Parameters:
        condition (oneflow_api.BlobDesc) – The input Blob.
        dtype (Optional[dtype_util.dtype], optional) – The data type of the output. Defaults to None.
        name (Optional[str], optional) – The name for the operation. Defaults to None.

    Returns:
        The result Blob. Its type is ListNumpy.

    Return type:
        oneflow_api.BlobDesc

    For example:

        import oneflow as flow
        import numpy as np
        import oneflow.typing as tp

        @flow.global_function()
        def argwhere_Job(x: tp.Numpy.Placeholder(shape=(2, 3), dtype=flow.float32),
        ) -> tp.ListNumpy:
            return flow.argwhere(x)

        x = np.array([[0, 1, 0], [2, 0, 2]]).astype(np.float32)
        out = argwhere_Job(x)

        # out [array([[0, 1],
        #             [1, 0],
        #             [1, 2]], dtype=int32)]
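The same coordinates can be reproduced with NumPy's `np.argwhere`, offered here purely as an illustration of the semantics:

```python
import numpy as np

x = np.array([[0, 1, 0], [2, 0, 2]], dtype=np.float32)

# each output row is the coordinate of one non-zero element, in row-major order
coords = np.argwhere(x)
```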
oneflow.assign(ref, value, dtype=None, name=None)
oneflow.broadcast_like(x: oneflow_api.BlobDesc, like: oneflow_api.BlobDesc, broadcast_axes: Optional[Sequence[int]] = None, name: Optional[str] = None) → oneflow_api.BlobDesc

    This operator broadcasts the input Blob x along the specified axes to match the shape of the input Blob like.

    Parameters:
        x (oneflow_api.BlobDesc) – The input Blob.
        like (oneflow_api.BlobDesc) – A Blob.
        broadcast_axes (Optional[Sequence[int]], optional) – The broadcast axes. Defaults to None.
        name (Optional[str], optional) – The name for the operation. Defaults to None.

    Raises:
        ValueError – The length of broadcast_axes must be greater than 0 and less than or equal to the number of axes of the like shape.

    Returns:
        The result Blob.

    Return type:
        oneflow_api.BlobDesc

    For example:

    Example 1:

        import oneflow as flow
        import numpy as np
        import oneflow.typing as tp

        @flow.global_function()
        def broadcast_like_Job(x: tp.Numpy.Placeholder(shape=(3, 1), dtype=flow.float32)
        ) -> tp.Numpy:
            like_tensor = flow.constant(value=1.0,
                                        dtype=flow.float32,
                                        shape=(3, 3))
            return flow.broadcast_like(x=x,
                                       like=like_tensor,
                                       broadcast_axes=(1, ))

        x = np.array([[1], [1], [1]]).astype(np.float32)
        out = broadcast_like_Job(x)

        # out [[1 1 1]
        #      [1 1 1]
        #      [1 1 1]]

        # out.shape (3, 3)

    Example 2:

        @flow.global_function()
        def broadcast_like_Job(x: tp.Numpy.Placeholder(shape=(3, 1, 1), dtype=flow.float32)
        ) -> tp.Numpy:
            like_tensor = flow.constant(value=1.0,
                                        dtype=flow.float32,
                                        shape=(3, 3, 3))
            return flow.broadcast_like(x=x,
                                       like=like_tensor,
                                       broadcast_axes=(1, 2))

        x = np.random.randn(3, 1, 1).astype(np.float32)
        out = broadcast_like_Job(x)

        # out.shape (3, 3, 3)
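Outside of a OneFlow job, the effect of Example 1 can be illustrated with NumPy's `np.broadcast_to` (an analogue for illustration only; OneFlow additionally takes the target shape from `like`):

```python
import numpy as np

x = np.array([[1], [1], [1]], dtype=np.float32)  # shape (3, 1)

# broadcast x along axis 1 up to the shape of the `like` blob, (3, 3) here
out = np.broadcast_to(x, (3, 3))
```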
oneflow.broadcast_to_compatible_with(x: oneflow_api.BlobDesc, compatible: Sequence[oneflow_api.BlobDesc], name: Optional[str] = None) → oneflow_api.BlobDesc

    Returns a Blob whose shape is the result of broadcasting x against the shapes of the Blobs in compatible.

    Parameters:
        x (oneflow_api.BlobDesc) – a Blob
        compatible (Sequence[oneflow_api.BlobDesc]) – a sequence of Blobs with different shapes
        name (Optional[str], optional) – This operator's name. Defaults to None.

    Returns:
        A Blob with the broadcast (largest) shape

    Return type:
        oneflow_api.BlobDesc

    For example:

        import oneflow as flow
        import numpy as np
        import oneflow.typing as tp

        @flow.global_function()
        def broadcast_to_compatible_with_Job(x: tp.Numpy.Placeholder((4, 1, 1))
        ) -> tp.Numpy:
            blob_a = flow.constant(value=1, dtype=flow.float32, shape=(1, 2, 1))
            blob_b = flow.constant(value=1, dtype=flow.float32, shape=(1, 1, 3))
            return flow.math.broadcast_to_compatible_with(x, [blob_a, blob_b])

        x = np.ones(shape=(4, 1, 1), dtype=np.float32)
        out = broadcast_to_compatible_with_Job(x)

        # out.shape (4, 2, 3)
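The output shape follows ordinary broadcasting rules over all participating shapes; NumPy's `np.broadcast_shapes` (NumPy ≥ 1.20) computes the same result and is used here only as an illustration:

```python
import numpy as np

x = np.ones((4, 1, 1), dtype=np.float32)

# the result shape is the ordinary broadcast of all participating shapes
shape = np.broadcast_shapes(x.shape, (1, 2, 1), (1, 1, 3))
out = np.broadcast_to(x, shape)
```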
oneflow.cast(x: oneflow_api.BlobDesc, dtype: oneflow.python.framework.dtype.dtype, name: Optional[str] = None) → oneflow_api.BlobDesc

    This operator takes the input x and casts it to the output with dtype.

    Parameters:
        x (oneflow_api.BlobDesc) – Input Blob
        dtype (dtype_util.dtype) – Data type of the output
        name (Optional[str], optional) – The name for the operation. Defaults to None.

    Returns:
        A Blob

    Return type:
        oneflow_api.BlobDesc

    For example:

        import oneflow as flow
        import numpy as np
        import oneflow.typing as tp

        @flow.global_function()
        def cast_Job(x: tp.Numpy.Placeholder((3, ), dtype=flow.float32)
        ) -> tp.Numpy:
            return flow.cast(x, dtype=flow.int32)

        x = np.array([1, 2, 3]).astype(np.float32)
        out = cast_Job(x)

        # out.dtype = "int32"
oneflow.cast_to_current_logical_view(x: oneflow_api.BlobDesc) → oneflow_api.BlobDesc
oneflow.cast_to_static_shape(x: oneflow_api.BlobDesc, name: Optional[str] = None) → oneflow_api.BlobDesc

    This operator returns a Blob with the same content and data type as the input Blob, whose shape is converted from dynamic to static.

    Parameters:
        x (oneflow_api.BlobDesc) – The input Blob, which has a dynamic shape.
        name (Optional[str], optional) – The name for the operation. Defaults to None.

    Returns:
        The result Blob, which is identical to the input Blob but has a static shape.

    Return type:
        oneflow_api.BlobDesc

    For example:

        import oneflow as flow
        import numpy as np
        import oneflow.typing as tp

        @flow.global_function()
        def cast_to_static_shape_func(
            x: tp.ListNumpy.Placeholder(shape=(3, 3), dtype=flow.float32),
        ) -> tp.Numpy:
            return flow.cast_to_static_shape(x)

        x = np.array([[1, 1, 1], [2, 2, 2], [3, 3, 3]]).astype(np.float32)
        out = cast_to_static_shape_func(x)

        # out [[1 1 1]
        #      [2 2 2]
        #      [3 3 3]]
oneflow.categorical_ordinal_encode(table: oneflow_api.BlobDesc, size: oneflow_api.BlobDesc, input_tensor: oneflow_api.BlobDesc, hash_precomputed: bool = True, name: Optional[str] = None) → oneflow_api.BlobDesc

    This operator maintains a hash table to encode a categorical ordinal Blob. It converts a discrete input value into a continuous integer ID.

    Parameters:
        table (oneflow_api.BlobDesc) – The hash table; you can assign it as a variable.
        size (oneflow_api.BlobDesc) – The size of the hash table.
        input_tensor (oneflow_api.BlobDesc) – The input Blob.
        hash_precomputed (bool, optional) – Only the True mode is currently supported; the internal hash value will not be computed. Defaults to True.
        name (Optional[str], optional) – The name for the operation. Defaults to None.

    Returns:
        The result Blob.

    Return type:
        oneflow_api.BlobDesc

    For example:

        import oneflow as flow
        import numpy as np
        import oneflow.typing as tp

        @flow.global_function()
        def categorical_ordinal_encode_Job(x: tp.Numpy.Placeholder((3, 3), dtype=flow.int32)
        ) -> tp.Numpy:
            dtype = x.dtype
            with flow.scope.namespace("categorical_ordinal_encode"):
                table = flow.get_variable(
                    name="Table",
                    shape=(16,),
                    dtype=dtype,
                    initializer=flow.constant_initializer(0, dtype=dtype),
                    trainable=False,
                    reuse=False,
                )
                size = flow.get_variable(
                    name="Size",
                    shape=(1,),
                    dtype=dtype,
                    initializer=flow.constant_initializer(0, dtype=dtype),
                    trainable=False,
                    reuse=False,
                )
                return flow.categorical_ordinal_encode(
                    table=table, size=size, input_tensor=x, name="Encode",
                )

        x = np.array([[7, 0, 2], [1, 7, 2], [0, 1, 7]]).astype(np.int32)
        out = categorical_ordinal_encode_Job(x)

        # out [[1 0 2]
        #      [3 1 2]
        #      [0 3 1]]
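A pure-Python sketch of the encoding rule, reverse-engineered from the example output above (assumption, not a documented contract: the value 0 keeps ID 0, and every other distinct value receives the next free ID in order of first appearance, row-major):

```python
def encode(rows):
    # hypothetical helper for illustration; not part of the OneFlow API
    table = {0: 0}                      # 0 appears to be reserved for the value 0
    out = []
    for row in rows:
        encoded = []
        for v in row:
            if v not in table:
                table[v] = len(table)   # assign the next free continuous ID
            encoded.append(table[v])
        out.append(encoded)
    return out

result = encode([[7, 0, 2], [1, 7, 2], [0, 1, 7]])
```

Running this on the documented input reproduces the documented output `[[1, 0, 2], [3, 1, 2], [0, 3, 1]]`.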
oneflow.clamp(values: oneflow_api.BlobDesc, min_value: Union[int, float, None] = None, max_value: Union[int, float, None] = None, name: Optional[str] = None) → oneflow_api.BlobDesc

    This op clips Blob values to a specified min value and max value.

    The equation is:

        out = MIN(MAX(x, min), max)

    Parameters:
        values (oneflow_api.BlobDesc) – Input Blob
        min_value (Optional[Union[int, float]], optional) – The minimum value to clip by. Defaults to None.
        max_value (Optional[Union[int, float]], optional) – The maximum value to clip by. Defaults to None.
        name (Optional[str], optional) – The name for the operation. Defaults to None.

    Raises:
        ValueError – min_value and max_value cannot be None at the same time

    Returns:
        A clipped Blob

    Return type:
        oneflow_api.BlobDesc

    For example:

        import oneflow as flow
        import numpy as np
        import oneflow.typing as tp

        @flow.global_function()
        def clip_by_value_Job(x: tp.Numpy.Placeholder((4, ))
        ) -> tp.Numpy:
            return flow.math.clip_by_value(x, min_value=-1, max_value=5)

        x = np.array([-2, 1, 4, 7], dtype=np.float32)
        out = clip_by_value_Job(x)

        # out [-1. 1. 4. 5.]
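The clipping equation `out = MIN(MAX(x, min), max)` is exactly what NumPy's `np.clip` computes; as an illustration of the semantics (not the OneFlow API itself):

```python
import numpy as np

x = np.array([-2, 1, 4, 7], dtype=np.float32)

# out = MIN(MAX(x, min_value), max_value)
out = np.clip(x, -1, 5)
```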
oneflow.clear_default_session() → None

    Clear the default session. All compiled OneFlow functions will be deleted.
oneflow.clip(values: oneflow_api.BlobDesc, min_value: Union[int, float, None] = None, max_value: Union[int, float, None] = None, name: Optional[str] = None) → oneflow_api.BlobDesc

    This op clips Blob values to a specified min value and max value.

    The equation is:

        out = MIN(MAX(x, min), max)

    Parameters:
        values (oneflow_api.BlobDesc) – Input Blob
        min_value (Optional[Union[int, float]], optional) – The minimum value to clip by. Defaults to None.
        max_value (Optional[Union[int, float]], optional) – The maximum value to clip by. Defaults to None.
        name (Optional[str], optional) – The name for the operation. Defaults to None.

    Raises:
        ValueError – min_value and max_value cannot be None at the same time

    Returns:
        A clipped Blob

    Return type:
        oneflow_api.BlobDesc

    For example:

        import oneflow as flow
        import numpy as np
        import oneflow.typing as tp

        @flow.global_function()
        def clip_by_value_Job(x: tp.Numpy.Placeholder((4, ))
        ) -> tp.Numpy:
            return flow.math.clip_by_value(x, min_value=-1, max_value=5)

        x = np.array([-2, 1, 4, 7], dtype=np.float32)
        out = clip_by_value_Job(x)

        # out [-1. 1. 4. 5.]
oneflow.clip_by_scalar(values: oneflow_api.BlobDesc, min_value: Union[int, float, None] = None, max_value: Union[int, float, None] = None, name: Optional[str] = None) → oneflow_api.BlobDesc

    This op clips Blob values to a specified min value and max value.

    The equation is:

        out = MIN(MAX(x, min), max)

    Parameters:
        values (oneflow_api.BlobDesc) – Input Blob
        min_value (Optional[Union[int, float]], optional) – The minimum value to clip by. Defaults to None.
        max_value (Optional[Union[int, float]], optional) – The maximum value to clip by. Defaults to None.
        name (Optional[str], optional) – The name for the operation. Defaults to None.

    Raises:
        ValueError – min_value and max_value cannot be None at the same time

    Returns:
        A clipped Blob

    Return type:
        oneflow_api.BlobDesc

    For example:

        import oneflow as flow
        import numpy as np
        import oneflow.typing as tp

        @flow.global_function()
        def clip_by_value_Job(x: tp.Numpy.Placeholder((4, ))
        ) -> tp.Numpy:
            return flow.math.clip_by_value(x, min_value=-1, max_value=5)

        x = np.array([-2, 1, 4, 7], dtype=np.float32)
        out = clip_by_value_Job(x)

        # out [-1. 1. 4. 5.]
oneflow.clip_by_value(values: oneflow_api.BlobDesc, min_value: Union[int, float, None] = None, max_value: Union[int, float, None] = None, name: Optional[str] = None) → oneflow_api.BlobDesc

    This op clips Blob values to a specified min value and max value.

    The equation is:

        out = MIN(MAX(x, min), max)

    Parameters:
        values (oneflow_api.BlobDesc) – Input Blob
        min_value (Optional[Union[int, float]], optional) – The minimum value to clip by. Defaults to None.
        max_value (Optional[Union[int, float]], optional) – The maximum value to clip by. Defaults to None.
        name (Optional[str], optional) – The name for the operation. Defaults to None.

    Raises:
        ValueError – min_value and max_value cannot be None at the same time

    Returns:
        A clipped Blob

    Return type:
        oneflow_api.BlobDesc

    For example:

        import oneflow as flow
        import numpy as np
        import oneflow.typing as tp

        @flow.global_function()
        def clip_by_value_Job(x: tp.Numpy.Placeholder((4, ))
        ) -> tp.Numpy:
            return flow.math.clip_by_value(x, min_value=-1, max_value=5)

        x = np.array([-2, 1, 4, 7], dtype=np.float32)
        out = clip_by_value_Job(x)

        # out [-1. 1. 4. 5.]
oneflow.combined_margin_loss(x: oneflow_api.BlobDesc, label: oneflow_api.BlobDesc, m1: float = 1, m2: float = 0, m3: float = 0, name: Optional[str] = None) → oneflow_api.BlobDesc
oneflow.concat(inputs: Optional[Sequence[oneflow_api.BlobDesc]] = None, axis: int = 0, max_dim_size: Optional[int] = None, name: Optional[str] = None, values: Optional[Sequence[oneflow_api.BlobDesc]] = None) → oneflow_api.BlobDesc

    Concatenates two or more Blobs along the specified axis.

    Analogous to numpy.concatenate.

    Parameters:
        inputs – a list of Blob
        axis – an int. Defaults to 0.
        max_dim_size – hint of the max dimension size along the given axis
        name – name of this operator. Defaults to None.
        values – deprecated parameter; use inputs instead

    Returns:
        A Blob

    For example:

        import oneflow as flow
        import numpy as np
        import oneflow.typing as tp

        @flow.global_function()
        def concat_Job() -> tp.Numpy:
            constant_blob_1 = flow.constant(value=1.5,
                                            shape=(1, 3, 3, 4),
                                            dtype=flow.float,
                                            name="blob1")
            constant_blob_2 = flow.constant(value=2.5,
                                            shape=(1, 3, 3, 4),
                                            dtype=flow.float,
                                            name="blob2")
            return flow.concat(inputs=[constant_blob_1, constant_blob_2],
                               axis=3)

        out = concat_Job()

        # out.shape (1, 3, 3, 8)
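Since the docstring calls out the analogy to `numpy.concatenate`, the shape behavior of the example can be checked directly in NumPy (illustration only):

```python
import numpy as np

a = np.full((1, 3, 3, 4), 1.5, dtype=np.float32)
b = np.full((1, 3, 3, 4), 2.5, dtype=np.float32)

# all dimensions except `axis` must match; the axis-3 sizes add up: 4 + 4 = 8
out = np.concatenate([a, b], axis=3)
```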
oneflow.consistent_user_op_builder(op_name)

oneflow.consistent_user_op_module_builder(op_type_name)
oneflow.constant(value: Union[int, float], dtype: Optional[oneflow.python.framework.dtype.dtype] = None, shape: Optional[Sequence[int]] = None, name: Optional[str] = None) → oneflow_api.BlobDesc

    This operator creates a constant Blob.

    Parameters:
        value (Union[int, float]) – The constant value of the Blob.
        dtype (Optional[dtype_util.dtype], optional) – The data type of the Blob. Defaults to None.
        shape (Optional[Sequence[int]], optional) – The shape of the Blob. Defaults to None.
        name (Optional[str], optional) – The name for the operation. Defaults to None.

    Raises:
        NotImplementedError – The data type of value should be int or float.

    Returns:
        The result Blob.

    Return type:
        oneflow_api.BlobDesc

    For example:

        import oneflow as flow
        import numpy as np
        import oneflow.typing as tp

        @flow.global_function()
        def constant_Job() -> tp.Numpy:
            constant_blob = flow.constant(value=1.5,
                                          shape=(1, 3, 3),
                                          dtype=flow.float)
            return constant_blob

        out = constant_Job()

        # out [[[1.5 1.5 1.5]
        #       [1.5 1.5 1.5]
        #       [1.5 1.5 1.5]]]
oneflow.constant_initializer(value: float = 0, dtype: oneflow.python.framework.dtype.dtype = <class 'oneflow.python.framework.dtype.float32'>) → oneflow.core.job.initializer_conf_pb2.InitializerConf

    Initializer that generates a blob with constant values.

    Parameters:
        value (float, optional) – A Python scalar; all elements of the initialized variable are set to this value. Defaults to 0.
        dtype (dtype_util.dtype, optional) – Default data type. Defaults to dtype_util.float.

    Raises:
        NotImplementedError – The data type is not supported.

    Returns:
        An InitializerConf object.

    Return type:
        initializer_conf_util.InitializerConf

    For example:

    Example 1:

        import oneflow as flow
        import oneflow.typing as tp

        def watch_handler(y: tp.Numpy):
            print("out", y)

        @flow.global_function()
        def constant_Job() -> None:
            init = flow.constant_initializer(2.5)
            blob = flow.get_variable(
                "blob-weight",
                shape=(3, ),
                initializer=init,
                trainable=True
            )
            flow.watch(blob, watch_handler)

        checkpoint = flow.train.CheckPoint()
        checkpoint.init()
        constant_Job()

        # out [2.5 2.5 2.5]

    Example 2:

        import oneflow as flow
        import numpy as np
        import oneflow.typing as tp

        @flow.global_function()
        def conv2d_constant_Job(x: tp.Numpy.Placeholder((1, 256, 32, 32))
        ) -> tp.Numpy:
            initializer = flow.constant_initializer(0.01)
            conv2d = flow.layers.conv2d(
                x,
                filters=128,
                kernel_size=3,
                strides=1,
                padding='SAME',
                kernel_initializer=initializer,
                name="Conv2d"
            )
            return conv2d

        x = np.random.randn(1, 256, 32, 32).astype(np.float32)
        out = conv2d_constant_Job(x)

        # out.shape (1, 128, 32, 32)
oneflow.constant_like(like: oneflow_api.BlobDesc, value: Union[int, float], dtype: Optional[oneflow.python.framework.dtype.dtype] = None, name: Optional[str] = None) → oneflow_api.BlobDesc

    This operator creates a constant Blob that has the same shape as like.

    Parameters:
        like (oneflow_api.BlobDesc) – A Blob.
        value (Union[int, float]) – The constant value of the Blob.
        dtype (Optional[dtype_util.dtype], optional) – The data type of the Blob. Defaults to None.
        name (Optional[str], optional) – The name for the operation. Defaults to None.

    Raises:
        NotImplementedError – The data type of value should be int or float.

    Returns:
        The result Blob.

    Return type:
        oneflow_api.BlobDesc

    For example:

        import oneflow as flow
        import numpy as np
        import oneflow.typing as tp

        @flow.global_function()
        def constant_like_Job() -> tp.Numpy:
            constant_blob = flow.constant(value=1.5,
                                          shape=(1, 3, 3),
                                          dtype=flow.float)
            constant_like_blob = flow.constant_like(like=constant_blob,
                                                    value=5.5,
                                                    dtype=flow.float)
            return constant_like_blob

        out = constant_like_Job()

        # out [[[5.5 5.5 5.5]
        #       [5.5 5.5 5.5]
        #       [5.5 5.5 5.5]]]
oneflow.constant_scalar(value: Union[int, float], dtype: Optional[oneflow.python.framework.dtype.dtype] = None, name: Optional[str] = None) → oneflow_api.BlobDesc

    This operator creates a constant scalar Blob.

    Parameters:
        value (Union[int, float]) – The constant value of the Blob.
        dtype (Optional[dtype_util.dtype], optional) – The data type of the Blob. Defaults to None.
        name (Optional[str], optional) – The name for the operation. Defaults to None.

    Returns:
        The result Blob.

    Return type:
        oneflow_api.BlobDesc

    For example:

        import oneflow as flow
        import numpy as np
        import oneflow.typing as tp

        @flow.global_function()
        def constant_scalar_Job() -> tp.Numpy:
            constant_scalar = flow.constant_scalar(value=2.5,
                                                   dtype=flow.float)
            return constant_scalar

        out = constant_scalar_Job()

        # out [2.5]
oneflow.convert_oneflow_dtype_to_numpy_dtype(oneflow_dtype: oneflow.python.framework.dtype.dtype)
oneflow.count_not_finite(x: oneflow_api.BlobDesc, name: Optional[str] = None) → oneflow_api.BlobDesc
oneflow.current_global_function_desc() → oneflow.python.framework.function_desc.FunctionDesc
oneflow.current_machine_id()

    Get the machine id of the current machine/node.
oneflow.current_resource() → oneflow.core.job.resource_pb2.Resource

    Get the current resources, such as the number of machines, the number of CPU/GPU devices, the number of epoch network threads, and the RDMA parameters.

    Return type:
        resource_util.Resource
oneflow.current_scope()

    Return the current scope.
oneflow.dim_gather(input: oneflow_api.BlobDesc, dim: int, index: oneflow_api.BlobDesc, name: Optional[str] = None) → oneflow_api.BlobDesc

    This operator gathers elements from input according to index along the axis dim.

    Taking a 3-D blob as an example, the output is specified by:

        output[i][j][k] = input[index[i][j][k]][j][k]  # if dim == 0
        output[i][j][k] = input[i][index[i][j][k]][k]  # if dim == 1
        output[i][j][k] = input[i][j][index[i][j][k]]  # if dim == 2

    The shapes of input and index should be the same except in the dim dimension. That is, if input is an n-dimension blob with shape \((x_0, x_1, \dots, x_{i-1}, x_i, x_{i+1}, \dots, x_n)\), and dim = i, then index must be an n-dimension blob with shape \((x_0, x_1, \dots, x_{i-1}, k, x_{i+1}, \dots, x_n)\) where \(k \geq 1\).

    The returned Blob output has the same shape as index.

    Parameters:
        input (oneflow_api.BlobDesc) – The input blob
        dim (int) – The axis along which to index
        index (oneflow_api.BlobDesc) – The index blob of elements to gather
        name (Optional[str], optional) – The name of the operation. Defaults to None.

    Returns:
        The elements gathered from input, returned as the output Blob.

    Return type:
        oneflow_api.BlobDesc

    For example:

        import oneflow as flow
        import numpy as np
        import oneflow.typing as tp

        @flow.global_function()
        def dim_gather_Job(input: tp.Numpy.Placeholder((2, 2), dtype=flow.float64),
                           index: tp.Numpy.Placeholder((2, 2), dtype=flow.int32)
        ) -> tp.Numpy:
            return flow.dim_gather(input, 1, index)

        input = np.array([[1, 2], [3, 4]]).astype(np.float64)
        index = np.array([[1, 0], [0, 1]]).astype(np.int32)

        out = dim_gather_Job(input, index)

        # output
        # [[2. 1.]
        #  [3. 4.]]
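The indexing rule above is the same one NumPy implements as `np.take_along_axis`, shown here only as an analogue for the documented example:

```python
import numpy as np

inp = np.array([[1, 2], [3, 4]], dtype=np.float64)
index = np.array([[1, 0], [0, 1]], dtype=np.int64)

# output[i][j] = inp[i][index[i][j]], i.e. the dim == 1 rule above
out = np.take_along_axis(inp, index, axis=1)
```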
oneflow.distributed_partial_fc_sample(weight: oneflow_api.BlobDesc, label: oneflow_api.BlobDesc, num_sample: int, name: Optional[str] = None) → oneflow_api.BlobDesc
oneflow.double
    alias of oneflow.python.framework.dtype.float64
oneflow.dtypes()
oneflow.dynamic_reshape(x: oneflow_api.BlobDesc, shape: Sequence[int], name: Optional[str] = None) → oneflow_api.BlobDesc

    This operator reshapes a dynamic blob.

    Parameters:
        x (oneflow_api.BlobDesc) – The input Blob.
        shape (Sequence[int]) – The output shape.
        name (Optional[str], optional) – The name for the operation. Defaults to None.

    Returns:
        The result Blob.

    Return type:
        oneflow_api.BlobDesc

    For example:

        import oneflow as flow
        import numpy as np
        import oneflow.typing as tp

        @flow.global_function()
        def dynamic_reshape_Job(x: tp.Numpy.Placeholder(shape=(1, 3, 64, 64), dtype=flow.float32)
        ) -> tp.Numpy:
            reshape_out1 = flow.dynamic_reshape(x, (-1, 64))
            variable1 = flow.get_variable(
                "var1",
                shape=(64, 32),
                dtype=flow.float,
                initializer=flow.random_uniform_initializer(minval=-10, maxval=10),
                trainable=True,
            )
            matmul_tensor = flow.matmul(reshape_out1, variable1)
            reshape_out2 = flow.dynamic_reshape(matmul_tensor, (-1, 8, 4))
            return reshape_out2

        x = np.random.rand(1, 3, 64, 64).astype(np.float32)
        out = dynamic_reshape_Job(x)

        # out.shape (192, 8, 4)
oneflow.eager_execution_enabled() → bool

    Get the current setting of the job; returns True if eager execution mode is enabled.

    Return type:
        bool
oneflow.eager_nccl_all_reduce(x: oneflow_api.BlobDesc, parallel_conf: str, name: Optional[str] = None) → oneflow_api.BlobDesc
oneflow.elem_cnt(inputs: oneflow_api.BlobDesc, dtype: Optional[oneflow.python.framework.dtype.dtype] = None, name: Optional[str] = None) → oneflow_api.BlobDesc

    This operator returns the number of elements in the input Blob.

    Parameters:
        inputs (oneflow_api.BlobDesc) – The input Blob.
        dtype (Optional[dtype_util.dtype], optional) – The data type. Defaults to None.
        name (Optional[str], optional) – The name for the operation. Defaults to None.

    Returns:
        The result Blob. Its type is ListNumpy.

    Return type:
        oneflow_api.BlobDesc

    For example:

        import oneflow as flow
        import numpy as np
        import oneflow.typing as tp

        @flow.global_function()
        def elem_cnt_Job(x: tp.Numpy.Placeholder(shape=(5, ), dtype=flow.float32),
        ) -> tp.ListNumpy:
            return flow.elem_cnt(inputs=x, dtype=flow.int32)

        x = np.array([10, 20, -30, 40, 50]).astype(np.float32)
        out = elem_cnt_Job(x)

        # [array([5], dtype=int32)]
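The element count is simply the product of the shape entries; NumPy's `ndarray.size` attribute computes the same quantity (an analogue for illustration, not the OneFlow API):

```python
import numpy as np

x = np.array([10, 20, -30, 40, 50], dtype=np.float32)
n = x.size        # 5 elements, matching the example above

y = np.zeros((2, 3, 4), dtype=np.float32)
m = y.size        # product of the shape entries: 2 * 3 * 4
```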
oneflow.enable_eager_execution(val: bool = True) → None

    If val is True, jobs will execute in eager mode; otherwise they use lazy mode (static graph).

    Parameters:
        val (bool, optional) – Whether to enable eager execution. Defaults to True.
-
oneflow.
expand_dims
(input: oneflow_api.BlobDesc, axis: int, name: Optional[str] = None) → oneflow_api.BlobDesc¶ This operator inserts a dimention at the specified axis in the input Blob. The size of new dimension can only be 1, and the amount of element in return value is the same as Blob input.
- Parameters
input (oneflow_api.BlobDesc) – The input Blob.
axis (int) – The specified dimension index.
name (Optional[str], optional) – The name for the operation. Defaults to None.
- Returns
The result Blob.
- Return type
oneflow_api.BlobDesc
For example:
import oneflow as flow
import numpy as np
import oneflow.typing as tp

@flow.global_function()
def expand_dim_Job(x: tp.Numpy.Placeholder(shape=(1, 3, 3), dtype=flow.int32),
) -> tp.Numpy:
    return flow.expand_dims(input=x, axis=2)

x = np.array([[[1, 1, 1], [1, 1, 1], [1, 1, 1]]]).astype(np.int32)
out = expand_dim_Job(x)

# out.shape (1, 3, 1, 3)
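For readers familiar with NumPy, the result matches np.expand_dims, which can serve as a quick cross-check:

```python
import numpy as np

# expand_dims inserts a size-1 axis; the element count is unchanged
x = np.ones((1, 3, 3), dtype=np.int32)
out = np.expand_dims(x, axis=2)

print(out.shape)  # (1, 3, 1, 3)
```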
-
oneflow.
find_or_create_module
(module_name: str, create: Callable[[], None], reuse: bool = False)¶
-
oneflow.
flatten
(input: oneflow_api.BlobDesc, start_dim: int = 0, end_dim: int = -1, name: Optional[str] = None) → oneflow_api.BlobDesc¶ Flattens a contiguous range of dims in a Blob.
- Parameters
input – A Blob.
start_dim – The first dim to flatten.
end_dim – The last dim to flatten.
name – A name for the operation (optional).
- Returns
A Blob, has the same type as input.
For example:
import oneflow as flow
import numpy as np
import oneflow.typing as tp

@flow.global_function()
def flatten_Job(input: tp.Numpy.Placeholder(shape=(4, 4, 3, 2), dtype=flow.float32)
) -> tp.Numpy:
    flatten_blob = flow.flatten(input, start_dim=1, end_dim=-1)
    return flatten_blob

input = np.zeros((4, 4, 3, 2)).astype(np.float32)
out = flatten_Job(input)

# out.shape (4, 24)
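Conceptually, flatten collapses the contiguous dim range [start_dim, end_dim] into one dim, which is equivalent to a reshape. A NumPy sketch of this shape arithmetic (flatten_like is a hypothetical helper, not part of the OneFlow API):

```python
import numpy as np

def flatten_like(x, start_dim=0, end_dim=-1):
    # Collapse the contiguous dim range [start_dim, end_dim] into one dim
    shape = list(x.shape)
    end = end_dim % x.ndim  # normalize a negative end index
    collapsed = int(np.prod(shape[start_dim:end + 1]))
    return x.reshape(shape[:start_dim] + [collapsed] + shape[end + 1:])

x = np.zeros((4, 4, 3, 2), dtype=np.float32)
print(flatten_like(x, start_dim=1, end_dim=-1).shape)  # (4, 24)
```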
-
oneflow.
float
¶ alias of
oneflow.python.framework.dtype.float32
-
oneflow.
function_config
¶ alias of
oneflow.python.framework.function_util.FunctionConfig
-
oneflow.
gather
(params: oneflow_api.BlobDesc, indices: oneflow_api.BlobDesc, validate_indices: Optional[oneflow_api.BlobDesc] = None, axis: Optional[int] = None, batch_dims: int = 0, name: Optional[str] = None) → oneflow_api.BlobDesc¶ This operator gathers slices from the params Blob along the given axis according to indices.
- Parameters
params – A Blob. The blob from which to gather values. Must be at least rank axis + 1.
indices – A Blob. Index blob. Must be in range [0, params.shape[axis]).
axis – An int. The axis in params to gather indices from. Defaults to the first dimension. Supports negative indexes.
batch_dims – An optional int. Defaults to 0.
name – A name for the operation (optional).
- Returns
A blob. Has the same type as params.
For example:
Example 1:
import oneflow as flow
import numpy as np
import oneflow.typing as tp

@flow.global_function()
def gather_Job(x: tp.Numpy.Placeholder(shape=(3, 3), dtype=flow.float32),
               indice: tp.Numpy.Placeholder(shape=(2, ), dtype=flow.int32)
) -> tp.Numpy:
    gather_blob = flow.gather(params=x, indices=indice, axis=1)
    return gather_blob

x = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]]).astype(np.float32)
indice = np.array([0, 2]).astype(np.int32)
out = gather_Job(x, indice)

# out [[1. 3.]
#      [4. 6.]
#      [7. 9.]]
Example 2:
import oneflow as flow
import numpy as np
import oneflow.typing as tp

@flow.global_function()
def gather_Job(x: tp.Numpy.Placeholder(shape=(3, 3), dtype=flow.float32),
               indice: tp.Numpy.Placeholder(shape=(2, ), dtype=flow.int32)
) -> tp.Numpy:
    gather_blob = flow.gather(params=x, indices=indice, axis=0)
    return gather_blob

x = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]]).astype(np.float32)
indice = np.array([0, 2]).astype(np.int32)
out = gather_Job(x, indice)

# out [[1. 2. 3.]
#      [7. 8. 9.]]
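With batch_dims=0, gathering along an axis matches NumPy's np.take, which reproduces both outputs above:

```python
import numpy as np

x = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]], dtype=np.float32)
indices = np.array([0, 2], dtype=np.int32)

# axis=1: pick columns 0 and 2 from every row
print(np.take(x, indices, axis=1))
# [[1. 3.]
#  [4. 6.]
#  [7. 9.]]

# axis=0: pick rows 0 and 2
print(np.take(x, indices, axis=0))
# [[1. 2. 3.]
#  [7. 8. 9.]]
```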
-
oneflow.
gather_nd
(params: oneflow_api.BlobDesc, indices: oneflow_api.BlobDesc, name: Optional[str] = None) → oneflow_api.BlobDesc¶ This operator is a high-dimensional extension of gather. indices is a K-dimensional tensor, regarded as an index into the input Blob params.
Each element defines a slice of params:
\[output[(i_0,i_1,...,i_{K-2})] = param[indices(i_{0},i_{1},...,i_{K-2})]\]
- Parameters
params (oneflow_api.BlobDesc) – The input Blob.
indices (oneflow_api.BlobDesc) – The slice indices.
name (Optional[str], optional) – The name for the operation. Defaults to None.
- Returns
The result Blob.
- Return type
oneflow_api.BlobDesc
For example:
Example 1:
import oneflow as flow
import numpy as np
import oneflow.typing as tp

@flow.global_function()
def gather_nd_Job(x: tp.Numpy.Placeholder(shape=(3, 3), dtype=flow.float32),
                  indice: tp.Numpy.Placeholder(shape=(2, 1), dtype=flow.int32)
) -> tp.Numpy:
    gather_nd_blob = flow.gather_nd(params=x, indices=indice)
    return gather_nd_blob

x = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]]).astype(np.float32)
indice = np.array([[0], [2]]).astype(np.int32)
out = gather_nd_Job(x, indice)

# out [[1. 2. 3.]
#      [7. 8. 9.]]
Example 2:
import oneflow as flow
import numpy as np
import oneflow.typing as tp

@flow.global_function()
def gather_nd_Job(x: tp.Numpy.Placeholder(shape=(3, 3), dtype=flow.float32),
                  indice: tp.Numpy.Placeholder(shape=(2, 2), dtype=flow.int32)
) -> tp.Numpy:
    gather_nd_blob = flow.gather_nd(params=x, indices=indice)
    return gather_nd_blob

x = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]]).astype(np.float32)
indice = np.array([[0, 2], [2, 1]]).astype(np.int32)
out = gather_nd_Job(x, indice)

# out [3. 8.]
Example 3:
import oneflow as flow
import numpy as np
import oneflow.typing as tp

@flow.global_function()
def gather_nd_Job(x: tp.Numpy.Placeholder(shape=(3, 3), dtype=flow.float32),
                  indice: tp.Numpy.Placeholder(shape=(3, 2), dtype=flow.int32)
) -> tp.Numpy:
    gather_nd_blob = flow.gather_nd(params=x, indices=indice)
    return gather_nd_blob

x = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]]).astype(np.float32)
indice = np.array([[0, 1], [1, 0], [2, 2]]).astype(np.int32)
out = gather_nd_Job(x, indice)

# out [2. 4. 9.]
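The indexing rule can be mimicked with NumPy advanced indexing, where each row of indices picks one slice or element of params (gather_nd_like is a hypothetical reference, not part of the OneFlow API); this reproduces all three examples:

```python
import numpy as np

def gather_nd_like(params, indices):
    # Each row of `indices` selects params[row[0], row[1], ...];
    # rows shorter than params.ndim select whole slices.
    return params[tuple(indices.T)]

x = np.arange(1, 10, dtype=np.float32).reshape(3, 3)

print(gather_nd_like(x, np.array([[0], [2]])))                # rows 0 and 2
print(gather_nd_like(x, np.array([[0, 2], [2, 1]])))          # [3. 8.]
print(gather_nd_like(x, np.array([[0, 1], [1, 0], [2, 2]])))  # [2. 4. 9.]
```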
-
oneflow.
get_all_variables
() → Dict[str, oneflow.python.framework.remote_blob.EagerConsistentBlob]¶ Get all variables of all jobs as a dict.
-
oneflow.
get_variable
(name: str, shape: Optional[Sequence[int]] = None, dtype: Optional[oneflow.python.framework.dtype.dtype] = <class 'oneflow.python.framework.dtype.float32'>, initializer: Optional[oneflow.core.job.initializer_conf_pb2.InitializerConf] = None, regularizer: Optional[oneflow.core.job.regularizer_conf_pb2.RegularizerConf] = None, trainable: Optional[bool] = None, model_name: Optional[str] = None, random_seed: Optional[int] = None, distribute: oneflow_api.distribute.Distribute = <oneflow_api.distribute.BroadcastDistribute object>, reuse: bool = True) → oneflow_api.BlobDesc¶ Create a variable or retrieve an existing one.
- Parameters
name – Name of this variable. One variable could be shared by multiple OneFlow functions. None by default
shape – Shape of the variable. None by default
dtype – Data type of the variable. None by default
initializer – An initializer object. For instance, a
ones_initializer()
. None by default
trainable – A bool to indicate if this variable is trainable. True by default
model_name – A string. ‘weight’ or ‘bias’. None by default
random_seed – Random seed for random initializers. None by default
For example:
Example 1:
import oneflow as flow
import oneflow.typing as tp

def watch_handler(y: tp.Numpy):
    print("out", y)

@flow.global_function()
def variable_Job() -> None:
    init = flow.constant_initializer(1.25)
    variable = flow.get_variable(
        "variable-weight",
        shape=(1, 3, 2, 2),
        initializer=init,
        trainable=True
    )
    flow.watch(variable, watch_handler)

checkpoint = flow.train.CheckPoint()
checkpoint.init()
variable_Job()

# out [[[[1.25 1.25]
#        [1.25 1.25]]
#       [[1.25 1.25]
#        [1.25 1.25]]
#       [[1.25 1.25]
#        [1.25 1.25]]]]
Example 2:
import oneflow as flow
import numpy as np
import oneflow.typing as tp

def conv2d(input, filters, kernel_size, strides, padding, name):
    input_shape = input.shape
    weight_initializer = flow.truncated_normal(0.1)
    weight_regularizer = flow.regularizers.l2(0.0005)
    weight_shape = (filters, input_shape[1], kernel_size[0], kernel_size[1])
    weight = flow.get_variable(
        name + "-weight",
        shape=weight_shape,
        initializer=weight_initializer,
        regularizer=weight_regularizer,
    )
    return flow.nn.conv2d(input, weight, strides, padding, name=name)

@flow.global_function()
def conv2d_Job(x: tp.Numpy.Placeholder((1, 64, 32, 32))
) -> tp.Numpy:
    conv = conv2d(x,
                  filters=128,
                  kernel_size=[3, 3],
                  strides=2,
                  padding='SAME',
                  name="Convlayer")
    return conv

x = np.random.randn(1, 64, 32, 32).astype(np.float32)
out = conv2d_Job(x)

# out.shape (1, 128, 16, 16)
-
oneflow.
global_function
(type: str = 'predict', function_config: oneflow.python.framework.function_util.FunctionConfig = None) → Callable[[Callable], Callable]¶ Creates a callable OneFlow global function from a Python function.
For instance:
@oneflow.global_function(flow.FunctionConfig())
def train():
    # your model
- Parameters
function_config (FunctionConfig, optional) – a FunctionConfig object. Defaults to FunctionConfig().
- Returns
a callable which is called to execute the compiled function
- Return type
Callable[[Callable], Callable]
-
oneflow.
glorot_normal_initializer
(data_format: str = '') → oneflow.core.job.initializer_conf_pb2.InitializerConf¶ Initializer that generates a Xavier normal distribution.
It can also be called as oneflow.xavier_normal_initializer.
The equation is:
\[W\sim N(0, \sqrt{\frac{2}{n_j+n_{j+1}}})\]
\(N\) means the normal distribution
\(n_j\) means the number of parameters in the j-th layer
- Parameters
data_format (str, optional) – The data format. Defaults to “”.
- Returns
Initial configuration
- Return type
initializer_conf_util.InitializerConf
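As a quick numeric check of the formula, the standard deviation depends only on the fan-in and fan-out of the layer (glorot_normal_std is a hypothetical helper, not part of the OneFlow API):

```python
import math

def glorot_normal_std(fan_in, fan_out):
    # std of the Xavier/Glorot normal distribution: sqrt(2 / (n_j + n_{j+1}))
    return math.sqrt(2.0 / (fan_in + fan_out))

print(glorot_normal_std(3, 3))     # ~0.577 for a 3x3 variable
print(glorot_normal_std(64, 128))  # ~0.102
```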
For example:
Example 1:
import oneflow as flow
import oneflow.typing as tp

def watch_handler(y: tp.Numpy):
    print("out", y)

@flow.global_function()
def xavier_normal_Job() -> None:
    init = flow.xavier_normal_initializer()
    blob = flow.get_variable(
        "blob-weight",
        shape=(3, 3),
        initializer=init,
        trainable=True
    )
    flow.watch(blob, watch_handler)

checkpoint = flow.train.CheckPoint()
checkpoint.init()
xavier_normal_Job()

# out [[ 0.5908121  -0.10804518 -0.6148571 ]
#      [ 1.4007381  -0.08172473  0.36579943]
#      [-0.6461796  -0.15923311  0.33653972]]
Example 2:
import oneflow as flow
import numpy as np
import oneflow.typing as tp

@flow.global_function()
def conv2d_xavier_normal_Job(x: tp.Numpy.Placeholder((1, 256, 32, 32))
) -> tp.Numpy:
    initializer = flow.xavier_normal_initializer()
    conv2d = flow.layers.conv2d(
        x,
        filters=128,
        kernel_size=3,
        strides=1,
        padding='SAME',
        kernel_initializer=initializer,
        name="Conv2d"
    )
    return conv2d

x = np.random.randn(1, 256, 32, 32).astype(np.float32)
out = conv2d_xavier_normal_Job(x)

# out.shape (1, 128, 32, 32)
-
oneflow.
glorot_uniform_initializer
(data_format: str = '') → oneflow.core.job.initializer_conf_pb2.InitializerConf¶ Initializer that generates a Xavier uniform distribution.
It can also be called as oneflow.xavier_uniform_initializer.
The equation is:
\[W\sim U(-\sqrt{\frac{6}{n_j+n_{j+1}}},\sqrt{\frac{6}{n_j+n_{j+1}}})\]
\(U\) means the uniform distribution
\(n_j\) means the number of parameters in the j-th layer
- Parameters
data_format (str, optional) – The data format. Defaults to “”.
- Returns
Initial configuration
- Return type
initializer_conf_util.InitializerConf
For example:
Example 1:
import oneflow as flow
import oneflow.typing as tp

def watch_handler(y: tp.Numpy):
    print("out", y)

@flow.global_function()
def xavier_uniform_Job() -> None:
    init = flow.xavier_uniform_initializer()
    blob = flow.get_variable(
        "blob-weight",
        shape=(3, 3),
        initializer=init,
        trainable=True
    )
    flow.watch(blob, watch_handler)

checkpoint = flow.train.CheckPoint()
checkpoint.init()
xavier_uniform_Job()

# out [[-0.14424723 -0.9532095  -0.08723891]
#      [-0.8011227  -0.29729813 -0.26769108]
#      [ 0.9208976  -0.5971756  -0.15077025]]
Example 2:
import oneflow as flow
import numpy as np
import oneflow.typing as tp

@flow.global_function()
def conv2d_xavier_uniform_Job(x: tp.Numpy.Placeholder((1, 256, 32, 32))
) -> tp.Numpy:
    initializer = flow.xavier_uniform_initializer()
    conv2d = flow.layers.conv2d(
        x,
        filters=128,
        kernel_size=3,
        strides=1,
        padding='SAME',
        kernel_initializer=initializer,
        name="Conv2d"
    )
    return conv2d

x = np.random.randn(1, 256, 32, 32).astype(np.float32)
out = conv2d_xavier_uniform_Job(x)

# out.shape (1, 128, 32, 32)
-
oneflow.
identity
(x: oneflow_api.BlobDesc, name: Optional[str] = None) → oneflow_api.BlobDesc¶ This operator returns a Blob with content and data type identical to the input Blob.
Analogous to tf.identity
- Parameters
x (oneflow_api.BlobDesc) – The input Blob.
name (Optional[str], optional) – The name for the operation. Defaults to None.
- Returns
The result Blob.
- Return type
oneflow_api.BlobDesc
For example:
import oneflow as flow
import numpy as np
import oneflow.typing as tp

@flow.global_function()
def identity_Job(x: tp.Numpy.Placeholder(shape=(3, 3), dtype=flow.int32),
) -> tp.Numpy:
    return flow.identity(x)

x = np.array([[1, 1, 1], [2, 2, 2], [3, 3, 3]]).astype(np.int32)
out = identity_Job(x)

# out [[1 1 1]
#      [2 2 2]
#      [3 3 3]]
-
oneflow.
identity_n
(inputs: Sequence[oneflow_api.BlobDesc], name: Optional[str] = None) → List[oneflow_api.BlobDesc]¶ This operator is similar to oneflow.identity. The difference is that the input and output of identity_n are Lists.
- Parameters
inputs (Iterable[oneflow_api.BlobDesc]) – A List of input Blob.
name (Optional[str], optional) – The name for the operation. Defaults to None.
- Returns
A list of result Blob.
- Return type
List[oneflow_api.BlobDesc]
For example:
import oneflow as flow
import numpy as np
import oneflow.typing as tp
from typing import List

@flow.global_function()
def identity_Job(x: tp.Numpy.Placeholder(shape=(1, 3), dtype=flow.int32),
                 y: tp.Numpy.Placeholder(shape=(1, 3), dtype=flow.int32),
                 z: tp.Numpy.Placeholder(shape=(1, 3), dtype=flow.int32)
) -> List[tp.Numpy]:
    return flow.identity_n([x, y, z])

x = np.array([[1, 1, 1]]).astype(np.int32)
y = np.array([[2, 2, 2]]).astype(np.int32)
z = np.array([[3, 3, 3]]).astype(np.int32)
out = identity_Job(x, y, z)

# out[0] [[1, 1, 1]]
# out[1] [[2, 2, 2]]
# out[2] [[3, 3, 3]]
-
oneflow.
image_batch_align
(images: oneflow_api.BlobDesc, shape: Sequence[int], dtype: oneflow.python.framework.dtype.dtype, alignment: int, name: Optional[str] = None) → oneflow_api.BlobDesc¶ This operator aligns the shape for a batch of images.
The aligned shape is computed as:
\[ \begin{align}\begin{aligned}& shape_{width} = int(\frac{(shape_{width}+alignment-1)}{alignment})*alignment\\& shape_{height} = int(\frac{(shape_{height}+alignment-1)}{alignment})*alignment\end{aligned}\end{align} \]
- Parameters
images (oneflow_api.BlobDesc) – The images.
shape (Sequence[int]) – The maximum static shape of input images.
dtype (dtype_util.dtype) – The data type.
alignment (int) – The align factor.
name (Optional[str], optional) – The name for the operation. Defaults to None.
- Returns
The result Blob
- Return type
oneflow_api.BlobDesc
For example:
import cv2
import numpy as np
import oneflow as flow
import oneflow.typing as tp

def _of_image_batch_align(images, input_shape, output_shape, alignment):
    func_config = flow.FunctionConfig()
    func_config.default_data_type(flow.float)
    func_config.default_logical_view(flow.scope.mirrored_view())

    @flow.global_function(function_config=func_config)
    def image_batch_align_job(
        images_def: tp.ListListNumpy.Placeholder(shape=input_shape, dtype=flow.float)
    ) -> tp.ListNumpy:
        # Convert to tensor buffer
        images_buffer = flow.tensor_list_to_tensor_buffer(images_def)
        image = flow.image_batch_align(
            images_buffer, shape=output_shape[1:], dtype=flow.float, alignment=alignment
        )
        return image

    image = image_batch_align_job([images])
    return image[0]

def _read_images_by_cv(image_files):
    images = [cv2.imread(image_file).astype(np.single) for image_file in image_files]
    return [np.expand_dims(image, axis=0) for image in images]

def _get_images_static_shape(images):
    image_shapes = [image.shape for image in images]
    image_static_shape = np.amax(image_shapes, axis=0)
    assert isinstance(
        image_static_shape, np.ndarray
    ), "image_shapes: {}, image_static_shape: {}".format(
        str(image_shapes), str(image_static_shape)
    )
    image_static_shape = image_static_shape.tolist()
    assert image_static_shape[0] == 1, str(image_static_shape)
    image_static_shape[0] = len(image_shapes)
    return image_static_shape

def _roundup(x, n):
    # compute the aligned shape
    return int((x + n - 1) / n) * n

if __name__ == "__main__":
    img = _read_images_by_cv(['./img/1.jpg', './img/2.jpg', './img/3.jpg'])
    img_shape = _get_images_static_shape(img)  # In example is [3, 349, 367, 3]
    alignment = 16  # alignment factor
    aligned_image_shape = [
        img_shape[0],
        _roundup(img_shape[1], alignment),
        _roundup(img_shape[2], alignment),
        img_shape[3],
    ]
    image = _of_image_batch_align(img, tuple(img_shape), aligned_image_shape, alignment)
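The alignment formula reduces to integer ceiling-rounding, as the _roundup helper in the example implements; a quick plain-Python check:

```python
def roundup(x, alignment):
    # ceil(x / alignment) * alignment, using integer arithmetic
    return (x + alignment - 1) // alignment * alignment

# Align a (349, 367) image to a multiple of 16 on both sides
print(roundup(349, 16), roundup(367, 16))  # 352 368
```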
-
oneflow.
image_decode
(images_bytes_buffer: oneflow_api.BlobDesc, dtype: oneflow.python.framework.dtype.dtype = <class 'oneflow.python.framework.dtype.uint8'>, color_space: str = 'BGR', name: Optional[str] = None) → oneflow_api.BlobDesc¶ This operator decodes images.
- Parameters
images_bytes_buffer (oneflow_api.BlobDesc) – The input Blob. Its type should be kTensorBuffer. For more details, please refer to the code example.
dtype (dtype_util.dtype, optional) – The data type. Defaults to dtype_util.uint8.
color_space (str, optional) – The color space. Defaults to “BGR”.
name (Optional[str], optional) – The name for the operation. Defaults to None.
- Returns
The decoded image list.
- Return type
oneflow_api.BlobDesc
For example:
import oneflow as flow
import oneflow.typing as tp
import numpy as np
from PIL import Image

def _of_image_decode(images):
    image_files = [open(im, "rb") for im in images]
    images_bytes = [imf.read() for imf in image_files]
    static_shape = (len(images_bytes), max([len(bys) for bys in images_bytes]))
    for imf in image_files:
        imf.close()

    func_config = flow.FunctionConfig()
    func_config.default_data_type(flow.float)
    func_config.default_logical_view(flow.scope.mirrored_view())

    @flow.global_function(function_config=func_config)
    def image_decode_job(
        images_def: tp.ListListNumpy.Placeholder(shape=static_shape, dtype=flow.int8)
    ) -> tp.ListListNumpy:
        # convert to tensor buffer
        images_buffer = flow.tensor_list_to_tensor_buffer(images_def)
        decoded_images_buffer = flow.image_decode(images_buffer)
        # Remember to set a shape
        # convert back to tensor list
        return flow.tensor_buffer_to_tensor_list(
            decoded_images_buffer, shape=(640, 640, 3), dtype=flow.uint8
        )

    images_np_arr = [
        np.frombuffer(bys, dtype=np.byte).reshape(1, -1) for bys in images_bytes
    ]
    decoded_images = image_decode_job([images_np_arr])
    return decoded_images[0]

if __name__ == "__main__":
    img = _of_image_decode(['./img/1.jpg'])
    print(img[0].shape)  # Our image shape is (1, 349, 367, 3)
-
oneflow.
image_flip
(image: oneflow_api.BlobDesc, flip_code: Union[int, oneflow_api.BlobDesc], name: Optional[str] = None) → oneflow_api.BlobDesc¶ This operator flips the images.
Each flip code corresponds to a different flip mode:
0 (0x00): Non Flip
1 (0x01): Horizontal Flip
16 (0x10): Vertical Flip
17 (0x11): Both Horizontal and Vertical Flip
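These codes are bit flags: the combined mode 17 (0x11) is the bitwise OR of the horizontal (0x01) and vertical (0x10) codes. A plain-Python illustration (the constant names are illustrative, not part of the OneFlow API):

```python
HORIZONTAL_FLIP = 0x01
VERTICAL_FLIP = 0x10

# Combining both single flips yields the "both" mode
both = HORIZONTAL_FLIP | VERTICAL_FLIP
print(both)  # 17
print(bool(both & VERTICAL_FLIP))  # True: vertical flip is requested
```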
- Parameters
image (oneflow_api.BlobDesc) – The input images.
flip_code (Union[int, oneflow_api.BlobDesc]) – The flip code.
name (Optional[str], optional) – The name for the operation. Defaults to None.
- Returns
The result Blob
- Return type
oneflow_api.BlobDesc
For example:
import cv2
import numpy as np
import oneflow as flow
import oneflow.typing as tp

def _of_image_flip(images, image_shape, flip_code):
    func_config = flow.FunctionConfig()
    func_config.default_data_type(flow.float)
    func_config.default_logical_view(flow.scope.mirrored_view())

    @flow.global_function(function_config=func_config)
    def image_flip_job(
        images_def: tp.ListListNumpy.Placeholder(shape=image_shape, dtype=flow.float)
    ) -> tp.ListListNumpy:
        images_buffer = flow.tensor_list_to_tensor_buffer(images_def)
        flip_images = flow.image_flip(images_buffer, flip_code)
        return flow.tensor_buffer_to_tensor_list(
            flip_images, shape=image_shape[1:], dtype=flow.float
        )

    image_tensor = image_flip_job([images])
    return image_tensor[0]

def _read_images_by_cv(image_files):
    images = [cv2.imread(image_file).astype(np.single) for image_file in image_files]
    return [np.expand_dims(image, axis=0) for image in images]

def _get_images_static_shape(images):
    image_shapes = [image.shape for image in images]
    image_static_shape = np.amax(image_shapes, axis=0)
    assert isinstance(
        image_static_shape, np.ndarray
    ), "image_shapes: {}, image_static_shape: {}".format(
        str(image_shapes), str(image_static_shape)
    )
    image_static_shape = image_static_shape.tolist()
    assert image_static_shape[0] == 1, str(image_static_shape)
    image_static_shape[0] = len(image_shapes)
    return image_static_shape

if __name__ == "__main__":
    img = _read_images_by_cv(['./img/1.jpg', './img/2.jpg', './img/3.jpg'])
    img_shape = _get_images_static_shape(img)  # In example is [3, 349, 367, 3]
    image = _of_image_flip(img, tuple(img_shape), flip_code=1)
-
oneflow.
image_normalize
(image: oneflow_api.BlobDesc, std: Sequence[float], mean: Sequence[float], name: Optional[str] = None) → oneflow_api.BlobDesc¶ This operator normalizes the image.
- Parameters
image (oneflow_api.BlobDesc) – The input image.
std (Sequence[float]) – The standard deviation of the images.
mean (Sequence[float]) – The mean value of the images.
name (Optional[str], optional) – The name for the operation. Defaults to None.
- Returns
The result Blob
- Return type
oneflow_api.BlobDesc
For example:
import cv2
import numpy as np
import oneflow as flow
import oneflow.typing as tp

def _of_image_normalize(images, image_shape, std, mean):
    func_config = flow.FunctionConfig()
    func_config.default_data_type(flow.float)
    func_config.default_logical_view(flow.scope.mirrored_view())

    @flow.global_function(function_config=func_config)
    def image_normalize_job(
        images_def: tp.ListListNumpy.Placeholder(shape=image_shape, dtype=flow.float)
    ) -> tp.ListListNumpy:
        # Convert to tensor buffer
        images_buffer = flow.tensor_list_to_tensor_buffer(images_def)
        # Normalize the images
        norm_images = flow.image_normalize(images_buffer, std, mean)
        # Convert back to tensor list
        return flow.tensor_buffer_to_tensor_list(
            norm_images, shape=image_shape[1:], dtype=flow.float
        )

    image_tensor = image_normalize_job([images])
    return image_tensor[0]

def _read_images_by_cv(image_files):
    images = [cv2.imread(image_file).astype(np.single) for image_file in image_files]
    return [np.expand_dims(image, axis=0) for image in images]

def _get_images_static_shape(images):
    image_shapes = [image.shape for image in images]
    image_static_shape = np.amax(image_shapes, axis=0)
    assert isinstance(
        image_static_shape, np.ndarray
    ), "image_shapes: {}, image_static_shape: {}".format(
        str(image_shapes), str(image_static_shape)
    )
    image_static_shape = image_static_shape.tolist()
    assert image_static_shape[0] == 1, str(image_static_shape)
    image_static_shape[0] = len(image_shapes)
    return image_static_shape

if __name__ == "__main__":
    img = _read_images_by_cv(['./img/1.jpg', './img/2.jpg', './img/3.jpg'])
    img_shape = _get_images_static_shape(img)  # In example is [3, 349, 367, 3]
    image = _of_image_normalize(img, tuple(img_shape),
                                std=(102.9801, 115.9465, 122.7717),
                                mean=(1.0, 1.0, 1.0))
-
oneflow.
image_random_crop
(input_blob: oneflow_api.BlobDesc, num_attempts: int = 10, seed: Optional[int] = None, random_area: Sequence[float] = None, random_aspect_ratio: Sequence[float] = None, name: str = 'ImageRandomCrop') → oneflow_api.BlobDesc¶ This operator crops the input image randomly.
- Parameters
input_blob (oneflow_api.BlobDesc) – The input Blob.
num_attempts (int, optional) – The maximum number of random cropping attempts. Defaults to 10.
seed (Optional[int], optional) – The random seed. Defaults to None.
random_area (Sequence[float], optional) – The random cropping area. Defaults to None.
random_aspect_ratio (Sequence[float], optional) – The random scaled ratio. Defaults to None.
name (str, optional) – The name for the operation. Defaults to “ImageRandomCrop”.
- Returns
The result Blob.
- Return type
oneflow_api.BlobDesc
For example:
import oneflow as flow
import oneflow.typing as tp
import numpy as np
import cv2

def _read_images_by_cv(image_files):
    images = [cv2.imread(image_file).astype(np.single) for image_file in image_files]
    return [np.expand_dims(image, axis=0) for image in images]

def _get_images_static_shape(images):
    image_shapes = [image.shape for image in images]
    image_static_shape = np.amax(image_shapes, axis=0)
    assert isinstance(
        image_static_shape, np.ndarray
    ), "image_shapes: {}, image_static_shape: {}".format(
        str(image_shapes), str(image_static_shape)
    )
    image_static_shape = image_static_shape.tolist()
    assert image_static_shape[0] == 1, str(image_static_shape)
    image_static_shape[0] = len(image_shapes)
    return image_static_shape

def _of_image_random_crop(images, image_static_shape):
    func_config = flow.FunctionConfig()
    func_config.default_data_type(flow.float)
    func_config.default_logical_view(flow.scope.mirrored_view())

    @flow.global_function(function_config=func_config)
    def image_random_crop_job(
        images_def: tp.ListListNumpy.Placeholder(shape=image_static_shape, dtype=flow.float)
    ) -> tp.ListListNumpy:
        # The input Blob type should be "kTensorBuffer"
        # So we use oneflow.tensor_list_to_tensor_buffer to convert
        images_buffer = flow.tensor_list_to_tensor_buffer(images_def)
        # Do the random crop
        random_crop_buffer = flow.image.random_crop(
            images_buffer,
            random_area=[0.15, 0.80],
            random_aspect_ratio=[0.75, 1.55],
        )
        # We convert back to "tensorlist" type
        random_crop_images = flow.tensor_buffer_to_tensor_list(
            random_crop_buffer,
            shape=(image_static_shape[1], image_static_shape[2], image_static_shape[-1]),
            dtype=flow.float,
        )
        return random_crop_images

    random_crop_images = image_random_crop_job([images])
    return random_crop_images

if __name__ == "__main__":
    img = _read_images_by_cv(['./img/1.jpg'])
    img_shape = _get_images_static_shape(img)  # In example is (1, 234, 346, 3)
    random_crop_images = _of_image_random_crop(img, tuple(img_shape))
    # random_crop_images.shape is (234, 346, 3)
-
oneflow.
image_resize
(image: oneflow_api.BlobDesc, target_size: Union[int, Sequence[int]] = None, min_size: Optional[int] = None, max_size: Optional[int] = None, keep_aspect_ratio: bool = False, resize_side: str = 'shorter', channels: int = 3, dtype: Optional[oneflow.python.framework.dtype.dtype] = None, interpolation_type: str = 'auto', name: Optional[str] = None, color_space: Optional[str] = None, interp_type: Optional[str] = None, resize_shorter: int = 0, resize_x: int = 0, resize_y: int = 0) → Union[oneflow_api.BlobDesc, Sequence[oneflow_api.BlobDesc]]¶ Resize images to target size.
- Parameters
image – A Tensor consists of images to be resized.
target_size – A list or tuple when keep_aspect_ratio is false or an int when keep_aspect_ratio is true. When keep_aspect_ratio is false, target_size has a form of (target_width, target_height) that image will resize to. When keep_aspect_ratio is true, the longer side or shorter side of the image will be resized to target size.
min_size – An int, optional. Only works when keep_aspect_ratio is true and resize_side is “longer”. If min_size is not None, the shorter side must be greater than or equal to min_size. Default is None.
max_size – An int, optional. Only works when keep_aspect_ratio is true and resize_side is “shorter”. If max_size is not None, the longer side must be less than or equal to max_size. Default is None.
keep_aspect_ratio – A bool. If is false, indicate that image will be resized to fixed width and height, otherwise image will be resized keeping aspect ratio.
resize_side – A str of “longer” or “shorter”. Only works when keep_aspect_ratio is True. If resize_side is “longer”, the longer side of image will be resized to target_size. If resize_side is “shorter”, the shorter side of image will be resized to target_size.
channels – An int. How many channels an image has.
dtype – oneflow.dtype. Indicate output resized image data type.
interpolation_type – A str of “auto”, “bilinear”, “nearest_neighbor”, “bicubic” or “area”. Indicate interpolation method used to resize image.
name – A str, optional. Name for the operation.
color_space – Deprecated, a str of “RGB”, “BGR” or “GRAY”. Please use channels instead.
interp_type – Deprecated, a str of “Linear”, “Cubic” or “NN”. Please use interpolation_type instead.
resize_shorter – Deprecated, an int. Indicates the target size that the shorter side of the image will resize to. Please use target_size and resize_side instead.
resize_x – Deprecated, an int. Indicates the target size that the width of the image will resize to. Please use target_size instead.
resize_y – Deprecated, an int. Indicates the target size that the height of the image will resize to. Please use target_size instead.
- Returns
Tuple of resized images Blob, width and height scales Blob and new width and height Blob (new width and height Blob will be None when keep_aspect_ratio is false). If deprecated params are used, a single resized images Blob will be returned.
For example:
import oneflow as flow
import oneflow.typing as tp
from typing import Tuple

@flow.global_function(type="predict")
def ofrecord_reader_job() -> Tuple[tp.Numpy, tp.Numpy]:
    batch_size = 16
    color_space = "RGB"
    # our ofrecord file path is "./dataset/part-0"
    ofrecord = flow.data.ofrecord_reader(
        "./imgdataset",
        batch_size=batch_size,
        data_part_num=1,
        part_name_suffix_length=-1,
        part_name_prefix='part-',
        random_shuffle=True,
        shuffle_after_epoch=True,
    )
    image = flow.data.OFRecordImageDecoderRandomCrop(
        ofrecord, "encoded", color_space=color_space
    )
    res_image, scale, new_size = flow.image.Resize(
        image, target_size=(224, 224)
    )
    label = flow.data.OFRecordRawDecoder(
        ofrecord, "class/label", shape=(1, ), dtype=flow.int32
    )
    return res_image, label

if __name__ == "__main__":
    images, labels = ofrecord_reader_job()
    # image.shape (16, 224, 224, 3)
-
oneflow.
image_target_resize
(images: oneflow_api.BlobDesc, target_size: int, min_size: Optional[int] = None, max_size: Optional[int] = None, resize_side: str = 'shorter', interpolation_type: str = 'auto', name: Optional[str] = None) → Sequence[oneflow_api.BlobDesc]¶ This operator resizes images to the target size.
- Parameters
images (oneflow_api.BlobDesc) – The input Blob. Its type should be kTensorBuffer. For more details, please refer to the code example.
target_size (int) – An int, the target size.
min_size (Optional[int], optional) – If min_size is not None, the shorter side must be greater than or equal to min_size. Default is None. Defaults to None.
max_size (Optional[int], optional) – If max_size is not None, the longer side must be less than or equal to max_size. Defaults to None.
resize_side (str, optional) – A str of “longer” or “shorter”. Only works when keep_aspect_ratio is True. If resize_side is “longer”, the longer side of image will be resized to target_size. If resize_side is “shorter”, the shorter side of image will be resized to target_size. Defaults to “shorter”.
interpolation_type (str, optional) – A str of “auto”, “bilinear”, “nearest_neighbor”, “bicubic” or “area”. Indicate interpolation method used to resize image. Defaults to “auto”.
name (Optional[str], optional) – The name for the operation. Defaults to None.
- Returns
A Sequence includes the result Blob.
- Return type
Sequence[oneflow_api.BlobDesc]
For example:
import oneflow as flow
import oneflow.typing as tp
from typing import Tuple
import numpy as np
import cv2

def _read_images_by_cv(image_files):
    images = [cv2.imread(image_file).astype(np.single) for image_file in image_files]
    return [np.expand_dims(image, axis=0) for image in images]

def _get_images_static_shape(images):
    image_shapes = [image.shape for image in images]
    image_static_shape = np.amax(image_shapes, axis=0)
    assert isinstance(
        image_static_shape, np.ndarray
    ), "image_shapes: {}, image_static_shape: {}".format(
        str(image_shapes), str(image_static_shape)
    )
    image_static_shape = image_static_shape.tolist()
    assert image_static_shape[0] == 1, str(image_static_shape)
    image_static_shape[0] = len(image_shapes)
    return image_static_shape

def _of_image_target_resize(images, image_static_shape, target_size, max_size):
    func_config = flow.FunctionConfig()
    func_config.default_data_type(flow.float)
    func_config.default_logical_view(flow.scope.mirrored_view())

    @flow.global_function(function_config=func_config)
    def image_target_resize_job(
        images_def: tp.ListListNumpy.Placeholder(shape=image_static_shape, dtype=flow.float)
    ) -> Tuple[tp.ListListNumpy, tp.ListNumpy, tp.ListNumpy]:
        # The input Blob type should be "kTensorBuffer"
        # So we use oneflow.tensor_list_to_tensor_buffer to convert
        images_buffer = flow.tensor_list_to_tensor_buffer(images_def)
        resized_images_buffer, size, scale = flow.image_target_resize(
            images_buffer,
            target_size=target_size,
            max_size=max_size,
            resize_side="shorter",
        )
        # We convert back to "tensorlist" type
        resized_images = flow.tensor_buffer_to_tensor_list(
            resized_images_buffer,
            shape=(target_size, max_size, image_static_shape[-1]),
            dtype=flow.float,
        )
        return resized_images, size, scale

    resized_images, size, scale = image_target_resize_job([images])
    resized_image = resized_images[0]
    size = size[0]
    scale = scale[0]
    return resized_images, size, scale

if __name__ == "__main__":
    img = _read_images_by_cv(['./img/1.jpg'])
    img_shape = _get_images_static_shape(img)  # In example is [1, 349, 367, 3]
    target_size = 256
    max_size = 512
    resized_images, size, scale = _of_image_target_resize(img, tuple(img_shape), target_size, max_size)
    # Here the shorter side is "349", we resize it to target_size(256)
    # The scale is 256 / 349 = 0.73
    # The longer side will be resized to 367 * scale = 269
    # get the first element from the resized_images (its type is `list.list`)
    print(resized_images[0][0].shape)  # (1, 256, 269, 3)
-
oneflow.
in_top_k
(targets: oneflow_api.BlobDesc, predictions: oneflow_api.BlobDesc, k: Optional[int], name: Optional[str] = None) → oneflow_api.BlobDesc¶ Says whether the targets are in the top K predictions.
- Parameters
targets (oneflow_api.BlobDesc) – A Blob of type int32 or int64.
predictions (oneflow_api.BlobDesc) – A Blob of type float32.
k (Optional[int], optional) – Number of top elements to look at for computing precision.
name (Optional[str], optional) – The name for the operation. Defaults to None.
- Returns
A Blob of type bool, indicating for each sample whether the target is within the top k predictions.
- Return type
oneflow_api.BlobDesc
For example:
import oneflow as flow
import numpy as np
import oneflow.typing as tp

@flow.global_function()
def intopk_Job(
    targets: tp.Numpy.Placeholder((2,), dtype=flow.int32),
    predictions: tp.Numpy.Placeholder((2, 4), dtype=flow.float32),
) -> tp.Numpy:
    return flow.math.in_top_k(targets, predictions, 1)

targets = np.array([3, 1], dtype=np.int32)
predictions = np.array([[0.0, 1.0, 2.0, 3.0], [3.0, 2.0, 1.0, 0.0]], dtype=np.float32)
out = intopk_Job(targets, predictions)

# out [1 0]
-
oneflow.
inter_job_reuse_mem_strategy
(strategy_str: str, job_set: Optional[oneflow.core.job.job_set_pb2.JobSet] = None, **kwargs: _VT) → None¶ Sets the memory sharing strategy for a job set.
- Parameters
strategy_str – A string, one of: mem_sharing_priority, parallelism_priority, or custom_parallelism.
job_set – A JobSet object. If None, the default job set is used.
-
oneflow.
is_deprecated
(func_or_class)¶
-
oneflow.
kaiming_initializer
(shape: Sequence[int], distribution: str = 'random_normal', mode: str = 'fan_in', nonlinearity: str = 'leaky_relu', negative_slope: float = 0.0, data_format: str = 'NCHW') → None¶ Initialize weight according to the method described in Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification - He, K. et al. (2015), using a normal or uniform distribution.
When distribution is “random_normal”
The equation is:
\[W \sim N(0, \sqrt{\frac{{2}}{{n}}})\]When distribution is “random_uniform”
The equation is:
\[W \sim U(-\sqrt{\frac{{6}}{{n}}}, \sqrt{\frac{{6}}{{n}}})\]If mode is “fan_in”, the “n” is the number of input units in the weight Blob.
If mode is “fan_out”, the “n” is the number of output units in the weight Blob.
If mode is “fan_avg”, the “n” is the average of the number of input and output units in the weight Blob.
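To make the choice of “n” concrete, here is a small NumPy sketch (a hypothetical helper, not part of the OneFlow API) that computes the normal std sqrt(2/n) and the uniform bound sqrt(6/n) for a 4-D NCHW weight shape, assuming the usual convention that fan_in is the product of all non-leading dimensions:

import numpy as np

def kaiming_params(shape, mode="fan_in"):
    # For an NCHW weight of shape (out_channels, in_channels, kh, kw),
    # fan_in = in_channels * kh * kw and fan_out = out_channels.
    fan_out, fan_in = shape[0], int(np.prod(shape[1:]))
    n = {"fan_in": fan_in,
         "fan_out": fan_out,
         "fan_avg": (fan_in + fan_out) / 2}[mode]
    normal_std = np.sqrt(2.0 / n)      # W ~ N(0, sqrt(2/n))
    uniform_bound = np.sqrt(6.0 / n)   # W ~ U(-sqrt(6/n), sqrt(6/n))
    return normal_std, uniform_bound

std, bound = kaiming_params((128, 256, 3, 3), mode="fan_in")
# fan_in = 256 * 3 * 3 = 2304

This is only a sanity check on the formulas above; the actual fan computation inside OneFlow may differ in details such as data_format handling.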
- Parameters
shape (Sequence[int]) – Blob shape.
distribution (str, optional) – ‘random_normal’ or ‘random_uniform’. Defaults to “random_normal”.
mode (str, optional) – ‘fan_in’, ‘fan_out’ or ‘fan_avg’. Defaults to “fan_in”.
nonlinearity (str, optional) – None, ‘tanh’, ‘sigmoid’, ‘relu’ or ‘leaky_relu’. Defaults to “leaky_relu”.
negative_slope (float, optional) – The negative slope of leaky_relu. Defaults to 0.0.
data_format (str, optional) – ‘NCHW’, ‘NHWC’. Defaults to “NCHW”.
- Raises
NotImplementedError – Only normal and uniform distributions are supported.
- Returns
An initializer configuration, as from flow.random_normal_initializer or flow.random_uniform_initializer.
- Return type
initializer_conf_util.InitializerConf
For example:
Example 1:
import oneflow as flow
import oneflow.typing as tp

def watch_handler(y: tp.Numpy):
    print("out", y)

@flow.global_function()
def kaiming_Job() -> None:
    init = flow.kaiming_initializer(shape=(3, 3), mode="fan_avg", nonlinearity="relu")
    blob = flow.get_variable(
        "blob-weight",
        shape=(3, 3),
        initializer=init,
        trainable=True
    )
    flow.watch(blob, watch_handler)

checkpoint = flow.train.CheckPoint()
checkpoint.init()
kaiming_Job()

# out [[ 0.54521346  0.32585594  1.3474437 ]
#      [ 0.30729076 -0.19158769  0.2709008 ]
#      [-0.95830524 -0.05093324  0.28178614]]
Example 2:
import oneflow as flow
import numpy as np
import oneflow.typing as tp

@flow.global_function()
def conv2d_kaiming_Job(x: tp.Numpy.Placeholder((1, 256, 32, 32))
) -> tp.Numpy:
    initializer = flow.kaiming_initializer(shape=(1, 256, 32, 32))
    conv2d = flow.layers.conv2d(
        x,
        filters=128,
        kernel_size=3,
        strides=1,
        padding='SAME',
        kernel_initializer=initializer,
        name="Conv2d"
    )
    return conv2d

x = np.random.randn(1, 256, 32, 32).astype(np.float32)
out = conv2d_kaiming_Job(x)

# out.shape (1, 128, 32, 32)
-
oneflow.
load_variables
(value_dict: Dict[str, Union[oneflow.python.framework.remote_blob.EagerBlobTrait, oneflow.python.framework.check_point_v2.FileBackendVariableBlob, numpy.ndarray]], ignore_mismatch: bool = True)¶ Load the values in value_dict into OneFlow variables. For example, if value_dict is {‘x’: np.ones(x_shape)}, the variable “x” will be filled with ones. If ignore_mismatch is False, an exception is raised when a name in value_dict does not belong to any variable.
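The name-matching semantics can be illustrated with a plain-Python stand-in (a hypothetical sketch of the documented behavior, not the OneFlow implementation) that copies values from a dict into a variable store:

import numpy as np

def load_variables_sketch(variables, value_dict, ignore_mismatch=True):
    """Illustrative stand-in for flow.load_variables: copy values from
    value_dict into matching entries of a variable store."""
    for name, value in value_dict.items():
        if name in variables:
            variables[name] = np.asarray(value)
        elif not ignore_mismatch:
            # Mirrors the documented behavior: a name that belongs to no
            # variable raises when ignore_mismatch is False.
            raise RuntimeError("no variable named {!r}".format(name))

variables = {"x": np.zeros((2, 2))}
load_variables_sketch(variables, {"x": np.ones((2, 2))})
# variables["x"] is now all ones; an unknown name is silently ignored
# unless ignore_mismatch=False is passed.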
-
oneflow.
masked_fill
(x: oneflow_api.BlobDesc, mask: oneflow_api.BlobDesc, value: Union[float, int], name: Optional[str] = None) → oneflow_api.BlobDesc¶ Fill a blob with a given value according to the given mask.
- Parameters
x (oneflow_api.BlobDesc) – Input Blob.
mask (oneflow_api.BlobDesc) – Composed of 0s and 1s; the input Blob ‘x’ will be filled with the given value where the mask is 1.
value (Union[int, float]) – The value to use for filling the input Blob.
name (Optional[str], optional) – The name for the operation. Defaults to None.
Attention
x and mask must be broadcastable to each other. mask must be int type (int8/int32/int64).
- Returns
The value-filled Blob
- Return type
oneflow_api.BlobDesc
For example:
import oneflow as flow
import numpy as np
import oneflow.typing as tp

@flow.global_function()
def masked_fill_Job(
    x: tp.Numpy.Placeholder((4, )),
    mask: tp.Numpy.Placeholder((4, ), dtype=flow.int8),
) -> tp.Numpy:
    return flow.masked_fill(x, mask, value=5)

x = np.array([1, 2, 3, 4], dtype=np.float32)
mask = np.array([1, 0, 0, 1], dtype=np.int8)
out = masked_fill_Job(x, mask)

# output [5 2 3 5]
-
oneflow.
matmul
(a: oneflow_api.BlobDesc, b: oneflow_api.BlobDesc, transpose_a: bool = False, transpose_b: bool = False, name: Optional[str] = None) → oneflow_api.BlobDesc¶ This operator applies matrix multiplication to two Blobs.
- Parameters
a (oneflow_api.BlobDesc) – A Blob
b (oneflow_api.BlobDesc) – A Blob
transpose_a (bool, optional) – Whether to transpose A Blob. Defaults to False.
transpose_b (bool, optional) – Whether to transpose B Blob. Defaults to False.
name (Optional[str], optional) – The name for the operation. Defaults to None.
- Returns
The result Blob
- Return type
oneflow_api.BlobDesc
For example:
import oneflow as flow
import numpy as np
import oneflow.typing as tp

@flow.global_function()
def matmul_Job(A: tp.Numpy.Placeholder((3, 3)),
               B: tp.Numpy.Placeholder((3, 3))
) -> tp.Numpy:
    return flow.linalg.matmul(A, B)

A = np.array([[1, 0, 0], [0, 1, 1], [0, 0, 1]]).astype(np.float32)
B = np.array([[3, 4, 5], [6, 7, 8], [9, 10, 11]]).astype(np.float32)
out = matmul_Job(A, B)

# output [[ 3.  4.  5.]
#         [15. 17. 19.]
#         [ 9. 10. 11.]]
-
oneflow.
multi_count_not_finite
(x: Optional[Sequence[oneflow_api.BlobDesc]] = None, name: Optional[str] = None) → oneflow_api.BlobDesc¶
-
oneflow.
nonzero
(a: oneflow_api.BlobDesc, name: Optional[str] = None) → oneflow_api.BlobDesc¶ This operator finds the indices of the non-zero elements of the input Blob.
- Parameters
a (oneflow_api.BlobDesc) – The input Blob.
name (Optional[str], optional) – The name for the operation. Defaults to None.
- Returns
The result Blob.
- Return type
oneflow_api.BlobDesc
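Since no example is given here, the index semantics can be illustrated with the NumPy analogue np.argwhere (this is plain NumPy, not the OneFlow operator): it returns one row of coordinates per non-zero element.

import numpy as np

# NumPy analogue of nonzero-style index extraction: np.argwhere
# returns the index of every non-zero element, one row per element.
a = np.array([[1, 0, 0],
              [0, 2, 0],
              [0, 0, 3]])
indices = np.argwhere(a)
# indices:
# [[0 0]
#  [1 1]
#  [2 2]]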
-
oneflow.
object_bbox_flip
(bbox: oneflow_api.BlobDesc, image_size: oneflow_api.BlobDesc, flip_code: Union[int, oneflow_api.BlobDesc], name: Optional[str] = None) → oneflow_api.BlobDesc¶ This operator flips the object bounding box.
The flip code corresponds to the different flip mode:
0 (0x00): Non Flip
1 (0x01): Horizontal Flip
16 (0x10): Vertical Flip
17 (0x11): Both Horizontal and Vertical Flip
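The horizontal case (flip code 1) can be sketched in plain NumPy. The formula below (x → width - 1 - x, with x1/x2 swapped to keep x1 ≤ x2) is inferred from the example output further down, not taken from the operator's source, so treat it as an illustration:

import numpy as np

def hflip_bbox(bbox, width):
    # bbox rows are [x1, y1, x2, y2]; a horizontal flip maps
    # x -> width - 1 - x and swaps x1/x2 so that x1 <= x2 still holds.
    x1, y1, x2, y2 = bbox[:, 0], bbox[:, 1], bbox[:, 2], bbox[:, 3]
    return np.stack([width - 1 - x2, y1, width - 1 - x1, y2], axis=1)

print(hflip_bbox(np.array([[20.0, 40.0, 80.0, 160.0]]), width=480))
# [[399.  40. 459. 160.]]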
- Parameters
bbox (oneflow_api.BlobDesc) – The bounding box.
image_size (oneflow_api.BlobDesc) – The size of input image.
flip_code (Union[int, oneflow_api.BlobDesc]) – The flip code.
name (Optional[str], optional) – The name for the operation. Defaults to None.
- Returns
The result Blob
- Return type
oneflow_api.BlobDesc
For example:
import numpy as np
import oneflow as flow
import oneflow.typing as tp

def _of_object_bbox_flip(bbox_list, image_size, flip_code):
    bbox_shape = _get_bbox_static_shape(bbox_list)
    func_config = flow.FunctionConfig()
    func_config.default_data_type(flow.float)
    func_config.default_logical_view(flow.scope.mirrored_view())

    @flow.global_function(function_config=func_config)
    def object_bbox_flip_job(
        bbox_def: tp.ListListNumpy.Placeholder(
            shape=tuple(bbox_shape), dtype=flow.float
        ),
        image_size_def: tp.ListNumpy.Placeholder(
            shape=image_size.shape, dtype=flow.int32
        ),
    ) -> tp.ListListNumpy:
        bbox_buffer = flow.tensor_list_to_tensor_buffer(bbox_def)
        flip_bbox = flow.object_bbox_flip(bbox_buffer, image_size_def, flip_code)
        return flow.tensor_buffer_to_tensor_list(
            flip_bbox, shape=bbox_shape[1:], dtype=flow.float
        )

    input_bbox_list = [np.expand_dims(bbox, axis=0) for bbox in bbox_list]
    bbox_tensor = object_bbox_flip_job([input_bbox_list], [image_size])
    return bbox_tensor[0]

def _get_bbox_static_shape(bbox_list):
    bbox_shapes = [bbox.shape for bbox in bbox_list]
    bbox_static_shape = np.amax(bbox_shapes, axis=0)
    assert isinstance(
        bbox_static_shape, np.ndarray
    ), "bbox_shapes: {}, bbox_static_shape: {}".format(
        str(bbox_shapes), str(bbox_static_shape)
    )
    bbox_static_shape = bbox_static_shape.tolist()
    bbox_static_shape.insert(0, len(bbox_list))
    return bbox_static_shape

if __name__ == "__main__":
    bbox = np.array([[[20.0, 40.0, 80.0, 160.0],
                      [30.0, 50.0, 70.0, 100.0]]]).astype(np.single)  # [x1, y1, x2, y2]
    image_size = np.array([[480, 620]]).astype(np.int32)
    bbox_flip = _of_object_bbox_flip(bbox, image_size, flip_code=1)  # Horizontal Flip
    print(bbox_flip[0][0])

    # [[399.  40. 459. 160.]
    #  [409.  50. 449. 100.]]
-
oneflow.
object_bbox_scale
(bbox: oneflow_api.BlobDesc, scale: oneflow_api.BlobDesc, name: Optional[str] = None) → oneflow_api.BlobDesc¶ This operator scales the input image and the corresponding bounding box. It returns the scaled bounding box.
- Parameters
bbox (oneflow_api.BlobDesc) – The bounding box.
scale (oneflow_api.BlobDesc) – The scale factor.
name (Optional[str], optional) – The name for the operation. Defaults to None.
- Returns
The result Blob.
- Return type
oneflow_api.BlobDesc
For example:
import numpy as np
import oneflow as flow
import oneflow.typing as tp
import cv2
from typing import Tuple

def _read_images_by_cv(image_files):
    images = [cv2.imread(image_file).astype(np.single) for image_file in image_files]
    return images

def _get_images_static_shape(images):
    image_shapes = [image.shape for image in images]
    image_static_shape = np.amax(image_shapes, axis=0)
    assert isinstance(
        image_static_shape, np.ndarray
    ), "image_shapes: {}, image_static_shape: {}".format(
        str(image_shapes), str(image_static_shape)
    )
    image_static_shape = image_static_shape.tolist()
    image_static_shape.insert(0, len(image_shapes))
    return image_static_shape

def _get_bbox_static_shape(bbox_list):
    bbox_shapes = [bbox.shape for bbox in bbox_list]
    bbox_static_shape = np.amax(bbox_shapes, axis=0)
    assert isinstance(
        bbox_static_shape, np.ndarray
    ), "bbox_shapes: {}, bbox_static_shape: {}".format(
        str(bbox_shapes), str(bbox_static_shape)
    )
    bbox_static_shape = bbox_static_shape.tolist()
    bbox_static_shape.insert(0, len(bbox_list))
    return bbox_static_shape

def _of_target_resize_bbox_scale(images, bbox_list, target_size, max_size):
    image_shape = _get_images_static_shape(images)
    bbox_shape = _get_bbox_static_shape(bbox_list)
    func_config = flow.FunctionConfig()
    func_config.default_data_type(flow.float)
    func_config.default_logical_view(flow.scope.mirrored_view())

    @flow.global_function(function_config=func_config)
    def target_resize_bbox_scale_job(
        image_def: tp.ListListNumpy.Placeholder(
            shape=tuple(image_shape), dtype=flow.float
        ),
        bbox_def: tp.ListListNumpy.Placeholder(
            shape=tuple(bbox_shape), dtype=flow.float
        ),
    ) -> Tuple[tp.ListListNumpy, tp.ListNumpy]:
        images_buffer = flow.tensor_list_to_tensor_buffer(image_def)
        resized_images_buffer, new_size, scale = flow.image_target_resize(
            images_buffer, target_size=target_size, max_size=max_size
        )
        bbox_buffer = flow.tensor_list_to_tensor_buffer(bbox_def)
        scaled_bbox = flow.object_bbox_scale(bbox_buffer, scale)
        scaled_bbox_list = flow.tensor_buffer_to_tensor_list(
            scaled_bbox, shape=bbox_shape[1:], dtype=flow.float
        )
        return scaled_bbox_list, new_size

    input_image_list = [np.expand_dims(image, axis=0) for image in images]
    input_bbox_list = [np.expand_dims(bbox, axis=0) for bbox in bbox_list]
    output_bbox_list, output_image_size = target_resize_bbox_scale_job(
        [input_image_list], [input_bbox_list]
    )
    return output_bbox_list[0], output_image_size[0]

if __name__ == "__main__":
    images = _read_images_by_cv(['./img/1.jpg', './img/2.jpg'])
    bbox = np.array([[[20.0, 40.0, 80.0, 160.0],
                      [30.0, 50.0, 70.0, 100.0]],
                     [[26.0, 40.0, 86.0, 160.0],
                      [36.0, 56.0, 76.0, 106.0]]]).astype(np.single)  # [x1, y1, x2, y2]
    bbox, size = _of_target_resize_bbox_scale(images, bbox, 280, 350)
    print(bbox[0])
    print(bbox[1])

    # [[[ 16.0218    32.09169   64.0872   128.36676 ]
    #   [ 24.032698  40.114613  56.076298  80.229225]]]
    # [[[ 24.186047  37.170418  80.       148.68167 ]
    #   [ 33.488373  52.038586  70.69768   98.5016  ]]]
-
oneflow.
object_segmentation_polygon_flip
(poly: oneflow_api.BlobDesc, image_size: oneflow_api.BlobDesc, flip_code: Union[int, oneflow_api.BlobDesc], name: Optional[str] = None) → oneflow_api.BlobDesc¶ This operator flips the segmentation points in the image.
The flip code corresponds to the different flip mode:
0 (0x00): Non Flip
1 (0x01): Horizontal Flip
16 (0x10): Vertical Flip
17 (0x11): Both Horizontal and Vertical Flip
- Parameters
poly (oneflow_api.BlobDesc) – The poly segmentation points.
image_size (oneflow_api.BlobDesc) – The image size.
flip_code (Union[int, oneflow_api.BlobDesc]) – The flip code.
name (Optional[str], optional) – The name for the operation. Defaults to None.
- Returns
The result Blob
- Return type
oneflow_api.BlobDesc
For example:
import numpy as np
import oneflow as flow
import oneflow.typing as tp
import cv2

def _read_images_by_cv(image_files):
    images = [cv2.imread(image_file).astype(np.single) for image_file in image_files]
    return [np.expand_dims(image, axis=0) for image in images]

def _of_object_segm_poly_flip(poly_list, image_size, flip_code):
    poly_shape = _get_segm_poly_static_shape(poly_list)
    func_config = flow.FunctionConfig()
    func_config.default_data_type(flow.float)
    func_config.default_logical_view(flow.scope.mirrored_view())

    @flow.global_function(function_config=func_config)
    def object_segm_poly_flip_job(
        poly_def: tp.ListListNumpy.Placeholder(
            shape=tuple(poly_shape), dtype=flow.float
        ),
        image_size_def: tp.ListNumpy.Placeholder(
            shape=image_size.shape, dtype=flow.int32
        ),
    ) -> tp.ListListNumpy:
        poly_buffer = flow.tensor_list_to_tensor_buffer(poly_def)
        flip_poly = flow.object_segmentation_polygon_flip(
            poly_buffer, image_size_def, flip_code
        )
        return flow.tensor_buffer_to_tensor_list(
            flip_poly, shape=poly_shape[1:], dtype=flow.float
        )

    input_poly_list = [np.expand_dims(poly, axis=0) for poly in poly_list]
    poly_tensor = object_segm_poly_flip_job([input_poly_list], [image_size])
    return poly_tensor[0]

def _get_segm_poly_static_shape(poly_list):
    poly_shapes = [poly.shape for poly in poly_list]
    poly_static_shape = np.amax(poly_shapes, axis=0)
    assert isinstance(
        poly_static_shape, np.ndarray
    ), "poly_shapes: {}, poly_static_shape: {}".format(
        str(poly_shapes), str(poly_static_shape)
    )
    poly_static_shape = poly_static_shape.tolist()
    poly_static_shape.insert(0, len(poly_list))
    return poly_static_shape

if __name__ == "__main__":
    segm_poly_list = []
    segmentations = [[[20.0, 40.0], [80.0, 160.0], [100.0, 210.0]],   # Image 1 segmentation points
                     [[25.0, 45.0], [85.0, 165.0], [105.0, 215.0]]]   # Image 2 segmentation points
    for segmentation in segmentations:
        polygon = []
        for seg in segmentation:
            polygon.extend(seg)
        poly_array = np.array(polygon, dtype=np.single).reshape(-1, 2)  # Reshape it
        segm_poly_list.append(poly_array)
    image_size = np.array([[480, 620],                    # Image 1 size
                           [640, 640]]).astype(np.int32)  # Image 2 size
    of_segm_poly_list = _of_object_segm_poly_flip(
        segm_poly_list, image_size, flip_code=1
    )  # Horizontal Flip
    print(of_segm_poly_list[0])
    print(of_segm_poly_list[1])

    # of_segm_poly_list[0]
    # [[[460.  40.]
    #   [400. 160.]
    #   [380. 210.]]]
    # of_segm_poly_list[1]
    # [[[615.  45.]
    #   [555. 165.]
    #   [535. 215.]]]
-
oneflow.
object_segmentation_polygon_scale
(poly: oneflow_api.BlobDesc, scale: oneflow_api.BlobDesc, name: Optional[str] = None) → oneflow_api.BlobDesc¶ This operator scales the segmentation points in the images.
- Parameters
poly (oneflow_api.BlobDesc) – The poly segmentation points.
scale (oneflow_api.BlobDesc) – The image scale.
name (Optional[str], optional) – The name for the operation. Defaults to None.
- Returns
The result Blob.
- Return type
oneflow_api.BlobDesc
For example:
import numpy as np
import oneflow as flow
import oneflow.typing as tp
import cv2
from typing import Tuple

def _read_images_by_cv(image_files):
    images = [cv2.imread(image_file).astype(np.single) for image_file in image_files]
    return images

def _get_images_static_shape(images):
    image_shapes = [image.shape for image in images]
    image_static_shape = np.amax(image_shapes, axis=0)
    assert isinstance(
        image_static_shape, np.ndarray
    ), "image_shapes: {}, image_static_shape: {}".format(
        str(image_shapes), str(image_static_shape)
    )
    image_static_shape = image_static_shape.tolist()
    image_static_shape.insert(0, len(image_shapes))
    return image_static_shape

def _get_segm_poly_static_shape(poly_list):
    poly_shapes = [poly.shape for poly in poly_list]
    poly_static_shape = np.amax(poly_shapes, axis=0)
    assert isinstance(
        poly_static_shape, np.ndarray
    ), "poly_shapes: {}, poly_static_shape: {}".format(
        str(poly_shapes), str(poly_static_shape)
    )
    poly_static_shape = poly_static_shape.tolist()
    poly_static_shape.insert(0, len(poly_list))
    return poly_static_shape

def _get_bbox_static_shape(bbox_list):
    bbox_shapes = [bbox.shape for bbox in bbox_list]
    bbox_static_shape = np.amax(bbox_shapes, axis=0)
    assert isinstance(
        bbox_static_shape, np.ndarray
    ), "bbox_shapes: {}, bbox_static_shape: {}".format(
        str(bbox_shapes), str(bbox_static_shape)
    )
    bbox_static_shape = bbox_static_shape.tolist()
    bbox_static_shape.insert(0, len(bbox_list))
    return bbox_static_shape

def _of_object_segm_poly_scale(images, poly_list, target_size, max_size):
    image_shape = _get_images_static_shape(images)
    print(image_shape)
    poly_shape = _get_segm_poly_static_shape(poly_list)
    print("Poly shape is ", poly_shape)
    func_config = flow.FunctionConfig()
    func_config.default_data_type(flow.float)
    func_config.default_logical_view(flow.scope.mirrored_view())

    @flow.global_function(function_config=func_config)
    def object_segm_poly_scale_job(
        image_def: tp.ListListNumpy.Placeholder(
            shape=tuple(image_shape), dtype=flow.float
        ),
        poly_def: tp.ListListNumpy.Placeholder(
            shape=tuple(poly_shape), dtype=flow.float
        ),
    ) -> Tuple[tp.ListListNumpy, tp.ListNumpy]:
        images_buffer = flow.tensor_list_to_tensor_buffer(image_def)
        resized_images_buffer, new_size, scale = flow.image_target_resize(
            images_buffer, target_size=target_size, max_size=max_size
        )
        poly_buffer = flow.tensor_list_to_tensor_buffer(poly_def)
        scaled_poly = flow.object_segmentation_polygon_scale(poly_buffer, scale)
        scaled_poly_list = flow.tensor_buffer_to_tensor_list(
            scaled_poly, shape=poly_shape[1:], dtype=flow.float
        )
        return scaled_poly_list, new_size

    input_image_list = [np.expand_dims(image, axis=0) for image in images]
    input_poly_list = [np.expand_dims(poly, axis=0) for poly in poly_list]
    output_poly_list, output_image_size = object_segm_poly_scale_job(
        [input_image_list], [input_poly_list]
    )
    return output_poly_list[0], output_image_size

if __name__ == "__main__":
    images = _read_images_by_cv(['./img/1.jpg', './img/2.jpg'])
    segm_poly_list = []
    segmentations = [[[20.0, 40.0], [80.0, 160.0], [100.0, 210.0]],   # Image 1 segmentation points
                     [[25.0, 45.0], [85.0, 165.0], [105.0, 215.0]]]   # Image 2 segmentation points
    for segmentation in segmentations:
        polygon = []
        for seg in segmentation:
            polygon.extend(seg)
        poly_array = np.array(polygon, dtype=np.single).reshape(-1, 2)  # Reshape it
        segm_poly_list.append(poly_array)
    bbox, size = _of_object_segm_poly_scale(images, segm_poly_list, 280, 350)
-
oneflow.
object_segmentation_polygon_to_mask
(poly: oneflow_api.BlobDesc, poly_index: oneflow_api.BlobDesc, image_size: oneflow_api.BlobDesc, name: Optional[str] = None) → oneflow_api.BlobDesc¶ This operator converts the poly segment points to the segment mask array.
- Parameters
poly (oneflow_api.BlobDesc) – The poly segment points.
poly_index (oneflow_api.BlobDesc) – The poly segment index.
image_size (oneflow_api.BlobDesc) – The input image size.
name (Optional[str], optional) – The name for the operation. Defaults to None.
- Returns
The result Blob.
- Return type
oneflow_api.BlobDesc
For example:
import numpy as np
import oneflow as flow
import oneflow.typing as tp
import cv2
from typing import Tuple

def _read_images_by_cv(image_files):
    images = [cv2.imread(image_file).astype(np.single) for image_file in image_files]
    return images

def _get_images_static_shape(images):
    image_shapes = [image.shape for image in images]
    image_static_shape = np.amax(image_shapes, axis=0)
    assert isinstance(
        image_static_shape, np.ndarray
    ), "image_shapes: {}, image_static_shape: {}".format(
        str(image_shapes), str(image_static_shape)
    )
    image_static_shape = image_static_shape.tolist()
    image_static_shape.insert(0, len(image_shapes))
    return image_static_shape

def _get_segm_poly_static_shape(poly_list, poly_index_list):
    assert len(poly_list) == len(poly_index_list)
    num_images = len(poly_list)
    max_poly_elems = 0
    for poly, poly_index in zip(poly_list, poly_index_list):
        assert len(poly.shape) == 2
        assert len(poly_index.shape) == 2, str(poly_index.shape)
        assert poly.shape[0] == poly_index.shape[0]
        assert poly.shape[1] == 2
        assert poly_index.shape[1] == 3
        max_poly_elems = max(max_poly_elems, poly.shape[0])
    return [num_images, max_poly_elems, 2], [num_images, max_poly_elems, 3]

def _segm_poly_to_tensor(img_segm_poly_list):
    poly_array_list = []
    poly_index_array_list = []
    for img_idx, segm_poly_list in enumerate(img_segm_poly_list):
        img_poly_elem_list = []
        img_poly_index_list = []
        for obj_idx, poly_list in enumerate(segm_poly_list):
            for poly_idx, poly in enumerate(poly_list):
                img_poly_elem_list.extend(poly)
                for pt_idx, pt in enumerate(poly):
                    if pt_idx % 2 == 0:
                        img_poly_index_list.append([pt_idx / 2, poly_idx, obj_idx])
        img_poly_array = np.array(img_poly_elem_list, dtype=np.single).reshape(-1, 2)
        assert img_poly_array.size > 0, segm_poly_list
        poly_array_list.append(img_poly_array)
        img_poly_index_array = np.array(img_poly_index_list, dtype=np.int32)
        assert img_poly_index_array.size > 0, segm_poly_list
        poly_index_array_list.append(img_poly_index_array)
    return poly_array_list, poly_index_array_list

def _of_poly_to_mask_pipline(
    images, poly_list, poly_index_list, num_segms_list, target_size, max_size
):
    print(len(images))
    print(len(poly_list))
    assert len(images) == len(poly_list)
    assert len(poly_list) == len(poly_index_list)
    image_shape = _get_images_static_shape(images)
    poly_shape, poly_index_shape = _get_segm_poly_static_shape(
        poly_list, poly_index_list
    )
    max_num_segms = max(num_segms_list)
    func_config = flow.FunctionConfig()
    func_config.default_logical_view(flow.scope.mirrored_view())
    func_config.default_data_type(flow.float)

    @flow.global_function(function_config=func_config)
    def poly_to_mask_job(
        image_def: tp.ListListNumpy.Placeholder(
            shape=tuple(image_shape), dtype=flow.float
        ),
        poly_def: tp.ListListNumpy.Placeholder(
            shape=tuple(poly_shape), dtype=flow.float
        ),
        poly_index_def: tp.ListListNumpy.Placeholder(
            shape=tuple(poly_index_shape), dtype=flow.int32
        ),
    ) -> Tuple[tp.ListListNumpy, tp.ListListNumpy]:
        images_buffer = flow.tensor_list_to_tensor_buffer(image_def)
        resized_images_buffer, new_size, scale = flow.image_target_resize(
            images_buffer, target_size=target_size, max_size=max_size
        )
        poly_buffer = flow.tensor_list_to_tensor_buffer(poly_def)
        poly_index_buffer = flow.tensor_list_to_tensor_buffer(poly_index_def)
        scaled_poly_buffer = flow.object_segmentation_polygon_scale(poly_buffer, scale)
        mask_buffer = flow.object_segmentation_polygon_to_mask(
            scaled_poly_buffer, poly_index_buffer, new_size
        )
        mask_list = flow.tensor_buffer_to_tensor_list(
            mask_buffer, shape=(max_num_segms, target_size, max_size), dtype=flow.int8
        )
        scaled_poly_list = flow.tensor_buffer_to_tensor_list(
            scaled_poly_buffer, shape=poly_shape[1:], dtype=flow.float
        )
        return mask_list, scaled_poly_list

    input_image_list = [np.expand_dims(image, axis=0) for image in images]
    input_poly_list = [np.expand_dims(poly, axis=0) for poly in poly_list]
    input_poly_index_list = [
        np.expand_dims(poly_index, axis=0) for poly_index in poly_index_list
    ]
    output_mask_list, output_poly_list = poly_to_mask_job(
        [input_image_list], [input_poly_list], [input_poly_index_list]
    )
    return output_mask_list[0], output_poly_list[0]

if __name__ == "__main__":
    images = _read_images_by_cv(['./img/1.jpg', './img/2.jpg'])
    segm_poly_list = []
    segmentations = [[[20.0, 40.0, 80.0, 160.0, 100.0, 210.0, 120.0, 215.0]],   # Image 1 segmentation points
                     [[24.0, 42.0, 86.0, 168.0, 103.0, 223.0, 125.0, 235.0]]]   # Image 2 segmentation points
    for segmentation in segmentations:
        polygon = []
        for seg in segmentation:
            polygon.extend(seg)
        poly_array = np.array(polygon, dtype=np.single).reshape(-1, 2)  # Reshape it
        segm_poly_list.append([poly_array])
    poly_list, poly_index_list = _segm_poly_to_tensor(segm_poly_list)
    num_segms_list = [len(segm_poly_list) for segm_poly_list in segm_poly_list]
    target_size = 280
    max_size = 350
    of_mask_list, of_scaled_poly_list = _of_poly_to_mask_pipline(
        images, poly_list, poly_index_list, num_segms_list, target_size, max_size
    )
    of_mask_list = [
        mask_array.reshape(-1, mask_array.shape[-2], mask_array.shape[-1])
        for mask_array in of_mask_list
    ]  # reshape it
-
oneflow.
one_hot
(indices: oneflow_api.BlobDesc, depth: int, on_value: Union[int, float] = 1, off_value: Union[int, float] = 0, axis: int = -1, dtype: Optional[oneflow.python.framework.dtype.dtype] = None, name: Optional[str] = None) → oneflow_api.BlobDesc¶ This operator generates a one-hot Blob from the input Blob.
If the input Blob’s rank is N, the corresponding one-hot Blob’s rank is N+1. The new axis is generated on the dimension specified by the parameter axis.
The locations represented by indices take the value on_value, while all other locations take off_value.
- Parameters
indices (oneflow_api.BlobDesc) – The input Blob.
depth (int) – The length of onehot Blob.
on_value (Union[int, float], optional) – The value to fill at the positions given by indices. Defaults to 1.
off_value (Union[int, float], optional) – The value to fill at all other positions. Defaults to 0.
axis (int, optional) – The specified dimension that the new axis is generated on. Defaults to -1.
dtype (Optional[dtype_util.dtype], optional) – The output data type, it can be “oneflow.int32”, “oneflow.int64”, “oneflow.float”, “oneflow.double”. Defaults to None.
name (Optional[str], optional) – The name for the operation. Defaults to None.
Note
The data type of input blob should be int32 or int64
For example:
Example 1:
import oneflow as flow
import oneflow.typing as tp
import numpy as np

@flow.global_function()
def onehot_Job(x: tp.Numpy.Placeholder((4, ), dtype=flow.int32)
) -> tp.Numpy:
    return flow.one_hot(indices=x, depth=5, axis=-1, dtype=flow.int32)

x = np.array([0, 3, 1, 2]).astype(np.int32)
out = onehot_Job(x)

# out [[1 0 0 0 0]
#      [0 0 0 1 0]
#      [0 1 0 0 0]
#      [0 0 1 0 0]]
Example 2:
import oneflow as flow
import oneflow.typing as tp
import numpy as np

@flow.global_function()
def onehot_Job(x: tp.Numpy.Placeholder((4, ), dtype=flow.int32)
) -> tp.Numpy:
    return flow.one_hot(indices=x, depth=5, axis=0, dtype=flow.int32)

x = np.array([0, 3, 1, 2]).astype(np.int32)
out = onehot_Job(x)

# out [[1 0 0 0]
#      [0 0 1 0]
#      [0 0 0 1]
#      [0 1 0 0]
#      [0 0 0 0]]
- Returns
The one-hot Blob.
- Return type
oneflow_api.BlobDesc
-
oneflow.
ones
(shape: Sequence[int], dtype: Optional[oneflow.python.framework.dtype.dtype] = None, name: Optional[str] = None) → oneflow_api.BlobDesc¶ This operator creates a Tensor filled with the scalar value 1.
- Parameters
shape (Sequence[int]) – The shape of the Tensor.
dtype (Optional[dtype_util.dtype], optional) – The data type. Defaults to None.
name (Optional[str], optional) – The name for the operator. Defaults to None.
- Returns
The result Blob filled with value 1
- Return type
oneflow_api.BlobDesc
For example:
import oneflow as flow
import oneflow.typing as tp

@flow.global_function()
def ones_job() -> tp.Numpy:
    return flow.ones(shape=(2, 3), dtype=flow.float32)

out = ones_job()

# output: [[1. 1. 1.]
#          [1. 1. 1.]]
-
oneflow.
ones_initializer
(dtype: oneflow.python.framework.dtype.dtype = <class 'oneflow.python.framework.dtype.float32'>) → oneflow.core.job.initializer_conf_pb2.InitializerConf¶ Initializer that generates blobs initialized to 1.
- Parameters
dtype (dtype_util.dtype, optional) – Default data type. Defaults to oneflow.float32.
- Returns
A constant initializer configuration that fills blobs with 1.
- Return type
initializer_conf_util.InitializerConf
For example:
Example 1:
import oneflow as flow
import oneflow.typing as tp

def watch_handler(y: tp.Numpy):
    print("out", y)

@flow.global_function()
def ones_Job() -> None:
    init = flow.ones_initializer()
    blob = flow.get_variable(
        "blob-weight",
        shape=(3, ),
        initializer=init,
        trainable=True
    )
    flow.watch(blob, watch_handler)

checkpoint = flow.train.CheckPoint()
checkpoint.init()
ones_Job()

# out [1. 1. 1.]
Example 2:
import oneflow as flow
import numpy as np
import oneflow.typing as tp

@flow.global_function()
def conv2d_one_Job(x: tp.Numpy.Placeholder((1, 256, 32, 32))
) -> tp.Numpy:
    initializer = flow.ones_initializer()
    conv2d = flow.layers.conv2d(
        x,
        filters=128,
        kernel_size=3,
        strides=1,
        padding='SAME',
        kernel_initializer=initializer,
        name="Conv2d"
    )
    return conv2d

x = np.random.randn(1, 256, 32, 32).astype(np.float32)
out = conv2d_one_Job(x)

# out.shape (1, 128, 32, 32)
-
oneflow.
ones_like
(like: oneflow_api.BlobDesc, dtype: Optional[oneflow.python.framework.dtype.dtype] = None, name: Optional[str] = None) → oneflow_api.BlobDesc¶ This operator creates a Blob with all elements set to 1 that has the same shape as like.
- Parameters
like (oneflow_api.BlobDesc) – A Blob.
dtype (Optional[dtype_util.dtype], optional) – The data type of Blob. Defaults to None.
name (Optional[str], optional) – The name for the operation. Defaults to None.
- Returns
The result Blob.
- Return type
oneflow_api.BlobDesc
For example:
import oneflow as flow
import numpy as np
import oneflow.typing as tp

@flow.global_function()
def ones_like_Job() -> tp.Numpy:
    constant_blob = flow.constant(value=1.5, shape=(1, 3, 3), dtype=flow.float)
    ones_like_blob = flow.ones_like(like=constant_blob, dtype=flow.float)
    return ones_like_blob

out = ones_like_Job()

# out [[[1. 1. 1.]
#       [1. 1. 1.]
#       [1. 1. 1.]]]
-
oneflow.
pack
(input: oneflow_api.BlobDesc, pack_num: int, name: Optional[str] = None) → oneflow_api.BlobDesc¶
-
oneflow.
pad
(x: oneflow_api.BlobDesc, paddings: Sequence[int], constant_value: Union[int, float] = 0, name: Optional[str] = None) → oneflow_api.BlobDesc¶ This operator pads the input Blob with a constant value specified by the user. The amount of padding is set via the parameter paddings.
- Parameters
x (oneflow_api.BlobDesc) – The input Blob
paddings (Sequence[int]) – A list of integers specifying the padding widths; its length must equal the length of x.shape.
constant_value (Union[int, float], optional) – The constant value to pad. Defaults to 0.
name (Optional[str], optional) – The name for the operation. Defaults to None.
- Raises
ValueError – The parameter paddings must be a tuple or a list.
- Returns
The Blob after padding.
- Return type
oneflow_api.BlobDesc
For example:
import oneflow as flow
import oneflow.typing as tp
import numpy as np


@flow.global_function()
def pad_Job(x: tp.Numpy.Placeholder((3, 3))) -> tp.Numpy:
    return flow.pad(x,
                    paddings=((2, 2), (1, 1)),
                    constant_value=5)


x = np.array([[1, 1, 1],
              [1, 1, 1],
              [1, 1, 1]]).astype(np.float32)
out = pad_Job(x)

# out [[5. 5. 5. 5. 5.]
#      [5. 5. 5. 5. 5.]
#      [5. 1. 1. 1. 5.]
#      [5. 1. 1. 1. 5.]
#      [5. 1. 1. 1. 5.]
#      [5. 5. 5. 5. 5.]
#      [5. 5. 5. 5. 5.]]
-
oneflow.
pad_grad
(x: oneflow_api.BlobDesc, paddings: Sequence[int], constant_value: Union[int, float] = 0, name: Optional[str] = None) → oneflow_api.BlobDesc¶
-
oneflow.
parallel_cast
(input: oneflow_api.BlobDesc, name: Optional[str] = None, distribute: Optional[oneflow_api.distribute.Distribute] = None, gradient_distribute: Optional[oneflow_api.distribute.Distribute] = None) → oneflow_api.BlobDesc¶
-
oneflow.
random_normal_initializer
(mean: float = 0.0, stddev: float = 1.0, seed: Optional[int] = None, dtype: Optional[oneflow.python.framework.dtype.dtype] = None) → oneflow.core.job.initializer_conf_pb2.InitializerConf¶ Initializer that generates blob with a normal distribution.
- Parameters
mean (float, optional) – A python scalar. Mean of the random values to generate. Defaults to 0.0.
stddev (float, optional) – A python scalar. Standard deviation of the random values to generate. Defaults to 1.0.
seed (Optional[int], optional) – Not supported yet. Defaults to None.
dtype (Optional[dtype_util.dtype], optional) – The data type. Defaults to None.
- Returns
Initial configuration
- Return type
initializer_conf_util.InitializerConf
For example:
Example 1:
import oneflow as flow
import oneflow.typing as tp


def watch_handler(y: tp.Numpy):
    print("out", y)


@flow.global_function()
def random_normal_Job() -> None:
    init = flow.random_normal_initializer(mean=1, stddev=1)
    blob = flow.get_variable(
        "blob-weight",
        shape=(3, ),
        initializer=init,
        trainable=True
    )
    flow.watch(blob, watch_handler)


checkpoint = flow.train.CheckPoint()
checkpoint.init()
random_normal_Job()

# out [1.4190257 2.7663114 1.7114428]
Example 2:
import oneflow as flow
import numpy as np
import oneflow.typing as tp


@flow.global_function()
def conv2d_random_normal_Job(x: tp.Numpy.Placeholder((1, 256, 32, 32))) -> tp.Numpy:
    initializer = flow.random_normal_initializer(mean=0, stddev=1)
    conv2d = flow.layers.conv2d(
        x,
        filters=128,
        kernel_size=3,
        strides=1,
        padding='SAME',
        kernel_initializer=initializer,
        name="Conv2d"
    )
    return conv2d


x = np.random.randn(1, 256, 32, 32).astype(np.float32)
out = conv2d_random_normal_Job(x)

# out.shape (1, 128, 32, 32)
-
oneflow.
random_uniform_initializer
(minval: float = 0, maxval: float = 1, dtype: oneflow.python.framework.dtype.dtype = <class 'oneflow.python.framework.dtype.float32'>) → oneflow.core.job.initializer_conf_pb2.InitializerConf¶ Initializer that generates blobs with a uniform distribution.
- Parameters
minval (float, optional) – A python scalar. Lower bound of the range of random values to generate. Defaults to 0.
maxval (float, optional) – A python scalar. Upper bound of the range of random values to generate. Defaults to 1.
dtype (dtype_util.dtype, optional) – Default data type. Defaults to dtype_util.float.
- Raises
NotImplementedError – The data type is not supported.
- Returns
Initial configuration
- Return type
initializer_conf_util.InitializerConf
For example:
Example 1:
import oneflow as flow
import oneflow.typing as tp


def watch_handler(y: tp.Numpy):
    print("out", y)


@flow.global_function()
def random_uniform_Job() -> None:
    init = flow.random_uniform_initializer(minval=0, maxval=0.5)
    blob = flow.get_variable(
        "blob-weight",
        shape=(3, ),
        initializer=init,
        trainable=True
    )
    flow.watch(blob, watch_handler)


checkpoint = flow.train.CheckPoint()
checkpoint.init()
random_uniform_Job()

# out [0.07557311 0.3943565  0.31875622]
Example 2:
import oneflow as flow
import numpy as np
import oneflow.typing as tp


@flow.global_function()
def conv2d_random_uniform_Job(x: tp.Numpy.Placeholder((1, 256, 32, 32))) -> tp.Numpy:
    initializer = flow.random_uniform_initializer(minval=0, maxval=0.5)
    conv2d = flow.layers.conv2d(
        x,
        filters=128,
        kernel_size=3,
        strides=1,
        padding='SAME',
        kernel_initializer=initializer,
        name="Conv2d"
    )
    return conv2d


x = np.random.randn(1, 256, 32, 32).astype(np.float32)
out = conv2d_random_uniform_Job(x)

# out.shape (1, 128, 32, 32)
-
oneflow.
range
(start, limit=None, delta=1, dtype=None, name='range') → oneflow_api.BlobDesc¶ This operator is similar to Python's range; the difference is that oneflow.range generates a Blob.
- Parameters
start (int) – The start of the interval.
limit (int, optional) – The end of the interval. Defaults to None.
delta (int, optional) – The numerical spacing between elements. Defaults to 1.
dtype (oneflow.dtype, optional) – The output’s data type. Currently only oneflow.int64 is supported. Defaults to None.
name (str, optional) – The name for the operation. Defaults to “range”.
- Returns
The result Blob
- Return type
oneflow_api.BlobDesc
For example:
Example 1:
import oneflow as flow
import oneflow.typing as tp


@flow.global_function()
def range_job() -> tp.Numpy:
    with flow.scope.placement("cpu", "0:0"):
        out = flow.range(10, dtype=flow.int64)
    return out


out = range_job()

# out [0 1 2 3 4 5 6 7 8 9]
Example 2:
import oneflow as flow
import oneflow.typing as tp


@flow.global_function()
def range_job() -> tp.Numpy:
    with flow.scope.placement("cpu", "0:0"):
        out = flow.range(1, 10, 3, dtype=flow.int64)
    return out


out = range_job()

# out [1 4 7]
-
oneflow.
reflection_pad2d
(x: oneflow_api.BlobDesc, padding: Union[int, tuple, list], name: Optional[str] = None) → oneflow_api.BlobDesc¶ Pads the input tensor using the reflection of the input boundary.
- Parameters
x (oneflow_api.BlobDesc) – The input blob; only the “NCHW” format is supported.
padding (Union[int, tuple, list]) – The boundary padding size. If it is an int, the same padding is used on all boundaries; if it is a 4-element tuple or list, it gives the padding size for each boundary.
name (Optional[str], optional) – The name for the operation. Defaults to None.
- Returns
The padded Blob.
- Return type
oneflow_api.BlobDesc
For example:
import oneflow as flow
import oneflow.typing as tp
import numpy as np


@flow.global_function()
def pad_Job(x: tp.Numpy.Placeholder((1, 2, 3, 3))) -> tp.Numpy:
    return flow.reflection_pad2d(x, padding=[0, 0, 1, 2])


x = np.arange(18).reshape((1, 2, 3, 3)).astype(np.float32)
out = pad_Job(x)

# out [[[[ 5.  4.  3.  4.  5.  4.  3.]
#        [ 2.  1.  0.  1.  2.  1.  0.]
#        [ 5.  4.  3.  4.  5.  4.  3.]
#        [ 8.  7.  6.  7.  8.  7.  6.]
#        [ 5.  4.  3.  4.  5.  4.  3.]]
#       [[14. 13. 12. 13. 14. 13. 12.]
#        [11. 10.  9. 10. 11. 10.  9.]
#        [14. 13. 12. 13. 14. 13. 12.]
#        [17. 16. 15. 16. 17. 16. 15.]
#        [14. 13. 12. 13. 14. 13. 12.]]]]
-
oneflow.
repeat
(input: oneflow_api.BlobDesc, repeat_num: int, name: Optional[str] = None) → oneflow_api.BlobDesc¶
-
oneflow.
reshape
(x: oneflow_api.BlobDesc, shape: Sequence[int], name: Optional[str] = None) → oneflow_api.BlobDesc¶ This operator reshapes a Blob. If the Blob is dynamic, it will call flow.dynamic_reshape automatically
One dimension in shape may be set to -1; the operator will infer that dimension from the remaining ones.
- Parameters
x – A Blob.
shape – Shape of the output blob.
name – A name for the operation (optional).
- Returns
A Blob, has the same type as x.
For example:
import oneflow as flow
import numpy as np
import oneflow.typing as tp


@flow.global_function()
def reshape_Job(x: tp.Numpy.Placeholder(shape=(4, 4), dtype=flow.float32)) -> tp.Numpy:
    reshape_blob = flow.reshape(x, shape=[2, 2, 2, -1])
    return reshape_blob


x = np.array([[1, 2, 3, 4],
              [5, 6, 7, 8],
              [9, 10, 11, 12],
              [13, 14, 15, 16]]).astype(np.float32)
out = reshape_Job(x)

# out.shape (2, 2, 2, 2)
-
oneflow.
reshape_like
(x: oneflow_api.BlobDesc, like: oneflow_api.BlobDesc, name: Optional[str] = None) → oneflow_api.BlobDesc¶ This operator reshapes the Blob x to the same shape as the Blob like.
- Parameters
x (oneflow_api.BlobDesc) – The input Blob.
like (oneflow_api.BlobDesc) – A Blob.
name (Optional[str], optional) – The name for the operation. Defaults to None.
- Returns
The result Blob
- Return type
oneflow_api.BlobDesc
For example:
import oneflow as flow
import numpy as np
import oneflow.typing as tp


@flow.global_function()
def reshape_like_Job(x: tp.Numpy.Placeholder(shape=(4, 4), dtype=flow.float32)) -> tp.Numpy:
    like_blob = flow.constant(value=1, dtype=flow.int8, shape=(2, 2, 4))
    reshape_like_blob = flow.reshape_like(x, like=like_blob)
    return reshape_like_blob


x = np.array([[1, 2, 3, 4],
              [5, 6, 7, 8],
              [9, 10, 11, 12],
              [13, 14, 15, 16]]).astype(np.float32)
out = reshape_like_Job(x)

# out.shape (2, 2, 4)
-
oneflow.
reverse
(input: oneflow_api.BlobDesc, axis: Union[int, Sequence[int]], name: Optional[str] = None) → oneflow_api.BlobDesc¶ This operator reverses the elements on the assigned axis.
- Parameters
input (oneflow_api.BlobDesc) – The input Blob.
axis (Union[int, Sequence[int]]) – The reverse axis.
name (Optional[str], optional) – The name for the operation. Defaults to None.
- Raises
ValueError – The name must be a string.
ValueError – The axis must be an int or a list/tuple of int.
ValueError – The axis is out of range.
- Returns
The result Blob
- Return type
oneflow_api.BlobDesc
For example:
import oneflow as flow
import numpy as np
import oneflow.typing as tp


@flow.global_function()
def reverse_Job(x: tp.Numpy.Placeholder(shape=(3, 3), dtype=flow.float32)) -> tp.Numpy:
    reverse_blob = flow.reverse(x, axis=0)
    return reverse_blob


x = np.array([[1, 2, 3],
              [4, 5, 6],
              [7, 8, 9]]).astype(np.float32)
out = reverse_Job(x)

# out [[7. 8. 9.]
#      [4. 5. 6.]
#      [1. 2. 3.]]
-
oneflow.
same_padding
(x: oneflow_api.BlobDesc, padding: Sequence[int], data_format: str, kernel_size: Sequence[int], strides: Sequence[int], dilation_rate: Sequence[int], name: Optional[str] = None) → oneflow_api.BlobDesc¶ This operator pads in “SAME” mode. It computes the pad width from kernel_size, strides, and dilation_rate so that the size of the feature map is unchanged after convolution or similar operations.
- Parameters
x (oneflow_api.BlobDesc) – The input blob.
padding (Sequence[int]) – The padding mode. It should be “SAME_UPPER” or “SAME_LOWER”
data_format ([type]) – The data format of input Blob. If the string starts with “NC”, it means the data format is channel first, else the data format is channel last.
kernel_size (Sequence[int]) – The kernel size of operations. Its type should be tuple or list.
strides (Sequence[int]) – The strides of operations. Its type should be tuple or list.
dilation_rate (Sequence[int]) – The dilation rate of operations. Its type should be tuple or list.
name (Optional[str], optional) – The name for the operation. Defaults to None.
- Returns
The Blob after padding.
- Return type
oneflow_api.BlobDesc
For example:
import oneflow as flow
import oneflow.typing as tp
import numpy as np


@flow.global_function()
def same_pad_Job(x: tp.Numpy.Placeholder((1, 1, 3, 3))) -> tp.Numpy:
    return flow.same_padding(x,
                             padding="SAME_UPPER",
                             data_format="NCHW",
                             kernel_size=(3, 3),
                             strides=(1, 1),
                             dilation_rate=(1, 1))


x = np.ones(shape=(1, 1, 3, 3)).astype(np.float32)
out = same_pad_Job(x)

# out [[[[0. 0. 0. 0. 0.]
#        [0. 1. 1. 1. 0.]
#        [0. 1. 1. 1. 0.]
#        [0. 1. 1. 1. 0.]
#        [0. 0. 0. 0. 0.]]]]
-
oneflow.
scatter_nd
(indices: oneflow_api.BlobDesc, updates: oneflow_api.BlobDesc, shape: Sequence[int], name: Optional[str] = None)¶ This operator inserts the elements of updates according to the indices and creates a new Blob.
- Parameters
indices (oneflow_api.BlobDesc) – The indices of updates. Its type should be flow.int32.
updates (oneflow_api.BlobDesc) – The update Blob.
shape (Sequence[int]) – The constant tensor shape, the constant tensor elements are all zero.
name (Optional[str], optional) – The name for the operation. Defaults to None.
- Returns
The result Blob.
- Return type
oneflow_api.BlobDesc
For example:
Example 1:
import oneflow as flow
import numpy as np
import oneflow.typing as tp


@flow.global_function()
def scatter_nd_Job(indice: tp.Numpy.Placeholder(shape=(3, 1), dtype=flow.int32),
                   update: tp.Numpy.Placeholder(shape=(3, ), dtype=flow.float32),
                   ) -> tp.Numpy:
    scatter_blob = flow.scatter_nd(indices=indice,
                                   updates=update,
                                   shape=[8])
    return scatter_blob


indice_array = np.array([[1], [6], [4]]).astype(np.int32)
update_array = np.array([10.2, 5.1, 12.7]).astype(np.float32)
out = scatter_nd_Job(indice_array, update_array)

# out [ 0.  10.2  0.   0.  12.7  0.   5.1  0. ]
Example 2:
import oneflow as flow
import numpy as np
import oneflow.typing as tp


@flow.global_function()
def scatter_nd_Job(indice: tp.Numpy.Placeholder(shape=(3, 1), dtype=flow.int32),
                   update: tp.Numpy.Placeholder(shape=(3, 3), dtype=flow.float32),
                   ) -> tp.Numpy:
    scatter_blob = flow.scatter_nd(indices=indice,
                                   updates=update,
                                   shape=[5, 3])
    return scatter_blob


indice_array = np.array([[0], [4], [2]]).astype(np.int32)
update_array = np.array([[1, 1, 1],
                         [2, 2, 2],
                         [3, 3, 3]]).astype(np.float32)
out = scatter_nd_Job(indice_array, update_array)

# out [[1. 1. 1.]
#      [0. 0. 0.]
#      [3. 3. 3.]
#      [0. 0. 0.]
#      [2. 2. 2.]]
-
oneflow.
slice
(x: oneflow_api.BlobDesc, begin: Sequence[int], size: Sequence[int], name: Optional[str] = None) → oneflow_api.BlobDesc¶ Extracts a slice from a tensor.
- Parameters
x – A Blob.
begin – A list or tuple indicating the slice begin for each dimension; its length must equal the number of dimensions of x, and its first element must be set to None. (The internal op of OneFlow does not support slicing along dimension 0 at present.)
size – A list or tuple indicating the slice size for each dimension; its length must equal the number of dimensions of x, and its first element must be set to None.
name – A name for the operation (optional).
- Returns
The result Blob.
- Return type
oneflow_api.BlobDesc
For example:
import oneflow as flow
import numpy as np
import oneflow.typing as tp


@flow.global_function()
def slice_Job(x: tp.Numpy.Placeholder(shape=(3, 3), dtype=flow.float32)) -> tp.Numpy:
    slice_blob = flow.slice(x, begin=[None, 0], size=[None, 2])
    return slice_blob


x = np.array([[1, 2, 3],
              [4, 5, 6],
              [7, 8, 9]]).astype(np.float32)
out = slice_Job(x)

# out [[1. 2.]
#      [4. 5.]
#      [7. 8.]]
-
oneflow.
slice_update
(x: oneflow_api.BlobDesc, update: oneflow_api.BlobDesc, slice_tup_list: Sequence[Tuple[int, int, int]], name: Optional[str] = None) → oneflow_api.BlobDesc¶ Update a slice of tensor x.
- Parameters
x – A Blob, whose slice will be updated.
update – A Blob, indicate the update content.
slice_tup_list – A list of slice tuple, indicate each dimension slice (start, stop, step).
name – A name for the operation (optional).
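No example is given for slice_update on this page; the following NumPy sketch (an assumption based on the parameter descriptions above, not OneFlow code) illustrates the intended semantics of writing update into the region of x selected by the (start, stop, step) tuples:

```python
import numpy as np

def slice_update_sketch(x, update, slice_tup_list):
    # Build one slice object per dimension from the (start, stop, step)
    # tuples, then write `update` into that region of a copy of `x`.
    out = x.copy()
    slices = tuple(slice(start, stop, step)
                   for (start, stop, step) in slice_tup_list)
    out[slices] = update
    return out

x = np.zeros((3, 4), dtype=np.float32)
update = np.ones((2, 2), dtype=np.float32)
# Update rows 0-1 and columns 1-2; the rest of x stays 0.
out = slice_update_sketch(x, update, [(0, 2, 1), (1, 3, 1)])
```

Note that, consistent with slice and slice_v2, the real operator updates a slice of the Blob rather than extracting one.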
-
oneflow.
slice_v2
(x: oneflow_api.BlobDesc, slice_tup_list: Sequence[Tuple[int, int, int]], name: Optional[str] = None) → oneflow_api.BlobDesc¶ Extracts a slice from a tensor. The slice_tup_list gives the slice indices for each dimension in the format (start, stop, step); the operator slices the Blob accordingly.
- Parameters
x – A Blob.
slice_tup_list – A list of slice tuple, indicate each dimension slice (start, stop, step).
name – A name for the operation (optional).
- Returns
The result Blob.
- Return type
oneflow_api.BlobDesc
Note: Because the internal op of OneFlow does not support slicing along dimension 0 at present, the first element of slice_tup_list should be set to None.
For example:
import oneflow as flow
import numpy as np
import oneflow.typing as tp


@flow.global_function()
def slicev2_Job(x: tp.Numpy.Placeholder(shape=(3, 6, 9), dtype=flow.float32)) -> tp.Numpy:
    slicev2_blob = flow.slice_v2(x,
                                 slice_tup_list=[[None, None, None],
                                                 [0, 5, 2],   # slice in dimension 1, extract [0, 2, 4]
                                                 [0, 6, 3]])  # slice in dimension 2, extract [0, 3]
    return slicev2_blob


x = np.random.randn(3, 6, 9).astype(np.float32)
out = slicev2_Job(x)

# out.shape (3, 3, 2)
-
oneflow.
smooth_l1_loss
(prediction: oneflow_api.BlobDesc, label: oneflow_api.BlobDesc, beta: float = 1.0, name: Optional[str] = None) → oneflow_api.BlobDesc¶ This operator computes the smooth l1 loss.
The equation is:
\[out = \begin{cases}\frac{(\beta x)^2}{2}, & \left|x\right| < \frac{1}{\beta^2}\\\left|x\right| - \frac{0.5}{\beta^2}, & \text{otherwise}\end{cases}\]- Parameters
prediction (oneflow_api.BlobDesc) – The prediction Blob
label (oneflow_api.BlobDesc) – The label Blob
beta (float, optional) – The \(\beta\) in the equation. Defaults to 1.0.
name (Optional[str], optional) – The name for the operation. Defaults to None.
- Returns
The result Blob
- Return type
oneflow_api.BlobDesc
For example:
import oneflow as flow
import numpy as np
import oneflow.typing as tp


@flow.global_function()
def smooth_l1_loss_Job(prediction: tp.Numpy.Placeholder((5, )),
                       label: tp.Numpy.Placeholder((5, ))
                       ) -> tp.Numpy:
    return flow.smooth_l1_loss(prediction=prediction,
                               label=label)


prediction = np.array([0.1, 0.4, 0.3, 0.5, 0.9]).astype(np.float32)
label = np.array([0.3, 0.9, 2.5, 0.4, 0.3]).astype(np.float32)
out = smooth_l1_loss_Job(prediction, label)

# out [0.02 0.12499999 1.7 0.005 0.17999998]
-
oneflow.
sort
(input: oneflow_api.BlobDesc, axis: int = -1, direction: str = 'ASCENDING', name: Optional[str] = None) → oneflow_api.BlobDesc¶ This operator sorts the input Blob along the specified axis.
- Parameters
input (oneflow_api.BlobDesc) – A Blob
axis (int, optional) – dimension to be sorted. Defaults to the last dim (-1)
direction (str, optional) – The direction in which to sort the Blob values: “ASCENDING” sorts in ascending order, otherwise the values are sorted in descending order. Defaults to “ASCENDING”.
name (Optional[str], optional) – The name for the operation. Defaults to None.
- Returns
The sorted Blob
- Return type
oneflow_api.BlobDesc
For example:
import oneflow as flow
import numpy as np
import oneflow.typing as tp


@flow.global_function()
def sort_Job(x: tp.Numpy.Placeholder((5, ))) -> tp.Numpy:
    return flow.sort(input=x)


x = np.array([10, 2, 9, 3, 7]).astype("float32")
out = sort_Job(x)

# out [ 2.  3.  7.  9. 10.]
-
oneflow.
squeeze
(input: oneflow_api.BlobDesc, axis: Optional[Sequence[int]] = None, name: Optional[str] = None) → oneflow_api.BlobDesc¶ This operator removes the dimensions of size 1 specified by axis from the input Blob. If axis is not specified, all dimensions of size 1 are removed.
The number of elements in the returned Blob is the same as in the input Blob.
- Parameters
input (oneflow_api.BlobDesc) – The input Blob.
axis (Optional[Sequence[int]], optional) – The axis. Defaults to None.
name (Optional[str], optional) – The name for the operation. Defaults to None.
- Returns
The result Blob.
- Return type
oneflow_api.BlobDesc
For example:
Example 1:
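The code for Example 1 is missing from this page. A minimal NumPy sketch of what it likely showed (an assumption, not the original OneFlow snippet): when axis is omitted, every dimension of size 1 is removed, mirroring flow.squeeze(x) with axis=None.

```python
import numpy as np

x = np.ones((1, 1, 1, 3), dtype=np.int32)
# With no axis given, all size-1 dimensions are removed.
out = np.squeeze(x)
# out.shape (3,)
```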
Example 2:
import oneflow as flow
import numpy as np
import oneflow.typing as tp


@flow.global_function()
def squeeze_Job(x: tp.Numpy.Placeholder(shape=(1, 1, 1, 3), dtype=flow.int32),
                ) -> tp.Numpy:
    return flow.squeeze(x, axis=[1, 2])


x = np.array([[[[1, 1, 1]]]]).astype(np.int32)
out = squeeze_Job(x)

# out.shape (1, 3)
-
oneflow.
stack
(inputs: Sequence[oneflow_api.BlobDesc], axis: int = 0, name: Optional[str] = None) → oneflow_api.BlobDesc¶ This operator stacks the multiple Blobs on the specified axis.
- Parameters
inputs (Sequence[oneflow_api.BlobDesc]) – A list of input Blob.
axis (int) – The stack axis.
name (Optional[str], optional) – The name for the operation. Defaults to None.
For example:
import oneflow as flow
import oneflow.typing as tp
import numpy as np


@flow.global_function()
def stack_job(x: tp.Numpy.Placeholder(shape=(2, 4, 6)),
              y: tp.Numpy.Placeholder(shape=(2, 4, 6))) -> tp.Numpy:
    out = flow.stack([x, y], axis=2)
    return out


x = np.ones(shape=(2, 4, 6), dtype=np.float32)
y = np.ones(shape=(2, 4, 6), dtype=np.float32)
out = stack_job(x, y)

# output.shape (2, 4, 2, 6)
- Returns
The result Blob.
- Return type
oneflow_api.BlobDesc
-
oneflow.
sync_default_session
() → None¶ Synchronize the default session. Block until every synchronous OneFlow function and its callback finishes running.
-
oneflow.
sync_dynamic_resize
(inputs: oneflow_api.BlobDesc, size: oneflow_api.BlobDesc, name: Optional[str] = None) → oneflow_api.BlobDesc¶ - Parameters
inputs (oneflow_api.BlobDesc) – The input Blob.
size (oneflow_api.BlobDesc) – The size of new Blob.
name (Optional[str], optional) – The name for the operation. Defaults to None.
- Returns
The result Blob. Its type is ListNumpy.
- Return type
oneflow_api.BlobDesc
For example:
import oneflow as flow
import numpy as np
import oneflow.typing as tp


@flow.global_function()
def sync_dynamic_resize_Job(x: tp.Numpy.Placeholder(shape=(4, 3), dtype=flow.float32),
                            size: tp.Numpy.Placeholder(shape=(1, ), dtype=flow.int32),
                            ) -> tp.ListNumpy:
    resize_Blob = flow.sync_dynamic_resize(inputs=x,
                                           size=size)
    return resize_Blob


x = np.array([[1, 2, 3],
              [4, 5, 6],
              [7, 8, 9],
              [10, 11, 12]]).astype(np.float32)
size = np.array([2]).astype(np.int32)
out = sync_dynamic_resize_Job(x, size)

# out [array([[1., 2., 3.],
#             [4., 5., 6.]], dtype=float32)]
-
oneflow.
tensor_buffer_to_tensor
(x: oneflow_api.BlobDesc, dtype: oneflow.python.framework.dtype.dtype, instance_shape: Sequence[int], name: Optional[str] = None) → oneflow_api.BlobDesc¶ This operator converts the Blob’s type from TensorBuffer to Tensor. Some operators output TensorBuffer blobs; this operator converts them back to Tensor.
Refer to Concept Explanation for more about TensorBuffer.
- Parameters
x (oneflow_api.BlobDesc) – Input Blob.
dtype (dtype_util.dtype) – The data dtype.
instance_shape (Sequence[int]) – The shape of each TensorBuffer instance.
name (Optional[str], optional) – The name for the operation. Defaults to None.
- Returns
A Blob.
- Return type
oneflow_api.BlobDesc
For example:
import oneflow as flow
import numpy as np
import oneflow.typing as tp


@flow.global_function()
def tensor_buffer_to_tensor_Job(x: tp.Numpy.Placeholder(shape=(4, 16, 64, 64), dtype=flow.float32),
                                ) -> tp.Numpy:
    x = flow.tensor_to_tensor_buffer(x, instance_dims=2)
    return flow.tensor_buffer_to_tensor(x,
                                        instance_shape=(64, 64),
                                        dtype=flow.float)


x = np.random.randn(4, 16, 64, 64).astype(np.float32)
out = tensor_buffer_to_tensor_Job(x)

# out.shape (4, 16, 64, 64)
-
oneflow.
tensor_buffer_to_tensor_list
(input: oneflow_api.BlobDesc, shape: Sequence[int], dtype: oneflow.python.framework.dtype.dtype, name: Optional[str] = None) → oneflow_api.BlobDesc¶ This operator converts TensorBuffer to TensorList.
Refer to Concept Explanation for more about TensorList.
- Parameters
input (oneflow_api.BlobDesc) – The input Tensor Buffer.
shape (Sequence[int]) – The shape of input Tensor Buffer.
dtype (dtype_util.dtype) – The data type.
name (Optional[str], optional) – The name for the operation. Defaults to None.
- Returns
The result Blob.
- Return type
oneflow_api.BlobDesc
For example:
import oneflow as flow
import numpy as np
import oneflow.typing as tp


@flow.global_function()
def tensorBuffer_to_tensorList_Job(x: tp.Numpy.Placeholder(shape=(4, 16, 64, 64), dtype=flow.float32),
                                   ) -> tp.ListListNumpy:
    x = flow.tensor_to_tensor_buffer(x, instance_dims=3)
    out = flow.tensor_buffer_to_tensor_list(input=x,
                                            shape=(16, 64, 64),
                                            dtype=flow.float32)
    return out


x = np.random.randn(4, 16, 64, 64).astype(np.float32)
out = tensorBuffer_to_tensorList_Job(x)

# out[0][0].shape (1, 16, 64, 64)
-
oneflow.
tensor_list_split
(input_tensor_list: oneflow_api.BlobDesc, name: Optional[str] = None) → Tuple[oneflow_api.BlobDesc]¶ This operator splits the input TensorList.
- Parameters
input_tensor_list (oneflow_api.BlobDesc) – The input TensorList.
name (Optional[str], optional) – The name for the operation. Defaults to None.
- Returns
A Tuple of ListNumpy.
- Return type
Tuple[oneflow_api.BlobDesc]
For example:
import oneflow as flow
import numpy as np
import oneflow.typing as tp
from typing import Tuple

func_config = flow.FunctionConfig()
func_config.default_data_type(flow.float)
func_config.default_logical_view(flow.scope.mirrored_view())


@flow.global_function(function_config=func_config)
def tensorList_split_Job(x: tp.ListListNumpy.Placeholder(shape=(2, 5, 4), dtype=flow.float32),
                         ) -> Tuple[tp.ListNumpy, tp.ListNumpy]:
    return flow.tensor_list_split(x)


x = np.random.rand(1, 3, 2).astype(np.float32)
y = np.random.rand(1, 2, 2).astype(np.float32)
out = tensorList_split_Job([[x, y]])

# out[0][0].shape (3, 2)
# out[1][0].shape (2, 2)
-
oneflow.
tensor_list_to_tensor_buffer
(input: oneflow_api.BlobDesc, name: Optional[str] = None) → oneflow_api.BlobDesc¶ This operator converts TensorList to TensorBuffer.
Refer to Concept Explanation for more about TensorList.
- Parameters
input (oneflow_api.BlobDesc) – The input TensorList.
name (Optional[str], optional) – The name for the operation. Defaults to None.
- Returns
The result Blob.
- Return type
oneflow_api.BlobDesc
For example:
import oneflow as flow
import numpy as np
import oneflow.typing as tp

func_config = flow.FunctionConfig()
func_config.default_data_type(flow.float)
func_config.default_logical_view(flow.scope.mirrored_view())


@flow.global_function(function_config=func_config)
def tensorList_to_tensorBuffer_Job(x: tp.ListListNumpy.Placeholder(shape=(2, 5, 4), dtype=flow.float32),
                                   ) -> tp.ListListNumpy:
    x = flow.tensor_list_to_tensor_buffer(input=x)
    return flow.tensor_buffer_to_tensor_list(x,
                                             shape=(5, 4),
                                             dtype=flow.float32)


x = np.random.rand(1, 3, 2).astype(np.float32)
y = np.random.rand(1, 2, 2).astype(np.float32)
out = tensorList_to_tensorBuffer_Job([[x, y]])

# out[0][0].shape (1, 3, 2)
-
oneflow.
tensor_scatter_nd_add
(params: oneflow_api.BlobDesc, indices: oneflow_api.BlobDesc, updates: oneflow_api.BlobDesc, name: Optional[str] = None) → oneflow_api.BlobDesc¶ This operator adds elements from ‘updates’ to Blob ‘params’ based on the indices.
- Parameters
params (oneflow_api.BlobDesc) – The input Blob.
indices (oneflow_api.BlobDesc) – The indices of updates. Its type should be flow.int32.
updates (oneflow_api.BlobDesc) – The update Blob.
name (Optional[str], optional) – The name for the operation. Defaults to None.
- Returns
The result Blob.
- Return type
oneflow_api.BlobDesc
For example:
import oneflow as flow
import numpy as np
import oneflow.typing as tp


@flow.global_function()
def tensor_scatter_nd_add_Job(x: tp.Numpy.Placeholder(shape=(5, 3), dtype=flow.float32),
                              indice: tp.Numpy.Placeholder(shape=(3, 1), dtype=flow.int32),
                              update: tp.Numpy.Placeholder(shape=(3, 3), dtype=flow.float32),
                              ) -> tp.Numpy:
    scatter_blob = flow.tensor_scatter_nd_add(params=x,
                                              indices=indice,
                                              updates=update)
    return scatter_blob


x = np.array([[1, 2, 3],
              [1, 2, 3],
              [1, 2, 3],
              [1, 2, 3],
              [1, 2, 3]]).astype(np.float32)
indice_array = np.array([[0], [4], [2]]).astype(np.int32)
update_array = np.array([[1, 1, 1],
                         [2, 2, 2],
                         [3, 3, 3]]).astype(np.float32)
out = tensor_scatter_nd_add_Job(x, indice_array, update_array)

# out [[2. 3. 4.]
#      [1. 2. 3.]
#      [4. 5. 6.]
#      [1. 2. 3.]
#      [3. 4. 5.]]
-
oneflow.
tensor_scatter_nd_update
(params: oneflow_api.BlobDesc, indices: oneflow_api.BlobDesc, updates: oneflow_api.BlobDesc, name: Optional[str] = None) → oneflow_api.BlobDesc¶ This operator inserts the elements in updates according to the indices into the Blob params.
- Parameters
params (oneflow_api.BlobDesc) – The input Blob.
indices (oneflow_api.BlobDesc) – The indices of updates. Its type should be flow.int32.
updates (oneflow_api.BlobDesc) – The update Blob.
name (Optional[str], optional) – The name for the operation. Defaults to None.
- Returns
The result Blob.
- Return type
oneflow_api.BlobDesc
For example:
import oneflow as flow
import numpy as np
import oneflow.typing as tp


@flow.global_function()
def tensor_scatter_nd_Job(x: tp.Numpy.Placeholder(shape=(5, 3), dtype=flow.float32),
                          indice: tp.Numpy.Placeholder(shape=(3, 1), dtype=flow.int32),
                          update: tp.Numpy.Placeholder(shape=(3, 3), dtype=flow.float32),
                          ) -> tp.Numpy:
    scatter_blob = flow.tensor_scatter_nd_update(params=x,
                                                 indices=indice,
                                                 updates=update)
    return scatter_blob


x = np.array([[1, 2, 3],
              [1, 2, 3],
              [1, 2, 3],
              [1, 2, 3],
              [1, 2, 3]]).astype(np.float32)
indice_array = np.array([[0], [4], [2]]).astype(np.int32)
update_array = np.array([[1, 1, 1],
                         [2, 2, 2],
                         [3, 3, 3]]).astype(np.float32)
out = tensor_scatter_nd_Job(x, indice_array, update_array)

# out [[1. 1. 1.]
#      [1. 2. 3.]
#      [3. 3. 3.]
#      [1. 2. 3.]
#      [2. 2. 2.]]
-
oneflow.
tensor_to_tensor_buffer
(x: oneflow_api.BlobDesc, instance_dims: int, name: Optional[str] = None) → oneflow_api.BlobDesc¶ This operator converts the Blob’s type from Tensor to TensorBuffer.
Refer to Concept Explanation for more about TensorBuffer.
- Parameters
x (oneflow_api.BlobDesc) – Input Blob.
instance_dims (int) – The dimensions of dynamic tensor instance.
name (Optional[str], optional) – The name for the operation. Defaults to None.
- Returns
The result Blob.
- Return type
oneflow_api.BlobDesc
For example:
import oneflow as flow
import numpy as np
import oneflow.typing as tp


@flow.global_function()
def tensor_buffer_to_tensor_Job(x: tp.Numpy.Placeholder(shape=(4, 16, 64, 64), dtype=flow.float32),
                                ) -> tp.Numpy:
    x = flow.tensor_to_tensor_buffer(x, instance_dims=2)
    return flow.tensor_buffer_to_tensor(x,
                                        instance_shape=(64, 64),
                                        dtype=flow.float)


x = np.random.randn(4, 16, 64, 64).astype(np.float32)
out = tensor_buffer_to_tensor_Job(x)

# out.shape (4, 16, 64, 64)
-
oneflow.
transpose
(a: oneflow_api.BlobDesc, perm: Sequence[int] = None, conjugate: bool = False, batch_axis_non_change: bool = False, name: Optional[str] = None) → oneflow_api.BlobDesc¶ This operator transposes the specified axis of input Blob.
- Parameters
a (oneflow_api.BlobDesc) – The input Blob.
perm (Sequence[int], optional) – The list of dimension permutation. Defaults to None.
conjugate (bool, optional) – Not supported yet. Defaults to False.
batch_axis_non_change (bool, optional) – Whether to keep the batch axis unchanged; a temporary design for solving batch-axis inference errors in some situations. It will be removed after batch_axis has been deprecated. Defaults to False.
name (Optional[str], optional) – The name for the operation. Defaults to None.
- Raises
NotImplementedError – The attribute conjugate is not yet supported.
- Returns
A transposed blob.
- Return type
oneflow_api.BlobDesc
For example:
import oneflow as flow
import numpy as np
import oneflow.typing as tp


@flow.global_function()
def transpose_Job(x: tp.Numpy.Placeholder(shape=(1, 2, 3), dtype=flow.float32)) -> tp.Numpy:
    transpose_blob = flow.transpose(x, perm=[2, 0, 1])
    return transpose_blob


x = np.random.randn(1, 2, 3).astype(np.float32)
out = transpose_Job(x)

# out.shape (3, 1, 2)
-
oneflow.
truncated_normal
(stddev: float = 1.0) → oneflow.core.job.initializer_conf_pb2.InitializerConf¶
-
oneflow.
truncated_normal_initializer
(mean: float = 0.0, stddev: float = 1.0) → oneflow.core.job.initializer_conf_pb2.InitializerConf¶ Initializer that generates a truncated normal distribution.
- Parameters
mean (float, optional) – A scalar (float). Defaults to 0.0.
stddev (float, optional) – A scalar (float). Defaults to 1.0.
- Returns
Initial configuration
- Return type
initializer_conf_util.InitializerConf
For example:
Example 1:
import oneflow as flow
import oneflow.typing as tp


def watch_handler(y: tp.Numpy):
    print("out", y)


@flow.global_function()
def truncated_normal_Job() -> None:
    init = flow.truncated_normal_initializer(mean=1, stddev=1)
    blob = flow.get_variable(
        "blob-weight",
        shape=(3, ),
        initializer=init,
        trainable=True
    )
    flow.watch(blob, watch_handler)


checkpoint = flow.train.CheckPoint()
checkpoint.init()
truncated_normal_Job()

# out [1.8303236  0.09787154 0.83049864]
Example 2:
import oneflow as flow
import numpy as np
import oneflow.typing as tp


@flow.global_function()
def conv2d_truncated_normal_Job(x: tp.Numpy.Placeholder((1, 256, 32, 32))) -> tp.Numpy:
    initializer = flow.truncated_normal_initializer(mean=0, stddev=1)
    conv2d = flow.layers.conv2d(
        x,
        filters=128,
        kernel_size=3,
        strides=1,
        padding='SAME',
        kernel_initializer=initializer,
        name="Conv2d"
    )
    return conv2d


x = np.random.randn(1, 256, 32, 32).astype(np.float32)
out = conv2d_truncated_normal_Job(x)

# out.shape (1, 128, 32, 32)
-
oneflow.
unpack
(input: oneflow_api.BlobDesc, unpack_num: int, name: Optional[str] = None) → oneflow_api.BlobDesc¶
-
oneflow.
unsorted_batch_segment_sum
(data: oneflow_api.BlobDesc, segment_ids: oneflow_api.BlobDesc, num_segments: int, name: Optional[str] = None) → oneflow_api.BlobDesc¶ It is similar to unsorted_segment_sum; the difference is that unsorted_batch_segment_sum adds a batch axis, so the segment sum is computed independently within each batch of data.
For example, the segment id is like:
[[0 0 0 1 2 2 3 3], [0 0 1 1 2 3 3 3]]
- Parameters
data (oneflow_api.BlobDesc) – Input Blob
segment_ids (oneflow_api.BlobDesc) – A Blob with shape (d0, d1), where d0 and d1 are the first and second dimensions of data.
num_segments (int) – num_segments should equal the number of distinct segment IDs.
name (Optional[str], optional) – The name for the operation. Defaults to None.
- Returns
A Blob.
- Return type
oneflow_api.BlobDesc
For example:
import oneflow as flow
import numpy as np
import oneflow.typing as tp

@flow.global_function()
def unsorted_batch_segment_sum_Job(data: tp.Numpy.Placeholder((3, 4)),
                                   segment_ids: tp.Numpy.Placeholder((3, 4), dtype=flow.int32)
) -> tp.Numpy:
    return flow.math.unsorted_batch_segment_sum(data, segment_ids, 2)

input_blob = np.array([[1, 2, 3, 4],
                       [1, 2, 3, 4],
                       [1, 2, 3, 4]]).astype(np.float32)
segment_ids = np.array([[0, 0, 0, 1],
                        [0, 0, 1, 0],
                        [0, 1, 0, 0]]).astype(np.int32)
out = unsorted_batch_segment_sum_Job(input_blob, segment_ids)

# out [[6. 4.]
#      [7. 3.]
#      [8. 2.]]
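The per-batch semantics described above can be mirrored in plain NumPy. The following reference sketch (illustrative only, not the OneFlow implementation) reproduces the documented output:

```python
import numpy as np

def batch_segment_sum(data, segment_ids, num_segments):
    # For each batch row, sum the data values that share a segment id
    # within that row.
    out = np.zeros((data.shape[0], num_segments), dtype=data.dtype)
    for b in range(data.shape[0]):
        for j, seg in enumerate(segment_ids[b]):
            out[b, seg] += data[b, j]
    return out

data = np.array([[1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4]], dtype=np.float32)
ids = np.array([[0, 0, 0, 1], [0, 0, 1, 0], [0, 1, 0, 0]])
print(batch_segment_sum(data, ids, 2))
# [[6. 4.]
#  [7. 3.]
#  [8. 2.]]
```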
-
oneflow.
unsorted_segment_sum
(data: oneflow_api.BlobDesc, segment_ids: oneflow_api.BlobDesc, num_segments: int, axis: int = 0, name: Optional[str] = None) → oneflow_api.BlobDesc¶ Computes the sum along segments of a Blob.
- Parameters
data (oneflow_api.BlobDesc) – Input Blob
segment_ids (oneflow_api.BlobDesc) – A Blob whose size equals the first dimension of data, with consecutive IDs in the range 0 to k (k < d0).
num_segments (int) – num_segments should equal the number of distinct segment IDs.
axis (int, optional) – The axis of data. Defaults to 0.
name (Optional[str], optional) – The name for the operation. Defaults to None.
- Returns
A Blob with the same type of data.
- Return type
oneflow_api.BlobDesc
For example:
# Example 1:
import oneflow as flow
import numpy as np
import oneflow.typing as tp

@flow.global_function()
def unsorted_segment_sumJob(data: tp.Numpy.Placeholder((3, 4)),
                            segment_ids: tp.Numpy.Placeholder((4, ), dtype=flow.int32)
) -> tp.Numpy:
    return flow.math.unsorted_segment_sum(data, segment_ids, num_segments=2, axis=1)

input_blob = np.array([[1, 2, 3, 4],
                       [5, 6, 7, 8],
                       [9, 10, 11, 12]]).astype(np.float32)
segment_ids = np.array([0, 1, 0, 1]).astype(np.int32)
out = unsorted_segment_sumJob(input_blob, segment_ids)

# out [[ 4.  6.]
#      [12. 14.]
#      [20. 22.]]

# Example 2:
import oneflow as flow
import numpy as np
import oneflow.typing as tp

@flow.global_function()
def unsorted_segment_sumJob(data: tp.Numpy.Placeholder((3, 4)),
                            segment_ids: tp.Numpy.Placeholder((3, ), dtype=flow.int32)
) -> tp.Numpy:
    return flow.math.unsorted_segment_sum(data, segment_ids, num_segments=2, axis=0)

input_blob = np.array([[1, 2, 3, 4],
                       [5, 6, 7, 8],
                       [9, 10, 11, 12]]).astype(np.float32)
segment_ids = np.array([0, 1, 0]).astype(np.int32)
out = unsorted_segment_sumJob(input_blob, segment_ids)

# out [[10. 12. 14. 16.]
#      [ 5.  6.  7.  8.]]
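The axis-aware accumulation shown in the examples above can be written as a small NumPy reference implementation (an illustrative sketch, not the OneFlow kernel):

```python
import numpy as np

def segment_sum(data, segment_ids, num_segments, axis=0):
    # Accumulate each slice of `data` along `axis` into the output
    # slot named by its segment id.
    out_shape = list(data.shape)
    out_shape[axis] = num_segments
    out = np.zeros(out_shape, dtype=data.dtype)
    for i, seg in enumerate(segment_ids):
        idx_out = [slice(None)] * data.ndim
        idx_in = [slice(None)] * data.ndim
        idx_out[axis] = seg
        idx_in[axis] = i
        out[tuple(idx_out)] += data[tuple(idx_in)]
    return out

data = np.arange(1, 13, dtype=np.float32).reshape(3, 4)
print(segment_sum(data, np.array([0, 1, 0]), 2, axis=0))
# [[10. 12. 14. 16.]
#  [ 5.  6.  7.  8.]]
```

Summing along axis=1 with segment_ids [0, 1, 0, 1] likewise reproduces Example 1 above.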
-
oneflow.
unsorted_segment_sum_like
(data: oneflow_api.BlobDesc, segment_ids: oneflow_api.BlobDesc, like: oneflow_api.BlobDesc, axis: int = 0, name: Optional[str] = None) → oneflow_api.BlobDesc¶ Computes the sum along segments of a Blob; the output shape is the same as that of the like Blob.
- Parameters
data (oneflow_api.BlobDesc) – Input Blob
segment_ids (oneflow_api.BlobDesc) – A Blob whose size equals the first dimension of data, with consecutive IDs in the range 0 to k (k < d0).
like (oneflow_api.BlobDesc) – The input Blob which specifies shape
axis (int, optional) – The axis of data. Defaults to 0.
name (Optional[str], optional) – The name for the operation. Defaults to None.
- Returns
A Blob.
- Return type
oneflow_api.BlobDesc
For example:
import oneflow as flow
import numpy as np
import oneflow.typing as tp

@flow.global_function()
def unsorted_segment_sum_like_Job(data: tp.Numpy.Placeholder((3, 4)),
                                  segment_ids: tp.Numpy.Placeholder((3, ), dtype=flow.int32),
                                  like: tp.Numpy.Placeholder((2, 4), dtype=flow.float32)
) -> tp.Numpy:
    return flow.math.unsorted_segment_sum_like(data, segment_ids, like, axis=0)

input_blob = np.array([[1, 2, 3, 4],
                       [5, 6, 7, 8],
                       [9, 10, 11, 12]]).astype(np.float32)
segment_ids = np.array([0, 1, 0]).astype(np.int32)
like = np.zeros(shape=(2, 4), dtype=np.float32)
out = unsorted_segment_sum_like_Job(input_blob, segment_ids, like)

# out [[10. 12. 14. 16.]
#      [ 5.  6.  7.  8.]]
-
oneflow.
user_op_builder
(op_name)¶ Build a wrapper of user op.
For example:

def myargmax(input: oneflow_api.BlobDesc) -> oneflow_api.BlobDesc:
    return (
        flow.user_op_builder("myargmax")
        .Op("argmax")
        .Input("in", [input])
        .Output("out")
        .Build()
        .InferAndTryRun()
        .RemoteBlobList()[0]
    )
- Parameters
op_name (str) – name of new user op
- Returns
UserOpConfBuilder object used to build a wrapper of user op.
- Return type
UserOpConfBuilder
-
oneflow.
user_op_module_builder
(op_type_name)¶
-
oneflow.
variance_scaling_initializer
(scale: float = 1.0, mode: str = 'fan_in', distribution: str = 'truncated_normal', data_format: str = '') → oneflow.core.job.initializer_conf_pb2.InitializerConf¶ Initializer that generates a truncated normal, random normal, or random uniform distribution, with a scale adapted to the weight Blob.
When the distribution is “truncated_normal”
The equation is:
\[W\sim N(0, \sqrt{\frac{{scale}}{{n}}})\]If mode is “fan_in”, the “n” is the number of input units in the weight Blob.
If mode is “fan_out”, the “n” is the number of output units in the weight Blob.
If mode is “fan_avg”, the “n” is the average of the numbers of input and output units in the weight Blob.
- Parameters
scale (float, optional) – Scaling factor (positive float). Defaults to 1.0.
mode (str, optional) – One of “fan_in”, “fan_out”, “fan_avg”. Defaults to “fan_in”.
distribution (str, optional) – Random distribution to use. One of “truncated_normal”, “random_normal”, “random_uniform”. Defaults to “truncated_normal”.
data_format (str, optional) – A string, one of “N…C” or “NC…”. Defaults to “”.
- Returns
Initial configuration
- Return type
initializer_conf_util.InitializerConf
For example:
Example 1:
import oneflow as flow
import oneflow.typing as tp

def watch_handler(y: tp.Numpy):
    print("out", y)

@flow.global_function()
def variance_scale_Job() -> None:
    init = flow.variance_scaling_initializer(scale=2.0, mode="fan_avg")
    blob = flow.get_variable(
        "blob-weight",
        shape=(3, 3),
        initializer=init,
        trainable=True
    )
    flow.watch(blob, watch_handler)

checkpoint = flow.train.CheckPoint()
checkpoint.init()
variance_scale_Job()

# out [[-0.13931477  0.12266728 -0.9434968 ]
#      [-0.49665168  0.10231158 -0.19194333]
#      [-0.7902896  -1.7034698  -0.38695997]]
Example 2:
import oneflow as flow
import numpy as np
import oneflow.typing as tp

@flow.global_function()
def conv2d_variance_scaling_Job(x: tp.Numpy.Placeholder((1, 256, 32, 32))
) -> tp.Numpy:
    initializer = flow.variance_scaling_initializer(mode="fan_out")
    conv2d = flow.layers.conv2d(
        x,
        filters=128,
        kernel_size=3,
        strides=1,
        padding='SAME',
        kernel_initializer=initializer,
        name="Conv2d"
    )
    return conv2d

x = np.random.randn(1, 256, 32, 32).astype(np.float32)
out = conv2d_variance_scaling_Job(x)

# out.shape (1, 128, 32, 32)
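The stddev selection described by the equation above (std = sqrt(scale / n), with n chosen by mode) can be sketched in a few lines of Python. This is an illustrative helper, not part of the OneFlow API:

```python
import math

def variance_scaling_std(scale, mode, fan_in, fan_out):
    # n depends on the chosen mode, per the equation above.
    n = {"fan_in": fan_in,
         "fan_out": fan_out,
         "fan_avg": (fan_in + fan_out) / 2}[mode]
    return math.sqrt(scale / n)

# A layer with 100 input units, scale=1.0, mode="fan_in":
print(variance_scaling_std(1.0, "fan_in", 100, 10))  # 0.1
```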
-
oneflow.
watch
(blob_watched: oneflow_api.BlobDesc, handler_or_prompt: Union[Callable, str, None] = None) → None¶ Register a callback for a blob. The callback function will be called after the computation that produces the blob finishes. It can be used to watch the value of a Blob.
- Parameters
blob_watched – a Blob
handler_or_prompt – a callback function that takes the watched Blob as its argument, or a prompt string
For example:
Example 1:
import oneflow as flow
import oneflow.typing as tp

def watch_handler(y: tp.Numpy):
    print("out", y)

@flow.global_function()
def watch_Job() -> None:
    init = flow.constant_initializer(2.5)
    variable = flow.get_variable(
        "variable-weight",
        shape=(5, ),
        initializer=init,
        trainable=True
    )
    flow.watch(variable, watch_handler)

checkpoint = flow.train.CheckPoint()
checkpoint.init()
watch_Job()

# out [2.5 2.5 2.5 2.5 2.5]
Example 2:
import oneflow as flow
import oneflow.typing as tp
import numpy as np

def watch_handler(y: tp.Numpy):
    print("out", y)

@flow.global_function()
def watch_Job(x: tp.Numpy.Placeholder((1, 3, 2, 2))
) -> None:
    initializer = flow.truncated_normal(0.1)
    conv2d = flow.layers.conv2d(
        x,
        filters=3,
        kernel_size=1,
        strides=1,
        padding='SAME',
        kernel_initializer=initializer,
        name="Conv2d"
    )
    flow.watch(conv2d, watch_handler)

checkpoint = flow.train.CheckPoint()
checkpoint.init()
x = np.ones(shape=(1, 3, 2, 2)).astype(np.float32)
watch_Job(x)

# out [[[[ 0.03757111  0.03757111]
#        [ 0.03757111  0.03757111]]
#       [[-0.36131713 -0.36131713]
#        [-0.36131713 -0.36131713]]
#       [[-0.12266113 -0.12266113]
#        [-0.12266113 -0.12266113]]]]
-
oneflow.
watch_diff
(blob_watched: oneflow_api.BlobDesc, handler_or_prompt: Union[Callable, str, None] = None) → None¶ Register a callback for the gradient of a blob. The callback will be called after the computation that produces the gradient blob finishes.
- Parameters
blob_watched – a Blob
handler_or_prompt – a callback function that takes the watched gradient Blob as its argument, or a prompt string
For example:
Example 1:
import oneflow as flow
import oneflow.typing as tp

BATCH_SIZE = 20

def watch_diff_handler(blob: tp.Numpy):
    print("watch_diff_handler:", blob, blob.shape, blob.dtype)

@flow.global_function(type="train")
def train_job(
    images: tp.Numpy.Placeholder((BATCH_SIZE, 1, 28, 28), dtype=flow.float),
    labels: tp.Numpy.Placeholder((BATCH_SIZE,), dtype=flow.int32),
) -> tp.Numpy:
    initializer = flow.truncated_normal(0.1)
    with flow.scope.placement("gpu", "0:0"):
        reshape = flow.reshape(images, [images.shape[0], -1])
        hidden = flow.layers.dense(
            reshape,
            512,
            activation=flow.nn.relu,
            kernel_initializer=initializer,
            name="hidden",
        )
        logits = flow.layers.dense(
            hidden, 10, kernel_initializer=initializer, name="output"
        )
        loss = flow.nn.sparse_softmax_cross_entropy_with_logits(labels, logits, name="softmax_loss")
    lr_scheduler = flow.optimizer.PiecewiseConstantScheduler([], [0.1])
    flow.optimizer.SGD(lr_scheduler, momentum=0).minimize(loss)
    flow.watch_diff(logits, watch_diff_handler)
    return loss

if __name__ == "__main__":
    checkpoint = flow.train.CheckPoint()
    checkpoint.init()
    (train_images, train_labels), (test_images, test_labels) = flow.data.load_mnist(
        BATCH_SIZE
    )
    for i, (images, labels) in enumerate(zip(train_images, train_labels)):
        loss = train_job(images, labels)

# watch_diff_handler: [[-1.88834548e-01  2.71021971e-03  2.28271242e-02  7.17673637e-03
#   4.10183379e-03  8.93106461e-02  2.23669074e-02  3.86103359e-03
#   3.12465224e-02  5.23346756e-03] .....
Example 2:
import oneflow as flow
import oneflow.typing as tp
import numpy as np

BATCH_SIZE = 20

def watch_diff_handler(blob: tp.Numpy):
    print("watch_diff_handler:", blob)

@flow.global_function(type="train")
def watch_matmul_diff_job(
    images: tp.Numpy.Placeholder((3, 3), dtype=flow.float),
) -> None:
    with flow.scope.placement("cpu", "0:0"):
        weight_initializer = flow.constant_initializer(2)
        weight_shape = (3, BATCH_SIZE)
        weight = flow.get_variable(
            "matmultest-weight",
            shape=weight_shape,
            initializer=weight_initializer)
        output = flow.linalg.matmul(images, weight)

    lr_scheduler = flow.optimizer.PiecewiseConstantScheduler([], [0.1])
    flow.optimizer.SGD(lr_scheduler, momentum=0.9).minimize(output)
    flow.watch_diff(weight, watch_diff_handler)

if __name__ == "__main__":
    check_point = flow.train.CheckPoint()
    check_point.init()
    x = np.array([[1, 1, 1],
                  [1, 1, 1],
                  [1, 1, 1]]).astype(np.float32)
    watch_matmul_diff_job(x)

# watch_diff_handler: [[3. 3. 3.]
#                      [3. 3. 3.]
#                      [3. 3. 3.]]
Example 3:
import oneflow as flow
import oneflow.typing as tp
import numpy as np

def watch_diff_handler(blob: tp.Numpy):
    print("watch_diff_handler:", blob, blob.shape, blob.dtype)

@flow.global_function(type="train")
def watch_conv_diff_job(
    images: tp.Numpy.Placeholder((1, 1, 4, 4), dtype=flow.float),
) -> None:
    with flow.scope.placement("gpu", "0:0"):
        weight_shape = (1, 1, 3, 3)
        weight_initializer = flow.truncated_normal(0.1)
        weight = flow.get_variable(
            name="conv-weight",
            shape=weight_shape,
            initializer=weight_initializer
        )
        output = flow.nn.conv2d(images, weight, strides=1, padding="VALID")

    lr_scheduler = flow.optimizer.PiecewiseConstantScheduler([], [0.1])
    flow.optimizer.SGD(lr_scheduler, momentum=0.9).minimize(output)
    flow.watch_diff(weight, watch_diff_handler)

if __name__ == "__main__":
    check_point = flow.train.CheckPoint()
    check_point.init()
    x = np.array([[[[ 1.,  2.,  3.,  4.],
                    [ 5.,  6.,  7.,  8.],
                    [ 9., 10., 11., 12.],
                    [13., 14., 15., 16.]]]]).astype(np.float32)
    watch_conv_diff_job(x)

# watch_diff_handler: [[[[14. 18. 22.]
#                       [30. 34. 38.]
#                       [46. 50. 54.]]]]
-
oneflow.
where
(condition: oneflow_api.BlobDesc, x: Optional[oneflow_api.BlobDesc] = None, y: Optional[oneflow_api.BlobDesc] = None, name: Optional[str] = None) → oneflow_api.BlobDesc¶ This operator returns the elements where condition is larger than 0.
If x and y are None, this operator is equal to oneflow.argwhere.
If both x and y are not None: where an element of condition is larger than 0, the corresponding element of x is taken; otherwise the element of y is taken.
- Parameters
condition (oneflow_api.BlobDesc) – The input Blob.
x (Optional[oneflow_api.BlobDesc], optional) – A Blob. Defaults to None.
y (Optional[oneflow_api.BlobDesc], optional) – A Blob. Defaults to None.
name (Optional[str], optional) – The name for the operation. Defaults to None.
- Raises
ValueError – Raised when exactly one of x and y is None; both must be given, or both omitted.
- Returns
The result Blob. Its type is ListNumpy.
- Return type
oneflow_api.BlobDesc
For example:
Example 1:
import oneflow as flow
import numpy as np
import oneflow.typing as tp

@flow.global_function()
def where_Job(condition: tp.Numpy.Placeholder(shape=(5, ), dtype=flow.int32),
              x: tp.Numpy.Placeholder(shape=(5, ), dtype=flow.float32),
              y: tp.Numpy.Placeholder(shape=(5, ), dtype=flow.float32),
) -> tp.ListNumpy:
    return flow.where(condition=condition, x=x, y=y)

condition = np.array([3, 0, 1, 0, 1]).astype(np.int32)
x = np.array([10, 20, 30, 40, 50]).astype(np.float32)
y = np.array([100, 200, 300, 400, 500]).astype(np.float32)
out = where_Job(condition, x, y)

# out [array([ 10., 200.,  30., 400.,  50.], dtype=float32)]
Example 2:
import oneflow as flow
import numpy as np
import oneflow.typing as tp

@flow.global_function()
def where_Job(condition: tp.Numpy.Placeholder(shape=(5, ), dtype=flow.int32),
) -> tp.ListNumpy:
    return flow.where(condition=condition)

condition = np.array([3, 0, 1, 0, 1]).astype(np.int32)
out = where_Job(condition)

# out [array([[0],
#             [2],
#             [4]], dtype=int32)]
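For intuition, the two modes shown above correspond to np.where and np.argwhere in NumPy (an illustrative comparison, not OneFlow code):

```python
import numpy as np

# With x and y given, flow.where behaves like np.where on a
# positive-condition mask; with only condition, like np.argwhere.
condition = np.array([3, 0, 1, 0, 1])
x = np.array([10, 20, 30, 40, 50], dtype=np.float32)
y = np.array([100, 200, 300, 400, 500], dtype=np.float32)

print(np.where(condition > 0, x, y))  # [ 10. 200.  30. 400.  50.]
print(np.argwhere(condition > 0))     # [[0] [2] [4]]
```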
-
oneflow.
xavier_normal_initializer
(data_format: str = '') → oneflow.core.job.initializer_conf_pb2.InitializerConf¶ Initializer that generates a Xavier normal distribution.
It can also be called as oneflow.glorot_normal_initializer.
The equation is:
\[W\sim N(0, \sqrt{\frac{{2}}{{n_j+n_{j+1}}}})\]\(N\) means normal distribution
\(n_j\) means the number of units in the \(j\)th layer
- Parameters
data_format (str, optional) – The data format. Defaults to “”.
- Returns
Initial configuration
- Return type
initializer_conf_util.InitializerConf
For example:
Example 1:
import oneflow as flow
import oneflow.typing as tp

def watch_handler(y: tp.Numpy):
    print("out", y)

@flow.global_function()
def xavier_normal_Job() -> None:
    init = flow.xavier_normal_initializer()
    blob = flow.get_variable(
        "blob-weight",
        shape=(3, 3),
        initializer=init,
        trainable=True
    )
    flow.watch(blob, watch_handler)

checkpoint = flow.train.CheckPoint()
checkpoint.init()
xavier_normal_Job()

# out [[ 0.5908121  -0.10804518 -0.6148571 ]
#      [ 1.4007381  -0.08172473  0.36579943]
#      [-0.6461796  -0.15923311  0.33653972]]
Example 2:
import oneflow as flow
import numpy as np
import oneflow.typing as tp

@flow.global_function()
def conv2d_xavier_normal_Job(x: tp.Numpy.Placeholder((1, 256, 32, 32))
) -> tp.Numpy:
    initializer = flow.xavier_normal_initializer()
    conv2d = flow.layers.conv2d(
        x,
        filters=128,
        kernel_size=3,
        strides=1,
        padding='SAME',
        kernel_initializer=initializer,
        name="Conv2d"
    )
    return conv2d

x = np.random.randn(1, 256, 32, 32).astype(np.float32)
out = conv2d_xavier_normal_Job(x)

# out.shape (1, 128, 32, 32)
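The standard deviation implied by the Xavier normal equation above can be computed directly. This is an illustrative helper for the formula, not part of the OneFlow API:

```python
import math

def xavier_normal_std(fan_in, fan_out):
    # std of the Xavier normal distribution: sqrt(2 / (n_j + n_{j+1}))
    return math.sqrt(2.0 / (fan_in + fan_out))

# A square layer with 256 inputs and 256 outputs:
print(xavier_normal_std(256, 256))  # 0.0625
```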
-
oneflow.
xavier_uniform_initializer
(data_format: str = '') → oneflow.core.job.initializer_conf_pb2.InitializerConf¶ Initializer that generates a Xavier uniform distribution.
It can also be called as oneflow.glorot_uniform_initializer.
The equation is:
\[W\sim U(-\sqrt{\frac{{6}}{{n_j+n_{j+1}}}},\sqrt{\frac{{6}}{{n_j+n_{j+1}}}})\]\(U\) means uniform distribution
\(n_j\) means the number of units in the \(j\)th layer
- Parameters
data_format (str, optional) – The data format. Defaults to “”.
- Returns
Initial configuration
- Return type
initializer_conf_util.InitializerConf
For example:
Example 1:
import oneflow as flow
import oneflow.typing as tp

def watch_handler(y: tp.Numpy):
    print("out", y)

@flow.global_function()
def xavier_uniform_Job() -> None:
    init = flow.xavier_uniform_initializer()
    blob = flow.get_variable(
        "blob-weight",
        shape=(3, 3),
        initializer=init,
        trainable=True
    )
    flow.watch(blob, watch_handler)

checkpoint = flow.train.CheckPoint()
checkpoint.init()
xavier_uniform_Job()

# out [[-0.14424723 -0.9532095  -0.08723891]
#      [-0.8011227  -0.29729813 -0.26769108]
#      [ 0.9208976  -0.5971756  -0.15077025]]
Example 2:
import oneflow as flow
import numpy as np
import oneflow.typing as tp

@flow.global_function()
def conv2d_xavier_uniform_Job(x: tp.Numpy.Placeholder((1, 256, 32, 32))
) -> tp.Numpy:
    initializer = flow.xavier_uniform_initializer()
    conv2d = flow.layers.conv2d(
        x,
        filters=128,
        kernel_size=3,
        strides=1,
        padding='SAME',
        kernel_initializer=initializer,
        name="Conv2d"
    )
    return conv2d

x = np.random.randn(1, 256, 32, 32).astype(np.float32)
out = conv2d_xavier_uniform_Job(x)

# out.shape (1, 128, 32, 32)
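The half-width of the Xavier uniform range in the equation above can likewise be computed directly (an illustrative helper for the formula, not part of the OneFlow API):

```python
import math

def xavier_uniform_limit(fan_in, fan_out):
    # half-width of the uniform range: sqrt(6 / (n_j + n_{j+1}))
    return math.sqrt(6.0 / (fan_in + fan_out))

# A layer with 256 inputs and 128 outputs samples from (-0.125, 0.125):
limit = xavier_uniform_limit(256, 128)
print((-limit, limit))  # (-0.125, 0.125)
```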
-
oneflow.
zeros
(shape: Sequence[int], dtype: Optional[oneflow.python.framework.dtype.dtype] = None, name: Optional[str] = None) → oneflow_api.BlobDesc¶ This operator creates a Tensor filled with the scalar value 0.
- Parameters
shape (Sequence[int]) – The shape of the Tensor.
dtype (Optional[dtype_util.dtype], optional) – The data type. Defaults to None.
name (Optional[str], optional) – The name for the operator. Defaults to None.
- Returns
The result Tensor filled with value 0
- Return type
oneflow_api.BlobDesc
For example:
import oneflow as flow
import oneflow.typing as tp

@flow.global_function()
def zeros_job() -> tp.Numpy:
    return flow.zeros(shape=(2, 3), dtype=flow.float32)

out = zeros_job()

# output: [[0. 0. 0.]
#          [0. 0. 0.]]
-
oneflow.
zeros_initializer
(dtype: oneflow.python.framework.dtype.dtype = <class 'oneflow.python.framework.dtype.float32'>) → oneflow.core.job.initializer_conf_pb2.InitializerConf¶ Initializer that generates Blobs initialized to 0.
- Parameters
dtype (dtype_util.dtype, optional) – Default data type. Defaults to dtype_util.float.
- Returns
constant_initializer
- Return type
initializer_conf_util.InitializerConf
For example:
Example 1:
import oneflow as flow
import oneflow.typing as tp

def watch_handler(y: tp.Numpy):
    print("out", y)

@flow.global_function()
def zeros_Job() -> None:
    init = flow.zeros_initializer()
    blob = flow.get_variable(
        "blob-weight",
        shape=(3, ),
        initializer=init,
        trainable=True
    )
    flow.watch(blob, watch_handler)

checkpoint = flow.train.CheckPoint()
checkpoint.init()
zeros_Job()

# out [0. 0. 0.]
Example 2:
import oneflow as flow
import numpy as np
import oneflow.typing as tp

@flow.global_function()
def conv2d_zero_Job(x: tp.Numpy.Placeholder((1, 256, 32, 32))
) -> tp.Numpy:
    initializer = flow.zeros_initializer()
    conv2d = flow.layers.conv2d(
        x,
        filters=128,
        kernel_size=3,
        strides=1,
        padding='SAME',
        kernel_initializer=initializer,
        name="Conv2d"
    )
    return conv2d

x = np.random.randn(1, 256, 32, 32).astype(np.float32)
out = conv2d_zero_Job(x)

# out.shape (1, 128, 32, 32)
-
oneflow.
zeros_like
(like: oneflow_api.BlobDesc, dtype: Optional[oneflow.python.framework.dtype.dtype] = None, name: Optional[str] = None) → oneflow_api.BlobDesc¶ This operator creates a Blob with the same shape as like, with all elements set to 0.
- Parameters
like (oneflow_api.BlobDesc) – A Blob.
dtype (Optional[dtype_util.dtype], optional) – The data type of Blob. Defaults to None.
name (Optional[str], optional) – The name for the operation. Defaults to None.
- Returns
The result Blob.
- Return type
oneflow_api.BlobDesc
For example:
import oneflow as flow
import numpy as np
import oneflow.typing as tp

@flow.global_function()
def zeros_like_Job() -> tp.Numpy:
    constant_blob = flow.constant(value=1.5, shape=(1, 3, 3), dtype=flow.float)
    zeros_like_blob = flow.zeros_like(like=constant_blob, dtype=flow.float)
    return zeros_like_blob

out = zeros_like_Job()

# out [[[0. 0. 0.]
#       [0. 0. 0.]
#       [0. 0. 0.]]]
Types¶
oneflow.double
oneflow.float
oneflow.float32
oneflow.float64
oneflow.int32
oneflow.int64
oneflow.int8