oneflow.utils

Utils

Copyright 2020 The OneFlow Authors. All rights reserved.

Licensed under the Apache License, Version 2.0 (the “License”); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an “AS IS” BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

class oneflow.utils.data.BatchSampler(sampler: oneflow.utils.data.sampler.Sampler[int], batch_size: int, drop_last: bool)

Wraps another sampler to yield a mini-batch of indices.

Parameters
  • sampler (Sampler or Iterable) – Base sampler. Can be any iterable object

  • batch_size (int) – Size of mini-batch.

  • drop_last (bool) – If True, the sampler will drop the last batch if its size would be less than batch_size

Example

>>> list(BatchSampler(SequentialSampler(range(10)), batch_size=3, drop_last=False))
[[0, 1, 2], [3, 4, 5], [6, 7, 8], [9]]
>>> list(BatchSampler(SequentialSampler(range(10)), batch_size=3, drop_last=True))
[[0, 1, 2], [3, 4, 5], [6, 7, 8]]
class oneflow.utils.data.ConcatDataset(datasets: Iterable[oneflow.utils.data.dataset.Dataset])

Dataset as a concatenation of multiple datasets.

This class is useful to assemble different existing datasets.

Parameters

datasets (sequence) – List of datasets to be concatenated

static cumsum(sequence)
cumulative_sizes: List[int]
datasets: List[oneflow.utils.data.dataset.Dataset[T_co]]
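
A minimal sketch of concatenating two small datasets; it assumes flow.arange and TensorDataset behave as documented elsewhere on this page:

>>> import oneflow as flow
>>> from oneflow.utils.data import TensorDataset, ConcatDataset
>>> ds = ConcatDataset([TensorDataset(flow.arange(0, 4)), TensorDataset(flow.arange(4, 6))])
>>> len(ds)  # 4 samples from the first dataset followed by 2 from the second
6
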
class oneflow.utils.data.DataLoader(dataset: oneflow.utils.data.dataset.Dataset[T_co], batch_size: Optional[int] = 1, shuffle: bool = False, sampler: Optional[oneflow.utils.data.sampler.Sampler[int]] = None, batch_sampler: Optional[oneflow.utils.data.sampler.Sampler[Sequence[int]]] = None, num_workers: int = 0, collate_fn: Optional[Callable[[List[T]], Any]] = None, drop_last: bool = False, timeout: float = 0, worker_init_fn: Optional[Callable[[int], None]] = None, multiprocessing_context=None, generator=<oneflow._oneflow_internal.Generator object>, *, prefetch_factor: int = 2, persistent_workers: bool = False)

Data loader. Combines a dataset and a sampler, and provides an iterable over the given dataset.

The DataLoader supports both map-style and iterable-style datasets with single- or multi-process loading, customizing loading order and optional automatic batching (collation) and memory pinning.

See flow.utils.data documentation page for more details.

Parameters
  • dataset (Dataset) – dataset from which to load the data.

  • batch_size (int, optional) – how many samples per batch to load (default: 1).

  • shuffle (bool, optional) – set to True to have the data reshuffled at every epoch (default: False).

  • sampler (Sampler or Iterable, optional) – defines the strategy to draw samples from the dataset. Can be any Iterable with __len__ implemented. If specified, shuffle must not be specified.

  • batch_sampler (Sampler or Iterable, optional) – like sampler, but returns a batch of indices at a time. Mutually exclusive with batch_size, shuffle, sampler, and drop_last.

  • num_workers (int, optional) – how many subprocesses to use for data loading (default: 0). 0 means that the data will be loaded in the main process.

  • collate_fn (callable, optional) – merges a list of samples to form a mini-batch of Tensor(s). Used when using batched loading from a map-style dataset.

  • drop_last (bool, optional) – set to True to drop the last incomplete batch, if the dataset size is not divisible by the batch size. If False and the size of dataset is not divisible by the batch size, then the last batch will be smaller. (default: False)

  • timeout (numeric, optional) – if positive, the timeout value for collecting a batch from workers. Should always be non-negative. (default: 0)

  • worker_init_fn (callable, optional) – If not None, this will be called on each worker subprocess with the worker id (an int in [0, num_workers - 1]) as input, after seeding and before data loading. (default: None)

  • prefetch_factor (int, optional, keyword-only arg) – Number of samples loaded in advance by each worker. 2 means there will be a total of 2 * num_workers samples prefetched across all workers. (default: 2)

  • persistent_workers (bool, optional) – If True, the data loader will not shut down the worker processes after a dataset has been consumed once. This keeps the worker Dataset instances alive. (default: False)

Warning

If the spawn start method is used, worker_init_fn cannot be an unpicklable object, e.g., a lambda function.

Warning

len(dataloader) heuristic is based on the length of the sampler used. When dataset is an IterableDataset, it instead returns an estimate based on len(dataset) / batch_size, with proper rounding depending on drop_last, regardless of multi-process loading configurations. This represents the best guess OneFlow can make because OneFlow trusts user dataset code in correctly handling multi-process loading to avoid duplicate data.

However, if sharding results in multiple workers having incomplete last batches, this estimate can still be inaccurate, because (1) an otherwise complete batch can be broken into multiple ones and (2) more than one batch worth of samples can be dropped when drop_last is set. Unfortunately, OneFlow can not detect such cases in general.

batch_size: Optional[int]
check_worker_number_rationality()
dataset: oneflow.utils.data.dataset.Dataset[T_co]
drop_last: bool
num_workers: int
prefetch_factor: int
sampler: oneflow.utils.data.sampler.Sampler
timeout: float
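
A minimal usage sketch. It assumes flow.randn and flow.randint are available as in the rest of the OneFlow API; the tensor shapes and batch size are purely illustrative:

import oneflow as flow
from oneflow.utils.data import TensorDataset, DataLoader

features = flow.randn(8, 3)          # 8 samples with 3 features each
labels = flow.randint(0, 2, (8,))    # 8 binary labels
loader = DataLoader(TensorDataset(features, labels), batch_size=4, shuffle=True)
for x, y in loader:
    print(x.shape, y.shape)          # each batch holds 4 feature rows and 4 labels
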
class oneflow.utils.data.Dataset(*args, **kwds)

An abstract class representing a Dataset.

All datasets that represent a map from keys to data samples should subclass it. All subclasses should overwrite __getitem__(), supporting fetching a data sample for a given key. Subclasses could also optionally overwrite __len__(), which is expected to return the size of the dataset by many Sampler implementations and the default options of DataLoader.

Note

DataLoader by default constructs an index sampler that yields integral indices. To make it work with a map-style dataset with non-integral indices/keys, a custom sampler must be provided.
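
A minimal sketch of a map-style dataset; the class name and the data it serves are made up for illustration:

import oneflow as flow
from oneflow.utils.data import Dataset

class SquaresDataset(Dataset):
    """Maps index i to the pair (i, i*i) as tensors."""

    def __init__(self, n):
        self.n = n

    def __len__(self):
        return self.n

    def __getitem__(self, idx):
        return flow.tensor(idx), flow.tensor(idx * idx)

ds = SquaresDataset(5)
print(len(ds))   # 5
print(ds[3])     # a (tensor(3), tensor(9)) pair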

class oneflow.utils.data.IterableDataset(*args, **kwds)

An iterable Dataset.

All datasets that represent an iterable of data samples should subclass it. Such form of datasets is particularly useful when data come from a stream.

All subclasses should overwrite __iter__(), which would return an iterator of samples in this dataset.

When a subclass is used with DataLoader, each item in the dataset will be yielded from the DataLoader iterator. When num_workers > 0, each worker process will have a different copy of the dataset object, so it is often desired to configure each copy independently to avoid having duplicate data returned from the workers.

Example 1: splitting workload across all workers in __iter__():

>>> class MyIterableDataset(flow.utils.data.IterableDataset):
...     def __init__(self, start, end):
...         super().__init__()
...         assert end > start, "this example code only works with end > start"
...         self.start = start
...         self.end = end
...
...     def __iter__(self):
...         iter_start = self.start
...         iter_end = self.end
...         return iter(range(iter_start, iter_end))
...
>>> # should give same set of data as range(3, 7), i.e., [3, 4, 5, 6].
>>> ds = MyIterableDataset(start=3, end=7)

>>> # Single-process loading
>>> print(list(flow.utils.data.DataLoader(ds, num_workers=0)))
[3, 4, 5, 6]

Example 2: splitting workload across all workers using worker_init_fn:

>>> class MyIterableDataset(flow.utils.data.IterableDataset):
...     def __init__(self, start, end):
...         super().__init__()
...         assert end > start, "this example code only works with end > start"
...         self.start = start
...         self.end = end
...
...     def __iter__(self):
...         return iter(range(self.start, self.end))
...
>>> # should give same set of data as range(3, 7), i.e., [3, 4, 5, 6].
>>> ds = MyIterableDataset(start=3, end=7)

>>> # Single-process loading
>>> print(list(flow.utils.data.DataLoader(ds, num_workers=0)))
[3, 4, 5, 6]
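
Both runs above use num_workers=0, so no splitting is needed. For multi-process loading, the usual pattern is to shard the range inside __iter__(). The sketch below assumes a get_worker_info() helper mirroring the PyTorch-style data API; that helper is not documented on this page, so treat this as illustrative only:

import math
import oneflow as flow

class ShardedIterableDataset(flow.utils.data.IterableDataset):
    def __init__(self, start, end):
        super().__init__()
        self.start = start
        self.end = end

    def __iter__(self):
        worker_info = flow.utils.data.get_worker_info()  # assumed helper, per the PyTorch-style API
        if worker_info is None:
            # single-process loading: iterate over the full range
            iter_start, iter_end = self.start, self.end
        else:
            # multi-process loading: each worker takes a contiguous slice
            per_worker = int(math.ceil((self.end - self.start) / worker_info.num_workers))
            iter_start = self.start + worker_info.id * per_worker
            iter_end = min(iter_start + per_worker, self.end)
        return iter(range(iter_start, iter_end))
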
functions: Dict[str, Callable] = {}
reduce_ex_hook: Optional[Callable] = None
classmethod register_datapipe_as_function(function_name, cls_to_register)
classmethod register_function(function_name, function)
classmethod set_reduce_ex_hook(hook_fn)
class oneflow.utils.data.RandomSampler(data_source: Sized, replacement: bool = False, num_samples: Optional[int] = None, generator=None)

Samples elements randomly. If without replacement, samples are drawn from a shuffled dataset. If with replacement, the user can specify num_samples to draw.

Parameters
  • data_source (Dataset) – dataset to sample from

  • replacement (bool) – samples are drawn on-demand with replacement if True (default: False)

  • num_samples (int) – number of samples to draw (default: len(dataset)). This argument is supposed to be specified only when replacement is True.

  • generator (Generator) – Generator used in sampling.

data_source: Sized
property num_samples
replacement: bool
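
A quick sketch; the iteration order depends on the generator state, but without replacement every index appears exactly once:

>>> from oneflow.utils.data import RandomSampler
>>> indices = list(RandomSampler(range(5)))
>>> sorted(indices)
[0, 1, 2, 3, 4]
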
class oneflow.utils.data.Sampler(data_source: Optional[Sized])

Base class for all Samplers.

Every Sampler subclass has to provide an __iter__() method, providing a way to iterate over indices of dataset elements, and a __len__() method that returns the length of the returned iterators.

Note

The __len__() method isn’t strictly required by DataLoader, but is expected in any calculation involving the length of a DataLoader.

class oneflow.utils.data.SequentialSampler(data_source)

Samples elements sequentially, always in the same order.

Parameters

data_source (Dataset) – dataset to sample from

data_source: Sized
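
A quick sketch of the resulting iteration order:

>>> from oneflow.utils.data import SequentialSampler
>>> list(SequentialSampler(range(5)))
[0, 1, 2, 3, 4]
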
class oneflow.utils.data.Subset(dataset: oneflow.utils.data.dataset.Dataset[T_co], indices: Sequence[int])

Subset of a dataset at specified indices.

Parameters
  • dataset (Dataset) – The whole Dataset

  • indices (sequence) – Indices in the whole set selected for subset

dataset: oneflow.utils.data.dataset.Dataset[T_co]
indices: Sequence[int]
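
A short sketch; TensorDataset and flow.arange are used only to build a small base dataset:

>>> import oneflow as flow
>>> from oneflow.utils.data import TensorDataset, Subset
>>> base = TensorDataset(flow.arange(10))
>>> sub = Subset(base, indices=[2, 5, 7])
>>> len(sub)
3
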
class oneflow.utils.data.SubsetRandomSampler(indices: Sequence[int], generator=None)

Samples elements randomly from a given list of indices, without replacement.

Parameters
  • indices (sequence) – a sequence of indices

  • generator (Generator) – Generator used in sampling.

indices: Sequence[int]
class oneflow.utils.data.TensorDataset(*tensors: oneflow._oneflow_internal.Tensor)

Dataset wrapping tensors.

Each sample will be retrieved by indexing tensors along the first dimension.

Parameters

*tensors (Tensor) – tensors that have the same size of the first dimension.
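
A short sketch; the tensors only need to agree in their first dimension:

>>> import oneflow as flow
>>> from oneflow.utils.data import TensorDataset
>>> ds = TensorDataset(flow.randn(4, 2), flow.arange(4))
>>> len(ds)
4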

oneflow.utils.data.random_split(dataset: oneflow.utils.data.dataset.Dataset[T], lengths: Sequence[int], generator: Optional[object] = <built-in method default_generator of PyCapsule object>) → List[oneflow.utils.data.dataset.Subset[T]]

Randomly split a dataset into non-overlapping new datasets of given lengths. Optionally fix the generator for reproducible results, e.g.:

>>> random_split(range(10), [3, 7], generator=flow.Generator().manual_seed(42))
Parameters
  • dataset (Dataset) – Dataset to be split

  • lengths (sequence) – lengths of splits to be produced

  • generator (Generator) – Generator used for the random permutation.


class oneflow.utils.data.distributed.DistributedSampler(dataset: oneflow.utils.data.dataset.Dataset, num_replicas: Optional[int] = None, rank: Optional[int] = None, shuffle: bool = True, seed: int = 0, drop_last: bool = False)

Sampler that restricts data loading to a subset of the dataset.

It is especially useful in conjunction with flow.nn.parallel.DistributedDataParallel. In such a case, each process can pass a DistributedSampler instance as a DataLoader sampler, and load a subset of the original dataset that is exclusive to it.

Note

Dataset is assumed to be of constant size.

Parameters
  • dataset – Dataset used for sampling.

  • num_replicas (int, optional) – Number of processes participating in distributed training. By default, world_size is retrieved from the current distributed group.

  • rank (int, optional) – Rank of the current process within num_replicas. By default, rank is retrieved from the current distributed group.

  • shuffle (bool, optional) – If True (default), sampler will shuffle the indices.

  • seed (int, optional) – random seed used to shuffle the sampler if shuffle=True. This number should be identical across all processes in the distributed group. Default: 0.

  • drop_last (bool, optional) – if True, then the sampler will drop the tail of the data to make it evenly divisible across the number of replicas. If False, the sampler will add extra indices to make the data evenly divisible across the replicas. Default: False.

Warning

In distributed mode, calling the set_epoch() method at the beginning of each epoch before creating the DataLoader iterator is necessary to make shuffling work properly across multiple epochs. Otherwise, the same ordering will be always used.

For example:

>>> sampler = DistributedSampler(dataset) if is_distributed else None
>>> loader = DataLoader(dataset, shuffle=(sampler is None), sampler=sampler)
>>> for epoch in range(start_epoch, n_epochs):
...     if is_distributed:
...         sampler.set_epoch(epoch)
...     train(loader)
set_epoch(epoch: int) → None

Sets the epoch for this sampler. When shuffle=True, this ensures all replicas use a different random ordering for each epoch. Otherwise, the next iteration of this sampler will yield the same ordering.

Parameters

epoch (int) – Epoch number.


class oneflow.utils.vision.datasets.CIFAR10(root: str, train: bool = True, transform: Optional[Callable] = None, target_transform: Optional[Callable] = None, download: bool = False, source_url: Optional[str] = None)

CIFAR10 Dataset.

Parameters
  • root (string) – Root directory of dataset where directory cifar-10-batches-py exists or will be saved to if download is set to True.

  • train (bool, optional) – If True, creates dataset from training set, otherwise creates from test set.

  • transform (callable, optional) – A function/transform that takes in a PIL image and returns a transformed version. E.g., transforms.RandomCrop

  • target_transform (callable, optional) – A function/transform that takes in the target and transforms it.

  • download (bool, optional) – If true, downloads the dataset from the internet and puts it in root directory. If dataset is already downloaded, it is not downloaded again.

base_folder = 'cifar-10-batches-py'
download() → None
extra_repr() → str
filename = 'cifar-10-python.tar.gz'
meta = {'filename': 'batches.meta', 'key': 'label_names', 'md5': '5ff9c542aee3614f3951f8cda6e48888'}
test_list = [['test_batch', '40351d587109b95175f43aff81a1287e']]
tgz_md5 = 'c58f30108f718f92721af3b95e74349a'
train_list = [['data_batch_1', 'c99cafc152244af753f735de768cd75f'], ['data_batch_2', 'd4bba439e000b95fd0a9bffe97cbabec'], ['data_batch_3', '54ebc095f3ab1f0389bbae665268c751'], ['data_batch_4', '634d18415352ddfa80567beed471001a'], ['data_batch_5', '482c414d41f54cd18b22e5b47cb7c3cb']]
url = 'https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz'
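
A typical construction, sketched below; downloading requires network access, and the root path and batch size are illustrative:

import oneflow.utils.vision.datasets as dset
import oneflow.utils.vision.transforms as transforms
from oneflow.utils.data import DataLoader

train_set = dset.CIFAR10(root="./data", train=True, download=True,
                         transform=transforms.ToTensor())
loader = DataLoader(train_set, batch_size=64, shuffle=True)
images, labels = next(iter(loader))   # images: [64, 3, 32, 32], labels: [64]
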
class oneflow.utils.vision.datasets.CIFAR100(root: str, train: bool = True, transform: Optional[Callable] = None, target_transform: Optional[Callable] = None, download: bool = False, source_url: Optional[str] = None)

CIFAR100 Dataset.

This is a subclass of the CIFAR10 Dataset.

base_folder = 'cifar-100-python'
data: Any
filename = 'cifar-100-python.tar.gz'
meta = {'filename': 'meta', 'key': 'fine_label_names', 'md5': '7973b15100ade9c7d40fb424638fde48'}
test_list = [['test', 'f0ef6b0ae62326f3e7ffdfab6717acfc']]
tgz_md5 = 'eb9058c3a382ffc7106e4002c42a8d85'
train_list = [['train', '16019d7e3df5f24257cddd939b257f8d']]
url = 'https://www.cs.toronto.edu/~kriz/cifar-100-python.tar.gz'
class oneflow.utils.vision.datasets.CocoCaptions(root: str, annFile: str, transform: Optional[Callable] = None, target_transform: Optional[Callable] = None, transforms: Optional[Callable] = None)

MS Coco Captions Dataset.

Parameters
  • root (string) – Root directory where images are downloaded to.

  • annFile (string) – Path to json annotation file.

  • transform (callable, optional) – A function/transform that takes in a PIL image and returns a transformed version. E.g., transforms.ToTensor

  • target_transform (callable, optional) – A function/transform that takes in the target and transforms it.

  • transforms (callable, optional) – A function/transform that takes input sample and its target as entry and returns a transformed version.

Example

import oneflow.utils.vision.datasets as dset
import oneflow.utils.vision.transforms as transforms
cap = dset.CocoCaptions(root = 'dir where images are',
                        annFile = 'json annotation file',
                        transform=transforms.ToTensor())
print('Number of samples: ', len(cap))
img, target = cap[3] # load 4th sample
print("Image Size: ", img.size())
print(target)

Output:

Number of samples: 82783
Image Size: (3L, 427L, 640L)
[u'A plane emitting smoke stream flying over a mountain.',
u'A plane darts across a bright blue sky behind a mountain covered in snow',
u'A plane leaves a contrail above the snowy mountain top.',
u'A mountain that has a plane flying overheard in the distance.',
u'A mountain view with a plume of smoke in the background']
class oneflow.utils.vision.datasets.CocoDetection(root: str, annFile: str, transform: Optional[Callable] = None, target_transform: Optional[Callable] = None, transforms: Optional[Callable] = None)

MS Coco Detection Dataset.

Parameters
  • root (string) – Root directory where images are downloaded to.

  • annFile (string) – Path to json annotation file.

  • transform (callable, optional) – A function/transform that takes in a PIL image and returns a transformed version. E.g., transforms.ToTensor

  • target_transform (callable, optional) – A function/transform that takes in the target and transforms it.

  • transforms (callable, optional) – A function/transform that takes input sample and its target as entry and returns a transformed version.

class oneflow.utils.vision.datasets.DatasetFolder(root: str, loader: Callable[[str], Any], extensions: Optional[Tuple[str, ...]] = None, transform: Optional[Callable] = None, target_transform: Optional[Callable] = None, is_valid_file: Optional[Callable[[str], bool]] = None)

A generic data loader for samples arranged in class folders (see the directory layout under find_classes() below). This default directory structure can be customized by overriding the find_classes() method.

Parameters
  • root (string) – Root directory path.

  • loader (callable) – A function to load a sample given its path.

  • extensions (tuple[string]) – A list of allowed extensions. Both extensions and is_valid_file should not be passed at the same time.

  • transform (callable, optional) – A function/transform that takes in a sample and returns a transformed version. E.g., transforms.RandomCrop for images.

  • target_transform (callable, optional) – A function/transform that takes in the target and transforms it.

  • is_valid_file – A function that takes the path of a file and checks if the file is a valid file (used to check for corrupt files). Both extensions and is_valid_file should not be passed at the same time.

find_classes(directory: str) → Tuple[List[str], Dict[str, int]]

Find the class folders in a dataset structured as follows:

directory/
├── class_x
│   ├── xxx.ext
│   ├── xxy.ext
│   └── ...
│       └── xxz.ext
└── class_y
    ├── 123.ext
    ├── nsdf3.ext
    └── ...
    └── asd932_.ext

This method can be overridden to only consider a subset of classes, or to adapt to a different dataset directory structure.

Parameters

directory (str) – Root directory path, corresponding to self.root

Raises

FileNotFoundError – If dir has no class folders.

Returns

List of all classes and dictionary mapping each class to an index.

Return type

(Tuple[List[str], Dict[str, int]])

static make_dataset(directory: str, class_to_idx: Dict[str, int], extensions: Optional[Tuple[str, ...]] = None, is_valid_file: Optional[Callable[[str], bool]] = None) → List[Tuple[str, int]]

Generates a list of samples of a form (path_to_sample, class). This can be overridden to e.g. read files from a compressed zip file instead of from the disk.

Parameters
  • directory (str) – root dataset directory, corresponding to self.root.

  • class_to_idx (Dict[str, int]) – Dictionary mapping class name to class index.

  • extensions (optional) – A list of allowed extensions. Either extensions or is_valid_file should be passed. Defaults to None.

  • is_valid_file (optional) – A function that takes the path of a file and checks if the file is a valid file (used to check for corrupt files). Both extensions and is_valid_file should not be passed. Defaults to None.

Raises
  • ValueError – In case class_to_idx is empty.

  • ValueError – In case extensions and is_valid_file are None or both are not None.

  • FileNotFoundError – In case no valid file was found for any class.

Returns

samples of a form (path_to_sample, class)

Return type

List[Tuple[str, int]]
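
A sketch of wiring DatasetFolder to a custom loader. PIL is assumed to be installed, and the directory layout under ./my_images is hypothetical, following the class-per-folder structure shown above:

from PIL import Image
import oneflow.utils.vision.datasets as dset

def pil_loader(path):
    # load an image from disk and force a consistent color mode
    with open(path, "rb") as f:
        return Image.open(f).convert("RGB")

ds = dset.DatasetFolder(root="./my_images",
                        loader=pil_loader,
                        extensions=(".png", ".jpg"))
sample, target = ds[0]   # a PIL image and the index of its class folder
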

class oneflow.utils.vision.datasets.FashionMNIST(root: str, train: bool = True, transform: Optional[Callable] = None, target_transform: Optional[Callable] = None, download: bool = False, source_url: Optional[str] = None)

Fashion-MNIST Dataset.

Parameters
  • root (string) – Root directory of dataset where FashionMNIST/processed/training.pt and FashionMNIST/processed/test.pt exist.

  • train (bool, optional) – If True, creates dataset from training.pt, otherwise from test.pt.

  • download (bool, optional) – If true, downloads the dataset from the internet and puts it in root directory. If dataset is already downloaded, it is not downloaded again.

  • transform (callable, optional) – A function/transform that takes in a PIL image and returns a transformed version. E.g., transforms.RandomCrop

  • target_transform (callable, optional) – A function/transform that takes in the target and transforms it.

classes = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat', 'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']
mirrors = ['http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/']
resources = [('train-images-idx3-ubyte.gz', '8d4fb7e6c68d591d4c3dfef9ec88bf0d'), ('train-labels-idx1-ubyte.gz', '25c81989df183df01b3e8a0aad5dffbe'), ('t10k-images-idx3-ubyte.gz', 'bef4ecab320f06d8554ea6380940ec79'), ('t10k-labels-idx1-ubyte.gz', 'bb300cfdad3c16e7a12a480ee83cd310')]
class oneflow.utils.vision.datasets.ImageFolder(root: str, transform: Optional[Callable] = None, target_transform: Optional[Callable] = None, loader: Callable[[str], Any] = <function default_loader>, is_valid_file: Optional[Callable[[str], bool]] = None)

A generic data loader where the images are arranged in this way by default:

root/dog/xxx.png
root/dog/xxy.png
root/dog/[...]/xxz.png
root/cat/123.png
root/cat/nsdf3.png
root/cat/[...]/asd932_.png

This class inherits from DatasetFolder so the same methods can be overridden to customize the dataset.

Parameters
  • root (string) – Root directory path.

  • transform (callable, optional) – A function/transform that takes in a PIL image and returns a transformed version. E.g., transforms.RandomCrop

  • target_transform (callable, optional) – A function/transform that takes in the target and transforms it.

  • loader (callable, optional) – A function to load an image given its path.

  • is_valid_file – A function that takes the path of an Image file and checks if the file is a valid file (used to check for corrupt files)
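
A short sketch; the root path is hypothetical and must follow the root/<class>/<image> layout shown above:

import oneflow.utils.vision.datasets as dset
import oneflow.utils.vision.transforms as transforms

ds = dset.ImageFolder(root="./my_images",
                      transform=transforms.Compose([
                          transforms.Resize((224, 224)),
                          transforms.ToTensor(),
                      ]))
img, target = ds[0]   # a [3, 224, 224] float tensor and its class index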

class oneflow.utils.vision.datasets.ImageNet(root: str, split: str = 'train', download: Optional[str] = None, **kwargs: Any)

ImageNet 2012 Classification Dataset.

Parameters
  • root (string) – Root directory of the ImageNet Dataset.

  • split (string, optional) – The dataset split; supports train or val.

  • transform (callable, optional) – A function/transform that takes in a PIL image and returns a transformed version. E.g., transforms.RandomCrop

  • target_transform (callable, optional) – A function/transform that takes in the target and transforms it.

  • loader – A function to load an image given its path.

extra_repr() → str
parse_archives() → None
property split_folder
class oneflow.utils.vision.datasets.MNIST(root: str, train: bool = True, transform: Optional[Callable] = None, target_transform: Optional[Callable] = None, download: bool = False, source_url: Optional[str] = None)

MNIST Dataset.

Parameters
  • root (string) – Root directory of dataset where MNIST/processed/training.pt and MNIST/processed/test.pt exist.

  • train (bool, optional) – If True, creates dataset from training.pt, otherwise from test.pt.

  • download (bool, optional) – If true, downloads the dataset from the internet and puts it in root directory. If dataset is already downloaded, it is not downloaded again.

  • transform (callable, optional) – A function/transform that takes in a PIL image and returns a transformed version. E.g., transforms.RandomCrop

  • target_transform (callable, optional) – A function/transform that takes in the target and transforms it.

property class_to_idx
classes = ['0 - zero', '1 - one', '2 - two', '3 - three', '4 - four', '5 - five', '6 - six', '7 - seven', '8 - eight', '9 - nine']
download() → None

Download the MNIST data if it doesn’t exist already.

extra_repr() → str
mirrors = ['http://yann.lecun.com/exdb/mnist/', 'https://ossci-datasets.s3.amazonaws.com/mnist/']
property processed_folder
property raw_folder
resources = [('train-images-idx3-ubyte.gz', 'f68b3c2dcbeaaa9fbdd348bbdeb94873'), ('train-labels-idx1-ubyte.gz', 'd53e105ee54ea40749a09fcbcd1e9432'), ('t10k-images-idx3-ubyte.gz', '9fb629c4189551a2d022fa330f9573f3'), ('t10k-labels-idx1-ubyte.gz', 'ec29112dd5afa0611ce80d1b7f02629c')]
property test_data
test_file = 'test.pt'
property test_labels
property train_data
property train_labels
training_file = 'training.pt'
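
A short sketch; downloading requires network access, and without a transform each sample is assumed to come back as a PIL image and an integer label, as in the torchvision-style API this dataset follows:

import oneflow.utils.vision.datasets as dset

train_set = dset.MNIST(root="./data", train=True, download=True)
print(len(train_set))             # 60000 training samples
img, target = train_set[0]        # a PIL image and its integer label
print(train_set.classes[target])  # the human-readable class name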
class oneflow.utils.vision.datasets.VOCDetection(root: str, year: str = '2012', image_set: str = 'train', download: bool = False, transform: Optional[Callable] = None, target_transform: Optional[Callable] = None, transforms: Optional[Callable] = None)

Pascal VOC Detection Dataset.

Parameters
  • root (string) – Root directory of the VOC Dataset.

  • year (string, optional) – The dataset year, supports years "2007" to "2012".

  • image_set (string, optional) – Select the image_set to use, "train", "trainval" or "val". If year=="2007", can also be "test".

  • download (bool, optional) – If true, downloads the dataset from the internet and puts it in root directory. If dataset is already downloaded, it is not downloaded again.

  • transform (callable, optional) – A function/transform that takes in a PIL image and returns a transformed version. E.g., transforms.RandomCrop

  • target_transform (callable, optional) – A function/transform that takes in the target and transforms it.

  • transforms (callable, optional) – A function/transform that takes input sample and its target as entry and returns a transformed version.

property annotations
parse_voc_xml(node: xml.etree.ElementTree.Element) → Dict[str, Any]
class oneflow.utils.vision.datasets.VOCSegmentation(root: str, year: str = '2012', image_set: str = 'train', download: bool = False, transform: Optional[Callable] = None, target_transform: Optional[Callable] = None, transforms: Optional[Callable] = None)

Pascal VOC Segmentation Dataset.

Parameters
  • root (string) – Root directory of the VOC Dataset.

  • year (string, optional) – The dataset year, supports years "2007" to "2012".

  • image_set (string, optional) – Select the image_set to use, "train", "trainval" or "val". If year=="2007", can also be "test".

  • download (bool, optional) – If true, downloads the dataset from the internet and puts it in root directory. If dataset is already downloaded, it is not downloaded again.

  • transform (callable, optional) – A function/transform that takes in a PIL image and returns a transformed version. E.g., transforms.RandomCrop

  • target_transform (callable, optional) – A function/transform that takes in the target and transforms it.

  • transforms (callable, optional) – A function/transform that takes input sample and its target as entry and returns a transformed version.

property masks


class oneflow.utils.vision.transforms.CenterCrop(size)

Crops the given image at the center. If the image is oneflow Tensor, it is expected to have […, H, W] shape, where … means an arbitrary number of leading dimensions. If image size is smaller than output size along any edge, image is padded with 0 and then center cropped.

Parameters

size (sequence or int) – Desired output size of the crop. If size is an int instead of sequence like (h, w), a square crop (size, size) is made. If provided a sequence of length 1, it will be interpreted as (size[0], size[0]).

class oneflow.utils.vision.transforms.Compose(transforms)

Composes several transforms together. Please see the note below.

Parameters

transforms (list of Transform objects) – list of transforms to compose.

Example

>>> transforms.Compose([
>>>     transforms.CenterCrop(10),
>>>     transforms.ToTensor(),
>>> ])

Note

In order to script the transformations, please use flow.nn.Sequential as below:

>>> transforms = flow.nn.Sequential(
>>>     transforms.CenterCrop(10),
>>>     transforms.Normalize((0.485, 0.456, 0.406), (0.229, 0.224, 0.225)),
>>> )

Make sure to use only scriptable transformations, i.e. ones that work with flow.Tensor and do not require lambda functions or PIL.Image.

class oneflow.utils.vision.transforms.ConvertImageDtype(dtype: oneflow._oneflow_internal.dtype)

Convert a tensor image to the given dtype and scale the values accordingly. This function does not support PIL Image.

Parameters

dtype (flow.dtype) – Desired data type of the output

Note

When converting from a smaller to a larger integer dtype the maximum values are not mapped exactly. If converted back and forth, this mismatch has no effect.

Raises

RuntimeError – When trying to cast flow.float32 to flow.int32 or flow.int64 as well as for trying to cast flow.float64 to flow.int64. These conversions might lead to overflow errors since the floating point dtype cannot store consecutive integers over the whole range of the integer dtype.

class oneflow.utils.vision.transforms.FiveCrop(size)

Crop the given image into four corners and the central crop. If the image is flow Tensor, it is expected to have […, H, W] shape, where … means an arbitrary number of leading dimensions

Note

This transform returns a tuple of images and there may be a mismatch in the number of inputs and targets your Dataset returns. See below for an example of how to deal with this.

Parameters

size (sequence or int) – Desired output size of the crop. If size is an int instead of sequence like (h, w), a square crop of size (size, size) is made. If provided a sequence of length 1, it will be interpreted as (size[0], size[0]).

Example

>>> transform = Compose([
>>>    FiveCrop(size), # this is a list of PIL Images
>>>    Lambda(lambda crops: flow.stack([ToTensor()(crop) for crop in crops])) # returns a 4D tensor
>>> ])
>>> #In your test loop you can do the following:
>>> input, target = batch # input is a 5d tensor, target is 2d
>>> bs, ncrops, c, h, w = input.size()
>>> result = model(input.view(-1, c, h, w)) # fuse batch size and ncrops
>>> result_avg = result.view(bs, ncrops, -1).mean(1) # avg over crops
class oneflow.utils.vision.transforms.InterpolationMode(value)

Interpolation modes

BICUBIC = 'bicubic'
BILINEAR = 'bilinear'
BOX = 'box'
HAMMING = 'hamming'
LANCZOS = 'lanczos'
NEAREST = 'nearest'
class oneflow.utils.vision.transforms.Lambda(lambd)

Apply a user-defined lambda as a transform.

Parameters

lambd (function) – Lambda/function to be used for transform.

class oneflow.utils.vision.transforms.Normalize(mean, std, inplace=False)

Normalize a tensor image with mean and standard deviation. This transform does not support PIL Image. Given mean: (mean[1],...,mean[n]) and std: (std[1],...,std[n]) for n channels, this transform will normalize each channel of the input flow.*Tensor, i.e., output[channel] = (input[channel] - mean[channel]) / std[channel]

Note

This transform acts out of place, i.e., it does not mutate the input tensor.

Parameters
  • mean (sequence) – Sequence of means for each channel.

  • std (sequence) – Sequence of standard deviations for each channel.

  • inplace (bool,optional) – Bool to make this operation in-place.
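
A sketch of the usual ToTensor + Normalize pairing; the mean/std values below are the commonly used ImageNet statistics and are only illustrative:

import oneflow.utils.vision.transforms as transforms

preprocess = transforms.Compose([
    transforms.ToTensor(),                        # PIL image -> float tensor in [0.0, 1.0]
    transforms.Normalize(mean=(0.485, 0.456, 0.406),
                         std=(0.229, 0.224, 0.225)),
])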

class oneflow.utils.vision.transforms.PILToTensor

Convert a PIL Image to a tensor of the same type

Converts a PIL Image (H x W x C) to a Tensor of shape (C x H x W).

class oneflow.utils.vision.transforms.Pad(padding, fill=0, padding_mode='constant')

Pad the given image on all sides with the given “pad” value. If the image is oneflow Tensor, it is expected to have […, H, W] shape, where … means at most 2 leading dimensions for mode reflect and symmetric, at most 3 leading dimensions for mode edge, and an arbitrary number of leading dimensions for mode constant

Parameters
  • padding (int or sequence) – Padding on each border. If a single int is provided this is used to pad all borders. If sequence of length 2 is provided this is the padding on left/right and top/bottom respectively. If a sequence of length 4 is provided this is the padding for the left, top, right and bottom borders respectively.

  • fill (number or str or tuple) – Pixel fill value for constant fill. Default is 0. If a tuple of length 3, it is used to fill R, G, B channels respectively. This value is only used when the padding_mode is constant. Only number is supported for oneflow Tensor. Only int or str or tuple value is supported for PIL Image.

  • padding_mode (str) –

    Type of padding. Should be: constant, edge, reflect or symmetric. Default is constant.

    • constant: pads with a constant value, this value is specified with fill

    • edge: pads with the last value at the edge of the image. If the input is a 5D oneflow Tensor, the last 3 dimensions will be padded instead of the last 2

    • reflect: pads with reflection of image without repeating the last value on the edge. For example, padding [1, 2, 3, 4] with 2 elements on both sides in reflect mode will result in [3, 2, 1, 2, 3, 4, 3, 2]

    • symmetric: pads with reflection of image repeating the last value on the edge. For example, padding [1, 2, 3, 4] with 2 elements on both sides in symmetric mode will result in [2, 1, 1, 2, 3, 4, 4, 3]

class oneflow.utils.vision.transforms.RandomApply(transforms, p=0.5)

Apply randomly a list of transformations with a given probability.

Note

In order to script the transformation, please use flow.nn.ModuleList as input instead of list/tuple of transforms as shown below:

>>> transforms = transforms.RandomApply(flow.nn.ModuleList([
>>>     transforms.ColorJitter(),
>>> ]), p=0.3)

Make sure to use only scriptable transformations, i.e. ones that work with flow.Tensor and do not require lambda functions or PIL.Image.

Parameters
  • transforms (sequence or Module) – list of transformations

  • p (float) – probability

class oneflow.utils.vision.transforms.RandomChoice(transforms)

Apply single transformation randomly picked from a list.

class oneflow.utils.vision.transforms.RandomCrop(size, padding=None, pad_if_needed=False, fill=0, padding_mode='constant')

Crop the given image at a random location. If the image is oneflow Tensor, it is expected to have […, H, W] shape, where … means an arbitrary number of leading dimensions, but if non-constant padding is used, the input is expected to have at most 2 leading dimensions

Parameters
  • size (sequence or int) – Desired output size of the crop. If size is an int instead of sequence like (h, w), a square crop (size, size) is made. If provided a sequence of length 1, it will be interpreted as (size[0], size[0]).

  • padding (int or sequence, optional) – Optional padding on each border of the image. Default is None. If a single int is provided this is used to pad all borders. If sequence of length 2 is provided this is the padding on left/right and top/bottom respectively. If a sequence of length 4 is provided this is the padding for the left, top, right and bottom borders respectively.

  • pad_if_needed (boolean) – It will pad the image if smaller than the desired size to avoid raising an exception. Since cropping is done after padding, the padding seems to be done at a random offset.

  • fill (number or str or tuple) – Pixel fill value for constant fill. Default is 0. If a tuple of length 3, it is used to fill R, G, B channels respectively. This value is only used when the padding_mode is constant. Only number is supported for flow Tensor. Only int or str or tuple value is supported for PIL Image.

  • padding_mode (str) –

    Type of padding. Should be: constant, edge, reflect or symmetric. Default is constant.

    • constant: pads with a constant value, this value is specified with fill

    • edge: pads with the last value at the edge of the image. If the input is a 5D flow Tensor, the last 3 dimensions will be padded instead of the last 2

    • reflect: pads with reflection of image without repeating the last value on the edge. For example, padding [1, 2, 3, 4] with 2 elements on both sides in reflect mode will result in [3, 2, 1, 2, 3, 4, 3, 2]

    • symmetric: pads with reflection of image repeating the last value on the edge. For example, padding [1, 2, 3, 4] with 2 elements on both sides in symmetric mode will result in [2, 1, 1, 2, 3, 4, 4, 3]

static get_params(img: oneflow._oneflow_internal.Tensor, output_size: Tuple[int, int]) → Tuple[int, int, int, int]

Get parameters for crop for a random crop.

Parameters
  • img (PIL Image or Tensor) – Image to be cropped.

  • output_size (tuple) – Expected output size of the crop.

Returns

params (i, j, h, w) to be passed to crop for random crop.

Return type

tuple

class oneflow.utils.vision.transforms.RandomHorizontalFlip(p=0.5)

Horizontally flip the given image randomly with a given probability. If the image is flow Tensor, it is expected to have […, H, W] shape, where … means an arbitrary number of leading dimensions

Parameters

p (float) – probability of the image being flipped. Default value is 0.5

class oneflow.utils.vision.transforms.RandomOrder(transforms)

Apply a list of transformations in a random order.

class oneflow.utils.vision.transforms.RandomResizedCrop(size, scale=(0.08, 1.0), ratio=(0.75, 1.3333333333333333), interpolation=<InterpolationMode.BILINEAR: 'bilinear'>)

Crop a random portion of image and resize it to a given size.

If the image is flow Tensor, it is expected to have […, H, W] shape, where … means an arbitrary number of leading dimensions

A crop of the original image is made: the crop has a random area (H * W) and a random aspect ratio. This crop is finally resized to the given size. This is popularly used to train the Inception networks.

Parameters
  • size (int or sequence) – expected output size of the crop, for each edge. If size is an int instead of sequence like (h, w), a square output size (size, size) is made. If provided a sequence of length 1, it will be interpreted as (size[0], size[0]).

  • scale (tuple of float) – Specifies the lower and upper bounds for the random area of the crop, before resizing. The scale is defined with respect to the area of the original image.

  • ratio (tuple of float) – lower and upper bounds for the random aspect ratio of the crop, before resizing.

  • interpolation (InterpolationMode) – Desired interpolation enum defined by flow.utils.vision.transforms.InterpolationMode. Default is InterpolationMode.BILINEAR. If input is Tensor, only InterpolationMode.NEAREST, InterpolationMode.BILINEAR and InterpolationMode.BICUBIC are supported. For backward compatibility integer values (e.g. PIL.Image.NEAREST) are still acceptable.
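
A sketch of the training-time augmentation pipeline this transform is typically part of; the crop size is illustrative:

import oneflow.utils.vision.transforms as transforms

train_transform = transforms.Compose([
    transforms.RandomResizedCrop(224),     # random crop, then resize to 224 x 224
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
])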

static get_params(img: oneflow._oneflow_internal.Tensor, scale: List[float], ratio: List[float]) → Tuple[int, int, int, int]

Get parameters for crop for a random sized crop.

Parameters
  • img (PIL Image or Tensor) – Input image.

  • scale (list) – lower and upper bounds of the crop area, relative to the area of the original image

  • ratio (list) – lower and upper bounds of the aspect ratio of the crop

Returns

params (i, j, h, w) to be passed to crop for a random sized crop.

Return type

tuple

class oneflow.utils.vision.transforms.RandomSizedCrop(*args, **kwargs)

Note: This transform is deprecated in favor of RandomResizedCrop.

class oneflow.utils.vision.transforms.RandomTransforms(transforms)

Base class for a list of transformations with randomness

Parameters

transforms (sequence) – list of transformations

class oneflow.utils.vision.transforms.RandomVerticalFlip(p=0.5)

Vertically flip the given image randomly with a given probability. If the image is flow Tensor, it is expected to have […, H, W] shape, where … means an arbitrary number of leading dimensions

Parameters

p (float) – probability of the image being flipped. Default value is 0.5

class oneflow.utils.vision.transforms.Resize(size, interpolation=<InterpolationMode.BILINEAR: 'bilinear'>)

Resize the input image to the given size. If the image is oneflow Tensor, it is expected to have […, H, W] shape, where … means an arbitrary number of leading dimensions

Parameters
  • size (sequence or int) – Desired output size. If size is a sequence like (h, w), output size will be matched to this. If size is an int, smaller edge of the image will be matched to this number. I.e., if height > width, then image will be rescaled to (size * height / width, size).

  • interpolation (InterpolationMode) – Desired interpolation enum defined by flow.utils.vision.transforms.InterpolationMode. Default is InterpolationMode.BILINEAR. If input is Tensor, only InterpolationMode.NEAREST, InterpolationMode.BILINEAR and InterpolationMode.BICUBIC are supported. For backward compatibility integer values (e.g. PIL.Image.NEAREST) are still acceptable.

class oneflow.utils.vision.transforms.Scale(*args, **kwargs)

Note: This transform is deprecated in favor of Resize.

class oneflow.utils.vision.transforms.TenCrop(size, vertical_flip=False)

Crop the given image into four corners and the central crop plus the flipped version of these (horizontal flipping is used by default). If the image is flow Tensor, it is expected to have […, H, W] shape, where … means an arbitrary number of leading dimensions

Note

This transform returns a tuple of images and there may be a mismatch in the number of inputs and targets your Dataset returns. See below for an example of how to deal with this.

Parameters
  • size (sequence or int) – Desired output size of the crop. If size is an int instead of sequence like (h, w), a square crop (size, size) is made. If provided a sequence of length 1, it will be interpreted as (size[0], size[0]).

  • vertical_flip (bool) – Use vertical flipping instead of horizontal

Example

>>> transform = Compose([
>>>    TenCrop(size), # this is a list of PIL Images
>>>    Lambda(lambda crops: flow.stack([ToTensor()(crop) for crop in crops])) # returns a 4D tensor
>>> ])
>>> #In your test loop you can do the following:
>>> input, target = batch # input is a 5d tensor, target is 2d
>>> bs, ncrops, c, h, w = input.size()
>>> result = model(input.view(-1, c, h, w)) # fuse batch size and ncrops
>>> result_avg = result.view(bs, ncrops, -1).mean(1) # avg over crops
class oneflow.utils.vision.transforms.ToPILImage(mode=None)

Convert a tensor or an ndarray to PIL Image.

Converts a flow.Tensor of shape C x H x W or a numpy ndarray of shape H x W x C to a PIL Image while preserving the value range.

Parameters

mode (PIL.Image mode) – color space and pixel depth of input data (optional). If mode is None (default), some assumptions are made about the input data:

  • If the input has 4 channels, the mode is assumed to be RGBA.

  • If the input has 3 channels, the mode is assumed to be RGB.

  • If the input has 2 channels, the mode is assumed to be LA.

  • If the input has 1 channel, the mode is determined by the data type (i.e., int, float, short).

class oneflow.utils.vision.transforms.ToTensor

Convert a PIL Image or numpy.ndarray to tensor.

Converts a PIL Image or numpy.ndarray (H x W x C) in the range [0, 255] to a flow.FloatTensor of shape (C x H x W) in the range [0.0, 1.0] if the PIL Image belongs to one of the modes (L, LA, P, I, F, RGB, YCbCr, RGBA, CMYK, 1) or if the numpy.ndarray has dtype = np.uint8. In the other cases, tensors are returned without scaling.

Note

Because the input image is scaled to [0.0, 1.0], this transformation should not be used when transforming target image masks.