oneflow.nn.utils.rnn.PackedSequence

class oneflow.nn.utils.rnn.PackedSequence(data: oneflow.Tensor, batch_sizes: Optional[oneflow.Tensor] = None, sorted_indices: Optional[oneflow.Tensor] = None, unsorted_indices: Optional[oneflow.Tensor] = None)

The interface is consistent with PyTorch. The documentation is referenced from: https://pytorch.org/docs/1.10/generated/torch.nn.utils.rnn.PackedSequence.html.
Holds the data and list of batch_sizes of a packed sequence. All RNN modules accept packed sequences as inputs.

Note

Instances of this class should never be created manually. They are meant to be instantiated by functions like pack_padded_sequence(). Batch sizes represent the number of elements at each sequence step in the batch, not the varying sequence lengths passed to pack_padded_sequence(). For instance, given data abc and x, the PackedSequence would contain data axbc with batch_sizes=[2,1,1].
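The example above can be reproduced with a small pure-Python sketch (a hypothetical helper, not part of oneflow): sequences are laid out sorted by decreasing length, and at each step one element is taken from every sequence that is still active.

```python
def pack(sequences):
    # Sort by decreasing length, as packing functions do internally.
    seqs = sorted(sequences, key=len, reverse=True)
    data, batch_sizes = [], []
    for t in range(len(seqs[0])):
        # Elements contributed by every sequence still active at step t.
        step = [s[t] for s in seqs if len(s) > t]
        data.extend(step)
        batch_sizes.append(len(step))
    return "".join(data), batch_sizes

packed, batch_sizes = pack(["abc", "x"])
# packed == "axbc", batch_sizes == [2, 1, 1]
```

This mirrors how the packed data interleaves the sequences step by step rather than concatenating them whole.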
- batch_sizes - Tensor of integers holding information about the batch size at each sequence step.
  Type: Tensor

- sorted_indices - Tensor of integers holding how this PackedSequence is constructed from sequences.
  Type: Tensor, optional

- unsorted_indices - Tensor of integers holding how to recover the original sequences with correct order.
  Type: Tensor, optional
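The two index attributes are a permutation and its inverse: sorted_indices records the order in which the original sequences were rearranged (by decreasing length), and unsorted_indices maps the rearranged slots back. A pure-Python sketch of the relationship (the variable names here are illustrative, not oneflow API):

```python
lengths = [2, 5, 3]  # lengths of three original sequences

# sorted_indices: original positions ordered by decreasing length,
# the order in which the packed batch is laid out.
sorted_indices = sorted(range(len(lengths)), key=lambda i: lengths[i], reverse=True)

# unsorted_indices: the inverse permutation, used to restore original order.
unsorted_indices = [0] * len(sorted_indices)
for slot, original in enumerate(sorted_indices):
    unsorted_indices[original] = slot

sorted_lengths = [lengths[i] for i in sorted_indices]    # [5, 3, 2]
restored = [sorted_lengths[s] for s in unsorted_indices] # back to [2, 5, 3]
```

Applying one permutation after the other is the identity, which is why unsorted_indices alone suffices to recover the original sequence order.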
Note

data can be on arbitrary device and of arbitrary dtype. sorted_indices and unsorted_indices must be oneflow.int64 tensors on the same device as data. However, batch_sizes should always be a CPU oneflow.int64 tensor. This invariant is maintained throughout the PackedSequence class, and by all functions that construct a PackedSequence (i.e., they only pass in tensors conforming to this constraint).
__init__(data: oneflow.Tensor, batch_sizes: Optional[oneflow.Tensor] = None, sorted_indices: Optional[oneflow.Tensor] = None, unsorted_indices: Optional[oneflow.Tensor] = None)

Initialize self. See help(type(self)) for accurate signature.
Methods

__delattr__(name, /) - Implement delattr(self, name).
__dir__() - Default dir() implementation.
__eq__(value, /) - Return self==value.
__format__(format_spec, /) - Default object formatter.
__ge__(value, /) - Return self>=value.
__getattribute__(name, /) - Return getattr(self, name).
__gt__(value, /) - Return self>value.
__hash__() - Return hash(self).
__init__(data[, batch_sizes, …]) - Initialize self.
__init_subclass__ - This method is called when a class is subclassed.
__le__(value, /) - Return self<=value.
__lt__(value, /) - Return self<value.
__ne__(value, /) - Return self!=value.
__new__(**kwargs) - Create and return a new object.
__reduce__() - Helper for pickle.
__reduce_ex__(protocol, /) - Helper for pickle.
__repr__() - Return repr(self).
__setattr__(name, value, /) - Implement setattr(self, name, value).
__sizeof__() - Size of object in memory, in bytes.
__str__() - Return str(self).
__subclasshook__ - Abstract classes can override this to customize issubclass().
byte()
char()
cpu(*args, **kwargs)
cuda(*args, **kwargs)
double()
float()
half()
int()
is_pinned() - Returns True if self.data is stored in pinned memory.
long()
pin_memory()
short()
to(*args, **kwargs) - Performs dtype and/or device conversion on self.data.
Attributes

is_cuda - Returns True if self.data is stored on a GPU.