oneflow.nn.utils.rnn.pad_packed_sequence(sequence: oneflow.nn.utils.rnn.PackedSequence, batch_first: bool = False, padding_value: float = 0.0, total_length: Optional[int] = None) → Tuple[oneflow.Tensor, oneflow.Tensor]

The interface is consistent with PyTorch. The documentation is referenced from PyTorch's torch.nn.utils.rnn.pad_packed_sequence.

Pads a packed batch of variable length sequences.

It is an inverse operation to pack_padded_sequence().

The returned Tensor’s data will be of size T x B x *, where T is the length of the longest sequence and B is the batch size. If batch_first is True, the data will be transposed into B x T x * format.


total_length is useful to implement the pack sequence -> recurrent network -> unpack sequence pattern in a Module wrapped in DataParallel.

Parameters:

  • sequence (PackedSequence) – batch to pad

  • batch_first (bool, optional) – if True, the output will be in B x T x * format.

  • padding_value (float, optional) – values for padded elements.

  • total_length (int, optional) – if not None, the output will be padded to have length total_length. This method will throw ValueError if total_length is less than the max sequence length in sequence.


Returns:

Tuple of Tensor containing the padded sequence, and a Tensor containing the list of lengths of each sequence in the batch. Batch elements will be re-ordered as they were originally ordered when the batch was passed to pack_padded_sequence() or pack_sequence().
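To make the inverse relationship concrete, here is a minimal pure-Python sketch of the unpacking logic, assuming 1-D elements, time-major flattened packed data, and per-step batch sizes. The function name `unpack_sketch` is hypothetical; this is an illustration of the layout, not OneFlow's actual implementation:

```python
def unpack_sketch(data, batch_sizes, padding_value=0.0, total_length=None):
    """Illustrative sketch: rebuild a batch_first padded matrix and the
    per-sequence lengths from packed data (not OneFlow's real code)."""
    max_t = len(batch_sizes)          # length of the longest sequence
    if total_length is not None:
        if total_length < max_t:
            raise ValueError(
                "Expected total_length to be at least the max sequence length"
            )
        max_t = total_length
    batch = batch_sizes[0]            # number of sequences in the batch
    padded = [[padding_value] * max_t for _ in range(batch)]
    lengths = [0] * batch
    i = 0
    # Packed data is stored one time step at a time: step t holds the
    # t-th element of every sequence still "alive" at that step.
    for t, bs in enumerate(batch_sizes):
        for b in range(bs):
            padded[b][t] = data[i]
            lengths[b] += 1
            i += 1
    return padded, lengths


padded, lens = unpack_sketch([4, 1, 3, 5, 2, 6], [3, 2, 1])
# padded == [[4, 5, 6], [1, 2, 0], [3, 0, 0]], lens == [3, 2, 1]
```

This mirrors the doctest below: the packed data [4, 1, 3, 5, 2, 6] with batch_sizes [3, 2, 1] unpacks to the original batch_first matrix and lengths [3, 2, 1].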

For example:

>>> from oneflow.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence
>>> import oneflow as flow

>>> seq = flow.tensor([[4,5,6], [1,2,0], [3,0,0]])
>>> lens = [3, 2, 1]
>>> packed = pack_padded_sequence(seq, lens, batch_first=True, enforce_sorted=True)
>>> packed.data
tensor([4, 1, 3, 5, 2, 6], dtype=oneflow.int64)
>>> packed.batch_sizes
tensor([3, 2, 1], dtype=oneflow.int64)
>>> seq_unpacked, lens_unpacked = pad_packed_sequence(packed, batch_first=True)
>>> seq_unpacked
tensor([[4, 5, 6],
        [1, 2, 0],
        [3, 0, 0]], dtype=oneflow.int64)
>>> lens_unpacked
tensor([3, 2, 1], dtype=oneflow.int64)