oneflow.nn.functional.ctc_loss

oneflow.nn.functional.ctc_loss(log_probs: oneflow.Tensor, targets: oneflow.Tensor, input_lengths: oneflow.Tensor, target_lengths: oneflow.Tensor, blank=0, reduction='mean', zero_infinity=False) → oneflow.Tensor

The Connectionist Temporal Classification loss.

The documentation is referenced from: https://pytorch.org/docs/stable/generated/torch.nn.functional.ctc_loss.html

See CTCLoss for details.

Parameters
  • log_probs – The logarithmized probabilities of the outputs, of shape (T, N, C), where T is the input length, N the batch size, and C the number of classes (including the blank).

  • targets – Targets of shape (N, S) or (sum(target_lengths),). Targets may not contain the blank index. In the second (1-D) form, the targets are assumed to be un-padded and concatenated.

  • input_lengths – Lengths of the inputs.

  • target_lengths – Lengths of the targets.

  • blank – Blank label. Default: 0.

  • reduction – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. Default: 'mean'.

  • zero_infinity – Whether to zero infinite losses and the associated gradients. Default False.
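Per the PyTorch documentation referenced above, 'mean' is not a plain average: each per-sample loss is first divided by its target length, and the results are then averaged. A minimal NumPy sketch of the three reductions, using hypothetical per-sample loss values:

```python
import numpy as np

# Hypothetical per-sample CTC losses and their target lengths.
losses = np.array([1.2, 0.9])
target_lengths = np.array([3, 2])

red_none = losses                             # 'none': one loss per sample
red_sum = losses.sum()                        # 'sum': total over the batch
red_mean = (losses / target_lengths).mean()   # 'mean': normalize by target length, then average
```

Here `red_mean` is (1.2/3 + 0.9/2) / 2 = 0.425, not `losses.mean()` = 1.05.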

Example

>>> import oneflow as flow
>>> import oneflow.nn as nn
>>> import oneflow.nn.functional as F
>>> log_probs = flow.tensor(
...     [
...         [[-1.1031, -0.7998, -1.5200], [-0.9808, -1.1363, -1.1908]],
...         [[-1.2258, -1.0665, -1.0153], [-1.1135, -1.2331, -0.9671]],
...         [[-1.3348, -0.6611, -1.5118], [-0.9823, -1.2355, -1.0941]],
...         [[-1.3850, -1.3273, -0.7247], [-0.8235, -1.4783, -1.0994]],
...         [[-0.9049, -0.8867, -1.6962], [-1.4938, -1.3630, -0.6547]],
...     ],
...     dtype=flow.float32,
...     requires_grad=True,
...     )
>>> targets = flow.tensor([[1, 2, 2], [1, 2, 2]], dtype=flow.int32)
>>> input_lengths = flow.tensor([5, 5], dtype=flow.int32)
>>> target_lengths = flow.tensor([3, 3], dtype=flow.int32)
>>> out = F.ctc_loss(log_probs, targets, input_lengths, target_lengths)
>>> out
tensor(1.1376, dtype=oneflow.float32, grad_fn=<scalar_mulBackward>)
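To make the semantics concrete, below is a minimal NumPy sketch of the forward (alpha) recursion that CTC loss evaluates for a single sequence: the target is interleaved with blanks, and the recursion sums the probabilities of all alignments that collapse to the target. `ctc_loss_single` is an illustrative helper, not part of OneFlow's API, and it assumes a non-empty target.

```python
import numpy as np

def ctc_loss_single(log_probs, target, blank=0):
    """Negative log-likelihood of `target` under the CTC alignment model.

    log_probs: (T, C) array of per-step log-probabilities.
    target:    non-empty sequence of label indices, none equal to `blank`.
    """
    T = log_probs.shape[0]
    # Extended target: a blank before, between, and after every label.
    ext = [blank]
    for c in target:
        ext += [c, blank]
    S = len(ext)  # 2 * len(target) + 1

    # alpha[s] = log-probability of all prefixes ending at ext[s] at time t.
    alpha = np.full(S, -np.inf)
    alpha[0] = log_probs[0, ext[0]]
    alpha[1] = log_probs[0, ext[1]]

    for t in range(1, T):
        new = np.full(S, -np.inf)
        for s in range(S):
            terms = [alpha[s]]                       # stay on the same symbol
            if s >= 1:
                terms.append(alpha[s - 1])           # advance by one
            # Skipping a blank is allowed only between distinct labels.
            if s >= 2 and ext[s] != blank and ext[s] != ext[s - 2]:
                terms.append(alpha[s - 2])
            new[s] = np.logaddexp.reduce(terms) + log_probs[t, ext[s]]
        alpha = new

    # Valid alignments end on the last label or the trailing blank.
    return -np.logaddexp(alpha[S - 1], alpha[S - 2])
```

For small inputs this agrees with brute-force enumeration over all label paths that collapse (repeats merged, blanks removed) to the target.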