oneflow.nn.functional.binary_cross_entropy_with_logits

oneflow.nn.functional.binary_cross_entropy_with_logits(input, target, weight=None, reduction='mean', pos_weight=None)

This documentation is adapted from https://pytorch.org/docs/1.10/generated/torch.nn.functional.binary_cross_entropy_with_logits.html.

Function that measures Binary Cross Entropy between target and input logits.

See BCEWithLogitsLoss for details.
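The loss is equivalent to applying a sigmoid to the logits and then computing binary cross entropy, but fused into a single numerically stable operation. The two-step version below is an illustrative sketch of that equivalence (the tensor values are arbitrary), not how the op is implemented internally:

>>> import oneflow as flow
>>> import oneflow.nn.functional as F
>>> x = flow.tensor([0.5, -1.2, 2.0])  # raw logits
>>> t = flow.tensor([1.0, 0.0, 1.0])   # binary targets
>>> fused = F.binary_cross_entropy_with_logits(x, t)
>>> naive = F.binary_cross_entropy(flow.sigmoid(x), t)  # can lose precision for large |x|

Both calls produce the same value here; the fused form avoids the overflow that an explicit sigmoid followed by a log can hit for logits of large magnitude.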

Parameters
  • input – Tensor of arbitrary shape as unnormalized scores (often referred to as logits).

  • target – Tensor of the same shape as input with values between 0 and 1.

  • weight (Tensor, optional) – a manual rescaling weight. If provided, it is repeated to match the shape of the input tensor.

  • reduction (string, optional) – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied; 'mean': the sum of the output will be divided by the number of elements in the output; 'sum': the output will be summed. Default: 'mean'. The effect of 'none' is shown in the sketch after this list.

  • pos_weight (Tensor, optional) – a weight of positive examples. Must be a vector with length equal to the number of classes; see the sketch after this list for a usage example.
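
To illustrate reduction and pos_weight (a minimal sketch; the class count of 2 and all tensor values are arbitrary):

>>> import oneflow as flow
>>> import oneflow.nn.functional as F
>>> logits = flow.tensor([[1.0, -0.5], [0.3, 2.0]])  # 2 samples x 2 classes
>>> target = flow.tensor([[1.0, 0.0], [0.0, 1.0]])
>>> # reduction='none' keeps one loss value per element
>>> F.binary_cross_entropy_with_logits(logits, target, reduction='none').shape
oneflow.Size([2, 2])
>>> # pos_weight: one entry per class; values > 1 up-weight positive targets
>>> pw = flow.tensor([1.0, 3.0])
>>> loss = F.binary_cross_entropy_with_logits(logits, target, pos_weight=pw)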

Examples:

>>> import oneflow as flow
>>> import oneflow.nn.functional as F
>>> input = flow.randn(3, requires_grad=True)  # raw logits
>>> target = flow.randn(3)
>>> target[target >= 0] = 1  # binarize the random targets to {0, 1}
>>> target[target < 0] = 0
>>> loss = F.binary_cross_entropy_with_logits(input, target)
>>> loss.backward()
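
After backward(), the gradient of the loss with respect to the logits is stored on input. The check below is a quick sanity test; the printed output assumes OneFlow's standard .grad attribute and Size repr:

>>> input.grad.shape
oneflow.Size([3])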