oneflow.nn.KLDivLoss

class oneflow.nn.KLDivLoss(reduction: str = 'mean', log_target: bool = False)

The Kullback-Leibler divergence loss measure.
Kullback-Leibler divergence is a useful distance measure for continuous distributions, and is often helpful when performing direct regression over the space of (discretely sampled) continuous output distributions.
As with NLLLoss, the input given is expected to contain log-probabilities and is not restricted to a 2D Tensor. The targets are interpreted as probabilities by default, but could be considered as log-probabilities with log_target set to True.

This criterion expects a target Tensor of the same size as the input Tensor.
The unreduced (i.e. with reduction set to 'none') loss can be described as:

\[l(x, y) = L = \{ l_1, \dots, l_N \}, \quad l_n = y_n \cdot \left( \log y_n - x_n \right)\]

where the index \(N\) spans all dimensions of input and \(L\) has the same shape as input. If reduction is not 'none' (default 'mean'), then:

\[\begin{split}\ell(x, y) = \begin{cases} \operatorname{mean}(L), & \text{if reduction} = \text{`mean';} \\ \operatorname{sum}(L), & \text{if reduction} = \text{`sum'.} \end{cases}\end{split}\]

In the default reduction mode 'mean', the losses are averaged for each minibatch over observations as well as over dimensions. 'batchmean' mode gives the correct KL divergence, where losses are averaged over the batch dimension only. 'mean' mode's behavior will be changed to the same as 'batchmean' in the next major release.

The interface is consistent with PyTorch. The documentation is referenced from: https://pytorch.org/docs/1.10/generated/torch.nn.KLDivLoss.html.
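As a quick sanity check of the elementwise formula above, the following sketch (illustrative values, not part of the documented example) compares KLDivLoss(reduction='none') against a manual evaluation of \(y_n \cdot (\log y_n - x_n)\) on strictly positive targets:

import numpy as np
import oneflow as flow

# Illustrative values: `input` holds log-probabilities, `target` holds probabilities.
input = flow.tensor([-1.2, -0.7, -1.5], dtype=flow.float32)
target = flow.tensor([0.5, 0.3, 0.2], dtype=flow.float32)

loss_fn = flow.nn.KLDivLoss(reduction="none", log_target=False)
lib = loss_fn(input, target)

# Manual evaluation of l_n = y_n * (log y_n - x_n)
manual = target.numpy() * (np.log(target.numpy()) - input.numpy())

print(np.allclose(lib.numpy(), manual))  # expected: True (up to float rounding)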
- Parameters
  reduction (string, optional) – Specifies the reduction to apply to the output: 'none' | 'batchmean' | 'sum' | 'mean'. 'none': no reduction will be applied. 'batchmean': the sum of the output will be divided by the batch size. 'sum': the output will be summed. 'mean': the output will be divided by the number of elements in the output. Default: 'mean'
  log_target (bool, optional) – Specifies whether target is passed in the log space (see the sketch following this list). Default: False
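When log_target=True the target is taken to be in log space; the pointwise term then becomes \(\exp(y_n) \cdot (y_n - x_n)\), matching the PyTorch reference behavior (an assumption here, consistent with the example at the end of this page). A minimal sketch:

import numpy as np
import oneflow as flow

# Same tensors as the example below, with `target` treated as log-probabilities.
input = flow.tensor([-0.9021705, 0.08798598, 1.04686249], dtype=flow.float32)
target = flow.tensor([1.22386942, -0.89729659, 0.01615712], dtype=flow.float32)

lib = flow.nn.KLDivLoss(reduction="sum", log_target=True)(input, target)

# Assumed pointwise term for a log-space target: exp(y) * (y - x), then summed.
manual = np.sum(np.exp(target.numpy()) * (target.numpy() - input.numpy()))

print(lib.numpy(), manual)  # both should be close to 5.7801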
Note

reduction = 'mean' doesn’t return the true KL divergence value; please use reduction = 'batchmean', which aligns with the KL math definition. In the next major release, 'mean' will be changed to be the same as 'batchmean'.
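A minimal sketch of the difference (illustrative 2x3 batch): 'batchmean' divides the summed loss by the batch size, while 'mean' divides it by the total number of elements:

import oneflow as flow

# Illustrative batch of 2 samples over 3 classes: rows of `input` play the role of
# log-probabilities, rows of `target` the role of probabilities.
input = flow.tensor([[-1.2, -0.7, -1.5],
                     [-0.9, -1.1, -1.3]], dtype=flow.float32)
target = flow.tensor([[0.5, 0.3, 0.2],
                      [0.2, 0.5, 0.3]], dtype=flow.float32)

sum_loss = flow.nn.KLDivLoss(reduction="sum")(input, target)
batchmean = flow.nn.KLDivLoss(reduction="batchmean")(input, target)
mean = flow.nn.KLDivLoss(reduction="mean")(input, target)

print(batchmean.numpy(), sum_loss.numpy() / 2)  # 'batchmean': sum divided by batch size (2)
print(mean.numpy(), sum_loss.numpy() / 6)       # 'mean': sum divided by all 6 elements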
.- Shape:
  Input: \((N, *)\) where \(*\) means any number of additional dimensions
  Target: \((N, *)\), same shape as the input
  Output: scalar by default. If reduction is 'none', then \((N, *)\), the same shape as the input
For example:
>>> import oneflow as flow
>>> import numpy as np
>>> input = flow.tensor([-0.9021705, 0.08798598, 1.04686249], dtype=flow.float32)
>>> target = flow.tensor([1.22386942, -0.89729659, 0.01615712], dtype=flow.float32)
>>> m = flow.nn.KLDivLoss(reduction="none", log_target=False)
>>> out = m(input, target)
>>> out
tensor([ 1.3514,  0.0000, -0.0836], dtype=oneflow.float32)
>>> m = flow.nn.KLDivLoss(reduction="mean", log_target=False)
>>> out = m(input, target)
>>> out
tensor(0.4226, dtype=oneflow.float32)
>>> m = flow.nn.KLDivLoss(reduction="sum", log_target=True)
>>> out = m(input, target)
>>> out
tensor(5.7801, dtype=oneflow.float32)