oneflow.nn.CombinedMarginLoss

class oneflow.nn.CombinedMarginLoss(m1: float = 1.0, m2: float = 0.0, m3: float = 0.0)

The operation implements "margin_softmax" in InsightFace: https://github.com/deepinsight/insightface/blob/master/recognition/arcface_mxnet/train.py The implementation of margin_softmax in InsightFace is composed of multiple operators; we fuse them for speed.
Applies the function:
\[\begin{split}{\rm CombinedMarginLoss}(x_i, label) = \left\{\begin{matrix} \cos(m_1\cdot\arccos x_i+m_2) - m_3 & {\rm if} \ i == label \\ x_i & {\rm otherwise} \end{matrix}\right.\end{split}\]

- Parameters
x (oneflow.Tensor) – A Tensor
label (oneflow.Tensor) – label with integer data type
m1 (float) – multiplicative angular margin m1
m2 (float) – additive angular margin m2
m3 (float) – additive cosine margin m3
Note
Here are some special cases:
when \(m_1=1, m_2\neq 0, m_3=0\), CombinedMarginLoss has the same parameterization as ArcFace.
when \(m_1=1, m_2=0, m_3\neq 0\), CombinedMarginLoss has the same parameterization as CosFace (a.k.a. AM-Softmax).
when \(m_1\gt 1, m_2=m_3=0\), CombinedMarginLoss has the same parameterization as A-Softmax.
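The three special cases above can be checked numerically. The following is a minimal NumPy sketch of the element-wise transform (not OneFlow's fused kernel; the name `combined_margin` is chosen here for illustration):

```python
import numpy as np

def combined_margin(x, label, m1=1.0, m2=0.0, m3=0.0):
    """Sketch of the CombinedMarginLoss transform.

    For each row i, the entry at column label[i] becomes
    cos(m1 * arccos(x) + m2) - m3; all other entries pass through.
    """
    out = x.copy()
    rows = np.arange(x.shape[0])
    target = x[rows, label]
    out[rows, label] = np.cos(m1 * np.arccos(target) + m2) - m3
    return out

x = np.array([[0.3, -0.2], [0.1, 0.6]])
label = np.array([0, 1])

# m1=1, m2!=0, m3=0: additive angular margin (ArcFace-style)
arc = combined_margin(x, label, m1=1.0, m2=0.5)
# m1=1, m2=0, m3!=0: additive cosine margin (CosFace-style), i.e. x - m3 at the target
cosface = combined_margin(x, label, m3=0.35)
# m1>1, m2=m3=0: multiplicative angular margin (A-Softmax-style)
asoft = combined_margin(x, label, m1=4.0)
```

Note that only the label-indexed entries are modified; off-target logits pass through unchanged, which matches the "otherwise" branch of the formula.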
- Returns
A Tensor
- Return type
oneflow.Tensor
For example:
>>> import numpy as np
>>> import oneflow as flow
>>> np_x = np.array([[-0.7027179, 0.0230609], [-0.02721931, -0.16056311], [-0.4565852, -0.64471215]])
>>> np_label = np.array([0, 1, 1])
>>> x = flow.tensor(np_x, dtype=flow.float32)
>>> label = flow.tensor(np_label, dtype=flow.int32)
>>> loss_func = flow.nn.CombinedMarginLoss(0.3, 0.5, 0.4)
>>> out = loss_func(x, label)
>>> out
tensor([[-0.0423,  0.0231],
        [-0.0272,  0.1237],
        [-0.4566, -0.0204]], dtype=oneflow.float32)
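The example's values can be cross-checked with plain NumPy by applying the definition directly (a sketch that assumes only the formula above; OneFlow is not required):

```python
import numpy as np

np_x = np.array([[-0.7027179, 0.0230609],
                 [-0.02721931, -0.16056311],
                 [-0.4565852, -0.64471215]])
np_label = np.array([0, 1, 1])
m1, m2, m3 = 0.3, 0.5, 0.4

out = np_x.copy()
rows = np.arange(np_x.shape[0])
# Apply cos(m1 * arccos(x) + m2) - m3 only at the label positions;
# every other entry passes through unchanged.
out[rows, np_label] = np.cos(m1 * np.arccos(np_x[rows, np_label]) + m2) - m3
print(np.round(out, 4))
```

The entries at the label positions agree with the tensor printed in the doctest, and the remaining entries are identical to the input.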