# oneflow.optim.SGD

class oneflow.optim.SGD(params: Union[Iterator[oneflow.nn.Parameter], List[Dict]], lr: float = 0.001, momentum: float = 0.0, dampening: float = 0.0, weight_decay: float = 0.0, nesterov: bool = False, maximize: bool = False)

Implements SGD algorithm.

In mini-batch gradient descent, this algorithm uses the gradient of a small random sample as an approximate estimate of the gradient over the full dataset.

When momentum = 0, the parameter update equation is:

$param_{new} = param_{old} - learning\_rate * grad$

With momentum, the parameter update equations are:

$V_t = \beta * V_{t-1} - learning\_rate * (g_t + param_{old} * weight\_decay)$

$param_{new} = param_{old} + V_t$
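The momentum update above can be sketched in plain Python. This is an illustration of the formulas only, not the OneFlow implementation, and `sgd_step` is a hypothetical helper name:

```python
# One SGD step with momentum and weight decay on a scalar parameter,
# following the two update equations above (hypothetical helper, not
# part of the OneFlow API).
def sgd_step(param, grad, velocity, lr=0.001, momentum=0.9, weight_decay=0.0):
    # V_t = beta * V_{t-1} - learning_rate * (g_t + param_old * weight_decay)
    velocity = momentum * velocity - lr * (grad + param * weight_decay)
    # param_new = param_old + V_t
    param = param + velocity
    return param, velocity

# Two steps on a toy quadratic loss L(w) = w^2, whose gradient is 2 * w.
w, v = 1.0, 0.0
for _ in range(2):
    w, v = sgd_step(w, 2 * w, v, lr=0.1, momentum=0.9)
```

After the first step the velocity is -0.2 and the parameter moves from 1.0 to 0.8; the second step reuses that velocity, so the update is larger than plain gradient descent would give.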
Parameters
• params (iterable) – iterable of parameters to optimize or dicts defining parameter groups

• lr (float, optional) – learning rate (default: 1e-3)

• momentum (float, optional) – momentum factor (default: 0.0)

• dampening (float, optional) – dampening for momentum (default: 0.0)

• weight_decay (float, optional) – weight decay (L2 penalty) (default: 0.0)

• nesterov (bool, optional) – enables Nesterov momentum (default: False)

• maximize (bool, optional) – maximize the parameters based on the objective, instead of minimizing (default: False)

For example:

Example 1:

import oneflow as flow

# Assume net is a custom model.
sgd = flow.optim.SGD(net.parameters(), lr=1e-3)

for epoch in range(epochs):
    # Read data, compute the loss and so on.
    # ...
    loss.backward()
    sgd.step()
    sgd.zero_grad()

Example 2:

import oneflow as flow

# Assume net is a custom model.
sgd = flow.optim.SGD(
    [
        {
            "params": net.parameters(),
            "lr": learning_rate,
        }
    ],
)

for epoch in range(epochs):
    # Read data, compute the loss and so on.
    # ...
    loss.backward()
    sgd.step()
    sgd.zero_grad()

If you want to use clip_grad, you can refer to this example.
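The idea behind gradient clipping by norm can be sketched in plain Python. This is a conceptual illustration, not OneFlow's implementation, and `clip_by_global_norm` is a hypothetical helper name:

```python
import math

# Hypothetical helper: rescale a list of gradients so that their global
# L2 norm does not exceed max_norm. Gradients whose norm is already
# within the limit are returned unchanged.
def clip_by_global_norm(grads, max_norm):
    total_norm = math.sqrt(sum(g * g for g in grads))
    if total_norm > max_norm:
        scale = max_norm / total_norm
        grads = [g * scale for g in grads]
    return grads

# Gradients [3.0, 4.0] have L2 norm 5.0, so they are scaled down to norm 1.0.
clipped = clip_by_global_norm([3.0, 4.0], max_norm=1.0)
```

Clipping keeps the direction of the gradient while bounding its magnitude, which helps stabilize training when occasional batches produce very large gradients.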