oneflow.nn.functional.embedding

oneflow.nn.functional.embedding(input, weight, padding_idx=None, max_norm=None, norm_type=2.0, scale_grad_by_freq=False, sparse=False)

A simple lookup table that looks up embeddings from a fixed dictionary with a fixed embedding size.

This function is often used to retrieve word embeddings using indices. The input is a tensor of indices and the embedding matrix; the output is the corresponding word embeddings.

See oneflow.nn.Embedding for more details.

Parameters
  • input (oneflow.LongTensor) – Tensor containing indices into the embedding matrix

  • weight (Tensor) – The embedding matrix with number of rows equal to the maximum possible index + 1, and number of columns equal to the embedding size

  • padding_idx (int, optional) – If specified, the entries at padding_idx do not contribute to the gradient; therefore, the embedding vector at padding_idx is not updated during training, i.e. it remains as a fixed “pad” (a gradient sketch follows the examples below).

  • max_norm (float, optional) – If given, each embedding vector with norm larger than max_norm is renormalized to have norm max_norm (see the sketch after this list).

  • norm_type (float, optional) – The p of the p-norm to compute for the max_norm option. Default: 2.0

  • scale_grad_by_freq (bool, optional) – If given, this will scale gradients by the inverse of the frequency of the words in the mini-batch. Default: False

  • sparse (bool, optional) – If True, the gradient w.r.t. weight will be a sparse tensor. Default: False
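The snippet below is a minimal sketch of the max_norm option, assuming OneFlow follows the PyTorch semantics where the looked-up rows of weight are renormalized in place when their norm exceeds max_norm:

>>> import oneflow as flow
>>> import oneflow.nn.functional as F

>>> weight = flow.rand(5, 4) * 10  # rows will typically have norm well above 1
>>> _ = F.embedding(flow.tensor([0, 1, 2]), weight, max_norm=1.0)
>>> # the looked-up rows (0, 1, 2) should now have at most unit 2-norm
>>> norms = flow.sqrt((weight * weight).sum(dim=1))
>>> assert norms[0].item() <= 1.0 + 1e-5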

For example:

>>> import oneflow as flow
>>> import oneflow.nn.functional as F

>>> # a batch of 2 samples of 4 indices each
>>> input = flow.tensor([[1,2,4,5],[4,3,2,9]])
>>> # an embedding matrix containing 10 tensors of size 3
>>> embedding_matrix = flow.rand(10, 3)
>>> output = F.embedding(input, embedding_matrix)
>>> output.shape
oneflow.Size([2, 4, 3])
>>> # example with padding_idx
>>> input = flow.tensor([[0,2,0,5]])
>>> output = F.embedding(input, embedding_matrix, padding_idx=0)
>>> output.shape
oneflow.Size([1, 4, 3])
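As a further sketch of the padding_idx behavior described above, assuming OneFlow matches PyTorch in excluding the padding row from the gradient:

>>> weight = flow.rand(10, 3, requires_grad=True)
>>> output = F.embedding(flow.tensor([[0, 2, 0, 5]]), weight, padding_idx=0)
>>> output.sum().backward()
>>> # row 0 received no gradient, so an optimizer would leave it unchanged
>>> assert weight.grad[0].sum().item() == 0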