Embedding(num_embeddings: int, embedding_dim: int, padding_idx: Optional[int] = None, max_norm: Optional[float] = None, norm_type: float = 2.0, scale_grad_by_freq: bool = False, sparse: bool = False, _weight: Optional[oneflow.Tensor] = None, device=None, dtype=None)
A simple lookup table that stores embeddings of a fixed dictionary and size.
This module is often used to store word embeddings and retrieve them using indices. The input to the module is a list of indices, and the output is the corresponding word embeddings.
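Conceptually, the forward pass is plain row selection from the module's weight table. A minimal sketch, assuming the table is exposed through the module's weight attribute (as the _weight argument in the signature suggests):

>>> import oneflow as flow
>>> m = flow.nn.Embedding(10, 3)   # a table of 10 vectors, each of size 3
>>> idx = flow.tensor([1, 7], dtype=flow.int)
>>> out = m(idx)                   # forward pass: selects rows 1 and 7
>>> manual = m.weight[idx]         # the same rows, selected by hand; equal to out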
num_embeddings (int) – size of the dictionary of embeddings
embedding_dim (int) – the size of each embedding vector
padding_idx (int, optional) – If specified, the entries at padding_idx do not contribute to the gradient; therefore, the embedding vector at padding_idx is not updated during training, i.e. it remains as a fixed “pad”. For a newly constructed Embedding, the embedding vector at padding_idx will default to all zeros, but can be updated to another value to be used as the padding vector. A short sketch of this behavior follows below.
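A sketch of the padding behavior, assuming the usual eager-mode backward()/grad workflow:

>>> import oneflow as flow
>>> m = flow.nn.Embedding(10, 3, padding_idx=0)
>>> indices = flow.tensor([[0, 2, 0, 5]], dtype=flow.int)
>>> out = m(indices)             # positions with index 0 come back as zero vectors
>>> out.sum().backward()
>>> pad_grad = m.weight.grad[0]  # all zeros: the pad row receives no gradient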
max_norm (float, optional) – If given, each embedding vector with norm larger than max_norm is renormalized to have norm max_norm.
norm_type (float, optional) – The p of the p-norm to compute for the max_norm option. Default 2.0.
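A sketch of the effect of max_norm together with the default norm_type; the norm check uses only basic tensor ops to stay self-contained:

>>> import oneflow as flow
>>> m = flow.nn.Embedding(10, 3, max_norm=1.0)     # norm_type defaults to 2.0
>>> out = m(flow.tensor([1, 2, 3], dtype=flow.int))
>>> norms = (out * out).sum(dim=1).sqrt()          # no entry exceeds 1.0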
scale_grad_by_freq (bool, optional) – If given, this will scale gradients by the inverse of frequency of the words in the mini-batch. Default False.
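A sketch of the frequency scaling, assuming it follows the usual inverse-frequency semantics (the accumulated gradient for an index that appears k times in the mini-batch is divided by k):

>>> import oneflow as flow
>>> m = flow.nn.Embedding(10, 3, scale_grad_by_freq=True)
>>> m(flow.tensor([4, 4, 5], dtype=flow.int)).sum().backward()
>>> # index 4 appears twice, so its accumulated gradient is divided by 2;
>>> # m.weight.grad[4] then matches m.weight.grad[5] instead of doubling it.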
For example:

>>> import numpy as np
>>> import oneflow as flow
>>> indices = flow.tensor([[1, 2, 4, 5], [4, 3, 2, 9]], dtype=flow.int)
>>> m = flow.nn.Embedding(10, 3)
>>> y = m(indices)
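Since indices has shape (2, 4) and embedding_dim is 3, y has shape (2, 4, 3): one 3-dimensional embedding vector per input index.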