oneflow.nn.LSTM

class oneflow.nn.LSTM(*args, **kwargs)

Applies a multi-layer long short-term memory (LSTM) RNN to an input sequence.
For each element in the input sequence, each layer computes the following function:

\[\begin{split}\begin{array}{ll} \\
    i_t = \sigma(W_{ii} x_t + b_{ii} + W_{hi} h_{t-1} + b_{hi}) \\
    f_t = \sigma(W_{if} x_t + b_{if} + W_{hf} h_{t-1} + b_{hf}) \\
    g_t = \tanh(W_{ig} x_t + b_{ig} + W_{hg} h_{t-1} + b_{hg}) \\
    o_t = \sigma(W_{io} x_t + b_{io} + W_{ho} h_{t-1} + b_{ho}) \\
    c_t = f_t \odot c_{t-1} + i_t \odot g_t \\
    h_t = o_t \odot \tanh(c_t) \\
\end{array}\end{split}\]

where \(h_t\) is the hidden state at time t, \(c_t\) is the cell state at time t, \(x_t\) is the input at time t, \(h_{t-1}\) is the hidden state of the layer at time t-1 or the initial hidden state at time 0, and \(i_t\), \(f_t\), \(g_t\), \(o_t\) are the input, forget, cell, and output gates, respectively. \(\sigma\) is the sigmoid function, and \(\odot\) is the Hadamard product.
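To make the gate equations concrete, here is a minimal NumPy sketch of a single timestep. It is illustrative only, not OneFlow's implementation; the stacked weight layout (W_ii|W_if|W_ig|W_io) follows the Variables section below.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W_i, W_h, b_i, b_h):
    # W_i stacks (W_ii|W_if|W_ig|W_io) and W_h stacks (W_hi|W_hf|W_hg|W_ho),
    # each with leading dimension 4*hidden_size, as documented below.
    gates = W_i @ x_t + b_i + W_h @ h_prev + b_h
    i, f, g, o = np.split(gates, 4)
    i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
    g = np.tanh(g)
    c_t = f * c_prev + i * g        # c_t = f_t * c_{t-1} + i_t * g_t (elementwise)
    h_t = o * np.tanh(c_t)          # h_t = o_t * tanh(c_t)
    return h_t, c_t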
In a multilayer LSTM, the input \(x^{(l)}_t\) of the \(l\)-th layer (\(l \ge 2\)) is the hidden state \(h^{(l-1)}_t\) of the previous layer multiplied by dropout \(\delta^{(l-1)}_t\), where each \(\delta^{(l-1)}_t\) is a Bernoulli random variable which is \(0\) with probability dropout.

If proj_size > 0 is specified, LSTM with projections will be used. This changes the LSTM cell in the following way. First, the dimension of \(h_t\) will be changed from hidden_size to proj_size (dimensions of \(W_{hi}\) will be changed accordingly). Second, the output hidden state of each layer will be multiplied by a learnable projection matrix: \(h_t = W_{hr}h_t\). Note that as a consequence of this, the output of the LSTM network will be of a different shape as well. See the Inputs/Outputs sections below for the exact dimensions of all variables. You can find more details in https://arxiv.org/abs/1402.1128.

The interface is consistent with PyTorch. The documentation is referenced from: https://pytorch.org/docs/1.10/_modules/torch/nn/modules/rnn.html#LSTM.
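To illustrate the proj_size behavior described above, here is a hedged sketch (assuming proj_size is supported at runtime as documented): the projection shrinks the hidden state, and hence the output feature dimension, from hidden_size to proj_size, while the cell state keeps hidden_size.

import numpy as np
import oneflow as flow

rnn = flow.nn.LSTM(10, 20, 2, proj_size=15)
input = flow.tensor(np.random.randn(5, 3, 10), dtype=flow.float32)
output, (hn, cn) = rnn(input)
print(output.size())  # expected per the shapes below: oneflow.Size([5, 3, 15])
print(hn.size())      # expected: oneflow.Size([2, 3, 15])
print(cn.size())      # expected: oneflow.Size([2, 3, 20])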
- Parameters

    input_size – The number of expected features in the input x
    hidden_size – The number of features in the hidden state h
    num_layers – Number of recurrent layers. E.g., setting num_layers=2 would mean stacking two LSTMs together to form a stacked LSTM, with the second LSTM taking in outputs of the first LSTM and computing the final results. Default: 1
    bias – If False, then the layer does not use bias weights b_ih and b_hh. Default: True
    batch_first – If True, then the input and output tensors are provided as (batch, seq, feature) instead of (seq, batch, feature). Note that this does not apply to hidden or cell states. See the Inputs/Outputs sections below for details. Default: False
    dropout – If non-zero, introduces a Dropout layer on the outputs of each LSTM layer except the last layer, with dropout probability equal to dropout. Default: 0
    bidirectional – If True, becomes a bidirectional LSTM. Default: False
    proj_size – If > 0, will use LSTM with projections of corresponding size. Default: 0
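The following sketch ties the constructor arguments above together; the keyword names are as documented, and the values are arbitrary.

import numpy as np
import oneflow as flow

rnn = flow.nn.LSTM(
    input_size=10,     # features per timestep in x
    hidden_size=20,    # features in the hidden state h
    num_layers=2,      # two stacked LSTMs
    bias=True,
    batch_first=True,  # tensors are (batch, seq, feature)
    dropout=0.1,       # applied between the two layers
    bidirectional=False,
)
x = flow.tensor(np.random.randn(3, 5, 10), dtype=flow.float32)  # (N, L, H_in)
output, (hn, cn) = rnn(x)
print(output.size())  # expected: oneflow.Size([3, 5, 20]), i.e. (N, L, H_out)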
- Inputs: input, (h_0, c_0)

    input: tensor of shape \((L, N, H_{in})\) when batch_first=False or \((N, L, H_{in})\) when batch_first=True, containing the features of the input sequence.
    h_0: tensor of shape \((D * \text{num\_layers}, N, H_{out})\) containing the initial hidden state for each element in the batch. Defaults to zeros if (h_0, c_0) is not provided.
    c_0: tensor of shape \((D * \text{num\_layers}, N, H_{cell})\) containing the initial cell state for each element in the batch. Defaults to zeros if (h_0, c_0) is not provided.

    where:

    \[\begin{split}\begin{aligned}
        N ={} & \text{batch size} \\
        L ={} & \text{sequence length} \\
        D ={} & 2 \text{ if bidirectional=True otherwise } 1 \\
        H_{in} ={} & \text{input\_size} \\
        H_{cell} ={} & \text{hidden\_size} \\
        H_{out} ={} & \text{proj\_size if } \text{proj\_size}>0 \text{ otherwise hidden\_size} \\
    \end{aligned}\end{split}\]

- Outputs: output, (h_n, c_n)

    output: tensor of shape \((L, N, D * H_{out})\) when batch_first=False or \((N, L, D * H_{out})\) when batch_first=True, containing the output features (h_t) from the last layer of the LSTM, for each t.
    h_n: tensor of shape \((D * \text{num\_layers}, N, H_{out})\) containing the final hidden state for each element in the batch.
    c_n: tensor of shape \((D * \text{num\_layers}, N, H_{cell})\) containing the final cell state for each element in the batch.
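A sketch of the shape bookkeeping above with D = 2 (bidirectional), L = 5, N = 3, H_in = 10, and H_out = H_cell = 20:

import numpy as np
import oneflow as flow

rnn = flow.nn.LSTM(10, 20, num_layers=2, bidirectional=True)
x = flow.tensor(np.random.randn(5, 3, 10), dtype=flow.float32)   # (L, N, H_in)
h0 = flow.tensor(np.random.randn(4, 3, 20), dtype=flow.float32)  # (D*num_layers, N, H_out)
c0 = flow.tensor(np.random.randn(4, 3, 20), dtype=flow.float32)  # (D*num_layers, N, H_cell)
output, (hn, cn) = rnn(x, (h0, c0))
print(output.size())  # expected: oneflow.Size([5, 3, 40]), i.e. (L, N, D*H_out)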
- Variables

    weight_ih_l[k] – the learnable input-hidden weights of the \(\text{k}^{th}\) layer (W_ii|W_if|W_ig|W_io), of shape (4*hidden_size, input_size) for k = 0. Otherwise, the shape is (4*hidden_size, num_directions * hidden_size).
    weight_hh_l[k] – the learnable hidden-hidden weights of the \(\text{k}^{th}\) layer (W_hi|W_hf|W_hg|W_ho), of shape (4*hidden_size, hidden_size). If proj_size > 0 was specified, the shape will be (4*hidden_size, proj_size).
    bias_ih_l[k] – the learnable input-hidden bias of the \(\text{k}^{th}\) layer (b_ii|b_if|b_ig|b_io), of shape (4*hidden_size).
    bias_hh_l[k] – the learnable hidden-hidden bias of the \(\text{k}^{th}\) layer (b_hi|b_hf|b_hg|b_ho), of shape (4*hidden_size).
    weight_hr_l[k] – the learnable projection weights of the \(\text{k}^{th}\) layer, of shape (proj_size, hidden_size). Only present when proj_size > 0 was specified.
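The per-layer shapes documented above can be inspected via named_parameters() (listed in the Methods section below); a short sketch:

import oneflow as flow

rnn = flow.nn.LSTM(10, 20, 2)
for name, p in rnn.named_parameters():
    print(name, tuple(p.shape))
# expected per the table above: weight_ih_l0 (80, 10), weight_hh_l0 (80, 20),
# bias_ih_l0 (80,), bias_hh_l0 (80,), weight_ih_l1 (80, 20), ...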
Note
All the weights and biases are initialized from \(\mathcal{U}(-\sqrt{k}, \sqrt{k})\) where \(k = \frac{1}{\text{hidden\_size}}\)
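A quick hedged check of this note (assuming Tensor.detach()/.numpy(), as used with tensors in the example below): with hidden_size=20, \(k = 1/20\), so every freshly initialized parameter should lie in \([-\sqrt{k}, \sqrt{k}]\).

import math
import oneflow as flow

rnn = flow.nn.LSTM(10, 20, 2)
bound = math.sqrt(1.0 / 20)  # k = 1 / hidden_size
for name, p in rnn.named_parameters():
    values = p.detach().numpy()
    assert -bound <= values.min() and values.max() <= bound, name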
Note

For bidirectional LSTMs, forward and backward are directions 0 and 1 respectively. Example of splitting the output layers when batch_first=False: output.view(seq_len, batch, num_directions, hidden_size).

For example:

>>> import oneflow as flow
>>> import numpy as np
>>> rnn = flow.nn.LSTM(10, 20, 2)
>>> input = flow.tensor(np.random.randn(5, 3, 10), dtype=flow.float32)
>>> h0 = flow.tensor(np.random.randn(2, 3, 20), dtype=flow.float32)
>>> c0 = flow.tensor(np.random.randn(2, 3, 20), dtype=flow.float32)
>>> output, (hn, cn) = rnn(input, (h0, c0))
>>> output.size()
oneflow.Size([5, 3, 20])
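Building on the bidirectional note above, a sketch of splitting the output into its forward (direction 0) and backward (direction 1) halves via the documented view expression:

import numpy as np
import oneflow as flow

rnn = flow.nn.LSTM(10, 20, 2, bidirectional=True)
x = flow.tensor(np.random.randn(5, 3, 10), dtype=flow.float32)
output, _ = rnn(x)                # (seq_len, batch, 2*hidden_size)
split = output.view(5, 3, 2, 20)  # (seq_len, batch, num_directions, hidden_size)
forward, backward = split[:, :, 0], split[:, :, 1]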
__init__(*args, **kwargs)
    Initialize self. See help(type(self)) for accurate signature.
Methods

__call__(*args, **kwargs)
    Call self as a function.
__delattr__(name, /)
    Implement delattr(self, name).
__dir__()
    Default dir() implementation.
__eq__(value, /)
    Return self==value.
__format__(format_spec, /)
    Default object formatter.
__ge__(value, /)
    Return self>=value.
__getattr__(name)
__getattribute__(name, /)
    Return getattr(self, name).
__gt__(value, /)
    Return self>value.
__hash__()
    Return hash(self).
__init__(*args, **kwargs)
    Initialize self.
__init_subclass__
    This method is called when a class is subclassed.
__le__(value, /)
    Return self<=value.
__lt__(value, /)
    Return self<value.
__ne__(value, /)
    Return self!=value.
__new__(**kwargs)
    Create and return a new object.
__reduce__()
    Helper for pickle.
__reduce_ex__(protocol, /)
    Helper for pickle.
__repr__()
    Return repr(self).
__setattr__(attr, value)
    Implement setattr(self, name, value).
__sizeof__()
    Size of object in memory, in bytes.
__str__()
    Return str(self).
__subclasshook__
    Abstract classes can override this to customize issubclass().
_apply(fn[, applied_dict])
_get_name()
_load_from_state_dict(state_dict, prefix, …)
_named_members(get_members_fn[, prefix, recurse])
_save_to_state_dict(destination, prefix, …)
_shallow_repr()
add_module(name, module)
    Adds a child module to the current module.
apply(fn)
    Applies fn recursively to every submodule (as returned by .children()) as well as self.
buffers([recurse])
    Returns an iterator over module buffers.
check_forward_args(input, hidden, batch_sizes)
check_hidden_size(hx, expected_hidden_size)
check_input(input, batch_sizes)
children()
    Returns an iterator over immediate children modules.
cpu()
    Moves all model parameters and buffers to the CPU.
cuda([device])
    Moves all model parameters and buffers to the GPU.
double()
    Casts all floating point parameters and buffers to double datatype.
eval()
    Sets the module in evaluation mode.
extra_repr()
    Set the extra representation of the module.
float()
    Casts all floating point parameters and buffers to float datatype.
forward(input[, hx])
get_expected_cell_size(input, batch_sizes)
get_expected_hidden_size(input, batch_sizes)
half()
    Casts all floating point parameters and buffers to half datatype.
load_state_dict(state_dict[, strict])
    Copies parameters and buffers from state_dict into this module and its descendants.
modules()
    Returns an iterator over all modules in the network.
named_buffers([prefix, recurse])
    Returns an iterator over module buffers, yielding both the name of the buffer as well as the buffer itself.
named_children()
    Returns an iterator over immediate children modules, yielding both the name of the module as well as the module itself.
named_modules([memo, prefix])
    Returns an iterator over all modules in the network, yielding both the name of the module as well as the module itself.
named_parameters([prefix, recurse])
    Returns an iterator over module parameters, yielding both the name of the parameter as well as the parameter itself.
parameters([recurse])
    Returns an iterator over module parameters.
permute_hidden(hx, permutation)
register_buffer(name, tensor[, persistent])
    Adds a buffer to the module.
register_forward_hook(hook)
    Registers a forward hook on the module.
register_forward_pre_hook(hook)
    Registers a forward pre-hook on the module.
register_parameter(name, param)
    Adds a parameter to the module.
reset_parameters()
state_dict([destination, prefix, keep_vars])
    Returns a dictionary containing a whole state of the module.
to([device])
    Moves the parameters and buffers.
to_consistent(*args, **kwargs)
    This interface is no longer available, please use oneflow.nn.Module.to_global() instead.
to_global([placement, sbp])
    Convert the parameters and buffers to global.
train([mode])
    Sets the module in training mode.
zero_grad([set_to_none])
    Sets gradients of all model parameters to zero.
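As a hedged usage sketch of the state_dict()/load_state_dict() pair listed above, the following copies one LSTM's parameters into another of the same configuration:

import numpy as np
import oneflow as flow

src = flow.nn.LSTM(10, 20, 2)
dst = flow.nn.LSTM(10, 20, 2)
dst.load_state_dict(src.state_dict())  # copies all weights and biases

# The two modules should now hold identical parameters.
for (name, a), (_, b) in zip(src.named_parameters(), dst.named_parameters()):
    assert np.allclose(a.detach().numpy(), b.detach().numpy()), name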
Attributes
all_weights