oneflow.nn.RNN
class oneflow.nn.RNN(*args, **kwargs)

Applies a multi-layer Elman RNN with \(\tanh\) or \(\text{ReLU}\) non-linearity to an input sequence.

For each element in the input sequence, each layer computes the following function:

\[h_t = \tanh(W_{ih} x_t + b_{ih} + W_{hh} h_{(t-1)} + b_{hh})\]

where \(h_t\) is the hidden state at time t, \(x_t\) is the input at time t, and \(h_{(t-1)}\) is the hidden state of the previous layer at time t-1 or the initial hidden state at time 0. If nonlinearity is 'relu', then \(\text{ReLU}\) is used instead of \(\tanh\).

The interface is consistent with PyTorch. The documentation is referenced from: https://pytorch.org/docs/1.10/generated/torch.nn.RNN.html.
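As a sanity check of the recurrence above, the following sketch replays one \(\tanh\) step by hand; it assumes the module exposes PyTorch-style parameter attributes (weight_ih_l0, weight_hh_l0, bias_ih_l0, bias_hh_l0), as listed under the variables below:

>>> import oneflow as flow
>>> import numpy as np
>>> rnn = flow.nn.RNN(4, 3, 1)    # single layer, single direction
>>> x = flow.randn(1, 1, 4)       # (L=1, N=1, H_in=4)
>>> h0 = flow.zeros(1, 1, 3)      # (num_layers=1, N=1, H_out=3)
>>> output, hn = rnn(x, h0)
>>> # replay h_1 = tanh(W_ih x_1 + b_ih + W_hh h_0 + b_hh) by hand
>>> h1 = flow.tanh(flow.matmul(x[0], rnn.weight_ih_l0.transpose(0, 1)) + rnn.bias_ih_l0
...                + flow.matmul(h0[0], rnn.weight_hh_l0.transpose(0, 1)) + rnn.bias_hh_l0)
>>> np.allclose(hn[0].numpy(), h1.numpy(), atol=1e-6)
True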
- Parameters
input_size – The number of expected features in the input x
hidden_size – The number of features in the hidden state h
num_layers – Number of recurrent layers. E.g., setting num_layers=2 would mean stacking two RNNs together to form a stacked RNN, with the second RNN taking in outputs of the first RNN and computing the final results. Default: 1
nonlinearity – The non-linearity to use. Can be either 'tanh' or 'relu'. Default: 'tanh'
bias – If False, then the layer does not use bias weights b_ih and b_hh. Default: True
batch_first – If True, then the input and output tensors are provided as (batch, seq, feature) instead of (seq, batch, feature). Note that this does not apply to hidden or cell states. See the Inputs/Outputs sections below for details, and the sketch after this list. Default: False
dropout – If non-zero, introduces a Dropout layer on the outputs of each RNN layer except the last layer, with dropout probability equal to dropout. Default: 0
bidirectional – If True, becomes a bidirectional RNN. Default: False
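A brief, hypothetical sketch of how these arguments combine (the shapes follow the Inputs/Outputs sections below):

>>> import oneflow as flow
>>> rnn_relu = flow.nn.RNN(10, 20, 2, nonlinearity='relu')   # two stacked ReLU layers
>>> rnn_bf = flow.nn.RNN(10, 20, batch_first=True)           # expects (batch, seq, feature)
>>> x = flow.randn(3, 5, 10)                                 # batch of 3, sequence length 5
>>> out, hn = rnn_bf(x)                                      # h_0 defaults to zeros
>>> out.size()
oneflow.Size([3, 5, 20])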
- Inputs: input, h_0
input: tensor of shape \((L, N, H_{in})\) when batch_first=False or \((N, L, H_{in})\) when batch_first=True, containing the features of the input sequence.
h_0: tensor of shape \((D * \text{num\_layers}, N, H_{out})\) containing the initial hidden state for each element in the batch. Defaults to zeros if not provided.
where:
\[\begin{split}\begin{aligned} N ={} & \text{batch size} \\ L ={} & \text{sequence length} \\ D ={} & 2 \text{ if bidirectional=True otherwise } 1 \\ H_{in} ={} & \text{input\_size} \\ H_{out} ={} & \text{hidden\_size} \end{aligned}\end{split}\]

- Outputs: output, h_n
output: tensor of shape \((L, N, D * H_{out})\) when batch_first=False or \((N, L, D * H_{out})\) when batch_first=True, containing the output features (h_t) from the last layer of the RNN, for each t.
h_n: tensor of shape \((D * \text{num\_layers}, N, H_{out})\) containing the final hidden state for each element in the batch.
- Variables
weight_ih_l[k] – the learnable input-hidden weights of the k-th layer, of shape (hidden_size, input_size) for k = 0. Otherwise, the shape is (hidden_size, num_directions * hidden_size)
weight_hh_l[k] – the learnable hidden-hidden weights of the k-th layer, of shape (hidden_size, hidden_size)
bias_ih_l[k] – the learnable input-hidden bias of the k-th layer, of shape (hidden_size)
bias_hh_l[k] – the learnable hidden-hidden bias of the k-th layer, of shape (hidden_size)
Note
All the weights and biases are initialized from \(\mathcal{U}(-\sqrt{k}, \sqrt{k})\) where \(k = \frac{1}{\text{hidden\_size}}\).
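The bound can be verified empirically; this sketch assumes the PyTorch-style weight_ih_l0 attribute name:

>>> import oneflow as flow
>>> rnn = flow.nn.RNN(10, 20)
>>> bound = (1.0 / 20) ** 0.5    # sqrt(k) with k = 1 / hidden_size
>>> w = rnn.weight_ih_l0.numpy()
>>> bool(w.min() >= -bound and w.max() <= bound)
True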
Note
For bidirectional RNNs, forward and backward are directions 0 and 1 respectively. Example of splitting the output layers when batch_first=False: output.view((seq_len, batch, num_directions, hidden_size)).
For example:

>>> import oneflow as flow
>>> import numpy as np
>>> rnn = flow.nn.RNN(10, 20, 2)
>>> input = flow.tensor(np.random.randn(5, 3, 10), dtype=flow.float32)
>>> h0 = flow.tensor(np.random.randn(2, 3, 20), dtype=flow.float32)
>>> output, hn = rnn(input, h0)
>>> output.size()
oneflow.Size([5, 3, 20])
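Following the note above, a minimal sketch of separating the forward and backward directions of a bidirectional RNN's output (assuming batch_first=False):

>>> import oneflow as flow
>>> rnn = flow.nn.RNN(10, 20, bidirectional=True)
>>> x = flow.randn(5, 3, 10)                  # (L=5, N=3, H_in=10)
>>> output, _ = rnn(x)                        # output: (5, 3, 2 * 20)
>>> directions = output.view(5, 3, 2, 20)     # (seq_len, batch, num_directions, hidden_size)
>>> forward_out = directions[:, :, 0, :]      # direction 0: forward
>>> backward_out = directions[:, :, 1, :]     # direction 1: backward
>>> forward_out.size()
oneflow.Size([5, 3, 20])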