class tf.contrib.rnn.CoupledInputForgetGateLSTMCell
See the guide: RNN and Cells (contrib) > Core RNN Cell wrappers (RNNCells that wrap other RNNCells)
Long short-term memory unit (LSTM) recurrent network cell.
The default non-peephole implementation is based on:
http://deeplearning.cs.cmu.edu/pdfs/Hochreiter97_lstm.pdf
S. Hochreiter and J. Schmidhuber. "Long Short-Term Memory". Neural Computation, 9(8):1735-1780, 1997.
The peephole implementation is based on:
https://research.google.com/pubs/archive/43905.pdf
Hasim Sak, Andrew Senior, and Francoise Beaufays. "Long short-term memory recurrent neural network architectures for large scale acoustic modeling." INTERSPEECH, 2014.
The coupling of input and forget gate is based on:
http://arxiv.org/pdf/1503.04069.pdf
Greff et al. "LSTM: A Search Space Odyssey"
The class uses optional peephole connections and an optional projection layer.
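For orientation, a minimal usage sketch (assuming TensorFlow 1.x, where tf.contrib is available; the batch, time, and size constants and the pairing with tf.nn.dynamic_rnn are illustrative choices, not taken from this page):

import tensorflow as tf

batch_size, max_time, input_depth = 32, 20, 16
num_units, num_proj = 64, 32

# A batch of input sequences: [batch_size, max_time, input_depth].
inputs = tf.placeholder(tf.float32, [batch_size, max_time, input_depth])

# Coupled input/forget-gate LSTM with peephole connections and a projection layer.
cell = tf.contrib.rnn.CoupledInputForgetGateLSTMCell(
    num_units,
    use_peepholes=True,
    num_proj=num_proj,
    state_is_tuple=True)

# Unroll the cell over the time dimension; outputs has shape [batch_size, max_time, num_proj].
outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)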
Properties
output_size
state_size
Methods
__init__(num_units, use_peepholes=False, initializer=None, num_proj=None, proj_clip=None, num_unit_shards=1, num_proj_shards=1, forget_bias=1.0, state_is_tuple=False, activation=tf.tanh)
Initialize the parameters for an LSTM cell.
Args:
num_units: int, The number of units in the LSTM cell.
use_peepholes: bool, set True to enable diagonal/peephole connections.
initializer: (optional) The initializer to use for the weight and projection matrices.
num_proj: (optional) int, The output dimensionality for the projection matrices. If None, no projection is performed.
proj_clip: (optional) A float value. If num_proj > 0 and proj_clip is provided, then the projected values are clipped elementwise to within [-proj_clip, proj_clip].
num_unit_shards: How to split the weight matrix. If >1, the weight matrix is stored across num_unit_shards.
num_proj_shards: How to split the projection matrix. If >1, the projection matrix is stored across num_proj_shards.
forget_bias: Biases of the forget gate are initialized by default to 1 in order to reduce the scale of forgetting at the beginning of training.
state_is_tuple: If True, accepted and returned states are 2-tuples of the c_state and m_state. By default (False), they are concatenated along the column axis. This default behavior will soon be deprecated (illustrated in the sketch after this argument list).
activation: Activation function of the inner states.
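To make the interaction of num_proj and state_is_tuple concrete, a small sketch (assuming this cell follows the usual LSTMCell conventions for output_size and state_size; the values in the comments are expectations under that assumption, not verified output):

import tensorflow as tf

num_units, num_proj = 64, 32

# With state_is_tuple=True the state is a (c_state, m_state) pair.
tuple_cell = tf.contrib.rnn.CoupledInputForgetGateLSTMCell(
    num_units, num_proj=num_proj, state_is_tuple=True)
print(tuple_cell.output_size)  # expected: 32 (num_proj if set, else num_units)
print(tuple_cell.state_size)   # expected: a 2-tuple with c of size 64, m of size 32

# With the soon-to-be-deprecated default, c_state and m_state are concatenated
# along the column axis into a single state of width 64 + 32 = 96.
concat_cell = tf.contrib.rnn.CoupledInputForgetGateLSTMCell(
    num_units, num_proj=num_proj, state_is_tuple=False)
print(concat_cell.state_size)  # expected: 96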
zero_state(batch_size, dtype)
Return zero-filled state tensor(s).
Args:
batch_size: int, float, or unit Tensor representing the batch size.
dtype: the data type to use for the state.
If state_size is an int or TensorShape, then the return value is an N-D tensor of shape [batch_size x state_size] filled with zeros.
If state_size is a nested list or tuple, then the return value is a nested list or tuple (of the same structure) of 2-D tensors with the shapes [batch_size x s] for each s in state_size.
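A short zero_state sketch (again assuming TensorFlow 1.x; the batch size and unit count are arbitrary):

import tensorflow as tf

cell = tf.contrib.rnn.CoupledInputForgetGateLSTMCell(64, state_is_tuple=True)

# state_size is a 2-tuple here, so zero_state returns a matching 2-tuple of
# [batch_size, s] tensors filled with zeros.
init_state = cell.zero_state(batch_size=8, dtype=tf.float32)
c_zeros, m_zeros = init_state
print(c_zeros.shape)  # (8, 64)
print(m_zeros.shape)  # (8, 64)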
Defined in tensorflow/contrib/rnn/python/ops/rnn_cell.py.
© 2017 The TensorFlow Authors. All rights reserved.
Licensed under the Creative Commons Attribution License 3.0.
Code samples licensed under the Apache 2.0 License.
https://www.tensorflow.org/api_docs/python/tf/contrib/rnn/CoupledInputForgetGateLSTMCell