LSTM AE
path: keras.Sequential.LSTMSeq2Seq
description: an autoencoder model that reconstructs its input sequences using LSTM layers.
see json.
| argument | type | description |
|---|---|---|
| **parameters** | | |
| `X` | `ndarray` | n-dimensional array containing the input sequences for the model |
| `y` | `ndarray` | n-dimensional array containing the target sequences for the model |
| **hyperparameters** | | |
| `classification` | `bool` | indicator of whether this is a classification or regression model. Default is False |
| `epochs` | `int` | number of epochs to train the model. An epoch is an iteration over the entire X and y data provided. Default is 35 |
| `callbacks` | `list` | list of callbacks to apply during training |
| `validation_split` | `float` | float between 0 and 1. Fraction of the training data to be used as validation data. Default is 0.2 |
| `batch_size` | `int` | number of samples per gradient update. Default is 64 |
| `window_size` | `int` | integer denoting the size of the window per input sample |
| `input_shape` | `tuple` | tuple denoting the shape of an input sample |
| `target_shape` | `tuple` | tuple denoting the shape of an output sample |
| `optimizer` | `str` | string (name of optimizer) or optimizer instance |
| `loss` | `str` | string (name of the objective function) or an objective function instance |
| `metrics` | `list` | list of metrics to be evaluated by the model during training and testing. Default is ["mse"] |
| `return_sequences` | `bool` | whether to return the last output in the output sequence, or the full sequence. Default is False |
| `layers` | `list` | list of keras layers which are the basic building blocks of a neural network |
| `verbose` | `bool` | verbosity mode. Default is False |
| `lstm_1_units` | `int` | dimensionality of the output space for the first LSTM layer. Default is 80 |
| `dropout_1_rate` | `float` | float between 0 and 1. Fraction of the units to drop for the linear transformation of the inputs for the first LSTM layer. Default is 0.3 |
| `lstm_2_units` | `int` | dimensionality of the output space for the second LSTM layer. Default is 80 |
| `dropout_2_rate` | `float` | float between 0 and 1. Fraction of the units to drop for the linear transformation of the inputs for the second LSTM layer. Default is 0.3 |
| **output** | | |
| `y` | `ndarray` | predicted values |
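The `window_size` and `input_shape` hyperparameters both describe the layout of a single input sample: a univariate series is cut into windows of `window_size` timesteps, each shaped `(window_size, 1)`. The primitive does not perform this slicing itself; a minimal sketch of the upstream preprocessing, using plain NumPy (the function name `make_windows` is illustrative, not part of the library):

```python
import numpy as np

def make_windows(series, window_size):
    """Slice a 1-d series into overlapping windows shaped
    (num_windows, window_size, 1), matching an LSTM
    input_shape of (window_size, 1)."""
    windows = np.lib.stride_tricks.sliding_window_view(series, window_size)
    # Add a trailing feature axis for the single channel.
    return windows[..., np.newaxis].astype("float32")

series = np.sin(np.linspace(0, 10, 120))
X = make_windows(series, window_size=100)
print(X.shape)  # (21, 100, 1): 120 - 100 + 1 overlapping windows
```

With `target_shape` equal to `input_shape`, the same windows serve as both `X` and `y`, which is how the reconstruction setup in the example below uses them.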
In [1]: import numpy as np
In [2]: from mlstars import load_primitive
In [3]: X = np.array([1] * 100).reshape(1, 100, 1)
In [4]: primitive = load_primitive('keras.Sequential.LSTMSeq2Seq',
...: arguments={"X": X, "y": X, "input_shape":(100, 1), "target_shape":(100, 1),
...: "window_size": 100, "batch_size": 1, "validation_split": 0, "epochs": 5})
...:
In [5]: primitive.fit()
Epoch 1/5
1/1 [==============================] - ETA: 0s - loss: 0.7061 - mse: 0.7061
1/1 [==============================] - 2s 2s/step - loss: 0.7061 - mse: 0.7061
Epoch 2/5
1/1 [==============================] - ETA: 0s - loss: 0.4771 - mse: 0.4771
1/1 [==============================] - 0s 22ms/step - loss: 0.4771 - mse: 0.4771
Epoch 3/5
1/1 [==============================] - ETA: 0s - loss: 0.2720 - mse: 0.2720
1/1 [==============================] - 0s 21ms/step - loss: 0.2720 - mse: 0.2720
Epoch 4/5
1/1 [==============================] - ETA: 0s - loss: 0.1073 - mse: 0.1073
1/1 [==============================] - 0s 20ms/step - loss: 0.1073 - mse: 0.1073
Epoch 5/5
1/1 [==============================] - ETA: 0s - loss: 0.0218 - mse: 0.0218
1/1 [==============================] - 0s 20ms/step - loss: 0.0218 - mse: 0.0218
In [6]: pred = primitive.produce(X=X)
1/1 [==============================] - ETA: 0s
1/1 [==============================] - 0s 467ms/step
In [7]: pred.mean()
Out[7]: 1.196486446073707
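Because this primitive is a reconstruction autoencoder, its output is typically consumed by scoring each timestep with the reconstruction error: timesteps the model fails to reproduce are candidate anomalies. A minimal sketch of that post-processing step with NumPy, using stand-in arrays (in practice `pred` would come from `primitive.produce(X=X)` as above):

```python
import numpy as np

# Stand-ins for the real arrays: X is the original window,
# pred its reconstruction, both shaped (samples, timesteps, 1).
X = np.ones((1, 100, 1))
pred = np.full((1, 100, 1), 1.1)

# Point-wise absolute reconstruction error; large values mark
# timesteps the autoencoder could not reproduce well.
errors = np.abs(X - pred).squeeze()
print(errors.shape)   # (100,)
print(errors.mean())  # approximately 0.1
```

How the per-timestep errors are turned into anomaly intervals (thresholding, smoothing) is left to downstream primitives and is not part of this model.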