Orion
path: keras.Sequential.LSTMSeq2Seq
keras.Sequential.LSTMSeq2Seq
description: a reconstruction model, i.e. an autoencoder built from LSTM layers.
see json.
parameters

| argument | type | description |
| --- | --- | --- |
| X | numpy.ndarray | n-dimensional array containing the input sequences for the model |
| y | numpy.ndarray | n-dimensional array containing the target sequences for the model |

hyperparameters

| argument | type | description |
| --- | --- | --- |
| classification | bool | indicator of whether this is a classification or regression model. Default is False |
| epochs | int | number of epochs to train the model. An epoch is an iteration over the entire X and y data provided. Default is 35 |
| callbacks | list | list of callbacks to apply during training |
| validation_split | float | float between 0 and 1. Fraction of the training data to be used as validation data. Default is 0.2 |
| batch_size | int | number of samples per gradient update. Default is 64 |
| window_size | int | integer denoting the size of the window per input sample |
| input_shape | tuple | tuple denoting the shape of an input sample |
| target_shape | tuple | tuple denoting the shape of an output sample |
| optimizer | str | string (name of optimizer) or optimizer instance. Default is keras.optimizers.Adam |
| loss | str | string (name of the objective function) or an objective function instance. Default is keras.losses.mean_squared_error |
| metrics | list | list of metrics to be evaluated by the model during training and testing. Default is ["mse"] |
| return_sequences | bool | whether to return the last output in the output sequence, or the full sequence. Default is False |
| layers | list | list of keras layers, which are the basic building blocks of a neural network |
| verbose | bool | verbosity mode. Default is False |
| lstm_1_unit | int | dimensionality of the output space for the first LSTM layer. Default is 80 |
| dropout_1_rate | float | float between 0 and 1. Fraction of the units to drop for the linear transformation of the inputs for the first LSTM layer. Default is 0.3 |
| lstm_2_unit | int | dimensionality of the output space for the second LSTM layer. Default is 80 |
| dropout_2_rate | float | float between 0 and 1. Fraction of the units to drop for the linear transformation of the inputs for the second LSTM layer. Default is 0.3 |

output

| argument | type | description |
| --- | --- | --- |
| y | numpy.ndarray | predicted values |
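The `window_size`, `input_shape`, and `target_shape` hyperparameters must agree with how `X` is shaped: the primitive expects samples of shape `(window_size, n_features)`. As a hedged illustration of that shaping step (assuming a univariate series and a stride of 1; the `make_windows` helper below is hypothetical, not part of this primitive's API), overlapping windows can be built with plain NumPy:

```python
import numpy as np

# Hypothetical helper (not part of the primitive): slice a 1-D series
# into overlapping windows of length `window_size` with stride 1.
def make_windows(series, window_size):
    n = len(series) - window_size + 1
    windows = np.stack([series[i:i + window_size] for i in range(n)])
    # Reshape to (n_samples, window_size, n_features) for an LSTM input.
    return windows.reshape(n, window_size, 1)

series = np.arange(10, dtype=float)
X = make_windows(series, window_size=4)
print(X.shape)  # (7, 4, 1): 7 windows of length 4, 1 feature each
```

With `window_size=4` here, the matching hyperparameters would be `input_shape=(4, 1)` and `target_shape=(4, 1)`.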
```
In [1]: import numpy as np

In [2]: from mlstars import load_primitive

In [3]: X = np.array([1] * 100).reshape(1, -1, 1)

In [4]: primitive = load_primitive('keras.Sequential.LSTMSeq2Seq',
   ...:     arguments={"X": X, "y": X, "input_shape": (100, 1), "target_shape": (100, 1),
   ...:                "window_size": 100, "batch_size": 1, "validation_split": 0, "epochs": 5})

In [5]: primitive.fit()
Epoch 1/5
1/1 [==============================] - 2s 2s/step - loss: 1.0663 - mse: 1.0663
Epoch 2/5
1/1 [==============================] - 0s 22ms/step - loss: 0.7870 - mse: 0.7870
Epoch 3/5
1/1 [==============================] - 0s 22ms/step - loss: 0.5462 - mse: 0.5462
Epoch 4/5
1/1 [==============================] - 0s 22ms/step - loss: 0.3401 - mse: 0.3401
Epoch 5/5
1/1 [==============================] - 0s 23ms/step - loss: 0.1720 - mse: 0.1720

In [6]: pred = primitive.produce(X=X)
1/1 [==============================] - 0s 483ms/step

In [7]: pred.mean()
Out[7]: 0.7950445381401614
```
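In Orion, a reconstruction model like this is typically used for anomaly detection: the reconstruction `pred` is compared against the input, and time steps with large point-wise error are flagged. Below is a minimal NumPy sketch of that scoring step; the arrays stand in for the primitive's input and output, and the mean-plus-three-sigma threshold is illustrative, not part of this primitive:

```python
import numpy as np

# Stand-ins for the input and the model's reconstruction.
X = np.ones((1, 100, 1))
pred = np.full((1, 100, 1), 0.8)
pred[0, 40:45, 0] = -2.0  # pretend the model reconstructed this span badly

# Point-wise absolute reconstruction error.
errors = np.abs(X - pred).flatten()

# Illustrative threshold: mean plus three standard deviations.
threshold = errors.mean() + 3 * errors.std()
anomalous = np.where(errors > threshold)[0]
print(anomalous)  # indices 40..44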