LSTM AE

path: keras.Sequential.LSTMSeq2Seq

description: a reconstruction autoencoder that encodes and decodes input sequences using LSTM layers.

See the primitive's JSON annotation for the full specification.

| argument | type | description |
| -------- | ---- | ----------- |
| **parameters** | | |
| `X` | `numpy.ndarray` | n-dimensional array containing the input sequences for the model |
| `y` | `numpy.ndarray` | n-dimensional array containing the target sequences for the model |
| **hyperparameters** | | |
| `classification` | `bool` | indicator of whether this is a classification or regression model. Default is `False` |
| `epochs` | `int` | number of epochs to train the model. An epoch is an iteration over the entire `X` and `y` data provided. Default is 35 |
| `callbacks` | `list` | list of callbacks to apply during training |
| `validation_split` | `float` | float between 0 and 1. Fraction of the training data to be used as validation data. Default is 0.2 |
| `batch_size` | `int` | number of samples per gradient update. Default is 64 |
| `window_size` | `int` | integer denoting the size of the window per input sample |
| `input_shape` | `tuple` | tuple denoting the shape of an input sample |
| `target_shape` | `tuple` | tuple denoting the shape of an output sample |
| `optimizer` | `str` | string (name of optimizer) or optimizer instance. Default is `keras.optimizers.Adam` |
| `loss` | `str` | string (name of the objective function) or an objective function instance. Default is `keras.losses.mean_squared_error` |
| `metrics` | `list` | list of metrics to be evaluated by the model during training and testing. Default is `["mse"]` |
| `return_sequences` | `bool` | whether to return the last output in the output sequence or the full sequence. Default is `False` |
| `layers` | `list` | list of keras layers, which are the basic building blocks of a neural network |
| `verbose` | `bool` | verbosity mode. Default is `False` |
| `lstm_1_unit` | `int` | dimensionality of the output space for the first LSTM layer. Default is 80 |
| `dropout_1_rate` | `float` | float between 0 and 1. Fraction of the units to drop for the linear transformation of the inputs in the first LSTM layer. Default is 0.3 |
| `lstm_2_unit` | `int` | dimensionality of the output space for the second LSTM layer. Default is 80 |
| `dropout_2_rate` | `float` | float between 0 and 1. Fraction of the units to drop for the linear transformation of the inputs in the second LSTM layer. Default is 0.3 |
| **output** | | |
| `y` | `numpy.ndarray` | predicted values |
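The `window_size`, `input_shape`, and `target_shape` hyperparameters all describe how a raw series is cut into fixed-length samples before it reaches the model. As a minimal sketch (plain NumPy, not part of the primitive; the `to_windows` helper is hypothetical), a 1-d series can be turned into overlapping windows shaped for `input_shape=(window_size, 1)` like this:

```python
import numpy as np

def to_windows(series, window_size, step=1):
    """Slice a 1-d series into overlapping windows of shape
    (n_windows, window_size, 1)."""
    series = np.asarray(series, dtype=float)
    starts = range(0, len(series) - window_size + 1, step)
    windows = np.stack([series[s:s + window_size] for s in starts])
    return windows[..., np.newaxis]  # add the trailing feature dimension

series = np.sin(np.linspace(0, 10, 120))
X = to_windows(series, window_size=100)
print(X.shape)  # (21, 100, 1): 120 - 100 + 1 windows of length 100
```

For a reconstruction autoencoder like this one, the target sequences are the inputs themselves, i.e. `y = X`, which is why the session below passes the same array for both arguments.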

In [1]: import numpy as np

In [2]: from mlstars import load_primitive

In [3]: X = np.array([1] * 100).reshape(1, -1, 1)

In [4]: primitive = load_primitive('keras.Sequential.LSTMSeq2Seq',
   ...:     arguments={"X": X, "y": X, "input_shape":(100, 1), "target_shape":(100, 1),
   ...:                "window_size": 100, "batch_size": 1, "validation_split": 0, "epochs": 5})
   ...: 

In [5]: primitive.fit()
Epoch 1/5

1/1 [==============================] - ETA: 0s - loss: 0.9923 - mse: 0.9923
1/1 [==============================] - 2s 2s/step - loss: 0.9923 - mse: 0.9923
Epoch 2/5

1/1 [==============================] - ETA: 0s - loss: 0.7640 - mse: 0.7640
1/1 [==============================] - 0s 22ms/step - loss: 0.7640 - mse: 0.7640
Epoch 3/5

1/1 [==============================] - ETA: 0s - loss: 0.5618 - mse: 0.5618
1/1 [==============================] - 0s 22ms/step - loss: 0.5618 - mse: 0.5618
Epoch 4/5

1/1 [==============================] - ETA: 0s - loss: 0.3806 - mse: 0.3806
1/1 [==============================] - 0s 21ms/step - loss: 0.3806 - mse: 0.3806
Epoch 5/5

1/1 [==============================] - ETA: 0s - loss: 0.2211 - mse: 0.2211
1/1 [==============================] - 0s 23ms/step - loss: 0.2211 - mse: 0.2211

In [6]: pred = primitive.produce(X=X)

1/1 [==============================] - ETA: 0s
1/1 [==============================] - 0s 468ms/step

In [7]: pred.mean()
Out[7]: 0.7112000477046491
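Because this is a reconstruction model, a common downstream step (not part of the primitive itself) is to score anomalies by reconstruction error: points the autoencoder reconstructs poorly are flagged. A hedged NumPy sketch, where `pred` is simulated rather than taken from the session above:

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.ones((1, 100, 1))                 # input sequences, as in the session above
pred = X + rng.normal(0, 0.1, X.shape)   # stand-in for primitive.produce(X=X)

# Point-wise absolute reconstruction error; large values indicate anomalies.
errors = np.abs(X - pred).reshape(-1)

# A simple mean + 3*std threshold; real pipelines often use more robust rules.
threshold = errors.mean() + 3 * errors.std()
anomalies = np.flatnonzero(errors > threshold)
print(errors.shape)  # (100,): one error per time step
```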