AER

path: orion.primitives.aer.AER
description: this is an autoencoder-based model capable of creating both prediction-based and reconstruction-based anomaly scores.
see json.
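To make the description concrete, the sketch below is a highly simplified, hypothetical Keras model of the same idea. It is not the actual orion.primitives.aer.AER implementation; the layer choices and sizes are assumptions made only for illustration. An LSTM encoder compresses each input window, a decoder reconstructs it, and two small regression heads predict one value backward and one value forward in time, matching the ry_hat, y_hat and fy_hat outputs documented in the table below.

```python
# Minimal sketch of the AER idea -- NOT the actual orion.primitives.aer.AER model.
# Layer types and sizes here are illustrative assumptions.
import tensorflow as tf

window_size, n_channels, latent_dim = 100, 1, 20  # illustrative values

inputs = tf.keras.Input(shape=(window_size, n_channels))
z = tf.keras.layers.LSTM(latent_dim)(inputs)            # encoder: compress the window
ry = tf.keras.layers.Dense(1, name="reverse")(z)        # regression head (backward)
fy = tf.keras.layers.Dense(1, name="forward")(z)        # regression head (forward)

d = tf.keras.layers.RepeatVector(window_size)(z)        # decoder: expand the latent code
d = tf.keras.layers.LSTM(latent_dim, return_sequences=True)(d)
y_hat = tf.keras.layers.TimeDistributed(
    tf.keras.layers.Dense(n_channels), name="reconstruction")(d)

model = tf.keras.Model(inputs, [ry, y_hat, fy])
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001), loss="mse")
```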
argument | type | description
--- | --- | ---
**parameters** | |
X | numpy.ndarray | n-dimensional array containing the input sequences for the model
y | numpy.ndarray | n-dimensional array containing the target sequences we want to reconstruct. Typically y is a signal from a selected set of channels of X.
**hyperparameters** | |
epochs | int | number of epochs to train the model. An epoch is an iteration over the entire X data provided
input_shape | tuple | tuple denoting the shape of an input sample
optimizer | str | string (name of optimizer) or optimizer instance. Default is keras.optimizers.Adam
learning_rate | float | float denoting the learning rate of the optimizer. Default is 0.001
batch_size | int | number of samples per gradient update. Default is 64
layers_encoder | list | list containing the layers of the encoder
layers_generator | list | list containing the layers of the generator
**output** | |
ry_hat | numpy.ndarray | n-dimensional array containing the regression (reverse prediction) for each input sequence
y_hat | numpy.ndarray | n-dimensional array containing the reconstructions for each input sequence
fy_hat | numpy.ndarray | n-dimensional array containing the regression (forward prediction) for each input sequence
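The three outputs are what feed the prediction-based and reconstruction-based anomaly scores. As a rough illustration only (this is not Orion's scoring logic), the numpy snippet below turns them into per-window errors; it assumes ry_hat and fy_hat align with the first and last timestamp of each target window, which is an assumption made for this sketch. The session below produces arrays of exactly these shapes.

```python
import numpy as np

def simple_errors(y, ry_hat, y_hat, fy_hat):
    """Illustrative error computation only -- not Orion's scoring primitive.

    y      : (n_windows, window_size, 1) target sequences
    ry_hat : (n_windows, 1) reverse predictions
    y_hat  : (n_windows, window_size, 1) reconstructions
    fy_hat : (n_windows, 1) forward predictions
    """
    # reconstruction-based error: mean absolute error per window
    reconstruction_error = np.mean(np.abs(y - y_hat), axis=(1, 2))

    # prediction-based errors: compared here against the first and last
    # timestamps of each window (alignment is an assumption of this sketch)
    reverse_error = np.abs(y[:, 0, 0] - ry_hat[:, 0])
    forward_error = np.abs(y[:, -1, 0] - fy_hat[:, 0])

    return reconstruction_error, reverse_error, forward_error
```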
In [1]: import numpy as np

In [2]: from mlstars import load_primitive

In [3]: X = np.ones((64, 100, 1))

In [4]: y = X[:, :, [0]]  # signal to reconstruct from X (channel 0)

In [5]: primitive = load_primitive('orion.primitives.aer.AER',
   ...:                            arguments={"X": X, "y": y, "epochs": 1, "batch_size": 1})
   ...:

In [6]: primitive.fit()
51/51 [==============================] - 6s 51ms/step - loss: 0.1223 - tf.__operators__.getitem_loss: 0.1073 - tf.__operators__.getitem_1_loss: 0.1006 - tf.__operators__.getitem_2_loss: 0.1807 - val_loss: 0.0521 - val_tf.__operators__.getitem_loss: 0.0842 - val_tf.__operators__.getitem_1_loss: 0.0198 - val_tf.__operators__.getitem_2_loss: 0.0847

In [7]: ry, y, fy = primitive.produce(X=X)
2/2 [==============================] - 0s 9ms/step
2/2 [==============================] - 0s 11ms/step

In [8]: print("Reverse Prediction: {}\nReconstructed Values: {}, Forward Prediction: {}".format(ry, y, fy))
Reverse Prediction: [[0.70987779]
 [0.70987779]
 [0.70987779]
 ...
 [0.70987779]]
Reconstructed Values: [[[0.80058991]
  [0.87140675]
  [0.9268512 ]
  ...
  [0.95519217]
  [0.89325505]
  [0.81269645]]

 ...

 [[0.80058991]
  [0.87140675]
  [0.9268512 ]
  ...
  [0.95519217]
  [0.89325505]
  [0.81269645]]], Forward Prediction: [[0.70891031]
 [0.70891031]
 [0.70891031]
 ...
 [0.70891031]]
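Building on the session above, other hyperparameters from the table can be supplied through the same arguments dictionary. The sketch below uses arbitrary values: epochs and batch_size are shown working this way in the session above, while passing learning_rate the same way is an assumption of this example.

```python
import numpy as np
from mlstars import load_primitive

X = np.ones((64, 100, 1))
y = X[:, :, [0]]  # target channel to reconstruct

# Arbitrary illustrative values for hyperparameters listed in the table above.
primitive = load_primitive(
    'orion.primitives.aer.AER',
    arguments={
        "X": X,
        "y": y,
        "epochs": 5,             # as in the session above, just more epochs
        "batch_size": 64,        # default value from the table
        "learning_rate": 0.001,  # assumed to be accepted the same way
    },
)

primitive.fit()
ry_hat, y_hat, fy_hat = primitive.produce(X=X)
```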