orion.primitives.aer.AER

path: orion.primitives.aer.AER
description: this is an autoencoder-based model capable of creating both prediction-based and reconstruction-based anomaly scores.
see json.
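In practice this primitive usually runs as one block of an Orion pipeline rather than on its own, and the hyperparameters listed below can be overridden when the pipeline is loaded. The snippet below is only a sketch of that convention: the pipeline name 'aer', the '#1' suffix in the block key, and the placeholder DataFrames are assumptions based on Orion's usual MLBlocks naming, not taken from this page.

from orion import Orion

# Sketch: override a few of this primitive's hyperparameters inside an Orion pipeline.
# The pipeline name and the block key below are assumed from Orion's usual conventions.
hyperparameters = {
    'orion.primitives.aer.AER#1': {
        'epochs': 5,
        'batch_size': 64,
    },
}

orion = Orion(pipeline='aer', hyperparameters=hyperparameters)
# orion.fit(train_data) and orion.detect(test_data) would then train and use the model,
# where train_data / test_data are placeholder signal DataFrames.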
argument            type             description

parameters

X                   numpy.ndarray    n-dimensional array containing the input sequences for the model
y                   numpy.ndarray    n-dimensional array containing the target sequences we want to reconstruct; typically y is a signal from a selected set of channels from X

hyperparameters

epochs              int              number of epochs to train the model; an epoch is an iteration over the entire X data provided
input_shape         tuple            shape of an input sample
optimizer           str              string (name of optimizer) or optimizer instance; defaults to keras.optimizers.Adam
learning_rate       float            learning rate of the optimizer; defaults to 0.001
batch_size          int              number of samples per gradient update; defaults to 64
layers_encoder      list             list containing the layers of the encoder
layers_generator    list             list containing the layers of the generator

output

ry_hat              numpy.ndarray    n-dimensional array containing the regression for each input sequence (reverse)
y_hat               numpy.ndarray    n-dimensional array containing the reconstructions for each input sequence
fy_hat              numpy.ndarray    n-dimensional array containing the regression for each input sequence (forward)
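Before fitting, X is typically built by slicing a signal into fixed-length windows, and y is taken from the channel(s) that the model should reconstruct. The sketch below only illustrates that preparation with plain numpy; the random 3-channel signal, the window_size value, and the extra hyperparameter overrides passed through arguments are assumptions for illustration. fit and produce then behave as in the recorded session that follows.

import numpy as np
from mlstars import load_primitive

signal = np.random.randn(1000, 3)   # hypothetical multivariate signal: 1000 steps, 3 channels
window_size = 100                   # assumed window length

# slide a window over the signal to build the input sequences X
X = np.array([signal[i:i + window_size] for i in range(len(signal) - window_size + 1)])

# target sequences y: the channel we want AER to reconstruct, here channel 0
y = X[:, :, [0]]

# load the primitive, passing hyperparameters through the arguments dictionary
primitive = load_primitive(
    'orion.primitives.aer.AER',
    arguments={
        "X": X,
        "y": y,
        "epochs": 5,
        "batch_size": 64,
        "input_shape": (window_size, 3),  # assumed override to match the 3-channel windows
    },
)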
In [1]: import numpy as np

In [2]: from mlstars import load_primitive

In [3]: X = np.ones((64, 100, 1))

In [4]: y = X[:, :, [0]]  # signal to reconstruct from X (channel 0)

In [5]: primitive = load_primitive('orion.primitives.aer.AER',
   ...:                            arguments={"X": X, "y": y, "epochs": 1, "batch_size": 1})

In [6]: primitive.fit()
51/51 [==============================] - 6s 51ms/step - loss: 0.1418 - tf.__operators__.getitem_loss: 0.2426 - tf.__operators__.getitem_1_loss: 0.1100 - tf.__operators__.getitem_2_loss: 0.1045 - val_loss: 0.0482 - val_tf.__operators__.getitem_loss: 0.0758 - val_tf.__operators__.getitem_1_loss: 0.0171 - val_tf.__operators__.getitem_2_loss: 0.0826

In [7]: ry, y, fy = primitive.produce(X=X)
2/2 [==============================] - 0s 10ms/step
2/2 [==============================] - 0s 12ms/step

In [8]: print("Reverse Prediction: {}\nReconstructed Values: {}, Forward Prediction: {}".format(ry, y, fy))
Reverse Prediction: [[0.72470173]
 [0.72470173]
 [0.72470173]
 ...
 [0.72470173]
 [0.72470173]
 [0.72470173]]
Reconstructed Values: [[[0.83230102]
  [0.90975951]
  [0.96619513]
  ...
  [0.94908595]
  [0.88921066]
  [0.81186142]]

 [[0.83230102]
  [0.90975951]
  [0.96619513]
  ...
  [0.94908595]
  [0.88921066]
  [0.81186142]]

 ...

 [[0.83230102]
  [0.90975951]
  [0.96619513]
  ...
  [0.94908595]
  [0.88921066]
  [0.81186142]]], Forward Prediction: [[0.71258849]
 [0.71258849]
 [0.71258849]
 ...
 [0.71258849]
 [0.71258849]
 [0.71258849]]
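Each of the 64 input windows in the session above yields one reverse prediction in ry, one forward prediction in fy, and a reconstruction of the target window in y. As a very rough illustration of how these outputs relate to the targets (this is not the scoring procedure Orion applies when turning them into anomaly scores), the reconstruction can be compared against the all-ones demo signal with plain numpy:

import numpy as np

# continuing from the session above: ry and fy hold one value per window,
# while y holds the reconstructed target sequence for every window
print(ry.shape, fy.shape, y.shape)

# illustration only: per-window mean absolute reconstruction error against the
# all-ones target used in the demo (built from y's own shape so the sizes match)
target = np.ones_like(y)
reconstruction_error = np.abs(target - y).mean(axis=(1, 2))
print(reconstruction_error.shape)   # one error value per input window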