orion.primitives.aer.AER

path: orion.primitives.aer.AER
description: this is an autoencoder-based model capable of creating both prediction-based and reconstruction-based anomaly scores.
see json.
| argument | type | description |
| --- | --- | --- |
| **parameters** | | |
| X | numpy.ndarray | n-dimensional array containing the input sequences for the model |
| y | numpy.ndarray | n-dimensional array containing the target sequences we want to reconstruct. Typically y is a signal from a selected set of channels of X. |
| **hyperparameters** | | |
| epochs | int | number of epochs to train the model. An epoch is an iteration over the entire X data provided |
| input_shape | tuple | tuple denoting the shape of an input sample |
| optimizer | str | string (name of optimizer) or optimizer instance. Default is keras.optimizers.Adam |
| learning_rate | float | float denoting the learning rate of the optimizer. Default is 0.001 |
| batch_size | int | number of samples per gradient update. Default is 64 |
| layers_encoder | list | list containing the layers of the encoder |
| layers_generator | list | list containing the layers of the generator |
| **output** | | |
| ry_hat | numpy.ndarray | n-dimensional array containing the regression for each input sequence (reverse) |
| y_hat | numpy.ndarray | n-dimensional array containing the reconstructions for each input sequence |
| fy_hat | numpy.ndarray | n-dimensional array containing the regression for each input sequence (forward) |
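The hyperparameters above can be set when the primitive is loaded. The snippet below is a minimal sketch, assuming the same mlstars.load_primitive interface used in the full example that follows; the exact set of accepted keys is defined by the primitive's JSON annotation, and the values shown here are only illustrative.

```python
# Sketch (not from the original docs): overriding AER hyperparameters at load time.
import numpy as np
from mlstars import load_primitive

X = np.ones((64, 100, 1))
y = X[:, :, [0]]  # reconstruct channel 0 of X

primitive = load_primitive(
    'orion.primitives.aer.AER',
    arguments={
        "X": X,
        "y": y,
        "epochs": 5,             # passes over the entire X data
        "batch_size": 32,        # samples per gradient update
        "learning_rate": 0.001,  # optimizer learning rate (the documented default)
    },
)
```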
```
In [1]: import numpy as np

In [2]: from mlstars import load_primitive

In [3]: X = np.ones((64, 100, 1))

In [4]: y = X[:, :, [0]]  # signal to reconstruct from X (channel 0)

In [5]: primitive = load_primitive('orion.primitives.aer.AER',
   ...:                            arguments={"X": X, "y": y, "epochs": 1, "batch_size": 1})
   ...:

In [6]: primitive.fit()
51/51 [==============================] - 6s 41ms/step - loss: 0.1140 - tf.__operators__.getitem_loss: 0.1313 - tf.__operators__.getitem_1_loss: 0.0963 - tf.__operators__.getitem_2_loss: 0.1319 - val_loss: 0.0411 - val_tf.__operators__.getitem_loss: 0.0616 - val_tf.__operators__.getitem_1_loss: 0.0200 - val_tf.__operators__.getitem_2_loss: 0.0630

In [7]: ry, y, fy = primitive.produce(X=X)
2/2 [==============================] - 0s 10ms/step
2/2 [==============================] - 0s 11ms/step

In [8]: print("Reverse Prediction: {}\nReconstructed Values: {}, Forward Prediction: {}".format(ry, y, fy))
Reverse Prediction: [[0.75185597]
 [0.75185597]
 [0.75185597]
 ...
 [0.75185597]]
Reconstructed Values: [[[0.85202879]
  [0.92947942]
  [0.98824289]
  ...
  [1.03146749]
  [0.9733126 ]
  [0.88472319]]
 ...
 [[0.85202879]
  [0.92947942]
  [0.98824289]
  ...
  [1.03146749]
  [0.9733126 ]
  [0.88472319]]], Forward Prediction: [[0.74904007]
 [0.74904007]
 [0.74904007]
 ...
 [0.74904007]]
```
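As described above, AER supports both prediction-based and reconstruction-based anomaly scoring. In Orion pipelines this step is performed by downstream error-computation primitives; the sketch below is only an illustrative approximation in plain numpy, assuming the ry, reconstructed y, and fy outputs produced above and a simple convention that the reverse and forward predictions are compared against the first and last values of each target window.

```python
# Illustrative sketch (not the Orion scoring primitive): turn the three AER
# outputs into a rough per-sequence anomaly score.
import numpy as np

def simple_anomaly_scores(y_true, ry, y_hat, fy):
    # prediction-based errors: absolute error of the reverse/forward predictions
    # against the first/last target value of each window (assumed convention)
    reverse_error = np.abs(y_true[:, 0, 0] - ry[:, 0])
    forward_error = np.abs(y_true[:, -1, 0] - fy[:, 0])

    # reconstruction-based error: mean absolute error over each sequence
    reconstruction_error = np.mean(np.abs(y_true - y_hat), axis=(1, 2))

    # combine the signals; Orion's actual scoring combines errors more carefully
    return (reverse_error + forward_error) / 2 + reconstruction_error

# usage with the example above (note that produce() rebinds the name `y`,
# so keep a separate reference to the original target array):
# y_true = X[:, :, [0]]
# ry, y_hat, fy = primitive.produce(X=X)
# scores = simple_anomaly_scores(y_true, ry, y_hat, fy)
```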