AER

path: orion.primitives.aer.AER

description: this is an autoencoder-based model capable of creating both prediction-based and reconstruction-based anomaly scores.

see the primitive's JSON annotation.
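Before the argument reference below, it may help to see how the `X`/`y` convention works in practice. The following is a minimal numpy sketch (not part of the primitive) that builds sliding windows `X` from a signal and selects channel 0 as the target `y`; the window length of 100 matches the session further down, but the signal itself is made up for illustration.

```python
import numpy as np

# Illustrative signal: 200 samples of a sine wave with a single channel.
signal = np.sin(np.linspace(0, 8 * np.pi, 200)).reshape(-1, 1)

# Build overlapping windows of length 100 to form the input sequences X.
window = 100
X = np.stack([signal[i:i + window] for i in range(len(signal) - window + 1)])

# The target y is the signal from a selected set of channels of X (here, channel 0).
y = X[:, :, [0]]
```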

parameters:

    X (numpy.ndarray): n-dimensional array containing the input sequences for the model.
    y (numpy.ndarray): n-dimensional array containing the target sequences to reconstruct. Typically y is a signal from a selected set of channels of X.

hyperparameters:

    epochs (int): number of epochs to train the model. An epoch is one iteration over the entire X data provided. 
    input_shape (tuple): tuple denoting the shape of an input sample.
    optimizer (str): string (name of optimizer) or optimizer instance. Default is keras.optimizers.Adam.
    learning_rate (float): learning rate of the optimizer. Default is 0.001.
    batch_size (int): number of samples per gradient update. Default is 64.
    layers_encoder (list): list containing the layers of the encoder.
    layers_generator (list): list containing the layers of the generator.

output:

    ry_hat (numpy.ndarray): n-dimensional array containing the reverse (backward) prediction for each input sequence.
    y_hat (numpy.ndarray): n-dimensional array containing the reconstruction of each input sequence.
    fy_hat (numpy.ndarray): n-dimensional array containing the forward prediction for each input sequence.
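In Orion pipelines, turning these three outputs into anomaly scores is handled by downstream primitives; the snippet below is only a minimal numpy sketch of how reverse-prediction, reconstruction, and forward-prediction errors could be combined per sequence. The `combined_scores` function, the choice of which timestep each prediction is compared against, and the equal weighting are all illustrative assumptions, not the library's actual scoring.

```python
import numpy as np

def combined_scores(X, ry_hat, y_hat, fy_hat):
    # Hypothetical scoring sketch; which timestep each prediction targets
    # depends on the primitive's windowing, assumed here for illustration.
    rec_err = np.mean(np.abs(X - y_hat), axis=(1, 2))   # reconstruction error
    rev_err = np.abs(X[:, 0, 0] - ry_hat[:, 0])         # reverse-prediction error
    fwd_err = np.abs(X[:, -1, 0] - fy_hat[:, 0])        # forward-prediction error
    return rec_err + 0.5 * (rev_err + fwd_err)

# Toy inputs with the same shapes as the outputs documented above.
scores = combined_scores(
    X=np.ones((4, 10, 1)),
    ry_hat=np.full((4, 1), 0.9),
    y_hat=np.full((4, 10, 1), 0.95),
    fy_hat=np.full((4, 1), 0.9),
)
```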

In [1]: import numpy as np

In [2]: from mlstars import load_primitive

In [3]: X = np.ones((64, 100, 1))

In [4]: y = X[:,:, [0]] # signal to reconstruct from X (channel 0)

In [5]: primitive = load_primitive('orion.primitives.aer.AER',
   ...:     arguments={"X": X, "y": y, "epochs": 1, "batch_size": 1})
   ...: 

In [6]: primitive.fit()

51/51 [==============================] - 7s 56ms/step - loss: 0.0819 - tf.__operators__.getitem_loss: 0.1208 - tf.__operators__.getitem_1_loss: 0.0671 - tf.__operators__.getitem_2_loss: 0.0724 - val_loss: 0.0345 - val_tf.__operators__.getitem_loss: 0.0534 - val_tf.__operators__.getitem_1_loss: 0.0134 - val_tf.__operators__.getitem_2_loss: 0.0580

In [7]: ry, y, fy = primitive.produce(X=X)

2/2 [==============================] - 0s 10ms/step

2/2 [==============================] - 1s 12ms/step

In [8]: print("Reverse Prediction: {}\nReconstructed Values: {}, Forward Prediction: {}".format(ry, y, fy))
Reverse Prediction: [[0.76895839]
 [0.76895839]
 [0.76895839]
 ...
 [0.76895839]
 [0.76895839]
 [0.76895839]]
Reconstructed Values: [[[0.90440903]
  [0.98825211]
  [1.03952519]
  ...
  [1.00278176]
  [0.95202465]
  [0.87533848]]

 ...

 [[0.90440903]
  [0.98825211]
  [1.03952519]
  ...
  [1.00278176]
  [0.95202465]
  [0.87533848]]], Forward Prediction: [[0.75921422]
 [0.75921422]
 [0.75921422]
 ...
 [0.75921422]
 [0.75921422]
 [0.75921422]]