AER

path: orion.primitives.aer.AER

description: this is an autoencoder-based model capable of creating both prediction-based and reconstruction-based anomaly scores.

See the primitive's JSON annotation for the full specification.
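
The autoencoder-with-regression idea can be sketched in a few lines. The block below is only a rough illustration under assumed layer choices (bidirectional LSTMs, a latent size of 20) and assumed variable names; it is not the primitive's actual implementation. It shows why the model can return a reverse prediction, a reconstruction, and a forward prediction for every input window, which is consistent with the per-output losses named `tf.__operators__.getitem_*_loss` in the training log further down (they come from slicing a single decoded sequence into three pieces).

```python
from tensorflow.keras import layers, models

# Illustrative sizes only -- not the primitive's defaults.
window_size, channels, latent_dim = 100, 1, 20

# Encoder: compress each input window into a latent vector.
inputs = layers.Input(shape=(window_size, channels))
encoded = layers.Bidirectional(layers.LSTM(latent_dim))(inputs)

# Decoder: emit a sequence that is two steps longer than the input window.
repeated = layers.RepeatVector(window_size + 2)(encoded)
decoded = layers.Bidirectional(layers.LSTM(latent_dim, return_sequences=True))(repeated)
sequence = layers.TimeDistributed(layers.Dense(channels))(decoded)

# Slice the decoded sequence into three outputs:
# first step  -> reverse (backward) prediction
# middle      -> reconstruction of the window
# last step   -> forward prediction
ry_hat = sequence[:, 0]
y_hat = sequence[:, 1:-1]
fy_hat = sequence[:, -1]

model = models.Model(inputs, [ry_hat, y_hat, fy_hat])
model.compile(optimizer='adam', loss='mse')
```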

| argument | type | description |
| --- | --- | --- |
| **parameters** | | |
| `X` | `numpy.ndarray` | n-dimensional array containing the input sequences for the model |
| `y` | `numpy.ndarray` | n-dimensional array containing the target sequences we want to reconstruct. Typically `y` is a signal from a selected set of channels from `X`. |
| **hyperparameters** | | |
| `epochs` | `int` | number of epochs to train the model. An epoch is an iteration over the entire `X` data provided. |
| `input_shape` | `tuple` | tuple denoting the shape of an input sample |
| `optimizer` | `str` | string (name of optimizer) or optimizer instance. Default is `keras.optimizers.Adam`. |
| `learning_rate` | `float` | float denoting the learning rate of the optimizer. Default is 0.001. |
| `batch_size` | `int` | number of samples per gradient update. Default is 64. |
| `layers_encoder` | `list` | list containing the layers of the encoder |
| `layers_generator` | `list` | list containing the layers of the generator |
| **output** | | |
| `ry_hat` | `numpy.ndarray` | n-dimensional array containing the regression for each input sequence (reverse) |
| `y_hat` | `numpy.ndarray` | n-dimensional array containing the reconstructions for each input sequence |
| `fy_hat` | `numpy.ndarray` | n-dimensional array containing the regression for each input sequence (forward) |
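
All of the hyperparameters above are passed through the `arguments` dictionary when the primitive is loaded, in the same way the session below passes `epochs` and `batch_size`. The values in this sketch (five epochs, a batch size of 32, the default learning rate spelled out explicitly) are arbitrary and only illustrate the call; the session on this page uses different values.

```python
import numpy as np
from mlstars import load_primitive

X = np.ones((64, 100, 1))
y = X[:, :, [0]]  # target channel selected from X

# Arbitrary hyperparameter values, for illustration only.
primitive = load_primitive(
    'orion.primitives.aer.AER',
    arguments={
        "X": X,
        "y": y,
        "epochs": 5,
        "batch_size": 32,
        "learning_rate": 0.001,
    })
```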

In [1]: import numpy as np

In [2]: from mlstars import load_primitive

In [3]: X = np.ones((64, 100, 1))

In [4]: y = X[:,:, [0]] # signal to reconstruct from X (channel 0)

In [5]: primitive = load_primitive('orion.primitives.aer.AER',
   ...:     arguments={"X": X, "y": y, "epochs": 1, "batch_size": 1})
   ...: 

In [6]: primitive.fit()

51/51 [==============================] - 6s 43ms/step - loss: 0.1310 - tf.__operators__.getitem_loss: 0.1316 - tf.__operators__.getitem_1_loss: 0.1173 - tf.__operators__.getitem_2_loss: 0.1576 - val_loss: 0.0452 - val_tf.__operators__.getitem_loss: 0.0681 - val_tf.__operators__.getitem_1_loss: 0.0273 - val_tf.__operators__.getitem_2_loss: 0.0580

In [7]: ry, y, fy = primitive.produce(X=X)

2/2 [==============================] - 0s 9ms/step

2/2 [==============================] - 0s 11ms/step

In [8]: print("Reverse Prediction: {}\nReconstructed Values: {}, Forward Prediction: {}".format(ry, y, fy))
Reverse Prediction: [[0.73902399]
 [0.73902399]
 [0.73902399]
 [0.73902399]
 [0.73902399]
 [0.73902399]
 [0.73902399]
 [0.73902399]
 [0.73902399]
 [0.73902399]
 [0.73902399]
 [0.73902399]
 [0.73902399]
 [0.73902399]
 [0.73902399]
 [0.73902399]
 [0.73902399]
 [0.73902399]
 [0.73902399]
 [0.73902399]
 [0.73902399]
 [0.73902399]
 [0.73902399]
 [0.73902399]
 [0.73902399]
 [0.73902399]
 [0.73902399]
 [0.73902399]
 [0.73902399]
 [0.73902399]
 [0.73902399]
 [0.73902399]
 [0.73902399]
 [0.73902399]
 [0.73902399]
 [0.73902399]
 [0.73902399]
 [0.73902399]
 [0.73902399]
 [0.73902399]
 [0.73902399]
 [0.73902399]
 [0.73902399]
 [0.73902399]
 [0.73902399]
 [0.73902399]
 [0.73902399]
 [0.73902399]
 [0.73902399]
 [0.73902399]
 [0.73902399]
 [0.73902399]
 [0.73902399]
 [0.73902399]
 [0.73902399]
 [0.73902399]
 [0.73902399]
 [0.73902399]
 [0.73902399]
 [0.73902399]
 [0.73902399]
 [0.73902399]
 [0.73902399]
 [0.73902399]]
Reconstructed Values: [[[0.84885972]
  [0.92837642]
  [0.986791  ]
  ...
  [1.01359031]
  [0.95272911]
  [0.87024111]]

 [[0.84885972]
  [0.92837642]
  [0.986791  ]
  ...
  [1.01359031]
  [0.95272911]
  [0.87024111]]

 [[0.84885972]
  [0.92837642]
  [0.986791  ]
  ...
  [1.01359031]
  [0.95272911]
  [0.87024111]]

 ...

 [[0.84885972]
  [0.92837642]
  [0.986791  ]
  ...
  [1.01359031]
  [0.95272911]
  [0.87024111]]

 [[0.84885972]
  [0.92837642]
  [0.986791  ]
  ...
  [1.01359031]
  [0.95272911]
  [0.87024111]]

 [[0.84885972]
  [0.92837642]
  [0.986791  ]
  ...
  [1.01359031]
  [0.95272911]
  [0.87024111]]], Forward Prediction: [[0.75915156]
 [0.75915156]
 [0.75915156]
 [0.75915156]
 [0.75915156]
 [0.75915156]
 [0.75915156]
 [0.75915156]
 [0.75915156]
 [0.75915156]
 [0.75915156]
 [0.75915156]
 [0.75915156]
 [0.75915156]
 [0.75915156]
 [0.75915156]
 [0.75915156]
 [0.75915156]
 [0.75915156]
 [0.75915156]
 [0.75915156]
 [0.75915156]
 [0.75915156]
 [0.75915156]
 [0.75915156]
 [0.75915156]
 [0.75915156]
 [0.75915156]
 [0.75915156]
 [0.75915156]
 [0.75915156]
 [0.75915156]
 [0.75915156]
 [0.75915156]
 [0.75915156]
 [0.75915156]
 [0.75915156]
 [0.75915156]
 [0.75915156]
 [0.75915156]
 [0.75915156]
 [0.75915156]
 [0.75915156]
 [0.75915156]
 [0.75915156]
 [0.75915156]
 [0.75915156]
 [0.75915156]
 [0.75915156]
 [0.75915156]
 [0.75915156]
 [0.75915156]
 [0.75915156]
 [0.75915156]
 [0.75915156]
 [0.75915156]
 [0.75915156]
 [0.75915156]
 [0.75915156]
 [0.75915156]
 [0.75915156]
 [0.75915156]
 [0.75915156]
 [0.75915156]]
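
As noted in the description, AER supports both prediction-based and reconstruction-based anomaly scores. The sketch below is a generic numpy illustration of how the three outputs could be turned into per-window errors. The function name and the exact error definitions are assumptions made here for clarity and are not the scoring function used by the Orion pipelines; in particular, the real forward/reverse predictions target the values just outside each window, while this toy version compares them against the window edges.

```python
import numpy as np

def toy_aer_errors(y, ry_hat, y_hat, fy_hat):
    """Illustrative per-window errors; not Orion's actual scoring logic."""
    # Reconstruction-based error: mean absolute error between each target
    # window and its reconstruction.
    reconstruction_error = np.mean(np.abs(y - y_hat), axis=(1, 2))

    # Prediction-based error: absolute error of the reverse and forward
    # predictions, compared (as a simplification) against the first and
    # last values of each window.
    reverse_error = np.abs(y[:, 0, 0] - ry_hat[:, 0])
    forward_error = np.abs(y[:, -1, 0] - fy_hat[:, 0])
    prediction_error = (reverse_error + forward_error) / 2

    return prediction_error, reconstruction_error
```

Windows with large values in either error are the candidates that later post-processing steps would flag as anomalous intervals.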