AER

path: orion.primitives.aer.AER

description: This is an autoencoder-based model capable of producing both prediction-based and reconstruction-based anomaly scores.

See the primitive's JSON annotation for the full specification.

argument | type | description
---------|------|------------
**parameters** | |
X | numpy.ndarray | n-dimensional array containing the input sequences for the model
y | numpy.ndarray | n-dimensional array containing the target sequences to reconstruct; typically y is a signal from a selected set of channels of X
**hyperparameters** | |
epochs | int | number of epochs to train the model; an epoch is one iteration over the entire X data provided
input_shape | tuple | tuple denoting the shape of an input sample
optimizer | str | string (name of optimizer) or optimizer instance; default is keras.optimizers.Adam
learning_rate | float | learning rate of the optimizer; default is 0.001
batch_size | int | number of samples per gradient update; default is 64
layers_encoder | list | list containing the layers of the encoder
layers_generator | list | list containing the layers of the generator
**output** | |
ry_hat | numpy.ndarray | n-dimensional array containing the regression (reverse prediction) for each input sequence
y_hat | numpy.ndarray | n-dimensional array containing the reconstruction for each input sequence
fy_hat | numpy.ndarray | n-dimensional array containing the regression (forward prediction) for each input sequence
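Since y is typically a channel selection from X, the inputs can be prepared with plain numpy before loading the primitive. A minimal sketch, assuming a hypothetical 2-channel signal sliced into fixed-length windows (the signal, window length, and channel choice are illustrative, not part of the primitive):

```python
import numpy as np

# Hypothetical raw signal: 500 timesteps, 2 channels.
signal = np.random.rand(500, 2)
window = 100

# Slice the signal into overlapping windows to form the input sequences.
X = np.stack([signal[i:i + window] for i in range(len(signal) - window + 1)])

# Target: reconstruct channel 0 only. Indexing with a list keeps the
# trailing channel dimension, matching the (samples, timesteps, 1) shape.
y = X[:, :, [0]]

print(X.shape)  # (401, 100, 2)
print(y.shape)  # (401, 100, 1)
```

The `[0]` list index (rather than a plain `0`) preserves the third axis, which is what the example below relies on when it writes `X[:, :, [0]]`.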

In [1]: import numpy as np

In [2]: from mlstars import load_primitive

In [3]: X = np.ones((64, 100, 1))

In [4]: y = X[:,:, [0]] # signal to reconstruct from X (channel 0)

In [5]: primitive = load_primitive('orion.primitives.aer.AER',
   ...:     arguments={"X": X, "y": y, "epochs": 1, "batch_size": 1})
   ...: 

In [6]: primitive.fit()

51/51 [==============================] - 6s 50ms/step - loss: 0.1268 - tf.__operators__.getitem_loss: 0.1351 - tf.__operators__.getitem_1_loss: 0.1181 - tf.__operators__.getitem_2_loss: 0.1358 - val_loss: 0.0417 - val_tf.__operators__.getitem_loss: 0.0593 - val_tf.__operators__.getitem_1_loss: 0.0200 - val_tf.__operators__.getitem_2_loss: 0.0676

In [7]: ry, y, fy = primitive.produce(X=X)

2/2 [==============================] - 0s 10ms/step

2/2 [==============================] - 0s 12ms/step

In [8]: print("Reverse Prediction: {}\nReconstructed Values: {}, Forward Prediction: {}".format(ry, y, fy))
Reverse Prediction: [[0.75645933]
 [0.75645933]
 [0.75645933]
 ...
 [0.75645933]]
Reconstructed Values: [[[0.87390024]
  [0.9533735 ]
  [1.00791113]
  ...
  [0.97429464]
  [0.92057215]
  [0.84575759]]

 [[0.87390024]
  [0.9533735 ]
  [1.00791113]
  ...
  [0.97429464]
  [0.92057215]
  [0.84575759]]

 [[0.87390024]
  [0.9533735 ]
  [1.00791113]
  ...
  [0.97429464]
  [0.92057215]
  [0.84575759]]

 ...

 [[0.87390024]
  [0.9533735 ]
  [1.00791113]
  ...
  [0.97429464]
  [0.92057215]
  [0.84575759]]

 [[0.87390024]
  [0.9533735 ]
  [1.00791113]
  ...
  [0.97429464]
  [0.92057215]
  [0.84575759]]

 [[0.87390024]
  [0.9533735 ]
  [1.00791113]
  ...
  [0.97429464]
  [0.92057215]
  [0.84575759]]], Forward Prediction: [[0.73996402]
 [0.73996402]
 [0.73996402]
 ...
 [0.73996402]]
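The three outputs can be turned into per-window error signals by comparing them against the known input. The sketch below is illustrative only, not the library's own scoring (Orion combines these errors in downstream primitives), and the alignment of the reverse/forward predictions with the window endpoints is an assumption made for this example:

```python
import numpy as np

# Stand-in outputs with the shapes from the session above: one predicted
# value per window for ry_hat/fy_hat, a full reconstruction for y_hat.
X = np.ones((64, 100, 1))
ry_hat = np.full((64, 1), 0.75)      # reverse (backward) predictions
y_hat = np.full((64, 100, 1), 0.9)   # reconstructions
fy_hat = np.full((64, 1), 0.74)      # forward predictions

# Reconstruction error: mean absolute deviation per window.
rec_err = np.abs(X - y_hat).mean(axis=(1, 2))

# Prediction error: average of the reverse and forward absolute errors,
# here compared against the first and last timestep of each window
# (an assumed alignment, for illustration).
pred_err = (np.abs(X[:, 0, 0:1] - ry_hat) + np.abs(X[:, -1, 0:1] - fy_hat)) / 2

print(rec_err.shape, pred_err.shape)  # (64,) (64, 1)
```

Larger values of either error indicate windows where the model struggled to predict or reconstruct the signal, which is the basis of AER's anomaly scoring.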