AER

path: orion.primitives.aer.AER

description: this is an autoencoder-based model capable of producing both prediction-based and reconstruction-based anomaly scores.

see the primitive's json annotation.

argument          type           description

parameters

X                 numpy.ndarray  n-dimensional array containing the input sequences for the model
y                 numpy.ndarray  n-dimensional array containing the target sequences to reconstruct; typically y is a signal from a selected set of channels of X

hyperparameters

epochs            int            number of epochs to train the model; an epoch is one iteration over the entire X data provided
input_shape       tuple          tuple denoting the shape of an input sample
optimizer         str            string (name of optimizer) or optimizer instance; defaults to keras.optimizers.Adam
learning_rate     float          learning rate of the optimizer; defaults to 0.001
batch_size        int            number of samples per gradient update; defaults to 64
layers_encoder    list           list containing the layers of the encoder
layers_generator  list           list containing the layers of the generator

output

ry_hat            numpy.ndarray  n-dimensional array containing the regression (backward prediction) for each input sequence
y_hat             numpy.ndarray  n-dimensional array containing the reconstruction for each input sequence
fy_hat            numpy.ndarray  n-dimensional array containing the regression (forward prediction) for each input sequence
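Hyperparameters are passed alongside the data when the primitive is loaded; anything left unspecified keeps its default. A minimal sketch, mirroring the session below, with a few illustrative overrides (the values shown are examples, not recommendations):

import numpy as np
from mlstars import load_primitive

X = np.ones((64, 100, 1))  # 64 windows of length 100 with 1 channel
y = X[:, :, [0]]           # target: channel 0 of X

# hyperparameter names match the table above; values are illustrative
primitive = load_primitive(
    'orion.primitives.aer.AER',
    arguments={
        "X": X,
        "y": y,
        "epochs": 5,             # the demo below uses 1 for speed
        "batch_size": 32,        # default is 64
        "learning_rate": 0.001,  # default value, shown for completeness
    },
)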

In [1]: import numpy as np

In [2]: from mlstars import load_primitive

In [3]: X = np.ones((64, 100, 1))

In [4]: y = X[:,:, [0]] # signal to reconstruct from X (channel 0)

In [5]: primitive = load_primitive('orion.primitives.aer.AER',
   ...:     arguments={"X": X, "y": y, "epochs": 1, "batch_size": 1})
   ...: 

In [6]: primitive.fit()

51/51 [==============================] - 6s 44ms/step - loss: 0.1162 - tf.__operators__.getitem_loss: 0.1570 - tf.__operators__.getitem_1_loss: 0.1012 - tf.__operators__.getitem_2_loss: 0.1053 - val_loss: 0.0398 - val_tf.__operators__.getitem_loss: 0.0608 - val_tf.__operators__.getitem_1_loss: 0.0223 - val_tf.__operators__.getitem_2_loss: 0.0538

In [7]: ry, y_hat, fy = primitive.produce(X=X)

2/2 [==============================] - 0s 10ms/step

2/2 [==============================] - 0s 12ms/step
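The three arrays line up with ry_hat, y_hat, and fy_hat in the output table above; from the printed results below, their shapes for this run are as sketched here:

# one backward prediction and one forward prediction per window,
# plus a full reconstruction of each window
assert ry.shape == (64, 1)
assert y_hat.shape == (64, 100, 1)
assert fy.shape == (64, 1)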

In [8]: print("Reverse Prediction: {}\nReconstructed Values: {}, Forward Prediction: {}".format(ry, y_hat, fy))
Reverse Prediction: [[0.75351467]
 [0.75351467]
 [0.75351467]
 ...
 [0.75351467]
 [0.75351467]
 [0.75351467]]
Reconstructed Values: [[[0.88852339]
  [0.97556671]
  [1.0321947 ]
  ...
  [1.00180738]
  [0.94580684]
  [0.87006425]]

 ...

 [[0.88852339]
  [0.97556671]
  [1.0321947 ]
  ...
  [1.00180738]
  [0.94580684]
  [0.87006425]]], Forward Prediction: [[0.76811006]
 [0.76811006]
 [0.76811006]
 ...
 [0.76811006]
 [0.76811006]
 [0.76811006]]
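To turn these three outputs into anomaly scores, prediction and reconstruction errors are computed and combined. Orion's pipelines do this with a dedicated scoring step; the snippet below is only a simplified numpy sketch of the idea, in which the alignment of the point predictions to the window edges and the equal weighting are assumptions:

import numpy as np

# reconstruction error: mean absolute deviation between each input
# window and its reconstruction
rec_err = np.abs(X - y_hat).mean(axis=(1, 2))       # shape (64,)

# regression error: score the backward and forward point predictions
# against the window edges (this alignment is an assumption)
reg_err = 0.5 * (np.abs(X[:, 0, 0] - ry[:, 0])
                 + np.abs(X[:, -1, 0] - fy[:, 0]))  # shape (64,)

# equally weighted combination: one anomaly score per window;
# higher scores indicate larger deviations from the observed signal
scores = 0.5 * rec_err + 0.5 * reg_err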