orion.evaluation

Contextual Metrics

contextual_accuracy(expected, observed[, …])

Compute an accuracy score between the ground truth and the detected anomalies.

contextual_precision(expected, observed[, …])

Compute a precision score between the ground truth and the detected anomalies.

contextual_recall(expected, observed[, …])

Compute a recall score between the ground truth and the detected anomalies.

contextual_f1_score(expected, observed[, …])

Compute an F1 score between the ground truth and the detected anomalies.

contextual_confusion_matrix(expected, observed)

Compute the confusion matrix between the ground truth and the detected anomalies.
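The contextual metrics above compare ground-truth and detected anomalies as time intervals rather than individual points. The following is a minimal, self-contained sketch of duration-weighted precision, recall, and F1 over intervals, assuming each input is a list of non-overlapping `(start, end)` tuples — an illustration of the general semantics, not Orion's actual implementation:

```python
def interval_overlap(a, b):
    """Length of the overlap between two (start, end) intervals (0 if disjoint)."""
    return max(0, min(a[1], b[1]) - max(a[0], b[0]))


def contextual_scores(expected, observed):
    """Duration-weighted precision, recall, and F1 over anomaly intervals.

    expected, observed: lists of non-overlapping (start, end) tuples.
    """
    # Total overlapping duration counts as true-positive time.
    tp = sum(interval_overlap(e, o) for e in expected for o in observed)
    expected_total = sum(end - start for start, end in expected)
    observed_total = sum(end - start for start, end in observed)

    precision = tp / observed_total if observed_total else 0.0
    recall = tp / expected_total if expected_total else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1
```

For example, a ground-truth interval `(10, 20)` against a detection `(15, 25)` overlaps for 5 of each interval's 10 time units, yielding precision, recall, and F1 of 0.5 each.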

Point Metrics

point_accuracy(expected, observed[, data, …])

Compute an accuracy score between the ground truth and the detected anomalies.

point_precision(expected, observed[, data, …])

Compute a precision score between the ground truth and the detected anomalies.

point_recall(expected, observed[, data, …])

Compute a recall score between the ground truth and the detected anomalies.

point_f1_score(expected, observed[, data, …])

Compute an F1 score between the ground truth and the detected anomalies.

point_confusion_matrix(expected, observed[, …])

Compute the confusion matrix between the ground truth and the detected anomalies.
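In contrast to the contextual metrics, the point metrics treat anomalies as individual timestamps. A minimal sketch of set-based precision, recall, and F1 under that assumption (illustrative only, not Orion's actual implementation):

```python
def point_scores(expected, observed):
    """Precision, recall, and F1 treating anomalies as individual timestamps.

    expected, observed: iterables of timestamps (or sample indices).
    """
    expected, observed = set(expected), set(observed)
    tp = len(expected & observed)  # correctly detected points
    fp = len(observed - expected)  # detections with no matching ground truth
    fn = len(expected - observed)  # ground-truth points that were missed

    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1
```

For example, with ground truth `{1, 2, 3, 4}` and detections `{3, 4, 5}`, two detections are correct, giving precision 2/3, recall 1/2, and F1 of 4/7.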