orion.Orion.evaluate

Orion.evaluate(data, ground_truth, fit=False, train_data=None, metrics={'accuracy': contextual_accuracy, 'f1': contextual_f1_score, 'precision': contextual_precision, 'recall': contextual_recall})

Evaluate the performance against ground truth anomalies.

Parameters
  • data (DataFrame) – Input data, passed as a pandas.DataFrame containing exactly two columns: timestamp and value.

  • ground_truth (DataFrame) – Ground truth anomalies passed as a pandas.DataFrame containing two columns: start and stop.

  • fit (bool) – Whether to fit the pipeline before evaluating it. Defaults to False.

  • train_data (DataFrame) – Training data, passed as a pandas.DataFrame containing exactly two columns: timestamp and value. If not given, the pipeline is fitted on data.

  • metrics (list) – Metrics to compute, passed as a list of strings. If not given, it defaults to all the Orion metrics.

Returns

pandas.Series containing one element for each metric applied, with the metric name as index.

Return type

pandas.Series
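
As an illustration of the expected input formats, the sketch below builds the two DataFrames described above. The call to evaluate at the end is hypothetical: it assumes an Orion instance named orion that has already been fitted, and the timestamps and metric names are invented for the example.

```python
import pandas as pd

# Input signal: exactly two columns, `timestamp` and `value`.
data = pd.DataFrame({
    'timestamp': [1222819200, 1222840800, 1222862400, 1222884000],
    'value': [-1.0, 0.5, 12.3, 0.7],
})

# Ground truth anomalies: one row per anomalous interval,
# delimited by `start` and `stop` timestamps.
ground_truth = pd.DataFrame({
    'start': [1222840800],
    'stop': [1222862400],
})

# Hypothetical usage, assuming a fitted Orion instance named `orion`:
# scores = orion.evaluate(data, ground_truth, metrics=['f1', 'recall'])
# `scores` is a pandas.Series indexed by metric name.
```

Passing fit=True would fit the pipeline before evaluating; supplying train_data as well keeps the evaluation data out of the fitting step.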