Key Characteristics of Algorithms' Dynamics Beyond Accuracy - Evaluation Tests (v2)


Conducted for the paper: "What do anomaly scores actually mean? Key characteristics of algorithms' dynamics beyond accuracy" by F. Iglesias, H. O. Marques, A. Zimek and T. Zseby.

Context and methodology

Anomaly detection is intrinsic to a large number of data analysis applications today. Most of the algorithms in use assign an outlierness score to each instance before anomalies are established in binary form. The experiments in this repository study how different algorithms generate different dynamics in the outlierness scores and react in very different ways to perturbations that affect the data.
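As a minimal illustration of this score-then-threshold workflow (not code from the repository; the detector, the 5% threshold and all parameters are illustrative assumptions):

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# 200 inliers around the origin plus 10 scattered outliers
X = np.vstack([rng.normal(0, 1, size=(200, 2)),
               rng.uniform(-6, 6, size=(10, 2))])

iforest = IsolationForest(random_state=0).fit(X)
scores = -iforest.score_samples(X)   # higher score = more anomalous

# Binary anomaly labels only appear after thresholding the scores,
# e.g. flagging the top 5% as anomalies
threshold = np.quantile(scores, 0.95)
labels = (scores > threshold).astype(int)
```

Two detectors can agree on the binary labels yet produce very differently shaped score distributions, which is precisely the dynamics the paper studies.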

The study elaborated in the referenced paper presents new indices and coefficients to assess these dynamics, and explores the responses of the algorithms as a function of variations in these indices, revealing key aspects of the interdependence between algorithms, data geometries and the ability to discriminate anomalies. Therefore, this repository reproduces the conducted experiments, which study eight algorithms (ABOD, HBOS, iForest, K-NN, LOF, OCSVM, SDO and GLOSH) submitted to seven perturbations related to: cardinality, dimensionality, outlier proportion, inlier-outlier density ratio, density layers, clusters and local outliers. Behavioural profiles are collected with eleven measurements (Adjusted Average Precision, ROC-AUC, Perini's Confidence [1], Perini's Stability [2], S-curves, Discriminant Power, Robust Coefficients of Variation for Inliers and Outliers, Coherence, Bias and Robustness) under two types of normalization: linear and Gaussian, the latter aiming to standardize the outlierness scores issued by different algorithms [3].
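The two normalization schemes can be sketched as follows; this is a simplified reading of the Gaussian scaling described in [3] (per-algorithm details may differ in the actual scripts):

```python
import numpy as np
from scipy.special import erf

def linear_norm(scores):
    """Linear (min-max) scaling of raw outlierness scores to [0, 1]."""
    s = np.asarray(scores, dtype=float)
    return (s - s.min()) / (s.max() - s.min())

def gaussian_norm(scores):
    """Gaussian scaling in the spirit of Kriegel et al. [3]: map scores
    through the Gaussian error function so that scores issued by
    different algorithms become comparable on a common [0, 1] scale."""
    s = np.asarray(scores, dtype=float)
    mu, sigma = s.mean(), s.std()
    return np.maximum(0.0, erf((s - mu) / (sigma * np.sqrt(2.0))))
```

Linear scaling preserves the shape of the score distribution, while Gaussian scaling compresses inlier scores toward 0 and pushes clear outliers toward 1, which is why it is used to compare dynamics across algorithms.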

This repository is framed within the research on the following domains: algorithm evaluation, outlier detection, anomaly detection, unsupervised learning, machine learning, data mining, data analysis. Datasets and algorithms can be used for experiment replication and for further evaluation and comparison.

References

[1] Perini, L., Vercruyssen, V., Davis, J.: Quantifying the confidence of anomaly detectors in their example-wise predictions. In: The European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases. Springer Verlag (2020).

[2] Perini, L., Galvin, C., Vercruyssen, V.: A Ranking Stability Measure for Quantifying the Robustness of Anomaly Detection Methods. In: 2nd Workshop on Evaluation and Experimental Design in Data Mining and Machine Learning @ ECML/PKDD (2020).

[3] Kriegel, H.-P., Kröger, P., Schubert, E., Zimek, A.: Interpreting and unifying outlier scores. In: Proceedings of the 2011 SIAM International Conference on Data Mining (SDM), pp. 13–24 (2011).

Technical details

Experiments were tested with Python 3.9.6. The provided scripts generate all synthetic data and results. These outputs are also kept in the repository for the sake of comparability and replicability (file "outputs.zip"). The file and folder structure is as follows:

"compare_scores_group.py" is a Python script to extract new dynamic indices proposed in the paper.

"generate_data.py" is a Python script to generate datasets used for evaluation.

"latex_table.py" is a Python script to show results in a latex-table format. 

"merge_indices.py" is a Python script to merge accuracy and dynamic indices in the same table-structured summary.

"metric_corr.py" is a Python script to calculate correlation estimations between indices.

"outdet.py" is a Python script that runs outlier detection with different algorithms on diverse datasets.

"perini_tests.py" is a Python script to run Perini's confidence and stability on all datasets and algorithms' performances.

"scatterplots.py" is a Python script that generates scatter plots for comparing accuracy and dynamic performances.

"README.md" provides explanations and step by step instructions for replication.

"requirements.txt" contains references to required Python libraries and versions.

"outputs.zip" contains all result tables, plots and synthetic data generated with the scripts.

[data/real_data] contains CSV versions of the Wilt, Shuttle, Waveform and Cardiotocography datasets (inherited and adapted from the LMU repository).
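The core scoring step that "outdet.py" performs can be sketched with scikit-learn stand-ins for three of the studied detectors (iForest, LOF, OCSVM); the dataset, parameters and score-sign conventions below are illustrative assumptions, not the repository's actual configuration:

```python
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.neighbors import LocalOutlierFactor
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(42)
# Synthetic stand-in dataset: 300 inliers plus 15 scattered outliers
X = np.vstack([rng.normal(0, 1, size=(300, 2)),
               rng.uniform(-7, 7, size=(15, 2))])

# Collect raw outlierness scores per detector (higher = more anomalous);
# signs are flipped so all detectors follow the same convention
scores = {
    "iForest": -IsolationForest(random_state=0).fit(X).score_samples(X),
    "LOF": -LocalOutlierFactor(n_neighbors=20).fit(X).negative_outlier_factor_,
    "OCSVM": -OneClassSVM(nu=0.05).fit(X).score_samples(X),
}

for name, s in scores.items():
    print(f"{name}: mean={s.mean():.3f}, std={s.std():.3f}")
```

Tables of such per-detector scores are what the downstream scripts ("compare_scores_group.py", "merge_indices.py", etc.) consume to compute the accuracy and dynamic indices.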

License

The CC-BY license applies to all data generated with the "generate_data.py" script. All distributed code is under the GNU GPL license. For the "ExCeeD.py" and "stability.py" scripts, please consult and refer to the original sources provided above.

Identifier
DOI: https://doi.org/10.48436/9x3kb-ha870

Related Identifiers
Requires: https://github.com/Lorenzo-Perini/Confidence_AD
Requires: https://github.com/Lorenzo-Perini/StabilityRankings_AD
HasPart: https://www.dbs.ifi.lmu.de/research/outlier-evaluation/DAMI/
IsVersionOf: https://doi.org/10.48436/qrj8h-v7816
Metadata Access: https://researchdata.tuwien.ac.at/oai2d?verb=GetRecord&metadataPrefix=oai_datacite&identifier=oai:researchdata.tuwien.ac.at:9x3kb-ha870

Provenance
Creator: Iglesias Vazquez, Felix (ORCID: 0000-0001-6081-969X)
Publisher: TU Wien
Publication Year: 2024
Rights: GNU General Public License v3.0 or later; Creative Commons Attribution 4.0 International; https://www.gnu.org/licenses/gpl-3.0-standalone.html; https://creativecommons.org/licenses/by/4.0/legalcode
Open Access: true
Contact: tudata(at)tuwien.ac.at

Representation
Resource Type: Software
Discipline: Other