Key Characteristics of Algorithms' Dynamics Beyond Accuracy - Evaluation Tests


Evaluation tests conducted for the paper "What do anomaly scores actually mean? Key characteristics of algorithms' dynamics beyond accuracy" by F. Iglesias, H. O. Marques, A. Zimek and T. Zseby.

Context and methodology

Anomaly detection is intrinsic to a large number of data analysis applications today. Most of the algorithms used assign an outlierness score to each instance before anomalies are established in a binary form. The experiments in this repository study how different algorithms generate different dynamics in the outlierness scores and react in very different ways to possible model perturbations that affect data. The study elaborated in the referred paper presents new indices and coefficients to assess these dynamics and explores the responses of the algorithms as a function of variations in these indices, revealing key aspects of the interdependence between algorithms, data geometries and the ability to discriminate anomalies.

This repository therefore reproduces the conducted experiments, which study eight algorithms (ABOD, HBOS, iForest, K-NN, LOF, OCSVM, SDO and GLOSH) submitted to seven perturbations related to: cardinality, dimensionality, outlier proportion, inlier-outlier density ratio, density layers, clusters and local outliers. Behavioural profiles are collected with eleven measurements (Adjusted Average Precision, ROC-AUC, Perini's Confidence [1], Perini's Stability [2], S-curves, Discriminant Power, Robust Coefficients of Variation for Inliers and Outliers, Coherence, Bias and Robustness) under two types of normalization: linear and Gaussian, the latter aiming to standardize the outlierness scores issued by different algorithms [3].

This repository is framed within research on the following domains: algorithm evaluation, outlier detection, anomaly detection, unsupervised learning, machine learning, data mining and data analysis. Datasets and algorithms can be used for experiment replication and for further evaluation and comparison.

References

[1] Perini, L., Vercruyssen, V., Davis, J.: Quantifying the confidence of anomaly detectors in their example-wise predictions. In: The European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases. Springer Verlag (2020).
[2] Perini, L., Galvin, C., Vercruyssen, V.: A ranking stability measure for quantifying the robustness of anomaly detection methods. In: 2nd Workshop on Evaluation and Experimental Design in Data Mining and Machine Learning @ ECML/PKDD (2020).
[3] Kriegel, H.-P., Kröger, P., Schubert, E., Zimek, A.: Interpreting and unifying outlier scores. In: Proceedings of the 2011 SIAM International Conference on Data Mining (SDM), pp. 13-24 (2011).
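For illustration, the following is a minimal sketch of the two normalizations applied to a vector of raw outlierness scores. It is not the repository's code; the Gaussian scaling follows the formulation of Kriegel et al. [3], which standardizes the scores and maps them through the Gaussian error function:

    import numpy as np
    from scipy.special import erf

    def minmax_normalize(scores):
        # Linear (min-max) normalization: rescale raw scores to [0, 1].
        s = np.asarray(scores, dtype=float)
        return (s - s.min()) / (s.max() - s.min())

    def gaussian_normalize(scores):
        # Gaussian scaling after Kriegel et al. [3]: standardize the
        # scores and map them through the Gaussian error function,
        # truncating negative values to 0.
        s = np.asarray(scores, dtype=float)
        z = (s - s.mean()) / s.std()
        return np.maximum(0.0, erf(z / np.sqrt(2.0)))

Both variants map heterogeneous raw scores into [0, 1], which is what makes score distributions issued by different algorithms comparable before the dynamic indices are computed.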

Technical details

Experiments are in Python 3. The provided scripts generate all data and results; both are kept in the repository for the sake of comparability and replicability. The file and folder structure is as follows (a minimal replication sketch is given after the license note):

- [dataS] contains 70 synthetic datasets for the evaluation tests.
- [plots_corr] contains plots with correlation matrices between indices across all experiments.
- [plots_minmax] contains plots with performances for all experiments when linear normalization is used.
- [plots_proba] contains plots with performances for all experiments when Gaussian normalization is used.
- [scores_minmax] contains CSV files (one per dataset) with the outlier scores estimated by all algorithms under test (linear normalization).
- [scores_proba] contains CSV files (one per dataset) with the outlier scores estimated by all algorithms under test (Gaussian normalization).
- "all_minmax.csv" contains dynamic and accuracy indices calculated for all datasets and algorithms (linear normalization).
- "all_proba.csv" contains dynamic and accuracy indices calculated for all datasets and algorithms (Gaussian normalization).
- "compare_scores_group.py" is a Python script to extract the new dynamic indices proposed in the paper.
- "dyn_minmax.csv" contains the new dynamic indices calculated for all datasets and algorithms (linear normalization).
- "dyn_proba.csv" contains the new dynamic indices calculated for all datasets and algorithms (Gaussian normalization).
- "ExCeeD.py" is a Python library to extract Perini's Confidence index (taken from: https://github.com/Lorenzo-Perini/Confidence_AD).
- "stability.py" is a Python library to extract Perini's Stability index (taken from: https://github.com/Lorenzo-Perini/StabilityRankings_AD).
- "generate_data.py" is a Python script to generate the datasets used for evaluation.
- "indices.py" is a Python library to calculate accuracy indices.
- "latex_table.py" is a Python script to show results in LaTeX table format.
- "merge_indices.py" is a Python script to merge accuracy and dynamic indices into the same table-structured summary.
- "metric_corr.py" is a Python script to calculate correlation estimations between indices.
- "outdet.py" is a Python script that runs outlier detection with different algorithms on diverse datasets.
- "perf_minmax.csv" contains accuracy indices calculated for all datasets and algorithms (linear normalization).
- "perf_proba.csv" contains accuracy indices calculated for all datasets and algorithms (Gaussian normalization).
- "peri_conf_minmax.csv" contains Perini's confidence indices calculated for all datasets and algorithms (linear normalization).
- "peri_conf_proba.csv" contains Perini's confidence indices calculated for all datasets and algorithms (Gaussian normalization).
- "perini_tests.py" is a Python script to run Perini's confidence and stability tests on all datasets and algorithms' performances.
- "peri_stab_minmax.csv" contains Perini's stability indices calculated for all datasets and algorithms (linear normalization).
- "peri_stab_proba.csv" contains Perini's stability indices calculated for all datasets and algorithms (Gaussian normalization).
- "README.md" provides explanations and step-by-step instructions for replication.
- "scatterplots.py" is a Python script that generates scatter plots for comparing accuracy and dynamic performances.

License

The CC-BY license applies to all data generated with the "generate_data.py" script. All distributed code is under the GNU GPL license. For the "ExCeeD.py" and "stability.py" scripts, please consult and refer to the original sources provided above.
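For orientation, the following replication sketch runs the scripts in an order inferred from the descriptions above. It rests on assumptions: the scripts are taken to run without arguments, and the CSV layout of the scores files (a label column plus one score column per algorithm, and the file name "example_dataset.csv") is hypothetical; README.md contains the authoritative step-by-step instructions.

    import subprocess
    import pandas as pd
    from sklearn.metrics import roc_auc_score

    # 1) Generate the 70 synthetic datasets into [dataS].
    subprocess.run(["python3", "generate_data.py"], check=True)

    # 2) Run the eight detectors; fills [scores_minmax] and [scores_proba].
    subprocess.run(["python3", "outdet.py"], check=True)

    # 3) Compute Perini's indices and the new dynamic indices,
    #    then merge accuracy and dynamic indices into the summaries.
    subprocess.run(["python3", "perini_tests.py"], check=True)
    subprocess.run(["python3", "compare_scores_group.py"], check=True)
    subprocess.run(["python3", "merge_indices.py"], check=True)

    # 4) Inspect one scores file (file and column names are hypothetical).
    scores = pd.read_csv("scores_minmax/example_dataset.csv")
    print(roc_auc_score(scores["label"], scores["LOF"]))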

Identifier
DOI https://doi.org/10.48436/4fwns-h8r74
Related Identifier IsVersionOf https://github.com/CN-TU/py-outlier-detection-dynamics/tree/main
Related Identifier Requires https://github.com/Lorenzo-Perini/Confidence_AD
Related Identifier Requires https://github.com/Lorenzo-Perini/StabilityRankings_AD
Metadata Access https://researchdata.tuwien.ac.at/oai2d?verb=GetRecord&metadataPrefix=oai_datacite&identifier=oai:researchdata.tuwien.ac.at:4fwns-h8r74
Provenance
Creator Iglesias Vazquez, Felix (ORCID: 0000-0001-6081-969X)
Publisher TU Wien
Publication Year 2023
Rights Creative Commons Attribution 4.0 International; GNU General Public License v3.0 or later; https://creativecommons.org/licenses/by/4.0/legalcode; https://www.gnu.org/licenses/gpl-3.0-standalone.html
OpenAccess true
Contact tudata(at)tuwien.ac.at
Representation
Resource Type Software
Version 1.0.0
Discipline Other