ALAD
Adversarially Learned Anomaly Detection
IEEE ICDM 2018
paper
code
Research motivation (main problems addressed)
1. Developing effective anomaly detection methods for complex, high-dimensional data remains a challenge.
2. The earlier GAN-based approach (AnoGAN) must solve an optimization problem for every test example, which makes it impractical on large datasets or for real-time applications.
Advantage of ALAD: effective, but also efficient at test time.
Framework and method
Loss & Anomaly Score
Loss
$$
V\left(D_{xz}, D_{xx}, D_{zz}, E, G\right) = V\left(D_{xz}, E, G\right) + V\left(D_{xx}, E, G\right) + V\left(D_{zz}, E, G\right)
$$
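The three terms are not expanded in this note; a sketch of each, following the ALI/BiGAN-style cycle-consistent formulation that ALAD builds on ($E$ is the encoder, $G$ the generator, $p_X$ the data distribution, $p_Z$ the latent prior), not copied verbatim from the paper:
$$
\begin{aligned}
V(D_{xz}, E, G) &= \mathbb{E}_{x \sim p_X}\!\left[\log D_{xz}(x, E(x))\right] + \mathbb{E}_{z \sim p_Z}\!\left[\log\left(1 - D_{xz}(G(z), z)\right)\right] \\
V(D_{xx}, E, G) &= \mathbb{E}_{x \sim p_X}\!\left[\log D_{xx}(x, x)\right] + \mathbb{E}_{x \sim p_X}\!\left[\log\left(1 - D_{xx}(x, G(E(x)))\right)\right] \\
V(D_{zz}, E, G) &= \mathbb{E}_{z \sim p_Z}\!\left[\log D_{zz}(z, z)\right] + \mathbb{E}_{z \sim p_Z}\!\left[\log\left(1 - D_{zz}(z, E(G(z)))\right)\right]
\end{aligned}
$$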
Anomaly Score
$$
A(x) = \left\| f_{xx}(x, x) - f_{xx}(x, G(E(x))) \right\|_{1}
$$
Here $f_{xx}(\cdot,\cdot)$ denotes the activations of an intermediate layer of the discriminator $D_{xx}$. $A(x)$ reflects the discriminator's confidence in how well a sample is encoded and then reconstructed by the generator; larger values indicate more anomalous samples.
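A minimal PyTorch-style sketch of this scoring step, assuming trained `encoder`, `generator`, and a `d_xx` whose forward pass also returns its intermediate features (the names and that return convention are assumptions for illustration, not the authors' released code):

```python
import torch

def anomaly_score(x, encoder, generator, d_xx):
    """L1 feature-matching score A(x) = ||f_xx(x, x) - f_xx(x, G(E(x)))||_1.

    Assumes d_xx(a, b) returns (logits, features), where `features` are the
    activations of an intermediate layer of the D_xx discriminator.
    """
    with torch.no_grad():
        recon = generator(encoder(x))       # G(E(x)): reconstruction of x
        _, feat_real = d_xx(x, x)           # f_xx(x, x)
        _, feat_recon = d_xx(x, recon)      # f_xx(x, G(E(x)))
        # per-sample L1 distance between the two feature vectors
        score = (feat_real - feat_recon).abs().flatten(1).sum(dim=1)
    return score  # larger => more anomalous
```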
Experiments
Datasets:
- KDDCup99: 20% anomalies
- Arrhythmia: 15% anomalies

Setup:
- Use 80% of the whole official dataset for training and keep the remaining 20% as the test set.
- Further remove 25% of the training set as a validation set, and discard anomalous samples from both the training and validation sets (thus setting up a novelty detection task); a small split sketch follows below.
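A sketch of how such a split could be set up (sklearn-based; the function, variable names, and the convention that anomalies carry label 1 are illustrative, not taken from the released code):

```python
import numpy as np
from sklearn.model_selection import train_test_split

def make_novelty_split(X, y, seed=0):
    """80/20 train/test split, then 25% of train held out for validation;
    anomalies (y == 1, an assumed labeling) are dropped from the train and
    validation sets so the model only sees normal data (novelty detection)."""
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=seed)
    X_train, X_val, y_train, y_val = train_test_split(
        X_train, y_train, test_size=0.25, random_state=seed)
    X_train = X_train[y_train == 0]   # keep only normal samples
    X_val = X_val[y_val == 0]
    return X_train, X_val, (X_test, y_test)
```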
Evaluation metrics:
Precision, Recall, F1 score
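A hedged sketch of how these metrics are often computed in this line of work: scores are thresholded so that the flagged fraction matches the dataset's known anomaly ratio (e.g. 20% for KDDCup99), then precision/recall/F1 are reported on the anomaly class. The thresholding convention is an assumption based on common practice among these baselines, not a quote of the paper's code.

```python
import numpy as np
from sklearn.metrics import precision_recall_fscore_support

def evaluate(scores, y_true, anomaly_ratio=0.2):
    """Flag the top `anomaly_ratio` fraction of scores as anomalies,
    then report precision/recall/F1 on the anomaly class (label 1)."""
    threshold = np.percentile(scores, 100 * (1 - anomaly_ratio))
    y_pred = (scores >= threshold).astype(int)
    p, r, f1, _ = precision_recall_fscore_support(
        y_true, y_pred, average="binary", pos_label=1)
    return p, r, f1
```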
Baselines:
One Class Support Vector Machines (OC-SVM)
Support vector method for novelty detection 1999
Isolation Forests (IF)
Isolation forest 2008
Deep Structured Energy Based Models (DSEBM)
Deep structured energy based models for anomaly detection 2016
Deep Autoencoding Gaussian Mixture Model (DAGMM)
Deep autoencoding gaussian mixture model for unsupervised anomaly detection 2018
AnoGAN
Unsupervised anomaly detection with generative adversarial networks to guide marker discovery 2017
Experimental results
Summary
We proposed ALAD, a GAN-based anomaly detection method that learns an encoder from data space to latent space during training, making it much more efficient at test time than the previously published GAN-based approach. In addition, we employed additional discriminators to improve the encoder, as well as spectral normalization, which has been found to stabilize GAN training.
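As an illustration of the spectral-normalization point, a minimal PyTorch sketch of wrapping a discriminator's layers (the architecture and the 121-dimensional input are illustrative, not the paper's exact network):

```python
import torch.nn as nn
from torch.nn.utils import spectral_norm

# Illustrative D_xx-style discriminator: spectral normalization constrains the
# Lipschitz constant of each linear layer, which helps stabilize GAN training.
d_xx = nn.Sequential(
    spectral_norm(nn.Linear(2 * 121, 128)),  # input: concat(x, x_hat); sizes are assumptions
    nn.LeakyReLU(0.2),
    spectral_norm(nn.Linear(128, 1)),        # real/fake logit
)
```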