Abstract
Autoencoder-based methods constitute the majority of deep unsupervised outlier detection methods. However, these methods perform poorly on complex image datasets and suffer from the noise introduced by outliers, especially when the outlier ratio is high. In this paper, we propose a framework named Transformation Invariant AutoEncoder (TIAE), which achieves stable and high performance on unsupervised outlier detection. First, instead of using a conventional autoencoder, we propose a transformation invariant autoencoder that learns better representations for complex image datasets. Next, to mitigate the negative effect of the noise introduced by outliers and to stabilize network training, we incorporate adaptive self-paced learning into the TIAE framework, selecting the examples most likely to be inliers in each epoch as the training set. Extensive evaluations show that TIAE improves unsupervised outlier detection performance by up to 10% AUROC over other autoencoder-based methods on five image datasets.
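The sketch below illustrates the two ideas described in the abstract as I read them; it is not the authors' code, and the architecture, transformation set, loss, and `keep_ratio` parameter are all assumptions. An autoencoder reconstructs the original image from a randomly transformed input (transformation invariance), and each update keeps only the lowest-error samples as the most confident inlier candidates (adaptive self-paced selection).

```python
# Hypothetical sketch of the TIAE training idea (assumptions, not the authors' code).
import torch
import torch.nn as nn

class ConvAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def random_rotation(batch):
    # One of four 90-degree rotations, applied to the whole batch
    # (a stand-in for whatever transformation set the paper uses).
    k = torch.randint(0, 4, (1,)).item()
    return torch.rot90(batch, k, dims=(2, 3))

def train_epoch(model, loader, optimizer, keep_ratio=0.7, device="cpu"):
    """One epoch: transformation-invariant reconstruction + self-paced selection."""
    model.train()
    criterion = nn.MSELoss(reduction="none")
    for images, _ in loader:
        images = images.to(device)
        recon = model(random_rotation(images))
        # Per-sample error w.r.t. the ORIGINAL image, so the encoder must
        # learn features that are invariant to the applied transformation.
        per_sample = criterion(recon, images).flatten(1).mean(dim=1)
        # Self-paced step: keep only the lowest-error samples, treating them
        # as the most confident inlier candidates for this weight update.
        k = max(1, int(keep_ratio * per_sample.numel()))
        selected, _ = torch.topk(per_sample, k, largest=False)
        loss = selected.mean()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```

At test time the per-sample reconstruction error would serve as the outlier score, with higher error indicating a more likely outlier; how the keep ratio is adapted over epochs is the paper's contribution and is not reproduced here.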
| Original language | English |
|---|---|
| Article number | 9376856 |
| Pages (from-to) | 43991-44002 |
| Number of pages | 12 |
| Journal | IEEE Access |
| Volume | 9 |
| DOIs | |
| State | Published - 2021 |
| Externally published | Yes |
Funding
This work was supported in part by the National Key Research and Development Program of China under Grant 2018YFB0204301, in part by the National Natural Science Foundation of China under Grant 62006236, in part by the Hunan Provincial Natural Science Foundation under Grant 2020JJ5673, and in part by the National University of Defense Technology (NUDT) Research Project under Grant ZK20-10.
Keywords
- Deep Learning
- autoencoder
- transformation invariant autoencoder
- unsupervised outlier detection