For anomaly detection (AD), early approaches often train separate models for individual classes, yielding high performance but posing challenges in scalability and resource management. Recent efforts have shifted toward training a single model capable of handling multiple classes. However, directly extending early AD methods to multi-class settings often results in degraded performance.
In this paper, we analyze this degradation observed in reconstruction-based methods, identifying two key issues: catastrophic forgetting and inter-class confusion. To address these issues, we propose a plug-and-play modification that incorporates class-aware contrastive learning (CL). By explicitly leveraging raw object category information (e.g., carpet or wood) as supervised signals, we apply local CL to fine-tune multiscale features and global CL to learn more compact feature representations of normal patterns, thereby effectively adapting the models to multi-class settings. Experiments across four datasets (over 60 categories) verify the effectiveness of our approach, yielding significant improvements and superior performance compared to advanced methods. Notably, ablation studies show that even using pseudo-class labels can achieve comparable performance.
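To make the class-aware objective concrete, below is a minimal pure-Python sketch of a supervised (class-aware) contrastive loss in the spirit of the global CL described above: samples sharing a class label are treated as positives, all others as negatives. The function name, temperature value, and toy inputs are illustrative assumptions, not the paper's implementation.

```python
import math

def dot(u, v):
    # Inner product of two equal-length vectors.
    return sum(a * b for a, b in zip(u, v))

def class_aware_contrastive_loss(feats, labels, tau=0.5):
    """Supervised contrastive loss over L2-normalized feature vectors.

    feats:  list of equal-length float vectors (assumed L2-normalized).
    labels: class index per sample, e.g. raw object categories
            (0 = carpet, 1 = wood); same label => positive pair.
    tau:    temperature scaling the cosine similarities (assumed value).
    """
    n = len(feats)
    per_anchor = []
    for i in range(n):
        # Positives: all other samples with the same class label.
        pos = [p for p in range(n) if p != i and labels[p] == labels[i]]
        if not pos:
            continue
        # Denominator: similarities of the anchor to every other sample.
        denom = sum(math.exp(dot(feats[i], feats[a]) / tau)
                    for a in range(n) if a != i)
        # Average negative log-likelihood over the anchor's positives.
        nll = [-math.log(math.exp(dot(feats[i], feats[p]) / tau) / denom)
               for p in pos]
        per_anchor.append(sum(nll) / len(pos))
    return sum(per_anchor) / len(per_anchor)
```

With correct class labels the loss pulls same-class features together, so well-clustered features under the true labels score lower than the same features under mismatched labels:

```python
feats = [[1.0, 0.0], [1.0, 0.0], [0.0, 1.0], [0.0, 1.0]]
good = class_aware_contrastive_loss(feats, [0, 0, 1, 1])
bad = class_aware_contrastive_loss(feats, [0, 1, 0, 1])
assert 0.0 < good < bad
```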
We focus on the question: "Why do one-for-one models degrade when trained on multiple classes?" Specifically, previous AD methods, such as RD, DeSTSeg, SimpleNet, and DRAEM, perform well but require training a separate model for each category. We refer to this training strategy as one-for-one modeling, which is challenging to handle in practice due to computational cost and model management. However, when these one-for-one models are trained directly on multiple classes, their performance decreases significantly.
We found two issues when training one-for-one models on multiple classes: catastrophic forgetting and inter-class confusion.
@article{fan2024revita,
title={Revitalizing Reconstruction Models for Multi-class Anomaly Detection via Class-Aware Contrastive Learning},
author={Fan, Lei and Huang, Junjie and Di, Donglin and Su, Anyang and Pagnucco, Maurice and Song, Yang},
journal={arXiv preprint arXiv:2412.04769},
year={2024}
}