Authors:
Jinghao Zhang¹, Zhenhua Feng¹ and Yaochu Jin²
Affiliations:
¹ School of Computer Science and Electronic Engineering, University of Surrey, Guildford GU2 7XH, U.K.
² School of Engineering, Westlake University, Hangzhou 310030, China
Keyword(s):
Deep Learning, Image Classification, Adversarial Training, Long-Tailed Recognition.
Abstract:
Long-tailed data distributions are common in many practical learning-based applications, causing Deep Neural Networks (DNNs) to under-fit minority classes. Although this bias has been studied extensively by the research community, existing approaches mainly address the class-wise (inter-class) imbalance problem. In contrast, this paper considers both inter-class and intra-class data imbalance during network training. To this end, we present Adversarial Feature Re-calibration (AFR), a method that improves the standard accuracy of a trained deep network by adding adversarial perturbations to the majority samples of each class. Specifically, an adversarial attack model is fine-tuned to perturb the majority samples by injecting features from their corresponding intra-class minority samples. This procedure makes the dataset more evenly distributed from both the inter- and intra-class perspectives, encouraging DNNs to learn better representations. Experimental results on CIFAR-100-LT demonstrate the effectiveness and superiority of the proposed AFR method over state-of-the-art long-tailed learning methods.
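The perturbation step described in the abstract can be illustrated with a minimal PGD-style sketch: a majority sample is nudged, within a small L-infinity budget, so that its features move toward those of an intra-class minority sample. The linear feature extractor, function names, and hyper-parameters below are illustrative assumptions for exposition, not the paper's actual attack model or training setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear feature extractor feat(x) = W @ x, a stand-in for a DNN encoder.
W = rng.normal(size=(4, 8))

def feat(x):
    return W @ x

def afr_perturb(x_major, f_minor, epsilon=0.1, step=0.02, iters=20):
    """Iteratively perturb a majority sample so its features approach a
    target minority feature vector, keeping the perturbation inside an
    L-inf ball of radius epsilon (a PGD-style signed-gradient attack)."""
    x = x_major.copy()
    for _ in range(iters):
        # Gradient of 0.5 * ||feat(x) - f_minor||^2 w.r.t. x for the
        # linear extractor is W.T @ (W @ x - f_minor).
        g = W.T @ (feat(x) - f_minor)
        x = x - step * np.sign(g)                              # signed descent step
        x = np.clip(x, x_major - epsilon, x_major + epsilon)   # project to eps-ball
    return x

x_maj = rng.normal(size=8)   # a majority-class sample
x_min = rng.normal(size=8)   # an intra-class minority sample
x_adv = afr_perturb(x_maj, feat(x_min))
budget = np.max(np.abs(x_adv - x_maj))  # stays within epsilon by construction
```

In the actual method, `feat` would be a trained DNN encoder and the gradient would come from back-propagation; the sketch only conveys the bounded, feature-targeted nature of the perturbation.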