Analyzing the Stability of Convolutional Neural Networks against Image Degradation

Hamed Habibi Aghdam, Elnaz Jahani Heravi, Domenec Puig

2016

Abstract

Understanding the underlying process of Convolutional Neural Networks (ConvNets) is usually done through visualization techniques. However, these techniques do not provide accurate information about the stability of ConvNets. In this paper, our aim is to analyze the stability of ConvNets through different techniques. First, we propose a new method for finding the minimally noisy image that lies at the minimum distance from the decision boundary yet is misclassified by the ConvNet. Second, we exploratively and quantitatively analyze the stability of ConvNets trained on the CIFAR10, MNIST and GTSRB datasets. We observe that a ConvNet can make mistakes when a Gaussian noise with σ = 1 (barely perceivable by human eyes) is added to the clean image. This suggests that the inter-class margin of the feature space obtained from a ConvNet is slim. Our second finding is that augmenting the clean dataset with many noisy images does not increase the inter-class margin. Consequently, a ConvNet trained on a dataset augmented with noisy images might still incorrectly classify images degraded by low-magnitude noise. The third finding reveals that even though an ensemble improves stability, its performance is considerably reduced by a noisy dataset.
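The degradation model described in the abstract, additive Gaussian noise with σ = 1 on 8-bit pixel values, can be sketched as follows. This is a minimal illustration, not the authors' code; the `degrade` helper and the uniform stand-in image are assumptions.

```python
import numpy as np

def degrade(image, sigma=1.0, seed=0):
    """Add zero-mean Gaussian noise with standard deviation `sigma`
    to an image (float array in [0, 255]) and clip to the valid range."""
    rng = np.random.default_rng(seed)
    noisy = image + rng.normal(0.0, sigma, size=image.shape)
    return np.clip(noisy, 0.0, 255.0)

# A sigma = 1 perturbation shifts each pixel by roughly 1/255 of the
# dynamic range, which is barely perceptible to human eyes, yet the
# paper reports that it can flip a ConvNet's prediction.
clean = np.full((32, 32, 3), 128.0)   # hypothetical stand-in for a CIFAR10 image
noisy = degrade(clean, sigma=1.0)
print(np.abs(noisy - clean).mean())   # mean pixel change on the order of sigma
```

For zero-mean Gaussian noise the expected absolute per-pixel change is σ·√(2/π) ≈ 0.8 gray levels, which makes concrete how small a perturbation suffices to cross the slim inter-class margin.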

References

  1. Aghdam, H. H., Heravi, E. J., and Puig, D. (2015). Recognizing Traffic Signs using a Practical Deep Neural Network. In Second Iberian Robotics Conference, Lisbon. Springer.
  2. Ba, L. and Caruana, R. (2013). Do Deep Nets Really Need to be Deep? arXiv preprint arXiv:1312.6184, pages 1-6.
  3. Ciresan, D., Meier, U., Masci, J., and Schmidhuber, J. (2012). Multi-column deep neural network for traffic sign classification. Neural Networks, 32:333-338.
  4. Coates, A. and Ng, A. (2011). Selecting Receptive Fields in Deep Networks. NIPS, pages 1-9.
  5. Dosovitskiy, A. and Brox, T. (2015). Inverting Convolutional Networks with Convolutional Networks. pages 1-15.
  6. Girshick, R., Donahue, J., Darrell, T., and Malik, J. (2014). Rich feature hierarchies for accurate object detection and semantic segmentation. CVPR 2014, pages 2-9.
  7. Glorot, X. and Bengio, Y. (2010). Understanding the difficulty of training deep feedforward neural networks. Proceedings of the 13th International Conference on Artificial Intelligence and Statistics (AISTATS), 9:249-256.
  8. Goodfellow, I., Mirza, M., Da, X., Courville, A., and Bengio, Y. (2013). An Empirical Investigation of Catastrophic Forgetting in Gradient-Based Neural Networks. arXiv preprint.
  9. Jin, J., Fu, K., and Zhang, C. (2014). Traffic Sign Recognition With Hinge Loss Trained Convolutional Neural Networks. IEEE Transactions on Intelligent Transportation Systems, 15(5):1991-2000.
  10. Krizhevsky, A. (2009). Learning Multiple Layers of Features from Tiny Images. Technical report, University of Toronto, pages 1-60.
  11. Krizhevsky, A., Sutskever, I., and Hinton, G. (2012). ImageNet classification with deep convolutional neural networks. Advances in Neural Information Processing Systems, pages 1097-1105.
  12. LeCun, Y., Bottou, L., Bengio, Y., and Haffner, P. (1998). Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278-2323.
  13. Mahendran, A. and Vedaldi, A. (2014). Understanding Deep Image Representations by Inverting Them.
  14. Nguyen, A., Yosinski, J., and Clune, J. (2015). Deep Neural Networks are Easily Fooled: High Confidence Predictions for Unrecognizable Images. CVPR 2015.
  15. Sermanet, P. and Lecun, Y. (2011). Traffic sign recognition with multi-scale convolutional networks. Proceedings of the International Joint Conference on Neural Networks, pages 2809-2813.
  16. Simonyan, K., Vedaldi, A., and Zisserman, A. (2013). Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps. arXiv preprint arXiv:1312.6034, pages 1-8.
  17. Stallkamp, J., Schlipsing, M., Salmen, J., and Igel, C. (2012). Man vs. computer: Benchmarking machine learning algorithms for traffic sign recognition. Neural Networks, 32:323-332.
  18. Sutskever, I., Martens, J., Dahl, G., and Hinton, G. (2013). On the importance of initialization and momentum in deep learning. JMLR W&CP, 28:1139-1147.
  19. Szegedy, C., Zaremba, W., and Sutskever, I. (2013). Intriguing properties of neural networks. arXiv preprint, pages 1-10.
  20. Yosinski, J., Clune, J., Bengio, Y., and Lipson, H. (2014). How transferable are features in deep neural networks? NIPS 2014, 27.
  21. Zeiler, M. D. and Fergus, R. (2013). Visualizing and Understanding Convolutional Networks. arXiv preprint arXiv:1311.2901.


Paper Citation


in Harvard Style

Habibi Aghdam H., Jahani Heravi E. and Puig D. (2016). Analyzing the Stability of Convolutional Neural Networks against Image Degradation. In Proceedings of the 11th Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - Volume 4: VISAPP, (VISIGRAPP 2016) ISBN 978-989-758-175-5, pages 370-382. DOI: 10.5220/0005720703700382


in Bibtex Style

@conference{visapp16,
author={Hamed Habibi Aghdam and Elnaz Jahani Heravi and Domenec Puig},
title={Analyzing the Stability of Convolutional Neural Networks against Image Degradation},
booktitle={Proceedings of the 11th Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - Volume 4: VISAPP, (VISIGRAPP 2016)},
year={2016},
pages={370-382},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0005720703700382},
isbn={978-989-758-175-5},
}


in EndNote Style

TY - CONF
JO - Proceedings of the 11th Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - Volume 4: VISAPP, (VISIGRAPP 2016)
TI - Analyzing the Stability of Convolutional Neural Networks against Image Degradation
SN - 978-989-758-175-5
AU - Habibi Aghdam H.
AU - Jahani Heravi E.
AU - Puig D.
PY - 2016
SP - 370
EP - 382
DO - 10.5220/0005720703700382