
Paper

Authors: Alessio Ansuini 1; Eric Medvet 2; Felice Pellegrino 2 and Marco Zullich 2

Affiliations: 1 International School for Advanced Studies, Trieste, Italy ; 2 Dipartimento di Ingegneria e Architettura, Università degli Studi di Trieste, Trieste, Italy

ISBN: 978-989-758-397-1

Keyword(s): Machine Learning, Pruning, Convolutional Neural Networks, Lottery Ticket Hypothesis, Canonical Correlation Analysis, Explainable Knowledge.

Abstract: Over the last few decades, artificial neural networks (ANNs) have achieved enormous success in regression and classification tasks. This empirical success has not been matched by an equally strong theoretical understanding of such models, as some of their working principles (training dynamics, generalization properties, and the structure of inner representations) remain largely unknown. It is, for example, particularly difficult to explain why ANNs achieve remarkable levels of generalization even under severe over-parametrization. In our work, we explore a recent network compression technique, called Iterative Magnitude Pruning (IMP), and apply it to convolutional neural networks (CNNs). The pruned and unpruned models are compared layer-wise with Canonical Correlation Analysis (CCA). Our results show a high similarity between layers of pruned and unpruned CNNs in the first convolutional layers and in the fully-connected layer, while for the intermediate convolutional layers the similarity is significantly lower. This suggests that, although the representations in the intermediate layers of pruned and unpruned networks are markedly different, in the last part of the network the fully-connected layers act as pivots, producing not only similar performance but also similar representations of the data, despite the large difference in the number of parameters involved.
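The two ingredients named in the abstract, magnitude-based pruning and layer-wise CCA comparison, can be sketched in a few lines of NumPy. This is an illustrative sketch only, not the authors' code: the single pruning round and the QR-based canonical-correlation computation stand in for the full iterative IMP procedure and the CCA variants used in the paper, and the function names are hypothetical.

```python
import numpy as np

def magnitude_prune(weights, fraction):
    """One magnitude-pruning round: zero out the smallest-magnitude
    `fraction` of weights and return the pruned weights plus the mask.
    IMP repeats this, rewinding the surviving weights between rounds."""
    flat = np.abs(weights).ravel()
    k = int(fraction * flat.size)
    if k == 0:
        return weights.copy(), np.ones_like(weights, dtype=bool)
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold  # keep only above-threshold weights
    return weights * mask, mask

def mean_cca_similarity(X, Y):
    """Mean canonical correlation between two activation matrices of
    shape (samples, neurons), e.g. the same layer's activations in the
    pruned and unpruned network. Canonical correlations are the singular
    values of Qx^T Qy, where Qx, Qy are orthonormal bases of the
    centered activations."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    qx, _ = np.linalg.qr(X)
    qy, _ = np.linalg.qr(Y)
    s = np.linalg.svd(qx.T @ qy, compute_uv=False)
    return float(np.mean(s))
```

Comparing a layer with itself yields a mean canonical correlation of 1, while unrelated activations score lower; in the paper this layer-wise score is what separates the early and fully-connected layers (high similarity) from the intermediate convolutional layers (lower similarity).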

License: CC BY-NC-ND 4.0


Paper citation in several formats:
Ansuini, A.; Medvet, E.; Pellegrino, F. A. and Zullich, M. (2020). On the Similarity between Hidden Layers of Pruned and Unpruned Convolutional Neural Networks. In Proceedings of the 9th International Conference on Pattern Recognition Applications and Methods - Volume 1: ICPRAM, ISBN 978-989-758-397-1, pages 52-59. DOI: 10.5220/0008960300520059

@conference{icpram20,
author={Ansuini, Alessio and Medvet, Eric and Pellegrino, Felice Andrea and Zullich, Marco},
title={On the Similarity between Hidden Layers of Pruned and Unpruned Convolutional Neural Networks},
booktitle={Proceedings of the 9th International Conference on Pattern Recognition Applications and Methods - Volume 1: ICPRAM},
year={2020},
pages={52-59},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0008960300520059},
isbn={978-989-758-397-1},
}

TY - CONF

JO - Proceedings of the 9th International Conference on Pattern Recognition Applications and Methods - Volume 1: ICPRAM
TI - On the Similarity between Hidden Layers of Pruned and Unpruned Convolutional Neural Networks
SN - 978-989-758-397-1
AU - Ansuini, A.
AU - Medvet, E.
AU - Pellegrino, F. A.
AU - Zullich, M.
PY - 2020
SP - 52
EP - 59
DO - 10.5220/0008960300520059
ER -
