Paper: Improving the Sample-complexity of Deep Classification Networks with Invariant Integration

Authors: Matthias Rath 1,2 and Alexandru Paul Condurache 1,2

Affiliations: 1 Automated Driving Research, Robert Bosch GmbH, Stuttgart, Germany; 2 Institute for Signal Processing, University of Lübeck, Lübeck, Germany

Keyword(s): Geometric Prior Knowledge, Invariance, Group Transformations, Representation Learning.

Abstract: Leveraging prior knowledge about intraclass variance due to transformations is a powerful way to improve the sample complexity of deep neural networks, making them applicable to practically important use cases where training data is scarce. Rather than being learned, this knowledge can be embedded by enforcing invariance to those transformations. Invariance can be imposed using group-equivariant convolutions followed by a pooling operation. For rotation invariance, previous work investigated replacing the spatial pooling operation with invariant integration, which explicitly constructs invariant representations. Invariant integration uses monomials that are selected with an iterative approach requiring expensive pre-training. We propose a novel monomial selection algorithm based on pruning methods, enabling application to more complex problems. Additionally, we replace the monomials with different functions such as weighted sums, multi-layer perceptrons and self-attention, thereby streamlining the training of invariant-integration-based architectures. We demonstrate the improved sample complexity on the Rotated-MNIST, SVHN and CIFAR-10 datasets, where rotation-invariant-integration-based Wide-ResNet architectures using monomials and weighted sums outperform the respective baselines in the limited-sample regime. We achieve state-of-the-art results using the full data on Rotated-MNIST and SVHN, where rotation is a main source of intraclass variation. On STL-10, we outperform a standard and a rotation-equivariant convolutional neural network using pooling.
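As a rough illustration of the idea described in the abstract, the sketch below group-averages a single monomial over the rotation axis of an equivariant feature map. This is a minimal sketch only: the PyTorch framework, the tensor layout (batch, group, channels), and the function name invariant_integration_monomial are assumptions made for illustration, not the authors' implementation.

```python
import torch

def invariant_integration_monomial(features, channel_indices, exponents):
    """Group-average one monomial over the rotation axis of an equivariant feature map.

    features:        (B, |G|, C) tensor -- one feature slice per element of a
                     discrete rotation group G (hypothetical layout)
    channel_indices: channel indices d_1..d_K used by the monomial
    exponents:       (K,) tensor of exponents b_1..b_K
    """
    # Pick the channels the monomial acts on: shape (B, |G|, K)
    selected = features[:, :, channel_indices]
    # Evaluate the monomial m(g) = prod_k f_{d_k}(g) ** b_k for every group element g
    monomial = torch.prod(selected ** exponents, dim=-1)   # (B, |G|)
    # Invariant integration: average over the group (normalized Haar measure).
    # A cyclic shift of the group axis, i.e. a rotation of the input, leaves
    # the result unchanged.
    return monomial.mean(dim=1)                             # (B,)

# Toy usage: batch of 4, 8 discrete rotations, 16 channels
feats = torch.rand(4, 8, 16)
out = invariant_integration_monomial(feats, [0, 3, 7], torch.tensor([1.0, 2.0, 1.0]))
print(out.shape)  # torch.Size([4])
```

Replacing the product inside the group average with a learned linear combination of the selected channels would correspond to the weighted-sum variant the paper compares against monomials; multi-layer perceptrons and self-attention follow the same pattern with more expressive functions.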

CC BY-NC-ND 4.0

Paper citation in several formats:
Rath, M. and Condurache, A. (2022). Improving the Sample-complexity of Deep Classification Networks with Invariant Integration. In Proceedings of the 17th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2022) - Volume 5: VISAPP; ISBN 978-989-758-555-5; ISSN 2184-4321, SciTePress, pages 214-225. DOI: 10.5220/0010872000003124

@conference{visapp22,
author={Matthias Rath and Alexandru Paul Condurache},
title={Improving the Sample-complexity of Deep Classification Networks with Invariant Integration},
booktitle={Proceedings of the 17th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2022) - Volume 5: VISAPP},
year={2022},
pages={214-225},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0010872000003124},
isbn={978-989-758-555-5},
issn={2184-4321},
}

TY - CONF
JO - Proceedings of the 17th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2022) - Volume 5: VISAPP
TI - Improving the Sample-complexity of Deep Classification Networks with Invariant Integration
SN - 978-989-758-555-5
IS - 2184-4321
AU - Rath, M.
AU - Condurache, A.
PY - 2022
SP - 214
EP - 225
DO - 10.5220/0010872000003124
PB - SciTePress
ER -