
Paper: Structural Extensions of Basis Pursuit: Guarantees on Adversarial Robustness

Authors: Dávid Szeghy 1,2; Mahmoud Aslan 3; Áron Fóthi 3; Balázs Mészáros 3; Zoltán Ádám Milacski 4 and András Lőrincz 3

Affiliations: 1 AImotive Inc., 18-22 Szépvölgyi út, Budapest, 1025, Hungary ; 2 Department of Geometry, Faculty of Natural Sciences, ELTE Eötvös Loránd University, 1/C. Pázmány Péter sétány, Budapest, 1117, Hungary ; 3 Department of Artificial Intelligence, Faculty of Informatics, ELTE Eötvös Loránd University, 1/A. Pázmány Péter sétány, Budapest, 1117, Hungary ; 4 Former Member of Department of Geometry, Faculty of Natural Sciences, ELTE Eötvös Loránd University, 1/C. Pázmány Péter sétány, Budapest, 1117, Hungary

Keyword(s): Sparse Coding, Group Sparse Coding, Stability Theory, Adversarial Attack.

Abstract: While deep neural networks are sensitive to adversarial noise, sparse coding using the Basis Pursuit (BP) method, including its multi-layer extensions, is robust against such attacks. We prove that the stability theorem of BP holds under the following generalizations: (i) the regularization procedure can be separated into disjoint groups with different weights, (ii) neurons or full layers may form groups, and (iii) the regularizer takes various generalized forms of the ℓ1 norm. This result provides the proof for the architectural generalizations of (Cazenavette et al., 2021), including (iv) an approximation of the complete architecture as a shallow sparse coding network. Due to this approximation, we settled for experimenting with shallow networks and studied their robustness against the Iterative Fast Gradient Sign Method on a synthetic dataset and MNIST. We introduce classification based on the ℓ2 norms of the groups and show numerically that it can be accurate and offers considerable speedups. In this family, the linear transformer shows the best performance. Based on the theoretical results and the numerical simulations, we highlight numerical matters that may improve performance further. The proofs of our theorems can be found in the supplementary material.
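The two computational ingredients named in the abstract lend themselves to a brief illustration. The sketch below is hypothetical and not the authors' implementation: it solves a group-sparse Basis Pursuit problem with a standard ISTA iteration and a block soft-threshold (one common instance of the weighted-group-ℓ1 regularizers the abstract refers to), and then classifies by the ℓ2 norms of the code's groups. The dictionary, group layout, weights, and the helper names (group_basis_pursuit, classify_by_group_norms) are toy assumptions; the paper's multi-layer architecture and linear-transformer variant are not reproduced here.

```python
# Minimal sketch, assuming a single-layer dictionary and one group per class.
# (1) group-sparse Basis Pursuit via ISTA with a group soft-threshold,
# (2) classification by the l2 norms of the code's groups.
import numpy as np

def group_soft_threshold(z, groups, weights, step):
    """Proximal operator of step * sum_g w_g ||z_g||_2 (block soft-threshold)."""
    out = np.zeros_like(z)
    for g, w in zip(groups, weights):
        zg = z[g]
        norm = np.linalg.norm(zg)
        if norm > step * w:
            out[g] = (1.0 - step * w / norm) * zg
    return out

def group_basis_pursuit(x, D, groups, weights, n_iter=200):
    """ISTA for min_z 0.5*||x - D z||^2 + sum_g w_g ||z_g||_2."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the smooth part
    step = 1.0 / L
    z = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ z - x)           # gradient of the quadratic data term
        z = group_soft_threshold(z - step * grad, groups, weights, step)
    return z

def classify_by_group_norms(z, groups):
    """Predict the class whose group has the largest l2 norm."""
    return int(np.argmax([np.linalg.norm(z[g]) for g in groups]))

# Toy usage: 2 classes, each owning one group of 5 unit-norm atoms.
rng = np.random.default_rng(0)
D = rng.standard_normal((20, 10))
D /= np.linalg.norm(D, axis=0)
groups = [np.arange(0, 5), np.arange(5, 10)]
weights = [0.1, 0.1]
x = D[:, groups[1]] @ rng.standard_normal(5)   # signal built from class 1's atoms
z = group_basis_pursuit(x, D, groups, weights)
print(classify_by_group_norms(z, groups))      # expected: 1
```

The group-norm readout is what makes the speedup plausible: once the code is computed, the decision is a handful of norm evaluations rather than an additional classifier pass.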

CC BY-NC-ND 4.0

Paper citation in several formats:
Szeghy, D.; Aslan, M.; Fóthi, Á.; Mészáros, B.; Milacski, Z. and Lőrincz, A. (2022). Structural Extensions of Basis Pursuit: Guarantees on Adversarial Robustness. In Proceedings of the 3rd International Conference on Deep Learning Theory and Applications - DeLTA; ISBN 978-989-758-584-5; ISSN 2184-9277, SciTePress, pages 77-85. DOI: 10.5220/0011138900003277

@conference{delta22,
author={Dávid Szeghy and Mahmoud Aslan and Áron Fóthi and Balázs Mészáros and Zoltán Ádám Milacski and András Lőrincz},
title={Structural Extensions of Basis Pursuit: Guarantees on Adversarial Robustness},
booktitle={Proceedings of the 3rd International Conference on Deep Learning Theory and Applications - DeLTA},
year={2022},
pages={77-85},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0011138900003277},
isbn={978-989-758-584-5},
issn={2184-9277},
}

TY - CONF
JO - Proceedings of the 3rd International Conference on Deep Learning Theory and Applications - DeLTA
TI - Structural Extensions of Basis Pursuit: Guarantees on Adversarial Robustness
SN - 978-989-758-584-5
IS - 2184-9277
AU - Szeghy, D.
AU - Aslan, M.
AU - Fóthi, Á.
AU - Mészáros, B.
AU - Milacski, Z.
AU - Lőrincz, A.
PY - 2022
SP - 77
EP - 85
DO - 10.5220/0011138900003277
PB - SciTePress