

Flexible Noise Based Robustness Certification Against Backdoor Attacks in Graph Neural Networks

Authors: Hiroya Kato 1; Ryo Meguro 2; Seira Hidano 1; Takuo Suganuma 2 and Masahiro Hiji 2

Affiliations: 1 KDDI Research, Inc., Saitama, Japan ; 2 Tohoku University, Miyagi, Japan

Keyword(s): Graph Neural Networks, Robustness Certification, Backdoor Attacks, AI Security.

Abstract: Graph neural networks (GNNs) are vulnerable to backdoor attacks. Although empirical defense methods against such attacks are effective to some extent, they may be bypassed by adaptive attacks. Thus, robustness certification, which can certify model robustness against any type of attack, has recently been proposed. However, existing certified defenses have two shortcomings. First, they add uniform defensive noise to the entire dataset, which degrades the robustness certification. Second, they incur unnecessary computational costs for data with different sizes. To address these issues, in this paper we propose flexible noise based robustness certification against backdoor attacks in GNNs. Our method can flexibly add defensive noise to binary elements in an adjacency matrix with two different probabilities. This improves model robustness because the defender can choose appropriate defensive noise depending on the dataset. Additionally, our method is applicable to graph data with adjacency matrices of different sizes because a calculation in our certification depends only on the size of the attack noise. Consequently, computational costs for the certification are reduced compared with a baseline method. Our experimental results on four datasets show that our method can improve the level of robustness compared with a baseline method. Furthermore, we demonstrate that our method can maintain a higher level of robustness with larger sizes of attack noise and poisoning.
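The core idea in the abstract, adding defensive noise to the binary entries of an adjacency matrix with two different flip probabilities for edges and non-edges, can be sketched as follows. This is a minimal illustration of the noise mechanism only, not the paper's certification procedure; the function name and parameters are hypothetical.

```python
import numpy as np

def add_flexible_noise(adj, p_keep_one, p_keep_zero, rng=None):
    """Perturb a binary adjacency matrix with two flip probabilities.

    Each 1-entry (edge) is kept with probability p_keep_one and flipped
    to 0 otherwise; each 0-entry (non-edge) is kept with probability
    p_keep_zero and flipped to 1 otherwise. Using two separate
    probabilities lets a defender add little noise to the (typically
    sparse) edges while perturbing non-edges at a different rate.
    """
    rng = np.random.default_rng(rng)
    adj = np.asarray(adj)
    # Draw one uniform sample per entry and compare against the
    # keep-probability that applies to that entry's current value.
    keep = np.where(adj == 1,
                    rng.random(adj.shape) < p_keep_one,
                    rng.random(adj.shape) < p_keep_zero)
    return np.where(keep, adj, 1 - adj)

A = np.array([[0, 1], [1, 0]])
noisy = add_flexible_noise(A, p_keep_one=0.9, p_keep_zero=0.99, rng=0)
print(noisy)
```

For an undirected graph one would additionally symmetrize the result (e.g. perturb only the upper triangle and mirror it); that detail is omitted here for brevity.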

CC BY-NC-ND 4.0


Paper citation in several formats:
Kato, H., Meguro, R., Hidano, S., Suganuma, T. and Hiji, M. (2025). Flexible Noise Based Robustness Certification Against Backdoor Attacks in Graph Neural Networks. In Proceedings of the 11th International Conference on Information Systems Security and Privacy - Volume 2: ICISSP; ISBN 978-989-758-735-1; ISSN 2184-4356, SciTePress, pages 552-563. DOI: 10.5220/0013188700003899

@conference{icissp25,
author={Hiroya Kato and Ryo Meguro and Seira Hidano and Takuo Suganuma and Masahiro Hiji},
title={Flexible Noise Based Robustness Certification Against Backdoor Attacks in Graph Neural Networks},
booktitle={Proceedings of the 11th International Conference on Information Systems Security and Privacy - Volume 2: ICISSP},
year={2025},
pages={552-563},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0013188700003899},
isbn={978-989-758-735-1},
issn={2184-4356},
}

TY - CONF

JO - Proceedings of the 11th International Conference on Information Systems Security and Privacy - Volume 2: ICISSP
TI - Flexible Noise Based Robustness Certification Against Backdoor Attacks in Graph Neural Networks
SN - 978-989-758-735-1
IS - 2184-4356
AU - Kato, H.
AU - Meguro, R.
AU - Hidano, S.
AU - Suganuma, T.
AU - Hiji, M.
PY - 2025
SP - 552
EP - 563
DO - 10.5220/0013188700003899
PB - SciTePress