
Targeted Adversarial Attacks on Deep Reinforcement Learning Policies via Model Checking

Authors: Dennis Gross 1; Thiago Simão 1; Nils Jansen 1 and Guillermo Pérez 2

Affiliations: 1 Institute for Computing and Information Sciences, Radboud University, Toernooiveld 212, 6525 EC Nijmegen, The Netherlands; 2 Department of Computer Science, University of Antwerp – Flanders Make, Middelheimlaan 1, 2020 Antwerpen, Belgium

Keyword(s): Adversarial Reinforcement Learning, Model Checking.

Abstract: Deep Reinforcement Learning (DRL) agents are susceptible to adversarial noise in their observations that can mislead their policies and decrease their performance. However, an adversary may be interested not only in decreasing the reward, but also in modifying specific temporal logic properties of the policy. This paper presents a metric that measures the exact impact of adversarial attacks against such properties. We use this metric to craft optimal adversarial attacks. Furthermore, we introduce a model checking method that allows us to verify the robustness of RL policies against adversarial attacks. Our empirical analysis confirms (1) the quality of our metric to craft adversarial attacks against temporal logic properties, and (2) that we are able to concisely assess a system’s robustness against attacks.
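The abstract's metric compares how well a fixed policy satisfies a temporal logic property with and without the adversarial perturbation. A minimal, self-contained sketch of that idea (a toy corridor MDP and a hypothetical observation attack, not the paper's implementation or its model checker) computes the probability of the reachability property "eventually reach GOAL" under the clean and attacked policy, and takes the difference as the attack's impact:

```python
# Toy sketch (assumed names, not the paper's code): measure how an
# observation perturbation changes the probability that a fixed policy
# satisfies the reachability property "eventually reach GOAL".
#
# MDP: a 1-D corridor 0..4; GOAL = 4, TRAP = 0 (both absorbing).
# Action "right" moves +1 with prob 0.9 (stays put with 0.1);
# action "left" moves -1 deterministically.

GOAL, TRAP, N = 4, 0, 5

def step_dist(s, a):
    """Return {next_state: probability} for state s and action a."""
    if s in (GOAL, TRAP):                # absorbing states
        return {s: 1.0}
    if a == "right":
        return {min(s + 1, GOAL): 0.9, s: 0.1}
    return {max(s - 1, TRAP): 1.0}       # "left"

def policy(obs):
    """Fixed deterministic policy acting on the (possibly attacked) observation."""
    return "right" if obs < GOAL else "left"

def reach_prob(attack=lambda s: s, iters=1000):
    """Probability of eventually reaching GOAL from each state when the
    adversary maps the true state s to the observation attack(s).
    Computed by value iteration on the induced Markov chain."""
    p = [1.0 if s == GOAL else 0.0 for s in range(N)]
    for _ in range(iters):
        p = [p[s] if s in (GOAL, TRAP) else
             sum(pr * p[t] for t, pr in step_dist(s, policy(attack(s))).items())
             for s in range(N)]
    return p

# Hypothetical attack: shift the observation by one cell, so the policy
# turns around one step before actually reaching GOAL.
attacked = lambda s: min(s + 1, GOAL)

clean = reach_prob()
noisy = reach_prob(attack=attacked)
start = 2
impact = abs(clean[start] - noisy[start])   # attack impact on P(eventually GOAL)
print(f"P_clean={clean[start]:.3f}  P_attacked={noisy[start]:.3f}  impact={impact:.3f}")
```

Here the one-cell shift makes the policy oscillate between states 2 and 3, so the reachability probability collapses from 1 to 0 and the impact metric is maximal; a weaker attack would yield a value strictly between 0 and 1. The paper performs this comparison exactly, via model checking, rather than on a hand-built toy chain.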

CC BY-NC-ND 4.0


Paper citation in several formats:
Gross, D.; Simão, T.; Jansen, N. and Pérez, G. (2023). Targeted Adversarial Attacks on Deep Reinforcement Learning Policies via Model Checking. In Proceedings of the 15th International Conference on Agents and Artificial Intelligence - Volume 3: ICAART; ISBN 978-989-758-623-1; ISSN 2184-433X, SciTePress, pages 501-508. DOI: 10.5220/0011693200003393

@conference{icaart23,
author={Dennis Gross and Thiago Simão and Nils Jansen and Guillermo Pérez},
title={Targeted Adversarial Attacks on Deep Reinforcement Learning Policies via Model Checking},
booktitle={Proceedings of the 15th International Conference on Agents and Artificial Intelligence - Volume 3: ICAART},
year={2023},
pages={501-508},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0011693200003393},
isbn={978-989-758-623-1},
issn={2184-433X},
}

TY - CONF
JO - Proceedings of the 15th International Conference on Agents and Artificial Intelligence - Volume 3: ICAART
TI - Targeted Adversarial Attacks on Deep Reinforcement Learning Policies via Model Checking
SN - 978-989-758-623-1
IS - 2184-433X
AU - Gross, D.
AU - Simão, T.
AU - Jansen, N.
AU - Pérez, G.
PY - 2023
SP - 501
EP - 508
DO - 10.5220/0011693200003393
PB - SciTePress
ER -