EXPLOITING SIMILARITY INFORMATION IN REINFORCEMENT LEARNING - Similarity Models for Multi-Armed Bandits and MDPs

Author: Ronald Ortner

Affiliation: Lehrstuhl für Informationstechnologie, Montanuniversität Leoben, Austria

Keyword(s): Reinforcement learning, Markov decision process, Multi-armed bandit, Similarity, Regret.

Related Ontology Subjects/Areas/Topics: Artificial Intelligence ; Computational Intelligence ; Evolutionary Computing ; Knowledge Discovery and Information Retrieval ; Knowledge-Based Systems ; Machine Learning ; Reactive AI ; Soft Computing ; Symbolic Systems ; Uncertainty in AI

Abstract: This paper considers reinforcement learning problems with additional similarity information. We start with the simple setting of multi-armed bandits in which the learner knows for each arm its color, where it is assumed that arms of the same color have close mean rewards. An algorithm is presented that shows that this color information can be used to improve the dependency of online regret bounds on the number of arms. Further, we discuss to what extent this approach can be extended to the more general case of Markov decision processes. For the simplest case, where the same color for actions means similar rewards and identical transition probabilities, an algorithm and a corresponding online regret bound are given. For the general case, where the same color for actions implies only close but not necessarily identical transition probabilities, we give upper and lower bounds on the error incurred by action aggregation with respect to the color information. These bounds also imply that the general case is far more difficult to handle.
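To make the colored-bandit setting from the abstract concrete, below is a minimal Python sketch of a two-level UCB strategy that first pools observations per color and then refines its choice among the arms of the selected color. The function names, the two-level structure, and the confidence bonuses are illustrative assumptions; this is not the algorithm or the regret analysis presented in the paper.

import math
import random
from collections import defaultdict

# Hypothetical sketch: arms of the same color are assumed to have close mean
# rewards, so observations are pooled per color (level 1) before a standard
# UCB choice among the arms of the chosen color (level 2).
def colored_ucb(arms, colors, horizon, rng=random):
    """arms: list of callables returning a stochastic reward in [0, 1].
       colors: colors[i] is the color label of arm i."""
    n = len(arms)
    arm_count = [0] * n
    arm_sum = [0.0] * n
    color_count = defaultdict(int)
    color_sum = defaultdict(float)

    total_reward = 0.0
    for t in range(1, horizon + 1):
        # Level 1: UCB index over colors, using pooled observations.
        def color_index(c):
            if color_count[c] == 0:
                return float("inf")
            mean = color_sum[c] / color_count[c]
            return mean + math.sqrt(2 * math.log(t) / color_count[c])
        best_color = max(set(colors), key=color_index)

        # Level 2: UCB index over the arms of the chosen color.
        members = [i for i in range(n) if colors[i] == best_color]
        def arm_index(i):
            if arm_count[i] == 0:
                return float("inf")
            mean = arm_sum[i] / arm_count[i]
            return mean + math.sqrt(2 * math.log(t) / arm_count[i])
        arm = max(members, key=arm_index)

        r = arms[arm]()
        arm_count[arm] += 1
        arm_sum[arm] += r
        color_count[best_color] += 1
        color_sum[best_color] += r
        total_reward += r
    return total_reward

if __name__ == "__main__":
    # Two colors; arms within a color have close mean rewards.
    means = [0.8, 0.78, 0.3, 0.32]
    cols = ["red", "red", "blue", "blue"]
    bandit = [lambda m=m: 1.0 if random.random() < m else 0.0 for m in means]
    print(colored_ucb(bandit, cols, horizon=5000))

The intended benefit of such pooling is that exploration effort scales with the number of colors rather than the number of arms, which mirrors the improved dependency on the number of arms mentioned in the abstract.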

CC BY-NC-ND 4.0


Paper citation in several formats:
Ortner, R. (2010). EXPLOITING SIMILARITY INFORMATION IN REINFORCEMENT LEARNING - Similarity Models for Multi-Armed Bandits and MDPs. In Proceedings of the 2nd International Conference on Agents and Artificial Intelligence - Volume 1: ICAART; ISBN 978-989-674-021-4; ISSN 2184-433X, SciTePress, pages 203-210. DOI: 10.5220/0002703002030210

@conference{icaart10,
author={Ronald Ortner},
title={EXPLOITING SIMILARITY INFORMATION IN REINFORCEMENT LEARNING - Similarity Models for Multi-Armed Bandits and MDPs},
booktitle={Proceedings of the 2nd International Conference on Agents and Artificial Intelligence - Volume 1: ICAART},
year={2010},
pages={203-210},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0002703002030210},
isbn={978-989-674-021-4},
issn={2184-433X},
}

TY - CONF

JO - Proceedings of the 2nd International Conference on Agents and Artificial Intelligence - Volume 1: ICAART
TI - EXPLOITING SIMILARITY INFORMATION IN REINFORCEMENT LEARNING - Similarity Models for Multi-Armed Bandits and MDPs
SN - 978-989-674-021-4
IS - 2184-433X
AU - Ortner, R.
PY - 2010
SP - 203
EP - 210
DO - 10.5220/0002703002030210
PB - SciTePress