STRATEGIC DOMINANCE AND DYNAMIC PROGRAMMING FOR MULTI-AGENT PLANNING - Application to the Multi-Robot Box-pushing Problem

Mohamed Amine Hamila, Emmanuelle Grislin-Le Strugeon, René Mandiau, Abdel-Illah Mouaddib

2012

Abstract

This paper presents a planning approach for a multi-agent coordination problem in a dynamic environment. We introduce the SGInfiniteVI algorithm, which applies theories from the engineering of multi-agent systems and is designed to solve stochastic games. To limit decision complexity and thereby reduce the resources used (memory and processor time), our approach relies on reducing the number of joint actions considered at each decision step. A multi-robot box-pushing scenario is used as a platform to evaluate and validate the approach. We show that only the elimination of weakly dominated actions improves the resolution process, despite a slight deterioration of solution quality due to the information discarded.
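The pruning idea mentioned in the abstract can be illustrated with a small, self-contained example. The sketch below is only an illustration of pure-strategy weak-dominance elimination on a two-player stage game, under the assumption that each agent's payoffs are given as a NumPy matrix; the function name weakly_dominated_rows and the example payoffs are hypothetical and are not taken from the paper or from SGInfiniteVI itself.

```python
import numpy as np

def weakly_dominated_rows(payoff):
    """Indices of rows (actions) weakly dominated by some other row:
    never better than it, and strictly worse in at least one column.
    Only pure-strategy dominance is checked."""
    dominated = set()
    n = payoff.shape[0]
    for i in range(n):
        for j in range(n):
            if i != j and np.all(payoff[j] >= payoff[i]) and np.any(payoff[j] > payoff[i]):
                dominated.add(i)
                break
    return dominated

# Toy 2-agent stage game: rows = agent 1's actions, columns = agent 2's actions.
# payoff1[i, j] is agent 1's reward for joint action (i, j); payoff2 likewise.
payoff1 = np.array([[3, 1],
                    [3, 0]])
payoff2 = np.array([[2, 2],
                    [1, 0]])

# Agent 1's weakly dominated actions are dominated rows of its own matrix;
# agent 2's are dominated columns of its matrix, i.e. rows of the transpose.
drop1 = weakly_dominated_rows(payoff1)      # {1}: action 1 never beats action 0
drop2 = weakly_dominated_rows(payoff2.T)    # {1}: same for agent 2
print("agent 1 can drop:", drop1)
print("agent 2 can drop:", drop2)
```

Dropping such actions shrinks the joint-action set explored at each decision step, which is where the reported reduction in memory and processor time comes from, at the cost of discarding the information carried by the pruned actions.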



Paper Citation


in Harvard Style

Amine Hamila M., Grislin-Le Strugeon E., Mandiau R. and Mouaddib A. (2012). STRATEGIC DOMINANCE AND DYNAMIC PROGRAMMING FOR MULTI-AGENT PLANNING - Application to the Multi-Robot Box-pushing Problem. In Proceedings of the 4th International Conference on Agents and Artificial Intelligence - Volume 2: ICAART, ISBN 978-989-8425-96-6, pages 91-97. DOI: 10.5220/0003707500910097


in Bibtex Style

@conference{icaart12,
author={Mohamed Amine Hamila and Emmanuelle Grislin-Le Strugeon and René Mandiau and Abdel-Illah Mouaddib},
title={STRATEGIC DOMINANCE AND DYNAMIC PROGRAMMING FOR MULTI-AGENT PLANNING - Application to the Multi-Robot Box-pushing Problem},
booktitle={Proceedings of the 4th International Conference on Agents and Artificial Intelligence - Volume 2: ICAART},
year={2012},
pages={91-97},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0003707500910097},
isbn={978-989-8425-96-6},
}


in EndNote Style

TY - CONF
JO - Proceedings of the 4th International Conference on Agents and Artificial Intelligence - Volume 2: ICAART
TI - STRATEGIC DOMINANCE AND DYNAMIC PROGRAMMING FOR MULTI-AGENT PLANNING - Application to the Multi-Robot Box-pushing Problem
SN - 978-989-8425-96-6
AU - Amine Hamila M.
AU - Grislin-Le Strugeon E.
AU - Mandiau R.
AU - Mouaddib A.
PY - 2012
SP - 91
EP - 97
DO - 10.5220/0003707500910097