Authors: Garrett Nicolai (1) and Robert Hilderman (2)
Affiliations: (1) Dalhousie University, Canada; (2) University of Regina, Canada
Keyword(s): Poker, Evolutionary algorithms, Evolutionary neural networks.
Related Ontology Subjects/Areas/Topics: Artificial Intelligence; Artificial Intelligence and Decision Support Systems; Co-Evolution and Collective Behavior; Computational Intelligence; Concurrent Co-Operation; Enterprise Information Systems; Evolution Strategies; Evolutionary Computing; Evolutionary Robotics and Intelligent Agents; Game Theory Applications; Knowledge Discovery and Information Retrieval; Knowledge-Based Systems; Machine Learning; Soft Computing; Symbolic Systems
Abstract: Computers have difficulty learning to play Texas Hold'em Poker. The game involves a high degree of stochasticity, hidden information, and opponents who deliberately misrepresent their current state. Poker also has a much larger game space than classic parlour games such as Chess and Backgammon. Evolutionary methods have been shown to find relatively good solutions in large state spaces, and neural networks have been shown to solve non-linear search problems. In this paper, we present several algorithms for teaching agents to play No-Limit Texas Hold'em Poker using a hybrid method known as evolving neural networks. Furthermore, we adapt heuristics such as halls of fame and co-evolution to handle populations of Poker agents, which can contain several hundred opponents rather than a single opponent. Our agents were evaluated against several benchmark agents. Experimental results show that the overall best performance was obtained by an agent evolved from a single population (i.e., with no co-evolution) using a large hall of fame. These results demonstrate the effectiveness of our algorithms in creating competitive No-Limit Texas Hold'em Poker agents.
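To make the combination of evolution, neural networks, and a hall of fame concrete, the following is a minimal sketch of the general idea, not the paper's implementation: network architecture, the simplified betting game, and all hyperparameters (population size, mutation scale, hall-of-fame size) are invented here for illustration.

```python
import math, random

random.seed(0)

H = 4  # hidden units; toy network: 1 input (hand strength) -> H -> 1 output

def init_weights():
    # Flat weight vector: H input weights, H hidden biases, H output weights.
    return [random.uniform(-1, 1) for _ in range(3 * H)]

def act(w, strength):
    # Feedforward pass: returns the probability of calling a bet.
    s = 0.0
    for i in range(H):
        h = math.tanh(w[i] * strength + w[H + i])
        s += w[2 * H + i] * h
    return 1.0 / (1.0 + math.exp(-s))  # sigmoid output

def play_hand(w_a, w_b, rng):
    # Toy heads-up hand (not real Hold'em): each agent privately sees a
    # hand strength in [0, 1] and decides to call (bet 1) or fold; if both
    # call, the stronger hand takes the pot.
    sa, sb = rng.random(), rng.random()
    call_a = rng.random() < act(w_a, sa)
    call_b = rng.random() < act(w_b, sb)
    if call_a and not call_b: return 1
    if call_b and not call_a: return -1
    if call_a and call_b: return 2 if sa > sb else -2
    return 0

def fitness(w, opponents, hands=200):
    # Total winnings against every hall-of-fame opponent; a fixed RNG seed
    # means every individual faces the same sequence of deals.
    rng = random.Random(42)
    return sum(play_hand(w, o, rng) for o in opponents for _ in range(hands))

def evolve(generations=10, pop_size=8, hof_size=5):
    pop = [init_weights() for _ in range(pop_size)]
    hall_of_fame = [init_weights()]  # seed opponent
    for _ in range(generations):
        pop.sort(key=lambda w: fitness(w, hall_of_fame), reverse=True)
        champion = pop[0]
        # Each generation's champion joins a bounded hall of fame,
        # so later generations must still beat earlier strategies.
        hall_of_fame = (hall_of_fame + [champion])[-hof_size:]
        # Next generation: elitism plus Gaussian-mutated offspring of the
        # top half of the population.
        pop = [champion] + [
            [x + random.gauss(0, 0.2) for x in random.choice(pop[: pop_size // 2])]
            for _ in range(pop_size - 1)
        ]
    return champion, hall_of_fame

champ, hof = evolve()
```

The hall of fame here plays the role the abstract describes: evolved agents are scored against an archive of past champions rather than a single fixed opponent, which discourages the population from cycling through strategies that only exploit the current generation.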