Authors: Chris Argenta and Jon Doyle
Affiliation: North Carolina State University, United States
Keyword(s): Multi-Agent Systems, Plan Recognition.
Related Ontology Subjects/Areas/Topics: Agents; Artificial Intelligence; Artificial Intelligence and Decision Support Systems; Biomedical Engineering; Biomedical Signal Processing; Data Manipulation; Distributed and Mobile Software Systems; Enterprise Information Systems; Health Engineering and Technology Applications; Human-Computer Interaction; Knowledge Engineering and Ontology Development; Knowledge-Based Systems; Methodologies and Methods; Multi-Agent Systems; Neurocomputing; Neurotechnology, Electronics and Informatics; Pattern Recognition; Physiological Computing Systems; Sensor Networks; Soft Computing; Software Engineering; Symbolic Systems
Abstract:
A key challenge in Multi-agent Plan Recognition (MPAR) is effectively pruning the large search space of potential goal/team compositions, because multi-agent scenarios distribute actions and observables across agents. This additional dimension also makes creating a priori plan libraries difficult. In this paper, we describe our strategy for discrete Multi-agent Plan Recognition as Planning (MAPRAP), which extends Ramirez and Geffner's Plan Recognition as Planning (PRAP) approach into multi-agent domains. Like PRAP, MAPRAP uses a planning domain (rather than a plan library) to synthesize and compare the utility costs of plan instances that incorporate potential goals and previous observables, in order to identify the plan being carried out by teams of agents. This initial discrete implementation of MAPRAP includes two pruning strategies to address the explosion of hypotheses. We establish a performance profile for discrete MAPRAP using the well-known multi-agent blocks-world benchmark domain, varying the number of teams, the agent count, and the goal sizes, and measuring accuracy, precision, and recall at each time step. To assess pruning efficiency, we compare the two strategies. In the more aggressive case, our multi-agent team blocks scenarios averaged 1.05 plans synthesized per goal per time step (compared to 0.56 for single-agent scenarios), demonstrating the feasibility of MAPRAP and establishing a benchmark for future improvements.
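The core idea behind the PRAP family of approaches can be illustrated with a small sketch: for each candidate goal, a planner is asked for the cheapest plan that embeds the observations so far and the cheapest plan that ignores them; goals whose costs barely change are the ones an agent acting (near-)optimally is most plausibly pursuing. The snippet below is a hypothetical toy illustration of that cost-comparison scoring, not the authors' implementation; the goal names and cost values are invented, standing in for outputs a real planner would produce.

```python
# Hypothetical sketch of PRAP-style goal scoring (not the paper's code).
# A planner is assumed to supply, for each candidate goal G:
#   costs_with_obs[G]    -- cost of the cheapest plan for G consistent
#                           with the observations so far
#   costs_without_obs[G] -- cost of the cheapest plan for G ignoring them
# A small cost difference means the observations fit G well.

def rank_goals(costs_with_obs, costs_without_obs):
    """Rank candidate goals by cost difference (smaller = more likely)."""
    deltas = {
        g: costs_with_obs[g] - costs_without_obs[g]
        for g in costs_without_obs
    }
    return sorted(deltas.items(), key=lambda kv: kv[1])

# Invented costs for three candidate blocks-world goals:
with_obs = {"stack_AB": 4, "stack_CD": 7, "clear_E": 9}
without_obs = {"stack_AB": 4, "stack_CD": 5, "clear_E": 5}

ranking = rank_goals(with_obs, without_obs)
print(ranking)  # "stack_AB" ranks first: the observations add no extra cost
```

MAPRAP's contribution, per the abstract, is extending this comparison across hypothesized team compositions and pruning the resulting hypothesis space, since each goal/team pairing would otherwise require its own pair of planner calls at every time step.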