Authors: Jose Antonio Martin H. (1) and Javier de Lope (2)
Affiliations: (1) Faculty of Computer Science, Universidad Complutense de Madrid, Spain; (2) Universidad Politécnica de Madrid, Spain
Keyword(s):
Dynamic Optimization, Goal Coordination, Robotics, Multi-Objective Optimization, Reinforcement Learning, Optimal Control.
Related Ontology Subjects/Areas/Topics: Artificial Intelligence; Artificial Intelligence and Decision Support Systems; Computational Intelligence; Enterprise Information Systems; Evolutionary Computation and Control; Informatics in Control, Automation and Robotics; Intelligent Control Systems and Optimization; Machine Learning in Control Applications; Optimization Algorithms; Soft Computing
Abstract:
A general framework for the coordination of multiple competing goals in dynamic environments for physical agents is presented. This approach to goal coordination is a novel tool for endowing purely reactive agents with a deep coordination ability. The framework is based on the notion of multi-objective optimization. We propose a kind of "aggregating functions" formulation with the particularity that the aggregation is weighted by a dynamic unitary weighting vector ω(S) which depends on the dynamic state of the system, allowing the agent to dynamically coordinate the priorities of its individual goals. This dynamic unitary weighting vector is represented as a set of n − 1 angles. The dynamic coordination must be established by a mapping from the state of the agent's environment S to the set of angles Φi(S) using some machine learning tool. In this work we investigate the use of Reinforcement Learning as a first approach to learning that mapping.
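The abstract's construction can be sketched as follows: n − 1 angles determine an n-dimensional unit weight vector, which then scalarizes the agent's competing goal values. This is a minimal illustration assuming a standard spherical-coordinates parameterization; the abstract does not fix the exact construction, and the function names here are hypothetical.

```python
import numpy as np

def angles_to_unit_weights(phi):
    """Map n-1 angles phi to an n-dimensional unit vector w.

    Uses spherical coordinates (one plausible construction, not
    necessarily the paper's): w[0] = cos(phi[0]),
    w[1] = sin(phi[0])*cos(phi[1]), ..., w[n-1] = prod(sin(phi)).
    The result always has Euclidean norm 1.
    """
    phi = np.asarray(phi, dtype=float)
    n = phi.size + 1
    w = np.ones(n)
    for i, angle in enumerate(phi):
        w[i] *= np.cos(angle)      # close out the i-th coordinate
        w[i + 1:] *= np.sin(angle)  # remaining mass goes to later coords
    return w

def aggregate(goal_values, phi):
    """Scalarize the vector of competing goal values with the
    state-dependent weights derived from the angles phi."""
    return float(np.dot(angles_to_unit_weights(phi), goal_values))
```

In the framework described above, a learner (e.g. a Reinforcement Learning agent) would output `phi` as a function of the environment state S, so that the relative priority of each goal shifts with the dynamics rather than being fixed in advance.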