Authors: Daniel A. Braun and Pedro A. Ortega
Affiliation: University of Cambridge, United Kingdom
Keyword(s): Minimum relative entropy principle, Adaptive control, Bayesian control rule, Linear quadratic regulator
Related Ontology Subjects/Areas/Topics: Adaptive Signal Processing and Control; Informatics in Control, Automation and Robotics; Information-Based Models for Control; Signal Processing, Sensors, Systems Modeling and Control
Abstract:
The design of optimal adaptive controllers is usually based on heuristics, because solving Bellman's equations over information states is notoriously intractable. Approximate adaptive controllers often rely on the principle of certainty equivalence, where the control process treats parameter point estimates as if they were the "true" parameter values. Here we instead present a stochastic control rule in which controls are sampled from a posterior distribution over a set of probabilistic input-output models, and the true model is identified by Bayesian inference. This allows the adaptive control problem to be reformulated as an inference and sampling problem derived from a minimum relative entropy principle. Importantly, both inference and action sampling work forward in time, so such a Bayesian adaptive controller is applicable on-line. We demonstrate the improved performance achievable by this approach in linear quadratic regulator examples.
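The control rule described in the abstract can be illustrated with a minimal sketch: at each step the controller samples one model from its current posterior, acts with the LQR gain that would be optimal if that sampled model were true, and then updates the posterior from the observed transition. The scalar system, the finite candidate set `A_CAND`, and all numerical values below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Hypothetical scalar plant (illustrative only): x' = a*x + b*u + noise,
# where the dynamics parameter a is unknown but lies in a finite candidate set.
rng = np.random.default_rng(0)

A_CAND = np.array([0.5, 0.9, 1.2])   # candidate dynamics parameters (assumed)
B, Q, R, SIGMA = 1.0, 1.0, 0.1, 0.1  # known input gain, LQR costs, noise std
A_TRUE = 1.2                         # ground truth, unknown to the controller

def lqr_gain(a, b=B, q=Q, r=R, iters=200):
    """Infinite-horizon scalar discrete-time LQR gain via Riccati iteration."""
    p = q
    for _ in range(iters):
        p = q + a * a * p - (a * b * p) ** 2 / (r + b * b * p)
    return a * b * p / (r + b * b * p)

log_post = np.zeros(len(A_CAND))  # uniform prior over the candidate models
x = 5.0
for t in range(100):
    # Stochastic control rule: sample a model from the current posterior ...
    post = np.exp(log_post - log_post.max())
    post /= post.sum()
    a_samp = rng.choice(A_CAND, p=post)
    # ... and act optimally with respect to the sampled model.
    u = -lqr_gain(a_samp) * x
    x_next = A_TRUE * x + B * u + SIGMA * rng.standard_normal()
    # Forward-in-time Bayesian update from the observed transition
    # (Gaussian log-likelihood of x_next under each candidate model).
    log_post += -0.5 * ((x_next - A_CAND * x - B * u) / SIGMA) ** 2
    x = x_next

post = np.exp(log_post - log_post.max())
post /= post.sum()
print(post)
```

Because both the posterior update and the action sampling run forward in time, the loop above is directly usable on-line; the posterior concentrates on the true model as transitions accumulate, so the sampled gain converges to the optimal one.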