Author:
Alexandru E. Şuşu
Affiliation:
EPFL, Switzerland
Keyword(s):
Wireless Sensor Nodes, Energy Harvesting, Constrained Markov Decision Processes, Multi-epoch Actions.
Related Ontology Subjects/Areas/Topics:
Discrete Event Systems; Environmental Monitoring and Control; Informatics in Control, Automation and Robotics; Intelligent Control Systems and Optimization; Optimization Algorithms; Signal Processing, Sensors, Systems Modeling and Control; Time Series and System Modeling
Abstract:
The controller of an environmentally powered wireless sensor node (WSN) seeks to maximize the quality of the data measurements and to communicate frequently with the network, while balancing the uncertain energy intake against the consumption. To devise such a system manager we use the Markov Decision Process (MDP) optimization framework. However, our problem has physical characteristics that are not captured in the standard MDP model: namely, the radio interface takes a non-negligible amount of time to synchronize with the network before it can start transmitting the acquired data, which translates into MDP actions spanning multiple epochs. Optimizing without considering this multi-epoch action requirement results in suboptimal MDP policies, which, under certain conditions described in the paper, waste on average 50% of the radio activity. We therefore incorporate this new constraint into the MDP formulation and obtain an optimal policy that performs on average 83% better than a standard MDP policy. This solution also outperforms the heuristic policies we use for comparison, by 14% and 154%, respectively.
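To illustrate the modeling idea the abstract describes, the following is a minimal, self-contained sketch (not the paper's actual model) of how a multi-epoch radio action can be folded into a standard MDP: the state is augmented with a phase counter, so that starting a transmission commits the controller to a synchronization epoch before the data epoch can complete. All parameters (battery levels, harvesting probability, rewards, costs) are hypothetical and chosen only for illustration; the policy is computed by plain value iteration.

```python
import numpy as np

# Hypothetical toy model of an energy-harvesting WSN controller.
LEVELS = 4            # discretized battery levels 0..3
HARVEST_P = 0.5       # chance of harvesting one energy unit per epoch
GAMMA = 0.95          # discount factor

# State: (battery, phase). phase 0 = idle, phase 1 = mid-transmission
# (radio is synchronizing with the network). Actions in phase 0:
# 0 = sleep, 1 = start a transmission (costs 1 energy unit per epoch).
states = [(b, p) for b in range(LEVELS) for p in range(2)]
idx = {s: i for i, s in enumerate(states)}

def step(state, action):
    """Return a list of (probability, next_state, reward) outcomes."""
    b, phase = state
    out = []
    for harvested, hp in ((1, HARVEST_P), (0, 1 - HARVEST_P)):
        if phase == 1:                      # committed: finish the transmission
            nb = min(max(b - 1, 0) + harvested, LEVELS - 1)
            out.append((hp, (nb, 0), 2.0 if b >= 1 else 0.0))
        elif action == 1 and b >= 2:        # start: sync epoch, no data yet
            nb = min(b - 1 + harvested, LEVELS - 1)
            out.append((hp, (nb, 1), 0.0))
        else:                               # sleep (or cannot afford to start)
            nb = min(b + harvested, LEVELS - 1)
            out.append((hp, (nb, 0), 0.0))
    return out

# Value iteration over the augmented state space.
V = np.zeros(len(states))
for _ in range(500):
    newV = np.zeros_like(V)
    for s in states:
        acts = [0, 1] if s[1] == 0 else [0]  # mid-transmission: action forced
        newV[idx[s]] = max(
            sum(p * (r + GAMMA * V[idx[ns]]) for p, ns, r in step(s, a))
            for a in acts)
    delta = np.max(np.abs(newV - V))
    V = newV
    if delta < 1e-9:
        break

# Greedy policy extraction: one action per state.
policy = {}
for s in states:
    acts = [0, 1] if s[1] == 0 else [0]
    policy[s] = max(acts, key=lambda a: sum(
        p * (r + GAMMA * V[idx[ns]]) for p, ns, r in step(s, a)))
```

The phase counter is what makes the action "multi-epoch": once a transmission is started, the policy has no free choice in phase 1, mirroring the paper's observation that a radio must synchronize before it can transmit. A standard MDP without the phase augmentation would treat sync and transmit as independently schedulable, which is exactly the source of the wasted radio activity the abstract quantifies.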