Authors: Shikhar Raje ¹; Navjyoti Singh ¹ and Shobhit Mohan ²
Affiliations: ¹ International Institute of Information Technology, Hyderabad, India; ² Hyderabad Central University, India
Keyword(s): Preference Aggregation, Stochastic Modelling, Dynamic Voting, Markov Decision Processes, Computational Complexity, Algorithm Analysis.
Related Ontology Subjects/Areas/Topics: Artificial Intelligence; Artificial Intelligence and Decision Support Systems; Co-Evolution and Collective Behavior; Computational Intelligence; Enterprise Information Systems; Evolutionary Computing; Evolutionary Multiobjective Optimization; Game Theory Applications; Representation Techniques; Soft Computing
Abstract: Markov Decision Processes (MDPs) and their variants are standard models in many domains of Artificial Intelligence. However, each model captures a different aspect of real-world phenomena and incurs a different kind of computational complexity. MDPs have also recently found use in scenarios involving the aggregation of preferences, such as recommendation systems and e-commerce platforms. In this paper, we extend one such MDP variant to explore the effect that including observations made by stochastic agents has on the complexity of computing optimal outcomes for voting results. The resulting model captures phenomena of greater complexity than current models, while remaining closer to a real-world setting. The utility of the theoretical model is demonstrated by applying it to the real-world setting of crowdsourcing. We address a key question in the crowdsourcing domain, namely the Exploration vs. Exploitation problem, and demonstrate the flexibility of adapting MDP-based models to Dynamic Voting scenarios.