Crowdsourcing Reliable Ratings for Underexposed Items

Beatrice Valeri, Shady Elbassuoni, Sihem Amer-Yahia

2016

Abstract

We address the problem of acquiring reliable ratings of items such as restaurants or movies from the crowd. A reliable rating is a truthful rating from a worker who is knowledgeable enough about the item she is rating. We propose a crowdsourcing platform that considers workers’ expertise with respect to the items being rated and assigns each worker the best items to rate. In addition, our platform focuses on acquiring ratings for items that have only a few ratings. Traditional crowdsourcing platforms are not suitable for such a task for two reasons. First, ratings are subjective and there is no single correct rating for an item, which makes most existing work on predicting the expertise of crowdsourcing workers inapplicable. Second, traditional crowdsourcing platforms give the requester no control over task assignment. In our case, we are interested in providing workers with the best items to rate based on their estimated expertise for the items and on the number of ratings the items already have. We evaluate the effectiveness of our system using both synthetic and real-world data about restaurants.
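Below is a minimal, illustrative sketch in Python of the kind of assignment the abstract describes: score each item by combining a worker's estimated expertise with how underexposed the item is (how few ratings it already has), then offer the worker the top-scoring items. The scoring formula, names, and parameters are assumptions made for illustration only; they are not the paper's actual model.

# Illustrative sketch only: not the paper's actual assignment algorithm.
# A toy scorer that trades off a worker's estimated expertise for an item
# against the item's need for exposure (fewer existing ratings -> higher need).

from dataclasses import dataclass

@dataclass
class Item:
    item_id: str
    num_ratings: int  # how many ratings the item already has

def assign_items(worker_expertise: dict,
                 items: list,
                 k: int = 5,
                 alpha: float = 0.5) -> list:
    """Return the ids of the k items with the highest combined score.

    worker_expertise maps item_id -> estimated expertise in [0, 1].
    alpha weights expertise against the exposure term 1 / (1 + num_ratings).
    """
    def score(item: Item) -> float:
        expertise = worker_expertise.get(item.item_id, 0.0)
        exposure_need = 1.0 / (1.0 + item.num_ratings)
        return alpha * expertise + (1 - alpha) * exposure_need

    ranked = sorted(items, key=score, reverse=True)
    return [item.item_id for item in ranked[:k]]

# Example: the underexposed "new_cafe" (zero ratings) ranks first, while
# "bistro_12", which the worker knows well, still makes the top two.
items = [Item("bistro_12", 40), Item("new_cafe", 0), Item("diner_7", 3)]
print(assign_items({"bistro_12": 0.9, "diner_7": 0.4}, items, k=2))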



Paper Citation


in Harvard Style

Valeri B., Elbassuoni S. and Amer-Yahia S. (2016). Crowdsourcing Reliable Ratings for Underexposed Items. In Proceedings of the 12th International Conference on Web Information Systems and Technologies - Volume 2: WEBIST, ISBN 978-989-758-186-1, pages 75-86. DOI: 10.5220/0005770700750086


in Bibtex Style

@conference{webist16,
author={Beatrice Valeri and Shady Elbassuoni and Sihem Amer-Yahia},
title={Crowdsourcing Reliable Ratings for Underexposed Items},
booktitle={Proceedings of the 12th International Conference on Web Information Systems and Technologies - Volume 2: WEBIST},
year={2016},
pages={75-86},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0005770700750086},
isbn={978-989-758-186-1},
}


in EndNote Style

TY - CONF
JO - Proceedings of the 12th International Conference on Web Information Systems and Technologies - Volume 2: WEBIST
TI - Crowdsourcing Reliable Ratings for Underexposed Items
SN - 978-989-758-186-1
AU - Valeri B.
AU - Elbassuoni S.
AU - Amer-Yahia S.
PY - 2016
SP - 75
EP - 86
DO - 10.5220/0005770700750086