A Computational Model for Simulation of Moral Behavior

Fernanda M. Eliott, Carlos H. C. Ribeiro

2014

Abstract

Our ever-deeper integration with technology raises the possibility of embedding moral prototypes into artificial agents, whether those agents interact with other artificial agents or with biological creatures. We describe MultiA, a computational model for simulating moral behavior, derived by modifying a biologically inspired architecture. MultiA uses reinforcement learning techniques and is intended to produce selective cooperative behavior as a consequence of a biologically plausible model of morality inspired by the notion of empathy. MultiA translates its sensory information into emotions and homeostatic variable values, which feed its cognitive and learning systems. Moral behavior is expected to emerge from the artificial social emotion of sympathy and its associated feeling of empathy, grounded in an ability to internally emulate other agents' internal states.
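
Since the abstract describes the architecture only at a high level, the Python sketch below illustrates one plausible reading of it: homeostatic variables are updated from sensations, a scalar well-being acts as the reinforcement signal for Q-learning, and empathy is implemented as internal emulation (appraising another agent's observed situation with the agent's own machinery). All names here (HomeostaticAgent, wellbeing, appraise, empathic_reward, the sympathy weight) are illustrative assumptions, not the authors' actual implementation.

import random
from collections import defaultdict

class HomeostaticAgent:
    """A minimal sketch, not the MultiA code: homeostatic variables, a
    scalar well-being as reward, and empathy via internal emulation."""

    def __init__(self, actions, alpha=0.1, gamma=0.9, epsilon=0.1, sympathy=0.5):
        self.actions = list(actions)
        self.q = defaultdict(float)      # Q-values for (state, action) pairs
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        self.sympathy = sympathy         # weight of the empathic reward term
        self.homeostasis = {"energy": 1.0, "integrity": 1.0}

    @staticmethod
    def wellbeing(homeostasis):
        # Scalar "feeling": how close the variables are to their ideal (1.0).
        return sum(homeostasis.values()) / len(homeostasis)

    def appraise(self, sensation):
        # Translate sensory input (a delta per homeostatic variable) into
        # updated values clipped to [0, 1]; return the resulting well-being.
        for var, delta in sensation.items():
            self.homeostasis[var] = min(1.0, max(0.0, self.homeostasis[var] + delta))
        return self.wellbeing(self.homeostasis)

    def empathic_reward(self, own_sensation, other_sensation):
        # Sympathy via internal emulation: "how would I feel in the other's
        # situation?" -- run the other's observed deltas through our own state.
        # Assumes both sensations use the same homeostatic variable names.
        own = self.appraise(own_sensation)
        emulated = self.wellbeing({
            var: min(1.0, max(0.0, self.homeostasis[var] + delta))
            for var, delta in other_sensation.items()})
        return (1.0 - self.sympathy) * own + self.sympathy * emulated

    def act(self, state):
        # Epsilon-greedy action selection over the learned Q-values.
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def learn(self, state, action, reward, next_state):
        # Standard one-step Q-learning update.
        best_next = max(self.q[(next_state, a)] for a in self.actions)
        self.q[(state, action)] += self.alpha * (
            reward + self.gamma * best_next - self.q[(state, action)])

With this reading, selective cooperation can arise naturally: whenever the sympathy weight is greater than zero, actions that raise a partner's emulated well-being also raise the agent's own reward, so cooperative behavior is reinforced without being hard-coded.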



Paper Citation


in Harvard Style

Eliott, F. M. and Ribeiro, C. H. C. (2014). A Computational Model for Simulation of Moral Behavior. In Proceedings of the International Conference on Neural Computation Theory and Applications - Volume 1: NCTA (IJCCI 2014), ISBN 978-989-758-054-3, pages 282-287. DOI: 10.5220/0005139002820287


in Bibtex Style

@conference{ncta14,
author={Fernanda M. Eliott and Carlos H. C. Ribeiro},
title={A Computational Model for Simulation of Moral Behavior},
booktitle={Proceedings of the International Conference on Neural Computation Theory and Applications - Volume 1: NCTA (IJCCI 2014)},
year={2014},
pages={282-287},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0005139002820287},
isbn={978-989-758-054-3},
}


in EndNote Style

TY - CONF
JO - Proceedings of the International Conference on Neural Computation Theory and Applications - Volume 1: NCTA (IJCCI 2014)
TI - A Computational Model for Simulation of Moral Behavior
SN - 978-989-758-054-3
AU - Eliott, F. M.
AU - Ribeiro, C. H. C.
PY - 2014
SP - 282
EP - 287
DO - 10.5220/0005139002820287