
Li, G., Hammoud, H. A. A. K., Itani, H., Khizbullin, D., and Ghanem, B. (2023). CAMEL: Communicative agents for "mind" exploration of large scale language model society. arXiv preprint arXiv:2303.17760.
Lin, B. Y., Fu, Y., Yang, K., Ammanabrolu, P., Brahman, F., Huang, S., Bhagavatula, C., Choi, Y., and Ren, X. (2023). SwiftSage: A generative agent with fast and slow thinking for complex interactive tasks. arXiv preprint arXiv:2305.17390.
Maes, P. (1995). Artificial life meets entertainment: lifelike autonomous agents. Communications of the ACM, 38(11):108–114.
Maynez, J., Narayan, S., Bohnet, B., and McDonald, R. (2020). On faithfulness and factuality in abstractive summarization. arXiv preprint arXiv:2005.00661.
Minsky, M. (1988). The Society of Mind. Simon and Schuster.
Mintzberg, H. (1989). The structuring of organizations. Springer.
Moya, L. J. and Tolk, A. (2007). Towards a taxonomy of agents and multi-agent systems. In SpringSim (2), pages 11–18.
Nakajima, Y. (2023). BabyAGI. https://github.com/yoheinakajima/babyagi.
Narendra, K. S. and Annaswamy, A. M. (2012). Stable adaptive systems. Courier Corporation.
Ouyang, L., Wu, J., Jiang, X., Almeida, D., Wainwright, C., Mishkin, P., Zhang, C., Agarwal, S., Slama, K., Ray, A., et al. (2022). Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems, 35:27730–27744.
O’Reilly III, C. A. and Tushman, M. L. (2008). Ambidexterity as a dynamic capability: Resolving the innovator’s dilemma. Research in Organizational Behavior, 28:185–206.
Parasuraman, R., Sheridan, T. B., and Wickens, C. D. (2000). A model for types and levels of human interaction with automation. IEEE Transactions on Systems, Man, and Cybernetics - Part A: Systems and Humans, 30(3):286–297.
Park, J. S., O’Brien, J. C., Cai, C. J., Morris, M. R., Liang, P., and Bernstein, M. S. (2023). Generative agents: Interactive simulacra of human behavior. arXiv preprint arXiv:2304.03442.
Rahmati, A., Fernandes, E., Jung, J., and Prakash, A. (2017). IFTTT vs. Zapier: A comparative study of trigger-action programming frameworks. arXiv preprint arXiv:1709.02788.
Rozanski, N. and Woods, E. (2012). Software systems architecture: Working with stakeholders using viewpoints and perspectives. Addison-Wesley.
Russell, S. (2019). Human compatible: Artificial intelligence and the problem of control. Penguin.
Russell, S. (2022). Artificial intelligence and the problem of control. Perspectives on Digital Humanism, page 19.
Russell, S., Dewey, D., and Tegmark, M. (2015). Research priorities for robust and beneficial artificial intelligence. AI Magazine, 36(4):105–114.
SAE International (2016). Taxonomy and definitions for terms related to driving automation systems for on-road motor vehicles.
Schobbens, P.-Y., Heymans, P., Trigaux, J.-C., and Bontemps, Y. (2007). Generic semantics of feature diagrams. Computer Networks, 51(2):456–479.
Shen, Y., Song, K., Tan, X., Li, D., Lu, W., and Zhuang, Y. (2023). HuggingGPT: Solving AI tasks with ChatGPT and its friends in Hugging Face. arXiv preprint arXiv:2303.17580.
Shrestha, A., Subedi, S., and Watkins, A. (2023). AgentGPT. https://github.com/reworkd/AgentGPT.
Sloman, S. A. (1996). The empirical case for two systems of reasoning. Psychological Bulletin, 119(1):3.
Thoppilan, R., De Freitas, D., Hall, J., Shazeer, N., Kulshreshtha, A., Cheng, H.-T., Jin, A., Bos, T., Baker, L., Du, Y., et al. (2022). LaMDA: Language models for dialog applications. arXiv preprint arXiv:2201.08239.
Torantulino et al. (2023). Auto-GPT. https://github.com/Significant-Gravitas/Auto-GPT.
Tosic, P. T. and Agha, G. A. (2004). Towards a hierarchical taxonomy of autonomous agents. In 2004 IEEE International Conference on Systems, Man and Cybernetics (IEEE Cat. No. 04CH37583), volume 4, pages 3421–3426. IEEE.
TransformerOptimus et al. (2023). SuperAGI. https://github.com/TransformerOptimus/SuperAGI.
Tufte, E. R. (2001). The visual display of quantitative information, volume 2. Graphics Press, Cheshire, CT.
Van Dyke Parunak, H., Brueckner, S., Fleischer, M., and Odell, J. (2004). A design taxonomy of multi-agent interactions. In Agent-Oriented Software Engineering IV: 4th International Workshop, AOSE 2003, Melbourne, Australia, July 15, 2003. Revised Papers 4, pages 123–137. Springer.
Wang, L., Ma, C., Feng, X., Zhang, Z., Yang, H., Zhang, J., Chen, Z., Tang, J., Chen, X., Lin, Y., et al. (2023). A survey on large language model based autonomous agents. arXiv preprint arXiv:2308.11432.
Wei, J., Wang, X., Schuurmans, D., Bosma, M., Xia, F., Chi, E., Le, Q. V., Zhou, D., et al. (2022). Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems, 35:24824–24837.
Wolf, Y., Wies, N., Levine, Y., and Shashua, A. (2023). Fundamental limitations of alignment in large language models. arXiv preprint arXiv:2304.11082.
Wooldridge, M. and Jennings, N. R. (1995). Intelligent agents: Theory and practice. The Knowledge Engineering Review, 10(2):115–152.
Xi, Z., Chen, W., Guo, X., He, W., Ding, Y., Hong, B., Zhang, M., Wang, J., Jin, S., Zhou, E., et al. (2023). The rise and potential of large language model based agents: A survey. arXiv preprint arXiv:2309.07864.
Yudkowsky, E. (2016). The AI alignment problem: why it is hard, and where to start. Symbolic Systems Distinguished Speaker, 4.
Zhang, S., Roller, S., Goyal, N., Artetxe, M., Chen, M., Chen, S., Dewan, C., Diab, M., Li, X., Lin, X. V., et al. (2022). OPT: Open pre-trained transformer language models. arXiv preprint arXiv:2205.01068.