passively register emotion—they reshape it,
choreograph it, and sometimes even challenge it.
Artists play a pivotal role in authoring these
emotional narratives, embedding ambiguity, rhythm,
and cultural nuance into the logic of interaction.
By comparing multiple modalities—facial, vocal,
and physiological—this paper has outlined the
technological, aesthetic, and symbolic logic
underlying affective interaction. Each modality offers
distinct affordances, but they are unified by a
common goal: to create a dynamic feedback loop
between human emotion and artistic expression. In
this loop, emotion is no longer a static input; it
becomes performative, interpretive, and affectively
resonant.
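To ground this loop in concrete terms, consider the minimal sketch below. It illustrates the architecture described above rather than any particular cited system: it assumes a hypothetical detector that emits continuous valence and arousal estimates (in the spirit of a circumplex representation of affect), and the names AffectEstimate, smooth, and render_parameters are ours, chosen for illustration.

    # Minimal sketch of an affective feedback loop (illustrative only).
    # Assumes a hypothetical detector yielding valence/arousal in [-1, 1].
    from dataclasses import dataclass

    @dataclass
    class AffectEstimate:
        valence: float  # -1 (negative) to 1 (positive)
        arousal: float  # -1 (calm) to 1 (activated)

    def smooth(prev: AffectEstimate, new: AffectEstimate,
               alpha: float = 0.2) -> AffectEstimate:
        # Exponential smoothing keeps the artwork from twitching
        # on noisy single-frame estimates.
        return AffectEstimate(
            valence=(1 - alpha) * prev.valence + alpha * new.valence,
            arousal=(1 - alpha) * prev.arousal + alpha * new.arousal,
        )

    def render_parameters(affect: AffectEstimate) -> dict:
        # Map affect to continuous expressive parameters,
        # not to a fixed emotion label.
        return {
            "hue": 0.6 - 0.3 * affect.valence,            # cooler when negative
            "tempo_bpm": 60 + 30 * (affect.arousal + 1),  # 60-120, rising with arousal
            "turbulence": max(0.0, affect.arousal),       # high arousal destabilizes motion
        }

The point of the sketch is structural: the viewer's state feeds the work, the work's response alters the viewer's state, and the smoothing and mapping choices are precisely where the artist's authorship enters the loop.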
Looking ahead, research and creative
work should emphasize not only the technical
accuracy of emotion recognition but also the poetic
potential of emotional ambiguity. Designers must
consider cultural diversity, user agency, and the
affective ethics of machine interpretation. Rather than
narrowing emotion into rigid classifications,
interactive art should aim to open new spaces for
emotional experience—ones that are reflective,
participatory, and deeply human. Ultimately, the true
potential of emotion in interactive art lies not in
control, but in connection.
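One way to read this closing claim computationally: instead of collapsing a recognizer's output to its most probable label, a work can carry the full distribution forward into its response. The sketch below is hypothetical (the categories, palettes, and the blend_palettes function are ours, for illustration) and assumes a classifier that returns a probability distribution over emotion categories.

    # Illustrative sketch: preserving ambiguity rather than forcing one label.
    def blend_palettes(distribution, palettes):
        # Weight each category's RGB palette by its probability instead of
        # picking the argmax class and discarding the rest of the distribution.
        return tuple(
            sum(p * palettes[label][channel] for label, p in distribution.items())
            for channel in range(3)
        )

    # A mixed, ambiguous state yields a blended color, not a forced choice.
    distribution = {"joy": 0.40, "sadness": 0.35, "surprise": 0.25}
    palettes = {
        "joy": (1.0, 0.8, 0.2),
        "sadness": (0.2, 0.3, 0.8),
        "surprise": (0.9, 0.4, 0.9),
    }
    print(blend_palettes(distribution, palettes))  # approx. (0.695, 0.525, 0.585)

Keeping the distribution intact is one small, concrete expression of designing for connection rather than control: the system answers ambivalence with a blended response instead of overruling it.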