
virtual characters at a much more granular level than
in traditional games. Nevertheless, further effort is
needed to improve the overall acceptability of
RL-generated animations.
6 CONCLUSION
In this paper we describe the design of a safe falling
animation system based on reinforcement learning
and present the results of a user study comparing it
with an animation system based on motion capture.
Our animation system consists of a reinforcement
learning agent that controls an articulated, fully phys-
ically simulated 3D humanoid character. The charac-
ter is pushed backwards while standing on flat ground
and the controller tries to minimize the impact of the
character with the ground and the forces inside the
character’s joints. The controller is trained with the
PPO algorithm, and curriculum learning is employed
to aid convergence.
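The exact curriculum is not detailed in this section; a minimal sketch of one common scheme, linearly ramping the strength of the backward push over training, might look like the following (the force range and the linear schedule are illustrative assumptions, not values from the paper):

```python
def push_force(step: int, total_steps: int,
               f_min: float = 50.0, f_max: float = 400.0) -> float:
    """Curriculum schedule: linearly ramp the backward push force (N).

    Early in training the agent faces weak pushes that are easy to
    recover from; the force grows toward f_max as training progresses.
    f_min, f_max and the linear ramp are illustrative assumptions.
    """
    frac = min(step / total_steps, 1.0)  # training progress in [0, 1]
    return f_min + frac * (f_max - f_min)
```

At each episode reset, the environment would apply `push_force(step, total_steps)` as the perturbation magnitude, so task difficulty grows as the policy improves.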
We conduct a user study comparing the RL ap-
proach with a motion capture approach while keep-
ing the visual presentation identical by using the
same humanoid 3D model in both animation systems.
The participants are instructed to make the charac-
ter fall by pushing it at varying strengths and to de-
velop an opinion on the resulting movement. They
are subsequently given a questionnaire in which they
rate the character’s movement on 8 different aspects
on 5-point semantic differential scales. This testing
procedure is carried out for both animation systems
individually.
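The per-aspect analysis of such a questionnaire can be sketched as a simple aggregation of ratings; in the example below, the aspect names are limited to the two named in this section (injury and reactivity):

```python
from statistics import mean

def aspect_means(responses):
    """Average 5-point semantic-differential ratings per aspect.

    responses: one dict per participant, mapping an aspect name
    (e.g. "injury", "reactivity") to a rating in 1..5.
    """
    aspects = responses[0].keys()
    means = {}
    for aspect in aspects:
        ratings = [r[aspect] for r in responses]
        if not all(1 <= v <= 5 for v in ratings):
            raise ValueError(f"{aspect}: ratings must lie on the 1-5 scale")
        means[aspect] = mean(ratings)
    return means
```

Ratings for each animation system would be aggregated separately, yielding one mean per aspect per system for comparison.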
Results show a statistically significant difference
in ratings between the two systems for all but one as-
pect (injury), with the motion capture system being
rated more favorably in 6 out of 8 aspects, and the
RL system being rated more favorably in reactivity.
The impact of background variables (experience with
falling, previous gaming experience) on ratings of in-
jury and reactivity, respectively, is shown to not be
statistically significant.
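The section does not name the statistical test used; for ordinal 5-point ratings, a rank-based test such as Mann-Whitney U is a common choice. A self-contained sketch of the U statistic with average ranks for ties follows (the choice of test is an assumption, not taken from the paper):

```python
def mann_whitney_u(xs, ys):
    """Mann-Whitney U statistic for sample xs versus sample ys.

    Ties receive average ranks, which matters for coarse 1-5 ratings.
    Converting U to a p-value (normal approximation or exact tables)
    is omitted here.
    """
    pooled = sorted(xs + ys)
    # Assign each distinct value the average of its 1-based rank positions.
    rank_of = {}
    i = 0
    while i < len(pooled):
        j = i
        while j < len(pooled) and pooled[j] == pooled[i]:
            j += 1
        rank_of[pooled[i]] = (i + 1 + j) / 2  # mean of ranks i+1 .. j
        i = j
    r1 = sum(rank_of[v] for v in xs)          # rank sum of xs
    return r1 - len(xs) * (len(xs) + 1) / 2   # U statistic for xs
```

A small U relative to `len(xs) * len(ys)` indicates that xs tends to receive lower ratings than ys.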
ACKNOWLEDGEMENTS
This research was fully supported by the Croatian
National Recovery and Resilience Plan (NPOO) un-
der the project Research and Development of Multi-
ple Innovative Products, Services and Business Mod-
els Aimed at Strengthening Sustainable Tourism and
the Green and Digital Transition of Tourism (ROBO-
CAMP), with grant number NPOO.C1.6.R1-I2.01.
GRAPP 2025 - 20th International Conference on Computer Graphics Theory and Applications