REFERENCES
Barrett, L. F. (2011). Was Darwin Wrong About Emotional
Expressions? Current Directions in Psychological
Science, 20(6):400–406.
Breiman, L. (2001). Random Forests. Machine Learning,
45(1):5–32.
Cheong, J. H., Chang, L., Jolly, E., Xie, T., skbyrne,
Kenney, M., Haines, N., and Büchner, T. (2022).
Cosanlab/py-feat: 0.4.0. Zenodo.
Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., and Fei-Fei,
L. (2009). ImageNet: A large-scale hierarchical im-
age database. In 2009 IEEE Conference on Computer
Vision and Pattern Recognition, pages 248–255.
Ekman, P. and Friesen, W. V. (1978). Facial Action Coding
System. Consulting Psychologists Press.
Goodfellow, I. J., Erhan, D., Carrier, P. L., Courville, A.,
Mirza, M., Hamner, B., Cukierski, W., Tang, Y.,
Thaler, D., Lee, D.-H., Zhou, Y., Ramaiah, C., Feng,
F., Li, R., Wang, X., Athanasakis, D., Shawe-Taylor,
J., Milakov, M., Park, J., Ionescu, R., Popescu, M.,
Grozea, C., Bergstra, J., Xie, J., Romaszko, L., Xu,
B., Chuang, Z., and Bengio, Y. (2013). Challenges in
Representation Learning: A report on three machine
learning contests.
Goodfellow, I. J., Pouget-Abadie, J., Mirza, M., Xu, B.,
Warde-Farley, D., Ozair, S., Courville, A., and Ben-
gio, Y. (2014). Generative Adversarial Networks. Ad-
vances in Neural Information Processing Systems, 27.
He, K., Zhang, X., Ren, S., and Sun, J. (2016). Deep Resid-
ual Learning for Image Recognition. Proceedings of
the IEEE Conference on Computer Vision and Pattern
Recognition, pages 770–778.
Heaven, D. (2020). Why faces don’t always tell the truth
about feelings. Nature, 578(7796):502–504.
Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., and
Hochreiter, S. (2017). GANs Trained by a Two Time-
Scale Update Rule Converge to a Local Nash Equi-
librium. Advances in Neural Information Processing
Systems, 30.
Hjortsjö, C.-H. (1969). Man’s Face and Mimic Language.
Studentlitteratur.
Howard, A. G., Zhu, M., Chen, B., Kalenichenko, D.,
Wang, W., Weyand, T., Andreetto, M., and Adam,
H. (2017). MobileNets: Efficient Convolutional Neu-
ral Networks for Mobile Vision Applications. CoRR,
abs/1704.04861.
Isola, P., Zhu, J.-Y., Zhou, T., and Efros, A. A. (2017).
Image-to-Image Translation with Conditional Adver-
sarial Networks. Proceedings of the IEEE Conference
on Computer Vision and Pattern Recognition, pages
1125–1134.
Krizhevsky, A., Sutskever, I., and Hinton, G. E. (2012). Im-
ageNet Classification with Deep Convolutional Neu-
ral Networks. In Advances in Neural Information Pro-
cessing Systems, volume 25. Curran Associates, Inc.
Lhermitte, S., Verbesselt, J., Verstraeten, W., and Coppin,
P. (2011). A comparison of time series similarity
measures for classification and change detection of
ecosystem dynamics. Remote Sensing of Environment,
115(12):3129–3152.
Li, Y., Liu, S., Yang, J., and Yang, M.-H. (2017). Gener-
ative Face Completion. In Proceedings of the IEEE
Conference on Computer Vision and Pattern Recogni-
tion, pages 3911–3919.
Liu, S., Wei, Y., Lu, J., and Zhou, J. (2018). An Im-
proved Evaluation Framework for Generative Adver-
sarial Networks. CoRR, abs/1803.07474.
Luan, P., Huynh, V., and Tuan Anh, T. (2020). Facial Ex-
pression Recognition Using Residual Masking Network.
In IEEE 25th International Conference on Pattern
Recognition, pages 4513–4519.
Mathai, J., Masi, I., and AbdAlmageed, W. (2019). Does
Generative Face Completion Help Face Recognition?
2019 International Conference on Biometrics (ICB).
Mathiasen, A. and Hvilshøj, F. (2021). Backpropagat-
ing through Fréchet Inception Distance. CoRR,
abs/2009.14075.
Nguyen, T., Tran, A. T., and Hoai, M. (2021). Lip-
stick Ain’t Enough: Beyond Color Matching for In-
the-Wild Makeup Transfer. In Proceedings of the
IEEE/CVF Conference on Computer Vision and Pat-
tern Recognition, pages 13305–13314.
Schaede, R. A., Volk, G. F., Modersohn, L., Barth,
J. M., Denzler, J., and Guntinas-Lichius, O. (2017).
Video Instruction for Synchronous Video Recording
of Mimic Movement of Patients with Facial Palsy.
Laryngo-Rhino-Otologie, 96(12):844–849.
Shao, Z., Liu, Z., Cai, J., and Ma, L. (2021). JAA-Net:
Joint Facial Action Unit Detection and Face Align-
ment via Adaptive Attention. International Journal
of Computer Vision, 129(2):321–340.
Simonyan, K. and Zisserman, A. (2015). Very Deep Con-
volutional Networks for Large-Scale Image Recogni-
tion. CoRR, abs/1409.1556.
Szegedy, C., Vanhoucke, V., Ioffe, S., Shlens, J., and Wojna,
Z. (2015). Rethinking the Inception Architecture for
Computer Vision. CoRR, abs/1512.00567.
Taigman, Y., Polyak, A., and Wolf, L. (2016). Unsu-
pervised Cross-Domain Image Generation. CoRR,
abs/1611.02200.
Zhang, R., Isola, P., Efros, A. A., Shechtman, E., and Wang,
O. (2018). The Unreasonable Effectiveness of Deep
Features as a Perceptual Metric. Proceedings of the
IEEE Conference on Computer Vision and Pattern
Recognition, pages 586–595.
Zhu, J.-Y., Park, T., Isola, P., and Efros, A. A. (2017).
Unpaired Image-to-Image Translation using Cycle-
Consistent Adversarial Networks. Proceedings of the
IEEE Conference on Computer Vision and Pattern
Recognition, pages 2223–2232.