Authors:
Gustavo A. Basílio 1; Thiago B. Pereira 1; Alessandro L. Koerich 2; Hermano Tavares 3; Ludmila Dias 1; Maria G. S. Teixeira 4; Rafael Sousa 5; Wilian H. Hisatugu 4; Amanda S. Mota 3; Anilton S. Garcia 6; Marco Aurélio K. Galletta 7 and Thiago M. Paixão 1
Affiliations:
1 Federal Institute of Espírito Santo (IFES), Campus Serra, Serra, Brazil
2 École de Technologie Supérieure (ÉTS), Montreal, Canada
3 Department of Psychiatry, University of São Paulo Medical School (FMUSP), São Paulo, Brazil
4 Department of Computing and Electronics, Federal University of Espírito Santo (UFES), Campus São Mateus, São Mateus, Brazil
5 Federal University of Mato Grosso (UFMT), Barra do Garças, Brazil
6 Federal University of Espírito Santo (UFES), Campus Goiabeiras, Vitória, Brazil
7 Department of Obstetrics and Gynecology, University of São Paulo Medical School (FMUSP), São Paulo, Brazil
Keyword(s):
Mobile Health, Mental Health, Pregnancy Healthcare, Affective Computing, Facial Analysis, Convolutional Neural Networks, Vision-Language Models, Deep Learning.
Abstract:
Major Depressive Disorder and anxiety disorders affect millions of people globally and account for a significant share of the mental health burden. Early screening is crucial, as timely identification of these disorders can significantly improve treatment outcomes. Artificial intelligence (AI) can support such screening by leveraging multiple data sources, including facial features in digital images. However, existing methods often rely on controlled environments or specialized equipment, limiting their broad applicability. This study explores the potential of AI models for ubiquitous depression-anxiety screening from face-centric selfies. The investigation focuses on high-risk pregnant patients, a population particularly vulnerable to mental health issues. To cope with the limited training data resulting from our clinical setup, pre-trained models were utilized in two different approaches: fine-tuning convolutional neural networks (CNNs) originally designed for facial expression recognition, and employing vision-language models (VLMs) for zero-shot analysis of facial expressions. Experimental results indicate that the proposed VLM-based method significantly outperforms the CNNs, achieving an accuracy of 77.6%. Although there is substantial room for improvement, the results suggest that VLMs are a promising approach to mental health screening.
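The abstract does not detail how the zero-shot VLM analysis is performed. A minimal sketch of what zero-shot facial-expression screening with a vision-language model could look like, assuming a CLIP-style model scored against natural-language prompts; the model checkpoint, the prompt wording, and the input file name below are illustrative assumptions, not the study's actual configuration:

```python
# Sketch: zero-shot image-text matching with a CLIP-style VLM.
# The checkpoint and prompts are assumptions for illustration only.
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Hypothetical prompts contrasting depressive/anxious vs. neutral affect.
prompts = [
    "a selfie of a person showing signs of depression or anxiety",
    "a selfie of a person with a neutral, healthy emotional state",
]

image = Image.open("selfie.jpg")  # face-centric selfie, as in the study
inputs = processor(text=prompts, images=image, return_tensors="pt", padding=True)
outputs = model(**inputs)

# Softmax over image-text similarity scores gives a probability per prompt;
# the higher-scoring prompt acts as the zero-shot screening decision.
probs = outputs.logits_per_image.softmax(dim=-1)
print({p: float(s) for p, s in zip(prompts, probs[0])})
```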