Authors:
Refka Hanachi 1; Akrem Sellami 2; Imed Riadh Farah 1,3

Affiliations:
1 RIADI Laboratory, ENSI, University of Manouba, Manouba, 2010, Tunisia
2 LORIA Laboratory, University of Lorraine and INRIA/CNRS, UMR 7503, Campus Scientifique, 615 Rue du Jardin-Botanique, F-54506 Vandœuvre-lès-Nancy, France
3 ITI Department, IMT Atlantique, 655 Avenue du Technopôle, F-29280 Plouzané, France
Keyword(s):
Brain MRI Images, Dimensionality Reduction, Feature Extraction, Multi-view Graph Autoencoder, Human Behavior Interpretation.
Abstract:
Interpreting human behavior by exploiting the complementary information offered by multimodal functional magnetic resonance imaging (fMRI) data is a challenging task. In this paper, we propose to fuse task-fMRI, which captures brain activation, with rest-fMRI, which captures functional connectivity, incorporating structural MRI (sMRI) as an adjacency matrix to preserve the rich spatial structure between voxels of the brain. We then consider the structural-functional brain connections (3D mesh) as a graph. The aim is to quantify each subject's performance in voice recognition and identification. More specifically, we propose an advanced multi-view graph autoencoder based on the attention mechanism, called MGATE, which seeks to learn a better representation from both the task- and rest-fMRI modalities using the Brain Adjacency Graph (BAG) constructed from sMRI. It yields a multi-view representation learned at all vertices of the brain, which is used as input to our trace regression model to predict the behavioral score of each subject. Experimental results show that the proposed model achieves better prediction rates and reaches competitive performance compared to various existing state-of-the-art graph representation learning models.
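The trace regression step can be illustrated with a minimal sketch: each subject's learned vertex representation is a matrix X_i, and the scalar behavioral score is modeled as y_i = tr(B^T X_i), which reduces to ordinary least squares on vectorized features. All shapes, names, and the synthetic data below are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

# Trace regression sketch: score y_i = trace(B^T @ X_i) + noise, where
# X_i is a (vertices x embedding-dims) feature matrix per subject.
# Since trace(B^T X) = <vec(B), vec(X)>, fitting B is a linear regression
# on the vectorized features. Dimensions here are toy values.
rng = np.random.default_rng(0)
n_subjects, n_vertices, n_dims = 200, 10, 3

# Ground-truth coefficient matrix and synthetic, noise-free subject data.
B_true = rng.normal(size=(n_vertices, n_dims))
X = rng.normal(size=(n_subjects, n_vertices, n_dims))
y = np.array([np.trace(B_true.T @ X_i) for X_i in X])

# Vectorize each X_i so trace regression becomes ordinary least squares.
X_vec = X.reshape(n_subjects, -1)               # (200, 30)
coef, *_ = np.linalg.lstsq(X_vec, y, rcond=None)
B_hat = coef.reshape(n_vertices, n_dims)        # recovered coefficient matrix

# Predicted behavioral scores from the recovered B.
y_hat = X_vec @ coef
```

With noise-free synthetic data and more subjects than coefficients, the least-squares fit recovers B exactly; with real fMRI-derived features one would add regularization (e.g. a nuclear-norm or ridge penalty) to handle noise and high dimensionality.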