
Authors: Refka Hanachi 1; Akrem Sellami 2 and Imed Riadh Farah 3

Affiliations: 1 RIADI Laboratory, ENSI, University of Manouba, Manouba, 2010, Tunisia; 2 LORIA Laboratory, University of Lorraine and INRIA/CNRS, UMR 7503, Campus Scientifique, 615 Rue du Jardin-Botanique, F-54506 Vandœuvre-lès-Nancy, France; 3 ITI Department, IMT Atlantique, 655 Avenue du Technopôle, F-29280 Plouzané, France

Keyword(s): Brain MRI Images, Dimensionality Reduction, Feature Extraction, Multi-view Graph Autoencoder, Human Behavior Interpretation.

Abstract: Interpreting human behavior by exploiting the complementary information offered by multimodal functional magnetic resonance imaging (fMRI) data is a challenging task. In this paper, we propose to fuse task-fMRI for brain activation and rest-fMRI for functional connectivity, incorporating structural MRI (sMRI) as an adjacency matrix to preserve the rich spatial structure between voxels of the brain. We then consider the structural-functional brain connections (3D mesh) as a graph. The aim is to quantify each subject's performance in voice recognition and identification. More specifically, we propose an advanced multi-view graph autoencoder based on the attention mechanism, called MGATE, which seeks to learn a better representation from both modalities, task- and rest-fMRI, using the Brain Adjacency Graph (BAG) constructed from sMRI. It yields a multi-view representation learned at all vertices of the brain, which is used as input to our trace regression model to predict the behavioral score of each subject. Experimental results show that the proposed model achieves better prediction rates and reaches competitive performance compared to various existing graph representation learning models in the state-of-the-art.
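The abstract combines two building blocks: a graph-attention encoding step over a brain adjacency graph, and a trace-regression readout that maps the learned vertex representation to a scalar behavioral score. The toy sketch below is a hypothetical illustration of those two ideas only (it is not the authors' MGATE code); all names, shapes, and the single-layer GAT-style attention are assumptions for illustration.

```python
import numpy as np

def graph_attention_layer(X, A, W, a):
    """One simplified GAT-style attention layer.

    X: (n, f) node (vertex) features, e.g. fMRI features per brain vertex.
    A: (n, n) binary adjacency (the sMRI-derived graph); assumed to
       include self-loops so every softmax row is well-defined.
    W: (f, h) linear projection; a: (2h,) attention vector.
    Returns Z: (n, h) attention-weighted node embeddings.
    """
    H = X @ W                                  # project features: (n, h)
    h = H.shape[1]
    # attention logits e_ij = LeakyReLU(a^T [h_i || h_j])
    src = H @ a[:h]                            # contribution of node i
    dst = H @ a[h:]                            # contribution of node j
    logits = src[:, None] + dst[None, :]       # (n, n)
    logits = np.where(logits > 0, logits, 0.2 * logits)  # LeakyReLU
    e = np.where(A > 0, logits, -np.inf)       # mask non-edges
    # row-wise softmax over neighbors
    alpha = np.exp(e - e.max(axis=1, keepdims=True))
    alpha = alpha / alpha.sum(axis=1, keepdims=True)
    return np.tanh(alpha @ H)                  # aggregate neighbor features

def trace_regression(Z, B):
    """Predict a scalar behavioral score as trace(B^T Z)."""
    return np.trace(B.T @ Z)

# Toy usage: random small graph standing in for the Brain Adjacency Graph.
rng = np.random.default_rng(0)
n, f, h = 6, 4, 3
A = (rng.random((n, n)) < 0.5).astype(float)
A = np.maximum(A, A.T)                         # symmetrize
np.fill_diagonal(A, 1.0)                       # self-loops
X = rng.standard_normal((n, f))
Z = graph_attention_layer(X, A, rng.standard_normal((f, h)),
                          rng.standard_normal(2 * h))
score = trace_regression(Z, rng.standard_normal((n, h)))
```

In the paper's setting, the embeddings from the two fMRI views would be fused into one multi-view representation before the trace-regression readout; this sketch shows a single view for brevity.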

CC BY-NC-ND 4.0


Paper citation in several formats:
Hanachi, R.; Sellami, A. and Farah, I. (2021). Interpretation of Human Behavior from Multi-modal Brain MRI Images based on Graph Deep Neural Networks and Attention Mechanism. In Proceedings of the 16th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2021) - Volume 4: VISAPP; ISBN 978-989-758-488-6; ISSN 2184-4321, SciTePress, pages 56-66. DOI: 10.5220/0010214400560066

@conference{visapp21,
author={Refka Hanachi and Akrem Sellami and Imed Riadh Farah},
title={Interpretation of Human Behavior from Multi-modal Brain MRI Images based on Graph Deep Neural Networks and Attention Mechanism},
booktitle={Proceedings of the 16th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2021) - Volume 4: VISAPP},
year={2021},
pages={56-66},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0010214400560066},
isbn={978-989-758-488-6},
issn={2184-4321},
}

TY - CONF

JO - Proceedings of the 16th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2021) - Volume 4: VISAPP
TI - Interpretation of Human Behavior from Multi-modal Brain MRI Images based on Graph Deep Neural Networks and Attention Mechanism
SN - 978-989-758-488-6
IS - 2184-4321
AU - Hanachi, R.
AU - Sellami, A.
AU - Farah, I.
PY - 2021
SP - 56
EP - 66
DO - 10.5220/0010214400560066
PB - SciTePress