Authors:
Erick Velazquez Godinez, Zoltán Szlávik, Selene Baez Santamaría and Robert-Jan Sips
Affiliation:
Artificial Intelligence Department, myTomorrows, Anthony Fokkerweg 61, 1059 CP Amsterdam, The Netherlands
Keyword(s):
Language Detection, Sentence Embedding, Graphotactics, Linguistic Knowledge.
Abstract:
Language identification remains a challenge for short texts originating from social media. Moreover, domain-specific terminology, which is frequent in the medical domain, may not change cross-linguistically, making language identification even more difficult. We conducted language identification on four datasets: two containing general language and two containing medical language. We evaluated the impact of two embedding representations and a set of linguistic features based on graphotactics. The proposed linguistic features reflect the graphotactics of the languages included in the test dataset. For classification, we implemented two algorithms: random forest and SVM. Our findings show that, when classifying general language, linguistic features perform close to the embedding representations of fastText and BERT. However, when classifying text with technical terms, the linguistic features outperform the embedding representations. Combining embeddings with linguistic features had a positive impact on the classification task in both settings. Our results therefore suggest that these linguistic features can be applied to both large and small datasets while maintaining good performance on general as well as medical language. As future work, we want to test the linguistic features on a larger set of languages.
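The abstract does not specify how the graphotactic features are computed, but a common way to capture which letter sequences are admissible in a language is to use character n-gram features with a linear classifier. The sketch below is a hypothetical illustration under that assumption (the texts, labels, and n-gram range are invented for the example and are not taken from the paper):

```python
# Hypothetical sketch: approximating graphotactic features with
# character n-grams, classified by a linear SVM (scikit-learn).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Toy training data (invented for illustration, not from the paper).
train_texts = [
    "the patient received treatment",
    "language identification is hard",
    "el paciente recibió tratamiento",
    "la identificación del idioma es difícil",
]
train_labels = ["en", "en", "es", "es"]

# Character bigrams/trigrams (with word-boundary padding) roughly
# encode which letter sequences each language permits.
clf = make_pipeline(
    CountVectorizer(analyzer="char_wb", ngram_range=(2, 3), binary=True),
    LinearSVC(),
)
clf.fit(train_texts, train_labels)

prediction = clf.predict(["una frase corta"])[0]
```

Because such features are surface-level, they remain informative even when domain-specific terms (e.g. drug names) are shared across languages, which is consistent with the paper's finding on medical text.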