Authors:
Federico Galatolo ¹; Gabriele Martino ¹; Mario Cimino ¹ and Chiara Tommasi ²
Affiliations:
¹ Dept. Information Engineering, University of Pisa, 56122 Pisa, Italy
² Dept. Civilisations and Forms of Knowledge, University of Pisa, 56126 Pisa, Italy
Keyword(s):
Digital Library, Information Retrieval, Transformer, BERT, Latin.
Abstract:
Dense Information Retrieval (DIR) has recently gained attention due to the advances in deep learning-based word embedding. In particular, for historical languages such as Latin, a DIR task is appropriate although challenging, due to: (i) the complexity of managing searches using traditional Natural Language Processing (NLP); (ii) the availability of fewer resources with respect to modern languages; (iii) the large variation in usage among different eras. In this research, pre-trained transformer models are used as feature extractors to carry out a search on a Latin Digital Library. The system computes embeddings of sentences using state-of-the-art models, i.e., Latin BERT and LaBSE, and uses cosine distance to retrieve the most similar sentences. The paper delineates the system development and summarizes an evaluation of its performance using a quantitative metric based on expert-provided per-query document rankings. The proposed design is suitable for other historical languages. Early results show the higher potential of the LaBSE model, encouraging further comparative research. To foster further development, the data and source code have been publicly released.
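The retrieval step the abstract describes — ranking corpus sentences by cosine similarity to a query embedding — can be sketched as follows. This is a minimal illustration, assuming sentence embeddings have already been produced by a model such as Latin BERT or LaBSE; the function name and array shapes are illustrative, not taken from the released code.

```python
import numpy as np

def retrieve(query_emb, corpus_embs, top_k=3):
    """Rank corpus sentences by cosine similarity to a query embedding.

    query_emb:   1-D array, the embedding of the query sentence.
    corpus_embs: 2-D array, one row per corpus sentence embedding.
    Returns the indices of the top_k most similar sentences and their scores.
    """
    # Normalize so that a dot product equals cosine similarity.
    q = query_emb / np.linalg.norm(query_emb)
    c = corpus_embs / np.linalg.norm(corpus_embs, axis=1, keepdims=True)
    sims = c @ q                        # cosine similarity per corpus sentence
    order = np.argsort(-sims)[:top_k]   # highest similarity first
    return order, sims[order]
```

In practice the corpus embeddings would be precomputed once for the whole digital library, so each query costs only one matrix-vector product.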