
REFERENCES
Allen, C. and Hospedales, T. (2019). Analogies explained: Towards understanding word embeddings. In International Conference on Machine Learning, pages 223–231. PMLR.
Bojanowski, P., Grave, E., Joulin, A., and Mikolov, T. (2017). Enriching word vectors with subword information. Transactions of the Association for Computational Linguistics, 5:135–146.
Drozd, A., Gladkova, A., and Matsuoka, S. (2016). Word embeddings, analogies, and machine learning: Beyond king − man + woman = queen. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 3519–3530.
Ethayarajh, K. (2019). Rotate king to get queen: Word relationships as orthogonal transformations in embedding space. arXiv preprint arXiv:1909.00504.
Finley, G., Farmer, S., and Pakhomov, S. (2017). What analogies reveal about word vectors and their compositionality. In Proceedings of the 6th Joint Conference on Lexical and Computational Semantics (*SEM 2017), pages 1–11.
Garneau, N., Hartmann, M., Sandholm, A., Ruder, S., Vulić, I., and Søgaard, A. (2021). Analogy training multilingual encoders. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35(14), pages 12884–12892.
Gladkova, A., Drozd, A., and Matsuoka, S. (2016). Analogy-based detection of morphological and semantic relations with word embeddings: what works and what doesn’t. In Proceedings of the NAACL Student Research Workshop, pages 8–15.
Harris, Z. S. (1954). Distributional structure. WORD, 10(2-3):146–162.
Jurgens, D., Mohammad, S., Turney, P., and Holyoak, K. (2012). SemEval-2012 Task 2: Measuring degrees of relational similarity. In *SEM 2012: The First Joint Conference on Lexical and Computational Semantics – Volume 1: Proceedings of the main conference and the shared task, and Volume 2: Proceedings of the Sixth International Workshop on Semantic Evaluation (SemEval 2012), pages 356–364.
Levy, O. and Goldberg, Y. (2014). Linguistic regularities in sparse and explicit word representations. In Proceedings of the Eighteenth Conference on Computational Natural Language Learning, pages 171–180.
Linzen, T. (2016). Issues in evaluating semantic spaces using word analogies. arXiv preprint arXiv:1606.07736.
Mandelbaum, A. and Shalev, A. (2016). Word embeddings and their use in sentence classification tasks. arXiv preprint arXiv:1610.08229.
Mikolov, T., Chen, K., Corrado, G., and Dean, J. (2013a). Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781.
Mikolov, T., Yih, W.-t., and Zweig, G. (2013b). Linguistic regularities in continuous space word representations. In Vanderwende, L., Daumé III, H., and Kirchhoff, K., editors, Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 746–751, Atlanta, Georgia. Association for Computational Linguistics.
Miller, G. A. (1995). WordNet: A lexical database for English. Communications of the ACM, 38(11):39–41.
Pennington, J., Socher, R., and Manning, C. D. (2014). GloVe: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532–1543.
Řehůřek, R. and Sojka, P. (2010). Software Framework for Topic Modelling with Large Corpora. In Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pages 45–50, Valletta, Malta. ELRA. http://is.muni.cz/publication/884893/en.
Rogers, A., Drozd, A., and Li, B. (2017). The (too many) problems of analogical reasoning with word vectors. In Proceedings of the 6th Joint Conference on Lexical and Computational Semantics (*SEM 2017), pages 135–148.
Salton, G., Wong, A., and Yang, C. S. (1975). A vector space model for automatic indexing. Communications of the ACM, 18(11):613–620.
WEBIST 2025 - 21st International Conference on Web Information Systems and Technologies