
tion, and Public Institutions, pages 19–34. University
of Hawaii Press, Honolulu.
Datta, A., Tschantz, M. C., and Datta, A. (2015). Auto-
mated experiments on ad privacy settings. Proceed-
ings on Privacy Enhancing Technologies, 2015(1):92–
112.
Devinney, H., Björklund, J., and Björklund, H. (2022). Theories of “gender” in NLP bias research. In Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, FAccT ’22, pages 2083–2102, New York, NY, USA. Association for Computing Machinery.
Dictionary, C. (2022). fairness.
Ekstrand, M. D., Das, A., Burke, R., and Diaz, F.
(2022). Fairness in information access systems.
Foundations and Trends® in Information Retrieval,
16(1–2):1–177.
Eren, E., Hondrich, L., Huang, L., Imana, B., Kettemann,
M., Kuai, J., Mattiuzzo, M., Pirang, A., Pop Stefanija,
A., Rzepka, S., Sekwenz, M., Siebert, Z., Stapel, S.,
and Weckner, F. (2021). Increasing Fairness in Tar-
geted Advertising. The Risk of Gender Stereotyping by
Job Ad Algorithms.
Fang, Y., Liu, H., Tao, Z., and Yurochkin, M. (2022). Fair-
ness of machine learning in search engines. In Pro-
ceedings of the 31st ACM International Conference on
Information & Knowledge Management, CIKM ’22, pages 5132–5135. ACM.
European Commission, Directorate-General for Justice and Consumers, and European network of legal experts in gender equality and non-discrimination (2021). Algorithmic discrimination in Europe: challenges and opportunities for gender equality and non-discrimination law. Publications Office, LU.
Francazi, E., Lucchi, A., and Baity-Jesi, M. (2024). Initial
guessing bias: How untrained networks favor some
classes. In Forty-first International Conference on
Machine Learning.
Geyik, S. C., Ambler, S., and Kenthapadi, K. (2019). Fairness-aware ranking in search & recommendation systems with application to LinkedIn Talent Search. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, KDD ’19, pages 2221–2231, New York, NY, USA. Association for Computing Machinery.
Janhunen, J. (2000). Grammatical gender from east to west. Trends in Linguistics Studies and Monographs, 124:689–708.
Keyes, O. (2018). The misgendering machines: Trans/HCI implications of automatic gender recognition. Proc. ACM Hum.-Comput. Interact., 2(CSCW).
Kleinberg, J., Mullainathan, S., and Raghavan, M.
(2017). Inherent Trade-Offs in the Fair De-
termination of Risk Scores. In Papadimitriou,
C. H., editor, 8th Innovations in Theoretical Com-
puter Science Conference (ITCS 2017), volume 67
of Leibniz International Proceedings in Informat-
ics (LIPIcs), pages 43:1–43:23, Dagstuhl, Germany.
Schloss Dagstuhl–Leibniz-Zentrum fuer Informatik.
Kong, Y. (2022). Are “Intersectionally Fair” AI Algorithms
Really Fair to Women of Color? A Philosophical
Analysis. In 2022 ACM Conference on Fairness, Accountability, and Transparency, pages 485–494, Seoul, Republic of Korea. ACM.
Konishi, T. (1993). The semantics of grammatical gender:
A cross-cultural study. Journal of Psycholinguistic
Research, 22(5):519–534.
Kramer, R. (2020). Grammatical gender: A close look at
gender assignment across languages. Annual Review
of Linguistics, 6(1):45–66.
Loi, M. and Heitz, C. (2022). Is Calibration a Fairness Re-
quirement? An Argument from the Point of View of
Moral Philosophy and Decision Theory. In 2022 ACM
Conference on Fairness, Accountability, and Trans-
parency, FAccT ’22, pages 2026–2034, New York,
NY, USA. Association for Computing Machinery.
Makhortykh, M., Urman, A., and Ulloa, R. (2021). Detect-
ing race and gender bias in visual representation of
AI on web search engines. In Boratto, L., Faralli, S.,
Marras, M., and Stilo, G., editors, Advances in Bias
and Fairness in Information Retrieval, pages 36–50,
Cham. Springer International Publishing.
Phillips, W. and Boroditsky, L. (2013). Can quirks of grammar affect the way you think? Grammatical gender and object concepts. In Proceedings of the 25th Annual Cognitive Science Society, pages 928–933. Psychology Press.
Schütze, H., Manning, C. D., and Raghavan, P. (2008). Introduction to information retrieval, volume 39. Cambridge University Press, Cambridge.
Singh, A. and Joachims, T. (2018). Fairness of exposure in rankings. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, KDD ’18, pages 2219–2228, New York, NY, USA. Association for Computing Machinery.
Urman, A. and Makhortykh, M. (2022). “foreign beau-
ties want to meet you”: The sexualization of women
in Google’s organic and sponsored text search results.
New Media & Society, 26(5):2932–2953.
Verma, S. and Rubin, J. (2018). Fairness Definitions Ex-
plained. In Proceedings of the International Workshop
on Software Fairness, Gothenburg, Sweden. ACM.
Wachter, S., Mittelstadt, B., and Russell, C. (2021). Bias
Preservation in Machine Learning: The Legality of
Fairness Metrics Under EU Non-Discrimination Law.
West Virginia Law Review, 123(3):735–790.
West, C. and Zimmerman, D. H. (1987). Doing gender.
Gender & Society, 1(2):125–151.
Žliobaitė, I. (2017). Measuring discrimination in algorithmic decision making. Data Mining and Knowledge Discovery, 31:1060–1089.
KDIR 2025 - 17th International Conference on Knowledge Discovery and Information Retrieval