Authors:
Arthur Soares de Quadros 1; Sarah Magalhães 1; Giulia Zanon de Castro 2; Jéssica Almeida de Lima 2; Wladmir Brandão 1 and Alessandro Vieira 2
Affiliations:
1 Institute of Exact Sciences and Informatics, Pontifical Catholic University of Minas Gerais, Dom José Gaspar Street, 500, Belo Horizonte, Brazil
2 Sólides S.A., Tomé de Souza Street, 845, Belo Horizonte, Brazil
Keyword(s):
Wage Discrimination, Bias, Artificial Intelligence, Machine Learning, Salary Prediction.
Abstract:
Now more than ever, automated decision-making systems such as Artificial Intelligence models are being used to make decisions based on sensitive social data. For this reason, it is important to understand the impact of social features on these models for salary prediction and wage classification, to avoid perpetuating unfairness that already exists in society. In this study, publicly accessible data on jobs and employees in Brazil was analyzed with descriptive and inferential statistical methods to measure social bias. The impact of social features on decision-making systems was also evaluated, and it was found to vary depending on the model. This study concluded that, for a model with a complex approach to analyzing the training data, social features cannot define its predictions with a discernible pattern, whereas for models with a simpler approach, they can. This means that, depending on the model used, an automated decision-making system can be more, or less, susceptible to social bias.