Using the Toulmin Model of Argumentation to Explore the Differences in Human and Automated Hiring Decisions

Hebah Bubakr, Chris Baber


Amazon developed an experimental hiring tool that used AI to review job applicants' résumés, with the goal of automating the search for the best talent. However, the team found that their software was biased against women: the models were trained on résumés submitted to the company over the previous 10 years, and most of these came from men, reflecting male dominance in the tech industry. As a result, the models learned that male candidates were preferable and penalized résumés that could be inferred to come from female applicants. Gender bias was not the only issue. As well as rejecting plausible candidates, problems with the data led the models to recommend unqualified candidates for jobs. To understand the conflict in this and similar examples, we apply the Toulmin model of argumentation. By considering how an argument is constructed by a human and how a contrasting argument might be constructed by AI, we can conduct pre-mortems of potential conflict in system operation.
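The Toulmin model decomposes an argument into claim, grounds (data), warrant, backing, qualifier, and rebuttal. As a purely illustrative sketch (not the authors' analysis, and with hypothetical argument content loosely based on the Amazon example above), the contrast between a human's and an AI's argument might be encoded as follows:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ToulminArgument:
    """Toulmin's six-part argument structure."""
    claim: str               # the conclusion being argued for
    grounds: str             # data/evidence offered in support of the claim
    warrant: str             # the rule linking grounds to claim
    backing: str = ""        # support for the warrant itself
    qualifier: str = ""      # strength of the claim (e.g. "probably")
    rebuttals: List[str] = field(default_factory=list)  # conditions that defeat the claim

# Hypothetical AI-side argument: the claim follows from statistical similarity
# to past decisions, so the warrant inherits any bias in the training data.
ai_argument = ToulminArgument(
    claim="Reject this applicant",
    grounds="Résumé features resemble historically rejected résumés",
    warrant="Résumés similar to past rejections predict poor fit",
    backing="Model trained on 10 years of submitted résumés",
    qualifier="with high model confidence",
    rebuttals=["Training data reflects historical gender bias"],
)

# Hypothetical human-side argument: the warrant appeals to job-relevant
# qualifications rather than similarity to past decisions.
human_argument = ToulminArgument(
    claim="Interview this applicant",
    grounds="Relevant qualifications and project experience",
    warrant="Qualifications matching the role predict good performance",
    qualifier="probably",
)

# The conflict surfaces in the rebuttal slot: the human can challenge the
# AI's warrant by pointing to the biased backing data.
print(ai_argument.rebuttals[0])
```

Laying the two arguments side by side in this way makes explicit where they diverge: not in the grounds, but in the warrants and the backing each relies on.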
