Authors:
Samantha Butcher
and
Beatriz De La Iglesia
Affiliation:
Department of Computing Sciences, University of East Anglia, Norwich Research Park, Norwich, NR4 7TJ, U.K.
Keyword(s):
Political Discourse, Named Entity Recognition, BERT, Entity Framing, Multi-Task Learning, Natural Language Processing.
Abstract:
Political discourse frequently leverages group identity and moral alignment, with weaponised victimhood (WV) standing out as a powerful rhetorical strategy. Dominant actors employ WV to frame themselves or their allies as victims, thereby justifying exclusionary or retaliatory political actions. Despite advancements in Natural Language Processing (NLP), existing computational approaches struggle to capture such subtle rhetorical framing at scale, especially when alignment is implied rather than explicitly stated. This paper introduces a dual-task framework designed to address this gap by linking Named Entity Recognition (NER) with a nuanced rhetorical positioning classification (positive, negative, or neutral; POSIT). By treating rhetorical alignment as a structured classification task tied to entity references, our approach moves beyond sentiment-based heuristics to yield a more interpretable and fine-grained analysis of political discourse. We train and compare transformer-based models (BERT, DistilBERT, RoBERTa) across Single-Task, Multi-Task, and Task-Conditioned Multi-Task Learning architectures. Our findings demonstrate that NER consistently outperformed rhetorical positioning, achieving higher F1-scores and distinct loss dynamics. While single-task learning showed wide loss disparities (e.g., BERT NER 0.45 vs POSIT 0.99), multi-task setups fostered more balanced learning, with losses converging across tasks. Multi-token rhetorical spans proved challenging but showed modest F1 gains in integrated setups. Neutral positioning remained the weakest category, though targeted improvements were observed. Models displayed greater sensitivity to polarised language (e.g., RoBERTa TC-MTL reaching 0.55 F1 on negative spans). Ultimately, entity-level F1 scores converged (NER: 0.60–0.61; POSIT: 0.50–0.52), suggesting increasingly generalisable learning and reinforcing multi-task modelling as a promising approach for decoding complex rhetorical strategies in real-world political language.
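The multi-task architecture described above can be illustrated with a minimal sketch: a shared encoder feeding two token-level classification heads, one for NER tags and one for rhetorical positioning (POSIT), trained with a combined loss. This is an assumption-laden toy in PyTorch, not the authors' implementation; the stand-in encoder, head sizes, and equal loss weighting are all illustrative choices, and in practice the encoder would be a pretrained transformer such as BERT, DistilBERT, or RoBERTa.

```python
# Minimal multi-task sketch (illustrative only, not the paper's code):
# a shared encoder with two token-level heads, one per task.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DualTaskTagger(nn.Module):
    def __init__(self, vocab_size=1000, hidden=64, n_ner=5, n_posit=4):
        super().__init__()
        # Stand-in for a BERT-style encoder; a real system would use a
        # pretrained transformer (BERT, DistilBERT, RoBERTa) here.
        self.embed = nn.Embedding(vocab_size, hidden)
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=hidden, nhead=4,
                                       batch_first=True),
            num_layers=1,
        )
        self.ner_head = nn.Linear(hidden, n_ner)      # entity labels
        self.posit_head = nn.Linear(hidden, n_posit)  # rhetorical positioning

    def forward(self, token_ids):
        h = self.encoder(self.embed(token_ids))
        return self.ner_head(h), self.posit_head(h)

model = DualTaskTagger()
tokens = torch.randint(0, 1000, (2, 16))  # batch of 2 sequences, 16 tokens
ner_logits, posit_logits = model(tokens)

# Joint objective: sum (or weight) the per-task cross-entropies so both
# tasks update the shared encoder, as in multi-task learning.
ner_gold = torch.randint(0, 5, (2 * 16,))
posit_gold = torch.randint(0, 4, (2 * 16,))
loss = (F.cross_entropy(ner_logits.flatten(0, 1), ner_gold)
        + F.cross_entropy(posit_logits.flatten(0, 1), posit_gold))
loss.backward()  # gradients from both tasks flow into the shared encoder
```

A Task-Conditioned variant would additionally feed a task identifier into the encoder or heads; the equal weighting of the two losses here is the simplest choice, and the loss-balancing behaviour reported in the abstract motivates tuning it.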