TY - GEN
T1 - Handling Disagreement in Hate Speech Modelling
AU - Kralj Novak, Petra
AU - Scantamburlo, Teresa
AU - Pelicon, Andraž
AU - Cinelli, Matteo
AU - Mozetič, Igor
AU - Zollo, Fabiana
N1 - Publisher Copyright:
© 2022, The Author(s).
PY - 2022
Y1 - 2022
AB - Hate speech annotation for training machine learning models is an inherently ambiguous and subjective task. In this paper, we adopt a perspectivist approach to data annotation, model training and evaluation for hate speech classification. We first focus on the annotation process and argue that it drastically influences the final data quality. We then present three large hate speech datasets that incorporate annotator disagreement and use them to train and evaluate machine learning models. As the main point, we propose to evaluate machine learning models through the lens of disagreement by applying proper performance measures to evaluate both annotators’ agreement and models’ quality. We further argue that annotator agreement poses intrinsic limits to the performance achievable by models. When comparing models and annotators, we observed that they achieve consistent levels of agreement across datasets. We reflect upon our results and propose some methodological and ethical considerations that can stimulate the ongoing discussion on hate speech modelling and classification with disagreement.
KW - Annotator agreement
KW - Diamond standard evaluation
KW - Hate speech
UR - http://www.scopus.com/inward/record.url?scp=85135085231&partnerID=8YFLogxK
U2 - 10.1007/978-3-031-08974-9_54
DO - 10.1007/978-3-031-08974-9_54
M3 - Conference contribution
AN - SCOPUS:85135085231
SN - 9783031089732
T3 - Communications in Computer and Information Science
SP - 681
EP - 695
BT - Information Processing and Management of Uncertainty in Knowledge-Based Systems - 19th International Conference, IPMU 2022, Proceedings
A2 - Ciucci, Davide
A2 - Couso, Inés
A2 - Medina, Jesús
A2 - Ślęzak, Dominik
A2 - Petturiti, Davide
A2 - Bouchon-Meunier, Bernadette
A2 - Yager, Ronald R.
PB - Springer Science and Business Media Deutschland GmbH
T2 - 19th International Conference on Information Processing and Management of Uncertainty in Knowledge-Based Systems, IPMU 2022
Y2 - 11 July 2022 through 15 July 2022
ER -