TY - JOUR
T1 - Quantifying the impact of context on the quality of manual hate speech annotation
AU - Ljubešić, Nikola
AU - Mozetič, Igor
AU - Kralj Novak, Petra
N1 - Publisher Copyright:
© The Author(s), 2022.
PY - 2022/8/22
Y1 - 2022/8/22
N2 - The quality of annotations in manually annotated hate speech datasets is crucial for automatic hate speech detection. This contribution focuses on the positive effects of manually annotating online comments for hate speech within the context in which the comments occur. We quantify the impact of context availability by meticulously designing an experiment: Two annotation rounds are performed, one in-context and one out-of-context, on the same English YouTube data (more than 10,000 comments), by using the same annotation schema and platform, the same highly trained annotators, and quantifying annotation quality through inter-annotator agreement. Our results show that the presence of context has a significant positive impact on the quality of the manual annotations. This positive impact is more noticeable among replies than among comments, although the former is harder to consistently annotate overall. Previous research reporting that out-of-context annotations favour assigning non-hate-speech labels is also corroborated, showing further that this tendency is especially present among comments inciting violence, a highly relevant category for hate speech research and society overall. We believe that this work will improve future annotation campaigns even beyond hate speech and motivate further research on the highly relevant questions of data annotation methodology in natural language processing, especially in the light of the current expansion of its scope of application.
KW - Hate speech
KW - Impact of context
KW - Inter-annotator agreement
KW - Manual annotation
UR - http://www.scopus.com/inward/record.url?scp=85179759795&partnerID=8YFLogxK
U2 - 10.1017/S1351324922000353
DO - 10.1017/S1351324922000353
M3 - Article
AN - SCOPUS:85179759795
SN - 1351-3249
VL - 29
SP - 1481
EP - 1494
JO - Natural Language Engineering
JF - Natural Language Engineering
IS - 6
ER -