Spotlight
in
Workshop: 3rd Workshop on practical ML for Developing Countries: learning under limited/low resource scenarios
WHY SO INFLAMMATORY? EXPLAINABILITY IN AUTOMATIC DETECTION OF INFLAMMATORY SOCIAL MEDIA USERS
Cuong Nguyen · Daniel Nkemelu · Michael Best · Ankit Mehta
Hate speech and misinformation, spread over social networking services (SNS) such as Facebook and Twitter, have inflamed ethnic and political violence in countries across the globe. We argue that there is limited research on this problem within the context of the Global South and present an approach for tackling it. Prior work has shown that machine learning models built with user-level interaction features can effectively identify users who spread inflammatory content. While this technique is beneficial in low-resource language settings, where linguistic resources such as ground-truth data are scarce and processing capabilities are limited, it is still unclear how these interaction features contribute to model performance. In this work, we investigate and show significant differences in interaction features between users who spread inflammatory content and those who do not, applying explainability tools to understand our trained model. We find that features with higher interaction significance (such as account age and activity count) show higher explanatory power than features with lower interaction significance (such as name length and whether the user lists a location in their bio). Our work extends research directions that aim to understand the nature of inflammatory content in low-resource, high-risk contexts, as the growth of social media use in the Global South outstrips moderation efforts.
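The abstract does not name the model or explainability tool used, so the following is only an illustrative sketch of the general approach it describes: training a classifier on user-level interaction features and measuring each feature's explanatory power. Here a random forest and scikit-learn's permutation importance stand in for the authors' (unspecified) choices, and all data, feature values, and labels are synthetic; the feature names (account age, activity count, name length, location in bio) are taken from the abstract.

```python
# Illustrative sketch only: model, explainability tool, and data are
# assumptions, not the authors' actual pipeline.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000

# Hypothetical user-level interaction features (names from the abstract).
account_age = rng.exponential(scale=500, size=n)            # days since signup
activity_count = rng.poisson(lam=50, size=n).astype(float)  # posts/interactions
name_length = rng.integers(3, 20, size=n).astype(float)     # characters
has_location = rng.integers(0, 2, size=n).astype(float)     # bio lists a place

# Synthetic labels: inflammatory users skew toward newer, busier accounts.
logits = -0.004 * account_age + 0.03 * activity_count - 1.0
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logits))).astype(int)

feature_names = ["account_age", "activity_count", "name_length", "has_location"]
X = np.column_stack([account_age, activity_count, name_length, has_location])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# Permutation importance: how much held-out accuracy drops when one
# feature's values are shuffled, i.e. its explanatory power.
result = permutation_importance(clf, X_te, y_te, n_repeats=10, random_state=0)
importances = dict(zip(feature_names, result.importances_mean))
for name, imp in sorted(importances.items(), key=lambda t: -t[1]):
    print(f"{name:16s} {imp:+.3f}")
```

Under this synthetic setup, the features that actually drive the labels (account age, activity count) should rank above the uninformative ones, mirroring the abstract's finding that high-significance interaction features carry more explanatory power.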