The "Stop Hate Speech" project combines natural language processing and machine learning with civil society engagement to counter online hate speech. The project is led by alliance F (Federation of Swiss Women’s Associations) and its partners and is implemented in close collaboration with the Digital Democracy Lab (UZH) and the Public Policy Group and Immigration Policy Lab (ETH). Since 2020, the Stop Hate Speech project has been generously supported by Innosuisse, and since 2021 also by a grant from BAKOM, the Swiss Federal Office of Communications. The project seeks to algorithmically detect hate speech across a variety of online venues (newspapers and social media) and to generate actionable knowledge about effective counter-speech strategies. To this end, the project team will develop a deep learning pipeline for automatic hate speech detection and evaluate a range of promising counter-speech strategies with experimental methods. Close cooperation with alliance F and Swiss media outlets ensures that the scientific findings translate directly into the effective detection and reduction of online hate speech. The goal is to improve the quality of public discourse and to minimize the offline consequences of hostile online behavior.
The project is funded by the Heidelberger Akademie der Wissenschaften, the academy of sciences of the state of Baden-Württemberg, as part of the WIN-Kolleg program. It aims to provide insights into how slanted news media coverage shapes public debates and, in turn, affects collective decision making. News may be subtly biased through specific word choices or framing, intentional omissions, or the misrepresentation of specific details. In the most extreme cases, fake news presents entirely fabricated facts to intentionally manipulate public opinion on a given topic. A rich diversity of opinions is desirable, but systematically biased information, if not recognized as such, is a problematic basis for decision making. It is therefore crucial to empower news readers to recognize relative biases in coverage by identifying media bias in a timely fashion and delivering that assessment alongside the actual news coverage – for example, through a specifically designed news aggregator platform. The project connects a long tradition of social science research on media bias with state-of-the-art methodology from computer science. Its first part centers on achieving rapid automated assessment of news media bias from a technical, computer science point of view. The second, social science part then systematically studies how information about (relative) bias in the news can be disseminated so as to enable – rather than hinder – consensus formation and, in turn, collective decision making.