
Navigating Hate Speech in a Changing Landscape

Author: Melissa Sonnino, Facing Facts Network Director

Edited by: Joanna Perry and Daniel Heller

After a decade of working to address hate speech across Europe, we find ourselves at a critical juncture. The landscape has shifted dramatically: platforms are rolling back protections, algorithms amplify division, and the communities our members serve face intensifying harm. This moment calls for honesty about what we’ve learned, what hasn’t worked, and where we go from here.

This blog reflects on CEJI/Facing Facts’ journey from 2014 to 2025—from monitoring hate speech online to building a Europe-wide network of practitioners, from engaging with tech platforms to stepping back from monitoring work entirely. It’s a story of persistence, frustration, adaptation, and ultimately, a strategic pivot towards what we believe can create more meaningful change.

Our Framework

We operate within an international human rights framework that helps us orient ourselves in addressing this complex phenomenon. This includes standards set by the United Nations, the Council of Europe, and the European Union’s Charter of Fundamental Rights, along with relevant case law from the European Court of Human Rights. We also refer to key EU instruments such as the Framework Decision on combating certain forms of racism and xenophobia by means of criminal law, the Victims’ Rights Directive, the Code of Conduct on countering illegal hate speech online (+), and most recently, the Digital Services Act.

At Facing Facts, we place the needs and experiences of victims at the centre of our work. Our role is to support our community of practice in navigating the impact and the very practical implications of hate speech through training, research, and policy engagement. Our engagement with hate speech goes beyond drawing legal lines between what is allowed and what is not. We recognise that expressions that are technically lawful can still be deeply hurtful or isolating.

We aim to contribute meaningfully to future EU policy developments in this area, while strengthening our support for members working at national level. Our contribution to the hate speech response system includes creating space to reflect on hate speech, whether lawful or unlawful: how it is experienced, how it affects those targeted, and how we, as part of the system, respond with care, responsibility, and awareness.

Therefore, we have developed our own working definition of hate speech, shaped by this international framework and informed by our practical experience in the field.

CEJI/Facing Facts understands hate speech to be any expression, online or offline, which is potentially harmful in a given context to an individual or group based on one or more of their identity characteristics. It may be illegal or legal according to local laws. We recognise the fundamental right to freedom of expression and encourage proportionate responses that balance freedom of expression with the right to be protected from targeted abuse.


Normalisation of Hate Speech

Hate speech is becoming more prevalent, more visible, and more intense, fuelled by a range of global and local developments. These include the resurgence of extremist movements and the growing normalisation of their rhetoric in public discourse, with political leaders in several contexts openly tolerating, and in some cases spreading, hate speech themselves. In recent years, the influence of U.S. political shifts has also played a role, as have similar forms of backlash and anti-progressive rhetoric that have developed across Europe. Together, these dynamics have created a more permissive environment in which hate speech can spread and take hold.

This growing tolerance for hateful and polarising expression highlights the urgent need for clear and enforced codes of conduct for politicians, EU institutions, and the media. The recognition of these risks to democratic resilience has also been addressed at the EU level. The 2025 European Democracy Shield communication highlights the urgency of countering malign interference and hate-fuelled polarisation [1]. It reinforces the importance of systemic responses to mis- and dis-information, hate speech, and other threats targeting democratic processes and affected communities. Our strategy aligns with this broader EU concern by placing hate speech within a wider civic and democratic context and seeking to address both its human and institutional impacts.

Platform Disengagement and Algorithmic Amplification

We are seeing significant rollbacks in content moderation policies. Platforms such as X (formerly Twitter), TikTok, LinkedIn and Meta have removed or weakened key policies and moderation systems on hate speech and disinformation, particularly in the US context [2]. These developments are taking place alongside a broader weakening of international human rights frameworks and a shrinking space for civil society to operate [3].

The role of AI is also becoming more prominent: algorithmic amplification contributes to the spread of hate speech, while misinformation and disinformation become more sophisticated and widespread. At a technical level, platform algorithms appear to systematically favour content that is divisive, polarising, and often borderline illegal, because this type of content drives engagement, even at the expense of harm [4].

Platform Imbalance

We are also witnessing a growing disconnect between social and video media platforms and the rest of the actors in the hate speech response system. Platforms can operate with limited transparency and with vast legal and financial resources at their disposal. This gives them significant power to challenge or delay regulation, while avoiding meaningful engagement with civil society and public authorities [5].

The financial imbalance is significant: there is evidence that platforms can afford to hire top-tier legal firms not only to resist government oversight, but in some cases to intimidate or silence victims and watchdogs [6].

Shrinking Civic Space and Weaponised Human Rights Norms

There is also a concerning trend in how international human rights norms are being interpreted and used. Standards originally intended to protect people from harm can be distorted and weaponised to shield those spreading hate. Legal actions taken to address incitement or violent speech can be reframed as violations of free expression [7].

Reflections and Lessons Learned

Our work on hate speech encompasses monitoring, training and research. Throughout, we have tried to use these activities to promote multi-stakeholder cooperation. Especially through our training and Network management work, we have seen opportunities to strengthen the kind of multi-stakeholder connections that are needed to make response systems more effective. We have engaged in meaningful cooperation with EU institutions and other intergovernmental organisations (IGOs), contributing to what we consider to be impressive policy advancements in the area.

Despite the ongoing efforts invested in this field, we do not see that things are improving for the communities most affected by hate speech. Data from the Fundamental Rights Agency (FRA) on experiences and perceptions of discrimination in different communities, including online hate, confirm a rise in negative experiences, often resulting in the normalisation of discriminatory acts and self-exclusion from digital spaces [8]. What we hear from our members, and what we observe ourselves, confirms that hate speech is intensifying.

This intensification is fuelled not only by shifting political scenarios, wars, and economic instability, but also by the way social and video media platforms are structured. Algorithms and recommendation engines, designed to maximise attention, engagement, and ultimately profit, tend to amplify polarising and hateful content [9]. Posts that provoke strong reactions, even when challenged through counter-speech, are pushed further into users' feeds and made more visible. This logic of amplification creates an environment in which hateful narratives spread faster and more widely, while those trying to challenge them often find themselves reinforcing their reach. The lack of transparency around how these recommendation systems operate makes it even harder to assess the risks and intervene effectively.

Does this mean our work was all for nothing? No, we do not believe so. We share this struggle with many actors in the hate speech response system, including EU institutions that have invested significant effort and financial resources into addressing the issue.

Our involvement in monitoring began at a time when there was still a need to prove the extent and harm of the problem. From 2014 to 2024, we participated in several monitoring projects, and we are grateful for the knowledge and competencies that originated from these experiences and for the partners we worked with. Today, the evidence is there: we know the extent of hate speech is significant, and we know it is harmful. We also experienced this harm directly. Several members of our team have been significantly affected by the emotional and psychological burden of the monitoring work.

We have engaged in years of dialogue with social and video media companies, sometimes constructive, sometimes frustrating. The current moment feels particularly critical. Platforms have shifted their focus from relatively meaningful engagement with communities to ensuring compliance with the Digital Services Act (DSA) [10]. Fear of sanctions has taken priority over honest, long-term cooperation. While platforms remain important actors in EU and national hate speech response systems, we have observed increasing resistance to removing content repeatedly flagged as harmful, and a clear limitation in relying on reporting mechanisms alone to manage the sheer volume of online hate [11]. AI-based content moderation is insufficient, especially when it is not continuously updated to detect evolving or coded discriminatory language, a gap that is particularly evident in languages with significantly fewer content moderators, yet it is increasingly being used to replace human content moderation [12].

As a result, in early 2025, we made the strategic decision to disengage from direct hate speech monitoring work and removed it from our 2026-2028 planned activities. This decision came from a combination of factors: the emotional toll the work placed on our team; the fact that it was never properly resourced or compensated, including to ensure psychological support; and a growing recognition that the burden of moderation should not fall on civil society. It also became clear that the structures we were trying to work with were not responding in a meaningful way. The algorithms of Very Large Platforms continue to amplify extremist content, expression that incites hatred and violence, and mis- and dis-information. This disengagement is allowing the Network Secretariat to strengthen other core activities such as training, research and meaningful exchange with our members and learners.

Since 2015, we have invested in deepening our expertise in online learning and positioned training as a central tool to reach and support our multi-stakeholder community of practice, which brings together a wide range of actors involved in hate speech response systems. We developed online training not only for our Facing Facts community of learners, but also for external clients and partners. CEJI has made significant investments in the Facing Facts Online learning infrastructure [13], guided by our online learning strategy. With the continued support of EU funding, this will remain a core focus of our work moving forward.

Our areas of work do not exist in isolation; they are interdependent and continuously inform each other. At the intersection of our educational and policy work, we conducted research on what motivates our learners to engage with online learning on hate speech and hate crime [14]. This ongoing research is collecting evidence of the positive impact of our multi-stakeholder learning approach.

Since the establishment of the Facing Facts Network in 2022, our role as a connector between grassroots actors and EU-level processes has become clearer. This positioning has empowered us to provide member-informed inputs to EU consultations, based on shared reflections and experiences from across the Network.

The development of two hate speech focused policy briefs, in 2022 [15] and 2024 [16], was also grounded in our members' experience. The first sketches a prototype of a hate speech response system, describing the roles and responsibilities of its key actors. The second analyses the impact of the DSA and explores how our members are navigating the challenges posed by the new regulatory framework. Both briefs have contributed to shaping internal reflection, supported our advocacy efforts, and offered concrete reference points for dialogue with EU institutions and other stakeholders.

Summary of Key Themes and Lessons Learned

Impact on affected communities: Despite years of effort, we do not see meaningful improvement for those most affected by hate speech. Data confirms rising exposure and self-exclusion from digital spaces.

Platform logic and limitations: Algorithms continue to reward and amplify hate. Even counter-speech can reinforce the visibility of harmful content. Platforms have deprioritised meaningful engagement with CSOs and shifted their attention to legal compliance.

Imbalanced power dynamics: The relationship with major social and video media platforms is structurally unequal. Their business model is driven by profit, not safety, and their vast financial and legal resources allow them to avoid accountability and place civil society actors under sustained pressure.

Monitoring experience: Monitoring was necessary in earlier phases to expose the scale of the problem, but the evidence is now clear. Continuing was emotionally taxing, underfunded, and increasingly disconnected from impact and victims' experiences.

Shared frustration and demand for change: Many of these concerns are echoed by members across the Facing Facts Network. We are hearing more and more voices calling for new spaces, different conversations, and approaches that can lead to more meaningful and sustainable change.

Online and multi-stakeholder learning: Through our training and research work, we have seen that learning together across different stakeholder groups is effective in creating space for meaningful conversations and making progress. This insight now informs the implementation of our online learning strategy, which is designed to support shared understanding and practical change within hate speech response systems.

Looking Ahead

The challenges outlined in this reflection are not unique to our organisation. They are systemic issues that demand collective response and structural change. While we’ve made the difficult decision to step back from monitoring work, we remain deeply committed to supporting those on the frontlines of hate speech response systems.

Our focus now is on what we do best: creating spaces for learning, dialogue, and connection across the diverse actors working in this field. Through our online training, research, policy engagement, and the continued growth of the Facing Facts Network, we’re investing in approaches that can sustain long-term change.

The hate speech landscape will continue to evolve, and so will we. Watch this space as we develop our new hate speech strategy and continue to support our community in navigating these complex challenges together.


