

Facing Facts Policy Brief

03/06/2024

What is the Digital Services Act and what does it mean for hate speech monitoring and responses?


Introduction and overview

The Digital Services Act (DSA) entered into force in February 2024. Marking a pivotal moment in EU leadership on online content regulation, the European Commission explains, ‘The Digital Services Act (DSA) and the Digital Markets Act (DMA) form a single set of rules that apply across the whole EU. They have two main goals: To create a safer digital space in which the fundamental rights of all users of digital services are protected, [and] to establish a level playing field to foster innovation, growth, and competitiveness, both in the European Single Market and globally.’[1]

The DSA shifts EU strategy on combating hate speech towards a proactive paradigm by giving the Commission oversight of cooperation between social media platforms and other key actors, alongside powers to fine platforms and scrutinise social media algorithms. New responsibilities have been imposed on online platforms, and specific roles, including National Digital Services Coordinators (DSCs) and Trusted Flaggers, have been created.[2] Building on its predecessors, the Code of Conduct on countering illegal hate speech online and the E-Commerce Directive,[3] the DSA holds platforms accountable for user-generated content if they fail to take action against illegal hate speech after being notified of it.

The Commission has already started to use its powers, launching formal proceedings against several regulated entities, including TikTok[4] and X.[5] The recent ECtHR Grand Chamber judgment in Sanchez v. France[6] establishes that in certain situations individual users of social media platforms may be held accountable for failing to address hate speech.

In 2023, Facing Facts Network members raised questions about the impact the DSA would have on their monitoring and response activities regarding online hate speech. In response, the Facing Facts Secretariat developed and disseminated a survey during January-February 2024 to better understand the experiences and concerns of Facing Facts Network members and other organisations working in this field. This briefing aims to reflect their perspectives and offer contextualised analysis and recommendations.

Our analysis shows a lack of information on DSA implementation at the national level, with implications for hate speech monitoring and responses. The apparent lack of relevant industry knowledge among the Digital Services Coordinators appointed so far, combined with the process of appointing official Trusted Flaggers under the DSA, risks undermining the contribution, knowledge and networks of the civil society organisations that developed under the Code of Conduct of the previous regime.

The Commission and its partners are not starting in a vacuum in terms of legislation and networks. The key will be to build on the hate speech response system that has been strengthened under the Code of Conduct and other measures, paying particular attention to the unique contribution of expert CSOs that understand the nature and impact of hate speech on the most vulnerable communities in Europe. The final section of this briefing makes a number of recommendations that aim to achieve this goal.

The implementation of the DSA takes place in a context of rapid development in policy and technology, with the launch of the Code of Practice on Disinformation[7] and the AI Act[8], which aims to “harmonize compliance and tackle cross-border obstacles efficiently”. It will be important to closely monitor these developments and to support key stakeholders in understanding their ongoing implications for monitoring, response and victim support efforts.


Roles, responsibilities and cooperation under the DSA

The Commission explains that the DSA ‘provides a framework for cooperation between the Commission, EU and national authorities to ensure platforms meet its obligations’[9]. The graphic below sets out the key players and their powers.

Figure one: This graphic shows the key entities and relationships established by the Digital Services Act.


Challenges and opportunities in hate speech monitoring and responses under the DSA

Fifteen organisations from 11 countries responded to the Facing Facts questionnaire, including 12 CSOs and three public authorities; 13 respondents are members of the Facing Facts Network. Based on an analysis of survey responses, publicly available information on the DSA, and the Network secretariat’s ongoing policy and advocacy work, several challenges facing the effective implementation of the DSA can be identified.


Available information and the role of the Digital Services Coordinators

While survey respondents reported a high level of awareness about the DSA and its provisions,[10] several were critical of the lack of information on the implementation of the DSA at the national level:

“There was absolutely no information on the implementation of the DSA and appointing of the Digital Services Coordinator.”[11]

Several respondents pointed to a vacuum of monitoring at the national level because national DSCs and Trusted Flaggers have yet to be appointed, and monitoring exercises established under the EU Code of Conduct have stalled. A further strong theme of concern expressed in the survey was that proposed DSCs do not have the knowledge, background or network of relationships to carry out their role effectively.

A respondent explained, ‘[the] soon-to-be-appointed National Digital Services Coordinator is an expert body that does not have any previous contact w[ith] civil society and isn’t aware of the work done in this area re: hate speech (primarily the implementation of The EU Code of conduct on countering illegal hate speech online).’ Indeed, a recent list of appointed DSCs reveals that those appointed to date are agencies for competition and media services with little obvious experience of hate speech monitoring.[12]

Other responses underline these concerns and point to the central importance of building and maintaining strong relationships with civil society experts and equality bodies, coordinating effectively with the Commission, and undertaking training.

‘I would like to stress…the fact that authorities that will be the coordinators under DSA [do] not have any, or had very limited, previous contact with civil society. These institutions need to start conside[ring] their stakeholders differently (because they are usually market oriented) and steps need to be made in order to develop normal relationships and horizontal cooperation.’

‘While some of these institutions focused on online hate speech before, the majority of them have not yet had such an agenda within their scope and capacities. Therefore, their new responsibilities should encompass comprehensive internal training for a specialised team of employees, as well as international coordination between DSCs and EC.’

‘… [DSCs] are all agencies that worked on tele-communications and competence regulations. Nothing related to hate speech, and mostly the non-inclusion of [an] Equality body in any ‘table’ I think…is also particularly problematic. They are creating this Board (all the DSCs from all EU MS), but they are not consulting or working building up from the good practices of Equality bodies or CSO-institutions working together.’


Trusted flaggers

The burden of identifying hate speech falls overwhelmingly on Trusted Flaggers, which have significantly lower status and fewer resources than the platforms/regulated entities they are tasked to monitor. As such, they should not replace these entities as the primary duty-bearers for identifying and reporting hate speech. Although platforms must take down content that does not comply with their terms and conditions, the risk is that most hate speech goes unaddressed unless reported by Trusted Flaggers, or by platforms’ Trusted Partners. Relying on incidents submitted by third parties is not a feasible strategy for effectively addressing hate speech. Sufficient funding needs to be secured for Trusted Flaggers to meet the DSA’s expectations, especially given the requirement of independence from any provider of online platforms.[13]

Transparent accreditation and removal processes for Trusted Flaggers are among the most sensitive areas of DSCs’ responsibility. However, one survey respondent expressed the following concern: ‘There is a risk that the Digital Services Coordinator will not be neutral when awarding the trusted flagger status.’ DSCs will also need to pay particular attention not to exclude communities from being appointed as Trusted Flaggers, and to ensure that all communities are represented and consulted, whether or not they are covered by national legislation on ‘hate speech’. For example, effective hate speech monitoring often requires specific knowledge of linguistic and historical contexts. Diverse pools of Trusted Flaggers are more likely to have this knowledge, as well as relationships with various communities. Comprehensive training programmes and continuous evaluation mechanisms are also essential for an effective Trusted Flagger system. While there has been progress on developing application processes and guidelines at the national level,[14] as of June 2024 the necessary methodologies and training mechanisms have yet to be fully implemented by the Board for DSCs.[15]

Referring back to figure one, there is a lack of clarity about the position and role of the organisations previously appointed as ‘trusted partners/flaggers’ for social media platforms. Between 2016 and 2022 these organisations were a central feature of the previous regime, carrying out monitoring exercises under the EC Code of Conduct. The upcoming update to the Code of Conduct is expected to offer greater clarity on their role. The majority of survey respondents were ‘trusted partners/flaggers’; however, the pathway to becoming a Trusted Flagger under the DSA regime is not without obstacles. Eight respondents reported that they will apply for Trusted Flagger status, six still need to finalise the decision, and one explained that they will not apply due to budgetary issues.

Overall, for some organisations, applying for official Trusted Flagger status under the DSA is not possible due to bureaucratic and financial constraints or a lack of clarity about the terms of independence of the role. There is a concern that social media platforms and Member States will treat only official Trusted Flaggers as their partners and not sufficiently take into account the perspectives and knowledge of other key stakeholders, including the platforms’ Trusted Partners and the soon-to-be-appointed Trusted Monitors. Indeed, while there is some anecdotal evidence that social media platforms will continue to engage with Trusted Partners appointed under the Code of Conduct without a formal role, a loss of the knowledge and networks developed under the previous regime is a risk to the system. The relationship between Trusted Flaggers appointed under the DSA, Trusted Reporters, social media platforms and the Commission should be further clarified. One respondent explained,

“For us the biggest risk is being made irrelevant or redundant by official trusted flaggers. For many of our members, applying for official TF (trusted flagger) status is a no go. With the appointment of other organisations, platforms and member states could argue that only official TFs are relevant and non-official TFs are redundant or maybe even politically motivated…On the other side, becoming official TFs also has its risks. There is a lot of bureaucratic weight put on official TFs and no foreseeable funding line, which makes applying for the status not too desirable.”

Survey respondents also had questions about how the European Centre for Algorithmic Transparency (ECAT) will monitor social media algorithms while also drawing on data from audits prescribed under the DSA, and how this will feed into the oversight process.

In a signal of good practice, one respondent to the survey reported that France is planning to pass its own digital services laws, clarifying certain DSA rules, specifying the role of the supervisory authority, and providing for criminal penalties for the authors of unlawful comments. Other Member States are likely to follow suit.  



Hate speech decision-making

EU law has not so far provided a specific definition of hate speech. However, the 2008 Framework Decision on Combating Certain Forms and Expressions of Racism and Xenophobia (Framework Decision) identified the types of hate speech acts that should be criminalised across the EU.[16] The Framework Decision is constructed around a closed list of protected grounds: “race”, colour, religion, descent or national or ethnic origin. In 2021, the Commission proposed an initiative to extend the list of EU crimes under Article 83(1) of the Treaty on the Functioning of the EU (TFEU) to include hate crime and hate speech irrespective of the protected ground, thereby also covering sex, sexual orientation, gender identity and disability.[17] The unanimous consensus among all Member States in the European Council necessary to adopt this decision has yet to be sought.[18]

The DSA does not establish a substantive legal framework by, for example, providing a definition of illegal hate speech applicable across all Member States. According to the DSA, the term “illegal hate speech” is to be interpreted on the basis of “Union law or the law of any Member State”.[19] Both the Framework Decision and national legislation lack some key protected grounds, such as sexual orientation, gender identity and disability. If an expression is deemed illegal in one Member State, there is a question about how the Commission might take this on board with regard to its own powers in relation to very large online search engines (VLOSEs) and very large online platforms (VLOPs) in the context of another Member State in which a particular characteristic is not legally protected.

It is not clear whether the term “illegal” refers only to criminalised hate speech or also speech that violates national civil or administrative law. As explained by one respondent,

“The general public do not understand what constitutes criminal behaviour versus online (non criminal) hate speech. Clarity is required on what should be reported and to whom – the social media company, the national authority or the police.”

There is a need for clarity on the process of reporting. For example, if all reports are to go through the DSCs or the police, the burden could be overwhelming. If complaints go through social media platforms, there are questions regarding their legitimacy in deciding what is legal and what is not. If there is a legal direction that content must be removed, the user, the organisation, or any third party can flag it to the social media platform, which is then required to remove the content. However, the social media platform is not required to take any proactive steps. In this regard, the DSA has yet to represent progress beyond the 2016 Code of Conduct. For some respondents who have been involved in monitoring hate speech, there is a sense that the DSA might not lead to improvements: ‘[there is] the risk that, despite DSA rules, platforms will not improve their moderation of online hate content. Even today, on some platforms like X, we can’t see any improvement.’



Conclusions and ways forward

The DSA should increase access to justice, safety and support for victims and groups targeted by hate speech. Close and effective cooperation among the key actors described in figure one, as well as effective victim support, will be essential to achieve this. There should be clarity on how orders and requests for information from national judicial or administrative authorities follow the DSA’s criteria and are complied with.[20] To achieve this, there should be a unified methodology for hate speech monitoring, responses, and referrals across the EU, prioritising consistency to mitigate the risk of divergent practices among Member States. Article 35 of the DSA could support this approach:

Hate speech is a specific and systemic risk across the European Union. Article 35 guidelines could carry forward legacy knowledge and networks from the previous regime and help deliver the potential of the DSA as a step change in EU regulation of hate speech. Guidance should also cover decision-making processes related to the assessment of what constitutes illegal hate speech, as well as Trusted Flagger accreditation criteria and withdrawal procedures.[21]

Regulated entities need to take full responsibility for addressing hate speech on their platforms, scrutinising and responding to any content that is incompatible with their terms and conditions, regardless of whether they are notified by a third party. Articles 9 and 10 of the DSA oblige platforms to cooperate with requests for information from national authorities and to appoint a single point of contact to support this process. These are key provisions to ensure victims can access their right to an effective investigation and an effective remedy. In this context, the work and role of ECAT[22] could be further clarified.

Respondents made several suggestions, such as including an indicator for cooperation with civil society in the reporting, oversight and analysis of DSA implementation. Another suggestion was that Digital Services Coordinators should be obliged to organise an annual event with civil society on DSA implementation.

The role and position of DSCs in relation to established regulators, such as press and election regulators that have an existing power or duty with regard to hate speech responses, should be clear. Existing relationships and mechanisms should be strengthened, not unnecessarily replaced or weakened. To support this outcome, DSCs should map their national regulatory context and related key stakeholders as a resource covering legal and non-legal, criminal and non-criminal responses. The Facing Facts system mapping methodology could be a useful resource for this exercise.

Equality bodies could play a key role by supporting efforts to effectively investigate and prosecute illegal hate speech, as well as by systematically monitoring and collecting data on such cases.[23] For example, equality bodies could have powers to ensure that DSCs appoint Trusted Flaggers that represent all targeted communities, including those not currently covered by national hate speech laws.[24]


Recommendations

This section draws on our findings and analysis to present recommendations for all stakeholders with responsibilities within the ‘hate speech response system’ at the EU and national levels, including the Facing Facts Network, the Commission, DSCs, Governments, Trusted Flaggers, and intermediaries/online platforms.

Facing Facts:

  • Training: Develop regular and up-to-date training programmes on hate speech monitoring and the regulatory framework, including the DSA. Ensure that trainings incorporate the importance of mental health and that they are trauma-informed.
  • Dissemination: Share the work of members, for example current projects involving ‘massive monitoring’ such as SafeNet.
  • Raise public awareness: Share activities and information on how to identify and report both criminal and non-criminal hate speech.
  • Monitor implementation: Work with members to monitor how the DSA is implemented at the national level, and bring key themes and gaps to the attention of partners at the EU level, including the High Level Group on Combating Hate Speech and Hate Crime.

European Commission:

  • Transparency Guidelines: Develop and publish comprehensive transparency guidelines under Article 35 of the DSA outlining the operationalisation of the DSA framework.[25] Ensure clarity on decision-making processes, accreditation criteria, and the process for withdrawing Trusted Flagger status under Article 22(2). Include civil society in the development phase in an inclusive and effective way, e.g. via focus groups or consultation meetings.[26]
  • Unified Methodology: Establish a unified methodology for hate speech decision-making across all EU Member States. This will guarantee consistency and avoid discrepancies in the interpretation and application of hate speech regulations by various actors such as Trusted Flaggers, platforms and, subsidiarily, police, prosecutors and judicial or administrative authorities.
  • Coordination mechanism: Establish an effective coordination mechanism, drilling down to the national level, involving the Commission, DSCs, platforms and Trusted Flaggers, as well as civil society organisations working on hate speech, allowing for an open and continuous exchange of knowledge and information with the aim of effectively implementing the DSA.
  • Hate speech as an EU crime: Continue pursuing the effort to establish hate speech as an EU crime under Article 83(1) TFEU.
  • Support for Trusted Flaggers: Provide adequate funding for Trusted Flaggers, who may face challenges in meeting the independence requirements while fulfilling their role as those reporting illegal hate speech to platforms. Without their proactive involvement, the DSA mechanism loses much of its relevance. Tangible financial support for Trusted Flaggers is crucial for effective implementation of the DSA.

National Digital Services Coordinators:

  • Continuous Training: Institute ongoing training programmes for DSC officers dealing with the accreditation of Trusted Flaggers. This will enhance their capacity to make informed decisions, mitigating the risk of errors or biased evaluations. Participate in and organise training in collaboration with CSOs to understand their perspectives and challenges, and to understand the diversity of the (potential) Trusted Flagger landscape.
  • Proactive collaboration with CSOs working in the area of discrimination, hate speech or hate crime, as well as national Equality Bodies: DSCs should seek collaboration with experts at the national level, including through consultation or training, to better understand the legal and non-legal challenges of addressing hate speech and the needs of various victims and targeted communities.

Governments:

  • Legislative Alignment: Align national legislation with relevant human rights standards, e.g. the Council of Europe Committee of Ministers Recommendation CM/Rec(2022)16 on hate speech, as well as with the DSA framework to ensure a cohesive approach. This will contribute to harmonising definitions of hate speech and streamlining legal responses across Member States.
  • Support for DSCs: Provide adequate staffing and financing for DSCs. This support is crucial for effective implementation, preventing potential resource gaps or delays.
  • Support for Trusted Flaggers: Provide adequate funding for Trusted Flaggers, who may face challenges in meeting the independence requirements while fulfilling their role as those reporting illegal hate speech to platforms. Without their proactive involvement, the DSA mechanism loses much of its relevance. Tangible financial support for Trusted Flaggers is crucial for effective implementation of the DSA.
  • National cooperation mechanism: Establish or support cooperation mechanisms to regularly exchange views between the DSCs, other relevant national authorities and civil society.
  • EU crimes initiative: Support the Commission’s initiative to make hate crime and hate speech EU crimes in the European Council.

Trusted Flaggers:

  • Diverse Accreditation Criteria: Advocate to the EC, DSCs and the Board for appropriate accreditation criteria that ensure representation of the various organisations and communities working on different topics. This will foster a broad perspective in addressing hate speech, allowing diverse types of hate to be tackled.
  • Transparent Methodology: Call for a transparent methodology in the accreditation process, emphasising diversity, impartiality, and adherence to predefined standards. This will ensure the reliability and fairness of the Trusted Flagger system.
  • Training: Ensure that staff are trained to recognise illegal hate speech in the EU and national contexts.

Regulated Entities:

  • Clear Reporting Mechanisms: Implement clear reporting mechanisms in line with DSA requirements, ensuring accessibility and user-friendliness. This will facilitate efficient reporting of hate speech while upholding users’ fundamental rights (see the illustrative sketch after this list).
  • Regular Communication with DSCs, coordinated by the EC: Establish regular communication channels with DSCs and civil society organisations to facilitate the exchange of information and support problem-solving. This collaboration will contribute to a more effective and transparent implementation of the DSA.
  • Increase the linguistic expertise of content moderation teams: Ensure that moderation teams have a diversity of linguistic expertise, in addition to existing AI functionality.
  • Consistent application of own policies and regulations: Besides complying with the DSA, ensure consistent and proactive application of the platform’s own policies and regulations to address hate speech, and seek to improve these regulations in the light of existing best practices, while respecting freedom of expression across the Member States. Ensure that sufficient resources are allocated to combating hate speech online by developing technological solutions grounded in human rights principles. This includes investing in AI-driven tools for automated moderation and enhancing the capacity of moderators and decision-makers. Collaboration between CSOs, monitoring organisations and regulated entities in the development and funding of these AI tools will support the effective implementation of the DSA.
  • Training: Ensure that moderators and other VLOP and VLOSE staff receive training from civil society organisations on the impact of hate speech on different communities and on its evolving linguistic forms.
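
To make the reporting-mechanism recommendation concrete, the sketch below models, in TypeScript, the minimum elements a notice must contain under Article 16(2) of the DSA and the priority handling owed to Trusted Flagger notices under Article 22(1). This is a minimal, illustrative sketch only: the interface, field names and the intakeNotice function are our own assumptions for exposition, not an official schema or any platform’s actual API.

```typescript
// Illustrative sketch: the required elements mirror Article 16(2) DSA;
// everything else (names, types, queue handling) is a hypothetical design.

interface HateSpeechNotice {
  explanation: string;         // substantiated reasons why the content is allegedly illegal (Art. 16(2)(a))
  contentUrls: string[];       // exact electronic location(s), e.g. exact URL(s) (Art. 16(2)(b))
  submitterName?: string;      // name and email of the notifier (Art. 16(2)(c));
  submitterEmail?: string;     // optional here because certain offences allow anonymous notices
  goodFaithStatement: boolean; // bona fide belief that the notice is accurate and complete (Art. 16(2)(d))
  fromTrustedFlagger: boolean; // whether the notifier holds Trusted Flagger status under Art. 22
}

// A minimal intake routine: validate the required elements, acknowledge
// receipt, and queue the notice, prioritising Trusted Flagger submissions.
function intakeNotice(notice: HateSpeechNotice, queue: HateSpeechNotice[]): string {
  if (!notice.explanation || notice.contentUrls.length === 0 || !notice.goodFaithStatement) {
    return "rejected: notice is missing elements required by Article 16(2)";
  }
  if (notice.fromTrustedFlagger) {
    queue.unshift(notice); // Trusted Flagger notices are processed with priority (Art. 22(1))
  } else {
    queue.push(notice);
  }
  return "received: confirmation of receipt sent to the notifier (Art. 16(4))";
}
```

The design point the sketch illustrates is the one made in this brief: the notice-and-action mechanism is reactive by construction, so the quality and priority of Trusted Flagger submissions, and the platform’s own proactive moderation, determine how much hate speech is actually addressed.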

[1] https://digital-strategy.ec.europa.eu/en/policies/digital-services-act-package

[2] See the 2022 Facing Facts report on Current Activities & Gaps in hate speech responses: A mapping report for the Facing Facts Network, p. 12: https://www.facingfacts.eu/wp-content/uploads/sites/4/2023/04/Facing-Facts-Network-Mapping-Report-v8.pdf and The European Commission against Racism and Intolerance (ECRI) General Policy Recommendation N°15 on combating hate speech (GPR 15): https://rm.coe.int/ecri-general-policy-recommendation-no-15-on-combating-hate-speech/16808b5b01 

[3] Directive 2000/31/EC of the European Parliament and of the Council of 8 June 2000 on certain legal aspects of information society services, in particular electronic commerce, in the Internal Market (E-Commerce Directive), in particular the intermediary liability provisions of Articles 12-15.

[4] The Commission has opened “formal proceedings to assess whether TikTok may have breached the Digital Services Act (DSA) in areas linked to the protection of minors, advertising transparency, data access for researchers, as well as the risk management of addictive design and harmful content”: https://ec.europa.eu/commission/presscorner/detail/en/ip_24_926. See also the proceedings against Meta, https://ec.europa.eu/commission/presscorner/detail/en/IP_24_2664, and AliExpress, https://ec.europa.eu/commission/presscorner/detail/en/ip_24_1485.

[5] The Commission reports “formal proceedings to assess whether X may have breached the Digital Services Act (DSA) in areas linked to risk management, content moderation, dark patterns, advertising transparency and data access for researchers”: https://ec.europa.eu/commission/presscorner/detail/en/ip_23_6709.

[6] Sanchez v. France, no. 45581/15, Grand Chamber judgment of 15 May 2023.

[7] https://digital-strategy.ec.europa.eu/en/policies/code-practice-disinformation

[8] DG CONNECT coordinates the implementation of both the AI Act and the DSA.

[9] See more: https://digital-strategy.ec.europa.eu/en/policies/dsa-cooperation#:~:text=The%20Digital%20Services%20Act%20(DSA,ensure%20platforms%20meet%20its%20obligations.

[10] For example 60% agreed with the statement ‘I am aware of the impact the Digital Services Act will have on the activities of my organisation’; 80% agreed with the statement ‘I am aware of the reporting needs under the Digital Services Act’.

[11] Survey respondents also reported a general lack of information about the appointment of Trusted Flaggers at the national level.

[12] See the list of appointed Digital Services Coordinators: https://digital-strategy.ec.europa.eu/en/policies/dsa-dscs

[13] See Article 22 (2) (b) of the DSA.

[14] The Board meets regularly and developments are ongoing. National guidelines have been shared by Ireland’s DSC: https://www.cnam.ie/wp-content/uploads/2024/02/20240216_Article22_GuidanceForm_Branded_vF_KW.pdf

[15] See note 14 above.

[16] Framework Decision 2008/913/JHA on combating certain forms and expressions of racism and xenophobia by means of criminal law, adopted on 28 November 2008 by the Council of the EU. See Article 1(1)(a) and (c), which require Member States to punish incitement to violence or hatred directed against a group of persons or a member of such a group defined by reference to race, colour, religion, descent or national or ethnic origin, as well as acts of denialism.

[17] See Communication from the Commission to the European Parliament and the Council “A more inclusive and protective Europe: extending the list of EU crimes to hate speech and hate crime”, COM/2021/777 final.

[18] It is important to note that there is a high degree of divergence in legislation at the national level. The FF secretariat are compiling a list of legislation as a resource: https://www.facingfacts.eu/hate-speech-legislation/

[19] As specified in Article 3(h) of the DSA.

[20] See Articles 9 and 10 of the DSA.

[21] In a recent positive development, the EC has gathered views on draft DSA guidelines on election integrity for VLOPs and VLOSEs: https://digital-strategy.ec.europa.eu/en/news/commission-gathering-views-draft-dsa-guidelines-election-integrity

[22] ECAT should support the regulator in assessing whether the functioning of algorithmic systems is in line with the risk management obligations that the DSA establishes for VLOPs and VLOSEs. It should become a centre of gravity for the international community of researchers and expert auditors in the field, acting as a knowledge hub for research based on platforms’ data. See more at: https://ai-watch.ec.europa.eu/collaborations/european-centre-algorithmic-transparency-ecat_en or https://algorithmic-transparency.ec.europa.eu/index_en.

[23] 2022 Facing Facts report on Current Activities & Gaps in hate speech responses: A mapping report for the Facing Facts Network, cited above, p. 11.

[24] Ibid.

[25] ‘Under Article 35 of the DSA the Commission, together with the Digital Services Coordinators of the Member States, may issue guidelines in relation to specific risks, to present best practices and recommend possible mitigation measures.’ Hate speech is a specific and systemic risk and guidelines would be a way to bring legacy knowledge from the previous regime.

[26] Following the Roundtable with Civil Society Organisations on the implementation of the Digital Services Act.



Facing Facts Policy Brief

30/05/2024

What is the Digital Services Act and what does it mean for hate speech monitoring and responses? Key challenges in the implementation of the DSA


Introduction and overview

The Digital Services Act (DSA) entered into force in February 2024. Marking a pivotal moment in EU leadership on online content regulation, the European Commission explains, ‘The Digital Services Act (DSA) and the Digital Market Act (DMA) form a single set of rules that apply across the whole EU. They have two main goals: To create a safer digital space in which the fundamental rights of all users of digital services are protected, [and] to establish a level playing field to foster innovation, growth, and competitiveness, both in the European Single Market and globally.’[1]

The DSA shifts EU strategy on combating hate speech towards a proactive paradigm by giving the Commission oversight over cooperation between social media platforms and other key actors alongside powers to fine platforms and scrutinise Social Media algorithms. New responsibilities for online platforms have been imposed, and specific roles including National Digital Services Coordinators (DSCs) and Trusted Flaggers have been created.[2] Building on its predecessors, the Code of Conduct on countering illegal hate speech online and E-Commerce Directive,[3] the DSA holds platforms accountable for user-generated content if they fail to take action against illegal hate speech after being notified of it.

The Commission is already starting to use its powers, launching formal proceedings against  several regulated entities[4] and Tiktok.[5] The recent ECtHR Grand Chamber judgment Sanchez v. France[6] establishes that in some specific situations individual users of social media platforms may be held accountable for failing to address hate speech.

In 2023 Facing Facts Network members raised questions about the impact the DSA would have on their monitoring and response activities regarding online hate speech. In response the Facing Facts Secretariat developed and disseminated a survey during January-February 2024 to better understand the experiences and concerns of Facing Facts Network members and other organisations working on this field. This briefing aims to reflect their perspective and offer contextualised analysis and recommendations. 

Our analysis shows a lack of information on DSA implementation at the national level, with implications for hate speech monitoring and responses. The apparent lack of industry knowledge of the Digital Services Coordinators appointed so far combined with the  process of appointing official Trusted Flaggers under the DSA risks undermining the contribution, knowledge and network of civil society organisations that developed under the Code of Conduct of the previous regime.

The Commission and its partners are not starting in a vacuum in terms of legislation, and networks. The key will be to build on the hate speech response system that has strengthened under the Code of Conduct and other measures, paying particular attention to the unique contribution of expert CSOs that understand the nature and impact of hate speech on the most vulnerable communities in Europe. The final section of this briefing makes a number of recommendations that aim to achieve this goal.

The implementation of the DSA takes place in a context of rapid development in policy and technology with the launch of the Code of Practice on Disinformation[7], and the AI Act[8], which aims to “harmonize compliance and tackle cross-border obstacles efficiently”. It will be important to closely monitor these developments and to support key stakeholders to understand their ongoing implications for their monitoring, response and victim support efforts. 


Roles, responsibilities and cooperation under the DSA

The Commission explains that the DSA ‘provides a framework for cooperation between the Commission, EU and national authorities to ensure platforms meet its obligations’[9]. The graphic below sets out the key players and their powers.

Figure one: roles, powers and relationships

 


Challenges and opportunities in hate speech monitoring and responses under the DSA

15 organisations from 11 countries responded to the Facing Facts questionnaire, including 12 CSOs and three public authorities. Thirteen respondents are members of the Facing Facts Network. Based on an analysis of survey responses, publicly available information on the DSA, and the Network secretariat’s ongoing policy and advocacy work, several challenges facing the effective implementation of the DSA can be identified. 


Available information and the role of the Digital Services Coordinators

While survey respondents reported a high level of awareness about the DSA and its provisions,[10] several were critical of the lack of information on the implementation of the DSA at the national level:

“There was absolutely no information on the implementation of the DSA and appointing of the Digital Services Coordinator.”[11]

Several respondents to the survey pointed to a vacuum of monitoring at the national level because national DSCs and Trusted Flaggers have yet to be appointed, and monitoring exercises established under the EU Code of Conduct have been stalled. A further strong theme of concern expressed in the survey was that proposed DSCs do not have the knowledge, background or network of relationships to effectively carry out their role.

A respondent explained, ‘[the] soon-to-be-appointed National Digital Services Coordinator is an expert body that does not have any previous contact w[ith] civil society and isn’t aware of the work done in this area re: hate speech (primarily the implementation of The EU Code of conduct on countering illegal hate speech online).’ Indeed a recent list of appointed DSCs reveals that those appointed to date are agencies for competition and media services with little obvious experience of hate speech monitoring.[12]

Other responses underline these concerns and point to the central importance of building and maintaining strong relationships with civil society experts and equality bodies, effective coordination with the Commission and undertaking training.

‘I would like to stress…the fact that authorities that will be the coordinators under DSA [do] not have any, or had very limited, previous contact with civil society. These institutions need to start conside[ring] their stakeholders differently (because they are usually market oriented) and steps need to be made in order to develop normal relationships and horizontal cooperation.’

‘While some of these institutions focused on online hate speech before, the majority of them have not yet had such an agenda within their scope and capacities. Therefore, their new responsibilities should encompass comprehensive internal training for a specialised team of employees, as well as international coordination between DSCs and EC.’

‘… [DSAs] are all agencies that worked on tele-communications and competence regulations. Nothing related to hate speech, and mostly the non-inclusion of [an] Equality body in any ‘table’ I think…is also particularly problematic. They are creating this Board (all the DSCs from all EU MS), but they are not consulting or working building up from the good practices of Equality bodies or CSO-institutions working together.’


Trusted flaggers

The burden to identify hate speech falls overwhelmingly on Trusted Flaggers which have a significantly lower status and resources than the platforms/ regulated entities they are tasked to monitor. As such they should not replace the role of these entities as primary duty-bearers for identifying and reporting hate speech. Although platforms must take down content which is not in compliance with their terms and conditions, the risk is that most hate speech goes unaddressed unless reported by Trusted Flaggers – or platforms’ Trusted Partners. Relying on incidents submitted by third parties is not a feasible strategy for effectively addressing hate speech. Sufficient funding needs to be secured for Trusted Flaggers to meet the DSA’s expectation, especially due to the requirement of independence from any provider of online platforms.[13] 

Transparent accreditation and removal processes of Trusted Flaggers are among the most sensitive areas of DSCs’ responsibility. However one survey respondent expressed the following concern, ‘There is a risk that the Digital Services Coordinator will not be neutral when awarding the trusted flagger status.’ DSCs will also need to pay particular attention to not exclude communities from being appointed as Trusted Flaggers, and to ensure that all communities are represented and consulted whether or not they are covered by national legislation on ‘hate speech’. For example, effective hate speech monitoring often requires specific knowledge about linguistic and historical contexts. Diverse pools of Trusted Flaggers are more likely to have this knowledge as well as relationships with various communities. Comprehensive training programs and continuous evaluation mechanisms are also essential for an effective Trusted Flagger system. While there has been progress on developing application processes and guidelines at the national level,[14] as of June 2024, the necessary methodologies and training mechanisms have yet to be fully implemented by the Board for DSCs.[15]

Referring back to figure one, there is a lack of clarity about the position and role of those organisations previously appointed as ‘trusted partners/ flaggers’ for Social Media Platforms. Between 2016-2022 these organisations were a central feature of the previous regime,  carrying out monitoring exercises under the EC Code of Conduct. The upcoming update to the Code of Conduct is expected to offer greater clarity on their role. The majority of respondents to the survey were ‘trusted partners/ flaggers’, however the pathway to becoming a Trusted Flagger under the DSA regime is not without obstacles. Eight respondents to the survey reported that they will apply for the trusted flagger status, six still need to finalise the decision, and one respondent explained that they will not due to budgetary issues.

Overall, for some organisations, applying for the official status of a Trusted Flagger under the DSA is not possible due to bureaucratic and financial constraints or lack of clarity about the terms of independence of their role. There is a concern that social media platforms and Member States will treat only official Trusted Flaggers as their partners and not sufficiently take into account the perspectives and knowledge of other key stakeholders, including the Platforms’ Trusted Partners, and the soon to be appointed Trusted Monitors. Indeed, while there is some anecdotal evidence that social media platforms will continue to engage with Trusted Partners appointed under the Code of Conduct without a formal role, a loss of knowledge and networks developed under the previous regime is a risk to the system. The relationship between Trusted Flaggers appointed under the DSA, Trusted Reporters, social media platforms and the Commission should be further clarified. One respondent explained,

“For us the biggest risk is being made irrelevant or redundant by official trusted flaggers. For many of our members, applying for official TF (trusted flagger) status is a no go. With the appointment of other organisations, platforms and member states could argue that only official TFs are relevant and non-official TFs are redundant or maybe even politically motivated…On the other side, becoming official TFs also has its risks. There is a lot of bureaucratic weight put on official TFs and no foreseeable funding line, which makes applying for the status not too desirable.”

Survey respondents also had questions on how the European Centre for Algorithm Transparency (ECAT) will monitor social media algorithms while also drawing on data from audits prescribed under the DSA, and how this will feed into the oversight process.

In a signal of good practice, one respondent to the survey reported that France is planning to pass its own digital services laws, clarifying certain DSA rules, specifying the role of the supervisory authority, and providing for criminal penalties for the authors of unlawful comments. Other Member States are likely to follow suit.  



Hate speech decision-making

EU law has not so far provided a specific definition of hate speech. However, the 2008 Framework Decision on Combating Certain Forms and Expressions of Racism and Xenophobia (Framework Decision) identified the types of hate speech acts, which should be criminalised across the EU.[16] The Framework Decision is constructed around a closed list of protected grounds: “race,” colour, religion, descent or national or ethnic origin. In 2021, the Commission proposed an initiative to extend the list of EU crimes, as per Article 83(1) of the Treaty on the Functioning of the EU (TFEU), with hate crime and hate speech regardless of a protected ground, based on sex, sexual orientation, gender identity or disability.[17] The unanimous consensus among all Member States in the European Council necessary to adopt this decision has yet to be sought.[18]

The DSA does not establish a substantive legal framework, by for example providing a definition of illegal hate speech applicable across all Member States. According to the DSA, the term “illegal hate speech” is to be interpreted based on “Union law or the law of any Member State”[19]. Both the Framework Decision and national legislation lack some of the key protected grounds such as sexual orientation, gender identity and disability. If an expression is deemed illegal in one Member State, there is a question about how the Commission might take this on board with regard to its own powers in relation to VLOSEs and VLOPs in the context of another Member State in which a particular characteristic is not legally protected.

It is not clear whether the term “illegal” refers only to criminalised hate speech or also speech that violates national civil or administrative law. As explained by one respondent,

“The general public do not understand what constitutes criminal behaviour versus online (non criminal) hate speech. Clarity is required on what should be reported and to whom – the social media company, the national authority or the police.”

There is a need for clarity on the process of reporting. For example, if all reports are to go through the DSCs or the police, the burden could be overwhelming. If complaints go through social media platforms, there are questions regarding their legitimacy in deciding what is legal and what is not. If there is a legal direction that content needs to be removed, the user or the organisation, or any third party can flag it to the social media platform, which is then required to remove the content. However, there are no proactive steps required to be taken by the social media platform. In this regard, the DSA has yet to represent progress from the 2016 Code of Conduct. For some respondents who have been involved in monitoring hate speech, there is a sense that the DSA might not lead to improvements, ‘the risk that, despite DSA rules, platforms will not improve their moderation of online hate content. Even today, on some platforms like X, we can’t see any improvement.’



Conclusions and ways forward

The DSA should increase access to justice, safety and support for victims and groups targeted by hate speech. Close and effective cooperation among the key actors described in figure one as well as effective victim support will be essential to achieve this. There should be clarity on how national or administrative authorities’ orders or requests for information follow the DSA’s criteria and are complied with.[20] To achieve this, there should be a unified methodology for hate speech monitoring, responses, and referrals across the EU, prioritising consistency to mitigate the risk of divergent practices among Member States. Article 35 of the DSA could support this approach:

Hate speech is a specific and systemic risk across the European Union. Article 35 guidelines could bring legacy knowledge and networks from the previous regime and help deliver the potential of the DSA as a step-change in EU regulation on hate speech. Guidance should also cover decision-making processes related to the assessment of what constitutes illegal hate speech and Trusted Flagger accreditation criteria and withdrawal procedures.[21] 

Regulated entities need to take full responsibility to address hate speech on their platform, scrutinise and respond to any content that is incompatible with their terms and conditions regardless of whether they are notified by a third party. Article 9 and 10 of the DSA oblige platforms to cooperate with requests for information from national authorities and to appoint a single point of contact to support this process. These are key provisions to ensure victims can access their right to an effective investigation and an effective remedy. In this context the work and role of ECAT[22] could be further clarified.

Respondents made several suggestions such as to include an indicator for cooperation with civil society in the reports, oversight and analysis of the DSA implementation. Another suggestion is that Digital Services Coordinators should be obliged to organise an annual event with civil society on DSA implementation.

The role and position of DSCs in relation to established regulators, such as press and election regulators that have an existing power or duty with regard to hate speech responses should be clear. Existing relationships and mechanisms should be strengthened not unnecessarily replaced or made weaker. To support this outcome, DSCs should map their national context of regulation and related key stakeholders as a resource of responses legal and non-legal, criminal and non-criminal. The Facing Facts system mapping methodology could be a useful resource for this exercise.

Equality bodies could play a key role by supporting efforts to effectively investigate and support prosecution of illegal hate speech, as well as systematically monitor and collect data on such cases.[23]For example, equality bodies could have powers to ensure that DSCs appoint Trusted Flaggers that represent all targeted communities, including those who are not currently covered by national hate speech laws.[24]


Recommendations

This section draws on our findings and analysis to present detailed recommendations for all stakeholders with responsibilities within the ‘hate speech response system’ at the EU and national levels including the Facing Facts Network, the Commission, DSCs, Governments, Trusted Flaggers, and intermediaries/online platforms.

Facing Facts:

  • Training: Develop regular and up to date training programmes on hate speech monitoring and the regulatory framework, including the DSA. Ensure that trainings incorporate the importance of mental health, and that they are trauma-informed.
  • Dissemination: Share work of members, for example current projects involving ‘massive monitoring’ such as SafeNet
  • Raise public awareness: Share activities and information on how to identify and report criminal and non-illegal HS
  • Monitor implementation: Work with members to monitor how the DSA is implemented at the national level, and bring key themes and gaps to the attention of partners at the EU level, including the High Level Group on Combating Hate Speech and Hate Crime. 

European Commission:

  • Transparency Guidelines: Develop and publish comprehensive transparency guidelines under Article 35 of the DSA outlining the operationalisation of the DSA framework.[25] Ensure clarity on decision-making processes, accreditation criteria, and the process for withdrawing Trusted Flagger status under Article 22(2). Include civil society in the development phase in an inclusive  and effective way, e.g. via focus groups or consultation meetings[26].
    • Unified Methodology: Establish a unified methodology for hate speech decision-making across all EU Member States. This will guarantee consistency and avoid discrepancies in the interpretation and application of hate speech regulations by various actors such as Trusted Flaggers, platforms and, subsidiarily, also police, prosecutors and judicial or administrative authorities.
    • Coordination mechanism: Establish an effective coordination mechanism, focusing on/drilling down to the national level, involving the Commission, DSCs, platforms and Trusted Flaggers as well as civil society organizations working on hate speech allowing for an open and continuous exchange of knowledge and information with the aim of effectively implementing the DSA.
    • Hate speech as an EU crime: continue pursuing the effort to establish hate speech as an EU crime under Article 83(1) TFEU.
    • Support for Trusted Flaggers: Provide adequate funding for Trusted Flaggers who may face challenges in meeting the independence requirements while, at the same time, fulfilling their role as those reporting illegal hate speech to platforms. Without their proactive involvement, the DSA mechanism loses much of its relevance. Tangible financial support provided to Trusted Flaggers is crucial for effective implementation of the DSA.

National Digital Services Coordinators:

  • Continuous Training: Institute ongoing training programs for DSC officers dealing with accreditations of trusted flaggers. This will enhance their capacity to make informed decisions, mitigating the risk of errors or biased evaluations. Participate in and organise training in collaboration with CSOs to understand their perspectives and challenges, understanding the (potential) Trusted Flaggers landscape and diversity.
  • Proactive collaboration with CSOs working in the area of discrimination, hate speech or hate crimes, as well as national Equality Bodies: the DSCs should seek collaboration with the experts at the national level, including through consultation or training, to better understand the legal and non/legal problems of addressing hate speech and the needs of various victims and targeted communities.

Governments:

  • Legislative Alignment: Align national legislation with relevant human rights standards, e.g. the Council of Europe Committee of Ministers Recommendation CM/Rec(2022)16 on hate speech, as well as with the DSA framework to ensure a cohesive approach. This will contribute to harmonising definitions of hate speech and streamlining legal responses across Member States.
    • Support for DSCs: Provide adequate staffing and financing for DSCs. This support is crucial for effective implementation, preventing potential resource gaps or delays.
    • Support for Trusted Flaggers: Provide adequate funding for Trusted Flaggers who may face challenges in meeting the independence requirements while, at the same time, fulfilling their role as those reporting illegal hate speech to platforms. Without their proactive involvement, the DSA mechanism loses much of its relevance. Tangible financial support provided to Trusted Flaggers is crucial for effective implementation of the DSA.
    • National cooperation mechanism: Establish or support cooperation mechanisms to regularly exchange views between the DSCs, other relevant national authorities and civil society.
    • Support the Commission’s initiative to make hate crime and hate speech EU crimes in the European Council.

Trusted Flaggers:

  • Diverse Accreditation Criteria: Advocate the EC, DSCs and the Board for appropriate  accreditation criteria to ensure representation from various organizations and communities working on different topics. This will foster a broad perspective in addressing hate speech, allowing the possibility of tackling diverse types of hate.
    • Transparent Methodology: Call for a transparent methodology in the accreditation process, emphasising diversity, impartiality, and adherence to predefined standards. This will ensure the reliability and fairness of the Trusted Flaggers system.
    • Training: Ensure that staff is trained in the area of recognising illegal hate speech in the EU and national context.

Regulated Entities:

  • Clear Reporting Mechanisms: Implement clear reporting mechanisms in line with DSA requirements, ensuring accessibility and user-friendliness. This will facilitate efficient reporting of hate speech while upholding users’ fundamental rights.
  • Regular Communication with DSCs, coordinated by the EC: Establish regular communication channels with DSCs and civil society organisations to facilitate the exchange of information and support problem-solving. This collaboration will contribute to a more effective and transparent implementation of the DSA.
  • Increase the linguistic expertise of content moderation teams: Ensure that moderation teams have diverse linguistic expertise, complementing existing AI functionality.
  • Consistent application of own policies and regulations: Besides complying with the DSA, apply the platform’s own policies and regulations against hate speech consistently and proactively, and improve them in the light of existing best practices while respecting freedom of expression across the Member States. Allocate sufficient resources to combating hate speech online by developing technological solutions grounded in human rights principles, including investment in AI-driven moderation tools and in the capacity of moderators and decision-makers. Collaboration with CSOs and monitoring organisations in the development and funding of such tools will support the effective implementation of the DSA.
  • Training: Ensure that moderators and other VLOP and VLOSE staff receive training from civil society organisations on the impact of hate speech on different communities and on its evolving language.

[1] https://digital-strategy.ec.europa.eu/en/policies/digital-services-act-package

[2] See the 2022 Facing Facts report on Current Activities & Gaps in hate speech responses: A mapping report for the Facing Facts Network, p. 12: https://www.facingfacts.eu/wp-content/uploads/sites/4/2023/04/Facing-Facts-Network-Mapping-Report-v8.pdf and The European Commission against Racism and Intolerance (ECRI) General Policy Recommendation N°15 on combating hate speech (GPR 15): https://rm.coe.int/ecri-general-policy-recommendation-no-15-on-combating-hate-speech/16808b5b01 

[3] Directive 2000/31/EC of the European Parliament and of the Council of 8 June 2000 on certain legal aspects of information society services, in particular electronic commerce, in the Internal Market; see in particular Articles 12–15 on intermediary liability.

[4] The Commission reports “formal proceedings to assess whether X may have breached the Digital Services Act (DSA) in areas linked to risk management, content moderation, dark patterns, advertising transparency and data access for researchers”, https://ec.europa.eu/commission/presscorner/detail/en/ip_23_6709; see also the proceedings against Meta, https://ec.europa.eu/commission/presscorner/detail/en/IP_24_2664, and AliExpress, https://ec.europa.eu/commission/presscorner/detail/en/ip_24_1485

[5] The Commission reports “formal proceedings to assess whether TikTok may have breached the Digital Services Act (DSA) in areas linked to the protection of minors, advertising transparency, data access for researchers, as well as the risk management of addictive design and harmful content”, https://ec.europa.eu/commission/presscorner/detail/en/ip_24_926

[6] Sanchez v. France, no. 45581/15, Grand Chamber judgment of 15 May 2023.

[7] https://digital-strategy.ec.europa.eu/en/policies/code-practice-disinformation

[8] DG CONNECT coordinates the implementation of both the AI Act and the DSA.

[9] See more: https://digital-strategy.ec.europa.eu/en/policies/dsa-cooperation#:~:text=The%20Digital%20Services%20Act%20(DSA,ensure%20platforms%20meet%20its%20obligations.

[10] For example 60% agreed with the statement ‘I am aware of the impact the Digital Services Act will have on the activities of my organisation’; 80% agreed with the statement ‘I am aware of the reporting needs under the Digital Services Act’.

[11] Survey respondents also reported a general lack of information about the appointment of Trusted Flaggers at the national level.

[12] See here the list of appointed Digital Services Coordinators https://digital-strategy.ec.europa.eu/en/policies/dsa-dscs

[13] See Article 22 (2) (b) of the DSA.

[14] The Board meets regularly and developments are ongoing. National guidelines have been shared by the Irish DSC; see https://www.cnam.ie/wp-content/uploads/2024/02/20240216_Article22_GuidanceForm_Branded_vF_KW.pdf

[15] The Board meets regularly and developments are ongoing. National guidelines have been shared by the Irish DSC; see https://www.cnam.ie/wp-content/uploads/2024/02/20240216_Article22_GuidanceForm_Branded_vF_KW.pdf

[16] Framework Decision 2008/913/JHA on combating certain forms and expressions of racism and xenophobia by means of criminal law, adopted on 28 November 2008 by the Council of the EU. See Article 1(1)(a) and (c), which require Member States to punish incitement to violence or hatred directed against a group of persons or a member of such a group defined by reference to race, colour, religion, descent or national or ethnic origin, as well as acts of denialism.

[17] See Communication from the Commission to the European Parliament and the Council “A more inclusive and protective Europe: extending the list of EU crimes to hate speech and hate crime”, COM/2021/777 final.

[18] It is important to note that there is a high degree of divergence in legislation at the national level. The Facing Facts secretariat is compiling a list of legislation as a resource: https://www.facingfacts.eu/hate-speech-legislation/

[19] As specified in Article 3(h) of the DSA.

[20] See Articles 9 and 10 of the DSA.

[21] In a recent positive development, the EC gathered views on draft DSA guidelines on election integrity for VLOPs and VLOSEs; see https://digital-strategy.ec.europa.eu/en/news/commission-gathering-views-draft-dsa-guidelines-election-integrity

[22] ECAT should support the regulator in assessing whether the functioning of algorithmic systems is in line with the risk management obligations that the DSA establishes for VLOPs and VLOSEs. It should become a gravity centre for the international community of researchers and expert auditors in the field, acting as a knowledge hub for research based on platforms’ data. See more at: https://ai-watch.ec.europa.eu/collaborations/european-centre-algorithmic-transparency-ecat_en or https://algorithmic-transparency.ec.europa.eu/index_en.

[23] 2022 Facing Facts report on Current Activities & Gaps in hate speech responses: A mapping report for the Facing Facts Network, cited above, p. 11.

[24] 2022 Facing Facts report on Current Activities & Gaps in hate speech responses: A mapping report for the Facing Facts Network, cited above, p. 11.

[25] ‘Under Article 35 of the DSA the Commission, together with the Digital Services Coordinators of the Member States, may issue guidelines in relation to specific risks, to present best practices and recommend possible mitigation measures.’ Hate speech is a specific and systemic risk and guidelines would be a way to bring legacy knowledge from the previous regime.

[26] Following the Roundtable with Civil Society Organisations on the implementation of the Digital Services Act.



Embracing the Digital Classroom – our experience with the Digital Learning Institute

11/03/2024

Facing Facts, coordinated by CEJI, works to better understand and address hate crime and hate speech through training, research and advocacy. Since 2015 we have been working to make quality training more accessible to our hate crime and hate speech community of practice; this is why we created our eLearning platform, Facing Facts Online. With the goal of professionalising and standardising our approach to online learning, we, the Facing Facts Team, decided to follow two courses on Digital Learning and Instructional Design offered by the Digital Learning Institute.


A Personal Perspective: Navigating the Digital Learning / Instructional Design Landscape

From the Facing Facts Team, Melissa and Joanna took the Professional Diploma in Digital Learning Design, while Daniel and Annabelle took the Professional Certificate in Instructional Design. Read on for our individual experiences.

How we applied the course insights to our work

  1. Starting from a common base

    Taking the course together as a team allowed us to use a common language and refer to concepts that everyone was familiar with. This led to more efficient workflows while ensuring high-quality educational standards.

  2. Standardising the process

    We developed the capacity to provide an adequate framework for our internal work and for our subject matter experts. For each course, we start with an analysis of programmatic needs. We ask: what is the business case for this course? What are the learning needs it should address? We then move on to scoping and analysis, outlining ‘SMART’ learning objectives, and map out the course using templates that set out the learning assets screen by screen. This not only helps to visualise and assess the course flow, but also makes communication with subject matter experts, instructional designers and digital learning developers more efficient. If you want to contribute to course development as an expert, you can contact us.
  3. Innovating our courses

    Artificial intelligence is a game-changer in the world of online learning. We are strategically taking advantage of the benefits that AI offers, for instance by automating the creation of our learning personas with generative AI tools. We have also experimented with AI tools that assist in content creation, and we will closely monitor the rapid developments in this area to identify the best applications for our context. For example, Articulate’s recent overview of how AI will be integrated into its digital development tools was particularly interesting for us.


Updates: What You Can Expect In Our Upcoming Courses

Navigating within the latest Moodle Version 4.1

Since our platform restyle in February 2023, we had already focused on:

– Making the platform more user-friendly 

– Simplifying the course overview and menu

– Facilitating navigation in the courses

– Improving the accessibility of our courses 


The update to Moodle 4.1 and the new design of our online platforms make the eLearning platform more appealing and easier to navigate.


Integrating new tools 

Using a range of new tools, we ensure that learners are guided smoothly through their learning experience. For our latest course on anti-Muslim racism, co-developed with our member CLAIM and run in October 2023, we used Articulate as an authoring tool. Our evaluations show that learners were very satisfied with it, praising its innovative and dynamic feel. We also experimented with tools such as Genially, Mentimeter and Miro for both synchronous and asynchronous activities.


Updating our bias-specific self-paced courses

With the knowledge gained in the courses, and the experience of our Digital Learning Developer Rennier, we plan to develop two new modules on antisemitism and anti-Muslim bias based on the two already developed in 2019.



The impact of the course on our eLearning strategy

The impact of the course on our team has been transformative. It not only guided us in approaching course development from a strategic angle, but also played a pivotal role in defining roles and responsibilities within the team. This led to the creation of a new position: our digital learning developer, Rennier, who joined our team in August.

We also integrated a new project management tool to facilitate collaboration within our team. Following the template of the course, we built the segments of our project based on the different parts of the instructional design process. This allows us to attribute tasks and ensure that we keep track of revision processes.

Approaching the instructional design process from an analytical perspective has informed our future strategies and the way we approach our work, helping us take decisions and evaluate continuously for further improvement. For each course, checking our programmatic needs, learning personas and learning outcomes allows us to minimise risks and stay aligned with the needs of our learners, whom we place at the centre of our endeavours.


Have your say: what is your experience with our courses?

Join us on this educational journey as we continue to evolve, innovate and empower our learners. Our research findings inform our online learning endeavours: as we are currently researching what supports and motivates our learners best, as outlined in our policy briefing, our researcher Katalin Józan is conducting interviews with learners who have followed one of our Facing Facts Online courses. Contact us to share your experience and help us improve our training offer.

Would you like to be informed once we open new courses? You can join our waiting list to be contacted as soon as we open registration. In 2024 we will open two new courses: “Police Discrimination and Hate Crime” and “Understanding and Responding to Hate Speech”. Submit your contact details here.


FF toolkit in a time of crisis

02/11/2023

At Facing Facts, we stand committed to our role in providing collaborative and effective responses to the escalation of hate across Europe, which is of critical urgency in this time of crisis.

Synagogues, kosher restaurants, Israeli embassies and the homes of Jewish residents have been vandalised with antisemitic content in Berlin, Oporto, Paris, London, Madrid and around the world. Muslim communities are on heightened alert in London and elsewhere after Muslim women had their headscarves tugged from their heads and several mosques were targeted in Oxford and Lancashire in the UK, in Castrop-Rauxel and Recklinghausen in Germany, and in Bayonne, France. Antisemitic and anti-Muslim incidents are mounting across Europe, as the numbers indicate:

  • In Germany, 202 antisemitic incidents have been registered by the Bundesverband RIAS between 7 and 15 October. This is an increase of 240% compared to the same period last year.
  • Across the UK, between 7 and 31 October, the Community Security Trust recorded at least 893 antisemitic incidents, the highest total ever reported to CST over a twenty-five-day period. The Metropolitan Police said it had recorded 408 antisemitic offences against Britain’s Jewish communities this month, compared to 28 in the same period last year, while anti-Muslim hate crime was up from 65 offences in October 2022 to 174 so far this month. Tell MAMA logged 515 anti-Muslim cases between 7 and 29 October, a seven-fold increase compared to the same period in 2022.
  • In France, according to the Minister of the Interior, 588 antisemitic incidents have been reported to the police since 7 October.
  • In Austria, the Jewish Community of Vienna registered 76 antisemitic incidents between 7 and 19 October, a 300% increase.

These figures will be updated periodically to include data coming from our members.

These numbers represent a drastic increase in antisemitism and Islamophobia / anti-Muslim racism in places where reporting systems are functioning well. The true figures are likely even higher due to under-reporting.

The increase in bias-motivated cases in Europe shows how the events in the Middle East fuel the spread of misinformation, disinformation and hate speech online. This can lead to hate crime and attacks by violent extremists, leaving Jewish and Muslim communities feeling insecure. Hate incidents require swift verification, investigation and responses by authorities to ensure increased protection, safety and justice for victims.

As active participants in and contributors to the High Level Group on combating hate speech and hate crime, we support the work of the European Commission in creating spaces for improving multi-stakeholder cooperation for better hate crime and hate speech responses, recognising the importance of monitoring and data collection for victim support and access to justice.

As a European civil society initiative that has been building the capacities of various actors and advocating for victims’ rights and better prevention measures, we invite you to read through our toolkit to better understand, prevent and respond to the increase of hate crime and hate speech during global and international crises.

Why are hate speech and hate crimes so damaging for victims, communities and society?

Hate speech reinforces stereotypes, prejudices, and discriminatory attitudes against certain groups of people, which can fracture social cohesion and lead to the escalation of violence. Hate crimes go beyond other types of crimes, as they have a damaging impact on victims and their communities, instilling fear, insecurity and sending a message of exclusion, which can lead to isolation and desperation among those affected.

📁 Our resources: We invite you to get familiar with the pyramid of hate to understand and visualise how bias, individual acts of prejudice and discrimination precede the higher levels of violence, such as bias-motivated crimes and genocide. It shows us how to prevent the escalation of violence and where we can contribute to dismantling each level of the pyramid.

💡Remember that news, disinformation and social media headlines can feed bias and prejudice, and these contribute to an escalation of hate speech and violent crimes, impacting Jewish and Muslim communities. Find CEJI’s toolkit on how to mitigate bias when reading the news.

Who can become a target of hate speech and hate crime?

Anyone! According to ECRI’s general policy recommendation N° 15, “hate speech is based on the unjustified assumption that a person or a group of persons are superior to others; it incites acts of violence or discrimination, thus undermining respect for minority groups and damaging social cohesion”. In our work, we refer to OSCE ODIHR’s hate crime definition: “criminal acts motivated by bias or prejudice towards particular groups of people. People or property associated with – or even perceived to be a member of – a group that shares an identity trait can also be targets of hate crimes”. If we accept that anyone can be targeted because of their real or perceived identity, we should also consider everyone’s responsibility in countering it.

How can we better prevent and respond to hate speech and hate crimes?

  1. Improve relationships with communities

This includes:
1) training police officers, prosecutors and the courts to recognise potential bias indicators and record incidents as potential hate crimes. This approach aims to ensure that the bias element is effectively addressed throughout the criminal justice system, from reporting to investigation, prosecution and sentencing;
2) developing the capacities of law enforcement to engage with communities in order to encourage reporting, provide victim support and develop joint prevention measures. A victim-centred approach is required to encourage reporting and avoid secondary victimisation of those who undergo the reporting process.

📁 Our resources: 

We have been offering digital learning programmes since 2015 on our platform Facing Facts Online. We invite you to follow the self-paced courses on bias indicators available on our eLearning platform, and to read our police guidebooks for Jewish communities and for Muslim communities to learn more and develop more effective relationships with the respective communities.

  2. Build effective multi-stakeholder cooperation


Identifying gaps and improving relationships and data collection between the different stakeholders in hate speech and hate crime response systems allows for better support to victims. To face the rise of misinformation, disinformation and hate speech online, we need to develop effective cooperation with key stakeholders in our ecosystem, such as IT companies, law enforcement and the newly appointed national DSA (Digital Services Act) coordinators. Only by establishing strengthened multi-stakeholder cooperation can we respond effectively to manifestations of hate online and offline. Interconvictional and intercultural engagement is also relevant to building resilience and preventing violent extremism.

📁 Our resources: Our systems thinking approach places the victim at the centre of hate crime and hate speech response systems. Since 2016, we have developed seven national system maps to understand and assess the frameworks that support hate crime recording, data collection and exchange, and victim referrals across a ‘system’ of public authorities, CSOs and other stakeholders. Find our latest hate speech mapping exercise in our report Current Activities & Gaps in Hate Speech Responses, where we capture the multiple, continuously evolving actors in national hate speech response systems.

  3. Increase Victim Support, Protection and Justice

All efforts to increase reporting and improve recording of hate crime and hate speech should be grounded in a victim-centred approach. We have developed a victim- and outcome-focused framework which aims to increase available data, improve access to support and to justice, and reduce risk and increase security for victims. Decision-makers require the skills to understand and assess data. The person receiving a report must be able to:

  • Support the person in telling their story, which might be unclear, confusing and complex, arranging interpreting support if needed
  • Assess immediate needs, including risks
  • Listen
  • Provide support, or refer the person to it
  • Advise on potential legal outcomes
  • Identify and capture potential bias indicators that could be used as evidence

As an initiative of CEJI – A Jewish Contribution to an inclusive Europe, we envision an inclusive and democratic Europe in which people enjoy their unique potential with all their diversity. 

📖Explained Terms:

Antisemitism = IHRA’s non-legally binding working definition of antisemitism states that “Antisemitism is a certain perception of Jews, which may be expressed as hatred toward Jews. Rhetorical and physical manifestations of antisemitism are directed toward Jewish or non-Jewish individuals and/or their property, toward Jewish community institutions and religious facilities.” Antisemitism is spelled as one word, not ‘anti-Semitism’, as there is no such thing as ‘Semitism’ that a person can be against.

Bias indicators = Objective facts that tell us whether an incident might be bias motivated such as victim perception, timing, location and demographic differences between victim(s) and offender(s). These criteria are not all-inclusive, and each case must be examined on its own facts and circumstances.


CSOs = Acronym for Civil Society Organisations: non-governmental, not-for-profit organisations that have a presence in public life, expressing the interests and values of their members or others, based on ethical, cultural, political, scientific, religious or philanthropic considerations.

Data collection = Determining what information sets and categories are needed and establishing means for acquiring them.

Digital Services Act (DSA) = Aims to regulate how online platforms, social media and digital services operate in Europe and to be a key element of a ‘comprehensive framework to ensure a safer, more fair digital space for all’. According to the Commission, the new rules contained in the DSA aim to both ‘foster innovation, growth and competitiveness’ and to increase protection of European values, ‘placing citizens at the centre’.

Disinformation = Disinformation is purposefully false or misleading content shared with an intent to deceive and cause harm.

Ḥalal = There were originally three major forbidden practices in Islam: eating pork, eating the flesh of an animal slaughtered in ritual sacrifice to a pagan deity, and eating the flesh of an animal that was already dead before having its throat cut to drain it of its blood. This last stipulation was introduced for hygiene purposes: all inhabitants of the Arabian Peninsula, including pagans, did not eat animals that had not been slaughtered in this manner (blood that remains inside an animal poisons the flesh, and the same applies around the world: all animals must be bled dry, including in the most modern of abattoirs).

Hate incidents = An act that involves prejudice and bias but does not meet the threshold of a criminal offence. Such incidents often precede, accompany or provide the context of hate crimes.

Hate crime = Hate crimes are criminal acts motivated by bias or prejudice towards particular groups of people. This could be based, inter alia, on gender, gender identity, sexual orientation, ethnicity, religion, age, or disability.

Hate speech = Speech or other expression including a gesture, writing, or display that involves prejudice and bias towards particular groups of people. This could be based, inter alia, on gender, gender identity, sexual orientation, ethnicity, religion, age, or disability. Such incidents often precede, accompany or provide the context for hate crimes.

Islamophobia / Anti-Muslim racism = A specific form of racism that refers to acts of violence and discrimination, as well as racist speech, fuelled by historical abuses and negative stereotyping and leading to the exclusion and dehumanisation of Muslims and all those perceived as such. Anti-Muslim racism is a form of racism in the sense that it results from the social construction of a group as a race, to which specificities and stereotypes are attributed; in this case, real or perceived religious belonging is used as a proxy for race. Consequently, even those who choose not to practise Islam but who are perceived as Muslim, because of their ethnicity, migration background or the wearing of religious symbols, are subjected to discrimination. Anti-Muslim racism / Islamophobia has nothing to do with criticism of Islam: Islam, as a religion and as an ideology, is subject to criticism like any other religion or ideology. Islamophobia is a highly contested term, with many preferring the terms ‘anti-Muslim racism’ or ‘anti-Muslim hatred’.

Incident = In discussing hate-related occurrences, incidents refer to events which are both acts punishable by law and abusive actions that are not necessarily punishable by law but threaten or intimidate a target in some way. Incidents range from homicide to hate speech to discriminatory acts.

Jewish/Jews = There are approximately 15 million Jews in the world, of whom approximately 8 million live in Israel and 1.2 million live in Europe.

Kosher / Kashrut = Jewish dietary laws which govern what food may be eaten and how it is manufactured and served. Only Kosher animals may be eaten, that is those which ‘chew the cud’ and have cloven (split) hooves. This includes cows, sheep, goats and most birds but not pigs. Kosher fish are those with fins and scales, such as cod and plaice, but not shellfish or octopus. Meat and milk are separated and most Jews delay eating one after the other. They also use separate cooking utensils, crockery and cutlery for meat- and milk-based foods. Kosher shops and restaurants are supervised by Kashrut (religious) authorities to ensure they comply strictly with the dietary laws.

Muslim = A person who adheres to Islam. This covers a wide diversity of people, a spectrum that includes those who are practising, those who are not, and many in between. Someone might describe themselves as Muslim and could use this to refer to their culture and/or their religion. “Muslim” should not be conflated with “Islamist”, which refers not to the religion of Islam but to political ideologies based on certain sets of Islamic values.

Misinformation = Misinformation is false, misleading, or out-of-context content shared without an intent to deceive.

Secondary victimisation = Where the response of the authorities or a CSO exacerbates the experience of victimisation from the perspective of the victim. This could include a perceived lack of support or responsiveness or an openly hostile attitude. 

Reporting = Reporting can have two meanings:

1. the act of reporting an incident to IT Companies, the police, a Civil Society Organisation or another organisation or

2. disseminating information via press releases or published reports to people or organisations (government authorities, European/ international institutions, human rights institutions, etc.) who can take action. 

Response system = Many actors make up national hate crime / hate speech response systems, including public authorities such as the police, private organisations such as media companies, and international organisations. Prevention and response mechanisms within a response system imply the engagement and coordination of a broad range of actors.

Victims = A victim of a hate-motivated incident/hate crime is a person that has suffered any incident, which may or may not constitute a criminal offence, which is perceived by the victim or any other person as being motivated by prejudice or hate based upon “race”, religion, sexual orientation, faith, disability, etc. The perception of the victim or any other person is the defining factor in determining a hate incident.


Better responses to hate in society: stronger cooperation is key

27/07/2023

Press release

With the recent antisemitic attack in Pittsburgh, the correlation between hate speech and hate crimes can no longer be denied. At the Facing All the Facts conference, government officials, civil society representatives, IT companies and community representatives will gather with the shared goal of improving cooperation between stakeholders in order to provide better responses to hate incidents.

Preliminary results of transnational research to improve national hate crime recording systems will be released along with 10 online courses for police and civil society on the topics of hate speech, hate crime and bias indicators during the one-day conference that will take place on 11th December at Microsoft Brussels.

The Facing All the Facts project aims to unmask the full extent and nature of hate crime and hate speech, working through a coalition of civil society organisations, policy leads, national law enforcement authorities and practitioners with a focus on, but not limited to, six European Member States: the UK, Spain, Ireland, Greece, Italy and Hungary. The conference will be an opportunity to explore the 10 new online courses for the first time: tailored courses on hate crime for police in the UK, Hungary and Italy, and specific bias indicator courses on hate crimes targeting Jewish, Muslim, Roma, disabled, LGBT and migrant communities.

“Facing the facts of hate crimes is a crucial step towards better policy measures and to confront the problem of hate in society. The legislation is already in place in the EU, but training is key to effective policy implementation. The Facing Facts Online courses help to bridge the gap between policy and practice,” – says Robin Sclafani, director of CEJI-A Jewish contribution to an inclusive Europe, lead partner of the Facing All the Facts project.

These courses are hosted by Facing Facts Online, the first and only e-learning platform dedicated to the topics of hate crime and hate speech. Launched in December 2016 at Google Brussels, the platform currently offers two highly interactive and easily accessible courses on hate speech in English, French and German and another course for civil society organisations on hate crime monitoring. With over 500 users to date, Facing Facts Online is at the early stage of becoming a specialized online course provider.

The conference will also host the exhibition ‘Beyond the act of hating’ by the award-winning documentary photographer Claudia Janke and conclude with a moving reading of ‘MATT’, an explosive play about a notorious hate crime that changed the hearts and laws of America, performed by professional actors and students from London’s renowned Guildhall School of Music and Drama.

4th December, 2018


Connecting on victim support in Austria

02/06/2023

English version:

Click here to read the full report online in English or download it in PDF.

German version:

  • Full publication in PDF with Self-Assessment (GER)
  • Interactive Austria Systems Map (GER)
  • Interactive Austria Systems Map (EN)
  • Journey of a Hate Crime Case in PDF (GER)

01/09/2022

Call for experts (PDF): https://ceji.org/wp-content/uploads/2022/09/Call-for-experts.pdf