Twitter moderation on trial in Paris


PARIS – A French court will hear a case on Thursday aimed at shedding light on Twitter’s best-kept secret: how much the social network is investing in the fight against illegal content.

The social media platform, facing a group of four NGOs including SOS Racisme, SOS Homophobie and the Union of Jewish Students of France, will argue before the Paris Court of Appeal that it should not have to disclose detailed information about its internal processes.

The case touches on a central problem that has long haunted policymakers and platform-regulation researchers: the real resources – human and financial – allocated to moderating illegal and harmful content. So far, companies such as Twitter, Facebook and Google’s YouTube have been reluctant to publicly release detailed, specific figures on the number of content moderators they employ per country and/or language.

According to the French NGOs, Twitter is not doing enough against online hate speech. In July, a court ordered the company to share very specific information about how it handles content, a first in Europe.

The social media platform was required to provide “any administrative, contractual, technical or commercial document relating to material and human resources” deployed to combat hate speech, homophobia, incitement to violence and the condoning of crimes against humanity, among other forms of content, according to the court ruling, but it decided to appeal.

In Brussels, the Digital Services Act – the EU content moderation regulation currently under negotiation – also seeks to increase transparency around moderation practices.

The European Parliament wants platforms to report the “full number of content moderators allocated for each official language per Member State,” according to a recent text obtained by POLITICO. EU countries want so-called very large platforms – those with more than 45 million users in the bloc – to “detail the human resources dedicated … to content moderation.”

It is not yet known whether the final text negotiated between the two institutions will actually force Twitter, which might not be considered a “very large platform,” to provide the exact number of moderators.

Poster child

In Paris and Brussels, lawmakers have long complained about the lack of transparency surrounding the means deployed by online platforms to moderate content.

“Moderation: the opacity around the number of moderators and their training cannot last,” tweeted Laetitia Avia, a lawmaker from Emmanuel Macron’s La République en Marche party, when French lawmakers were assessing national rules on platforms.

The Twitter case is not the only one targeting tech companies’ processes for fighting illegal and harmful content: in March this year, Reporters Without Borders filed a complaint against Facebook, arguing that the platform’s perceived lack of content moderation amounts to “deceptive marketing practices.”

But in France, Twitter has become something of a poster child for online hate speech.

In November, seven people were convicted over anti-Semitic tweets targeting a Miss France contestant, with the civil parties’ lawyer criticizing “the recklessness of Twitter.”

According to Samuel Lejoyeux, president of the Union of Jewish Students of France, an experiment carried out by the four NGOs in 2020 – which led to the launch of the legal action – shows that Twitter is the “black sheep” among online platforms.

“I’m not saying the situation is perfect at Facebook and YouTube, but there is an effort being made, there is a will to moderate,” he said. “At Twitter, there is a desire to let the culture of clash, the culture of hate and insults [proliferate]; it is the foundation of the business model.”

Twitter declined to comment for this story.

Hate speech test

The case heard on Thursday began during the first coronavirus lockdown, in the spring of 2020.

The four French NGOs decided in May last year to take legal action against Twitter, arguing that the American company was not doing enough to remove online hate speech.

They said they discovered that the microblogging platform deleted only 11.4% of the hateful and “clearly illegal” tweets they reported in an experiment conducted from March 17 to May 5, 2020. By comparison, the organizations found that Facebook removed 67.9% of reported content.

Twitter is required under the E-Commerce Directive to remove reported illegal content “promptly”.

In July this year, after unsuccessful attempts to mediate the case out of court, the court ordered Twitter to share documentation with the NGOs on how it moderates content.

Twitter, which generally does not release information about its content moderators, was required to provide the number, location, nationality and language of the people in charge of processing French content reported on the platform, as well as the number of posts reported for condoning crimes against humanity and inciting racial hatred; how many of those posts were deleted; and how much information was passed on to the authorities.

The American company has not complied so far and has decided to appeal instead.

Clothilde Goujard contributed reporting.
