Lessons for Elon Musk from Meta Content Moderation

How often do moderators appointed to keep unwanted content off social media get it wrong? Much more than you think.

In its first 15 months of existence, the independent board created by Facebook (now renamed Meta) to oversee the company’s moderation practices offered a sample of 130 content removal decisions it deemed questionable.

Reviewing these cases, Meta itself concluded that its moderators had applied the company’s own rules incorrectly 51 times: in essence, they had failed in their job about 40% of the time.

While this sample is far from representative of moderation practices more broadly, it is only the tip of a very large iceberg. This week, the Meta Oversight Board said it had received 1.1 million complaints in total about how the company’s Facebook and Instagram services acted against user content.

The scale of dissatisfaction – and the seemingly high failure rate in judgments about what users should see – might seem to support Elon Musk’s case for putting fewer controls on online speech. Musk said a key reason for his attempted purchase of Twitter was to remove barriers to online communication, provided it was legal. But he has tacitly changed course in recent weeks, conceding things won’t be as simple as he’s been letting on.

At a Financial Times event last month, Musk said he planned to block “world-destroying” content on Twitter, while saying he would use tactics such as limiting the distribution of certain tweets or the temporary suspension of the accounts of certain users. Last week, he also told Twitter employees that he planned to take action against harassment on the network.

This suggests that Twitter will face many of the same challenges as Meta. For the Facebook owner, bullying and harassment was the largest category of user dissatisfaction, accounting for nearly a third of complaints to the Oversight Board (the other two main sources of dissatisfaction, fueling half of complaints to the board, relate to Meta’s actions against hate speech, and against violence and incitement).

If Musk wanted to limit the annoyance that his own efforts to control content would generate, he could do worse than look to Meta’s example. Letting an outside board question some of its decisions meant giving up power over an important aspect of its user experience. But it has the advantage of taking some of the controversy away from the company, transferring at least partial responsibility to an independent group designed to act as an outsourced conscience.

Separating out tricky decisions like this also helps shine a light on the sheer complexity of applying hard and fast rules to something as malleable as language. The review process is arduous. In its first full report this week, the board said it had taken on just 20 cases in its first 15 months and ended up reversing Meta’s moderation decisions in 14 of them – a tiny proportion of the total number of complaints it had received.

Publishing details of individual moderation decisions is also a good way to neutralize critics who might be tempted to make categorical judgments about the rights and wrongs of social media ‘censorship’. There’s little black and white here, just shades of gray.

It also doesn’t hurt that, by pushing for more influence, the Oversight Board has become something of a thorn in Meta’s side. It has agitated for more data on how moderation works and pushed the company to be more transparent with users about its decisions. It is also trying to have a say in the content policies Meta draws up for the metaverse, even before this new immersive online environment takes shape.

This all helps keep Meta on its toes, while adding to the perception that it is responding to outside pressure – which could dampen calls for more direct government regulation.

Yet, as the 40% error rate for a small sample of moderation decisions shows, the effort remains woefully insufficient. Human speech is probably too nuanced – and human beings themselves too fallible in their judgments – ever to make content rules capable of rigorous enforcement.

Should he decide to buy Twitter, these are lessons Musk may soon learn the hard way. On the other hand, given his appetite for controversy, jumping into the center of an almighty battle over online content might be exactly what the world’s richest man has in mind.
