Over 19.3 million pieces of content were proactively “actioned” on Facebook in India in December: Meta

Its photo-sharing platform Instagram took proactive action against more than 2.4 million pieces of content across 12 categories during the same period, according to data shared in a compliance report.

Under IT rules that came into force in May last year, major digital platforms (those with more than five million users) must publish monthly compliance reports detailing the complaints received and the action taken on them. The reports also include details of content removed or disabled through proactive monitoring using automated tools.

Facebook had proactively “actioned” more than 16.2 million pieces of content across 13 categories in October, while Instagram took proactive action against more than 3.2 million pieces across 12 categories during the same period.

“Over the years, we have consistently invested in technology, people and processes to further our agenda of keeping our users safe online and allowing them to express themselves freely on our platform.

“We use a combination of artificial intelligence, reports from our community, and review by our teams to identify and review content against our policies,” a spokesperson for Meta said.

The spokesperson added that, in accordance with the IT rules, the company has published its monthly compliance report for the period of December 1-31, which contains details of the content it proactively removed using automated tools as well as details of user complaints received and the action taken.

In its latest report, Meta said Facebook received 531 user reports through its Indian grievance mechanism from December 1 to December 31, 2021.

“Of these incoming reports, we provided tools to users to resolve their issues in 436 cases,” the report said.

These include pre-established channels to report content for specific violations, self-remediation flows where users can download their data, ways to address hacked-account issues, and more, the report added.

From December 1 to 31, Instagram received 436 reports through its Indian grievance mechanism.

Facebook’s parent company recently changed its name to Meta. Apps under Meta include Facebook, WhatsApp, Instagram, Messenger and Oculus.

According to the latest report, the more than 19.3 million pieces of content actioned by Facebook in December included spam-related content (13.8 million), violent and graphic content (2.1 million), adult nudity and sexual activity (1.5 million) and hate speech (60,800).

Other categories in which content was actioned include bullying and harassment (1,17,000), suicide and self-harm (3,74,300), dangerous organizations and individuals: terrorist propaganda (1,18,300) and dangerous organizations and individuals: organized hate (20,000).

Under child endangerment – nudity and physical abuse, 1,57,100 pieces of content were actioned; under child endangerment – sexual exploitation, 7,96,800 pieces; and under violence and incitement, 2,67,100 pieces.

Content “actioned” refers to the number of pieces of content (such as posts, photos, videos or comments) on which action was taken for violating the standards. Taking action could include removing a piece of content from Facebook or Instagram, or covering photos or videos that may be disturbing to some audiences with a warning.

The proactive rate, which indicates the percentage of all content or accounts acted on that Facebook found and flagged using technology before users reported them, ranged between 57.4% and 99.9% in most of these cases.

The proactive removal rate for content related to bullying and harassment was 57.4%, as this content is contextual and very personal in nature. In many cases, users must report this behavior to Facebook before it can identify or remove this content.

For Instagram, more than 2.4 million pieces of content were actioned across 12 categories during December 2021. These included content related to suicide and self-harm (8,91,900), violent and graphic content (6,00,800), adult nudity and sexual activity (4,61,900), and bullying and harassment (2,09,200).

Other categories in which content was actioned include hate speech (16,700), dangerous organizations and individuals: terrorist propaganda (10,700), dangerous organizations and individuals: organized hate (2,500), child endangerment – nudity and physical abuse (21,500), child endangerment – sexual exploitation (1,71,200) and violence and incitement (25,600).
