Meta Platforms Inc. apologized today after a glitch caused its Instagram Reels feature to inundate users with graphic videos of real-life violence.
“We have fixed an error that caused some users to see content in their Instagram Reels feed that should not have been recommended,” a Meta spokesperson said in a statement. “We apologize for the mistake.”
For reasons Meta hasn’t explained, people began seeing videos of street fights, school shootings, murders and fatal accidents, content that users described as “gore.”
Users on Reddit who experienced this unpleasant influx of content said that even with the Sensitive Content Control feature enabled, they saw pornography, castration, beheadings and people being raped. It’s shocking enough that this kind of content is being shared on Instagram in the first place, but why the algorithm started flooding people’s feeds with it is confounding. In some cases, people reported clicking the “not interested” button, but the recommendations persisted.
According to Meta’s policy, users should be protected from seeing such violent imagery. The company’s platforms might allow some amount of violence, but only if it raises awareness of issues such as war, human rights abuses, or acts of terrorism. This kind of content will usually come with a graphic warning label.
The company uses machine learning to catch violent content before it reaches users, with Meta saying “the vast majority of violating content” is taken down preemptively. There is also a 15,000-strong team of human reviewers to protect consumers from such content.
As TikTok fights for survival in the U.S., Meta has been pushing its own short-form video content. At the same time, the company’s huge layoffs in 2022 cut some of the people working in trust and safety. Moreover, Chief Executive Mark Zuckerberg recently announced major changes to the company’s fact-checking systems, taking a laxer approach to content moderation by relying on community notes rather than third-party fact-checkers.
At the time, Zuckerberg said his companies would use “safety filters for ‘illegal and high-severity’ violations of its content policies,” adding, “For lower-severity violations, we’re going to rely on someone reporting an issue before we take action.” He admitted, “We’re going to catch less bad stuff, but we’ll also reduce the number of innocent people’s posts and accounts that we accidentally take down.”
However, a spokesperson for the company told CNN that this recent mishap is not related to any of the changes in content moderation.
Photo: Instagram