Facebook's algorithm was accidentally increasing harmful content over the past six months

According to an internal report on the incident obtained by The Verge, a group of Facebook engineers identified a "massive ranking failure" that exposed as much as half of all News Feed views over the past six months to potential "integrity risks."

Engineers first noticed the issue last October, when misinformation suddenly began surging through the News Feed, notes the report, which was shared inside the company last week. Instead of suppressing suspect posts reviewed by the company's network of outside fact-checkers, the News Feed was giving them wider distribution, spiking views by as much as 30 percent globally. Unable to find the root cause, engineers watched the surge subside a few weeks later and then flare up repeatedly until the ranking issue was fixed on March 11.

In addition to the posts flagged by fact-checkers, the internal investigation found that, during the bug period, Facebook's systems failed to properly demote nudity, violence, and even Russian state media, which the social network recently pledged to stop recommending in response to the country's invasion of Ukraine. The issue was internally designated a level-one SEV, or site event — a label reserved for the company's worst technical problems, such as Russia's ongoing block of Facebook and Instagram.

Meta spokesperson Joe Osborne confirmed the incident in a statement to The Verge, saying the company "detected inconsistencies in downranking on five separate occasions, which correlated with small, temporary increases to internal metrics." Internal documents state that the technical issue was first introduced in 2019 but didn't have a noticeable impact until October 2021. "We traced the root cause to a software bug and applied needed fixes," Osborne said, adding that the bug "has not had any meaningful, long-term impact on our metrics."

For years, Facebook has touted downranking as a way to improve the quality of the News Feed, and it has steadily expanded the kinds of content its automated systems act on. Downranking has been used in response to wars and contentious political stories, sparking concerns about shadow banning and calls for legislation. Despite its growing importance, Facebook has yet to explain downranking's effect on what people see and, as this incident shows, what happens when the system goes awry.

In 2018, CEO Mark Zuckerberg explained that downranking fights the impulse people naturally have to engage with "more sensationalist and provocative" content. "Our research suggests that no matter where we draw the lines for what's allowed, as a piece of content gets closer to that line, people will engage with it more on average—even when they tell us afterwards they don't like the content," he wrote in a Facebook post at the time.

Downranking suppresses not only "borderline" content that comes close to violating Facebook's rules, but also content that its AI systems suspect violates them but that requires further human review. The company published a high-level list of what it demotes last September, but didn't explain how those demotions actually affect distribution of the affected content. Officials have told me they hope to shed more light on how demotions work, but worry that doing so would help adversaries game the system.

Meanwhile, Facebook's leaders regularly brag about how their AI systems get better each year at detecting content like hate speech, touting the technology's importance to moderation. Last year, Facebook said it would begin downranking all political content in the News Feed — a move that Zuckerberg pushed for to return the Facebook app to its more lighthearted roots.

I've seen no indication that there was malicious intent behind this recent ranking bug, which affected up to half of News Feed views over a period of months, and thankfully, it didn't break Facebook's other moderation tools. But the incident shows why greater transparency is needed in internet platforms and the algorithms they use, according to Sahar Massachi, a former member of Facebook's Civic Integrity team.

"In a large complex system like this, bugs are inevitable and understandable," Massachi, who is now co-founder of the nonprofit Integrity Institute, told The Verge. "But what happens when a powerful social platform has one of these accidental faults? How would we even know? We need real transparency to build a sustainable system of accountability, so we can help them catch these problems as quickly as possible."
