Meta CTO thinks poor Metaverse moderation could pose an 'existential threat'

Andrew Bosworth, CTO of Meta (formerly Facebook), warned employees that creating safe virtual reality experiences is a vital part of the company's business plans -- but also potentially impossible at scale.

In an internal memo seen by the Financial Times, Bosworth reportedly said that he wants Meta's virtual worlds to have "almost Disney-level" safety, although spaces from third-party developers could be held to a looser standard than Meta-produced content. He warned that harassment or other toxic behavior could pose an "existential threat" to the company's plans for the future of the internet if it turns mainstream consumers away from VR.

At the same time, Bosworth acknowledged that it is practically impossible to moderate user behavior "at any meaningful scale." FT reporter Hannah Murphy later tweeted that Bosworth was citing Masnick's Impossibility Theorem: an adage coined by Techdirt founder Mike Masnick, which holds that "content moderation at scale is impossible to do well." (Masnick's writing notes that this is not an argument against trying to moderate better, but an observation that any large-scale system "will always end up frustrating very large segments of the population.")

Bosworth reportedly suggested that Meta could moderate spaces like its Horizon Worlds VR platform by applying a stricter version of its existing community rules, saying that VR or metaverse moderation could operate along "some sort of spectrum of warnings," with a strong bias toward enforcement, progressively longer suspensions, and ultimately removal from multi-user spaces.

While the full memo is not publicly available, Bosworth published a blog post later that day that referenced it. The post, titled "Keeping People Safe in VR and Beyond," points to several of Meta's existing VR moderation tools, including the ability to block other users in VR as well as an extensive Horizon monitoring system for watching and reporting bad behavior. Meta has also pledged $50 million to research the practical and ethical issues around its metaverse plans.

As the FT notes, Meta's older platforms like Facebook and Instagram have suffered serious moderation failures, including slow and inadequate responses to content promoting hate and violence. The company's recent rebranding offers a potential fresh start, but as the memo acknowledges, VR and virtual worlds will create entirely new problems on top of the existing ones.

"The challenges we face, the trade-offs involved and the potential consequences of our work are often open to conversation, both internally and externally," Bosworth wrote in the blog post. "Sports has tough social and technical problems, and we deal with them every day."
