Is my co-worker an AI? Bizarre product reviews surprise Gannett employees

A slate of recently discovered articles on Reviewed, Gannett's product review site, is giving rise to a now-familiar question: were they created by artificial intelligence tools or by a human?

The writing is stilted, repetitive, and sometimes nonsensical. An article titled "The Best Waist Lamps of 2023" reads, "Before purchasing a product, you need to consider the fit, light settings, and additional features offered by each option." Another says, "Before purchasing a Swedish dishcloth, there are a few questions you may want to ask yourself." Each page has a section called "Product Advantages/Disadvantages" that is actually just a list of features rather than a presentation of benefits and drawbacks. The pages are filled with low-resolution images, infographics, and dozens of links to Amazon product listings. (At the time of this writing, the articles appear to have been removed.)

This is the type of content readers are beginning to associate with AI, and this wasn't Gannett's first brush with such controversy. In August, the company ran a failed "experiment" using AI to generate sports articles, producing a raft of stories that repeated awkward phrases like "close encounters of the athletic kind." Gannett halted use of the tool and said it would reevaluate its tools and processes. But on Tuesday, the NewsGuild of New York — the union representing Reviewed workers — shared screenshots of the shopping articles spotted by staff, calling them the latest effort by Gannett to use AI tools to generate content.

But Gannett says the new "reviews" were not created with AI. Instead, the content was created by "third-party freelancers employed by the marketing agency partner," said Lark-Marie Anton, Gannett's chief communications officer.

“The pages were posted without accurate associated disclaimers and did not meet our editorial standards. Updates have been published [on Tuesday],” Anton told The Verge in an email. In other words, the articles are an affiliate marketing play created by employees of another company.

A new disclaimer on the articles reads, “These pages are published as a partnership between Reviewed and ASR Group Holdings, a leading digital marketing company. Products featured are based on consumer reviews and category expertise. Buying guides are produced by ASR Group's editorial team for marketing purposes.”

Still, there is something strange about the reviews. According to old job listings, ASR Group also operates under the name AdVon Commerce — a company specializing in “ML/AI solutions for ecommerce,” according to its LinkedIn page. An AdVon Commerce employee listed on the Reviewed website says on LinkedIn that he has "mastered the art of prompting and editing AI-generated text" and that he "oversaw a team of 15 copywriters during the transition to ChatGPT and AI-generated text."

Additionally, the authors of the Reviewed articles are hard to track down — some of them have no other published work or LinkedIn pages. In a post on X, Reviewed staff wondered, "Are these people even real?"

When asked about the marketing company and its use of AI tools, Anton said that Gannett has confirmed that the content was not created using AI. AdVon Commerce did not respond to a request for comment.

The dustup over the maybe-AI-maybe-not stories comes just weeks after Reviewed's unionized employees walked off the job to secure bargaining session dates with Gannett. In an emailed statement, the Reviewed union said it would raise the issue during the first round of bargaining in the coming days.

“This is an effort to undermine and replace union members, whether using AI, using a marketing firm's subcontractors, or some combination of the two. As a matter of urgency, we demand that management retract all of these articles and issue a formal apology,” the statement said.

"These posts undermine our credibility; they undermine our integrity as journalists," Michael Desjardins, a senior staff writer at Reviewed, told The Verge. Desjardins says he believes the publication of the reviews is retaliation for the earlier strike.

According to Desjardins, Gannett leadership did not inform staff that the articles were being published, and employees only realized it when the posts appeared on Friday. Staff noticed typographical errors in the headlines, strange machine-like phrasing, and other "tell-tale signs" that would not meet journalists' editorial standards.

"I and the rest of the unit feel like — if this is really what's happening — that it really undermines what we do," Desjardins told The Verge about Gannett's alleged use of AI tools. “The issue is that it exists on the same platform where we publish.”

The ambiguity between what is AI-generated and what is created by humans has been a recurring theme of 2023, especially at media companies. A similar dynamic emerged at CNET earlier this year, where AI-generated stories were published right alongside journalists' work. The staff had little insight into how the AI-generated articles were produced or fact-checked. More than half of the articles contained errors, and CNET did not explicitly disclose that AI was used until media reports emerged.

"I feel like it's designed to hide itself, to blend into what we do every day," Desjardins says of the content on Reviewed.