You Do It: Twitter Outsources AI Image Detection to Its Users
Twitter is responding to an uptick in misleading AI-generated images by expanding its flawed, crowdsourced fact-checking feature, Community Notes, to include images. The new feature, whose debut comes just one week after the social network amplified an AI-generated image of a supposed bomb at the Pentagon, will give Twitter users the ability, and the responsibility, to identify “misleading media.” Twitter’s trust and safety team, which would ordinarily take the lead in sussing out signs of misleading or fabricated content, has been gutted since Elon Musk took over ownership in October. Twitter did not say it would hire additional content moderators to mitigate the spread of fabricated images.
The expanded Community Notes feature will let users with an impact score of 10 attach notes to a specific image included in a tweet. Impact scores are a measure of how helpful a contributor’s notes have been. Users can append those notes to add additional context to an image or to alert others to media that may be AI-generated or manipulated. Twitter says the notes attached to images will automatically appear “on recent & future matching images.” In theory, that means a questionable image will still show the note next to it even if it’s been published by other users elsewhere on the site. For now, however, Twitter doesn’t seem all that confident in its ability to match tagged images with all the other versions popping up across the platform.
“It’s currently intended to err on the side of precision when matching images, which means it likely won’t match every image that looks like a match to you,” Twitter said. “We will work to tune this to expand coverage while avoiding erroneous matches.”
Twitter says the pilot feature currently only supports tweets with a single image, though the company said it would like to expand it to cover videos and tweets with multiple images or videos. Gizmodo reached out to Twitter for more details about the program but received a poop emoji in response.
Musk’s penny-pinching philosophy of restricting Twitter’s API access is casting out helpful third-party apps that fight back against some of the platform’s most toxic content. One of those apps, Block Party, a popular anti-harassment tool used to block and mute millions of trolls, said the same day as Twitter’s expansion of Community Notes that Twitter’s changes were forcing it to go on an “indefinite hiatus” on the platform.
“We’re heartbroken that we won’t be able to help protect you from harassers and spammers on the platform, at least for now; we fought very hard to stay, and we’re so sorry that we couldn’t make it happen,” Block Party said.
Twitter feeling the consequences of burning its trust and safety team
The updated crowdsourcing feature comes as Twitter, and other social networks, struggle to control a rise in misleading AI-generated images. While most viral AI images up to this point have fallen pretty clearly into the parody or satire category, an AI-generated image of a “bombing” outside the Pentagon last week offered a glimpse into the type of misinformation that AI safety experts have warned about for years. That particular situation was made even worse by Twitter’s slapdash pay-for-access verification, which let a verified account masquerading as a Bloomberg News account amplify the image even further. That imposter account has since been suspended.
Trust and safety experts speaking with Gizmodo say AI-generated images and other manipulated media are precisely the types of content that paid, trained human workers are best equipped to respond to. Twitter, under Musk, has gutted its trust and safety teams. The expanded notes feature essentially attempts to outsource that labor to Twitter’s user base which, let’s face it, isn’t exactly known for its discipline or nuance. Speaking generally, Arjun Narayan, head of trust and safety at SmartNews and a former trust and safety lead at Google and ByteDance, told Gizmodo he’s worried this whittling down of expertise could come back to bite companies in the ass.
“As we disinvest, are we waiting for shit to hit the fan?” Narayan asked. “Would it then be too late to reinvest or course correct?”
Source: Gizmodo