Taylor Swift Deepfakes Highlight Tech Platforms’ Losing Battle Against AI-Generated Fake Porn

Over the past day, sexually explicit deepfake images of pop star Taylor Swift have gone viral across social media, racking up tens of millions of views and highlighting the alarming pace at which AI-generated fake pornography is proliferating online.

The images first gained traction Wednesday, when a now-suspended verified account on X, the platform formerly known as Twitter, posted an array of convincing nude depictions of Swift. The post amassed over 27 million views and 260,000 likes in the roughly 19 hours before the account was removed. During that extensive window, however, the images had already been shared, saved, and reposted widely across the platform.

By Thursday, new deepfakes continued to appear, many targeting the singer’s high-profile relationship with NFL player Travis Kelce. Searches for “Taylor Swift AI” also began trending in some regions, further boosting the content’s visibility.

Deepfake-detection firm Reality Defender analyzed the images and concluded they were likely AI-generated, quite possibly using a tool like Microsoft Designer, which has gained notoriety for enabling easy creation of fake nudes. The actual origin remains unclear, though identifying marks suggest the images may have first surfaced on an infamous website devoted to nude celebrity deepfakes.

Regardless, their meteoric spread across social media once again puts tech companies on the spot over the policing of manipulated content. X's rules, carried over from Twitter, expressly prohibit nonconsensual synthetic media. Critics argue, however, that the policy has failed to translate into decisive action, as AI-powered fake pornography continues to slip through the cracks.

A Persistent Problem Despite Restrictions

This incident echoes similar events from recent months, indicating that social platforms are struggling to control the spread of AI-generated fake porn. Just weeks ago, a 17-year-old actress sounded the alarm about convincing nude deepfakes of her spreading on X despite her reports. Likewise, an NBC News investigation last June uncovered a wave of graphic deepfakes targeting TikTok stars. Only after contact from the media did X remove a portion of the flagged content.

The latest case further illustrates the glaring gap between rules and enforcement, even when such high-profile figures are involved. Swift's fans notably took matters into their own hands, flooding hashtags associated with the deepfakes with unrelated posts to bury the images and make them harder to find. Such campaigns, however, are no substitute for more assertive moderation from platforms like X themselves.

At its core, the episode underscores how AI image generation has far outpaced social platforms' ability to monitor manipulated media. Mainstream services like DALL-E and Stable Diffusion's hosted offerings do prohibit pornographic outputs of public figures. But many alternative apps and openly distributed models carry no such restrictions, democratizing deepfake creation to an unprecedented degree. Detecting and managing this content at scale is a steep challenge even for well-resourced organizations.

Compounding Difficulties for Understaffed X

For X in particular, the episode arrives at an especially precarious moment. The company has weathered months of criticism for enabling misinformation around sensitive global events like the Israel-Hamas war, and European regulators currently have the platform under investigation for potentially illegal content and disinformation.

Former moderators also allege that under new leadership, the company drastically reduced its human content oversight, betting instead that its algorithms would suffice. Whether or not that account is accurate, this week's flood of Swift deepfakes does not suggest the automated approach is working. If anything, it has shown AI producing harmful fake content faster than any existing system can catch it programmatically at scale.

While AI generation marks a novel threat, the underlying incentives driving deepfake porn persist. The Swift incident has sparked yet another wave of toxic, gendered harassment from critics attacking her visible support of Travis Kelce's NFL career. As long as there is demand for nonconsensual intimate media as a tool to intimidate women in public life, supply will respond in turn.

For X, the episode makes clear that any aspiration to remain an open global forum carries urgent implications for AI governance. Advanced generative models like DALL-E and Stable Diffusion raise policy questions about how to police manipulation risks at enormous volume. But companies aspiring to host public discussion must also reckon with longstanding real-world harms that new technologies stand to amplify exponentially if left unchecked. Any platform enabling tens of millions of views of nonconsensual fake porn has shown itself indifferent to those far more basic human concerns still waiting to be addressed.

By: Mary Rose Oh
Originally published at: goswifties