The Suffolk Journal

Your School. Your Paper. Since 1936.
OPINION: AI deepfakes don’t just affect Taylor Swift

Damini Singh

Imagine waking up one morning and logging into your social media to find a photo of yourself staring back, only you don’t remember ever taking it. Not only that, but the photo is sexually explicit, depicting you at your most vulnerable.

Yet, you’ve never taken the photo, let alone posted it for the world to see. 

You start to panic – friends and family have already seen it and reached out. The messages all read something like, “Have you seen this??” or, “Why did you post this?” There’s no way to know whether others have saved, screenshotted or shared it. Your reputation, identity and body are now out of your control and in the hands of the public, and it’s not even you in the photo.

This is likely how Taylor Swift felt on Jan. 25, when sexually explicit AI deepfake photos of her emerged on X, formerly known as Twitter. AI needs to be regulated, and social media websites should be held responsible when these undeniably harmful images are spread online.

AI deepfake photos are images manipulated by AI applications to produce hyper-realistic media that can be passed off as authentic. Oftentimes this form of media takes an existing piece of visual content and adds a real person’s face to it. Other times it portrays a person saying or doing something they never actually said or did.

One fake photo using Swift’s image was viewed by 45 million users, reposted 24,000 times and received hundreds of thousands of likes and bookmarks. This is just one example of the many photos generated and distributed. However, Swift isn’t the only woman AI deepfake pornography has affected or could affect. Since generative AI tools were released to the general public, it has been relatively easy for anyone to go online and create fake content. Celebrities aren’t the only ones at risk of their identities being stolen and misused – everyday people, especially women, are vulnerable to these technological developments.

Online abuse of this kind doesn’t just stop once the digital media is taken down. AI deepfakes can ruin a person’s sense of self and be extremely traumatic. Studies have shown that watching an AI deepfake doppelgänger can cause a person to form inaccurate memories, leading them to believe, consciously or subconsciously, that they actually did the things their AI doppelgänger is shown doing.

Victims of AI deepfake porn also face uncertainty when taking legal action, as there are currently no federal regulations concerning AI deepfakes and only limited state regulations. Both federal and state governments are dangerously behind on policy that protects victims of the technology. Although there are multiple factors at play, such as users, social media platforms and debates about the government’s role in online content surveillance, basic principles must be set before the issue gets out of hand.

Two aspects specifically need to be addressed in potential legislation: what AI deepfakes can be used for, and who should be held responsible for posted material that violates that use. On the first, legislation should explicitly prohibit using AI deepfake media to create or facilitate non-consensual pornographic material. The question, then, is who should be held responsible for the damage.

Although the user distributing the content is the active perpetrator, it’s often difficult to identify who that user is. In such cases, social media platforms should be held legally responsible for destructive, libelous material. If platforms are held responsible for the digital material circulating on their sites, they will have a strong incentive to monitor posts closely. Social media platforms have the power to create strong guidelines for their respective sites in hopes of limiting the spread of maliciously created AI deepfakes.

Even though these proposed regulations won’t solve every issue concerning malicious use of AI deepfake technology, they’re a start in addressing a high-stakes problem that will only become more prominent if nothing is done to stop it.

About the Contributor
Damini Singh, Graphics Editor | she/her
Damini is a senior from Nashua, New Hampshire, majoring in graphic design with a minor in marketing. She is involved with multiple organizations on campus and is also president of Fusion Dhamaka. In her spare time, she often reads, tries different cuisines and loves hanging out with her friends in the Public Gardens.
