According to reports from Fortune and Fox5DC, Taylor Swift's recent endorsement of Vice President Kamala Harris was prompted by her concerns over AI-generated deepfakes. The episode highlights the growing threat of misinformation in the digital age and its potential impact on elections and public perception.
It all began when a deepfake video of Swift making offensive remarks suddenly went viral.[1] It wasn't long before her fans realized it wasn't her, but the damage had already been done: many people believed (and still believe) it was Swift. A second deepfake featured Swift "singing" a supposed "secret track." Many fans assumed it was a new release and began sharing the song before she quickly let the world know it wasn't hers.[2]
Deepfakes are created when AI and machine learning are used to superimpose someone's face onto another body and/or mimic a specific person's voice.[1][2] The generated content often looks so real that few viewers realize it's fake.[3] Sometimes deepfakes are made for laughs; videos of celebrities saying silly things have circulated for years.[4] Unfortunately, a new wave of deepfakes is being made to spread misinformation that can be extremely damaging.[5][6]
Taylor is not laughing at all. She has spoken loudly and clearly about her disdain for deepfakes. Swift is a vocal advocate for artists owning their work in an age full of gray areas surrounding consent, privacy, and ownership.[1][2] She has argued across social media that AI shouldn't be used lightly: "Just because the technology exists doesn't mean it should be used without consent," she recently shared.[1] In her view, AI gives creators enormous power, and with that power should come responsibility.[3] She has also gone on record saying that her fears and frustrations reach far beyond her personal and professional worlds: in her mind, every single person should be worried about AI's implications in their own lives.[4]
As AI's scope grows, Swift and many others feel it is vital that society take a hard look at who is responsible for what gets generated. Who should be held accountable for deepfakes that go viral and harm celebrities, everyday people, businesses, political candidates, and more? Is it the fault of the person who created the video? What role does the developer of the tools play? And since social media platforms are used to spread the misinformation, are they to blame as well? These questions are expected to be answered in the coming months and years by legal teams and governments around the world, but in the meantime, Swift and others expect some pretty scary things to occur.[1][2][3]
According to Taylor Swift, the first step is for the world to become aware of the danger of deepfakes. The key is creating educational videos, articles, and other content that explains what deepfakes are, how to spot them, and how to report them.[1][2] Second, AI tools that help detect deepfakes are being developed as we speak. Of course, ongoing advances in deepfake generation will make detection harder and harder, which will keep innovators on their toes continually creating new modes of detection.[3] Finally, legal frameworks are vital (as mentioned above). With Swift and other celebrities speaking out about the issue, lawmakers are more likely to listen. Developing a legal framework is doable, but it will take considerable time and effort.[4][5]