Scrolling through the Sora app can feel like stepping into a real-life multiverse. Imagine Michael Jackson performing stand-up comedy, an alien from the “Predator” movies flipping burgers at McDonald’s, or a home security camera capturing a moose crashing through a glass door. These improbable realities, fantastical futures, and absurdist clips are the essence of Sora, a short-form video platform developed by OpenAI, the company behind ChatGPT. The endless stream of hyperreal, AI-generated videos is mind-bending and mesmerizing at first glance, but it quickly raises a harder question: whether anything on the feed can be trusted as authentic.
As AI-generated videos flood the platform, viewers are left to scrutinize every piece of content, never certain whether it is real or fake. Sam Gregory, an expert on deepfakes and executive director of WITNESS, a human rights organization, warns, “The biggest risk with Sora is that it makes plausible deniability impossible to overcome, and that it erodes confidence in our ability to discern authentic from synthetic.” He adds, “Individual fakes matter, but the real damage is a fog of doubt settling over everything we see.”
Since its launch on September 30, the Sora app has skyrocketed in popularity, topping a million downloads in less than a week and outpacing the initial growth of ChatGPT. It quickly climbed to the top of the U.S. App Store. For now, Sora is available only to iOS users in the United States, and access requires an invitation code. During setup, users scan their faces and read three displayed numbers aloud to create a unique voice signature. Once verified, they can enter custom text prompts to produce hyperreal 10-second videos, complete with background sound and dialogue.
One standout feature is “Cameos,” which lets users insert their own likeness, or that of consenting friends, into generated videos. All outputs carry a visible watermark, but several websites have already emerged offering tools to remove it. Initially, OpenAI took a relaxed stance on copyright enforcement, permitting the recreation of copyrighted material unless rights holders opted out. The result was a wave of AI-generated content featuring characters from popular shows like “SpongeBob SquarePants” and “Friends,” as well as mashups involving deceased celebrities such as Tupac Shakur and even historical figures.
However, this creative freedom has not been without controversy. Zelda Williams, daughter of the late comedian Robin Williams, expressed her frustration on Instagram, stating, “You’re not making art; you’re making disgusting, over-processed hot dogs out of the lives of human beings.” Similarly, the family of Fred Rogers, the beloved host of “Mister Rogers’ Neighborhood,” voiced concerns about AI-generated videos misrepresenting him. Christina Gorski, director of communications at Fred Rogers Productions, criticized the app for undermining the core values of child development that Rogers embodied.
In response to rising concerns over copyright and likeness appropriation, Hollywood talent agencies and unions, including SAG-AFTRA, have begun to accuse OpenAI of misusing actors’ likenesses. Sam Altman, CEO of OpenAI, recently shared plans to give rights holders greater control over how their characters can be used in AI videos. He indicated that studios would be able to opt in to having their characters featured, a significant shift from OpenAI’s previous opt-out policy.
The estates of deceased celebrities are also taking steps to protect their likenesses in the era of AI. CMG Worldwide, which represents numerous estates, has partnered with deepfake detection company Loti AI to safeguard against unauthorized digital uses. “Since the launch of Sora 2, our signups have increased roughly 30x as people search for ways to regain control over their digital likeness,” explained Luke Arrigoni, co-founder and CEO of Loti AI.
As legal pressure mounts, Sora has started to tighten its policies on recreating copyrighted characters. Users have begun to encounter content policy violation notices when attempting to generate Disney characters or other protected imagery. Despite these restrictions, a subculture has emerged around what’s been dubbed “AI slop,” bizarre and humorous content that has gained significant traction online. Critics warn that the casual appropriation of likenesses could sow confusion and misinformation.
For all its creative novelty, the Sora app poses significant ethical challenges. As Gregory points out, the potential for misuse is considerable, from fabricated news footage to propaganda. How this technology evolves will shape the future of content creation and public trust in media. Navigating that landscape will depend on people retaining control over their digital identities and on platforms upholding ethical standards for AI-generated content.