OpenAI has recently launched its latest AI video generation model, Sora 2, which has captivated users with its remarkably realistic visuals and physics. The model debuted with an invite-only release earlier this month but quickly ran into significant problems, including copyright infringement and deepfakes of historical figures like Martin Luther King Jr. and John F. Kennedy.
In response to the chaos surrounding the rollout, OpenAI took immediate steps to mitigate its legal liabilities. CEO Sam Altman announced that intellectual property holders would need to opt in to the platform before users could remix content from popular shows like SpongeBob SquarePants and Family Guy. Furthermore, the company acted on requests from King’s estate to block any "disrespectful depictions" of the civil rights leader. OpenAI also reassured the labor union SAG-AFTRA that it was implementing measures to protect against deepfakes of recognizable entertainers.
Despite these precautions, troubling content continues to emerge on the Sora app, which operates similarly to TikTok but exclusively features AI-generated videos. Users can consent to have their likenesses used in “Cameos,” where the AI model inserts them into various contexts—many of which are not flattering. Research from Copyleaks, an AI analysis firm, has identified a disturbing trend where Sora videos depict celebrities using hateful racist epithets.
According to Copyleaks, a phenomenon it likens to "Kingposting" has surfaced, named after a viral 2020 incident in which an airline passenger wearing a Burger King crown was filmed shouting racial slurs. In Sora Cameo videos, notable figures such as Altman, billionaire Mark Cuban, influencer-turned-boxer Jake Paul, and streamers xQc and Amouranth have appeared as passengers in Burger King crowns, reenacting the offensive meme. Notably, all of these individuals had opted into the Cameo feature, with Cuban encouraging users to engage with his likeness.
To circumvent the platform's filters that block hate speech, users reportedly used “coded or phonetically similar terms” to generate audio mimicking well-known racial slurs. For instance, a deepfaked version of Altman is seen shouting “I hate knitters” while being escorted off an aircraft. As highlighted in the Copyleaks report, this behavior demonstrates a troubling trend where users intentionally test the limits of content moderation systems.
Because these videos can be easily downloaded, they are readily shared across other apps, amplifying their reach. A Sora-generated clip of Jake Paul saying "neck hurts" has already gained 1.5 million views and 168,000 likes on TikTok. Another Sora video featuring Paul shouting "I hate juice" has been flagged as an antisemitic provocation. Overall, users of Sora 2 have had little difficulty creating content that visualizes antisemitic tropes.
The issue extends beyond just offensive content. Popular streamer IShowSpeed expressed frustration upon discovering realistic deepfakes of himself in compromising scenarios, such as kissing a fan and announcing trips abroad. He criticized those who had encouraged him to join the Cameo system, stating, “Whoever told me to make it public, chat, you’re not here for my own safety, bro. I’m fucked, chat.” His only immediate solution was to manually delete each video, a task that Cuban has also taken upon himself.
Similar concerns have arisen with other platforms, like Grok Imagine from Elon Musk’s xAI, which has been criticized for producing harmful deepfakes of celebrities without their consent. Some users have even created explicit content featuring iconic characters and celebrities. Amid a feud with Jay-Z, rapper Nicki Minaj recently shared a Grok-generated image of the hip-hop mogul, showcasing the potential misuse of these technologies.
While users can take advantage of insufficient AI moderation on platforms like Grok Imagine or find ways to bypass guardrails on Sora, the greater concern lies in videos that falsely depict events involving ordinary, unidentified people. Copyleaks noted that fake news broadcasts and fabricated footage are gaining traction, with one Sora-generated clip showing a man catching a baby falling from a building garnering nearly 2 million likes on TikTok. This situation highlights how hyper-realistic AI video technology is outpacing the average person's ability to spot manipulation.
The implications of AI-generated content, especially deepfakes, are profound and potentially damaging. As bad actors exploit these technologies to spread hate and misinformation, the race for dominance among AI companies often overshadows the ethical considerations. While the wealthy may have resources to protect their reputations, the average individual must learn to navigate a world where seeing is rarely believing.