If you spend any time online, you've likely encountered an AI-generated video or image. Many of us have been misled by these creations, myself included, as with that viral video of bunnies on a trampoline. The arrival of Sora, however, has pushed AI video to new heights, making it more important than ever to learn how to identify AI content. Sora, a sister app to ChatGPT developed by OpenAI, takes its name from the video generator the company debuted in 2024. It recently received a major upgrade with the introduction of the Sora 2 model, alongside a new social media platform of the same name.
The Sora app, reminiscent of TikTok, quickly captured the attention of AI enthusiasts eager to acquire invite codes. However, unlike traditional social media platforms, Sora operates entirely on AI-generated content. Every video you come across on Sora is fabricated, presenting an intriguing yet potentially perilous experience. While the interface appears harmless at first, the risks associated with this technology are substantial, particularly regarding the creation of deepfakes. The implications of such technology have drawn concerns from experts, especially in its potential to spread misinformation and blur the line between reality and fabrication.
From a technical standpoint, the videos produced by Sora are remarkably advanced, outperforming competitors like Midjourney's V1 and Google's Veo 3. Sora's videos boast high resolution, synchronized audio, and a surprising level of creativity. One standout feature, known as cameo, allows users to incorporate the likenesses of others into nearly any AI-generated scenario, resulting in eerily realistic videos. This capability raises alarms, particularly for public figures and celebrities who could become victims of harmful deepfakes. In response, organizations such as SAG-AFTRA have urged OpenAI to enhance its safety measures to protect individuals from misuse of this technology.
Identifying AI-generated content remains a complex challenge for tech companies and social media users alike. However, there are strategies you can employ to discern whether a video was created using Sora. Here are some key indicators to look for:
Every video downloaded from the Sora iOS app features a distinctive watermark — a white cloud icon that moves around the video’s edges, similar to the watermarking used by TikTok. This watermark serves as a visual cue that the content is AI-generated. While watermarking is a useful tool, it's not infallible; static watermarks can be cropped out, and even moving watermarks can be removed using specific apps. OpenAI's CEO, Sam Altman, has acknowledged the need for society to adapt to a reality where anyone can create convincing fake videos.
Though it may seem daunting, checking a video's metadata is a reliable method to determine if it was made with Sora. Metadata is a set of information automatically attached to content upon creation, providing insights such as the type of camera used, the location, and the date and time the content was produced. AI-generated content often includes specific credentials that indicate its origins. Sora videos, for instance, carry C2PA metadata, which you can verify using the Content Authenticity Initiative's tool. Here’s how to check:
1. Visit the Content Authenticity Initiative's verification tool.
2. Upload the video you want to check.
3. Click 'Open' and review the information in the right-side panel.
4. If the video was generated by Sora, the tool will indicate that it was issued by OpenAI, confirming its AI origins.
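The steps above rely on the Content Authenticity Initiative's hosted tool, which fully parses and validates the credentials. If you're comfortable with a little code, you can do a rough local pre-check first: C2PA manifests are embedded in JUMBF containers, so a file carrying Content Credentials typically contains the `jumb` box type and a `c2pa` label somewhere in its raw bytes. The sketch below is only a heuristic of our own devising, not a validator; markers can be stripped, and their presence alone proves nothing about authenticity.

```python
# Rough heuristic pre-check for embedded C2PA Content Credentials.
# Not a substitute for the Content Authenticity Initiative's tool,
# which actually parses and cryptographically validates the manifest.

def has_c2pa_marker(path: str) -> bool:
    """Return True if the file's raw bytes contain the JUMBF box
    type ('jumb') and the C2PA manifest label ('c2pa')."""
    with open(path, "rb") as f:
        data = f.read()
    # C2PA manifests live in JUMBF superboxes labeled 'c2pa';
    # both byte strings appearing together suggests a manifest.
    return b"jumb" in data and b"c2pa" in data
```

If this returns True, it's worth running the file through the CAI tool for a real verdict; if it returns False, the credentials may simply have been stripped when the video was re-encoded or re-uploaded.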
On platforms like Instagram and Facebook, Meta has implemented systems to help identify and label AI content. Although these systems are not foolproof, they can offer clarity for flagged posts. TikTok and YouTube have also adopted similar policies. Ultimately, the most reliable way to confirm whether something is AI-generated is through disclosure by the creator. Many social media platforms now allow users to label their posts as AI-generated, enhancing transparency about the content creation process.
It’s crucial to understand that no single method can guarantee accurate identification of AI-generated videos at first glance. To protect yourself from deception, approach online content with a critical mindset. If something feels off, it's worth investigating further. Pay attention to signs like distorted text, anomalous objects, and unrealistic movements. Remember, even experts can fall victim to these advanced technologies, so don’t be too hard on yourself if you occasionally get misled.
As AI systems like Sora increasingly blend reality with artificial creations, it is our collective responsibility to clarify when something is real versus AI-generated. Awareness and vigilance are your best tools in navigating this evolving landscape.
(Disclosure: Ziff Davis, CNET's parent company, has filed a lawsuit against OpenAI, alleging copyright infringement in the training and operation of its AI systems.)