OpenAI has blocked users from creating videos of Martin Luther King Jr. on its Sora app, responding to complaints from the civil rights leader's estate about disrespectful depictions of King spreading across social media. Since the app launched three weeks ago, users have produced hyper-realistic deepfake videos showing King in offensive scenarios, including stealing from grocery stores and perpetuating racial stereotypes.
On Thursday, OpenAI and the estate of Martin Luther King Jr. issued a joint statement confirming that AI videos depicting King would be blocked as part of OpenAI's efforts to enhance safety measures surrounding historical figures. OpenAI acknowledged the importance of free speech, stating there are compelling interests in allowing users to create AI deepfakes of historical icons. However, the company emphasized that estates should have the ultimate authority over how their likenesses are utilized.
The Sora app, which remains invite-only, has taken a "shoot-first, aim-later" approach to safety, raising concerns among intellectual property lawyers, public figures, and disinformation researchers. Upon joining, users record a video of themselves from multiple angles and speak aloud, supplying the data the app uses to generate their likeness. Users control whether others can create deepfake videos of them, a feature the app calls a "cameo." However, the app previously allowed users to produce videos of numerous celebrities and historical figures without explicit consent, enabling fake footage of figures such as Princess Diana, John F. Kennedy, Kurt Cobain, and Malcolm X.
Kristelia García, an intellectual property law professor at Georgetown Law, said OpenAI's response to the King estate's complaints reflects a broader pattern in the AI industry of seeking forgiveness rather than permission. "The AI industry seems to move really quickly, and first-to-market appears to be the currency of the day," García noted in an email to NPR. She pointed to the complexities of right-of-publicity and defamation law, which varies by state and does not consistently apply to deepfakes. That legal ambiguity often means companies face few consequences for allowing such portrayals until a complaint is made.
How much control a person has over their likeness depends heavily on the estate's jurisdiction, with states like California offering robust protections: there, heirs or estates of public figures hold likeness rights for up to 70 years after death. Following Sora's debut, OpenAI CEO Sam Altman announced changes allowing rights holders to opt in to having their likenesses depicted by AI, rather than permitting such portrayals by default.
Despite these changes, the families of some deceased celebrities have criticized OpenAI's handling of the situation. Zelda Williams, daughter of the late actor Robin Williams, publicly asked people to stop creating AI videos of her father. "Please, just stop sending me AI videos of my dad," she wrote in an Instagram post, emphasizing that it is "NOT what he'd want." Bernice King, daughter of Martin Luther King Jr., echoed those sentiments on social media, urging users to refrain from creating disrespectful content.
Hollywood studios and talent agencies have also objected to OpenAI's decision to launch the Sora app without obtaining consent from copyright holders. As AI video tools continue to spread, how companies balance innovation against the rights of artists, estates, and the public remains an open question, with significant implications for the technology's impact on society.