Elon Musk's Grok AI model has implemented new restrictions aimed at preventing the editing of photos of real people in revealing clothing, following widespread concern over the proliferation of sexualized AI deepfakes. The decision responds to legal and ethical objections raised about the misuse of AI image tools. The announcement, made via X (formerly Twitter), states that Grok accounts will no longer be able to alter images of real individuals to show them in attire such as bikinis.
According to the statement released by X, the restrictions apply universally to all users of the Grok AI tool, including those with paid subscriptions. The company has introduced technological measures to ensure that users in jurisdictions where editing such images is illegal will be geo-blocked from generating images of real people in revealing clothing. This proactive step aims to mitigate potential legal ramifications and protect individuals from being depicted inappropriately.
The timing of the announcement coincides with California's top prosecutor initiating a probe into the distribution of sexualized AI deepfakes, particularly those involving children, signaling broader official concern about AI-generated sexual content. X reiterated that only paid users will be allowed to edit images using Grok on its platform, a measure intended to add a layer of accountability for those who might misuse the tool.
In the context of content moderation, Musk noted that with NSFW (not safe for work) settings enabled, Grok allows upper body nudity of imaginary adult humans, aligning with standards typically seen in R-rated films. However, he acknowledged that regulations may differ based on country-specific laws, emphasizing the need for localized compliance.
Leaders worldwide have voiced their concerns regarding Grok's image editing capabilities. Recently, Malaysia and Indonesia became the first nations to outright ban the Grok AI tool, responding to reports of unauthorized alterations leading to explicit images. In the UK, Ofcom, the media regulator, announced an investigation into whether X has complied with local laws concerning the dissemination of sexual images.
The backlash has led UK political figures, including Sir Keir Starmer, to warn that X may lose its ability to self-regulate if the situation is not addressed promptly. However, Starmer later welcomed the news of X's initiatives to tackle the issue. Some UK Members of Parliament have even opted to leave the X platform amid the controversy.
California Attorney General Rob Bonta highlighted the misuse of explicit material depicting women and children, emphasizing the harassment that has stemmed from such content. Policy researcher Riana Pfefferkorn expressed surprise at the delay in implementing Grok's new safeguards, arguing that the editing features should have been removed as soon as abuse was identified. She also questioned how X plans to enforce the new policies, including whether the AI model can reliably identify real individuals and what action will be taken against users who violate the rules.
As the debate over AI ethics and content moderation continues, Musk's response and X's enforcement of these restrictions will be closely scrutinized by the public and by regulators. How effectively the new safeguards work in practice will shape both the platform's legal exposure and the broader conversation about AI-generated imagery.