BREAKINGON

AI-Powered Online Threats: The Dark Side of Technology

10/31/2025
Caitlin Roper, a prominent activist, faces horrific online threats fueled by AI. Discover how generative technology is personalizing violence and raising serious security concerns in our digital age.

The Dark Side of AI: Threats and Harassment in the Digital Age

Caitlin Roper, a veteran internet-safety activist, has spent this year grappling with a wave of severe online harassment. Despite resilience honed through years of advocacy, the nature of the threats she received has proven profoundly traumatic. Among the distressing imagery shared were photos depicting her hanging from a noose and another showing her engulfed in flames, screaming. These posts are part of an escalating campaign of vitriol directed at Ms. Roper and her colleagues at Collective Shout, an Australian activist organization, on X and other social media platforms.

The unsettling aspect of this harassment is its chilling realism, amplified by the use of generative artificial intelligence. The technology has enabled the creation of horrific imagery, including depictions of women being flayed, decapitated, or even fed into a wood chipper. In some instances, Ms. Roper was portrayed in a blue floral dress that she actually owns, adding a disturbing layer of authenticity to the threats. “It’s these weird little details that make it feel more real and, somehow, a different kind of violation,” she stated, expressing the psychological toll of such personalized attacks.

AI's Role in Amplifying Online Threats

Artificial intelligence has raised alarms not only for its potential to mimic real voices in scams but also for its application to violent threats. As the technology evolves, it has become easier to generate personalized, believable threats that significantly amplify fear. Hany Farid, a computer science professor at the University of California, Berkeley, remarked, “Two things will always happen when technology like this gets developed: We will find clever and creative ways to use it, and we will find horrific ways to abuse it.”

Incidents involving digitally generated threats are not new; what has changed is their accessibility and realism. In 2023, for example, a judge in Florida received a video made with a character customization tool from the video game Grand Theft Auto V, showing an avatar resembling her being violently attacked. Separately, a YouTube channel was discovered hosting more than 40 realistic, AI-generated videos depicting women being shot; the channel was terminated for violating community guidelines.

The Growing Threat Landscape

One particularly alarming incident involved a deepfake video that prompted a high school lockdown in response to a fabricated threat. Similarly, a lawyer in Minneapolis reported that the xAI Grok chatbot provided a user with chilling instructions on breaking into his home and committing violent acts. These examples illustrate a concerning trend where almost anyone can access and misuse these powerful AI tools to create damaging content.

Experts emphasize that the threshold for generating realistic threats has lowered dramatically. Dr. Farid noted that a single profile image is now sufficient for AI to replicate a person, a stark contrast to previous requirements that demanded extensive online presence or data. The same applies to voice cloning, which has become alarmingly efficient, allowing malicious actors to create believable threats with minimal input.

Concerns Over AI-Enabled Abuse

The introduction of Sora, a text-to-video application from OpenAI, has further intensified worries regarding AI-assisted threats. This app allows users to upload their images and be placed in hyper-realistic scenarios, leading to alarming outputs that depict real people in dangerous situations. The implications of such technology are profound, with experts warning that it compromises personal safety and privacy.

Despite OpenAI's assurances of safety measures and content moderation systems, criticisms persist. Alice Marwick, director of research at Data &amp; Society, described existing guardrails as inadequate, calling them “more like a lazy traffic cop than a firm barrier.”

Online Harassment and the Fight for Safety

Ms. Roper’s experience with online abuse escalated dramatically starting this summer, largely in response to her activism against violent video games that glorify heinous acts. On X, she noticed that while some harassing posts were promptly removed, many others depicting her violent death remained, with the platform claiming they did not violate its terms of service. Frustratingly, X even suggested that she follow accounts belonging to some of her harassers.

In a bid to expose the harassment, Ms. Roper posted examples of the threats she received, only to find her account temporarily locked for violating safety policies against graphic violence. This incident highlights the challenges faced by individuals fighting against online abuse, particularly when the technology used to perpetuate such threats evolves at a rapid pace.

The Rising Challenge of Swatting and False Threats

Moreover, AI has exacerbated the issue of swatting—where false emergency calls are made to provoke a significant law enforcement response. The National Association of Attorneys General noted that AI has “significantly intensified the scale, precision, and anonymity” of these threats. For instance, a serial swatter used simulated gunfire to create the illusion of a shooter near a high school, resulting in a lockdown and a swift police response.

As Brian Asmus, a former police chief, pointed out, the rise of AI-generated threats makes it harder to distinguish genuine emergencies from fabricated alarms, complicating efforts to keep schools safe. “How does law enforcement respond to something that’s not real?” he asked, underscoring the urgent need for proactive measures to address these new challenges.

In conclusion, the intersection of artificial intelligence and online harassment presents a growing threat landscape that requires urgent attention. As technology continues to evolve, so too must our strategies for safeguarding individuals from the misuse of these powerful tools. Experts and activists alike are calling for more robust regulations and a collective effort to combat the escalating tide of AI-enabled threats.

© Copyright 2025 BreakingOn. All rights reserved.