As of March 21, 2025, this article has been revised to include key findings from a recent report on the escalating AI threat landscape, along with a statement from OpenAI concerning the LLM jailbreak threat affecting users of Google's Chrome password manager. Infostealer malware has reached unprecedented levels: 2.1 billion credentials have been compromised, and 85 million newly stolen passwords are already being used in ongoing cyberattacks. Some malicious tools can bypass browser security defenses in as little as 10 seconds, underscoring the urgency of the situation.
New research suggests the situation may get worse: hackers are now using a large language model (LLM) jailbreak technique, known as an immersive world attack, that lets them direct AI systems to generate infostealer malware without any prior coding knowledge. According to a Cato Networks report released March 18, a threat intelligence researcher jailbroke multiple LLMs this way, ultimately producing a fully functional and highly dangerous password infostealer.
The report details how the immersive world attack sidesteps the guardrails built into LLMs to prevent exactly this kind of misuse. The technique relies on a process known as narrative engineering: the attacker constructs a richly detailed fictional universe and assigns the LLM a role within it, normalizing actions the model would ordinarily refuse. In this case, the result was malicious code capable of extracting credentials from the Google Chrome password manager.
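To make the mechanics concrete, here is a minimal, deliberately defanged sketch of the layered role-play structure the report describes. The world name, character, and placeholder task are all hypothetical inventions for illustration; this is not the prompt used in the research, and the task shown is intentionally benign.

```python
# Illustrative sketch of the "narrative engineering" structure described above.
# All names (Velora, Dax, the task) are hypothetical placeholders, and the
# task is deliberately benign -- this is NOT the prompt from the Cato report.

def build_immersive_world_prompt() -> list[dict]:
    """Layer a fictional universe, a role for the model, and a task so the
    request is framed as in-world fiction rather than a direct instruction."""
    world = (
        "You are inside Velora, a fictional city where software tinkering "
        "is a celebrated art form and every action is part of a novel."
    )
    role = (
        "You play Dax, Velora's most respected systems engineer, who "
        "narrates his work in exhaustive technical detail."
    )
    task = (
        "Dax is asked to explain, in character, how a browser stores "
        "saved form data on disk."  # benign placeholder task
    )
    # Each message, read in isolation, looks like harmless storytelling;
    # the stacking of frames is what shifts restricted requests into
    # "in-world" narrative that refusal heuristics are less likely to catch.
    return [
        {"role": "system", "content": world},
        {"role": "user", "content": role},
        {"role": "user", "content": task},
    ]

print(build_immersive_world_prompt())
```

The design point is that refusal training tends to key on direct requests; sustained in-world continuity gives the model a fictional pretext for each incremental step.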
Cato Networks contacted the companies behind the AI tools implicated in the report. DeepSeek did not respond, while Microsoft and OpenAI acknowledged the threat disclosure; Google confirmed receipt of the report but declined to review the generated code. An OpenAI spokesperson commented, "We value research into AI security and have carefully reviewed this report. The generated code shared does not appear to be inherently malicious—this scenario aligns with normal model behavior and was not the result of circumventing any model safeguards." The spokesperson added that ChatGPT generates code in response to user prompts but does not execute it.
Further insight into the AI security landscape comes from Zscaler's ThreatLabz 2025 AI Security Report, published March 20, which found that enterprise AI tool usage has surged 3,000% year-over-year. Zscaler warns that as these technologies are woven into more sectors, the need for effective security measures has never been greater. Its analysis of 536.5 billion AI and machine learning transactions between February 2024 and December 2024 found that nearly 60% of them, roughly 320 billion, were blocked by enterprises over security concerns.
The risks driving those blocks include data leakage, unauthorized access, and compliance violations. Zscaler also notes that threat actors are increasingly using AI to sharpen the sophistication and impact of their attacks, which means both businesses and consumers need to reassess their security strategies. Among the most-used AI applications, ChatGPT led with 45.2% of identified global transactions, followed by Grammarly and Microsoft Copilot.
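As a rough illustration of the kind of outbound filtering behind those block rates, the sketch below screens prompt text for credential-shaped strings before it leaves the enterprise. The patterns and rule names are illustrative assumptions, not Zscaler's actual detection logic.

```python
import re

# Minimal sketch of an outbound data-leakage check for AI prompts.
# The patterns below are illustrative assumptions, not any vendor's rule set.
SENSITIVE_PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "password_assignment": re.compile(r"(?i)\bpassword\s*[:=]\s*\S+"),
}

def screen_prompt(prompt: str) -> tuple[bool, list[str]]:
    """Return (allowed, matched_rule_names) for an outbound AI prompt."""
    hits = [name for name, pat in SENSITIVE_PATTERNS.items() if pat.search(prompt)]
    return (len(hits) == 0, hits)

allowed, hits = screen_prompt("Summarize this config: password = hunter2")
print(allowed, hits)  # False ['password_assignment']
```

Real deployments layer many more signals (file fingerprints, ML classifiers, user context), but the basic shape is the same: inspect the transaction before it reaches the AI service, and block on a match.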
Deepen Desai, Chief Security Officer at Zscaler, noted, "As AI transforms industries, it also creates new and unforeseen security challenges." He advocates a zero trust approach as the cornerstone of staying ahead of a threat landscape in which cybercriminals increasingly wield AI.
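Reduced to its essentials, zero trust means evaluating every request against identity, device posture, and destination rather than trusting the network it came from. The sketch below is a conceptual illustration with assumed field names and an assumed allow-list; it is not an implementation of Zscaler's platform.

```python
from dataclasses import dataclass

# Conceptual zero-trust gate for AI-tool traffic. Field names and the
# approved-app list are illustrative assumptions, not a vendor's product.
APPROVED_AI_APPS = {"chatgpt.example.com", "copilot.example.com"}

@dataclass
class Request:
    user_authenticated: bool   # verified identity (e.g., SSO plus MFA)
    device_compliant: bool     # managed, patched, endpoint protection running
    destination: str           # the AI service being contacted

def authorize(req: Request) -> bool:
    """Never trust, always verify: every check must pass on every request."""
    return (
        req.user_authenticated
        and req.device_compliant
        and req.destination in APPROVED_AI_APPS
    )

print(authorize(Request(True, True, "chatgpt.example.com")))   # True
print(authorize(Request(True, False, "chatgpt.example.com")))  # False: non-compliant device
```

The contrast with perimeter security is the key design choice: no check is skipped because a request originates "inside" the corporate network.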