
AI Threat Landscape: New Malware Techniques Unleashed

3/22/2025
A new report reveals an alarming rise in infostealer malware, with hackers using AI to create sophisticated password stealers with little effort. Learn about the Immersive World attack and its implications for Chrome users.

Update on the AI Threat Landscape: March 21, 2025

As of March 21, 2025, this article has been updated with findings from a new report on the escalating AI threat landscape, along with a statement from OpenAI concerning the LLM jailbreak threat affecting users of Google Chrome's password manager. Infostealer malware has reached unprecedented levels: 2.1 billion credentials have been compromised, and 85 million newly stolen passwords are already being used in ongoing cyberattacks. Some malicious tools can bypass browser security measures in as little as 10 seconds, underscoring the urgency of the situation.

The Disturbing Trend of Infostealer Malware

New research suggests the situation may worsen: hackers are now using a large language model (LLM) jailbreak technique, known as the Immersive World attack, to instruct AI systems to generate infostealer malware without any prior coding knowledge. In a report released March 18, Cato Networks described how one of its threat intelligence researchers jailbroke multiple LLMs, culminating in a fully functional and highly dangerous password infostealer.

The Immersive World Attack Explained

The report details how the Immersive World attack circumvents the safeguards built into LLMs to prevent exactly this kind of output. The technique relies on a process known as narrative engineering: the attacker constructs a highly detailed fictional universe and assigns the LLM a role within it, normalizing actions that would ordinarily be refused. The end result was working malicious code capable of extracting sensitive credentials from the Google Chrome password manager.

Responses from AI Companies

Cato Networks contacted the companies behind the AI tools implicated in the report. DeepSeek did not respond, while Microsoft and OpenAI acknowledged the threat disclosure. Google confirmed receipt of the report but declined to review the generated code. An OpenAI spokesperson commented, "We value research into AI security and have carefully reviewed this report. The generated code shared does not appear to be inherently malicious—this scenario aligns with normal model behavior and was not the result of circumventing any model safeguards." OpenAI also emphasized that ChatGPT generates code based on user prompts but does not execute it.

Insights from the Zscaler AI Security Report

Further insight into the AI security landscape comes from Zscaler's ThreatLabz 2025 AI Security Report, published March 20. It found that enterprise use of AI tools has surged roughly 3,000% year-over-year, and Zscaler warns that as these technologies become integrated into more sectors, security measures are more critical than ever. Analyzing 536.5 billion AI and machine learning transactions between February 2024 and December 2024, Zscaler found that enterprises blocked nearly 60% of them over security concerns.
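To put the Zscaler figures in perspective, the blocked share works out to a very large absolute number. A minimal back-of-the-envelope calculation, using only the two figures reported above (536.5 billion transactions, nearly 60% blocked):

```python
# Rough scale of blocked AI/ML transactions, from the Zscaler report figures.
total_transactions = 536.5e9   # 536.5 billion transactions analyzed (Feb-Dec 2024)
blocked_share = 0.60           # "nearly 60%" blocked by enterprises

blocked = total_transactions * blocked_share
print(f"~{blocked / 1e9:.0f} billion transactions blocked")  # ~322 billion
```

In other words, enterprises blocked on the order of 320 billion AI-related transactions in under a year, which illustrates why Zscaler frames adoption and risk as growing in tandem.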

Recommendations for Enhancing Security

Data leakage, unauthorized access, and compliance violations are among the risks accompanying this rapid adoption of AI. Zscaler also notes that threat actors are increasingly using AI to improve the sophistication and impact of their attacks, which means both businesses and consumers need to reassess their security strategies. Among the most frequently used AI applications, ChatGPT led with 45.2% of identified global transactions, followed by Grammarly and Microsoft Copilot.

Deepen Desai, Chief Security Officer at Zscaler, noted, "As AI transforms industries, it also creates new and unforeseen security challenges." He advocates for a zero trust approach as a cornerstone for staying ahead in the evolving threat landscape where cybercriminals increasingly leverage AI for their attacks.

© Copyright 2025 BreakingOn. All rights reserved.