Meta has fixed a security bug that allowed users of its AI chatbot to view the private prompts and AI-generated responses of other users. The flaw raised serious concerns about user privacy and data security on the platform.
Sandeep Hodkasia, founder of the security testing firm AppSecure, disclosed the issue exclusively to TechCrunch. He revealed that Meta paid him $10,000 through its bug bounty program for privately reporting the flaw, which he discovered on December 26, 2024. The company deployed a fix on January 24, 2025.
Hodkasia explained that he identified the vulnerability while examining how Meta AI lets users edit their prompts to regenerate text and images. He found that when a user modifies a prompt, Meta’s back-end servers assign a unique numeric identifier to both the prompt and its corresponding AI-generated response. By monitoring the network traffic in his browser during this process, Hodkasia realized he could simply change that number, and the servers would return a prompt and response belonging to a different user.
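The pattern Hodkasia describes is a classic insecure direct object reference (IDOR): a resource is addressed purely by a numeric ID, and the server trusts any authenticated request for it. Meta has not published details of its API, so the endpoint, field names, and IDs below are entirely hypothetical; this is only a minimal sketch of how the technique works in general.

```python
import requests

# Hypothetical endpoint and session; placeholders, not Meta's real API.
BASE_URL = "https://ai.example.com/api/prompts"
SESSION_COOKIE = {"session": "your-own-valid-session-token"}

def fetch_prompt(prompt_id: int) -> dict:
    """Request a prompt/response pair by its numeric identifier."""
    resp = requests.get(f"{BASE_URL}/{prompt_id}",
                        cookies=SESSION_COOKIE, timeout=10)
    resp.raise_for_status()
    return resp.json()

# A user sees their own prompt's ID (say, 1000042) in the browser's
# network inspector while editing a prompt...
my_prompt = fetch_prompt(1000042)

# ...then simply decrements the number. If the server never checks
# ownership, it returns another user's private prompt and response.
someone_elses_prompt = fetch_prompt(1000041)
```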
The implications of this bug were serious: Meta’s servers were not verifying that the user requesting the AI-generated content was authorized to access it. Worse, Hodkasia said the prompt numbers Meta generated were “easily guessable,” meaning a malicious actor could have scraped users’ original prompts at scale by rapidly cycling through prompt numbers with automated tools.
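The standard defense has two layers: check ownership on every request, and avoid sequential identifiers so IDs cannot be enumerated. The sketch below illustrates both, using a hypothetical in-memory store; it is an assumption about the general fix, not a description of what Meta actually deployed.

```python
import uuid

# Hypothetical store; in production this would be a database lookup.
PROMPTS = {}  # prompt_id -> {"owner_id": ..., "prompt": ..., "response": ...}

def create_prompt(owner_id: str, prompt: str, response: str) -> str:
    # A random UUID instead of a sequential counter makes IDs
    # non-guessable, so automated enumeration is impractical.
    prompt_id = str(uuid.uuid4())
    PROMPTS[prompt_id] = {"owner_id": owner_id,
                          "prompt": prompt,
                          "response": response}
    return prompt_id

def get_prompt(requesting_user_id: str, prompt_id: str) -> dict:
    record = PROMPTS.get(prompt_id)
    # Authorization check: the requester must own the record. Returning
    # the same error for "missing" and "not yours" keeps an attacker
    # from distinguishing valid IDs from invalid ones.
    if record is None or record["owner_id"] != requesting_user_id:
        raise PermissionError("prompt not found")
    return record
```

Random identifiers alone are not a substitute for the ownership check; unguessable IDs only slow enumeration, while the authorization check is what actually closes the hole.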
When contacted by TechCrunch, Meta confirmed that the vulnerability was fixed in January 2025 and said it found no evidence that the bug had been maliciously exploited. “We found no evidence of abuse and rewarded the researcher,” said Meta spokesperson Ryan Daniels.
This news comes as major tech companies race to launch and improve their AI products. For all its potential, AI technology carries security and privacy risks that demand vigilant oversight and prompt action to safeguard user data.
As AI products continue to evolve, robust security measures will be essential to maintaining user trust amid rapid technological change.