Imagine the shock of discovering that your internet search history is publicly visible. This troubling scenario is becoming a reality for some Meta AI users, whose queries and the chatbot's responses are being shared to a public feed without their full awareness. The situation raises significant questions about user privacy, as individuals may inadvertently expose personal information they would prefer to keep confidential.
According to internet safety experts, this exposure poses a serious user experience and security problem. Many posts can be easily traced back to users via their usernames and profile pictures, linking them to their social media accounts. As a result, users could unintentionally reveal sensitive information about their searches, such as requests for AI-generated images of scantily clad characters or inquiries aimed at cheating on tests.
It remains unclear whether users are fully aware that their searches are being posted on the public feed of the Meta AI app and website. Importantly, this process is not automatic. Users must actively choose to share their posts. A warning message appears before sharing, informing users that “prompts you post are public and visible to everyone... Avoid sharing personal or sensitive information.”
The BBC has reported instances where users uploaded photos of school or university test questions, soliciting answers from Meta AI. One particular chat thread was titled “Generative AI tackles math problems with ease.” Additionally, there were searches involving anthropomorphic characters in minimal clothing and even intimate medical inquiries, such as how to manage an inner thigh rash. These examples illustrate how users may unintentionally share sensitive information linked to their identities.
Launched earlier this year, Meta AI is accessible through various platforms including Facebook, Instagram, and WhatsApp, as well as a standalone product featuring a public Discover feed. Users have the option to make their searches private through their account settings. While Meta claims that users are in control and that nothing is shared to their feed unless explicitly posted, the reality appears more complex.
Rachel Tobac, CEO of the US cybersecurity firm Social Proof Security, highlighted the importance of aligning user expectations with actual functionality. On the social media platform X, she stated, “If a user's expectations about how a tool functions don't match reality, you've got yourself a huge user experience and security problem.” Users typically do not anticipate that their interactions with an AI chatbot will become part of a public feed resembling a traditional social media timeline, and that mismatch is precisely what leads to inadvertent disclosure.
As users embrace new technologies like Meta AI, it is crucial that they remain vigilant about their privacy settings and understand the implications of sharing information online. As the platform evolves, ongoing discussions about user safety and data protection will be essential to fostering a secure digital environment.