For years, Meta has trained its AI models on the billions of public images users have uploaded to Facebook and Instagram. Recent developments, however, point to a significant shift in strategy. According to a report from TechCrunch, Meta now appears to want access to the billions of photos users have never uploaded to those platforms. This new approach raises important questions about user privacy and consent.
On Friday, Facebook users reported encountering pop-up messages when attempting to post a Story. The messages prompt users to opt into a new feature called “cloud processing.” By accepting, users allow Facebook to “select media from your camera roll” and upload it to Meta's cloud on an ongoing basis, ostensibly to generate creative ideas such as collages, recaps, and themed content for occasions like birthdays or graduations.
By agreeing to the feature, however, users also consent to Meta's AI terms, which permit the analysis of “media and facial features” from those unpublished photos, along with metadata such as when the photos were taken and whether other people or objects appear in them. Users further grant Meta the right to “retain and use” this personal information, raising significant privacy concerns.
Meta has openly acknowledged that it has used data from all content published on Facebook and Instagram since 2007 to train its generative AI models. The company says it draws only on public posts from adult users, but what counted as “public” in 2007, or how Meta determines that a poster was an adult at the time, remains ambiguous. And unlike Google, which explicitly states that it does not train generative AI on personal data from Google Photos, Meta's terms do not say whether unpublished photos accessed through “cloud processing” are excluded from training.
Meta's AI usage terms, in effect since June 23, 2024, offer no clarity on this point, leaving many users uneasy that their unpublished media could be used for AI training.
Fortunately, Facebook users can disable the camera roll cloud processing feature in their settings; turning it off also begins removing unpublished photos from the cloud after 30 days. An opt-out buried in settings, however, does little to resolve the underlying concern: the feature opens a new channel to personal data that users never explicitly chose to publish.
Reports from Reddit users also indicate that Meta is already surfacing AI restyling suggestions on previously uploaded photos without users' prior knowledge. One user, for instance, reported that Facebook had restyled her wedding photos in the manner of Studio Ghibli without her awareness or consent, underscoring the potential privacy invasions tied to Meta's AI practices.
As Meta continues to evolve its AI training methods, users should weigh carefully what consenting to features like cloud processing entails. With unpublished images potentially in scope for AI training, it is crucial to understand the risks and take proactive steps to safeguard personal data.