For years, Meta has subjected new features for its platforms, including Instagram, WhatsApp, and Facebook, to rigorous evaluations of potential risks. These assessments, known internally as privacy and integrity reviews, have been conducted primarily by teams of human reviewers who scrutinize issues such as user privacy, potential harm to minors, and the risk of amplifying misleading or harmful content. However, recent internal documents obtained by NPR reveal a significant shift in this process: up to 90% of all risk assessments will soon be automated.
This transition means that critical updates to Meta's algorithms, new safety features, and changes to the rules governing how content can be shared will largely be approved by artificial intelligence systems. These modifications will no longer undergo thorough scrutiny by human evaluators, who have traditionally debated the unforeseen repercussions a platform change might have. Some within the company see the shift as a victory for product developers, who will be able to launch updates and features more rapidly.
However, current and former employees worry that this automation creates significant risks, since AI may struggle to accurately assess complex issues related to user safety. A former Meta executive, who spoke anonymously, warned that launching products faster with diminished scrutiny invites unintended harm, stating, “Negative externalities of product changes are less likely to be prevented before they start causing problems in the world.”
In response to these concerns, Meta has publicly reaffirmed its commitment to user privacy, saying it has invested billions of dollars in protecting user data. The company has also operated since 2012 under an agreement with the Federal Trade Commission (FTC) governing its handling of personal information, which requires it to conduct privacy reviews of new products. Meta emphasized that automation will streamline only low-risk decisions, and that human expertise will still be applied to novel and complex issues.
Despite these assurances, internal documents indicate that Meta is contemplating automated reviews in sensitive areas, including AI safety, youth risk, and integrity issues such as violent content and the spread of misinformation. Under the new process, product teams will receive instant decisions after completing a questionnaire about their projects that identifies risk areas and the requirements a launch must satisfy. This raises the question of whether engineers, most of whom are not privacy specialists, are equipped to make these critical judgment calls.
As Zvika Krieger, former director of responsible innovation at Meta, pointed out, product teams often prioritize speed over thoroughness, which can lead them to overlook significant risks. He cautioned that while automation could streamline some reviews, pushing the initiative too far would degrade the quality of assessments and outcomes.
Notably, Meta's internal announcements suggest that users in the European Union may be somewhat insulated from these changes. Oversight and decision-making regarding products and user data for EU users will reportedly remain with Meta's European headquarters in Ireland, in keeping with regulations such as the Digital Services Act, which mandates stricter policing of online platforms and stronger user protections.
As Meta leans on AI to speed up product reviews amid competition from platforms like TikTok and Snap, the tension between speed and safety grows sharper. Meta's latest quarterly integrity report noted that the company is using large language models to enhance content moderation, freeing human reviewers to focus on more complex violations. Industry observers such as Katie Harbath, CEO of Anchor Change, argue that while AI can streamline processes, human checks and balances remain vital to ensure that risks are adequately assessed.
However, some former employees are skeptical that expedited risk assessments will work as intended. One former Meta employee questioned the wisdom of moving faster on risk evaluations, stating, “Every time they launch a new product, there is so much scrutiny on it — and that scrutiny regularly finds issues the company should have taken more seriously.”
In short, while Meta aims to streamline risk management through automation, reducing human oversight of critical evaluations carries real stakes. As automated risk reviews roll out, balancing innovation, user safety, and regulatory compliance remains a central challenge for the company, and the ongoing debate among employees and outside observers will shape how it navigates these issues as its platforms continue to evolve.