Meta, the parent company of Facebook, Instagram, and WhatsApp, is set to automate up to 90% of its privacy and integrity reviews, transitioning from human-led evaluations to AI-driven systems, according to internal documents.
This shift aims to accelerate the rollout of new features and algorithm updates across its platforms, which serve over 3 billion users globally.
However, current and former employees express concerns that delegating complex risk assessments to AI could overlook critical privacy and societal harms, potentially compromising user safety and data protection.
The change comes as Meta faces ongoing scrutiny from regulators and privacy advocates, particularly in the European Union, where data usage for AI training has sparked legal challenges.
From Human Oversight to AI Automation
Historically, Meta’s privacy and integrity reviews relied on human evaluators to assess the risks of new features, such as potential privacy violations or the spread of harmful content.
These reviews were mandatory before updates reached billions of users. Under the new system, AI will deliver instant decisions based on questionnaires completed by product teams, identifying risks and mitigation requirements.
Human reviews will only occur for novel or high-risk projects or when teams request additional feedback. Meta’s chief privacy officer, Michel Protti, stated in a March 2025 internal post that this shift empowers product teams to streamline decision-making, with human expertise reserved for complex issues.
However, a former Meta executive warned that prioritizing speed over scrutiny could lead to unaddressed societal risks, as AI may struggle with nuanced ethical considerations.
Concerns Over Engineer-Led Risk Assessments
The new policy allows product engineers, rather than dedicated privacy experts, to make initial risk judgments, raising alarms among former employees. Zvika Krieger, Meta’s former director of responsible innovation, noted that product teams are primarily evaluated on launch speed, not privacy expertise, potentially turning risk assessments into superficial exercises.
These concerns center on fears that reduced human oversight could miss subtle issues, such as content moderation failures or data misuse.
Meta insists that low-risk decisions are suitable for automation and that audits will monitor AI-driven outcomes. However, critics argue that the complexity of issues like youth safety and misinformation requires human judgment, especially given Meta’s history of privacy scandals, including the Cambridge Analytica incident.
Did You Know?
Meta AI, the company’s chatbot launched in April 2025, has reached 1 billion monthly active users, but its data collection practices have raised concerns about storing user conversations for ad targeting and model training.
Regulatory and Ethical Challenges
Meta’s automation push coincides with heightened regulatory pressure, particularly in the EU, where the privacy group noyb has threatened legal action over Meta’s use of user data for AI training without explicit consent, citing GDPR violations.
Meta plans to begin AI training with European user data on May 27, 2025, after the Irish Data Protection Commission halted a similar effort last year amid what the company called “regulatory unpredictability.”
Meta claims its approach aligns with EU guidelines, but privacy advocates argue that relying on “legitimate interest” for data use is insufficient. Additionally, Meta’s recent policy changes, such as relaxed content moderation and plans to end U.S. fact-checking operations, have fueled concerns about its commitment to user safety, amplifying the risks of automated assessments.