
Meta’s Oversight Board Calls For New Rules On Deceptive AI During Conflicts

The Meta Oversight Board urged clearer policies after reviewing cases involving misleading AI-altered media tied to armed conflict.


Meta’s Oversight Board has called on the company to revise its policies on AI-generated and manipulated media, citing gaps identified in two cases involving content shared during armed conflicts.

The Board said the core issue in both cases is that Meta’s current framework focuses primarily on how content is created (including whether AI was used), rather than on whether it is likely to mislead users, particularly in high-risk contexts such as armed conflict.

In the first case, the Board reviewed a digitally altered video that misrepresented events in a conflict zone. The content appeared authentic but included misleading elements that could distort viewers’ understanding of real-world developments. Meta initially left the video online, determining that it did not violate the manipulated media policy because it did not meet the company’s threshold for technical deception.

In the second case, Meta removed a video that had been altered with conventional editing techniques and contained no synthetic or AI-generated elements, determining that it violated the company’s rules on misleading media. The Board found that Meta’s enforcement across the two cases was inconsistent, with similar risks of harm treated differently under existing policies.

“The Board calls on Meta to develop a clear and comprehensive policy on deceptive AI-generated content, particularly during crises and armed conflicts,” the Oversight Board said in its official statement.

To address these gaps, the Board recommended that Meta adopt a broader definition of deceptive media that considers both authenticity and potential harm. It said policies should apply consistently regardless of whether content is AI-generated, digitally edited, or otherwise manipulated.

The Board also called for clearer labeling of AI-generated or altered content so users can better understand its origin and reliability. It recommended that Meta improve transparency in how moderation decisions are made, including publishing more detailed explanations of enforcement outcomes.

In addition, the Board urged Meta to strengthen its approach in conflict settings by incorporating context-specific risk assessments, including how content may influence public perception or incite harm. It also recommended expanding language coverage and regional expertise to ensure consistent enforcement across different geographies.

Meta is required to respond publicly to the Board’s recommendations within 60 days, stating whether it will implement the proposed changes or provide reasons for declining them.

The Oversight Board, an independent body funded by Meta, reviews selected content moderation decisions and issues policy recommendations that are advisory but intended to guide platform governance.
