
The Anthropic Institute Launches To Study AI Safety And Policy

The AI company introduced a new standalone research initiative focused on the societal impact of AI.


Anthropic launched the Anthropic Institute, a new initiative designed to track the societal impact of AI based on research from across the company. The institute’s goal is to identify and raise awareness of potential or realized societal problems created by AI, and to work with external parties to address these risks. The research and findings provided by the institute will be made public. 

The institute will monitor and study risks related to evolving AI models, including misuse, loss of control, and broader systemic impacts. It will also examine how AI systems behave in real-world environments and evaluate methods for improving reliability and safety.

The day-to-day team is drawn from three internal departments, each serving different functions:

The Frontier Red Team – stress-tests AI systems

Societal Impacts – monitors AI use in the real world

Economic Research – tracks AI’s impact on jobs and the broader economy

Co-founder Jack Clark will serve as Head of Public Benefit.

Concurrently, Anthropic announced the expansion of its Public Policy team to “help inform and shape AI governance around the world.” The Public Policy team is opening its first office in Washington, D.C., and is hiring rapidly. It is headed by Sarah Heck, formerly the company’s Head of External Affairs and, before that, Senior Director for Global Engagement at the National Security Council during the second Obama administration.
