
AI Industry Falls Short on Safety: New Report From Future of Life Institute Sounds Alarm

“AI Safety Index Winter 2025” finds that none of the leading AI firms fully meets emerging global standards; experts warn that existential-risk safeguards remain largely inadequate.

The Future of Life Institute (FLI) released the Winter 2025 edition of its AI Safety Index, offering a detailed assessment of eight leading AI companies’ efforts to manage both immediate and existential risks. The findings paint a stark picture: although some firms raise the bar modestly, the industry remains underprepared for the growing threat posed by advanced AI systems. 

Anthropic and OpenAI top the rankings, each earning a C+. Google DeepMind follows with a C. The remaining five companies (xAI, Z.ai, Meta, DeepSeek, and Alibaba Cloud) trail behind, most with grades in the D range or lower.

A core finding of the report is that, despite considerable public commitments, none of the companies has demonstrated a credible strategy for controlling potentially superintelligent systems. Areas such as “existential safety,” risk assessment, and information sharing remain weak across the board.

As FLI notes, “all of the companies reviewed are racing toward AGI/superintelligence without presenting any explicit plans for controlling or aligning such smarter-than-human technology.”

The gaps are especially pronounced in transparency: most companies provide only limited, if any, public disclosure of internal risk assessments, evaluation methodologies, or whistleblower protections. In the “Information Sharing” domain, only Anthropic scored in the A range; many were rated D or D-. 

The report warns that this lack of oversight and concrete safeguards comes at a perilous time. As firms race to build ever more capable systems, the risk of catastrophic failure or misuse grows in step. Without robust, measurable, and enforceable safety standards, the report argues, these market leaders are heading toward a “race to the bottom.”

FLI offers several recommendations to close these gaps. Among them:

  • AI developers should create and publicly commit to detailed risk-management frameworks that include measurable thresholds, pre-deployment testing, and independent oversight. 
  • Companies should significantly improve transparency, for example by sharing model evaluations, enabling external audits, and implementing whistleblower protections.
  • Policymakers should consider binding regulations to ensure companies meet minimum safety and governance standards before deploying frontier AI systems. 

The Winter 2025 AI Safety Index underscores a growing consensus among experts: the AI industry’s drive for rapid capability gains must be matched by commensurate safety, transparency, and governance mechanisms. Without such safeguards, many believe, the public and global systems may be exposed to unacceptable levels of risk.
