AI Governance And Disclosure Lag Behind Deployment, UNESCO/Thomson Reuters Study Finds

Analysis of 2,972 companies shows gaps between AI governance commitments and real-world implementation across data, oversight, and accountability.

The Responsible AI In Practice Study analyzed 2,972 companies and found that AI governance and disclosure are not keeping pace with the widespread deployment of AI, creating gaps in oversight, accountability, and risk management.

Key Takeaways

  • 43.7% of companies publicly state they have an AI strategy, but only 13% align with a formal governance framework
  • 76% show no evidence of policies to evaluate the quality of training data
  • 72% do not report conducting any AI-related impact assessments
  • 40% report board- or committee-level oversight of AI
  • Only 12.4% have policies ensuring human oversight of AI systems
  • 31% of companies report having AI training programs in place
  • Just 15.4% can trace AI system impacts to a responsible party across the lifecycle

The findings are based on the AI Company Data Initiative, developed by the Thomson Reuters Foundation in partnership with UNESCO, using one of the largest global datasets of corporate AI disclosures.

The report finds that companies are increasingly embedding AI into products, services, and operations, but governance mechanisms are not keeping pace. This creates what the report describes as a “widening information gap,” where companies communicate high-level principles and strategies but provide limited detail on how AI is actually deployed or controlled in practice.

Additional findings highlight how these gaps play out in practice. Fewer than one-third of companies report having dedicated AI governance resources and formal structures, such as safety task forces. Only 28.7% of companies publicly state that they adhere to an AI governance framework, such as the EU AI Act or California’s SB53, suggesting that most organizations are not anchoring their AI practices to recognized standards.

The report also identifies gaps in data governance and third-party risk. Fewer than one in four companies with AI strategies report having policies to evaluate training data, and roughly one in five have guidance governing data shared with external AI providers.

Most companies lack the infrastructure to track systems or assign responsibility when failures occur. Only 2.7% of companies report having a formal AI model registry, and just 15.4% say they can trace AI system impacts to a responsible party across the lifecycle.

AI training and worker preparedness also show gaps. While 31% of companies report having AI training programs, only 12% offer structured training. Further, just 2.5% report having an AI safety and security task force.

Workforce-related governance remains minimal: only 14% of companies report implementing policies to minimize the impact of AI on workers, and very few provide AI-specific complaint mechanisms, reducing visibility into how harms are identified and addressed.
