
Sedgwick Highlights Artificial Intelligence Risks In 2026 Global Risk Forecasting Report

Annual report identifies preparedness gaps and governance challenges related to artificial intelligence use.


Key takeaways

  • Most organizations report limited preparedness for artificial intelligence (AI) risks despite increased governance activity.
  • AI adoption is outpacing formal risk management, controls, and oversight frameworks.
  • Executives cite regulatory uncertainty and operational risk as leading AI-related concerns.

Sedgwick released its 2026 Global Risk Forecasting Report, which identifies AI as a growing risk area for organizations across industries. This article focuses specifically on the AI-related findings within the broader report, which also addresses catastrophe, supply chain, workforce, and global risk trends.

The report is based on research conducted by Sedgwick specialists and survey responses from senior executives at large multinational organizations. According to Sedgwick, the AI section is intended to assess organizations’ preparedness to manage operational, legal, and governance risks associated with expanding AI use.

Sedgwick reported that while AI adoption continues to accelerate, formal risk management has not kept pace. Survey data showed that 70% of respondents said their organizations have established AI risk committees, but only 14% said they are fully prepared to manage AI-related risks across their operations. An additional 31% said they are struggling to keep up with AI risk preparation.

Key AI-related findings from the report include:

  • A majority of organizations have implemented some form of AI governance, but few report enterprise-wide readiness.
  • Risk committees are more common than documented policies, controls, or monitoring processes.
  • Respondents cited concerns about data quality, regulatory compliance, model oversight, and unintended consequences.
  • AI-related risks are increasingly viewed as an enterprise issue rather than a technology-only concern.

The report notes that organizations face challenges aligning AI deployment with existing risk, compliance, and insurance frameworks. Sedgwick said gaps in oversight may increase exposure to claims, regulatory scrutiny, and operational disruption as AI systems are embedded into core business processes.

“Navigating risk in 2026 requires organizations to address AI risks with the same rigor applied to other enterprise exposures,” said Mike Arbour, CEO of Sedgwick, in an official statement accompanying the report. “Governance, accountability, and preparedness will be essential as AI use expands.”

Sedgwick said the AI findings are intended to help organizations evaluate current controls, identify gaps, and align AI governance with broader risk management strategies. The company noted that AI-related risk considerations are expected to remain a central focus for executives as adoption continues across functions and industries.
