
CISO AI Risk Report Finds AI Identities Widely Embedded In Enterprise Systems With Limited Governance

The 2026 CISO AI Risk Report finds that, according to enterprise security leaders, AI systems have access to core business systems while organizations lack the visibility and governance controls to oversee them.


Key takeaways

  • 71% of CISOs say AI has access to core business systems, but only 16% govern that access effectively.
  • 92% of organizations lack full visibility into AI identities, and 95% doubt they could detect misuse.
  • 86% do not enforce access policies for AI identities; only 5% feel capable of containing a compromised agent.
  • 75% of organizations have discovered unsanctioned AI tools running in their environments.

Security leaders at large enterprises say AI tools and autonomous agents now operate inside critical business environments with limited oversight, according to findings released January 24 in the 2026 CISO AI Risk Report by Cybersecurity Insiders and Saviynt. The survey of 235 chief information security officers (CISOs) and senior security leaders finds widespread AI access coupled with significant gaps in governance, monitoring, and policy enforcement.

The report shows that 71% of CISOs say AI has access to core business systems, but only 16% govern that access effectively. These identities can include AI assistants, automated agents, and embedded copilots acting within systems such as enterprise resource planning, customer relationship management, and service platforms.

A large majority of organizations lack clear visibility into these AI identities: 92% report they do not have full visibility into where AI identities are operating, and 95% doubt they could detect misuse if it occurred.

Few organizations enforce access policies for AI identities: 86% say they do not apply such policies, and just 17% govern even half of their AI identities with the same rigor applied to human users. Only a small subset (5%) feels confident it could contain a compromised AI agent.

The report also highlights the prevalence of unsanctioned AI tools, often referred to as “shadow AI.” Three-quarters of respondents said they have discovered AI tools running in their environments without explicit approval, and these tools often use embedded credentials or elevated access tokens that bypass traditional security controls.

The survey covered large enterprises with more than 5,000 employees across several sectors, including technology, financial services, healthcare, and manufacturing.
